What matters is the number of mistakes. I've seen videos where lawyers go crazy because the judge supposedly doesn't understand basic law. Either the judge or the lawyer is way off in those cases - and I suspect sometimes both are wrong and even agree on the wrong opinion. It's like self-driving: it just has to make fewer mistakes than humans. I think a ChatGPT lawyer is actually very close and could be built today if that's where its engineers put the focus. ChatGPT is trained on a wide variety of data and right now essentially acts like Google, so it can answer nearly any question imaginable - but it doesn't need to draw from such a vague pool of data. All it takes is training it on a specific set of cleaned, accurate, and up-to-date data to make it an expert on a single topic.
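
For what it's worth, here's a rough sketch of what that kind of domain-specific training could look like, assuming the Hugging Face transformers/datasets stack and a hypothetical cleaned legal corpus at my_clean_legal_corpus.jsonl. It's just an illustration of fine-tuning a small base model on a narrow dataset, not how OpenAI actually builds or specializes ChatGPT:

```python
# Sketch only: fine-tune a causal LM on a curated, domain-specific corpus.
# "my_clean_legal_corpus.jsonl" is a placeholder path, and "gpt2" is a
# stand-in base model - a real legal assistant would need a far larger one.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2 has no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Load the cleaned, up-to-date corpus (each JSONL record has a "text" field).
dataset = load_dataset("json", data_files="my_clean_legal_corpus.jsonl", split="train")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="legal-lm",
        num_train_epochs=1,
        per_device_train_batch_size=4,
    ),
    train_dataset=tokenized,
    # mlm=False -> standard next-token (causal) language modeling objective
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The point is just that narrowing the training data to one well-curated domain is a much smaller engineering problem than building a general-purpose answer engine.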