I was skeptical too, but Supreme Court cases give AI a significant advantage that your example is missing: dozens of pages of briefs describing the case and most relevant facts in great detail for the AI to reference.
In your dispute, the role of a mediator is primarily to find the relevant facts and/or judge the truth of the parties' statements. There's not really any complex legal question to be answered once you determine whose story to believe. This seems like it'd be the case for the vast majority of payment disputes.
The Supreme Court, on the other hand, is trying to decide complex or arguably ambiguous legal questions based on a large corpus of past law, all of which is almost certainly included in an AI's training data. I don't think of the Court as weighing evidence in the way your example requires; all the evidence is already there in the briefs.
So, I'm not sure payment disputes are really strictly simpler than Supreme Court cases; they require a whole different type of reasoning, going beyond the information in the prompt or training data in a way the Supreme Court doesn't have to and the AI cannot.
One can imagine them, but as AI models and their implementations evolve, such adversarial briefs will become orders of magnitude less likely to succeed.
AI adjudication becomes a question of when, not if. It will likely be supervised by humans at first, but how long will that remain the case as the pressure mounts? The consequences of this, and of expediting the justice system, will be truly profound, perhaps even more so in developing nations, where access to justice is so unevenly distributed and unreliable. A non-functioning justice system is at the root of many societal issues.
However, it's also not a stretch to think of the continued descent into an Orwellian dystopia in which individual liberties and freedom are curtailed.
I feel as though I switch between a sense of optimism and being utterly terrified.
> The Supreme Court, on the other hand, is trying to decide complex or arguably ambiguous legal questions based on a large corpus of past law, all of which is almost certainly included in an AI's training data.
Given an AI that is truly unbiased, and only considers either the intent at the time (Originalism) or the literal, textual interpretation of the law, I suspect this won’t go as expected for the Silicon Valley AI crowd since interpreting law based on the text would make the court far more conservative than it is now.
Progressives believe in a “Living Constitution” standard, the idea that the law can change based on 21st century cultural values, not the text as it was written or intended by the legislature.
> the idea that the law can change based on 21st century cultural values, not the text as it was written or intended by the legislature.
Right. If it couldn't, then the law could come to stand in opposition to what everyone believes, even over an extended period of time, as well as to what can be believed under careful consideration or in light of new evidence. It would then become an oppressive force.