Now they're killing people based on language models. It always cracks me up that the big worry with publicly available AI is that people will somehow use it to manipulate other people on the internet; meanwhile, the government has turned it into an assassination tool.
While AI language models can emulate legal and judicial language, they are no substitute for due process of law: their "hallucinations" and false citations would translate into an unacceptable wrongful conviction rate.
This is not the "right" way to use AI to kill people.
AI lets you do sigint and treat it a lot more like humint. You can, e.g., wiretap everyone a suspected terrorist has called in the last year, transcribe all of those conversations, and pass the transcripts through an AI model that flags anything "concerning."
Unlike traditional keyword matching, AI can distinguish between "bomb" in the context of playing Counter-Strike, discussing a news report, and planning an actual terrorist attack.
It can't do anything a human can't do, but it's orders of magnitude cheaper, especially if you can't outsource the human labor due to natsec concerns.
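As a rough sketch of that kind of pipeline, and emphatically not any agency's actual tooling, here is how you could chain off-the-shelf open models: a speech-to-text model transcribes an intercept, then a zero-shot classifier scores each segment against context labels. The model choices (openai-whisper, facebook/bart-large-mnli), the file path, the labels, and the flagging threshold are all illustrative assumptions.

```python
# Illustrative sketch only: open-source stand-ins for a
# transcribe-then-flag pipeline. Models, labels, file name,
# and threshold are assumptions, not anyone's real system.
import whisper                      # openai-whisper speech-to-text
from transformers import pipeline   # Hugging Face zero-shot classifier

# Transcribe an intercepted call into timestamped segments.
stt = whisper.load_model("base")
result = stt.transcribe("intercept_0001.wav")  # hypothetical recording

# Zero-shot classification scores each utterance against context
# labels without training a bespoke model.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

# The point from above: the same word ("bomb") scores differently
# depending on which context label best fits the sentence.
labels = [
    "talking about a video game",
    "discussing a news report",
    "planning an act of violence",
]

for seg in result["segments"]:
    scores = classifier(seg["text"], candidate_labels=labels)
    top_label, top_score = scores["labels"][0], scores["scores"][0]
    # Arbitrary threshold; a real system would tune this and route
    # flagged segments to a human analyst rather than act on them.
    if top_label == "planning an act of violence" and top_score > 0.8:
        print(f"[{seg['start']:.0f}s] FLAG ({top_score:.2f}): {seg['text']}")
```

Note the flagging step is deliberately just a triage filter feeding a human, which is also where the cost argument comes from: the model does the first pass that previously required cleared analysts.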
I’m aware of rumors about Israel using AI in war, but where are you hearing of it being used in legal and judicial settings? Besides a few lawyers getting caught and sanctioned, I don’t think it’s happening much.
> One Zignal pamphlet from this year advertises the company’s work with the Israeli military, saying its data analytics platform provides “tactical intelligence” to “operators on the ground” in Gaza. The pamphlet also highlights Zignal’s work with the US Marines and the State Department.
I don’t think this really answers the question: how is AI being used in legal and judicial contexts, as opposed to military and executive agencies? The State Department maybe overlaps a bit, but no detail is given about the contexts in which they’re using it.