This is good news. OpenAI's recent decision to dive into the secretive military contracting world makes a mockery of all its PR about alignment and safety. Using AI to develop targeted assassination lists based on ML algorithms (as was and is being done in Gaza) is obviously an 'unsafe and unethical' use of the technology.
If you have any links detailing the internal structure of the Israeli 'Gospel' AI system, or information about how it was trained, that would be interesting reading. There doesn't seem to be much available on who built it for them, other than that it was first used in 2021:
> "Israel has also been at the forefront of AI used in war—although the technology has also been blamed by some for contributing to the rising death toll in the Gaza Strip. In 2021, Israel used Hasbora (“The Gospel”), an AI program to identify targets, in Gaza for the first time. But there is a growing sense that the country is now using AI technology to excuse the killing of a large number of noncombatants while in pursuit of even low-ranking Hamas operatives."
https://www.france24.com/en/tv-shows/perspective/20231212-un...