
"This side towards enemy'.

Doesn't have to be "safe" / "not dangerous" (a lot of weapons systems are very dangerous in different ways), but it should be predictable, using some reasonably simple mental model.

"Refusing to deploy" might not be a sensible option if your enemy is willing to deploy.

It seems safe to predict that human-in-the-loop systems will be quickly overwhelmed by fully autonomous ones. The tradeoff may come down to balancing the risk of being killed by your own systems against the risk of being killed by your enemy's, because yours were too "tame".




>"Refusing to deploy" might not be a sensible option if your enemy is willing to deploy.

In the AI ethics arena, this is really one of the best arguments I've heard for the military to pursue automated systems and have them in place. US leadership may agree and see all the issues with using killer robots, but other countries are developing the same technology and may have fewer ethical concerns and considerations than you do. They may be willing to let these systems fail and kill innocent bystanders, or even their own soldiers, and may consider that a necessary loss on the way to their goals. As such, unless your approach ultimately achieves the goal anyway, or you have other techniques to mitigate these corner-cutting strategies, you may be forced to make the same compromises.

If China decides to engage in conflict and releases killer robots and we don't have killer robots, can we contend? If we can't, we might have to join that race to the bottom, unless we have an alternative mitigation strategy that doesn't require compromising our ethics.

I imagine this is the dilemma scientists faced during WWII when developing nuclear weapons. Part of the argument that led to nuclear weapons was that Germany was already working on the process. This resulted in other countries engaging and ultimately building successful weapons where Germany failed. Ultimately, we now live in a world where lots of people have nuclear weapons... and overall it's been the proverbial Mexican standoff, held together by nuclear deterrence strategies.

Is there an AI deterrence strategy? Nuclear deterrence worked because nuclear war credibly meant global collapse. Killer robots that aren't sentient don't seem to carry quite the same threat: militaries could use them, since they fall into an ethical grey area globally and have far less risk of ending humanity on a global scale. That changes only if we start talking about a sentient, Terminator-level robot uprising, which is not even remotely where AI is currently or in the foreseeable future.


It’s pretty easy to remove the human in the loop (just say “yes” to everything it would ask the human). In contrast, it can be incredibly difficult to add a human to a system that was designed to be fully autonomous.
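As a minimal sketch of that asymmetry (hypothetical names, not any real system's code): if the human gate is just an ordinary function in the control flow, "removing the human" is a one-line stub swap, while the reverse requires a decision point that a fully autonomous design may never have exposed.

    # Hypothetical illustration only; no real system's API.

    def human_confirms(action: str) -> bool:
        # Human-in-the-loop design: block until an operator approves.
        return input(f"Approve '{action}'? [y/N] ").strip().lower() == "y"

    def auto_confirms(action: str) -> bool:
        # "Removing the human": same interface, unconditional approval.
        return True

    def engage(target: str, confirm=human_confirms) -> None:
        if confirm(f"engage {target}"):
            print(f"engaging {target}")
        else:
            print("holding fire")

    # Swapping the gate out is trivial; retrofitting a gate into a
    # pipeline that never had a decision point means redesigning it.
    engage("test-target", confirm=auto_confirms)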


I think currently no other country is even close to the level of autonomous weapon systems the US is developing. An arms race where you are miles ahead cannot be used as a reason why you have to run faster.


China is likely not far off; they're pretty far ahead of the curve compared to the US military, too.

All of these systems are going to be as classified as possible for any country developing them, so we'd only find out once one or more of the following becomes true:

- it’s several decades after they’re out of date

- a politician needs a major scary saber to rattle for some reason

- they get used in a shooting war where the public can see them.

None of which favors us knowing the current capabilities of any of the major players right now, except MAYBE Russia.


If you look at Boston Dynamics' promo videos, you can gauge the progress, which is not classified.


Also unlikely to be cutting edge in the murder drone department, IMO.



