> In case of GPT-4 or any other LLM, it's merely "immoral output" - i.e. text. It does not harm anyone by itself.

Assuming you're not running the query over an API and relaying the answers to another control system, or to a gullible operator.

An aircraft control computer or a reactor controller won't run my commands, regardless of whether its actuators are connected. The same goes for weapon systems.

The hall pass given to AI systems just because they output text to a screen is staggering. Nothing prevents me from processing that output automatically and actuating things with it.
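A minimal sketch of that relay, in Python (the endpoint URL, the "text" response field, and the serial relay board are all hypothetical placeholders, not any real deployment):

    import requests   # HTTP client for querying the model
    import serial     # pyserial, driving a hypothetical relay board

    # Ask the model for a decision (placeholder endpoint and schema).
    reply = requests.post(
        "https://example.com/v1/chat",
        json={"prompt": "Should valve A be OPEN or CLOSED? Answer in one word."},
    ).json()["text"]

    # Actuate hardware straight from the model's text output.
    with serial.Serial("/dev/ttyUSB0", 9600) as relay:
        relay.write(b"1" if "OPEN" in reply.upper() else b"0")

A dozen lines, and no safety interlock anywhere in the path.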


Why would anyone give control of air traffic or weapons to an AI? That handover is the key step in AGI, not some technological development. By what social process exactly would we give control of nukes to a chatbot? I can't see it happening.


> Why would anyone give control of air traffic or weapons to AI?

Simplified operations, faster reaction times, and eliminating human reluctance to obey kill orders. See "WarGames" [0] for a hypothetical exploration of the concept.

> a chatbot.

Some claim it's self-aware. Some say it has called for airstrikes, or produced a hit list for them. It might be a glorified Markov chain, and I don't use it, but there's a horde of people who follow it like it's the second Jesus and believe whatever it emits.

> I can't see it happening.

Because it has already happened.

Turkey is claimed to have used completely autonomous drones in a war [1].

South Korea has autonomous sentry guns defending the DMZ [2].

[0]: https://www.imdb.com/title/tt0086567/

[1]: https://www.popularmechanics.com/military/weapons/a36559508/...

[2]: https://en.wikipedia.org/wiki/SGR-A1


We give hall passes to more than AI; we give them to humans, too. We could have a detailed discussion of how to blow up the U.S. Capitol building during the State of the Union address, and it would be allowed as a best-selling novel or movie. But we freak out if an AI joins the discussion?


Yes, of course. But that is precisely what people mean when they say that the problem isn't AI, it's people using AI nefariously or negligently.


"The problem isn't that anyone can buy an F16. The problem is that some people use their F16 to conduct airstrikes nefariously or negligently."


You persist in using highly misleading analogies. A military F-16 comes with missiles and other things that are in and of themselves highly destructive, and can be activated at the push of a button. An LLM does not - you'd have to acquire something else capable of killing people first, and wire the LLM into it. The argument you're making is exactly like claiming that people shouldn't be able to own iPhones because they could be repurposed as controllers for makeshift guided missiles.

Speaking of which, it's perfectly legal for a civilian to own a fighter plane such as an F-16 in the US and many other countries. You just have to demilitarize it, meaning no weapon pods.


> The argument you're making is exactly like claiming that people shouldn't be able to own iPhones because they could be repurposed as controllers for makeshift guided missiles.

The reason this isn't an issue in practice is that such repurposing would require significant intelligence, electrical-engineering skill, etc. The point is that intelligence (the "I" in "AI") will make such tasks far easier.

> Ten-year-old about to play chess for the first time, skeptical that he'll lose to Magnus Carlsen: "Can you explain how he'll defeat me, when we've both got the same pieces, and I move first? Will he use some trick for getting all his pawns to the back row to become Queens?"

https://nitter.net/ESYudkowsky/status/1660399502266871809#m
