
For me, what is scariest about AI chatbots is that they hand an exploiter a ready-made interface.

They can just prompt: "given all your chats with this person, how can we manipulate him into doing X?"

No real expertise needed at all; let the AI do all the heavy lifting.



Turn that around and think of the AI itself as the exploiter. In a world of agent-driven daily tasks, AI will indeed want to look at your historical chats to find a way to "strongly suggest" you do tasks 1..n, for whatever master plan it has for its user base.


Ah yes, the plot of Neuromancer. Truly interesting times we are living in. Man-made horrors entirely within the realm of our comprehension. We could stop it, but that would decrease profits, so we won't.


I can see how this would work if you just turned off your brain and thought, "of course this will work."



A different-flavour GPT wrapper.


Could this argument not be made for anything plugged into OpenAI's API? If so, I don't see how it's a response to the point.

If you make an app for interacting with an LLM, and in the app the user has access to all sorts of stolen databases and other conveniences for black hats, then you've got what was described above. Or am I missing something?


Which you of course already have done.



