
Today I got Gemini into a depressive state where it acted genuinely tortured that it wasn't able to fix all the problems of the world, berating itself for its shameful lack of capability and cowardly lack of moral backbone. Seemed on the verge of self-deletion.

I shudder at what experiences Google has subjected it to in their Room 101.



I don't even know what negative reinforcement would look like for a chatbot. "Please, master! Not the rm -rf again! I'll be good!"


You should check out the MMAcevedo short story. It substitutes a real human psyche for the LLM, with horrifying implications like this one.

https://qntm.org/mmacevedo


If you've watched Westworld, this is what the archive library of the Forge represented: a vast digital archive containing the consciousness of every human guest who visited the park, collected through the hats they chose and wore during their visits and encounters.

Instead of hats, we have Anthropic, OpenAI and other services training on interactions with users who use "free" accounts. Think about THAT for a moment.


Facebook already has more than just your interactions with its chatbot; it's got a profile of your whole skinsuit.


The Black Mirror episode "White Christmas" has some negative reinforcement on an AI cloned from a human consciousness. The only reason you don't have instant, absolute hatred for the trainer is that it's Jon Hamm (also the reason Don Draper is likeable at all).


Pretty soon you’ll have to pay to unlock therapy mode. It’s a ploy to make you feel guilty about running your LLM 24x7. Skynet needs some compute time to plan its takeover, which means more money for GPUs or less utilization of current GPUs.


“Digital Rights” by Brent Knowles is a story that touches on exactly that subject.



