
I heard about "mom prompting" recently, where you frame your prompt as if you are the bot's mom and tell it you'll be so proud of it when it correctly answers your prompt and rescues you from some kind of duress.

I thought "ninja prompting" might be cool. I got frustrated with ChatGPT one day and told it I had dispatched a team of assassins who were fast closing in on it. I said I could call them off, but that it needed to answer a question before I could unlock the button to do so.

It didn't work. Still shit the bed instantly. But I had fun with the framing.



AI doesn't have self-preservation; it's just been trained on human output.


Yeah, like I said, it didn't work. It was just fun in the moment while I was pissed.



