The twitterer is a renowned (and much accomplished!) sh*tposter, I highly suspect this was doctored. I believe Chevy caught onto this yesterday and reverted the ChatGPT function in the chat.
Regardless, still hilarious and potentially quite scary if the comments are tied to actions
Others have replicated this behaviour. If you embed ChatGPT, people will find ways to make it say things you didn't intend it to say.
There's not really any doctoring going on, other than basic prompt injection. However, I can imagine someone accidentally tricking ChatGPT into claiming some ridiculously low priced offer without intentional prompt attacks. If you start bargaining with ChatGPT, it'll play along; it's just repeating the patterns in its training data.
Regardless, still hilarious and potentially quite scary if the comments are tied to actions