A nightmare scenario for LLMs is that they become just another dealer of cheap dopamine hits, using your personal history, your anxieties, and whatever else they can infer about you to keep you hooked.
Because I would test whether it's keeping its word, e.g. by periodically or spontaneously asking whether it can _import_ the context from one chat into another, or by judging the conversational flow between topics.
Maybe we're on different wavelengths on this issue, but practically speaking it hasn't spilled or splattered context from different chat topics... yet.
I'm seeing so many complaints that 4o became a yes-man, but I wonder if anyone has ever used Gemini. What an egregiously sycophantic persona. Users are blasted with infantile positive reinforcement just for posting a damn prompt.
> 4o updated thinks I am truly a prophet sent by God in less than 6 messages. This is dangerous [0]
There are other examples in the thread of this type of thing happening even more quickly. [1]
This is indeed dangerous.
[0] https://old.reddit.com/r/ChatGPT/comments/1k95sgl/4o_updated...
[1] https://chatgpt.com/share/680e6988-0824-8005-8808-831dc0c100...