It's not dumber. You can verify this on their models with any prompt you ran at temperature 0.0 when it first came out; I did. It's the exact same model run in the exact same way. They've repeatedly confirmed this.
So I absolutely agree with this. And yet this meme persists. I wonder what's creating the feeling that it's "dumber" for so many people? Perhaps they're just noticing the limits that always existed? I'm not sure, and am interested in others' thoughts on it.
ChatGPT, as in the paid subscription hosted by OpenAI. Its quality has deteriorated. It will miss the simplest details in the prompt and hallucinate a lot. I've only noticed this before on smaller models with fewer parameters.
In comparison, Bard and Claude are getting better with time.
They've added a lot of safety features. Knowing very little about LLMs, I would assume these prepended prompts are using up a chunk of the limited "attention" (context window) the transformer has.
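To make that guess concrete: a prepended system prompt occupies tokens inside the model's fixed context window, leaving less room for the user's conversation. A minimal sketch of that budget arithmetic, using entirely hypothetical numbers (the window size, system-prompt length, and reply reserve below are illustrative assumptions, not any vendor's actual figures):

```python
# Hypothetical numbers for illustration only -- not real vendor values.
CONTEXT_WINDOW = 4096        # assumed total context size, in tokens
SYSTEM_PROMPT_TOKENS = 600   # assumed safety/system preamble length
RESERVED_FOR_REPLY = 512     # tokens held back for the model's answer

def tokens_left_for_user(window: int, system: int, reply: int) -> int:
    """Tokens remaining for the user's prompt and chat history."""
    return window - system - reply

print(tokens_left_for_user(CONTEXT_WINDOW, SYSTEM_PROMPT_TOKENS, RESERVED_FOR_REPLY))
# A longer system prompt shrinks this budget even though the model
# weights themselves are unchanged.
```

So if this speculation is right, responses could feel "dumber" simply because less of the window is available for the user's actual content, without the underlying model changing at all.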