Hacker News

It's not dumber. You can verify this yourself: rerun any prompt you used at temperature 0.0 when it first came out; I did. It's the exact same model run the exact same way. They've repeatedly confirmed this.
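A minimal sketch of that kind of spot check, assuming you saved the original completions somewhere. The actual API call (same model name, same prompt, temperature=0.0) is left out; the comparison itself is just a text diff, so the strings below are hypothetical stand-ins for a saved completion and a fresh rerun.

```python
import difflib

def diff_report(old: str, new: str) -> str:
    """Return a unified diff between a saved completion and a fresh rerun.

    An empty string means the outputs are identical, which is roughly what
    you'd expect from the same model rerun at temperature 0.0.
    """
    lines = difflib.unified_diff(
        old.splitlines(keepends=True),
        new.splitlines(keepends=True),
        fromfile="original_run",
        tofile="rerun",
    )
    return "".join(lines)

# Hypothetical usage: `old` saved when the model launched, `new` fetched
# today with an identical prompt and temperature=0.0.
old = "The answer is 4.\n"
new = "The answer is 4.\n"
print("identical" if diff_report(old, new) == "" else "diverged")
```

If the diff is non-empty across many saved prompts, that's evidence something in the serving path changed; if it's empty, the "dumber" impression is coming from somewhere else.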


> It's not dumber

So I absolutely agree with this. And yet this meme persists. I wonder what creates the feeling that it's "dumber" for so many people. Perhaps they're just noticing limits that always existed? I'm not sure, and I'm interested in others' thoughts on it.


ChatGPT, as in the paid subscription hosted by OpenAI. Its quality has deteriorated: it misses the simplest details in the prompt and hallucinates a lot. I've only noticed this before on models with fewer parameters.

In comparison, Bard and Claude are getting better with time.


They've added a lot of safety features. Knowing very little about LLMs, I would assume these prepended prompts are using up a chunk of the limited "attention" (context) the transformer has.
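A back-of-the-envelope sketch of that point: every token of a prepended system prompt comes out of the same fixed context window the user's prompt shares. The preamble text and window size below are hypothetical (the real prepended text is not public), and the token count uses the common ~4 characters/token rule of thumb rather than the model's actual tokenizer.

```python
def context_budget(system_prompt: str, window_tokens: int = 4096) -> tuple[int, int]:
    """Roughly estimate tokens a prepended system prompt consumes.

    Uses the ~4 characters/token rule of thumb; a precise count would use
    the model's own tokenizer. Returns (tokens_used, tokens_remaining).
    """
    used = max(1, len(system_prompt) // 4)
    return used, window_tokens - used

# Hypothetical safety preamble standing in for whatever is really prepended.
preamble = "You are a helpful assistant. Refuse harmful requests. " * 20
used, remaining = context_budget(preamble)
print(f"~{used} of 4096 tokens spent before the user types anything")
```

The point isn't the exact numbers; it's that a long hidden preamble shrinks the budget left for the user's prompt and the reply.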


Ooh, I have some nice book chapter summaries that ChatGPT generated for me when it first came out. I have them in a Google doc.

If you ask GPT to give you the same thing now, it won't, no matter how hard you try.

That's hard evidence, to me, that they've dumbed it down.


They confirmed the API still runs the same base model, but made no mention of ChatGPT the service, which is what I was referring to above.


The ChatGPT webapp is not the same workflow as the ChatGPT API.


It is and it's documented:

https://arxiv.org/abs/2307.09009


A) It's wrong, and it caused a lot of hand-wringing about arXiv and undergrads. B) That's not my claim; those are two different models.



