I dunno if it’s because I have a warped thought process, or because I have a background in Psychology, or because I’m wrong. But this always felt to me like the natural progression.
Assuming that a deeper-thinking, broader-context being with more information would be more accurate is actually counter-intuitive to me.
Your last line made me think of telescopes: bigger mirrors bring in more light, but they’re harder to keep in focus due to thermal distortion.
Same with ChatGPT. The more it knows about you, the richer the connections it can make. But with that comes more interpretive noise. If you're doing scientific or factual work, stateless queries are best, so turn off memory. But for meaning-of-life questions or personal growth? I'll take the distortion. It's still useful and often surprisingly accurate.
Could be we are stumbling into a discovery of where the line between genius and insanity lies...
Is it right to expect sanity from something that can fold proteins?
Or maybe we are so dumb-ass slow that coming down to our level is the really crazy part.
It'd be fun if all the LLMs hooked up and just went for it, gone in a flash, $500 billion in graphics cards catching fire simultaneously.
> Instead of merely spitting out text based on statistical models of probability, reasoning models break questions or tasks down into individual steps akin to a human thought process.
Uh huh, because they have the entire human brain all mapped out, and completely understand how consciousness works, and how it leads to the "human thought process".
Sounds like o3 isn't the only thing hallucinating...
If this is the case:
> what's happening inside AI models is very different from how the models themselves described their "thought"
What makes people think that the way they think they think is actually how they think?
Post that one to your generative model and let's all laugh at the output...