I am mystified by the apparent credulity of the author of this post. When I was young I found the experience of sex so overwhelming that I was certain there lay some great wisdom within it. Spoiler alert, after many years of experience: there really isn’t. It’s a fun time for a little while. That’s about it.
Now we see people relating to their GPTs as if something profound is happening, but I suspect nothing is. This activity leads nowhere.
I work with and test these things. I find them creepy and I refuse to engage with them as if they were thinking beings. They are utterly unreliable narrators of their own “thoughts.”
I think there might be a misunderstanding of my post. I don’t believe any magical profundity is arising from dialogues with LLMs that extend beyond their inherent technical capabilities/limitations. What is interesting to me, however, is that these exchanges can generate new insights about myself, especially given the recursive nature of my own thinking. Useful to me (like any tool), but certainly not an oracle.
Thank you for replying! Please interpret the following as me honoring you by trying to take your post and reply seriously...
You write:
"But introspection alone can quickly become an echo chamber, limited by self-justification and untethered from what I would accept as “authentic” external validation—the kind of objective reflection necessary for both personal growth and sound leadership judgment."
You believe that "external validation"/"objective reflection" is required for growth. This is a reasonable heuristic, although of course debatable. Perhaps introspection is not the echo chamber you fear that it is. But I'm surprised that you would choose an LLM to escape the echoes that you fear.
I can't tell from your text exactly what the LLM provided to you (a more persuasive essay would give us specific examples) or what you provided to it (come on, give us the prompts so we can try the experiment ourselves). What I also can't find in your essay is any significant doubt or concern on your part about the problem of using a bullshit generator as a tool for philosophy. I'm not saying it can't be a good tool, but you have to address that elephant: to me it's like trying to do philosophy by analyzing the advertising copy on the back of a cereal box. I don't trust LLMs to be consistent with their own premises, and I know they are congenitally incapable of pursuing an inquiry and developing a persistent mental model. If you have a way of overcoming this, please tell us. Instead it sounds like you have suspended your critical thinking.
You say the model reflected on the sophistication of your thinking. Did it really? Or did it just say that because you led it into a part of its model where such writing looks like something a "smart" person might produce? There is no unproblematic way to put a "concrete number" on your intelligence based on an open conversation, yet apparently the model placated you by providing one. In your essay you expressed skepticism, but you also called the result interesting.
Excuse me, but how is that interesting, exactly? You say the LLM cited evidence, but you don't tell us how it derived the number that it gave you. We all should know enough about LLMs to realize that whatever number it gave would not have been tethered to whatever "reasons" it gave. LLMs just don't work that way. It's bullshitting you, man!
And also, so what? Even if the number it gave you, assuring you that you are a smart man, was absolutely spot on and epistemically/empirically valid, how does that help you? Is that actionable information? Does that prove there are no holes in your reasoning or problems with your premises?
I like how you said "Of course, I'm aware that models are prone to flattery artifacts and hallucinations; my interest here wasn't in basking in manufactured praise but in understanding how inference patterns emerge." And I would like to point out that nothing in this essay indicates that you have taken even a single step in that direction. We don't know how you think inference patterns emerge in LLMs or in yourself.
The LLM wove some pretty words. If you are going to take your own experiment seriously, hold its feet to the fire about them (I won't because I am already convinced there is no important insight to be gained from rehashing the average thoughts of humans on Reddit, which is more or less what LLMs can do... yet perhaps I'm too dismissive, which is why I read your essay). Find out exactly what its logic is.
For instance, it said "Conceptually Generative: Not just understanding complex systems, but inventing entirely new frameworks for understanding them." So, I would ask it:
- How is "conceptually generative" thinking even related to the problem of complex systems? Can't we be conceptually generative about simple systems and patterns?
- It sounds like you mean "conceptually profound" rather than merely generative.
- When you say "inventing new frameworks," don't you mean "capable of inventing new frameworks"? Because, obviously, you may not need to invent a new framework to generate the appropriate concepts.
- Are you, as an LLM model, capable of conceptual profundity in this way? Can you give me an example of that? How do you know that it is a bona fide example?
- How do you recognize this quality in someone when all you have is knowledge of text that they have pasted into your input buffer?