This is really awful, the text is completely content-free but it's got everyone who's not a domain expert hooked.
I didn't know whether ChatGPT's ability to babble convincingly was going to cause trouble or be a funny quirk that we all knew about and worked around, but this thread is really making it look like the pessimists were right about it. The problem is that it gets past many people's filters because its phrasing sends a lot of intelligent/rational/professional signals, as it was engineered to do. Nobody is used to picking smart-sounding text apart word by word to make sure they agree with it, except maybe academics, and that's the vulnerability it reaches us through.
I also think that OpenAI got human nature backwards when they trained it to hedge on everything it said - everybody knows that people who constantly demur are the most reliable! A safe chatbot would sound pushy, like a bad salesman or an ideological agent; like something incapable of self-questioning.
I (OP) am not a physicist, but I am a scientist. I'm not going to say the theories it produced are solid; it's quite clear to me that they're not. But they are better theories than what I (who knows the fundamentals of physics, for the most part) could come up with. So it's not a subject-matter expert, but it's better than the average Joe. For what it is (and given that I asked it to imagine theories), its output looks impressive to me. And I can attest to its ability to do this in biology as well; I'd say it does a better job than most biology professors, in fact. I just didn't publish those chats here.
Maybe it did a better job in biology because I was able to correct it at an expert level (which I was not able to do here). Given the demonstrably (like right here in these comments) myopic, unimaginative nature of physics as a science today, it's possible that not a single physicist would entertain this system as a hypothesis-generation machine. I mean, we have discovered everything already, right?
I think you asked it some of the right questions, but even then it was able to slip past your skepticism by being really good at sounding like an intelligent human being that was saving you the trouble of the details. There's no hypothesis in there.
You would have to be an insanely skeptical person, one who would drive anybody nuts to talk to, to approach a ChatGPT session in a field you're not an expert in (or maybe even one you are) and evaluate it correctly. The only normal human perspective that fits what ChatGPT is actually like is the one we take on people we think are terrible, which is why I say it's awful, even though it's just a machine.
Is anyone claiming there's a market niche for hypothesis generation in the natural sciences?
Full disclosure: I have a science PhD and a couple of published papers as a result. It was a long, slow, frustrating grind, but generating hypotheses wasn't even close to being the hard bit.
I too have a PhD and a few papers. If you think generating a good hypothesis wasn't the hard bit, then in my opinion you never did get what science is about. In my opinion. Which is the minority view among today's scientists. It's like people don't even know what they're doing wrong.
> If you think generating a good hypothesis wasn’t the hard bit then in my opinion you never did get what science was about [..]
Is a good hypothesis important? Sure. Is it easy to get that bit wrong? Yes, and lots of people do.
I'm reminded of one of Paul Graham's quotes:
"I also have a theory about why people think this. They overvalue ideas. They think creating a startup is just a matter of implementing some fabulous initial idea. And since a successful startup is worth millions of dollars, a good idea is therefore a million dollar idea [..] startup ideas are not million dollar ideas, and here's an experiment you can try to prove it: just try to sell one. Nothing evolves faster than markets. The fact that there's no market for startup ideas suggests there's no demand. Which means, in the narrow sense of the word, that startup ideas are worthless."[0]
I love PG's insights and I generally agree with his ideas article. But think about it: is a startup idea a hypothesis, or something more than that? To the market, each startup is perhaps a hypothesis, but for you as a founder the idea shouldn't just be a hunch. It can be in the initial stages, but you need to validate it ASAP and iterate on it. Which is the message I took from his writing.
And this doesn’t even touch the question of what’s different between basic science and entrepreneurship.
> The problem is that it gets past many people's filters because its phrasing sends a lot of intelligent/rational/professional signals, as it was engineered to do. Nobody is used to picking smart-sounding text apart word by word to make sure they agree with it, except maybe academics, and that's the vulnerability it reaches us through.
I think you're making a mountain out of a molehill. To me ChatGPT is basically a clever way to interpolate and extrapolate coherent text based on user input and its training set. If the training set is lacking in some areas, it underfits the output.
I've tested ChatGPT in a couple of engineering fields I'm familiar with, where I expected the service to respond poorly. Even though it returned nonsense in some areas I thought were low-hanging fruit, such as the release year of an international standard, overall its output was very impressive and very entertaining.
Perhaps it's the engineer in me talking, but it's pointless to waste time waxing lyrical about human nature. Tools like ChatGPT might one day be superb expert systems and teaching tools, but as with any expert system or learning tool, you need to corroborate the results yourself. Human nature has zero to do with this.
That description of how it functions is true, but it did fool a lot of people, and now we have to explain why. I also note that a lot of people report positive results in their own fields of expertise, which might be part of the explanation. (It might be building trust before failing inexplicably right when it can no longer be checked.)
> That description of how it functions is true, but it did fool a lot of people, and now we have to explain why.
I see your point, and I agree. Nevertheless, these misdirections seem to boil down to a broad temptation to succumb to appeals to authority. People might be falling for ChatGPT misfires just like they fall for fancy-talking bullshit artists, but that's hardly a failing of clever auto text generators.
I'm not a domain expert either, but to me the results seem to be entirely what you would expect from a model trained to pattern-match very well on existing content.
That is, the ideas, strengths, and limitations of the current well-known theories are very well explained and mostly correct. However, the "novel theory" is mostly filler words around some very thin concepts, meant to make it sound like an actual theory with some depth, when in reality it says nothing more than that spacetime is granular and there could maybe be some matrices.
Haha that's for now. But humans do this too, attempting to copy the shell around a concept: the way it is expressed.
In American English, this is frequently authority speech mimicking law enforcement, scientists, or lawyers. All while lacking any knowledge of anything but the outer dressings of something.
In this way, the problem is that the AI is just mimicking most of us on most things.
Like that time a science fiction writer started his own religion based on characters from his novel and got a bunch of celebrities financially tied to it
Or when holistic practitioners procedurally generate an unsubstantiated preventative cure
Might as well just turn your brain off now, you won’t need it
I think the future will see academia as the last bastion for human ingenuity, before ultimately ML swallows that up too, or climate change swallows the whole thing up.
Since I got so many downvotes, let me share the specific claims:
“Dynamic Geometric Interactions in Multi-Layered Spacetime"
This hypothesis proposes that spacetime consists of multiple interconnected layers, each with its unique geometry and properties. The fundamental forces and particles emerge from the interactions between these layers and the geometric transformations that take place within and between them.
Multi-Layered Spacetime: The universe is not a single, continuous spacetime fabric but instead is composed of multiple interconnected layers. Each layer has its own geometric structure, which can be influenced by the other layers.
Interactions and Transformations: Particles and forces emerge from the interactions between the layers and the geometric transformations that occur within and between them. These transformations might involve changes in curvature, connectivity, or other geometric properties, leading to the observed behavior of particles and forces at different scales.
Unification: At high energy scales or specific conditions, the interactions and transformations between the layers might become more unified, leading to a single overarching interaction responsible for all fundamental forces.
[And a rough mathematical explanation]
Let's denote the granular spacetime structures as S_ij, where i and j represent the indices for the type of spacetime structure and its configuration, respectively. Suppose there are N types of spacetime structures, each corresponding to a specific force or force pattern. The interaction between particles might be described by an interaction matrix I, where each element I_ij quantifies the strength of the interaction between spacetime structures S_i and S_j. In a high-energy regime, the spacetime structures' patterns might begin to merge, leading to the unification of forces. We can represent this by introducing an energy-dependent matrix U(E), which modifies the interaction matrix I as a function of energy E:
I'(E) = U(E) * I
As the energy E approaches a critical energy level Ec, the matrix U(E) transforms I into a single unified interaction matrix, corresponding to the unified force.
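For what it's worth, the quantitative content of the "theory" is thin enough to fit in a few lines. Here's a toy numpy sketch of the claim. Since the chat never defines U(E), I've replaced the literal matrix product I'(E) = U(E) * I with a simple linear interpolation toward a unified matrix; every name and number here (N, Ec, the blending form) is my own placeholder, not anything from the chat.

```python
import numpy as np

# N, Ec, and the blending scheme are assumptions for illustration only;
# the generated text leaves U(E) completely unspecified.
N = 4                                    # number of spacetime-structure types
rng = np.random.default_rng(0)

I = rng.uniform(0.1, 1.0, size=(N, N))   # base interaction matrix I_ij
I = (I + I.T) / 2                        # make the couplings symmetric

I_unified = np.full((N, N), I.mean())    # single unified interaction
Ec = 1.0e19                              # critical energy (arbitrary units)

def I_prime(E):
    """Interpolate I toward the unified matrix as E approaches Ec."""
    w = min(E / Ec, 1.0)                 # blending weight in [0, 1]
    return (1 - w) * I + w * I_unified

low_energy = I_prime(0.0)     # distinct forces: just I
unification = I_prime(Ec)     # all entries equal: forces "merged"
```

Which mostly demonstrates the point made elsewhere in this thread: once you strip the prose, what's left is "a matrix that becomes uniform at high energy", with no mechanism, no dynamics, and no predictions.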
- - -
This comports with my understanding of the Inflaton field, which parametrically resonates (through geometric relationships?) with other fundamental fields during the Big Bang. I’ll pull some references here.
Disagree. The idea that there are different space time geometries for different quantum fields is an interesting and probably testable idea. I mean, yes, late night stoner shit, but next level.