In normal English usage, a quantum leap is a step-change, a near-discrete rather than continuous improvement, a large singular advance.
Given we are not talking about state changes in electrons, there is nothing wrong with this description of ChatGPT - it truly does feel like a massive advance to anyone who has even cursorily played with it.
For example, you can ask it questions like "Who was born first, Margaret Thatcher or George Bush?" and "Who was born first, Tony Blair or George Bush?" and in each instance it infers which George Bush you are talking about.
I honestly couldn't imagine something like this being this good only three years ago.
(1) You are correct that placing both of those questions into Google doesn't get you anywhere near the answer that I imagine ChatGPT gives you (as you point out). Although Google does "infer" which Bush you are talking about, there isn't a clear "this person is older" answer; you basically have to dive into the wiki pages to get it.
(2) Counter. I asked it the other day "how many movies were Tom Hanks and Meg Ryan in together" and the answer ChatGPT gave was 2 ... not only is that wrong, it is astonishingly wrong (IMO). You could be forgiven for forgetting Ithaca from 2015. I could forgive ChatGPT for forgetting that one. But You've Got Mail? That's a very odd omission. So much so that I'm genuinely curious how it could possibly get the answer wrong in that way. And for the record, Google presents the correct answer (4) in a cut-out segment right at the top, a result and presentation very close to what one would expect from ChatGPT.
I don't know about other use cases like generating stories (or, tangentially, art of any kind) for inspiration, etc. But as a search engine, things like ChatGPT NEED to have attributions. If I ask the question "Does a submarine appear in the movie Battlefield Earth?" it will confidently answer "no". I _think_ that answer is right, but I'm not really all that confident it is right. It needs to present the reasons it thinks that is right. Something like "No. I believe this because (1) the keyword submarine doesn't appear in the IMDb keywords (<source>), (2) the word submarine doesn't appear in the Wikipedia plot synopsis (<source>), (3) the film takes place in Denver (<source>), which is landlocked, making it unlikely a submarine would be found in that location during the course of the film."
The Tom Hanks / Meg Ryan question/answer would at least be more interesting if it explained how it managed to be so uniquely incorrect. That question will haunt me, though ... there's some rule about this, right? Asking about something you have above-average knowledge in and watching someone confidently answer it incorrectly. How am I supposed to ever trust ChatGPT again about movie queries?
The biggest thing I’ve learned from chatGPT is that real people struggle with the difference between intelligence, understanding, and consciousness / sentience.
Because they are all ill-defined in the manner they are used in common language. Hell, we have trouble describing what they are, especially in a scientific, fact-based setting.
Before this point in history we accepted 'I am that I am' because there wasn't any challenger to the title. Now that we are calling this into question, we realize our definitions may not work well.
>The biggest thing I’ve learned from chatGPT is that real people struggle with the difference between intelligence, understanding, and consciousness / sentience.
Well, I'm no fan of chatGPT. But it appears most people are worse than chatGPT, because they just regurgitate what they hear with no thought or contemplation. So you can't really blame average folks who struggle with the concepts of intelligence/understanding that you mention.
Which should be no surprise, as people have been grappling with these ideas for centuries, and we still don't have any definitive idea of what consciousness/sentience truly is. What I find interesting is that at one point the Turing test seemed to be the gold standard for intelligence, but chatGPT could pass that with flying colors. So how exactly will we know if/when true intelligence does emerge?
Well, my point wasn’t that there is a good definition of consciousness.
My point was that “consciousness” and “intelligence” are very different things. One does not imply the other.
Consciousness is about self-reflection. Intelligence is about insight and/or problem solving. The two are often correlated, especially in animals and especially in humans, but they’re not the same thing at all.
“Is chatgpt conscious” is a totally different question than “is chatgpt intelligent”.
We will know chatgpt is intelligent when it passes our tests of intelligence, which are imperfect but at least directionally correct.
I have no idea if/when we will know whether chatgpt is conscious, because we don’t really have good definitions of consciousness, let alone tests, as you note.
The most annoying thing to me is people thinking AI wants things and gets happy and sad. It doesn't have a mammalian or reptilian brain. It just holds a mirror up to humanity generally, via matrix math and probability.
I like this take. It has many clear applications already, and LLMs are still only in their infancy. I both criticize and use ChatGPT at work. It has flaws and it has advantages. That it's bullshit or "ELIZA" is a short-sighted view that overvalues the importance of AGI and misses what we're already getting.
But yes indeed, there are many, many AI products launched during this era of rapid progress. Even kind of shoddy products can be monetized if they provide value over what we had before. I think the crowded market and all the bullshit and all the awesome, all at once, is a sign of very rapid progress in this space. It will probably not always be like this and who knows what we are approaching.
I've used it to proofread emails for grammar, and it's done OK.
I'll also throw random programming questions at it, and it's been hit and miss. SO is probably still faster, and I like seeing the discussion. The problem with chatGPT right now is that it delivers an answer as if it's a certainty when it's often wrong.
I can see the benefits of this interaction model (basically summarizing all the things from a search into what feels like a person talking back), but I don't see anything justifying the change-the-world level of hype at the moment.
I also wonder if LLMs will get worse over time through error propagation, as more and more content is generated by other LLMs.
I’m not the person you replied to but I’ve been using OpenAI’s API a lot for work. Some examples:
- Embedding free text data on safety observations, clustering them together, using text completion to automatically label the clusters, and identifying trends (see the rough sketch after this list)
- Embedding free text data on equipment failures. Some of our equipment failures have been classified manually by humans into various categories. I use the embeddings to train a model to predict those categories for uncategorized failures.
- Analyzing employee development goals and locating common themes. Then using this to identify where there are gaps we can fill in training offerings.
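If it helps make that first bullet concrete, here's roughly the shape of the embed/cluster/label pipeline. This is a minimal sketch, not my exact setup: the model names, the cluster count, and the prompt wording are placeholder assumptions, and it assumes the pre-1.0 openai Python client plus scikit-learn.

    import numpy as np
    import openai
    from sklearn.cluster import KMeans

    observations = [
        "Worker not wearing gloves near the grinder",
        "Spill left uncleaned in aisle 3",
        # ... the rest of the free-text safety observations
    ]

    # 1. Embed each observation.
    resp = openai.Embedding.create(model="text-embedding-ada-002",
                                   input=observations)
    vectors = np.array([row["embedding"] for row in resp["data"]])

    # 2. Cluster the embeddings; k is a judgment call (elbow plot, etc.).
    kmeans = KMeans(n_clusters=5, n_init=10).fit(vectors)

    # 3. Have a completion model name each cluster from a few samples.
    for k in range(kmeans.n_clusters):
        samples = [o for o, c in zip(observations, kmeans.labels_) if c == k][:5]
        prompt = ("Give a short category label for these safety observations:\n"
                  + "\n".join("- " + s for s in samples) + "\nLabel:")
        out = openai.Completion.create(model="text-davinci-003",
                                       prompt=prompt, max_tokens=10)
        print(k, out["choices"][0]["text"].strip())

The equipment-failure bullet is basically the same front half: stop after the embedding step and feed the vectors, along with the categories humans already assigned, into any ordinary classifier (something like scikit-learn's LogisticRegression is plenty).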