TLDR: Assuming we don't argue right/wrong, technically everything an LLM does is a hallucination. That completely dilutes the meaning of the word, no?
TLDR: Sure. A rose by any other name would be just as sweet. It's when I use the name of the rose and imply aspects that are not present that we create confusion and busywork.
Hey, calling it a narrative moves it into PR speak. I know people have argued this term was incorrect since the first time it was shared on HN.
It was unpopular to say this when ChatGPT launched, because ChatGPT was just that. freaking. cool.
It is still cool.
But it is not AGI. It does not “think”.
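To make that concrete, here is a minimal sketch of the generation loop, assuming the usual decoder-only setup. `model` and `tokenizer` are hypothetical stand-ins, and everything is greatly simplified. The point is that a correct answer and a confabulated one come out of exactly the same loop; there is no separate "truth" pathway.

    # Hypothetical sketch: next-token sampling, the whole "thinking" process.
    import numpy as np

    def generate(model, tokenizer, prompt, max_tokens=50, temperature=0.8):
        tokens = tokenizer.encode(prompt)
        for _ in range(max_tokens):
            logits = model(tokens)                    # a score per vocab entry
            probs = np.exp((logits - logits.max()) / temperature)
            probs /= probs.sum()                      # softmax over the vocab
            next_token = np.random.choice(len(probs), p=probs)
            tokens.append(next_token)                 # sample, append, repeat
        return tokenizer.decode(tokens)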
Hell - I understand that we will be stacking multiple columns of turtles all the way down. I have a different name for this approach: statistical committees.
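For concreteness, a minimal sketch of what I mean by a statistical committee, assuming it amounts to majority voting over independently sampled answers (akin to self-consistency decoding). `query_llm` is a hypothetical helper that returns one sampled completion.

    # Hypothetical sketch: a "statistical committee" of LLM samples.
    from collections import Counter

    def committee_answer(query_llm, prompt, n_members=7):
        votes = [query_llm(prompt, temperature=0.8) for _ in range(n_members)]
        answer, count = Counter(votes).most_common(1)[0]
        return answer, count / n_members   # winning answer plus agreement rate

No member "thinks"; the committee just averages out sampling noise. Stacking committees of committees gives you the columns of turtles.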
Because we couched its work in terms of "thinking", "logic", and "creativity", we have dumped countless man-hours and money into avenues that are not fruitful. And this isn't just me saying it; even Ilya commented during some event that many people can create PoCs, but there are very few production-grade tools.
Regarding the L in ML, and the I in AI ->
1) ML and AI were never quite as believable as ChatGPT. Calling them "learning" and "intelligence" doesn't result in the same level of ambiguity.
2) A little bit of anthropomorphizing was going on.
Terms matter, especially at the start. New things get understood over time, and as we progress we do move to better terms. Let's save "hallucination" for when a digital system really starts hallucinating.