
You take the prompt and calculate which word is most probable after it. Like T9 with letters, but bigger.
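The "T9 but bigger" picture can be sketched as picking the highest-probability continuation from a distribution over a vocabulary. This is a toy illustration only; the vocabulary and probabilities below are made up, and a real LLM computes the distribution with a neural network rather than a lookup table:

```python
# Made-up probability distribution over candidate next words
# for some prompt like "the cat sat on the ...".
probs = {"mat": 0.40, "roof": 0.25, "chair": 0.20, "moon": 0.15}

def most_probable(dist):
    # Greedy selection: just take the single most probable word.
    return max(dist, key=dist.get)

print(most_probable(probs))  # -> mat
```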



And how do you "calculate which word is most probable next" for a combination of words that has never occurred before? Note that most sentences longer than about 20 words have, statistically, probably never been written before in human history.

The whole reason there is an AI here is that a Markov chain, which is what you are describing, doesn't work beyond a one- or two-word horizon.
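A minimal word-level Markov chain makes the limitation concrete: it can only continue contexts it has literally seen in its training text, so any novel combination leaves it stuck. The tiny corpus here is an illustrative assumption:

```python
import random
from collections import defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# First-order chain: map each word to the words observed following it.
chain = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    chain[prev].append(nxt)

def continue_text(word):
    # Returns None for any context never seen in training, which is
    # why short-horizon chains fail on never-before-written sentences.
    followers = chain.get(word)
    return random.choice(followers) if followers else None

print(continue_text("cat"))    # "sat" or "ate", seen in the corpus
print(continue_text("zebra"))  # None: unseen context, no prediction
```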

Not to mention that it doesn't just select the word it thinks is MOST probable, because that has been shown to lead to stilted and awkward output. Instead it randomly selects from the top few thousand candidate words, weighted by the model's probability estimates.
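Selecting randomly from the top candidates, weighted by the model's estimates, is roughly what top-k sampling with a temperature does. A sketch with made-up scores, not an actual model's output:

```python
import math
import random

# Made-up unnormalized scores (logits) for candidate next words.
logits = {"mat": 2.0, "roof": 1.2, "chair": 0.8, "moon": -1.0}

def top_k_sample(logits, k=3, temperature=1.0):
    # Keep only the k highest-scoring candidates.
    top = sorted(logits.items(), key=lambda kv: kv[1], reverse=True)[:k]
    # Softmax with temperature turns scores into sampling weights.
    weights = [math.exp(v / temperature) for _, v in top]
    # Draw one word in proportion to its weight.
    r = random.uniform(0, sum(weights))
    for (word, _), w in zip(top, weights):
        r -= w
        if r <= 0:
            return word
    return top[-1][0]

print(top_k_sample(logits))  # usually "mat", sometimes "roof" or "chair"
```

With k=1 this degenerates to greedy selection; larger k and higher temperature give the more varied, less stilted output the comment describes.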


I am not talking about the concrete realization, I am talking about the principle. You are right: LLMs are just Markov chains on steroids, thus they cannot "reason". For reasoning you need a knowledge model, a corpus of facts, Boolean algebra and so on. Not a petabyte of words downloaded from all over the internet, crunched and sifted through a huge self-supervised transformer network.


Your corpus is the internet. Words on the internet are for the most part not randomly placed next to each other. The neural network trained on this has implicitly created a reasoning model. Much like saying an ant hive exhibits intelligence.


But... an ant hive does not possess any intelligence, right? Even though colonies of ants are able to perform quite complex tasks.


What is intelligence? The ability to acquire and apply knowledge and skills. It's all relative: not as intelligent as a human, but more intelligent than a plant.


"The ability to achieve objectives in many different environments" is as good a definition as you need in order to achieve very powerful things.

It would be nice to have enough of a theory of intelligence to be more precise than that, but the above definition will take you very far.


We have actually made a wide swing from reasoning to intelligence. So I propose we ditch the ants and get back on track.


Reasoning is an easier thing to prove: we can literally ask Bing Chat to determine something, and it will follow a logical thought process to answer the question (this is reasoning). They've confirmed it was running GPT-4.

Humans are very irrational, but they can still be very good at this when they want to be, though not always. A limiting factor for GPT-4 is probably computing space/power.


Finding the most probable words from the internet for a given prompt has nothing to do with reasoning.


Please go type a query into Bing Chat or ChatGPT 4 where reasoning is involved, and it can answer you. Ask it something you haven't seen before.

AI can reason. It might not be the greatest at it, especially with numbers and where there's data contamination, but it can do it.

There's something called abductive reasoning, a gift and a curse at the same time.


I will try another analogy. Suppose we have a parrot with exceptional memory, which can not only repeat things it heard some time ago, but also continue phrases it is hearing now. I come to the cage and say "Cogito?" and the parrot continues "Ergo sum!". Is the parrot intelligent and able to reason, even if it does not know about Descartes?



