
The question in my head is: can LLMs think algorithmically?


LLMs can't think.


Source?


LLMs string together words using probability and randomness. This makes their output sound extremely confident and believable, but it can often be bullshit. This is not comparable to thought as seen in humans and other animals.
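In code terms, the core loop is just weighted sampling over a next-token distribution. A toy sketch in Python (the tokens and probabilities here are invented for illustration):

    import random

    # Toy next-token distribution a model might assign after "The sky is".
    # These tokens and probabilities are made up for illustration.
    tokens = ["blue", "clear", "falling", "green"]
    probs = [0.70, 0.15, 0.10, 0.05]

    # Sampling is where the randomness comes in: the likeliest token
    # usually wins, but less likely continuations get emitted too.
    next_token = random.choices(tokens, weights=probs, k=1)[0]
    print(next_token)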


Unfortunately, that is exactly what humans are doing an alarming fraction of the time.


One of the differences is that humans are very good at suppressing word associations we believe are spurious, which lets us outperform LLMs even without a hundred billion dollars' worth of hardware strapped into our skulls.


That's called epistemic humility: knowing what you don't know, or at least keeping your mouth shut. In my experience, humans actually suck at it, in all those forms.


Ask an LLM.


LLMs can think.


Source?


I use them a lot. They sure seem thinky.

The other day I had one write a website for me. Totally novel concept. No issues.


I have a similar experience. Just thought it'd be cute to ask you both for sources. Interesting that asking you for sources got me upvoted, while asking the other guy for sources got me downvoted :)


Interesting question.

LLMs can be cajoled into producing algorithms.

In fact, this is the idea behind Chain-of-Thought prompting.

LLMs give better results when asked to produce a series of steps leading to a result than when asked for the result directly.
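A minimal sketch of the difference between the two prompt styles, where query_llm is a hypothetical stand-in for whatever model API you're using:

    # query_llm is a hypothetical stand-in for any model API.
    def query_llm(prompt: str) -> str: ...

    # Direct prompt: just ask for the result.
    direct = query_llm("What is 17 * 24?")

    # Chain-of-thought prompt: ask for the intermediate steps first.
    cot = query_llm(
        "What is 17 * 24? Work through it step by step, "
        "showing each intermediate step before the final answer."
    )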

Whether LLMs "think" is an open question, and answering it requires a definition of thinking :-)


Like a bad coder with a great memory, yes


The problem is the word "producing" in the parent comment; it should be "reproducing".



