Interesting question.

LLMs can be cajoled into producing algorithms.

In fact, this is the idea behind Chain-of-Thought prompting.

LLMs tend to give better results when asked to lay out a series of steps leading to a result than when simply asked for the result directly.
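For concreteness, here's a minimal sketch of the difference, assuming the OpenAI Python client; the model name, the ask() helper, and the example question are placeholders of my own, not anything from the comment above:

    # Minimal sketch: direct prompt vs. chain-of-thought prompt.
    # Assumes the OpenAI Python client; model name and question are placeholders.
    from openai import OpenAI

    client = OpenAI()

    def ask(prompt: str) -> str:
        """Send a single user prompt and return the model's text reply."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    question = ("A bat and a ball cost $1.10 together; the bat costs $1.00 "
                "more than the ball. How much does the ball cost?")

    # Direct: just ask for the answer.
    direct = ask(question)

    # Chain-of-thought: ask for the reasoning steps before the answer.
    cot = ask(question + "\nThink step by step and show your reasoning "
                         "before giving the final answer.")

    print("Direct answer:\n", direct)
    print("\nChain-of-thought answer:\n", cot)

The only difference is the extra instruction asking for intermediate steps; in practice that one change often shifts the model from a plausible-sounding guess to a worked-out answer.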

Whether LLMs “think” is an open question, and answering it requires a definition of thinking :-)


