Hacker News

For the human brain, the LSAT requires reasoning. But not for an LLM. Do we even know exactly what data this is trained on? I have only seen vague references to what data they are using. If it is trained on large chunks of the internet, then it certainly is trained on LSAT practice questions. And because LSAT questions follow a common pattern, the test is well suited to an LLM. There isn't any reasoning or general intelligence at all. Just really good statistics applied to large amounts of data.



> For the human brain, the LSAT requires reasoning. But not for an LLM.

Exactly, much like a chess bot can play perfectly without what humans would call thinking.

I think (ironically) we'll soon realize that there is no actual task that would require thinking as we know it.


This made me think of a Dijkstra quote

> The question of whether computers can think is like the question of whether submarines can swim

It has only become more relevant.


From the article: "We did no specific training for these exams. A minority of the problems in the exams were seen by the model during training, but we believe the results to be representative—see our technical report for details."


I’m skeptical. There is a lot of wiggle room in “no specific training”. It could just mean they didn’t fine-tune the model for any of the tests. Their training data probably included many past LSAT exams and certainly included many instances of people discussing how to solve LSAT problems.


How is that different from humans preparing for the LSAT by studying sample questions and reading explanations?



