
I'm reminded of the interview in which a researcher asks firefighters how they make decisions under pressure, and one fireman answers that he never makes any decisions at all.

In other words, people can apply logic implicitly to solve puzzles without ever reasoning explicitly. Similarly, an LLM can be implicitly "fine-tuned" into a logic model simply by prompting it with a puzzle, insofar as that logic model fits in its weights. Transformers are very flexible that way.
