
Yes, OK, but when does this come up in the real world?



It's an example of a problem that requires actual reasoning to solve, and also an example of a "looks similar therefore must use similar solution" trap that LLMs are so prone to.

Translating this to code, for example, it means that Gemma is that much more likely to pretend to solve a more complicated problem you give it by quietly "simplifying" it to something it already knows how to solve.
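To make that concrete, here's a made-up illustration of the trap (the task and the code are invented for this comment, not taken from any actual Gemma output): suppose you ask for a partition that must be stable, i.e. it has to preserve the relative order of elements within each group. The "looks similar" failure mode is to hand back the familiar in-place two-pointer partition, which scrambles that order and silently drops the one requirement that made the problem different.

```python
# Hypothetical example (task invented): the prompt asks for a *stable*
# partition -- elements satisfying the predicate come first, and the
# original relative order is preserved within each group.
# A "simplified" answer would be the textbook in-place swap-based
# partition, which ignores the stability requirement.

def stable_partition(items, predicate):
    """Return items reordered so matches come first, keeping the
    original relative order inside each group."""
    matching = [x for x in items if predicate(x)]
    rest = [x for x in items if not predicate(x)]
    return matching + rest

if __name__ == "__main__":
    data = [3, 8, 1, 6, 5, 2]
    # Evens first, but 8 must still precede 6 and 2, and 3 must precede 1 and 5.
    print(stable_partition(data, lambda x: x % 2 == 0))  # [8, 6, 2, 3, 1, 5]
```

The point isn't this particular function; it's that the dropped requirement is exactly the part a careless reviewer won't notice, which is what makes this failure mode expensive in practice.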



