Hacker News new | past | comments | ask | show | jobs | submit login

In order to believe this, you'd need to be able to imagine a specific test of something that an LLM could not do under any circumstances. Previously, that test could have been something like "compose a novel sonnet on a topic". Today, it is much less clear that such a test (that won't be rapidly beaten) even exists.



You could use a Markov chain to generate poetry with rhyme and meter[1]. Granted, it wouldn't be a very good one, but that just makes an LLM a refinement to older probabilistic methods.

As for something LLMs are unlikely to do under any circumstances, there's already a fairly obvious example. They can't keep a secret, hence prompt injections.

[1] https://us.pycon.org/2020/schedule/presentation/112/


Do you really believe that an LLM that can keep a secret cannot be made? I suspect that we could do this trivially and the "LLMs can't keep a secret" is a specific product of finetuning for helpfulness.


How about make new scientific discoveries?




Consider applying for YC's Fall 2025 batch! Applications are open till Aug 4

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact

Search: