
Nope, there are limits to what next-token prediction can do, and we have hit those limits. Cursor and the like are great for some use cases - for example, semantic search for relevant code snippets, and autocomplete. But beyond that, they only bring frustration in my use.


Arguably, most of the recent improvement in AI coding agents didn't come from getting better at next-token prediction in the first place. It came from getting better at context management and RAG, and from growth in the usable context window that lets you do more with both.

And I don't really see any reason to declare we've hit the limit of what can be done with those kinds of techniques.
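For readers unfamiliar with the jargon: "context management plus RAG" here boils down to retrieving the snippets most relevant to a query and packing them into the model's context window. A minimal sketch of that loop, assuming a toy bag-of-words embed() as a stand-in for a real embedding model (build_context is a hypothetical helper, not any actual tool's API):

    import math, re
    from collections import Counter

    def embed(text):
        # Toy stand-in for a real embedding model: a bag-of-words
        # count vector over alphanumeric tokens. Real tools use
        # learned code embeddings; the interface is the same idea.
        return Counter(re.findall(r"[a-z0-9]+", text.lower()))

    def cosine(a, b):
        # Cosine similarity between two sparse count vectors.
        dot = sum(a[t] * b[t] for t in a if t in b)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def build_context(query, snippets, top_k=2, budget_chars=2000):
        # Rank candidate snippets by similarity to the query, then
        # pack the best ones into the prompt until the budget is spent.
        q = embed(query)
        ranked = sorted(snippets, key=lambda s: cosine(q, embed(s)),
                        reverse=True)
        picked, used = [], 0
        for s in ranked[:top_k]:
            if used + len(s) > budget_chars:
                break
            picked.append(s)
            used += len(s)
        return "\n\n".join(picked)

    snippets = [
        "def parse_config(path): ...",
        "class RetryPolicy: ...",
        "def connect_db(url, retries=3): ...",
    ]
    print(build_context("retry a db connection", snippets))

Real agents swap the toy embedding for a learned one and measure the budget in tokens rather than characters, but the shape of the loop is the same.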


I am sure they will continue to improve, just as static analyzers and linters continue to improve.

But, fundamentally, LLMs lack a theory of the program, in the sense intended in this comment: https://news.ycombinator.com/item?id=44443109#44444904 . Hence, they can never reach the promised land being talked about - unless there are innovations beyond next-token prediction.


They do lack a theory of the program. But also, if there's one consistent theme you can trace through my 25 years of studying and working in ML/AI/whateveryouwanttocallit, it's that symbolic reasoning isn't nearly as critical to building useful tools as we like to think it is.

In other words, it would be wrong of me to assume that the only way I can think of to solve a problem is the only way to do it.



