Hacker News

It is pretty clear that long-horizon tasks are difficult for coding agents, and that this is a fundamental limitation of how probabilistic token generation works, whether with a transformer or any other architecture. The errors propagate and multiply, and the process becomes open-ended.
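The compounding claim can be made concrete with a minimal sketch. The per-step success probability and step counts below are illustrative assumptions, not measurements of any real agent:

```python
# Sketch: if each step of a long-horizon task succeeds independently
# with probability p, the whole n-step task succeeds with probability p**n.
# The values of p and n are illustrative assumptions, not measured data.

def task_success_probability(p: float, n: int) -> float:
    """Probability that all n independent steps succeed."""
    return p ** n

# Even a 99%-reliable step compounds badly as the horizon grows:
for n in (10, 100, 1000):
    print(f"{n} steps: {task_success_probability(0.99, n):.4f}")
```

Under these assumptions, reliability decays exponentially in the number of steps, which is one way to state the "errors propagate and multiply" point.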

However, the limitation can be masked using layering techniques, where the output of one agent is fed as input to another, with consensus used for verification, or other techniques applied to the nth degree to minimize errors. But this is a bit like the story of the boy with his finger in the dike: yes, you can spawn as many boys as you like, but there is an associated cost that keeps growing and never narrows down.
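The consensus-for-verification idea can be sketched as a majority vote over independent agents. This is a toy model under strong assumptions (independent agents, a fixed per-agent accuracy `p`, both invented for illustration), showing that the vote's error shrinks while the cost grows linearly and never reaches zero error:

```python
# Toy model of layered verification: take a strict-majority vote over k
# independent agents, each correct with probability p. Accuracy improves,
# but cost is proportional to k and the residual error never hits zero.
# p and k are illustrative assumptions, not properties of any real system.
from math import comb

def majority_vote_accuracy(p: float, k: int) -> float:
    """Probability that a strict majority of k independent agents is correct."""
    return sum(
        comb(k, i) * p**i * (1 - p) ** (k - i)
        for i in range(k // 2 + 1, k + 1)
    )

for k in (1, 3, 9, 33):
    print(f"{k} agents: accuracy {majority_vote_accuracy(0.8, k):.4f}, cost {k}x")
```

Each extra layer buys a smaller accuracy gain at the same marginal cost, which is the "cost keeps growing and never narrows down" point in miniature.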

It has nothing to do with context windows, focus, or any other human-centric metric. This is what the architecture is supposed to do, and it does so perfectly.




