
Well, yes, because modern models can solve all the examples in the article. The theory of compositionality is still an issue, but the evidence for it is receding.

I think most of the issue comes from the challenge of informational coherence. Once incoherence enters the context, the intelligence drops massively. You can have a lot of context and LLMs can maintain coherence, but not if the context itself is incoherent.

And, in a long enough thread, it is just a matter of time before a little incoherence creeps in.

This is why agents have so much potential: keeping distinct threads of thought in separate context windows reduces the likelihood of incoherence emerging, compared with one long thread. Rough sketch of what I mean below.
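A minimal sketch, assuming a generic chat-completion callable (`llm(messages)` here is a hypothetical stand-in, not any particular API): each subtask gets a fresh context instead of inheriting everything said so far in one long thread.

    # Hypothetical illustration: `llm(messages)` is a stand-in for any
    # chat-completion call, not a real library function.
    def solve_in_isolation(subtasks, llm):
        results = []
        for task in subtasks:
            # Fresh context per subtask: earlier incoherence can't leak in.
            messages = [{"role": "user", "content": task}]
            results.append(llm(messages))
        return results

    def solve_in_one_thread(subtasks, llm):
        # Contrast: one long thread where every reply, coherent or not,
        # stays in context for all later subtasks.
        messages, answers = [], []
        for task in subtasks:
            messages.append({"role": "user", "content": task})
            reply = llm(messages)
            messages.append({"role": "assistant", "content": reply})
            answers.append(reply)
        return answers

The first version trades shared context for isolation; the point is only that incoherence in one subtask's output can't contaminate the others.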

Actually, maybe “cybernetic ecologies” are closer to what I mean than “agents.” See Anthropic’s “Building Effective Agents.” https://www.anthropic.com/research/building-effective-agents



>I think most of the issue comes from the challenge of informational coherence. Once incoherence enters the context, the intelligence drops massively. You can have a lot of context and LLMs can maintain coherence, but not if the context itself is incoherent.

As a non-expert, part of my definition of intelligence is that the system can detect incoherence, a.k.a. reject bullshit. LLMs today can't do that and will happily emit bullshit in response.

Maybe the "gates" in the "workflows" discussed in the Anthropic article are a practical solution to that. But that still just seems like inserting human intelligence into the system for a specific engineering domain; not a general solution.



