
Good take from Dwarkesh. And I love hearing his updates on where he’s at. In brief - we need some sort of adaptive learning; he doesn’t see signs of it.

My guess is that frontier labs think long context is going to solve this: a quality 10M-token context would be enough to freeze an agent at a great internal state and still get a lot done.

Right now the long context models have highly variable quality across their windows.

But to reframe: will we have useful 10M-token context windows in 2 years? That seems very possible.



How long is "long"? Real humans have context windows measured in decades of realtime multimodal input.


I think there’s a good clue here to what may work for frontier models: you definitely do not remember everything about a random day 15 years ago. By the same token, you almost certainly remember some things about a day much longer ago than that, if something significant happened. So you have some compression / lossy-memory mechanism at work that keeps you from being a tabula rasa about anything older than [your brain’s memory capacity].

Some architectures try to model this infinite-but-lossy horizon with functions that can be applied as a single pass over the input context. So far none of them seem to beat the good old attention head, though.
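
To make that concrete, here is a toy sketch (my own illustration in NumPy, not any particular paper's architecture) of the contrast: softmax attention keeps every token around and looks things up exactly, while a linear-attention-style recurrent state folds the whole history into one fixed-size matrix: constant memory over an unbounded horizon, but lossy, since old entries decay and interfere.

    import numpy as np

    def softmax_attention(q, K, V):
        # Exact lookup over the full history: memory and compute
        # grow with the number of stored tokens t.
        # q: (d,), K: (t, d), V: (t, d).
        scores = K @ q / np.sqrt(q.shape[0])
        w = np.exp(scores - scores.max())
        w /= w.sum()
        return w @ V

    def compressive_write(keys, values, decay=0.99):
        # Fold the entire history into one (d, d) state matrix.
        # Constant memory regardless of horizon, but lossy: older
        # associations fade (decay) and collide (interference).
        d = keys.shape[1]
        S = np.zeros((d, d))
        for k, v in zip(keys, values):
            S = decay * S + np.outer(v, k)
        return S

    def compressive_read(S, q):
        # Approximate recall: returns roughly the values whose keys
        # align with q, weighted toward recent writes.
        return S @ q

The compressed state never runs out of room, which is the appeal; the catch, as noted above, is that these fixed-size summaries still tend to lose to plain attention on recall quality.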


Speak for yourself. I can barely remember what I did yesterday.


I believe Demis when he says we are 10 years away from AGI.

He basically defined the field (outside academia) for many years, and OpenAI was partially founded to counteract his lab, out of fear that he would get there first (and be the only one).

So I trust him. Sometime around 2035, he expects there will be AGI, which he believes will be as good as or better than humans at virtually every task.


When someone in tech says something is 10 years out, it means there are several breakthroughs needed that they think could happen if things go just right. Being an expert doesn't make the 10 years more accurate; it makes the 'breakthroughs needed' part more meaningful.


>He basically defined the field (outside academia) for many years

Not even close.


Not even close? So DeepMind wasn’t the clear leader in AI for years and years, until OpenAI started showing new and interesting efforts?


This guy has a vested interest in talking nonsense about AGI to attract investors' money and government subsidies worth billions.

Privately, he doesn't think it's likely in the next 25 years.


He doesn’t need any more money, I think. Not when he has Google money.


Well, even being part of Google doesn't mean you have infinite funds at your disposal. Recently, his startup raised $600 million.

"The drug development artificial intelligence (AI) startup founded by Google DeepMind co-founder and CEO Demis Hassabis has raised $600 million in its first external funding round."

https://www.pymnts.com/artificial-intelligence-2/2025/ai-sta...


I don’t think there’s a lack of VC money in AI… there are a few $1-10 billion companies without a product.


Still, it's the sales pitch that made that raise possible. This is why I'm sceptical about his AGI views: is this his genuine opinion, or just hype he's interested in sustaining as an AI startup owner?


He started DeepMind 10 years ago and said they were on track for AGI within 20 years.


Agreed, I was blown away by some of his achievements, like AlphaGo. Still, I'm not convinced that AGI is around the corner, even if he appears to claim otherwise.


There was a company that claimed to have solved it, and we've heard nothing but crickets from them since.


I'm sure we'll have true test-time learning soon (<5 years), but it will be more expensive. AlphaProof (from DeepMind's IMO attempt) already has this.
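
For the curious, here is a generic sketch of what test-time learning looks like (my own PyTorch illustration, assuming a model that maps a (1, seq) tensor of token ids to (1, seq, vocab) logits; AlphaProof's actual recipe, reinforcement learning on self-generated proof attempts, is more involved). The per-query gradient steps are exactly where the extra expense comes from:

    import copy
    import torch
    import torch.nn.functional as F

    def test_time_adapt(model, context_tokens, steps=8, lr=1e-4):
        # Clone the model so the base weights stay frozen, then take
        # a few gradient steps on next-token prediction over this one
        # problem's context before answering.
        adapted = copy.deepcopy(model)
        opt = torch.optim.SGD(adapted.parameters(), lr=lr)
        inputs, targets = context_tokens[:-1], context_tokens[1:]
        for _ in range(steps):
            logits = adapted(inputs.unsqueeze(0)).squeeze(0)
            loss = F.cross_entropy(logits, targets)
            opt.zero_grad()
            loss.backward()
            opt.step()
        return adapted  # answer with this copy, then discard it

Every query pays for its own miniature training run, which is why it costs more than plain inference.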



