
I basically agree with you, and I think the thing that's missing from a bunch of the responses that disagree is that it seems fairly apparent now that AI has largely hit a brick wall in terms of the benefits of scaling. That is, most folks were pretty astounded by the gains you could get just by stuffing more training data into these models, but like someone who argues a 15-year-old will be 50 feet tall based on the last 5 years' growth rate, people still arguing that past growth rates will continue apace don't seem honest (or aware) to me.
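To put toy numbers on that analogy (all invented, obviously):

    # Toy numbers for the growth-extrapolation fallacy:
    years = [10, 11, 12, 13, 14, 15]     # ages
    height = [55, 58, 61, 64, 67, 70]    # inches, ~3 in/year recently

    rate = (height[-1] - height[0]) / (years[-1] - years[0])   # 3.0 in/yr
    naive_at_50 = height[-1] + rate * (50 - years[-1])
    print(f"linear extrapolation at age 50: {naive_at_50:.0f} in")  # 175 in, ~15 ft

    # Growth saturates; with any ceiling the forecast collapses back to it.
    ceiling = 72
    print(f"saturating forecast at age 50: {min(naive_at_50, ceiling):.0f} in")

The trend line is real data, and the forecast is still nonsense, because the underlying process saturates.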

I'm not at all saying it's impossible that some improvement will be discovered in the future that allows AI progress to continue at breakneck speed, but I am saying that the "progress will only accelerate" conclusion, based primarily on the progress since 2017 or so, rests on faulty reasoning.



  > it seems fairly apparent now that AI has largely hit a brick wall in terms of the benefits of scaling

What's annoying is that plenty of us (researchers) predicted this and got laughed at. Now that it's happening, it's just quiet.

I don't know about the rest, but I spoke up because I didn't want to hit a brick wall, I wanted to keep going! I still want to keep going! But if accurate predictions (with good explanations) aren't a reason to shift resource allocation, then we just keep making the same mistake over and over. We let the con men in, along with people so excited by success that they're blind to the pitfalls.

And hey, I'm not saying give me money. This account is (mostly) anonymous. There are plenty of people who made accurate predictions and tried working in other directions but never got funding to test how their methods scale up. We say there are no alternatives, but nothing else has been given a tenth of the effort. Apples and oranges...


> What's annoying is plenty of us (researchers) predicted this and got laughed at. Now that it's happening, it's just quiet.

You need to model the business world and management more like a flock of sheep being herded by forces that mostly have nothing to do with what is actually going to happen in the future. It makes a lot more sense.


  > mostly don't have to do with what actually is going to happen

Yet I'm talking about what did happen.

I'm saying we should have memory. Look at the predictions people make. Reward accurate ones; don't reward failures. Right now we reward whoever makes the craziest predictions. It hasn't always been this way, so we should go back to something less crazy.


Practically no one is herded by what is actually going to happen, hardly even by what is expected to happen. Business pretends it is driven by expectations, but it is mostly driven by the past, as in financial statements: what is the bonus we can get this year? There is of course strategic thinking, and I don't want to discount that part of business, but it is not what drives most of these "AI as a cost-saving measure" decisions. This is the unimaginative part of AI application, and as such it is relegated to the unimaginative managers.


> It is difficult to get a man to understand something, when his salary depends on his not understanding it.

It's all a big hype bubble, and not only is no one in the industry willing to pop it, they actively defend against popping a bubble that is clearly rupturing on its own. It's symptomatic of how modern businesses no longer care about a proper 10-year portfolio, only about how to make the next quarter look good.

There's just no skin in the game, and everyone's ransacking before the inevitable fire instead of figuring out how to prevent the fire to begin with.


> What's annoying is plenty of us (researchers) predicted this and got laughed at. Now that it's happening, it's just quiet.

Those people always do that. Shouting about cryptocurrencies and NFTs from the rooftops 3-4 years ago, now completely gone.

I suspect they're the same people, basically get rich quick schemers.


Sure, you were right.

But if you had been wrong and we now had superintelligence, the upside for its owners would presumably have been great.

... Or at least that's the hypothesis. As a matter of fact, intelligence is only somewhat useful in the real world :-)


I am not sure the owners would remain the owners in the case of real superintelligence, though.


I don't see any wall. Gemini 2.5 and o3/o4 are incredible improvements. Gen AI is miles ahead of where it was a year ago, which was miles ahead of where it was two years ago.


The actual LLM part isn't much better than a year ago. What's better is the logic they've added around it: they've made it possible to intertwine traditional, expert-system-style AI and live internet access to augment the LLMs so that they're actually useful.
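To sketch what I mean (rule_engine, web_search, and llm here are hypothetical stand-ins, not any vendor's actual API):

    # Hypothetical orchestration loop; the LLM itself is unchanged,
    # and the gains come from the logic wrapped around it.

    def rule_engine(q: str) -> str | None:
        # traditional, expert-system-style rules get first crack
        return "42" if "meaning of life" in q else None

    def web_search(q: str) -> str:
        return f"(top snippets for: {q})"           # stub for live internet access

    def llm(prompt: str) -> str:
        return f"(model completion for: {prompt})"  # stub for the frozen model

    def answer(question: str) -> str:
        if (hit := rule_engine(question)) is not None:
            return hit
        context = web_search(question)              # ground the model in fresh data
        return llm(f"Context: {context}\nQuestion: {question}")

    print(answer("what changed in LLM products this year?"))

Swap in a real model and a real search backend and you get most of what people experience as "the model got smarter."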

This is an improvement for sure, but LLMs themselves are definitely hitting a wall. It was predicted that scaling alone would allow them to reach AGI level.


> It was predicted that scaling alone would allow them to reach AGI level.

This is a genuine attempt to inform myself. Could you point me to those sorts of claims from experts at the top?


There were definitely people "at the top" who were essentially arguing that more scale would get you to AGI - Ilya Sutskever of OpenAI comes to mind (e.g. "next-token prediction is enough for AGI").

There were definitely many other prominent researchers who vehemently disagreed, e.g. Yann LeCun. But it's very hard for a layperson (or, for that matter, another expert) to determine who is or would be "right" in this situation - most of these people have strong personalities to put it mildly, and they often have vested interests in pushing their preferred approach and view of how AI does/should work.


The improvements have less to do with scaling than with new techniques like better fine-tuning and reinforcement learning. The infinite scaling we were promised, which required only more content and more compute to reach god tier, has indeed hit a wall.


I probably wasn't paying enough attention, but I don't remember that being the dominating claim that you're suggesting. Infinite scaling?


People were originally very surprised that you could get so much functionality by just pumping more data and adding more parameters to models. What made OpenAI initially so successful is that they were the first company willing to make big bets on these huge training runs.

After their success, I definitely saw a ton of blog posts and general "AI chatter" claiming that to get to AGI all you really needed to do (obviously I'm simplifying a bit) was get more data and add more parameters, more "experts", etc. Heck, OpenAI had to scale back its pronouncements (GPT-5 essentially became 4.5) when they found they weren't getting the performance/functionality advances they expected after massively scaling up their model.
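For what it's worth, the "just scale" pitch did have a quantitative backbone: the Chinchilla paper (Hoffmann et al., 2022) fit loss as a function of parameter count N and token count D. Plugging in the paper's fitted constants (from memory, so treat them as approximate) shows the diminishing returns directly:

    # Chinchilla-style scaling law: L(N, D) = E + A/N**alpha + B/D**beta
    # Constants are the paper's fitted values as I recall them -- approximate.
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

    def loss(n_params: float, n_tokens: float) -> float:
        return E + A / n_params**alpha + B / n_tokens**beta

    # 100x the parameters at a fixed 1.4T tokens barely moves the loss:
    for n in (7e10, 7e11, 7e12):
        print(f"N={n:.0e}  ->  L={loss(n, 1.4e12):.3f}")

The irreducible E term out front is, more or less, the "wall" everyone in this thread is arguing about.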


I basically agree with you as well, but I have a somewhat contrarian view of scaling -> brick wall. I feel like the application of powerful local models is stagnating, perhaps because Apple has not done a good job so far with Apple Intelligence.

A year ago I expected a golden age of local-model intelligence integrated into most software tools, and for more powerful commercial tools like Google Jules to be used perhaps 2 or 3 times a week for specific difficult tasks.

That said, my view of the future was probably wrong; I am just saying what I expected.



