
Yudkowsky just mentioned that even if LLM progress stopped right here, right now, there have already been enough fundamental economic changes to give us a really weird decade. Even with no moat, if the labs are in any way placed to capture a little of the value they've created, they could make high multiples of their investors' money.



Like what economic changes? You can make a case that people are 10% more productive in very specific fields (programming, perhaps consultancy, etc.). That's not really an earthquake; the internet/web was probably way more significant.


LLMs are fundamentally a new paradigm; it just isn't distributed yet.

It's not like the web was suddenly just there; it came slowly at first, then everywhere at once, and the money came even later.


LLMs are quite widely distributed already; they're just not that impactful. My wife is an accountant at a Big 4 firm and they're all using them (everyone on Microsoft Office is probably using them, which is a lot of people). It's just not the earth-shattering tech change CEOs make it out to be, at least not yet. We need order-of-magnitude improvements in things like reliability, factuality and memory for the real economic efficiencies to come, and it's unclear to me when that's gonna happen.


Not necessarily; workflows just need to be adapted to the technology, rather than fitting the technology into existing workflows. It's something that happens during each industrial revolution.

Originally, electric motors merely replaced steam engines on the factory floor and brought no additional productivity gains; that only changed when factories redesigned the rest of their processes around them.


I don't get this. What workflow can tolerate occasional catastrophic lapses of reasoning, non-factuality, no memory, hallucinations, etc.? Even in things like customer support this is a no-go, imo. As long as these very major problems aren't improved (by a lot), the tools will remain very limited.


We are on the cusp of a new era. LLMs are only part of the story. Neural net architectures and tooling have matured to the point where building things like LLMs is possible. LLMs are important and will forever change "the interface" for both developers and users, but it's only the beginning. The Internet changed everything slowly, then quickly, then slowly. I expect that to repeat.


So you're just doing Delphic oracle prophecy. Mysticism is not actually that helpful or useful in most discussions, even if some mystical prediction accidentally ends up correct.


Observations and expectations are not prophecy, but thanks for replying to dismiss my thoughts. I've been working on an ML project outside of the LLM domain, and I am blown away by the power of the tooling compared to a few years ago.


> What workflow can tolerate occasional catastrophic lapses of reasoning, non-factuality, no memory, hallucinations, etc.?

LLMs might enable some completely new things to be automated that made no sense to automate before, even if it's necessary to error-correct with humans or other software.


There are a lot of productivity gains to be had in things like customer support. The model can draft a response and the human merely validates it. Hallucination rates are falling, and even minor savings add up in areas with large scale, productivity targets and strict SLAs, such as call centres. It's not a reach to say it could already do a lot of business process outsourcing (BPO) work.
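
A minimal sketch of that draft-then-validate loop (generate_draft here is a made-up stand-in for whatever model API you'd actually call, not any specific library):

    # Human-in-the-loop triage: the model drafts, a person approves or rewrites.
    def generate_draft(ticket: str) -> str:
        # Hypothetical LLM call; swap in your provider's client here.
        return f"Dear customer, regarding '{ticket}': ..."

    def handle_ticket(ticket: str) -> str:
        draft = generate_draft(ticket)
        print(draft)
        if input("Send as-is? [y/n] ").strip().lower() == "y":
            return draft
        # The agent edits the draft instead of writing from scratch.
        return input("Edited reply: ")

The agent's job shifts from writing every reply to reviewing most of them, which is where the savings come from.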


Source on hallucination rates falling?

I use LLMs 20-30 times a day, and while they feel invaluable for personal use, where I can interpret the responses at my own discretion, they still hallucinate enough and have enough lapses in logic that I would never feel confident incorporating them into some critical system.


My own experience, but if you insist:

https://www.visualcapitalist.com/ranked-ai-models-with-the-l...

99% of systems aren't critical, and human validation is sufficient. In my own use case, it's enough to replace plenty of hours of human labour.


Government and healthcare workers have been using AI for notes for over a year in Louisiana; an additional anecdote to the sibling comment.


It's a force multiplier.

Think of having a secretary, or ten. These secretaries are not as good as an average human at most tasks, but they're good enough for tasks that are easy to double check. You can give them an immense amount of drudgery that would burn out a human.
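
One concrete shape that delegation can take (just a sketch; ask() is a stub where a cheap real model call would go): hand the same piece of drudgery to several unreliable "secretaries" and only keep an answer most of them agree on.

    # Fan one task out to several fallible workers; keep the majority answer.
    from collections import Counter

    def ask(task: str) -> str:
        # Hypothetical model call: cheap, fast, sometimes wrong.
        return "2021-03-14"  # e.g. a date extracted from a document

    def delegate(task: str, n: int = 5) -> str:
        answers = [ask(task) for _ in range(n)]
        best, votes = Counter(answers).most_common(1)[0]
        if votes <= n // 2:
            raise ValueError("no consensus, escalate to a human")
        return best

This only pays off for tasks where checking agreement is cheap, which is exactly the "easy to double check" case.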


What drudgery, though? Secretaries don't do a lot of drudgery. And a good one will see tasks that need doing that you didn't specify.

If you're generating immense amounts of really basic make-work, that seems like you're managing your time poorly.


As one example, LLMs are great at summarizing, and at writing or brainstorming outlines. They won't display world-class creativity, but as long as they're not hallucinating, their output is quite usable.

Using them to replace core competencies will probably remain forbidden by professional ethics (writing court documents, diagnosing patients, building bridges). However, there are ways for LLMs to assist people without doing their jobs for them.

Law firms are already using LLMs to deal with large amounts of discovery materials. Doctors and researchers probably use them to summarize papers they want to be familiar with but don't have the energy to read themselves. Engineers might eventually be able to use AI to do a rough design, then do all the regulatory and finite element analysis necessary to prove that it's up to code, just like they'd have to do anyway.

I don't have a high-level LLM subscription, but I think with the right tooling, even existing LLMs might already be pretty good at managing schedules and providing reminders.


Very limited thinking. AI is a tool.


It's an echo chamber.

It is - what? - the fifth anniversary of "the world will be a completely different place in 6 months due to AI advancement"?

"Sam Altman believes AI will change the world" - of course he does, what else is he supposed to say?


It is a different place. You just haven't noticed yet.

At some point fairly recently, we passed the point at which things that took longer than anyone thought they would take are happening faster than anyone thought they would happen.


Yep, totally agree. It will also depend on who captures the most eyeballs.

ChatGPT is already my default first place to check something, where Google was for the previous 20+ years.


I use it for all kinds of unique things, but ChatGPT is the last place I look for facts.


Eyeballs aren't enough, though. Unlike Google search, ChatGPT is very expensive to run. It's unlikely they can just slap ads on it like Google did.


Inference costs will keep dropping. The stuff the average consumer does will be trivially cheap. More stuff will move on device. The edge capabilities of these models are already far beyond what the average person can use or comprehend.

The point I wonder about is the sustainability of every query fanning out into 30+ requests. Site owners aren't ready for 98% of their requests to be non-monetizable bot traffic. However, sites that have something to sell are...


With no moat, they aren't placed to capture much value. Moats are what stop market competition from driving prices down to the zero-economic-profit level, and that's even without further competition from free products built by people who aren't even trying to support themselves in the market you're selling into, which can make even the zero-economic-profit price untenable.


Market competition doesn't work in an instant; even without a moat, there's plenty of money they can capture before it evaporates.

Think of pouring water from the faucet into a sink with an open drain: if the flow rate is high enough, you can fill the sink faster than it drains. Then, when you turn the faucet off, as the sink is draining, you can still collect plenty of water from it with a cup or a bucket before the sink fully drains.
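
To put toy numbers on the analogy (entirely made up, just to show the mechanism):

    # Toy model: revenue pours in while competition widens the drain.
    inflow, drain = 10.0, 2.0   # made-up units per month
    captured, month = 0.0, 0
    while drain < inflow:
        captured += inflow - drain
        drain *= 1.4            # assume competition ramps up 40% per month
        month += 1
    print(f"{captured:.0f} units captured over {month} months")

Even when the margin is guaranteed to evaporate, what's collected before it does can be substantial.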


The startups that are using API credits seem the most likely to achieve a good return on capital. There is a pretty clear cost structure, and it's much more straightforward to tell whether you are making money or not.

The infrastructure side of things, with tens of billions and probably hundreds of billions going in, may not be fantastic for investors. The return on capital should approach the cost of capital if someone does their job correctly. Add in government investment and subsidies (in China, the EU, the United States) and it becomes extremely difficult to make those calculations. In the long term, I don't think the AI infrastructure will be overbuilt (datacenters, fabs), but as in the telecom bubble, it is easy to end up in a position where there is a lot of excess capacity and the way you made your bet means getting wiped out.

Of course if you aren't the investor and it isn't your capital, then there is a tremendous amount of money to be made because you have nothing to lose. I've been around a long time, and this is the closest thing I've felt to that inflection point where the web took off.


> Market competition doesn't work in an instant; even without a moat, there's plenty of money they can capture before it evaporates.

Sure, in a hypothetical market where most participants aren't already losing money on below-profitable prices to keep mindshare before anyone tries to extract profits. But in the LLM market, you'd need a breakthrough around which a participant had some kind of moat to get there, even temporarily.


Oh really? What are these changes supposed to look like? Who will pay up, essentially? I don't really see it, aside from the M$ business case of offering AI as a cover for violating privacy much more aggressively to better sell ads.



