
I don't think it's going to be a "winter", but there are definitely some bubbles to burst - especially once LLMs get turned into half-assed products and the general public's heightened expectations aren't met.


Years ago, so many 'machine learning' startups failed because their predictions were 90% accurate when 99.99% was needed for businesses to pay for them. Those old scars seem to be forgotten in the LLM mania - why will businesses pay now, when the previous non-hallucinating ML models weren't reliable enough?
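
To put rough numbers on that gap (a hypothetical workload for illustration, not figures from any real startup):

    # Hypothetical: same volume, very different absolute error counts.
    daily_predictions = 100_000  # assumed business volume

    for accuracy in (0.90, 0.99, 0.9999):
        errors = daily_predictions * (1 - accuracy)
        print(f"{accuracy:.2%} accurate -> {errors:,.0f} errors/day")

    # 90.00% accurate -> 10,000 errors/day
    # 99.00% accurate -> 1,000 errors/day
    # 99.99% accurate -> 10 errors/day

At 90%, someone still has to catch ten thousand mistakes a day; at that point the business may as well do the work itself.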


I think the biggest bubble that needs to burst is probably the whole "AGI" thing.

The definition of it isn't clear, but from what I gather it's basically an aggregate of emergent capabilities that work together to produce a singularity.

Maybe with enough resources it's possible, but I highly doubt it'll be economically feasible, given how much has gone into it so far and how far away current models really are from something like that.


In my opinion you could make an LLM 100x bigger and it would only get better at generating the next token in a sentence. And everyone knows that the best sentences are not constructed by the most intelligent people with the most accurate world model, but by the people who are best at constructing sentences. It's a dead end in terms of real intelligence and reasoning imo.


People who make the best sentences don't necessarily make the world go round. In most scenarios, a barely adequate sentence is enough to keep the world turning.


AGI, in the sense you mention, is an imagined/hoped-for supreme power that will save/destroy us all (or maybe just the "worthy"/"unworthy" ones).

In an age of such hopelessness about the future, this looks a lot like an emotional crutch wrapped in the veil of rationality - just the thing an anxious materialist needs to make sense of the world.

Like many cults and religions, it mistakes the plausible for the possible and the possible for the probable.

The problem with religious beliefs like these is that they don't just disappear with evidence or sufficient reasoning.

I don't think that particular bubble is bursting anytime soon.


Well, that's one definition, but I think most people mean general intelligence like humans have, rather than something godlike. That is more doable.


If that's the goal, then perhaps we've already surpassed it, and I personally am not impressed.

It's useful but basically every method of quality control requires a human.

I've found that components of general intelligence specialized beyond human capability are much more useful than a model that can mimic a human.

I think an LLM is just trying to do too much at once. The individual NLP capabilities most of them bundle together are very useful to us, but an LLM itself is not specialized enough to be any more useful than a human without specialization.

Which isn't to say they're _useless_, but they're obviously not as useful as a specialist (in the special contexts that kind of specialist covers).

ETA: as an aside, I'd like to contextualize my presumption that AGI is about AI singularity with the fact that Sam Altman casually stated that he doesn't care if it takes $50 billion to reach AGI.

In the real world, with 50 billion dollars, you can do something much more useful than trying to build a product that's basically contradictory by definition.

An AGI is (presumably) a general intelligence model, but it's implicitly touted as being extremely useful for specialized tasks (because humans can specialize). Yet once you specialize, I would argue your general intelligence tends to weaken. (For example, I wouldn't expect a Harvard PhD to be 100% up to date on modern slang, but I'd be shocked if I went to a local bar and met someone who didn't know what rizz means.)

This is basically just trying to squeeze two opposite ends of a spectrum together, which sounds kind of like a singularity to me.


One of the reasons people like Altman get excited is that if AI is as good as humans all round, then you can replace the workforce. Also, given the way of these things, it will get better each year. We'll see.


> people like Altman get excited is that if AI is as good as humans all round, then you can replace the workforce.

I get that. I guess my point is that this already seems to exist. We could combine AI with machinery to replace almost everything humans do already; someone just has to build that solution (e.g. train some models).

AGI just sounds like a sort of automation of that process. And I don't think a bigger LLM will accomplish that task. I think more developers will.

Which I wager would be cheaper, and arguably more beneficial to the human race, than $50 billion thrown into one pot.


Current AI is pretty patchy in its abilities. Chess is great, chatbot stuff has recently become quite good, but hook it up to a robot and tell it to pop down to Tesco's to get some milk, then come back and tidy the house, and it's hopeless!

But yeah developers are needed, a bigger LLM won't fix everything.

The money's a funny one. Global GDP is about $85,000bn/yr, so if someone can spend $50bn on getting AGI and taking it over, it's a bargain. But if you spend $50bn and just get a loss-making chatbot, then less so.
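
Back-of-the-envelope, using those same figures:

    # $50bn bet vs ~$85,000bn/yr of global GDP (figures from above)
    global_gdp = 85_000e9
    agi_bet = 50e9
    print(f"{agi_bet / global_gdp:.3%} of one year's global GDP")
    # -> 0.059%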


The use case you describe hardly matches what most workers do, though. That's a robot butler, not a desk worker who takes calls and fills out forms based on what the customer on the other end of the line says, or a factory worker (where automation has been replacing tons of dangerous jobs without AI since the advent of engineering, really).

Also, I still think you can probably build something (or rather, many, many somethings) with existing tooling to accomplish exactly that.

> A bigger LLM won't fix everything

I'm not sure if there's a camp that says it probably won't fix anything, but I'm in that camp if it exists.

If you think about how humans actually work, I think a basic, non-AGI LLM routing information to different agents/models is closer to how most humans behave (when productivity is their goal).

E.g. a person's behavior is, most of the time, driven almost entirely by the context they're currently in.

It's not that our minds get overexcited by loads of previous information and magically become able to do other specialized tasks; we decide, based on context, which specialty in our toolset best fits the scenario.
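
A minimal sketch of that routing idea (the model names and keyword classifier are hypothetical stand-ins, not any real framework's API):

    # Hypothetical context-based router: a cheap classifier picks a
    # specialist, so no single model has to be good at everything.
    SPECIALISTS = {
        "code": lambda task: f"[code model] {task}",
        "math": lambda task: f"[math model] {task}",
        "chat": lambda task: f"[chat model] {task}",
    }

    def classify(task: str) -> str:
        # Stand-in for a small router model reading the current context.
        t = task.lower()
        if any(kw in t for kw in ("bug", "function", "compile")):
            return "code"
        if any(kw in t for kw in ("integral", "prove", "equation")):
            return "math"
        return "chat"

    def route(task: str) -> str:
        return SPECIALISTS[classify(task)](task)

    print(route("fix the function that won't compile"))
    # -> [code model] fix the function that won't compile

The point is the dispatch step, not the toy keyword matching: context selects the specialty, and the specialist does the work.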

> The money's a funny one. Global GDP is about $85,000bn/yr, so if someone can spend $50bn on getting AGI and taking it over, it's a bargain. But if you spend $50bn and just get a loss-making chatbot, then less so.

If that's true, then the same could be said of just dumping $50 billion into grants/research/funding for education around AI, so that developers worldwide have an easier time developing AI-enabled technologies and services.

At least with that plan there is extremely little risk of creating nothing more than a chatbot (and extremely low risk of tech companies monopolizing labor the way they try to monopolize everything else; I don't have much faith that, if a few companies automate all or most labor, they'll redistribute the wealth).


I think theconstruct has something called the rosject, which is similar to what you are describing here, but AFAIK it never gained any traction apart from usage in their own online courses. I do think a platform that hosts robotics datasets would be nice, though.


Thanks, I checked their website but couldn't find anything similar to what I have in mind. I'm thinking of something more like a directory with backlinks to code, and in the future a web interface/tools that connect to your local Gazebo, Omniverse, or whatever.


AI in itself does not pose any threats (yet). The "safety" concern here is always about how humans would (mis)use it.


Yes, we all have made up our minds about it. But we are still trying to figure out the logistics.


Good luck to you. A coup d'état isn't something everyone can say they participated in. This will either work out well and you'll be high-fiving the remaining team, or you're all getting fired. Simple as.

FWIW, I've worked at places (plural) like the one you've described. In my experience, "reading the writing on the wall" and focusing on my resume and marketability (in preparation for leaving) was "the move". I burnt no bridges, even when I really wanted to.


If the team is so amazing, you should probably rethink firing the dude who formed it. Why don’t you guys start your own company?


I won't provide too many details about our company, but we use computer vision to provide health-related services.


I meant as in LLC, C-corp, co-op, etc.


I was facing the same problem, and recently I finally made the decision to migrate from Gmail to Fastmail and use masked emails everywhere except for a few trusted services. It's not a trivial task, and as of now the migration is still ongoing, but the result is amazing. Finally, no more triple-digit notification bubbles.

