
I keep seeing this assertion: "the robots will get there" (or its ilk), and it's starting to feel really weird to me.

It's an article of faith -- we don't KNOW that they're going to get there. They're going to get better, almost certainly, but how much? How much gas is left in the tank for this technique?

Honestly, the fact that every new "groundbreaking" news release about LLMs has come alongside a swath of discussion about how it doesn't actually live up to the hype -- that it achieves a solid "mid" and stops there -- makes me think it's more likely that the robots AREN'T going to get there some day. (Well, not unless there's another breakthrough AI technique.)

Either way, I still think it's interesting that a lot of us hold this article of faith -- "we're not there now, but we'll get there soon" -- without really examining it, and it colors the discussion a certain way.



IMO it seems almost epistemologically impossible that LLMs following anything even resembling the current techniques will ever be able to comfortably outperform humans at genuinely creative endeavours, because they, almost by definition, cannot be "exceptional".

If you think about how an LLM works, it's effectively asking: "given a certain input, what is the statistically average output I should provide, given my training corpus?"
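
To make that concrete, here's a toy sketch (Python, with made-up numbers; no real model works on a three-word vocabulary) of what "statistically average" means in practice -- greedy decoding always picks the modal next token, which by construction steers you toward the center of the training distribution:

    import math

    def next_token_probs(context):
        # Stand-in for a real LLM's output: a score per candidate token.
        # A real model produces logits over ~100k tokens, not three.
        logits = {"the": 2.1, "a": 1.8, "quixotic": -3.0}
        z = sum(math.exp(v) for v in logits.values())
        return {tok: math.exp(v) / z for tok, v in logits.items()}

    def generate(context, steps=3):
        for _ in range(steps):
            probs = next_token_probs(context)
            # Greedy decoding: always take the most probable token, so
            # rare, "exceptional" continuations are systematically avoided.
            context += " " + max(probs, key=probs.get)
        return context

    print(generate("once upon"))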

The thing is, humans are remarkably shit at understanding just how exceptional someone needs to be to be genuinely creative in a way that most humans would consider "artistic"... You're talking 1/1000 people AT best.

This creates a kind of devil's bargain for LLMs where you have to start trading training set size for training set quality, because there's a remarkably small amount of genuinely GREAT content to feed these things.
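
Rough back-of-the-envelope arithmetic on that bargain (both numbers are assumptions: the 1/1000 figure is from above, the corpus size is an illustrative guess):

    # Illustrative only; both constants are assumed, not measured.
    corpus_tokens = 15e12          # ballpark web-scale training corpus
    exceptional_fraction = 1/1000  # "1/1000 people AT best", from above

    elite_corpus = corpus_tokens * exceptional_fraction
    print(f"{elite_corpus:.1e} tokens")  # 1.5e+10 -- orders of magnitude
                                         # short of a frontier training run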

I DO believe that the current field of LLMs/LXMs will get much better at a lot of stuff, and my god, anyone below the top 10-15% of their particular field is going to be in a LOT of trouble. But unless you can train models SOLELY on the input of exceptionally high-performing people (and I fundamentally believe there is simply not enough such content in existence), the models almost by definition will not be able to outperform those high-performing people.

Will they be able to do the intellectual work of the average person? Yeah, absolutely. Will they be able to do it 100-1000x faster than any human (no matter how exceptional)? Yeah, probably... But I don't believe they'll be able to do it better than the truly exceptional people.


I’m not sure. The bestsellers lists are full of average-or-slightly-above-average wordsmiths with a good idea, the time and stamina to write a novel and risk it failing, someone who was willing to take a chance on them, and a bit of luck. The majority of human creative output is not exceptional.

A decent LLM can just keep going. Time and stamina are effectively unlimited, and an LLM can just keep rolling its 100 dice until they all come up sixes.
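
A minimal sketch of that "keep rolling" strategy -- best-of-n sampling, with a hypothetical score() standing in for whatever quality judge you trust (a human editor, a reward model, etc.):

    import random

    def score(draft):
        # Placeholder quality judge; in practice a human or a reward model.
        return random.random()

    def best_of_n(generate_draft, n=100):
        # "Keep rolling the dice": draft n candidates, keep the best one.
        # For an LLM the marginal roll costs GPU-seconds, not months.
        drafts = [generate_draft() for _ in range(n)]
        return max(drafts, key=score)

    best = best_of_n(lambda: "a candidate chapter...", n=100)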

Or an author can just input their ideas and have an LLM do the boring bit of actually putting the words on the paper.


I get your point, but using the best-sellers list as a proof point isn't exactly a slam-dunk.

What's that saying? "Nobody ever went broke underestimating the taste of the average person."


I’m just saying, the vast majority of human creative endeavours are not exceptional. The bar for AI is not Tolkien or Dickens, it’s Grisham and Clancy.


IMO the problem facing us is not that computers will directly outperform people on the quality of what they produce, but that they will be used to generate an enormous quantity of inferior crap that is just good enough that filtering it out is impossible.

Not replacement, but ecosystem collapse.


We have already trashed the internet -- and really human communication in general -- with SEO blogspam, dragged even lower by influencers desperately scrambling for their two minutes of attention. I could actually see average quality rising, since it will now be easy to churn out higher-quality content, more easily than the word salad I have been wading through for at least the last 15 years.

I am not saying it's not a sad state of affairs. I am just saying we have been there for a while and the floor might be raised, a bit at least.


Yes, LLMs are probably inherently limited, but the AI field in general is not necessarily limited, and possibly has the potential to be more genuinely creative than even most exceptional creative humans.


I loosely suspect too many people are jumping into LLMs, and I assume real research is being strangled as a result. But to be honest, all of the practical alternatives I have seen, such as Mr Goertzel's work, are painfully complex -- very few can really get into them.


Agreed. I think people are extrapolating with a linearity bias. I find it far more plausible that the rate of improvement is not constant, but instead a function of the remaining gap between humans and AI, which means that diminishing returns are right around the corner.
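
One way to make that precise (my framing of the parent's claim, with an assumed rate constant): if each model generation closes a fixed fraction of the remaining human-AI gap, progress looks fast early and then flattens, without the gap ever quite reaching zero:

    gap = 1.0  # normalized human-AI gap; starting point is an assumption
    k = 0.3    # assumed fraction of the remaining gap closed per generation

    for gen in range(1, 9):
        improvement = k * gap  # improvement proportional to remaining gap
        gap -= improvement
        print(f"gen {gen}: improved {improvement:.3f}, gap now {gap:.3f}")
    # Improvements shrink geometrically: 0.300, 0.210, 0.147, 0.103, ...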

There's still much to be done re: reorganizing how we behave so that we can reap the benefits of such a competent helper, but I don't think we'll be handing the reins over any time soon.


In addition to "will the robots get there?" there's also the question "at what cost?". The faith-basedness of it is almost fractal:

- "Given this thing I saw a computer program do, clearly we'll have intelligent AI real soon now."

- "If we generate sufficiently smart AI then clearly all the jobs will go away because the AI will just do them all for us"

- "We'll clearly be able to do the AI thing using a reasonable amount of electricity"

None of these ideas are "clear", and they're all based on some "futurist faith" crap. Let's say Microsoft does succeed (likely at colossal cost in compute) in creating some humanlike AI. How will they put it to work? What incentives could you offer such a creature? What will it want in exchange for labor? What will it enjoy? What will it dislike? But we're not there yet -- first show me the intelligent AI, then we can discuss the rest.

What's really disturbing about this hype is precisely that the technology is so computationally intensive. So of course the computer people are going to hype it -- they're pick-and-shovel salespeople supplying (yet another) gold rush.


AI has been so conflated with LLMs as of late that I'm not surprised it feels like we won't get there. But think of it this way: with all of the resources pouring into AI right now (the bulk going towards LLMs, though), the people doing non-LLM research, while still getting scraps, have a lot more scraps to work with! Even better, they can probably work in peace, since LLMs are the ones under the spotlight right now haha


LLMs are not the last incarnation. I assume that all the money, research and human ingenuity will eventually find better architectures.

I’m not sure we really want that, but I am pretty sure we’ll try for it.


People are taking it as an article of faith because almost every prediction that "AI will not be able to do X anytime soon" has failed.



