
I love this take. Most AI results provoke a torrent of articles listing pratfalls that prove it's not AGI. Of course it's not AGI! But it is as unexpected as a talking dog. Take a second to be amazed, at least amused. Then read how they did it and think about how to do better.


I mean I'm not so impressed, because it seems like someone's figured out the ventriloquist trick and is just spamming it to make anything talk. It's fun enough, but it's unclear what this is achieving.


>ventriloquist trick

I don't understand your analogy here. It really is the machine talking; there is no human hiding behind the curtain.


I guess there is a bunch of data hiding behind the curtains, and there is a human feeding the data to the model. However, I don't agree with GP here, as a ventriloquist's dummy isn't doing anything a human can't. A well-trained language model can produce in seconds output that would take a human weeks.


I think a better analogy is a parrot speaking English. Certainly no ventriloquism there.


This metaphor doesn't do justice to parrots or language models. Parrots only speak a phrase or two. LMs can write full essays.

On the other hand, parrots can act in their environment, while LMs are isolated from the world and society. So a parrot has a chance to test its ideas out, but LMs don't.


Marketing people love to make false claims, setting crazy expectations. Increased competition encourages these small lies, and sometimes even academic fraud.

This has hurt and will continue to hurt the ML field.


I agree. The talking dog analogy deflates those claims while still pointing out what is unique and worth following up on about the results.

Meanwhile, the chorus of "look, this AI still makes dumb mistakes and is not AGI" takes has, in many circles, gotten louder than the marketing drumbeat. It risks drowning out actual progress and persuading sensitive researchers to ignore meaningful ML results, which will result in a less representative ML community going forward.


It's rarely productive to take internet criticism into account, but it feels like AI is an especially strong instance of this. It seems like a lot of folks just want to pooh pooh any possible outcome. I'm not sure why this is. Possibly because of animosity toward big tech, given big tech is driving a lot of the research and practical implementation in this area?


> It seems like a lot of folks just want to pooh pooh any possible outcome. I'm not sure why this is. Possibly because of animosity toward big tech

It's much simpler, and deeper: most humans believe they are special/unique/non-machine-like spiritual beings. Anything suggesting they could be as simple as the result of mechanical matrix multiplications is deeply disturbing and unacceptable.

There is a rich recent anti-AGI literature written by philosophy people which basically boils down to this: "a machine could never be as meaningful and creative as I am, because I am human, while the AGI is just operations on bits".


There was a time when we were proud that Earth was the center of the universe, and nobody dared say otherwise!


though at the same time, and in the same population, the existence of other planets full of conscious beings was basically non-controversial:

Life, as it exists on Earth in the form of men, animals and plants, is to be found, let us suppose in a high form in the solar and stellar regions.

Rather than think that so many stars and parts of the heavens are uninhabited and that this earth of ours alone is peopled—and that with beings perhaps of an inferior type—we will suppose that in every region there are inhabitants, differing in nature by rank and all owing their origin to God, who is the center and circumference of all stellar regions.

Of the inhabitants then of worlds other than our own we can know still less having no standards by which to appraise them.

Nicholas of Cusa, c1440; Cardinal and Bishop


AI has been over-hyped, that's all.

The machine learning techniques that were developed and enhanced during the last decade are not magical, like any other machines/software.

But people have different and irrational expectations about AI.


Has it been overhyped? Some ML created in the last 8 years is in most major products now. It has been transformative even if you don't see it; it informs most things you use. We're not close to AGI, but I've never heard an actual researcher, or the orgs they work for, make that claim. They just consistently show that, for the tasks they pick, they beat most baselines and in a lot of cases humans. The models just don't generalize, but with transformers we're able to get them to perform above baselines for multiple problems, and that's the excitement. I'm not sure who has overhyped it for you, but it's delivering in line with my expectations ever since the first breakthrough in 2013/2014 that let neural nets actually work.


It's just that the day-to-day instances of "AI" that you might run into are nowhere near the level of hype they initially got. For instance, all kinds of voice assistants are just DUMB. Like, so, so bad they actively put people off using them, with countless examples of them failing at even the most basic queries. And in the instances where they feel smart, it looks like it's only because you actually hit a magic passphrase that someone hardcoded in.

My point is - if you don't actually work with state of the art AI research, then yeah, it's easy to see it as nothing more than overhyped garbage, because that's exactly what's being sold to regular consumers.


I agree about the assistants; they are not as good as I would expect. But there are also self-driving cars heavily using AI, and even at the current state I am personally impressed. Indirectly, we got help during the pandemic with protein folding and mRNA vaccine development [1]. I also remember a competition for speeding up the delivery of cold-storage mRNA vaccines, to quickly figure out which ones could fail.

[1] https://ai.plainenglish.io/how-ai-actually-helped-in-the-dev...


It is overhyped because in the end it has failed to deliver the breakthroughs promised years ago. Self-driving cars are a great example.

The AI that has taken over Google search has made a great product kind of awful now.

What breakthroughs are you referring to?


A. Most people still think Google search is good. B. Unless you work for Google, specifically on that search team, I'm going to say you don't know what you're talking about. So we can safely throw that point away.

I've implemented a natural-language search using bleeding-edge work; the results, I can assure you, are impressive.

Everything from route planning to spam filtering has seen major upgrades thanks to ML in the last 8 years. Someone mentioned the Zoom backgrounds; besides that, image generation and the field of image processing in general. Document classification, translation. Recommendations. Malware detection, code completion. I could go on.

No one promised me AGI, so I don't know what you're on about; that certainly wasn't the promise billed to me when things thawed out this time. But the results have pretty undeniably changed a lot of the tech we use.


Why would you discount someone who has been measuring relevancy of search results and only accept information from a group of people who don't use the system? You are making the mistake of identifying the wrong group as experts.

You may have implemented something that impressed you, but when you moved that solution into real use, were others as impressed?

That's what is probably happening with the Google search team. A lot of impressive demos, pats on the back, metrics being met, but it falls apart in production.

Most people don't think Google's search is good. Most people on Google's team probably think it's better than ever. Those are two different groups.

Spam filtering may have had upgrades, but it is not really better for it, and in many cases it is worse.


Maybe because a single anecdote isn't really useful to represent billions of users? They have access to much more information.

I used it in real use; the answer was still a hard yes.


One of Deepmind's goals is AGI, so it is tempting to evaluate their publications for progress towards AGI. Problem is, how do you evaluate progress towards AGI?

https://deepmind.com/about

"Our long term aim is to solve intelligence, developing more general and capable problem-solving systems, known as artificial general intelligence (AGI)."


AGI is a real problem, but the proposed pace is marketing fluff -- on the ground they're just doing good work and moving our baselines incrementally. If a new technique for, let's say, document translation is 20% cheaper/easier to build and 15% more effective, that is a breakthrough. It is not a glamorous, world-redefining breakthrough, but progress is more often than not incremental. I'd say more so than the big eureka moments.

Dipping into my own speculation, to your point about how to measure: between our (humanity's) superiority complex and how we move the baselines right now, I don't know if people will acknowledge AGI unless and until it's far superior to us. If even an average adult-level intelligence is produced, I see a bunch of people just treating it poorly and telling the researchers that it's not good enough.

Edit: And maybe I should amend my original statement to say I've never heard a researcher promise me AGI. That said, that statement from DeepMind doesn't really promise anything other than that they're working towards it.


Shane Legg is a cofounder of DeepMind and an AI researcher. He was pretty casual about predicting human-level AGI in 2028.

https://www.vetta.org/2011/12/goodbye-2011-hello-2012/

He doesn't say so publicly any more, but I think it is due to people's negative reaction. I don't think he changed his opinion about AGI.


If we are going to start saying "but it hasn't achieved X yet when Y said it would" as a way to classify a field as overhyped, then I don't know what even remains.


I mean, Zoom can change your video background in real time, and people all over the world do so every day. This was an unimaginable breakthrough 10 years ago.


>This was an unimaginable breakthrough 10 years ago.

We had real time green screens 10 years ago. I don't think it's that unimaginable.


What does that have to do with AI though??


How do you think that's done?


Definitely not with AI?

I mean Photoshop could do that 10+ years ago without "machine learning" even being a thing people talked about.


This is sort of the interesting thing with AI. It's a moving target. Every time an AI problem gets cracked, it's "yea but that's not really AI, just a stupid hack".

Take autonomous cars. Sure, Musk is over-hyping, but we are making progress.

I imagine it will go something like:

Step 1) support for drivers (anti-sleep or collision).. done?

Step 2) autonomous driving in one area, perfect conditions, using expensive sensors

Step n) gradual iteration removes those qualifications one by one

.. yes, it will take 10/20 years before cars can drive autonomously in chaotic conditions such as "centre of Paris in the rain". But at each of those steps value is created, and at each step people will say "yea but..".


Arguably most autonomous driving solutions can already perfectly emulate typical driving in Paris in the rain.


And you'd be wrong. The key part here is "live in real time video"

Photoshop definitely cannot do that, I know that for a fact.

https://towardsdatascience.com/virtual-background-for-video-...

There's an example article on the subject.
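
To make the split concrete: the ML part is a segmentation network that has to produce a per-pixel "person" mask for every frame; once you have that mask, the compositing itself is trivial array math. Here's a rough Python sketch of just that compositing step (the mask is faked by hand below, because the trained segmentation model is exactly the part being hand-waved):

    import numpy as np

    def replace_background(frame, background, person_mask):
        # frame, background: HxWx3 uint8 images; person_mask: HxW floats in [0, 1],
        # ~1.0 where the segmentation model thinks a person is. In a real pipeline
        # that mask comes from a lightweight neural network run on every frame;
        # the model itself is assumed here, not implemented.
        mask = person_mask[..., None]  # broadcast the mask over the RGB channels
        out = frame.astype(np.float32) * mask + background.astype(np.float32) * (1 - mask)
        return out.astype(np.uint8)

    # Toy usage with random data standing in for a webcam frame and the model's mask.
    frame = np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8)
    background = np.zeros_like(frame)              # pretend this is your beach photo
    person_mask = np.zeros((480, 640), dtype=np.float32)
    person_mask[120:360, 200:440] = 1.0            # pretend the model found a person here
    composited = replace_background(frame, background, person_mask)

The point is that, unlike a green screen, nothing about the physical background is assumed; a model has to figure out where the person is, live, on every frame.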


I just don't see how that's AI, sorry. Machine learning to recognize a background isn't AI.


ML is most certainly AI. I had a visceral feeling you'd respond with this. Sorry, but whatever magic you have in your head isn't AI -- this is real AI, and you're moving goalposts like a lot of people tend to do.


You have single-cell organisms which are able to sense their nearby surroundings and make a choice based on the input - they can differentiate food from other materials and know how to move towards it. They are a system which can process complex input and make a decision based on that input. Yet you wouldn't call a basic single-cell organism intelligent in any way. The usual explanation is that it's simply a biochemical reaction that makes them process the input and make a choice; you wouldn't call it intelligence, and in fact no biologist ever would.

I feel the same principle should apply to software - yes, you've built a mathematical model which can take input and make a decision based on its internal algorithms; if you trained it to detect the background in video, then that's what it will do.

But it's not intelligence. It's no different than the bacteria deciding what to eat because certain biological receptors were triggered. I think calling it intelligent is one of the biggest lies IT professionals tell themselves and others.

That's not to say the technology isn't impressive - it certainly is. But it's not AI in my opinion.


> We’re not close to AGI but I’ve never heard an actual researcher make that claim or the orgs they work for.

The fact that the researchers were clear about that doesn't absolve the marketing department, CEOs, journalists and pundits from their BS claims that we're doing something like AGI.


> The machine learning techniques that were developed and enhanced during the last decade are not magical, like any other machines/software.

You might be using a different definition of "magical" than what others are using in this context.

Of course, when you break down ML techniques, it's all just math running on FETs. So no, it's not extra-dimensional hocus pocus, but absolutely nobody is using that particular definition.

We've seen unexpected superhuman performance from ML, and in many cases, it's been inscrutable to the observer as to how that performance was achieved.

Think move 37 in game #2 of Lee Sedol vs. AlphaGo. This move was shocking to observers, in that it appeared to be "bad", but was ultimately part of a winning strategy for AlphaGo. And this all happened against the backdrop of sudden superhuman performance in a problem domain that was "safe from ML".

When people use the term "magic" in this context, think of "Any sufficiently advanced technology is indistinguishable from magic" mixed with the awe of seeing a machine do something unexpected.

And don't forget, the human brain is just a lump of matter that consumes only about 20 W of power to achieve what it does. No magic here either, just physics. Synthetically replicating (and completely surpassing) its functionality is a question of "when", not "if".


Was Go ever "safe from ML", as opposed to "[then] state of the art can't even play Go without a handicap"? It seems like exactly the sort of thing ML should be good at: approximating Nash equilibrium responses in a perfect-information game with a big search space (and humans setting a low bar, as we're nowhere near finding an algorithmic or brute-force solution). Is it really magical that computers running enough simulations expose limitations in human Go theory (arguably one interesting lesson was that humans were so bad at playing that AlphaGo Zero was better off not having its dataset biased by curated human play)?

Yes, it's a clear step forward compared with only being able to beat humans at games which can be fully brute-forced, or a pocket calculator being much faster and more reliable than the average human at arithmetic due to a simple, tractable architecture. But it's also one of the least magical-seeming applications, given we already had the calculators and chess engines (especially compared with something like playing Jeopardy), unless you had unjustifiably strong priors about how special human Go theory was.

I think people are completely wrong to pooh-pooh the utility of computers being better at search and calculation in an ever wider range of applied fields. But linking computers surpassing humans at more examples of those problems to certainty that we'll synthetically replicate brain functionality we barely understand is the sort of stretch which is exactly why AGI sceptics feel the need to point out that this is just a tool iterating through existing programs and sticking lines of code together until the program outputs the desired output, not evidence of reasoning in a more human-like way.


I strongly disagree that we've seen anything unexpected so far.

AlphaGo is nothing but brute force.

And brute force can go a long way; it should not be underestimated.

But so far, this approach has not led to emergent behaviors; the ML black box is not giving back more than what was fed in.


AlphaGo is decidedly not brute force, under any meaningful definition of the term. It's Monte Carlo tree search, augmented by a neural network to give stronger priors on which branches are worth exploring. There is an explore/exploit trade-off to manage, which takes it out of the realm of brute force. The previous best Go programs used Monte Carlo tree search alone, or with worse heuristics for the priors. AlphaGo improves drastically on the priors, which is arguably exactly the part of the problem that one would attribute to understanding the game: of the available moves, which ones look the best?

They used a fantastic amount of compute for their solution, but, as has uniformly been the case for neural networks, the compute required for both training and inference has dropped rapidly after the initial research result.
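
To make the contrast with brute force concrete, here's a toy Python sketch of the PUCT-style selection rule at the core of that kind of search (an illustration of the idea, not DeepMind's code); the prior on each move stands in for what the policy network would supply, and it's what keeps the search from expanding every branch:

    import math

    # Toy PUCT-style move selection, roughly the shape of AlphaGo-style MCTS.
    # Illustrative sketch only; the `prior` values stand in for what the
    # policy network would supply.

    class Node:
        def __init__(self, prior):
            self.prior = prior      # P(s, a): network's prior belief in this move
            self.visits = 0         # N(s, a)
            self.value_sum = 0.0    # W(s, a)
            self.children = {}      # move -> Node

        def q(self):
            # Mean value of this move so far; 0 if never visited.
            return self.value_sum / self.visits if self.visits else 0.0

    def select_child(node, c_puct=1.5):
        # Pick the child maximizing Q + U: exploit moves that have done well,
        # but keep exploring moves the network likes that are rarely visited.
        total_visits = sum(child.visits for child in node.children.values())
        def puct(child):
            u = c_puct * child.prior * math.sqrt(total_visits + 1) / (1 + child.visits)
            return child.q() + u
        return max(node.children.items(), key=lambda kv: puct(kv[1]))

    # Toy usage: three candidate moves with priors from a (hypothetical) policy net.
    root = Node(prior=1.0)
    root.children = {"A": Node(0.6), "B": Node(0.3), "C": Node(0.1)}
    move, child = select_child(root)  # with no visits yet, picks the highest-prior move

With even a decent prior, the vast majority of legal moves are barely visited at all, which is roughly the opposite of brute force.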


> AlphaGo is nothing but brute force.

This statement is completely false with accepted definitions of "brute force" in the context of computer science.


If recent philosophy taught us anything, it's that brains are special. The hard problem of consciousness shows science is insufficient to rise to the level of entitlement of humans; we're exceptions flying over the physical laws of nature, we have free will, a first-person POV, and other magical stuff like that. Or we have to believe in panpsychism or dualism, like in the Middle Ages. Anything to lift the human status.

Maybe we should start from "humans are the greatest thing ever" and then try to fit our world knowledge to that conclusion. We feel it right in our qualia that we are right, and qualia are ineffable.


> The hard problem of consciousness shows science is insufficient to raise to the level of entitlement of humans, we're exceptions raising over the physical laws of nature, we have free will and other magical stuff like that.

That's not my understanding of the 'hard problem of consciousness'. Admittedly, all I know about the subject is what I've heard from D Chalmers in half-a-dozen podcast interviews.

Can you point to a definitive source?


I mean, it is magical, in the sense that we are not sure how and why it works.


I'm not sure how it works, but I'm sure it doesn't think. Not till it can choose its own loss function.


Are you sure that you are thinking? Can you choose your loss function?


It's not like people can arbitrarily choose their own loss function; our drivers, needs, and desires are what they are. You don't get to just redefine what makes you happy (otherwise clinical depression would not be a thing); they change over time and can be affected by various factors (things like heroin or brain injury can adjust your loss function), but it's not something within our conscious control. So I would not put that as a distinguishing factor between us and machines.


Sure, it doesn't think, just as submarines don't swim, as EWD said.


People always reach for these analogies. "Planes don't fly like birds." "Submarines don't swim like fish."

Backpropagation has zero creativity. It's an elaborate mechanical parrot, and nothing more. It can never relate to you on a personal level, because it never experiences the world. It has no conception of what the world is.

At least a dog gets hungry.

Not persuaded? Try https://news.ycombinator.com/item?id=23346972


> Backpropagation has zero creativity. It's an elaborate mechanical parrot, and nothing more. It can never relate to you on a personal level, because it never experiences the world. It has no conception of what the world is.

The problem is: it's not really clear how much creativity we have, and how much of it is better explained by highly constrained randomized search and optimization.

> It can never relate to you on a personal level

Well, sure. Even if/once we reach AGI, it's going to be a highly alien creature.

> because it never experiences the world.

Hard to put this on a rigorous setting.

> It has no conception of what the world is.

It has imperfect models of the world it is presented. So do we!

> At least a dog gets hungry.

I don't think "gets hungry" is a very meaningful way to put this. But, yes: higher living beings act with agency in their environment (and most deep learning AIs we build don't, instead having rigorous steps of interaction not forming any memory of the interaction) and have mechanisms to seek novelty in those interactions. I don't view these as impossible barriers to leap over.


I agree GPT isn't grounded and it is a problem, but that's a weird point to argue against AlphaCode. AlphaCode is grounded by actual code execution: its coding experience is no less real than people's.

AlphaGo is grounded because it experienced Go, and has a very good conception of what Go is. I similarly expect OpenAI's formal math effort to succeed. Doing math (e.g. choosing a problem and posing a conjecture) benefits from real world experience, but proving a theorem really doesn't. Writing a proof does, but it's a separate problem.

I think software engineering requires real world experience, but competitive programming probably doesn't.


If anything is overhyped in AI it's deep reinforcement learning and its achievements in video games or the millionth GAN that can generate some image. But when it solves a big scientific problem that was considered a decade away, that's pretty magical.


I believe modelling the space of images deserves a bit more appreciation, and the approach is so unexpected - the generator never gets to see a real image.


The GANs are backdooring their way into really interesting outcomes, though. They're fantastic for compression: You compress the hell out of an input image or audio, then use the compressed features as conditioning for the GAN. This works great for super-resolution on images and speech compression.

eg, for speech compression: https://arxiv.org/abs/2107.03312
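
Very roughly, the pipeline is: an encoder squeezes the signal into a small latent, you transmit that, and a generator trained with a GAN loss reconstructs something perceptually plausible on the other end. A toy PyTorch sketch of the shape of that pipeline (placeholder modules and sizes, not the actual SoundStream architecture from the paper):

    import torch
    import torch.nn as nn

    # Placeholder encoder/generator pair showing where "compressed features as
    # conditioning" sits. Sizes and modules are made up for illustration; the
    # real system adds quantization and adversarial training.

    class Encoder(nn.Module):
        def __init__(self):
            super().__init__()
            # Downsample the waveform 8x into a short feature sequence.
            self.net = nn.Conv1d(1, 64, kernel_size=16, stride=8)

        def forward(self, wav):
            return self.net(wav)  # the compact features you would quantize and transmit

    class Generator(nn.Module):
        def __init__(self):
            super().__init__()
            # Upsample the features back to a waveform; trained adversarially,
            # this is the part that fills in perceptually plausible detail.
            self.net = nn.ConvTranspose1d(64, 1, kernel_size=16, stride=8)

        def forward(self, features):
            return self.net(features)

    wav = torch.randn(1, 1, 8000)           # half a second of fake 16 kHz audio
    features = Encoder()(wav)               # small conditioning tensor
    reconstruction = Generator()(features)  # decoded audio, same length as the input

All the interesting work is in the adversarial training and quantization, which this sketch skips; it's only meant to show where the conditioning features sit in the pipeline.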


Most of the stuff featured on https://youtube.com/c/K%C3%A1rolyZsolnai looks pretty magical to me


It’s like a lot of the crypto stuff. The research is really cool, and making real progress toward new capabilities. Simultaneously there are a lot of people and companies seizing on that work to promote products of questionable quality, or to make claims of universal applicability (and concomitant doom) that they can’t defend. Paying attention in this sort of ecosystem basically requires one to be skeptical of everything.


> but it feels like AI is an especially strong instance of this. It seems like a lot of folks just want to pooh pooh any possible outcome. I'm not sure why this is.

I presume the amount of hype AI research has been getting for the past 4 decades might be at least part of the reason. I also think AI is terribly named. We are assigning “intelligence” to basically a statistical inference model before philosophers and psychologists have even figured out what “intelligence” is (at least in a non-racist way).

I know that both the quality and (especially) the quantity of inference done with machine learning algorithms are really impressive indeed. But when people advocate AI research as a step towards some "artificial general intelligence", people (rightly) will raise questions and start pooh-poohing you.


It doesn't matter if we call it "intelligent" or "general", the real test is if it is useful. A rose by any other name...


The naming does indeed matter here. The concept of general intelligence is filled with pseudo-science and has a history of racism (see The Mismeasure of Man by Stephen Jay Gould). Non-linear statistical inference with very large matrices could be called intelligence, as it is very useful, but it is by no means the same type of intelligence we ascribe to humans (or even dogs for that matter).

If your plant actually looks like a moss you probably shouldn’t call it a rose. (Even though your moss is actually quite amazing).


It's because you have to pay really close attention to tell if it's real or hype. It's really easy to make a cool demo in machine learning, cherrypick outputs, etc.


I suspect it is more visceral: AGI would demolish human exceptionalism, and also the idea of the human mind as the last refuge of vitalism.


Well, in this case I think many programmers just see it as a direct assault on their identity.




