
I’m not sure how to word my excitement about the progress we’ve seen in AI research over the last few years. If you haven’t read it, give Tim Urban’s classic piece a slice of your attention: https://waitbutwhy.com/2015/01/artificial-intelligence-revol...

It’s a very entertaining read from a couple of years ago (I think I read it in 2017), and man, have things happened in the field since then. It feels like things are truly starting to come together. Transformers, and the incremental progress since, look like a very, very promising avenue. I deeply wonder in which areas this will shape the future more than we can anticipate beforehand.




Not you specifically, but I honestly don't understand how positive many in this community (or really anyone at all) can be about this news. Tim Urban's article explicitly touches on the risk of human extinction, not to mention all the smaller-scale risks from weaponized AI. Have we made any progress on preventing this? Or is HN mostly happy with deprecating humanity because our replacement has more teraflops?

Even the best-case scenario that some are describing, of uploading ourselves into some kind of post-singularity supercomputer in the hopes of being conscious there, doesn't seem very far from plain extinction.


I think the best-case scenario is that 'we' become something different than we are right now. The natural tendency of life (on the local scale) is toward greater information density. Chemical reactions beget self-replicating molecules beget simple organisms beget complex organisms beget social groups beget tribes beget city-states beget nations beget world communities. Each one of these transitions looks like the death of the previous thing, when in actuality the previous thing is still there, just as part of a new whole. I suspect we will start with natural people and transition to some combination of people whose consciousness exists, at least partially, outside the boundaries of their skulls, people who are mostly information on a computing substrate outside of a human body, and 'people' who no longer have much connection with the original term.

And that's OK. We are one step toward the universe understanding itself, but we certainly aren't the final step.


Let's be real.

Not long from now all creative and productive work will be done by machines.

Humans will be consumers. Why learn a skill when it can all be automated?

This will eliminate what little meaning remains in our modern lives.

Then what? I don't know, who cares?


>Then what?

Growing tomatoes is less efficient than buying them, regardless of your metric. If you just want really cleanly grown tomatoes, you can buy those. If you want cheap tomatoes, you can buy those. If you want big tomatoes, you can buy those.

And yet individual people still grow tomatoes. Zillions of them. Why? Because we are inherently over-evolved apes who like sweet juicy fruits. The key to being a successful human in the post-scarcity AI overlord age is to embrace your inner ape and just do what makes you happy, no matter how simple it is.

The real insight out of all this is that the above advice is also valid even if there are no AI overlords.


Humans are great at making up purpose where there is absolutely none, and indeed this is a helpful mechanism for dealing with post-scarcity.

The philosophical problem that I see with the "AI overlord age" (although not directly related to AI) is that we'll then have the technology to change the inherent human desires you speak of, and at that point growing tomatoes just seems like a very inefficient way of satisfying a reward function that we can change to something simpler.

Maybe we wouldn't do it precisely because it'd dissolve the very notion of purpose? But it does feel to me like destroying (beating?) the game we're playing when there is no other game out there.

(Anyway, this is obviously a much better problem to face than weaponized use of a superintelligence!)


Any game you play has cheat codes. Do you use them? If not, why not?

In a post-scarcity world we get access to all the cheat codes. I suspect there will be many people who use them and as a result run into the inevitable ennui that comes with basing your sense of purpose on competing for finite resources in a world where those resources are basically free.

There will also be many people who choose to set their own constraints to provide some 'impedance' in their personal circuit. I suspect there will also be many people who will simply be happy trying to earn the only resource that cannot ever be infinite: social capital. We'll see a world where influencers are god-kings and your social credit score is basically the only thing that matters, because everything else is freely available.


Does social status even matter if you can plug yourself into a matrix where you are the god-king?


I feel exactly the opposite. AI has not yet posed any significant threats to humanity other than issues with the way people choose to use it (tracking citizens, violating privacy, etc.).

So far, we have task-driven AI/ML. It solves a problem you tell it to solve. Then you, as the engineer, need to make sure it solves the problem correctly enough for you. So it really still seems like it would be a human failing if something went wrong.

So I'm wondering why there is so much concern that AI is going to destroy humanity. Is the theoretical AI that's going to do this even going to have the actuators to do so?

Philosophically, I don't have an issue with the debate, but the "AI will destroy the world" side doesn't seem to have any tangible evidence. It seems to me that people take it as a given that AI could eliminate all of humanity, and they do not support that argument in the least. From my perspective, it appears to be fearmongering because people watched and believed Terminator. It appears uniquely out of touch.


Agreed. People think of the best case scenario without seriously considering everything that can go wrong. If we stay on this path the most likely outcome is human extinction. Full stop


Says a random internet post. It takes a little more evidence or argument than hyperbole to be convincing.


Mechanized factories failed to kill humanity two hundred years ago, and the Luddite movement against them seems comical today. What makes you think extinction is most likely?


This path will indeed lead to human extinction, but the path is climate change. AI is one of the biggest last hopes for reversing it. From my perspective, if it does kill us all, well, it's most likely still a less painful death.


> Or is HN mostly happy with deprecating humanity because our replacement has more teraflops?

If we manage to make a 'better' replacement for ourselves, is it actually a bad thing? Our cousins on the hominid family tree are all extinct, yet we don't consider that a mistake. AI made by us could well make us extinct. Is that a bad thing?


Your comment summarizes what I worry might be a more widespread opinion than I expected. If you think that human extinction is a fair price to pay for creating a supercomputer, then our value systems are so incompatible that I really don't know what to say.

I guess I wouldn't have been so angry about any of this before I had children, but now I'm very much in favor of prolonged human existence.


> I'm very much in favor of prolonged human existence.

Serious question - why?


What are your axioms on what’s important, if not the continued existence of the human race?

edit: I’m genuinely intrigued


I suppose the same axioms of every ape that's ever existed (and really the only axioms that exist). My personal survival, my comfort, my safety, accumulation of resources to survive the lean times (even if there are no lean times), stimulation of my personal interests, and the same for my immediate 'tribe'. Since I have a slightly more developed cerebral cortex I can abstract that 'tribe' to include more than 10 or 12 people, which judging by your post you can too. And fortunate for us, because that little abstraction let us get past smashing each other with rocks, mostly.

I think the only difference between our outlooks is I don't think there's any reason that my 'tribe' shouldn't include non-biological intelligence. Why not shift your priorities to the expansion of general intelligence?


Why should general intelligence continue to survive? You are placing a human value on continued existence.


We have Neanderthal and Denisovan DNA (and DNA from two more besides). Our cousins are not exactly extinct; we are a blend of them. Sure, no pure strains exist, but we are not a pure strain either!


> If we manage to make a 'better' replacement for ourselves, is it actually a bad thing?

It's bad for all the humans alive at the time. Do you want to be replaced and have your life cut short? For that matter, why should something better replace us instead of coexisting with us? We don't think killing off all other animals would be a good thing.

> Our cousins on the hominid family tree are all extinct, yet we don't consider that a mistake.

It's just how evolution played out. But if there were another hominid still alive alongside us, advocating for its extinction because we're a bit smarter would be considered genocidal and deeply wrong.


>happy with deprecating humanity because our replacement has more teraflops?

For me, immortality is a bigger thing than the teraflops. Also, I don't think regular humanity would be gotten rid of; it would continue in parallel.


Excitement alone won't help us.

We should ask our compute overlords to perform their experiments in as open an environment as possible, simply because we, the public, should have the power to oversee the exact direction this AI revolution is taking us.

If you think about it, AI safetyism is a red herring compared to a very real scenario of powerful AGIs working safely as intended, just not in our common interest.

The mindset of AGI owners seems like a more pressing safety concern than the hypothetical unsafety of a pile of tensors knit together via gradient descent over internet pictures.


That Tim Urban piece is great. It's also an interesting time capsule in terms of which AI problems were and were not considered hard in 2015 (when the post was written). From the post:

> Build a computer that can multiply two ten-digit numbers in a split second—incredibly easy. Build one that can look at a dog and answer whether it’s a dog or a cat—spectacularly difficult. Make AI that can beat any human in chess? Done. Make one that can read a paragraph from a six-year-old’s picture book and not just recognize the words but understand the meaning of them? Google is currently spending billions of dollars trying to do it. Hard things—like calculus, financial market strategy, and language translation—are mind-numbingly easy for a computer, while easy things—like vision, motion, movement, and perception—are insanely hard for it.

The children's picture book problem is solved; those billions of dollars were well-spent after all. (See, e.g., DeepMind's recent Flamingo model [1].) We can do whatever we want in vision, more or less [2]. Motion and movement might be the least developed area, but it's still made major progress; we have robotic parkour [3] and physical Rubik's cube solvers [4], and we can tell a robot to follow simple domestic instructions [5]. And Perceiver (again from DeepMind [6]) took a big chunk out of the perception problem.

Getting a computer to carry on a conversation [7], let alone drawing art on par with human professionals [8], wasn't even mentioned as an example, so laughably out of reach did those seem in the heathen dark ages of... 2015.

And as for recognizing a cat or a dog — that's a problem so trivial today that it isn't even worth using as the very first example in an introductory AI course. [9]

If someone rewrote this post today, I wonder what sorts of things would go into the "hard for a computer" bucket? And how many of those would be left standing in 2029?

[1] https://arxiv.org/abs/2204.14198

[2] https://arxiv.org/abs/2004.10934

[3] https://www.youtube.com/watch?v=tF4DML7FIWk

[4] https://openai.com/blog/solving-rubiks-cube/

[5] https://say-can.github.io/

[6] https://www.deepmind.com/open-source/perceiver-io

[7] https://arxiv.org/abs/2201.08239v2

[8] https://openai.com/dall-e-2/

[9] https://www.fast.ai/


> And as for recognizing a cat or a dog — that's a problem so trivial today

Last time I checked (though it has been a long while, and I could not check thoroughly owing to other commitments), "recognizing" there meant "consistently successful guessing", not "critically defining". It may be that the problem has been solved in the last few years, I cannot exclude it, but in a brief news-checking exercise I have not seen the signals that a solution would require.

The real deal is far from trivial.

A clock can tell the time but does not know it.


That human intelligence might just be token prediction evolving from successive small bit-width float matrix transformations is depressing to me.
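(If it helps to see what that reduction looks like concretely, here is a deliberately toy, hypothetical numpy sketch: embed a couple of context tokens, push them through one float matrix transformation, and read off a next-token distribution. The vocabulary, shapes, and the mean-pooled "context" are all made up for illustration; real models stack many such layers with attention in between.)

    import numpy as np

    rng = np.random.default_rng(0)
    vocab = ["the", "cat", "sat", "on", "mat", "."]  # toy vocabulary
    d_model = 8                                      # tiny hidden width

    embed = rng.normal(size=(len(vocab), d_model)).astype(np.float32)   # token -> vector
    W_hidden = rng.normal(size=(d_model, d_model)).astype(np.float32)   # one "transformation"
    W_out = rng.normal(size=(d_model, len(vocab))).astype(np.float32)   # vector -> logits

    def next_token_probs(token_ids):
        # Crude context vector: mean of the embedded tokens
        # (a real transformer uses attention here, stacked many layers deep).
        h = embed[token_ids].mean(axis=0)
        h = np.tanh(h @ W_hidden)          # a small-width float matrix transformation
        logits = h @ W_out
        p = np.exp(logits - logits.max())  # softmax -> probability of each next token
        return p / p.sum()

    print(dict(zip(vocab, next_token_probs([vocab.index("the"), vocab.index("cat")]).round(3))))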


That's a poor usage of "just": discovering that "X is just Y" doesn't diminish X; it tells us that Y is a much more complex and amazing topic than we might have previously thought.

For example: "Life is just chemistry", "Earth is just a pile of atoms", "Behaviours are just Turing Machines", etc.


> That human intelligence might just be token prediction

I mean have you heard the word salad that comes out of so many people's mouths? (Including myself, admittedly)


Eating salad is good for your health. Not only word salad, but green salad and egg salad.


That trains you in seeing what intelligence is not, not the opposite!


Wait till you find out all of physics is just linear operators & complex numbers


Unless nature is mathematical, the linear operators & complex numbers are just useful tools for making predictive models about nature. The map isn't the territory.


It’s most fascinating (or very obvious) - look at Conway’s Game of Life, then scale it up - a lot. Unlimited complexity can arise from very simple rules and initial conditions.

Now consciousness on the other hand is unfathomable and (in its finitude) extremely depressing for me.


Dear god I hope that we are using something more complicated than sampling with top_p, top_k, and a set temperature as our decoder!
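(For anyone who hasn't met those knobs: a minimal, hypothetical numpy sketch of that decoding recipe, i.e. temperature scaling plus top-k and top-p (nucleus) filtering over next-token logits, then sampling. The function name and default values are invented for the example.)

    import numpy as np

    def sample_next_token(logits, temperature=0.8, top_k=50, top_p=0.95, rng=None):
        rng = rng or np.random.default_rng()
        logits = np.asarray(logits, dtype=np.float64) / temperature  # temperature scaling

        # Top-k: keep only the k highest-scoring tokens.
        if top_k is not None and top_k < logits.size:
            cutoff = np.sort(logits)[-top_k]
            logits = np.where(logits < cutoff, -np.inf, logits)

        # Softmax to probabilities.
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()

        # Top-p (nucleus): keep the smallest prefix of tokens whose mass reaches top_p.
        order = np.argsort(probs)[::-1]
        keep = order[: np.searchsorted(np.cumsum(probs[order]), top_p) + 1]
        nucleus = np.zeros_like(probs)
        nucleus[keep] = probs[keep]
        nucleus /= nucleus.sum()

        return rng.choice(probs.size, p=nucleus)

    print(sample_next_token([2.0, 1.5, 0.3, -1.0, -2.0, 0.1]))  # index into a fake 6-token vocab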


Stop being depressed, because it simply, clearly, certainly is not. I just wrote a few paragraphs about it in a post immediately above. This confirms that this phase is fooling some people on the basics.


Is that what biologists or neuroscientists think the nervous system is actually doing?



