Hacker News

That cycle has gone on since the start of the Industrial Revolution. Jobs are destroyed, people complain and protest, new jobs are created and the total economic pie and the absolute size of the average person’s slice increases.

A valid question is: are there inventions for which this would not be true? I think yes for general AI, but also yes for people who are unable to move from a lost job to any of the new jobs created, whether through lack of education, unwillingness to reinvent themselves, or inability to relocate to where the new jobs are. Innovation can definitely create winners and losers. That’s bad for the losers, but not necessarily for society as a whole. Unless so many losers are created that they rise up and overthrow the system. That’s a real long tail risk if the pace of change sufficiently outpaces our ability to adapt to it.



You're right, but the phrase "people complain" elides a lot and shows a callous lack of empathy. People complain because they lose their jobs, their homes, their standard of living, their future prospects, their ability to feed their families, their relationships, their social status, their healthcare benefits and therefore their healthcare, their mental health, and in some cases their lives (suicide is not unusual for people who lose all of the above). For every Priya who gets a new job as an AI wrangler, there will be many who do not.


How that is handled really comes down to how your society has agreed to establish a social safety net.

In Northern Europe it’s handled quite well. In the U.S. it’s handled with a “callous lack of empathy” as you phrased it.

My point is that disruption is the engine of progress, but it also causes temporary pain (though "temporary" might exceed a human lifetime). It’s the wrong reaction to want to stop or slow progress. You can actually argue that through the lens of game theory and the fact that we have multiple competing human societies. The right thing to do is ensure your society doesn’t leave the losers of that process behind.


Sure, but many of the "progress is inevitable" people are also often the "get your stinking government hands off my hard-earned money" people. Increasing taxation of the beneficiaries of AI technology to provide a safety net for the losers is one way of dealing with it, but I'm not sure it's politically achievable in much of the world in a way that will instill confidence in the people who face the loss of their livelihoods.


"progress is inevitable": It's not actually. Progress is only made if people take the risk and effort to advance progress. That does require rewarding them appropriately for the risk taken. I'm not saying we get that balance exactly right currently, but it is necessary.

In general, taxing the winners (the wealthy) is well tolerated politically, but it also requires a government that's somewhat fiscally responsible and not spending $800B a year on its military instead of social programs. The US hasn't had a fiscally responsible government since Clinton, and the chickens are currently coming home to roost in the form of inflation and loss of confidence in the US dollar as the reserve currency.


Disruptions in the past have almost always been good for society overall, but there have also always been significant localized negative effects. For example, a farrier may have been thrown out of work and had to switch to unskilled labor, but their sons and daughters were better off.

The issue is that as change comes faster and faster, a higher proportion of people fall into the "disrupted" category.


I agree, I said as much about winners and losers being created.

It’s still a good thing for society - the alternative is halting or slowing progress.


But what if >50% of people are in the "disrupted" category?


Then you get a revolution and the whole system is torn down and rebuilt in a different form. That's highly undesirable, and as I mentioned, is a real tail risk to consider.


Progress towards what?


Don't be pedantic. From the Oxford dictionary: "advance or development toward a better, more complete, or more modern condition"


I'm not being pedantic. What the definition of "better" is in this context is everything.


You are being pedantic. You also came by with a three word troll comment. Why don't you make your argument properly and then I'll spend the effort to address it.


It's important to consider the goals and motivations behind the development of AGI. Who is in control of this technology and how will it be used? The current power structure in our society is not interested in creating a better future for everyone. There is a growing concern that the development of AGI could be driven by the interests of a small group of people with a lot of power and money, rather than being used to create a better future for all people.

In this context, I think it's important to ask "progress towards what?" and to define what we mean by "better". We need to ensure that the development of AGI is done with a goal of benefiting everyone, not just the wealthy and powerful. This requires a shift in our political and economic systems, so that power is more evenly distributed and the needs of all people are taken into account.

I'm not saying that AGI is inherently bad, but we need to be cautious and intentional about its development and use. It's possible that AGI could be used to further entrench the power and control of the wealthy elite, rather than to create a better future for all people. To avoid this, we need to work towards creating a more democratic society where the benefits of technological progress are shared by all.


Ok, I understand your point now, and I agree with it for the most part. AGI really is a singularity. We can't see past it, and nobody knows what a society with AGI would look like, or even if there is room for humans in it. I think not. AGI will likely be our last invention as a species, and the next step in the natural evolution of life will be intelligent design (initiated by us!) Oh the irony.

So you're right that running full speed towards AGI is incredibly dangerous, and while it might still mean progress for life, it might not be progress for humanity. AGI may be one of the few technologies that are not progress. I'd argue nuclear fission so far has not been progress either, but that story has not yet fully played out. You could also think of other hypothetical and current technologies where the risks far outweigh the rewards. Imagine we discover a way to unlock an energy source so vast that a small group could unleash it and super-heat the entire atmosphere of the planet, killing nearly all life. There's no law of physics that says that's impossible, and once discovered, there's no way to defend against some suicidal nutjobs doing exactly that. That's one of the proposed solutions to the Fermi Paradox.

But AGI may also be our destiny. There may be no way to avoid it. Even if we could agree in the US to stop advancing AI, other countries will not agree, so it continues anyway and the US just loses control over it. You can replace the US above with any country and get the same game-theoretic outcome. So competition between groups may be an inherently unstable system that ends in self-destruction. That's another proposed solution to the Fermi Paradox.
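The "even if we stop, they won't" argument can be sketched as a two-by-two game. The payoff numbers below are illustrative assumptions, not measurements; the point is only the structure, where advancing dominates halting for each player regardless of what the other does:

```python
# Hedged sketch of the AI race as a 2x2 game between countries A and B.
# Payoff values are assumed for illustration only.
# payoffs[(a_move, b_move)] = (payoff to A, payoff to B)
payoffs = {
    ("halt",    "halt"):    (3, 3),  # coordinated pause: safest joint outcome
    ("halt",    "advance"): (0, 4),  # A halts, B races ahead alone
    ("advance", "halt"):    (4, 0),  # A races ahead alone
    ("advance", "advance"): (1, 1),  # full-speed race: risky for everyone
}

def best_response(opponent_move):
    """A's best reply, holding B's move fixed."""
    return max(["halt", "advance"],
               key=lambda a: payoffs[(a, opponent_move)][0])

# Whatever B does, A scores higher by advancing, so "advance" dominates.
print(best_response("halt"))     # advance
print(best_response("advance"))  # advance
# By symmetry both advance: a Prisoner's-Dilemma-style equilibrium,
# even though (halt, halt) would leave both players better off.
```

Under these assumed payoffs, (advance, advance) is the unique Nash equilibrium even though mutual halting is jointly better, which is the instability the comment describes.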

Then there's the detail that current LLMs like ChatGPT are not AGI, and probably don't lead there. They're a fancy parlour trick, but not really intelligence. So progress on LLMs may or may not bring us closer to AGI, nobody really knows. Stopping work on it now would halt progress, but only for those groups foolish enough to do so.

I don't yet see a path to being cautious and intentional about the development and use of AI. The genie is out of the bottle and can't be put back in. In the same way, nuclear fission can't be undone, although that's a bad analogy since fission is much easier to control. Maybe we figure out a way to do that in future, but AI development is just the development and spread of information, and that's impossible to control.

What I think we can do, as you mentioned, is modify our societal and economic systems to be more fair and to not leave behind so many people whose skills have been made obsolete.


Also, I don't think you would need a human-like level of consciousness for a general problem solving device, i.e. general intelligence. We could end up with a Deus Ex Machina situation where the general problem solving device appears to be human, even exhibiting heartstring-pulling capabilities, while still having an end goal as absurd as going to a specific crosswalk on a Manhattan street and standing there like a useless toaster.

That isn't the singularity, but it sure as hell is a general problem solving device i.e. AGI.


I think it's generally useful AI, but not AGI as people discuss the term. AGI would need to be sentient and self-aware. It would need to be alive and intelligent by any definition of those terms. ChatGPT is generally useful, but still very far from alive.

Anything short of that could mean large disruptions and societal changes, sure, but not a threat to humanity. Just technological progress as we know and love.


I don't understand why it's called the singularity. Shouldn't it be called the event horizon?


I guess the AGI is the singularity which causes the event horizon beyond which we can't see. But now we really are getting pedantic ;)



