Hacker News
AI Will Make Our Society Even More Unequal, Economists Warn (gizmodo.com)
61 points by adrian_mrd on May 1, 2023 | hide | past | favorite | 77 comments


It’s not AI. It’s us.

We can choose to make it more equal. We can choose to even things, and to work less.

It’s us using the AI to do things.

Let’s stop pretending like our hands are tied. We can build a better world if we want to. Don’t give me excuses about how everyone else will do something so then you have to do the same.

Take responsibility for your own actions.

Don’t fire people that you can replace with AI. Instead, re-train them to use AI to make your business better and for everyone to work less.

Be creative, be visionary, be disruptive and be compassionate. Care about people over money.

Mother Teresa said: "If you want to change the world, go home and love your family."

Maybe this time, if you want to change the world, don’t replace people with AI. Do replace their work, but keep the people and find them something more human to work on.


Yes, it is us, but individual action is not the solution. Mother Teresa helped a lot of people; you'll notice she absolutely did not change the world.

What we need is universal healthcare and universal basic income. For everybody to recognize that every human being deserves the resources to live and share in the wealth that AI has created.


Individual action is what makes up organizational action.

We all need to believe we can make an impact, and that we will not allow the powers that be to use AI to make our lives more unequal.

We are at a huge potential inflection point for humanity.

We can choose to reduce our collective work load for the benefit of everyone.

Let’s not squander the opportunity by making excuses for why we can’t do it.

We can make the world more equal and have people work less. We have the technology and resources to make that happen.


> Do replace their work, but keep the people and find them something more human to work on.

What's more human than writing, or making art? The sad reality is if AI is as disruptive as some claim and a company follows your advice:

a) The employees will end up doing the menial tasks that AI still can't.

b) Their competitors will either buy them up or drive them out of business by being more ruthless about how they apply automation.

There's no "bottom-up" market solution that will work in the long term here. In my opinion the only thing that can lead to a more equal distribution of resources has to come in the form of more redistributive taxation and more stringent labor laws - i.e. "top down" from the government. Unfortunately, legislators seem to be moving in the opposite direction on some of these fronts. I think more social unrest is inevitable until things are forced to change.


> Take responsibility for your actions

> re-train them to use AI

Why can't they take responsibility for their own actions, and train themselves to use AI?

Everyone knows about the disruptive potential of AI now, it's not a secret anymore. Some people will take the initiative to master the new tech, and they will replace the people who don't. It's their choice.

Instead of training luddites to use technology they don't understand, it's better to hire people who already see the potential of AI and are making efforts to capitalize on it. Fill your company with people who share your vision for a better, more automated world, and fire those who don't. Unregulated market forces are the best way to force AI adoption, and people will quickly learn AI once it becomes a requirement for survival.


> Why can't they take responsibility for their own actions, and train themselves to use AI?

Maybe they can. But what are you going to do from your own position?

Can you help those people? Will you?

You might not feel responsible or accountable. But if everyone feels that way, then no one will do anything.

If you just read the above, thought that there is something you can do, but still won’t do it, then you made that decision.

Don’t blame anyone else for it. Don’t blame “unregulated market forces” for it.

You are free to do as you please. But don’t use other people’s behavior as an excuse for how you choose to act.

This applies to everything in our lives, not just our reactions to AI.

Love to you on your path.


Sometimes caring for people is ensuring they find better job opportunities elsewhere. Leave glowing reviews, give them warm leads at other companies, give good severance packages.

But you do not have to keep them hired. You are running a business, you hire based on your needs, not theirs.


> Sometimes caring for people is ensuring they find better job opportunities elsewhere. Leave glowing reviews, give them warm leads at other companies, give good severance packages.

> But you do not have to keep them hired. You are running a business, you hire based on your needs, not theirs.

Caring for someone isn't kicking them out the door with a kind word. That's just assuaging your own guilt and little else.

If you actually "cared" for those people, you'd keep them on the payroll until they actually found "better job opportunities elsewhere."


> If you actually "cared" for those people, you'd keep them on the payroll until they actually found "better job opportunities elsewhere."

Unfortunately, there are strong systemic forces working against this approach. This is why when executives say that the team or company is "like a family" they might mean it, but they can't actually exhibit the behaviours of a family when times get tough. Being too nice can risk the well-being of the overall company, which could suddenly put everyone out of a job.

The answer society has come up with so far is funding and maintaining institutions that can run at a loss without going out of business. But this seems increasingly unpopular in some circles.


“Strong systemic forces” is code for “I’m not willing to put in the time or effort to figure this out, because I’d rather excuse myself out of it”.

Stop making excuses. You as a person can make a difference if you want to. If you won’t, then that’s your own decision, don’t blame anyone else for it.

Don’t point the finger at “society”, what happens when you point the finger at yourself? Don’t deflect.

https://news.ycombinator.com/item?id=35778052


Letting employees cling to jobs where they are no longer needed is unhealthy for everyone, and it deprives new people, whose talents may be in demand, of the resources to join the company.


> Letting employees cling to jobs where they are no longer needed is unhealthy for everyone...

Quit lying to yourself. It's certainly not "unhealthy for everyone," for instance it's probably healthier for laid off employees in many circumstances.

"It's better for everyone" often translates to "it's better for me and I don't really care about you."


I don't know... whereas the internet enabled people in poor countries to offer their labor to the international market at competitive rates, AI threatens to undercut their price advantage.


The system doesn’t run on its own. You are talking about people making decisions about price or competitiveness.

Luxury things are expensive, yet people buy them anyway. In fact they buy them because they are expensive.

Let’s not pretend that price automatically dictates people’s actions.

We choose.

Let’s choose to use AI to make the world a better place.

Let’s not use AI to create more inequality.

It’s a choice that people will make.

Let’s choose better this time.


Stop voting for capitalists who only care about shareholders and treat the workforce as disposable, then. Tax the damn rich and use that money for public spending. It's not that hard.


AI will be great for big businesses, alright for small businesses, and catastrophic for many employees. I imagine a gig economy where self-employed workers need to leverage AI to be effective, and it becoming exhausting to keep up. I hope I’m wrong.


I imagine it will be great for small and medium sized business as well. Generative AI is already making it easier to bootstrap yourself in side projects, and that impact will just get magnified as a project or company grows. Eventually an AI will be good enough to provide strategic guidance.

Your second point is like saying: 'I imagine a gig economy where taxi drivers will need to leverage a car to be effective.' If someone offers you a hammer to build a cabin, why would you say no?


Disclaimer: I am pretty pro AI.

To your point about it being good for small and medium sized businesses — let’s assume it gets rid of a significant number of workers. Who would be the consumers of these products, other small/medium sized businesses that also mostly have AI producing work? Obviously it’s a simplified question, but I’m curious how we’ll handle the removal of the “useless workforce” when AI does an objectively better job.


Similarly to how we removed a lot of workforce from agriculture in the last century. The breadth and scope of services offered increased ten-fold and workers learned to add value in different ways in these new areas.


The problem is that subsistence and low-yield agriculture families weren't significant consumers in the first place. Taking them out of the economy and pushing their children into industry increased demand for goods and services as they became dependent upon buying food, rent and goods they previously weren't buying.

Gutting a ton of jobs across many sectors in the economy won't change demand for food or rent, but will likely hurt every service sector and slow the economy down overall as people without jobs push out purchases they would have made otherwise.


Fair, I guess, I have limited knowledge in this space, and it’s hard to imagine where and how these people would switch to.


That's how it's always been though.

The previous generations could never imagine the kinds of jobs we have today.

Now, I am not saying that this is guaranteed to happen again, and that new jobs will be created and so on. But I think anyone claiming to know that "this time it will be different" and that "we won't create new jobs that we can't imagine today" could be just as wrong about that idea too.


If we are going to make assumptions, let’s say the advent of new technology such as AI opens up new types of jobs (which will of course need new training). The so-called useless workforce would need to be willing to retrain, but the opportunities should be there.

As maybe a last resort, regulation could play a factor in how many AI resources a company can use, which will mimic scarcity.

You act as if humans haven’t been solving existential problems for their whole existence. It’s not like 50% of the workforce will get laid off and there will be no response to that.


> AI will be great for big businesses, alright for small businesses, and catastrophic for many employees.

Who's going to buy from these businesses if people don't have jobs?


> Who's going to buy from these businesses if people don't have jobs?

Other businesses.

If the "people don't have jobs" due to AI, consumer goods companies will go out of business, and society's resources will get redirected to non-consumer businesses and the passion projects of the ownership class.


Consumption is the bottom line of the economy. It isn't possible to substitute consumer goods businesses with non-consumer businesses on a larger scale, since the latter also rest on consumption. Hence, "who's going to buy in case a significant portion lose their jobs" is a very good question.


> Other businesses.

Outside Soviet Russia, there has to be an end consumer in there somewhere. There isn't such a thing as a truly "non-consumer business."


> Outside Soviet Russia, there has to be an end consumer in there somewhere. There isn't such a thing as a truly "non-consumer business."

And I answered that. The ultimate consumers will be a group of wealthy people who control the productive resources using AI and use them to pursue their personal goals. There will be some B2B exchanges between those people, and perhaps a smallish cadre of employees whose labor is still useful to the wealthy, but most commerce as we know it will grind to a halt because the vast majority of workers' labor won't be valuable to those who could pay them.


With a significant portion of mass consumption missing, wealth would start to erode. Stable inequality could grow up to a certain point, but not without adverse effects on investments. You can only sustain wealth without consumers in an absolutely totalitarian society, and even there it would lead to a catastrophic societal collapse sooner or later.

I think this is a blind spot for a lot of AI experts. That sudden bump in efficiency might ruin large parts of the markets businesses are built upon.


Other businesses in an Ouroboros-type scenario.


How much do you use Google Search throughout your day? Do you consider that exhausting as well? It seems only those that opt out of using ChatGPT or Copilot will be left to play catch-up and get exhausted.


Google search replaced, or dramatically reduced, an entire industry of skilled labor around research and library sciences. There's a great old Hepburn/Tracy movie, Desk Set, that is a playful treatment of this.

The concern is always that there is a finite amount of work[0] to go around in any given iteration of our current style of society. New kinds of work do get invented from time to time. Automation eliminates kinds of work. The two are rarely in sync with each other, and the difference is usually measured in human privation and suffering.

[0] Work being defined as "something you can trade for food and healthcare."


I agree with your premise, but ultimately it's the responsibility of governments to manage how society distributes what is produced. In the US at least, it seems most of the tension here comes from the fact that we've decided that if you are not doing "work", then you don't "deserve" food, shelter, or healthcare. Trying to stop automation always seems like a fool's errand because there is simply too much incentive to automate.


Yes, I agree completely.

Though I can imagine a counter argument from a technologist/futurist positing that the incentive to automate in part comes from that merciless system of survival.

The modern Luddite, I think, doesn't necessarily "hate" looms, or their inventors. They just don't have any faith that those necessary compensations you describe will ever happen, or at least not quickly enough to save them. Perhaps the government is convinced by exactly that argument above. Or at very least too apathetic (or financially tangled) to fight it.

We all act along the axes where we can affect something, effect something. Smashing looms is a fool's errand, but sometimes that's all the power you feel you have.


> The modern Luddite, I think, doesn't necessarily "hate" looms, or their inventors. They just don't have any faith that those necessary compensations you describe will ever happen, or at least not quickly enough to save them. Perhaps the government is convinced by exactly that argument above. Or at very least too apathetic (or financially tangled) to fight it.

Agree

> We all act along the axes where we can affect something, effect something. Smashing looms is a fool's errand, but sometimes that's all the power you feel you have.

However, I think what's fascinating about this is that it requires believing you can't change your skill set to something else. Sometimes I don't think it's as simple as feeling empowered; there's also fear and anxiety about change.


> How much do you use Google Search throughout your day? Do you consider that exhausting as well?

Oh my god, yes. The way it makes everything possible means you never get to feel like you actually know WTF you're doing, because the edge is always expanding. Bring back the days of having a small shelf of 5-10 books that contained 99% of what you needed to know to do your programming/sysadmin/whatever job.

Yes, it's absolutely exhausting. If you've never experienced or even seen what came before, it may not seem so just because you don't have a point of comparison.


There was a certain joy in buying a Borland product and getting a 25 cm cube of a box that contained a detailed reference manual, a getting started guide, a quick reference booklet, and a bunch of diskettes or (later) CDs. Flipping through the reference manual and discovering new things just as you needed them was so much fun.

I was less productive than I am now, but making something was more satisfying.


I don’t know; sometimes I was more productive then, because you used what you had.

Today... first one has to get up to speed on whatever the F changed this week.


> Bring back the days of having a small shelf of 5-10 books that contained 99% of what you needed

Before the internet could pose a challenge to my bookshelf, I had over fifty that I referred to fairly regularly (my office library had about 100, my total library several hundred), ~5 of those covered 80% of what I needed for a given project, but which 5 those were changed from project to project.


Are you saying that using Google to answer a question is more exhausting than parsing a book? That makes zero sense


For me, the part that's most exhausting is the constant struggle to remain competent with the ever-changing landscape of frameworks, libraries, and services that are used in modern software development.

This is made worse by their size and scope. Spring Boot is a giant ecosystem, as is .NET, as is Azure, etc.

Also, search results tend to be a sea of data which might, or might not, contain some crucial bit of information that will help.

So, all too often, "using Google to answer a question" becomes a 2-hour or day-long research project to figure out how to solve some small, arcane problem like why the POST to some service doesn't have the Authorization header.

And when there's internal and external pressure to be productive, meet deadlines, and not feel incompetent, sometimes it gets frustrating and stressful and tiring.


> Are you saying that using Google to answer a question is more exhausting than parsing a book? That makes zero sense

My point is that we couldn't possibly do our jobs now without Google, but that we used to be able to do it just by knowing the stuff in a few books really well and knowing where—in the books or in e.g. manpages—to find what we didn't know. A small shelf of books is way better than Google, if you can be reasonably certain that the answers you need are in there.

Yes, this situation is far more exhausting.


Yeah, you couldn’t do your job without Google because it’s vastly superior to a shelf of 5 books.


How much is on Google itself vs Google search linking you to, eg, StackOverflow. A side question: how much of your job could you do only referring to https://devdocs.io/ or other reference?


That very much depends on what you're doing. The existence of Google has brought about the expectation that we do our work in a way that requires Google.


I think Google is a net positive


You have an optimistic view of ChatGPT. What I see happening is the movie 'Idiocracy' becoming a reality.

Who am I kidding, Idiocracy is already a reality. Just look at the comments here on HN.


What would be a groundbreaking great leap technology that wouldn't immediately result in articles that say "x will make our society even more unequal"?

Viable AR/VR?

Fully autonomous vehicles?

Fusion power?

Edit: I'm confused why this question is garnering all these downvotes. Could anyone please help me understand why this is controversial or disliked?


I would say an effectively unlimited (in practice; even if in theory it runs out in 5 billion years or so), low- or non-polluting power source that is easy to copy and roll out. Unlimited power solves many issues.

But most leaps would not result in ‘x will make things unequal’ if the people making them gave everyone access in an affordable manner. That’s why an unlimited source of energy has to be easy to make and copy, and not be hampered by patents or hurdles that prevent that. Otherwise these articles are absolutely right.


A patent monopoly on that hypothetical power generation device would have the potential to do a lot of harm, actually…

It could bankrupt many companies, causing job losses around the world and a potentially expanding economic shock matched to the rate of production of the new devices.

It could cause the complete devastation of smaller communities (villages, towns, and very small cities) where the majority of employment is directly in energy-related resource extraction or in supporting jobs maintaining equipment for, or supplying services to, these primary resource industries.

Removing the need for external power supplies like diesel fuel, or for the surface space solar farms require, would enable more people to take up mobile or alternative living situations (van life, mobile homes, camping with appliances), since the need for a house to plug appliances into is one of the anchoring factors in real estate economics. That could be the best thing for people in general, but also another potentially large-scale economic shock.


There was a classic sf novel that started with the discovery of a matter duplicator which is able to duplicate itself. The society in the story world evolves to bring back slavery. Now, this was totally made up, but it's not too hard to solve the puzzle of how such a response could logically happen.

You can basically always find an angle for the story you want to tell.


> There was a classic sf novel that started with the discovery of a matter duplicator which is able to duplicate itself. The society in the story world evolves to bring back slavery.

What's the name of that story?


Damon Knight, A For Anything.

There's a different story by another author where a similar machine brings on an economic depression, but I forget the title.


But the current generation AI is readily accessible to everyone, either for free or very affordably priced.


> But the current generation AI is readily accessible to everyone, either for free or very affordably priced.

I think the "accessible to everyone" rubric is more complicated than the GP described. It's more like provides similar economic benefits to everyone. ChatGPT may be "readily accessible to everyone," but if it and similar technologies allow significant automation of knowledge work it will not provide similar benefits to everyone, since it will allow the elimination of entire categories of jobs.

At least in the short term, "elimination of entire categories of jobs" benefits the ownership class and no one else. If these "AI" technologies also inhibit the ability of those masses of laid-off workers to retrain and switch jobs to something with equivalently high wages, then they will make our society even more unequal in the long term.


To me it sounds like an impossibly high bar to clear: even plainly giving a person $100 in cash, no strings attached, doesn't provide similar economic benefits to different people.


To be fair, the current generation isn't public. The previous generation is. The current generation is whatever is behind closed doors right now being tested by OpenAI and partners and quietly demo'd to big corps so they can prep. Say the version that can solve captchas with better than human accuracy.


If you're talking about OpenAI, then nope, it's not accessible to everyone. OpenAI is blocking poorer countries from using its services.


These low effort historical comparisons are really insidious. It turns out we can do better than an obtuse comparison of the present circumstance to past cases and increase our ability to predict outcomes. Yes, the current case of AI rhymes with past cases of technological progress. But the actual features of the cases show core differences that imply a very different result in the case of AI.

Historically, efficiency increases from technology were driven by innovation that brought a decrease in the costs of transactions. This saw an explosion of the space of viable economic activity and with it new classes of jobs and a widespread growth in prosperity. Productivity and wages largely remained coupled up until recent decades. Modern automation has seen productivity and wages begin to decouple. Decoupling will only accelerate as the use of AI proliferates.

This time is different because AI has the potential to have a similar impact on efficiency across all work. In the past, efficiency gains created totally new spaces of economic activity that the original innovation could not further disrupt. But AI is a ubiquitous force multiplier; there is no productive human activity that AI can't disrupt. There is no analogous new space of economic activity that humanity as a whole can move to in order to stay relevant to the world's economic activity.

Our culture takes it as axiomatic that more efficiency is good. But it's not clear to me that it is. The principal goal of society should be the betterment of people's lives. Yes, efficiency has historically been a driver of widespread prosperity, but it's not obvious that there isn't a local maximum past which increased efficiency harms the average person. We may already be on the other side of that critical point. What I don't get is why we're all just blindly barreling forward and allowing trillion-dollar companies to engage in an arms race to see how fast they can absorb productive work. The fact that few people are considering what society looks like in a future with widespread AI, and whether this is a future we want, is baffling.

This is humanity's new agriculture moment; those who are positioned to become the new ruling class are resisting any ideas of slowing down, while the rest of us are oblivious to the approaching calamity. The tech crowd with a boner for the supposed techno-utopia imagined in sci-fi needs to ask what costs we should be willing to pay for a chance at that future.


Fusion power would probably make society more equal. Especially if a government cracked it.


tor?


Not exactly a "great leap" technology if most people have never heard of it and less than 0.01% of people have ever used it or benefited directly from its use.


People have been using AI and machine learning since the beginning of the web. Google's search itself has been using ML for years.

I'd argue we have already been using these technologies; they merely got better and more available.


I can't speculate on how AI (which type?) will affect our societies, but to counterbalance what is generally said about AI (that it will cause major "disruptions" in jobs and economies), I like what Chomsky has to say about it [1]. (In a nutshell, according to him, the danger of "chatbots" [sic] is not that they will take jobs, but that people will take them seriously.)

[1] https://youtu.be/av_0PhJdw9M?t=3133


We can already see anecdotes from people claiming chatbots saved their dog, while some claim it helps their mental health. Instead of prompting Google and Facebook, they switched to bots for their “research”. Both amusing and sad at the same time, but proof that Chomsky is right. Brace for folks doing silly things because a… chatbot told them to.


AI (or any other breakthrough tech) will only exacerbate the divide set up by the tax code. As long as there is an incentive to keep corporations growing, the divide will increase.


Well, fortunately us peons can invest in many of these companies and come along for the ride.


The rate at which things need fixing is higher than the rate at which things actually get fixed. Over many years, it will all add up into a hot mess.


Somehow the greatest aid to teaching since writing is going to increase inequality.

Wow, we suck at being human.


This has always seemed like the most immediate, concrete threat of useful AI.


AI is the new Climate Change


the people who trim trees in my neighborhood will still have a waiting list

coders should be worried though


I think tree maintenance is a great example of work that we should be automating because it's really dangerous and machines can do higher quality work than a human. The "tipping point" of when it will be cheaper to buy a machine vs hire a skilled laborer is unknown, but I'd be surprised if anything more than the most rote codemonkey programming was automated before this sweet spot of dangerous, skilled labor.


Tree trimmers will make far less money with the glut of unemployed software developers entering the field. They're already proficient at inverting (binary) trees, trimming them is child's play.

More seriously, if a huge chunk of white-collar professionals are made redundant, why would they have the cash to pay a tree trimmer?


Coders who are not resourceful*


Even though the concerns are genuine, the right answer is that we do not know. I actually think it is the small guy who has less to lose from using AI models, and who may actually use them like a speed boat to attack the corporate tankers.


This was also true of the internet itself, but eventually the speedboats became even larger supertankers by putting up new moats around their internet businesses.



