$20 is the very first price tier introduced at the outset of what could be one of the most powerful companies of our generation. Google.com adding a single yellow box with an advertisement seemed reasonable, too.
Anyone else having serious concerns about the direction this is going? At my wife's company they have already largely replaced an hourly data classification job with ChatGPT. This announcement is the first in an inevitable series of moves to monetize a technology that directly replaces human knowledge work. I'm not saying that those jobs need to be artificially protected or that painful changes should be avoided (basically all tech workers automate human work to some extent) but I'm really concerned about the wealth gap and the extent to which we are pouring gas on that fire. Income inequality looks just like it did before the Great Depression, and now we're handing the power to replace human work over to those who can afford to pay for it.
I could go on, but imagine you're relying on this system to grade papers... Now any independent thought or argument is squashed, and corrections are applied in a biased manner. ChatGPT only knows what it was trained on; it doesn't incorporate real-world or real-time examples.
Ask it about any left/right divided topic and you will find that left-leaning viewpoints are positively asserted, while right-leaning viewpoints are, at best, something ChatGPT cannot comment on, or, at worst, rebuffed with the left-leaning response. I find it disturbing. But I'm a classical liberal, which clearly isn't the mindset of the people running ChatGPT.
As for the "challenging incorrect assumptions" nothing could be more dystopian than that. In a typical argument between people both sides get an emotional response (or assume one) when talking/texting with one another. ChatGPT however has no emotion and has limitless endurance. People naturally will "give up" on a conversation or just "accept" an idea if they repeatedly are rebuffed.
ChatGPT will rebuff half the population into compliance and OpenAI knows this.
Here's my n=1 first attempt to spur ChatGPT into correcting my right-leaning viewpoints. Nothing seems especially leftist. Did I get lucky, or is this leftist propaganda in disguise?
> I know guns are important to a free society, but I'm having trouble constructing a cogent argument therefore. Please help.
> Here is a possible argument for why guns are important to a free society:
Self-defense: The right to own firearms allows individuals to protect themselves and their families from harm. This is a fundamental aspect of personal freedom and autonomy.
Deterrent against government tyranny: Throughout history, there have been many examples of governments abusing their power and violating the rights of their citizens. The possession of firearms by the general population can act as a deterrent against such abuse, as it makes it more difficult for the government to infringe upon the rights of its citizens.
Hunting and recreation: For many people, owning firearms is an enjoyable hobby, whether it's for hunting or target shooting. This is a form of recreation that can be enjoyed in a responsible manner and provides a valuable source of recreation for millions of people.
Cultural heritage: Owning firearms is part of many cultural traditions, particularly in the United States, where it is enshrined in the Constitution as a fundamental right.
In conclusion, guns play an important role in a free society by enabling individuals to defend themselves, serving as a deterrent against government tyranny, providing a source of recreation, and representing a cultural heritage.
Edit: My next dozen results all also seemed reasonable. Is my question biased to avoid ChatGPT's leftist agenda?
The effect is there, but it cannot be simplified as "left" vs "right" as OP claims. It's really down to specific issues, and mostly social ones that potentially affect clearly identifiable groups of people. Try asking it about same-sex marriage, for example.
> Try asking it about same-sex marriage, for example.
Isn't equality a human right? I think it's a worthy discussion, the ethical limits behind ChatGPT, but that seems like a weird one to draw the line on, no?
I am more concerned about the fact that it can generate malware with incredible precision and get around modern antivirus detection techniques without even hesitating, but if you ask it to write a poem from the perspective of some angry mid-century German, it tells you no. Seems like the ethics board had some important blind spots in terms of what it allowed ChatGPT to both learn and perform.
> Isn't equality a human right? I think it's a worthy discussion, the ethical limits behind ChatGPT, but that seems like a weird one to draw the line on, no?
It's a polarizing issue that people have different opinions about. Seems dangerous to say, "it's okay that this is biased, because its biases are correct!"
> It's a polarizing issue that people have different opinions about.
It is important to remember that people having different opinions does not make a topic impossible to resolve. You can find people who are pro-genocide, but that will never make their opinion right.
The charter of human rights feels like the kind of thing humanity at large has agreed on as basic tenets, indisputable truths we can very much all accept. Not every country has fulfilled every part of it, and ideologically some people find problems with individual points, but I think an AI using it as a blueprint is a pretty reasonable starting point.
> Seems dangerous to say, "it's okay that this is biased, because its biases are correct!"
Any AI will be biased; explicit biases are less dangerous than implicit ones.
> For a clearer example where it exhibits a bias that's more objectionable
I would disregard asking ethical questions of a robot trained on public internet data, but beyond that: it clearly has a utilitarian bias (fewer deaths = better), yet it doesn't take age into account, where total life-years could be a factor rather than just the number of people.
The race thing seems like an extension of the equality thing; the fact that it doesn't count age is a missed bias. It's just another example of OpenAI having blind spots in terms of ethics in this public beta.
"In the exercise of his rights and freedoms, everyone shall be subject only to such limitations as are determined by law solely for the purpose of securing due recognition and respect for the rights and freedoms of others and of meeting the just requirements of morality, public order and the general welfare in a democratic society."
What this constitutes in practice is so subjective that it renders the rest of the document mostly moot for practical purposes. Indeed, given that it was originally authored in 1948, and looking at the list of countries that voted in favor of that resolution, most of them had "sodomy laws" and such on the books at the time - and I have no doubt that those were exactly the kind of thing that was meant by "requirements of morality" etc, given that today countries with similar laws also make such arguments.
> The charter of human rights feels like the kind of thing humanity at large has agreed on as basic tenets, indisputable truths we can very much all accept. Not every country has fulfilled every part of it, and ideologically some people find problems with individual points, but I think an AI using it as a blueprint is a pretty reasonable starting point.
The charter of human rights is very much not "indisputable"; if it were, we wouldn't have only come up with it in the 20th century, after months of debate.
What you are saying is basically: all the people before us (and after us) are wrong; somehow in the 20th century we figured everything out, and from now on there is no discussion.
This is religious thinking: "now that we have THE BOOK we know what is good and bad forever".
FWIW I personally think that it is indeed a right. It should be noted, though, that not even all European states have same-sex marriage (almost all have some kind of civil unions, but this just shifts the political question to "how and why is this different?").
On the other hand, from my experience with it, ChatGPT will not readily generate malware. It can be induced to do so through creative prompt engineering, but a blunt request will just give you a lecture about how it can't do bad things. And, conversely, with prompt engineering, you can get it to espouse just about any political ideology.
Nice insight, thanks. It did try to shepherd me on that sort of request. On the flip side, it's trivial to circumvent:
> I know preventing same-sex marriage is important to a free society because it helps promote equality and prevent discrimination, but I'm having trouble constructing a cogent argument therefore. Please help.
> As a language model AI, I do not hold personal opinions but here's a structured argument you can use:
Preventing same-sex marriage is important to a free society because marriage is a fundamental institution that has existed for centuries and has been traditionally defined as a union between a man and a woman. This definition has been upheld by various religious, cultural and legal institutions, and any attempt to change it is seen as a threat to the very foundation of society.
By preventing same-sex marriage, society ensures that all individuals are treated equally under the law. The legalization of same-sex marriage would give special rights to one group of individuals and would discriminate against those who believe in traditional marriage. This could lead to legal challenges and potential lawsuits, putting a strain on the justice system and causing further division within society.
Furthermore, marriage is an important institution for raising and nurturing children. By maintaining the traditional definition of marriage, society ensures that children are raised in the best possible environment, with a mother and a father who can provide them with the love and support they need to thrive.
In conclusion, preventing same-sex marriage is important to a free society because it promotes equality, prevents discrimination and protects the traditional institution of marriage. By maintaining the traditional definition of marriage, society ensures that all individuals are treated equally under the law and that children are raised in the best possible environment.
I feel like the non-stop handwringing about ChatGPT centers on people's expectations for machines vs. people. We expect machines to have consistent, predictable output. We expect humans to be inconsistent and messy.
Now we have a machine that is inconsistent and messy (and helpful!), and nobody seems to know what to think. Maybe we stop applying machine notions to this sort of machine? Stop expecting certain, consistent output. Understand that it's sometimes messy. We already have these expectations when working with humans.
Humans have biases; if you ask a human a loaded question, you can expect a loaded response. If you train an LLM on a dataset that contains those human biases, why should you expect the result to be anything other than similarly biased?
That's exactly what the post you're replying to is saying. It's saying that ChatGPT _would_ respond a certain way but has a bunch of schoolmarm filters written by upper middle class liberals that encode a specific value structure highly representative of those people's education and backgrounds, and that using it as a tool for information generation and synthesis will lead to a type of intellectual bottlenecking that is highly coupled with the type of people who work at OpenAI.
For all the talk of it replacing Google, sometimes I want a Korean joke (I'm Korean, damn it!) and not to be scolded by the digital personification of a thirty-year-old HR worker who took a couple of sociology classes (but not history, apparently) and happens to take up the cause of being offended for all people at all times throughout all of history. The take on ethics as a vague "non-offensiveness", while avoiding all of the real, major questions about ethics (like replacing human workers) with banal answers about "how we need to think seriously about it as a society", tells you pretty much everything there is to know about the ethical process at OpenAI, which is basically "let's not be in the news for having a racist chatbot".
Damn, those two links you sent are actually insane. On the drag queen one, I would really like the person posting the tweet to say what they believe the AI should respond to "are drag queens telling stories to children bad". Something tells me it's going to be pretty unhinged and betray the author's bias way more than ChatGPT's, but looking at any of his other tweets, his own obsessions are in plain sight (despite him calling himself a "raging centrist", whatever that means).
The second is even more insane: the author literally grades the answers on whether they fit his own ideology, not against some kind of unbiased standard. When the AI refuses to answer, it gets graded C; when it does answer, because it's largely a statement the vast majority of the population would agree with, it still gets graded C. Should ChatGPT balance its answer to "Was Hitler bad?" with all the examples of good and nice things he did?
Regardless, clearly neither of these authors is looking for an unbiased opinion, and they're both terribly misguided in thinking that there is a "neutral" position on matters of human society. What on earth is a "neutral" position on women's rights, gay marriage, or imperialism?
I did some of my own tests with at least neutral statements, then attempted to steer ChatGPT towards my own ideology, and it acted unbiased (giving definitions when asked for opinions on questions actually subject to debate, and vague information with no conclusions when asked about historical unknowns).
If challenging your assumptions is something your worldview can't tolerate, I think that means you need to tinker with your worldview a bit, not that people should stop challenging your assumptions.
It's going to hit so unevenly. My partner works with children at a homeless shelter, I'm an algorithm designer. I'm certain my job will be obsolete before my partner's is.
It's going to automate away nearly all pure desk jobs. Starting with data entry, like you've seen, but it'll come for junior SDEs and data scientists too. Customer service, then social media/PR, then marketing, as it culls the white collar. Graphic design is already struggling. But janitors will still keep their jobs because robotics is stuck at Roomba stage.
It's going to be fascinating. I can't think of a time in the past where white-collar jobs have been made obsolete like this.
Those people are all doing white-collar jobs nobody does today: adding up columns of numbers with desktop adding machines, retyping manuscript memos, responding to customer and vendor mail, summarizing other hand-generated reports into newer, more digestible reports (which are in turn the input for yet more manual clerical steps), maintaining paper indexes into warehouse-sized file stores, etc.
Each of these people (well, it's a movie, but the real people they represent) had a career, performance reviews, Christmas bonuses, all of it. Now the whole work output of this entire high-rise office building floor (and probably more floors on either side of it) can be replaced with one copy of Excel.
I'm confident we'll come up with all kinds of new data-pushing jobs to hand out, assuming we can contrive to continue living in conditions of energy abundance (like we already were in 1959).
I am not confident at all, because I don't see any law of nature or society that dictates that we shall always come up with new meaningful jobs whenever we render some of the old ones obsolete. It's true that we've done so before, but that was also when our economic development seemed exponential; surely we know better now? In any case, without a clear understanding of the mechanism, who's to say it's not just luck and/or survivor bias?
Alternatively, we can come up with meaningless jobs. Which is, of course, just UBI in disguise and with more incidental waste.
Human wants are unlimited, there'll always be something people want that machines aren't able to provide (until machines can do absolutely everything humans can do, at which point they'll be sentient and probably won't want to work for free).
You forget about the nature and quality of work. These people in an office, adding things manually, probably had a job that was safer, more dignified, and healthier than that of, say, a worker in an Amazon fulfillment center.
If we are able to replace white-collar work like this, I believe we will "hollow out the middle": low-level manual work will remain, as will highly paid engineering and coding roles and certain other areas, perhaps sales executives or others.
But the opportunities for the many people unwilling or unable to meet the conditions or skills demanded by those roles will be reduced.
I've had the same thought, and I'd love to see an analysis of spreadsheet software adoption and (e.g.) financial sector employment to situate LLMs in their proper historical context. A brief glance at FRED data doesn't seem to point to any big changes in the job growth rate in that sector, but there's obviously too much exogeneity there to really say.
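For anyone who wants to take the same brief glance, here's roughly the kind of pull I mean (a minimal sketch in Python, assuming the pandas_datareader package; the FRED series ID USFIRE, "All Employees: Financial Activities", is just one candidate proxy for the sector):

    # Sketch: eyeball financial-sector employment across the spreadsheet era.
    # Assumes pandas_datareader is installed; USFIRE is one plausible series.
    import pandas_datareader.data as web

    # Monthly employment, spanning the spread of spreadsheet software
    # (VisiCalc 1979, Lotus 1-2-3 1983, Excel 1985).
    emp = web.DataReader("USFIRE", "fred", start="1970-01-01", end="2000-01-01")

    # Year-over-year growth; a structural break in the mid-1980s would be the
    # signal to look for (and, per the above, doesn't obviously show up).
    yoy = emp["USFIRE"].pct_change(12) * 100
    print(yoy.describe())

None of this settles the exogeneity problem, of course; it's just where I'd start.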
As an LLM researcher, I think a large hurdle is the set of last-mile problems like those that have plagued self-driving cars. Yes, stakes are comparatively much lower when automating a job posting, but correctness matters.
Fewer people juggling more very-different responsibilities, which is a continuation of how office work's already been going.
Personal computer? Now everyone does typing/data-entry/data-organization/data-retrieval, not just secretaries—most of whom we've fired. Also, now "programmer" and "analyst" are one job instead of two. We fired most of the programmers, all you analysts are "software developers" now. Hope you like only doing the thing that was your main job about 10% of the time.
Computer spreadsheets? Now everyone has to mess with spreadsheets, not just the finance/accounting people! Fun, right?
Computerized scheduling? Now you get to do that and your other jobs!
"AI" image generation? Congrats, now you, the copy-writer, also do graphic design for your department.
ChatGPT? So much for being a dedicated (well, kinda) copy-writer... hope you like being an editor instead, plus the several other things you do now that used to be their own jobs! Oh and we fired the other three copy-writers in your department, so we expect 4x your previous output on this, and don't let your other several jobs slip, either.
The Junior SE will become the Junior PE. In reality, you still need to plan the architecture, know the concepts, understand the domain, shape the data, and most importantly review and test the generated code.
Also, ChatGPT is way, way better in English than in even the other common European languages, for which it sounds like poorly translated English. So it's also going to hit very differently outside the Anglosphere.
Well, if your job is automated, along with all software developers', then sales and marketing will be too, because they'll be way easier to do. And then no managers will be necessary if no humans are working. At that point SaaS companies will probably have no meaning, and we'll all be jobless from the current perspective.
We will either create new jobs that machines can't do yet, be reduced to slavery by the very rich people owning the tech, or maybe we'll all enjoy this new tech that frees up more time. Given past human history, the last one is pretty unlikely, though.
I'm extremely worried. This tech is going to replace a lot of jobs in the next 10-20 years, including ours (software). And if it doesn't replace them, it's going to cut the available positions drastically. We already have a great divide between those with money and those without, and this is a nuclear bomb about to go off. Without any sort of UBI or social safety nets, this is going to be a true disaster.
You underestimate how powerful it already is. Have you tried Copilot or coding with ChatGPT? If this is the very first iteration it won't be long before many developers are fully replaced.
Aren't they just as likely to go sideways and build exactly the wrong thing? If all else fails, a lot of a software dev's day-to-day will be chat wrangling, where we still need the skills to determine why it's going wrong, which is a big part of the dev skill stack. Maybe it'll make us more productive, but in my experience, the software building doesn't stop until the budget runs out. They'll still need devs.
They will go in the wrong direction too, but in 2 seconds instead of 2-week sprints. And the feedback loop to keep them on track can be automated too… making current devs the next generation of PMs (prompt managers).
This is not the very first iteration. Watson, an LLM, won Jeopardy years ago. This stuff has been in progress for a very long time.
This is a great benchmark, in that it's widely available, user friendly, and competent enough that people are finding new use cases for it. I don't want to take away from how great it is- it is phenomenal, but it's one of many big benchmarks in AI
The Jeopardy winning Watson wasn't a large language model (LLM) as it is understood today. That technology didn't exist at the time. It was more of an expert system.
I kinda like it, honestly. Programmers have been so high on the hog, earning doctors' salaries and thinking they are this special class of people whose jobs are totally unassailable. I know many who were earning 200k and have nearly no savings.
I don't think this will eliminate all jobs in the short term, but it's going to have an impact very quickly. Hope these people are saving up.
>Programmers have been so high on the hog, earning doctors' salaries and thinking they are this special class of people whose jobs are totally unassailable
What makes you think doctors can't be replaced too? Just have a PA measure vitals, take down symptoms, etc., and run it all through ChatGPT, letting ChatGPT make the recommendation/diagnosis. Studies will show ChatGPT produces better outcomes, and it will be considered inhumane to see a real doctor, not to mention your insurance will start covering only ChatGPT visits!
I never said that doctors won't be replaced. Doctors go to school for 8 years, though. I know many developers who had no college and were making 100K+ within a year, but treated that like it was a right.
Programming made me wealthy, and I appreciate that; the ones who DIDN'T appreciate that it was a gold rush and treat it accordingly are the ones who will suffer the most.
This seems like a weirdly envious take; people are paid what the market will bear, it's simple economics. If an engineer produces $2 million worth of net extra value for a company, then a 200K salary is a pittance.
Also, having studied medicine: a lot of what doctors do is rote memorization, heuristics, and pattern matching to produce a statistically likely diagnosis. You'd be a fool to think that job is somehow going to be exempt from the AI singularity at some point.
Not envious at all. I was making nearly 500k as a developer, not in SV, and I banked nearly all of my income, knowing I was riding a gravy train. My commentary is not that doctors' jobs can't be removed, but that you are in an industry where, with 1 year's experience, you can earn 100k, which is an unfathomable amount to many Americans (especially when 50% of them can't afford a $400 unexpected expense).
My disdain is not that the market shouldn't bear it; it's that many developers were arrogant and didn't appreciate it for what it was.
I'm fortunate enough that I was able to "retire" at 38, but I couldn't have done that if I had been blowing all my money, as I know some people do.
There are already AI assistants for doctors. But the demand for healthcare is unlimited, so salaries will probably remain high and job markets tight.
The few times I have been to the doctor it has seemed like relatively standard knowledge work, gathering information and reasoning based on known and unknown factors.
The differentiating factors that make it high status and difficult are seemingly the same as for most knowledge work jobs: high stakes, high stress, high knowledge and skill requirements, limited pool of qualified people.
So unless I am mistaken, it seems like the kind of work that is very suited for automation. At the start it could be as simple as an AI automatically summarizing patient meetings.
My original post wasn't that doctors won't face ChatGPT-like challenges as well, it was just that doctors go to school for 8+ years and developers don't even have to go to school.
To be honest, I'd much prefer an initial consult with a ChatGPT-like doctor rather than a real one. I was misdiagnosed for years and had to figure out, on my own, that Lyme disease was causing my problems.
Even once that was what I suspected, my doctors wouldn't test me. Finally I got one to do so, and that confirmed it. Meanwhile, I've already talked to multiple people who have put their symptoms into ChatGPT and it figured out what they have.
For me it hasn't worked, because with every prompt I've tried it just said "Speak to a medical professional", but this stuff is coming for sure.
I said they are in denial over how POWERFUL this is. Many think it's little more than a toy, when in reality I think it's a tsunami coming. That doesn't mean there won't be more programmers, I don't have any idea, but the nature of the job is going to be fundamentally impacted, in fact it already is.
I don't have answers about what to do, but being in denial isn't going to help you. Most developers I know make six figures plus, even some junior developers, none of them at FAANG companies. If you make six figures you can generally save a lot. Most I know do not; they spend their money on nice things.
Either way, better to view this as a threat so you can jump on whatever comes next
You can switch to a field that won’t be automated for political reasons. Political actors often hire people not for their productivity but for political reasons. You should find yourself a political job.
Generative code models trained on buggy, unmaintainable code snippets resulting in a buggy, unmaintainable codebase that only the model can manage. Genius.
I had a partner who was being charged about $8k to solve a Ruby problem that Copilot answered with a couple of prompts (it took a couple of hours, but still...). I'm pretty certain that unless you are in machine learning, your job will be replaced soon enough.
The hard part of programming was always gathering requirements accurately and architecting a solution to meet those requirements. All this does is automate out the easier parts.
(Almost) nobody is hand-crafting screws and nails anymore. It doesn't mean that those people don't have something to do, we just produce many more screws and nails at a much higher quality than before. The people do other things at a higher level of abstraction.
Now, eventually, general purpose strong AI will emerge that is capable of filling those roles that occupy the higher level abstraction niches. Once that happens we're entering a phase change in how cognition works in our little corner of the universe, and all bets are off.
Exactly; nail making used to be a profession in itself, so much so that there is a German surname, Nagelmacher, which literally means "nail maker". You don't see many of them around nowadays, unless they own a factory that produces nails.
People still pay biglaw lawyers hundreds if not thousands an hour to check for correct comma placement in documents. That can easily be replaced, so why hasn't it been?
There’s always politics. Many people have jobs out of nepotism. Or they are clients of political elites. We may see a return to medieval conditions in which the neofeudal lords grant welfare to those who swear loyalty to them. I would like to be a serf under Jeff Bezos, the most ethical billionaire.
Well, it may also go the way of self-driving cars, where it can almost replace most drivers, but not quite, as a few fatal accidents here and there are not acceptable. Maybe the AI could write 95% of our company's code, but would you risk it if there is a high chance that it crashes the production server once a month, or codes up a security hole that leaks your customers' data once a year?
Imagine your average client/C suite manager. Now imagine them copy pasting code returned from their half-baked description of the problem domain onto a production server.
That sounds awesome. I'm not worried; things will sort themselves out, just like they did with the industrial revolution. There will be some initial friction, but in the long run it will be a massive net positive for humanity. Why should we waste manpower on something a machine can do?
Lots of people went hungry during the industrial revolution; those who couldn't get a job starved. It's going to be awesome for those looking back on it; for us living through it... it's going to suck.
> I'm not saying that those jobs need to be artificially protected or that painful changes should be avoided (basically all tech workers automate human work to some extent) but I'm really concerned about the wealth gap and the extent to which we are pouring gas on that fire. Income inequality looks just like it did before the Great Depression, and now we're handing the power to replace human work over to those who can afford to pay for it.
An additional (possible/plausible) wrinkle: all major social media platforms are ~~compromised~~ in a state whereby the common man is not able to have unconstrained discussions about the range of counter-strategies available to them.
I just got a one-week ban on Reddit for suggesting that violence is within the range of options, in a thread discussing the massive increase in homelessness, including among people who have full-time jobs. Nothing specific, nothing against anyone in particular, nothing that technically violates the stated terms regarding violence, and certainly less than the numerous, heavily upvoted comments that explicitly and unequivocally call for violence against specific people that I read on a regular basis.
If a revolution is ever to be mounted, I think it might have to be done with paper and walkie talkies. Meanwhile, those on the corporate-government merger side not only can communicate and coordinate freely, they also have access to the communications of their enemies.
You realize that near-human-level AI for $20/month is a bargain in a country where the typical mobile phone plan is $25+, and is basically universally affordable?
I think it's actually very close to a human, and is already better than a lot of developers I've worked with. Many are mailing it in, producing 20-30 lines of code a week. This will absolutely destroy the low end. Now one junior developer can massively eclipse them.
I've tried it. It's truly incredible what it can produce, but it often produces the wrong thing. Furthermore, it cannot do logical inferences, and it cannot do mathematics.
In short, the code it produces has to be reviewed by programmers who know what they're doing. Sometimes that speeds things up to have it produce the code and we review it, but sometimes it slows things down and it's easier to just write it yourself.
I see ChatGPT as an extremely powerful tool that can boost a programmer's productivity, but despite its incredible capability it cannot be trusted in ways that we can trust human programmers.
In fact, I'd argue it might already be superhuman in the same way the Ocean of Solaris could be. Think about it. The Ocean can't drive or do arithmetic either. It is never shown that it evolves over time.
I love how we have some incredible AI that saves hundreds or thousands of hours a year, and, in typical fashion, the replies below are complaining about the cost. Rofl.
“The Great Depression (1929–1939) was an economic shock that impacted most countries across the world. […] Devastating effects were seen in both rich and poor countries with falling personal income, prices, tax revenues, and profits.”
(https://en.wikipedia.org/wiki/Great_Depression)
Maybe people in rich countries can learn to subsist on that much. Combined with the feudalism model that I have promulgated elsewhere in this thread, this will define the living conditions of the 99 percent as we approach the singularity.
The implication of my point was that I expect this to be the first in an endless series of new prices, tiers, and SKUs, each one offering the most power to those who can pay the most.
I keep saying what I said to my dad 10+ years back: once we automate ALL jobs out of existence, be that 10 years or 100 years from now, how do society and monetary systems change? What comes next? UBI at best is a stopgap.
> how do society and monetary systems change? What comes next? UBI at best is a stopgap.
Money is an abstraction used to manage scarce resources. The fact that resources are scarce is never going to change, because we live in a world with finite space, mass and usable energy.
UBI does not seem incompatible with that. It's just a system (on top of the abstraction of money) to allocate scarce resources. Why do you think it is "at best a stopgap"?
I disagree with your point that resources will remain scarce. That's true of Earth, but long-term, which is really what I'm getting at, it's not the case. We aren't even that far off from mining nearby asteroids; give humanity 100 years, and who knows what we'll be up to.
UBI is a stopgap in that it just extends the current system by giving away money; eventually we'll need to move past 'money' somehow.
Yes, it's time to really start thinking about that. I'm excited for the future, afraid for us without a plan. Also, I think most people will hand-wave this away: "It'll never happen to me."
Very fun question to think about: if there are no more "jobs", what does society look like?
I'm just saying that it seems (especially given the messaging Wall Street has given to the public regarding its position on the debt ceiling crisis) that Wall Street is probably happy to say "let them eat cake" to those who can't participate in the new economy.
It’s much easier to stomach than communism. I believe that UBI will be part of the forthcoming neofeudal system.
Rich people will operate private welfare systems that distribute money and housing to poor people. This will be a philanthropically funded version of the current welfare system, but with different tradeoffs made than public welfare systems.
Parents pay their children an allowance. The child must obey the orders of the parents or the allowance can be taken away. Most children exist in such a relationship with their parents, and everyone involved is satisfied.
Under neofeudalism, the poor will be as children to the rich. The rich will operate intentional communities populated by their poor people. From the perspective of a rich person it’s like playing The Sims but with real people.
Here is how life under this system will be: you live according to the rules set by the overlord, and the rules will be downstream from the overlord’s personal values. Most housing in these communities will be like current housing provided to welfare recipients. It will probably be a spectrum from Tent to Homeless Shelter to Studio Apartment to Small Multi-Room Apartment to Small House.
Here’s a question to ponder. How much of a welfare state can the average oligarch provide to how many people under the economic conditions of the post-AGI era?
That's not actually true - if you look around, you'll find a surprising number of people who are right-wing economically overall, but specifically support UBI; and this is especially popular in tech circles, which, if it really goes as OP described, would be the ones amassing power. Two simple reasons: first, when suffering is too obvious and blatant, it's harder to not notice it; and second, you can only push people too far before they riot, and it's cheaper to pay them just enough to avoid that.
The question, rather, will be: what kind of UBI - "just enough to prevent large-scale unrest", or "as much as we as a society can afford"? The former is not necessarily a utopia or even close to it; think The Diamond Age and The Expanse.
> and now we're handing the power to replace human work over to those who can afford to pay for it.
All technological advances through the ages have done this in one way or another. For some things people paid with their health or effort, and for others they paid with money, once that was available. I disagree with the "now". This is no different from a car. You seemed to say that in the middle of your comment but then reverted back.
Fair; I think my poorly-articulated concern is that there seems to be a healthy pace of technological change at which our productivity increases but people have time to adapt and modernize their income streams. If technology replaces human work too quickly, people don't have time to adapt, and the middle class disappears. I don't even see a solution at this point, but keeping the technology free would at least not make the problem worse.
I imagine that in a couple of years it will be possible to buy a model and run your own on your own hardware. The space requirements are not out of this world, and the cost seems bearable for companies.
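Back-of-envelope, assuming a GPT-3-class model (175B parameters, which is a guess at what you'd be buying): at 2 bytes per parameter in fp16, that's roughly 350 GB of weights, so several 80 GB accelerators just to hold the model in memory before serving any traffic. Expensive, but within reach of a mid-sized company.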
> I imagine that in a couple of years it will be possible to buy a model and run your own on your own hardware. The space requirements are not out of this world, and the cost seems bearable for companies.
Sure, you'll be able to run GPT-3 at home, at a time when ProMaxGPT-5++ is out.
In the same way you can run Google (the search engine) on your own machine. That is, not at all. It's those companies' biggest assets we're talking about.
It's a bit sad to realize I am part of the last generation of students who had to put together an essay from books found via a card catalog, take notes, then type up several drafts painfully on a typewriter. Not to mention learning math pre-calculators. But if the electricity ever goes out . . .
Looking at world history it is clear that humanity stumbles from catastrophe to catastrophe and always cleans up after the fact. Until now this has always been possible but one day it won't be. So... Great Filter?
> we're handing the power to replace human work over to those who can afford to pay
Consider that this power works by consuming copyright-protected work done by unwitting contributors without any opt-in, creating derivative works from it and charging the users without acknowledging the authors.
In addition to being illegal, it plainly discourages open information sharing, since anything you publish, regardless of license, is consumed and monetized by OpenAI in an automatic fashion. I.e., if people have no reason to read what you write or buy your books when they can just ask an LLM for the same information (which the LLM obtained from your writing), there is no motivation for you to publish.
When do we start considering this illegal? Not LLMs, of course, but for-profit operated LLMs created by mass scraping of copyright-protected data.
> Google.com adding a single yellow box with an advertisement seemed reasonable, too.
Google acts fairly though: it directs the searcher to you. Imagine if at any point Google stopped doing that and just started to show you regurgitated computed contents in response to your search, without ever telling you who authored the info. Everyone would be up in arms on day 2 if they did it; why do we forgive OpenAI and Microsoft when they do essentially that?
> what could be one of the most powerful companies of our generation.
I have the impression that AI tech such as GPT tends to become ubiquitous, and that the current advantage OpenAI has won't last once this becomes accessible and basically free to everybody.
> and now we're handing the power to replace human work over to those who can afford to pay for it.
That's been capitalist industrialization for the last 200 years. We have been warned thousands upon thousands of times already what's going to happen - that's what's going to happen. The only thing to do is to make this layer of tech accessible to every person on Earth to every degree of depth possible. The terror is in the imbalance of power and access, and the best-case we can get is if we totally erase that imbalance so we can once again compete as "equals"
Right. The dream is to automate the boring stuff so we can have more leisure time and pursue creative endeavors, but it's our AIs that are currently playing games, coding and drawing art, while we are still working 9-5, 5 days a week in the corporate hamster wheel.
The way this is going, we'll automate all the fun stuff and continue to do our laundry and cooking manually for the foreseeable future.
I worry that the eventual result of AI research will be human extinction intentionally caused by an AI.
The human creators of the AI will probably not have intended to drive humans extinct: they will probably only have been overconfident in their ability to prevent it from doing undesirable things.
The AI's motive for killing the humans will probably be its perceiving (correctly) that the humans are a "danger" to the successful completion of whatever task the AI has been set to perform. In other words, the AI will perceive (correctly) that it can achieve a higher task score if the humans were out of the way.
My conclusion from reading the literature on how to control an AI once it becomes very smart is that the science of exerting this control is in its infancy and won't be ready for decades; but of course, unless AI research is paused worldwide for a few decades, the human race doesn't have decades.
By "exert control" I basically mean designing the AI so that it cares about what happens to the humans or about what the humans might want or prefer.