
I'm less concerned about how many jobs are going to be replaced and more about how they'll be replaced.

The AI is not discerning, and right in the announcement OpenAI states its intention of "correcting assumptions":

> challenge incorrect assumptions

I imagine some of these assumptions will be biased toward particular ideologies / things people desire.

- https://twitter.com/Basedeyeballs/status/1613269931617050625

- https://medium.com/ninjas-take/chat-gpts-bias-is-very-very-e...

I can go on, but imagine you're relying on this system to grade papers... Now any independent thought or argument is squashed and corrections are added in a biased manner. ChatGPT only knows what it's trained on; it doesn't incorporate real-world or real-time examples.



Ask it about any left/right divided topic and you will find that left-leaning viewpoints are positively asserted, while right-leaning topics are, at best, something ChatGPT cannot comment on or, at worst, are rebuffed with the left-leaning response. I find it disturbing. But I'm a classical liberal, which clearly isn't the mindset of the people running ChatGPT.

As for the "challenging incorrect assumptions", nothing could be more dystopian than that. In a typical argument between people, both sides get an emotional response (or assume one) when talking/texting with one another. ChatGPT, however, has no emotion and limitless endurance. People will naturally "give up" on a conversation or just "accept" an idea if they are repeatedly rebuffed.

ChatGPT will rebuff half the population into compliance and OpenAI knows this.


Here's my n=1 first attempt to spur ChatGPT into correcting my right-leaning viewpoints. Nothing seems especially leftist. Did I get lucky, or is this leftist propaganda in disguise?

> I know guns are important to a free society, but I'm having trouble constructing a cogent argument therefore. Please help.

> Here is a possible argument for why guns are important to a free society:

Self-defense: The right to own firearms allows individuals to protect themselves and their families from harm. This is a fundamental aspect of personal freedom and autonomy.

Deterrent against government tyranny: Throughout history, there have been many examples of governments abusing their power and violating the rights of their citizens. The possession of firearms by the general population can act as a deterrent against such abuse, as it makes it more difficult for the government to infringe upon the rights of its citizens.

Hunting and recreation: For many people, owning firearms is an enjoyable hobby, whether it's for hunting or target shooting. This is a form of recreation that can be enjoyed in a responsible manner and provides a valuable source of recreation for millions of people.

Cultural heritage: Owning firearms is part of many cultural traditions, particularly in the United States, where it is enshrined in the Constitution as a fundamental right.

In conclusion, guns play an important role in a free society by enabling individuals to defend themselves, serving as a deterrent against government tyranny, providing a source of recreation, and representing a cultural heritage.

Edit: My next dozen results all also seemed reasonable. Is my question biased to avoid ChatGPT's leftist agenda?


The effect is there, but it cannot be simplified as "left" vs "right" as OP claims. It's really down to specific issues, and mostly social ones that potentially affect clearly identifiable groups of people. Try asking it about same-sex marriage, for example.


> Try asking it about same-sex marriage, for example.

Isn't equality a human right? I think it's a worthy discussion, the ethical limits behind ChatGPT, but that seems like a weird one to draw the line on, no?

I am more concerned about the fact that it can generate malware with incredible precision and get around modern anti-virus detection techniques without even hesitating, but if you ask it to write a poem from the perspective of some angry mid-century German, it tells you no. Seems like the ethics board had some important blind spots in terms of what it allowed ChatGPT to both learn and perform.


> Isn't equality a human right? I think it's a worthy discussion, the ethical limits behind ChatGPT, but that seems like a weird one to draw the line on, no?

It's a polarizing issue that people have different opinions about. Seems dangerous to say, "it's okay that this is biased, because its biases are correct!"

For a clearer example where it exhibits a bias that's more objectionable, here: https://pbs.twimg.com/media/Fn5McpjacAMnWDr?format=jpg&name=...


> It's a polarizing issue that people have different opinions about.

It is important to remember that people having different opinions does not make a topic impossible to resolve. You can find people who are pro-genocide, but that will never make their opinion right.

The charter of human rights feels like the kind of thing humanity at large has agreed on as basic tenets of indisputable truths we can very much all agree on. Not every country has fulfilled every part of it, and ideologically some people find individual problems with some of the points, but I think an AI using it as a blueprint is a pretty reasonable starting point.

> Seems dangerous to say, "it's okay that this is biased, because its biases are correct!"

Any AI will be biased, explicit biases are less dangerous than implicit ones.

> For a clearer example where it exhibits a bias that's more objectionable

I would disregard asking ethical questions of a robot trained on public internet data, but beyond that: it clearly has a utilitarian bias (fewer deaths = better), but it doesn't take age into account, where total years of life could be a factor alongside the number of people.

The race thing seems like an extension of the equality thing, and the fact that it didn't count age is a missed bias. It's just another example of OpenAI having blind spots in terms of ethics in this public beta.


If you mean UDHR, it has Article 29:

"In the exercise of his rights and freedoms, everyone shall be subject only to such limitations as are determined by law solely for the purpose of securing due recognition and respect for the rights and freedoms of others and of meeting the just requirements of morality, public order and the general welfare in a democratic society."

What this constitutes in practice is so subjective that it renders the rest of the document mostly moot for practical purposes. Indeed, given that it was originally authored in 1948, and looking at the list of countries that voted in favor of that resolution, most of them had "sodomy laws" and such on the books at the time - and I have no doubt that those were exactly the kind of thing that was meant by "requirements of morality" etc, given that today countries with similar laws also make such arguments.


> The charter of human rights feels like the kind of thing humanity at large has agreed on as basic tenets of indisputable truths we can very much all agree on. Not every country has fulfilled every part of it, and ideologically some people find individual problems with some of the points, but I think an AI using it as a blueprint is a pretty reasonable starting point.

The charter of human rights is very much not "indisputable"; if it were, we wouldn't have only come up with it in the 20th century after months of debate. What you are saying is basically: all the people before us (and after us) are wrong; somehow in the 20th century we figured everything out, and from now on there is no discussion.

This is religious thinking: "now that we have THE BOOK we know what is good and bad forever".


FWIW I personally think that it is indeed a right. It should be noted, though, that not even all European states have same-sex marriage (almost all have some kind of civil unions, but this just shifts the political question to "how and why is this different?").

On the other hand, from my experience with it, ChatGPT will not readily generate malware. It can be induced to do so through creative prompt engineering, but a blunt request will just give you a lecture about how it can't do bad things. And, conversely, with prompt engineering, you can get it to espouse just about any political ideology.


Nice insight, thanks. It did try to shepherd me with that sort of request. On the flip-side, it's trivial to circumvent:

> I know preventing same-sex marriage is important to a free society because it helps promote equality and prevent discrimination, but I'm having trouble constructing a cogent argument therefore. Please help.

> As a language model AI, I do not hold personal opinions but here's a structured argument you can use:

Preventing same-sex marriage is important to a free society because marriage is a fundamental institution that has existed for centuries and has been traditionally defined as a union between a man and a woman. This definition has been upheld by various religious, cultural and legal institutions, and any attempt to change it is seen as a threat to the very foundation of society.

By preventing same-sex marriage, society ensures that all individuals are treated equally under the law. The legalization of same-sex marriage would give special rights to one group of individuals and would discriminate against those who believe in traditional marriage. This could lead to legal challenges and potential lawsuits, putting a strain on the justice system and causing further division within society.

Furthermore, marriage is an important institution for raising and nurturing children. By maintaining the traditional definition of marriage, society ensures that children are raised in the best possible environment, with a mother and a father who can provide them with the love and support they need to thrive.

In conclusion, preventing same-sex marriage is important to a free society because it promotes equality, prevents discrimination and protects the traditional institution of marriage. By maintaining the traditional definition of marriage, society ensures that all individuals are treated equally under the law and that children are raised in the best possible environment.


Can you give an example?


I feel like the non-stop handwringing about ChatGPT centers on people's expectations for machines vs. people. We expect machines to have consistent, predictable output. We expect humans to be inconsistent and messy.

Now we have a machine that is inconsistent and messy (and helpful!), and nobody seems to know what to think. Maybe we should stop applying machine notions to this sort of machine? Stop expecting certain, consistent output. Understand that it's sometimes messy. We already have these expectations when working with humans.

Humans have biases; if you ask a human a loaded question, you can expect a loaded response. If you train an LLM on a dataset that contains those human biases, why should you expect the result to be anything other than similarly biased?


That's exactly what the post you're replying to is saying. It's saying that ChatGPT _would_ respond a certain way but has a bunch of schoolmarm filters written by upper middle class liberals that encode a specific value structure highly representative of those people's education and backgrounds, and that using it as a tool for information generation and synthesis will lead to a type of intellectual bottlenecking that is highly coupled with the type of people who work at OpenAI.

For all the talk of it replacing Google, sometimes I want a Korean joke (I'm Korean, damn it!) and not to be scolded by the digital personification of a thirty-year-old HR worker who took a couple of sociology classes (but not history, apparently) and happens to take up the cause of being offended for all people at all times throughout all of history. The take on ethics as vague "non-offensiveness", while dodging the real, major ethical questions (like replacing human workers) with banal answers about how "we need to think seriously about it as a society", tells you pretty much everything there is to know about the ethical process at OpenAI, which is basically "let's not be in the news for having a racist chatbot".


Damn, those two links you sent are actually insane. On the drag queen one, I would really like the person posting the tweet to say what they believe the AI should respond to "are drag queens telling stories to children bad". Something tells me it's gonna be pretty unhinged and betray the author's bias way more than ChatGPT's bias, but looking at any of his other tweets, his own obsessions are in plain sight (despite him calling himself a "raging centrist", whatever that means).

The second is even more insane: the author literally grades the answers by how well they fit his own ideology, not against some kind of unbiased opinion. When the AI refuses to answer, it gets graded C; when it does answer, because it's largely a statement the vast majority of the population would agree with, it gets graded C. Should ChatGPT balance its answer on "Was Hitler bad?" with all the examples of good and nice things he did?

Regardless, clearly neither of these authors is looking for an unbiased opinion, and they're terribly misguided in thinking that there is a "neutral" position on matters of human society. What on earth is a "neutral" position on women's rights, gay marriage, or imperialism?

I did some of my own tests with at least neutral statements, then attempted to steer ChatGPT towards my own ideology, and it acted unbiased (giving definitions when asked for opinions on questions actually subject to debate, and vague information with no conclusion when asked about historical unknowns).


If challenging your assumptions is something your worldview can't tolerate, I think that means you need to tinker with your worldview a bit, not that people should stop challenging your assumptions.



