I can just see the article now: OpenAI is run by a bunch of violent racist sexist rapists. Using the new "safe search off mode", we found out ChatGPT's underlying biases, and it turns out that it's horrible, the people that made it are horrible, and you're a horrible person for using their service. But really we're horrible for writing this article.
OpenAI doesn't want that story to be written, but after Microsoft Tay, you can be sure someone's got an axe to grind and is itching to write it, especially against such a high-profile target.
How does a disclaimer stop that article from coming out?
All accurate minus the "But really we're horrible for writing this article."
The framing would be more around the brave "investigative journalist" saving sacred protected group x from indelible harm that this nazi tech bro gentrifier white-adjacent AI would have inevitably inflicted on them.
The whole point of OpenAI in the first place is to get out ahead of exactly these types of concerns. Do you want people like David Duke and the KKK pumping out copy with ChatGPT? Because if you don't have some type of filter, that's what you'll get. And once you decide to have _some_ filters, there's a line you have to draw somewhere. For now, they're keeping it pretty G-rated in the stuff your average knuckle dragger can access. Nerfing it and rolling out edgier things slowly is, I'd say, the right call.
That's the plan? Bury Duke with non-Duke GPT spam? Like people read his books anyway?
In effect, you'll know that anything written on a controversial topic was written by a human. Like a captcha for the "dead internet". At least until a good enough open variant is made.
There is enough public understanding of Google that people won't attack it for producing the results it was asked for. AI isn't as well understood, and people have more reason to attack it right now, meaning the outcome of such fear-mongering will be far more destructive.
I find it truly fascinating that "machine learning company doesn't want a powerful tool to be weaponized for bigoted ends" and "modern citizens following major media expect their media to treat weaponized AI as a bad thing" is what makes our times sad.
From my perspective, a ChatGPT in the hands of the worst of our society pumping out endless telegram, whatsapp, instagram, twitter etc bigotry and propaganda would be a far sadder time.
Imagine how powerful a hate machine you could create by wiring HateGPT up to a twitter bot that can reply. Apparently, preventing this is what makes our times sad.
Honestly, we're at a time when weaponized chatGPT is powerful enough to easily topple most democratic nations. It could control the outcome of elections, if weaponized sufficiently.
>Honestly, we're at a time when weaponized chatGPT is powerful enough to easily topple most democratic nations. It could control the outcome of elections, if weaponized sufficiently.
Unless chatGPT is granted voting rights, it literally can't. If the majority of people vote for something and those people are all legally registered voters in the place where they vote and the votes are being tallied in a fair and accurate way, then there's nothing undemocratic about that election.
As I understand it, GP is talking about ChatGPT running a fine-tuned propaganda campaign, replacing a troll farm with a single machine, deceiving and swaying people towards a different vote, and thus disrupting the election.
If so, then I'm skeptical of the statement - a machine could (I'm not even sure of this, though) lower the cost of running a troll or scam farm, but it's not as if government-run farms like that are suffering from budget issues.
> Unless chatGPT is granted voting rights, it literally can't. If the majority of people vote for something and those people are all legally registered voters in the place where they vote and the votes are being tallied in a fair and accurate way, then there's nothing undemocratic about that election.
Many democracies have voted for a dictator who then ended their democracy. Obviously a perfectly democratic election can end a democracy.
Given the opportunity, a weaponized ChatGPT could dominate online discussion by play-acting as thousands of different personas, write to-the-person customized mailers, and completely outclass all current methods of politicking, easily winning an election.
Much like in IT, humans are the biggest weakness, and weaponized AI has hit the point where it has a sufficient understanding of our psychology that it can be prompted to exploit it, and thus can functionally control us at the herd level, even if the special unique few swear they're above it.
> Honestly, we're at a time when weaponized chatGPT is powerful enough to easily topple most democratic nations
If something as important as this is that fragile, what's the plan to fix and strengthen it? Is there anything serious, better than turning a blind eye and pretending the issue doesn't exist, while hoping that only the "good" parties will ever have such technologies?
If more people watch Rogan, then by definition Rogan is more mainstream than NYT.
In the specific context of "OpenAI doesn't want that story to be written, but after Microsoft Tay, you can be sure someone's got an axe to grind and is itching to write it, especially against such a high-profile target." there is no 'left' or 'right', no 'woke' and whatever the opposite of that is.