OpenAI faces complaint to FTC that seeks suspension of ChatGPT releases (cnbc.com)
53 points by zvonimirs on March 30, 2023 | 76 comments


What do they expect to happen if they win? OpenAI can't use GPT-4 (or build/release GPT-5), but the innovation will continue in parts of the world not subject to this regulation?

I understand that the LLM models are advancing quickly and aren't easily explainable or transparent. The models feel like magic at times. But that doesn't mean society should shut them down.

This is fearful behavior, and really just spreading FUD. These folks should take the time to understand how an LLM works before taking this action.


It reminds me of a time when new ideas about spirituality and humanity's position in the universe were so disruptive to the status quo operation of the church and kingdom of England that they were outlawed.

And pilgrims abandoned all they had to cross a virtually impassable ocean to seek empty land to build from scratch, but with the freedom to continue experimenting with their new ideas.

Funny how this country has come full circle.


Your argument is that companies should be allowed to do anything they like, because there will always be another country where they can act with impunity?


I think the argument is that no harm has actually been shown and there is no legal reason to hamstring one of the fastest-growing companies while Big Tech gets its shit together.


SEO spam and social media fake content are already pretty common, so there is definitely at least some harm


I feel as if there needs to be data that demonstrates an explosion of SEO spam and social media fake content. People have been doing those things since we figured out we could -- LLMs are just exponentially better at it.

If we're going to start using excuses for LLMs to get clipped, I think we should focus on the core of the problem, not the fact LLMs can enhance it.


It's a terrible argument.

The harm is transparent and greatly eclipses most other threats to cybersecurity.


Please list the harm that has been done. I do not see it as transparent.


If someone puts a gun to your head, what harm has been done?

Well, the potential harm is only deniable by the biggest of shills, but, technically, the only harm is psychological.


Everything, and I really do mean everything, has potential harm. Is your position that OpenAI should have to suspend their business over this?


Do you really see commensurate potential harm between an extreme scenario and a mundane one?

The development of AGI is an extreme, extreme scenario.


Is your position that OpenAI should have to suspend their business over this?


These goalposts keep evolving.


[flagged]


Your argument is so ridiculous and easy to disprove with even a moment's Google search that I wonder why you made it.


There was a very thorough understanding of the theoretical harm, as there is with AI. Harm from the Trinity test was not shown until after Japan, again having parallels with AI considering we don't yet know the long-term harms that may occur due to existing, less powerful models. Would you mind sharing your contention instead of just telling me you googled something?

My argument is also pretty clearly not that pedantic. I'm saying there was obviously going to be harm by the detonation of a nuclear weapon, but it wasn't technically shown until it was actually detonated. I'm saying the same thing is true of AI. You can disagree, but it's not _that_ ridiculous to compare the two.


Bad example. Plenty of harm during tests.


Definitely not - companies should be regulated, but stopping future releases is not possible when you have tech as powerful as GPT.


Where anything they like means publishing a text prediction engine that is pretty good?


Sounds like a sound argument: nonviolence isn't ignoring violence; you don't win a knife fight by declaring it a spirited debate; Darwin had a point; etc.


They expect:

- To be paid a lot of money by OpenAI to go away, or

- To be paid a lot of money to have a role in vetting future AI products, or

- To be paid a lot of money by wealthy individuals in the AI alignment camp, or

- To be paid a lot of money by OpenAI's competitors to block research until they can catch up with OpenAI.


Interesting. I hadn’t thought about this as a pure money grab yet.


This was my hot-take as well. The issue here is that OpenAI has some pretty large backers that know the FTC well. Not sure how this will pan out.


> I understand that the LLM models are advancing quickly and aren't easily explainable or transparent. The models feel like magic at times.

This is exactly why LLMs aren't useful or trustworthy for anything serious other than the niche of summarization of existing text. Even with that, you have to keep checking that it isn't hallucinating or bullshitting.

> These folks should take the time to understand how an LLM works before taking this action.

Anyone who knows about deep neural networks already knows that fundamentally they are black-boxes and are extremely poor at explainability and reasoning. This also applies to LLMs and it is not 'FUD'.


If you start with unit tests, you can have an AI in a loop keep iterating on the code in a Python file, for example, until it figures out how to pass the tests. Then all you need to do is write tests, and the rest of the code writes itself.
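
A minimal sketch of that loop, purely illustrative - the names here (generate_code, iterate_until_tests_pass) are hypothetical, generate_code stands in for whatever LLM API you would actually call, and the tests are assumed to import the module being generated:

  import subprocess
  from pathlib import Path

  def generate_code(prompt: str) -> str:
      # Hypothetical stand-in for an LLM call; wire up your own client here.
      raise NotImplementedError

  def iterate_until_tests_pass(test_file: str, target_file: str, max_rounds: int = 10) -> bool:
      # Ask the model for an implementation, run pytest, and feed failures back
      # until the tests pass or we give up.
      tests = Path(test_file).read_text()
      feedback = ""
      for _ in range(max_rounds):
          prompt = (
              "Write a Python module that makes these tests pass.\n\n"
              f"Tests:\n{tests}\n\n"
              f"Output of the previous failed attempt:\n{feedback}"
          )
          Path(target_file).write_text(generate_code(prompt))
          result = subprocess.run(
              ["pytest", test_file, "-q"], capture_output=True, text=True
          )
          if result.returncode == 0:
              return True  # all tests passed
          feedback = result.stdout + result.stderr  # failure output for the next round
      return False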


They're useful for many things, provided the user is aware of their limitations. That's true of any tool or information source.


GPT-4 is basically magic when it comes to code. Have you used it?


Yea! Also, there will never be regulation that will stop open-source LLMs from emerging that are just as powerful as GPT-4, and one day even more so.


> What do they expect to happen if they win?

Most likely it’s a play to buy time for competition to catch up.


These models are nakedly dangerous, even a cursory glance suggests this.


The fact that they are targeting GPT4 for supposed bias and safety risks, when GPT4 is the least biased and safest (hardest to jailbreak) model that OpenAI has released, makes this look like just an unsophisticated attack on their business model.


We have no business restricting OpenAI for bias when society still treats the MSM as authoritative.


Exactly! I am annoyed at this move to toss bricks on the road to hobble a clear leader


What bothers me about this is that, yes, moving too fast with AI, to the point of disruption or where society can't keep up, is a problem.

Yet restricting OpenAI won't prevent other big companies from building their own in-house GPT-4 (or GPT-5) level model. We're going there whether the government likes it or not. At the very least OpenAI is transparent (more than Google or Facebook, at least).


“the nonprofit research group Center for AI and Digital Policy”

Uh huh. Mentally filing this group under "AI grifters to be ignored".


Therein lies Sam Altman's biggest fear: regulatory crackdown or restrictions on OpenAI. At this point the government presents the only risk to the progress and success of OpenAI.

Sure there are competitors like Google but so far OpenAI is the leader and doesn't seem to be slowing down. The market could evolve into a natural duopoly, especially given the huge capital expenditures and technical know-how required to stand up and maintain a cutting edge LLM like GPT-4.

Once GPT-4/chatGPT reaches a certain tipping point for disruption, and public sentiment turns from curiosity to fear, the resulting backlash and scrutiny could be on the level of Microsoft's antitrust case in the 1990s. If I were Sam, I'd be pouring resources and money into DC to try to get ahead of this coming storm.


> Once GPT-4/chatGPT reaches a certain tipping point for disruption, and public sentiment turns from curiosity to fear, the resulting backlash and scrutiny could be on the level of Microsoft's antitrust case in the 1990s.

Amongst the soon-to-be-permanently-unemployed middle class, there are going to be a few crazy people.

Once/if they realise what these companies are working towards, their employees will require 24/7 security...

I'm not sure this is a future that any of us want (bar some executives)?


One way or another, the cat is out of the bag and there is no going back. Remember Napster. Even if OpenAI/ChatGPT is taken down, however unlikely, there is no slowing down the innovation that is about to transform our lives. This moment in time feels like the early 2000s, when Web 1.0 became real to the masses and suddenly everyone had a use for the web. We are at the precipice of the next big technology cycle, and this is showing all the classic symptoms of incumbents fighting the inevitable disruption.


If they succeed in freezing AI development, there will be no APIs like the ones OpenAI is offering. It'll be all closed doors. Huge benefit for those with access; the rest of humanity will basically be plebs.


They 100% cannot freeze AI development. They might be able to freeze commercialization of AI development, but the First Amendment protects development of any and all code: https://samirchopra.com/2016/03/03/apples-code-is-speech-arg...


> One way or another, the cat is out of the bag and there is no going back.

As long as ChatGPT and GPT-4 are only available as an API, the cat is still tethered by its owner and can be put back into the bag.

> This moment in time feels like the early 2000s, when Web 1.0 became real to the masses and suddenly everyone had a use for the web.

Where 90% of startups have just gone out of business. Even if new ones are emerging, the big tech conglomerates will just outpace them before they can attempt to challenge them.

> We are at the precipice of the next big technology cycle, and this is showing all the classic symptoms of incumbents fighting the inevitable disruption.

They said that about FSD as well, IoT, etc. Yet none of that was trusted enough to 'take off'.

There is something that separates the legitimate use-cases from the grifters, and it is called 'regulation and compliance', which eliminates the majority of short-term grifts, just like the current AI hype of slapping LLMs on everything.


I wonder how much of this attack on "AI" is directed by China. Slowing down AI development in the western world until they can catch up seems like a big win for China.


Here's the press release from the organization that filed the complaint, which has a bit more detail: https://s899a9742c3d83292.jimcontent.com/download/version/16...


This complaint seems somewhat unlikely to lead to an actual FTC action. The criticism is about unfair or deceptive business practices under the FTC Act. The FTC has a fairly specific definition of what constitutes unfair or deceptive business practices[1]:

  >  “Deceptive” practices are defined in the Commission’s Policy Statement on Deception as involving a material representation, omission or practice that is likely to mislead a consumer acting reasonably in the circumstances. An act or practice is “unfair” if it “causes or is likely to cause substantial injury to consumers which is not reasonably avoidable by consumers themselves and not outweighed by countervailing benefits to consumers or to competition.” 

[1] https://www.ftc.gov/about-ftc/mission/enforcement-authority


The FTC has been begging the complain-for-profit sector to give it a formal path to regulate AI. The FTC's only enforcement hook in this area is that it can take action against companies that have unfair or deceptive trade practices. This is how the FTC began regulating privacy and security in the US, and it's been waiting to use it for AI.

It comes as no surprise that this complaint is from Marc Rotenberg, former head of EPIC. He's very well aware of the boundaries of the FTC's power, and this complaint effectively serves as a letter to the FTC from an expert about how the FTC can position itself to begin regulating AI.


My first instinct after reading the complaint is... fuck off!! How nice it must be for members of this so-called "Center for AI and Digital Policy" to dictate - isn't that the result of a complaint enforced by the FTC? - from their nice and comfortable chairs, to OpenAI and, by inference, every other AI research company in the US, what to do and how. Is this the new form of virtue signaling? The FTC should stop OpenAI because of all the possible negative outcomes their AI work MAY create?

Right off the top of my head, I can come up with at least 10 places in the US and around the world where members of CAIDP could go right now and make a real difference for people who have real problems NOW. Instead, they want to tell and force others what to do with their expression - and yes, AI research and its output is a form of expression protected in the US (home of OpenAI) under free speech laws. How about taking a page from their org's name and creating an AI that can do all the things they're asking for automatically? No, that's not an option for them, because it would actually require doing more work than the complaint, which they could probably have used ChatGPT to write.

In their infinite wisdom, couldn't they have foreseen in the last 10 years that an AI-based tool like ChatGPT would emerge? Where's their AI tool that could save us all now from the awful and destructive AI companies that are creating so much value for the world? Have they even read OpenAI's System Card for GPT-4? Did CAIDP even see the tradeoffs and concerns it explores? On the way to reading the card, they should dust off a copy of Lessig's Code, check out a copy of The Moon Is a Harsh Mistress, and remember that this is the US: we don't force people to do things, we engage in dialogue instead.

“If liberty means anything at all, it means the right to tell people what they do not want to hear.” ― George Orwell


I'm sure that China is going to honor the demands of activists in America and halt its development of AI projects


The last thing I want is to talk to an AI bot when calling a company or health provider with questions. Due to where I live and my accent, these voice bots never work. So anything that stops these from being commercialized is good by me.

But these articles about AI are nuts; some state that AI will destroy all life on Earth. That was a headline I ran across that was supposedly signed by some scientists. I did not read it because it sounded crazy.

Also, these GPT* things are not really AI, but word/sentence parsers and probably some fancy database lookups.


> But these articles about AI are nuts; some state that AI will destroy all life on Earth. That was a headline I ran across that was supposedly signed by some scientists. I did not read it because it sounded crazy.

I'm going to give you some advice, without taking a stance on the content of that open letter one way or the other. If you do not engage with someone or some group because you think they 'sound crazy' based solely on a news headline, you are only limiting yourself.

Do yourself a favor and go read it for yourself, and make your own judgement about whether it is crazy. Maybe you read it and your suspicions are confirmed: you do think it is crazy, but now you have a first-hand view of exactly how crazy it is, and you can think about how to react given that influential people hold views you think are crazy. Or maybe you read it and your suspicions are overturned, and you have a first-hand view of a new perspective.

But if you just say "that's crazy, I'm not reading that" based on a news headline, you're letting a very superficial take determine your information diet. You're not even reading the article itself, just the headline! And the primary source the article and headline are based on is right there, and it is relatively short.

And journalists don't even get to write their own headlines, which is a huge issue within journalism. Headline writing is a dedicated role that has been SEO-ed to death. If you're a journalist, it is taboo to publicly blame your headline writers for the stupid, reductionist, and misleading titles they gave the piece you wrote, but every journalist has stories to tell about how much they hate their headline writers.


I guess this is the start of governments regulating AI.


This is just a letter that someone sent the FTC, it is not the FTC actually doing anything.


We need global regulation and treaties regarding AI, pronto.


I mean, can't they just move the company to a friendly island nation, or elsewhere?

It's in our best interest that an American company is far and away the leader in this field.


This is how our society becomes the dystopia in Atlas Shrugged. Every single fast-moving technology that we do not understand needs to be stopped in its tracks and regulated to check for "safety", "inclusion", etc., because really the biggest problem facing the world right now is Unchecked Technological Progress. In fact, the biggest problem facing Black people and women is biased AI, because god knows humans are always fair. Clown world!


I may not agree with:

> CAIDP calls GPT-4 “biased, deceptive, and a risk to privacy and public safety.”

But the rest looks good to me:

> The group says the large language model fails to meet the agency’s standards for AI to be “transparent, explainable, fair, and empirically sound while fostering accountability.”

> The group wants the FTC to require OpenAI establish a way to independently assess GPT products before they’re deployed in the future. It also wants the FTC to create a public incident reporting system for GPT-4 similar to its systems for reporting consumer fraud. It also wants the agency to take on a rulemaking initiative to create standards for generative AI products.

Sure, there will be (more) FOSS clones, and non-American clones. NBD — if they can't pass stuff like this, they're not going to be as valuable regardless.


When I read this, I was wondering if the moratorium advocacy signed by prominent members of Google research labs and OpenAI could be viewed as collusion/market-sharing?


It’s really ironic to me that Elon Musk signed a petition to halt LLMs but then deploys AI on our roads, putting people at risk, without issue.


On one hand, LLMs make Tesla look bad, but I wouldn't be surprised if, when put into the real world, these models have the same problem: always 5 years away from fixing all the edge cases.


At least AI recognizes what Teslas really are. Try to compare a Tesla to a serial killer on Bing


Yes. Elon Musk has killed more people with his AI than anyone else has with theirs.


So you have some stats proving that Tesla cars are more dangerous or crash more often than other cars? With so many Teslas on the road, there must be an obvious spike if AI is making them more dangerous than the baseline of all cars.


That sounds totally irrelevant to the discussion of whether Tesla AI has caused more deaths than other AI. Even if human-driven cars crashed 100% of the time, and Tesla AI crashed into a tree one time, this would not refute my original claim, which was that Tesla AI has directly killed more people than any other AI.

If you want to use a statistical definition of how many people Tesla AI has killed by comparing it to a baseline of non-AI cars, then you also need to do this for every other non-Tesla AI that you might assert has killed people. For example, for the medical misdiagnoses referenced in the sibling discussion, you would need to ask whether ChatGPT has misdiagnosed more cancers than doctors.


I'd be surprised if there's enough evidence to determine in either direction, for that.

How many used an AI for medical advice? A medical AI which said "no cancer" when there was? Does Therac-25 count as GOFAI? How many have taken an LLM at face value about some topic, and like those stories about GPS directions gone wrong, done something daft that we've not necessarily yet heard about (or if we have, the headline was "Florida Man does X" rather than "AI tells man to do X, and he does"?)

It's like how we don't know how many people had a crash shortly after failing to notice FSD had switched itself off.


I think at this point we can reasonably assume nobody has used AI for cancer diagnosis. Give it a year or two and maybe that's valid. Or maybe it even is true now if some high number of people have asked ChatGPT for tips about symptoms and decided not to go to the doctor based on its advice.

But either way, it's inarguable that Tesla AI has directly killed more people than any other publicly known AI. I suppose you could consider guided missiles, but those are not really AI (although you could make a similar argument that neither is FSD).


> I think at this point we can reasonably assume nobody has used AI for cancer diagnosis.

Why do you believe that to be the case? It's been in the news since at least Jan 2020: https://edition.cnn.com/2020/01/02/tech/google-health-breast...


That article doesn't mention a service that was open to users (not to mention the point of the article is that AI had fewer false positives and false negatives than doctors, which would invalidate the premise of this argument). But if you want to apply the same logic to ChatGPT, then even if it's true that misdiagnoses are leading to skipped doctor's visits, it's still unlikely that anyone has died yet from that lack of preventive care. ChatGPT launched a few months ago, and it's unlikely anyone in a late stage of cancer would have prevented their death if they went to the doctor instead of asking ChatGPT over the past few months. So for anyone affected by a ChatGPT misdiagnosis, it will take some time for the cancer to kill them. And note that it is the cancer that will kill them, not the AI.

On the other hand, a "self-driving car" driving into a tree and killing its occupants seems an obviously more direct case of death by AI than a user asking an AI if it has cancer and the AI saying no. And if you want to make the argument that Tesla drivers are supposed to have their hands on the wheel, then you have to also make the argument that ChatGPT users aren't supposed to use it for medical advice.


> That article doesn't mention a service that was open to users

So? Most AI isn't. It's not all consumer products.

> not to mention the point of the article is that AI had fewer false positives and false negatives than doctors, which would invalidate the premise of this argument

I can say that about Tesla FSD.

Press release overconfidence works both ways.


> So? Most AI isn't. It's not all consumer products.

You are moving the goal posts. Your argument was that non-Tesla AI has killed its users. There were no users of the service mentioned in the article, ergo none of them could have been killed by it.

> I can say that about Tesla FSD.

The difference is that Tesla FSD is actively used, and its false negatives and positives have actually killed people.

My argument was never that AI will not eventually kill people. It was that so far, Tesla AI has directly killed more people than any other AI.


Nah, you're moving them.

> Yes. Elon Musk has killed more people with his AI than anyone else has with theirs.

That's "people" not "users".


Yes, I'm talking about events that have actually happened. You're talking about hypothetical future deaths.

I don't disagree that eventually, AI will lead to both direct and indirect deaths. But so far, the only direct deaths from AI have been from Tesla AI.


I'm not in the medical field obviously but isn't "detection" different from "diagnosis"? Meaning detection alone does not provide a diagnosis.



>> Tesla CEO Elon Musk, who co-founded OpenAI, and Apple co-founder Steve Wozniak were among the other signatories.

I like Elon and his companies, but this is ridiculous. Autopilot AI has been killing people for years now and he always defends it.


Hell no


Neo-Luddism.

Like they are going to stop China or any other country outside the US or EU.

The most they can hope to do is force some companies to move offshore.

I wonder where all these people were when Elon Musk started releasing betas of FSD.

Self-appointed 'Center for AI and Digital Policy', nothing more to add.


If some half-baked regulation of AI research takes place in the US, most probably many corporations will just spin off some "new" startup somewhere less touchy about AI. Let's say Germany, France, or Ireland; it could be almost anywhere, as long as the money gets there to bootstrap a sufficiently effective replacement for any in-house research division. It could take time, or not; money isn't the problem now, and if regulation begins to cripple the market, suddenly you have a hungrier market with even less supply to satisfy demand, so the money gets hotter and will be pumped into AI research even faster.

I think these guys looking for regulations, stopping development, etc. don't know anything about economics. Everything they are doing will backfire quite soon.


> I wonder where all these people were when Elon Musk started releasing betas of FSD.

Objecting loudly and openly for the entire duration, including governments specifically saying the marketing was misleading.



