OpenAI started as a non-profit, then went for-profit. Still owned by the big players... Something isn't right.
Is OpenAI just a submarine so the tech giants can do unethical research without taking blame??? It's textbook misdirection: non-profit status, "Open" in the name, hero-esque mission statement. How do you make the mental leap from "we're non-profit and we won't release things too dangerous" to "JK, we're for-profit, and now that GPT is good enough to use, it's for sale!!"? You don't. This was the plan the whole time.
GPT and facial recognition used for shady shit? Blame OpenAI, not the consortium of tech giants that directly own it. It may just be a conspiracy theory, but something smells very rotten to me. Like OpenAI is a simple front so big names can dodge culpability for their research.
I know it's trendy (and partly justified) to look down on OpenAI, but can you actually give any basis for this claim?
What kind of research is OpenAI doing that all the other big AI players (Google/DeepMind, FB, Microsoft) aren't also invested in?
And even if others are doing the same, what part of OpenAI's research do you consider unethical?
> What kind of research is OpenAI doing that all the other big AI players (Google/DeepMind, FB, Microsoft) aren't also invested in? And even if others are doing the same, what part of OpenAI's research do you consider unethical?
I believe all of them are doing unethical research, especially facial recognition. Notice the public backpedaling this week from all the big tech companies on this, too. By directing their cash through OpenAI they can avoid whatever fallout comes from unleashing things like GPT-3 on the world.
The most straightforward use case for GPT-3 is generating fake but believable text, a.k.a. spam. That's what it was designed to do. If you think fake news is a problem now, wait until someone is generating a dozen fake but believable news articles per minute by seeding GPT-3 with a few words and hitting a button.
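To make that concrete, here is a rough sketch of what "hitting a button" could look like, assuming beta access and the Python openai client from around the GPT-3 launch; the engine name, prompt, and API key here are placeholder assumptions of mine, not anything OpenAI has published for this purpose:

    # Sketch only: assumes beta API access and the openai 0.x Python client.
    import openai

    openai.api_key = "sk-..."  # hypothetical placeholder, not a real key

    # Seed the model with a headline fragment and let it continue.
    response = openai.Completion.create(
        engine="davinci",        # assumed GPT-3 base engine name
        prompt="BREAKING: Scientists announce that",
        max_tokens=300,          # roughly a few paragraphs of "article"
        temperature=0.8,         # higher values give more varied output
        n=12,                    # a dozen variants per request
    )
    for choice in response.choices:
        print(choice.text)

Each request like this completes in seconds, so "a dozen articles per minute" is not hyperbole.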
It's a conspiracy theory with some circumstantial evidence. We will probably never know either way, because who would admit to it if it were true?
> I believe all of them are doing unethical research, especially facial recognition.
Yes, all of them are doing facial recognition research... except OpenAI. So how exactly is OpenAI being used as a scapegoat so they can do that kind of research without public backlash?
> By directing their cash through OpenAI they can avoid whatever fallout comes from unleashing things like GPT3 on the world.
GPT-3 is not unethical research. It is what you decide to do with it, and how you decide to release it, that can potentially be unethical.
Also, OpenAI is just ahead of other labs because they have an insane compute budget and really talented people, but if you have been following the NLP news at all, you will see that your theory of OpenAI being a front for unethical research just makes no sense.
OpenAI released GPT-2 at 1.5 billion parameters, then NVIDIA released Megatron at 8B parameters, Google released T5 at 11B, and recently Microsoft did Turing-NLG at 17B. So they are clearly working on this under their own names and very much publicizing their work.
They redefined the org from non-profit to "capped-profit", whatever that means.
They're directly selling GPT-3 even though they originally said they wouldn't release it because of potential bad uses.
They paid MS a ton of money for hardware and got a huge equity investment from them.
And let's be honest here: the easiest and most straightforward use of GPT-3 is generating spam and low-quality clickbait. It's the only use case that requires zero effort. The whole thing is built to generate fake but believable text. It's DeepFakes for text.
I'm not saying the whole thing is nefarious and evil, just suggesting that OpenAI may not be what it seems. There are a lot of odd things going on with it. They should have done what universities do: spin off the technology into a separate for-profit company and sell it, instead of redefining their entire org structure to make money.
Wow, you just made the connection for me. GPT-2 was too dangerous to release, and now GPT-3 is so much better - is there no point at which things become too dangerous anymore? What was the conclusion on that one?
> What specifically will OpenAI do about misuse of the API, given what you’ve previously said about GPT-2?
> We will terminate API access for use-cases that cause physical or mental harm to people, including but not limited to harassment, intentional deception, radicalization, astroturfing, or spam; as we gain more experience operating the API in practice we expect to expand and refine these categories.
With Amazon having a moratorium on their Rekognition API, I wonder if a Cambridge Analytica-type event could happen to OpenAI, where someone abuses and escapes the terms of service.
Hmm, I don't love this. Either OpenAI has implicitly promised to monitor all its users, or it has adopted a "report TOS violations to us when they happen and we will judge" stance. Neither is a great road to go down.
The more fake news and AI-generated content there is, the more people will stop trusting social media. It will saturate past the tipping point, and humanity will need to find more genuine ways to communicate. So I say bring it on.
You replied within an hour, and now I'm seeing it 20 hours later.
We exchange short texts, we both lack context, we don't know what each of us feels at that moment, and we have absolutely no feedback about each other's mental states.
THIS isn't working! The conversational part of social media is out of sync with reality and with real timescales. Evolution never had to optimize for this, because there was never a need for it.
We struggle to understand each other; we just throw words at each other in passing and fill in the dots in our own minds, which is terrible because those dots are too far apart.
Signal is a needle in a haystack. It's not worth trying to keep fixing and reshaping the haystack so we don't keep losing the needle. Let's just admit this tool isn't working and move on to better alternatives.
Edit: clarifications. (The need for post-editing also supports my point, btw.)
Based on my experience with non-profits, they are just like regular corps except they don't pay taxes, and they're always attached to a for-profit interest. Real community organizations tend not to incorporate, since then you have to hire people to manage the corp or do it yourself.
This OpenAI work is almost certainly a way for these bigger corps to collude. Proving that would be impossible, though.
It's hard to do "shady" things with GPT-2 right now (speaking from experience [1]), but maybe GPT-3 might do better?
I could get poems to generate well. Tweets were a bit harder, and I don't think we're at the point where using a generative model to fool people would be cheaper than actually hiring someone to write fake news. (Also, shameless plug below.)
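For anyone who wants to reproduce that kind of experiment, here's a minimal sketch using Hugging Face's transformers library and the stock GPT-2 checkpoint; the seed prompt and sampling settings are arbitrary choices of mine, and output quality swings a lot with both:

    # Minimal sketch: sample from stock GPT-2 via Hugging Face transformers.
    # Requires: pip install transformers torch
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")  # smallest GPT-2

    samples = generator(
        "Roses are red, the night is long,",  # poem-ish seed prompt
        max_length=60,            # short continuation
        num_return_sequences=3,   # a few samples to cherry-pick from
        do_sample=True,           # sample instead of greedy decoding
        temperature=0.9,
    )
    for s in samples:
        print(s["generated_text"])

Even cherry-picking from several samples, anything longer than a poem takes real curation effort to look human, which is exactly the cost gap I mean.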
I think it's simply because OpenAI is fundamentally created and controlled by venture capitalists, and the tech they created turned out to be just too juicy an opportunity to not monetize.
I can’t say I blame them, when they realize they are sitting on the technological equivalent of a mountain of gold. What would you do?
> sitting on the technological equivalent of a mountain of gold. What would you do?
Greed is not justified. I get that people are weak and selfish, that they can't stop themselves. Some feel sympathy because they've been weak too. "Maybe it's justified," they like to think. "Everybody lies." But seriously, those who care so much about money and power that they can't do things in a civilized, respectable way are not yet adults, and should be hard-barred from the upper tiers of capitalism until they learn that life does not revolve around them.
I blame them for being shitty, and blame everyone around them for letting it happen.
Literal millennia of humans have achieved that, then gotten to the end of their lives only to look back and say, "I wish I'd focused on family and friends more."
Basically everyone on their deathbed says to focus on the experience, not the material. And everyone who takes that advice agrees.