Wasn't the whole idea behind OpenAI that it would actually be "open"? Or is the name of the organization now entirely a misnomer?
Not only are they not releasing Codex (and GPT-3), but in order to get access to the API you have to apply for access and be judged against a proprietary set of criteria that are entirely opaque.
Furthermore, I imagine that if you do any innovative work building on top of Codex (or GPT-3) they would control that work product, they would be able to cut you off from accessing your work product at any time if it suits them, and they would be able to build off of your work themselves, co-opting any unique value that you may create.
Why the hell should anyone building an AI business even want to work with them? Sure, it might accelerate your effort right at the beginning, but if you are unable to reproduce your results outside of their platform, you will always be beholden to them.
In a few years will we be reading stories about unfortunate entrepreneurs who had built their businesses on top of OpenAI only to have the rug pulled out from under them, like Amazon sellers whose product was cloned by Amazon Basics, or Twitter clients cut off from the API, or iOS apps made redundant by their core functionality being copied by Apple, or search-driven businesses circumvented by the information cards that Google displays directly in the search results...? Etc, etc.
OpenAI transitioned from non-profit to for-profit in 2019, took about $1 billion from Microsoft (there has been speculation that this was mostly in the form of Azure credits), and announced that Microsoft would be their preferred partner for commercializing OpenAI technologies: https://openai.com/blog/microsoft/
The name is now a complete misnomer.
There may still be some benefit for researchers to collaborate with them (same as with any of the other corporate research labs), but anyone trying to build a business on non-public APIs should obviously tread carefully.
They started as a non-profit and sort of claimed they would actually be open:
"As a non-profit, our aim is to build value for everyone rather than shareholders. Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world. We’ll freely collaborate with others across many institutions and expect to work with companies to research and deploy new technologies." - https://openai.com/blog/introducing-openai/
However, a couple paragraphs down might have been a clue to the likely future: "Sam, Greg, Elon, Reid Hoffman, Jessica Livingston, Peter Thiel, Amazon Web Services (AWS), Infosys, and YC Research are donating to support OpenAI."
Currently they are not really a non-profit, and they are mostly working with Microsoft.
They needed money in order to compete with Google and Facebook, so they switched the model to "investors make a maximum of 10x return and after that they lose their stake" or something like that.
As for being open, I think keeping potentially dangerous tech private for a while, while openly sharing the results of research, is prudent. The last thing I want is for some AI model to go public and then for someone to find a way to use it to generate a bunch of computer viruses or propaganda.
Agreed that not releasing models might be a good thing, I'm just pointing out that's not how they initially pitched the organization. I wonder if they might have gotten less favorable publicity when they launched if they just said "we're starting an AI company to compete with Google".
As for the business model, there's nothing wrong with it in principle, it's just not what they said they would do. There's no reason a well-funded nonprofit research organization needs to compete with Google and Facebook. They changed their funding model because they wanted to compete, not because they needed to. And it hasn't been very long since they said "Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact." You have to assume they knew from the start that they'd probably want to pivot to a business.
I think it's maximum of 100x return. So, it's for profit with practically unlimited return.
> Returns for our first round of investors are capped at 100x their investment (commensurate with the risks in front of us), and we expect this multiple to be lower for future rounds as we make further progress.
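To make the cap concrete, here's a minimal sketch of how a capped-return structure like that works. The numbers are hypothetical; only the 100x multiple comes from OpenAI's announcement.

```python
def capped_return(investment, gross_payout, cap_multiple=100):
    """Payout to an investor under a profit cap.

    The investor receives at most cap_multiple times the original
    investment; any value beyond the cap stays with the organization.
    (Hypothetical illustration of the structure, not OpenAI's actual terms.)
    """
    return min(gross_payout, investment * cap_multiple)

# A $1M investment with a 100x cap pays out at most $100M,
# even if the uncapped payout would have been $500M.
print(capped_return(1_000_000, 500_000_000))  # 100000000
```

Which is why "capped" is doing very little work here: a 100x ceiling only binds in extreme outcomes, so in practice it behaves like ordinary for-profit equity.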
Lenin believed in democracy. That is: "democracy of the elite". That is: democracy in the Politburo. That is: democracy among a handful of people. And no more.
By Lenin's definition, all those Democratic People's Republics really were democratic.
This is how America was conceived as well. Democracy was not meant to be universal and there were checks placed even on the limited portion of society who could vote.
The electoral college, for example, is set up this way. The idea is that electors—who were not democratically elected—could block a populist candidate.
Not the same. Lenin believed in a democracy of not more than eight or so people. The Bolsheviks won the Soviet elections, and Lenin still staged a coup.
The American Constitution establishes an indirect democracy: direct at the local level, indirect at the federal level (though by statute and amendment it eventually became direct for Congress).
> As we saw in 2016, this check failed.
Nonsense. That "failure mode" was part of the design. It has happened a few times. Working as intended.
> By Lenin's definition, all those Democratic People's Republics really were democratic.
Not by Lenin's definition, just by the definition of democracy and where it came from. We get democracy from the Greeks, where only the wealthy slave-owning elites could vote.
The founders also followed that model: only wealthy landowning whites could vote. In the first few elections, only about 4 or 5% of the population voted.
Isn't it crazy how propaganda has shifted your understanding of democracy? Democracy was never "of the people, by the people, for the people"; it was always about the elite few.
> The founders also followed that model: only wealthy landowning whites could vote. In the first few elections, only about 4 or 5% of the population voted.
The Founders did not write that into the Constitution.
> Isn't it crazy how propaganda has shifted your understanding of democracy? Democracy was never "of the people, by the people, for the people"; it was always about the elite few.
> The Founders did not write that into the Constitution.
Because the founders wanted voting to be controlled at the state level, not the federal level...
> Propaganda is what you're writing.
Are historical facts propaganda?
So what is the propaganda? That only landowning whites could vote? That democracy came from the ancient Greeks? That the ancient Greeks owned slaves and only allowed wealthy slave owners to vote?
I think OpenAI tried to be Open, but then ran into two problems:
First, turns out that neural nets get better when you throw more compute at them. OpenAI was full of researchers looking to push the state of the art, and the state of the art became less and less accessible to the average person, who could not afford the supercomputers necessary to train GPT-3. "Democratizing deep learning" became less important as a goal, since it conflicted with the true priority internally: improving the state of the art in deep learning.
Second, it looks like they lost the interest of their initial funders. The execs were left with a big money hole in their budget, and had to go looking for some way to fill it. Bingo bango bongo, and now they are a for-profit looking for income streams.
I don't feel critical of them. It's very hard to do something both altruistic and expensive. Money doesn't just flow to those looking to do good in the world.
> Not only are they not releasing Codex (and GPT-3), but in order to get access to the API you have to apply for access and be judged against a proprietary set of criteria that are entirely opaque.
I have yet to hear of one person who has gotten access without either a) being Twitter-notable in the ML space, or b) using a personal connection to jump the queue (I hit up someone a couple steps removed from OpenAI and got lucky). As far as I can tell they are just collecting email addresses to gauge interest, and are not even evaluating people who cold-apply through their form.
Please correct me if I'm wrong, though, I only know what I've heard within my own network! It's totally possible they're allowing a very very slow trickle of external unconnected people in.
Yup, it's only going to get worse. At least for now, it's difficult for these models to generate long news articles that are coherent.
> mean human accuracy at detecting articles that were produced by the 175B parameter model was barely above chance at ∼52% [...] Human abilities to detect model generated text appear to decrease as model size increases [...] This is true despite the fact that participants spend more time on each output as model size increases [1]
> for news articles that are around 500 words long, GPT-3 continues to produce articles that humans find difficult to distinguish from human written news articles [1]
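A quick back-of-the-envelope check (my own illustration, not from the paper) shows why ~52% counts as "barely above chance": with a plausible number of judgments per rater, that accuracy is statistically indistinguishable from coin-flipping. The sample size of 100 below is a hypothetical, since raters saw varying numbers of items.

```python
import math

def p_value_at_least(n, k, p=0.5):
    """One-sided binomial p-value: probability of getting at least
    k correct out of n judgments if each judgment is a fair coin flip."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

# If a rater labeled 100 articles and got 52 right, how surprising
# is that under pure guessing?
print(round(p_value_at_least(100, 52), 3))  # ≈ 0.38, entirely consistent with chance
```

A p-value around 0.38 means a guesser would do at least that well more than a third of the time, which is the sense in which humans "can't distinguish" the 175B model's output.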