
OpenAI is 'open' in name only, so no. I don't think they have any plans to open full access to the public either, considering that their previous model (which ChatGPT builds upon) was sold to Microsoft for exclusive use:

https://en.wikipedia.org/wiki/GPT-3



It's really obnoxious for non-open companies to include "open" in the name. Stolen valour, basically.


Open as in "open for business" not as in "open source", you might say.


Maybe if you compare it to open source. But if you consider that many of these algorithms were historically invented by private companies and kept private/proprietary as a competitive advantage, I think the fact that OpenAI puts no unreasonable restrictions on who can use it makes it fairly "open".

Though I, too, would rather be able to run the model myself.


Publishing research is pretty normal for traditional companies. The pledge to make patents available is a small step towards open.

But I think the obvious and natural thing for a company in the business of training machine learning models that claims to be open to do is make the models themselves available.

OpenAI does roughly the opposite: not only do you not have access to the underlying models, even the API access you are given is to a model deliberately trained to avoid answering certain classes of queries.

To me that's the opposite of open; it's closed, restricted, and centrally controlled.

(Very impressive results tho!)


> Publishing research is pretty normal for traditional companies.

Published research is "open" to the extent that it is transparent, but it is not "open" in the sense that ordinary people can use and access it. Unless you are an AI researcher, half these papers (to be generous) might as well not exist.

My argument is from that perspective (the ability of the average person to use it): academic research only gives the illusion of openness.

Not only that but the training data is often -- but not always -- omitted from academic research. So reproducing the exact results they did is often out of reach without a significant investment in building your own collection of training data.

For example: Facebook and Google have both announced technology similar to OpenAI's, yet neither is usable out of the box (or at all, for practical purposes), whereas with OpenAI, despite it being "closed", I can get started in 5 minutes.

Contrast both of those with Stable Diffusion, which I think is miles ahead of DALL-E: its code and pre-trained weights are open as well as being very easy to use.


All good points. The Cambrian explosion of Stable Diffusion variants & uses is a good demonstration of the benefits of a more open system.


I think a more accurate name might be Alignment AI. [0]

I realize it's a charitable interpretation of their behavior, and I used to be very angry about them not being truly open. However, after playing with ChatGPT I think I am beginning to understand and even support their behavior. [1]

My personal sea change was realizing that giving dual-use tools to the global @everyone and hoping for the best might not be the greatest plan. I came to this realization thinking about bio-tech and GNC software, but it may apply to ML products as well. [2]

While I used to think universally/religiously "it should be open, will be one day anyway, etc," I now think about these things on a case by case basis.

[0] https://openai.com/alignment/

[1] https://news.ycombinator.com/item?id=33928400

[2] Imagine how many nodes this bot farm would have if it wasn't limited by the existing C&C bottleneck. ChatGPT is a productivity multiplier. This is productivity I want to put off multiplying as long as possible: https://news.ycombinator.com/item?id=34165350


Truly a shame. Good marketing tho



