
They were founded on the premise that some large player (specifically Google) would develop AGI, keep it closed, and maybe not develop it in the best interests (safety) of the public. The founding charter was essentially to try to ensure that AI was developed safely, which at the time they believed would be best done by making it open source and available to everyone (this was contentious from day one - a bit like saying the best defense against bio-hackers is to open source the DNA for Ebola).

What goes unsaid, perhaps, is that back then (before the transformer had even been invented, before AlphaGo), what people might have imagined AGI to look like (some kind of sterile super-intelligence) was very different from the LLM-based "AGI" that eventually emerged.

So, what changed? What fresh information warranted the change of opinion - the conclusion that open source was not the safest approach after all?

I'd say a few things.

1) As it turned out, OpenAI themselves were the first to develop a fledgling AGI, so they were not in the role they had envisaged of open sourcing something to counteract an evil closed-source competitor.

2) The LLM-based form of AGI that OpenAI developed was really not what anyone imagined it would be. The danger of what OpenAI developed, so far, isn't some doomsday "AI takes over the world" scenario, but rather that it's inherently a super-toxic chatbot (did you see OpenAI's examples of how it behaved before RLHF?!) that is potentially disruptive and negative to society because of what it is rather than because of its intelligence. The danger (and remedy) is not, so far, what OpenAI originally thought it would be.

3) OpenAI have been quite open about this in the past: Musk's departure as their major source of funds forced OpenAI to change how they were funded. At the same time (around GPT-2), it was becoming evident how extraordinarily expensive this unanticipated path to AGI would be to keep developing (Altman has indicated a cost of $100M+ to train GPT-3, maybe including hardware). They were no longer looking for a benefactor like Musk willing and able to donate a few tens of millions of dollars, but needed a partner able to put billions into the effort, which necessitated an investor expecting a return on investment, and hence the corporate structure change to accommodate that.


> some large player (specifically Google) would develop AGI, keep it closed, and maybe not develop it in the best interests (safety) of the public

https://youtu.be/1LVt49l6aP8



