The messy secret reality behind OpenAI (technologyreview.com)
83 points by stillsut on April 12, 2020 | hide | past | favorite | 11 comments


An audio discussion of the contents of this article was front page HN back when this article came out: https://news.ycombinator.com/item?id=22453211

I'll go ahead and say what I said in that thread again re this article: "Person doing a PhD in AI here (I've seen all of OpenAI's research, been to their office a couple of times, know some people there) - tbh the piece was a pretty good summary of a lot of quite common, somewhat negative takes on OpenAI within the research community (such as that they largely do research based on scaling up known ideas, have at times hyped up their work beyond its merit, changed their tune to be for-profit, which is weird given they want to work in the public interest, and that despite calling themselves OpenAI they publish and open source code much less frequently than most labs -- and with a profit incentive they will likely publish and open source even less). The original article also presented the positive side (OpenAI is a pretty daring endeavor to try to get AGI by scaling up known techniques as they are, and people there do seem to have their heart in the right place)."


What I get from this article is that OpenAI is in many aspects similar to other research organizations, but with more resources and better PR. I think this article is reassuring for anyone doing AI research outside of the big labs. However, I also think that nothing that the article reports puts OpenAI in a particularly bad light; it's just the ordinary problems a research organization typically has.

(Note: GPT-2 communications were unfortunate and insincere, but I've witnessed similar, less high-profile communication blunders.)


Some recollections from past episodes of AI hype cycles: Soar was a massive project that spanned almost a decade and universities on multiple continents, with dozens of researchers and significant funding from DARPA. See also Cyc.

"Soar is a cognitive architecture, originally created by John Laird, Allen Newell, and Paul Rosenbloom at Carnegie Mellon University.

The goal of the Soar project is to develop the fixed computational building blocks necessary for general intelligent agents – agents that can perform a wide range of tasks and encode, use, and learn all types of knowledge to realize the full range of cognitive capabilities found in humans, such as decision making, problem solving, planning, and natural language understanding. It is both a theory of what cognition is and a computational implementation of that theory. Since its beginnings in 1983 as John Laird’s thesis, it has been widely used by AI researchers to create intelligent agents and cognitive models of different aspects of human behavior. The most current and comprehensive description of Soar is the 2012 book, The Soar Cognitive Architecture."

Source:

https://en.wikipedia.org/wiki/Soar_(cognitive_architecture)

https://en.wikipedia.org/wiki/Cyc


I read this when it came out and found it lame. It has the pretense of being an investigative hit piece, with all the length and gloss, but there's almost nothing to it. They come across as a bunch of painfully awkward geeks who are true believers and have been swimming in money. The reporter makes them look bad, but you can make anybody look bad. I'm an AGI skeptic, yet I thought she came out looking worse. A better or more experienced journalist would have dropped the story and moved on to something with some there there.


As I've followed OpenAI, the recurring theme is that the company's goals open up messy politics and philosophical debates rather than keeping the focus on the research. In some ways I can appreciate their retreat into controlled visibility so they can focus on the research. However, this only exposes more political and philosophical questions. They have to decide what "they want to be when they grow up" and filter out anything that gets in the way, even the criticism.


I think that AGI is a poorly understood concept, and whatever it would signify, we're a couple of paradigm shifts away from it. One thing that bothers me is how little concern there is for legal personhood, apart from popular action-film-style "rebellion" fears. (Let's remind ourselves of the 8 billion human-level intelligences we have right now, mostly not wreaking much havoc individually.)

Let's say that some organization builds an AGI that's conscious and capable of human-like thought. There's an argument to be made that at that moment, if we don't immediately grant the AGI the personal and political rights of citizens, the organization is essentially owning a slave. That is contrary to the rules of our societies and never ends well. It seems likely to me that, barring some immediate and fantastical gains from AGI that would allow the organization to "break" society, the AGI would soon be forfeited as property with no compensation.

Which makes a commercial venture to build AGI a dubious proposition.


Given that we don't even know whether AGI is logically possible (it probably isn't), it's unsurprising that a company founded on the idea will flounder insofar as it stays true to the goal.


I have to say I've never heard this argument before. Why would AGI be logically impossible? Didn't we humans evolve to possess AGI?


I don't know if AGI is possible or not but I do think this is a valid question. The AI risk people gloss over this fundamental question and proceed directly to the question of "AI safety". This is a major weakness in their argument.


Perhaps humans just aren't smart enough to design AGI? Let's see a real, working AGI roughly equivalent to a mouse. Then I'll believe that human level AGI might actually be possible.


"humans evolve to possess AGI" is based on a couple of assumptions:

1. humans evolved

2. human mind is reducible to the brain

3. intelligence is computable

None of these has been proven. They are all assumed, and they are the current fashion in science due to materialism.

