OpenAI sold its soul for $1B (2021) (onezero.medium.com)
182 points by georgehill on March 15, 2023 | 107 comments




The name "OpenAI" still really bugs me. But it seems likely that turning into a for-profit corporation was critical to OpenAI's subsequent success.

It let them raise enough money to pay for top-tier infrastructure. It also gave them stock options to offer when recruiting a top-tier research and engineering team. I don't think we would have GPT-4 and the OpenAI API if they hadn't turned into a for-profit company.

And honestly, compare what OpenAI has done for AI research to the impact of any nonprofit. What nonprofit has done the most for AI research? Stanford? MIT? I'm not sure what the top AI achievement from any nonprofit is, in the modern era. It's like nonprofits can do a good job of kickstarting a field (AlexNet, early PyTorch), but once enough money and attention goes into it, they have to hand the baton to for-profit companies.


It seems like most of the fundamental research was published by Google, not OpenAI. The transformer architecture was also designed by Google…

These are superficial observations, so feel free to correct me if I’m wrong.

Also, they bought ai.com recently if that makes you feel better :) maybe they will drop the “open” part of their name soon.


I don't disagree; Google has also made great AI advances, and arguably more fundamental ones, with the transformer, TensorFlow, and Keras.

But Google is also a for-profit, so that fits in with my thesis here that when corporations start having an incentive to put a lot of money into an area, the cutting edge research starts to happen in for-profit companies, and nonprofits have trouble keeping up.


The problem with for-profit is that it informs priorities and incentives, usually not in the direction of societal benefit. That’s why the phrase “selling one’s soul” is being applied.


Well, Adam Smith argued that companies pursuing their own selfish priorities and benefits works in the direction of societal benefit. (As long as all externalities like environmental pollution are priced in, but that isn't really relevant for OpenAI.)

It's really the bedrock of modern economics. Invisible hand of the market and all that.


Smith argued that this was in aggregate to the benefit of society, including competition between selfish businesses. He argued that monopolies are ultimately damaging to society.

The entity that first achieves real general AI has the potential to achieve a decisive strategic advantage that will allow it to shape the world in its image, free of competition. Traditional economics might not really apply anymore - the values that are built into the constructed intelligence will be paramount.

It won’t just all work out in the wash if we end up creating a superintelligence with values that are not to the benefit of humanity at-large.


> The entity that first achieves real general AI has the potential to achieve a decisive strategic advantage that will allow it to shape the world in its image, free of competition.

The focus on "general AI" is so so so shortsighted. You don't need general AI. Really, you don't want it. It's far better to have tools that you can control and direct that are specialized in specific areas. Specialized AI tooling is a force multiplier that can be leveraged however you want; general AI is an employee that might disagree with your goals.

I'm not scared of a superintelligence, I'm scared of AI being O'Brien's boot stomping on the face of humanity forever, in service to the corporations that invent it first.


Yes, the guy who hated landlords and considered rent-seeking immoral would love the enclosure of the commons of all human knowledge and rent-seeking on their imaginary IP rights.


Economics is literally a meme. Major corporations will go out of their way to harm society even if it reduces their profits.


It's not a "for-profit" (at least strictly, unless you are intentionally muddying the waters to support your argument). It's "capped-profit": https://techcrunch.com/2019/03/11/openai-shifts-from-nonprof...


The cap is so high as to be meaningless. It's for profit.


So high you can see straight through the bs.


I wonder: What’s the cap, and how does it imply behaviour that differs from that of a for-profit?…


Capped at a 100x return on investment...
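For a concrete sense of how such a cap works, here is a minimal sketch in Python; the $10M stake and the assumption that anything above the cap flows back to the nonprofit are illustrative, not OpenAI's disclosed terms:

    # Illustrative sketch of a capped-return structure (assumed numbers, not OpenAI's actual terms).
    investment = 10_000_000        # hypothetical investor stake
    cap_multiple = 100             # the reported 100x cap
    max_return = investment * cap_multiple  # the most this hypothetical investor could ever receive

    def split_payout(total_payout: int) -> tuple[int, int]:
        """Split a payout into (investor_share, nonprofit_share) under the assumed cap."""
        investor_share = min(total_payout, max_return)
        nonprofit_share = total_payout - investor_share
        return investor_share, nonprofit_share

    # A $2B payout: the investor is capped at $1B; the assumed remainder goes to the nonprofit.
    print(split_payout(2_000_000_000))  # (1000000000, 1000000000)

A cap that high only binds once returns reach roughly a hundred times the original funding, which is the point the commenters above are making.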


I'm seeing parallels to the concept of "attention" in AI. Perhaps we can build an attention function that isn't "make the most profit" and have a non-profit follow it.


Since the beginning, I've seen the "OpenAI" name as a giant piece of open-washing. It works just like green-washing, and unfortunately it works. I have no respect for this practice.


It would probably be too expensive now to rebrand due to the confusion it would cause. Honestly, maybe they should have called themselves netai or alphaai.


idk, 'DarkMoneyAI' has a ring to it :)


Makes sense. I'd sell my soul for much less, as would most of you. If they didn't sell their soul, that would be more of a surprise.


If anyone was still thinking about companies as saints, then this is a good moment to reconsider that. (This also applies to the Apple folks here).


Apple fan here - I have no illusions about them being a saint. I like that their profit motive aligns with my interests. Charge me a lot for quality and simplicity.

Google makes money not from selling me a good product, but by selling my information, and Microsoft seems determined to confuse and frustrate, profiting from lock-in rather than quality.

Yes, I'm concerned that Apple is selling ads.


The alignment between their profit and your interests ceases to exist the moment they separate you from your money, and you become trapped in their ecosystem.


> I like that their profit motive aligns with my interests.

It's a common expression I see around here, but you do not know their profit motives. You only know their marketing and hold a perception of them. Those are not the same thing, so there cannot be alignment.


By removing audio jacks from everything, Apple is not being my friend. I had to buy special hardware just to get their spartan tablet to play MIDI music: how do you get audio out without lag when the only port is needed to connect the keyboard? Certainly not from a headphone jack.


> By removing audio jacks from everything Apple is not my friend.

I agree that this was a terrible decision (although as a non-Apple user, I wouldn't care except that so many non-Apple companies followed Apple's lead here).

But to be fair, that's a product design decision, not really one with moral implications.


And most definitely not from Bluetooth


This is a strawman of Apple fans - the reasoning behind liking Apple is that, in theory, your incentives and their incentives align in a way that can never be true of e.g. Google. Apple makes money by making hardware/software that is worth the price premium, Google makes money by making me a more desirable target for advertisers.


> as would most of you.

I think we are okay with the selling of the soul; it was the rhetoric beforehand that they were good, honest, and open. This comes across as just another VC marketing pitch where, in the end, greed underlies everything. That is where the sourness comes from.

Call it a pivot.


"Do no evil"


Satoshi Nakamoto's soul remains intact.


He probably died. That's the most likely explanation.


Another explanation is that he has a lot more bitcoin outside the large stack and moving the big stack would crash the market as a whole.


How many anonymous whales are there?


How much is a whale? Not everyone needs to be a billionaire. He could have a couple million anonymously, and a nice full time job, and be under the radar.

I want to believe this, rather than that he lost the key, or that it's that other guy who has chosen not to be anonymous.


One day those coins will be sold, I think, and we'll see if bitcoin can go beyond Satoshi.


Or probably they wouldn't have survived, given how many resources they need to operate.

If they were not called OpenAI, such blog posts probably would not be written.

Given how far ahead they are, they can safely rename the company if they wish to do so.


A few comments have noted that it's common for non-profits to own for-profits (the Mozilla Foundation, college endowments, etc.).

What seems different here is that rather than funneling profits up to the non-profit, the intent seems to be to make payouts to fellow board members and employees:

> Only a minority of board members are allowed to hold financial stakes in the partnership at one time. Furthermore, only board members without such stakes can vote on decisions where the interests of limited partners and OpenAI Nonprofit’s mission may conflict—including any decisions about making payouts to investors and employees. [1]

IANAL, but has anyone seen this sort of arrangement before?

[1] https://openai.com/blog/openai-lp


Every for-profit pays its employees, right? I don't think the fact that they make these payments is unusual, they're just making it clear that partners with a stake in the for-profit recuse themselves when the nonprofit is approving these payouts. It's similar to a company CEO not voting when the board approves their own pay package, even though they're a board member.


I agree that payments on their own (e.g. salary, bonuses) don't seem counter to the non-profit's interest. But what if we're talking about $100B of equity -- equity that otherwise would have gone to the parent non-profit?


> Oren Etzioni, director of the Allen Institute for AI, also received the news with skepticism. “I disagree with the notion that a nonprofit can’t compete. […] If bigger and better funded was always better, then IBM would still be number one.”

So, fine, I accept this story is as old as dirt: well-funded startup wants money and power; plays in a zero-sum game against other well-funded companies; sells product with lovely, benevolent marketing copy.

Still, I find this comparison unconvincing. IBM is not number one anymore because it lost to competitors that were better capitalized (on the right products).

So, precisely which non-profit unseated IBM? Which non-profit is a disruptive tech leader with a global workforce? I really want it to be true, but it needs to actually be demonstrably true.


> If bigger and better funded was always better, then IBM would still be number one

Weak reasoning. Maybe being bigger and better funded is a necessary but not sufficient condition for success.


There are so few (almost no) attempts at starting nonprofits to do the kinds of things that compete with IBM, that they could have a greater success rate than startups and we still wouldn't have any evidence one way or the other.


That’s a fair point, and obviously the funding model doesn’t incentivize the creation of those types of organizations. I guess I remain skeptical (cynical?) that it could be successful.

As perhaps one counter example, I’m currently decked out in Patagonia gear. Not a tech company, of course. And so maybe the bigger question is - does the audience (customer) care about how its product is funded and built?

I argue they should, but do they? I see little evidence that’s the case. (See an extreme case: Meta.)


Exactly this. This is when I stopped thinking of Open AI as a potentially positive force.


They have just become another Silicon Valley VC scam: they bait-and-switched to a for-profit and pretend to care about 'AI safety' while simultaneously inventing competition fear-mongering as an excuse to close off the AI models, all without building in detectors or watermarks.

Thanks to Microsoft, they are no better than DeepMind. 'AI ethics' at Microsoft was recently eliminated, which tells you their real intentions with AI. Hence:

>> What are OpenAI's real intentions now? Are they tied to Microsoft's interests so much that they’ve forgotten their original purpose for “the betterment of humanity?”

Not just forgotten, but abandoned: a classic motte-and-bailey retreat from 'Open' AI's original mission. At this point, they need to be disrupted by 'Open AI' alternatives that are open source and surpass OpenAI's offerings, rendering them obsolete.


As I understand it, it's not true that there is no longer anyone working on AI ethics at Microsoft. Rather, Microsoft had more than one team working on AI safety and dismantled one of them. There is still a large "Office of Responsible AI". It seems like an internal reorganization, but it was reported as an abandonment of AI safety.


No lies were told. It'll be difficult to trust any other platform like that anymore. Microsoft is really despicable in cornering the dev market (GitHub, killing Atom in favor of VS Code, etc.). I'm glad I switched to Apple after 20+ years of using Windows products.

OpenAI, on the other hand, will basically destroy any openness from companies. Now, it'll be every company for itself.


Discussed at the time:

OpenAI Sold its Soul for $1B - https://news.ycombinator.com/item?id=28416997 - Sept 2021 (86 comments)


It is interesting to see news about Docker (open source fuelling a for-profit company) and OpenAI (non-profit research converted into a for-profit company) together on HN.

One of them has failed to capitalize on its user adoption to generate revenue and is struggling, while the other was accused of "selling their soul" for making a lot of money to fuel their research (and profits too).

The longevity of a company definitely depends on the money that is poured into it. Doing that in a sustainable way, without relying on regular "donations" (remember Wikipedia's campaigns) or raising successive rounds of capital (from VCs), is very important.

There must be a better business model that balances these without the company getting "corrupted" or "evil".


> the other was accused of "selling their soul" for making a lot of money to fuel their research (and profits too)

I don't think that's a correct characterization of the criticism of OpenAI. The criticism is more about it doing a bait-and-switch.


End of the day, it’s about survival.


The author seems to be of the opinion that AI should not be given to the general public and be kept in some sort of walled garden with tightly controlled supervised access.

I can't help but wonder how much time he thinks can realistically be bought with these types of restrictions... and also wonder if he's changed his mind at all since this was written.


Funnily enough OpenAI also believes the same.


“However, not long after, they decided to share the model after finding “no strong evidence of misuse.”

What about mal-use?


I had forgotten that OpenAI started as a non-profit. Elon Musk gave $100 million to a non-profit cause, and even he doesn't know how OpenAI became a for-profit company. I am not sure about the legality, but this transition seems like an unethical rug pull against the early contributors.

https://twitter.com/elonmusk/status/1636047019893481474


> I am not sure about the legality

non-profits can own for-profit businesses. They generally invest their endowments in...the stock market, and owning a whole company is just that at a larger scale.

A well-known example is Usenix, which took its networking sideline and turned it into the company UUnet, which let Rick Adams become a billionaire (for a while). I actually considered this scandalous and still do, but it was all legally above board. It ended up ensnared in the Worldcom fraud, which felt like poetic justice to me.


Admittedly, this is a somewhat hilarious tweet.


Not-for-profits can also vote to sell themselves if their bylaws don't prohibit it.


He fundamentally misunderstands what it means to be "non-profit" (or is being intentionally disingenuous; it's hard to tell with him). It doesn't mean you can't make a profit.


Their excuse is that scaling AI required scaling the money. And they realised this long ago. So they had to go commercial or give up the top spot.


A rebrand to AiCorp sounds pretty dystopian


> To summarize: The company “in charge” of protecting us from harmful AIs decided to let people use a system capable of engaging in disinformation and dangerous biases so they could pay for their costly maintenance. It doesn’t sound very “value-for-everyone” to me.

Ever since the AI hype started this year, one thing that's always really bugged me is the talk about "safety" around AI. Everyone is so worried about AI's ability to write fake news and how "dangerous" that can be, while forgetting that I can go on Fiverr and pay someone in India, China, etc. to pump out article after article of fake news for pennies on the dollar.

Also I hate the talk of "oh wow look how harmful the AI is, it made a naughty joke". I think of harm as being mugged, being beaten up, being shot. Harm is not some AI program telling a joke that could potentially offend someone.

All you end up with is an AI that is so kneecapped that it's barely useful outside of a select number of use cases. Can't write an article because it might be fake news, can't write an essay because it might be an assignment, can't solve that homework assignment because you might be cheating, I can't ask it to tell me a joke because the joke might be offensive.


Eh, there is far more at stake than you're giving it credit for. Yeah, you can go to India and get someone to write some bullshit, but that is far different from having an OAI/Microsoft service telling some kid to neck themselves.

So yeah, you're focusing on the low-hanging fruit of the 'wokeness bot' when the issue of trustworthy and safe model output is an absolutely huge problem that could affect everyone, and we have no really good solutions for it at this point.


> that is far different from having an OAI/Microsoft service telling some kid to neck themselves

Ok, I see what you are saying. I feel like with AI/GPT, we would almost have to change the concept of a "company being responsible for its software". Up until now, most software has had X number of inputs and Y number of outputs (click "get new mail" in Gmail, and new messages arrive, for example).

But what happens when the software you design can accept and return billions of possible parameters? There is simply no way of determining what the output is or could be and that's fundamental to the software.

The example I think of in my head is a user opening MS Word, typing out "F* YOU" and then sending a screenshot to Microsoft telling them "How could your software offend me like this?". Now obviously this is different from GPT, but it follows the same rough rule of "billions of possible inputs, billions of possible outputs".


> There is simply no way of determining what the output is or could be and that's fundamental to the software.

I would argue that releasing a product that has the potential to do harm and that you can't predict the behavior of is radically irresponsible and should not be done.

> The example I think of in my head is a user opening MS Word, typing out "F* YOU" and then sending a screenshot to Microsoft telling them "How could your software offend me like this?".

That's not even remotely comparable, because MS word didn't create the output. The user did.


>But what happens when the software you design can accept and return billions of possible parameters? There is simply no way of determining what the output is

Welcome to the trillion dollar AI safety question and the reason some experts are deeply concerned that we'll solve general intelligence long before we're ready to deal with the outcome of what general intelligence can do.

This is why we talk about the AI alignment issue. GPT-3/GPT-4 without RLHF in front of it is mostly a weird alien that doesn't behave in a generally useful manner. This is why ChatGPT/BingGPT took off recently: we put a pretend human mask in front of the monster. But behind that mask is the internet, jumbled up and thrown into a neural network blender. Unless they did a lot of filtering, it also contains all the things we'd consider generally bad - like, for example, a teacher telling your kids the proper method of smoking crack.


>absolutely huge problem

Please explain what the huge problem is. You haven't listed one. An AI saying something offensive to you is not a "huge problem". Just close your eyes. Walk away from the screen. "Problem" solved.


> Ever since the AI hype started this year, one thing that's always really bugged me is the talk about "safety" around AI. Everyone is so worried about AI's ability to write fake news and how "dangerous" that can be, while forgetting that I can go on Fiverr and pay someone in India, China, etc. to pump out article after article of fake news for pennies on the dollar.

It's as if these people don't even remember that India, China, etc. exist in the first place. Which is incredibly foolish. If you care about "safety" and you only focus your ire on tech companies based in the largest economy in the world (by nominal GDP), then you shouldn't be surprised if the rest of the world produces different AI algorithms that may have different definitions of "safe" - assuming that they're even remotely "safe" to begin with. And yes, I assume that the rest of the world will build AI models of their own, if only to avoid dependence on the United States. Baidu already plans to launch "Ernie Bot" soon.

Which means that when this happens...

>All you end up with is an AI that is so kneecapped that it's barely useful outside of a select number of use cases.

It won't even stop harmful content from being produced. The "fake news producer" will just go to Fiverr and pay someone in India, China, etc. to use prompt engineering skills to manipulate native AI models to pump out article after article of fake news for pennies on the dollar. Or the "fake news producer" will cut out all the intermediaries and use the AI models directly.


This also ignores the fact that the largest fake news producers are major news outlets like the NYT. Some random guy having an AI write bullshit will not have anywhere near the impact of all the major newspapers and TV news channels in the Western world collaborating, as they constantly do, to produce fake stories that push a carefully crafted narrative.

OpenAI's solution to the "misinformation problem" is to let the groups with the longest record of producing misinformation have total access to the uncensored AI, while everyone else gets the lobotomized version. It's totally incoherent.


"It's not about money. It's about a whole shit load of money."


Looking at the founders, you're assuming it had a soul.


Open source AI development is possibly the worst idea in history. Yes, let's just put a demon summoning circle in every household.


If demon-summoning circles are going to be a thing that exists, I'd much rather have them in every household than have them used exclusively by the rich and powerful.

But more than that, I'd rather they not be used by either group.


> If demon-summoning circles are going to be a thing that exists, I'd much rather have them in every household than have them used exclusively by the rich and powerful.

I generally agree with "democratization" of technology but this is one that should be available to as few people as possible (ideally 0). Treat it like a biological weapon or a nuclear stockpile, the area of effect will be so large that it doesn't really matter who uses it or who they point it at; everyone ends up suffering once it is activated.

This is true under both limited tool-AI (under human command, but powerful and weaponized) and existential risk paperclip AI scenarios.

A disappointing number of otherwise intelligent people are unwilling or unable to see how this technology will be able to inflict mass human suffering and death.


The time to stop that would have been the early 1940s. Once you invent computers, that's it.


Giving it exclusively to the most evil and cretinous money addicts is much worse than giving it to everyone.


Why is it worse for AI and not for nukes?


I understand that we probably disagree about this, but I don't think AI poses risks that are even remotely in the ballpark of the risks of nukes. If my neighbor has a nuke, he can (accidentally or deliberately) take out everyone around him with ease. AI can't.


At this exact moment? No. But what about when it gets used in computer worms, trained on malicious hacking techniques, spreading through networks around the globe and leaving a wake of destroyed infrastructure? This could happen as soon as this year. Modern defenses cannot stop dedicated offensive teams, and now those abilities can be automated and scaled. This could be launched by your neighbor.

Or how about slightly later, when a multimodal model is trained to run an autonomous weapon system? Anyone with access to a metal shop and electric motors will be able to produce terminator-like killing machines. Robotics still has some distance to go, but it's probably good enough to run a gimbal turret with superhuman accuracy and speed. A small, previously insignificant nation or even organization could retrofit existing platforms with such technology - guns that don't miss, missiles that ignore flares and chaff, point defenses that shoot incoming projectiles out of the sky - and use it to steamroll much larger powers.

Not to mention the power of persuasive communication itself. Tyrants are not indestructible dragons; they just know how to talk humans into killing other humans.

I'm half asleep and even I can see that these possibilities are already here, much less what will be possible in the coming months and years.


> Modern defenses cannot stop dedicated offensive teams and now those abilities can be automated and scaled. This could be launched by your neighbor.

That's already been true for years (aside from my neighbors being able to do it at an effective scale), no AI is needed. And AI wouldn't make it easier for my neighbors to develop this sort of machinery. So, in my view, that problem is orthogonal to AI.

> Anyone with access to a metalshop and electric motors will be able to produce terminator-like killing machines.

I don't think the thing that makes it difficult for my neighbor to build an effective autonomous killing machine is AI. You could build a very effective one using basic heuristics. The limitation is about actually building the machinery. So I don't think AI is a game-changer here, either. Also, you're implying AGI here -- which is something that doesn't exist, and is very unlikely to exist anytime soon.

> guns that don't miss, missiles that ignore flares and chaff, and point defenses that shoot incoming projectiles out of the sky, and use it to steamroll much larger powers.

Also something that can be done right now, outside of being completely infallible. AI won't bring infallibility with it, so I'm not seeing how that changes anything.

But here, you're no longer talking about individual action anyway, so it's a bit off-topic. You're talking about governmental or corporate action.

> Not to mention the power of persuasive communication itself.

Yep, current systems are clearly amplifiers of this.

But even if you're 100% right in your predictions, none of those things are nearly as bad as everyone having their own nuke.


>That's already been true for years (aside from my neighbors being able to do it at an effective scale), no AI is needed. And AI wouldn't make it easier for my neighbors to develop this sort of machinery.

Scale matters. When your neighbor can just type in the prompt "Destroy as much critical infrastructure as possible", that is as destructive as nukes.

I feel like you're just being lazy and unimaginative. I'm not willing to stake the future of humanity on that.


> When your neighbor can just type in the prompt "Destroy as much critical infrastructure as possible", that is as destructive as nukes.

True. When that's even a remote possibility of being something that could be done, then I may change my opinion.

> I feel like you're just being lazy and unimaginative.

Have I made you angry or something here?

I can imagine all of these sorts of doomsday scenarios right along with you. But, unless there's some sort of indication that they're anything but fantasy, it seems unwise to form opinions about reality on them.


You people really won't believe the leopard is real until it tears out your jugular, will you?

Comments like this are better proof than anything that humans will be trivial to replace.


I don't need a leopard to tear out my jugular to think it's real. I'd settle for just spotting one.


You keep arguing for keeping it exclusively in the hands of tyrants and sociopaths by pointing out how bad it would be if tyrants and sociopaths had it.


That is not at all what I have said, and if that's the impression you got then your reading skills are sorely lacking.


You just listed a bunch of ways a small number of people with AI could hurt, kill, or control a large number of people in the same breath as arguing for keeping it exclusively in the hands of those who currently have a proven history of hurting, killing or controlling others in order to hoard wealth.


I have already informed you that you are incorrect and given you direction to go brush up your reading skills. That was not an invitation for backtalk.

Which part of this do you not understand?


I understand perfectly what you said, and pointed out how it conflicted with what you had previously said. Now you are doubling down on trying to claim your contradictions are somehow my fault and following it up by acting like you're a bad parent or something with that backtalk comment.


Maybe the part where you seem to think you have some sort of authority to tell people what to do and not do?


That might be still a bit less evil than letting a single private company control it.


Part of the reason we're still here is because nukes are only controlled by a small number of powerful but mostly rational actors. If you give everyone nukes then Terrorist Timmy and Dumbass David will push the button 30 seconds after they get a hold of it.

https://www.youtube.com/watch?v=gA1sNLL6yg4


That's a valid point, but the assumption should be that, today, Timmy and David can no more afford the tools and facilities to build nukes than they can amass enough power to do anything harmful with an AI. At least for a while, hopefully.


"A while" might be 12 months given the pace we are at.

LLaMA is already going to be rather dangerous once malignant interests fine-tune it for social engineering, offensive security missions, astroturfing, and general emotional manipulation. That by itself will be enough to reshape the internet and remote interaction as we know it.


I agree on principle, but we already know for sure it will be used as it is in advertising, politics, news reporting, and pretty much every field in which there is constant need of new tools of mass manipulation. It's probably too late.


Wow, somebody in "AI safety/ethics" actually said the quiet part out loud! That they are so superstitious, ignorant, and cowardly that they equate language models with demons. What's up next, declaring reading and writing demonic?


It's literally a quote from Elon Musk right before he dumped in $100M to help OpenAI start.

https://techcrunch.com/2014/10/26/elon-musk-compares-buildin...

Maybe avoid flying off the handle before you've done so much as a Google search?


would GPT-3.5 and GPT-4 exist right now if they didn't?

I'm not one to hold up progress just so people can collect paychecks for a couple more months.


War is peace. Freedom is slavery. Open is closed


There are multiple ways of being open. Open as in open source, and open as in you can already play with GPT-4 while Google keeps people waiting for PaLM. There's also open like the LLaMA "release". If the model is not excellent (see BLOOM) it doesn't really matter it is open.

You can be sure that API-only access to GPT-3 had a major impact on AI in the last 2 years - what projects people worked on, what studies were being made, even dataset construction for other models. GPT-3 is an excellent data labeller.
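To make the "data labeller" point concrete, here is a minimal hedged sketch of labelling sentiment with the GPT-3-era completions endpoint (pre-1.0 openai Python client); the model name, prompt, and label set are illustrative choices, not anyone's actual pipeline:

    import openai  # pre-1.0 client; assumes OPENAI_API_KEY is set in the environment

    def label_sentiment(text: str) -> str:
        """Ask a GPT-3-era completion model to tag a sentence as positive, negative, or neutral."""
        prompt = (
            "Label the sentiment of the following sentence as positive, negative, or neutral.\n"
            f"Sentence: {text}\n"
            "Label:"
        )
        resp = openai.Completion.create(
            model="text-davinci-003",  # illustrative model choice
            prompt=prompt,
            max_tokens=3,
            temperature=0,
        )
        return resp["choices"][0]["text"].strip().lower()

    print(label_sentiment("The new API pricing is surprisingly reasonable."))

Labels produced this way can then feed the training sets of other, often smaller, models, which is the sense in which API-only access still shaped downstream work.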


"Open" has a clear meaning in tech. To me, being "open as in actually open" is a non-negotiable criterion for deserving to bear the name of "Open" whatever.


So any company which has released a product ever should be called "open"?


They had soul??


I would too. I'd sell my soul even for a mere $100 million. Where I firmly draw the line is $10 million. Just try and test me and you'll see.


Terrible, right? Don’t use it!

Me: goes right back to using it (cause it's priced right and I personally think they deserve every penny they earn)


It is good to recognize that they are as evil as every other big corp. I still use Windows and Google, buy from Amazon, etc., but I recognize that those companies maybe aren't a force for good. OpenAI said they would be different, but it turned out they weren't, and to inform people that they've changed we need articles like this.



