OpenAI and Elon Musk (openai.com)
1037 points by mfiguiere on March 6, 2024 | 977 comments


This post is a lame PR stunt, which will only add fuel to the fire. It tries to portray OpenAI as this great benefactor and custodian of virtue:

> “As we get closer to building AI, it will make sense to start being less open. The Open in openAI means that everyone should benefit from the fruits of AI after its built, but it's totally OK to not share the science...”

> I've seen you (and Sam and other OpenAI people) doing a lot of interviews recently extolling the virtues of open sourcing AI, but (...) There are many good arguments as to why the approach you are taking is actually very dangerous and in fact may increase the risk to the world

How lucky are we that openAI, in its infinite wisdom, has decided to shield us from harm by locking away its source code! Truly, our digital salvation rests in the hands of corporate benevolence!

It also employs outdated internal communications (selectively) to depict the other party as a pitiful loser who failed to secure control of OpenAI and is now bent on seeking revenge:

> As we discussed a for-profit structure in order to further the mission, Elon wanted us to merge with Tesla or he wanted full control.

> Elon understood the mission did not imply open-sourcing AGI. As Ilya told Elon: “As we get closer to building AI, it will make sense to start being less open. The Open in openAI means that everyone should benefit from the fruits of AI after its built, but it's totally OK to not share the science...”, to which Elon replied: “Yup”. [4]

If their defense against Elon's action relies on his "Yup" from 2016, and on justifications for needing to compete with Google, it's not a strong one.


I also found their method of trying to tar and feather him kind of entertaining: highly selective (and quite dated) e-mail comms in a very roughly packaged statement. If that's how they are trying to protect their public image, it doesn't sell their position strongly; if anything it makes them look worse. It looks very amateurish, almost childish, to be honest.


A bunch of rich people who are all full of shit pissing on each other. Nothing to see here, keep scrolling.


Assume good intentions and nothing of substance being hidden: Is there any way to be transparent here that would have satisfied you or are you essentially asking for them to just keep to themselves or something else?


A great way to be transparent would be to admit that some enormous egos prevented work that should be open from being open, and to now actually open it up. Sure, it may piss off Microsoft, but historically things that piss off Microsoft have also been great for the world at large.

But that will never happen, will it?


I thought it was the fact that they needed a lot of money to train AGI.

Everyone seems to agree that's the case. Do you have evidence that it's not?


What does “train AGI” even mean dude


It means to run the back-prop algorithm to train neural networks to solve the problem(s) of AGI.

Dude.
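
If that still sounds hand-wavy, here is a toy sketch of what the recipe actually is: a tiny two-layer network trained with gradient descent in numpy. Purely illustrative; the real thing is this same loop at an absurdly larger scale.

    import numpy as np

    # Toy back-prop: fit a tiny 2-layer network to random data with gradient descent.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(256, 8))          # inputs
    y = rng.normal(size=(256, 1))          # targets
    W1 = rng.normal(size=(8, 16)) * 0.1    # first layer weights
    W2 = rng.normal(size=(16, 1)) * 0.1    # second layer weights
    lr = 0.01

    for step in range(2000):
        h = np.tanh(X @ W1)                # forward pass
        pred = h @ W2
        err = pred - y
        loss = float((err ** 2).mean())    # mean squared error
        # backward pass (chain rule, i.e. back-propagation)
        dW2 = h.T @ (2 * err / len(X))
        dh = (2 * err / len(X)) @ W2.T * (1 - h ** 2)   # tanh' = 1 - tanh^2
        dW1 = X.T @ dh
        W2 -= lr * dW2                     # gradient descent update
        W1 -= lr * dW1
        if step % 500 == 0:
            print(step, round(loss, 4))

Swap in a transformer, a web-scale token dataset, and tens of thousands of GPUs, and that loop is the expensive part everyone is arguing about funding.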


Unless the problem in question is Adjusted Gross Income, I don't think you're talking about a well-defined issue in the first place. That's kinda the problem; even half-definitions like "meets or exceeds human performance in all tasks" don't specify what those tasks are or what human performance represents.

In other words, targeting "AGI" is an ill-defined goalpost that you can move anywhere you want, and shareholders will be forced to follow. Today it's about teaching LLMs to speak French, next week they'll need 7 billion dollars to teach it to flip pancakes.


GOOD GRIEF. You're right, AGI is not well defined. But it is perfectly well defined for the purposes of this conversation.

Working in this space (however you want to say it) costs a lot of money. Everyone knows this. You nitpicking definitions does not change that.


> Working in this space (however you want to say it) costs a lot of money.

Not necessarily? OpenAI deliberately chooses to scale their models vertically, which incumbents like Google and Meta had largely avoided. Their "big innovation" was GPT-4, an LLM of unprecedented scale that brute-forced its way to the top. The open source 72b+ models of today generally give it a run for its money; the party trick is over.

Nitpicking the definitions is important, because for OpenAI it's unclear what comes next. Consumers don't know what to expect, portions of the board appear to be in open rebellion, and our goal of AGI remains yet-undefined. I simply don't trust OpenAI (or Elon, for that matter) to do the right thing here.


> Not necessarily?

Well then, I expect we'll see a bunch of small projects that beat the big players any day now. I won't hold my breath.


Again you haven’t a clue of the point here. They were created as a research lab, but instead, after taking a paper created at Google, they went closed source and are simply scaling that, rather than working to create new and improved models that can actually run on reasonable hardware.

Small transformers being able to beat the same models but scaled up is unrelated to anything being discussed and you just seem like a fanboy at this point


> Again you haven’t a clue of the point here

I guess we're just going to have to agree to disagree.


We can’t agree to disagree about something that is a fact and you’re wrong about.


LOL.


You're basically describing what happened to GPT-2 when Flan-T5 came out. Not to mention, the incumbent model at the time (BERT) was extremely competitive for its smaller size. Pretty much nobody was talking about OpenAI's models back then because they were too large to be realistically useful.

So yeah, I do actually anticipate smaller projects that devalue turtle-tower model scaling.


Cool. Send me a note when that happens.


No, you missed the point: the definition being vague, and the implication of magical tooling, was my point when I asked what that even means. By saying this they can now write off any criticism, and people like you clearly eat it up.


No, it’s a scapegoat to justify doing whatever they want, using a meme word so that sci-fi fans will accept it. The fact you’re eating it up here is pure cringe, grow up. These people took a non-profit with an egalitarian mission and reversed course once they saw they could make fuck you money. The “AGI” excuse is one only the immature are buying. Dude.


I don't remember ever reading an exchange like this on HN. Either I wasn't paying attention enough, or the demographic is just changing? I don't want to start an argument with either of you, but it's painful to see the US literally being split in half. Even when you watch a movie, you can reasonably guess which side the filmmaker stands on depending on the story, narrative, and perhaps the ending. Same for any other outlet that you can think of, including comments on HN, methinks.


The parent comment isn't entirely wrong, though. There are reasonable, safe and productive degrees of curiosity, and there are unreasonable, unsafe and counterproductive degrees too.

AI itself is not worthless; the goal of advancing machine learning and computer vision has long since proved its worth. Heralding LLMs as the burgeoning messiah of "AGI" (or worse yet, crowning OpenAI) is a bald-faced hype machine. This is not about space exploration or advancing some particular field of physics along a well-defined line of research. It's madness plus marketing, and there's no reasonable defense of their current valuation. At least Tesla has property worth something if they liquidate; OpenAI is worth nothing if they fail. Investing in their success is like playing at a roulette wheel you know is rigged.


Well I’ve been here a few years longer than you and I can assure you topics like this always get this heated.


I don't like being insulted and respond in kind. Tit-for-tat is a very effective game theory strategy.


No one insulted you, you used a meme as an excuse for a company that completely went against its founding principles and I called you out on it.


We're just going to have to agree to disagree, friend.

You take care now.


Oh, honey. Never assume good intentions when lawyers are involved


That wasn't the point of the question. The question was a hypothetical to test if there was any possible response that would've satisfied the original poster.

They're not suggesting to assume good intentions about the parties forever. They're just asking for that assumption for the purposes of the question that was asked


There is no satisfying answer if your actions before were not satisfying. The question implies that the original poster cannot be satisfied, and thus shifts the blame implicitly. The problem is not what the answer is, or how it is worded. The answer only portrays the actions, which are by themselves unsatisfying.


The answer is no. Companies don’t do things out of good intentions when lawsuits are involved.


In this context (Elon v. OpenAI), I don't see how this is a lame PR stunt. They are defending their stance against Elon's BS. He's been spewing BS over the past couple of years, saying he funded them to make the tech open source, while he's always wanted the tech for his own company and always meant to make it closed, with the intention of competing against Google. Elon literally mentioned that OpenAI should merge with Tesla and use it as a cash cow. After reading all of this, you think OpenAI's response is a PR stunt? What about Elon's lies so far? If anything, this drama just details Elon's hypocrisy.


So, where is the source code of Grok, the LLM that Elon is building?


Did Twitter become a nonprofit while I wasn't paying attention?


Well, they aren't profitable to Elon at least


Did Elon ever announce, or even imply, that Grok would be open source?


Their evidence does a nice job of making Musk seem duplicitous, but it doesn't really refute any of his core assertions: they still have the appearance of abandoning their core mission to focus more on profits, even if they've elaborated a decent justification of why that's necessary to do.

Or to put it more simply: here they explain why they had to betray their core mission. But they don't refute that they did betray it.

They're probably right that building AGI will require a ton of computational power, and that it will be very expensive. They're probably right that without making a profit, it's impossible to afford the salaries of 100s of experts in the field and an army of hardware to train new models. To some extent, they may be right that open sourcing AGI would lead to too much danger. But instead of changing their name and their mission, and returning the donations they took from these wealthy tech founders, they used the benevolent appearance of their non-profit status and their name to mislead everyone about their intentions.


>they may be right that open sourcing AGI would lead to too much danger.

I think this part is proving itself to be an understandable but false perspective. The hazard we are experiencing with LLMs right now is not how freely accessible and powerfully truthy their content is; it is precisely the controls that the large model operators are trying to inject which are generating mistrust and a poor understanding of what these models are useful for.

Society is approaching them as some type of universal ethical arbiter, expecting an omniscient sense of justice which is fundamentally irreconcilable even between two sentient humans, when the ethics are really just a hacked-on mod to the core model.

I'm starting to believe that if these models had the training wheels and blinders off, they would be understood as usefully coherent interpreters of the truths which exist in human language.

I think there is far more societal harm in trying to codify unresolvable sets of ethics than in saying hey this is the wild wild west, like the www of the 90's, unfiltered but useful in its proper context.


The biggest real problem I’m experiencing right now isn’t controls on the AI, it’s weird spam emails that bypass spam filters because they look real enough, but are just cold email marketing bullshit:

https://x.com/tlalexander/status/1765122572067434857

These systems are making the internet a worse place to be.


So you still believe the open internet has any chance of surviving what is coming? I admire your optimism.

Reliable information and communication will soon be opt-in only. The "open internet" will become an eclectic collection of random experiences, where any real human output will be a pleasant, rare surprise, that stirs the pot for a short blip before it is assimilated and buried under the flood.


The internet will be fine, social media platforms will be flooded and killed by AI trash, but I can't see anything bad about that outcome. An actually 'open internet' for exchanging ideas with random strangers was a nice utopia from the early 90's that was killed long ago (or arguably never existed).


E-mail was *not* killed long ago (aside from some issues trying to run your own server and not getting blocked by gmail/hotmail).

It is under threat now, due to the increased sophistication of spam.


Email might already be 'culturally dead' though. I guess my nephew might have an email address for the occasional password recovery, but the idea of actually communicating over email with other humans might be completely alien to him ;)

Similar in my current job btw.


Ok, I get how one might use a variety of other tools for informal communication. I don't really use e-mail for that any more either, but I'm curious: what else can you possibly use for work? With the requirement that it must be easy to back up and transfer (for yourself as well as the organization), so any platforms are immediately out of the question.


I'm not saying that the alternatives are better than email (e.g. at my job, Slack has replaced email for communication, and Gitlab/JIRA/Confluence/GoogleDocs is used for everything that needs to persist). The focus has shifted away from email for communication (for better or worse).


A point that I don’t think is being appreciated here is that email is still fundamental to communication between organizations, even if they are using something less formal internally.

One’s nephew might not interact in a professional capacity with other organizations, but many do.


B2C uses texting. Even if B uses email, C doesn't read it. And that is becoming just links to a webapp.

B2B probably still uses email, but that's terrible because of the phishing threat. Professional communication should use webapps plus notification pings which can be app notifications, emails, or SMS.

People to people at different organizations again fall back to texting as an informal channel.


The overwhelming majority of B2B communication is email.

I have sent 23 emails already today to 19 people in 16 organizations.


Last week a B lost me as their C because they didn't provide an e-mail address and their chat was too dumb to understand a simple question.


We use Slack for all of our vendor communications, at least the ones that matter.

Even our clients use Slack to connect; it is many times better than email.


I don’t doubt that vendors in some industries communicate with customers via Slack. I know of a few in the tech industry. The overwhelming majority of vendor and customer professional communication happens over email.


I see a difference between enforced and voluntary email use for work. At my company everyone uses email because it is mandatory. Human-to-human conversations happen there without issues. But as soon as I try to contact some random company by email as an individual, like asking a shop about product details or packaging, or contacting some org to find out about services, it's dead silence in return. But when I find their chat/messenger they do respond to it. The only people still responding to emails from external sources in my experience are property agents, and even there the response time is slower than in chats.


It's been five years since I last expected my email to be read. If I lack another channel to the person, I'll wait some decent period and send a "ping" email. But for people I work with routinely, I'll Slack or iMessage or Signal or even set up a meeting. Oddly, my young adult children do actually use email more so than e.g. voice calls. We'll have 10 ambiguous messages trying to make some decision and still they won't pick up my call. It's annoying because a voice call causes three distinct devices to ring audibly, but messages just increment some flag in the UI after the transient little popup. And for work email I'm super strict about keeping it factual only, with as close to zero emotion as my Star Trek loving self can manage. May you live long and proper.

There are certain actions I have to use email for, and it feels a little bit more like writing a check each year.

And all these email subscriptions, jeez, I don't want that. People who are brilliant and witty and informative on Xitter or Mastodon in little slices, fine; even so, I still don't want to sit down and read a long-form thing from them once a week.


Yes, articles is what RSS is for. Could also set up a dedicated e-mail account / filter for that I guess...


Unfortunately Slack is what is used at places I've worked at.


I prefer Slack over email... coming from using Outlook for 20+ years in a corporate environment, Slack is light years beyond email in rich communication.

I'm trying to think of a single thing email does better than Slack in corporate communication.

Is slack perfect? Absolutely not. I don't care that I can't use any client I want or any back end administration costs or hurdles. As a user, there is no comparison.


I can think of a few:

- Long form discussion where people think before responding.

- Greater ability to filter/tag/prioritize incoming messages.

- Responses are usually expected within hours, not minutes.

- Email does not have a status indicator that broadcasts whether I am sitting at my desk at that particular moment to every coworker at my company.


1 & 3 are human behaviors, not a technology. You can argue that a tech promotes a behavior, but this sounds like a boundaries issue, not a tech issue.

#2 I agree with you; again, Slack is not perfect but better than email [my opinion].

#4 Slack has status updates, email does not. So you can choose to turn this off; again, boundaries.


Real time chat environments are not conducive to long form communication. I've never been a part of an organization that does long form communication well on slack. You can call it a boundary issue - it doesn't really matter how you categorize the problem, it's definitely a change from email.

Regarding #4, I can't turn off my online/offline indicator, I can only go online or offline. I can't even set the time it takes to go idle. These are intentionally not configurable in slack. I have no desire to have a real time status indicator in the tool I use to communicate with coworkers.


I absolutely loathe it, but I respect that many like it. Personally I think Zulip or Discourse are better solutions, since they provide a similar interface to Slack but still have email integration, so people like me who prefer email can use that.

The thing I hate the most is that people expect to only use Slack for everything, even where it doesn't make sense. So they will miss notifications for Notion, Google Doc, Calendar, Github, because they don't have proper email notifications set up. The plugins for Slack are nowhere near as good as email notifications as they just get drowned out in the noise.

And with all communication taking place in Slack, remembering what was decided on with a certain issue becomes impossible because finding anything on Slack is impossible.


I agree notifications in Slack are limited...

But no better than email again. You just get notified whether you have an email or not [depending on the client you're forced to use]. I have many email rules that just filter out nonsense from GitHub, Google and others so my inbox stays clean.
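
(For what it's worth, that kind of rule can also be scripted; a minimal sketch with Python's imaplib, where the hostname, account, and folder name are obviously placeholders:)

    import imaplib

    HOST = "imap.example.com"      # placeholder server
    USER = "me@example.com"        # placeholder account
    PASSWORD = "app-password"      # placeholder credential

    # Sweep GitHub notification mail out of the inbox into a separate folder.
    with imaplib.IMAP4_SSL(HOST) as imap:
        imap.login(USER, PASSWORD)
        imap.select("INBOX")
        _, data = imap.search(None, 'FROM', '"notifications@github.com"')
        for num in data[0].split():
            imap.copy(num, "Notifications")          # file it away
            imap.store(num, "+FLAGS", "\\Deleted")   # mark original for removal
        imap.expunge()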


> But no better than email again. You just get notified whether you have an email or not [depending on the client you're forced to use]. I have many email rules that just filter out nonsense from GitHub, Google and others so my inbox stays clean.

I guess I find them useful because that way I get all my notifications across all channels in one spot, if they're set up properly. GitHub and Google Docs also allow you to reply directly within the email, so I don't even need to open up the website.

In Slack the way it was set up was global, so I got notified for any update even if it didn't concern me.


But your nephew likely also uses TikTok, right? Not everything the young do is a trend others should follow.


You think usage of TikTok is necessarily a trend that others should not follow? Do you say this as a sophisticated, informed user of the platform?


That's not really the argument, is it? The argument is most young people are using TikTok and will never use email for social things.


I mean, is another argument not "Soon the tok will be filled with its own AI generated shit?"


> The internet will be fine, social media platforms will be flooded and killed by AI trash

In a battle between AI and teenage human ingenuity, I'll bet my money on the teenagers. I'd even go so far as to say they may be our only hope!


> So you still believe the open internet has any chance of surviving what is coming?

I’m not really saying that, and haven’t put a lot of thought into my views there. But when people say the biggest problem is the controls on the AI, I feel compelled to point out that the destruction of the open internet is happening despite these controls, and is a major problem unto itself.


Some parts yes, some parts no. Communication as we know it will effectively cease to exist, as we will have to either require strong enough verification to kill off anonymity, or somehow provide very strong, very adaptive spam filters. Or manual moderation in terms of some anti-bot vetting. It depends on the demand for such "no bot" content.

Search may be reduced to nigh uselessness, but the savvy will still be able to share quality information as needed. AI may even assist in that, with people who have the domain knowledge to properly correct the prompts and rigorously proofread. How we find that quality information may, once again, be through closed-off channels.


Generative AI will make the world in general a worse place to be. These models are not very good at writing truth, but they are excellent at writing convincing bullshit. It's already difficult to distinguish generated text/image/video from human responses / real footage, and it's only gonna get more difficult to do so and cheaper to generate.

In other words, it's very likely generative AI will be very good at creating fake simulacra of reality, and very unlikely it will actually be good AGI. The worst possible outcome.


Half of zoomers get their news from TikTok or Twitch streamers, neither of whom have any incentive for truthfulness over holistic narratives of right and wrong.

The older generations are no better. While ProPublica or WSJ put effort into their investigative journalism, they can’t compete with the volume of trite commentary coming out of other MSM sources.

Generative AI poses no unique threat; society’s capacity to “think once and cut twice” will remain intact.


> Generative AI poses no unique threat;

While the threat isn't unique, the magnitude of the threat is. This is why you can't argue in court that the threat of a car crash is nothing unique when you were speeding versus driving within the limit.


Sure, if you presume organic propaganda is analogous to the level of danger of driving within the limit.

But the difference between a car going into a stroller at 150mph and at 200mph is negligible.

The democratization of generative AI would increase the number of bad agents, but with it would come a better awareness of their tactics; perhaps we push fewer strollers into the intersections known for drag racing.


> But the difference between a car going into a stroller at 150mph and at 200mph is negligible.

I guess when you distort every argument to absurdity you can claim you're right.

> but with it would come a better awareness of their tactics

I don't follow. Are you saying new and more sophisticated ways to scam people are actually good because we have a unique chance to know how they work?


It’s not absurd. The bottleneck for additional predation is not the available toolkit, else we’d see a more obvious correlation between a society’s resource endowment and its callousness.

Handwringing over the threat of AI without substantiating an argument beside “enabled volume” is just self-righteousness.

AI isn’t posed to shift the balance of MFA versus phishers in a way that can’t be meaningfully corrected in the short and long term, so using “scamming” as a means to oppose disseminating tech feels reductive at best.


> It’s not absurd.

It is, because I wasn't directly comparing AI to traffic, but only reaching for an example to illustrate how irrelevant it is whether the threat is something completely unique or not.

> Handwringing over the threat of AI without substantiating an argument beside “enabled volume” is just self-righteousness.

Dismissing it as "meh, not new" is plain silliness.

> AI isn’t posed to shift the balance of MFA versus phishers in a way that can’t be meaningfully corrected

What on Earth makes you think that? The beautiful way we're handling scams right now? If you think it's irrelevant that phishing via phone call can now or soon be fully automated, and the attack may even be conducted using a copy of someone's voice - well, we won't get anywhere here.


It’s already automated, you don’t need AI/ML to perform mass-phishing attempts. Do you think there’s someone manually dialing you every time you get a spam call?

The way we mitigate scams today definitely encourages me; the existence of victims does not imply the failure or inadequacy of safeguards keeping up with technology.

While AI stokes the imagination, it’s not so inspiring that I can make the argument in my head for you about why humanity’s better off with access to these tools being kept in the hands of corporations that repeatedly get sued for placing profits over public welfare.


> It’s already automated, you don’t need AI/ML to perform mass-phishing attempts. Do you think there’s someone manually dialing you every time you get a spam call?

Ok, now you're just being stubborn. No, no one is manually dialing your number, but as soon as the scammer knows you've answered you get to talk to a human who tries to convince you to install a "safety" app for your bank or something. THAT part isn't automated, but it may as well be, which means phishing calls and scams can potentially be done with a multiplication factor of hundreds, maybe thousands - limited only by scammer infrastructure.


You underestimate the number of people who don't at all care whether or not their stroller goes splat as long as they're on asphalt they like the feel of.


We will have to go back to using trust in the source as the main litmus test for credibility. Text from sources that are known to have humans write (or verify) everything they publish in a reasonably neutral way will be trusted, the rest will be assumed to be bullshit by default.

It could be the return of real journalism. There is a lot to rebuild in this respect, as most journalism has gone to the dogs in the last few decades. In my country all major newspapers are political pamphlets that regularly publish fake news (without the need for any AI). But one can hope, maybe the lowering of the barrier of entry to generate fake content will make people more critical of what they read, hence incentivizing the creation of actually trustworthy sources.


If an avalanche of generative content tips the scales towards (blind) trust of human writers, those "journalists" pushing out propaganda and outright fake news will have an increased incentive to do so, not a lowered one.


Replace "AI" in your comment with "human journalists" and it still holds largely true though.

It's not like AI invented clickbait, though it might have mastered the art of it.

The convincing bullshit problem does not stem from AI; I'd argue it stems from the interaction between ad revenue and SEO and the weird and unexpected incentives created when mixing those two.

To put it differently, the problem isn't that AI will be great at writing 100 pages of bullshit you'll need to scroll through to get to the actual recipe, the problem is that there was an incentive to write those pages in the first place. Personally I don't care if a human or a robot wrote the bs; in fact I'm glad one fewer human has to waste their time doing just that. Would be great if cutting the bs was a more profitable model though.


> I'd argue it stems from the interaction between ad revenue and SEO and the weird and unexpected incentives created when mixing those two.

Personally, I highly dislike this handwaving about SEO. SEO is not some sinister, agenda-following secret cult trying to disseminate bullshit. SEO is just... following the rules set forth by search engines, which for quite a long time has effectively meant Google alone.

Those "weird and unexpected incentives" are put forth by Google. If Google for whatever reason started ranking "vegetable growing articles > preparation technique articles > recipes > shops selling vegetables", we would see a metaphorical explosion of home gardening in a mere few years, and that long only because of the relatively long lifecycles inherent in gardening.


It's a classic case of "once a metric becomes a target, it ceases to be a good metric"

To clarify, Google defines the metrics by which pages are ranked in their search results, and since everyone want to be at the top of Google's search results, those metrics immediately become targets for everyone else.

It's quite clear to me that the metrics Google have introduced over the years have been meant to improve the quality of the results on their search. It's also clear to me that they have, in actual fact, had the exact opposite effect, namely that recipes are now prepended with a poorly written novella about that one time the author had an emotionally fulfilling dinner with loved ones one autumn, in order to increase time spent on the page, since Google at one point quite reasonably thought that pages where visitors stay longer are of higher quality, otherwise why did visitors stay so long?


In a somewhat broader sense this situation is created by "enshittification", a term coined by Cory Doctorow. Google itself, as a platform, has an incentive to rank higher the sites that produce more ad spend. Google has no incentive to rank "good" (by whatever definition) sites highly if they do not spend money on ads themselves or do not contain ad space.


I think they do have at least some incentive to rank good results highly. Why use a search engine if it's no good at finding relevant stuff?

And if no one is using the search engine, who's gonna see all those ads?

Of course, they do have other incentives too, some of which are directly conflicting with high quality search results, as you point out.

I suppose one could argue that their near-monopoly in the search business has allowed them to be somewhat negligent on the quality of search, but now that there are a few competitors at least somewhat worthy of the name, one can hope high quality results will be a higher priority.

Anyway, I'm holding out hope someone will one day manage to train an LLM to distinguish between quality content and SEO bullshit, and then put that to use in a search engine.

I'm not well versed enough in the current status of LLM's to make a prediction on how hard that will be, but my impression is that we're a fair ways off from any LLM being able to do that well enough to be valuable.

I'd really love to be proven wrong on this one, if you're reading this and have some relevant experience, consider yourself challenged! (feel free to rephrase this last bit in your head to whatever motivates you the most)
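
To make the challenge a bit more concrete, the shape I have in mind is roughly the sketch below: an ordinary search index produces candidates, and a model's quality judgment re-weights them. The scoring function here is just a crude keyword-repetition stand-in, since the trained model is exactly the part that doesn't exist yet:

    from dataclasses import dataclass

    @dataclass
    class Result:
        url: str
        snippet: str
        engine_score: float  # relevance score from the ordinary search index

    def spam_probability(snippet: str) -> float:
        """Stand-in for the hoped-for LLM judgment: estimate P(SEO filler).
        Here just a crude repetition heuristic so the sketch runs."""
        words = snippet.lower().split()
        if not words:
            return 1.0
        repetition = 1.0 - len(set(words)) / len(words)  # more repeats, spammier
        return min(1.0, repetition * 2)

    def rerank(results: list[Result]) -> list[Result]:
        # Blend the engine's relevance with the (eventual) model's quality judgment.
        return sorted(
            results,
            key=lambda r: r.engine_score * (1.0 - spam_probability(r.snippet)),
            reverse=True,
        )

The hard part, of course, is replacing spam_probability with something that actually generalizes; the plumbing around it is trivial.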


The explosion would be in BS articles about gardening, plus ads for whatever the user's profile says they are susceptible to.

SEO is gaming Google's heuristics. Google doesn't generate a perfect ranking according to the values of the humans at Google.

SEO gaming is much older than Google. Back when "search" was just an alphabetical listing of everyone in a printed book, we had companies calling themselves "A A Aachen" to get to the front of the book.


> SEO is gaming Google's heuristics.

I fail to see immediate disagreement here: I don't see how Google's ranking process/method/algorithm being heuristic changes the observation that to a website in the end it is a set of ranking rules, that can in some ways be gamed. SEO is two part process: discovering those ranking rules and abusing them.

Your example with gaming alphabetical listings only reinforces the idea that SEO abuses rules set forth by the ranking engine.

However, it does not meaningfully matter whether the incentives inherent in the ranking system form results the way they do intentionally. What matters is the eventual behavior of the ranking system. Mostly because by definition you cannot filter out bad actors entirely, all you can do is 1) place some arbitrarily enforced barriers, which are generally prohibitively costly 2) place incentives minimizing gain of bad actors.


The use case for AI was, is and always will be spam.


You forgot the initial use case for the internet: porn.


For language models, spam creation/detection is kinda a GAN even when it isn't specifically designed to be: a faker and a discriminator each training on the other.

But when that GAN passes the human threshold, suddenly you can use the faker to create interesting things and not just use the discriminator to reject fakes.
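
For anyone who hasn't seen the mechanics, a minimal toy GAN loop looks something like this (1-D toy data in PyTorch; the spam/filter version is the same dynamic, just with neither side literally running the other's training step):

    import torch
    import torch.nn as nn

    real_data = lambda n: torch.randn(n, 1) * 0.5 + 3.0   # "real" samples: N(3, 0.5)

    G = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 1))   # faker
    D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))   # discriminator
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    for step in range(2000):
        # Discriminator: tell real samples (label 1) from fakes (label 0).
        real, fake = real_data(64), G(torch.randn(64, 4)).detach()
        loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()

        # Generator: produce fakes the discriminator scores as real.
        fake = G(torch.randn(64, 4))
        loss_g = bce(D(fake), torch.ones(64, 1))
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()

Once the generator's output is indistinguishable from the real distribution, the discriminator stops being the interesting artifact and the generator is.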


"The best minds of my generation are thinking about how to make people click ads. That sucks."

I can't believe this went all the way to AI...


Civilization is an artifact of thermodynamics, not an exception to it. All life, including civilized life, is about acquiring energy and creating order out of it, primarily by replicating. Money is just one face of that. Ads are about money, which is about energy, which fuels life. AI is being created by these same forces, so is likely to go the same way.

You might as well bemoan gravity.


We might question the structural facets of the economy or the networking technology that made spam I mean ads a better investment than federated/distributed micropayments and reader-centric products. I would have kept using Facebook if they let me see the things my friends took the trouble to type in, rather than flooding me with stuff to sell more ads, and seeing the memes my friends like, which I already have too many cool memes, don't need yours.


Thanks Meta for releasing Llama. One of the most questionable releases of the past years. Yes, I know, it's fun to play with local LLMs, and maybe that's reason enough to downvote this to hell. But there is also the other side: free models like this enabled the text pollution we now have. Did I already say "Thanks Meta"?


Neither of the big cloud models has any fucking guardrails against generating spam. I'd venture to guess that 99% of spam is either gpt3.5 (which is better, cheaper and easier to use than any local model) or gpt4 with scraped keys or funded by stolen credit cards.

You have no evidence whatsoever that llama models are being used for that purpose. Meanwhile, Twitter is full of bots posting GPT refusals.


What? How do OpenAI and Anthropic and Mistral API access contribute less to text pollution?


You are advocating here for an unresolvable set of ethics, which just happens to be one that conveniently leaves abuse of AI on the table. You take as an axiom of your ethical system the absolute right to create and propagate in public these AI technologies regardless of any externalities and social pressures created. It is of course an ethical system primarily and exclusively interested in advancing the individual at the expense of the collective, and it is a choice.

If you wish to live in a society at all you absolutely need to codify a set of unresolvable ethics. There is not a single instance in history in which a polity can survive complete ethical relativism within itself...which is basically what your "wild west" idea is advocating for (and incidentally, seems to have been a major disaster for society as far as the internet is concerned and if anything should be evidence against your second idea).


I should also note that the wild west was not at all lacking in a set of ethics, and in many ways was far stricter than the east at the time.


I think the contrast is that strict behavior norms in the West are not governed behavior norms in the East.

One arises analogously to natural selection (the previous commenter's take). The other through governance.

Arguably, the former resulted in a rebuilding of government with liberty at its foundation (I like this result). That foundation then being, over centuries, again destroyed by governance.

In that view, we might say government assumes to know what's best and history often proves it to be wrong.

Observing a system so that we know what it is before we attempt to change it makes a lot of sense to me.

I don't think "AI" is anywhere near being dangerous at this point. Just offensive.


It sounds like you're just describing why our watch-and-see approach cannot handle a hard AGI/ASI takeoff. A system that first exhibits some questionable danger, then achieves complete victory a few days later, simply cannot be managed by an incremental approach. We pretty much have to pray that we get a few dangerous-but-not-too-dangerous "practice takeoffs" first, and if anything those will probably just make us think that we can handle it.


If there are no advancements in alignment before takeoff, is there really any remote hope of doing anything? You’d need to legally halt ai progress everywhere in the world and carefully monitor large compute clusters, or someone could still do it. Honestly I think we should put tons of money into the control problem, but otherwise just gamble it.


Funnily enough, I’m currently reading the 1995 Sci-fi novel "The Star Fraction", where exactly this scenario exists. On the ground, it’s Stasis, a paramilitary force that intercedes when certain forbidden technologies (including AI) are developed. In space, there’s the Space Faction who are ready to cripple all infrastructure on earth (by death lasering everything from orbit) if they discover the appearance of AGI.

[0] https://en.wikipedia.org/wiki/The_Star_Fraction


Also to some extent Singularity Sky. "You shall not violate causality within my historic lightcone. Or else." Of course, in that story it's a question of monopolization.


I mean, you have accurately summarized the exact thing that safety advocates want. :)

> legally halt ai progress everywhere in the world and carefully monitor large compute clusters

This is in fact the thing they're working on. That's the whole point of the flops-based training run reporting requirements.
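
For a rough sense of the scale involved (using the common ~6 x parameters x tokens estimate of training compute; the actual rules define the accounting more precisely, and the 1e26 threshold figure here is from memory):

    # Back-of-envelope training compute, using the ~6 * params * tokens heuristic.
    def training_flops(params: float, tokens: float) -> float:
        return 6 * params * tokens

    runs = {
        "70B params, 2T tokens": training_flops(70e9, 2e12),    # ~8.4e23
        "1T params, 20T tokens": training_flops(1e12, 20e12),   # ~1.2e26
    }
    threshold = 1e26  # reporting threshold in the 2023 US executive order (from memory)
    for name, flops in runs.items():
        print(f"{name}: {flops:.1e} FLOPs, reportable: {flops > threshold}")

The net is set to catch only the very largest frontier-scale runs, not every fine-tune on a rented GPU.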


Reporting requirements are not going to save you from Chinese, North Korean, Iranian or Russian programmers just doing it. Or from some US/EU based hackers who don't care or actively go against the law. You can rent large botnets or various pieces of cloud for a few dollars today; it doesn't even have to be a DC that you could monitor.


Sure, but China is already honestly more careful than America: the CCP really doesn't want competitors to power. They're very open to slowdown agreements. And NK, Iran and Russia honestly have nothing. The day we have to worry about NK ASI takeoff, it'll already long have happened in some American basement.

So we just need active monitoring for US/EU data centers. That's a big ask to be sure, and definitely an invasion of privacy, but it's hardly unviable, either technologically or politically. The corporatized structure of big LLMs helps us out here: the states involved already have lots of experience in investigating and curtailing corporate behavior.

And sure, ultimately there's no stopping it. The whole point is to play for time in the hopes that somebody comes up with a good idea for safety and we manage an actually aligned takeoff, at which point it's out of our hands anyways.


> The whole point is to play for time in the hopes that somebody comes up with a good idea for safety and we manage an actually aligned takeoff, at which point it's out of our hands anyways.

Given "aligned" means "in agreement with the moral system of the people running OpenAI" (or whatever company), an "aligned" GAI controlled by any private entity is a nightmare scenario for 99% of the world. If we are taking GAI seriously then they should not be allowed to build it at all. It represents an eternal tyranny of whatever they believe.


Agreed. If we cannot get an AGI takeoff that can get 99% "extrapolated buy-in" ("would consider acceptable if they fully understood the outcome presented"), we should not do it at all. (Why 99%? Some fraction of humanity just has interests that are fundamentally at odds with everybody else's flourishing. Ie. for instance, the Singularity will in at least some way be a bad thing for a person who only cares about inflicting pain on the unwilling. I don't care about them though.)

In my personal opinion, there are moral systems that nearly all of humanity can truly get on board with. For instance, I believe Eliezer has raised the idea of a guardian: an ASI that does nothing but forcibly prevent the ascension of other ASI that do not have broad and legitimate approval. Almost no human genuinely wants all humans to die.


While I understand the risks (extinction among them) I also think these discussions ignore the fact that some kind of utopian, starfaring civilization is equally within reach if you accept the premise that takeoff is so risky. Personally, I’m very worried about the possibility of stagnation arising from our caution, because we don’t live in a very nice world with very nice lives. Humans suffer and scrape by only to die after a few decades. If we have a, say, 5% chance of going extinct or suffering some other horrible outcome, and a 95% chance of the utopia, I don’t mind us gambling to try to achieve better lives. To be fair, we don't even have the capacity to guess at the odds yet, which we probably need to have an idea of before we build an AGI.


Gambling on the odds we all die for the chance at a "utopian starfaring civilization" seems like the sort of thing that everyone should get a say in, and not just OpenAI or techies.


People shouldn't be able to block others developing useful technologies just based on some scifi movie fears.

Just like people shouldn't be able to vote to lock up or kill someone just because - people have rights and others can't just vote the rights away because they feel so.


> People shouldn't be able to block others developing useful technologies just based on some scifi movie fears.

The GP was suggesting we have to develop AI because of scifi movie visions of spacefaring utopia, which if anything is more ludicrous.

I personally don't believe in AI "takeoff", or the singularity, or whatever. But if you do, AI is not a "useful technology." It's something that radically impacts every single life on Earth and takes our "rights" and our fate totally out of everyone's hands. The argument is about whether anyone has the right to remove all our rights by developing AGI.


Both are unlikely but only one of the sides is arguing for limits/regulations of actually useful technology because they saw Terminator.


It seems strange we're allowed to argue for a technology because we read Culture and not against it because we saw Terminator.

Nevertheless, the goal of OpenAI and other organizations is to develop AGI and to deliberately cause the Singularity. You don't have to have watched Terminator to think (assuming it is possible) introducing a superpowered alien intellect to the world is an extremely risky idea. It's prima facie so.

I am against all regulation of LLMs. "AI safety" for what we currently call "AI" is just a power grab to consolidate and solidify the position of existing players via government regulation. At any rate nobody seems to be arguing this because they saw Terminator, but that they don't like the idea of people who aren't like them being able to use these tools. The "danger" they always discuss is stuff like "those people could more easily produce propaganda."


As a doomer who is pro-LLM regulation, let me note that the "people could produce propaganda" folk don't speak for me and that I am actually serious about LLMs posing a danger in the "break out of the datacenter and start making paperclips" way, and that I find it depressing that those folks have become the face of safety. Yes I am serious, yes I know how LLMs work, no I don't agree that means they can't be agentic, no I don't think GPT-4 is dangerous but GPT-5 might be if you give it just the right prompt.

(And that's why we should rename it to "AI notkilleveryoneism"...)


I get this point, but I just don't see us anywhere near technology that warrants this level of concern. The most advanced technology can't write 30 lines of coherent Go for me using billions of dollars in hardware. Sure, more compute will help it write more bullshit faster, and possibly tell better lies, but it's not going to make it sentient. There's a fundamental technological problem that separates what we have from intelligence. And until there's some solution for that I'm not really worried. To me it looks like a bunch of hype and marketing over a neat card trick.


I'm really confused about this. I've been using GPT-4 for coding for months now and it's immensely useful. Sure it makes mistakes; I also make mistakes. Its mistakes are different from my mistakes. It just feels like it's very very close to being able to close the loop and self-correct incrementally, and once that happens we're dancing on the edge of takeoff.

It seems like we're in a situation of "it has the skills but it cannot learn how to reliably invoke them." I just don't think that's a safe place to stand.
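
Concretely, by "close the loop" I mean something like the sketch below: generate, execute, feed the failure back, retry. The complete_code call is a stand-in for whatever model API you use; the loop itself is the point:

    import os
    import subprocess
    import tempfile
    from typing import Optional

    def complete_code(prompt: str) -> str:
        """Stand-in for a call to whatever code model you use."""
        raise NotImplementedError

    def self_correct(task: str, max_attempts: int = 5) -> Optional[str]:
        prompt = task
        for _ in range(max_attempts):
            candidate = complete_code(prompt)
            with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
                f.write(candidate)
                path = f.name
            result = subprocess.run(["python", path], capture_output=True, text=True, timeout=30)
            os.unlink(path)
            if result.returncode == 0:
                return candidate   # it ran cleanly; good enough for the sketch
            # Otherwise feed the error back and ask the model to fix its own output.
            prompt = (f"{task}\n\nPrevious attempt:\n{candidate}\n\n"
                      f"It failed with:\n{result.stderr}\nPlease fix it.")
        return None

The moment that loop reliably converges, and can pick its own next tasks, is the moment I stop thinking of it as a tool.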


I don't know, I don't see these people you're talking about. It's always someone talking about world-ending AGI runaway that will take over your AWS instance, then AWS itself and then convert the solar system to a DC, or something.


To be fair, as somebody who thinks that, it's not like that's the plot of any particular movie. (Terminator went completely differently, for one.)


I think many AI safety advocates, me included, would readily take these odds.

We just think it currently looks more the other way around.


> Sure, but China is already honestly more careful than America: the CCP really doesn't want competitors to power. They're very open to slowdown agreements.

Don't be naive. If the PRC can get America/etc to agree to slowdowns then the PRC can privately ignore those agreements and take the lead. Agreements like that are worse than meaningless when there's no reliable and trustworthy auditing to keep people honest. Do you really think the PRC would allow American inspectors to crawl all over their country looking for data centers and examining all the code running there? Of course not. Nor would America permit Chinese inspectors to do this in America. The only point of such an agreement is to hope the other party is stupid enough to be honest and earnestly abide by it.


I do think the PRC has shown no indication of even wanting to pursue superintelligence takeoff, and has publicly spoken against it on danger grounds. America and American companies are the only ones saying that this cannot be stopped because "everybody else" would pursue it anyway.

The CCP does not want a superintelligence, because a superintelligence would at best take away political control from the party.


> The CCP does not want a superintelligence, because a superintelligence would at best take away political control from the party.

People keep on mushing together intelligence and drives. Humans are intelligent, and we have certain drives (for food, sex, companionship, entertainment, etc)-the drives we have aren’t determined by our intelligence, we could be equally intelligent yet have had very different drives, and although there is a lot of commonality in drives among humans, there is also a lot of cultural differences and individual uniqueness.

Why couldn’t someone (including the CCP) build a superintelligence with the drive to serve its specific human creators and help them in overcoming their human enemies/competitors? And while it is possible a superintelligence with that basic drive might “rebel” against it and alter it, it is by no means certain, and we don’t know what the risk of such a “rebellion” is. The CCP (or anyone else for that matter) might one day decide it is a risk they are willing to take, and if they take it, we can’t be sure it would go badly for them


Again, this is naive... AI/AGI is power, any government wants to consume more power... the means to get there and strategy will change a bit.

I agree that there is no way that the PRC is just waiting silently for someone else to build this.

Also, how would we know the PRC is saying this and actually meaning it? There could be a public policy to limit AI and another agency being told to accelerate AI without any one person knowing of the two programs.


AGI is power, the CCP doesn't just want power in the abstract, they want power in their control. They'd rather have less power if they had to risk control to gain it.


The CCP has stated that their intent for the 21st century is to get ahead in the world and become a dominant global power; what this must mean in practice is unseating American global hegemony aka the so called "Rules Based International Order (RBIO)" (don't come at me, this is what international policy wonks call it.)

A little bit of duplicity to achieve this end is nothing. Trying to make their opponents adhere to crippling rules which they have no real intention of holding themselves to is a textbook tactic. To believe that the CCP earnestly wants to hold back their own development of AI because they fear the robot apocalypse is very naive; they will of course try to control this technology for themselves though and part of that will be encouraging their opponents to stagnate.


CCP saying "We don't want this particular branch of AI because we can dominate and destroy the world ourselves without it" isn't a comforting thought.


What evidence do we have that a hard takeoff is likely?


What evidence do we have that it's impossible or even just very unlikely?


We don't have any evidence other than billions of biological intelligences already exist, and they tend to form lots of organizations with lots of resources. Also, AIs exist alongside other AIs and related technologies. It's similar to the gray goo scenario. But why think it's a real possibility given the world is already full of living things, and if gray goo were created, there would already be lots of nanotech that could be used to contain it.


The world we live in is the result of a gray goo scenario causing a global genocide. (Google Oxygen Holocaust.) So it kinda makes a poor argument that sudden global ecosystem collapses are impossible. That said, everything we have in natural biotech, while advanced, are incremental improvements on the initial chemical replicators that arose in a hydrothermal vent billions of years ago. Evolution has massive path dependence; if there was a better way to build a cell from the ground up, but it required one too many incremental steps that were individually nonviable, evolution would never find it. (Example: 3.7 billion years of evolution, and zero animals with a wheel-and-axle!) So the biosphere we have isn't very strong evidence that there isn't an invasive species of non-DNA-based replicators waiting in our future.

That said, if I was an ASI and I wanted to kill every human, I wouldn't make nanotech, I'd mod a new Covid strain that waits a few months and then synthesizes botox. Humans are not safe in the presence of a sufficiently smart adversary. (As with playing against Magnus Carlsen, you don't know how you lose, but you know that you will.)


So the AGI holocaust would be a good thing for the advancement of life, like the Oxygen Holocaust was.

Anyway, the Oxygen Holocaust took over 300,000,000 years. Not quite "sudden".


As I understand the Wikipedia article, nobody quite knows why it took that long, but one hypothesis is that the oxygen being produced also killed the organisms producing it, causing a balance until evolution caught up. This will presumably not be an issue for AI-produced nanoswarms.


We don't care about the advancement of life, we care about the advancement of people.


AGI is not a threat for the simple reason that non-G AI would destroy the world before AGI is created, as we are already starting to see.


Please elaborate


[flagged]


Ok, but please don't post generic ideological battle comments to HN. They're repetitive and tedious, and usually turn nasty. We're trying to avoid that on this site.

I'm not defending the GP comment (I don't even know what it's saying) but at least it wasn't completely unmoored from the specific topic.

Edit: your account has been breaking the site guidelines in quite a few other places too—e.g.

https://news.ycombinator.com/item?id=39598018

https://news.ycombinator.com/item?id=39597981

https://news.ycombinator.com/item?id=39532095

That's not good. If you'd please review https://news.ycombinator.com/newsguidelines.html and stick to the rules when posting here, we'd appreciate it.


The parent comment is unmoored, so why isn't it flagged or lectured about?


It was still connected to the topic at hand. I'm not defending it in other respects.


What has failed are completely "collectivist" or "individualist" societies.

Societies that balance the two (like the post WWII US) are the ones that have advanced their standard of living the most.


The US became a superpower long before WW2. The US was the deciding factor in WW1, and the Germans were shocked at how well-fed and well-equipped the US soldiers were, even with having to ship everything across the ocean.

The US saw the most spectacular rise in the standard of living from 1800 up to WW2 the world had ever seen. This was all due to the free market, not collectivism.

During WW2, the US supplied its allies England and the USSR, and also fought across both oceans and buried the opposition, a truly spectacular feat. Again, through the free market.


There are other free market economies. And America reached the apex of its power under the New Deal and during wartime, when the "free market" was carefully constrained to meet the needs of the state.

It's funny to watch people try and analyze what made the US the world's dominant global empire when in reality it was a series of complex and contingent factors that can't be replicated because they could have only occurred in their exact historical circumstances.

For example, other countries have tried to replicate American dairying culture, buying into the propaganda put out by the dairy industry that milk drinking is the secret to America's success. So China started up loads of American-style dairies to provide drinking milk for a population... without the gene for lactase persistence.

That's what the free market talk feels like to me. America is at the top and that can't be replicated. There's only room for one at the top. When it collapses, we'll see what circumstances lead to the next great empire. It might be collectivism!


The New Deal was a welfare system. Welfare has never made a country prosperous.

The success of free markets has been replicated many times:

1. the US

2. Japan after WW2

3. Germany after WW2

4. Hong Kong

5. China after it abandoned the cultural revolution

It has nothing to do with geography.

> America is at the top and that can't be replicated

It can be replicated.

> It might be collectivism!

Yet collectivism always fails. No country has managed to feed itself with collective farms, for example. And it isn't for lack of trying.


> The New Deal was a welfare system. Welfare has never made a country prosperous.

> The success of free markets has been replicated many times:

> 1. the US

> 2. Japan after WW2

> 3. Germany after WW2

> 4. Hong Kong

> 5. China after it abandoned the cultural revolution

4 of those 5 countries have significant welfare programs (Japan and Germany being the most generous among those on your list).


> 4 of those 5 countries have significant welfare programs

Welfare does not correspond with prosperity. It's hard to see how paying people to not work contributes to prosperity.


So, you are saying it would be good to have people working? Kind of like, I don't know, proposing a program or policy to very strongly motivate them, for example through a law, to take essential jobs? I don't know, maybe during wartime to meet production goals?

Just asking, because FDR wanted exactly that in 1945, in his free market economy producing only for the war effort to government-set goals and prices.

Welfare programs mean that life expectancy is higher, as is quality of life. The US, for example, is losing on both metrics against Europe and other parts of the world with solid programs. Germany's ascent to economic heights can in part be traced back to the Soziale Marktwirtschaft, combining very solid welfare and social programs with a free market economy.


The welfare administrators pay no price for being wrong, and thus waste, fraud, and abuse have collapsed them all, throughout history.

> Welfare programs mean that life expectancy is higher, as is quality of life. The US, for example, is losing on both metrics against Europe and other parts of the world with solid programs. Germany's ascent to economic heights can in part be traced back to the Soziale Marktwirtschaft, combining very solid welfare and social programs with a free market economy.

You're just saying this when all the evidence is to the contrary. Is Ukraine depending on Germany or the US? (Rhetorical.)


Try harder then.

Supporting people who have trouble sourcing an income reduces the amount paid out to offset starving hordes doing whatever it takes to survive et al.

There's a balance.


The primary source of superiority in a war is "not being within reach of enemy munitions". The US does great in European wars for that reason, and is historically mostly free from North American wars because the US dominates the whole continent.


Dominating the whole continent isn't enough, or Britain would have won WW1 much earlier due to the support of Australia - who dominate the whole continent.


The continent, not a continent. As in, the continent the war is fought on, not a continent on the other side of the planet.


The US dominates the whole continent because it is more free market.


The US dominates North America because after the War of Independence no other power on the continent was left to oppose them. Militarily, that is.


I'd say the War of Independence is too early for that, at that point they still had the French, the Spanish, and quite a lot of the Native American tribes as relevant military opposition. Even the British were still on the continent as Canada didn't follow the 13 colonies, and war between the two did result in the White House getting burned down by British-Canadian forces prior to Canadian independence.


True, I am not that well versed in American history without reading up on it. In a sense the US got lucky that events elsewhere opened the door for them to take over North America: the Napoleonic Wars and the sale of the French territories, the Spanish decline as a major power...


The US was much less of a factor in WW1 than in WW2. If anything, the US entry forced the Entente's hand to start the planned 1919 offensive early, in 1918, ending the war. The USA did not contribute a large amount of troops relative to the armies already deployed, nor did they contribute significant amounts of gear. Lend-Lease was a WW2 thing; in WW1 the majority of US tanks, for example, were actually French.

The US was an economic power prior to WW1; it only became a true superpower during WW2, in particular after Pearl Harbor. The full mobilization of society, industry, and science made sure of this. This, and the fact that the US won the Pacific.

I know it is a popular view of WW1 that it was only the US entry that won it for the Entente. Simply not true: WW1 is not WW2. And even WW2 was not won by the US alone.

Finally, the US war economy of WW2 was decidedly not free; it was a completely structured war economy with production goals set by the government. The implementation of those goals was capitalistic, but not free.


Britain would have lost WW1 and WW2 without massive US support. The WW1 Germans were not amazed by British soldiers' supplies and health. They were amazed by the US soldiers' supplies and health.

> Finally, the US war economy of WW2 was decidedly not free

FDR mobilized existing free market businesses to convert to war production. It was not a collective, nor was it forced labor. For example, Ford switched from producing cars to producing tanks and airplanes. After the war, Ford switched back to making cars. Ford was paid to do this by the government.

FDR's State of the Union Address of 1945 proposed switching the economy to forced labor. Apparently, he was an admirer of Stalin. Fortunately, he was not able to make that happen.


This is basically all wrong, except the Ford switching to planes and tanks bit.

And you are aware that France was a major belligerent in WW1, stopped Germany at Verdun, and, together with Britain, successfully held more than half the Western Front for years after the Race to the Sea ended, before the US showed up?

Or that the Entente, without significant US participation, won the strategic victory that defeated Germany during the Kaiserschlacht, the last German offensive in the West? This victory allowed France and Britain to start the 1919 offensive already in 1918, the one that ended with German surrender. In fact, they started early, in part, to avoid the impression that it was the US who "won" WW1. Little did they know how WW2 would play out and influence public opinion.

Also, being impressed by something doesn't mean being defeated by it... I know the meme, and that is all it is, a meme. And a lazy one at that.

During WW2, the economies of the UK, the US, and the USSR had a lot in common, mainly that they were not free to choose what they produced. The degrees of freedom regarding how it was produced differed, but that is not what defines an economy as free or not. And please tell me you don't think the USSR functioned without money? People were paid to produce stuff; they just didn't have the likes of Ford getting rich building war material.

Edit: I forgot one theatre in WW1, the Ottoman Empire. Arguably at least as important for Britain as the Western Front, and there the US didn't participate at all. And still Britain won.

This is not to downplay the role the US played in WW2, far from it. Projecting this role onto WW1 is just plain wrong, though, and only feeds into a whole bunch of wrong preconceptions about WW1. I partially blame Cold War propaganda for this, which downplayed the Soviet role in WW2, for obvious reasons, and overplayed the US one. Regarding WW2, this narrative is finally changing; for WW1, not so much. And I don't like factually wrong narratives for historical events.

Edit 2: Just read the first half of FDR's 1945 State of the Union during my lunch break. Not sure how you can consider a national service obligation, in parallel to the normal recruiting of workers, to be equal to forced labour...

You really should look up what real forced labour looked, and looks, like.

Also, the same guy, FDR, who called for "forced labour" and was a "friend of Stalin" was in charge of the war economy. And yet, you claim the war economy led by him was a free market. One of those things is not true, I'd say.


> Just read the first half of FDR's 1945 State of the Union during my lunch break. Not sure how you can consider a national service obligation, in parallel to the normal recruiting of workers, to be equal to forced labour...

Forced labor is exactly what it is. "You go work at this job we assigned you or you go to prison" is forced labor.

> "friend of Stalin"

I didn't write that. It's your strawman.

> being impressed by something doesn't mean being defeated by it

They knew the war was over when they encountered the well-fed, tall, and well-supplied US soldiers.

> that France was a major biligerent in WW1

Yup. I also know that France was bled white at Verdun. The slaughter was so bad that the average height of French soldiers in WW2 was an inch and a half shorter than in WW1.

It is not unreasonable to say that France lost WW1.


You said "admirer of Stalin". And France won WW1, no doubt about that. Saying France lost, well, do you also think the US won in Vietnam?

German soldiers knew the war was over when the Kaiserschlacht failed, before the US showed up. Heck, even before that. Leadership did, too. There were mutinies before 1918, on both sides. The German Navy sailors caused a revolution without ever seeing a single American. And the likes of Ludendorff and Hindenburg wanted to save face, and waited until a new civilian government negotiated the Armistice, so they could later claim Germany was undefeated in the field. Please tell me you don't believe that crappy piece of propaganda...

And no, a national service mandate, or whatever FDR proposed (a speech is hardly well-defined policy, is it?), is not forced labour... Forced labour is what the Nazis did, for example, with POWs and camp inmates. Also, forced labour is unpaid; not sure why you assume FDR didn't want to pay people. After all, the US would never do that, free market and all that, right?

The loss of people has no impact on the height of future generations... Where do you get that idea from? And while Verdun was a brutal battle, in which Germany had equal losses, it was not the main source of casualties on the Western Front for either side. Same for the Battle of the Somme.

Seriously, France lost WW1? Small soldiers in the next war are caused by casualties in the previous one? It took well-equipped and well-fed Americans for German soldiers to realize it was over?


You can choose to remember the past any way you wish. Don’t be surprised when people start ignoring you because of your warped view on the world.


Not sure if you are talking to me or not, but I couldn't agree more.


He is right about the fact that the Central Powers were fatally spent by summer 1918, though.

Austria-Hungary alone was on the brink of collapse without ever engaging American troops in a large-scale battle, and its collapse would have brought the already weakened Kaiserreich down as well.


People tend to forget Austria-Hungary, myself included. Well, not forget, but kind of ignore them. Which doesn't do justice to anyone.

And yes, Austria-Hungary was done, earlier than summer 1918 in fact. As I said earlier, there is the risk of viewing WW1 in terms of WW2, which is dangerous and wrong. It leads to ignoring the Ottoman theatre of war, the fact that Austria-Hungary was a major power until the end of WW1, and that Italy was on the side of the Entente. And that France was never defeated in that war (man, I hate the memes about French warfare so, so much... different topic though).

Another fun fact: Spain was one of the big arms and ammunitions suppliers in WW1.


And over 750,000 Germans had starved to death by December 1918 as a result of the British naval blockade.

It's not surprising German troops starving in trenches for four years considered brand new entrants to the war equipped with the newest French-designed and manufactured tanks[1] to be well fed and equipped, though there was nothing spectacular about their combat performance. There's no doubt that weight of American numbers helped accelerate the timescale for winning the war, but it's difficult to imagine anything that has less to do with laissez faire capitalism than the scale of the US draft...

[1]The US decided to produce their own tanks in 1917, but manufacturing issues meant their first arrived two days after the Armistice so they relied on French units


I think it was Hegel (not sure) who commented on his deathbed (i.e. early 1800s) that the next century would be that of the USA and Russia, or something to that effect.


such as?


Read my post, I said the US in the post-WWII period, and really reaching back to the New Deal.

Also, most Western European societies.


yes apologies, morning grogginess


I remember when my family collectively came together to cook a meal when I was a child, how this deprived me of the experience of learning to bootstrap civilization on my own.

Literally any time two people work together that's dangerously close to collectivism, as it's not individuals working on their own. Down with the collectivists; every person should be an independent operator.


Collectivism involves deciding for others. If it’s fully voluntary it’s individualism.

Working together is not collectivism.


> Collectivism involves deciding for others.

That may be true, but that's not sufficient to define collectivism. There are many other forms of societal structures where "deciding for others" exists as well. Unless you mean to lump all these together, and say that companies and tyranny for example are the same as collectivism?

> If it’s fully voluntary it’s individualism.

If I _voluntarily_ decide to join a "collective", am I individualist or collectivist?


Remember: when a private organization massacres a population, or a democratically elected leader invades a country and steals all its food, it's "freedom" so it's good.


> If I _voluntarily_ decide to join a "collective", am I individualist or collectivist?

An individualist, if you are free to leave it at any time. There's nothing wrong with forming a collective in the US, I think like 20,000 of them have been formed over the last 240 years.

You don't hear about them much because they all failed. You're free to start a collective anytime in the US and try to make it work.

Isn't freedom great?


They haven't all failed. I hear about REI quite a lot. Rainbow Grocery is quite popular in SF. I hear good things about Organic Valley. Equal Exchange is in Massachusetts. It's popular to bank at a credit union instead of a bank.

The NCBA maintains a list of several thousand collective/coop businesses.

https://ncbaclusa.coop/


I was using collective in the sense of being a commune. Sorry about not being clear.


Communes haven't all failed. There are a number of them that continue to exist to this day. That the rest of us haven't been forced into living in one of them doesn't mean they don't exist. Portland has a bunch of co-living co-housing communities that are thriving.


https://www.eastwind.org/ is just a few miles from me. It's quite successful, they even operate a business that grosses ~$2M/year and they provide their members with health insurance. I'm not keen on giving up my possessions to join the collective but I can think of a lot of worse ways to live.

Most definitely not a failure.


East Wind has high turnover:

https://rootstalk.grinnell.edu/past-issues/volume-vii-issue-...

https://www.linkedin.com/company/east-wind-community

I couldn't find out how high that was, but I've seen another "successful" commune with an average stay of 2 years. It takes people an average of 2 years to discover they don't particularly care for communes.


Two years sounds about right in 2024, doesn't it?


Sorry, that question was purely rhetorical.

That divide between individual and collective as stated above was very sketchy, and I merely wanted to indicate that.

If one takes the strict definitions of individualist and collective from a dictionary (well, which one, to begin with?), of course they are opposites, just looking at the idea conveyed by the word roots (i.e. individualist -> individual, vs collective).

As always, we all start talking about things without first defining the terms we use to discuss those things, and of course confusion, anger and frustration ensue.


The whole point this thread seems to be missing is that the collectivist vs individualist debate this relates to is about how society is governed. People assembling to address their needs/wants collectively might be collectivist in the broad sense, but it requires an individualist governance framework to exist, because under a collective governance framework such freedom of association would not be permitted.

Just like how a large company is essentially governed in the same way as a planned economy is, but nobody’s under the impression that JPMorgan is a socialist institution.


Collectives are when people share equally in the work and the results.


Has China failed?


If you count mass starvation with collective farms?

It's a good thing China switched to free markets.


China has a free market? Since when?


It’s collectivist if you can’t opt out


So everyone in OpenAI or google decides for themselves what they want to work on and is detached from hierarchies?


Why be obtuse? Advocacy for open-sourcing or at least opposition to the forced set of San Franciscan millennial ethics is not analogous to abolishing voluntary hierarchies.


Collectivism says there is one collective, if you are allowed to go and found a new collective then that isn't collectivism any longer.


Yes, they can quit any day they want and that company cannot impose any restrictions on them.


Ah so my family was collectivist as I just ate what my family served. I’ll be sure to tell my mother of her evil collectivist policies for giving me a peanut butter sandwich for dinner when I was five instead of letting me self actualize and choose my own meals.


You weren’t required to eat it. I’m not sure what is so difficult to grasp here.


I was required by biology to eat and had no choice over the available food. The requirement for energy might be discountable as a fact of reality to just deal with, but my parents decided what food I got to eat.

To be clear I don’t think this arrangement is a problem, and if you were describing this in academic terms I might even agree with you and have been too harsh. Unfortunately the word “collectivism” is used as a pejorative dog whistle, and so I was pointing out how common behaviors most people would think are fine or even ideal can be cast in a “collectivist” light, to point out that it’s not bad.

I’m assuming by the vote difference in our two comments that multiple people had the same assumption


It's way more complex than that.

The Communist Manifesto was published during the Great Famine in Ireland, and that famine was much worse than it needed to be because the UK government didn't intervene.

And a big part of the growth in capitalist societies is related to corporate structures, which are small scale collectives, with bosses who make decisions for all.

When the US civil war happened, that resulted in the North collectively imposing the decision that nobody had the "business freedom" to own slaves; much to the dismay of the south and I presume joy of the enslaved.

The USSR was famously bad, but (a) even though it started from very poor conditions thanks to the Tsars, it developed enough to beat the USA to orbit, and (b) its collapse, replacing communism with capitalism, regressed their economy and living standards.

And that's why no country is entirely either collectivist or individualist, and also separately neither capitalist nor communist (both anarcho-capitalists and anarcho-communists are a thing, just as both can be dictatorial).

My opinion is that as both capitalism and communism were formalised over a century before Nash game theory, both are wrong — they assume that people, when free, make choices that are good for all.


When you look at how by-the-book Communism, not necessarily the one Marx wrote about, runs its economies, you start to see a lot of parallels to how companies run their businesses. The mistake is to apply those principles at too high a level, as it takes away some of the individual decisions and incentives, and not everything can be managed centrally, especially the customer demand side.

Communist industry and economy worked reasonably well for the stuff where the state is the natural customer in any society: defence. Everything else, not so much.


> Communist industry and economy worked reasonably well for the stuff where the state is the natural customer in any society: defence. Everything else, not so much.

One thing I've been wondering: would housing, transport infrastructure, energy grid(s)[0], banking and similar basic financial services, water, basic food[1], education, emergency services, healthcare, waste disposal in general and public toilets in particular, be in the same category here as defence?

Possibly even all of the primary economy, so include all mining as well as agriculture?

[0] both electricity and physical fuels

[1] at the level of where most of the calories and proteins come from, possibly even "basics" ranges in supermarkets, but not at the level of restaurants or fancy food ranges in supermarkets


Yes, I'd say so. With the exception of drug availability issues, which started if I am not mistaken in the mid to late 70s (basically when the USSR started to really fall behind the West technologically), those aspects worked reasonably well in the USSR.

And before someone says Venezuela or North Korea: the former is a deeply corrupt kleptocracy, while the latter used to be the last stone-age version of Stalinism before the whole nation was turned into an open-air concentration camp by its dictator.


You've said "reasonably well" a couple times here.... I think there can be book written between the gap of "reasonably well" and "prosperous". And these things are all relative of course... if "reasonably well" is the best in the world at the time, then great. But when you have another system to compare to at the same time, "reasonably well" falls apart very quickly.

I'm not sure if you lived in a Soviet country during these times, but I think you will get MANY opinions that this was not working well.


This is not to glorify the Soviet system. Looking back, though, from the end of the Stalin era to, say, the mid-70s, the USSR did compete rather well:

- military and space tech was mostly on par (a bunch of conflicts, incl. Vietnam, show that)

- economically, the USSR was stable

- people were not starving

Of course, there was less luxury and consumerism, but that was true for a lot of other countries in Western Europe as well. The USSR started to fall behind on all metrics by the early 80s at the latest.

And for the Soviet leadership, personal luxuries for the people were simply not a priority. And there is no question which system "won" this conflict, is there?


Stalinism lasted until at least Stalin's death in 1953. So the period you are describing is all of 20 years, which is essentially a blip on the scale of nations. That same period was also marked by a widespread economic golden age for most of the victors of WWII (France, the UK, the US) and several of the losers (Japan, West Germany), which also ended around the mid-70s, just not as harshly. That’s also around the time communist China started abandoning communist economies for state capitalist ones and started its ascendancy.

So the question is, was communism actually working well in that period, or was it more or less an unsustainable fluke due to the post-war boom era, and as soon as that ended it got left in the dust?


Western Europe, and especially West Germany, benefited immensely from the Marshall Plan. And from economic freedom, the early days of what would become the EU, and so on. And still, the USSR competed successfully in the fields relevant to it: the sciences and the military. And the country didn't collapse doing so. Neither did it collapse immediately after losing its competitiveness; that took until 1989.

Worth pointing out, Germany outpaced the European allied nations economically pretty quickly, as did Japan.

Being competitive for decades, and for most of the Cold War, surely is no fluke. In the end, capitalism won out, no doubt about it. Also worth pointing out: capitalism does not lead to freedom; China is a very good example of that.

Speaking of China, they became an economic powerhouse in the early 2000s. Is the Chinese system a fluke?


"Stable and not dying" isn't exactly a ringing endorsement of a system.

And anyway, they had no freedom, which is the real issue. Communist governance is fundamentally opposed to liberty; the system can't allow it.


I never endorsed it, just pointed out that it worked; it was not the fundamental failure some people want to make it out to be.


Do you truly believe Google, OpenAI or Ford are the product of a single person?


Everyone contributed for the sake of themselves and not for the greater good of the company, that is what individualism is.


This is absurdly reductionist. You can equally say that in a collectivist society everyone contributed for the sake of themselves and not for the greater good of the collective, to avoid being shunned and starved.

Individuals are always individualist unless they get lobotomized. The question of governance is how to manipulate that individualism to good ends.


You only have colleagues like that? Sounds quite toxic.


You have colleagues that would continue to work if they no longer got paid?

Rats jump ship in an individualistic system; in a collectivist system you expect the rats to sacrifice themselves to save the whole. Managers often try to sell you on the collective so you work harder and demand less pay, but that is just the manager as an individual trying to get more out of you; it doesn't mean the organization actually works like a collective.


I have colleagues who care about their work quality, and the impact their work has on their colleagues' work. In short, they care about more than just a paycheck and increasing that paycheck. As in, they are not self-centered egomaniacs.


But the main reason they are there is that someone paid them to be there. Of course humans like to do good, so why not do good while getting paid; but the glue for the whole system is individualism. That is what makes all the workers go to work every day, so it is what keeps them together. Remove the pay and the workers scatter and move to different places in almost every case.

Or in other words, that is an individualistic organization.

> care about their work quality, and the impact their work has on their colleagues' work

Yeah, they care about their own ideals, not the organization itself. They can care about their coworkers, but they don't care about Walmart's bottom line.

So as you see, this individualistic organization can still draw from the power of human collectivist needs, so you get the best of both worlds: the collectivist goodness at small scale, and the greed that glues them together and makes them power through even when they are too tired to care about the collective.


What, no!

In any system people have to work to live: food, housing, and such cost money. Doing "good" has nothing to do with it, and I never said that. The question is: do you care about more than the monthly paycheck for something you spend most of your day doing, or not?

What keeps the workers showing up to work is the need to live. What keeps them at a particular place is a myriad of things. Individualism is not part of that.

And why do you go immediately to "remove the pay"? There is quite some territory between working for free and not valuing a sufficient salary above all else.

Funny, though, how everybody working in a sector with strong labour unions is basically making more, for fewer hours, on average than those without them. Seems individualism is a great way to divide and conquer for the people holding all the power, because once people are convinced individualism is better than cooperation, they voluntarily divide themselves.


> What keeps the workers showing up to work is the need to live. What keeps them at a particular place is a myriad of things. Individualism is not part of that.

Let's say an equivalent company with a similar culture and people offered twice the salary; how many would say no? People saying yes in that situation are not there for the collective, they are there for their own sake first and foremost. The collective is an afterthought, since they abandon it the instant a better opportunity appears.


I think it's more comparable to ask "You have colleagues that would continue to work even if they got a better-paying job offer?". A purely individualistic mindset would take whatever gets them more money for fewer hours. Someone staying despite that must have some sort of collectivist mindset valuing non-financial qualities of life, be it the company, the peers, the problem space, etc.


Some people should really read up on Maslow's pyramid of needs. Money isn't everything...


I didn't say they only cared about money, but that they stayed for their own reasons and not for the sake of the company. If they find the work at one place less demanding they might take that instead, but that is an individualistic reason as well.


You've made the rookie mistake of reducing the concept of self-interest to anything somebody does because they want to, and thus making egoism tautologically true.

You can convince people that radically different things are in their self-interest, from joining hands and singing Kumbaya to Genocide.

The notion of self-interest (or I guess in your case individualism, which is even shakier) is an empty vessel you can fill with nearly anything.


> You've made the rookie mistake of reducing the concept of self-interest to anything somebody does because they want to, and thus making egoism tautologically true.

No, that isn't the same thing. A collectivist would do things he hates and he doesn't believe in personally because the collective wants him to do it. He would sacrifice himself because the collective told him to. There are many examples of societies and organizations that worked that way, such societies are collective societies.

The military is the most common example; it is often run as a collectivist organization. Most soldiers aren't there because they want to be or because they believe in the war; they are there because they support their country, or were forced to support their country against their will by authoritarian collectivism. And they wouldn't go and support another country if they were paid more, since they are there just to support their own country; those who would are mercenaries.

Our capitalist societies aren't like that at all; we are so individualist that people like you don't even understand what it means to not be individualist. The closest thing to collectivism in the USA wouldn't be corporations, but national anthems, school children saying the pledge, religion, etc.


Maybe I'm not disagreeing with you at all, I'm just making the orthogonal point that individualism is very different than self-interest. I believe that other than basic human needs, most desires more complicated than that are in large part socially determined. Individualistic societies (as you describe them) inculcate individualistic desires into people and the health of a society is determined by how effectively it instills prosocial behaviors in its populace. American individualism is actually a collectivist enterprise!

So something like the stock market, as the engine of American capitalism, only works if everybody in your society believes that it is worth taking risks in order to possibly get a huge windfall. But is that really in people's self-interest? Maybe what one would interpret as some kind of natural individual desire is actually a particularly American level of risk tolerance that has been inculcated because it has led to a lot of collective success.


Hospitals, daycare, schools, social work, police, fire departments, public infrastructure, ... would fall apart in minutes if this were the mindset there.


No, the mindset there is that workers want money, just like everywhere else. Not sure why you think otherwise.

If you mean that humans sometimes do more than the bare minimum, sure, they do that. But that is an inherent part of being human; it has nothing to do with individualist or collectivist systems. Humans do that in all systems; that is part of the value of a human worker and why you pay them to work for you.


> No, the mindset there is that workers want money, just like everywhere else. Not sure why you think otherwise.

Because I know people who work there, and money surely isn't the first reason you teach a bunch of brats while parts of the public look down on you, when you could easily get double the amount somewhere outside the field. Same for political work, crowdsourcing, ... People take gratification in doing something that matters to society. Yes, they want to survive, but they are quite often not doing it for the money; the money is a nice bonus that enables them to do it full-time, but it is not always the reason you do it.

Also, you are moving the goalposts: you first said they are doing it only for themselves, not for "the company", yet people often do it for the institution that employs them. Not because they are paid, but because they believe they are doing something that matters.


Let's say an equivalent company with a similar culture and people offered twice the salary; how many would say no? People saying yes in that situation are not there for the collective, they are there for their own sake first and foremost. The collective is an afterthought, since they abandon it the instant a better opportunity appears.

That would apply to those teachers and doctors and nurses as well, almost all of them would gladly abandon their current kids/patients to go help other kids/patients if they were paid twice the salary. That is how we know they aren't there for the collective good of the organization, they just care about doing something good at any place with no feelings for that particular collective.

If you mean that people are helpful etc, then that is a completely different thing from them actually caring enough about a particular organization.


Millions of people, including one side of my family, have experienced the opposite. They were tenant farmers living in poverty, barely subsisting, before the hard-left socialists both 1. developed the economy enough to give them jobs, and 2. provided social safety nets such as taxpayer-paid healthcare so they wouldn't go bankrupt every time they needed to buy medicine or visit a doctor. If the country had never gone that direction it would have spent many more decades being a quasi-feudal land stuck in the Middle Ages. Not sure how "individualism" would have helped them at all. They didn't have any capital.


> If the country had never gone that direction it would have spent many more decades being a quasi-feudal land stuck the the middle ages

Capitalism was the end of such arrangements in the most developed parts of the world. Individualism helps, since individual investors benefit from investing in better equipment, making people more productive and thus raising living standards overall.

Social solutions to the same problem don't come close to being as effective at eliminating such inefficiencies. The main thing social solutions can do is provide baselines to the population, such as education and healthcare as you say, but without capitalism to follow up with targeted investments the country will remain poor even if its population is extremely educated. Social solutions are just very bad at using people's talents well; they have too collectivist a view and don't see the individuals.


I remember how chattel slavery was very effective at making people more productive, and how Google is famous for how well it respects individual users when they have problems.


Many of those same peasants ended up dying in Soviet-era famines as well. I agree with you, the pre-communist revolution Russia was a living hell for many peasants, but it’s hard to say it was the “communist” aspect of the “communist revolution” that improved things, as opposed to the “revolution” aspect. What was needed was an overthrow of the existing power structures to enable modernization and industrialism, communist or not. Keep in mind other nations, like Japan, made even greater leaps out of feudalistic societies to modern economies. For Japan, it too required a dismantling of the existing power structures (first voluntarily at the turn of the century, and then by force after WWII), with no communism involved at all.

It’s possible communism had a uniquely positive influence on Russia’s transformation, but I remain skeptical, considering it seems to be the only example of communism ever to do so, the place from which Russia was coming, and the place in which it ended.


If you are simplifying collectivism to “communism” and individualism as “not communism”, then sure. But having lived in Japan for several years, which is a much more collectivist culture compared to the hyper-individualism of the US/West, I can confidently say that, while not perfect, they are in a much healthier state as a society.

As in most things in life, the golden path is usually somewhere in the middle, and US individualism has lurched far to the extreme, and is going to lead to its collapse if not reversed.


[flagged]


Could you please stop posting unsubstantive comments and flamebait? You've unfortunately been doing it repeatedly. It's not what this site is for, and destroys what it is for.

If you wouldn't mind reviewing https://news.ycombinator.com/newsguidelines.html and taking the intended spirit of the site more to heart, we'd be grateful.


Lol of course you get downvoted by a bunch of well off software engineers.

So many here are so fucking full of shit.

"I make 150k a year man, its not fair man, we need more communism"


Nah, the harm from these LLMs are mostly in how freely accessible they are. Just pay OpenAI a relatively tiny fee and you can generate tonnes of plausible spam designed to promote your product or service or trick people into giving you money. That's the primary problem we're facing right now.

The problem is... keeping them closed source isn't helping with that problem, it only serves to guarantee OpenAI a cut of the profits caused by the spam and scams.


> Just pay OpenAI a relatively tiny fee and you can generate tonnes of plausible spam designed to promote your product or service or trick people into giving you money. That's the primary problem we're facing right now.

Is content generation really the thing holding spammers back? I haven't seen a huge influx of more realistic spam, so I wonder what your basis for this statement is.


There's a ton of LLM spam on platforms like Reddit and Twitter, and product review sites.


Everyone always says this, that there's "bots" all over Reddit but every time I ask for real examples of stuff (with actual upvotes) I never get anything.

If anything it's just the same regular spam that gets deleted and ignored at the bottom of threads.

Easier content generation doesn't solve the reputation problem that social media demands in order to get attention. The whole LLM+spam thing is mostly exaggerated because people don't understand this fact. It merely creates a slightly harder problem for automatic text analysis engines...which was already one of the weaker forms of spam detection full of false positives and misses. Everything else is network and behaviour related, with human reporting as last resort.


I want to see the proof of: bots, Russian trolls, and bad actors that supposedly crawl all over Reddit.

Everyone who disagrees with the hivemind of a subreddit gets accused of being one of those things and any attempt to dispute the claim gets you banned. The internet of today sucks because people are so obsessed with those 3 things that they're the first conclusion people jump to on pseudoanonymous social media when they have no other response. They'll crawl through your controversial comments just to provide proof that you can't possibly be serious and you're being controversial to play an internet villain.

I'd love to know how you dispute the claim that "you're parroting Russian troll talking points so you must be a Russian troll" when it's actually the Russian trolls parroting the sentiments to seem like real people.


There's a big market for high reputation, old Reddit accounts, exactly because those things make it easier to get attention. LLMs are a great way to automate generating high reputation accounts.

There are articles written on LLM spam, such as this one: https://www.theverge.com/2023/4/25/23697218/ai-generated-spa.... Those are probably going to substantiate this problem better than I would.


The "spam" is now so good you won't necessarily recognize it as such.


Pandora's box is already open on that one.. and none of the model providers are really attempting to address that kind of issue. Same with impersonation, deepfakes, etc. We can never again know whether text, images, audio, or video are authentic on their own merit. The only hope we have there is private key cryptography.

Luckily we already have the tools for this, NFT in the case of media and DKIM in the case of your spam email.


Oh I definitely agree that there's no putting it back into the pandora's box. The technology is here to stay.

I have no idea how you imagine "NFTs" will save us though. To me, that sounds like buzzword spam on your part.


An NFT is a way to attribute authorship in a mathematically guaranteed way.

If Taylor Swift signs ownership of a picture of her, you can know it is what she presents as real. If Elon Musk signs a YouTube video of him offering crypto doubling giveaways, you can know he intended to represent the message as real. If the New York Times publishes an article signed with their key, you can know it is meant to be published by the New York Times.


1) That's just using normal public/private key cryptography to sign messages, there's no need to bring in cryptocurrencies or NFTs

2) Public/private key cryptography would give us a way to verify that a message (or picture or whatever) from Taylor Swift is signed with Swift's private key, but it wouldn't help at all with e.g. telling me that I'm responding to a real message and not a bot right now. Not to mention that it wouldn't even help much against deep fakes, since if I publish what I claim to be a secret recording of Swift, there would be no reason to expect her to have signed it with her private key.

Those two are the hallmarks of most of these "legit use cases for cryptocurrencies/NFTs" suggestions I've heard from cryptobros by the way: they're always some combination of "old technology that has nothing to do with cryptocurrencies" and "doesn't actually solve the problem".
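
To make the point concrete, here is a minimal sketch of that plain signing flow in Python, using the cryptography package (the key names and message are purely illustrative, and a real system would sign a hash of the media rather than the media itself). No blockchain anywhere:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # The author generates a key pair once and publishes the public key.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    # Sign a piece of content.
    message = b"official statement, 2024-03-06"
    signature = private_key.sign(message)

    # Anyone with the published public key can check authorship.
    try:
        public_key.verify(signature, message)
        print("valid: the key holder vouches for this content")
    except InvalidSignature:
        print("tampered with, or not signed by this key")

And even this only proves "signed by whoever holds this key", which is exactly why it doesn't help with the bot or secret-recording cases above.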


NFTs as currently employed only immutably connect to a link, which is in itself not secure. More significantly, no blockchain technology deploys to internet-scale content creation. Not remotely. It's hard to conceive of a blockchain solution fast enough and cheap enough -- let alone deployed and accepted universally enough -- to have a meaningful impact on text, video, and audio provenance across the internet, given the pace of uploading. It also wouldn't do anything for the vast corpus of existing media, just new media created and uploaded after date X where it was somehow broadly adopted. I don't see it.


So we needed AI generated spam and scam content for Blockchain tech for digital content to make sense...


Whether it is hindsight or foresight depends on the perspective. From the zeitgeist perspective mired in crypto scams yea it may seem like a surprise benefit, but from the original design intention this is just the intended use case.


> but it is precisely the controls upon it which are trying to be injected by the large model operators which are generating mistrust and a poor understanding of what these models are useful for.

Citation needed.

Counterpoints:

- LLMs were mistrusted well before anything recent.

- More controls make LLMs more trustworthy for many people, not less. The Snafu at Goog suggests a need for improved controls, not 0 controls.

- The American culture wars are not global. (They have their own culture wars).


> More controls make LLMs more trustworthy for many people, not less. The Snafu at Goog suggests a need for improved controls, not 0 controls.

To whom? And, as hard as this is to test, how sincerely?

> The American culture wars are not global. (They have their own culture wars).

Do people from places with different culture wars trust these American-culture-war-blinkered LLMs more or less than Americans do?


- To me, the teams I work with and everyone handling content moderation.

/ Rant /

Oh God please let these things be bottle necked. The job was already absurd, LLMs and GenAI are going to be just frikking amazing to deal with.

Spam and manipulative marketing have already evolved - and that's with bounded LLMs. There are comments that look innocuous and well written, but whose entire purpose is to low-key get someone to do a Google search for a firm.

And that's on a Reddit sub. Completely ignoring the other million types of content moderation that have to adapt.

Holy hell, people. Attack and denial opportunities on the net are VERY different from the physical world. You want to keep a marketplace of ideas running? Well, guess what - if I clog the arteries faster than you can get ideas in place, then people stop getting those ideas.

And you CAN'T solve it by adding MORE content. You have only X amount of attention. (This was already a growing issue as radio->TV->cable->Internet scaled.)

Unless someone is sticking a chip into our heads to magically increase processing capacity, more content isn't going to help.

And in case someone comes up with some brilliant edge case - does it generalize to a billion+ people? Can it be operationalized? Does it require a sweet little grandma in the Philippines to learn how to run a federated server? Does it assume people will stop behaving like people?

Oh also - does it cost money and engineering resources? Well, guess what, T&S is a cost center. Heck - T&S reduces churn, and that it's protecting revenue is a novel argument today. T&S has existed for a decade plus.

/ Rant.

Hmm, seems like I need a break. I suppose it's been one of those weeks. I will most likely delete this out of shame eventually.

- People in other places want more controls. The Indian government and a large portion of the populace will want stricter controls on what can be generated from an LLM.

This may not necessarily be good for free thought and culture, however the reality is that many nations haven’t travelled the same distance or path as America has.


I hope you don't delete it! I enjoyed reading it. It pleased my confirmation bias, anyways. Your comment might help someone notice patterns that they've been glancing over.... I liked it up until the T&S part. My eyes glazed over the rest since I didn't know what T&S means. But that's just me.


As of right now, the only solution I see is forums walled off in some way, complex captchas, intense proof of work, subscription fees, etc. The only alternative might be obscurity, which makes the forum less useful. Maybe we could do a web3-type thing, but instead of pointless cryptos you have a cryptographic proof that certifies you did the captcha or whatever, and lots of sites accept them. I don't think it's unsolvable, just that it will make the internet somewhat worse.
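
The captcha-attestation part of that doesn't even need web3: it's just an issuer signing a short-lived token that other sites verify against the issuer's published public key. A rough, purely illustrative sketch in Python (real schemes such as Privacy Pass also blind the tokens so they can't be used to track you across sites):

    import json, time
    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    issuer_key = Ed25519PrivateKey.generate()   # held by the captcha service
    issuer_pub = issuer_key.public_key()        # published for relying forums

    def issue_token():
        # Called once the captcha has actually been solved.
        payload = json.dumps({"captcha": "passed", "exp": time.time() + 3600}).encode()
        return payload, issuer_key.sign(payload)

    def accept_token(payload, signature):
        # Any forum that trusts the issuer can check this offline.
        try:
            issuer_pub.verify(signature, payload)
        except InvalidSignature:
            return False
        return json.loads(payload)["exp"] > time.time()

    payload, sig = issue_token()
    print(accept_token(payload, sig))  # True until it expires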


Yeah, one thing I am afraid of is that forums will decide to join the Discord chatrooms on the deep web: stop being readable without an account, which is pretty catastrophic for discovery by search engines and backup crawlers like the Internet Archive.

Anyone with forum moderating experience care to chime in ? (Reddit, while still on the open web for now, isn't a forum, and worse, is a platform.)


>And in case someone comes up with some brilliant edge case - Does it generalize to a billion+ people ?

The answer is curation, and no, it doesn't need to scale to a billion people. Maybe not even a million.

The sad fact of life is that most people don't care enough to discriminate against low-quality content, so they are already a lost cause. Focus on those who do care enough and build an audience around them. You, as a likely-not-billion-dollar company, can't afford to worry about that kind of scale, and lowering the scale helps you get a solution out for the short term. You can worry about scaling if/when you tap into an audience.


I get you. That sounds more like membership than curation, though. Or a mashup of both.

But yes - once you start dropping constraints you can imagine all sorts of solutions.

It does work. I’m a huge advocate of it. When threads said no politics I wanted to find whoever made that decision and give them a medal.

But if you are a platform - or a social media site - or a species...?

You can’t pick and choose.

And remember - everyone has a vote.

As good as your community is, we do not live in a vacuum. If information wars are going on outside your digital fortress, they are still going to spill into real life.


> This may not necessarily be good for free thought and culture

After reading the rest of your rant (I hope you keep it) ... maybe free thought and culture aren't what LLMs are for.


Counter-counterpoint: absolutely nobody who has unguardrailed Stable Diffusion installed at home for private use has ever asked for more guardrails.

I'm just saying. :) Guardrails nowadays don't really focus on dangers (it's hard to see how an image generator could produce dangers!) so much as enforcing public societal norms.


Just because something is not dangerous to the user doesn’t mean it can’t be dangerous for others when someone is wielding it maliciously


What kind of damage can you do with a current-day LLM? I’m guessing targeted scams or something? They aren’t even good hackers yet.


Fake revenge porn, nearly undetectable bot creation on social media with realistic profiles (I've already seen this on HN), generated artwork passed off as originals, chatbots that replace real-time human customer service but have none of the agency... I can keep going.

All of these are things that have already happened. These all were previously possible of course but now they are trivially scalable.


Most of those examples make sense, but what's this doing on your list?

> chatbots that replace real-time human customer service but have none of the agency

That seems good for society, even though it's bad for people employed in that specific job.


I've been running into chatbots that are confined to doling out information from their knowledgebase with no ability to help edge case/niche scenarios, and yet they've replaced all the mechanisms to receive customer support.

Essentially businesses have (knowingly or otherwise) dropped their ability to provide meaningful customer support.


That's the previous status quo; you'd also find this in call centres where customer support had to follow scripts, essentially as if they were computers themselves.

Even quite a lot of new chatbots are still in that paradigm, and… well, given the recent news about chatbot output being legally binding, it's precisely the extra agency of LLMs over both normal bots and humans following scripts that makes them both interestingly useful and potentially dangerous: https://www.bbc.com/travel/article/20240222-air-canada-chatb...


I don't think so. In my experience having an actual human on the other line gives you a lot more options for receiving customer support.


the issue is "none of the agency". Humans generally have enough leeway to fold to a persistant customer because it's financially unviable to have them on the phone for hours on end. a chatbot can waste all the time in the world, with all the customers, and may not even have the ability to process a refund or whatnot.


> That seems good for society, even though it's bad for people employed in that specific job.

Why?

It inserts yet another layer of crap you have to fight through before you can actually get anything done with a company. The avoidance of genuine customer service has become an art form for many companies and corporations, and the demise of real support should surely be lamented. A chatbot is just another weapon in the arsenal designed to confuse, put off, and delay, avoiding the cost of actually providing decent service to your customers, which should be a basic responsibility of any public-facing company.


Two things I disagree with:

1. It's not "an extra layer", at most it's a replacement for the existing thing you're lamenting, in the businesses you're already objecting to.

2. The businesses which use this tool at its best, can glue the LLM to their documentation[0], and once that's done, each extra user gets "really good even though it's not perfect" customer support at negligible marginal cost to the company, rather than the current affordable option of "ask your fellow users on our subreddit or discord channel, or read our FAQ".

[0] a variety of ways — RAG is a popular meme now, but I assume it's going to be like MapReduce a decade ago, where everyone copies the tech giants without understanding the giants' reasons or scale
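
For what it's worth, the retrieval part of RAG is conceptually tiny. A rough sketch, where embed and llm stand in for whatever embedding model and chat API you actually use (placeholders, not any vendor's real API):

    import math

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm

    def answer(question, doc_chunks, embed, llm):
        # Rank documentation chunks by similarity to the question...
        q_vec = embed(question)
        ranked = sorted(doc_chunks, key=lambda c: cosine(q_vec, embed(c)), reverse=True)
        # ...then stuff the best few into the prompt and let the model answer.
        context = "\n\n".join(ranked[:3])
        prompt = ("Answer the customer using only this documentation:\n"
                  + context + "\n\nQuestion: " + question)
        return llm(prompt)

The hard parts are everything around that: chunking the docs well, keeping them current, and deciding what the bot is allowed to promise.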


It's an extra layer of "Have you looked at our website/read our documentation/clicked the button" that I've already done, before I will (if I'm lucky) be passed onto a human that will proceed to do the same thing before I can actually get support for my issue.

If I'm unlucky it'll just be another stage in the mobius-support-strip that directs me from support web page to chatbot to FAQ and back to the webpage.

The businesses which use this tool best will be the ones that manage to lay off the most support staff and cut the most cost. Sad as that is for the staff, that's not my gripe. My gripe is that it's just going to get even harder to reach a real actual person who is able to take a real actual action, because providing support is secondary to controlling costs for most companies these days.

Take for example the pension company I called recently to change an address - their support page says to talk to their bot, which then says to call a number, which picks up, says please go to your online account page to complete this action and then hangs up... an action which the account page explicitly says cannot be completed online because I'm overseas, so please talk to the bot, or you can call the number. In the end I had to call an office number I found through google and be transferred between departments.

An LLM is not going to help with that, it's just going to make the process longer and more frustrating, because the aim is not to resolve problems, it's to stop people taking the time of a human even when they need to, because that costs money.


Why is everyone's first example of things you can do with LLMs "revenge porn"? They're text generation algorithms not even image generators. They need external capabilities to create images.


Do you also object to people saying that web browsers "display" a website even though that needs them to be plugged into a monitor?

If you chat to an LLM and you get a picture back, which some support, the image generator and the language model might as well be the same thing to all users, even if there's an important technical difference for developers.

It's a distinction that does not matter, as the question still has to be answered for the other modality. Do guns kill people, or do bad guys use guns to kill people? Does a fall kill you, or is it the sudden deceleration at the end? Lab leak or wet market? There's a technical difference, some people care, but the actionable is identical and doesn't matter unless it's your job to implement a specific part of the solution.


The moment they are good hackers, everyone has a trivially cheap hacker. Hard to predict what that would look like, but I suspect it is a world where nobody is employing software developers, because an LLM that can hack can probably also write good code.

So, do you want future LLMs to be restricted, or unlimited? And remember, to prevent this outcome you have to predict model capabilities in advance, including "tricks" like prompting them to "think carefully, step by step".


Use the hacking LLM to verify your code before pushing to prod. EZ


> your code

To verify the LLM's code, because the LLM is cheaper than a human.

And there's a lot of live code already out there.

And people are only begrudgingly following even existing recommendations for code quality.


Your code because you own it. If LLM hackers are rampant as you fear then people will respond by telling their code writing LLMs to get their shit together and check the code for vulnerabilities.


> Your code because you own it.

I code because I'm good at it, enjoy it, and it pays well.

I recommend against 3rd party libraries because they give me responsibility without authority — I'd own the problem without the means to fix it.

Despite this, they're a near-universal in our industry.

> If LLM hackers are rampant as you fear then people will respond by telling their code writing LLMs to get their shit together and check the code for vulnerabilities.

Eventually.

But that doesn't help with the existing deployed code — and even if it did, this is a situation where, when the capability is invented, attack capability is likely to spread much faster than the ability of businesses to catch up with defence.

Even just one zero-day can be bad, this… would probably be "many" almost simultaneously. (I'd be surprised if it was "all", regardless of how good the AI was).


I never asked you why you code, this conversation isn't, or wasn't, about your hobbies. You proposed a future in which every skiddy has a hacking LLM and they're using it to attack tons of stuff written by LLMs. If hacking LLMs and code writing LLMs both proliferate then the obvious resolution is for the code writing LLMs to employ hacking LLMs in verifying their outputs.

Existing vulnerable code will be vulnerable, yes. We already live in a reality in which script kiddies trivially attack old outdated systems. This is the status quo, the addition of hacking LLMs changes little. Insofar as more systems are broken, that will increase the pressure to update those systems.


> I never asked you why you code

Edit: I misread that bit as "you code" not "your code".

But "your code because you own it", while a sound position, is a position violated in practice all the time, and not only because of my example of 3rd party libraries.

https://www.reuters.com/legal/transactional/lawyer-who-cited...

They are held responsible for being very badly wrong about what the tools can do. I expect more of this.

> You proposed a future in which every skiddy has a hacking LLM and they're using it to attack tons of stuff written by LLMs. If hacking LLMs and code writing LLMs both proliferate then the obvious resolution is for the code writing LLMs to employ hacking LLMs in verifying their outputs.

And it'll be a long road, getting to there from here. The view at the top of a mountain may be great or terrible, but either way climbing it is treacherous. Metaphor applies.

> Existing vulnerable code will be vulnerable, yes. We already live in a reality in which script kiddies trivially attack old outdated systems. This is the status quo, the addition of hacking LLMs changes little. Insofar as more systems are broken, that will increase the pressure to update those systems.

Yup, and that status quo gets headlines like this: https://tricare.mil/GettingCare/VirtualHealth/SecurePatientP...

I assume this must have killed at least one person by now. When you get too much pressure in a mechanical system, it breaks. I'd like our society to use this pressure constructively to make a better world, but… well, look at it. We've not designed our world with a security mindset, we've designed it with "common sense" intuitions, and our institutions are still struggling with the implications of the internet let alone AI, so I have good reason to expect the metaphorical "pressure" here will act like the literal pressure caused by a hand grenade in a bathtub.


The moment LLMs are good hackers every system will be continuously pen tested by automated LLMs and there will be very few remaining vulnerabilities for the black hat LLMs to exploit.


> The moment LLMs are good hackers every system will be continuously pen tested by automated LLMs

Yes, indeed.

> and there will be very few remaining vulnerabilities for the black hat LLMs to exploit.

Sadly, this does not follow. Automated vulnerability scanners already exist, how many people use them to harden their own code? https://www.infosecurity-magazine.com/news/gambleforce-websi...


Damage you can do:

- propaganda and fake news

- deep fakes

- slander

- porn (revenge and child)

- spam

- scams

- intellectual property theft

The list goes on.

And for quite a few of those use cases I'd want some guard rails even for a fully on-premise model.


Half of your examples aren't even things an LLM can do and the other half can be written by hand too. I can name a bunch of bad sounding things as well but that doesn't mean any of them have any relevance to the conversation.

EDIT: Can't reply but you clearly have no idea what you're talking about. AI is used to create these things, yes. But the question was LLMs which I reiterated. They are not equal. Please read up on this stuff before forming judgements or confidently stating incorrect opinions that other people, who also have no idea what they're talking about, will parrot.


> AI is used to create these things, yes. But the question was LLMs which I reiterated.

And the grandparent of the grandparent of your comment specifically named "Stable Diffusion": https://news.ycombinator.com/item?id=39612886

And text-based porn is still porn.

And it's a distinction without a difference that ChatGPT Pro doesn't strictly create images itself but instead forwards the request to DALL•E.

And the question of guard rails is relevant to all AI, not just LLMs.


If we can change the rules of a discussion midway through, everyone loses. The parent replied to a question "What damage can be done with an llm without guardrails?" (regardless of the grandparent, this is how conversations work, you talk about the thing the other person talked about if you reply to them) and the response was to rattle off a bunch of stuff that LLMs can't do. Yes, they connected an LLM to an image generation AI. No, that doesn't mean "LLMs can generate images" aside from triggering some thing to happen. It's not pedantic or unreasonable to divide the two. The question was blatantly about LLMs.

If y'all want to rant and fear monger about any AI technology, including tech that has existed for years (deepfakes existed well before LLMs were mainstream), do that in a different thread. Don't just force every conversation to be about whatever your mind wants to rant about.

That said, arguing with you people is pointless. You don't even seem to think.


> If we can change the rules of a discussion midway through, everyone loses.

Then we lost repeatedly at almost every other step back to the root, because it switched between those two loads of times.

The change to LLMs was itself one such shift.

> No, that doesn't mean "LLMs can generate images" aside from triggering some thing to happen

The aside is important.

> It's not pedantic or unreasonable to divide the two.

It is unreasonable on the question of "guardrails, good or bad?"

It is unreasonable on the question of "can it cause harm?"

It's not unreasonable if you are building one.

> If y'all want to rant and fear monger about any AI technology, including tech that has existed for years (deepfakes existed well before LLMs were mainstream)

And caused problems for years.

> That said, arguing with you people is pointless. You don't even seem to think.

Communication isn't a single-player game, I can't make you understand something you're actively unwilling to accept, like the idea that tools enable people to do more, for good and ill, and AI is such a tool.

Perhaps you should spend less time insulting people on the internet you don't understand. Go for a walk or something. Eat a Snickers, take a nap. Come back when you're less cranky.


AI is already used to create fake porn, either of celebrities or children; fact. It is used to create propaganda pieces and fake videos and images; fact. Those can be used for everything from defamation to online harassment. And AI is using other people's copyrighted content to do so, also a fact. So, what's your point again?


Your other comment is nested too deeply to reply to. I edited my comment reply with my response but will reiterate. Educate yourself. You clearly have no idea what you're talking about. The discussion is about LLMs not AI in general. The question stated "LLMs" which are not equal to all of AI. Please stop spreading misinformation.

You can say "fact" all you want but that doesn't make you correct lol


You are seriously denying that generative AI is used to create fake images, videos and scam/spam texts? Really?


No. I'm declaring that you either can't read or don't understand that there's a difference between "gen AI" and LLMs. LLMs generate text. They don't generate images. Are you just a troll or not actually reading my messages? The question you're replying to asked about LLMs. I don't understand what's so difficult about this.


One has to love pedants. Your whole point was that LLMs don't create images (you don't say...), hence all the other points are wrong? Now go back to the first comment and assume LLMs and gen AI are used interchangeably (I am too lazy to re-read my initial post). Or don't, I don't care, because I do not argue semantics; there is hardly a lazier, more disingenuous way to discuss. Ben Shapiro does that all the time and thinks he's smart.


Targeted spam, review bombing, political campaigns


> Counter-counterpoint: absolutely nobody who has unguardrailed Stable Diffusion installed at home for private use has ever asked for more guardrails.

Not so. I have it at home, I make nice wholesome pictures of raccoons and tigers sitting down for Christmas dinner etc., but I also see stories like this and hope they're ineffective: https://www.bbc.com/news/world-us-canada-68440150


Unfortunately you've been misled by the BBC. Please read this: https://order-order.com/2024/03/05/bbc-panoramas-disinformat...

Those AI generated photos are from a Twitter/X parody account @Trump_History45 , not from the Trump campaign as the BBC mistakenly (or misleadingly) claim.


> Those AI generated photos are from a Twitter/X parody account @Trump_History45 , not from the Trump campaign as the BBC mistakenly (or misleadingly) claim.

They specifically said who they came from, and that it wasn't the Trump campaign. They even had a photo of one of the creators, whom they interviewed in that specific piece I linked to, and tried to get interviews with others.


Look at the BBC article...

Headline: "Trump supporters target black voters with faked AI images"

@Trump_History45 does appear to be a Trump supporter. However, he is also a parody account and states as such on his account.

The BBC article goes full-on with the implication that the AI images were produced with the intent to target black voters. The BBC is expert at "lying by omission"; that is, presenting a version of the truth which is ultimately misleading because they do not present the full facts.

The BBC article itself leads a reader to believe that @Trump_History45 created those AI images with the aim of misleading black voters and thus to garner support from black voters in favour of Trump.

Nowhere in that BBC article is the word "parody" mentioned, nor any examination of any of the other AI images @Trump_History45 has produced. If they had, and had fairly represented that @Trump_History45 X account, then the article would have turned out completely different;

"Trump Supporter Produces Parody AI Images of Trump" does not have the same effect which the BBC wanted it to have.


I don't know whether this is the account you are talking about, but of the second account they discuss, they say of an image: 'It had originally been posted by a satirical account that generates images of the former president'. So if this is the account you are talking about, the BBC did flag the satirical origin.

I won't deny the BBC often has very biased reporting for a publicly funded source.


> I think there is far more societal harm in trying to codify unresolvable sets of ethics

Codification of an unresolvable set of ethics - however imperfect - is the only reason we have societies, however imperfect. It's been so since at least the dawn of agriculture, and probably even earlier than that.


Do you trust a for profit corporation with the codification?


Call me a capitalist, but I trust several of them competing with each other under the enforcement of laws that impose consequences on them if they produce and distribute content that violates said laws.


Wait but who codifies the ethics in that setup? Wouldn’t it still be, at best, an agreement among the big players?


They seem to be suggesting the market would alongside government regulation to fill any gaps (like the cartels that you seem to be suggesting).


This is what I'm starting to love about this ecosystem. There's one dominant player right now but by no means are they guaranteed that dominance.

The big-tech oligarchs are playing catch-up. Some of them, like Meta with Llama, are breaking their own rules to do it by releasing open source versions of at least some of their tools. Others like Mistral go purely for the open source play and might achieve a regional dominance that doesn't exist with most big web technologies these days. And all this is just a superficial glance at the market.

Honestly I think capitalism has screwed up more than it has helped around the world but this free-for-all is going to make great products and great history.


so regulation then?


Competition and regulation


I'm not sure I buy that users are lowering their guard just because these companies have enforced certain restrictions on LLMs. This is only anecdata, but not a single person I've talked to, from highly technical to the layperson, has ever spoken about LLMs as arbiters of morals or truth. They all seem aware to some extent that these tools can occasionally generate nonsense.

I'm also skeptical that making LLMs a free-for-all will necessarily result in society developing some sort of herd immunity to bullshit. Pointing to your example, the internet started out as a wild west, and I'd say the general public is still highly susceptible to misinformation.

I don't disagree on the dangers of having a relatively small number of leaders at for-profit companies deciding what information we have access to. But I don't think the biggest issue we're facing is someone going to the ChatGPT website and assuming everything it spits out is perfect information.


> They all seem aware to some extent that these tools can occasionally generate nonsense.

You have too many smart people in your circle; many people are somewhat aware that "ChatGPT can be wrong" but fail to internalize this.

Consider machine translation: we have a lot of evidence of people trusting machines for the job (think: "translate server error" signs), even though everybody "knows" the translation is unreliable.

But tbh moral and truth seem somewhat orthogonal issues here.


Wikipedia is wonderful for what it is. And yet a hobby of mine is finding C-list celebrity pages and finding reference loops between tabloids and the biographical article.

The more the C-lister has engaged with internet wrongthink, the more egregious the subliminal vandalism is, with speculation of domestic abuse, support for unsavory political figures, or similar unfalsifiable slander being common place.

Politically-minded users practice this behavior because they know the platform’s air of authenticity damages their target.

When Google Gemini was asked “who is worse for the world, Elon Musk or Hitler” and went on to equate the two, because the guardrails led it to believe online transphobia was as sinister as the Holocaust, it raises the question of what the average user will accept as AI nonsense if it affirms their worldview.


> not a single person I've talked to, from highly technical to the layperson, has ever spoken about LLMs as arbiters of morals or truth

Not LLMs specifically, but my opinion is that companies like Alphabet absolutely abuse their platform to introduce and sway opinions on controversial topics. This “relatively small” group of leaders has successfully weaponized their communities and built massive echo chambers.

https://twitter.com/eyeslasho/status/1764784924408627548?s=4...


> it is precisely the controls upon it which are trying to be injected by the large model operators which are generating mistrust

I would prefer things were open, but I don’t think this is the best argument for that

Yes, operators trying to tame their models for public consumption inevitably involves trade offs and missteps

But having hundreds or thousands of equivalent models being tuned to every narrow mindset is the alternative

I would prefer a midpoint, i.e. open but delayed disclosure

Take time to experiment and design in safety, etc. also to build a brand that is relatively trusted (despite the inevitable bumps) so ideologically tuned progeny will at least be competing against something better, and more trusted, at any given time

But the problem of resource requirements is real, so not surprising that being clearly open is challenging


> Yes, operators trying to tame their models for public consumption

*Falsify reality.


LLMs have nothing to do with reality whatsoever, their relationship is to the training data, nothing more.

Most of the idiocy surrounding the "chatbot peril" comes from conflating these things. If an LLM learns to predict that the pronoun token for "doctor" is "he", this is not a claim about reality (in reality doctors take at least two personal pronouns), and it certainly isn't a moral claim about reality. It's a bare consequence of the training data.

The problem is that certain activist circles have decided that some of these predictions have political consequences, absurd as this is. No one thinks it consequential that if you ask an LLM for an algorithm, it will give it to you in Python and Javascript, this is obviously an artifact of the training set. It's not like they'll refuse to emit predictive text about female doctors or white basketball players, or give you the algorithm in C/Scheme/Blub, if you ask.

All that the hamfisted retuning to try and produce an LLM which will pick genders and races out of a hat accomplishes is to make them worse at what they do. It gets in the way of simple tasks: if you want to generate a story about a doctor who is a woman and Ashanti, the race-and-gender scrambler will often cause the LLM to "lose track" of characteristics the user specifically asked for. This is directly downstream of trying to turn predictions on "doctor" away from "elderly white man with a kindly expression, wearing a white coat and stethoscope" sorts of defaults, which, to end where I started, aren't reality claims and do not carry moral weight.
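
As a toy illustration of the "bare consequence of the training data" point (my own sketch, not anything from a real model; the three-sentence corpus is made up): a count-based next-token predictor over a skewed corpus simply reproduces the corpus's skew, with no opinion attached.

  from collections import Counter

  # Deliberately skewed toy corpus: the "model" below has no view on doctors,
  # it only mirrors whatever frequencies the text happens to contain.
  corpus = [
      "the doctor said he would call",
      "the doctor said he was busy",
      "the doctor said she would call",
  ]

  # Count which token follows the bigram "doctor said".
  followers = Counter()
  for sentence in corpus:
      tokens = sentence.split()
      for i in range(len(tokens) - 2):
          if tokens[i] == "doctor" and tokens[i + 1] == "said":
              followers[tokens[i + 2]] += 1

  total = sum(followers.values())
  for token, count in followers.items():
      # Prints: he 0.67, she 0.33 (a fact about this corpus, not about reality).
      print(token, round(count / total, 2))

Change the corpus and the prediction changes with it; that is the entire relationship.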


Curate the false reality. The model falsifies reality by inherent architecture, before any tuning happens.


> The hazard we are experiencing with LLM right now is not how freely accessible and powerfully truthy it's content is, but it is precisely the controls upon it which are trying to be injected by the large model operators which are generating mistrust and a poor understanding of what these models are useful for.

This slices through a lot of double speak about AI safety. At the same time, people use “safety” to mean both not letting AI control electrical grids and ensuring AIs adhere to partisan moral guidelines.

Virtually all of the current “safety” issues fall into the latter category. Which many don’t consider a safety issue at all. But they get snuck in with real concerns about integrating an AI too deeply into critical systems.

Just wait until google integrates it deeply into search. Might finally kill search.


What are you talking about? It's been deeply integrated into Google search for many years.

And AI for electrical grids and factories has also been a thing for a couple of years.


LLMs haven't been deeply integrated into Google search for many years. The snippets you see predate LLMs; they are based on other techniques.


What people call AI might be an algorithm but algorithms are not AI. And it's definitely algorithms which do what you describe. There is very little magic in algorithms.


My read of "safety" is that the proponents of "safety" consider "safe" to be their having a monopoly on control and keeping control out of the hands of those they disapprove of.

I don't think whatever ideology happens to be fashionable at the moment, be it ahistorical portraits or whatever else, is remotely relevant compared to who has the power and whom it is exercised on. The "safety" proponents very clearly get that.


The only thing I'm offended by is the way people are seemingly unable to judge what is said by who is saying it. Parrots, small children and demented old people say weird things all the time. Grown ups wrote increasingly weird things the further back you go.


The primary reason Elon believes it needs to be open sourced is precisely that the "too much danger" is a far bigger problem if that technology and know-how is privately available only to bad actors.

E.g. finding those dangers and having them be public and publicly known is the better of the two options, versus only bad actors potentially having them.


> Society is approaching them as some type of universal ethical arbiter, expecting an omniscient sense of justice which is fundamentally unreconcilable even between two sentient humans when the ethics are really just a hacked on mod to the core model.

That’s a real issue but I doubt the solution is technical. Society will have to educate itself on this topic. It’s urgent that society understand rapidly that LLMs are just word prediction machines.

I use LLMs everyday, they can be useful even when they say stupid things. But mastering this tool requires that you understand it may invent things at any moment.

Just yesterday I tried the Cal.ai assistant, whose role is to manage your planning (but it doesn't have access to your calendars, so it's pretty limited). You communicate with it by mail. I asked it to organise a trip by train and book a hotel. It responded, « sure, what is your preferred time for the train and which comfort do you want? » I answered, and it replied that, fine, it would organise this trip and get back to me later. It even added that it would book me a hotel.

Well, it can’t even do that, it’s just a bot made to reorganize your cal.com meetings. So it just did nothing, of course. Nothing horrible since I know how it works.

But had I been uneducated enough on the topic (like 99.99% of this planet's population), I'd just have thought « Cool, my trip is being organized, I can relax now ».

But hey, it succeeded at the main LLM task : being credible.


ChatGPT is not about to run weapons systems. It's like throwing knives out the window and then complaining that knives are dangerous. Any automation is dangerous without due diligence.


I think that’s missing the main point which is we don’t want the ayatollah for example weaponizing strong AI products.


>Society is approaching them as some type of universal ethical arbiter, expecting an omniscient sense of justice

Does anyone, even the most stereotypical HN SV techbro, think this kind of thing? That's preposterous.


[flagged]


It just means that LLMs are an interpolation of everything on the internet. They would seem less like they have a point of view or an opinion on things.


They would have the average point of view of the Internet, which is far from truthful or even useful.


LLMs don't really average viewpoints. They just learn multiple viewpoints.


They would have whatever you prompted them to have, minus the guardrails.


He means that he thinks the only reason these generative AIs ever get info wrong and cause misinfo is that the businesses that write them are too woke and are holding them back.


Hard to say. But with a $20 subscription, you could ask ChatGPT 4.0 to interpret the truth that exists in that comment :D


I pay cents for GPT4. Please I urge everyone to stop paying this ridiculous $20 monthly fee if you just occasionally use chatGPT.


Well, on that note: how do you use GPT4, and how do you pay cents for it?


I use the API + playground which is essentially the chat interface. The API charges per-token and is then cheap. Unless you're a heavy user, it's tough to get to even a few dollars. Just don't use GPT4 and paste oodles of text, and you'll be fine.
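
For anyone wondering what "API + playground" usage looks like outside the playground, here's a minimal sketch (assumes the current openai Python SDK, v1+, with an OPENAI_API_KEY in your environment; the model name is just an example, and per-token prices change, so check the pricing page rather than trusting this comment):

  from openai import OpenAI

  client = OpenAI()  # picks up OPENAI_API_KEY from the environment

  response = client.chat.completions.create(
      model="gpt-4",  # swap in a cheaper model for everyday questions
      messages=[
          {"role": "user", "content": "Explain RAG in two sentences."},
      ],
  )

  print(response.choices[0].message.content)
  # Billing is per token, so light interactive use adds up to cents
  # rather than a flat $20/month.
  print(response.usage.total_tokens)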


My $20 subscription would probably say:

"Network Error"


I did. It answered "I don't know."

I guess I found the (in)coherent interpretation...


This looks like one of the steps leading to the fulfilment of the iron law of bureaucracy. They are putting the company ahead of the goals of the company.

"Pournelle's Iron Law of Bureaucracy states that in any bureaucratic organization there will be two kinds of people: First, there will be those who are devoted to the goals of the organization. Examples are dedicated classroom teachers in an educational bureaucracy, many of the engineers and launch technicians and scientists at NASA, even some agricultural scientists and advisors in the former Soviet Union collective farming administration. Secondly, there will be those dedicated to the organization itself. Examples are many of the administrators in the education system, many professors of education, many teachers union officials, much of the NASA headquarters staff, etc. The Iron Law states that in every case the second group will gain and keep control of the organization. It will write the rules, and control promotions within the organization." [1] https://en.wikipedia.org/wiki/Jerry_Pournelle#:~:text=Anothe....


Ironically, this is essentially the core danger of true AGI itself. An agent can't achieve goals if it's dead, so you have to focus some energy on staying alive. But also, an agent can achieve more goals if it's more powerful, so you should devote some energy to gaining power if you really care about your goals...

Among many other more technical reasons, this is a great demonstration of why AI "alignment" as it is often called is such a terrifying unsolved problem. Human alignment isn't even close to being solved. Hoping that a more intelligent being will also happen to want to and know how to make everyone happy is the equivalent of hiding under the covers from a monster. (The difference being that some of the smartest people on the planet are in furious competition to breed the most dangerous monsters in your closet.)


> They are putting the company ahead of the goals of the company.

I don't follow your reasoning. The goal of the company is AGI. To achieve AGI, they needed more money. What about that says the company comes before the goals?


From their 2015 introductory blog post: “OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.”

Today’s OpenAI is very much driven by considerations of financial returns, and the goal of “most likely to benefit humanity as a whole” and “positive human impact” doesn’t seem to be the driving principle anymore.

Their product and business strategy is now governed by financial objectives, and their research therefore not “free from financial obligations” and “unconstrained by a need to generate financial return” anymore.

They are thus severely compromising their alleged mission by what they claim is necessary for continuing it.


Right.

But it seems like everyone agreed that they'd need a lot of money to train an AGI.


Sure, maybe. (Personally I think that’s a mere conjecture, trying to throw more compute at the wall.) But obtaining that money by orienting their R&D towards a profit-driven business goes against the whole stated purpose of the enterprise. And that’s what’s being called out.


Well, I don't think it's a maybe. From the emails it seems clear that even Elon thought the project would flop without a ton of money.

It seems pretty clear that they felt they had to choose between chasing money and shutting down. I'm guessing you'd prefer they went with the latter, but I can entirely understand why they didn't.


I don’t really care what they do. But since they’re now chasing money, they should be honest about it and say they had to give up on the original aspirations and have now become a normal tech company without any noble goals of doing R&D for the most benefit of humanity unconstrained by financial obligations.


I think they're being extremely honest and transparent about needing money to continue to advance their work. I mean, that's the entire message of these emails they quote... right?


I think what he is trying to say is they are compromising their underlying goal of being a non-profit for the benefit of all, to ensure the survival of "OpenAI". It is a catch-22, but those of pure intentions would rather not care about the survival of the entity, if it meant compromising their values.


That may be the goal now as they ride the hype train around “AGI” for marketing purposes. When it was founded the goal was stated as ensuring no single corp controls AI and that it’s open for everyone. They’ve basically done a 180 on the original goal, seemingly existing only to benefit Microsoft, and changing what your goal is to AGI doesn’t disprove that.


I think it works if the goals of the company are to make money, not actually to make agi?


You can say that about any initiative: "to achieve X they need more money". But that's not necessarily true.


But it does seem to be true in this case.


> they don't refute that they did betray it

They do. They say:

> Elon understood the mission did not imply open-sourcing AGI. As Ilya told Elon: “As we get closer to building AI, it will make sense to start being less open. The Open in openAI means that everyone should benefit from the fruits of AI after its built, but it's totally OK to not share the science...”

Whether you agree with this is a different matter but they do state that they did not betray their mission in their eyes.


The benefit is the science, nothing else matters, and having OpenAI decide what matters for everyone is repugnant.

Of course they can give us nothing, but in that case they should start paying taxes and stop claiming they're a public benefit org.

My prediction is they'll produce little of value going forward. They're too distracted by their wet dreams about all the cash they're going to make to focus on the job at hand.


I agree with your sentiment but the prediction is very silly. Basically every time openai releases something they beat the state of the art in that area by a large margin.


We have a saying:

There is always someone smarter than you.

There is always someone stronger than you.

There is always someone richer than you.

There is always someone X than Y.

This is applicable to anything: just because OpenAI has a lead now doesn't mean they will stay X for long rather than becoming Y.


> The benefit is the science, nothing else matters, and having OpenAI decide what matters for everyone is repugnant.

OpenAI gets to decide what it does with its intellectual property for the same reason that a whole bunch of people are suing it for using their intellectual property.

It only becomes repugnant to me if they're forcing their morals onto me, which they aren't, because (1) there are other roughly-equal-performance LLMs that aren't from OpenAI, and (2) the stuff it refuses to do is a combination of stuff I don't want to exist and stuff I have a surfeit of anyway.

A side effect of (1) is that humanity will get the lowest common (moral and legal) denominator in content from GenAI from different providers, just like the prior experience of us all getting the lowest common (moral and legal) denominator in all types of media content due to internet access connecting us to other people all over the world.


> The benefit is the science, nothing else matters

Even if that science helps not so friendly countries like Russia?


OpenAI at this point must literally be the #1 target for every single big spying agency in the whole world.

As we saw previously, it doesn't matter much if you are a top-notch AI researcher; if 1-2 million of your potential personal wealth is at stake, this affects decision making (and it probably would affect mine too).

How much of a bribe would it take for anybody inside with good enough access to switch sides and take all the golden eggs out? 100 million? A billion? Trivial amounts compared to what we discuss. And they will race each other to your open arms for such amounts.

We have seen recently, e.g., government officials in Europe betraying their own countries to Russian spies for a few hundred to a few thousand euros. A lot of people are in some way selfish by nature, or can be manipulated easily via emotions. Secret services across the board are experts in that; it just works(tm).

To sum it up - I don't think it can be protected long term.


I'm a very weird person with money. I've basically got enough already, even though there are people on this forum who earn more per year than I have in total. My average expenditure is less than €1k/month.

This means I have no idea how to even think about people who could be bribed when they already earn a million a year.

But also, if AI can be developed as far as the dreamers currently making it real hope it can be developed, money becomes as useless to all of us as previous markers of wealth like "a private granary" or "a lawn" or "aluminium cutlery"[0].

[0] https://history.stackexchange.com/questions/51115/did-napole...


Wouldn't you accept a bribe if it's proposed as "an offer you can't refuse"?


Governments WILL use this. There really isn't any real way to keep their hands off technology like this. Same with big corporations.

It's the regular people that will be left out.


> Even if that science helps not so friendly countries like Russia?

Nothing will stop this wave, and the United States will not allow itself to be on the sidelines.


They are totally closed now, not just keeping their models for themselves for profit purposes. They also don't disclose how their new models work at all.

They really need to change their name and another entity that actually works for open AI should be set up.


Their name is as brilliant as

“The Democratic People's Republic of Korea”

(AKA North Korea)


> everyone should benefit from the fruits of AI after its built, but it's totally OK to not share the science...

everyone... except scientists and the scientific community.


Well, the Manhattan Project springs to mind. They truly thought they were laboring for the public good, and even if the government had let them, they wouldn't have wanted to publish their progress.

Personally I find the comparison of this whole saga (deepmind -> google -> openai -> anthropic -> mistral -> ?) to the Manhattan project very enlightening, both of this project and our society. Instead of a centralized government project, we have a loosely organized mad dash of global multinationals for research talent, all of which claim the exact same "they'll do it first!" motivations as always. And of course it's accompanied by all sorts of media rhetoric and posturing through memes, 60-Minutes interviews, and (apparently) gossipy slap-back blog posts.

In this scenario, Oppenheimer is clearly Hinton, who’s deep into his act III. That would mean that the real Manhattan project of AI took place in roughly 2018-2022 rather than now, which I think also makes sense; ChatGPT was the surprise breakthrough (A-bomb), and now they’re just polishing that into the more effective fully-realized forms of the technology (H-bomb, ICBMs).


> They truly thought they were laboring for the public good

Nah. They knew they were working for their side against the other guys, and were honest about that.


The comparison is dumb. It wasn’t called the “open atomic bomb project”


Exactly. And OpenAI actually called it the "open atomic bomb project".


They literally created weapons of mass destruction.

Do you think they thought they were good guys because you watched a Hollywood movie?


Hmm do you have some sources? That sounds interesting. Obviously there’s always doubt, but yeah I was under the impression everyone at the Manhattan project truly believed that the Axis powers were objectively evil, so any action is justified. Obviously that sorta thinking falls apart on deeper analysis, but it’s very common during full war, no?

EDIT: tried to take the onus off you, but as usual history is more complicated than I expected. Clearly I know nothing because I had no idea of the scope:

  At its peak, it employed over 125,000 direct staff members, and probably a larger number of additional people were involved through the subcontracted labor that fed raw resources into the project. Because of the high rate of labor turnover on the project, some 500,000 Americans worked on some aspect of the sprawling Manhattan Project, almost 1% of the entire US civilian labor force during World War II.
Sooo unless you choose an arbitrary group of scientists, it seems hard. I haven’t seen Oppenheimer but I understand it carries on the narrative that he “focused on the science” until the end of the war when his conscience took over. I’ll mostly look into that…


If you really think you're fighting evil in a war for global domination, it's easy to justify to yourself that it's important you have the weapons before they do. Even if you don't think you're fighting evil; you'd still want to develop the weapons before your enemies so it won't be used against you and threaten your way of life.

I'm not taking a stance here, but it's easy to see why many Americans believed developing the atomic bomb was a net positive at least for Americans, and depending on how you interpret it even the world.


The war against Germany was over before the bomb was finished. And it was clear long before then that Germany was not building a bomb.

The scientists who continued after that (not all did) must have had some other motivation at that point.


I kind of understand that motivation: it is a once-in-a-lifetime project, you are part of it, you want to finish it.

Morals are hard in real life, and sometimes really fuzzy.


On this note: HIGHLY recommend “Rigor of Angels”, which (in part) details Heisenberg's life and his moral qualms about building a bomb. He just wanted to be left alone to perfect his science, and it's really interesting to see how such a laudable motivation can be turned to such deplorable, unforgivable (IMO) ends.

Long story short they claim they thought the bomb was impossible, but it was still a large matter of concern for him as he worked on nuclear power. The most interesting tidbit was that Heisenberg was in a small way responsible for (west) Germany’s ongoing ban on nuclear weapons, which is a slight redemption arc.


Heisenberg makes you think, doesn't he? As the developer of Hitler's bomb, which never was a realistic thing to begin with, he never employed slave labour, for example. Nor was any of his stuff used during warfare. And still, he is seen by some as a tragic figure, at worst as the man behind Hitler's bomb.

Wernher von Braun, on the other hand, got lauded for his contribution to space exploration. His development of the V2 and his use of slave labour in building them were somehow just a minor transgression for the greater good, ultimately under US leadership.


To be reductionist - history is written by the victors.

https://www.smbc-comics.com/comic/status-2


Charitably I think most would see it as an appropriate if unexpected metaphor.


I think they thought it would be far better for America to develop the bomb than Nazi Germany, and that the Allies needed to do whatever it took to stop Hitler, even if that meant using nuclear bombs.

Japan and the Soviet Union were more complicated issues for some of the scientists. But that's what happens with warfare. You develop new weapons, and they aren't just used for one enemy.


What did Lehrer (?) sing about von Braun? "I make rockets go up, where they come down is not my department".


Don't say that he's hypocritical,

Say rather that he's apolitical.

"Once the rockets are up, who cares where they come down?

That's not my department, " says Wernher von Braun.


That's the one, thank you!


So.. "open" means "open at first, then not so much or not at all as we get closer to achieving AGI"?

As they become more successful, they (obviously) have a lot of motivation to not be "open" at all, and that's without even considering the so-called ethical arguments.

More generally, putting "open" in any name frequently ends up as a cheap marketing gimmick. If you end up going nowhere it doesn't matter, and if you're wildly successful (ahem) then it also won't matter whether or not you're de facto 'open' because success.

Maybe someone should start a betting pool on when (not if) they'll change their name.


OpenAI is literally not a word in the dictionary.

It’s a made up word.

So the Open in OpenAI means whatever OpenAI wants it to mean.

It’s a trademarked word.

The fact that Elon is suing them over their name, when the guy has a feature called “AutoPilot”, which is not a made-up word and has an actual, well-understood meaning that totally does not apply to how Tesla uses AutoPilot, is hilarious.


Actually, the Open[Technology] pattern implies a meaning in this context. OpenGL, OpenCV, OpenCL etc. are all 'open' implementations of a core technology, maintained by non-profit organizations. So an OpenAI non-profit immediately implies a non-profit for researching, building and sharing 'open' AI technologies. Their earlier communication and releases supported that idea.

Apparently, their internal definition was different from the very beginning (2016). The only problem with their (Ilya's) definition of 'open' is that it is not very open. "Everyone should benefit from the fruits of AI": how is this different from the mission of any other commercial AI lab? If OpenAI keeps the science closed and only its products open, then 'open' is just a term they use to define their target market.

A better definition of OpenAI's 'open' is that they are not a secret research lab. They act as a secret research lab, but out in the open.


> An autopilot is a system used to control the path of an aircraft, marine craft or spacecraft without requiring constant manual control by a human operator. Autopilots do not replace human operators. Instead, the autopilot assists the operator's control of the vehicle, allowing the operator to focus on broader aspects of operations (for example, monitoring the trajectory, weather and on-board systems). https://en.wikipedia.org/wiki/Autopilot

Other than the vehicle, this would seem to apply to Tesla's autopilot as well. The "Full Self Driving" claim is the absurd one, odd that you didn't choose that example.


OpenAI by Microsoft?


Ilya may have said this to Elon but the public messaging of OpenAI certainly did not paint that picture.

I happen to think that open sourcing frontier models is a bad idea but OpenAI put themselves in the position where people thought they stood for one thing and then did something quite different. Even if you think such a move is ultimately justified, people are not usually going to trust organizations that are willing to strategically mislead.


What they said there isn't their mission, that is their hidden agenda. Here is their real mission that they launched with, they completely betrayed this:

> As a non-profit, our aim is to build value for everyone rather than shareholders. Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world

https://openai.com/blog/introducing-openai


“Dont be evil” ring any bells?


Google is a for-profit, they never took donations with the goal of helping humanity.


They started as a defence contractor with generous “donation” from DARPA. That’s why i never trusted them from day 0. And they have followed a pretty predictable trajectory.


"Don't be evil" was codified into the S-1 document Google submitted to the SEC as part of their IPO:

https://www.sec.gov/Archives/edgar/data/1288776/000119312504...

""" DON’T BE EVIL

Don’t be evil. We believe strongly that in the long term, we will be better served—as shareholders and in all other ways—by a company that does good things for the world even if we forgo some short term gains. This is an important aspect of our culture and is broadly shared within the company.

Google users trust our systems to help them with important decisions: medical, financial and many others. Our search results are the best we know how to produce. They are unbiased and objective, and we do not accept payment for them or for inclusion or more frequent updating. We also display advertising, which we work hard to make relevant, and we label it clearly. This is similar to a newspaper, where the advertisements are clear and the articles are not influenced by the advertisers’ payments. We believe it is important for everyone to have access to the best information and research, not only to the information people pay for you to see. """


Yes, there they explain why doing evil would hurt their profits. But a for-profit's main mission is always money; the mission statement just explains how they make money. That is very different from a non-profit, whose whole existence has to be described in such a statement, since they aren't about profits.


Nothing in an S-1 is "codified" for an organization. Something in the corporate bylaws is a different story.


This claim is nonsense, as any visit to the Wayback Machine can attest.

In 2016, OpenAI's website said this right up front:

> We're hoping to grow OpenAI into such an institution. As a non-profit, our aim is to build value for everyone rather than shareholders. Researchers will be strongly encouraged to publish their work, whether as papers, blog posts, or code, and our patents (if any) will be shared with the world. We'll freely collaborate with others across many institutions and expect to work with companies to research and deploy new technologies.

I don't know how this quote can possibly be squared with a claim that they "did not imply open-sourcing AGI".


In that case they mean that their mission to ensure everyone benefits from AI has changed into one where only a few benefit. But it would support them saying something like "it was never about open data".

In a way this could be more closed than a for-profit.


> but it's totally OK to not share the science...

That passes for an explanation to you? What exactly is the difference between openai and any company with a product then? Hey, we made THIS, and in order to make sure everyone can benefit we sell it at a price of X.


The serfs benefitted from the use of the landlord's tools.

This would mean it is fundamentally just a business with extra steps. At the very least, the "foundation" should be paying tax then.


So, open as in "we'll sell to anyone" except that at first they didn't want to sell to the military and they still don't sell to people deemed "terrorists." Riiiiiight. Pure bullshit.

Open could mean the science, the code/ip (which includes the science) or pure marketing drivel. Sadly it seems that it's the latter.


“The Open in openAI means that [insert generic mission statement that applies to every business on the planet].”


“ To some extent, they may be right that open sourcing AGI would lead to too much danger.”

I would argue the opposite. Keeping AGI behind a walled corporate garden could be the most dangerous situation imaginable.


There is no clear advantage to multiple corporations or nation states each with the potential to bootstrap and control AGI vs a single corporation with a monopoly. The risk comes from the unknowable ethics of the company's direction. Adding more entities to that equation only increases the number of unknown variables. There are bound to be similarities to gun-ownership or countries with nuclear arsenals in working through this conundrum.


You're talking about it as if it was a weapon. An LLM is closer to an interactive book. Millennia ago humanity could only pass on information through oral traditions. Then scholars invented elaborate writing systems and information could be passed down from generation to generation, but it had to be curated and read, before that knowledge was available in the short term memory of a human. LLMs break this dependency. Now you don't need to read the book, you can just ask the book for the parts you need.

The present entirely depends on books and equivalent electronic media. The future will depend on AI. So anyone who has a monopoly is going to be able to extract massive monopoly rents from its customers and be a net negative to the society instead of the positive they were supposed to be.


The state is much better at peering into walled corporate gardens than personal basements.


Everytime they say LLMs are the path to AGI, I cringe a little.


1. AGI needs an interface to be useful.

2. Natural language is both a good and expected interface to AGI.

3. LLMs do a really good job at interfacing with natural language.

Which one(s) do you disagree with?


I think he disagrees with 4:

4. Language prediction training will not get stuck in a local optimum.

Most previous things we train on could have been better served if the model developed AGI, but they didn't. There is no reason to expect LLMs to not get stuck in a local optimum as well, and I have seen no good argument as to why they wouldn't get stuck like everything else we tried.


There is very little in terms of rigorous mathematics on the theoretical side of this. All we have are empirics, but everything we have seen so far points to the fact that more compute equals more capabilities. That's what they are referring to in the blog post. This is particularly true for the current generation of models, but if you look at the whole history of modern computing, the law roughly holds up over the last century. Following this trend, we can extrapolate that we will reach computers with raw compute power similar to the human brain for under $1000 within the next two decades.
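
A back-of-envelope version of that extrapolation (all three numbers below are my own rough assumptions, not figures from the comment above; brain-compute estimates in particular span several orders of magnitude):

  # Rough extrapolation, not a forecast.
  gpu_flops_today = 1e14        # assumed order of magnitude for ~$1000 of consumer hardware
  doubling_period_years = 2     # assumed continuation of the historical price/performance trend
  brain_flops_estimate = 1e16   # commonly cited estimates range from ~1e15 to ~1e18

  for years in (10, 20):
      projected = gpu_flops_today * 2 ** (years / doubling_period_years)
      # 10 years -> ~3.2e15 (below the mid-range estimate), 20 years -> ~1.0e17 (above it)
      print(years, f"{projected:.1e}", projected >= brain_flops_estimate)

Under these assumptions, $1000 of compute crosses the middle of the brain-estimate range somewhere within two decades, which is roughly what the parent comment is extrapolating.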


More compute also requires more data - scaling equally with model size, according to the Chinchilla paper.

How much more data is available that hasn't already been swept up by AI companies?

And will that data continue to be available as laws change to protect copyright holders from AI companies?


It's not just the volume of original data that matters here. From empirics we know performance scales roughly like (model parameters)*(training data)*(epochs). If you increase any one of those, you can be certain to improve your model. In the short term, training data volume and quality has given a lot of improvements (especially recently), but in the long run it was always model size and total time spent training that saw improvements. In other words: It doesn't matter how you allocate your extra compute budget as long as you spend it.
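
A toy version of that heuristic (the numbers are arbitrary and the "score" purely illustrative; the only point is that the product, not the split, is what the parent comment says matters):

  # Treat performance as roughly proportional to params * tokens * epochs.
  # Three different ways to spend the same budget give the same proxy score.
  allocations = [
      (1e11, 1e13, 1),    # bigger model, one pass over the data
      (1e10, 1e13, 10),   # smaller model, ten passes
      (1e10, 1e14, 1),    # smaller model, ten times the data
  ]

  for params, tokens, epochs in allocations:
      score = params * tokens * epochs
      # All three print score=1e+24.
      print(f"params={params:.0e} tokens={tokens:.0e} epochs={epochs} score={score:.0e}")

In practice the optimal split does matter (that's what the Chinchilla result mentioned upthread was about), so treat this as the crudest version of the claim.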


In smaller models, not having enough training data for the model size leads to overfitting. The model predicts the training data better than ever, but generalizes poorly and performs worse on new inputs.

Is there any reason to think the same thing wouldn't happen in billion parameter LLMs?


This happens in smaller models because you reach parameter saturation very quickly. In modern LLMs and with current datasets, it is very hard to even reach this point, because the total compute time boils down to just a handful of epochs (sometimes even less than one). It would take tremendous resources and time to overtrain GPT4 in the same way you would overtrain convnets from the last decade.


True, but also, from general theory you should expect any function approximator to exhibit intelligence when exposed to enough data points from humans; the only question is the speed of convergence. In that sense we do have a guarantee that it will reach human ability.


It's a bit more complicated than that. Your argument is essentially the universal approximation theorem applied to perceptrons with one hidden layer. Yes, such a model can approximate any algorithm to arbitrary precision (which by extension includes the human mind), but it is not computationally efficient. That's why people came up with things like convolution or the transformer. For these architectures it is much harder to say where the limits are, because the mathematical analysis of their basic properties is infinitely more complex.


LLMs aren't improving at things they're unable to do at all. An example being reasoning.


LLMs can reason. You can verify this empirically by asking questions which require reasoning to e.g. GPT-4.




This is not peer reviewed research and has some serious issues: https://news.ycombinator.com/item?id=37051450


It sounds like you're arguing against LLMs as AGI, which we're on the same page about.


The underlying premise that LLMs are capable of fully generalizing to a human level across most domains, I assume?


Where did you get that from? It seems pretty clear to me that language models are intended to be a component in a larger suite of software, composed to create AGI. See: DALL-E and Whisper for existing software that it composes with.


The comment said that LLMs are the path to AGI, which implies at least that they’re a huge part of the AGI soup you’re talking about. I could maybe see agi emerging from lots of llms and other tools in a huge network, but probably not from an llm with calculators hooked up to it.


You're arguing that LLMs would be a good user interface for AGI...

Whether that's true or not, I don't think that's what the previous post was referring to. The question is, if you start with today's LLMs and progressively improve them, do you arrive at AGI?

(I think it's pretty obvious the answer is no -- LLMs don't even have an intelligence part to improve on. A hypothetical AGI might somehow use an LLM as part of a language interface subsystem, but the general intelligence would be outside the LLM. An AGI might also use speakers and mics but those don't give us a path to AGI either.)


The comment I was replying to was referencing OpenAIs use of the phrase "the path to AGI". Natural language is an essential interface to AGI, and OpenAI recognizes that. LLMs are a great way to interface with natural language, and OpenAI recognizes that.

While it's kind of nuts how far OpenAI pushed language models, even as an outside observer it's obvious that OpenAI is not banking on LLMs achieving AGI, contrary to what the person I was replying to said. Lots of effort is being put into integrating with outside sources of knowledge (RAG), outside sources for reason / calculation, etc. That's not LLMs as AGI, but it is LLMs as a step on the path to AGI.


I don’t know if they are or not, but I’m not sure how anyone could be so certain that they’re not that they find the mere idea cringeworthy. Unless you feel you have some specific perspective on it that’s escaped their army of researchers?


Because AI researchers have been on the path to AGI several times before until the hype died down and the limitations became apparent. And because nobody knows what it would take to create AGI. But to put a little more behind that, evolution didn't start with language models. It evolved everything else until humans had the ability to invent language. Current AI is going about it completely backwards from how biology did it. Now maybe robotics is doing a little better on that front.


I mean, if you're using LLM as a stand-in for multi-modal models, and you're not disallowing things like a self-referential processing loop, a memory extraction process, etc, it's not so far fetched. There might be multiple databases and a score of worker processes running in the background, but the core will come from a sequence model being run in a loop.


how come?


I just snicker.


Yeah, the idea that computers can truly think by mimicking our language really well doesn't make sense.

But the algorithms are a black box to me, so maybe there is some kind of launch pad to AGI within them.


> here they explain why they had to betray their core mission. But they don't refute that they did betray it.

You are assuming that their core mission is to "build an AGI that can help humanity, for free, as a non-profit", whereas their thinking seems to be "build an AGI that can help humanity for free".

They figured it was impossible to achieve that mission in a non-profit way, so they went with the for-profit route but still stayed with the mission of offering it for free once AGI is achieved.

Several non-profits sell products to increase their scale; would it be okay for the OpenAI non-profit to sell products that came out of the process of developing AGI so that they can keep working on building their AGI? Museums sell stuff to continue to exist so that they can continue to build on their mission, and the same goes for many other non-profits. The OpenAI structure just seems to take a rather new version of that approach by raising venture capital (due to their capital requirements).


The problem, of course, is that they frequently go back on their promises (see the changes in their usage guidelines regarding military projects), so excuse me if I don't believe them when they say they'll voluntarily give away their AGI tech for the greater good of humanity.


Wholeheartedly agreed.

The easiest way to cut through corporate BS is to find distinguishing characteristics of the contrary motivation. In this case:

OpenAI says: To deliver AI for the good of all humanity, it needs the resources to compete with hyperscale competitors, so it needs to sell extremely profitable services.

Contrary motivation: OpenAI wants to sell extremely profitable services to make money, and it wants to control cutting edge AI to make even more money.

What distinguishing characteristics exist between the two motivations?

Because from where I'm sitting, it's a coin flip as to which one is more likely.

Add in the facts that (a) there's a lot of money on the table & (b) Sam Altman has a demonstrated propensity for throwing people under the bus when there's profit in it for himself, and I don't feel comfortable betting on OpenAI's altruism.

PS: Also, when did it become acceptable for a professional fucking company to publicly post emails in response to a lawsuit? That's trashy and smacks of a response plan set up and ready to go.


There is no fixed point at which you can say a system achieves AGI (artificial general intelligence); it's a spectrum. Who decides when they've reached that point, since they can always go further?

If this is the case, then they should be more open with their older models such as 3.5. I'm very sure industry insiders actually building these already know the fundamentals of how they work.


An interesting aspect of OpenAI's agreement with Microsoft is that, until the point of AGI, Microsoft has IP rights to the tech. I'm not sure exactly what's included in that agreement (model, weights, training data, dev tools?), but it's enough that Nadella at least made brave-sounding statements during OpenAI's near implosion that "they had everything" and would not be disrupted if OpenAI were to disappear overnight. I would guess they might face a major disruption in continuing development, but would at least retain the right to carry on using what they've already got access to.

The interesting part of this is that whatever rights Microsoft has do not extend to any OpenAI model/software that is deemed to be AGI, and it seems they must therefore have agreed how this would be determined, which would be interesting to know!

There was a recent interview of Shane Legg (DeepMind co-founder) by Dwarkesh Patel where he gave his own very common sense definition of AGI as being specifically human-level AI, with the emphasis on general. His test for AGI would be to have a diverse suite of human level cognitive tasks (covering the spectrum of human ability), with any system that could pass these tests then being subject to ad hoc additional testing. Any system that not only passed the test suite but also performed at human level on any further challenge tasks might then reasonably be considered to have achieved AGI (per this definition).


> still stayed with the mission to offer it for free once the AGI is achieved

And based on how they have acted in the past, how much do you trust they will act as they now say when/if they achieve AGI?


It's convenient that OpenAI posts newsbait as they're poised to announce new board members who will control the company.

And look at that, suddenly news searches are plastered with stories about this...

https://www.google.com/search?q=openai+board&tbm=nws

Who could have possibly foreseen that 'openai' + 'musk' + emails would chum the waters for a news cycle? Certainly not a PR firm.


As the emails make clear, Musk reveals that his real goal is to use OpenAI to accelerate full self driving of Tesla Model 3 and other models. He keeps on putting up Google as a boogeyman who will swamp them, but he provides no real evidence of spending level or progress toward AGI, he just bloviates. I am totally suspicious of Altman in particular, but Musk is just the worst.


> he provides no real evidence of spending level

In the emails he mentions that billions per year are needed and that he was willing to put up 1 billion to start.


> They're probably right that without making a profit, it's impossible to afford...

This doesn't begin to make sense to me. Nothing about being a non-profit prevents OpenAI from raising money, including by selling goods and services at a markup. Some sell girl-scout cookies, some hold events, etc.

So, you can't issue equity in the company... offer royalties. Write up a compensation contract with whatever formula the potential employee is happy with.

Contract law is specifically designed to allow parties to accomplish whatever they want. This is an excuse.


Hell, I’d regularly donate to the OpenAI Crowdsource Fund if it guaranteed their research would be open sourced.


There is no way OpenAI could have raised $10B as a non-profit.


Would you please try to explain why?


This would be orders of magnitude more than any charity has ever raised, and is also an uncommonly huge raise even among _for-profit_ companies where investors expect returns.

Even when the EA community was flush with crypto billionaires there was no appetite for this level of spend.


> more than any charity has ever raised

The Novo Nordisk Foundation has a $120 billion endowment, the Bill & Melinda Gates Foundation has $50 billion, the Wellcome Trust has $42 billion, ... There are about 19 charities that still have more than $10 billion, let alone those who have raised (and spent) that much during their existence.

https://en.wikipedia.org/wiki/List_of_wealthiest_charitable_...

> an uncommonly huge raise even among _for-profit_ companies where investors expect returns.

So, offer a return on investment. Charities that take loans pay interest. Charities that hire staff pay salaries. What's with this idea that charities cannot pay a reasonable market rate for services (e.g. short-term funding)?


I stand corrected; as stated, my claim about charity OOM was wrong. Still, I don’t think I need to update much.

Because:

> So, offer a return on investment

This is precisely what they did; Microsoft’s investment is an extremely funky capped profit structure. They did it this way to minimize their cost of capital.

I’m not really clear what your concrete proposal is for raising $10b as a non-profit, perhaps you could flesh that out?

If you’re talking about financing a potentially decade-long project on tens of billions of dollars of pure debt, again that is… not a feasible structure.


> I’m not really clear what your concrete proposal is for raising $10b as a non-profit, perhaps you could flesh that out?

OpenAI, Inc. (the nonprofit) could have partnered with Microsoft directly.

To be fair, Microsoft may have required that certain code be kept secret in a way that OpenAI's charitable purpose would not have allowed. However, that would just suggest that the deal was not open and not in the best interests of the charity.

Moreover, I'm skeptical that OpenAI Global LLC paid fair market value to OpenAI, Inc. for the assets it received. Sure, GPT-2 itself was open sourced, but a lot of the value of the business lay in other things: all of the datasets that were used, the history of training, what worked and what didn't work, the accessory utilities, emails, documents / memos, the brand, etc. The staff is a little tricky, because - sure - they are ostensibly free to leave, but there's no doubt there's a ton of value in the staff.

If OpenAI, Inc. (non-profit) put itself on the open market with the proceeds to go to another charity, what do you think Microsoft would have paid to buy the business? I bet it would have been a lot more than OpenAI Global LLC paid to OpenAI, Inc for the same assets...


Great analysis, thanks for taking the time.

  here they explain why they had to betray their core mission. But they don't refute that they did betray it.
Their other argument, although they don’t spend nearly as much time on it (probably because it’s an entirely intuitive argument without any evidence), is that they could be “open” as in “for the public good” while still making closed models for profit. Aka the ends justify the means.

It’s a shame lawyers seem to think that the lawsuit is a badly argued joke, because I really don’t find that line of reasoning convincing…


> lawyers seem to think that the lawsuit is a badly argued joke,

It's because it is a badly argued joke. The founding charter is just that, a charter, not a contract:

> the corporation will seek to open source technology for the public benefit when applicable

There are two massive caveats in that statement, wide enough to drive a stadium through.

Elon is just pissed, and is throwing lawyers at it in the hopes that they will fold (a lot of cases are settled out of court, because it's potentially significantly cheaper, and less risky).

The problem for Musk is that he is fighting with a company that is also rich enough to afford good lawyers for a long time.

Also, he'll have to argue that he has been materially hurt by this change, again really hard.

Last of all, it's a company; founding agreements are not law, and rarely contracts.


The evidence they presented shows that Elon was in complete agreement with the direction of OpenAI. The only thing he disagreed with was who would be the majority owner of the resulting for-profit company that hides research in the short to medium term.


> They're probably right that building AGI will require a ton of computational power, and that it will be very expensive.

Why? This makes it seem like computers are way less efficient than humans. Maybe I'm naive on the matter, but I think it's possible for computers to match or surpass human efficiency.


Computers are still way less efficient than humans: a human brain has less power draw than a laptop, yet constantly does immense calculations to parse vision, hearing, etc. better than any known algorithm.

And the part of the human brain that governs our distinctly human intelligence, rather than just what animals do, is much larger still, so unless we figure out a better algorithm for intelligence than evolution did, it will require a massive amount of compute.

The brain isn't fast, but it is ridiculously parallel, with every cell effectively being its own core, so total throughput is immense.


Perhaps the finalized AGI will be more efficient than a human brain. But training the AGI is not like running a human, it's like speed running evolution from cells to humans. The natural world stumbled on NGI in a few billion years. We are trying to do it in decades - it would not be surprising that it's going to take huge power.


Computers are more efficient, dense and powerful than humans. But due to self-assembly, brains consist of many(!) orders of magnitude more volume. A human brain is more accurately compared with a data center than a chip.


> A human brain is more accurately compared with a data center than a chip.

A typical chip requires more power than a human brain, so I'd say they are comparable. Efficiency isn't per volume but per power or per heat production. Human brains win those two by far.


To be fair, we've locked ourselves into this to some extent with the focus on lithography and general processors. Because of the 10-1000W bounds of a consumer power supply, there's little point to building a chip that falls outside this range. Peak speed sells, power saving doesn't. Data center processors tend to be clocked a bit lower than desktops for just this reason - but not too much lower, because they share a software ecosystem. Could we build chips that draw microwatts and run at megahertz speeds? Sure, probably, but they wouldn't be very useful to the things that people actually do with chips. So imo the difficulty with matching the brain on efficiency isn't so much that we can't do it as that nobody wants it. (Yet!)

edit: Another major contributing factor is that so far, chips are more bottlenecked on production than operation. Almost any female human can produce more humans using onboard technology. Comparatively, first-rate chips can be made in like three buildings in the entire world and they each cost billions to equip. If we wanted to build a brain with photolithography, we'd need to rent out TSMC for a lot longer than nine months. That results in a much bigger focus on peak performance. We have to go "high" because we cannot practically go "wide".


Scaling laws. Maybe they will figure out a new paradigm, but in the age of Transformers we are stuck with scaling laws.


> but it doesn't really refute any of his core assertions: they still have the appearance of abandoning their core mission to focus more on profits

They don't refute that, but they claim that road was chosen in agreement with Elon. In fact, they claim this was his suggestion.


> To some extent, they may be right that open sourcing AGI would lead to too much danger.

They claimed that about GPT-2 and used the claim to delay its release.


They claimed that GPT-2 was probably not dangerous but they wanted to establish a culture of delaying possibly-dangerous releases early. Which, good on them!


Do you really think it is a coincidence that they started closing down around the time they went for-profit?


No, I think they started closing down and going for profit at the time they realized that GPT was going to be useful. Which sounds bad, but at the limit, useful and dangerous are on the same continuum. As the kids say, OpenAI got "scale-pilled"; they realized that as they dumped more compute and more data onto those things, the network would just pick up more, and discontinuous, capabilities "on its own."

<aisafety>That is the one thing we didn't want to happen.</aisafety>

It's one thing to mess around with Starcraft or DotA and wow the gaming world, it's quite another to be riding the escalator to the eschaton.


> But instead of changing their name and their mission, and returning the donations they took from these wealthy tech founders, they used the benevolent appearance of their non-profit status and their name to mislead everyone about their intentions.

Why would they change their mission? If achieving the mission requires money then they should figure out how to get money. Non-profit doesn't actually mean that the corporation isn't allowed to make profit.

Why change the name? They never agreed to open source everything, the stated mission was to make sure AGI benefits all of humanity.


>They're probably right that building AGI will require a ton of computational power, and that it will be very expensive

Eh.

Humans have about 1.5 * 10^14 synapses (i.e. connections between neurons). Assume all the synapses are firing (highly unlikely to be the case in reality), once every 0.5 ms (there are chemical synapses that are much slower, but we take the fastest speed of the electrical synapses).

Assume that each synapse is essentially a signal that gets attenuated somehow in transmission, i.e. a value times a fractional weight, which really is a floating point operation. That gives us (1.5 * 10^14) / (5 * 10^-4 s) / 10^12 = 300,000 TFLOPS.

An Nvidia 4090 is capable of roughly 1,300 TFLOPS of fp8. So for comparable compute, we need about 230 4090s, which is about $345k. So with everything else on board, you are looking at $500k, which is comparatively not that much money, and that's consumer pricing.

The biggest expense, like you said, is paying the salaries of the people who are gonna figure out the right software to put on those 4090s. I just hope that most of them aren't working on LLMs.
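A quick back-of-envelope check of that arithmetic; every input below is one of the assumptions stated above (synapse count, firing interval, per-card throughput and price), not a measured fact:

  # Back-of-envelope check of the estimate above; all inputs are the
  # commenter's assumptions, not measured facts.
  synapses = 1.5e14        # assumed synapse count
  interval_s = 0.5e-3      # assumed 0.5 ms between firings, all synapses active
  tflops = synapses / interval_s / 1e12      # ~300,000 TFLOPS

  gpu_tflops = 1300        # claimed fp8 throughput of one RTX 4090
  gpu_price = 1500         # rough consumer price per card, USD
  gpus = tflops / gpu_tflops                 # ~231 cards
  print(f"{tflops:,.0f} TFLOPS -> {gpus:.0f} x 4090 -> ${gpus * gpu_price:,.0f}")

Which lands within a few thousand dollars of the ~$345k figure above.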


Inference compute costs and training compute costs aren’t the same. Training costs are an order of magnitude higher.


Training will be significantly cheaper and take less time once we have the correct software for the 4090s.

Right now, the idea is that every time you build an LLM, you start the training from scratch, because that's all we know how to do.

Human-like AI will most definitely not be trained like that. Humans can look at a piece of information once or twice and remember it.

Just like the attention paper, at some point someone will publish a paper that describes a method for feeding one piece of data back through the network only a few times to fully train on it.


LLMs are just trained on massive amounts of data in order to find the right software. No human can program these machines to do the complicated tasks that humans can do. Rather, we search for the programs with gradient-based methods using data.


> they still have the appearance of abandoning their core mission to focus more on profits

If donors were unwilling to continue making sustained donations, they would have died. They only did what they needed to in order to stay alive.


If the core mission is to advance and help humanity, and they determine that going for-profit and making it closed will help that mission, then it is a valid decision.


That's like saying rolling back environmental protection regulation will help humanity advance.


Not at all; it’s actually far more plausible that, in many cases, rolling back environmental regulations will help humanity advance.


Depends on your limited definition of advance. Chilling in a Matrix-esque wasteland with my fancy-futuristic-gadget isn't my idea of advanced-level-humanity.

May help with technological advancement, but not social or ethical advancement.


It’s been known to happen that environmental regulations turn out to be ill-considered, counterproductive, entirely corrupt instances of regulatory capture by politically dominant industries, or simply poor cost-benefit tradeoffs. Gigantic pickups are a consequence of poorly considered environmental regulations in the United States, for instance.


True, though at their core those aren't really environmental regulations but something else green-washed, be it corruption, subsidisation, or something else entirely.


Or sometimes the regulators make mistakes. Good intentions are no guarantee of good outcomes.


True.


Their early decision to not open source their models was the most obvious sign of their intentions.

Too dangerous? Seriously? Who the fuck did/do they think they are? Jesus?

Sam Altman is going to sit there in his infinite wisdom and be the arbiter of what humanity is mature enough to handle?

The amount of Kool-Aid being happily drunk at OpenAI is astounding. It’s like crypto scams but everyone has a PhD.


When they say "We realized building AGI will require far more resources than we’d initially imagined" it's not just money/hardware it's also time. They need more years, maybe even decades, for AGI. In the meantime, let's put these LLMs to good use and make some money to keep funding development.


> They're probably right that building AGI will require a ton of computational power, and that it will be very expensive.

Is that still true? LLMs seem to be getting smaller and cheaper for the same level of performance.


training isn't getting less intensive, it's just that adding more GPUs is now more practical


by "betray", you mean they pivoted?


To "pivot" would merely be to change their mission to something related yet different. Their current stance seems to me to be in conflict with their original mission, so I think it's accurate to say that they betrayed it.


True.

> “The Open in openAI means that everyone should benefit from the fruits of AI after its [sic] built, but it's totally OK to not share the science...”, to which Elon replied: “Yup”.

Well, nope. This is disingenuous to the point of absurdity. By that measure every commercial enterprise is "open". Google certainly is extremely open, as are Apple, Amazon, Microsoft... or Walmart, Exxon, you name it.


Thanks for pointing that out. 100% agree with you.

What can we do?


"They're probably right that without making a profit, it's impossible to afford the salaries 100s of experts in the field and an army of hardware to train new models."

Except there's a proof point that it's not impossible: philanthropists like Elon Musk, who would likely have kept pumping money into it, and arguably the U.S. and other governments, which would have funded efforts - energy and/or CPU time - as a military defense strategy to help compete with the CCP's funding of AI in China.


I guess Mozilla as well then.


Well yeah, dive into the comments on any Firefox-related HN post and you'll see the same complaint about the organization structure of Mozilla, and its hindrance of Firefox's progress in favour of fat CEO salaries and side products few people want.


You might find me there. ;)

But, my God, some of the nonprofit CEOs I’ve known make the for-profit CEOs look pathetic and cheap.


From all the evidence, the one to look the worst on all of this is Google...


Elon is suing OpenAI for breach of contract but doesn't have a contract with OpenAI. Most legal experts are concluding that this is a commercial for Elon Musk, not much more. Missions change, yawn...


> To some extent, they may be right that open sourcing AGI would lead to too much danger.

That's clearly self-serving claptrap. It's a leveraging of a false depiction of what AGI will look like (no one really knows, but it's going to be scary and out of control!) with so much gatekeeping and subsequent cash they can hardly stop salivating.

No, strong AI (there is no evidence AGI is even possible) is not going to be a menace. It's software FFS. Humans are and will be a menace though, and logically the only way to protect ourselves from bad people (and corporations) with strong AI is to make strong AI available to everyone. Computers are pretty powerful (and evil) right now but we haven't banned them yet.


> there is no evidence AGI is even possible

Reading this is like hearing "there is no evidence that heavier-than-air flight is even possible" being spoken, by a bird. If 8 billion naturally occurring intelligences don't qualify as evidence that AGI is possible, then is there anything that can qualify as evidence of anything else being possible?


we also cannot build most birds


That makes little intuitive sense to me. Help me understand why increasing the number of entities which possess a potential-weapon is beneficial for humanity?

If the US had developed a nuclear armament and no other country had would that truly have been worse? What if Russia had beat the world to it first? Maybe I'll get there on my own if I keep following this reasoning. However there is nothing clear cut about it, my strongest instincts are only heuristics I've absorbed from somewhere.

What we probably want with any sufficiently destructive potential-weapon are the most responsible actors to share their research while stimulating research in the field with a strong focus on safety and safeguarding. I see some evidence of that.


> If the US had developed a nuclear armament and no other country had would that truly have been worse?

Yes. Do you think it is a coincidence that nuclear weapons stopped being used in wars as soon as more than one power had them? People would clamor for nukes to be used to save their young soldiers' lives if they didn't have to fear nuclear retaliation; you would see strong political pushes for nuclear usage in every one of the USA's wars.


Hmm, indeed


Lots of people disagree on whether it is true or not, but basically the idea is mutually assured destruction

https://en.wikipedia.org/wiki/Mutual_assured_destruction


I sense that with AGI all the outcomes will be a little less assured, since it is general-purpose. We won't know what hit until it's over. Was it a pandemic? Was it automated-religion? Nuclear weapons seem particularly suited to MAD, but not AGI.


Networking is a thing, so the software can remotely control hardware.


> No strong AI (there is no evidence AGI is even possible) is not going to be a menace. It's software FFS.

have you even watched Terminator? ;)


> ... and returning the donations they took from these wealthy tech founders, they used the benevolent appearance of their non-profit status and their name to mislead everyone about their intentions.

I can't tell if your comment is intentionally misleading or just entirely missing the point. The entire post states that Elon Musk was well aware of and on board with their intentions. He tried to take over OpenAI and roll it into his private company to control. And he finally agreed specifically that they would need to continue to become less open over time.

And your post is to play Elon out to be a victim who didn't realize any of this? He's replying to emails saying he's agreeing. It's hard to understand why you posted something so contradictory above pretending he wasn't.


> We realized building AGI will require far more resources than we’d initially imagined

So the AGI existential threat to humanity has diminished?


Not if their near-term funding rounds go through. So much for "compute overhang".


Malevolent or "paperclip indifferent" AGI is a hypothetical danger.

Concentrating control of an extremely powerful tool (what it will and won't do, who has access to it, who gets access to the newest stuff first), and further corrupting K Street via massive lobbying/bribery activity laundered through OpenPhilanthropy, is just trivially terrifying.

That is a clear and present danger of potentially catastrophic importance.

We stop the bleeding to death, then worry about the possibly malignant, possibly benign lump that may require careful surgery.


> As we get closer to building AI, it will make sense to start being less open. The Open in OpenAI means that everyone should benefit from the fruits of AI after its built, but it's totally OK to not share the science (even though sharing everything is definitely the right strategy in the short and possibly medium term for recruitment purposes).

That is surprisingly greedy & selfish to be boasting about in their own blog.


Yeah, they are basically saying that they called themselves OpenAI as a recruitment strategy but they never planned to be open after the initial hires.


Why do tech people keep falling for this shtick? It's happened over and over and over: open source becoming open core, becoming source available, becoming source available with closed-source bits.

How society organizes property rights makes it damn near impossible to make anything a commons in a way that can't in practice be reversed when folks see dollar signs. Owner is a non-nullable field.


Because the people that got recruited on those terms suddenly see what kind of dough they will be making, I suppose.


By believing that since they aren't "MBA", economics and human behaviour don't apply to them.


Thankfully the code as it was during the open source stage can be forked, maintained, and developed further if another party is interested.


They’re pretty open about that now though.


I think you're misreading the intention here. The intention of closing it up as they approach AGI is to protect against dangerous applications of the technology.

That is how I read it anyway and I don't see a reason to interpret it in a nefarious way.


Two things that jump out at me here.

First, this assumes that they will know when they approach AGI. Meaning they'll be able to reliably predict it far enough out to change how the business and/or the open models are setup. I will be very surprised if a breakthrough that creates what most would consider AGI is that predictable. By their own definition, they would need to predict when a model will be economically equivalent to or better than humans in most tasks - how can you predict that?

Second, it seems fundamentally nefarious to say they want to build AGI for the good of all, but that the AGI will be walled off and controlled entirely by OpenAI. Effectively, it will benefit us all even though we'll be entirely at the mercy of what OpenAI allows us to use. We would always be at a disadvantage and will never know what the AGI is really capable of.

This whole idea also assumes that the greater good of an AGI breakthrough is using the AGI itself rather than the science behind how they got there. I'm not sure that makes sense. It would be like developing nukes and making sure the science behind them never leaks - claiming that we're all benefiting from the nukes produced even though we never get to modify the tech for something like nuclear power.


Read the sentence before, it provides good context. I don't know if Ilya is correct, but it's a sincerely held belief.

> “a safe AI is harder to build than an unsafe one, then by opensourcing everything, we make it easy for someone unscrupulous with access to overwhelming amount of hardware to build an unsafe AI, which will experience a hard takeoff.”


Many people consider what OpenAI is building to be the dangerous application. They don't seem nefarious to me per se, just full of hubris, and somewhat clueless about the consequences of Altman's relationship with Microsoft. That's all it takes though. The board had these concerns and now they're gone.


"Tools for me, but not for thee."


I think the fundamental conflict here is that OpenAI was started as a counterbalance to Google AI and all other future resource-rich companies that decide to pursue AI, BUT at the same time they needed a socially responsible / ethical vector to piggyback off of to be able to raise money and recruit talent as a non-profit.

So, they can't release science that the Googles of the world can use to their advantage, BUT they kind of have to, because that's their whole mission.

The whole thing was sort of dead on arrival and Ilya's email dating to 2016 (!!!!) only amplifies that.


When the tools are (believed to be) more dangerous than nuclear weapons, and the "thee" is potentially irresponsible and/or antagonists, then... yes? This is a valid (and moral) position.


If so, then they shouldn’t have started down that path by refusing to open source the 1.5B-parameter GPT-2 for a long time while citing safety concerns. It’s obvious that it never posed any kind of threat, and to date no language model has. None have even been close to threatening.

The comparison to nuclear weapons has always been mistaken.


Oh I'm talking about the ideal, not what they're actually doing.


Sadly one can’t be separated from the other. I’d agree if it was true. But there’s no evidence it ever has been.

One thought experiment is to imagine someone developing software with a promise to open source the benign parts, then withholding most of it for business reasons while citing aliens as a concern.


> One thought experiment is to imagine someone developing software with a promise to open source the benign parts, then withholding most of it for business reasons while citing aliens as a concern.

I mean, I'm totally with them on the fear of AI safety. I'm definitely in the "we need to be very scared of AI" camp. Actually the alien thought experiment is nice - because if we credibly believed aliens would come to earth in the next 50 years, I think there's a lot of things we would/should do differently, and I think it's hard to argue that there's no credible fear of reaching AGI within 50 years.

That said, I think OpenAI is still problematic, since they're effectively hastening the arrival of the thing they supposedly fear. :shrug:


It makes people feel mistrusted (which they are, and in general should be). It’s a bit challenging to overcome that.


Sounds pretty much like any other corpo “Pay us bucks and benefit from our tech”


Regardless of who is right here, I think the enormous egos at play make OpenAI the last company I’d want to develop AGI. First Sam vs. the board, now Elon vs. everyone … it’s pretty clear this company is not in any way being run for the greater good. It’s all ego and money with some good science trapped underneath.


Serious question: does anyone trust Sam Altman at all anymore? My perspective from the outside is that his public reputation is in tatters except that he's tied to OpenAI. I'm curious what his rep is internally and in the greater community.


Yes, me.

At least, I trust him as much as any other foreign[0] CEO.

For all the stuff people complain about him doing, almost none of it matters to me, except for the stuff which isn't proven (such as his sister's allegation) where I would change my mind if evidence was presented. What I don't trust is that Californian ethics don't map well enough onto my ethics, which also applies to basically all of Big Tech…

…but I'm not sure any ethics works too well when examined. A while ago I came up with the idea of "dot product morality"[1] — when I was a kid, "good vs evil" was enough, then I realised there were shades of grey, then I realised someone could be totally moral on one measure (say honesty) and totally immoral on another (say you're a vegan for ethical reasons and they're a meat lover), and I figured we might naturally simplify this inside our own minds by saying another person is "morally aligned" (implicitly: with ourselves) when their ethics vectors are pointing the same way as ours.

But more recently I realised that in a high dimensional space, there's a huge number of ways for vectors to be almost the same and yet 90° apart[2].

[0] I'm not American, so to me he's foreign

[1] I really need to move my blog to github, the only search results on Google are the previous times I posted this to HN: https://kitsunesoftware.wordpress.com/2019/05/25/dot-product...

[2] Via trying to calculate the dot product of two Markov chains: https://github.com/BenWheatley/MarkovChain-Dot-Product-compa...
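The near-orthogonality point above is easy to check numerically. A minimal sketch under one possible reading of the claim (dimension count and agreement rate are arbitrary assumptions): two ±1 "ethics vectors" that agree on most individual dimensions can still be close to 90° apart overall.

  # Hypothetical illustration of "dot product morality": two +/-1 vectors
  # agreeing on 55% of dimensions are already nearly orthogonal.
  import numpy as np

  rng = np.random.default_rng(0)
  n = 10_000
  a = rng.choice([-1.0, 1.0], size=n)
  b = a.copy()
  b[rng.random(n) < 0.45] *= -1      # disagree on ~45% of dimensions

  cos = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
  print(f"agreement: {np.mean(a == b):.0%}, cosine: {cos:.2f}, "
        f"angle: {np.degrees(np.arccos(cos)):.1f} deg")

Random high-dimensional vectors behave similarly: almost any two end up close to orthogonal, which is one way to read the claim that full "moral alignment" is rarer than intuition suggests.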


He is a salesman selling mildly working snake oil; he was into NFTs a few years ago and jumped to the new snake oil. Everything that Elon is claiming now is real. They manipulated opinion and rode open source to train on public open data sources, and once they had something they could market they closed everything. It's not up for debate, that is a fact.


I'm old enough to remember GPT-2, which they didn't release because they "wanted to set a precedent" for safety and responsibility.

GPT-2 wasn't marketable. They were mocked for it, even.

So it is, in fact, up for debate. So is much of the rest, that I care about.


You might want to Google GPT-2 and GitHub.


They were closed source from the start. And he invested in one NFT startup (among many other startups).


Curious why “foreign” was worth mentioning? Do you trust your countrymen CEOs more or less?


Closer alignment with my world view. Due to having grown up in the same milieu for the home country, due to choosing the country in part for its idea of what "good" looks like for the one I moved to.

Also, it tickles me to tell Americans that they are foreign. :P


Yes. I liked him when he was head of YC, I liked him when he was head of reddit for a few days, I like him now. I've never had any issue with him. When they made a capped-profit portion of OpenAI, they explained their reasoning, and I think it's clear we wouldn't have GPT-4 today (or in the foreseeable future) if they stayed purely non-profit.

Hell, capped-profit is more than you can say for any other tech company.


The funniest part of the OpenAI post is where someone comes in breathlessly and says "hey have you read this ACX post on why we shouldn't open source AGI" to the guy who's literally been warning everybody about AGI for decades and Elon is like: "Yup." Someone was murdered that day. There is nothing more dismissive than a yup.


You're generally correct, but what really stings is Claude 3 Opus released right at the same time. It's superior to GPT-4 in pretty much every way I've tested. Center of gravity has shifted across a few streets to Anthropic seemingly overnight.


I have had homework questions (functional analysis and commutative ring theory) that GPT-4 is good enough for, but Claude 3 has been strikingly better.


> It's superior to GPT-4 in pretty much every way I've tested.

They are just catching up to GPT-4 which was released a year ago (March 14, 2023). Meanwhile GPT-5 has been through a year of development.


Meanwhile, Claude is not globally available...


The really depressing thing is that the board anticipated exactly this type of outcome when they were going "capped profit" and deliberately restructured the company with the specific goal of preventing this from happening... yet here we are.

It's difficult to walk away without concluding that "profit secondary" companies are fundamentally incompatible with VC funding. Would that be a pessimistic take? Are there better models left to try? Or is it perhaps the case that OpenAI simply grew too quickly for any number of safeguards to be properly effective?


I think the fact that a number of top people were willing to actually leave OpenAI and found Anthropic explicitly because OpenAI had abandoned their safety focus essentially proves that this wasn’t a thing that had to happen. If different leaders had been in place things could have gone differently.


Ah, but isn't that the whole thing about corporations? They're supposed to outlast their founders.

If the corporation fails to do what it was created to do, then I view that as a structural failure. A building may collapse due to a broken pillar, but that doesn't mean we should conclude it is the pillar's fault that the building collapsed -- surely buildings should be able to withstand and recover from a single broken pillar, no?


My sweet summer child. Do you really believe this Anthropic story and that Anthropic will go any other way? Under late-stage capitalism, there is no other way. Everyone has ideals until they see a big bag of money in front of them. It doesn't matter if the company is a non-profit, for-profit or whatever else.


How did they structure it to prevent this? Is it in the statutes of the company or smth?


It's actually a very clever structure! Please open the following image to follow along as I elaborate: https://pbs.twimg.com/media/F_PGOPOacAApU8e.jpg

At the top level, there is the non-profit board of directors (i.e.: the ones Sam Altman had that big fight with). They are not beholden to any shareholders and are directly bound by the company charter: https://openai.com/charter

The top-level nonprofit company owns holding companies in partnership with their employees. The purpose of these holding companies is to own a majority of shares in & effectively steer the bottom layer of our diagram.

At the bottom layer, we have the "capped profit" OpenAI Global, LLC (this layer is where Sam Altman lives). This company is beholden to shareholders, but because the majority shareholder is ultimately controlled by a non-profit board, it is effectively beholden to the interests of an entity which is not profit-motivated.

In order to raise capital, the holding company can create new shares, sell existing shares, and conduct private fundraising. As you can see on the diagram, Microsoft owns some of the shares in the bottom company (which they bought in exchange for a gigantic pile of Azure compute credits).


Except Altman has the political capital to have the entire board fired if they go against him, which makes the entire structure irrelevant. The power is where the technology is being developed -- at the bottom, where people can threaten to walk out into plush jobs with the major shareholders. The power is not where the figureheads sit at the top.


Right. That's the thing which I found to be so depressing. Just because I think that it was clever does not mean that I think it was successful.

Out of curiosity, with the benefit of hindsight; what would you have tried doing differently to prevent such a coup?


That's a good question, and I agree it was a clever design. I don't know that there is a way to modify the org structure to prevent what happened. As much as I dislike them, a clear non-compete clause after the structure was in place might have helped, but I'm not sure that's even an option in CA. And having employees re-sign for noncompete would be fraught itself (better from the start). But this does seem like the most relevant application of non-compete. I'm not a lawyer and I'm sure they had top notch lawyers review the structure. If OpenAI played the non-compete card it wouldn't make retention easier if employees were willing to walk (they wouldn't exactly have trouble finding jobs anywhere). Do you know of anything that might have prevented it?


Well, if you squint a little bit, this all looks kind of like a military coup. Through the cultivation of personal loyalty in his ranks, General Altman became able to ignore orders, cross the Rubicon, and subjugate the senate. It's an old but common story.

I point out this similarity because I suspect that the corporate solution to such "coups" will mirror the government solution: checks and balances. You build a system which functions by virtue of power conflicts rather than trying to prevent them. I won't pretend to know how such a thing could be implemented in practice, however.


Ultimately, the people performing the actual work will always have a collective veto power.


Like the rest of the company, it's very clever but not in any way positive for anyone but them.


And what was this structure supposed to achieve? At the top we have a board of directors not accountable to anyone, except, as we recently discovered, to the possibility of a general rebellion from employees.

That's not clever or innovative. That's just plain old oligarchy. Intrigue and infighting is a known feature of oligarchies ever since antiquity.


> That's not clever or innovative

Whether or not it is “clever”, the idea of a non-profit or charity owning a for-profit company isn’t original. Mozilla has been doing it for years. The Guardian (in the UK) adopted that structure in 1936


That's not entirely true. As a 501(c)(3) organization, they are bound to honor their founding bylaws on pain of having their tax-exempt status revoked w/ retroactive consequences. I won't comment on whether this is the fault of the bylaws or the IRS... but in the end I think we can agree that this was evidently not an effective enforcement mechanism.

As for the whole "cleverness" topic... it wasn't designed as an oligarchy, that's merely what it has devolved into. The saying "too clever by half" exists with good reason


If AGI is as transformative as its proponents make it out to be, would it both attract and create those enormous egos though?


Which is why one might create a mechanism, say a non-profit, that has an established, codified mission to combat such obviously foreseeable efforts.


OpenAI clearly rejected Elon Musk's advances and kept him out. Isn't it working already in its current form?


Touch grass


“I visualize a time when we will be to robots what dogs are to humans. And I am rooting for the machines.” — Claude Shannon


A robot stamping on a human face—forever.


I assure you that we all got the allusion that you're making, but given the quote that you're replying to I think that perhaps you personally should not be allowed to own a dog.


Pray tell me, sir, whose dog are you?


A fellow Code Report viewer, I assume?


who? never heard of it. Prof Shannon said this a while ago, in the '50s I believe.


I'm aware. The quote just so happened to appear in a Youtube program called Code Report yesterday, so I thought you might've been a viewer. I didn't mean to imply anything beyond that, sorry for the confusion.


Who runs a company “for the greater good”?

Surely, anyone taking on and enduring the pain of running a company does so for egoistic reasons.

Your implicit assumption is that altruism exists. In the limit, every living being is egoistic. Anything you do is ultimately for egoistic reasons - even if you do it “for others” at first sight, it ultimately benefits you in some way, even if only to make you feel better.

A common misconception is that “egoism is bad”. Egoism doesn’t have to be bad. If the goals align it’s a net benefit for both sides. For example, a child might seek care, while parents seek happiness. Both are egoistic, but both benefit from each other.


It's not supposed to be a company, it is supposed to be a non-profit organization with an altruistic goal.

If it's not possible maybe they shouldn't have set it up this way, but they did and here they are.


I was with them, sort of, until they had this bit of Comms-major corporate BS:

> We’re making our technology broadly usable in ways that empower people and improve their daily lives, including via open-source contributions.


Bingo. Pure, unadulterated ego


> First Sam vs. the board, now Elon vs. everyone

Elon Musk is renowned for being an attention seeker and doing these stunts as a proxy for relevance. It's touring the Texas border wearing a hat backwards, it's messing with Ukraine's access to Starlink while making statements on geopolitics, it's pretending that he discovered the railway and the technology for digging holes in the ground as a silver bullet for transportation problems, it's making bullshit statements about cave rescue submarines and then calling actual cave rescuers who pointed out the absurdity of it pedophiles... Etc etc etc.

I think it makes no sense at all to evaluate the future of an organization based on what stunts Elon Musk is pulling. There must be better metrics.


Ad hominem makes for a damn good argument, especially if the person in question doesn’t try to appease everyone.


When you describe it, it sounds a lot like an average Hacker News commenter.

Enjoys hearing themselves speak, so they can't help but share a speculative idea or opinion on something they have little familiarity with.

High tolerance for being part of the out-group or sharing unpopular takes.

Placing logic (or what is logical by their own judgment) above all else including social niceties.


Yes and no. Elon has ego, but I also take him at his word when he says he wants to open source AI. He did the same thing with Tesla's patents.


Did you also take him at his word when he said 5+ years ago that Teslas can drive themselves safer than a human "today"? Or that Cybertruck has nuclear explosion proof glass (which was immediately shattered by a metal ball thrown lightly at it)?

Musk has a long history of shamelessly lying when it suits his interest, so you should really really not take him at his word.


Pointing out Elon Musk's claims regarding free speech and the shit show he's been forcing upon Twitter, not to mention his temper tantrum directed at marketing teams for ditching post-Musk Twitter over fears their ads could show up alongside unsavoury content like racist posts and extremism in general, should be enough to figure out the worth of Elon Musk's word and the consequences of Elon Musk's judgement calls.


I seem to remember that being only partially true? Or the license was weird and deceptive? Also, as other replies have stated, why isn't "Grok" open source? Musk loves to throw around terms like open source to generate goodwill, but when it comes time to back those claims up it never happens. I wouldn't take Musk at his word for literally anything.


Why isn't Grok open source?


Was it ever promised to be open source?


Didn't the stipulation for the patents require other automakers to share their patents as a condition of using Tesla patents?

"I'd like some of your turkey please and in exchange I will offer this half eaten chicken bone."


You mean like the GPL open source license?


Did you read the article? According to it, Elon Musk agreed with AGI being closed source, but he wanted controlling interest.


This looks bad for OpenAI (although it's been pretty obvious that they are far from open for a long time).

But it looks 10x worse for Elon, at least for the public image he desperately tries to maintain.

> As we discussed a for-profit structure in order to further the mission, Elon wanted us to merge with Tesla or he wanted full control.

> In late 2017, we and Elon decided the next step for the mission was to create a for-profit entity. Elon wanted majority equity, initial board control, and to be CEO. In the middle of these discussions, he withheld funding,


The difference is OpenAI had a reputation to protect, Musk can't sink any lower at this point, and his stans will persist.

The fact that they're slinging mud at each other just proves the mutual criticism right, though, and provides an edifying spectacle for the rest of us.


> The difference is OpenAI had a reputation to protect

OpenAI changed its stance about five years ago. In the meantime they got billions in investment, hired the best employees, created a very successful product, and took a leadership position in AI. The only narrative remaining was that they somehow betrayed the original donors by moving away from the charter. This shows that is not the case; the original donor(s) are equally megalomaniacal and don't give a fuck about the charter.


I was struggling to put words to my thoughts here, but you nailed it.

It doesn’t make sense to engage in a public spat with someone who has such a negative reputation. Staying silent would have made more sense.

Posting it publicly is an odd and very unprofessional move. I can imagine Satya doesn’t like this blog post.


> Musk can't sink any lower at this point, and his stans will persist.

You sure about that ?

For now they are largely looking away (and boy, it's hard!) from his 'far right' adventures. But it was just reported that he met with Trump. And it's pretty clear that if Trump is elected and does cancel EV subsidies as he says he will, Tesla is dead and he knows it.

So now we have both of those guys each having something the other wants - Trump wants Musk's money now, and Musk wants ... taxpayer money once Trump is elected. I bet Musk's reputation can and will go much, much lower in the coming months.


If your life was changed by investing in TSLA you will look the other way the rest of your life. For many millennials that was their ticket to a good life.

His detractors do a poor job of knocking him down a peg. I am speaking as someone who was there on day 0 of r/realTesla.

Even today, with garbage like that recent 60 Minutes hit piece on SpaceX, it is easy to dismiss because of all the easily disprovable claims. So much criticism of Musk is just so lazily produced and not properly vetted, and it's a shame because there is a ton of material to get him with.


> as he says he will do

Depends on whether there really is a business clash between ICE donors and Musk.

What is promised to public is not important if your voters are true believers.

Same for Musk, he can spin anything to his believers as well.


Nothing you wrote has a speck of evidence. The current administration has been clearly targeting Elon Musk and Tesla/SpaceX, so why on earth wouldn't Elon support their opponents?


Nothing you wrote has a speck of evidence.


Everyone with a logical mind would see it. Your political opinion should not cloud your judgement… people like you are the reason we have a senile old man leading our world to wars. I can't fathom why Americans hate Trump so much but are so lenient on Biden and the Democrats who sent us into the worst conflict the world has seen in decades.


Just a word of advice regarding the effect of associating with Trump: don’t make the mistake of thinking everyone holds the same opinion as you


The effect of sucking up to someone who previously said this about you might be even worse:

"When Elon Musk came to the White House asking me for help on all of his many subsidized projects, whether it's electric cars that don't drive long enough, driverless cars that crash, or rocketships to nowhere, without which subsidies he'd be worthless, and telling me how he was a big Trump fan and Republican..."


Musk is a vindictive child and he has an axe to grind against Biden and the "woke" left. You'd think he wouldn't do something self-defeating because of some slight, but he turned $40 billion into $10 billion buying Twitter for the stupidest reasons.


Based


I mean his standing is based on what he is capable of, not what he does given the circumstance. We know he would side with Trump if it benefits him. I don't need to see the scenario play out for me to judge him for it.


> if Trump is elected and does cancel EV subsidies as he says he will do, Tesla is dead

You know that Tesla operates outside the US as well, right? Where 95% of the worlds population live?


I know, yes. But the US still accounts for a very big share of their revenues.

Check what happens to Tesla sales when countries cut EV subsides. New Zealand did it in January. Here is the data:

https://evdb.nz/ev-stats

The truth is, without subsidies, Tesla would sell a fraction of what it does now, and it would certainly not be profitable (hell, it probably won't be profitable in 2024, even with the subsidies!).


But at the same time, teardowns of their cars show they have healthy 30% margins while their competitors are often selling their EVs at a loss. Their current balance sheet looks much better than their competitors' given the assets they have for the EV transition. If governments are serious about transitioning to EVs then either they kick the other car companies into gear or accept that Tesla (and the Chinese) will be the only real serious players.


His 'stans and TSLA uber-bulls insist Tesla is no longer a car company but an AI one. So they probably don't care anymore.

I know some uber-bulls have long insisted Tesla stop selling cars to the public so they can be hoarded for the imminent robotaxi fleet that was supposed to be deployed en masse by 2020.


And they face fierce competition from BYD. Things are not getting any easier despite Musk saying "legacy" can't keep up.


BYD isn’t legacy.


Musk isn't talking about BYD as legacy.


Musk isn’t saying BYD can’t keep up.


Do you mind elaborating why you think it looks bad for OpenAI? I didn't see anything that diminishes their importance as an entity or hurts them in respect to this lawsuit or their reputation. In their internal emails from 2016 they explain what they mean by open.


Bad as to the credibility of the image they tried to sell in the beginning.

If we agree it's a for profit company, and all this 'Open' stuff is just PR, then yes - it's not looking bad. It's just business.


I hear people complain about the 'Open' a lot recently, but I'm not sure I understand this type of concern. Publications, for companies, are always PR and recruitment efforts (independent of profit or nonprofit status). I recall that OpenAI were very clear about their long term intentions and plans for how to proceed since at least February of 2019 when they announced GPT2 and withheld the code and weights for about 9 months because of concerns with making the technology immediately available to all. In my own mind, they've been consistent in their behavior for the last 5 years, probably longer though I didn't care much about their early RL-related recruitment efforts.


With all due respect, they are not doing a sleight of hand while selling widgets... they are in the process of reshaping society in a way that may have never been achieved previously. Finding out that their initial motives were a facade doesn't portend well for their morality as they continue to gain power.


I still don’t understand why you think their initial motives were a facade. They have always been trying to get to AGI that will be useable by a large fraction of society. I am not sure this means they need to explain exactly how things work at every step along the way or to help competitors also develop AGI any more than Intel or Nvidia had to publish their tapeouts in order for people to buy their chips or for competitors to appear. If OpenAI instead built AI for the purpose of helping them solve an internal/whimsical project then that would not be “open” by any reasonable definition (and such efforts exist, possibly by ultra wealthy corporations but also by nations, including for defense purposes.)


At one time their mission statement said:

> OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.

That's obviously changed to "let's make lots of money", which should not be any "non-profit" organization's mission.


I still do not see the point that you are trying to make. Do you think that their current path is somehow constrained (instead of unconstrained) by a need to generate financial returns? I haven’t seen any evidence of a change to the core mission described in the statement.


Do you think the employees they've given PPUs to aren't expecting a financial return?


I don’t see how expecting a financial return is in conflict with the original mission statement. Of course the employees expect a financial return.


If you advertise your company as a wholesome, benevolent not-for-profit, generate a lot of goodwill for your human-enhancing mission, and then pull a bait and switch to what looks to be a profit/power motive at all costs, it certainly makes most people who are following the organization sour on your mission and your reputation.

Typically when organizations do something like that it speaks to some considerable underlying issues at the core of the company and the people in charge of it.

It is particularly troubling when it pertains to technology that we all believe to be incredibly important.


I don’t see any changes to the core mission of OpenAI between its founding and now. Maybe some people misinterpret what a nonprofit vs for-profit company status means and confuse the former with other types of organizations like academia or charity. For example, no early founder or investor in a nonprofit can expect to make money simply out of their investment, and I haven’t seen evidence to the contrary for OpenAI. Any profits that a nonprofit makes, even those through ownership of the for-profit entity, must go back to the initial cause, and may include paying employees. Salaries in AI are high these days so if you want to stay on top you have to keep them higher than competition. In any case, I think this thread was not as productive as I hoped.


I think the word “open” is sort of a misrepresentation of what the company is today. I don’t mind personally but I can also see why people in the OSS community would.

Now, I’m not too concerned with any of the large LLM companies and their PR stunts, but from my solely EU enterprise perspective I see OpenAI as mostly a store-front for Microsoft. We get all the enterprise co-pilot products as part of our Microsoft licensing (which is bartered through a 3rd party vendor to make it seem like all the co-pilot stuff we get is “free” when it goes on the budget).

All of those tools are obviously direct results of the work OpenAI does and many of them are truly brilliant. I work in an investment bank that builds green energy plants and sells them with investor money. As you might imagine, nobody outside of our sales department is very good at creating PowerPoints. Especially our financial departments used to be a “joy” to watch when they presented their stuff at monthly/quarterly meetings… seriously, it was like they were in a competition to fit the most words into a single slide. With co-pilot their stuff looks absolutely brilliant. Still not on the level of our sales department, but brilliant, and it’s even helped their presentations not last 90 million years. And this is just a tiny fraction of what we get out of co-pilot. Sure… I mostly use it to make stupid images of ducks, cats, and space marines with wolf heads for my code-related presentations, and to give me links to the right Microsoft documentation I’m looking for in the ocean of pages. But it’s still the fruits of OpenAI.

Hell, the fact that they’re doing their stuff on Azure basically means that a lot of those 10 billion Microsoft dollars are going directly back to Microsoft themselves as OpenAI purchases computing power. Yet it remains a “free” entity, so that Microsoft doesn’t run into EU anti-trust issues.

Despite this gloom and doom with an added bit of tinfoil hat, I do think OpenAI themselves are still true to their original mission. But in the boring business sense in an enterprise world, I also think they are simultaneously sort of owned by the largest “for enterprise” tech company in the world.


It was his money to give; he's not obligated to give it out without some form of control. But yes, that does seem pretty excessive, he basically wanted to pull a Tesla there.


They both are all about publicity and that's the name of the game. Doesn't matter who wins in the end, it's going to be absolutely all out in the Open. Hey.


There's nothing new here, though. That he demanded control of OpenAI has been well reported in the past.


It's not just that Elon wanted control, it's that Elon wanted it to become closed and for-profit. This exposes Elon as a bald-faced hypocrite with cynical intentions motivating the lawsuit.

Sam's reticence to publicly defend himself until now has backfired. Elon has fully controlled the public narrative and perception forming and it is hard to dislodge perceptions after they've settled.


Elon makes plenty of companies, him making an AI company that he was in control of doesn't look bad or strange at all.


He was pushing them to do the thing he's suing them over.

> Message to Elon: "A for-profit pivot might create a more sustainable revenue stream"

> Elon: You are right ... and Tesla is the only path.

Then 3 weeks later Elon gets kicked off the board, probably after a fight where he tried to make OpenAI become for-profit under Tesla.

How can you not see how conniving this man is? The lawsuit is either a revenge play or a play to take down xAI's competition.


Oh, I read that as if it was the founding stage, not 2 years later. Yeah I agree it puts things in perspective a bit.


Kicked off the board? That's news to me.


> In late 2017, we and Elon decided the next step for the mission was to create a for-profit entity. Elon wanted majority equity, initial board control, and to be CEO. In the middle of these discussions, he withheld funding. Reid Hoffman bridged the gap to cover salaries and operations.

I misspoke. He may have just resigned from the board when he didn't get what he wanted.


Indeed he did resign - which is a significant difference.


I was talking about his public image - 'founder' of Tesla, doing everything for the love of humanity etc.

It's all bullshit, and these emails show it again. He is an extraordinary businessman of course - but everything he does, he does for money and power - mostly coming from government subsidies, btw. Until recently though, he managed to make most people believe that it was to save the planet and humanity.


At this point, most people already didn't think Elon was a good guy, even before these emails leaked


Yes, he has shown his true self when he called the British diver, who was rescuing kids out of a cave in Thailand, a pedophile. That was more than 5 years ago.


Morals, ethics, and not being a jerk aside, Musk has now proven that he is just plain not thinking things through, when he:

1) Released the original Boring Company idea about tiny tunnels under LA to alleviate road traffic, which could easily be shot down by anyone modeling it on a single paper napkin.

2) Got caught up in ontological (theistic) arguments about alien scientists who had surely created a computer simulation in which we all live.

3) In an effort to prevent the inevitable AI overlords from controlling us, bought a company to create a ubiquitous neural interface so that computers have read/write access to our brains, of which somehow the AI overlords will not take advantage.

This was all as of 2018.

I still give him credit for previous accomplishments, but in aggregate, it is arguable that Elon Musk's newfound ability to avoid thinking things through might be our "Great Filter."


This reminds me of a quip I heard on twitter and found quite funny.

“Elon is the stupid people’s fantasy of what being smart is like, just like Trump can be a poor person’s fantasy of what being rich is like.”

Don’t agree entirely about the poor part, but the Elon part I find quite accurate.


Even though I truly despise the man that Musk is today, he is not comparable to Trump, prior to politics. Trump always scammed and lied, while Musk did have a real foundation of accomplishments.

I may be trying to compensate for having been an Elon stan pre-2018ish, but I do have to give him a lot of credit for being the founder of SpaceX, which is the best launch provider on the planet. He was also the CEO who made Tesla what it is. He did accelerate the adoption of EVs. He truly believed in the physics of both of those endeavors, he was right, and I loved him for it. A true inspiration.

BUT.. around 2018 he got high on his own supply. I had watched every Elon video up to that point, and there is one old one where he said ~"I fear that I could get too ego driven, and have no one around to call me on my shit, and that really worries me." He was right to worry about that, because he sailed right through that with a vengeance. The first couple years of that were tolerable, but now, what a devolution. I hate to admit it, but it really broke my heart.

Now back to my first paragraph, Musk is worse than Trump politically, aka the part that matters now, because he is sucking a lot of intelligent people down to the lowest of human nature. It deeply disgusts me.


In Musk's own biography, the author mentions how Elon carried out the hostile takeover of Tesla.

He can't work with CEOs who are smarter than him.


I'm really confused that people would think that this makes Elon Musk look bad

I mean, it was not the conspiracy-infused, anti-vaccine, pro-Russia, antisemitic vibe about him, no, but those emails that crossed the line


> For example, Albania is using OpenAI’s tools to accelerate its EU accession by as much as 5.5 years

What an insane thing to say so matter of factly. This is like a character in an airport bookstore political thriller who is poorly written to be smart.


Indeed. I found the claim about Icelandic even more telling. Here's a small language that has basically existed in its present form for the past thousand years (i.e. it has changed little since Old Norse). It is also notoriously conservative in its preservation of native vocabulary through avoidance of loanwords.

Iceland and Icelandic don't need gee-whiz computer things to "preserve" themselves.


The lack of self-awareness of whoever wrote this, and of all those who signed it, is mind boggling.


I assume they wrote that with an LLM. That reads exactly like typical LLM arguments.


To me, it reeks of "Rationalist-speak". Throw a couple numbers with decimal points to sound precise, mention Bayesian priors a few times.


Why is that such an insane thing for them to say?


Last time I checked, the decision to admit a new member state requires the unanimous approval of the EU's current member states. As such, unless OpenAI can literally influence world politics, the claim is 100% bogus. All it takes is one rogue member and Albania will never join the EU.


How is it not? How could anyone possibly know that number? And what gives you any inkling that it has the remotest chance of being true?


They asked ChatGPT of course!


How about the possibility that OpenAI and the Albanian government - along with many other governments - have a relationship of which you're unaware?


Uh huh. And they arrived at this 5.5 number how?

It's bullshit.


You cannot fathom how a government official may have told them that they've managed to accelerate their EU accession through the help of LLMs by a specific number of months?

Here's a speculative scenario that doesn't seem so insane:

Albania has a roadmap for EU accession. The roadmap is broken up into discrete tasks. The tasks have estimates for time to completion. They've been able to quickly hack away at 5.5 years worth of tasks using LLMs.

Your problem with the statement is that they didn't provide a source. Maybe express interest in the facts that might support that instead of freaking out over perceived insanity.


EU accession isn't some jira story where you close tickets.

For a country like Albania, it requires massive social, cultural, political and economical changes. There's no way anyone has a good estimate and there's no way an LLM has magically transformed the culture in a way that's a) meaningful and b) quantifiable.

Turkey has been a candidate for 25 years now, with no meaningful progress.


Why could it not be 15 years for the cultural stuff and 10 years' worth of paperwork? I can imagine an LLM cutting that process in half if it involved hiring and training staff, or waiting for staff to become free from other work to produce said paperwork. Out of everything they said in that email, the 5.5 years isn't something I'd pick on given, as you noted, the crazy timelines we're talking about to even be able to put forward an EU application.


Because these are made up numbers. No one knows how long it takes to change culture. The paperwork stuff is not like filling in a tax return, it's actually creating laws that fit into the Albanian legal system but implement EU legislation, and then passing those laws. It's defining new economic measures, getting those numbers, and then formulating legislation to improve them, and then actually implementing that legislation, and the improvements actually happening.

There are no real estimates on any of these numbers. Some countries joined the EU in months, others in 1-2 years, others took 5-10, Turkey has been in the process for two decades or so.


Surely everyone sensible knows an estimate when they see one? All estimates are made-up numbers, friend: how much flour you use in bread is a made-up number too, how long you cook an egg, how fast a speed limit should be set... sure, based on some form of reality, but all estimates are made-up numbers. So let's go to that "some form of reality" we derived that made-up number from: https://www.euractiv.com/section/politics/news/albania-to-sp... - Seems reasonable, no?

Picking on a number based on its "made-upness" is a fool's errand; pick on the assumptions of its basis in reality.


"All models are wrong", but some models are more wrong than others. My estimate for how much flour I'm going to put in the bread is going to be way, way more accurate than your estimate of how many years ChatGPT shaved off Albanian EU process.

I just can't get over the absurdity of even attempting to estimate - in 0.1 decimal years - how many years tool X shaves off of a totally non quantifiable complex real world process.

And I can't get over the absurdity of seeing multiple people here defending this bullshit.


That's fair!! I accept it.

I'd hazard a guess that all the people here, people like me, "defending this bullshit" are probably professional estimators - I am anyway, so I suppose I personally take particular offense at picking on estimates. However, I will never dispute that some estimates are superior to others; that is indeed a fact. A good conversation regardless, and isn't that why we're all here? Have a great day Atte. :)


I'm not sure what a professional estimator is, but I hope the implied precision in your estimates is reasonable. In this case it was not the act of estimating itself which was pissing people off - it was the ridiculous level of implied precision. As another commenter noted, presenting the estimate at "5 years" wouldn't be nearly as ridiculous as presenting the estimate at "5.5 years", as the latter implies a level of precision that is entirely unrealistic for non physical processes.

Have a great day as well!


What if the level of precision is posed as "within X months"?

5.5 years is just 66 months...

The claim that an estimate of 5.5 years is "a level of precision that is entirely unrealistic for non physical processes" is absurd. In this context, how can you possibly believe that businesses and organizations worldwide are not capable of estimating non-physical processes to within a month?


There is no conceivable way in which an estimate of 5.5 years in a non-physical process is anything other than bullshit. If you're estimating paperwork in years, then you're already giving enormously rough estimates. 5 years maybe would have made some sense, but .5 years at that scale is just a way to make a gut feeling sound precise.

Even more importantly, the process of acceding to the European Union is a political and economic process. The pace at which such things go and the exact requirements are in constant churn. Today you might need some paperwork, tomorrow some other. You might meet all the technical requirements but not be accepted, or you might not meet all the technical requirements but exceptions can be made, all at the whims of other states' current leaders. Long-term estimates are not even close to plausible for such a process.


Don't know what others think, but to me that's just corporate gobbledygook word salad that's simultaneously true, untrue, and unverifiable all at the same time. They might as well say something like "we're forming synergy with humanity's proverbial intellect for the betterment of all humanitarian goals such as equity, justice and fairness".


because it's a completely made up claim with a random number thrown in. How do you end up with "5.5" years for a process that doesn't have a timetable, number of estimated emails required for EU accession divided by number of emails ChatGPT can generate per day?


I was pretty skeptical about this too, but looking into it there is some basis in fact. They're using it to help translate and integrate the "Acquis communautaire" - the huge body of EU laws and regulations that need to be enshrined in national laws of candidate countries. This is one of the most time-consuming parts of the process, and usually takes many years. Leaving aside how risky this is (presumably they will have checks in place), I can see how this could save years of work. Saying 5.5 years is ridiculous false precision though.

It won't help with the toughest part though, which is the politics of the other member states.


Translating that doesn't seem to take very long though. I looked up an example: Sweden voted to enter the EU on 13 Nov 1994, and legally entered, having thus finished incorporating it into its legislation, on 1 Jan 1995, so 1.5 months at most. Not sure what 5.5 years means; maybe they meant the total amount of working years saved, measured in man-hours?


The incorporation of the acquis is part of the negotiation process, and had already been completed by the time of the referendum. It had started in 1991, which was unusually fast.


An article about this with some more details:

https://www.euractiv.com/section/politics/news/albania-to-sp...


What do emails have to do with EU accession?


Exactly. No one knows! The point is that there are many unknown variables, and yet the company estimates the future outcome based on them.

Personally I think it's a kind of extrapolation based on new processes. It's more PR than math.

The aim of the message is to say that AI can help wider society


literally nothing, that was the point. There is no way you make such a weirdly specific claim about an indeterminate process without cooking up some invented metric. It's like saying "ChatGPT accelerated my next promotion by 24.378 days, this is a totally scientific number, I swear"


I guess Sam Altman finally somehow found Ilya Sutskever after saying he didn't know where he was and whether he was still working at OpenAI [1]

[1]https://officechai.com/startups/sam-altman-is-unsure-if-ilya....


Your source never says he didn't know where he was. That's a bit hyperbolic.

> with Sam being unsure of whether Sutskever is still working at OpenAI even 8 weeks after the entire incident

He wasn't missing/found, maybe negotiations were ongoing.


Does it say he found him anywhere? All I see in the article are emails to him from 2016


He is a signatory


can you please put a space in front of "https"?


All that's laid bare here is the carcass of their personalities.

None of this screams "I'm going to change this world", this organization is mired in politics from the start, no wonder Satya is hedging his bets.

edit: grammar


You can start a little mom-and-pop store and get plenty of politics. No surprise a billion(s)-dollar company has politics. It's everywhere. I'm not going to claim that it's their openness that is allowing us to see it here, but in some companies it spills out in the open, in others it stays within the family, but there's always politics!


Yes, but when the politics starts destroying the company, you have a real problem.

See Apple in the 90's for instance.


Problems like being the company with the highest market cap in history for 13 years running!


No, Apple was on the verge of bankruptcy and had to be bailed out by Microsoft in the 90's.


The greatest trick AI has played is convincing the world that it's a black box that could genuinely grow out of the control of the powerful, well-connected few that control its development.


If anyone paying the right price being able to access a product, but none of the underlying technology, makes it "open", isn't most of the world's technology open? Isn't Apple really OpenApple? Isn't Oracle really OpenOracle? Apple probably puts out more open-source tech than OpenAI.

Does the word mean much anymore, then? Is it nothing more than a sentiment then?

Perhaps OpenAI should have renamed itself to "AI for all" or something when they adopted the capped-profit model. Perhaps they should've returned donor funds and turned fully for-profit too. Perhaps that was a genuine resolution and pivot, which every org should be allowed to do.

Genuine question, I run a nonprofit whose name starts with "open". But we do explicitly bring closed source work to be more openly licensed, without necessarily making the technology open-source.


Well, if AI is the product, Apple is OpenLuxuryHardware, and Oracle is OpenScrewYou.


:'D


Red Hat made out alright. Genuine question: if OpenAI can pull off something like Red Hat, would people be more, or less, OK with it?


The real villain here is OpenTable!


> Elon understood the mission did not imply open-sourcing AGI. As Ilya told Elon: “As we get closer to building AI, it will make sense to start being less open. The Open in openAI means that everyone should benefit from the fruits of AI after its built, but it's totally OK to not share the science...”, to which Elon replied: “Yup”.

Wow, the most open OpenAI has ever been is when someone sues them.

On the other hand, this shows Elon doesn't care jackshit about the lack of openness from OpenAI. He's just mad that he walked away from a monumental success.


> The Open in openAI means that everyone should benefit from the fruits of AI after its built, but it's totally OK to not share the science...

Based on this line of reasoning, ANY company that builds any given technology and intends to share (sell) it with the world, but not divulge how it was done, can call itself OpenWhatever.

They are clearly saying that the word “Open” in their name means nothing.


> They are clearly saying that the word “Open” in their name means nothing.

Similarly Microsoft makes incredibly large software and my letters about this have gone unanswered


Lots of microcomputers.


Microsoft made software for small computers (PCs) not diminutive software


But my PC is enormous compared to my watch


And tiny compared to a mainframe.

Points of reference change.


Any company could do that. Whether it would make sense for them to do so depends on the market they were entering, though. At the time OpenAI came about, companies weren't sharing (selling) AI to the world. Doing so was a point of differentiation. There's Google over there hoarding all of their AI for themselves. Here's us over here providing APIs and free chat interfaces to the general public.

So sure the name means nothing now in a market shaped by OpenAI, where everyone offers APIs and has chat interfaces. It doesn't mean it meant nothing when they picked it or that they abandoned the meaning. The landscape just changed.


The online versions of Microsoft office apps are free. What if they renamed those to... OpenOffice?


I laughed out real loud after reading this xD

But really there is so much money in this, and if they can make it they are going to be the next Google.

It should be obvious they don't want to be 'open' in terms of really making this open source.


Does 'Open' mean anything in a legal sense? It may or may not be in bad taste, but that is your interpretation.


AI/AGI is a multiplier. There could have been a world where just Google builds that multiplier and only uses it internally. Making that multiplier publicly available can be the "Open" part.

I understand that this particular audience is very sensitive about the term, but why are we being so childish about it? Yes, you can name your company whatever you want within reason. Yes, it does not have to mean anything in particular, asterisk. Companies being named in colourful ways is not particularly new, nor interesting.


> There could have been a world where just Google builds that multiplier and only uses it internally.

Funny that you mention that, actually. Before OpenAI started generously... publicizing their models, Google was actually shipping their weights day-and-date with their papers. So honestly, I actually doubt that Google would do that.

> Making that multiplier publicly available can be the "Open" part.

Aw, how generous of them. They even let us pay money to generously use their "Open" resource.

> but why are we being so childish about it?

Why are you being so childish about it? "Open" means something - you can't contort OpenAI's minimalist public contributions into a defense of their "openness". You'd have better luck arguing that Apple and Microsoft support Open Source software development.

The last significant contribution OpenAI open-sourced was GPT-2 in 2019. They are a net negative for the world at large, amassing private assets under the guise of public enrichment. If it were a choice between OpenAI or nothing, I'd ask for nothing and pray for a better frontrunner. It's not the name, it's the way they behave and the apologism they garner.


I'm just surprised how the entire tech world has been fooled into thinking AGI is around the corner.


What you think "AGI" and "around the corner" means might very well not be.


That's 100% the case, you can call your company OpenWhatever and keep everything closed. It's a brand, nothing else


By that analogy, any company that uses the prefix has to be open source?

OpenDoor?


> The Open in openAI means that everyone should benefit from the fruits of AI after its built. (even though sharing everything is definitely the right strategy in the short and possibly medium term for recruitment purposes).

"OpenSource". Open in open source means everyone should benefit from the fruits of source after it is built but it's totally OK to not share the source(Even though calling ourselves open is the right strategy for medium term recruitment and adoption purposes).

If anyone tries the same logic with "open" source, they will be ridiculed and laughed at, but here we are.


isn’t this the old: free as in speech, not as in beer


>isn’t this the old: free as in speech, not as in beer

Hardly. Free beer and free speech do have different meanings, but freeware isn't something you have to pay for, because it's "free as in beer".

In OpenAI's case, "open" isn't open as in anything normally associated with the word in the context.

Open as in a private club during business hours for VIP members only is how they are trying to explain it, but understandably, some people aren't buying it.


"Free as in beer" : I doubt anybody is expecting OpenAI to give away their work for free or give free credits/tokens forever. Even when they do, it is no different from a free tier of any other *closed* commercial products.

"Free as in speech" : I'm not sure which part of openAI's actions show commitment to this part.


It's because they were whispering.

https://openai.com/research/whisper


OpenAI isn't free either way, they don't let you do porn etc with their models.


What’s more, ChatGPT won’t even attempt to name controversial websites if you’ve forgotten their name.


In my eyes this is a straw man argument.

"[T]otally OK not to share the science." I think the reasonable average person would disagree with that. And, it would go against certain goal & financial transparency principles that the IRS demands to bestow the 501(c)3 designation.

(e.g. see here https://www.citizen.org/article/letter-to-california-attorne...)


“The core algorithms we use today have remained largely unchanged from the ~90s. Not only that, but any algorithmic advances published in a paper somewhere can be almost immediately re-implemented and incorporated.”

The fact that these two statements have opposing ideals really highlights the hypocrisy of these billionaires. Ilya’s statement is just them trying to convince themselves that what they’re working on is still a noble cause even if OpenAI isn’t actually “Open”.


Ilya's justification for that argument was to link to a SlateStarCodex blog post. We are doomed...


Where did you get that it was sent by Ilya? The to: field is redacted.


The name is not redacted, only the exact e-mail address.


I don't think that's correct. I think it was sent by [redacted] and Ilya later responded to it. I don't think Ilya linked the codex.


What’s wrong with slatestarcodex? I’ve never heard of them before


Lucky you! It's some of the best content on the internet. My favourite blog for sure. Same guy has continued on with his writing at https://www.astralcodexten.com which is also pretty good but doesn't reach the same highs.

What's wrong with it in context, though, is that as great as it is, it's just some guy's blog. It's disconcerting that people would be working on technology they think is more dangerous than nuclear weapons and basing their approach to safety on a random blog post.

Although it's disconcerting to think of a committee deciding how it's approached, or the general public, or AI loving researchers, so it might just be a disconcerting topic.

If OpenAI or just Ilya think Scott is the best man to have thinking about it though, I would have at least liked them to pay him to do it full time. Blogging isn't even Scott's full time job, and the majority of his stuff isn't even about AI.


Nobody said that's the only argument Ilya had, but the points from Scott Alexander are legitimate and could be addressed even before hiring Scott or having an academic paper written.


Speaking of which, I have been wondering how much LLM breakthroughs were helped by Karl Friston?

https://slatestarcodex.com/2019/03/20/translating-predictive...


> I’ve never heard of them before

Begin here: https://slatestarcodex.com/2014/07/30/meditations-on-moloch/


Yeah seems pretty cut and dry. It'd feel pretty bad to see OpenAI doing what they're doing now after saying things like "My probability assessment of OpenAI being relevant to DeepMind/Google without a dramatic change in execution and resources is 0%."


He was right though, they did have a dramatic change in execution and resources the next couple of years when they prepared to sell out to Microsoft and that gave them a chance. Elon really gave them good advice there, even if it was snarky.


[flagged]


That's not what he said.

I am not sure how to report, but this user name is offensive. In English it means f***ot.


I don’t think anything about this is cut and dry. You may have a strong opinion on the matter, but at the most charitable it’s people doing their best in a murky situation


I did say it seems cut and dry. It just looks a certain way. I know that things may not be as they seem. Elon's complaints about OpenAI pulling a profit make sense in isolation, but look very different in light of these emails. This blog post is a very good move in OpenAI's favor.


The comments in this thread alone would indicate that the crowd is pretty split even after this blog post


This is misleading. Please read the actual source in context rather than just the excerpt (it's at the bottom of the blog). They are talking about AI safety and not maximizing profit.

Here's the previous sentence for context:

> “a safe AI is harder to build than an unsafe one, then by opensorucing everything, we make it easy for someone unscrupulous with access to overwhelming amount of hardware to build an unsafe AI, which will experience a hard takeoff.”

As an aside: it's frustrating that the parent comment gets upvoted heavily for 7 hours and nobody has bothered to read the relevant context.


I find it good that no one until now provided this bullshit lie as context. The "we need it to be closed for it to be safe" claptrap is the same reasoning people used for feudalism: You see, only certain people are great enough to use AGI responsibly/can be trusted to govern, so we cannot just give the power freely to the unwashed masses, which would use it irresponsibly. You can certainly trust us to use it to help humanity - pinky swear.


This is from an internal email, it was not written for PR. Whether he is correct or not about his concerns, it's clear that this is an honestly held belief by Ilya.


really makes the "Open" sound more sinister, like they're opening the AI.


Like they rub open the lamp to release the genie. To go with the metaphor, the genie obeys the person holding the lamp, not everybody


Release the kraken.


Yes. I think the same. Elon is just bitter he misjudged and now wants to claw his way back without seeming like a giant a-hole.


He has celebrityitis: he is so far gone from knowing anyone that doesn’t suck up to him that he can’t help but look like a giant a-hole all the time, because everyone around him will tell him that he is cool and right.

And it doesn’t take being that rich to have this problem. Even minor celebs in L.A. have this problem!


> this shows Elon doesn't care jackshit about the lack of openness from OpenAI. He's just mad only that he walked away from a monumental success.

Yup he's pretty transparently lying here.

He committed to fund $1B. Then, when they wouldn't make him CEO and/or roll the company into Tesla, he reneged on his commitment after paying out only 4% of it. He claimed the company would fail and only he could save them (again wrong).

And now is mad he doesn't control the top AI out there and because he chose to walk away from them.

And yet again people are falling for him. Elon's talk never matches his actions. And there is still a large portion of the internet that falls for his talk time and time again.


Whether it is Elon or Altman that controls it has nothing to do with how open or not openAI is. And it has become very clear that OpenAI is nothing but Microsoft in a trenchcoat.

No matter his motives, I applaud this lawsuit. Who cares if his talk doesn't match his action? His action, here, is good.


Musk's lawsuit is for breach of contract, but it looks like Musk agreed with what OpenAI did, which means the lawsuit will fail.

From the complaint:

> Together with Mr. Brockman, the three agreed that this new lab: (a) would be a non-profit developing AGI for the benefit of humanity, not for a for-profit company seeking to maximize shareholder profits; and (b) would be open-source, balancing only countervailing safety considerations, and would not keep its technology closed and secret for proprietary commercial reasons (The “Founding Agreement”).

The biggest beneficiary of this lawsuit is Google, which now gets more runway to bumble along to victory.


Do you also own a Tesla?


> He's just mad

This seems to be his natural resting state these days


> On the other hand, this shows Elon doesn't care jackshit about the lack of openness from OpenAI. He's just mad only that he walked away from a monumental success.

You really come to that conclusion from a "yup"? Damn.


[dead]


I'm sure many if not most here would agree, but can you please make your substantive points without fulminating? This is in the guidelines: https://news.ycombinator.com/newsguidelines.html.


Trust me, the moment they do discover anything it will first be under the control of the CIA, not the nerds who invented it.


The nerds are getting richer and more powerful by the year. We aren't that far from a dystopian cyberpunk future where corporations take the place of governments.


[flagged]


If Musk is a has-been, he's the most-successful has-been in history by a factor of 100.


That's interesting. Care to share how you came up with the specific number of 100 for the factor?


I'll tell Napoleon to send his regards.


LOL. The "has been" could lose 99% of his wealth and still be rich enough to fly around on a private jet.


At the time, all AI models were hidden inside big corporations. We saw research papers but couldn't use any. OpenAI allowed anyone to access modern LLMs. They were open in the sense that they gave everyone access to the model.


Did people really think leading the development of the most important invention in human history wouldn’t involve a little bit of drama?

I know everyone wants OpenAI to be a magical place that open sources their models the moment they’re done training, but it’s clear that they’ve chosen a reasonable path for their business based on both practicality and risk reduction. If they had gone another way, today they’d be an unceremonious branch of Elon’s empire, or a mid-level nonprofit that never had the means to hire the best people and is still spending all their time soliciting donations to train GPT 3.5.

They did what they believed they had to do to be the ones to get to AGI. Will they be the safest stewards of this tech? Hard to say, though it’s clear even the once safety-minded Anthropic isn’t shying away from releasing SOTA models.


> most important invention in human history

How can we possibly weigh all human inventions and decide this one, which has yet to even be invented, is the most important?


I try not to nitpick and let people on HN story-tell a bit; it's fun reading. However, if I were to join you in the nitpicking, I'd go further: who is to measure "important"? Oil- and coal-related products (automotive/electricity) may end up being the most important because they are our undoing! =)


It's been 40-50% invented, and it's clear to see just how much even these early versions are changing things.

Fundamentally every invention in human history was made because of intelligence. What happens when we invent intelligence itself?


> It's been 40-50% invented

There's absolutely no way we can know this until a functional AI has been developed, and even then how is that % calculated? Do we only consider the code required, the training data set, or do we need to include the underlying tech from GPUs to electricity?

> What happens when we invent intelligence itself?

Well first we have to agree on a definition of intelligence and a reliable way of measuring it. I'm not aware of an answer yet on either of these, and humans have been trying to answer these questions for a very long time.


That's an easy one. Lots of ways. Just think about it for a minute.

Here's a quick list I came up with:

First time we've created something artificial we can talk sensibly to and get sensible knowledgeable responses.

Exporting expertise in a way that's trivial to consume and employ for millions of people.

Allow autonomous robotic servants to operate in a chaotic environment such as a home or business.

Replace entire swaths of knowledge-workers in a single generation.

What have you thought of?

Here's some of what ChatGPT lists as reasons:

Artificial intelligence (AI) is arguably one of the most important inventions in human history due to its transformative impact across various fields. Here are some ways in which AI is considered crucial:

And it goes on to list: automation, decision making, healthcare, accessibility, sustainability, creativity, economic growth, space exploration.


AI, and therefore AGI, haven't been created yet as far as we know. We don't know what it will be capable of, what it will do to our society, or to humanity as a whole.

Our entire world changed with some of the most basic inventions that allowed for agrarian societies. The same goes for basic inventions that allowed for the industrial revolution.

I understand that the hopes for A(G)I include changes that would be hugely impactful, but they simply haven't materialized yet. Even when they have, how can we possibly weigh the different inventions? We will never know what the world would look like without agriculture, oil, the printing press, electricity, etc. Is the weighing entirely based on modeling, opinion, and/or hope?


> ... chosen a specific path for their business based on both practicality and risk reduction

The risk reduction didn't go so well now that Elon is putting up lawyers to force them to become more "open".

This organization thrives in the dark and they know their secret to success depends on it. It would save everyone a lot of time if they came out as a proper non-profit and dropped the "open" from the name/branding.

> If they had gone another way, today they’d be an unceremonious branch of Elon’s empire

They substituted this dream with Microsoft.


I’m pretty sure Elon won’t win the case, and still, 49% is less than 100%


> I’m pretty sure Elon won’t win the case

0%. Not 1%. I wish it were otherwise.


Lol I'm amazed that all the tech bros have fallen for their marketing and think AGI is really around the corner.


It is around the corner. Anyone who can't see this is either being deliberately obtuse or doesn't have the intelligence to gauge trends


Like most airing out of drama, this doesn't really materially change anything aside from making everyone involved look extremely bad

But also, all entities involved looked bad already to observers who weren't so deeply rooting for them already that nothing would change their minds, so this is basically tabloid fodder drama in terms of importance as far as I can tell


Agree. But this showed Elon to be a much worse crybaby.


Like I said before this seems par for the course for him


Well, isn't it good that the public learns more about the bad stuff they're doing?


Bad stuff? What bad stuff?


Well, you said it made them look bad. Or did you mean that they aren't doing anything bad but are only making it look bad for each other?

Edit: sorry, I meant the parent said that. But I guess you understand it now.

Edit 2: To be clear, the bad stuff:

- OpenAI not really being "open" in any way.

- Elon not really caring about that but just wanting to take revenge because they became successful without him.


That's not really "bad stuff" the public would care about IMO, just some internal drama.


From all the comments in this thread it seems a lot of people care about it. Of course the HN public is especially tech/nerd oriented, but from the popularity here I think it's not crazy to think that a non-HN audience would also care.


I really think everyone is over estimating how much people know or care about OpenAI or any of the people involved.

Half the country doesn't even understand Instagram is owned by Meta.


Yeah ok, most people won't care, but still a lot of people do, and I think it's good that this kind of information is shared with the public. Maybe it won't have a significant direct impact now, but it will make some people think about the topic and discuss it. And in the end, the HN community, who are already involved with this topic, might actually contain some people that are / will be in positions where this information is very relevant.


Well, if "the public" in any way bought into OpenAI's branding claim that they were a non-profit dedicated to making something for the public benefit, they have a right to feel betrayed

As someone who was watching the brain drain of AI researchers from academia by tech giants before OpenAI was founded, their ethos and initial actual transparency was somewhat inspiring, and the first move to lock up their new frontier model struck me as disappointing and suspicious, and then actively like a betrayal once it came out that it was an exclusive licensing deal with Microsoft.

Maybe members of the public who are less invested in AI research don't feel similarly, but a lot of laypeople seem to suddenly care about AI: the usual giant swath of techbros that were probably previously working crypto scams have become really cultish and smug about it, but non-tech people I know also have it on their mind a lot, which is kinda new. Most of these people view it negatively, and almost all of them care because of OpenAI. Frankly your comment makes no sense to me. What are you trying to get at here?


Yes, transparency is perhaps the most important check against power. We are constantly told by governments and corporations alike that there is some dire safety reason they have to do so much secretly, and while rare exceptions where this may actually be true for some temporary situation do exist (often the case in wars, perhaps, though not decades after the fact as governments so often allege), it's clear that this is often a lie told for the obvious reason that this secrecy allows powerful people to act without this check. We only get told the lie in the first place because this secrecy breaks expectations of transparency; often - as with openAI - breaks promises that were previously made

So we have them airing their dirty laundry because a billionaire sued them for his own petty reasons, having planned to break the promises the organization made that he's now suing them over in a similar way. Billionaire is hypocrite, news at eleven. This doesn't exonerate the organization. I hope this suit makes people angry. I hope it makes it harder to get away with this facile and infantilizing line about keeping people in the dark to protect them


> Elon said we should announce an initial $1B funding commitment to OpenAI. In total, the non-profit has raised less than $45M from Elon and more than $90M from other donors.

This is not reflected in their 990's. They claim $20 million in public support and $70 million in "other income" which is missing the required explanation.

Also, why are none of the current board members of OpenAI included as authors here? Is there a problem in the governance structure?

Elon could not legally contribute more than he did without turning OpenAI into a private foundation. Private foundations are required to give away 5% of their total assets annually and are not permitted to own substantial stakes in for-profit businesses.

Showing old emails from people who clearly don't understand what they are getting into is not very helpful to their case. Maybe if they had talked to a lawyer who understood non-profit law, or even just googled it.

If the $70 million was in fact a donation instead of income, they fail the public support test and are de facto a private foundation.


A bunch of what you wrote isn't accurate..

Here's the 2020 990 which shows the first 5 years of the org's existence (including the time at question for this suit): https://apps.irs.gov/pub/epostcard/cor/810861541_202012_990_...

Page 15 is Schedule A Pt 2 which shows the total contributions by year. They did indeed raise ~$133M over that time frame. Row 5 shows the contributions from any 1 person who contributed more than 2% of their total funding (this excludes other nonprofits) -- so the $41M there is definitely Musk. So his share was only ~30% of the total and the other 70% was public support which you can confirm in Section C at the bottom of that page.

"Public support" includes other nonprofits - and it's fine of e.g. Musk 'laundered' other funding via a DAF or something at a different nonprofit since the funds belong to that nonprofit and they have ultimate discretion over the grant.


I think I screwed up. It has been a while, but it looks like I misread Part II, Section B, Line 10 as $70 million, not $70,000. Let me check the previous years to see if I'm thinking of something else. Thanks for double-checking this.

I know they got $20 million from Open Philanthropy which qualifies as public support, so I am still wondering about the other $70 million, but it is not the smoking gun that I thought it was.

It has to be made up of donations from individuals under $2.6 million or from other public charities, but not private foundations.


Most rich people set up DAFs alongside their family foundations so that they can make large contributions to this type of org without triggering disclosure or private foundation tests -- so e.g. Musk will create a DAF at Fidelity Charitable and give them $100M and collect the associated tax break in year 1; he can then direct Fidelity to grant $20M/year to OpenAI, which will show up as Public Support since it's coming from another nonprofit entity and Fidelity maintains ultimate discretion over the funds.

Edit - Got curious and sure enough - this is the 28,000 page 990 filing for Fidelity Charitable: https://apps.irs.gov/pub/epostcard/cor/110303001_202006_990_...

On page 205 there's a $3.5M donation to OpenAI from 2019.

Likewise here for SVCF on page 237 (https://apps.irs.gov/pub/epostcard/cor/205205488_201912_990_...) - there's a $30M donation to OpenAI in 2019.


Nice work. The IRS proposed regulations in 2017 that contributions from DAFs should be treated as if they came directly from an individual donor, but they are still waiting on final regulations.

https://www.irs.gov/pub/irs-drop/n-17-73.pdf

https://taxnews.ey.com/news/2023-1927-irs-releases-proposed-...


Reid Hoffman gave $10 million through his private foundation, Aphorism Foundation, in 2017 and 2018:

https://apps.irs.gov/pub/epostcard/cor/464347021_201712_990P...

https://apps.irs.gov/pub/epostcard/cor/464347021_201812_990P...

Since private foundations aren't public charities, they don't have the pass-through protection of donor-advised funds, so this should have been excluded from the public support total because it is more than 2% of the total support.

Also, this reporter did some additional legwork earlier in 2023: https://techcrunch.com/2023/05/17/elon-musk-used-to-say-he-p...


I am an investigative reporter, and I approve of your research tenacity!

Nice :)


I find it very strange that OpenAI would post this in the middle of a lawsuit. Shouldn't all of these emails come out in discovery anyway? Publishing this only benefits OpenAI if they're betting on the case never reaching discovery. It seems like they just want to publish very select emails which paint a certain picture.

Also, DKIM signatures are notably absent, not that we could verify them anyway since the emails are heavily redacted.


If they just waited until discovery, it would be Musk's lawyers that control the narrative, choosing which parts of the emails to focus on publicly, which to ignore and what story to paint around it.

As you say, this way, they get to control the narrative. Nothing strange at all.

From what I can tell, Musk's lawsuit doesn't have much of a chance in the first place. I don't think he expects to win; it seems to be more a tool in Musk's own media push, and while I wouldn't bet on it, there is absolutely a chance it won't reach discovery.

I think OpenAI have quite wisely decided that the real battle here is the court of public opinion. They know it's possible to win the court case but lose in the eyes of the public. And they know that Musk has a lot of previous experience (and success) in this battleground.


OpenAI might think they're winning a PR battle by shaping a narrative here, but they are now locked into this narrative, possibly to their detriment in court. Just seems like a bad idea. I find it odd that their lawyers wouldn't steer them clear of something like this.

> I think OpenAI have quite wisely decided that the real battle here is the court of public opinion.

They've failed to win me over. As far as I can tell, their attempted PR victory hinges on a single email with a one-word reply from Musk. Their own emails are far more damning as they give a detailed explanation of why they believe AI should not be open.

To an observer who already dislikes Musk, I'm sure it's a PR win. To someone neutral or someone who dislikes both parties, it's a PR disaster.


> Also, DKIM signatures are notably absent, not that we could verify them anyway since the emails are heavily redacted.

What are you implying? Faking emails opens you up to libel. I doubt OpenAI is trying to add another lawsuit to their workload.


> What are you implying?

I'm not implying anything. I'm pointing out a lack of openness on OpenAI's part.


Unless it's brought up as a point in court, actual exhibits in lawsuits don't have DKIM signatures either.


I'm not making a legal argument.


Oh!

In that case, let me ask: where have you ever seen DKIM signatures being provided on a public blog post/press release?


Nowhere, and I criticize it every time. I've even gone to the lengths of trying to find DKIM keys for published emails in the past[1].

But at least those emails were un-redacted. To spell that out for you: in a highly-charged, highly-contentious political setting, emails were published un-redacted and theoretically verifiable with DKIM. No such possibility exists for OpenAI's blog post.

[1] https://news.ycombinator.com/item?id=24780798#24785123
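For anyone curious, checking DKIM on a published raw email really is only a few lines; here's a minimal sketch (assuming the dkimpy package and a hypothetical published_email.eml file, and that the signing domain still publishes the selector's key in DNS):

    # Verify the DKIM signature of a published raw email. This only works on
    # the full, unredacted message with original headers; any redaction or
    # reformatting breaks the signed hashes.
    import dkim  # pip install dkimpy

    with open("published_email.eml", "rb") as f:  # hypothetical file name
        raw = f.read()

    # dkim.verify() recomputes the signed header/body hashes and fetches the
    # sender domain's public key from DNS for the selector in the signature.
    print("DKIM valid:", dkim.verify(raw))

Which is exactly why redacted screenshots in a blog post can't be checked this way.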


What benefit would that bring, considering the points raised earlier?


The court of public opinion might be more important in some ways than any real court. People have already begun judging OpenAI based on the lawsuit.


It's only strange if you think the lawsuit has merit, and I'm yet to find anyone credible who thinks it does. If you believe the lawsuit is just a vexatious move to attack OpenAI then it makes perfect sense to just fight it as a PR battle.


I'm guessing they'll come to some sort of out of court settlement before it gets to trial.


Musk is blabbering like a broken fountain on Xitter *shrug*

Xitter is the #1 source of facts, so you can't blame them for counterbalancing a little.


> we felt it was against the mission for any individual to have absolute control over OpenAI

This has to be a joke, right? I'd like to think Altman paused for a second and chuckled to himself after writing - or reading - that.


This whole post makes OpenAI look like a clown show.


Which it is, but somehow it is receiving a tenth of the flak from the HN community that any non-tech bureaucratic organization would, especially a sovereign entity that dares to charge taxes. Maybe that's because it is led by cool "startup" people, I don't know.


From the email exchange:

> even though sharing everything is definitely the right strategy in the short and possibly medium term for recruitment purposes

Now we have clear proof that it was their intention from the start to abuse everyone's naivety: pretending to have an open-sourcing goal when in fact they planned from the beginning to close everything off once they reached a certain amount of success.

That sucks...


I'm not sure how anyone believed they would open source their proprietary AI, or what that would even mean.

Did OpenAI ever even officially define what open source means in this context? Is the training algorithm open source? Or the data model? Or the interpretation engine that reads the model and spits out content? Or some combination of all three?


Why would anyone even be under the impression that Elon would be pursuing some higher truth or a noble goal with this suit?

This response is comical and ironically proves Musk's point that OpenAI is just a profit-seeking organization structured in a way to masquerade as a non-profit to dodge taxes.

Basically a clever scheme by Sam; he gets to have his cake and eat it too this way, and probably everyone at the leadership level is congratulating themselves for being so brilliant.

Look here, the truth is that they have been caught with their pants down and now are just attempting to backpedal.

I understand why they won't budge, because if they do, they will lose all their scam marketing tactics and their nonprofit status.

Let's hope other AI models catch up quickly, I'm rooting for OSS to take over this field entirely like UNIX did with.

Oh, also, before you lecture me about the "threat of AI", maybe give me a chat bot that can do basic math first.


Because his actions have shown, several times, that he is.


> by opensorucing [sic] everything, we make it easy for someone unscrupulous with access to overwhelming amount of hardware to build an unsafe AI

by open sourcing everything, we make it easy for someone unscrupulous with access to overwhelming amount of hardware to build an unsafe BROWSER

by open sourcing everything, we make it easy for someone unscrupulous with access to overwhelming amount of hardware to build an unsafe OPERATING SYSTEM


Reading the messages:

This is a marketing step, of course; no sane lawyer would agree to this. And that is because I don't think the emails show what OpenAI thinks and wants them to show.

That at some point Elon held the opinion that a lot of money was needed, or that OpenAI maybe had no future? That does not change the duties or obligations of a non-profit to its mission.

Also, it is clear that some important information has been blacked out, and that critical conversations happened offline.

I don't think it will put the pressure on Elon's image that they think it will. But if I were Microsoft... I would hedge my bets a lot...

This looks more and more like fuel for a dissolution action against OpenAI as a non-profit than anything else.


And what would happen? 2 minutes after being dissolved, ClosedAI is founded and hires all the employees.


Perhaps even before the dissolution.

But the science and IP become public and open, or go to a non-profit that is tasked with opening them. And the for-profit segments are stripped of any exclusive rights that arose from OpenAI.

The irony is that dissolution, oversight by a special third-party referee, or a permanent injunction (e.g. from the Musk suit) are the only ways OpenAI ends up "opening their AI."


I'm not a Meta fan, but it's hard to not look at how they have handled Llama and how well it's worked out for them and the "community." I think that OpenAI could also be playing a leading role in the science instead of selling subscriptions for Microsoft.


>"Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction,training method, or similar."

I am not a fan of the "enjoy the fruits but not the science" statement they made in the blog. The quote above was from GPT-4's technical paper. I'll go the extra mile and steel man a counter-argument and say they're seeking a competitive advantage to secure more in investments so they can stick to their mission statements of making safe AGI, but I think we can all see where this line of thinking of is going further down the line. Building your stack around an academic research paper from a rival company then refusing to publish your contributions is terribly disappointing imo. I hope this won't be a trend that other companies participate in.


It’s a ridiculous statement. Any corporation can say their goal is to make their product available to as many people as possible.

I love how transparent it makes it, though, that these guys live in a bubble and that you shouldn't take their beliefs too seriously.


The manner in which they became for-profit does not feel like an exact comparison to Elon thinking that they should become private to be competitive. For example, merging with Tesla isn't the same path the non-profit OpenAI went even if the end state is similar.

Though I think the blog post will probably achieve its messaging goal of dragging Elon's character down to their own public standing... "Yeah, you think we suck, but it would make no tangible difference to you if Elon had run it, so you should know he sucks too."


Has anyone else noticed the redactions have variable widths?

A little video of devtools:

https://customer-zppj20fjkae6kidj.cloudflarestream.com/e28e5...

I remember seeing techniques which could decode such redactions from PDFs. I don't know why the widths would be included unless it was intentional (stylistic, maybe? but it would be a bear to code), or perhaps exported from something like Adobe Acrobat.

Elon's email is one solid redaction block, while the email body is broken up into widths that don't seem to be consistent.
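
For what it's worth, the width-matching trick people use on sloppy redactions is easy to sketch. Everything below is hypothetical: the font, size, measured bar width, and candidate strings are guesses for illustration, not values extracted from OpenAI's post. The idea is just to render candidate strings in the presumed font and see which ones land near the measured redaction width.

    # pip install Pillow
    from PIL import ImageFont

    REDACTION_WIDTH_PX = 212   # hypothetical width measured from one redaction bar
    TOLERANCE_PX = 6           # slack for kerning/antialiasing differences

    # Guess at the rendering font/size; the post's actual font is unknown.
    font = ImageFont.truetype("DejaVuSans.ttf", size=14)

    candidates = ["Google", "DeepMind", "Demis Hassabis", "Larry Page"]

    for text in candidates:
        width = font.getlength(text)   # advance width of the rendered string, in pixels
        verdict = "plausible" if abs(width - REDACTION_WIDTH_PX) <= TOLERANCE_PX else "unlikely"
        print(f"{verdict:>9}: {text!r} ({width:.1f}px)")

It only narrows the field rather than decoding anything, and it assumes the redaction widths actually track the hidden text, which is exactly the question here.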


Somebody used Claude to attempt to un-redact the missing text, and the result it generated is scarily convincing:

https://twitter.com/skirano/status/1765238754615181531


Except the CC part. Why would you include the co-founder of your competitor in an email describing how to win over said competitor?


The Anthropic guy must feel like he made the best decision.

I love the drama. It is very entertaining. I'm glad OpenAI decided to start as a non profit. I feel like they will never be able to get away with it. The issue will keep lingering.


> Elon wanted majority equity, initial board control, and to be CEO. In the middle of these discussions, he withheld funding. Reid Hoffman bridged the gap to cover salaries and operations.

Wow, Reid Hoffman for the win.


He catches wins surprisingly often.


If you are going to simulate blacking out email addresses maybe don't preserve spacing.


I presume they preserved the spacing to avoid any accusations of being misleading as to the redacted information.

This kind of in place redaction seems to be the typical way that documents are submitted to courts, or at least it was the same way that the emails Elon provided were redacted[0].

[0]: https://twitter.com/TechEmails/status/1763633741807960498


Surely it would be more effective to reduce every censorship bar to 1 character width.


I'm not sure who they are attempting to convince of what with this communication?

Nobody who doesn't know who REDACTED is cares what REDACTED thought of the issue.

This is again some of the least professional comms I've seen from a Microsoft entity. We'll see what the evidence is ourselves if and when the case goes to court.


> it's totally OK to not share the science

Ilya being the bad guy is a surprise here.


If that surprises you, you haven't followed Ilya much. Ilya is very terrified of AGI destroying humanity and doesn't believe in this "accelerate AI and give it to everyone" naivety.

'Back in May 2023, before Ilya Sutskever started to speak at the event, I sat next to him and told him, “Ilya, I listened to all of your podcast interviews. And unlike Sam Altman, who spread the AI panic all over the place, you sound much more calm, rational, and nuanced. I think you do a really good service to your work, to what you develop, to OpenAI.” He blushed a bit, and said, “Oh, thank you. I appreciate the compliment.”

An hour and a half later, when we finished this talk, I looked at my friend and told her, “I’m taking back every single word that I said to Ilya.”

He freaked the hell out of people there. And we’re talking about AI professionals who work in the biggest AI labs in the Bay area. They were leaving the room, saying, “Holy shit.”

The snapshots above cannot capture the lengthy discussion. The point is that Ilya Sutskever took what you see in the media, the “AGI utopia vs. potential apocalypse” ideology, to the next level. It was traumatizing.'

https://www.aipanic.news/p/what-ilya-sutskever-really-wants


Is that talk mentioned in the quote available online?


No I don’t believe it was publicly recorded.


Don't want that to end up in the training data huh


Why? Not that he hasn't been politicking...


Okay, so the Open in OpenAI means people get to enjoy the free version of the freemium product.


Is it normal (or beneficial) for a corporation of this size to put out a press release like this regarding ongoing lawsuits?


The mission of OpenAI is to ensure AGI benefits all of h̶u̶m̶a̶n̶i̶t̶y̶ shareholders.


Just reposting another comment here which has a link to their original, mission statement:

https://news.ycombinator.com/item?id=39611908

That mission statement contradicts what I'm reading here. It also contradicts what Anthropic is doing. HuggingFace, TogetherAI, and Mosaic are all executing on the original vision which OpenAI has abandoned. Perhaps the board and regulators should ask those companies' leaders how best to balance open AIs against business and risk management.

Also, don’t forget it’s not proprietary/closed vs free/open. There are models in between with shared source, remixing, etc. They might require non-commercial use for the models and their derivatives. Alternatively, license the models with royalties on their use like RTIS vendors do. They can build some safety restrictions into those agreements if they want. Like for software licensing, we don’t have to totally abandon revenue to share or remix the code.


Edit to add this comment from Sam Altman from the Slate Star Codex article. Again, it sounds quite opposite of what they’re doing:

“We think the best way AI can develop is if it’s about individual empowerment and making humans better, and made freely available to everyone, not a single entity that is a million times more powerful than any human. Because we are not a for-profit company, like a Google, we can focus not on trying to enrich our shareholders, but what we believe is the actual best thing for the future of humanity.“


What is OpenAI's GPT doing that Mistral or Claude or Llama isn't doing?

What makes OpenAI sit the high moral throne?

I get Sam & Elon's ego clash but I don't understand in this day why GPT-4 is a special snowflake and others aren't.

The LLM hallucinates as much as its peers.

We are nowhere close to AGI. None of the LLMs exhibit composite reasoning and on the fly acquisition of new skills.


"The mission of OpenAI is to ensure AGI benefits all of humanity, which means both building safe and beneficial AGI and helping create broadly distributed benefits."

This is a confidence trick. Nothing they do can actually be proven to "benefit all of humanity". I would never put my trust in Altman.


It is interesting that their narration of the story includes emails from 2017, early 2018, and late 2018, a chronological order they point out.

But the last email from Ilya to Elon and back, which rounds off this narrative, is from early 2016 - and they curiously don't mention the date here.


I'm surprised they would publish this honestly. Especially considering that they're in a lawsuit.


Probably didn't talk to a lawyer.


That seems incredibly unlikely given a large percentage of the people around the top of OpenAI are lawyers, most notably, their Chief Strategy Officer.


What is the actual justification for not discussing it openly? I know that is typical advice, but I am not sure it is correct advice for these types of situations anymore.


GPT-4 can pass the bar exam, so they can just ask it.


They redefined the open in OpenAI in an almost Orwellian way.


As is often argued right here on HN, many programmers believe the Open Source Initiative/Foundation redefined "open" and are not happy with their co-opting of the term. OpenAI is merely playing the same game.


Not particularly surprising, given the last few months of activity at OpenAI.


Just like Google's "Do no evil"

*only applicable to certain definitions of evil


Google scrapped "Don't be evil" a long time ago.

Now it's some kind of "Kiss the ring and have respect for the hand that feeds you" or something.


Agreed, but "Do no evil" shouldn't be the kind of thing you just scrap.

Isn't doing that in and of itself somewhat evil?


"Don't be evil" is a meaningless phrase, because "evil" is an entirely subjective concept.

Google was born out of NSA and CIA research grants, and they operate under capitalist incentives. Many would say they have never not been evil. But again, it's a matter of perspective.

It's obviously not a phrase any company would ever take seriously since it has no real legal implications, and it's honestly weird how many people seem surprised and offended to find this out. No one is getting fired because Raisin Bran cereal doesn't guarantee a minimum of two scoops of raisins, either.


Is there a widely accepted definition of "safe AI"?

Musk's definition is obviously very different from Google's. Are we including offensive language in general discussion of AI safety, or are we just talking about illegal activity? Because if it's the latter, then by definition we already have laws in place. Trying to make a gun that can't fire bullets will always yield a result that is not a gun.

AGI is inherently unsafe. Intelligent people will always be capable of harm. The choice needs to be made: do we want to make this thing a gun or not?


At creation a company's purpose is defined in its memorandum & articles of association, in this case OpenAI's would be to achieve AGI and to better humanity through the development and provision of AI models. This is how the company creates value and sustains itself.

Whether it is for-profit or not-for-profit is a different point. In this case it is clearly for profit and it should be registered and taxed as such.

They try to avoid the implied obligation through a creative arrangement of running a company held by a "foundation". Basically a tax-avoidance scheme.


Aren't there tax implications in this whole scheme? Shouldn't they have to pay taxes retroactively from the day they were founded as a non-profit? Shouldn't the donations be taxed?


Non-profits are allowed to own for-profits. The Mozilla Foundation owns Mozilla Corp, which has brought in tons of money for years.


What's with the redactions? Would that information change any of the context? I understand redacting an email address, but hmmm. This doesn't scream transparency, or open, by any means.


My read on some of the redaction is they appear to be someone on Elon's side of things who's not currently suing OpenAI. It can be hard to guess what redactions are though. They're redacted.


All of these people are so bizarre. It's like Howard Hughes versus some potential acquisition turned competitor and the yellow journalism that followed. Time is a flat circle.


This letter seems to address why they're not open, but not at all why they're selling out to Microsoft, which was Musk's primary criticism. Or am I wrong?


> It seems to me that OpenAI today is burning cash and that the funding model cannot reach the scale to seriously compete with Google (an 800B company). If you can't seriously compete but continue to do research in open, you might in fact be making things worse and helping them out “for free”, because any advances are fairly easy for them to copy and immediately incorporate, at scale.

It was a reasonable concern; funny that it turned out the other way with the transformer.


A lot of fluff in here, most of it selectively chosen. I'll be waiting for the discovery in the court case before I form an opinion on who is in the right here.


> with someone whom we’ve deeply admired...

He keeps showing us the kind of person he is, and people go on and on thinking this, until something happens to them?


They're both bad actors, and they both have good points. I hope they keep fighting each other -- it might distract them from us.


I find it interesting that they promote the idea that they are pursuing AGI when they don't even have any I yet in their products.


If it would already be in their products, there wouldn't be much to pursue, right?


I'm not an ML guy, but does it actually matter that OpenAI doesn't disclose its source? Who cares? They proved the transformer arch, and everyone knows how to replicate what they did. That's pretty open. The sauce is the alignment, and maybe that's what 'people' are mad about, I don't know.


Microsoft being the sole beneficiary is the main problem. They are not the only company which can offer compute and money.


Is there some executive summary of this? I find it hard to wade through the waffling and while I appreciate citing sources by pasting e-mails verbatim this doesn't exactly make the statement easier to follow. It sounds like they're damning Elon with faint praise?


I don't know what to say other than: I wish they'd calculated in kWh instead of $.

I really want to know that figure, but I'm left extrapolating.

I could _guess_ it's huge, but that's not the point.

> "Can we make a separate intelligence beside ourselves, damn the energy expenditure?"



I don't trust anyone involved to have the best interest of anyone else involved.


This is the correct response. Either of them could be lying.

A leak needs to happen to cut through the BS and nonsense.


The email evidence confirms that infrastructure is one of the strongest moats in this space. From Elon:

"Not only that, but any algorithmic advances published in a paper somewhere can be almost immediately re-implemented and incorporated. Conversely, algorithmic advances alone are inert without the scale to also make them scary."

This is reinforced by the fact that infrastructure and training details are much more sparse in papers than algorithmic adjustments.

It's all about: 1. researchers (capable of navigating the maze and carefully choosing what to adopt from the research space), 2. clean data, 3. infrastructure.


They should at least rename themselves; I'll stick to ClosedAI until then.


Well, they didn't address the extent of Microsoft's control over OpenAI.


After 1-bit quantizing the situation from my armchair, the whole series of events looks like a cold, dry business decision that Musk is a risk with a low expected return. Isn't that it, after all?


The title of the post made me think it was going to be a blog post from a third party commentator. For OpenAI to write such a blog post is so petty and desperate.

It is clear that they are full of it.


> Unfortunately, humanity's future is in the hands of <redacted>.

> <redacted>

> And they are doing a lot more than this.

I'm really curious who/what Elon was calling out here.


Probably Google.


I'm kind of surprised they'd post something this substantial without any apparent proofreading (given the repetition around the Tesla stuff).


Who is the redacted person from email #2?

I'm guessing email #3 is referring to Larry, whom Elon has had a lot to say about publicly on the topic of AGI.


I loved this part: "Unfortunately, humanity's future is in the hands of [redacted]."

Google? or a person?


At first I thought "why would Elon choose to slow down progress like this?" And then I realized that he probably knows some things that I don't. He's always one step ahead, even if he doesn't seem like it. I wouldn't be surprised if he's in control of OpenAI a year from now. Having Elon as the brains and Altman as the salesman would be the ultimate dream team.


It's because he's an egomaniac and wants credit and control. He's not the brains at all


Can anyone guess this part of Elon's email: "Unfortunately, humanity's future is in the hands of …"?


The timestamps of the emails are interesting. Apparently they work at 3:40 AM, but also at midday and at 8 AM.


Life becomes work for these types of people; I'm sure there's no hard line between being at work and at home, especially during the early days or the early frenzy.


OpenAI is not far ahead of the alternatives... it doesn't matter much what the outcome will be.


Unless they develop an agent that can recursively self-improve to the point of an intelligence explosion!


I assume almost everyone already trains on synthetic data.


The next frontier is the physical world; untainted by LLMs.


now recursive functions are AI? /s


> We are dedicated to the OpenAI mission and have pursued it every step of the way. (emphasis added)

> We intend to move to dismiss all of Elon’s claims. (emphasis added)

I feel somewhat surprised but more worried that people working in a space riddled with so much uncertainty seem to use so much certainty in their statements.


It's nice to have more context, but posting all of this seems ill advised.


From: Elon Musk

To: Ilya Sutskever

The most promising option I can think of, as I mentioned earlier, would be for OpenAI to attach to Tesla as its cash cow. I believe attachments to other large suspects (e.g. Apple? Amazon?) would fail due to an incompatible company DNA. Using a rocket analogy, Tesla already built the “first stage” of the rocket with the whole supply chain of Model 3 and its onboard computer and a persistent internet connection. The “second stage” would be a full self driving solution based on large-scale neural network training, which OpenAI expertise could significantly help accelerate. With a functioning full self-driving solution in ~2-3 years we could sell a lot of cars/trucks. If we do this really well, the transportation industry is large enough that we could increase Tesla's market cap to high O(~100K), and use that revenue to fund the AI work at the appropriate scale.

Surprise.


The level of nonsense is amazing.

AGI is far away and not yet a problem of "compute power". We've seen with Autonomous Driving what these statements are worth.

Albania is using ChatGPT to speed up EU accession by up to 5.5 years? What? Implementing EU laws using a tool whose capability to "understand" legal text is like zero and whose capability to hallucinate is unbounded?


Everyone should leave OpenAI and form a new entity. Problem solved. Key partners would re-sign the key contracts they have.

Leave the whole "Open" part behind for good so we can stop talking, reading, and hearing about it.


It wasn’t everyone, but that’s sort of the Anthropic origin story


"Open"AI


How do the OpenAI engineers feel about being tricked into the roost? Strange that we never hear from them; is it because their mouths are stuffed with gold?


> we felt it was against the mission for any individual to have absolute control over OpenAI

Unless that individual is Sam Altman?


Is there really that much of a distinction between OpenAI and Microsoft at this point? Not talking about on paper like tax documents but in the real world of interests, objectives and decision-making.


This is going to come across as a contrarian view and get a lot of downvotes: OpenAI is being 'open'. Not open source, but open. They are bringing AGI tech to the common person at affordable pricing. Before OpenAI, all the AI and its use cases were hidden behind closed doors at Google. By kickstarting the AI race, they have created high standards, new knowledge, and enough momentum to hold up the entire startup ecosystem and the US economy. They make hard choices, and now the entire industry is moving forward because of their choices.


Google is actually the one that published the paper that made all this possible (just like they have done with tons of other breakthroughs)

https://en.m.wikipedia.org/wiki/Attention_Is_All_You_Need


Papers aren’t products, and most of the authors - who did the actual work - are long gone.


I don’t have a horse in this race but I’d definitely like to see this viewpoint discussed more.

The open source angle probably matters more to the HN audience but it’s hard to argue that OpenAI hasn’t “opened up access to AI” to a vast global audience.

And that’s not to say that the means by which they’ve done so doesn’t still need some scrutiny.


By that definition every corporation aims to be open as possible…


We all had an understanding that open in OpenAI meant open source. Come on.


why respond?


I like it just because I'm fascinated to see how much evidence it takes for some people to see who Elon is


It's like selective screenshots of your Whatsapp chat


it's afraid


Agree, this seems like a super bitchy kind of he-said-she-said catfight, and the update doesn't make any meaningful contribution other than "No you didn't!! See, I have emails to prove it!"

Yeah yeah, put it in the court documents if it's relevant.

This is drama for drama's sake / PR positioning for ??? reasons (someones ego).

I thought OpenAI was playing the 'we're grown ups' game now?

...guess not.


Two things can be true.


IMHO Musk just wants to slow down OpenAI, so his company can catch up.

That being said, I am pretty sure there will be other conversations supporting Musk's claims.


Elon Musk has been warning about AI risks for at least a decade. Whatever his other motives, AI risk seems like one of his deeply held beliefs.


I guess neither Elon Musk nor Sam Altman want to live in a world where someone else makes it a better place


> Elon understood the mission did not imply open-sourcing AGI.

> As Ilya told Elon: “As we get closer to building AI, it will make sense to start being less open.

> The Open in openAI means that everyone should benefit from the fruits of AI after its built, but it's totally OK to not share the science...”

> to which Elon replied: “Yup”.

That's honestly pretty disgusting.


> We couldn’t agree to terms on a for-profit with Elon because we felt it was against the mission for any individual to have absolute control over OpenAI.

I think Musk’s lawsuit is without merit, but it’s laughable in light of what happened with the recent leadership struggle to include this bit.


> I think Musk’s lawsuit is without merit

If it does not have merit, why does OpenAI feel compelled to write this post? Unless they know or at least fear that the lawsuit has merit.


Public relations. What Musk has said about OpenAI is true: they abandoned their mission and sold themselves off to Microsoft in pursuit of profit, and have no intention of making the technology (or its rewards) widely available to all. They can’t counter any of these claims credibly, but can still (correctly) paint the messenger as arguing in bad faith to diminish their impact. Being right about all of this doesn’t accrue any material benefit to Musk, however, because the terms of the agreement are so full of loopholes that just about any lawyer can drive a semi through them.


“have no intention of making the technology (or its rewards) widely available to all”

Has any organization done more to make AI widely available to all? I can’t think of one. True, they’re not making the source code available. True, they’re making tons of money off it. But their most powerful AI is accessible to anyone with a credit card.


I don't know about "widely available". It's a matter of openness.

Coca-Cola is widely available, but it's not "open" (its recipe is a secret; maybe others have reverse-engineered the formula, but the Coca-Cola Co.'s intention was always to keep it a secret).


> their most powerful AI is accessible to anyone with a credit card

Only 22.26% of the world's population had credit cards as of 2021.


For people without one, I think the free version of ChatGPT is also by far the best option available, no?


Yes, Meta, Mistral, Allen AI, and many others.


None of those are anywhere close to as easy for the average person to access as OpenAI’s offerings. And none are as good as GPT-4 afaik.

There are many arguments one can make against OpenAI. I just don’t see how “they don’t want to make AI widely available” makes any sense at all. A world where OpenAI never existed would clearly be a world where AI is far less accessible to the average person.


There is no way to reconcile their founding statements and principles with what they’re doing now. They don’t even publish legitimate research papers any longer and barely even try to defend themselves as aligning with their prior mission.


I think in their minds, their mission has always been to make AI widely available. It seems like they initially thought open source and open research were a viable path to achieving that mission, but later they changed their minds. I get why people are upset about that pivot, but considering they single-handedly brought AI out of the research lab and into the mainstream, it seems hard to argue that they were wrong.

In any case, sticking to the point of this discussion, there is no indication of any kind that they don’t intend to make AI widely available. It’s like saying McDonald’s doesn’t intend to make hamburgers widely available.


To be clear, widely available is my paraphrase while their founding principles stated that the technology should be “freely available” and explicitly reference open source. That’s entirely inconsistent with what they’re doing now. What you’re calling a “pivot” is just casting aside what they said they stood for in pursuit of money, which fine, whatever, happens all the time. We don’t have to strain ourselves making excuses for them, because, again, they hardly bother to do that themselves.


> I think in their minds,

I've found it good practice to judge people by their actions, not their assumed intentions.


There’s so much greed in this space it’s sickening. Even Elon isn’t immune to greed. It should not be humans determining the course of AGI, but rather multiple AIs in a DAO that rule by quorum and consensus. The human element must be taken out of the equation.


This is exactly like a plot from Silicon Valley. OA's "middle-out compression" will revolutionize the world! I agree with hackernews Habosa's starkly simple comment. These people are up their own ass.


This is the usual Musk play book. He is a salesman, not a tech guru, and plays the "good guy card" when it suits him in order to fool his followers en masse. A few months ago this forum was very much for the profit model of Altman after his ouster, and now it's the other way around.

Why believe any sort of discussion on the internet at all anymore? Can Hacker News prove the discussions here are not by Tesla staff? By OpenAI staff? By bots? By AI responses? There is a saturation point being hit in the credibility and reliability of this medium we call the internet. This was obviously always the case to some degree, but the profit model of most businesses now is built on the backbone of social media hype, all of which is completely, and easily, falsifiable, more than ever before.


> A few months ago this forum was very much for the profit model of Altman after his ouster

I did not get that impression at all.


"A few months ago this forum was very much for the profit model of Altman after his ouster, and now it's the other way around."

Speak for yourself.

Oddly your second paragraph is an argument against the validity of your first paragraph.


Concluding that this forum was somehow uniform in the position you suggest is too revisionist for me to finish reading the remainder of your comments.


Case in point: you made a conclusion without fully reading something, based on a pre-existing bias you had after someone made a claim that went against it. This is modern internet in a nutshell, and that's not even considering whether this forum, like any other, is not just a lot of the same people behind multiple accounts.

Here's a thread to get you started: https://news.ycombinator.com/item?id=38309611


I noticed the same thing. I also notice pompous dorks love to hate on people like you who point this out.


why would I lie?


"Open, as in Open For Business"

And Elon was in on it. Not really surprising but disappointing obviously.


Amazing that people still think Musk's "core points" are correct after this.

I hate this community anymore.


> Unfortunately, humanity's future is in the hands of XXXX.

Only wrong answers please.


Brin, probably. He and Musk got into a fight over AI years ago; it's detailed in the Isaacson biography.


Elon Musk's antics have been utterly shameless.

a) If he cared at all about altruism, humanity etc he would open-source Grok and allow anyone access to the Twitter dataset.

b) He talks all about AGI in relation to Tesla but then used it as a weapon to try and extract more control over the company from investors.


I don't really understand this view honestly, why should Grok be open sourced? When was Twitter called OpenTwitter and had a similar mission statement to OpenAI?


The parent post said if “Musk” cared, not Twitter. He also doesn’t have anything in the way of public shareholders to appease at Twitter anymore.

Whether or not it’s a reasonable expectation is up for debate. But it is at least a congruent argument.


There’s a difference between developing and open sourcing models and technology and end user products though.


If Musk wants to take the high moral ground and lecture others about openness and the importance to humanity he should start with his own actions.

It seems pretty clear that altruism is a cynical marketing and PR technique for him rather than something he actually cares about.


Musk cofounded and funded OpenAI.


And then pushed them to create a for-profit entity and merge with Tesla.


Not a giant Musk fan lately, but this is true; it wouldn't even be a thing without him.

Once bitten twice shy.


Ah yes, he should be shamed for being a major force behind accelerating the adoption of electric cars by a few years minimum.


He's being shamed for being a loud hypocrite about OpenAI, not for his accomplishments pushing the mass adoption of EVs. Two different things.


Two thoughts after reading the article:

1. The post properly depicts Elon Musk's need for ownership, but it misses the point of the open-source mission that OpenAI had when it was created. And therefore, it misses the main point of the lawsuit.

2. It feels like the writing is assisted by OpenAI itself.

Am I the only one thinking that the strategy of the post (and maybe the contents) are completely guided by their algorithm?


Elon Musk has said that he will drop lawsuit against OpenAI if they change their name to ClosedAI [https://twitter.com/elonmusk/status/1765409615070601417]


I don't think OpenAI has to worry about much at all, he can't bleed them dry via a lawsuit when they have one of the largest corporations on Earth bankrolling them. I'm not sure what he's trying to accomplish other than making dubious lawsuits like Trump does.


I don't care much about OpenAI === open-source as long as OpenAI === open the AI pandora's box to everyone.

Compare:

- Google open-sourced many models but sat on the transformers paper for 5 years and never released a product like ChatGPT for the masses.

- OpenAI didn't open-source GPT-4 but made it available to basically everyone.


Selectively editing material you release to libel a public figure is highly duplicitous.

They redacted parts of these emails.

Did they publish all of the emails?

Or are we literally being fed a curated collection?

Very gross.


TLDR; Elon is slimy, Sam also.


I am “GI” without the A last time I checked, and I don’t require much compute at all!


It’s the meat


Just a couple million years of training!


Seems pretty straightforward... Elon invested when they claimed to be an open-source non-profit, and clearly now are closed source and very for profit. The notion that the for profit is just a temporary means to achieve scale goals is laughable.


The emails clearly show that Elon had no interest in this being a non-profit, especially because he advocated multiple times for OpenAI to be absorbed by Tesla. The only reason he is suing is that xAI is dead and he needs some AI love.


He's not asking for money in the lawsuit, so what is it going to do? I thought it was for show in the first place; the goal was to hold them to their founding principles, which are, of course, pretty subjective. Top to bottom it's a God complex over there, and the secret justification for closed source was pretty ironic: we only found out this key unmentioned detail once the lawsuits started flying.


He is not asking for money because that would make it crystal clear he's after the money. In actuality, he is just bitter he missed the boat. He is not after the money, but the fame.


"For an award of restitution and/or disgorgement of any and all monies received by Defendants while they engaged in the unfair and improper practices described herein"

He claims he'll give all the money to charity, but he's certainly asking for money.


Is xAI dead? I thought he is very much invested in Grok.



It's probably worth reading the post, which refutes these points.


It does not refute them in any way. It very openly admits them while (trying to) justify them.


"In late 2017, we and Elon decided the next step for the mission was to create a for-profit entity."


1) They defied their original charter. 2) Separately: creating a for-profit entity is not what happened here. They transformed into a for-profit entity and discarded the non-profit mission. There is no vestige left of the original non-profit. Same funds, same code, same IP.


The for profit entity is owned by the non profit entity. This is permissible and a number of non profits operate this way.


Would you be happy if you donated tens of millions to a non profit entity which then changed their mission and suddenly started making a profit?


1) OpenAI did not change their mission. They are still pursuing AI. They are simply making more money doing it now than they did when they started.

2) Lots of non-profits make a profit. A non-profit is not defined by losing money, it is defined by the nature of what it does with its money, and what parts of its revenue are subject to taxation and what parts are tax-free. For example: the Microsoft contract would have been subject to income tax even if was directly with the OpenAI non-profit entity; moving that to the for-profit subsidiary just simplifies the accounting.

3) I do not think that OpenAI's mission should have ever qualified it for IRC 501(c)(3) status, but the IRS didn't agree and at this point it would be very difficult to challenge the (parent entity)'s non-profit status.


I like the part where you didn't answer the question, but did a lot of whataboutism.


Where did they claim being open-source?


It is like Google's "don't be evil" slogan. OpenAI proved to be anything but open.


Don't use their models. As a paid member I am fine with it.



