They hired the actor that did the voice months before they contacted SJ. The reaction on this site to the news that this story was false is kind of mindbending.
My guess: Sam wanted to imitate the voice from Her, became aware of the Midler v. Ford case, and so reached out to SJ. He probably didn't expect her to decline. Anyway, that prior case establishes that you cannot mimic another person's voice without their permission, and the overall timeline indicates OpenAI's "intention" to imitate. It does not matter whether they used SJ's voice in the training set or not. Their intention matters.
Please don't take this as me defending OpenAI's clearly sketchy process. I'm writing this to help myself think through it.
If it weren't for their attempt to link the voice to SJ (i.e. with the "her" tweet), would that be OK?
- It's fine to hire a voice actor.
- It's fine to train a system to sound like that voice actor.
- It's fine to hire a voice actor who sounds like someone else.
- It's probably fine to go out of your way to hire a voice actor who sounds like someone else.
- It's probably not fine to hire a voice actor and tell them to imitate someone else.
- It's very likely not fine to market your AI as "sounds like Jane Doe, who sounds like SJ".
- It's definitely not fine to market your AI as "sounds like SJ".
Say I wanted to make my AI voice sound like Patrick Stewart. Surely it's OK to hire an English actor who sounds a lot like him, so long as I don't use Sir Patrick's name in the advertising. If so, would it have been OK for OpenAI to do all this as long as they didn't mention SJ? Or is SJ so clearly identifiable with her role in "Her" that it's never OK to try to make a product like "Her" that sounds like SJ?
Most of those "rights of publicity" don't apply in the US though. If they did, half the actors in movies couldn't get work, because quite a few of them get work as the off-brand version of X in the first place.
Off the top of my head I can't think of a single example of any actors that come across as an "off-brand" version of someone else. What are some of the examples you have in mind?
I don't think this has anything to do with this case, but there certainly are some "types" in Hollywood with doppelganger actors.
Isla Fischer / Amy Adams
Margot Robbie / Samara Weaving / Jaime Pressly
Justin Long / Ezra Miller
Not to mention all of the token/stereotype characters where it hardly matters who the actor is at all. Need to fill the funny fat lady trope? If Rebel Wilson wasn't available, maybe they can get Melissa McCarthy.
The voice from Her isn't even the first voice I'd think of for a female computer voice. That trope has been around for decades. I'm sure OpenAI just wanted SJ specifically because she's currently one of the most popular celebrities in the world.
Another one you didn't mention: until relatively recently, I thought actors Elias Koteas (Exotica) and Christopher Meloni (Law & Order: Special Victims Unit) were the same person!
This is where your post breaks down. Many people say they don't think the voice sounds like SJ. Others do. But it appears you've made up your mind that they deliberately emulated her voice?
There's no clear line for this. To get a definitive conclusion, you would need to bring this to court with a lot of investigation. I know this kind of ambiguity is frustrating, but context and intention matter a lot here, and unfortunately we have no better way than a legal battle to figure it out.
Thanks to Sam, this OpenAI case is clearer than most, since he produced a good deal of clear evidence against himself.
Sure. And I'm not necessarily concerned about the outcome of the seemingly inevitable lawsuit. I'm more interested in calibrating my own moral compass here. If I were asked to participate in this, where would my comfort zone be?
I'll defer to a judge and jury about the legalities. As you noted, Sam gave them a lot of help.
My own thought is - every artist in history has taken inspiration from previous artists. The British voice actor in the above example surely studied greats in his field like Sir Patrick. We don't typically mind that. Where I think the line is between standing on the shoulders of giants and devaluing someone else's art is how well digested, for lack of a better term, the inspiration has been. Is the later artist breaking the components out, examining each, and assembling them with the insights they gained? Or are they seeking to resemble the end result as much as possible? I think that separates the cheap hack from the person who 'steals like an artist'.
When it comes to whether something is "wrong", in general intent matters a lot and what they did was communicate an obvious intent. There are certainly ways they could have avoided doing so, and I'm not sure I understand the value of trying to dissect it into a dozen tiny pieces and debate which particular detail pushes it over the line from ambiguous to hard-to-deny? Maybe I don't understand what kind of clarity you're trying to achieve here.
This particular area of law or even just type of "fairness" is by necessity very muddy, there isn't a set of well-defined rules you can follow that will guarantee an outcome where everyone is happy no matter what, sometimes you have to step back and evaluate how people feel about things at various steps along the way.
I'd speculate that OAI's attempts to reach out to SJ are probably the result of those evaluations - "this seems like it could make her people upset, so maybe we should pay her to not be mad?"
What if they hired 10 different voice actors with no intent to find someone like SJ, but one voice actor actually did sound like the voice from Her, so they liked that one the most and decided to go with it? And what if only after the fact they realized that it is quite similar to SJ in general, decided to reach out to her, and went along with the Her idea due to the obvious similarities?
Such a situation strains credulity. Voice acting is a profession, and professionals by their nature are aware of the wider industry, including the competition. SJ was the world’s highest paid actress in 2018-2019. The film Her was nominated for 5 Academy Awards, including Best Picture, and won Best Original Screenplay.
Even if this did go down the way you suppose, once they realized the obvious similarities, the ethical thing to do was to not use the voice. It doesn’t matter if the intention was pure. It doesn’t matter if it was an accident.
>If it weren't for their attempt to link the voice to SJ (i.e. with the "her" tweet), would that be OK?
Taking the recent screwup out of account... It's tough. A commercial product shouldn't try to associate with another brand. But if we're being realistic: "Her" is nearly uncopyrightable.
Since Her isn't a tech brand, it would be hard for some future company to get in trouble based on that association alone. Kind of like how Skynet could in theory have been taken by a legitimate tech company, and the Terminator IP owner would struggle to seek reparations (to satisfy curiosity: Skynet is a US government program, so that's already taken care of).
As long as you don't leave a trail, you can probably get away with copying Stewart. But if you start making Star Trek references (even if you never contacted Stewart), you're stepping in hot water.
This is specifically covered in cases like Midler v. Ford, and legally it matters what the use is for. If it's for parody/etc it's completely different from attempting to impersonate and convince customers it is someone else.
Midler v. Ford is a bit different from the ChatGPT situation in that it was about a direct copy of a Midler song for a car ad, not just a similar-sounding voice saying different things.
Sure, you can't sell light sabers. Can't you use a Darth Vader voice impersonator to sell vacuums? What about a voice that sounds like generic background actor 12?
> It's fine to hire a voice actor who sounds like someone else.
Not necessarily, when you're hiring them because they sound like someone else, especially someone else who has said that they don't want to work with you. OpenAI took enough steps to show they wanted someone who sounded like SJ.
> Surely it's OK to hire an English actor who sounds a lot like him, so long as I don't use Sir Patrick's name in the advertising.
> If it weren't for their attempt to link the voice to SJ (i.e. with the "her" tweet), would that be OK?
No.
The fact that it sounds very much like her and it is for a virtual assistant that clearly draws a parallel to the virtual assistant voiced by SJ in the movie (and it was not a protected use like parody) makes it not OK and not legal.
> Surely it's OK to hire an English actor who sounds a lot like him, so long as I don't use Sir Patrick's name in the advertising.
Nope.
If you made it sound identical to Patrick Stewart that would also likely not be OK or legal since his voice and mannerisms are very distinctive.
If you made it sound kind of like Patrick Stewart that is where things get really grey and it is probably allowed (but if you're doing other things to draw parallels to Patrick Stewart / Star Trek / Picard then that'd make your case worse).
And the law deals with grey areas all the time. You can drag experts in voice and language into the court to testify as to how similar or dissimilar the two voices and mannerisms are. It doesn't nullify the law that there's a grey area that needs to get litigated over.
The things that make this case a slam dunk are that there's the relevant movie, plus there's the previous contact to SJ, plus there's the tweet with "Her" and supporting tweets clearly conflating the two together. You don't even really need the expert witnesses in this case because the behavior was so blatant.
And remember that you're not asking a computer program to analyze two recordings and determine similarity or dissimilarity in isolation. You're asking a judge to determine if someone was ripping off someone else's likeness for commercial purposes, and that judge will absolutely use everything they've learned about human behavior in their lifetime to weigh what they think was actually going on, including all the surrounding human context to the two voices in question.
A random person's normal speaking voice is nobody's intellectual property. The burden would have been on SJ to prove that the voice actor they hired was "impersonating" SJ. She was not: the Washington Post got her to record a voice sample to illustrate that she wasn't doing an impersonation.
Unless and until some other shoe drops, what we know now strongly --- overwhelmingly, really --- suggests that there was simply no story here. But we are all biased towards there being an interesting story behind everything, especially when it ratifies our casting of good guys and bad guys.
If "Her" weren't Sam's favorite movie, and if Sam hadn't tweeted "her" the day it launched, and if they hadn't asked SJ to do the voice, and if they hadn't tried to reach her again two days before the launch, and if half the people who first heard the voice said "Hey, isn't that SJ?" -
Then I'd say you have a point. But given all the other info, I'd have to say you're in denial.
> the Washington Post got her to record a voice sample
Actually it only says they reviewed "brief recordings of her initial voice test", which I assume refers to the voice test she did for OpenAI.
The "impersonating SJ" thing seems to be a straw man someone made up. The OpenAI talent call was for "warm, engaging, charismatic" voices sounding 25-45 years old (I assume SJ would have qualified, given that Altman specifically wanted her). They reviewed 400 applicants meeting those filtering criteria and, it seems, threw away the 395 that didn't remind Altman of SJ. It's a bit like natural selection and survival of the fittest. Take 400 giraffes, kill the 395 shortest ones, and the rest will all be tall. Go figure.
You’re right that a random person’s voice is not IP, but SJ is not a random person. She’s much more like Mr. Waits or Ms. Midler than you or I.
I don’t believe the burden would be to prove that the voice actor was impersonating, but that she was misappropriating. Walking down the street sounding like Bette Midler isn’t a problem but covering her song with an approximation of her voice is.
You are dead right that the order of operations recently uncovered precludes misappropriation. But it’s an interesting situation otherwise, hypothetically, to wonder if using SJ’s voice to “cover” her performance as the AI in the movie would be misappropriation.
> You are dead right that the order of operations recently uncovered precludes misappropriation.
I don't think that follows. It's entirely possible that OpenAI wanted to get ScarJo, but believed that simply wasn't possible so went with a second choice. Later they decided they might as well try anyway.
This scenario does not seem implausible in the least.
Remember, Sam Altman has stated that "Her" is his favorite movie. It's inconceivable that he never considered marketing his very similar product using the film's IP.
"We hold only that when a distinctive voice of a professional singer is widely known and is deliberately imitated in order to sell a product, the sellers have appropriated what is not theirs and have committed a tort in California."
Ford having multiple ads would not have changed the determination.
The voice doesn’t sound like her and the article shows there’s a fair amount of proof to back up the claim that it wasn’t meant to and that there was no attempt to imitate her.
And yet so many people think it does. What a weird coincidence.
> there’s a fair amount of proof to back up the claim that it wasn’t meant to
Sam has said his favorite movie is "Her". Sam tweeted "her" the day it was released. Sam wrote to ScarJo to try to get her to do the voice. OpenAI wrote to her two days before the release to try to get her to change her mind. A large percentage of the people who hear the voice thinks it sounds like ScarJo.
I think we're just going to have to agree to disagree about what the evidence says. You take care now.
I interpret it completely differently given that the voice actor does not sound like SJ.
1. OpenAI wants to make a voice assistant. 2. They hire the voice actor. 3. Someone at OpenAI wonders why they would make a voice assistant that doesn’t sound like the boss’s favorite movie. 4. They reach out to SJ who tells them to pound sand.
Accordingly, there is no misappropriation because there is no use.
The voice is different enough that anyone who listens to samples longer than 5 seconds side by side and says they can’t tell them apart is obviously lying.
All the reporting around this I’ve seen uses incredibly short clips. There are hours of recorded audio of SJ speaking and there are lots of examples of the Sky voice out there since it’s been around since September.
It doesn't even need to sound like the person. It's about the intent: did OpenAI intend to imitate ScarJo?
Half the people of the world thinking it's ScarJo is strong evidence that it's not an accident.
Given that "Her" is Sam's favorite movie, and that he cryptically tweeted "her" the day it launched, and that he reached out to ScarJo to do the voice, and that the company reached out to her again to reconsider two days before the launch -
I personally think the situation is very obvious. I understand that some people strongly disagree - but then there are some people who think the Earth is flat. So.
I don't think the intent matters (though it's moot in this case because I think there is clear intent): If someone unknowingly used a celebrity's likeness I think they would still be able to prohibit its use since the idea is that they have a "right" to its use in general, not that they have a defence against being wronged by a person in particular.
For example, if someone anonymously used a celebrity's likeness to promote something, you wouldn't need to identify the person (which would be necessary to prove intent) in order to have the offending material removed or prevented from being distributed.
"We hold only that when a distinctive voice of a professional singer is widely known and is deliberately imitated in order to sell a product, the sellers have appropriated what is not theirs and have committed a tort in California."
The passage you cited reads "we hold only that when" (compare with "we hold that only when") which I understand as that they are defining the judgment narrowly and punting on other questions (like whether the judgment would be different if there were no intent) as courts often do. In fact the preceding sentence is "We need not and do not go so far as to hold..."
It might make sense for intent to be required in order to receive damages, but it would surprise me if you couldn't stop an inadvertent use of someone's likeness. In fact the Midler case cites the Motschenbacher one: 'The defendants were held to have invaded a "proprietary interest" of Motschenbacher in his own identity.' I think you can invade someone's "proprietary interest" inadvertently just as you can take someone's property inadvertently; and courts can impose a corrective in both cases, in the first by ordering that the invasion of the proprietary interest be stopped and in the second by returning the taken property.
No. (I did cite the Motschenbacher statement about "proprietary interest" which I think supports my argument).
I'm not familiar with all the case law but I assume that no case has been brought that directly speaks to the issue but people can and do discuss cases that don't yet have specific precedent.
I don't think that's true. I can't cite them off the top of my head but when I read about supreme court cases often a big point of contention of the ruling is whether they decided to issue a narrow or broad ruling. Sometimes they decide to hear a case or not based on whether it would serve as a good basis for the type (narrow/broad) ruling they want to issue.
And the legal universe is vast with new precedent case law being made every year so I don't think the corpus of undecided law is confined to well known legal paradoxes.
As for this case it doesn't seem that odd to me that the issue of intent has never been at issue: I would expect that typically the intent would be obvious (as it is in the OpenAI case) so no one has ever had to decide whether it mattered.
That's a different standard: "Can you tell them apart side-by-side" vs. "does this sound like person X" or "is this voice exploiting the likeness of person X". It's the latter question that is legally relevant.
I cannot read the article because of its paywall. Is there actual proof OpenAI reached out to Johansson, or is it just being alleged by her lawyers?
It seems she has every reason to benefit from claiming Sky sounded like her even if it was a coincidence. "Go away" payments are very common, even for celebrities - and OpenAI has deep pockets...
Even so, if they got a voice actor to impersonate or sound similar to Johansson, is that something that's not allowed?
Johansson is a super successful actress and no doubt rejects 95% of roles offered to her, just as she rejected Altman's request to be the voice of ChatGPT.
She doesn't need "go away" payments, and in any case that is not what we're looking at here. OpenAI offered her money to take the part, and she said no.
According to celebrity net worth website, SJ is worth $165M.
> Johansson is a super successful actress and no doubt rejects 95% of roles offered to her
> She doesn't need "go away" payments
> According to celebrity net worth website, SJ is worth $165M.
I have no idea what Johansson's estimated net worth or her acting career have to do with this. Wealthy people sue all the time for all kinds of ridiculous things.
The voice is, in fact, not Johansson's. Yet it appears she will be suing them nonetheless...
It's not illegal to sound like someone else - despite what people might be claiming. If it turns out to be true that Sky's voice actor was recorded prior to the attempted engagement with Johansson, then all of this is extra absurd.
Also, Sky doesn't sound like Johansson anyway... but apparently that isn't going to matter in this situation.
Midler v. Ford Motor Co. would disagree. There is a viable legal precedent, and it is very arguably illegal.
> The appellate court ruled that the voice of someone famous as a singer is distinctive to their person and image and therefore, as a part of their identity, it is unlawful to imitate their voice without express consent and approval. The appellate court reversed the district court's decision and ruled in favor of Midler, indicating her voice was protected against unauthorized use.
They’d have to prove the voice actor was imitating SJ. If OpenAI recorded the entire interview process as well as the sessions where they recorded the actor for samples to steer the model with then it should be open and shut. There’s also the fact of the 4 other voices. Who are they meant to be?
If her lawyers are half competent, then they wouldn’t lie. They may not tell the whole truth, but we’re not discussing what wasn’t said here.
As for your second question, yes. Otherwise you have a perfect workaround that would mean a person likeness is free-for-all to use, but we already decided that is not acceptable.
> Otherwise you have a perfect workaround that would mean a person likeness is free-for-all to use, but we already decided that is not acceptable
That is not decided. There have been high-profile cases where someone's likeness was explicitly used without permission, and they still had no recourse. It was argued the person was notable enough that they could not protect their likeness.
Regardless, it appears debated if Sky even sounded like Johansson, which will make this very difficult for anyone to argue (being subjective and all). If the Sky voice actor was recorded prior to engaging with Johansson (which has been claimed by OpenAI), then it seems even more difficult to argue.
In the end, this will net Johansson a nice "go away" payday and then everyone will forget about it.
Sure, no-one is disputing that, and despite this Altman then contacts SJ again two days before release asking her to reconsider, then tweets "her" to remind the public what he was shooting for. The goal could have just been ChatGPT with a voice interface, but instead Altman himself is saying that the goal was specifically to copy "her".
He's not necessarily saying that was the goal from the start. All he is admitting with that tweet is that it is indeed (he finds it to be) reminiscent of "Her".
Well, from the start, Altman wanted SJ to do the voice. Perhaps he'd never seen or heard of the movie "her", and the association is just coincidental?
After "she" said no, then Altman auditions a bunch of voice talent and picks someone who sounds just like SJ. I guess that's just the kind of voice he likes.
So, Altman has forgotten about SJ, has 5 voice talents in the bag, and is good to go, right? But then, 2 days(!) before the release he calls SJ again, asking her to reconsider (getting nervous about what he's about to release, perhaps?).
But still, maybe we should give Altman the benefit of the doubt, and assume he wanted SJ so badly because he had a crush on her or something?
Then on release day, Altman tweets "her", and reveals a demo not of a sober AI assistant with a voice interface, but of a cringe-inducing AI girlfriend trying to be flirty and emotional. He could have picked any of the five voices for the demo, but you know ...
But as you say, he's not admitting anything. When he tweeted "her" maybe it was because he saw the movie for the first time the night before?
Sky has been in the app for many months, the release last week didn't add the voice, it merely is for a mode which allows a much more natural back and forth and interaction via voice, which is, indeed, fairly reminiscent of the voice interface in "her".
He probably just really wants to actually have SJ's voice from the film. But SJ doesn't really have a right to arbitrary west coast vocal fry female voices. Without the "disembodied voice of an AI" element, I don't think most people would note "Oh she sounds like SJ". In fact, that's what the voice actress herself has said -- she hadn't gotten compared to SJ before this.
> After "she" said no, then Altman auditions a bunch of voice talent and picks someone who sounds just like SJ.
According to the article, the word After here is incorrect. It states the voice actor was hired months before the first contact with SJ. They might be lying, but when they hired the voice actor seems like it would be a verifiable fact that has contracts and other documentation.
And like others, not defending OpenAI, but that timeline does tend to break the narrative you put forth in this post.
The 2 days thing won’t fly. The omni model can probably produce any voice when led with a few seconds of audio. Meta developed a similar model a while back but didn’t open source it out of fear of deepfakes and there are commercial offerings like elevenlabs and open source ones like bark et al. So the last minute ask wasn’t nerves but a last ditch attempt to score a rebound for the launch.
The voice, or the plot/concept of the movie? Her was about an AI having enough realism that someone could become emotionally attached to it. It was not a movie about Scarlett Johansson's voice. Any flirty female voice would be appropriate for a "her" tweet.
And, hired a voice actress that sounds exactly like Rashida Jones [1] rather than SJ, 6 months before.
These don't have to be related. Maybe they are, but the confidence that they are is silly, since having a big celebrity name as the voice is a desirable thing, great marketing, especially one that did the voice acting for a movie about AI. My mind was completely changed when I actually listened to the voice comparison for myself [1].
And if you commission an artist to draw a black-leather, tight-fit clad redhead superspy in an ad for your product, it need not look like Black Widow from the MCU.
But if it does look very much like her, it doesn't really matter whether you never intended to.
It’s actually just a few comments away, and visible for me from the parent of your comment.
> Well, from the start, Altman wanted SJ to do the voice. Perhaps he'd never seen or heard of the movie "her", and the association is just coincidental? After "she" said no, then Altman auditions a bunch of voice talent and picks someone who sounds just like SJ.
I suppose some might imagine asserting the opposite as a distinct concept from disputing but there you have it. You should be able to find a link quite easily.
I don't see where that quote says "they contacted SJ before they hired the actor that did the voice" or anything similar. Perhaps you need to read it a bit more closely before you try again.
"Whether or not Altman wanted SJ's voice" and "whether or not they got someone else to do SJ's voice before asking SJ to do it" are two completely independent matters.
Even if the voice actor was sourced before they originally contacted SJ, it was clearly the intent to sound like her. There are so many other distinctive voices they could have chosen, but instead they decided to go as close as possible to "her" as they could. Many people thought it was SJ until she stated it wasn't. I appreciate the voice actor may sound like that naturally, but its hardly coincidental that the voice that sounds most like the voice from "her" was the one chosen for their promotion. It is clearly an attempt to pass-off.
>Even if the voice actor was sourced before they originally contacted SJ, it was clearly the intent to sound like her.
Her, being the voice SJ did for the movie, not SJ's conversational voice which is somewhat different.
If OpenAI were smart, they did it in a Chinese-wall manner and looked for someone whose voice sounded like the movie without involving SJ's voice in the discussion.
This is not a thing. They hired a voice actor, who spoke in her normal speaking voice. That voice is not SJ's intellectual property, no matter what it sounds like. Further, I don't know how you can say any intention here is "clear", especially given the track record on this particular story, which has been abysmal even after this story was published.
They could have taken auditions from 50 voice actors, come across this one, thought to themselves "Hey, this sounds just like the actress in 'Her', great, let's use them" and that would be fine. Laurence Fishburne does not own his "welcome to the desert of the real" intonation; other people have it too, and they can be hired to read in it.
Again: the Post has this voice actor reading in the normal voice. This wasn't an impersonator.
> I don't know how you can say any intention here is "clear"
You are suggesting that it is coincidence that they contacted SJ to provide her voice, they hired a voice actor that sounds like her, they contacted SJ again prior to launch, and then they chose that specific voice from their library of voices and tweeted the name of the movie that SJs voice is in as a part of the promo?
I haven't suggested what they have done is illegal, given that the fictional company that created the AI "her" is unlikely to be suing them, but it is CLEARLY what their intent was.
What part of "actor" in "voice actor" did you not understand? You don't hire an actor to play themselves generally. "SJ" was not playing herself in Her.
> They could have taken auditions from 50 voice actors, come across this one, thought to themselves "Hey, this sounds just like the actress in 'Her', great, let's use them" and that would be fine.
Except that is simply not true. If their intent was to sound like Her, and then they chose someone who sounds like Her, then they're in trouble.
That's perfectly fine. SJ does not have an intellectual property claim on someone else's natural speaking voice. This is addressed directly in Midler v. Ford.
You don't know that's what happened, but it wouldn't matter either way. Regardless: it is misleading to call that person an "impersonator". I'm confident they don't wake up in the morning and think to themselves "I'm performing SJ" when they order their latte.
It’s not perfectly fine. If a company uses an actress because she sounds similar to a character they want to associate with their product, they are liable for damages whether or not the actress lists “impersonator” in her job description.
The key here is intent. If there was no intention for OpenAI to model the voice after the character Samantha, then you're right, there's no foul.
But as I have explained to you elsewhere, that beggars belief.
We will see the truth when the internal emails come out.
It was not claimed that they cloned ScarJo's voice. They hired a soundalike when they couldn't get the person they wanted. Use or lack of use of AI is irrelevant. As I said before, both Bette Midler and Tom Waits won similar cases.
Since they withdrew the voice this will end, but if OpenAI hadn't backed off and ScarJo sued, there would be discovery, and we'd find out what her instructions were. If those instructions were "try to sound like the AI in the film Her", that would be enough for ScarJo to win.
I know that the Post article claims otherwise. I'm skeptical.
> It was not claimed that they cloned ScarJo’s voice.
There were some claims by some people when the issue first arose that they had specifically done a deepfake clone of SJ’s voice; probably because of the combination of apparent trading on the similarity and the nature of OpenAI’s business. That’s not the case as far as the mechanism by which the voice was produced.
It's technically possible that the Sky voice/persona is half voice actress and half prosody/intonation ("performance") from SJ/"her". Clearly the ChatGPT TTS system is flexible enough to add emotion/drama to the underlying voice, and that aspect must also have been trained on something.
Clearly a lot of people (including her "closest friends") find the ChatGPT demo to have been very similar to SJ/"her", which isn't to deny that the reporter was fed some (performance-wise) flat snippets from the voice actor's audition tape that sounded like flat sections of the ChatGPT demo. It'd be interesting to hear an in-depth comparison from a vocal expert, but it seems we're unlikely to get that.
Your immediate acceptance that a timeline representing the best spin of a deep-pocketed company in full crisis PR mode proves the story "false", full stop, no caveats, is... I wouldn't say mind-bending, but quite credulous at a minimum. The timeline they present could be accurate and the full picture could still be quite damning. As Casey Newton wrote today [1]:
> Of course, this explanation only goes so far. We don’t know whether anyone involved in choosing Sky’s voice noted the similarity to Johansson’s, for example. And given how close the two voices sound to most ears, it might have seemed strange for the company to offer both the Sky voice and the Johansson voice, should the latter actor have chosen to participate in the project. [...] And I still don’t understand why Altman reportedly reached out to Johansson just two days before the demonstration to ask her to reconsider.
They absolutely have not earned the benefit of the doubt. Just look at their reaction to the NDA / equity clawback fiasco [2], and their focus on lifelong non-disparagement clauses. There's a lot of smoke there...
>They hired the actor that did the voice months before they contacted SJ. The reaction on this site to the news that this story was false is kind of mindbending.
People lose their rational minds when it comes to people they hate (or the opposite, I suppose). I don't care for Sam Altman or OpenAI one way or another, so it was quite amusing to watch the absolute outrage the story generated, with people so certain of their views.
I don't understand the point you are trying to make. The essential question is whether they were trying to imitate (using a voice actor or otherwise) Scarlett Johansson's voice without her permission. Nothing in the article refutes that they were; whether they sought the permission before or after they started doing the imitation is irrelevant. Others have pointed to previous case law that shows that this form of imitation is illegal.
Moreover I can't see any reasonable person concluding that they were not trying to imitate her voice given that:
1. It sounds similar to her (it's unbelievable that anyone would argue that they aren't similar, more so given #2).
2. Her voice is famous for the very context in which the synthetic voice is used.
3. They contacted her at some point to get her permission to use her voice
4. The CEO referenced the movie which Johansson's voice is famous for (and again depicts the same context the synthetic voice is being used) shortly before they released the synthetic voice.
Except the story isn't false? They wanted her voice, they got her voice*, they did marketing around her voice, but it's not her voice, she didn't want to give them her voice.
Notice how the only asterisk there is "it's technically not her voice, it's just someone who they picked because she sounded just like her"
Yeah, but then again, I totally expected this opening the comment threads. Same happened with the RMS debacle, same happened with similar events earlier, same happened on many a Musk story. It seems that a neat narrative with a clear person/object to hate, once established, is extremely resilient to facts that disprove it.
Right. Even if you think OpenAI isn’t a good place, this is an investigation by an established newspaper that refuted some of the more serious accusations (that OpenAI got a Johansson impersonator - they didn’t; that they modified the voice to sound like Johansson - evidence suggests this didn’t happen). When the reaction is “I don’t care that an investigation refuted some of the accusations”, it demonstrates someone isn’t openly approaching things in good faith.
Likewise, if someone’s attitude is - “OK, maybe there’s no paper trail, but I’m sure this is what the people were thinking”, then you’ve made an accusation that simply can’t be refuted, no matter how much evidence gets presented.
> refuted some of the more serious accusations (that OpenAI got a Johansson impersonator - they didn’t
A lot of the argument here comes down to whether the article does refute that. I don't believe it does.
What it refutes is the accusation that they hired someone who sounds like Johansson after she told them she would not do it herself. That was certainly a more damning accusation, but it's not an identical one.
But in my view, it requires a pretty absurd level of benefit of the doubt to think that they didn't set out to make a voice that sounds like the one from the movie.
Maybe good for them that they felt icky about it, and tried to get her for real instead, but she said no, and they didn't feel icky enough about it to change the plan.
Do you believe the article "refutes" that? Does it truly not strike you as a likely scenario, given what is known, both before and after this reporting?
> A lot of the argument here comes down to whether the article does refute that.
It clearly refutes the claims that they got a Johansson impersonator. The article says this is a voice actress, speaking in her normal voice, who wasn’t told to mimic Johansson at all. You can say that you personally think she was chosen because people thought she sounded similar to Johansson, even though there’s no evidence for that at this point. But the claim - which was made several times in discussions on here before - that she is a Johansson impersonator is factually incorrect.
> But in my view, it requires a pretty absurd level of benefit of the doubt to think that they didn't set out to make a voice that sounds like the one from the movie.
I tried it several times in the past and never once thought it sounded like Johansson. When this controversy came out I looked at videos of Her, because I thought Johansson could have been using a different voice in that movie, but no: the voice in Her is immediately recognizable as Johansson’s. Some have said Sky’s was much closer to Rashida Jones’s, and I agree, though I don’t know how close.
I think this is quibbling over the definition of "impersonator"?
I think the most plausible thing that happened is that they thought "hey it would be so awesome to have an AI voice companion like the one in Her, and we can totally do that with these new models", and then auditioned and hired someone that sounded like that.
Does it not fit the definition of "impersonator", since they didn't explicitly tell the person they hired to impersonate the voice from the movie? Sure, fine, I guess I'll give it to you.
But it doesn't refute "they wanted to use a voice that sounded like the one in Her", and there are a number of indications that this was indeed the case.
> When the reaction is “I don’t care that an investigation refuted some of the accusations”, it demonstrates someone isn’t openly approaching things in good faith.
When the reaction is "it doesn't matter, it's still not ok to copy someone's voice and then market it as being that person's voice or related to that person's voice" and your reaction is to cast that as being something else, it demonstrates you are not openly approaching things in good faith.
Let's note that OpenAI didn't release the names of the voice talent since they said they wanted to protect their privacy...
So, how do you think the reporter managed to get not only the identity, but also the audition tape from "Sky"? Detective work?
An interesting twist here is that WashPo is owned by Bezos, who via Amazon are backing Anthropic. I wonder how pleased he is about this piece of "investigative reporting"?
OpenAI allowed the reporter to hear some snippets from the audition tape. Not exactly my idea of an "investigation".
There are multiple parts to the voice performance of ChatGPT - the voice (vocal traits including baseline pronunciation) plus the dynamic manipulation of synthesized intonation/prosody for emotion/etc, plus the flirty persona (outside of vocal performance) they gave the assistant.
The fact that the baseline speaking voice of the audition tape matches baseline of ChatGPT-4o only shows that the underlying voice was (at least in part, maybe in whole) from the actress. However, the legal case is that OpenAI deliberately tried to copy SJ's "her" performance, and given her own close friends noting the similarity, they seem to have succeeded, regardless of how much of that is due to having chosen a baseline sound-alike (or not!) voice actress.
Have you listened to both voices in the comparisons floating around? There is no way any of SJ's closest friends or family would be fooled if that voice called them up pretending to be SJ.
> What facts disprove OpenAI making a voice that sounds like SJ
The objective parts of this are disproved in several ways by the very article under which we're commenting. The subjective parts are... subjective, but arguably demonstrated as false in the very thread, through examples of SJ vs. Sky to listen side by side.
> such that the movie Her is referenced by Altman
You're creating a causal connection without a proof of one. We don't know why Altman referenced "Her", but I feel it's more likely because the product works in a way eerily similar to the movie's AI, not that because it sounds like it.
> and why is that actress upset?
Who knows? Celebrities sue individuals and companies all the time. Sometimes for a reason, sometimes to just generate drama (and capitalize on it).
> You're creating a causal connection without a proof of one. We don't know why Altman referenced "Her", but I feel it's more likely because the product works in a way eerily similar to the movie's AI, not that because it sounds like it.
There's no proof needed. A marketer doesn't market something for no reason.
We are all capable of interpreting his statement and forming an opinion about its intent. Indeed, the entire point of making any statement is for others to form an opinion about it. That doesn't make our opinion invalid - nor does the whining and backpedaling of the person who made the statement.
Your opinion may be different than others, but I doubt that would be the case if you were truly approaching this situation in an unbiased way.
You want to say that the dispute here is over ignoring objective facts, but it isn't. I haven't seen anybody here ignoring the facts laid out by this article.
The dispute is instead about statements just like your "We don't know why Altman referenced Her", which, on the one hand, you're right, the mind of another person is technically unknowable, but on the other hand, no, that's total nonsense: we do indeed know exactly why he referenced the movie, because we're social animals and we are frequently capable of reasoning out other people's motivations and intentions.
This is not a court of law, we don't have a responsibility to suspend disbelief unless and until we see a piece of paper that says "I did this thing for this reason", we are free to look at a pattern of behavior and draw obvious conclusions.
Indeed, if it were a court of law, that's still exactly what we'd be asked to do. Intent matters, and people usually don't spell it out in a memo, so people are asked to look at a pattern of behavior in context and use their judgement to determine what they think it demonstrates.
The objective parts don’t disprove that OpenAI set out to make an AI that sounded like Scarlett Johanson to use as a marketing ploy. In fact, I’d argue it’s more likely that’s exactly what the evidence suggests they did. But maybe a judge will get to rule on whose interpretation of the facts is correct.
I also see this dynamic on these same kinds of threads, but what I see is that one side is very sure that the facts disprove something, and the other side is very sure they don't. I've been on both sides of this, on different questions. I don't think there is anything weird about this, it's just a dispute over what a given fact pattern demonstrates. It's totally normal for people to disagree about that. It's why we put a fairly large number of people on a jury... People just see different things differently.
It's unhelpful because the massive comment chains don't bring anything to the "discussion" (this is literal celebrity gossip, so I'm having a hard time calling it a 'discussion'; but wait, this isn't Reddit, how could I forget, we're the enlightened HN). It just devolves into one's priors: do you hate or love OpenAI and sama for unrelated reasons? It's just a sports bar with the audience a few drinks in.
I mostly agree with you, but would ask: Why are you here, reading this thread? This isn't, like, a thread about something interesting that is being tragically drowned out by all this gossip. It's just an entirely bad thread that we should (and probably do) all feel bad about getting sucked into.
But the tiny sliver of disagreement I have with "this is a bad thing to discuss here and we should all feel bad" is that some people who frequent this site are sometimes some of the people involved in making decisions that might lead to threads like this. And it might be nice for those people to have read some comments here that push back on the narrative that it's actually fine to do stuff like this, especially if it's legal (but maybe also if it isn't, sometimes?).
The way I see this particular discussion is: outside the tech bubble, regardless of the new facts in this article, people see yet another big name tech leader doing yet another unrelatable and clearly sleazy thing. Then what I see when I come to the thread is quite a few of the tech people who frequent this site being like "I don't get it, what's the problem?" or "this article totally refutes all of the things people think are a problem with this". And I feel like it's worth saying: no, get out of the bubble!
I'm surprised by many people's reaction to this too, especially given that if it were possible right now, the industry would steal everybody's lunch in an instant without thinking about the consequences. This case is like an appetizer for what's to come if things keep going this way.
> I mostly agree with you, but would ask: Why are you here, reading this thread? This isn't, like, a thread about something interesting that is being tragically drowned out by all this gossip. It's just an entirely bad thread that we should (and probably do) all feel bad about getting sucked into.
I was just reading the comments after reading the article to see if anything new came up, and was pretty appalled at the quality of commentary here. I'm not participating in the thread more because it's not worth it.
> But the tiny sliver of disagreement I have with "this is a bad thing to discuss here and we should all feel bad" is that some people who frequent this site are sometimes some of the people involved in making decisions that might lead to threads like this. And it might be nice for those people to have read some comments here that push back on the narrative that it's actually fine to do stuff like this, especially if it's legal (but maybe also if it isn't, sometimes?).
I've been involved in big decisions in other Big Tech companies. I'm proud of having fought to preserve Tor access to our offerings because I believe in Tor despite the spam and attacks it brings. I don't know about other folks in these positions, but if I were to read discussion like this, I'd roll my eyes and close the thread. If a random incoherent drunk ranter told me something was wrong with my ideas, I'd dismiss them without much hesitation.
> The way I see this particular discussion is: outside the tech bubble, regardless of the new facts in this article, people see yet another big name tech leader doing yet another unrelatable and clearly sleazy thing.
Because journalists know there is anti-tech sentiment among a segment of the population and so they stoke it. I don't know that much about this case, but a different story I've been following for a while now is the California Forever creation of a new city adjacent to the Bay Area. Pretty much every article written about it calls the city a "libertarian city" or "libertarian, billionaire dream". I'm involved in local planning conversations. I've read over their proposals and communications. They never, ever, mention anything about libertarianism. They're not proposing anything libertarian. They're working with existing incorporation laws; they're literally acting as a real-estate developer the same as any other suburban tract developer anywhere else in the US. But the press, desperate to get clicks on the story, bills it as some "libertarian city".
This "bubble" that you speak of is literally just a bubble created by journalists. I'm not saying that tech hasn't created some new, big, real problems, nor that we shouldn't discuss those problems, but we need to recognize low-effort clickbait where we see it. This article and thread [1,2] talk about the reasons why, and it's not simple or straightforward, but at this point I consider most (not all) tech journalism to be basically tabloid journalism. It's meant specifically to drive clicks.
The only silly thing is some folks on HN think this site is somehow more high-brow than some general social media conversation on the news. It's the same social media as everywhere else, it's just more likely that the person talking is a software nerd, so the clickbait they fall for is different. My comment is my attempt as a community member to remind us to strive for something better. If we want to be more than just another social media site then we need to act like it. That means reading articles and not reacting to headlines, having good-faith conversations and not bringing strong priors into the conversation, and actually responding to the best interpretations of our peers' comments not just dunking on them.
> I don't know about other folks in these positions, but if I were to read discussion like this, I'd roll my eyes and close the thread.
I'm loath to reply because I know you don't want to engage anymore, but I think it's fair to reply to your reply on this point:
I'm sure you're right that the people involved in this will be defensive and roll their eyes, and I'm sympathetic to that human reaction, but it's also why society at large will continue along this path of thinking we suck.
If we roll our eyes at their legitimate criticism of all this sleazy stuff that is going on, then they're just right to criticize us.
And sure, "we shouldn't care if people at large think we suck because we roll our eyes at their criticism of our sleaziness" is a totally valid response to that. But I'm certainly not going to take up our cause against the inevitable backlash, in that case.
> This "bubble" that you speak of is literally just a bubble created by journalists.
I don't think so. I'm honestly sympathetic to how you've become convinced of that by the California Forever thing, which I agree has gotten a raw deal in the press. But I think this tech / SV / HN bubble is nonetheless a real thing. I work inside that bubble but live outside it. I spend a decent amount of time during my days reading and (often foolishly, like today) commenting on threads here.
But I spend a lot of my evenings and weekends with friends and family (in Colorado) who are very distant from our "scene". And I'm telling you, in my twenty-year career, I have lived the shift from "the internet is so awesome, google search is amazing, you know how to do software, that's so cool!" to "I don't know how you can stomach working in that industry". Sure, the media has had some impact on this, but we've also been super arrogant, have screwed up tons of stuff that is visible and salient to lots of people, and have seemed completely oblivious to it.
This episode is just one more example of that trend, and I think it's crazy to think "nah, this is all fine, nothing to see here".
I'm not sure what RMS has to do with Altman. I'm also not sure why you think people just want to hate on Musk when it took a decade of his blatant lies for most people to catch on to the fact that he's a conman (remember, everyone loved him and Tesla for the first 5 or 10 years of lies). But the comparison between Musk and Altman is pretty apt, good job there.
Well, I'm not sure what you mean by 'conman'. Wildly successful people aim high, a lot. They don't meet 80% of their goals, and that's perfectly OK. Even a 20% success rate on these moonshot things puts you ahead of the masses who aim very low and get there 100% of the time.
This whole idea that someone has to comply with your notion of how one must set goals, and how to reach them, is something other people have no obligation to measure up to. Also, what's the deal about his lies? He can say whatever he wants and not get there. He hasn't sworn an oath to you or anyone, and he's not in error for not measuring up.
Musk might not get to Mars, he might end up mining asteroids or something. That is ok. That doesn't make him a conman.
tl;dr: Anyone can say, work at, and fail at anything they want. And they don't owe anybody an explanation for a darn thing.
> It's not aiming high when anyone competent and informed can tell him there's no way, and he pays many competent and informed people to tell him.
You know that this is exactly how SpaceX won big? There were many competent, credentialed people telling Musk that reusable rockets were a pipe dream, all the way to the first Falcon 9 landing and reflight. Some of them even continued giving such "competent and informed" advice for many months afterwards.
> Setting a goal is not the same as telling people the company that you are dictator of is going to do something.
> You know that this is exactly how SpaceX won big? There were many a competent, credentialed people telling Musk that reusable rockets are a pipe dream, all the way to the first Falcon 9 landing and reflight. Some of them even continued giving such "competent and informed" advice for many months afterwards.
And there were also people saying it could be done. Where were those people for self-driving? (Oh right they were in the Facebook comment section with no relevant knowledge.)
> That's literally what it means, though.
No, it's not at all.
A goal is personal, or perhaps organizational. You need not announce something on Twitter in order to set a goal for yourself or for your company.
It has nothing to do with this. There are many successful people and businesses that I admire, and a number of notable examples of those I do not. The two your comment mentions are simply part of that latter group. I think for good reason. (But of course I would think that...)
I’m not talking about you specifically here. What you’re saying could be true for you, and not true for the community as a whole. With the benefit of experience, I can tell for certain there’s (on average) a strong undercurrent of jealousy against people perceived as overly ambitious, particularly if they are successful in their ambitions. This is not specific to this site, of course, or even to the tech community in general.
You're right that it could be the case. But I disagree that it is likely to be, in this specific case, and the other specific case you cited.
I think many people distrust and dislike Altman and Musk in particular because of their own specific behavior.
Some people are hated because people are just jealous, but other people use that as an umbrella excuse to deflect blowback from their behavior that they entirely deserve. I believe this is one of those latter cases.
But you don’t even know what their behavior really is. What you do know is the narrative created by fake news media mostly. At best it’s lies by omission, at worst - outright smears and fabrications. Idk much about Altman, but people who worked with Musk hold him in very high regard and say he’s the real deal. Yet if you go by the narratives you’d think the exact opposite
I don't know about you, but judging by what Elon says in his social media posts, I'm going to assume that the people who worked with Musk are poor judges of character. Or these same people happen to have a ton of TSLA stock or some other vested interest in Elon, so kissing his ass is good for their bottom line.
I don’t care what he says. Last I checked he’s free to say whatever he likes. Attempts to police the speech of others should be no more socially acceptable than farting in a crowded elevator. I care what he actually accomplishes. If his being a little unhinged leads to greater accomplishments, that’s a worthwhile tradeoff in my book.
Judging people for their behavior is not "policing the speech of others".
The hilarious thing about this line of reasoning is that I am saying "that thing he said is not acceptable to me" and people like you say "that thing you said about that thing he said is not acceptable to me". The pattern here is "saying X is not acceptable to me", just different values of X. If what I'm doing is policing his speech, and that isn't acceptable to you, then what you're doing is policing my speech, which also shouldn't be acceptable to you.
A paradox? No, not at all, because the resolution of this paradox is just: Nobody here is policing anybody's speech. Everyone is just expressing their own opinions, in a completely normal way.
I'm free to care about how people behave, and not just what they accomplish. And you're free both to not care about that, and also to judge people who do care.
But I wish you'd made your actual point, instead of asking this vague question. I don't want to guess what point this question is a setup to, but who knows if I'll ever make it back to this thread to find out what point you're going to make.
These are famous people who do and say things in public all the time, which I can see and evaluate entirely on my own.
That in no way implies that people who have worked with these people or know them socially will agree with my assessments of them. People just assess things differently from one another. We all care about and prioritize different things.
(For what it's worth, it's possible that I also know a few people who have worked for or with Musk, and have incorporated the nuances of their views into mine, to some extent...)
How about the "you have all the hardware you need" claim - oh wait, oopsie, you need to pay us thousands more?
How about the taking Tesla private tweet?
How about the repeatedly and flagrantly violating government contracts that are basically his company's only revenue because he's too powerful for consequences?
Self driving is actually coming along better than I thought it would. Version 12 behaves like a human driver.
The real source of your frustration, though, is that he does not fully subscribe to the neoliberal dogma and lets millions of others say whatever they like. The totalitarian left can't handle that; it's a visceral reaction.
> Self driving is actually coming along better than I thought it would. Version 12 behaves like a human driver.
LOOOOOOOOOOOOOOOOOL
Anyway, that would be cool if it were true, but it doesn't change what a conman he is. He said he was gonna make it true how long ago? Yeah.
> The real source of your frustration though is that he does not fully subscribe to the neoliberal dogma and lets millions of others to say whatever they like. Totalitarian left can’t handle that, it’s a visceral reaction
That's not even remotely true. Seriously. No part of it.
- It's been obvious to me that he was a conman since he started lying at Tesla, a decade or more ago. 'Fraid to say that alone disproves your unfounded personal attack since he didn't own a social media platform at the time.
- He doesn't allow people to say whatever they like unless he agrees with them.
- He (ok, and perhaps you) is the only totalitarian in this conversation.
But hey good job trying to make this out to be about politics instead of about what a terrible human Musk is, I know that's the only way for you Repugnantcans to cope with the cognitive dissonance.
That may be true, but you specifically said "on this site" and now you're saying "this is not specific to this site". And no, the vast majority of people "try[ing] to pull them down" to Musk are doing it because he's an egotistical hypocritical whiny jackass.
That was what Elizabeth Holmes claimed as well, however, we know that some people who try to achieve greatness are grifters. A pithy saying doesn’t change that reality.
You can’t seriously claim there’s any equivalence between Altman/Musk and Holmes. The former two have something to show for their ambition, Holmes was basically a fraud with no substance behind her whatsoever
So it's okay to commit fradulous acts if "you have something to show for it"?
Even if Altman were a good person, he is the face of a company taking some very suspicious actions. Actions that got the company cooked in litigation. So those consequences will be associated with that face: consequences for not following robots.txt, for asking forgiveness rather than permission from other large companies, and now this whole kerfuffle.
I'm not comparing the products, I'm criticizing the statement that people are just jealously trying to bring down those who attempt to achieve greatness. Also, you can have a great product and still do ethically and legally questionable things that people will criticize.
Tbf, Altman really screwed this up with that tweet and the very sudden outreach. There probably wouldn't be much of a case otherwise.
If I had to guess the best-faith order of events (more good faith than OpenAI deserves):
- someone liked Her (clearly)
- they got a voice that sounded like Her, subconsciously (this is fine)
- someone high up hears it and thinks "wow this sounds like SJ!" (again, fine)
- they think "hey, we have money. Why not get THE SJ?!"
- they contact SJ, she refuses, and they realize money isn't enough (still fine, but there is definitely some schadenfreude here)
- marketing starts semi-independently, and they make references to Her, because famous AI voice (here's where the cracks start to form. Sadly the marketer may not have even realized what talks went on).
- someone at OpenAI makes one last Hail Mary before the release and contacts SJ again (this is where the trouble starts. Maybe they didn't know about SJ refusing, but someone in the pipeline should have)
- Altman, who definitely should have been aware of these contacts, makes that tweet. Maybe they forgot, maybe they didn't realize the implications. But the lawyer's room is now on fire
So yeah, Hanlon's razor. This could be a good-faith mistake, but OpenAI had done a good job of ruining their goodwill before this PR disaster. Again, sweet schadenfreude, even if we assume none of this was intentional.
I'm a pretty forgiving person; I don't really mind mistakes as long as 1) they are admitted to, 2) steps are taken to actively reverse course, and 3) safeguards are put in place to prevent the same mistakes from happening again.
But you more or less drain that good faith when you are caught with your pants down and decide to double down instead. So I was pretty much against OpenAI ever since the whole "paying for training data is expensive" response during the NYT lawsuit.
----
In general, the populace can be pretty unforgiving (sometimes justified, sometimes not). It really only takes one PR blunder to tank that good faith, and much longer to restore it.
Mistakes should be made once and once only, irrespective of good or bad faith. It is no longer a mistake when you do the same misstep over and over again, it is a deliberate pattern of behaviour.
Legally, the issue isn’t what they were thinking when they hired the actor, it’s what the intent and effect was when they went to market. (Even if there was documentary evidence that they actively sought out an actor for resemblance to SJ’s voice from day one, the only reason that would be relevant is because it would also support that that was there intent with the product when it was actually released, not because it is independently relevant on its own.)
Whether or not they had any interest in SJ’s voice when they hired the other actor, they clearly developed such an interest before they went to market, and there is at least an evidence-based argument that could be made in court that they did, in fact, commercially leverage similarity.
It is a curious reaction, but it starts to make sense if some of these posters are running ops for intelligence agencies. Balaji Srinivasan noted that as the US started pulling out of foreign wars, the intelligence apparatus would be turned inward domestically.
Some of it can also be attributed to ideological reasons, the d/acc crowd for example. Please note I am not attacking any individual poster, but speculating on why someone might refuse to acknowledge the truth, even when presented with evidence to the contrary.