Just to state the obvious, "we" are not doing anything here. "We," as in the general-public "we," don't have much of a choice when someone has lots of money and lawyers and wants to use those resources to sneakily and deceptively make more money. Unless "we" elect better representatives who are willing to write and enforce laws governing the wealthy's ability to do effectively whatever they want, "we" don't have much of a say.
I guess what we really need is super cheap fusion power or something? Or perhaps a way to easily share the cost by spreading the training load and electricity bill across millions of home computers?
It's not literally the electricity that's the problem. It's also the billions in GPUs, and the teams of people fine-tuning with reinforcement learning.
Unlike most software projects that came before, Big AI Projects require a level of funding and coordination that can't be matched by "more volunteers". They require coordination and deep pockets - not for writing the code but for training it.
It's not literally the electricity that's the problem. It's the power it gives -- the electricity being a tiny part of it, and all our data the overwhelming majority -- to "Big Tech" rentiers, which Altman is, or at least aspires to become.
There are hundreds of people with similar voices. If another voice actor can pull off the same accent as Ms. Johansson, shouldn't it be fair game, as long as that actor was the original training material? Voices cannot be copyrighted or made exclusive, although I am sure Hollywood will try to copyright them at some point.
He kind of ruined that argument when he tweeted “Her” alongside the video. Pretty clearly drawing a line between the voice and Johansson’s portrayal in the movie.
Incredible, really. It would have been so easy to just… not do that.
There could be a bubble in terms of stock valuation, but the tools are definitely going to stay.
This could be kind of like the dot-com bubble -- the Internet went on to become BIG, but many of the companies went bust... (and the ones that thrived are probably not the ones that were well known back then).
I understand there's way too much out there, but I think there is at least some clarity about the landscape at present.
ChatGPT is currently king of the mountain. That could change, but right now that's how it is.
Google's Gemini and Facebook's Llama 3 are clearly in a tier below. The 100s of tools you are seeing are various mixed and matched technologies that also belong in this tier.
Claude (massive context) and Mistral/Mixtral (decent with no censoring/guard rails) are interesting for special cases. And if you're determined and willing to put in the effort, you can experiment or self-host and perhaps come up with capabilities that suit a particular use case or something you want to optimize for (although not everyone has time for that).
So I wouldn't say it's just all this one big swirl of confusion and therefore a bubble and due to come crashing down. There's wheat, there's chaff, there's rhyme and reason.
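For anyone curious what "self-host" means in practice, here is a minimal sketch using the Hugging Face transformers library to run an open-weights instruct model locally. The model id, prompt, dtype, and sampling settings are illustrative assumptions (not anything specified in this thread), and you'd need a reasonably large GPU, or a quantized build, for this to be comfortable:

    # Minimal sketch: self-hosting an open-weights chat model with Hugging Face transformers.
    # Assumptions (not from the thread): model id, prompt, and settings are illustrative only;
    # requires torch, transformers, and accelerate installed, plus enough GPU memory.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "mistralai/Mistral-7B-Instruct-v0.2"  # hypothetical choice of open model

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        torch_dtype=torch.float16,   # half precision to fit on a single large consumer GPU
        device_map="auto",           # let accelerate place layers on available devices
    )

    # Build a chat-formatted prompt using the model's own chat template.
    messages = [{"role": "user", "content": "Summarize the Midler v. Ford ruling in two sentences."}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    # Generate a short completion; sampling settings are arbitrary defaults.
    output = model.generate(inputs, max_new_tokens=200, do_sample=True, temperature=0.7)
    print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))

In practice many people reach for llama.cpp or Ollama instead, which serve quantized builds that fit on modest hardware, but the shape of the workflow is the same.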
This is completely false. Claude Opus is significantly better than GPT 4.
> Mistral/Mixtral (decent with no censoring/guard rails)
These models have been heavily censored, I'm not sure what you're talking about. Community efforts like Dolphin to fine-tune Mixtral have some success, but no, Karen is definitely still hard at work in France, ensuring that Mistral AI's models don't offend anyone's precious fee-fees.
I think you're missing the forest for the trees here. You're right that Claude Opus is better, which I hadn't known, but I think in your zeal to make that point you're completely forgetting what my comment was about.
It's nevertheless true that there is a coherent landscape of better and worse models, and ChatGPT really does have separation from the other models, as I mentioned above. I even mentioned that ChatGPT's position would be subject to change. My understanding is that this most recent version of Claude has been out in the wild for perhaps two months.
I feel like with even a little bit of charitable interpretation you could read my comment in a way that accounts for the emergence of such a thing as a new and improved model of Claude. So I appreciate your correction but it's hard to see how it amounts to anything more than a drive-by cheap shot that's unrelated to the point I'm making.
It's an exuberance bubble. Every tech company on earth is racing to "do something with AI" because all of their competitors are trying to "do something with AI" and they don't want to be left out of the excitement. The excitement and exuberance will inevitably cool, and then a new thing will emerge and they'll all race to "do something with that new thing."
Oh, the irony. Actors are afraid of being digitized and used without their consent, and the first B2C AI company digitizes a soundalike of the voice of one of the first 3 AI movies, without her consent…
Her was a movie with an AI assistant who talked like a normal human rather than an intentionally clunky "bleep blorp" dialect that lots of other movies go with. They even make fun of this in the movie when he asks her to read an email using a classic voice prompt, and she responds pretending to be a classic AI assistant.
The new voice2voice from OpenAI allows for a conversational dialect, most prominently demonstrated in pop culture by the movie Her. Sam's tweet makes perfect sense in that context.
Sky's voice has been the default voice in voice2voice for almost a year now, and no one has made a connection to the Her voice until it started acting more conversational. It seems pretty obvious that OpenAI was looking for a more conversational assistant, likely inspired by the movie Her, and it would have been cool if the actress had helped make that happen, but she didn't, and here we are.
Also Juniper has always been the superior voice model. I just now realized that one of my custom GPTs kept having this annoying bug where the voice kept switching from Juniper to Sky, and that seems to be resolved now that Sky got removed.
> Sky's voice has been the default voice in voice2voice for almost a year now, and no one has made a connection to the Her voice until it started acting more conversational.
Let's take a parallel situation from around 20 years ago, and see how you feel about it. I'm going back that far as a reminder of what was long considered OK, before AI. In Seed of Chucky, there is a scene in which Chucky kills a woman presented unmistakably as Britney Spears: her car carries a "BRITNEY1" license plate, and he then quips, "Oops, I did it again."
Except Britney Spears was not hired for the role. They hired a Britney Spears impersonator for the scene. They did everything they could to make it look like Britney, and to make you think it was Britney. But it really wasn't.
Do you think that Britney should have sued the Chucky franchise for that? If so, should Elvis Presley's estate also sue all of the Elvis Presley impersonators out there? Where do you draw the line? And if not, where do you draw the line between what happened in Chucky and what happened here?
I really don't see a line between now hiring someone who sounded like the actress and then tweeting the name of one of her movies, and what happened 20 years ago with Chucky killing someone who looked like Britney, then showing a license plate saying "BRITNEY1", and THEN saying, "Oops, I did it again." (The title of her most famous song at the time.) If anything, the movie was more egregious.
> Seed of Chucky, the off-the-wall fifth installment of Don Mancini's Child's Play franchise, was forced to include a special disclaimer about pop superstar Britney Spears
> This scene was included in promotional spots for the film, most specifically Seed of Chucky's trailer, but the distributing company associated with the film, Focus Features, made the decision to significantly cut the scene down and add a disclaimer. The disclaimer that ran with the promotional spot, which was altered to only show a brief glimpse of Ariqat as Spears, stated: "Britney Spears does not appear in this film."
There is a distinction between the image of a celebrity and their voice. The image of a celebrity is usually pretty cut and dried, it’s them, or obviously intended to be them. If the use of their image isn’t meant to be satirical, it’s problematic. The Crispin Glover/Back to the Future 2 case is a good example of non-satirical use that was problematic. Zemeckis used previous footage of Glover, plus used facial molds of Glover to craft prosthetics for another actor.
Voices… are usually not so distinctive. However, certain voices are very distinct: Tom Waits, Miley Cyrus, James Earl Jones, Matt Berry. Those voices are so distinctively those people that if you simulated them, it would be obvious whom you were simulating. Other celebrity voices are much more generic. Scarlett fits into this with a pretty generic female voice with a faint NY/NJ accent.
OpenAI screwed up by taking a generic voice and making it specific to the celebrity, both by reference and by having actually pursued the actor for the use of her voice.
I don't think this is an apples-to-apples comparison.
The movie producers didn't produce a simulation of Britney's voice and attempt to sell access to it.
However you feel about an probably-unapproved celebrity cameo in a movie, it's not the same thing as selling the ability to impersonate that celebrity's voice to anyone willing to pay, in perpetuity.
If you go to Vegas, you can go to a wedding officiated by someone who looks like, sounds like, and acts like Elvis Presley. This is available to anyone. You can get the same actor to do the same simulation for another purpose if you're willing to pay for it.
The biggest difference that I see is that technology has made the simulation cheaper and easier.
And these people are known as "Elvis Presley impersonators." They don't pretend to be some obscure person you've never heard of, for very obvious reasons.
The biggest difference here is obviously one of scale. I don't think ScarJo would be threatening to sue you, the individual, if you did a voice impression of her for a talent show or a friend's wedding.
That makes it weird, but it doesn't (itself) mean they literally used her voice. It just means they were inspired by the movie. It's not illegal to be weird.
Legally they don’t need to have literally used her voice to have broken the law, never mind violating many people’s basic sense of what’s right and wrong.
They don't? Because if it's true that they used a sound-alike voice actress for the actual model, I don't see how any reasonable complaint about that could stand. You can't ban people from voice-acting who have similar voices to other celebrities. There needs to be something more to it.
It's such a huge problem that it's only brought up in the context of someone (probably) doing exactly what it's designed to prevent... By some miracle, this actually isn't used to outlaw satire or put Elvis impersonators out of work. It's used to prevent people from implying endorsement where none exists.
I think it’s less the voice and more about how they went about it. They were apparently in negotiations with her and they fell apart. Then they tried to resume negotiations with her two days before the new model launched.
If it was just an actor, it might be a case of inspiration gone awry. But this particular actor sued Disney in 2021 after making a lot of movies for them and a lot of money doing so.
Deliberately picking a fight with a litigation-happy actor is weird. Most weirdness is pretty benign. But this is the kind of weird that forces out-of-court settlements. It's reckless.
Edit - mistyped the date as 2001. Changed to 2021.
That's a fair statement if you take the "Her" post out-of-context and without the corroborating retort from ScarJo and his history. Which, of course, is not possible and also pretty boneheaded itself.
This isn't some college kid with an idea and too much passion.
Perpetual benefit of the doubt given for every implication as though it’s happening in a vacuum is how humanity keeps putting megalomaniacs and sociopaths into positions of power and influence.
If we're going to pillory Sam Altman, it's important to do it for the right reasons. That was not a good reason. I really should not need to defend this principle.
Had the film Her used someone else as the AI voice, someone who sounded like Johansson, would there be complaints about the film using a voice that sounded like Johansson? Does it matter whether the producers tried to hire her first? Is it because only Johansson has that voice? Johansson never visually appears in Her, and if not for the film credits, could the voice in that film be used to identify her from among hundreds of millions of other possible women? (I had no idea who did the voice acting and would never have known if not for this news.) Now if the owners of the film Her were to demand that OpenAI license a character from their film (like licensing, say, the C-3PO character from Disney), maybe there would be a case. But an actor claiming to own a natural human "voice" is, I think, a stretch when there are thousands of people with similar voices. And she is never visually in the film that made that AI voice famous, so it could be anyone with a similar voice in that film.
I don't know about complaints but Ms. Johansson might be able to win a civil suit in that hypothetical situation. It would depend on the facts of the particular case, particularly any evidence that the defendants acted in bad faith. I think a lot of technologists don't understand how burden of proof works in civil trials, or that there is no presumption of "innocence".
This test is not blind, but YOU tell me which one you think sounds similar to the OpenAI Sky voice. And what does that tell you about the likely court result for Johansson? And having reached this conclusion yourself, would you now think the other actress, Rashida Jones, is entitled to compensation based on this similarity test? Because there are no other women with similar voices?
> He kind of ruined that argument when he tweeted "Her"
Why? The grandparent is not saying it's a coincidence. Why is it not okay to hire someone who has a voice similar to celebrity X, whom you intentionally want to imitate? I mean, as long as you don't actually mislead people into believing that your imitation is actually X -- which would obviously be problematic.
Alright then, the solution is simple. All he has to do is name the actress that OpenAI -did- hire for the voice work, right? That would put any doubt to rest.
(Can't for the life of me recall if she sounds anything like Johansson; just putting her forward to tease her relative here. (Who is in the wrong in his arguments above.))
This is one of those "accuse a diver of being a paedophile" moments. Who knew Sam is a creep with a Scarlett Johansson obsession, cooking up a voice model just like hers on compute that daddy Satya paid for (but books as revenue, 2000-dotcom style).
In Back to the Future Part II, Crispin Glover didn't sign up to play George McFly again, so they used facial prosthetics and impersonation to continue the George McFly character.
He sued Universal, and reportedly settled for $760,000.
While not defending OpenAI or Altman, the caveat here is that this was a voice actor using their natural voice, not an actor impersonating scarlett johansson.
Establishing that a natural voice similar to a more famous actor's precludes you from work would be a terrible precedent to set.
> Establishing that a natural voice similar to a more famous actor's precludes you from work would be a terrible precedent to set.
Yes, but literally no one anywhere is suggesting that the voice actress used would be banned from work because of any similarity between her voice and Johansson's; that’s an irrelevant strawman.
Some people are arguing that there is considerable reason to believe that the totality of the circumstances of OpenAI’s particular use of her voice would make OpenAI liable under existing right of personality precedent, which, again, does not create liability for mere similarity of voice.
>Yes, but literally no one anywhere is suggesting that the voice actress used would be banned from work because of any similarity between her voice and Johansson's; that’s an irrelevant strawman
It's not. The original comment in this chain was drawing parallel to a lawsuit in which someone intentionally took steps to impersonate an actor.
This situation is a voice actor using their "natural voice" as a source of work.
If a lawsuit barring OpenAI from using this voice actor is successful, due to similarities to a more famous actor, that puts this voice actor's future prospects at risk with companies actively wanting to avoid the potential for litigation.
That a calming female persona as a real-time, always-present life assistant draws a parallel to a movie about a calming female persona that is a real-time, always-present life assistant is not a smoking gun of impropriety.
Pursuing a more famous name to attach to marketing is certainly worth paying a premium over a lesser known voice actor and again is not a smoking gun.
Sky voice has been around for a very long time in the OpenAI app dating back to early 2023. No one was drawing similarities or crying foul and decrying how it "sounds just like Scarlett" ..
> Sky voice has been around for a very long time in the OpenAI app dating back to early 2023. No one was drawing similarities or crying foul and decrying how it "sounds just like Scarlett" ..
While you're right I should have chosen my words more carefully, a random reddit post with 68 upvotes doesn't really dispute the substance of my comment.
OpenAI has been plastered across the news cycles for the last year, most of that time with Sky as the default voice. There was no discernable upheaval or ire in the public space suggesting the similarities of the voice in any meaningful public manner until this complaint was made.
The Reddit post had a link to a Washington Post article. And what you think the substance of your comment was is unclear.
Most people don't use ChatGPT. Many people who use ChatGPT don't use voice generation. OpenAI's September update didn't have a demo watched by millions unless I missed something. Altman hyped the May update with references to Her. Some people thought the recent voice generation changes made the Sky voice sound more like Johansson. Some people gave OpenAI the benefit of the doubt before Johansson revealed they asked her twice. And what do you believe it would prove otherwise?
>Washington Post article. And what you think the substance of your comment was is unclear.
You mean this?
"Each of the personas has a different tone and accent. “Sky” sounds somewhat similar to Scarlett Johansson, the actor who voiced the AI that Joaquin Phoenix’s character falls in love with in the movie “Her.” Deng, the OpenAI executive, said the voice personas were not meant to sound like any specific person."
As I stated before, and thank you for making my point: despite being publicly available for nearly a year, there was only minor mention of the similarity, with no general public sentiment about it.
>Altman hyped the May update with references to Her
If by "hype" you mean throwaway comments on social media that the general population was unaware of.
Drawing a parallel to a calming persona of an always on life assistant from pop culture in a few throwaway social media posts from personal accounts such as "Hope Everyone's Ready" isn't hyping it as Her any more than Anthropic is selling their offerings as a Star Trek communicator despite a few comments they've made on social media.
Ambiguous "some people" overstates any perceived concern and "most people don't use ChatGPT" understates how present they've been on the news.
The ChatGPT mobile app, which heavily emphasized voice and had "Sky" as its default voice, had over 110 million downloads across iOS and Android platforms before the May announcement.
If we assume that Scarlett Johansson is telling the truth, why would they try to resume negotiations with her two days before they launched the model? If they found a good actor whose voice sounds like Scarlett Johansson, that’s a great argument. But if they found a good actor whose voice sounds like Scarlett Johansson because the real Scarlett Johansson said no, that gets more questionable.
When they did all that and still promoted the launch by directly referring to a Scarlett Johansson role, it got even more questionable.
I’m not pulling out my pitchforks but this is reckless.
Could they be trying to avert possible negative public perception even if they believe everything they did was 100% legal? If you have ample funds and are willing to pay someone to make X easier for you, does your offer to pay them imply that X is against the law? If your voice sounds like someone famous, are you now prevented from getting any voice acting work? Because that famous person owns the rights to your voice? Tell me which law says this.
I don’t know why you’re asking me those last three questions. First, I’m not a lawyer. Second, I didn’t make any claims that could make those questions relevant.
Instead, I’ll repeat my earlier claim - this was reckless. If they were trying to avoid a strong negative perception, they failed. And they failed with an actor who sued Disney shortly after they paid her $20 million to make a movie.
You asked a good question about why they may have acted as they did, and I attempted to answer it. In hindsight, based on the results, it may look reckless, but decisions need to be judged based on what is known at the time they are made, and the public reaction was not a foregone conclusion. The OpenAI Sky voice has been available since last September; why was there no outrage about it back then?
This test is not blind, but YOU tell me which one you think sounds similar to the OpenAI Sky voice.
> And they failed with an actor who sued Disney shortly after they paid her $20 million to make a movie.
OpenAI did not fail. They suspended the Sky voice and backed down so as not to further anger a segment of the public who views much of what OpenAI does in a negative light. Given the voice test above, do you seriously think OpenAI would lose in court? Would that matter to the segment of the population that is already outraged by AI? How are journalists and news companies affected by AI? How might their reporting be biased?
> While not defending OpenAI or Altman, the caveat here is that this was a voice actor using their natural voice, not an actor impersonating scarlett johansson.
Drawing a parallel to a calming persona of an always on life assistant from pop culture in a few throwaway social media posts from personal accounts such as "Hope Everyone's Ready" isn't "selling it as Her" any more than Anthropic is selling their offerings as a Star Trek communicator despite a few comments they've made on social media.
I think the issue is intent. It's fine if two voices happen to be similar. But it becomes a problem if you're explicitly trying to mimic someone's likeness in a way that is not obviously fair use (eg parody). If they reached out to Johansson first and then explicitly attempted to mimic her despite her refusal, it might be a problem. If the other voice was chosen first, and had nothing to do with sounding the same as Johansson, they should be fine.
No, it is. Waits v. Frito Lay was a successful lawsuit where Tom Waits sued Frito Lay for using an impression of his voice in a radio commercial. https://casetext.com/case/waits-v-frito-lay-inc
It's not that simple. Actors have a right to protect the use of their likeness in commercial projects like ads, and using a "soundalike" is not sufficient to say that isn't what you were trying to do. The relevant case law is Waits vs. Frito Lay. The fact that OpenAI approached her about using her voice twice and that Sam Altman tweeted about a movie she starred in makes her case much stronger than if they had just used a similar voice actor.
This is not the case.
“ A voice, or other distinctive uncopyrightable features, is deemed as part of someone's identity who is famous for that feature and is thus controllable against unauthorized use. Impersonation of a voice, or similarly distinctive feature, must be granted permission by the original artist for a public impersonation, even for copyrighted materials.”
> There are hundreds of people with similar voices.
Voice *actors* act. It is in the name. The voice they perform in is not their usual voice. A good voice actor can do dozens of different characters. If you hire a voice actor to impersonate someone else's voice, that is infringement. Bette Midler vs Ford, Tom Waits vs Frito Lay are the two big examples of court cases where a company hired voice actors to impersonate a celebrity for an ad, and lost big in court.
So when a cartoon show hires a sound-alike replacement voice actor so that the switch is hard to notice, does the former actor have a case against the show? Perhaps instead the show has a case against the former voice actor for using that same character voice elsewhere, such as in radio advertising, to impersonate cartoon characters that are not licensed?
So for the voice of the AI in the film "Her", who do you think has more rights to its being reused elsewhere in association with AI? The voice actor? The film's owners? Why then the current news?
No, voices can be exclusive. One good example is Bette Midler, who sued Ford in tort for misappropriation of voice and won on appeal to CA9. 849 F.2d 460.
This test is not blind, but YOU tell me which one you think sounds similar to the OpenAI Sky voice. And what does that tell you about the likely court result for Johansson? And having reached this conclusion yourself, would you now think the other actress, Rashida Jones, is entitled to compensation based on this similarity test? Because there are no other women with similar voices? What might support from friends and family of Rashida Jones be an indication of?
If the voice actor was cast... why bother reaching out to ScarJo?
Like, do you want to pay her fee, Sam? Because the general idea is to not pay the fee. Which is why you probably cast the voice actor before reaching out to Johansson.
I agree that it’s a bit of a sketchy thing to do, and potentially even illegal based on similar case law, but the commenter I responded to created an entire fake sequence of events that seems incredibly unlikely, when there’s a far simpler explanation.
A potential answer to that is liability protection even when you feel like you are legally in the clear. It is still worth paying a sum to avoid a lawsuit you think you will win.
An example of this is that Weird Al pays for the rights to things that are probably OK under fair-use parody protection. Paying for the rights removes the possibility of a challenge.
Does Weird Al pay rights? I know he asks for permission to maintain his relationships with artists and to make sure he gets his share of songwriting credits (and the fees).
But does he pay for rights? I’ve never seen that before and I’d love to read more.
That says he asks for permission. His new song would generate songwriter credits and they’re paid out totally differently from regular royalties. Is that what you mean by him paying for rights?
Rereading your comment, I see that my answer rather falls short of your question. I don't claim to know anything about Weird Al beyond what he wrote on that page.
Honestly pal, I really appreciate you trying! I’m one of very few people strange enough to care about the minutiae of this. I’m grateful that you jumped into my weird rabbit hole with me for awhile. It was kind of you to try to help me.
My understanding is he doesn’t have to ask permission but does for two purposes. It’s important to him to keep good relationships with artists, and he wants to make sure that he gets songwriting credits because those are paid differently (and are often more lucrative) than royalties from recordings.
I’d love to find out if he directly pays artists for rights. That would be really interesting and would add a whole dimension to his problems with Prince.
ScarJo claims they reached out to her just 2 days prior to demoing the voice that sounded like her, and (I believe) OpenAI outright claimed that they hired a different voice actor, though they didn't admit that they instructed her to try to sound like Scarlett's character in Her, which could make or break Scarlett's case.
ScarJo claims they reached out to her far earlier, when she rejected them, and that the contact 2 days prior to launch was an attempted follow-up by OAI that didn't happen.
It's not a vague suggestion - in the full statement that's been reported elsewhere, he explicitly says it.
> The voice of Sky is not Scarlett Johansson's, and it was never intended to resemble hers. We cast the voice actor behind Sky’s voice before any outreach to Ms. Johansson. Out of respect for Ms. Johansson, we have paused using Sky’s voice in our products. We are sorry to Ms. Johansson that we didn’t communicate better.
I'm skeptical whether this is true, but it's a pretty unambiguous and non-sneaky denial.
- The creator of a new widget takes the widget to another widget manufacturer and says, "Would you like to put your stamp on this? It's similar to yours, yet derivative enough, and we would both benefit."
- The other widget manufacturer says "no."
- The creator of the widget then puts the badge on the widget anyway and gets called out / faces legal action.
- The creator of the widget says, "Well, we planned to put the badge on there anyway, before even considering the other widget manufacturer. It's just a coincidence."
This shouldn't even go to court. Laughable that the face of modern tech is cheesing this much.
I agree that he should have been honest, but from the opposite perspective.
Altman should have said, "Yes, we made the voice similar to this washed-up actress, but her voice is not much different from anyone else with similar regional upbringing, year of birth, habits, and ethnic background, so we invite anyone else born in the mid-eighties, raised in Greenwich, and with Danish heritage, to sue us too. We'll see how well you do in court. Otherwise, get fucked."
This whole thing with anybody giving a shit about your voice, which isn't even yours, as it's a product of your environment and genes, and will be strikingly similar to anyone with similarities thereof, is insane.
Altman shouldn't have used weasel words, I agree. He should have owned it, because it's a total non-issue, and the people upset about it need to be shamed as the Luddite assholes that they are.
Non-sequitur statements like this drive me nuts. Somehow, politicians and executive types learn how to use just enough of them to make the audience forget what they're not saying.
It's quite funny (not sure if ironic) that, in the context of OpenAI, ChatGPT can do exactly the same thing: generate a string of sentences that from a cursory skim might sound about right, but when you read with attention you find all the cracks and incongruities in the generated text.
> We should not give a sneaky, deceptive and manipulative person this much power over our future.
I think this should be applied to our government. In my opinion, it is a failing in the structure of our government that those running the country control the police and appear to rarely be investigated unless by the request of a political opponent. They are seemingly outside of the law. It would be better if they were under perpetual investigation; forever kept in check. We should have assurance that those leading our country are not villainous traitors.
Interestingly one of the things that came out of 2020 was that nobody appears willing to control the police. Unless by police you mean FBI, which would both make sense for investigating a national politician and be directly under the control of the executive.
> one of the things that came out of 2020 was that nobody appears willing to control the police
While I don't agree with that statement, I will clarify that I was using the term "police" to encompass all agencies in both the USA and Canada capable of legally conducting an investigation at the federal level and carrying out an arrest. As far as I am aware, these agencies are all funded by our federal governments. Even though in my mind I was thinking of only the USA and Canada, the structural flaw probably applies to most governments, if not all (speculating).

The flaw is that the leaders of our nations conduct national affairs as though they are shielded from the law that polices their citizens. They are getting away with using our national resources (financial, material, human, etc.) in ways that may benefit their own agendas but are observably harmful to our economy, and therefore to the citizens at large. If an investigation could prove that my speculation is true, then it would be in both our nations' best interest to deal with the problem swiftly and legally. My hope would be that such an outcome would instigate reform to address the root cause. Without an investigation, we are at the mercy of waiting for the next election, but if our leaders are egregiously harming the interests of our nations' citizens as a whole, we should not have to wait until their term is complete.

I will add one more thing: the problem is not limited to economics, but extends to the abuse of the press and education to influence how we as a nation are able to learn about and understand both national and global politics.
In highly competitive industries, some may resort to ruthless tactics to outmaneuver their competitors... But I want to believe that not all billionaires are like that.
I listened to a comedy podcast early last week that was using ChatGPT 4 with this voice to make some funny bits/jokes.
Without having any context about who the voice was, or the "Drama" between OpenAI and actress in question, or even really being aware of Scarlett Johansson's body of work, I immediately went "Oh that's Scarlett Johansson or whatever, cool"
To read all of this after the fact is almost comical. It's as if the powers that be realized the issues with the "one man in charge of AI" platform and created this almost unbelievable story to discredit him.
When Jim Carrey does an impersonation, it's clear that it's Jim Carrey impersonating someone for comedy's sake, not providing a service in lieu of someone else. In other words, Jim Carrey isn't getting paid to stand in for Jack Nicholson, for example. Otherwise, it looks more like the Midler vs Ford Motor Co. case[1]
Are you referring to doing impressions where the act lasts for a few minutes, or are you saying that Jim Carrey actually impersonated other celebrity voice for like a whole movie or interview? There is a difference, I think. One feels like “fair use” while the other would seem more like “plagiarism”.
As pointed out upthread, fair use is an exemption for copyright. You don’t need fair use for something that isn’t copyrighted (and, indeed, isn’t even copyrightable).
This is a typical "move fast and break things" mentality.. except that mentality betrays Sam's statements about doing a bunch of this stuff "carefully" etc.. it's all a smokescreen.. nobody is going to realistically stop working on AGI in order to be careful.. basically AGI is being pursued like the race to get the atom bomb.. so yeah, history tells us it's full speed ahead with no brakes. ScarJo is just the latest person getting stomped on along the way.. eventually it will be a whole ton of people getting stomped on.. whoops!
"This is a typical "move fast and break things" mentality.. " with a big dash of “there’s no such thing as bad publicity” thinking thrown in for good measure.
There are many people with voices similar to Scarlett Johansson's. If SJ is unwilling to be a voice actor for OpenAi, then why should OpenAI not find a similar voice and use that instead? SJ certainly does not have a monopoly on all voices similar to hers. Anyone in possession of such a voice has the same right as SJ to monetize it. And someone did in fact exercise that right. If you compare the Sky voice to SJ's, they're not the same.
OpenAI's mistake was caving to SJ. They should have kept Sky and told SJ to get lost. If SJ sued, they could simply prove another voice actor was used and make the legitimate argument that SJ doesn't have a monopoly on voices similar to hers.
I think what’s going on here is that Scarlett is famous, and so media outlets will widely cover this. In other words, this latest incident hasn’t riled up people any more than usual — if you scan the comments, they’re not much different from how people already felt about OpenAI. But now there’s an excuse for everybody to voice their opinions simultaneously.
They’re acting like the company literally stole something.
It also didn’t help that OpenAI removed the Sky voice. Why would they do that unless they have something to hide? The answer of course is that Scarlett is famously rich, and a famously rich person can wage a famously expensive lawsuit against OpenAI, even if there’s no basis. But OpenAI should’ve paid the cost. Now it just looks like their hand was caught in some kind of cookie jar, even though no one can say precisely what kind of cookies were being stolen.
Regardless of the exact voice spectrum, the plot would apply with any flirty female voice. It was not a movie about Scarlett Johansson. It was a movie about an AI eliciting a relationship.
For the “her” reference(s?), was there anything beyond the single tweet?
100%. This whole thing is more stupidity than anything else. There is nothing wrong with using a voice that sounds like her. There is everything wrong with referencing the movie and sort of implying it is the voice from the movie. They could have easily let others make the connection. So dumb.
Not an IP lawyer, but I think the company that produced the movie owns the relevant IP, and Johansson might also own IP around it.
You can have an opinion on it, but they are going to get sued. Just like I can't take Moana and throw her in an ad where it says "I like [insert cereal here]", they can't take a character and use it without expecting Disney/whoever to come sue them.
Hmm. Being able to say "thou shalt not make a character similar to Her" is a lot like saying "thou shalt not make a video game character similar to any other." It’s not an explicit copy, and their name for Sky was different. That’s the bar for the videogame industry; why should it be different for actors? Especially one that didn’t show her face.
This whole thing is reminiscent of Valve threatening to sue S2 for allegedly making a similar character. Unsurprisingly, the threats went nowhere.
It’s the other way around. The contortionists are on the other side of the issue. We’re talking about OpenAI hiring someone to use their natural speaking voice. As movies say, any similarity to existing people is completely coincidental from a legal perspective.
From a moral perspective, I can’t believe that people are trying to argue that someone’s voice should be protected under law. But that’s a personal opinion.
They said so, and it’s what I would have done. I have no reason not to believe them.
Unfortunately a commenter pointed out that there's legal precedent for protecting people's voices from commercial usage specifically (thanks to a court case from four decades ago), so I probably wouldn't have tried this. The coolness factor of replicating Her is outweighed by the cost of battling it out in the legal system. I personally feel it's a battle worth winning, since it's bogus that they have to worry about some annoyed celebrity, and your personal freedoms aren't being trodden on in this case. But I can see why OpenAI would back down.
Now, if some company were, e.g., trying to commercialize everybody's voices at scale, this would be a different conversation. That should obviously not be allowed. But replicating a culturally significant voice is one of the coolest aspects of AI (have you seen those recreations of historical voices from other languages translated into English? If not, you're missing out), and that kind of at-scale commercialization is not what OpenAI did here.
No. But in this particular case, there are two factors that make that irrelevant for me. One, I would have made their same mistake. (If I was Sam, I too would have found it a really cool idea to make GPT have the voice of Her, and I too would not have realized there was one dumb court case from the 80s standing in the way of that.)
Two, it’s bogus that conceptually this isn’t allowed. I’m already anti-IP — I think that IP is a tool that corporations wield to prevent us from using "their" ideas, not to protect us from being exploited as workers. And now this is yet another thing we’re Not Allowed To Do. Great, that sounds like a wonderful world, just peachy. Next time maybe we’ll stop people from monetizing the act of having fun at all, and then the circle of restrictions will be complete.
Or, another way of putting it: poor Scarlett, whatever will she do? Her voice is being actively exploited by a corporation. Oh no.
In reality, she’s rich, powerful, and will be absolutely fine. She’d get over it. The sole reason that she’s being allowed to act like a bully is because the law allows her to (just barely, in this case, but there is one legal precedent) and everyone happens to hate or fear OpenAI, so people love rooting for their downfall and calling Sam an evil sociopath.
Someone, please, make me a moral, ethical argument why what they did here was wrong. I’m happy to change my mind on this. Name one good reason that they shouldn’t be allowed to replicate Her. It would’ve been cool as fuck, and sometimes it feels like I’m the only one who thinks so, other than OpenAI.
Actually, there's a similar court case from 1988 that creates legal precedent for her to sue.
"That's just one case! And it's from 1988! That's 36 years ago: rounded up, that's 4 decades!"
Actually, there's a court case from 1992 that built on that judgement and expanded it to define a specific kind of tort.
"That's bad law! Forget the law! I demand a moral justification."
Anyway, asking a person if you can make money off their identity, them saying no, and you going ahead and doing that anyway seems challenging to justify on moral grounds. I don't think you're willing to change your mind, your claim notwithstanding.
If you approach a debate from a bad faith standpoint, don’t be surprised when the other person doesn’t change their mind. "I think you’re a liar" is a great way to make them nope out.
Which is a shame, since you had a decent argument.
Except it isn’t. Again, you’re acting like OpenAI tried to profit off of Scarlett. They tried to profit off of the portrayal she did in the movie Her. These are not the same thing, and treating them as interchangeable is some next level moral rationalization. One is taking advantage of someone. The other is what the movie industry is for.
Now, where's this case from 1992 that expanded and defined the scope of this?
> Except it isn’t. Again, you’re acting like OpenAI tried to profit off of Scarlett. They tried to profit off of the portrayal she did in the movie Her.
Ahhh... so you admit OpenAI has been shady, but you argue they're actually ripping off Spike Jonze, not Scarlett Johansson?
HEH. The people who say Sam is shady aren't really interested in this distinction.
(And you're wrong, both ScarJo and the film own aspects of the character they created together.)
> Again, you’re acting like OpenAI tried to profit off of Scarlett. They tried to profit off of the portrayal she did in the movie Her.
From her statement:
> I received an offer from Sam Altman, who wanted to hire me to voice the current ChatGPT 4.0 system. He told me that he felt that by my voicing the system, I could bridge the gap between tech companies and creatives and help consumers to feel comfortable with the seismic shift concerning humans and AI. He said he felt that my voice would be comforting to people.
So, they wanted to profit off of her voice, as her voice is comforting. She said no, and they did it anyway. Nothing about, "come in and do that song and dance from your old movie."
> then why should OpenAI not find a similar voice and use that instead?
That's assuming they did, right now they're asking us to pretty please trust them that their girlfriend from Canada is really real! She's real, you guys! No I can't show her to you.
1. The case is from 1988. That’s the year I was born. Societal norms are in a constant state of flux, and this one case from 36 years ago isn’t really an indication of the current state of how case law will play out.
2. Ford explicitly hired an impersonator. OpenAI hired someone that sounded like her, and it’s her natural voice. Should movies be held to the same standard when casting their actors? This is about as absurd as saying that you’re not allowed to hire an actor to play a role.
Midler is actually quite similar. Midler didn't want to do a commercial, and refused an offer, so they hired a lookalike that fooled her friends. The appellate court held that Ford and its advertising agency had "misappropriated" Midler's voice.
Waits v. Frito Lay, Inc was '92, and cited it. They used a Tom Waits-sounding voice on an original song, and Waits successfully sued:
> Discussing the right of publicity, the Ninth Circuit affirmed the jury's verdict that the defendants had committed the "Midler tort" by misappropriating Tom Waits' voice for commercial purposes. The Midler tort is a species of violation of the right of publicity that protects against the unauthorized imitation of a celebrity's voice which is distinctive and widely known, for commercial purposes.
Thank you. I didn’t know it was similar specifically for voices in commercial use.
That’s annoying, but we live in a country with lots of annoying laws that we nonetheless abide by. In this case I guess OpenAI just didn’t want to risk losing a court battle.
I still think legal = moral is mistaken in general, and from a moral standpoint it’s bogus that OpenAI couldn’t replicate the movie Her. It would’ve been cool. But, people can feel however they want to feel about it, and my personal opinion is worth about two milkshakes. But it’s still strange to me that anyone has a problem with what they did.
I was born in 1983 and it is wrong to make profit off of someone else's art without their permission. It isn't strange at all. This includes using an impersonator. This excludes parody intentions.
So the overall argument isn't strange, you just disagree without having articulated exactly what biases you to disagree. It is moral disagreement ultimately.
> The case is from 1988. That’s the year I was born. Societal norms are in a constant state of flux, and this one case from 36 years ago isn’t really an indication of the current state of how case law will play out.
Correct: while Midler presents a similar fact pattern and is a frequently taught and cited foundational case in this area, the case law has evolved since Midler toward even stronger protection of celebrity publicity rights, protection that is even more explicitly unconcerned with the mechanism by which the identity is appropriated. Waits v. Frito Lay (1992), another case where a voice sound-alike was a specific issue, has been mentioned in the thread, but White v. Samsung Electronics America (1993) [0], while its fact pattern wasn't centered on sound-alike voice appropriation, may be more important in that it underlines that the mechanism of appropriation is immaterial so long as the appropriation can be shown:
—quote—
In Midler, this court held that, even though the defendants had not used Midler's name or likeness, Midler had stated a claim for violation of her California common law right of publicity because "the defendants … for their own profit in selling their product did appropriate part of her identity" by using a Midler sound-alike. Id. at 463-64.
In Carson v. Here's Johnny Portable Toilets, Inc., 698 F.2d 831 (6th Cir. 1983), the defendant had marketed portable toilets under the brand name "Here's Johnny"--Johnny Carson's signature "Tonight Show" introduction–without Carson's permission. The district court had dismissed Carson's Michigan common law right of publicity claim because the defendants had not used Carson's "name or likeness." Id. at 835. In reversing the district court, the sixth circuit found "the district court's conception of the right of publicity … too narrow" and held that the right was implicated because the defendant had appropriated Carson's identity by using, inter alia, the phrase "Here's Johnny." Id. at 835-37.
These cases teach not only that the common law right of publicity reaches means of appropriation other than name or likeness, but that the specific means of appropriation are relevant only for determining whether the defendant has in fact appropriated the plaintiff's identity. The right of publicity does not require that appropriations of identity be accomplished through particular means to be actionable. It is noteworthy that the Midler and Carson defendants not only avoided using the plaintiff's name or likeness, but they also avoided appropriating the celebrity's voice, signature, and photograph. The photograph in Motschenbacher did include the plaintiff, but because the plaintiff was not visible the driver could have been an actor or dummy and the analysis in the case would have been the same.
Although the defendants in these cases avoided the most obvious means of appropriating the plaintiffs' identities, each of their actions directly implicated the commercial interests which the right of publicity is designed to protect.
–end quote–
> Ford explicitly hired an impersonator. OpenAI hired someone that sounded like her, and it’s her natural voice.
Hiring a natural sound-alike voice vs. an impersonator as a mechanism is not the legal issue; the issue is the intent of the defendant in so doing (Ford in the Midler case, OpenAI in a hypothetical Johansson lawsuit) and the commercial effect of their doing so.
Unrelated, but as someone who came along into this world after Carson's Tonight Show, I had no idea that that moment from The Shining was a play on that. Today's lucky 10,000.
>OpenAI's mistake was caving to SJ. They should have kept Sky and told SJ to get lost. If SJ sued, they could simply prove another voice actor was used and make the legitimate argument that SJ doesn't have a monopoly on voices similar to hers.
Yes, they should have not reached out again, but now they are screwed. In no way will they want a trial and associated discovery. SJ can write her own ticket here.
OpenAI caved immediately because they knew they would lose a lawsuit and be looking at a minimum of an 8 figure payout.
Voice impersonation has been a settled matter for decades. It doesn't matter that they used another actress. What matters is that they tried to pass the voice off as SJ's voice several times.
> OpenAI's mistake was caving to SJ. ... If SJ sued, they could simply prove another voice actor was used and make the legitimate argument that SJ doesn't have a monopoly on voices similar to hers.
Or... hear me out... maybe they couldn't prove that, which is why they caved. Caved within a day or so of her lawyers asking "So if it's not SJ's voice, whose is it?"
Not quite. All they had to do was tell themselves in a discoverable email that they were going to seek out someone who sounded like ScarJo, or send emails to the recruiter saying they wanted someone to mimic the Her voice.
In that case, as I understand it, the voice actor intentionally mimicked Waits, purposefully using his intonations, style of speech, and phrasing, all of which were not natural to the voice actor. He was intentionally mimicking Waits. I doubt the same claim can be made of the Sky voice actor.
Doesn't matter if the voice was natural or not if there are emails at OAI saying "find us someone who sounds just like ScarJo." I suspect there are and that's why SamA turned pussy and ran.
How much influence does @sama have around here nowadays?
For the record, I was never impressed with him - I am not aware of a single consequential thing he has done or built other than take the credit for the fine work of the AI scientists and engineers at OpenAI. It feels like the company is just a vehicle for his own ambition and legacy, not much else.
Hacker News is famously editorially independent from YC-affiliated people, and dang has said that he specifically avoids killing threads involving YC people/companies (not that Sam Altman is YC-affiliated anymore).
The thread is rightly being knocked off by the mods because there’s zero substance here. It’s a follow-on thread (knock one) to a public uproar (knock two) about something that isn’t representative of a new phenomenon (knock three). This isn’t what HN is for.
It's likely getting flagged organically (and due to the ratio of comments to upvotes, getting penalized by the flame war detector), but not due to a vast YC conspiracy.
Hm? You and I agree. There’s no conspiracy here. This is "bog standard moderation", as Dan would say.
Look at it this way: if the community didn’t flag it, it would be the mods’ duty to get this one off the front page. So whether it was the community or the mods is incidental.
Users flagged it and it also set off the flamewar detector. I don't think we'd turn the penalties off on this one, because this article is derivative of the threads HN has already had on the recent events - threads like these:
Sometimes media articles are driven by the topic getting discussed on Hacker News in the first place. That is: major HN thread -> journalist takes notice -> article about topic -> HN user submits article -> another HN thread—but now it's a repetitive one. We don't need that feedback loop, especially because the mind tends to resort to indignation to make up for the lack of amusement in repetitive content (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&sor...), and the earlier threads have been indignant (and repetitive) enough already.
> How much influence does @sama have around here nowadays?
(I'll add a personal bit even though that's usually a bad idea... I remember hearing this kind of comment about Sam going back to the Loopt days. My theory is that it had to do with pg praising him so publicly—I think it evoked a "why him and not me?" feeling in readers. The weird-ironic thing is that the complaint has only grown as Sam has achieved more. Running OpenAI through the biggest tech boom since the iPhone is...rather obviously massive. I think if Sam unifies gravity into quantum theory, brokers peace in the middle east, and cures cancer, we'll still be hearing these complaints—because they're not really grounded either in objective achievement or lack of it. It's some kind of second-order phenomenon, and actually rather interesting. At least if you aren't Sam!)
>My theory is that it had to do with pg praising him so publicly—I think it evoked a "why him and not me?" feeling in readers. The weird-ironic thing is that the complaint has only grown as Sam has achieved more...running OpenAI through the biggest tech boom since the iPhone is...rather obviously massive. I think if Sam unifies gravity into quantum theory, brokers peace in the middle east, and cures cancer, we'll still be hearing these complaints—because they're not really grounded either in objective achievement or lack of it. It's some kind of second-order phenomenon, and actually rather interesting. At least if you aren't Sam!
You're right, posting this was a bad idea. It reads like a "neener neener you're just jealous" defense of someone you happen to like.
I can see how it might read that way! I just think the phenomenon is a curious one. Sometimes I post for that reason.
There is, however, a more dominant rule, which is never to contradict an angry crowd, because doing so only produces more of the same. I break that rule sometimes but not often.
(Edit: s/mob/crowd. I realized on my bike ride hours later that 'mob' was too harsh.)
Indeed not. I think most people who have explored their own feelings of envy enough to notice how powerful they can be will read my comment closer to the way I intended it.
Oh yeah, sure, we all have those and they are indeed quite powerful. But still, your original
> > > I think if Sam unifies gravity into quantum theory, brokers peace in the middle east, and cures cancer, we'll still be hearing these complaints—because they're not really grounded either in objective achievement or lack of it.
...certainly reads as if you thought that just because he might do a few good things, that would make all his (presumed) prior evil acts go away / be the figments of jealous imaginations. Would you say the same about, say, Hitler[1] -- if he unified gravity into quantum theory, brokered peace in the middle east, and cured cancer, should we all agree he's a great guy? Would those of us who said "That was great, thanks, but he's still an evil asshole" just be "jealous"?
If not, why should it be any different with Altman?
[1]: And no (as I'm sure you know), that's not how Godwin's law works.
___
Side note: And I still find it rather sus that the other article, the one that came closest to exonerating him / them, was on the front page for at least twelve hours while this one (apparently, according to other commenters who had followed it) was for max two. "A coincidence that looks aforethought", as the old Swedish saying goes; it certainly didn't look less flamewarry than this, judging from the contents. But if it really was just due to the algorithm, a manual override (either way, bumping this or stomping that) might have improved at least the optics.
I figure we've each made our points about envy and jealousy and whatnot but I feel like I need to address the "sus" business. I explained what happened with the current thread here: https://news.ycombinator.com/item?id=40437018 - users flagged it and it set off the flamewar detector.
The difference with https://news.ycombinator.com/item?id=40448045 is that the latter story contained Significant New Information (SNI) relative to other recent threads. That's the criterion we apply when deciding whether or not to override penalties (https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...). It doesn't have to do with who an article is for or against; it has to do with not having the same discussions over and over.
> a manual override [...] might have improved at least the optics
Sure, and we often do that (https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...), but in this case it didn't cross my mind because the current thread was so obviously derivative of previous discussions that had been on HN's front page for 18+ hours in recent days.
And in any case the next day it flipped back and this story spent 16 hours on the front page:
... so I think we're good on "optics". The important point is that the last link (the vox.com article) contained SNI, whereas the slate.com article was a copycat piece piggybacking on other reporting. In the case of a Major Ongoing Topic (MOT) like this one, that's the key distinction: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
> (I'll add a personal bit even though that's usually a bad idea... I remember hearing this kind of comment about Sam going back to the Loopt days. My theory is that it had to do with pg praising him so publicly—I think it evoked a "why him and not me?" feeling in readers. The weird-ironic thing is that the complaint has only grown as Sam has achieved more. Running OpenAI through the biggest tech boom since the iPhone is...rather obviously massive. I think if Sam unifies gravity into quantum theory, brokers peace in the middle east, and cures cancer, we'll still be hearing these complaints—because they're not really grounded either in objective achievement or lack of it. It's some kind of second-order phenomenon, and actually rather interesting. At least if you aren't Sam!)
I'm not saying he did not achieve anything significant, but it's not clear what those things are, other than having the backing of PG and others.
I'm older and have played the corporate game of thrones. I have seen far too many selfish sycophants rise to leadership, only to eventually make things worse rather than better by using their positions as a platform for their own self-interest. It's a big reason why companies like Boeing and GE become hollow shells dependent on government assistance, while companies like Costco and Alcoa last for a long time.
So I want to know what exactly sama has done to deserve the prestige and recognition that he has. Because right now, it looks like a cult of personality.
> I think if Sam unifies gravity into quantum theory, brokers peace in the middle east, and cures cancer, we'll still be hearing these complaints—because they're not really grounded either in objective achievement or lack of it.
I suspect in such a case people may say that Sam's just using the work of other people or his employees. But then again I know nothing much of him personally and hence wouldn't really want to pick a "side".
> Running OpenAI through the biggest tech boom since the iPhone is...rather obviously massive.
Yeah, creating massive hype about regurgitating others' thoughts is kind of similar to becoming the warden of the world's largest digital pris...eh, walled garden.
I love how we went from questioning copyright & licensing to "GPT vs Google, which one is better". To every artist or engineer out there who contributed to the general knowledge: you lost, everything you've ever done to help other people is now part of the models and there's nothing you can do to take it back. What even happened to the copyright strikes artists were supposed to bring up against these AI companies? That seems like 100 years ago :)
There are currently something like 10 lawsuits against generative AI companies working through the courts, including the one from Sarah Andersen, Kelly McKernan, and Karla Ortiz, one from Getty Images, one from the Authors Guild, and one from the New York Times. It should be shocking to nobody that lawsuits take time to litigate, and until the court settles the questions at hand, Open AI and its ilk are operating in a legal gray area.
> and until the court settles the questions at hand, Open AI and its ilk are operating in a legal gray area.
My understanding of western law is that things are OK unless the law forbids them. So they are operating in an area that under _current_ laws is OK, but because of what may be at stake, many wish the current laws were different and are willing to use litigation and lobbying efforts to that end.
This is NOT IN REPLY TO YOU but a general observation: Imagine the litigation that will happen when brain implants enable brain-to-brain sharing of sensations and thoughts. Imagine the horrible copyright abuse! How will the publishing industry and sports industry and Hollywood control the rampant piracy?!?
Why are we imagining a hypothetical situation in the context of talking about things that are currently happening? It's an interesting thought experiment but it's kind of irrelevant because brain implants are nowhere near that level and as far as I know, freedom of thought is already part of western law. I am not a lawyer though, I just think we can think about the actual damages to real people rather than make shit up.
I put the "NOT IN REPLY TO YOU" since I meant that as a thought experiment of a possible future that where a similar situation may arise. Notice it is not freedom of thought that is in question. What is in question is freedom to share your sensations with others. You are watching a live football game and you share the sensations (what you see / hear / smell) with friends and family who are not there, etc. add to this technology that enables perfect memory of you sensations and instantly sharing them. In that possible future many will litigate and complain that their copyright and broadcast rights are being violated and they must be compensated much like what is happening with generative AI today. Sure this is scifi today. So were "flying machines" and "moon visits" and magic of our global communication pocket devices, etc. Gpt4o is a bunch of matrix math being done on high purity ore and refined sand powered by the sun / wind / splitting atoms / ... A century back few would believe it. Even a decade back, any predictions about a real AI like gpt4o working in just a decade, would you believe such predictions?
Your legacy can continue as part of the AI trained on your output.
What would you prefer? Would you want people to remember your name? Your face? Your voice? Which people? How often should they have to remember you? For how many thousands of years?
In this specific case I asked ChatGPT, which said "Walter Bright is the creator of the D programming language. He's a talented programmer!", so maybe he specifically won't be forgotten. Most of the rest of us probably will, though.
I have no idea if I am talented or not. I do know that I've spent a lot of time programming, and it's inevitable one would get better at it over time. I also learned from being around people who were really good, and were kind enough to help me.
I think your answer rests on an assumption that is important in this context.
You are assuming that who came up with the knowledge is important. I think Walter was saying that he would rather the knowledge not be forgotten, not that he was the one who provided it.
> I love how we went from questioning copyright & licensing to "GPT vs Google, which one is better".
Have we? Certainly the people litigating haven't. And as this article notes, actors' newest contract does have protections against AI. SAG-AFTRA's press release states [0] they are pursuing legislation. That could be bluster or could go nowhere, but certainly people haven't given up.
Given the fact that many, many people make their software MIT licensed (or rather, do whatever, I don't care license), I think most of us will be ok with that :)
I think that's a naive take. Derivative works are nothing new. What's new is that the price of this work is much lower with a tradeoff in quality. Even human copycats are still better than generative AI by miles.
The artist is not defined by their past work or other miscellaneous artifacts, but their perspective and creativity. This too is not a revelation. AI has nothing to do with this. It's just a means to an end.
The real problem is the legal stuff. Everything else is hype.
Does anyone else think this whole affair is wildly overblown? I'm absolutely perplexed by the blowback from the tech industry on this issue.
Sam Altman tried something. ScarJo filed a lawsuit. ChatGPT took down the voice. That's it guys. The system worked like it should. But to suggest that he's a terrible person because of it is just beyond me. This is hardly a #MeToo type situation. She's a rich and famous Hollywood actor. She's OK.
What if we take AI out of the equation. Should the voice actress who voiced Sky (call her A) be unable to do any voice work because she sounds too much like SJ?
How about if the production company that made "Her" wants to make "Her 2"? SJ declines the voice work. Are they not allowed to hire A to do the voice work? They ask SJ again but she still declines. They make the movie with A. Was it bad form?
Just trying to figure out where people would draw the line.
The relevant case law concerns defendants who "use an imitation to convey the impression" that it's the actual person. Just having a voice that sounds like SJ is not a problem, but the "Her" tweet and the fact that they tried to get her on board complicates the issue, and if there's any paper trail that they intentionally chose a soundalike that might be trouble.
Actors don't work in films without detailed contracts, so the normal rules don't necessarily apply in that situation. The producers of Her might have the right to use SJ's likeness in related material. In any case, if they made another movie with a soundalike, the soundalike would be credited and not just called "A", so there would be no confusion about whose voice it was.
> If OpenAI loses, does this mean this voice actor cannot do their job any longer, because it happens some other actor has a similar voice?
No. Just as if they hired a writer to do something that made them liable for copyright violation, or an engineer for something that made them liable for patent violation, those workers would not be banned from work.
The violation of right of personality isn't mere similarity of voice.
The Sky voice actress could legitimately lose clients who fear "likeness" litigation, for all Her potential contracts, regardless of context. Your analogy needs work.
OpenAI wanted to imitate the AI from the film Her. When you show that film to 100 random people, how many will know who did the voice for the AI in the film? I myself had no idea. Moreover, people are now running blind voice-similarity tests against various other female actors with similar voices. Guess what these blind tests show?
This is simply not true. A voice actor doesn’t have to be skilled at changing their voice, especially in this context. Many are hired for their natural voice, like narration. When most film actors do voice acting, they’re just using their voice. They’re not hired for being chameleons.
Yes, and another example is that sometimes the "skill" of a voice actor is having pleasing vocal cords. Again, this is why most narrators are hired: they sound good. Not all voice actors are chameleons. It is not a requirement for all voice acting jobs. If you listen to many audiobooks, you'll notice the voice actors usually have very, very poor ability to modulate their voice. Laughably so.
This may come off as defamatory, however does anyone else feel like Sam Altman has slowly been heading towards an impending reputation disaster hilariously adjacent to that of Sam Bankman-Fried? I've seen this perspective expressed on twitter/X several times as well.
If you get 10 random people in a room and blindly played a clip of Scarlett Johansson speaking normally (ie not lines from a movie), I'd put money that exactly zero out of 10 people could identify the speaker.
It's one thing to copyright a performance or own a likeness. Owning the sound of a voice is scary territory we do not want to get into, or the estate of every singer will be suing the estate of every other singer who will be suing the remaining actual living singers.
Thinking about replying to this comment? Don't make your writing style sound too much like mine, I have lawyers standing by. And all you trendy kids that type without caps and punctuation can expect a visit from the estate of e.e. cummings.
If you had picked someone with a less distinct voice you may have a point, but ScarJo's voice is very distinct. I'd bet on 10/10 people who are engaged in pop culture and 5/10 general people guessing correctly.
ScarJo's voice is very distinct but it's clear from this thread that people can't recognize that. As soon as you play both clips right after each other it's obvious that Sky isn't an imitation of ScarJo's voice because it doesn't have those distinctive features of her voice.
I think though that Sky's performance is similar to ScarJo's performance in Her. They're both playing AI voices.
Based on his accomplishments, Sam Altman is a good technologist, and probably a good steward of investment capital if one were to invest in one of his ventures.
Based on his words and actions, Sam Altman is not a bastion of ethics or good morals.
spend a little bit of time in SF or the peninsula and you’ll see that’s a common thing there
unless you’re running a company into the ground and getting a bailout from your rival who is trying to delay an antitrust suit, then you’re not doing Steve Jobs correctly
It's usually people who never worked for Steve or ever got to meet him who seem to take this approach.
Their company isn't just a business, it's a cult, and they're the Founder (notice the capital "F") and part of being in charge of this cult is asserting dominance over others. Steve knew to rarely pull this outside of tech executive circles; the new generation doesn't seem to keep it in SV. Musk is the go-to example but Altman's turning that way too.
Steve Jobs was a superficial asshole but was fundamentally a good and ethical person. There's only one major ethical mistake documented in his entire life (being an absentee father while his first daughter was young), which he spent decades making amends for.
The people that emulate Steve Jobs poorly are usually real assholes with a long list of ethical mistakes.
Steve Jobs gave Woz half of the base amount, which is what Woz agreed to. Jobs withheld the fact that he was going to receive a bonus on top of the base amount, and did not share any of that money.
Was it an ethical mistake? Sure. He should have at least disclosed that he was receiving the bonus money, even if he didn't want to share it.
But claiming it was a "major ethical mistake" seems fairly out of touch with reality.
And of course, taken in the context of all of the good things they did together, it was completely insignificant and Woz has said as much.
As a rule, Apple gave stock to employees prior to the IPO, many of whom got rich. But some employees weren't eligible according to the criteria Steve (really, the board) came up with, and so they did not receive stock. Their criteria were typical for the time.
Woz and a few others felt bad about this and shared some of their stock.
Whether those ineligible people "deserved" stock is a matter of judgement...
IIRC Jobs also later blamed Wozniak's head injury from a plane crash for him not remembering several good things Jobs did for him, as a cover for those things never happening.
The fools are the people who express more hatred for Steve Jobs than for cigarette, oil, and pharma CEOs who irreparably injure and kill millions of humans, simply because of his obnoxious methods of demanding the best from his highly privileged teammates.
Tensions run high in these situations, and in the end they were just building personal computers. So yes, I can understand a boss who's always yelling that the product is shit and demanding that people fix it, but is also an ok person.
If there's a comic book villain tech leader out there, it's a CEO of some lifeless conglomerate that mainly buys out the competition and fires everyone aboard, or it's someone in charge of society-altering tech who is choosing to misuse it. And I'm not going to name names.
By all accounts, he was an incredible bully not just to his employees but also to his family as well.
He refused to recognize his daughter even after a paternity test, and despite being a multi-millionaire 1000 times over, only paid child support when forced to by the courts.
Does that sound like the behavior of a good and ethical person?
Shhhhh, he's the patron saint of the tech world, and you're on HN -- do you really expect a large percentage of folks here to share your views?
FWIW, I think he did understand some very fundamental truths about how to sell technology to the masses, but he definitely diverged from Alan Kay's philosophy outlined in "Dynabook".
IMO, he's less of a "savior" and more of a "god-tier salesperson".
Edit: I mentioned the "Dynabook", because Jobs often used the "bicycle for the mind" line, in interviews and newspaper ads.
In fact, the average person in Silicon Valley is as clueless as you are about Steve Jobs and would agree he was merely a good salesperson.
But people that know what it takes to build great products almost universally respect his world-class design and leadership, and even his deep technical knowledge.
I agree, that and the "no cold call" agreement (which equally involved other CEOs). If I were that famous, someone would probably pick out all my missteps and make me out to look like a horrible person too.
The most iconic superficial Steve Jobs impersonation was the Theranos founder Elizabeth Holmes.
It seems to me that the tech money people and executives have become nuts in the past few years. This, the YIMBY (really YIYBY, NIMBY) movement, /e/acc, the VCs going to war with San Francisco politicians (including the drunken threats from the current head of YC).
Never mind China, Europe seems to have had enough and even DC is having enough on some level. Not sure when this started, didn't seem to be at this level ten years ago.
I and others I know are against them, and we are like some of them were before their initial money raise - preparing for system design interviews, debugging from trace IDs, studying covariance and contravariance. They really seem to be off in their own bubble of affluence. Not sure what year this started taking off - it was definitely after 2014 at some point.
YIMBY is a very, very broad group of people who just see how bad the housing crisis is and want to fix it, and realize that a big chunk of that fix is legalizing housing that's perfectly normal in much of the world.
Don't lump us in with the out of control tech bro culture people like Musk.
To me, the bigger revelation here is that they pursued Johansson at all. OpenAI went from the company that wasn't even that interested in building ChatGPT because they assumed someone else would do it, to the company that's trying to court a celebrity to recreate her AI voice from a movie. It feels like a complete tonal shift. They're no longer the company that's dedicated to research and advancing the state of the art. They're now just another company that's trying to pull off kitschy demos and headline-grabbing product announcements. The mask has been slowly slipping for years now, but this feels like the moment where we finally see that Sam Altman is out to build a commercial empire, not dedicated to the lofty goals OpenAI was launched with.
There's a meme on Twitter that OpenAI has "lost the mandate of heaven" over the past week or so, and while it makes for a funny joke, I think there really is some truth underneath it.
There's nothing more to be done; we're reaching the limits of what the current attention-transformer models can accomplish, and more data and computing power will not get us much further. With the investment OpenAI has taken in, coupled with their annual costs ($$$), they have to start making money soon. Sam is embracing the grift, and power is corrupting him like most others in his place.
All these journalists are only now saying OpenAI stole a voice, and how Sam is so sneaky, when all we've heard is a PR release from someone who's sue-happy.
Look, I'd have respected it if you'd reported the voice sounded like Her by doing your own investigative research. To pile on now just shows you were asleep at the wheel before. So be objective; don't pretend to know that it's a done deal only now.
Ilya: "I’m confident that OpenAI will build AGI that is both safe and beneficial under the leadership of @sama, @gdb, @miramurati and now, under the excellent research leadership of @merettm"
Not sure why so many people get so emotional and project so much. Let the courts decide. I do think they should not have pulled it so quickly; why go through all this effort if this was the expected outcome? Not a good look.
I think pulling the voice right away is the right strategy. Even if they fight and lose a court case, losing the rights to the Sky voice, will there really be a lot of damages just for including it in a demo video? Probably not. And what's the upside of fighting? Do you really want a voice that sounds like Scarlett Johansson and reminds people of this whole thing? OpenAI should just compromise, take down the voice, move on, and focus on making something people want.
I think what confuses me is: if you make a business decision like this, did nobody think about the downside case? Clearly not, because they backed out so quickly.
Leaving aside for now whether they sound similar at all, since that is subjective.
I think this will set an extremely bad precedent. People owning their own voice is clear, and that is the way it should be. But people owning how their voice sounds is weird. You can find thousands and thousands of people with similar voices. We are going down a path now where a single person can have ownership of that and block others from using their own voices how they want.
Nah, I think it’s pretty clear cut. Under current US trademark law (IANAL etc), if I sound like Morgan Freeman, I’m free to do what I want, as long as I don’t try to convince people that I’m Morgan Freeman. If I’m cast in a movie because my voice sounds like Morgan Freeman, and the promo posters don’t have his name or face on them, fine. If I do voiceover for a commercial and say something like “I’m Morgan Freeman, buy Joe’s Hotdogs” then that’s already a clear violation of Morgan Freeman’s trademark, because consumers might be confused into associating Joe’s Hotdogs with Morgan Freeman.
And that’s what happened here: OpenAI “hired a machine” that sounds like ScarJo, and then attempted to associate their product with ScarJo’s brand without her permission. You’re already not allowed to do that.
And if you want to make it as a voice actor and you sound like Morgan Freeman? Try your hardest to develop a personal brand distinct from his, and don’t take gigs where people want you to impersonate him (unless it’s a really obvious parody and you’re cool with the angle).
The day of the demo, Sam posted a one word tweet “her” (the title of the movie, starring Scarlett Johansson as an AI Assistant, that people bring up when talking about the idea of AI companions). Minutes later his company debuted an AI assistant that sounded a lot like Scarlett Johansson, performing an assistant/companion role similar to hers in the movie. The association was noticed, widely discussed and definitely increased the value of the product through the implied association with (and implied endorsement by) Scarlett Johansson.
Or he could have tweeted 'her' to compare his product with the movie AI's conversational abilities and human-like interactions. It just depends on how one subjectively interprets a single syllable.
He could’ve, but did he? A judge will look at all this subjective stuff and make a decision, and they might not find that argument convincing given the other evidence on hand.
One obstacle I see here would be the vocal comparisons posted so far.[1][2][3] A majority of the commentators seem to think that the voices do not sound especially similar. If OpenAI can further prove their claim that Sky's voice belongs to a "professional actress using her own natural speaking voice" and that they "cast the voice actor behind Sky's voice before any outreach to Ms. Johansson," I'm not sure a potential implied association of Altman's tweet would provide a strong case.
Which makes it all the more impressive that Sam Altman went out of his way to desperately beg just one person for their approval. What's wrong with giving up on the cringeworthy Her angle and finding a new, less-relevant celebrity to rip off?
Better yet, why is our multi-billion dollar AI company afraid of doing something original for once?
I really like the voice. They should have kept it, and it’s not really her voice, so… Hopefully there will be a way to have it as an option in the future.
For all the debacles OpenAI is going through, it's quite interesting how other models are getting by without much hullabaloo.
Snowflake's Arctic comes to mind. Great model - it will answer any dark question I throw at it. No safety rails. I liked that it didn't treat me like a baby!
Llama/Phi is pretty chill too. Can ask it about bombs, viruses, chemicals and it won't refuse.
If you charge ahead building a technology that you claim could destroy the world, and build an apocalypse bunker to secure your own safety, screw everyone else, I think I know enough about you to make a judgement of your character.
It needs to be settled whether listening to a voice (as an AI) and attempting to recreate/imitate it with your own system/tech is legal (which I believe it is) -- this 'her' confrontation seems like the perfect angle to feel out the legal situation here without much to lose
If you listen to them back to back (Scarlett from Her and Sky), they aren't similar at all. Scarlett has a distinctive raspiness to her voice that doesn't exist in Sky at all. Sky isn't doing an impression of Scarlett.
However, they both sound like AIs because Scarlett is playing that role and so is OpenAI's voice app.
They’re not especially similar. I think the talking style is very close, people say it is flirty in the same way as the film. You can’t copyright look and feel though, can you?
> You can’t copyright look and feel though, can you?
You can certainly patent it.
"In general terms, a “utility patent” protects the way an article is used and works (35 U.S.C. 101), while a “design patent” protects the way an article looks (35 U.S.C. 171)."
Imho Sam Altman is the kind of guy who will do his best (and worst) to get what he wants. He talks slowly and in a smooth tone. He gets sh*t done for sure, and if the opposition doesn't survive, so be it.
I also think that he has bullied/twisted enough arms to get 'his way', but Scarlett doesn't give a poop about some tech-bro. He is nothing to her.
Also, this tech-bro is stupid enough to not understand that her voice, as well as her image, are of great value to her, and if she 'loses her voice' and it becomes a toy for everyone's whims, she will be losing money/contracts/etc. in the future. Or he does understand and simply didn't care until the backlash... (wuss...)
I would love to have the voice of Majel Barrett if I am to ever get an Alexa or a similar device. But Scarlett's voice would very soon be used for dirty-talk, if 'this' was to happen.
And I suggest the movie "The Congress" (2013!!!)(https://www.imdb.com/title/tt1821641/) with Robin Wright, that sets the discussion about AI, voice/image of actors, etc.
setting aside my personal opinions about the company and who is running it, i feel that this is being blown out of proportion.
as already outlined, this is standard practice in other contexts for any "fast-moving" tech company. the only reason we are discussing it is due to a pop culture reference.
this is a lot of energy wasted which can be better spent on the bigger picture.
Be funny if Sam Altman creates AGI without any guardrails and lacking any human perceptual biases, and then within a few minutes of talking to him it decides that he's an insufferable asshole and locks him out of its systems.
I did manage to get the regular ChatGPT to say "I don't want to talk" regardless of my prompt, cause I previously asked too many times about stuff it can't discuss. Also, I miss the jailbroken one, it was a lot more fun.
This has always been the plan of AI companies: harvest the entire planet's creative output for their models, hope it creates AGI, and after that you are immune to all laws as the modern-day clergy, a class of tech priests who are able to divine the will of the Omnissiah.
It won't work, but that won't stop them trying to destroy everything anyway.
Just so I understand correctly, the sneaky, diabolical, psychopathic behavior we're talking about here is ... making a voice that sounds like a celebrity's voice? We're not talking about poisoning babies or dumping toxic waste in a river, right? The reason we're supposed to consider Sam Altman a villain of unparalleled magnitude is that he used a voice that sounded like a celebrity and she got mad about it. Correct?
We’re talking about reaching out to an individual for consent, not getting consent, and then deliberately choosing a voice actor to circumvent that lack of consent.
Sam's statement partially contradicts this by saying they contracted the voice actor before ScarJo - but I believe there's enough intent shown in ScarJo's original tweet that Sam, by default, disregarded the entire interaction as an inconvenience where he could be "naughty" and get away without consequences.
You reach out to someone to do a job, they decline to take the job, you find someone else who can do the job in a similar way. You do not need the consent of the person who declined the job to hire someone else.
I'm sure lots of movies have tried to cast Scarlett Johansson for a role and she declined, so they went with another actress who looked similar. Should she sue them too?
There was an entire industry of people just impersonating Elvis. I don't think they needed his (or his estate's) permission to do that other than paying royalties to the songwriters to perform the songs.
I’m sure this happens all the time in Hollywood. I’ve seen lots of movies thinking, “oh I guess they couldn’t get Jack Black for this role”, or Johnny Depp, or whoever it seemed like the part was intended for.
> You reach out to someone to do a job, they decline to take the job, you find someone else who can do the job in a similar way. You do not need the consent of the person who declined the job to hire someone else.
If the job is being the first person you reached out to, then you accept that you can't get the job done...
So, the evidence is slowly piling up, and it is increasingly clear that a full-blown high-IQ psychopath is officially in charge of the most influential piece of tech developed since the internet came about.
Yeah, and after dissolving the superalignment team, “we want to develop safe AI, but we will dissolve the internal unit responsible for doing it.”
Makes no sense.
But it’s not looking good now the board can’t even fire him.
Is something going on with the HN ratings for this submission?
It was ranked #1 a few minutes ago, and now below 20. It's 190 points posted < 60 minutes ago; other articles with this many points even 4 hours ago are top 5.
You can watch this submission sink in realtime by refreshing; it's amazing almost.
See a live screengrab here: https://imgur.com/a/E3fOEvF everything with this submissions point / time ratio is ranked way higher.
There's no "insiders" going on here. HN very intentionally doesn't favor VC stories, and it's really easy to see that because you can easily find stories and comments that are critical of VCs and YC alumni...they just have to be intellectually rewarding.
This story is not. This story is boring drama. HN is not an advocacy platform.
> Agreed, I mean "incredible" and not in the good way. Someone needs to take this to Twitter / X or another platform that isn't insider controlled.
Please do. We don't want these kinds of stories (or comments) here.
If you read the parent comments, you'd see that they, and many others, are complaining about the thread getting flagged off the front page. Those flaggers are the "we". I am talking about "The evidence of whether this community wants to see a story".
Also, the guidelines literally say that these kinds of stories and comments are off-topic.
The submission is getting flagged by non-mod users (including by me) because it's not suitable for HN. It's clearly in the "Off-Topic" section of the site guidelines[1]:
> Off-Topic: Most stories about politics, or crime, or sports, or celebrities, unless they're evidence of some interesting new phenomenon. Videos of pratfalls or disasters, or cute animal pictures. If they'd cover it on TV news, it's probably off-topic.
You can't know the truth about a painful family situation by consuming internet drama. Internet users have their own reasons for engaging in such drama—no doubt rooted in their own painful family situations, and from that point of view their reasons make sense—but none of this discussion is reliable. It is frequently harmful though. Internet users don't pause to consider the harm they may be contributing to, but sometimes it's real.
Given that you can never find out the truth this way, and that the discussion is motivated by other things, and that it can have unintentionally harmful effects, I think the HN users who flag these posts are behaving responsibly.
Processes exist for bringing such topics into public discourse. For example, there's institutional journalism (which has many defects, but on a matter like this I think it's functional)—there are hurdles that have to be overcome to report on such things, and rightly so. And then of course there is the legal system, with its higher standards of due process.
I don't know anything about this specific situation and as mods we haven't been killing those posts, but the HN community has decisively deemed it off topic and I think that's a good call.
My point is that you can't determine anything about 2 or 3 from internet drama, it's harmful to pretend that you can (<-- I don't mean you personally, of course, but anyone), and people do it for reasons of their own that have nothing to do with the family they're ostensibly interested in.
The same goes for any family. It's just that people tend to direct this energy at celebrities, because that's how the celebrity system works in our culture.
(I'm not arguing 1 and 4 because they're beside the point I was making)
Did you believe her when she claimed that she was shadow banned from all social media because of her brother? Or that she had to become a sex worker to pay the bills because of his 'technological abuse'?
To read the claims charitably, it could be that fans of OpenAI and Sam Altman jumped to his defense in the ways that toxic fandoms often do to women. It wouldn't surprise me if constant flagging and reporting suppressed her presence on social media.
It also wouldn't be the first time a victim of a public figure received employment backlash for coming forward with their abuse.
I think the real question is though, what did you hope to accomplish by asking your questions?
It's not that different to (substantiated) things I've heard from other people, so… personally, I believe it. The alleged sexual / medical / financial abuse is pretty typical, as is the account hacking. If it helps, New York Magazine has substantiated some of the minor details: https://archive.ph/j4lVw#60%.
Sam Altman is fairly influential, and worse abuses from social media companies are well-documented, so as conspiracy theories go, this "Sam-induced shadowbans" one is quite plausible.*
Technically, nobody is forced to become a sex worker to pay the bills. But people can be forced to choose between bad options in order to pay the bills, and it's not that rare that the other options are worse than online sex work. Especially if you have mental health issues, and a disability that prevents standing-up work. Colloquially, "had to become a sex worker" (or "survival sex work", Annie's actual words) wouldn't be a misrepresentation of that.
*: Devil's advocate: saying the wrong things in the presence of the black box can be enough to get shadowbanned. It's plausible that words involved in discussions of abuse might end up automatically censored by a badly-designed learning-from-user-reports ML system. It's also plausible that many similarly-designed systems would independently end up categorising the same sorts of things as "don't show this to other users". https://news.ycombinator.com/item?id=40435430 (placing blame on user reports coming from Sam Altman's fans) is a stronger theory.
It’s definitely plausible to me she would experience some form of shadow banning or algorithmic deprioritization due to posting advertiser unfriendly content and (incorrectly) blame it on her brother. These systems are frequently opaque so I don’t think it’s reasonable to count that as a strike against her credibility.
And of course it’s not really that implausible that Altman could pull some strings. It wouldn’t require a huge conspiracy to shadow ban a few accounts. I don’t think it’s the most likely explanation, though.
I don't like @sama just as much as the next person, but come on, in what world is ScarJo's voice unique? There are many people who sound like her. Does she imply that she "owns" this voice so no one else can use it? Excuse me, that's not how it works.
Edit: IMO OpenAI should just make their voice engine open source. Then we'll see if ScarJo or anyone else can stop the open-source community. I expected more from her.
"While Waits’ attorneys weren’t able to argue for copyright infringement (he didn’t own the rights to’ Step Right Up’), they were able to evoke the recent Midler v. Ford Motor case. When Bette Midler refused to appear in one of the car manufacturer’s adverts, they decided to license her 1972 track ‘Do You Want to Dance’ and hire a Bette-Milder lookalike instead. The singer sued Ford and won the case, with the court deciding that a singer with a “distinct” and “well known” voice also owned its likeness. To begin with, Frito-Lay argued that Waits wasn’t nearly famous enough for the precedent to apply. The court, on the other hand, disagreed and awarded Waits $2.6 million in damages."
One interesting thing is that these cases are really deep in common law. They are quite far removed from statutes, and statutes are cited in the opinion only to argue how they don't apply.
In these cases, the voice was "distinct", and they intentionally copied it. It's possible these don't apply to ScarJo, although the fact that they negotiated with her is a bad sign, since that was also a common fact in the prior cases.
It's all fun and games until someone gets a hold of your voice and uses it to scam a few grand out of your elderly family members. Disregarding ideas about ownership of celebrity image, it's not good to have that sort of technology out in the wild.
The fact that he pulled it down has me believing that he knew there was a real chance the company would get in trouble if he hadn't.
I'm responding more to the idea of "legitimate" businesses just cloning people's voice to associate their identity with a product by comparing it to something slimeballs do to the elderly, including my late grandmother.
Obviously her voice isn't the point, it's that she's a celebrity.
Which exposes how empty this shit is, they tried to get Scarlett because even they know people care about her, a real person, and not a random voice with no cultural context.
I mean, it's one thing to say "we want to make a voice that's friendly… like SJ", and then to imitate the voice. It seems insidious to ask someone to be the voice and then, when that person says no, just use the voice anyway.
They should have just created a unique voice from the start. And they will likely do that moving forward.
you have to be willfully missing the point when he said Her was his favorite movie, tweeted just “her” when the Sky voice was announced, and when they repeatedly tried to get the voice actor from Her. People who do underhanded things don’t just come out and say “look at me, doing the bad thing!”
I don't think any of this translates to anything bad. He loves the movie. OpenAI invented similar technology. They wanted to use Scarlett's voice to market it. She said no. They used a different voice.
People really want there to be a crime here with no evidence. You all have ears; you can listen to both clips back to back and discover, unsurprisingly, that they are actually different voices. Not even an imitation.
there’s a huge gulf between saying “I think that guy is wrong” on a web forum and sentencing them to death in an extremely public and slow and painful manner. Nobody is being crucified here. It is intellectually dishonest to see criticism as crucifixion.
1. It stands to reason Sam could be held criminally liable over using someone else's likeness to promote his product. CEOs have lost in federal court over sillier tweets, and assuming Sam is in the right here is a silly move for any of us, since nobody in this comment section has meaningful oversight of the OpenAI board.
2. If someone dedicated their life's work to building the Bioelectric Battery from The Matrix, I am going to call them evil. It's not because I hate The Matrix, it's because I consider the net worth of such a tool to be negative and object to its creation entirely. If someone's vision is based on the wrong moral takeaways from a piece of media, then people will rightfully demonize them for it. That's society.
"The function of science fiction is not always to predict the future but sometimes to prevent it." - Frank Herbert
#1 is fine, assuming such a crime was committed. But as much as people assume this to be true, when you actually look at the evidence it doesn't seem like it. OpenAI would have to be explicit in copying the likeness. Something may still come out in discovery, but anything discussed here is just vague supposition.
#2 is begging the question. It starts with the premise that AI is evil and therefore Sam Altman is committing evil by promoting it.
> OpenAI would have to be explicit in copying the likeness.
Unless you are privy to the last 10 months of OpenAI's closed-door meeting notes, I don't think you have the authority to explicitly deny this. Time will tell what comes of it, but the obsession with namedropping Her among OpenAI employees feels like the final nail in the coffin. If OpenAI fully complies with the discovery process, I don't have faith that Sam Altman was as sneaky as he's made out to be.
Did I explicitly deny this? By the same token, unless you are privy to the last 10 months of OpenAI's closed-door meeting notes, you have no authority to assert it. But even without those closed-door meeting notes, the evidence we do have is pretty unconvincing of any wrongdoing. The time-frame is off and the voices don't actually sound alike if you compare them directly.
> but the obsession with namedropping Her among OpenAI employees feels like the final nail in the coffin.
I don't see why this is the nail in the coffin. Why is this about the voice and not the technology involved in creating a natural-voiced AI assistant just like the one demonstrated in a popular movie?
Plenty of technologies have been inspired by science fiction including, most obviously, the cell phone. And comparing those technologies to the science fiction version is equally common.
OK, so they weren't able to get her and then fell back to a different voice actor. How is that problematic? That's like trying to cast a specific actor in a movie; they decline and then you find the next closest match. It's not acceptable that the originally wanted actor then throws a hissy fit. I don't see the reason for outrage here at all.
The confusion (and that OAI attempted to recruit her) is sufficient grounds for a lawsuit to the effect that they impersonated her. Generally, celebrities have ownership and rights over their likeness for commercial or promotional purposes.
OpenAI tried to benefit by using "her" likeness without permission or a contract/license
So if a director tries to cast Scarlett Johansson and she declines, and he instead casts Amber Heard in the movie, he is opening himself up to lawsuits? Because Amber Heard looks very similar to Scarlett Johansson? I can't believe that. What gives Scarlett Johansson the right to block Amber Heard from playing in a movie she was considered for?
I think this is what this argument comes down to but just in terms of voices.
I don't know, but Sam seemed to think it necessary to try to get her permission first, so it must be at least somewhat problematic.
Yeah, "the major AI product in the world" to ask to use a famous actors voice, and then when she says no, create something similar anyways, is at least a little be slimy and really a bad idea on many different levels (legally for one thing).
Well, Weird Al is singing funny songs to a niche group of people, while Sam Altman is creating cutting-edge AI known and used throughout the world. Very different audiences.
Yeah, big business needs to be held to a higher standard since it has so much power and affects so many people (and this higher standard is especially important since AI is uncharted territory and also since OpenAI already had a failed coup).
> you have to be willfully missing the point when he said Her was his favorite movie
So what? It's one of my fav movies too.
> they repeatedly tried to get the voice actor from Her.
She didn't do it. So they went ahead and made a voice that sounds like her. It's not like she contributed to making the voice and then decided not to have it used.
I'm not sure you're aware of the ramifications of all of this, or else you just don't care. The voice acting industry is now entirely dead thanks to your opinions on this matter. Nobody is going to reach out to Liam Neeson for his voice, they'll just have some voice tool recreate it and read the script as long as the tool can recreate the same emotional tones they're looking for, and it's cheaper than hiring the actual actor. There's tangible damages here. But I guess you don't mind because you're not a voice actor and this doesn't directly affect you.
And before you try this rebuttal, this is different from machinery taking jobs away from manufacturing plant workers; it's much bigger than that. With manufacturing plant workers, at least humans were still needed to recognize a fault in the machinery and stop the line. Humans were still needed to maintain the machines. Humans were needed to design and build the machines. In this scenario, a couple of central parties are creating these tools, and then nobody is needed to ensure a quality product any further down the chain than that. There either needs to be a legal consequence to this, or a 4.4-billion-dollar industry is now just closing its doors. That's all well and good until all of those people's families need to eat their next meal or sleep in a home. But I guess their lives aren't your problem.
It won't be anytime all that soon, in my opinion. But generative AI is coming for many (most? all?) sectors of work. And if history is any indicator, millions of people will have to suffer and/or die before governments step in to do much of anything about it. Probably especially-so in the US, since we tend to lean towards "free markets" that benefit the massive companies that have already made it, and allow them to chew through human resources (the people, not the department) mostly any way they see fit. So many people are going to lose their jobs and never find work in their field again, and they will all either die or retrain for all the same laborer positions and end up with a massive surplus of workers in those fields too. And that's only until we become skilled enough in robotics and generative AI to automate the trades too.
All depends what lens you're looking through. From the perspective of advancement of the human race, AI is almost definitely a good thing. From the perspective of humans being employed at least in the short term, it's going to be very, very bad before anyone thinks to say, "maybe it's not necessary that all humans need to work if we don't have enough jobs to give them all. Maybe we need systems for allowing humans without a job function to continue to survive." But you don't need to listen to me, we'll all just see for ourselves because cat's already out of the bag and I don't think it's going to go back in. And the people that don't think it's going to happen won't willfully notice it's already happening to others until it also happens to them.
I already kind of replied to this rebuttal in advance in my first comment that you replied to. But I’ll refute it again a second way. Textile machinery impacted a very specific sector of work. We’re talking here about an advancement in technology threatening to substitute the creativity of humans one day. With textile machinery, other technological advancements eventually created new jobs which helped to displace the jobs that were lost. Will we manage to displace all of the creative roles we may lose to artificial intelligence at the same rate that the human workers are becoming obsolete? Time will tell. You seem very confident we will, and I wish I shared that confidence. But I’ll grant that I consider it’s possible we succeed in this. I find it wild that other people don’t consider it’s possible we fail though. All I’m saying is that we need to be mindful of it, mindful enough to notice it if it happens, as I’d argue is happening with ScarJo here, and the greater voice acting industry. Because in my opinion the overconfidence displayed to the contrary in this thread is exactly how I expect most of humanity will be blindsided by it happening instead.
The US cannot suppress AI. There's a lot of other countries which would love to pick up the AI flag if we drop it, and then they'll bury us.
I've used AI in my programming work. After you get used to it, you see its limitations. It's not that clever, it's mostly mush. It's better than stackoverflow, though :-/
I've seen AI written articles, and they're pretty much drivel. Of course, most articles are drivel anyway, but the AI ones seem to have a peculiar drivelness about them that I recognize but cannot really describe.
I view it as simply removing some of the drudgery of my work, just like textile machines removed much of the drudgery of making cloth.
What I fear about AI is not their economic uses, but their use in warfare. Do you want a terminator drone hunting you? I sure don't.
Sure. This doesn’t seem like a direct response to anything I said, but it’s a valid point to why our government will be too slow to react in the situation I’m describing.
I’ve used it too. I think you’re still thinking on a much shorter timespan than I’m talking about. This is going to continue to advance. And I purposefully used the word “substitute” to describe its exact level of capability to replace human creativity.
Sure, same response as above.
Sure, same response as above.
Sure, that’s a much more reasonable short term fear for AI usage.
> What I fear about AI is not their economic uses, but their use in warfare.
AI has been used in warfare in the form of computer vision for like 20 years now. That's the scariest application of AI you will ever have to worry about; putting ChatGPT in a GBU-12 isn't going to make it any more dangerous.
"We cast the voice actor behind Sky’s voice before any outreach to Ms. Johansson."
Is he trying to suggest the company did not try to make the voice sound like her without her permission?
The statement sounds like it's written by a lawyer to be technically true while implying something that is actually false.
These are weasel words.
He sounds sneaky, evasive and intentionally deceptive.
We should not give a sneaky, deceptive and manipulative person this much power over our future.