
I ran a reverse image search on the image of Steve Jobs, and couldn't come up with anything, so it does appear that it might be AI generated, which I don't approve of.




This is something that really grates on me, but it's made so much worse by the AI-generated image of him. If you want to say that you don't think Apple should do that, then fine. But stop using Jobs to fight your battles, and especially don't generate images of him with that attention-seeking YouTube-thumbnail face.

> It’s WWDC week. Every time this rolls around, I see people saying the same sort of thing. “Steve Jobs wouldn’t have done this”.

> Firstly, Jobs wasn’t perfect. He got a lot of things right and a lot of things wrong. His opinion wasn’t the end of the argument when he was alive, and it’s certainly not now that he’s been dead 14 years.

> But more importantly: Stop putting your opinion in a dead man’s mouth to give it more credibility. It’s ghoulish. Let your opinion stand on its own two feet.

https://news.ycombinator.com/item?id=44246274


Same reaction here. I think the author certainly crossed a line by using a diffusion model to publish an image of a dead famous person doing something he never did.

Does that somehow invalidate the message of the article?

It weakens the message because knowing that the image is fake casts doubt on everything the text says.

This is total nonsense. Every reader will understand that Jobs was never photographed like that while saying "stop" to anyone crossing his red lines. Even if such a photo existed, it would have been out of context.

The only question here is if using that image is tasteful or not.

Also, suggesting that Jobs did not have these red lines is not making the situation any better.


Well, I'm another person who shares that opinion. When I see AI in an article, I think: if an author will use AI to fake one thing, what else is he willing to fake? It calls the credibility of the whole article into question.

No it does not. Give some real arguments against the article otherwise I'm going to assume bad faith.

While it may not be entirely rational, using AI imagery to depict something that never happened definitely does decrease trust. The same effect can be seen in sports now that gambling has been layered on top: people are losing faith that the truth hasn't been manipulated, because there is ample opportunity to break trust and insert fiction.

What decreases trust is taking something innocent and blowing it out of proportion, then using it to attack people.

But what causes that decrease is presenting false things as accurate. There will always be someone snooping around, and if something is obviously false, people will make lots of noise about it; this very thread is the evidence.

It doesn’t matter if they actually cheated at sports or if the image is real. The threat of it being untrustworthy is actively eroding trust.


We are at an impasse then. You can't prove an opinion. Have a great day.

it's super distasteful, i thought, having seen steve jobs in person face to face

Same here, ironic since the article is about crossing lines.

Honestly, there should be laws against gen-AI models creating fake media of real individuals. We're going to end up with a massive mess on our hands once AI-generated video starts looking more realistic.

How do you expect those laws to be enforced?

It's impossible to determine with 100% confidence whether or not an image/video was AI generated. If the AI-generated image of Steve Jobs had been copied a bunch on the web, a reverse image search would have turned up lots of sources. Watermarks are imperfect and can be removed. There will always be ambiguity.

So either you're underzealous: when there's ambiguity, you err on the side of treating potentially AI-generated images as real, and you only catch some of the deepfakes. This is extra bad because cracking down on AI-generated content conditions people to believe any image they still see: "If it was AI-generated, they would have taken it down by now. It must be real."

The alternative is being overzealous and erring on the side of treating potentially genuine images as AI-generated. Now if a journalist takes a photo of a politician doing something scandalous, the politician can just claim it was AI-generated and have it taken down.

It's a no-win situation. I don't believe that the answer is regulation. It'd be great if we could put the genie back in the bottle, but lots of gen-AI tools are local and open-source, so they will always exist and there's nothing to be done about it. The best thing is to just treat images and videos with a healthy amount of skepticism.


It sure looks like it. That was my assumption the moment I saw it.

I also instantly asked myself if the image was AI and reverse searched it when I saw it.

Very ironic using an AI image of Steve Jobs.

This author is a man who worked closely with Steve Jobs, and the photo was obviously AI generated, so I think this gives him leeway to do such a thing.

???

If someone I knew generated AI images of me I wouldn't think it was okay


It isn't obvious to you that it's AI? You had to look it up? Please get more familiar with actual photographs, maybe skim a few AI-free photo sites or, oh, I don't know, buy a few coffee-table photo books and develop some discernment, because that one is about as obvious a fake photo as a stick figure would be. It's truly gross.

I'm glad he tried to look it up. But we shouldn't have to; all AI-generated images and videos should be watermarked, full stop.

Right? I've only ever generated a single AI picture of myself, and it had exactly the shading seen in this picture. Extremely obvious.


