I see they're continuing the campaign to make people dislike something called "AI safety" by redefining it as corporate prudery.


This hijacking of the term really bugs me. As though The Terminator himself would have been "safe" if only he hadn't cursed at his neighbour.


I think they've just redefined "AI safety" as an analogue of "brand safety." To corporations, that's what safety means:

https://en.wikipedia.org/wiki/Brand_safety


When car companies talk about safety, they mean the car is unlikely to kill its occupants, rather than that the stereo plays only inoffensive music to protect the brand.

AI safety is a thing apart from brand safety, and OpenAI would be well aware of this, just like GM is aware of what crash safety means.


> they mean the car is unlikely to kill its occupants, rather than that the stereo plays only inoffensive music to protect the brand

Right, so in that case the occupants are their customers, and they're hopefully protecting them from harm. They're not optimizing for, say, pedestrian safety[0].

In this case, OpenAI's customers are other companies, and it's keeping them from harm; the number one AI-related harm those companies worry about is "what if we deploy an AI tool and it generates nudity or the like that damages the bottom line."

I'm not saying this is a good thing, but it seems to describe the situation as it is, doesn't it?

[0] https://driving.ca/auto-news/driver-info/blind-spots-on-pick...


It's the perfect next vehicle for activist-censors to parasitize, now that they have more or less secured control of "content moderation".



