This hijacking of the term really bugs me. As though The Terminator himself would have been "safe" if only he hadn't spoken curse words to his neighbour.
When car companies talk about safety, they mean the car is unlikely to kill its occupants, rather than that the stereo plays only unoffensive music to protect the brand.
AI safety is a thing apart from brand safety, and OpenAI would be well aware of this, just like GM is aware of what crash safety means.
> they mean the car is unlikely to kill its occupants, rather than that the stereo plays only unoffensive music to protect the brand
Right, so in that case the occupants are their customers, and they're hopefully protecting them from harm. They're not optimizing for, say, pedestrian safety[0].
In this case, OpenAI's customers are other companies, and it's protecting them from harm. The number one harm companies worry about with AI is "what if we deploy an AI tool and it generates nudity etc. that damages the bottom line?"
I'm not saying this is a good thing, but it seems to describe the situation as it is, doesn't it?