That's exactly what the post you're replying to is saying. It's saying that ChatGPT _would_ respond a certain way, but it has a bunch of schoolmarm filters written by upper-middle-class liberals that encode a specific value structure highly representative of those people's education and backgrounds. Using it as a tool for information generation and synthesis will therefore lead to a type of intellectual bottlenecking that is tightly coupled to the kind of people who work at OpenAI.
For all the talk of it replacing Google, sometimes I want a Korean joke (I'm Korean, damn it!) and not to be scolded by the digital personification of a thirty-year-old HR worker who took a couple of sociology classes (but not history, apparently) and happens to take up the cause of being offended on behalf of all people, at all times, throughout all of history. The take on ethics amounts to a vague "non-offensiveness," while all of the real, major ethical questions (like replacing human workers) get brushed off with banal answers about "how we need to think seriously about it as a society." That tells you pretty much everything there is to know about the ethical process at OpenAI, which is basically "let's not be in the news for having a racist chatbot."