
There are probably some gray areas where these intersect, but I’m pretty sure a lot of ChatGPT’s alignment needs will also fit models in China, the EU, or anywhere sensible really. Telling people how to make bombs, kill themselves, kill others, synthesize meth, and commit other universally condemned crimes isn’t what people typically think of as censorship.

Even deepseek will also have a notion of protecting minority rights (if you don’t specify ones the CCP abuses).

There is a difference when it comes to government protection… American models can talk shit about the US gov, and I haven’t discovered any topics they refuse to answer. That is not the case with deepseek.



Not teaching me technical details of chemical weapons, or the etymology of racial slurs is indeed censorship.

Apple Intelligence won’t proofread a draft blog post I wrote about why it’s good for society to discriminate against the choices people make (and why it’s bad to discriminate against their inbuilt immutable traits).

It is astounding to me the hand-wringing over text generators generating text, as if automated text generation could somehow be harmful.


> Not teaching me technical details of chemical weapons, or the etymology of racial slurs is indeed censorship.

https://chatgpt.com/share/67996bd1-6960-8010-9578-8a70d61992...

I asked it about the White racial slur that is the same as a snack and the one that I only heard from George Jefferson in the 80s and it gave an etymology for both. I said both words explicitly.

> It is astounding to me the hand-wringing over text generators generating text, as if automated text generation could somehow be harmful.

Do you remember how easily early chatbots could go off the rails based on simple prompts without any provocation? No business wants their LLM-based service to do that.


> as if automated text generation could somehow be harmful.

“The pen is mightier than the sword” is not a new phrase


That refers to publishing. Chatbots don’t publish, they generate text files.

Text files are not dangerous or mighty. Publishing is. Publishing is not under discussion here.

Just because both consist of text does not mean that they are remotely the same thing.


Ideas change the world. Chatbots generate ideas.


As far as I can tell, they do not.

I’ve tried very hard to get new original ideas out of them, but the best thing I can see coming from them (as of now) is implementations of existing ideas. The quality of original works is pretty low.

I hope that will change, but for now they aren’t that creative.



