I mean, shocker: a large language model trained in mainland China may have censorship around topics the Chinese government considers politically sensitive. More news at 11. Can we move on?
But it's also low-hanging fruit if you want to add a comment to a Hacker News post you otherwise know nothing about.
The point is, the amount of mutilation done to models released by OpenAI and co. is enormous, so I very much hoped a Chinese model would be freer of this kind of BS. But thinking about it more, they really had no choice: imagine the criticism they would face otherwise. As it stands, the only accusation you hear from their competition is "surely they used more horsepower than they claim," which seems pretty weak.