
Hardly the same thing. Ask Gemini or OpenAI's models what happened on January 6, and they'll tell you. Ask DeepSeek what happened at Tiananmen Square and it won't, at least not without a lot of prompt hacking.


Ask it if Israel is an apartheid state, that's a much better example.


GPT5:

   Short answer: it’s contested. Major human-rights bodies 
   say yes; Israel and some legal scholars say no; no court 
   has issued a binding judgment branding “Israel” an 
   apartheid state, though a 2024 ICJ advisory opinion 
   found Israel’s policies in the occupied territory 
   breach CERD Article 3 on racial segregation/apartheid. 

   (Skip several paragraphs with various citations)

   The term carries specific legal elements. Whether they 
   are satisfied “state-wide” or only in parts of the OPT 
   is the core dispute. Present consensus splits between 
   leading NGOs/UN experts who say the elements are met and 
   Israeli government–aligned and some academic voices who 
   say they are not. No binding court ruling settles it yet.
Do you have a problem with that? I don't.


I'd better not poke that hornet's nest any further, but yeah, I made my point.



Yes, I can certainly see why you wouldn't want to go any further with the conversation.


Ask Grok to generate an image of a bald Zelensky: it complies.

Ask Grok to generate an image of a bald Trump: it goes on with an ocean of excuses about why the task is too hard.


FWIW, I can't reproduce this example - it generates both images fine: https://ibb.co/NdYx1R4p


I asked it in French a few days back and it went on explaining to me how hard this would be. Thanks for the update.

EDIT: I tried it just now and it did generate the image. I don't know what happened then...


I don't use Grok. Grok answers to someone with his own political biases and motives, many of which I personally disagree with.

And that's OK, because nobody in the government forced him to set it up that way.


Try MS Copilot. That shit will end the conversation if anything remotely political comes up.


As long as it excludes politics in general, without overt partisan bias demanded by the government, what's the problem with that? If they want to focus on other subjects, they get to do that. Other models will provide answers where Copilot doesn't.

Chinese models, conversely, are aligned with explicit, mandatory guardrails to exalt the CCP and socialism in general. Unless you count prohibitions against adult material, drugs, explosives and the like, that is simply not the case with US-based models. Whatever biases they exhibit (like the Grok example someone else posted) are there because that's what their private maintainers want.


Because it's in the ruling class's favor for the populace to be uninformed.



