Hacker News

> When people shoot off their mouths about something, they’re not necessarily talking about what they’d actually do, but often merely what they feel in that passing moment.

I hope that the existence of things like ChatGPT (which does not "care" about the truth or falsity of its utterances) will prompt people, in an attempt to distinguish themselves from the AIs, to pay more attention to consistency.




ChatGPT might not care, but its creators certainly do, and they’ve embedded their politics into how it responds and what topics it considers taboo.


How would one go about creating a non-politics-embedding chatbot? Assuming no memory, everything's going to come down to choice of training set, and that seems pretty difficult to do in an objective manner.


Don’t ChatGPT and other scrapers, at their core, essentially steal information from content creators and, in many cases, present it as their own content without attribution? Whose politics does that represent? Are they thieves or freedom fighters?


Details on how they inserted present-day political beliefs into ChatGPT: https://cactus.substack.com/p/openais-woke-catechism-part-1


This is a good case in point. Citizens United is safe from ChatGPT-authored briefs, because judges are among the few people who care about consistency in argument.

(if we get an AI that performs self-consistency checks, I'll reexamine this statement. but we're not there yet: https://news.ycombinator.com/item?id=34092710 )
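One simple version of such a check is majority voting over repeated samples: ask the model the same question several times and accept an answer only when a clear majority agrees. A minimal sketch, assuming a hypothetical `sample_answer` callable standing in for repeated model queries (the stub below is illustrative, not a real API):

```python
from collections import Counter

def consistent_answer(sample_answer, question, n=5, threshold=0.8):
    """Query the model n times; return the majority answer only if
    agreement meets the threshold, otherwise None (inconsistent)."""
    answers = [sample_answer(question) for _ in range(n)]
    top, count = Counter(answers).most_common(1)[0]
    return top if count / n >= threshold else None

# Stubbed sampler for illustration; a real one would call a model API.
samples = iter(["42", "42", "42", "41", "42"])
result = consistent_answer(lambda q: next(samples), "answer?", n=5)
# 4 of 5 samples agree (0.8 >= threshold), so result is "42"
```

This only catches disagreement between a model's own samples, not inconsistency with external facts, which is part of why such checks fall short of what the comment above asks for.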





