> When people shoot off their mouths about something, they’re not necessarily talking about what they’d actually do, but often merely what they feel in that passing moment.
I hope that the existence of things like ChatGPT (which does not "care" about the truth or falsity of its utterances) will cause people, in an attempt to distinguish themselves from the AIs, to consider paying more attention to consistency.
How would one go about creating a non-politics-embedding chatbot? Assuming no memory, everything's going to come down to choice of training set, and that seems pretty difficult to do in an objective manner.
Don’t ChatGPT and other scrapers essentially take information from content creators and present it as their own, often without attribution? Whose politics does that represent? Are they thieves or freedom fighters?
This is a good case in point. Citizens United is safe from ChatGPT-authored briefs, because judges are among the few people who care about consistency in argument.