I'm not sure I buy that users are lowering their guard just because these companies have enforced certain restrictions on LLMs. This is only anecdata, but not a single person I've talked to, from highly technical to the layperson, has ever spoken about LLMs as arbiters of morals or truth. They all seem aware to some extent that these tools can occasionally generate nonsense.
I'm also skeptical that making LLMs a free-for-all will necessarily result in society developing some sort of herd immunity to bullshit. To borrow your example: the internet started out as a wild west, and I'd say the general public is still highly susceptible to misinformation.
I don't disagree on the dangers of having a relatively small number of leaders at for-profit companies deciding what information we have access to. But I don't think the biggest issue we're facing is someone going to the ChatGPT website and assuming everything it spits out is perfect information.
> They all seem aware to some extent that these tools can occasionally generate nonsense.
You have too many smart people in your circle. Many people are somewhat aware that "chatgpt can be wrong" but fail to internalize this.
Consider machine translation: we have a lot of evidence of people trusting machines for the job (think: "translate server error" signs), even tho everybody "knows" the translation is unreliable.
But tbh morals and truth seem like somewhat orthogonal issues here.
Wikipedia is wonderful for what it is. And yet a hobby of mine is finding C-list celebrity pages and tracing the reference loops between tabloids and the biographical article.
The more the C-lister has engaged with internet wrongthink, the more egregious the subliminal vandalism is, with speculation of domestic abuse, support for unsavory political figures, or similar unfalsifiable slander being commonplace.
Politically-minded users practice this behavior because they know the platform’s air of authenticity damages their target.
When Google Gemini was asked "who is worse for the world, Elon Musk or Hitler" and went on to equate the two because its guardrails led it to treat online transphobia as being as sinister as the Holocaust, it raises the question of what the average user will accept as AI nonsense if it affirms their worldview.
> not a single person I've talked to, from highly technical to the layperson, has ever spoken about LLMs as arbiters of morals or truth
Not LLMs specifically, but my opinion is that companies like Alphabet absolutely abuse their platforms to introduce and sway opinions on controversial topics. This "relatively small" group of leaders has successfully weaponized their communities and built massive echo chambers.