A study of the formalism called Promise Theory quickly shows that this is not true.
Autonomous agents -- humans or otherwise -- base their decisions on the imperfect information they have. No one has a global, perfect view of everything, and so the perception of how well other agents fulfill their promises (formally defined as intentions made known to an audience) will always be based upon local, imperfect information.
I can mention other frameworks -- Cynefin, and the error of confusing a Complicated domain (which can still be accurately modeled) with a Complex domain (which cannot). Or the idea James C. Scott calls "legibility" and the fallacy of imposing legibility on complex systems, which he discusses in his book Seeing Like a State.
Perceptions matter. Blaming the participants in a system for being uninformed will not lead to voluntary cooperation, much less to reliable systems that involve both machines and humans.
> Autonomous agents -- humans or otherwise -- base their decisions on the imperfect information they have. No one has a global, perfect view of everything, and so the perception of how well other agents fulfill their promises (formally defined as intentions made known to an audience) will always be based upon local, imperfect information.
There's going to have to be (or there may already be) a rule against accusing people of being ChatGPT, but am I really reading an argument that the "imperfect information" average people have about some condition or event is somehow more important than the actuality of that condition or event, precisely because of how wrong average people can be?
Because voluntary cooperation? I should only be concerned with that if I'm doing PR work for these companies. It's their job to sell safety; the only thing I'm concerned with is when people are lying, or intentionally confusing the public about some fact, polling the confused public about what they think the facts are, and then reporting the poll to further confuse the facts in lieu of simply reporting the data.