Content recommendation algorithms reward engagement metrics, and one of the things those metrics capture is grabbing a user's attention, even briefly. In the real world, someone can get my attention by screaming that there is a fire. Neither belief that there is a fire nor interest in fire is necessary for my attention to be grabbed by a warning of fire. All that is needed is a desire for self-preservation and a degree of trust in the source of the warning.
Compounding the problem, because attention-grabbing content boosts engagement and people make money off of videos, there is a standing incentive encouraging the proliferation of attention-grabbing false information.
In a better world, this behavior would not be incentivized. In a better world, reputation metrics would let a person recognize that the boy who cried wolf is the same one who posted the attention-grabbing video. Humanity has long known that there are consequences for repeated lying; we have fables warning would-be liars away from it.
I don't think making those consequences explicit, as they are in many real-world cases of lying publicly in the most attention-grabbing way possible, would be unreasonable.
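To make the reputation idea concrete, here is a minimal sketch, assuming a hypothetical platform whose moderation pipeline records each confirmed false alarm against the posting account. Every name and parameter below is invented for illustration, not any real platform's API.

    import time
    from collections import defaultdict

    # Arbitrary assumed half-life: a strike loses half its weight in ~90 days.
    DECAY_HALF_LIFE = 90 * 24 * 3600

    class ReputationTracker:
        def __init__(self):
            # account_id -> list of timestamps of confirmed false alarms
            self._strikes = defaultdict(list)

        def record_false_alarm(self, account_id, ts=None):
            """Log one confirmed piece of attention-grabbing false information."""
            self._strikes[account_id].append(ts if ts is not None else time.time())

        def cry_wolf_score(self, account_id, now=None):
            """Exponentially decayed strike count: recent lies weigh more than old ones."""
            now = now if now is not None else time.time()
            return sum(
                0.5 ** ((now - ts) / DECAY_HALF_LIFE)
                for ts in self._strikes[account_id]
            )

The decay here is a design choice rather than a requirement: it makes the consequence explicit and visible while still letting a reformed account's score fade, which fits the fable better than a permanent mark would.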
No one hears the lies once they're removed. That can be good, but removal also prevents people from criticizing or exposing them, and that silence might, even if only slightly, validate the lies in the eyes of the liars. That could yield nastier lies in the future.