If there were a law that AI-generated text must be watermarked, then major corporations would take pains to apply the watermark, because if they didn't they would be exposed to regulatory and reputational problems.
Watermarking the text would enable people training models to avoid it, and it would allow search engines to choose not to rely on it (if that were the search engine's preference).
It would not mean that all text not watermarked was human-generated, but it would mean that all text not watermarked and provided by institutional actors could be trusted.
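For concreteness, here is a minimal sketch of the "avoid it when training" step, under a purely hypothetical convention in which compliant generators prepend an invisible zero-width marker to their output. Real proposals embed statistical watermarks in the token choices themselves, but the corpus-filtering logic would have the same shape, and a search indexer could apply the same check to down-rank flagged pages:

```python
# Toy convention (hypothetical): compliant generators prefix their output
# with a zero-width no-break space. Real schemes would use statistical
# watermarks baked into token statistics, but filtering looks the same.
WATERMARK = "\ufeff"

def is_watermarked(text: str) -> bool:
    """Detect the toy marker. A real detector would run a statistical test."""
    return text.startswith(WATERMARK)

def filter_training_corpus(documents: list[str]) -> list[str]:
    """Keep only documents without the watermark, so a model trainer
    (or a search indexer) can exclude labeled AI-generated text."""
    return [doc for doc in documents if not is_watermarked(doc)]

corpus = [WATERMARK + "AI-generated article text...", "Human-written essay..."]
print(filter_training_corpus(corpus))  # ['Human-written essay...']
```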
> It would not mean that all text not watermarked was human-generated, but it would mean that all text not watermarked and provided by institutional actors could be trusted.
You simply cannot trust that non-watermarked text was human-generated. Laws can be broken. Companies are constantly being found in violation of the law.
You're trading the warm feeling of an illusion of trust for a total lack of awareness and protection against even the mildest attempt at obfuscation. People who want to hurt or trick you will have free rein to do it, even against your 90-year-old grandmother, who lacks the skill to spot it.
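To see how mild that obfuscation can be, consider the toy marker scheme sketched above: stripping it is a one-line operation. Even real statistical watermarks are known to weaken under simple paraphrasing; this just illustrates how low the bar can sit:

```python
# Continuing the hypothetical toy scheme: a bad actor removes the
# watermark with a one-line normalization pass. Statistical watermarks
# are harder to strip, but paraphrasing attacks them the same way.
WATERMARK = "\ufeff"

def launder(text: str) -> str:
    """Strip the invisible marker (and other zero-width characters)."""
    return text.replace(WATERMARK, "").replace("\u200b", "")

tagged = WATERMARK + "Totally ordinary-looking prose."
print(launder(tagged).startswith(WATERMARK))  # False: the detector now passes it
```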
Even if you achieved perfect compliance from law-abiding organizations, that does nothing to protect you against any organization that does not abide by your local laws.
Consider any hacker from a non-extraditing rogue state.
Consider any nation-state actor or well-equipped NGO. They are far more motivated to manipulate you than Starbucks is.
Consider the slave-like, appalling conditions faced by the foreign workers who manufacture your shoes and mine your lithium. All of your favorite large companies look the other way while continuing to employ such labor today, and they have a long history of partnering with the US government to overthrow legitimate foreign democratic regimes in order to maintain economic control. Why would these companies have better ethics regarding AI-generated output?
And consider the US government, whose own intelligence agencies are no longer forbidden from engaging in domestic propaganda, and which will certainly get internal permission to circumvent any such laws while still exploiting them to its benefit.
The solution is not to watermark anything, because watermarking is futile. Teach your citizens that anything that can be machine generated will be machine generated. Where exactly is the problem here?