Google is framing this in a positive light, but it's just DRM for AI text.



It's likely more about preventing model incest (models training on their own generated output) than about digital rights management.


Like all things a computer can or can't do, DRM isn't inherently bad: it's how it's used that's the problem.

I.e., DRM can't change people's motivations. It's useful for things like national security secrets and trade secrets, where the people who have access to the information have very clear motivations to protect it, and very clear consequences for violating the rules the DRM is in place to enforce.

In this case, the big question of whether AI watermarking will work or fail has more to do with people's motivations: will the general public accept AI watermarking because it fits our motivations and the consequences we set up for AI masquerading as a real person, or for AI being used for misinformation? That's a big question that I can't answer.


This is not a "good deed for the public" by Google; it's a self-serving tool to enforce their algorithms and digital property. There is nothing "bad" here for the public, but it's certainly not good either.


I for one am glad we might have a path forward for filtering out LLM-generated sludge.


> we

If by "we" you mean anyone else than Google and the select few other LLM provider they choose to associate with, I'm afraid you're going to be disappointed.


If there is a detectable fingerprint, we can detect it too. Probably don't even need a Bletchley Park.
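If the scheme resembles the published green-list watermarks (e.g. Kirchenbauer et al.), detection is just a frequency test. Here's a minimal sketch, assuming a hypothetical hash-based green list with fraction GAMMA and naive whitespace tokenization; none of this is Google's actual SynthID design:

    import hashlib
    import math

    GAMMA = 0.5  # assumed fraction of the vocabulary on the "green list" per step

    def is_green(prev_token: str, token: str) -> bool:
        # Hypothetical partition: hash the (previous token, candidate token)
        # pair and treat ~GAMMA of candidates as "green". A real scheme would
        # seed a PRNG over the whole vocabulary instead.
        digest = hashlib.sha256(f"{prev_token}|{token}".encode()).digest()
        return digest[0] / 255.0 < GAMMA

    def watermark_z_score(tokens: list[str]) -> float:
        # One-proportion z-test: watermarked text should contain noticeably
        # more green tokens than the GAMMA fraction expected by chance.
        hits = sum(is_green(p, t) for p, t in zip(tokens, tokens[1:]))
        n = len(tokens) - 1
        return (hits - GAMMA * n) / math.sqrt(GAMMA * (1 - GAMMA) * n)

    tokens = "the quick brown fox jumps over the lazy dog".split()
    print(f"z = {watermark_z_score(tokens):.2f}")  # |z| >> 0 would suggest a watermark

The catch is that you need the watermarker's keyed hash to run the test, which is exactly the part Google would keep to itself.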



