
Is this a big deal? I'm a layman here, so this seems like a needed product to me, but I have a feeling I'm missing something.


Previous attempts at invisible/imperceptible/mostly-imperceptible watermarking have been trivially defeated; this attempt claims to be more robust to many kinds of edits. (From the paper: geometric edits like rotations or crops, valuemetric edits like blurs or brightness changes, and splicing edits like cutting parts of the image into a new one or inpainting.) Invisible watermarking is useful for tracing the origins of content. That might be copyright information, or AI-service information, or Photoshop information, or unique IDs to trace leakers of video game demos / films, or (until the local hardware key is extracted) a form of proof that an image came from a particular camera...
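For the curious, here's a minimal sketch (my own, not the paper's code; file names are placeholders) of those three edit families in Pillow:

    # Geometric, valuemetric, and splicing edits a robust watermark should survive.
    # Sketch only; "watermarked.png" and "clean.png" are placeholder files.
    from PIL import Image, ImageEnhance, ImageFilter

    img = Image.open("watermarked.png")
    w, h = img.size

    # Geometric: rotation and crop
    rotated = img.rotate(25, expand=True)
    cropped = img.crop((w // 10, h // 10, 9 * w // 10, 9 * h // 10))

    # Valuemetric: blur and brightness change
    blurred = img.filter(ImageFilter.GaussianBlur(radius=2))
    brightened = ImageEnhance.Brightness(img).enhance(1.5)

    # Splicing: paste part of the watermarked image into another image
    background = Image.open("clean.png").convert(img.mode).resize(img.size)
    background.paste(img.crop((0, 0, w // 2, h // 2)), (0, 0))

    # A robust scheme should still decode its payload from every copy above.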


... Ideal for a repressive government or just a mildly corrupt government agency / corporate body to use to identify defectors, leakers, whistleblowers, or other dissidents. (Digital image sensors effectively already mark their output due to randomness of semiconductor manufacturing, and that has already been used by abovementioned actors for the abovementioned purposes. But that at least is difficult.) Tell me with a straight face that a culture that produced Chat Control or attempted to track forwarding chains of chat messages[1] won’t mandate device-unique watermarks kept on file by the communications regulator. And those are the more liberal governments by today’s standards.

I’m surprised how eager people are to build this kind of tech. It was quite a scandal (if ultimately a fruitless one) when it came out that colour printers marked their output with unique identifiers; and now that generative AI is a thing, stuff like TFA is somehow seen as virtuous. Can we maybe not forget about humans?..

[1] I don’t remember where I read about the latter or which country it was about—maybe India?
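(For the curious, a rough sketch of the PRNU-style sensor fingerprinting alluded to above. Real implementations use wavelet denoisers and a multiplicative noise model; treat the details here as assumptions.)

    # A camera's fingerprint is the average noise residual over many of its
    # photos; a new photo is attributed by correlating its residual with it.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def residual(img: np.ndarray) -> np.ndarray:
        img = img.astype(np.float64)
        return img - gaussian_filter(img, sigma=2)  # crude stand-in denoiser

    def fingerprint(images: list[np.ndarray]) -> np.ndarray:
        return np.mean([residual(im) for im in images], axis=0)

    def match_score(photo: np.ndarray, fp: np.ndarray) -> float:
        r, f = residual(photo), fp
        r, f = r - r.mean(), f - f.mean()
        # normalised cross-correlation: a high score suggests the same sensor
        return float((r * f).sum() / (np.linalg.norm(r) * np.linalg.norm(f) + 1e-12))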


> ... for a repressive government ...

Why shouldn't a virtuous and transparent government (should one materialize somehow, somewhere) be interested in identifying leakers?


That’s like asking why a fair and just executive shouldn’t be interested in eliminating the overhead of an independent judiciary. Synchronically, it should. Diachronically, that’s one of the things that ensures that it remains fair and just. Similarly for transparency and leakers, though we usually call those leakers “sources speaking on condition of anonymity” or some such. (It does mean that the continued transparency of a modern democratic government depends on people’s continual perpetration of—for the most part—mildly illegal acts. Make of that what you will.)


Both can be true! This is essentially the "making it easier to do [x]" argument, which itself is essentially security through obscurity.

It was always possible to watermark everything: any nearly imperceptible bit can be used to encode data that can later be read out overtly.
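A toy illustration of the point, hiding one byte in the least-significant bits of a grayscale image; this is exactly the sort of trivial scheme that any re-encode or edit destroys:

    # Hide one byte in the LSBs of the first 8 pixels. Imperceptible to the
    # eye, trivially read back, and lost to almost any edit or re-encode.
    import numpy as np

    def embed_byte(pixels: np.ndarray, value: int) -> np.ndarray:
        out = pixels.copy().ravel()
        for i in range(8):
            out[i] = (out[i] & 0xFE) | ((value >> i) & 1)
        return out.reshape(pixels.shape)

    def extract_byte(pixels: np.ndarray) -> int:
        flat = pixels.ravel()
        return sum((int(flat[i]) & 1) << i for i in range(8))

    img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    assert extract_byte(embed_byte(img, 0xA5)) == 0xA5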

Now, enabling everyone everywhere to do it and integrate it may have second-order effects that are the opposite of one's intention.

It is a very convenient thing, for no one to trust what they can see. Unless it was Validated (D) by the Gubmint (R), it is inscrutable and unfalsifiable.


If they are transparent, what is leaking?


There is always a need for _some_ secrets to be kept. At the very least from external adversaries.


> Why shouldn't a virtuous and transparent government

That doesn't exist.


The parent comment says that it has dangerous use-cases, not that it does not have desirable ones.


I stopped myself from making the printer analogy, but of course it's relevant, as is the fact that few seem to care. I personally hope some group strikes back to sanitize images watermarked this way, with no more difficulty than removing EXIF data.
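For reference, the EXIF baseline really is trivial; with Pillow, a plain re-save drops the metadata unless you explicitly pass it through (placeholder file names):

    # Re-saving without an exif= argument writes no EXIF block (at the cost
    # of a JPEG re-encode). Placeholder file names.
    from PIL import Image

    Image.open("photo.jpg").save("no_exif.jpg", quality=95)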


In my experience, "resize & rotate" always defeats all kinds of watermarks. For example, crop a 1000x1000 image to 999x999 and rotate it by 1°.

There's also the "double watermark" attack: just run the resulting image through the watermarking process again, and usually the original watermark is lost.
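The desync attack above is a couple of lines with Pillow (placeholder file name); the double-watermark attack would additionally need the embedder itself:

    # Crop one pixel off two edges (1000x1000 -> 999x999), then rotate 1 degree.
    # Watermarks that rely on a fixed pixel grid lose sync after this.
    from PIL import Image

    img = Image.open("watermarked.png")
    w, h = img.size
    attacked = img.crop((1, 1, w, h)).rotate(1, resample=Image.Resampling.BICUBIC)
    attacked.save("attacked.png")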


Yeah, so it's impressive if this repo does what it claims and is robust to such manipulations.

I tried to run it, but of course it failed with:

    NVIDIA GeForce RTX 4090 with CUDA capability sm_89 is not compatible with the current PyTorch installation.
    The current PyTorch install supports CUDA capabilities sm_37 sm_50 sm_60 sm_70.
    If you want to use the NVIDIA GeForce RTX 4090 GPU with PyTorch, please check the instructions at https://pytorch.org/get-started/locally/
I was curious, but not curious enough to deal with this crap, even if it's rather simple. God, I hate everything about the modern ML ecosystem: python, pip, conda, cuda, pytorch, tensorflow (rarer now), notebooks, just-run-it-in-the-cloud...


Use the Google Colab link: https://colab.research.google.com/github/facebookresearch/wa...

Everything is installed directly in the Colab.


My assumption is that this will be used to watermark images coming out of cloud-based generative AI.


And they'll say it's to combat disinformation, but it'll actually be to help themselves filter AI generated content out of new AI training datasets so their models don't get Habsburg'd.


> their models don't get Habsburg'd.

You mean develop a magnificent jawline, or continue to influence Austrian politics?


I was reading an article recently about how a lot of that was really just immensely bad luck on the inbreeding front; that is, they ended up picking exactly the worst sort of pairings.


How do they still influence Austrian politics? Do you have any links or sources? I'm genuinely curious!


I wondered why they'd be doing this NOW and this makes perfect sense!!


>so their models don't get Habsburg'd.

Nice metaphor


Why? Those are not copyrightable.


Because downstream consumers of the media might want to know if an image has been created or manipulated by AI tools.


Hardware self-evident modules to essentially sign/color all digital files.

Now we need more noise.


This still leaves out non-cooperating image generators, and the real bad guys (organised disinformation groups) will use them.


They would not want to train their next model on the output of the previous one...


Who says?


This is one of the primary communication methods of the CIA's overseas agents; interesting to see it used more broadly </joke>


Do you have a source? I'd be interested in reading more about this.


It's a form of steganography: https://en.wikipedia.org/wiki/Steganography


I was referring specifically to the claim about the CIA; I'm aware of steganography.


Of course not; check their username.



