I tried enabling this recently and I immediately noticed that websites started appearing in light mode instead of following my system settings and displaying in dark mode. It seems that, in its effort to make my fingerprint the same as everyone else's, Firefox stops telling websites about my display settings. The issue immediately went away when I changed the setting back to privacy.resistFingerprinting = false.
Why does the browser need to tell the website about local display settings? In my case, with privacy.resistFingerprinting = true, the zoom level resets to 100% every time I navigate to another page on a site. Why can't the browser just remember my zoom level locally and re-apply it? Why does it have to tell the website?
Zooming basically changes the dimensions of the viewport as JS/CSS see it. Reapplying the zoom level would involve running the same CSS media queries and/or JS so that the website looks good at those dimensions.
It's not just an "optical" zoom.
But more directly, zooming can put you at very nonstandard width x height dimensions. Carrying those dimensions across different pages makes for an easy fingerprint, which is probably why it's reset.
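To make the point above concrete: page zoom effectively divides the viewport width in CSS pixels, which is exactly what width-based media queries and layout JS key off. This is an illustrative sketch (the helper function and the numbers are made up, not Firefox's actual implementation):

```javascript
// Illustrative sketch: page zoom shrinks the viewport as CSS/JS see it.
// (Hypothetical helper; a real page would observe this via window.innerWidth.)
function effectiveCssWidth(windowWidthPx, zoomFactor) {
  return Math.round(windowWidthPx / zoomFactor);
}

console.log(effectiveCssWidth(1280, 1.0));  // 1280 -> desktop breakpoints match
console.log(effectiveCssWidth(1280, 1.25)); // 1024 -> a narrower breakpoint may now match
console.log(effectiveCssWidth(1280, 2.0));  // 640  -> the site may switch to its mobile layout
```

So two pages visited at the same non-default zoom report the same unusual width, which is the cross-page fingerprint described above.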
> Zooming basically changes the dimensions of the viewport as JS/CSS see it.
I don't care what the website's JS/CSS says. At the end of the day the browser has a rendered canvas; I just want to zoom the canvas (and clip it at the window dimensions, providing scrollbars if necessary, if zooming makes it larger than the window dimensions). The browser shouldn't have to re-run anything to do that; zooming and clipping a canvas are graphics operations that have existed in computers for as long as there have been computers with graphics at all.
When people increase their font size/zoom, they generally don't want that. What you're describing is the default zoom on phones, etc., which is different from increasing the page text size, and which, you'll note, is typically about enlarging undersized UI components rather than simply reading text.
When people are in a browser and expanding the content, they want the content reflowed: having to scroll across the width of a line to read it is super obnoxious and makes reading text much harder. This is made even more frustrating when you recall that a lot of the time the reason for zooming is to make things easier to read.
There are very few times when the correct response to "increase the zoom" is simply an affine transform of the rendered content, from either a usability standpoint or a user-intent standpoint.
> When people are in a browser and expanding the content they want the content reflowed
Even if this is the case, I don't see why the browser has to re-run anything from the website or tell the website anything. It can just do the reflow operation locally. Yes, the central data structure then is the DOM rather than a rendered canvas, but the DOM is still held locally.
You can always zoom the website using built-in OS zooming.
However, browser zooming incurs layout logic. It's no different than resizing the browser viewport: code is run on the site (whether CSS or JS) to determine how the site should render at that size.
CSS/JS is run even when loading the site in the first place. There is nothing special about zooming, so it's like asking why layout code has to be run when you visit a site.
Well, you can turn off JS and CSS styling, but that's too hamfisted for most people.
Here's how a site can load different stylesheets depending on viewport width:
<link rel="stylesheet" media="screen and (min-width: 601px)" href="desktop.css" />
<link rel="stylesheet" media="screen and (max-width: 600px)" href="mobile.css" />
It's unclear to me what you think should happen on first website load vs. zooming.
> Reflow and relayout is entirely local just as it is if you resize your window.
Then why does the zoom level reset itself to 100% every time I reload the page if I set privacy.resistFingerprinting = true?
> What are you concerned is happening?
I'm concerned that setting privacy.resistFingerprinting = true breaks a feature (that my browser remembers the zoom level for a given site so I don't have to reset it every time I reload that site) that should, as you say, be "entirely local".
The issue is not related to page loads, and layout behaviour does not impact or cause differing load behaviour.
First we need to consider what the goal of fingerprinting a browser is, and subsequently how that is done. The goal is not just "track a user"; it is "track a user without using any explicit storage", so no cookies, client storage, etc. So all that a fingerprinting service can do is read implicit data from the browser and, from a collection of that data, construct a unique ID. Most data that you read will be the same across large numbers of browsers: user agents, installed fonts, etc., so what you do is build up a signature from those properties that vary from the mean. If you query enough different properties, the hope is that you can accumulate enough variation to create a unique(-enough?) identifier that persists for that user.
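As an illustrative sketch (not any real fingerprinting library's algorithm), the accumulation step boils down to serialising whatever implicit properties you can read and hashing them into one identifier:

```javascript
// Toy sketch: combine weakly-identifying properties into a single ID.
// Property names and values are invented for illustration.
function fingerprint(props) {
  const s = Object.entries(props)
    .sort(([a], [b]) => a.localeCompare(b)) // stable order regardless of read order
    .map(([k, v]) => `${k}=${v}`)
    .join(";");
  let h = 0; // tiny non-cryptographic rolling hash, for illustration only
  for (const c of s) h = (h * 31 + c.charCodeAt(0)) >>> 0;
  return h.toString(16);
}

// Two users who differ only in zoom level get different IDs:
console.log(fingerprint({ userAgent: "X", fonts: 214, zoom: 1.0 }));
console.log(fingerprint({ userAgent: "X", fonts: 214, zoom: 1.5 }));
```

Each property on its own is weak; it's the combination that becomes unique enough to persist across visits.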
Which gets us to your feature. The enormous majority of users have default zoom. So if your browser presents a different zoom level that provides a large amount of information to uniquely fingerprint you.
Hence `privacy.resistFingerprinting = true` disables non-default zoom on load, because it's directly finger-printable.
No. I already understand why non-default zoom gives websites a way to fingerprint you, if your browser insists on telling the websites that you have a non-default zoom level.
What I don't understand, and what nobody in this discussion has been able to explain, is why a browser with privacy.resistFingerprinting = true can't just lie to the website about what the zoom level is. You have said that zoom should be a local operation; that means the browser shouldn't have to tell the website anything about the actual zoom level if the user doesn't want it to. It should just load the page, telling the website whatever default things it tells the website when privacy.resistFingerprinting = true, including, presumably, a default zoom level, and then do the local zoom operation afterwards.
It doesn't have to, but if you want websites to follow your system settings for light/dark mode, then the browser has to tell the website which one you want at this moment.
It shouldn't; the CSS should contain both modes. You need some checks to ensure JavaScript doesn't leak, but you can place limits on how much you check and avoid having to solve the halting problem.
It does contain both modes. But only one of those declarations will be used, and that declaration can do things like background images, which is behaviour the server can observe.
You could potentially try and "execute" all possible declarations at the same time, in effect just loading every URL or image declared in the CSS file at once, so the server can't tell which path was actually used. But (a) this would itself be identifiable as an anti-tracking measure (which can contribute to a fingerprint), and (b) this loads a lot more data in the general case, which is exactly what browsers want to avoid.
You can verify that the same images are loaded in either path, without loading them. (This is what I was getting at by invoking the halting problem: if you cannot easily determine that the same images are loaded in both paths, they are trying to fool your anti-tracking, and so you default to assuming it is tracking.)
The more people identified as having anti-tracking on (which should be the default), the less useful that bit of tracking is.
I don't entirely understand your point, I'm sorry. Could you explain it again?
One would generally expect that both paths produce different outcomes, because this is the purpose of media queries, to produce different appearances for different screens. In the example about light mode vs dark mode, a well-designed, non-fingerprinting CSS file might well load different background images for an element to match the user's theme - a dark-background image for dark mode, and a light-background image for light mode. This is the sort of behaviour we are aiming for with this feature.
The problem is that this good behaviour is indistinguishable from more malicious behaviour where the images are only used to do fingerprinting. And FWIW, this is the simplest way of doing fingerprinting that I could think of. In the general case, it is not possible to detect whether a given media query would be fingerprintable by a server. For example, a given media query might increase the height of a particular element, pushing a lazy loaded image below the fold and causing it to not be requested immediately, but only after a few seconds when the user scrolls down to it. Or instead of having one "homepage" link on the page, you have multiple, but you only show one depending on which media query fits best. Then, as soon as the user clicks the "homepage" link, you know which link was visible to them and can fingerprint them accordingly.
Which is why the nuclear option here is just turning off all possibilities for exposing a user's unique preferences, because it's the preferences themselves that are being used to fingerprint the user.
> if you want websites to follow your system settings
I don't want websites to follow my settings; I want my browser to follow my settings, overriding or ignoring what the website says if necessary. I don't see why the browser has to tell the website what it's overriding or ignoring.
The browser does follow your settings, and it doesn't necessarily directly tell the website what's going on. The problem is that the website can observe a lot of things indirectly.
For example, with the dark mode/light mode "attack", the browser will download the necessary HTML and CSS in as unidentifiable a way as possible, but then it needs to render that for your machine. But the CSS file might contain a media query line that says something like "if the user wants dark mode, load this dark image as a background for this element". And to correctly respond to the query, the browser then needs to send another request to the server to download that image, that effectively indicates whether the user is using dark mode or not.
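A CSS file behaving this way might contain something like the following (selector and file names are invented for illustration):

```css
/* Each branch requests a different file, so the server learns the user's
   theme from which request arrives. (Hypothetical URLs.) */
@media (prefers-color-scheme: light) {
  .hero { background-image: url("/bg-light.png"); }
}
@media (prefers-color-scheme: dark) {
  .hero { background-image: url("/bg-dark.png"); }
}
```

The browser only fetches the image for the branch that matches, and that fetch is the indirect signal.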
This principle can be used to detect a lot of your user settings. For example, your zoom level will effectively change how wide the browser window appears to be from the perspective of a CSS file*, which means that it's possible to use more media queries to detect that. Likewise a lot of accessibility queries like prefers-reduced-motion, while really useful for many people, can be used alongside other information to create your unique browser fingerprint.
This is just with HTML and CSS. If you add Javascript to the mix, it's even easier to fingerprint you based on various settings.
* there are technically other ways of performing zooming that wouldn't necessarily be visible, but they have poor usability. For example, you could have the classic PDF-style zoom where the PDF is rendered in a fixed size, and the user simply views a small, viewport-sized portion of the file. But this is a pain if you want to read text that's wider than your screen, because now you need to scroll back and forth. The browser approach allows text to be reflowed to match the viewport width, but this reflow will always be observable, and therefore can always contribute to a fingerprint.
> The problem is that the website can observe a lot of things indirectly.
If the browser insists on doing those things, yes. But why does the browser have to do that?
For example, if I set privacy.resistFingerprinting = true, why can't the browser just locally have a "light mode" and a "dark mode" that does the best it can to render the site locally in those modes without making any additional requests that it didn't already make for the default version of the page? Yes, I'm sure the website designer has lots of wonderful stuff to customize the look and feel in those modes--and I might like that if I could be sure that the website wasn't also using that stuff to fingerprint me. But if I'm telling my browser to resist fingerprinting, clearly I don't trust that website, so why would I want all of its customizations for light mode/dark mode?
Sorry, I didn't see this earlier. The problem is that it's very difficult to determine what properties are observable for fingerprinting purposes. I used the background image as an example because it's very simple, but you can also trigger requests in more obscure ways. For example, you could have a lazy-loaded image in the rendered HTML - the image will only be loaded if the user's viewport contains the image. Then you create a rule where if the user is using dark mode, the element immediately before the image becomes really long, forcing the image off the screen. Now, if the user loads the website and doesn't immediately also load the image, you know that they were using dark mode.
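A sketch of that lazy-loading trick (class names and the probe URL are invented):

```html
<!-- Hypothetical markup: in dark mode the spacer grows, pushing the
     lazy-loaded image below the fold, so the request for /probe.png
     arrives late (or not at all), which the server can observe. -->
<div class="spacer"></div>
<img loading="lazy" src="/probe.png" alt="">
<style>
  .spacer { height: 0; }
  @media (prefers-color-scheme: dark) {
    .spacer { height: 300vh; }
  }
</style>
```

Nothing here reads the user's settings directly; the signal is purely in the timing of an otherwise ordinary image request.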
Alternatively, everywhere you have a link, you could have one link for each combination of bits that you want to send to the backend. Then, using CSS, you can hide or display these links so that only one version of each link is displayed at a time, and then monitor what gets clicked. If the user clicks the link that says `/?dark-mode=true&orientation=vertical`, you now know two extra bits of information.
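Sketched out, with invented class names and query parameters:

```html
<!-- Hypothetical: one link per value; CSS reveals exactly one,
     and the clicked URL tells the server which branch matched. -->
<a class="home if-light" href="/?dark-mode=false">Home</a>
<a class="home if-dark"  href="/?dark-mode=true">Home</a>
<style>
  .home { display: none; }
  @media (prefers-color-scheme: light) { .if-light { display: inline; } }
  @media (prefers-color-scheme: dark)  { .if-dark  { display: inline; } }
</style>
```

To the user both pages look identical, with a single "Home" link; the fingerprinting happens entirely in which URL ends up getting requested.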
This is obviously all excluding Javascript, which can just read this information straight out and use it.
The problem ends up being that there are so many different (and often valid) ways to customise a website that it's very difficult to limit these customisations to only the "safe" ones. Even if the only properties I were allowed to use were color/background-color, I'm sure I could come up with some way to use them to convey information. So the only safe option here is to turn off the customisations altogether. Yes, it's still possible to track whether a user is using light mode or not, but now they're all using light mode, so that bit of information becomes useless.