This is like saying that you should keep your public street address private... you are thinking about it backward. What needs to be done is to not use your email address as a login or, worse, as a password recovery option.
Those are not mainstream, what... next thing you'll tell me Liquid Tension Experiment is mainstream? People on this sub live in a different world, apparently.
There’s like a billion side channels to determine how big the screen is unless you just want to entirely break basic css. Which is a pretty unreasonable way to address this problem.
Surprisingly, most sites are perfectly usable with CSS disabled. They end up looking a bit like "motherfucking website"[1], or what you see in a text-based web browser.
Wouldn't loading all external links right away (think background-image) solve this? How does the site exfiltrate the gathered information without javascript or tracking pixels?
Edit: Having a bunch of HTML buttons/links, showing a different set to agents based on their resolution, and waiting to see which ones they follow would break this, unless everyone crawls a lot of stuff they don't need. Pretending to be one of a few common sizes is probably a better solution.
How? Admittedly my knowledge of CSS is dated, but without scripting enabled you can't set cookies, make automatic server requests, or even conditionally load an external CSS file (that could be served and counted).
It's not something I've considered before, and I am genuinely curious how this would work.
You can make server requests by loading images and fonts. Browsers only load the resources they actually need, so there are lots of opportunities for conditionally triggering requests: media queries for window size, fallback fonts to check installed fonts, CSS feature checks to make guesses at the browser type, ...
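To make that concrete, here's a rough sketch of the font and feature-check channels (the media-query variant comes up again further down the thread). The tracker.example /probe/ URLs are made up; a real tracker would simply count which of them get requested:

    /* Fetched only if "Helvetica Neue" is NOT installed locally,
       and only once some element actually uses the probe family. */
    @font-face {
      font-family: probe-helvetica;
      src: local("Helvetica Neue"),
           url(https://tracker.example/probe/no-helvetica-neue) format("woff2");
    }
    .font-probe { font-family: probe-helvetica, sans-serif; }

    /* Fetched only on engines that implement this property
       (roughly: WebKit), hinting at the browser type. */
    @supports (-webkit-touch-callout: none) {
      .feature-probe { background-image: url(https://tracker.example/probe/webkit); }
    }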
> Apps need it to determine where to place elements.
Could they hide the actual window dimensions from website javascript by only allowing a special kind of sandboxed function to access it? The website's code only really needs to do arithmetic on those values, so the browser could deny access to the actual values and force the code to manipulate them symbolically.
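As a sketch of what that could look like (purely hypothetical; no browser exposes anything like this today), the page would get an opaque handle it can compute with and hand back to layout, but never read out as a number:

    // Hypothetical: an opaque width the page can compute with but not read.
    class OpaqueLength {
      #px;                                    // actual value, hidden from page code
      constructor(px) { this.#px = px; }
      add(n)   { return new OpaqueLength(this.#px + n); }
      scale(f) { return new OpaqueLength(this.#px * f); }
      applyAsWidth(el) { el.style.width = this.#px + 'px'; }
      // Deliberately no valueOf()/toString(), so the value can't be turned
      // into a string and stuffed into a network request.
    }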
If I'm allowed to query the position and/or size of anything else in the DOM, I can figure out the window size by aligning elements at the edges, or by making one 100vw x 100vh and querying its position/size, so you really can't let me access the position or size of anything. I might have elements styled based on media queries, or old-fashioned DOM queries, so if I'm allowed to change how a button looks based on window size, I can then check something about that element that isn't directly related to size or position. For example, it doesn't make sense to have a "download the app" button on desktop, but if you let me make it invisible then you can't let me query its visibility. This is true of all styling: if you let me derive it from vh/vw, then you can never let me query it after that, which makes a lot of things tricky.

Trading functionality that relies on DOM/media queries for privacy is totally valid; I'm just saying that it will make some non-obvious things impossible for a developer to do, and there are sites today that people enjoy using whose core functionality will break if this is the future. Browser-based CAD tools were recently discussed on HN, and those are right out. Really, I think the future is both, but I'm not quite sure how they'll coexist.
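To illustrate the 100vw x 100vh trick mentioned above, a rough sketch using only standard DOM APIs:

    // Even if window.innerWidth/innerHeight were hidden, a viewport-sized
    // probe element leaks the same numbers through ordinary layout queries.
    const probe = document.createElement('div');
    probe.style.cssText = 'position:fixed; top:0; left:0; width:100vw; height:100vh;';
    document.body.appendChild(probe);
    const { width, height } = probe.getBoundingClientRect();  // ~ the viewport size
    probe.remove();
    console.log('viewport is roughly', width, 'x', height);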
> Trading functionality that relies on DOM/media queries for privacy is totally valid
Perhaps it should be a site-specific permission like the microphone or camera. Your generic news site doesn't need that functionality (and shouldn't ask for the permission - you'd know something shady was going on) but your browser-based CAD tool would and you'd grant it there.
This would cause permission fatigue. Only the most sensitive things should require a permission prompt, and these capabilities are used widely enough that they should not be behind one.
If we went down this path, I think any permissions dialog would come at the end of a very long PR campaign and feature ratcheting to get developers to update their sites to not need the permission unless absolutely necessary. Sort of like what happened with the deprecation of Flash.
That part doesn't seem too unreasonable to me, but you could also just go with the largest available size and then scale it as necessary on the client.
The browser could pick a fake screen size, and behave in a way that is consistent with that fake screen size. This would probably break many sites, but it would mitigate fingerprinting if a common size was used.
I doubt that is avoidable, as the browser would still probably need to render at the false viewport dimensions. For a common adversary, fingerprinting based on timing would be more involved and less useful.
Even if they do, there's variation in what "full screen" means. Some people have the bookmarks toolbar enabled, others don't. Some have compact icons, others don't. Some keep a sidebar open, others don't. Some use a theme that changes the size of toolbars. Some have a dock/taskbar always visible, some have differently sized taskbars, etc.
This all leads to a huge variation between users of even the same screen size (e.g. 1920x1080), since the portion of the screen available to the page is different.
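For a sense of scale, a page can already read all of this without any permission, and the gap between the screen and the viewport is exactly that per-user chrome:

    console.log('screen:  ', screen.width, 'x', screen.height);
    console.log('viewport:', window.innerWidth, 'x', window.innerHeight);
    console.log('chrome:  ',
                window.outerWidth  - window.innerWidth, 'x',
                window.outerHeight - window.innerHeight);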
The Tor browser fixes this by having the window always be the same size on all machines, regardless of screen resolution. This is a bit annoying because it means you have less stuff on the page visible at a time, but since it makes you look the same as every other user, it's worth it for privacy conscious users.
Yes. I'm in adtech. 60% of browsers are mobile/tablet which are already fixed. The rest are almost always fullscreen. Maybe 2% have non-standard sizes.
Fixed except when the user enables android split-screen mode!
I believe split-screen mode implies the height of the browser window can change at runtime (in a JS visible way), but haven't looked at it recently.
I'm not sure what percent of people customize their dock height on macOS, but that setting uses a slider, which would cause a bunch of unique heights for a maximized browser.
The OS chrome between users varies a ton. Each taskbar, dock and titlebar can have their own size. In my case I'm using a window manager without decorations, so I don't even have a titlebar!
Huh, I thought the original was a sarcastic question. In that case, let me explain:
I keep a browser window open at all times. It is never full screen, because if it were full screen I wouldn't be able to see multiple windows at the same time.
I keep my browsing window as close to 1024x768 as possible. In 2019, a lot of websites can't handle a browser window using a mere 75% of the laptop screen, so they either render incorrectly or, worse, switch to a mobile view. When that happens, I either blacklist the website forever in a contemptuous fervor, or just resize the window. Apparently, this resizing action is trackable.
When I say "as close to 1024x768 as possible," I mean exactly 1024x768 unless I have resized it and forgotten. I use a little AppleScript thing to resize it to 1024x768, precisely for browser fingerprinting reasons. When you resize the window by hand, you typically end up with a VERY unique window dimension.
Even if you had 100 users with a 1024x768 screen resolution, they could be fingerprinted further because of small differences in the browser. Zoom setting, toolbar size, whether the bookmarks button is showing, full-screen mode, small icons, additional toolbars, taskbar auto-hide, and a larger-than-standard taskbar all affect the viewable area of the browser, and that viewable area is what the site operator or analytics will see.
It matters because if 99% of people have the same 5 configurations and only the outliers are identifiable, then this method would not be as valuable for spying as it is reported to be.
Would something like Perl's taint functionality work? I.e., all values derived from size, position, colour, pixel data, user agent, etc. are marked as tainted, and are stripped (or randomized or replaced with default values) from data that is sent over XMLHttpRequest and other communication methods. It's probably extremely hard to make that watertight though.
Even if it was implemented perfectly, you could work around that using timing side channels.
For example, multiply the value (e.g. window width) by some huge number, perform a slow operation in a loop that many times, and finally clear a flag. Meanwhile another thread is filling an array one by one until the flag gets cleared. The index it reached, which is itself not tainted, indicates your approximate window width.
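A rough sketch of that channel, assuming SharedArrayBuffer is available (it requires cross-origin isolation in current browsers); the worker just counts instead of literally filling an array, which is the same idea:

    const sab = new SharedArrayBuffer(4);
    const flag = new Int32Array(sab);

    const counterSrc = `onmessage = (e) => {
      const flag = new Int32Array(e.data);
      let count = 0;
      while (Atomics.load(flag, 0) === 0) count++;   // spin until the main thread flips the flag
      postMessage(count);
    };`;
    const worker = new Worker(URL.createObjectURL(
      new Blob([counterSrc], { type: 'text/javascript' })));
    worker.onmessage = (e) => console.log('width-proportional count:', e.data);
    worker.postMessage(sab);

    // Burn time proportional to the (hypothetically tainted) width, then flip the flag.
    const ITERS_PER_PX = 1e5;                        // tune so the loop takes measurable time
    let sink = 0;
    for (let i = 0; i < window.innerWidth * ITERS_PER_PX; i++) sink += Math.sqrt(i);
    Atomics.store(flag, 0, 1);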
>Apps need it to determine where to place elements.
This determination can't be done client-side? In other words, if I resize the window, it's going to send the new size to determine where to place the elements in the "new" area?
The W3C should probably create a new, rich spec hundreds of pages long so that frontend developers may instead declare images as a unitless set of point relationships to be rendered at any resolution without digital artifacts.
For example, instead of working on the pixel level, the developer would be free to simply declare, "an arc may exist in one of these four locations." Then, merely by declaring two further "flag" values, the developer can communicate to the renderer which three arcs not to draw, except for the edge case of no arc fitting the seven previously-declared constraints.
Just imagine-- instead of a big wasteful gif for something as simple as an arc animation, the developer would simply declare, "can someone just give me the javascript to convert from arc center to svg's endpoint syntax?" And someone on Stackoverflow would eventually declare the relevant javascript.
The browser can also ship with a pretrained GAN, so the site just asks for a picture of a cat and then the GAN creates one as needed, but nobody will know exactly which cat you saw.
You can't. Even if you block JavaScript, they can still get it from CSS media queries, where they can say which file to download given a certain screen size.
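For example, a minimal sketch of the width-bucket version (the tracker.example /probe/ endpoints are made up; the server just logs which one each visitor fetches):

    @media (max-width: 800px) {
      body { background-image: url(https://tracker.example/probe/narrow); }
    }
    @media (min-width: 801px) and (max-width: 1400px) {
      body { background-image: url(https://tracker.example/probe/medium); }
    }
    @media (min-width: 1401px) {
      body { background-image: url(https://tracker.example/probe/wide); }
    }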
The best solution is to use the same screen size as everyone else so you don't stand out. And that's what this does.
Indeed, but not revealing the screen size is super easy. Just turn off javascript (except, maybe, for a whitelist of 2 or 3 sites where you really need it).
It's more a case of the web being used in ways it wasn't really originally intended to be used. Of course developers can implement things poorly and create problems (and often do), but demand for things like responsive sites is user-driven, in my experience.
If you don't understand how the web works and actively dislike the community I don't understand why you keep commenting here.
It might be good progress, but it still sounds very low to me, as I didn't know anything below 100% was even possible... it sounds crazy to me (almost like something that was introduced to make it possible to inject backdoors undetected).
Lots of problems come from things like timestamps, or race conditions in concurrent build systems giving slightly different bytes on disk. These generally aren't "trusting trust" level problems, since they do not and cannot affect program behaviour; but they do screw up things like digital signing, cryptographic hashes, etc. which are useful for automatically verifying that self-built artefacts are the same as distro-provided ones.
These problems can also cascade, if component A embeds the hash of another component B, e.g. to verify that it's been given a correct version. If that hash comes from an unreproducible upstream, and building it ourselves gives a different hash, then we'll need to alter component A to use that new hash. That, in turn, changes the hash of component A, which might be referenced in some other component C, and so on.
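A toy illustration of the cascade (Node, with strings standing in for the built artifacts):

    const crypto = require('crypto');
    const sha = (s) => crypto.createHash('sha256').update(s).digest('hex');

    const upstreamB = 'component B, built 2019-06-03 12:00';   // upstream's unreproducible build
    const ourB      = 'component B, built 2019-06-04 09:30';   // our rebuild differs only by timestamp

    const upstreamA = 'component A, expects B = ' + sha(upstreamB);
    const ourA      = 'component A, expects B = ' + sha(ourB); // we're forced to patch A

    console.log(sha(upstreamA) === sha(ourA));   // false: A's hash changed too,
                                                 // so anything embedding hash(A) must also change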
Every significant project I've worked on embedded the build host and build time in the resulting executable or firmware image. This was along with other static build information, like version number, compiler version and build flags.
Once you make the sensible choice to include build time in the result you've broken reproducibility. Fixing this means tracking down every package that does this and removing the timestamp.
What I've moved to is splatting that info into the binaries during the release process. As far as I can tell there aren't standard tools to do that, though. At least there weren't last time I looked.
Would be nice if there were. I think this is the root of issues such as firmware shipping with the same password/crypto keys across a whole product family instead of unique ones.
A timestamp is sensible if reproducibility isn't your goal, and exact reproducibility of build artifacts was never a goal on any of my projects. It was simply never a priority.
Can anyone else say "fuck you, Google"? They're going to have to build it anyway. I know that the NSA will profit from it, but still... that is pushing it.