> I am an app developer. How do I protect my users?
> We are not aware of mitigation strategies to protect apps against Pixnapping. If you have any insights into mitigations, please let us know and we will update this section.
IDK, I think there are obvious low-hanging attempts [0] such as: do not display secret codes in a stable position on screen? Hide them when in background? Move them around to make timing attacks difficult? Change colours and contrast (over time)? Static noise around? Do not show the whole code at a time (not necessarily so that the user could observe it: just blink parts of it in and out, maybe)? Admittedly, all of this will harm UX more or less, but in naïve theory it should significantly raise demands on the attacker.
[0] Provided the target of the secret stealing is not in fact some system static raster snapshot containing the secret, cached for task switcher or something like that.
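To make one of those ideas concrete, here is a rough sketch of the "move it around / never show the whole code at once" idea, written as plain DOM JavaScript purely to illustrate the principle (the element id and the numbers are made up; a real Android app would do the equivalent on its own views):

// Hypothetical: #otp is the element that displays the one-time code.
const otp = document.getElementById('otp');
const code = '492 817'; // the secret to present

setInterval(() => {
  // Jitter the position a little so the target pixels never sit still.
  const dx = Math.random() * 6 - 3;
  const dy = Math.random() * 6 - 3;
  otp.style.transform = `translate(${dx}px, ${dy}px)`;

  // Only show a random subset of the digits at any instant; over a few
  // frames the eye still reads the whole code, a per-pixel sampler less so.
  otp.textContent = [...code]
    .map(ch => (ch === ' ' || Math.random() < 0.7) ? ch : '\u2022')
    .join('');
}, 80);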
Huh. I remember a while ago Google Authenticator hid TOTP codes until you tap on them to reveal them. I remember thinking this was an absolutely stupid feature, because it did not mitigate any real threat and was annoying and inconvenient. Apparently a lot of people agreed because a few weeks later, Google Authenticator quietly rolled that feature back.
I wonder if they were aware of this flaw, and were mitigating the risk.
They could have made it a setting, with an explanation of the security benefits of it, so that folks who are paranoid can take advantage of it.
A relevant threat scenario is when you're using your phone in a public place. Modern cameras are good enough to read your phone screen from a distance, and it seems totally realistic that a hacked airport camera could capture email/password/2FA combinations when people log into sites from the airport.
Ideally, you want the workflow to be that you can copy the secret code and paste it, without the code as a whole ever appearing on your screen.
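Something like this, as a web-flavoured sketch (the element id and the code-generating helper are made up; a native app would use its platform clipboard API instead):

// Hypothetical: put the current code on the clipboard without ever rendering it.
async function copySecret() {
  const code = computeCurrentCode(); // stand-in for the real TOTP derivation
  await navigator.clipboard.writeText(code);
  document.getElementById('copy-btn').textContent = 'Copied'; // feedback without the value
}

function computeCurrentCode() {
  return '123456'; // placeholder; a real app derives this from the shared secret
}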
I've made something (probably) very similar for a quick GB vs US pronunciation check that also leeches on Google's snapshot of what I believe is a licensed copy of the Oxford collection, the same way the shell script does, but mine "runs in the browser's URL bar" instead. It's a super tiny dataURI HTML document, intended to be bookmarked with a keyword (say, "say"):
then hitting Tab plays it in British and Shift+Tab plays it in US English. It uses the older 2016 batch, because I totally adore the US voice in it: just listen to "music" [1] and tell me it isn't pure ASMR.
(I'm afraid it's just a matter of time before they prevent our mischief, though.)
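If anyone wants to roll their own, the general shape is roughly this (not my actual bookmark; the sound-file host and filenames are left as placeholders on purpose, and it assumes the browser substitutes %s in keyword bookmarks and lets focus start playback):

<!-- Save the document below as a data:text/html,... bookmark (URL-encoded) with keyword "say". -->
<!-- %s gets replaced by the looked-up word when the keyword bookmark is used. -->
<audio controls onfocus="this.play()"
       src="https://SOUND-HOST-GOES-HERE/%s_gb.mp3"></audio> <!-- Tab: British -->
<audio controls onfocus="this.play()"
       src="https://SOUND-HOST-GOES-HERE/%s_us.mp3"></audio> <!-- Shift+Tab: US -->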
Ha ha, really glad to hear that. (The fact is, I am kind of a freak/junkie about human voices, and that particular one stands really high on my list of irresistible, tingle-inducing specimens. So happy to hear I am not alone.)
Have you found any you like in the AI world for text to speech? I know ElevenLabs and OpenAI have voices, but I'm hoping to build something that can be run locally.
> I do think an interesting approach would be a browser extension that lets you override the prefers-color-scheme property on a per-domain basis, similar to the toggle in dev tools.
Presumably, most users wanting a flashbang-less browsing experience use the Dark Reader extension or similarly radical solutions.
The sad truth is that user preferences and per-site persistence for stuff like this should always have been the browser's responsibility to begin with: just the same way font size / page zoom already is, and likewise some (blatantly limited) security settings. A (bitterly) amusing fact is that there has been a concept of "alternate stylesheets" since the beginning of CSS (still part of the spec [0], no support outside Gecko), which also faded into obsolescence for its lack of persistence. So to this day, Firefox, for example, has a View → Page Style menu where the user can choose an alternate stylesheet, but the choice is not preserved across navigations, so it is pretty useless on its own.
Similarly userstyles: the specifications dictate that there is a user CSS origin level, how it should behave, and that all "user agents" are supposed to give the user a way to enter the cascade this way, but they give no official way to scope individual recipes to concrete origins. That's what the unofficial `@-moz-document` extension was for, and it briefly had a chance to be formalised [1]. But I digress.
(Likewise all the "European" cookie banners: a tragic example of regulation applied at the wrong level. Instead of putting users in charge with the help of their "user agents": implicitly blocking pretty much everything and using a permissions system that would actually have a chance to be more than "pinky promise we will not track you if you don't click this toggle inside our banner". But I digress even more, sorry.)
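For reference, that scoping looked like this (it still works in Stylus and in Firefox's user sheets; the domain and colours are just an example):

/* Scope a user rule to a single origin */
@-moz-document domain("example.com") {
  body {
    background: #1b1b1b;
    color: #ddd;
  }
}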
> I'd be curious to know if anybody has found a way to avoid this issue with JS switchers -- ideally without needing to delay the initial paint.
At this point, when browsers do not support a per-site user preference for that natively, the pragmatic (most robust) way would be to respond with a properly set HTML payload straight away. There is even a specified HTTP header for this, so once adopted in browsers we could even ditch HTTP cookies [2] for the persistence, but it seems quite demanding on the server (IIUC negotiating these "Client Hints" takes an extra initial request round-trip).
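(For the record, the header in question is Sec-CH-Prefers-Color-Scheme, and the negotiation goes roughly like this, hence the extra round-trip on the very first visit:)

# Server's first response asks for the hint and marks it critical
Accept-CH: Sec-CH-Prefers-Color-Scheme
Critical-CH: Sec-CH-Prefers-Color-Scheme
Vary: Sec-CH-Prefers-Color-Scheme

# The browser then retries/follows up with
Sec-CH-Prefers-Color-Scheme: dark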
Pragmatically, I guess having early-running JS in the HEAD that ensures the proper color-scheme is set on the root node and that only the proper stylesheets load should pretty much prevent most flashbangs, provided the relevant bit arrives early enough from the server. I think there does not exist any good no-JS-no-cookie (or any JS-less persistence) solution that supports navigations, sadly.
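A minimal sketch of such an early script (the storage key and attribute name are my own, not any standard):

<script>
  // Inline in <head>, before any stylesheet, so it runs before the first paint.
  (function () {
    var scheme = localStorage.getItem('preferred-color-scheme'); // 'dark' | 'light' | null
    if (scheme === 'dark' || scheme === 'light') {
      document.documentElement.style.colorScheme = scheme;
      document.documentElement.setAttribute('data-theme', scheme);
    }
  })();
</script>

Whatever the stylesheets key off (here the data-theme attribute) is then already in place before anything paints, so nothing flashes.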
Firefox does have a global setting to override the System setting if you want system dark but webpages light, for instance.
Most browsers also support per-page overrides, but the only consistent place to find them is Dev Tools, which is a shame.
I think browsers decided to invest in "Reader Mode" as a UX over more direct control of user styles and page styles, and I'm not always sure that is the correct choice, but I can understand how it seems the simpler "one-button" choice.
> Great! Then the user gets his preferred font, as requested, instead of the one the page specified.
No. You've misread the main point. The user would have gotten his preferred font if the font stack was either just plain
font-family: monospace;
or
font-family: <list of fonts their system does *not* support or does *not* allow to be used>, monospace;
. But the case is that the suggested font stack contains some "unwanted" font that their system both supports and allows to be used, and that precedes the generic `monospace` font family the user actually prefers, or, more precisely, has assigned their preferred typeface to. Is it clearer now?
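For concreteness, the problematic case is a stack like
font-family: Consolas, monospace;
on a system where Consolas is installed and allowed to be used: it is matched first, so the typeface the user mapped to `monospace` never gets a say.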
I agree it is not a huge "bug" at first sight, and it seems even this is somewhat solvable without disabling font support completely. But since it takes some effort and expertise on the user's side, it gives the "bug" some weight nonetheless.
That hurts. I see where you are standing, and I can confirm you expressed the opinion of the contemporary majority of browser users, but man, how sad that is. The attitudes have diverged by light years, when "setting preferred fonts for generic font families" is now "esoteric". (Web) browsers ("user agents") came into existence with these capabilities in mind, and even now are built around the principle of "preference reconciliation" between defaults, author and user (as opposed to simply "display what the author dictates"). And default font choice is probably the very first aspect they ever had to handle.
Browsers have ceded way too much control to web designers. The user should be in control. When it comes to what fonts the computer uses, the text size, the color scheme, the user's preference should be able to easily override the web site's code. Whose computer is it, anyway?
I'd be pretty over the moon if the browser supported the following preferences... especially given the number of Electron or otherwise browser-embedded UI options.
It might be reasonable to have more than this, and the accent and highlight color may or may not be the same color... but it would go a long way towards matching the system defaults, with appropriate css variables injected as well.
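The closest existing thing I'm aware of is the CSS system colour keywords, which at least let a page follow part of the platform theme (illustration only, support varies by browser and OS):

:root {
  color-scheme: light dark;  /* follow the OS light/dark preference */
  background: Canvas;        /* OS/browser default background */
  color: CanvasText;         /* OS/browser default text colour */
  accent-color: auto;        /* form controls follow the platform accent */
}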
The branding people have definitely won this war here. I agree with you, but the answer to your final question is, sadly these days, never the user or the supposed owner of the hardware. I think it’s pretty easy to argue today that when you boot a computer or phone, it belongs to Apple or Microsoft or Google. When you open a website or “app,“ the computer temporarily belongs to its developer. The fact that even browsers don’t have a built-in, simple-to-configure option to toggle persistent cookies on or off per website, opt-in, of course, is all the evidence you need of that for the web. None of this is OK with me, but it’s the world we have now.
The problem is that most fonts don’t support basic OpenType features. I make heavy use of small caps on my websites (they are IMO criminally underrated). If I were not using a custom font, most users would get hideous “synthetic” small caps.
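(In CSS terms: real small caps come from the font's smcp feature, and the faked ones can at least be refused explicitly; support varies, and the class name below is just an example.)

.small-caps {
  font-variant-caps: small-caps; /* use the font's real small caps if it has them */
  font-synthesis: none;          /* refuse faked bold/italic/small-caps rather than synthesize them */
}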
The esoteric part is the combination of "Setting preferred fonts for generic font families" AND the security adjustments necessary to trigger "Request for font XYZ blocked at visibility level 2"
Sure if you want to set browser prefs for fonts, go for it. It'll make the OG sites with almost no stylesheet a little more readable (looking at you, wiki.c2.com). But you should not expect or ask web page authors to not use their preferred fonts. If you want to override web page fonts, use a more invasive or pervasive tool.
Font/page size preferences, on the other hand, web page authors should respect and do a better job with.
It's a mixed bag... the designer of a given website has an intended look/feel and style... if you override that you can do as you like, but it's not like the author's intent should always simply be dismissed.
Beyond this, not every web developer expressly wants to burden a browser with a specific web-font payload when they have a close/suitable match, where a modern font stack is good enough in terms of design intent.
Third, if all else fails, the user sees their own selected default... I'm not sure that I understand the objection here... As long as the markup is appropriately semantic and the font is one that actually scales to an appropriate px/pt size, it should be fine. If the selected font/typeface doesn't, then it's on the user to select a better default/fallback.
> it's not like the author's intent should always simply be dismissed.
Yes it is. The designer should always understand that the user is ultimately in control of a web page, and that their (the designer's) vision is not what matters at the end of the day.
If you choose to use w3m or lynx you get what you get. Same for disabling fonts or JS... most people don't have time to cater to 0.05% of users who go way off the norm.
It is not the default, and it explicitly indicates that this kind of outcome can potentially happen. But I truly agree that the situation here is suboptimal in all aspects.
Also maybe worth noting that we can always force our (three) font faces everywhere simply by unchecking "Allow pages to choose their own fonts" in settings. Yes, this is the nuclear option, but I can attest that I use it from time to time, and it is quite usable.
BTW, I have a somewhat softer workaround that interestingly makes the (local) Cascadia on modernfontstacks work even in Strict Tracking Protection mode: I have a "userstyle" [0] (more precisely a userCSS in Stylus) that "remaps", among other things, "Consolas" to a @font-face of the same name but loading `src: local("Cascadia Mono")` instead. Not sure why exactly this circumvents that (I don't think Stylus-injected styles have more privileges than page styles), but I am glad it works nonetheless.
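The relevant rule is essentially just:

/* Pages asking for Consolas get the locally installed Cascadia Mono instead */
@font-face {
  font-family: "Consolas";
  src: local("Cascadia Mono");
}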
> ...unchecking "Allow pages to choose their own fonts" in settings. Yes, this is the nuclear option, but I can attest that I use it from time to time, and it is quite usable.
Good question! Actually (to my minor dismay): not completely.
Disabling "font support" in Firefox surprisingly still has a hatch for "well-known" icon fonts, with intention to prevent "blind" icons in webpages. I believe it is driven by the pref
that contains "FontAwesome" and (Google) Material Icons and Symbols (many, presumably all, variants). So to truly disable all "non-preferred" fonts, we have to both wipe that pref and also change for the
browser.display.use_document_fonts
to zero. But that's what the GUI checkbox controls, so no need to go to about:config for this one.
What’s bad for usability is using icons on their own. Using icons with visible labels is the best practice among people who actually want their software to be usable.
And of course “bad for usability” becomes absolutely catastrophic for a11y.
For those who prioritize aesthetics over usability and use icons alone though, there are at least a dozen methods to make assistive tech read the names of your icon buttons. Something as simple as aria-label is one way.
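E.g. (the icon markup itself is just illustrative):

<!-- icon-only button that still has an accessible name -->
<button type="button" aria-label="Search">
  <span class="icon icon-search" aria-hidden="true"></span>
</button>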
> Also maybe worth noting that we can always force our (three) font faces everywhere simply by unchecking "Allow pages to choose their own fonts" in settings. Yes, this is the nuclear option, but I can attest that I use it from time to time, and it is quite usable.
Occasionally I deliberately trial major changes for a week or two. Sometimes I revert, other times I stay. I turned font selection off in this way in early 2020 and never went back; it made the web so much better.
Out of the box, Firefox still loads fonts with certain names, to avoid breaking icon fonts. After maybe six months I decided to nuke that by blocking all fonts in uBlock Origin, and although it made some things uglier, and Material Icons is ridiculously stupid in practice (frankly achieving almost the precise opposite of its stated intent in using ligatures), it took until this year before I encountered an actual breakage (a couple of sites not realising document.fonts.load() can throw).
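(The fix on the site side is a one-liner, for what it's worth; something along these lines:)

// Don't assume the promise resolves; font blockers can make it reject.
document.fonts.load('1em "Material Icons"')
  .then(() => { /* icon font available, proceed */ })
  .catch(() => { /* blocked or missing: fall back gracefully */ });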
I encourage others to turn off font selection, though I wouldn’t encourage most to block web fonts altogether in the way I decided to.
I also urge developers to shun icon fonts: they were always a bad idea, a dodgy hack, and the time when they had meaningful justifying qualities is now long past.
Tried to dig up some info about what (the hell) it is supposed to depict, and the only official hint so far was from their video reveal transcript [0]:
> It depicts a symbol made of three flowing lines that resemble the links of a chain. The words ‘The World Wide Web Consortium’ circle around it.
So the main part is not a word mark after all, or at least not intended to be one. (Yes, hard to believe.) Some more hints may eventually appear in their Mastodon thread [1], which (to me) does not seem like a properly managed public-relations effort at all, starting from the alt text of the new logo, which reads "new W3C logo" (sic), to the single (at this point) response
> it's not a 'W', darling
(also sic). Yes, it seems that this standards body's public relations have been hijacked by some covert adversary, and I can only hope it is just the PR part.
> If you really need to translate ONE WORD, it's not that onerous to type it.
I'm confident that I can type just a tiny fraction of all Latin characters all world languages use. I'm sure that pretty much any Vietnamese word is way beyond my keyboard layout. No clue about writing any non-Latin script. Can you type any Cyrillic, Kanji, Hebrew, Abjad, …, character you see?
There are also a bunch of characters in other languages that look identical or almost-identical to ASCII characters. It’s very difficult to tell the difference with the naked eye.
Sorry if I've misunderstood sarcasm and taken your comment at face value, but are you really unaware of current developments? There are fields literally covered with thick webs of optical fibre near front lines. "Fibre optic drone" even has its own Wikipedia entry: https://en.wikipedia.org/wiki/Fiber_optic_drone
I understand that keeping track of news can be difficult, and staying out of that depressing information cycle has clear mental health benefits. However, when joining discussions about current conflicts, it's worth acknowledging any resulting knowledge gaps.
I had no idea. A kilometers long wire sounded completely infeasible to me, though clearly I underestimated the fiber optics.
I would have thought kilometers of wire would be too heavy to keep on a spool on the drone itself, and without the spool on the drone you probably can't have fly by wire. That's why I was dismissive, it sounded to me like a completely infeasible idea.
Fair enough; I remember being sceptical myself when I first read about that. Well, learnt something new today, at least. (In that WP article I see that wire-guided war devices are a much older invention than I thought.)
I also wonder why conceal bits of information from readers, when they could possibly benefit from them the same way editors and writers do. Admittedly, the outcome then seems like poetry, but … why not?
To give it a shot on that page, a simple way to see these breaks is to run
document.body.insertAdjacentHTML
( 'afterend'
,`<style>p, li { white-space: pre-line; }</style>`
)
in the devtools console. (Using `pre-wrap` instead of `pre-line` is also interesting: it indents "wrapped" lines by the source-code indent, which gives it even more clarity.)
(By the way, HN comments
also
preserve
line
breaks
in
the
source output, but unless revealed by some extra style, they are usually not presented on the surface.)
Here's a userscript to apply it automatically, and show all the extra whitespaces people put in their comments. (There aren't many.)
// ==UserScript==
// @name HN comments show whitespace
// @description Changes HackerNews CSS to show comments with original whitespace
// @match https://news.ycombinator.com/*
// @run-at document-body
// @grant none
// ==/UserScript==
// HN inserts a newline after <pre> so formatted code blocks have a whole newline after them
// but we can remove that extra space with negative margin
const HN_noformat_CSS_rule = `
  div.commtext.c00 {
    white-space: pre-wrap !important;
  }
  div.commtext.c00 > pre {
    margin-bottom: -.25em !important;
  }
`;
let myStyleHolder = document.createElement('style');
myStyleHolder.textContent = HN_noformat_CSS_rule;
document.head.appendChild(myStyleHolder);