If someone manages to place malicious code on any of the small number of websites I visit, I'm certain the first thing they'd be trying to do is exploit vulnerabilities in software that <0.001% of the site's visitors would be using. The more vulnerabilities they attempt to exploit, the faster they're going to be noticed, so that's a risky gambit.
Outside of web browsing, exploiting software vulnerabilities relies on new, untrusted code running on my machine, and in many cases on someone having physical access to it.
The threat models just aren't all that relevant to me.
The more realistic threat model is software being compromised and the auto-update feature pushing out malicious code to all of the users like what happened with the Transmission BitTorrent Client.
> The more vulnerabilities they attempt to exploit, the faster they're going to be noticed, so that's a risky gambit.
It seems your entire risk computation is based on the premise that this is true, and that, if it is, the increase in how likely they are to be noticed is high enough to make the attempt not worth the risk.
Do you actually have data to back that up, or are you just going by an assumption? Because to me it seems likely that there are blobs of JS that look for a wide number of vulnerabilities floating around blackhat sites ready to be slightly tweaked for the individual case and then deployed. That's basically the entire premise of what script kiddies are, but with JS, so I don't see why it wouldn't at least be easily available.
> The more realistic threat model is software being compromised and the auto-update feature pushing out malicious code to all of the users like what happened with the Transmission BitTorrent Client.
That's possible. It would still be interesting to see more of the reasoning on this. I can think of a few things that might mitigate what I think you are referring to, but there's not much in the way of details to address.
>Do you actually have data to back that up, or are you just going by an assumption?
An assumption. The idea is that conducting more malicious activity is generally easier to spot than conducting less malicious activity, and that trying a wide range of exploits is more likely to be noticed than a smaller, possibly more targeted exploit. Script kiddies attempt a wide range of exploits when they're going after a single target. For example, throwing any and all known vulnerabilities against a website in the hope that it's hosted on an outdated version of WordPress, in an attempt to gain access. But unless you're going for attention and defacing the home page, once you're in you want to draw as little attention to what you're doing as possible. OTOH, they might throw everything and the kitchen sink in under the assumption that they'll be caught quickly so they want to capture as much as possible before they're kicked out.
>...Because to me it seems likely that there are blobs of JS that look for a wide number of vulnerabilities floating around...
I browse with Javascript disabled. I use Tampermonkey and Stylish to inject CSS and Javascript that I write (and therefore trust) into pages I use frequently enough to justify the time spent on restoring certain functionality.
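To illustrate, here is a minimal sketch of the kind of userscript I mean (the site, the CSS, and the key binding are made up for illustration, not anything I actually use):

```javascript
// ==UserScript==
// @name         restore-scroll-shortcut (illustrative example)
// @match        https://news.example.com/*
// @grant        none
// @run-at       document-end
// ==/UserScript==

(function () {
  'use strict';

  // Inject CSS I wrote myself; Tampermonkey/Stylish still run even though
  // the page's own scripts are blocked.
  const style = document.createElement('style');
  style.textContent = 'body { max-width: 70em; margin: 0 auto; }';
  document.head.appendChild(style);

  // Restore one small piece of functionality the site normally provides
  // through its own (blocked) Javascript: press "t" to jump back to the top.
  document.addEventListener('keydown', (e) => {
    if (e.key === 't' && !e.ctrlKey && !e.altKey && !e.metaKey) {
      window.scrollTo(0, 0);
    }
  });
})();
```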
> The idea is that conducting more malicious activity is generally easier to spot than conducting less malicious activity, and that trying a wide range of exploits is more likely to be noticed than a smaller, possibly more targeted exploit.
I understand the idea, I just think some of the assumptions that go into it are wildly unproven. I would think you're just as likely (if not more likely) to be exposed through a high-traffic site that is dangerous for a very short period of time as through a low-traffic site that's exposed for a longer period.
E.g. if nytimes.com is exploited, there's probably a window of minutes to an hour before it's noticed and fixed, and the fix may take as long to happen as the first notification of a problem did, or longer. In that scenario, stacking every exploit you can think of shortens the unnoticed window by fairly little, and it likely doesn't reduce the time to fix after notification at all.
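To put rough numbers on that (the figures below are completely invented, just to show the shape of the comparison):

```javascript
// Back-of-envelope: total exposure is roughly visitors per hour times the
// number of hours the exploit stays live. All numbers are made up.
const highTraffic = { visitorsPerHour: 1000000, hoursLive: 0.5 }; // noticed and pulled quickly
const lowTraffic  = { visitorsPerHour: 200,     hoursLive: 72  }; // lingers for days

const exposure = (site) => site.visitorsPerHour * site.hoursLive;

console.log(exposure(highTraffic)); // 500000 visitors hit in half an hour
console.log(exposure(lowTraffic));  // 14400 visitors hit over three days
```

Even a very short window on a big site can dwarf a long window on a small one.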
So, is this assumption any less plausible than yours? The only difference is that whether my scenario is right or wrong, it promotes behavior that doesn't leave you more vulnerable to exploitation by random sites, while for yours, if you're wrong (and you act on it, as you are), it leaves you more vulnerable.
>>...Because to me it seems likely that there are blobs of JS that look for a wide number of vulnerabilities floating around...
> I browse with Javascript disabled.
Okay, then image library exploits, or CSS parsing exploits, or any number of other things. For example, ambiguously listed stuff like this[1], or this[2], or this[3], or this[4]... or how about I just point you at a bigger list[5] (code execution exploits for Firefox, reverse-sorted by date, with a severity rating over 9. There are hundreds). It's not like Javascript has been the only attack vector of the last few years. I have no idea how many of these affect that Firefox version. My guess is it's more than two or three.
I addressed that in the response - and admitted that, yes, it is an assumption against the largest attack vectors.
>OTOH, they might throw everything and the kitchen sink in under the assumption that they'll be caught quickly so they want to capture as much as possible before they're kicked out.
If I felt the risk was large enough to be concerned over, I'd fork and backport the patches and compile a personal version of FF 36 with the critical bugs patched. I'll be honest: "possibly execute arbitrary code via unknown vectors" is not one of my highest security concerns, and a concerning number of these are only possible when running Javascript. In fact, the most concerning issues I saw are video codec related. Reading a few of them, they require me to decode/play the video in the browser to trigger, so I can probably avoid that by downloading videos to watch locally in a media player instead of through Firefox.
The severity of a bug doesn't matter to me as much as how trivial it is to exploit and how it needs to be exploited.
Those combating malicious sites routinely set up very easy targets running outdated software. Many exploit kits nowadays are constantly updated to find new avenues of attack and to avoid such obvious targets.
I imagine things meant to sit on servers for as long as possible without detection, expecting weeks or months, have a different strategy than something targeting a website that sees a lot of public traffic. If you expect discovery within hours anyway, a lot of the benefit of keeping a low profile may be negated.
While I agree with you that the threat model may or may not be very relevant for your usage, I disagree that just because you're on an old/minority version you'll be less likely to get pwned. Try hooking up some Windows XP and some of its services to the internet sometime... Additionally for software like browsers many vulnerabilities are found that affect basically every previous version back several years, and only get fixed in the newer versions. Of course there will be new vulnerabilities that only affect FF > 56 that you don't care about.
FWIW I'm still on 52.9.0 on my home PC... (At work I use the latest, it does keep improving, but feels very much 2-steps-forward-1-back when they keep doing stuff like removing long-standing features.) Some of the vulnerabilities that have been fixed in later versions are potentially concerning. I rely a lot on NoScript (pre-Quantum NoScript even detects click-jacking attempts, which post-Quantum doesn't), ad blocking, link un-shorteners, not running Windows/MacOS, and generally not visiting every sketchy site I may be pointed to, but it's still risky -- e.g. a rogue SVG might pwn me one day. I've accepted the risk, for now.
Even as the risk becomes untenable, I worry that as Mozilla continues its war against its users we'll end up in a Windows 10 situation where malware that's actually out there (rather than hypothesized) targeting older versions is generally going to be more respectful of your PC than the software vendor is. A lot of malware probably won't force you to reboot (or restart -- had a wtf moment when I opened a new tab in FF and it couldn't render anything, saying I needed to restart because it had silently updated something), or remove features you use all the time, or constantly nag at your attention about stupid stuff... Ransomware is probably the most user-unfriendly you're likely to get (that impacts your experience, I'm ignoring passive data harvesters that drain your bank account when you're on vacation), but then you have backups, right?
>While I agree with you that the threat model may or may not be very relevant for your usage, I disagree that just because you're on an old/minority version you'll be less likely to get pwned.
It's not quite that - the context of where the attack is coming from is important.
A site that has been compromised isn't the same threat as visiting an actively (and always) malicious website, which isn't the same threat as downloading and opening files, which isn't the same threat as downloading new software, installing it, and running it.
If I ran Javascript, I'd have a different threat model. If I regularly downloaded files or software, I'd have a different threat model. If I browsed every website I come across like some sort of a web crawler, I'd have a different threat model.
You must not realize how slow a lot of companies are in updating software. If you're doing something like online banking, why expose yourself to that risk at all?