I feel like this gets at something you don't really hear talked about much: for a very long time, we as an industry have assumed, when thinking about how to secure our products, that the attackers those products would face would be either low-skill amateurs ("script kiddies") or, at worst, small gangs of professional criminals. But more and more we're seeing a new class of attacker: actual nation-states (and their associated intelligence services), who can bring resources and expertise to bear that those other attackers could only dream of. Which means they can crack open defenses that were more than sufficient against the attackers of the past like soft fruit.
That's a big change, and I don't know that most developers have even started to grapple with what it means for the products we build and the way we build them.
Combine nation-states creating tools that can then be repurposed by anyone with hardware that is insecure all the way down, and you have a cyberworld where it looks like offense is going to be superior to defense for a while.
This is the way real-world warfare has gone historically. Guns made medieval armor obsolete, defenses against missiles are more fig leaves than serious factors, and small, mobile units have become the standard.
Where does it end? Or at least go next? The death of the general-purpose computer [1], that's where. Signed bootloaders show up as opt-in, luring you with safety. Soon enough, they'll be mandated by legislation. Next, ISPs will be required to run authentication protocols against all devices -- no more anonymity.
Cellular providers are already feature-gating users based on whether their crapware is installed on user devices. Certain providers also gate based on whether you bought the phone from them, via the IMEI number. Updates to devices often differ based on the phone's "affiliation." They can easily corral people into using specific firmwares.
And in a world like this with weaker defense than offense, the only good defense is a good offense, i.e., deterrence.
In military terms, this means an attacker knows that provoking the enemy will mean hell to pay, so they can only engage in asymmetric warfare.
In the computing space, we don't have much of an offense. The best I've seen was the Blue Frog anti-spam service, which was great while it lasted. Russian crackers have mostly learned not to attack their own government unless they want the appropriate unit of measure for their expected lifespan to change from decades to weeks. It seems we'll need to live very cautiously for some time, or develop responses that 'reach out and touch someone' (specifically the attackers).
edit:+ Moreover, it seems that in international cyber-attacks, some of the responses will need to be kinetic in order to deter.
Then you’d better be able to prove you’re striking the right targets, and that the purpose of the attack you’re responding to isn’t to elicit that strike.
Yup. That's the job of good intel agencies and cyber-defense.
Of course with Blue Frog, that work was already done -- just automate the complain/unsubscribe response, and the spam campaign becomes a DDoS on the advertiser. I'd love to see that back again.
Same for spam/spoofed-ID tel calls. Answering and trolling the telemarketer by wasting their time is nice but ineffective 1-on-1. It'd be nice to ID & target them with a scaled-up response.
The bit at the bottom should provide a clue: it starts by advertising the invulnerability of Microsoft's more restricted operating systems to attacks like this.
Like it or not, in the near future most ordinary computing devices will only run signed, vendor-approved, whitelisted code from the bootloader all the way down to the application level. Anything else is a security nightmare waiting to happen.
I think it's strange how citizens are fined and jailed when they crack other people's computers but it's legal and necessary when the state does it. Apparently, they maintain vulnerability stockpiles.
If you see it that way, it seems unfair. But if you look closer, you'll see that it is at least consistent with other qualities of the state and its agents, namely that only state agents are allowed to do certain things -- for example, to judge a trial and execute the verdict. If those rights were equally held by individuals, you'd end up with the chaos of people not recognizing each other's jurisdictions. (Perhaps there is a middle ground, e.g. something like classic anarchism that allows for tiny, hyper-local states. But even micro-states must interact with each other in ordered ways for the system to work.)
"Cracking someone's computer" is, these days, equivalent to acquiring total surveillance on them, past present and future. So yeah, personally I'm very okay with this being illegal, and I'm not really okay with the state doing it either. It's not actually like wire-tapping, it's more like acquiring panopticon with perfect memory over the lifetime of the target, and it seems to me with such a power you could find countless illegal activity in virtually everyone's life.
The amount of obfuscation going on here is very impressive. It always makes me wonder how these things are even written and designed in the first place.
I seem to recall a C++ protection library that would allow you to define custom versions of types like int and long. It would use operator overloading to allow you to use the types in a fairly normal fashion, but underneath these operators would actually be implemented as a very complicated virtual machine. This way, your algorithm code could look fairly normal but it would compile to an absolute beast of a protection layer.
Microsoft gave a Jan 2018 presentation on Windows 10 "Hardening with Hardware" features that are going to be rolled out in upcoming releases, e.g. Hyper-V dynamic root of trust, remote attestation, per-app VM with copy-on-write memory, GPU isolation via IOMMU, secure enclaves.
Great read. Most of my knowledge in this area is for macOS and Linux, so it’s interesting to see how obfuscation is done on Windows. The core ideas are the same: add weird instructions to fool disassemblers, indirect through a VM, disable debugging and tracing, etc., but the techniques differ on each platform. Plus you have creative solutions like the one mentioned in the article: taking a screenshot and displaying it over everything to give the impression that nothing malicious is going on, all while code you couldn’t figure out how to subvert is displaying security messages underneath.