It's a very good question. The stand-in system itself was built to have essentially no external dependencies.
So, the question you are really asking is "to what extent are the other parties involved in the processing of payments resilient to AWS failure" – e.g. Stripe probably isn't and that's probably a decent chunk of e-commerce.
I definitely don't think this would be anything close to smooth sailing if AWS were to fully go down, but we do have the benefit that the underlying payment infrastructure is still dominated by on-prem systems with leased lines etc. My best guess at the actual behaviour is that bank transfers would keep working, the card networks themselves would keep working, but the average e-commerce website would not.
Naturally, we can only control for what we can control for – and for us the primary benefit of stand-in is what it gives us in the much more likely scenario of an incident in our platform.
Just in case it is relevant for anyone here, this is what our security team have established thus far:
- Can be mitigated by enabling the root user with a strong password
- Can be detected with `osquery` using `SELECT * FROM plist WHERE path = '/private/var/db/dslocal/nodes/Default/users/root.plist' AND key = 'passwd' AND length(value) > 1;`
- You can see what time the root account was enabled using `SELECT * FROM plist WHERE path = '/private/var/db/dslocal/nodes/Default/users/root.plist' AND key = 'accountPolicyData';`, then base64-decoding the value into a file, running `plutil -convert xml1` on it, and looking at the `passwordLastSetTime` field.
Note: osquery needs to run as root (`sudo`), but if you have it deployed across a fleet of Macs as a daemon it will be running as root anyway.
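The base64/`plutil` dance above can also be scripted end-to-end. Here's a rough Python sketch (the function name is mine; pass it the base64 `accountPolicyData` value you pulled out of osquery):

```python
import base64
import plistlib

def password_last_set(account_policy_b64: str) -> float:
    """Decode the base64 accountPolicyData blob (a binary plist) and
    return its passwordLastSetTime, in seconds since the Unix epoch."""
    blob = base64.b64decode(account_policy_b64)
    policy = plistlib.loads(blob)  # plistlib parses binary plists directly, no plutil needed
    return policy["passwordLastSetTime"]
```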
> if `passwd` is a lone asterisk, then you haven't been exploited.
At the risk of sounding a bit pedantic, you can't really assume that: it's possible that somebody used this vulnerability, installed some sort of backdoor, and then disabled the account to hide their tracks.
Bad news: I tried the exploit on my macOS Sierra installation and it didn't seem to work. However, the `passwd` entry in the output of your first command IS A LONE ASTERISK.
I still can't log in as root, though. This leads me to believe this behavior has always been there, and maybe the login methods just didn't allow an empty password.
This is very normal on Unix systems. A `*` in the password field indicates a locked account.
ex:

```
daemon:*:1:1::0:0:Owner of many system processes:/root:/usr/sbin/nologin
operator:*:2:5::0:0:System &:/:/usr/sbin/nologin
bin:*:3:7::0:0:Binaries Commands and Source:/:/usr/sbin/nologin
tty:*:4:65533::0:0:Tty Sandbox:/:/usr/sbin/nologin
```
If the OS is letting you in with a `*` in the encrypted password field, something is very, very wrong.
When you do this you'll get the creationTime and passwordLastSetTime as seconds since the 'epoch' – January 1, 1970, 00:00:00 (UTC). These are numbers like 1474441704.265237 which aren't very easy for a human to read :-)
To convert this into a human-readable date and time, open a terminal and do this:
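The command itself seems to have been lost in the quote; on macOS, `date -r 1474441704` does the job, and the equivalent in Python is:

```python
from datetime import datetime, timezone

# passwordLastSetTime from the example above, in seconds since the Unix epoch
ts = 1474441704.265237

# Interpret it as UTC and format it for humans
print(datetime.fromtimestamp(ts, tz=timezone.utc).strftime("%Y-%m-%d %H:%M:%S %Z"))
# → 2016-09-21 07:08:24 UTC
```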
One of my Macs is showing a root password change date of Nov 10th 2017. I can't explain that, so I'm reinstalling now. It did have sshd enabled and remotely accessible, though I thought root login was prohibited.
If I understood correctly, this particular bug was only exploitable from the GUI and this machine hasn't been away from home, so it's likely this isn't related, but posting here, in case it's part of a bigger picture.
OK, I guess when doing OP's root trick, the root user gets activated/created, and that's when the PW gets set to empty. I guess that's where my `passwordLastSetTime` comes from.
Oh wow. Is there any other explanation for this other than this having been exploited in the wild for almost three weeks? Or maybe someone just tried to log in over SSH to exploit some other weakness (something like predictable SSH passwords on jailbroken iOS devices), and happened to create the root user on your machine?
Did you also have sshd running, and do you know what kind of network you were using at the time?
Wait, isn't the point of having root that you can erase your traces? Are these logs immutable, even to root? That sounds pretty next-level... and how do I trust the tools?
As far as I know, possibility of root = root = pwn, game over, time to format.
System Integrity Protection (SIP)[1] does prevent even the root user from modifying some system files[2]. It seems possible, at least in principle, to protect system logs from modification by user root. In practice, I think most system logs are stored in /var, and that part of the directory tree does not appear to be protected by SIP (but I hope I'm wrong!)
[2] Unless/until you reboot to a diagnostic monitor on a special partition (which requires pressing command-R from a local keyboard during the POST), then run a command to disable SIP, and then reboot again. Continuity Activation Tool requires users to perform this step as part of the install process to allow installation of Bluetooth drivers not originally signed by Apple.
You can't load unsigned kexts anymore, due to that same SIP. It's a pain in the gonads when hacking your own kexts. I had forgotten about this, but it does indeed allow for a system that leaves an audit trail which cannot be hidden, even by root.
However, user labcomputer is right, I doubt that applies to the solutions proposed by OP here. Well, I'm certain: root can switch out the shell or terminal-emulator binary itself and have it lie about executing those commands and return something trustworthy. One way or another, to truly check this you'd need an immutable audit log (probably not currently available), AND a reboot into safe mode or mounting the disk as an external drive on a known-safe system.
So, there is the HTTP cache. It turns out a lot of storage use cases are actually answered by the cache. E.g. say you need some lookup table that changes every five minutes -- make it cacheable, max-age=300, and have your worker load it from cache.
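Outside of Workers, the same idea is just a TTL cache. A rough Python sketch of the pattern (the class and names are mine, not any Workers API):

```python
import time

class TTLCache:
    """Serve a cached value until it's older than max_age, then reload --
    the same behaviour you get from Cache-Control: max-age on an HTTP cache."""

    def __init__(self):
        self._entries = {}  # key -> (value, fetched_at)

    def get(self, key, loader, max_age=300):
        entry = self._entries.get(key)
        if entry is not None and time.monotonic() - entry[1] < max_age:
            return entry[0]  # still fresh: serve from cache
        value = loader()  # stale or missing: reload from origin
        self._entries[key] = (value, time.monotonic())
        return value
```

The "lookup table that changes every five minutes" then becomes `cache.get("table", fetch_table, max_age=300)`: the first call hits the origin, and everything within the next five minutes is served from cache.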
Of course, that's not the answer to everything. We don't plan to offer other storage in v1 but we are definitely thinking about it for the future.
I use ligatures in Atom and they are 100% aware of context: I disable them in contexts where they don't make sense, e.g. comments, and I disable them on the line the cursor is on.
I've not had any issues. The `=>` ligature looking like the right-arrow character is just like a Cyrillic "А" looking like a Latin "A": it's a problem that never manifests itself.
The author has a very subjective opinion that they try to present as fact.
The illusion of knowledge about crypto is very dangerous. People get this stuff wrong all the time, and introducing more examples with really bad mistakes will only make the situation worse.
I don't know. People are constantly learning new things, and making mistakes is part of the learning process. If everyone had told Linus a kernel was too complicated for newbies, there would be no Linux.
These constant scare tactics, and the needless pedestal crypto gets put on, are counterproductive, as they may well turn potential learners off. It may be complicated, but people learn complicated things all the time. No one is born an expert.
Making a kernel as a hobby doesn't put anyone in danger. And using a kernel someone else wrote as a hobby doesn't tend to put anyone in danger, either, because it's very clear when a kernel was written as a hobby (mostly because you can [almost] never run common commercial software on such kernels — a hobbyist kernel with Illumos-like "branded zones" supporting Windows or Linux software would be a very dangerous beast, if someone wrapped it up into a black box.)
By contrast, writing crypto as a hobbyist isn't dangerous, but using someone else's bad crypto can be dangerous indeed, because it's almost never clear that the crypto within the software you're using was written by a hobbyist.
People can learn crypto all they like. In private, sharing their learning with peers and/or mentors. But their "homework projects" should never be posted on the public Internet where they could be found by people looking for professional solutions, any more than a chemistry student's homework project of devising a work-up for making non-volatile explosives with volatile intermediaries should be posted on the public Internet where it might be treated as a safe, well-known procedure for doing so.
You don't need your crypto to be wrong for your software to be compromisable and dangerous to users. People use random packages and libraries all the time; it's even considered best practice not to write things yourself but to reuse random strangers' code instead. Yet when we talk about crypto, people are up in arms, while when we talk about package consumption and management they are far more relaxed. In that sense crypto _is_ put on a pedestal: it's just one among many parts that can make a system insecure. I guess it's worse in the sense that it gives an illusion of security, yet for all practical purposes it's just as likely to be wrong as any other part of the system.
Learn, but don't go around teaching others if you haven't a clue, like the author of this book is doing. That harms potential learners - not to mention everyone who is put at risk when insecure crypto inevitably ends up in production systems.
Yeah, but you don't learn crypto from a chapter in a book. You take classes, learn the mathematics, and then you use a library made by someone much more experienced than you.