ECR is kind of hard to beat if you're ok with being in the cloud.
The last time I used it earlier this year for a company already on AWS, it was ~$3 / month per region to store 8 private repos, and it was really painless to set up a flexible, automated lifecycle policy that deleted old image tags. It supports cross-region replication too. All of that comes without maintenance or compute costs, and if you're already familiar with Terraform, etc., you can automate all of it in a few hours of dev time.
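For the curious, the lifecycle policy itself is just a small JSON document attached to the repo. A rough sketch with boto3 (the repo name, region and 30-image cutoff are made-up examples, not what I actually ran):

```python
import json

import boto3

ecr = boto3.client("ecr", region_name="us-east-1")  # example region

# Keep only the 30 most recent images; expire everything older.
lifecycle_policy = {
    "rules": [
        {
            "rulePriority": 1,
            "description": "Expire old image tags",
            "selection": {
                "tagStatus": "any",
                "countType": "imageCountMoreThan",
                "countNumber": 30,
            },
            "action": {"type": "expire"},
        }
    ]
}

ecr.put_lifecycle_policy(
    repositoryName="my-app",  # hypothetical repo
    lifecyclePolicyText=json.dumps(lifecycle_policy),
)
```

The same JSON goes straight into a Terraform aws_ecr_lifecycle_policy resource if you'd rather keep it declarative.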
While neat, in the current age of "let's throw shitloads of packets and see how they like that", this solves _a problem_, but most of the security products solve it by anycasting IP ranges.
No but maybe yes:
It would be impossible, and undesirable, to issue certificates for local addresses. There's no way to verify ownership of a local address because, inherently, it's local and not globally routable.
However, if a router manufacturer was so inclined, they _could_ have the device request a certificate for their public IPv4 address, given that it's not behind CG-NAT. v6 should be relatively easy since (unless you're at a cursed ISP) all v6 is generally globally routable.
Even behind CGNAT, you could probably get away with DNS here. If you provide your customers with customeraccount.manufacturerrouters.com, you can then use DNS validation to get a valid certificate for *.customeraccount.manufacturerrouters.com. Put a record in there that points to the local router IP (e.g. settings.customeraccount.manufacturerrouters.com) and you can get HTTPS logins on your local network, even with local IP addresses, if the CA/Browser Forum rules still allow that.
It's not exactly user friendly, but it'll work.
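Rough sketch of the moving parts, assuming purely for illustration that the manufacturer's zone lives in Route 53 (any DNS provider with an API works the same way): publish the ACME DNS-01 challenge record to prove control of the name, then point a record at the router's private address.

```python
import boto3

route53 = boto3.client("route53")

ZONE_ID = "Z0000000000000"  # hypothetical hosted zone for manufacturerrouters.com

def upsert(name, rtype, value, ttl=300):
    route53.change_resource_record_sets(
        HostedZoneId=ZONE_ID,
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": name,
                    "Type": rtype,
                    "TTL": ttl,
                    "ResourceRecords": [{"Value": value}],
                },
            }]
        },
    )

# 1) DNS-01 challenge for the wildcard cert; the token comes from your ACME
#    client (certbot, lego, ...) -- the value below is a placeholder.
upsert("_acme-challenge.customeraccount.manufacturerrouters.com", "TXT", '"<acme-token>"')

# 2) Point the settings hostname at the router's private LAN address.
upsert("settings.customeraccount.manufacturerrouters.com", "A", "192.168.1.1")
```

A nice side effect of the wildcard is that the individual device hostnames never show up in Certificate Transparency logs; the only thing public DNS learns is a private IP.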
Personally, I have a private CA that I use. My home router has a domain name pointing at it and has been loaded up with my private certificate. I get the certificate error once a year when the thing expires, but in the meantime I can access my router securely.
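If anyone wants to replicate that setup, it's really just two certs: a long-lived self-signed CA you import into your trust stores, and a short-lived leaf for the router's hostname signed by it. A rough sketch using Python's cryptography library (the hostname and validity periods are just my choices here, not a recommendation):

```python
import datetime

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec

now = datetime.datetime.now(datetime.timezone.utc)

# 1) Long-lived private CA: this is the cert you import into browser/OS trust stores.
ca_key = ec.generate_private_key(ec.SECP256R1())
ca_name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Home CA")])
ca_cert = (
    x509.CertificateBuilder()
    .subject_name(ca_name)
    .issuer_name(ca_name)
    .public_key(ca_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=3650))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None), critical=True)
    .sign(ca_key, hashes.SHA256())
)

# 2) One-year leaf for the router's hostname (hence the yearly renewal dance).
leaf_key = ec.generate_private_key(ec.SECP256R1())
leaf_cert = (
    x509.CertificateBuilder()
    .subject_name(x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "router.home.example")]))
    .issuer_name(ca_name)
    .public_key(leaf_key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .add_extension(
        x509.SubjectAlternativeName([x509.DNSName("router.home.example")]), critical=False
    )
    .sign(ca_key, hashes.SHA256())
)

# The router gets leaf_key + leaf_cert; clients only need to trust ca_cert.
with open("router.pem", "wb") as f:
    f.write(leaf_cert.public_bytes(serialization.Encoding.PEM))
```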
Generally, these devices will use the mp1 to do all of the cryptographic operations on the device.
The biggest part of this is the keys established between the terminal and the acceptance gateway (something like CyberSource or Authorize.net).
When the tamper protection is tripped, the keys in use are immediately dropped from RAM and can't be recovered; they have to be manually input into the device again to reset the tamper protection.
(Side note: keys are specific to a merchant, so even if someone manages to extract them, the blowback is limited.)
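Purely to illustrate the flow (a toy model; the real thing happens inside the secure processor's firmware, not in Python):

```python
class TerminalKeystore:
    """Toy model of tamper-responsive key storage in a payment terminal."""

    def __init__(self):
        self._keys = {}        # merchant/gateway working keys, held only in RAM
        self.tampered = False

    def inject_key(self, slot: str, key: bytes) -> None:
        # Keys are injected manually (e.g. at a key-injection facility); once the
        # tamper flag is set, the device has to be reset and re-keyed first.
        if self.tampered:
            raise RuntimeError("tamper event: device must be re-keyed before use")
        self._keys[slot] = bytearray(key)

    def on_tamper(self) -> None:
        # The instant the tamper circuit trips, overwrite and drop every key.
        for buf in self._keys.values():
            for i in range(len(buf)):
                buf[i] = 0
        self._keys.clear()
        self.tampered = True
```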
To play devil's advocate: the things pointed out as hypocrisy in the post are "post compile" events.
If you want to set up Play Services, the onus is on the user; the same goes for distribution of GPLv3 applications.
The problem is that utilizing GPLv3 code during compilation of the operating system (and yes, it's technically copying the APK, more or less) means it trips some of GPLv3's linking requirements.
Don't get me wrong, I'm all for accessibility, but coming under legal fire is a big no-no for many open source projects.
Including GPLv3 code within a GrapheneOS release would make GrapheneOS more restrictively licensed than the Android Open Source Project (AOSP). GrapheneOS is meant to be usable everywhere AOSP can be used. It's meant to permit making a locked down device if desired. It's not what we want for GrapheneOS ourselves, but we want organizations using it to be able to make that choice. Some organizations won't use devices which can be unlocked, so GrapheneOS would never be usable for them if it prevents having a locked device variant.
There are permissively licensed text-to-speech implementations, including ones more modern than eSpeak NG. We haven't had time to properly review them, fork one and integrate it into GrapheneOS. Text-to-speech is fully usable on GrapheneOS, but the app needs to be installed. Including text-to-speech in the OS with it pre-configured and working out-of-the-box is a planned feature. It can take a long time to get planned features implemented, particularly if we need to make hard choices about what to use, as we do in this case. We aren't sure which app we want to use yet. These decisions are very difficult to ever change since users have an expectation of things not changing, or especially not breaking their existing setups. It's not something to be taken lightly.
We don't need to include GPLv3 code in GrapheneOS to provide text-to-speech. We just can't use one of the implementations, eSpeak NG, which is also largely written in C code that's not particularly modern or battle hardened. Since this would be something enabled by default and exposed to untrusted input, we would greatly prefer something more security oriented.
Android 1.6 added SVOX Pico as a text-to-speech implementation. SVOX Pico was a dead project for ages and was replaced by a closed source Google TTS app. SVOX Pico turned out to have all kinds of memory corruption bugs and it was a major issue for our hardening features when it was still around, often crashing or having corrupted output due to memory protections. It had lots of use-after-free bugs, etc. It was eventually removed from AOSP due to being a major security issue. By then, it was also a terrible text-to-speech app compared to mainstream options. We couldn't justify keeping it around. This was also before we had a fork of TalkBack (screen reader) included in GrapheneOS. After we added TalkBack, we've looked for a text-to-speech implementation we can use but the existing options were missing Direct Boot (Before First Unlock) support and had licensing issues for their code and/or language support.
Most responsible orgs do TLS termination on the public side of a connection, but will still make a backend connection protected by TLS, just with an internal CA.
Disclaimer: Former YT Engineer.
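In practice that usually just means the backend client verifies against the internal root instead of the public CA bundle. Minimal sketch, with made-up paths and hostnames:

```python
import ssl
import urllib.request

# The backend presents a cert issued by the company's internal CA rather than a
# public one, so we trust that root explicitly for this connection.
ctx = ssl.create_default_context(cafile="/etc/pki/internal-root-ca.pem")

with urllib.request.urlopen("https://billing.internal.example:8443/healthz", context=ctx) as resp:
    print(resp.status)
```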