The opsec reason I use Safari as a work browser today is that Safari has a much blunter tool to disrupt cookie stealers: Safari and macOS do not permit user-level processes to silently access Safari's local storage. If malware attempts to access Safari's data, its access is either denied or the user is presented with a popup to grant access.
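To make that concrete, here's a rough sketch (my own illustration, not Apple documentation) of what a same-user process runs into when it pokes at Safari's container without having been granted access - the path and the exact error are assumptions based on how TCC has behaved for me:

    /* Illustrative only: try to read a file inside Safari's protected data
     * directory. On recent macOS, TCC should deny this to a process that has
     * not been granted access (typically EPERM, "Operation not permitted"),
     * or the user gets prompted, depending on the permission involved. */
    #include <errno.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        const char *home = getenv("HOME");
        char path[1024];
        snprintf(path, sizeof path, "%s/Library/Safari/History.db",
                 home ? home : "");

        int fd = open(path, O_RDONLY);
        if (fd < 0) {
            /* Expected for an unapproved process: a permission error,
             * not "file not found". */
            printf("open failed: %s\n", strerror(errno));
            return 1;
        }
        printf("open succeeded; this process has been granted access\n");
        close(fd);
        return 0;
    }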
I wish other browsers implemented this kind of self-protection, but I suppose that is difficult to do for third-party browsers. This seems like a great improvement as well, though it does look quite overengineered as a workaround for the security limitations of desktop operating systems.
This seems like a very weak mitigation if it's meant to protect against malware running in your user session, alongside your browser. Can't that malware already do all kinds of nefarious keylogging/screen recording/network tracing/config file editing enabling impersonation, and so on?
I mean, if my threat model starts with "I have a mal/spyware running alongside my browser with access to all my local files", I would pretty much call it game over.
> I mean, if my threat model starts with "I have a mal/spyware running alongside my browser with access to all my local files", I would pretty much call it game over.
This is a big problem I have with desktop security - people just give up when faced with something as trivial as user-privileged malware. I consider it a huge flaw that user-privileged malware can get away with so many things.
macOS is really the only desktop OS that doesn't just give up when faced with same-user malware (in good and bad ways). So there it's likely a good mitigation - macOS also doesn't permit same-user processes to silently keylog, screen record, trace network traffic, or do various other things that are possible on Windows and common Linux configurations.
Yeah, I'm siding with the sceptics on this one. Adding more layers of indirection against malware running under a user session seems like a good idea in general, but in practice you showed how ineffective the macOS approach is: under this model, every application is left to defend itself in an ad-hoc and specific manner. That doesn't generalise well: you can't expect every software, tool, widget, … vendor to be held to the same level of security as Apple.
Another approach is to police everything behind rules (the way SELinux or others do), which is even better in theory. In practice, you waste a ton of time bending those policies to your specific needs. A typical user won't put up with that.
Then there is the flatpak+portal isolation model, which is probably the most pragmatic, but not without its own compromises and limitations.
The attitude of trusting by default, and chrooting/jailing in case of doubt, probably still has decades to live.
> under this model, every application is left to defend itself in an ad-hoc and specific manner.
This description of the macOS model doesn't really apply, so I'm not sure whether I'm misunderstanding you or you're misunderstanding the model.
> Another approach is to police everything behind rules (the way SELinux or others do), which is even better in theory. In practice, you waste a ton of time bending those policies to your specific needs. A typical user won't put up with that.
While SELinux could probably provide this kind of data protection on Linux, the method of technical enforcement is only one part. There's a lot of UI involved to get right, and that will require far more effort.
> Then there is the flatpak+portal isolation model, which is probably the most pragmatic, but not without its own compromises and limitations.
That model doesn't really apply here. Flatpak et al. allow applications to self-confine in order to protect the other things the user is doing. What I'm talking about is for an app to have some protection of its own data from the other things the user is doing. I'm not talking about sandboxing, but about this kind of data protection.
>> under this model, every application is left to defend itself in an ad-hoc and specific manner.
> This description of the macOS model doesn't really apply so I'm not sure if I'm misunderstanding you or you're misunderstanding the model.
I admit I might be misunderstanding, since, again, I don't use macOS. But from your description:
>>> Safari and macOS do not permit user-level processes to silently access Safari's local storage. If malware attempts to access Safari's data, its access is either denied or the user is presented with a popup to grant access.
it sounds like Safari detects that a foreign application is trying to read its data, warns the user, and lets them call the shots on that. I don't see how that isn't very specific to Safari and to one specific type of mitigation. Unless the same prompt shows up for every program trying to access every other one's configuration? Then I suppose we hit the usability nightmare I'm on about, with utilities like ncdu, borg and others just unable to do their job.
> While SELinux could probably provide this kind of data protection on Linux, the method of technical enforcement is only one part. There's a lot of UI involved to get right, and that will require far more effort.
My experience with SELinux was not that of a problematic UI or ecosystem of utilities around it, but more one of fatigue from working against the rules: once you've hit your tenth AVC denial trying to get something to run, you might as well want to disable SELinux altogether. Or maybe that's what you call UI? Either way, I don't think there is a viable "fix" for it.
>> Then there is the flatpak+portal isolation model
> That model doesn't really apply here.
I mean, I was merely stating facts about what exists out there. Anyhow
> What I'm talking about is for an app to have some protection of its own data
This isolates applications and their data from one another; in that respect the two are comparable.
On macOS, basically all of these are extra permissions that you have to grant to an application - you'll get prompted with a popup when it tries to do any of them.
E.g. local network access, access to the Documents and Desktop folders, screen recording, microphone access, accessibility access (for keylogging), and full disk access all require you to grant permission.
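As an illustration of the screen recording case (a sketch from memory, so treat the exact CoreGraphics calls as an assumption on my part): a process can check whether it already has the permission and, if not, ask macOS to show the prompt - it never gets to grant itself anything.

    /* Sketch: query/request the screen recording permission on macOS 10.15+.
     * Compile with: clang screencap.c -framework CoreGraphics */
    #include <CoreGraphics/CoreGraphics.h>
    #include <stdio.h>

    int main(void) {
        if (CGPreflightScreenCaptureAccess()) {
            printf("screen recording already granted\n");
        } else {
            printf("not granted; asking macOS to show the permission prompt\n");
            /* The user decides in the prompt / System Settings; the process
             * itself cannot flip the switch. */
            CGRequestScreenCaptureAccess();
        }
        return 0;
    }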
Strix Halo is impressive, but it isn't AMD going all out on the concept. Strix Halo's die area (roughly 300 mm²) is about the same as estimates for Apple's M3 Pro die area. The M3 Max and M3 Ultra are two and four times that size.
In a future iteration AMD could look into doubling or quadrupling the memory channels and GPU die area, as Apple has done. AMD is already a pioneer in the chiplet technology Apple is also using to scale up. So there's lots of room to grow, at even higher cost.
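Rough napkin math on the memory side (assuming the commonly reported 256-bit LPDDR5X-8000 configuration for Strix Halo - illustrative numbers, not AMD specs):

    /* Back-of-envelope bandwidth: bytes per transfer times transfer rate.
     * The 256-bit LPDDR5X-8000 baseline is an assumption from public reports. */
    #include <stdio.h>

    static double gb_per_s(int bus_bits, int mt_per_s) {
        return (bus_bits / 8.0) * mt_per_s / 1000.0;
    }

    int main(void) {
        printf("256-bit  @ 8000 MT/s: %4.0f GB/s\n", gb_per_s(256, 8000));
        printf("512-bit  @ 8000 MT/s: %4.0f GB/s\n", gb_per_s(512, 8000));
        printf("1024-bit @ 8000 MT/s: %4.0f GB/s\n", gb_per_s(1024, 8000));
        return 0;
    }

Doubling or quadrupling the interface scales theoretical bandwidth linearly; the GPU still needs the die area to make use of it, of course.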
I don't think AMD really uses the name "Strix Halo" to market it to a large audience; it's just an internal codename. Two other recent internal names are "Hawk Point" and "Dragon Range", where Hawk and Dragon are names that MSI and PowerColor use to market GPUs as well. Heck, PowerColor even exclusively sells AMD cards under the "Red Dragon" name!
AMD's marketing names, especially for their mobile chips, are so deliberately confusing that it makes way more sense for press and enthusiasts to keep referring to a chip by its internal code name than by whatever letter/number/AI nonsense AMD's marketing department comes up with.
You don't need to wait for Valve to get this experience today. The HTPC build of Bazzite [1] brings an experience identical to SteamOS to all computers with an AMD or Intel GPU from the past 8 years or so.
It works amazingly well and I can't imagine going back to Windows for a PC that is built only for video games. I use it on my "Gaming HTPC" (Ryzen 3600, Radeon RX6600, Fractal Design Node 202) and it brings a great console experience to my TV, with access to my PC game library, without being locked into a console ecosystem, and without the enormous cruft and user hostility that Windows has you manage these days.
I'm a pretty casual and patient gamer, and for that use case this Steam machine experience is unmatched - despite being built on desktop Linux, it works out of the box and requires zero manual maintenance. For dedicated gaming boxes this Linux user experience is significantly better and easier to use than Windows - we're truly living in the future.
[2]: It's built on top of Fedora and Universal Blue, so under the hood it's different from SteamOS which is built on a custom immutable version of Arch Linux. However, that implementation detail is actually almost totally irrelevant if you want to play games since all software is managed by Steam and Flatpak on both systems.
I have both a Steam Deck and a (Windows) gaming PC.
While the "happy path" in SteamOS is truly amazing, there are dark corners where it falls down. Third party launchers (like EA's garbage) are extremely janky. Hardware support in Linux/SteamOS is questionable for exotic peripherals (I have a TrackIR which never worked right, a MS XBox USB controller dongle that requires third party kernel modules, and a HP Reverb G2 which has only preliminary support through third party software). And some types of multiplayer anti-cheat are completely unavailable.
Some of this is solvable, some probably isn't. But there's a reason I still keep Windows on the gaming PC - sadly.
I've been wondering what the limiting factors are for migrating gamers, and I think the larger software ecosystem and cumulative effect of paper-cut issues will cause people to bounce off.
Linux and running games under Steam/Wine/Proton are great in broad strokes, but users will have built up their own collection of tools or ways of doing things; they will seek out equivalents and judge the Linux experience as a whole on whether they can keep doing that. Many Windows applications are very mature compared to their Linux counterparts because that's the ecosystem and audience they've had for decades; there's nothing touching Foobar2000, for example (and it has UI glitches under Wine). Now add in all the other things gamers regularly expect to do, what's needed to accomplish them, and how well they work: overlays, screen recording, modding tools, etc.
It also strikes me that with the Windows 10 end of life there's going to be a huge variety of hardware configurations people want to 'just work', in terms of age and which model someone chose in a particular generation. For example, support for fan control on my Z270 board doesn't exist, presumably because of the way ASUS made that model.
I can appreciate Valve and their direct partners picking their battles on what to support, as it's a huge gauntlet to pick up, but I really doubt the needle is going to move far enough to say "bye bye, Windows gaming".
If you are demanding or particular about your gaming experience, then Linux isn't there yet. Compatibility with the very latest AAA titles can sometimes trail behind Windows, anticheat for competitive multiplayer often blocks out Linux compatibility, and you need to adapt to different tools for customizing and surrounding your gaming experience if you're so inclined.
What I'm highlighting is that if you just want to sit down to play some damn games already in your library, especially on a dedicated "console" like a handheld or HTPC, then the Linux experience is superior to Windows. And I expect that there's a sizable audience for that.
> cumulative effect of paper-cut issues will cause people to bounce off
I disagree, PC gaming has always been rife with papercuts, especially relative to console gaming.
The real moat that Windows has is that anonymous-matchmade competitive multiplayer games are decreasingly going to want to run on hardware that supports user freedom. Which for me personally is fine, because I find anonymous-matchmade competitive multiplayer games to be dogwater that I ain't missing, but for a lot of people that's a non-starter.
(Disclaimer: proud owner of a Steam Deck which has also served double duty as my desktop machine while I wait for a replacement power supply for my laptop.)
They mean "cloud native" in the sense that it adopts atomic system updates and containerized application installs, which has been common in "the cloud" for years but is much less commonplace in personal Linux installations. Working in this way is a large part of why Bazzite "just works". It is also actually exactly how SteamOS works (with some implementation differences under the hood), so SteamOS is "cloud native" in the same sense.
I do think this marketing is unnecessarily confusing. The dayjob of the original master mind behind Bazzite and Universal Blue is working with cloud systems IIUC, so they find it an important thing to highlight.
I tried this a while back. Going from a 6700 XT over HDMI 2.1 to an LG OLED C2 (HDMI 2.1, with a proper cable), I could not get RGB 4:4:4 with the 'correct' color depth in Bazzite (or any distro).
Windows 10 or 11 does not have this problem. Apparently it's an issue with the HDMI board and proprietary drivers for Linux.
The full SteamOS experience is pretty tied up in Linux's open source graphics stack, more so than regular Linux desktop environments, because Valve built it for high performance on the Steam Deck's AMD GPU. Nvidia's proprietary driver has traditionally done its own thing and has been quite incompatible with things targeting the open source stack. So it's hard to replicate the Steam Deck experience on Nvidia, no matter the distro.
That said, over recent years Nvidia has made some efforts to improve compatibility. Just a few days ago Bazzite announced a Steam Deck beta image for Turing and later Nvidia cards [1]. It's too early to rely on, though, if you want the seamless experience you get on Intel and AMD, and progress mostly depends on Nvidia and Valve, but I hope they get there.
The Nintendo 3DS (and DSi too) contain a version of the ARM7TDMI CPU used in the Game Boy Advance [1]. On the 3DS, this chip was used for the Game Boy Advance Ambassador titles [2], which effectively run "natively" on the 3DS - when launching a game, the 3DS reboots into a different firmware and just runs the GBA game.
Later, homebrew was able to sideload GBA titles [3], which gives essentially perfect software compatibility. However, emulating GBA titles still has advantages over rebooting the device, so software emulators are available for the 3DS. The New 3DS is fast enough to provide pretty high quality GBA software emulation.
I don't think a similar path to directly use the DSi's ARM7 to boot GBA games was ever found by homebrewers (it may just be that the DSi is not able to reboot into a "different mode", like the one Nintendo did release for the 3DS?). The best available on the DSi seems to be a "compatibility layer" solution that tries to run the ARM7 code on the main ARM9 CPU [4], which seems to work surprisingly well.
> it may just be that the DSi is not able to reboot into a "different mode", like the one Nintendo did release for the 3DS?
From GBATEK:
"The memory regions and IRQ bits do still exist internally, but the DSi does basically behave as if there is no GBA cartridge inserted. Reading GBA ROM areas does return FFFFh halfwords instead of the usual open bus values though."
Since the memory map isn't flexible and GBA games expect to load data from the cartridge at the hardcoded area, games won't function on the ARM7. I assume the 3DS has special hardware to handle this properly.
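A small illustration of why that's fatal (a freestanding sketch; the address is the standard GBA ROM mapping, and the all-FFFFh behaviour is the DSi result GBATEK describes):

    /* GBA software reaches the cartridge through a fixed memory window at
     * 0x08000000. Per GBATEK, on a DSi the ARM7 just reads FFFFh halfwords
     * there, as if no cartridge were inserted, so there's nothing the game
     * can do in software to find its data elsewhere. */
    #include <stdint.h>

    #define GBA_ROM ((volatile uint16_t *)0x08000000)

    int cartridge_window_is_backed(void) {
        /* A real cart starts with a branch instruction and the logo data,
         * so an all-FFFF read means the window is not backed by anything. */
        return GBA_ROM[0] != 0xFFFF || GBA_ROM[1] != 0xFFFF;
    }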
I wish other distros documented ways to make it easy to customize the initramfs like this. I'd love to build a setup like this, but I don't want to use Alpine as I don't like musl for compatibility reasons or RC scripts for managing services.
There are other options, but they have considerable barriers to entry as well, like NixOS, which requires learning its specific DSL. I like the idea of `bootc`, but that doesn't support running from RAM as best I can tell. Other distros really only document customizations to the initramfs as a means to provide an installer for a stateful system, which makes running a server like this a bit of uncharted territory.
> I wish other distros documented ways to make it easy to customize the initramfs like this.
Well, this is not exactly a documented or "official" way to do things; it's just that Alpine is so darn simple that producing an elegant but crazy hack doesn't look all that different from wrangling Ubuntu into doing a normal, sane thing (like installing Firefox without Snap).
In fact, building an initramfs completely from scratch, with just enough userspace to start doing useful things, is not that difficult. It's just a cpio archive with an arbitrary filesystem layout - you can drop in a statically linked executable (name it "/init"), pass -kernel & -initrd to Qemu, and you've got yourself a "hello, world" of embedded/single-purpose Linux.
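Something like this is all it takes (a sketch; the file names and the kernel path are just my own choices):

    /* init.c - a minimal PID 1 for a from-scratch initramfs.
     *
     * Build, pack and boot (illustrative commands, adjust paths to taste):
     *   cc -static -o init init.c
     *   mkdir -p root/dev && cp init root/init
     *   sudo mknod -m 622 root/dev/console c 5 1   # so printf output shows up
     *   (cd root && find . | cpio -o -H newc) | gzip > initramfs.gz
     *   qemu-system-x86_64 -kernel /boot/vmlinuz -initrd initramfs.gz \
     *       -append "console=ttyS0" -nographic
     */
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        printf("hello, world from /init\n");
        fflush(stdout);
        /* PID 1 must never exit, or the kernel panics; just idle forever. */
        for (;;)
            pause();
    }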
> I don't like musl for compatibility reasons or RC scripts for managing services
That's the point. You can afford hacks like this because you got rid of all that complexity. musl is simple. RC scripts are simple. NixOS is anything but.
> Then yes, you have a regular Linux (although based on musl, instead of usual glibc) on which you can install Docker.
The OnePlus 6/6T are also supported by Mobian [0], which is just regular glibc-and-systemd based Debian, and so offers a pretty familiar Linux server experience.
> it might indeed be a good idea to avoid Android
It's a good idea to avoid Android because of kernel security as well. Old Android devices always use out of date Linux kernels even when using custom ROMs, and when running (containerized) network services you really depend on the security of your Linux kernel to keep those things properly isolated. Both PostmarketOS and Mobian do bring current mainline Linux support to these devices, so you can be quite a bit more confident in your kernel that way.
It's a shame PostmarketOS and Mobian don't really support many newer devices well. Last I checked, the OnePlus 6(T) were still the highest-performance devices with okay support. The Snapdragon 845 - a 2018 flagship SoC - in the OnePlus 6/6T made them real high-performance devices to repurpose for a long time. In 2024 though they're beaten in performance by the Raspberry Pi 5 or RK3588-based devices running Armbian. Those SBCs of course already have much better I/O and more straightforward ways to get a supported Linux running on them (and don't require disconnecting the battery with custom soldering). So you need to be really committed to reusing your old hardware to go down this route.
> It's a good idea to avoid Android because of kernel security as well.
Well it's "just" a matter of updating the kernel, right? Those linux projects like PostmarketOS do a lot of mainlining, which benefits custom Android ROMs.
I agree that if the goal is to use the device as an RPi, then it's better to avoid Android. But I wouldn't say that Android is less secure than Linux in general (on the contrary, Android has an interesting security model).
It's not "just" the kernel. The bigger issue is the firmware - Android devices have a ton of closed-source low-level firmware bits, and you really shouldn't expose them to the Internet after the device has reached end-of-support.
But if you're only using it for limited projects as an RPi replacement, then it's probably alright if you're also putting a firewall in front of it, or having it in an isolated network segment with a reverse proxy.
> Android devices have a ton of closed-source low-level firmware bits
Can you elaborate on that? You seem to be suggesting that there are low-level firmware blobs on Android devices that are exposed to the Internet and do not receive updates. Which ones? And do they receive updates under mobile Linux OSes? And if yes, why couldn't alternative AOSP-based systems use those firmware updates?
The important ones - from a security and privacy standpoint - are the baseband (cellular stack), WiFi, Bluetooth, NFC, camera, mic, bootloader and the Trusted Execution Environment. Then there are also minor firmware bits for the sensor hub (accelerometer, ambient light sensor etc.), touch controller, audio etc.
You can imagine the consequences if there were a vulnerability in, say, the WiFi firmware or the microphone. The Bluetooth stack is especially vulnerable, having been an attack vector many times in the past.
On Android devices, only Android has been able to deliver updates to those firmware blobs. This is mainly because these are closed source binary blobs, and are provided by the OEM (often in conjunction with the respective chipset manufacturer, covered by a license agreement).
AOSP and unofficial Linux-based OSes like PostmarketOS do not have a license to obtain and distribute these firmware blobs. But even if they did, it means nothing once the support agreement from the chipset maker has ended. These being closed source bits, you can't do anything about it if the respective manufacturer refuses to provide updated firmware.
Occasionally, some Android custom ROM makers may extract these blobs from more recent devices with the same chipset running newer firmware, and of course it doesn't always work (well), not to mention it's technically illegal. And of course, an official project like PostmarketOS or LineageOS would never do something like redistribute proprietary firmware bits. Projects like these conveniently ignore the firmware issue and leave it as an exercise for the end user.
Nintendo optimizes for cost, not maximum performance, and almost always selects older technology. AMD Z2 chips go into $600+ bulky, low-margin PC gaming handhelds, whereas Nintendo likely will want to hit $300-350 while keeping a healthy margin.
This also means that the Switch SoC doesn't use an expensive cutting-edge manufacturing process. And it probably won't be made in TSMC factories at all. Leaks pretty clearly indicate an Nvidia Ampere-based SoC built on Samsung's 8nm process, so it's the same tech as Nvidia's consumer line circa 2020.
I wouldn't automatically prefer any random N100 mini PC over a nice second hand enterprise mini PC.
In home server use cases, mini PCs stay idle the vast majority of their runtime, so it's idle power consumption that is the most useful metric to look into. The N100 can idle very efficiently in theory, but most data I can find about N100 boxes shows them idling in the 12W-15W range. That is something older enterprise mini desktops have no trouble matching or beating [1]. Especially since roughly the Skylake era (Intel 6th gen), idle power consumption for enterprise PCs has been excellent - but even before then it wasn't bad.
Enterprise vendors like Dell/HP/Lenovo have always optimized for TCO and usually use quite high quality power supply circuitry, whereas most N100 mini PCs tend to be built with cheaper components and are not as optimized for whole-system low power usage.
[1]: I recommend reviewing Serve The Home's TinyMiniMicro project, which often finds the smallest enterprise PC form factors to idle at 8 to 13W, even older ones. Newer systems can get below 7W! https://www.servethehome.com/tag/tinyminimicro/
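To put those idle numbers in perspective, here's the trivial math for a box that's on 24/7 (the 0.30/kWh price is just an assumed example rate):

    /* Annual energy and rough cost at a given constant idle draw.
     * The electricity price is an assumed example; plug in your own tariff. */
    #include <stdio.h>

    int main(void) {
        const double price_per_kwh = 0.30; /* assumed example rate */
        const double idle_watts[] = { 7, 12, 15, 25 };
        for (int i = 0; i < 4; i++) {
            double kwh_per_year = idle_watts[i] * 24 * 365 / 1000.0;
            printf("%2.0f W idle: %3.0f kWh/year, ~%2.0f/year at %.2f/kWh\n",
                   idle_watts[i], kwh_per_year,
                   kwh_per_year * price_per_kwh, price_per_kwh);
        }
        return 0;
    }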
One can also do things like undervolting to reduce the power draw even more. Modern BIOSes give a lot of freedom for underclocking/undervolting, not just for pushing things to consume more power.
The Shield TV has had an impressive support lifecycle for an Android device but it still falls well short of a 10 year support cycle.
The Shield was released in May 2015; its latest software update, released in November 2022, has an Android security patch level of April 2022. No more updates seem to be forthcoming. Notably, all Shield TVs today are vulnerable to remote code execution when displaying a malicious WebP image [0], a widespread issue uncovered last year.
Apple released the Apple TV HD two months after the Shield TV, but it still receives the latest tvOS updates to this day and will be receiving this year's new tvOS 18 [1] [2]. It received a fix for that WebP issue the same day as supported Macs, iPhones and iPads did last September.
Even the best examples of Android devices with good vendor support still seem to fall short. The Shield TV is still capable streaming hardware in 2024, used by many people, but it's sitting there doing that while missing important security patches, unbeknownst to its users.
[2]: To be fair, it's the only Apple A8 device that is still supported today. The iPhone 6 with the same chip was launched in mid-2014 and received its last update in early 2023.