Battlefield 6 might never run on the average Linux desktop, but I could see a future where it would run on Steam hardware in an end-to-end Secure Boot environment.
I find it much more likely that Valve enables Secure Boot on their Steam hardware.
I imagine that if this happens, it will be followed by popular Linux distros finally becoming serious about their Secure Boot implementations, instead of simply shimming it or treating it as a rarely-used feature reserved for enterprise distros like RHEL.
Some of us actually think that having some sort of validation that our OS hasn't been tampered with is a feature and not a bug. It's only a problem when companies parlay that validation into anti-consumer DRM - but that's a political problem, not a technological one.
All the platforms that went all-in on Secure Boot-style mechanisms and attestation are anti-consumer hellholes that slurp all your data. The evidence just does not look good. Maybe Linux is different, but it's swimming against the tide here. It would be the first of its kind.
A few anti-cheat systems, rather than inspecting the local machine, look for things like impossibly fast target acquisition in FPS games, or the server noticing when a shot is taken on an opponent who’s supposed to be totally obscured. Those aren’t perfect, but they don’t require kernel-level anti-cheat.
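For illustration, here is a minimal sketch of what such server-side heuristics might look like. The event fields, the reaction-time threshold, and the occlusion flag are hypothetical placeholders, not any particular game's telemetry.

```python
# Hypothetical sketch of server-side cheat heuristics: flag superhuman
# reaction times and hits on fully occluded opponents. Field names,
# thresholds, and the occlusion check are illustrative only.
from dataclasses import dataclass

HUMAN_REACTION_FLOOR_MS = 100  # sustained reactions below this are suspicious


@dataclass
class ShotEvent:
    player_id: str
    target_visible_ms: float  # time the target was on screen before the shot
    target_occluded: bool     # result of a server-side line-of-sight check
    hit: bool


def suspicion_flags(shots: list[ShotEvent]) -> dict[str, list[str]]:
    """Return per-player lists of heuristic flags, intended for human review."""
    flags: dict[str, list[str]] = {}
    for shot in shots:
        player_flags = flags.setdefault(shot.player_id, [])
        # Impossibly fast target acquisition: the hit landed almost
        # immediately after the target became visible.
        if shot.hit and shot.target_visible_ms < HUMAN_REACTION_FLOOR_MS:
            player_flags.append("superhuman reaction time")
        # The shot connected on an opponent the server says was fully obscured.
        if shot.hit and shot.target_occluded:
            player_flags.append("hit through full occlusion")
    return flags


if __name__ == "__main__":
    events = [
        ShotEvent("p1", target_visible_ms=35.0, target_occluded=False, hit=True),
        ShotEvent("p2", target_visible_ms=420.0, target_occluded=True, hit=True),
        ShotEvent("p3", target_visible_ms=300.0, target_occluded=False, hit=True),
    ]
    for player, player_flags in suspicion_flags(events).items():
        if player_flags:
            print(player, player_flags)
```

In practice you'd flag on sustained patterns rather than single events, but the point stands: these checks run entirely on the server and need no access to the player's machine.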
1. Start building stuff that is hard to build and that requires touching these niche topics, especially stuff you don't know how to build.
2. As you encounter problems, you'll have to scour for solutions (AI doesn't know these things due to lack of training data). In the process you will find people who are also working on these problems. Ask these people well-formed, intelligent questions.
I am skeptical of your second claim here… if you can “scour for solutions”, and you find something about it on the internet, then AI could find it the same way.
Just because the AI could find the info doesn't mean it will find and apply that knowledge correctly to solve a problem.
I find AI shockingly bad at searching the web, as SEO blogspam sites heavily pollute AI context windows, while relevant and important resources are typically very densely presented reference material that must be revisited constantly.
It doesn’t need to. It already has all the fundamental knowledge it needs. Just set it up on a system with an editable proc filesystem and it would be able to figure it out.
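As a concrete (and deliberately minimal) illustration of what "editable proc filesystem" means here: on Linux, kernel parameters under /proc/sys are exposed as plain files that can be read and, with root, written, so an agent can probe the live system directly instead of relying on search results. The specific paths below are standard Linux ones; the helper names are just for this sketch.

```python
# Minimal sketch: /proc/sys entries are ordinary files, readable by anyone
# and writable (for most entries) by root.
from pathlib import Path


def read_proc(path: str) -> str:
    """Read a /proc entry and strip the trailing newline."""
    return Path(path).read_text().strip()


def write_proc(path: str, value: str) -> None:
    """Write a /proc/sys entry; typically requires root."""
    Path(path).write_text(value)


if __name__ == "__main__":
    print("kernel:", read_proc("/proc/sys/kernel/ostype"))
    print("swappiness:", read_proc("/proc/sys/vm/swappiness"))
    # write_proc("/proc/sys/vm/swappiness", "10")  # uncomment when running as root
```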
A lot of the solutions are buried in places AI can't scrape or train on: inside people's brains, private codebases, or chatrooms not open to bots. However, you can find these people and the products and services they're making and start talking to them.
Most LLMs can't even count parentheses properly to build basic Lispy stuff. Building something niche like a logic solver in Scheme macros only? Forgetaboutit.
This is a silly take, because having your ATC workers go unpaid for over 30 days is going to increase the risk of catastrophic plane crashes, even if this particular incident had nothing to do with it.
Footage of plane crashes is certainly important for knowing that _this could start happening to passenger planes_.
Having the infrastructure for reporting incidents is the expensive part.
Doing it often doesn’t really add to the cost. More reporting is helpful because it makes clear that even operational issues have lessons to be learned from, and it keeps the reporting system running and well maintained.