assassinator42's comments | Hacker News

Rumor is Samsung won't support Google's Linux Terminal (at least for their existing phones) since their Knox conflicts with the Android Virtualization Framework :-(.

Honestly I'd like to see Windows 11 running under this as well, but that seems incredibly unlikely.


It's interesting to hear because Samsung had a Linux feature previously: https://developer.samsung.com/sdp/blog/en/2017/10/18/samsung...


They had Linux on DeX in 2018, killed in 2019. It was a partnership with Canonical.

https://9to5google.com/2018/11/09/samsung-linux-on-dex-andro...

It was the Ubuntu 16.04 desktop running in an LXD container. It crashed when the tablet ran out of memory, so I had to be careful with what I was running.


Maybe it's possible anyway? Qualcomm was able to integrate their own hypervisor on top of AVF:

Linux Plumbers Conference 2025 | Adding Third-Party Hypervisor to Android Virtualization Framework

https://lpc.events/event/17/contributions/1447/attachments/1... https://youtu.be/hLdUCrlheKg


For my 2016 Volt I need to run the defroster to be able to actually see out of the window, though. And that takes around 4 kW on average.

I can get less than half the range on cold days in the winter (65 MPGe) vs. the maximum in the summer (140 MPGe).


Java and the .NET Framework had partial trust/capabilities mechanisms decades ago. No one really used them and they were deprecated/removed.


It was not bad, but without memory/CPU isolates it was pretty useless. The JSR for isolation got abandoned when Sun went belly up.


It was more like no one used them correctly.


Wouldn't that mean they were poorly implemented? If no one uses something correctly, it seems like that isn't a problem with the people but with the thing.


I don't think so. Software is maybe the only "engineering" discipline where it is considered okay to use mainstream tools incorrectly and then blame the tools.


Do the “mainstream” tools change every five years in other disciplines?


To be fair, they only change when chasing trends; consumers don't care how software is written, provided it does the job.

Which goes both ways, it can be a Gtk+ application written in C, or Electron junk, as long as it works, they will use it.


Partially yes, hence why they got removed; the official messaging was that OS security primitives are a better way.


I took a bit of umbrage with LineageOS for this.

CyanogenMod required a CLA assigning copyright to Cyanogen Inc, only for them to basically kill the project. The community forked it as LineageOS, only to still require a CLA.


This is a game; I don't think a debug configuration (with checks for things like this enabled) would run fast enough to be playable on contemporary hardware.


That's not accurate.

Generally, game console "debug" configurations aren't "true" debug like most people think of -- optimizations are still globally enabled, but the build generally has a number of debug systems enabled that naturally require the use of a devkit. Devkits, especially back then, generally had 2-3x as much memory as retail systems -- so you'd happily sacrifice framerate during feature development to have those systems enabled.

Debugging was (and still is) generally done on optimized builds and, once you know the general area of the problem, you simply disable optimizations for that file or subsystem if you can't pinpoint the issue in an optimized build.
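
For illustration of that per-file/subsystem approach (MSVC pragma syntax; the function name is made up, not from any real codebase):

    /* Keep the optimized build, but compile just the suspect code without
       optimizations so it's pleasant to step through.  GCC/Clang have rough
       equivalents (#pragma GCC optimize("O0"), __attribute__((optnone))). */
    #pragma optimize("", off)
    void suspect_update_loop(void)
    {
        /* ... code you want to debug with sane variable values ... */
    }
    #pragma optimize("", on)   /* restore the command-line optimization settings */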

The biggest performance hit, in general, comes from disabling optimizations in the compiler. I say "in general" because there are systems that might be used to find this kind of thing that DO make a game wholly unplayable, such as a stomp allocator. Of course, you wouldn't generally enable a stomp allocator across all your allocations unless you're desperate, so you could still have that enabled to find this kind of bug and end up with a playable game.
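
To make the stomp allocator idea concrete, here's a minimal sketch of the technique (plain POSIX mmap/mprotect, nothing from the actual game's allocators): every allocation ends flush against an inaccessible guard page, so an overrun of even one byte faults at the offending write.

    #include <stdint.h>
    #include <stdlib.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Stomp allocator sketch: hugely wasteful (a page-plus per allocation),
       which is why you only switch it on while hunting a memory stomp. */
    void *stomp_alloc(size_t size)
    {
        size_t page  = (size_t)sysconf(_SC_PAGESIZE);
        size_t total = ((size + page - 1) / page + 1) * page; /* data pages + guard page */

        void *base = mmap(NULL, total, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (base == MAP_FAILED)
            return NULL;

        uint8_t *p = base;

        /* The last page becomes the guard page: any write past the end of
           the block lands here and crashes immediately. */
        mprotect(p + total - page, page, PROT_NONE);

        /* Hand back a pointer whose end lines up exactly with the guard page
           (a real implementation would also round size up for alignment). */
        return p + (total - page - size);
    }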

The more likely reason here is that no one noticed or cared. GTA:SA is 21 years old and this bug doesn't affect the Xbox or other versions.


From GP:

> (with checks for things like this enabled)

You can (and could) easily compile an optimized build with debug symbols to track down sources of issues, but catching a bug like this would likely take a dynamic checker like Valgrind or MSan, which do not allow for any optimizations if you want to avoid false negatives, and add even more overhead on top of that. (Valgrind with its full processor-level virtualization, and MSan with its shadow state on every access. But MSan didn't exist at the time, and Valgrind barely existed.)

At minimum, fine-grained stack randomization might have exposed the issue, but only if it happened to be spotted in playtests on the debug build.


This was a PS2 game and codebase.

MSan didn’t exist at the time and valgrind doesn’t work on a ps2.

Neither of those are necessary to find this bug as it could be found using a stomp allocator if you’re a developer on the project at the time.


How could a stomp allocator have possibly found this bug? The offending values are stored on the stack, in-bounds when written to, and again in-bounds when read from.

At no point is there an OOB access, just a failure to initialize stack variables. And to catch that, you'd need either MSan-style shadow state that didn't exist, thorough playtesting with fine-grained stack randomization, or some sort of poisoning that I don't think existed.
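
A toy example of the bug class (hypothetical names, not the actual GTA code): every access below is in-bounds, so nothing that polices allocation boundaries can fire; only something that tracks whether each byte was ever written (MSan-style shadow state) or deliberate stack poisoning/randomization would catch it.

    #include <stdio.h>

    struct audio_params {
        float volume;
        int   flags;
    };

    static void init_params(struct audio_params *p, int flags)
    {
        p->flags = flags;
        /* Bug: p->volume is never written.  The struct lives on the caller's
           stack, so the read below sees whatever the previous call happened
           to leave in that slot, which often "works" until the stack layout
           shifts and it doesn't. */
    }

    int main(void)
    {
        struct audio_params p;
        init_params(&p, 1);
        printf("volume = %f\n", p.volume);  /* in-bounds, but indeterminate */
        return 0;
    }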


No, you don't; MinGW(-w64) targets Windows directly (with the MinGW runtime statically linked in). I've built a Windows->Linux cross-compiler that depends solely on DLLs built into Windows (kernel32.dll, MSVCRT.dll, and user32.dll).

Granted, that took hundreds of hours, some patches (only 8 lines though), and probably a bit of masochism.

I did of course need MSYS2 command-line utilities like make and bison to run the GCC configure/make scripts, although we use the mingw32 version of make along with the cross-compiler, which also has no other dependencies (it uses cmd.exe as a shell if you don't have a bash.exe in your PATH).


I would assume targeting OpenXR would be much more productive? It's unfortunate that Apple insists on being so proprietary.


The app in question appears to be a Safari extension... so... unlikely.


They are a latecomer to this market. Decisions like this are going to kill any hopes of success...

(Just a guy who has seen all proprietary APIs in this space being left in the dust when a standard arrived)


I've been confused by this; aren't these systems using ACPI instead of Device Tree? I know AWS ARM systems use ACPI.


Qualcomm is using device tree.


I believe it supports both, but only uses ACPI when booting Windows.


The proliferation of Docker containers seems to go against that. Those really only work well since the kernel has a stable syscall ABI. So much so that you see Microsoft switching to a stable syscall ABI with Windows 11.


Source about Microsoft switching to stable syscall ABI due to containers?


https://learn.microsoft.com/en-us/virtualization/windowscont...

"Decoupling the User/Kernel boundary in Windows is a monumental task and highly non-trivial, however, we have been working hard to stabilize this boundary across all of Windows to provide our customers the flexibility to run down-level containers"


I've used Cash App Taxes (previously owned by Credit Karma) for several years. No income limit and free state taxes as well: https://cash.app/taxes

