You're absolutely right to be skeptical! But you're ignoring that vibe coding isn't going away...
That's exactly why I built TheAuditor - because I DON'T trust the code I had AI write. When you can't verify
code yourself, you need something that reports ground truth.
The beautiful irony: I used AI to build a tool that finds vulnerabilities in AI-generated code. It already
found 204 SQL injections in one user's production betting site - all from following AI suggestions.
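For context, the typical pattern behind findings like that is string-built SQL. A hypothetical sketch of the injectable pattern and its fix using SQLite's C API (not the user's actual code; table and column names are made up):

    #include <stdio.h>
    #include <sqlite3.h>

    /* BAD: user input spliced into the SQL string. An input like
       "x' OR '1'='1" changes the meaning of the query. */
    void lookup_unsafe(sqlite3 *db, const char *user_input) {
        char query[256];
        snprintf(query, sizeof(query),
                 "SELECT * FROM bets WHERE user = '%s'", user_input);
        sqlite3_exec(db, query, NULL, NULL, NULL);
    }

    /* GOOD: a parameterized query, so the input is never parsed as SQL. */
    void lookup_safe(sqlite3 *db, const char *user_input) {
        sqlite3_stmt *stmt;
        sqlite3_prepare_v2(db, "SELECT * FROM bets WHERE user = ?",
                           -1, &stmt, NULL);
        sqlite3_bind_text(stmt, 1, user_input, -1, SQLITE_TRANSIENT);
        while (sqlite3_step(stmt) == SQLITE_ROW) { /* consume rows */ }
        sqlite3_finalize(stmt);
    }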
If someone with no coding ability can use AI + TheAuditor to build TheAuditor itself (and have it actually
work), that validates the entire premise: AI can write code, but you NEED automated verification.
What could go wrong? Without tools like this, everything. That's the point.
So at 0.1% extra lifetime risk per CT scan, I guess my five scans took me from a 40% lifetime risk to 40.5%. I'll keep not drinking, not smoking, and not being obese to help with the statistics.
Radiographer: “MRI (magnetic resonance imaging) creates detailed images of the inside of the body using strong magnetic fields and radio waves, rather than X-rays. MRI is/was the holy grail for medical imaging professionals. Arguably the coolest images come from MRI”
In my case, I had a lung issue, and CT scans are more sensitive to air being where it shouldn't be. At least two of the five CT scans could probably have just been X-rays, though.
The answer is both. Devs will first try to fix it by correctly emulating the system behavior because, like you said, that can also fix other games, and because that is the right thing to do. There are occasions where doing that results in a huge performance penalty or some other undesired behavior, so they resort to hacks in the emulator or to straight-up patching the game.
Also, at least in the Dolphin emulator for the GameCube/Wii, they only use game-specific hacks as a last resort. They learned from a lot of older emulator projects that game-specific hacks pile up on each other and eventually make the code an unmaintainable mess.
This is one of the differences between the bsnes/higan family of SNES emulators and the previous generation (ZSNES/snes9x/etc). bsnes/higan and emulators derived from them managed to emulate the SNES more accurately (which required more processing power), and this allowed having fewer game-specific hacks.
Emulation benefits hugely from increases to processing power over time.
The other comment on this thread mentions that it also does something else:
> disables all the system calls not explicitly invoked by the program text of a static binary
This means that if the original library didn't have an execve call in it, you wouldn't be able to use one even with ROP. In short, this blocks attackers from using syscalls that were not originally used by the program, and nothing else. That can still be useful.
Sure, assuming your programs don't execute other programs. I don't know much about OpenBSD specifically, but spawning all over the place is the "norm" in terms of "Unix philosophy" program design.
(I agree with the point in the adjacent thread: it's hard to know what to make of security mitigations that aren't accompanied by a threat model and attacker profile!)
> assuming your programs don't execute other programs.
What about language runtimes? They don't execute other programs in the sense of ELF executables (although the programs they interpret might), but they have to support every syscall that's included in the language. So, for example, the Python interpreter would have to include the appropriate code for every syscall that Python byte code could call (in addition to whatever internal syscalls are used by the interpreter itself). That would be a pretty complete set of syscalls.
Yep, language runtimes are an (inevitably?) large attack surface. My understanding is that OpenBSD userspace processes can voluntarily limit their own syscall behavior with pledge[1], so a Python program (or the interpreter itself) could limit the scope of a particular process. But I have no idea how common that is.
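For illustration, here's a minimal sketch of what that voluntary restriction looks like in C on OpenBSD (the promise strings and file path are just an example set):

    #include <stdio.h>
    #include <unistd.h>
    #include <err.h>

    int main(void) {
        /* From here on, only syscalls covered by the "stdio" and "rpath"
           promise groups are allowed (basic I/O and read-only file access).
           Anything else, e.g. execve or socket, kills the process. */
        if (pledge("stdio rpath", NULL) == -1)
            err(1, "pledge");

        FILE *f = fopen("/etc/hosts", "r");  /* still allowed under rpath */
        if (f != NULL) {
            char buf[256];
            if (fgets(buf, sizeof buf, f) != NULL)
                fputs(buf, stdout);
            fclose(f);
        }
        return 0;
    }

An embedding application could call pledge() like this before handing control to the interpreter, but as noted, I don't know how common that is in practice.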
The syscall number goes in a register, but it does not have to appear literally right next to the `syscall` instruction in the binary. As TFA explains in the introduction, a syscall stub generally looks like
mov eax,0x5
syscall
However, it doesn't have to: `syscall` will work as long as `eax` is set, no matter where it's set or where it's set from. You could load the number from an array or from a computation for all `syscall` cares.
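To make that concrete, here's a sketch in C with GCC-style inline assembly (x86-64 convention: number in rax, first argument in rdi; details vary by OS) of a syscall whose number is runtime data rather than an immediate:

    /* There is no `mov eax, IMM` paired with this `syscall` in the binary:
       the number arrives as an ordinary function argument. */
    long raw_syscall1(long number, long arg1) {
        long ret;
        __asm__ volatile ("syscall"
                          : "=a"(ret)               /* result in rax */
                          : "a"(number), "D"(arg1)  /* rax = number, rdi = arg1 */
                          : "rcx", "r11", "memory"  /* syscall clobbers rcx/r11 */
                          );
        return ret;
    }

From the CPU's point of view this is indistinguishable from the stub above; only the provenance of the number differs.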
So as an attacker, if you can get `eax` set to a value you control (and probably a few other registers), then jump to the `syscall` instruction directly, you have arbitrary syscall capabilities.
The point of this change is that the loader now records exact syscall stubs as "address X performs syscall S"; then, when a syscall is made, the kernel validates that it matches what the loader recorded, and if not it aborts (I assume, I didn't actually check).
This means that as long as your Go binary uses a normal syscall stub, it'll be recognised by the loader and whitelisted, but if, say, a JIT constructs syscalls dynamically (instead of bouncing through libc or whatever), that will be rejected because the loader won't have that (address, number) pair recorded.
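Conceptually, the check would be something like the following (a hypothetical sketch with made-up names, not OpenBSD's actual code):

    #include <stddef.h>
    #include <stdint.h>

    /* One entry per syscall stub the loader found in the program text. */
    struct pinned_syscall {
        uintptr_t addr;   /* address of the `syscall` instruction */
        int       sysno;  /* the number loaded into eax right before it */
    };

    /* On syscall entry: allow only (pc, sysno) pairs the loader recorded.
       A JIT-emitted stub runs from an address the loader never saw, so it
       fails the lookup and the process gets killed. */
    int pinned_syscall_ok(uintptr_t pc, int sysno,
                          const struct pinned_syscall *table, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            if (table[i].addr == pc && table[i].sysno == sysno)
                return 1;   /* recorded: allowed */
        return 0;           /* unrecorded: rejected */
    }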