Those aren't considered to be security issues. Makes me wonder what the point of banning `unsafe` is at all. You're going to need some other system anyway...
A large number of unsoundness bugs only work if you have access to the stdlib, because they're flaws in stdlib types and functions that use `unsafe` internally, and are supposed to present a safe interface around it.
If you (a) don't have access to `unsafe`, and (b) don't have access to the parts of the stdlib that let you do powerful things without `unsafe`, then you're very limited in what you can do.
https://smallcultfollowing.com/babysteps/blog/2016/10/02/obs... discusses this further. Conceptually, you can think of "entirely Safe Rust" as a very limited language, to which you progressively add "capabilities" by exposing safe interfaces implemented with unsafe code. For example, Vec and Box (which require unsafe internally) grant safe code the ability to do heap allocations.
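To make the capability framing concrete, here's a minimal sketch (my own illustration, not from the linked post): a crate can forbid `unsafe` entirely at the crate level and still heap-allocate, because `Vec` and `Box` wrap their internal `unsafe` behind safe APIs.

```rust
// Crate-level attribute: the compiler rejects any `unsafe` block
// anywhere in this crate.
#![forbid(unsafe_code)]

fn main() {
    // Heap allocation still works: Box and Vec use `unsafe` internally
    // (raw pointers, the allocator API) but expose safe interfaces,
    // effectively granting this crate the "heap" capability.
    let boxed: Box<i32> = Box::new(41);
    let mut v: Vec<i32> = Vec::with_capacity(4);
    v.push(*boxed + 1);
    println!("{}", v[0]); // prints 42
}
```

Without those stdlib types (or some other safe wrapper someone wrote with `unsafe`), a `forbid(unsafe_code)` crate has no way to allocate on the heap at all.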
It's true that this is not designed as a security boundary. As I note in my comment above, the PL/Rust devs also make that clear. That doesn't mean it has no value as part of a defence in depth strategy.
The rustc driver for trusted PL/Rust prevents using the subset of the Rust language required to trigger those issues. Most of them involve things that would have a hard time traversing the Postgres procedure-call boundary in a legitimate use case anyway, so this isn't expected to meaningfully affect actual user code.
Surely all a "sufficiently motivated" attacker would need to do is peruse the unsound bugs on GitHub?
https://github.com/rust-lang/rust/issues?q=is%3Aopen+is%3Ais...