For example: no buffer overflows, null pointer dereferences, use-after-free, etc. On ARM and RISCV64, not even the C compiler has to be trusted, because functional correctness has been proven down to the binary itself. And there are more proofs besides functional correctness.
https://docs.sel4.systems/projects/sel4/frequently-asked-que...
"Succinct Non-interactive Arguments of Knowledge", it's a system for zero-knowledge proofs, which allow proving a fact of some kind without disclosing the inputs
Instead of "hallucinating" I would have preferred the term "bullshitting" -- in the Harry G. Frankfurt sense of not caring about the truth of one's utterances. But it's too late for that.
Using "bullshit" would be interesting, but to me would introduce a backdoor anthropomorphism to describe the output. The picture is still too human.
Isn't Frankfurt's concept of bullshit made up of two parts: 1) a distinction between lying and telling the truth, AND 2) an absence of caring about either when speaking, where that caring is normally assumed to be present?
Part 1 seems to apply, but part 2 wouldn't. It doesn't make sense to talk about GPT "caring" about its output beyond anthropomorphism. No one talks about their computer caring about producing correct or accurate output, nor is that caring assumed. People would think you're imagining a demon in the box. Really, it's odd even to say "GPT lied" outside of very specific circumstances.
I think "bullshitting" fits better than "hallucinating" - just keep spitting out words rather than admit ignorance, but maybe the best human analogy is freestyle rapping where one has to keep the flow of words coming regardless!
Maybe we just need to coin a new word for it - "LLM-ing", perhaps?!