It's usually not the disk space that's the bottleneck, but rather the CPU instruction cache. On modern Intel and AMD CPUs, the jump from L1 -> L2 alone can triple the latency of a memory fetch. For a "hello world" application that doesn't really matter, but for, say, an OS kernel, it becomes really important to keep as much of your hot-path code in i-cache as possible.
The binary size tells you ~nothing about how effectively the program uses L1 icache. In fact, optimizations regularly increase binary size, because it turns out inlining to avoid function call overhead can matter even more. See also loop unrolling, SIMD paths, etc.
I'm more inclined to believe Zig's "small binaries" come from a lack of optimizations than from some obsessive focus on L1 icache density. Which, given it's not a 1.0 language yet, isn't something that can be held against Zig. But it'd hardly be a strength, either.
I really am stretching to find a link between a typechecker and L1 cache. Perhaps some kind of dynamic analysis of code plus averaged data could give you a non-empirical measurement of L1 cache utilisation, but considering cache behaviour isn't standardised even within one vendor, it would be a ball-park guess at best.
Given that the size of L1 cache can vary, I strongly doubt it. At best it can guarantee everything fits into a specific size, which may or may not be smaller than L1 cache.