> Accidentally getting non-typechecked code in the compiler, or running into problems with the compiler thinking two equal types are distinct?
Yes, this sort of thing. Basically, you have to have this be deterministic, or you end up with very strange possibilities, possible miscompilations, and in the best case, confusing errors. One option is to simply accept that these things can happen. Another is to restrict what you can do at compile time to ensure that they can't.
An extremely simple example is cross compiling. In Rust, usize is dependent on the architecture you're compiling for. A very simple "just compile and run the program, get the answer, and use it" implementation of compile-time execution will produce a usize of the size of the host, not the target. That's a miscompilation. This example, while simple, is also simple to fix. But it shows that it's not as trivial as "use the compiler to compile the program, then run it." Which maybe isn't how you think of this feature, but it's how I thought of it, back before I knew anything about this topic :)
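To make the usize point concrete, here's a minimal Rust sketch (my own illustration, not from the original thread). Rust's actual const evaluation computes with the target's layout, so this is correct even when cross-compiling; the bug described above is what you'd get from a hypothetical implementation that just ran the code natively on the host:

```rust
use std::mem::size_of;

// Evaluated at compile time. Rust's const evaluator uses the *target's*
// data layout, so cross-compiling to a 32-bit target yields 4 here.
// A naive "run it on the host" compile-time-execution scheme would
// instead bake in the host's pointer width: a miscompilation.
const USIZE_BYTES: usize = size_of::<usize>();

fn main() {
    // On a 64-bit target this prints 8; built with e.g.
    // `--target i686-unknown-linux-gnu`, it prints 4.
    println!("{}", USIZE_BYTES);
}
```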
> I might be overreacting here though
Nah, I think that you're not wrong. It's just that, when you start applying this super rigorously, you end up in weird places. How can you trust any behavior in a language without a specification? How can you trust a specification if that specification hasn't been formally proven? How can you trust an implementation of a formally proven specification? How can you trust that the silicon you're running on does the right thing, even with your bug-free, formally proven code?
Everyone chooses a point along this axis that they're comfortable with. And everyone does something, including "I can ignore these problems because in practice they don't happen to me," to deal with the bits outside of what they consciously choose to focus on.
> An extremely simple example is cross compiling. In Rust, usize is dependent on the architecture you're compiling for. A very simple "just compile and run the program, get the answer, and use it" implementation of compile-time execution will produce a usize of the size of the host, not the target.
I get that there are non-obvious problems here, and as you say, this problem specifically has an easy fix, but I'd just like to note how Zig does this:
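(The original snippet is missing here; the following is my reconstruction of what it plausibly looked like, based on the description below — the exact variable names and format string are assumptions. The key point is that Zig's comptime evaluates with the target's `usize`, not the host's:)

```zig
const std = @import("std");

// Computed at compile time. Zig's comptime uses the *target's* usize,
// so cross-compiling (e.g. with a 32-bit `-target`) changes this value.
const comptime_size = @sizeOf(usize);

pub fn main() void {
    // The `.{ ... }` is Zig's anonymous-struct syntax for format arguments.
    std.debug.print("{}\n{}\n", .{ comptime_size, @sizeOf(usize) });
}
```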
Modulo the `.{` weirdness, this probably looks familiar. By default, this prints out 8 and 8 on my system, but if I cross-compile to a 32-bit target, it prints 4 and 4.
> It's just that, when you start applying this super rigorously, you end up in weird places. How can you trust any behavior in a language without a specification?
I think this is a social issue for me; if rustc decides one day to change its behavior under my feet, it feels like it's my fault for not having written proper Rust in the first place (even if the behavior was never properly defined), but if there are crazy things going on in the compiler due to bugs (that we didn't find, because proofs are hard), then that's not really my fault, in a sense. And of course, if my CPU decides to run my program wrong, that can't really be blamed on me. The end result in these three cases is the same: the program didn't run as expected, but the blame (I don't want to point fingers, but this is the best word I could come up with) is different, and the probability of each happening is vastly different. I've never hit a CPU bug, but in the little unsafe Rust code I have written, I've had behavior change with a compiler update, which I'm sure is because I hit UB.
And for what it's worth, I would greatly prefer Rust having a proper spec, even if that would increase turnaround time for language evolution, just to ensure that everyone really is on the same page with respect to what the language really should and shouldn't do. I realize that Rust would rather be careful and make sure that the decisions made are the right ones. I think it's a fair trade-off, but I'm not sure I would have made it, if it were up to me.