Holy crap, this is f*ing awesome. Rust has made systems programming exciting for me again.
Seeing well-written Rust code like this (also, the repos on GitHub https://github.com/trending?l=rust) has been a really good learning experience. As a Rust newbie, I only wish there were better examples of testing in public Rust repos.
>As a Rust newbie, I only wish there were better examples of testing in public Rust repos.
Testing is incredibly easy. The reason you may have missed the testing is that tests are often in the same file as tested code. Go back and check, it's very common in Rust to test your code. And it's as simple as writing:
#[test]
on the line before your unit-test function, and then using some of the unit-test macros in std like `assert!` and `assert_eq!`, along with `fail!` (later renamed `panic!`), to perform the checks[1].
Then you test your code by running the normal rustc command but with the `--test` flag. A test runner with very attractive output is built into the compiler.
Rust unit testing is just about the easiest testing I've ever used, right up there with golang. I love that it's built into the language.
Also, the `cargo` package manager makes everything -- including unit testing -- incredibly convenient. And I say this as someone who cringes every time a language comes out with yet another package manager or build tool (ahem, .js). But in the case of Rust's cargo, it's incredibly worth it to use a new package manager, and so much better than a Makefile (although it's quite easy to integrate makefiles with cargo).
One of the huge wins of rust is that the devs put a lot of effort into its basic infrastructure. Testing, documenting, and benchmarking are almost too easy.
These are really valuable projects. It's not only a nostalgic project to hack on. It's also a learning tool for those interested in game programming, idiomatic Rust, and C-to-Rust conversion. Even building complex software from requirements. Thank you, Cristi Cobzarenco.
Indeed! Looking at projects like this and Piston are helping me get to grips with the language in a much more useful manner than working through the various guides/tutorials.
This is sort of off topic, but the fact that every change to the source requires (on my machine) 3-4 full seconds to recompile, without optimization, is indicative of the compiler performance problems that remain my main roadblock with Rust.
It's probably better that, before 1.0, the development effort concentrates on getting the language right rather than on compilation speed, which can be fixed later.
While this is true, it's also important that things aren't introduced which make these kinds of optimizations impossible. I think we've done a good job of that, but it is something to keep in mind.
Have you profiled the build? Is most of it spent in the compiler?
A more practical point is that the main competitor is C++, a language notorious for long compilation times, so not-ultra-fast compile times might not be the highest priority for people working on rustc.
C++ compilers are very fast (I would say amazingly fast); unfortunately, atrocious header-only implementations like Boost drag them down. I just ran clang++ -E on a single #include of cpp-netlib, which depends on Boost asio, and the dumped output came to:
$ wc test1
279506 897765 10174700 test1
And this is after enabling dynamic linking, which cuts down a few thousand lines. Library writers are not giving compiler writers a break! I find this practice atrocious. People should be able to import just the interface instead of the whole implementation of everything (and enable a "headers only" feature at the end if so desired, to eliminate the need for linking).
Compatibility with C tooling is also a big reason, since templates must be header-only to be consumed by other translation units.
Unfortunately "export template" was a failed experiment.
Now C++ developers need to wait until C++17 for modules, if they ever get into the standard. And if they do, it will most likely take until around 2020 for all major C++ compilers across embedded, desktop, and server systems to support them.
Now the question is whether one is willing to wait that long or would rather use a language that has modules today.
Err, header-only libraries exist mostly because templates require them, not because people don't like linking (well, there's some of that too, but it's the minority).
While the problems with templates and huge includes are undeniable, I've found in my experience that for a decently sized C++ program the linking stage alone can take far more than the 3-4 seconds the GP is talking about.
C and C++ are definitely some hard beasts to compile, even without boost.
Not for an incremental compile of a single changed file without optimization, it's not - depending on the type of change, of course, and how heavy the C++ code in question is, and I guess on whether you're using a broken IDE/build system (my standard is the command line, using make for small projects and ninja for larger ones). And incremental compilation can scale up to much larger projects without increasing the time much, whereas in Rust that would require splitting the code into crates, and even then it would frequently require a full rebuild due to the coarse dependency tracking of "anything in this crate changed -> rebuild all dependents".
That said, I expect incremental compilation built into rustc to greatly improve the situation - if done right, it should easily beat C++, although I don't know the details of the plan - so I'm happy someone is working on it. (In the past I wanted to work on it myself, but I was in a big, generally unproductive slump.)
Another thing I don't see mentioned often, but which alleviates most of the compile-time waiting pain for me (and I'm actually a dynamic-language guy), is the fact that since optimizations are usually what takes longest, the compiler is quite quick about telling me about errors.
But maybe my projects just haven't been big enough :)
I want two types of results: when developing, I want fast turnaround and most of the time don’t care about runtime performance; when deploying, I don’t care much about how long it takes to compile, but I want runtime performance to be optimised.
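Rust's cargo splits these two modes along exactly those lines. The default profiles look roughly like this (standard cargo defaults, stated from memory rather than from anything in this thread):

```toml
# Cargo.toml profile sketch: `cargo build` uses [profile.dev] for fast
# turnaround; `cargo build --release` uses [profile.release] for
# optimized output.
[profile.dev]
opt-level = 0    # fast compiles, slow code

[profile.release]
opt-level = 3    # slow compiles, fast code
```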
Even if it's a simple idea, IIRC Fabrice Bellard suggested using very simple compilers such as tcc for prototyping and gcc for the final build. A few years ago there was also an intermediate step of using clang for error messages (and also portability).
There is a reason that every C/C++ compiler has various optimization options. Developers like the compiler to respond quickly to changes and later they'd like the highest performing code at the cost of compile speed.
Depends on how many people are developing on the code base. If every morning starts with a full build, with a few more spread throughout the day for integration, it is pretty awful. Additionally, I think submit queues are the way to go for committing code to master, and if it takes an hour before you know your change is good, I'm pretty unhappy. As far as incremental compiles go, just developing in IDEA does a pretty good job.
Turbo Pascal and Delphi had incredibly fast compilers. Why can't we make fast compilation a required feature today? We should never have to wait for the compiler/linker.
They also didn't have the zero-cost memory safety abstractions that Rust has. Nor did they do much optimization.
That said, we're working on compilation speed. The focus so far has been getting the language in shape and runtime performance of the generated code, not compilation speed (although we've picked most of the low-hanging fruit in compilation speed anyway).
How optimistic are you that the compilation speed can be significantly increased? Are there a lot of easy optimizations available or is this just the price paid for extra compile time safety?
I'm not really a fan of Go but waiting for the compiler can be a pretty significant productivity killer on larger projects. Their fast compilation times are a pretty big selling point for what is (IMO) an otherwise underwhelming language design.
> How optimistic are you that the compilation speed can be significantly increased? Are there a lot of easy optimizations available or is this just the price paid for extra compile time safety?
The vast majority of compile time for optimized builds (80%-90%) is spent in code generation and optimization in LLVM.
For unoptimized builds, most of the compile time is spent in the typechecker doing type unifications for method lookup. With some optimizations to quickly reject method candidates I suspect this can be greatly improved.
Incremental compilation for the fast turnaround is being worked on and there has been significant progress, to address comex' complaint.
> Their fast compilation times are a pretty big selling point for what is (IMO) an otherwise underwhelming language design.
Go 6g/8g also doesn't do much optimization by comparison to GCC/LLVM. (Rust uses LLVM.)
Generally I think many people complain too much about something that takes a few seconds.
C++ builds are measured in hours; they usually require distributed build systems, clever use of forward declarations, and moving private class declarations into static code or PImpl classes to bring build times down to something manageable.
While I agree with you, I'll also say that the difference between a second or two and instant is _huge_. The Ruby world has been talking about how to get unit-test run times down for the past few years for this reason. Sub-second test suite runs are _amazing_.
> Static typing vs. unit tests is a false dichotomy.
Not at the enterprise.
In what is now almost 30 years of dealing with computer systems, I have only had the luck to work at one single company that took unit tests seriously.
At all the other companies, people only write tests if a manager imposes them (usually only when the customer makes it a condition of payment), or if the tests are somehow tied to their performance evaluation.
Otherwise, the best you can get are integration tests at around one month before delivery date.
Ah, and agile in the enterprise is a synonym for a 3 week long mini-waterfall project.
> I don't use them since the late 90's, but they used to be comparable to contemporary C and C++ MS-DOS/Windows compilers.
There has been an enormous difference in the quality of optimization and code generation since the 1990s, when single-pass compilers such as Turbo Pascal and Delphi were common. At that time you usually went straight from the AST to machine code, doing some peephole optimizations along the way, but that is unacceptable today if you want to compete with modern C and C++ compilers (or even Java HotSpot). The introduction of SSA (static single assignment) form, and along with it GVN (global value numbering), SROA (scalar replacement of aggregates), SCCP (sparse conditional constant propagation), and so on, was a big deal.
What I was trying to say is that if you put a C or C++ compiler up against a Turbo Pascal or Delphi compiler of the same age, the latter will compile far faster and generate code of similar quality.
If they had received the same investment that C and C++ have since those days, that would still hold.
But history took another path, so it's kind of a moot point now.
The Borland compilers were single-pass compilers; they did not go through an AST, but generated machine code directly from parser input. Also, the original Turbo Pascal compiler was written in assembly language. (Not sure about the later compiler versions used by BP and Delphi.)
Also, the language was quite simple. With large, complex programs, the most time-consuming stage was in the linking of the final binary.
If you want a similarly fast compiler today, look at Go.
First and foremost, a language must satisfy some need that is not met adequately by another existing language. Otherwise it's useless.
There are languages already with fast compilers, and some of them are pretty fast and have pretty good semantics.
Rust is really trying to offer a combination of things not available elsewhere: safety, speed, and control. But that innovation is based heavily on compiler features, which have a compile-time performance penalty.
Assuming Rust succeeds at delivering an innovative new language, its developers can then work on improving compiler performance, or find ways to avoid paying some compile-time costs in certain situations.
(Even more OT, and not a criticism of your comment.)
I wish someone could send your comment back in time so I could have read it when Doom was first released. I think it took longer just to get the game started up back then.
OT but it's wonderful to see those screenshots again. It's been decades since I played that game and I can still remember the hidden tricks in those rooms.