A Doom Renderer written in Rust (github.com/cristicbz)
229 points by adamnemecek on Oct 12, 2014 | hide | past | favorite | 64 comments



Holy crap, this is f*ing awesome. Rust has made systems programming exciting for me again.

Seeing well written Rust code like this (see also the trending Rust repos on GitHub: https://github.com/trending?l=rust) has been a really good learning experience. As a Rust newbie, I only wish there were better examples of testing in public Rust repos.


>As a Rust newbie, I only wish there were better examples of testing in public Rust repos.

Testing is incredibly easy. The reason you may have missed the tests is that they often live in the same file as the code they test. Go back and check; testing your code is very common in Rust. And it's as simple as writing:

  #[test]
on the line before your unit test function, and then using some of the unit test macros in std like `assert!` and `assert_eq!` along with `fail!` to perform the tests[1].

Then you test your code by running the normal rustc command but with the `--test` flag. A test runner with very attractive output is built into the compiler.
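To make that concrete, here's a minimal sketch of a self-testing file (the `area` function and test name are illustrative, not from this project):

```rust
// Build a test runner with `rustc --test area.rs`; running the
// resulting binary executes every #[test] function and prints a
// pass/fail report.

fn area(w: u32, h: u32) -> u32 {
    w * h
}

#[test]
fn area_works() {
    assert!(area(2, 3) > 0);   // boolean check
    assert_eq!(area(2, 3), 6); // equality check with helpful failure output
}
```

With cargo, `cargo test` does the same compile-and-run dance for you.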

Rust unit testing is just about the easiest testing I've ever used, right up there with golang. I love that it's built into the language.

Also, the `cargo` package manager makes everything -- including unit testing -- incredibly convenient. And I say this as someone who cringes every time a language comes out with yet another package manager or build tool (ahem, .js). But in the case of Rust's cargo, it's incredibly worth it to use a new package manager, and it's so much better than a Makefile (although it's quite easy to integrate makefiles with cargo).

1. http://doc.rust-lang.org/std/index.html


The guide has a whole section on this: http://doc.rust-lang.org/guide.html#testing

And we even have a full guide on testing, though I haven't checked it lately: http://doc.rust-lang.org/guide-testing.html (it's on the todo list)


One of the huge wins of rust is that the devs put a lot of effort into its basic infrastructure. Testing, documenting, and benchmarking are almost too easy.


This is a really valuable project. It's not only a nostalgic project to hack on; it's also a learning tool for those interested in game programming, idiomatic Rust, C-to-Rust conversion, and even building complex software from requirements. Thank you, Cristi Cobzarenco.


Indeed! Looking at projects like this and Piston is helping me get to grips with the language in a much more useful manner than working through the various guides/tutorials.


You're welcome! :D


Stuff like that is the reason why I'm excited for Rust's future. It's really an amazing language.


This is sort of off topic, but the fact that every change to the source requires (on my machine) 3-4 full seconds to recompile, without optimization, is indicative of the compiler performance problems that remain my main roadblock with Rust.


The core devs are sympathetic (what with Rust being self-hosted) and apparently incremental compilation is coming soon: https://github.com/rust-lang/meeting-minutes/blob/master/wee...

There is also a way to parallelize compilation, which might sometimes speed things up, it's just not enabled by default yet: http://discuss.rust-lang.org/t/default-settings-for-parallel...


It's probably better that, before 1.0, the development effort concentrates on getting the language right rather than on compilation speed, which can be fixed later.


While this is true, it's also important that things aren't introduced which make these kinds of optimizations impossible. I think we've done a good job of that, but it is something to keep in mind.


Sounds like that's mostly the lack of incremental compilation, which is being worked on.


I remember a day when I was proud that my C projects took a few seconds to compile. It made me feel like I had written something with some oomph to it.


Have you profiled the build? Is most of it spent in the compiler?

A more practical point is that the main competitor is C++, a language notorious for long compilation times, so not-ultra-fast compile times might not be the highest priority for people working on rustc.


C++ compilers are very fast (I would say amazingly fast); unfortunately, atrocious header-only implementations like Boost drag them down. I just ran clang++ -E on a single #include of cpp-netlib, which depends on Boost asio, and the dumped output came to:

$ wc test1

  279506   897765 10174700 test1
And this is after enabling dynamic linking, which cuts down a few thousand lines. Library writers are not giving compiler writers a break! I find this practice atrocious. People should be able to just import the interface instead of the whole implementation of everything (and enable a "headers only" feature at the end if so desired to eliminate the need for linking).


Walter Bright, the author of D and of the first real native C++ compiler (Zortech C++), explains it:

http://www.drdobbs.com/cpp/c-compilation-speed/228701711

Compatibility with C tooling is also a big reason, as templates must be header-only to be consumed by other translation units.

Unfortunately "export template" was a failed experiment.

Now C++ developers need to wait until C++17 for modules, if they ever get into the standard. And if they do, it will most likely take until around 2020 for all major C++ compilers across embedded, desktop and server systems to offer support for it.

Now the question is, if one is willing to wait that long or rather use a language that can use modules today.


Err, header-only libraries exist mostly because templates require them, not because people don't like linking (well, there's some of that too, but it's the minority).


While the problems with templates and huge includes are undeniable, I've found in my experience that for a decently sized C++ program the linking stage alone can take way more than the 3-4 seconds the GP is talking about.

C and C++ are definitely some hard beasts to compile, even without boost.


3-4 seconds is extremely fast if you're coming from the C/C++ world. I can't see how this could possibly be a roadblock.


Not when a single file has changed and you're recompiling without optimization, it's not - depending on the type of change, of course, on how heavy the C++ code in question is, and I guess on whether you're using a broken IDE/build system (my standard is the command line, using make for small projects and ninja for larger ones). And incremental compilation can scale up to much larger projects without increasing the time much, whereas in Rust that would require splitting up into crates, and even then would frequently require a full rebuild due to the coarse dependency tracking of "anything in this crate changed -> rebuild all dependents".

That said, I expect incremental compilation built into rustc to greatly improve the situation - if done right, it should easily beat C++, although I don't know the details of the plan - so I'm happy someone is working on it. (In the past I wanted to work on it myself, but I was in a big generally unproductive slump..)


C++, definitely, if you're using heavily templated code; C, not so much.

Although I agree with your point, I've been using Rust quite a lot lately and compilation time was never an issue so far.


Another thing which I don't see mentioned often, but which alleviates most of the compile-time waiting pain for me (and I'm actually a dynamic language guy), is the fact that since optimizations are usually what takes longest, the compiler is quite quick about telling me about errors.

But maybe my projects just haven't been big enough :)


It is in the top 3 issues with Scala as well. The amount of drag it causes in a project just isn't worth it.


The Rust compiler is much faster than Scala's compiler, from everything I have heard.


It really doesn't matter how long the compiler takes, so long as the result is fast.


I want two types of results: when developing, I want fast turnaround and most of the time don’t care about runtime performance; when deploying, I don’t care much about how long it takes to compile, but I want runtime performance to be optimised.


Even if it's a simple idea, IIRC Fabrice Bellard suggested using very simple compilers such as tcc for prototyping and gcc for the final build. A few years ago there was also an intermediate step using clang for error messages (and also portability).


That's not true; for quick turnaround it's important to be able to get results out of the compiler quickly.


In what scenario does executable correctness and performance take a backseat to compiler runtime? Seems backwards to me.


There is a reason that every C/C++ compiler has various optimization options. Developers like the compiler to respond quickly to changes and later they'd like the highest performing code at the cost of compile speed.


FWIW, I don't find it to be a big issue in Scala anymore with SBT's incremental compilation.


Depends on how many people are developing on the code base. If every morning is a full build with a few spread throughout the day for integration it is pretty awful. Additionally, I think submit queues are the way to go for committing code to master and if that takes an hour before you know your change is good I'm pretty unhappy. As far as incremental compiles go, just developing in IDEA does a pretty good job.


More than speed, it's the ridiculous RAM requirements that made me cringe while learning Scala. sbt itself ate 2 GB of RAM on my machine.


Turbo Pascal and Delphi had incredibly fast compilers. Why can't we make fast compilation a required feature today? We should never have to wait for the compiler/linker.

http://prog21.dadgum.com/47.html


They also didn't have the zero-cost memory safety abstractions that Rust has. Nor did they do much optimization.

That said, we're working on compilation speed. The focus so far has been getting the language in shape and runtime performance of the generated code, not compilation speed (although we've picked most of the low-hanging fruit in compilation speed anyway).


How optimistic are you that the compilation speed can be significantly increased? Are there a lot of easy optimizations available or is this just the price paid for extra compile time safety?

I'm not really a fan of Go but waiting for the compiler can be a pretty significant productivity killer on larger projects. Their fast compilation times are a pretty big selling point for what is (IMO) an otherwise underwhelming language design.


> How optimistic are you that the compilation speed can be significantly increased? Are there a lot of easy optimizations available or is this just the price paid for extra compile time safety?

The vast majority of compile time for optimized builds (80%-90%) is spent in code generation and optimization in LLVM.

For unoptimized builds, most of the compile time is spent in the typechecker doing type unifications for method lookup. With some optimizations to quickly reject method candidates I suspect this can be greatly improved.

Incremental compilation for the fast turnaround is being worked on and there has been significant progress, to address comex' complaint.

> Their fast compilation times are a pretty big selling point for what is (IMO) an otherwise underwhelming language design.

Go 6g/8g also doesn't do much optimization by comparison to GCC/LLVM. (Rust uses LLVM.)


> Go 6g/8g also doesn't do much optimization by comparison to GCC/LLVM. (Rust uses LLVM.)

You should compare with gccgo, though.


Rust compilation times can still be made faster.

Generally I think many people complain too much for what takes a few seconds.

C++ builds are measured in hours, and usually require distributed build systems, clever use of forward declarations, and moving class-private declarations into static code or PImpl classes to bring the time down to something manageable.


While I agree with you, I'll also say that the difference between a second or two and instant is _huge_. The Ruby world has been talking about how to get unit test run times down for the past few years for this reason. Sub-second test suite runs are _amazing_.


I am a huge proponent of agile, but in the enterprise world I work in, strong type checking wins over unit tests.

It is a lost battle trying to get enterprise guys to write a single line of unit tests.


Ruby has strong type checking. ;)

Static typing vs. unit tests is a false dichotomy.


> Static typing vs. unit tests is a false dichotomy.

Not at the enterprise.

In what is now almost 30 years of dealing with computer systems, I have only had the luck to work at one single company that took unit tests seriously.

All the other companies, the guys just write tests if a manager imposes them (usually only if the customer makes it a condition of payment), or they are somehow related to their performance evaluation.

Otherwise, the best you can get are integration tests at around one month before delivery date.

Ah, and agile in the enterprise is a synonym for a 3 week long mini-waterfall project.


> They also didn't have the zero-cost memory safety abstractions that Rust has.

No, but it was better than what C and C++ have.

> Nor did they do much optimization.

I haven't used them since the late '90s, but they used to be comparable to contemporary C and C++ MS-DOS/Windows compilers.


> I haven't used them since the late '90s, but they used to be comparable to contemporary C and C++ MS-DOS/Windows compilers.

There has been an enormous difference in the quality of optimization and code generation since the 1990s, when single-pass compilers such as Turbo Pascal and Delphi were common. At that time you usually went straight from AST to machine code, doing some peephole optimizations along the way, but this is unacceptable today if you want to compete with modern C and C++ compilers (or even Java HotSpot). The introduction of SSA (and along with it GVN, SROA, SCCP, etc.) was a big deal.


I am aware of it.

What I was trying to say is that if you put a C or C++ compiler against a Turbo Pascal or Delphi compiler of the same age, the latter will compile way faster and generate code of similar quality.

If they had received the same investment as C and C++ have since those days, the situation would still hold.

But history took another path, so it is kind of a moot point now.


The Borland compilers were single-pass compilers; they did not go through an AST, but generated machine code directly from parser input. Also, the original Turbo Pascal compiler was written in assembly language. (Not sure about the later compiler versions used by BP and Delphi.)

Also, the language was quite simple. With large, complex programs, the most time-consuming stage was in the linking of the final binary.

If you want a similarly fast compiler today, look at Go.


> If you want a similarly fast compiler today, look at Go.

There's also the amazing and much underrated Free Pascal: http://www.freepascal.org


Why?

First and foremost, a language must satisfy some need that is not met adequately by another existing language. Otherwise it's useless.

There are languages already with fast compilers, and some of them are pretty fast and have pretty good semantics.

Rust is really trying to offer a combination of things not available elsewhere: safety, speed, and control. But that innovation is based heavily on compiler features, which have a compile-time performance penalty.

Assuming Rust succeeds in delivering an innovative new language, they can secondarily try to improve compiler performance, or find ways to avoid paying some compile-time costs in certain situations.


Oh, Gods, 3-4 full seconds, perish the thought. That's really not that long a time at all.


(Even more OT, and not a criticism of your comment.)

I wish someone could send your comment back in time so I could have read it when Doom was first released. I think it took longer just to get the game started up back then.


A similar project: a Quake 2 map renderer written in Julia: https://github.com/jayschwa/Quake2.jl


This is cool! Thanks for posting it.


OT but it's wonderful to see those screenshots again. It's been decades since I played that game and I can still remember the hidden tricks in those rooms.


This is awesome. Would you be interested in writing sort of a tutorial on building something like this? (Obviously much more basic, but still...)


How does it interact with OpenGL? Using C bindings?


We have a syntax extension that uses Khronos' XML registry (https://www.opengl.org/registry/#specfiles) to generate a symbol loader (somewhat like GLEW) at compile time. https://github.com/bjz/gl-rs


Thanks for the pointer!


It uses gl-rs, which auto-generates a Rust wrapper around the C bindings using a compiler plugin.

https://github.com/bjz/gl-rs


This is very cool. Really love seeing devs doing stuff like this.


Why do these posts' titles keep getting moderated?

"Doom I/II written in Rust" is a fine title for what this is.


The new title is more accurate. I sometimes disagree with moderated titles, but in this case it's not an issue.


"Doom I/II" suggests the whole game is implemented, which it isn't. But I am working on it!



