I think there are two major against-C groups: those of us who have worked with C for decades and those who never worked with it. I'll try to speak for those of us who've used it for decades. The popular high-level languages that have arrived since ~1995 (Java, Python, JS, C# and friends) bring excellent productivity gains. In general, they sacrifice memory and performance in favor of robustness and security. For enormous software problem domains, we just don't need C's complexity or error-proneness.
Until Rust, there have been very close to zero serious competitors to C if I wanted to write a bootloader, OS, or ISR. Not even C++ could do those (without being extremely creative about how it's built/used). The ~post-2000 languages (Go, Swift, D, etc.) can't do that (perhaps D's an exception, but it wasn't an initial goal AFAICT). This is huge, IMO.
We've groaned and grumbled about how hard it is to parse C/C++ code for decades. This is a big deal for tooling. Because of the language's design, even if you use something "simple" like libclang to parse your code, you still have to reproduce the entire build context just to sanely make an AST. All of those other new languages above probably address this problem but also add all kinds of other stuff which we can't have for specialized problem domains (realtime/low-latency requirements, OSs, etc).
> collection of ... software not going to be rewritten in eg. Rust or some (lets face it) esoteric/niche FP language
IMO it's not appropriate to lump Rust in with "nice FP language"s. And don't look now but lots of stuff is being rewritten in Rust. Fundamental this-is-the-OS-at-its-root stuff: coreutils [1], "libc" [2], kernels [3], browser engines [4].
> IMO it's not appropriate to lump Rust in with "nice FP language"s.
Maybe I should have expressed it better, but I didn't intend to lump these together.
>And don't look now but lots of stuff is being rewritten in Rust.
I myself am cautiously optimistic re Rust, but having been burnt by C++ in the past, I'm not enthusiastic about fighting language idiosyncrasies (though modern C++ certainly deserves a second look). Then there's the issue (some might argue it's a plus) that Rust is at the same time a language, a standard library, and the only compiler implementation (unlike C or C++, which give you a choice).
The rust coreutils is an excellent example of the issues of having such a mess of abstractions: the resulting binaries are literally an order of magnitude larger than the busybox equivalents.
> issues of having such a mess of abstractions: the resulting binaries are literally an order of magnitude larger
They're significantly larger, yes -- it's a fair complaint of rust. But it's mostly because of static linkage AFAIK [1] and not "a mess of abstractions".
Actually, the culprit is Rust's decision to statically link its standard library and all its dependencies by default.
Things like libunwind, libbacktrace, embedded debugging symbols for backtraces, and the jemalloc allocator aren't free.
If you ask for dynamic linkage (with the caveat that Rust doesn't have a stable ABI yet), you get a ~8K Hello World binary.
It's also possible to prune down the statically-linked size by opting out of various conveniences like jemalloc. (They're working toward making the system allocator default but don't want to regress Servo in the interim.)
...and if you opt into static linking with GCC and G++ (and ask Rust to make its link to libc static), Rust can actually outdo them on a Hello World.
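For the record, here's a rough sketch of the knobs being discussed -- this assumes a 2017-era nightly toolchain, where opting out of jemalloc went through the alloc_system feature gate:

    #![feature(alloc_system)]   // nightly-only feature gate (assumed)
    extern crate alloc_system;  // link the platform malloc instead of bundled jemalloc

    fn main() {
        println!("Hello, world!");
    }

    // Dynamic linking against libstd shrinks the binary dramatically, with the
    // caveat that the result is tied to this exact compiler build (no stable ABI):
    //   rustc -O -C prefer-dynamic hello.rs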
> Actually, the culprit is Rust's decision to statically link its standard library and all its dependencies by default.
No, it really isn't: static linking does not imply bloat, despite what's commonly perpetuated.
> It's also possible to prune down the statically-linked size by opting out of various conveniences like jemalloc
Try this: opt out of everything except the standard library, create something somewhat trivial and idiomatic in both rust and c, compile and see what you get.
> Rust can actually outdo them on a Hello World.
Hello world is hardly a use of the standard library.
> No it really isn't, static linking does not imply bloat as commonly perpetuated.
I never said it implied bloat. I said that, if you ask Rust to link dynamically despite the lack of a stable ABI, you'll get binaries of a size similar to C and C++.
> Try this: opt out of everything except the standard library, create something somewhat trivial and idiomatic in both rust and c, compile and see what you get.
I'll need you to be a bit more specific than "somewhat trivial", given that "Hello world" uses println! or printf() but you consider it ineligible.
> Hello world is hardly a use of the standard library.
println! aside, it's a data point and that's all I meant by it.
Not wyldfire, and I think that claim is a mischaracterization, but the main obstacle to using C++ in the kernel is that some of its language features require runtime support (new/delete, globals/statics with constructors, exceptions).
You can of course just ignore those when writing kernel code- they get ignored in application code much of the time! But I suppose at that point it could be argued that you're just writing C with a C++ compiler?
I mean, if you're writing a kernel in Rust you have the same issue. In that case you'd use no_std, which takes away the part of the stdlib that depends on allocation and such (also threads and other niceties).
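Roughly, a freestanding Rust image starts from something like the sketch below (written against today's Rust; kmain is a hypothetical entry-point name, and a real kernel still needs a target spec, linker script, and boot glue):

    #![no_std]   // drop the allocation/thread/IO parts of the stdlib; only core remains
    #![no_main]  // no ordinary main(); the bootloader/linker decides the entry point

    use core::panic::PanicInfo;

    #[no_mangle]
    pub extern "C" fn kmain() -> ! {
        loop {} // no heap, no println!, no threads here
    }

    #[panic_handler]
    fn panic(_info: &PanicInfo) -> ! {
        loop {}
    }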
You can lose new/delete and .bss statics and still write reasonable, even "safe", C++. Rust doesn't have .bss statics by design (lazy_static emulates this for you though). new isn't necessary for the "modern" C++ safety stuff and you can write pretty good modern C++ without new. All new gets you is a nice wrapper around allocation, and when writing a kernel you can't and shouldn't allocate anyway. In Rust, too, you would not be allocating, either via memmap/malloc or via Box::new().
So it wouldn't be "C with a C++ compiler", it would be "C++ without allocations", which is a restriction from the problem statement anyway.
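A quick sketch of the lazy_static pattern mentioned above (lazy_static assumed as a dependency) -- a global that is initialized on first use instead of by a before-main constructor:

    #[macro_use]
    extern crate lazy_static;

    use std::collections::HashMap;

    lazy_static! {
        // initialized lazily, the first time DEFAULTS is touched
        static ref DEFAULTS: HashMap<&'static str, u32> = {
            let mut m = HashMap::new();
            m.insert("retries", 3);
            m
        };
    }

    fn main() {
        println!("{}", DEFAULTS["retries"]);
    }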
I don't get it. AFAIK you can implement all of those things in your own abstractions, and then use them like canonical C++. I think you are wrong, but please correct me.
I might have to walk that back. It seemed to me that no_std was "more straightforward" and/or "more formalized" than "#pragma interrupt" (etc). But I could be wrong there -- if so, mea culpa (the post is no longer editable).
Rustc did recently get a "x86-interrupt" calling convention, but that's unrelated to #[no_std], and only works on x86. Either way, "#pragma interrupt" should work just as well in C++ as in C, since C++ doesn't really change any aspect of the language that matters there.
Further, even in C I rarely see use of "#pragma interrupt"-like tools -- rather, everyone still seems just to use per-platform assembly glue code. (To be fair, my experience is mostly in kernel code for things like Linux, rather than standalone embedded applications where "#pragma interrupt" would be more valuable.)
no_std is more formalized, though C++ enforces the same thing by failing to link if you try using malloc (or whatever) when writing a kernel. no_std also means that it's very easy to tell if a crate works without the stdlib, so you can use code from the ecosystem instead of rolling your own.
Ultimately the Rust OSes resort to some handwritten assembly as well. I think that's going to be a constant of writing a kernel. Rust is working to minimize it (e.g. with things like `extern "x86-interrupt" fn`), but at a kernel level there are just some kernel specific asm instructions (like all of the TLB stuff) that either compiler will probably never support generating without inline asm.
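For a flavor of what that looks like, here's a sketch of a single TLB-entry flush (written with today's stable core::arch::asm! macro; the asm! syntax of this era was different and unstable, and invlpg only works at ring 0):

    use core::arch::asm;

    pub fn flush_tlb_entry(addr: usize) {
        unsafe {
            // invlpg drops the TLB entry covering the page that contains `addr`;
            // no compiler will emit this on its own, hence inline asm
            asm!("invlpg [{}]", in(reg) addr, options(nostack, preserves_flags));
        }
    }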
So while Rust may be better than C++ at writing OSes (I'm not sure! I haven't looked at all the stuff you need to write an OS in C++), I do think they're in the same ballpark, close enough that if Rust is a "serious" competitor C++ probably is too :)
> Until Rust, there's been very close to zero serious competitors for C if I wanted to write a bootloader, OS, or ISR … We've groaned and grumbled about how hard it is to parse C/C++ code for decades.
I honestly think that Common Lisp can do this quite well. It was designed to be a high-level language, but it's completely capable of working at the machine level, pleasantly and easily. Unlike C, most of the time one has safety, but one can disable safety when necessary with a simple (declare (safety 0)).
Performance is extremely good with modern compilers, although I don't know how good they would have been back in the old days.
From what I can tell of Rust, it doesn't look easier to parse than C (but I've not looked deeply); certainly, it's orders of magnitude more difficult to parse than Lisp.
I believe that Standard ML or OCaml could do similar things as well, albeit at the cost of being more difficult to parse. Smalltalk is maybe a little less capable, but somewhat easier to parse.
Yes, it's harder than lisp, but it's still much easier than C. C and C++ have issues due to ambiguities that make them context-sensitive. C++ has it worse because, thanks to templates, parsing depends on typechecking.
Rust is not 100% context-free, but the one non-context-free feature (raw strings, which are rarely used) is still pretty easy to parse, and even if you capped it at 6-level raw strings you'd probably be able to parse all the Rust code out there.
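For anyone curious, that one spot is raw string literals, where the number of '#' marks has to match on both ends -- something a fixed context-free rule can't express, but a hand-written lexer handles trivially:

    fn main() {
        let quoted = r#"she said "hi""#;                   // one # on each side
        let nested = r##"this body contains "# itself"##;  // two, so the inner "# doesn't end it
        println!("{}\n{}", quoted, nested);
    }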
> I honestly think that Common Lisp can do this quite well
I haven't used any lisp dialects for decades, so I have naive questions: is there really sufficient support from compilers+linkers to write a bootloader in lisp? Do I have to do a lot of bootstrapping in assembly to bring up lisp interpreter before I can execute the lisp code or does the ahead-of-time-build result in executable machine code? Can I do inline assembly (not required but a really key benefit IMO)? Are there numerous examples where someone's already written one in lisp?
https://github.com/dym/movitz is a Common Lisp system that runs on bare metal x86. The source code is quite readable.
The rest of this post is an excerpt from an email I sent 6 years ago.
The following comments on runtime systems are partially based on a long c.l.l thread with posts by Lucid, Symbolics, and Franz alumni.
Franz uses a 3-layer approach: CL, a low-level Lisp, and C.
Lucid started with Lisp that generated assembler but reluctantly added some C.
Symbolics Lisp Machines used bootstrap code in a Pascal-level language with prefix syntax. A Symbolics alum said that in retrospect they should have used C.
Most Lisp implementations have subprimitives - low-level functions that can circumvent the type system, often with a prefix such as % or :.
Assembly language integration dates to Lisp 1.5 and there are several common approaches.
1. turn the optimizer off - this is easy to use and implement.
2. optimize the assembler block - Naughty Dog GOAL did this.
3. annotate the assembly with its input and output operands and their constraints (do they have to be certain kinds of registers?), plus whether anything has surprising side effects - this sounds like where GCC got its inline asm concept from.
Of all the complaints you could make about C/++, parsing is IMO rather bikesheddy. Parsing is a solved problem. Modern compilers can parse millions of LoC per second. And most of the specific parsing-related complaints (pointer dereference or multiplication?) about C/++ are also true of Rust. (Edit: Nope, brain fart on my part, see below.) And, AFAIK, all C/++ parsing is well-defined, if counterintuitive in certain edge cases.
> most of the specific parsing-related complaints (pointer dereference or multiplication?) about C/++ are also true of Rust.
This should not be true, and we fought hard to keep it that way. There's one spot of Rust's grammar that's context-sensitive, for something very rarely used, and other than that, it's all much simpler.
You're right. AFAIK types and identifiers are always unambiguous in Rust. I was thinking visually (same operator) instead of in terms of specification and implementation. That'll teach me to make flippant comments from the toilet!
My larger point is that there are plenty of very good reasons to criticize C/++, and parsing is a minor one since parsing is fast, and even if the creation of the AST isn't context-sensitive, verifying its correctness (is this identifier in scope?) still is.
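To make the Rust side concrete: the same token is disambiguated purely by its position, never by what an identifier was previously declared as -- a trivial sketch:

    fn main() {
        let a = 6;
        let p = &a;
        let product = a * a; // infix position: multiplication
        let value = *p + 1;  // prefix position: dereference
        println!("{} {}", product, value);
    }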
What is the context-sensitive spot in Rust's grammar?
> plenty of very good reasons to criticize C/++, and parsing is a minor one
Ok, fair point; it's a frustration for me but admittedly not as important as the other differences.
I mention it because it's a wart in C's language design and I figured Rust's safety features are already well-known and heavily discussed. If I want to write a simple tool "ask this tree of .c files how often they use an identifier with name 'X' or type 'Y'", I have to find out the include paths, defines, all kinds of other "noise" just to find out what could be a relatively simple query of the source base.
This also means that autocomplete tools usually need to be taught how to build a project. YCM has this whole conf file where you specify the header locations and stuff and it's like rewriting half the makefile.
Please, connect the dots for me. An initial skim of the commit messages did not yield any egregious "Utter Disregard for Git Commit History" [1]. Even if it did, it may just mean that the maintainer is focused more on results and robustness than preserving a pristine history of the project.
Don't think that was where I was going. It was more that none of this stuff is really done, it's hard in this language, and help would be appreciated -- as understood from the project splash page and then examined through the commits?
I hope Rust doesn't face the same fate as other ambitious projects by Mozilla. Rust has quite an unusual syntax compared to any other systems programming language. Also, there is a big learning curve. Setting all the benefits aside, I really hoped Rust had a simpler syntax. I really think that one day a language will borrow the good parts of Rust, with a simpler syntax, and get ahead of it. Rust in its current form will never be as successful as C/C++.
I am not against Rust. Rust has some great ideas and intent. I just feel they should have created a simpler syntax. A more complex and unusual syntax doesn't have any real benefits IMO.
You still haven't pointed out what syntax is problematic exactly. I've never programmed in Rust, but I don't have any trouble reading it coming from a C and C# background.
What exactly is the problem with the example you linked to?
Not having programmed in Rust, the fact that Rust requires all type parameters to be used, thus ruling out proper phantom types, was semantically surprising to me. But I don't understand what syntactic issue the other poster had with that example.
In my understanding, it has to do with variance. This happened a very long time ago, before 1.0, and so I don't know where the discussion happened, off the top of my head.
It still seems bizarre to me that a purely type-level expression is forced by an effectively non-existent term. That RFC specifically states that the main problem is that the results of variance inference are largely erased by assuming invariance. That seems like a sensible default for unused type parameters too.
It seems from the conclusion of that post that PhantomData only survived because this was the smallest change they had to make to get this all to work better, and because some of this PhantomData could be used for other analyses in the compiler (although it's not clear if better type information could have replaced these uses anyway).
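For reference, the restriction being discussed looks like this (E0392 is the actual error; Quantity/Meters are made-up names for illustration):

    use std::marker::PhantomData;

    // struct Quantity<Unit>(f64);   // error[E0392]: parameter `Unit` is never used

    struct Quantity<Unit> {
        value: f64,
        _unit: PhantomData<Unit>, // zero-sized marker whose only job is to "use" Unit
    }

    struct Meters;

    fn main() {
        let d = Quantity::<Meters> { value: 3.0, _unit: PhantomData };
        println!("{}", d.value);
    }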
That's quote, a library that generates code for you at compile time; it takes code as input and has its own syntax. It's like compile-time reflection.
It's not code Rust programmers would normally write (I'm one of those programmers). I'm glad some libraries like serde, rocket, and diesel are using it to generate code instead of doing run-time analysis.
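A tiny sketch of what quote does (assuming the quote crate as a dependency; outside a proc macro it simply builds a token stream you can print):

    use quote::quote;

    fn main() {
        // the stuff inside quote! is Rust-shaped input that comes back out as tokens
        let tokens = quote! {
            fn hello() { println!("generated at compile time"); }
        };
        println!("{}", tokens);
    }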
[1] https://github.com/uutils/coreutils
[2] https://github.com/japaric/steed
[3] https://github.com/redox-os
[4] https://github.com/servo/servo