> Zig is dramatically simpler than rust... Most of this difference is not related to lifetimes.
I've been thinking about this recently: the borrow checker gets a lot of attention, but I think the majority of the learning curve of Rust is actually due to "unforced errors" in the language UX which have nothing to do with the language's core USPs.
For instance, the module system just seems needlessly complex. You need a mod.rs file in each subdirectory, and the main.rs/lib.rs serves as the module file for the src directory. It took me about a day to figure out what exactly the rules are, and I can't understand why there can't be sensible defaults that define modules based on the file system structure alone.
> why there can't be sensible defaults to define modules based on the file system structure alone.
There could be, and in fact, I personally advocated for them. But there was significant community pushback; it turns out many people like to "comment out" entire modules when doing big refactorings. There are also some interesting edge cases, for example, you can use a #[path] attribute on a module to change what the path to the file is; without a mod statement, it's not clear where that would go.
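For context, `#[path]` looks roughly like this — a sketch only, with invented file names, and it won't compile without the companion file existing:

```rust
// Without a `mod platform;` statement there would be nowhere to hang
// this attribute, which is the edge case being described. The attribute
// overrides the default lookup of platform.rs / platform/mod.rs.
#[path = "os/linux.rs"]
mod platform;
```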
As proof that it is still quite confusing: I tried to remove the mod.rs in my project, similarly to that example, but now I don't understand how to access these modules.
With the mod.rs in place I can do this in my main.rs:
`pub mod foo;`
and then in a different file do:
`use crate::foo::bar::*;`
But without the mod.rs it doesn't work, and I don't know what the correct incantation is...
Edit:
Nevermind, I thought this update meant you could remove the mod.rs completely but this is about declaring foo.rs as a file outside of the foo folder.
If you'd like to have two modules, `main.rs` and `foo/bar.rs`, and you want the latter to be `foo::bar`, you can do this in `main.rs` to avoid needing a foo.rs that just does `mod bar;`:
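If I understand the feature correctly, the snippet being referred to is the inline-module form — a sketch only (it needs the companion `foo/bar.rs` to actually compile):

```rust
// In main.rs: the out-of-line `mod bar;` nested inside the inline
// `mod foo { ... }` is looked up relative to the inline module's name,
// so this loads foo/bar.rs with no foo.rs or foo/mod.rs in between.
mod foo {
    pub mod bar;
}
```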
I'm not sure I'm a fan of this either. Now, for a module with sub-modules, part of it is defined in the top-level directory, and part of it is defined in a sub-directory. If I want to move a module from one directory to another, I have to move two items in the file system.
> it turns out many people like to "comment out" entire modules when doing big refactorings.
It seems to me it would be much better to have an "exclude.rs" file or something to opt out of a sensible default, rather than forcing extra work in the common case just to support the uncommon one. Or else you could still allow a mod.rs if you want to be explicit about your module contents, and just assume it includes everything in the directory if it's missing.
> There are also some interesting edge cases, for example, you can use a #[path] attribute on a module to change what the path to the file is; without a mod statement, it's not clear where that would go.
Again, I don't think a design should be optimized to support interesting edge cases. It should make the common case as simple as possible. If edge cases need to be supported, I'm sure a solution can be found.
I am not saying you should love the module system, just trying to provide some context.
(Everyone seems to love and/or hate the module system for various, opposing reasons. It is a great mystery to me. Explaining the module system is like, kind of my white whale.)
Yeah, I just think it's a trend with Rust that when a new user complains about a beginner-unfriendly feature, they are often met with an explanation of some esoteric benefit.
I just wonder how much of that is ad-hoc reasoning, and it doesn't seem like a particularly good sign.
I thought this way too until I saw a bunch of people praising how easy the module system was because they felt it was very similar to Python's. I don't know Python well so I can't speak to it.
My working theory: each language does modules in a different way. People think "Oh, a module system, I know this" and then run into issues when it works differently than their language. Just a theory though, I have not been able to validate it, or to figure out an explanation that works for all people.
And yes, more generally: as Rust continues to grow, more and more people will hit more and more edge cases, and they cannot always be fixed, thanks to being backwards compatible. That's just how things are as they grow up. More and more users also brings more and more use cases. It's a feedback loop.
Oof I dunno I'd categorize the Python module system as "makes no sense and I can never get it right" - I'm sure it's consistent but it doesn't feel approachable to me.
Honestly, that idea makes a lot of sense to me, and personally, I find the module system pretty normal seeming for a modern language. I'm used to Perl and CPAN, and it's pretty similar to that, except for the option to use dir/mod.rs, which honestly seems like it's kinda nice for keeping a module contained nicely for those that want to do it that way.
Now I'm more interested in what the complaints about it are, and specifically what they're in comparison to. Either they're not used to using so many modules in a method such as this, or they're honestly expecting some better system for their use case that I'm unaware of, or a little of both, so my curiosity is piqued.
It just really depends. I should have been writing these down over the years. A few common points of confusion off the top of my head:
People expecting to put "mod foo;" at the top of foo.rs, to declare that foo.rs is a module.
People expecting to put "mod foo { }" with the contents of the file inside the {}s to declare that foo.rs is a module.
People expecting that "use" declares a module, not "mod."
People expecting every file inside a directory "foo" to be the contents of the "foo" module, regardless of filename, all concatenated together.
People expecting that mod statements never need to exist because it should be inferred from the filesystem.
General confusion about the privacy rules; that "pub" may not mean that your thing is globally public.
General confusion about crates vs modules; main.rs and lib.rs being in the same directory, but declaring different crates. Not understanding how to import stuff from lib.rs into main.rs, because it feels a bit different than other modules.
People used to also struggle a lot with "use" and paths pre-Rust 2018, but that's been mostly cleaned up at this point.
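The privacy confusion in particular fits in a few lines (toy module and function names, invented here):

```rust
mod outer {
    mod inner {
        // `pub` makes this visible to anyone who can see `inner`...
        pub fn hello() -> &'static str {
            "hi"
        }
    }

    // ...but `inner` itself is private, so only code inside `outer`
    // can actually reach `hello`.
    pub fn call() -> &'static str {
        inner::hello()
    }
}

fn main() {
    // Fine: `call` is pub and `outer` is in scope here.
    println!("{}", outer::call());
    // Would not compile if uncommented: `inner` is private to `outer`.
    // println!("{}", outer::inner::hello());
}
```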
So a point of comparison would be like a Swift or a Java where module/package definitions are defined implicitly, based on the location in the file system. In contrast, Rust's explicit module declarations seem cumbersome and unnecessary, especially since 90% of the time you're just typing out a structure which is identical to what already exists in the file system. So this could easily be inferred, but it's just not.
I think the same can be said for match statements on enums: in Rust you need to match against the fully qualified variant path, when in other languages (including Zig) you can infer everything up to the enum case, since it's already specified by the type you are matching against.
These kinds of things just seem weirdly inconsistent, since in many cases Rust favors inference and elision to remove boilerplate, while in other cases it requires explicitness when there is no technical reason for it to be required, and other languages handle inference just fine.
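To make the match-verbosity point concrete, a sketch with a toy enum (invented here):

```rust
enum Direction {
    North,
    South,
}

fn describe(d: &Direction) -> &'static str {
    // Every arm has to spell out the variant through its type name...
    match d {
        Direction::North => "north",
        Direction::South => "south",
    }
}

fn main() {
    // ...unless you first do e.g. `use Direction::*;` in the function.
    // Unlike Zig, there is no implicit `.North`-style shorthand.
    println!("{}", describe(&Direction::North));
    println!("{}", describe(&Direction::South));
}
```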
I actually think the module system is one of the few exceptions where it's genuinely poorly designed. It could very easily have a 1:1 mapping to the filesystem, with no need to declare modules to use them. Lots of people apparently don't like that idea, but most other module systems do it that way, and IMO it would be a lot more intuitive.
> It could very easily have a 1:1 mapping to the filesystem with no need to declare modules to use them.
I would love to see this. The issue isn't that people hate the idea. The issue is not breaking existing projects and workflows when making such a change. If we could find a way to make this work without breaking existing projects and workflows, I think there's support for doing so.
What if we could provide this with tooling? Like, everyone that uses Go for an extended period eventually complains about how it's an error to import a module and not use it... until they enable goimports-on-save and then the problem vanishes (mostly). (My point is not to compare go/rust module systems but to illuminate how a pain point introduced by the language's guarantees can be eliminated by tooling.)
If there is such an easy to define 1:1 mapping, then could there be a rustmods-on-save tool that automatically adds mod statements according to a fixed scheme whenever any *.rs file is created in the current directory or a child directory?
At the very least, we could have a warning if you have any .rs file that isn't being brought in by a `mod` statement, which would help people understand why their code isn't working.
That'd also get us halfway towards just making it automatically work without `mod` statements.
The new module system is already incompatible with pre-2018 edition projects, so I don't know why this argument explains why the current module system doesn't bite the bullet and use the file system hierarchy.
> The new module system is already incompatible with pre-2018 edition projects
Code written for the 2015 edition should generally just work with the 2018 edition module system. That's part of what we worked to ensure, and that's why we didn't mix in changes that might have reduced that compatibility.
Rust has an RFC process, which is great in many ways. But for something like a module system where everyone has an opinion, it can result in an over-engineered solution that tries to satisfy everyone.
I think the main reason why the foo.rs foo/bar.rs stuff was done was so that the tabs in an editor have informative headings. Instead of ten times mod.rs you have foo.rs, baz.rs, test.rs and so on. Personally I don't like it either (for the same reason as you) but the old way is still possible, even on the 2018 edition, so that's what I use.
Nice to hear that there has been some progress here. I recall trying Rust 4+ years ago and my experience with the module system really left a lot to be desired from someone most experienced with Python's module system.
I was confused by Rust's module system too, after having used ES6 modules in JS and Python, but then I realised the module systems couldn't be compared.
In dynamic languages like Python or JS you can use modules individually, but in Rust everything has to link back to either main or lib, if I'm not mistaken.
This was what enabled me to get rid of preconceived notions of how a module system should work.
I don't think this is fair. There is definitely some unforced error, but a lot of the complexity is shared with other languages in its heritage, e.g. the trait system is no more complicated than Haskell's typeclasses, and Haskell also has macro systems on top of that.
Also the simplicity in zig comes from a major innovation in controlled partial evaluation, which we've yet to really explore the consequences of. Rust had already made a major innovation in the form of the lifetime system. It needed to figure out all the implications of that so it made sense to draw from existing designs where possible for the rest of the language rather than piling on more new and untested ideas.
I mean, if you're saying Rust isn't any more complicated than _Haskell_, I mean, you're not wrong...
[edit: I'd like to clarify that the Rust team has tons of talent and good ideas and can probably find a goal more ambitious than this. Haskell has a definite aesthetic, and it fits some people/projects better than anything else. That aesthetic does not include simplicity.]
Rust has many syntax features that can be described as "Rust takes some common pattern that's onerous to write and read, and automatically infers it for you under certain conditions". On its face this seems like a strict improvement: you can still be explicit if you want/need to, or you can skip some boilerplate in certain cases.
But the problem comes when you're trying to learn about the language. Because these UX "optimizations" are layered on fairly arbitrarily - not unlike actual compiler optimizations - they sometimes create a very confusing landscape to try and form a mental model around, in the same way that compiler optimizations can make it hard to understand what will and won't make something faster.
This often plays out as:
1) You have some clean piece of code that does what you want
2) You add something innocuous to it
3) It no longer fits the sugar-pattern that the compiler was silently invoking underneath
4) You now have several sprawling errors because you're expected to be explicit about something you didn't have to be before
An inexperienced Rust programmer would (reasonably) assume those errors were caused by the thing that was added, and start trying to figure out what's wrong with it. But that's a red herring. The real issue, which is not indicated, is that a sugar-pattern was bailed out of.
I'm glad these shortcuts exist in some capacity: Rust is a complicated and fairly verbose language, and they make it less so. But I think they seriously damage the learning experience and early impressions that people form of the language. I've been using it for years and I still discover new quirks with this stuff that I didn't know about, and incorrect assumptions I had about the language itself that were driven by these mysterious mechanisms.
What if rustc had a "no-sugar" mode that people could use until they've gotten a handle on what's really going on? What if the language server somehow indicated inline when sugaring was being invoked?
Edit: A different way of phrasing the problem is that debugging is like navigating a landscape: there's a locality to it. "Did this change bring me closer to my goal, or further away from it? If closer, I'm probably on the right track, if further, probably the wrong one." Most languages mostly adhere to this idea of contiguous space-navigation. But these patterns in Rust are like constructing a maze across the landscape; there are cul-de-sacs and roundabout pathways you have to follow to get where you're going. There's also inconsistency about a given subject: you look at one piece of the terrain from a different angle, and it changes. It's hard to form a coherent, generalized mental map because base truth is relative based on what direction you're coming at it from. So over time instead of learning 2N concepts (lifetimes, references, iterators, boxes) you learn an NxN matrix of concepts (using references with boxes, using references with iterators, using lifetimes with iterators, etc...), because they interact with each other in unpredictable ways.
I think Zig's biggest advantage is that it's just C without the warts, or footguns as is said in the Zig world. The comptime feature is probably the most exciting thing I've seen in a while. I've looked at Rust and feel it's more of a competitor to C++, Java and C#. Whereas Zig is C done right.
"Rust is a competitor to C++, not C" is a meme, and there's some truth there, but I don't think it's all that accurate. I know lots of folks who prefer C to C++, but still like Rust.
I do think that these sorts of language comparisons are useful, but they don't always generalize. Partially this is because what a language means to each person can vary. As long as they're understood in a very coarse grained way, I think they can still make sense, but it's tricky!
> I know lots of folks who prefer C to C++, but still like Rust.
I'm one of those people, and, while I have yet to use Rust or Zig for any real work, at least so far I also see Rust as being more directly a competitor to C++, and Zig as the more direct competitor to C.
Or perhaps I should say analogue. Because, it's true, I might choose Rust over C. I'm even tentatively planning to, for one project that's still in the idea phase, and that I would normally have wanted to do in C. Though that's not really because I see Rust as being more C-like. It's more that I see choosing Rust as being perhaps more likely to be worth the extra effort than I've found to be the case for C++.
Yes, it also can matter what you even mean by "competitor." Some people take it to mean "similarity in language features." Others can take it to mean "would I choose it as a substitute." Some take it to mean "would I choose it as a substitute for this specific domain or project."
I've seen this play out a number of times. "Rust cannot compete with C, because C is too entrenched in embedded." "I do embedded development in Rust, so it can." "Well, I work with these chips that Rust can't target yet, so it can't, for me." None of these statements are incorrect, but they can lead to huge back-and-forths.
I'm one of those, but, anyway, Rust is mainly a competitor to C++, while no-std Rust (or Core Rust) is a direct competitor to C.
I did a fairly large and complex program in Rust for a bicycle computer, and, while I like the developer ergonomics, speed, and memory usage, I'm disappointed by the total size of the binary, the number of dependencies used, and the compilation time.
Have you researched this space already[1]? By default Rust doesn't optimize for the resulting binary size, but there are lots of things that can be done to bring size down where you'd expect.
> number of dependencies used
When this comes up it becomes as much a technical discussion as a philosophical one :)
> and compilation time.
No arguments there. There are some things that can be done in your project to avoid spending too much time (simplify bounds to minimize recalculation in the type system, avoid proc macros, leverage cfg conditional compilation), but they are workarounds.
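On the binary-size point specifically, the usual knobs live in Cargo.toml; these are the commonly suggested settings, not a guarantee for any particular project:

```toml
# Size-oriented release profile (trades compile time for a smaller binary)
[profile.release]
opt-level = "z"   # optimize for size rather than speed
lto = true        # link-time optimization across the whole program
codegen-units = 1 # fewer parallel codegen units, better optimization
panic = "abort"   # drop the unwinding machinery
strip = true      # strip symbols from the binary (stable since Rust 1.59)
```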
Yes. I think Zig is a safer C, while Rust is a safer C++. C programmers will be more excited by Zig than by rust, and the opposite is true for C++ afficionados.
Those are nice taglines, but while Rust is a safer C++ in the sense that it is a language that espouses C++'s design philosophy and has a similar feel to it, Zig is something new altogether. The only thing that makes it more similar to C than to C++ is that C is an extremely simple language, as is Zig, while C++ is an extremely complex language. But other than that, Zig is a low-level language that is its own family, and with a design that is radically different from either C or C++; it's hard to compare it to anything, really. Zig goes well beyond C in its power of abstraction to match C++/Rust, but it does so in a very different way than C++/Rust.
They're low-level languages -- with all the attention to low-level detail all low-level languages require -- that aim to appear like high-level code on the page. All the same details are still there, and you must confront them when changing the code, but they don't appear as text when you look at the code.
The safety guarantees of Rust are much stronger than Zig's. For example Rust statically prevents use-after-free and data races, and Zig doesn't (and I don't think anyone expects it ever to). To me this matters more than which one is more like C or C++ (partly because I don't think Rust is very much like either).
It's interesting people keep saying this. I see Rust as being much closer to C than it is to C++. Rust is safer C with the bare minimum of added advanced features.
Although Rust's syntax might feel familiar from other languages, the comparison shouldn't end there. The Rust language community is big and growing (including the number of crates), and the mindset behind writing safe code is actually what got me interested in the first place. I'm glad that other languages are inspired by that and are able to provide some of those features in their specific niches (like, in the case of Zig, simplicity).
D is much more "C++ done right" (and then some). C and Zig are very barebones, close-to-the-metal languages. D has garbage collection, classes, templates, exceptions, built-in dynamic arrays and hash tables, and on and on and on. The "Features Overview" page makes me go cross-eyed [1].
Yes, you can turn many of these things off or ignore them and just use D as a better C, but the same could be said of C++ (for some definition of "better"). Or most languages, really, if you squint enough.
Yes, and people do use C++ as a better C to get OO and a stronger type system, at the cost of compilation times (and the usual C++ pitfalls).
I use D as better C. I use it as better C++ at times. Once ownership/borrowing [1] is stable, I'll use it as a better Rust too.
For close-to-the-metal code, I use D's inline assembler [2]. "To go any lower level than that, you'd need a miniature soldering iron and a very, very steady hand." (Andrei Alexandrescu)
A common misconception, probably because the majority of D users are C++ veterans. But D was never designed as a successor to C++.
"One of the earliest design decisions that Walter made about D was that it would be easy to use
with software written in C. Many widely-used libraries are implemented in C or have a C interface.
He wanted to provide an easy path for established software companies to adopt the D language. A
straightforward approach was to guarantee that users of D could immediately take advantage of
any C library their project required without the need to reimplement it in D." from "Origins of the D programming language" by W.Bright, A.Alexandrescu, M.Parker.
D has an optional conservative mark-sweep Boehm GC which only gets triggered when you attempt to allocate something on the heap. If you don't do that, it won't bother you. Finally, you can explicitly disable it.
People generally ignore or miss the fact that D really shines when it comes to CTFE, reflection, and metaprogramming.
But native code is native code; there isn't a magical layer underneath that would make Zig closer to the metal than D, unless perhaps the deliberate undefined behaviour Zig is willing to accept, such as in those for loops.
This is the first time I see such a comparison :) D is a systems programming language with an optional GC and in the league with C/C++/Rust. I would add Nim and Zig here as well.
Nim is in the GC category (and its documentation says you should use a refcounting GC rather than turning it off). Zig does not use GC.
Memory management is one of the more important aspects of systems programming so it is useful to be clear about what is the main strategy used by each language.
You can use packages that have been converted to not use the GC and all of the libraries you would be using in C or C++.
You still get the following language features, which is more than what C, C++ or Rust have to offer (though Rust is hot on D's trail).
Unrestricted use of compile-time features
Full metaprogramming facilities
Nested functions, nested structs, delegates and lambdas
Member functions, constructors, destructors, operator overloading, etc.
The full module system
Array slicing, and array bounds checking
RAII (yes, it can work without exceptions)
scope(exit)
Memory safety protections
Interfacing with C++
COM classes and C++ classes
assert failures are directed to the C runtime library
switch with strings
final switch
unittest
printf format validation
> The comptime feature is probably the most exciting thing I've seen in a while
Just please do not make the mistake of believing that it is unique to Zig. Factor brings the best of Forth and Lisp together, so meta-programming or extending the language is possible quite easily, for example. You could extend the syntax or add constructs pretty easily, and so forth. Anyways, an example can be found here: https://rosettacode.org/wiki/Compile-time_calculation#Factor but this barely scratches the surface. It does not mention `<< ... >>` which evaluates some code at parse time. You can execute code before the words in a source file are compiled.
Those are just some examples, but it is pretty powerful. It supports (and encourages) interactive development. Profiling and debugging is a breeze and highly detailed and useful, you can easily disassemble words (functions), you can get a list of how many times malloc has been called in some circumstances, there is runtime code reloading (a vocabulary that implements automatic reloading of changed source files[1]), and so on. And on top of all this, you can compile your stuff to an executable that is less than 4 MB!
And of course you do not have to do stack shuffling at all, you can easily use locals which is useful for math equations and whatnot. Plus did you know that the Factor compiler supports advanced compiler optimizations that take advantage of the type information it can glean from source code? The typed vocabulary (yes, it is not part of the language, but implemented as a vocab) provides syntax that allows words to provide checked type information about their inputs and outputs and improve the performance of compiled code.
I would like to repeat because if this was not the case, I would have never bothered with it: you can create a single executable file that is less than 4 MB of size if you wish so! Of course it encourages interactive development, but still, it is great to have an optimizing compiler that can do all this easily. And mind you, this part is also written in Factor itself and is available as a vocabulary (vocab).
So all in all, I think Factor is great. I was shocked at how modern (and how many) libraries it has, especially considering only a handful of people have been working on it. Slava Pestov created the language, and some people joined him later on. If you want to learn more about it, start here: https://concatenative.org/wiki/view/Factor. There are videos, there are papers, there are lots of resources to get started. :) The language misses a couple of things, but it is being worked on.
> Just please do not make the mistake of believing that it is unique to Zig. Factor brings the best of Forth and Lisp together...
What is unique to Zig is that it has these features without bringing together "the best of Forth and Lisp". Sometimes, just being pedestrian is a virtue.
But comptime is kind of LISP's unique feature, it's just called macros.
EDIT: In fairness, Zig's presentation is pretty likeable. You can do a comptime expression pretty trivially in LISP, a comptime parameter or block would require actual effort.
No. There is a real difference between staged computation of the type that Zig has and the macros that Common Lisp, Scheme, and Rust have (so much for Lisp's uniqueness, BTW). In Common Lisp you can do completely arbitrary computation at compile- (or read-)time; in Zig you cannot, and crucially you are also not responsible for manually ordering the "stages". E.g. in Common Lisp you have to use eval-when to make sure that stuff is available at the right phase, whereas Zig works out the dependency for you.
In Common Lisp you could do something equivalent, but you'd have to wrap the definitions you want available at both compile and run time in eval-when.
It's almost just C without the warts; I wish someone would make a language that was both as good as Zig or Rust, and could be compiled to C code. The dependency on LLVM rules out a lot of embedded use cases, because some chipsets require forks of LLVM, or worse, proprietary C compilers or forks of GCC or something else, to run. Would be nice to have a language that just works everywhere.
I feel like the onus should be on chip makers to write LLVM backends because proprietary C compilers and GCC forks almost universally suck. I understand why they don't, but it's just disappointing on both ends of the pipeline. That said, ARM is great, and you can target most ARM chips with LLVM.
There's also the downside that transpiling to C locks you into an ABI, which limits what compiler developers can do towards the end of the compilation passes.
But a strict improvement on the preprocessor madness that you have to get into with C. Having `static if`-like behavior is much preferable to `#ifdef` and the like and other things falling out of that system is just a bonus.
It's tempting to view `comptime` as additional complexity, but the truth is that it replaces a much more complicated and hard-to-work-with system. That one is just one that people have gotten used to over decades.
Very interesting set of observations. As a C and C++ programmer, I am feeling more and more excited about Zig.
One thing that feels missing from Zig though is encapsulation. I don't believe that you can declare struct fields private, such that they can only be accessed by methods. This seems really important to me, not only as a technical means of enforcing invariants, or hiding internal-only implementation details that may change, but even just as a communication of intent. There is a big difference between "you are invited to read/write this directly" and "please don't read/write this."
C doesn't have this, true, but at least in C you can put internal-only structs in a .c file instead of .h, making the members effectively invisible to clients. Does Zig have anything comparable?
If you want to communicate intent, put an underscore in the field name, like it's being done in Python. `field` is "you are invited to read/write this directly", `_field` is "please don't read/write this."
Even in a lot of languages that do have "private", it's still pretty consent-based. Java and C#, for example, don't prevent you from mucking with private members, they just make it inconvenient to do so.
I used to be suspicious of the Python approach, because it doesn't even pretend it's enforced by anything but the honor system. But I've discovered that, in practice, my Python-using colleagues are no less likely to respect `_field` than my Java-using colleagues are to respect `private field`.
> But I've discovered that, in practice, my Python-using colleagues are no less likely to respect `_field` than my Java-using colleagues are to respect `private field`.
That's a very interesting observation. You really have to go out of your way to access private fields in Java -- I can only recall about 3-5 instances of doing that in my 15+ years programming in Java. You have to look up a field by name, and call setAccessible(true) on its reflection... and this is definitely going to raise eyebrows in code review. Some of the DI frameworks do it as a matter of course, but that's kind of a different thing...
(Now, package-private is a different matter. In that case you can[0], just put a class in the same package and access from there. I've done that... twice or so?)
What is your recollection in terms of doing the same thing in Python? Obviously, this is just going to be anecdata, but it could be kind of interesting.
[0] Modules change this a bit, but it doesn't seem modules are really a thing outside the standard library yet. (I'm programming in Scala these days, so haven't kept up with Java practices.)
I've mostly seen it done in situations that are roughly analogous to IoC containers. So, custom serde libraries that use reflection, testing utility code, object mapping tools, magic validators, stuff like that. Which, any sufficiently venerable enterprise Java application seems to have at least one or two of those knocking around the codebase.
FWIW, Java is also where I see stringly typed designs, too. I'm increasingly coming to fear that languages with more safety-oriented features aren't associated with safer practices because those features encourage safer design, so much as because they tend not to attract programmers with a swashbuckling attitude in the first place.
Right, for DI/IoC it seems pretty standard. (I think it's probably a historical accident, honestly. Reflection was really the tool to get constructor parameters, and then someone figured, hey, why not just inject private values directly? It was surely convenient at the time, but... lessons learned, I guess. I have no problem with compile-time DI via static reflection of e.g. constructors. It's a little more boilerplate, but worth it, IMO.)
Definitely agree about "stringly typed" programming becoming an increasing issue in Java over the years, but that had more to do with over-use of instanceof, etc., not so much actual strings (as in className).
> I'm increasingly coming to fear that languages with more safety-oriented features aren't associated with safer practices because those features encourage safer design, so much as because they tend not to attract programmers with a swashbuckling attitude in the first place.
For the life of me I cannot parse this sentence. Could you please rephrase or expound? Is there a missing negative somewhere, or...?
> I'm increasingly coming to fear that languages with more safety-oriented features are[] associated with safer practices [not] because those features encourage safer design, [but] because they tend not to attract programmers with a swashbuckling attitude in the first place.
Indeed. JavaScript also doesn't have private fields. But in practice using an undocumented field means "I'm familiar with the internals of this library and willing to check for breaking changes on every update".
There are lots of footguns in JS, but I've never seen this cause issues.
I bet it wouldn't be hard to write something in zig that enforces this. I think the compiler is going to start getting pluggable and when a package manager comes out I'm sure something like an opt-in check against this will become a thing.
I'm 90% in favor of this; the 10% comes from the few times where I've wished that a particular field was exported, and I know it would be safe to access, but now I need to open an issue or submit a patch, wait for it to be merged upstream, update the dependency version...
Given that Zig is a low-level language where "unsafe escape hatches" are the norm, it wouldn't surprise me if this ended up being an optional compile-time check rather than a mandatory one (as a sibling comment suggested).
I'm ok with an escape hatch. I'm even ok with a strong convention. My primary concern is that there is a clear contractual line between "reading/writing this is supported" and "you're on your own."
Yes, this is an area where Zig fundamentally doesn't solve the same problems that C had. Zig can't check that you're not dangerously wrecking an invariant, while Rust can -- or, in the worst case, Rust simply exits with a panic rather than causing the random memory corruption Zig would.
I quite honestly wonder sometimes why Rust excited me so much when I first started using it, but I do not get such excitement from Zig. Nowadays it's very easy to be excited about Rust because it has demonstrated that ownership works, but when I started using it, it still had garbage collection, etc.
Zig looks really cool but it feels like it has a high chance of being a niche language. Rust never felt like that.
My impression was completely the opposite. I was very excited about Rust at first, especially the ownership system, but after a while I saw that it was a cleaned-up C++ with most of C++'s problems (one of the most complex programming languages in software history; very slow feedback loop). Zig, on the other hand, seems revolutionary and a complete rethinking, from the ground up, of what low-level programming should be. True, Rust is probably 100x more popular, but is still well below 1% market adoption, so we're comparing two rounding errors in terms of adoption.
That's one way of looking at it, but that I'm shopping for a C++ replacement doesn't mean I'll necessarily like another C++-like language. C++ is my primary language, and if I replace it -- given that switching to a new language is very costly no matter which new language I pick -- I might as well wait for a more revolutionary replacement that fixes all/most problems I have with it, rather than just a few. I don't know if Zig is that thing just yet, but it looks more promising -- to me, given my particular taste -- than Rust. But, TBF, I was very excited about Rust at first, so my excitement for Zig might wane as well.
Rust is not "trying" to replace anything. Rust can be and is used plenty for embedded programming, where only C was ever used previously. People have commented that they prefer writing simple command-line applications in it more than they do in Python.
I really don't see how Rust has most of C++'s problems. Slow compiler, sure, though things are changing. One of the most complex languages in history? Only if you squint at it from 10,000 feet and ignore what the complexity is doing.
C++ has approximately eighteen different partially-overlapping categories of variable initialization, many of which are legacy-but-still-used! [0] And some of those categories have changed their boundaries every language version since C++11 (based on the definition of "aggregate").
C++ has three to five partially-overlapping kinds of type inference all in active use (auto, decltype(id), decltype((expr)), decltype(auto), template argument deduction)! This is interleaved with name lookup and overload resolution (below), so if some subexpression isn't compiling how you expect, you have a vast space of language features potentially to blame.
C++ has so many kinds of name lookup and namespacing that I'm not even sure how to count them. There's unqualified lookup, argument-dependent lookup, qualified lookup, class member access, etc. Sometimes you can't refer to things defined later (outside a class) and sometimes you can (inside a class). There is even undefined behavior if you mess up namespacing! (Undiagnosed ODR violations are every experienced C++ programmer's nightmare.)
C++ has ad-hoc overload resolution based on un-scoped identifiers, which even crosses namespace boundaries using one of the above name lookup modes. It has two kinds of user-defined implicit conversions ("explicit" and implicit) that also affect this selection process, on top of the zoo of "type promotions" inherited from C.
C++ classes have five to six kinds of "special member functions," some of which may be defined automatically by the compiler, each with its own rules for when and how, based on what else is defined in the class. These also contribute to overload resolution, of course.
Rust has a complexity of its own, but it's quite different in scale and quality. There is exactly one way to initialize a variable, exactly one kind of type inference, only two ways for names to be resolved (directly or via an imported trait). Overloading, implicit conversions (of which there is only one kind, Deref), and the replacement for "special member functions" (Copy, Clone, Drop) are all based on exactly one mechanism (again, traits).
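A minimal sketch of that "one mechanism" point, using an invented `Meters` newtype: the copy operation, the destructor, and the single user-defined implicit conversion are all just ordinary trait impls, where C++ spreads them across special member functions and conversion operators.

```rust
use std::ops::Deref;

// The C++ "copy constructor" analogue is an ordinary trait (Clone).
#[derive(Clone)]
struct Meters(f64);

// The C++ "destructor" analogue is also just a trait.
impl Drop for Meters {
    fn drop(&mut self) {
        // side effects on scope exit go here
    }
}

// The one user-defined implicit conversion: Deref coercion.
impl Deref for Meters {
    type Target = f64;
    fn deref(&self) -> &f64 {
        &self.0
    }
}

fn takes_f64_ref(x: &f64) -> f64 {
    *x * 2.0
}

fn main() {
    let m = Meters(21.0);
    let copy = m.clone();            // explicit, via the Clone trait
    let doubled = takes_f64_ref(&m); // &Meters coerces to &f64 via Deref
    assert_eq!(doubled, 42.0);
    assert_eq!(*copy, 21.0);
}
```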
The article has a pretty accurate description of how people experience Rust's remaining C++-like complexity, IMO: "I don't remember the order in which methods are resolved during autoderefencing, or how module visibility works, or how the type system determines if one impl might overlap another or be an orphan."
But one important aspect it leaves out (not being a C++ article) is that if you mess up any of these, you just get a compiler error- and Rust is well-known for having extremely helpful error messages. In C++ you may get a compiler error (known for being extremely unhelpful) or you may get undefined behavior.
Zig is certainly a smaller language than either C++ or Rust, but that comes at a cost. I would much rather hear discussion of those actual trade-offs than yet another "Rust is just as complicated as C++" non-claim.
You're comparing 35-year-old C++ with 10-year-old Rust (and that doesn't even capture the story, because after two years C++ had something like 5x the market penetration Rust has after ten). But even now Rust, I think, together with C++, Ada, and Scala, easily ranks among the top five or so most complex programming languages in software history.
It's definitely a matter of personal taste, but to me, Rust seems a monumental, awe-inspiring shrine erected to worship at the altar of accidental complexity. I admire the technical achievement -- I never imagined accidental complexity could be given such spectacular prominence -- but having spent my share of time with both C++ and Ada, I'd like to look elsewhere. I don't know if Zig will do the job, but the vision it has for low-level development is so refreshing, radical and different from everything else I've seen in my >20 years of professional software development that I'd like to give it a chance before settling for an improved C++. Anyway, it's a matter of personal aesthetic preference.
Four of the five pieces of accidental complexity I listed were present in C++98, so I'm not sure what the relative ages have to do with this. (You've also not mentioned any accidental complexity in Rust...) Instead, I attribute it to hindsight and differences in priorities- C++ today is still introducing new features with the same level of unforced complexity (e.g. compare C++ lambda capture clauses and coroutines, to Rust closures and async fn), while additions to Rust instead tend to "fill in" inconsistencies to remove edge cases and exceptions.
I'm also not trying to convince you to drop Zig and settle for Rust! You can use either or both or neither, I don't mind! Rather, my point is that Rust's direction relative to C++ is the same one you praise Zig for- it provides the same control with drastically more economical application of fewer language features. Just because Zig goes further (again, at great cost to things like tooling, error checking, and messages) doesn't mean Rust didn't make a lot of progress.
Sorry, I don't see it that way (and I don't entirely agree with your characterisation of what new Rust changes do, certainly not all of them). I think that both Rust and C++ are fundamentally built around a design concept that I find distasteful and wrong-headed for low-level programming -- https://news.ycombinator.com/item?id=24840818; I guess you can call it too much implicitness aimed to make the language appear something it isn't when printed on the page. Rust does it somewhat more elegantly than C++ and perhaps has other justifications for it (like sound guarantees) -- which is why I prefer it to C++, although not enough to justify a large investment until it has a significant market share -- but it espouses the same foundational design aesthetics. I think that whether people like or dislike Rust mostly has to do with whether they find that aesthetic appealing. I'm sure many people do and many people don't.
BTW, I haven't "adopted" Zig that I would need to drop it for anything (it's not even 1.0 yet). I'm still with C++ for the time being. But I'm deeply impressed by Zig's revolutionary design and complete rethinking of low-level programming that I'm keeping a watchful and hopeful eye on it.
That's much closer to a substantial and fair similarity between Rust and C++! Indeed, if I understand you correctly, it's a major cause of both languages' slow compile times, and a source of a lot of difficulties for humans learning and writing both languages.
But this makes me suspect we may be using very different definitions of "accidental complexity" here: Your usage seems to apply to programs, which wind up over-specifying low-level details in both languages. My usage of the term applies instead to the languages themselves, and the level of extra pain they inflict on programmers who have already accepted the C++/Rust/etc aesthetic.
By accidental complexity I mean aspects that go beyond what you would write in pseudocode when describing an algorithm, and in a language it means the number and "depth" of features dedicated to those aspects relative to the language's total.
> because after two years C++ had something like 5x the market penetration Rust has after ten
C++ is highly backwards compatible to C to the degree that it's almost (but not quite) a superset. Of course it's quite easy to start creating cpp files. Also C++'s benefits were quickly realizable while for Rust's safety benefits to play out, you need to have replaced significant portions of highly risky components (e.g. those that parse user data, have a history of bugs, etc). Adding Rust to an existing C++ codebase is much harder than adding C++ to an existing C codebase.
Also do you have a link for your claim? I'm interested in reading on the early rise of C++.
It carefully designs its name mangling such that this doesn't happen in the first place, even in the presence of multiple slightly-different builds of a library.
The only way to get matching names is to ask for them explicitly, via FFI.
> It carefully designs its name mangling such that this doesn't happen in the first place, even in the presence of multiple slightly-different builds of a library.
but, after checking, apparently this adds a hash of the function to the name mangling -- how does that work when you want to call it from another language? For instance, you can call C++ code directly through some dialects of Lisp, Perl, Ada, or D (AFAIR), as they all have libs or mechanisms that kinda understand C++ name mangling -- how are you going to do the same with, from what I'm seeing, "name_of_the_file::name_of_the_function::some_hash"?
> The only way to get matching names is to ask for them explicitly, via FFI.
and what happens if you have two libraries which expose the same extern-C function name ?
If you want to call Rust code from another language, you use its FFI tools to export an un-mangled API. This is typical for C++ as well- trying to interop with mangled C++ names requires a lot of coordination across the toolchains and so even the examples you cite don't work without a lot of pain.
If you do wind up exporting the same name twice, you just get a linker error, because Rust doesn't play the same games C++ does with linkage. (This is also true of C++ FFI- the problematic ODR-violation stuff tends to involve more complex language features than `extern "C"`.)
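For reference, a minimal sketch of what that un-mangled export looks like in Rust (the `add` function is a made-up example):

```rust
// Without #[no_mangle], this function's symbol would include a hash
// (something like `mylib::add::h4ed6...`); with it, the linker sees a
// plain C-style `add` symbol that any FFI-capable language can call.
#[no_mangle]
pub extern "C" fn add(a: i32, b: i32) -> i32 {
    a + b
}

fn main() {
    // Still callable from Rust like any other function.
    assert_eq!(add(2, 3), 5);
}
```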
> If you want to call Rust code from another language, you use its FFI tools to export an un-mangled API.
this does not answer the question of whether the behaviour is defined if multiple libraries export the same name (which is the original question). See my other comment, what happens if from rust code you dlopen libbar.so ?
The behavior in that case is defined by the implementation of dlopen. This is entirely outside of Rust's control, but fortunately it's also perfectly well-defined by the platform. Again, does not intersect with the ODR violations I mentioned originally.
> but fortunately it's also perfectly well-defined by the platform.
that's the same for every language and thus not very relevant. if you have single, static binaries / libraries of course everything is simple, and you'll get linker errors in C++ just like you would in Rust. What is not simple is when you start loading twelve dozen libs at load-time or run-time and it does not seem that Rust defines behaviour any more than C++ in that case.
> Right, I've been telling you it's not relevant for the past three comments now.
but it is! The only reason why ODR is UB in C++ is because the C++ language authors can't force the system linkers (again, whether at link time, load time, or run time -- I'm not only referring to dlopen) to perform LTO, which would trivially make ODR violations a diagnosable error.
But as far as I know, neither can the Rust language authors do so -- so either the behaviour in Rust is as defined as in C++, or Rust does not support creating standard platform object files that are linked by ld, gold, or whatever (which D, Ada, Fortran, etc. all support), which would make a fair amount of use cases impossible -- it's pretty common in some HPC circles to link C++ and Fortran directly in the same executable, for instance. And then, of course, it's easier to define behaviour when you use a reduced set of constraints, but it definitely does not make it something worth bragging about.
Going back to my original answer, the difference lies in what the two languages ask of the linker under normal circumstances.
Normal, non-FFI-using C++ can hit ODR violations in response to things like typos or subtle mis-uses of `inline` and templates.
Normal, non-FFI-using Rust is designed such that these situations never come up.
My original comment was never talking about FFI in the first place, where yes, both languages are much more at the mercy of what the platform provides. However, in that case the spooky UB ODR violations I was referring to are also not relevant, because you just get normal, fully-defined platform behavior- the expectations of the compiler (and thus the chances for them to be violated, resulting in UB), are different.
I don't think that dlsym would accept "bar::my_function::h4ed6ea856a52cd6b" as a symbol, just like it would not accept "foo(std::vector<int, std::allocator<int> >&)" and wants "_Z3fooRSt6vectorIiSaIiEE" instead, no?
To be clear, my comment was about the fact that two different functions produce exactly the same symbol name, c++filt or not, which is not what I was told above in "It carefully designs its name mangling such that this doesn't happen in the first place, even in the presence of multiple slightly-different builds of a library."
I have no particular comments on the idea of using a hash, though I believe that something that turns a 95% chance of error into a 0.5% chance of error (I'd assume, as it took me 10 seconds to find a collision) is very bad -- you want errors consistently when you fuck up, not once every hash collision, as it sounds like a really, really big pain to debug when it happens.
While Zig is an awesome project I feel a little bit the same. My two cents are that it is because Zig, as innovative as it is, has nothing that other languages couldn't copy or assimilate. With Rust it is a different story, because the ownership model cannot be plugged into other languages without changing them fundamentally. My prediction is similar to yours that Zig will be an important research project but not go mainstream. A counterargument is of course that Zig's closeness to C will make it win in conservative industries and it wouldn't be the first time a more incremental approach wins over the more innovative one. Also I have to stress that I only have a cursory knowledge of Zig so far. I hope I didn't do the language injustice and I'd like to hear other opinions about this.
I have the feeling that what will push Zig ahead is the uncanny sense of technical aesthetics shown by the creator. Unsexy details like language simplicity, what-you-read-is-what-runs, orders of magnitude faster compile times, effortless cross compilation and C interoperability.
I have never seen anyone focusing so ruthlessly, so early, on nitty-gritty details of how to go about engineering a compiler and language that is and will let you be as close to optimal as possible in terms of compile-time and run-time performance. "Perfect software" as Andrew talks about.
In short, engineering choices taken early will let Zig be something that Rust maybe could approach too (in theory), but in practice never will. Of course Rust is something that Zig will never be, too.
This is an excellent point. Zig's authors, like Go's, prioritize software engineering and hence up-front work on tooling, fast compilers, cross compilation, etc. Whereas a lot of upcoming languages in the last decade prioritize the PL-design part, so tooling is delegated to external projects.
I have seen many say: just combine PL-design innovation with the tooling part and it will be perfect. But I think this will not happen, because it becomes very difficult and the sensibilities of these approaches do not match.
I also think that engineering concerns can dictate or at least strongly influence language design. It can still lead to innovation though, like in the way Zig handles async/await, and how it will try to handle recursion.
It is just innovation mostly guided by practical engineering concerns. In this case: How to avoid requiring expensive heap allocation for async tasks, and how to be sure you don't crash from a stack overflow due to unexpected input for example.
I suspect that the excellent Zig comptime support is made possible by intentionally going for a pretty unambitious type system. Otherwise how could you soundly let arbitrary code generate a type? A fancy type system that wants to reason about that will fail, or at least cause the whole language to revolve around making that work.
I think you are right that it would be very hard for a language to innovate both here and in the traditional PL-theoretical way.
Rust and Zig target different audiences though. It's often been said before, but Rust is mainly a C++ replacement, while Zig is mainly a C replacement. I have switched from C++ as my default-language to C a few years ago, and having dabbled a bit in both, I feel a lot more attracted to Zig than Rust for future projects (and mostly for the same reasons why I switched from C++ to C).
If Zig can settle in the niche that C is used for today (including "nearby" areas where C programmers consider switching to a higher-level language), then it is already a great success. I think (rather: hope) what we will see in the future is that no single language will dominate certain fields anymore like it was the case in the 90's and early 00's.
Rust doesn't need to fit into every niche, and it would be harmful to bend Rust in a way that it fits everywhere. It would end up as a "kitchen-sink language" with tons of competing concepts and ideas. This is exactly what's currently killing C++.
The more I use Rust, the more it doesn't remind me of C++, but instead the C structure of larger projects. C (and really any language probably) kinda feels like a different language with each few orders of magnitude in code size. Rust feels a lot like forcing the structure of 10KLOC to 1MLOC C projects to me with sane defaults and more powerful versions of the same abstractions.
We're not using string-based macros, but hygienic macros.
We're not using goto error, but instead RAII.
We're not guessing if a function returns 0 or 1 on success and if the big struct we passed in as a pointer is valid on either of those, we're using an ADT that makes it clear.
Slices mean that not everyone is reimplementing 1000 variants of the same buffer struct.
And the deeper separation between code and data that fat pointers give you -- plus the way that the further you go down OO principles, the less idiomatic it feels -- really makes it feel more like a giant C codebase than a C++ one to me.
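To illustrate the error-handling point above with a hypothetical `parse_config` (names invented for illustration): instead of a 0/1 return code plus an out-parameter of uncertain validity, the ADT in the signature says exactly when the value exists.

```rust
// The C-style equivalent would be `int parse_config(const char*, config*)`,
// where you guess whether 0 or 1 means success and whether the out-struct
// is valid on failure. Here the return type encodes both answers.
struct Config {
    port: u16,
}

fn parse_config(input: &str) -> Result<Config, String> {
    let port: u16 = input
        .trim()
        .parse()
        .map_err(|_| format!("not a valid port: {input:?}"))?;
    Ok(Config { port })
}

fn main() {
    // The Ok value is only reachable when parsing succeeded.
    assert!(matches!(parse_config("8080"), Ok(Config { port: 8080 })));
    assert!(parse_config("oops").is_err());
}
```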
I agree (I think), Rust is more like C++ and less like C in that it provides mechanisms to enforce certain "custom rules" in bigger code bases and teams via language constructs. Those same mechanisms which help organizing big code bases written by big teams often increase friction in smaller codebases and teams.
In C one has to be very disciplined about this type of stuff and needs to put much more thought into module API design (for instance to enforce memory management rules), because the language itself is extremely "freestyle".
You would think that, but I think that, for example, Oxide should definitely have picked Zig, if it were more ready. Obviously it's not, and Oxide wants to ship now.
> If Zig can settle in the niche that C is used for today
That niche is "code that needs to be portable to any system with a C compiler", or "... to a particular system that only has a C compiler", so I am not sure it can.
There are lots of projects using C that aren't in that niche, but I don't believe there is a good reason for any of them to be using C over C++ these days. It's either history, inertia, or Luddism.
> ... for any of them to be using C over C++ these days.
That's the old (and frankly: tiresome) mindset that C++ is a successor and improvement of C. After using "modern C" (as in C99 or later) for a while it becomes quite obvious that this isn't the case anymore, instead C++ was a fork of C and developed into a very different direction (including developing the original C subset into a non-standard C dialect). Especially with more recent C++ standards, C and C++ have become different languages with very different goals.
Agreed. One question, though, is whether a language is simple because it's early or because simplicity is a core value of the language's development. A lot of "simple" languages just become complex over time as they get more users and feature requests.
>has nothing that other languages couldn't copy or assimilate
What about extremely fast compilation? In theory a complex language could have a really fast compiler, but I don't think there are any historical examples of languages with a very slow compiler getting compilation down to around one second as the article describes.
For me personally, Rust was a challenge. Most programming languages are more or less the same. But Rust forced me to think in a way I've never encountered before. I guess this is also why many people like it.
For me, I wouldn't say it's just because it was a challenge. The ideas that Rust makes explicit — borrowing, tracking lifetimes, mutable vs. immutable references — they're lessons that carry over into other languages very well.
It might be the timing that matters. The very need for safe and friendly systems programming arose---or rather became clear---around 2010, when both Rust (0.1) and Go appeared. Having appeared 5 years later, Zig can be seen as a follower (a very great one, of course) and may no longer give as much excitement as Rust did.
Never underestimate the importance of aggressive marketing and sales around tools for software developers. Even OSS projects need clever sales pitches and visibility to become successful. Mozilla had the resources to pull that off for Rust.
For me it's easy: Haskell was cool for more obvious reasons but a bit much, and Rust felt like an interesting yet practical, very strongly typed language.
Rust is currently backed more by Amazon and Microsoft than it is by Mozilla. The main contribution of Mozilla is to hold the trademark on behalf of the Rust project.
I think it's much about preference, some like simpler languages, some like more expressive ones.
I for one have a really hard time liking expressive ones, hence Go & Zig is my small but high-quality toolbox rather than a larger toolbox.
I just can't get that excited about the language itself beyond a certain point, I just want simple, predictable high-quality tools to help me produce simple, high-quality code and applications.
What do Zig "generics"/"comptime" errors look like?
> One of the key differences between zig and rust is that when writing a generic function, rust will prove that the function is type-safe for every possible value of the generic parameters. Zig will prove that the function is type-safe only for each parameter that you actually call the function with. On the one hand, this allows zig to make use of arbitrary compile-time logic [...]
This is fundamentally identical to how C++ templates, constexpr, and concepts work. It's a really flexible system (you can implement how you want to type-check things using constexpr), but it has three cons that Rust's system does not have:
- can't typecheck library APIs, so library authors aren't sure if their "constraints" are correct. Testing this requires writing lots and lots of compile-time tests.
- errors deep inside a library implementation when user code passes incorrect arguments to generic APIs.
- rust traits can be used for static dispatch, or boxed and used for dynamic dispatch; C++ at least can't really do this well.
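For comparison, a sketch of how Rust's trait bounds address the first two of those cons (the `describe` helper is a made-up example): the bound is part of the signature, so the body is checked once, at definition, with zero callers.

```rust
use std::fmt::Display;

// The bound `T: Display` is verified when this function is *defined*:
// if the body used anything Display doesn't provide, the library author
// would get an error even with no callers -- unlike C++ templates
// (or Zig comptime), which only check concrete instantiations.
fn describe<T: Display>(items: &[T]) -> String {
    items
        .iter()
        .map(|x| x.to_string()) // to_string comes via Display
        .collect::<Vec<_>>()
        .join(", ")
}

fn main() {
    assert_eq!(describe(&[1, 2, 3]), "1, 2, 3");
    // A caller passing a non-Display type gets an error at the call
    // site, pointing at the unmet bound, not deep inside the body.
}
```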
It would be cool if someone could explain how Zig fixes or improves upon these problems that this system has in C++. C++ tried to fix this with concepts, but failed.
This was a nice read that has motivated me to learn Zig. I want to know how Zig improves on these C++ issues.
That's like saying that a helicopter clearly has the advantage over a car in that it can fly. True, that's an advantage, but you don't get to just pick the advantages -- they come in a package. That package is better overall only if the advantage comes at no additional cost and has no associated downsides. I, for one, think Rust's advantages come with a hefty price tag, and can have negative effects even on correctness compared to Zig. However, it's hard to intuit which of the approaches is better for correctness; only time and empirical observation will tell.
This isn't quite true: the borrow checker forces us to use things like RefCell, or Vec with (generational) indices, which all have run-time overhead. (This isn't even counting those who use Rc.)
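A small sketch of the RefCell case: the aliasing check still happens, just at run time instead of compile time, which is exactly the overhead being described.

```rust
use std::cell::RefCell;

// RefCell moves the exclusive-access check from compile time to run
// time: each borrow bumps a counter, and a conflicting borrow panics
// (or, with try_borrow_mut, returns Err) instead of failing to compile.
fn main() {
    let data = RefCell::new(vec![1, 2, 3]);

    data.borrow_mut().push(4); // dynamically-checked mutable borrow
    assert_eq!(data.borrow().len(), 4);

    let reading = data.borrow();
    // A second, conflicting borrow is detected at run time:
    assert!(data.try_borrow_mut().is_err());
    drop(reading);
}
```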
It takes some time to get up to speed with Rust, but I don't think it takes particularly longer to write a correct program in Rust than it does in other languages. I often find myself reaching for Rust rather than Python now, even for small things.
The same is true for C++, and early reaction was similar: hey, we don't need high-level languages anymore! But after a few decades with C++ we've found that the burden manifests not when writing the program but when maintaining a large codebase over many years in a team whose members change over time.
I've used C++ since the 90s and it's only ever been presented as a Serious Language for Real Applications. I feel that until recently, C++ stood alone and there were almost no serious threats to its position, except maybe Java. If you were planning on writing commercial programs (or even more so a collection of programs) that have good performance and huge featuresets, like an operating system or a browser, what else could you have possibly used?
I agree that in practice maintenance has been pretty bad for C++ programs, but I think I understand the kind of reasoning that led so many organizations to pick C++, even if I don't agree with it. It was not about ergonomics or productivity, but about control. The language grants an unprecedented level of power to a small group of elite programmers inside every organization, who author the standard library and headers that all the other programmers use. They're building the smart pointers, the memory allocators, the IO libraries, and the build systems. It's imperative (to them) that their good decisions don't get snowed under the mountain of application code that the more mediocre programmers are going to be producing by the truckload. So it's important to have powerful encapsulation, compile-time checks, and metaprogramming capabilities. Doesn't matter how safe by default or how slow it all is, because the elite add guarantees themselves through the abstractions they write, and don't have to compile a full application very much, if ever.
This is a really great writeup! I was using Rust as my main programming language from some months before 1.0 up until maybe early 2019. I have only written somewhere between 100 and 1k lines of Zig, but generally feel that I agree with most of what's being brought up here.
Here's a mind dump:
> Zig manages to provide many of the same features with a single mechanism - compile-time execution of regular zig code. This comes with all kinds of pros and cons, but one large and important pro is that I already know how to write regular code so it's easy for me to just write down the thing that I want to happen.
This is a huge one for me, and I really don't understand why Rust didn't jump on this earlier. Using the programming language for configs, generics, macros, and anything else that you'd want at compile time just seems like such a huge win, instead of having weird preprocessor-like systems, some config file format with arbitrary limitations, and weird macro-like systems that either have crazy syntax (like `macro_rules!` in Rust) or are way too limiting (like `const` functions in Rust).
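A taste of what's being contrasted: a tiny `macro_rules!` macro, written in its own pattern mini-language (`$x:expr`, `$(...),+`), next to a `const fn`, which is ordinary Rust but restricted to a subset at compile time. Both are toy examples.

```rust
// macro_rules! uses a separate matcher syntax rather than plain Rust.
macro_rules! max_of {
    ($x:expr) => { $x };
    ($x:expr, $($rest:expr),+) => {{
        let a = $x;
        let b = max_of!($($rest),+);
        if a > b { a } else { b }
    }};
}

// A const fn is regular Rust evaluated at compile time, but only a
// restricted subset of the language is allowed in its body.
const fn square(x: i32) -> i32 {
    x * x
}

const AREA: i32 = square(7); // computed at compile time

fn main() {
    assert_eq!(max_of!(3, 9, 5), 9);
    assert_eq!(AREA, 49);
}
```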
Jon Blow's language seems to take a similar stance to Zig's; let's see if it ever hits public beta.
> On the other hand, we can't type-check zig libraries which contain generics. We can only type-check specific uses of those libraries.
This is definitely a concern I have about these "C++-like generics".
My experience with C++ suggests that this is bad, but on the other hand, improving even just the error messages
would be such a night-and-day improvement that I don't really trust my judgement on this one.
> Both languages will insert implicit casts between primitive types whenever it is safe to do so, and require explicit casts otherwise.
Is this really true? I seem to recall needing plenty of `as usize` casts in my code when
using smaller integer types for indices, but maybe this has changed (or maybe I'm misreading what's being said here).
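For what it's worth, the cast I'm talking about looks like this; a minimal sketch (the function and variable names are mine):

```rust
// Indexing a slice requires a `usize`; Rust will not implicitly
// convert a `u32` index, so an explicit cast is needed.
fn element_at(data: &[i32], i: u32) -> i32 {
    // `data[i]` would not compile: a slice cannot be indexed by `u32`.
    data[i as usize]
}

fn main() {
    let data = [10, 20, 30, 40];
    println!("{}", element_at(&data, 2)); // prints 30
}
```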
> In rust the Send/Sync traits flag types which are safe to move/share across threads. In the absence of unsafe code it should be impossible to cause data races.
This is probably Rust's main selling point, because as far as I can tell, no other mainstream language comes even
close to providing static thread-safety guarantees (up to your definition of mainstream).
As time goes on, however, I'm getting less and less excited about this, because most of my programs
are not multithreaded, and the very few times that I need multiple threads, there are often
very obvious and small boundaries between the threads.
It's just not very interesting to me that I _could_ be writing programs with thousands of threads
all jumping around without having to worry about data races, because I don't really worry about it
in the first place. But still, I, as a Rust programmer, have to pay the price for this option being available.
> Undefined behavior in rust is defined here. It's worth noting that breaking the aliasing rules in unsafe rust can cause undefined behavior but these rules are not yet well-defined.
I'm not sure what to say about this, except that it's surprising that there seems to be a
lack of voices about this in the Rust community.
How can anyone comfortably write `unsafe` code without knowing what the rules are?
Especially when the compiler is so "good" at depending on the "rules"?
I don't understand.
I have pretty limited experience with unsafe, but I have written some, and
was often confused about which bugs were my logic bugs and which were the compiler assuming
I didn't break some rule I didn't know about.
Combine this with a poor debugging story overall, and you have a pretty miserable experience programming.
Maybe this isn't a problem in practice, or maybe all people successfully writing `unsafe` code for libraries
are also `rustc` veterans?
> @import takes a path to a file and turns the whole file into a struct. So modules are just structs.
This is a very nice approach! I remember from earlier Rust that the module system was a real pain point
for beginners, and I can also remember really struggling with it.
Curiously though, I also remember looking back, not understanding why anything was confusing about it.
This was also redone(?) at some point, and I think it's nicer now.
> In rust my code is littered with use Expr::* and I'm careful to avoid name collisions between different enums that I might want to import in the same functions. In zig I just use anonymous literals everywhere and don't worry about it.
I've always been bothered by Rust's inability to infer the `enum` type in a `match`;
in other places Rust has no problem being automagic, and this is really very annoying to work around,
either with `use Foo::*` before each match, or having it at file scope and hoping for no collisions.
Zig seems to take exactly the approach I'd go for.
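To make the complaint concrete, here's a small sketch of the two options Rust gives you (the enum and function names are mine):

```rust
enum Expr {
    Add,
    Sub,
}

// Without `use Expr::*`, every arm needs the full path:
fn eval(e: &Expr, a: i32, b: i32) -> i32 {
    match e {
        Expr::Add => a + b,
        Expr::Sub => a - b,
    }
}

// Importing the variants shortens the arms, at the cost of risking
// name collisions with variants of other enums in scope:
fn eval_short(e: &Expr, a: i32, b: i32) -> i32 {
    use Expr::*;
    match e {
        Add => a + b,
        Sub => a - b,
    }
}

fn main() {
    assert_eq!(eval(&Expr::Add, 2, 3), 5);
    assert_eq!(eval_short(&Expr::Sub, 2, 3), -1);
}
```

Zig's anonymous enum literals (`.add`, `.sub`) make both of these unnecessary.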
> Re allocators
I think Zig's stance on explicit allocators is very good; I've seen enough bad code in other languages
that allocates here and there for things that very clearly don't need to be there.
Having the language be explicit about allocations makes it easier to stop and say "hey, wait a minute, is
this really the way I'm supposed to do it?", but without having to jump through hoops
if you _just_ want to allocate something somewhere (define a global allocator yourself).
And, as a bonus, it's easier to handle the allocations of other people's code.
> Zig has no syntax for closures.
I definitely need to write more Zig to find out whether this is a problem or not.
I've written a lot of C++ lately, and while there _are_ closures available, I think I've only used them once.
Maybe the reason for my comparatively heavy closure usage in Rust was that so many methods in the standard library
take closures that you're shepherded into making similar methods for your own types.
> Zig's error handling model is similar to rust's, but its errors are an open union type rather than a regular union type like rust's.
I really think the error story is why I prefer Zig to Rust now.
The giant error `enum` in Rust is definitely what I'd go with, because it's simply not feasible to
manually track which functions return which errors and to make individual enums yourself,
even though this is super easy for the compiler to do, as Zig shows.
Not to pick on anyone in particular, but sometimes it feels like many programmers think
that a program only consists of the happy path and that errors are somehow rare and not
worth dealing with properly. Both Rust and Zig are huge helps to combat this mindset, but I do think
that Zig comes out ahead, simply by being less annoying to work with.
Also, while some people might say that `Result` just being a part of `core` and not a magic special language thing
is cool, I do appreciate Zig's usage of `?` since it's way less typing for something that happens _all_ the time.
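For reference, Rust's `?` is the shorthand in question (Zig's `try` plays the same role); a small sketch with a made-up function name:

```rust
use std::num::ParseIntError;

// `?` propagates the error to the caller; without it, every fallible
// call needs an explicit match or method chain.
fn parse_and_double(s: &str) -> Result<i32, ParseIntError> {
    let n: i32 = s.parse()?; // returns early with the Err on failure
    Ok(n * 2)
}

fn main() {
    assert_eq!(parse_and_double("21"), Ok(42));
    assert!(parse_and_double("oops").is_err());
}
```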
> Zig's compilation is lazy. Only code which is actually reachable needs to typecheck. So if you run zig test --test-filter the_one_test_i_care_about_right_now then only the code used for that test needs to typecheck.
I didn't know this, but this is awesome!
> Zig has absurdly good support for cross-compiling.
I've never understood why cross-compiling isn't an out-of-the-box feature in all languages.
Don't you basically just have to target a different instruction set?
Well, and a different executable format.
But still, compared to all of the other crazy things compilers do,
this seems very straightforward.
> Zig has an experimental build system where the build graph is assembled by zig code.
See above. I really, really, really don't understand why all languages don't do this already.
> In rust, blocks are expressions.
This is something I really like about Rust and a pattern I've used a lot, where I'd say
    let some_thing = {
        let foo = ...
        let bar = foo.baz() + quiz();
        ...
        foo
    };
to avoid accidentally using `bar` somewhere else. Granted, since Rust allows shadowing this isn't really
a problem most of the time, but it's definitely something I miss when writing C++.
Zig's version is somewhat verbose, but I'll manage.
> There is an in-progress incremental debug compiler for zig that aims for sub-second compile times for large projects. Based on progress so far, this is a plausible goal.
Andrew's work on binary patching executables is really cool.
I hope we'll get to compile times this low, even for moderately sized projects.
> real 23m27.475s
This is just sad. Despite all the work the contributors to `rustc` are doing, they just seem
to be in a completely different league, compile-time-wise, from where I'd like them to be.
I hope the steady progress they're making will either make some jumps, or continue for a while :)
> Main points so far:
This is a great summary, and I think people reading it (or the whole post) will have a pretty good idea of
where they stand re. the two languages.
> This is a huge one for me, and I really don't understand why Rust didn't jump on this earlier.
Doing this well is not easy. Exposing a full language at compile time isn't a difficult feature, but doing compile-time execution in a sound, safe way is not simple. For example, cross compilation becomes more of a thing. It is easy to accidentally break the type system. I don't actually know how Zig implements comptime, but I expect, given Andrew's chops, that it's probably done in a good way.
(And, one can argue that there are pros and cons here too, the article gets into them a bit. Doing everything this way has significant drawbacks, as well as significant pros.)
> I'm not sure what to say about this, except that it's surprising that there seem to be a lack of voices about this in the Rust community. How can anyone comfortably write `unsafe` code without knowing what the rules are? Especially when the compiler is so "good" at depending on the "rules"? I don't understand. I have pretty limited experience with unsafe, but I have written some, and was often confused about which bugs were my logic bugs and which were the compiler assuming I didn't break some rule I didn't know about. Combine this with a poor debugging story overall, and you have a pretty miserable experience programming.
There's just not a lot to say. The exact details are being worked on. This takes time. The team isn't going to make rules that invalidate large swaths of existing code, and has a history of making compatibility warnings with a long time before things break.
There is a fairly clear list of "this is absolutely not acceptable," and a bunch of "it depends...".
It's also just that the vast majority of people never need to reach for unsafe in the first place, so it isn't a huge pressure on a lot of people.
> doing compile-time execution in a sound, safe way is not simple. For example, cross compilation becomes more of a thing. It is easy to accidentally break the type system.
Why? I don't think I've ever seen a concrete example of why this isn't simple (but I'm not really a compiler person, so there might be one!). I can imagine plenty of ways of doing Bad Things, like adding compiler flags based on what day it is, but I can't immediately see why this would _break_ anything. What do you mean by breaking the type system? Accidentally getting non-typechecked code into the compiler, or running into problems with the compiler thinking two equal types are distinct?
> There's just not a lot to say. The exact details are being worked on. This takes time.
I understand and appreciate this! Maybe I should've phrased myself better: I don't understand how people are using `unsafe` successfully today when something as fundamental to the Rust language model as aliasing isn't really defined properly yet. It sounds like people writing books before the rules of verb conjugation have really been settled. It sounds to me like plenty of the unsafe code out there might end up breaking at some point, and that, by extension, all the safe code out there is basically built on extremely shaky ground.
I might be overreacting here though, since I don't write a whole lot of Rust anymore and am pretty distanced from the community. Considering their track record, I'm sure it'll turn out just fine.
---
> (By the way, Hacker News doesn't support markdown;
Ah! It's always tricky to remember which places support what syntax with these things. Thanks!
> Accidentally getting non-typechecked code in the compiler, or running into problems with the compiler thinking two equal types are distinct?
Yes, this sort of thing. Basically, you have to have this be deterministic, or you end up with very strange possibilities, possible miscompilations, and in the best case, confusing errors. One option is to simply accept that these things can happen. Another is to restrict what you can do at compile time to ensure that they can't.
An extremely simple example is cross compiling. In Rust, usize is dependent on the architecture you're compiling for. A very simple "just compile and run the program, get the answer, and use it" implementation of compile-time execution will produce a usize of the size of the host, not the target. That's a miscompilation. This example, while being simple, is also simple to fix. But it's an example of how it's not as trivial as "use the compiler to compile the program, then run it." Which maybe isn't how you think of this feature, but how I was, back before I knew anything about this topic :)
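A Rust sketch of the hazard being described (the function name is mine; `size_of::<usize>()` is the target-dependent value in question):

```rust
use std::mem::size_of;

// The width of `usize` is a property of the *target*, not the host.
// A naive "compile the snippet for the host, run it, reuse the answer"
// comptime scheme would bake the host's value in here.
fn target_usize_bytes() -> usize {
    size_of::<usize>()
}

fn main() {
    let n = target_usize_bytes();
    // rustc resolves this against --target: the cfg below names the
    // same target property, so the assert always holds.
    #[cfg(target_pointer_width = "64")]
    assert_eq!(n, 8);
    #[cfg(target_pointer_width = "32")]
    assert_eq!(n, 4);
    println!("usize is {} bytes on this target", n);
}
```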
> I might be overreacting here though
Nah, I think that you're not wrong. It's just that, when you start applying this super rigorously, you end up in weird places. How can you trust any behavior in a language without a specification? How can you trust a specification if that specification hasn't been formally proven? How can you trust an implementation of a formally proven specification? How can you trust that the silicon you're running on does the right thing, even with your bug-free, formally proven code?
Everyone chooses somewhere along this axis to be comfortable with. And everyone does something, including "I can ignore these problems because in practice they don't happen to me," to deal with the bits outside of what they consciously choose to focus on.
> An extremely simple example is cross compiling. In Rust, usize is dependent on the architecture you're compiling for. A very simple "just compile and run the program, get the answer, and use it" implementation of compile-time execution will produce a usize of the size of the host, not the target.
I get that there are non-obvious problems here, and as you say, this problem specifically has an easy fix, but I'd just like to note how Zig does this, with something like:

    const std = @import("std");

    pub fn main() void {
        std.debug.print("{} {}\n", .{ @sizeOf(usize), comptime @sizeOf(usize) });
    }

Modulo the `.{` weirdness, this probably looks familiar. By default, this prints out 8 and 8 on my system, but if I cross-compile to a 32-bit target, it prints 4 and 4.
> It's just that, when you start applying this super rigorously, you end up in weird places. How can you trust any behavior in a language without a specification?
I think this is a social issue for me; if rustc decides one day to change its behavior under my feet, it feels like it's my fault for not having written proper Rust in the first place (even if the behavior wasn't properly defined to begin with), but if there are crazy things going on in the compiler due to bugs (that we didn't find, because proofs are hard), then that's not really my fault, in a sense. And of course, if my CPU decides to run my program wrong, that can't really be blamed on me. The end result in these three cases is the same: the program didn't run as expected, but the blame (I don't want to point fingers, but this is the best word I could come up with) is different, and the probability of each happening is vastly different. I've never hit a CPU bug, but in the little unsafe Rust code I have written, I've had behavior change with a compiler update, which I'm sure is because I hit UB.
And for what it's worth, I would greatly prefer Rust having a proper spec, even if that would increase turnaround time for the language evolution, just to ensure that everyone really is on the same page with respect to what the language really should and shouldn't do. I realize that Rust would rather be careful and make sure that the decisions made are the right ones. I think it's a fair trade-off, but I'm not sure I would have made it, if it were up to me.
Does this also apply to the in-language build system? Given both Rust's and Zig's ergonomics, it just seems so brilliantly simple (at least in hindsight) to let the build system be a library. Is it just coincidence that I've only heard of this approach for Zig and Jonathan Blow's language, or is there a technical reason this is more difficult than it seems?
If you're interested in this sort of thing, this is an excellent paper on build system design, although it focuses more on the properties of the build graph than on how the graph is constructed:
> Is it just coincidence that I've only heard of this approach for zig and Jonathan Blow's language
It's also a feature in Elixir (Mix). Interestingly, Elixir also has a comptime concept, so I think having comptime makes having a build library more sensible.
In case you didn't know, cargo is available as a library as well[0].
I don't know why the current Zig approach[1] would be preferable, as you dump some source code into your project that you are now responsible for maintaining. Any bigger project will require writing boilerplate code just to get things built.
It's also a bit of comparing apples to oranges, as Zig's build system doesn't come with a package manager (which I would say makes up a good chunk of Cargo's complexity).
>> Zig manages to provide many of the same features with a single mechanism - compile-time execution of regular zig code.
> This is a huge one for me, and I really don't understand why Rust didn't jump on this earlier.
I believe this outcome was mostly defined by the history. Here is my reasoning:
Rust, at least since 0.5, was undoubtedly designed as a replacement for C++ (of course that doesn't necessarily mean it only appeals to C++ programmers), and C++ was notable for the unexpected sophistication of, and problems with, its primary compile-time mechanism: templates, in other words.
Rust replaced templates with two other features: traits and reimagined syntactic macros. The former formalizes C++'s long-awaited concepts feature and avoids the issues with C++'s "ad hoc" polymorphism, while the latter deals with code generation, the remaining use of templates.
Traits (and lifetimes) required a complex type system, which takes time to compute, and ideally the result for one file should stay intact when other files change. This constraint makes most additional compile-time mechanisms undesirable, because they can create unexpected dependencies between files, so recompiling one can trigger others. Note that compile-time code is still code, with state and everything attached, so this is far from trivial. (What if your compile-time code needs an external input?)
Zig and Nim show what a different starting point can result in: they didn't have to solve the problems with C++ templates (among others), and could retain ad hoc polymorphism. This limits the ability to compile incrementally, but what do those languages have in common? A much simpler type system, which compiles fast and makes incremental compilation less of a concern for them. Rust needed a complex type system for its goals, which unfortunately limited its options for compile-time mechanisms.
> Much simpler type system, which compiles fast and makes the incremental compilation less concern for them.
Where does this "complex type system -> long compilation times" meme come from?
Most of rustc's time is spent in LLVM, and the bottlenecks have been identified as monomorphization, producing large amounts of LLVM IR, and the lack of binary dependencies.
Type checking is a small portion of the time, and not a bottleneck, IIRC.
You are absolutely right for release builds (sharing the same backend, Zig is also significantly slower in release builds). For debug builds, however, there may be multiple answers: typing, borrow checking (broadly a sort of typing), codegen and LLVM can all contribute significantly to the compilation time [1]. The situation may have improved since I last checked, though.
The problem with this is that it's incompatible with a well-designed compiled[0] programming language; a cross compiler can't replicate the architecture-specific behaviour of the compiled code because the target architecture is unavailable or possibly even nonexistent at the place and time the compiler runs.
Consider, eg, a compiler on ARM targeting x86, when the code uses x86-specific assembly or intrinsics, or more subtly depends on x86-specific handling of things like pointers or integer overflow. If you instead compile the compile-time code for ARM, you a: need two different codegens (soon to become three when you compile the compiler to run on RISC-V), and b: now the compile-time code is depending on ARM-specific semantics, which is even worse.
Giving up on cross compilers means your language isn't well-designed. Giving up on architecture-specific behaviour means your language isn't designed for (direct[0]) compilation. (The latter is a legitimate choice, of course, but its negation is also a legitimate choice, and thus a legitimate reason not to support same-language metaprogramming.)
0: in the sense of C-like compiling directly to equivalent machine code; obviously any language can be compiled in the more general sense of producing a working executable
> a cross compiler can't replicate the architecture-specific behaviour of the compiled code because the target architecture is unavailable or possibly even nonexistent at the place and time the compiler runs.
I mean, it can't possibly be nonexistent when you write the program, since if you don't have any idea what the target looks like, how can you output code for it? Do you mean physically existent?
I think I see the overall point though, namely that arch specific code would be a mess, with which I agree.
But isn't this okay? I mean, if you really want to switch on some platform specific thing at compile time, then different behavior for different host platforms is exactly what you want.
To me, there are basically two uses of compile-time execution: precomputation and codegen. If we look at a snippet along these lines:

    const std = @import("std");

    pub fn main() void {
        std.debug.print("{} {}\n", .{ @sizeOf(usize), comptime @sizeOf(usize) });
    }
we're looking at the size of a `usize`, which is pointer-sized, and we're taking it both with and without `comptime`.
If I compile and run this on my system, it prints out 8 and 8, since I'm on a 64-bit system.
It's worth noting that the docs for `@sizeOf` say: "This function returns the number of bytes it takes to store T in memory. The result is a target-specific compile time constant."
> you a: need two different codegens
Won't you need `n` different codegens if you support cross-compilation to `n` different targets anyway?
The only place where the lack of closures in Zig has been clunky for me so far is when you're firing off new threads and need to hand-package all the data for the new thread. Since this is something one does only infrequently, it's not the end of the world.
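To illustrate the hand-packaging, here's a rough Rust sketch of the two styles (all names are made up; note that Rust's `thread::spawn` still uses a closure to carry the args, whereas Zig's `std.Thread.spawn` takes a context value plus a plain function):

```rust
use std::thread;

// Without closures, everything a thread needs gets hand-packaged
// into a struct and unpacked on the other side.
struct WorkerArgs {
    base: i32,
    count: i32,
}

fn worker(args: WorkerArgs) -> i32 {
    (0..args.count).map(|i| args.base + i).sum()
}

fn main() {
    // Closure-free style: explicit packaging, plain function.
    let args = WorkerArgs { base: 10, count: 3 };
    let packaged = thread::spawn(move || worker(args));
    assert_eq!(packaged.join().unwrap(), 33); // 10 + 11 + 12

    // Closure style: the capture does the packaging for you.
    let (base, count) = (10, 3);
    let captured = thread::spawn(move || (0..count).map(|i| base + i).sum::<i32>());
    assert_eq!(captured.join().unwrap(), 33);
}
```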
I think an argument like the following is not really meaningful:
> Most of this difference is not related to lifetimes. Rust has patterns, traits, dyn, modules, declarative macros, procedural macros, derive, associated types, annotations, cfg, cargo features, turbofish, autoderefencing, deref coercion etc
Nobody forces beginners to write macros. Beginners are only macro _users_. With time and experience, the need for macros emerges by itself, and then learning them is a natural part of the process. But even then, nobody is forced to write any.
Dyn is also a very obvious concept to anybody who knows a bit of OO programming in lower-level languages (e.g. C++). The choice to make dynamic dispatch explicit is arguable, but ultimately, although making it explicit is (AFAIK) Rust-specific, the concept itself isn't.
Complaining about pattern matching? C'mon :-) It's a bit like a Python programmer complaining that Golang has a switch/case.
I'm not arguing whether Rust is hard or not, but it seems to me that the author was overwhelmed and complained about everything, even simple things.
In my experience, in the Rust learning process (and programming experience), all the concepts above are dwarfed by the headaches induced by the borrow checker.
> Zig has its own implementation of standard OS APIs which means that linking libc is completely optional. Among other things, this means that zig can generate very small binaries which might give it an edge for wasm where download/startup times matter a lot.
I'm curious about the details of this. Rust has `no_std`; however, it seems that in Zig this is (in a way) more granular?
> Nobody forces beginners to write macros. Beginners are only macro _users_. With time and experience, the need for macros emerges by itself, then learning them is a natural part of the process. But even then, nobody is forced to write any.
Macros in Rust aren't only confusing for beginners, because they have a completely different syntax (well, at least `macro_rules!` does; idk about proc macros) than the one you use for the rest of your program.
Even _if_ you draw the line between macro writers and users, you've effectively made macros a black box you're not supposed to look into. Having trouble debugging anything related to a custom derive or a macro? Too bad, macros are hard. This is (a) not useful and (b) not necessary, as Zig clearly shows.
I think Rust made the C++ mistake of trying to accommodate everyone by having both low level control of things, but not too low level since that's dangerous, so we'll come up with some rules that you can't break, and oh by the way the rules aren't really ready yet, oh, and if you're not used to dereferencing pointers, don't worry we have this deref trait, what's a trait you say? and so on and so on.
It's effectively a barrier to entry, which, ironically, is a thing the Rust community tries really hard to combat.
If you mix a bunch of nice colors, blue, red, green, turquoise, purple, orange, eggshell white, you just get ... brown.
How? My impression is that they are not trying to make the language easier with every release. On the contrary, I see more new features added all the time (which is a good thing if Rust is your thing).
What I see is top-quality documentation. No doubt about it. Perhaps they focus on quality learning material, but the truth is the more I read, the more I scratch my head thinking "what is this construct and when and why do I need to use this?". Then overchoice[1] anxiety kicks in and I go back to zero, that is, my good old C.
On top of that... you have to use proc macros to do any kind of annoying, repetitive implementation of traits. But to do that, you have to add another crate, and then possibly make yet another crate if you want to share any code between the proc macro implementation and the actual library.
I understand why it is this way, but it is very much not ergonomic.
I think what is lost in the conversation when this topic comes up is that the current macro alternatives, macros by example and procedural macros, are stop-gap features: the first is what was available at 1.0 and got stabilized with the explicit intent to deprecate it eventually (its replacement is an ongoing effort[1]), and the latter is a minimal stabilization of compiler internals that had proven to be useful both internally and in nightly crates, where an API surface that we didn't mind maintaining into the future was stabilized.
The entire macro space in Rust (just like async/await) is in MVP status: they are available and useful already, but their current feature set and approachability isn't the end-state.
I know some will read that and think to themselves "Great! More changes to the language! See, they can't help themselves.", but these changes are about removing restrictions and tapering edges.
The author is obviously in the top ~1% or something of programmers: the article is thoughtful and shows a lot of technical depth and knowledge about things most programmers won't even have heard of and he clearly has a fair amount of practical experience. So telling him he's just somehow imagining all this complexity and cognitive load, because he doesn't have to use all this complexity feels a tad tone deaf, in a very HN way.
This reminds me of the (by now) old saying that code is more often read than written - sure, you're not forced to write macros, but if you use macros (or more exotic features), you force the developers who read your code to become familiar with (possibly arcane) features that they maybe would have never used (or needed). That's why I think Go's approach to not try to be "everybody's darling" by implementing every conceivable feature is commendable (even if mentioning this in a Rust-related thread will get me downvoted again).
As coined by Google engineers: software engineering is programming integrated over time. Languages that are paragons of cutting edge PL research unfortunately tend to forget that. Go’s simplicity tends to be brought up as a negative trait (dumb language, dumb users, etc) mostly by those who weren’t yet woken up by a page that required them to quickly read, debug and fix things. Or even jump into another team’s codebase to do the same thing. Last thing on your mind then is the type safety and elegance of a language.
> Go’s simplicity tends to be brought [...] mostly by those who weren’t yet woken up by a page that required them to quickly read, debug and fix things. Or even jump into another team’s codebase to do the same thing. Last thing on your mind then is the type safety and elegance of a language.
Actually, type safety can be quite useful in such a context: it allows the person who designed the types in use to enforce certain restrictions on the developer who is hurriedly adding code to an unfamiliar codebase, thus reducing the requirement for that developer to understand larger issues in the system.
On the other hand, more often than not the developer who is (more or less hurriedly) adding code is trying to do something that the original designer didn't think of beforehand, and because of that they now have to jump through additional hoops to make it work...
This is a good observation. Some of the best codebases I’ve worked with are the ones whose authors insisted on keeping things easy to modify or remove (not by introducing unnecessary abstraction, but by simplifying things as much as possible). Some of the worst codebases are those where the author assumed that their elegant solution is ultimate and will never be read in circumstances other than to be admired.
I don't disagree, but this goes both ways (and I typically only see one side of this argument put forward). They also allow the person who designed the types to make the types so complex that the average user cannot understand what is going on. You may argue "well that person has no business looking at the code" but sometimes you have to look into other people's code when you are debugging an issue.
No doubt type systems can allow you to write safer, more robust code. But they can also allow you to introduce new risks to your codebase, one of which is difficult to read/understand code.
I don't think that people complain about Go's simplicity. People complain about the ham-fistedness of Go's simplicity. The downstream effects of it include, for example, the way that you handle JSON in Go.
But there are a lot of quite frankly crazy choices, like making (milliseconds) an integer type that is incompatible with plain numbers. This means that to multiply a time by a non-constant integer, you have to cast the integer to the millisecond type first, then multiply. This breaks the brain of a scientist like me and makes me want to throw my computer out the window.
My daily driver is Elixir, and it's not impossible to jump into totally foreign code and debug it. In fact, just the other day I did an interview where the 20 minute technical problem was drilling down from the frontend into the backend of a new-to-me system and finding the logic bug in a database query that was causing the wrong data to be surfaced.
It's not about how effortless the code is to write, but the total lifecycle effort. This includes for example maintenance, security, extending and on-boarding new team members.
> Nobody forces beginners to write macros. Beginners are only macro _users_. With time and experience, the need for macros emerges by itself, then learning them is a natural part of the process. But even then, nobody is forced to write any.
I think one of the problems with this mindset is that you are only considering the case where you have full control of the code base, e.g. writing things from scratch, so that you only use the Rust features you feel comfortable with; in other cases, you have little control over what others use.
I've read that as a (minor) complaint that Zig doesn't have Rust's pattern matching, not that there's anything wrong with pattern matching (and that Zig's switch still does 90% of what the author uses Rust's pattern matching for).
As someone with lots of Rust experience I agree with the author. Rust has more concepts that users need to learn. It's not gratuitous/accidental complexity. Each of these features is IMHO pretty well designed and useful, but nevertheless Rust does have a larger toolbox.
It's not necessarily bad. Do you prefer to put more effort into your program before it compiles, or debug it after it's up and running? Do you want a static guarantee, or do you trust yourself to get it right? These are fair trade-offs.
Well, by that logic, Java is overwhelming if you dive right into Aspect oriented programming, the Atomics and Multi-threaded Java.
I don't consider myself an expert in Rust, but hell, the borrow checker was always helpful. Even when it prohibited a sound program, it explained its reasoning at length. It was easy to fix the issue when it came up.
Macros are meta programming, of course they are hard. And even then, there are tools that help. cargo-expand, for example, makes procedural macros easier to reason about.
Sure, the rust language is "simple" if you abstain from using 80% of the features of rust, but then it would be impossible to write non-trivial programs. Zig, on the other hand, gets rid of this 80% as well, AND still lets you write non-trivial programs.
You're right, but I find Rust's concepts/keywords also a bit confusing.
I know what a class and what an interface is, but Rust doesn't have these. It has a struct, which is kinda like a class without methods? It has a trait, which is kinda like an interface but with implementations for methods? It has ... implementations... which kinda turn a struct into a class, but not really.
I don't mean to hate here, but these are all things that need to be understood in some kind of way.
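A minimal sketch of how the three pieces fit together (the `Dog`/`Speak` names are made up for illustration):

```rust
// A struct holds data only: roughly a class without methods.
struct Dog {
    name: String,
}

// A trait is roughly an interface, but it may provide default method bodies.
trait Speak {
    fn name(&self) -> &str;

    // Default implementation; implementors get it for free.
    fn speak(&self) -> String {
        format!("{} makes a sound", self.name())
    }
}

// An impl block attaches behavior to the struct. Together they resemble
// a class, but data and behavior stay syntactically separate.
impl Speak for Dog {
    fn name(&self) -> &str {
        &self.name
    }
}

fn main() {
    let d = Dog { name: "Rex".to_string() };
    println!("{}", d.speak()); // Rex makes a sound
}
```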
I quite like the trait system in Rust, but I'm not sure if I would argue that it's simpler than a class-based language. Just looking at `foo.bar()` there are similar types of complexities. In a class-based language you need to be aware of the class hierarchy, and in Rust you need to be aware of imported traits and auto dereferencing.
I honestly think this is a factor of prior experience. When I started using Rust my background was in OOP languages, so I also went through the exercise of fitting trait pegs into a class shaped holes and it was painful. But this doesn't mean that traits and ADTs and how they interact is harder to learn or understand than Objects with its inheritance, polymorphism, encapsulation and abstraction concepts, it just means that if you already know those, you will have to learn new concepts to learn Rust, which can be surprising if you aren't told about it ahead of time, particularly when you already know several languages that don't use traits.
I do not have any non-Rust resources off the top of my head, but I can give you some quickly looked up resources and enough phrases to search for that should help you in this endeavor.
ADTs are Abstract Data Types[1][2][3]:
> ADT is implementation independent. For example, it only describes what a data type List consists (data) and what are the operations it can perform, but it has no information about how the List is actually implemented.
In the context of Rust it means that the traits, structs and enums are ADTs, while the impls are Data Structures.
The benefit of having structs and enums be the way they are in Rust (simplistic, with little extensibility beyond implementing traits) is that pattern matching[4][5] and destructuring are cheap and built in. Pattern matching becomes especially useful when combined with sum types/tagged unions/enums[6][7][8]. On the other corner you have Scala, which lets you implement specific interfaces to allow structuring and destructuring for arbitrary types, but that has the same problems as overriding constructors in C++: the performance implications of pattern matching are impl dependent and hidden at the call site.
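A small Rust sketch of what's being described: destructuring a sum type with `match` is built into the language, and no user-defined code runs to "unpack" a variant (the `Shape` type here is hypothetical):

```rust
// A sum type: a value is exactly one of these variants.
enum Shape {
    Circle { radius: f64 },
    Rect { w: f64, h: f64 },
}

fn area(s: &Shape) -> f64 {
    // Destructuring is cheap and built in: the compiler just reads
    // the tag and the fields, no overridable hook is involved.
    match s {
        Shape::Circle { radius } => std::f64::consts::PI * radius * radius,
        Shape::Rect { w, h } => w * h,
    }
}

fn main() {
    let r = Shape::Rect { w: 2.0, h: 3.0 };
    println!("{}", area(&r)); // 6
}
```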
ADTs are only concerned with the "shape" of the data and what interactions you can have. It is related to classical OOP in the sense that the "interactions" you can have are equivalent to the message passing from the original conception of OOP[1].
The distinction is similar to database design in SQL: the schema is how the data is laid out and what the relationship between tables is (ADTs), while the queries are the operations performed on them (traits). On the other hand, in OOP there's a higher reliance on encapsulation, making behavior an integral part of what the class is and using inheritance for expansion. When all you have is ADTs, you _can't_ have inheritance, so you end up using composition (which is generally considered better design) and you are more likely to rely on the creation of "new-type" container types for everything. You think of them as a way to describe what the data is, not how you interact with it.
Apologies if this is a bit hand-wavy, I'll try to write a more thoughtful answer at a later time.
Also, I don't mean Rust is harder than C++ or Java. It's just quite different.
If you are used to visualizing software as graphs of functions or classes in your mind for years, switching to the Rust model is quite a change of thinking.
> Both languages will insert implicit casts between primitive types whenever it is safe to do so, and require explicit casts otherwise.
I think Rust always requires explicit casts. It's a bit annoying tbh - especially for array indexing - indexing with a u8, u16 or u32 should always be fine but you still have to do `as usize`.
It is annoying, since with respect to bounds checking we could do a non-surprising automatic conversion: given the array length type and the index type, we'd cast to the bigger of the two and do the bounds check. But that doesn't mesh well with the fact that all indexing except on primitives is implemented in the library, not built in.
This is indeed the main reason array[42u32] isn't supported today: implementing Index for both usize and u32 would make array[42] ambiguous for the type system; it needs to unify the {integer} literal with some implementor of Index, but there would now be two possibilities. This would be a breaking change.
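The ambiguity can be sketched with a hypothetical trait that is generic over the index type, standing in for `Index`:

```rust
struct Table;

// A stand-in for Index, generic over the index type.
trait Get<I> {
    fn get(&self, i: I) -> u32;
}

impl Get<usize> for Table {
    fn get(&self, _i: usize) -> u32 { 0 }
}

impl Get<u32> for Table {
    fn get(&self, _i: u32) -> u32 { 1 }
}

fn main() {
    let t = Table;
    // t.get(42); // error: type annotations needed -- the {integer}
    //            // literal could pick either impl.
    println!("{}", t.get(42usize)); // works once the literal's type is pinned
    println!("{}", t.get(42u32));
}
```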
Which is not recommended, because it could silently truncate if _ is a smaller type, or later becomes one as the code is edited over time.
Much better to use ::from() or into() for numeric type conversions, which are only implemented for types that are guaranteed to fit the value, unless you know that truncation is perfectly fine behavior in a particular instance... which it rarely actually is.
Annoyingly, .into() is mostly useless for array indexing, since even when you're compiling for a 64-bit machine (and thus x[my_u32.into()] would be perfectly fine) it's only defined for up to u16 (because you might want the same multi-megabyte code to also work in a tiny microcontroller). If you don't want to use x[my_u32 as usize], you are forced to use x[my_u32.try_into().unwrap()], or just use usize everywhere (of course, you could also use traits to create your own .into_usize() and use it everywhere).
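To illustrate the options discussed above (array and index names are made up):

```rust
use std::convert::TryFrom;

fn main() {
    let xs = [10u8, 20, 30];
    let i: u32 = 1;

    // let a = xs[i];       // error: the index must be usize
    let a = xs[i as usize]; // `as` compiles everywhere, but would silently
                            // truncate on a target where usize is smaller
    let b = xs[usize::try_from(i).unwrap()]; // explicit, checked conversion

    println!("{} {}", a, b); // 20 20
}
```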
Ah, sorry; I forgot that `as` let you cast bigger types to smaller types. Yeah, that does make the tragically verbose `.into()` the safer option, unless you know that the type you're casting into will always be bigger.
I believe if you write tests, hooking your tests up with the test allocator will effectively prevent all blatant UAF and memory leak events. More subtle ones that happen due to spooky action at a distance and weird incomposability might be out of reach.
(Not at op, who does write tests:) You are writing tests, right? ;)
Testing is a good way to ensure that your program won't have UAF under most normal circumstances. But when it comes to security it's adversarial - your program will get pushed into parts of the state space that were never seen during testing.
Things like browsers and operating systems are all heavily tested and fuzzed using tools like asan. They still have security issues from UAF.
If you're worried about errors coming in through testing and fuzzing, you could be just as much in trouble in a language like Rust due to, say, an unsafe block not composing well with another unsafe block two dependencies over. The challenge then is figuring out how to debug. I would worry that an obsession with "zero-cost abstractions" makes your code difficult to reason about and obscures the bug more than a system that has a more barebones relationship with the computational processes. However, only time will tell which is the better strategy.
> As long as all unsafe code obeys the aliasing and lifetime rules, rust protects completely against UAF. Zig has little protection.
This is the most relevant part to me. As someone who will probably never write a line of either myself, the way I will work with languages like this is through libraries or extensions of higher-level languages like python or ruby. To that end, safety is the most important factor to me, with performance very much second. While rust has unsafe operations, these are relatively easy to audit if the code is open source.
Ok, so to the programmer, Zig may be more ergonomic than C or Rust. But until Zig can offer the safety assurances of Rust, I'm still rooting for Rust to take over the world as the dominant and de-facto low-level language.
I enjoy Rust and have been writing it for quite some time. However, I feel like server side languages still have a long way to go. Client-side languages in comparison have been only growing better (one could make a lot of negative points about TypeScript and node in general, but the ecosystem is a joy to work with). I feel like this is because people have accepted C and C++ with their respective pain points for close to 50 years now, because there really were no viable contenders. I like how this is starting to change.
I still feel Rust, Nim and Zig kind of get the right ideas, but they are not there just yet. I like Rust with its explicitness and correctness, but I wish some stricter features were opt-in. I do not feel that the borrow checker is the ultimate solution to safety problems. Its strictness can make working with Rust a pain. As the author demonstrates, sometimes you know what you want to do and how to write it in other languages, but it can take quite a while to get it down in such a way that the rust compiler will accept it.
I think enabling a language to be garbage collected in general, while making a borrow checker opt in for special, time critical functions is the best of both worlds. I also think Rust may focus a bit too much on the terseness of its syntax. This can make modern Rust hard to read because there are so many special tokens.
> I think enabling a language to be garbage collected in general, while making a borrow checker opt in for special, time critical functions is the best of both worlds.
I remain to be convinced that this is possible.
Rust's ownership model exerts huge design pressure on its standard library. There are some parts that just wouldn't work without ownership (like guards), and many that are far less ergonomic than they could be with GC (like iterators).
If you make a language GC by default with opt-in ownership, what does your standard library look like? It either isn't usable in ownership code, or it's crippled for GC code.
I think it would need to have a GC like Go’s that allows arbitrary interior pointers. Then most of the standard library could continue to take references, which would be allowed to point to GC or non-GC memory. There isn’t much in the standard library that wants to take ownership of things that would make sense to share. For example, the buffered reader structure wants ownership of the file it’s reading from, but it’s not like it’s going to make sense to read the same file descriptor at the same time it’s being read by the buffered reader.
I think the bigger problem is that GC pointers will have to work like Rust’s reference counted pointers do today, meaning you need to use mutexes, read-write locks, etc. to mutate anything behind them. Most shared-memory GC languages allow free data races on all member variables, and just provide unordered atomicity to prevent you from being able to cause a race that writes a bad pointer somewhere. Changing that would be a much stronger mismatch, and as long as that’s the case you’re still not going to be able to write Rust code like you would Java or C#.
The obvious solution is that the standard library would provide both options wherever appropriate, just like Rust has `borrow` and `borrow_mut`, `raw_entry` and `raw_entry_mut`, etc. You might be able to save some of this pain with some polymorphism, as some people suggest[0] for `&` and `&mut`, too.
It might also be a good idea to look at the standard library of Idris 2[1] — it has linear types and garbage collection, so it has many of the same issues you describe, and it also seems to solve them with duplication where necessary[2], at least sometimes.
Well, the idea would be that references and values alike could be marked to be excluded from the garbage collector and then used in a borrowed way. If you want to interface with the garbage collector, you have to do so in an explicit way. However, I have no idea how to do so.
Tools like ESLint or prettier are really top-notch. TypeScript itself is a really nice language, too. Its type system is pretty powerful, more than one would expect certainly. Going from TypeScript to Kotlin feels like such a downgrade in that regard. Also the wider JS ecosystem really has some great, high-quality projects that explore (or possibly reinvent, in a positive way) ways to do things in an elegant way.
I have some frontend developers in my team who think Java is slow, but wait minutes for NPM to finish its job. Our backend code builds 3x faster than our frontend code these days.
But is that because of npm, or is it because of their massive array of dependencies, plus webpack, Babel, polyfills and whatever 5 transpilers they have integrated into their project?
Last time I used NPM, it tended to download megabytes of dependencies for each project. If you used a package in, say, 10 different projects, you had 10 copies of it. Did they solve that issue?
It's good for isolation: if you copy a directory containing an NPM project to another location, in general it will just work, and deleting something elsewhere on the file system should never affect the project. I would argue this is a more important property for a dependency manager to have than using the least amount of disk space possible.
If you really want to avoid it you can install everything as a global dependency, but it's not best practice.
This is still the case. There are some alternate package managers (pnpm, yarn) that handle this better, but they both break some tooling. In practice, I've found that it's not too big an issue—most people have plenty of disk space, and if you are running low you can just `rm -rf node_modules` on some old projects and install again next time you touch them.
I really like how there's a package for everything. Many other languages have adopted this method, but for example in Rust, most packages are not as mature as JS packages are. The JS ecosystem is responsible for spawning services that fund Open-Source developers.
Packages are also easy to install. I spent a whole weekend trying to install Postgres and Drogon (a http server) on C++ with conan/vcpkg, and in the end I could only manage with a docker container installing these dependencies via apt-get, which was exactly what I did not want.
Furthermore, NPM is the package manager among package managers. No other package manager comes close. Python has too many options, virtual environments and so on. Rust's cargo is good enough, but I really do not enjoy having to install a separate package (cargo-edit) just to add a package via the command line instead of editing text files. C++'s package management systems are most of the time a total letdown or don't have widespread adoption. Not only that, npm also takes care of a package maintainer's needs (semver, transitive dependencies, etc.)
---
Many people express dissatisfaction about build tools and the like, but as someone who got into webdev at the exact time people began building larger applications on the client, I love them. Sure, when they first appeared they were a pain to work with, but most modern build systems are amazing. I can just include most files (MD, SVG, images) and work with as if they were JSON/JS files and don't have to worry about how it's done internally.
---
Prototyping is really fast and important if you work with startups or want to create a proof of concept for a customer. I can throw together a functioning backend with http server and database in a couple of days.
Modern JS frameworks are uncomplicated and can be minimal if you know how to use them. For example, my personal site (https://juliankrieger.dev/) is written in Gatsby and React.js, but it weighs only 20kb. Now, there's not much on it but all content is rendered server side and only rehydrated when I need it. For anyone interested in getting even better results on a personal homepage, I recommend looking at 11ty for a static site generator and htm/preact for an absolutely minimal React implementation that only ships javascript where you really need it.
---
I also like how it enables me to write scripts for personal use and at the same time I can use Node for larger projects. I've used python for this in the past, but a large python code base can be a beast of its own.
Moreover, not having to recompile dependencies on a code change is a welcome feature. My main problem with Rust are the large compile times. Node even enabled me to hot reload code under the right conditions, making iterative development an insanely fast process.
> I think enabling a language to be garbage collected in general, while making a borrow checker opt in for special, time critical functions is the best of both worlds.
Put everything you don't want to manage into Rcs, RefCells, Boxes and the like, and you'll more or less feel like you're using a (verbose) GCed language (with a loss of performance and safety as a natural consequence).
> I enjoy Rust and have been writing it for quite some time. However, I feel like server side languages still have a long way to go. Client-side languages in comparison have been only growing better
The fact you call them "server-side language" is a strong bias. Not everything is a client/server app.
RefCell is runtime "borrow checking" where a panic is thrown if the object is already mutably borrowed. Depending on your use case you might not even care about this behavior because you have other invariants that ensure that you're only borrowing mutably once, you just can't encode it in the type system. Otherwise, you would combine it with Rc[1]: Rc<RefCell<T>>.
Box, on the other hand, doesn't have any runtime cost. In fact, its runtime cost is the same as a borrow: both & and Box<T> are just plain pointers, with the extra benefit of having compile time lifetime checking.
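A small sketch of the `Rc<RefCell<T>>` pattern mentioned above, with runtime borrow checking standing in for compile-time checking:

```rust
use std::cell::RefCell;
use std::rc::Rc;

fn main() {
    // Shared, mutable state without compile-time borrow tracking:
    // Rc counts owners at runtime, RefCell checks borrows at runtime.
    let shared = Rc::new(RefCell::new(vec![1, 2, 3]));
    let alias = Rc::clone(&shared);

    alias.borrow_mut().push(4);
    println!("{}", shared.borrow().len()); // 4

    // Taking a second mutable borrow while one is live would panic
    // at runtime rather than fail to compile:
    // let a = shared.borrow_mut();
    // let b = shared.borrow_mut(); // panic: already mutably borrowed
}
```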
Has there been much progress with the D borrow checker? It always comes up in these threads but it seems to still be missing some key features (parameterized lifetimes for example). What’s the timeline for this landing in some form of “non-experimental”?
my ears pricked up at the mention of gtk apps for the pinephone; i was planning on exploring rust and D to do precisely that. will definitely add zig to the mix now.
thanks! when i had my n900 i used vala for the same use case (small, quick apps that were just for me to use) but the language is getting a bit long in the tooth :)
It's weird because out of all the languages I have made a special environment for, Zig is the only one I couldn't figure out how to. For example, how to write global asm? How to call the Zig main function? I guess I need a trampoline? I have an emulated env with no filesystems and such things, where you just communicate using system calls - so nothing special, however since you can't just use the standard libraries I have to set the environment up myself. That is fine. It's also cross-compiled.
For rust I had to add some custom linker flags, use global_asm for the startup code, and then with no_std I could just call my own main function. The only really annoying part of the whole thing is that there is no feature in Rust to force a function to not be removed. I could override the global allocator to make allocations very fast.
For C/C++, it's the same as in Rust, except you can use __attribute__((used)) to make a function not get removed. It's also easy to override memory- and string-functions if you have system calls that do these things faster. Overall the C/C++ environment was the fastest.
For Nim, I only had to use C++ as a backend, and then call NimMain(), add a few extra flags, and it would just work. I only wish that Nim was more popular.
If the function you're trying to keep is in a non-pub module, it will be removed.
Maybe you could move it out of private module: <https://rust.godbolt.org/z/PYPbG8>.
There's also an `#[used]`[1] attribute, but only for static items.
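A sketch of the workaround this implies: since `#[used]` only applies to statics, one option is a static holding a function pointer, which keeps the function reachable as far as the linker is concerned (names here are illustrative):

```rust
// A function we want to keep in the binary even if nothing calls it directly.
fn keep_me() -> u32 {
    42
}

// #[used] tells the compiler not to discard this static even if it
// appears unused; the function pointer inside keeps keep_me alive.
#[used]
static KEEP: fn() -> u32 = keep_me;

fn main() {
    // The pointer can still be called normally.
    println!("{}", KEEP()); // 42
}
```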
If your use case is special, consider opening a feature request in the Rust repo.
Zig has very little focus on safety and has had several regressions related to security and correctness, most of which Andrew has said he doesn't care about as much. This concerns me greatly.
The discord community operates like a cult (sound familiar, Rust community?) and any criticisms or anything not exuberantly positive results in a flame war.
I saw all of that happen several times so far with the community and it's ultimately what drove me away from the project altogether.
> security and correctness, most of which Andrew has said he doesn't care about as much
You will find me to be quite open minded about criticism but you're not going to get very far by misquoting me.
I have an entire kanban board[1] dedicated to improving safety, and "security and correctness" are both properties of the word "robust" which is the very first adjective ziglang.org uses to describe the language.
I could spend 20 more minutes on HN debunking the claims in this thread, but instead of rewarding your behavior I'm going to give that time instead to the people who have opened pull requests on Zig and help them get their code merged.
>Zig has very little focus on safety and has had several regressions related to security and correctness, most of which Andrew has said he doesn't care about as much.
A regression appeared where members were accessible outside of their scopes even without the `pub` modifier. This took months to fix and the person bringing it up initially was yelled at about not understanding programming languages in the discord channel.
Another one I personally brought up was that the standard library's utf-8 module had a decoder that panics on invalid input sequences, certainly setting consumers up for DoS attacks with malformed utf-8 inputs. EDIT: the DoS vulnerability is still there (https://github.com/ziglang/zig/blob/master/lib/std/unicode.z..., PR to fix that was closed https://github.com/ziglang/zig/pull/4929). I never responded to the PR because it was at that moment I decided to abandon Zig altogether.
The response to the latter was pretty much "the standard library isn't meant to be used right now", to which I really don't have a response. There was a very, very long and heated argument in the discord channel about it where instead of addressing the concerns about DOS and security I was instead insulted for apparently trying to taint an otherwise perfect language.
The community is vile and the few examples I've seen of the maintainer disregarding safety and security in this way don't give me any amount of confidence in the project overall.
EDIT: Worth mentioning, the syntax and semantics surrounding Zig are not new ideas. I'm sure another project will pop up at some point to compete; many discussions I've seen in the language design channels on IRC and a few discord servers have many people arriving at similar conclusions Zig has made, without knowing Zig even exists. I think we're slowly converging on a language that looks a lot like Zig, but I don't think Zig will be its ultimate incarnation.
Looking at that unicode PR, it seems like the thought process is "we don't want to spend time fixing unicode security regressions until we have stabilized the rest of the language; the unicode library is totally broken right now, so don't fix a minor part of the problem."
There is no excuse, in my opinion. The PR would have taken a step in a safer direction, even if the entirety of it is scrapped at a later date. It wouldn't have broken anything else in the codebase, and it was a completed, merge-able change.
Again, the frustration wasn't just from the PR alone - it was also the Discord flame war that ensued prior to the PR.
While I'm not usually 100% happy with the Discord server (it currently has very little moderation and has had one user behave like a piece of shit for a long time) I've had a hard time finding the flame war you mention. I found the discussion about utf8 decoding and it seemed cordial, with two people agreeing with you and only one saying (paraphrasing...) "Fuck this, I'm leaving this community until people stop asking too much" (which didn't make much sense).
I usually find community overrated and I've made a conscious effort to separate it from technology for the most part. Zig as a language has certain values that it's built with in mind and those values for the most part are aligned with mine, so this is basically what keeps me interested in it. I would use Rust despite its community if it aligned with my values. I despise the leadership of Elixir and Phoenix but I still use both of those when it makes sense.
With all this said, the IRC channel is a lot less about memery and wild discussions (but also less active) than the Discord server, so if you feel like it you can always just pop into #zig on FreeNode if you have questions and would like to talk about the language.
Every language has overzealous fans, and when a language is new they are likely a large percentage of the community. If you use that as a reason to avoid a language you will likely just end up avoiding new languages.
See my other comment. It's more the fact Andrew stepped into the conversation himself and just said "be nice" instead of addressing security concerns, just fueling the flame war even more.
A response to a security concern should never be "fuck off".
> A response to a security concern should never be "fuck off".
It can be when it is out of context and/or out of proportion.
But it could be more likely that temperamentally you are disposed towards a different programming language where security concerns must override any and every issue at hand.
Any new language project in 2020, for good or ill, is going to be highly opinionated and self-select for people who share similar views on language design. When someone comes along on their discord and says "you need to have this security and correctness issue that I care about deeply fixed yesterday" you really can't be surprised if not everyone shares your urgency- There are probably other languages/communities that DO care deeply about exactly those types of issues, that's the beauty of the long tail of the internet.
Yes, a bit of tribalism seems to be ingrained in every online community these days. That clashes with a lot of people having pretty high expectations for code provided for free to them.
I don't see how those two things coincide at all. Further, your second statement is a strawman - for a project and community that touts being a serious replacement for C, there are indeed expectations about the security mindset of the individual providing the code. When the project has 7k+ stars on Github, clearly people are looking at it and using it. If the maintainer is being unsafe, it's ridiculous to imply nobody is allowed to be critical of that.
Junon, you are mistakenly assuming that a language that aims to be safe has to prioritize safety all the way from inception to maturity. Right now Zig has much, much, bigger priorities than safety.
The language is not yet production-ready for almost every production use-case, and that should not be a surprise to anybody that has looked into it a bit. We even had somebody make a "Using Zig in Production" talk that started with a few jokes on how he decided to do so despite Andrew publicly saying that it's too early.
Right now docs, tooling, the self-hosted compiler, making design decisions on corners of the language not yet finalized, getting more contributors, getting funding to speed up development (and give back to contributors), and building up the community are all needs with an immensely higher level of priority.
On the last point, the community, since that's my job, I'll spare a couple more words: "the" discord community doesn't exist. The Zig community is decentralized and anyone is free to start their own space, as stated in the Community wiki page of the project https://github.com/ziglang/zig/wiki/Community
So when it comes to Discord servers, at the moment of writing there are two listed in that document: the older, bigger one, and mine. You are probably talking about the bigger one, where I can see your discussions with other members. From what I can see in the logs, the discussions were calm and reasonable. I also don't see any of the insults that you refer to in your other comments. In case I missed them though, you'd need to raise the problem with the moderators of that space, and not chalk it up to 'the community' being a cult. This is very different compared to how Rust runs its communities btw.
I'm sorry, but from what I can gather in your case you simply had strong opinions on specific topics and other people just disagreed with you, partially for design (i.e. non strictly technical) reasons. From what I can see from your other comments in this thread, my only recommendation is to work on being more dispassionate when approaching a new community and when issuing PRs (btw a good way of avoiding doing useless work is to open an issue first or to find Andrew / other core contributors on IRC and get their opinion). At the end of the day Zig is an opinionated project where Andrew gives the final approval on what the language should or should not be. By missing that nuance, you built up expectations that in the end were unmet, resulting in understandable frustration.
That said, from my PoV, this doesn't justify excessive criticism of Zig and its community.
As for debating changes and raising criticism, we do that too, but to do that successfully you need to understand more the nuances in the history and design of Zig.
The problem is that safety means, by definition, ruling out unsafe code at compile time or run time. If you don't prioritise safety early then you run a high risk of discovering when you try to retrofit safety later that you need to rule out a lot of existing code. Even if you haven't promised stability, breaking existing code hurts the ecosystem.
Therefore when setting priorities for language evolution it seems better to identify work that is less likely to result in breaking code, and prioritise safety over that.
You are correct, but those are general statements. The situation in the Zig ecosystem right now is not one of "retrofitting" security into the language; if today we don't have a function that sanitizes utf8 in the standard library, that doesn't mean that the language is going to become swiss cheese in terms of security.
Please read Andrew's answer and check the linked project management dashboard on GH.
Rust restricts compile time code to what it can prove.
Exactly! That's a feature. Rust is a step forward to a future where code is based on sound theory (type theory), not on some ad-hoc "seems to be working" basis.
I am still torn about this. I can see the usefulness of it from python and go, but I also fear that it brings an unnecessary large maintenance burden.
HTTP standards are evolving, and the implementation will eventually get stale; being in the standard library, it will need to be updated and kept backwards compatible, even when it becomes clear that a new design may be a better option.
So, suddenly people will start using new, often better, libraries, but you still need to keep finding people to maintain the old stdlib http library.
That is mostly why I keep thinking that it would be better to have it as a separate library, with an easy-to-use package manager to discover and use it.
>I am still torn about this. I can see the usefulness of it from Python and Go, but I also fear that it brings an unnecessarily large maintenance burden.
Http standards are evolving and soon the implementation will get stale,
Why would it "get stale"? It doesn't get stale in Golang.
If anything, being part of the standard library is a greater assurance for more eyes going into it, and not having it get stale, as opposed to the language having 5-6 half-abandoned third party libs...
Keep in mind that a HTTP server is only useful for a subset of users, it might be an "obvious" requirement for you, but it definitely isn't for me ;) Features like this should go into libraries, but not the standard library. When looking for a Go or Python alternative, Zig doesn't immediately come to mind TBH.
Also: Go (or python) has no UI system or 3D API wrapper in the standard library, but those are (probably) useful for at least as many people as a HTTP server. Does that mean that Go should get UI and 3D-rendering support in the standard library?
>Keep in mind that a HTTP server is only useful for a subset of users, it might be an "obvious" requirement for you, but it definitely isn't for me ;)
Yes, but that "subset" is huge.
>Also: Go (or python) has no UI system or 3D API wrapper in the standard library, but those are (probably) useful for at least as many people as a HTTP server.
Not in the backend/network server world that Go primarily targets...
Why a http server in the standard library? It's quite likely to end up where Python 3's http.server module is: "Don't use this in production". So when can I then use it? It's mostly a party trick of a module, then.
Yet, Go’s net/http client/servers are fully production ready. This can be done right. Being able to just use them without having to follow the newest trends in a language’s ecosystem is a huge cognitive burden off my mind.
The Python standard library is old. For instance, urllib.urlopen() already existed in Python 1.4 (the oldest I could quickly find docs for), which is from 1996. Things that were useful back then (like a built-in uuencode module) no longer make much sense as part of the standard library. And mistakes in the API of a standard library are hard to fix without breaking backwards compatibility, while a third-party module can, in the worst case, be discarded and replaced by a newer one.
Exactly. Probably better to use some established C HTTP library from Zig? Using C libraries is very easy in Zig, as far as I understand. What's the problem with using libcurl?
None of these languages represent the medium term future, but are certainly interesting milestones along the way that will inform it.
The real future though is theorem proving systems, and generating code from them. I don't mean so much Agda, Idris, etc., which try to approach theorem proving from a programmer's point of view. But theorem proving systems that embrace the full spectrum of abstract and correct thought.
Zig and Rust try to get there without paying the heavy price that theorem proving incurs (Rust pays that price more than Zig already, though). But to truly progress we need to pay that price and make it lower and lower with time.
> The real future though is theorem proving systems, and generating code from them.
I've heard the same thing about model-driven development 25 years ago. Outside of some very small niches, it's pretty much dead now.
> But theorem proving systems that embrace the full spectrum of abstract and correct thought.
And here's where reality and the ideal world collide: the vast majority of applications consists of incomplete models (in that data and/or understanding of the problem are incomplete), quickly shifting goals, and pressure to release.
Theorem proving systems offer nothing that helps with this class of software - abstract and correct thought sounds marvellous in an academic environment or if you have unlimited budget to hire top talent and perform thorough analysis.
In practice, however, your typical line-of-business application is faster, cheaper, and well-enough programmed using traditional languages and a healthy dose of best practices.
Optimisation and platform support are another area where a TPS won't be very useful. Every hardware platform has its quirks, and workarounds are required to get the best performance or avoid pitfalls. These aren't easily expressed (and identified) using abstract thought and models.
Last but not least, no sane company is going to just throw away decades worth of investment in applications, libraries, and (software-)infrastructure just for a nebulous promise of what basically boils down to smarter and better staff with the right tools.
Theorem proving systems have their place and that place might get bigger in the future, but they're most certainly not the panacea of all software development.
The "generating code from them" brings the bad vibe of useless UML tools with it, but I put it in there anyway, because that's how it is going to be.
There is room for "experimental" programming of course, where you experiment with stuff and you are glad that you get it somehow working in the first place. But should stuff that people's lives depend on be built on experimental software like that? No. And as software becomes more and more a part of our lives, I don't see much software for which that attribute does not hold.
Could you elaborate or illustrate what you mean by "theorem proving systems that embrace the full spectrum of abstract and correct thought"?
As a mathematician (and programmer on the side) who regularly works in Coq, my impression is that Rust does represent "the future of programming" (or rather, my ideal of it). Type systems are the only mechanism (that I know of) for formally ensuring properties of programs, and proof assistants and languages like Rust lie on two extremes of the spectrum. The former puts the type system in focus; indeed all my work in Coq is about convincing the compiler that certain functions (terms) type-check. The latter puts types in the background, trying to prove as much as possible with minimal friction.
My work is related to homotopy type theory. There are mathematicians working in Agda and Lean (2, but also 3) as well, but it certainly is a niche field.
Ah, makes sense. By being able to express "abstract and correct thought" I mean being able to express myself in a system much like a mathematician would. Coq for example is too constructively oriented for my taste to make that possible (I know, you can "just add an axiom" ...), and its automation is also (therefore?) not good enough.
Would love a real programming language where one can write subroutines and structures that are directly checked for logical correctness and performance. Having to rewrite something in an alternative language to prove correctness is a major pain.