I agree with your conclusion but not with your premise. To do the same research it's not enough to be as capable as a human intelligence; you'd need to be as capable as all of humanity combined. Maybe Albert Einstein was smarter than Alexander Fleming, but Einstein didn't discover penicillin.
Even if some AI was smarter than any human being, and even if it devoted all of its time to trying to improve itself, that doesn't mean it would have better luck than 100 human researchers working on the problem. And maybe it would take 1000 people? Or 10,000?
I'm afraid that turning sand and sunlight into intelligence is so much more efficient than doing it with zygotes and food that people will quickly be outscaled. As with chess, we will shift from collaborators to bystanders.
Who's "we", though, and aren't virtually all of us already bystanders in that sense? I have virtually zero power to shape world events and even if I want to believe that what I do isn't entirely negligible, someone else could do it, possibly better. I live in one of the largest, most important metropolises in the world, and even as a group, everything the entire population of my city does is next to nothing compared to everything being done in the world. As the world has grown, my city's share of it has been falling. If a continent with 20 billion people on it suddenly appeared, the output of my entire country will be negligible; would it matter if they were robots? In the grand scheme of things, my impact on the world is not much greater than my cat's, and I think he's quite content overall. There are many people more accomplished than me (although I don't think they're all smarter); should I care if they were robots? I may be sad that I won't be able to experience what the robots experience, but there are already many people in the world whose experience is largely foreign to mine.
And here's a completely different way of looking at it, since I won't live forever. A successful species eventually becomes extinct - replaced by its own eventual offspring. Homo erectus is extinct, as they (eventually) evolved into Homo sapiens. Are you the "we" of Homo erectus or a different "we"? If all that remains of Homo sapiens some time in the future is some species of silicon-based machines, machina sapiens, that "we" create, will those beings not also be "us"? After all, "we" will have been their progenitors in not-too-dissimilar a way to how Homo erectus were ours (the difference being that we will know we have created a new, distinct species). You're probably not a descendant of William Shakespeare, so what makes him part of the same "we" that you belong to, even though your experience is in some ways similar to his and in some ways different? Will not a similar thing make the machines part of the same "we"?
Not necessarily. The problem is that we can't precisely define intelligence (or, at least, haven't so far), and we certainly can't (yet?) measure it directly. And so what we have are certain tests whose scores, we believe, are correlated with that vague thing we call intelligence in humans. Except these test scores can correlate with intelligence (whatever it is) in humans and at the same time correlate with something that's not intelligence in machines. So a high score may well imply high intelligence in humans but not in machines (e.g. perhaps because machine models may overfit more than a human brain does, and so an intelligence test designed for humans doesn't necessarily measure the same thing we think of when we say "intelligence" when applied to a machine).
This is like the following situation: Imagine we have some type of signal, and the only process we know produces that type of signal is process A. Process A always produces signals that contain a maximal frequency of X Hz. We devise a test for classifying signals of that type that is based on sampling them at a frequency of 2X Hz. Then we discover some process B that produces a similar type of signal, and we apply the same test to classify its signals in a similar way. Only, process B can produce signals containing a maximal frequency of 10X Hz and so our test is not suitable for classifying the signals produced by process B (we'll need a different test that samples at 20X Hz).
My definition of intelligence is the capability to process and formalize a deterministic action from given inputs as transferable entity/medium.
In other words, knowing how to manipulate the world directly and indirectly via deterministic actions and known inputs, and how to teach others via various mediums.
As an example, you can be very intelligent at software programming but socially very dumb (for example, unable to socially influence others).
As another example, if you do not understand another person's language, and you understand neither that person's work nor its influence, then you can make no assumptions about their intelligence beyond whatever you generally assume about how smart humans are.
ML/AI on text inputs is stochastic at best over language context windows, or plain wrong, so it does not satisfy the definition. Well (formally) specified problems with a smaller scope tend to work well, from what I've seen so far.
The ML/AI problems I know to work are calibration/optimization problems.
> My definition of intelligence is the capability to process and formalize a deterministic action from given inputs as transferable entity/medium.
I don't think that's a good definition because many deterministic processes - including those at the core of important problems, such as those pertaining to the economy - are highly non-linear, and we don't necessarily think that "more intelligence" is what's needed to simulate them better. I mean, we've proven that predicting certain things (even those that require nothing but deduction) requires more computational resources regardless of the algorithm used for the prediction. Formalising a process, i.e. inferring the rules from observation through induction, may also be dependent on available computational resources.
> What is your definition?
I don't have one except for "an overall quality of the mental processes humans present more than other animals".
Ok, but the point of a test of this kind is to generalise its result. I.e. the whole point of an intelligence test is that we believe that a human getting a high score on such a test is more likely to do some useful things not on the test better than a human with a low score. But if the problem is that the test results - as you said - don't generalise as we expect them to, then the tests are not very meaningful to begin with. If we don't know what to expect from a machine with a high test score when it comes to doing things not on the test, then the only "capacity" we're measuring is the capacity to do well on such tests, and that's not very useful.
I think that the aesthetic dislike of "everything is a set" is misplaced, because it overlooks a crucial point that people unfamiliar with formal untyped set theories often miss: not every proposition in a logic needs to be (or, indeed, can be) provable. The specific encoding of, say, the integers need not be an axiom. It's enough to state that an encoding of the integers as sets exists, but the propositions `1 ∈ 2` or `1 ∩ 2 ≠ ∅` can remain unprovable. Whether they're true or false remains unknown and uninteresting (or, put another way, "nonsensical").
The advantage is, then, that we can use a simple first order logic, where all objects in the logic are of the same type. This makes certain things easier and more pleasant. That the proposition `1 ∈ 2` can be written (i.e. that it is not a syntax error, though its value is unknowable) should not bother us, just as the fact that the English sentence "the sky is Thursday" is grammatically well-formed yet nonsensical doesn't bother us. It is no more or less bothersome than being able to write the proposition `1/x = 13`, with its result remaining equally "undefined" (i.e. unknowable and uninteresting) if x is 0. If `1/x = 13` isn't a syntax error, there's no reason `1 ∈ 2` must be a syntax error, either.
That a proposition is nonsensical (for all assignments of variables or for some specific ones, as in x = 0 in 1/x) need not be encoded in the grammar of the logic at all, and defining nonsense as "unknowable and uninteresting" is both convenient and elegant. I think that some logicians overlook this because they're attracted to intuitionist theories, where the notion of provability is more reified, whereas in classical theories every proposition is either true or false. They're bothered perhaps less by the ability to write 1 ∈ 2 and more by the idea that 1 ∈ 2 has a truth value. But while the notion of provability itself is not reified in classical logics, unprovable propositions are natural and common. 1 ∈ 2 has a meaning only in a very abstract sense; the theory can make that statement valid yet practically nonsensical by not offering axioms that can prove or disprove it. Things can be "undefined" in a precise way: the axioms do not allow you to come to any definition.
1 ∈ 2 is operating at a _different layer of abstraction_ than Peano arithmetic is. It's like doing bitwise operations on integers in a computer program. You can do it, but at that point you aren't really working with integers as _integers_.
If 1 ∈ 2 is neither provable nor refutable, then you're not working with anything. The proposition literally has no meaning. It's not a syntax error, but you can't use its value for anything. Its value is undefined.
This actually comes in handy: While 1 ∈ 2 is undefined, `(2 > 1) ∨ (1 ∈ 2)` is true, and `(1 > 2) ∧ (1 ∈ 2)` is false, and this is useful because it means you can write:
x = 0 ∨ 1/x ≠ 0
which is a provable theorem despite the fact that the clause `1/x` is difficult to typecheck. This comes in even more handy once you apply substitutions. E.g. it is very useful to write:
y = 0 ∨ 1/x ≠ 0
and separately prove that y = x.
To make this convenient, typed theories will often define 1/0 = 0 or somesuch (yet those bothered by `1 ∈ 2` don't seem to complain about that). In untyped set theory, 1 ∈ 2 and 1/0 can remain valid yet undefined.
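For a concrete example of that convention (a minimal Lean 4 sketch; the particular tactic is an assumption on my part, but the fact itself holds in Lean's core library): natural-number division is total, and dividing by zero is defined to return zero, so the statement below is provable rather than being a type or syntax error.

```lean
-- In Lean 4, Nat division is total: n / 0 is defined to be 0,
-- so this closed statement is decidable and provable.
example : (1 : Nat) / 0 = 0 := by decide
```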
Of course a ZF set theory operates with different objects than Peano arithmetic - it's a different theory. But Peano arithmetic nevertheless applies to any encoding of the integers, even the ones where 1 ∈ 2 is undefined.
Running Java workloads is very important for most CPUs these days, and both ARM and Intel consult with the Java team on new features (although Java's needs aren't much different from those of C++). But while you're right that with modern JITs, executing Java bytecode directly isn't too helpful, our concurrent collectors are already very efficient (they could, perhaps, take advantage of new address masking features).
I think there's some disconnect between how people imagine GCs work and how the JVM's newest garbage collectors actually work. Rather than exacting a performance cost, they're more often a performance boost compared to more manual or eager memory management techniques, especially for the workloads of large, concurrent servers. The only real cost is in memory footprint, but even that is often misunderstood, as covered beautifully in this recent ISMM talk (that I would recommend to anyone interested in memory management of any kind): https://youtu.be/mLNFVNXbw7I. The key is that moving-tracing collectors can turn available RAM into CPU cycles, and some memory management techniques under-utilise available RAM.
So, the guys at Azul actually had this sort of business plan back in 2005, but they found that it was unsustainable and turned their attention to the software side, where they have done great work. I remember having a discussion with someone about Java processors and my comment was just "Lisp machines." It's very difficult to outperform code running on commodity processor architectures. That train is so big and moving so fast, you really have to pick your niche (e.g. GPUs) to deliver something that outperforms it. Too much investment ($$$ and brainpower) flowing that direction. Even if you're successful for one generation, you need to grow sales and have multiple designs in the pipeline at once. It's nearly impossible.
That said, I do see opportunities to add “assistance hardware” to commodity architectures. Given the massive shift to managed runtimes, all of which use GC, over the last couple decades, it’s shocking to me that nobody has added a “store barrier” instruction or something like that. You don’t need to process Java in hardware or even do full GC in hardware, but there are little helps you could give that would make a big difference, similar to what was done with “multimedia” and crypto instructions in x86 originally.
There are also load and store barriers, which add work when accessing objects from the heap. In many cases, adding work in the parallel path is good if it allows you to avoid single-threaded sections, but not in all cases. Single-threaded programs with a lot of reads can be pretty significantly impacted by barriers.
Sure, but other forms of memory management are costly, too. Even if you allocate everything from the OS upfront and then pool stuff, you still need to spend some computational work on the pool [1]. Working with bounded memory necessarily requires spending at least some CPU on memory management. It's not that the alternative to barriers is zero CPU spent on memory management.
> The Parallel GC is still useful sometimes!
Certainly for batch-processing programs.
BTW, the paper you linked is already at least somewhat out of date, as it's from 2021. The implementation of the GCs in the JDK changes very quickly. The newest GC in the JDK (and one that may be appropriate for a very large portion of programs) didn't even exist back then, and even G1 has changed a lot since. (Many performance evaluations of HotSpot implementation details may be out of date after two years.)
[1]: The cheapest, which is similar in some ways to moving-tracing collectors, especially in how it can convert RAM to CPU, is arenas, but they can have other kinds of costs.
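To illustrate the footnote, here's a minimal, hypothetical bump/arena allocator sketch in C++ (the class name `Arena` is invented, not any particular library): allocation is just an aligned pointer increment, and everything is freed at once, which is what makes arenas so cheap in CPU terms while they hold on to memory.

```cpp
#include <cstddef>
#include <vector>

// Minimal bump ("arena") allocator sketch: per-allocation cost is a pointer
// bump; all memory is released together when the arena is reset.
class Arena {
public:
    explicit Arena(std::size_t capacity) : buffer_(capacity), offset_(0) {}

    // 'align' must be a power of two (true for alignof results).
    void* allocate(std::size_t size, std::size_t align = alignof(std::max_align_t)) {
        std::size_t aligned = (offset_ + align - 1) & ~(align - 1);
        if (aligned + size > buffer_.size()) return nullptr;  // arena exhausted
        offset_ = aligned + size;
        return buffer_.data() + aligned;
    }

    void reset() { offset_ = 0; }  // "frees" every allocation at once, in O(1)

private:
    std::vector<std::byte> buffer_;
    std::size_t offset_;
};
```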
The difference with manual memory management or parallel GC is that concurrent GCs create a performance penalty on every read and write (modulo what the JIT can elide). That performance penalty is absolutely measurable even with the most recent GCs. If you look at the assembly produced for the same code running with ZGC and Parallel, you'll see that read instructions translate to way more CPU instructions in the former. We were just looking at a bug (in our code) at work this week, on Java 25, that was exposed by the new G1 late barrier expansion.
Different applications will see different overall performance changes (positive or negative) with different GCs. I agree with you that most applications (especially realistic multi-threaded ones representative of the kind of work that people do on the JVM) benefit from the amazing GC technology that the JVM brings. It is absolutely not the case, however, that the only negative impact is on memory footprint.
> The difference with manual memory management or parallel GC is that concurrent GCs create a performance penalty on every read and write
Not on every read and write, but it could be on every load and store of a reference (i.e. reading a reference from the heap to a register or writing a reference from a register to the heap). But what difference does it make where exactly the cost is? What matters is how much CPU is spent on memory management (directly or indirectly) in total and how much latency memory management can add. You are right that the low-latency collectors do use up more CPU overall than a parallel STW collector, but so does manual memory management (unless you use arenas well).
I think that describing Zig as a "rewrite of C" (good or otherwise) is as helpful as describing Python as a rewrite of Fortran. Zig does share some things with C - the language is simple and values explicitness - but at its core is one of the most sophisticated (and novel) programming primitives we've ever seen: A general and flexible partial evaluation engine with access to reflection. That makes the similarities to C rather superficial. After all, Zig is as expressive as C++.
> Most importantly, it dodges Rust and C++'s biggest mistake, not passing allocators into containers and functions
I think that is just a symptom of a broader mistake made by C++ and shared by Rust, which is a belief (that was, perhaps, reasonable in the eighties) that we could and should have a language that's good for both low-level and high-level programming, and that resulted in compromises that disappoint both goals.
To me, the fact that Zig has spent so long in development disqualifies it as being a "rewrite of C."
To be clear, I really like Zig. But C is also a relatively simple language to both understand and implement because it doesn't have many features, and the features it does have aren't overly clever. Zig is a pretty easy language to learn, but the presence of comptime ratchets up the implementation difficulty significantly.
A true C successor might be something like Odin. I am admittedly not as tuned into the Odin language as I am Zig, but I get the impression that despite being started six months after Zig, the language is mostly fully implemented as envisioned, and most of the work is now spent polishing the compiler and building out the standard library, tooling and package ecosystem.
I don't think it's the implementation that's delaying Zig's stabilisation, but the design. I'm also not sure comptime makes the implementation all that complicated. Lisp macros are more powerful than comptime (comptime is weaker by design) and they don't make Lisp implementation complicated.
Fair. I'm not a compiler developer, so I'll defer to your expertise on that front.
That being said, I suppose my ultimate wonder is how small a Zig implementation could possibly be, if code size and implementation simplicity were the priority. In other words, could a hypothetical version of the Zig language have existed in the 80's or 90's, or was such a language simply out of reach of the computers of the time?
It's not quite as minimal as C, but it definitely could have been made in the 80s or 90s (actually, 70s, too) :) There were far larger, more complex languages back then, including low-level languages such as C++ and Ada, not to mention even bigger high-level languages. High-level languages were already more elaborate even in the 70s (comptime is no more tricky than macro or other meta-programming facilities used in Lisp in the sixties or Smalltalk in the 70s; it certainly doesn't come even remotely close to the sophistication of 1970s Prolog).
I don't think there's any programming language today that couldn't have been implemented in the 90s, unless the language relies on LLMs.
> Zig does share some things with C - the language is simple and values explicitness - but at its core is one of the most sophisticated (and novel) programming primitives we've ever seen: A general and flexible partial evaluation engine with access to reflection.
To my understanding (and I still haven’t used Zig) the “comptime” inherently (for sufficiently complex cases) leads to library code that needs to be actively tested for potential client use since the instantiation might fail. Which is not the case for the strict subset of “compile time” functionality that Java generics and whatnot bring.
I don't want that in any "the new X" language. Maybe for experimental languages. But not for Rust or Zig or any other that tries to improve on the mainstream (of whatever niche) status quo.
> leads to library code that needs to be actively tested for potential client use since the instantiation might fail
True, like templates in C++ or macros in C or Rust. Although the code is "tested" at compile time, so at worst your compilation will fail.
> I don’t want that in any “the new X” language
Okay, and I don't want any problem of any kind in my language, but unfortunately, there are tradeoffs in programming language design. So the question is what you're getting in exchange for this problem. The answer is that you're getting a language that's both small and easy to inspect and understand. So you can pick having other problems in exchange for not having this one, but you can't pick no problems at all. In fact, you'll often get some variant of this very problem.
In Java, you can get by with high-level abstractions because we have a JIT, but performance in languages that are compiled AOT is more complicated. So, in addition to generics, low-level languages have other features that are not needed in Java. C++ has templates, which are a little more general than generics, but they can fail to instantiate, too. It also has preprocessor macros that can fail to compile in a client program. Rust has ordinary generics, which are checked once, but since that's not enough for a low-level language, it also has macros, and those can also fail to expand correctly.
So in practice, you either have one feature that can fail to compile in the client, or you can have the functionality split among multiple features, resulting in a more complicated language, and still have some of those features exhibit the same problem.
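To make the comparison concrete, here's a minimal C++ sketch (the names `largest` and `Point` are invented for illustration): the template itself is accepted when the library is compiled, but whether a particular instantiation compiles is only discovered at the client's call site, which is essentially the failure mode described above for comptime.

```cpp
#include <string>

// A generic function template: written once by the library author,
// but each instantiation is only checked when a client uses it.
template <typename T>
T largest(T a, T b) {
    return (a > b) ? a : b;  // requires T to provide operator>
}

struct Point { int x, y; };  // a client type with no operator>

int main() {
    largest(3, 5);                                // fine: int has operator>
    largest(std::string("a"), std::string("b"));  // fine: std::string has operator>
    // largest(Point{1, 2}, Point{3, 4});         // would fail to compile here, in client code
    return 0;
}
```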
I wasn’t clear then. I would rather have N language features of increasing complexity/UX issues for dealing with increasingly complex situations rather than one mechanism to rule them all that can fail to instantiate in all cases (of whatever complexity). That’s the tradeoff that I want.
Why? Because that leads to better ergonomics for me, in my experience. When library authors can polish the interface with the least powerful mechanism with the best guarantees, I can use it, misuse it, and get decent error messages.
What I want out of partial evaluation is just the boring 90’s technology of generalized “constant folding”.[1] I in principle don’t care if it is used to implement other things... as long as I don’t have surprising instantiation problems when using library code that perhaps the library author did not anticipate.
[1]: And Rust’s “const” approach is probably too limited at this stage. For my tastes. But the fallout of generalizing is not my problem so who am I to judge.
> Okay, and I don't want any problem of any kind in my language, but unfortunately, there are tradeoffs in programming language design.
I see.
> So in practice, you either have one feature that can fail to compile in the client, or you can have the functionality split among multiple features, resulting in a more complicated language,
In my experience Rust being complicated is more of a problem for rustc contributors than it is for me.
> and still have some of those features exhibit the same problem.
Which you only use when you need them.
(I of course indirectly use macros since the standard library is full of them. At least those are nice enough to use. But I might have gotten some weird expansions before, though?)
That will have to do until there comes along a language where you can write anything interesting as library code and still expose a nice to use interface.
> I would rather have N language features of increasing complexity/UX issues for dealing with increasingly complex situations rather than one mechanism to rule them all that can fail to instantiate in all cases (of whatever complexity). That’s the tradeoff that I want.
It's not that that single mechanism can fail in all situations. It's very unlikely to fail to compile in situations where the complicated language always compiles, and more likely to fail to compile when used for more complicated things, where macros may fail to compile, too.
Its probability of compilation failure is about the same as that of C++ templates [1]. Yeah, I've seen compilation failures in templates, but I don't think that's on any C++ programmer's top ten problem list (and those failures usually show up when you start doing stranger things). Given that there can be runtime failures, which are far more dangerous than compilation failures and cannot be prevented, the fact that the much less problematic compilation failures cannot always be prevented is a pretty small deal.
But okay, we all prefer different tradeoffs. That's why different languages choose design philosophies that appeal to different people.
[1]: It's basically a generalisation of the same idea, only with better error messages and much simpler code.
Zig is so novel that it's hard to find any language like it. Its similarity to C is superficial. AFAIK, it is the first language ever to rely on partial evaluation so extensively. Of course, partial evaluation itself is not new at all, but neither were touchscreens when the iPhone came out. The point wasn't that it had a touchscreen, but that it had almost nothing but. The manner and extent of Zig's use of partial evaluation are unprecedented. I have nothing against OCaml, but it is a variant of ML, a 1970s language, that many undergrads were taught at university in the nineties.
I'm not saying everyone should like Zig, but its design is revolutionary:
C is sometimes used where C++ can't be. Exotic microcontrollers and other niche computing elements sometimes only have a C compiler. Richer, more expressive languages may also have additional disadvantages, and people using simpler, less expressive languages may want to enjoy some useful features that exist in richer languages without taking on all of their disadvantages, too. Point being, while C++ certainly has some clear benefits over C, it doesn't universally dominate it.
TS, on the other hand, is usable wherever JS is, and its disadvantages are much less pronounced.
8051s are still programmed almost entirely in C. There are C++ compilers available, but they're rarely used. Even on STM32, C is more popular. There's a perception -- and not an unsubstantiated one -- that C++ can more easily sneak in operations that go unnoticed.
C++ has many advantages over C, but it also brings some clear disadvantages that matter more when you want to be aware of every operation. When comparing language A against language B, it's not enough to consider what A does better than B; you also have to consider what it does worse.
That's why I don't think that the comparison to TS/JS is apt. Some may argue that C++ has even more advantages over C than TS has over JS, but I think it's fairly obvious that its disadvantages compared to C are also bigger. For all its advantages, there are some important things that C++ does worse than C. But aside from adding a build step (which is often needed, anyway), it's hard to think of important things that TS does worse than JS.
If there is any C, it is hardly any different from using C as a cheap macro assembler, with lots of inline assembly.
Also, it's definitely a 1980s CPU.
It is more than apt, until C gets serious about having something comparable to std::array, span, string_view, non-null pointers, RAII, type-safe generics, strongly typed enumerations, safer casts, ...
Among many other improvements, that WG14 will never add to C.
Again, when comparing two languages you can't just look at the advantages one of them has over the other. There's no doubt C++ has many important advantages over C. The reason to prefer C in certain situations is because of C++'s disadvantages, which are as real as its advantages. Even one of the things you listed as an advantage - RAII - is also a disadvantage (RAII in C++ is a tradeoff, not an unalloyed good). A comparison that only looks at the upsides, however real, gives a very partial picture.
Alongside all of its useful features, C++ brings a lot of implicitness - in overloading (especially of operators), implicit conversion operators, destructors, virtual dispatch - that can be problematic in low-level code (and especially in restricted environments). Yes, you can have an approved subset of C++, and many teams do that (my own included), but that also isn't free of pitfalls.
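As a small, hypothetical illustration of that implicitness (the functions `log_line` and `greet` are made up): nothing in this snippet is wrong, but temporaries, allocations, and destructor calls all happen without being visible at the call site, which is exactly what some low-level teams want to keep explicit.

```cpp
#include <string>

void log_line(const std::string& msg) { /* write msg somewhere */ }

std::string greet(const std::string& name) {
    return "hello, " + name;  // operator+ hides an allocation and a copy
}

int main() {
    // Passing a string literal implicitly constructs a temporary std::string
    // (possibly heap-allocating) and runs its destructor at the end of the
    // full expression - none of that is visible at the call site.
    log_line("sensor online");

    std::string s = greet("world");  // another implicit temporary for "world"
    return 0;
}  // s's destructor runs here, implicitly releasing its allocation
```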
There isn't anyone pointing a gun to someone's head forcing them to use 100% of all C++ features in every single project.
There is an endless list of endemic C pitfalls that WG14 has proven not to care to fix.
The auto industry came up with MISRA, initially for C, exactly because of those issues.
Ideally both languages would be replaced by something better; until that happens, I stand by my point: the only reason to use C instead of C++ is not having a C++ compiler available, or being prevented from using one, as in most UNIX kernels.
I have held this point of view since 1993; I used C instead of C++ only when obliged to deliver my work in C, due to delivery requirements where my opinion wasn't worth anything to the decision makers.
So if I was already using C++ within the constraints of a 386 SX running at 20 MHz, limited to 640 KB (up to 1 MB) of RAM, under MS-DOS, I certainly will not change it for the 2025 computing world reality.
> There isn't anyone pointing a gun to someone's head forcing them to use 100% of all C++ features in every single project.
Tell me how to use C++ without using RAII. You can't. Not being able to allocate automatically without also invoking the constructor is what I dislike most in C++. Another thing is that you can never be sure what a function call really does, because dynamic dispatch and output parameters aren't required to be explicit.
> I have held this point of view since 1993; I used C instead of C++ only when obliged to deliver my work in C, due to delivery requirements where my opinion wasn't worth anything to the decision makers.
I, too, wrote C++ for the 386 in the early nineties, and I, too, generally prefer it to C, but the fact remains that it has some real disadvantages compared to C. From the very early days people talked about exercising discipline and selecting a C++ subset - and it can and does work - but even that discipline isn't free. Avoiding destructors, for example, isn't easy or natural in C++; explicit virtual dispatch with hand-rolled v-tables is very unnatural.
I have nothing but admiration for Erlang, and it is, without a doubt, one of the most inspired languages I've encountered in my career. But when I was at university in the late-ish nineties, they taught us Haskell as "the language of the future." So I guess some languages are forever languages of the future, but they still inspire ideas that shape the actual future. For example, Erlang monitors were one inspiration for our design of Java's structured concurrency construct [1].
If you're interested in another "language of the future" that bears some superficial resemblance to Erlang, I'd invite you to take a look at Esterel (https://en.wikipedia.org/wiki/Esterel), another language we were taught at university.
Not really. With dependent types, you can have a function that returns a type that depends on a runtime value; with Zig's comptime, it can only depend on a compile-time value.
Note that a language with dependent types doesn't actually "generate" types at runtime (as Zig does at compile-time). It's really about managing proofs that certain runtime objects are instances of certain types. E.g. you could have a "prime number" type, and functions that compute integers would need to include a proof that their result is a prime number if they want to return an instance of the prime number type.
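Here's a minimal sketch of that idea, assuming Lean 4 with Mathlib (the names `PrimeNat` and `three` are invented for illustration): the type bundles a number with a proof of primality, and constructing a value means supplying that proof.

```lean
import Mathlib

-- A "prime number" type: a natural number together with a proof
-- that it is prime.
abbrev PrimeNat := {n : Nat // Nat.Prime n}

-- Constructing a value requires constructing the proof as well. Here the
-- obligation is trivial (norm_num discharges it), but for a function whose
-- result's primality is not obvious, this is where the hard work lives.
def three : PrimeNat := ⟨3, by norm_num⟩
```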
Using dependent types well can be a lot of work in practice. You basically need to write correctness proofs of arbitrary properties.
> do these languages let programmers explicitly assert that a certain, relevant property holds at runtime?
Yes, but the programmer has to do the really, really, really hard work of proving that's the case. Otherwise, the compiler says: you haven't proved to me that it's true, so I won't let you make this assertion. Put another way: the compiler checks your proof, but it doesn't write it for you.
The only programs for which all interesting assertions have been proven in this, or a similar, way have not only been small (up to ~10KLOC), but the size of such programs relative to that of the average program has been falling for several decades.
> Sorry if this is obvious, but do these languages let programmers explicitly assert that a certain, relevant property holds at runtime?
You can assert something but then you also have to prove it. An assertion is like a type declaration: you're asserting values of this type can exist. The proof of that assertion is constructing a value of that type to show that it can be instantiated.
This is what the Curry-Howard correspondence means. The types of most languages are just very weak propositions/assertions, but dependent types let you declare arbitrarily complex assertions. Constructing values of such arbitrarily complex assertions can get very hairy.
>> Much of Zig seems to me like "wishful thinking"; if every programmer was 150% smarter and more capable, perhaps it would work.
... and the same could be said about Rust, only with Rust we can already see that it suffers from relatively low adoption at a relatively advanced age.
The funny thing about that claim is that it leads to an obvious question: if working harder to satisfy the compiler is something that requires less competence than other forms of thinking about a program, then why Rust? Why not ATS? After all, Rust does let you eliminate certain bugs at compile time, but ATS lets you eliminate so many more.
Maybe you and I have different working definitions of "relative", but Rust hit 1.0 only 10 years ago, whereas the age of the most popular languages is 30+ years. In that sense Rust is relatively young. Indeed, Rust is the youngest language in the TIOBE top 20, and it's more popular than other languages which have been around much longer. The only language which comes close is Swift, and that one had the advantage of Apple pushing it as the main language for iOS development, Rust never had that kind of marketing and corporate support.
It's actually hard to find any language that ever became very popular (JS, TS, Python, Java, C++, C#, C, and you can even throw in PHP, Ruby, Go, Kotlin, and even COBOL and Fortran) that had such a low adoption rate at age 10. I'm not saying that means Rust won't buck the historical trend and achieve that, but its adoption clearly does not resemble that of any language that's ever become very popular.
Typescript is a tool more than a language, so that's not fair at all.
And the rest of those basically had no competition in their domain when they started 30+ years ago. Rust now has to grow next to 30 years of those languages maturing, and convince people to abandon all existing community resources - whereas those languages did not.
Having a low adoption rate is normal in this day and age. Go is kind of an anomaly caused by its backing by Google.
But it doesn't help that Rust is also in the native space, whose devs are especially stubborn, and is a difficult language with painful syntax.
> And the rest of those basically had no competition in their domain when they started 30+ years ago.
That's not true, as anyone who was programming back then (like me) knows. Java had serious competition from VB, Delphi, and a host of other "RAD" languages as they were called back then (and not long before, the language that the magazines were touting as "the future" was Smalltalk). All of them were heavily marketed. C++, of course, had to compete against an established incumbent, C, and another strong contender, Ada (Ada, BTW, was about as popular in 1990, when it was 10 years old, as Rust is today, although bigger, more important software projects were being written in Ada in 1990 than are being written in Rust today). Python and Ruby were both competing with the very powerful incumbent, Perl. PHP, of course, had Java to compete with, as did C#. Some of these languages had strong backers, but some started out as very small operations (Python, C++, PHP), and some languages with very strong backers did poorly (Delphi, FoxPro).
Again, I have no idea if Rust will ever really take off, but its adoption at this advanced age, despite no lack of marketing, would be extraordinarily low for a language that becomes very popular.
> But it doesn't help that Rust is also in the native space, whose devs are especially stubborn
Perhaps, but Fortran, C, and C++, were all in this space and they all spread rather quickly. Microsoft, a company with a fondness for complicated languages, chose to write significant portions of their flagship OS in C++ when the language was only five years old.
It's true that the market share of low-level languages has been shrinking for several decades now and continues to shrink, but that also isn't really good news for a language that has been trying, for at least a decade, to get a significant share of that shrinking market, and has been having a hard time at that.
> and is a difficult language with painful syntax.
Yes, but this, too, isn't a point in favour of betting on Rust's future success. Other difficult or complex languages indeed had a harder time getting much adoption in their first decade, but things also didn't pick up for them in their second decade.
The latest bloomer of the bunch is Python, but if Rust ever becomes very popular (even as popular as C++ is today), it would need to break Python's record. That's not impossible, but it would be highly unusual. Low adoption in the first decade has virtually always been a predictor of low adoption down the line, too.
I'm curious, do you have figures for this? I was not around when C or C++ were 10, but I was when Python was, and as a long-time Python user I would say that the Python community was much smaller at 10 than the Rust community at 10. So my gut feeling is that your statement is false, at least wrt Python, but I'm happy to change my mind if you have sound data.
Edit: just to add some more anecdotal evidence from my memory of languages I used: the Java community was quite a bit bigger than Rust's at 10. Go's was much smaller than Rust's at 10. I'd be happy to check my beliefs against actual data :)
I wasn't programming yet when C was 10 years old, but I was when C++ was 10 years old (1995), and its adoption was an order of magnitude higher than Rust's is today.
I agree that of all popular languages, Python is the latest bloomer, but while it became popular for applications and data processing rather late, it was used a lot for scripting well before then.
> Go's was much smaller than Rust's at 10
Go turned 10 (if we want to count from 1.0) only 3 years ago, and Go's adoption in 2022 was much bigger than Rust's today.
I don’t have a dog in this race but I was also around at that time and one reason is there was far less choice in 1995 about where you would go from C. C++ was also a vastly simpler language back then (no templates, no exceptions, barely a few hundred command line options). So I am not sure what its adoption then can teach us about language adoption now.
I don't think it's true there was far less choice in 1995. Around that time (a few years later) I was working on a project that was half Ada, half C++, and there were a few more exotic choices around. Aside from those, and C, there were still projects in the company back then written in Fortran and even in JOVIAL. At university, I learnt Esterel for formally-verified embedded software. And that's not even touching on the higher-level space, where VB, Delphi, some Smalltalk, and a large selection of other "RAD tools" were being used (my first summer job was on what today would be called an ERP system, written in a language called Business Basic). At university, the language I was taught in intro to comp sci was Scheme (that was also the embedded-scripting language we used at work). We were also taught ML and a bit of Haskell.
It's true that not many languages that seemed a reasonable choice at the time survived to this day as reasonable choices.
I dunno, I think you're trying to split hairs if top 20 isn't "very popular".
But I don't think the comparison you're trying to make works, because then isn't now.
In general, in order to convince someone to leave their current tools, you have to not only be better but a lot better. As in, you need to offer the entire feature set of the old tool plus something else compelling enough to overcome the network effects (ecosystem + years of experience + job prospects) of the prior environment.
So when C++ came on the scene, they had to compete against ~20 years of accumulated legacy systems and language history. Rust had to compete with 50 years of legacy systems and language history.
Moreover, developer expectations are a lot different today. Back then, C++ was what, a compiler and some docs? Python was an interpreter and some docs? Maybe an IRC channel? Today, it's not enough to reach 1.0, you also have to deliver a package manager, robust 3rd party library ecosystem, language server tooling, a robust community with support for new devs, etc. So timescales for language development are getting longer than they were back then.
Also, I don't know why you've chosen "very popular" as a metric. Very popular isn't something a language needs to be, it just needs to be big enough to sustain a community. Being top 20 within 10 years is certainly in that realm. You can see that other language communities have existed for longer and are much smaller. And anyway, the entire developer population today is much larger than it was back then; you can have a small percentage of all developers but still large enough to be robust. I don't know the math, maybe someone can figure it out, but I wouldn't be surprised if 1% of developers today is inflation adjusted to like 10%-20% of developers in 1996. So Rust is probably as big as it needs to be to sustain itself, it doesn't have to be a "very popular" language if that means being in the top 5 or whatever the threshold is.
> In general, in order to convince someone to leave their current tools, you have to not only be better but a lot better.
I agree.
> So when C++ came on the scene, they had to compete against ~20 years of accumulated legacy systems and language history. Rust had to compete with 50 years of legacy systems and language history.
Ok, but Go and TypeScript are pretty much the same age, and when Java came out, it took over much of C++'s market very quickly.
I agree Rust has some clear benefits over C++, but it also has intrinsic challenges that go well beyond mere industry momentum (and other languages have shown that momentum is very much defeatable).
> Moreover, developer expectations are a lot different today.
But there are more programmers today, and many more people worked on Rust than on C++ in its early years. And besides, Rust has had all those things you mentioned for quite some time, and C++ still doesn't have some of them.
> Also, I don't know why you've chosen "very popular" as a metric. Very popular isn't something a language needs to be, it just needs to be big enough to sustain a community.
I agree it doesn't need to be very popular to survive. But popularity is a measure of the actual benefit a language brings or, at least, lack of popularity is a measure of lack of sufficient benefit, because software is a competitive business. So claims that Rust is some huge game-changer don't really square with its rate of adoption.
> I agree it doesn't need to be very popular to survive. But...
I think that's the end of it then, yeah? We've established it's popular enough (you set a lower bound at Haskell, which has been around for 35 years, has an active and vibrant community, and is still used in industry), we agree it doesn't need to be more popular, so then this threshold of "very popular" you invented (which I guess is the top 10) is arbitrary and therefore not relevant.
> Ok, but Go and TypeScript are pretty much the same age, and when Java came out, it took over much of C++'s market very quickly.
These two languages were created and pushed by two of the largest corporations on the planet. Typescript is basically Javascript++, and it came at a time when Javascript was to a large degree the only language for the browser. So they had: 1) one of the largest corporations in the world backing it with effectively unlimited money as part of a larger campaign to win web developer mindshare, 2) a large developer base of people who already spoke the language, 3) guaranteed job opportunities (at least at Microsoft, with more soon to follow) for people who invested in it. Microsoft was also instrumental in defining the platform on which Typescript ran, so they had that benefit as well. That's one way to achieve success for a language, but it requires only offering a very small delta in features; Typescript could only do what it did by being Javascript + types.
Likewise with Go, they bootstrapped that community with Googlers. Bootstrapping a community is way harder than bootstrapping a language, so having a built-in community is quite an advantage. People wanted to learn Go just to have it on their resume, because they heard it would help them land a job there. Plenty of my students took that route. Google threw their weight around where they could for Go, even going as far as to steal the name right out from another language developer and telling him to pound sand when he complained about it.
I mean, Google could afford to hire Robert Griesemer, Rob Pike, AND Ken Thompson to create Go; whereas Rust came from the side project of a lowly Mozilla software engineer. We're looking at two very different levels of investment in these respective languages.
This seems to me like cherry picking. You're taking the best-case scenarios and then comparing it to something not like that at all. When it pales in comparison, you conclude it's not sufficient. But here's the thing: if we want programming as a field to evolve, not every new language can be ExistingLang++. Some languages are going to have to take big swings, and they're not going to be as popular as the easy road (big swings mean big divisions and polarized views; Javascript + types is an easy and agreeable idea). That doesn't mean they aren't just as if not more beneficial to programming languages as a field.
> But there are more programmers today, and many more people worked on Rust than on C++ in its early years.
Yes, and that completely muddles your point, which is why these comparisons don't make sense. It's like comparing the success of a new NFL team to teams from 50 years ago. Yeah they're ostensibly playing the same game but in many important ways they're actually not.
So at best, in order to make the claim you're trying to make, you'd have to normalize the data from then and now. You haven't done that, so you can't say Rust hasn't achieved some arbitrary threshold of popularity after 10 years and therefore... I'm not exactly sure what your conclusion actually is. Therefore it won't survive? Therefore it's not all people make it out to be? I don't know, you're not being clear.
> But popularity is a measure of the actual benefit a language brings or, at least, lack of popularity is a measure of lack of sufficient benefit
If you're going to make this claim you've gotta back it up with some data. "popular", "actual benefit", "sufficient benefit" are all fuzzy words that mean one thing in your head but mean something different in everyone else. Many people live long enough to understand "popular" and "best" are not often synonymous.
> So claims that Rust is some huge game-changer don't really square with its rate of adoption.
Did anyone make that claim here? Rust is programming language like any other, and at best it's an incremental improvement over current languages, just like all of the top new languages of the last 50 years. The closest thing to a "game changer" I've seen is LLM vibe coding, but otherwise the game in Rust is the same as it's always been: 1) press keyboard buttons to write code in text buffer, 2) compile, 3) dodge bugs, 4) goto 1. Rust makes the first and second parts marginally worse, while making the third part marginally better. It doesn't change the game, but it makes it more fun to play (IMO).
> These two languages were created and pushed by two of the largest corporations on the planet.
I don't think Rust lacks in hype and marketing. It doesn't buy ads in print magazines, but no language does anymore (that's exactly how VB, Delphi, FoxPro, Visual C++, and Java were marketed). And don't forget that while being well-known is necessary for success, it's far from sufficient.
There's also the matter that while large corporations may let certain star personalities work on vanity projects, they tend not to invest too heavily in projects they think are unlikely to succeed. In other words, even corporations can't market their path to success, at least not for long. That's why they try to market the things they already believe have a chance of success. Sun acquired technologies developed for Smalltalk and diverted them to Java because they believed Java had a better chance of success.
> This seems to me like cherry picking
Quite the opposite, I think. I can't find a single example of a language with Rust's adoption at age 10 that ended up very popular.
> whereas Rust came from the side project of a lowly Mozilla software engineer
So did C++.
> If you're going to make this claim you've gotta back it up with some data.
I'm backing it up with the market and the idea that in a highly competitive market, any technology that carries a significant competitive advantage in shorter time-to-market or in better reputation etc. should be picked up - at least if it is well-known. It's the claim that a technology gives its adopter a competitive advantage and yet doesn't spread as quickly as previous similar technologies that requires explanation.
> Did anyone make that claim here?
This whole debate is over whether Rust has some "superabled" and unique bottom-line-affecting capabilities compared to Zig.
> Rust is programming language like any other, and at best it's an incremental improvement over current languages
If you see Rust as one avenue for incremental improvement over C++, then we're in complete agreement :)
I have received 3 transmissions from you so far, and you have yet to define "success" or "popularity", nor have you specified the threshold between "popular" and "very popular", despite having agreed with me that these things are not important for languages. Moreover, you haven't brought any figures to bear in supporting your claims. I think if we are going to continue this discussion, you have to substantiate your position -- otherwise I don't think you've said anything here that I haven't already responded to.
A specific definition of "popular" doesn't matter. What we can say is that Rust's market share at age 10 is lower than that of Fortran, COBOL, C, C++, VB, Python, JS, Java, C#, PHP, Ruby, TS, Kotlin, and Go at that age, but it's bigger than that of ML, Haskell, Erlang, and Clojure at that age. I don't know if I can compare its market share to that of Ada at that age. I'm nearly certain that much larger (and definitely more important) programs were written in Ada circa 1990 than are being written in Rust today, but it's hard for me to compare the number of programs.
Let's remember: Dancin' Duke Java Applets distributed with Netscape's web browser.
"Sun is giving away Java and HotJava free, in a fast-track attempt to make it a standard before Micro-soft begins shipping a similar product"
Zig began development ten years after Rust. If we were counting from the beginning of the development rather than 1.0 (as I did above), then Rust is about to turn 20.
> ... and the same could be said about Rust, only with Rust we can already see that it suffers from relatively low adoption at a relatively advanced age.
What are we measuring? Lines of code? Number of programmers employed? Number of new applications started?
Rust is in: Linux, Windows, Azure, all over AWS, Amazon's Prime video, Cloudflare's proxy, Firefox, Python's `cryptography` package, Zed editor. This is just the sample I know of.
> What are we measuring? Lines of code? Number of programmers employed? Number of new applications started?
They're correlated, but number of programmers employed is something that's relatively easy to measure.
> Is this low adoption?
For a language this old? Yes. That people talk so much about specific projects that use Rust only underlines that: Rust is now as old as Java was when JDK 6 came out; older than PHP was when PHP 5 came out or Facebook was launched; older than C++ was when Windows NT came out.
As in, why not use a language with much stronger formal verification? Because people have tried it and failed.
Like you said, Rust is hard, it already feels at the limit of what people can handle.
But unlike ATS, many people have tried Rust and succeeded, and some Rust programmers even claim that they become very productive with it after a while. I very much doubt the same could be said about ATS.
> But unlike ATS, many people have tried Rust and succeeded, and some Rust programmers even claim that they become very productive with it after a while
So if Rust is preferable to ATS because more people are productive with it despite ATS being able to guarantee more at compile-time, then by that logic a language that more people would be productive with than with Rust, despite Rust guaranteeing more at compile time, would be preferable to Rust.
You see, the problem is that these arguments cannot lead us to an objective preference unless we compare Rust to all other points on the spectrum, especially since Rust proponents already must accept that the best point is not on any extreme. So we know that languages that guarantee more than C but are more productive than ATS are preferable. Guess what? Zig is right there, too, so that argument can't be used to prefer Rust over Zig.
Sure, that makes sense. I agree this is all very subjective, given we don't have the benefit of hindsight for what Zig can accomplish yet.
I think where we disagree is that you believe Zig is as safe as Rust (by making it easier to make other things safer). I don't believe so (my first impression of Zig was Bun repeatedly segfaulting), and I'm just sad that people are choosing the easy route and going for more insecure software, when it finally looked like we had made such great progress, with a language people can actually use. I agree with simpler, but there are so many other things that could be changed or removed from Rust while still leaving in lifetimes, or something similar.
> I think where we disagree is that you believe Zig is as safe as Rust
Quite the opposite. Rust is definitely safer in the simple sense that it guarantees more memory safety. But that's very different from saying that Zig is closer to C than to Rust (Zig's memory safety is much closer to Rust's than to C's), and it also doesn't mean that it's easier to write more correct/secure software in Rust (because there are just too many factors).
> and I'm just sad that people are choosing the easy route and going for more insecure software
The jump from "segfaulting" to "insecure" is unjustified, as not all causes of segfaults map to equally exploitable vulnerabilities. Java programs, for example, segfault much less than Rust programs. Does that mean Rust programs are significantly less secure?
> Zig's memory safety is much closer to Rust's than to C's
This is arguable. Zig's issues with memory safety are not limited to such things as use-after-free ("temporal" memory safety in a rather obvious sense). Without something very much like Rust's borrow checker and affine types, you can't feasibly prevent a program from inadvertently e.g. stashing a pointer somewhere to an inner field of a tagged union, and reusing that pointer later even though the union itself may have mutated to a different tag, which results in undetected type punning and potentially wild pointers. The issue of iterators to a collection being invalidated by mutation is quite similar in spirit, and again has nothing to do with the heap per se. Rust prevents these issues across the board. The fact that it also manages to prevent temporal memory unsafety altogether is simply a bonus that essentially comes "for free" given the borrow checking approach.
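For readers who think in C++ terms, here's a minimal sketch of the analogous hazard using `std::variant` in the role of the tagged union (an analogy, not Zig code):

```cpp
#include <string>
#include <variant>

int main() {
    std::variant<int, std::string> v = 42;

    int* p = &std::get<int>(v);  // pointer into the currently active alternative

    v = std::string("hello");    // the active alternative changes; the int that
                                 // p pointed to no longer exists

    // *p;                       // would be undefined behaviour: a type-punned
                                 // read of storage that now holds a std::string
    return 0;
}
```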
That's not arguable. Because if we say that memory safety is important because it prevents some common and dangerous vulnerabilities, then Zig clearly prevents the most dangerous of those, while C doesn't.
> you can't prevent a program from inadvertently...
Now you're naming more problems that aren't as high on the list as the ones Zig does prevent (and C doesn't). It's true that type confusion is also rather high on the list, but Zig clearly reduces type confusion much more than C does.
Nobody denies that Rust prevents more issues than Zig via sound guarantees - albeit at what I think is a significant cost, and we can argue, subjectively, over whether those issues are worth that cost or not - but Zig's guarantees are still closer to Rust's than to C's if eliminating very common/dangerous vulnerabilities is what we're judging by.
The underlying issue is that Zig turns out to have no feasible safe subset, just like C - unless you go as far as to declare things like using tagged unions in combination with pointers "unsafe" which is both insanely restrictive and hard to check programmatically. People might complain about having to fight the borrow checker, but they'd complain a whole lot more if the standard approach to safety-subsetting a language was just bounds-checked array access plus "you can write FORTRAN in any language!"
> The underlying issue is that Zig turns out to have no feasible safe subset
It does have a safe subset with respect to spatial memory safety, which is the more important kind of memory safety if what we judge by is dangerous vulnerabilities.
> People might complain about having to fight the borrow checker, but they would complain a lot more if the standard approach to safety-subsetting a language was just bounds-checked array access plus "you can write FORTRAN in any language!"
I don't know how you can say that this is an objectively-supported claim given Rust's poor adoption at its age.
Rust's real superpower is its tooling. Cargo handles package management, building, testing, documentation, and publishing. The compiler's errors explain what went wrong and where it happened. Installing the toolchain with rustup is quick and painless, even on Windows. I can't know that it's best in class, but it's certainly the best I've used.
I can see another language having a more expressive type system - I've come up against the limitations of Rust's type system more than once - but the tradeoff isn't worth it if I have to go 20 years back in time in terms of tooling.
Rust is much older than Zig, though, and there's nothing stopping Zig (or any future language that doesn't adopt Rust's precise set of guarantees) from having the same, or possibly better. Given Zig's immaturity, I certainly wouldn't use it for any serious production software today.
BTW, I'm not saying Rust is bad. All I'm saying is that the attempt at proving it's objectively best by leaning on memory-safety is not really as objective as the people who make that claim seem to think it is.
I hadn't heard of ATS before, and I think that I mistook your using it as an example of "more isn't always better" and thought you were suggesting it as an actual alternative.
I'm looking for the next thing I want to learn, and have been leaning towards logic programming and theorem provers, so you inadvertently piqued my interest.
Sure, just keep in mind that various formal verification tools vary greatly in their usability, even theorem provers. I.e. the experience with ATS will be quite different from Lean, which will be quite different from TLA+.
> ... and the same could be said about Rust, only with Rust we can already see that it suffers from relatively low adoption at a relatively advanced age.
How much adoption should we expect Rust to have at this point, and how does it compare to other languages? I certainly don't have the impression that Rust has relatively low adoption in a general sense, although I'm also a fan of the language and make a point of being in programming communities that are interested in Rust too.
> then why Rust? Why not ATS? After all, Rust does let you eliminate certain bugs at compile time, but ATS lets you eliminate so many more.
Of course it's subjective, but I think it comes down to a perception of benefit relative to effort. Zig gives much perceived benefit for low effort. Rust gives much perceived benefit for more effort. ATS requires even more effort. But the benefit is relative: nobody cares to create totally bug-free software when the software is not critical at all. In addition, most programmers think that solving bugs is part of the job more than preventing them. Solving segfaults is often easier than thinking about formal systems, so the perceived benefit is higher for Zig than for ATS - at least for programmers who know C, pointers, and the like. Note how the majority of programmers don't deal with memory allocation at all: they use JavaScript and Python, where memory is managed. For them, solving memory bugs is not something that makes sense, so the effort of dealing with memory is enough to renounce that freedom altogether. Rust is a very good middle ground: it's a complex language, but there's a lot of room for improving its ergonomics.
> Rust is a very good middle ground: it's a complex language, but there's a lot of room for improving its ergonomics.
I certainly agree Rust is a much better middle ground than either C or ATS. But so is Zig. What's harder to support objectively is that Rust is a better middle ground than Zig (or vice-versa). We just don't know! They both make different tradeoffs that we can't objectively rule on.
So all I'm saying is that we're left with subjective preferences, and that's fine, because that's all we have! So let's stick to saying "I like Rust's design better" or "I like Zig's design better", and stop trying to come up with objective reasons that are just not well-founded.
At the very least, people should stop comparing Zig to C as part of an argument that claims Rust is good because it prevents the vulnerabilities associated with memory-safety violations, as Zig prevents the most dangerous of those, too.