I guess one way to address this would be industrial scale efforts in the USA to accelerate the things that gave its current lead in engineering and science. Oh, wait ....
Doh! Of course it was easier to implement. IETF wants a working open source implementation before standardising.
Have you ever tried to implement an ITU standard from just reading the specs? It's hard. First you have to spend a lot of money just to buy the specs. Then you find the spec was written by somebody with a proprietary product, tiptoeing along a line that reveals just enough information to keep the standards body happy (i.e., enough to make the specification worth purchasing) without revealing the secret sauce in their implementation.
I've done it, and it's an absolute nightmare. The IETF RFCs are a breath of fresh air in comparison. Not only can you read the source, there are example implementations!
And if you think that didn't lead to a better outcome, you're kidding yourself. The ITU process naturally leads to a small number of large engineering orgs publishing just enough information to interoperate, while keeping enough hidden that the investment required discourages the rise of smaller competitors. The result is, even now I can (and do) run my own email server. If the overly complicated bureaucratic ITU standards had won the day, I'm sure email would have been run by a small number of CompuServe-like rent-seeking parasites for decades.
Given that the general public uses social network services for electronic messaging today, and those don't even pretend they want to be interoperable, we've got parasites of a totally different class on top of the Internet infrastructure.
Remember jabber/xmpp? At least they tried to interoperate. Google Talk at the beginning had interoperability as its main feature, but Google quickly scrapped that.
UPDATE: some say that's because XMPP was too encompassing a standard (if a format allows you to do too much it loses usefulness, like saying a binary file format can store anything). IMO that's not the reason; they could have just supported their own subset. They scrapped interoperability purely for competitive reasons, IMO.
> IETF wants a working open source implementation before standardising.
I don't think that's IETF policy. Individual IETF working groups decide whether to request publication of an RFC, and the availability of open source implementations is a strong argument in favour of publication, but not a hard requirement.
If the IETF standards are sometimes useful, it's more a matter of culture than of policy.
A great example of this was PKIX, whose policy was "we'll publish it as a standard and someone else will have to figure out how to make it work". There are 20-year-old standards-track PKIX documents that have no known implementations.
I have been told that ITU specifications are deliberately confusing so that they can sell consulting services.
However, I think DER is good (and is better than BER, PER, etc in my opinion). (I did make up a variant with a few additional types, though.)
OID is also a good idea, although I had thought they should add another arc for identifiers based on various kinds of other identifiers (telephone numbers, domain names, etc) together with a date for which that identifier is valid (to avoid issues with reassigned identifiers), as well as the possibility of automatic delegation for some types (so that, e.g., if you register an account on another system you can get a free OID from it too; there is a bit of difficulty in some cases but it might be possible). (I have written a file about how to do this, although I have not published it yet.)
Impressive article. He gathered a lot of data interesting in its own right, tested a lot of theories, focused on facts over assertions, and did it all in a way that was a pleasure to read.
The conclusion was somewhat underwhelming: it's at least two things hitting at once, inflation and COVID, possibly with social media thrown in.
I dunno if he's right, but I'd probably add two more factors: the latest round of the ongoing (for 4 years now) Ukraine war coincided with the start of the decline, and now the rise of AI providing a sting in the tail. In fact it was the total lack of AI writing in this piece that made it such a pleasure to read. It's a rare find nowadays.
It isn't often we Australians get to brag: I put in 32kW of solar, a 40kWh battery, a DC EV charger and an AC charger for US$35,000.
My 4-adult household has two EVs and the house is centrally air-conditioned. Average daily usage in January-March: 100kWh per day. Average feed-in price when the sun is shining: about $0/kWh (but negative if it's a bright cool day). Average electricity bill: a small credit. Cost of electricity where I live: US$0.23/kWh. Payback time: 4 years.
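For the curious, the 4-year figure roughly checks out if you assume most of that 100kWh/day would otherwise be bought at grid price - a simplification that ignores feed-in credits and battery round-trip losses:

```javascript
// Back-of-envelope payback using the figures above. Assumes all
// 100kWh/day is offset at the grid price (ignores feed-in credits
// and battery round-trip losses).
const systemCostUSD = 35000;
const dailyUsageKWh = 100;
const gridPriceUSDPerKWh = 0.23;

const yearlySavingsUSD = dailyUsageKWh * gridPriceUSDPerKWh * 365; // 8395
const paybackYears = systemCostUSD / yearlySavingsUSD;

console.log(paybackYears.toFixed(1)); // "4.2"
```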
Country with the most rooftop solar installations per capita: Australia.
Country with the most household kWh of batteries installed per capita: Australia.
Our engineers are orchestrating fully autonomous digital task forces, firing off agents and accomplishing incredible things.
Who swallows this stuff? Like all marketing it's full of weasel words and unverifiable claims. I don't doubt 75% of their code started life as the output of a model, mind you; it just "forgets" to mention the subsequent hours humans put into reviewing, discarding the crap, and refining what was left.
Edit: it's slop. How did I not spot immediately that it was slop posted by the marketing department? If they want engineers to pay any attention they will have to do better. Or maybe they aren't targeting engineers any more? blog.google used to be a reliable read.
The rest of the world is well and truly over being bullied and screwed over by Trump. Notably, while they went along with the previous wars the USA fought (even when they thought they were dubious), they refuse to help Trump in his wars.
If Trump made the mistake of actually attacking Greenland or Canada, they would probably go to war with him. Fortunately, that doesn't seem like something TACO would do. He is currently getting his nose bloodied by Iran, and he doesn't have any appetite for pain - even in the very short term.
From the rest of the world's perspective, it's apparent 60-70% of USA citizens share the same view. Unlike Trump, we are patient. We will wait it out, and then gladly re-unite with 60-70% of the USA who are our friends. This friendship has worked out very well for both sides for a long while, and after this blip passes will likely continue to work out.
Async is a Javascript hack that inexplicably got ported to other languages that didn't need it.
The issue arose because Javascript didn't have threads, and processing events from the DOM is naturally event driven. To be fair, it's a rare person who can deal with the concurrency issues threads introduce, but the separate stacks threads provide are a huge boon. They allow you to turn event driven code into sequential code.
window.addEventListener('keydown', foo);
// Somewhere far away
function foo(event) { process_the_character(event.key); }
becomes:
while (char = read())
    process_the_character(char);
The latter is an easy-to-read linear sequence of code that keeps all the concerns in one place; the former rapidly becomes a huge entangled mess of event processing functions.
The history of Javascript described in the article is just a series of attempts to replace the horror of event driven code with something that looks like the sequential code found in a normal program. At any step in that sequence, the language could have introduced green threads and the job would have been done. And it would have been done without new syntax and without function colouring. But if you keep refining the original hacks they were using in the early days and don't take the somewhat drastic step of introducing a new concept to solve the problem (separate stacks), you end up where they did - at async and await. Mind you, async and await do create a separate stack of sorts - but it's implemented as a chain of objects allocated on the heap instead of the much more efficient stack structure.
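For what it's worth, modern JS can at least make the keydown example read sequentially again via async iteration. A sketch (the event source here is a stand-in, not a real DOM API):

```javascript
const seen = [];
function process_the_character(ch) { seen.push(ch); }

// A stand-in event source; in a browser this would wrap addEventListener
// behind an async iterator.
async function* keyEvents() {
  for (const key of ['h', 'i']) yield { key };
}

// Sequential-looking event handling: each iteration suspends until the
// next event arrives, but the code reads top to bottom like the
// green-thread loop above.
async function run(events) {
  for await (const e of events) process_the_character(e.key);
}

run(keyEvents()).then(() => console.log(seen)); // ['h', 'i']
```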
I can see how the javascript community fell into that trap - it's the boiling frog scenario. But Python? Python already had threads - and had the examples of Go and Erlang to show how well they worked compared to async/await. And as for Rust - that's beyond inexplicable. Rust had green threads in its early days and abandoned them in favour of async/await. Granted, the original green thread implementation needed a bit of refinement - making every low-level call choose between event-driven and blocking behaviour on every invocation was a mistake. Rust now has a green thread implementation that fixes that mistake, which demonstrates it wasn't that hard to do. Yet they didn't do it at the time.
It sounds like Zig with its pluggable I/O interface finally got it right - I/O is a dependency injected at compile time. No "coloured" async keywords, and the compiler monomorphises the right code. Every library using I/O only has to be written once - what a novel concept! It's a pity it didn't happen in Rust.
> async/await came out of C# (well at least the JS version of it).
Having been instrumental in accelerating bringing async/await to JS: it definitely came from C#. We were eagerly awaiting its arrival in JS, and worked with Igalia to focus on it and make it happen across browsers more quickly so people could actually depend on it.
Yep, and I loved when C# introduced it. I worked on a system in C# that predated async/await and had to use callbacks to make the asynchronous code work. It was a mess of overnested code and poor exception handling, since once the code did asynchronous work, the call stack became disconnected from the try-catches that could handle the exceptions. async/await allowed me to easily make the code read and function like equivalent synchronous code.
> async/await came out of C# (well at least the JS version of it).
Not sure if inspired by it, but async/await is just like Haskell's do-notation, except specialized for one type: Promise/Future. A bit of a shame - do-notation works for so many more types.
- for lists, it behaves like list-comprehensions.
- for Maybes it behaves like optional chaining.
- and much more...
All other languages pile on extra syntax sugar for that. It's really beautiful that such seemingly unrelated concepts have a common core.
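To illustrate the generality in this thread's own language: you can hand-roll a generator-driven "do-notation" for a Maybe monad in JS, with yield playing the role of Haskell's <-. This is a sketch of the technique, not any standard API:

```javascript
// Generator-based "do-notation" for a Maybe monad, using null as Nothing.
// Each yield binds a value; a null short-circuits the whole computation.
function doMaybe(gen) {
  const it = gen();
  let input;
  while (true) {
    const { done, value } = it.next(input);
    if (done) return value;         // the block's final "return"
    if (value == null) return null; // Nothing: abort early
    input = value;                  // Just x: bind x and continue
  }
}

const some = doMaybe(function* () {
  const a = yield 3;
  const b = yield 4;
  return a + b;
});
// some === 7

const none = doMaybe(function* () {
  const a = yield 3;
  const b = yield null; // Nothing short-circuits here
  return a + b;
});
// none === null
```

The same driver shape, with a different bind rule, gives you list comprehensions or promise chaining - which is exactly the "common core" being described.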
Similarly F#'s computation expressions predate C#'s syntax, and there is some evidence that C# language designers were looking at F#'s computation expressions. Since the Linq work, C# has been very aware of Monads, and very slow and methodical about how it approaches them. Linq syntax is a subtly compromised computation expression and async/await is a similar compromise.
It's interesting to wonder about the C# world where those things were more unified.
It's also interesting to explore in C# all the existing ways that Linq syntax can be used to work with arbitrary monads and also Task<T> can be abused to use async/await syntax for arbitrary monads. (In JS, it is even easier to bend async/await to arbitrary monads given the rules of a "thenable" are real simple.)
> use async/await syntax for arbitrary monads. (In JS, it is even easier to bend async/await to arbitrary monads given the rules of a "thenable" are real simple.)
I tried once to hack list comprehensions into JS by abusing async/await. You can monkey patch `then` onto Array and define it as flatMap and IIRC you can indeed await arrays that way, but the outer async function always returns a regular Promise. You can't force it to return an instance of the patched Array type.
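The core of the "thenable" trick being described is just this: await will call any object's then method, so await can unwrap your own types - but, as the parent found, the enclosing async function still wraps its result in a real Promise:

```javascript
// Any object with a then(resolve, reject) method is a "thenable":
// await calls then() and resumes with whatever it resolves.
const fortyTwo = { then(resolve) { resolve(42); } };

async function demo() {
  const v = await fortyTwo; // v === 42
  return v;                 // ...but demo() still returns a real Promise
}

demo().then(v => console.log(v)); // 42
```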
JavaScript got async in 2017, Python in 2015, and C# in 2012. Python actually had a version of it in 2008 with Twisted's @inlineCallbacks decorator - you used yield instead of await, but the semantics were basically the same.
> And as for Rust - that's beyond inexplicable. Rust has green threads in the early days and abandoned them in favour of async / await.
There was a fair bit of time between the two, to the point I'm not sure the latter can be called much of a strong motivation for the former. Green threads were removed pre-1.0 by the end of 2014 [0], while work on async/await proper started around 2017/2018 [1].
In addition, I think the decision to remove green threads might be less inexplicable than you might otherwise expect if you consider how Rust's chosen niche changed pre-1.0. Off the top of my head, no obligatory runtime and no FFI/embeddability penalties are the big ones.
> Rust now has a green thread implementation that fixes that mistake
As part of the runtime/stdlib or as a third-party library?
But for a long time (I think even till today, despite there now being an optional free-threaded build) CPython used a Global Interpreter Lock (GIL), which paradoxically makes programs run slower when more threads are used. It's a bad idea to allow sharing all data structures across threads in a high-level safe programming language.
JS's solution is much better, it has worker threads with message passing mechanisms (copying data with structuredClone) and shared array buffers (plain integer arrays) with atomic operation support. This is one of the reasons why JavaScript hasn't suffered the performance penalty as much as Python has.
No, you appear to have no idea what you're talking about here. Rust abandoned green threads for good reason, and no, the problems were not minor but fundamental, and had to do with C interoperability, which Go sacrifices upon the altar (which is a fine choice to make in the context of Go, but not in the context of Rust). And no, Rust does not today have a green thread implementation. Furthermore, Rust's async design is dramatically different from Javascript, while it certainly supports typical back-end networking uses it's designed to be suitable for embedded contexts/freestanding contexts to enable concurrency even on systems where threads do not exist, of which the Embassy executor is a realization: https://embassy.dev/
> At any step in that sequence, the language could have introduced green threads and the job would have been done.
The job wouldn’t have been done. They would have needed threads. And mutexes. And spin locks. And atomics. And semaphores. And message queues. And - in my opinion - the result would have been a much worse language.
Multithreaded code is often much harder to reason about than async code, because threads can interleave executions and threads can be preempted anywhere. Async - on the other hand - makes context switching explicit. Because JS is fundamentally single threaded, straight code (without any awaits) is guaranteed to run uninterrupted by other concurrent tasks. So you don’t need mutexes, semaphores or atomics. And no need to worry about almost all the threading bugs you get if you aren’t really careful with that stuff. (Or all the performance pitfalls, of which there are many.)
Just thinking about mutexes and semaphores gives me cold sweats. I’m glad JS went with async await. It works extremely well. Once you get it, it’s very easy to reason about. Much easier than threads.
Once you write enough code, you'll realize you need synchronization primitives for async code as well. In pretty much the same cases as threaded code.
You can't always choose to write straight code. What you're trying to do may require IO, and then that introduces concurrency, and the need for mutual exclusion or notification.
Examples: If there's a read-through cache, the cache needs some sort of lock inside of it. An async webserver might have a message queue.
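For the read-through cache case, the usual async-world "lock" is to cache the in-flight Promise rather than the value, so concurrent callers share one fetch instead of stampeding. A sketch (the fetcher here is a made-up stand-in):

```javascript
const cache = new Map();

// Store the in-flight Promise, not the value. The synchronous has/set
// pair is the critical section: JS can't interleave here, so two
// concurrent callers can't both miss and trigger duplicate fetches.
function getCached(key, fetcher) {
  if (!cache.has(key)) cache.set(key, fetcher(key));
  return cache.get(key);
}

// A fake fetcher that counts how often it actually runs.
let fetches = 0;
const fetcher = async (key) => { fetches++; return key.toUpperCase(); };

async function main() {
  const [a, b] = await Promise.all([
    getCached('x', fetcher),
    getCached('x', fetcher),
  ]);
  return [a, b, fetches];
}

main().then(r => console.log(r)); // ['X', 'X', 1]
```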
The converse is also true. I've been writing some multithreaded code recently, and I don't want to or need to deal with mutexes, so, I use other patterns instead, like thread locals.
Now, for sure the async equivalents look and behave a lot better than the threaded ones. The Promise static methods (any, all, race, etc) are particularly useful. But, you could implement that for threads. I believe that this convenience difference is more due to modernity, of the threading model being, what 40, 50, 60 years old, and given a clean-ish slate to build a new model, modern language designers did better.
But it raises the idea: if we rethought OS-level preemptible concurrency today (don't call it threads!), could we modernize it and do better even than async?
> Once you write enough code, you'll realize you need synchronization primitives for async code as well. In pretty much the same cases as threaded code.
I've been programming for 30 years, including over a decade in JS. You need sync primitives in JS sometimes, but they're trivial to write in javascript because the code is run single threaded and there's no preemption.
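For instance, a serviceable async mutex in JS is just a promise chain, precisely because nothing can preempt the synchronous bookkeeping. A sketch, not any library's API:

```javascript
// A minimal async mutex: queue critical sections on a promise chain.
// The synchronous reassignment of this.last can't be preempted, so no
// compare-and-swap or OS lock is needed.
class Mutex {
  constructor() { this.last = Promise.resolve(); }
  run(criticalSection) {
    const result = this.last.then(criticalSection);
    this.last = result.catch(() => {}); // keep the chain alive on errors
    return result;
  }
}

// Two tasks run strictly in acquisition order, even though the first
// one suspends partway through its critical section.
async function main() {
  const m = new Mutex();
  const order = [];
  await Promise.all([
    m.run(async () => { await Promise.resolve(); order.push('first'); }),
    m.run(async () => { order.push('second'); }),
  ]);
  return order;
}

main().then(r => console.log(r)); // ['first', 'second']
```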
> What you're trying to do may require IO
Its usually possible to factor your code in a way that separates business logic and IO. Then you can make your business logic all completely synchronous.
Interleaving IO and logic is a code smell.
> The Promise static methods (any, all, race, etc) are particularly useful. But, you could implement that for threads. I believe that this convenience difference is more due to modernity, of the threading model being, what 40, 50, 60 years old, and given a clean-ish slate to build a new model, modern language designers did better.
Then why don't we see any better designs amongst modern languages?
New languages have an opportunity to add newer, better threading primitives. Yet, its almost always the same stuff: Atomics, mutexes and semaphores. Even Rust uses the same primitives, just with a borrow checker this time. Arguably message passing (erlang, go) is better. But Go still has shared mutable memory and mutexes in its sync library.
> But it raises the idea: if we rethought OS-level preemptible concurrency today (don't call it threads!), could we modernize it and do better even than async?
I'd love to see some thought put into this. Threading doesn't seem like a winner to me.
Now you are comparing single threaded code with multi threaded, which is a completely different axis to async vs sync. Just take a look at C#'s async, where you have both async and multi threading, with all the possible combinations of concurrency bugs you can imagine.
Of course I'm comparing them. Threading and async are two solutions to the same problem: How do you write high performance event driven systems like network services? How do you solve the C10K problem (or more recently the C10M problem)?
If you use a thread per connection (or green threads like Go), you don't also need async. If you have async (eg nodejs), you can get great performance without threads. You're right that they can also be combined, either within a single process (like tokio in Rust) or via multi-process configurations (eg one nodejs instance per core, all behind nginx). But they don't need to be. Go (green threads) and Nodejs (async, single threaded) both work well.
Of course we're comparing them. We all want to know who wore it better
> The job wouldn’t have been done. They would have needed threads. And mutexes. And spin locks. And atomics. And semaphores. And message queues. And - in my opinion - the result would have been a much worse language.
My point is that you do need mutexes, spin locks, etc with async as well, given a multi-threaded platform. So no, we have basically a 2x2 of options with very different properties (async/sync x single/multi-threaded).
> Rust has green threads in the early days and abandoned them in favour of async / await. Granted the original green thread implementation needed a bit of refinement - making every low level choose between event driven and blocking on every invocation was a mistake.
That's a mischaracterization. They were abandoned because green threads require a non-trivial runtime, which means Rust couldn't run on exotic architectures.
> It sounds like Zig with its pluggable I/O interface finally got it right
That remains to be seen. It looks good, with emphasis on looks. Who knows what interesting design constraints and limitations that entails.
Looking at comptime, which is touted as Zig's mega feature, it does come at the expense of a more strictly typed system.
No, I don't trust AI to do much of anything. I've written in other comments on HN that I mostly use it to write draft commit messages and pull requests that I review and rewrite myself, now that the honeymoon phase has ended. But when I was still attempting to get it to do more, I found Claude Code was really bad at trying to fix conflicted merges and rebases, always dropping the wrong parts of the code and leaving me with a broken codebase and a commit message history that didn't make sense for the changes inside.
According to the guy who wrote JJ, he copied all the ideas you mentioned from hg, including a lot of ideas from hg's add-ons. So the similarities are no accident. But then he added a twist - he didn't just delete the index, he dropped "hg commit" as well.
I can't see it going anywhere. It is in many ways "just" a different porcelain for git. The plumbing is the same. It's also safer to use: no JJ command can lose data another JJ command can't recover.