That's almost certainly per server instance, though; there's no mention of any kind of synchronization across multiple instances, so if you e.g. run many small instances or run the service as a Lambda, I'd be surprised if it worked the way you expected.
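For what it's worth, here's a minimal sketch of the kind of per-instance state I mean (the limiter, names and numbers are mine, not from the article). Each process keeps its own counters, so two instances behind a load balancer will each happily let the same client through:

```python
import time
from collections import defaultdict

# Lives in process memory: every server instance (or warm Lambda
# container) gets its own copy, and nothing keeps the copies in sync.
_hits: dict[str, list[float]] = defaultdict(list)

def allow(client_id: str, limit: int = 100, window: float = 60.0) -> bool:
    """Sliding-window limiter that is only valid within this one process."""
    now = time.time()
    recent = [t for t in _hits[client_id] if now - t < window]
    if len(recent) >= limit:
        _hits[client_id] = recent
        return False
    recent.append(now)
    _hits[client_id] = recent
    return True
```

With N instances a client effectively gets N x the limit, and with Lambda the state can vanish entirely whenever the container is recycled.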
IMO Lambda is kind of an unfair example, because the author doesn't mention having multiple instances. Plus, a hot take of mine is that you should not be building an entire web app as a Lambda or a series of Lambda functions... AWS does not have solutions for load balancing in things like API Gateway (APIG), so you would have to architect that via DynamoDB or ElastiCache, which is the "extra layer or two of overhead" the author mentioned.
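If you do go that route, the "extra layer" looks roughly like this - a hypothetical sketch assuming an ElastiCache Redis endpoint and the redis-py client, not anything from the article:

```python
import os
import time

import redis  # redis-py, pointed at an ElastiCache (Redis) endpoint

# REDIS_HOST is a hypothetical env var naming your ElastiCache node.
r = redis.Redis(host=os.environ.get("REDIS_HOST", "localhost"), port=6379)

def allow(client_id: str, limit: int = 100, window: int = 60) -> bool:
    """Fixed-window limiter whose count lives in Redis, not in the process,
    so every Lambda invocation / instance sees the same numbers."""
    key = f"ratelimit:{client_id}:{int(time.time()) // window}"
    pipe = r.pipeline()
    pipe.incr(key)            # atomic increment shared by all instances
    pipe.expire(key, window)  # let old windows age out
    count, _ = pipe.execute()
    return int(count) <= limit
```

DynamoDB with an ADD update expression gets you the same shape, just with different latency and pricing trade-offs.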
I tried Aider recently to modify a quite small Python + HTML project, and it consistently got "uv" commands wrong and ended up changing my entire build system because it didn't think the thing I wanted to do was supported by the current one (it was).
They're very effective at making changes for the most part, but boy you need to keep them on a leash if you care about what those changes are.
I think it depends on how thorny the thing you need to debug is: race conditions, intermittent bugs that crash the process without leaving a trace, etc. Debugging is much more than using a debugger.
Actually in recent years, there are many people who argue exactly that.
Their claim being that taking medication to suppress testosterone and boost estrogen, as well as having various cosmetic surgeries (castration, inversion of penis, bone/cartilage/soft tissue reshaping of facial features), gives these males a "female body".
Some of these males even claim to no longer be trans as a result of these surgical and pharmaceutical interventions, referring to themselves as "cis women".
I don't really think we have. There are many open source projects that duplicate work for a variety of reasons. That's perfectly fine for stuff you want to do, but it's probably not an optimal allocation of resources.
These graphs only include stable releases, not nightly ones. Since the multi-threaded frontend hasn't landed on stable yet, we can't see its effect on these graphs.
Genuine question: how much can the frontend really impact compile times at the end of the day? I would guess most of the time spent compiling is in the backend, but IANA compiler engineer.
- Serial took 10.2 seconds in the frontend and 6.2 seconds in the backend.
- Parallel took 5.9 seconds (-42%) and 5.3 seconds (-14.5%).
So to answer your question, parallelising the frontend can have a substantial impact.
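Just to put those numbers together (assuming the frontend and backend phases run roughly back to back, which is my assumption, not something stated above):

```python
# Figures quoted above, in seconds
serial_frontend, serial_backend = 10.2, 6.2
parallel_frontend, parallel_backend = 5.9, 5.3

def pct_drop(before: float, after: float) -> float:
    return 100 * (before - after) / before

print(f"frontend: -{pct_drop(serial_frontend, parallel_frontend):.1f}%")  # ~42.2%
print(f"backend:  -{pct_drop(serial_backend, parallel_backend):.1f}%")    # ~14.5%
total_before = serial_frontend + serial_backend
total_after = parallel_frontend + parallel_backend
print(f"total:    -{pct_drop(total_before, total_after):.1f}%")           # ~31.7%
```

So on that crate it's roughly a third off the total compile time, which is far from negligible.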
You are right that the frontend is only a subset of the total compilation time - backend and linking time matter too. Happily, those are being worked on as well!
One obvious example of this would be C++, where a smarter frontend that doesn't do textual inclusion of a million lines would significantly improve compile times.
But that's a low-hanging-fruit "optimization", no? Once you get around it - and it has been that way forever - the bottleneck is in backend code generation. So if Rust has already solved this, the area where they can improve the most is the backend?
Most C++ builds I have worked with - all of them multi-million-LoC codebases - were actually bottlenecked by high memory pressure, in part due to heavy template use and symbol visibility.
I think the history of C++ implementations shows that it's not low-hanging fruit: it's a huge effort to implement, and the payoffs aren't game-changing.