Some interesting trivia: Netflix's cofounder Marc Randolph spent time at Borland.
RAD Studio is about the closest experience to VB6, where you simply drop controls on a form and can easily wire them up... with a much better language.
The tools were/are too expensive, and Microsoft pile-drove them from early dominance to niche by undercutting them handily; it's been extractive rather than growth-oriented since. There is the Lazarus/FreePascal project, which offers an alternative.
Microsoft did a lot of bad things over the years, but Borland drove themselves over the cliff on their own. Instead of focusing on developer tools, they wanted to reinvent (and rename) Borland every few years in the 90s.
Bad management, bad decisions, bad products (Delphi 7 was the peak). MS had nothing to do with that. And I'm sure Anders made the right move in abandoning the sinking ship.
I'm still pissed at Borland for all those bad moves.
It’s very difficult to make money in developer tools. Microsoft could easily squeeze Borland by simply making MSDN tools free. Borland tried to diversify with databases, word processors, and spreadsheets, but Microsoft countered with Office, tying them all together, and it became the default in every single business. Borland had great technology and was super innovative, and I used Turbo C++ and TASM for years. But in the end, they just couldn’t find a cash-cow market to keep them afloat.
> It’s very difficult to make money in developer tools.
Just to be clear: we are talking about the 90s here. Everybody was charging for developer tools. MSDN was not free, far from it. From today's viewpoint, where every compiler imaginable is free and the tools are better than ever (except there is nothing like Delphi and VCL), the 90s were a heaven for tool makers.
Correct, but Borland didn’t die in the 1990s. That was its heyday. As I said, I used Turbo C++ during that period and I spent good money on it. But the tools commoditized and Microsoft eventually made MSDN basically free in the 2000s (there might have been some nominal charge, but it was low). And that was when Borland eventually got acquired, in 2009.
Borland decided that they should target management instead of developers as the focal point of product development. They ignored the Web for Delphi and decided that the web development front would be covered by JBuilder, a paid and slow-evolving product that could not compete against the fast-iterating and free Eclipse.
ST6 is under the command of SOCOM, and they do have the authority to do clandestine work. To the degree that this article reflects any reality, it's also plausible that a partner in SOCOM (TFO/ISA) or an agency would be along for the ride to do anything specialized.
If you are this emotionally invested in a job without having done it for some time, this is an accidental or insightful act of compassion from an amorphous over-funded company.
I was a big fan of Python between 10 and 15 years ago for similar reasons: it felt "cleaner" than other scripting languages while also having rich standard and extended libraries.
With no real recent experience (I fell deep down the hole into C/kernel etc.), I wouldn't have any authority to judge how it's adapted over time. But the most common complaint I've observed at companies of all sizes and levels of sophistication is "the deployment story is a disaster". But it also seems like uv lets people do the whole "I want this specific version of such and such and I don't want $OS to know about any of this" thing well?
Re math/AI, it's an interesting comment, because a language is one part the syntax/stuff you receive and one part the community that tells you to do things a certain way. I'd guess that Python has become such a big tent that it is a little hard to enforce norms that other languages seem to have. This somewhat reminds me of Bjarne Stroustrup discussing scaling/evolving a language.
I think a lot of it is that things have shifted away from the raw language. Less and less are you dealing with Python itself, and more with an assortment of libraries or frameworks: Pandas, numpy, torch, fastapi, …, and a dozen others.
Packaging has been a nightmare. PyPI has had its challenges. Dependency management is vastly improved thanks to uv - recently, and with a graveyard of tools in its wake.
The modern Python script feels more like loosely combining a temperamental set of today’s latest library APIs and then dealing with the fallout. It sometimes parallels the Node experience.
I think an actual Python project - using only something remotely modern like 3.2+ standard library and maybe requests - is probably just as clean, resilient, and reliable as it ever was.
A lot of these things are and/or have been improving tremendously. But I think, to your point, the language (or really the ecosystem) is scaling and evolving a ton and there are growing pains.
I can see that. A little while ago I was working at a startup and we had a node.js thing that was really crucial to the business that did some gray hat browser automation stuff to scrape TikTok (the users opted into it, but TikTok itself was less permissive). For some reason a person wanted to move part of it to Python to orthogonally solve some other actual problem. They passed me the code and Pandas was there to effectively do an HTTP request and parse JSON and I thought to myself "woah, I'm not in Kansas anymore" -- I ended up not having to worry about it because I ported the idea back to the more mature node system and that turned out to be viable over time.
Libraries can overtake aspects of a language for better and worse. Ruby seemed really tied to Rails and that was great for it as an example.
Ha, Pandas just to parse a website is a bit extra, I’d say. But yeah, it’s weird that you need libraries and API endpoints to do basic tasks these days.
It feels like something broke around 2015-ish. Going back, you could make a whole app and GUI with Basic. You could make whole websites simply with HTML+PHP, sometimes using nothing but Notepad. You could make portable apps in Java with no libraries - even Swing or whatever was built in.
Now…? Electron, a few languages, a few frameworks, and a few dozen libraries. Just to start.
It may well have been done for the benefit of experts too - a hot war is true chaos. Shock, fatigue, sleep deprivation, being under fire and adrenaline dump. Militaries are very good at forgetting the lessons learned in the last war while applying the prior status quo to a novel situation.
I agree wholeheartedly with your main point, but I'd say militaries are more likely to overreact to the last war. "Generals are always fighting the last war" is a quote I heard long ago and it seems to match up. In Desert Storm it was clear that the lessons of not getting entangled a la Vietnam were in mind. When I went into Bosnia it was clear Somalia was in mind, as our RoE basically said that if we were being fired upon by someone using civilians as a shield, we should "aim carefully". And I keep hearing mention that the US is still trying to shift away from GWOT even though it's been 10 years since that started slowing down.
This is a really interesting and insightful comment! I'd add WWI/WWII to your examples: the French were so used to entrenched warfare that they poured all their efforts into the Maginot Line, because fortifications and prepared positions were the thing to do... only to be trounced by the mobility and swiftness of blitzkrieg.
Also, imo, the USA and the UN let the Rwandan genocide happen because they were gun-shy after Black Hawk Down in Mogadishu; everyone was so reluctant to commit forces, even though even limited intervention early on could have stopped hundreds of thousands of atrocities. The overreaction to Somalia paralyzed effective action anywhere else.
> they poured all their efforts into the Maginot Line because fortifications and prepared positions were the thing to do... only to be trounced with the mobility and swiftness of blitzkrieg.
My position might be revisionist on that matter, but I think the Maginot Line worked as planned: the goal was to force the Germans to go around (mostly through the Low Countries), reducing the frontage where they'd have to fight. The Germans did exactly that, but also pushed through the Ardennes, which the Allied planners hadn't thought possible.
Netflix, at least the Open Connect org, was still open-ended adjacent to whatever NTech provided (your issued laptop and remote-working stuff). It was very easy to get "exotic" hardware, and I really don't think anyone abused it. This is an existence proof for the parent comments: it's not a startup, and I don't see engineers screwing the wheels off the bus anywhere I've ever worked.
Then you don't understand the memory and protection model of a modern system very well.
sendfile effectively turns your user-space file server into a control plane and moves the data plane to where the data is, eliminating copies between address spaces. This can be made congruent with I/O completions (i.e. Ethernet+IP and block) and made asynchronous, so the entire thing is pumping data between completion events. Watch the Netflix video the author links in the post.
There is an inverted approach where you move all of this into a single user address space, i.e. DPDK, but it's the same overall concept, just a different "who".
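Not from the article, just a minimal sketch of the control-plane/data-plane split using Linux's sendfile(2); the name serve_file is illustrative and error handling is abbreviated:

    /* The process only orchestrates: open the file, then ask the kernel to
     * move file pages straight to the socket, with no copy through user
     * space. Real code would handle EAGAIN/EINTR and partial sends. */
    #include <sys/sendfile.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <unistd.h>

    static int serve_file(int client_fd, const char *path)
    {
        int fd = open(path, O_RDONLY);
        if (fd < 0)
            return -1;

        struct stat st;
        if (fstat(fd, &st) < 0) {
            close(fd);
            return -1;
        }

        off_t offset = 0;
        while (offset < st.st_size) {
            ssize_t sent = sendfile(client_fd, fd, &offset, st.st_size - offset);
            if (sent <= 0)
                break;
        }

        close(fd);
        return offset == st.st_size ? 0 : -1;
    }

(FreeBSD's sendfile, which the Netflix work builds on, has a different signature and can be driven asynchronously, which is what enables the fully event-driven pump described in the talk.)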
FWIW, the Rust advice is maybe 15% of the article, at the bottom; most of the decisions apply equally to C, and the article is a fairly sensible survey of APIs.
It wasn't just CGI; every HTTP session was commonly a forked copy of the entire server in the CERN and Apache lineage! Apache gradually had better answers, but its API and common addons made it a bit difficult to transition, so web servers like nginx took off, which were built closer to the architecture in the article, with event-driven I/O from the beginning.
> every HTTP session was commonly a forked copy of the entire server in the CERN and Apache lineage!
And there's nothing wrong with that for application workers. On *nix systems fork() is very fast: you can fork "the entire server" and the kernel will only COW your memory. As nginx etc. showed, you can get better raw file-serving performance with other models, but it's still a legitimate technique for application logic, where business logic will drown out any process overhead.
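To make the shape concrete, a stripped-down sketch of the fork-per-connection model; handle_request is a hypothetical placeholder for the application logic:

    /* Classic fork-per-connection: the parent only accepts; each child is a
     * copy-on-write view of the whole server and runs the business logic. */
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static void handle_request(int fd) { (void)fd; /* application logic here */ }

    void serve(int listen_fd)
    {
        for (;;) {
            int client = accept(listen_fd, NULL, NULL);
            if (client < 0)
                continue;

            pid_t pid = fork();
            if (pid == 0) {               /* child: COW copy of the server */
                close(listen_fd);
                handle_request(client);
                close(client);
                _exit(0);
            }
            close(client);                /* parent keeps only the listener */
            while (waitpid(-1, NULL, WNOHANG) > 0)
                ;                         /* reap finished children */
        }
    }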
Forking for anything other than calling exec is still a horrible idea (with special exceptions like shells). Forking is a very unsafe operation (you can easily share locks and files with the child process unless both your code and every library you use are very careful - for example, it's easy to get into malloc deadlocks with forked processes), and its performance depends a lot on how you actually use it.
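To illustrate the lock problem: a lock held by another thread at fork() time is copied into the child in the locked state with no owner left to release it. The usual band-aid is pthread_atfork(), where a library takes its own lock before fork and releases it in both parent and child. A sketch, with lib_lock and lib_init as made-up names:

    /* pthread_atfork() band-aid: works only for locks you control; it does
     * nothing for malloc's internal locks or locks in third-party code,
     * which is why "fork only to exec" is still the safe default. */
    #include <pthread.h>

    static pthread_mutex_t lib_lock = PTHREAD_MUTEX_INITIALIZER;

    static void lock_before_fork(void)  { pthread_mutex_lock(&lib_lock); }
    static void unlock_after_fork(void) { pthread_mutex_unlock(&lib_lock); }

    void lib_init(void)
    {
        /* The prepare handler runs in the forking thread before fork();
         * the other two run after fork, in parent and child respectively. */
        pthread_atfork(lock_before_fork, unlock_after_fork, unlock_after_fork);
    }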
I think it's not quite that bad (and I know that this has been litigated to death all over the programmer internet).
If you are forking from a language/ecosystem that is extremely thread-friendly (e.g. Go, Java, Erlang), fork is more risky. This is because such runtimes mean a high likelihood of there being threads doing fork-unsafe things at the moment of fork().
If you are forking from a language/ecosystem that is thread-unfriendly, fork is less risky. That isn't to say "it's always safe/low risk to run fork() in e.g. Python, Ruby, Perl", but in those contexts it's easier to prove/test invariants like "there are no threads running/so-and-so lock is not held at the point in my program when I fork", at which point the risks of fork(2) are much reduced.
To be clear, "reduced" is not the same as "gone"! You still have to reason about explicitly taken locks in the forking thread, file descriptors, signal handlers, and unexpected memory growth due to CoW/GC interactions. But that's a lot more tractable than the Java situation of "it's tricky to predict how many Java threads are active when I want to fork, and even trickier to know if there are any JNI/FFI-library-created raw pthreads running, the GC might be threaded, and checking for each of those things is still racy with my call to fork(2)".
You still have to make sure that the fork-safety invariants are true. But the effort to do that is very different depending on the language platform.
Rust/C/C++ don't cleanly fit into either of those two (already mushy/subjective) categorizations, though. Whether forking is feasible in a given Rust/C/C++ codebase depends on what the code does and requires a tricky set of judgement calls and at-a-distance knowledge going forward to make sure that the codebase doesn't become fork-unsafe in harmful ways.
So long as you have something like nginx in front of your server. Otherwise your whole site can be taken down by a slowloris attack over a 33.6k modem.
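A sketch of the kind of per-connection read timeout that blunts (but doesn't really solve) this in a process-per-connection server; the helper name and the numbers are arbitrary:

    /* With SO_RCVTIMEO armed, a client dribbling one byte every few minutes
     * makes recv() fail with EAGAIN/EWOULDBLOCK instead of pinning a worker
     * process forever. You still need a cap on concurrent workers, which is
     * why a front proxy like nginx remains the usual answer. */
    #include <sys/socket.h>
    #include <sys/time.h>

    static int arm_read_timeout(int client_fd, int seconds)
    {
        struct timeval tv = { .tv_sec = seconds, .tv_usec = 0 };
        return setsockopt(client_fd, SOL_SOCKET, SO_RCVTIMEO, &tv, sizeof tv);
    }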
That's because the Unix API used to assume fork() is extremely cheap. Threads were ugly performance-hack second-class citizens - still are in some ways. This was indeed true on the PDP-11 (just copy a <64KB disk file!), but as address spaces grew, it became prohibitively expensive to copy page tables, so programmers turned to multithreading. And then multicore CPUs became the norm, and multithreading on multicore CPUs meant any kind of copy-on-write required TLB shootdown, making fork() even more expensive. VMS (and its clone known as Windows NT) did it right from the start: processes are just resource containers, the units of execution are threads, and all I/O is async. But being technically superior doesn't outweigh the disadvantage of being proprietary.
It's also a pretty bold scheduler benchmark to be handling tens of thousands of processes or 1:1 thread wakeups, especially the further back in time you go, considering fairness issues. And then that's running at the wrong latency granularity for fast I/O completion events across that many nodes, so it's going to run like a screen door on a submarine without a lot of rethinking things.
Evented I/O works out pretty well in practice for the I and D cache, especially if you can affine and allocate things as the article states, and do similar natural alignments inside the kernel (i.e. RSS/consistent hashing).
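A rough sketch of what that looks like per worker on Linux (handle_event is a placeholder, and registering sockets with epoll_ctl is omitted):

    /* One event-loop worker pinned to a core: pinning keeps connection state
     * and code hot in that core's caches, and lines up with NIC RSS steering
     * a given flow's completions to the same core. */
    #define _GNU_SOURCE
    #include <pthread.h>
    #include <sched.h>
    #include <sys/epoll.h>

    #define MAX_EVENTS 64

    static void handle_event(struct epoll_event *ev) { (void)ev; /* app logic */ }

    void *worker(void *arg)
    {
        int cpu = *(int *)arg;

        cpu_set_t set;                    /* pin this thread to one CPU */
        CPU_ZERO(&set);
        CPU_SET(cpu, &set);
        pthread_setaffinity_np(pthread_self(), sizeof set, &set);

        int ep = epoll_create1(0);        /* sockets get added via epoll_ctl() */
        struct epoll_event events[MAX_EVENTS];

        for (;;) {
            int n = epoll_wait(ep, events, MAX_EVENTS, -1);
            for (int i = 0; i < n; i++)
                handle_event(&events[i]);
        }
        return NULL;
    }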
To nitpick: at least as of Apache httpd 1.3, ages ago, it wasn't forking for every request, but had a pool of already-forked worker processes, each handling one connection at a time but able to handle an unlimited number of connections sequentially, and it could spawn or kill worker processes depending on load.
The same model is possible in Apache httpd 2.x with the "prefork" mpm.
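Stripped down, that prefork model is roughly this shape (handle_connection is a placeholder; Apache layers its scoreboard, MaxClients, and spare-server tuning on top):

    /* Pre-forked pool: the parent forks N workers up front, all sharing the
     * inherited listening socket; each worker handles one connection at a
     * time, sequentially, forever. */
    #include <sys/socket.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static void handle_connection(int fd) { (void)fd; /* per-request logic */ }

    static void worker_loop(int listen_fd)
    {
        for (;;) {
            int client = accept(listen_fd, NULL, NULL);  /* kernel spreads accepts */
            if (client < 0)
                continue;
            handle_connection(client);
            close(client);
        }
    }

    void prefork(int listen_fd, int nworkers)
    {
        for (int i = 0; i < nworkers; i++)
            if (fork() == 0)
                worker_loop(listen_fd);   /* child never returns */

        for (;;)
            wait(NULL);                   /* parent supervises/reaps */
    }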