I mean, sure, and if you have users running VAX or the Hurd, that matters. But it turns out that most of us use one of Linux, NT or OS X. And even if you add BSD and Solaris (and a few other Unixes) you can still find languages without C's known problems that cover 100% of users. "But embedded." Embedded can maintain their own software, they do all the time. How long are we going to insist that end users run software that cannot be secure because of the lowest common denominator of programming languages?
I think this is a flawed mindset for a number of reasons.
First, I'd rather appeal to every user than most users. That one user I didn't have to appeal to is going to be a much more faithful and grateful user than the "normal" ones. Most of my software work is open source (remember this context is a discussion about curl), and this encourages active collaboration with users with niche situations. If I choose technologies that make using my software attainable for these people, odds are they aren't going to stop at just porting it to their platform.
Limiting your platforms to Linux, OS X, and NT also stifles innovation. These platforms are all deeply flawed. Their popularity isn't due to having the best design, but rather to having a good enough design and being entrenched. They're old platforms, and we've learned a lot since they were started. New or niche platforms bring a lot of value to the table. The BSDs are a great example: they're the best-suited platforms for a wide variety of applications.
All a new platform has to do to run nearly all general-purpose software is port a C compiler. Not even that - it just needs a cross compiler. This is a great thing, IMO.
>Embedded can maintain their own software, they do all the time
This is a pretty silly argument. Most embedded developers don't ship their own implementation of HTTP, they ship curl!
> Their popularity isn't due to having the best design, but rather to having a good enough design and being entrenched. They're old platforms, we've learned a lot since they were started.
I think one could say the same thing about C's popularity as a language.
C was well-designed for its time, but "extremely" well-designed is a stretch given the much better designs that came immediately before it (ALGOL 60 and 68, Pascal, Scheme) or after (Ada, Modula, ML). C was optimized to be fast to implement (and won out for that reason: "worse is better," plus UNIX being the first usable OS written in a high-level language), not for best practices in safety or even performance as understood at the time.
I'd really disagree. All those languages are both safer and more expressive (if more verbose, in the case of those with Pascal-like syntax) than any version of C, and, except for ML, Scheme, and ALGOL 68 with its optional garbage collection, there's no reason they couldn't be as fast as or faster than C. Their main fault was simply being too far ahead of their time: too difficult or impossible to implement well on a PDP-11.
(I deleted the part about FORTRAN 77; seems I was confusing it with F90, which is the version that first allowed identifiers longer than 6 characters, dynamic memory allocation and user-defined types).
There are cases where you need to be close to the hardware -- the kernel, graphics drivers, low-latency graphics and audio. Why does using a URL to retrieve a file over the network require being close to the hardware?
- Var parameters instead of pointers for out parameters
- Real modules with type encapsulation
- Type safe function pointers
- Language support for concurrency
- Open arrays for variable length parameters
- Exceptions
All of this available in 1978.
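
For contrast, here's a quick C sketch of my own (not from the thread, names hypothetical) showing how two of those items look without language support: out parameters are bare pointers the compiler can't check, and a cast makes any function-pointer mismatch compile.

    #include <stdio.h>

    /* Out parameter in C: a bare pointer. The compiler can't stop a
     * caller from passing NULL or an unrelated address. */
    void divide(int a, int b, int *quot) {
        if (quot != NULL)
            *quot = a / b;
    }

    int takes_one_int(int x) { return x + 1; }

    int main(void) {
        int q;
        divide(10, 3, &q);
        printf("q = %d\n", q);

        /* A cast silences the type system entirely; actually calling
         * through the mismatched pointer is undefined behavior. */
        int (*fp)(int, int) = (int (*)(int, int))takes_one_int;
        (void)fp;   /* invoking fp(1, 2) would be UB */
        return 0;
    }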
Nothing exceptional, by any means: Niklaus Wirth took his inspiration from the programming language Mesa, which Xerox PARC used to create the Pilot OS and the Star workstation, as Xerox wanted to move away from BCPL in 1977.
Also, many of these features were already available in ALGOL.
I find it rather implausible that we've managed to learn a lot of new things about operating system design and nothing about language design, when the last major new operating system design to see significant adoption was probably NT in 1993, and we've had boatloads of new languages see adoption since then. Talent and effort go where the rewards are, and if designing new languages is more productive than developing new operating systems, that's where most of the energy will go. The inverse of Sturgeon's Law is that 20% of everything isn't crap, and the more of something you have, the larger that 20% is.
The main difference is that operating systems are complicated and programming languages (are supposed to be) simple. The biggest strength of C is its simplicity - there's not much that can go wrong with such a small feature set. I find Go to be pretty strong for similar reasons.
C is not simple. It is incredibly complex, because the standard specifies all operations in terms of an abstract machine and offers absolutely no guidance on what happens when code steps outside a small set of defined behaviors. Because undefined behavior is so easy to trigger, essentially all large production C code relies on undefined behavior.
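
To illustrate how little it takes (my sketch, not from the thread), each of these innocuous-looking lines is undefined behavior under the standard:

    #include <limits.h>

    void examples(void) {
        int i = INT_MAX;
        i = i + 1;          /* signed integer overflow: undefined behavior */

        int x = 1, n = 32;
        x = x << n;         /* shifting by >= the width of int (usually 32):
                               undefined behavior */

        int a[4] = {0, 1, 2, 3};
        int *p = a + 4;     /* forming a one-past-the-end pointer is legal... */
        /* ...but *p (reading past the array) would be undefined behavior */

        (void)i; (void)x; (void)p;
    }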
It depends on whether you actually want to know precisely what your code does. For me, that's essential to writing reliable software.
In my view, one of the reasons why we consistently fail to produce reliable software is that we continue to use a language from the 1970s that makes it very hard to determine what the meaning of a program is.
"Classic" C was actually simpler and safer than modern C.
Before optimizers, C was a WYSIWYG language. Yes, you could shoot yourself in the foot (gets), but you knew what was happening where, and could manually check everything.
Modern C with language lawyering can "optimize out" your safety checks, leading to exploits.
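
A sketch of the kind of check meant here (my example, with assumed function names): the wraparound test itself relies on signed overflow, which is undefined, so an optimizing compiler is entitled to delete it.

    #include <limits.h>

    /* Intended: reject a length that would wrap past the end. Because
     * signed overflow is undefined behavior, the compiler may assume
     * `len + 100` never wraps, conclude the condition is always false,
     * and remove the check; optimizing compilers commonly do at -O2. */
    int check(int len) {
        if (len + 100 < len)    /* "impossible" per the standard */
            return -1;          /* may be optimized away */
        return 0;
    }

    /* Well-defined version: compare against the limit instead. */
    int check_fixed(int len) {
        if (len > INT_MAX - 100)
            return -1;
        return 0;
    }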
Yet those optimization passes have been essential for keeping C alive. Optimizing C well depends on exploiting undefined behavior. And if not for optimizing compilers, I think C would have been replaced a long time ago.
For example, when everything can legally alias everything else (as in the case of "classic" C), it's hard for a compiler to prove anything about the contents of memory. This prevents a lot of seemingly obvious optimizations. The problem with C is that you need to violate many programmers' assumptions about how the language operates in order to make it fast.
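
A concrete sketch of the aliasing point (mine, not the commenter's, names hypothetical), using C99's restrict as the contrast:

    /* Without aliasing guarantees, the compiler must assume the store
     * through `b` may have changed `*a`, so it reloads *a for the
     * second statement instead of reusing a cached value. */
    void bump(int *a, int *b) {
        *b = *a + 1;
        *b = *a + 1;   /* *a must be re-read: b might point at a */
    }

    /* `restrict` promises no aliasing, letting the compiler read *a
     * once and treat the first store as dead. */
    void bump_restrict(int *restrict a, int *restrict b) {
        *b = *a + 1;
        *b = *a + 1;   /* *a can be kept in a register */
    }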
It's not a coincidence that a lot of C/C++ compiler developers have ended up moving on to other languages.
The long lists other users have posted in this thread, of bugs in curl that wouldn't have been possible in another language, suggest that in fact there's a lot that can go wrong with C.
Most of libcurl's users are in the embedded space, where they might not even be running an OS at all. So portability does still have to be a primary concern.