We need less code written in C, not more. We already have a problem with a massive, unreliable, insecure ecosystem of legacy C code that is hard to escape; writing more software in C makes that problem worse.
Most software is written to solve high-level problems. Using a high-level language is sensible, time-saving, budget-saving, improves portability, and saves on headaches later. The same rule that applies to COBOL should apply to C: only use it when you have to deal with an existing legacy codebase (and only if rewriting that codebase with a better language is not possible).
> We already have a problem with a massive, unreliable, insecure ecosystem of legacy C code
A ton of the world's software with the highest reliability requirements is written in C and C++.
Nuclear power plants? Yes. [1]
Joint Strike Fighter? Yes. [2]
Mars rover? Yes. [3]
Your Tesla? Yes. [4]
US telephone systems. Stock exchanges. Bloomberg. Your cell phone OS (incl. many years before smartphones). The list goes on and on.
C and C++ are the most general and well-supported languages we have. They can solve problems on any architecture with a good tradeoff between efficiency and generality (over assembly) and with predictable performance. Reliability and security requirements are up to the developer to impose on any language (see: the GitHub PHP SQL injection search). C is not, in itself, an obstacle to these goals, and those who build upon C are always improving it [5].
"Reliability and security requirements are up to the developer to impose on any language"
This is a common fallback in these discussions, but it is misguided. Yes, you have to work to make software secure in any language; no, this does not mean C is equivalent. In C, you still need to worry about high-level problems like SQL injection, while simultaneously having to worry about low-level problems like integer overflows, dangling pointers, etc.
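To make the "low-level problems" concrete, here is a minimal, contrived C sketch (details vary by compiler and platform) of two bug classes that simply do not exist in most higher-level languages:

    #include <limits.h>
    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        /* Signed overflow is undefined behaviour: this "overflow check"
           may legally be optimized away, because the compiler is allowed
           to assume len + 16 never wraps. */
        int len = INT_MAX - 5;
        if (len + 16 < len)
            return 1;

        /* Dangling pointer: p still points at freed memory, and nothing
           in the language stops the read. */
        int *p = malloc(sizeof *p);
        if (!p) return 1;
        *p = 42;
        free(p);
        printf("%d\n", *p);   /* use-after-free */
        return 0;
    }

In most higher-level languages the first is defined behaviour (it wraps or throws) and the second is impossible by construction.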
That critical systems have been written in C or C++ is not relevant -- it does not mean that it was a good idea to do so.
Well, I actually see that as an advantage vs. placing trust in an abstraction you have no control over that can provide security problems for you. So I guess we'll agree to disagree on this one.
Curious though, in what would you implement the examples I gave?
For the first two (nuclear power and JSF) and the car I would probably use a language like SML or Haskell, or perhaps even a language like Coq (where your programs are mathematical proofs of their own correctness). For a Mars rover, probably a core written in the languages above, and then bootstrap Common Lisp on top of that core (the core would control critical things that will not need much room to change, Lisp for various AI related things etc.).
Of course, I am not an expert on the requirements of all those projects. Maybe some fact about functional languages makes them entirely unsuitable. If there is a technical reason why C is being used, I would like to know it; if the reason is just, "Well that's what the libraries are written in, that is what most people know, there was a bunch of legacy code, etc." then it only proves that technical reasons are easily trumped by practical concerns. Like I said earlier, it is hard to get away from the large C and C++ codebase that we have to deal with today, despite all the problems of those languages and despite the widespread availability of better languages.
They took the code from the Ariane 4 and stuck it on the Ariane 5 assuming all would go fine.
While that is a famous incident attributed to Ada, I can safely say that C and other popular languages have done far more damage. Maybe not in terms of exploding rockets (I don't know), but certainly in time lost debugging, fiddling with checks that other languages would handle for you, and so forth.
I like to tell people that with Ada I can spend my time adding value rather than fighting against a language's minimalism and the bugs that come with it.
It's a very personal decision. :) You're not going to be popular on HN with a language like Ada. You're also probably not going to find a company looking for Ada developers, though I, myself, do Ada work for pay.
The Ada way of doing things really is a whole other world with completely different goals in mind than what fanboys and fangirls (I'm using the term dismissively, but there are real problems with new languages) will tell you about the new language they're using that was thrown together last weekend. Ada, of course, is far from perfect, but the blemishes like unbounded string syntax (I hope to see that fixed next revision! Ada just had a new revision!) are not enough to put me off from all the safety and checking the language does for me.
I must tell you that it was the winner of a DoD contest to create a kind of perfect language. There was a document called the Steelman that outlined everything the DoD's perfectly readable, maintainable, reliable, safe language would look like. Three of the four entries were Pascal-derived, not C!
If there is one thing Ada will do for you, if you let it, it is demolish your ego. Every stupid thing you do will be caught by the compiler and you will be scorned. The compiler will piss and moan and catch all sorts of fantastic errors that you'd have to pop into a debugger for in C. Some folks have this notion that they are infallible, but "to err is human", right? You're welcome to turn off the checks in specific instances or project-wide (if you're nuts), but this isn't me. I'd love to think it is -- and I write pretty great Ada these days -- but I still don't have the gall to turn off compiler checks.
Not everything is a pointer in Ada, and we can nest functions. Hell, we nest packages and do all sorts of things that just don't translate to Java, C++ or C, but now I'm just rambling.
I think Ada is a great choice for anything more than a shell script. Learning it introduces you more to a software engineering mindset because the language is built for software engineering, not hacking something together -- though, in time, you will be capable of that too. It's not the same change of mindset as switching from procedural to functional, but it may be just as enlightening.
Well, since I'm not an American nor living in the US, DoD jobs are out of the question.
Secondly, I am unable to grok functional programming, which is why I quit trying to learn Haskell. I tried, I really tried, but there's no way I can wrap my head around functional programming. In my mind, a program is a long list of two-digit hex numbers that the computer iterates through as fast as possible. So if Ada is a functional-only language, that puts the brakes on right there. If not, I'd be more than happy to start learning.
Ada ain't functional. A soft introduction is John English's Craft of Object-oriented programming. It's online, free, very intuitive and very well written. I usually tell folks to skip over the basics if they know them and work through the first half. You're welcome to swing by #ada on Freenode for exceptionally friendly Ada talk. See what you think! I hope to see you there. :)
Once I read a story about a hardcore C developer that was forced to work on an Ada project.
In the beginning she complained a lot about the language's verbosity, but over the time she spent on the project she eventually became an Ada advocate! :)
That's a great anecdote. All too often, when people are forced to do something, they hate it no matter what, whether that's deserved or not. I'm guilty of this a bit myself. Ada had a DoD mandate in the 90s which really riled a lot of folks and put them off the language for good. It's part of the reason Ada has not had more of an impact in our lives. That, and compilers for Ada were much more expensive then as well. For many, many years though we've had GNAT, a free software project and GCC frontend. We are truly lucky to have it developed by its corporate steward, which is funded by dual-licensing the compiler for FLOSS and by commercial support contracts.
No! Signed integer overflow in C/C++ is undefined. In other words, the language specifications permit implementations to do anything whatsoever[1]. This is emphatically not the case with either Java or C#. As an example, per the spec, with respect to C# integer multiplication,
In a checked context, if the product is outside the range of the result type, a System.OverflowException is thrown. In an unchecked context, overflows are not reported and any significant high-order bits outside the range of the result type are discarded.
In other words, while integer overflow isn't necessarily checked, unlike C/C++, C# isn't allowed to corrupt memory, violate runtime security guarantees, format your hard drive, threaten the President, or launch a video game[2] on integer overflow.
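To see what "anything whatsoever" can look like in practice, consider this contrived C sketch (the exact behaviour depends on the compiler and optimization level):

    #include <limits.h>
    #include <stdio.h>

    /* Because signed overflow is undefined, a compiler may assume it
       never happens and fold this whole function down to "return 1". */
    int still_bigger(int x)
    {
        return x + 1 > x;
    }

    int main(void)
    {
        /* With optimizations on, this often prints 1 even for INT_MAX. */
        printf("%d\n", still_bigger(INT_MAX));
        return 0;
    }

Per the spec quoted above, the equivalent C# would either throw (checked) or wrap predictably (unchecked); it could never have the comparison silently reasoned out of existence.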
Several of your examples are hard real-time systems. Garbage collectors, schedulers, etc. are a no-go in such environments, so you won't see much Python or JavaScript there, but that doesn't discount the value of those languages. Note that Ada, a high level language, is used in similar systems, and that Lisp has been used in NASA satellites. High-level languages like Erlang are used in telecom systems and other critical network applications.
The fact is, languages created in the early history of computer programming have had the opportunity to be used in high profile projects like the ones you brought up. In time, you will undoubtedly see Python and others used in such contexts.
The use of C and C-likes by the JPL rather than Lisp is a sad story; it means robotics has not advanced as fast as it could have. http://www.flownet.com/gat/jpl-lisp.html
Not really. What we need is people not learning C (or any language) without also learning how writing it wrong causes defects. In my book (mentioned in the post) I have students using Valgrind and attempting to break their code starting at the 4th lesson. I also show them how and why to avoid defects in C.
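For anyone who hasn't tried that workflow, here is roughly what such an exercise looks like (the file name and flags are arbitrary):

    /* overflow.c -- a deliberately broken program for Valgrind practice. */
    #include <stdlib.h>
    #include <string.h>

    int main(void)
    {
        char *buf = malloc(8);
        if (!buf) return 1;
        strcpy(buf, "123456789");   /* writes 10 bytes into an 8-byte buffer */
        free(buf);
        return 0;
    }

Compile with debug info (cc -g overflow.c -o overflow), run valgrind ./overflow, and Valgrind reports the invalid writes with a stack trace -- the kind of feedback that teaches people why the code is wrong, not just that it is.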
Instead of saying "no new C", we should probably be approaching it with a two-pronged attack. First, tracking down bad educational material and making sure new C coders learn how to make safer C. Second, fixing the older code so that it is safe by default, rather than only in the context of the code it's written in.
But, yeah pipe dreams and all, so I just work on improving the educational material.
Thanks Zed, I really can't upvote you enough! I absolutely love the style of your writing, and the "hard-way" approach to teaching. Keep up the great work :-)
Was there any study done on the (un)reliability of C? I know for a fact that practically every piece of software I use is programmed in either C or C++.
The sole exceptions are Anki and Gentoo's portage system, both Python. And I'm pretty sure the reason portage is so extremely slow is because it is in Python (I've checked, it's not I/O-bound). And Anki is some very unreliable software.
In fact, give me a single big desktop software project made with a language that is not C or C++.
"I know for a fact practically every piece software I use is programmed in either C or C++."
How confident are you that that software will work as expected? How much are you willing to bet?
The fact that a language is popular does not prove that the language is good, nor that it should be used, nor that it is not causing us problems. Just the other day on HN, there was an article about a massive number of vulnerabilities in X11 applications -- all resulting from problems that you have in C and C++ but not in higher-level languages. Entire classes of bugs that are common problems in C are just not an issue in other languages.
Sure, it is possible to write unreliable code in high-level languages. It is just a lot easier to write unreliable C code, and C programmers are much more likely to do so (even those with years of experience).
Of course, that just dismisses the criticism of the language. But to say that it "should [not] be used" ignores the current landscape of engineers, employers and problems.
That quote is always annoying, because it applies equally well to all criticisms of any programming language in active use, but not all criticisms are created equal. A useful heuristic would actually help distinguish between reasonable criticism and inevitable kvetching.
I'm not betting anything. I'm just saying that it seems to me the advantages of higher level languages do not outweigh the disadvantages. Most of the software I use was written relatively recently, which means other programming languages were available, yet C was chosen over them.
In fact, give me a single big desktop software project made with a language that is not C or C++.
I find your requirements odd and vague (why big? why desktop?), but speaking just for myself: Eclipse, jEdit, CyberDuck -- all written in Java.
I use plenty of applications written in C and C++ too, of course; but I think that's largely due to (1) inertia in the application development industry, and (2) it took a while for runtime environments like the JVM to perform well, so they got a bad reputation early on that's no longer really deserved -- but the reputation persists in the minds of many developers.
My parent's post said C was bad and software shouldn't be written in it. So I retort by talking about desktop software because this is an area where C and C++ rule supreme.
I specified big because big software projects carry more merit. A calculator written in your favourite language might be pretty handy, but it doesn't show your language can be used for real software. I use some Python scripts, but I don't see it in a lot of serious applications, apart from the dead slow portage and Anki, which is the most buggy software on my machine.
I don't think inertia is a real factor. Most software I use is brand spanking new. Google Chrome, to give an example, is barely five years old! Most of the other software I use is from the GNOME project, which has seen a major rewrite with GNOME 3 about two years ago, where they could have chosen to write it in a high-level language. Yet practically all of it is still in C, with a Javascript layer for the Shell (which I both love and loathe).
And speed is an interesting point. It might be due to old compilers/interpreters, bad programming habits or the age of the code, but all non-C/C++/C# software I have ever used was dead slow.
Google Chrome is based on WebKit, which was based on KHTML, which has been around since 1998. Moreover, Chrome depends on system and external libraries that provide C/C++ headers. Any viable alternative would need to work with C/C++ headers without needing large amounts of glue code, which isn't necessarily easy.
People still use C not because all of their code is so performance-sensitive that they can't deal with the overhead of bounds checks and garbage collection, but because there still aren't reasonably high-performance alternatives that can easily integrate with existing codebases. Go and Rust are two promising contenders in this regard, but the former has only recently achieved C-level performance, and the latter is still not ready for production use. With that said, Mozilla is writing a browser in Rust (Servo), which shows their aspirations.
WebKit might be C/C++, but that doesn't mean the browser itself needs to be. Anki, for example, is a Python program that works with the Qt toolkit, which is C++.
But point taken. I'm not saying we should use C; I'm eager for a future where other languages can be used for serious applications. I was just trying to show that C cannot possibly be that bad, as it is still, together with C++, choice number one for every desktop application (and if Objective-C counts, it also dominates the mobile market).
My point in all this has been that language popularity is nearly orthogonal to technical pros/cons. Languages become popular for non-technical reasons. The popularity of C, C++, Objective-C, and related languages has almost nothing to do with the technical features of those languages, and almost everything to do with the marketing of popular OSes: Unix, Windows, and iOS. If an OS written in ML had become dominant in the 80s or 90s, it is nearly certain that ML would be a popular language. The inertia created by this large ecosystem cannot be denied; it is part of the reason you keep seeing new software being written in C/C++.
C absolutely is that bad. It is poorly defined. It forces programmers to explicitly write out things that can and should be done automatically. There is no standard error-handling system, just conventions involving return values and global error flags; there is no error-recovery system at all. Something as seemingly simple as computing the average of two signed integers winds up being non-trivial. C++ is even worse, as not only does it inherit most of the problems with C, but it introduces an entirely new list of pointless problems. Debugging code written in these languages is needlessly difficult -- you are spending as much time on high-level problems (i.e. design problems) as you are on tracing pointers and figuring out where some uninitialized value became a problem. It is not unreasonable to estimate that the dominance of C and C++ carry the cost of billions of dollars spent dealing with the headaches caused by these languages' problems.
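To make the averaging example concrete, here is a minimal sketch (the function names are mine, and note that even the "fix" leans on an assumption):

    #include <limits.h>

    /* Naive: a + b is undefined behaviour when both arguments are large,
       e.g. a = b = INT_MAX. */
    int average_naive(int a, int b)
    {
        return (a + b) / 2;
    }

    /* One fix: do the addition in a wider type. This assumes long long is
       wider than int -- true on mainstream platforms, but not guaranteed
       by the standard, which is rather the point. */
    int average_wide(int a, int b)
    {
        return (int)(((long long)a + b) / 2);
    }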
It is hard to come up with a list of technical advantages to counter the above. C has few features, and those features are not very powerful. All that C really has going for it is that you can be "close to the machine," though it should be clear that other languages let you do this too (since other languages have been used to implement entire OSes). C++ has a few technical features that may be advantageous -- but they all compose poorly with each other, and their value is weighed down by all the baggage C++ carried from C (and indeed, most of the really bizarre bugs you can make in C++ stem from this baggage).
The non-technical reasons for C being popular vastly outweigh the technical deficiency of the language. The reason C/C++ is "choice number one" for desktop software is almost entirely a result of those non-technical reasons.
Nobody is doubting there's plenty of great software written in C. That doesn't mean we should choose C for all the new software we write, though. I'm sure one will find slow and unreliable code written in C if they look hard enough.
> And I'm pretty sure the reason portage is so extremely slow is because it is in Python (I've checked, it's not I/O-bound)
I was under the impression portage was "slow" (relative to other package management tools) due to the fact that it built everything from source (for which it uses make). Where is the Python bottleneck?
> In fact, give me a single big desktop software project made with a language that is not C or C++.
What constitutes "big"? There are a few large desktop projects that run on the JVM, for instance.
> I was under the impression portage was "slow" (relative to other package management tools) due to the fact that it built everything from source (for which it uses make). Where is the Python bottleneck?
I was talking about the overhead of making the list of packages to update/install. Pretending to install a simple package takes 7 seconds. Pretending to update takes 13 seconds. That's a very long time, because I might want to review the list of packages and make some changes, and every time I have to wait 13 seconds.
The compiling part is true, but that part runs unsupervised, so I don't have to sit and wait for it.
It's been a while since I used Gentoo, but most of the serious users consider build-from-source as a feature and don't penalize portage for that. I believe your parent is referring to the sometimes annoyingly long dependency computation time of something like emerge -uDN world.
A major part of Firefox (hundreds of KLOC) is written in JavaScript. While the layout and network code is all C++, most parts that aren't performance-critical and don't need to interact directly with the OS are JS. This includes all of the high-level UI code.
I'm not siding with or against you here, but each of your examples is of something originally being in a language other than C.
Your position isn't strengthened if you can only name systems that were written in something else before someone converted them to C. In fact, by design, that's kind of benefitting the opposing side. I'm just saying.
1) The original UI for Skype was written in Delphi. The core functionality is, and has always been, written in C/C++.
2) Mac OS is not a desktop application.
3) Adobe Photoshop is all C/C++ now. They converted it to C/C++ because they decided that was a better choice. It's tough to sell that as a case for a major desktop application not written in C/C++.
The parent poster asked for desktop software written in languages besides C or C++, without any restriction on hybrid applications or on the timeframe in which they were written.
Since when are desktop operating systems not desktop applications?
I can provide other examples, but most likely I will get extra requirements along the way, so why bother...
After all, you can always use the argument that any application needs to call C APIs when the OS is written in C, therefore all applications are written in C.
What a victim complex. If someone is debating that we should stop using C, and I ask for software not written in C, it's obvious that I'm looking for contemporary software. And the reasoning that all software is "C" because it needs to do system/library calls would be ludicrous, and preemptively accusing me of that is insulting.
I've done some research myself, and there are some interesting and big projects in languages other than C. Eclipse in Java, Dropbox in Python, to name two. But my point still stands. I looked at all the desktop applications I have installed and they're without exception C or C++.
My impression is that desktop software is/was largely written in C/C++/C#/Objective C/Objective C++ because those are the languages to which the Operating Systems of today expose their APIs. For example Win32 = C/C++, Cocoa = Objective C, Metro = C# (or I guess anything compiled to the Common Intermediate Language (CIL)???).
Now that said, most languages provide a bridging layer that allows them to call out to those "native" APIs. These are used to build API wrappers; however, because these are (I believe) provided by third parties, there has been a tendency to gravitate toward the "blessed" language of the OS vendor.
One of the big advantages of Java (to me at least) was that it provided a platform independent windowing capability inbuilt within the JDK that has been maintained by Sun/Oracle/(and Apple) as new operating system revisions were released.
Note, for example, that non-C/C++/Objective-C/Objective-C++ programs aren't/weren't allowed in the Mac App Store (I'm not sure if this is still the case...).
(Personally I ported a Java App to Objective C++ due to this.)
But generally I agree with your point that this isn't the only reason why C was so pervasively used. However, you also need to consider that a large number of programming environments and tools were specifically developed to aid C/C++ programmers, e.g. Borland C++, Visual C++, Code Warrior, XCode. It's also worth remembering that the GNU C Compiler and Debugger were important contributions to free software back in the day.
But also consider distribution of compilers etc. I think it is pretty fair to say that a lot of programmers learnt to program using Borland Pascal/C++ because at that point the Internet was not as accessible as today and copies of these could be "obtained".
The advent of the Internet has not only allowed the distribution of compilers and environments for other programming languages, it has also meant that the languages used for backend systems, i.e. web servers and web applications, are irrelevant to the user's web browser.
Anecdotally, for safety-critical system software an issue with some languages other than C is that they have not been suitable for real-time systems. I don't know much about this other than that exception handling and also garbage collection can cause issues due to their non-determinism.
I fear that you'll think that the above is a bit too much like saying "all software is 'C' because it needs to do system/library calls", however I think it's probably fairer to say "all software is 'C' because many people have really, really liked it" and "better the devil you know".
There are a lot of applications in C#, a lot in Java, a lot in pure Python (e.g. Ubuntu utilities), and some in Flex (cheers to Balsamiq the wonderful). The fact that you didn't install them doesn't mean they aren't there.
Moreover, you shouldn't say C/C++; they are very different languages, and I can bet my hat that your desktop applications are full of C++ with a high-level framework like Qt. Not really close to the metal.
The desktop client is indeed written in Java Swing. Fun fact: for PDF printing and getting updates, it actually does interact with a web service in the Ruby on Rails application. Both are ungodly bits of code put together years ago which shame me but seem to continue functioning.
To be honest, I was looking for some more recent software. Skype is 10 years old, and Mac OS and Photoshop are both decades old. And Pascal is not "better" than C; it is not a high level language.
Funny; as someone who started coding back in the day when Assembly was enterprise coding, my understanding of what counts as a high level language is a bit different from yours.
You asked for desktop software without any mention of time.
The implication is that his test is a proof of C's merit.
Merit is time-sensitive: if software is converted to another language, the implication is that the new language is more profitable/efficient for that software.
Therefore, the implication is also that systems converted to C demonstrate C's merit, because it's been decades since they were actually in the language you speak of.
I wouldn't say this article is recommending that you go out and program in C full-time. Many people have learned Lisp (and are better programmers for it), but few actually use it at their day job.
My point is there are fundamental computing concepts that you can pick up by learning C. In a world of high-level, low-LOC languages you can get by without learning those concepts, but it serves your and the ecosystem's best interest to learn them.
I think the disagreement we have may stem from our notions of what constitutes "fundamental computing concepts." I rank the lambda calculus much higher than C or assembly language when it comes to that. I would say that knowing your data structures and how to analyze algorithms asymptotically is vastly more important than knowing how code is being executed at a low level.
Even for the cases where low-level code must be written, I would say we need people who know assembly language and compiler theory more than we need people who know C. There is no particularly good reason for C to be anywhere in the software stack; you can bootstrap Lisp, ML, etc. without writing any C code. We need people who know how to write optimizing compilers; those people do not need to know C, nor should they waste their time with C.
Really, the most important computing concept people need to learn is abstraction. Understanding that a program can be executed as machine code, or an interpreted IR, or just interpreting an AST, and that code can itself be used to construct higher level abstractions is more important than learning any particular language.
Except that C is all over the stack that most people work in every day, and not just way down at the level of the OS.
It's astounding to me how many of the people talking about Python, Ruby, and PHP as moments of great liberation from C appear not to realize how many of the most useful libraries in these languages are really just gentle wrappers around C libraries.
Someone needs to write that not-particularly-low-level code, and someone needs to hook it up to these miraculous high-level languages. The people who do this have always been a quieter bunch than the Pythonistas, the Rubyists, the Node-nuts, and whoever else, but damn do they know what they're doing. And they certainly don't go around talking about how C is obsolete, only for device drivers, and has nothing to do with their "stack."
> There is no particularly good reason for C to be anywhere in the software stack;
Really? Not anywhere?
Who is handling your hardware interrupts? How is your keyboard and mouse input being handled? What about your video card drivers?
Now I will grant that you can bootstrap an initial runtime in assembly and place your favorite high level language on top of that, and if you add extensions to your favorite language to better interact with hardware you can do everything in a higher level language, but as it stands, LISP doesn't have built-in support for doing a DMA copy from a memory buffer to a USB port.
My question then becomes, why the heck bootstrap in ASM rather than C?
As you said, there is no reason you cannot bootstrap in a high level language. Operating systems were written in Lisp at one time; they had device drivers, interrupts, etc.
My point is not that C is not used, but that there is no compelling technical reason to use C anywhere. The fact that Lisp and ML do not have standardized features for low-level operations is not really much of an argument. We could add those features, and we could do so with ease (CMUCL and SBCL already have low-level pointer operations and a form of inline assembly language); the only reason we do not is that nobody has time to rewrite billions of lines of C code, or perhaps more that nobody will spend the money to do such a thing. The existence of C at various levels of the software stack is a historical artifact, primarily a result of Unix having been written in C and OSes written in other languages having been marketed poorly.
The lesson is not that C is good for writing low-level code; the lesson is that technical features are not terribly important.
I would also point out that an OS is not just about interrupt handlers and device drivers. Most of an OS is high-level code that is connected to interrupt handlers and device drivers through an interface. Even if C were the best language in the world for writing low-level code, I would still question the use of C elsewhere (imagine, as an alternative, an OS that follows the design of Emacs -- a small core written in C, the rest written in Lisp).
It doesn't (yet another example of why C isn't the best systems programming language), but the concepts of C (raw memory, pointers, flat buffers) map onto the underlying concepts pretty clearly.
Now that said, a lot of other things (anything dealing with asynchronous programming) don't map onto C that well at all, and other languages do a much better job at solving some conceptual problems.
But that is why languages like LISP and Haskell are taught, so that even when one is stuck working in the C ghetto, higher level concepts and more abstract coding patterns can still be brought to bear to solve problems. :)
Raw memory, pointers, and flat buffers exist in almost every systems programming language, even strongly typed ones.
My point was that what many developers think are C features for systems programming are in fact language extensions that most vendors happen to implement.
In this regard, the language is no better than any other that also requires extensions for the same purposes.
Agreed; OS kernels and firmware for embedded systems all require low-level access to hardware in a way that high-level desktop applications do not. Being able to easily reason about how C is going to use resources and be compiled down to machine code for the architecture you are using can sometimes be an important asset.
I think the point is that even if you accept that the kernel-level code and device drivers are all in C, from there on there's less and less benefit to writing userland code in C... you could use Lisp, Erlang, Scheme or a number of other languages for userland and service-oriented code.
Although I really don't care for Unity or Windows 8's UI, I do appreciate some of the directions they are going in terms of being able to create applications that are more abstracted in nature. I personally happen to like higher level languages/environments, and modern hardware has been able to handle them very well for years.
I do think that certain patterns and practices that people have followed need to be re-thought for parallelism, and that a thread/process per request in service-oriented architectures has become a bottleneck... but there are techniques, languages and platforms that can take us much farther without digging into a low-level platform language like C.
I agree that knowing C is helpful, so is knowing assembly... that doesn't mean even a small fraction of developers should be working with them on a daily basis. Most code is one-off line of business application code and related services. It doesn't need sheer performance, it needs to be done and in production sooner... the next thing needs to get done. You can't create software as quickly in C/Assembly as you can in Java/C# (or Python, Ruby, NodeJS).
I agree with you; in the cases you mentioned there don't seem to be any good arguments for not using a higher-level language with managed memory, properly implemented data structures, etc.
It seems like there are at least two threads of thought going on in the comments in general. One of them is, "does C have any role in any domain, and if so what is that domain?". I think that it does; software development is much wider than kernels, userland applications, and compilers, and there are fields where C and/or C++ are the right tools as things currently stand. I don't think anyone would argue that either language exists as a global optimum in any problem space, but from an engineering (rather than a theoretical purism) standpoint sometimes there are few practical alternatives. Maybe these domains are small, maybe they're unexciting, but they do exist.
The other is, "what is the point of learning C?". Maybe they want a deeper understanding of manual memory management, the concept of stack and heap storage, pointer manipulation, etc. Learning more about C to play with these concepts isn't a terrible idea, although it's not the only way to learn about these things. If nothing else, learning C and trying to implement your own parsers or data structures might be a good way to better understand why writing correct code in C that accounts for buffer overflows and string issues is so difficult, and what challenges higher-level languages face in order to overcome these flaws.
Foregoing application performance adds up to a lot of money for the likes of Google and Facebook in terms of server cost, cooling, size.
Maybe Go will displace C at Google but I imagine only when it reaches performance parity.
> I would say that knowing your data structures and how to analyze algorithms asymptotically is vastly more important than knowing how code is being executed at a low level.
Except that most modern data structure research goes deep into cache awareness (i.e. structures that respect cache lines and algorithms that prevent cache misses and avoid pipeline stalling), which requires understanding of the hardware and the instruction set.
Knowing your Big-O-stuff is a prerequisite for modern algorithm design; it does not take you anywhere new, though.
Knowing your data structures does not mean being on the cutting edge of data structures research. It does mean knowing more than just vectors, lists, and hash tables. It means choosing the right data structures for your problem -- something that can have profound effects on performance, much more so than the cache.
Yes, people should know about the machine their code is running on, because when there are no asymptotic improvements to be made, constant factors start to matter. Right now, though, people tend to choose asymptotically suboptimal data structures and algorithms. Worrying about things like pipeline stalling when an algorithmic improvement is possible is basically the definition of premature optimization.
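As a small illustration of the constant-factor point (same asymptotics, very different behaviour on real hardware; the functions are only illustrative):

    #include <stddef.h>

    /* Contiguous array: sequential, cache-friendly access. */
    long sum_array(const int *a, size_t n)
    {
        long s = 0;
        for (size_t i = 0; i < n; i++)
            s += a[i];
        return s;
    }

    /* Linked list: identical O(n) asymptotics, but every step chases a
       pointer that may land on a distant cache line. */
    struct node { int value; struct node *next; };

    long sum_list(const struct node *p)
    {
        long s = 0;
        for (; p != NULL; p = p->next)
            s += p->value;
        return s;
    }

Both are linear scans; picking between them is a data-structure decision first, and a cache question only once the asymptotics are settled.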
When I read "C code" in this thread - I assume people mean C and C++ together, since they're both capable of the same low-level stuff.
The thing is, the new compilers, static analysis tools, JITs, etc. available these days for C++ make this the best time ever to write C++ in a reliable way, and since this is technology being used in the real world, it will only improve.
There's a reason the C++ language is undergoing a lot of changes nowadays, for the better. After years of no movement at all, C++11 finally arrived, supporting features that improve general reliability and memory management (unique pointers, smart pointers, auto type, ...), and C++14 is on its way. While old programs will still work, the core language evolves and so do the generally accepted standards and best practices, which, once adopted, produce very reliable code.
I work on quite a large C++ code-base, and both our test-team and static analysis tools rarely find "programming errors". Functional bugs - sure, you still have those like you have in any program, but real end-of-the-world memory corruptions or leaks are rare to completely absent. The only tricky part I guess is threading, although analysis tools have massively improved here - and this is an issue in pretty much every language that understands threads.
The nice thing about learning C is that you get to understand what exactly is happening, and while a functional language like Haskell is very cool and can in certain situations offer massive optimizations due to its language and runtime design, it presents you with a non-existent world. C is an "easy" way to understand the low level as far as that's useful. I fooled around with a lot of languages, including Haskell -- and knowing C gives you a much better insight into what the runtime is actually doing, because you KNOW a CPU doesn't work like that, and you're able to quickly understand its limitations and advantages.
Another way to understand roughly what happens at the CPU level is to implement a simple bytecode compiler and interpreter -- in any language of your choosing, though for some reason most "real world" interpreters are implemented in C/C++.
C is easy to get right. It's also easy to get wrong.
To be honest, most holes I've seen over the years are above the level of the language and are down to the implementation or design being flawed. For example SQL injection, silly business processes, elevation flaws, bad maths etc.