
My point is there are fundamental computing concepts that you can pick up by learning C. In a world of high-level, low-LOC languages you can get by without learning those concepts, but it serves your own and the ecosystem's best interests to learn them.


I think the disagreement we have may stem from our notions of what constitutes "fundamental computing concepts." I rank the lambda calculus much higher than C or assembly language when it comes to that. I would say that knowing your data structures and how to analyze algorithms asymptotically is vastly more important than knowing how code is being executed at a low level.

Even for the cases where low-level code must be written, I would say we need people who know assembly language and compiler theory more than we need people who know C. There is no particularly good reason for C to be anywhere in the software stack; you can bootstrap Lisp, ML, etc. without writing any C code. We need people who know how to write optimizing compilers; those people do not need to know C, nor should they waste their time with C.

Really, the most important computing concept people need to learn is abstraction. Understanding that a program can be executed as machine code, or an interpreted IR, or just interpreting an AST, and that code can itself be used to construct higher level abstractions is more important than learning any particular language.
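
To make the abstraction point concrete, here is a minimal sketch in C (the node names and structure are made up for illustration) of the last of those execution models: a tiny arithmetic AST being walked directly by an interpreter, for a program that could just as well have been compiled to machine code or lowered to an IR.

    /* Minimal AST interpreter sketch: evaluate (2 + 3) * 4 by walking the tree. */
    #include <stdio.h>

    typedef enum { LIT, ADD, MUL } Kind;

    typedef struct Node {
        Kind kind;
        int value;                /* used when kind == LIT */
        struct Node *lhs, *rhs;   /* used for ADD / MUL    */
    } Node;

    static int eval(const Node *n) {
        switch (n->kind) {
        case LIT: return n->value;
        case ADD: return eval(n->lhs) + eval(n->rhs);
        case MUL: return eval(n->lhs) * eval(n->rhs);
        }
        return 0;
    }

    int main(void) {
        Node two   = { LIT, 2, NULL, NULL };
        Node three = { LIT, 3, NULL, NULL };
        Node four  = { LIT, 4, NULL, NULL };
        Node sum   = { ADD, 0, &two, &three };
        Node prod  = { MUL, 0, &sum, &four };
        printf("%d\n", eval(&prod));   /* prints 20 */
        return 0;
    }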


Except that C is all over the stack that most people work in every day, and not just way down at the level of the OS.

It's astounding to me how many of the people talking about Python, Ruby, and PHP as moments of great liberation from C appear not to realize how many of the most useful libraries in these languages are really just gentle wrappers around C libraries.

Someone needs to write that not-particularly-low-level code, and someone needs to hook it up to these miraculous high-level languages. The people who do this have always been a quieter bunch than the Pythonistas, the Rubyists, the Node-nuts, and whoever else, but damn do they know what they're doing. And they certainly don't go around talking about how C is obsolete, only for device drivers, and has nothing to do with their "stack."


> There is no particularly good reason for C to be anywhere in the software stack;

Really? Not anywhere?

Who is handling your hardware interrupts? How is your keyboard and mouse input being handled? What about your video card drivers?

Now I will grant that you can bootstrap an initial runtime in assembly and place your favorite high level language on top of that, and if you add extensions to your favorite language for better interaction with the hardware, you can do everything in a higher level language. But as it stands, LISP doesn't have built-in support for doing a DMA copy from a memory buffer to a USB port.

My question then becomes, why the heck bootstrap in ASM rather than C?
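
For what it's worth, here is roughly what "doing a DMA copy" tends to look like in C on bare metal -- a hypothetical sketch only, with an invented controller address and register layout. Note that even this relies on implementation-defined behavior (casting an integer to a device pointer), which is itself a point about extensions rather than ISO C.

    /* Hypothetical DMA controller sketch: base address and register layout are invented. */
    #include <stdint.h>

    #define DMA_BASE 0x40001000u   /* made-up memory-mapped controller base */

    typedef struct {
        uint32_t src;    /* source address      */
        uint32_t dst;    /* destination address */
        uint32_t len;    /* transfer length     */
        uint32_t ctrl;   /* bit 0 = start, bit 1 = busy (invented) */
    } dma_regs_t;

    #define DMA ((volatile dma_regs_t *)DMA_BASE)

    void dma_copy(const void *src, void *dst, uint32_t len) {
        DMA->src  = (uint32_t)(uintptr_t)src;
        DMA->dst  = (uint32_t)(uintptr_t)dst;
        DMA->len  = len;
        DMA->ctrl = 1u;                /* kick off the transfer   */
        while (DMA->ctrl & 2u)         /* spin while busy bit set */
            ;
    }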


As you said, there is no reason you cannot bootstrap in a high level language. Operating systems were written in Lisp at one time; they had device drivers, interrupts, etc.

My point is not that C is not used, but that there is no compelling technical reason to use C anywhere. The fact that Lisp and ML do not have standardized features for low-level operations is not really much of an argument. We could add those features, and we could do so with ease (CMUCL and SBCL already have low-level pointer operations and a form of inline assembly language); the only reason we do not is that nobody has time to rewrite billions of lines of C code, or, more to the point, that nobody will spend the money to do such a thing. The existence of C at various levels of the software stack is a historical artifact, primarily a result of Unix having been written in C and OSes written in other languages having been marketed poorly.

The lesson is not that C is good for writing low-level code; the lesson is that technical features are not terribly important.

I would also point out that an OS is not just about interrupt handlers and device drivers. Most of an OS is high-level code that is connected to interrupt handlers and device drivers through an interface. Even if C were the best language in the world for writing low-level code, I would still question the use of C elsewhere (imagine, as an alternative, an OS that follows the design of Emacs -- a small core written in C, the rest written in Lisp).


> LISP doesn't have built in support for doing a DMA copy from memory buffer to a USB port.

Where in the ANSI/ISO C standard is this support described?


It doesn't (which is itself one example of why C isn't the best systems programming language), but the concepts of C (raw memory, pointers, flat buffers) map onto the underlying hardware concepts pretty clearly.
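
As a rough illustration of that mapping (the frame layout here is invented), this is the "flat buffer plus pointer" view that driver and protocol code lives in -- picking typed fields out of raw bytes:

    /* Interpret a raw byte buffer by hand: offsets and layout are made up. */
    #include <stdint.h>
    #include <stdio.h>

    int main(void) {
        uint8_t frame[8] = { 0xAA, 0x01, 0x00, 0x10, 0xDE, 0xAD, 0xBE, 0xEF };

        uint8_t  type = frame[1];                               /* byte at offset 1        */
        uint16_t len  = (uint16_t)((frame[2] << 8) | frame[3]); /* big-endian 16-bit field */
        const uint8_t *payload = &frame[4];                     /* pointer into the buffer */

        printf("type=%u len=%u first payload byte=0x%02X\n",
               (unsigned)type, (unsigned)len, (unsigned)payload[0]);
        return 0;
    }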

Now that said, a lot of other things (anything dealing with asynchronous programming) don't map onto C that well at all, and other languages do a much better job at solving some conceptual problems.

But that is why languages like LISP and Haskell are taught, so that even when one is stuck working in the C ghetto, higher level concepts and more abstract coding patterns can still be brought to bear to solve problems. :)


Raw memory, pointers, and flat buffers exist in almost every systems programming language, even strongly typed ones.

My point was that what many developers think are C features for systems programming are in fact language extensions that most vendors happen to implement.

In this regard, the language is no better than any other that also requires extensions for the same purposes.
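
For example (using the GCC/Clang spellings; other vendors spell these differently), two things commonly assumed to be plain "C for systems programming" are in fact extensions -- packed struct layout and inline assembly, the latter appearing in the C standard only among the common extensions:

    /* Builds with GCC/Clang only, which is exactly the point: these are extensions. */
    #include <stdint.h>
    #include <stdio.h>

    /* Exact, padding-free layout for a hardware-facing record: not ISO C. */
    struct reg_pair {
        uint8_t  id;
        uint32_t value;
    } __attribute__((packed));

    /* Inline assembly: also not ISO C. */
    static inline void cpu_relax(void) {
    #if defined(__x86_64__) || defined(__i386__)
        __asm__ volatile ("pause");
    #endif
    }

    int main(void) {
        cpu_relax();
        printf("sizeof(struct reg_pair) = %zu\n", sizeof(struct reg_pair)); /* 5, not 8 */
        return 0;
    }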


Agreed; OS kernels and firmware for embedded systems all require low-level access to hardware in a way that high-level desktop applications do not. Being able to easily reason about how C is going to use resources and be compiled down to machine code for the architecture you are using can sometimes be an important asset.


I think the point is that even if you accept that the kernel-level code and device drivers are all in C, there's less and less benefit to doing userland code in C from there... you could use Lisp, Erlang, Scheme, or a number of other languages for userland and service-oriented code.

Although I really don't care for Unity or Windows 8's UI, I do appreciate some of the directions they are going in terms of being able to create applications that are more abstracted in nature. I personally happen to like higher-level languages/environments, and modern hardware has been able to handle them very well for years.

I do think that certain patterns and practices that people have followed need to be rethought for parallelism, and that a thread/process per request in service-oriented architectures has become a bottleneck... but there are techniques, languages, and platforms that can take us much farther without digging into a low-level platform language like C.

I agree that knowing C is helpful, and so is knowing assembly... but that doesn't mean more than a small fraction of developers should be working with them on a daily basis. Most code is one-off line-of-business application code and related services. It doesn't need sheer performance; it needs to be done and in production sooner... the next thing needs to get done. You can't create software as quickly in C/assembly as you can in Java/C# (or Python, Ruby, NodeJS).


I agree with you; in the cases you mentioned there don't seem to be any good arguments for not using a higher-level language with managed memory, properly implemented data structures, etc.

It seems like there are at least two threads of thought going on in the comments in general. One of them is, "does C have any role in any domain, and if so what is that domain?". I think that it does; software development is much wider than kernels, userland applications, and compilers, and there are fields where C and/or C++ are the right tools as things currently stand. I don't think anyone would argue that either language exists as a global optimum in any problem space, but from an engineering (rather than a theoretical purism) standpoint sometimes there are few practical alternatives. Maybe these domains are small, maybe they're unexciting, but they do exist.

The other is, "what is the point of learning C?". Maybe they want a deeper understanding of manual memory management, the concept of stack and heap storage, pointer manipulation, etc. Learning more about C to play with these concepts isn't a terrible idea, although it's not the only way to learn about these things. If nothing else, learning C and trying to implement your own parsers or data structures might be a good way to better understand why writing correct code in C that accounts for buffer overflows and string issues is so difficult, and what challenges higher-level languages face in order to overcome these flaws.
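
As a small sketch of the kind of pitfall that exercise surfaces: the unbounded copy below is the classic overflow, while the bounded one merely truncates (both functions are standard C).

    #include <stdio.h>
    #include <string.h>

    int main(void) {
        char dst[8];
        const char *input = "definitely longer than eight bytes";

        /* strcpy(dst, input); */                 /* classic overflow: writes past the end of dst */

        snprintf(dst, sizeof dst, "%s", input);   /* truncates instead of overflowing */
        printf("%s\n", dst);                      /* prints at most 7 characters      */
        return 0;
    }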


Forgoing application performance adds up to a lot of money for the likes of Google and Facebook in terms of server costs, cooling, and physical footprint. Maybe Go will displace C at Google, but I imagine only when it reaches performance parity.


> I would say that knowing your data structures and how to analyze algorithms asymptotically is vastly more important than knowing how code is being executed at a low level.

Except that most modern data structure research goes deep into cache awareness (i.e. structures that respect cache lines and algorithms that prevent cache misses and avoid pipeline stalling), which requires understanding of the hardware and the instruction set.

Knowing your Big-O stuff is a prerequisite for modern algorithm design; it does not take you anywhere new, though.
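
To illustrate the cache point with a toy example (a sketch, not a benchmark): both of these sums are O(n), but the array walk streams through contiguous memory while the list walk chases pointers, which on real hardware can mean a cache miss on every step.

    #include <stdio.h>
    #include <stddef.h>

    typedef struct node {
        int value;
        struct node *next;
    } node;

    static long sum_array(const int *a, size_t n) {
        long s = 0;
        for (size_t i = 0; i < n; i++)
            s += a[i];            /* sequential access: hardware prefetching helps */
        return s;
    }

    static long sum_list(const node *head) {
        long s = 0;
        for (const node *p = head; p != NULL; p = p->next)
            s += p->value;        /* dependent pointer chase: frequent cache misses */
        return s;
    }

    int main(void) {
        int a[4] = { 1, 2, 3, 4 };
        node n3 = { 4, NULL }, n2 = { 3, &n3 }, n1 = { 2, &n2 }, n0 = { 1, &n1 };
        printf("%ld %ld\n", sum_array(a, 4), sum_list(&n0));   /* 10 10 */
        return 0;
    }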


Knowing your data structures does not mean being on the cutting edge of data structures research. It does mean knowing more than just vectors, lists, and hash tables. It means choosing the right data structures for your problem -- something that can have profound effects on performance, much more so than the cache.

Yes, people should know about the machine their code is running on, because when there are no asymptotic improvements to be made, constant factors start to matter. Right now, though, people tend to choose asymptotically suboptimal data structures and algorithms. Worrying about things like pipeline stalling when an algorithmic improvement is possible is basically the definition of premature optimization.
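
To put the asymptotic point concretely (a sketch using only standard library calls): answering many membership queries by rescanning an unsorted array costs O(n) per query, while sorting once and binary-searching costs O(log n) per query -- a gap that no amount of cache-level tuning closes.

    #include <stdio.h>
    #include <stdlib.h>

    static int cmp_int(const void *a, const void *b) {
        int x = *(const int *)a, y = *(const int *)b;
        return (x > y) - (x < y);
    }

    int main(void) {
        int data[] = { 42, 7, 19, 3, 88, 25 };
        size_t n = sizeof data / sizeof data[0];
        int key = 19;

        qsort(data, n, sizeof data[0], cmp_int);                      /* one-time O(n log n) sort */
        int *hit = bsearch(&key, data, n, sizeof data[0], cmp_int);   /* O(log n) for each lookup */

        printf("%d %s\n", key, hit ? "found" : "missing");            /* prints "19 found"        */
        return 0;
    }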



