The art of programming is evolving steadily: more powerful hardware becomes available, and compiler technology improves.
Of course there will be resistance to change, and new compilers don't mature overnight. At the end of the day, it boils down to what can be parsed unambiguously, written down easily by human beings, and executed quickly. If you get off on reading research papers on dependent types and writing Agda programs to store in your attic, that's your choice; the rest of us will be happily writing Linux in C99 and powering the world.
Programming has not fundamentally changed in any way. x86 is the clear winner as far as commodity hardware is concerned, and serious infrastructure is all written in C. There is a significant risk to adopting any new language; the syntax might look pretty, but then you figure out that the compiler team consists of incompetent monkeys writing leaking garbage collectors. We are pushing the boundaries every day:
- Linux has never been better: it continues to improve steadily (oh, and at what pace!). New filesystems optimized for SSDs, real virtualization using KVM, an amazing scheduler, and new system calls. All software is limited by how well the kernel can run it.
- We're in the golden age of concurrency. Various runtimes are trying various techniques: erlang uses a message-passing actor hammer, async is a bit of an afterthought in C#, Node.js tries to get V8 to do it leveraging callbacks, Haskell pushes forward with a theoretically-sound STM, and new languages like Go implement it deep at the scheduler-level.
- For a vast majority of applications, it's very clear that automatic memory management is a good trade-off. We look down upon hideous nonsense like the reference counter in CPython, and strive to write concurrent moving GCs. While JRuby has the advantage of piggy-backing on a mature runtime, the MRI community is taking GC very seriously. V8 apparently has a very sophisticated GC as well, otherwise JavaScript wouldn't be performant.
- As far as typing is concerned, Ruby has definitely pushed the boundaries of dynamic programming. JavaScript is another language with very loosely defined semantics that many people are fond of. As far as typed languages go, there are only hideous languages like Java and C#. Go seems to have a nice flavor of type inference to it, but only time will tell if it'll be a successful model. Types make for faster code, because the runtime has to spend that much less time inspecting your objects: V8 does a lot of type inference behind the scenes too.
- As far as extensibility is concerned, it's obvious that nothing can beat a syntax-less language (aka. Lisp). However, Lisps have historically suffered from a lack of a typesystem and object system: CLOS is a disaster, and Typed Racket seems to be going nowhere. Clojure tries to bring some modern flavors into this paradigm (core.async et al), while piggy-backing on the JVM. Not sure where it's going though.
- As far as object systems go, nothing beats Java's factories. It's a great way to fit together many shoddily-written components safely, and Dalvik does exactly that. You don't need a package-manager, and applications have very little scope for misbehaving because of the suffocating typesystem. Sure, it might not be pleasant to write Java code, but we really have no other way of fitting so many tiny pieces together. It's used in enterprise for much the same reasons: it's too expensive to discipline programmers to write good code, so just constrain them with a really tight object system/typesystem.
- As far as functional programming goes, it's fair to say that all languages have incorporated some amount of it: Ruby differentiates between gsub and gsub! for instance. Being purely functional is a cute theoretical exercise, as the scarab beetle on the Real World Haskell book so aptly indicates.
- As far as manual memory management goes (when you need kernels and web browsers), there's C and there's C++. Rust introduces some interesting pointer semantics, but it doesn't look like the project will last very long.
Well, that ends my rant: I've hopefully provided some food for thought.
> We're in the golden age of concurrency. Various runtimes are trying various techniques: erlang uses a message-passing actor hammer, async is a bit of an afterthought in C#, Node.js tries to get V8 to do it leveraging callbacks, Haskell pushes forward with a theoretically-sound STM, and new languages like Go implement it deep at the scheduler-level.
No, a better analogy is that we're in the Cambrian explosion of concurrency. We have a bunch of really strange lifeforms all evolving very rapidly in weird ways because there's little selection pressure.
Once one of these lifeforms turns out to be significantly better, then it will outcompete all of the others and then we'll be in something more like a golden age. Right now, we still clearly don't know what we're doing.
We've been doing concurrency for many years now; it's called pthreads. Large applications like Linux, web browsers, webservers, and databases do it all the time.
The question is: how do we design a runtime that makes it harder for the user to introduce races without sacrificing performance or control? One extreme approach is to constrain the user to write only purely functional code, and auto-parallelize everything, like Haskell does (it's obvious why this is a theoretical exercise). Another is to get rid of all shared memory and restrict all interaction between threads to message passing, like Erlang does (obviously, you have to throw performance out the window). Yet another approach is to run independent threads and keep polling for changes at a superficial level (like Node.js does; performance and maintainability are shot). The approach that modern languages are taking is to build concurrency as a language primitive into the runtime (see how Go's proc.c schedules goroutines and how chan.c implements channels; there's a nice race-detection algorithm in race.c).
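To make the message-passing style concrete, here is a minimal sketch using Rust's std::sync::mpsc channels; the choice of Rust is purely illustrative, since Erlang and Go bake the equivalent machinery much deeper into their runtimes.

    use std::sync::mpsc;
    use std::thread;

    fn main() {
        // Worker threads communicate only by sending messages; no shared memory.
        let (tx, rx) = mpsc::channel();
        for id in 0..4 {
            let tx = tx.clone();
            thread::spawn(move || {
                tx.send(format!("hello from worker {}", id)).unwrap();
            });
        }
        drop(tx); // close the sending side so the receive loop below terminates

        for msg in rx {
            println!("{}", msg);
        }
    }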
There is more pressure than ever to build highly available internet applications that leverage more cores. Multi-core CPUs have been around long enough, and are now prevalent even on mobile devices. No radically different solution to concurrency is magically going to appear tomorrow: programmers _need_ to understand concurrency, and work with existing systems.
> We've been doing concurrency for many years now; it's called pthreads.
Sometimes, the major advances come when fresh ideas are infused from the outside. In Darwin's case it was his geological work that inspired his theory. In concurrency maybe it will be ideas from neuroscience.
> No radically different solution to concurrency is magically going to appear tomorrow: programmers _need_ to understand concurrency, and work with existing systems.
The environment is changing. Around 2007 our equivalent of the rising oxygen levels arrived: single-threaded CPU scaling hit the wall. It has gone from doubling every 2 years to a few % of improvement per year.
We are only at the beginning of this paradigm shift to massively multi-core CPUs. Both the tools and the theory are still in their infancy. In HW there are many promising advances being explored, such as GPUs, Intel Phi, new FPGAs, and projects like Parallella.
The software side also requires new tools to drive these new technologies. Maybe it will be a radical new idea, but more likely some evolved form of the CSP, functional, flow-based, and/or reactive programming models from the 70s, which didn't fit the HW environment of the time, will fill this new niche.
For example, one of the smartest guys I know is working in neuromorphic engineering, creating an ASIC with thousands of cores now that may evolve to millions or billions. If this trilobite emerges on top, whatever language is used to program it might have been terrible in the 70s or for your "existing systems", but it may be the future of programming.
> Sometimes, the major advances come when fresh ideas are infused from the outside.
I agree with this largely; over-specialization leads to myopia (often accompanied by emotional attachment to one's work).
> In Darwin's case it was his geological work that inspired his theory.
If you read On the Origin of Species, you'll see that Darwin started from very simple observations about cross-pollination leading to hybrid plant strains. He spent years studying various species of animals. In the book, he begins very modestly, following step by step from his Christian foundations, without making any outrageous claims. The fossils he collected on his Beagle expedition sparked his interest in the field, and served as good evidence for his theory.
> In concurrency maybe it will be ideas from neuroscience.
Unlikely, considering what little we know about the neocortex. The brain is not primarily a computation machine at all; it's a hierarchical memory system that makes mild extrapolations. There is some interest in applying what we know to computer science, but I've not seen anything concrete so far (read: code; not some abstract papers).
> We are only at the beginning of this paradigm shift to massively multi-core CPUs.
From the point of view of manufacturing, it makes most sense. It's probably too expensive to design and manufacture a single core in which all the transistors dance to a very high clock frequency. Not to mention power consumption, heat dissipation, and failures. In a multi-core, you have the flexibility to switch off a few cores to save power, run them at different clock speeds, and cope with failures. Even from the point of view of Linux, scheduling tons of routines on one core can get very complicated.
> In HW there are many promising advances being explored, such as GPUs, Intel Phi, new FPGAs, and projects like Parallella.
Of course, but I don't speculate much about the distant future. The fact of the matter is that silicon-based x86 CPUs will rule commodity hardware in the foreseeable future.
> [...]
All this speculation is fine. Nothing is going to happen overnight; in the best case, we'll see an announcement about a new concurrent language on HN tomorrow, which might turn into a real language with users after 10 years of work ;) I'll probably participate and write patches for it.
For the record, Go (which is considered "new") is over 5 years old now.
I think you missed my point about Darwin. Darwin was inspired by the geological theory of gradualism, where small changes are summed up over long time periods. It was this outside theory applied to biology that helped him to shape his radical new theory.
Right now threads are the only game in town, and I think you're right. For existing hardware, there probably won't be any magic solution, at least not without some major tradeoff like the performance hit you get with Erlang.
I was thinking about neuromorphic hardware when I mentioned neuroscience. From what I hear the software side there is more analogous to HDL.
Go is a great stopgap for existing thread-based HW. But if the goal is to achieve strong AI, we're going to need some outside inspiration. Possibly from a hierarchical memory system, a massively parallel one.
I wish I could offer less speculation, and more solid ideas. Hopefully someone here on HN will. I think that was the point of the video. To inspire.
There are other options in the systems field like "virtual time" and "time warps", or "space-time memory", or a plethora of optimistic concurrency schemes where you optimistically try to do something, discover there is an inconsistency, rollback your effects, and do it again (like STM, but with real "do it again").
Our raw parallel concurrency tools, especially pthreads and..gack..locks, are horribly error prone and not even very scalable in terms of human effort and resource utilization. That is why we've expended so much effort designing models that try and avoid them.
My point is that we'll continually find better solutions to existing problems (concurrency, or anything else for that matter). There will be a time in the future when we've come up with a solution that's "good enough", and it'll become the de-facto standard for a while (kind of like what Java is today). I don't know what that solution will be, and I don't speculate about it: I'm more interested in the solutions we have today.
Yes, the raw solutions _are_ very painful, which is why they haven't seen widespread adoption. And yes, we are continually trying to enable more programmers.
Yes, many of us are in the field of coming up with "the programming model" to handle this as well as general live programming problems. I'm personally focusing on optimistic techniques to deal with concurrency as well as incremental code changes.
Nothing you've said really invalidates his argument - we are still typing mostly imperative code into text files, it is still very easy to introduce bugs into software, and software development is on the whole unnecessarily complex and unintuitive.
It's heartening to see a renewed interest in functional, declarative and logic based programming today, but also saddening that the poisonous legacy of C has prevented us from getting there sooner.
> It's heartening to see a renewed interest in functional, declarative and logic based programming today, but also saddening that the poisonous legacy of C has prevented us from getting there sooner.
From the point of view of programming a computer, this doesn't make much sense to me personally.
But perhaps the problem is that I first and foremost see that I program a computer, a deterministic machine with limited resources and functionality, rather than "designing a user experience and letting the computer take care of making it run as I describe". Guess I dwell in the depths of hardware/machine-centric programming rather than fly high in user-centric programming.
Unless you're writing IA64 microcode, you don't really program a computer. You explain your desires to a compiler using a vocabulary as expressive as is possible for it to comprehend, and then it uses whatever intelligence is at its disposal to program the computer.† The more intelligent the compiler (e.g. GHC with its stream-fusion), and the more expressive the vocabulary it knows (e.g. Erlang/OTP with its built-in understanding of servers, finite-state machines, and event-handlers) the higher-level the conversation you can have with it is.
Your conversation with the compiler is actually the same conversation a client would have with you, as a software contractor. From the client's perspective, you play the role of the compiler, interrogating and formalizing their own murky desires for them, and then coughing up a build-artifact for them to evaluate. This conversation just occurs on an even higher level, because a human compiler is smarter, and has a much more expressive vocabulary, than a software compiler.
...but the "goal" of compiler and language design should be to make that distinction, between the "software compiler" and the "human compiler", less obvious, shouldn't it? The more intelligence we add to the compiler, and the more expressivity we add to the language, the more directly the programmer can translate the client's desires into code. Until, finally, one day--maybe only after we've got strong AI, but one day--the client themselves will be the one speaking to the compiler. Not because the client will be any better at knowing how to formalize what they want than they ever were (that's the dream that gave us the abominations of FORTRAN, SQL, and AppleScript) but because the compiler will be able to infer and clarify their murky thoughts into a real, useful design--just as we do now. Wouldn't that be nice?
---
† If you use a language-platform that includes garbage-collection, for example, then you're not targeting a machine with "limited resources" at all; garbage-collection is intended to simulate an Abstract Machine with unlimited memory. (http://blogs.msdn.com/b/oldnewthing/archive/2010/08/09/10047...)
Don't talk rubbish. Nobody enjoys spending 10 hours to accomplish something that can be accomplished in an hour. Of course we're trying to build compilers for nicer languages. Programming isn't going to become any less complex or unintuitive by sitting around wishing for better solutions: it's going to happen by studying existing technology, and using it to build better solutions.
What has "prevented" us from getting there sooner is purely our incompetence. It's becoming painfully clear to me that people have absolutely no idea about how a compiler works.
Dunno the OP's original reasoning, but I found this comment flippant and unsubstantiated. I don't code Rust myself, but both it and Go seem extremely promising, and both "specialize" relative to C/C++ without stepping on each other's toes. There is room for both systems languages. As things stand now, if it doesn't last very long, it will be because of some future mistake by its creators or community, not because it loses out to some fitter competitor. AFAICT the multicore future doesn't have room for C/C++, so it's logical that one or more practical systems languages that do consider a multicore future will take the place of C/C++. Go and Rust seem like the most likely candidates on the horizon at this point in time.
Let's take a couple of simple examples of when manual memory management is helpful:
- implement a complex data structure that requires a lot of memory: you can request a chunk of memory from the kernel, do an arena-allocation and choose to allocate/free on your own terms.
- implement a performant concurrency model. You essentially need some sort of scheduler to give various threads access to the shared memory via CAS primitives (a minimal CAS sketch follows this list).
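To make the CAS point concrete, here is a minimal sketch in present-day Rust of the kind of primitive a scheduler builds on: a crude spinlock built out of compare_exchange. It is purely illustrative, and not how any of the runtimes discussed here actually implement scheduling.

    use std::sync::atomic::{AtomicBool, AtomicUsize, Ordering};
    use std::sync::Arc;
    use std::thread;

    fn main() {
        let locked = Arc::new(AtomicBool::new(false));
        let counter = Arc::new(AtomicUsize::new(0));

        let handles: Vec<_> = (0..4).map(|_| {
            let (locked, counter) = (Arc::clone(&locked), Arc::clone(&counter));
            thread::spawn(move || {
                for _ in 0..100_000 {
                    // Acquire: CAS false -> true, spinning until we win.
                    while locked.compare_exchange(false, true, Ordering::Acquire, Ordering::Relaxed).is_err() {}
                    counter.fetch_add(1, Ordering::Relaxed); // the "shared memory" access
                    locked.store(false, Ordering::Release);  // release the lock
                }
            })
        }).collect();

        for h in handles { h.join().unwrap(); }
        println!("count = {}", counter.load(Ordering::Relaxed));
    }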
Let's take up the second point first: you have implemented tasks that communicate using pipes without sharing memory (rt/rust_task.cpp). You've exposed the lower-level rt/sync via libextra/sync.rs, but it's frankly not a big improvement over using raw pthreads. The scheduler is a toy (rt/rust_scheduler.cpp), and the memory allocator is horribly primitive (rt/memory_region.cpp; did I read correctly? are you using an array to keep track of the allocated regions?). The runtime is completely devoid of any garbage collection, because you started out with the premise that manual memory management is the way to go: did it occur to you that a good gc would have simplified the rest of your runtime greatly?
Now for the first point: Rust really has no way of accomplishing it, because I don't get access to free(). The best you can do at this point is to use some sort of primitive reference counter (not unlike CPython or shared_ptr in C++), because it's too late to implement a tracing garbage collector. And you just threw performance out the window by guaranteeing that you will call free() every time something goes out of scope, no matter how tiny the memory.
Now, let's compare it to the go runtime: arena-allocator tracked using bitmaps (malloc.goc), decent scheduler (proc.c), decent tracing garbage collector (mgc0.c), and channels (chan.c). For goroutines modifying shared state, they even implemented a nice race-detection tool (race.c).
The fact of the matter is that a good runtime implementing "pretty" concurrency primitives requires a garbage collector internally anyway. It's true that Go doesn't give me a free() either, but at least I'm reassured by the decent gc.
Now, having read through most of libstd, observe:
impl<'self, T> Iterator<&'self [T]> for RSplitIterator<'self, T>
What's the big deal here? The lifetime of the variable is named ('self), and the ownership semantics are clear (& implies borrowed pointer; not very different from the C++ counterpart). Whom is all this benefitting? Sure, you get annoying compile-time errors when you don't abide by these rules, but what is the benefit of using them if there's no tooling around it (aka. gc)? Yes, it's trivially memory-safe and I get that.
Lastly, think about why people use C and C++. Primarily, it boils down to compiler strength. The Rust runtime doesn't look like it's getting there; at least not in its current shape.
> Let's take a couple of simple examples of when manual memory management is helpful:
>
> - implement a complex data structure that requires a lot of memory: you can request a chunk of memory from the kernel, do an arena-allocation and choose to allocate/free on your own terms.
Rust fully supports this case with arenas.
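For illustration, a minimal arena sketch in present-day Rust, assuming the external typed_arena crate (the 2013 tree had a similar facility in libextra/arena.rs): everything allocated from the arena is freed in one shot when the arena is dropped, with no per-object free().

    // Cargo.toml (assumed): typed-arena = "2"
    use typed_arena::Arena;

    fn main() {
        let arena: Arena<[u8; 64]> = Arena::new();

        let mut chunks = Vec::new();
        for i in 0..1_000 {
            // alloc() hands back a reference tied to the arena's lifetime;
            // there is no individual free for any of these chunks.
            let chunk = arena.alloc([i as u8; 64]);
            chunks.push(chunk);
        }
        println!("allocated {} chunks; all released together when `arena` drops", chunks.len());
    }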
> - implement a performant concurrency model. You essentially need some sort of scheduler to give various threads access to the shared memory via CAS primitives.
And that's why the new scheduler is written in Rust.
Furthermore, manual memory management is helpful when you are implementing a browser that doesn't want a stop the world GC.
> You've exposed the lower-level rt/sync via libextra/sync.rs, but it's frankly not a big improvement over using raw pthreads.
It's just a wrapper around pthreads, for use internally by the scheduler and low-level primitives. It is not intended for safe Rust code to use. Of course it's not a big improvement over pthreads.
> The scheduler is a toy (rt/rust_scheduler.cpp)
That's why it's getting rewritten. You're looking at the old proof of concept/bootstrap scheduler. Please see the new scheduler in libstd/rt. It will probably be turned on in a week or two.
> and the memory allocator is horribly primitive (rt/memory_region.cpp; did I read correctly? are you using an array to keep track of the allocated regions)
There is a new GC that is basically written, just not turned on by default yet. Furthermore, manually-managed allocations no longer go through that list.
> Rust really has no way of accomplishing it, because I don't get access to free().
Of course you do. `let _ = x;` is an easy way to free any value.
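For readers trying this in present-day Rust (an assumption on my part; the snippet above reflects the 2013 compiler), the explicit way to free a value early is std::mem::drop. A tiny sketch:

    struct Noisy(&'static str);

    impl Drop for Noisy {
        fn drop(&mut self) {
            println!("freed {}", self.0);
        }
    }

    fn main() {
        let a = Noisy("a");
        let b = Noisy("b");
        drop(a); // `a` is freed right here, deterministically
        println!("a is gone, b ({}) is still alive", b.0);
    } // `b` is freed here, at the end of its scope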
> The best you can do at this point is to use some sort of primitive reference counter (not unlike CPython or shared_ptr in C++), because it's too late to implement a tracing garbage collector.
This is just nonsense, sorry. Graydon has a working tracing GC, it's just not turned on by default because of memory issues on 32 bit when bootstrapping. This is not too difficult to fix and is a blocker for 1.0.
Furthermore, did you not see the mailing list discussions where we're discussing what needs to happen to get incremental and generational GC?
> And you just threw performance out the window by guaranteeing that you will call free() every time something goes out of scope, no matter how tiny the memory.
This is what move semantics are for. If you want to batch deallocations like a GC does (which has bad effects on cache behavior as Linus is fond of pointing out, but anyway), move the object into a list so it doesn't get eagerly freed and drop the list every once in a while.
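A minimal sketch of what's described above, in present-day Rust: move values into a list instead of letting them drop at the end of their scope, then drop the whole batch at once (the names here are illustrative, not from the Rust tree).

    struct Buffer(Vec<u8>);

    fn main() {
        // Keeps values alive so their destructors don't run eagerly.
        let mut graveyard: Vec<Buffer> = Vec::new();
        let mut total = 0usize;

        for i in 0..10_000 {
            let buf = Buffer(vec![0u8; 1024]);
            total += buf.0.len();   // ...pretend we did real work with it...
            graveyard.push(buf);    // moved: nothing is freed here

            if i % 1_000 == 999 {
                graveyard.clear();  // destructors for the whole batch run here
            }
        }
        println!("touched {} bytes", total);
    } // anything left in `graveyard` is freed here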
> Lastly, think about why people use C and C++. Primarily, it boils down to compiler strength. The Rust runtime doesn't look like it's getting there; at least not in its current shape.
The benchmarks of the new runtime are quite promising. TCP sending, for example, is faster than both node.js and Go 1.1 in some of our early benchmarks. And sequential performance is on par with C++ in many cases: http://pcwalton.github.io/blog/2013/04/18/performance-of-seq...
Please supply evidence (aka. code) to back your one-liners. I assume you're talking about libextra/arena.rs. It's very straightforward; there's a big comment at the top of the file, so I don't have to point out how primitive or sophisticated it is.
> And that's why the new scheduler is written in Rust.
You're talking about libstd/rt/sched.rs. So it uses the UnsafeAtomicRcBox<ExData<T>> primitive (from libstd/std/sync.rs) to implement the queues. The event loop itself is a uvio::UvEventLoop. Looking at the rest of libstd/rt/uv, I see that your core evented I/O is libuv (the same library that powers Node.js). For readers desiring an accessible introduction, see [1]. Otherwise, sched.rs is very straightforward.
> There is a new GC that is basically written
Unless you're expecting some sort of blind worship, I expect pointers to source code. I found libstd/gc.rs, so I'll assume that it's what you're talking about. Let's see what's "basically" done, shall we?
You use the llvm.gcroot intrinsic to extract the roots, and then _walk_gc_roots to reference count. You've also written code to determine the safe points, and have implemented _walk_safe_point. For readers desiring an accessible introduction to GC intrinsics in LLVM, see [2]. The history indicates that basically nobody has touched gc.rs since it was written by Elliott a year ago, so I'm not going to investigate further.
The reason it's not enabled by default is quite simple: it's not hooked up to the runtime at all. You still have to figure out when to run it.
> Graydon has a working tracing GC
You're not understanding this: the whole point of running an open source project is so you can proudly show off what you've written and get others involved. Your one-liners are not helping one bit.
> did you not see the mailing list discussions where we're discussing what needs to happen to get incremental and generational GC?
No, and that should be the purpose of your reply: to provide links, so people can read about it. I'm assuming you're talking about this [3]. Okay, so you need read and write barriers, and you mentioned something about a hypothetical Gc and GcMut; readers can read the rest of the thread for themselves: I don't see code, so no comments.
> TCP sending, for example, is faster than both node.js and Go 1.1 in some of our early benchmarks.
TCP sending is libuv: logically, can you explain to me how you're faster than node.js? No comments on Go at this point.
> And sequential performance is on par with C++
So you emit relatively straightforward llvm IR for straightforward programs, and don't do worse than clang++. Not surprising.
This is the one link you provided in your entire comment. Learn to treat people with respect: showing a programmer colorful pictures of vague benchmarks instead of code is highly condescending. Yes, I've seen test/bench.
If you're hiding some code in the attic, now is the time to show it.
I find some of your comments very aggressive. Yet you are lecturing people about being condescending. I honestly much prefer pcwalton's tone which I find much less condescending, interestingly.
Pointing at code can indeed be useful, but it looks to me like you are comparing apples to oranges: Rust is not at 1.0 yet, so comparing code that isn't yet production-ready with Go or whatever other technology that is already mature is not all that useful.
Saying that, in its current state, Rust is not a good choice for production code is acceptable and fairly obvious. Extrapolating to the point of saying that it is doomed seems like quite an exaggeration to me, and not respectful of the work people are putting into this project.
> Unless you're expecting some sort of blind worship, I expect pointers to source code. I found libstd/gc.rs, so I'll assume that it's what you're talking about. Let's see what's "basically" done, shall we?
I don't really want to draw out this argument, but I called your reply FUD because you were claiming things that were not true, such as that we cannot implement tracing GC.
Hm, a conservative mark-and-sweep that uses tries to keep state. I wonder how the gc task is scheduled, but you're not feeling chatty; so I'll drop the topic.
I made claims based on what I (and everyone else) could see in rust.git; I have no reason to be either overly pessimistic or overly optimistic. At the end of the day, the proof is in the pudding (aka. code): we are only debating facts, not hypotheticals.
Either way, it was an interesting read. Sure, I took a karma hit for saying unpopular things, and people feel sour/ hurt/ [insert irrational emotion here]; that's fine. Nevertheless, I hope the criticism helped think about some issues.
I think you took a karma hit not for saying unpopular things, but for assuming bad faith. One of the lead developers of the Rust language pointed out some gaps or mistakes in your comment about Rust. Instead of appearing eager to correct yourself, you appeared eager to defend your original statements and all but accused him of lying. I'm certain you could have made the same substantive points with a more reasonable/humble tone and not been downvoted.
For example, when you learn new information like "there's a tracing GC in progress" and you want to look at the source code, you could choose to say "Oh, cool! I didn't know that. Could you give a link with more information or source?" instead of lecturing the other commenter about how they are Doing Open Source Wrong.
I don't have a position to defend, and I am nobody to make any statements of any significance: I did a code review, and I was critical about it. If anything, I want the project to succeed. Evidence? [1]
He asked me why I thought Rust wouldn't live for long, and I spent hours reading the code and writing a detailed, coherent comment to the best of my ability. He dismisses my comment as "FUD" [2] and responds with one-liners. The final comment with a link to his blog with colorful graphs was terribly condescending. Him being a lead developer doesn't mean squat to me: a bad argument from him is still a bad argument.
No, I'm not going to stoop to begging for scraps: if I wanted to do that, I'd be using proprietary software; Apple or Microsoft nonsense. In this world, the maintainer is the one who has to take the effort to educate potential contributors. He is clearly doing a terrible job, and I pointed that out.
No, I never accused him of lying. I accused him of making a bad argument, and not giving me sufficient information to post a counter-argument, which is exactly what he did.
And no, I did not "defend" my original argument: I posted a fresh review of fresh code (the one in src/libstd/rt, as opposed to the one in src/rt).
On the point of tone. Yes, I've spent many years on harsh mailing lists and my language is a product of that experience. Are you going to discriminate against me because of that, irrespective of the strength of the argument?
I will repeat this once more: the only currency in a rational argument is the strength of your argument; don't play the authority card.
Factually, there have been more commits to the arch/arm tree than the arch/x86 tree in the last six months. It's true that Linaro, Samsung, and many other companies are interested in taking ARM forward as it's great for minimizing power consumption on embedded devices (among other things). I'm not going to speculate about whether x86 or ARM will "win the battle" or whether they will co-exist, but the fact of the matter is that x86 dominates everything from consumer laptops to web infrastructure. It's a very mature architecture, and VT-x is slowly phasing out pvops. The virt/kvm/arm tree is very recent (3 months old): ARM doesn't have virtualization extensions, so I don't know how this works yet. So, yeah: ARM definitely has a long and exciting future.
> C is single-handedly responsible for 99% of all security problems on the Internet.
Collecting evidence to back outrageous claims is left as an exercise to the reader.
> BS
I'm not interested in "transcendental superiority" arguments. CLOS doesn't have users, and hasn't influenced object systems in prevalent languages; period.
> WTF?
Factually, Java is a very popular language in industry, which requires code produced by different programmers to fit together reliably. I personally attribute it to the object system/ typesystem, although others might have a different view.
> I'm not going to speculate about whether x86 or ARM will "win the battle" or whether they will co-exist
I don't care about a 'battle'. Just most computers, probably a dozen, around me use ARM.
> Collecting evidence to back outrageous claims is left as an exercise to the reader.
That's a trivial task.
> I'm not interested in "transcendental superiority" arguments.
WTF?
> CLOS doesn't have users,
BS.
> and hasn't influenced object systems in prevalent languages; period.
No true Scotsman argument. Actually, for something that is relatively unknown, it has influenced a lot of languages and a lot of researchers. There is a ton of non-CLOS literature and systems trying to adapt stuff like mixins, the MOP, multiple dispatch, generic functions, ...
That languages like Java don't have any of that natively is not CLOS's fault. Java only recently caught up to some kind of closures. Give the Java maintainers a few more decades. Java does not even have multiple inheritance.
CLOS-style multiple dispatch is also now present in such "unknown" languages as Haskell, R, C#, Groovy, Clojure, Perl, Julia, and a few others.
To the contrary, Typed Racket is under active development and new Racket libraries are written using it. I don't know where you got the impression that it's going nowhere, but it's incorrect.