
> Some problems demand a lot of concurrency. The canonical example, described by Dan Kegel as the C10K problem back in 1999, is a web server connected to tens of thousands of concurrent users. At this scale, threads won’t cut it—while they’re pretty cheap, fire up a thread per connection and your computer will grind to a halt.

Try it. It'll probably work fine. It may be very expensive memory-wise, but it's easy to get a machine with a lot of memory.



It's not just the memory. As the number of active OS threads grows, each thread starts to respond more and more slowly.

It's been tried, periodically. Still sucks.


Write a little program that starts up 10k threads that just wait. The other tasks on the machine won't be any slower once they're set up.

Of course, if they're doing real work they'll be using CPU time, but that's true of any scheme you might pick.
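For anyone who wants to try it, here's a rough sketch in Go. Everything in it is purely illustrative: the 10k figure, the one-minute sleep, and the use of runtime.LockOSThread to force one dedicated OS thread per waiting goroutine (it also bumps the runtime's default 10,000-thread cap so the experiment doesn't trip it):

    package main

    import (
        "fmt"
        "runtime"
        "runtime/debug"
        "sync"
        "time"
    )

    func main() {
        const n = 10000
        debug.SetMaxThreads(n + 100) // Go refuses to grow past 10,000 threads by default

        var ready sync.WaitGroup
        ready.Add(n)
        for i := 0; i < n; i++ {
            go func() {
                runtime.LockOSThread() // dedicate an OS thread to this goroutine
                ready.Done()
                select {} // just wait, forever
            }()
        }
        ready.Wait()
        fmt.Println("~10k idle OS threads alive; check top/htop and see if the box feels slower")
        time.Sleep(time.Minute)
    }

The LockOSThread call is only there because a plain `go` statement gives you a goroutine, not an OS thread; without it the runtime would multiplex everything onto a handful of worker threads and the experiment wouldn't be measuring what it claims to.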


Obviously. My point is that spawning 10,000 "processes" (green threads / fibers, really) on the Erlang BEAM VM is barely noticeable, e.g. in web server mode. Everything gets a tiny bit laggier but chugs along nicely. The same goes for Golang's goroutines, though not quite to the same extent (the Go runtime doesn't tolerate numbers that large as easily as Erlang's runtime does).

Whereas spawning native OS threads (not sure about the 10k number; it could be even more on good hardware these days) and having them all do stuff is going to lag a whole lot more due to context switches.

So you know, apples to apples, but some apples are much better than others.
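As a rough illustration of that difference, the same experiment with plain goroutines is just the sketch above minus the LockOSThread call (again, the numbers are purely illustrative):

    package main

    import (
        "fmt"
        "sync"
        "time"
    )

    func main() {
        const n = 10000
        var ready sync.WaitGroup
        ready.Add(n)
        for i := 0; i < n; i++ {
            go func() {
                ready.Done()
                time.Sleep(time.Minute) // just wait; a parked goroutine costs a few KB
            }()
        }
        ready.Wait()
        fmt.Println("~10k goroutines parked on a handful of OS threads")
        time.Sleep(time.Minute)
    }

Here all 10k goroutines are multiplexed onto roughly GOMAXPROCS worker threads, so while they wait there is no per-connection kernel thread or context switch to pay for.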



