
A context switch in a modern CPU takes only a few microseconds. A GB of RAM costs less than $10. So those concerns, although valid in theory, are usually irrelevant for most web applications.

On the other hand, simplicity in a code base usually matters. Code written against an evented API, littered with callbacks, is usually harder to read and maintain than code written sequentially against a blocking I/O API.

You can recreate a sync-looking API on top of an evented architecture using async/await, but then you get the same performance characteristics as a blocking API, with all the evented complexity lurking underneath and leaking here and there. It seems to me a very convoluted way to arrive back where we started.
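A minimal sketch of the idea in Python's asyncio (illustrative handler, not anyone's production code): the handler reads top to bottom like blocking code, but every `await` is a suspension point where the event loop takes over, so the evented machinery is still there underneath.

```python
import asyncio

async def handle_client(reader, writer):
    # Reads sequentially, like blocking code...
    data = await reader.read(100)   # ...but each await yields to the event loop
    writer.write(data.upper())
    await writer.drain()
    writer.close()

async def main():
    # Under the hood this is still an evented server (epoll/kqueue),
    # even though the handler above looks synchronous.
    server = await asyncio.start_server(handle_client, "127.0.0.1", 8888)
    async with server:
        await server.serve_forever()

# To run it: asyncio.run(main())
```

The leak shows up at the edges: you cannot call `handle_client` from ordinary blocking code without going through the event loop.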




A function call takes less. And using more RAM == thrashing your caches more == slowing down. The price of RAM isn't relevant to that -- this isn't about saving money on RAM but saving cycles. Yes, yes, that's saving money per-client (just not on RAM), but you know, in a commoditized services world, that counts, and it counts for a lot.


A GB of RAM only costs less than $10 if you are buying for your unpretentious gaming rig.

A GB of ECC server RAM costs more. An extra GB of RAM in the cloud can even cost you $10/mo if you have to switch to a beefier instance type.


How much does a MB of L-n cache cost?

I don’t have the answer, but you would want to measure dollars to buy it, and nanoseconds to refill it.


That's true: if you're buying OEM RAM for Dell or HP servers, it's more like $10-20/GB. However, you can buy Crucial ECC DDR4 RAM for $6/GB, so there's a hefty OEM markup.


$10/mo is far less than the cost of thinking about the issue at all.


Yes, but. Suppose you build a thread-per-client service before you realize how much you'll have to scale it. Now you can throw more money at hardware, or much more money at a rewrite. Writing a CPS (callback-style) version from the start would have been prohibitive (unless you, or your programmers, are very good at that), but writing an async/await version from the start would not have been much more expensive than a thread-per-client one, if at all: async/await is intended to look and feel like thread-per-client while not being that.
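The "looks and feels like thread-per-client" point can be seen by putting the two handler styles side by side (hypothetical echo handlers in Python; names are illustrative):

```python
import asyncio
import socket
import threading

# Thread-per-client: blocking calls, one OS thread parked per connection.
def handle_blocking(conn: socket.socket) -> None:
    data = conn.recv(1024)       # blocks this whole thread
    conn.sendall(data)
    conn.close()

# async/await: same shape and same control flow, but each await suspends
# a cheap task on one event loop instead of parking an OS thread.
async def handle_async(reader, writer):
    data = await reader.read(1024)   # suspends this task, frees the loop
    writer.write(data)
    await writer.drain()
    writer.close()
```

The bodies are nearly line-for-line identical, which is exactly why starting with async/await costs little extra up front.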

One lesson I've learned is: a) make a library from the get-go, b) make it async/evented from the get-go. This will save you a lot of trouble down the line.


It's actually a big problem for web servers. Consider Apache, for example, which has to use one thread per connection (yes, Apache still doesn't support event-driven websockets in 2020).

Let's say you configure it for 2000 max connections (really not much): that's 2000 threads, and with the default thread stack of ~8 MB on Linux, that's ~16 GB of stack address space reserved right away. Even though only the touched pages become resident, it's a lot of memory, and all that per-thread state obliterates your caches.

You can reduce the thread stack to 1 MB (threads may overflow their stacks and crash if you go much lower), but the caches still get thrashed to death.
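The arithmetic, and the stack-shrinking trick, can be sketched directly (a toy illustration in Python; the 8 MB figure is the typical Linux default via `ulimit -s`, and it is distro-dependent):

```python
import threading

# Typical default: 8 MB of stack reserved per thread on Linux,
# so 2000 threads reserve on the order of 16 GB of address space.
n_threads = 2000
stack_mb = 8
print(f"{n_threads} threads x {stack_mb} MB = "
      f"{n_threads * stack_mb / 1024:.0f} GB of stack reserved")

# Shrink the per-thread stack to 1 MB for threads created from here on.
threading.stack_size(1 * 1024 * 1024)

def worker():
    pass  # deep recursion in here would now overflow much sooner

t = threading.Thread(target=worker)
t.start()
t.join()
```

Note that the reservation is virtual address space; only pages a thread actually touches become resident, which is why the footprint is painful but not always fatal.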

Next challenge: how do you think concurrency works on the OS with 2000 threads? Short answer: not great.

The software makes heavy use of shared memory segments, semaphores, atomics and other synchronization features. That code is genuinely complex, worse than callbacks. Then you run into trouble because these primitives are not actually efficient when contended by thousands of threads; they may even be buggy.
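A toy illustration of the contention pattern (scaled down to 8 threads; a hypothetical shared counter, not Apache's actual code): every thread must queue on the same lock, so the work serializes no matter how many cores you have, and with thousands of threads the queueing itself becomes the bottleneck.

```python
import threading

counter = 0
lock = threading.Lock()

def worker(iterations: int) -> None:
    global counter
    for _ in range(iterations):
        with lock:           # every thread serializes on this one primitive
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(8)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 80000: correct, but all 8 threads queued on one lock to get there
```

Drop the lock and the count comes out wrong; keep it and throughput collapses under contention. That's the trade-off the comment is pointing at.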


What's wrong with the Apache event worker?

https://httpd.apache.org/docs/2.4/mod/event.html


It's not quite event-based, really. It still requires one thread per connection for websockets.


Ah I see you've dipped your toes into the Sea of Apache too. Horrible software. Should have died in 2000.



