Did you just say single-threaded, concurrently reasoned applications are more performant than multithreaded applications?
I'm trying to reason about how this could be possible. I'm still amazed by the performance you can get out of async, non-blocking application code, so don't take this the wrong way; I'm genuinely trying to understand here.
I'm also a little confused by OS rescheduling of threads being treated as an inconvenience; to me it's a feature that prevents any one application from being too hoggish.
I can see your point that it's nice to have these points labelled with await, but I'm struggling to pinpoint a time I've asked myself what thread scheduling might mean for my code, other than just assuming that another thread could be anywhere doing anything, which I think is the correct approach, no?
Locks, by the way, are a total non-issue with the right API/language support.
No, I didn't say "single-threaded, concurrently reasoned applications are more performant than multithreaded applications"; I was highlighting this style of concurrency from the programmer's perspective. But yes, for certain types of applications it can be more performant.
Naturally, if your application is computationally intensive, a single thread can't compete with a multithreaded application. But for applications that spend most of their time in slow, blocking I/O, converting them to a single-threaded design that uses non-blocking I/O significantly reduces overhead. Compare a traditional web server that uses one thread per request with blocking I/O against one that uses an event loop with non-blocking I/O, and you'll see why the latter is easier on system resources. Again, this isn't a panacea, and for some applications you do have to use threads. I'm just pointing out an omission in the article.
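To make the comparison concrete, here's a minimal sketch of the same request served two ways on a single thread. The handlers, the /blocking route, and the 100 ms delay are all hypothetical stand-ins for slow I/O, not anything from the article:

```typescript
import * as http from "http";

// Blocking style: the thread busy-waits until the "I/O" finishes.
// While this runs, the event loop can't serve any other request.
function handleBlocking(res: http.ServerResponse): void {
  const start = Date.now();
  while (Date.now() - start < 100) { /* simulated blocking I/O */ }
  res.end("done (blocking)\n");
}

// Non-blocking style: register a continuation and return immediately;
// the event loop keeps accepting requests while the 100 ms "I/O" is pending.
function handleNonBlocking(res: http.ServerResponse): void {
  setTimeout(() => res.end("done (non-blocking)\n"), 100);
}

http.createServer((req, res) => {
  if (req.url === "/blocking") handleBlocking(res);
  else handleNonBlocking(res);
}).listen(8080);
```

Hit /blocking with many concurrent requests and throughput collapses to one at a time; the non-blocking path serves them all concurrently on the same single thread.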
Thanks for the clarification, I see your angle now.
I'm not trying to debate this; I'm pretty sure we're just reasoning about facts we both agree on here.
I was aiming for a more general stance, but if we're talking about web servers, there is one thing we might disagree on: the net benefit of non-blocking I/O in that specific scenario.
Let me see if I can explain: individual requests may involve a lot of slow, blocking I/O, but these are handled asynchronously at the server level with a thread per request. So the blocking inside a request is limited to that request only (ignoring thread-pool limits).
While non-blocking I/O might produce more performant request-level handling in some cases, most, or at least a substantial amount, of web request logic depends on chaining those blocking I/O calls one after the other, e.g. get the thing, modify it, return it.
With blocking I/O you execute and reason about the code sequentially; with non-blocking I/O you reason about it through callbacks or syntactic sugar around promises, precisely because sequential code is easier to reason about.
The point of difference is that with non-blocking I/O you take on a lot of overhead in the event loop and in the implementation, all to produce sequentially executing code that doesn't block other requests.
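For illustration, here's that get/modify/return chain written both ways; fetchThing and saveThing are hypothetical stand-ins for slow I/O, not anything from the thread:

```typescript
// Hypothetical stand-ins for slow I/O (database, HTTP, etc.).
type Thing = { id: string; value: number };

const fetchThing = (id: string): Promise<Thing> =>
  new Promise<Thing>((resolve) => setTimeout(() => resolve({ id, value: 1 }), 50));

const saveThing = (_thing: Thing): Promise<void> =>
  new Promise<void>((resolve) => setTimeout(resolve, 50));

// Promise-chain / callback style: the get -> modify -> return flow is there,
// but it's spread across nested closures.
function handleRequestChained(id: string): Promise<Thing> {
  return fetchThing(id).then((thing) => {
    thing.value += 1;                          // modify
    return saveThing(thing).then(() => thing); // return
  });
}

// async/await: the same chain reads sequentially, yet every await is a point
// where the event loop can interleave other requests.
async function handleRequest(id: string): Promise<Thing> {
  const thing = await fetchThing(id); // get
  thing.value += 1;                   // modify
  await saveThing(thing);             // save
  return thing;                       // return
}
```

The logic is the same sequential chain either way; the awaits just mark where other requests can be interleaved, which is exactly the overhead-for-throughput trade described above.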
This is sort of why a few colleagues and I came to the conclusion that PHP is still pretty damn decent for web stuff.
Non-blocking is great for concurrency where it actually results in parallelism, but if it doesn't, there isn't much gain, just added complexity.
All this needs to be taken with a grain of salt. Node has async/await for a reason, and I've seen a few different implementations of non-blocking PHP as well.
The choice really comes down to which style you find yourself benefiting from on the regular.
Bleh, that needs about three rounds of editing before it makes sense, something I don't have time for. Sorry for the ramble!
it's a double-edged sword. if you can trust the application code to schedule itself effectively then it's good, but you can also get bugs and perf issues where a synchronous chunk of code blocks progress on everything (see js blocking the main thread and causing the ui to freeze)
the pro is that your async code explicitly defines the points where a context switch is okay, since you're blocking on something anyway. this could be good for perf if context switching in the middle of a synchronous operation is expensive.
the con is that your async code might not cede control often enough to allow other coroutines to make progress.
so yes, you can have something hogging the runtime, but in the context of an application that you control as a whole, this is something you can avoid or fix if necessary.
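a quick sketch of that trade-off (expensiveStep is a hypothetical stand-in for real CPU work): the first version never yields, so nothing else on the event loop runs until it returns; the second cedes control every so often.

```typescript
// hypothetical stand-in for real CPU work
function expensiveStep(n: number): number {
  return Math.sqrt(n) * Math.sin(n);
}

// hogs the runtime: no await inside the loop, so no other coroutine
// (or ui event, in a browser) makes progress until it returns
async function hogTheLoop(items: number[]): Promise<number> {
  let total = 0;
  for (const n of items) total += expensiveStep(n);
  return total;
}

// cedes control periodically: awaiting a zero-delay timer lets the
// event loop run other queued work between batches
async function cooperative(items: number[]): Promise<number> {
  let total = 0;
  for (let i = 0; i < items.length; i++) {
    total += expensiveStep(items[i]);
    if (i % 1000 === 0) await new Promise((r) => setTimeout(r, 0));
  }
  return total;
}
```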
at the OS level this might not make sense, because you have to assume that applications are adversarial and will try to hog CPU time...