How do you solve for deadlocks at the compiler level? Even if all your memory access is perfectly safe, you can still deadlock on external resources if you aren't paying attention.
That's what I mean by fear and respect for concurrent programming. That's the problem that hasn't been solved.
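A minimal Go sketch of the point, assuming the classic lock-ordering scenario (the `account`/`transfer` names are illustrative, not from any real codebase): every memory access below is perfectly safe under the mutexes, and no checker complains, yet the program can still hang.

```go
package main

import (
	"sync"
	"time"
)

type account struct {
	mu      sync.Mutex
	balance int
}

// transfer locks src first, then dst. Two concurrent transfers in
// opposite directions can each grab their first lock and then wait
// forever for the other's: no unsafe memory access, still a deadlock.
func transfer(src, dst *account, amount int) {
	src.mu.Lock()
	defer src.mu.Unlock()
	time.Sleep(time.Millisecond) // widen the race window for the demo
	dst.mu.Lock()
	defer dst.mu.Unlock()
	src.balance -= amount
	dst.balance += amount
}

func main() {
	a, b := &account{balance: 100}, &account{balance: 100}
	go transfer(a, b, 10)   // locks a, then b
	go transfer(b, a, 10)   // locks b, then a
	time.Sleep(time.Second) // both goroutines are likely stuck by now
}
```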
Deadlocks are a solved problem. Technically, they can't even exist in any concurrency model that doesn't share anything. What can exist is processes waiting for messages from each other, but that's not a deadlock; it's valid behavior, and it's only potentially problematic without timeouts. Asynchronous message passing with event-driven/reactive semantics further makes it impossible to block waiting for a specific message. In practice, strict event-driven semantics aren't even necessary for this to never be a problem.
Deadlocks are not restricted to shared memory communication. Two Unix processes talking via a socket pair can trivially deadlock (for example, if they are both blocked waiting for the other side to speak first).
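A minimal sketch of that scenario in Go, using net.Pipe as an in-memory stand-in for a Unix socket pair (the `peer` function and its toy protocol are illustrative):

```go
package main

import (
	"fmt"
	"net"
)

// peer implements a buggy protocol where each side expects the other
// to speak first. No shared memory is involved anywhere.
func peer(conn net.Conn, done chan<- string) {
	buf := make([]byte, 64)
	n, err := conn.Read(buf) // blocks until the peer writes; it never does
	if err != nil {
		done <- err.Error()
		return
	}
	conn.Write([]byte("reply"))
	done <- string(buf[:n])
}

func main() {
	c1, c2 := net.Pipe() // stand-in for socketpair(2)
	done := make(chan string)
	go peer(c1, done)
	go peer(c2, done)
	fmt.Println(<-done) // never reached: both reads wait forever
}
```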
Asynchronous systems can deadlock as well; it's just much harder to debug, because the debugger won't show an obvious system thread blocked on some system call. The deadlocked threads of execution still exist, but they have been CPS-transformed and hidden from view (just some callback waiting forever on some wait queue).
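A minimal Go sketch of that "invisible" flavor of deadlock, using parked channel receives as a stand-in for callbacks sitting on wait queues (the `ping`/`pong` names are illustrative):

```go
package main

import "time"

func main() {
	ping := make(chan struct{})
	pong := make(chan struct{})

	go func() {
		<-ping // waits for the other task to go first...
		pong <- struct{}{}
	}()
	go func() {
		<-pong // ...which is waiting for this one
		ping <- struct{}{}
	}()

	// Keep the process looking alive so the runtime's trivial all-asleep
	// detector stays quiet: the wait cycle above is now silently stuck,
	// with no thread blocked in a system call for a debugger to point at.
	for {
		time.Sleep(time.Second)
	}
}
```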
It's not useful to use the same term for very different kinds of things. Shared-resource deadlocks are common and disastrous problems. Share-nothing mutual blocking is uncommon, is not necessarily a problem at all, and can be completely harmless and automatically recovered from when it is a problem. For example, spawning actors that wait without timeouts would be absolutely fine; the parent can own all the timeouts and kill the children.
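A minimal sketch of that supervision pattern, translated to Go with contexts (the `worker` and channel names are illustrative; in an actor system the parent would kill the child outright rather than cancel a context):

```go
package main

import (
	"context"
	"fmt"
	"time"
)

// worker blocks indefinitely on a message, with no local timeout.
// The parent owns the deadline and kills it via the context.
func worker(ctx context.Context, inbox <-chan string, out chan<- string) {
	select {
	case msg := <-inbox:
		out <- "got: " + msg
	case <-ctx.Done(): // parent decided we took too long
		out <- "killed by parent"
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 100*time.Millisecond)
	defer cancel()

	inbox := make(chan string) // nobody ever sends on this
	out := make(chan string, 1)
	go worker(ctx, inbox, out)

	fmt.Println(<-out) // prints "killed by parent" after the timeout
}
```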
Two processes blocking on a socket is not a deadlock. Surely there are timeouts on both sides, because using sockets without timeouts is just ignorance, and both will simply time out and move on.
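In Go, for instance, that timeout is a per-operation deadline on the connection; a minimal sketch, again using net.Pipe as a stand-in for a real socket:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	c1, c2 := net.Pipe() // stand-in for a real socket pair
	defer c2.Close()

	// The other side never speaks; instead of hanging forever, the
	// read fails with a timeout error after 100ms and we move on.
	c1.SetReadDeadline(time.Now().Add(100 * time.Millisecond))
	buf := make([]byte, 64)
	_, err := c1.Read(buf)
	fmt.Println(err) // a deadline-exceeded error, not a hang
}
```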
Also, strictly statically declared event handlers per actor are 100% free of mutual blocking and deadlock, because they can't wait for messages in a way that blocks other event handlers.
The term deadlock has been used for message passing issues since the dawn of time. It is literally the same issue.
Using timeouts to paper over issues is just wrong. I accept that timeouts are necessary to deal with network issues (and a timeout should cause the connection to be dropped, so it won't solve the deadlock issue), but they are certainly not required for in-application message passing.
Finally, if an actor won't send a message until it has received another one, I fail to see how statically declared handlers will help.
> Finally, if an actor won't send a message until it has received another one, I fail to see how statically declared handlers will help.
Think of it as reacting to messages, not waiting. In that model, actors can of course react by sending messages, but they can't have a special waiting state for a specific message, which makes it impossible to block other handlers. I'm not sure why this is hard to understand.
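A minimal Go sketch of such an actor, assuming a mailbox plus a handler table that is fixed at spawn time (all names are illustrative): there is no operation for "receive only message X now", so no handler can starve another.

```go
package main

import "fmt"

type message struct {
	kind    string
	payload string
}

type actor struct {
	mailbox  chan message
	handlers map[string]func(*actor, message) // static: fixed at spawn
	state    string
}

// run dispatches every incoming message to its handler; a handler can
// update state or send messages, but it cannot pause the loop to await
// a particular reply.
func (a *actor) run(done chan<- struct{}) {
	for msg := range a.mailbox {
		if h, ok := a.handlers[msg.kind]; ok {
			h(a, msg) // react and return; never wait
		}
	}
	close(done)
}

func main() {
	a := &actor{mailbox: make(chan message, 16), state: "S1"}
	a.handlers = map[string]func(*actor, message){
		"ping": func(a *actor, m message) { fmt.Println("pong:", m.payload) },
		"set":  func(a *actor, m message) { a.state = m.payload },
	}

	done := make(chan struct{})
	go a.run(done)

	a.mailbox <- message{"ping", "hello"}
	a.mailbox <- message{"set", "S2"}
	close(a.mailbox)
	<-done
	fmt.Println("final state:", a.state)
}
```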
We do have this problem solved in every possible way. But it's such a non-issue with the actor model that there is no point sacrificing any flexibility for it.
Forget about waiting. Think about state machines. Let's say there is a rule that, if the machine is in state S1, on reception of message M, it sends message M and moves to state S2. This is the only rule for state S1. Now if two actors implementing this state machine and exchanging messages find themselves in state S1 at the same time, they are stuck. This is a bug in the state machine specification, of course, and I would call it a deadlock. What would you call it? How would the actor model statically prevent you from implementing such a state transition rule?
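A minimal Go sketch of exactly that specification, with two copies of the machine wired to each other (the `fsm` type and names are illustrative): every mailbox still accepts messages and every handler is still installed, yet with no M in flight the global system never moves.

```go
package main

import (
	"fmt"
	"time"
)

type fsm struct {
	name  string
	state string
	inbox chan string
	peer  *fsm
}

func (f *fsm) run() {
	for msg := range f.inbox {
		// The only rule: in S1, on M, send M to the peer and go to S2.
		if f.state == "S1" && msg == "M" {
			f.peer.inbox <- "M"
			f.state = "S2"
			fmt.Println(f.name, "-> S2")
		}
	}
}

func main() {
	a := &fsm{name: "a", state: "S1", inbox: make(chan string, 8)}
	b := &fsm{name: "b", state: "S1", inbox: make(chan string, 8)}
	a.peer, b.peer = b, a
	go a.run()
	go b.run()

	// Nobody injects the first M: both machines sit in S1 forever.
	// Inject a single M into either mailbox and both would advance.
	time.Sleep(500 * time.Millisecond)
	fmt.Println("no transitions happened:", a.state, b.state)
}
```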
This is why I'm talking about a specific model with static handlers per actor, where you can't choose handlers dynamically depending on the state you are in. Whether you are in state S1 or S2, all handlers are still able to receive messages; what they can't do is run at the same time.
It can receive all the messages you want, but if the only message that would cause a state transition and an outgoing message is M, then it is still stuck.
I mean, I'm no expert, but I guess you could statically analyze the state machine and figure out, given a set of communicating actors, which sequences of messages would lead to a global state from which there is no progress. I assume that, because message ordering is not deterministic, the analysis is probably not easy to do.
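For this particular two-actor machine the state space is small enough to sketch the analysis: enumerate the reachable global states (both local states plus in-flight message counts) and flag the ones with no enabled transition. A toy Go version, under the simplifying assumption that M is the only message type (which conveniently sidesteps the ordering problem):

```go
package main

import "fmt"

// global captures everything the analysis needs: each actor's local
// state and how many M messages are in flight toward each of them.
type global struct {
	stateA, stateB string
	toA, toB       int
}

// step returns every successor of g under the single rule: an actor in
// S1 with a pending M consumes it, sends M to its peer, moves to S2.
func step(g global) []global {
	var next []global
	if g.stateA == "S1" && g.toA > 0 {
		next = append(next, global{"S2", g.stateB, g.toA - 1, g.toB + 1})
	}
	if g.stateB == "S1" && g.toB > 0 {
		next = append(next, global{g.stateA, "S2", g.toA + 1, g.toB - 1})
	}
	return next
}

func main() {
	start := global{"S1", "S1", 0, 0} // nobody has sent the first M
	seen := map[global]bool{start: true}
	queue := []global{start}

	// Breadth-first search over reachable global states.
	for len(queue) > 0 {
		g := queue[0]
		queue = queue[1:]
		succ := step(g)
		if len(succ) == 0 {
			fmt.Printf("stuck state reachable: %+v\n", g)
		}
		for _, n := range succ {
			if !seen[n] {
				seen[n] = true
				queue = append(queue, n)
			}
		}
	}
}
```

Run on the empty-mailbox start state, it immediately flags {S1, S1, 0, 0} as stuck, which is the deadlock described above; with more message types and real interleavings the state space blows up, which is presumably why this analysis is hard in general.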
Well, this is a limit all models have. You can abuse memory safety the same way, for example by using indices into bounds-checked arrays as raw pointers.
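A minimal Go sketch of that escape hatch (the `arena` type and its methods are illustrative): every access is bounds-checked and memory-safe, yet a stale index behaves exactly like a dangling pointer at the logical level.

```go
package main

import "fmt"

type arena struct {
	slots []string
	free  []int // recycled slot indices
}

func (a *arena) alloc(v string) int {
	if n := len(a.free); n > 0 {
		i := a.free[n-1]
		a.free = a.free[:n-1]
		a.slots[i] = v
		return i
	}
	a.slots = append(a.slots, v)
	return len(a.slots) - 1
}

func (a *arena) release(i int) { a.free = append(a.free, i) }

func main() {
	var a arena
	user := a.alloc("alice")
	a.release(user)             // "free" the slot...
	other := a.alloc("mallory") // ...and it gets recycled

	// Logical use-after-free: the stale handle now reads someone
	// else's data, and the language never complained once.
	fmt.Println(a.slots[user], a.slots[other]) // mallory mallory
}
```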
That's what I mean by fear and respect for concurrent programming. That's the problem that hasn't been solved.