The term deadlock has been used for message passing issues since the dawn of time. It is literally the same issue.
Using timeouts to paper over issues is just wrong. I accept that timeouts are necessary to deal with network issues (and a timeout should cause the connection to be dropped, so it won't solve the deadlock issue), but they are certainly not required for in-application message passing.
Finally, if an actor won't send a message until it has received another one, I fail to see how statically declared handlers will help.
> Finally, if an actor won't send a message until it has received another one, I fail to see how statically declared handlers will help.
Think of it as reacting to messages, not waiting. In that model actors can of course react by sending messages, but they can't enter a special waiting state for a specific message, which makes it impossible for one handler to block the others. I'm not sure why this is hard to understand.
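To make that concrete, here's a minimal sketch in Go (the message kinds and names are mine, purely for illustration): the actor has one fixed dispatch table and reacts to whatever arrives; there is no construct for parking the actor until a specific message shows up.

```go
package main

import "fmt"

// A toy actor with statically declared handlers: it drains one mailbox and
// reacts to whatever arrives. There is no "wait for message X" operation,
// so no handler can block the actor waiting for a specific message.
type Msg struct{ Kind string }

func runActor(mailbox <-chan Msg, out chan<- Msg) {
	for m := range mailbox {
		switch m.Kind { // the full, fixed set of handlers
		case "ping":
			out <- Msg{Kind: "pong"} // react by sending, never by waiting
		case "stop":
			return
		}
	}
}

func main() {
	in, out := make(chan Msg, 4), make(chan Msg, 4)
	go runActor(in, out)
	in <- Msg{Kind: "ping"}
	fmt.Println((<-out).Kind) // pong
	in <- Msg{Kind: "stop"}
}
```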
We do have this problem solved in every possible way. But it's such a non-issue with the actor model that there is no point sacrificing any flexibility for it.
Forget about waiting. Think about state machines. Let's say there is a rule that, if the machine is in state S1, on reception of message M it sends message M and moves to state S2. This is the only rule for state S1. Now if two actors implementing this state machine and exchanging messages find themselves in state S1 at the same time, they are stuck: each is waiting to receive an M that only the other can send. This is a bug in the state machine specification of course, and I would call it a deadlock. What would you call it? How would the actor model statically prevent you from implementing such a state transition rule?
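For the record, here is that rule transcribed into a runnable Go sketch (the channel plumbing is mine). Go's runtime happens to detect the resulting global deadlock at run time, but nothing rejects the rule statically:

```go
package main

// Two actors implementing the rule "in S1, on reception of M, send M and
// move to S2". Each blocks receiving an M that only the other can send,
// so neither ever progresses past S1.
func actor(in <-chan struct{}, out chan<- struct{}) {
	<-in              // S1: wait for M
	out <- struct{}{} // send M, move to S2 (never reached)
}

func main() {
	aToB := make(chan struct{})
	bToA := make(chan struct{})
	go actor(bToA, aToB)
	go actor(aToB, bToA)
	select {} // runtime aborts: "all goroutines are asleep - deadlock!"
}
```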
This is why I'm talking about a specific model with static handlers per actor, where you can't choose handlers dynamically depending on the state you are in. Whether you are in state S1 or S2, all handlers are still able to receive messages; what they can't do is run at the same time.
It can receive all the messages you want, but if the only message that would cause a state transition, and a message to be sent out, is M, then it is still stuck.
I mean, I'm no expert, but I guess you could statically analyze the state machine and figure out, given a set of communicating actors, which sequences of messages would lead to a global state from which there is no progress. I assume that, because message ordering is not deterministic, the analysis is probably not easy to do.
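A brute-force version of that analysis is easy enough to sketch, though: enumerate the reachable global states of the composed machines (local states plus messages in flight) and flag any state with no enabled transition. Here's a hypothetical sketch for the two-actor machine above (the state representation is mine):

```go
package main

import "fmt"

// Global state of two copies of the machine "in S1, on M: send M, go to S2":
// each actor's local state plus the count of M messages in flight to it.
type global struct {
	a, b     string
	toA, toB int
}

// successors enumerates every transition enabled in g.
func successors(g global) []global {
	var next []global
	if g.a == "S1" && g.toA > 0 { // A consumes M, sends M to B, moves to S2
		next = append(next, global{"S2", g.b, g.toA - 1, g.toB + 1})
	}
	if g.b == "S1" && g.toB > 0 { // the symmetric rule for B
		next = append(next, global{g.a, "S2", g.toA + 1, g.toB - 1})
	}
	return next
}

func main() {
	seen := map[global]bool{}
	work := []global{{"S1", "S1", 0, 0}} // both start in S1, nothing in flight
	for len(work) > 0 {
		g := work[0]
		work = work[1:]
		if seen[g] {
			continue
		}
		seen[g] = true
		next := successors(g)
		if len(next) == 0 && (g.a == "S1" || g.b == "S1") {
			fmt.Printf("stuck global state: %+v\n", g) // deadlock found
		}
		work = append(work, next...)
	}
}
```

This trivially reports the both-in-S1 state as stuck. The catch is exactly the nondeterministic ordering: exploring all interleavings blows up combinatorially, which is why real model checkers put so much work into reducing the state space.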
Well, this is a limit all models have. You can abuse memory safety the same way, for example by using indices into bounds-checked arrays as raw pointers.
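Something like this hypothetical snippet is accepted by any memory-safe language: every access passes the bounds check, yet the stored index becomes a logically dangling reference the moment the slot is reused.

```go
package main

import "fmt"

func main() {
	pool := []string{"alice"} // arena of objects, indices as hand-rolled pointers
	handle := 0               // "pointer" to alice
	pool[0] = "mallory"       // slot recycled for a new object
	fmt.Println(pool[handle]) // bounds check passes, but the handle is stale
}
```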