Well, that law is a good theoretical instrument.
But aside from some computations that have been deliberately designed to be impossible to parallelize, it seems to me that Amdahl's law assumes there is some constant part of the program that just can't be parallelized. I disagree with that, and I think Erlang offers the best counterargument: just because some part has to be done on one node (or thread) doesn't mean the computer can't do other work at the same time.
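For reference, the law in its usual form, where p is the fraction of the work that can be parallelized and n is the number of processors:

    S(n) = 1 / ((1 - p) + p/n)  ->  1 / (1 - p)  as n -> infinity

That fixed (1 - p) term is exactly the assumption I'm objecting to: it treats the serial fraction as a constant property of the program rather than something you can design around.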
Sure, if your mental model of computation is "do this, then that, then do these two things in parallel, then do these other things in parallel, then gather all the results," that suggests you have a problem under Amdahl's law. If your mental model is instead a set of independent nodes sending messages to each other, then when you run into a bottleneck you just spawn a few more of the most commonly used nodes. Suddenly there is no part of the program that has to run sequentially while the rest of the world waits.
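To make the spawn-more-nodes idea concrete, here's a minimal sketch, in Go rather than Erlang since channels keep it short, with a made-up workload standing in for the real bottleneck stage:

    package main

    import (
        "fmt"
        "sync"
    )

    // slowStage is the hypothetical bottleneck node: each message costs work.
    func slowStage(in <-chan int, out chan<- int, wg *sync.WaitGroup) {
        defer wg.Done()
        for msg := range in {
            out <- msg * msg // stand-in for expensive processing
        }
    }

    func main() {
        in := make(chan int)
        out := make(chan int)

        // Instead of one serial stage, spawn several copies of the
        // bottleneck node; they all drain the same mailbox.
        const replicas = 4
        var wg sync.WaitGroup
        wg.Add(replicas)
        for i := 0; i < replicas; i++ {
            go slowStage(in, out, &wg)
        }

        // Close the output once every replica has finished.
        go func() {
            wg.Wait()
            close(out)
        }()

        // Feed messages; no single node serializes the pipeline.
        go func() {
            for i := 0; i < 10; i++ {
                in <- i
            }
            close(in)
        }()

        for result := range out {
            fmt.Println(result)
        }
    }

The point is that the "slow" stage is no longer a serial section of the program; it's just a node you can replicate.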
You're not speeding up a given instance; you're "merely" running more instances or running larger problems. (In some problems the sequential portion is roughly constant, independent of the amount of data, so you can reduce its relative effect by increasing the amount of data.)
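That trade is essentially Gustafson's law: hold the serial time fixed and let the parallel work grow with the data. Compare the two for, say, a 95% parallel workload on 100 processors:

    Amdahl (fixed problem size):  S = 1 / (0.05 + 0.95/100) ~ 16.8
    Gustafson (scaled problem):   S = 0.05 + 100 * 0.95     = 95.05

The scaled speedup keeps growing with n, but only because the problem grows along with it.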
More throughput and larger data are both good things, but they're not the same as running a given problem faster.