> Fear, and respect, the absurdly difficult challenge that is writing correct concurrent code, even when your compiler is helping you out.
There are plenty of safe and easy models for writing concurrent code. Here's a famous one that's easy to overlook:
```sh
gzip -cd compressed.gz | grep error
```
On Unix, this doesn't use a temporary file. It creates two concurrent processes: the first decompresses the file and writes the output to a pipe, and the second reads from that pipe and searches for a string of text. You could call this "coroutines over a stream," I suppose.
And of course, people have been writing shell pipes for decades without concurrency errors. Unix enforces process isolation and makes sure all the data flows in one direction.
Now, there's no reason a programming language couldn't enforce similar restrictions. For example, I've spent the last few years at work writing highly concurrent Rust code, and I've never had a single case of a deadlock or memory corruption.
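Here's a minimal sketch of what that kind of enforcement can look like in practice (the log lines and the "error" filter are invented for illustration): a standard-library "mpsc" channel plays the role of the pipe. Sending a value moves ownership into the channel, so the producing thread can't keep using data the consumer is reading, and the compiler checks that for you.

```rust
use std::sync::mpsc;
use std::thread;

fn main() {
    // The channel plays the role of the Unix pipe: data flows one way.
    let (tx, rx) = mpsc::channel::<String>();

    let producer = thread::spawn(move || {
        for i in 0..3 {
            let line = format!("log line {}", i);
            // `send` *moves* the String into the channel; touching
            // `line` after this point would be a compile-time error,
            // not a data race.
            tx.send(line).expect("receiver hung up");
        }
        // Dropping `tx` closes the channel, like EOF on a pipe.
    });

    let consumer = thread::spawn(move || {
        // Iterating over the receiver blocks until data arrives and
        // ends cleanly once the sender has been dropped.
        for line in rx {
            if line.contains("error") {
                println!("{}", line);
            }
        }
    });

    producer.join().unwrap();
    consumer.join().unwrap();
}
```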
One slightly trickier problem in highly concurrent systems is error reporting. In the Unix example, either "gzip" or "grep" could fail. In the shell example, you could detect this by setting "set -o pipefail". In Erlang, you can use supervision trees. In Rust, sometimes you can use "crossbeam::scope" to automatically fail if any child thread fails. In other Rust code, or in Go, you might resort to reporting errors using a channel. And I've definitely seen channel-based code go subtly wrong—but not necessarily more wrong than single-threaded error-recovery code in C.
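To make the "crossbeam::scope" case concrete, here's a hedged sketch (the panic is simulated, and it assumes a "crossbeam" dependency in Cargo.toml): the scope joins every thread it spawned before returning, and if any of them panicked, the scope call itself returns an "Err" the caller has to handle.

```rust
fn main() {
    // crossbeam::scope joins all of its threads before returning; if
    // any of them panicked, the call returns Err, so a failing worker
    // can't be silently forgotten.
    let result = crossbeam::scope(|s| {
        s.spawn(|_| {
            // Stand-in for the decompression stage.
            panic!("simulated gzip failure");
        });
        s.spawn(|_| {
            // Stand-in for the search stage; this one succeeds.
        });
    });

    if result.is_err() {
        eprintln!("a worker thread failed");
    }
}
```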
With the right abstractions, writing concurrent code doesn't require superhuman vigilance and perfection.