The article's main argument was that some things are too risky to run without multiple layers of supervision. The extreme and current example is letting financial institutions supervise themselves. How we use computers is just one part of the problem.
Many things can run unattended - but under what circumstances should they? What are the necessary safeguards? Pilots on commercial airplanes don't do much flying; they are on board to make sure everything is safe and working.
Ultimately it's all about acceptable risk. We already know automated trading can be dangerous. But the danger lies in bad human assumptions, bad human models, and the widespread idea that somehow no human being remains responsible.
As a complete aside - I worry whenever I hear cheerful talk about "the drones we send to kill militants". That kind of convenient and unaccountable power over life and death in other countries is both morally suspect and open to abuse. Killing by remote control is still killing, and relying on technological quick fixes to "solve" terrorism is a dangerous strategy.
Is he crazy? Of course ALL airplanes will eventually fly solo, and some already do - like the drones we send to kill militants in Pakistan.
And it will be a better world when airplanes, surgery, tollbooths, and many other things are handled by computers.