It has been my experience that the more a person attempts to understand and control machine intelligence, the more she grows to fear it and its potential.
The only people who claim that machine intelligence is dangerous are the ones on the outside looking in. Everyone who actually works on AI and understands it (hint: it's just search and mathematical optimization) thinks the fear surrounding it is absurd.
> Everyone who actually works on AI and understands it thinks the fear surrounding it is absurd.
This isn't true. Please don't state falsehoods. Stuart Russell, Michael Jordan, Shane Legg. Those are just the ones mentioned elsewhere in this thread.
How many of those AI researchers are actually working on AGI, though? As you mentioned, most of them are in fact just developing search and optimisation algorithms. Personally, I believe the fields of neuroscience and biology are more likely to produce the first AGI. People who claim machine intelligence is dangerous are not scared of k-means clustering or neural networks; they are scared of a hypothetical general intelligence algorithm which hasn't been discovered yet. One could argue that the fear is absurd because AGI is unlikely to happen within our lifetime, but it's hard to argue that it will not happen eventually and be a potential threat.