I'd have a problem if they were autonomously picking and choosing who lives or dies - like autonomous driving algorithms.
If there's a police officer trained in controlling and using the robot then there shouldn't be any qualms about it.
It's no different from a soldier controlling a UAV from across the world - just in this case I'd expect a lot more auditing and scrutiny if the tech's used domestically.
I have no doubt that recorded video of controversial incidents could be "lost due to computer malfunction" or some other (un)fortunate mishap.
The adrenaline argument is interesting. Conversely, though, if the intent is to de-escalate a situation, it's harder to establish trust via an armed machine.
Supposedly US drone pilots are remarkably prone to PTSD. In the moment you may just be looking at a monitor, but your brain will eventually register what happened. Also, watching someone explode or get Ginsu'd in HD 60fps super-zoom is probably less pleasant than pressing the bomb release on your joystick at 30k ft and waiting a week for the BDA (bomb damage assessment) to tell you that you missed.
I'm not sure there is much difference between this and a sharpshooter. They are both using deadly force to stop a deadly threat. Both involve a human being pulling a trigger.
Automated killing would be beyond the pale. Now THAT is the stuff of dystopian nightmares like Psycho Pass and Fahrenheit 451.
Sharpshooters can be hacked; we just don't call it hacking. We call it extortion, bribery, corruption, deception, hostage-taking, and brainwashing. SWATting, for instance.