There appears to be strong opposition to weapons systems that can autonomously decide to open fire, but that opposition is rarely explicitly evaluated against the alternative.
I wonder: why does it appear preferable to have a human control a weapon?
Is it because we assume the human is capable of compassion and empathy? If so, what about anger, fear, prejudice and hatred?
It seems to me that a machine with a clear mission objective, rules of engagement, and criteria for friend-civilian-foe identification may actually be preferable in terms of minimizing unnecessary war casualties.
Ultimately, it is humans who would program these machines, and you get to do a lot more calm thinking when writing code than when squeezing a trigger. Also, code can be reviewed and tested; a split-second decision cannot.
Moreover, down the road, when humanity reaches a state where most wars are fought by machines against machines, the pointlessness of the exercise may become a lot more apparent.
> Moreover, down the road, when humanity reaches a state where most wars are fought by machines against machines, the pointlessness of the exercise may become a lot more apparent.
Dial 1-900-you-wish. What will happen is that the richer part of the world will fight wars against the poorer part of the world, which won't be able to afford the machines.
This gets rid of the biggest thing holding back the next big war: the fact that your own troops are also in danger.
How many corporations have the GDP of a small country? What if corporations could fight wars themselves? They could with drones.
I read somewhere that there's a big oil reserve in Mexico. However, corporations are unwilling to tap it because of the cartels and the dangers they pose. Imagine the slippery slope if BP decided they'd take care of the cartels themselves.
"What will happen is that the richer part of the world will fight wars against the poorer part of the world where they won't be able to afford the machines."
This is the principal reason why "drones, OK; land mines, not OK." A land mine is a drone that waits for you. It is simple and cheap, and any armed group can afford them. Drones require high technology, including satellites to relay video links and commands. The "human in the loop" is just a bureaucrat rubber-stamping, in real time, decisions made by a de facto autonomous weapon.
A drone also isn't there thirty years later, when the country has a different name and a different form of government and an eight-year-old puts an errant foot where a target was once designated.
>Is it because we assume the human is capable of compassion and empathy?
No - it is simply because a human is self-aware and will act selfishly. By holding a human accountable for their actions, we can reasonably expect them to follow the rules.
Human selfishness is a weaker guarantee than programming.
Humans can still choose to abandon or misinterpret their self-interest. Strong emotions, hatred, prejudice, or a belief that punishment won't follow or that the deed can be covered up make humans likely to do so under some circumstances.
OTOH, a machine has no choice at all. It always follows the rules.