What's wrong with wanting a “human in the loop”? (warontherocks.com)
75 points by Symmetry on June 25, 2022 | 84 comments



In all considerations of these arguments, the 2011 Iran–U.S. RQ-170 incident [1] should remain foremost. It is to military systems engineering what the Tacoma Narrows Bridge [2] is to civil engineering and the Therac-25 [3] is to software engineering.

Axiomatically: Any system, no matter how well guarded, can and eventually will be taken by an enemy and used against you.

That's why we:

Don't take a knife to a fight if you're not trained to use it.

Don't build offensive cyberweapons your civilian population is not equipped to defend against (looking at you, US Cyber Command).

[1] https://en.wikipedia.org/wiki/Iran%E2%80%93U.S._RQ-170_incid...

[2] https://en.wikipedia.org/wiki/Tacoma_Narrows_Bridge_(1940)

[3] https://en.wikipedia.org/wiki/Therac-25


My greatest concern today is the militarization of space going on right now[0] (the Space Development Agency). The systems engineer behind it, Michael D. Griffin[1], seems to think the US can "win" all future wars by creating a space-based missile defense. Decisions are made automatically in milliseconds, with no room for human input.

SpaceX's participation[2] is particularly concerning (they just built and launched four "secret" satellites for it last week), and employees are now complaining about it on Glassdoor[3].

Elon talks a lot about AI / Skynet, and it appears there is a serious attempt to construct it.

[0] https://crsreports.congress.gov/product/pdf/IF/IF11623

[1] https://en.m.wikipedia.org/wiki/Michael_D._Griffin#Career

[2] https://mobile.twitter.com/planet4589/status/153896005141464...

[3] https://www.glassdoor.com/Reviews/Employee-Review-SpaceX-RVW...


If you're worried about the militarization of space, you're six decades behind the times. Space has always been militarized!

We've run spy satellites since the '60s, and missile-detection satellites shortly after. Most leading space-faring nations have also tested satellites that are technically dual use - one man's repair mission is another woman's satellite-destruction capability. And then there's anti-satellite weaponry, of course. The DoD is a big user of SATCOM, and GPS, of course, is all about military targeting and coordinates.

Every weather satellite is likewise an incredibly useful data provider for planning air campaigns and an integral part of battle planning. Mix that with terrestrial and in-situ data sources, both classified and unclassified, and you have situational understanding at a level of detail enormously useful to the military.


The projects now are at a level of militarization comparable only to the (failed) Strategic Defense Initiative in the late '80s. Today it is the pursuit of artificially intelligent targeting in such a system that makes it particularly threatening to humanity.

I've observed that Mike (and other supporters of this) are very religious. I suspect they are OK with gambling with our collective future. Maybe they believe God will intervene or something. Well, some of us don't want to play that game.


I don't understand the point you're trying to make, other than that these are famous events in engineering. What does a bridge collapsing because of a structural defect, or a software bug causing higher doses of radiation, have to do with systems being taken by enemies and used against you?


They use the Therac-25 story to scare young CS students into considering the broader implications of their work, i.e., people might die. The others might also fall under 'popular professional cautionary tales.'


Interestingly, in talking to a developer who worked in defense: they don't say "if you get your code wrong, people might die" - they say "the wrong people might die."


The basis of guerrilla warfare is using the stronger opponent's strength against them. Farmers come with scythes and staves and capture a rifle. Farmers come with scythes, staves and a rifle and capture more rifles. With rifles and the element of surprise, artillery is captured.

It goes back to Sun Tzu, and up to the present day. One of its most recent manifestations is the US equipment captured by the Taliban.


The article about the RQ-170 states that the aircraft uses inertial guidance for its flight path so as to avoid GPS spoofing attacks. But then no explanation is given for how that might have failed. Is anything else known about how the attack worked?


I looked far and wide, and there's nothing. However, inertial guidance is subject to drift, so if you spoof the GPS slowly enough you can get around inertial guidance as a protection.

The most interesting part of the RQ-170 story, however, is how they managed to track it accurately enough for GPS spoofing to begin with - you need a track accurate to a few dozen meters to do that, and the aircraft is supposed to be stealthier than even the stealthiest fighter jets, which should make that impossible.
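
To make the "spoof slowly enough" idea concrete, here's a rough sketch in Python. Every number in it (drift rate, check interval, spoof rates) is invented for illustration; real INS/GPS integration is vastly more complex than this.

    # Hypothetical illustration of slow GPS spoofing staying under an INS sanity check.
    # All figures below are assumptions made for this sketch, not real system parameters.

    INS_DRIFT_RATE = 0.5    # assumed inertial drift the filter tolerates, meters/second
    CHECK_INTERVAL = 1.0    # seconds between GPS/INS consistency checks
    SPOOF_FAST = 5.0        # injected position error, meters/second (a crude spoof)
    SPOOF_SLOW = 0.3        # injected position error, meters/second (a patient spoof)

    def spoof_detected(spoof_rate, duration_s):
        """Flag the spoof if GPS ever disagrees with the inertial estimate
        by more than the drift the filter already expects at that time."""
        injected = 0.0
        t = 0.0
        while t < duration_s:
            t += CHECK_INTERVAL
            injected += spoof_rate * CHECK_INTERVAL
            if injected > INS_DRIFT_RATE * t:   # disagreement exceeds expected drift
                return True
        return False

    print(spoof_detected(SPOOF_FAST, 600))   # True: the error jumps past expected drift
    print(spoof_detected(SPOOF_SLOW, 600))   # False: the error always looks like drift,
                                             # yet after 10 minutes it adds up to ~180 m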


The article presents two sides. One advocates humans taking an active role in the loop, like confirming a drone's target classification and authorising it to act lethally. On the other side, the human's role is to shape the operating envelope ahead of time, setting parameters they think are appropriate for the ethics of the situation.

The examples of the systems seem completely out of whack though.

The active-participant advocates are clearly talking about things like air-to-ground hunter-killer missions on ambiguous targets. For example, a civilian vehicle converted to a technical vs. a plumber hauling some piping. A farmer with weed killer in a ditch by the road vs. an insurgent constructing an ambush. These are clearly beyond the realm of what the other side is talking about - easily identifying shit like Russian air force craft with no friendly transponders and armed active-radar payloads under their wings.

I don't understand the conversation that is trying to take place. Clearly the two cases are too different to even compare. Is the conversation supposed to be hypothetical enough that we can pretend we can act in the first case as we can in the second? Then the conversation seems pointless. Of course you can, if you accept the premise of the second in the first place.

As an aside, the section that goes like:

> exerting over the system when the system has been fine-tuned to make it easier for the human to do less, and for the operator to accept the machine’s judgment.

Is just incredibly bad faith nonsense. Or extremely ignorant. It's hard to tell which.


How would you classify a CROWS turret [1] or similar which could automatically fire based on image recognition?

I think that’s the kind of case where you run into these questions about a “human in the loop” — eg, do we need a human to confirm target identification or can the robot automatically engage anyone who points a “firearm” at a soldier, even if that occasionally shoots farmers holding tools oddly?

[1] https://en.wikipedia.org/wiki/CROWS


There is nothing on that page I could see in a brief scan that indicates this system ever acts on its own behalf. As far as I was aware, these are just to keep the gunner's head from poking out the top of the vehicle. It's operated by someone completely manually like 2 ft away.


Yes — I was asking a hypothetical about a technology that’s almost viable.

CROWS are remote turrets with cameras and automated aiming systems; all we’d need to do is add target recognition.
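
For the sake of the hypothetical, here's a minimal sketch of where that "human in the loop" would sit; the classes, thresholds, and confirmation callback are all invented for illustration and have nothing to do with any real CROWS interface.

    # Hypothetical engagement gate for a remote turret with added target recognition.
    # Nothing here reflects a real system; it only shows where the human sits.

    from dataclasses import dataclass

    @dataclass
    class Detection:
        label: str          # e.g. "rifle" vs. "farm_tool", as the classifier sees it
        confidence: float   # classifier score in [0, 1]

    PROPOSE_THRESHOLD = 0.95   # assumed confidence needed to even propose engagement

    def engage(detection: Detection, operator_confirms) -> bool:
        """Propose engagement only above a confidence threshold, and fire only
        if the human operator explicitly confirms the proposal."""
        if detection.label != "rifle" or detection.confidence < PROPOSE_THRESHOLD:
            return False                      # never even asks the operator
        return operator_confirms(detection)   # the human makes the final call

    # Fully autonomous variant: engage(d, lambda d: True) - the farmer holding a
    # tool oddly now depends entirely on the classifier's confidence score.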

But is that ethical?


You're asking me directly for the answer to all of these grand ethical questions, via the lens of a hypothetical technology you haven't defined the limits of.

I really don't know what you expect in ways of an answer.


It is both incredibly bad faith nonsense and extremely ignorant. The ethics debates around AI are filled with military-jargon-dropping morons. It is extremely hard to find anyone rational who isn't pushing some corporate wealth scheme or crank-conspiracy nonsense. I work in AI, and the non-technically-specific conversations, such as ethics, are extremely shallow.


The main problem with non-technical people talking about AI is that they mostly have no understanding of AI, but everyone still has a strong opinion (or thinks they have to have one).


Technical people, or tech-adjacent people at least, seem to be even worse. They seem to assume that since they know how computers work that AGI “obviously” works the same way and will obviously instantly self-reproduce and take over the world for… some reason.

Of course, they were equally confident logic and expert systems could build an AI and despite none of that working out at all, it doesn’t seem to have stopped them.


The keyword you're looking for is "instrumental convergence" (https://www.lesswrong.com/tag/instrumental-convergence). It's not an intuition or obviousness based argument; there's significant prior work, if someone calls it obvious and skips over the argument they're being lazy by not providing links but the links do exist.

The expert-system/GOFAI people are mostly not the same people.


The argument that an imaginary AGI is going to take over the world because of an imaginary principle is not actually very convincing though. It would be better if you said they'd do something humans already did; at least we know that'd be possible. Most of this "instrumental convergence" assumes that it has an infinite attention span, infinite energy, all its plans work properly… the only reason it all goes that well is you're imagining it did.

I think last week someone on twitter told me an AI could be bad because it'd "kill us all by releasing viruses into the atmosphere", but if you had a baby it could also grow up to do that, and (correctly) nobody worries about this. Not clear why these are supposed to be different.

More importantly, it mostly relies on thinking AGIs behave like computer programs in that 1. they don't get bored 2. they can be copied or copy themselves. Is that necessarily true? If someone copied you could you take over the world easier? Maybe you'd just fight over who gets the bank account.


When it comes to the impact and ethics of AI, I don't think technically informed people are much, if any, better than the non technically informed at understanding AI and forming judgments. They don't really understand fully how it works, either, and they most definitely are not social-systems aware enough to understand how it actually impacts and functions in use.


I mean yes, the nerd deep in his field is mainly focused on getting it to work, and ethics would only be a potential hurdle, so they are clearly biased and the decision should not be left to them.

But I am not sure it helps when people who do not have a clue at all talk about it. Which is sadly most people (computers are a black box to them).

This leads to situations where activists and politicians lobby for automatic AI image scanning to fight child porn, not understanding false positives and that this means lots of legitimate and very private pictures would be sent off for some human to check.
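
A rough back-of-the-envelope illustration of that false-positive problem; every rate below is an assumption chosen only to show the orders of magnitude, not a measurement of any real system:

    # Illustrative base-rate arithmetic; all figures are assumptions for the sketch.
    photos_per_day      = 1_000_000_000   # assumed daily volume on a large platform
    false_positive_rate = 0.001           # assumed 0.1% of innocent photos get flagged
    prevalence          = 0.000001        # assumed 1 in a million photos is actually abusive
    true_positive_rate  = 0.99            # assumed near-perfect detection of real cases

    innocent = photos_per_day * (1 - prevalence)
    abusive  = photos_per_day * prevalence

    false_flags = innocent * false_positive_rate   # private photos sent to a human reviewer
    true_flags  = abusive * true_positive_rate

    print(f"{false_flags:,.0f} innocent photos flagged per day")   # ~1,000,000
    print(f"{true_flags:,.0f} real cases flagged per day")         # ~990
    print(f"roughly {false_flags / true_flags:,.0f} false flags per real case")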


The main problem is that even in the field among experts, ethics has extremely low priority. It's shallow not because of laypersons, but because experts don't think ethics are worth thinking about.


Right. If you are concerned that the only writing on ethics of AI is by people who lack the technical understanding of the field to meaningfully comment, then it is incumbent on the people who are technically competent in the field to meaningfully engage in discussing ethics.

As it is I see AI researchers engaging in a curious doublethink: both assuming that someone else is thinking about the ethical questions, and at the same time whenever someone raises an ethical critique, dismissing it as technically unsophisticated.


Up until a few years ago AI systems were not sophisticated enough to have an impact. Papers only recently started having an ethics section, and even now it's mostly low effort: just write something and get it over with.


The biggest misconception is people believe AI "thinks", which would require AI possessing comprehension, which it does not. That is the big hurdle laypeople can't seem to grasp: without comprehension, AI is literally an idiot savant, it can answer but it cannot tell if the question is nonsense or valid. This lack of comprehension is exactly why a human is required in AI loops which impact lives.


I am hoping to address that, as best I can.

One must remember that ethics is philosophy that has only so much traction where it touches necessity and survival.


It's a form of marketing that appeals to people in tech.


>> exerting over the system when the system has been fine-tuned to make it easier for the human to do less, and for the operator to accept the machine’s judgment.

>Is just incredibly bad faith nonsense. Or extremely ignorant. It's hard to tell which.

When you talk to the military you have to remember that many levels of the military are regarded as drones that go through the motions, and that much of what goes on is assumed to be defined without critical thinking. When things deviate, they ask for directions from higher levels, where critical thinking occurs. This is different from the assumptions you typically make about end users, where you assume your system is being used by people who actively think, question, and have varied needs.

In this light that argument makes a bit more sense: these are just processes that you're refining to make more fluid - "the human is a necessary evil because we lack the technology to do this and need meatsuits to execute" sort of thought process. Point there and shoot here, etc. When you think of much military action in this context - that people follow orders, execute predefined processes, and don't think - this argument makes more sense: if I have biological robots, why not replace them with faster non-biological robots? I completely disagree with this ideology, but it's pretty deeply seated in the military from what I understand.

The issue is that people arguing for HITL (human-in-the-loop) don't want drones who don't think; they assume the humans are able to intervene and say "no" when something absurd, unethical, or ridiculous is thrown into the mix. They want them to consider, when someone says "fire a missile here," what the implications of that are, to some degree. So that if on their radar they see a playground full of children, they might reject the order or the improper information, give feedback, and even ask for further confirmation.

I find the majority of the arguments presented in this article pretty weak, but don't feel like dissecting them all. The first looks at life through a Tayloristic perspective of efficiency above all else, which I think is a fundamental problem humanity is facing everywhere, not just in the military: the questioning of the Taylorism and utility-theory perspectives that dominate the world today, and instead seeing a need to balance the needs of humanity and consider things humanistically in a world increasingly ruled by the former.


> When you talk to the military you have to remember that many levels of military are regarded as drones that go through motions and that much of what goes on is assumed to be defined without critical thinking. When things deviate, they ask for directions from higher levels where critical thinking occurs.

This is the exact opposite to modern Western military doctrine. All levels are expected to exercise critical thinking when facing questions rife with nuance and subtlety. These range from how to employ complex weapons systems in tactical environments to abstract legal questions such as "Is this order lawful?".

At all levels, from officer to NCO to enlisted, troops are expected to employ a high level of flexibility and creativity in carrying out their orders, to be able to step up to cover the responsibilities of higher ranks if necessary, and to hold a high level of situational awareness.

Modern professional soldiers are not drones.

Frankly, I'd go so far as to say that the ignorance displayed in your comment here should all but disqualify you from holding an opinion on military matters.


In the past few days HN linked something about changes in military organisation from the era when fixed formation set pieces were usual, and it emphasises that today we don't in fact really do what you've described. The soldiers aren't "drones" in this sense, they're responsible for some pretty sophisticated decision making which requires personal discretion.

A British Army soldier, literate but probably with nothing beyond a high school education, is expected to learn the tactics needed to achieve strategic goals without babysitting. "Take that building" is a goal; working as a group, advancing under cover, and flanking the enemy is tactics. Modern infantry work mostly in small units, a handful of people with a shared purpose; they have an NCO leader to make decisions, but they are individually expected to understand the tactics. The officer commanding dozens or hundreds of troops remotely can't babysit them all, even if they both wanted to and had the information to attempt it, which they don't.

The preference not to use humans is sentiment. Drones, robot guns, and so on don't leave grieving friends and relatives; they're just machines, and will be replaced. Sentiment is a massive political problem: the American people will pay for a $100Bn/year war without flinching - hey, it's full employment at the weapons factory - but when the planes full of coffins come home, that's a voter's kid in each box.


Keeping a human in the loop is more about being willing to trust the tech than about safety.

Imagine launching your autonomous weapons, having them scan the area for enemy targets, see only you, conclude that you would not have launched if there were no enemies - so obviously the one and only target they see must be an enemy - and turn around and kill you.

After that happens just a few times, the other soldiers would revolt and refuse to use the weapon system anymore. That is the sort of thing that motivates having humans in the loop, even if on average there would be greater safety without humans in the loop.


"This side towards enemy'.

Doesn't have to be "safe" / "not dangerous" (a lot of weapons systems are very dangerous in different ways) but it should be predictable, using some reasonably simple mental model.

"Refusing to deploy" might not be a sensible option if your enemy is willing to deploy.

It seems safe to predict that human in the loop systems will be quickly overwhelmed by fully autonomous systems. The tradeoff may end up being having to balance the risk of getting killed by your own systems versus being killed by your enemy's systems, because yours were too "tame".


>"Refusing to deploy" might not be a sensible option if your enemy is willing to deploy.

In the AI ethics arena, this is really one of the best arguments I've heard for the military to pursue and field automated systems. US leadership may agree and see all the issues with using killer robots, but other countries are developing the same technology and may have fewer ethical concerns and considerations than you do. They may be willing to let these systems fail and kill innocent bystanders or even their own soldiers. They may even consider that a necessary loss to move forward and succeed in their goals. As such, unless your approach ultimately achieves the goal anyway, or you have other techniques to mitigate these corner-cutting strategies, you too must cave and make such compromises.

If China decides to engage in conflict and releases killer robots and we don't have killer robots, can we contend? If we can't, we might have to engage in the race to the bottom that is competition unless we have an alternative mitigation strategy that doesn't require us to compromise our ethics.

I imagine this is the dilemma scientists faced during WWII when developing nuclear weapons. Part of the argument that led to nuclear weapons was that Germany was already working on the process. This resulted in other countries engaging and ultimately building successful weapons where Germany failed. Ultimately, we now live in a world where lots of people have nuclear weapons... and overall it's been the proverbial Mexican standoff, with nuclear deterrence strategies.

Is there an AI deterrence strategy? Nuclear deterrence worked because it led to the prospect of global collapse - what nuclear war would bring. Killer robots that aren't sentient don't seem to pose quite the same threat, and it seems like militaries could utilize them, as they fall into an ethical grey area globally and carry far less risk of ending humanity on a global scale - unless we start talking about a sentient, Terminator-level killer-robot uprising, which is not even remotely where AI is currently or in the foreseeable future.


It’s pretty easy to remove the human in the loop (just say “yes” to everything it would ask the human). In contrast, it can be incredibly difficult to add a human to a system that was designed to be fully autonomous.
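
That asymmetry is easy to sketch; the functions below are hypothetical, just to show that stripping the human out is a one-line change while bolting a human onto a pipeline never designed to pause for one is a redesign:

    # Hypothetical confirmation hook in a system designed with a human in the loop.
    def request_human_approval(action: str) -> bool:
        answer = input(f"Operator, approve '{action}'? [y/N] ")
        return answer.strip().lower() == "y"

    # Removing the human: swap in an auto-approver. One line.
    def auto_approve(action: str) -> bool:
        return True

    # Going the other way - retrofitting a meaningful pause, an explanation of the
    # proposed action, and a timeout into a pipeline built to run end-to-end with
    # no interruption - is an architectural change, not a patch.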


I think currently no other country is even close to the level of autonomous weapon systems the US is developing. An arms race where you are miles ahead cannot be used as a reason why you have to run faster.


China is likely not far off - they’re pretty far ahead of the curve compared to the US military too.

All of these systems are going to be as classified as possible for any country developing them, so we'd only find out once one or more of the following become true:

- it’s several decades after they’re out of date

- a politician needs a major scary saber to rattle for some reason

- they get used in a shooting war where the public can see them.

None of which favors us knowing the current capabilities of any of the major players right now, except MAYBE Russia.


If you look at the promo videos from Boston Dynamics, you can gauge the progress, which is not classified.


Also unlikely to be cutting edge in the murder drone department, IMO.


I think you've hit on the concept of when humans in the loop are needed.

For many black-box systems, the average performance may be comparable to or better than a human's, but when errors occur, they should fail predictably.

If systems fail unpredictably (i.e., unbounded in the types of errors they make), then they can never be deployed in the field without humans in the loop.

This is why self driving cars can, at the same time, be on average safer but still not be accepted by public.
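
One common way to make "fail predictably" concrete is to bound the region where the system may act alone and defer everything else to a human; a minimal sketch, with thresholds invented for illustration:

    # Sketch of bounded failure: the model may still be wrong, but it is only allowed
    # to act on its own inside a validated envelope, so the worst case is constrained.
    # The thresholds are assumptions for illustration, not values from any real system.

    def decide(confidence: float, inside_validated_envelope: bool) -> str:
        if not inside_validated_envelope:
            return "defer_to_human"          # unfamiliar conditions: never act alone
        if confidence >= 0.99:
            return "act_autonomously"        # well inside tested behaviour
        if confidence >= 0.80:
            return "act_with_human_confirm"  # plausible, but worth a check
        return "defer_to_human"              # low confidence: predictable fallback

    print(decide(0.999, True))    # act_autonomously
    print(decide(0.85, True))     # act_with_human_confirm
    print(decide(0.999, False))   # defer_to_human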


> This is why self driving cars can, at the same time, be on average safer but still not be accepted by public.

They're not safer, though.


> Imagine launching your autonomous weapons, having them scan the area for enemy targets, only see you, conclude that you would not have launched

or more realistically, your autonomous version of an AeroVironment Switchblade launches, has image recognition / an onboard trained neural network that can identify a Russian-spec T-72 tank or BMP or similar, doesn't see any, loiters for a while and then self-destructs.


Being an expert boss in AI and being an expert controller in the loop, in situ, are two different perspectives. AI experts will be useless in situ.


My father had a humans-in-the-loop story about early ICBM warning systems. Turns out the moon doesn’t have an IFF transponder, so when it came over the horizon…


I think we all probably get the gist, but would love to hear more details.


I'll continue:

The moon came over the horizon... then we BLEW IT UP, which is why we only have 1 moon today


It's a warning system so it probably sounded a warning.


Moral: fly your ICBM on a moon-eclipsing trajectory.


Nothing happened because the radar didn't have enough range to reach the freaking moon?


This article is thoughtful and naive at the same time. Deploying an autonomous weapon is just like deploying any other weapon. Once a gun is fired, the human is out of the loop. Autonomous weapons will end up the same way. People will get to exercise judgement early and late, but AI will make decisions during execution, just like a bullet follows its flight path.

The reason we need humans is for accountability. You can't court-martial a machine. Firing the operator doesn't fix anything or prevent it from happening again (the next operator will be just as fallible); it just satisfies some primal part of our brains that needs to blame someone when something bad happens.


The difference lies in the predictability, I would say. A bullet flies according to well-understood equations of motion, whereas with an AI, no one really knows how it might be fooled or what mistakes it could make.


Cutting the wires and letting a torpedo run in active homing mode and find and select its own target has been part of submarine warfare practice for decades.

We tolerate it because there aren’t generally a lot of civilians or even friendly fire risks in submarine engagement environments.


I guess the better analogue is directing troops.


That's one way to think about it. However, don't we usually just outright ban weapons with longer-term lethality? A bullet is lethal for seconds while mines are lethal for years.

If autonomous drones are lethal for days, weeks or even months, should we consider them a war crime?


Mines are still deployed all the time. Check out YouTube and you'll see plenty of videos of Russian tanks getting blown up by Ukrainian mines, and of civilians navigating around Russian mines laid across highways there.

Plenty of people die decades later from chemical and physical effects of various weapons deployed all the time (not even counting the depleted-uranium issues in Iraq). White phosphorus is just one such example.


How is white phosphorus an example? Are you thinking of someone getting liver or kidney damage by absorbing white phosphorus, barely surviving, and lingering for a decade or two in delicate health before succumbing to high blood pressure? That sounds like the same kind of thing as someone getting injured by a bullet, which is a lot more common than WP.

Any white phosphorus left in the environment is going to oxidize within a few days into phosphoric acid (the tart flavor of Coca-Cola), and that will get neutralized into phosphate fertilizer pretty quickly, unless it's in some kind of deadly low-pH environment like acid mine drainage.


Good point on WP - I was mixing it in with agent orange, DU projectiles, UXO decades after the fact, etc. but that isn’t really fair.

Terrible in the moment, but except for the scars, not likely a problem once someone heals up.


Yeah, unless your kidneys or liver are damaged permanently. Less of a long-term problem than unexploded ordnance or sulfur mustard, for sure.


Naval mines are certainly not banned. The US military is back to investing heavily in mine warfare, and plans to use air-delivered mines in any future conflict with China.


I think it's potentially different in that they can be turned off at any time.


Can they?

A loitering drone that can be switched off at any time by its operator is a loitering drone that contains a vulnerability that enables it to be turned off at any time by an enemy.

There is some doomsday logic in deployment of ‘fire and forget’ weapons systems. ‘Once this thing is turned on even I can’t stop it’.


PSA: Stop Killer Robots

https://www.stopkillerrobots.org/


There was an incident once where a primitive ai system misidentified some birds as a nuclear launch. A submarine captain was supposed to launch missiles vs the United States, which would have ended the entire world. He decided not to. New AI is a lot better than our old AI, but we're talking about the fate of human existence, and new AI is still prone to catastrophic errors on the few occasions it does fail - e.g., a Tesla's autopilot mistaking the side of a truck for the sky.

What if the training set just didn’t include certain cases?


>There was an incident once where a primitive ai system misidentified some birds as a nuclear launch. A submarine captain was supposed to launch missiles vs the United States, which would have ended the entire world. He decided not to.

I think you mixed up a couple of stories.

There was an incident in 1983 where the Soviet early-warning satellites picked up sunlight reflecting off high-altitude clouds and interpreted it as a launch of 5 US ICBMs. The "proper" reaction to detecting a US launch would be to counterstrike immediately, but the operator, Stanislav Petrov, recognized it as an error - he correctly guessed that if push came to shove, the Americans would launch a couple more missiles than just 5.

The submarine incident was in 1962, during the Cuban missile crisis. As it was ending, a US destroyer dropped a couple of small signalling depth charges above a Soviet submarine. The submarine did not have ballistic missiles, but it had torpedoes with nuclear warheads. Those weapons usually required permission from both the captain and the political officer - and both agreed that World War 3 had just started and they should engage. Thankfully, the commander of the submarine flotilla, Vasily Arkhipov, was aboard, and in this case his permission was also needed. He instead opted to surface the ship and contact Moscow for orders.

There was no AI present during any of this.

I don't believe birds possess the required characteristics to be identified as an ICBM launch.


The natural course of war tends toward a future where polities own, but do not operate, fully autonomous weapons. There will come a day when even the targets, be they military or civilian, are selected and attacked solely by machines. The human element is more creative, though bound by a sense of morality. Whereas the machine is faster, bound only by the thinnest of rule sets. If we're lucky, most of the combat occurs virtually rather than mechanically.


>When that happens, we identify why the interaction is problematic, why the human is not trusting the machine, or why the way that the data is presented is being misunderstood — and then we change those things. Such changes should make us question to what extent human oversight of algorithms is truly meaningful.

Nonsense; this is like saying "we changed the wheel on the bike so that the rider didn't fall off it so often which makes us question if they can ride the bike". If the interface is being changed to better deceive a user then that of course is problematic, but if it is being changed to allow the user to make accurate decisions then that is the function of the system.

>But now imagine a system similar to HARPY, a “fire-and-forget” system that targets enemy combatants in an area or enemy ships (to use an example that minimizes the collateral risk). Proponents of meaningful human control do not believe it should ever be left up to a machine to target humans.

Strawman - that is not what "Proponents of meaningful human control" believe, I am sure you can find some people who say that, but then they will also say that they don't believe that a bullet should be fired or a bomb dropped. Machines target humans all the time in the sense that unintended collateral damage occurs whenever bombs are dropped. The humans doing it must accept responsibility for this; that is the same for more sophisticated processes - the humans remain on the hook for unleashing the device. If the device decides to slaughter a primary school class that's on you in the same way as it is if you target a school yourself.

>I think there are good reasons to question this. Specifically, I am not persuaded that there is a significant moral difference between an operator identifying a single human target based on some data and a human operator drawing a box and defining targets within that box based on equally good data.

Which shows that you, dear writer, should never be allowed to decide a god damn thing. There is a chasm of uncertainty between the two scenarios, and a chasm of indirection.

>whether in cases when the system works better or as well without a human operator, we have moral reasons to place a human in the loop.

Because we live in an open world and the judgement of performance is post hoc, you can't know that it will work until it's done its thing. The judgement has to be when the act is commissioned because the consequences of the act cannot be reversed.


There's nothing inherently wrong with human-in-the-loop. In many, many, many cases, it is the lack of it that causes death, destruction, and grievous mistakes, while giving humans a nominal "plausible deniability" of making any mistake at all: "it's the software that decides; I can't change that."

The biggest reason why human-in-the-loop is necessary is that humans scale better in uniquely complex situations than fixed-design machines ever can.

Often you actually want to slow down a system specifically to ensure a human can judge. Fully automating ensures that small or large errors will grow far too fast - we see this with social media; that's literally the problem with Twitter et al. - they operate too fast to be safe or effective.

It's better to slow things down if it results in correct, moral or fair operation. There is nothing magical or essential or better about maximum speed blindly.

The additional big reason is that machines are incapable of morality and of facing punishment - you need a human to "have skin in the game" to create any force that assures correct behavior or operation. Being afraid of jail or financial ruin is essential to this. A machine or corporation (NOT a person) never can be and never will.

One data point is drones: currently, like most precision weapons, the precision is nearly 100% - that is, if you point the weapon at a target, the target will be destroyed nearly 100% of the time. The problem then is how you target the weapon - that's where all the "errors" are incurred. This is how Afghan wedding parties get killed. It's the intelligence errors and rules of engagement that actually do the killing.

Another data point that should be horrifying: the US military is currently working on neuromorphic AI chips. The specific intention is to make early warning satellites capable of identifying targets and triggering a nuclear "response/attack" without a human in the loop.

Literal Sky Net. This is intelligence and targeting being automated and humans being pulled out of the loop.


You say, "There's nothing inherently wrong with human-in-the-loop." But humans have reaction times around 500 milliseconds; in the best cases, around 200 milliseconds. I routinely write software with worst-case reaction times around 0.001 milliseconds, or 1 millisecond for more complex stimuli. If a force of drones with humans in the loop fights a force of drones without humans in the loop, absent colossal blunders, the latter will win, and it won't even be close.

Imagine you get into a fistfight with a guy. He's bigger than you, he's got a loaded 9-millimeter pistol, and he's a good marksman. But his reaction time is 200 times yours --- say, about 50 seconds, if you're a regular human. That is, all his reactions are reacting to what you did 50 seconds ago, at best. Who do you think is going to win? Unless he's inside a tank and you're outside, you will. Even in that case, I think you've got a fighting chance.
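
To put rough numbers on that gap, using the figures quoted above:

    # Rough decision-cycle arithmetic using the reaction times quoted above.
    human_best_s    = 0.200   # ~200 ms, a good human reaction time
    human_typical_s = 0.500   # ~500 ms
    software_s      = 0.001   # ~1 ms, the slower "complex stimuli" case

    print(human_best_s / software_s)      # 200.0  -> software reacts ~200x faster
    print(human_typical_s / software_s)   # 500.0  -> ~500x faster

    # Scaled to the fistfight analogy: an opponent 200x slower than a 200 ms human
    # is effectively reacting to what you did 0.200 * 200 = 40 seconds ago.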

You argue that it's good for armies to "have skin in the game" in the sense of being vulnerable to jail or financial ruin. And, from a moral point of view, it probably is good: armies that are exposed to retaliation will generally be less willing to do things that provoke retaliation, many of which are immoral. However, the things that provoke retaliation are also the things that win wars. Making your army more vulnerable may reduce war crimes, but it is not a recipe for victory.

The logic of warfare is inimical to our survival as a species, which is why our best and brightest have been calling for a pacifist world order since even before World War II. Nuclear weapons were a nice early warning, but they're children's toys compared to precision-guided munitions backed up by pervasive surveillance.


Correction: 200 times your reaction time would be about 40 seconds if your reaction time is 200 ms.


I once read a quote from a general or something, making the point that the decisions made by these systems, even when a human is in the loop, are very much algorithmic or formulaic, and therefore taking the human out of the loop, making the system entirely formulaic, should be fine.

Obviously that can't apply to all situations in the same way, but I thought it was an interesting viewpoint at least. I can't remember the article or the speaker though.


OTOH - given all the sorts of complete-facepalm mistakes that image recognition systems have been seen to make, and the incentives for an actual wartime enemy to "encourage" your "AI"-run weapon systems to screw up, and the consequences if they are successful...


Yeah, absolutely. The quote may have been from before the modern era of DNN-based image recognition, when I suppose that part of the system was always assumed to be a human's responsibility.


Mitigate kill-chain disruption in the age of electronic warfare.


You need a human in the loop, otherwise there would be no legal recourse on war crimes.


My reaction: it should carry the same consequences for war crimes as if you'd fired a barrage of WWII-era VT-fused artillery shells at a refugee camp, or laid an old-fashioned minefield in the playground of an orphanage. Some "smart" gadget you built and deployed did something horrible. Your ass.

Or, if you protest that "the AI did it!" - then you have to march across a field guarded by a bunch of "smart" weapon systems. The justice system is not responsible if any of those AI's malfunction and kill you.


Real life WOPR incoming.

Shall we play a game?


Years ago, a late friend and I had a little ritual. One of us would be hunched over a terminal in the operations booth of our workplace's datacenter, trying to figure out why $critical_service was not coming back up after patching. The other would see the alert in Intermapper, come downstairs, and amongst the alarms and ringing phones, solemnly intone:

"Mr. McKittrick, after very careful consideration, sir, I have come to the conclusion that your new defense system...sucks."

This always managed to lighten the mood.


I can't get past the first paragraph!

> At the recent conference on the ethics of AI-enabled weapons systems at the U.S. Naval Academy, well over half the talks discussed meaningful human control of AI to some extent.

But, but AI is sentient right?! What business is it of ours to try to exercise control over AI, with its many and various superior attributes? lol.

> If you work among the AI ethics community, and especially among those working on AI ethics and governance for the military, you are hard-pressed to find an article or enter a room without stumbling on someone literally or metaphorically slamming their fist on the table while exalting the importance of human control over AI and especially AI-enabled weapons.

AI ethics is only there to justify the unjustifiable. They do not have my ethics, that's for sure, nor any normal person's. It's a bad joke that allows a corporation to say 'but it has been subject to an ethical review!' - and the political class can allow themselves to be bamboozled and provide their rubber stamp.

> the ethics of AI-enabled weapons

... the ethical implications fall on those coding the dystopia.

War is immoral, pretending that wars are real is immoral, handing over decision of life and death to a machine is immoral - regardless of how efficient the machine may be. So, any of those people involved in that circle are immoral. Soldiers may think they are working to defend their 'country' - but 'country' is a fiction and 'defence' is an inversion. People act this way for money.. at least that reason makes sense - but there is nothing moral about it. There are no ethics to be found there.


> Soldiers may think they are working to defend their 'country' - but 'country' is a fiction and 'defence' is an inversion.

Meaning is collaborative. It doesn’t matter if you’re a nihilist and want to declare your country doesn’t exist; the guy invading you makes it real.

You can be a quisling, but defecting to the invader seems to admit they exist, at least.


Who's invading who now? What are the soldiers "defending"?

Do you think that 'Department of Defense' is a misnomer? How about 'military aid'?

Are these the sorts of linguistic meanings we are meant to collaborate on? I'm no collaborator, you know..


Didn't follow your argument, but if AI: 1) gets advanced enough; and 2) goes out of control; won't it 3) take out the human at the off switch first?


I'm pointing out that the messages on AI and 'ethics' are mixed and confusing, and that it is impossible to believe what is being stated.

On the out-of-control thing: for me, AI is just a machine (a special machine, but not sentient as we are sometimes told) and will do whatever is most successful per the criteria it is given. If it is capable and can kill people, and that is stated as its goal, I'm sure it would do so. It's just a machine; it has no moral compass.

I don't buy the hype that it would be able to choose its own goals - it can never be conscious, even if it can do a passable impression.

What does happen is that the (moral) responsibility for ai deaths is shifted onto those who train and code it. Developers might not think of themselves as killers (like soldiers do), but in working on these projects that is what they become.

When I point out the moral repercussions of coding, e.g. here and in many other examples, e.g. privacy erosion, gamification, people don't like it. They want to take the money but not the responsibility or culpability for their actions. They will pretend that they can defer their morality to an 'ethics committee'.

So these sorts of articles are just a form of distracting entertainment, that do nothing, fail to talk about the important parts of life, and ultimately facilitate the creation of an even greater dystopia for ourselves and future generations.


I've wondered if things might be compartmentalized enough in certain BigCorps that developers don't realize they are working on classified (or funded as dual-use) projects.

For AI-in-the-loop, similarly for a long time I have framed that as an extension of heuristic-based-control-system development. I suppose implicitly that puts some or all of the onus on the developers.


Things are absolutely compartmentalised. If devs genuinely believe that they are working for the greater good, what's to answer?

But pretending that you don't know what you do know, that you don't realise you are doing something that will negatively impact others, especially when it is easy to find these things out, won't wash. Remaining ignorant is a choice, after all.

I suspect that we will judge ourselves. If you find yourself wanting, there's time to change.

> For AI-in-the-loop, similarly for a long time I have framed that as an extension of heuristic-based-control-system development. I suppose implicitly that puts some or all of the onus on the developers.

Well, yes - AI doesn't come out of nowhere, it didn't invent and code itself. At least, I don't think that's how it occurred :)

People have choices about what they do. They can choose to believe they live in a materialistic, amoral reality - as they are taught at school - and then pretend that they are 'good' with no culpability because they are doing what they are told (coding dystopia). Or they can try to become individuals and try to uncover the nature of reality for themselves and according to their principles and truth.



