Robotic Warfare Has Arrived – 30% of US Military Aircraft are Drones (singularityhub.com)
33 points by masterfanman on Feb 10, 2012 | 35 comments



Typical singularityhub blogspam: this is a rewrite of http://www.wired.com/dangerroom/2012/01/drone-report/

I don't know why singularityhub links haven't been banned from HN yet.



When I started reading the article I thought I'd post here something about how it might seem effective and useful to Americans until other countries start using them against us.

Then I saw our government is already using them against us. This is sad.

Renoir's 1937 film La Grande Illusion beautifully illustrates how those on the other side of battle lines often have more in common with you than you think, and how those on the same side may not share your interests.

I have no problem with the average Joe on the street in the Middle East, yet he is being harmed with my tax money. I have a big problem with someone spying on a peaceful protest near my home.

EDIT: resolved an ambiguity someone responded to. I have no problem with the average Joe. I do have a problem with him being harmed with my tax money. Sorry for any confusion.


> I have no problem with the average Joe on the street in the Middle East being harmed with my tax money

Really? It's quite amazing to be able to say this after having said that "those on the other side of battle lines often have more in common with you than you think".

How do you think "the average Joe on the street in the Middle East" feels about having an unmanned plane over his head? How do you think his children feel when they see him being killed by such a machine? They swear they will seek vengeance.

You will have only yourself to blame for this.


Most of the comments are somewhat negative. People should look at this as if we're at the beginning of WWI and planes are just starting to be used for warfare. Things will evolve rapidly because the DoD has a lot of money. Within a decade the drone fleet will evolve a couple more generations.

If we can find some consumer uses for robotics, the tech will evolve even faster.

At the end of the day, it's about finding the money to pay for the advances.


AI research has been going on now for 60 years and we have pitifully little (apart from a few expert systems) to show for it. It is ongoing and billions are still being spent on it, but it's a barren, dry field. The most important thing it has given us is the realisation that intelligent behaviour and sense of self are remarkably complex and digital machines can't replicate them. They can mimic, to a limited extent, but not actually show aware, self-seeking behaviour.

Another sixty years won't change that.

That's why all the smart money is in biotech. A computer with a synthetic mind made out of living neurons might show more promise.


Wait just a minute! Just because we've yet to build a machine as intelligent as a human being doesn't mean we haven't made remarkable progress, particularly in the last decade.

And where do you get the notion that digital machines can't be self-aware? What, exactly, are you basing this theory on? Are you claiming that some form of quantum state is required, or are you saying that intelligence needs to be analog?


I can see you don't have any background in this, otherwise you would already know the answers. This has been well researched by biologists, linguists and cyberneticians for well over half a century, at both the academic and applied level.

"Intelligence" is neither digital nor analogue (or maybe it's a quantum artifact) -- we don't have a clue what it is. That's the point. It appears to be an emergent property of organic systems that must evolve over the development of that system, but we still can't prove that we are intelligent or self-aware. But we can be certain it doesn't appear in purely deterministic machines whose every parameter can be assessed at the fine-grained level.


> But we can be certain it doesn't appear in purely deterministic machines whose every parameter can be assessed at the fine-grained level.

I don't know why you think this statement doesn't apply to humans. We're deterministic machines as well. More complicated, perhaps, and less is known about the "fine-grained level", but there is no evidence that we are anything more than complicated meat machines.


It does indeed apply to humans. If you could be bothered to read the rest of this thread you'd see that this is precisely the point I am trying to make.

Humans are machines. But we (and other living things) appear to exhibit defined properties that are not present in complex digital machines. It also seems that what we refer to as self-awareness is an emergent property that living things have, although, in theory, it is not specifically limited to living things.

That set of properties is what is formally defined as having a mind. Whatever that means -- and I agree it's far from clear what that is.

But -- and it's a big but -- digital processes don't seem to be able to mimic or model it (that could be our shortcoming -- the models may not be any good), and it isn't just a matter of more memory/processing power/a big enough look-up table. These are not the problem nor the answer. It isn't just a CompSci issue. If it were, we would have it licked by now and we would all have interchangeable minds that we could simply reprogram and upload with new sets of skills and belief systems (yes, robots would definitely need them too, by the way).

We have a long way to go and for a while now we've been heading down the wrong road. But don't take my word for it.


I agree we have a long way to go, and I don't know enough about the state of the art to say whether we're heading down the wrong road or not. But I'd like to point out something that made an impact on me: Thomas Nagel's essay "What is it like to be a bat?":

http://instruct.westvalley.edu/lafave/nagel_nice.html

He makes a distinction between the subjective understanding of consciousness ("What is it like to be X?") and the objective understanding (What are the atoms and neurons doing? What is the structure of the mind?). I think this distinction is a big part of why you, and society as a whole, dismiss AI.

We have a much deeper objective understanding of computers than we do of our own brains. But next to our subjective understanding of "what it is like" to be a computer, bats are as familiar as our own siblings. We simply have no basis for comparison. We can't put ourselves in a computer's shoes. And without this subjective, gut-feeling comparison, people in general find it difficult or impossible to assign the word "intelligence" to any non-humanoid entity, be it whales, robots, or computers.


No, whales are intelligent alright. There's no doubt about that. Bats almost certainly are as well. At least I believe they are self-aware and have an inner life as an individual as well as being part of a group.

I also am a great believer in things being more than just the sum of their parts -- or at least to have that potential. I just don't believe there are any sentient machines. Yet. And I'm not going to waste my time anthropomorphising them.

Because that's not going to make them happen any sooner.


If we have no idea what intelligence is, how can you make the claim that it cannot appear in a purely deterministic machine?

Clearly there's no theoretical difficulty; with a large enough lookup table you could simulate every action an individual could take. Whether this lookup table can be considered conscious is academic if we have no way of defining consciousness.
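To make the thought experiment concrete, here is a deliberately silly sketch of a "lookup-table agent" -- the states, actions and table are all invented for illustration:

    # Toy "lookup-table agent": its entire behaviour is a finite,
    # precomputed mapping from observation to action (invented values).
    behaviour_table = {
        "sees food":     "approach",
        "sees predator": "flee",
        "sees nothing":  "wander",
    }

    def act(observation):
        # Nothing is decided at runtime; every response already exists.
        return behaviour_table.get(observation, "wander")

    for obs in ("sees predator", "sees food", "sees nothing"):
        print(obs, "->", act(obs))

Whether a vastly larger table of this kind could be called conscious is precisely the academic question.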

So the question becomes: is there a more efficient algorithm that can be executed by a machine that it is feasible to build?

Because you're apparently certain that there isn't, you must have a mathematical proof that this algorithm cannot exist, right?

And even if your "certainty" was merely hyperbole, perhaps in lieu of proof you could provide some compelling evidence of your case?


"with a large enough lookup table you could simulate every action an individual could take"

Oh dear. That's Artificial Intelligence Day 1. It also doesn't work. That isn't an AI. There's also the slight dual problem of the Nyquist and Shannon limits, which dictate why this can't physically work in a digital system. But never mind. You seem to know that it is possible and that that's how to do it. No amount of prior research from 1948 onwards will be of any use to you.

You already have the answer. It's all just a matter of brute-force computing power and nothing else. It's that unsubtle, is it? Gee -- what a waste of time! All these years and it was just a matter of a big enough look-up table. What a fool everyone has been not to realise that!

So c'mon then, where is your fully functional robotic AI? Keeping it a secret isn't fair to the rest of us.

Did you consider any of these:

http://scholar.google.co.uk/scholar?q=artificial+intelligenc...

And this one's a doozy: http://works.bepress.com/cgi/viewcontent.cgi?filename=0&...

And they're just scratching the surface of a very deep field of research.


The weak AI that did the logistics planning for GW1 more than paid for all the money the DoD had spent on AI research.


Did it? That's what they would like to believe.

Anyhow, that's an expert system, not really an AI.


Your dividing line is that anything we accomplish short of full-blown human intelligence is an expert system, and everything else is AI. It's an irrelevant distinction anyway, since AI research has directly led to the tens of thousands of expert systems that are in use every day.


No, I'd settle for an AI that exhibited even the self-determination and learning ability of a common ant or honey bee. Especially if all you are going to do with it is stick it in a missile and blow people up. We have nothing like this, nothing.

I know because I associate with cyberneticists, software engineers and biologists who have made this their life's work and who are engaged on this day in, day out.

Truly independent strong AIs are a pipe dream, at least for now, and merely throwing money at the problem won't by itself solve that. Digital computers as we know them, by themselves, will not give us what we want. And that is why it is IMPERATIVE that AIs be Asimov machines, or at least be able to recognise friend from foe, or else we are all in big trouble. The fact is we have nothing like this, nor does anything seem too promising in that direction. Even humans aren't all that good at it, if the stories of "friendly fire" are anything to go by.

But I suppose you are saying "collateral damage" is a worthwhile price to pay for trying to develop AIs? Try explaining that on the six-o'clock news when a "safe" prototype AI drone malfunctions in a city full of your own people. Or even in a battlefield scenario.

I suggest you read the work of people like Blay Whitby and Kyran Dale (University of Sussex School of Cognitive and Computing Sciences) for some practical background.


One wonders when we reach the logical conclusion of pure drone warfare... http://en.wikipedia.org/wiki/A_Taste_of_Armageddon


I don't know what they classify as a drone, but if it includes small aircraft several feet long, then I could imagine they have hundreds (?) of them for the same price as a fighter jet.


Power used to be a function of the number of soldiers at your disposal. It will soon be a function of the quality of your roboticists.


Now that it's legal to fly them domestically, your police force is going to have so much fun with all that "anti-terrorism" money. Be sure to write Congress and the president a "thank you" letter for signing that into law.

http://www.washingtontimes.com/news/2012/feb/7/coming-to-a-s...

> the FAA has issued hundreds of certificates to police and other government agencies [...] to allow them to fly drones over the United States


The reason is that every single flight has to have its own certificate. It could be a few drones on dozens of flights, or many drones on a few flights.

Cleaning this up is part of the FAA Reauthorization bill that Obama is signing. The FAA is being tasked to integrate unmanned flight into the national airspace system.

At the moment, each flight must have either a ground spotter or a chase plane in order to provide the see-and-avoid capability that is a requirement for all flights in visual conditions.

One reason I am not as pessimistic as some others is that the private pilot brigade, through AOPA and others, is extremely political and vocal. To generalise, they epitomise the self-reliant, freedom-loving character often associated with America.

There is no way they are going to accept that their freedom to fly wherever they want is being restricted just to accommodate drones. Expect an increasingly vocal campaign over the next several years.


It isn't robotic warfare unless they are totally autonomous and goal-seeking. These are not AIs -- strong or otherwise.


They are fairly autonomous and goal-seeking; the military does not use one pilot per drone. Details rapidly become classified after this point. But the DoD does a lot of AI research and puts this stuff into practice -- see cruise missiles for some ancient, but still powerful, tech.


Yes, indeed, cruise missiles are robots (they have sophisticated automated guidance and an autopilot, for a start).

My understanding is that most drones are remotely controlled by human pilots/personnel. I saw a documentary about this and they seemed pretty dumb compared with military tech from the 80s such as cruise missiles.

I'm sure they are working on AI-controlled drones, but I'm pretty sure none of them are autonomous yet. They would have to be Asimov machines to some extent to prevent them from doing bad stuff to allies and friends, or even their owners.

No robot/AI combo, to my knowledge, is yet safe or reliable enough to be deployed on its own, unsupervised. If they think that they are, then, boy, are they going to get their asses kicked sooner or later when they inevitably malfunction.


Many newer UAVs are fully autonomous in the flight operations sense, as they are able to take off, fly a pre-defined track through a set of waypoints and return to land all without direct human intervention.
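A minimal sketch of what such a waypoint-following loop amounts to, assuming a toy flat 2-D world and a vehicle that can simply command its own heading (real guidance and navigation are far more involved, and all the numbers here are invented):

    import math

    # Toy waypoint follower: fly a pre-defined track, then stop.
    waypoints = [(0, 1000), (2000, 1500), (4000, 0)]  # metres (invented)
    x, y = 0.0, 0.0
    speed, dt = 50.0, 1.0                             # 50 m/s, 1 s steps

    for wx, wy in waypoints:
        while math.hypot(wx - x, wy - y) > 25:        # 25 m capture radius
            heading = math.atan2(wy - y, wx - x)      # aim straight at waypoint
            x += speed * math.cos(heading) * dt
            y += speed * math.sin(heading) * dt
        print("captured waypoint", (wx, wy))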

As an example, the US Navy's X-47B UCAS-D demonstrator has already demonstrated autonomous flight, and the USN plans to test autonomous carrier landings at sea sometime next year, with fully-autonomous aerial refuelling the year after that.

The thing is, autonomous flight isn't that difficult and the technology for it has been in place for some time. Where complications arise is with bad weather, which can confuse an aircraft's sensors; situations where precision instrument approaches aren't available; and, most importantly, other aircraft. There is still no complete certainty that it's safe to fly an autonomous UAV in congested airspace where other pilots often do unexpected things.

Many of these issues can be solved by technical means alone, including the ability to monitor, anticipate and avoid other aircraft. But we're still a very long way from solving that fuzzy boundary when things go wrong and only human judgement can prevent disaster.

I also do not believe that aircraft like the X-47B will be given autonomous freedom to select their own targets when they are deployed in about a decade. Instead, while they'll fly autonomously, their targets will be selected by human operators who'll also authorise the release of weapons.

True autonomy is going to require the answering of plenty of technical and ethical questions.


"True autonomy is going to require the answering of plenty of technical and ethical questions."

Precisely my point. Thanks. We all know flight autonomy is already here. Has been for ages.

The aerial refuelling is a neat trick which I'm not convinced can be taken for granted yet -- I can see this won't become fully autonomous for a while. That procedure would have to be completely predictable and reliable -- at least as reliable as a human-managed manoeuvre, and that isn't without risk.


Just pointing out that 'autonomy' can mean different things depending on your point of view. The kind of autonomy that has already been achieved, coupled with a reasonably safe ability to operate in congested airspace, will mean that regular autonomous cargo flights become possible.

As for aerial refuelling, I think autonomous probe & drogue refuelling is definitely feasible. They've already proven the ability of the X-47B's flight systems to maintain the refuelling position behind a 707, which is technically one of the harder things to get right.

One of the reasons aerial refuelling is so tricky is that it requires constant rapid adjustments by the pilot in the receiving aircraft to stay in position while receiving fuel. The X-47B should be able to process those adjustments much faster than a human pilot.


The 'good parts' tend to be human-controlled, but you don't have one person controlling six of them without quite a bit of automation. E.g.: http://www.spacewar.com/reports/USAF_Sponsors_Fully_Automati... (note 2007)

The key word is optionally piloted http://bayourenaissanceman.blogspot.com/2011/05/more-about-f...


Any aircraft can fly itself. Automation in flight systems is not new nor does it even require a computer. Negative feedback systems in both machines and in biology can do it very well -- in insects and birds it happens beautifully.
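To show how little machinery a negative feedback loop actually needs, here is a toy altitude hold with made-up numbers -- an analogue autopilot does the same job with op-amps instead of code:

    # Bare proportional negative feedback: always act against the error.
    target_alt = 1000.0      # metres; all values are illustrative
    alt = 900.0
    K_P = 0.1                # gain: altitude error (m) -> climb rate (m/s)

    for t in range(30):                   # one-second steps
        error = target_alt - alt          # the feedback signal
        alt += K_P * error * 1.0          # climb in proportion to the error
        if t % 10 == 0:
            print(f"t={t:2d}s  alt={alt:6.1f} m")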

Early autopilots were analogue and worked remarkably well. Drones can keep flying and gliding without help (as long as they have power and fuel) -- of course they can. It's a classic and ancient application. But right now that makes them one step above a paper dart -- albeit with weaponry and reconnaissance. And those don't work without human intervention.

They are merely a remote extension of their human pilot's hand. Thankfully, because no one wants an autonomous drone going off on its own to "discover itself".


The ability for a 'pilot' to specify that they want surveillance of some area and have a drone fly around pointing a camera at that area is a little more complex than simple autopilot. Classic reconnaissance aircraft often had multiple people to handle all of the complexities involved; moving to less than one person per plane doing the same job takes a lot of automation. That said, something like the MQ-1 Predator actually uses multiple people on the ground due to this issue, as does the MQ-9 Reaper, which can do autonomous flight operations. But both systems use multiple aircraft at the same time and have less than one controller per aircraft.

http://www.af.mil/information/factsheets/factsheet.asp?id=64...


http://en.wikipedia.org/wiki/Sea_Dart_missile#Gulf_War_.2819...

"This engagement was the first validated, successful engagement of a missile by a missile during combat at sea"


Global Hawk is autonomous, with no pilot behind a joystick.


I just hope that we advance so far in this field that eventually we have robots that exclusively fight and destroy other robots, the hope being that human casualties and destruction of human infrastructure are completely removed. Perhaps one day we'll take robots out of the picture and fight wars virtually altogether!



