Drones: Brookings senior fellow shares his thoughts

Unmanned aerial vehicles, known as unmanned aircraft systems to the U.S. Department of Defense and simply as drones to most of the general public, are perhaps the most visible example of how far robotics has come in a relatively short period. Although the military used primitive examples of UAS as early as World War II, advances in their capabilities have driven an exponential rise in the military flight time they’ve logged over the past decade and opened the door to an intriguing range of non-military applications.

However, especially given their visible role in warfare, the success of these vehicles has been accompanied by increased scrutiny — and the word “drone” itself has come to carry negative associations for many people.

We recently spoke with Benjamin Wittes, a senior fellow in Governance Studies at The Brookings Institution, about those associations and more in a wide-ranging conversation on the topic of unmanned systems and their use both domestically and abroad. Wittes is a member of the Hoover Institution’s Task Force on National Security and Law and the co-founder and editor-in-chief of the Lawfare blog, which is devoted to the discussion of “Hard National Security Choices.” He is the author of “Law and the Long War: The Future of Justice in the Age of Terror” and co-editor of “Constitution 3.0: Freedom and Technological Change,” and he is currently writing a book on data and technology proliferation and their implications for security. A complete bio is available on the Brookings website.

President Obama gave his big speech addressing counterterrorism and drones last month. Did that change the narrative or the conversation about drones at all?

Well, I think it did, if only because it seems to promise a more sparing use of drones for military strikes. If that proves true, I do think that will be a significant shift, including a shift in the politics of drones. The more you move away from the routine, day-to-day use of drones for military purposes and kinetic strikes, the less salient they will be in the public mind.

Was that one of the goals of the speech? To make that less of a talking point?

I don’t think the president is concerned, as you might be, for example, about the perception of robotics this creates. His concern is something else: what the balance of the authorities associated with war and peace is going to be, and when you should and shouldn’t use weapons of war to kill people, particularly away from the hot battlefields, where there is a conflict going on that he’s trying to bring to an end. I do think the speech was very much about reassuring people that there isn’t an endless war, the thing people have come to associate very strongly with drones.

Are there common misconceptions people have about drones and how they’re used?

I think the most common misperception is that a drone is basically a weapon. Of course, as you and your readers will know, a drone is a platform, and you can put a lot of different functionality on that platform — from crop dusting to various forms of surveillance and espionage to traffic monitoring and first-responder support to killing people. And I do think the way unmanned aerial systems have developed in the military sector, particularly as weapons, has really conditioned the way the public understands what they are, and therefore conditions the way people respond when you talk about flying drones over the United States. When Rand Paul did his filibuster — or I guess it was Ted Cruz — and talked about drone strikes against somebody in the United States who is sitting peaceably at a café drinking his coffee or having brunch, I forget the exact words, you wouldn’t say that about an airplane. So I do think the military-sector development of this technology, and its use in ongoing covert actions, has really shaped the way people think about it.

Do you see military use of unmanned systems continuing to expand as it has been or leveling off?

I think the Obama administration at this point intends to use military force much less than we have over the last 10 years, so I think the use of all weapons systems will be somewhat in decline. That said, to whatever extent you’re using force, the percentage of that force delivered with unmanned systems is probably going to continue to go up. You really saw that in Libya, where the entirety of the U.S. intervention was conducted through unmanned systems.

Can you talk a little about lethal autonomous robots and the special issues they raise?

That’s a huge subject. First of all, there are no lethal autonomous robots yet, and the Defense Department has adopted a very strict set of guidelines regarding when and under what circumstances it would deploy such a thing. In addition, some human rights groups, particularly Human Rights Watch, have come out calling for a preemptive international ban on lethal autonomous robots as a matter of international law. There has been a significant debate about that, one in which I’ve been a participant, and much of it has taken place on the Lawfare blog, my website.

But the basic question, first of all, is: “Could you program a fully autonomous system, without a human’s hand in the loop or on the loop, to comply with the laws of war?” The second question is: “Even if you could, is there something unthinkable, at a sort of transcendental moral level, about having robots make life-and-death decisions without human supervision or involvement?” I think those questions are related, but they are ultimately independent of each other.

Right now the debate is happening at a level that is independent of the technology, because the technology doesn’t actually exist. There are lethal autonomous anti-weapon systems, though: the Patriot anti-air missile system and the Israeli Iron Dome antimissile system, for example, are not fully autonomous, but they’re pretty strikingly autonomous.

But in essence this is all conceptual, because there’s no danger of such systems being put in place tomorrow.

Not in the immediate term. And when, if ever, those systems will develop to the point that people are tempted to deploy them is a very tricky set of questions; I don’t think we know the answer. General-purpose artificial intelligence has been 10 years away for a lot of 10-year stretches at this point, so I would be very cautious personally about saying we’re X number of years away from being able to deploy a fully autonomous system. That said, Human Rights Watch is afraid of it, and afraid enough that they have a major campaign going on about this.

It’s not entirely clear what the U.S. policy governing the use of military drones is today, but given what we know, how has the policy evolved and how would you anticipate it will continue to evolve?

We’re using them increasingly. The capability keeps getting better, and they have certain very significant advantages over manned systems, both in terms of force protection — that is, you’re not putting people at risk when you’re running missions — and in terms of effectiveness: the ability to loiter for long periods and watch a target before hitting it is a significant advantage. So there are good reasons on both the effectiveness side and the force-protection side why reliance on these systems is increasing, and I think that will continue to happen. There’s not a lot of good reason for people to be in a cockpit when they can be much more safely far away from it, and there are relatively few applications in which you’d want humans in the cockpit much longer.

Does the debate about the use of military drones bleed over into the conversation about broader use of non-weaponized drones domestically?

The public perception of non-weaponized drones domestically is highly conditioned by the public perception of weaponized drones abroad. And that’s really a problem, because there’s no prospect of using drones domestically the way we use them overseas, just as there’s no prospect of using F-16s domestically the way we use them overseas. When we’re talking about civilian aviation, we don’t confuse it with military aviation in theaters of war; similarly, when you’re talking about domestic UAV systems, you shouldn’t confuse them with military UAV systems. That said, people do. Part of that is that the word “drone” has certain implications in people’s heads, and part of it is that it’s a new technology — or a new series of technologies. So I think part of the challenge for the domestic robotics industry is going to be to teach people, and convince them, that this is not essentially about weapons; it’s about all kinds of things that people either don’t want to do themselves or that we can automate in a fashion that does them better than humans can with their own hands. So I do think there is a significant policy challenge.

What are some of the security and legal challenges associated with broader commercial adoption of UAVs in the United States?

Broadly speaking, I think there are three really big challenges with regard to domestic UAS. One is privacy: convincing people that having things flying around, many of which will be taking pictures, is not a very scary thing. Privacy has real resonance for people. Making people comfortable with the idea that lots of entities may be flying things around the skies snapping pictures of things, recognizing that in certain instances they are right to object, and developing the norms and rules that regulate what we’re doing with that sort of imagery is very important.

The second is air safety. One of the reasons the skies are so safe — and they are amazingly safe — is that there are relatively few people flying things in them. Imagine going from a world in which the only people using the interstate highway system are licensed long-haul trucking companies and Greyhound to a world in which suddenly everybody can apply for a driver’s license and get one. You would have concerns about road safety. In the domestic drone context, maintaining a safe airspace while opening the skies to lots and lots of new actors, many of whom are much less trained than the FAA requires of even general aviation pilots, is going to be a very significant challenge. I think it’s a manageable challenge, but it’s going to be a significant one.

Then the third area, as we’ve already talked about, is perception. People really do not understand what domestic drones are going to be about, and part of that is that we haven’t figured it out yet. If you had asked in 1981 what personal computing was going to be about, people would have said lots of things about word processing and spreadsheets; they wouldn’t have focused on networking, and they would have entirely missed what, 15 years later, was essentially the dominant aspect of personal computing, which is connectivity. I believe we’re in a similar situation with respect to robotics, not just unmanned aerial systems but all kinds of robotics. We don’t even have the imagination to know what the major applications will be at a personal level, and that makes them very hard to describe.

When you see one very dominant use, like killing terrorists, that highly conditions what people think this technology is. Keeping an open mind about what it is as a domestic matter, as we integrate it into our domestic life, is another policy challenge. I want to kill terrorists; I like the idea that we’re killing terrorists with unmanned systems, and I’m not remotely ashamed of that. But I also don’t want that to be the only set of associations people have with a set of technologies that I think has much broader application, and probably a much broader range of civilian applications than military ones. So keeping those separate, and keeping space in our minds for the idea that this is a set of technologies with both military and non-military applications, and that those two things are essentially unrelated to one another, or at least should be thought about in very use-specific ways, is a very important sort of gestalt policy challenge, really for the public and for policymakers.

How would you advise the unmanned systems community — the people who build drones and would like to see them more widely deployed — to begin addressing the privacy concerns that drones trigger?

First of all, I think there needs to be a broad conversation about robotics. Several of us here at Brookings (Wells Bennett, John Villasenor, Peter Singer and I) have really been trying to have a public policy discussion at a broad level about robotics, both on the military side and on the civilian side. I think that’s a very important component: having a policy dialogue that involves the military sector and the non-military sector and that thinks about the technology in terms of — but not only in terms of — its military applications. I would suggest to the industry that it needs to engage in that conversation and not simply assume that because you’re building a robot to do some non-military task, people won’t map the military side of things onto it, or even the science fiction-based fears. So I would say to companies that are not doing military robotics, “Don’t think you’re immune from the perceptions that arise out of military applications.” And for the companies that are doing the military side, it’s really important to keep an eye on the civilian side as well. These discussions are integrated with one another and need to be kept separate from one another at the same time.

Who will ultimately be responsible for regulating and monitoring privacy concerns? The FAA? Local law enforcement? 

I hope it will not be the FAA, to be honest. The FAA is at its core an aviation safety regulator; it’s not a privacy-protecting organization. I would hope that the privacy side of this would be handled in other ways, though what those ways are is a complicated question. Number one, there are Fourth Amendment issues. Those will be resolved as the government uses drones to collect information and tries to use that information in criminal cases, and as people resist the introduction of evidence collected by unmanned systems — aerial and not, by the way. The second is privacy issues that arise between private parties: somebody does something on a private-to-private level that another person feels violates their privacy. There are really three ways to resolve that. One is as a regulatory matter, by some agency, local or federal. One is by statutory change: you could imagine states and the federal government adopting legislation that would assign rules to that sort of thing, or assign a regulator to make the rules. And the third is litigation. There is a tort of invasion of privacy, and there are torts of trespass and of public disclosure of private facts. So there are lots of things you could do with a drone that could prompt somebody quite reasonably to sue you, and I think you will get some rulemaking out of the disputes people sue each other over.

So, in some ways, this isn’t as different as it feels, and it can be handled by our existing laws and systems?

That’s right, although I do think new technologies raise old issues as well as new ones. Identifying what is new here and what old rules can simply be adapted to deal with is always a challenge with new technologies.

Finally, in as few words as possible, could you boil down your view of the benefits drones offer contrasted with the concerns they raise? 

I’m generally enthusiastic. I think robots in general offer a lot of opportunity to remove people from dangerous work that people aren’t very good at, and to replace them with machines that are cheaper, better at the work, and don’t put people in danger. That’s kind of an across-the-board view that applies to a lot of areas. They do raise certain issues, and I think the industry, and people generally, need to be sensitive to those issues. We haven’t identified all of them yet; there will be more we haven’t put our finger on. There will be security issues, and I think they’ll be substantial. Any time you’re delivering radical, empowering technologies into the hands of lots and lots of people, somebody’s going to do something awful with them, and this is a very empowering set of technologies. That said, I would assume that, as with almost all great technological advances, the good will dramatically outweigh the bad. The people who are against drones today will end up looking a lot like the people who were against computers 30 years ago, or the 19th-century Luddites who opposed weaving and textile technologies. I basically have every confidence that the good will wildly outweigh the bad.

[Photo courtesy U.S. Army]
