As robots enter our lives, how will they learn our ethics?

Jacob Ward

At a time when most people have yet to ride in a self-driving car, and may be hesitant to cede driving duties to a robot, Jacob Ward offered a strong argument in their favor. The former editor-in-chief of Popular Science and current television correspondent challenged his audience at the 2017 Robotics Alley Conference & Expo in Minneapolis to consider that 90 percent of automobile crashes result from human error.

“People are the weak link,” he said, noting that half of drivers between the ages of 18 and 29 admit to texting while driving. His conclusion: self-driving cars would remove human error from the driving equation, resulting in far fewer accidents.

The title of Ward’s keynote presentation was “Automatic Ethics.” He advanced several scenarios to flesh out his premise that automation is forcing ethics into uncharted territory. For instance, who would you sue if a robotic car crashed? And on a deeper level, if humans are going to turn over tasks to robotic helpers, how will we teach robots human ethics and standards of judgment?

For example, consider a robotic car confronted with an inescapable dilemma: it must either hit a pedestrian or endanger its own passenger. Which outcome would we choose, and how will we instill those rules in the robots we are going to entrust with such choices? Ward posited that the eventual adoption of self-driving cars is inevitable, but that the transition period will pose particular dangers from humans who refuse to surrender the wheel.
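To see why this is so hard, it helps to imagine what “instilling those rules” would literally look like in software. The sketch below is purely hypothetical; every name and number in it is invented for illustration, and no real vehicle works this way. Its point is that each line encodes a moral judgment some human had to write down first.

```python
# Hypothetical sketch only: all identifiers and risk numbers here are
# invented for illustration. The point is that every line encodes a
# moral judgment a human had to make explicit.

def choose_maneuver(options):
    """Pick among unavoidable-collision maneuvers.

    `options` is a list of dicts like
    {"action": "swerve_left", "pedestrian_risk": 0.9, "passenger_risk": 0.1},
    where the risks are estimated probabilities of serious harm.
    """
    # One possible rule: weight harm to pedestrians and passengers
    # equally and minimize the total. Changing these weights is an
    # ethical choice, not a technical one.
    PEDESTRIAN_WEIGHT = 1.0
    PASSENGER_WEIGHT = 1.0

    def expected_harm(option):
        return (PEDESTRIAN_WEIGHT * option["pedestrian_risk"]
                + PASSENGER_WEIGHT * option["passenger_risk"])

    return min(options, key=expected_harm)


maneuvers = [
    {"action": "brake_straight", "pedestrian_risk": 0.8, "passenger_risk": 0.1},
    {"action": "swerve_into_barrier", "pedestrian_risk": 0.0, "passenger_risk": 0.6},
]
print(choose_maneuver(maneuvers)["action"])  # -> "swerve_into_barrier"
```

Whoever sets those weights is, in effect, answering Ward’s question about whose safety comes first.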

As we struggle to determine who should be held responsible for the accidents that will inevitably occur, Ward offered two existing programs he believes could be instructive.

First, workers’ compensation insurance exists for people who perform dangerous jobs, such as law enforcement personnel. Second, the National Vaccine Injury Compensation Program was created to help the individuals and families who experience rare but serious reactions to vaccines. In essence, both programs exist because society needs people to perform risky jobs, and because the great majority of people benefit from vaccines that may harm a few. Similarly, while driverless cars could sharply reduce the number of annual deaths from automobile accidents and benefit society as a whole, they will not be perfect. These existing programs may offer a template for how an insurance program for automated vehicles could be established.

Another area where “automatic ethics” comes into play is defense. Ward cited life-or-death decisions where machines are fully in charge. Russia has a “dead hand” protocol under which, if a nuclear attack wiped out the country’s leadership, a retaliatory nuclear strike would be launched, controlled by machines rather than humans. Israel’s Iron Dome anti-missile system automatically shoots down incoming missiles without human involvement.

“There always should be a human in the trigger-pulling decision,” Ward said.

A newly developed military protocol sponsored by DARPA (the Defense Advanced Research Projects Agency) seeks to extend currently limited warfare devices, such as unmanned attack drones that require direct, one-to-one human control, into a larger coordinated group. The new plan puts a single human in charge of multiple robotic military devices, referred to as a swarm. While the military advantage of this arrangement seems obvious, unintended consequences could prove disastrous if control were to fall into the wrong hands. In fact, in 2015, artificial intelligence researchers published an open letter warning against autonomous weapons systems, he said.
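Ward did not go into technical detail, but the shift he described, from one-to-one control to a single operator commanding many devices, can be illustrated with a toy sketch. Everything below is invented for illustration and is not based on any real DARPA system:

```python
# Toy illustration of one-to-many swarm control. All names are
# hypothetical; no real military system is depicted here.

class Drone:
    def __init__(self, drone_id):
        self.drone_id = drone_id

    def execute(self, command):
        print(f"drone {self.drone_id}: executing {command}")


class SwarmOperator:
    """One human-facing controller commanding many drones at once."""

    def __init__(self, drones):
        self.drones = drones

    def broadcast(self, command):
        # A single human decision fans out to every device in the
        # swarm, which is why a compromised operator is so dangerous.
        for drone in self.drones:
            drone.execute(command)


operator = SwarmOperator([Drone(i) for i in range(3)])
operator.broadcast("hold position")
```

The same fan-out that multiplies one operator’s effectiveness also multiplies the damage a single bad decision, or a single stolen credential, can do.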

Returning to the central challenge of how to teach robots to apply human ethics, Ward cited two approaches currently in use. One program employs a three-character game involving a bank teller, a cop, and a robber; as various scenarios unfold, the artificial intelligence system learns how humans would react to each, ingraining human values in the process. Another, even more fundamental approach, Ward related, has AI systems read nursery rhymes to learn the morals they contain.
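Ward did not describe the mechanics of the game, but the general family of techniques it suggests, learning values from observed human choices rather than writing them down directly, can be sketched simply. The code below is an assumption about that general approach, not the actual program he cited:

```python
# Toy sketch of value learning from human reactions, loosely in the
# spirit of the scenario-game approach described in the talk. This is
# an assumption about the general technique, not the real system.
from collections import Counter, defaultdict

class ValueModel:
    """Learn which action humans prefer in each type of scenario."""

    def __init__(self):
        # scenario -> counts of the actions humans chose in it
        self.observations = defaultdict(Counter)

    def observe(self, scenario, human_action):
        """Record one human reaction to a scenario."""
        self.observations[scenario][human_action] += 1

    def preferred_action(self, scenario):
        """Return the action humans most often chose, if any were seen."""
        if scenario not in self.observations:
            return None
        return self.observations[scenario].most_common(1)[0][0]


model = ValueModel()
# Hypothetical training data: how human players reacted in the game.
model.observe("robber_threatens_teller", "cop_intervenes")
model.observe("robber_threatens_teller", "cop_intervenes")
model.observe("robber_threatens_teller", "teller_complies")

print(model.preferred_action("robber_threatens_teller"))  # cop_intervenes
```

Real systems replace the simple tallies with statistical models that generalize to scenarios never seen in training, but the core idea, inferring values from what humans actually do, is the same.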

If the development of robotic ethics seems daunting, in many ways it is. As Ward pointed out, after years of study, psychologists and other professionals “barely understand why humans do what they do.”
