Even as we grapple with human ethics, philosophers are beginning to worry about robot ethics. It isn’t idle speculation.
We’re not talking here about HAL 9000, the rogue computer in 2001: A Space Odyssey.
We’re talking instead about how robots – for example in driverless cars – are programmed. Does the program call for the robot to drive into a telephone pole or an oncoming car to avoid a pedestrian in a crosswalk, or does it take out the pedestrian to save the passenger’s life? Not a trivial decision. Certainly an ethical one.
With a human driver, we expect decisions based on instinctive response, for better or worse. But robots don’t have instincts. They have software programs.
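To make that point concrete, here is a minimal sketch of what such a program might look like. The function name and decision logic are invented for illustration; no real autonomous-vehicle system is being quoted.

```python
# Hypothetical illustration: a driverless car has no "instinct" branch.
# Someone must decide, in advance and in code, who bears the risk.
def choose_maneuver(pedestrian_in_path: bool, can_swerve_safely: bool) -> str:
    """Return the maneuver a (made-up) control program would select."""
    if not pedestrian_in_path:
        return "continue"
    if can_swerve_safely:
        return "swerve"  # avoid the pedestrian at no cost to anyone
    # The hard case: the programmer chose to protect the pedestrian,
    # endangering the passenger. A different line of code chooses differently.
    return "swerve_into_pole"

print(choose_maneuver(pedestrian_in_path=True, can_swerve_safely=False))
```

Whatever the right answer is, the point stands: it has to be written down somewhere, long before the crosswalk comes into view.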
The ethics of robots seems like an appropriate topic for science fiction writers. As far back as 1942, Isaac Asimov proposed three ethical laws for robots:

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
While Asimov’s laws work pretty well in guiding the plot in a Terminator movie, they may be inadequate to steer the driverless car that faces a choice of killing the passenger or the pedestrian.
Roboethics – a term coined as recently as 2002 – may differ notably from “regular” ethics in that it must straddle physical practicality and what we humans might call “doing the right thing.” That’s why this emerging field is multidisciplinary, with input from diverse experts in computer science, sociology, industrial design, theology, cognitive science and, of course, ethics.
One way to crystallize the questions posed by roboethics is to move from viewing robots simply as having artificial intelligence to seeing them as “artificial moral agents.” The challenge becomes anticipating and deciding how you want an artificial moral agent to behave in a given situation.
As robotics becomes more commonplace in manufacturing and service sectors to achieve efficiency, the number of ethical issues to decide for artificial moral agents will increase exponentially.
Consider a robotized fast food restaurant. An artificial moral agent won’t be physically able to chase down a customer who accidentally leaves his or her wallet on the counter. But should the artificial moral agent that prepares the food be programmed to detect E. coli bacteria?
Adding food safety duties to the robotic chef probably means adding substantial technology and cost. Those who downplay the importance of roboethics may say, “Let the marketplace decide.” Those who care about roboethics may counter with, “It would be morally wrong to program an artificial moral agent to ignore food safety.”
The same analogy applies to advanced manufacturing. The robots building and inspecting VW vehicles might be able to skate by the moral dilemma of software designed to trick emissions testing (that wouldn’t really conflict with any of Asimov’s laws), but what about skipping over design flaws or faulty equipment that could lead to crashes and human fatalities?
The issues posed by roboethics may not seem – and probably aren’t – as visceral as the debate over the minimum wage for human workers. Whether we like it or not, though, the two issues are intertwined. Capitalists will invest capital to improve efficiency and productivity, which usually means eliminating labor costs and jobs.
We hold human workers to certain standards. The question at hand is whether we will hold robots to the same or higher standards. It isn’t an idle question.