People seem more than a bit freaked out by the trolley problem right now. The 60s-era thought experiment, occasionally pondered with a bong in hand, requires that you imagine a runaway trolley barreling down the tracks toward five people. You stand at a railway switch with the power to divert the trolley to another track, where just one person stands. Do you do it?
This ethical exercise takes on new meaning at the dawn of the autonomous age. Given a similar conundrum, does a robocar risk the lives of five pedestrians, or its passengers? Of course, it isn’t the car making the decision. The software engineers are making it, cosseted in their dim engineering warrens. They will play God. Or so the theory goes.
Giving machines the ability to decide who to kill is a staple of dystopian science fiction. And it explains why three out of four American drivers say they are afraid of self-driving cars. The National Highway Traffic Safety Administration even suggested creating something of an “ethical” test for companies developing the technology.
But the good news is, that point might be moot. In a paper published in the Northwestern University Law Review, Stanford University researcher Bryan Casey deems the trolley problem irrelevant. He argues that it's already been solved—not by ethicists or engineers, but by the law. The companies building these cars will be "less concerned with esoteric questions of right and wrong than with concrete questions of predictive legal liability," he writes. Meaning, lawyers and lawmakers will sort things out.
Solving the Trolley Problem
“The trolley problem presents already solved issues—and we solve them democratically through a combination of legal liability and consumer psychology,” says Casey. “Profit maximizing firms look to those incentivizing mechanisms to choose the best behavior in all kinds of contexts.” In other words: Engineers will take their cues not from ethicists, but from the limits of the technology, tort law, and consumers’ tolerance for risk.
Casey cites Tesla as an example. Drivers of those Muskian brainchildren can switch on Autopilot and let the car drive itself down the highway. Tesla engineers could have programmed the cars to go slowly, upping safety. Or they could have programmed them to go fast, the better to get you where you need to be. Instead, they programmed the cars to follow the speed limit, minimizing Tesla's risk of liability should something go awry.
“Do [engineers] call in the world’s greatest body of philosophers and commission some grave treatise? No,” says Casey. “They don’t fret over all the moral and ethical externalities that could result from going significantly lower than the speed limit or significantly higher. They look to the law, the speed limit, and follow the incentives that the law is promoting.” By that, he means that if policymakers and insurers decide to, say, place the liability for all crashes on the autonomous cars, the companies making them will work very hard to minimize the risk of anything going wrong.
The public has a say in this too, of course. “[T]he true designers of machine morality will not be the cloistered engineering teams of tech giants like Google, Tesla, or Mercedes, but ordinary citizens,” writes Casey. Lawmakers and regulators will respond to the will of the public, and if they don’t, automakers will. In January, Tesla pushed an Autopilot update that lets cars zip along at up to 5 mph over the limit on some roads, after owners complained about getting passed by everyone else. The market spoke, and Tesla responded.
Ethics in Self-Driving Cars
Still, thought exercises like the trolley problem help gauge the public's thoughts on autonomous vehicles. "When you're trying to understand what people value, it's helpful to eliminate all the nuance," says Noah Goodall, a transportation researcher with the Virginia Transportation Research Council who studies self-driving cars. The thought experiment can provide a broad overview of what kinds of guidelines people want for those cars, and the problems they want addressed. But it can confuse people, too, because scenarios like these sit at the fringe of fringe cases. "Trolley problems are pretty unrealistic—they throw people," says Goodall.
The trolley problem also assumes a level of sophistication from the technology that remains quite some way down the road. At the moment, robocars cannot discern a child from a senior citizen, or a group of two people from a group of three, which makes something like the trolley problem highly theoretical. "Sometimes it's hard to come to a fine-grain determination of what's around [the car]," says Karl Iagnemma, who used to head up the Massachusetts Institute of Technology's Robotics Mobility Group and is now the CEO of the self-driving software startup nuTonomy. "Typically the information that's processed by a self-driving car is reasonably coarse, so it can be hard to make these judgments off of coarse data."
Helping people feel comfortable with autonomous vehicles requires "being upfront about what these vehicles really do," Goodall says. "They prevent a lot of crashes, a lot of deaths. Fine-tuning things is difficult, but companies should prove they put some thought into it." More than 35,000 people die on American roads every year; over 1.25 million people die worldwide. Worrying about the ethical dilemmas of something like the trolley problem won't save lives, but honing autonomous technology might.
But in addition to hiring more engineers, all these companies developing robocars might want to hire a few smart lawyers. Turns out they’ll have a hand in the future of mobility, too.