It’s been just over a year since the first pedestrian was hit and killed by a self-driving car. Since then, we’ve learned a lot about the algorithms that drive autonomous vehicles.
Self-driving cars are probably better than human drivers at maintaining safe speeds and distances on highways. But the technology still has serious problems, such as pedestrian-detection algorithms that recognize light-skinned pedestrians more reliably than dark-skinned ones, making the cars more likely to hit dark-skinned pedestrians.
Nicholas Evans argues that the artificial intelligence community has not done enough to correct the biases that are currently embedded in their systems.
“This is not a new problem and it's not a problem that's exclusive to autonomous vehicles,” he told Living Lab Radio. “The problem with race in algorithmic bias is very long standing.”
Evans, who is an assistant professor of philosophy at the University of Massachusetts Lowell, is working with other philosophers and an engineer to write algorithms using ethical theories.
One area of his work has to do with the distribution of very small risks over millions of miles driven.
In one example, an autonomous vehicle is passing a vehicle transport truck on the highway.
“If [the autonomous car] moves around in its lane in order to keep its occupants safe, is it applying risk to someone in a third lane by getting too close to that driver?” Evans asks.
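To make that trade-off concrete, here is a minimal illustrative sketch in Python. It is not drawn from Evans's actual algorithms; the per-mile collision probabilities and the two lane positions are invented solely to show how a tiny per-mile risk, redistributed between a car's occupants and a driver in a third lane, adds up over millions of miles driven.

```python
# Illustrative sketch only: the numbers are assumptions, not data from
# Evans's research. It compares two hypothetical lane positions while
# passing a transport truck on a three-lane highway.

# Hypothetical per-mile collision probabilities for each party.
POSITIONS = {
    # Staying centered: slightly more risk from the truck, none added to lane 3.
    "stay_centered": {"occupants": 2e-7, "third_lane_driver": 0.0},
    # Shifting away from the truck: less risk to occupants, some shifted to lane 3.
    "shift_away":    {"occupants": 1e-7, "third_lane_driver": 1.5e-7},
}

MILES = 1_000_000  # risks that are negligible per mile accumulate over many miles

for name, risks in POSITIONS.items():
    expected = {who: p * MILES for who, p in risks.items()}
    total = sum(expected.values())
    print(f"{name}: expected incidents over {MILES:,} miles = "
          f"{expected['occupants']:.2f} (occupants), "
          f"{expected['third_lane_driver']:.2f} (third-lane driver), "
          f"total {total:.2f}")
```

Run over a million miles, the "shift_away" choice lowers the expected harm to the car's own occupants but raises the total expected harm by pushing some of it onto the third-lane driver, which is exactly the kind of distributional question the ethical theories are meant to adjudicate.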