Filling Voids and Fixing Problems in Autonomous Vehicle Algorithms

It’s been just over a year since the first pedestrian was hit and killed by a self-driving car. Since then, we’ve learned a lot about the algorithms that drive autonomous vehicles.

Self-driving cars are probably better than human drivers at maintaining safe speeds and distances on highways. But the technology still has serious problems, such as pedestrian-detection algorithms that work better on light-skinned pedestrians and are therefore more likely to fail to avoid a dark-skinned pedestrian.

Nicholas Evans argues that the artificial intelligence community has not done enough to correct the biases that are currently embedded in their systems. 

“This is not a new problem and it's not a problem that's exclusive to autonomous vehicles,” he told Living Lab Radio. “The problem with race in algorithmic bias is very long standing.” 

Evans, who is an assistant professor of philosophy at the University of Massachusetts Lowell, is working with other philosophers and an engineer to write algorithms using ethical theories. 

One area of his work has to do with the distribution of very small risks over millions of miles driven. 

In one example, an autonomous vehicle is passing a vehicle transport truck on the highway. 

“If [the autonomous car] moves around in its lane in order to keep its occupants safe, is it applying risk to someone in a third lane by getting too close to that driver?” Evans asks.
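To make that tradeoff concrete, here is a minimal toy sketch in Python. The per-mile probabilities, the linear risk model, and the function names are invented purely for illustration; they are not drawn from Evans's work or from any real vehicle data. The sketch only shows how a small lateral shift can move risk from the car's occupants onto a driver in the adjacent lane, and how tiny per-mile risks add up over millions of miles.

```python
# Toy sketch of the risk-distribution question Evans raises: shifting an
# autonomous car's lateral position away from a transport truck lowers risk
# to its own occupants but raises risk to a driver in the third lane.
# All numbers below are hypothetical and chosen only for illustration.

def collision_risk(lateral_offset_m: float) -> dict:
    """Return assumed per-mile collision risks for a given lateral shift.

    lateral_offset_m: how far the car shifts toward the third lane
    (0 = lane center). The risk curves are made up, not calibrated.
    """
    occupant_risk = 1e-7 * max(0.0, 1.0 - lateral_offset_m)   # falls as the car moves away from the truck
    third_lane_risk = 5e-8 * (1.0 + 2.0 * lateral_offset_m)   # rises as the car crowds the neighboring lane
    return {"occupants": occupant_risk, "third_lane_driver": third_lane_risk}


def aggregate(risk_per_mile: float, miles: float) -> float:
    """Probability of at least one incident over many miles of driving."""
    return 1.0 - (1.0 - risk_per_mile) ** miles


if __name__ == "__main__":
    for offset in (0.0, 0.3, 0.6):
        r = collision_risk(offset)
        print(
            f"offset={offset:.1f} m | occupants: {aggregate(r['occupants'], 1e6):.4%} "
            f"| third-lane driver: {aggregate(r['third_lane_driver'], 1e6):.4%} over 1M miles"
        )
```

The point of the toy model is not the numbers but the structure: any lane-keeping policy implicitly decides whose small risk grows and whose shrinks, and those choices compound over millions of miles driven.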

Published: April 17, 2019
