
Between Humans and Machines

by Lisa-Maria Schantl

One year ago, Uber began testing an autonomous fleet of Volvo XC90 models in Tempe, Arizona. The self-driving cars were used for UberX transportation services and carried customers on a regular basis. A human test driver sat in each vehicle to correct the system’s mistakes. Since Sunday, March 18, Uber is no longer testing these cars. One of the worst possible scenarios had brought the complex technology and its risks to the world’s attention.

Elaine Herzberg, 49, was about to cross a street in Tempe, pushing her bicycle alongside her. She stepped out of a shadowy spot. A moment later, she was hit by an SUV. Fatally. Rafael Vasquez, 44, was at the wheel of the car.

One detail distinguished this event from the more than 37,000 other deadly road accidents that happen each year in the U.S.: Vasquez was not steering the car, because it was driving itself. The accident raised several questions. Who is responsible? Herzberg, because she chose a blind spot to cross the street? Vasquez, because he did not react as he should have? Or the car and the operating company itself, because of the use of immature technology?

Uber responded immediately and suspended its tests in Arizona, Pennsylvania, California and Canada. What seemed like a considerate reaction could also be an act of hysteria and panic. It showed that the company had tested a system without anticipating every possible event and the consequences it could cause. Should we really go blindly into the first field tests of artificial intelligence?

We have made more technological progress in the 21st century than in any other time period. The development of artificial intelligence was only a matter of time, and it has reached new peaks over the last few years.

By 2016, eight percent of U.S. companies had already put autonomous robots to use in their daily operations. One of the most recent is LaGuardia Airport, which added an autonomous robot with features such as a 360-degree view and thermal imaging to its security staff. Self-driving cars seemed to be the next logical step.

“Self-driving cars are the most complex robots that humans have ever built,” Timothy Carone, a professor in the field of automation and artificial intelligence at the University of Notre Dame in Indiana, told CNN.

Letting go of this accomplishment would be as irresponsible as pressing ahead without considering every detail of its development, its trials and its evaluation. Before the next self-driving car crashes, we have to figure out what it means to be human and whether we have the right to risk the lives of human beings – or any form of organic life – in favor of technological progress.

First and foremost, Herzberg’s death raises ethical questions. Questions that make us wonder about the true relationship between humans and machines. Questions that should no longer be ignored in the face of the fast-paced development in the fields of robotics and artificial intelligence.

The accident in Tempe does not reveal the failure of a machine, but the lack of thorough consideration in deploying one. Hopefully, all possible risks will now be weighed, inside and outside of every office, before the next tragic accident occurs.
