Are Self-Driving Cars Really Safer?
Auto manufacturers and major tech companies have been racing to get self-driving cars on the road as soon as possible. Profit is obviously a major motivator, but engineers and distributors also point to the purported safety benefits of self-driving vehicles. According to a widely cited NHTSA estimate, roughly 94 percent of car accidents are attributable to human error. If error-free self-driving cars could eliminate those accidents, we could save tens of thousands of lives every year.
This depends on the assumption that self-driving cars are actually safer. Are they really?
Reliability
First, we have to look at a self-driving car’s reliability compared to that of a human driver. Autonomous vehicles rely on onboard software to monitor their environment and control the vehicle. They’re programmed to operate in a specific way and will not deviate from that programming; in this sense, they can be compared to a calculator. A properly designed calculator will never give you the wrong answer to an arithmetic problem, and a self-driving car will never decide to go against its programming and commit an error.
Importantly, self-driving cars will also be exempt from many of the factors that cause humans to get into accidents. Cars can never become intoxicated. Cars will never be distracted by a sensational billboard or the scene of an accident on the other side of the road. Cars will never suffer from road rage, or attempt to drive while tired. For these reasons, self-driving cars already have a leg up on their human counterparts—regardless of their inner workings.
That said, the type of programming associated with the car matters—and there’s no guarantee it can function perfectly 100 percent of the time.
Detection Systems
One of the main systems a self-driving car uses is a sophisticated detection system. Using a combination of cameras, radar, and lidar, the car “sees” its environment, looking for street signs, traffic lights, and, of course, pedestrians. Companies like to showcase their software at its best, but some studies suggest that detection systems fail to notice pedestrians in an alarming number of cases, and conditions like heavy rain, snow, and fog can greatly compromise the integrity of the system.
In other words, while a self-driving system can hypothetically operate a vehicle more safely than a human, there’s no guarantee that its data feed will be accurate; the car can’t avoid hitting a pedestrian it doesn’t know is there.
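To make this concrete, here is a deliberately simplified, hypothetical sketch (not any real manufacturer’s system) of how a perception pipeline might fuse confidence scores from several sensors and only report a pedestrian when the combined score clears a threshold. When bad weather degrades every sensor at once, the pedestrian can silently vanish from the car’s view:

```python
# Toy sketch of multi-sensor pedestrian detection. Illustrative only:
# the sensors, weights, and threshold here are invented for this example.

def fused_pedestrian_score(camera, radar, lidar, weights=(0.5, 0.2, 0.3)):
    """Weighted average of per-sensor confidence scores, each in [0, 1]."""
    return sum(w * s for w, s in zip(weights, (camera, radar, lidar)))

def pedestrian_detected(camera, radar, lidar, threshold=0.6):
    """The planner only reacts if the fused confidence clears the threshold."""
    return fused_pedestrian_score(camera, radar, lidar) >= threshold

# Clear day: all sensors are confident, so the pedestrian is detected.
print(pedestrian_detected(camera=0.9, radar=0.7, lidar=0.8))  # True

# Heavy rain at night: camera and lidar confidence collapse. The pedestrian
# is still there, but the fused score falls below the threshold, so the
# planner never "sees" them and never brakes.
print(pedestrian_detected(camera=0.2, radar=0.7, lidar=0.3))  # False
```

The point of the sketch is that the decision logic can be flawless while the inputs are not: garbage in, garbage out applies to driving software just as it does to a calculator.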
Hacking and Software Integrity
There’s also the problem of security and software integrity. These days, a guessed password or an exploited vulnerability can be all it takes for a malicious actor to gain access to a connected system. If someone were to gain control of a self-driving car, they could sabotage it or drive it however they saw fit, potentially putting countless lives in danger.
Tech companies are acutely aware of this, and they are scrambling to improve software security to the point that hacking becomes a non-issue. But hacking isn’t the only way to exploit a vehicle’s software. In 2017, for example, a clever artist demonstrated how a self-driving vehicle could be “trapped” in a modern-day salt circle, using an unbroken white line to confuse the vehicle’s software and prevent it from leaving a location. On an even simpler level, pedestrians could learn to step in front of self-driving vehicles with reckless abandon (and a false sense of security), because the vehicles are programmed to avoid hitting pedestrians at all costs. These risky behaviors could eventually lead to tragedy, especially while both self-driving and traditional vehicles share the road.
Of course, there’s an important caveat here: traditional cars can be exploited in similar ways. Someone could hack a self-driving vehicle and make it crash into a wall, sure, but they could also cut a traditional car’s brake lines to the same effect. Someone might be able to trap an autonomous car with painted lines, but they could also deflate the tires of a traditional vehicle and leave it equally helpless.
The Bottom Line
So what’s the bottom line? Right now, self-driving cars have a handful of advantages over human drivers, but they also have weaknesses that can’t be ignored. Their autonomous, emotionless nature means they’ll never suffer from the most common human causes of accidents (like intoxication or road rage), but their sensors and programming are nowhere near perfect.
The good news is that self-driving vehicle technology is constantly getting better. Companies understand the weaknesses of current autonomous systems and are working aggressively to correct them. Every year, autonomous vehicle safety improves, bringing these cars a little closer to decisively surpassing the safety record of human drivers.