The self-driving car, that cutting-edge creation that’s supposed to lead to a world without accidents, is achieving the exact opposite right now: The vehicles have racked up a crash rate double that of those with human drivers.
They obey the law all the time, as in, without exception. This may sound like the right way to program a robot to drive a car, but good luck trying to merge onto a chaotic, jam-packed highway with traffic flying along well above the speed limit. It tends not to work out well.
Turns out, though, their accident rates are twice as high as for regular cars, according to a study by the University of Michigan’s Transportation Research Institute in Ann Arbor, Michigan. Driverless vehicles have never been at fault, the study found: They’re usually hit from behind in slow-speed crashes by inattentive or aggressive humans unaccustomed to machine motorists that always follow the rules and proceed with caution.
The main issue with programming driverless cars to “act more human-like” can be encapsulated by this line in the article: “If you program them to not follow the law, how much do you let them break the law?”
Human thinking, as we have come to know it, treats following the law as a spectrum, with a whole lot of gray area between the positive state (follow the law) and the negative state (disobey the law).
Machines, as currently programmed, do exactly and only what their code tells them to do. At the point of execution, the choice is binary: they either do it (1) or they do not (0). And they do it in a fraction of the time humans take for the same process.
It seems that, at least for now, balancing strict adherence to the law with the judgment calls needed to handle these so-called gray areas is a complexity still out of reach of artificial intelligence (AI).
The article offers an example: should an autonomous vehicle sacrifice its occupant by swerving off a cliff to avoid colliding with, and possibly killing, a school bus full of children? Or, in an example closer to home: should we tell a white lie to people in order to avoid hurting their feelings?
Resolving this seeming incompatibility between the thought and execution processes of humans and machines is a challenge that will be around for a long, long time. Machines rely on programmable logic: if A, then B. Humans do not; they often draw on tools other than logic to make their decisions.
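The contrast can be sketched in code. The snippet below is a purely illustrative toy, not how any real autonomous vehicle is programmed: the speed limit, the merge scenario, and the 15% "human tolerance" figure are all invented for the example. It shows how a strictly rule-bound machine produces a binary answer from its rule, while a human-style decision admits a gray area around the same rule.

```python
# Hypothetical sketch: rule-bound machine logic vs. human gray-area judgment.
# The scenario and numbers below are invented for illustration only.

SPEED_LIMIT = 100.0  # assumed posted limit, km/h

def machine_should_merge(traffic_speed: float) -> bool:
    """Binary logic: if A, then B. Merge only if matching traffic
    never requires exceeding the posted limit."""
    return traffic_speed <= SPEED_LIMIT

def human_should_merge(traffic_speed: float, tolerance: float = 0.15) -> bool:
    """Gray-area judgment: a human may accept going somewhat over
    the limit (here, an assumed 15%) to merge safely with the flow."""
    return traffic_speed <= SPEED_LIMIT * (1.0 + tolerance)

# Traffic is flying along at 110 km/h, above the limit:
print(machine_should_merge(110.0))  # False: the rule-bound car waits at the ramp
print(human_should_merge(110.0))    # True: the human merges with the flow
```

The point of the sketch is that the machine's answer is fully determined by its rule, while the human version smuggles in a judgment call (the tolerance), which is exactly the part that is hard to specify: how much do you let it break the law?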
There is something, however, seemingly off about the notion that the driverless car’s compliance with the law is what counts as wrong.
One can view this another way: the driverless car’s staying within the bounds of the law could serve as an impetus, a precedent, even a model, for human drivers to seriously reconsider the tendencies and behavioral patterns that make them non-compliant with the law.
This, of course, runs contrary to an underlying issue with technology and artificial intelligence in the first place: how to make it more “human”, or at least temper it with an ethical regard for human life. And the human ego likely prevents us from even considering that we could, or should, emulate the law-abiding behavior we ourselves program into our machines.
If you’re a human, it just does not compute.
[Photo courtesy: nytimes.com]
А вы, друзья, как ни садитесь, все в музыканты не годитесь. – And you, my friends, however you seat yourselves, you still are not fit to be musicians.