Humans slamming into driverless cars: is it really a key flaw on the part of the latter?

According to a report on Bloomberg, there seems to be a key flaw in the way driverless cars are behaving on the road…

The self-driving car, that cutting-edge creation that’s supposed to lead to a world without accidents, is achieving the exact opposite right now: The vehicles have racked up a crash rate double that of those with human drivers.

The glitch?

They obey the law all the time, as in, without exception. This may sound like the right way to program a robot to drive a car, but good luck trying to merge onto a chaotic, jam-packed highway with traffic flying along well above the speed limit. It tends not to work out well.

The report also notes that although the accident rates are twice as high as for regular cars, the nature of the accidents does not put the driverless cars at fault…

Turns out, though, their accident rates are twice as high as for regular cars, according to a study by the University of Michigan’s Transportation Research Institute in Ann Arbor, Michigan. Driverless vehicles have never been at fault, the study found: They’re usually hit from behind in slow-speed crashes by inattentive or aggressive humans unaccustomed to machine motorists that always follow the rules and proceed with caution.

The main issue with programming driverless cars to “act more human-like” can be encapsulated by this line in the article: “If you program them to not follow the law, how much do you let them break the law?”
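
To make that question concrete, here is a minimal sketch in Python (hypothetical, not from the article or from any actual autonomous-vehicle codebase) of what "letting them break the law" might look like: the permitted amount of law-breaking becomes an explicit, tunable parameter that some engineer has to pick a number for.

```python
# Hypothetical sketch only: "how much do you let them break the law?"
# becomes a literal number an engineer must choose. The function name,
# parameters, and values are illustrative, not from any real system.

def target_merge_speed(speed_limit_kph: float,
                       traffic_flow_kph: float,
                       overspeed_tolerance_kph: float) -> float:
    """Pick a merge speed, allowing at most `overspeed_tolerance_kph`
    over the posted limit in order to keep up with surrounding traffic."""
    ceiling = speed_limit_kph + overspeed_tolerance_kph
    return min(traffic_flow_kph, ceiling)

# Tolerance of 0: the strictly lawful car merges at 100 into 120 km/h traffic.
print(target_merge_speed(100, 120, 0))   # 100
# Tolerance of 15: the car now breaks the law, by exactly as much as we allowed.
print(target_merge_speed(100, 120, 15))  # 115
```

Once the rule is relaxed at all, someone has to commit to a number, and every such number is a policy decision, not merely a technical one.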

Human thinking, as we have come to know it, treats following the law as a spectrum: there is a whole lot of gray area between the positive state (obey the law) and the negative state (disobey it).

At this point, machines are programmed to do exactly what their code tells them to do, and nothing more. When it comes to execution, either they do it (1) or they do not (0), and they do it in a fraction of the time it takes a human to run through the same process.
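
As a toy illustration (a sketch of the idea, not of how production driving software is actually structured), that binary character of execution looks something like this: the check either holds or it does not, with no third state in between.

```python
# Toy illustration of binary rule execution: the check evaluates to
# exactly True (1) or False (0); there is no partial compliance.

SPEED_LIMIT_KPH = 100  # illustrative value

def may_accelerate(current_speed_kph: float) -> bool:
    return current_speed_kph < SPEED_LIMIT_KPH

print(may_accelerate(99.9))   # True  (1): accelerate
print(may_accelerate(100.0))  # False (0): hold speed
```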

It seems that, as of this moment, balancing strict compliance with the law against the judgment calls needed to handle these so-called gray areas is a complex process that remains out of reach of artificial intelligence (AI).

The article offers an example: should an autonomous vehicle sacrifice its occupant by swerving off a cliff to avoid colliding with, and possibly killing, a school bus full of children? Or, an example closer to home: should we tell a white lie to avoid hurting someone's feelings?

Resolving this seeming incompatibility between the thought and execution processes of humans and machines is a challenge that will be around for a long, long time. Machines rely on programmable logic: if A, then B. Humans do not; they often draw on tools other than logic to make their decisions.

There is something off, however, with the notion that the driverless car's compliance with the law is what gets labeled the flaw.

One can view this another way: the driverless car's staying within the parameters of the law can serve as an impetus, a precedent, even a model, for human drivers to seriously consider fine-tuning the tendencies and behavioral patterns that make them non-compliant with the law.

This, of course, runs up against an underlying issue with technology and artificial intelligence in the first place: how to make it more “human”, or at least temper it with an ethical regard for human life. And, predictably, the human ego keeps us from even considering that we can or should emulate the law-abiding behavior we ourselves program into our machines.

If you’re a human, it just does not compute.

[Photo courtesy: nytimes.com]

4 Replies to “Humans slamming into driverless cars: is it really a key flaw on the part of the latter?”

  1. We are still in the infancy stage of Artificial Intelligence. It is easier to work on a flying machine piloted by an Artificial Intelligence.

    The “Driverless Car” is connected to a sophisticated GPS system and carries many sensors, so if one or two of those sensors fail, the result is a wayward driverless car. Assembly robots have the same problem.

  2. Why would anybody want a driverless car?

    We are men, we hunt food for ourselves, we pitch our own tents, we make our own fire, we make our own repairs, etc, etc.

    Seriously though, there are just some things on this earth that are not possible without the constant intervention of humans.

  3. After a lifetime of driving, repairing and studying automobiles, I have come to an unavoidable conclusion – we are the weakest link in a car. As car components go, human beings are deeply substandard – we have imperfect perception, we are ruled by emotion, and we vary wildly in quality.
