Tesla and AI

Tesla is now being investigated by the federal government, because the dangers that its so-called Autopilot software poses have become too egregious to keep sweeping under the rug. People who never signed up for Tesla’s beta program are being killed by it.

The real danger, though, is the omnipresent use of the misnomer “Artificial Intelligence” or “AI”. This term leads people to believe that computer software and hardware can provide something similar to human intelligence. This could not be further from the truth.

What has been developed, and is an important and useful innovation, is machine learning. This is the ability to train recognition software that can, for example, identify elements of an image as cars, people, road signs, curbs and so forth. It can even recognize situations in some cases. What it cannot do is reason about a situation and figure out what to do about it. Those action plans have to be developed by humans and programmed into the software, for every possible situation. That’s why self-driving software is taking so long and needs to operate in a constrained environment, where events that it is not programmed to handle do not occur.
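To make the distinction concrete, here is a deliberately simplified sketch (hypothetical, and not how any real driving system is written): a perception step produces labels, the way a trained recognizer would, and a hand-authored table maps those labels to action plans. Anything humans did not enumerate in advance falls through unhandled.

```python
# Hypothetical illustration: recognition output feeds a hand-written
# lookup of action plans. The plans themselves are not learned or
# reasoned about; a human had to anticipate each situation.

ACTION_PLANS = {
    frozenset({"red_light"}): "stop at the line",
    frozenset({"pedestrian", "crosswalk"}): "yield until clear",
    frozenset({"stop_sign"}): "stop, then proceed when safe",
}

def plan_action(recognized: frozenset) -> str:
    """Return the programmed plan whose trigger is present, if any."""
    for trigger, action in ACTION_PLANS.items():
        if trigger <= recognized:  # all trigger elements were recognized
            return action
    # No human wrote a rule for this combination. The software cannot
    # reason its way to a new plan the way a novice driver can.
    return "UNHANDLED: no programmed response"

print(plan_action(frozenset({"red_light", "rain"})))
print(plan_action(frozenset({"overturned_truck", "fog"})))
```

The second call is the whole point: the recognizer may label everything in the scene correctly, and the system still has no plan, because nobody programmed one.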

Think about it. You can teach a 16-year-old human to drive a car reasonably safely in a few days. That’s because once they understand the rules of the road, they are readily able to reason with them and apply them in situations that they have never seen before. That’s intelligence. Despite decades of trying, computer software isn’t there yet. There are powerful arguments that it never will be, that human intelligence differs in fundamental ways from the von Neumann model of the stored-program computer. We don’t have a stored program; we make it up as we go along. Maybe quantum computers will be different, but not in my lifetime as far as I can see.
