Avoidable Contact #66: The autonomous grift, and how you’re going to pay the price for it

(Image: Popular Science’s “Urbmobile” concept. Credit: Popular Science)

I can’t tell you what the price of oil, bitcoin, or Tesla stock will be 24 hours from now, but I feel absolutely confident in making the following longer-term prediction: You are never going to share the road with a significant number of autonomous vehicles. “Never,” in this case, is a fancy way to say never. Not in the lifetime of anyone reading this column. I am stone-cold certain about this. If you want to contact me and make some sort of bet, I’m your huckleberry—but give me a chance to convince you on the matter first.

Let’s start with Moravec’s paradox. It tells us that it is remarkably easy to get computers to do things which seem difficult to human beings, such as playing chess at a grandmaster level, but that it is somewhere between “very hard” and “impossible” to get computers to do things that come naturally to people, such as learning a new physical activity, holding a conversation, and operating with a broad set of rules in unpredictable circumstances. Moravec suggests that it took a really long time, something along the lines of a billion years, to evolve the motor-muscle and visual-interaction skills that we take for granted in ourselves and in animals, but that the evolution of rational thought was a fairly quick thing. It took humans maybe 100,000 years to develop the faculties they needed to play chess; it took computers about 50 years to develop the same capacity. Apply that same ratio to a billion years, and you get … a lot longer than Ford, Waymo, or the self-appointed “science geek” in your office would like to believe.
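
If you run the author’s ratio at face value, the arithmetic is easy to make explicit. The sketch below (in Python) only restates the figures from the paragraph above; treat it as a rhetorical illustration, not a forecast.

    # Taking the paragraph's ratio at face value. Figures come from the
    # text above; this is a rhetorical illustration, not a forecast.
    human_chess_years = 100_000     # rough time for humans to evolve chess-level reasoning
    computer_chess_years = 50      # rough time for computers to match it
    ratio = human_chess_years / computer_chess_years   # a 2,000x "speedup"

    sensorimotor_years = 1_000_000_000   # Moravec's ~billion years of evolution
    print(sensorimotor_years / ratio)    # prints 500000.0: "a lot longer," indeed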

We all intrinsically understand this. We trust a computer to fly us across the Atlantic Ocean or dispense life-saving medication, but we wouldn’t let one play Jarts™ with our children. Five hundred bucks will buy you enough computing power to plot a trajectory to Betelgeuse, but five billion dollars can’t buy you a robot capable of taking your spoken order, finding its way to Burger King with no external help, and returning with what you asked for. Now here’s something you might not understand: We are no closer to the “Burger King errand machine” than we were in the year 2000, or 1980, or 1960 for that matter. This is not a problem that can be solved with a new generation of Pentium processor or a faster memory chip. We will need an entirely new kind of computer architecture, and we have no idea how to get there.

“Now wait a minute, buddy,” you’re saying. “If this is such an impossible problem, why are so many brilliant people spending their entire lives on it? And why is a relatively obscure auto writer the only person saying the emperor has no clothes?” Fair question—but it’s also one you could have asked regarding alchemy 800 years ago. There were thousands of utterly brilliant people who wasted their lives looking for the transmutation of lead into gold, because at the time it seemed no more difficult than many other problems which were eventually solved. Most educated men of the 13th century thought alchemy was considerably more plausible than the creation of a flying machine, for example.

Autonomous vehicles, along with the “artificial intelligence” to operate them, have seemed just around the corner since at least 1980—but we are no closer to either. We’re just lowering the bar and redefining the terms. When we say “artificial intelligence” today, we really mean “machine learning,” which mostly means throwing a lot of computing power at techniques we’ve had since the ’60s. I’ve done a little machine learning for various financial institutions in the past decade. If you give me enough processor time, I can “teach” a computer to “recognize” a picture of a flower about as well as a two-year-old. The difference is that the two-year-old is smart enough to recognize an oil painting of a flower as well … and a crayon drawing … and a cartoon … and a flower made of LEGOs. All of these things are beyond the “AI” that we have right now.
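
For what it’s worth, here is roughly what that flower “recognition” exercise looks like in practice: a minimal sketch in PyTorch, assuming a hypothetical folder of labeled photos. The folder name, class layout, and training details are illustrative, not anything from a production system.

    # A minimal sketch of the kind of "machine learning" described above:
    # fine-tuning a stock convolutional network to label photos of flowers.
    # The "flowers" folder and its layout are hypothetical placeholders.
    import torch
    import torch.nn as nn
    from torchvision import datasets, models, transforms

    # Standard preprocessing for an ImageNet-pretrained network.
    preprocess = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    # Assumes photos arranged as flowers/<class_name>/*.jpg.
    data = datasets.ImageFolder("flowers", transform=preprocess)
    loader = torch.utils.data.DataLoader(data, batch_size=32, shuffle=True)

    # Reuse a pretrained ResNet; freeze the backbone, retrain the last layer.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for p in model.parameters():
        p.requires_grad = False
    model.fc = nn.Linear(model.fc.in_features, len(data.classes))

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    model.train()
    for epoch in range(3):  # a few passes will "recognize" photos just fine...
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()
    # ...but the result knows nothing about oil paintings, crayon drawings,
    # cartoons, or LEGO flowers. It has learned pixel statistics, not "flower."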

Similarly, the “autonomous vehicles” predicted in magazines like OMNI functioned more or less like human drivers, only better. Forty years later, we’ve drastically simplified the task at hand to “Level 4” autonomy, which basically means “a vehicle that can operate in a thoroughly known and mapped area with no severe weather, no interruption of wireless signal, and no truly out-of-bounds situations like, say, a deer standing in the middle of the road while a child stands on the sidewalk near said deer.” In 1995, Carnegie Mellon built “Navlab 5,” a Pontiac Trans Sport that steered itself across the country for all but 50 miles of the journey, using a very careful selection of roads. The best “autonomous” vehicles today are a little better than that—but they still freeze up when it’s time to do something absolutely crazy like pulling into a gas station or taking a detour around a temporarily closed road. This, despite the fact that today’s average desktop computer is more than a thousand (yes, thousand) times as powerful as its 1995 equivalent. If increasing the processing power by over a thousand times doesn’t let you pull into a gas station, do you think that increasing it another thousand times is going to do the trick? What about that scenario of the deer in the road and the child on the sidewalk? Navlab 5 had about a million transistors; the PlayStation 4 has 5.8 billion. How many transistors do you need to make a decision between a deer and a child?
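
For the record, the hardware comparison in that last question works out as follows; the figures are the ones cited above.

    # The hardware ratio cited above, made explicit. Figures are the column's
    # own; the point is that the ratio is enormous and still beside the point.
    navlab5_transistors = 1_000_000       # Navlab 5, mid-1990s
    ps4_transistors = 5_800_000_000       # PlayStation 4 APU
    print(f"{ps4_transistors / navlab5_transistors:,.0f}x")   # prints 5,800x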

The people at the real technical end of autonomous driving know all of this, of course. They know that there is no way to put “robot cars” on the road with real human drivers, if only because the robot cars will have to be programmed to let everyone cut in front of them. The only way this can possibly work is if you take the human drivers out of the equation. This makes the problem much simpler. You line the roads with sensors, and if there’s any kind of problem you just shut the whole section of road down until some kind of external agent sorts it out. Those of you who live in cities with subways will recognize this approach to the problem, because that’s how subways work—or how they would work if we trusted subways to be operated by robots. You’d think train systems would be far easier to automate than cars, and you’d be right, but in actual practice very few train systems are automated, and the ones that are tend to be back-and-forth, zero-complexity systems with a low possibility of human interference, like, say, the terminal shuttles at the Tampa airport. Think about that. In order for us to trust a computer to run one train back and forth on one piece of track, with no other trains in sight, we have to put that train behind airport security. You’ll notice the monorails at Disney World aren’t automated; their operating environment is considerably more complicated. Ten years from now, you might be able to automate the monorail, if you’re willing to shed a little human blood in the process.
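
To make that “shut the whole section down” philosophy concrete, here is a toy sketch of what such a segment controller might look like. Every name in it is hypothetical; it illustrates the control philosophy described above, not any real deployment.

    # A toy sketch of the "line the road with sensors, halt the segment on
    # any anomaly" architecture described above. All names are hypothetical.
    from enum import Enum

    class SegmentState(Enum):
        OPEN = "open"
        HALTED = "halted"   # waiting for an external agent (a human) to clear it

    class RoadSegment:
        def __init__(self, segment_id: str):
            self.segment_id = segment_id
            self.state = SegmentState.OPEN

        def on_sensor_event(self, event: str) -> None:
            # The controller never tries to understand the anomaly
            # (deer, child, debris); it just stops everything in the segment.
            if event != "nominal":
                self.state = SegmentState.HALTED
                print(f"{self.segment_id}: anomaly '{event}', all vehicles held")

        def human_all_clear(self) -> None:
            # Only an external agent may reopen the segment.
            self.state = SegmentState.OPEN

    seg = RoadSegment("segment-114")
    seg.on_sensor_event("object-on-roadway")   # the whole segment halts
    seg.human_all_clear()                      # a person sorts it out

Note that the controller decides nothing; every hard case is handed to a human, which is exactly why it can afford to be simple, and exactly why it only works once the unpredictable human drivers are gone.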

If the co-existence of autonomous vehicles and human drivers is a not-gonna-happen-dot-com thing—and it is—then it should be obvious that the whole thing is somewhere between a grift and a scam. And indeed it has all the characteristics of a scam, from the never-fulfilled promises to the constant redefinition of terms. Like quantum computing and “strong AI” and any number of other wacky technological goose chases, the autonomous vehicle “business” is dependent on there being a knowledge gap between the people writing the checks and the people selling the product. It is also dependent on there being a whole bunch of people in the media who “freaking love science” but whose eyes glaze over at the mention of “NP-complete” or “Fast Fourier transform,” because those are the people who report breathlessly on autonomous vehicles with the same lack of critical thought they would likely display if you could show them a convincing-looking, but completely fake, lightsaber.

The short-term goal of the grift is, obviously, to get rich and get out. The long-term goal is more dangerous: to redefine the American road as a place where human-operated vehicles are expressly forbidden in favor of “dumb cars” that rely heavily on central control and communication to operate in any sort of even remotely satisfactory fashion. At that point, all the stock-market bets will pay off and all the investors will be made whole. Which is a nice way of saying that the decisive “technology” in “self-driving cars” is actually the same “technology” used for any number of other corporate purposes, whether that purpose is the eternal renewal of copyright for lucrative intellectual property or the implementation of laws that sound like they are keeping companies honest but in practice merely raise the barriers to entry for potential competitors all the way to the troposphere. It’s the “technology” of lobbying the government and manipulating public opinion toward a particular end. And the practitioners of that particular technology are both highly skilled and highly compensated, so they usually get their way.

Which brings me back to the prediction at the head of this column: You are never going to share the road with a significant number of autonomous vehicles. Consider it an ironclad fact. The only thing left to determine is: Will this statement be true because the autonomous cars will never arrive, or because the human drivers will be bullied off the roads? That is up to you, and me, and all of us. Take it seriously.
