With more than 70,000 Autopilot-equipped Tesla Model S and X vehicles zooming down the freeways, the electric car giant has collected a massive amount of data from these excursions: about 100 million miles driven with the program active. Elon Musk recently announced that Tesla will offer the information to the US Department of Transportation to help bridge the gap between the realities of autonomous driving and the takeaways for regulations and infrastructure.
But, how has Tesla come to collect so much data in the first place? How has the company been successful in earning the trust of its drivers?
For an inside look at Tesla’s unique approach to autonomous driving, TechRepublic spoke with a Tesla spokesperson. Below is the conversation, lightly edited for clarity.
What is Tesla’s approach to autonomous driving?
At Tesla, we’ve taken an incremental and a disruptive—in the original sense of the word—approach to autonomy. Rather than waiting until we have a perfect solution, we’ve decided not to let perfect be the enemy of the “better,” and instead, release “better” as soon as it’s available.
In various spaces where we’re confident that we can do better on average than a human driver, like on highways and in a few other environments, especially the driveway or garage, we’ve released autonomous features. We will continue to evolve these features to encompass more and more scenarios that our vehicles encounter when driving, as well as greater and greater degrees of autonomy while driving in those scenarios.
What you’ve seen today, and the way we’ve communicated this in the past, is that Autopilot started on the highway and Summon started in the driveway. The two will eventually converge, so that Summon’s capabilities in suburban and even private property environments will expand just as Autopilot finds its way off the highways and into a more expansive set of driving conditions.
What kind of feedback have you gotten so far from people who have used Summon and Autopilot?
They’re extremely popular. In the first month of its release, Summon was used nearly a million times in the US and China alone. It’s due to release in the rest of the world with a slightly different implementation to account for some regulatory nuances, and I anticipate that usage will only ramp up. With all of those activations, we have had reports of, I believe, three incidents, all of which were very benign.
The typical early response to Autopilot is that people are a little bit anxious. They’re used to a glitchy laptop or a problematic smartphone, and they tend to implicitly extend that experience to the car, thinking, “Man, if my car were to have a glitch like my phone or my computer typically does, this could be really bad.” That extrapolation is not quite accurate, nor are the standards or the bar the same for some of these consumer devices as they are for vehicles. Autopilot obviously won’t account for all scenarios, but it is also explicitly designed not to be in the glitch-prone state that a lot of consumer electronics devices typically are.
Once you’ve driven it for a bit, what you’ll find is that you become so used to the car driving itself on the highway that driving anything else becomes starkly obnoxious.
I’ve been driving on Autopilot for the better part of a year now, and several months ago I found myself in Boston on a trip and, to my embarrassment, had to rent a car. As I was driving it to my destination, I recall the distinct impression that this car was not doing its job. Why is it on me to steer while I’m driving down the highway? I should be free to sit here and look at the countryside of New Hampshire as I drive through it.
As for the expectation it creates in users over time: there tends to be an initial surprise, or an initial apprehension, about it. But we find that, over time, people become so used to it that living without it is an experience nobody wants to have.
What are the environments that Autopilot can’t master?
Yeah, there are environments, particularly densely urban environments right now, where we don’t recommend that Autopilot be used. The reason is that it’s a much less predictable environment, and the movement of objects in it is much less predictable.
What about weather conditions and stuff like that, like rain, snow?
We’ve designed Autopilot not to allow the user to enable it if those weather conditions are severe enough that the probability of successful operation is diminished. Specifically, were you to try to use Autopilot in a blizzard in which the camera can’t see the road, or where the radar is obscured by snow packed on the front of the car, Autopilot shows an alert that it is not available on account of those conditions.
There are still a lot of questions about regulations. What has to happen to get autonomous cars on the road?
There’s a current popular opinion, not an explicit one but an implicit one that a lot of people share, and it creeps its way into these conversations: that autonomy will and must be held to a higher standard than human drivers are.
Ask a friend about a driverless car that hit someone, relative to a human-driven car that hit two people, and almost invariably, if you word the question the right way, the impression given is that the autonomous car that hit someone is somehow worse. That’s seen as a worse outcome than had a human-driven car killed two people or more.
That’s just a fundamental piece of public perception that’s found its way into our laws. We have pretty asymmetrically motivated regulators, who are typically punished for issues that arise but have zero incentive, generally, to move the ball forward on some of these technologies. If you’re a regulator in a given agency, you will be crucified for a technology you approved that had a bug. What you won’t be is scored on the net benefit that technology afforded relative to its cost.
That piece of perception is going to translate into what will probably be at least an order of magnitude of difference in the expectation of autonomy before it will be legal. What that means is that if a human driver were to have an accident once every million miles, the expectation for autonomy will probably be something on the order of one accident per ten million miles before it will be allowed.
When will the public be ready to accept autonomous cars on the road? Will we see a shift soon?
It’s hard to judge; it’s contingent on how this rolls out. Incremental technologies such as Autopilot can help break down that barrier to adoption. Those who experience it, those who spend time with it, very quickly become comfortable with the idea of this kind of technology.
Some of it’s going to depend on how quickly we, and others, move that ball. Even in the spaces where we’re currently allowed, it will also depend on public perception as the technology incrementally rolls out. The potential for an autonomous Three Mile Island equivalent, given what that event did to the nuclear industry, is just as high. Should an event occur that receives major publicity, it’s actually quite likely that it could throw a large bucket of cold water on progress toward legalizing and increasing adoption of autonomous vehicles.
What are the things that you spend the most time considering on a daily basis when it comes to safety?
The development of the technology and the safety cases for any autonomous vehicle is a game of ironing out the very long tail of corner cases and issues that are very much not the norm and not the nominal driving case, but that show up regardless.
If you see the world through the eyes of a researcher, you’ll see a lot of things you never noticed before. You’ll see corner cases and other issues in the environment that each require specific tailoring to iron out. Nearly all of the effort in autonomous vehicles, and most of the time of those developing them, goes specifically into identifying and finding solutions to off-nominal driving cases. Anyone, any startup, anyone with a basic understanding of programming, can put together a decent demonstration of a car driving itself, say, around the block or down the lane. But ironing out that long tail takes an army of people and a flood of data from worldwide operation, which is something Tesla is uniquely positioned to collect.
If you could look forward a few years, where would you think we’re going to see most of the autonomous cars? Will they be in fleets or private, high-end buyers? Where do you think the first big adoption will come?
That’s a great question. I’d rather not comment on that one, if you don’t mind.