
Tesla Autopilot part 2 – Safety

by Brandon on March 14th, 2021

This post will focus on answering a simple question: Is Autopilot safe? I’ll answer this two ways: first, by describing my experience and my mental model for what it means for something to be “safe”. Second, by looking at some data. One thing I will not attempt to address here is legal or civil liability.

As background, I suggest reading (or at least skimming) part 1 if you haven’t. I’ll use a number of terms (e.g. TACC, AutoSteer, etc) which are defined in that post.

TL;DR: I believe Autopilot is safe when used responsibly, and that the Autopilot convenience features have little net effect on overall vehicle and road safety (in either direction).

What does it mean to be “safe”?

I think Autopilot is safe in the way that cruise control is safe – it isn’t completely without risk, but if used responsibly the risk of it causing a serious problem is minimal. In fact, I believe that Autopilot’s TACC (Traffic Aware Cruise Control) is inherently safer than regular cruise control, and AutoSteer can make you safer in certain situations – e.g. moments when you might have been distracted anyway (fiddling with the radio or reaching for some gum). All current Autopilot capabilities fall under the umbrella of an Advanced Driver Assistance System, or what the SAE defines as “Level 2 – Partial Driving Automation”. In short: just as with cruise control, you are driving – the automation is just helping you.

For drivers who misuse the system, or those who actively abuse it, the question gets more complicated. I consider misuse to be using the system without paying attention, or ignoring warnings and guidelines about where, when, and how the system should (or shouldn’t) be used. Misuse can arise from misunderstanding the system or ignorance of its limitations. Some have argued that misuse is inevitable, perhaps due to automation fatigue. Others have argued that Tesla encourages misuse or fails to adequately discourage it. In my opinion they generally do a good job of discouraging misuse and communicating through the product itself how it should or shouldn’t be used. Unfortunately, certain aspects of their marketing, some statements from a particular CEO, and the overzealous claims of some ardent fans are not always well-aligned with the product truth. At the very least, there is room for improvement here.

I consider abuse of the system to cover any attempts to circumvent the safeguards of the system (e.g. by using a device to stop the “nags”), or any deliberate attempts to operate the system in a manner other than what is intended.

Some will point to YouTube videos or social media posts depicting misuse or abuse of Autopilot as evidence that there is a problem here. However, I think it is important to view these examples in context. These videos tend to be stunts, most often done to garner attention (and clicks, and thus $$). Some have been debunked as fakes. More importantly, these kinds of stunts are not unique to Tesla vehicles, Autopilot, or any form of ADAS or automation.

Not sure what I mean? Check these out:
Teen Seriously Hurt in ‘Ghost Ride the Whip’ Stunt When Truck Runs Him Over – YouTube
Woman “Ghost Riding the Whip” into oncoming traffic – YouTube
Ghost Riding The Whip Gone Wrong (No Commentary) – YouTube

Further, Autopilot – like every L2 / ADAS – is really an advanced form of “cruise control”. Cruise control itself has been shown to be associated with negative effects on driver attention and behavior (source). Another question worth exploring is whether Autopilot (or any other ADAS) is better or worse in this regard than the “dumb” cruise control found on most vehicles for decades.

Net safety impact is complicated

One of the reasons it’s difficult to come up with a clear consensus answer about the safety of Autopilot (or any ADAS really) is that any such system can and almost certainly does have a mix of both positive and negative safety impact. The real challenge is determining the overall “net” effect.

For example, two difficult-to-balance considerations are:

1) An irresponsible driver who is going to fall asleep at the wheel on a given drive is undoubtedly much safer if they are driving with Autopilot (Autosteer, or Nav On AP) enabled. The same could probably be said for an intoxicated or otherwise impaired driver.  

2) The existence of AP could result in irresponsible drivers attempting to drive when tired or intoxicated more than they otherwise would have. 

Of course, in the first case the use of Autopilot absolutely does not make those conditions safe – you should not drive, with or without AP, if you are too tired or have been drinking. However, I think there is a convincing argument that it does likely make these situations safer. If the number of people who drive in these compromised states were fixed, it would be reasonable to suggest that the addition of AP is a clear safety win.

Unfortunately, consideration #2 is where it gets really complicated, because if the mere presence of Autopilot somehow increases the chances of someone trying to drive when they’re too tired or impaired, then it would very obviously have a negative effect on road safety.

If you assume both are true, you’re now in the position of having to determine which effect is greater, and perhaps of considering some ethical or at least philosophical questions about the morality of causing harm to avoid a greater harm (as illustrated in at least some versions of “the trolley problem”). One way to do this is by observation and statistical analysis. We’ll look at that a bit more later in this post.

Tesla’s responsibility

As I said earlier, I’m not going to wade into legal liability or regulatory discussions here. However, that doesn’t preclude me from discussing my view of Tesla’s moral responsibilities.

Two particular questions that arise are:

  1. Is it Tesla’s fault if someone misuses or abuses Autopilot? 
  2. Is Tesla ensuring that the net safety impact of Autopilot is positive, and how are they measuring it?

People have different opinions about the former, but I am firmly in the “no” camp. If Tesla ever did anything to encourage that behavior, that would be a different story, but to my knowledge they have not. Further, they make it very clear that AP does not make a vehicle autonomous, and that drivers are responsible for their vehicles. However, I admit that this is not a clear cut issue. I would put forward that it’s possible to encourage Tesla to do more (particularly with specific, practical suggestions!) without faulting them for not having done whatever you’re suggesting in the past. This is a new and uncertain area, and while we should absolutely require due diligence and the highest regard for safety from the start, we cannot expect perfection, and should encourage learning and improvement.

One such suggestion I have made is to require drivers to complete a short training program before using Autosteer or other Autopilot features beyond the basic Traffic Aware Cruise Control. I think even a 5-10 minute video followed by a 3-5 question quiz could go a long way to ensuring that drivers understand the limitations of the system and the intended use of the various modes. The video should show examples of things that can go wrong – e.g. the system failing to recognize a stopped vehicle protruding into the road, and show that the driver must handle this or they will crash. The quiz could ask questions like “Is it acceptable to operate a smartphone (e.g. texting) while using Autopilot?”, and “True or false: Autopilot will always come to a stop if an obstacle such as a stopped vehicle is in your path”. If you answer any incorrectly, you have to watch the video again before you can retry the quiz.

The second question is equally complicated but in different ways. For one, it’s unclear how to weight positives versus negatives. If driving deaths go down by 5%, but accidents go up by 10%, is that a positive or negative overall outcome? If 10 accidents occur because of AP which you knew otherwise wouldn’t have, but 10 equally bad accidents which would have occurred are avoided because of AP, is that a wash? What if the numbers are 9 and 11? 1 and 19? 5 and 50,000?

Of course, those are all hypotheticals. It’s not feasible to ever have exactly that kind of data. So what kind of data could we get? Well, given my experience I’m drawn to the trusty Randomized Controlled Trial as a way of measuring effects and establishing the likelihood of causality. However, that approach isn’t practical here, so what’s the next best thing? Well, that would require data we don’t have – but that Tesla does. Unfortunately they don’t currently share it. So let’s look at what they do share, and what NHTSA publishes, to see what we can learn from it.

Tesla’s Vehicle Safety Reports

Tesla publishes voluntary quarterly Safety Reports with a few interesting data points about the fleet. Primarily they focus on a single metric: miles between accidents. They report this number for a few slices of their fleet:

  • Miles driven in which drivers had Autopilot engaged
  • Miles driven where Autopilot was not engaged but Tesla’s active safety features (powered by the Autopilot technology) were enabled
  • Miles in which Tesla vehicles were driven without Autopilot and without their active safety features.

In Q3 2020 (most recent available when I last went through this data exercise), those numbers were 4.59 million, 2.42 million, and 1.79 million miles, respectively. They also reference NHTSA data which reported at the time that on average in the US there is an automobile crash for every 479,000 miles of driving.
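For reference, here’s a quick back-of-the-envelope sketch of the ratios those headline numbers imply. It simply divides each reported figure by the NHTSA average – no adjustments of any kind (which, as we’ll see, is the whole problem):

```python
# Miles between accidents, as reported in Tesla's Q3 2020 Safety Report,
# plus the NHTSA average that Tesla cites. No adjustments applied.
miles_between_accidents = {
    "Autopilot engaged": 4_590_000,
    "No Autopilot, active safety features on": 2_420_000,
    "No Autopilot, active safety features off": 1_790_000,
    "NHTSA US average": 479_000,
}

baseline = miles_between_accidents["NHTSA US average"]
for label, miles in miles_between_accidents.items():
    print(f"{label}: {miles / baseline:.1f}x the NHTSA average")
```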

On the surface, this seems to paint a pretty rosy picture for Tesla. Their vehicles appear to be safer than average without any active safety or assistance features, and safer still when you add in their advanced safety features, and safest of all when Autopilot is in use. That certainly fits the marketing narrative! However, if you’ve ever spent 5 minutes with a data scientist, you might at least have some questions. Unfortunately, Tesla’s reports provide little in the way of further details – no raw data, and only limited definitions for what they even count as miles “with Autopilot active”. 

What they do tell us is: 

  • They are using exact summations for the Tesla vehicle numbers, and are not using a sampled data set 
  • They count any crash in which Autopilot was deactivated within 5 seconds of the crash event as a crash where Autopilot was active 
  • They count all crashes where an airbag or other active restraint deployed, which they say in practice means virtually any accident at about 12 mph or above

However, some things we don’t know include: 

  • Is the data from their worldwide fleet? Or US only? 
  • Are they applying any other filters, such as a minimum version of Autopilot hardware or software? 
  • What do they mean by “Autopilot active”? As I wrote in part 1, there is no specific feature or mode called Autopilot in the car. Rather, there are TACC (Traffic Aware Cruise Control), AutoSteer, and Navigate On Autopilot. I think most are assuming that Tesla is referring to the latter two modes, but it’s conceivable they’re also including TACC. I would guess that they are not including any Smart Summon usage.
  • What makes up the “miles driven without Autopilot and without active safety features” data? This could be limited to older model Teslas which lack the new active safety features, or could be limited to drivers who disable some or all of those features in the vehicle’s Settings app, or some combination of the two. 

So what insights can we glean from this data? Maybe not as many as you’d hope. Let’s start by laying out a few hypotheses which we’d like this data to help us validate or falsify: 

Hypothesis 1: Tesla vehicles are safer than average vehicles even without active safety features being present or enabled. 

Hypothesis 2: Tesla’s active safety features are effective at preventing accidents and thus make their cars safer than they’d otherwise be. 

Hypothesis 3a: Use of Autopilot reduces the risk of having an accident. 
Hypothesis 3b: Use of Autopilot does not increase the risk of having an accident.
Hypothesis 3c: Use of Autopilot increases the risk of having an accident.

For each of these, we can use a common key metric, and Tesla’s chosen metric seems reasonable enough: miles driven between accidents. You could state it other ways but that’s about as good as any. 

Personally, I believe #1 and #2 are true, and I think #3a may also be. But I don’t have data to conclude that any of these are true, let alone to assess the actual difference in our key metric. I’ve now seen multiple very enthusiastic Twitter accounts aggressively defend the most naïve interpretation: that they’re all true and that Autopilot is “ten times safer than a human”. After all, 4.59 million is roughly 10 times larger than the NHTSA’s 479,000 number.  

Of course, that’s not actually how any of this works. But why not? 

Well, let’s start with hypothesis 1. The naïve reading says that the data proves this true and that Tesla vehicles have about half as many accidents as the average vehicle. But wait, the NHTSA data says it’s specific to the US. The Tesla data doesn’t specify, so I assumed it was worldwide. Clearly that could account for some difference (especially since the US reports 7.3 road fatalities per 1 billion km, Canada reports 5.1, and Germany reports 4.2 – source). And that’s just the beginning of what makes these numbers incomparable. The NHTSA data is based on an entirely different kind of reporting – it’s an estimate based on police and insurance reports, and it may even be using a different definition of “accident” (e.g. not all accidents result in airbag deployments).

Further, the NHTSA data shows that accident rates vary drastically by state, road type, vehicle age, and several other dimensions. Teslas are all relatively new, especially the vast majority of those using Autopilot, and they’re much more common in certain areas (especially near cities around the coasts). We can say for sure that Tesla owners are not “average” car owners – they are not representative of the usage patterns or driver demographics seen across all active passenger vehicles in the US. We don’t really have a good way to say exactly how they’re different, but we can have a pretty good idea that these differences mean we should expect that Teslas have fewer accidents for a given # of miles driven, regardless of anything Tesla has done.

In data science terms, we have absolutely no way to control for all of those differences. However, it’s pretty easy to conduct a more meaningful comparison than the one those rabid Tesla fans are making, even with only the very basic data that Tesla provides in their reports.

In order to do this, I’m leveraging the NHTSA’s Fatality and Injury Reporting System Tool (FIRST). As the name suggests, this database focuses on crashes which result in fatalities or injuries. Indeed, it enables querying of data from the unsampled Fatality Analysis Reporting System (FARS) as well as injury data from the General Estimates System and Crash Report Sampling System, and even some estimated “Property-Damage-Only” crash data. However, some data (e.g. make and model) is only available for fatal accidents, so I’ll lean heavily on that. There are differences here versus the more general accident data Tesla reports. However, as you’ll see below, it provides us with a lot of data we can use to better contextualize Tesla’s numbers.

In considering hypothesis #1, we must look for other explanations of the difference between NHTSA’s 479,000 number and Tesla’s 1.79 million number. If we can rule out all the other explanations we can think of, we’ll have a better basis for believing this hypothesis is correct. I’ll use the FARS data to illustrate that there are other explanations for at least some of the difference between these two numbers.

Since we don’t know exactly how Tesla computed their numbers, I’m forced to make some assumptions. Earlier I pointed out that we don’t know if Tesla’s data is US-specific or worldwide. For the purpose of this analysis, I will focus on data about vehicles and roadways in the US. We also don’t know the mix of vehicle models and model years in Tesla’s data. However, since Tesla first began selling the Model S in late 2012, we can assume that all vehicles in their data set are from no earlier than the 2013 “model year”. Further, we know that Tesla sold more vehicles in 2018 than in all years prior (thanks to the introduction of the Model 3), and that sales have grown since then.

Alternate Explanation #1: Newer vehicles are safer

One possible explanation for Tesla’s numbers being higher, then, is that newer vehicles overall have fewer accidents. To support this assertion, I looked for information about the distribution of vehicle ages on the road in 2019. According to IHS Markit, the average age of light vehicles in operation in the US in June 2019 was 11.8 years. Since we know that most Teslas on the road at that time were 2018 model year vehicles (or 2019 models, but only for part of the year), we can compare the fatal accident rates of 2018 model year vehicles versus 2007 model year vehicles in 2019.

2007 is also a good model year to use for comparison, as US vehicle sales were approximately 16 million that year, not too far shy of the 17.2 million sold in 2018. Of course, we can expect that nearly all 2018 vehicles were in operation in 2019, whereas many 2007 vehicles were no longer in use by that time.

Passenger vehicles involved in fatal crashes in 2019:

  Model year          Vehicles involved
  All model years     39,412
  2018                1,909
  2007                2,165
Source: NHTSA FARS

And yet, despite fewer vehicles being sold in 2007, and even fewer remaining in active operation by the time 2019 rolled around, they were involved in 13.4% more fatal accidents than 2018 models. This supports the idea that newer vehicles are generally safer than older ones. That makes intuitive sense: older vehicles face a higher risk of mechanical failure and of deteriorated brakes and tires, and safety features (and safety regulations) have improved significantly over the past 10-20 years.

To determine exactly how much of that 479,000 versus 1.79 million difference can be explained by this, we’d need to know more about the exact distribution of vehicle model years on the road in 2019, and the accident rates of each. However, to get a rough idea of how sizable this effect could be, we can calculate some estimates:

If all 16 million original 2007 vehicles remained on the road, that would be a fatal accident rate of 1 in 7,390 vehicles. For the 2018 vehicles, that rate was 1 in 9,009. That’s about a 22% higher accident rate for the 2007 vehicles than the 2018 ones. However, we know that many of those 2007 vehicles were no longer in operation, whereas nearly all of the 2018 vehicles still were. I couldn’t find a great source for the average rate of vehicle decommissioning, beyond some figures of around 5% per year. If we apply a really simple 5% annual reduction to 16 million vehicles over, say, 10 years, we’re left with around 9.6 million. That works out to a rate of 1 in 4,434, which is more than double (2x) the 2018 rate.
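For transparency, here’s that arithmetic as a small Python sketch. The crash counts are the 2019 FARS figures above; the sales totals, and the 5% annual attrition over roughly 10 years, are my own rough assumptions rather than measured values:

```python
# Rough estimate of fatal-crash rates per vehicle, by model year.
# Crash counts are the 2019 FARS figures quoted above; sales and
# attrition are rough assumptions, not official numbers.

fatal_2007 = 2_165        # 2007-model-year vehicles in fatal crashes (2019)
fatal_2018 = 1_909        # 2018-model-year vehicles in fatal crashes (2019)
sold_2007 = 16_000_000    # approximate US vehicle sales, 2007
sold_2018 = 17_200_000    # approximate US vehicle sales, 2018

# Assume ~5% of the 2007 fleet was decommissioned each year for ~10 years.
surviving_2007 = sold_2007 * 0.95 ** 10   # ~9.6 million still on the road

rate_2018 = sold_2018 / fatal_2018          # ~1 fatal crash per 9,000 vehicles
rate_2007 = surviving_2007 / fatal_2007     # ~1 fatal crash per 4,400 vehicles

print(f"2018 models: 1 fatal crash per {rate_2018:,.0f} vehicles")
print(f"2007 models: 1 fatal crash per {rate_2007:,.0f} vehicles")
print(f"2007 rate is roughly {rate_2018 / rate_2007:.1f}x the 2018 rate")
```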

That alone would bring the NHTSA 479,000 number up to roughly 1 million. Of course, this is a super rough estimate. If older vehicles (e.g. 2002 models) have even higher accident rates, or if fewer of those older vehicles were on the roads in 2019 than I estimated, then this could account for even more of the difference. The point here is not to come up with an exact figure, but to illustrate that the sizable disparity (273%, or 3.73x) between the NHTSA “average” accident rate and Tesla’s number is likely the result of a range of compounding factors which must be considered before attempting to draw any conclusions about what these numbers mean.

Takeaway: It would be fairer and more valuable to compare Tesla vehicles’ accident rates with those of “average” vehicles from the same model year, or at least the same range of model years.

Alternate Explanation #2: More expensive vehicles are safer

As of the end of 2019, Tesla’s cumulative sales in the US were approximately:

300,000 Model 3
158,000 Model S
85,000 Model X

These vehicles are considerably more expensive than the average passenger vehicle. This substantially affects both the equipment and the population of owners/drivers. Tesla owners are more likely to be older, more experienced drivers. They’re likely to be careful with their expensive purchases. They’re more likely to live in urban areas and drive more on city streets and highways, rather than rural/country roads.

So how about we try comparing the 2018 Tesla Model S crash rates to competitive Audi models? I chose Audi in particular as they tend to appeal to the same tech-savvy demographic as Tesla. In 2018, Audi reportedly sold 7,937 A7 class and A8 class vehicles (this includes the S and RS variants). The same year, Tesla reportedly sold 25,745 Model S cars. In 2019, only a single fatal accident was reported involving those Audi models from the 2018 model year. For the Tesla Model S, there were three. Given that there were approximately three times as many 2018 Model S vehicles on the road, this suggests these vehicles have approximately equal likelihood of being involved in a fatal accident.

Admittedly, these numbers are very small. I repeated this exercise comparing the 2018 Model 3 and the 2018 Audi A4 + S4. The former were involved in 11 fatal accidents, the latter in 2. There were just over four times as many Model 3s sold in 2018 as there were Audi A4 class vehicles. This shows a lower rate of fatal accidents for the A4/S4 vehicles than the Model 3, though again the numbers are small and pretty close. This translates to 1 in 12,707 Model 3s being involved in a fatal accident, and 1 in 17,283 Audi A4s.

The 2018 Ford Focus, on the other hand, had a rate of 1 in 5,667. This further supports the notion that more expensive vehicles are less likely to be involved in accidents (or at least fatal ones) versus lower cost models.
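To make those per-vehicle rates easy to check (and to emphasize how small the underlying counts are), here’s the same arithmetic in Python. The Model S and A7/A8 sales figures are the ones quoted above; the Model 3 and A4/S4 sales are back-calculated from the 1-in-N rates above, so treat them as approximations:

```python
# Fatal crashes in 2019 involving 2018-model-year vehicles, per vehicle sold.
# With single-digit crash counts, these rates carry a lot of statistical noise.
models = {
    # name: (fatal crashes in 2019, approximate 2018 US sales)
    "Tesla Model S":           (3, 25_745),
    "Audi A7/A8 (incl. S/RS)": (1, 7_937),
    "Tesla Model 3":           (11, 11 * 12_707),  # sales back-calculated from the quoted rate
    "Audi A4/S4":              (2, 2 * 17_283),    # sales back-calculated from the quoted rate
}

for name, (crashes, sales) in models.items():
    print(f"{name}: 1 fatal crash per {sales / crashes:,.0f} vehicles sold")
```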

Takeaway: More expensive vehicles tend to have lower accident rates than less expensive vehicles. Tesla vehicles are more expensive than “average” vehicles, so it is expected that their accident rate is lower than average.

With more time, I could explore other possible explanations, e.g. that accident rates vary by region in a way negatively correlated with where Tesla sales are concentrated. With more data I could do a much better analysis, but I’m working with what I’ve been able to find publicly accessible and in a reasonably digestible form.

For now I’m content that the data shows Tesla vehicles are among the safest on the road, but that the degree of difference versus the NHTSA “average” number Tesla quoted is overstated. Please note that Tesla never specifically asserts that their vehicles (even without active safety features) are 3-4x less likely to be in accidents. They just publish these two numbers and let many readers make this naïve leap.

Effect of Active Safety Features

So what about hypothesis 2? Do Tesla’s active safety features help prevent accidents?

The data here is both interesting and confounding. The real problem is that we don’t know how this population, with its 2.42 million miles between accidents, differs from the baseline Tesla population with its rate of 1.79 million miles between accidents. Is the latter wholly or predominantly made up of older Tesla models that lack these active safety features? Or is it entirely or predominantly vehicles where the driver has explicitly disabled these features? How many Tesla owners disable some or all of these features? How are vehicles with only some of the features enabled counted?

At face value, these data points suggest that Tesla’s active safety features do prevent accidents. I believe this is true, but I wish we had better data to really understand the difference they make. As it stands, this is a positive indicator, but it leaves too many open questions to be a slam dunk, and that’s unfortunate. That said, even if the no-active-safety-features group is mostly older Tesla vehicles, they’re still relatively new vehicles in a similar price range purchased by similar drivers, so overall it’s probably reasonable to feel good about the impact these features are having.

Another challenge here is that we don’t have any other data to compare it to. We don’t know how many people disable active safety features in competitive cars, nor how many accidents occurred in that configuration. That would certainly be interesting to look at – for example, to see if Tesla’s active safety features are more (or less) effective than those of competitors.

I’m not going to spend any more time on hypothesis #2 though, as I think it’s probably the least contentious.

The big question: Autopilot’s impact

Which brings us to hypothesis 3 (a/b/c). 

Tesla’s report of 4.59 million miles between accidents for vehicles with Autopilot engaged certainly sounds impressive. The immediately obvious point of comparison here is to their reported number for Tesla vehicles with active safety features enabled (but not using Autopilot). Looking at that, 4.59M versus 2.42M seems like a very compelling increase in safety. So can we conclude that hypothesis 3a is true? Well, unfortunately no.

Let me be clear: it may be true. If it is true, this data makes sense. But we can’t conclude that it is true because there are other possible explanations for this measurement. In fact, there are some fairly obvious explanations which likely account for at least some of this difference (and possibly most or all of it). To understand why this is the case, let’s look at what we’re comparing: 

Miles on Autopilot versus miles not on Autopilot (but with active safety features enabled):

  • Vehicle make & model: Tesla Model 3, Y, S, and X in both groups
  • Vehicle model year: ~2016 and newer in both groups (probably)
  • Drivers who ever use Autopilot: present in both groups
  • Drivers who never use Autopilot: absent from the Autopilot miles, present in the non-Autopilot miles
  • % of miles on highways: probably >95% of the Autopilot miles, probably far less of the non-Autopilot miles
  • % of miles in poor weather: probably lower for the Autopilot miles, probably higher for the non-Autopilot miles
  • % of miles in snow: probably negligible for the Autopilot miles, probably higher for the non-Autopilot miles
  • Countries/regions represented: the Autopilot miles can only come from markets where AP is available (and more likely from those where it isn’t limited by local regulations), while the non-Autopilot miles come from all Tesla markets (unless all the data is US only)

Just as with the other hypotheses above, there’s no perfect way to test the hypothesis #3 variants. And unfortunately, the data made available just isn’t detailed enough to draw any significant conclusions. It supports the idea that AP is at least not obviously unsafe, but the case for it being a net safety win is lacking.

To illustrate further why the 4.59M number and the 2.42M number are not directly comparable, let’s take one more look at some NHTSA FARS data.

Fatal accidents involving a 2016-2020 Tesla (in 2019)

  • Interstate, freeway, or expressway: 8
  • Other road types: 27

Most accidents, including for Teslas, don’t happen on interstates. In fact, it’s likely that this difference is even more drastic for non-fatal accidents (as most of the highest speed accidents likely happen on interstates). This supports the idea that we should expect Autopilot use to correlate highly with driving conditions which are associated with fewer accidents. This is correlation, of course, and not causation. That’s the big thing we’re missing, and unfortunately we aren’t going to get it with the data available today.

This hypothesis could be better tested with better data. For example, Tesla could provide this same data filtered by road type, perhaps even filtering to specifically roadways where their Navigate On Autopilot mode is supported – as these are likely to be the same roadways where most Autopilot use happens anyway (and best matches their description of where Autopilot is intended to be used – e.g. divided highways). My expectation is that if you did this analysis, you’d find that the numbers would be far closer – possibly even too close to make a strong case for hypothesis 3a. On the other hand, I think it could help make a much better case for hypothesis 3b – that Autopilot is no less safe than driving on the same roads in a Tesla with the Autopilot-based active safety features (which we can reasonably say is among the safest ways to be driving).

With more data and a bit more work you (or they) could study specific routes, and compare AP-traversals versus non-AP-traversals. You could conduct comparisons focused on specific cities or regions, and see if the AP miles are consistently safer than the non-AP ones. Tesla is in the best position to conduct this research, given the immense amount of high-quality and self-consistent data they have at their disposal. They could conduct this research themselves, or provide a drop of the data to a third-party and commission a detailed study. I’d even go so far as to guess that Tesla will do one of those things at some point – perhaps once they’re more confident in an unambiguous, conclusive, positive result. 
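To sketch what that kind of route-matched comparison might look like, suppose Tesla (or a commissioned third party) had per-segment driving data with a road-segment identifier, an Autopilot flag, miles driven, and crashes observed. The schema, column names, and numbers below are entirely hypothetical – the point is just the shape of the analysis:

```python
import pandas as pd

# Hypothetical data: one row per (road segment, Autopilot on/off), with total
# miles driven and crashes observed in that configuration. Everything here
# (columns and values alike) is made up purely for illustration.
segments = pd.DataFrame({
    "segment_id":   ["I-5_A", "I-5_A", "US-101_B", "US-101_B"],
    "autopilot_on": [True,    False,   True,       False],
    "miles":        [2.1e6,   1.4e6,   3.3e6,      2.8e6],
    "crashes":      [1,       1,       1,          2],
})

# Restrict to road segments that see traffic in BOTH configurations, so the
# AP and non-AP miles come from comparable roads.
has_both = segments.groupby("segment_id")["autopilot_on"].nunique() == 2
matched = segments[segments["segment_id"].isin(has_both[has_both].index)]

# Miles between crashes for AP vs non-AP driving on those matched segments.
summary = matched.groupby("autopilot_on")[["miles", "crashes"]].sum()
summary["miles_per_crash"] = summary["miles"] / summary["crashes"]
print(summary)
```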


2 Comments
  1. Richard permalink

    Great analysis Brandon. These were some of my thoughts as well when I saw the safety update from Tesla. The scary thing is that many people without your level of scepticism/analytical skill will be lulled into a false sense of security and may end up being more likely to misuse/abuse it. The recent crash in Texas (the two occupants had changed seats) is a case in point.

  2. Thanks Richard. I wouldn’t rush to judgement about that Texas accident. There’s so far no evidence that Autopilot was involved, and investigators are still figuring out what happened (this of course hasn’t stopped several media outlets from jumping to conclusions and writing clickbait headlines).
