Last updated 10-Jan-2023
Tesla and Elon Musk have made repeated claims about Full Self Driving (FSD) capabilities and that FSD will be feature complete imminently. On the 2021 Q4 earnings call in early 2022, Musk again reiterated that FSD would be here this year, but nobody really knows what he means by 'here' or what success looks like. Musk mixes his language, from capabilities enabling a Robotaxi, which would imply Level 5, to Level 3 capabilities with the driver taking over when required, which is more likely to be a stepping stone and one that Mercedes have permission for in Europe.
The date has been missed a number of times, and commentators range from the expectant to the dismissive.
The NHTSA have helpfully defined a list of features that they expect to be available for self driving. Tesla don't have a great relationship with the regulators and tend to do their own thing, whereas companies like Waymo publish the list and include commentary on their progress towards completing these features in their annual report. Waymo have gone further and extended the NHTSA list, which we've included at the end, but the primary thrust of this article is the NHTSA list, a comparison to Tesla's current capabilities, and the unique challenges being set out for Level 3, 4 and 5 self driving. And let's be frank: it is bodies like the NHTSA that need to be convinced, whether Tesla like it or not, for Level 3 and upwards self driving to happen, as the critical element is the lawful shift of responsibility to the car. Logically, the driver either has to be ultimately in control at all times (Level 2) or, for periods of time, the car is in control and responsible (Level 3 and above).
This assessment is based on what we're seeing in the US limited beta.
When Elon Musk talks about being feature complete, we believe he's essentially claiming that each of the composite features required for full self driving will have been implemented in some form in the FSD code. We don't believe he's currently claiming these will be anything other than in Beta form, or that the reliability will be immediately ready for self driving, just that every base is covered. Driving has been decomposed by the NHTSA into a list of competencies or features which the car must be able to perform. Some, such as 'Perform Car Following (Including Stop and Go)', have been in place for some time; others don't yet exist, and we assess each feature further on in this blog.
The NHTSA list is important as we believe FSD will only be approved when bodies like the NHTSA are satisfied with the range of capabilities. Tesla may have what they feel to be the best self driving software in the world, but until they can demonstrate both the variety of required capabilities and the required level of performance of each capability or feature, the lawyers and regulators will not allow any form of autonomous driving and the shift of accountability to the car.
The following features are those recommended by the NHTSA. Our commentary is based on the capabilities as of Feb 2022, as far as we understand them.
Tesla use map data to determine speed limits and have a somewhat limited feature set for responding to speed limit changes, primarily only slowing the car. In 2020 Tesla also started reading speed limit signs, although the response is limited to certain classes of road and is not available in urban areas. We have not seen evidence that Tesla responds to speed limits on all road types and at all speeds.
We are not aware of any traffic merge capability. Joining a major road via a slip road, needing to adjust speed and position relative to other traffic within a defined distance in order to merge, is not yet implemented. A similar feature is automatic lane change, which has its own criteria but works very differently in the sense that there is no time pressure to pull out and the car simply waits for a gap in the traffic. There are some mentions of the capability at lower speeds as part of the City Streets beta.
The FSD City Streets beta shows some evidence of this. High speed merge and low speed merge are slightly different. Low speed merge may include slow moving traffic where a lane disappears and the traffic needs to merge, often zip-style with cars alternating; cars are expected to yield to each other. There is no evidence that the car will speed up to stay ahead of a car it wants to merge in front of, which a human driver would do.
Tesla provide some parking capabilities and have done for some time, however this requirement is different. With the current system the car drives slowly past parking bays, attempts to identify one, and makes it available. However, we believe this feature is not about simple parking; it is about the car looking ahead, finding a refuge area or safety lane, and automatically bringing the car to a safe stop from a busy road, starting at speed, should say the driver be incapacitated. We've seen no evidence of this capability yet, and do not feel that simply slowing down and putting on the hazard lights meets this requirement, as the car is still on the main carriageway.
Tesla have added the visualisation of oncoming traffic which clearly signifies they are now detecting these vehicles. It's unknown exactly what the autopilot functionality does with this information but Tesla do seem to have capability in this area.
Lane divider markings are often used to determine whether the driver is allowed to perform passing manoeuvres or overtakes. A combination of recognising these road markings and map data does seem to be used by Tesla in deciding whether an overtake is permissible.
This is essentially Traffic Aware cruise control and has been a feature for some time and generally works well.
This is a feature that has caused some controversy over the years. Autopilot has failed to spot stationary vehicles ahead, especially when travelling at speed, typically on the highway when coming up behind stationary traffic. Failed examples include fire trucks blocking the road following an accident. It has been reported that this is in part due to the image processing requiring moving objects to identify hazards and other vehicles. The latest beta release appears to do this and responds to parked cars; whether this will work at speed is yet to be seen.
This is a key safety feature, and until the car is able to detect and stop for stationary objects above 60 mph (100 km/h), the feature is not complete.
This is a feature that currently exists on highways. The ability to lane change in some countries requires confirmation by the driver, but few incidents of poor performance have been reported.
This is a variation of stopping for stopped vehicles, but with much smaller and more random object sizes. Essentially the requirement is to address any obstruction, whether that be debris in the road having fallen off a lorry or a pedestrian that's collapsed; the limitations here are similar to those for stopped vehicles. The nearest non-vehicle obstruction reported is a traffic cone, which has now been added.
Traffic light and stop sign detection was added in 2020 and appears to be heavily weighted towards map data, as the feature could not be enabled on cars until the map data had been updated. There are also no reports that temporary stop signs or traffic lights, due to say road works, have been detected, but we'll give this to Tesla for now.
Traffic lights and stop signs were added in 2020 and use a combination of maps and cameras; the cameras can now identify the status of traffic lights. The car's response to these lights is limited to warnings and stopping. Limitations are largely the same as those for detecting stop signs and the reliance on map data.
We take this to mean the ability to turn off a main/primary road onto a side road, turn from a side road onto the main/primary road, and, where the junctions are managed by lights or stop bars, to judge automatically whether it is safe to pull out.
The Beta FSD release, which is in very limited hands, has shown some of this capability, but it is not on general release.
The navigation of roundabouts is more than the ability to drive around a roundabout when continuing along the same road. Some owners have allowed their cars to continue on AP and the car has treated the roundabout as simply a series of bends in the road, almost unaware it has passed through a junction. We believe that to meet this criterion the car needs to stop, or at least slow, check it's safe to enter the roundabout, obey appropriate lane discipline, and exit at the desired point. The Beta FSD release, which is in very limited hands, has shown some of this capability, but it is not on general release.
This is effectively the advanced Summon feature, which has been available in some countries for a number of months. There is a caveat in that Summon currently only retrieves the car from a parking spot and does not seek out a space and park the car in an available spot. The parking aspects are almost there in principle, as park assist features do currently exist, but these are not yet implemented as part of Summon. We'd suggest the building blocks are all largely present; however, these are neither currently all integrated nor particularly reliable.
There are two broad methods to approach such a requirement. One is via detailed maps, and in that sense every car with sat nav has been capable of this criterion since sat nav systems were introduced. The more advanced approach is to read road signs and assess the environment to cater for temporary restrictions. This latter capability is not yet present.
Tesla would probably argue that cone detection is an example where this feature has been met, in the sense that edge markings can be defined by cones. The requirement and the real world, however, demand much more, including the ability to respond to workers in the road directing traffic, and temporary situations where map data would be overridden, such as driving on the wrong side of the road for a short period under the direction of a worker. No such capability currently exists.
Right of way decisions are cognitively complex. Regular junctions are covered elsewhere and the rules are easily laid down; however, establishing appropriate right of way on certain roads can require complex decision making. For instance, meeting a large vehicle in a narrow lane where passing is not possible may require the car to yield, and an experienced driver would take that decision based on who can most easily move to a place where passing is possible.
There is some evidence emerging in specific situations that the beta is starting to do this, but it looks like point solutions and not a general catch-all solution to deal with the unexpected.
This criterion feels almost like a catch-all for anything that's missed. State and local laws can vary considerably around the world. There are some limited examples: the variable speed limits in France, which depend on whether it is raining, are partly implemented, but equally variable speed limits in countries like the UK are not detected at all (other than by the earlier Mobileye system).
We would generously give this a 'started', which is what Musk is seeking to claim; however, there is still much localisation to do, many road signs to be learnt, and the general quality of the driving has to reach a level where driving without due care and attention, or simply poor driving, is not levelled at the car, even if that driving is cautious.
It was the source of some bad publicity when a number of accidents occurred with cars crashing into fire trucks, so the basic detection has been poor. Further, in the event of an accident, temporary directions may override map data or permanent signage, in much the same way as with road or construction workers. As with construction workers, the limit is probably to stop the car; there is no ability to automatically drive as directed.
We feel the points about construction zones equally apply here.
In much the same way as construction zone workers and emergency services, occasionally members of the public may have to control traffic flow following an accident or incident. No such capabilities currently exist.
We feel the points about construction zones equally apply here.
Emergency vehicles on a blue light or emergency run expect other vehicles to safely give them additional room, or to yield right of way where that would not ordinarily be the case. An example might be stopping at a green light to allow an emergency service vehicle to cross the junction against its red light. There is no evidence this has been implemented.
We feel the points about detecting and responding to emergency vehicles apply here.
Further, the car may need to stop if a following police car signals for it to stop. There is no indication this has been implemented.
There are limited visualisations of pedestrians and cyclists; however, this is limited to where they are directly blocking the path and an accident would result if the car continued. Giving way to pedestrians standing at a crossing waiting to cross is not in the current feature set and would require the pedestrian to enter the road and cause an obstruction. The only time this may work is if the crossing is controlled by a stop light.
Tesla are able to detect certain obstructions at low speed; however, this has not been reliably seen at higher speeds, and stationary objects at speed can still present a problem to the car. There are several well publicised accidents where cars have simply run into cars parked half in lane following a breakdown. Tesla would probably argue the feature is present, in which case its performance is still significantly under the required levels.
We feel this is more a sat nav type requirement, and as such most sat nav systems have been able to perform this where they have traffic data. The Tesla system allows a time threshold before triggering alternate routes, and certainly if the driver deviates from the planned route, the system will reroute accordingly. The ability to follow the revised route is a function of the other capabilities.
The following features are those added by Waymo, and it is possible the reader can think of others. Whether you feel Tesla needs to cater for these situations, or whether some of these are just combinations of existing features, it is hard to argue against the need to handle complex situations. We didn't include the scoring of these in the stats at the top of the article, as the NHTSA set seems to be the more important one for regulatory purposes, but of the 18 additional features, 5 are met, 7 are partially met, and 6 are not met. The partially met criteria, like before, typically still have significant work to be done; for instance, detecting pedestrians in the road is partly met at lower speed, while the ability to drive past them safely in a country lane is not currently attempted.
Essentially this is how to deal with the situation when you run out of road. An example may be that your lane is ending and the merge function has to abort; how the car handles that situation safely and is able to continue in some manner is important. As merge is not yet implemented, neither is this capability.
This is a simplified version of the overtake and this is something that Tesla have had for some time.
We believe this is to do with either convoy traffic or the ability to monitor traffic further ahead than the car immediately in front of you. Tesla does have the ability to see the car in front of the car in front based on the radar; however, there is no capability where Tesla looks further down the road and starts to make decisions based on traffic 4 or 5 cars ahead, or where it starts to see brake lights in the distance.
Considerate drivers may modulate their speed to create a gap to allow merging drivers room to enter the lane. Teslas will slow down if a car starts to enter the lane, but this is largely reactive. There has been discussion that the cars are assessing adjacent lanes and whether a driver may enter their lane, so it is possible the building blocks for this are in place. We would argue that the defensive driving of assessing whether a car is going to enter the lane, and knowing a car must enter your lane within the next 300 m and proactively making space for this to be done safely, are different things.
In most countries pedestrians in the road are a common occurrence. The ability to detect them, moderate speed, and find a safe way to pass is required, otherwise the car will remain behind the pedestrians for some time. Tesla currently stops if a pedestrian blocks the road, and that is usually a collision avoidance response rather than a road navigation response.
Overtaking cyclists does not generally require the full road width and a second lane to pull into. Judging a cyclist's speed, path and stability; reading hand signals indicating whether they are about to turn across the traffic; even judging the road surface ahead of them and whether they are likely to take avoiding action, such as when approaching standing water: these are all things a car driver would do and a self driving car would need to cater for. Currently there is no such provision beyond recognising the cyclist and staying behind them unless there is an overtaking lane.
Animals can pose a bigger threat to cars than pedestrians, as they can be more unpredictable in how they react to traffic. Low speed collision avoidance is as far as Tesla currently go, and that is limited to obstructions directly in the vehicle's path.
Motorcyclists are more like cars than cyclists in terms of road behaviour; however, detection has to allow for them being smaller. Tesla currently detects motorcyclists.
The differentiation of a school bus from any other vehicle is predominantly a more cautious response when passing. A common accident is where children run out from behind a bus into the path of oncoming traffic. Tesla do not differentiate school buses from any other form of transport.
This is largely a composite requirement of responding to temporary road signals and adaptive sat nav. As such, Tesla's current performance is mapped out in those criteria.
There are two types of rail crossing in general use: those with gates, which create a physical barrier, and those without. The requirement to handle the former is met by the criteria to obey signage and local traffic laws; however, the latter, the skill to cross an unmanned and ungated crossing purely by visual reference to trains on the track, is not catered for. In some regions this requirement could be extended to situations where trams exist, and more complex road situations can arise as both the trams/trains and the cars can share the same parts of the road, but with a different right of way logic to that with other vehicles.
Reversing when needed on a road, other than for parking, is something that can be required, for instance when two cars meet on a shared single width carriageway. Reversing and applying similar self driving skills backwards is not a capability that Tesla has over and above parking.
On one level this is stability and traction control, and in that sense Tesla has this capability. For self driving, however, the ability to drive cautiously and deliberately, matching the car's speed to the conditions, is not something that can be done. The opposite is currently true: drivers are advised to judge when to reduce the regenerative braking level in icy conditions, and accidents involving aquaplaning are common where the car increases power to maintain speed when cutting power and slowing down would be more advisable. This includes predicting when low friction might occur ahead, rather than waiting until the car senses it through spinning wheels.
This is essentially the ability to self diagnose. It's undoubtedly true that the hardware on the car has a degree of redundancy (since HW2.5; there's still no answer on how that's going to be fixed on HW2 cars), but what has not been mentioned is embedded safety such as dual systems. It's unknown what the criteria will be, but for now we know Tesla are doing some of this.
We've touched upon weather and the car not making any noticeable adjustments in conditions such as icy roads over and above traction control, i.e. driving at slower speeds to be more cautious. There's also the ability to use weather information to know what's ahead, so that if the car feels it can't complete the journey autonomously it doesn't start it (or at least advises the driver). This is not present currently.
The cars can drive in darkness. There is a reliance on other vehicles being self illuminated, and there's no clear word on the ability to see, say, a cyclist or pedestrian in darkness, in much the same way humans might struggle. The car has no hearing, however, and audible cues cannot be detected. If we applied a reasonable test here, recognising that the cameras can generally see better in the dark than humans in some circumstances, we feel the Tesla sensor suite is sufficient.
We'd put this into a very broad class of anticipating behaviours and being cognitively aware of the surroundings. A door ajar is not just a car that's a few cm wider than it would ordinarily be; it's the threat that a person is about to get out of the car, where an experienced driver would use a combination of a wider passing distance and moderated speed to reduce the risk. We've previously mentioned the school bus scenario, but there are equally cycle paths that are likely to end, crowded situations outside pubs or in city centres: a raft of scenarios that drivers intuitively assess and manage, and where failing to do so successfully often results in accidents.
Being reliant on good road markings was a feature of the older Mobileye system used for AP1. Tesla have moved on and use a wider variety of techniques to detect where the road edges are. There are situations, such as a newly resurfaced road, where basic road markings may be missing but other visual clues tell a driver to stop or behave differently. Tesla have map data to indicate where junctions are, but we still believe cars cannot rely on map data, only use it as a secondary source of information. The real world is constantly changing and map data is not refreshed in real time.
This is another criterion which we feel is largely covered by others.
We've tried to keep this article a quantitative assessment of features and not stray too far into the subjective performance of those features or other related topics. The following considerations, however, offer our view on a few of those areas.
Tesla make claims about regional regulations holding back the deployment of features, but in practice we find this fairly limited. The principal areas where this seems to happen are automatic lane changes, where some countries require the driver's acknowledgement to proceed, and advanced Summon; but this is an imposition placed on Tesla in the execution of a lane change, not necessarily in the detection and calculation that a lane change is permissible.
The real concern for us on regional considerations is that the performance of many features seems to vary from country to country, because driving conditions and road layouts vary from region to region, and there has been no real talk of training the models for different geographies. In Germany, for instance, there are no speed limits in places and cars can easily be approaching from behind at 150 mph, unlike many other countries where the maximum speed is below 100 mph. A lane change in this situation needs to be aware that cars much further back in the rear vision could be closing rapidly, possibly with a speed differential of 70-80 mph, whereas the current system appears to work on fairly low speed differentials. When training the models to assess an approaching car's speed, it is possible that statistically 99.99% of closing vehicles globally are gaining at a relatively sedate 10-20 mph. The models will naturally weight this highly in their decision making; however, in Germany it may be that 20% of vehicles are closing much faster than this, and the model would have trained differently. AI essentially makes assumptions based on past data, and the assumption that cars are closing much more slowly than they actually are is a potentially very dangerous mistake.
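To put numbers on why the speed differential matters, here is a small sketch. The 150 m gap is our own illustrative figure, not a Tesla specification; the point is simply how quickly the decision window collapses as the closing speed rises.

```python
# Illustration with our own assumed numbers (not Tesla figures):
# time for a following car to close a fixed gap at various closing speeds.
MPH_TO_MS = 0.44704  # metres per second per mph

def time_to_close(gap_m: float, closing_speed_mph: float) -> float:
    """Seconds until the following car closes the gap entirely."""
    return gap_m / (closing_speed_mph * MPH_TO_MS)

gap = 150.0  # metres, an assumed useful rear detection range
for diff in (15, 40, 75):  # mph speed differentials
    print(f"{diff:>2} mph closing: {time_to_close(gap, diff):4.1f} s")
```

At a sedate 15 mph differential the car has over 20 seconds to complete a lane change; at a German-autobahn 75 mph differential the same gap closes in under 5 seconds, a fundamentally different decision problem.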
There are other examples. The unfortunate accident of Joshua Brown led to Tesla increasing its sensitivity to what may be lorries sideways across the road, which is now causing false alarms in other countries on the approach to bridges. In Europe, however, lorries have their sides filled in with crash protection, which changes their appearance considerably from that of a bridge. As a result, it's entirely possible that a model trained purely on European data would behave differently to one trained in the US.
The big step change is the legal change in accountability when driving. Even Level 3 driving has this challenge: irrespective of the capabilities of the software, the legal systems in each country, with respect to who is in control of the vehicle and who is to blame in the event of an accident, and the insurance companies who need to pay out, need to accept the shift when the car drives without driver engagement, even if just for a short period of time. We've argued before that Tesla would have been better to focus on Level 4 on main, pedestrian free trunk roads with well defined junctions. This would be an easier problem to address initially than trying to be all things to all people.
In our opinion this is an enormous hurdle, and the first fatal accident involving a self driving car will result in an investigation akin to an airplane crash, potentially causing the capabilities to be turned off until the investigation is complete. The regulators would simply not allow Tesla to turn on thousands of cars globally to perform full self driving if there was a potentially life threatening weakness.
As an aside Tesla would need to acknowledge their software was no longer in beta.
This is going to frustrate many owners and Tesla, as the claim will simply be that the car can drive itself and it's safer than humans, but we doubt regulators, insurance companies, legal systems etc. will ever accept the argument that even though somebody has died following a failure of the car's software, statistically 2 people somewhere in the world would have died without the system.
We've touched upon Artificial Intelligence (AI) and Machine Learning (ML) and the pitfall of global models versus regionally trained models, but there is a second potential issue with ML trained AI. Machine learning is a statistical process creating a best fit; however, the models only address the questions they are asked, and the data provided may not be able to answer the question if it isn't statistically strong enough.

As a simple example, let's say we are trying to assess whether a coin toss results in a head. The results fall into four groups: we thought we'd have a head and did; we thought we'd have a tail and did; we thought we'd have a head and didn't (a false positive); and we thought we'd have a tail but actually had a head (a false negative). Part of ML is to balance the mix of wrong outcomes. It may be that the false negative is the most dangerous (missing the lorry across the road, resulting in a horrific accident), so the parameters are tuned to reduce false negatives; but in doing so you will increase other wrong answers, usually false positives (hence why we now get more phantom braking events). The greater the variability of a problem, like a coin toss, the significantly greater the number of incorrect answers that may result.

If we think about auto wipers on cars, Tesla seem to prefer fewer false positives, as triggering the wipers on a dry windscreen is never a good idea, but as a result they're seeing an increase in false negatives where the wipers don't trigger when they should. The expectation that machine learning can decompose a problem into strongly polarised 'yes or no' outcomes is too simplistic, and even tiny rates of false results, when propagated across hundreds of thousands of cars driving millions of miles, will result in significant numbers of mistakes being made.
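The trade-off described above can be demonstrated with a toy simulation. The score distributions below are entirely made up for illustration; the point is only that when the 'hazard' and 'no hazard' populations overlap, moving the decision threshold swaps one kind of error for the other and can never eliminate both.

```python
# Toy illustration (our own invented numbers): shifting a decision threshold
# trades false negatives for false positives; it cannot remove both.
import random

random.seed(0)
# Simulated classifier scores: genuine hazards score high on average and
# clear road scores low, but the distributions overlap, as real data does.
hazard_scores = [random.gauss(0.7, 0.15) for _ in range(10_000)]
clear_scores = [random.gauss(0.4, 0.15) for _ in range(10_000)]

def rates(threshold: float) -> tuple[float, float]:
    """Return (false negative rate, false positive rate) at a threshold."""
    fn = sum(s < threshold for s in hazard_scores) / len(hazard_scores)
    fp = sum(s >= threshold for s in clear_scores) / len(clear_scores)
    return fn, fp

for t in (0.40, 0.55, 0.70):
    fn, fp = rates(t)
    print(f"threshold {t:.2f}: false negatives {fn:5.1%}, false positives {fp:5.1%}")
```

Lowering the threshold (braking more readily) cuts the missed-hazard rate but inflates the phantom-braking rate, and vice versa, which is exactly the tension between the lorry detection and phantom braking examples above.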
Both level 3 and level 4 have built into them the provision that should the car need to, it can hand back control to the driver in an organised manner.
In Europe for Level 3 this has been stipulated as a 7-10 second period: the car will alert the driver, who then has 7-10 seconds to put down whatever they are doing, assess the surroundings and be in a position to take over the driving of the car. If the driver is unable to take over, the car will come to a safe halt. The implication of this is that the car needs to be able to reliably determine if it can continue to drive for the next 7-10 seconds; if it cannot, then there is a period of time where the car has run out of ability to drive but the driver is not ready to take control, a situation which is obviously highly dangerous.
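The distances involved make the point starkly. The 130 km/h figure below is our own assumption of a typical motorway limit, not part of the regulation:

```python
# Rough arithmetic (motorway speed is our own assumption):
# distance covered during the 7-10 second Level 3 handover window.
KMH_TO_MS = 1 / 3.6  # km/h to metres per second

def handover_distance_m(speed_kmh: float, window_s: float) -> float:
    """Metres travelled at a constant speed over the handover window."""
    return speed_kmh * KMH_TO_MS * window_s

for window in (7, 10):
    d = handover_distance_m(130, window)  # 130 km/h, a common motorway limit
    print(f"{window} s at 130 km/h = {d:.0f} m")
```

In other words, the car must be confident it can handle everything in the next 250-360 m of road before it is entitled to ask the driver to take over rather than stopping.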
There is no evidence that Tesla is doing anything in this space. Autopilot currently aborts suddenly and often without notification. This is clearly unacceptable for any level of responsibility being given to the car, as the driver has to seamlessly take over control. Without any warning, the driver would need to be permanently ready, which undermines the benefit of FSD.
We have long doubted that the current sensors on a Tesla are suitable for any level of autonomy. The claim is that humans drive with their eyes, so a car can drive with vision only. We accept that in theory this is valid; however, there are a number of issues with the Tesla cameras.
A vision only system might be capable of driving, but if this is only possible when light conditions allow and the car is clean, then Level 5 is immediately ruled out and Levels 3 and 4 are prone to issues.