Consumer supervision means having all the controls of the car right there in front of you. And if you are doing it right, you have your hands on the wheel and your foot on the pedals, ready to jump in.
Accident rates under traditional cruise control are also far below average.
Why?
Because people use cruise control (and FSD) under specific conditions. Namely: good ones! Ones where accidents already happen at a way below-average rate!
Tesla has always been able to publish the data required to really understand performance, normalized by vehicle age and driving conditions. But they have not, for reasons that have always been obvious but are absolutely undeniable now.
At least once every few days, it would do something extremely dangerous, like try to drive straight into a concrete median at 40mph.
The way I describe it is: yeah, it's self-driving and doesn't quite require the full attention of normal driving, but it still requires the same amount of attention as supervising a teenager in the first week of their learner's permit.
If Tesla were serious about FSD safety claims, they would release data on driver interventions per mile.
Also, the in-vehicle language when you turn on FSD is just insulting: the whole bit implying that if this were an iPhone app it would just ship, but shucks, the lawyers are so silly and conservative that we have to call it a beta.
Yikes! I’d be a nervous wreck after just a couple of days.
I kept it for a couple months after the trial, but canceled because the situations it’s good at aren’t the situations I usually face when driving.
The only problem is, it doesn't work.
That was the case when they first started the trial in Austin. The employee in the car was a safety monitor sitting in the front passenger seat with an emergency brake button.
Later, when they started expanding the service area to include highways, they moved the employee to the driver's seat on those trips so that they could completely take over if something unsafe happened.
I wonder if these newly-reported crashes happened with the employee in the passenger seat with the e-brake button, or in the driver's seat ready to take over.
This is awkward for any technology where we've made the task boring but not safe: humans must still supervise, yet we've made their job harder. Waymo understood that this is not a place worth getting to.
It would be interesting to try training a non-human animal for this. It would probably not work for learning things like rules of the road, but it might work for collision avoidance.
I know of at least two relevant experiments that suggest it might be possible.
1. During WWII, when the US was willing to consider nearly anything that might win the war (short of the totally insane occult or crackpot theories that the Nazis wasted money on), they sponsored a project by B.F. Skinner to investigate using pigeons to guide bombs.
Skinner was able to train pigeons to look at an image projected on a screen showing multiple boats, a mix of US and Japanese ones, and move their heads in a harness in a way that would steer a falling bomb toward a Japanese boat. It was never actually deployed, but in simulator tests the pigeons did a great job.
2. I can't give a cite for this one, because I read it in a textbook over 40 years ago. A researcher trained pigeons to watch parts coming off an assembly line and peck a switch if a part had any visible defects.
There were a couple of really clever things about this. To train an animal to do this, you initially have to reward them frequently when they are right. Once they have learned the desired behavior, you can start rewarding them less often and they will maintain it. You do have to keep occasionally rewarding correct behavior, though, or the behavior will eventually be extinguished.
The way they handled this ongoing occasional reward was to use groups of 3 pigeons. The part-rejection system was modified to go with a majority vote. Whenever the vote was not unanimous, the 2 pigeons in the majority got a reward. This happened frequently enough to keep the behavior from going extinct in the birds, but infrequently enough to avoid fat pigeons.
Once they had 3 pigeons trained (with a human handing out the frequent rewards needed during initial training) and working well on the line, they could use those 3 to train more. They did that by adding the trainee as a 4th member of the group. The trainee's vote was not counted, but if the other 3 were unanimous and the trainee agreed, the trainee was rewarded. This produced the frequent rewards needed to establish the behavior. (The whole voting-and-reward scheme is sketched in code below.)
The groups of 3 pigeons could do this all day with an error rate orders of magnitude lower than that of the human part inspector. The human was good at the start of a shift, but rapidly got worse as the shift went on.
Ultimately, the company that had let the researchers try this decided not to use it in production. They felt that no matter how much better the pigeons did, and no matter how publicly they documented that fact, competitors' ads about the company using birds to inspect its parts would cost too many sales.
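For anyone curious how that scheme fits together, here's a minimal toy simulation in Python. All the numbers (pigeon accuracies, defect rate, reward rule details) are my own assumptions, not from the original study:

    import random

    class Pigeon:
        # Toy model: a pigeon that judges a part correctly with some probability.
        def __init__(self, accuracy):
            self.accuracy = accuracy
            self.rewards = 0

        def vote(self, defective):
            # Correct call with probability `accuracy`, otherwise the opposite.
            return defective if random.random() < self.accuracy else not defective

    def inspect(trio, trainee, defective):
        votes = [p.vote(defective) for p in trio]
        verdict = votes.count(True) >= 2  # majority of the trained trio decides

        # Intermittent reinforcement: only on a split vote do the two pigeons
        # in the majority get a reward. Unanimous votes go unrewarded, which
        # keeps the reward rate low while still maintaining the behavior.
        if len(set(votes)) > 1:
            for p, v in zip(trio, votes):
                if v == verdict:
                    p.rewards += 1

        # Trainee protocol: its vote doesn't count toward the verdict, but when
        # the trio is unanimous and the trainee agrees, the trainee is rewarded.
        # That gives the frequent reinforcement needed to establish the behavior.
        if trainee is not None:
            if len(set(votes)) == 1 and trainee.vote(defective) == votes[0]:
                trainee.rewards += 1

        return verdict

    trio = [Pigeon(accuracy=0.98) for _ in range(3)]
    trainee = Pigeon(accuracy=0.70)  # still learning
    for _ in range(10_000):
        inspect(trio, trainee, defective=random.random() < 0.05)
    print("trio rewards:", [p.rewards for p in trio])
    print("trainee rewards:", trainee.rewards)

The nice property is that the same majority-vote machinery serves two purposes: it produces the sparse rewards that maintain trained birds and the dense rewards that bootstrap new ones.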
Seems like there's zero benefit to this, then. Being required to pay attention, but having nothing (i.e., the driving itself) to keep me engaged, seems like the worst of both worlds. Your attention would constantly drift.
Externalized risks and costs are essential for many businesses to operate. It isn't great, but it's true. Our lives are possible because of externalized costs.
OSHA also has regulations to mitigate risk ... lockout/tagout.
Both mitigate external risks. Good regulation mitigates known risk factors ... unknown ones take time to learn about.
The Apollo program learned this with Apollo 1, when the bolted-on hatch and the pure-oxygen environment meant everyone inside burned alive. After that, safety first became the basis of decision making.
They advertise and market a safety claim of 986,000 non-highway miles per minor collision. They are claiming, while risking the lives of their customers and the public, that their objectively inferior product with objectively worse deployment controls is 1,700% better than their most advanced product operating under careful controls and scrutiny, in a regime where there are no penalties for incorrect reporting.
https://www.rubensteinandrynecki.com/brooklyn/taxi-accident-...
Generally, taxis have about 1 accident per 217k miles, which still means Tesla is having accidents at roughly a 4x rate (i.e., about one accident every ~54k miles). However, there may be underreporting, and that could be the source of the difference. The safety drivers may also have prevented a lot of accidents.
I think Tesla's goose is cooked. They need a full suite of sensors ASAP. Get rid of Elon and you'll see an announcement within weeks.
If you have a large fleet, say one getting into 5-10 accidents a year, you can't buy a policy that's going to consistently pay out more than the premium, at least not one that the insurance company will be willing to renew. So economically it makes sense to set that money aside and pay out directly, perhaps covering catastrophic losses with some kind of excess policy.
So this number is plausible.
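Back-of-the-envelope, the self-insurance argument looks something like this. All figures here are hypothetical assumptions for illustration, not Tesla's actual numbers:

    # Hypothetical self-insurance math for a large fleet.
    accidents_per_year = 8        # assumed, middle of the 5-10 range above
    avg_payout = 250_000          # assumed average cost per claim, in dollars
    expected_loss = accidents_per_year * avg_payout   # $2,000,000

    # Insurers price premiums above expected losses (a "loading" for overhead,
    # profit, and risk margin), so frequent, predictable losses are cheaper
    # to retain than to insure.
    loading = 1.4                 # assumed 40% markup over expected loss
    premium = expected_loss * loading                 # $2,800,000

    print(f"expected annual loss: ${expected_loss:,}")
    print(f"likely premium:       ${premium:,.0f}")
    # Retaining the predictable losses saves ~$800k/year in this toy example;
    # a separate excess policy can still cap truly catastrophic outcomes.

The point isn't the specific numbers; it's that once losses become frequent and predictable, insurance is just expected loss plus the insurer's markup, so retaining the risk is cheaper.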