I don't think that's the reasoning.

The reasoning was simply that LIDAR was (and was incorrectly predicted to always be) significantly more expensive than cameras, and that hypothetically this should be fine because, well, humans drive with only two eyes.

Musk miscalculated on 1) cost reduction in LIDAR and 2) how incredible the human brain is compared to computers.

Having similar sensors certainly doesn't guarantee your accidents look the same, so I don't think your logic is even internally sound.

reply
Sensor fusion is also hard to get right: since you still need cameras, you have to fuse the two information streams. That's mainly a software problem, and companies like Waymo have done it, but Tesla was having trouble with it earlier, and if you don't do it right, your self-driving system can be less reliable.
reply
Sensor fusion seems like it'd be a big problem when you're handcoding lots of C++, and way less of a problem when all the sensors are just feeding into one big neural network, as Tesla and probably others are doing now. The training process takes care of it from there.

One of Udacity's first courses was on self-driving, taught by Sebastian Thrun who later cofounded Waymo. He went through some Bayesian math that takes a collection of lidar points, where each point contributes to a probabilistic assessment of what's really going on. It's fine if different points seem to contradict each other, because you're looking for the most likely scenario that could produce that combined sensor data. Transformers can do the same sort of thing, and even with different sensor types it's still the same sort of problem.
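A toy sketch of that Bayesian idea (my own illustration, not Thrun's course code): with Gaussian noise, fusing independent estimates of the same quantity is just a precision-weighted average, so "contradicting" sensors don't break anything, they just pull the answer toward whichever reading is more confident.

```python
import numpy as np

def fuse_gaussian(means, variances):
    """Fuse independent noisy estimates of the same quantity.

    Each sensor reports (mean, variance); the fused estimate is the
    precision-weighted average, and the fused variance is always
    smaller than any single sensor's variance."""
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    precisions = 1.0 / variances            # confidence of each sensor
    fused_var = 1.0 / precisions.sum()      # combined uncertainty shrinks
    fused_mean = fused_var * (precisions * means).sum()
    return fused_mean, fused_var

# Hypothetical numbers: camera says the obstacle is 10.0 m away but is
# noisy (var 4.0); lidar says 10.6 m and is sharp (var 0.25).
mean, var = fuse_gaussian([10.0, 10.6], [4.0, 0.25])
print(mean, var)  # estimate lands near the lidar reading, with lower variance
```

The same precision-weighting shows up as the update step in a Kalman filter, which is the classic handcoded way to do this before you throw a neural network at it.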

reply
> Sensor fusion is also hard to get right, since you still need cameras you have to fuse the two information streams

The response to the challenge shouldn't be whittling down your sensor-suite to a single type, but to get good at sensor fusion.

reply
I think this is the key. In theory, more information streams, when fused together properly, should reduce error. If their stumbling block is the "properly" part, then the rest of those justifications come off as a pretty weak way to sidestep their own inability to deliver this properly.

We have lots of evidence of similar strategies being used in other domains, this seems like an especially life-critical domain that ought to have high rigor and standards applied.

reply
> how incredible the human brain is compared to computers.

It is pretty incredible but people will (rightly so?) hold automated drivers to an ultra high standard. If automated driving systems cause accidents at anywhere near the human rate, it'll be outlawed pretty quickly.

reply
> If automated driving systems cause accidents at anywhere near the human rate, it'll be outlawed pretty quickly.

This is evidently false. Robotaxi crash rates exceed human drivers', but there's not an effective regulatory agency to outlaw them!

https://futurism.com/advanced-transport/tesla-robotaxis-cras...

reply
According to that article, Waymo crashes 2.3x more often than human drivers (one crash every 98k miles vs. every 229k miles), which is clearly false. I think it's far more likely that humans don't report most minor collisions to insurance, and that both Robotaxis and Waymos are safer than human drivers on average.
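For what it's worth, the 2.3x is just the ratio of the two reported intervals, so the whole claim hinges on what counts as a "reported" crash on each side:

```python
# Ratio behind the "2.3x" figure: miles between *reported* crashes.
# Both numbers are the ones cited from the article, not independent data.
human_miles_per_crash = 229_000
waymo_miles_per_crash = 98_000

ratio = human_miles_per_crash / waymo_miles_per_crash
print(f"{ratio:.1f}x")  # ~2.3x, but only if both denominators are comparable
```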
reply
> According to that article, Waymo crashes 2.3x more often than human drivers (every 98k miles vs 229k miles), which is clearly false.

Why is it clearly false? It might be false, but clearly? I would definitely like to see evidence either way.

> I think it's far more likely that humans don't report most minor collisions to insurance, and that both Robotaxis and Waymo are safer than human drivers on average.

That sounds like you are trying to find reasons to get the conclusion you want.

reply
The NHTSA requires a report when any automated driving system hits any object at any speed, or if anything else hits the ADS vehicle resulting in damage that is reasonably expected to exceed $1,000.[1] In practice, this means that everyone reports any ADS collision, since trading paint between two vehicles can result in >$1k in damage total.

If you go to the NHTSA's page regarding their Standing General Order[2] and download the CSV of all ADS incidents[3], you can filter where the reporting entity is Waymo and find 520 rows. If you filter where the vehicle was stopped or parked, you'll find 318 crashes. If you scan through the narrative column, you'll see things like a Waymo yielding to pedestrians in a crosswalk and getting rear-ended, or waiting for a red light to change and getting rear-ended, or yielding to a pickup truck that then shifted into reverse and backed into the Waymo. In other words: the majority of Waymo collisions are due to human drivers.
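For anyone who wants to reproduce the filtering, it's roughly this (the column names here are my assumption from memory, so check them against the data dictionary on the NHTSA page; the sample rows are invented stand-ins for the real CSV):

```python
import csv
import io

# Tiny stand-in for the downloaded SGO incident CSV. Real column names
# may differ -- verify against NHTSA's data element definitions.
sample = """Reporting Entity,SV Pre-Crash Movement,Narrative
Waymo LLC,Stopped,ADS vehicle stopped at a red light and was struck from behind
Waymo LLC,Proceeding Straight,ADS vehicle was rear-ended while yielding
Other ADS Co,Stopped,...
"""

rows = list(csv.DictReader(io.StringIO(sample)))

# Filter to Waymo reports, then to incidents where the vehicle was not moving.
waymo = [r for r in rows if "Waymo" in r["Reporting Entity"]]
stopped = [r for r in waymo
           if r["SV Pre-Crash Movement"] in ("Stopped", "Parked")]
print(len(waymo), len(stopped))
```

Run the same two filters against the real file and you get the 520 and 318 counts mentioned above.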

So either Waymos are ridiculously unlucky, or when these sorts of things happen between two human driven cars, it's rarely reported to insurance. In my experience, if there's only minor damage, both parties exchange contact info and don't involve the authorities. Maybe one compensates the other for damage, or maybe neither party cares enough about a minor dent or scrape to deal with it. I've done this when someone rear-ended me, and I know my parents have done it when they've had collisions.

If human driven vehicles really did average 229k miles between any collision of any kind, we'd see many more pristine older vehicles. But if you pay attention to other cars on the road or in parking lots, you'll see far more dents and scratches than would be expected from that statistic. And that's not even counting the damage that gets repaired!

1. See page 13 of https://www.nhtsa.gov/sites/nhtsa.gov/files/2025-04/third-am...

2. https://www.nhtsa.gov/laws-regulations/standing-general-orde...

3. https://static.nhtsa.gov/odi/ffdd/sgo-2021-01/SGO-2021-01_In...

reply
Definitely. I looked at Tesla's source for these numbers; it looks like they primarily used data sourced from police reports, which most people only file if the incident is serious enough to involve insurance.

Tesla notes:

> These assumptions may contain limitations with respect to reporting criteria, unreported incident estimations (e.g., NHTSA estimates that 60% of property damage-only crashes and 32% of injury crashes are not reported to police

https://www.tesla.com/fsd/safety

reply
> Musk miscalculated on 1) cost reduction in LIDAR

Given that Musk has a history of driving costs down, it's unlikely he overestimated the long-term cost floor. He just thought we were close to self-driving in 2014.

Another factor is Andrej Karpathy, who was the primary architect for the vision-only approach. Musk wanted fewer parts, and Karpathy believed he could deliver that. Karpathy is still an advocate of vision-only.

reply
Right, for the reasons that I just mentioned.
reply
Musk has never been scared of vertically integrating something that's too expensive initially.
reply
> Musk miscalculated on 1) cost reduction in LIDAR and 2) how incredible the human brain is compared to computers.

And, less excusable, ignorant of how incredible human eyes are compared to small sensor cameras. In particular high DR in low light, with fast motion. Every photographer knows this.

reply
And also ignorant about how those two eyes have binocular vision, adjustable positions, and can look in multiple mirrors for full spatial awareness.
reply
There are good arguments but this isn’t one. Many humans (like me!) drive fine without binocular vision. And the cars have many cameras all around, with wide angle lenses that are watching everything all the time, when a human can only focus in one direction at a time.
reply
I thought only the front view has binocular vision on the cars. The others are single, with no depth perception. How does it know how close objects are outside this forward cone?

https://www.researchgate.net/publication/378671275/figure/fi...

reply
So your eye does not have an adjustable position and you cannot use mirrors?
reply
Both are easily compensated for by having many cameras.
reply
Binocular vision is not only relevant for driving (well, maybe for the steering wheel, but that's not the point).
reply
It gives us depth perception. And moving the eyes and/or head gives the depth perception over a wide field of view.
reply
Eh, I think 'miscalculation' might be giving too much credit for good intentions.

He wanted (needed?) to get on the hype train for self-driving to pump up the stock price. He knew that at the time there was zero chance they could sell it at the price point lidar required, or even other effective sensors (like radar). He sold it anyway at the price point people would buy it at, even though it was never plausibly going to work at the level that was being promised.

There is a word for that. But I’m sure there are many lawyers that will say it was ‘mere fluffery’ or the like. And I’m sure he’ll get away with it, because more than enough people are complicit in the mess.

Miscalculation assumes there was a mistake somewhere, but as near as I can tell, it is playing out as any reasonable person expected it to, given what was known at the time.

reply
I think Musk is really not as smart as he thinks he is and this specific thing was probably an earnest mistake. Lots of other fraudulent stuff going on though of course!
reply
IMHO not using lidars sounds like a premature optimisation and a complication, with a level of hubris.

This is a difficult problem to solve and perhaps a pragmatic approach was/is to make your life as simple as possible to help get to a fully working solution, even if more expensive, then you can improve cost and optimise.

reply
Considering he also runs a company that puts computer chips inside brains to augment them you’d think he ought to have a more sound understanding as to the limits of both.
reply
There certainly is an ongoing miscalculation regarding human intelligence and, consequently, empathy.
reply
Seeing the state of the art in FSD tech, it is not obvious that Musk has miscalculated so far.
reply
Nah

If the data were positive for Tesla, Tesla would publish it

They do not, so one can infer it is not flattering

(Before you post the "Miles driven with FSD" chart, you should know upfront (as Tesla must) that chart doesn't normalize by age of vehicle or driving conditions and is therefore meaningless/presumably designed to deceive)

reply
Until a lawyer points out that other cars see that. My car already has various sensors and, in manual driving, sounds alarms if there is a danger I seem not to have noticed. (There are false alarms, but most of the time I did notice and probably should have left more safety margin, even though I wouldn't have hit it.)

Also, regulators gather statistics, and if cars with something do better, they will mandate it.

reply
Very recent issue with Waymo: https://dmnews.co.uk/waymo-robotaxi-spotted-unable-to-cross-.... This is 17 years after they bet the farm on LIDAR, with no signs it's ever going to be cost-effective, or that it's better than multiple cameras with millisecond reactions and 360-degree coverage that never get tired, drunk, or distracted, plus other cheaper sensors and a NN trained on billions of miles of real-world data.
reply
Tesla does not handle rain well either. This is not a LIDAR problem, it is a problem with self driving cars in general.
reply
My Tesla can't even tell if it should turn the wipers on consistently or correctly. Let alone drive in the rain.
reply
A feature that is bulletproof in other cars with a very boring and industry standard sensor (it's not even expensive), while Tesla insisted they could do it with just normal cameras.
reply
Seriously. Why do people think a company that can't do automatic wipers could possibly do automatic driving?
reply
The same people that seriously thought we’d have a mars base by now.
reply
People also don't handle rain well.
reply
That's an example of it failing safe. I'd rather it did that than drive me into a sinkhole because it thought it was a puddle.
reply
Ok, so Waymo is useless in the rain then, which is kind of limiting. But at least that 0.000000000001% of the time it actually is a sinkhole, you won't damage the bumper.
reply
I'd rather a Waymo be useless in the rain rather than a Tesla be actively dangerous and likely to kill me.

Tesla ""autopilot"" fatalities: 65

Waymo fatalities: 0

reply
Autopilot isn’t full self driving (FSD); most cars these days ship with smart cruise control (which is basically what Autopilot is). Do you have fatality statistics for FSD?

If we are just talking about smart cruise control, most cars are using cameras and radar, not lidar yet. But Tesla is special since it doesn’t even use radar for its smart cruise control implementation, so that could make it less safe than other new cars with smart cruise control, but Autopilot was never competing with Waymo.

reply
> Waymo fatalities: 0

By some measures Waymo is actually at -1 fatalities. There has been one confirmed birth of a child in a Waymo. https://apnews.com/article/baby-born-waymo-san-francisco-6bd...

reply
I think the car would have to be more actively involved in the process for that to count. :)
reply
There is also a report from the same flooding in LA of a Waymo driving into a flooded road and getting stuck.

They might have flipped a switch after that, causing this.

reply
Dude, that's not a 'puddle' as the article claims; that's a body of water where it's not even visually obvious whether it's safe to drive through. Maybe I'm a bad driver, but I'd hesitate to drive through that in a small car either.
reply
I think the difference is the prior knowledge a commuter has of that section of road. Does it always flood shallowly in heavy rain?
reply
Even without prior knowledge, seeing others safely navigate the same section will lower your estimated risk.
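That's just a Bayes update. With made-up numbers (the probabilities below are purely illustrative):

```python
# How much does watching one car pass safely lower your belief
# that the water is dangerously deep?
p_deep = 0.30                  # prior: maybe the flooding is deep
p_pass_given_deep = 0.05       # cars rarely get through deep water
p_pass_given_shallow = 0.90    # shallow water is usually passable

# Bayes' rule: P(deep | saw a car pass)
evidence = p_deep * p_pass_given_deep + (1 - p_deep) * p_pass_given_shallow
posterior = p_deep * p_pass_given_deep / evidence
print(f"{posterior:.3f}")  # belief in "deep" drops from 0.30 to ~0.023
```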
reply
The amount of water will depend on the rain, so we don't know how shallow it is even with prior knowledge.
reply
If you drive the road every day, you probably do. If you can see someone drive through it (perhaps someone who knows the area well and knows how deep it is based on puddle width), you definitely do.
reply
deleted
reply
> A vehicle got stuck trying to figure out an obstacle, so sensors with less information are better than sensors with more information.
reply
It is sound to think that cameras, plus an accelerometer, plus data about the car and environment (what you get from your ears), ought to be able to mimic and improve on human driving. However, humans' general-purpose spatial awareness and ability to integrate all kinds of general information are probably really hard to replicate. A human would realize that an orange fluid spilling across the road might be slippery, or guess the way a person might travel from the way their eyes are pointing...

It may just be faster to make lidar cheap. And lidar can do things humans can't.

reply
IIUC, the cameras in a Tesla have worse vision (resolution) at far distances than a human. So while in the abstract your argument sounds fine, it'll crumble in court when a lawyer points out that a driver with similar vision would've needed corrective lenses.
reply
Most accidents happen because people are human, aren't paying attention, are inebriated, not experienced enough drivers, or reckless.

It's not fair to say that vision based models will "make the same mistakes people do" as >99% of the mistakes people make are avoidable if these issues were addressed. And a computer can easily address all those issues

reply
Which means the mistakes vision-based models make today are unique to them.
reply
This is a new and flawed rationale that I haven't heard before. Tesla cameras are worse (lower resolution, sensitivity, and dynamic range) than human eyes and don't have "ears" (microphones).
reply
The cars do have at least one microphone.
reply
Inside the car though, right? With multiple exterior microphones they could do spatialization like Waymo.
reply
Pretty hard to do if your whole selling point is ‘better and safer than human’ however?
reply