The inputs to FSD are:

    7 cameras x 36fps x 5Mpx x 30s
    48kHz audio
    Nav maps and route for next few miles
    100Hz kinematics (speed, IMU, odometry, etc)
Source: https://youtu.be/LFh9GAzHg1c?t=571
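
Back-of-envelope, that stream is dominated by video. A rough sketch (assuming ~1 byte per pixel and a small kinematics packet, neither of which is stated in the talk):

    # Rough scale of the raw FSD input stream; byte-per-pixel and
    # packet sizes are assumptions, not figures from the talk.
    video_Bps = 7 * 36 * 5e6 * 1      # 7 cams x 36 fps x 5 Mpx x 1 B/px
    audio_Bps = 48_000 * 2            # 48 kHz, 16-bit mono
    kin_Bps   = 100 * 64              # 100 Hz of ~64-byte state packets

    total = video_Bps + audio_Bps + kin_Bps
    print(total / 1e9)                # ~1.26 GB/s raw
    print(total * 30 / 1e9)           # ~38 GB per 30 s context window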
reply
So if they’re already “fusioning” all these things, why would LIDAR be any different?
reply
Tesla went nothing-but-nets (making fusion easy) and Chinese LIDAR became cheap around 2023, but monocular depth estimation was spectacularly good by 2021. By the time unit cost and integration effort came down, LIDAR had very little to offer a vision stack that no longer struggled to perceive the 3D world around it.
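
For a sense of what "spectacularly good" looked like off the shelf around then: MiDaS (released ~2020) already gave dense depth from a single frame. A minimal sketch following the intel-isl/MiDaS torch.hub usage; the image path is a placeholder:

    import cv2
    import torch

    midas = torch.hub.load("intel-isl/MiDaS", "MiDaS_small")
    midas.eval()
    transforms = torch.hub.load("intel-isl/MiDaS", "transforms")

    img = cv2.cvtColor(cv2.imread("road.jpg"), cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        pred = midas(transforms.small_transform(img))
        depth = torch.nn.functional.interpolate(
            pred.unsqueeze(1), size=img.shape[:2],
            mode="bicubic", align_corners=False,
        ).squeeze()
    # Note: output is *relative* inverse depth, not metric distance.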

Also, integration effort went down but it never disappeared. Meanwhile, opportunity cost skyrocketed when vision started working. Which layers would you carve resources away from to make room? How far back would you be willing to send the training + validation schedule to accommodate the change? If you saw your vision-only stack take off and blow past human performance on the march of 9s, would you land the plane just because red paint became available and you wanted to paint it red?

I wouldn't completely discount ego either, but IMO there's more ego in the "LIDAR is necessary" case than in the "LIDAR isn't necessary" case at this point. FWIW, I used to be an outspoken LIDAR-head before 2021, when monocular depth estimation became a solved problem. It was funny watching everyone around me convert in the opposite direction at around the same time, probably driven by politics. I get it, I hate Elon's politics too, I just try very hard to keep his shitty behavior from influencing my opinions on machine learning.

reply
> but monocular depth estimation was spectacularly good by 2021

It's still rather weak, and true monocular depth estimation really wasn't spectacularly anything in 2021. It's fundamentally ill-posed, and any priors you use to get around that will come back to bite you in the long tail of things some driver will encounter on the road.

The way it got good is by using camera overlap in space and over time while in motion to figure out metric depth over the entire image. Which is, humorously enough, sensor fusion.
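
The core geometry is the same whether the second view comes from a second camera or from the same camera a moment later. A minimal sketch of the rectified two-view case (focal length and baseline are made-up numbers):

    import numpy as np

    # Rectified two-view triangulation: depth = f * B / disparity.
    # The second view can be a second camera (stereo) or the same
    # camera after the car has moved (structure from motion).
    f = 1200.0   # focal length in pixels (assumed)
    B = 0.30     # baseline between viewpoints in meters (assumed)

    disparity = np.array([60.0, 12.0, 3.0])  # feature shift in px
    print(f * B / disparity)                 # [6., 30., 120.] meters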

reply
It was spectacularly good before 2021; 2021 is just when I noticed that it had become spectacularly good. 7.5 billion miles later, this appears to have been the correct call.
reply
Depth estimation is but one part of the problem: atmospheric and other conditions that blind visible-spectrum optical sensors, lack of ambient light (sunlight), and more. Lidar simply outperforms (performs at all?) in these conditions, and provides hardware-backed distance maps, not software-calculated estimates.
reply
Lidar fails worse than cameras in nearly all those conditions. There are plenty of videos of Tesla's vision-only approach, on real customer cars, seeing obstacles in all those conditions far before a human possibly could. Many are on the old hardware with far worse cameras.
reply
Interesting, got any links? That sounds completely unbelievable; eyes are far superior to the shitty cameras Tesla has on their cars.
reply
I always thought the case was for sensor redundancy and data variety - the stuff that throws off monocular depth estimation might not throw off a lidar or radar.
reply
Monocular depth estimation can be fooled by adversarial images, or just scenes outside of its distribution. It's a validation nightmare and a joke for high reliability.
reply
It isn't monocular though. A Tesla has 2 front-facing cameras, narrow and wide-angle. Beyond that, it is only neural nets at this point, so depth estimation isn't directly used; it is likely part of the neural net, but only the useful distilled elements.
reply
I never said it was. I was using it as a lower bound for what was possible.
reply
Better than I expected. So this was 3 days ago; does this apply to all previous models, or is there a cutoff date here?
reply
Fog, heavy rain, heavy snow, people running between cars or from an obstructed view…

None of these technologies can ever be 100%, so we’re basically accepting a level of needless death.

Musk has even shrugged off FSD-related deaths as "progress".

reply
Humans: 70 deaths in 7 billion miles

FSD: 2 deaths in 7 billion miles

Looks like FSD saves lives by a margin so fat it can probably survive most statistical games.
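
As arithmetic, with the parent's counts (the human figure is roughly the oft-cited ~1 fatality per 100M US vehicle miles):

    MILES = 7e9
    scale = 1e8 / MILES              # convert to deaths per 100M miles

    print(70 * scale)                # humans: 1.0 per 100M miles
    print(2 * scale)                 # FSD: ~0.029 per 100M miles
    # ~35x apart; even wide Poisson error bars on a count of 2
    # leave the FSD rate well below the human baseline.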

reply
Isn't there a great deal of gaming going on with the car disengaging FSD milliseconds before crashing? Voila, no "full" "self" driving accident; just another human failing [*]!

[*] Failing to solve the impossible situation FSD dropped them into, that is.

reply
Nope. NHTSA's criterion for reporting is active-within-30-seconds.

https://www.nhtsa.gov/laws-regulations/standing-general-orde...

If there's gamesmanship going on, I'd expect the antifan site linked below to have different numbers, but it agrees with the 2 deaths figure for FSD.
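
The rule is simple enough to state as a predicate (a hypothetical sketch of the criterion, not NHTSA's actual tooling):

    from datetime import datetime, timedelta

    SGO_WINDOW = timedelta(seconds=30)

    def sgo_reportable(crash: datetime, last_engaged: datetime) -> bool:
        # Per the Standing General Order, a crash counts if the system
        # was engaged at any point in the 30 s before impact, so
        # disengaging milliseconds early doesn't dodge the count.
        return crash - last_engaged <= SGO_WINDOW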

reply
Is that the official Tesla stat? I've heard of way more Tesla fatalities than that.
reply
There are a sizeable number of deaths associated with the abuse of Tesla's adaptive cruise control with lane centering (publicly marketed as "Autopilot"). Such features are commonplace on many new cars, and it is unclear whether Tesla is an outlier, because no one is interested in obsessively researching cruise control abuse among other brands.

There are two deaths associated with FSD.

reply
This is absolutely a Musk defender. FSD and Tesla related deaths are much higher.

https://www.tesladeaths.com/index-amp.html

reply
Autopilot is the shitty lane assist. FSD is the SOTA neural net.

Your link agrees with me:

> 2 fatalities involving the use of FSD

reply
I don't know what he's on about. Here's a better list:

https://en.wikipedia.org/wiki/List_of_Tesla_Autopilot_crashe...

reply
Autopilot is the shitty lane assist. FSD is the SOTA neural net.

Your link agrees with me:

> two that NHTSA's Office of Defect Investigations determined as happening during the engagement of Full Self-Driving (FSD) after 2022.

reply
I quickly googled Lidar limitations, and this article came up:

https://www.yellowscan.com/knowledge/how-weather-really-affe...

Seeing how it's by a lidar vendor, I don't think they're biased against it. It seems lidar is not a panacea: it struggles with heavy rain and snow much more than cameras do, and is affected by cold weather or any contamination on the sensor.

So lidar will only get you so far. I'm far more interested in mmWave radar, which, while much worse in spatial resolution, isn't affected by light conditions or weather, and can directly measure properties of the thing it's illuminating: material properties, the speed it's moving at, its thickness.

Fun fact: mmWave-based presence sensors can measure your heartbeat, as the micro-movements show up as a frequency component. So I'd guess it would have a very good chance of detecting a human.

I'm pretty sure that even with much more rudimentary processing, it'll be able to tell if it's looking at a living being.
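
Roughly how that works, as a synthetic sketch (a real radar gives you the phase of the reflected chirp; here a 1.2 Hz chest micro-motion plus noise stands in for it, and all numbers are made up):

    import numpy as np

    fs = 50.0                        # phase sample rate, Hz (assumed)
    t = np.arange(0, 20, 1 / fs)     # 20 s observation window

    # Simulated radar phase: sub-mm chest motion at 1.2 Hz (~72 bpm)
    phase = 0.05 * np.sin(2 * np.pi * 1.2 * t) \
            + 0.02 * np.random.randn(t.size)

    spectrum = np.abs(np.fft.rfft(phase - phase.mean()))
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    band = (freqs > 0.8) & (freqs < 3.0)   # plausible heart-rate band
    print(freqs[band][np.argmax(spectrum[band])] * 60)  # ~72 bpm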

By the way: what happened to the idea that self-driving cars would be able to talk to each other and combine each other's sensor data, so that if multiple cars are looking at the same spot, you'd get a much better chance of not making a mistake?

reply
Maybe vision-only can work with much better cameras, with a wider spectrum (so they can see through fog, for example) and self-cleaning/zero upkeep (so you don't have to pull over to wipe a speck of mud off them). Nevertheless, LIDAR still seems like the best choice overall.
reply
Autopilot hasn’t been updated in years and is nothing like FSD. FSD does use all of those cues.
reply
I misspoke; I'm using Hardware 3 FSD.
reply