Neither do cameras, or eyeballs.
I've been in zero-visibility whiteout conditions several times, where no road speed is safe. The only move is to pull off to the side of the road without getting stuck and turn on your flashers.
Low-light cameras would not have worked. Sonar would not have worked. Infrared would not have worked.
If we could make sensors that let an autonomous vehicle drive reliably in any snow or rain that a human could (carefully) drive in, then we'd be good. But we are a long way from that. It doesn't help that a lot of sensor tech, cameras especially, tends to fail in two ways: performance degrades in adverse conditions, and the sensor can simply stop functioning altogether once it's covered in ice, snow, or water.
It's significant that a truly hard problem like autonomous driving doesn't respond to a "brute force" management style. Rockets aren't in this category because the required knowledge and theory is fairly complete, whereas real autonomous driving is completely novel.
Hmm. Is it ragebaiting to respond to a tired and wrong statement by saying that it's tired and wrong, and that the situation is merely the product of piss-poor management decisions? People get understandably frustrated seeing the same wrong talking point that people with domain knowledge in computer vision and robotics have repeatedly explained is wrong in extremely fundamental ways.
> I don't own a Tesla.
n.b. The shoe/foot comment was not about you. It was about Musk. It wouldn't make any idiomatic sense for the expression to be about you given what you said and what you were responding to. If they'd said "pot, meet kettle", then it would have been about you. In that context, saying that you don't own a Tesla feels like a weird thing for you to insert in your comment. It potentially comes across as suspiciously defensive.
Tesla is spending upwards of $6B/year compared to Waymo’s $1.5B. Only one of these companies makes an autonomous robotaxi that’s actually autonomous.
Of course you do, you're driving at much higher speeds and so is the surrounding traffic. You can't just guess what you might be looking at, you have to make clear decisions promptly. Lidar is excellent in that case.
Computer vision does not work exactly like human vision, closely equating the two has tended to work out poorly in extreme circumstances.
High performance fully automated driving that relies solely on vision is a losing bet.
It's frustrating to still see it repeated over a decade later. It was always bullshit. It was always a lie.
Then again, it's good that we have self-driving companies with lidar and without — we will find out which approach wins.
Also, military sensor use shows the best answer is to have as many different types of sensors as possible and then do sensor fusion. So machine vision, lidar, radar, etc.
That way you pick up things that are missed by one or more sensor types, catch problems and errors from any of them, and end up with the most accurate ‘view’ of the world - even better than a normal human would have.
It’s what Waymo is doing, and, unsurprisingly, they also have the best self-driving right now.
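To make the fusion idea concrete, here's a minimal sketch (a hypothetical illustration, not any company's actual stack) of one classic approach: combine independent range estimates by inverse-variance weighting, so sharper sensors dominate, and degrade gracefully when a sensor drops out entirely (say, a camera lens caked in snow reporting nothing). The sensor names and numbers below are made up for the example.

```python
def fuse(readings):
    """Fuse independent (value, variance) estimates of the same quantity.

    readings: list of (value, variance) tuples, or None for a failed sensor.
    Returns (fused_value, fused_variance). The fused variance is always
    lower than the best individual sensor's, which is the payoff of fusion.
    """
    live = [r for r in readings if r is not None]  # drop dead sensors
    if not live:
        raise ValueError("all sensors failed")
    weights = [1.0 / var for _, var in live]       # trust = 1 / noise
    total = sum(weights)
    value = sum(w * v for (v, _), w in zip(live, weights)) / total
    return value, 1.0 / total

# Distance to an obstacle in metres: camera degraded by snow (high variance),
# lidar still sharp, radar offline entirely.
camera = (41.0, 9.0)
lidar = (39.5, 0.25)
radar = None
dist, var = fuse([camera, lidar, radar])
```

The fused estimate lands close to the lidar reading (the low-noise sensor) while still incorporating the camera, and the system keeps working with the radar gone. Real stacks do this over full state vectors with Kalman-style filters rather than scalars, but the principle is the same.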
1) it's not cheap to produce lidars at a stable, predictable quality in the millions;
2) car driving training data sets for lidars are much scarcer (and will always be much scarcer due to cameras' higher prevalence) and at a much lower quality;
3) combined camera+lidar data sets are even scarcer.
It wasn't cheap to produce accelerometers at a stable, predictable quality in the millions before smartphones either. Mass production shakes things up somewhat. See the headline for reference.
2+3. BYD collects extensive training data from customers, much like Tesla does. They will have no trouble with training.