A while back, Tesla gave me free FSD for a month and so I tried it out. It drove me to work just fine, and I was impressed.

Then, on the way home, it drove me on the wrong side of the street and I had to take over. Such a silly mistake.

Similar to what you said: from then on, it was more trouble than it was worth, because you can't let your guard down.

reply
That was exactly my experience with the free trial. I figure if I'm going to have to pay thousands of dollars to still have to pay full attention when I drive, I might as well keep the money and drive the car.

FWIW: My 2026 Hyundai's driver assistance is better than my old 2018 Tesla Model 3's Enhanced Autopilot.

reply
The 2026 Hyundai driver assist is OK, but the lane lines on the road have to be very clearly defined, and it won't do much under 25 mph.
reply
Same here. The car took a left turn through an orange filter light, detected oncoming traffic, decided 'nope' to the turn, and just ... stopped ... in the oncoming lanes.

I find it less cognitive load to drive it myself. It's easier to predict what other vehicles will do than what my own car will do. Boo.

I have sympathy for the challenges, as I've worked in the field.

reply
Was this a Tesla with HW3 or HW4? Also, was it in the US or outside the US?
reply
This was my experience. I kept getting told to just try the new version. When I did, the issues I'd had weren't fixed, and I bounced off it again. For very long trips it's nice, but so is lane assist.

It always had the feeling of being outside with your toddler by the pool. I can look away, but I have 50/50 odds of a dead toddler if I do it for too long.

reply
This is what the Tesla fans have been saying for years. "Oh, you're on the previous software version, bro. You gotta try the latest version, bro, trust me bro it's so much better. FSD on the current version is totally working for me, bro." "Oh, you're on today's software version? Don't worry, bro, the next one is going to be so much better, just wait for it bro, trust me bro we're going to have working FSD in the next version, bro."
reply
That was my impression as well. You have to babysit the AI the whole time and if you fail to do that it's basically your life (and others' of course) on the line.
reply
What do you think it is that train drivers do?
reply
Doing their absolute best with the steering wheel not to go off those pencil-thin tracks?
reply
Trains run on premade, static tracks, with their "roads" clearly divided from the rest of the traffic.

The driver's role is to stop the train in an emergency and to adjust speed etc. to track/driving conditions.

Automating their job probably wouldn't even need the complex ML used for self-driving, because the context is significantly simpler and relatively well defined. Maybe a tram in a city might need such a model, but it would still be a significantly simpler task than driving a car.

reply
Getting paid to babysit. Tesla asks you to pay to babysit.
reply
Should Tesla pay people to use Autopilot?
reply
Sounds like babysitting an LLM, with the alarming difference that this AI can kill you if you are not paying enough attention.
reply
Oh, don't worry, the LLMs absolutely can kill us, just slightly more indirectly.

Triggering psychosis is not difficult, and an LLM is easily capable of doing it. With a person, they'd soon get freaked out and would likely summon help: "Johnny started acting crazy and I'm not sure what to do, please come." But the LLM isn't a person. Johnny needs to know more about the CIA's programme to cross-breed Venusians with Hollywood stars? Here's an itinerary with the address of a real hotel in LA and an entirely hallucinated CIA officer's schedule.

Next thing you know, Johnny is shot dead by officers responding to a maniac with a fire axe who broke into an LA hotel and was screaming about space aliens.

reply
>Johnny is shot dead by officers responding to a maniac with a fire axe who broke into an LA hotel and was screaming about space aliens.

I’m pretty sure LAPD is too used to this sort of thing to get spooked by it?

reply
Same here: phantom braking on the highway, randomly turning off in the middle of an intersection turn, and, after not getting over in time for an exit, braking in the left lane to try to force its way over. While it was fun to try, it's not reliable enough for me to trust. That, and if I lean my head the wrong direction while resting it, I start getting yelled at by the car.
reply
Exactly. I've said it before and I'll say it again.

I do not want to be the 'manager of my car'. That'd be a downgrade from being an actual driver.

Lane assist, auto stop-start, and cruise control are enough for me; they've mostly been available for decades and require a similar amount of attention.

FSD is a busted flush and I can't believe those who got conned by it aren't more vocal.

reply
I hear this a lot, and I'm genuinely curious why you think it might take more energy to be on alert for tricky situations. Wouldn't you already be doing that for your own manual driving?
reply
I'm guessing that predicting the failure modes of a computer is more taxing than letting your brain use pattern recognition to decide what it needs to react to.

If you're driving, your brain can automatically prioritize the importance of the things you see. But since a computer fails in different ways than a human does, you lose all of that automatic prioritization.

reply
Think about a junior coworker you offloaded some of your tasks to. It turns out the coworker frequently makes mistakes. At some point you are going to say it is easier to just do this myself. Especially if a single mistake can cost you your life!
reply
It's easier to predict, understand, and react to your own driving behavior.
reply
Because constantly switching between full attention and degraded attention (which FSD promises) is more tiring than staying at full attention continuously.
reply
It's not just "tricky situations", sometimes FSD will do things that no normal driver would ever do, and it will do them inconsistently. Sometimes it's brilliant and sometimes it's drunk.
reply
This is a subject that has been studied quite a bit, as there are a bunch of jobs where people have to monitor for rare emergencies, and react fast if an emergency should arise. Things like pilots on flights with autopilot; lifeguards watching for swimmers in distress; CCTV monitoring; operating airport X-ray machines, and so on.

One such study is "Performance consequences of automation-induced 'complacency'" (Parasuraman, Molloy & Singh, 1993) https://www.pacdeff.com/pdfs/Automation%20Induced%20Complace...

Previous studies had found that a human and a computer performed markedly better than either a human alone or a computer alone - but in those studies failures were quite common, so they didn't give the humans time to get bored or distracted.

When researchers got test subjects to perform a simulated flying task, monitoring a system with 99%+ reliability, they found the humans were proportionally much worse at stepping in than they were on less reliable systems.

Swimming pool lifeguards will often change posts every 15-20 minutes and get a 10-15 minute break every hour, to keep things interesting enough that they can stay attentive. Good luck getting drivers to do that.
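
If you want a toy model of why "human + computer" beats either alone only while the human stays engaged, here's a minimal Python sketch (all miss rates are made up for illustration; the paper's actual numbers differ):

    # Idealized model: the human and the automation each miss an event
    # independently; the pair misses only when both do. Rates here are
    # illustrative, not taken from the Parasuraman et al. paper.
    human_miss = 0.10    # attentive human misses 10% of events
    machine_miss = 0.01  # automation misses 1% of events

    # 0.1% if the two really are independent
    print(f"attentive pair miss rate:  {human_miss * machine_miss:.2%}")

    # Complacency breaks the independence: the more reliable the
    # automation, the less the human actually watches, so the human's
    # effective miss rate climbs and the pair's miss rate collapses
    # back toward the machine's.
    complacent_human_miss = 0.80
    print(f"complacent pair miss rate: {complacent_human_miss * machine_miss:.2%}")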

reply
> Things like pilots on flights with autopilot;

Funny, I was going to mention exactly that. I'm a private pilot with a modern autopilot, and flying is exhausting. Partly it's because the piston engine is rattling your brain the whole flight, but it's also because you're on high alert the entire time. You're always making sure the autopilot is keeping the plane on the blue (or green) line and behaving predictably. And my smartwatch shows my heart rate is usually more elevated on autopilot than off it.

reply
This is the real trick about 95% or 99% accuracy: if you never know when that 1% incident will occur, you ALWAYS have to watch for it. And eventually we'll have to live with the fact that it'll never hit 100% accuracy, just as we don't have 100% accuracy today with human driving.
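
To put rough numbers on "you ALWAYS have to watch for it", here's a back-of-the-envelope Python sketch (the per-drive reliability figures and the 500-drive horizon are made up for illustration):

    # Chance of at least one incident over many drives, assuming each
    # drive independently has a fixed failure probability. Numbers are
    # illustrative, not measured.
    def p_at_least_one(per_drive_failure, drives):
        return 1.0 - (1.0 - per_drive_failure) ** drives

    for reliability in (0.95, 0.99, 0.999):
        p = p_at_least_one(1.0 - reliability, drives=500)  # ~a year of commuting
        print(f"{reliability:.1%} reliable per drive -> "
              f"{p:.1%} chance of >=1 incident in 500 drives")

Even at three nines you get roughly a 39% chance of at least one incident in 500 drives, so supervision never becomes optional; it just pays off less often.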
reply
I know my normal, non-self-driving car won't randomly slam on the brakes or swerve into a median. Even if I take my hands off the wheel, I know it will keep going straight-ish for a second or two.

A "self-driving" tesla is an adversary you need to supervise to make sure it doesn't take actions you wouldn't expect of a normal car.

As other posters have pointed out, it's like running an LLM with `--dangerously-skip-permissions`: I wouldn't `rm -rf /` my computer (or in the case of tesla, my life), but an AI might.

reply