Discussion
How Tesla hid fatal accidents to keep testing autonomous driving on the roads
doener: The article was also published in German: https://www.srf.ch/news/dialog/autonomes-fahren-wie-tesla-un...
dangus: To pile on to this pathetic excuse for a company: anyone considering buying a Tesla should know that they are the #1 brand for fatal accidents in the United States, with over twice the accident rate of a typical automaker: https://www.roadandtrack.com/news/a62919131/tesla-has-highes...

This terrible statistic can't just be explained by aggressive driving owners or some other factor.

Tesla makes unsafe cars and covers it up. The crash test safety awards their vehicles have won are clearly not representative of reality.
ymolodtsov: Tesla has a very bad track record in terms of both compliance and disclosure when it comes to autonomy incidents.
oblio: Look, there is no way corporations would lie for their own interest. Especially when they spent tens of billions to develop something.

It's not like they sold us leaded gasoline or "healthy tobacco" for decades.
Forgeties79: You would be surprised how passionately people defend Tesla on HN sometimes, especially when safety records come up.
rvz: The Tesla fans fell for it again.

The Fools Self Driving (FSD) contraption has once again been revealed as a scam, and it continues to be pushed onto their fans as a "self-driving" capability.

If they (Tesla) can hide fatal accidents, what else is Tesla not telling us?
ymolodtsov: We're talking about a brand whose every car has at least 350 HP, and most of them have more.

It's not an apples-to-oranges comparison.
jasoncartwright: Teslas turning off Autopilot seconds before a crash, apparently to avoid being recorded as active during an incident, is wild: https://futurism.com/tesla-nhtsa-autopilot-report
jeffbee: How do we know it can't be explained by self-selecting driver population? That sounds like the most likely explanation, and it's the only explanation advanced by the article you provided.
post-it: I guess there's something to be said for "hey, if you're considering buying a Tesla, you may be the kind of person who's likely to kill themselves in a car crash. Consider buying a safer car or taking the bus!"
post-it: For a while they were the safest car in crash tests, weren't they? Was there an inflection point where they started dropping like a rock? Or is this a case of measuring different things (crash tests vs. fatal accident rates)?

I know you probably don't know off the top of your head; I'm hoping someone can chime in.
philipallstar: > Tesla vehicles have a fatal crash rate of 5.6 per billion miles driven, according to the study; Kia is second with a rate of 5.5

Basically the same as Kia. Why are Kias so bad?
xutopia: Two reasons I can see.

Kia has way smaller and cheaper cars with fewer safety features on the market. Tesla, at some point, made front-page news saying they were the safest car ever produced.

Tesla is giving people driving their cars a false sense of security.
RoxiHaidi: One day an AI will obviously be infinitely better at driving than a human, but that day is not yet here.
bluefirebrand: Personally I don't know if I care. Unless I can have some guarantee that the AI will prioritize my life and safety over literally any other concern, I'm not sure I would trust it.

I don't ever want to be inside an AI-driven vehicle that might decide to sacrifice me to minimize other damage.
maxcan: That study was pretty thoroughly debunked. Also, I believe it was put out by a lobbying group representing auto dealerships, who see the Tesla DTC model as a mortal threat. There is a lot of legitimate criticism to be directed towards Tesla, but the ISeeCars study "ain't it".
dangus: Find a link that shows it's debunked, then? All they did was analyze federal crash data.

I don't know what's so hard to believe about the study. Tesla's numbers are pretty similar to other low-performing brands.
zulgin: Look, I don't like Tesla as much as the next person; I think it is wildly over-hyped and over-valued. But this article is just slop.

The headline says "How Tesla hid accidents to test its Autopilot," but the actual article has no explanation as to (1) how Tesla hid anything or, for that matter, (2) whom Tesla hid this information from.

It mashes together a Tesla data leak from 2022 and an unconnected lawsuit from 2026 without ever explaining how those two are connected.

Tesla has a pattern of making deceptive promises and deceptive disclosures, but this article doesn't make that case at all.
tiberriver256: Thanks
iugtmkbdfil834: I think this is part of the reason I am wary of trying it (including some of the competitors' variants). They all want you to pay attention, because you may be forced to make a decision out of the blue. I might as well be in control all the time and not try to course-correct at the literal last second.
IgorPartola: A self driving car should have no steering wheel. If it has a steering wheel it is a vote of no confidence from the manufacturer.
mzl: Dan Luu had some interesting analysis about car safety, comparing how different auto-makers fared on newly introduced crash tests: https://danluu.com/car-safety/

The main take-away for me from that page is that very few manufacturers seem to design for actual safety (only Volvo had good results), and Tesla was angry that a new test had been introduced, which feels indicative of a bad safety culture.
friendzis: > I'm convinced that Tesla makes unsafe cars and covers it up wherever they can.

Tesla makes unsubstantiated, exaggerated claims about the capabilities of their system and directly encourages unsafe behavior. How many other manufacturers encourage test subjects to drive full speed ahead into a concrete divider "to see what happens"?
infecto: They don't; these are the anti-Tesla folks. No level of reasoning is possible in discussions like this.
dangus: You're so close to understanding!

Tesla stans tell us that they're the most luxurious, best-built cars on the road; in reality they're as poorly built as an economy car brand with a reputation for low quality.
infecto: You're missing the obvious explanation here. Driver profile. You could have the safest car around, but if it's being driven by unsafe drivers, it will lead to more accidents and fatalities.
pmarreck: > to minimize other damage

You mean deaths to multiple other people, do you not? Let's just call a spade a spade here and point out the genuine ethical dilemma.

What's the ratio between "bodies of your own kids" and "other human bodies you have no other connection with" that a "proper" AI controlling a car YOU purchased should be willing to trade, in terms of injury or death?

I think most people would argue that it's greater than 1* (unless you are a pure rationalist, in which case, I tip my hat to you), but what SHOULD it be?

*Meaning, in the case of a ratio of 2, for example, you would require 2 strangers' deaths to justify losing one of your own kids.
bluefirebrand: > You mean deaths to multiple other people, do you not

I mean deaths the AI predicts for other people, yes.

And I'm not saying I would never choose to kill myself over killing a school bus full of children, but I'll be damned if a computer will make that choice for me.
mnvsbl: Full report here (video): https://www.rts.ch/play/tv/temps-present/video/tesla-la-face...
maxerickson: I find it interesting that you don't give other drivers any consideration in your analysis.
philipallstar: > You're so close to understanding!

Sorry, I don't understand this. I'm just asking a question. Do you reply to every question with that?
senordevnyc: Yeah, you also have to consider that your kids can be on either side of the equation too.
dham: Here we go again. Autopilot != FSD. Autopilot is not "autonomous" driving. It's lane keeping with adaptive cruise control, the same system that Honda, Toyota, etc. have. Yes, the naming is wrong and the marketing is bad, but I don't see it as much worse than Toyota Safety Sense. If you use it to be "safe", you're going to swerve off the highway into a ditch. I used Super Cruise from GM in my friend's SUV; as soon as the lane markers went away on a bridge, I almost hit the railing.

I'll get downvoted, but I'm just giving you the facts. I'm glad the Autopilot name has been retired. Such a bad name, but maybe a good name, because autopilot in planes can't see and avoid obstacles either.
dv_dt: The news isn't necessarily about the effectiveness of the particular tech stack, but about the integrity, or lack thereof, of the manufacturer in reporting incidents. If that is in question, any assessment of the effectiveness of Tesla's tech stacks for driving, whether FSD, autonomy, or taxis, is in doubt.
grog454: Throttle and yoke aren't a vote of no confidence from aircraft manufacturers. Some modes of operation are suitable for autopilot and some are not.
pmarreck: Interestingly, I think that similar types of arguments are made against "agentic coding".

If you don't pay constant attention, you will never notice when it slips in a bug or security issue.
ownagefool: Sure, but you can do that in a diff after the event, rather than live.
ori_b: Can you explain why that makes it ok to cover up accidents and lie about the recordings of the event being corrupted?
Glemllksdf: I don't get it? If "Autopilot" was misleading, "Full Self-Driving" is too?
buellerbueller: Here we go again; Musk fanboy to the rescue!
HFguy: After you wrote this, I went and read the article. I also didn't see much there either, and I wonder why you are getting downvoted. And TBC, I'm also not a Tesla fan (the truck is dumb).
estearum: Until recently, Kias were sub-entry-level tiny little shitboxes.

This would affect both driver selection and performance during impact.

Slap a ridiculously powerful drivetrain on it and a premium price tag and you have a Tesla.
adev_: So... for a bit of context on the video and the article:

- The documentary is from the RTS. The RTS is the main publicly owned media outlet of French-speaking Switzerland. They are not the typical European publicly owned media: they are generally pretty well funded (contrary to most), they tend to produce good, high-quality content, and they tend to be independent and rather neutral (leaning slightly to the left politically speaking).

- The video is in French because, in Switzerland, the public media are divided into three groups associated with the regional languages: RTS for French, SRF for German and RSI for Italian. That's why you get a German translation.

- They are generally pretty cooperative and open-minded. If one of you wants to submit English subtitles, just contact them; they might accept it (I do not promise anything).
baq: it is finitely better today and will be better still. this doesn't mean it's better at everything a human driver can do, it's just better on average. the jagged frontier is real and a very important safety consideration; nevertheless, the averages matter, too.
trymas: IMHO you're shifting goalposts (and I am not downvoting).

Tesla (or probably mostly Elon) was not selling "adaptive cruise control". It's selling "Autopilot" for $8k (now with a subscription, AFAIK), with a pinky promise that "soon" or "next year" or "in two weeks" (jk) you essentially will set a destination, go to sleep, and wake up at the destination[1].

It's the same as saying "LLM != AI" and arguing that "ChatGPT is not AI, it's a glorified statistics model that is good at creating human-sounding text". Yeah, you and I understand this, but the average guy most likely does not and will get burned by it.

[1] It's a slight exaggeration, and I won't spend time digging for quotes, but my main point is that's what Tesla is selling to the average guy, not to nerds who can distinguish what's possible, what's working, and what level of driving assist there is.
d1sxeyes: To be fair, that report says:

> the self-driving feature had "aborted vehicle control less than one second prior to the first impact"

It seems right to me that the self-driving feature aborts vehicle control as soon as it is in a situation it can't resolve. If there's evidence that Tesla is actively using this to "prove" that FSD is not behind a crash, I'm happy to change my mind. For me, probably 5s prior is a reasonable limit.
superxpro12: IDK, this has the same unethical energy as police turning off body cameras.

In the BEST CASE, this is a confluence of coincidences: engineering knows about this and leaves it "low prio, won't fix" because it's advantageous for metrics.

In the worst case, this is intentional.

In any case, the "right thing to do" is NOT to turn off the cameras just before a collision, and yet it happens.

This is also Safety-Critical Engineering 101. Like... this would be one of the first scenarios covered in the safety analysis. Someone approved this behavior, either intentionally or through an intentional omission.
AlotOfReading: I don't believe any AV software out there attempts to solve the trolley problem. It's just not relevant and moreover, actually illegal to have that code in some situations.You can't get into a trolley situation without driving unsafely for the conditions first, so companies focus on preventing that earlier issue.
pmarreck: Are these still accidents where the driver was not paying attention, though?
x187463: This is reasonable, and you have to imagine many collisions involve the driver taking control at the last second, causing the software to deactivate. That being said, this becomes a matter of defining a self-driving collision as one in which self-driving contributed materially to the event, rather than requiring that self-driving be active at the exact moment of impact.
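As a rough sketch of what that attribution rule might look like (field names here are hypothetical; NHTSA's Standing General Order reportedly uses a 30-second lookback for exactly this reason):

    # Sketch: count a crash as system-involved if the driving system was
    # engaged at any point within a lookback window before impact,
    # not only at the instant of impact. Field names are hypothetical.
    LOOKBACK_S = 30.0  # NHTSA-style reporting window

    def system_involved(disengage_time_s: float | None,
                        impact_time_s: float,
                        lookback_s: float = LOOKBACK_S) -> bool:
        # disengage_time_s is None if the system was still engaged at impact
        if disengage_time_s is None:
            return True
        return (impact_time_s - disengage_time_s) <= lookback_s

    # A system that hands back control 1 second before impact still counts:
    assert system_involved(disengage_time_s=99.0, impact_time_s=100.0)

Under a rule like this, disengaging moments before a crash changes nothing about how the crash is attributed.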
JumpCrisscross: > deaths the AI predicts for other people

Isn't this entirely hypothetical? In reality, are any systems doing this calculus? Or are they mimicking humans, avoiding obstacles and reducing energies in a series of rapid-fire calls?
philipallstar: Yes, all companies sold leaded gasoline.
post-it: Usually when people provide examples, they're intended to serve as a representative sample of a larger trend, and not an exhaustive list. Hope that helps.
cj: Their point still stands.

Not all companies do illegal things.

IMO it's also a distraction to blame it on "capitalism" or some "larger trend" rather than just pointing directly at the company and the people responsible.

The "the system is broken" line hasn't worked for years now. Maybe if we stop blaming the system and start blaming the people?
limbero: Sorry, but you seem to be implying that European publicly owned media outlets are not normally to be trusted. Why?

I started out writing a list of European countries with high-quality public broadcasters, but the comment started looking silly since the list quickly grew very long.
senordevnyc: “Infinitely” is a high bar, but Waymo is already demonstrably better than the majority of human drivers.
qsera: But only in very controlled environments...
bluefirebrand: Other drivers should take public transit if they don't want to / are afraid to operate their own vehicles.

As for me, I actually like driving and I'm good at it. I'm not afraid of operating my own vehicle like so many people seem to be.
sobellian: Would it be a vote of no confidence in Full Self Flying?
lotsofpulp: Liability insurance pricing tells the whole story, without clickbait articles or emotion.

If there were a significant problem, my liability-only insurance premiums would be higher for a Tesla compared to a non-Tesla. But they are not.
JumpCrisscross: > my liability only insurance premiums would be higher for the Tesla compared to a non Tesla. But they are not

You're correct inasmuch as we have no evidence there is "a significant problem." But if Tesla is hiding evidence, as this article suggests, that might just be because lawsuits are still gaining steam.
ModernMech: No one claimed all companies do illegal things.
philipallstar: All of this is a crazy overgeneralisation of the hundreds of millions of companies in the world:

> Look, there is no way corporations would lie for their own interest. Especially when they spent tens of billions to develop something.

> It's not like they sold us leaded gasoline or "healthy tobacco" for decades.
ModernMech: Saying "corporations have lied in the past for their own self-interest" and then pointing to two very well-known examples does not imply or overgeneralize that all corporations do that.
CrazyStat: We can take the AI out of the question entirely and ask how many other humans you personally, as a driver, would be willing to mow down to avoid your own death—driving off a bridge, say.

I would suggest that all but the most narcissistic would have some limit to how many pedestrians they would be willing to run over to save their own lives. The demand that the AI have no such limit—"that the AI will prioritize my life and safety over literally any other concern"—is grotesque.
x187463: Treat it like a driver-assistance system. I treat FSD the same as I treat Adaptive Cruise Control and Lane Keep Assist in my CR-V. I keep my hands on the steering wheel and follow along with the decision-making.
occamofsandwich: Sure, but then I don't want you to have a vehicle at all to minimize my own risk.
bluefirebrand: Feel free to minimize your own risk by staying home and never leaving
occamofsandwich: Feel free to minimize both our risks by not polluting public space with your personal crap.
maxerickson: No, I mean that they are not prioritizing you, and many make poor choices.

Replacing other bad drivers with good autonomous systems is likely a great trade-off for you, even if you are in an autonomous vehicle that is eager to sacrifice you if there is an unavoidable incident.
onemoresoop: This is a policy that Tesla put in place, period. Handing control to the driver suddenly, at a weird moment, can make the whole situation even more dangerous, as the driver is not primed to handle it on the spot; it's all too unexpected.
boringg: I guess I'll step in for the counter.

How is a car supposed to pre-empt when it is in a situation that is too challenging for it to navigate? Isn't it the driver who should see a situation that looks dicey for FSD and take control?
XorNot: Which is just worse.

When I'm driving I know what I'm doing, what I'm planning to do, and can scan the road and controls with that context.

Making me have to try and guess what the car is going to do at any given time adds complexity to the process: am I changing lanes now? Oh, I guess I am, because the autonomy thinks we should, etc.
lotsofpulp: Liability insurance premiums would reflect a higher risk of Tesla vehicles causing collisions, regardless of whether Tesla is at fault or the driver is. The insurance company still has to pay, which means the Tesla owners have to pay.
JumpCrisscross: > Liability insurance premiums would reflect higher risk of Tesla vehicles causing collisions, regardless if Tesla is at fault or if the driver is at fault

Why? They only pay out if you're at fault. And if there aren't final judgements in a deep pipeline of cases, premiums wouldn't have a reason to adjust yet.
lotsofpulp: I am assuming Tesla has been around long enough, and its cars have been driven enough miles, to give insurance companies a sufficiently representative data set. I cannot imagine the pipeline of cases is so deep that people are still waiting on payments for collisions from years ago.

I am also assuming that at-fault determinations for Teslas are more accurate than for other brands, given the 6 or 7 cameras that are recording and should make determining fault easier.

Basically, if a Tesla were more dangerous to drive than a Toyota, because it was a Tesla, then insurance companies would be paying out more for insuring Teslas, and hence would be charging higher liability-only insurance premiums.
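To make the incentive concrete, here is a toy premium model (every number is invented purely for illustration, not actuarial data):

    # Toy liability-premium model: expected claim cost drives the premium.
    # All numbers are made up purely to show the shape of the argument.
    def annual_premium(claims_per_car_year: float,
                       avg_claim_cost: float,
                       expense_and_margin: float = 0.30) -> float:
        expected_loss = claims_per_car_year * avg_claim_cost
        return expected_loss / (1.0 - expense_and_margin)

    # If one brand really caused twice the at-fault claims of another,
    # its liability premium would have to be roughly twice as high:
    print(round(annual_premium(0.04, 12_000)))  # ~686
    print(round(annual_premium(0.08, 12_000)))  # ~1371

So if Tesla premiums aren't priced higher, the risk difference isn't showing up in the claims data insurers see.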
hermannj314: What would that guarantee look like, and would it be legal to sell a product that made that guarantee?

"Prioritizing my life over every other concern" looks like plowing over pedestrians to get me to the hospital. I don't think you can legally sell a product that promises that.
paganel: The national broadcaster here in Romania has been politically leaning toward whoever was paying the bills, hence toward whoever holds political control over the country.

I can say the same about the foreign bureaus of state-owned media entities like Deutsche Welle and Radio France Internationale, both of which actively rooted for the Romanian presidential candidate seen as closer to German and French interests (I'm talking about the last couple of rounds of Romanian presidential elections).
tclancy: No, it would be an acknowledgement of the lack of perfection in human systems so far.
bena: So, the car puts itself in a situation it can't resolve, then just abdicates responsibility at the last moment.

That's still not a good look.

And it does mean that FSD isn't to be trusted as much as it is, because if the car is putting itself in unresolvable situations, that's still a problem with FSD, even if it isn't in direct control at the moment of impact.
idop: It's an insane reversal of roles. In a standard Level 2 ADAS, the system detects a pending collision the driver has not responded to and applies the brakes. Tesla FSD does the reverse: it detects a pending collision that it has not responded to, and shuts itself off instead of braking. It's pure insanity.

Also, Tesla routinely claims that "FSD was not active at the time of the crash" in such cases, and they own and control the data, so it's the driver's word against theirs. They most recently used this claim for the person who almost flew off an overpass in Houston because FSD deactivated itself 4 seconds before impact[1]. They used it unironically as an excuse for why FSD is not at fault, despite the fact that FSD created the situation in the first place.

[1] https://electrek.co/2026/03/18/tesla-cybertruck-fsd-crash-vi...
JumpCrisscross: The entire point of these articles about mounting lawsuits is that those assumptions may be wrong. The liabilities involved are higher. And given Tesla is potentially mucking with the data, the exculpatory value of having all those cameras is diminished.

> if the Tesla was more dangerous to drive than a Toyota, because it was a Tesla, then insurance companies would be paying out more for insuring Teslas

You may be over-indexing on how much work liability insurers do. I have an umbrella policy. It absolutely doesn't take into account the fact that I ski and fly a plane, for example. At the end of the day, their liability is capped, and it's usually easier to weed out by claims history than by running models on small premiums.
grvdrm: Reminds me of a situation not long ago.

I'm in the left lane on the highway. Tesla ahead of me, but quite a ways away.

I realize as I'm driving that the Tesla is moving quite slowly for the left lane. And before you say it: yes, there are lots of people speeding in highway left lanes too.

So I passed on the right rather than tailgate. I look over and see a guy leaning back in his seat. No hands on the wheel. Could've been asleep. And driving 10-15 mph slower than you'd expect in that lane.

To your point about using FSD the way you do: makes total sense to me. Which implies you would also cruise at the right speed depending on the lane you are in, unlike my example.
x187463: One of my major complaints about FSD is the 'speed profiles'. You used to be able to set a target speed directly. Now you can only select a profile: you're either going the exact speed limit, 2-3 mph over, or essentially 'with the flow of traffic', which can lead to speeding 15+ mph over the limit.
watwut: They are not afraid to operate their own vehicles. They are afraid you will kill them.

You just said that you do not care how many people you kill - regardless of whether they are pedestrians, whether they are driving cars, or whether they are on the bus. That is what people react to.
Geee: This is about the old autopilot, not FSD, and there doesn't seem to be anything new in the article. This is based on the same leaked data which has been public since 2023. The title seems to be inaccurate, as there's nothing to indicate that they hid fatal accidents.
Timon3: The AI can also only ever predict that you might die. So how should these predictions be weighed? Say there's a group of five children - the car predicts a 90% chance of death for them, vs. 50% for you if the car avoids them. According to your comments, it seems like you'd want the car to choose to hit the children, right?

What is the lowest likelihood of your own death you'd find acceptable in this situation?
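Spelling out the naive expected-value arithmetic in that hypothetical (the probabilities are just the ones above, nothing more):

    # Naive expected-deaths comparison for the hypothetical above.
    expected_if_continue = 5 * 0.90  # hit the children: 4.5 expected deaths
    expected_if_swerve = 1 * 0.50    # avoid them:       0.5 expected deaths

    # A pure expected-value minimizer swerves; an occupant-first policy
    # does not. The disagreement is about the objective, not the math.
    print(expected_if_continue, expected_if_swerve)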
JumpCrisscross: > not sure I would trust it

This is a fair concern. I'm unconvinced it's even remotely a real market or political pressure.

On the market side, Waymo is constrained by some combination of production and auxiliaries. (Tesla, by technology.) On the political side, the salient debate is around jobs, in large part because Waymo has put to bed many of the practical safety questions from a best-in-class perspective.
bluefirebrand: Sure, but what happens when the tech gains market capture and inevitably enshittifies, the same way every other piece of tech has?

I'm not really thinking about when self-driving is state-of-the-art research. I'm talking about when it becomes table stakes.

Honestly, the real truth is I just do not trust tech companies to make decisions that are remotely in my best interest anymore.

I can't even trust tech companies to build software that respects a "do not send me marketing emails" checkbox, so why would I ever trust a car driven by software built by the same sort of asshole?
JumpCrisscross: > what happens when the tech gains market capture

Idk, we solve it then. Motor vehicles kill 40,000 Americans a year [1]. I'm willing to cautiously align with Google and maybe even Tesla if they can take a bite out of those numbers.

[1] https://www.cdc.gov/nchs/fastats/accidental-injury.htm
catlikesshrimp: Car crash deaths are better known than deaths caused by software bugs. Worse: a car crash can cause the driver's death; I wouldn't offload work on which my life depends onto an experimental tech.
throwanem: Real question, then, from someone who only bothers driving when he must, and even then in a 2016 model: why do you use it? What beneficial purpose do you find it to serve?

I'm asking because I feel I must be missing something, inasmuch as to have my hands on the wheel while not controlling the car is an experience with which I'm familiar from skids and crashes, and thinking about it as an aspect of normal operation makes the hair stand up on the back of my neck. (Especially with no obviously described "deadman switch" or vigilance control!)
x187463: Here's a simple example from last week. FSD was in control on my way to work, stopped at a red light early in the morning before the sun was up. The light turned green and FSD did not accelerate. I figured it was somehow confused and was starting to move toward hitting the accelerator myself when a car came flying through the red light from the driver's side. I hadn't noticed this car, but FSD saw it and recognized it wasn't slowing down. I could see there were headlights, but it wasn't clear how fast it was going.

It's just nice having a 'second set of eyes' in a sense. It's also very useful when driving in unfamiliar cities, where much of my attention would be spent on navigation and trying to recognize markings/signs/light positions that are atypical. FSD handles the minutiae of basic vehicle operation so I can focus on higher-level decisions. Generally, at inner-city speeds, safety and time-to-act are less of an issue, and it just becomes a matter of splitting attention between pedestrians, obstacles, navigation, etc. FSD is very helpful in these situations.
Rohansi: Autopilot is completely different software from FSD. If you think FSD is stupid, then Autopilot is worse, because it won't do anything other than stay in the same lane and adjust speed to the car in front of you.

For some reason you could turn this on even when you're not driving on the highway. It doesn't do anything for traffic lights, stop signs, obstacles, etc., because it's just cruise control. It's also included with every vehicle (unlike FSD).
estimator7292: How about the fact that Tesla is killing people and covering it up?

Would you go to a driver's funeral and tell their family that um, ackshully, it's sparkling autopilot?

What do you think you're adding to the conversation? You're trying to distract from the fact that real, actual people have been actually killed by this.
x187463: It's not a semantic issue: FSD is a completely different system, but many people mix up the terms when discussing these systems due to poor naming. Autopilot is just cruise control and lane keep. FSD handles navigation and full vehicle control. Articles discussing the dangers of Autopilot are making perfectly reasonable claims about a system which was poorly named/marketed, but they are not meaningfully relevant to conversations about FSD.
idop: Elon himself uses both terms interchangeably[1], and the two reportedly use the same stack, so why shouldn't we conflate the terms?

[1] https://electrek.co/2026/03/18/tesla-cybertruck-fsd-crash-vi...
philipallstar: But the article doesn't say that at all - quite the opposite:

> The study's authors make clear that the results do not indicate Tesla vehicles are inherently unsafe or have design flaws. In fact, Tesla vehicles are loaded with safety technology; the Insurance Institute for Highway Safety (IIHS) named the 2024 Model Y as a Top Safety Pick+ award winner, for example. Many of the other cars that ranked highly on the list have also been given high ratings for safety by the likes of IIHS and the National Highway Transportation Safety Administration, as well.
Forgeties79: The issue is they are potentially lying. It's why we are even having this discussion. The numbers could be fraudulent.
lotsofpulp: > The entire point of these articles about mounting lawsuits is those assumptions may be wrong.

And my entire point is that I trust the incentives of the insurer to accurately price risk and determine fault more than a publication that needs clicks.

> And given Tesla is potentially mucking with the data, the exculpatory value of having all those cameras is diminished.

Does the data from Tesla even come into play for an insurer? They need to pay the damaged parties regardless of whether or not Tesla and its software are at fault. For premium-pricing purposes, what Tesla does is irrelevant until after Tesla is found liable.

In the meantime, a collision with a Tesla is the same as with any other auto brand's car. I don't think Ford's/Toyota's/anyone else's software comes into play.
throwanem: Huh.

I appreciate your response. I'll need to think about it for a while, too. It had not occurred to me to consider the possibility that someone else's FSD might protect me from the general incompetence and unreliability of motor vehicle operators.
x187463: The difference is FSD is properly annotated as (Supervised) and does exactly that. Autopilot does not 'autopilot' the vehicle by any reasonable measure.
Glemllksdf: Supervised self-driving would be correct. I don't think I was aware of the (Supervised) before your comment, tbh.
XorNot: It was an entire media beat-up, because the media was too afraid to talk about anything real and the public wasn't interested.

There's plenty we could talk about: i.e. the failure scenarios of shallow reasoning systems, the serious limitations on the resolution and capability of the actual Tesla cameras used for navigation, the failure modes of LIDAR, etc.

Instead we got "what if the car calculates the trolley problem against you?"

And, observationally, proof that a staggering number of people don't know their road rules (since every variant of it consists of concocting some scenario where slamming on the brakes happens far too late, but you somehow know perfectly well there's not a preschool behind the nearest brick wall or something).

I remember running some basic numbers on this in an argument, and you basically wind up at: assuming an AI is fast enough to detect a situation, it's sufficiently fast that it would literally always be able to stop the car with the brakes, or no level of aggressive manoeuvring would avoid the collision.

Which is of course what the road rules are: you slam on the brakes. Every other option is worse, and gets even worse when an AI can brake quicker and harder if it's smart enough to even consider other options.
saalweachter: > Which is of course what the road rules are: you slam on the brakes.

Yeah, there are a shocking number of accidents which basically amount to "they tried to swerve and it went badly".

You can concoct a few scenarios where other drivers are violating the road rules so much as to basically be trying to murder you; the simplest example is "you are stopped at a light and a giant truck is barreling towards you too fast to stop".

If you are a normal driver, you probably learn about this when you wake up in the hospital, but an autonomous vehicle could be watching how fast vehicles are approaching from behind you. There's going to be a wide range of scenarios where it is clear the truck is not going to stop but there's still time to do something (for instance, a truck going 65 mph takes around 5 seconds to stop, so if it's halfway through its stopping distance, you've got around 2.5 seconds to maneuver out of the way).

That does leave you all sorts of room to come up with realistic trolley problems.
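As a quick sanity check on that figure, under constant deceleration the time remaining at half the stopping distance actually works out slightly better than simple halving suggests:

    v(t) = v_0 - at, \qquad t_0 = \frac{v_0}{a}, \qquad D = \frac{v_0^2}{2a}

    v_{D/2}^2 = v_0^2 - 2a \cdot \frac{D}{2} = \frac{v_0^2}{2}
    \quad\Rightarrow\quad
    t_{\text{remaining}} = \frac{v_{D/2}}{a} = \frac{t_0}{\sqrt{2}}
    \approx 0.71 \times 5\,\mathrm{s} \approx 3.5\,\mathrm{s}

So the "around 2.5 seconds" above is, if anything, conservative.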
plqbfbv: It's been well known for a while now, and it's not about avoiding being recorded as active; it's to keep a possibly damaged computer from continuing to drive the car in a likely compromised situation. What happens if the car crashes and flips, AP/FSD has no training on that, and the wheels keep spinning at full speed while first responders try to secure the car?

AEB should still be working to apply the brakes, AFAIK, but auto-steer and cruise control will be disabled even if the computer and electronics are still perfectly operational, to make the car safer for the passengers and first responders after the event.

EDIT: IIRC the threshold for disengagement is 1s.