> Local media reported that three people died in the incident that’s likely to spark scrutiny over the smart driving software deployed in many of today’s cars.
How come these things are "scrutinized" afterwards instead of before? Governments are severely dragging their feet here and should clearly outlaw and force certification of these "smart" systems before they're deployed, not afterwards.
Bananas how we're letting these companies use public roads with real humans as testing grounds for driver assistance software.
bayarearefugee 1 days ago [-]
Even ignoring the FSD aspects of these electric vehicles (which are problematic enough at their current level of technology) the fact that they are allowed to do things like hide away the mechanical release handles (required to get out if power is lost, which is very likely in a crash) in weird barely accessible spots inside the door well seems just absolutely fucking bonkers to me.
And this is common: some of Tesla's releases are actually harder to locate than the ones in this Xiaomi.
qzw 1 days ago [-]
Yes that just boggles my mind. Why is there a desire for such designs on a fundamental and safety-critical part of a vehicle? Aesthetic minimalism? But why?! A car door is not a phone screen. There’s plenty of real estate on a car door and it has no other function. It doesn’t need a hamburger menu. Just put a handle there, like we’ve been doing forever. Why does that need to be changed? A decision to hide the handles on a car makes me question and doubt every other design decision that went into the thing, because the designers have obviously lost their minds.
xnx 1 days ago [-]
> Aesthetic minimalism?
Also cost reduction. Saving pennies costs lives.
dkjaudyeqooe 1 days ago [-]
Or just ban non-mechanical interior releases? What's the justification for electronic releases?
My cheap car has electronic locks on all doors but the mechanical interior releases defeat them (but not on the rear doors).
A neat feature I noticed the other day is that operating the release once won't let the door actually open; it only partially unlatches it, and to open the door you have to hold the lever open and push the door. Great safety design.
toast0 1 days ago [-]
My fancy new car has push to open doors and monitors the surroundings so it won't open if it detects a car or bicycle moving in the direction of the door. Could be handy.
But it's also designed reasonably. The lever you push to open can be pulled twice to open mechanically. There's no secret lever somewhere nobody knows about. Two different sales people told us to push or pull twice to open.
gruez 1 days ago [-]
>Or just ban non-mechanical interior releases? Whats the justification for electronic releases?
For frameless windows. The door needs to retract the window before you open it, otherwise you might damage the window.
dkjaudyeqooe 1 days ago [-]
Yes, pretty frameless windows are surely a justification for such a deathtrap.
qzw 1 days ago [-]
You might have child safety locks engaged on the rear doors by default. If you don’t have small children, you might be able to find a switch or setting to turn that feature off.
vel0city 1 days ago [-]
We've had child safety locks for doors long before we had the door handles be electronic.
AlotOfReading 1 days ago [-]
There are two main systems in use globally:
1. The US system, where regulators define a set of tests and manufacturers self certify compliance
2. Homologation-based systems, like China and the EU, where those tests are performed by an accredited third party to receive type approval.
But testing is ultimately reactive, not proactive. It's hard to write appropriate tests for areas of new technology that don't impose unnecessary or silly constraints. These aren't the kinds of regulations that are easy to put out or roll back either. It takes upwards of a year in the US to go through the NPRM process, and several years for gradual rollout into force.
As someone involved in this space, I'm also extremely skeptical that there's any way to develop these kinds of systems without significant testing on public roads. That testing should be monitored and come with explicitly strict development guidelines, but alternative approaches have been tried many times and have not panned out when deployed in practice.
maxdo 1 days ago [-]
One incident. How many drunk human drivers are out there, watching their phones on the road?
These systems are able to solve this problem once and for all: ban humans from driving. Humans are extremely bad at driving.
Unless they produce more fatal incidents than humans do, it's OK to have such crashes.
giancarlostoro 1 days ago [-]
I agree. Where I live in Florida, sometimes I look at the road and there are easily a half dozen Teslas, and I rarely see them in accidents; the few owners I know personally who have been in accidents were in them due to other drivers. You can't make it perfect, but you can definitely get insanely safer. Airplanes are kind of the same, but we don't freak out over them.
qzw 1 days ago [-]
A lady in my neighborhood has crashed two different Teslas. Both times she rear-ended somebody, judging from the damage. Some people can crash anything.
giancarlostoro 1 days ago [-]
Was she like on her phone each time?
gruez 1 days ago [-]
>How come these things are "scrutinized" afterwards instead of before? Governments are severely dragging their feet here and should clearly outlaw and force certification of these "smart" systems before they're deployed, not afterwards.
What makes you think this wasn't scrutinized by the Chinese government before release?
diggan 1 days ago [-]
> Local media reported that three people died in the incident that’s likely to spark scrutiny over the smart driving software deployed in many of today’s cars.
This part makes me believe it wasn't scrutinized before, because otherwise the wording would have been different. Maybe something like "further scrutinized" or something else that indicates that it's actually been inspected and approved before.
gruez 1 days ago [-]
I think you're leaning a bit too hard on the wording here. If we're going to randomly speculate based on tenuous reasoning like this, allow me to present the fact that such a licensing regime does exist in China, so it's likely that Xiaomi did comply with it rather than openly flouting the law.
https://www.all-about-industries.com/china-is-massively-prom...
Do you know a word called "cronyism"?
Xi was in a meeting with a few Chinese tech leaders just a few weeks ago (https://www.reuters.com/world/china/chinas-xi-attends-sympos...), including Xiaomi's leader, probably trying to establish bi-directional support and connections.
Under the current desperate climate in China (unemployment creeping up, marriage and childbirth rates down...), and given that Xi likes flashy and fancy stuff, I don't think there is any incentive for the gov to put a limit on how those companies develop. I mean... it's not like the leftist undemocratic communists should have any incentive to do things appropriately for people of different orientations anyway.
In fact, right now in China, criticizing such a "high-tech" company may lead to serious trouble for the critic, given how easy it is for powerful companies to send critics to prison for reasons such as "disturbing public peace", "hacking" and/or "spreading rumors and lies".
Also, let's not forget Teslas also crash and burn. So it is really tricky to explain to the communists why they must do better.
garrettjoecox 1 days ago [-]
Are there any useful statistics yet regarding accidents/deaths per million miles driven in “self driving” vehicles?
It always comes off as click/rage bait to me when people report on these deaths when there are literally hundreds per day that don’t involve an autonomous vehicle.
No other company is even close (i.e. 5-10 years behind) to where Waymo is on self driving maturity.
lm28469 1 days ago [-]
> No other company is even close (i.e. 5-10 years behind) to where Waymo is on self driving maturity.
Not too hard when you stay inside like three or four cities with good weather, straight roads, &c.
jwagenet 1 days ago [-]
> good weather
I always see this argument but it seems like they did fine through the recent SF storms https://www.sfchronicle.com/sf/article/waymo-robotaxis-storm... Most of the US has non-freezing weather for most of the year, so aside from avoiding heavy rain, snow, and ice I’m not sure what more you want.
> straight roads
SF might have mostly straight roads, but it has complicated intersections, bad drivers, cyclists, double-parked cars, etc.
Waymo would do fine in any major city most of the year under most conditions.
nickthegreek 1 days ago [-]
> Waymo would do fine in any major city most of the year under most condition
There is no proof of this. I have yet to see these ever ferry passengers in snow, which in case you are unaware, is a common driving condition for many living in North America.
creer 24 hours ago [-]
Snow and ice are in favor of vehicles with lots of computing on board - and not the wet kind.
jwagenet 1 days ago [-]
I mean, I literally excluded snow from “most conditions” and snow is not common most of the year in major North American cities (and is often plowed quickly).
lucyjojo 19 hours ago [-]
You might want to prefix "city" with "American".
xnx 1 days ago [-]
Better to self drive in some places than no places.
Well, if that's the tip of the spear of FSD tech, we're fucked; no way will I take a robotaxi to work before I retire. An extremely limited environment well under control, almost always sunny, doing the same area for what, 10 or 15 years?
Can it drive in rain and snow on narrow unmarked roads, then join traffic jams (or not) on the highway at 120 km/h, then enter a city and navigate obscure construction works around it, plus crazy aggressive cyclists and scooters, and get me where I need to go, 100% reliably? Or let's say >99.995%, that's roughly frequent-human-driver level.
This is what I am willing to pay for, either as a shared taxi or our own car, nothing less. Anything less means me doing all the driving with full attention, and I have that already in dumb cars.
AlotOfReading 1 days ago [-]
Waymo can indeed drive through rain and snow on narrow unmarked streets, as well as traffic jams and navigate obscure construction works with cyclists.
Other than snow, it does all of those things in SF. They do snow testing in Tahoe and Michigan, not to mention the former testing in NYC.
xnx 1 days ago [-]
Also weather testing in Buffalo, New York. One of the snowiest cities in the US.
aredox 1 days ago [-]
It is not easy to compare as there are lots of confounding variables - self-driving is not activated at random or all the time, but typically on highways, which are less accident-prone.
They are also deactivated in difficult conditions, such as bad weather, which are also hard for human drivers. You can imagine a future where all cars are equipped with a self-driving system that always "passes the buck" to a human when conditions degrade; of course the system will have fewer accidents than humans! The statistics will even show human drivers being worse than before the advent of self-driving!
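To make that selection effect concrete, here's a toy sketch in Python with entirely invented rates: a system that is worse than humans in the only conditions it operates in can still post a better overall number, simply because it never sees the hard conditions (a Simpson's-paradox-style artifact).

  # Hypothetical crashes per million miles; every number here is invented.
  human_highway = 1.0    # humans on highways (easy conditions)
  human_city = 4.0       # humans in cities (hard conditions)
  system_highway = 1.5   # the assist system, highways only, WORSE than humans there

  # Humans drive everywhere (say a 50/50 mileage split); the system only engages on highways.
  human_overall = 0.5 * human_highway + 0.5 * human_city   # 2.5
  system_overall = system_highway                          # 1.5
  print(human_overall, system_overall)  # 2.5 vs 1.5: the system "wins" overall anyway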
But the most dangerous car model is the Hyundai Venue[1], which also brags about all its Hyundai SmartSense safety features[2]. I'm sure the next few cars down the list do the same. Maybe your ire should be directed at them as well?
[1] https://www.carpro.com/blog/list-of-the-most-dangerous-cars-...
[2] https://www.hyundaiusa.com/us/en/vehicles/venue
Those are two very different things.
Even the accident rate is below other cars if you adjust by miles driven.
https://news.ycombinator.com/item?id=42151851
Fatal accident rates too.
It kind of says something when it turns out that Volvo, with their old-timey ‘dumb’ safety features, seems to be outperforming all the sexier brands on safety.
Maybe focusing on the dumb stuff brings a lot more bang for the buck than the sparkly new ‘smart’ safety widgets?
ahartmetz 1 days ago [-]
Volvo is known for actually caring about real-world safety, not just what the tests happen to test. Or, I guess, what looks good on the feature checklist.
PeterStuer 1 days ago [-]
Volvo sort of prides themselves on safety, including "smart" widgets.
2003 Blind Spot Information System (BLIS): BLIS uses cameras or radars to detect other cars approaching your car.
2008 City Safety: Emergency braking system using a laser to detect vehicles ahead; it helps reduce the risk of rear-end collisions at low speeds
2010 Pedestrian detection with full auto brake: Using radar and camera, this system warns the driver if somebody steps out in front of the car, and brakes by itself if the driver fails to do so.
2016 Connected safety: use the cloud to share critical data between vehicles, alerting the driver about slippery road sections or vehicles that have activated their hazard lights.
2018 Oncoming mitigation by braking: If an oncoming vehicle veers into your lane and a collision is unavoidable, this feature can help reduce your vehicle's speed to mitigate the force of the collision
2023 Lidar: Based on high-performance sensors, Lidar technology is key for creating safe autonomous cars. It helps autonomous cars detect other cars, pedestrians and cyclists
2023 Driver Understanding System: 2-camera system can detect if the driver is distracted, sleepy or even intoxicated. If needed, the system will activate a protective shield and take appropriate countermeasures to preserve safety margins.
ronnier 1 days ago [-]
> The study is based on QuoteWizard by LendingTree insurance inquiries from Jan. 1, 2024, through Dec. 31, 2024. They analyzed the 30 brands with the most inquiries in this period.
QuoteWizard. Based on inquiries. I don't trust this.
honeybadger1 1 days ago [-]
The number of miles driven with Tesla FSD with no crashes is so significantly higher that it's laughable to even draw the comparison; this data is even publicly available via the API.
ronnier 1 days ago [-]
The study can't be trusted. Just look at the article: "The study is based on QuoteWizard by LendingTree insurance inquiries from Jan. 1, 2024, through Dec. 31, 2024".
pwagland 1 days ago [-]
An interesting tidbit is that FSD is almost guaranteed to turn off before any accident occurs. Last I looked, the data was not available to see how long after FSD deactivation the accident occurred.
Note: I have an older Tesla, and actually quite like it. I don't have FSD, but I do have enhanced autopilot (EAP) and like it as well. That said, it is very easy to believe that people ignore the road for longer than they should with FSD/EAP turned on.
The grandparent comment also didn't mention the split between FSD miles/non-FSD miles. It is possible that FSD is so good, that every Tesla driver becomes useless when they are required to drive for themselves, and that is what drives the higher accident rate.
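To sketch why the disengagement-to-impact timing matters: the measured "FSD crash rate" depends heavily on the attribution window you pick, i.e. how recently FSD must have been active for the crash to count against it. All numbers below are invented for illustration.

  # Seconds between FSD disengaging and impact, for four hypothetical crashes.
  crashes = [0.5, 2.0, 10.0, 30.0]
  fsd_miles = 1_000_000

  # Count a crash as "on FSD" if it disengaged at most `window` seconds before impact.
  for window in (0, 5, 30):
      attributed = sum(t <= window for t in crashes)
      print(f"window {window:>2}s: {attributed} crashes, "
            f"{attributed / fsd_miles * 1e6:.0f} per million FSD miles")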
gruez 1 days ago [-]
>An interesting tidbit is that FSD is almost guaranteed to turn off before any accident occurs. Last I looked, the data was not available to see how long after FSD deactivation the accident occurred.
How can you make that assertion when there's no data? Are you just assuming the worst in the absence of data?
toast0 1 days ago [-]
No, the statistics I've seen haven't really been useful.
It's typically comparing cars in whatever autonomous modes vs all cars operating within a country/state. But the autonomous modes don't operate in all conditions, so it's not a good comparison.
There's concern about making sure the control group is appropriate too, comparing against a representative subset of the population is important.
I think there's some reasonable data for automatic emergency braking, in that I think I've seen it compared as simply cars with AEB equipped vs cars without, by number/severity of injuries for all collisions, and there's enough data to show a difference.
nickthegreek 1 days ago [-]
The best you can do is try to compare accidents/deaths for similar events in the same areas during the same weather. But we don't collect proper stats on this stuff in order to make true apples-to-apples comparisons. They aren't driving in the same scenarios as most people yet.
viraptor 1 days ago [-]
There are a few, but I'm not going to link them; I'll warn about them instead. They're often in the "lies, damned lies" category.
For example comparing self driving to average accidents often misses: non self driving cars having worse equipment (lack of collision warning, adaptive cruise control, etc.), comparison to all roads (self driving is activated mostly on known, well mapped areas and open highways), unknown accounting for self driving status (Teslas try to give back control just before the crash), and many other issues.
Unless some actually independent third party runs the numbers with a lot of explanations about the methodology, I'm ignoring them.
jqpabc123 1 days ago [-]
"Full Self Driving" for the win --- but "please wait patiently".
TIL frameless car doors have an emergency/manual door release. I naively thought you could push hard enough to force the door open even if it broke the glass. Is this common knowledge?
dkjaudyeqooe 1 days ago [-]
Aren't we beyond the point where makers of these systems should be required to prominently state that these are level 2 driving systems? Playing word games ("Full Self Driving"!) is arguably killing people.
srmatto 1 days ago [-]
Car companies need to either sell feature-complete autopilot trims with LIDAR or sell trims not equipped with autopilot at all. They should not sell middle-ground budget trims with less-effective autopilot that compromises safety.
>According to Xiaomi’s initial report, the car’s advanced driver assistance function had been engaged less than 20 minutes before the crash. Alerts were issued because the driver apparently wasn’t holding on to the steering wheel. Seconds after another warning was sent about obstacles in the road and the driver then retook control of the wheel, the car crashed into concrete fencing on the side of the road.
>According to the company’s marketing materials, Xiaomi’s Navigate on Autopilot function can change lanes, speed a car up or down, make turns or brake with minimal human input. However, the company advises drivers stay alert to traffic conditions and reminds them that “smart-driving” isn’t the same as “self-driving.”
>It’s illegal in China for drivers to take their hands off the steering wheel, even if advanced driver assistance is engaged.
The driver, instead of braking, tried to avoid the obstacle and hit the concrete barrier at 97 km/h.
https://carnewschina.com/2025/04/01/first-fatal-accident-inv...
I never went to driving school but isn't it obvious to both brake and steer away from the obstacle?
Not that long ago I had some idiot driving the wrong direction, in my lane, and speeding, and was about to have a head on at about 200 km/h. I credit my survival to pulling hard to the right while slamming on the brakes, with the ABS allowing me to steer freely. It put as much space between me and the idiot as possible.
vel0city 1 days ago [-]
You only have a certain traction budget to spend on acceleration. You can spend that budget on trying to stop, or you can spend it on turning. Spending it on both can end up with you losing grip and having less traction overall to achieve your goal. A modern car's computer will assist with trying to keep traction on the road, so it can be good to try to do both if you're not 100% sure what to do.
But also, turning often means you're going to enter another lane of traffic with potentially other cars/people/obstacles in there. So blindly swerving can sometimes end up with a much worse collision.
I'm not going to say you should always do X or you should always do Y or always do X + Y. There's a lot of situational conditions on that. But only kind of turning and only kind of stopping is probably not going to work.
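The "traction budget" above is the classic friction-circle model. A minimal sketch, assuming a flat road and a single lumped friction coefficient (the 0.9 is a typical dry-asphalt assumption, not a measured figure):

  import math

  MU, G = 0.9, 9.81   # assumed tire-road friction coefficient; gravity in m/s^2
  budget = MU * G     # max total acceleration the tires can deliver, ~8.8 m/s^2

  def remaining_braking(lateral_accel):
      # Longitudinal (braking) acceleration still available while turning at
      # `lateral_accel`, per the friction circle: sqrt(a_lat^2 + a_long^2) <= mu*g.
      if lateral_accel >= budget:
          return 0.0  # all grip spent on turning; braking on top breaks traction
      return math.sqrt(budget**2 - lateral_accel**2)

  for a_lat in (0.0, 4.0, 8.0):
      print(f"turning at {a_lat} m/s^2 leaves {remaining_braking(a_lat):.1f} m/s^2 for braking")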
potato3732842 1 days ago [-]
The biggest argument against swerving is that if someone else does something stupid and you hit them, it's usually their fault; if you avoid them and hit something else, it's your fault, unless it's on camera.
ninalanyon 1 days ago [-]
In Norway you are taught never to attempt avoiding action in a crisis. Just brake hard. Attempting avoiding action often puts more people at risk.
Perhaps this is more applicable to our typical driving conditions where we have rain, ice, snow, or all three, on roads for substantial fractions of the year and a sudden change of direction will result in loss of control.
Most of us do not practice the kind of precision high speed reactions necessary to control a car in such situations.
llm_nerd 1 days ago [-]
There was one second between the driver taking over and the collision, so it was likely a panic reaction to an imminent crash.
Which is fundamentally the problem with self-driving technologies. If it isn't 100%, it might just increase the danger: it lures the driver into distraction, because after enough kilometres on straight roads, who is going to keep paying attention when a car drives itself... and then, boom, exception: you have 1 second to focus and ingest the situation perfectly or you die.
CBLT 1 days ago [-]
It's been proven that people are extraordinarily poor drivers for the first few seconds they take over driving from a computer.
piva00 1 days ago [-]
I would say any activity that demands focus has the same pattern; anyone who has driven a car, ridden a bike, etc. can tell that it takes a while to get back into focused mode if you let your attention drift even for a short while.
It's much more pronounced if you've ever raced a car on a track, ridden a fast bike on tricky paths, or even driven go-karts: if your mind wanders for a split second, it takes a few seconds of active focusing to get back to the baseline where you enter "flow" again with the activity.
Expecting drivers who let a machine control their machine, getting out of the control feedback loop, to be able to regain focused control over split second decisions is just absurd to me.
nielsbot 1 days ago [-]
not to defend shitty self-driving implementations, BUT if on average they crash less than humans, even if they’re not 100%, society might accept that.
foobarian 1 days ago [-]
The problem is that the accident distribution is really skewed, with a small fraction of drivers contributing most accidents. That means the majority drive a lot better than average, and would likely not put up with a regression in accident rates from a system that might be "better than average" but still not as good as them. I know I wouldn't; I pay attention, never had an accident in 30 years, tried a few FSD implementations including Tesla's and they scared the bejesus out of me.
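A toy simulation of that skew, with invented rates, showing how a small risky minority drags the average up so that most drivers genuinely are better than average:

  import random

  random.seed(0)
  # Hypothetical fleet: 90% of drivers crash at a low rate, 10% at a high rate
  # (crashes per million miles; both numbers are made up).
  rates = [0.5 if random.random() < 0.9 else 8.0 for _ in range(100_000)]

  mean_rate = sum(rates) / len(rates)
  share_better = sum(r < mean_rate for r in rates) / len(rates)
  print(f"fleet average: {mean_rate:.2f}")                  # ~1.25, dragged up by the risky 10%
  print(f"drivers better than average: {share_better:.0%}") # ~90%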
gruez 1 days ago [-]
>The problem is the accident distribution is really skewed, with a small fraction of drivers contributing most accidents.
Even if you take this at face value:
1. such "dangerous" drivers still screw over "safe", by crashing into them. Unless you're willing to take them off the road entirely, you can't invoke "the majority drives better than average" and call it a day. At least in the US, doing this is a non-starter.
2. Driverless systems aren't slightly safer than the average driver, they're significantly safer. For instance, Waymo claims "81% fewer injury-causing crashes". This effect alone might swamp the "majority drive a lot better than average" phenomenon you describe. Most drivers might be safer than average, but are they 5x safer than average?
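For what it's worth, the 5x figure falls straight out of taking that 81% claim at face value:

  claimed rate = (1 - 0.81) x average = 0.19 x average
  a human driver only matches that if their own rate is <= 0.19 x average,
  i.e. if they are 1 / 0.19 ≈ 5.3x safer than the average driver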
llm_nerd 1 days ago [-]
> Driverless systems aren't slightly safer than the average driver, they're significantly safer.
Waymo operates an extremely conservative system that uses human oversight, roof-mounted LIDAR, a 100% 3D-mapped environment in a very limited deployment, etc. This isn't robust evidence for Tesla or other self-driving setups, which often use just a couple of cameras and a 2D map of roadways.
Tesla has boasted about the value of their system, using the classic per mile argument to justify its advantages. Only the overwhelming bulk of Tesla FSD or autopilot miles are on highways, which statistically is much safer than any other form of auto transport. This number is then compared against the drunk on the unlit country road type grab bag of every other style of driving and declared the victor. And of course even for highway accidents, aside from extreme weather events accidents are often people who purposefully wouldn't be using any assists -- stunt driving and the like.
It all seems very dubious right now. I believe Waymo's stats, I don't remotely believe Tesla's. We'll see what happens as people start using it more in normal driving situations.
porridgeraisin 1 days ago [-]
The nature of each failure matters as much as the frequency of failure. The way in which accidents humans get into occur is vastly different from the way in which self driving car accidents occur, even if the end result is pixel for pixel same. Blame, compensation, "peace of mind" of the involved are all affected by this, so it matters a lot.
The solution to this is self driving cars reaching critical mass through some other means[1]. Because then it will become the new normal before anyone realises it.
Personally, I think taxis are going to be the thing that takes them to critical mass, regardless of all safety considerations. The driving force will be the fact that in some countries, human-driven taxis will soon be economically infeasible at the prices people can afford. Then, if you agree that Uber et al. are far too entrenched in society for people to just forget about them, you can imagine Waymo, Robotaxi, etc. picking up.
[1] the "true" solution is to train it to get into the kind of accidents humans get in to, and try to "shape its uncertainty", but this is hard and of course infeasible with real world experiments. Recently, simulations have become extraordinarily better (due to generative models) for robotic/self driving tasks, so there is a slim chance this might change.
ijidak 1 days ago [-]
I don't know if humans are rational in that way.
Isn't nuclear energy one of the safest forms of energy overall?
But I still can't shake the fear of it.
And that fear across most of the population has hindered nuclear energy despite all its benefits.
It is going to be interesting to watch how regulation unfolds.
aredox 1 days ago [-]
There are two dimensions to "safe":
-how often does it fail,
-what are the consequences of a failure.
When you plot those two and set what the "acceptable" area is, it is skewed. And that is perfectly normal.
https://www.wolterskluwer.com/en/solutions/enablon/bowtie/ex...
There was no taking over; the car was always in the driver's control. The driver was using cruise control, not anything self-driving.
llm_nerd 7 hours ago [-]
"According to the company’s marketing materials, Xiaomi’s Navigate on Autopilot function can change lanes, speed a car up or down, make turns or brake with minimal human input."
The driver had been flagged multiple times for taking their hands off the steering wheel. The entire story is about the autopilot driving the car, the driver taking over one second before the accident. This is literally in the timeline.
Are you replying to the wrong story? How could you be so blatantly wrong?
slaw 5 hours ago [-]
You are required to have hands on the steering wheel all the time in China. The driver should be in control all the time.
potato3732842 1 days ago [-]
Not that physical reality has ever gotten in the way of low-effort internet comments, but the driver isn't wrong: you can avoid an obstacle in much less forward distance by turning than by braking.
There were two seconds between the alert and the crash. I'm not sure why the driver didn't at least get the car out of the barrier lane, perhaps there was other traffic (though the brake pedal percentages don't seem to indicate they committed to braking, odd, but I wasn't there so IDK). Obviously paying attention beforehand would have been the better move from the get go.
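A back-of-the-envelope check on the turning-vs-braking point, at the 97 km/h from this crash. The assumptions are mine: full grip available for a single maneuver, mu = 0.9, and a 3 m lateral offset sufficing to clear the obstacle.

  import math

  MU, G = 0.9, 9.81        # assumed friction coefficient and gravity (m/s^2)
  v = 97 / 3.6             # the reported crash speed, converted to m/s
  lane = 3.0               # assumed lateral offset needed to clear the obstacle, m

  braking_distance = v**2 / (2 * MU * G)        # constant maximum braking
  swerve_time = math.sqrt(2 * lane / (MU * G))  # constant maximum lateral accel
  swerve_distance = v * swerve_time             # forward travel during the swerve
  print(f"stop: {braking_distance:.0f} m, swerve one lane: {swerve_distance:.0f} m")
  # roughly 41 m to stop vs 22 m of forward travel to move over 3 m

Braking distance grows with v squared while the forward distance consumed by a swerve grows only linearly with v, so the gap widens at speed; the catch, as others note above, is that the space you swerve into has to actually be clear.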
lenerdenator 1 days ago [-]
Now replace "Xiaomi" with "Apple" in that headline and you'll see why they dropped the car project.
PeterStuer 1 days ago [-]
Apple never clearly decided what it was building. A full car (like Tesla)? A self-driving system (like Waymo)? A tech stack to license (like Mobileye)? This lack of focus created internal turmoil and killed momentum. The car team reportedly had over half a dozen leadership changes.
Apple lacked a tangible car ecosystem: no charging network, no manufacturing experience, and no clear place in the market for another premium EV.
The project was burning $1B+/year without ROI. Ultimately, it chose to cut losses. Apple failed because it was indecisive, risk-averse, and out of its industrial depth—while Tesla, Waymo, and Xiaomi had clarity, speed, and alignment between ambition and execution.
toast0 1 days ago [-]
Not knowing what they wanted to build is a big problem, much bigger than the others.
> Apple lacked a tangible car ecosystem: no charging network, no manufacturing experience, and no clear place in the market for another premium EV.
It wouldn't make sense to develop a charging network until they at least figure out what their product is. Most car makers take about three years between showing a final-ish prototype and retail sales. That's enough time to build a charging network, if you even need to. Tesla needed to build a charging network, but now that's opening up. VW needed to build a charging network as a condition of their release, and that's open. I'm not an EV driver and I don't charge my PHEV unless it's free, but I see a lot of chargers around, and I don't know if there's a need for another major charging network. If Apple only wanted to be part of a car, not the whole car, there would be no reason for them to be involved in charging.
Apple does most (all?) of its manufacturing through contractors. Foxconn is building cars [1], and there are plenty of dedicated contract auto manufacturers. Again, not a big defect until we know what the product is.
Market positioning also needs to wait for the product. Maybe there's an opening now that Tesla is losing sales.
[1] https://www.carscoops.com/2025/03/foxconn-gearing-up-to-buil...
Also, manufacturing a car is vastly different from manufacturing small electronic devices (not that that competence lives entirely within Apple, but they're experienced in designing them for manufacture). If Apple wanted to do car manufacturing, that's a completely different area of expertise they'd need to spend a decade or more building up to.
Look at Tesla's journey, it took them years to get things like panel gaps even close to right on their cars and they're having massive problems with the Cybertruck's manufacturing (car washes can completely destroy their electronics, whole pieces of 'trim' falling off because they used the wrong glue, etc).
spwa4 1 days ago [-]
Well, now we really know they take inspiration from Tesla. Anyone remember the Tesla beheading, with the car doing a victory lap afterwards?
zozbot234 1 days ago [-]
You can complain about Xiaomi all you want, but at least they're not BYD - that even call their driver's assistance system "God's Eyes", just so that your insurance can get away with classing every crash it gets into as an "Act of God".
gruez 1 days ago [-]
>that even call their driver's assistance system "God's Eyes", just so that your insurance can get away with classing every crash it gets into as an "Act of God".
Not sure if serious.