
Uber ~ Automated Driving Death


Teken

Recommended Posts

10 minutes ago, Goose66 said:

Well, I have been among the most adamant supporters of self-driving cars, but this video is pretty damning. Setting aside for the moment that the safety driver was looking down and reading something instead of focusing on the road ahead, the car should have been able to detect this woman in the road. There were no obstructions: she didn't walk out from behind a car, and it wasn't a one-lane road lined with hedgerows. She didn't purposefully run into the path of the oncoming vehicle. The sensors should have seen her and stopped the car. Could a person have avoided this accident? Maybe not. But I am very disappointed in the performance of the self-driving system here and may have to rethink my stance on how safe these things really are.

The latest video that I saw this morning does indeed show a failure of the self-driving system. The victim seems to be invisible in the dark, and seeing in the dark is exactly what the self-driving system should have done better than a human.

For the victim and her family this is tragic, but it is a learning experience. Nowadays there are far fewer airplane accidents even though there are many more planes in the sky. That is because we learn from accidents, and I have no doubt that this video will help in the evolution of self-driving cars.

Link to comment
12 minutes ago, asbril said:

The latest video that I saw this morning does indeed show a failure of the self-driving system. The victim seems to be invisible in the dark, and seeing in the dark is exactly what the self-driving system should have done better than a human.

For the victim and her family this is tragic, but it is a learning experience. Nowadays there are far fewer airplane accidents even though there are many more planes in the sky. That is because we learn from accidents, and I have no doubt that this video will help in the evolution of self-driving cars.

I agree. If I had been driving, I'm sure I would have hit her. In the visible spectrum, using only the headlights, you have at most half a second of warning.

But it was my impression that these cars use a lot more than just visible light from the headlights. Driverless is supposed to be better than a human in a case like this, with access to IR, sonar, and perhaps other imaging techniques (lidar/radar/?). It would appear that driverless was no better than a human here.

Clearly this woman is 100% at fault here. What the heck was she thinking? The car would have been butt obvious to her, coming down an otherwise empty dark street with its headlights on.

Link to comment

I had to watch the video a few times because, quite honestly, it was very bad quality. Regardless, the main point is that not a single soul sitting behind the wheel of one of these vehicles will ever be an active participant in DRIVING. This technology has simply shifted what in my day was considered a privilege and something you had to work toward. Now, companies and people are simply saying it's OK for automated vehicles built by man, which by their very nature are flawed.

And in this case they are saying: *Our expectation is that a person who sits behind the wheel of these AI, unattended vehicles will be actively watching the road.*

I want you all to pause and consider how incredibly stupid that statement is . . .

The concept is: I am too lazy to drive, so I purchased a yuppie car so I can continue to be lazy and let the robot drive me to X. Why would I ever want to be an active participant in something which is a privilege?!? The person might as well be DRIVING if they need to be actively watching, monitoring, whatever.

As for sensors, it doesn't matter what kinds of sensors are equipped in these vehicles. It doesn't matter how smart or quick-learning these vehicles are. It doesn't matter how many conditions and environments they place these vehicles in.

Why???

Because the world is ever changing and evolving, and you simply cannot compensate for stupid.

This lady literally stepped out into the path of an oncoming vehicle for no apparent reason, because she lacked common sense of her surroundings and couldn't care less. As I noted before, where I live this is a daily affair, and the people around me could give two sh^ts whether I had the right of way. This technology will never be safe unless the infrastructure is built around a concept which allows it to operate safely.

Trying to retrofit something that has billions of possibilities is not going to happen . . .

 

 

Link to comment
1 hour ago, Goose66 said:

Well, I have been among the most adamant supporters of self-driving cars, but this video is pretty damning. Setting aside for the moment that the safety driver was looking down and reading something instead of focusing on the road ahead, the car should have been able to detect this woman in the road. There were no obstructions: she didn't walk out from behind a car, and it wasn't a one-lane road lined with hedgerows. She didn't purposefully run into the path of the oncoming vehicle. The sensors should have seen her and stopped the car. Could a person have avoided this accident? Maybe not. But I am very disappointed in the performance of the self-driving system here and may have to rethink my stance on how safe these things really are.

I'm still a supporter. The video shows that more work needs to be done. One accident should not derail or deter progress due to fear, especially when the alternative would do the same thing.

I hate to sound cold, but it is accidents and close calls that show the shortfalls of the system. Only then can they be improved. Just like with software, laboratory testing can only show so much, as there are many variables at play in the wild to account for.

We as people need to be accountable for our actions and use common sense. Whether it's crossing the road without looking (I see it in California way too often) or wearing all black at night on a dark road, we play a part in any accident. Hopefully, they find what went wrong and fix it.

Link to comment
10 minutes ago, lilyoyo1 said:

I'm still a supporter. The video shows that more work needs to be done. One accident should not derail or deter progress due to fear, especially when the alternative would do the same thing.

I hate to sound cold, but it is accidents and close calls that show the shortfalls of the system. Only then can they be improved. Just like with software, laboratory testing can only show so much, as there are many variables at play in the wild to account for.

We as people need to be accountable for our actions and use common sense. Whether it's crossing the road without looking (I see it in California way too often) or wearing all black at night on a dark road, we play a part in any accident. Hopefully, they find what went wrong and fix it.

My reaction comes from the fact that I believed these systems to be more capable, especially this one, which uses lidar. How did the lidar not paint this woman at 200 yards and at least slow down?!? It looks to me like the car NEVER even saw the woman, despite the lack of anything else in the surroundings to obstruct her or cause noise.

Link to comment

Regarding snow and poor driving conditions, I do expect that this capability will be introduced in phases.  I don't believe we need a 100% solution in order for certain aspects to be viable.  Perhaps we will start only on certain roads and only under certain environmental conditions.  

I understand that there are 5 levels of autonomy. We are already at level 2 (and possibly 3 for some cars). I expect that we will get to level 3 universally pretty quickly, and level 4 soon thereafter.

"Drivers are still necessary in level 3 cars, but are able to completely shift "safety-critical functions" to the vehicle, under certain traffic or environmental conditions. It means that the driver is still present and will intervene if necessary, but is not required to monitor the situation in the same way it does for the previous levels"

"Level 4 vehicles are "designed to perform all safety-critical driving functions and monitor roadway conditions for an entire trip." However, it's important to note that this is limited to the "operational design domain (ODD)" of the vehicle—meaning it does not cover every driving scenario."

Link to comment
2 hours ago, Teken said:

I had to watch the video a few times because, quite honestly, it was very bad quality. Regardless, the main point is that not a single soul sitting behind the wheel of one of these vehicles will ever be an active participant in DRIVING. This technology has simply shifted what in my day was considered a privilege and something you had to work toward. Now, companies and people are simply saying it's OK for automated vehicles built by man, which by their very nature are flawed.

And in this case they are saying: *Our expectation is that a person who sits behind the wheel of these AI, unattended vehicles will be actively watching the road.*

I want you all to pause and consider how incredibly stupid that statement is . . .

The concept is: I am too lazy to drive, so I purchased a yuppie car so I can continue to be lazy and let the robot drive me to X. Why would I ever want to be an active participant in something which is a privilege?!? The person might as well be DRIVING if they need to be actively watching, monitoring, whatever.

As for sensors, it doesn't matter what kinds of sensors are equipped in these vehicles. It doesn't matter how smart or quick-learning these vehicles are. It doesn't matter how many conditions and environments they place these vehicles in.

Why???

Because the world is ever changing and evolving, and you simply cannot compensate for stupid.

This lady literally stepped out into the path of an oncoming vehicle for no apparent reason, because she lacked common sense of her surroundings and couldn't care less. As I noted before, where I live this is a daily affair, and the people around me could give two sh^ts whether I had the right of way. This technology will never be safe unless the infrastructure is built around a concept which allows it to operate safely.

Trying to retrofit something that has billions of possibilities is not going to happen . . .

 

 

The safety of cars, planes, trains, boats, and even bicycles all evolves based on accidents and other experiences. If we take out the emotional aspect (not at all easy for those affected), what happened in Arizona is part of the learning process. What I want to understand is why the radar (or other systems) did not identify the lady and her bicycle. Mobileye, Uber, Google, Tesla, etc. are all asking themselves the same question.

BTW, the backup driver seems to keep looking down, and I wonder whether she was looking at her phone. Even without a phone, if current self-driving cars perform well 99.99% of the time, how can a backup driver maintain focus?

I also do not believe that the road infrastructure must evolve before self-driving cars. What the road infrastructure can bring in the future is communication between roads, traffic lights, first responders, and of course all other cars, whereby self-driving cars can move at higher speeds. But this does not per se impede the development of self-driving cars.

Link to comment
18 minutes ago, asbril said:

I also do not believe that the road infrastructure must evolve before self-driving cars. What the road infrastructure can bring in the future is communication between roads, traffic lights, first responders, and of course all other cars, whereby self-driving cars can move at higher speeds. But this does not per se impede the development of self-driving cars.

The Connected Vehicle program at the US DOT level intends to provide rapid, secure, bidirectional communications: vehicle <--> vehicle, vehicle <--> roadside, vehicle <--> central control -- if/when/where it's deployed. This could be another valuable piece of the puzzle that's currently missing.

Link to comment
12 minutes ago, Bumbershoot said:

The Connected Vehicle program at the US DOT level intends to provide rapid, secure, bidirectional communications: vehicle <--> vehicle, vehicle <--> roadside, vehicle <--> central control -- if/when/where it's deployed. This could be another valuable piece of the puzzle that's currently missing.

Fascinating. I had no idea of this program but it covers exactly what I was describing. I'd love to be part of it.

Link to comment
1 minute ago, asbril said:

Fascinating. I had no idea of this program but it covers exactly what I was describing. I'd love to be part of it.

I was on the periphery of this when I retired, and one of my employees was deeply involved in it.  It is fascinating, and potentially very, very useful, especially in congested/urban areas.

Link to comment

Way back when, all elevators had an operator. Eventually, we accepted that a machine could be automated and be safe. There are still accidents and injuries in elevators but no one is screaming for the return of the operators.

From the vertical to the horizontal, freight trains could have been automated or, at least,  remotely controlled years ago. Why are they not?

Pilotless freight airplanes? Sure, why not?

As to the current incident, the woman killed was completely in the wrong. She could see (or should have seen) the lights coming from a long distance, and/or she wasn't paying attention. The kicker being that pedestrians automatically have the right-of-way, making it the fault of the driver. As to why the sensors failed, that will have to be determined in the investigation. As with the Tesla, perhaps there were no sensors other than visual. The Tesla was blinded by the setting sun (glare). With this latest accident, perhaps the combo of darkness, curve, speed, angle of crossing, etc. prevented a narrow-beam radar from 'seeing' her until the same time as the lights. <shrugs>

What really concerns me about driverless cars (or having one) is that there is no overriding self-preservation. For example, you are riding in your driverless car and a truck pulls out. The car has the option to hit a school bus full of kids, crash into the truck causing a fiery crash killing you and the truck driver, or go off a cliff to avoid all the accidents. At this point, your car will likely make the decision that it is your time to die and drive you off the cliff.

Link to comment
20 minutes ago, DrLumen said:

As to the current incident, the woman killed was completely in the wrong. She could see (or should have seen) the lights coming from a long distance, and/or she wasn't paying attention. The kicker being that pedestrians automatically have the right-of-way, making it the fault of the driver.

First, the idea that "pedestrians automatically have the right-of-way, making it the fault of the driver" is legally wrong. A pedestrian crossing against a signal or outside of a designated crosswalk bears some contributory negligence; how much depends on the totality of the circumstances and the jury it's put in front of.

But this is not a matter of who's legally at fault. The Volvos in the Uber program have lidar, and in the absence of other obstructions, such as other cars, shrubbery or trees along the road, rain or snow, etc., the lidar should have clearly "seen" this woman with her bags and bike in the roadway, and the car should have at least slowed down. Knowing that the lidar is there, and seeing the video that (at least to me) clearly shows the car made no attempt to stop, suggests that this has to be a failure of the lidar detection system. I would think that a self-driving car with lidar would be MORE likely to avoid this accident than a human driver relying on visual input alone. We'll have to see what the data and the investigations reveal, but this looks to me at this point like a failure of the car's self-driving system, regardless of who is legally at fault.
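To get a rough sense of the margins involved, here is some back-of-envelope math. Every figure below is an assumption for illustration (the speed, lidar range, latency, and deceleration are not from the investigation):

```python
# Back-of-envelope stopping math. All figures are illustrative
# assumptions, not data from the crash investigation.
speed_mps = 17.9          # ~40 mph
detection_range_m = 80.0  # assumed lidar pedestrian-detection range
reaction_s = 0.5          # assumed perception + actuation latency
decel_mps2 = 7.0          # hard braking on dry asphalt, roughly 0.7 g

reaction_dist = speed_mps * reaction_s            # distance covered before braking starts
braking_dist = speed_mps ** 2 / (2 * decel_mps2)  # v^2 / (2a): distance to a full stop
total_stop_dist = reaction_dist + braking_dist    # ~32 m under these assumptions

print(round(total_stop_dist, 1))
```

Under these assumptions the car could come to a complete stop in well under half the assumed detection range, which is why a total non-reaction points at detection or classification rather than physics.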

Link to comment
Way back when, all elevators had an operator. Eventually, we accepted that a machine could be automated and be safe. There are still accidents and injuries in elevators but no one is screaming for the return of the operators.
From the vertical to the horizontal, freight trains could have been automated or, at least,  remotely controlled years ago. Why are they not?
Pilotless freight airplanes? Sure, why not?
As to the current incident, the woman killed was completely in the wrong. She could see (or should have seen) the lights coming from a long distance, and/or she wasn't paying attention. The kicker being that pedestrians automatically have the right-of-way, making it the fault of the driver. As to why the sensors failed, that will have to be determined in the investigation. As with the Tesla, perhaps there were no sensors other than visual. The Tesla was blinded by the setting sun (glare). With this latest accident, perhaps the combo of darkness, curve, speed, angle of crossing, etc. prevented a narrow-beam radar from 'seeing' her until the same time as the lights.
What really concerns me about driverless cars (or having one) is that there is no overriding self-preservation. For example, you are riding in your driverless car and a truck pulls out. The car has the option to hit a school bus full of kids, crash into the truck causing a fiery crash killing you and the truck driver, or go off a cliff to avoid all the accidents. At this point, your car will likely make the decision that it is your time to die and drive you off the cliff.
I think for driverless cars to be accepted, they will have to be programmed to consider their passengers their primary responsibility. No driving off cliffs to save the other person etc.

As for the video, the outcome may have been no different with a purely human driven vehicle but this is exactly the scenario where I would expect an autonomous vehicle to excel. Something went badly wrong here.

Sent from my SM-N950U using Tapatalk

Link to comment
9 minutes ago, mitchmitchell said:

I think for driverless cars to be accepted, they will have to be programmed to consider their passengers their primary responsibility. No driving off cliffs to save the other person etc.

As for the video, the outcome may have been no different with a purely human driven vehicle but this is exactly the scenario where I would expect an autonomous vehicle to excel. Something went badly wrong here.

 

I agree. I expected that the car would have at least reacted, and I saw no such signs. Disappointing, certainly, for us; I am sure that word does not even come close for the family of the person killed.

Link to comment

It seems like it is very early to place blame based on a rather poor video. 

1) The NYT places the accident on a section of the road that widens to 5 lanes.  The woman did not "jump" in front of the car.  She was walking for some distance before she was struck.

2) The video shows the woman "suddenly coming into view" just past the start of the turn lane. Google Maps shows multiple street lamps in that area. It might have been reasonable for a pedestrian to expect to be seen with those lights present.

Bottom line, I certainly can't judge much from a questionable video. I question the ability of the CCD auto-iris function to represent what "should" have been seen by a human under these circumstances. I regularly miss deer at 5 am that are far less reflective than the woman and her bike.

Let the NTSB do their job. It's tough enough to investigate something like this without publicly pronouncing guilt. If it were my wife/daughter, I would certainly hope for the benefit of the doubt.

 

[two images attached]

Link to comment
12 hours ago, Teken said:

Link??

https://www.dmv.ca.gov/portal/wcm/connect/42aff875-7ab1-4115-a72a-97f6f24b23cc/Waymofull.pdf?MOD=AJPERES

Per Waymo's official report to the state of California, 4 million miles.  350,000 in California last year alone.

A google search yields prolific results if you want to learn more.

Waymo uses radar, lidar, and optical cameras to detect its surroundings.  Not sure what Volvo uses, but it seems unlikely they used radar or lidar since both of those would have easily spotted that biker even in pitch darkness.  If they did have those systems, then they didn't do a very good job programming them.

EDIT:  Just looked at the Volvo car.  It sure looks like a lidar device on the roof.  But, I don't know for sure.

Link to comment
20 hours ago, asbril said:

The safety of cars, planes, trains, boats, and even bicycles all evolves based on accidents and other experiences. If we take out the emotional aspect (not at all easy for those affected), what happened in Arizona is part of the learning process. What I want to understand is why the radar (or other systems) did not identify the lady and her bicycle. Mobileye, Uber, Google, Tesla, etc. are all asking themselves the same question.

BTW, the backup driver seems to keep looking down, and I wonder whether she was looking at her phone. Even without a phone, if current self-driving cars perform well 99.99% of the time, how can a backup driver maintain focus?

I also do not believe that the road infrastructure must evolve before self-driving cars. What the road infrastructure can bring in the future is communication between roads, traffic lights, first responders, and of course all other cars, whereby self-driving cars can move at higher speeds. But this does not per se impede the development of self-driving cars.

I have to agree with this. There is something inherently wrong with this idea, at this point in development.

The idea of the system is to allow drivers to not be attentive, or to be redundant, and yet, as the linked reports show, the system can disengage at any time when a discrepancy or error is found, requiring an attentive human to take over. This requires a human who is attentive 100% of the time. It also admits that the automatic driving system makes many mistakes. It doesn't matter if the system is correct 99.99999% of the time. This allegedly rare mistake, and the conceptual error, may be to blame for taking this woman's life. It's not good enough. I haven't seen the video yet, but from the video observers' reports, humans would not have been able to avoid the accident.

It has to be remembered that most mechanical devices do NOT have the dynamic range of light levels the human eye has, so the video may not be an accurate representation of what an attentive, in situ driver would have seen.

I have to side with Teken here at this point in the technology. Until the next levels of AI are accomplished, the concept is flawed. Trouble is, how do we get past this level without high human and property risk?
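That vigilance problem can be put in rough numbers. Assuming, purely for illustration, a system that handles 99.99% of miles without needing help:

```python
# Illustrative only: how often would a safety driver actually have to act?
# Both figures are assumptions, not Uber or Waymo data.
reliability_per_mile = 0.9999   # system handles 99.99% of miles cleanly
miles_per_shift = 200           # assumed miles covered in one test shift

expected_interventions = miles_per_shift * (1 - reliability_per_mile)
print(expected_interventions)   # ~0.02, i.e. roughly one real event every 50 shifts
```

An event that rare is almost impossible for a human to stay primed for, which is exactly the conceptual flaw being described: the better the system gets short of perfection, the less useful the backup driver becomes.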

Link to comment
1 hour ago, apostolakisl said:

https://www.dmv.ca.gov/portal/wcm/connect/42aff875-7ab1-4115-a72a-97f6f24b23cc/Waymofull.pdf?MOD=AJPERES

Per Waymo's official report to the state of California, 4 million miles.  350,000 in California last year alone.

A google search yields prolific results if you want to learn more.

Waymo uses radar, lidar, and optical cameras to detect its surroundings.  Not sure what Volvo uses, but it seems unlikely they used radar or lidar since both of those would have easily spotted that biker even in pitch darkness.  If they did have those systems, then they didn't do a very good job programming them.

EDIT:  Just looked at the Volvo car.  It sure looks like a lidar device on the roof.  But, I don't know for sure.

Quickly reviewing the link provided, it doesn't indicate what kinds of environments these vehicles have driven in. I am interested in the type of terrain, hours of use, and areas. I have to gather, since this is in CA, that this was on buttery-smooth roads, under blue sky, and without a drop of rain.

Add in nighttime, rain, sleet, snow, road ruts, crazy humans, etc.

I can say with a high level of confidence that if Elon Musk taps me to provide help with this evolving technology, his company will acquire more insight and knowledge for their systems. Having assisted GM & Ford at their *Winter Proving Grounds*, I can tell you even normal vehicles saw unforeseen mechanical and electrical issues when the mercury was -45 °C.

Very interested to see a different angle or view of the video that resulted in this woman's death.  

Link to comment
1 hour ago, apostolakisl said:

Waymo uses radar, lidar, and optical cameras to detect its surroundings.  Not sure what Volvo uses, but it seems unlikely they used radar or lidar since both of those would have easily spotted that biker even in pitch darkness.  If they did have those systems, then they didn't do a very good job programming them.

EDIT:  Just looked at the Volvo car.  It sure looks like a lidar device on the roof.  But, I don't know for sure.

Just to be clear, there is lidar in the Uber system (every tech article talking about the accident references this), and this IS NOT Volvo's system; it's Uber's system, just built on a Volvo vehicle.

Link to comment
Just now, Goose66 said:

Just to be clear, there is lidar in the Uber system (every tech article talking about the accident references this), and this IS NOT Volvo's system; it's Uber's system, just built on a Volvo vehicle.

RE: Sensors -> Honestly, my view is that unless these vehicles use multiple sensing arrays defined for immediate, close, mid, and long range scanning, which combine microwave/radar with day/night cameras and whatever else they deem necessary . . .

Impact deaths will always be seen . . .

Keeping in mind, no matter the technology, there is nothing to guard against someone/something stepping out in front of an oncoming vehicle.

i.e. wildlife / stupid humans

Link to comment
11 minutes ago, larryllix said:

The idea of the system is to allow drivers to not be attentive, or to be redundant, and yet, as the linked reports show, the system can disengage at any time when a discrepancy or error is found, requiring an attentive human to take over. This requires a human who is attentive 100% of the time. It also admits that the automatic driving system makes many mistakes. It doesn't matter if the system is correct 99.99999% of the time. This allegedly rare mistake, and the conceptual error, may be to blame for taking this woman's life. It's not good enough. I haven't seen the video yet, but from the video observers' reports, humans would not have been able to avoid the accident.

Well, there just needs to be some education here. I want an SAE level 3 car such as a Tesla or Cadillac, but I do not expect that it would replace my attentiveness to road conditions; that would be level 4. I understand that level 3 will only provide me a marginally better experience in situations where I am otherwise not being safe (or, more specifically, being stupid). For most, this is texting and driving. For me, it's eating a Zaxby's salad while I am going down the highway at 85 with my family in the car (also eating Zaxby's). Of course, I don't want some frustrating system that makes me take the wheel every couple of minutes. If they could implement an eye monitor like what is in some trucks, that would be OK, but I think if we could just educate people about what the system is capable of at this point, we could safely roll it out and let it evolve to level 4. If I wind up with Zax sauce in my lap because I have to take control of the wheel in an emergency, well, that's on me.

Link to comment
14 minutes ago, Goose66 said:

Well, there just needs to be some education here. I want an SAE level 3 car such as a Tesla or Cadillac, but I do not expect that it would replace my attentiveness to road conditions; that would be level 4. I understand that level 3 will only provide me a marginally better experience in situations where I am otherwise not being safe (or, more specifically, being stupid). For most, this is texting and driving. For me, it's eating a Zaxby's salad while I am going down the highway at 85 with my family in the car (also eating Zaxby's). Of course, I don't want some frustrating system that makes me take the wheel every couple of minutes. If they could implement an eye monitor like what is in some trucks, that would be OK, but I think if we could just educate people about what the system is capable of at this point, we could safely roll it out and let it evolve to level 4. If I wind up with Zax sauce in my lap because I have to take control of the wheel in an emergency, well, that's on me.

That sounds to me like lying to ourselves. We want a system that allows us to be unavailable for a quick response, while still professing to be available to respond. While we are looking for a place to put our forks into our food bowl/plate, we are not watching or focusing our attention on predicting future road events.

In Ontario this has become a big issue (with new laws and enforcement) around using cell phones while driving. Recently a young woman was charged with "Distracted Driving" while texting, stopped at a red light at an intersection. It can be argued she wasn't driving; OTOH, she was still in control of a running automobile???

Until the next level is proven and in place, the human pilot should not be doing anything but driving with full attention. Allowing the human to be 99.9999% inattentive will never work, as it makes the human response time too long to be of value anyway. I don't know how we will get through this AI stage safely.

Link to comment

Archived

This topic is now archived and is closed to further replies.

