On cars, old, new and future; science & technology; vintage airplanes, computer flight simulation of them; Sherlockiana; our English language; travel; and other stuff
THIS MOST COGENT QUESTION is addressed by a Carnegie Mellon associate professor who has written a book on this crucial topic.
Professor Philip Koopman has also written an article in the April 2023 edition of SAE International’s Update: News & Insights for Mobility Professionals. Here are tidbits gleaned from this article.
Infallible Computers?? “Some industry advocates,” Koopman observes, “would have us believe that computer drivers will be better than human drivers because humans make mistakes. So presumably computer software is perfect? Really? Increasing automotive software recall rates tell a different story.”
Safe as a Human Driver? Koopman posits rhetorically, “Why don’t we just go with ‘at least as safe as a human driver’ and leave it at that? While that has intuitive appeal, it begs the question of which human driver, driving which vehicle, under which conditions. Moreover, an average safety target overlooks the potential risk transfer to vulnerable groups of road users.”
The Infamous Trolley Problem. Koopman says, “If an AV fleet regularly gets itself into situations that require a nuanced argument of whom to kill, it is probably too dangerous to be operating on public roads in the first place.”
Which Human? “It seems pretty reasonable,” Koopman says, “to want an AV that is at least as safe as an unimpaired, undistracted driver behaving responsibly on the road. Not driving drunk is one of the reasons so often given that AVs will be safer, right?”
“Even if not drunk,” Koopman imagines, “what if every AV drove like a 16-year-old with a new license? Perhaps we’d prefer the equivalent of an experienced middle-age driver with a crash rate many times lower. Or even a 70-year-old with experience and maturity that more than compensates for slower reflexes (seriously—look at the data in the figure).”
AV Improvements. Koopman says, “While one hopes that AVs will get better at driving over time, we probably don’t want to suffer several years of having a fleet of robotaxis all driving like teenagers in our urban areas.”
And Which Areas? Koopman cites individual state fatality rates ranging from “0.51 per 100M miles (Massachusetts) to 1.73 (South Carolina), with a U.S. average of 1.11 based on 2019 data.”
Also, within a single state (Pennsylvania, for example), “fatalities are five times more frequent per mile on rural state highways than on the PA Turnpike.”
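The cited rates make the baseline question concrete: the same fleet mileage implies very different expected fatality counts depending on which rate you compare against. A minimal sketch of that arithmetic, using the per-100M-mile rates quoted above (the fleet mileage figure is an illustrative assumption, not from the article):

```python
# Rates per 100 million vehicle miles (2019 data), as cited in the article.
RATES_PER_100M_MILES = {
    "Massachusetts": 0.51,
    "U.S. average": 1.11,
    "South Carolina": 1.73,
}

def expected_fatalities(rate_per_100m: float, miles: float) -> float:
    """Expected fatalities = rate x (miles / 100,000,000)."""
    return rate_per_100m * miles / 100_000_000

fleet_miles = 50_000_000  # hypothetical annual AV fleet mileage

for region, rate in RATES_PER_100M_MILES.items():
    print(f"{region}: {expected_fatalities(rate, fleet_miles):.3f} expected fatalities")
```

The spread is the point: judged against Massachusetts drivers, the “safe enough” bar sits more than three times higher than judged against South Carolina drivers.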
In Which Vehicles? In comparing “apples to apples,” Koopman observes, “The age and type of vehicle matters as well.” The fleet-average 12-year-old vehicle has fewer and older safety features than a new, high-end car.
Summarizing. “In short,” Koopman says, “AV safety should be compared with good, unimpaired drivers on the same roads in the same operating conditions in comparable vehicles with a decent suite of non-autonomous safety features. That will set the ‘safe enough’ bar much higher than a simple U.S. national average for initial deployments.”
Stakeholders Frozen Out. “The current situation in the U.S.,” Koopman says on a disturbing note, “is that there is no credible regulatory oversight of AV safety before deployment…. Other stakeholders [not just those developing the technology] such as consumer safety advocates, community governments, and equity advocates are being frozen out of the conversation due to industry-sponsored state laws that prohibit cities from having a say in AV testing operations on their own streets.”
I am tempted to italicize the preceding paragraph. ds
© Dennis Simanaitis, SimanaitisSays.com, 2023
I have been playing around with the Internet of Things (IoT) in my home over the last few years, where I can issue Siri commands via Apple HomeKit to turn lights on, lock/unlock my front door, open/close my garage door, etc. The devices are a mix, some better designed, others less so.
One of my systems that had been relatively reliable for many months developed a problem a couple of days ago whereby two lights plugged into smart plugs started responding slowly (20-25 seconds to turn on/off versus the normal near-instantaneous response). A restart of my Wi-Fi network resolved this particular issue, but it points up how a small “electronic” glitch can significantly impact system performance.
Such a scenario in an “autonomous” car could easily result in a fatal tragedy rather than a minor annoyance at home.
I have already experienced system failure in my previous Audi Q5, where proximity sensors stopped working after being iced up by freezing slush. The system did fail in a safe way, warning me that it was not functioning, so at least I was aware; but it points up that autonomous features can quite easily fail to provide the data needed to properly control the vehicle. This is also a possibility with a dumb (i.e., non-autonomous) system, where the only “smarts” come from the human operator. Not all humans are capable of properly adapting to adverse conditions, but at least experienced drivers know how to compensate for poor visibility and road conditions. I’m not quite sure how an autonomous car would deal with failed or impaired data inputs.
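The fail-safe behavior described above can be sketched as a staleness check: when a sensor stops delivering valid readings (iced over, disconnected), the system reports a fault and warns the driver rather than acting on old data. This is a minimal illustration of the pattern, not any vehicle's actual implementation; all names and thresholds are assumptions.

```python
STALE_AFTER_S = 0.5  # assumed: readings older than this are untrustworthy

class ProximitySensor:
    """Toy model of a proximity sensor with a fail-safe status check."""

    def __init__(self):
        self.last_reading = None   # meters to nearest obstacle
        self.last_update = 0.0     # timestamp of the last valid reading

    def update(self, distance_m: float, now: float) -> None:
        self.last_reading = distance_m
        self.last_update = now

    def status(self, now: float) -> str:
        """Return 'OK' or 'FAULT' -- never present a stale value as fresh."""
        if self.last_reading is None or now - self.last_update > STALE_AFTER_S:
            return "FAULT"  # warn the driver and disable the feature
        return "OK"

sensor = ProximitySensor()
sensor.update(2.4, now=100.0)
print(sensor.status(now=100.1))  # fresh reading -> OK
print(sensor.status(now=101.0))  # stale reading -> FAULT
```

The design choice mirrors the Audi behavior: the safe failure mode is an explicit warning, not silent continuation on degraded inputs.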
I remember a road trip many years ago when we suddenly hit really dense fog. It didn’t help that drifting snow was obscuring the lane markings. The shoulders on this highway were quite narrow, so I determined that stopping was not a safe option and instead dropped my speed to about 10 mph. Another car came up behind me and, not satisfied with my slow speed, pulled out to pass and nearly got into a head-on collision. He pulled back in just in time and followed me for the next two hours. It was a white-knuckle drive, and the biggest scare came when I came upon a car pulled over on the narrow shoulder, its driver standing outside in the driving lane; my low speed enabled me to react safely to the situation. What would an autonomous car do with limited visibility for its cameras, and possibly malfunctioning sensors?
At the very least, if autonomous vehicles are being tested on public roadways, they should be clearly marked and lit up as experimental vehicles, providing warning to all other road users.