Tuesday, December 22, 2020

Brief, Spam, and Substantive Comments - #4 Comments in Response to NHTSA ANPRM

1. Spam comment from John John. I think this was something akin to a road sign selling something. No need to elaborate.

2. Second anonymous comment - This comment is quite brief and the commenter appreciates how NHTSA is establishing a framework for assessing AV safety and functionality. This person thinks it is important to work closely with the scientists and developers involved. 

3. Comment of Michael DeKort - I am assuming that this is the Michael DeKort mentioned on Wikipedia as a whistleblower who won a lawsuit against Lockheed Martin after he was fired for reporting on faulty equipment being installed on Coast Guard vessels. Mr. DeKort attempted to report the defect up the agency food chain and tried to alert major newspapers to the problem, to no avail. This Mr. DeKort has valid AV credentials; according to his LinkedIn profile, he has served on SAE AV committees. He lists a bunch of AV-related podcast episodes and he also has aerospace experience.

Mr. DeKort's comment is identical to the comment and blog post of Mr. Patrick Coyle that I reviewed in my earlier post about submitted comments.

Editorial interruption - adult whining on my part about comments submitted without background information about the author. Repeat of earlier rant in my head. No foot stomping, yet, on this one.

Advice on what a safety framework should look like

4. Comment of David Gelperin - I assume that this is the Mr. Gelperin who is a software engineering expert. One of his specialties is risk management relating to software and he is the CTO and President of ClearSpecs Enterprises. He submitted two brief attachments as his comments. 

Mr. Gelperin gives a definite yes to regulating AVs before widespread deployment, and he calls safety "THE core quality attribute that NHTSA is considering. Safety is NOT a functional element. It is a quality attribute that crosscuts all functional elements." He advises NHTSA to list the deadly hazards of AVs and then associate mitigations with those hazards: "NHTSA should provide lists of core hazards both to and from an ADS with an unconstrained domain, along with their mitigation alternatives."
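To picture what such a list might even look like, here is a toy illustration that is entirely my own; the hazards and mitigations below are examples I made up, not content from the ANPRM or from Mr. Gelperin's comment. Each entry just pairs a hazard (to or from the ADS) with its candidate mitigations:

```python
# Toy illustration only: a hazard list with mitigation alternatives.
# The hazards and mitigations are invented examples, not content from
# the ANPRM or from Mr. Gelperin's comment.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Hazard:
    name: str
    direction: str                    # "to the ADS" or "from the ADS"
    mitigations: List[str] = field(default_factory=list)


CORE_HAZARDS = [
    Hazard(
        name="sensors degraded by sun glare",
        direction="to the ADS",
        mitigations=["redundant sensor modalities", "slow down and pull over"],
    ),
    Hazard(
        name="failure to yield to a pedestrian",
        direction="from the ADS",
        mitigations=["larger pedestrian buffer", "automatic emergency braking"],
    ),
]

for hazard in CORE_HAZARDS:
    print(f"{hazard.name} ({hazard.direction}): {', '.join(hazard.mitigations)}")
```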

Mr. Gelperin likes the idea, noted in the ANPRM, that AVs be programmed to drive defensively, but he points out that this must be a "verifiable, comprehensive, consensus mode," which perhaps NHTSA should take the lead in developing.

Emergency stopping button - like on a subway car

Mr. Gelperin calls for AVs to have an emergency stopping button that a passenger may press. He goes into great detail - necessary, practical detail - about how this emergency stopping function could work: it should be verifiable, invokable by the ADS (automated driving system) controller, and invokable by an "isolated copy when the main control platform does not respond to a liveness check." Basically, the main control platform would be continually checked by an isolated platform.

Even if I get the summary slightly wrong, given the software systems engineering terminology, the gist is clear: redundancy and continuous checking.
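To make that gist concrete, here is a minimal sketch of the pattern as I understand it - an isolated watchdog periodically runs a liveness check on the main control platform and, if it gets no timely response, invokes the emergency stop itself. The class and function names are mine, purely illustrative, and do not come from Mr. Gelperin's comment.

```python
# Minimal sketch of the liveness-check / isolated-copy pattern described
# above. All names are illustrative; nothing here comes from the comment.
import time


class MainControlPlatform:
    """Stands in for the primary ADS control software."""

    def heartbeat(self) -> bool:
        # A real implementation would confirm the control loop is healthy;
        # here it simply reports success.
        return True


class IsolatedWatchdog:
    """Isolated copy that checks the main platform and can stop the
    vehicle itself if the main platform stops responding."""

    def __init__(self, main: MainControlPlatform, timeout_s: float = 0.1):
        self.main = main
        self.timeout_s = timeout_s

    def check_once(self) -> None:
        start = time.monotonic()
        try:
            alive = self.main.heartbeat()
        except Exception:
            alive = False
        too_slow = (time.monotonic() - start) > self.timeout_s
        if not alive or too_slow:
            # The main platform failed the liveness check, so the
            # isolated copy invokes the stopping function directly.
            self.fallback_emergency_stop()

    def fallback_emergency_stop(self) -> None:
        print("Watchdog: main platform unresponsive, stopping the vehicle")


if __name__ == "__main__":
    IsolatedWatchdog(MainControlPlatform()).check_once()  # would run continuously in practice
```

The point is simply the structure: the stopping function does not depend solely on the platform it is meant to back up.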

Mr. Gelperin has a logical mind and he offers practical advice for how an emergency stopping function would operate, including in the unusual circumstance of a person using an AV to rush to a hospital. 

Monitoring ADS behavior could be done by hardware, software, and passengers, as well as by other vehicles and remote observers. I think it's too early to rule out any form of monitoring. I suggest that monitoring be tied to hazard management: an inventory of hazards from an ADS could be developed, and mitigations for some would start with their detection by monitoring or self-checking.
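One way to picture monitoring tied to hazard management - again my own sketch, not anyone's actual design - is to give each hazard in the inventory a detector and a mitigation, and have a monitoring pass run the detectors and trigger the associated mitigation when one fires:

```python
# My own toy sketch of monitoring tied to hazard management: each hazard
# carries a detector (monitoring or self-check) and a mitigation that runs
# when the detector fires. Hazards, thresholds, and names are invented.
from typing import Callable, Dict, List, NamedTuple


class MonitoredHazard(NamedTuple):
    name: str
    detect: Callable[[Dict[str, float]], bool]    # monitoring or self-check
    mitigate: Callable[[], None]                   # response when detected


def low_sensor_confidence(state: Dict[str, float]) -> bool:
    return state.get("sensor_confidence", 1.0) < 0.5


def reduce_speed() -> None:
    print("Mitigation: reduce speed and increase following distance")


HAZARD_INVENTORY: List[MonitoredHazard] = [
    MonitoredHazard("degraded perception", low_sensor_confidence, reduce_speed),
]


def monitoring_pass(vehicle_state: Dict[str, float]) -> None:
    """Run one pass of the monitor over the hazard inventory."""
    for hazard in HAZARD_INVENTORY:
        if hazard.detect(vehicle_state):
            hazard.mitigate()


monitoring_pass({"sensor_confidence": 0.3})  # triggers the mitigation
```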

Mr. Gelperin warns at the end of his comment (in the second attachment):

All ADS software requirements should be publicly available or independently reviewed including quality attributes, basic function sets, design constraints and implementation constraints. Without skeptical reviews, such requirements are likely to be seriously flawed for many reasons. The current non-collaborative environment is very dangerous and ignores most of what we know about human behavior. [Emphasis added.]

