General Motors’ autonomous Cruise division is facing a series of setbacks and controversies that raise concerns about the safety of its robotaxis. A recent incident in which a pedestrian was dragged by one of Cruise’s vehicles led California to ban the company from operating in San Francisco. But it appears this was not an isolated incident.
According to internal materials reviewed by The Intercept, Cruise was aware that its autonomous vehicles struggled to detect children, yet the vehicles did not exercise additional caution around them. The company acknowledged the need for technology that could distinguish children from adults so that extra safety measures could be applied. However, Cruise reportedly lacked the data on children’s behavior needed to ensure its vehicles operated safely around them.
The materials also highlight Cruise’s reliance on human intervention to identify children encountered by its autonomous vehicles. The absence of a high-precision small vulnerable road user (VRU) classifier, an automated system for detecting child-shaped objects, exacerbated these detection gaps.
In response to these revelations, Cruise stated that its software did not fail to detect children but rather failed to classify them correctly due to the unpredictability of children’s behavior. The company highlighted its rigorous testing in simulated and closed-course environments, which it claims demonstrated superior performance in critical collision avoidance scenarios involving children.
Children were not the only thing Cruise’s vehicles struggled to detect. The materials indicated that the autonomous vehicles also had difficulty recognizing hazards such as holes in the ground, increasing the risk of accidents and compromising passenger safety.
These revelations shed light on the challenges General Motors’ Cruise division faces in developing reliable, safe autonomous vehicles. As public trust in autonomous technology becomes increasingly important, addressing the limitations highlighted in this report is crucial. Regulators, technology developers, and manufacturers must work together to ensure the safety of autonomous vehicles and of the communities in which they operate.
Q: Did Cruise know its robotaxis were a danger to children?
A: The Intercept’s report suggests that Cruise was aware of the detection challenges related to children and did not prioritize additional caution when they were nearby.
Q: How did Cruise attempt to compensate for its software’s limitations?
A: According to the report, Cruise relied on human workers to manually identify children that the software struggled to detect automatically.
Q: What other issues did the internal materials reveal about Cruise’s autonomous vehicles?
A: In addition to detection challenges with children, the materials indicated that Cruise also struggled to detect hazards like holes in the ground.
Q: What did Cruise say in response to the allegations?
A: Cruise stated that its software did not fail to detect children but failed to classify them correctly due to their unpredictable behavior. The company highlighted its rigorous testing in simulated environments to demonstrate its vehicles’ performance in collision avoidance scenarios involving children.