On the one-year anniversary of the first 737 MAX crash, senators and representatives grilled CEO Dennis Muilenburg for nine hours at public hearings on Capitol Hill about how Boeing’s mistakes contributed to 346 deaths. As they forced Muilenburg to concede to design and management errors, policymakers built a case for more regulation of Boeing’s advanced airplanes, not less.

Yet in the same month that Muilenburg appeared before Congress, Waymo began offering driverless car service to early adopters in Chandler, Arizona, without safety drivers on board. Waymo, the autonomous vehicle company held by Google’s parent Alphabet, now carries passengers in potentially lethal vehicles on public roadways with zero government testing or certification of the safety or security of the robotic driver.

At a moment when public skepticism of big tech runs high and a bipartisan group of fifty attorneys general has started an antitrust investigation of Google, it is hard to imagine that the safety of autonomous driving technology will remain exclusively in the hands of companies like Waymo. The implementation of automated systems, and how humans interact with them, lies at the heart of Boeing’s failures with the 737 MAX program. Robotic cars have already claimed at least one life. If that number grows, the traveling public and their elected leaders will eventually demand that some entity other than the company selling autonomous driving technology test its safety.

Yet policymakers must take care about which lessons they learn from Boeing’s failures. Notwithstanding the 737 MAX tragedies in Indonesia and Ethiopia, no one has died in a commercial airline crash in the United States in more than ten years. During that same period, nearly 350,000 people, equal to almost half the population of Seattle, were killed in car crashes in the United States. Over 25 million people in the United States went to the emergency room because of auto accidents in the same decade, a number that exceeds the entire population of Cascadia. Despite the recent loss of trust in Boeing, airplane travel remains much safer than auto travel.
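To put those decade totals on a common footing, a quick back-of-the-envelope calculation converts them to annual figures; the numbers below are approximations drawn from the paragraph above, not official statistics.

```python
# Rough back-of-the-envelope comparison of the decade totals cited above.
# These are approximations for illustration, not official statistics.

YEARS = 10

us_road_deaths_decade = 350_000      # "nearly 350,000 ... killed in car crashes"
us_er_visits_decade = 25_000_000     # "over 25 million ... went to the emergency room"
us_airline_deaths_decade = 0         # no US commercial airline crash deaths in the period

print(f"Road deaths per year:    {us_road_deaths_decade / YEARS:,.0f}")   # ~35,000
print(f"ER visits per year:      {us_er_visits_decade / YEARS:,.0f}")     # ~2,500,000
print(f"Airline deaths per year: {us_airline_deaths_decade / YEARS:,.0f}")
```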

If automated driving systems can reduce the death toll on our roadways as their advocates promise, policymakers should accelerate, not inhibit, their deployment. Yet they must also remain clear-eyed about technology companies whose incentives don’t always align with the public interest, and find ways to create meaningful checks on bad corporate behavior.

In this article, I review Boeing’s mistakes with the 737 MAX and explore what worked and what didn’t work in the FAA’s regulatory approach to Boeing. I then look at the case for the “light regulatory touch” now adopted by many jurisdictions when it comes to autonomous vehicles. I wrap up with some recommendations on how local governments in Cascadia can take prudent steps towards deploying autonomous vehicles in ways that improve safety and build public trust.

Lessons from the Boeing 737 MAX Crashes and Aftermath

Dominic Gates at the Seattle Times has done excellent, in-depth reporting on the 737 MAX, which the paper packaged into an interactive explainer of exactly what went wrong. Twelve problems with an automated flight control system caused it to errantly take control of the horizontal stabilizer trim in the tail section and fly the planes into fatal crashes despite the pilots’ best efforts to reassert control. Factors that contributed to the faulty design included the system’s reliance on a single angle-of-attack sensor, the late expansion of its authority during development, its ability to reactivate repeatedly after pilots countered it, and its omission from pilot manuals and training.
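To see why a single faulty sensor can be so dangerous, consider the deliberately simplified sketch below. It is not Boeing’s actual control law; it only illustrates how an automated trim system that trusts one angle-of-attack reading keeps commanding nose-down trim, while a version that cross-checks two sensors disengages when they disagree. The thresholds are hypothetical.

```python
# Simplified illustration (not Boeing's actual MCAS logic): trusting a single
# faulty angle-of-attack (AOA) sensor leads to repeated nose-down trim commands,
# while cross-checking two sensors lets the system disengage on disagreement.

AOA_TRIGGER_DEG = 15.0      # hypothetical threshold for "nose too high"
DISAGREE_LIMIT_DEG = 5.0    # hypothetical allowed disagreement between sensors

def single_sensor_trim(aoa_left: float) -> str:
    """Trusts one sensor: a stuck-high reading commands nose-down trim every cycle."""
    if aoa_left > AOA_TRIGGER_DEG:
        return "TRIM NOSE DOWN"
    return "no action"

def cross_checked_trim(aoa_left: float, aoa_right: float) -> str:
    """Compares two sensors and disengages if they disagree."""
    if abs(aoa_left - aoa_right) > DISAGREE_LIMIT_DEG:
        return "DISENGAGE + ALERT CREW"   # don't act on suspect data
    if (aoa_left + aoa_right) / 2 > AOA_TRIGGER_DEG:
        return "TRIM NOSE DOWN"
    return "no action"

# A failed left sensor reads 22 degrees while the plane is actually near level (~2 degrees).
faulty_left, healthy_right = 22.0, 2.0
for _ in range(3):
    print("single sensor:", single_sensor_trim(faulty_left))        # repeats nose-down each cycle
    print("cross-checked:", cross_checked_trim(faulty_left, healthy_right))
```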

After the second crash, regulators and markets responded to Boeing’s mistakes:

  • International regulators and then the FAA grounded the 737 MAX in March 2019 after the second crash in Ethiopia. 
  • Regulators and Boeing undertook detailed study of the crashes and published reports about them.
  • The FAA reviewed its process for certifying the aircraft.
  • Boeing’s market capitalization dropped $63 billion between March and October 2019. 
  • Boeing’s CEO lost his role as chairman of the board, and the head of the commercial airplane division was fired.
  • Boeing’s CEO faced the families of the crash victims at public hearings in Congress and took strong criticism for the company’s management practices.
  • Boeing redesigned the automated system to improve its safety with the hope of putting the aircraft back in service in early 2020.
  • Congressional leaders called for more resources and independence for the FAA when it certifies new planes.

Even though Boeing had strong business incentives to make the 737 MAX safe, competitive pressure from Airbus and a rush to market resulted in a flawed design. After the crashes, regulators asserted their authority and forced Boeing to fix the problem. Policymakers appear ready to strengthen the FAA’s hand both in approving the 737 MAX’s return to service and in certifying future aircraft designs.

When private companies compromise public safety, public regulators step in and make sure they do it right. That’s why the construction and operation of any vessel that moves people—be it an airplane, automobile, train, bus, passenger ship, or elevator—must meet requirements established by public regulators in the United States. A safety failure like that on the Boeing 737 MAX offers yet another proof point that private-sector enterprises may lack sufficient incentive to make safety a priority on their own.

The Autonomous Vehicle Industry’s Case for a Light Regulatory Touch

The autonomous car industry desperately wants to avoid the regulatory regimes applied to airplanes and other transport modes, favoring instead the idea of a “light regulatory touch.” The industry strongly supported the AV START Act in 2017, which would have allowed companies to put tens of thousands of autonomous vehicles on the road without standards for self-driving safety. That effort stalled in the US Senate when safety advocates pushed back.

Industry supporters argue that a heavy-handed regulatory approach to self-driving technology will slow its deployment and delay its life-saving applications. With over 35,000 people killed per year on the nation’s highways, self-driving advocates promise a technological fix, so long as regulators don’t mess it up. Free-market incentives, they contend, will develop and deploy the technology faster, lowering the total number of deaths over the next two decades compared with a scenario in which public regulators require a rigorous program of testing and validation before allowing self-driving cars on the road.
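The industry’s argument is essentially a claim about cumulative deaths under different adoption curves. The sketch below makes that claim concrete with purely hypothetical numbers; the baseline toll, adoption rates, and assumed safety improvement are illustrative assumptions, not forecasts.

```python
# Hypothetical illustration of the industry's cumulative-deaths argument.
# All parameters are made-up assumptions for illustration, not forecasts.

BASELINE_DEATHS_PER_YEAR = 35_000   # roughly today's US road toll
SAFETY_IMPROVEMENT = 0.5            # assume AVs halve the fatality rate of trips they replace

def cumulative_deaths(adoption_by_year: list[float]) -> float:
    """Total deaths over the period, given the share of travel that is autonomous each year."""
    total = 0.0
    for share in adoption_by_year:
        human_share = 1 - share
        total += BASELINE_DEATHS_PER_YEAR * (human_share + share * (1 - SAFETY_IMPROVEMENT))
    return total

years = 20
fast_adoption = [min(1.0, 0.05 * y) for y in range(years)]   # light-touch scenario: 5 points/year
slow_adoption = [min(1.0, 0.02 * y) for y in range(years)]   # rigorous-testing scenario: 2 points/year

print(f"Fast deployment: {cumulative_deaths(fast_adoption):,.0f} deaths over {years} years")
print(f"Slow deployment: {cumulative_deaths(slow_adoption):,.0f} deaths over {years} years")
# The gap between the two totals is the industry's claimed cost of heavy regulation,
# a claim that holds only if the assumed safety improvement is real.
```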

The autonomous vehicle industry also echoes arguments made by aviation experts about the limited technical expertise within regulatory agencies. Christopher Hart, a former NTSB chairman and pilot who chaired the Joint Authorities Technical Review (JATR), observed in testimony to the US Senate that “the leading technologists are not going to be with the regulator — they aren’t able to hire and retain the leading technologists — they’re going to be with the company.” The implication is that regulators won’t know enough about how the technology works to judge its safety.

Though self-serving, the industry’s arguments are hard to dismiss entirely. Regulation does slow things down, and autonomous driving technology is advancing rapidly. The best practice of today may be obsolete tomorrow. Locking in standards too soon may hinder the advance of better and safer practices.

For the time being, policymakers and early adopters in Arizona have put their full faith in Waymo to do the right thing. Waymo’s engineers have convinced their lawyers that the technology is safe enough for the company to manage whatever liability emerges from putting these vehicles into service. So long as there are no (or few) accidents, this experiment in laissez-faire could continue for years.

The US approach to battery-powered e-cigarettes may offer some parallels. In the last decade, vaping grew into an $8 billion business with minimal regulation. Vaping is less harmful than cigarette smoking, and so its supporters argued against regulation since vaping could potentially save lives. After ten years of the hands-off approach, the growth of vaping among teenagers and a spate of vaping-related deaths finally prompted the FDA to act, starting with a proposed ban on certain flavorings for e-cigarettes. One could imagine a similar scenario unfolding with self-driving cars. Well-managed companies put safe products on the road, the market grows, and then something happens that forces the public sector to step in and start enforcing rules. 

Waymo has done the most testing of self-driving technology and, with Alphabet’s backing, has the financial heft to become the market leader in offering automated transportation services. But like Boeing, Waymo feels pressure from financial markets to bring its new technology to market faster. In September 2019, the month before Waymo started service without safety drivers, Morgan Stanley slashed $70 billion from its valuation of Waymo because of delays in deploying service. The managers making the decisions to put the robots on the road have to balance their personal stake in the company’s financial success with the safety of the traveling public. 

Waymo’s competitors understand the risks to the industry of not establishing standards for the verification and validation of self-driving technology. In July 2019, Aptiv, Audi, Baidu, BMW, Continental, Daimler, Fiat Chrysler, Intel, and Volkswagen together published “Safety First for Automated Driving” as a work in progress to establish standard ways to prove that self-driving technology is safe. The same month, Uber separately released its Safety Case Framework in a similar effort to describe an approach to safety while avoiding fixed standards that could limit technology development.

Waymo has not yet joined these industry-wide efforts to build a consensus around a safety framework for testing and deploying autonomous vehicles, perhaps because it has a head start and doesn’t want to lose its competitive advantage while waiting for the rest of the industry to catch up and adopt uniform standards.

Exactly how and when the industry and the public sector will establish standards and regulations for self-driving technology remains to be seen. It seems inevitable that eventually some company will put vehicles into service that kill or injure enough people that regulators will have to respond. In the meantime, the autonomous vehicle companies would serve their own and society’s long-term interests if they pooled some of their massive investments to develop an independent driver’s test for robots to keep the dangerous ones off the road.

Find a Ride in the Slow Lane

The vision of widespread use of electric robo-taxi service holds tremendous promise for Cascadia’s cities. The technology could lower costs, reduce emissions, free up valuable real estate from parking, and improve safety. That bright future depends on full-size automated electric vehicles that can operate safely at highway speeds in all manner of traffic and weather conditions. Until those capabilities are proven, one way that cities can get themselves and their citizens ready is by exploiting advances in wireless bandwidth and artificial intelligence to robotize the repositioning of the new generation of “micromobility services” such as shared e-bikes and e-scooters.

Operators of these micromobility services could use remote pilots with screens and joysticks, assisted by artificial intelligence, to move e-bikes and e-scooters at walking speeds along routes and at times of day when they would not interfere with other uses of the rights of way. Such redistribution of lightweight devices will not hurt anybody; you can’t run someone over at 5 mph. This slow start would allow operators to reposition vehicles that block sidewalks, redeploy them to high-demand areas, send them to recharging stations, and deliver them directly to people who request them.
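As a rough sketch of the kind of dispatch logic such a service might run, the example below encodes a few plausible rules; the thresholds, categories, and destinations are hypothetical, not any operator’s actual system.

```python
# Hypothetical sketch of repositioning logic for a remotely piloted scooter fleet.
# Rules, thresholds, and destination names are illustrative assumptions only.
from dataclasses import dataclass

WALKING_SPEED_MPH = 5      # cap repositioning movement at walking speed
LOW_BATTERY_PCT = 20

@dataclass
class Scooter:
    id: str
    battery_pct: int
    blocking_sidewalk: bool
    zone_demand: str           # "low", "medium", or "high"
    requested_by_user: bool

def next_destination(s: Scooter) -> str:
    """Pick where a remote pilot (or automated planner) should move the scooter next."""
    if s.requested_by_user:
        return "deliver to requesting rider"
    if s.battery_pct < LOW_BATTERY_PCT:
        return "nearest charging station"
    if s.blocking_sidewalk:
        return "nearest designated parking corral"
    if s.zone_demand == "low":
        return "reposition to high-demand zone (e.g., transit stop)"
    return "stay in place"

fleet = [
    Scooter("s1", battery_pct=12, blocking_sidewalk=False, zone_demand="high", requested_by_user=False),
    Scooter("s2", battery_pct=80, blocking_sidewalk=True, zone_demand="medium", requested_by_user=False),
    Scooter("s3", battery_pct=65, blocking_sidewalk=False, zone_demand="low", requested_by_user=False),
]
for s in fleet:
    print(s.id, "->", next_destination(s), f"(max {WALKING_SPEED_MPH} mph)")
```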

The technology would make the services more convenient for users, address cities’ concerns about sidewalk clutter, and provide low-cost last-mile connections to transit stops. Slow-speed, self-driving bikes and scooters would also introduce autonomous technology to communities in a low-risk way that would build trust ahead of the eventual deployment of larger and faster vehicles. This approach also puts local governments firmly in control of deployment, since they have clear jurisdiction over the operation of micromobility services.

Segway Ninebot and Tortoise recently announced capabilities for the remote piloting of scooters and e-bikes. Segway Ninebot uses a three-wheeled scooter, while Tortoise retrofits existing scooters and bikes with training wheels that drop down when remote operators reposition them. Tortoise will begin offering service with scooters in a suburb of Atlanta early next year.

The potential market for these services is huge. According to a 2017 travel survey by the Puget Sound Regional Council, over half of all trips made in the four-county region surrounding Seattle are two miles or less, a distance travelers can cover on a bike or scooter in under 10 minutes. Cascadia’s cities, including Seattle, Bellevue, Portland, and Vancouver, BC, have invested in protected bike lanes to make choosing these modes safer and more convenient.
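For reference, the 10-minute figure follows from simple distance-and-speed arithmetic; the cruising speeds below are rough assumptions, not survey data.

```python
# Rough travel-time check for a two-mile trip; speeds are approximate assumptions.
trip_miles = 2.0
speeds_mph = {"e-scooter": 15, "e-bike": 15, "conventional bike": 12}

for mode, mph in speeds_mph.items():
    minutes = trip_miles / mph * 60
    print(f"{mode}: about {minutes:.0f} minutes for a {trip_miles:g}-mile trip")
```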

Serving short-distance trips first with low-cost, lightweight electric vehicles that can’t kill people when in autonomous mode is a low-risk way to introduce the technology. Customers, operators, and cities in Cascadia would gain confidence in how the autonomous systems work and could gradually add larger and faster vehicles as the technology companies and their regulators prove their safety. The fastest and safest path to our autonomous transportation future could be in the slow lane.