
View more from News & Articles or Primerus Weekly

By Mark F. Geraci, Esq., CFE
Lewis Johs Avallone Aviles LLP
Islandia, New York

At the end of a 1982 episode of the iconic television series Knight Rider, David Hasselhoff’s character Michael asks his artificially intelligent car KITT how it feels to be “one of a kind.” With a tinge of sadness and loneliness in his voice, KITT tells him, “It’s a familiar feeling.”

Looking at the world in 2018, KITT might have appreciated being one of a kind. On Sunday, March 18, 2018, an autonomous vehicle operated by Uber, with an emergency back-up driver behind the wheel, struck and killed 49-year-old Elaine Herzberg, leaving many to question the reliability of this new technology.

While attorneys and policy makers point fingers, legal scholars speculate that liability will shift away from drivers and toward the automotive industry as products liability claims take form.

To prove a prima facie case of strict liability for a design defect, a plaintiff must show:

  1. That the product, as marketed by the manufacturer, was not reasonably safe in its design,

  2. That it was feasible to design the product in a safer manner, and

  3. That the defective design was a substantial factor in causing the plaintiff's injury.

Magadan v. Interlake Packaging Corp., 45 A.D.3d 650 (2d Dep’t 2007).

At the advent of this innovative technology, perhaps the only way to assess the “reasonable safety” of an autonomous-car is to compare it with the safety of a human driver. Autonomous-car advocates might point to statistics indicating that 95% of crashes are attributed to human error. With over 37,000 deaths per year from road crashes, and an additional 2.35 million resulting injuries and disabilities, the statistic seems even more compelling.

If we use the human error standard as a basis for comparison, does that simply relegate Elaine Herzberg’s death to the remaining 5% of crashes? Or do the statistics need to be reconfigured in light of these new technological advancements, making room for engineering miscalculations and software bugs?

Perhaps the human error standard sets the bar too low for the autonomous-car industry, allowing it to escape liability in even the most tragic circumstances. Even if autonomous-cars absorbed responsibility for 47.4% of the crashes now caused by humans, they would still outperform us. Plaintiffs bringing products liability misrepresentation suits in that context might have difficulty overcoming such a hurdle.
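One plausible reading of that 47.4% figure can be sketched with rough arithmetic. The numbers below are illustrative assumptions, not data from the article: a normalized count of 100 crashes, with 95% attributed to human error per the statistic cited above.

```python
# Back-of-the-envelope sketch (illustrative assumptions only).
TOTAL_CRASHES = 100.0        # normalize current crashes to 100
HUMAN_ERROR_SHARE = 0.95     # share of crashes attributed to human error
residual = TOTAL_CRASHES * (1 - HUMAN_ERROR_SHARE)  # crashes with other causes

# Suppose autonomous vehicles replaced human drivers but introduced their own
# failures equal to 47.4% of the crashes humans used to cause.
AV_FAILURE_RATE = 0.474
av_crashes = TOTAL_CRASHES * HUMAN_ERROR_SHARE * AV_FAILURE_RATE

total_with_avs = residual + av_crashes
print(round(total_with_avs, 1))  # roughly 50.0, about half of today's crashes
```

On this reading, even an autonomous fleet that recreated nearly half of all human-caused crashes would still cut the total roughly in half, which is the hurdle a plaintiff would face.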

As companies like Uber, Volvo, Tesla, BMW and even Google vie for market share of the autonomous-car enterprise, the standard for comparison might move away from human error, leaving judges and juries to assess “reasonable safety” as a comparison of different autonomous-cars in the market. As the industry develops and competition grows fiercer, finders of fact might be in a better position to assess what reasonable changes in the vehicle’s automated system might have prevented the crash.

The question might turn to a battle of the experts, with engineers and software designers debating the optimal configuration of a truly safe autonomous-car. This raises the question of how manufacturers of lower-end autonomous-cars might fare in court when compared with their higher-end counterparts. Since software is only as good as the hardware supporting it, could simple budgetary decisions expose automakers to new liability in the realm of the autonomous-car industry?

And what about liability for municipalities, like Tempe, Arizona, where Elaine Herzberg was killed? In these experimental years, should municipalities be liable for allowing tech entrepreneurs to use their roads as a laboratory? And what of the human emergency back-up driver? Can their nonfeasance, the failure to intervene and prevent the accident, expose them to personal liability? It seems that there are more questions than answers in this awkward age of growing pains for artificially intelligent vehicles.

There also seem to be infinite grounds to challenge each and every decision made by an autonomous-car’s operating system. As such, changes to discovery might be looming as the industry develops, turning away from traditional depositions of drivers and victims toward an empirical production of computer logs and engineering and software data from manufacturers. In this context, what need will there be for accident reconstructionists and post-crash investigations?

And what role should the courts play in the coming age of artificial intelligence? As in the world of patent and copyright law, courts might face striking a balance between imposing liability on the makers of autonomous-cars and encouraging their development and fostering competition in the market for the ever-hopeful production of safer models. Such a weighty decision becomes even more onerous with loss of life and limb hanging in the balance, raising the question of whether legislatures and policy makers are better equipped to make such calls, and to endure their fallout.

For now, only time will tell whether Elaine Herzberg’s death remains an unlikely tragedy or becomes just another statistic amid a booming enterprise of artificially intelligent vehicles. So far, quite simply, it seems that “trust” might in fact “rust” when we rely on the graces of our robotic sidekicks.