Document Type

Article

Abstract

Automated systems like self-driving cars and "smart" thermostats are a challenge for fault-based legal regimes like negligence because they have the potential to behave in unpredictable ways. How can people who build and deploy complex automated systems be said to be at fault when they could not have reasonably anticipated the behavior (and thus the risk) of their tools? Part of the problem is that the legal system has yet to settle on the language for identifying culpable behavior in the design and deployment of automated systems. In this article we offer an education theory of fault for autonomous systems—a new way to think about fault for all the relevant stakeholders who create and deploy "smart" technologies. We argue that the most important failures that lead autonomous systems to cause unpredictable harm stem from a lack of communication, clarity, and education among the procurers, developers, and users of these technologies. In other words, while it is hard to exert meaningful control over automated systems to make them act predictably, developers and procurers have great control over how thoroughly they test these tools and how clearly they articulate their limits to all the other relevant parties. This makes testing and education one of the most legally relevant points of failure when automated systems harm people. By recognizing a responsibility to test and educate one another, these stakeholders can reduce foreseeable errors, set more accurate expectations, and make autonomous systems more predictable and safer.
