Advancing autonomous vehicles to widespread use requires significant testing, and when that testing is performed in real-world conditions, the safety of third parties must be a paramount and evolving concern. The March 2018 crash of an Uber Advanced Technologies Group (UATG) autonomous vehicle in Arizona resulted in the death of a pedestrian. Local and federal findings included that the sole human backup driver was inattentive immediately prior to the accident and that the vehicle’s emergency braking systems (including Volvo’s own system) were not enabled at the time of the accident. As a result of the crash, UATG suspended all testing to self-examine and improve safety. In November 2018, it released to the National Highway Traffic Safety Administration (NHTSA) a report based in part upon a review of the crash investigation. The report addresses operational, technical, and organizational changes to be implemented to improve the safety of UATG autonomous vehicles.

Based on these improvements, Uber submitted a Notice of Testing Highly Automated Vehicles application to the Pennsylvania Department of Transportation (PA DOT) in November 2018. On December 17, 2018, PA DOT issued a Letter of Authorization for Uber to begin testing its autonomous vehicles (valid for one year).

The Authorization is consistent with the Commonwealth’s Automated Vehicle Testing Guidance issued on July 23, 2018.

Significant changes to UATG’s testing, including its Safety and Risk Mitigation Plan, as authorized by PA DOT, are as follows:

  • Operate a limited number of vehicles;
  • Operate those vehicles only during daylight hours on weekdays;
  • Operate them only in good weather;
  • Operate them only in areas where most roads have speed limits restricted to 25 mph;
  • Operate them with two human backup drivers;
  • Operate them with more highly trained and audited backup drivers;
  • Operate them with both the automatic emergency braking system and Volvo’s emergency braking system enabled.

UATG commenced testing under the Notice on December 20, 2018. Safety in the testing of autonomous vehicles remains the subject of ongoing debate at the federal, state, local, and private levels. The changes to UATG’s testing of its autonomous vehicles are consistent with Pennsylvania’s July Guidance and Uber’s November 2018 report. We will continue to monitor and review evolving public and private guidance on the safe testing of autonomous vehicles.

This post will explain why corporate directors should keep abreast of AI concepts to effectively fulfill their fiduciary duties.

Introduction to AI

In the context of this post, let’s define AI as the use by computers and machines of characteristics commonly associated with human intelligence, including reasoning and learning. Through algorithms, machine learning, or even deep learning, these devices process significant amounts of data to detect patterns, solve problems, or provide additional data for human consideration. Netflix, Pandora, and Amazon all use AI to recommend entertainment and products based on prior selections. Self-driving cars also utilize AI to detect and understand their surroundings and drive safely.
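
To make the pattern-detection idea concrete, here is a minimal sketch of how a recommender might score unwatched titles against a viewer’s prior selections. The catalog, tags, and scoring below are hypothetical and purely illustrative, not any company’s actual system:

```python
# Minimal sketch of the pattern-detection idea behind a recommender.
# All titles, tags, and data here are hypothetical, for illustration only.
from collections import Counter

# Each title is tagged with simple genre features.
CATALOG = {
    "Space Saga":     {"sci-fi", "adventure"},
    "Court Drama":    {"drama", "legal"},
    "Robot Uprising": {"sci-fi", "action"},
    "Legal Eagles":   {"legal", "comedy"},
}

def recommend(watched: list) -> str:
    """Suggest the unwatched title sharing the most tags with prior picks."""
    seen_tags = Counter(tag for title in watched for tag in CATALOG[title])
    candidates = [t for t in CATALOG if t not in watched]
    return max(candidates, key=lambda t: sum(seen_tags[tag] for tag in CATALOG[t]))

print(recommend(["Space Saga"]))  # -> "Robot Uprising" (shares the "sci-fi" tag)
```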

Fiduciary Duties and Considerations of AI

Members of a corporate board are required to comply with the fiduciary duties of loyalty and care.  They have a duty to act in the best interests of the corporation and its shareholders, and must be fully informed before making decisions on behalf of the company.  This includes the subsidiary duty of oversight, which requires that directors have in place an effective reporting or monitoring system and an information system that allows them to detect potential risks to the company.

Board members should keep informed about developments in AI that may have transformative effects, or that might make a particular business model or product obsolete or less necessary. For example, ubiquitous free GPS apps that adapt to wrong turns, together with online live traffic mapping, have largely displaced the need for printed maps. Board members of companies that sold traditional maps should have seen this development around the bend.

Utilizing AI both to enhance the board’s decision-making capabilities and to analyze data may soon be more commonplace. For example, in 2014, a venture capital firm claimed to have “appointed” an AI program called Vital to its board of directors. Vital sifted through data about drugs being tested for age-related diseases and advised whether the firm should invest in them. Although Vital was not a voting member, and although all boards may consult experts or various sources of information to assist in decision-making, appointing an AI program to the board was indicative of the role AI can play in the governance of a corporation.

It is important to note, however, that in Delaware, board members must be “natural persons.”  Thus, “appointing an AI program” as a board member for a Delaware corporation would be impermissible.  See DGCL Section 141(b).

To the extent a board moves forward to adopt AI, it is crucial that the board not delegate its essential management functions or rely solely upon AI in making decisions for the corporation. Doing so would be a prohibited delegation of its duties.

Conclusion

This post has only skimmed the surface of why board members should consider the potential impact of AI. Every board, regardless of industry, should consider how AI might transform its business.

On December 11, 2018, Lyft was granted a patent by the United States Patent and Trademark Office for an Autonomous Vehicle Notification System (Patent No. US 10,152,892 B2). The System is designed with several functions, one of which produces a “talking car.” It “talks” in the sense that it can flash notifications to people outside the vehicle, such as “safe to cross” (for individuals on foot) or “safe to pass” (for cyclists). The system can also display other messages, such as “yielding” and “warning turning left/right”.
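
As a rough illustration of how such a system might choose what to display, here is a hypothetical sketch in Python. The function, inputs, and decision logic are assumptions built around the messages described in the patent, not Lyft’s actual implementation:

```python
# Hypothetical sketch of message-selection logic for an external vehicle
# display; illustrative only, not Lyft's patented implementation.
from enum import Enum, auto
from typing import Optional

class RoadUser(Enum):
    PEDESTRIAN = auto()
    CYCLIST = auto()

def select_message(user: RoadUser, vehicle_stopped: bool,
                   turning: Optional[str] = None) -> str:
    """Choose the notification to flash for a detected road user."""
    if turning:                        # e.g., "left" or "right"
        return f"warning turning {turning}"
    if vehicle_stopped:                # vehicle has yielded; signal safe passage
        return "safe to cross" if user is RoadUser.PEDESTRIAN else "safe to pass"
    return "yielding"                  # vehicle is still slowing to a stop

print(select_message(RoadUser.CYCLIST, vehicle_stopped=True))  # -> "safe to pass"
```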

It seems that concerns about the safety of autonomous vehicles are on the rise; just last week, vigilantes attacked an autonomous vehicle traveling through Phoenix. Direct communication between autonomous vehicles and individuals could therefore go a long way toward gaining the public’s trust. However, this system seems to raise a new issue: what if the autonomous vehicle notifies the individual of safe passage, but another vehicle (autonomous or not) fails to stop for the individual? Further, how will the technology determine when it is safe to pass or cross, especially during the transition period when autonomous vehicles will be sharing the road with human drivers?

In the context of product liability litigation, the question becomes: who would be responsible if the individual is injured by the second vehicle? Would this be similar to situations where a human driver “waves” another person through what turns out to be an unsafe intersection? Depending on the jurisdiction, the driver indicating safe passage may be held liable for any subsequent injuries or damages, regardless of the negligence of those around them. Where there is no driver, will the assignment of liability become more complicated? Will juries now need to understand the technology and algorithms behind the “safe to pass/safe to cross” notifications in order to determine whether the system failed the individual?

While the notification system patented by Lyft has the potential to create a sense of security when autonomous vehicles interact with people outside the vehicle, it also raises new questions without answers. The ever-changing autonomous vehicle landscape responds to issues as they arise, but each response creates new issues of its own. This cycle will perpetuate itself for some time as our roads transition to autonomous vehicles.

FDA is taking steps to embrace and enhance innovation in the field of artificial intelligence. It has already permitted the marketing of an AI-based medical device that detects certain diabetes-related eye problems (IDx-DR), computer-aided detection and diagnosis software designed to identify wrist fractures in adults (OsteoDetect), and, most recently, a platform that includes predictive monitoring for moderate- to high-risk surgical patients (HemoSphere).

FDA also embraced several AI-based products in late November, when the Agency chose several new technologies as part of a contest to combat opioid abuse that it launched in May 2018. FDA’s Innovation Challenge, which ran through September 30, 2018, sought mHealth (mobile health) technology in any stage of development, including diagnostic tools that identify those with an increased risk for addiction, treatments for pain that eliminate the need for opioid analgesics, treatments for opioid use disorder or symptoms of opioid withdrawal, and technology that can prevent the diversion of prescription opioids.

The opioid crisis continues to ravage cities and towns across America. The selection of AI-based devices by FDA to aid in the opioid crisis is important because it shows:

  • FDA’s commitment to its Action Plan to address the opioid crisis;
  • FDA’s recognition that AI is an important technology that it must address and encourage;
  • FDA’s willingness to work with developers of AI devices to establish new pathways for approval; and
  • The need for FDA to clarify its understanding of AI and how it will guide and regulate industry moving forward.

FDA received over 250 entries prior to the September deadline. In each proposal, applicants described the novelty of the medical device or concept; the development plan for the device; the team that would be responsible for developing it; the anticipated benefit of the device when used by patients; and the impact on public health as compared to other available alternatives. Medical devices at any stage of development were eligible for the challenge; feasibility and the potential impact of FDA’s participation in development to expedite marketing of the device were factors considered when reviewing the submissions.

A team from FDA’s Center for Devices and Radiological Health (CDRH) evaluated the many entries and chose eight to work with closely, to accelerate development and expedite marketing application review of innovative products, similar to what occurs under its Breakthrough Devices Program.

Several of the selected entries involve pattern recognition, whether by predefined algorithm or machine learning, to prevent, detect, or manage and treat opioid abuse. For example, Silicon Valley-based startup CognifiSense is developing a virtual reality therapy as part of a system to treat and manage pain. CognifiSense uses a software platform that provides psychological and experiential training to chronic pain patients to normalize their pain perception. Another FDA-chosen product, iPill Dispenser, uses fingerprint biometrics in a mobile app that aims to curb over-consumption by dispensing pills according to the prescription, and permits physicians to interact with usage data to adjust dosing regimens. Yet another, Milliman, involves predictive analytics and pattern recognition to assess a patient’s potential for abuse of opioids before prescribing, as well as detection of physician over-prescribing.
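
For a sense of how pattern-based risk scoring of this kind can work, here is an illustrative sketch. The features, data, and flagging threshold are invented for the example and do not reflect Milliman’s or any vendor’s actual model:

```python
# Illustrative sketch of predictive risk scoring; hypothetical features
# and synthetic data only, not any vendor's actual model.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per patient: prior opioid prescriptions, number of
# distinct prescribers, total days of supply in the past year.
X = np.array([[0, 1, 7], [2, 1, 30], [5, 3, 90], [1, 1, 14], [6, 4, 120]])
y = np.array([0, 0, 1, 0, 1])  # 1 = later showed signs of misuse (synthetic)

model = LogisticRegression().fit(X, y)

# Score a new patient before prescribing; flag for review above a threshold.
new_patient = np.array([[4, 2, 60]])
risk = model.predict_proba(new_patient)[0, 1]
print(f"estimated misuse risk: {risk:.2f}" + (" -> flag for review" if risk > 0.5 else ""))
```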

U.S. states with autonomous vehicle laws

The Autonomous Vehicle Legislative Survey provides a description of the latest actions (regulations, executive orders, committee investigations, and the like) regarding autonomous vehicles taken by each U.S. state and territory. The survey also includes analysis of how each state’s position compares to other states.

The survey is meant to be an evolving document. It will be updated quarterly by its authors, Jodi Dyan Oley, Monakee D. Marseille, and Karen O. Moury of Eckert Seamans, to keep readers abreast of the ever-changing developments in this emerging topic. The authors are also working on a survey of the U.S. cities that are at the forefront of autonomous vehicle testing and development. If interested, please subscribe to the AI Blog for notifications of updates on our surveys.

A wealth of information is available that discusses the intersection of artificial intelligence and the law. Most people are familiar with the nearly ubiquitous examples of artificial intelligence in daily life, such as Siri and the Amazon Echo.

Most lawyers are familiar with existing applications of artificial intelligence such as “computer-assisted review,” sometimes called predictive coding, which allows large volumes of documents to be reviewed in a manner that is often faster, more accurate, and less expensive than human review alone. See, e.g., a Delaware Court of Chancery decision that nearly requires attorneys to consider that form of high-tech document review.
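
For readers curious how predictive coding works under the hood, here is a minimal sketch of the idea: a model learns from a small set of attorney-coded examples and then ranks the unreviewed corpus so likely-responsive documents surface first. The documents and labels below are hypothetical:

```python
# Minimal sketch of predictive coding: learn from attorney-labeled examples,
# then rank unreviewed documents. Documents and labels are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

labeled_docs = [
    "merger agreement draft with indemnification terms",   # responsive
    "lunch order for the team offsite",                    # not responsive
    "due diligence memo on acquisition targets",           # responsive
    "holiday party invitation",                            # not responsive
]
labels = [1, 0, 1, 0]  # 1 = responsive, as coded by reviewing attorneys

# Vectorize the text and learn which terms signal responsiveness.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(labeled_docs, labels)

# Rank the unreviewed corpus so human reviewers see likely hits first.
unreviewed = ["term sheet for proposed acquisition", "parking validation"]
scores = classifier.predict_proba(unreviewed)[:, 1]
for doc, score in sorted(zip(unreviewed, scores), key=lambda p: -p[1]):
    print(f"{score:.2f}  {doc}")
```

In practice the model is retrained iteratively as attorneys code more documents, which is what makes the approach both faster and more consistent than purely manual review.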

The confluence of artificial intelligence and corporate governance will require increasing attention.  For example, a venture capital firm reportedly attempted to “appoint” an artificial intelligence program as a member of the board of directors of a company.  That action, however, conflicts with a provision in the Delaware General Corporation Law at Section 141(b) that requires board members to be “natural persons.”

Another example of AI applied to the practice of law was described in a recent Forbes article about LawGeex, whose automated contract review platform answered questions about whether a non-disclosure agreement should be signed. That review was determined to be faster and more accurate than review of the same agreements by “human lawyers.”

Thomson Reuters has a website devoted to AI topics, which explains that AI is not a single technology; rather, it combines a number of different technologies applied to different functions through various applications. The site provides examples of existing applications of AI in the practice of law, such as:

  • legal research;
  • litigation strategy analytics;
  • online legal services; and
  • analysis of prior decisions by particular judges, to assist in predicting how a given judge would decide a particular issue.

We hope to provide more examples on these pages in the weeks to come about the intersection of law and AI.

Robert Campedel

In an “Expert Analysis” piece published by Law360, Eckert Seamans Member Robert Campedel addresses how the rise of autonomous vehicles and the transportation as a service (TaaS) industry are affecting the insurance industry, particularly in regard to product liability coverage. Read the full article on Law360. (Subscription may be required to access third-party content.)

David Rockman

In an “Insights” piece published by Bloomberg Environment, Eckert Seamans Member David Rockman discusses why artificial intelligence is an emerging issue of great potential interest in the world of environmental law. Read the full article on Bloomberg Environment. (Subscription may be required to access third-party content.)

Steven Kramer

In an “Expert Analysis” piece published by Law360, author Steven Kramer, member-in-charge of Eckert Seamans’ White Plains office, explores unique product liability issues that are coming into play in online marketplaces and companies involved in the transportation as a service, or TaaS, industry. Read the full article on Law360. (Subscription may be required to access third-party content.)

Welcome to the Artificial Intelligence Law Blog, brought to you by the AI, Robotics, and Autonomous Transportation Systems team at the law firm of Eckert Seamans.

The purpose of this blog is to present legal developments in the fields of artificial intelligence, robotics and autonomous transportation systems, legal issues on subjects that relate to these fields, and commentary about how the law might impact each of those cutting-edge areas of technology.

The co-editors of this blog are Mark C. Levy, Jodi Dyan Oley, and Francis G.X. Pileggi.

Our focus will include addressing the legal needs of innovative companies developing AI technology, manufacturers that employ this technology in the transportation industry, and companies using AI/robotic tech in the medical device, pharmaceutical, biologic, food product, health care, consumer product, and industrial products industries, among others.

We will also address the impact of AI on corporate governance and related legal topics. We welcome comments and suggestions from our readers.