FDA continues to emphasize the importance of artificial intelligence in health care. Now, FDA has committed to a program of creating a knowledgeable, sustainable, and agile data science workforce ready to review and approve devices based on artificial intelligence.

In April of last year, FDA Commissioner Scott Gottlieb, in discussing the transformation of FDA’s approach to digital health, stated that one of the most promising digital health tools is Artificial Intelligence (“AI”). Then, in September 2018, the Commissioner again referenced AI as one of the drivers of an unparalleled period of innovation in medical device manufacturing. And we saw FDA approve a record number of AI devices in 2018. We have discussed this here and here.

On January 28, 2019, Gottlieb announced that Information Exchange and Data Transformation (INFORMED), an incubator for collaborative oncology regulatory science research focused on supporting innovations that enhance FDA’s mission of promoting and protecting the public health, will work with FDA’s medical product centers. Together with external academic partners, they will develop an FDA curriculum on machine learning and artificial intelligence.

INFORMED was founded to expand organizational and technical infrastructure for big data analytics and to examine modern approaches to evidence generation in support of regulatory decisions. One of its missions is to identify opportunities for machine learning and artificial intelligence to improve existing regulatory decision-making. So, it makes sense for FDA to use this existing (although oncology-focused) incubator to spread knowledge across all of its centers. While it is unclear what the curriculum will look like and who the “academic partners” are, FDA’s announcement that it is seeking outside assistance and committing to training its personnel in anticipation of the growth of AI in health care is an important advance for all those engaged in developing AI-based devices.

Apple was recently granted a patent (10,189,434) for an augmented safety restraint. Say that again? Yes, with the rise of autonomous vehicles comes the need for changes in the safety devices placed within these vehicles. If you are wondering why this is an important patent, you are probably not alone. Currently, the states that have addressed the use of autonomous vehicles have done so with little (if any) emphasis on the safety features within the vehicle, beyond requiring what federal regulations already mandate for non-autonomous vehicles.

So, what is different about Apple’s augmented safety restraint? The patent provides that the restraint, beyond securing the passenger within the vehicle, can

  • provide holistic monitoring of passenger status;
  • supply entertainment and comfort;
  • allow communication or interaction between the passenger, vehicle, and other passengers within the vehicle; and
  • generate power sufficient to run the aforementioned capabilities.

The reason for all of these features is to “allow for enhancement of passenger activities, improved interaction with the vehicle and/or other passengers, and energetic autonomy while at the same time meeting regulatory safety requirements.”

To perform the above, the device(s) will be attached to an exposed surface of, or embedded within, the restraint. The suggested devices include contact-sensitive features, which the passenger must touch to engage (for example, a fingerprint sensor), and non-contact features (for example, an optical or voice-activated sensor).
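
For readers who want a concrete picture, here is a minimal Python sketch of the contact versus non-contact distinction the patent draws. The class and sensor names are entirely our own invention, not Apple’s:

```python
from dataclasses import dataclass
from enum import Enum, auto

class SensorKind(Enum):
    CONTACT = auto()      # engaged only by touch, e.g. a fingerprint sensor
    NON_CONTACT = auto()  # e.g. an optical or voice-activated sensor

@dataclass
class RestraintSensor:
    name: str
    kind: SensorKind
    engaged: bool = False

    def engage(self, touched: bool = False) -> bool:
        """Engage the sensor; contact sensors require a physical touch."""
        if self.kind is SensorKind.CONTACT and not touched:
            return False
        self.engaged = True
        return True

# Hypothetical devices attached to, or embedded within, the restraint
sensors = [
    RestraintSensor("fingerprint", SensorKind.CONTACT),
    RestraintSensor("voice", SensorKind.NON_CONTACT),
]
print(sensors[0].engage(touched=True))  # True: passenger touched the pad
print(sensors[1].engage())              # True: no touch required
```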

In addition to the common three-point seat belt, other restraint types (e.g., inflatable belts, webs, harnesses) are noted as possible designs for the augmented restraints. Some features could even be activated with or without a passenger present in the vehicle (such as devices that aid passenger ingress, or that operate when the vehicle is transporting only packages).

In what appears to be an effort to maintain compliance with current safety standards, the restraints may also include an airbag, and any or all of the augmented safety restraints can include a pre-tensioner device. The restraints have a passenger-securing structure, for example a belt or harness secured to either the vehicle or the passenger seat. There is also a passenger-facing surface that can engage the passenger’s body to restrain the passenger’s motion relative to their seat.

Apple’s patent suggests numerous iterations of how the augmented safety restraint could look and work. How these iterations affect vehicle safety has yet to be determined. Without guidance, manufacturers are left designing to standards that may not apply to autonomous vehicles or that have yet to be created. As the federal government continues to fail to pass any legislation regarding autonomous vehicles, this may be yet another area in which states will need to act on their own while autonomous vehicles proliferate on our roadways.

A recently published article in Nature Medicine by Eric Topol, M.D., of the Department of Molecular Medicine, Scripps Research Institute, suggests that the convergence of human and artificial intelligence can lead to “high-performance medicine.” High-performance medicine, he says, will be data-driven. The development of software that can process massive amounts of information quickly, accurately, and inexpensively will lay the foundation for this hybrid practice of medicine. It will not be devoid of human interaction and input, he says, but it will be more reliant on technology and less reliant on human resources. It will combine computer-developed algorithms with physician and patient input. Topol believes that, in the long run, this will elevate the practice of medicine and patient health.

Topol sees impacts of AI at three levels of medicine—

  • Clinicians—by enabling more rapid and more accurate image interpretation (e.g., CT scans);
  • Health systems—by improving workflows and possibly reducing medical errors; and
  • Patients—by enabling them to process more data to promote better health.

While the author sees roadblocks to the integration of AI and human intelligence in medicine, such as data security, privacy, and bias, he believes the improvements will be actualized over time. Topol discusses a number of disciplines in which the application of AI has already had a positive effect: radiology, pathology, dermatology, ophthalmology, gastroenterology, and mental health. Further, Topol discusses FDA’s new pathways for approval of AI medical algorithms and the fact that FDA approved thirteen AI devices and software products in 2018, as opposed to only two in 2017.

We discussed FDA’s stated commitment to AI, FDA’s regulatory pathways for approval and FDA approval of AI related devices and software here.

Topol correctly maintains that rigorous review, whether by an agency (such as FDA) or by private industry, is necessary for the safe development of new technology generated from the combination of human and artificial intelligence. This includes peer-reviewed publications on FDA-approved devices and software, something he argues has to date been lacking. The author does a nice job of laying out the base of evidence for the use of AI in medicine and describing the potential pitfalls of proceeding without caution and oversight, as is true with other applications of AI. The article is a worthy read for those involved in the field of medicine, including those engaged in the development of medical devices and related software.

As we discussed in our January 8 post, federal, state, and local agencies are struggling with the lack of uniform standards governing the development and testing of autonomous vehicles. A recent report prepared for Uber Advanced Technologies Group by RAND Corporation, Measuring Automated Vehicle Safety: Forging a Framework, attempts to create a framework for measuring the safety of autonomous vehicles (AVs).

The report’s authors considered how to define safety for AVs, how to measure their safety, and how to communicate what is learned or understood about them. The AV safety framework proposed in the report has three components:

  1. Settings: contexts that give rise to safety measures, such as computer-based simulation, closed courses, public roads with a safety driver present or remotely available, and public roads without a safety driver.
  2. Stages: the life stages of AV models during which these measures can be generated. This typically involves a development stage, where the product is created and refined, and a deployment stage, where the product is released to the public.
  3. Measures: the meaning of new and traditional measures obtained in each setting as AVs move through each stage. One category of measurement consists of the standards, processes, procedures, and design requirements involved in creating the AV system hardware, software, and vehicle components. The other two categories are “leading” and “lagging” measures: leading measures reflect performance, activity, and prevention, while lagging measures are observations of safety outcomes or harm (see the sketch below).
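
To make the leading/lagging distinction concrete, here is a minimal Python sketch. The field names, figures, and rate conventions are our own illustrative assumptions, not the report’s:

```python
# Hypothetical AV test log: miles driven, safety-driver disengagements
# (a "leading" proxy for prevention activity), and crashes (a "lagging"
# observation of harm).

def leading_measure(disengagements: int, miles: float) -> float:
    """Disengagements per 1,000 miles: reflects prevention activity."""
    return 1000 * disengagements / miles

def lagging_measure(crashes: int, miles: float) -> float:
    """Crashes per million miles: observes safety outcomes or harm."""
    return 1_000_000 * crashes / miles

log = {"miles": 250_000.0, "disengagements": 120, "crashes": 1}
print(leading_measure(log["disengagements"], log["miles"]))  # 0.48
print(lagging_measure(log["crashes"], log["miles"]))         # 4.0
```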

However, a challenge to implementing this framework is the availability of statistically significant data. Currently, AVs operate in small numbers and in limited situations. Further, much AV-related data is not publicly available or accessible.

The report notes that certain categories of data, such as how an AV system perceives and interacts with the external environment, are unlikely to be shared between companies due to the highly proprietary nature of the data. Other categories, such as the external environment encountered by the vehicle, could be shared via a database containing the environmental circumstances, infrastructure, and traffic, though the data would need to be anonymized. The data could then be used in AV development and improvement. Existing traffic safety databases could also be updated to include more detailed data on AVs, and the anonymization and eventual analysis of such data will become more feasible as AVs become more common.
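
A minimal Python sketch of what that anonymization step could look like, assuming an invented record schema (the report does not prescribe one):

```python
import hashlib

# Hypothetical AV environment record; the vehicle_id is proprietary,
# while weather, road type, and traffic are the shareable categories.
RAW_RECORD = {
    "vehicle_id": "UATG-0042",
    "lat": 40.440625, "lon": -79.995886,
    "weather": "rain", "road_type": "urban arterial", "traffic": "heavy",
}

def anonymize(record: dict) -> dict:
    shared = dict(record)
    # Replace the identifier with a salted one-way hash
    shared["vehicle_id"] = hashlib.sha256(
        (record["vehicle_id"] + "secret-salt").encode()
    ).hexdigest()[:12]
    # Coarsen coordinates (~1 km) so individual routes cannot be reconstructed
    shared["lat"] = round(record["lat"], 2)
    shared["lon"] = round(record["lon"], 2)
    return shared

print(anonymize(RAW_RECORD))
```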

As described in the Eckert Seamans Autonomous Vehicle Legislative Survey, federal legislation (the SELF DRIVE Act in the House and the AV START Act in the Senate) was not enacted during 2018. The U.S. Department of Transportation issued federal AV guidance in October 2018; regulation otherwise remains at the state level. Uniform standards and increased information-sharing could lead to more reliable measurement of AV safety and greater predictability in the realm of product liability.

The report offers the following recommendations:

  • During AV development, regulators and the public should focus their concerns on the public’s safety rather than on the speed or progress of development.
  • Competitors should report on progress at key demonstration points and, to the extent possible, adopt common protocols to facilitate fair comparisons.
  • Safety events that occur in the absence of statistically significant data should be treated as case studies and used as opportunities for learning by industry professionals, policymakers, and the public.
  • Efforts should be made to develop a common approach specifying where, when, and under what circumstances an AV can operate. This would improve communication between consumers and regulators, and would make it easier to track and compare AVs through different phases of development.
  • Research should be done on how to measure and provide information on AV system safety when the system is frequently being updated. AV safety measures must balance reflecting the current system’s safety level with prior safety records.

As the RAND report notes, AV consortia have started to emerge, including the Self-Driving Coalition for Safer Streets, which was established by Ford, Lyft, Uber, Volvo Cars, and Waymo, and the Partnership for Transportation Innovation and Opportunity, whose members include Daimler, FedEx, Ford, Lyft, Toyota, Uber, and Waymo. These consortia are facilitating broad participation in standard-setting, and may eventually build momentum toward a larger degree of information-sharing about practices, tools, and even data. (See previous post: Automotive manufacturers, technology companies among those teaming up to PAVE the way for autonomous vehicle)

Studies and reports seem to be converging on a single conclusion: cooperation among policymakers, manufacturers, technology companies, and the public is a must. In the past, being first to market was a leading measure of progress. With AV technology, cooperation and information-sharing among interested parties, including the general public, appears to be the way forward. This is still an area with many more questions than answers. Today, as demonstrated by the groups being formed, there is a willingness to work together to realize the many potential benefits of AV technology for the good of all.

Yesterday, at CES 2019 in Las Vegas, it was announced that top automakers and other industry organizations have united to form Partners for Automated Vehicle Education (PAVE). PAVE’s mission is to educate “policymakers and the public about automated vehicles and the increased safety, mobility and sustainability they can bring.” Current members include Toyota, General Motors, Waymo, Audi, the National Safety Council, and SAE International.

As autonomous vehicles become more prevalent on U.S. roads, questions and fears in the minds of policymakers and consumers seem to be on the rise. Members of the public have physically attacked Waymo vehicles, slicing tires and breaking windows. Congress declined to pass the self-driving car bill last year. PAVE hopes to help answer those questions and build trust with everyone who will be affected by the technology.

PAVE will work with legislators on driver-assistance technology and hold educational workshops on these technologies. It will present hands-on demonstrations so the public can experience driverless technology firsthand. Further, PAVE will work with car dealers and service centers, offering “educational materials” that can be disseminated to customers.

It will be interesting to follow PAVE’s future to see if a more direct approach with legislators, businesses, and consumers regarding this new technology will ease the tension. Subscribe to the Artificial Intelligence Law Blog to keep abreast of PAVE’s activities.

Advancing autonomous vehicles to widespread use requires significant testing, and when that testing is performed in real-world conditions, the safety of third parties must be a paramount and ongoing concern. The March 2018 crash of an Uber Advanced Technologies Group (UATG) autonomous vehicle in Arizona resulted in the death of a pedestrian. Local and federal findings included that the sole human backup driver was inattentive immediately prior to the accident and that the vehicle’s emergency braking systems (including Volvo’s own system) were not enabled at the time of the accident. As a result of the crash, UATG suspended all testing to self-examine and improve safety. In November 2018, it released to the National Highway Traffic Safety Administration (NHTSA) a report based in part on a review of the crash investigation. The report addresses operational, technical, and organizational changes to be imposed to improve the safety of UATG autonomous vehicles.

Based on these improvements, Uber submitted a Notice of Testing Highly Automated Vehicles Application to the Pennsylvania Department of Transportation (PA DOT) in November 2018. On December 17, 2018, PA DOT issued Uber a Letter of Authorization, valid for one year, to begin testing its autonomous vehicles.

The Authorization is consistent with the Commonwealth’s Automated Vehicle Testing Guidance issued on July 23, 2018.

Significant changes to UATG’s testing, including its Safety and Risk Mitigation Plan, as authorized by PA DOT, are as follows:

  • Operate a limited number of vehicles;
  • Operate those vehicles only during daylight hours on weekdays;
  • Operate them only in good weather;
  • Operate them in areas where most roads have speed limits restricted to 25 mph;
  • Operate them with two human backup drivers;
  • Operate them with more highly trained and audited backup drivers;
  • Operate them with the automatic emergency braking system and Volvo emergency braking system in operation.

UATG commenced testing under the Notice on December 20, 2018. Safety related to the testing of autonomous vehicles remains the subject of ongoing debate at the federal, state, local, and private levels. The proposed changes to UATG’s testing of its autonomous vehicles are consistent with Pennsylvania’s July Guidance and Uber’s November 2018 report. We will continue to monitor and review evolving public and private guidance on the safe testing of autonomous vehicles.

This post will explain why corporate directors should keep abreast of AI concepts to effectively fulfill their fiduciary duties.

Introduction to AI

In the context of this post, let’s define AI as the use by computers and machines of characteristics commonly associated with human intelligence, including reasoning and learning. Through algorithms, machine learning, or even deep learning, these devices process significant amounts of data to detect patterns, solve problems, or provide additional data for human consideration. Netflix, Pandora, and Amazon all use AI to recommend entertainment and products based on prior selections. Self-driving cars also utilize AI to detect and understand their surroundings and drive safely.
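
To ground the recommendation example, here is a minimal Python sketch of one common approach, content-based similarity scoring. The catalog, feature vectors, and titles are invented for illustration and do not reflect any particular service’s system:

```python
import math

# Each title is described by a feature vector: (action, drama, comedy)
catalog = {
    "thriller_a": [1.0, 0.2, 0.0],
    "thriller_b": [0.9, 0.3, 0.1],
    "romcom_c":   [0.1, 0.4, 1.0],
}

def cosine(u, v):
    """Cosine similarity: 1.0 means the two vectors point the same way."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def recommend(watched: str) -> str:
    """Return the unwatched title most similar to a prior selection."""
    return max(
        (t for t in catalog if t != watched),
        key=lambda t: cosine(catalog[watched], catalog[t]),
    )

print(recommend("thriller_a"))  # -> "thriller_b"
```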

Fiduciary Duties and Considerations of AI

Members of a corporate board are required to comply with the fiduciary duties of loyalty and care.  They have a duty to act in the best interests of the corporation and its shareholders, and must be fully informed before making decisions on behalf of the company.  This includes the subsidiary duty of oversight, which requires that directors have in place an effective reporting or monitoring system and an information system that allows them to detect potential risks to the company.

Board members should keep informed about developments in AI that may have transformative effects, or that might make a particular business model or product obsolete or less necessary. For example, ubiquitous free GPS apps that adapt to wrong turns, together with live online traffic mapping, have largely displaced the need for printed maps. Board members of companies that sold traditional maps should have seen this development around the bend.

Utilizing AI both to enhance the board’s decision-making capabilities and to analyze data may soon be more commonplace. For example, in 2014, a venture capital firm claimed to have “appointed” an AI program called Vital to its board of directors. Vital sifted through data about experimental drugs for age-related diseases and advised whether the firm should invest in them. Although Vital was not a voting member, and although all boards can consult experts or various sources of information to assist in decision-making, appointing an AI program to the board was indicative of the role AI can play in the governance of a corporation.

It is important to note, however, that in Delaware, board members must be “natural persons.”  Thus, “appointing an AI program” as a board member for a Delaware corporation would be impermissible.  See DGCL Section 141(b).

To the extent a board moves forward with adopting AI, it is crucial that the board not delegate its essential management functions or rely solely upon AI in making decisions for the corporation. Doing so would be a prohibited delegation of its duties.

Conclusion

This post has only skimmed the surface of why board members should consider the potential impact of AI. Every board, regardless of industry, should consider how AI might transform its business.

On December 11, 2018, Lyft was granted a patent by the United States Patent and Trademark Office for an Autonomous Vehicle Notification System (Patent No. US 10,152,892 B2). The system is designed with several functions, one of which produces a “talking car”: talking in the sense that it can flash notifications to people outside the vehicle, such as “safe to cross” (for individuals on foot) or “safe to pass” (for cyclists). The system can also display other messages, such as “yielding” and “warning turning left/right.”

It seems that concerns about the safety of autonomous vehicles are on the rise; just last week vigilantes attacked an autonomous vehicle traveling through Phoenix. So, direct communication between autonomous vehicles and individuals could go a long way toward gaining the public’s trust. However, this system seems to raise a new issue: what if the autonomous vehicle notifies an individual of safe passage, but another vehicle (autonomous or not) fails to stop for the individual? Further, how will the technology determine when it is safe to pass or cross, especially during the transition period when autonomous vehicles share the road with human drivers?
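
To illustrate why that determination is hard, here is a minimal, hypothetical Python sketch of the kind of judgment a “safe to cross” display would have to make. None of this reflects Lyft’s actual patented design; every threshold and input is an invented assumption:

```python
def crossing_message(own_speed_mps: float,
                     oncoming_gap_s: float,
                     pedestrian_cross_time_s: float = 6.0,
                     margin_s: float = 2.0) -> str:
    """Decide what to display to a pedestrian waiting at a crosswalk."""
    if own_speed_mps > 0.1:
        return "WARNING: VEHICLE MOVING"
    # The hard part: the AV can only vouch for gaps its own sensors can
    # see, and other (human-driven) vehicles may not honor its prediction.
    if oncoming_gap_s >= pedestrian_cross_time_s + margin_s:
        return "SAFE TO CROSS"
    return "PLEASE WAIT"

print(crossing_message(own_speed_mps=0.0, oncoming_gap_s=9.5))  # SAFE TO CROSS
print(crossing_message(own_speed_mps=0.0, oncoming_gap_s=4.0))  # PLEASE WAIT
```

Even in this toy version, the message is only as good as the sensor-estimated gap, and it cannot bind the behavior of other drivers, which is precisely where the liability questions below arise.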

In the context of product liability litigation, the question becomes who then would be responsible if the individual is injured by the second vehicle? Would this be similar to situations where a human driver “waves” another person through what turns out to be an unsafe intersection?  Depending on jurisdiction, the driver indicating safe passage may be held liable for any subsequent injuries/damages, regardless of the negligence of those around them.  Where there is no driver, will the assignment of liability become more complicated?  Will juries now need to understand the technology and algorithms used to determine “safe to pass/safe to cross” notifications in order to determine if the system failed the individual?

While the notification system patented by Lyft has the potential to create a sense of security when autonomous vehicles interact with people outside the vehicle, it also raises questions that do not yet have answers. The ever-changing autonomous vehicle landscape responds to issues as they arise, but each response tends to create new issues of its own. This cycle will continue for some time as our roads transition to autonomous vehicles.

FDA is taking steps to embrace and enhance innovation in the field of artificial intelligence. It has already permitted the marketing of an AI-based medical device that detects certain diabetes-related eye problems (IDx-DR), a type of computer-aided detection and diagnosis software designed to detect wrist fractures in adults (OsteoDetect), and most recently, a platform that includes predictive monitoring for moderate- to high-risk surgical patients (HemoSphere).

FDA also embraced several AI-based products in late November when the Agency chose several new technologies as part of a contest to combat opioid abuse which it launched in May 2018. FDA’s Innovation Challenge, which ran through September 30, 2018, sought mHealth (mobile health) technology in any stage of development, including diagnostic tools that identify those with an increased risk for addiction, treatments for pain that eliminate the need for opioid analgesics, treatments for opioid use disorder or symptoms of opioid withdrawal, and technology that can prevent the diversion of prescription opioids.

The opioid crisis continues to ravage cities and towns across America. The selection of AI-based devices by FDA to aid in the opioid crisis is important as it shows

  • FDA’s commitment to its Action Plan to address the opioid crisis;
  • FDA’s recognition that AI is an important technology that it must address and encourage;
  • FDA’s willingness to work with developers of AI devices to establish new pathways for approval; and
  • The need for FDA to clarify its understanding of AI and how it will guide and regulate industry moving forward.

FDA received over 250 entries prior to the September deadline. In each proposal, applicants described the novelty of the medical device or concept; the development plan for the device; the team that would be responsible for developing it; the anticipated benefit of the device when used by patients; and the impact on public health as compared to other available alternatives. Medical devices at any stage of development were eligible for the challenge; feasibility and the potential impact of FDA’s participation in development on expediting marketing of the device were factors considered when reviewing the submissions.

A team from the FDA’s Center for Devices and Radiological Health (CDRH) evaluated the many entries and chose eight to work with closely, accelerating development and expediting marketing-application review of these innovative products, similar to what occurs under its Breakthrough Devices Program.

Several of the selected entries involve pattern recognition, whether by predefined algorithm or machine learning, to prevent, detect, or manage and treat opioid abuse. For example, Silicon Valley-based startup CognifiSense is developing a virtual reality therapy as part of a system to treat and manage pain. CognifiSense uses a software platform that provides psychological and experiential training to chronic pain patients to normalize their pain perception. Another FDA-chosen product, iPill Dispenser, uses fingerprint biometrics in a mobile app that aims to curb over-consumption by dispensing pills according to the prescription, and it permits physicians to interact with usage data to adjust dosing regimens. Yet another, from Milliman, involves predictive analytics and pattern recognition to assess a patient’s potential for opioid abuse before prescribing, as well as to detect physician over-prescribing.
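
As an illustration of the pattern-recognition idea behind pre-prescribing risk assessment, here is a minimal Python sketch of a logistic risk score. The features, weights, and flagging threshold are entirely invented; this is not Milliman’s (or anyone’s) actual model:

```python
import math

# Invented weights over simple claims-style features
WEIGHTS = {
    "prior_opioid_scripts": 0.40,
    "overlapping_benzo_rx": 0.90,
    "pharmacy_count_90d":   0.35,
    "intercept":           -3.00,
}

def abuse_risk(patient: dict) -> float:
    """Logistic risk score in [0, 1] from simple prescribing features."""
    z = WEIGHTS["intercept"]
    z += WEIGHTS["prior_opioid_scripts"] * patient["prior_opioid_scripts"]
    z += WEIGHTS["overlapping_benzo_rx"] * patient["overlapping_benzo_rx"]
    z += WEIGHTS["pharmacy_count_90d"] * patient["pharmacy_count_90d"]
    return 1 / (1 + math.exp(-z))

patient = {"prior_opioid_scripts": 4,
           "overlapping_benzo_rx": 1,
           "pharmacy_count_90d": 3}
print(f"risk: {abuse_risk(patient):.2f}")  # ~0.63; flag for clinician review
```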

U.S. states with autonomous vehicle laws

The Autonomous Vehicle Legislative Survey describes the latest actions (regulations, executive orders, committee investigations, and the like) regarding autonomous vehicles taken by each U.S. state and territory. The survey also analyzes how each state’s position compares with those of other states.

The survey is meant to be an evolving document. It will be updated quarterly by its authors, Jodi Dyan Oley, Monakee D. Marseille, and Karen O. Moury of Eckert Seamans, to keep readers abreast of the ever-changing developments in this emerging topic. The authors are also working on a survey of the U.S. cities that are at the forefront of autonomous vehicle testing and development. If interested, please subscribe to the AI Blog for notifications of updates on our surveys.