The FDA recently issued the discussion paper “Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD)” and a request for comments.

Commissioner Scott Gottlieb issued a statement at the time of the paper’s release lauding artificial intelligence and machine learning as having “the potential to fundamentally transform the delivery of health care.” He stated that the “ability of artificial intelligence and machine learning software to learn from real-world feedback and improve its performance is spurring innovation and leading to the development of novel medical devices.” However, he recognized the inadequacy of traditional regulatory pathways to foster the growth of this technology, saying the FDA was “announcing steps to consider a new regulatory framework specifically tailored to promote the development of safe and effective medical devices that use advanced artificial intelligence algorithms.”

FDA employs a risk-based approach to determine whether a new premarket submission is required each time a manufacturer makes substantial, iterative changes through a software update or makes other changes that would significantly affect the device’s performance. But this approach is ill-suited to the review of AI and machine learning-based algorithms: medical devices that may continuously update themselves in response to real-world feedback.

Gottlieb noted as an example, “an algorithm that detects breast cancer lesions on mammograms could learn to improve the confidence with which it identifies lesions as cancerous or may learn to identify specific sub-types of breast cancer by continually learning from real-world use and feedback.” The agency concluded that it had to change its approach to foster software that evolves over time to improve care, while still ensuring safety and effectiveness. As a first step, the FDA released the paper exploring a new proposed framework that it believes will encourage development and may allow some modifications without review—“[I]t would be a more tailored fit than our existing regulatory paradigm for software as a medical device.”

Under the proposed framework, AI/ML-based SaMD would require a premarket submission when a software change or modification “significantly affects device performance or safety and effectiveness; the modification is to the device’s intended use; or the modification introduces a major change to the SaMD algorithm.” This approach was developed based on harmonized SaMD risk categorization principles that were established via the International Medical Devices Regulators Forum, FDA’s benefit-risk framework, risk management principles in FDA’s 2017 guidance on submitting new 510(k)s for software changes to existing devices, Software Pre-certification Pilot Program’s organizational-based total product life cycle approach, as well as the 510(k), De Novo classification request, and premarket application pathways.

So, where the software is expected to evolve over time rather than remain static, the anticipated “evolution” would be described at the time of submission, along with specific plans for post-market surveillance and, where appropriate, modification of intended use.

FDA will accept comments through June 3, 2019, via its website. The comments will be an important part of evolving the proposal into something that better fits the needs of this growing technology.

Another partnership has been formed in the autonomous vehicle world. SAE, Ford, GM, and Toyota have announced the formation of the Autonomous Vehicle Safety Consortium (AVSC).

The focus of the AVSC is to safely advance the testing, pre-competitive development, and deployment of SAE Level 4 and 5 automated vehicles. The AVSC’s goal is for its work to inform and accelerate the development of industry standards for autonomous vehicles and to harmonize with the efforts of other consortia and standards bodies throughout the world.

AVSC’s first efforts will focus on a framework for the safe deployment of autonomous vehicles that is broadly applicable to all developers, manufacturers, and integrators of autonomous technologies for use in product deployment. It will consist of a set of safety principles for SAE Level 4 and 5 automated driving systems focusing on:

  • testing prior to and while operating AVs on public roads;
  • data collection, protection, and sharing required to reconstruct certain events; and
  • interactions between AVs and other road users.

In an area of technology that continues to grow rapidly, collaboration among producers is key to the future success of autonomous vehicles. Most promising about the AVSC’s current objective is its goal to make its efforts applicable to all involved in this technology. This blog will follow closely and report on the actions of the AVSC as it works toward achieving its goals.

Today, The Wall Street Journal published a special section on artificial intelligence with multiple articles on its impact on various industries, new applications of the technology, and its general impact on corporate management. This link to an article on the latter topic is worthwhile, as are the other articles in today’s edition of the WSJ. (A subscription may be required to view the full articles.)

Supplement: Additional helpful articles about AI from The Wall Street Journal include the following:

  • WSJ Pro
  • Test Your Knowledge of AI

In a recent lawsuit filed in the Northern District of California, Tesla alleged that a former employee, Guangzhi Cao, copied more than 300,000 files of Tesla’s Autopilot-related source code before leaving to work for one of Tesla’s competitors, Xiaopeng Motors Technology Company Ltd.

This lawsuit highlights the difficulties associated with potential collaboration in the rapidly-advancing industry of self-driving vehicles.

Tesla brings the following claims in the lawsuit: (1) misappropriation of trade secrets in violation of the Defend Trade Secrets Act; (2) misappropriation of trade secrets in violation of the California Uniform Trade Secrets Act; (3) breach of contract due to Cao’s alleged breach of Tesla’s Non-Disclosure Agreement; and (4) breach of employee’s duty of loyalty.

The lawsuit, filed shortly after Cao’s departure from Tesla, seeks an injunction preventing Cao from (1) retaining, disclosing, or using any Tesla confidential or proprietary information in any manner, and (2) soliciting other Tesla employees or contractors to leave employment with Tesla for a period of one year following his departure. The lawsuit further seeks monetary damages and a requirement that Cao “submit to ongoing auditing of his personal and work-related systems and accounts to monitor for unlawful retention or use of Tesla’s confidential and proprietary information.”

The complaint notes that Tesla’s Autopilot team, including its full self-driving technology, is “a crown jewel of Tesla’s intellectual property portfolio” and states:

“Tesla has a global fleet of more than 500,000 cars, which have driven more than a billion collective miles with Autopilot activated. Every day, thousands of Autopilot-enabled Tesla vehicles provide real-time feedback to Tesla’s servers, yielding voluminous data that Tesla uses to continually improve the Autopilot system. This fleet gives Tesla exponentially more data than its autonomous vehicle competitors, who generally have only small fleets of prototype vehicles, and has allowed Tesla to accelerate its autonomy technology in a way no other company can.”

The primary focus of the complaint is, understandably, the threat posed to Tesla’s intellectual property due to misappropriation of the Autopilot source code. However, the complaint also indicates a reluctance to divulge the inputs that are used to improve the source code, namely, the data from Tesla’s existing vehicles:

“As another example, the source code also reflects and contains improvements that are built on Tesla’s massive volume of fleet telemetry data. If disclosed to a competitor, that competitor could use Tesla’s source code to copy Tesla’s work, compete with Tesla, or otherwise accelerate the development of its own vehicle autonomy technology.”

As more autonomous vehicles enter the marketplace, sharing of data inputs (but not the source code) could assist in developing common safety standards and protocols for autonomous vehicles. However, some companies may be wary of sharing their data inputs because this could decrease the competitive edge that comes from a larger and more established fleet of vehicles.

AAA began conducting surveys to evaluate consumer attitudes toward fully autonomous vehicles in 2016. An updated version of the survey, based on research conducted in January 2019, reveals that 71 percent of U.S. drivers would be afraid to ride in a fully autonomous vehicle.

Other significant findings include:

  • 53 percent of U.S. drivers would be comfortable with using fully self-driving vehicles for people mover systems in airports and amusement parks.
  • 44 percent of drivers would be comfortable with using fully self-driving vehicles for food delivery services.

AAA believed that experience with autonomous systems had an impact on driver perspectives concerning this technology. “Automated vehicle technology is evolving on a very public stage and, as a result, it is affecting how consumers feel about it,” Greg Brannon, AAA’s head of Autonomous Engineering and Industry Relations, is quoted as saying in the AAA news release issued last week. The survey found that drivers who interact with advanced driver assistance systems (ADAS), such as lane keeping assistance, adaptive cruise control, automatic emergency braking, and self-parking, were 68 percent more likely to trust these features than drivers who do not have access to them.

The survey concluded that the more drivers understand autonomous vehicle technology, the more receptive they will be to the technology. “Having the opportunity to interact with partially or fully automated vehicle technology will help remove some of the mystery for consumers and open the door for greater acceptance,” Brannon was quoted as saying in the news release.

Based on the results of this study, it appears that consumer confidence lags well behind the technological advances and developments of the autonomous vehicle industry.

A new federal council has been created that may help bridge the gap between the federal government and the autonomous vehicle laws and regulations currently being adopted by cities and states across the country. The Secretary of the U.S. Department of Transportation (USDOT), Elaine Chao, announced yesterday at the South by Southwest Conference (SXSW) the creation of a new council to help further the advancement of autonomous vehicles, among other technologies. The Non-Traditional and Emerging Transportation Technology (NETT) Council will identify and resolve jurisdictional and regulatory gaps that may impede the deployment of new technology, such as tunneling, hyperloop, autonomous vehicles, and other innovations.

Secretary Chao noted that technologies may not fit within the USDOT’s existing regulatory structure, which can impede transportation innovation. The NETT Council will address these challenges and give project sponsors a single point of access to discuss plans and proposals. The NETT Council is seen as a major step forward for USDOT in reducing regulatory burdens and paving the way for emerging technologies in the transportation industry. NETT will be chaired by Deputy Secretary Jeffrey Rosen and vice chaired by Undersecretary of Transportation for Policy Derek Kan. Other seats will be occupied by modal administrators and other high-ranking DOT officials. NETT will hold its organizing meeting this week and will first take on the topic of tunneling technologies, which are seeking various approvals in several states.

Today, March 4, 2019, Pittsburgh’s Mayor Bill Peduto signed an executive order outlining the city’s objectives and expectations for testing autonomous vehicles. Transparency and reporting are two of the city’s biggest priorities set forth in the order.

The Department of Mobility and Infrastructure (DOMI) will oversee autonomous vehicle testing. The DOMI is charged with, among other things, publishing guidelines for the testing of self-driving technology on public streets, which, at a minimum:

  • Complement the Automated Vehicle Testing Guidance adopted by the Pennsylvania Department of Transportation (PennDOT), Legislature of the Commonwealth of Pennsylvania, or Office of the Governor;
  • Identify the testers and the anticipated time, place, and manner testing is to occur;
  • Increase public transparency and knowledge of the testing occurring on public streets;
  • Stipulate that testers articulate the necessity of testing on city streets and the manner in which testing may advance the city’s principles for shared and autonomous mobility;
  • Ensure reliable communication between testers and city authorities in the event of emergency; and
  • Identify the data reasonably necessary to be collected from testers in order for public agencies to understand the impact and opportunity of testing on public safety.

The DOMI is also charged with publishing recommendations with regard to highly automated driving systems’ use of city managed and controlled assets and facilities that will:

  • Fundamentally protect and enhance walking, public transit, and travel by bicycle in highly urbanized areas;
  • Promote and encourage development/demonstration/deployment of automated driving systems that have higher occupancy, low or no emissions, and lower household transportation costs; and
  • Minimize consequences and maximize benefits of technological disruption on city finances.

The DOMI must regularly report to the public, at least annually, regarding the development of and compliance with guidelines and policies, results of data analysis, and recommendations for continued public advancement of these technologies.

The five entities currently developing autonomous driving systems in Pittsburgh are: Aptiv, Argo AI, Aurora Innovation, Carnegie Mellon University, and Uber. The effect the order will have on these companies’ state-approved testing activities within Pittsburgh is yet to be seen, as we await the recommendations proposed by DOMI.

The February 2019 edition of Eckert Seamans’ Autonomous Vehicle Legislative Survey is now available at this link.  Updates at the national, state, and city levels are summarized below. (An expanded summary is available at this link.)

National

The U.S. Department of Transportation issued Automated Vehicles 3.0: Preparing for the Future of Transportation in October 2018. The federal guidance is designed to prioritize safety, encourage a consistent regulatory environment, prepare proactively for automation, and promote the modernization of regulatory frameworks.

States

  • Minnesota: The Governor’s Advisory Council on Connected and Automated Vehicles released an executive report in December 2018. House File No. 242 was introduced January 2019 to establish a microtransit rideshare pilot program.
  • Missouri: Senate Bill No. 186 was introduced January 2019 to permit vehicle platooning.
  • Nebraska: Legislative Bill No. 521 was introduced January 2019 to allow AV operations.
  • New Jersey: Senate Joint Resolution No. 105 was introduced in November 2018 to establish an Advanced AV Task Force.
  • New Mexico: Senate Bill No. 332 was introduced January 2019 to authorize the use of AVs and platooning.
  • New York: Several AV-related bills were introduced January 2019 aiming to: (a) establish the New York State AV Task Force to study AV usage; (b) require a study on the potential impact of driverless vehicles on occupations and employment; (c) set forth drivers’ license requirements for operating an AV; and (d) authorize New York’s enrollment in any federal pilot program for the collection of transportation data, including AV projects.
  • North Dakota: Numerous bills dealing with autonomous vehicles have been introduced: (a) House Bill 1418 relates to automated vehicle network companies and autonomous vehicle operations in the state; (b) House Bill 1197 reintroduces a revised version of a previously defeated bill regarding ownership of autonomous vehicle technology; and (c) House Bill 1543 addresses requirements for insurance, a surety bond, a human driver, and the ability to engage and disengage the autonomous mode when testing autonomous vehicles.
  • Ohio: The House Transportation and Public Safety Committee released a report on autonomous and connected vehicles in December 2018 regarding the findings of its 14-month study, which completed hearings and stakeholder meetings earlier that year.
  • Oklahoma: For the first time, the Oklahoma state legislature is addressing autonomous vehicles. Senate Bill 365 calls for establishing the Oklahoma Driving Automation System Uniformity Act, which would give the legislature the final say on autonomous laws in the state.
  • Rhode Island: The state’s DOT selected Michigan-based May Mobility to operate a self-driving bus pilot program for one year.
  • Texas: Two bills addressing self-driving car laws have been introduced in 2019: House Bill 119 would increase liability of manufacturers in the event of a crash involving an automated vehicle; and House Bill 113 would require providers to equip vehicles with a failure alert system and the latest software.

Cities

  • Austin: Joined select cities and transportation agencies around the world to pilot a new autonomous vehicle deployment platform called INRIX AV Road Rules. Austin began allowing autonomous shuttles to operate within the city, launching the nation’s largest autonomous bus pilot program last fall.
  • Lincoln: Later this year, the city plans to debut a pilot Autonomous Shuttle Project that will leverage its “smart” traffic and high-speed data infrastructure.
  • Pittsburgh: The city agreed to allow Uber to begin testing its vehicles again within the city.  However, city officials are in safety talks with Uber and four other entities that have permits to test autonomous vehicles.
  • San Antonio: The city issued a Request for Information in July 2018 to develop an autonomous vehicle pilot program and will soon issue a request for proposals for autonomous vehicle testing.

Regular updates to Eckert Seamans’ Automated Vehicles Legislative Survey will be released quarterly. Subscribe to this blog to stay up to date on developments in the AV landscape in the United States and for notification when the next update is released.

In the current issue of Business Law Today, a publication of the American Bar Association, Francis Pileggi and Shani Else co-authored an article about the intersection of corporate governance principles and the nearly ubiquitous field of artificial intelligence. The article, titled “Corporate Directors Must Consider Impact of Artificial Intelligence for Effective Corporate Governance,” is available on the Business Law Today website.

FDA continues to be active in emphasizing the importance of artificial intelligence in health care. Now, FDA has committed to a program of creating a knowledgeable, sustainable, and agile data science workforce ready to review and approve devices based on artificial intelligence.

In April of last year, FDA Commissioner Scott Gottlieb, in discussing the transformation of FDA’s approach to digital health, stated that one of the most promising digital health tools is Artificial Intelligence (“AI”). Then, in September 2018, the commissioner again referenced AI as one of the drivers of an unparalleled period of innovation in the manufacturing of medical devices. And we saw FDA approve a record number of AI devices in 2018. We have discussed this here and here.

On January 28, 2019, Gottlieb announced that Information Exchange and Data Transformation (INFORMED), an incubator for collaborative oncology regulatory science research focused on supporting innovations that enhance FDA’s mission of promoting and protecting the public health, will be working with FDA’s medical product centers. Together, they will develop an FDA curriculum on machine learning and artificial intelligence in partnership with external academic partners.

INFORMED was founded to expand organizational and technical infrastructure for big data analytics and to examine modern approaches to evidence generation in support of regulatory decisions. One of its missions is to identify opportunities for machine learning and artificial intelligence to improve existing regulatory decision-making. So it makes sense for FDA to use this existing (although oncology-focused) incubator to facilitate increasing knowledge across all of its centers. While it is unclear what the curriculum will look like and who the “academic partners” are, FDA’s announcement that it is seeking the assistance of outside consultants and committing to training its personnel in anticipation of the growth of AI in health care is an important advancement for all those engaged in the development of AI-based devices.