A new federal council may help bridge the gap between the federal government and the autonomous vehicle laws and regulations being adopted by cities and states across the country. U.S. Department of Transportation (USDOT) Secretary Elaine Chao announced yesterday at the South by Southwest Conference (SXSW) the creation of a new council to advance autonomous vehicles, among other technologies.  The Non-Traditional and Emerging Transportation Technology (NETT) Council will identify and resolve jurisdictional and regulatory gaps that may impede the deployment of new technology, such as tunneling, hyperloop, autonomous vehicles, and other innovations.

Secretary Chao noted that new technologies may not fit within USDOT’s existing regulatory structure, which can impede transportation innovation. The NETT Council will address these challenges and give project sponsors a single point of access for discussing plans and proposals. The council is seen as a major step forward for USDOT in reducing regulatory burdens and paving the way for emerging technologies in the transportation industry. It will be chaired by Deputy Secretary Jeffrey Rosen and vice chaired by Under Secretary of Transportation for Policy Derek Kan, with other seats occupied by modal administrators and other high-ranking DOT officials. The NETT Council will hold its organizing meeting this week and will first take up tunneling technologies, which are seeking various approvals in several states.

Today, March 4, 2019, Pittsburgh Mayor Bill Peduto signed an executive order outlining the city’s objectives and expectations for the testing of autonomous vehicles. Transparency and reporting are two of the city’s biggest priorities set forth in the order.

The Department of Mobility and Infrastructure (DOMI) will oversee autonomous vehicle testing.  The DOMI is charged with, among other things, publishing guidelines for the testing of self-driving technology on public streets that, at a minimum:

  • Complement the Automated Vehicle Testing Guidance adopted by the Pennsylvania Department of Transportation (PennDOT), the Legislature of the Commonwealth of Pennsylvania, or the Office of the Governor;
  • Identify the testers and the anticipated time, place, and manner testing is to occur;
  • Increase public transparency and knowledge of the testing occurring on public streets;
  • Stipulate that testers articulate the necessity of testing on city streets and the manner in which testing may advance the city’s principles for shared and autonomous mobility;
  • Ensure reliable communication between testers and city authorities in the event of an emergency; and
  • Identify the data reasonably necessary to be collected from testers in order for public agencies to understand the impact and opportunity of testing on public safety.

The DOMI is also charged with publishing recommendations regarding highly automated driving systems’ use of city-managed and controlled assets and facilities; those recommendations will:

  • Fundamentally protect and enhance walking, public transit, and travel by bicycle in highly urbanized areas;
  • Promote and encourage the development, demonstration, and deployment of automated driving systems that have higher occupancy, low or no emissions, and lower household transportation costs; and
  • Minimize the consequences and maximize the benefits of technological disruption for city finances.

The DOMI must regularly report to the public, at least annually, regarding the development of and compliance with guidelines and policies, results of data analysis, and recommendations for continued public advancement of these technologies.

The five entities currently developing autonomous driving systems in Pittsburgh are Aptiv, Argo AI, Aurora Innovation, Carnegie Mellon University, and Uber. The effect the order will have on these companies’ state-approved testing activities within Pittsburgh remains to be seen as we await the guidelines and recommendations proposed by DOMI.

The February 2019 edition of Eckert Seamans’ Autonomous Vehicle Legislative Survey is now available at this link.  Updates at the national, state, and city levels are summarized below. (An expanded summary is available at this link.)

National

The U.S. Department of Transportation issued Automated Vehicles 3.0: Preparing for the Future of Transportation in October 2018. The federal guidance is designed to prioritize safety, encourage a consistent regulatory environment, prepare proactively for automation, and promote the modernization of regulatory frameworks.

States

  • Minnesota: The Governor’s Advisory Council on Connected and Automated Vehicles released an executive report in December 2018. House File No. 242 was introduced in January 2019 to establish a microtransit rideshare pilot program.
  • Missouri: Senate Bill No. 186 was introduced in January 2019 to permit vehicle platooning.
  • Nebraska: Legislative Bill No. 521 was introduced in January 2019 to allow AV operations.
  • New Jersey: Senate Joint Resolution No. 105 was introduced in November 2018 to establish an Advanced AV Task Force.
  • New Mexico: Senate Bill No. 332 was introduced in January 2019 to authorize the use of AVs and platooning.
  • New York: Several AV-related bills were introduced in January 2019 aiming to: (a) establish the New York State AV Task Force to study AV usage; (b) require a study on the potential impact of driverless vehicles on occupations and employment; (c) set forth drivers’ license requirements for operating an AV; and (d) authorize New York’s enrollment in any federal pilot program for the collection of transportation data, including AV projects.
  • North Dakota: Numerous bills dealing with autonomous vehicles have been introduced: (a) House Bill 1418 relates to automated vehicle network companies and autonomous vehicle operations in the state; (b) House Bill 1197 reintroduces a revised version of a previously defeated bill regarding ownership of autonomous vehicular technology; and (c) House Bill 1543 addresses requirements for insurance, a surety bond, a human driver, and the ability to engage and disengage autonomous mode when testing autonomous vehicles.
  • Ohio: The House Transportation and Public Safety Committee released a report on autonomous and connected vehicles in December 2018 regarding the findings of its 14-month study, which completed hearings and stakeholder meetings earlier that year.
  • Oklahoma: For the first time, the Oklahoma state legislature is addressing autonomous vehicles. Senate Bill 365 would establish the Oklahoma Driving Automation System Uniformity Act, which would give the legislature the final say on autonomous vehicle laws in the state.
  • Rhode Island: The state’s DOT selected Michigan-based May Mobility to operate a self-driving bus pilot program for one year.
  • Texas: Two bills addressing self-driving car laws have been introduced in 2019: House Bill 119 would increase the liability of manufacturers in the event of a crash involving an automated vehicle; and House Bill 113 would require providers to equip vehicles with a failure alert system and the latest software.

Cities

  • Austin: Joined select cities and transportation agencies around the world to pilot a new autonomous vehicle deployment platform called INRIX AV Road Rules. Austin began allowing autonomous shuttles to operate within the city, launching the nation’s largest autonomous bus pilot program last fall.
  • Lincoln: Later this year, the city plans to debut a pilot Autonomous Shuttle Project that will leverage its “smart” traffic and high-speed data infrastructure.
  • Pittsburgh: The city agreed to allow Uber to begin testing its vehicles again within the city.  However, city officials are in safety talks with Uber and four other entities that have permits to test autonomous vehicles.
  • San Antonio: The city issued a Request for Information in July 2018 to develop an autonomous vehicle pilot program and will soon issue a request for proposals for autonomous vehicle testing.

Updates to Eckert Seamans’ Autonomous Vehicle Legislative Survey will be released quarterly.  Subscribe to this blog to stay up to date on developments in the AV landscape in the United States and to be notified when the next update is released.

In the current issue of Business Law Today, a publication of the American Bar Association, Francis Pileggi and Shani Else co-authored an article about the intersection of corporate governance principles and the nearly ubiquitous field of artificial intelligence. The article, titled “Corporate Directors Must Consider Impact of Artificial Intelligence for Effective Corporate Governance,” is available on the Business Law Today website.

FDA continues to emphasize the importance of artificial intelligence in health care. Now, FDA has committed to a program to create a knowledgeable, sustainable, and agile data science workforce ready to review and approve devices based on artificial intelligence.

In April of last year, FDA Commissioner Scott Gottlieb, in discussing the transformation of FDA’s approach to digital health, stated that one of the most promising digital health tools is Artificial Intelligence (“AI”). Then, in September 2018, the commissioner again referenced AI as one of the drivers of an unparalleled period of innovation in medical device manufacturing. And we saw FDA approve a record number of AI devices in 2018. We have discussed this here and here.

On January 28, 2019, Gottlieb announced that Information Exchange and Data Transformation (INFORMED), an incubator for collaborative oncology regulatory science research focused on supporting innovations that enhance FDA’s mission of promoting and protecting the public health, will be working with FDA’s medical product centers. Together, they will develop an FDA curriculum on machine learning and artificial intelligence in collaboration with external academic partners.

INFORMED was founded to expand organizational and technical infrastructure for big data analytics and to examine modern approaches to evidence generation in support of regulatory decisions. One of its missions is to identify opportunities for machine learning and artificial intelligence to improve existing regulatory decision-making. So, it makes sense for FDA to use this existing incubator (although oncology-focused) to build knowledge across all of its centers. While it is unclear what the curriculum will look like and who the “academic partners” are, FDA’s announcement that it is seeking outside assistance and committing to training its personnel in anticipation of the growth of AI in health care is an important advancement for all those engaged in the development of AI-based devices.

Apple was recently granted a patent (10,189,434) for an augmented safety restraint. Say that again? Yes, with the rise of autonomous vehicles comes the need for changes in the safety devices placed within these vehicles. If you are wondering why this is an important patent, you are probably not alone. To date, the states that have addressed the use of autonomous vehicles have done so with little emphasis (if any) on the safety features within the vehicle, beyond requiring what is currently mandated under the federal regulations for non-autonomous vehicles.

So, what is different about Apple’s augmented safety restraint? The patent provides that the restraint, beyond securing the passenger within the vehicle, can

  • provide holistic monitoring of passenger status;
  • supply entertainment and comfort;
  • allow communication or interaction between the passenger, vehicle, and other passengers within the vehicle; and
  • generate power sufficient to run the aforementioned capabilities.

The reason for all of these features is to “allow for enhancement of passenger activities, improved interaction with the vehicle and/or other passengers, and energetic autonomy while at the same time meeting regulatory safety requirements.”

To perform the above, the device(s) will be attached to an exposed surface or embedded within the restraint. The suggested devices include contact-sensitive features, which require the passenger to touch the device for engagement (for example, a fingerprint sensor), and non-contact features (for example, an optical or voice-activated sensor).

In addition to the common three-point seat belt, other restraint types (e.g., inflatable belts, webs, harnesses) are noted as possible designs for the augmented restraints. Some of the features could even be activated with or without a passenger present in the vehicle (for instance, devices that aid passenger ingress, or that operate when the vehicle is transporting only packages).

In what appears to be an effort to maintain compliance with current safety standards, the restraints may also include an airbag, and any or all of the augmented safety restraints can include a pre-tensioner device. The restraints have a passenger-securing structure, for example, a belt or a harness secured to either the vehicle or the passenger seat.  There is also a passenger-facing surface that can engage the body of the passenger to restrain the passenger’s motion relative to their seat.

Apple’s patent suggests numerous iterations of how the augmented safety restraint could look and work. How these iterations affect vehicle safety has yet to be determined. Without any guidance, manufacturers are left creating designs for standards that may not apply to autonomous vehicles or standards that have yet to be created.  As the federal government continues to fail to pass any legislation regarding autonomous vehicles, this may be yet another area in which states will need to act on their own while autonomous vehicles proliferate on our roadways.

A recently published article in Nature Medicine authored by Eric Topol, M.D., Department of Molecular Medicine, Scripps Research Institute, suggests that the convergence of human and artificial intelligence can lead to “high-performance medicine.”  High-performance medicine, he says, will be data driven.  The development of software that can process massive amounts of information quickly, accurately, and less expensively will lay the foundation for this hybrid practice of medicine.  It will not be devoid of human interaction and input, he says, but will be more reliant on technology and less reliant on human resources.  It will combine computer-developed algorithms with physician and patient input.  Topol believes that, in the long run, this will elevate the practice of medicine and patient health.

Topol sees impacts of AI at three levels of medicine:

  • Clinicians: by enabling more rapid and more accurate image interpretation (e.g., CT scans);
  • Health systems: by improving workflows and possibly reducing medical errors; and
  • Patients: by enabling them to process more data to promote better health.

While the author sees roadblocks to the integration of AI and human intelligence in medicine, such as data security, privacy, and bias, he believes the improvements will be actualized over time.  Topol discusses a number of disciplines in which the application of AI has already had a positive effect: radiology, pathology, dermatology, ophthalmology, gastroenterology, and mental health.  Further, Topol discusses FDA’s new pathways for approval of AI medical algorithms and the fact that there were thirteen FDA approvals of AI devices and software in 2018, as opposed to only two in 2017.

We discussed FDA’s stated commitment to AI, FDA’s regulatory pathways for approval and FDA approval of AI related devices and software here.

Topol correctly maintains that rigorous review, whether agency review (such as by FDA) or private review (by industry), is necessary for the safe development of new technology generated from the combination of human and artificial intelligence.  This includes peer-reviewed publications on FDA-approved devices and software, something he argues has been lacking to date.  The author does a nice job of laying out the base of evidence for the use of AI in medicine and describing the potential pitfalls of proceeding without caution and oversight, as is true with other applications of AI.  The article is a worthy read for those involved in the field of medicine, including those engaged in the development of medical devices and related software.

As we discussed in our January 8 post, federal, state, and local agencies are struggling with the lack of uniform standards governing the development and testing of autonomous vehicles. A recent report prepared for Uber Advanced Technologies Group by RAND Corporation, Measuring Automated Vehicle Safety: Forging a Framework, attempted to create a framework for measuring the safety of autonomous vehicles (AVs).

The report’s authors considered how to define safety for AVs, how to measure their safety, and how to communicate what is learned or understood about them. The AV safety framework proposed in the report has three components (a schematic sketch follows the list):

  1. Settings: contexts that give rise to safety measures, such as computer-based simulation, closed courses, public roads with a safety driver present or remotely available, and public roads without a safety driver.
  2. Stages: the life stages of AV models during which these measures can be generated. This typically involves a development stage, where the product is created and refined, and a deployment stage, where the product is released to the public.
  3. Measures: the meaning of new and traditional measures obtained in each setting as AVs move through each stage. One category of measurement consists of the standards, processes, procedures, and design requirements involved in creating the AV system hardware, software, and vehicle components. The other two categories are “leading” and “lagging” measures: leading measures reflect performance, activity, and prevention; lagging measures are observations of safety outcomes or harm.
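
For readers who think in data-model terms, the framework can be summarized schematically. The short Python sketch below is purely illustrative: the type names, field names, and example values are our own shorthand and placeholders, not anything defined in the RAND report, but they show how a single safety observation might be tagged by setting, life stage, and measure category.

    from dataclasses import dataclass
    from enum import Enum

    # Hypothetical shorthand for the report's three components; these labels
    # are illustrative, not terms defined by RAND.
    class Setting(Enum):
        SIMULATION = "computer-based simulation"
        CLOSED_COURSE = "closed course"
        PUBLIC_ROAD_SAFETY_DRIVER = "public road with a safety driver present or remote"
        PUBLIC_ROAD_DRIVERLESS = "public road without a safety driver"

    class Stage(Enum):
        DEVELOPMENT = "development"
        DEPLOYMENT = "deployment"

    class MeasureCategory(Enum):
        PROCESS = "standards, processes, procedures, and design requirements"
        LEADING = "performance, activity, and prevention"
        LAGGING = "observed safety outcomes or harm"

    @dataclass
    class SafetyMeasure:
        """One safety observation, tagged by where and when it was generated."""
        name: str
        setting: Setting
        stage: Stage
        category: MeasureCategory
        value: float

    # A leading measure gathered on public roads with a safety driver during
    # development (the measure name and value are made-up placeholders).
    example = SafetyMeasure(
        name="disengagements per 1,000 miles",
        setting=Setting.PUBLIC_ROAD_SAFETY_DRIVER,
        stage=Stage.DEVELOPMENT,
        category=MeasureCategory.LEADING,
        value=0.7,
    )
    print(example)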

However, a challenge to implementing this framework is the availability of statistically significant data. Currently, AVs are operating in small numbers and in limited situations. Further, a large amount of AV-related data is neither publicly available nor publicly accessible.

The report notes that certain categories of data, such as how an AV system perceives and interacts with the external environment, are unlikely to be shared between companies due to their highly proprietary nature. Other categories, such as the external environment encountered by the vehicle, could be shared via a database containing the environmental circumstances, infrastructure, and traffic, but the data would need to be anonymized. The data could then be used in AV development and improvement. Existing traffic safety databases could also be updated to include more detailed data on AVs, and the anonymization and eventual analysis of such data will become more feasible as AVs become more common.
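
As an aside for technically minded readers, the kind of anonymization the report contemplates could be as simple as stripping identifying fields from a record before it enters a shared database. The Python sketch below is a hypothetical illustration only; the field names are invented for the example, and robust anonymization of location traces would require considerably more care.

    # Hypothetical illustration: keep only non-identifying environmental fields
    # before contributing a record to a shared AV database. Field names are
    # invented for this example.
    SHARED_FIELDS = ("weather", "road_type", "traffic_density", "infrastructure")

    def anonymize(record: dict) -> dict:
        """Drop identifiers (vehicle ID, operator, GPS trace) and keep shared fields."""
        return {key: record[key] for key in SHARED_FIELDS if key in record}

    raw = {
        "vehicle_id": "TEST-0042",            # identifying: dropped
        "operator": "Example AV Co.",         # identifying: dropped
        "gps_trace": [(40.4406, -79.9959)],   # identifying: dropped
        "weather": "rain",
        "road_type": "urban arterial",
        "traffic_density": "moderate",
        "infrastructure": "signalized intersection",
    }
    print(anonymize(raw))  # only the environmental circumstances remain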

As described in the Eckert Seamans Autonomous Vehicle Legislative Survey, federal legislation (the SELF DRIVE Act in the House and the AV START Act in the Senate) was not enacted during 2018. The U.S. Department of Transportation issued federal AV guidance in October 2018, and the remaining regulation is determined at the state level. Uniform standards and increased information-sharing could lead to more reliability in measuring AV safety and greater predictability in the realm of product liability.

The report offers the following recommendations:

  • During AV development, regulators and the public should focus their concerns on the public’s safety rather than on the speed or progress of development.
  • Competitors should report on progress at key demonstration points and, to the extent possible, adopt common protocols to facilitate fair comparisons.
  • Safety events that occur in the absence of statistically significant data should be treated as case studies and used as opportunities for learning by industry professionals, policymakers, and the public.
  • Efforts should be made to develop a common approach specifying where, when, and under what circumstances an AV can operate. This would improve communication between consumers and regulators, and would make it easier to track and compare AVs through different phases of development.
  • Research should be done on how to measure and communicate AV system safety when the system is frequently being updated. AV safety measures must balance reflecting the current system’s safety level against prior safety records.

As the RAND report notes, AV consortia have started to emerge, including the Self-Driving Coalition for Safer Streets, which was established by Ford, Lyft, Uber, Volvo Cars, and Waymo, and the Partnership for Transportation Innovation and Opportunity, whose members include Daimler, FedEx, Ford, Lyft, Toyota, Uber, and Waymo. These consortia are facilitating broad participation in standard-setting, and may eventually build momentum toward a larger degree of information-sharing about practices, tools, and even data. (See previous post: Automotive manufacturers, technology companies among those teaming up to PAVE the way for autonomous vehicle)

Studies and reports seem to be coming to a single conclusion: cooperation among policymakers, manufacturers, technology companies, and the public is a must. In the past, being first to market was a leading factor in progress. With AV technology, cooperation and sharing of information among interested parties, including the general public, seems to be the way forward. This is still an area with many more questions than answers. Today, as demonstrated by the groups being formed, there is a willingness to work together to realize the many potential benefits of AV technology for the good of all.

Yesterday, at CES 2019 in Las Vegas, it was announced that leading automakers and other industry organizations have united to form Partners for Automated Vehicle Education (PAVE).  PAVE’s mission is to educate “policymakers and the public about automated vehicles and the increased safety, mobility and sustainability they can bring.” Current members include Toyota, General Motors, Waymo, Audi, the National Safety Council, and SAE International.

As autonomous vehicles become more prevalent on U.S. roads, questions and fears in the minds of policymakers and consumers seem to be on the rise. Members of the public have physically attacked Waymo vehicles, slicing tires and breaking windows.  Congress failed to pass the self-driving car bill last year. PAVE hopes to help answer those questions and create a level of trust with everyone who will be affected by the technology.

PAVE will work with legislators regarding driver-assistance technology and hold educational workshops on the technologies. It will present hands-on demonstrations so the public can experience driverless technology. Further, PAVE will work with car dealers and service centers, offering “educational materials” that can be disseminated to customers.

It will be interesting to follow PAVE’s future to see if a more direct approach with legislators, businesses, and consumers regarding this new technology will ease the tension. Subscribe to the Artificial Intelligence Law Blog to keep abreast of PAVE’s activities.

Advancing autonomous vehicles to widespread use requires significant testing, and when that testing is performed in real-world conditions, the safety of third parties must be an ongoing and paramount concern. The March 2018 crash of an Uber Advanced Technologies Group (UATG) autonomous vehicle in Arizona resulted in the death of a pedestrian.  Local and federal findings included that the sole human backup driver was inattentive immediately prior to the accident and that the vehicle’s emergency braking systems (including Volvo’s own system) were not enabled at the time of the accident.  As a result of the crash, UATG suspended all testing to self-examine and improve safety.  In November 2018, it released a report, based in part upon a review of the crash investigation, to the National Highway Traffic Safety Administration (NHTSA). The report addresses operational, technical, and organizational changes to be imposed to improve the safety of UATG autonomous vehicles.

Based on these improvements, Uber submitted a Notice of Testing Highly Automated Vehicles Application to the Pennsylvania Department of Transportation (PA DOT) in November 2018.  On December 17, 2018, PA DOT issued a Letter of Authorization to Uber to begin testing its autonomous vehicles (good for one year).

The Authorization is consistent with the Commonwealth’s Automated Vehicle Testing Guidance issued on July 23, 2018.

Significant changes to UATG’s testing, including its Safety and Risk Mitigation Plan, as authorized by PA DOT, are as follows:

  • Operate a limited number of vehicles;
  • Operate those vehicles only during daylight hours on weekdays;
  • Operate them only in good weather;
  • Operate them only in areas where most roads have speed limits restricted to 25 mph;
  • Operate them with two human backup drivers;
  • Operate them with more highly trained and audited backup drivers; and
  • Operate them with the automatic emergency braking system and the Volvo emergency braking system in operation.

UATG commenced testing under the Notice on December 20, 2018.  Safety in the testing of autonomous vehicles remains the subject of ongoing debate at the federal, state, local, and private levels. The changes to UATG’s testing of its autonomous vehicles are consistent with Pennsylvania’s July 2018 Guidance and Uber’s November 2018 report. We will continue to monitor and review evolving public and private guidance on the safe testing of autonomous vehicles.