In what can be seen as a major step forward for autonomous vehicle technology, Lyft has announced that it will publicly share the Level 5 dataset collected by its autonomous vehicles. The dataset is described as a “comprehensive, large-scale dataset featuring the raw sensor camera and LiDAR inputs as perceived by a fleet of multiple, high-end, autonomous vehicles in a bounded geographic area.” The Lyft Level 5 Dataset includes:

  • Over 55,000 human-labeled 3D annotated frames;
  • Data from seven cameras and up to three LiDARs;
  • A drivable surface map; and
  • An underlying HD spatial semantic map (including lanes, crosswalks, etc.).

In addition, Lyft is launching a competition for individuals to train algorithms on the dataset, including testing 3D object detection over the semantic maps. Although specific details about the competition have yet to be released, Lyft has indicated there will be $25,000 in prizes, that it will fly the top researchers to the NeurIPS Conference in December, and that the winners will be able to interview with Lyft for a position within the company.

So, what does all this really mean for the advancement of autonomous vehicles? One of the biggest issues in the autonomous vehicle arena right now is cooperation among the manufacturers and developers. This is not to say that these entities are not creating consortium groups or teaming up on manufacturing ventures, but this cooperation remains very segmented. The sharing of data across the entire industry is not commonplace, so it is very exciting to see such an extensive dataset being shared so openly by Lyft. Sharing knowledge of this kind is crucial if autonomous vehicles are to become a reality on the roadways, as these vehicles will need to work in cooperation not only with each other but with the infrastructure of the areas in which they operate.

It also seems, on the surface, that this dataset is being shared not only among manufacturers and technology developers, but with the entire public. While this is technically true, as anyone can download the dataset, there is no way for the average person, without the ability to read the data, to understand what has been collected or what it all means with respect to autonomous vehicles. I downloaded the available dataset (additional information is being released in the coming weeks), and while it is very impressive, it doesn’t enhance my knowledge of this technology in any way.
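For those curious what “reading the data” actually involves: the Level 5 dataset follows a nuScenes-style relational format, in which metadata is stored as JSON tables linked together by tokens. The sketch below is a minimal illustration of that idea only; the table and field names here are assumptions based on the nuScenes format, not taken from Lyft’s release.

```python
import json
from collections import Counter

# Hypothetical miniature versions of two nuScenes-style JSON tables.
# In the real dataset these would be loaded from files on disk
# (e.g., a category table and an annotation table; names assumed).
category_json = json.loads("""[
    {"token": "cat-car", "name": "car"},
    {"token": "cat-ped", "name": "pedestrian"}
]""")
annotation_json = json.loads("""[
    {"token": "ann-1", "category_token": "cat-car"},
    {"token": "ann-2", "category_token": "cat-car"},
    {"token": "ann-3", "category_token": "cat-ped"}
]""")

# Resolve the token links: map each annotation to its category name,
# then count how many labeled objects fall into each category.
name_by_token = {c["token"]: c["name"] for c in category_json}
counts = Counter(name_by_token[a["category_token"]] for a in annotation_json)

print(counts["car"], counts["pedestrian"])  # 2 1
```

Even this toy example shows why the raw release is opaque to a general audience: the data only becomes meaningful after joining several tables and projecting sensor readings into a common frame.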

It’s obvious that Lyft is partially using the release of this data, in conjunction with the competition, as a way to find new talent for its autonomous vehicle team, and perhaps even to acquire some free research through the submissions received in the competition. This is not a criticism of Lyft, but an observation that this data release is really not to the public at large, but to the select group of individuals capable of reading and interpreting the data.

As the public’s understanding of and trust in this technology is also a crucial factor in the acceptance of autonomous vehicles on our roadways, there needs to be a way to relay this type of information to the general public so they can continue to learn, along with the manufacturers and developers, about how the technology works, its safety, and the improvements being made as issues arise. While this dataset may not lend itself to that kind of dissemination, it is a reminder to all involved that no amount of technology will be worth much without the public’s trust in and use of autonomous vehicles.

Subscribe to the AI blog to receive updates on this dataset and Lyft’s competition. Also follow our AI Blog Calendar, where you can find extensive information regarding just about every AI event coming up, worldwide, including the NeurIPS Conference discussed above.

In what appears to be a first, the city of Yuma, Arizona, has announced that it will host a free seminar on TV and over the internet regarding autonomous vehicles, Autonomous Vehicles 101: Education for Yuma and Surrounding Communities. The seminar is being sponsored by the city, Yuma County, Greater Yuma Economic Development Corporation, and the Yuma Metropolitan Planning Organization. National, state, and local experts will discuss key aspects of the autonomous vehicles economy, as well as policy and infrastructure preparations that communities can make today to prepare for the arrival of self-driving technologies.

Topics for discussion will include:

  • The Changing Nature of Transportation — Shared, Electric, Connected and Autonomous
  • The AV Situation in Arizona: The View From ADOT and the Governor’s Office
  • The Implications of the AV Economy for Yuma City and Towns in Yuma County
  • How the Siemens-anyCOMM project (the City of Yuma has dual agreements with Siemens and anyCOMM to provide infrastructure for autonomous vehicles in this region) can underpin an Autonomous Vehicle-Supportive Infrastructure

With autonomous vehicle technology on the rise, this type of educational event is not only beneficial, but absolutely necessary for autonomous vehicles to become part of the “real world.” One of the main barriers for autonomous vehicles, beyond things such as the lack of federal regulation, is the lack of public trust in and acceptance of these vehicles. Currently there are countless public misconceptions about autonomous technology, which have led to distrust and an unwillingness to use or even try the technology. Providing educational events that relay accurate information to the public will go a long way toward breaking down the barrier of public mistrust, helping to smooth the path to autonomous vehicles on our roadways.

More information on this seminar and other valuable autonomous vehicle events can be found on the AI Blog Calendar. Subscribe to the calendar for updates as more events are added.

As reported yesterday, the National Institute of Standards and Technology’s Request for Comments was published today. Comments will be accepted until 5 p.m. Eastern time on May 31, 2019, and may be submitted via email to ai_standards@nist.gov, or by mail to the National Institute of Standards and Technology, 100 Bureau Drive, Stop 2000, Gaithersburg, MD 20899. Comments will be made publicly available without redaction.

In addition, NIST announced a workshop, Federal Engagement in Artificial Intelligence Standards Workshop, to promote discussions in support of a federal plan for engagement in AI technical standards development, on May 30, 2019, at its Gaithersburg, Maryland, campus and via webcast.  Further information can be found on our Calendar.

Tomorrow, May 1, 2019, the National Institute of Standards and Technology (NIST) will officially publish in the Federal Register a Request for Information (Docket Number: 190312229-9229-01) seeking comments on the development of Artificial Intelligence (AI) standards. Pursuant to the Executive Order on Maintaining American Leadership in Artificial Intelligence (signed on February 11, 2019), NIST was directed to create a plan for “federal engagement in the development of technical standards and related tools in support of reliable, robust, and trustworthy systems that use AI technologies.” In order to fulfill this directive, NIST will consult with federal agencies, the private sector, academia, non-governmental entities, and other stakeholders with interest in and expertise relating to AI.

Specifically, NIST seeks to understand the following:

  1. Current status and plans regarding the availability, use, and development of AI technical standards and tools in support of reliable, robust, and trustworthy systems that use AI technologies;
  2. Needs and challenges regarding the existence, availability, use, and development of AI standards and tools; and
  3. Current and potential future role of federal agencies regarding the existence, availability, use, and development of AI technical standards and tools in order to meet the nation’s needs.

NIST also lists three specific categories with subtopics in the notice covering the major areas on which the department is seeking information; however, these categories are not intended to limit the topics addressed by those who submit comments to the notice.

AI Technical Standards and Related Tools Development: Status and Plans

  1. AI technical standards and tools that have been developed, and the developing organization, including the aspects of AI these standards and tools address, and whether they address sector-specific needs or are cross-sector in nature;
  2. Reliable sources of information about the availability and use of AI technical standards and tools;
  3. The needs for AI technical standards and related tools, how those needs should be determined, and challenges in identifying and developing those standards and tools;
  4. AI technical standards and related tools that are being developed, and the developing organization, including the aspects of AI these standards and tools address, and whether they address sector-specific needs or are cross sector in nature;
  5. Any supporting roadmaps or similar documents about plans for developing AI technical standards and tools;
  6. Whether the need for AI technical standards and related tools is being met in a timely way by organizations;
  7. Whether sector-specific AI technical standards needs are being addressed by sector-specific organizations, or whether those who need AI standards will rely on cross-sector standards which are intended to be useful across multiple sectors; and
  8. Technical standards and guidance that are needed to establish and advance trustworthy aspects (e.g., accuracy, transparency, security, privacy, and robustness) of AI technologies.

Defining and Achieving U.S. AI Technical Standards Leadership

  1. The urgency of the U.S. need for AI technical standards and related tools, and what U.S. effectiveness and leadership in AI technical standards development should look like;
  2. Where the U.S. currently is effective and/or leads in AI technical standards development, and where it is lagging;
  3. Specific opportunities for, and challenges to, U.S. effectiveness and leadership in standardization related to AI technologies; and
  4. How the U.S. can achieve and maintain effectiveness and leadership in AI technical standards development.

Prioritizing Federal Government Engagement in AI Standardization

  1. The unique needs of the federal government and individual agencies for AI technical standards and related tools, and whether they are important for broader portions of the U.S. economy and society, or strictly for federal applications;
  2. The type and degree of federal agencies’ current and needed involvement in AI technical standards to address the needs of the federal government;
  3. How the federal government should prioritize its engagement in the development of AI technical standards and tools that have broad, cross-sectoral application versus sector- or application-specific standards and tools;
  4. The adequacy of the federal government’s current approach for government engagement in standards development, which emphasizes private sector leadership, and, more specifically, the appropriate role and activities for the federal government to ensure the desired and timely development of AI standards for federal and non-governmental uses;
  5. Examples of federal involvement in the standards arena (e.g., via its role in communications, participation, and use) that could serve as models for the Plan, and why they are appropriate approaches; and
  6. What actions, if any, the federal government should take to help ensure that desired AI technical standards are useful and incorporated into practice.

The deadlines for the submission of comments will be released tomorrow, with the official publication of the notice.

The blog is now maintaining a Google Calendar featuring upcoming notable artificial intelligence events. (If you would like to submit an event for inclusion, please contact Jodi Oley at joley@eckertseamans.com.)

The calendar will be updated on an ongoing basis, so check back or sync with your own calendar to stay in the loop. Here’s a quick preview of notable events happening in the next week:

PLM ReInvented MeetUp: Connected and Automated Vehicles – End-to-End Design, Traceability and Security

Tuesday, April 30, 6:00 to 8:30 p.m.

Microsoft Technology Center, 1 Campus Martius, Detroit, MI 48226, USA

Join us to discuss managing key functions of the product lifecycle for connected and automated vehicles (CAV). This event will focus on current strategies and solutions for CAVs, with an emphasis on trusted platforms, connected services, and traceability of the data generated and needed to build these vehicles. Hear Richard Doak, Chief Strategist for Automotive MFG at Microsoft, and Bill Bone, CTO for Automotive at Aras, present their views on business/technical challenges and solution opportunities for these vehicles in a casual environment.

Building Trust in Autonomy – Driving at the Limits of Handling and Interacting with Pedestrians

Wednesday, May 1, 12:30 to 2:30 p.m.

Francois-Xavier Bagnoud Building, 1012 FXB, University of Michigan

The first half of this talk focuses on one aspect of this challenge, developing a mathematical model for a pedestrian’s behavior and studying its interaction with an automated vehicle at a mid-block, unsignalized intersection. By modeling pedestrian behavior through the concept of gap acceptance, we show that a hybrid controller with just four distinct modes allows an autonomous vehicle to successfully interact with a pedestrian across a continuous spectrum of possible crosswalk entry behaviors. The controller is validated through extensive simulation and compared to an alternate POMDP solution, with experimental results provided on a Hyundai research vehicle for a virtual pedestrian. The second half of this talk will focus on another contribution related to automated driving – a feedback-feedforward steering algorithm that enables an autonomous vehicle to accurately follow a specified trajectory at the friction limits while preserving robust stability margins. Experimental data collected from an Audi TTS driving at the handling limits (0.95 g) on a full length race circuit will demonstrate the performance of the controller design.
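To make the gap-acceptance idea concrete, here is a toy sketch of a four-mode hybrid controller. This is entirely our own illustration, with invented mode names and thresholds, not the speakers’ actual controller: the vehicle switches among discrete behaviors based on the time gap available to the pedestrian.

```python
def controller_mode(gap_s: float, ped_in_crosswalk: bool,
                    accepted_gap_s: float = 5.0) -> str:
    """Pick one of four discrete modes for an AV approaching an
    unsignalized mid-block crosswalk (illustrative thresholds only)."""
    if ped_in_crosswalk:
        return "STOP"            # pedestrian already crossing: stop
    if gap_s >= accepted_gap_s:
        return "MAINTAIN"        # gap large enough that the pedestrian
                                 # likely waits: keep speed
    if gap_s >= 0.5 * accepted_gap_s:
        return "DECELERATE"      # ambiguous gap: slow down and observe
    return "YIELD"               # small gap: pedestrian likely enters

print(controller_mode(6.0, False))   # MAINTAIN
print(controller_mode(3.0, False))   # DECELERATE
print(controller_mode(1.0, False))   # YIELD
print(controller_mode(6.0, True))    # STOP
```

The appeal of a hybrid design like this is that each discrete mode wraps a simple continuous control law, so the vehicle’s behavior stays interpretable across the whole spectrum of pedestrian crossing decisions.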

Sligo Engineering & Technology Expo 2019

Thursday, May 2, 10:00 a.m. to 6:00 p.m.

Knocknarea Arena, Ash Ln, Ballytivnan, Sligo, F91 YW50, Ireland

This year’s Expo will concentrate on the exciting new developments facing industries in the coming decade. Labelled as Industry 4.0, businesses across the globe are having to adapt to new technology quicker than ever if they wish to thrive and even survive. Robotics, Artificial Intelligence, Internet of Things and Automation are all key buzzwords doing the rounds at the moment, but what do they mean and how will they affect industry and society in the near future?

AAA Autonomous Vehicles Summit 2019

Friday, May 3, 9:30 a.m. to 1:30 p.m.

Mohawk Valley Community College, Utica, NY 13501, USA

AAA New York State will host an Autonomous Vehicle Summit that will offer perspective on the future of self-driving vehicles. Entitled “Navigating Our Transportation Future: Preparing New York for Autonomous Vehicles,” the summit will bring together municipal planners, transportation professionals, business leaders, and lawmakers to discuss how autonomous vehicles will transform the state’s economy and transportation infrastructure and how New York’s policymakers should facilitate this new technology.

Machine Learning, AI, and Digital Health Panel at FDLI Annual Meeting

Friday, May 3, 10:40 to 11:30 a.m.

Ronald Reagan Building and International Trade Center, 1300 Pennsylvania Ave NW, Washington, D.C., 20004, USA

Mark Levy, one of the co-editors of the AI blog, will be on a panel discussing “Machine Learning, AI, and Digital Health” as part of the Food and Drug Law Institute’s Annual Conference in Washington, D.C., on May 2-3.

The panel will focus on digital health technologies, which are rapidly being integrated into healthcare and life sciences – from wearables in clinical trials to digital tools for disease management and clinical decision support. Many of these technologies already deploy, or will deploy, machine learning and artificial intelligence. The panel will discuss how these new technologies are being integrated and how FDA’s role in regulation will continue to evolve. It will also cover FDA’s recent discussion paper on AI devices, as well as the broader challenges of AI regulation, such as liability, quality assurance, and approval pathways for a product that continually evolves.

Sign up with the discount code annual15 for a 15% discount on registration, and learn more at fdli.org/annual.


The FDA recently issued the discussion paper “Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD)” and a request for comments.

Commissioner Scott Gottlieb issued a statement at the time of the paper’s release lauding artificial intelligence and machine learning as having “the potential to fundamentally transform the delivery of health care.” He stated that the “ability of artificial intelligence and machine learning software to learn from real-world feedback and improve its performance is spurring innovation and leading to the development of novel medical devices.” However, he recognized the inadequacy of traditional regulatory pathways to foster the growth of this technology, saying the FDA was “announcing steps to consider a new regulatory framework specifically tailored to promote the development of safe and effective medical devices that use advanced artificial intelligence algorithms.”

FDA employs a risk-based approach to determine whether a new premarket submission is required each time a manufacturer makes substantial, iterative changes through a software update or makes other changes that would significantly affect the device’s performance. But this approach is a poor fit for reviewing AI and machine learning-based algorithms, medical devices that may continuously update themselves in response to real-world feedback.

Gottlieb noted as an example, “an algorithm that detects breast cancer lesions on mammograms could learn to improve the confidence with which it identifies lesions as cancerous or may learn to identify specific sub-types of breast cancer by continually learning from real-world use and feedback.” The agency concluded that it had to change its approach to foster software that evolves over time to improve care, while still guaranteeing safety and effectiveness. As a first step, the FDA released the paper exploring a new proposed framework that it believes will encourage development and may allow some modifications without review—“[I]t would be a more tailored fit than our existing regulatory paradigm for software as a medical device.”

Under the proposed framework, AI/ML-based SaMD would require a premarket submission when a software change or modification “significantly affects device performance or safety and effectiveness; the modification is to the device’s intended use; or the modification introduces a major change to the SaMD algorithm.” This approach was developed based on harmonized SaMD risk categorization principles established via the International Medical Device Regulators Forum, FDA’s benefit-risk framework, risk management principles in FDA’s 2017 guidance on submitting new 510(k)s for software changes to existing devices, the Software Pre-certification Pilot Program’s organizational-based total product life cycle approach, as well as the 510(k), De Novo classification request, and premarket application pathways.

So, where it is anticipated that the software will evolve over time and not remain static, the “evolution” will be described at the time of submission along with specific plans for post-market surveillance and modification of intended use where appropriate.

FDA will accept comments through June 3, 2019, via its website. This will be an important part of evolving the proposal into something that better fits the needs of this growing technology.

Another partnership has been formed in the autonomous vehicle world. SAE, Ford, GM, and Toyota have announced the formation of the Autonomous Vehicle Safety Consortium (AVSC).

The AVSC will work to safely advance the testing, pre-competitive development, and deployment of SAE Level 4 and 5 automated vehicles. The AVSC’s goal is for its work to inform and accelerate the development of industry standards for autonomous vehicles and to harmonize with the efforts of other consortia and standards bodies throughout the world.

AVSC’s first efforts will focus on a framework for the safe deployment of autonomous vehicles that is broadly applicable to all developers, manufacturers, and integrators of autonomous technologies. It will consist of a set of safety principles for SAE Level 4 and 5 automated driving systems focusing on:

1) testing prior to and when operating AVs on public roads,

2) data collection, protection, and sharing required to reconstruct certain events, and

3) interactions between AVs and other road users.

In an area of technology that continues to grow rapidly, collaboration among its producers is key to the future success of autonomous vehicles. Most promising about the AVSC’s current objective is its goal of making its efforts applicable to all involved in this technology. This blog will follow closely and report on the actions of the AVSC as it works toward achieving its goals.

Today, The Wall Street Journal published a special section on artificial intelligence with multiple articles on its impact on various industries, new applications of the technology, and its general impact on corporate management. This link to an article on the latter topic is worthwhile, as are the other articles in today’s edition of the WSJ. (A subscription may be required to view the full articles.)

Supplement: Additional helpful articles about AI from The Wall Street Journal include the following:

WSJ Pro

Test your Knowledge of AI

In a recent lawsuit filed in the Northern District of California, Tesla alleged that a former employee, Guangzhi Cao, copied more than 300,000 files of Tesla’s Autopilot-related source code before leaving to work for one of Tesla’s competitors, Xiaopeng Motors Technology Company Ltd.

This lawsuit highlights the difficulties associated with potential collaboration in the rapidly advancing industry of self-driving vehicles.

Tesla brings the following claims in the lawsuit: (1) misappropriation of trade secrets in violation of the Defend Trade Secrets Act; (2) misappropriation of trade secrets in violation of the California Uniform Trade Secrets Act; (3) breach of contract due to Cao’s alleged breach of Tesla’s Non-Disclosure Agreement; and (4) breach of employee’s duty of loyalty.

The lawsuit, filed shortly after Cao’s departure from Tesla, seeks an injunction preventing Cao from (1) retaining, disclosing, or using any Tesla confidential or proprietary information in any manner, and (2) soliciting other Tesla employees or contractors to leave employment with Tesla for a period of one year following his departure. The lawsuit further seeks monetary damages and a requirement that Cao “submit to ongoing auditing of his personal and work-related systems and accounts to monitor for unlawful retention or use of Tesla’s confidential and proprietary information.”

The complaint notes that Tesla’s Autopilot team, including its full self-driving technology, is “a crown jewel of Tesla’s intellectual property portfolio” and states:

“Tesla has a global fleet of more than 500,000 cars, which have driven more than a billion collective miles with Autopilot activated. Every day, thousands of Autopilot-enabled Tesla vehicles provide real-time feedback to Tesla’s servers, yielding voluminous data that Tesla uses to continually improve the Autopilot system. This fleet gives Tesla exponentially more data than its autonomous vehicle competitors, who generally have only small fleets of prototype vehicles, and has allowed Tesla to accelerate its autonomy technology in a way no other company can.”

The primary focus of the complaint is, understandably, the threat posed to Tesla’s intellectual property due to misappropriation of the Autopilot source code. However, the complaint also indicates a reluctance to divulge the inputs that are used to improve the source code, namely, the data from Tesla’s existing vehicles:

“As another example, the source code also reflects and contains improvements that are built on Tesla’s massive volume of fleet telemetry data. If disclosed to a competitor, that competitor could use Tesla’s source code to copy Tesla’s work, compete with Tesla, or otherwise accelerate the development of its own vehicle autonomy technology.”

As more autonomous vehicles enter the marketplace, sharing of data inputs (but not the source code) could assist in developing common safety standards and protocols for autonomous vehicles. However, some companies may be wary of sharing their data inputs because this could decrease the competitive edge that comes from a larger and more established fleet of vehicles.