A recently published article in Nature Medicine, authored by Eric Topol, M.D., of the Department of Molecular Medicine at Scripps Research Institute, suggests that the convergence of human and artificial intelligence can lead to “high-performance medicine.”  High-performance medicine, he says, will be data-driven.  The development of software that can process massive amounts of information quickly, accurately, and less expensively will lay the foundation for this hybrid practice of medicine.  It will not be devoid of human interaction and input, he says, but it will be more reliant on technology and less reliant on human resources, combining computer-developed algorithms with physician and patient input.  Topol believes that, in the long run, this will elevate both the practice of medicine and patient health.

Topol sees impacts of AI at three levels of medicine:

  • Clinicians, by enabling more rapid and more accurate image interpretation (e.g., CT scans);
  • Health systems, by improving workflows and possibly reducing medical errors; and
  • Patients, by enabling them to process more data to promote better health.

While the author sees roadblocks to the integration of AI and human intelligence in medicine, such as data security, privacy, and bias, he believes these improvements will be realized over time.  Topol discusses a number of disciplines in which the application of AI has already had a positive effect: radiology, pathology, dermatology, ophthalmology, gastroenterology, and mental health.  Further, Topol discusses FDA’s new pathways for approval of AI medical algorithms and notes that FDA approved thirteen AI devices and software products in 2018, as opposed to only two in 2017.

We discussed FDA’s stated commitment to AI, FDA’s regulatory pathways for approval, and FDA’s approval of AI-related devices and software here.

Topol correctly maintains that rigorous review, whether by an agency (such as FDA) or by private industry, is necessary for the safe development of new technology generated from the combination of human and artificial intelligence.  This includes peer-reviewed publications on FDA-approved devices and software, something he argues has to date been lacking.  The author does a nice job of laying out the base of evidence for the use of AI in medicine and describing the potential pitfalls of proceeding without caution and oversight, as is true with other applications of AI.  The article is a worthy read for those involved in the field of medicine, including those engaged in the development of medical devices and related software.

FDA is taking steps to embrace and enhance innovation in the field of artificial intelligence. It has already permitted the marketing of an AI-based medical device that detects certain diabetes-related eye problems (IDx-DR), computer-aided detection and diagnosis software designed to detect wrist fractures in adults (OsteoDetect), and, most recently, a platform that includes predictive monitoring for moderate- to high-risk surgical patients (HemoSphere).

FDA also embraced several AI-based products in late November, when the Agency chose new technologies as part of a contest to combat opioid abuse that it launched in May 2018. FDA’s Innovation Challenge, which ran through September 30, 2018, sought mHealth (mobile health) technology in any stage of development, including diagnostic tools that identify those with an increased risk for addiction, treatments for pain that eliminate the need for opioid analgesics, treatments for opioid use disorder or symptoms of opioid withdrawal, and technology that can prevent the diversion of prescription opioids.

The opioid crisis continues to ravage cities and towns across America. FDA’s selection of AI-based devices to aid in the opioid crisis is important because it shows:

  • FDA’s commitment to its Action Plan to address the opioid crisis;
  • FDA’s recognition that AI is an important technology that it must address and encourage;
  • FDA’s willingness to work with developers of AI devices to establish new pathways for approval; and
  • The need for FDA to clarify its understanding of AI and how it will guide and regulate industry moving forward.

FDA received over 250 entries prior to the September deadline. In each proposal, applicants described the novelty of the medical device or concept; the development plan for the device; the team that would be responsible for developing it; the anticipated benefit of the device when used by patients; and the impact on public health as compared to other available alternatives. Medical devices at any stage of development were eligible for the challenge; feasibility and the potential impact of FDA’s participation in development to expedite marketing of the device were factors considered when reviewing the submissions.

A team from FDA’s Center for Devices and Radiological Health (CDRH) evaluated the many entries and chose eight to work with closely to accelerate development and expedite marketing application review of innovative products, similar to what occurs under its Breakthrough Devices Program.

Several of the selected entries involve pattern recognition, whether by predefined algorithm or by machine learning, to prevent, detect, or manage and treat opioid abuse. For example, Silicon Valley-based startup CognifiSense is developing a virtual reality therapy as part of a system to treat and manage pain. CognifiSense uses a software platform that provides psychological and experiential training to chronic pain patients to normalize their pain perception. Another selected product, iPill Dispenser, uses fingerprint biometrics in a mobile app that aims to curb over-consumption by dispensing pills based on prescriptions and permits physicians to review usage data to adjust dosing regimens. Yet another, Milliman, involves predictive analytics and pattern recognition to assess a patient’s potential for opioid abuse before prescribing, as well as to detect physician over-prescribing.
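
To make the “predictive analytics” idea concrete, the sketch below shows a toy risk-scoring model of the general kind these entries describe: a classifier trained on prescription-history features that outputs a misuse-risk probability before a new prescription is written. It is purely illustrative; the features, training data, and choice of logistic regression are assumptions made for this example and do not reflect how CognifiSense, iPill Dispenser, Milliman, or any other FDA-selected product actually works.

  # Hypothetical illustration only: a toy opioid-misuse risk score.
  # All feature names and data below are invented for this sketch.
  import numpy as np
  from sklearn.linear_model import LogisticRegression

  # Each row: [daily morphine-milligram-equivalent dose, number of prescribers,
  #            number of pharmacies used, prior substance-use-disorder flag]
  X_train = np.array([
      [20,  1, 1, 0],
      [90,  3, 2, 1],
      [50,  2, 1, 0],
      [120, 4, 3, 1],
      [10,  1, 1, 0],
      [75,  2, 2, 1],
  ])
  y_train = np.array([0, 1, 0, 1, 0, 1])  # 1 = later flagged for misuse (synthetic labels)

  model = LogisticRegression()
  model.fit(X_train, y_train)

  # Score a new patient profile before prescribing
  new_patient = np.array([[60, 2, 2, 0]])
  risk = model.predict_proba(new_patient)[0, 1]
  print(f"Estimated misuse risk: {risk:.2f}")

In a real product, the model, features, and validation would be far more extensive, and it is exactly that kind of design and evidence that FDA would expect to see during premarket review.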