Oct 3, 2022

Foundation’s Workshop Highlights Opportunities for Artificial Intelligence Tools in Retinal Disease Clinical Care and Research

Eye On the Cure Research News

Small data sets present challenges for developing rare disease AI tools


Artificial intelligence (AI) is quickly emerging as an efficient alternative to labor-intensive processes and a powerful tool for tackling highly complex problems. According to Precedence Research, the global AI market is expected to grow 38.1 percent annually from 2022 to 2030, reaching nearly $1.6 trillion.

AI is being used today in many diverse applications, including medical imaging and diagnostics, driverless cars, and the removal of propaganda and hateful content from social media. Someday, we might even see AI-based drone bees pollinating crops.

The Foundation Fighting Blindness hosted a virtual workshop titled “Artificial Intelligence in the Retina” on September 22, 2022, to explore the opportunities and challenges in using AI for improving care and therapy development for people with retinal diseases. Nearly 60 industry professionals and researchers, including members of the Foundation’s Scientific Advisory Board, attended the event.

"Artificial intelligence is a powerful but relatively new tool for researchers in the inherited retinal disease space, so we are excited to learn more about its potential application in clinical diagnosis and evaluation of patients and therapy development," said Claire Gelfman, PhD, chief scientific officer, Foundation Fighting Blindness. "The workshop was a great introduction for our investigators."

The workshop was led by Aaron Lee, MD, an assistant professor and vitreoretinal surgeon at the University of Washington, and Adam Dubis, PhD, an associate professor and deputy director (research) at University College London and founder of the company DeepEye.

 

AI in the Ophthalmology Sector

To open the workshop, Dr. Dubis noted that AI tools are proliferating in ophthalmology, with at least 20 companies in the sector, many of them developers of imaging systems and devices.

Dr. Lee said that Digital Diagnostics (formerly IDx) and Eyenuk algorithms are both FDA-approved for diabetic retinopathy screening and deployed in the US. Other companies are deploying their algorithms internationally. “What’s amazing is they are making clinical decisions without a clinician overseeing them at all. It is the first time in the US that such an algorithm was ever approved, so it is really a landmark moment in the history of medicine,” he said. He added that several AI tools are also available today for use in research labs (i.e., in applications that are not patient-facing).

The retina-related AI applications covered in the workshop included disease identification and diagnosis, imaging, and disease prediction.

 

Machine Learning vs. Deep Learning

Dr. Dubis explained that AI systems, which use computers or robots to perform tasks normally done by humans, can often be classified as either machine learning (ML) or deep learning (DL) tools. ML is a type of AI that enables software applications to use historical data as input to improve the accuracy of their predictions of classifications and outcomes. DL is ML that uses multilayered artificial neural networks, which mimic the learning process of the human brain, to improve those predictions.
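To make the distinction concrete, here is a minimal sketch in Python, assuming scikit-learn and synthetic data rather than anything shown at the workshop: a classical ML classifier (logistic regression) and a small multilayered neural network are trained on the same inputs.

```python
# Illustrative sketch only: a classical ML model vs. a small neural network
# trained on the same synthetic data with scikit-learn.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for tabular clinical features (not real patient data).
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Machine learning": a classical model that learns a single decision boundary.
ml_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Deep learning": a multilayered neural network that learns intermediate features.
dl_model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=1000,
                         random_state=0).fit(X_train, y_train)

print("logistic regression accuracy:", ml_model.score(X_test, y_test))
print("neural network accuracy:", dl_model.score(X_test, y_test))
```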

 

Training Machine Learning Models

Da Ma, PhD, assistant professor at Wake Forest University School of Medicine, and Marinko V. Sarunic, PhD, professor at University College London, discussed ML model training. They said that getting sufficient data is a key challenge in ML training, and while more data is generally better, it must also be clean. Cleaning data can be labor intensive and requires various levels of human expertise.

Dr. Ma presented an example of ML training: the classification of referable (severe and moderate) and non-referable (mild) diabetic retinopathy (DR) using optical coherence tomography. He explained that clear, detailed, properly oriented images of the relevant regions of the retina are needed. Further, image pre-processing to capture the specific features desired can make the data more powerful and reduce the amount of data needed.
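As a rough illustration of the kind of pre-processing described, the sketch below normalizes and center-crops an image before training; the array sizes and steps are assumptions for illustration, not Dr. Ma's actual pipeline.

```python
# Hedged sketch: simple pre-processing of an OCT B-scan before model training,
# using only NumPy. Real pipelines also handle orientation, segmentation, and
# image-quality checks.
import numpy as np

def preprocess_bscan(img, crop=(256, 256)):
    """Normalize pixel intensities to [0, 1] and center-crop to a fixed size."""
    img = img.astype(np.float32)
    img = (img - img.min()) / (img.max() - img.min() + 1e-8)   # intensity normalization
    h, w = img.shape
    ch, cw = crop
    top, left = (h - ch) // 2, (w - cw) // 2                   # center-crop offsets
    return img[top:top + ch, left:left + cw]

# Toy stand-in for a raw scan (not real patient data).
raw = np.random.default_rng(0).integers(0, 255, size=(480, 640))
print(preprocess_bscan(raw).shape)   # (256, 256)
```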

Dr. Sarunic introduced the potential modeling pitfalls known as underfitting and overfitting. Underfitting is an oversimplification of the problem in which the model fails to accurately capture the relationship between the inputs and the outcome, often because of a lack of good data. Overfitting occurs when the model is highly accurate on the dataset it was trained on but less accurate on new, unseen data.
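A simple way to see both pitfalls, sketched below under the assumption of synthetic data and scikit-learn decision trees, is to compare a model's accuracy on its training data with its accuracy on held-out data.

```python
# Illustrative sketch: spotting underfitting and overfitting by comparing
# training accuracy with accuracy on held-out (validation) data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, n_features=20, n_informative=5,
                           flip_y=0.1, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

for depth in (1, 4, None):  # too simple, moderate, effectively unlimited
    model = DecisionTreeClassifier(max_depth=depth, random_state=0)
    model.fit(X_train, y_train)
    print(f"max_depth={depth}: "
          f"train={model.score(X_train, y_train):.2f} "
          f"validation={model.score(X_val, y_val):.2f}")
# Underfitting: both scores are low. Overfitting: the training score is near
# 1.0 while the validation score drops noticeably.
```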

Model developers need to strike a balance between these two extremes, which can be challenging for rare diseases because there is often too little data available to train the model.

 

What to Worry About Before Deploying an AI Model

Peter Campbell, MD, MPH, associate professor at Casey Eye Institute, Oregon Health & Science University, presented four considerations for deploying an AI model: Is the model unbiased, fair, explainable, and generalizable?

Bias can occur when erroneous assumptions and associations are made, and it can be introduced at multiple points in model training and deployment. Using image analysis for retinopathy of prematurity (ROP) as an example, Dr. Campbell said that differences in patient ethnicities, camera systems, and points of view (the retinal region captured) can all lead to bias.

Fairness for ROP data can be ensured by using large data sets that span multiple races and ethnicities.

An AI model is explainable if its implementers know how and why it makes its predictions, including which features the algorithm relies on. For example, if the model is meant to determine whether an input object is a car, the researchers need to know which features the model uses to label the object as a car.

A generalizable model will work well with unseen (new) data. Its algorithm can accommodate various patient subgroups and diversity across many variables (e.g., variations in data quality, devices, patient demographics).
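One simple check of generalizability, sketched below, is to score a trained model separately on each subgroup rather than only on the pooled test set; the subgroup variable (camera type) and the toy numbers are hypothetical.

```python
# Hedged sketch: per-subgroup accuracy reveals generalizability gaps that a
# single pooled metric can hide.
import numpy as np

def accuracy_by_subgroup(y_true, y_pred, subgroup):
    """Return {subgroup value: accuracy} so gaps between groups are visible."""
    y_true, y_pred, subgroup = map(np.asarray, (y_true, y_pred, subgroup))
    return {g: float((y_pred[subgroup == g] == y_true[subgroup == g]).mean())
            for g in np.unique(subgroup)}

# Toy example: a model that performs worse on images from "camera_B".
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 1, 1, 0, 1, 0])
device = np.array(["camera_A"] * 4 + ["camera_B"] * 4)
print(accuracy_by_subgroup(y_true, y_pred, device))
# camera_A scores 1.0 while camera_B drops to 0.25, flagging a generalizability gap.
```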

“There is a wide gap between interesting papers about [AI] technology and evidence that technology reduces blindness,” Dr. Campbell cautioned. “It may solve some really important clinical problems, but can also worsen disparities and fail to work in relevant subgroups.”

 

Considerations for Small Data Regimes in AI

By definition, data for rare diseases is limited, which creates challenges and calls for careful consideration when implementing an AI model.

Nikolas Pontikis, PhD, a senior research fellow at University College London, said that data sharing and aggregation are crucial for training AI models for rare diseases. Federated learning, in which models are trained across data sets held at multiple sites without pooling the raw data, is one option for gaining access to more data.
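The sketch below illustrates the federated-averaging idea under simplifying assumptions (a toy logistic-regression model, three simulated sites, synthetic data): each site trains locally on its own records, and only the model weights are shared and averaged.

```python
# Hedged sketch of federated learning: each site updates the model on its own
# private data, and a central server averages the weights. Not a production
# implementation; the data, model, and site count are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def local_update(w, X, y, lr=0.1, steps=50):
    """A few gradient-descent steps on one site's private data."""
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))          # predicted probabilities
        w = w - lr * X.T @ (p - y) / len(y)       # logistic-regression gradient
    return w

# Three hypothetical sites, each holding its own small labeled data set.
sites = []
for _ in range(3):
    X = rng.normal(size=(40, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.3, size=40) > 0).astype(float)
    sites.append((X, y))

w_global = np.zeros(5)
for _ in range(10):                               # federated averaging rounds
    local_weights = [local_update(w_global, X, y) for X, y in sites]
    w_global = np.mean(local_weights, axis=0)     # server averages the updates

print("global weights after federated averaging:", np.round(w_global, 2))
```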

Dr. Pontikis also reviewed the pros and cons of different data sources. For example, clinical trial data may be optimal, since it is usually clean and well labeled, but it might not represent real-world data. Curated (retrospective) data is lower quality than clinical trial data and often needs cleaning and labeling. Real-world data is usually low quality but can be made generalizable.

A technique known as semi-supervised learning, in which the model itself is used to label unlabeled data, can reduce the human labor needed for labeling data.
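One common form of semi-supervised learning is self-training, in which a model fit on the small labeled portion assigns pseudo-labels to unlabeled examples it is confident about. The sketch below uses scikit-learn's SelfTrainingClassifier on synthetic data as an illustration, not the specific method discussed at the workshop.

```python
# Hedged sketch of semi-supervised self-training: the model pseudo-labels
# unlabeled examples it is confident about, reducing manual labeling effort.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Pretend only ~10% of the examples were labeled by human experts;
# scikit-learn marks unlabeled samples with -1.
rng = np.random.default_rng(0)
y_partial = np.where(rng.random(len(y)) < 0.1, y, -1)

model = SelfTrainingClassifier(LogisticRegression(max_iter=1000), threshold=0.8)
model.fit(X, y_partial)

print("examples labeled after self-training:", int((model.transduction_ != -1).sum()))
print("accuracy against the true labels:", round(float((model.predict(X) == y).mean()), 2))
```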

Dr. Pontikis noted that data from rare diseases can often identify individual patients. Therefore, patients should provide consent and/or privacy-preserving approaches should be implemented.

 

Looking Forward

While the workshop revealed exciting opportunities for implementing AI in the rare retinal disease space, more work needs to be done.

In his closing comments, Dr. Lee said, “These techniques are incredibly powerful and show great promise, but at the same time, quite honestly, there is a fair amount of work in diseases that are common, and when we try to push the envelope and make them work in rare diseases, we’re at the bleeding edge of what’s available computationally today.”