A study published on 7 January in Nature Medicine has shown that Face2Gene, a smartphone app designed to help diagnose a range of rare genetic disorders by analysing pictures of people’s faces, can outperform clinicians (1). The app is based on machine learning and artificial neural networks, which are loosely inspired by the wiring of the human brain and excel at finding patterns too complex for conventional computer programs to identify.
Many syndromic genetic conditions have distinctive facial features. However, because the individual syndromes are rare and the number of possible syndromes is large, reaching the correct diagnosis can be a lengthy and expensive process, often referred to as the “diagnostic odyssey”. These disorders are typically diagnosed by expert clinicians, but new technology is now helping doctors and researchers quickly distinguish conditions such as Cornelia de Lange syndrome and Angelman syndrome from syndromes with similar features. The algorithm can also classify different genetic forms of Noonan syndrome.
Facial recognition technologies such as DeepFace, which are based on deep convolutional neural networks (DCNNs), have achieved human-level performance. However, previous disease identification experiments have mainly focused on distinguishing unaffected individuals from those with a particular condition, and do not address the real-world problem of classifying hundreds of syndromes. Moreover, earlier studies relied on small training datasets of around 200 images, too small to train deep-learning models effectively.
The facial image analysis framework behind the app, called DeepGestalt, uses computer vision and deep-learning algorithms to quantify the similarity between a patient’s facial features and those associated with hundreds of syndromes. The app was developed by FDNA, a digital-health company in Boston, and the recent study was carried out in collaboration with an international team of researchers from the US, Israel, and Germany. Because data are needed to train the algorithms, the Face2Gene app is provided to clinicians for free and is currently used as a “second opinion” when diagnosing rare genetic disorders. However, an algorithm is only as good as its training data, which is why data sharing has become so important, particularly for extremely rare conditions.
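To make the approach concrete, here is a minimal sketch of how a deep convolutional classifier of this kind can be structured: a face image passes through a stack of convolutional feature extractors, and a final layer scores each candidate syndrome, from which a ranked list of suggestions is read off. This is an illustrative assumption about the general technique, not FDNA’s published DeepGestalt architecture; the class name SyndromeClassifier, the layer sizes, and the 216-class output are hypothetical placeholders.

```python
# Illustrative sketch only; not the actual DeepGestalt architecture published by FDNA.
import torch
import torch.nn as nn

class SyndromeClassifier(nn.Module):
    def __init__(self, num_syndromes: int = 216):
        super().__init__()
        # A small stack of convolutional blocks extracts facial features.
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # A linear head scores each candidate syndrome.
        self.classifier = nn.Linear(128, num_syndromes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: batch of cropped face images, shape (N, 3, H, W)
        feats = self.features(x).flatten(1)
        return self.classifier(feats)  # raw scores; softmax gives a ranked list

model = SyndromeClassifier()
scores = model(torch.randn(1, 3, 100, 100))               # one dummy face image
top10 = torch.topk(scores.softmax(dim=1), k=10).indices   # ranked top-10 syndrome guesses
```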
The “community-driven phenotyping platform” is trained on tens of thousands of patient images. In the study, the researchers fed more than 17,000 images of diagnosed cases, spanning 216 distinct syndromes, into the app. According to the authors, DeepGestalt outperformed clinicians in three initial experiments: two aimed at distinguishing subjects with a target syndrome (Angelman syndrome or Cornelia de Lange syndrome) from those with other syndromes, and one aimed at separating different genetic subtypes of Noonan syndrome. In a final experiment designed to mimic a real clinical setting, the programme placed the correct syndrome in its top-10 list 91% of the time across the 502 images examined.
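The headline figure from that final experiment is a top-10 accuracy: the fraction of test images for which the correct syndrome appears among the model’s ten highest-ranked suggestions. The short sketch below shows how such a metric is typically computed from model scores; the function name top_k_accuracy and the randomly generated scores and labels are hypothetical, used only to illustrate the calculation for 502 images and 216 syndromes.

```python
# Sketch of a top-k accuracy computation, the metric behind the "91% in the top 10" result.
import numpy as np

def top_k_accuracy(scores: np.ndarray, true_labels: np.ndarray, k: int = 10) -> float:
    """scores: (n_images, n_syndromes) model outputs; true_labels: (n_images,) class indices."""
    # Indices of the k highest-scoring syndromes for each image.
    top_k = np.argsort(scores, axis=1)[:, -k:]
    # An image counts as a hit if its true syndrome appears among those k suggestions.
    hits = np.any(top_k == true_labels[:, None], axis=1)
    return float(hits.mean())

# Hypothetical example: 502 test images scored over 216 syndromes.
rng = np.random.default_rng(0)
scores = rng.random((502, 216))
labels = rng.integers(0, 216, size=502)
print(top_k_accuracy(scores, labels, k=10))
```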
Although the app does not provide a definitive diagnosis, the technology could reduce the need for lengthy diagnostic testing. Moreover, it could be combined with genetic testing to speed up the diagnosis of genetic syndromes, and an earlier diagnosis can significantly improve outcomes. By narrowing down the list of possible conditions, artificial intelligence could also reduce the cost of expensive genetic testing.
(1) Gurovich, Y. et al. Identifying facial phenotypes of genetic disorders using deep learning. Nature Medicine (2019). DOI: 10.1038/s41591-018-0279-0
Image source: A five-year-old Mexican girl with Angelman syndrome. Yokoyama-Rebollar, E., et al. Molecular Cytogenetics (2015). CC BY 4.0.