It’s probably safe to state that Alzheimer’s Disease (AD), the most common form of dementia, is one of the biggest concerns regarding the well-being of older populations in developed countries. The problem is twofold: besides the well-known behavioral and cognitive issues affecting patients and their closest relatives, who usually assume the caregiving burden, the disease also has an economic impact that puts additional stress on public and private health systems alike.
The origin of AD is still the subject of intense scientific research, with hypotheses ranging from hereditary factors to more speculative ones that consider bacteria or even prions as possible causes. A combination of several of these factors cannot be ruled out either, which makes the search for a cause even more difficult.
Given the current state of research there is no known cure for AD, so the dementia symptoms get progressively worse: AD is the sixth leading cause of death in the United States among people over 65. The impact of AD is such that while deaths from various forms of cancer and from heart disease are decreasing, reported deaths from AD are actually increasing.
Although the symptoms cannot be stopped, they can be mitigated if AD is diagnosed at its early onset. In its initial stages, AD symptoms look very similar to those of mild dementia, a more benign form of cognitive impairment, and this makes the diagnosis of AD especially challenging.
Can the recent advances in Artificial Intelligence be applied against this problem?
Expert systems and their failure
Expert systems were the poster child of computer-aided medical diagnosis during the first wave of AI in the 1970s. Early systems like MYCIN showed promising success diagnosing infections like meningitis, an impressive feat given that MYCIN ran on what we would today consider ancient machines. Expert systems tried to capture the “domain knowledge” of experts (medical doctors, in this case) and encode it into a complex and comprehensive system of rules (“if this, then that”) to be run by a software application called the inference engine, which asked all the necessary questions and gave a final answer to the question: does this patient suffer from the disease?
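To make this concrete, here is a minimal sketch of how such an inference engine chains “if this, then that” rules. The facts and rules are entirely hypothetical, not real medical criteria:

```python
# Toy forward-chaining inference engine: rules fire when all their
# required facts are known, possibly producing facts that let further
# rules fire in turn (hypothetical rules, not real diagnostic criteria).

RULES = [
    # (facts required, conclusion added when the rule fires)
    ({"fever", "stiff_neck"}, "suspect_meningitis"),
    ({"suspect_meningitis", "positive_culture"}, "diagnose_meningitis"),
]

def infer(facts):
    """Forward-chain over RULES until no new conclusion can be derived."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for required, conclusion in RULES:
            if required <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

result = infer({"fever", "stiff_neck", "positive_culture"})
print("diagnose_meningitis" in result)  # → True
```

Note how the second rule depends on the conclusion of the first: the engine answers the final question by chaining intermediate conclusions, exactly the pattern described above.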
But expert systems eventually fell out of fashion, marking the beginning of what computer historians call the “AI winter”, almost two decades of disillusionment with AI across academia and industry. Two lessons were learned. First, field knowledge usually cannot be encoded into simple question/answer pairs; instead it tends to come in the form of “if this, then it’s 80% this or 20% that”, a problem that quickly becomes intractable when thousands of rules are chained and must be handled with fuzzy logic. Second, expert systems have an inherent fragility: if the answer to any one of the questions is unknown, the carefully built model breaks apart and no meaningful or even approximate answer can be obtained. (It bears mentioning, though, that while expert systems are no longer deployed for this kind of task and are considered a solved problem, they remain a useful tool in other domains: the reader’s bank probably runs loan applications through software that is a direct descendant of those early expert systems.)
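Both lessons can be illustrated with MYCIN-style certainty factors, where each rule contributes partial belief and the engine must combine them. The rules below are made up for illustration:

```python
# MYCIN-style certainty factors: each rule contributes partial belief
# ("it's 80% this"), combined with MYCIN's rule for positive evidence.
# An unknown answer stops the chain, illustrating the fragility above.

def combine(cf1, cf2):
    """Combine two positive certainty factors in [0, 1]."""
    return cf1 + cf2 * (1 - cf1)

# Hypothetical rules: (question asked, belief contributed if answer is yes)
RULES = [("fever", 0.5), ("stiff_neck", 0.6)]

def assess(answers):
    """answers maps question -> True / False / None (None = unknown)."""
    cf = 0.0
    for question, strength in RULES:
        if answers.get(question) is True:
            cf = combine(cf, strength)
        elif answers.get(question) is None:
            return None  # an unknown answer breaks the carefully built model
    return cf

print(assess({"fever": True, "stiff_neck": True}))  # → 0.8
print(assess({"fever": True, "stiff_neck": None}))  # → None
```

With a handful of rules this is manageable; with thousands of chained rules, each propagating uncertainty into the next, the approach becomes very hard to build and maintain.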
Current trends: enter deep learning
Instead of using this type of reasoning, AI researchers and practitioners have turned to a totally different approach, one that started by trying to mimic the fundamental workings of the neuron: artificial neural networks. These software abstractions show an intriguing property: instead of mimicking the abstract, symbolic reasoning of a human expert, they can discover such concepts by themselves from huge amounts of data, given the right learning methods.
Taking these principles to the problem of medical diagnosis, the software is fed with data from patients and, for each individual patient, it is told whether that patient was diagnosed with the disease or not. Applying different numerical and statistical algorithms, the neural network automatically learns which pieces of data are relevant and which are not: for example, the weekday of a patient’s birth date is useless for predicting the disease and will be ignored, while the presence of a relevant biomarker will be taken into account. Using more sophisticated architectures, neural networks can also learn from medical imagery such as CT or MRI scans.
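The idea of learning relevance from data can be sketched with a toy example (this is an illustration, not the method of any particular paper): a single logistic-regression “neuron” trained by gradient descent on synthetic patient records, where one feature is a made-up informative biomarker and the other is pure noise, like the weekday of a birth date:

```python
import random
from math import exp

random.seed(0)

def make_patient():
    biomarker = random.random()  # informative: fully determines the label
    noise = random.random()      # irrelevant: independent of the label
    label = 1.0 if biomarker > 0.5 else 0.0
    return [biomarker, noise], label

data = [make_patient() for _ in range(500)]

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z))

# Train a single logistic neuron with plain stochastic gradient descent.
w, b, lr = [0.0, 0.0], 0.0, 0.5
for _ in range(200):
    for x, y in data:
        p = sigmoid(w[0] * x[0] + w[1] * x[1] + b)
        err = p - y
        w[0] -= lr * err * x[0]
        w[1] -= lr * err * x[1]
        b -= lr * err

# The weight on the biomarker dominates; the noise weight stays small.
print(abs(w[0]) > 3 * abs(w[1]))
```

Nothing told the model which feature mattered; the training process discovered it from the data alone, which is the property that makes this approach so different from hand-encoded rules.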
The training process of a deep learning system is extremely expensive in terms of the computing hours it needs, so to be useful beyond small cases like recognizing handwritten digits, this technology had to wait for a revolution we now take for granted: cloud computing. The cloud makes it possible to spin up powerful virtual machines to perform the training when needed and dispose of them when the work is done, paying only for the hours used and thus reducing hardware costs. This has taken deep learning techniques beyond the realm of university labs and made them available to anyone interested, which has been a boon for the scientific community.
Diagnosing AD with deep learning: results and possible uses
Given the interest in AD, researchers are applying these deep learning techniques to the prediction of the disease, with promising results. One example is the work of Bhagwat, Viviano et al., who present a deep learning framework built upon freely available tools like [TensorFlow](https://www.tensorflow.org/) or Scikit-learn, capable not only of producing a diagnosis but also of predicting the evolution of symptoms for patients.
The system is trained on data gathered from thousands of patients over six years, covering different types of data such as cognitive assessment surveys and magnetic resonance images. With a predictive accuracy ranging from 70% to 90%, this deep learning framework is undeniably a powerful tool.
There are two ways that this framework could be applied, as the authors explain:
First, identifying the patients who will undergo more severe cognitive impairment, meaning those who will require more attention, so that they benefit not only from an early diagnosis but also from more frequent cognitive assessments, and separating them from those who will experience milder symptoms. The goal here would be ensuring that all patients are adequately cared for while optimizing the available resources.
Second, helping select the patients who will take part in clinical trials in order to accelerate drug development, since trial results are statistically more significant if positive outcomes are demonstrated on a cohort of patients who would otherwise suffer the worst symptoms.
It’s important to understand that, from the medical community’s perspective, this new breed of tools should become part of the medical workflow, not substituting the role of the doctor but enhancing it. There is no computer alternative to the essential relationship between doctor and patient.