Until recently, clinical observation and statistical analysis of symptoms and treatment responses drove diagnosis, in what Psychology Today called an “imperfect science,” given the substantial symptom overlap between many conditions. According to the American Psychological Association, nearly half of patients are diagnosed with multiple psychiatric conditions. While co-occurring conditions can be valid, they can also result from a lack of diagnostic clarity.
To address these limitations, researchers are creating artificial intelligence (AI) models to aid screening and diagnosis in a burgeoning field known as computational psychology. Machine learning can assess massive datasets to uncover patterns and establish a “transdiagnostic” perspective that views symptoms across diagnostic categories.
These AI models, which also help reduce human error, are usually built to analyze language and imagery to diagnose mental illnesses, a process not entirely different from a psychiatrist or psychologist examining a patient. But while the professional listens and makes a subjective judgment based on signals and experience, an AI model trains on predictive factors and makes an objective computational assessment. At the outset, machine learning requires humans to supply labeled examples of correct and incorrect outputs to “train” the model; over time, accuracy and reliability increase.
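As a rough sketch of that training step, the toy classifier below learns a decision rule from labeled examples. It is a nearest-centroid model with invented feature vectors and labels, deliberately far simpler than the systems described in this article:

```python
# Minimal sketch of supervised "training": humans provide labeled examples,
# and the model derives a decision rule it can apply to unseen inputs.
# All feature vectors and labels below are hypothetical.

def train_centroids(examples):
    """Average the feature vectors for each label (nearest-centroid model)."""
    sums, counts = {}, {}
    for features, label in examples:
        if label not in sums:
            sums[label] = [0.0] * len(features)
            counts[label] = 0
        sums[label] = [s + f for s, f in zip(sums[label], features)]
        counts[label] += 1
    return {label: [s / counts[label] for s in sums[label]] for label in sums}

def predict(centroids, features):
    """Assign the label whose centroid is closest (squared Euclidean distance)."""
    def dist(center):
        return sum((a - b) ** 2 for a, b in zip(features, center))
    return min(centroids, key=lambda label: dist(centroids[label]))

# Hypothetical training data: (feature vector, clinician-assigned label).
examples = [
    ([0.9, 0.1], "positive"), ([0.8, 0.2], "positive"),
    ([0.1, 0.9], "negative"), ([0.2, 0.8], "negative"),
]
model = train_centroids(examples)
print(predict(model, [0.85, 0.15]))  # lands near the "positive" centroid
```

The key point is the division of labor: the labels come from human judgment, while the decision boundary is computed from the data.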
Applications of AI in Mental Health
This technology is applied to predict and diagnose multiple conditions, including:
Schizophrenia

In 2015, a group of researchers created an AI model to accurately predict which young people would develop psychosis, a primary symptom of schizophrenia. The model examined speech transcripts, looking for verbal tics linked to psychosis, including jumbled meaning between sentences, overly succinct sentences, and excessive use of words like “this,” “a,” or “that.” Building on the model, engineer and NeuroLex Diagnostics CEO Jim Schwoebel created a product (which will likely be administered via an app) to record appointments and later search the transcript for indicators, according to a 2016 article in The Atlantic. The product would output a single number, similar to blood pressure or heart rate, for the psychiatrist to consider in diagnosis.
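To illustrate the kind of surface features such a speech model might compute, the sketch below measures determiner usage and average sentence length in a transcript. The marker words and the simplistic tokenization are illustrative only, not the study's actual feature set:

```python
# Toy transcript features loosely inspired by the markers described above:
# frequency of words like "this," "a," "that" and sentence brevity.
# A real system would use proper NLP tokenization and many more features.

DETERMINERS = {"this", "a", "that"}

def speech_features(transcript):
    sentences = [s.strip() for s in transcript.split(".") if s.strip()]
    words = [w.lower().strip(",") for s in sentences for w in s.split()]
    det_rate = sum(w in DETERMINERS for w in words) / len(words)
    avg_len = len(words) / len(sentences)
    return {"determiner_rate": round(det_rate, 3),
            "avg_sentence_length": round(avg_len, 2)}

features = speech_features("That was a day. This felt like a dream. I left.")
print(features)
```

Scores like these would feed into a trained classifier rather than being thresholds on their own.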
Depression

A recent study using Instagram data established a model to screen for depression by analyzing users’ photos. Leveraging machine learning, the researchers applied color analysis, metadata and algorithmic face detection to compute statistical features from the photos, predicting depression more accurately with these features than a human assessment of the same images. Not only was the model accurate; in some cases it correctly flagged images for depression before the individual was diagnosed, suggesting a promising application for early diagnosis.
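The color-analysis component might be sketched as averaging hue, saturation and brightness across an image’s pixels. The pixel data below is synthetic, and the study’s actual pipeline was considerably more sophisticated:

```python
import colorsys

# Sketch of simple per-image color statistics of the kind the study computed.
# Synthetic (r, g, b) pixel lists stand in for real photos.

def color_stats(pixels):
    """Average HSV values over a list of (r, g, b) pixels in 0-255."""
    hsv = [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255) for r, g, b in pixels]
    n = len(hsv)
    return {
        "mean_hue": sum(h for h, s, v in hsv) / n,
        "mean_saturation": sum(s for h, s, v in hsv) / n,
        "mean_brightness": sum(v for h, s, v in hsv) / n,
    }

muted = color_stats([(60, 60, 70), (50, 50, 55)])      # dim, grayish pixels
vivid = color_stats([(250, 120, 40), (240, 200, 60)])  # bright, saturated pixels
print(muted["mean_brightness"] < vivid["mean_brightness"])  # True
```

Features like these, aggregated over a user’s posting history, become inputs to a classifier rather than diagnostic signals by themselves.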
Borderline Personality Disorder
Characterized by an unstable sense of self, unstable relationships and unstable emotions, borderline personality disorder (BPD) is associated with higher rates of self-harm and suicide than in the general population. Understanding of the diagnosis remains limited, though experts associate it with certain genetic, social and environmental factors. Through a computer-controlled ball-tossing game, researchers can provoke a feeling of social rejection and measure the human player’s emotional response, according to a 2017 article from the MIT Technology Review. In the game, two of the three players are actually computer-controlled; however, the subject believes they are all human-controlled. By controlling how often the subject receives the ball – which is really not at all after the first pass – the game provokes sadness or anger. The intensity of these feelings, researchers have found, varies greatly between those with BPD and those without. Not only did those with BPD feel more intensely, but even when the ball passes were fair, they still felt excluded.
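The exclusion logic described above can be sketched as a toy simulation; the player names, pass counts and one-pass inclusion phase are invented for illustration:

```python
import random

# Toy sketch of the ball-tossing setup described above: after one initial
# inclusion pass, the two computer-controlled players pass only to each other,
# freezing out the human subject.

def simulate_game(n_tosses, rng):
    players = ["subject", "bot_1", "bot_2"]
    receives = {p: 0 for p in players}
    holder = "bot_1"
    for toss in range(n_tosses):
        if toss == 0:
            target = "subject"  # the single early inclusion pass
        else:
            # Exclusion phase: bots pass only among themselves.
            target = rng.choice([p for p in players
                                 if p != holder and p != "subject"])
        receives[target] += 1
        holder = target
    return receives

counts = simulate_game(30, random.Random(0))
print(counts["subject"])  # 1 — the subject never receives the ball again
```

Researchers then compare the measured emotional response to this manipulated pass schedule.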
Autism

Diagnosing autism early can lead to better outcomes, yet the process is cumbersome for families, often requiring multiple in-person appointments at clinics with limited availability. But through a facet of machine learning that locates subtle patterns often invisible to human observation, deep learning may be able to supersede even statistical analysis by observing clusters of differences instead of seeking a single differentiating trait, according to a 2018 article from The Atlantic.
Cognitive Disorders in Children
For children, leaving a cognitive disorder like attention-deficit/hyperactivity disorder (ADHD) untreated can impede learning. In a program helmed by University of Texas at Arlington (UTA) researchers, children are assessed during both physical and computer play for attention, decision-making, emotion management and executive function skills, and the collected data is analyzed to produce a recommended intervention. Founded on interdisciplinary expertise, the system is intended to work for any special education practice across the globe.
Focus on Symptoms
Machine learning is also striving to understand symptoms from a data-driven perspective. For example, data can be grouped into independent clusters based on statistically significant symptoms. When this was done in a 2018 study, the algorithm produced six clusters, compared with the three produced by human categorization. Specific to psychosis, natural language processing (NLP) can better analyze thought disorder, a primary symptom of schizophrenia, even to the degree of identifying those vulnerable to schizophrenia from computerized speech analysis alone.
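The clustering idea can be sketched with a minimal k-means implementation over hypothetical symptom-severity scores; the data, the two feature names and the number of clusters are all invented for illustration:

```python
# Minimal k-means sketch of data-driven symptom clustering: patients described
# by hypothetical severity scores are grouped by similarity rather than by a
# preassigned diagnostic label.

def kmeans(points, centroids, iterations=10):
    for _ in range(iterations):
        # Assign each point to its nearest centroid.
        groups = [[] for _ in centroids]
        for p in points:
            nearest = min(
                range(len(centroids)),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centroids[i])),
            )
            groups[nearest].append(p)
        # Move each centroid to the mean of its assigned points.
        centroids = [tuple(sum(vals) / len(vals) for vals in zip(*g)) if g else c
                     for g, c in zip(groups, centroids)]
    return centroids, groups

# Each point: (anxiety score, anhedonia score) — hypothetical values.
patients = [(1, 8), (2, 9), (1, 9), (8, 1), (9, 2), (8, 2)]
centroids, groups = kmeans(patients, centroids=[(0, 0), (10, 10)])
print([len(g) for g in groups])  # [3, 3] — two symptom profiles emerge
```

The clusters emerge from the data itself, which is how a study of this kind can surface more groupings than human categorization produced.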
While observation and assessment were traditionally the key drivers in diagnosing mental health conditions, AI offers a groundbreaking opportunity to detect the propensity for mental health conditions, providing a chance for early detection that can be pivotal for treatment. In practice, higher-risk groups could be screened easily through an AI system. For example, the Department of Veterans Affairs partnered with Cogito to pilot an app developed to monitor veterans’ mental health, according to the World Economic Forum.
While AI is promising, it’s certainly not perfect. Diagnostic categories are murky, and AI must be trained and, therefore, monitored and validated on an ongoing basis. In addition, while a modest error rate may be tolerable in business applications, in health care even a 10% error rate represents serious risks to individual patients. AI training also depends on massive datasets and focuses primarily on speech, which makes models vulnerable to bias against demographic groups that are underrepresented in the data. While none of these factors dampens the potential for AI to improve mental health diagnosis, they do highlight the need for more research and development in this promising new discipline.
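The underrepresentation problem can be illustrated with invented numbers: an overall error rate can look acceptable while hiding a much higher error rate for a group that is scarce in the training data:

```python
# Hypothetical illustration of dataset bias: per-group error rates can
# diverge sharply even when the overall error rate looks fine.
# All counts below are invented.

def error_rates(records):
    """records: list of (group, correct) pairs; returns per-group error rate."""
    totals, errors = {}, {}
    for group, correct in records:
        totals[group] = totals.get(group, 0) + 1
        errors[group] = errors.get(group, 0) + (0 if correct else 1)
    return {g: errors[g] / totals[g] for g in totals}

# 95 well-represented cases vs. only 5 underrepresented ones.
records = ([("majority", True)] * 90 + [("majority", False)] * 5 +
           [("minority", True)] * 3 + [("minority", False)] * 2)
rates = error_rates(records)
print(rates)  # overall error is 7%, but 40% for the underrepresented group
```

This is why ongoing monitoring and validation, stratified by demographic group, matters for clinical deployments.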