A study has revealed that AI tools used in mental health platforms could perpetuate ethnic and gender biases inherited from the datasets used to train the models.
According to a study from the University of Colorado, led by Professor Theodora Chaspari, some AI-powered tools used in mental health treatment programs may rely on biased data containing stereotypes, which could compromise the programs’ effectiveness.
As a result, such tools can produce skewed results based on a patient’s ethnicity and gender when used to screen for mental health conditions such as anxiety and depression.
The finding raises concerns about the fairness and reliability of emerging mental health technologies, especially as the healthcare sector rapidly adopts AI to improve efficiency and access to quality care.
Professor Chaspari, of the Department of Computer Science, noted that AI models can propagate human and societal biases if they are trained on limited or insufficiently inclusive data.
In one experiment, the researchers fed people’s audio samples into a set of machine-learning algorithms and found a troubling flaw: the models were more likely to underdiagnose women at risk of depression than men.
In another experiment, the algorithms assigned similar depression levels to men and women, even though the women in the study reported more symptoms than the men.
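For readers curious how such a gap can be quantified, the sketch below shows one common way to audit a screening model: compare the rate of missed at-risk cases across gender groups. This is not the researchers’ code; the data, the score column, and the decision threshold are all hypothetical, chosen only to illustrate the idea of a per-group audit.

```python
# Minimal sketch of a per-group audit for a depression screening model.
# All records, scores, and the 0.5 threshold are hypothetical.
from collections import defaultdict

# Each record: (group, true_label, model_score); label 1 = depression risk present.
records = [
    ("female", 1, 0.34), ("female", 1, 0.41), ("female", 0, 0.22),
    ("male",   1, 0.72), ("male",   1, 0.66), ("male",   0, 0.18),
]

THRESHOLD = 0.5  # assumed cutoff above which the model flags a patient

# Count missed at-risk cases (false negatives) per group.
misses, positives = defaultdict(int), defaultdict(int)
for group, label, score in records:
    if label == 1:
        positives[group] += 1
        if score < THRESHOLD:
            misses[group] += 1

for group in positives:
    fnr = misses[group] / positives[group]
    print(f"{group}: false-negative rate = {fnr:.2f}")
```

A large difference in false-negative rates between groups, as with the women underdiagnosed in the study, is one of the clearest signals that a screening model is not treating patients equitably.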
To test for ethnic bias, the researchers asked a group of white, Latin American, and Black participants to give a speech in front of an unfamiliar audience and analyzed their performance. The results were unexpected.
While the Latin American participants reported feeling more anxious than the white and Black participants, the AI model did not detect this, pointing to potential bias in the data used to train the algorithms.
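One simple way to surface this kind of mismatch is to compare each group’s self-reported anxiety with the model’s predictions. The illustrative sketch below does exactly that; the group labels and numbers are invented, and only the per-group comparison reflects the pattern described in the study.

```python
# Illustrative per-group calibration check: self-reported anxiety vs. model prediction.
# Groups and values are made up for the sketch, both on an assumed 0-10 scale.
from statistics import mean

samples = [
    # (group, self_reported_anxiety, model_predicted_anxiety)
    ("latin_american", 8.1, 5.2), ("latin_american", 7.6, 5.5),
    ("white",          5.9, 5.8), ("white",          6.1, 6.0),
    ("black",          6.0, 5.7), ("black",          5.8, 5.9),
]

groups = {g for g, _, _ in samples}
for group in sorted(groups):
    reported  = [r for g, r, _ in samples if g == group]
    predicted = [p for g, _, p in samples if g == group]
    gap = mean(reported) - mean(predicted)
    print(f"{group}: mean self-report {mean(reported):.1f}, "
          f"mean model prediction {mean(predicted):.1f}, gap {gap:+.1f}")
```

A persistent gap for one group and not the others, as the study reported for Latin American participants, suggests the model’s training data did not adequately represent how anxiety presents in that group.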
The study, published in the journal Frontiers in Digital Health, demonstrates that the AI-powered algorithms clinicians may rely on to screen for mental illnesses such as depression and anxiety can deliver inaccurate assessments that vary with a patient’s gender and ethnicity.