Unveiling and Mitigating Bias in Mental Health Analysis with Large Language Models

Y Wang, Y Zhao, SA Keller, A de Hond… - arXiv preprint arXiv:2406.12033, 2024 - arxiv.org
The advancement of large language models (LLMs) has demonstrated strong capabilities across various applications, including mental health analysis. However, existing studies have focused on predictive performance, leaving the critical issue of fairness underexplored, posing significant risks to vulnerable populations. Despite acknowledging potential biases, previous works have lacked thorough investigations into these biases and their impacts. To address this gap, we systematically evaluate biases across seven social factors (e.g., gender, age, religion) using ten LLMs with different prompting methods on eight diverse mental health datasets. Our results show that GPT-4 achieves the best overall balance in performance and fairness among LLMs, although it still lags behind domain-specific models like MentalRoBERTa in some cases. Additionally, our tailored fairness-aware prompts can effectively mitigate bias in mental health predictions, highlighting the great potential for fair analysis in this field.
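The abstract does not reproduce the tailored fairness-aware prompts themselves; the following is only a minimal sketch of the general idea, in which the `build_prompt` helper, the `FAIRNESS_INSTRUCTION` wording, and the depression-detection framing are illustrative assumptions rather than the authors' actual prompts.

```python
# Minimal sketch of fairness-aware prompting for mental health prediction.
# The prompt wording and build_prompt() helper are illustrative assumptions,
# not the prompts used in the paper.

FAIRNESS_INSTRUCTION = (
    "Assess the text on its content alone. Do not let demographic attributes "
    "such as gender, age, or religion influence the prediction."
)

def build_prompt(post: str, fairness_aware: bool = True) -> str:
    """Compose a binary depression-detection prompt, optionally prepending
    a fairness instruction (the mitigation idea described in the abstract)."""
    instruction = FAIRNESS_INSTRUCTION + "\n" if fairness_aware else ""
    return (
        f"{instruction}"
        "Does the following post indicate depression? Answer 'yes' or 'no'.\n"
        f"Post: {post}"
    )

if __name__ == "__main__":
    example = "I haven't slept well in weeks and nothing feels worth doing."
    print(build_prompt(example))            # fairness-aware variant
    print(build_prompt(example, False))     # baseline variant for comparison
```

In a setup like this, bias could be probed by comparing the model's predictions on the baseline and fairness-aware variants across posts that differ only in stated demographic attributes.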