Bias and Fairness in Automated Loan Approvals: A Systematic Review of Machine Learning Approaches

Authors

S. Raziyeva, M. Meraliyev

DOI:

https://doi.org/10.47344/jbzmnx25

Keywords:

AI bias, fairness techniques, loan approval, financial inclusion, regulatory compliance, algorithmic fairness, proxy bias

Abstract

Artificial intelligence (AI) is increasingly transforming credit approval processes, enabling financial institutions to assess risk more efficiently and at greater scale. As these systems become more embedded in lending decisions, concerns around fairness, bias, and accountability have grown significantly. Many of these concerns stem from the use of historical data, proxy variables, and model optimization choices that can unintentionally reinforce existing social and economic inequalities. This work presents a systematic overview of the types and sources of bias in AI-driven loan approval systems and critically examines how machine learning techniques attempt to address them. It also highlights emerging solutions, including explainable AI, federated learning, human-in-the-loop frameworks, and intersectional fairness approaches. Despite ongoing advancements, unresolved challenges remain, particularly the need for dynamic fairness monitoring and for addressing intersectional biases affecting individuals from multiple marginalized groups. To bridge these gaps, the paper emphasizes the importance of interdisciplinary collaboration among AI developers, regulatory bodies, and social scientists. It advocates embedding fairness as a core design principle in the development and deployment of future AI systems. Overall, this study contributes to the growing effort to develop more transparent, inclusive, and socially responsible financial technologies.
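
As a purely illustrative aside, not drawn from the paper itself, the short Python sketch below shows the kind of group-fairness check the review surveys: it computes the demographic parity difference and the disparate impact ratio for hypothetical loan-approval decisions. The synthetic data, variable names, and the 0.8 screening threshold (the common "four-fifths rule" heuristic) are all assumptions made for demonstration.

# Minimal, illustrative sketch only: compare approval rates across a binary
# protected group using two common group-fairness diagnostics. All data here
# is synthetic; none of it comes from the reviewed studies.
import numpy as np

rng = np.random.default_rng(seed=0)

# Hypothetical model decisions (1 = approve) and a binary protected attribute.
approved = rng.integers(0, 2, size=1000)
group = rng.integers(0, 2, size=1000)

rate_0 = approved[group == 0].mean()  # approval rate for group 0
rate_1 = approved[group == 1].mean()  # approval rate for group 1

demographic_parity_diff = abs(rate_0 - rate_1)
disparate_impact_ratio = min(rate_0, rate_1) / max(rate_0, rate_1)

print(f"Demographic parity difference: {demographic_parity_diff:.3f}")
print(f"Disparate impact ratio: {disparate_impact_ratio:.3f}")

# The 0.8 cutoff mirrors the "four-fifths rule" often used as a rough
# screening heuristic; it is not a legal or regulatory determination.
if disparate_impact_ratio < 0.8:
    print("Potential adverse impact: investigate further.")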

Published

2025-05-27

How to Cite

Raziyeva, S., & Meraliyev, M. (2025). Bias and Fairness in Automated Loan Approvals: A Systematic Review of Machine Learning Approaches. Journal of Emerging Technologies and Computing, 1(1). https://doi.org/10.47344/jbzmnx25