Artificial intelligence is rapidly changing the landscape of decision making in critical areas, and questions of fairness, transparency, and accountability sit at the forefront of ethical AI discussions. This empirical research paper examines the design of human-centred AI systems by analyzing the COMPAS recidivism risk assessment tool, which is widely used in the U.S. criminal justice system. Building on secondary analysis of the publicly available COMPAS dataset, the study shows how algorithmic predictions, even when statistically validated, exhibit significant disparities across racial groups. Black defendants are substantially more likely than white defendants to be misclassified as high risk, raising serious questions about bias and equity. The research shows that technical accuracy alone, even though COMPAS attains moderate predictive validity (AUC ≈ 0.70), is insufficient for ethical deployment. Instead, the findings support the incorporation of dynamic fairness constraints, clear explanations, and strong accountability mechanisms. This work offers practical insights for policymakers, AI developers, and practitioners on how to build more just and trustworthy AI systems that are consistent with societal values and human rights.

Keywords: Algorithmic fairness, Human-centred AI, Recidivism, Transparency, Explainable AI, Accountability, Bias mitigation, AI governance, COMPAS, Ethical AI
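
To make the two measurements the abstract refers to concrete, the sketch below estimates overall predictive validity (AUC) and the group-wise false positive rates that underlie the misclassification claim. It is a minimal illustration, not the paper's analysis pipeline: it assumes the ProPublica COMPAS export (compas-scores-two-years.csv) with its published columns decile_score, two_year_recid, and race, and treats a decile score of 5 or higher as the "high risk" cutoff, a common convention rather than a choice confirmed by this paper.

```python
# Minimal sketch: AUC and per-group false positive rates on the COMPAS data.
# Assumes the ProPublica release (compas-scores-two-years.csv); the decile >= 5
# "high risk" cutoff is a conventional assumption, not taken from this paper.
import pandas as pd
from sklearn.metrics import roc_auc_score

df = pd.read_csv("compas-scores-two-years.csv")

# Overall discrimination: COMPAS decile score vs. observed two-year recidivism.
auc = roc_auc_score(df["two_year_recid"], df["decile_score"])
print(f"AUC: {auc:.2f}")  # reported in the literature as roughly 0.70

# False positive rate per group: share of defendants labelled high risk
# among those who did NOT reoffend within two years.
df["high_risk"] = df["decile_score"] >= 5
for group in ["African-American", "Caucasian"]:
    non_recid = df[(df["race"] == group) & (df["two_year_recid"] == 0)]
    fpr = non_recid["high_risk"].mean()
    print(f"{group}: false positive rate = {fpr:.2f}")
```

A gap between the two printed false positive rates is the disparity at issue: comparable overall AUC can coexist with one group bearing far more erroneous high-risk labels, which is why the abstract argues that predictive validity alone cannot justify deployment.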