The growing dependence on algorithmic decision-making in public and private sectors has intensified concerns about fairness, accountability, and transparency. Citizens often encounter opaque outcomes in domains such as housing, welfare, employment, and visa processing, with little visibility into the reasoning behind automated judgments. This paper proposes a Community-Driven AI Audit Platform designed to bridge the gap between policy-level AI governance principles and the lived experiences of affected individuals. The system enables users to anonymously submit reports of questionable or biased AI decisions, which are then processed using natural language processing (NLP) techniques for metadata extraction and bias categorization. Structured data are stored in a lightweight SQLite database and visualized through interactive dashboards that highlight bias patterns, geographic disparities, and domain-specific anomalies. By integrating citizen narratives with explainable analytics, the platform offers a transparent, participatory framework for algorithmic accountability and regulatory reform. The proposed approach emphasizes open-source accessibility, privacy preservation, and social empowerment, contributing to a more equitable and trustworthy digital ecosystem.

Keywords: Responsible AI, Algorithmic Transparency, Fairness and Accountability, Citizen-Led Audit Platform, Natural Language Processing (NLP), Data Anonymization, Bias Visualization Dashboard, Public-Sector AI Governance
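The report pipeline described above (anonymous submission, bias categorization, SQLite storage, dashboard aggregation) can be sketched minimally as follows. This is an illustrative sketch only: the keyword-based `categorize` function and the `reports` schema are hypothetical stand-ins for the paper's NLP categorization and data model, not the actual implementation.

```python
import sqlite3

# Hypothetical bias-domain keywords, standing in for the platform's
# NLP-based categorization step.
CATEGORIES = {
    "housing": ["rent", "mortgage", "eviction"],
    "employment": ["hiring", "resume", "promotion"],
    "welfare": ["benefits", "eligibility"],
}

def categorize(report_text):
    """Naive keyword matcher; a placeholder for real NLP classification."""
    text = report_text.lower()
    for domain, keywords in CATEGORIES.items():
        if any(k in text for k in keywords):
            return domain
    return "other"

def init_db(conn):
    # No identifying columns are stored; anonymization of the free-text
    # narrative (names, addresses) would happen before insertion.
    conn.execute(
        "CREATE TABLE IF NOT EXISTS reports ("
        " id INTEGER PRIMARY KEY,"
        " domain TEXT, region TEXT, narrative TEXT)"
    )

def submit_report(conn, narrative, region):
    domain = categorize(narrative)
    conn.execute(
        "INSERT INTO reports (domain, region, narrative) VALUES (?, ?, ?)",
        (domain, region, narrative),
    )
    return domain

conn = sqlite3.connect(":memory:")
init_db(conn)
submit_report(conn, "My rental application was rejected by a scoring tool", "north")
submit_report(conn, "Automated resume screening filtered me out", "south")

# Per-domain counts: the kind of aggregate a bias dashboard would chart.
rows = conn.execute(
    "SELECT domain, COUNT(*) FROM reports GROUP BY domain ORDER BY domain"
).fetchall()
print(rows)
```

In a deployed system, the in-memory database would be a persistent SQLite file, and the aggregate query would feed the interactive dashboard layer rather than a `print` call.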