This review synthesizes empirical evidence on how large language models (LLMs) are used as programming tutors across five key applications: code explanation, debugging, formative feedback, exercise/test generation, and assessment. Following PRISMA 2020 guidance, studies from 2020–2025 were screened for relevance to programming education, with outcomes on learning effectiveness, error patterns, and academic integrity synthesized via narrative and thematic methods. The mapping of recent studies indicates rapid uptake of LLMs, mixed but promising learning outcomes, recurring failure modes in logic and specification adherence, and emerging academic integrity risks alongside mitigation practices. Implications are provided for course design in Object-Oriented Programming (OOP) contexts, assessment practices, and departmental policy, with identified gaps and recommendations for standardized evaluation.

Keywords: Large language models; programming education; formative feedback; exercise/test generation; assessment; academic integrity; object-oriented programming; Java; C++.