Intracranial tumor diagnosis requires exceptional precision due to the complex morphology of brain structures and the clinical risks associated with misclassification. While deep learning has improved the automation of MRI-based tumor analysis, its black-box nature limits trust, interpretability, and clinical deployment. This paper introduces an explainable neuro-symbolic spiking neural network (NS-SNN) framework that integrates biologically inspired spike-based computation with symbolic reasoning mechanisms to achieve high-precision, fully interpretable intracranial tumor diagnosis. The approach leverages the temporal dynamics and energy efficiency of spiking neural networks while embedding medical knowledge graphs and rule-based logic to provide transparent diagnostic insights. By bridging data-driven neural representations with human-understandable symbolic rules, NS-SNNs offer enhanced explainability, robustness, and alignment with radiological reasoning. This work demonstrates how neuro-symbolic integration improves decision traceability, supports causal interpretation, enables uncertainty analysis, and meets emerging clinical requirements for transparent AI in healthcare. Results indicate that NS-SNNs provide competitive diagnostic accuracy while generating interpretable reasoning pathways that clinicians can review and validate, positioning them as a promising next-generation AI paradigm for safe and precise brain tumor diagnosis.

Keywords: Neuro-symbolic AI; Spiking Neural Networks (SNNs); Brain Tumor Diagnosis; Explainable AI; Medical Imaging; MRI Analysis; Interpretable Machine Learning
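To make the neuro-symbolic pairing concrete, the sketch below shows one minimal way spike-based evidence can feed a rule-based symbolic layer that returns both a diagnosis and the rule that fired. This is an illustrative toy under our own assumptions, not the paper's implementation: the leaky integrate-and-fire dynamics, the class labels, and the `RULES` table are all hypothetical stand-ins for the learned SNN and the medical knowledge base.

```python
import numpy as np

def lif_spike_counts(currents, threshold=1.0, decay=0.9, steps=50):
    """Simulate leaky integrate-and-fire neurons driven by constant input
    currents; return each neuron's spike count over `steps` time steps."""
    v = np.zeros_like(currents, dtype=float)      # membrane potentials
    counts = np.zeros_like(currents, dtype=int)   # spike counts
    for _ in range(steps):
        v = decay * v + currents                  # leak, then integrate input
        fired = v >= threshold
        counts += fired.astype(int)
        v[fired] = 0.0                            # reset fired neurons
    return counts

# Hypothetical symbolic layer: each rule maps spike-count evidence to a
# label, so the decision pathway is explicit and auditable.
RULES = [
    ("glioma",     lambda c: c[0] > c[1] and c[0] > c[2]),
    ("meningioma", lambda c: c[1] > c[0] and c[1] > c[2]),
    ("no_tumor",   lambda c: True),               # default fallback rule
]

def diagnose(currents):
    """Return (label, spike-count evidence, index of the rule that fired)."""
    counts = lif_spike_counts(np.asarray(currents, dtype=float))
    for idx, (label, condition) in enumerate(RULES):
        if condition(counts):
            return label, counts, idx

label, evidence, rule_idx = diagnose([0.8, 0.3, 0.2])
```

Because the output carries the spike counts and the index of the matched rule, a reviewer can trace exactly which evidence triggered which diagnostic conclusion, which is the traceability property the abstract emphasizes.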