Intracranial tumor diagnosis requires exceptional precision due to the complex morphology of brain structures and the clinical risks associated with misclassification. While deep learning has improved the automation of MRI-based tumor analysis, its black-box nature limits trust, interpretability, and clinical deployment. This paper introduces an explainable neuro-symbolic spiking neural network (NS-SNN) framework that integrates biologically inspired spike-based computation with symbolic reasoning mechanisms to achieve high-precision, fully interpretable intracranial tumor diagnosis. The approach leverages the temporal dynamics and energy efficiency of spiking neural networks while embedding medical knowledge graphs and rule-based logic to provide transparent diagnostic insights. By bridging data-driven neural representations with human-understandable symbolic rules, NS-SNNs offer enhanced explainability, robustness, and alignment with radiological reasoning. This work demonstrates how neuro-symbolic integration improves decision traceability, supports causal interpretation, enables uncertainty analysis, and meets emerging clinical requirements for transparent AI in healthcare. Results indicate that NS-SNNs provide competitive diagnostic accuracy while generating interpretable reasoning pathways that can be reviewed and validated by clinicians, positioning them as a promising next-generation AI paradigm for safe and precise brain tumor diagnosis.

Keywords: Neuro-symbolic AI; Spiking Neural Networks (SNNs); Brain Tumor Diagnosis; Explainable AI; Medical Imaging; MRI Analysis; Interpretable Machine Learning.
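To make the neuro-symbolic pipeline described above concrete, the following minimal Python sketch illustrates the general pattern rather than the framework's actual implementation: a layer of leaky integrate-and-fire (LIF) neurons converts input currents into spike trains, and the resulting firing rates are grounded as symbolic concepts that a human-readable diagnostic rule can consume. All function names, concept labels (e.g., enhancing_rim), thresholds, and the rule itself are hypothetical placeholders assumed for illustration.

```python
import numpy as np

def lif_simulate(inputs, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Simulate a layer of leaky integrate-and-fire neurons.

    inputs: array of shape (T, N) giving the input current to each of N
    neurons at each of T time steps. Returns a (T, N) binary spike train.
    """
    T, N = inputs.shape
    v = np.zeros(N)              # membrane potentials
    spikes = np.zeros((T, N))
    for t in range(T):
        # Leaky integration: potential decays toward rest, driven by input.
        v += dt * (-v / tau + inputs[t])
        fired = v >= v_thresh
        spikes[t, fired] = 1.0
        v[fired] = v_reset       # reset the neurons that fired
    return spikes

def spike_rates(spikes):
    """Average spike trains over time into firing rates in [0, 1]."""
    return spikes.mean(axis=0)

def symbolic_diagnosis(rates, concept_names, rate_threshold=0.05):
    """Ground firing rates as symbolic predicates and apply a toy rule.

    A concept is treated as 'active' when its neuron's firing rate exceeds
    the (tunable) threshold. The rule below is a hypothetical stand-in for
    a rule drawn from a clinical knowledge graph.
    """
    active = {name: bool(rate >= rate_threshold)
              for name, rate in zip(concept_names, rates)}
    if active["enhancing_rim"] and active["central_necrosis"]:
        verdict = "high-grade tumor suspected"
    elif active["enhancing_rim"]:
        verdict = "enhancing lesion; further review needed"
    else:
        verdict = "no high-grade pattern detected"
    return verdict, active

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    concepts = ["enhancing_rim", "central_necrosis", "midline_shift"]
    # Toy input currents standing in for features from an MRI encoder;
    # per-concept scaling makes the first two concepts fire strongly.
    currents = rng.uniform(0.0, 1.0, size=(200, 3)) * np.array([0.25, 0.2, 0.02])
    trains = lif_simulate(currents)
    verdict, active = symbolic_diagnosis(spike_rates(trains), concepts)
    print("active concepts:", active)
    print("verdict:", verdict)
```

In a full NS-SNN, the concept neurons would be trained on MRI-derived features and the rules taken from a medical knowledge graph; the sketch only shows how spike rates can be thresholded into predicates, which is what makes each diagnostic decision traceable to an explicit, reviewable rule.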