Spiking neural networks (SNNs) are often called the third generation of neural models. They communicate with brief asynchronous pulses rather than continuous values, which suits event-driven sensors and low-power neuromorphic hardware. The mathematics behind them, however, is scattered across neuroscience textbooks, machine learning papers, and the stochastic process literature, so a researcher entering the area runs into a notation problem before anything else: the same neuron model is written a different way in each of those three literatures. This tutorial collects what a graduate student or research engineer needs to start working with SNNs, in one place and with one notation.

We start with the leaky integrate-and-fire (LIF) neuron, derived from a conductance-based picture of the membrane, and then discuss reset semantics, the spike response model, and the broader family that includes the Hodgkin-Huxley and adaptive exponential models. Network equations are written out for feedforward and recurrent architectures. We cover the main neural coding schemes (rate, time-to-first-spike, rank-order, phase, burst, population) and discuss when each is appropriate. The neuromorphic datasets a beginner is likely to encounter, including N-MNIST, DVS-Gesture, CIFAR10-DVS, SHD, and SSC, are described together with the tensor formats that event-based data takes.

For learning, we derive pair-based spike-timing-dependent plasticity (STDP) from exponential traces and develop the surrogate gradient framework, which has become the dominant tool for training deep SNNs by backpropagation through time (BPTT). Reset semantics under gradient flow, the choice of surrogate function, and common BPTT pitfalls are addressed in a way that maps directly onto code. Alternative training paradigms (e-prop, EventProp, SLAYER, three-factor rules, ANN-to-SNN conversion) are introduced briefly so the reader knows what else is available.

A practical section walks through a first SNN training loop with framework-agnostic pseudocode and points to the main software libraries (snnTorch, SpikingJelly, Norse). Throughout the article, equations carry derivational status labels (exact, reduction, approximation, heuristic), so the reader sees at a glance which steps are mathematical identities and which involve approximations. We do not cover hardware implementation, detailed point process theory, expressivity proofs, or open problems in SNN complexity; those belong in a more advanced treatment. The article is meant to be read linearly and closes with a suggested reading path.
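To fix ideas before the main text, here is the LIF membrane equation in one conventional form; the symbols below are a common textbook choice, not necessarily the notation the article settles on.

```latex
% LIF dynamics with hard reset (one conventional notation; the
% article's final symbol choices may differ).
\tau_m \frac{dV}{dt} = -\bigl(V(t) - V_{\mathrm{rest}}\bigr) + R\,I(t),
\qquad
V(t^-) \ge V_{\mathrm{th}} \;\Longrightarrow\; V(t^+) = V_{\mathrm{reset}}.
```

The subthreshold dynamics are a linear ODE; the subtlety lives in the discontinuous reset, which is why its semantics get their own discussion.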
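As a concrete illustration of the tensor formats that event-based data takes, the sketch below bins a stream of events (timestamps, pixel coordinates, polarities) into a dense count tensor of shape [T, 2, H, W], one channel per polarity. The function name, the uniform binning, and the assumption that polarity is already encoded as 0/1 are illustrative choices, not prescriptions from the article.

```python
import numpy as np

# A minimal sketch: bin DVS-style events into a dense [T, 2, H, W]
# count tensor (uniform time bins, one channel per polarity).
def events_to_tensor(t, x, y, p, num_bins, height, width):
    """t, x, y, p: 1-D arrays of timestamps, coordinates, polarities (0/1)."""
    frames = np.zeros((num_bins, 2, height, width), dtype=np.float32)
    # Normalize timestamps to [0, 1] and map to a bin index in [0, num_bins).
    t_norm = (t - t.min()) / max(float(t.max() - t.min()), 1e-9)
    bins = np.minimum((t_norm * num_bins).astype(int), num_bins - 1)
    # Accumulate event counts per (bin, polarity, row, column).
    np.add.at(frames, (bins, p.astype(int), y.astype(int), x.astype(int)), 1.0)
    return frames
```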
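The pair-based STDP rule reduces to a compact online update once each synapse carries two exponentially decaying traces, one per side. Below is a minimal sketch of that reduction in code; the learning rates, time constants, and update ordering (decay, bump, weight change) are illustrative conventions rather than the article's definitive choices.

```python
import numpy as np

# Pair-based STDP via exponential traces (a sketch; all parameter
# values are illustrative). Works elementwise on per-synapse arrays.
def stdp_step(w, pre_spike, post_spike, x_pre, x_post,
              a_plus=0.01, a_minus=0.012,
              tau_pre=20.0, tau_post=20.0, dt=1.0):
    # Decay both traces, then bump them where spikes occurred (0/1).
    x_pre  = x_pre  + dt * (-x_pre  / tau_pre)  + pre_spike
    x_post = x_post + dt * (-x_post / tau_post) + post_spike
    # Pre-before-post potentiates (read x_pre at post spikes);
    # post-before-pre depresses (read x_post at pre spikes).
    w = w + a_plus * x_pre * post_spike - a_minus * x_post * pre_spike
    return w, x_pre, x_post
```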
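The heart of surrogate gradient training is a spike nonlinearity that stays an exact Heaviside step in the forward pass while the backward pass substitutes a smooth pseudo-derivative. Here is a minimal PyTorch sketch; the fast-sigmoid surrogate and the slope value are illustrative choices (snnTorch, SpikingJelly, and Norse each ship their own variants).

```python
import torch

# A minimal surrogate-gradient spike function (a sketch; the
# fast-sigmoid shape and slope are illustrative hyperparameters).
class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v_minus_thresh):
        ctx.save_for_backward(v_minus_thresh)
        # Forward: exact Heaviside step, spike iff v >= threshold.
        return (v_minus_thresh >= 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        slope = 10.0  # surrogate sharpness (hyperparameter)
        # Backward: fast-sigmoid derivative replaces the step's
        # zero-almost-everywhere true derivative.
        return grad_output / (slope * v.abs() + 1.0) ** 2
```

Inside a time-stepped forward pass this would be called as spike = SurrogateSpike.apply(v - v_th), so the forward activity stays binary while gradients flow through the smooth surrogate.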
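For a flavor of the first training loop that the practical section develops, a hedged BPTT sketch follows. It assumes a stateful model with the hypothetical signature model(x_t, state) -> (out_t, state), input batches binned to shape [T, batch, features], and a rate readout that averages output spikes over time; the article's own framework-agnostic pseudocode may differ on all three points.

```python
import torch

# One BPTT training step over a time-binned batch (a sketch under the
# assumptions stated above; `model(x_t, state)` is a hypothetical API).
def train_step(model, optimizer, loss_fn, x_seq, target):
    optimizer.zero_grad()
    state = None          # membrane potentials etc., carried across steps
    logits = 0.0
    for x_t in x_seq:     # unroll the network over the time dimension
        out_t, state = model(x_t, state)
        logits = logits + out_t          # rate readout: accumulate spikes
    loss = loss_fn(logits / x_seq.shape[0], target)
    loss.backward()       # backpropagation through time over the unroll
    optimizer.step()
    return loss.item()
```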