Nighttime vehicle detection poses significant challenges due to reduced visibility, uneven illumination, and increased noise in low-light imagery. While deep learning approaches have achieved remarkable success in daytime scenarios, their application to nighttime conditions remains constrained by the scarcity of specialized datasets and the computational demands of existing architectures. This paper makes three primary contributions to address these challenges. First, we introduce the Low-light Vehicle Annotation Dataset (L-VAD), comprising 13,648 annotated frames captured exclusively under nighttime conditions and spanning three vehicle categories: motorcycle, car, and truck/bus. Second, we propose TinyNight-YOLO, an ultra-lightweight detection architecture that achieves competitive performance with only ∼1.0 million parameters, a 2.6× reduction relative to YOLO11-N and a 26.4× reduction relative to YOLO11-L. Third, we provide a comprehensive benchmark evaluating ten model variants across the YOLO11 and YOLOv12 families. Experimental results demonstrate that TinyNight-YOLO achieves an F1-Score of 0.9207 and an mAP@50 of 0.9474, only a 1.44% accuracy reduction relative to models 2.6× larger, while outperforming YOLOv12-L (26.4M parameters) despite having 26.4× fewer parameters. Among full-scale models, YOLO11-L achieves the highest F1-Score (0.9486), while YOLO11-M attains the best mAP@50-95 (0.7271). The L-VAD dataset is publicly available at Mendeley Data (doi: 10.17632/h6p2w53my5.1), providing the research community with a dedicated resource for advancing nighttime vehicle detection. The proposed TinyNight-YOLO architecture enables practical deployment on resource-constrained edge devices while maintaining detection accuracy above 94% mAP@50.
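
For reference, the reported metrics follow the standard object-detection definitions; this is a minimal sketch assuming the usual COCO-style evaluation protocol (precision $P$, recall $R$, per-class average precision $\mathrm{AP}_c$, and class set $C$; the abstract itself does not spell the protocol out):

\[
\mathrm{F1} = \frac{2PR}{P + R},
\qquad
\mathrm{mAP@50} = \frac{1}{|C|} \sum_{c \in C} \mathrm{AP}_c \big|_{\mathrm{IoU}=0.5},
\]
\[
\mathrm{mAP@50\text{-}95} = \frac{1}{|C|} \sum_{c \in C} \frac{1}{10} \sum_{t \in \{0.50,\,0.55,\,\dots,\,0.95\}} \mathrm{AP}_c \big|_{\mathrm{IoU}=t}.
\]

Under these assumed definitions, the reported mAP@50 of 0.9474 is the mean over the three L-VAD classes (motorcycle, car, truck/bus) of the average precision computed at a 0.5 IoU matching threshold.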