This study demonstrates the efficacy of structured pruning and architectural fine-tuning of the YOLOv7-tiny model for eye state detection, with an emphasis on real-time applications. Structured pruning substantially reduced the model's complexity and storage footprint while maintaining high detection accuracy, as evidenced by stable precision, recall, and mAP@0.5 across pruning iterations. Subsequent fine-tuning of the model's width and depth further improved efficiency and inference speed without compromising performance. Together, these optimizations yielded YOLOv7-tiny variants that are both computationally efficient and accurate, making them well suited to resource-constrained environments. The findings underscore the critical role of model optimization in deploying effective neural networks for specific real-time detection tasks.
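To make the structured-pruning idea concrete, the following is a minimal NumPy sketch of one common variant, L1-norm filter pruning, in which whole convolutional filters with the smallest L1 norms are removed so the layer genuinely shrinks. All names, shapes, and the keep ratio here are illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

def prune_filters(weights: np.ndarray, keep_ratio: float) -> np.ndarray:
    """Rank conv filters by L1 norm and return the indices to keep.

    weights: array of shape (out_channels, in_channels, k, k).
    keep_ratio: fraction of output filters to retain (illustrative).
    """
    # L1 norm of each output filter (sum of absolute weights).
    l1 = np.abs(weights).reshape(weights.shape[0], -1).sum(axis=1)
    n_keep = max(1, int(round(keep_ratio * weights.shape[0])))
    # Keep the n_keep filters with the largest L1 norms, in original order.
    return np.sort(np.argsort(l1)[-n_keep:])

# Toy example: a 16-filter conv layer pruned to half its width.
rng = np.random.default_rng(0)
w = rng.normal(size=(16, 8, 3, 3))
kept = prune_filters(w, keep_ratio=0.5)
pruned_w = w[kept]  # the pruned layer now has 8 filters
print(pruned_w.shape)  # (8, 8, 3, 3)
```

In a real pipeline the pruned model is then fine-tuned for a few epochs to recover any lost accuracy, and the corresponding input channels of the next layer are pruned to match; that bookkeeping is omitted here for brevity.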