Preprint Article, Version 1. Preserved in Portico. This version is not peer-reviewed.

YOLOv2 for Pigs Detection in Industrial Farming

Version 1 : Received: 3 September 2020 / Approved: 4 September 2020 / Online: 4 September 2020 (07:59:03 CEST)

How to cite: Khan, A.Q.; Khan, S. YOLOv2 for Pigs Detection in Industrial Farming. Preprints 2020, 2020090088 (doi: 10.20944/preprints202009.0088.v1).

Abstract

Generic object detection is one of the most important and flourishing branches of computer vision, with real-life applications in our day-to-day lives. With the exponential development of deep learning-based techniques for object detection, performance has improved considerably over the last two decades. However, due to the data-hungry nature of deep models, they do not perform well on tasks for which only a very limited labeled dataset is available. To handle this problem, we propose a transfer learning-based deep learning approach for detecting multiple pigs in an indoor farm setting. The approach is based on YOLO-v2, whose pretrained parameters are used as the optimal starting values for training the network. Compared to the original YOLO-v2, we transformed the detector to detect only one class of objects, i.e., pigs against the background. For training the network, farm-specific data were annotated with bounding boxes enclosing pigs in the top view. Experiments were performed on different pen configurations in the farm, and convincing results were achieved while using only a few hundred annotated frames for fine-tuning the network.
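Transforming YOLO-v2 from a multi-class to a single-class detector amounts to resizing its final convolutional layer: each anchor box predicts 4 box coordinates, 1 objectness score, and one score per class. The sketch below shows this output-layer arithmetic; it is an illustration only, assuming the standard YOLOv2 head with five anchor boxes (the paper's actual anchor count and configuration are not stated in the abstract).

```python
def yolo_v2_output_filters(num_anchors: int, num_classes: int) -> int:
    """Filters in YOLOv2's final 1x1 conv layer: each anchor predicts
    4 box coordinates + 1 objectness score + num_classes class scores."""
    return num_anchors * (5 + num_classes)

# Stock VOC-trained YOLOv2: 5 anchors, 20 classes -> 125 filters
print(yolo_v2_output_filters(5, 20))   # 125

# Single-class (pig-vs-background) detector: 5 anchors, 1 class -> 30 filters
print(yolo_v2_output_filters(5, 1))    # 30
```

In the usual transfer-learning recipe, all layers up to this head keep their pretrained weights as initialization, while the reshaped final layer is learned from the farm-specific annotated frames.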

Subject Areas

YOLOv2; transfer learning; pig farming; object detection
