Automatic pose detection in farm animals
Keywords: Deep Learning, Precision Livestock, Automation
Contextualization: Animals adopt a wide range of body poses that can be interpreted as information about their health, welfare, or activity level. However, direct observation of these poses is time-consuming and economically unfeasible for farmers. With computer vision techniques, it is possible to implement automatic observation systems on farms, making this complicated and costly activity viable. Knowledge gap: There is currently no pose estimation model trained exclusively on farm animals that would allow the development of automatic posture detection systems. Purpose: The objective of this work was to evaluate the performance of a re-trained neural network model for pose detection in several ruminant species and in horses. Methodology: More than ten thousand images of ruminants and horses were downloaded from the ImageNet database. From these, 2000 images of cattle and 591 images of other species were selected for re-training and evaluation of the model, respectively. The images were labelled with the COCO Annotator software by manually identifying eight key points on each animal’s anatomy in every image. Re-training was carried out with the Detectron2 library in Python. Object Keypoint Similarity (OKS) was used to quantify the precision of the model. Results and conclusions: The OKS index established that the learning developed by the model to identify key points in cattle can be transferred to the same task in other farm animals. Horses and buffaloes had the best detection results. In conclusion, a relatively small data set of animal poses allows evaluating the generalizability of model inference within the domain (cattle) and outside it (other ruminants and equines). This type of work serves as a baseline for the development of automatic farm-animal monitoring systems.
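The Object Keypoint Similarity metric mentioned above follows the COCO keypoint benchmark: each predicted keypoint contributes a Gaussian similarity term scaled by the object's area and a per-keypoint falloff constant, averaged over the labelled (visible) keypoints. The sketch below is a minimal, illustrative implementation of that standard formula, not the authors' code; the function name `oks`, the example coordinates, and the falloff constants are assumptions for demonstration.

```python
import numpy as np

def oks(pred, gt, visibility, area, k):
    """Object Keypoint Similarity (COCO-style) between predicted and
    ground-truth keypoints of one animal instance.

    pred, gt   : (N, 2) arrays of (x, y) keypoint coordinates
    visibility : (N,) array; keypoints with visibility 0 are ignored
    area       : object area in pixels (scale of the animal in the image)
    k          : (N,) per-keypoint constants controlling the falloff
    """
    d2 = np.sum((pred - gt) ** 2, axis=1)       # squared pixel distances
    e = np.exp(-d2 / (2.0 * area * k ** 2))     # per-keypoint similarity in (0, 1]
    mask = visibility > 0                       # only score labelled keypoints
    return float(e[mask].sum() / mask.sum())

# Illustrative use with eight keypoints, matching the eight anatomical
# points labelled in this study (coordinates here are made up):
gt = np.array([[50, 40], [80, 42], [110, 60], [140, 62],
               [60, 90], [90, 92], [120, 110], [150, 112]], dtype=float)
pred = gt + 2.0                                 # prediction off by 2 px everywhere
vis = np.ones(8)
k = np.full(8, 0.05)                            # assumed uniform falloff constant
score = oks(pred, gt, vis, area=5000.0, k=k)    # close to 1 for a good prediction
```

A perfect prediction yields an OKS of exactly 1, and the score decays toward 0 as keypoints drift, with the tolerance growing for larger animals (larger `area`).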
Copyright (c) 2022 John Fredy Ramirez Agudelo, Jose Fernando Guarín Montoya, Sebastian Bedoya Mazo
This work is licensed under a Creative Commons Attribution 4.0 International License.