AI helps decode horse body language for better veterinary care

Working with Elin Hernlund from SLU (center), Hedvig Kjellström created a 3D motion model that "gives animals a digital voice" to communicate with veterinarians. (Photo: David Callahan)
Published Apr 09, 2025

Researchers are using AI to bridge the communication gap between horse and human. By combining 3D motion capture with machine learning, a new modeling system could equip veterinarians with a powerful visual tool for interpreting equine body language—the key to detecting physical and even behavioral problems.

Based on new research from KTH Royal Institute of Technology and the Swedish University of Agricultural Sciences (SLU), the platform can reconstruct the exact 3D motion of horses from video, using an AI-based parametric model of the horse’s pose and shape. The model is precise enough to let a veterinarian spot telling changes that could otherwise be overlooked or misinterpreted in an examination, such as shifts in a horse’s posture or body weight.

The system—titled DESSIE—employs disentangled learning, which separates different important factors in an image and helps the AI avoid confusion with background details or lighting conditions, says Hedvig Kjellström, a Professor in computer vision and machine learning at KTH.
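As a rough illustration of the general idea (not DESSIE's actual architecture), disentangled learning can be thought of as forcing separate parts of a model's internal representation to carry separate factors of variation—for instance shape, pose, and camera viewpoint—so that nuisance factors like background or lighting do not leak into the quantities of interest. The sketch below, with illustrative names and dimensions, shows only the bookkeeping step of partitioning one flat feature vector into such factor-specific parts:

```python
# Toy sketch of a disentangled latent split. The encoder, names, and
# dimensions are hypothetical illustrations, not DESSIE's API.

def split_latent(z, dims=(10, 6, 3)):
    """Partition a flat latent vector into (shape, pose, camera) parts."""
    n_shape, n_pose, n_cam = dims
    assert len(z) == n_shape + n_pose + n_cam, "latent size mismatch"
    shape = z[:n_shape]                       # body shape factors
    pose = z[n_shape:n_shape + n_pose]        # articulation / posture factors
    cam = z[n_shape + n_pose:]                # viewpoint factors
    return shape, pose, cam

z = list(range(19))                           # stand-in for encoder output
shape, pose, cam = split_latent(z)
print(len(shape), len(pose), len(cam))        # 10 6 3
```

In a trained disentangled model, losses and training data are arranged so that each such slice responds to only one factor; the split itself is the easy part.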

“DESSIE marks the first example of disentangled learning in non-human 3D motion models,” Kjellström says.

Elin Hernlund, Associate Professor in biomechanics at SLU and equine orthopedics clinician, says DESSIE would enable greater accuracy in observation and interpretation of horses’ movements and, as a result, earlier and more precise intervention than today. In a sense, it enables getting critical information “straight from the horse’s mouth.”

“Horses are powerful but fragile, and they tell us how they are feeling by their body language,” Hernlund says. “By watching their gait we can see, for example, if they are offloading pain.”

“We say we created a digital voice to help these animals break through the barrier of communication between animals and humans. To tell us what they are feeling,” Hernlund says. “It’s the smartest and highest resolution way to extract digital information from the horse’s body—even their faces, which can tell us a great deal.”

The research team is further training DESSIE with images of a wider variety of horse breeds and sizes, which would enable them to link genetics to phenotypes and gain a better understanding of the core biological structure of animals.

“To achieve this, we're asking breeders to send images of their breeds to capture as much variation as possible,” Hernlund says.

David Callahan

Publication

Dessie: Disentanglement for articulated 3D horse shape and pose estimation from images, Asian Conference on Computer Vision (ACCV). DOI: 10.48550/arXiv.2410.03438

Related: The Poses for Equine Research Dataset, Scientific Data. DOI: 10.1038/s41597-024-03312-1

About the researcher

Hedvig Kjellström is a Professor in the Division of Robotics, Perception and Learning at KTH, and is also affiliated with the Swedish e-Science Research Centre and the Max Planck Institute for Intelligent Systems, Germany. Her research focuses on computer vision and machine learning, with the general theme of developing methods that enable artificial agents to interpret human and animal behavior.

Hedvig Kjellström, hedvig@kth.se