
Gollum in ‘The Lord of the Rings’, Thanos in ‘Avengers’ and Snoke in ‘Star Wars’ are marvels of the world of motion capture, a film technique that records the movements of an actor and translates them into computer animation, bringing a lifelike digital character to life, so to speak.

Motion capture without being rigged up

Motion capture is not limited to just the big screen but extends to science as well. Behavioral scientists have developed and used similar tools to study and analyze the poses and movements of animals in various circumstances.

But this presents a problem. When it comes to capturing the motion of people, a person has to wear rather complicated equipment fitted with markers that let the computer know where each part of the body is in three-dimensional space. Animals, however, tend not to appreciate such a get-up.


To solve that problem, scientists are combining motion capture with deep learning, a machine-learning method in which a computer learns to perform a task from examples, for instance recognizing specific points in video images. The idea is to teach the computer to follow, and even predict, an animal’s movements or poses without the need for markers.

One of the scientists leading the ‘marker-less’ approach is Mackenzie Mathis of the École Polytechnique Fédérale de Lausanne (EPFL), the Swiss Federal Institute of Technology in Lausanne, Switzerland. The laboratory of this young American neuroscientist has developed a comprehensive software toolkit called DeepLabCut, which can track and identify animal movements in real time from video images.
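
For readers curious what working with such a toolkit looks like in practice, the sketch below outlines a typical DeepLabCut workflow in Python. The project name, video paths and settings are hypothetical placeholders for illustration, not details taken from the Mathis lab’s own experiments.

```python
# Rough sketch of a typical DeepLabCut workflow.
# Project name, experimenter and video paths are hypothetical examples.
import deeplabcut

# Create a project: this generates a config.yaml that stores the body parts,
# labeled frames and training settings.
config_path = deeplabcut.create_new_project(
    "mouse-reaching",                    # hypothetical project name
    "researcher",                        # experimenter name
    ["/data/videos/session1.mp4"],       # example video to learn from
    copy_videos=True,
)

# Extract and hand-label a small set of frames, then train the network.
deeplabcut.extract_frames(config_path)
deeplabcut.label_frames(config_path)            # opens a GUI for manual labeling
deeplabcut.create_training_dataset(config_path)
deeplabcut.train_network(config_path)

# Once trained, the network tracks the labeled body parts in new videos
# without any physical markers on the animal.
deeplabcut.analyze_videos(config_path, ["/data/videos/session2.mp4"])
```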

DLC-Live! has low latency levels

Earlier this month, Mathis and colleagues from Harvard University presented a new version under the name DeepLabCut-Live! (DLC-Live!). The article can be found in eLife, an open-access scientific journal for the biological and biomedical sciences.

DLC-Live! distinguishes itself by its very low latency. With delays of about 15 thousandths of a second (15 milliseconds) at more than 100 frames per second (FPS), this amounts to near real-time registration of motion. DLC-Live! uses customized networks to predict the poses of animals from video images (frames) and can reach up to 2,500 FPS offline on a standard graphics processing unit (GPU). This makes DLC-Live! tremendously valuable for observing and examining the neural mechanisms of behavior: it can be linked directly to laboratory hardware for neurological experiments. DLC-Live! is open source and freely available for researchers to use.
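
As a rough illustration of how researchers can plug the open-source package into an experiment, the sketch below follows the usage pattern documented for the dlclive Python package: camera frames are passed to a pre-trained, exported model, and the predicted key-point positions come back for each frame. The model path and the webcam used as a video source are assumptions made for this example.

```python
# Minimal sketch of real-time pose estimation with the dlclive package.
# The exported model path and the webcam index are hypothetical examples;
# in a real setup a custom Processor would forward poses to lab hardware.
import cv2
from dlclive import DLCLive, Processor

processor = Processor()  # base processor: receives each pose, no hardware attached here
dlc_live = DLCLive("/models/exported-dlc-model", processor=processor)

camera = cv2.VideoCapture(0)        # example video source: the first webcam
ret, frame = camera.read()
dlc_live.init_inference(frame)      # loads the network and runs a first inference

while True:
    ret, frame = camera.read()
    if not ret:
        break
    # get_pose returns one (x, y, confidence) row per tracked body part
    pose = dlc_live.get_pose(frame)
    # ... use `pose` to close the loop, e.g. trigger a stimulus or log behavior

camera.release()
```

Because the pose is computed on every incoming frame, the latency figures mentioned above translate directly into how quickly such a loop can react to an animal’s behavior.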

This is bound to be good news too for the makers of any eventual new releases of ‘The Lion King’, ‘Flipper’ and ‘Lassie.’