Advances in the field of augmented and virtual reality enable users to have ever-more realistic experiences. Just recently, a computer scientist at Saarland University in Saarbrücken succeeded in making virtual worlds physically “tangible” (IO reported on this). Now, researchers from the Vision and Imaging Technologies (VIT) department at the Fraunhofer Heinrich Hertz Institute (HHI) are developing new methods to enable realistic interactions with virtual characters, with the help of innovative volumetric video.

Augmented and virtual reality are simply inconceivable without volumetric video: it is the only method that ensures high-quality free-viewpoint image-based rendering (IBR) of dynamic scenes. Until now, however, the technique has been limited to pre-recorded scenes, so individual interactions with virtual characters weren’t an option. “Classic computer graphics models are usually used in order to be able to interact. Except that they don’t achieve the same level of realism,” according to a statement issued by Fraunhofer HHI.

Dr. Anna Hilsmann is head of the Computer Vision and Graphics research group within the VIT department. Together with her colleagues, she has come up with a method that utilizes data from the real world, including all of its natural distortions and characteristics. They recently presented this method in a new paper and launched the EU-funded project “Innovative Volumetric Capture and Editing Tools for Ubiquitous Storytelling” (INVICTUS for short).


Faces are a real challenge

The key features of the proposed AR pipeline are “the addition of semantics and animation features to the captured data and the use of hybrid geometric and video-based animation methods that facilitate instant animation.” Beyond that, the pipeline includes a more reliable method for processing body and facial movements.
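The paper itself does not ship reference code, but the idea of enriching captured volumetric data with semantics and animation features can be illustrated with a short sketch. Everything below is an assumption for illustration (the `VolumetricFrame` and `AnimatableCapture` structures and the `pose_mesh` helper are hypothetical stand-ins, not the authors’ API); it simply shows the principle that a captured mesh sequence carrying skinning weights can be re-animated on the fly instead of merely replayed.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class VolumetricFrame:
    """One captured frame: raw geometry plus its video-based texture."""
    vertices: np.ndarray   # (V, 3) captured mesh positions
    texture: np.ndarray    # (H, W, 3) per-frame video texture

@dataclass
class AnimatableCapture:
    """Captured sequence enriched with semantics for re-animation.

    Hypothetical structure: the fields mirror the paper's idea of
    attaching a skeleton (semantics) and animation features to
    otherwise replay-only volumetric data.
    """
    frames: list                # list[VolumetricFrame], the raw capture
    skin_weights: np.ndarray    # (V, J) vertex-to-joint skinning weights
    rest_pose: np.ndarray       # (V, 3) template geometry in rest pose

def pose_mesh(capture: AnimatableCapture,
              joint_transforms: np.ndarray) -> np.ndarray:
    """Linear blend skinning: deform the template with new joint poses.

    joint_transforms: (J, 4, 4) homogeneous transforms, one per joint.
    Returns (V, 3) posed vertices; coarse geometry is driven
    interactively, while fine detail still comes from the captured
    video textures.
    """
    V = capture.rest_pose.shape[0]
    homo = np.concatenate([capture.rest_pose, np.ones((V, 1))], axis=1)
    # Blend the per-joint transforms by the skinning weights, then apply.
    blended = np.einsum("vj,jab->vab", capture.skin_weights, joint_transforms)
    posed = np.einsum("vab,vb->va", blended, homo)
    return posed[:, :3]
```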

In particular, faces – along with their myriad facial expressions – traditionally pose a challenge for creators of augmented and virtual reality content. This is why Hilsmann and her colleagues propose a three-step solution in their paper, which appeared in a special issue on the theme “Computer Vision for the Creative Industry.”
First, geometry is used to model low-resolution features such as coarse movements. Second, video-based textures are superimposed to capture finer movements and subtle detail. Third, an autoencoder-based approach synthesizes traditionally overlooked features such as the eyes.
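To make the three layers concrete, here is a minimal, self-contained sketch of how such a layered composition could look. It is an illustration under assumptions, not the authors’ implementation: `render_geometry`, the texture residual, and the toy autoencoder are all hypothetical stand-ins for the three components named in the paper.

```python
import numpy as np

def render_geometry(pose_params: np.ndarray, size: int = 64) -> np.ndarray:
    """Step 1 (stand-in): coarse geometric render of the face.

    A real pipeline would rasterize a deformed head mesh; here we just
    produce a smooth grayscale placeholder driven by the pose parameters.
    """
    y, x = np.mgrid[0:size, 0:size] / size
    return 0.5 + 0.2 * np.sin(2 * np.pi * (x * pose_params[0] + y * pose_params[1]))

def overlay_video_texture(base: np.ndarray, residual: np.ndarray) -> np.ndarray:
    """Step 2: superimpose a video-based texture residual carrying the
    fine motion and subtle detail the coarse geometry misses."""
    return np.clip(base + residual, 0.0, 1.0)

class EyeAutoencoder:
    """Step 3 (toy): synthesize the eye region from a latent code.

    Weights are random here; in the paper's approach an autoencoder is
    trained on captured imagery so the eyes no longer look frozen.
    """
    def __init__(self, latent_dim: int = 8, patch: int = 16, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.decode_w = rng.standard_normal((latent_dim, patch * patch)) * 0.1
        self.patch = patch

    def decode(self, z: np.ndarray) -> np.ndarray:
        flat = 1.0 / (1.0 + np.exp(-(z @ self.decode_w)))  # sigmoid decoder
        return flat.reshape(self.patch, self.patch)

def compose_face(pose_params, residual, eye_codes, eye_boxes):
    """Blend the three layers into one face image."""
    face = overlay_video_texture(render_geometry(pose_params), residual)
    ae = EyeAutoencoder()
    for z, (top, left) in zip(eye_codes, eye_boxes):
        face[top:top + ae.patch, left:left + ae.patch] = ae.decode(z)
    return face

# Example: coarse pose + zero texture residual + two synthesized eye patches.
face = compose_face(np.array([1.0, 0.5]),
                    np.zeros((64, 64)),
                    eye_codes=[np.zeros(8), np.ones(8)],
                    eye_boxes=[(20, 12), (20, 36)])
print(face.shape)  # (64, 64)
```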

In the INVICTUS project, both the appearance and the movement of actors will be recorded using state-of-the-art volumetric motion capture technologies. Volumetric avatars are then created to “enhance the development of narratives.” By the end of the project, which launched in March 2020 and runs for two years, three innovative authoring tools will be available, according to the researchers. One is for high-resolution volumetric capture of an actor’s appearance and movement, which can then be used for high-quality offline (film) productions as well as real-time rendering. Another is for editing high-resolution volumetric appearances and movement. The third is for story authoring, which includes, for example, post-editing of decors, layouts, and animated characters.

Involved in the project are two research groups from the VIT department at Fraunhofer HHI (Computer Vision and Graphics, and Immersive Media and Communication) and partners Ubisoft Motion Pictures, Volograms Limited, Université de Rennes 1 and Interdigital R&D France.