Researchers at the University of Southern California (USC) and Facebook have developed a system to track the facial expressions of users wearing a virtual-reality headset and transfer them to a virtual avatar.
The system tracks the motion of a user's mouth using a three-dimensional (3D) camera attached to the headset by a short boom. Meanwhile, movements of the upper part of the face are measured using strain gauges embedded in the foam padding that fits the headset to the face. The two data sources are then combined into an accurate 3D representation of the user's facial movements that can be used to animate a virtual character.
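To make the fusion idea concrete, here is a minimal sketch of how two such sensor streams might be merged, assuming (hypothetically) that each stream has already been converted into weights over a shared set of facial blendshapes. The blendshape layout, masks, and function names are illustrative assumptions, not the researchers' actual pipeline.

```python
import numpy as np

NUM_BLENDSHAPES = 8  # hypothetical shared blendshape basis

# Hypothetical masks: which blendshapes each sensor is trusted to drive.
UPPER_FACE = np.array([1, 1, 1, 0, 0, 0, 0, 0], dtype=float)  # brows, eyes
LOWER_FACE = 1.0 - UPPER_FACE                                  # jaw, lips

def fuse_expression(gauge_weights: np.ndarray,
                    camera_weights: np.ndarray) -> np.ndarray:
    """Blend strain-gauge (upper face) and depth-camera (mouth) estimates
    into one weight vector that can drive a rigged avatar."""
    return UPPER_FACE * gauge_weights + LOWER_FACE * camera_weights

# Example frame: raised brows from the gauges, open jaw from the camera.
gauges = np.array([0.7, 0.7, 0.2, 0.0, 0.0, 0.0, 0.0, 0.0])
camera = np.array([0.0, 0.0, 0.0, 0.9, 0.4, 0.1, 0.0, 0.0])
print(fuse_expression(gauges, camera))
```

In practice the per-blendshape masks would likely be soft and learned per user rather than the hard split shown here.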
"This is the first facial tracking that has been demonstrated through a head-mounted display," says USC professor Hao Li.
The system is based on software that combines data from the sensors tracking the upper and lower parts of the face and maps the result onto a 3D model of the face. The software requires a user to go through a brief calibration process the first time the system is used. This step collects data that helps the software correctly align the data streams from the upper and lower parts of the face. However, Li is working on eliminating this step by training the software on data from a wider variety of faces.
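As a rough illustration of what a one-time calibration could learn, the sketch below fits a per-user linear map from raw strain-gauge readings to upper-face blendshape weights via least squares. This is an assumption-laden toy with synthetic data, not the researchers' published method.

```python
import numpy as np

rng = np.random.default_rng(0)
NUM_GAUGES, NUM_UPPER_SHAPES, NUM_POSES = 8, 3, 25

# During calibration the user performs known expressions; each pose pairs
# raw gauge readings with the blendshape weights that pose should produce.
# (Synthetic stand-in data; a real system would record these live.)
target_weights = rng.uniform(0.0, 1.0, size=(NUM_POSES, NUM_UPPER_SHAPES))
true_map = rng.normal(size=(NUM_UPPER_SHAPES, NUM_GAUGES))  # unknown to us
gauge_readings = target_weights @ true_map + 0.01 * rng.normal(
    size=(NUM_POSES, NUM_GAUGES))

# Solve gauge_readings @ W ~= target_weights for a per-user map W.
W, *_ = np.linalg.lstsq(gauge_readings, target_weights, rcond=None)

def gauges_to_weights(raw: np.ndarray) -> np.ndarray:
    """Map a live frame of gauge readings to blendshape weights,
    clamped to the valid [0, 1] range."""
    return np.clip(raw @ W, 0.0, 1.0)

print(gauges_to_weights(gauge_readings[0]), target_weights[0])
```

A system trained on many faces, as Li describes, would in effect replace this per-user fit with a prior learned offline, letting a new user skip the calibration step.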
From Technology Review