Software allows you to control faces of people from videos

Tuesday, March 22, 2016

A research team at Stanford University has developed Face2Face, software that lets users control the facial expressions of people in YouTube videos in real time, using only a webcam.

From the paper's abstract: We present a novel approach for real-time facial reenactment of a monocular target video sequence (e.g., a YouTube video). The source sequence is also a monocular video stream, captured live with a commodity webcam. Our goal is to animate the facial expressions of the target video by a source actor and re-render the manipulated output video in a photo-realistic fashion. To this end, we first address the under-constrained problem of facial identity recovery from monocular video by non-rigid model-based bundling. At run time, we track facial expressions of both source and target video using a dense photometric consistency measure. Reenactment is then achieved by fast and efficient deformation transfer between source and target. The mouth interior that best matches the re-targeted expression is retrieved from the target sequence and warped to produce an accurate fit. Finally, we convincingly re-render the synthesized target face on top of the corresponding video stream such that it seamlessly blends with the real-world illumination.
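Two ideas from the abstract can be illustrated with a toy sketch: the dense photometric consistency measure (comparing a rendered face against the input frame over the face region) and deformation transfer (copying the source actor's per-vertex expression offsets onto the target's neutral geometry). This is a minimal illustration only, not the authors' implementation; all function names and the NumPy-based setup are assumptions for the example.

```python
import numpy as np

def photometric_consistency(rendered, frame, mask):
    """Toy dense photometric error: sum of squared color differences
    between the rendered face and the video frame, restricted to the
    face region given by a boolean mask. Lower means a better fit."""
    diff = (rendered.astype(np.float64) - frame.astype(np.float64)) ** 2
    return float(diff[mask].sum())

def transfer_expression(neutral_src, expr_src, neutral_tgt):
    """Toy deformation transfer: apply the source actor's per-vertex
    expression offsets (expr_src - neutral_src) to the target's
    neutral mesh, so the target mimics the source expression."""
    return neutral_tgt + (expr_src - neutral_src)

# Tiny synthetic example: a 4x4 RGB "frame" and a face-region mask.
rng = np.random.default_rng(0)
frame = rng.integers(0, 255, (4, 4, 3))
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True

# A perfect rendering has zero photometric error on the face region.
error = photometric_consistency(frame, frame, mask)

# Five vertices in 3D: the source "smiles" with a +1 offset everywhere,
# and the target's neutral mesh inherits the same offsets.
neutral_src = np.zeros((5, 3))
expr_src = np.ones((5, 3))
neutral_tgt = np.full((5, 3), 2.0)
retargeted = transfer_expression(neutral_src, expr_src, neutral_tgt)
```

In the actual system these steps run per-frame on a parametric face model with tracked expression coefficients; the sketch only conveys the shape of the computation.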

