
An Interview with Faceware Written Saturday 23rd of September 2017 at 01:53pm by Erris

When the new Faceware tech for Star Citizen was announced during Gamescom, Relay reached out to Faceware with some questions. Today we bring you the answers.


The answers below were provided by Peter Busch, Vice President of Business Development and day-to-day operations at Faceware - a huge thank you to him!

---

Did you approach CIG, or did CIG approach you?

CIG approached us, although we have since learned that around the time of the launch of their original crowdfunding campaign, some backers had requested a feature like FOIP. Coincidentally, Faceware had been working to develop our realtime product offering within our Interactive Division, which opened in 2016. CIG was the first customer to contact us.

How long did it take to implement the Faceware solution?

Discussions had been ongoing between our dev teams for the better part of a year, but the bulk of the integration was completed in the month or two leading up to Gamescom.

How long have you been partnered with CIG?

Our relationship with CIG dates back several years. In addition to the FOIP feature, CIG uses our Analyzer and Retargeter software in their animation production for Star Citizen, including Squadron 42.

Had you heard about Star Citizen before?

We’ve been Chris Roberts fans for years. We knew of the crowdfunded project back in 2012 — the same year Faceware spun out from our core technology company, Image Metrics.

When did Faceware start, and how did the idea for it come about?

Faceware dates back to 2003 with the Hitman and Grand Theft Auto titles. The concept came out of the University of Manchester (UK) and was originally offered exclusively as a facial mocap service to the entertainment industry under the Image Metrics brand. In 2008, we partnered with Double Negative to adopt the Faceware products directly into their visual effects pipeline. The company Faceware Technologies spun out of Image Metrics in 2012 and began offering standalone facial mocap software and hardware to the entertainment industry. The team behind Faceware has been in the animation industry for almost 20 years, and its core staff has been together for nine of those years. The idea behind Faceware Technologies was simple: make market-leading facial mocap products that bring digital characters to life.

What other projects, if any, has Faceware worked on?

Faceware has been used on hundreds of projects in and outside of the entertainment industry. In the video game space, some notable projects include Destiny 1 & 2; Call of Duty: Advanced Warfare and Infinite Warfare; Grand Theft Auto III, IV, and V; Red Dead Redemption; Shadow of War; FIFA; NBA 2K09–18; Madden 18; and Star Wars Battlefront II.

What’s next for Faceware?

Our focus right now is on our Interactive Division and our realtime technologies, and on developing new ways for consumers to interact with games, apps, brands, ads, and more. In gaming, that includes developing interactive experiences like what we are doing with CIG. We’ve also expanded into Augmented Reality and are the official AR provider of the Baltimore Ravens and Chicago Bears. Our core technology is always in development. There are exciting things to come from us!

Have people become more interested since the Star Citizen demo? 

Absolutely. Star Citizen crashed our website not once but twice during the debut of the FOIP feature. We’ve brought on additional staff since the demo. These are good problems to have: ever since CIG approached us, we knew that together we could revolutionize the way players communicate in-game.

Have VR helmets been considered in the design of the software, or is it assumed that other software will deliver the same result?

VR is a very interesting area for us. We’ve worked with Oculus as far back as 2014 to determine how our technology can apply to VR. Because our software simply needs color video as input to generate animation, we have a very specific strategy for VR. We’ve been working on our technology for nearly two decades, quite a head start for anyone wanting to do live animation in VR. We can’t tell you too much, but we’re keeping a close eye on the social titles within VR.

Any thoughts to people who find the results ‘uncanny’?

The uncanny valley is a tremendous challenge for gaming, and ultimately something that will never truly be overcome, meaning that people will never fully believe that real people are talking to each other in a game. There are many reasons for this, mostly related to current technologies. But even when the technologies are in place, it’s hard to believe the mind will ever be tricked enough for humans to accept a digital character as actually human.

At the end of the day, what Faceware provides is one ingredient in the recipe of what it takes to create believable animation: motion. We will continue to improve the motion side of what we can achieve in realtime, but because we are just one element, we need to work collaboratively with the engine providers and processing companies, as well as developers like CIG, to make the improvements necessary to create even better artwork. 

Every innovation has to start somewhere. What we have built with CIG is phenomenal and has the foundation to be something truly special.

How will the software account for errors? Say another person comes into view of the camera, or someone is eating.

Our tech is calibrated and “trained” on a specific user’s face, meaning it is looking for your face versus your friend’s face when they photobomb your camera. If you re-calibrate on their face, it will then look for their face instead. That minimizes errors when multiple people move in and out of the frame. In addition, we are building a layer of code into the engine called “Motion Logic,” an aesthetic set of rules that ensures the animation automatically tries to always look correct. It may not always do exactly what you are doing, but we can easily make sure that your character’s face doesn’t, for example, explode if you are eating or drinking water.
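To make the “Motion Logic” idea concrete, here is a minimal sketch of what such a rule layer could look like. This is our own illustration under assumed blendshape names, ranges, and thresholds, not Faceware’s actual implementation:

```cpp
#include <algorithm>
#include <iostream>
#include <map>
#include <string>

// Hypothetical "Motion Logic" rule layer: sanity-check raw tracking values
// before they drive the character rig. All names and numbers are illustrative.
using BlendShapes = std::map<std::string, float>;

BlendShapes applyMotionLogic(const BlendShapes& tracked, const BlendShapes& previous) {
    constexpr float kMaxDelta = 0.15f;  // cap per-frame change to suppress tracking spikes
    BlendShapes out;
    for (const auto& [name, raw] : tracked) {
        float value = std::clamp(raw, 0.0f, 1.0f);  // keep weights in a valid range
        if (auto it = previous.find(name); it != previous.end()) {
            // Smooth sudden jumps, e.g. occlusion from food or a hand over the mouth.
            value = it->second + std::clamp(value - it->second, -kMaxDelta, kMaxDelta);
        }
        out[name] = value;
    }
    // Example aesthetic rule: the jaw can't open wider than the lips allow.
    if (out.count("jawOpen") && out.count("lipsClosed")) {
        out["jawOpen"] = std::min(out["jawOpen"], 1.0f - out["lipsClosed"]);
    }
    return out;
}

int main() {
    BlendShapes previous{{"jawOpen", 0.2f}, {"lipsClosed", 0.0f}};
    BlendShapes noisy{{"jawOpen", 1.8f}, {"lipsClosed", 0.4f}};  // spike from a bad frame
    for (const auto& [name, value] : applyMotionLogic(noisy, previous)) {
        std::cout << name << " = " << value << '\n';  // jawOpen = 0.35, lipsClosed = 0.15
    }
}
```

Clamping each weight to a valid range and limiting how much it can change per frame is one simple way to keep a momentary tracking failure (a fork in front of the mouth, a second face in frame) from producing the “exploding” face mentioned above.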

Will the software be toggleable?

That would be a decision for the CIG team, but there’s nothing in our tech that requires ‘always-on’ functionality.

Outside of games what practical applications does the company see for this technology? 

We think what we’re doing with CIG is just the beginning for live animation. Wherever you can combine a great IP or brand, an audience, and a screen that displays animation, the possibilities are endless. Think of interactive movie posters, or live animation installations (we recently made Chester the Cheetah from Cheetos come to life in New York). In the case of the Baltimore Ravens, we’re overlaying digital imagery on real faces because our software understands where your face is and how it’s moving. In terms of gaming and VR, what we enable is truly a social experience that will become more commonplace in the future for all games. We think anyone wanting to create machinima or Twitch streams should certainly be excited. ;)

---

That is all for our interview with Faceware; we hope you found it as interesting as we did!

Once again, a huge thank you to Peter Busch!

Erris

Founder

Erris is Canadian. He does some random things for Relay, no-one really knows what, but still they're stuck with him. He’s also written one Young Adult novel that he can’t stand, which can be found here.

You can find him on Twitter too, if you want.