Loom.ai today publicly unveiled its new platform for creating personalized 3D avatars from a single selfie. The San Francisco based startup was founded by CEO Mahesh Ramasubramanian and CTO Kiran Bhat, award-winning visual effects veterans from DreamWorks and Lucasfilm who together bring decades of hard-core visual effects and animation experience with digital faces. Ramasubramanian comes from DreamWorks Animation, where he was the visual effects supervisor on movies such as Madagascar 3 and Home, and he also worked on the Academy Award-winning Shrek. Bhat previously architected ILM’s facial performance capture system and was the R&D facial lead on The Avengers, Pirates of the Caribbean, and TMNT. He was recently featured in fxguide as one of the panelists on the VR on the Lot panel at Paramount Studios, chaired by our own Mike Seymour.

The Facebook avatar look was released just prior to VR on the Lot. In that VR story, we discussed Facebook’s approach to avatars, which is very much built on a construction-set or Identikit idea of assembling a version of yourself, rather than using an image of yourself and having the avatar generated automatically. With the Facebook model, the expression of your avatar would be driven by the system inferring your facial expression from hand and head movements.

Loom.ai takes a different approach. It is not designed as a system to be rigged and driven by markerless facial tracking; it builds a face that is intended to be driven by a traditional VFX pipeline for previz, or connected to some as yet unknown or undeveloped facial tracking system. Loom.ai’s face platform provides the essential building block for informative social interaction and co-presence between individuals in virtual or augmented reality. It provides a way to produce a cartoon version of you from a still you might take on your phone. Because the system allows facial animation in a VFX style, the range of expressions its avatars can deliver is greatly in excess of the Facebook approach. It can also provide a rigged avatar, but it does not address how to drive your avatar in a VR space.

“The key to building believable digital characters is to extract the perceptually salient features from a human face in 3D: for instance, Mark Ruffalo’s version of the Hulk in The Avengers,” said Bhat. “The magic is in bringing the avatars to life and making an emotional connection,” added Ramasubramanian. “The new suite of computational algorithms built by Loom.ai will democratize the process of building believable 3D avatars for everyone, a process that was previously expensive and exclusive to Hollywood actors benefiting from a studio infrastructure,” he adds. The company’s fully automated software creates 3D avatars that are lifelike, animatable, and stylizable: “Using Loom.ai’s facial musculature rigs powered by robust image analysis software, our partners can create personalized 3D animated experiences with similar visual fidelity seen in feature films, all from a single image.”

Loom.ai has a patent-pending algorithm that uses deep learning, drawing on the team’s expertise in computer vision along with VFX conceptual approaches inherited from the team’s years of feature film work. As Loom.ai is just one part of the pipeline solution for a virtual interactive avatar, the company is also releasing a public API for powering applications in areas like VR, games, and virtual worlds.

The company also announced a $1.35 million seed funding round from Silicon Valley investors, including Y Combinator and virtual reality luminaries. Y Combinator is a seed accelerator started in March 2005; it has since funded over 1,200 startups, including Airbnb, Dropbox, and Reddit, and its portfolio companies have a combined valuation of US$65 billion.
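The article announces the public API but does not describe it, so as a purely hypothetical sketch, a client might package a single selfie for an avatar-generation service along these lines. The endpoint URL, field names, and parameter values below are all invented for illustration and are not Loom.ai's actual API:

```python
import base64

# Placeholder endpoint, NOT a real Loom.ai URL.
API_URL = "https://api.example.com/v1/avatars"

def build_avatar_request(selfie_path: str, style: str = "realistic") -> dict:
    """Package one selfie into a JSON-serializable request body.

    All field names here are illustrative assumptions: the article only
    says the API creates rigged, animatable avatars from a single image.
    """
    with open(selfie_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode("ascii")
    return {
        "image": image_b64,         # the single input selfie, base64-encoded
        "style": style,             # e.g. "realistic" or "stylized"
        "output": "rigged_avatar",  # ask for an animatable, rigged result
    }
```

The resulting dictionary could then be POSTed to whatever endpoint the real API exposes; the point of the sketch is only that a single still image is the entire input to the avatar pipeline.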