For his work in non-rigid shape registration, human digitization, and real-time facial performance capture, Li received the TR35 Award from the MIT Technology Review in 2013.[6] He was named the Andrew and Erna Viterbi Early Career Chair in 2015, the same year he received a Google Faculty Research Award and an Okawa Foundation Research Grant. Li won an Office of Naval Research Young Investigator Award in 2018[7] and was named to the DARPA ISAT Study Group in 2019.[8] He is a member of the World Economic Forum's Global Future Council on Virtual and Augmented Reality.[9]
Early life
Li's parents are Taiwanese and lived in Germany as of 2013.[1]
Research
Li has worked on dynamic geometry processing and data-driven techniques for 3D human digitization and facial animation. During his PhD, he co-created a real-time, markerless system for performance-driven facial animation based on depth sensors, which won the best paper award at the ACM SIGGRAPH / Eurographics Symposium on Computer Animation in 2009.[12] The team later commercialized a variant of this technology as the facial animation software Faceshift[13] (acquired by Apple Inc. in 2015 and incorporated into the iPhone X in 2017[14][15]). His technique for deformable shape registration is used by the company C-Rad AB and deployed in hospitals to track tumors in real time during radiation therapy. In 2013, he worked on a home scanning system that uses a Kinect to capture people and turn them into game characters or realistic miniature figures.[16] This technology was licensed by Artec and released as the free service Shapify.me. In 2014, he joined Weta Digital as a visiting professor to build the high-fidelity facial performance capture pipeline used to digitally recreate the late actor Paul Walker[17] in the film Furious 7 (2015).
His recent research focuses on combining techniques from deep learning and computer graphics to facilitate the creation of 3D avatars and to enable truly immersive face-to-face communication and telepresence in virtual reality.[18] In collaboration with Oculus / Facebook, in 2015 he helped develop a facial performance sensing head-mounted display,[19] which allows users to transfer their facial expressions onto their digital avatars while immersed in a virtual environment. In the same year, he founded the company Pinscreen, Inc.[20] in Los Angeles, which introduced a technology that can generate realistic 3D avatars of a person, including the hair, from a single photograph.[21] The company also works on deep neural networks that can infer photorealistic faces[22] and expressions,[23] work that was showcased at the Annual Meeting of the New Champions 2019 of the World Economic Forum in Dalian.[10]
Because digital faces have become easy to generate and manipulate, Li has been raising public awareness about the threat of manipulated videos such as deepfakes.[24][25] In 2019, Li and media forensics expert Hany Farid of the University of California, Berkeley, released a research paper outlining a new method for spotting deepfakes by analyzing the facial expression and movement patterns of a specific person.[10] Citing the rapid progress in artificial intelligence and computer graphics, Li predicted in September 2019 that deepfakes would become indistinguishable from genuine videos within as little as six to twelve months.[26] In January 2020, Li spoke at the World Economic Forum Annual Meeting 2020 in Davos about deepfakes[27] and the danger they could pose to society. Li and his team at Pinscreen, Inc. also demonstrated a real-time deepfake technology[28] at the meeting, in which the faces of celebrities were superimposed onto participants' faces.
In 2020, Li and his team developed a volumetric human teleportation system that can digitize an entire human body in 3D from a single webcam and stream the result in real time. The technology uses 3D deep learning to infer a complete textured model of a person from a single view. The team presented the work at ECCV 2020 and demonstrated the system live at ACM SIGGRAPH's Real-Time Live! show, where it won the "Best in Show" award.[29][30]
^"Say hello to virtual human Hao Li". News - Mohamed bin Zayed University of Artificial Intelligence. Mohamed bin Zayed University of Artificial Intelligence.