Age, Biography and Wiki

Hao Li was born on 17 January 1981 in Saarbrücken, Germany, and is a computer scientist and university professor. Discover Hao Li's biography, age, height, physical stats, dating/affairs, family, and career updates. Learn how rich he is this year, how he spends his money, and how he earned most of his net worth by the age of 43.

Popular As N/A
Occupation N/A
Age 43 years old
Zodiac Sign Capricorn
Born 17 January 1981
Birthday 17 January
Birthplace Saarbrücken, Germany
Nationality German

We recommend checking the complete list of famous people born on 17 January. He belongs to the group of famous computer scientists aged 43.

Hao Li Height, Weight & Measurements

At 43 years old, Hao Li's height is not available right now. We will update his height, weight, body measurements, eye color, hair color, and shoe and dress size as soon as possible.

Physical Status
Height Not Available
Weight Not Available
Body Measurements Not Available
Eye Color Not Available
Hair Color Not Available

Dating & Relationship status

He is currently single and not dating anyone. We have little information about his past relationships or any previous engagements. According to our database, he has no children.

Family
Parents Not Available
Wife Not Available
Sibling Not Available
Children Not Available

Hao Li Net Worth

His net worth has been growing significantly in 2023-2024. So, how much is Hao Li worth at the age of 43? Hao Li's income comes mostly from being a successful computer scientist. He is from Germany. We have estimated Hao Li's net worth, money, salary, income, and assets.

Net Worth in 2024 $1 Million - $5 Million
Salary in 2024 Under Review
Net Worth in 2023 Pending
Salary in 2023 Under Review
House Not Available
Cars Not Available
Source of Income Computer Science

Hao Li Social Network

Instagram
LinkedIn
Twitter
Facebook
Wikipedia
IMDb

Timeline

1981

Hao Li (born 17 January 1981) is a computer scientist, innovator, and entrepreneur from Germany, working in the fields of computer graphics and computer vision.

He is co-founder and CEO of Pinscreen, Inc., as well as associate professor of computer vision at the Mohamed Bin Zayed University of Artificial Intelligence (MBZUAI).

He was previously a Distinguished Fellow at the University of California, Berkeley, an associate professor of computer science at the University of Southern California, and former director of the Vision and Graphics Lab at the USC Institute for Creative Technologies.

He was also a visiting professor at Weta Digital and a research lead at Industrial Light & Magic / Lucasfilm.

2003

He was a visiting researcher at ENSIMAG in 2003, the National University of Singapore in 2006, Stanford University in 2008, and EPFL in 2010.

2006

He obtained his Diplom (equivalent to an M.Sc.) in computer science at the Karlsruhe Institute of Technology (then the University of Karlsruhe (TH)) in 2006 and his PhD in computer science at ETH Zurich in 2010.

2009

During his PhD, Li co-created a real-time and markerless system for performance-driven facial animation based on depth sensors which won the best paper award at the ACM SIGGRAPH / Eurographics Symposium on Computer Animation in 2009.

2011

He was also a postdoctoral fellow at Columbia University and Princeton University between 2011 and 2012.

2012

Li joined Industrial Light & Magic / Lucasfilm in 2012 as a research lead to develop next generation real-time performance capture technologies for virtual production and visual effects.

2013

For his work in non-rigid shape registration, human digitization, and real-time facial performance capture, Li received the TR35 Award in 2013 from the MIT Technology Review.

Li's parents are Taiwanese and lived in Germany as of 2013.

Li went to a French-German high school in Saarbrücken and speaks four languages (English, German, French, and Mandarin Chinese).

He later joined the computer science department at the University of Southern California as an assistant professor in 2013 and was promoted to associate professor in 2019.

In 2013, he worked on a home scanning system that uses a Kinect to turn captures of people into game characters or realistic miniature versions of themselves.

This technology was licensed by Artec and released as the free software Shapify.me.

2014

In 2014, he spent a summer as a visiting professor at Weta Digital, working on facial tracking and hair digitization technologies for the visual effects of Furious 7 and The Hobbit: The Battle of the Five Armies. There, he helped build the high-fidelity facial performance capture pipeline used to digitally reenact the late actor Paul Walker in Furious 7 (2015).

His recent research focuses on combining techniques from deep learning and computer graphics to facilitate the creation of 3D avatars and to enable true immersive face-to-face communication and telepresence in virtual reality.

2015

He was named Andrew and Erna Viterbi Early Career Chair in 2015, and was awarded the Google Faculty Research Award and the Okawa Foundation Research Grant the same year.

In 2015, he founded Pinscreen, Inc. in Los Angeles, an artificial-intelligence startup that specializes in the creation of photorealistic virtual avatars using advanced machine-learning algorithms.

His PhD-era team later commercialized a variant of their facial animation technology as the software Faceshift (acquired by Apple Inc. in 2015 and incorporated into the iPhone X in 2017).

His technique for deformable shape registration is used by the company C-Rad AB and deployed in hospitals to track tumors in real time during radiation therapy.

In collaboration with Oculus / Facebook, in 2015 he helped develop a facial performance sensing head-mounted display, which allows users to transfer their facial expressions onto their digital avatars while immersed in a virtual environment.

Pinscreen introduced a technology that can generate a realistic 3D avatar of a person, including the hair, from a single photograph.

2016

In 2016, he was appointed director of the Vision and Graphics Lab at the USC Institute for Creative Technologies and joined the University of California, Berkeley in 2020 as a Distinguished Fellow.

In 2022, Li was appointed associate professor of computer vision at the Mohamed Bin Zayed University of Artificial Intelligence in Abu Dhabi to direct a new AI center for Metaverse research.

He has worked on dynamic geometry processing and data-driven techniques for 3D human digitization and facial animation.

2018

Li won an Office of Naval Research Young Investigator Award in 2018 and was named to the DARPA ISAT Study Group in 2019.

He is a member of the Global Future Council on Virtual and Augmented Reality of the World Economic Forum.

2019

His team also works on deep neural networks that can infer photorealistic faces and expressions, work that was showcased at the Annual Meeting of the New Champions 2019 of the World Economic Forum in Dalian.

Due to the ease of generating and manipulating digital faces, Hao has been raising public awareness about the threat of manipulated videos such as deepfakes.

In 2019, Hao Li and media-forensics expert Hany Farid of the University of California, Berkeley, released a research paper outlining a new method for spotting deepfakes by analyzing the facial expression and movement patterns of a specific person.

As of September 2019, Li predicted that, given the rapid progress in artificial intelligence and computer graphics, genuine videos and deepfakes could become indistinguishable within as little as 6 to 12 months.

2020

In January 2020, Li spoke at the World Economic Forum Annual Meeting 2020 in Davos about deepfakes and how they could pose a danger to society.

Li and his team at Pinscreen, Inc. also demonstrated real-time deepfake technology at the annual meeting, where celebrities' faces were superimposed onto participants' faces.

In 2020, Li and his team developed a volumetric human teleportation system which can digitize an entire human body in 3D from a single webcam and stream the content in real-time.

The technology uses 3D deep learning to infer a complete textured model of a person using a single view.

The team presented the work at ECCV 2020 and demonstrated the system live at ACM SIGGRAPH's Real-Time Live! show, where they won the "Best in Show" award.

For his work on visual effects, Hao has been credited in several motion pictures, including Blade Runner 2049 (2017), Valerian and the City of a Thousand Planets (2017), Furious 7 (2015), The Hobbit: The Battle of the Five Armies (2014), and Noah (2014).