---
res:
  bibo_abstract:
    - This chapter presents a method for real-time animation of highly detailed facial expressions based on sparse motion-capture data and a limited set of static example poses. The method decomposes facial geometry into large-scale motion and fine-scale details, such as expression wrinkles. Both the large- and fine-scale deformation algorithms run entirely on the GPU, and our CUDA-based implementation achieves an overall performance of about 30 fps. The face conveys the most relevant visual characteristics of human identity and expression; hence, realistic facial animation and interaction with virtual avatars are important for storytelling and gameplay. However, current approaches are either computationally expensive, require very specialized capture hardware, or are extremely labor intensive. At runtime, given an arbitrary facial expression, the algorithm computes the skin strain from the relative distances between marker points and derives fine-scale corrections to the large-scale deformation. During gameplay, only the sparse set of marker-point positions is transmitted to the GPU. The face animation is computed entirely on the GPU, where the resulting mesh can be used directly as input to the rendering stages. Such marker data can be easily obtained with traditional capture hardware. The proposed in-game algorithm is fast, easy to implement, and maps well onto programmable GPUs.@eng
  bibo_authorlist:
    - foaf_Person:
        foaf_givenName: Bernd
        foaf_name: Bernd Bickel
        foaf_surname: Bickel
        foaf_workInfoHomepage: http://www.librecat.org/personId=49876194-F248-11E8-B48F-1D18A9856A87
        orcid: 0000-0001-6511-9385
    - foaf_Person:
        foaf_givenName: Manuel
        foaf_name: Lang, Manuel
        foaf_surname: Lang
  bibo_doi: 10.1016/B978-0-12-384988-5.00027-9
  dct_date: 2011^xs_gYear
  dct_publisher: Science Direct@
  dct_title: From sparse mocap to highly detailed facial animation@
...