Residential College | false |
Status | Published |
Title | High Fidelity Makeup via 2D and 3D Identity Preservation Net
Authors | Liu, Jinliang1; Zheng, Zhedong2; Yang, Zongxin3; Yang, Yi3
Date Issued | 2024-08
Source Publication | ACM Transactions on Multimedia Computing, Communications and Applications |
ISSN | 1551-6857 |
Volume | 20
Issue | 8
Pages | 230
Abstract | In this article, we address the challenging makeup transfer task, aiming to transfer makeup from a reference image to a source image while preserving facial geometry and background consistency. Existing deep neural network-based methods have shown promising results in aligning facial parts and transferring makeup textures. However, they often neglect the facial geometry of the source image, leading to two adverse effects: (1) alterations in geometrically relevant facial features, causing face flattening and loss of personality, and (2) difficulties in maintaining background consistency, as networks cannot clearly determine the face-background boundary. To jointly tackle these issues, we propose the High Fidelity Makeup via two-dimensional (2D) and 3D Identity Preservation Network (IP23-Net), to the best of our knowledge, a novel framework that leverages facial geometry information to generate more realistic results. Our method comprises a 3D Shape Identity Encoder, which extracts identity and 3D shape features. We incorporate a 3D face reconstruction model to ensure the three-dimensional effect of face makeup, thereby preserving the characters' depth and natural appearance. To preserve background consistency, our Background Correction Decoder automatically predicts an adaptive mask for the source image, distinguishing the foreground and background. In addition to popular benchmarks, we introduce a new large-scale High Resolution Synthetic Makeup Dataset containing 335,230 diverse high-resolution face images to evaluate our method's generalization ability. Experiments demonstrate that IP23-Net achieves high-fidelity makeup transfer while effectively preserving background consistency. The code will be made publicly available. |
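Illustration | The abstract describes a Background Correction Decoder that predicts an adaptive mask so that background pixels are kept from the source image while the face region comes from the generated result. The PyTorch sketch below illustrates only that mask-based blending idea; the class name BackgroundCorrectionSketch, the feat_channels parameter, and the layer layout are assumptions for illustration and are not the authors' published IP23-Net implementation.

```python
import torch
import torch.nn as nn


class BackgroundCorrectionSketch(nn.Module):
    """Minimal sketch (assumed architecture, not the IP23-Net code):
    predict a soft foreground mask from decoder features and blend the
    makeup-transferred face with the original source image."""

    def __init__(self, feat_channels: int = 64):
        super().__init__()
        # Small conv head mapping decoder features to a 1-channel mask in [0, 1].
        self.mask_head = nn.Sequential(
            nn.Conv2d(feat_channels, feat_channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_channels, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, decoder_feat, generated, source):
        # decoder_feat: (B, C, H, W) features from the generator's decoder
        # generated:    (B, 3, H, W) makeup-transferred face
        # source:       (B, 3, H, W) original source image
        mask = self.mask_head(decoder_feat)  # (B, 1, H, W) soft foreground mask
        # Face region from the generated image, background from the source.
        output = mask * generated + (1.0 - mask) * source
        return output, mask


if __name__ == "__main__":
    head = BackgroundCorrectionSketch(feat_channels=64)
    feat = torch.randn(1, 64, 256, 256)
    gen = torch.rand(1, 3, 256, 256)
    src = torch.rand(1, 3, 256, 256)
    out, m = head(feat, gen, src)
    print(out.shape, m.shape)  # (1, 3, 256, 256) and (1, 1, 256, 256)
```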
Keyword | 3D Stereoscopic Loss ; Makeup Transfer ; High Resolution Synthetic Makeup Dataset ; Multimedia Application
DOI | 10.1145/3656475 |
Indexed By | SCIE |
Language | English
WOS Research Area | Computer Science |
WOS Subject | Computer Science, Information Systems ; Computer Science, Software Engineering ; Computer Science, Theory & Methods |
WOS ID | WOS:001338106300001 |
Publisher | Association for Computing Machinery, 1601 Broadway, 10th Floor, New York, NY 10019-7434
Scopus ID | 2-s2.0-85202710817 |
Document Type | Journal article |
Collection | Faculty of Science and Technology ; Department of Computer and Information Science
Affiliation | 1. University of Technology Sydney, Broadway, Australia; 2. University of Macau, Taipa, Macao; 3. Zhejiang University, Hangzhou, China
Recommended Citation GB/T 7714 | Liu, Jinliang, Zheng, Zhedong, Yang, Zongxin, et al. High Fidelity Makeup via 2D and 3D Identity Preservation Net[J]. ACM Transactions on Multimedia Computing, Communications and Applications, 2024, 20(8), 230.
APA | Liu, Jinliang, Zheng, Zhedong, Yang, Zongxin, & Yang, Yi. (2024). High Fidelity Makeup via 2D and 3D Identity Preservation Net. ACM Transactions on Multimedia Computing, Communications and Applications, 20(8), 230.
MLA | Liu, Jinliang, et al. "High Fidelity Makeup via 2D and 3D Identity Preservation Net." ACM Transactions on Multimedia Computing, Communications and Applications 20.8 (2024): 230.
Files in This Item: | There are no files associated with this item. |