Development of Web Content for Music Education Using AR Human Facial Recognition Technology
DOI:
https://doi.org/10.13052/jwe1540-9589.2252

Keywords:
Augmented reality (AR), Unity engine, face recognition, facial expression, real-time tracking, YouTube, web content

Abstract
As the media market changes rapidly, demand is increasing for content that can be consumed on web platforms, and producers must create differentiated web content that attracts viewers' interest. To increase the productivity and efficiency of content creation, content production using AR engines is becoming more common. This study uses a development environment that combines parametric and muscle-based model techniques. The faces of famous Western classical musicians, such as Mozart, Beethoven, Chopin, and Liszt, are created as 3D characters and augmented onto a person's face using facial recognition technology. The system analyzes and tracks changes in each person's facial expressions and applies them to the 3D character's expressions in real time. A person onto whom a musician's face is augmented can take on the persona of someone who lived in a different era, delivering information and communicating with present-day viewers based on music education scripts. This study presents a new direction for video production required in the media market.
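The real-time expression transfer described above can be sketched as follows. This is a minimal, hypothetical illustration: tracked facial expression coefficients (as produced by a typical AR face tracker) are smoothed frame by frame and remapped onto a 3D character's blendshape weights. All names, gains, and value ranges here are assumptions for illustration, not the paper's actual implementation or the Unity API.

```python
# Hypothetical sketch of real-time expression retargeting: raw tracker
# coefficients in [0, 1] are exponentially smoothed, then scaled by
# per-blendshape gains so a stylized character (e.g., a 3D Mozart) can
# exaggerate or damp the performer's expression. Names are illustrative.

from dataclasses import dataclass, field

@dataclass
class ExpressionRetargeter:
    """Maps tracked expression coefficients to character blendshape weights."""
    gains: dict                      # per-blendshape scale factors (assumed)
    smoothing: float = 0.5           # exponential smoothing factor in [0, 1)
    _state: dict = field(default_factory=dict)

    def update(self, tracked: dict) -> dict:
        """Process one frame: smooth each coefficient, apply its gain, clamp."""
        out = {}
        for name, value in tracked.items():
            prev = self._state.get(name, value)   # first frame: no history
            smoothed = self.smoothing * prev + (1.0 - self.smoothing) * value
            self._state[name] = smoothed
            gain = self.gains.get(name, 1.0)
            out[name] = max(0.0, min(1.0, smoothed * gain))
        return out

# Example frame: the performer's jaw is half open with a slight smile.
retargeter = ExpressionRetargeter(gains={"jawOpen": 1.2, "mouthSmile": 0.8})
weights = retargeter.update({"jawOpen": 0.5, "mouthSmile": 0.3})
```

The smoothing step reduces the jitter inherent in per-frame face tracking, while the gain table is one simple way to bridge the gap between a human performer's expression range and a stylized character's.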
References
H. J. Kim, “The approaching direction of producing animation contents based on new media”, Korea Digital Design Council, Digital Design Studies, vol. 10, no. 3, pp. 185–195, Oct., 2010.
H. Y. Jeon, “Domestic and foreign AR/VR industry status and implications”, Hyundai Economic Research Institute, Seoul, Korea, no. 687, 2017.
H. G. Seo, “Domestic virtual characters that go beyond YouTubers and game promotion models”, gamemeca.com, https://zrr.kr/yDZE, (accessed February 1, 2021).
Hyprsense, https://www.hyprsense.com, (accessed February 1, 2021).
Y. G. Kim, “A study on marker tracking research for utilization in AR based on facial motion capture – based on low polygon”, The Korean Journal of Animation, vol. 10, no. 4, pp. 45–60, 2014.
K. Waters, “A Muscle Model for Animating Three-Dimensional Facial Expression”, Proceedings of SIGGRAPH, vol. 21, no. 4, July, 1987.
J. A. Kim, “A Study on Effective Facial Expression of 3D Character through Variation of Emotions (Model using Facial Anatomy)”, Journal of Korea Multimedia Society, vol. 9, no. 7, July, 2006.
P. Ekman, W. V. Friesen, “Facial Action Coding System”, Consulting Psychologists Press, Palo Alto, CA, 1978.
W. E. Rinn, “The Neuropsychology of facial expression: A review of the neurological and psychological mechanisms for producing facial expressions”, Psychological Bulletin, vol. 95, no. 1, pp. 52–77, 1984.
K. W. E. Lin, T. Nakano, M. Goto, “VocalistMirror: A Singer Support Interface for Avoiding Undesirable Facial Expressions”, 16th Sound and Music Computing Conference (SMC2019), 2019, doi: 10.5281/zenodo.3249451.