A Real-time Data-driven Multimedia Platform Integrating Public Data and AI-based Facial Generation for Personalized Interaction

Authors

  • Jungjo Na, Duksung Women’s University, Republic of Korea
  • Miso Kim, Duksung Women’s University, Republic of Korea
  • Hyeon Gyu Kim, Duksung Women’s University, Republic of Korea

DOI:

https://doi.org/10.13052/jmm1550-4646.2166

Keywords:

Real-Time Data Visualization, AI-Based Facial Generation, Stable Diffusion, Public Data APIs, Interactive Multimedia Platform, Human–Computer Interaction (HCI), Urban Data Integration

Abstract

This study proposes a data-driven interactive multimedia platform that integrates real-time environmental and demographic data visualization with AI-based facial generation to deliver dynamic and immersive user experiences. The system combines image-generation AI with data visualization in a unified framework, enabling real-time, personalized interaction. Real-time weather and demographic data are collected and processed through public APIs provided by the Seoul Metropolitan Government, and these data streams are mapped to visual parameters such as sky color, cloud density, and background environments to reflect local conditions dynamically. Facial generation is carried out using a fine-tuned Stable Diffusion model trained on a Korean facial dataset categorized by age and gender. The generated face meshes are refined using Detailed Expression Capture and Animation (DECA) and implemented as MetaHuman characters within Unreal Engine to produce expressive real-time avatars. The platform adopts a client–server architecture and leverages cloud-based asset management to efficiently handle real-time data and 3D resources. This approach demonstrates a novel form of interactive media experience that merges real-time public data with AI-driven personalization, and it opens new opportunities for interdisciplinary HCI research bridging art, design, urban data, and artificial intelligence.
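The data-to-visual mapping described in the abstract can be illustrated with a minimal sketch. The function below turns a weather reading into rendering parameters (sky color, cloud density); the field names, value ranges, and interpolation scheme are illustrative assumptions, not the authors' actual schema or the Seoul Open Data API's response format.

```python
def map_weather_to_visuals(temp_c: float, cloud_pct: float) -> dict:
    """Map a temperature and cloud-cover reading to visual parameters.

    Hypothetical mapping: cloud cover (0-100 %) becomes a density in
    [0, 1]; temperature interpolates the sky color between a cool tone
    (cold) and a warm tone (hot).
    """
    # Clamp cloud cover to [0, 100] and normalize to a density in [0, 1].
    cloud_density = min(max(cloud_pct, 0.0), 100.0) / 100.0
    # Normalize temperature over an assumed -10..35 degC range to [0, 1].
    t = min(max((temp_c + 10.0) / 45.0, 0.0), 1.0)
    cold, warm = (70, 120, 200), (255, 180, 90)  # RGB endpoints (assumed)
    sky_color = tuple(round(c + (w - c) * t) for c, w in zip(cold, warm))
    return {"sky_color": sky_color, "cloud_density": cloud_density}

# Example: a warm, partly cloudy reading.
params = map_weather_to_visuals(temp_c=25.0, cloud_pct=40.0)
```

In the system described here, such parameters would then be forwarded to the rendering client (e.g. as material or lighting inputs in Unreal Engine) rather than consumed in Python; the sketch only shows the mapping step.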


Author Biographies

Jungjo Na, Duksung Women’s University, Republic of Korea

Jungjo Na is a professor in the Department of Virtual Reality Convergence at Duksung Women’s University. She received her Ph.D. in Media Studies from Soongsil University. Her research focuses on the development of AI-based VR and XR content, with a particular interest in creating convergent media experiences. Her work aims to deliver art therapy to users through immersive and interdisciplinary content.

Miso Kim, Duksung Women’s University, Republic of Korea

Miso Kim is currently a master’s student in ICT Convergence Engineering (Media Convergence Major) at Duksung Women’s University. She graduated with a bachelor’s degree in IT Media Engineering from Duksung Women’s University.

Hyeon Gyu Kim, Duksung Women’s University, Republic of Korea

Hyeon Gyu Kim received his B.Sc. and M.Sc. degrees in computer science from the University of Ulsan in 2000, and his Ph.D. degree in computer science from the Korea Advanced Institute of Science and Technology, Daejeon, South Korea, in 2010. From 2001 to 2011, he was a Chief Research Engineer with LG Electronics. From 2012, he was an Associate Professor with the Division of Computer Science and Engineering at Sahmyook University, Seoul, South Korea. Since 2025, he has been an Associate Professor with the Department of VR Convergence Engineering at Duksung Women’s University, Seoul, South Korea. His research interests include artificial intelligence, machine learning, and big data processing.

References

K. Iranshahi, J. Brun, and T. Arnold, “Digital twins: Recent advances and future directions in engineering fields,” Intelligent Systems with Applications, vol. 26, 200516, 2025.

B. Rajasekaran, T. Brahmani, and C. Reshma, “Spatial personality for human space interaction,” in Beyond Codes and Pixels: Proceedings of the 17th International Conference on Computer-Aided Architectural Design Research in Asia, T. Fischer et al. (Eds.), pp. 69–78, Hong Kong: Association for Computer-Aided Architectural Design Research in Asia, 2012.

H. Xu, A. Berres, S. Yoginath, H. Sorensen, P. Nugent, J. Severino, S. Tennille, A. Moore, W. Jones, and J. Sanyal, “Smart mobility in the cloud: Enabling real-time situational awareness and cyber-physical control through a digital twin for traffic,” IEEE Transactions on Intelligent Transportation Systems, vol. 24, pp. 3145–3156, 2023.

B. Ansari, “Enhancing the usability and usefulness of open government data: A comprehensive review of the state of open government data visualization research,” Government Information Quarterly, vol. 39, 101664, 2022.

K. N. Mahajan and L. A. Gokhale, “Comparative study of static and interactive visualization approaches,” International Journal of Computer Science and Engineering (IJCSE), vol. 10, no. 3, pp. 85–91, 2018.

T. Karras, S. Laine, and T. Aila, “Analyzing and improving the image quality of StyleGAN,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2020, pp. 8107–8116.

X. Li, X. Hou, and C. C. Loy, “When StyleGAN Meets Stable Diffusion: A W+ Adapter for Personalized Image Generation,” arXiv preprint, 2023.

Y. Feng, V. Choutas, M. J. Black, and T. Bolkart, “DECA: Detailed Expression Capture and Animation from a Single Image,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 19–25 June 2021, pp. 20311–20320.

Seoul Metropolitan Government, “Seoul Open Data Plaza,” Available online: https://data.seoul.go.kr/ (accessed on 1 September 2025).

KT Corporation, “KT Public Data API Portal,” Available online: https://apilink.kt.co.kr/ (accessed on 1 September 2025).

SK Telecom, “SK Open API Portal,” Available online: https://openapi.sk.com/ (accessed on 1 September 2025).

Data Labs Agency, “Hydro Tasmania Energy Portal Case Study,” Available online: https://www.datalabsagency.com/case-studies/ (accessed on 15 August 2025).

Data Labs Agency, “Monash Health Interactive Timeline Tool,” Available online: https://www.datalabsagency.com/case-studies/ (accessed on 15 August 2025).

Vev Design, “The Pudding: Wine Data Modeling,” Available online: https://www.vev.design/blog/interactive-data-visualization-examples/ (accessed on 15 August 2025).

Y. Choi, “SVAD: From single image to 3D avatar via synthetic data generation with video diffusion and data augmentation,” arXiv, 2025. https://arxiv.org/abs/2505.05475.

Pegasus Project, “Personalized generative 3D avatars with composable attributes,” in Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024.

Vocal Media, “How face changing tech is shaping creativity in visual arts,” Available online: https://vocal.media/art/how-face-changing-tech-is-shaping-creativity-in-visual-arts (accessed on 10 August 2025).

Times of India, “Pimpri Chinchwad Municipal Corporation takes the tech route to propel civic solutions,” Available online: https://timesofindia.indiatimes.com/city/pune/pimpri-chinchwad-municipal-corporation-takes-the-tech-route-to-propel-civic-solutions/articleshow/123265739.cms (accessed on 12 August 2025).

IoT Network Certified, “Peachtree Corners Smart City Deployment,” Available online: https://iotnetworkcertified.com/case-study-smart-cities/ (accessed on 15 August 2025).

NTT Data, “City of Las Vegas Smart Solutions,” Available online: https://us.nttdata.com/en/case-studies/city-of-las-vegas-client-story (accessed on 15 August 2025).


Published

2025-12-19

How to Cite

Na, J., Kim, M., & Kim, H. G. (2025). A Real-time Data-driven Multimedia Platform Integrating Public Data and AI-based Facial Generation for Personalized Interaction. Journal of Mobile Multimedia, 21(06), 1135–1166. https://doi.org/10.13052/jmm1550-4646.2166

Issue

Section

ECTI