Personalized User Models in a Real-world Edge Computing Environment: A Peer-to-peer Federated Learning Framework

Authors

  • Xiangchi Song Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
  • Zhaoyan Wang Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
  • KyeongDeok Baek Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
  • In-Young Ko Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea

DOI:

https://doi.org/10.13052/jwe1540-9589.2381

Keywords:

Peer-to-peer federated learning, personalized federated learning, hierarchical edge computing, edge-cloud environment

Abstract

As the number of Internet of Things (IoT) devices and the volume of data grow, distributed computing systems have become the primary deployment solution for large-scale IoT environments. Federated learning (FL) is a collaborative machine learning framework that allows model training on data from all participants while protecting their privacy. However, traditional FL suffers from low computational and communication efficiency in large-scale hierarchical cloud-edge collaborative IoT systems. Moreover, because of data heterogeneity, not all IoT devices benefit from the global model of traditional FL; instead, they need to maintain a degree of personalization during global training. We therefore extend FL into a horizontal peer-to-peer (P2P) structure and introduce our P2PFL framework: efficient peer-to-peer federated learning for users (EPFLU). EPFLU shifts the paradigm from vertical FL to a horizontal P2P structure from the user's perspective and incorporates personalized enhancement techniques based on private information. Through horizontal consensus information aggregation and private information supplementation, EPFLU addresses a weakness of traditional FL, in which global aggregation dilutes the characteristics of individual client data and causes the model to deviate. This structural transformation also significantly alleviates the original communication issues. In addition, EPFLU provides a customized simulation and evaluation framework that uses the EUA dataset of real-world edge server distributions, making it better suited to large-scale real-world IoT. Within this framework, we design two extreme data distribution scenarios and conduct detailed experiments with EPFLU and selected baselines on the MNIST and CIFAR-10 datasets. The results demonstrate that the robust and adaptive EPFLU framework consistently converges to optimal performance even under these challenging data distribution scenarios. Compared with traditional FL and selected P2PFL methods, EPFLU reduces communication time by 39% and 16%, respectively.
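The horizontal consensus aggregation with a personalized share of private information, as described in the abstract, can be sketched as a toy P2P mixing round. This is an illustrative sketch only, not the authors' implementation: the fully connected topology, the helper names, and the mixing weight `alpha` (governing how much private information each peer retains) are assumptions for the example.

```python
# Toy sketch of one EPFLU-style round: each peer combines its own model
# (personalized, private share) with the average of its neighbors' models
# (horizontal consensus share). `alpha` is a hypothetical mixing weight.

def local_step(w, grad, lr=0.1):
    """One local gradient step on a peer's private data (illustrative)."""
    return [wi - lr * gi for wi, gi in zip(w, grad)]

def p2p_consensus_round(models, neighbors, alpha=0.5):
    """Mix each peer's model with the average of its neighbors' models."""
    updated = {}
    for peer, w in models.items():
        nbrs = neighbors[peer]
        # Component-wise average of the neighbors' models.
        nbr_avg = [sum(models[j][k] for j in nbrs) / len(nbrs)
                   for k in range(len(w))]
        # Keep a personalized share alpha of the private model.
        updated[peer] = [alpha * wk + (1 - alpha) * ak
                         for wk, ak in zip(w, nbr_avg)]
    return updated

# Three fully connected peers with divergent initial models.
models = {0: [0.0, 0.0], 1: [1.0, 1.0], 2: [2.0, 2.0]}
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
for _ in range(30):
    models = p2p_consensus_round(models, neighbors, alpha=0.5)
# Repeated mixing drives the peers toward consensus at the mean model.
```

In a real deployment, each `p2p_consensus_round` would be interleaved with `local_step` updates on each peer's private data, which is what preserves personalization while the consensus term spreads shared knowledge.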


Author Biographies

Xiangchi Song, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea

Xiangchi Song is enrolled in the integrated Master's and Ph.D. program at the Web Engineering and Service Computing Lab, School of Computing, Korea Advanced Institute of Science and Technology (KAIST). His research interests include federated learning, P2P learning, hierarchical edge computing, and edge-cloud environments.

Zhaoyan Wang, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea

Zhaoyan Wang is enrolled in the Master’s program at the Web Engineering and Service Computing Lab, School of Computing, Korea Advanced Institute of Science and Technology (KAIST). His research interests include service computing, spatial-temporal data analysis, and edge-cloud collaboration.

KyeongDeok Baek, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea

KyeongDeok Baek is a post-doctoral researcher at the School of Computing at the Korea Advanced Institute of Science and Technology (KAIST) in Daejeon, Korea. He received his Ph.D. in Computer Science from the School of Computing at KAIST in 2024. His recent research focuses on realizing interactive Internet of Things services in public spaces by extending multi-agent reinforcement learning and human–computer interaction approaches.

In-Young Ko, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea

In-Young Ko is a professor at the School of Computing at the Korea Advanced Institute of Science and Technology (KAIST) in Daejeon, Korea. He received his Ph.D. in computer science from the University of Southern California (USC) in 2003. His research interests include services computing, web engineering, and software engineering. His recent research focuses on service-oriented software development in large-scale and distributed system environments such as the web, Internet of Things (IoT), and edge cloud environments. He is a member of the IEEE.

References

Samiul Alam, Tuo Zhang, Tiantian Feng, Hui Shen, Zhichao Cao, Dong Zhao, JeongGil Ko, Kiran Somasundaram, Shrikanth S Narayanan, Salman Avestimehr, et al. Fedaiot: A federated learning benchmark for artificial intelligence of things. arXiv preprint arXiv:2310.00109, 2023.

Qian Chen, Zilong Wang, Wenjing Zhang, and Xiaodong Lin. Ppt: A privacy-preserving global model training protocol for federated learning in p2p networks. Computers & Security, 124:102966, 2023.

Qian Chen, Zilong Wang, Yilin Zhou, Jiawei Chen, Dan Xiao, and Xiaodong Lin. Cfl: Cluster federated learning in large-scale peer-to-peer networks. In International Conference on Information Security, pages 464–472. Springer, 2022.

Yiqiang Chen, Xin Qin, Jindong Wang, Chaohui Yu, and Wen Gao. Fedhealth: A federated transfer learning framework for wearable healthcare. IEEE Intelligent Systems, 35(4):83–93, 2020.

Qianlong Dang, Guanghui Zhang, Ling Wang, Shuai Yang, and Tao Zhan. Hybrid iot device selection with knowledge transfer for federated learning. IEEE Internet of Things Journal, 2023.

Tiantian Feng, Digbalay Bose, Tuo Zhang, Rajat Hebbar, Anil Ramakrishna, Rahul Gupta, Mi Zhang, Salman Avestimehr, and Shrikanth Narayanan. Fedmultimodal: A benchmark for multimodal federated learning. In Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, pages 4035–4045, 2023.

Harald T Friis. A note on a simple transmission formula. Proceedings of the IRE, 34(5):254–256, 1946.

Masaharu Hata. Empirical formula for propagation loss in land mobile radio services. IEEE Transactions on Vehicular Technology, 29(3):317–325, 1980.

Alex Krizhevsky, Geoffrey Hinton, et al. Learning multiple layers of features from tiny images. 2009.

Phu Lai, Qiang He, Mohamed Abdelrazek, Feifei Chen, John Hosking, John Grundy, and Yun Yang. Optimal edge user allocation in edge computing with variable sized vector bin packing. In Service-Oriented Computing: 16th International Conference, ICSOC 2018, Hangzhou, China, November 12-15, 2018, Proceedings 16, pages 230–245. Springer, 2018.

Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.

Ang Li, Jingwei Sun, Binghui Wang, Lin Duan, Sicheng Li, Yiran Chen, and Hai Li. Lotteryfl: Empower edge intelligence with personalized and communication-efficient federated learning. In 2021 IEEE/ACM Symposium on Edge Computing (SEC), pages 68–79. IEEE, 2021.

Daliang Li and Junpu Wang. Fedmd: Heterogenous federated learning via model distillation. arXiv preprint arXiv:1910.03581, 2019.

Xiangru Lian, Ce Zhang, Huan Zhang, Cho-Jui Hsieh, Wei Zhang, and Ji Liu. Can decentralized algorithms outperform centralized algorithms? a case study for decentralized parallel stochastic gradient descent. Advances in neural information processing systems, 30, 2017.

Lumin Liu, Jun Zhang, SH Song, and Khaled B Letaief. Client-edge-cloud hierarchical federated learning. In ICC 2020-2020 IEEE international conference on communications (ICC), pages 1–6. IEEE, 2020.

Tianyi Liu, Ruyu Luo, Fangmin Xu, Chaoqiong Fan, and Chenglin Zhao. Distributed learning based joint communication and computation strategy of iot devices in smart cities. Sensors, 20(4):973, 2020.

Brendan McMahan, Eider Moore, Daniel Ramage, Seth Hampson, and Blaise Aguera y Arcas. Communication-efficient learning of deep networks from decentralized data. In Artificial intelligence and statistics, pages 1273–1282. PMLR, 2017.

Jed Mills, Jia Hu, and Geyong Min. Communication-efficient federated learning for wireless edge intelligence in iot. IEEE Internet of Things Journal, 7(7):5986–5994, 2019.

Jed Mills, Jia Hu, and Geyong Min. Multi-task federated learning for personalised deep neural networks in edge computing. IEEE Transactions on Parallel and Distributed Systems, 33(3):630–641, 2021.

Pramod Kaushik Mudrakarta, Mark Sandler, Andrey Zhmoginov, and Andrew Howard. K for the price of 1: Parameter-efficient multi-task and transfer learning. arXiv preprint arXiv:1810.10703, 2018.

Sashank J. Reddi, Zachary Charles, Manzil Zaheer, Zachary Garrett, Keith Rush, Jakub Konečný, Sanjiv Kumar, and H. Brendan McMahan. Adaptive federated optimization. arXiv preprint arXiv:2003.00295, 2020.

Abhijit Guha Roy, Shayan Siddiqui, Sebastian Pölsterl, Nassir Navab, and Christian Wachinger. Braintorrent: A peer-to-peer environment for decentralized federated learning. arXiv preprint arXiv:1905.06731, 2019.

Atul Sharma, Joshua C Zhao, Wei Chen, Qiang Qiu, Saurabh Bagchi, and Somali Chaterji. How to learn collaboratively: Federated learning to peer-to-peer learning and what’s at stake. In 2023 53rd Annual IEEE/IFIP International Conference on Dependable Systems and Networks-Supplemental Volume (DSN-S), pages 122–126. IEEE, 2023.

Virginia Smith, Chao-Kai Chiang, Maziar Sanjabi, and Ameet S Talwalkar. Federated multi-task learning. Advances in neural information processing systems, 30, 2017.

Xiangchi Song, Zhaoyan Wang, KyeongDeok Baek, and In-Young Ko. Epflu: Efficient peer-to-peer federated learning for personalized user models in edge-cloud environments. In ICWE 2024 International Workshops, BECS and WALS, Tampere, Finland, June 2024.

Minxue Tang, Xuefei Ning, Yitu Wang, Yu Wang, and Yiran Chen. Fedgp: Correlation-based active client selection strategy for heterogeneous federated learning. arXiv preprint arXiv:2103.13822, 2021.

Nguyen H Tran, Wei Bao, Albert Zomaya, Minh NH Nguyen, and Choong Seon Hong. Federated learning over wireless networks: Optimization model design and analysis. In IEEE INFOCOM 2019-IEEE conference on computer communications, pages 1387–1395. IEEE, 2019.

Kaibin Wang, Qiang He, Feifei Chen, Chunyang Chen, Faliang Huang, Hai Jin, and Yun Yang. Flexifed: Personalized federated learning for edge clients with heterogeneous model architectures. In Proceedings of the ACM Web Conference 2023, pages 2979–2990, 2023.

Hongda Wu and Ping Wang. Node selection toward faster convergence for federated learning on non-iid data. IEEE Transactions on Network Science and Engineering, 9(5):3099–3111, 2022.

Qiong Wu, Kaiwen He, and Xu Chen. Personalized federated learning for intelligent iot applications: A cloud-edge based framework. IEEE Open Journal of the Computer Society, 1:35–44, 2020.

Tuo Zhang, Tiantian Feng, Samiul Alam, Sunwoo Lee, Mi Zhang, Shrikanth S Narayanan, and Salman Avestimehr. Fedaudio: A federated learning benchmark for audio tasks. In ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1–5. IEEE, 2023.

Published

2025-02-07

How to Cite

Song, X., Wang, Z., Baek, K., & Ko, I.-Y. (2025). Personalized User Models in a Real-world Edge Computing Environment: A Peer-to-peer Federated Learning Framework. Journal of Web Engineering, 23(08), 1057–1084. https://doi.org/10.13052/jwe1540-9589.2381

Section

Articles