Personalized Recommendation Framework Using Large Language Model and Chain-of-Thought Prompting: A Case Study of a Computer Programming Course
DOI: https://doi.org/10.13052/jmm1550-4646.2165

Keywords: Recommendation system, personalized learning, chain-of-thought prompting, large language model

Abstract
Traditional learning methodologies often fall short of accommodating diverse learner needs and adapting dynamically to individual learning paces and styles. This limitation underscores the growing need for personalized learning, which has the potential to significantly improve learning outcomes, foster deeper engagement, and enhance learner motivation. This study introduces a novel personalized recommendation framework (PRF) that leverages large language models (LLMs) and chain-of-thought (CoT) prompting techniques to advance personalized learning. Specifically, it proposes a strategic personalization framework that addresses learner heterogeneity by incorporating both preference-based and performance-based features. CoT prompting is integrated to simulate human-like sequential reasoning in LLMs, thereby improving the framework’s adaptability and effectiveness. A case study was conducted in a computer programming course, a domain that requires both conceptual understanding and practical problem-solving, to evaluate the proposed framework. The assessment involved 15 expert reviewers who examined the framework’s effectiveness and overall satisfaction. Experimental results showed that the proposed PRF generated recommendations perceived as significantly more satisfactory than those produced by the non-PRF system (M = 4.50 ± 0.30 vs. 3.73 ± 0.21, p < 0.001). In addition, the experts strongly agreed that the framework effectively identified students in urgent need of support, provided timely recommendations, and delivered personalized learning experiences aligned with individual learner needs.
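To make the CoT component concrete, the sketch below shows how a chain-of-thought prompt might be assembled from the two feature types the framework combines (preference-based and performance-based). This is an illustrative assumption only: the `LearnerProfile` fields, step wording, and thresholds are hypothetical and do not reproduce the paper's actual prompt design.

```python
# Minimal sketch of assembling a chain-of-thought (CoT) prompt for a
# personalized-recommendation LLM call. All field names and step text
# are illustrative assumptions, not the framework's actual prompts.
from dataclasses import dataclass


@dataclass
class LearnerProfile:
    name: str
    preferred_style: str   # preference-based feature, e.g. "visual"
    quiz_average: float    # performance-based feature, 0-100
    weak_topics: list


def build_cot_prompt(profile: LearnerProfile) -> str:
    """Compose a prompt that asks the LLM to reason step by step
    (human-like sequential reasoning) before recommending resources."""
    steps = [
        "Step 1: Assess the learner's performance level from the quiz average.",
        "Step 2: Identify the topics where support is most urgent.",
        "Step 3: Match candidate resources to the learner's preferred style.",
        "Step 4: Output a ranked list of recommendations with a reason for each.",
    ]
    return (
        f"Learner: {profile.name}\n"
        f"Preferred style: {profile.preferred_style}\n"
        f"Quiz average: {profile.quiz_average:.1f}\n"
        f"Weak topics: {', '.join(profile.weak_topics)}\n\n"
        "Reason through the following steps before answering:\n"
        + "\n".join(steps)
    )


prompt = build_cot_prompt(
    LearnerProfile("S01", "visual", 58.0, ["recursion", "pointers"])
)
print(prompt)
```

In a full pipeline, the returned string would be sent to the LLM, and the step-by-step structure is what distinguishes CoT prompting from a direct "recommend resources for this student" query.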