A Continuous Hidden Markov Algorithm-Based Multimedia Melody Retrieval System for Music Education

Authors

  • Yingjie Cheng, Conservatory of Music, Hubei Engineering University, Xiaogan, Hubei 432000, China

DOI:

https://doi.org/10.13052/jicts2245-800X.1211

Keywords:

Music education, Multimedia melody retrieval system (MMRS), Audio features, Continuous Hidden Markov algorithm (CHMA)

Abstract

Education professionals receive training in Music Education (ME) to prepare for prospective careers as secondary or primary school music teachers, school ensemble administrators, or ensemble directors at music institutions. In the discipline of music education, educators conduct original research on different approaches to teaching and studying music. Accurately and efficiently retrieving music from huge music databases has become one of the most frequently discussed topics in contemporary multimedia information retrieval. Multimedia material is inherently multimodal rather than bound to a single representation; for musical information, these modalities may include a song's audio components and its lyrics. Retrieving melodic information has consequently become the main focus of most recent studies. Music programs are nonetheless often dismissed as an expensive diversion from academics rather than a viable profession or worthwhile pastime. Therefore, in this study, we propose a novel method based on the Continuous Hidden Markov Algorithm (CHMA) for retrieving melodies from musical multimedia recordings. The CHMA can be regarded as the most basic dynamic Bayesian network. During feature extraction, two types of features, audio frame features and audio example features, are extracted from the audio signal according to unit length. Each music clip is modeled individually, with multiple CHMAs running concurrently. The input music is processed by a trained CHMA that tracks fundamental frequencies, maps states, and generates the retrieval results. The training time for traditional opera reached 455.76 minutes, the testing time for narration was 56.10 minutes, and the recognition accuracy for advertisements reached 98.02%. Subsequent experimental results validate the applicability of the proposed approach.
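The retrieval scheme the abstract describes, one continuous (Gaussian-emission) HMM per music clip, with the query scored against every model, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the state counts, toy fundamental-frequency features, and the clip names are assumptions made for the example, and scoring uses the standard forward algorithm in log space.

```python
import numpy as np

def logsumexp(v):
    # Numerically stable log(sum(exp(v))).
    m = np.max(v)
    return float(m + np.log(np.sum(np.exp(v - m))))

def gaussian_logpdf(x, mean, var):
    # Diagonal-covariance Gaussian log-density for one feature frame.
    return float(-0.5 * np.sum(np.log(2 * np.pi * var) + (x - mean) ** 2 / var))

def forward_loglik(obs, pi, A, means, variances):
    """Log-likelihood of an observation sequence under a continuous
    (Gaussian-emission) HMM, computed with the forward algorithm."""
    N = len(pi)
    logA = np.log(A)
    log_alpha = np.array([np.log(pi[j]) + gaussian_logpdf(obs[0], means[j], variances[j])
                          for j in range(N)])
    for t in range(1, len(obs)):
        log_alpha = np.array([
            gaussian_logpdf(obs[t], means[j], variances[j])
            + logsumexp(log_alpha + logA[:, j])
            for j in range(N)
        ])
    return logsumexp(log_alpha)

# Two hypothetical clip models over a 1-D fundamental-frequency feature (Hz):
# each state holds a (mean, variance) pair; parameters are illustrative only.
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3],
              [0.3, 0.7]])
clip_models = {
    "clip_low":  (np.array([[110.0], [220.0]]), np.array([[400.0], [400.0]])),
    "clip_high": (np.array([[440.0], [550.0]]), np.array([[400.0], [400.0]])),
}

# Query feature sequence (frames near 440-560 Hz) is matched to every model;
# the clip whose CHMA best explains the query is returned.
query = np.array([[438.0], [452.0], [545.0], [560.0]])
scores = {name: forward_loglik(query, pi, A, m, v)
          for name, (m, v) in clip_models.items()}
best = max(scores, key=scores.get)
```

Running this ranks `clip_high` above `clip_low`, since its state means sit near the query's fundamental frequencies; in a real system the per-clip parameters would be learned with Baum-Welch rather than fixed by hand.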


Author Biography

Yingjie Cheng, Conservatory of Music, Hubei Engineering University, Xiaogan, Hubei 432000, China

Yingjie Cheng (1970.10–), male, Han nationality, from Qichun, Hubei; Master of Arts; associate professor at Hubei Engineering University. Research interests: computer music creation and music education.

References

Zhang, J., 2021. Music feature extraction and classification algorithm based on deep learning. Scientific Programming, 2021, pp. 1–9.

Li, N. and Ismail, M.J.B., 2022. Application of artificial intelligence technology in the teaching of complex situations of folk music under the vision of new media art. Wireless Communications and Mobile Computing, 2022, pp. 1–10.

Qin, T., Poovendran, P. and BalaMurugan, S., 2021. RETRACTED ARTICLE: Student-Centered Learning Environments Based on Multimedia Big Data Analytics. Arabian Journal for Science and Engineering, pp. 1–1.

Wang, D., 2022. Analysis of multimedia teaching path of popular music based on multiple intelligence teaching mode. Advances in Multimedia, 2022.

Dong, K., 2022. Multimedia pop music teaching model integrating semi-finished teaching strategies. Advances in Multimedia, 2022, pp. 1–13.

Rui, W., 2021, February. Application of the singing techniques in Oroqen folk songs teaching with the help of multimedia technology. In Journal of Physics: Conference Series (Vol. 1744, No. 3, p. 032244). IOP Publishing.

Kratus, J., 2019. On the road to popular music education: The road goes on forever. The Bloomsbury Handbook of Popular Music Education: Perspectives and Practices, pp. 455–463.

Wang, W., Li, Q., Xie, J., Hu, N., Wang, Z. and Zhang, N., 2023. Research on emotional semantic retrieval of attention mechanism oriented to audio-visual synesthesia. Neurocomputing, 519, pp. 194–204.

Hirai, T. and Sawada, S., 2019. Melody2vec: Distributed representations of melodic phrases based on melody segmentation. Journal of Information Processing, 27, pp. 278–286.

Karsdorp, F., van Kranenburg, P. and Manjavacas, E., 2019, October. Learning Similarity Metrics for Melody Retrieval. In ISMIR (pp. 478–485).

Zhang, Z., 2023. Extraction and recognition of music melody features using a deep neural network. Journal of Vibroengineering, 25(4).

Wu, J., Liu, X., Hu, X. and Zhu, J., 2020. PopMNet: Generating structured pop music melodies using neural networks. Artificial Intelligence, 286, p. 103303.

Carnovalini, F., Roda, A. and Caneva, P., 2023. A rhythm-aware serious game for social interaction. Multimedia Tools and Applications, 82(3), pp. 4749–4771.

Fedotchev, A.I. and Bondar, A.T., 2022. Adaptive Neurostimulation, Modulated by Subject’s Own Rhythmic Processes, in the Correction of Functional Disorders. Human Physiology, 48(1), pp. 108–112.

Lu, P., Tan, X., Yu, B., Qin, T., Zhao, S. and Liu, T.Y., 2022. MeloForm: Generating melody with musical form based on expert systems and neural networks. arXiv preprint arXiv:2208.14345.

Zhang, J., 2022. Music Data Feature Analysis and Extraction Algorithm Based on Music Melody Contour. Mobile Information Systems, 2022.

Chabin, T., Pazart, L. and Gabriel, D., 2022. Vocal melody and musical background are simultaneously processed by the brain for musical predictions. Annals of the New York Academy of Sciences, 1512(1), pp. 126–140.

Li, Z., Yao, Q. and Ma, W., 2021. Matching Subsequence Music Retrieval in a Software Integration Environment. Complexity, 2021, pp. 1–12.

Wang, T., 2022. Neural Network-Based Dynamic Segmentation and Weighted Integrated Matching of Cross-Media Piano Performance Audio Recognition and Retrieval Algorithm. Computational Intelligence and Neuroscience, 2022.

Zheng, X., 2022. Research on the whole teaching of vocal music course in university music performance major based on multimedia technology. Scientific Programming, 2022.

Shi, Y., 2022, April. Music Note Segmentation Recognition Algorithm Based on Nonlinear Feature Detection. In International Conference on Multi-modal Information Analytics (pp. 578–585). Cham: Springer International Publishing.

Shi, N. and Wang, Y., 2020. Symmetry in computer-aided music composition system with social network analysis and artificial neural network methods. Journal of Ambient Intelligence and Humanized Computing, pp. 1–16.

Ouyang, M., 2023. Employing mobile learning in music education. Education and Information Technologies, 28(5), pp. 5241–5257.

Li, T., Choi, M., Fu, K. and Lin, L., 2019, December. Music sequence prediction with mixture hidden markov models. In 2019 IEEE International Conference on Big Data (Big Data) (pp. 6128–6132). IEEE.


Published

2024-05-21

How to Cite

Cheng, Y. (2024). A Continuous Hidden Markov Algorithm-Based Multimedia Melody Retrieval System for Music Education. Journal of ICT Standardization, 12(01), 1–20. https://doi.org/10.13052/jicts2245-800X.1211
