How to Retrieve Music using Mood Tags in a Folksonomy

Authors

  • Chang Bae Moon ICT-Convergence Research Center, Kumoh National Institute of Technology, Korea https://orcid.org/0000-0003-2919-0373
  • Jong Yeol Lee Computer and Software Engineering, Kumoh National Institute of Technology, Korea
  • Byeong Man Kim Computer and Software Engineering, Kumoh National Institute of Technology, Korea https://orcid.org/0000-0003-4456-9314

DOI:

https://doi.org/10.13052/jwe1540-9589.2086

Keywords:

music mood, folksonomy, mood tag, Last.fm, mood vector, relationship between mood and tag

Abstract

A folksonomy is a classification system in which volunteers collaboratively create and manage tags to annotate and categorize content. Retrieving music through such tags, however, raises several problems, among them synonyms, differing levels of tagging, and neologisms. To address the synonym problem, we introduced a mood vector of 12 moods, each represented by a numeric value, as an internal tag. Both the moods of a music piece and a mood tag can then be represented internally by numeric values, which are used to retrieve music pieces. To determine the mood vector of a music piece, 12 regressors, one per mood, were built with Support Vector Regression to predict the likelihood of each mood from acoustic features. To map a tag to its mood vector, the relationship between the moods of a music piece and its mood tags was investigated based on tagging data retrieved from Last.fm, a website that allows users to search for and stream music. To evaluate retrieval performance, music pieces on Last.fm annotated with at least one mood tag were used as a test set; when calculating precision and recall, pieces annotated with synonyms of a given query tag were treated as relevant. Experiments on this real-world data set illustrate the utility of internal tagging and show that our approach offers a practical solution to the synonym problem.
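To make the pipeline concrete, the sketch below illustrates the internal-tagging idea in Python: one regressor per mood predicts a 12-dimensional mood vector from acoustic features, a query tag is mapped to its own mood vector, and pieces are ranked by vector similarity. This is a minimal sketch under stated assumptions, not the authors' implementation: scikit-learn's SVR stands in for their regression setup, the cosine-similarity ranking is one plausible matching scheme, and all function names, the feature matrix, and the tag-to-mood table are hypothetical.

```python
# Minimal, illustrative sketch of mood-vector retrieval.
# Assumptions (not from the paper): scikit-learn's SVR, cosine-similarity
# ranking, and toy data shapes; `tag_to_mood` is a hypothetical mapping
# learned from Last.fm tagging data.
import numpy as np
from sklearn.svm import SVR

N_MOODS = 12  # one regressor per mood class

def train_mood_regressors(features, mood_scores):
    """Fit one SVR per mood; mood_scores has shape (n_pieces, N_MOODS)."""
    return [SVR(kernel="rbf").fit(features, mood_scores[:, m])
            for m in range(N_MOODS)]

def mood_vector(regressors, feature_vec):
    """Predict a 12-dimensional mood vector for one music piece."""
    x = feature_vec.reshape(1, -1)
    return np.array([r.predict(x)[0] for r in regressors])

def retrieve(query_tag, tag_to_mood, piece_moods, top_k=10):
    """Rank pieces by cosine similarity between the query tag's mood
    vector and each piece's predicted mood vector."""
    q = tag_to_mood[query_tag]          # mood vector of the query tag
    sims = piece_moods @ q / (
        np.linalg.norm(piece_moods, axis=1) * np.linalg.norm(q) + 1e-12)
    return np.argsort(-sims)[:top_k]    # indices of the best-matching pieces
```

Because tags and music pieces share the same 12-dimensional mood space, synonymous tags such as "happy" and "cheerful" map to nearby vectors and retrieve largely overlapping result sets, which is how internal tagging sidesteps the synonym problem.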

Author Biographies

Chang Bae Moon, ICT-Convergence Research Center, Kumoh National Institute of Technology, Korea

Chang Bae Moon received his BSc, MSc, and PhD from the Department of Software Engineering at Kumoh National Institute of Technology, Korea, in 2007, 2010, and 2013, respectively. Since 2014 he has been a Research Professor in the ICT Convergence Research Center at Kumoh National Institute of Technology. From 2013 to 2014, he was a Senior Researcher at Young Poong Elec. Co. His current research areas include artificial intelligence, Web intelligence, information filtering, and image processing.

Jong Yeol Lee, Computer and Software Engineering, Kumoh National Institute of Technology, Korea

Jong Yeol Lee received his BS and MS degrees in Computer Engineering from Kumoh National Institute of Technology, Korea, in 1992 and 1994, respectively, and became a PhD candidate in Software Engineering at the same university in 2018. Since 2005 he has been a part-time lecturer in the Computer Software Engineering Department at Kumoh National Institute of Technology. His current research areas include artificial intelligence, machine learning, and information security.

Byeong Man Kim, Computer and Software Engineering, Kumoh National Institute of Technology, Korea

Byeong Man Kim received his BS degree in Computer Engineering from Seoul National University (SNU), Korea, in 1987, and his MS and PhD degrees in Computer Science from the Korea Advanced Institute of Science and Technology (KAIST) in 1989 and 1992, respectively. He has been with Kumoh National Institute of Technology since 1992 as a faculty member of the Computer Software Engineering Department. From 1998 to 1999, he was a postdoctoral fellow at the University of California, Irvine. From 2005 to 2006, he was a visiting scholar in the Department of Computer Science at Colorado State University, working on the design of a collaborative Web agent based on a friend network. His current research areas include artificial intelligence, Web intelligence, information filtering, and brain-computer interfaces.

Published

2021-11-21

How to Cite

Moon, C. B., Lee, J. Y., & Kim, B. M. (2021). How to Retrieve Music using Mood Tags in a Folksonomy. Journal of Web Engineering, 20(8), 2335–2360. https://doi.org/10.13052/jwe1540-9589.2086

Issue

Vol. 20 No. 8 (2021)

Section

Articles