We propose a system of five regression models to classify music emotion. To this end, a dataset similar to the MIREX contest dataset was used. The songs of each cluster are separated into five sets and labeled 1; a similar number of songs from the other clusters is then added to each set and labeled 0. Each regression model is trained to output a value representing how strongly a song relates to its specific cluster. The five outputs are combined, and the highest score determines the classification. An F-measure of 68.9% was obtained. Results were validated with 10-fold cross-validation, and feature selection was also tested.
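The one-regressor-per-cluster scheme described above can be sketched as follows. This is a minimal illustration using scikit-learn; the feature matrix, cluster labels, and the choice of ridge regression are assumptions for illustration, not the paper's actual features or regression method.

```python
# Hypothetical sketch: one regressor per emotion cluster, trained on
# binary relevance targets (1 = song belongs to the cluster, 0 = not),
# with the highest of the combined outputs taken as the classification.
import numpy as np
from sklearn.linear_model import Ridge  # placeholder regressor choice


def train_cluster_regressors(X, y, n_clusters=5):
    """Train one regression model per cluster on 0/1 relevance targets."""
    models = []
    for c in range(n_clusters):
        target = (y == c).astype(float)  # 1 for songs of cluster c, else 0
        models.append(Ridge().fit(X, target))
    return models


def classify(models, X):
    """Combine the per-cluster outputs; the highest score wins."""
    scores = np.column_stack([m.predict(X) for m in models])
    return scores.argmax(axis=1)
```

In this framing each regressor only has to estimate how related a song is to its own cluster, and the final decision emerges from comparing the five scores rather than from a single multi-class classifier.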
Subject
music emotion recognition, music information retrieval
Related Project
MOODetector: A System for Mood-based Classification and Retrieval of Audio Music
Conference
5th International Workshop on Music and Machine Learning – MML’2012 – in conjunction with the 19th International Conference on Machine Learning – ICML’2012, June 2012
Cited by
Year 2016 : 1 citation
Saim Shin, Sei-Jin Jang, Donghyun Lee, Unsang Park and Ji-Hwan Kim, "Brainwave-based Mood Classification Using Regularized Comm," KSII Transactions on Internet and Information Systems, vol. 10, no. 2, pp. 807-824, 2016. DOI: 10.3837/tiis.2016.02.020
Year 2013 : 1 citation
Piva, R. (2013). Combining timbric and rhythmic features for semantic music tagging. MSc Thesis. University of Padova, Italy.