CISUC

Using Support Vector Machines for Automatic Mood Tracking in Audio Music

Authors

Abstract

In this paper we propose a solution for automatic mood tracking in audio music, based on supervised learning and classification. To this end, various music clips with a duration of 25 seconds, previously annotated with arousal and valence (AV) values, were used to train several models. These models were used to predict the quadrants of Thayer's taxonomy and the AV values of small segments from full songs, revealing mood changes over time. The system's accuracy was measured by calculating the matching ratio between the predicted results and full-song annotations performed by volunteers. Different combinations of audio features, frameworks and other parameters were tested, resulting in an accuracy of 56.3% and showing that there is still much room for improvement.
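The workflow described in the abstract can be illustrated with a minimal sketch, which is not the authors' implementation: it assumes pre-extracted feature vectors for the annotated 25-second training clips and for consecutive segments of a full song, uses a scikit-learn SVM classifier over Thayer quadrant labels, and substitutes random numbers for real audio features and annotations; the feature dimensionality, segment counts and kernel settings below are illustrative only.

```python
# Minimal sketch (not the paper's code): SVM-based mood tracking over song segments.
# Features, labels and segment counts are synthetic placeholders.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Training set: one feature vector per annotated 25-second clip,
# labelled with its Thayer quadrant (1..4) derived from the AV annotations.
X_train = rng.normal(size=(200, 20))     # e.g. spectral/rhythmic descriptors
y_train = rng.integers(1, 5, size=200)   # quadrant labels

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_train, y_train)

# Mood tracking: predict a quadrant for each consecutive segment of a full song.
X_song_segments = rng.normal(size=(40, 20))   # one row per segment
predicted = model.predict(X_song_segments)

# Evaluation in the spirit of the paper: matching ratio between the
# predictions and per-segment annotations of the full song.
annotations = rng.integers(1, 5, size=40)
matching_ratio = np.mean(predicted == annotations)
print(f"Matching ratio: {matching_ratio:.1%}")
```

A regression variant (e.g. SVR) predicting the AV values directly would follow the same pattern, with the quadrant labels replaced by continuous arousal and valence targets.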

Subject

Music Information Retrieval, Music Emotion Recognition

Related Project

MOODetector: A System for Mood-based Classification and Retrieval of Audio Music

Conference

130th Audio Engineering Society Convention (AES 130), London, UK, May 2011

Cited by

Year 2016 : 2 citations

 Chau, Chuck-jee, Ronald Mo, and Andrew Horner. "The Emotional Characteristics of Piano Sounds with Different Pitch and Dynamics." Journal of the Audio Engineering Society 64.11 (2016): 918-932.

 Aljanaki, A. (2016) "Music and emotion: representation and computational modeling". PhD Thesis, Utrecht University. ISBN: 978-94-6328-083-9

Year 2015 : 4 citations

 Dufour, I. (2015). Improving Music Mood Annotation Using Polygonal Circular Regression. MSc Thesis. Department of Computer Science, University of Victoria, Victoria, BC, Canada.

 Imbrasaite, Vaiva. "Continuous dimensional emotion tracking in music". PhD thesis. University of Cambridge, 2015.

 Plewa, M., Kostek, B. (2015) "Music Mood Visualization Using Self-Organizing Maps". Archives of Acoustics. Volume 40, Issue 4, Pages 513–525, ISSN (Online) 2300-262X, DOI: 10.1515/aoa-2015-0051, December 2015.

 C. H. Chung and H. Chen, "Vector representation of emotion flow for popular music," Multimedia Signal Processing (MMSP), 2015 IEEE 17th International Workshop on, Xiamen, 2015, pp. 1-6. doi: 10.1109/MMSP.2015.7340797

Year 2014 : 2 citations

 Imbrasaitė, Vaiva, Tadas Baltrušaitis, and Peter Robinson. "CCNF for continuous emotion tracking in music: Comparison with CCRF and relative feature representation." Multimedia and Expo Workshops (ICMEW), 2014 IEEE International Conference on. IEEE, 2014.

 Baltrušaitis, Tadas. Automatic facial expression analysis. PhD thesis. University of Cambridge, 2014.

Year 2013 : 6 citations

 Amanda Cohen Mostafavi, Zbigniew Ras and Alicja Wieczorkowska (2013). “Developing Personalized Classifiers for Retrieving Music by Mood”, ECML/PKDD 2013

 Kostek, Bożena, and Magdalena Plewa. "Parametrisation and correlation analysis applied to music mood classification." International Journal of Computational Intelligence Studies 2.1 (2013): 4-25.

 Imbrasaitė, Vaiva, and Peter Robinson. "Absolute or Relative? A New Approach to Building Feature Vectors for Emotion Tracking in Music." The 3rd International Conference on Music & Emotion, Jyväskylä, Finland, June 11-15, 2013. University of Jyväskylä, Department of Music, 2013.

 Mostafavi, Amanda Cohen, Zbigniew W. Raś, and Alicja A. Wieczorkowska. "From Personalized to Hierarchically Structured Classifiers for Retrieving Music by Mood." International Workshop on New Frontiers in Mining Complex Patterns. Springer International Publishing, 2013.

 Imbrasaite, Vaiva, Tadas Baltrušaitis, and Peter Robinson (2013). "Emotion Tracking in Music Using Continuous Conditional Random Fields and Relative Feature Representation.", AAM Workshop, ICME’2013.

 Plewa, Magdalena, and Bozena Kostek. "Multidimensional Scaling Analysis Applied to Music Mood Recognition." Audio Engineering Society Convention 134. Audio Engineering Society, 2013.

Year 2012 : 4 citations

 Plewa, Magdalena, and Bozena Kostek. "A Study on Correlation between Tempo and Mood of Music." Audio Engineering Society Convention 133. Audio Engineering Society, 2012.

 Scott Beveridge (2012). “A novel approach for time-continuous tension prediction in film soundtracks”, Proceedings of the 7th Audio Mostly Conference: A Conference on Interaction with Sound, Pages 55-60

 den Brinker, Bert, Ralph van Dinther, and Janto Skowronek. "Expressed music mood classification compared with valence and arousal ratings." EURASIP Journal on Audio, Speech, and Music Processing 2012.1 (2012): 1-14.

 Plewa, Magdalena, and Bozena Kostek. "Creating Mood Dictionary Associated with Music." Audio Engineering Society Convention 132. 2012.

Year 2011 : 1 citation

 J. McGowan, "Harmonious: An Emotion-Matching System for Intelligent Use of Players’ Own Music Libraries with Game Soundtracks", Harmonious Project Technical Report, Leeds Metropolitan University, UK.