Bi-Modal Music Emotion Recognition: Novel Lyrical Features and Dataset
Authors
Abstract
This research addresses the role of audio and lyrics in music emotion recognition. Each dimension (e.g., audio) was studied separately, as well as in the context of bimodal analysis. We perform classification by quadrant categories (4 classes). Our approach is based on several state-of-the-art audio and lyric features, as well as novel lyric features. To evaluate our approach, we created a ground-truth dataset. The main conclusions show that, unlike in most similar works, lyrics performed better than audio. This suggests the importance of the newly proposed lyric features and that bimodal analysis is consistently better than either dimension alone.
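The comparison described in the abstract can be illustrated with a minimal sketch of feature-level (early) fusion for quadrant classification. This is not the paper's actual pipeline; the feature matrices, label encoding, and SVM classifier below are assumptions for illustration only.

```python
# Hypothetical sketch: compare audio-only, lyrics-only, and bimodal
# (concatenated) features on 4-class quadrant classification.
# X_audio, X_lyrics, and y are placeholders, not the paper's data.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_songs = 200
X_audio = rng.normal(size=(n_songs, 40))   # placeholder audio features
X_lyrics = rng.normal(size=(n_songs, 30))  # placeholder lyric features
y = rng.integers(1, 5, size=n_songs)       # quadrant labels Q1..Q4

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))

# Evaluate each modality alone, then the bimodal feature set.
for name, X in [("audio", X_audio),
                ("lyrics", X_lyrics),
                ("bimodal", np.hstack([X_audio, X_lyrics]))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy = {scores.mean():.3f}")
```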
Subject
Music Emotion Recognition, Music Information Retrieval, Natural Language Processing
Related Project
MOODetector: A System for Mood-based Classification and Retrieval of Audio Music
Conference
9th International Workshop on Music and Machine Learning – MML'2016 – in conjunction with the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases – ECML/PKDD 2016, October 2016
PDF File
Cited by
Year 2017: 1 citation
Çano, E., Morisio, M., "MoodyLyrics: A Sentiment Annotated Lyrics Dataset", International Conference on Intelligent Systems, Metaheuristics & Swarm Intelligence, Hong Kong, March 2017.