CISUC

Emotion-Based Analysis and Classification of Music Lyrics

Authors

Abstract

Music emotion recognition (MER) is gaining significant attention in the Music Information Retrieval (MIR) scientific community. In fact, searching for music by emotion is one of the main criteria users employ. Real-world music databases from sites like AllMusic or Last.fm grow larger every day, which requires a tremendous amount of manual work to keep them updated. Unfortunately, manually annotating music with emotion tags is normally a subjective, expensive, and time-consuming task, a limitation that automatic systems can overcome. Besides automatic music classification, MER has several applications in emotion-based retrieval tools such as music recommendation and automatic playlist generation. MER is also used in areas such as game development, cinema, advertising, and health. Most early automatic MER systems were based on audio content analysis; later, researchers started combining audio and lyrics, leading to bimodal MER systems with improved accuracy.

This research addresses the role of lyrics in the music emotion recognition process. Feature extraction is one of the key stages of Lyrics Music Emotion Recognition (LMER). We follow a learning-based approach using several state-of-the-art features complemented by novel stylistic, structural, and semantic features. To evaluate our approach, we created a ground truth dataset containing 180 song lyrics annotated according to Russell's emotion model. We conduct four types of experiments: regression, classification by quadrant, classification by arousal categories, and classification by valence categories. To validate these systems, we created a validation dataset composed of 771 song lyrics. To study the relation between features and emotions (quadrants), we performed experiments to identify the features that best describe and discriminate each quadrant. We also conducted experiments to identify interpretable rules that show the relations between features and emotions, and among the features themselves.

This research also addresses the role of lyrics in music emotion variation detection. To accomplish this task, we created a system that detects the predominant emotion expressed by each sentence (verse) of the lyrics, using Russell's emotion model with four sets of emotions (quadrants). To detect the predominant emotion in each verse, we proposed a novel keyword-based approach, which receives a sentence (verse) and classifies it into the appropriate quadrant. To tune the system parameters, we created a 129-sentence training dataset from 68 songs. To validate the system, we created a separate ground truth dataset containing 239 sentences (verses) from 44 songs.

Finally, we measured the effectiveness of the lyric features in a bimodal (audio and lyrics) analysis context. We used almost all state-of-the-art features we are aware of for both dimensions, as well as the new lyric features we propose.
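
To make the two central ideas in the abstract concrete, the Python sketch below shows how a (valence, arousal) pair maps onto Russell's four quadrants, and how a toy keyword-based classifier might assign a verse to a quadrant. This is a minimal illustration, not the thesis's actual implementation: the keyword lexicon here is hypothetical, whereas the thesis tunes its keyword sets and parameters on the 129-sentence training dataset described above.

    # Russell's circumplex model: valence (negative..positive) on the x-axis,
    # arousal (low..high) on the y-axis, yielding four quadrants.
    def russell_quadrant(valence: float, arousal: float) -> int:
        """Map a (valence, arousal) pair to a Russell quadrant.
        Q1: positive valence, high arousal (e.g., happy)
        Q2: negative valence, high arousal (e.g., angry)
        Q3: negative valence, low arousal  (e.g., sad)
        Q4: positive valence, low arousal  (e.g., calm)
        """
        if arousal >= 0:
            return 1 if valence >= 0 else 2
        return 3 if valence < 0 else 4

    # Hypothetical per-quadrant keyword lexicon (illustrative only).
    LEXICON = {
        1: {"joy", "love", "dance", "shine"},
        2: {"hate", "rage", "fight", "scream"},
        3: {"lonely", "tears", "grave", "goodbye"},
        4: {"calm", "gentle", "dream", "quiet"},
    }

    def classify_verse(verse: str) -> int:
        """Assign a verse to the quadrant whose keywords it matches most.
        Ties are broken by quadrant order; a real system would weight
        keywords rather than count raw matches."""
        words = verse.lower().split()
        scores = {q: sum(w in kws for w in words) for q, kws in LEXICON.items()}
        return max(scores, key=scores.get)

    print(russell_quadrant(0.7, 0.5))              # -> 1
    print(classify_verse("my lonely tears fall"))  # -> 3
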

Keywords

Detection of Emotions in Music Lyrics, Natural Language Processing, Text Mining, Artificial Intelligence, Detection of Emotions in Music

Subject

Detection of Emotions in Music Lyrics

PhD Thesis

Emotion-Based Analysis and Classification of Music Lyrics, May 2017
