Automatic Manipulation of Music to Express Desired Emotions

Authors

Abstract

We are developing a computational system that produces music expressing desired emotions. This paper focuses on the automatic transformation of two emotional dimensions of music (valence and arousal) by changing musical features: tempo, pitch register, musical scales, instruments, and articulation. The transformation is supported by two regression models, each with weighted mappings between an emotional dimension and the music features. We also present two algorithms used to sequence segments.
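
To make the weighted-mapping idea concrete, here is a minimal Python sketch of how one such regression model could combine normalized music features into a prediction for a single emotional dimension. The feature names, signs, and weight values below are illustrative assumptions, not the paper's actual parameters.

    # Minimal sketch of one regression model: an emotional dimension is
    # predicted as a weighted sum of music features normalized to [-1, 1].
    # Feature names and weights are assumed for illustration only.
    VALENCE_WEIGHTS = {
        "tempo": 0.4,                  # assumed: faster tempo raises valence
        "pitch_register": 0.3,         # assumed: higher register raises valence
        "scale_majorness": 0.2,        # assumed: major scale raises valence
        "instrument_brightness": 0.05,
        "staccato": 0.05,
    }

    def predict_dimension(features, weights):
        """Predict one emotional dimension (e.g. valence) as a weighted sum."""
        return sum(weights[name] * features[name] for name in weights)

    # Example: a fast, high-register, major-scale segment.
    segment = {
        "tempo": 0.8,
        "pitch_register": 0.5,
        "scale_majorness": 1.0,
        "instrument_brightness": 0.2,
        "staccato": -0.3,
    }
    print(predict_dimension(segment, VALENCE_WEIGHTS))

A second model of the same form, with its own weights, would cover arousal.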

We conducted an experiment in which 37 listeners were asked to label, online, the two emotional dimensions of 132 musical segments. Data from this experiment were used to test the effectiveness of the transformation algorithms and to update the feature weights of the regression models. Tempo and pitch register proved relevant to both valence and arousal. Musical scales and instruments were also relevant to both emotional dimensions, but with a lower impact. Staccato articulation influenced only valence.
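
The abstract does not specify how the listener ratings were used to update the weights; a hedged sketch of one plausible procedure, ordinary least squares over the labeled segments, is given below. The data values and the use of numpy's lstsq are assumptions for illustration.

    # Hedged sketch: re-fit the per-feature weights of one regression model
    # from listener ratings. Ordinary least squares is assumed here; the
    # paper's actual update procedure is not described in the abstract.
    import numpy as np

    # Hypothetical data: rows are musical segments, columns the five features
    # (tempo, pitch register, scale, instruments, articulation); y holds the
    # mean listener rating for one dimension (e.g. arousal). A real fit would
    # use all 132 labeled segments.
    X = np.array([
        [ 0.8,  0.5,  1.0,  0.2, -0.3],
        [-0.4, -0.2, -1.0, -0.1,  0.6],
        [ 0.1,  0.7,  1.0,  0.4,  0.0],
        [ 0.9, -0.3, -1.0,  0.0,  0.8],
        [-0.7,  0.1,  1.0, -0.5, -0.2],
        [ 0.3, -0.6, -1.0,  0.3,  0.4],
    ])
    y = np.array([0.7, -0.5, 0.3, 0.2, -0.1, -0.2])

    weights, *_ = np.linalg.lstsq(X, y, rcond=None)
    print(weights)  # updated per-feature weights for this dimension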

Keywords

Music Computing

Subject

Music Computing

Conference

Sound and Music Computing, July 2009

Cited by

Year 2013 : 2 citations

Kirke, A., Miranda, E., Nasuto, S. "Artificial Affective Listening towards a Machine Learning Tool for Sound-Based Emotion Therapy and Control". Proceedings of the 2013 Sound and Music Computing Conference.

Kirke, A., Miranda, E., Nasuto, S. "Learning to Make Feelings: Expressive Performance as a Part of a Machine Learning Tool for Sound-Based Emotion Control". From Sounds to Music and Emotions, pp. 490-499.

Year 2011 : 1 citation

Kirke, A. "Application of Intermediate Multi-Agent Systems to Integrated Algorithmic Composition and Expressive Performance of Music". PhD thesis, University of Plymouth.

Year 2010 : 1 citation

Livingstone, S., Muhlberger, R., Brown, A., Thompson, W. "Changing musical emotion: A computational rule system for modifying score and performance". Computer Music Journal, 34(1):41-64.