Chapters:
1. Intro
2. Join the community!
3. Automatic music emotion recognition
4. What do we feed to the network?
5. Problem
6. Idea
7. What are mid-level perceptual features?
8. Datasets
9. Three architectures to predict emotion
10. Architecture details
11. Experimental results
12. How can we explain the results?
13. Weights of linear layer
14. Song explainability
15. Possible applications
16. What have we learnt?
17. Join the discussion!
Description:
Explore a deep learning architecture designed to recognize moods in songs and explain its predictions in this 38-minute video analysis. Break down the research paper "Towards Explainable Music Emotion Recognition: The Route via Mid-Level Features," published at ISMIR 2019 by researchers at Johannes Kepler University Linz. Delve into the problem of automatic music emotion recognition, the use of mid-level perceptual features, and three proposed architectures for predicting emotion. Examine experimental results, methods for explaining the model's decisions, and potential applications of this technology. Gain insights into VGG networks, and join the discussion in The Sound of AI community to explore this intersection of artificial intelligence and music further.
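
The explainability angle discussed in the video rests on a two-stage pipeline: a VGG-style convolutional network predicts a small set of mid-level perceptual features from a spectrogram, and a single linear layer maps those features to emotion ratings, so each emotion prediction can be read as a weighted sum of interpretable features. The Python sketch below illustrates that idea; the tiny encoder, layer sizes, and feature/emotion names are illustrative assumptions, not the authors' exact configuration.

# Minimal sketch of the "mid-level features as a bottleneck" idea from the
# paper discussed in the video. The small encoder here is a stand-in for the
# authors' VGG-style audio network; all sizes and names are assumptions.
import torch
import torch.nn as nn

MID_LEVEL_FEATURES = [  # 7 perceptual features (illustrative naming)
    "melodiousness", "articulation", "rhythmic_complexity",
    "rhythmic_stability", "dissonance", "tonal_stability", "modality",
]
NUM_EMOTIONS = 8  # e.g. valence, energy, tension, anger, ... (assumption)

class MidLevelToEmotion(nn.Module):
    def __init__(self, n_mid=len(MID_LEVEL_FEATURES), n_emotions=NUM_EMOTIONS):
        super().__init__()
        # Stand-in for the VGG-style spectrogram encoder:
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.to_mid = nn.Linear(16, n_mid)              # predicts mid-level features
        self.to_emotion = nn.Linear(n_mid, n_emotions)  # single linear layer

    def forward(self, spectrogram):
        mid = self.to_mid(self.encoder(spectrogram))
        return self.to_emotion(mid), mid

model = MidLevelToEmotion()
emotions, mid = model(torch.randn(1, 1, 128, 256))  # fake mel spectrogram

# Explainability: each emotion is a weighted sum of mid-level features, so
# the weight matrix of the final linear layer is itself the explanation.
for e in range(NUM_EMOTIONS):
    weights = model.to_emotion.weight[e]
    top = weights.abs().argmax().item()
    print(f"emotion {e}: most influential feature = {MID_LEVEL_FEATURES[top]}")

Because the last layer is linear, no post-hoc attribution method is needed: inspecting its weights gives a global explanation, and multiplying a song's predicted mid-level features by those weights gives a per-song explanation, which is what the "Weights of linear layer" and "Song explainability" chapters cover.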

This AI Recognises Moods in Songs and Explains How It Does It

Valerio Velardo - The Sound of AI