Note
If you use our work in your research, cite us:
Simonetta F., Certo F., Ntalampiras S. "Joint Learning of Emotions in Music and Generalized Sounds", AudioMostly 2024, Milan, Italy. DOI: https://doi.org/10.1145/3678299.3678324
This project was developed using pdm and the Intel MKL libraries. To set up an identical environment, proceed as follows:
- Install pdm (e.g. using pipx)
- Enter the project directory and run `pdm sync`
- Download the IADS-E, IADS-2, and PMEmo datasets and extract each one into a separate directory
- Download OpenSmile 3.0.1
- Modify `music_sound_emotions/settings.py` to match your paths (a sketch of the expected entries is given after this list):
  - the path to the OpenSmile root directory
  - the paths to the dataset root directories
- From the project root, run (the full command sequence is also sketched after this list):
  - `pdm features` to extract the features
  - `pdm experiment` to reproduce our experiments
  - `pdm parse experiment*.log` to parse the logs
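
As a rough illustration, the entries in `music_sound_emotions/settings.py` might look like the sketch below; the variable names and example paths are hypothetical placeholders, so keep the names actually used in the file and only adjust the paths to your machine.

```python
# music_sound_emotions/settings.py -- hypothetical sketch, not the actual file:
# the real module defines its own names, which should be kept unchanged.
from pathlib import Path

# Root directory of the OpenSmile 3.0.1 installation
OPENSMILE_ROOT = Path("/opt/opensmile-3.0.1")

# Root directories of the extracted datasets (one directory per dataset)
IADS_E_ROOT = Path("/data/IADS-E")
IADS_2_ROOT = Path("/data/IADS-2")
PMEMO_ROOT = Path("/data/PMEmo")
```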
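
For reference, the whole workflow might look as follows in a shell session; it assumes pipx is available and that `settings.py` has already been edited as described above.

```bash
# Install pdm (any installation method works; pipx is shown here)
pipx install pdm

# From the project root: recreate the locked environment
pdm sync

# Extract the features, reproduce the experiments, and parse the logs
pdm features
pdm experiment
pdm parse experiment*.log
```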