Sounds of Nature

About The Project

Welcome to the Sounds of Nature GitHub repository! This project aims to use supervised and unsupervised learning methods to develop a processing chain in Python for the analysis of acoustic landscapes and animal sound emissions. The project has two main components: characterization of acoustic landscapes and species recognition. We hope this repository will be a useful resource for researchers and practitioners interested in integrative monitoring of the environment and biodiversity using soundscapes.

Useful resources

Here are some useful resources related to this project:

  • Extended abstract - This document provides an in-depth overview of the project, including methodology, results, and conclusions. It's an essential resource for those interested in the details of our research.
  • Presentation slides - We created a set of slides summarizing the key aspects of our project. These slides are an excellent resource for those who want a quick and easy way to understand the project's main points.

We hope these resources will be helpful to those interested in learning more about our project.


Repository description

This repository provides code and data for the different project tasks: notebooks to process audio data, notebooks to classify it, and example datasets. The sections below describe how the different files are used.

Getting Started

The code has been developed to be as flexible as possible; however, you may need to adapt it to your specific use case and situation.

Prerequisites

All notebooks were developed in Google Colab. Running them in that environment is the easiest way to use the code.

Usage

Creating Acoustic Representations

To create acoustic representations, follow these steps:

  1. Ensure that all audio recordings are stored in a hierarchy of folders as follows: label/site/audios.
  2. Use the create_indices_dataset.ipynb notebook for the acoustic indices representation or the create_latent_dataset.ipynb notebook for the latent representation (see the configuration sketch below).
    • Specify the path where your data is located (e.g., parent_dir)
    • Specify the label directories (e.g., label_dirs)
    • (latent representation only) Specify the size of the latent space (e.g., 512 or 6114) and the type of spectrogram you want to work with (e.g., linear, mel128 or mel256)
    • Specify the path to save your file

You can find example outputs here: indices.csv (acoustic indices representation) or latent_space_mel128_512.csv (latent representation)
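As a rough sketch of the configuration step, the cell below shows how these parameters might be filled in. The variable names follow the examples above, but the paths and label values are placeholders and the exact cell layout in the notebooks may differ.

```python
# Hypothetical configuration cell for create_latent_dataset.ipynb
# (names follow the examples above; adapt them to the actual notebook).
#
# Expected audio layout:
#   parent_dir/
#     <label>/<site>/<audio files>

parent_dir = "/content/drive/MyDrive/sounds-of-nature/audio"   # root of the label/site/audios hierarchy
label_dirs = ["BE", "BL", "RE", "RL"]                          # one folder per label (example values)

# Latent representation only:
latent_size = 512            # size of the latent space (e.g., 512)
spectrogram_type = "mel128"  # "linear", "mel128" or "mel256"

# Where the resulting representation is written:
output_path = "/content/drive/MyDrive/sounds-of-nature/latent_space_mel128_512.csv"
```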

Acoustic Landscapes Recognition

To classify acoustic landscapes, follow these steps:

  1. Make sure to have a CSV file that describes the acoustic environment you are working on.
  2. Use the mlp_classification.ipynb notebook for classification with neural networks or stats_regression.ipynb for classification with statistical machine learning (a minimal sketch follows this list).
    • Specify the path to your data (indices or latent representation)
    • Specify the labels you want to use for your classification (e.g., labels = ["BE-BL-RE-RL", "R-B", "L-E"])
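For orientation, here is a minimal standalone sketch of the same classification task using scikit-learn on one of the representation CSVs. The column name "label" and the CSV layout are assumptions; the actual notebooks may structure the data and the model differently.

```python
# Minimal sketch: classify acoustic landscapes from an indices/latent CSV.
# The "label" column name is an assumption about the CSV layout.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

df = pd.read_csv("indices.csv")           # acoustic indices or latent representation
X = df.drop(columns=["label"]).values     # one row per recording, one column per feature
y = df["label"].values                    # landscape label for each recording

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```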

Biodiversity Data Estimation

To estimate biodiversity data, follow these steps:

  1. Make sure to have the regression dataset (e.g., regression.csv), the acoustic representation (e.g., latent_space_mel128_512.csv) and a joint table (e.g., reg-ind_joint_table.csv). The joint table makes it easier to match the data and the predictions.
  2. Use mlp_regression.ipynb or stats_regression.ipynb, depending on whether you want to predict with neural networks or statistical machine learning (see the sketch below).
    • Specify the path to your data.

Feature Details - This document provides a detailed explanation of the features used in the regression task.
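The sketch below illustrates the overall flow: join the acoustic representation to the regression targets via the joint table, then fit a regressor. The join keys (recording_id, site_id), the feature column prefix and the target column are purely illustrative assumptions; consult the notebooks and the Feature Details document for the real column names.

```python
# Minimal sketch: estimate a biodiversity variable from an acoustic representation.
# File names follow the examples above; join keys and column names are assumptions.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import r2_score

features = pd.read_csv("latent_space_mel128_512.csv")   # acoustic representation
targets = pd.read_csv("regression.csv")                 # biodiversity variables
joint = pd.read_csv("reg-ind_joint_table.csv")          # matches feature rows to target rows

# Hypothetical keys: the joint table links a recording to a survey site.
data = (features.merge(joint, on="recording_id")
                .merge(targets, on="site_id"))

X = data.filter(like="latent_").values    # assumed feature column prefix
y = data["species_richness"].values       # assumed target column

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
reg = MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=1000, random_state=0)
reg.fit(X_train, y_train)
print("R^2 on held-out data:", r2_score(y_test, reg.predict(X_test)))
```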

Bird Species Recognition

This task uses spectrograms to classify bird species. To get started, follow these steps:

  1. Ensure that you have a folder with all the spectrograms already split into train/test/val.
  2. Open the species_classification.ipynb notebook to begin classifying the spectrograms.

The species_classification.ipynb notebook provides detailed instructions on how to prepare your data, build and train the model, and evaluate its performance. You will need to have Python and the required dependencies installed to run the notebook.
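For a quick idea of what such a pipeline looks like, here is a small Keras sketch that trains a classifier on spectrogram images arranged as spectrograms/{train,val,test}/<species>/*.png. The folder path and the tiny CNN are illustrative assumptions, not necessarily the architecture used in species_classification.ipynb.

```python
# Minimal sketch: bird-species classification from spectrogram images.
# Assumes images organised as spectrograms/{train,val}/<species>/*.png.
import tensorflow as tf

IMG_SIZE = (128, 128)
train_ds = tf.keras.utils.image_dataset_from_directory("spectrograms/train", image_size=IMG_SIZE)
val_ds = tf.keras.utils.image_dataset_from_directory("spectrograms/val", image_size=IMG_SIZE)

num_classes = len(train_ds.class_names)   # one class per species folder
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(num_classes, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```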

Contact

Amar Meddahi - amar [dot] meddahi1 [at] gmail [dot] com

Acknowledgments

Sounds of Nature communicates with and/or references several external projects. We thank all their contributors and maintainers!
