Drone Based Object Detection and Segmentation Model for Smart Farming.

sudoshivam/drone-vegetation-mapping

Folders and files

NameName
Last commit message
Last commit date

Latest commit

 

History

6 Commits
 
 
 
 
 
 
 
 
 
 
 
 

Repository files navigation

Drone-Based Semantic Segmentation of Vegetation

This repository contains the implementation and models for the Semantic Segmentation of Vegetation (Plants, crops, trees etc.) in Drone Images. The goal of this project is to accurately classify and segment vegetation in drone-captured images using advanced deep learning techniques. In this README, you'll find an overview of the project, details about the models used, sample prediction images, and instructions for running the code in a Conda environment or Kaggle notebook.

Table of Contents

  • Introduction
  • Architectures
  • Sample Predictions
  • Getting Started

Introduction

Semantic segmentation plays a pivotal role in understanding the distribution of vegetation in drone images. This project leverages deep learning techniques to achieve accurate segmentation, enabling applications like land cover monitoring, precision agriculture, and environmental assessment.

Architectures

U-Net Architecture

The U-Net architecture has proven to be effective in semantic segmentation tasks. It consists of a contracting path to capture context and a symmetric expanding path to generate precise segmentations. The model's performance is as follows:

  • Training Accuracy: 91.22%
  • Validation Accuracy: 92.64%
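
The contracting and expanding paths can be illustrated with a toy NumPy sketch (shape bookkeeping only, not the repository's actual model): pooling halves the spatial resolution on the way down, and the decoder upsamples and concatenates the matching encoder feature map (the skip connection) on the way back up.

```python
import numpy as np

def down(x):
    # toy "encoder block": 2x2 max pooling halves height and width
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).max(axis=(1, 3))

def up(x):
    # toy "decoder block": nearest-neighbour upsampling doubles height and width
    return x.repeat(2, axis=0).repeat(2, axis=1)

x = np.random.rand(64, 64, 3)            # input "image"
e1 = down(x)                              # contracting path: 32x32
e2 = down(e1)                             # bottleneck: 16x16
d1 = np.concatenate([up(e2), e1], -1)     # expand + skip connection from e1
d2 = np.concatenate([up(d1), x], -1)      # expand + skip connection from input
print(d2.shape)                           # (64, 64, 9): input resolution restored
```

The skip connections are what let U-Net produce sharp boundaries: fine spatial detail from the contracting path is concatenated back in during expansion.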

U-Net with Pretrained MobileNetV2

This model uses a pre-trained MobileNetV2 as the U-Net encoder. The pretrained backbone improves feature extraction and typically yields better segmentation accuracy with less training. The model's performance is as follows:

  • Training Accuracy: 92.08%
  • Validation Accuracy: 92.86%
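
As a sketch of how a pretrained MobileNetV2 encoder can be wired into a U-Net-style decoder in Keras (an illustrative reconstruction, not the exact code from this repository; the skip layer names are the standard Keras MobileNetV2 layer names, and `weights=None` is used here only to avoid a download, in practice you would pass `weights="imagenet"`):

```python
import tensorflow as tf
from tensorflow.keras import layers

# MobileNetV2 backbone as the contracting path of the U-Net.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights=None)
base.trainable = False  # freeze the pretrained encoder

skip_names = ["block_1_expand_relu",    # 112x112 feature map
              "block_3_expand_relu",    # 56x56
              "block_6_expand_relu",    # 28x28
              "block_13_expand_relu"]   # 14x14
skips = [base.get_layer(n).output for n in skip_names]
x = base.get_layer("block_16_project").output  # 7x7 bottleneck

# Expanding path: upsample, concatenate the matching skip, refine.
for skip in reversed(skips):
    x = layers.Conv2DTranspose(skip.shape[-1], 3, strides=2, padding="same")(x)
    x = layers.Concatenate()([x, skip])
    x = layers.Conv2D(skip.shape[-1], 3, padding="same", activation="relu")(x)

# Final upsample to input resolution; one channel for a binary vegetation mask.
out = layers.Conv2DTranspose(1, 3, strides=2, padding="same",
                             activation="sigmoid")(x)
model = tf.keras.Model(base.input, out)
```

Freezing the encoder keeps the ImageNet features intact while only the decoder is trained; unfreezing it later for fine-tuning at a low learning rate is a common refinement.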

Sample Predictions

Here are some sample predictions generated by the models:

(Sample images: drone views alongside their predicted segmentation masks.)

Getting Started

To run the code in this repository, follow these steps:

Setup Conda Environment

  1. Install Miniconda or Anaconda.
  2. Create a new Conda environment using the provided environment.yml file:

         conda env create -f environment.yml

  3. Activate the created environment:

         conda activate segmentation-env
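
For reference, an environment.yml for this kind of project might look like the sketch below. The repository ships its own file; the package list here is illustrative only, and only the environment name segmentation-env is taken from the activation command above.

```yaml
# Hypothetical sketch -- use the environment.yml provided in the repository.
name: segmentation-env
channels:
  - conda-forge
dependencies:
  - python=3.10
  - tensorflow
  - numpy
  - matplotlib
  - jupyter
```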

Running the Code

  1. Open the Jupyter Notebook files in the activated environment to explore the model training and evaluation pipeline. Alternatively, you can import the code on Kaggle and run it directly.
  2. Make sure to adjust paths and configurations as needed to fit your environment.
  3. Execute the notebook cells to train and evaluate the models.
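
The accuracy figures reported above are pixel-wise; as an illustration (not code from the notebooks), evaluating a segmentation model reduces to comparing the predicted and ground-truth masks pixel by pixel:

```python
import numpy as np

def pixel_accuracy(pred_mask, true_mask):
    """Fraction of pixels where the predicted class matches the ground truth."""
    return float((pred_mask == true_mask).mean())

pred = np.array([[1, 0], [1, 1]])  # toy 2x2 predicted mask
true = np.array([[1, 0], [0, 1]])  # toy 2x2 ground-truth mask
print(pixel_accuracy(pred, true))  # 0.75 (3 of 4 pixels agree)
```

Note that pixel accuracy can look optimistic when one class (e.g. background) dominates; metrics such as mean IoU are often reported alongside it.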

Feel free to reach out if you have any questions or suggestions!


Disclaimer: This project is for educational and research purposes. The models' performance and predictions may vary based on dataset quality, preprocessing, and hyperparameters. The dataset can be made available upon request.


You can find more models on my Kaggle profile.
