
# SAM-CD

PyTorch code for Adapting Segment Anything Model for Change Detection in HR Remote Sensing Images [paper]


SAM-CD adopts FastSAM as the visual encoder, with some modifications.

## How to Use

1. Installation

2. Dataset preparation

   - Split the data into training, validation and test sets and organize them as follows:

     ```
     YOUR_DATA_DIR
     ├── ...
     ├── train
     │   ├── A
     │   ├── B
     │   └── label
     ├── val
     │   ├── A
     │   ├── B
     │   └── label
     └── test
         ├── A
         ├── B
         └── label
     ```

   - In line 13 of SAM-CD/datasets/Levir_CD.py (or the other data-loading .py files), change /YOUR_DATA_ROOT/ to your local dataset directory.
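Before training, it can help to verify that the dataset tree matches the layout above. The following is a minimal, self-contained sketch (not part of the SAM-CD code): `check_cd_layout` and its helpers are hypothetical names, and the demo builds a throwaway directory purely for illustration.

```python
import os
import tempfile

SPLITS = ("train", "val", "test")
SUBDIRS = ("A", "B", "label")  # pre-change images, post-change images, GT masks

def check_cd_layout(data_dir):
    """Return a list of problems found in a change-detection dataset tree."""
    problems = []
    for split in SPLITS:
        for sub in SUBDIRS:
            d = os.path.join(data_dir, split, sub)
            if not os.path.isdir(d):
                problems.append("missing directory: " + d)
        a_dir = os.path.join(data_dir, split, "A")
        b_dir = os.path.join(data_dir, split, "B")
        if os.path.isdir(a_dir) and os.path.isdir(b_dir):
            if sorted(os.listdir(a_dir)) != sorted(os.listdir(b_dir)):
                problems.append(split + ": file names in A and B do not match")
    return problems

# Demo on a throwaway directory that follows the expected layout.
with tempfile.TemporaryDirectory() as root:
    for split in SPLITS:
        for sub in SUBDIRS:
            os.makedirs(os.path.join(root, split, sub))
        open(os.path.join(root, split, "A", "0001.png"), "w").close()
        open(os.path.join(root, split, "B", "0001.png"), "w").close()
    demo_problems = check_cd_layout(root)

print(demo_problems)  # -> []
```

An empty result means every split contains A, B and label, and the image pairs in A and B line up by file name.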
3. Training

   - Classic CD training: `python train_CD.py`

   - Training CD with the proposed task-agnostic semantic learning: `python train_SAM_CD.py`

   - Lines 16-45 contain the major training arguments; change them to load different datasets and models or to adjust the training settings.
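The training settings live as in-file constants in those scripts rather than command-line flags. As a rough sketch of the kinds of settings involved, a CLI equivalent might look like the following; every argument name here is hypothetical, not taken from train_CD.py:

```python
import argparse

# Hypothetical argument names modeled on typical CD training scripts;
# the real settings are edited directly in lines 16-45 of train_CD.py.
def build_arg_parser():
    p = argparse.ArgumentParser(description="Change-detection training (sketch)")
    p.add_argument("--data_name", default="Levir_CD", help="dataset loader to use")
    p.add_argument("--net_name", default="SAM_CD", help="model architecture")
    p.add_argument("--batch_size", type=int, default=8)
    p.add_argument("--lr", type=float, default=1e-4)
    p.add_argument("--epochs", type=int, default=50)
    p.add_argument("--chkpt_dir", default="./checkpoints")
    return p

# Override two defaults, as one would by editing the constants in the script.
args = build_arg_parser().parse_args(["--batch_size", "4", "--lr", "5e-5"])
print(args.data_name, args.batch_size, args.lr)
```

The point is only to show which knobs (dataset, model, batch size, learning rate, epochs, checkpoint dir) a run is parameterized by.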

4. Inference and evaluation

   - Inference on the test set: set `chkpt_path` and run `python pred_CD.py`

   - Accuracy evaluation: set the prediction dir and the GT dir, and run `python eval_CD.py`
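The exact metrics reported by eval_CD.py are not listed here; as a reference, the standard scores for binary change maps (precision, recall, F1, IoU over the "changed" class) can be computed as in this small sketch, where `binary_cd_scores` is a hypothetical helper:

```python
def binary_cd_scores(pred, gt):
    """Precision, recall, F1 and IoU for binary change maps.

    pred and gt are flat sequences of 0/1 values (1 = changed pixel).
    """
    tp = sum(1 for p, g in zip(pred, gt) if p == 1 and g == 1)
    fp = sum(1 for p, g in zip(pred, gt) if p == 1 and g == 0)
    fn = sum(1 for p, g in zip(pred, gt) if p == 0 and g == 1)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    iou = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return {"precision": precision, "recall": recall, "F1": f1, "IoU": iou}

# Toy example: 2 true positives, 1 false positive, 1 false negative -> IoU 0.5.
scores = binary_cd_scores([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
print(scores)
```

In practice the prediction and GT masks are read from the two directories, binarized, flattened, and accumulated over all images before computing the scores.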

(More details to be added...)

## Dataset Download

In the following, we summarize links to some frequently used CD datasets:

## Pretrained Models

To make accuracy evaluation easy, we provide the trained weights of SAM-CD:

- Drive
- Baidu (pswd: SMCD)

## Cite SAM-CD

If you find this work useful or interesting, please consider citing it with the following BibTeX entry:

```bibtex
@article{ding2024adapting,
  title={Adapting Segment Anything Model for Change Detection in HR Remote Sensing Images},
  author={Ding, Lei and Zhu, Kun and Peng, Daifeng and Tang, Hao and Yang, Kuiwu and Bruzzone, Lorenzo},
  journal={IEEE Transactions on Geoscience and Remote Sensing},
  year={2024},
  volume={62},
  pages={1-11},
  doi={10.1109/TGRS.2024.3368168}
}
```