Commit: Bump version to v2.1.0

CVHub520 committed Nov 24, 2023
1 parent 4229ae3 commit 06c9329
Showing 7 changed files with 247 additions and 43 deletions.
25 changes: 14 additions & 11 deletions README.md
@@ -74,17 +74,19 @@
 ## 🥳 What's New [⏏️](#📄-table-of-contents)
 
 - Nov. 2023:
-  - 🤗🤗🤗 Release the latest version [2.0.0](https://github.com/CVHub520/X-AnyLabeling/releases/tag/v2.0.0).
-  - 🔥🔥🔥 Added support for Grounding-SAM, combining [GroundingDINO](https://github.com/wenyi5608/GroundingDINO) with [HQ-SAM](https://github.com/SysCV/sam-hq) to achieve SOTA zero-shot high-quality predictions!
-  - 🚀🚀🚀 Enhanced support for the [HQ-SAM](https://github.com/SysCV/sam-hq) model to achieve high-quality mask predictions.
-  - 🙌🙌🙌 Support the [PersonAttribute](https://github.com/PaddlePaddle/PaddleClas/blob/release%2F2.5/docs/en/PULC/PULC_person_attribute_en.md) and [VehicleAttribute](https://github.com/PaddlePaddle/PaddleClas/blob/release%2F2.5/docs/en/PULC/PULC_vehicle_attribute_en.md) models for the multi-label classification task.
-  - 🆕🆕🆕 Introducing a new multi-label attribute annotation functionality.
-  - Release the latest version [1.1.0](https://github.com/CVHub520/X-AnyLabeling/releases/tag/v1.1.0).
+  - 🤗 Release the latest version [2.1.0](https://github.com/CVHub520/X-AnyLabeling/releases/tag/v2.1.0) 🤗
+  - Support the [InternImage](https://arxiv.org/abs/2211.05778) model (**CVPR'23**).
+  - Release version [2.0.0](https://github.com/CVHub520/X-AnyLabeling/releases/tag/v2.0.0).
+  - Added support for Grounding-SAM, combining [GroundingDINO](https://github.com/wenyi5608/GroundingDINO) with [HQ-SAM](https://github.com/SysCV/sam-hq) to achieve SOTA zero-shot high-quality predictions!
+  - Enhanced support for the [HQ-SAM](https://github.com/SysCV/sam-hq) model to achieve high-quality mask predictions.
+  - Support the [PersonAttribute](https://github.com/PaddlePaddle/PaddleClas/blob/release%2F2.5/docs/en/PULC/PULC_person_attribute_en.md) and [VehicleAttribute](https://github.com/PaddlePaddle/PaddleClas/blob/release%2F2.5/docs/en/PULC/PULC_vehicle_attribute_en.md) models for the multi-label classification task.
+  - Introducing a new multi-label attribute annotation functionality.
+  - Release version [1.1.0](https://github.com/CVHub520/X-AnyLabeling/releases/tag/v1.1.0).
   - Support pose estimation: [YOLOv8-Pose](https://github.com/ultralytics/ultralytics).
   - Support object-level tagging with yolov5_ram.
   - Add a new feature enabling batch labeling for arbitrary unknown categories based on Grounding-DINO.
 - Oct. 2023:
-  - Release the latest version [1.0.0](https://github.com/CVHub520/X-AnyLabeling/releases/tag/v1.0.0).
+  - Release version [1.0.0](https://github.com/CVHub520/X-AnyLabeling/releases/tag/v1.0.0).
   - Add a new feature for rotated boxes.
   - Support [YOLOv5-OBB](https://github.com/hukaixuan19970627/yolov5_obb) with [DroneVehicle](https://github.com/VisDrone/DroneVehicle) and [DOTA](https://captain-whu.github.io/DOTA/index.html)-v1.0/v1.5/v2.0 models.
   - SOTA Zero-Shot Object Detection - [GroundingDINO](https://github.com/wenyi5608/GroundingDINO) is released.
@@ -96,7 +98,7 @@
   - Add a new feature for small object detection using [SAHI](https://github.com/obss/sahi).
 - Sep. 2023:
   - Release version [0.2.4](https://github.com/CVHub520/X-AnyLabeling/releases/tag/v0.2.4).
-  - Release [EfficientViT-SAM](https://github.com/mit-han-lab/efficientvit) (**ICCV'23**), [SAM-Med2D](https://github.com/OpenGVLab/SAM-Med2D), [MedSAM](https://arxiv.org/abs/2304.12306) and YOLOv5-SAM.
+  - Release [EfficientViT-SAM](https://github.com/mit-han-lab/efficientvit) (**ICCV'23**),[SAM-Med2D](https://github.com/OpenGVLab/SAM-Med2D), [MedSAM](https://arxiv.org/abs/2304.12306) and YOLOv5-SAM.
   - Support [ByteTrack](https://github.com/ifzhang/ByteTrack) (**ECCV'22**) for the MOT task.
   - Support the [PP-OCRv4](https://github.com/PaddlePaddle/PaddleOCR) model.
   - Add `video` annotation feature.
@@ -162,7 +164,7 @@ For more details, please refer to [models_list](./docs/models_list.md).

 ### 🔜Quick Start
 
-Download and run the `GUI` version directly from [Release](https://github.com/CVHub520/X-AnyLabeling/releases/tag/v2.0.0) or [Baidu Disk](https://pan.baidu.com/s/1qX1Q36EfHEJiTkS7xri3_g?pwd=e5it).
+Download and run the `GUI` version directly from [Release](https://github.com/CVHub520/X-AnyLabeling/releases/tag/v2.1.0) or [Baidu Disk](https://pan.baidu.com/s/1XKflqsbb7C_7seL-ROD3bg?pwd=a64z).
 
 Note:
 - For macOS:
@@ -197,6 +199,8 @@ python anylabeling/app.py

 ### 📦Build executable
 
+> Note that these steps are not required for regular users; they are intended only for scenarios where you need to customize or redistribute the executable.
+
 ```bash
 #Windows-CPU
 bash scripts/build_executable.sh win-cpu
@@ -249,11 +253,10 @@ bash scripts/build_executable.sh linux-gpu
 - [FAQ](./docs/Q&A.md)
 - [Model Zoo](./docs/models_list.md)
 - [Loading Custom Models](./docs/custom_model.md)
-- [Video Toturial]()
 
 ### 🧷Hotkeys
 
-<details open>
+<details>
 
 <summary>Click to Expand/Collapse</summary>
 
27 changes: 15 additions & 12 deletions README_zh-CN.md
@@ -73,19 +73,21 @@
 ## 🥳 What's New [⏏️](#📄-目录)
 
 - Nov. 2023:
-  - 🤗🤗🤗 Release the latest version [2.0.0](https://github.com/CVHub520/X-AnyLabeling/releases/tag/v2.0.0).
-  - 🔥🔥🔥 Added support for Grounding-SAM, combining [GroundingDINO](https://github.com/wenyi5608/GroundingDINO) with [HQ-SAM](https://github.com/SysCV/sam-hq) to achieve SOTA zero-shot high-quality predictions!
-  - 🚀🚀🚀 Enhanced support for the [HQ-SAM](https://github.com/SysCV/sam-hq) model to achieve high-quality mask predictions.
-  - 🙌🙌🙌 Support the [PersonAttribute](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/PULC/PULC_person_attribute.md) and [VehicleAttribute](https://github.com/PaddlePaddle/PaddleClas/blob/release/2.5/docs/zh_CN/models/PULC/PULC_vehicle_attribute.md) models for the multi-label classification task.
-  - 🆕🆕🆕 Introducing a new multi-label attribute annotation functionality.
-  - Release the latest version [1.1.0](https://github.com/CVHub520/X-AnyLabeling/releases/tag/v1.1.0).
+  - 🤗 Release the latest version [2.1.0](https://github.com/CVHub520/X-AnyLabeling/releases/tag/v2.1.0) 🤗
+  - Support the [InternImage](https://arxiv.org/abs/2211.05778) model (**CVPR'23**).
+  - Release version [2.0.0](https://github.com/CVHub520/X-AnyLabeling/releases/tag/v2.0.0).
+  - Added support for Grounding-SAM, combining [GroundingDINO](https://github.com/wenyi5608/GroundingDINO) with [HQ-SAM](https://github.com/SysCV/sam-hq) to achieve SOTA zero-shot high-quality predictions!
+  - Enhanced support for the [HQ-SAM](https://github.com/SysCV/sam-hq) model to achieve high-quality mask predictions.
+  - Support the [PersonAttribute](https://github.com/PaddlePaddle/PaddleClas/blob/release%2F2.5/docs/en/PULC/PULC_person_attribute_en.md) and [VehicleAttribute](https://github.com/PaddlePaddle/PaddleClas/blob/release%2F2.5/docs/en/PULC/PULC_vehicle_attribute_en.md) models for the multi-label classification task.
+  - Introducing a new multi-label attribute annotation functionality.
+  - Release version [1.1.0](https://github.com/CVHub520/X-AnyLabeling/releases/tag/v1.1.0).
   - Support pose estimation: [YOLOv8-Pose](https://github.com/ultralytics/ultralytics).
   - Support object-level tagging with yolov5_ram.
   - Add a new feature enabling batch labeling for arbitrary unknown categories based on Grounding-DINO.
 - Oct. 2023:
-  - Release the latest version [1.0.0](https://github.com/CVHub520/X-AnyLabeling/releases/tag/v1.0.0).
+  - Release version [1.0.0](https://github.com/CVHub520/X-AnyLabeling/releases/tag/v1.0.0).
   - Add a new feature for rotated boxes.
   - Support [YOLOv5-OBB](https://github.com/hukaixuan19970627/yolov5_obb) with [DroneVehicle](https://github.com/VisDrone/DroneVehicle) and [DOTA](https://captain-whu.github.io/DOTA/index.html)-v1.0/v1.5/v2.0 models.
   - SOTA Zero-Shot Object Detection - [GroundingDINO](https://github.com/wenyi5608/GroundingDINO) is released.
   - SOTA Image Tagging Model - [Recognize Anything](https://github.com/xinyu1205/Tag2Text) is released.
   - Support **YOLOv5-SAM** and **YOLOv8-EfficientViT_SAM** union task.
@@ -95,7 +97,7 @@
   - Add a new feature for small object detection using [SAHI](https://github.com/obss/sahi).
 - Sep. 2023:
   - Release version [0.2.4](https://github.com/CVHub520/X-AnyLabeling/releases/tag/v0.2.4).
-  - Release [EfficientViT-SAM](https://github.com/mit-han-lab/efficientvit) (**ICCV'23**), [SAM-Med2D](https://github.com/OpenGVLab/SAM-Med2D), [MedSAM](https://arxiv.org/abs/2304.12306) and YOLOv5-SAM.
+  - Release [EfficientViT-SAM](https://github.com/mit-han-lab/efficientvit) (**ICCV'23**),[SAM-Med2D](https://github.com/OpenGVLab/SAM-Med2D), [MedSAM](https://arxiv.org/abs/2304.12306) and YOLOv5-SAM.
   - Support [ByteTrack](https://github.com/ifzhang/ByteTrack) (**ECCV'22**) for the MOT task.
   - Support the [PP-OCRv4](https://github.com/PaddlePaddle/PaddleOCR) model.
   - Add `video` annotation feature.
@@ -161,7 +163,7 @@

 ### 🔜Quick Start
 
-Download and run the `GUI` version directly from [Release](https://github.com/CVHub520/X-AnyLabeling/releases/tag/v2.0.0) or [Baidu Disk](https://pan.baidu.com/s/1qX1Q36EfHEJiTkS7xri3_g?pwd=e5it).
+Download and run the `GUI` version directly from [Release](https://github.com/CVHub520/X-AnyLabeling/releases/tag/v2.1.0) or [Baidu Disk](https://pan.baidu.com/s/1XKflqsbb7C_7seL-ROD3bg?pwd=a64z).
 
 Note:
 - For macOS:
@@ -196,6 +198,8 @@ python anylabeling/app.py

 ### 📦Build executable
 
+> Note that the following steps are not required; these build scripts are provided only for users who may need to customize and compile the software for distribution in specific environments.
+
 ```bash
 #Windows-CPU
 bash scripts/build_executable.sh win-cpu
@@ -247,11 +251,10 @@ bash scripts/build_executable.sh linux-gpu
 - [FAQ](./docs/Q&A.md)
 - [Model Zoo](./docs/models_list.md)
 - [Loading Custom Models](./docs/custom_model.md)
-- [Video Tutorial]()
 
 ### 🧷Hotkeys
 
-<details open>
+<details>
 
 <summary>Click to Expand/Collapse</summary>
 
2 changes: 1 addition & 1 deletion anylabeling/app_info.py
@@ -1,4 +1,4 @@
 __appname__ = "X-AnyLabeling"
 __appdescription__ = "Advanced Auto Labeling Solution with Added Features"
-__version__ = "2.0.0"
+__version__ = "2.1.0"
 __preferred_device__ = "CPU" # GPU or CPU
2 changes: 1 addition & 1 deletion assets/demo.json
@@ -1,5 +1,5 @@
 {
-  "version": "2.0.0",
+  "version": "2.1.0",
   "flags": {},
   "shapes": [
     {
45 changes: 33 additions & 12 deletions docs/Q&A.md
@@ -5,16 +5,16 @@
 ## Supported Features
 
 Q: **Which annotation styles does `X-AnyLabeling` currently support?**</br>
-A: It currently supports **polygon**, **rectangle**, **circle**, **line**, and **point**.</br>
+A: It currently supports **polygon**, **rectangle**, **rotated box**, **circle**, **line**, **line segment**, and **point**.</br>
 
-Q: **Which base models does `X-AnyLabeling` currently provide?**</br>
+Q: **Which built-in models does `X-AnyLabeling` currently provide?**</br>
 A: See the [models_list](./models_list.md) document for details.
 
 ## Label Conversion
 
 Q: **How do I convert finished `*.json` label files to mainstream formats such as `YOLO`/`VOC`/`COCO`?**</br>
-A: Given the compatibility of the current annotation framework, and for better extensibility and maintenance, some features are not integrated into the tool itself; they are split out as standalone components, which makes them easier to use and to modify.</br>
-For the tool's own custom format, you can convert easily with the `tools/label_converter.py` script in the project directory. Reference commands:
+A: X-AnyLabeling currently ships with built-in export for a number of mainstream data formats, including but not limited to YOLO/VOC/COCO.</br>
+To cover more diverse needs, a one-click conversion script, `tools/label_converter.py`, is additionally provided for the custom format.</br>
 
 ```bash
 #=============================================================================== Usage ================================================================================#
@@ -27,14 +27,15 @@ A: Given the compatibility of the current annotation framework, and for better extensibility and maintenance
 #
 #---------------------------------------------------------------------------- custom2yolo ----------------------------------------------------------------------------#
 # python tools/label_converter.py --task rectangle --src_path custom_folder --dst_path yolo_folder --classes xxx.txt --mode custom2yolo
 # python tools/label_converter.py --task polygon --src_path custom_folder --dst_path yolo_folder --classes xxx.txt --mode custom2yolo
 #
 #---------------------------------------------------------------------------- yolo2custom ----------------------------------------------------------------------------#
 # python tools/label_converter.py --task rectangle --src_path yolo_folder --img_path img_folder --classes xxx.txt --mode yolo2custom
 # python tools/label_converter.py --task polygon --src_path yolo_folder --img_path img_folder --classes xxx.txt --mode yolo2custom
 #
 #---------------------------------------------------------------------------- custom2coco ----------------------------------------------------------------------------#
 # python tools/label_converter.py --task rectangle --src_path custom_folder --dst_path coco_folder --classes xxx.txt --mode custom2coco
+# python tools/label_converter.py --task polygon --src_path custom_folder --dst_path coco_folder --classes xxx.txt --mode custom2coco
 #
 #---------------------------------------------------------------------------- coco2custom ----------------------------------------------------------------------------#
 # python tools/label_converter.py --task rectangle --src_path coco.json --img_path img_folder --mode coco2custom
@@ -56,8 +57,8 @@
 ```

 Note:
-1. Currently `--task` supports ['rectangle', 'polygon'], i.e., rectangles and polygons; the polygon task only provides conversion between `yolo` and `custom`, which is convenient for training detection and segmentation tasks. For other tasks, such as keypoints, you can adapt the script yourself.</br>
-2. The `*.txt` file passed to `--classes` is a user-predefined class file: one class per line, with class ids assigned from top to bottom. See `classes.txt` under the `assets` directory for reference.</br>
+1. Currently `--task` supports ['rectangle', 'polygon', 'rotation'], i.e., rectangles, polygons, and rotated boxes, making it easy to plug the labels into a training framework. For other tasks, such as keypoints, you can adapt the script yourself.</br>
+2. The `*.txt` file passed to `--classes` is a user-predefined class file: one class per line, with class ids assigned from top to bottom. See this file for reference: [classes.txt](../assets/classes.txt).</br>
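
For illustration, a minimal class file might be created as follows; this is a sketch, the class names below are placeholders, and the authoritative example remains [classes.txt](../assets/classes.txt):

```bash
# Hypothetical classes.txt: one class name per line; the class id is the
# zero-based line index, so here person=0, car=1, dog=2.
cat > classes.txt <<'EOF'
person
car
dog
EOF
```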

 Q: **For semantic segmentation, how do I convert the output label files to \*.png format?**</br>
 A: For the tool's own custom format, you can convert easily with the `tools/polygon_mask_conversion.py` script in the project directory. Reference commands:
@@ -79,26 +80,34 @@ python tools/polygon_mask_conversion.py --img_path xxx_folder --mask_path xxx_folder

 Q: **How do I customize the shortcut keys?**
 </br>
 A: Edit the `.anylabelingrc` file under the current user's home directory:
 
 ```bash
 #Linux
 vim ~/.anylabelingrc
 
 #Windows
 notepad C:\\Users\\xxx\\.anylabelingrc
 ```
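
The file is YAML, so a shortcut edit plausibly looks like the fragment below; treat the key names as assumptions and check the `shortcuts` section of your own `.anylabelingrc` for the authoritative ones:

```bash
# Hypothetical ~/.anylabelingrc fragment (YAML), printed here only for
# illustration; merge changes into the existing `shortcuts:` section
# rather than appending a duplicate key.
cat <<'EOF'
shortcuts:
  open: Ctrl+O
  save: Ctrl+S
  create_polygon: W
EOF
```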

 Q: **How do I use the auto-labeling feature of X-AnyLabeling?**
 A: See this [article](https://zhuanlan.zhihu.com/p/667668033).
 
 Q: **How do I use GPU-accelerated inference?**</br>
 A: The inference runtime of `X-AnyLabeling` is currently implemented on top of the `OnnxRuntime` library. To run inference on GPU, install an `onnxruntime-gpu` build that matches your machine's CUDA version; see the [official documentation](https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html) for the compatibility table. Once the environment is confirmed to work, set the `__preferred_device__` field in this [file](../anylabeling/app_info.py) to `GPU`.
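
For example, the full switch might look like the sketch below; the pinned `onnxruntime-gpu` version is a placeholder and must be chosen against your CUDA install using the compatibility table linked above:

```bash
# Swap the CPU runtime for a CUDA-matched GPU build (version is illustrative):
pip uninstall -y onnxruntime
pip install onnxruntime-gpu==1.16.0

# Verify that the CUDA execution provider is visible to onnxruntime:
python -c "import onnxruntime as ort; print(ort.get_available_providers())"

# Flip the preferred device in anylabeling/app_info.py (Linux sed shown):
sed -i 's/__preferred_device__ = "CPU"/__preferred_device__ = "GPU"/' anylabeling/app_info.py
```

If `CUDAExecutionProvider` does not appear in the printed list, inference will fall back to CPU, so it is worth checking before launching the app.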

 Q: **How do I annotate multi-label attributes?**</br>
 A: Follow these steps:
 1. Prepare a custom attribute label file; see [attributes.json](../assets/attributes.json) for a concrete example, and the hypothetical sketch below;
 2. Run `X-AnyLabeling` and click `Import` -> `Import Attributes File` to import the label file from the previous step;
 3. Load an image and draw the target rectangles; the labels must match the custom class labels;
 4. Open edit mode with a right-click or the shortcut `Ctrl+J`;
 5. Click to select a target, then fill in its label attributes in the upper-right corner.
 Note: if you plan to pre-label with an AI model, load the corresponding model first, use the run-all-images feature, and then fine-tune the results.
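
The sketch below is only a guess at the shape of such a file; the authoritative example is [attributes.json](../assets/attributes.json), so defer to it where the two disagree:

```bash
# Hypothetical attributes.json: class names mapped to attribute names and
# their allowed values (structure assumed for illustration only).
cat > attributes.json <<'EOF'
{
  "person": {
    "gender": ["male", "female"],
    "wearing_hat": ["yes", "no"]
  },
  "vehicle": {
    "color": ["red", "white", "black"]
  }
}
EOF
```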

 Q: **How do I annotate rotated objects?**</br>
 A: See this [blog post](https://blog.csdn.net/CVHub/article/details/134216999).
 
 Q: **How do I use the SAM-family models?**</br>
 A: Follow these steps:
 1. Click the `Brain` icon button on the left of the menu bar to activate the AI-assist options;
@@ -140,9 +149,15 @@

 > Notes:</br>
 > 1. Before building, set the `__preferred_device__` parameter in `anylabeling/app_info.py` to match the target GPU/CPU variant;</br>
-> 2. For the `GPU` build, install the matching environment via `pip install -r requirements-gpu-dev.txt`; in particular, for the `Windows-GPU` build, edit the `datas` list in `anylabeling-win-gpu.spec` yourself and add the relevant `*.dll` dynamic libraries of your local `onnxruntime-gpu` to the list; also, the `onnxruntime-gpu` package you download must match your `CUDA` version; see the [official documentation](https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html) for the compatibility table.</br>
+> 2. For the `GPU` build, install the matching environment via `pip install -r requirements-gpu-dev.txt`;
+>    - For the `Windows-GPU` build, edit the `datas` list in `anylabeling-win-gpu.spec` yourself and add the relevant `*.dll` dynamic libraries of your local `onnxruntime-gpu` to the list;</br>
+>    - For the `Linux-GPU` build, edit the `datas` list in `anylabeling-linux-gpu.spec` yourself and add the relevant `*.so` dynamic libraries of your local `onnxruntime-gpu` to the list;</br>
+>    - Also, the `onnxruntime-gpu` package you download must match your `CUDA` version; see the [official documentation](https://onnxruntime.ai/docs/execution-providers/CUDA-ExecutionProvider.html) for the compatibility table. A sketch of locating these libraries follows below.</br>
 > 3. For `macOS`, you can adapt the `anylabeling-win-*.spec` scripts yourself.
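
As a hedged sketch of the locating step, recent `onnxruntime-gpu` wheels keep their native libraries under the package's `capi` subdirectory, though the exact layout should be verified on your machine before editing the spec's `datas` list:

```bash
# Find where the installed onnxruntime-gpu package lives:
ORT_DIR=$(python -c "import onnxruntime, os; print(os.path.dirname(onnxruntime.__file__))")

# Linux: shared objects to reference from anylabeling-linux-gpu.spec
ls "${ORT_DIR}/capi/"*.so

# Windows (e.g. Git Bash): DLLs to reference from anylabeling-win-gpu.spec
ls "${ORT_DIR}/capi/"*.dll
```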
 
 Q: **When using YOLO-family models, how do I annotate only the classes I need?**</br>
 A: This can be done by adding a `filter_classes` field; see [#121](https://github.com/CVHub520/X-AnyLabeling/issues/121) for the concrete method and steps.
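
As an illustration only, based on the field name cited above and issue #121 (which remains the authoritative reference), such a config fragment might look like this; every value below is a placeholder:

```bash
# Hypothetical model-config fragment: keep only `person` and `car`
# predictions and drop everything else the detector emits.
cat <<'EOF'
type: yolov8
name: yolov8n-custom
display_name: YOLOv8n (person & car only)
model_path: yolov8n.onnx
filter_classes:
  - person
  - car
EOF
```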

 ## Feedback
 
 Q: **What should I do if downloading a model fails during initialization?**</br>
@@ -164,6 +179,12 @@ A: Since the model weights are currently hosted on `github`, please enable a proxy first
 └── yolox_l.onnx
 ```

 Q: **What if my custom shortcuts do not take effect?**</br>
 A: See [#100](https://github.com/CVHub520/X-AnyLabeling/issues/100).
 
 Q: **What if the `lap` package fails to install via pip?**</br>
 A: See [#124](https://github.com/CVHub520/X-AnyLabeling/issues/124).
 
 Q: **The model loads fine, but inference produces no results?**</br>
 A: Download the source code and run it from a terminal first to inspect the exact error, then try to resolve it; if you cannot, submit feedback in an [issue](https://github.com/CVHub520/X-AnyLabeling/issues).
