bump version to v0.7.0.post3 (#3115)
lvhan028 authored Feb 10, 2025
1 parent 2a69e0a commit e98fd6a
Showing 6 changed files with 9 additions and 28 deletions.
11 changes: 2 additions & 9 deletions README.md
@@ -23,7 +23,7 @@ ______________________________________________________________________

## Latest News 🎉

- <details open>
+ <details close>
<summary><b>2024</b></summary>

- \[2024/11\] Support Mono-InternVL with PyTorch engine
@@ -91,14 +91,6 @@ LMDeploy is a toolkit for compressing, deploying, and serving LLM, developed by

![v0 1 0-benchmark](https://github.com/InternLM/lmdeploy/assets/4560679/8e455cf1-a792-4fa8-91a2-75df96a2a5ba)

- For detailed inference benchmarks in more devices and more settings, please refer to the following link:
-
- - [A100](./docs/en/benchmark/a100_fp16.md)
- - V100
- - 4090
- - 3090
- - 2080
-
# Supported Models

<table>
@@ -160,6 +152,7 @@ For detailed inference benchmarks in more devices and more settings, please refe
<li>DeepSeek-VL (7B)</li>
<li>InternVL-Chat (v1.1-v1.5)</li>
<li>InternVL2 (1B-76B)</li>
+ <li>InternVL2.5(MPO) (1B-78B)</li>
<li>Mono-InternVL (2B)</li>
<li>ChemVLM (8B-26B)</li>
<li>MiniGeminiLlama (7B)</li>
10 changes: 2 additions & 8 deletions README_ja.md
@@ -23,7 +23,7 @@ ______________________________________________________________________

## 最新ニュース 🎉

- <details open>
+ <details close>
<summary><b>2024</b></summary>

- \[2024/08\] 🔥🔥 LMDeployは[modelscope/swift](https://github.com/modelscope/swift)に統合され、VLMs推論のデフォルトアクセラレータとなりました
@@ -89,13 +89,6 @@ LMDeploy TurboMindエンジンは卓越した推論能力を持ち、さまざ

![v0 1 0-benchmark](https://github.com/InternLM/lmdeploy/assets/4560679/8e455cf1-a792-4fa8-91a2-75df96a2a5ba)

- 詳細な推論ベンチマークについては、以下のリンクを参照してください:
-
- - [A100](./docs/en/benchmark/a100_fp16.md)
- - 4090
- - 3090
- - 2080
-
# サポートされているモデル

<table>
@@ -156,6 +149,7 @@ LMDeploy TurboMindエンジンは卓越した推論能力を持ち、さまざ
<li>DeepSeek-VL (7B)</li>
<li>InternVL-Chat (v1.1-v1.5)</li>
<li>InternVL2 (1B-76B)</li>
+ <li>InternVL2.5(MPO) (1B-78B)</li>
<li>Mono-InternVL (2B)</li>
<li>ChemVLM (8B-26B)</li>
<li>MiniGeminiLlama (7B)</li>
10 changes: 2 additions & 8 deletions README_zh-CN.md
@@ -23,7 +23,7 @@ ______________________________________________________________________

## 最新进展 🎉

- <details open>
+ <details close>
<summary><b>2024</b></summary>

- \[2024/11\] PyTorch engine 支持 Mono-InternVL 模型
@@ -93,13 +93,6 @@ LMDeploy TurboMind 引擎拥有卓越的推理能力,在各种规模的模型

![v0 1 0-benchmark](https://github.com/InternLM/lmdeploy/assets/4560679/8e455cf1-a792-4fa8-91a2-75df96a2a5ba)

- 更多设备、更多计算精度、更多setting下的的推理 benchmark,请参考以下链接:
-
- - [A100](./docs/en/benchmark/a100_fp16.md)
- - 4090
- - 3090
- - 2080
-
# 支持的模型

<table>
@@ -161,6 +154,7 @@ LMDeploy TurboMind 引擎拥有卓越的推理能力,在各种规模的模型
<li>DeepSeek-VL (7B)</li>
<li>InternVL-Chat (v1.1-v1.5)</li>
<li>InternVL2 (1B-76B)</li>
+ <li>InternVL2.5(MPO) (1B-78B)</li>
<li>Mono-InternVL (2B)</li>
<li>ChemVLM (8B-26B)</li>
<li>MiniGeminiLlama (7B)</li>
2 changes: 1 addition & 1 deletion docs/en/get_started/installation.md
@@ -23,7 +23,7 @@ pip install lmdeploy
The default prebuilt package is compiled on **CUDA 12**. If CUDA 11+ (>=11.3) is required, you can install lmdeploy by:

```shell
- export LMDEPLOY_VERSION=0.7.0.post2
+ export LMDEPLOY_VERSION=0.7.0.post3
export PYTHON_VERSION=38
pip install https://github.com/InternLM/lmdeploy/releases/download/v${LMDEPLOY_VERSION}/lmdeploy-${LMDEPLOY_VERSION}+cu118-cp${PYTHON_VERSION}-cp${PYTHON_VERSION}-manylinux2014_x86_64.whl --extra-index-url https://download.pytorch.org/whl/cu118
```
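The wheel URL in the snippet above is assembled from the two exported variables. As an illustration only (a hypothetical helper, not part of lmdeploy), the same templating in Python:

```python
def cu118_wheel_url(version: str, py_tag: str) -> str:
    """Build the CUDA 11.8 wheel URL the shell snippet above downloads.

    Hypothetical helper for illustration; it mirrors the template
    lmdeploy-<ver>+cu118-cp<tag>-cp<tag>-manylinux2014_x86_64.whl.
    """
    base = "https://github.com/InternLM/lmdeploy/releases/download"
    wheel = (
        f"lmdeploy-{version}+cu118-"
        f"cp{py_tag}-cp{py_tag}-manylinux2014_x86_64.whl"
    )
    return f"{base}/v{version}/{wheel}"


# Same values as the shell example: version 0.7.0.post3, CPython 3.8.
print(cu118_wheel_url("0.7.0.post3", "38"))
```

Note that the CPython tag (`38` here) must match the interpreter you install into, since the wheel is ABI-specific.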
2 changes: 1 addition & 1 deletion docs/zh_cn/get_started/installation.md
@@ -23,7 +23,7 @@ pip install lmdeploy
默认的预构建包是在 **CUDA 12** 上编译的。如果需要 CUDA 11+ (>=11.3),你可以使用以下命令安装 lmdeploy:

```shell
- export LMDEPLOY_VERSION=0.7.0.post2
+ export LMDEPLOY_VERSION=0.7.0.post3
export PYTHON_VERSION=38
pip install https://github.com/InternLM/lmdeploy/releases/download/v${LMDEPLOY_VERSION}/lmdeploy-${LMDEPLOY_VERSION}+cu118-cp${PYTHON_VERSION}-cp${PYTHON_VERSION}-manylinux2014_x86_64.whl --extra-index-url https://download.pytorch.org/whl/cu118
```
2 changes: 1 addition & 1 deletion lmdeploy/version.py
@@ -1,7 +1,7 @@
# Copyright (c) OpenMMLab. All rights reserved.
from typing import Tuple

- __version__ = '0.7.0.post2'
+ __version__ = '0.7.0.post3'
short_version = __version__


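`lmdeploy/version.py` stores the version as a plain string. A minimal sketch (hypothetical, not necessarily the module's actual helper) of splitting such a string into a tuple, with numeric parts as ints and suffixes like `post3` kept as strings:

```python
from typing import Tuple, Union


def parse_version_info(version_str: str) -> Tuple[Union[int, str], ...]:
    """Split a version string like '0.7.0.post3' into a mixed tuple.

    Hypothetical helper for illustration: each dot-separated part
    becomes an int when purely numeric, otherwise stays a string.
    """
    parts = []
    for part in version_str.split("."):
        parts.append(int(part) if part.isdigit() else part)
    return tuple(parts)


print(parse_version_info("0.7.0.post3"))  # (0, 7, 0, 'post3')
```

Keeping numeric parts as ints lets releases within the same scheme compare correctly (e.g. `(0, 7, 0)` < `(0, 10, 0)`), which a plain string comparison would get wrong.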
