Add distributed context in pytorch engine to support torchrun #2615
Conversation
"""get current world size and rank.""" | ||
world_size = 1 | ||
rank = 0 | ||
from lmdeploy.pytorch.distributed import get_world_rank |
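The added import routes rank discovery through the engine's distributed context instead of the hard-coded defaults. A minimal sketch of the resulting helper (illustrative only, not the exact patch), assuming get_world_rank() returns a (world_size, rank) tuple as the docstring implies:

def _world_size_and_rank():
    """Get current world size and rank (illustrative wrapper)."""
    from lmdeploy.pytorch.distributed import get_world_rank
    world_size, rank = get_world_rank()  # assumed (world_size, rank) return order
    return world_size, rank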
Can you provide a short script showing how to use torchrun with lmdeploy to test this PR?
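A script that compares single-GPU inference against torchrun-launched multi-GPU data-parallel inference: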
import argparse
import os
import time

import torch
import torch.distributed as dist
import torch.multiprocessing as mp
from tqdm import tqdm

from lmdeploy import (ChatTemplateConfig, GenerationConfig,
                      PytorchEngineConfig, VisionConfig, pipeline)

os.environ['TOKENIZERS_PARALLELISM'] = 'true'


def init_dist_pytorch(tcp_port, local_rank, backend='nccl'):
    # tcp_port is unused under torchrun: the rendezvous address and port come
    # from the environment variables torchrun exports.
    if mp.get_start_method(allow_none=True) is None:
        mp.set_start_method('spawn')
    num_gpus = torch.cuda.device_count()
    if torch.__version__ > '1.10':
        local_rank = int(os.environ['LOCAL_RANK'])
    torch.cuda.set_device(local_rank % num_gpus)
    dist.init_process_group(backend=backend)
    rank = dist.get_rank()
    num_gpus = dist.get_world_size()
    return num_gpus, rank


def parse_config():
    parser = argparse.ArgumentParser(description='arg parser')
    parser.add_argument('--model_path', type=str, default=None,
                        help='checkpoint to start from')
    parser.add_argument('--tcp_port', type=int, default=18888,
                        help='tcp port for distributed training')
    parser.add_argument('--local_rank', type=int, default=0,
                        help='local rank for distributed training')
    args = parser.parse_args()
    return args


if __name__ == '__main__':
    args = parse_config()
    num_gpus, rank = init_dist_pytorch(args.tcp_port, args.local_rank)
    pipe = pipeline(
        model_path=args.model_path,
        backend_config=PytorchEngineConfig(dtype='bfloat16',
                                           cache_max_entry_count=0.1,
                                           max_batch_size=1),
        vision_config=VisionConfig(max_batch_size=1),
        log_level='INFO',
        # chat_template_config=ChatTemplateConfig(model_name='internvl2-internlm2')
    )
    generation_config = GenerationConfig(max_new_tokens=4096, do_sample=False,
                                         temperature=0.0)
    iteration_num = 20
    input_text = 'Explain the concept of artificial intelligence in simple terms.'
    if num_gpus == 1:
        start_time = time.time()
        for _ in tqdm(range(iteration_num), ncols=140, desc='Single GPU'):
            output = pipe([input_text], gen_config=generation_config)
        print(f'Single GPU total inference time: {time.time() - start_time:.1f} seconds')
    else:
        # Each rank handles its share of the prompts (data parallel).
        dist.barrier()
        start_time = time.time()
        for _ in tqdm(range(iteration_num // num_gpus), ncols=140,
                      desc='Multi GPU', disable=rank != 0):
            output = pipe([input_text], gen_config=generation_config)
        dist.barrier()
        if rank == 0:
            print(f'Multi-GPU total inference time: {time.time() - start_time:.1f} seconds')
torchrun --nproc_per_node=2 test.py \
    --model_path InternVL2-1B
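Note that torchrun exports RANK, LOCAL_RANK, WORLD_SIZE, MASTER_ADDR and MASTER_PORT into each worker's environment, and dist.init_process_group(backend=backend) picks them up through the default env:// rendezvous. This is why the script's tcp_port argument is never used under torchrun.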
LGTM