Refactor lora #2466

Merged · 12 commits merged into InternLM:main on Sep 24, 2024
Conversation

grimoire
Collaborator

S-LoRA is hard to maintain.

@lvhan028
Collaborator

Should we update api_server_lora.md?

@AllentDan
Collaborator

We may want to update pipeline.html as well.

@irexyc
Collaborator

irexyc commented Sep 24, 2024

https://github.com/InternLM/lmdeploy/blob/85cf6e3f3f780e83f33cda608f5e312c5502cda2/docs/zh_cn/llm/pipeline.md

Do adapters only support local paths?

  File "/home/chenxin/ws3/topk/lmdeploy/pytorch/engine/model_agent.py", line 248, in _build_model
    add_adapters(patched_model,
  File "/home/chenxin/miniconda3/envs/38/lib/python3.8/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
    return func(*args, **kwargs)
  File "/home/chenxin/ws3/topk/lmdeploy/pytorch/models/patch.py", line 298, in add_adapters
    state_dict = torch.load(checkpoint_path, map_location=device)
  File "/home/chenxin/miniconda3/envs/38/lib/python3.8/site-packages/torch/serialization.py", line 997, in load
    with _open_file_like(f, 'rb') as opened_file:
  File "/home/chenxin/miniconda3/envs/38/lib/python3.8/site-packages/torch/serialization.py", line 444, in _open_file_like
    return _open_file(name_or_buffer, mode)
  File "/home/chenxin/miniconda3/envs/38/lib/python3.8/site-packages/torch/serialization.py", line 425, in __init__
    super().__init__(open(name, mode))
FileNotFoundError: [Errno 2] No such file or directory: 'chenchi/lora-chatglm2-6b-guodegang/adapter_model.bin'

@grimoire
Collaborator Author

Do adapters only support local paths?

Added support for downloading adapters.
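The traceback above shows `torch.load` being handed a Hub repo id (`chenchi/lora-chatglm2-6b-guodegang`) as if it were a local file. A minimal sketch of the kind of fallback such a fix implies, assuming `huggingface_hub.snapshot_download` is available — the helper name is hypothetical and the actual lmdeploy change may differ:

```python
import os


def resolve_adapter_path(name_or_path: str) -> str:
    """Return a local directory for an adapter.

    If `name_or_path` already exists locally, use it as-is; otherwise
    treat it as a Hugging Face Hub repo id and download a snapshot.
    Hypothetical helper, not lmdeploy's actual implementation.
    """
    if os.path.exists(name_or_path):
        return name_or_path
    # Imported lazily so the local-path case works without huggingface_hub.
    from huggingface_hub import snapshot_download
    return snapshot_download(repo_id=name_or_path)
```

With this, `torch.load(os.path.join(resolve_adapter_path(adapter), 'adapter_model.bin'))` would work for both local paths and Hub repo ids.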

@irexyc
Collaborator

irexyc commented Sep 24, 2024

With do_sample=False and same input, the output of the first request is different from the output of the subsequent request. Is this as expected?

from lmdeploy import pipeline, GenerationConfig, PytorchEngineConfig
backend_config = PytorchEngineConfig(session_len=2048,
                                     adapters=dict(lora_name_1='chenchi/lora-chatglm2-6b-guodegang'))
pipe = pipeline('/mnt/140/chatglm2-6b/',
                log_level='INFO',
                backend_config=backend_config)
prompts = [[{
    'role': 'user',
    'content': '您猜怎么着'  # "Guess what"
}]]
# do_sample is False, so repeated requests with the same input
# should produce identical (greedy) outputs.
response = pipe(prompts, adapter_name='lora_name_1')
print(response[0].text)
response = pipe(prompts, adapter_name='lora_name_1')
print(response[0].text)
response = pipe(prompts, adapter_name='lora_name_1')
print(response[0].text)

# First request:       我不知道您想说什么,可以请您把您的想法、问题、观点、建议或者疑问提出来,我会尽力帮助您解答。
#                      ("I don't know what you want to say; please share your thoughts, questions,
#                       views, suggestions, or doubts, and I will do my best to help.")
# Subsequent requests: 没有啊,您给我说。
#                      ("No, you tell me.")

@grimoire
Collaborator Author

With do_sample=False and same input, the output of the first request is different from the output of the subsequent request. Is this as expected?

Fixed.
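The repro above is essentially a determinism check: with greedy decoding, every request with the same input must yield the same output. A generic helper for guarding against such regressions might look like this (the helper is a hypothetical sketch, not part of lmdeploy):

```python
def check_deterministic(generate, prompt, n=3):
    """Call `generate` n times on the same prompt and verify that all
    outputs are identical, as greedy decoding (do_sample=False) should
    guarantee. Returns the common output on success."""
    outputs = [generate(prompt) for _ in range(n)]
    first = outputs[0]
    assert all(out == first for out in outputs), f'non-deterministic outputs: {outputs}'
    return first
```

In the repro, `generate` would wrap `pipe(prompts, adapter_name='lora_name_1')`; before the fix, the first call differed from the rest and the assertion would fire.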

Collaborator

@AllentDan AllentDan left a comment


LGTM

@lvhan028 lvhan028 merged commit 0fbb2ee into InternLM:main Sep 24, 2024
5 checks passed
4 participants