Refactor lora #2466
Conversation
Should we update?

May update.

Do add download adapters.
With

```python
from lmdeploy import pipeline, GenerationConfig, PytorchEngineConfig

backend_config = PytorchEngineConfig(
    session_len=2048,
    adapters=dict(lora_name_1='chenchi/lora-chatglm2-6b-guodegang'))
pipe = pipeline('/mnt/140/chatglm2-6b/',
                log_level='INFO',
                backend_config=backend_config)
prompts = [[{
    'role': 'user',
    'content': '您猜怎么着'  # "Guess what?"
}]]
response = pipe(prompts, adapter_name='lora_name_1')
print(response[0].text)
response = pipe(prompts, adapter_name='lora_name_1')
print(response[0].text)
response = pipe(prompts, adapter_name='lora_name_1')
print(response[0].text)
# 我不知道您想说什么,可以请您把您的想法、问题、观点、建议或者疑问提出来,我会尽力帮助您解答。
# ("I'm not sure what you mean. Please share your thoughts, questions,
#  opinions, suggestions, or doubts, and I'll do my best to help.")
# 没有啊,您给我说。 ("No, you tell me.")
# 没有啊,您给我说。 ("No, you tell me.")
```
Fixed.
LGTM
S-LoRA is hard to maintain.