fix custom vllm eval args #2325
Conversation
Could you revert vllm_utils.py? Why were so many changes made?
swift/llm/utils/vllm_utils.py
Outdated
@@ -260,12 +270,24 @@ def __post_init__(self):
            self.top_k = -1
        if self.stop is None:
            self.stop = []
        if self.repetition_penalty is None:
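The diff above adds None-handling in the sampling config's `__post_init__`. A minimal standalone sketch of that pattern follows; the class and field names here are illustrative, not the actual swift/vLLM code:

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SamplingConfig:
    # None means "not set by the user"; __post_init__ fills in defaults.
    top_k: Optional[int] = None
    stop: Optional[List[str]] = None
    repetition_penalty: Optional[float] = None

    def __post_init__(self):
        if self.top_k is None:
            self.top_k = -1  # -1 conventionally disables top-k sampling
        if self.stop is None:
            self.stop = []
        if self.repetition_penalty is None:
            self.repetition_penalty = 1.0  # 1.0 means no penalty

cfg = SamplingConfig()
print(cfg.top_k, cfg.stop, cfg.repetition_penalty)
```

The point of the thread below is where this normalization should live: inside vllm_utils.py, or upstream in eval.py before the config is passed along.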
Could you filter out the None values in eval.py instead?
done
Those changes came from the automatic lint tool, which adjusted some formatting. vllm_utils.py has been reverted.
swift/llm/utils/vllm_utils.py
Outdated
@@ -620,5 +620,6 @@ def prepare_vllm_engine_template(args: InferArguments, use_async: bool = False)
        args.truncation_strategy,
        model=llm_engine,
        tools_prompt=args.tools_prompt)
    logger.info(f'system: {template.default_system}')
    args.system = template.default_system
Why is this change needed here?
It was likely modified by mistake and has been restored.
swift/llm/eval.py
Outdated
if max_new_tokens is not None:
    infer_cfg['max_tokens'] = max_new_tokens
defaults = {'repetition_penalty': 1.0, 'top_p': 1.0, 'top_k': -1}
# 使用默认值覆盖 None 值 ("override None values with defaults")
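The reviewer's suggestion, filtering the None values in eval.py rather than inside vllm_utils.py, can be sketched roughly as below. The `infer_cfg` keys follow the diff above; the helper name and exact integration point are assumptions, not the actual eval.py code:

```python
def fill_none_with_defaults(infer_cfg: dict) -> dict:
    """Replace None (or missing) sampling values with safe defaults
    before the config is handed to the vLLM backend."""
    defaults = {'repetition_penalty': 1.0, 'top_p': 1.0, 'top_k': -1}
    for key, default in defaults.items():
        if infer_cfg.get(key) is None:
            infer_cfg[key] = default
    return infer_cfg

# None entries are replaced; keys outside the defaults are left untouched.
cfg = fill_none_with_defaults({'top_p': None, 'max_tokens': 64})
```

Doing the filtering here keeps vllm_utils.py untouched, which is what the reviewer asked for earlier in the thread.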
please use english
fixed
thank you for your PR!
PR type
PR information
Fix custom eval args when using the vLLM inference backend.
Add the following in
ms-swift/swift/llm/utils/vllm_utils.py
Experiment results
Paste your experiment result here (if needed).