[Misc] Postpone torch_profiler deprecation #32867
NickLucche merged 1 commit into vllm-project:main
Conversation
cc @MatthewBonanni for all things backends
Code Review
The pull request updates the deprecation warning for the VLLM_ATTENTION_BACKEND environment variable, postponing its removal from v0.14.0 to v0.15.0 or v1.0.0, whichever comes first. This matches the stated objective of safely postponing the deprecation schedule and updating the warning accordingly. No critical or high-severity issues were found in the changes.
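For context, the kind of change under review, bumping the removal target inside an environment-variable deprecation warning, might look roughly like the following sketch. This is not the actual vLLM code; the function name `check_deprecated_env` and the warning text are illustrative assumptions based only on the versions mentioned in this review.

```python
# Illustrative sketch only -- NOT the real vLLM implementation.
# Shows one common pattern for warning about a deprecated env var.
import os
import warnings

DEPRECATED_ENV = "VLLM_ATTENTION_BACKEND"  # env var discussed in this PR


def check_deprecated_env() -> None:
    """Emit a DeprecationWarning if the deprecated env var is set."""
    if os.environ.get(DEPRECATED_ENV) is not None:
        warnings.warn(
            f"{DEPRECATED_ENV} is deprecated and will be removed in "
            "v0.15.0 or v1.0.0, whichever comes first.",
            DeprecationWarning,
            stacklevel=2,
        )
```

A PR like this one would typically touch only the version string inside the warning message, leaving the check itself unchanged.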
yewentao256
left a comment
LGTM, thanks for the work!
The title should probably reference torch_profiler rather than "VLLM_ATTENTION_BACKEND".
MatthewBonanni
left a comment
Makes sense to bump this, but as @yewentao256 points out, this is unrelated to attention backends, which were already removed by #32812.
Signed-off-by: NickLucche <nlucches@redhat.com>
Signed-off-by: 陈建华 <1647430658@qq.com>
I think we can safely postpone this deprecation schedule.
Updating the warning accordingly.