[Speculative Decoding] Test refactor #8317


Merged: 14 commits merged into vllm-project:main on Sep 11, 2024

Conversation

LiuXiaoxuanPKU
Collaborator

Refactor Speculative Decoding tests to remove AsyncLLMEngine. Related to #8126.

@LiuXiaoxuanPKU LiuXiaoxuanPKU marked this pull request as draft September 10, 2024 04:44
👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, covering a small but essential subset of tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add the ready label to the PR
  • Enable auto-merge.

🚀

@LiuXiaoxuanPKU LiuXiaoxuanPKU changed the title refactor [Speculative Decoding] Test refactor Sep 10, 2024
@LiuXiaoxuanPKU
Collaborator Author

/ready

@LiuXiaoxuanPKU LiuXiaoxuanPKU marked this pull request as ready for review September 10, 2024 21:55
Member

@youkaichao youkaichao left a comment

Thanks for the great simplification! The code looks much better now.

@youkaichao youkaichao added the ready ONLY add when PR is ready to merge/full CI is needed label Sep 10, 2024
@youkaichao
Member

For the test failure I disabled: to reproduce and investigate, run the following on L4 machines:

    docker run -it --gpus all --ipc=host -e HF_TOKEN -v ~/.cache/huggingface:/root/.cache/huggingface public.ecr.aws/q9t5s3a7/vllm-ci-test-repo:f3d79ad8513645851909fc9ea217a9ad0c413427
    pytest -v -s tests/spec_decode/e2e/test_multistep_correctness.py -k "test_spec_decode_e2e_greedy_correctness_tiny_model_bs1 or test_spec_decode_e2e_with_detokenization"

I found the problem: the spec decode metrics are not collected.

vllm/vllm/engine/llm_engine.py

Lines 1900 to 1905 in 7015417

    if model_output and (model_output[0].spec_decode_worker_metrics
                         is not None):
        spec_decode_metrics = model_output[0].spec_decode_worker_metrics
    else:
        spec_decode_metrics = None

After these lines, spec_decode_metrics is None.
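The guard above can be reproduced in isolation. Below is a minimal sketch of that logic; FakeSamplerOutput and extract_spec_decode_metrics are hypothetical stand-ins for illustration, not vLLM APIs. It shows why the engine ends up with None whenever the worker does not attach metrics to the first output:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FakeSamplerOutput:
    # Stand-in for vLLM's sampler output; only the field the guard
    # reads is modeled here.
    spec_decode_worker_metrics: Optional[dict] = None

def extract_spec_decode_metrics(model_output):
    # Mirrors the guard in llm_engine.py: metrics are taken only from
    # the FIRST output in the list, and only when that field is set.
    if model_output and (model_output[0].spec_decode_worker_metrics
                         is not None):
        return model_output[0].spec_decode_worker_metrics
    return None

# Failure mode described above: the worker never populates the field,
# so the engine sees None and no metrics are collected.
print(extract_spec_decode_metrics([FakeSamplerOutput()]))  # None

# Happy path: metrics attached to the first output are propagated.
metrics = {"draft_acceptance_rate": 0.7}
print(extract_spec_decode_metrics(
    [FakeSamplerOutput(spec_decode_worker_metrics=metrics)]))
```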

cc @comaniac if you have bandwidth to investigate in the future.

@youkaichao youkaichao merged commit 775f00f into vllm-project:main Sep 11, 2024
60 of 61 checks passed
dtrifiro pushed a commit to opendatahub-io/vllm that referenced this pull request Sep 12, 2024
@LiuXiaoxuanPKU LiuXiaoxuanPKU deleted the sd-test-refactor branch September 17, 2024 04:29
Alvant pushed a commit to compressa-ai/vllm that referenced this pull request Oct 26, 2024
LeiWang1999 pushed a commit to LeiWang1999/vllm-bitblas that referenced this pull request Mar 26, 2025