[mis][ci/test] fix flaky test in tests/test_sharded_state_loader.py #5361

Merged: youkaichao merged 1 commit into vllm-project:main from youkaichao:fix_flaky_test, Jun 9, 2024
Conversation

@youkaichao (Member)
This flaky test was observed in https://buildkite.com/vllm/ci/builds/9517#018ff9e8-1c4b-4fac-b3c5-af50946523d8 . A tensor created with torch.empty is uninitialized, so it can contain NaN values, which makes element-wise comparisons in the test fail intermittently.
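A minimal sketch of the failure mode described above (illustrative only, not the PR's actual test code): if uninitialized memory from torch.empty happens to contain a NaN, even comparing a tensor against an exact copy of itself fails, because NaN compares unequal to NaN.

```python
import torch

# Simulate the unlucky case: an "uninitialized" tensor that holds a NaN.
a = torch.tensor([1.0, float("nan"), 3.0])
b = a.clone()  # bitwise-identical copy

# Naive equality fails wherever NaN appears, since NaN != NaN.
print(torch.equal(a, b))                      # False

# torch.allclose also treats NaN as unequal unless told otherwise.
print(torch.allclose(a, b, equal_nan=True))   # True

# The robust fix for a test is to start from defined values, e.g.
# torch.rand(...) under a fixed seed, which can never produce NaN.
torch.manual_seed(0)
c = torch.rand(3)
assert not torch.isnan(c).any()
```

This is why tests that round-trip weights through a loader should initialize them with known values rather than torch.empty.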

@youkaichao (Member, Author)
FYI: @WoosukKwon this is the investigation result of #5074 (comment) .

@youkaichao youkaichao enabled auto-merge (squash) June 9, 2024 02:39
@youkaichao youkaichao merged commit 5d7e3d0 into vllm-project:main Jun 9, 2024
@youkaichao youkaichao deleted the fix_flaky_test branch June 9, 2024 03:54
dtrifiro pushed a commit to opendatahub-io/vllm that referenced this pull request Jun 10, 2024
[mis][ci/test] fix flaky test in tests/test_sharded_state_loader.py (vllm-project#5361)

robertgshaw2-redhat pushed a commit to neuralmagic/nm-vllm that referenced this pull request Jun 11, 2024
[mis][ci/test] fix flaky test in tests/test_sharded_state_loader.py (vllm-project#5361)
tjohnson31415 added a commit to tjohnson31415/vllm that referenced this pull request Jun 11, 2024
* upstream/main: (126 commits)
  [Bugfix][Frontend] Cleanup "fix chat logprobs" (vllm-project#5026)
  [Bugfix] OpenAI entrypoint limits logprobs while ignoring server defined --max-logprobs (vllm-project#5312)
  [Misc] Various simplifications and typing fixes (vllm-project#5368)
  [ci] Fix Buildkite agent path (vllm-project#5392)
  [Doc] Add documentation for FP8 W8A8 (vllm-project#5388)
  Bump version to v0.5.0 (vllm-project#5384)
  [Docs] Alphabetically sort sponsors (vllm-project#5386)
  [Docs] Add Docs on Limitations of VLM Support (vllm-project#5383)
  [ci] Mount buildkite agent on Docker container to upload benchmark results (vllm-project#5330)
  [ci] Use small_cpu_queue for doc build (vllm-project#5331)
  [Bugfix] Fix LLaVA-NeXT (vllm-project#5380)
  [Feature][Frontend]:  Continued `stream_options` implementation also in CompletionRequest (vllm-project#5319)
  [Model] Initial support for LLaVA-NeXT (vllm-project#4199)
  [Misc] Improve error message when LoRA parsing fails (vllm-project#5194)
  [misc][typo] fix typo (vllm-project#5372)
  [Frontend][Misc] Enforce Pixel Values as Input Type for VLMs in API Server (vllm-project#5374)
  [Misc] Update to comply with the new `compressed-tensors` config (vllm-project#5350)
  [Bugfix] Fix KeyError: 1 When Using LoRA adapters (vllm-project#5164)
  [Kernel][Misc] Use TORCH_LIBRARY instead of PYBIND11_MODULE for custom ops (vllm-project#5047)
  [mis][ci/test] fix flaky test in test_sharded_state_loader.py (vllm-project#5361)
  ...
joerunde pushed a commit to joerunde/vllm that referenced this pull request Jun 17, 2024
[mis][ci/test] fix flaky test in tests/test_sharded_state_loader.py (vllm-project#5361)

xjpang pushed a commit to xjpang/vllm that referenced this pull request Jun 27, 2024
[mis][ci/test] fix flaky test in tests/test_sharded_state_loader.py (vllm-project#5361)

xjpang pushed a commit to xjpang/vllm that referenced this pull request Jul 8, 2024
[mis][ci/test] fix flaky test in tests/test_sharded_state_loader.py (vllm-project#5361)

xjpang pushed a commit to xjpang/vllm that referenced this pull request Jul 24, 2024
[mis][ci/test] fix flaky test in tests/test_sharded_state_loader.py (vllm-project#5361)