[Fix] Fix default port number in benchmark scripts#265

Merged
zhuohan123 merged 1 commit into main from fix-port
Jun 26, 2023
Conversation

@zhuohan123
Member

No description provided.

@zhuohan123 zhuohan123 requested a review from WoosukKwon June 26, 2023 18:56
@WoosukKwon
Collaborator

A quick question: why should we change the port number? Could you provide a little bit more background?

@zhuohan123
Member Author

> A quick question: why should we change the port number? Could you provide a little bit more background?

All the servers in vLLM right now have a default port number of 8000. This PR fixes previous merge errors and sets the default port number in the benchmark scripts to 8000 as well.
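To illustrate the kind of change involved (a hedged sketch, not the actual vLLM diff): a benchmark client typically exposes the target port via argparse, and the fix is to keep its default in sync with the server's default of 8000. The `build_parser` function name is hypothetical.

```python
# Illustrative sketch only, not the real vLLM benchmark script.
# The point of the fix: the client's default --port must match the
# server's default port (8000), or the benchmark fails to connect
# out of the box.
import argparse


def build_parser() -> argparse.ArgumentParser:
    # Hypothetical helper mirroring a typical benchmark-script CLI.
    parser = argparse.ArgumentParser(description="benchmark client")
    parser.add_argument("--host", type=str, default="localhost")
    # Keep this default aligned with the server's default port.
    parser.add_argument("--port", type=int, default=8000)
    return parser


if __name__ == "__main__":
    args = build_parser().parse_args([])
    print(args.port)
```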

Collaborator

@WoosukKwon WoosukKwon left a comment


Oh I see. Thanks for the explanation!

@zhuohan123 zhuohan123 merged commit 43710e8 into main Jun 26, 2023
@zhuohan123 zhuohan123 deleted the fix-port branch June 29, 2023 17:25
michaelfeil pushed a commit to michaelfeil/vllm that referenced this pull request Jul 1, 2023
hongxiayang pushed a commit to hongxiayang/vllm that referenced this pull request Feb 13, 2024
yukavio pushed a commit to yukavio/vllm that referenced this pull request Jul 3, 2024
SUMMARY:
* enable python 3.9 and 3.11 in "remote push"
* update "nightly" to run for all pythons
* for python 3.8, 3.9, 3.11, just enable minimal testing for now
* update install commands used in automation to include
https://pypi.neuralmagic.com/simple
* update tmp skip list to skip almost everything

NOTES: adjusting the skip lists will be completed in an upcoming PR. I'd
like to get these updates in place so we can nominally start pushing a
NIGHTLY.

TEST PLAN:
runs on remote push

---------

Co-authored-by: andy-neuma <andy@neuralmagic.com>
mht-sharma pushed a commit to mht-sharma/vllm that referenced this pull request Dec 9, 2024
dtrifiro added a commit to dtrifiro/vllm that referenced this pull request Jan 7, 2025
wuhuikx pushed a commit to wuhuikx/vllm that referenced this pull request Mar 27, 2025
see vllm-project#265

Signed-off-by: MengqingCao <cmq0113@163.com>
amy-why-3459 pushed a commit to amy-why-3459/vllm that referenced this pull request Sep 15, 2025
Update torch-npu version to fix torch npu exponential_ accuracy
With this update, the precision issue when setting `temperature > 0` is
fixed.

---------

Signed-off-by: Mengqing Cao <cmq0113@163.com>
cursor Bot pushed a commit to Shirley125/vllm_epd that referenced this pull request Jan 22, 2026
Signed-off-by: David Chen <530634352@qq.com>
mickg10 pushed a commit to mickg10/vllm that referenced this pull request Feb 11, 2026
…rted models with arch verification (vllm-project#265)

Signed-off-by: Salar <skhorasgani@tenstorrent.com>