[Bug] Different BatchSizes Can Affect Results #2567

Closed
sitabulaixizawaluduo opened this issue Dec 24, 2024 · 10 comments

@sitabulaixizawaluduo

Checklist

  • 1. I have searched related issues but cannot get the expected help.
  • 2. The bug has not been fixed in the latest version.
  • 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
  • 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose. Otherwise, it will be closed.
  • 5. Please use English, otherwise it will be closed.

Describe the bug

When running inference with sglang 0.4.0, I found that sending requests at concurrency 1 versus concurrency 2 produces inconsistent results, causing a difference of 5 to 10 points when I score the outputs with GPT-4o.

Reproduction

Model: Mixtral-8x7B
CMD: python3 -m sglang.launch_server --model-path models/Mixtral-8x7B-Instruct-v0.1/ --tp-size 4 --trust-remote-code --disable-cuda-graph --sampling-backend pytorch --disable-radix-cache --disable-overlap-schedule
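
The report does not include the client side, so below is a minimal sketch of one way to run the comparison, assuming the server launched above is listening on sglang's default port 30000. The host, prompt, and max_new_tokens are placeholders; the sampling parameters match the greedy settings reported later in this thread.

```python
# Hypothetical reproduction sketch (not from the original report): query the
# native /generate endpoint once with a single in-flight request, then with
# two identical requests in flight, and compare the outputs.
import concurrent.futures

import requests

URL = "http://127.0.0.1:30000/generate"  # sglang's default port; adjust if needed
PAYLOAD = {
    "text": "Explain the difference between TCP and UDP in one paragraph.",
    "sampling_params": {
        "temperature": 0,
        "top_k": 1,
        "repetition_penalty": 1.1,
        "max_new_tokens": 256,
    },
}

def generate() -> str:
    # The native endpoint returns a JSON object whose "text" field holds the completion.
    return requests.post(URL, json=PAYLOAD, timeout=300).json()["text"]

# Concurrency 1: a single request, processed alone.
baseline = generate()

# Concurrency 2: two identical requests submitted at the same time, so the
# scheduler can batch them together.
with concurrent.futures.ThreadPoolExecutor(max_workers=2) as pool:
    batched = list(pool.map(lambda _: generate(), range(2)))

print("outputs identical:", all(out == baseline for out in batched))
```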

Environment

Python: 3.10.16 (main, Dec 4 2024, 08:53:37) [GCC 9.4.0]
CUDA available: True
GPU 0,1,2,3,4,5,6,7: NVIDIA L40
GPU 0,1,2,3,4,5,6,7 Compute Capability: 8.9
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.4, V12.4.131
CUDA Driver Version: 535.104.12
PyTorch: 2.5.1+cu124
sglang: 0.4.0.post1
flashinfer: 0.1.6+cu124torch2.4
triton: 3.1.0
transformers: 4.45.2
torchao: 0.6.1
numpy: 1.26.4
aiohttp: 3.11.10
fastapi: 0.115.6
hf_transfer: 0.1.8
huggingface_hub: 0.26.3
interegular: 0.3.3
modelscope: 1.21.0
orjson: 3.10.12
packaging: 24.2
psutil: 6.1.0
pydantic: 2.10.3
multipart: 0.0.19
zmq: 26.2.0
uvicorn: 0.32.1
uvloop: 0.21.0
vllm: 0.6.4.post1
openai: 1.57.0
anthropic: 0.40.0
decord: 0.6.0
NVIDIA Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 NIC0 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X PIX PXB PXB SYS SYS SYS SYS PXB 0-31,64-95 0 N/A
GPU1 PIX X PXB PXB SYS SYS SYS SYS PXB 0-31,64-95 0 N/A
GPU2 PXB PXB X PXB SYS SYS SYS SYS PXB 0-31,64-95 0 N/A
GPU3 PXB PXB PXB X SYS SYS SYS SYS PIX 0-31,64-95 0 N/A
GPU4 SYS SYS SYS SYS X PIX PXB PXB SYS 32-63,96-127 1 N/A
GPU5 SYS SYS SYS SYS PIX X PXB PXB SYS 32-63,96-127 1 N/A
GPU6 SYS SYS SYS SYS PXB PXB X PXB SYS 32-63,96-127 1 N/A
GPU7 SYS SYS SYS SYS PXB PXB PXB X SYS 32-63,96-127 1 N/A
NIC0 PXB PXB PXB PIX SYS SYS SYS SYS X

Legend:

X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks

NIC Legend:

NIC0: mlx5_bond_0

ulimit soft: 1048576

@zhaochenyang20
Collaborator

I think it's related to the sampling parameters, not the server. What are the sampling parameters in your cases?

zhaochenyang20 self-assigned this Dec 24, 2024
@sitabulaixizawaluduo
Author

> I think it's related to the sampling parameters, not the server. What are the sampling parameters in your cases?

temperature=0, repetition_penalty=1.1, topk=1
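
For context, here is a minimal sketch of how these settings could be sent through the OpenAI-compatible endpoint of the server launched above. The base URL, model name, and prompt are placeholders, and passing top_k and repetition_penalty via extra_body is an assumption about how the non-standard fields are forwarded.

```python
# Hypothetical sketch: the reported greedy settings sent via the
# OpenAI-compatible /v1 API. extra_body forwards fields that are not part of
# the standard OpenAI schema; whether the server honors them this way is an
# assumption here.
from openai import OpenAI

client = OpenAI(base_url="http://127.0.0.1:30000/v1", api_key="EMPTY")

resp = client.completions.create(
    model="models/Mixtral-8x7B-Instruct-v0.1",
    prompt="Summarize the plot of Hamlet in two sentences.",
    temperature=0,
    max_tokens=256,
    extra_body={"top_k": 1, "repetition_penalty": 1.1},
)
print(resp.choices[0].text)
```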

@zhaochenyang20
Collaborator

Okay. Quite strange.

@zhaochenyang20
Collaborator

Let me discuss with our teammates first.

@zhaochenyang20
Collaborator

If I have not replied by next week, please ping @zhaochenyang20 in this issue. Thanks! @sitabulaixizawaluduo

@cermeng
Contributor

cermeng commented Dec 25, 2024

TL;DR: the dynamic batch size causes this.

Different batch sizes invoke different CUDA kernels, which leads to small numerical differences. These differences accumulate across the model's many layers. Features like continuous batching, chunked prefill, and prefix caching, as well as real-world factors like concurrency, make the batch size nondeterministic.

I raised a similar issue in vLLM: vllm-project/vllm#10074. In fact, sglang has already covered this in its FAQ: https://sgl-project.github.io/references/faq.html
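
As a minimal, framework-level illustration of the point above (not sglang-specific, and assuming a CUDA GPU for half precision), the sketch below pushes the same input row through a matmul alone and inside a batch of two. Whether a nonzero difference appears depends on the hardware, dtype, and library versions, but when it does, it is exactly the kind of per-layer discrepancy that compounds over a deep model.

```python
# Same row, two batch sizes: different kernel/tiling choices can change the
# floating-point reduction order, so the results may differ by a few ULPs.
import torch

torch.manual_seed(0)
device = "cuda" if torch.cuda.is_available() else "cpu"
dtype = torch.float16 if device == "cuda" else torch.float32  # half matmul needs GPU support

x = torch.randn(2, 4096, dtype=dtype, device=device)
w = torch.randn(4096, 4096, dtype=dtype, device=device)

out_bs1 = x[:1] @ w        # first row computed at batch size 1
out_bs2 = (x @ w)[:1]      # the same row computed inside a batch of 2

diff = (out_bs1 - out_bs2).abs().max().item()
print(f"max abs diff for the same row at batch size 1 vs. 2: {diff}")
```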

@zhaochenyang20
Collaborator

Thanks! I should check the FAQ more frequently.

@zhaochenyang20
Collaborator

Yeah. The same thing also happens for reward model serving.

@qeternity
Contributor

qeternity commented Dec 27, 2024

I absolutely understand that things like batch size can affect determinism. My issue with writing everything off as "dynamic batching causes non-determinism" is that it will prevent us from investigating other bugs that present as non-determinism.

We would expect a long generation under heterogeneous batch sizes to exhibit some level of variance and non-determinism. What has been reported by myself and others is different, and is not strictly a function of batch size or concurrency either.

This issue has a good visualization: #1729

My PR here (#2165) demonstrates that we can get non-determinism at lower levels of concurrency depending on how we generate a single token, so it's not strictly dynamic batching, as far as I can tell.

@zhaochenyang20
Collaborator

> I absolutely understand that things like batch size can affect determinism. My issue with writing everything off as "dynamic batching causes non-determinism" is that it will prevent us from investigating other bugs that present as non-determinism.
>
> We would expect a long generation under heterogeneous batch sizes to exhibit some level of variance and non-determinism. What has been reported by myself and others is different, and is not strictly a function of batch size or concurrency either.
>
> This issue has a good visualization: #1729
>
> My PR here (#2165) demonstrates that we can get non-determinism at lower levels of concurrency depending on how we generate a single token, so it's not strictly dynamic batching, as far as I can tell.

Thanks. You will continue working on #1729 and #2165, right?
