[Bug] Different BatchSizes Can Affect Results #2567
Comments
I think it's related to the sampling parameters, not the server. What are the sampling parameters in your case?
temperature=0, repetition_penalty=1.1, top_k=1
Okay. Quite strange.
Let me discuss with our teammates first.
If I have not replied by next week, please @zhaochenyang20 at this issue. Thanks! @sitabulaixizawaluduo
TL;DR: dynamic batch size causes this. Different batch sizes invoke different CUDA kernels, which leads to numerical differences. These differences accumulate across the model's many layers. Features like continuous batching, chunked prefill, and prefix caching, as well as real-world factors like concurrency, produce nondeterministic batch sizes. I also raised a similar issue in vLLM: vllm-project/vllm#10074. In fact, sglang already documents this: https://sgl-project.github.io/references/faq.html
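A small, self-contained illustration of the underlying effect (not sglang-specific, a minimal sketch only): the same input row can produce slightly different values when multiplied alone versus as part of a larger batch, because different batch shapes can dispatch to different GPU kernels with different reduction orders. Whether a nonzero difference actually shows up depends on the hardware, dtype, and kernel selection.

```python
# Sketch of batch-size-dependent numerics in isolation (not sglang-specific).
# The same input row goes through the same matmul alone (batch=1) and as part
# of a batch of 8; different batch shapes may select different CUDA kernels or
# reduction orders, so the results can differ in the last bits.
import torch

torch.manual_seed(0)
device = "cuda" if torch.cuda.is_available() else "cpu"

x = torch.randn(8, 4096, device=device, dtype=torch.float16)
w = torch.randn(4096, 4096, device=device, dtype=torch.float16)

single = x[:1] @ w          # batch size 1
batched = (x @ w)[:1]       # same row, computed inside a batch of 8

max_diff = (single - batched).abs().max().item()
print(f"max abs difference for the same row: {max_diff}")
# A nonzero difference here is tiny per layer, but an LLM stacks many such
# layers, and greedy decoding can flip a token once the logits diverge enough.
```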
Thanks! I should check the FAQ more frequently.
Yeah. The same thing also happens for reward model serving.
I absolutely understand that things like batch size can affect determinism. My issue with writing off everything as "dynamic batching causes non-determinism" is that it prevents us from investigating other bugs that present as non-determinism. We would expect a long generation under heterogeneous batch sizes to exhibit some level of variance and non-determinism. What has been reported by myself and others is different, and is not strictly a function of batch size or concurrency either. This issue has a good visualization: #1729. My PR (#2165) demonstrates that we can achieve non-determinism at lower levels of concurrency depending on how we generate a single token. So it's not strictly dynamic batching, AFAICT.
Describe the bug
When I ran inference with sglang 0.4.0, I found that requests sent at concurrency 1 and concurrency 2 produced inconsistent results, causing a difference of 5 to 10 points when I scored the outputs with GPT-4o.
Reproduction
Model: Mixtral-8x7B
CMD: python3 -m sglang.launch_server --model-path models/Mixtral-8x7B-Instruct-v0.1/ --tp-size 4 --trust-remote-code --disable-cuda-graph --sampling-backend pytorch --disable-radix-cache --disable-overlap-schedule
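A minimal client-side sketch of the comparison described above, assuming the server launched by the command is listening on the default port 30000 and exposes sglang's native /generate endpoint; the prompt, URL, and exact payload shape are illustrative assumptions and should be adjusted to your deployment:

```python
# Sketch: send the same prompt at concurrency 1 and concurrency 2 and diff the
# outputs. URL, prompt, and payload shape are assumptions for illustration.
import concurrent.futures
import requests

URL = "http://127.0.0.1:30000/generate"  # assumed default host/port
PROMPT = "Explain the difference between TCP and UDP."  # hypothetical prompt

payload = {
    "text": PROMPT,
    "sampling_params": {
        "temperature": 0,
        "top_k": 1,
        "repetition_penalty": 1.1,
        "max_new_tokens": 256,
    },
}

def generate() -> str:
    resp = requests.post(URL, json=payload, timeout=600)
    resp.raise_for_status()
    return resp.json()["text"]

def run_at_concurrency(n: int) -> list[str]:
    # Fire n identical requests at once so the server batches them together.
    with concurrent.futures.ThreadPoolExecutor(max_workers=n) as pool:
        return list(pool.map(lambda _: generate(), range(n)))

baseline = run_at_concurrency(1)[0]
for i, out in enumerate(run_at_concurrency(2)):
    print(f"concurrency-2 request {i} matches concurrency-1 baseline:",
          out == baseline)
```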
Environment
Python: 3.10.16 (main, Dec 4 2024, 08:53:37) [GCC 9.4.0]
CUDA available: True
GPU 0,1,2,3,4,5,6,7: NVIDIA L40
GPU 0,1,2,3,4,5,6,7 Compute Capability: 8.9
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.4, V12.4.131
CUDA Driver Version: 535.104.12
PyTorch: 2.5.1+cu124
sglang: 0.4.0.post1
flashinfer: 0.1.6+cu124torch2.4
triton: 3.1.0
transformers: 4.45.2
torchao: 0.6.1
numpy: 1.26.4
aiohttp: 3.11.10
fastapi: 0.115.6
hf_transfer: 0.1.8
huggingface_hub: 0.26.3
interegular: 0.3.3
modelscope: 1.21.0
orjson: 3.10.12
packaging: 24.2
psutil: 6.1.0
pydantic: 2.10.3
multipart: 0.0.19
zmq: 26.2.0
uvicorn: 0.32.1
uvloop: 0.21.0
vllm: 0.6.4.post1
openai: 1.57.0
anthropic: 0.40.0
decord: 0.6.0
NVIDIA Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 NIC0 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X PIX PXB PXB SYS SYS SYS SYS PXB 0-31,64-95 0 N/A
GPU1 PIX X PXB PXB SYS SYS SYS SYS PXB 0-31,64-95 0 N/A
GPU2 PXB PXB X PXB SYS SYS SYS SYS PXB 0-31,64-95 0 N/A
GPU3 PXB PXB PXB X SYS SYS SYS SYS PIX 0-31,64-95 0 N/A
GPU4 SYS SYS SYS SYS X PIX PXB PXB SYS 32-63,96-127 1 N/A
GPU5 SYS SYS SYS SYS PIX X PXB PXB SYS 32-63,96-127 1 N/A
GPU6 SYS SYS SYS SYS PXB PXB X PXB SYS 32-63,96-127 1 N/A
GPU7 SYS SYS SYS SYS PXB PXB PXB X SYS 32-63,96-127 1 N/A
NIC0 PXB PXB PXB PIX SYS SYS SYS SYS X
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
NIC Legend:
NIC0: mlx5_bond_0
ulimit soft: 1048576