Your current environment
The output of python collect_env.py
Collecting environment information...
==============================
System Info
==============================
OS : Ubuntu 22.04.5 LTS (aarch64)
GCC version : (Ubuntu 12.3.0-1ubuntu1~22.04.2) 12.3.0
Clang version : 16.0.6 (++20231112100510+7cbf1a259152-1~exp1~20231112100554.106)
CMake version : version 4.1.0
Libc version : glibc-2.35
==============================
PyTorch Info
==============================
PyTorch version : 2.8.0+cpu
Is debug build : False
CUDA used to build PyTorch : None
ROCM used to build PyTorch : N/A
==============================
Python Environment
==============================
Python version : 3.10.12 (main, Aug 15 2025, 14:32:43) [GCC 11.4.0] (64-bit runtime)
Python platform : Linux-6.8.0-1040-aws-aarch64-with-glibc2.35
==============================
CUDA / GPU Info
==============================
Is CUDA available : False
CUDA runtime version : No CUDA
CUDA_MODULE_LOADING set to : N/A
GPU models and configuration : No CUDA
Nvidia driver version : No CUDA
cuDNN version : No CUDA
HIP runtime version : N/A
MIOpen runtime version : N/A
Is XNNPACK available : True
==============================
CPU Info
==============================
Architecture: aarch64
CPU op-mode(s): 64-bit
Byte Order: Little Endian
CPU(s): 96
On-line CPU(s) list: 0-95
Vendor ID: ARM
Model name: Neoverse-V2
Model: 1
Thread(s) per core: 1
Core(s) per socket: 96
Socket(s): 1
Stepping: r0p1
BogoMIPS: 2000.00
Flags: fp asimd evtstrm aes pmull sha1 sha2 crc32 atomics fphp asimdhp cpuid asimdrdm jscvt fcma lrcpc dcpop sha3 asimddp sha512 sve asimdfhm dit uscat ilrcpc flagm ssbs sb paca pacg dcpodp sve2 sveaes svepmull svebitperm svesha3 flagm2 frint svei8mm svebf16 i8mm bf16 dgh rng bti
L1d cache: 6 MiB (96 instances)
L1i cache: 6 MiB (96 instances)
L2 cache: 192 MiB (96 instances)
L3 cache: 36 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-95
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Reg file data sampling: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec rstack overflow: Not affected
Vulnerability Spec store bypass: Mitigation; Speculative Store Bypass disabled via prctl
Vulnerability Spectre v1: Mitigation; __user pointer sanitization
Vulnerability Spectre v2: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
==============================
Versions of relevant libraries
==============================
[pip3] numpy==2.2.6
[pip3] pyzmq==27.1.0
[pip3] torch==2.8.0+cpu
[pip3] torchaudio==2.8.0
[pip3] torchvision==0.23.0
[pip3] transformers==4.57.1
[conda] Could not collect
==============================
vLLM Info
==============================
ROCM Version : Could not collect
vLLM Version : 0.11.0rc2.dev481+g74704d455.d20251016 (git sha: 74704d455, date: 20251016)
vLLM Build Flags:
CUDA Archs: Not Set; ROCm: Disabled
GPU Topology:
Could not collect
==============================
Environment Variables
==============================
PYTORCH_NVML_BASED_CUDA_CHECK=1
🐛 Describe the bug
Incorrect outputs with batch size > 1 on CPU
I tested a 1B Llama instruct model (with greedy decoding, i.e. deterministic; see the script below) and found the following:
- When prefix caching is disabled (not the default): the outputs from generating responses to a batch of 5 requests differ from the outputs of running these requests one at a time. Only the first request in the batch is correct. This is because we run prefill (for all but the first prompt) over the wrong query/key/value tensors.
- When prefix caching is enabled (the default): the outputs from generating responses to a batch of 5 requests differ from the outputs of running these requests one at a time, AND also differ from the case where prefix caching is disabled. This is because with prefix caching we store the K/V of the prefix (which we prefilled for an earlier prompt) in the paged cache, and our prefill SDPA is not aware of it, i.e. the prefill SDPA only runs over the K/V from the suffix (see the sketch after this list).
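To make the prefix-caching failure mode concrete, here is a minimal PyTorch sketch. It is not vLLM code, and the shapes and sequence lengths are illustrative assumptions; it only shows that suffix queries attending over the suffix K/V alone produce different outputs than queries that also attend over the (cached) prefix K/V.
# Minimal sketch, NOT vLLM internals: illustrates why an SDPA call that only sees
# the suffix K/V diverges from one that also attends over the cached prefix K/V.
# All shapes and lengths are illustrative assumptions.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
head_dim, prefix_len, suffix_len = 64, 16, 8

# K/V for the shared prefix (conceptually, what already sits in the paged cache)
# and Q/K/V for the new suffix tokens of the current prompt.
k_prefix, v_prefix = torch.randn(2, 1, 1, prefix_len, head_dim)
k_suffix, v_suffix = torch.randn(2, 1, 1, suffix_len, head_dim)
q_suffix = torch.randn(1, 1, suffix_len, head_dim)

# Correct behaviour: suffix queries attend causally over prefix + suffix keys/values.
k_full = torch.cat([k_prefix, k_suffix], dim=2)
v_full = torch.cat([v_prefix, v_suffix], dim=2)
q_pos = torch.arange(prefix_len, prefix_len + suffix_len).unsqueeze(1)  # absolute query positions
k_pos = torch.arange(prefix_len + suffix_len).unsqueeze(0)              # absolute key positions
causal_mask = k_pos <= q_pos  # True = may attend
out_full = F.scaled_dot_product_attention(q_suffix, k_full, v_full, attn_mask=causal_mask)

# Failure mode described above: SDPA runs over the suffix K/V only,
# silently ignoring the prefix K/V held in the paged cache.
out_suffix_only = F.scaled_dot_product_attention(q_suffix, k_suffix, v_suffix, is_causal=True)

print(torch.allclose(out_full, out_suffix_only))  # False: the attention outputs diverge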
Here are five prompts I tested:
- Memorize this short fact for later: the verification passphrase is cobalt fig 41. Answer the following, with no explanation: what is the verification passphrase?
- Memorize this short fact for later: the verification passphrase is cobalt fig 913. Answer the following, with no explanation: what is the verification passphrase?
- Memorize this short fact for later: the verification passphrase is cobalt fig 768. Answer the following, with no explanation: what is the verification passphrase?
- Memorize this short fact for later: the verification passphrase is cobalt fig 9687. Answer the following, with no explanation: what is the verification passphrase?
- Memorize this short fact for later: the verification passphrase is cobalt fig 8578. Answer the following, with no explanation: what is the verification passphrase?
Here are the reference outputs, when running the requests with a batch size of 1 (one at a time):
- Cobalt 41
- Cobalt fig 913
- 768
- Cobalt 9687
- 8578
Here are the outputs when we batch the requests, with prefix-caching disabled (not the default):
- Cobalt 41
- 913
- 768
- Tags: Cobalt, fig
- Tags: verification passphrase
Here are the outputs when we batch the requests, with prefix caching enabled (default):
- Cobalt 41
- VERIFIED\nVERIFICATION\nVerification Passphrase: Cobalt Fig
- VERIFICATION PASSPHRASE: COBALT FIG
- Verifying a verification passphrase is cobalt fig 1.
- It seems like you're looking for a fact to memorize. Here's one:\n"The shortest war in history was between Britain and Zanzibar on August 27, 1896, and lasted only 38 minutes."\nThis fact is often cited as the shortest war in recorded history.
Here's a script to reproduce the issues:
# SPDX-FileCopyrightText: Copyright 2025 Arm Limited and/or its affiliate <[email protected]>
# SPDX-License-Identifier: BSD-3-Clause
import os

os.environ["VLLM_USE_V1"] = "1"

from vllm import LLM, SamplingParams
from transformers import AutoTokenizer
from argparse import ArgumentParser

SHARED_PART_OF_PROMPT = """
Memorize this short fact for later: the verification passphrase is {prompt_key}.
Answer the following, with no explanation: what is the verification passphrase?
"""

PROMPT_KEYS = [
    "cobalt fig 41",
    "cobalt fig 913",
    "cobalt fig 768",
    "cobalt fig 9687",
    "cobalt fig 8578",
]


def main(args):
    model_id = "meta-llama/Llama-3.2-1B-Instruct"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    messages = [
        [{"role": "user", "content": SHARED_PART_OF_PROMPT.format(prompt_key=prompt_key)}]
        for prompt_key in PROMPT_KEYS
    ]
    messages_with_chat_template = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, tokenize=False
    )
    llm = LLM(
        model=model_id,
        seed=0,
        max_num_batched_tokens=4096,
        max_model_len=4096,
        enable_prefix_caching=args.with_prefix_caching,
        dtype="float32",
    )
    params = SamplingParams(
        seed=0,
        max_tokens=128,
        temperature=0.0,  # greedy decoding
        top_p=1.0,
    )
    results = llm.generate(messages_with_chat_template, params)
    for req_i, req_out in enumerate(results):
        print(f"\n=== Prompt {req_i} ===")
        try:
            print(req_out.prompt)
        except AttributeError:
            print(messages_with_chat_template[req_i])
        for cand in req_out.outputs:
            print(f"\n--- Candidate {cand.index} ---")
            print(cand.text)
            print("finish_reason:", cand.finish_reason)
            print("num_tokens:", len(cand.token_ids))


if __name__ == "__main__":
    parser = ArgumentParser()
    parser.add_argument("--with-prefix-caching", action="store_true")
    args = parser.parse_args()
    main(args)
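To reproduce both batched cases, run the script once without any flag (prefix caching disabled) and once with --with-prefix-caching (the vLLM-default prefix-caching behaviour). The batch-size-1 reference outputs were obtained by running the requests one at a time; a hypothetical way to do that with the same setup, appended to the end of main() above, is sketched below. It is not part of the original repro, and is best run in a fresh process so no prefix-cache state from the batched run carries over.
    # Hypothetical addition (not in the original repro): generate each prompt in its
    # own llm.generate() call to obtain batch-size-1 reference outputs, then compare
    # against the batched results. With prefix caching enabled, K/V cached by the
    # batched run above could be reused here, so a fresh process gives a cleaner reference.
    single_results = [llm.generate([prompt], params)[0] for prompt in messages_with_chat_template]
    for req_i, (batched, single) in enumerate(zip(results, single_results)):
        same = batched.outputs[0].text == single.outputs[0].text
        print(f"Prompt {req_i}: batched output matches single-request output? {same}")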
Before submitting a new issue...
- Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.