[Frontend][Misc] Goodput metric support #9338
Conversation
👋 Hi! Thank you for contributing to the vLLM project. Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging. To run CI, PR reviewers can do one of these:
Hey @ywang96 @KuntaiDu do you have any thoughts on @Imss27's question above?
benchmarks/benchmark_serving.py
Outdated
```python
parser.add_argument(
    "--goodput",
    nargs="+",
    required=False,
    help="Specify service level objectives for goodput as \"KEY:VALUE\" "
    "pairs, where the key is a metric name, and the value is in "
    "milliseconds. Multiple \"KEY:VALUE\" pairs can be provided, "
    "separated by spaces. Allowed request level metric names are "
    "\"ttft\", \"tpot\", \"e2el\". ",
)
```
Better to provide the paper link given that not many people know goodput atm.
benchmarks/benchmark_serving.py
Outdated
```python
if slo_name not in VALID_NAMES:
    raise ValueError(
        f"Invalid metric name found, {slo_name}: {slo_val}. "
        f"The service level objective name should be one of "
        f"\"ttft\", \"tpot\", \"e2el\". ")
```
Suggested change:
```diff
 if slo_name not in VALID_NAMES:
     raise ValueError(
         f"Invalid metric name found, {slo_name}: {slo_val}. "
-        f"The service level objective name should be one of "
-        f"\"ttft\", \"tpot\", \"e2el\". ")
+        "The service level objective name should be one of "
+        f"{str(VALID_NAMES)}.")
```
benchmarks/benchmark_serving.py
Outdated
f"The service level objective value should be " | ||
f"non-negative.") |
No need to add `f` if there are no variables.
benchmarks/benchmark_serving.py
Outdated
```diff
@@ -664,6 +738,8 @@ def main(args: argparse.Namespace):
     else:
         raise ValueError(f"Unknown dataset: {args.dataset_name}")

+    slos_dict = check_goodput_args(args)
```
The name `slos_dict` is confusing in this script. Maybe call it `gootput_config_dict` or `slo_config_dict`. I'd prefer to have `goodput` in the name because the purpose of this config is only for calculating goodput.
benchmarks/benchmark_serving.py
Outdated
```python
if slos_dict:
    valid_metrics = []
    slo_values = []
    MS_TO_S = 1000
```
Make this a global variable and rename it to be clearer.
Goodput is a well-defined term in https://arxiv.org/pdf/2401.09670. @Imss27 please provide more information about goodput in the RFC, PR description and the script.
Yes - I can confirm that in our paper, a request is "good" if both its TTFT and TPOT meet the corresponding SLOs.
In practice, whether this is a good enough metric remains questionable, although it is what a lot of other LLM system papers still continue to use.
Thanks for the review and discussion! Updated according to the great suggestions. @comaniac Please take a look when you get a chance 😄
Otherwise LGTM
if "ttft" in gootput_config_dict: | ||
valid_metrics.append(ttfts) | ||
slo_values.append(gootput_config_dict["ttft"] / | ||
MILLISECONDS_TO_SECONDS_CONVERSION) | ||
if "tpot" in gootput_config_dict: | ||
valid_metrics.append(all_tpots) | ||
slo_values.append(gootput_config_dict["tpot"] / | ||
MILLISECONDS_TO_SECONDS_CONVERSION) | ||
if "e2el" in gootput_config_dict: | ||
valid_metrics.append(e2els) | ||
slo_values.append(gootput_config_dict["e2el"] / | ||
MILLISECONDS_TO_SECONDS_CONVERSION) |
With the current implementation, every new metric has to be added manually. Can we make this logic more flexible? For example, I may want to use P90 e2e latency as the SLO later.
This is indeed a problem for possible future request-level metrics. A more extensible implementation would require much more refactoring and should be a separate PR: we would need to decouple the request-level `Metrics` and the computation of the output into different classes. That way, whenever a new metric is defined in the request-level `Metrics` class, we can easily support using it as an SLO for goodput computation.
But for the P90 e2e latency, I think we don't need to worry about it. The goodput we defined is a request-level metric, while P90 e2e latency can only be computed from all requests' information. If the SLO contains only an e2e latency constraint and we set it to the P90 e2e latency value, it basically means we want goodput ≈ throughput * 90%. But SLOs should be set based on the LLM application, and the SLO values should be something that brings a good user experience.
Makes sense. So I could use e2e latency as the SLO and check whether goodput >= 0.9 * throughput to see if my P90 latency requirement is met.
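As a rough sketch of that check (all names and numbers below are illustrative; `np.percentile` computes the P90 target):

```python
import numpy as np

# Per-request end-to-end latencies in seconds and the benchmark wall time
# (illustrative numbers only).
e2els = [0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.5, 1.8, 2.0, 2.4]
duration = 10.0

p90_e2el = np.percentile(e2els, 90)           # P90 latency target
good = sum(lat <= p90_e2el for lat in e2els)  # requests meeting the SLO
goodput = good / duration                     # "good" requests per second
throughput = len(e2els) / duration

# With the e2el SLO set to the measured P90, goodput ~= 0.9 * throughput,
# so goodput >= 0.9 * throughput indicates the P90 requirement is met.
print(f"throughput={throughput:.2f} req/s, goodput={goodput:.2f} req/s")
```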
benchmarks/benchmark_serving.py
Outdated
```python
req_metric_list = list(zip(*valid_metrics))
for req_metric in req_metric_list:
```
Suggested change:
```diff
-req_metric_list = list(zip(*valid_metrics))
-for req_metric in req_metric_list:
+for req_metric in zip(*valid_metrics):
```
benchmarks/benchmark_serving.py
Outdated
```python
is_good_req = True
for i in range(len(slo_values)):
    if slo_values[i] < req_metric[i]:
        is_good_req = False
        break
```
Suggested change:
```diff
-is_good_req = True
-for i in range(len(slo_values)):
-    if slo_values[i] < req_metric[i]:
-        is_good_req = False
-        break
+is_good_req = all([s < r for s, r in zip(slo_values, req_metric)])
```
Thanks! Updated according to all suggestions here. 😄
Found a bug during local test 🤣: we should compare values in the opposite direction if using `all()`.
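The corrected comparison presumably flips the direction so that a request counts as good only when each metric is at or below its SLO; a minimal sketch using the names from the snippets above:

```python
# Good request: every request-level metric meets (does not exceed) its SLO.
is_good_req = all(r <= s for s, r in zip(slo_values, req_metric))
```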
This PR adds support for a goodput metric when benchmarking the serving system.
If the user doesn't specify `--goodput`, request goodput will not be shown. Supported service level objectives: TTFT, TPOT, E2EL.
FIX #8782
Goodput Context
Goodput is defined as the number of completed requests per second that meet certain Service Level Objectives (SLOs), such as TTFT, TPOT, and E2E Latency, which can be defined by the users.
Goodput aims to benchmark GenAI services from a user's perspective and provides insights for service providers to further optimize their serving and inference systems for a better user experience.
Why do we need Goodput?
1. Users tolerate latency (TTFT, TPOT, E2E Latency) differently for different applications.
2. High throughput does not mean high goodput; see the sketch below.
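To make point 2 concrete, here is a small sketch (illustrative numbers only) of how throughput can look healthy while goodput does not:

```python
# Each tuple is (ttft_s, e2el_s) for one completed request (made-up data).
requests = [(0.2, 1.0), (0.3, 1.2), (1.5, 4.0), (2.0, 5.5)]
duration = 2.0                      # benchmark wall time in seconds
slo = {"ttft": 0.5, "e2el": 2.0}    # SLOs already converted from ms to s

throughput = len(requests) / duration                  # 2.0 req/s
good = sum(ttft <= slo["ttft"] and e2el <= slo["e2el"]
           for ttft, e2el in requests)
goodput = good / duration                              # 1.0 req/s

# All four requests completed, but two violated an SLO: throughput alone
# hides the poor user experience that goodput exposes.
print(f"throughput={throughput} req/s, goodput={goodput} req/s")
```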
Usage Example
Start a server
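For example (the model name is a placeholder; any model served by vLLM works):

```bash
vllm serve meta-llama/Llama-3.1-8B-Instruct --disable-log-requests
```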
Benchmark serving with goodput
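A hedged example invocation; the dataset path and request rate are placeholders, while `--goodput` takes the `KEY:VALUE` pairs (in milliseconds) described earlier:

```bash
python benchmarks/benchmark_serving.py \
    --backend vllm \
    --model meta-llama/Llama-3.1-8B-Instruct \
    --dataset-name sharegpt \
    --dataset-path ShareGPT_V3_unfiltered_cleaned_split.json \
    --request-rate 4 \
    --goodput ttft:300 tpot:50
```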
Example Console Output
TODO
cc @ywang96 @KuntaiDu: any inputs from you and the community will be extremely valuable for future CI dashboard work on goodput, e.g., what kinds of plots to show and what SLOs to set for different LLM applications on different CI hardware.
PR Checklist (Click to Expand)
Thank you for your contribution to vLLM! Before submitting the pull request, please ensure the PR meets the following criteria. This helps vLLM maintain the code quality and improve the efficiency of the review process.
PR Title and Classification
Only specific types of PRs will be reviewed. The PR title is prefixed appropriately to indicate the type of change. Please use one of the following:
- `[Bugfix]` for bug fixes.
- `[CI/Build]` for build or continuous integration improvements.
- `[Doc]` for documentation fixes and improvements.
- `[Model]` for adding a new model or improving an existing model. Model name should appear in the title.
- `[Frontend]` for changes on the vLLM frontend (e.g., OpenAI API server, `LLM` class, etc.)
- `[Kernel]` for changes affecting CUDA kernels or other compute kernels.
- `[Core]` for changes in the core vLLM logic (e.g., `LLMEngine`, `AsyncLLMEngine`, `Scheduler`, etc.)
- `[Hardware][Vendor]` for hardware-specific changes. Vendor name should appear in the prefix (e.g., `[Hardware][AMD]`).
- `[Misc]` for PRs that do not fit the above categories. Please use this sparingly.

Note: If the PR spans more than one category, please include all relevant prefixes.
Code Quality
The PR needs to meet the following code quality standards:
- Please use `format.sh` to format your code.
- Please add documentation to `docs/source/` if the PR modifies the user-facing behaviors of vLLM. It helps vLLM users understand and utilize the new features or changes.

Adding or changing kernels
Each custom kernel needs a schema and one or more implementations to be registered with PyTorch.
- Tensors require meta-functions. Meta-functions should be implemented and registered in Python so that dynamic dims can be handled automatically. See above documents for a description of meta-functions.
- Use `torch.library.opcheck()` to test the function registration and meta-function for any registered ops. See `tests/kernels` for examples.

Notes for Large Changes
Please keep the changes as concise as possible. For major architectural changes (>500 LOC excluding kernel/data/config/test), we would expect a GitHub issue (RFC) discussing the technical design and justification. Otherwise, we will tag it with `rfc-required` and might not go through the PR.

What to Expect for the Reviews
The goal of the vLLM team is to be a transparent reviewing machine. We would like to make the review process transparent and efficient and make sure no contributor feels confused or frustrated. However, the vLLM team is small, so we need to prioritize some PRs over others. Here is what you can expect from the review process:
- The reviewer will add an `action-required` label on the PR if there are changes required. The contributor should address the comments and ping the reviewer to re-review the PR.

Thank You
Finally, thank you for taking the time to read these guidelines and for your interest in contributing to vLLM. Your contributions make vLLM a great tool for everyone!