
[Frontend][Misc] Goodput metric support #9338

Merged (8 commits) on Oct 20, 2024

Conversation

@Imss27 (Contributor) commented Oct 14, 2024

This PR adds support for a goodput metric when benchmarking the serving system.
If the user doesn't specify --goodput, request goodput is not reported.
Supported service level objectives: TTFT, TPOT, E2EL.

FIX #8782

Goodput Context

Goodput is defined as the number of completed requests per second that meet certain Service Level Objectives (SLOs), such as TTFT, TPOT, and E2E latency, which can be defined by the user.

Goodput aims to benchmark GenAI services from a user's perspective and provides insights that help service providers further optimize their serving and inference systems for a better user experience.
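
As a minimal sketch of the computation (illustrative only, not the benchmark script's exact code), a request counts toward goodput only if every tracked metric meets its SLO:

def request_goodput(requests, slos, duration_s):
    # requests: list of per-request metric dicts in seconds, e.g.
    #   {"ttft": 0.28, "tpot": 0.013, "e2el": 1.9}
    # slos: SLO limits in seconds, e.g. {"ttft": 0.5, "tpot": 0.014}
    good = sum(1 for r in requests
               if all(r[name] <= limit for name, limit in slos.items()))
    return good / duration_s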

Why do we need Goodput?

1. Users tolerate latency (TTFT, TPOT, E2E latency) differently for different applications. [figure: diverse_slo]

2. High throughput does not mean high goodput. [figure: goodput]

Usage Example

Start a server

vllm serve meta-llama/Llama-3.2-1B-Instruct \
    --disable-log-requests \
    --tensor-parallel-size 1
Benchmark serving with goodput

cd benchmarks
python benchmark_serving.py \
    --backend vllm \
    --model meta-llama/Llama-3.2-1B-Instruct \
    --dataset-name sonnet \
    --num-prompts 10 \
    --dataset-path="sonnet.txt" \
    --sonnet-input-len 600 \
    --save-result  \
    --goodput ttft:500 tpot:14 e2el:2500
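
The --goodput flag takes space-separated "KEY:VALUE" pairs with values in milliseconds. A minimal sketch of parsing such pairs (the helper name is hypothetical, not necessarily the script's own):

def parse_goodput(pairs):
    # Turns ["ttft:500", "tpot:14", "e2el:2500"] into
    # {"ttft": 500.0, "tpot": 14.0, "e2el": 2500.0} (milliseconds).
    slos = {}
    for pair in pairs:
        name, _, value = pair.partition(":")
        slos[name.lower()] = float(value)
    return slos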

Example Console Output

============ Serving Benchmark Result ============
Successful requests:                     10        
Benchmark duration (s):                  2.37      
Total input tokens:                      5477      
Total generated tokens:                  1500      
Request throughput (req/s):              4.23      
Request goodput (req/s):                 1.69      
Output token throughput (tok/s):         634.23    
Total Token throughput (tok/s):          2950.01   
---------------Time to First Token----------------
Mean TTFT (ms):                          288.13    
Median TTFT (ms):                        272.04    
P99 TTFT (ms):                           435.43    
-----Time per Output Token (excl. 1st token)------
Mean TPOT (ms):                          13.93     
Median TPOT (ms):                        14.04     
P99 TPOT (ms):                           14.89     
---------------Inter-token Latency----------------
Mean ITL (ms):                           13.93     
Median ITL (ms):                         12.85     
P99 ITL (ms):                            14.35     
==================================================

TODO

  • Investigate proper service level objectives for CI Dashboard model.
  • Add goodput plot to CI Dashboard.

cc: @ywang96 @KuntaiDu Any input from you and the community will be extremely valuable for future CI Dashboard work on goodput, e.g. what kinds of plots to show and what SLOs to set for different LLM applications on different CI hardware.


PR Checklist

Thank you for your contribution to vLLM! Before submitting the pull request, please ensure the PR meets the following criteria. This helps vLLM maintain the code quality and improve the efficiency of the review process.

PR Title and Classification

Only specific types of PRs will be reviewed. The PR title is prefixed appropriately to indicate the type of change. Please use one of the following:

  • [Bugfix] for bug fixes.
  • [CI/Build] for build or continuous integration improvements.
  • [Doc] for documentation fixes and improvements.
  • [Model] for adding a new model or improving an existing model. Model name should appear in the title.
  • [Frontend] For changes on the vLLM frontend (e.g., OpenAI API server, LLM class, etc.)
  • [Kernel] for changes affecting CUDA kernels or other compute kernels.
  • [Core] for changes in the core vLLM logic (e.g., LLMEngine, AsyncLLMEngine, Scheduler, etc.)
  • [Hardware][Vendor] for hardware-specific changes. Vendor name should appear in the prefix (e.g., [Hardware][AMD]).
  • [Misc] for PRs that do not fit the above categories. Please use this sparingly.

Note: If the PR spans more than one category, please include all relevant prefixes.

Code Quality

The PR needs to meet the following code quality standards:

  • We adhere to Google Python style guide and Google C++ style guide.
  • Pass all linter checks. Please use format.sh to format your code.
  • The code needs to be well-documented to ensure future contributors can easily understand it.
  • Include sufficient tests to ensure the project stays correct and robust. This includes both unit tests and integration tests.
  • Please add documentation to docs/source/ if the PR modifies user-facing behaviors of vLLM. It helps vLLM users understand and utilize the new features or changes.

Adding or changing kernels

Each custom kernel needs a schema and one or more implementations to be registered with PyTorch.

  • Make sure custom ops are registered following PyTorch guidelines: Custom C++ and CUDA Operators and The Custom Operators Manual
  • Custom operations that return Tensors require meta-functions. Meta-functions should be implemented and registered in python so that dynamic dims can be handled automatically. See above documents for a description of meta-functions.
  • Use torch.library.opcheck() to test the function registration and meta-function for any registered ops. See tests/kernels for examples.
  • When changing the C++ signature of an existing op, the schema must be updated to reflect the changes.
  • If a new custom type is needed, see the following document: Custom Class Support in PT2.

Notes for Large Changes

Please keep the changes as concise as possible. For major architectural changes (>500 LOC excluding kernel/data/config/test), we would expect a GitHub issue (RFC) discussing the technical design and justification. Otherwise, we will tag it with rfc-required and might not review the PR.

What to Expect for the Reviews

The goal of the vLLM team is to be a transparent reviewing machine. We would like to make the review process transparent and efficient, and to make sure no contributor feels confused or frustrated. However, the vLLM team is small, so we need to prioritize some PRs over others. Here is what you can expect from the review process:

  • After the PR is submitted, the PR will be assigned to a reviewer. Every reviewer will pick up the PRs based on their expertise and availability.
  • After the PR is assigned, the reviewer will provide a status update every 2-3 days. If the PR is not reviewed within 7 days, please feel free to ping the reviewer or the vLLM team.
  • After the review, the reviewer will put an action-required label on the PR if there are changes required. The contributor should address the comments and ping the reviewer to re-review the PR.
  • Please respond to all comments within a reasonable time frame. If a comment isn't clear or you disagree with a suggestion, feel free to ask for clarification or discuss the suggestion.

Thank You

Finally, thank you for taking the time to read these guidelines and for your interest in contributing to vLLM. Your contributions make vLLM a great tool for everyone!


👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only fastcheck CI runs, which starts only a small and essential subset of CI tests to quickly catch errors. You can run other CI tests on top of those by going to your fastcheck build on the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add the ready label to the PR
  • Enable auto-merge.

🚀

@andoorve (Collaborator)
Hey @ywang96 @KuntaiDu, do you have any thoughts on @Imss27's question above?

> cc: @ywang96 @KuntaiDu Any input from you and the community will be extremely valuable for future CI Dashboard work on goodput, e.g. what kinds of plots to show and what SLOs to set for different LLM applications on different CI hardware.

@KuntaiDu (Collaborator)
I am not an industry guy, so I am not the best person to check whether defining goodput as TTFT < TTFT SLO and average TPOT < TPOT SLO is the right approach (especially using average TPOT < TPOT SLO). @comaniac @ywang96, it would be great if you could share your thoughts!

Comment on lines 961 to 970
parser.add_argument(
    "--goodput",
    nargs="+",
    required=False,
    help="Specify service level objectives for goodput as \"KEY:VALUE\" "
    "pairs, where the key is a metric name, and the value is in "
    "milliseconds. Multiple \"KEY:VALUE\" pairs can be provided, "
    "separated by spaces. Allowed request level metric names are "
    "\"ttft\", \"tpot\", \"e2el\". ",
)
Collaborator:

Better to provide the paper link given that not many people know goodput atm.

Comment on lines 620 to 624
if slo_name not in VALID_NAMES:
    raise ValueError(
        f"Invalid metric name found, {slo_name}: {slo_val}. "
        f"The service level objective name should be one of "
        f"\"ttft\", \"tpot\", \"e2el\". ")
Collaborator:

Suggested change:

- if slo_name not in VALID_NAMES:
-     raise ValueError(
-         f"Invalid metric name found, {slo_name}: {slo_val}. "
-         f"The service level objective name should be one of "
-         f"\"ttft\", \"tpot\", \"e2el\". ")
+ if slo_name not in VALID_NAMES:
+     raise ValueError(
+         f"Invalid metric name found, {slo_name}: {slo_val}. "
+         "The service level objective name should be one of "
+         f"{str(VALID_NAMES)}.")

Comment on lines 628 to 629
f"The service level objective value should be "
f"non-negative.")
Collaborator:

No need to add f if there are no variables.

@@ -664,6 +738,8 @@ def main(args: argparse.Namespace):
    else:
        raise ValueError(f"Unknown dataset: {args.dataset_name}")

    slos_dict = check_goodput_args(args)
Collaborator:

The name slos_dict is confusing in this script. Maybe call it gootput_config_dict or slo_config_dict. I'd prefer to have goodput in the name because the purpose of this config is only for calculating goodput.

if slos_dict:
    valid_metrics = []
    slo_values = []
    MS_TO_S = 1000
Collaborator:

Make this a global variable and rename it to be clearer.
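
For instance, matching the constant name that appears in a later hunk of this PR:

# Module-level constant instead of a function-local one: SLO values are
# given in milliseconds, while per-request metrics are recorded in seconds.
MILLISECONDS_TO_SECONDS_CONVERSION = 1000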

@comaniac (Collaborator)

> I am not an industry guy, so I am not the best person to check whether defining goodput as TTFT < TTFT SLO and average TPOT < TPOT SLO is the right approach (especially using average TPOT < TPOT SLO). @comaniac @ywang96, it would be great if you could share your thoughts!

Goodput is a well-defined term in https://arxiv.org/pdf/2401.09670. @Imss27 please provide more information about goodput in the RFC, PR description and the script.

@GindaChen (Contributor)

> I am not an industry guy, so I am not the best person to check whether defining goodput as TTFT < TTFT SLO and average TPOT < TPOT SLO is the right approach (especially using average TPOT < TPOT SLO). @comaniac @ywang96, it would be great if you could share your thoughts!

Yes - I can confirm that in our paper, a request is "good" if

  • its TTFT < TTFT SLO
  • its average TPOT < TPOT SLO

In practice, whether this is a good enough metric remains questionable, although it is what many other LLM systems papers still use.
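
In code terms, that per-request criterion reads roughly as follows (a sketch with assumed variable names, not code from this PR):

# Good per the paper's definition: TTFT and average TPOT both meet
# their SLOs (all values in the same time unit).
is_good = ttft < ttft_slo and (sum(tpots) / len(tpots)) < tpot_slo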

@Imss27 (Contributor, Author) commented Oct 19, 2024

Thanks for the review and discussion! Updated according to the great suggestions.

@comaniac Please take a look when you get a chance😄

@comaniac (Collaborator) left a comment:

Otherwise LGTM

Comment on lines +362 to +373
if "ttft" in gootput_config_dict:
valid_metrics.append(ttfts)
slo_values.append(gootput_config_dict["ttft"] /
MILLISECONDS_TO_SECONDS_CONVERSION)
if "tpot" in gootput_config_dict:
valid_metrics.append(all_tpots)
slo_values.append(gootput_config_dict["tpot"] /
MILLISECONDS_TO_SECONDS_CONVERSION)
if "e2el" in gootput_config_dict:
valid_metrics.append(e2els)
slo_values.append(gootput_config_dict["e2el"] /
MILLISECONDS_TO_SECONDS_CONVERSION)
Collaborator:

The current implementation requires manually adding more metrics as needed. Can we make this logic more flexible? For example, I may want to use P90 e2e latency as the SLO later.

@Imss27 (Contributor, Author) replied Oct 20, 2024:

This is indeed a problem for possible future request-level metrics. A more extensible implementation would require much more refactoring and should be a separate PR: we would need to decouple the request-level metrics from the computation of the output into different classes. That way, whenever a new metric is defined in the request-level metrics class, we could easily support using it as an SLO for goodput computation (see the sketch after this comment).

But for P90 e2e latency, I think we don't need to worry about it.

The goodput we defined is a request-level metric, while P90 e2e latency can only be computed from all requests' information. If the SLO contains only an e2e latency constraint and we set it to the P90 e2e latency value, that basically means we want goodput ≈ throughput * 90%. But SLOs should be set based on the LLM application, and the SLO values should be ones that deliver a good user experience.
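
A rough sketch of the decoupling idea mentioned above (all names are hypothetical; this is not code from the PR):

# Hypothetical registry: each request-level metric declares how to pull
# its per-request series, so any of them can serve as a goodput SLO.
REQUEST_LEVEL_METRICS = {
    "ttft": lambda m: m.ttfts,
    "tpot": lambda m: m.all_tpots,
    "e2el": lambda m: m.e2els,
}

def collect_slo_series(metrics, gootput_config_dict):
    valid_metrics, slo_values = [], []
    for name, slo_ms in gootput_config_dict.items():
        valid_metrics.append(REQUEST_LEVEL_METRICS[name](metrics))
        slo_values.append(slo_ms / MILLISECONDS_TO_SECONDS_CONVERSION)
    return valid_metrics, slo_values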

Collaborator:

Makes sense. So I could use e2e latency as the SLO and check whether goodput >= 0.9 * throughput to see if my P90 latency requirement is met.
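
In other words (metric field names assumed for illustration):

# With e2el as the only SLO, goodput/throughput is the fraction of
# requests meeting it; >= 0.9 implies the P90 e2e latency meets the SLO.
p90_requirement_met = metrics.request_goodput >= 0.9 * metrics.request_throughput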

Comment on lines 375 to 376
req_metric_list = list(zip(*valid_metrics))
for req_metric in req_metric_list:
Collaborator:

Suggested change:

- req_metric_list = list(zip(*valid_metrics))
- for req_metric in req_metric_list:
+ for req_metric in zip(*valid_metrics):

Comment on lines 377 to 381
is_good_req = True
for i in range(len(slo_values)):
    if slo_values[i] < req_metric[i]:
        is_good_req = False
        break
Collaborator:

Suggested change:

- is_good_req = True
- for i in range(len(slo_values)):
-     if slo_values[i] < req_metric[i]:
-         is_good_req = False
-         break
+ is_good_req = all([s < r for s, r in zip(slo_values, req_metric)])

@Imss27 (Contributor, Author):

Thanks! Updated according to all suggestions here.😄

@comaniac added the ready label on Oct 20, 2024
@Imss27 (Contributor, Author) commented Oct 20, 2024

Found a bug during local testing 🤣: we should compare values in the opposite direction when using all() over the SLOs and req_metrics. Please take a look when you get a chance. @comaniac
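
A sketch of the corrected comparison (a request is good only when every observed metric is at or below its SLO):

# Flipped direction relative to the earlier suggestion:
is_good_req = all(s >= r for s, r in zip(slo_values, req_metric))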

@comaniac comaniac enabled auto-merge (squash) October 20, 2024 17:59
@comaniac comaniac merged commit 855e0e6 into vllm-project:main Oct 20, 2024
39 checks passed
Labels: ready (ONLY add when PR is ready to merge/full CI is needed)

May close: [RFC]: Add Goodput Metric to Benchmark Serving