[DOCS] Proposal to extend the Quickstart - Scale and Distribute #615

Open
ffuerste opened this issue Sep 20, 2024 · 3 comments · May be fixed by #654

@ffuerste

Hello,

When I followed the quickstart section on scaling up the hello world application, it felt more like a thought experiment.

What was roughly going on in my head:

  • Okay, I can configure scaling capabilities, and now my handler can handle multiple requests in parallel ...
  • Wait, it wasn't able to do that by default?
  • Okay, how can I check that?

Therefore, I think the Scaling up section could be extended a bit to explain why the handler can currently only handle a single request at a time (single thread) and how to test that.

For testing purposes I added a simple sleep to the handler:

...
use std::{thread, time};
...
        let sleep = time::Duration::from_secs(2);
        logging::log(
            logging::Level::Info,
            "",
            &format!("{handler_name} - Sleep for {} to simulate longer processing time", sleep.as_secs()),
        );
        thread::sleep(sleep);
...
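To see why the sleep makes the limitation visible, here is a self-contained plain-Rust sketch (not part of the quickstart; all names are mine, and I use a 200 ms delay instead of 2 s) contrasting one handler processing requests serially with several handling them concurrently, which is roughly what adding replicas achieves:

```rust
use std::thread;
use std::time::{Duration, Instant};

// Stand-in for the quickstart handler: a fixed processing delay.
fn handle_request(name: &str) -> String {
    thread::sleep(Duration::from_millis(200));
    format!("Hello, {name}!")
}

fn main() {
    // Serial: a single "replica" works through all requests one by one.
    let start = Instant::now();
    for i in 1..=5 {
        handle_request(&format!("Alice-{i}"));
    }
    let serial = start.elapsed();

    // Concurrent: one thread per request, like multiple replicas.
    let start = Instant::now();
    let handles: Vec<_> = (1..=5)
        .map(|i| thread::spawn(move || handle_request(&format!("Bob-{i}"))))
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let concurrent = start.elapsed();

    println!("serial: {serial:?}, concurrent: {concurrent:?}");
    // Five serial requests take ~5x the delay; concurrent ones take ~1x.
    assert!(concurrent < serial);
}
```

With the serial handler, total latency grows linearly with the number of in-flight requests, which is exactly why the 3-second curl timeout below starts failing.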

and executed the curl command of the previous section in parallel:

> seq 1 10 | xargs -P0 -I {} curl --max-time 3 "localhost:8080?name=Alice"
Hello x1, Alice!
curl: (28) Operation timed out after 3002 milliseconds with 0 bytes received
curl: (28) Operation timed out after 3006 milliseconds with 0 bytes received
curl: (28) Operation timed out after 3006 milliseconds with 0 bytes received
curl: (28) Operation timed out after 3006 milliseconds with 0 bytes received
curl: (28) Operation timed out after 3005 milliseconds with 0 bytes received
curl: (28) Operation timed out after 3005 milliseconds with 0 bytes received
curl: (28) Operation timed out after 3003 milliseconds with 0 bytes received
curl: (28) Operation timed out after 3005 milliseconds with 0 bytes received
curl: (28) Operation timed out after 3001 milliseconds with 0 bytes received

Note that the handler sleeps for two seconds, so the curl command, which only waits 3 seconds for a response, will time out as expected.

Now, reconfiguring our application to allow up to 100 replicas and executing the test command again clearly shows that the handler can handle multiple requests:

> seq 1 10 | xargs -P0 -I {} curl --max-time 3 "localhost:8080?name=Bob"
Hello x1, Bob!
Hello x2, Bob!
Hello x3, Bob!
Hello x5, Bob!
Hello x4, Bob!
Hello x6, Bob!
Hello x8, Bob!
Hello x7, Bob!
Hello x9, Bob!
Hello x10, Bob!
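For reference, the reconfiguration above could look roughly like this in the application's wadm.yaml (a sketch only; the component name and image path are placeholders from my setup, and depending on the wadm version the spreadscaler field may be `instances` or `replicas`):

```yaml
spec:
  components:
    - name: http-component                          # placeholder name
      type: component
      properties:
        image: file://./build/http_hello_world_s.wasm
      traits:
        - type: spreadscaler
          properties:
            instances: 100   # older wadm versions use `replicas`
```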

Maybe this check is too simple and was left to the user on purpose. But I think it would fit the difficulty level of the other quickstarts perfectly (e.g. the guide on adding persistent storage).

Additionally, it would be great to be able to see/list the invoked components (or handlers running in parallel), e.g. with wash. That would provide an experience similar to Kubernetes, where users can actually watch pods being scaled up. It seems that it is currently not possible to find out which components are running? If it is possible and I just didn't figure out how, I think it would be great to add that to the docs.

I was also thinking of making the sleep duration configurable. Currently the quickstart doesn't cover configuring components in the wadm.yaml file at all, but I would open a new issue for that; I only mention it here as background on what I was thinking while following the quickstart.

If you think my suggestion is a good addition to the quickstart, I'm happy to create a PR for the Scaling up section. What do you think?

@brooksmtownsend
Member

@ffuerste I really like this idea! Adding this sleep + sending multiple requests is a really easy way to show the updates in handling concurrent requests.

I'd happily accept a PR that shows how to add a simple sleep and send multiple requests on the Scaling up page.

@ffuerste
Author

@brooksmtownsend Cool, I will try to raise a PR as soon as possible. Unfortunately, it may take a few days.

@brooksmtownsend
Member

No rush here, thanks!
