
Reduce unsuccessful launch rate. #5

Merged
merged 1 commit into develop from bugfix/unsuccessful-server-launch on May 12, 2019

Conversation

@joshuagetega (Author)

Problem Description:

Initially, when the manager received a request to start a worker from a stopped state, it would first check that the notebook server was not already running on the worker before starting it. The logs showed that, some fraction of the time, this verification call (a call to the is_notebook_running function) would go to the wrong worker, one that already had a notebook server running. In that case the manager would erroneously conclude that the notebook server was already up on the worker being started and would consequently fail to start it, resulting in an unsuccessful launch. In the browser, the user would not see the expected notebook interface; they would only see a red progress bar with a message indicating that the spawn failed. To move forward, the user would then have to click the Home button and then the Start My Server button to retry the launch.

Solution:

This commit introduces a few changes to reduce the rate of unsuccessful launches, which at the moment is roughly 50% for a cluster with more than about 30 worker instances already running. Namely:

  • In the start_worker_server function, removing the verification call that checks whether the notebook server is running when the worker is being started from a stopped state. The check is unnecessary because the notebook server cannot already be running in that case, so the manager now runs the notebook server start command without it. Note that this is a workaround for the problem described above: rather than risk sending the verification call to the wrong worker, the manager simply does not make the call.

  • Adding a while loop to the remote_notebook_start function that retries the notebook server start command until either the notebook server starts on the worker or the maximum number of retries is reached. Previously the start command was issued only once; retrying increases the probability of a successful launch. A sketch of this retry loop, together with the tightened is_notebook_running check described in the next item, appears after this list.

  • Adding more conditions to the logic in the is_notebook_running function so that it returns True only if the notebook server is running on the right worker instance and that instance belongs to the user in question. The logs showed that, some fraction of the time, the function returned True after verifying the notebook server on the wrong instance, so the logic needed tightening.

  • Adding more debugging statements for the sake of ease of debugging when grepping through the manager logs.
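
For concreteness, here is a minimal sketch of the shape of the retry loop and the tightened check described above. It is illustrative only: the class stub, the retry cap of 5 (taken from the "5 * 30 = 150" arithmetic in the review below), and the helpers run_start_command, get_instance_for_user and notebook_process_exists are assumptions rather than the actual cloudjhub code; only the is_notebook_running(ip, attempts=...) signature and the tornado gen.sleep / gen.Return style come from the diff excerpts quoted later in this conversation.

    # Sketch only: not the actual spawner implementation.
    from tornado import gen

    MAX_START_RETRIES = 5  # hypothetical cap; matches the "5 * 30" arithmetic in the review

    class InstanceSpawnerSketch(object):

        @gen.coroutine
        def remote_notebook_start(self, worker_ip_address_string, start_cmd):
            """Retry the notebook start command until the server comes up or retries run out."""
            attempt = 0
            while attempt < MAX_START_RETRIES:
                attempt += 1
                # Issue the start command on the worker over SSH (the sudo(...) call quoted below).
                yield self.run_start_command(start_cmd)  # hypothetical wrapper
                running = yield self.is_notebook_running(worker_ip_address_string, attempts=30)
                if running:
                    self.log.debug("notebook for user %s came up on attempt %d" % (self.user.name, attempt))
                    raise gen.Return(True)
            self.log.error("notebook for user %s did not start after %d attempts" % (self.user.name, MAX_START_RETRIES))
            raise gen.Return(False)

        @gen.coroutine
        def is_notebook_running(self, worker_ip_address_string, attempts=1):
            """Return True only if the server answers AND the worker really is this user's instance."""
            instance = self.get_instance_for_user(self.user.name)  # hypothetical lookup
            if instance is None or instance.private_ip_address != worker_ip_address_string:
                # Tightened check: never trust a reply from the wrong instance or
                # from an instance that belongs to a different user.
                raise gen.Return(False)
            for _ in range(attempts):
                if (yield self.notebook_process_exists(worker_ip_address_string)):  # hypothetical SSH process check
                    raise gen.Return(True)
                yield gen.sleep(1)
            raise gen.Return(False)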

I have tested the above changes in a cluster of about 60 worker instances, launching each worker instance sequentially and keeping them all running simultaneously. The average time to launch dropped by about 40%, from roughly 1 min 20 sec to about 50 sec. In addition, no worker launched unsuccessfully, except for one that had an unrelated issue.

@joshuagetega requested a review from @arthurian on April 24, 2019 at 15:41
@joshuagetega (Author)

Hi @arthurian, the PR is ready for review. Thoughts?

@arthurian (Member) left a comment


Overall looks good @joshuagetega. All of the extra debugging info will be helpful in diagnosing issues with the spawner, as well as for failing fast if/when the spawner fails to SSH to a server. The major change with respect to starting the remote notebook server seems OK; the only side effect I see is an uptick in network traffic in the worst case (total failure), but that may not turn out to be an issue in practice. I left a comment on a possible mitigation should that turn out to be the case.

self.user.settings[self.user.name] = ""
yield sudo("%s %s --user=%s --notebook-dir=/home/%s/ --allow-root > /tmp/jupyter.log 2>&1 &" % (lenv, start_notebook_cmd,self.user.name,self.user.name), pty=False)
self.log.debug("just started the notebook for user %s, private ip %s, waiting." % (self.user.name, worker_ip_address_string))
try:
@arthurian (Member) commented:

I'm a little unclear on the intention of this try/except block, since worker instances only ever have private IP addresses because they are not intended to be publicly accessible.

Perhaps an earlier version/iteration of the spawner was exposing the worker instances in a public subnet? I don't see any other references to self.user.settings or a public IP address in the spawner, so I wonder if this try/catch block and the same one repeated below could be removed.

@joshuagetega (Author) replied:

Spot on. I had the same thought but left the block in just to be safe, intending to get clarification from Faras about why it is there in the first place. I'll remove it for now, but will still ask Faras about it in the PR against the parent cloudjhub repo upstream.

self.user.settings[self.user.name] = instance.public_ip_address
except:
self.user.settings[self.user.name] = ""
notebook_running = yield self.is_notebook_running(worker_ip_address_string, attempts=30)
@arthurian (Member) commented:

If I'm understanding the process correctly, a complete failure to start the notebook would result in 5 * 30 = 150 attempts. Prior to this change, we tried to start the notebook server once and then checked whether it was running up to 30 times.

The assumption is that we can SSH into the server, but the remote notebook server might fail to start, so we need to retry a few times with a delay in between. To that end, I'm wondering if we could achieve a similar effect by reducing the number of attempts when checking whether the notebook server is running and instead adding a short delay after those attempts (e.g. yield gen.sleep(10)). This would provide a little more time for the server to start without incurring additional network traffic, which could help at peak times.

Having said that, consider this for a future enhancement if the extra network traffic turns out to be a problem.
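
For concreteness, the suggested mitigation would change the body of the start-retry loop to something roughly like this (a sketch only, continuing the naming from the sketch earlier in this thread; the attempts value of 10 is illustrative, run_start_command is a hypothetical wrapper, and only yield gen.sleep(10) is the example given in the comment above):

    # Sketch of the suggested mitigation inside the remote_notebook_start retry loop.
    yield self.run_start_command(start_cmd)  # hypothetical wrapper around the quoted sudo(...) call
    # Fewer SSH polls per start attempt...
    running = yield self.is_notebook_running(worker_ip_address_string, attempts=10)
    if not running:
        # ...followed by a pause before the next start attempt, giving the server
        # more time to come up without generating extra network traffic.
        yield gen.sleep(10)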

@joshuagetega (Author) replied:

Makes total sense. I'll just go ahead and implement that right away.

@arthurian (Member)

@joshuagetega Changes look good to me! 👍

Refactor changes introduced to reduce unsuccessful-launch-rate

Removed unnecessary try/except blocks. Reduced the number of attempts in each call to the is_notebook_running function from 30 to 10. Increased the gen.sleep time in the is_notebook_running function from 1 to 3 seconds; the reasoning is that 3 seconds should be enough time for the notebook server to start.
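
In code terms, the adjusted polling loop inside is_notebook_running would look roughly like this (a sketch, reusing the naming from the earlier sketch in this thread; only the attempts value of 10 and the 3-second sleep come from this commit, and the process-check helper is hypothetical):

    # Sketch of the adjusted polling loop in is_notebook_running.
    for _ in range(attempts):  # attempts is now 10 per call instead of 30
        if (yield self.notebook_process_exists(worker_ip_address_string)):  # hypothetical SSH process check
            raise gen.Return(True)
        yield gen.sleep(3)  # poll every 3 seconds instead of every 1
    raise gen.Return(False)
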
@joshuagetega force-pushed the bugfix/unsuccessful-server-launch branch from f2f68a3 to e806933 on May 12, 2019 at 04:09
@joshuagetega merged commit 5b1a964 into develop on May 12, 2019
@joshuagetega deleted the bugfix/unsuccessful-server-launch branch on May 12, 2019 at 04:11