ATG fork merge #21

Open

wants to merge 27 commits into master
Conversation


@dodget dodget commented Apr 19, 2021

This PR includes:

  • Upgrading from fabric to fabric2
  • Updating whitelist to allowed_users for jhub upgrade
  • Slowing the spawner poll interval

joshuagetega and others added 27 commits March 25, 2019 14:25
Tag manager and workers with 'platform', 'product', and 'environment' tags. These are required by HUIT, as specified in https://confluence.huit.harvard.edu/display/CLA/Cloud+Resource+Tagging#CloudResourceTagging-4.5%22platform%22Tag, for cost allocation.
Add HUIT cost allocation tags
Problem Description: Initially, when the manager received a request to start a worker from a stopped state, it would first
verify that the notebook server was not running on the worker before starting the notebook server. The logs showed
that, some fraction of the time, this verification call - a call to the is_notebook_running function -
would go to the wrong worker, one that already had the notebook server running. In that case, the manager would
erroneously conclude that the notebook server was already up and running on the worker to be started. Consequently, the manager
would fail to start the notebook server on the worker, resulting in an unsuccessful launch. In the browser, the user would fail
to see the expected notebook interface. They would see only a red progress bar with a message indicating that the
spawn failed. To move forward, the user would then have to click the Home button and then the Start My Server button
to retry the launch.

Solution: This commit introduces a few changes to reduce the rate of unsuccessful launches, which at the moment is roughly 50%
for a cluster with more than about 30 worker instances already running. Namely:

- In the start_worker_server function, removing the verification call that checks whether the notebook server is running when the worker is being started from a stopped state. The verification call is unnecessary because the notebook server cannot already be running in that case, so the manager now runs the notebook server start command directly. Note that this is a workaround for the problem
described: rather than risk making the verification call to the wrong worker, the manager simply doesn't
make the call in the first place.

- Adding a while loop to the remote_notebook_start function that retries the notebook server start command until either
the notebook server starts on the worker or the maximum number of retries is reached. Initially, the notebook server
start command would be called only once. By adding retries, the probability that the launch will be successful increases.
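As a sketch, the retry loop described above might look like the following (the function and variable names here are illustrative, not the actual manager code):

```python
import time

MAX_RETRIES = 10  # upper bound on start attempts (illustrative value)

def start_with_retries(start_notebook, notebook_is_running, delay=3):
    """Retry the notebook-server start command until the server is up
    or the maximum number of retries is reached."""
    attempts = 0
    while attempts < MAX_RETRIES:
        start_notebook()            # issue the start command on the worker
        if notebook_is_running():   # re-check after each attempt
            return True
        attempts += 1
        time.sleep(delay)           # give the server time to come up
    return False
```

Each extra attempt multiplies the chance that a transient failure is absorbed, at the cost of a bounded amount of added latency on a genuinely broken worker.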

- Adding more conditions to the logic in the is_notebook_running function to ensure that the function returns True only if
the notebook server is running on the right worker instance and the worker instance belongs to the user in question.
The logs showed that, in some fraction of the time, the function returns True after verifying that the notebook server is
running on the wrong instance. The logic, thus, needed tightening.
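A minimal sketch of the tightened check (the status fields and parameter names are assumptions for illustration, not the real function signature):

```python
def is_notebook_running(status, expected_instance_id, expected_user):
    """Return True only when the notebook process is up AND it is running
    on the expected worker instance AND that instance belongs to the user."""
    return bool(
        status.get("process_running")
        and status.get("instance_id") == expected_instance_id
        and status.get("owner") == expected_user
    )
```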

- Adding more debugging statements for the sake of ease of debugging when grepping through the manager logs.

I have tested the above changes in a cluster of about 60 worker instances, launching each worker instance sequentially
and keeping them all running simultaneously. The average time to launch dropped by roughly 40% - from approximately 1 min 20 sec
to about 50 sec. In addition, no worker launched unsuccessfully, except for one that had an unrelated issue.

Refactor changes introduced to reduce unsuccessful-launch-rate

Removed unnecessary try/except blocks. Reduced number of attempts in each call to the is_notebook_running function from 30 to 10.
Increased gen.sleep time in is_notebook_running function from 1 to 3 seconds. The reasoning here is that 3 seconds should be enough
for the notebook server to start.
Hotfix for bug causing some worker launches to fail
Add in changes from upstream fork
Changes include increasing the poll interval and increasing the number of attempts the is_notebook_running() function will be called with from within the poll() function. This is meant to reduce the chances of the poll() function wrongly determining that the jupyterhub-singleuser process is not running on a worker instance when it actually is.
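The polling change could be sketched like this (an asyncio stand-in for the tornado gen.sleep-based coroutine used by JupyterHub spawners; the names and default values are illustrative):

```python
import asyncio

async def poll_notebook(check, attempts=30, interval=3):
    """Call `check` up to `attempts` times, sleeping `interval` seconds
    between tries, so a slow worker isn't declared dead prematurely."""
    for _ in range(attempts):
        if await check():
            return True   # jupyterhub-singleuser process found
        await asyncio.sleep(interval)
    return False          # genuinely not running
```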
Fabric3 is an unauthorized fork of the main fabric project. It was
originally forked to add python3 support, but the main fabric project
now supports python3 and also has gone through a significant rewrite.

The motivation for upgrading, besides the fact that Fabric3 is
deprecated and no longer supported, is the fact that fabric is
thread-safe, providing better support for concurrency.
Updated spawner logging to improve debugging and traceability.
Refactor launch script with fabric upgrade changes
Update 'whitelist' naming to 'allowed_users' as in JHub 1.2 change
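For reference, the rename in a jupyterhub_config.py fragment would look like this (the user names are placeholders):

```python
# JupyterHub 1.2 deprecated `Authenticator.whitelist` in favor of
# `Authenticator.allowed_users`:
# was: c.Authenticator.whitelist = {"alice", "bob"}
c.Authenticator.allowed_users = {"alice", "bob"}
```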
Update the Harvard-ATG:master branch with changes in the Harvard-ATG:develop branch
Increase max_retries to 30 in create_new_instance