
fix: plugin grpc server retry method #1096

Open · wants to merge 1 commit into main

Conversation

googs1025 (Contributor) commented:

To improve observability and facilitate debugging, this PR enhances logging after the GRPC server crashes.
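
For context, below is a rough, self-contained sketch of the retry loop discussed in this PR, reconstructed from the diff hunks quoted in the review that follows. The `serveOnce` helper and the plain `fmt` logging are stand-ins for illustration only; in the plugin the server is started inside `nvidiaDevicePlugin.Serve` and logged via `klog`, and the exact control flow may differ from this simplification.

```go
package plugin

import (
	"errors"
	"fmt"
	"time"
)

// serveOnce is a hypothetical stand-in for the call that blocks until the
// plugin's gRPC server exits (server.Serve in the real code).
func serveOnce() error {
	return errors.New("simulated crash")
}

// serveWithRetries sketches the shape of the retry loop in this PR: count
// crashes, reset the count once a full hour passes without one, give up after
// too many rapid crashes, and pause briefly before each restart.
func serveWithRetries() {
	lastCrashTime := time.Now()
	restartCount := 0
	maxRestarts := 5
	crashTimeoutSeconds := float64(3600) // 1 hour in seconds

	for {
		err := serveOnce()
		if err == nil {
			break // clean shutdown, nothing to restart
		}
		fmt.Printf("GRPC server crashed with error: %v\n", err)

		timeSinceLastCrash := time.Since(lastCrashTime).Seconds()
		lastCrashTime = time.Now()

		if timeSinceLastCrash > crashTimeoutSeconds {
			// the last crash was over an hour ago: treat crashes as infrequent
			restartCount = 0
		} else {
			restartCount++
		}
		if restartCount > maxRestarts {
			fmt.Println("GRPC server has crashed too often in a short period; giving up")
			return
		}

		// small delay before restarting to avoid a tight crash loop
		retryDelay := 5 * time.Second
		fmt.Printf("Waiting %v before restarting the GRPC server\n", retryDelay)
		time.Sleep(retryDelay)
	}
}
```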


copy-pr-bot bot commented Dec 11, 2024

This pull request requires additional validation before any workflows can run on NVIDIA's runners.

Pull request vetters can view their responsibilities here.

Contributors can view more details about this message here.

@googs1025 (Contributor, Author) commented:

friendly ping @elezar, PTAL, thanks

@googs1025 (Contributor, Author) commented:

@cdesiniotis, can you help with this?

elezar (Member) left a comment:


Since we're changing this anyway, let's use the built-in types.

@@ -183,11 +183,13 @@ func (plugin *nvidiaDevicePlugin) Serve() error {
	go func() {
		lastCrashTime := time.Now()
		restartCount := 0
		maxRestarts := 5
		crashTimeoutSeconds := float64(3600) // 1 hour in seconds

Suggested change:
-		crashTimeoutSeconds := float64(3600) // 1 hour in seconds
+		crashTimeout := 1 * time.Hour

@@ -198,17 +200,29 @@ func (plugin *nvidiaDevicePlugin) Serve() error {
			break
		}

		klog.Infof("GRPC server for '%s' crashed with error: %v", plugin.rm.Resource(), err)

		timeSinceLastCrash := time.Since(lastCrashTime).Seconds()

Suggested change:
-		timeSinceLastCrash := time.Since(lastCrashTime).Seconds()
+		timeSinceLastCrash := time.Since(lastCrashTime)

			timeSinceLastCrash,
		)

		if timeSinceLastCrash > crashTimeoutSeconds {

Suggested change:
-		if timeSinceLastCrash > crashTimeoutSeconds {
+		if timeSinceLastCrash > crashTimeout {
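
Applied together, these suggestions keep the values as `time.Duration` end to end, so the comparison needs no float64 conversion. A minimal sketch of the resulting logic, wrapped in a hypothetical helper for illustration (the function name and signature are not from the PR; assumes the `time` package is imported):

```go
// nextRestartCount returns the updated restart counter: it resets to zero when
// more than crashTimeout has passed since the last crash, and increments otherwise.
func nextRestartCount(restartCount int, lastCrashTime time.Time, crashTimeout time.Duration) int {
	timeSinceLastCrash := time.Since(lastCrashTime)
	if timeSinceLastCrash > crashTimeout {
		return 0
	}
	return restartCount + 1
}
```

Callers would then pass `1 * time.Hour` (the suggested `crashTimeout`) rather than a float64 number of seconds.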

			plugin.rm.Resource(),
			err,
			restartCount+1,
			timeSinceLastCrash,

You probably need:

Suggested change:
-			timeSinceLastCrash,
+			timeSinceLastCrash.Seconds(),

or need to adjust the format string.
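
The mismatch here is purely about formatting: with `%v`, klog prints a `time.Duration` in a human-readable form such as `1m30s`, while a numeric verb like `%.0f` expects the float returned by `.Seconds()`. A hedged illustration (these log messages are examples, not the exact strings in the PR):

```go
// Duration formatted directly: %v prints something like "1m30s".
klog.Infof("GRPC server for '%s' crashed %v after the previous crash", plugin.rm.Resource(), timeSinceLastCrash)

// Keeping a numeric verb instead requires an explicit conversion to seconds.
klog.Infof("GRPC server for '%s' crashed %.0f seconds after the previous crash", plugin.rm.Resource(), timeSinceLastCrash.Seconds())
```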

		// add a small delay before restarting to prevent tight loops
		retryDelay := 5 * time.Second
		klog.Infof("Waiting for %v before attempting to restart GRPC server for '%s'", retryDelay, plugin.rm.Resource())
		time.Sleep(retryDelay) // Wait for 5 seconds before attempting to restart

Suggested change:
-		time.Sleep(retryDelay) // Wait for 5 seconds before attempting to restart
+		time.Sleep(retryDelay)

			// it has been one hour since the last crash.. reset the count
			// to reflect on the frequency
			restartCount = 0
		} else {
			restartCount++
		}

		// add a small delay before restarting to prevent tight loops

Is there a reason that you chose 5 seconds specifically? This seems quite long.

Comment on lines 215 to 216
			// it has been one hour since the last crash.. reset the count
			// to reflect on the frequency

Let's rather just log the crashTimeout instead of a comment that is now further from the place where the one hour is specified.
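
For example, the reset branch could mention `crashTimeout` in the log message itself, so the threshold stays visible where the reset happens (hypothetical wording, not the exact message from the PR):

```go
if timeSinceLastCrash > crashTimeout {
	// log the configured threshold instead of hard-coding "one hour" in a comment
	klog.Infof("Last crash of GRPC server for '%s' was more than %v ago; resetting restart count", plugin.rm.Resource(), crashTimeout)
	restartCount = 0
} else {
	restartCount++
}
```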

Comment on lines 190 to 191
		// quite if it has been restarted too often
		// i.e. if server has crashed more than 5 times and it didn't last more than one hour each time

This comment should be moved to the docstring or to the point where we declare the variables.
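
One way to apply this would be to fold the explanation into the `Serve` docstring, next to where `maxRestarts` and `crashTimeout` are introduced (hypothetical wording, not taken from the PR):

```go
// Serve starts the gRPC server for the plugin's resource and restarts it if it
// crashes. If the server crashes more than maxRestarts times with less than
// crashTimeout between consecutive crashes, Serve stops retrying.
func (plugin *nvidiaDevicePlugin) Serve() error {
	// ... existing setup and restart loop ...
	return nil
}
```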
