RFD 0148 - Pinned Resources in the web UI#32344
Conversation
```diff
 	Assist  AssistUserPreferencesResponse  `json:"assist"`
 	Theme   userpreferencesv1.Theme        `json:"theme"`
 	Onboard OnboardUserPreferencesResponse `json:"onboard"`
+	PinnedResources []string `json:"pinnedResources"`
```
nit: use a tab instead of spaces here so it matches the other lines
> it is expected that the user would want the same theme across all clusters. However, with pinned resources, we would want a separate list per cluster. Instead of creating a new mechanism to store pinned resources, we can reuse the current user preferences method but update/fetch pinned resources per cluster instead. This would require two new endpoints in the apiserver: `pinnedResourcesUpsert` and `pinnedResourcesGet`.
Why do we need a new endpoint to update pinned resources? Wouldn't we just use the existing endpoints to update user preferences?
Good question. The difference is that if we go per-cluster, we need some way to distinguish "these are the overall preferences" from "these are this cluster's preferences".

I think both can make sense: some user settings you'd expect to be global, like theme and Assist settings, while "cluster specific" settings would be stored in their respective cluster.

If that's the case, we can use the same user preferences mechanism but change which auth server we reference. Currently the endpoints are root-only. Instead of changing that (and maybe my proposed endpoints were poorly named), we could have something like `getClusterUserPreferences` instead. Or maybe we don't need a new endpoint at all, and part of the request could just specify which cluster to pull from.
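To illustrate the last idea, here is a hedged sketch of a per-cluster preferences lookup where the request names the cluster. `getClusterUserPreferences`, `PreferencesStore`, and the cluster-name fallback behavior are all illustrative names and assumptions from this discussion, not Teleport's actual API.

```go
package main

import "fmt"

// ClusterPreferences holds the cluster-scoped portion of a user's
// preferences (just pinned resources in this sketch).
type ClusterPreferences struct {
	PinnedResources []string
}

// PreferencesStore maps cluster name -> that cluster's preferences
// for a single user. In reality this would live behind the auth
// server for each cluster.
type PreferencesStore struct {
	byCluster map[string]ClusterPreferences
}

// getClusterUserPreferences resolves preferences for the cluster the
// request names. Falling back to the zero value means a cluster with
// no stored preferences yet still works without an explicit upsert.
func (s *PreferencesStore) getClusterUserPreferences(clusterName string) ClusterPreferences {
	return s.byCluster[clusterName]
}

func main() {
	s := &PreferencesStore{byCluster: map[string]ClusterPreferences{
		"leaf-1": {PinnedResources: []string{"db-prod"}},
	}}
	fmt.Println(s.getClusterUserPreferences("leaf-1").PinnedResources)
	fmt.Println(s.getClusterUserPreferences("leaf-2").PinnedResources)
}
```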
> pinning a specific resource after assuming a role, and only caring about that resource's pin when you've assumed the role again.
> 2. Resource loses connectivity. We probably wouldn't want to remove a resource's pin due to connection issues unless x amount of time has passed.
> 3. Resource changes its name. It's unclear how frequently this happens.
Are we capable of detecting when a resource has been "replaced"? I.e., if I stop an agent, wipe the data dir, and start the agent again, it will have the same friendly name, but the host ID will be a brand-new UUID, which will prevent a match on the resource "key" used by the cache.
What happens if I keep pinning ephemeral agents?
> What happens if I keep pinning ephemeral agents?
This would have to be handled by whatever automatic-cleanup mechanism gets implemented. The scenario I imagine: every week (or day), a new resource is spun up for some reason, and the dev pins it so they can work on it until the next week, when it's gone and another resource gets spun up. If there is something that removes stale pinned IDs from all users' preferences in that cluster, that should be good enough, I think.
> Are we capable of detecting when a resource has been "replaced" - i.e. if I stop an agent, wipe the data dir, start the agent again.
I don't think so, unless there is something reporting to the system "hey, this ID is now this ID". If not, I'd imagine these would eventually get auto-cleaned by the same mechanism above. We can afford to have quite a few stale IDs in the pinned resources list, so they can rot for a while before needing to be cleaned up, unless someone is pinning thousands of resources a day that are being replaced every other day, I suppose.
The last remaining bit of pinned resources would be figuring out how to perform automatic cleanup. The issue is that we don't really know if a resource will come back, so some compromise would have to be made on how long a resource can be gone before we decide "OK, let's remove it from users' pinned resources." I don't know what the right option would be.

Unfortunately, ephemeral resources like the ones described in the scenarios above are almost the perfect fit for resource pinning, since users can open the pinned dashboard and immediately see which ones are alive; that use case is hard to discount. I think a good compromise would be a timeout of something like a week. At that point, I don't think anyone would complain that their resource isn't pinned anymore if it was gone for that long. We have the luxury of being able to wait quite some time, letting stale resources 'rot' in the pinned resources preferences, since the storage cap is so high.
This has been updated and is ready for any final reviews.
The PR changelog entry failed validation: Changelog entry not found in the PR body. Please add a "no-changelog" label to the PR, or changelog lines starting with
In support of #30418
The automatic cleanup still isn't figured out yet but hoping we can discuss here.
Rendered view: https://github.com/gravitational/teleport/blob/9734780bc3190e214497f7392b45d6ee3a74ac64/rfd/0148-pinned-unified-resources.md
There are already some open PRs with discussions that I felt warranted a more concrete discussion via RFD. I will return to them once we hammer out some of the details here:
#32009
#32077