[Fleet] Upgrade managed package policies in a background task #191097
nchaulet merged 11 commits into elastic:main
Conversation
Should we make those keys configurable through kibana config under fleet.internal.*?
It could be useful, but is there a risk that the user adds a number too small or too big and creates some issue? If we decide to do it, we should document it very clearly.
Yes, I think if we make this configurable it will probably be under an internal keyword. That could be useful for supportability (an SDH with a scenario we did not think of).
We can start without making this configurable; those sizes are pretty small, so it should not have a huge impact.
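If those sizes were ever exposed under an internal config key, the "too small or too big" concern above could be handled with bounds validation. A minimal sketch, assuming hypothetical key names (`packageInfoCacheSize`, `assetsMapCacheSize`) and limits that are not from the actual PR:

```typescript
// Hypothetical sketch: validating internal cache-size settings with guard
// rails so a misconfigured value cannot be absurdly small or large.
// Key names and limits are illustrative, not the actual Fleet config keys.
interface FleetInternalConfig {
  packageInfoCacheSize?: number;
  assetsMapCacheSize?: number;
}

const DEFAULTS = { packageInfoCacheSize: 20, assetsMapCacheSize: 10 };
const MIN_SIZE = 1;
const MAX_SIZE = 500;

function resolveCacheSizes(config: FleetInternalConfig = {}) {
  const clamp = (value: number | undefined, fallback: number): number => {
    if (value === undefined) return fallback;
    // Reject out-of-range values loudly rather than silently misbehaving.
    if (!Number.isInteger(value) || value < MIN_SIZE || value > MAX_SIZE) {
      throw new Error(
        `cache size must be an integer in [${MIN_SIZE}, ${MAX_SIZE}], got ${value}`
      );
    }
    return value;
  };
  return {
    packageInfoCacheSize: clamp(config.packageInfoCacheSize, DEFAULTS.packageInfoCacheSize),
    assetsMapCacheSize: clamp(config.assetsMapCacheSize, DEFAULTS.assetsMapCacheSize),
  };
}
```

Documenting the valid range next to the key would cover the concern about users setting extreme values.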
There is no reason for this to change during an upgrade, and that codepath was extremely slow.
Do we have tests that cover this removal? Just making sure that we don't inadvertently break anything.
We do not, but it seems that codepath was doing nothing; it looks like it was only there to support a UI that no longer exists (#190613).
effdca7 to 4ef0d1b
```ts
  ignoreUnverified?: boolean;
  prerelease?: boolean;
}): Promise<PackageInfo> {
  const cacheResult = getPackageInfoCache(pkgName, pkgVersion);
```
Curious to get your thoughts on that pattern for caching; it seems easier to use than passing packageInfo through the whole codebase.
I agree; currently we have to pass the packageInfo object around, which is not the best pattern. With this we could probably replace it.
Pinging @elastic/fleet (Team:Fleet)
…-managed-pacakge-policies-task
@elasticmachine merge upstream
💛 Build succeeded, but was flaky
ymao1 left a comment:
Response ops changes LGTM.
Summary
Resolves #188666
Upgrading managed package policies takes a long time, even after trying to optimize the code. To avoid setups that take too long, this PR moves the process of upgrading managed package policies from the Fleet setup to a background task triggered by the setup.
In addition, I implemented a few optimizations to make that code path a little quicker:
- Caching for `getPackageInfo` and `getAssetsMap`; the cache is scoped to the task with a small size, so it should not impact Kibana memory too much.

I captured a few APM traces of that background task with 100 synthetics package policies, and the upgrade went from ~5 minutes to ~1 minute.
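The core idea of the PR, deferring the slow upgrade out of setup and into a background task, can be sketched as follows. This uses a toy in-process queue rather than the actual Kibana task manager integration, and the helper names (`scheduleTask`, `runPendingTasks`, `fleetSetup`) are hypothetical:

```typescript
// Illustrative sketch: setup no longer awaits the slow upgrade, it only
// enqueues a task; a background runner performs the upgrade later.
type Task = () => Promise<void>;

const taskQueue: Task[] = [];

function scheduleTask(task: Task): void {
  taskQueue.push(task); // cheap: setup returns almost immediately
}

async function runPendingTasks(): Promise<void> {
  while (taskQueue.length > 0) {
    const task = taskQueue.shift()!;
    await task();
  }
}

// Stand-in for the real upgrade work; records which policies were upgraded.
const upgraded: string[] = [];
async function upgradeManagedPackagePolicies(policyIds: string[]): Promise<void> {
  for (const id of policyIds) {
    upgraded.push(id);
  }
}

async function fleetSetup(): Promise<void> {
  // Before: awaiting upgradeManagedPackagePolicies(...) here made setup slow.
  // After: the upgrade is deferred to a background task triggered by setup.
  scheduleTask(() => upgradeManagedPackagePolicies(['policy-1', 'policy-2']));
}
```

In Kibana itself the equivalent would go through the task manager plugin, which also handles retries and distribution across Kibana instances.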
Todo