Introduce Kibana task to deploy agentless connectors for 9.0 (#203973)
artem-shelkovnikov merged 36 commits into main
Conversation
This was done because I needed a create method that depends on a lot of other private/internal methods.
I had to either make those methods public and add them here, or pass the service itself. There might be another way, but I'm not familiar enough with Kibana development yet to know; please tell me if there's a better approach :)
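A minimal sketch of the trade-off described above, with made-up names (not the actual Fleet types): instead of widening the visibility of several internal helpers, the whole service instance is passed to the consumer, which only touches its public surface.

```typescript
// Hypothetical illustration: AgentPolicyService and runTask are not the real
// Kibana/Fleet APIs, just a sketch of "pass the service vs. expose methods".
class AgentPolicyService {
  // internal helper that create() depends on; stays private
  private validate(name: string): void {
    if (!name) throw new Error('policy name is required');
  }

  public create(name: string): { id: string; name: string } {
    this.validate(name);
    return { id: `policy-${name}`, name };
  }
}

// The task receives the service itself rather than a bag of exported functions,
// so the private helpers never need to become public.
function runTask(service: AgentPolicyService): { id: string; name: string } {
  return service.create('agentless-connectors');
}

const created = runTask(new AgentPolicyService());
```

The design choice here is ordinary constructor/parameter injection: the consumer depends on the service's public contract, and the internals remain free to change.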
Pinging @elastic/fleet (Team:Fleet)
Side effect of removing the usage of AgentPolicyServiceInterface: the interface had getByIDs while the implementation has getByIds. I chose to keep the latter, but it's easy to rename the implementation to getByIDs. This was mostly done to avoid pinging other code owners that might have used the interface method name.
For some reason this doesn't work - I never get a policy that has the supports_agentless field.
Using our regular NATIVE_CONNECTOR_DEFINITIONS as the source of truth for connectors that we support. I could theoretically list integrations that are branched off connectors-py instead; is that possible/better?
@elastic/fleet - what's the minimal interval with which we could query fleet package policies (we narrow them with a kuery that only returns our package, elastic_connectors)?
Can we do 10 seconds? 30 seconds?
If we only query for a certain package, there shouldn't be too many results, so scale shouldn't be a problem. I think 30s sounds fine too; 10s might be too frequent.
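A self-contained sketch of the narrowing being discussed, with illustrative shapes (not the actual Fleet saved-object types): only package policies whose package name is elastic_connectors are considered, so each poll handles a small result set.

```typescript
// Hypothetical shapes; the real query would use Fleet's package policy
// service with a kuery filter on the package name.
interface PackagePolicy {
  id: string;
  package: { name: string; version: string };
}

// In-memory equivalent of narrowing with a kuery that only returns
// policies for the elastic_connectors package.
function selectConnectorPolicies(policies: PackagePolicy[]): PackagePolicy[] {
  return policies.filter((p) => p.package.name === 'elastic_connectors');
}

const all: PackagePolicy[] = [
  { id: 'a', package: { name: 'elastic_connectors', version: '0.0.4' } },
  { id: 'b', package: { name: 'nginx', version: '1.0.0' } },
];
const narrowed = selectConnectorPolicies(all);
```

Because the filter runs server-side in the real query, the task's per-poll cost scales with the number of connector policies, not with all policies in the deployment.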
Do we even need to retry, since we run pretty often?
Great stuff! The changes in the search_connectors plugin LGTM. I have a couple of minor comments regarding naming and one question about hardcoding the package version in the task manager logic.
I’ll defer reviewing the changes in the fleet plugin to the fleet team. EDIT: I see they just approved 🚀
const connectorsInputName = 'connectors-py';
const pkgName = 'elastic_connectors';
const pkgVersion = '0.0.4';
do we need this version hardcoded here? The current (latest) version in the integration registry should definitely be tracked somewhere by fleet; can we look it up in the package registry dynamically?
For context, 0.0.4 is already outdated
Maybe the code edited in this PR will help? https://github.com/elastic/kibana/pull/192081/files - here I was able to access package info and adjust permissions dynamically
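A hedged sketch of one piece of "look it up dynamically" instead of hardcoding '0.0.4': given a list of version strings the registry knows about, pick the latest by numeric semver order. Fetching the version list from the package registry (e.g. via Fleet's package service) is assumed to happen elsewhere; only the selection step is shown.

```typescript
// Illustrative helper, not a real Fleet API: pick the highest
// major.minor.patch version from a list of version strings.
function latestVersion(versions: string[]): string | undefined {
  const toParts = (v: string) => v.split('.').map(Number);
  return [...versions]
    .sort((a, b) => {
      const [a1, a2, a3] = toParts(a);
      const [b1, b2, b3] = toParts(b);
      // compare major, then minor, then patch numerically
      return a1 - b1 || a2 - b2 || a3 - b3;
    })
    .pop();
}

const latest = latestVersion(['0.0.4', '0.1.0', '0.0.10']);
```

Note the numeric comparison matters: a plain string sort would incorrectly rank '0.0.4' above '0.0.10'.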
const taskInstance = await taskManager.ensureScheduled({
  id: AGENTLESS_CONNECTOR_DEPLOYMENTS_SYNC_TASK_ID,
  taskType: AGENTLESS_CONNECTOR_DEPLOYMENTS_SYNC_TASK_TYPE,
  schedule: SCHEDULE,
Taking a quick look here from Response Ops. I was reading the PR description and was wondering if we need to have this task run every 30s indefinitely or if it would be possible to make it event based so it runs after a user creates or deletes a connector? Or perhaps a combo of the two but the schedule runs less frequently?
For now this seemed to us like the best way to move forward:
The task runs and checks if any agentless policies need to be created for our connector records. Connector records can be created in multiple ways:
- User creates a connector via UI
- Connector is created automatically by already running agentless connector deployment
- User creates a connector via API/CLI
Scenario #1 can easily be handled with an event triggered by the Kibana UI. Scenario #2 does not need this logic. Scenario #3 really needs this task: our CLI doesn't have access to Task Manager, and our API is hosted in Elasticsearch, which also has no way to affect when this task runs.
So we've taken the polling approach with a 30-second interval (a minute should be fine too), and the task itself queries a reasonably small amount of data, so it hopefully won't be too problematic.
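The core of what the polling task is described as doing can be sketched as a pure diff: compare connector records against existing agentless package policies, then create policies for connectors that lack one and delete policies whose connector is gone. All shapes and names below are illustrative, not the actual plugin types.

```typescript
// Hypothetical, simplified shapes for the sketch.
interface Connector { id: string; serviceType: string }
interface PackagePolicy { id: string; connectorId: string }

// Compute the reconciliation plan for one polling tick.
function diffConnectorsAndPolicies(
  connectors: Connector[],
  policies: PackagePolicy[]
): { toCreate: Connector[]; toDelete: PackagePolicy[] } {
  const policiedConnectors = new Set(policies.map((p) => p.connectorId));
  const connectorIds = new Set(connectors.map((c) => c.id));
  return {
    // connectors with no backing policy yet -> a policy must be created
    toCreate: connectors.filter((c) => !policiedConnectors.has(c.id)),
    // policies whose connector record is gone -> the policy must be deleted
    toDelete: policies.filter((p) => !connectorIds.has(p.connectorId)),
  };
}

const plan = diffConnectorsAndPolicies(
  [{ id: 'c1', serviceType: 'github' }, { id: 'c2', serviceType: 's3' }],
  [{ id: 'p1', connectorId: 'c2' }, { id: 'p2', connectorId: 'gone' }]
);
```

Because each tick recomputes the plan from current state, the task is naturally idempotent: a missed tick or a failed creation is simply retried on the next poll, which is also why a retry mechanism adds little here.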
The GenAI connectors have a similar sort of constraint, where something in Kibana wants to know when connectors get created / updated / deleted. Added in #189027
That PR originally contained some connector logic for the new "hooks", but we extracted that and restructured it into a stand-alone PR (#194081) rather than ship the two pieces together.
So, in theory case 3 can be handled this way.
Looking at those PRs, I'm also wondering if you need to handle the case of connectors being updated / deleted ...
I've skimmed through the change but don't understand how it handles case 3: we have the customer calling the Elasticsearch API directly, and Kibana is not involved in this.
So we cannot attach hooks to this call; all we can do is poll the contents of a couple of indices to see if changes were made. Am I missing some detail in the mentioned PR that works around this limitation?
Connector update is not important for us, but deletion is also handled in this PR
Oh, these aren't alerting connectors? These are "search" connectors? If so, you're correct, completely different "connector" framework I was talking about (I was talking about the alerting connectors).
pmuellr left a comment
ResponseOps code LGTM, left a few comments
const AGENTLESS_CONNECTOR_DEPLOYMENTS_SYNC_TASK_ID = 'search:agentless-connectors-manager-task';
const AGENTLESS_CONNECTOR_DEPLOYMENTS_SYNC_TASK_TYPE = 'search:agentless-connectors-manager';

const SCHEDULE = { interval: '30s' };
Setting this to the largest value you are willing to live with, will be helpful to Kibana's task throughput :-)
I believe a comment in the PR indicated it could be set to "1m" which would cut down the executions by 50% (useful!)
I'll change it to "1m" indeed; it shouldn't hurt us much, and we can iterate on this number later if it turns out to be too much or too little!
};
}
},
cancel: async () => {
Note that if you want cancel to actually stop the task from running, you'll have to do a bit more. This function is invoked when TM decides the task needs to be cancelled (it's been running longer than its time limit). The basic idea is that you set a local flag indicating you've been cancelled, and then check that flag in the run() method. Example here:
the isCancelled() local function they created - I think it did this at one point, but it must have been removed in another PR ...
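The cancellation pattern described above can be sketched in a few lines, with illustrative names (not the real Task Manager runner interface): cancel() flips a local flag, and the run loop checks it between units of work.

```typescript
// Hypothetical task runner; the real Task Manager run()/cancel() are async,
// kept synchronous here so the flag check is easy to follow.
function createTaskRunner(workItems: string[]) {
  let cancelled = false;
  return {
    run(): string[] {
      const processed: string[] = [];
      for (const item of workItems) {
        if (cancelled) break; // bail out once TM has asked us to stop
        processed.push(item);
      }
      return processed;
    },
    cancel(): void {
      cancelled = true; // only sets the flag; run() must cooperate
    },
  };
}

const runner = createTaskRunner(['a', 'b', 'c']);
runner.cancel();
const processedAfterCancel = runner.run();
```

The key point is that cancellation is cooperative: setting the flag does nothing by itself unless run() checks it at sensible intervals, e.g. between Elasticsearch calls.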
Thanks, I see now - I copied the original code from some other place.
The task we have is supposed to be very fast (depending, obviously, on response times from Elasticsearch). At best it's 2 calls to Elasticsearch; at worst it probably does tens of calls (realistically, I expect 2 calls 99.99% of the time, and occasionally 4).
IMO adding true cancellation is not going to add a lot here, but I'm not sure yet. I will merge what is there for now and keep it in mind for when we need it :)
💔 Build Failed
Failed CI Steps: Test Failures
Metrics [docs]: Public APIs missing comments
History
…6606)
Summary: This PR makes it so that the Agentless Kibana task implemented in #203973 properly handles soft-deleted connectors. This helps with the situation where an integration policy has been created for an agentless connector but a connector record has not yet been created by an agentless host. With the current Kibana task implementation this could lead to the policy being deleted. With this change, only policies that refer to soft-deleted connectors will be cleaned up.
Closes https://github.com/elastic/search-team/issues/8508
Closes https://github.com/elastic/search-team/issues/8465
Summary
This PR adds a background task for the search_connectors plugin. The task checks connector records and agentless package policies to see if a new connector was added or an old one was deleted, and then adds or deletes package policies for those connectors.
Scenario 1: a new connector was added by a user/API call
User creates an Elastic-managed connector:
Screen.Recording.2024-12-25.at.12.59.14.mov
When the user is done, a package policy is created by this background task:
Screen.Recording.2024-12-25.at.13.00.14.mov
Scenario 2: a connector was deleted by a user/API call
User deletes an Elastic-managed connector:
Screen.Recording.2024-12-25.at.13.21.13.mov
Checklist
Check the PR satisfies following conditions.
Reviewers should verify this PR satisfies this list as well.
- This was checked for breaking HTTP API changes, and any breaking changes have been approved by the breaking-change committee. The release_note:breaking label should be applied in these situations.
- The PR description includes the appropriate Release Notes section, and the correct release_note:* label is applied per the guidelines