
[ti_anomali] Add Benchmark and Policy Test#17428

Merged
mohitjha-elastic merged 2 commits into elastic:main from mohitjha-elastic:ti_anomali-benchmark-pipeline-test
Feb 20, 2026

Conversation

@mohitjha-elastic
Collaborator

Proposed commit message

ti_anomali: Add benchmark and policy test.

Checklist

  • I have reviewed tips for building integrations and this pull request is aligned with them.
  • I have verified that all data streams collect metrics or logs.
  • I have added an entry to my package's changelog.yml file.
  • I have verified that Kibana version constraints are current according to guidelines.
  • I have verified that any added dashboard complies with Kibana's Dashboard good practices

How to test this PR locally

  • Clone the integrations repo.
  • Install elastic-package locally.
  • Start the Elastic Stack using elastic-package.
  • Move to the integrations/packages/ti_anomali directory.
  • Run the following command to run the tests:

elastic-package test -v
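The steps above can be collected into a single script. This is a sketch based on elastic-package's documented workflow; the repository URL and the stack subcommands are assumptions and may vary with your elastic-package version:

```shell
# Sketch of the local test run described above.
# Assumes git and the elastic-package CLI are installed and on PATH.
git clone https://github.com/elastic/integrations.git
cd integrations/packages/ti_anomali

# Bring up a local Elastic Stack in the background.
elastic-package stack up -d

# Run all test types for the package (including the new policy test) verbosely.
elastic-package test -v

# Tear the stack down afterwards.
elastic-package stack down
```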

Related Issues

@mohitjha-elastic mohitjha-elastic self-assigned this Feb 16, 2026
@mohitjha-elastic mohitjha-elastic requested a review from a team as a code owner February 16, 2026 12:45
@mohitjha-elastic mohitjha-elastic added enhancement New feature or request Integration:ti_anomali Anomali ThreatStream Category: Integration quality Category: Quality used for SI planning Team:Security-Service Integrations Security Service Integrations team [elastic/security-service-integrations] Team:SDE-Crest Crest developers on the Security Integrations team [elastic/sit-crest-contractors] labels Feb 16, 2026
@elasticmachine

Pinging @elastic/security-service-integrations (Team:Security-Service Integrations)

Contributor

@chrisberkhout chrisberkhout left a comment


Looks good. One minor suggestion in a policy test.

It looks like this has rally benchmarks, system benchmarks and pipeline benchmarks. It would be great if you could remind me of how those benchmark types differ in function and value. Are all three worthwhile here?

@mohitjha-elastic
Collaborator Author

Looks good. One minor suggestion in a policy test.

It looks like this has rally benchmarks, system benchmarks and pipeline benchmarks. It would be great if you could remind me of how those benchmark types differ in function and value. Are all three worthwhile here?

Pipeline benchmarking focuses only on the ingest pipeline layer and measures how efficiently events are processed by processors such as grok, dissect, script, and convert. It helps identify event-processing latency and throughput in docs/sec. Its main value is optimizing parsing and enrichment logic without involving indexing or cluster performance.

Rally benchmarking evaluates core Elasticsearch indexing and query performance at scale. It measures bulk indexing rate, search latency, disk I/O, and the impact of mappings and shard configurations. This is primarily used for cluster sizing, capacity planning, and understanding how many events per second a given infrastructure can handle.

System benchmarking tests the full end-to-end flow—Elastic Agent to ingest pipelines to Elasticsearch indexing—under realistic conditions. It captures overall throughput, resource usage, and bottlenecks across the stack, making it the closest representation of production behavior. Its value lies in validating real integration performance and identifying whether limitations come from the agent, pipelines, or the cluster.
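For reference, the three benchmark types map to separate elastic-package subcommands. The commands below are a sketch; the `-v` flag is an assumption, and rally and system benchmarks may require additional scenario flags depending on your elastic-package version:

```shell
# Run from the package directory (integrations/packages/ti_anomali),
# with a local stack already up via `elastic-package stack up -d`.

# Pipeline benchmarks: replay stored sample events through the ingest
# pipeline and report throughput, with no Elastic Agent involved.
elastic-package benchmark pipeline -v

# Rally benchmarks: drive Elasticsearch indexing and query performance
# using a generated corpus for the package.
elastic-package benchmark rally -v

# System benchmarks: end-to-end run with an Elastic Agent shipping
# generated events through the pipeline into Elasticsearch.
elastic-package benchmark system -v
```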

@elastic-vault-github-plugin-prod

🚀 Benchmarks report

To see the full report, comment with /test benchmark fullreport

@elasticmachine

💚 Build Succeeded

History

cc @mohitjha-elastic

@chrisberkhout
Contributor

Looks good. One minor suggestion in a policy test.
It looks like this has rally benchmarks, system benchmarks and pipeline benchmarks. It would be great if you could remind me of how those benchmark types differ in function and value. Are all three worthwhile here?

Thanks. So let's merge this.

I think pipeline benchmarking is definitely useful for integrations. A lot of our logic is there. The system benchmarking is probably also useful because sometimes the agent may be the bottleneck and that's definitely something that the integration is in charge of. The rally benchmarking seems a bit less relevant because things like sharding, etc. are not really integration concerns. These seem like they would be useful less for monitoring integration quality and more for informing cluster resourcing in actual setups. Where we get value from these might be a topic for future discussion in the team, but for now let's have them all.

Thanks for the good work!

@mohitjha-elastic mohitjha-elastic merged commit 77d8344 into elastic:main Feb 20, 2026
10 checks passed


Development

Successfully merging this pull request may close these issues.

  • ti_anomali: Add benchmarks for integration quality checks
  • ti_anomali: Add policy tests for integration quality checks
