
[Security Solution][Timeline] fix Kibana DoS via Timeline Bulk Export#260265

Merged
agusruidiazgd merged 4 commits into elastic:main from agusruidiazgd:fix/fix-export-timeline-bulk
Apr 1, 2026

Conversation

@agusruidiazgd
Contributor

@agusruidiazgd agusruidiazgd commented Mar 30, 2026

Summary

Closes: https://github.com/elastic/security-team/issues/14883
This PR fixes a DoS risk in Timeline bulk export (POST /api/timeline/_export).

A user could send a very large list of timeline IDs (including duplicates), which caused unbounded work during export and could degrade Kibana availability.

What changed

  • Added stronger request validation for export ids:
    • minimum 1
    • maximum 1000
  • Deduplicated incoming export IDs before processing.
  • Enforced export size checks on normalized IDs.
  • Replaced unbounded enrichment fan-out (notes/pinned events) with bounded batching.
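The dedupe and size-bound steps above can be sketched as a small standalone helper. This is a hypothetical illustration (names like `normalizeExportIds` and the error messages are invented here), not the actual Kibana implementation; only the 1..1000 bound mirrors the bullets.

```js
// Hypothetical sketch of the ID normalization described above; not the
// actual Kibana code. Dedupes incoming IDs first, then enforces the
// 1..1000 bound on the normalized list so duplicates cannot inflate work.
function normalizeExportIds(ids, maxIds = 1000) {
  // Deduplicate before counting, so repeated IDs count once.
  const unique = [...new Set(ids)];
  if (unique.length < 1) {
    throw new Error('ids must contain at least 1 timeline id');
  }
  if (unique.length > maxIds) {
    throw new Error(`ids must contain at most ${maxIds} timeline ids`);
  }
  return unique;
}
```

Because deduplication happens before the size check, a payload of 2000 copies of the same ID normalizes to a single ID and passes, while 1001 distinct IDs are rejected.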
How to test

UI sanity test

  1. Log in with a user that has timeline read access (timeline_read).
  2. Go to Security -> Timelines.
  3. Select one or more timelines and export from the UI.
  4. Verify export still works and downloads NDJSON as expected.

Browser Console test (while logged in to Kibana)

  1. Oversized payload should fail (400)

```js
await fetch('/api/timeline/_export?file_name=timelines_export.ndjson', {
  method: 'POST',
  credentials: 'same-origin',
  headers: {
    'content-type': 'application/json',
    'kbn-xsrf': 'true',
    'elastic-api-version': '2023-10-31',
  },
  body: JSON.stringify({
    ids: Array.from({ length: 1001 }, () => crypto.randomUUID()),
  }),
});
```
  2. Duplicate IDs should be handled safely

```js
await fetch('/api/timeline/_export?file_name=timelines_export.ndjson', {
  method: 'POST',
  credentials: 'same-origin',
  headers: {
    'content-type': 'application/json',
    'kbn-xsrf': 'true',
    'elastic-api-version': '2023-10-31',
  },
  body: JSON.stringify({
    ids: ['REAL_TIMELINE_ID', 'REAL_TIMELINE_ID', 'REAL_TIMELINE_ID'],
  }),
});
```
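The bounded batching mentioned under "What changed" can be illustrated with a generic helper. This is a sketch under assumed names (`enrichInBatches`, `enrichOne`, and the batch size of 100 are all invented for illustration); the real notes/pinned-events enrichment in Kibana differs.

```js
// Hypothetical sketch of bounded enrichment batching; not the actual
// Kibana code. Processes IDs in fixed-size batches so a large export
// cannot fan out into thousands of concurrent lookups at once.
async function enrichInBatches(ids, enrichOne, batchSize = 100) {
  const results = [];
  for (let i = 0; i < ids.length; i += batchSize) {
    const batch = ids.slice(i, i + batchSize);
    // Only `batchSize` lookups are in flight at any time; each batch
    // completes before the next one starts.
    results.push(...(await Promise.all(batch.map(enrichOne))));
  }
  return results;
}
```

Compared to a single `Promise.all(ids.map(enrichOne))`, this caps concurrency at `batchSize`, which is what turns the previously unbounded fan-out into bounded work.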

@agusruidiazgd agusruidiazgd self-assigned this Mar 30, 2026
@agusruidiazgd agusruidiazgd requested a review from a team as a code owner March 30, 2026 13:55
@agusruidiazgd agusruidiazgd added the release_note:fix, Team:Threat Hunting:Investigations (Security Solution Threat Hunting Investigations Team), backport:version (Backport to applied version labels), v9.4.0, v9.3.3, v9.2.8, and v8.19.14 labels Mar 30, 2026
@elasticmachine
Contributor

Pinging @elastic/security-threat-hunting-investigations (Team:Threat Hunting:Investigations)

@elastic-vault-github-plugin-prod elastic-vault-github-plugin-prod bot requested a review from a team as a code owner March 30, 2026 15:09
@elasticmachine
Contributor

💚 Build Succeeded

Metrics [docs]

Async chunks

Total size of all lazy-loaded chunks that will be downloaded as the user navigates the app

| id | before | after | diff |
| --- | --- | --- | --- |
| securitySolution | 11.5MB | 11.5MB | +16.0B |

cc @agusruidiazgd

@agusruidiazgd agusruidiazgd merged commit 9d0ff0c into elastic:main Apr 1, 2026
18 checks passed
@kibanamachine
Contributor

Starting backport for target branches: 8.19, 9.2, 9.3

https://github.com/elastic/kibana/actions/runs/23839724833

@kibanamachine
Contributor

💔 All backports failed

- 8.19: Backport failed because of merge conflicts. You might need to backport the following PRs to 8.19:
  - [ska] relocation security_solution_* FTR tests (#231416)
- 9.2: Backport failed because of merge conflicts
- 9.3: Backport failed because of merge conflicts

Manual backport

To create the backport manually run:

node scripts/backport --pr 260265

Questions?

Please refer to the Backport tool documentation

jeramysoucy pushed a commit to jeramysoucy/kibana that referenced this pull request Apr 1, 2026
eokoneyo pushed a commit to davismcphee/kibana that referenced this pull request Apr 2, 2026
paulinashakirova pushed a commit to paulinashakirova/kibana that referenced this pull request Apr 2, 2026
@kibanamachine kibanamachine added the backport missing label (Added to PRs automatically when they are determined to be missing a backport.) Apr 3, 2026
@kibanamachine
Contributor

Friendly reminder: Looks like this PR hasn't been backported yet.
To create backports automatically, add a backport:* label, or prevent reminders by adding the backport:skip label.
You can also create backports manually by running node scripts/backport --pr 260265 locally.
cc: @agusruidiazgd

2 similar comments

