
[8.19](backport #44782) [beatreceiver] - Add status reporting#45045

Merged
VihasMakwana merged 6 commits into
8.19from
mergify/bp/8.19/pr-44782
Jul 4, 2025

Conversation


@mergify mergify Bot commented Jun 26, 2025

Proposed commit message

This PR adds status reporting for beatreceivers. The status reporting is wired in while the runners are created. The first PR (#44528) was quite "hacky": it had to reach deep into the code to inject status reporters.

This PR adds a runner factory wrapper that will:

  1. Call the parent factory to create the runner
  2. Inject status reporter
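As a rough illustration of the wrapper pattern described above, here is a minimal sketch in Go. Note that `Runner`, `RunnerFactory`, `StatusReporter`, and the demo types below are simplified stand-ins invented for this example; the real `cfgfile.RunnerFactory` and status types in libbeat differ in detail.

```go
package main

import "fmt"

// Simplified stand-ins for libbeat's interfaces (illustration only).
type Runner interface {
	Start()
	Stop()
}

type RunnerFactory interface {
	Create(config string) (Runner, error)
}

type StatusReporter interface {
	UpdateStatus(status string)
}

// Runners that can accept a status reporter implement this.
type statusAware interface {
	SetStatusReporter(StatusReporter)
}

// reporterFactory wraps a parent factory: it (1) calls the parent to create
// the runner and (2) injects the status reporter if the runner supports it.
type reporterFactory struct {
	parent   RunnerFactory
	reporter StatusReporter
}

func (f reporterFactory) Create(config string) (Runner, error) {
	r, err := f.parent.Create(config) // 1. call the parent factory
	if err != nil {
		return nil, err
	}
	if sa, ok := r.(statusAware); ok {
		sa.SetStatusReporter(f.reporter) // 2. inject the status reporter
	}
	return r, nil
}

// --- demo types, purely for the example ---

type memReporter struct{ last string }

func (m *memReporter) UpdateStatus(s string) { m.last = s }

type demoRunner struct{ reporter StatusReporter }

func (d *demoRunner) Start() {
	if d.reporter != nil {
		d.reporter.UpdateStatus("running")
	}
}
func (d *demoRunner) Stop()                              {}
func (d *demoRunner) SetStatusReporter(r StatusReporter) { d.reporter = r }

type demoFactory struct{}

func (demoFactory) Create(string) (Runner, error) { return &demoRunner{}, nil }

func main() {
	rep := &memReporter{}
	f := reporterFactory{parent: demoFactory{}, reporter: rep}
	r, err := f.Create("system/metrics")
	if err != nil {
		panic(err)
	}
	r.Start()
	fmt.Println("reported status:", rep.last) // reported status: running
}
```

Because the wrapper satisfies the same factory interface as its parent, existing call sites need no changes; the injection is transparent.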

The code responsible for these tasks lives in libbeat, and we only enable it for beatreceivers. At a high level, the beat receiver does the following:

  1. The beater will be created in createReceiver
  2. We will add the factory wrapper:

     if w, ok := br.beater.(cfgfile.WithFactoryWrapper); ok {
         groupReporter := status.NewGroupStatusReporter(host)
         w.WithFactoryWrapper(status.StatusReporterFactory(groupReporter))
     }
  3. The receiver will kick off the beater:

     func (br *BeatReceiver) Start() error {
         if err := br.beater.Run(&br.beat.Beat); err != nil {
             return fmt.Errorf("beat receiver run error: %w", err)
         }
         return nil
     }
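The group reporter in step 2 aggregates the statuses of the individual runners into one status for the host component. A plausible way to do that (a hypothetical sketch, not the actual `status.NewGroupStatusReporter` implementation) is "worst status wins" across all members, which matches the degraded output shown further below:

```go
package main

import (
	"fmt"
	"sync"
)

// Status levels, ordered from best to worst. Invented for this sketch;
// libbeat's real status type differs.
type Status int

const (
	StatusOK Status = iota
	StatusDegraded
	StatusFailed
)

func (s Status) String() string {
	return [...]string{"StatusOK", "StatusDegraded", "StatusFailed"}[s]
}

// groupReporter tracks one status per member and reports the worst of them.
type groupReporter struct {
	mu       sync.Mutex
	statuses map[string]Status
}

func newGroupReporter() *groupReporter {
	return &groupReporter{statuses: make(map[string]Status)}
}

// Report records the status of one member and returns the group aggregate.
func (g *groupReporter) Report(member string, s Status) Status {
	g.mu.Lock()
	defer g.mu.Unlock()
	g.statuses[member] = s
	worst := StatusOK
	for _, st := range g.statuses {
		if st > worst {
			worst = st
		}
	}
	return worst
}

func main() {
	g := newGroupReporter()
	g.Report("filestream-default", StatusOK)
	agg := g.Report("system/metrics-default", StatusDegraded)
	fmt.Println(agg) // StatusDegraded: one degraded member degrades the group
}
```

Under this scheme the group recovers automatically: once every member reports StatusOK again, the aggregate drops back to StatusOK.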

Note:

To accomplish the above steps, it is essential that the runners are created in beater.Run(...). Currently, metricbeat creates runners during the beater creation phase and only starts them in beater.Run(...). This PR moves the runner creation code into beater.Run(...) to align more closely with filebeat's implementation.
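The reason this reordering matters can be shown with a toy sketch (hypothetical types, not the real metricbeat code): the factory wrapper is installed after the beater is constructed but before Run, so any runner created during construction would miss it. Creating runners inside Run guarantees the wrapper is applied:

```go
package main

import "fmt"

// factory builds a "runner" (modelled as a string here) from a config.
type factory func(cfg string) string

type beater struct {
	wrap func(factory) factory
}

// WithFactoryWrapper is called between construction and Run. Because runner
// creation happens inside Run (the filebeat-style ordering), the wrapper is
// guaranteed to take effect.
func (b *beater) WithFactoryWrapper(w func(factory) factory) { b.wrap = w }

func (b *beater) Run(cfgs []string) []string {
	f := factory(func(cfg string) string { return "runner(" + cfg + ")" })
	if b.wrap != nil {
		f = b.wrap(f) // wrapper applied before any runner exists
	}
	var runners []string
	for _, c := range cfgs {
		runners = append(runners, f(c)) // runners created here, in Run
	}
	return runners
}

func main() {
	b := &beater{}
	b.WithFactoryWrapper(func(parent factory) factory {
		return func(cfg string) string { return "withStatus(" + parent(cfg) + ")" }
	})
	fmt.Println(b.Run([]string{"system/metrics"}))
}
```

Had the runners been created in the constructor (the old metricbeat ordering), `WithFactoryWrapper` would have arrived too late and the runners would be plain `runner(...)` values without status reporting.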

Checklist

  • My code follows the style guidelines of this project
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • I have made corresponding changes to the default configuration files
  • I have added tests that prove my fix is effective or that my feature works
  • I have added an entry in CHANGELOG.next.asciidoc or CHANGELOG-developer.next.asciidoc.

Related issues

Screenshots


Output

Here's the output of running two streams (degraded) together:

┌─ fleet
│  └─ status: (STOPPED) Not enrolled into Fleet
└─ elastic-agent
   ├─ status: (DEGRADED) 1 or more components/units in a degraded state
   ├─ pipeline:logs/_agent-component/filestream-default
   │  ├─ status: StatusRecoverableError [error while running harvester: cannot read from file source: /var/log/elasticAgent-install-20240625_133733.log]
   │  ├─ exporter:elasticsearch/_agent-component/default
   │  │  └─ status: StatusOK
   │  └─ receiver:filebeatreceiver/_agent-component/filestream-default
   │     └─ status: StatusRecoverableError [error while running harvester: cannot read from file source: /var/log/elasticAgent-install-20240625_133733.log]
   └─ pipeline:logs/_agent-component/system/metrics-default
      ├─ status: StatusRecoverableError [Error fetching data for metricset system.process: error fetching process list: non fatal error; reporting partial metrics: error fetching PID metrics for 607 processes, most likely a "permission denied" error. Enable debug logging to determine the exact cause.]
      ├─ exporter:elasticsearch/_agent-component/default
      │  └─ status: StatusOK
      └─ receiver:metricbeatreceiver/_agent-component/system/metrics-default
         └─ status: StatusRecoverableError [Error fetching data for metricset system.process: error fetching process list: non fatal error; reporting partial metrics: error fetching PID metrics for 607 processes, most likely a "permission denied" error. Enable debug logging to determine the exact cause.]

Testing

  1. Checkout this PR locally
  2. Go to elastic-agent and follow this guide to test local beats changes
  3. Package agent with mage package
  4. Follow the steps in elastic/elastic-agent#8210 (Beat receivers do not correctly report status back to the Elastic Agent) to install the agent and verify the status

Closes elastic/elastic-agent#8210


This is an automatic backport of pull request #44782 done by Mergify.

* initial commit

* todo

* implement reporter for mbreceiver

* notice and lint

* mbreceiver

* lint

* comments

* notice

* errors

* optimization

* add test case suite

* fix benchmark and tests

* rename to otel specific

* test log

(cherry picked from commit d71266c)
@mergify mergify Bot requested review from a team as code owners June 26, 2025 08:31
@mergify mergify Bot added the backport label Jun 26, 2025
@mergify mergify Bot requested review from faec and khushijain21 and removed request for a team June 26, 2025 08:31
@botelastic botelastic Bot added the needs_team Indicates that the issue/PR needs a Team:* label label Jun 26, 2025
@github-actions github-actions Bot added the Team:Elastic-Agent-Data-Plane Label for the Agent Data Plane team label Jun 26, 2025
@elasticmachine

Pinging @elastic/elastic-agent-data-plane (Team:Elastic-Agent-Data-Plane)

@botelastic botelastic Bot removed the needs_team Indicates that the issue/PR needs a Team:* label label Jun 26, 2025

mergify Bot commented Jun 30, 2025

This pull request has not been merged yet. Could you please review and merge it @VihasMakwana? 🙏


mergify Bot commented Jul 4, 2025

This pull request is now in conflicts. Could you fix it? 🙏
To fix up this pull request, you can check it out locally. See documentation: https://help.github.com/articles/checking-out-pull-requests-locally/

git fetch upstream
git checkout -b mergify/bp/8.19/pr-44782 upstream/mergify/bp/8.19/pr-44782
git merge upstream/8.19
git push upstream mergify/bp/8.19/pr-44782

@VihasMakwana VihasMakwana merged commit 58a0081 into 8.19 Jul 4, 2025
196 of 200 checks passed
@VihasMakwana VihasMakwana deleted the mergify/bp/8.19/pr-44782 branch July 4, 2025 13:53

Labels

backport Team:Elastic-Agent-Data-Plane Label for the Agent Data Plane team
