[Barracuda CloudGen] Add initial Barracuda CloudGen Firewall integration #3796
Conversation
Tried using the lumberjack input today with the agent, and I guess it needs to be added to this list first: https://github.com/elastic/elastic-agent/blob/main/internal/spec/filebeat.yml#L24? |
Good call. That spec needs to be updated. |
@andrewkroh Using the 8.5.0-SNAPSHOT image, the lumberjack input is working. I am running into 2 issues though.
|
We can add a lumberjack output to elastic/stream to ease testing. I've opened elastic/stream#39 to track this.
The event that is logged doesn't show that odd year value, so I wonder if something happening on the ingest node pipeline side is causing the data corruption. |
I think I may have found it. The Web data uses epoch milliseconds, not seconds. Also, I may have a way to wait for the agent to be ready. |
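For illustration, a minimal ingest pipeline `date` processor along these lines would parse epoch milliseconds rather than seconds (the source field name below is a placeholder, not necessarily what this PR uses):

```yaml
- date:
    # Placeholder field name for the raw epoch-milliseconds timestamp.
    field: barracuda_cloudgen_firewall.log.timestamp
    target_field: "@timestamp"
    formats:
      - UNIX_MS   # epoch milliseconds; UNIX would expect epoch seconds
```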
Pinging @elastic/security-external-integrations (Team:Security-External Integrations) |
Can you add this integration to the CODEOWNERS file in this PR? |
I added the Lumberjack output to elastic/stream in elastic/stream#41. This is the config that I was using. I got blocked by an issue in 8.5.0-SNAPSHOT today where the API keys for Filebeat were invalid, so I couldn't test E2E. But I did confirm that stream was connected and communicating with Filebeat. The JSON contained in the sample_logs will need to be adapted to look more like what Filebeat would be sending.

```yaml
version: '2.3'
services:
  barracuda-cloudgen-lumberjack:
    image: docker.elastic.co/observability/stream:local # Requires next release.
    volumes:
      - ./sample_logs:/sample_logs:ro
    environment:
      - STREAM_PROTOCOL=lumberjack
      - STREAM_LUMBERJACK_PARSE_JSON=true
      - STREAM_ADDR=tcp://elastic-agent:5044
      - STREAM_DELAY=5s
      - STREAM_START_SIGNAL=SIGHUP
    command: log /sample_logs/*.log
```

I also found a bug in the input that would cause Filebeat to panic if multiple clients were streaming data: elastic/beats#33071 |
Using your example, the JSON in the sample logs file was sent in the |
Using my example stream config, each JSON log line will be sent as a structured event. On the receiving side, the lumberjack input will produce an event that contains |
The new container with lumberjack output is ready: |
Force-pushed from 644bbeb to 1e14290
/test |
🌐 Coverage report |
@andrewkroh I think it's ready to try the tests. I sent a message in Slack with an error that the stream container seems to have
It appears to hang on the |
This is to be consistent with other integrations. Also disable lumberjack v1 by default.
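For context, a hedged sketch of a Filebeat lumberjack input configuration with v1 disabled might look like the following; the listen address is an assumption, not taken from this PR:

```yaml
- type: lumberjack
  # Assumed listen address; in the integration this would come from policy settings.
  listen: "0.0.0.0:5044"
  # Accept only Lumberjack protocol v2; v1 is disabled by default.
  versions: [v2]
```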
If a failure occurred at a point where @timestamp had been deleted, indexing would fail and the data would be lost.
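As an illustration only (not the exact change made here), a pipeline-level on_failure handler can keep such documents indexable by restoring a timestamp and recording the error:

```yaml
on_failure:
  - set:
      # Fall back to the ingest timestamp so the document still has a valid @timestamp.
      field: "@timestamp"
      copy_from: _ingest.timestamp
      ignore_failure: true
  - append:
      field: error.message
      value: "{{{ _ingest.on_failure_message }}}"
```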
[git-generate]
cd packages/barracuda_cloudgen_firewall/data_stream/log/fields
yq 'sort_by(.name)' ecs.yml
yq -i 'sort_by(.name)' ecs.yml
I didn't see where it was used. And if we were going to keep it, then it should be using `external: ecs`.
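For reference, an ECS field imported by reference in the package's ecs.yml fields file is declared roughly like this (the field name is just an example):

```yaml
- external: ecs
  name: source.address
```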
/test |
```yaml
# Lumberjack transport metadata (the sending connection's address and TLS client
# subject) is preserved under labels.*, then the original source and tls objects
# are removed.
- rename:
    field: source.address
    target_field: labels.origin_address
    ignore_missing: true
- rename:
    field: tls.client.subject
    target_field: labels.origin_client_subject
    ignore_missing: true
- remove:
    field:
      - source
      - tls
    ignore_missing: true
```
Separate from this PR, would it make sense to update the lumberjack input to use different fields than source and tls in the actual input code? It seems misleading by default to have those fields populated when they're only relevant to the transport used to receive the logs, not really what's in the log itself. Just a thought.
I was thinking of proposing some official ECS fields for this data, like perhaps log.source.ip, log.source.port, and log.source.x509.*.
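Purely to illustrate the proposal (these are not official ECS fields), such definitions could be sketched in a fields file along these lines:

```yaml
- name: log.source.ip
  type: ip
  description: IP address of the client that delivered the log over the transport.
- name: log.source.port
  type: long
  description: Port of the client that delivered the log over the transport.
```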
The syslog and TCP/UDP inputs use log.source.address, but it's not an official ECS field. I think it's legacy, but it works for exactly what is wanted.
🚀 Benchmarks report. To see the full report, comment with |
Add sample event to the readme. Remove reference links. I'm not sure if they are supported in Kibana.
/test |
I think this is good to go for a technical preview. Things to follow up on:
- Propose standardized fields for log origin metadata.
- Test with a real CloudGen firewall.
- Document TLS requirements, if any (I think the CloudGen UI requires TLS to be enabled on the server); see the sketch after this list.
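As a starting point only (the paths and address below are placeholders, not taken from this PR), TLS on the input's server side would use the standard Beats SSL settings, roughly:

```yaml
- type: lumberjack
  listen: "0.0.0.0:5044"               # assumed listen address
  ssl:
    enabled: true
    certificate: /path/to/server.crt   # placeholder certificate path
    key: /path/to/server.key           # placeholder key path
```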
Great work @legoguy1000 getting this to the finish line. Thank you! |
/test |
What does this PR do?
Add initial Barracuda CloudGen Firewall integration for receiving Firewall Insight logs as described at https://campus.barracuda.com/product/cloudgenfirewall/doc/96025953/how-to-enable-filebeat-stream-to-a-logstash-pipeline. Elastic Agent starts a server to receive data sent over the Lumberjack protocol by the CloudGen firewall. (This is the same protocol used between Beats and Logstash.)
Checklist

- I have added an entry to my package's changelog.yml file.

Related issues
Screenshots
Logs
Real sample from CloudGen 8.3:
2022-09-23-barracuda-cloudgen-firewall-insights.ndjson.txt