UDP input TCP output and Gelf #3
@Hermain From what I know of the GELF format, the resulting messages can be chunked and/or compressed. Consequently, this beat doesn't directly support the format, as there's no logic to reassemble larger messages that span multiple UDP chunks, nor is there any decompression. It only accepts plain uncompressed text/JSON as input. In theory, you could adapt this beat to support the GELF message format with Graylog's GELF reader. As for the output, you could then implement the Logstash output and apply any final processing there.
I think I'll just wait until Docker supports GELF logging over TCP and use some other solution until then, or accept that I might lose some log messages. Thank you
@Hermain Let me take a look at the specifications and the Go code for the GELF format, and if it doesn't diverge much from the purpose of this beat, then I see no harm in implementing an additional option to support the GELF message format as input.
That would be great. Thank you for considering it. https://blog.docker.com/2017/02/adventures-in-gelf/ This link describes the kind of issues you have with the Docker GELF driver, and there is no good solution for this except running Logstash on each machine. I looked for a good solution for a long time and found nothing. The GELF driver adds a lot of additional fields to each Docker log message which no other driver delivers, so there's added value.
@Hermain Seems like this was relatively simple to implement with Graylog's Go library. I'm just doing a few things to test, so I should have a new version with GELF input support within the next day or so. Seeing as I don't really work with Graylog, I'll probably ask you (if you don't mind) to do a bit of testing on your side with some data before I release the next version. Let me know if that's ok with you.
@hartfordfive I would gladly try to use this beat with Logstash and the GELF plugin.
@Hermain I've got a beta build up if you'd like to test it: https://github.com/hartfordfive/protologbeat/releases/tag/0.2.0-beta You'll have to set the following config:
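(The actual config snippet was lost when this thread was archived; the sketch below is a reconstruction, not the original. The key names, port, and especially the `enable_gelf` flag are assumptions and should be checked against the protologbeat README for the 0.2.0-beta release.)

```yaml
protologbeat:
  address: "0.0.0.0"    # assumed key name: interface to listen on
  port: 12000           # port the docker gelf driver will send to (arbitrary choice)
  protocol: "udp"       # the docker gelf logging driver only speaks UDP
  enable_gelf: true     # hypothetical flag name for the new GELF input mode
```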
I've done some minimal testing on my side and it seems to work fine with small basic messages. Let me know how it works with more realistic message payloads. So if you're shipping data to Logstash, your config might look something like this:
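(The Logstash-output snippet was also lost; this is a sketch using the standard Beats output section, with a hypothetical hostname. The Logstash side must be running the beats input plugin on the matching port.)

```yaml
output.logstash:
  hosts: ["logstash.example.com:5044"]   # hypothetical host; 5044 is the conventional beats port
```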
You can set
I added it to my current sprint and will have a look at it after the current ticket I am working on.
Similar, but you'll need to add your ssl parameters:
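(The SSL snippet was lost as well; a sketch using the standard Beats `ssl.*` options, with placeholder certificate paths:)

```yaml
output.logstash:
  hosts: ["logstash.example.com:5044"]                  # hypothetical host
  ssl.certificate_authorities: ["/etc/pki/root/ca.pem"] # CA used to verify Logstash
  ssl.certificate: "/etc/pki/client/cert.pem"           # client cert, if mutual TLS is enabled
  ssl.key: "/etc/pki/client/cert.key"
```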
For more info about securing your Filebeat/Logstash connection, you can view this documentation page.
@hartfordfive I tested protologbeat without encryption and it didn't work. Logstash received messages, but the output was all scrambled. I containerized the binary you sent me with the following Dockerfile:
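(The Dockerfile itself was lost in the archive; a minimal sketch of what it plausibly looked like — base image, paths, and flags are assumptions:)

```dockerfile
FROM debian:jessie
COPY protologbeat /protologbeat
COPY config.yml /protologbeat.yml
EXPOSE 12000/udp
ENTRYPOINT ["/protologbeat", "-c", "/protologbeat.yml", "-e"]
```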
using config.yml:
and generated the logmessages with:
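(The command was lost; generating GELF messages from Docker would look roughly like this — the address and port are assumptions matching a local protologbeat listener:)

```sh
docker run --rm \
  --log-driver gelf \
  --log-opt gelf-address=udp://127.0.0.1:12000 \
  alpine echo "hello gelf"
```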
Here's the part of the docker-compose file which describes Logstash:
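(The compose snippet was lost; a sketch of a minimal Logstash 5.3.1 service with a beats input, for illustration only:)

```yaml
logstash:
  image: logstash:5.3.1
  command: logstash -e "input { beats { port => 5044 } } output { stdout { codec => rubydebug } }"
  ports:
    - "5044:5044"
```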
Protologbeat log output: 2017/05/19 13:32:54.295486 beat.go:267: INFO Home path: [/] Config path: [/] Data path: [//data] Logs path: [//logs] Logstash received:
I've tested with the basic GELF message writer function that comes with the Go GELF library and I've also tested by sending raw messages like this:
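(The raw-message example was lost in the archive. A sketch of sending a plain, uncompressed GELF 1.1 payload over UDP — the host and port are assumptions for a local protologbeat listener; field names follow the GELF spec:)

```python
import json
import socket

# A minimal uncompressed GELF 1.1 payload. Custom fields must be
# prefixed with an underscore per the GELF specification.
message = {
    "version": "1.1",
    "host": "test-host",
    "short_message": "hello from a raw GELF test message",
    "level": 6,
    "_custom_field": "custom fields are prefixed with an underscore",
}

payload = json.dumps(message).encode("utf-8")

# UDP is connectionless, so sendto succeeds even if nothing is listening.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(payload, ("127.0.0.1", 12000))
sock.close()
```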
Both of these cases work fine, as I can see the resulting data in Elasticsearch in the proper GELF format. I'm digging a bit deeper to see what the issue might be here. Which version of Logstash are you using?
I'm using logstash:5.3.1. I would try to have a Docker container generate GELF messages, as this is how I got the scrambled results (which also show in Kibana). All you need is to have Docker installed.
Thank you very much for your work
I remembered that Docker does some compression and tried gelf-compression-type=none. This fixed the issue and Logstash receives proper entries. The standard compression type for Docker GELF messages is gzip. The Logstash GELF plugin worked with gzip-compressed messages. Was protologbeat intended to support compressed data (maybe the code from the GELF Logstash plugin can be used)?
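(For reference, the workaround as a full command — the address and port are assumptions; `gelf-compression-type` is a documented Docker log-opt:)

```sh
docker run --rm \
  --log-driver gelf \
  --log-opt gelf-address=udp://127.0.0.1:12000 \
  --log-opt gelf-compression-type=none \
  alpine echo "uncompressed gelf"
```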
Sorry for the delay. I just checked the Docker documentation and you're right: it does in fact say the default compression is gzip for GELF messages. The compression type should be automatically detected via the magic byte header, as shown in the Graylog package's Read function. I'm not sure why it wasn't automatically detecting the compression, although I wouldn't really advise using compression in this case, as protologbeat should be local to your Docker instance. There's no network transmission from the Docker GELF logging driver to the protologbeat instance here, so you'd just be using up additional CPU to compress/decompress the data.
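(The magic-byte detection mentioned above can be illustrated in a few lines — this is a sketch of the idea, not the Graylog library's actual code:)

```python
import gzip
import zlib

def detect_compression(data: bytes) -> str:
    """Guess the encoding of a GELF payload from its leading magic bytes."""
    if data[:2] == b"\x1f\x8b":
        return "gzip"      # gzip magic number
    if data[:1] == b"\x78":
        return "zlib"      # zlib header byte (0x78 0x01 / 0x9c / 0xda)
    if data[:2] == b"\x1e\x0f":
        return "chunked"   # GELF chunked-message magic bytes
    return "none"          # assume plain uncompressed JSON

raw = b'{"version":"1.1","short_message":"hi"}'
print(detect_compression(raw))                 # none
print(detect_compression(gzip.compress(raw)))  # gzip
print(detect_compression(zlib.compress(raw)))  # zlib
```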
@hartfordfive I guess for my specific use case it might not be a big problem, but it should nevertheless work. I will now test this on my development swarm without compression and see if I encounter further issues.
The entire GELF message goes into the message field.
Can you send me an example of the final document that gets indexed in Elasticsearch?
This is an example:
Right now I'm trying to use the json filter to split the fields up, but then I get issues because the Docker fields start with '_'.
Ahh, yes, you simply have to use the Logstash JSON plugin. That will parse all the data and merge it into the root level. For example, add this into your filter stage:
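(The filter snippet was lost; a sketch using the standard Logstash json filter, assuming the GELF JSON lands in the `message` field:)

```
filter {
  json {
    source => "message"
  }
}
```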
The `source` option tells the filter which field contains the JSON string to parse.
I did some testing:
Now I have seen the following issues:
Any idea what this could be? It happens quite frequently.
@Hermain I apologize for the delayed response. I'll try to take some time by the end of the week to look at this a bit further.
I will be on holiday from June 17th until July 16th, so I won't be able to test any changes or answer any questions then. This does not mean I'm not interested :)
Thanks for the heads up, and I apologize for not having the time to look into your issues sooner. Hoping to be able to get back to my OS projects soon!
@Hermain Sorry I haven't been able to get around to this yet. Between vacations and a very busy few months at my actual day job, I haven't been able to look at this yet, although I haven't forgotten.
@hartfordfive If I read the release notes for version 0.2.0, it sounds like some of the issues might have been fixed. Is that true?
@Hermain I haven't actually had the chance to get around to checking out some of your issues, unfortunately. I had another fix that was released, so I figured I'd include the GELF input feature, although that's still in beta.
Can I use this to receive GELF UDP packets (GELF is a subset of JSON) and forward them to Logstash over TCP? I don't want to run Logstash on every machine in my cluster and am looking for a lightweight solution to reliably forward the UDP log messages (the Docker GELF logging driver only supports UDP) to Logstash.