Replies: 1 comment · 5 replies
I would recommend vector.dev; our team maintains the GreptimeDB sinks for Vector, so it has first-class support. For metrics, if you are coming from the Prometheus world, I would recommend Vector's prometheus remote write sink. For the Docker metrics source, do you know if there is a Prometheus exporter for that? If it exposes Prometheus-compatible OpenMetrics data, you can also configure Vector to scrape it.
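As a rough illustration of that setup, a Vector config along these lines (YAML form) scrapes an existing Prometheus-compatible exporter and remote-writes the metrics into GreptimeDB. The exporter address, database name, and write path below are placeholder assumptions to check against the GreptimeDB docs, not values taken from this thread:

```yaml
# Minimal sketch: scrape a Prometheus-compatible exporter and remote-write into GreptimeDB.
# All addresses and the db name are placeholders; verify the exact write path in the docs.
sources:
  docker_metrics:
    type: prometheus_scrape
    endpoints:
      - http://localhost:9323/metrics   # whichever exporter exposes your Docker metrics
sinks:
  greptimedb:
    type: prometheus_remote_write
    inputs:
      - docker_metrics
    endpoint: http://localhost:4000/v1/prometheus/write?db=public
```

The dedicated greptimedb sinks mentioned above are another option in place of prometheus_remote_write; this sketch just shows the Prometheus-flavored route.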
What do you do to collect all the logs and metrics you need and get them into GreptimeDB?
I've been trying to use Telegraf as my single collector, but I'm running into issues with logs. Specifically, if I query too many logs, GreptimeDB uses enough memory to cause problems. I'm still learning about GreptimeDB, but I think part of the solution is to configure pipelines that create proper indexes and parse logs out into their fields. My initial configuration used Telegraf's Loki output with GreptimeDB's Loki ingest, but it turns out that GreptimeDB doesn't support pipelines with the Loki ingest (https://github.com/orgs/GreptimeTeam/discussions/6270).
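My rough understanding is that such a pipeline is defined in YAML along the lines of the sketch below; the processor names, types, and index options here are my own assumptions and would need to be checked against the GreptimeDB pipeline docs rather than taken as the real config:

```yaml
# Rough pipeline sketch (unverified): parse an access-log style message into fields
# and index them, instead of storing the whole line as one text column.
processors:
  - dissect:
      fields:
        - message
      patterns:
        - '%{ip} - - [%{ts}] "%{method} %{path} %{protocol}" %{status} %{size}'
  - date:
      fields:
        - ts
      formats:
        - "%d/%b/%Y:%H:%M:%S %z"
transform:
  - fields:
      - method
    type: string
    index: tag          # low-cardinality column to filter on
  - fields:
      - path
    type: string
  - fields:
      - status
      - size
    type: int32
  - fields:
      - ts
    type: time
    index: timestamp    # designated time index
```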
(Note: metrics via the InfluxDB v1 output/ingest work great. Super fast, and no issues with large queries.)
I just tried the Telegraf Elasticsearch output, and it didn't work. The HTTP output pointed at the Elasticsearch ingest endpoint resulted in 404s for some reason.
I like Vector.dev, but it doesn't have a Docker metrics source (I think I read in their issue queue that they want to leave that to their cgroups source). I've looked at Grafana Alloy, but I haven't wanted to try it because it means deciphering yet another configuration syntax.
Anyway, what do you use? Do you use one collector, or more?