Prometheus support #1507
Yes, but Zabbix would be more interesting.
Maybe there is a common way to handle both Prometheus and other monitoring systems generically?
These metrics would be helpful for us.
Prometheus support would be greatly appreciated.
+1 for Prometheus
+1 for Prometheus!
@fujita, I am interested 😃 There are at least two ways to do that. Any preference?
@fujita, I looked at that.
Sorry about the delay; I've just started to investigate implementation options. @greenpau, why can't we pick the best of options 1 and 2? Having a separate project (exporter) forces users to manage another daemon. Can't we run a goroutine that executes the gRPC API and handles HTTP requests from Prometheus? I've just created a branch that only sends the received number for each peer:
@fujita, it depends on the circumstances.
Prometheus exporters are designed on a per-app basis. Compare node_exporter and blackbox_exporter and you will get the gist.
I don't understand; I don't see how this is related to the exporter and the exposed models.
1 CPU for a container? Why? You can configure whatever you like.
I had a quick look at the official exporters (mysql, consul). It looks like they don't support config reloading.
@fujita , see below.
Not in containers. With Docker, for example, you can configure cpusets. However, that setting only lets you switch between CPUs; it does not allow running on multiple CPUs concurrently.
Polling = scraping. Sync scraping means you make a call to the exporter's URL and data collection starts then; once the data is collected, it is output to your scraper. Async scraping means data collection runs independently of the scrape calls; in that mode, the scraper gets data from a collection buffer. Why does that matter? Suppose your exporter has hundreds of metrics and data collection takes 1-3s. That's the time your GoBGP server routine competes with the collectors' routines for single-CPU time. It obviously does not matter when you collect the status of BGP peers, because that probably takes 0.00001s.
Right. Suppose you want to change the exporter's configuration: would you restart GoBGP and drop neighbor relationships with your peers?
Hmm, really? Inside a single container, can you try running two processes, each executing an endless loop? Then look at the output of the top command on the host OS.
Your GoBGP and Prometheus goroutines are not separate processes; they are goroutines. Hence, they will be on the same CPU.
@fujita, that would work :-) However, those would NOT be parent and child processes like in a container. They would be two separate parents. In a container, you have a single parent (akin to init) and a bunch of child processes.
@fujita, I refreshed my memory on the subject. Here is a snippet to run the test you proposed.
Please run a process that creates two child processes executing an endless loop and then waits forever. See the output of top on the host.
@fujita , yes, two processes running in a container can be on separate CPUs. My mistake. The test script:
Running the test:
The two processes are
CPU Sets:
The CPU process:
|
Good, now you understand that goroutines DON'T compete for a single CPU inside a container. A container is just namespaces and processes from the Linux kernel's perspective, so it can use as many CPUs as you have. As for me, some official exporters don't support reloading because it's not very useful, so I'll go with the current design. If someone needs reloading, it's pretty easy to implement their own gobgp exporter by using gobgp/exporter as a package; they just need a main function.
@fujita, overconfidence 😆 they DO compete when there is only 1 CPU to compete over 😄 They also compete when you have more goroutines than CPUs. I get that an application, over time, might use all CPUs. But how do you know that goroutines are being load-balanced across all CPUs?
There is no difference between the exporter design and the current design in that respect.
FYI, check this article for CPU and memory profiling with Go.
Tally allows you to define and emit metrics in different ways. It's an option that lets you configure multiple ways of collecting/emitting metrics. Just a suggestion.
@fujita, I started with an external exporter here: https://github.com/greenpau/gobgp_exporter
@greenpau nice, but why did you modify the pb.go file?
@fujita, I guess the primary reason is that I want to control the timeout on the client. Additionally — somewhat related, but not quite — since the changes to the API (hiding behind /internal), the only way to get to some functionality (e.g. things related to ptype, any, etc.) is by adding extra functions the way I did with the extended client. Not ideal. I would prefer to have that code upstream, but I don't know whether you want it there.
I think I am close to getting some of the things mentioned here done.
Why not use context.WithTimeout?
@fujita, because I don’t know how context works. I will take time to read up on it. |
@fujita, I was reading up on Go's context package.
@greenpau you do a select on ctx.Done() at some point in your connection. This is automatic with the net package, I believe.
Is anyone interested in this?