
statistic counters are not monotonically increasing #119

Closed · lherman-cs opened this issue Dec 29, 2024 · 4 comments

Labels: enhancement (New feature or request)

Comments

@lherman-cs (Contributor) commented Dec 29, 2024

Describe the bug
I expect that once a SocketAddr is registered in the statistics table (on session allocation), its metrics will increase monotonically until that same SocketAddr is unregistered (on session close).

To Reproduce
Steps to reproduce the behavior:

  1. Spawn an agent to monitor the statistic for a SocketAddr
  2. Assert its value over time
// Poll the statistics for `addr` periodically and assert monotonic growth.
let mut last_egress_bytes = 0;
loop {
    let cur_egress_bytes = statistics.get(&addr).unwrap().send_bytes;
    assert!(cur_egress_bytes >= last_egress_bytes);
    last_egress_bytes = cur_egress_bytes;
    sleep(Duration::from_secs(1));
}

Additional context

Removing these lines fixes the issue:

// Background thread: once per second it walks the statistics map and
// clears every counter, which is what breaks monotonicity.
let map_ = Arc::downgrade(&map);
thread::spawn(move || {
    while let Some(map) = map_.upgrade() {
        map.read().iter().for_each(|(_, it)| it.clear());
        sleep(Duration::from_secs(1));
    }
});

What is the expected behavior of these statistics? Also, what is the purpose of the code above?

@mycrl (Owner) commented Dec 30, 2024

Your expectation matches my original idea, but I'm currently counting per second.
The reason is that the numbers could overflow after a session has been connected for a long time. It's not really that u64 isn't wide enough; it's that the public API is JSON, and I'm worried that numbers that are too big will have problems being parsed on the client side.

I've actually been torn on whether or not to keep accumulating the stats.

@lherman-cs (Contributor, Author) commented:

Accumulating stats is still a good idea. However, the reset every second causes accounting issues due to poll-interval misalignment. For example, if the client polls the stats every 3 seconds, it will miss ~2 datapoints (the t+0 and t+1 stats get reset before they are ever read).
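
To illustrate the alternative, here is a minimal client-side sketch: with a monotonically increasing counter, the client can recover a rate at any poll interval by diffing consecutive samples, so nothing is lost to resets. The Stats type and fetch_stats function below are made-up stand-ins, not this project's API.

// Hypothetical client-side sketch: derive an egress rate from a
// monotonically increasing byte counter at an arbitrary poll interval.
use std::{thread::sleep, time::Duration};

struct Stats { send_bytes: u64 }                      // assumed shape of one stats entry
fn fetch_stats() -> Stats { Stats { send_bytes: 0 } } // stand-in for the real API

fn main() {
    let poll = Duration::from_secs(3); // the client picks its own interval
    let mut last = fetch_stats().send_bytes;
    loop {
        sleep(poll);
        let cur = fetch_stats().send_bytes;
        // The counter never resets, so the delta is exact for any interval.
        let bytes_per_sec = (cur - last) / poll.as_secs();
        println!("egress: {bytes_per_sec} B/s");
        last = cur;
    }
}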

This also shouldn't be an issue for big numbers in the JSON API. Clients must follow the JSON specification, which treats a number as an arbitrary-length sequence of digits, i.e. effectively a big int.

https://www.json.org/json-en.html
(Screenshot: "JSON integers limit on size" on Stack Overflow)

This is on my Firefox console.

JSON.parse("[999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999999]")
Array [ 1e+228 ]
  0: 1e+228
  length: 1

For u64, assuming a 4K video is being streamed (~50 Mbps) and we're only concerned with one stream, since these stats belong to individual sockets: 2^64 / 50 / 1e6 / 3600 / 24 / 30 = <definitely good for multiple lifetimes> 😁
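
For what it's worth, that formula works out to roughly 142,000 months. A standalone sketch of the same back-of-the-envelope calculation (not code from this repo):

// Back-of-the-envelope check: how long until a u64 counter overflows if it
// grows by ~50e6 units every second (the 4K / ~50 Mbps assumption above).
fn main() {
    let rate_per_sec = 50_000_000_f64;
    let seconds = u64::MAX as f64 / rate_per_sec;
    let months = seconds / 3600.0 / 24.0 / 30.0;
    let years = months / 12.0;
    println!("~{months:.0} months (~{years:.0} years) until overflow");
    // Prints roughly 142336 months, i.e. on the order of 12,000 years.
}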

@mycrl (Owner) commented Dec 30, 2024

OK, very good tip.
This is an easy change; I will change it to an incremental implementation.
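
For illustration, a minimal sketch of one way an incremental (monotonic) counter can be implemented with atomics; the Counts type and method names are assumptions, not taken from the actual change.

// Minimal sketch of a monotonically increasing counter using atomics.
// `Counts` is an illustrative name, not a type from this repository.
use std::sync::atomic::{AtomicU64, Ordering};

#[derive(Default)]
struct Counts {
    send_bytes: AtomicU64,
}

impl Counts {
    // Only ever add; never clear while the session is alive.
    fn add_send_bytes(&self, n: u64) {
        self.send_bytes.fetch_add(n, Ordering::Relaxed);
    }

    fn get_send_bytes(&self) -> u64 {
        self.send_bytes.load(Ordering::Relaxed)
    }
}

fn main() {
    let counts = Counts::default();
    counts.add_send_bytes(1500);
    counts.add_send_bytes(3000);
    assert_eq!(counts.get_send_bytes(), 4500);
}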

@mycrl self-assigned this Dec 30, 2024
@mycrl added the enhancement label Dec 30, 2024
@mycrl (Owner) commented Dec 30, 2024

Updated in 2e03745.

@mycrl closed this as completed Dec 30, 2024