Replies: 2 comments
-
It should be relatively easy to extend the existing infrastructure. However, the main issue I see with collecting an individual duration for each execution of the rule is that it is very sensitive to measurement noise, since we are constantly "starting and stopping" the timer and accumulating many small values. It would be more accurate to time the rule over a large number of runs and then divide the resulting duration by the number of runs in the batch to get a good average, but that would require significant changes to the analyzer infrastructure specifically to support collecting metrics.
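As a rough illustration of the batching idea, here is a minimal sketch (the `average_duration` helper and the toy workload are hypothetical, not analyzer code): instead of accumulating many tiny start/stop measurements, time the whole batch once and divide.

```rust
use std::time::{Duration, Instant};

/// Hypothetical helper: run `f` `runs` times inside a single timed window
/// and return the average duration per run. One start/stop pair for the
/// whole batch is much less sensitive to timer noise than per-run timing.
fn average_duration<F: FnMut()>(runs: u32, mut f: F) -> Duration {
    let start = Instant::now();
    for _ in 0..runs {
        f();
    }
    start.elapsed() / runs
}

fn main() {
    // Toy workload standing in for a single rule execution.
    let avg = average_duration(10_000, || {
        std::hint::black_box((0..100u64).sum::<u64>());
    });
    println!("average per run: {:?}", avg);
}
```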
-
What was actually timed here? The single run of the rule, the generation of the diagnostic plus the run, or the mutation too?
-
I have just pushed this branch: d92d79b, with a proposal for how we can profile individual rules. The code is very simple.
The core of the idea is that we can use the rule type `R` to get a compile-time calculated index into an array of atomics. An FNV hash is used because it is very simple and it is going to be optimised away, giving us in the end just a simple array index.
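The original snippets are not shown above, so here is a hedged sketch of what a compile-time FNV index into an array of atomics could look like (the names `TIMINGS`, `fnv1a`, `record`, and the bucket count are my own, not the branch's code):

```rust
use std::sync::atomic::{AtomicU64, Ordering};

const BUCKETS: usize = 128;

// One timing slot per bucket; each rule accumulates nanoseconds here.
static TIMINGS: [AtomicU64; BUCKETS] = {
    const ZERO: AtomicU64 = AtomicU64::new(0);
    [ZERO; BUCKETS]
};

// FNV-1a is simple enough to evaluate in a `const fn`, so the bucket
// index for a rule name is computed entirely at compile time and the
// hash itself never runs at runtime.
const fn fnv1a(key: u64, name: &str) -> usize {
    let bytes = name.as_bytes();
    let mut hash = key;
    let mut i = 0;
    while i < bytes.len() {
        hash ^= bytes[i] as u64;
        hash = hash.wrapping_mul(0x0000_0100_0000_01B3); // FNV prime
        i += 1;
    }
    (hash % BUCKETS as u64) as usize
}

// Hypothetical usage: each rule records into its precomputed slot,
// so the hot path is a single atomic add at a constant array index.
fn record<const INDEX: usize>(nanos: u64) {
    TIMINGS[INDEX].fetch_add(nanos, Ordering::Relaxed);
}

fn main() {
    // 0xCBF2_9CE4_8422_2325 is the standard FNV-1a offset basis.
    const SLOT: usize = fnv1a(0xCBF2_9CE4_8422_2325, "noDebugger");
    record::<SLOT>(1_500);
    println!("bucket {SLOT}: {} ns", TIMINGS[SLOT].load(Ordering::Relaxed));
}
```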
This means that we can have collisions: multiple rules will fall into the same bucket. We can control this using the FNV key. If the environment variable `ROME_PERFECT_KEY` is set, the CLI will try to find a key that does not produce any collisions.
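To make the collision-avoidance step concrete, a brute-force key search could look like the sketch below (`find_perfect_key`, the key range, and the sample rule names are assumptions for illustration, not the actual `ROME_PERFECT_KEY` implementation):

```rust
const BUCKETS: usize = 128;

// Same FNV-1a hash as before, parameterised by the starting key.
fn fnv1a(key: u64, name: &str) -> usize {
    let mut hash = key;
    for &b in name.as_bytes() {
        hash ^= b as u64;
        hash = hash.wrapping_mul(0x0000_0100_0000_01B3);
    }
    (hash % BUCKETS as u64) as usize
}

// Try successive keys until every rule name lands in a distinct bucket.
fn find_perfect_key(rules: &[&str]) -> Option<u64> {
    'keys: for key in 0..1_000_000u64 {
        let mut seen = [false; BUCKETS];
        for rule in rules {
            let slot = fnv1a(key, rule);
            if seen[slot] {
                continue 'keys; // collision: try the next key
            }
            seen[slot] = true;
        }
        return Some(key);
    }
    None
}

fn main() {
    let rules = ["noDebugger", "noDelete", "useValidTypeof"];
    if let Some(key) = find_perfect_key(&rules) {
        println!("collision-free key: {key:#x}");
    }
}
```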