
feat: Introducing the Level Trigger Mode #315

Merged · 10 commits merged into megaease:main on Apr 6, 2023

Conversation

haoel
Contributor

@haoel haoel commented Mar 20, 2023

The PR introduces the level trigger mode with three types of alerting intervals for a continuing DOWN status (a short code sketch reproducing these sequences follows the list):

  1. Regular Strategy: Notifications are sent at a constant interval (with factor = 1, the same frequency as the probe itself), continuing until the maximum number of notifications is reached.

     The interval (measured in failed probe rounds) is configured by the factor:

         interval = factor

     if the factor = 1, the alert is sent at failure 1, 2, 3, 4, 5, 6, 7...
                        the interval is 1, 1, 1, 1, 1, 1, 1...
     if the factor = 2, the alert is sent at failure 1, 3, 5, 7, 9, 11, 13...
                        the interval is 2, 2, 2, 2, 2, 2, 2...
     if the factor = 3, the alert is sent at failure 1, 4, 7, 10, 13, 16, 19...
                        the interval is 3, 3, 3, 3, 3, 3, 3...
    
  2. Incremental Strategy: Notifications are sent at linearly increasing intervals, continuing until the maximum number of notifications is reached. With this strategy and factor = 1, notifications are sent at failures 1, 2, 4, 7, 11, 16, 22, and so on; the intervals are 1, 2, 3, 4, 5, 6...

     The interval grows linearly with the number of notifications already sent ("notified times"):

         interval = factor * notified times

     if the factor = 1, the alert is sent at failure 1, 2, 4, 7, 11, 16, 22, 29, 37...
                        the interval is 1, 2, 3, 4, 5, 6, 7, 8, 9...
     if the factor = 2, the alert is sent at failure 1, 3, 7, 13, 21, 31, 43, 57, 73...
                        the interval is 2, 4, 6, 8, 10, 12, 14, 16, 18...
     if the factor = 3, the alert is sent at failure 1, 4, 10, 19, 31, 46, 64, 85, 109...
                        the interval is 3, 6, 9, 12, 15, 18, 21, 24, 27...
    
  3. Exponential Strategy: Notifications are sent at exponentially increasing intervals, continuing until the maximum number of notifications is reached. With this strategy and factor = 1, notifications are sent at failures 1, 2, 4, 8, 16, 32, and so on.

     The interval grows in proportion to the current failure count, so the gaps grow exponentially:

         interval = factor * failure times

     equivalently, the next alert fires at (factor + 1) * failure times.

     if the factor = 1, the alert is sent at failure 1, 2, 4, 8, 16, 32, 64, 128, 256...
                        the interval is 1, 2, 4, 8, 16, 32, 64, 128...
     if the factor = 2, the alert is sent at failure 1, 3, 9, 27, 81, 243, 729, 2187, 6561...
                        the interval is 2, 6, 18, 54, 162, 486, 1458, 4374, 13122...
     if the factor = 3, the alert is sent at failure 1, 4, 16, 64, 256, 1024...
                        the interval is 3, 12, 48, 192, 768...
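
The following is a minimal, self-contained Go sketch that reproduces the alert sequences listed above. The Strategy type and the nextFailure function are illustrative names only, not the PR's actual implementation:

    package main

    import "fmt"

    // Strategy enumerates the three alert interval strategies described above.
    type Strategy int

    const (
        Regular Strategy = iota
        Increment
        Exponential
    )

    // nextFailure returns the failure count at which the next alert fires, given
    // the failure count of the alert just sent (failed), the number of alerts
    // already sent (notified), and the configured factor.
    func nextFailure(s Strategy, failed, notified, factor int) int {
        switch s {
        case Regular:
            return failed + factor // constant interval: factor
        case Increment:
            return failed + factor*notified // linearly growing interval: factor * notified
        case Exponential:
            return failed + factor*failed // interval proportional to failed: next = (factor+1) * failed
        }
        return failed + factor
    }

    func main() {
        names := map[Strategy]string{Regular: "regular", Increment: "increment", Exponential: "exponential"}
        for _, s := range []Strategy{Regular, Increment, Exponential} {
            factor := 1
            failed, notified := 1, 1 // the first alert always fires at the first failure
            seq := []int{failed}
            for notified < 8 { // stop once "max" notifications have been sent
                failed = nextFailure(s, failed, notified, factor)
                notified++
                seq = append(seq, failed)
            }
            fmt.Printf("%-12s %v\n", names[s], seq)
        }
    }

With factor = 1 this prints 1 2 3 4 5 6 7 8 for regular, 1 2 4 7 11 16 22 29 for increment, and 1 2 4 8 16 32 64 128 for exponential, matching the tables above.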
    
    
    

Note:

The default strategy is regular and the default max is 1, which is equivalent to the old behavior (edge trigger).

The configuration can be placed in any individual probe or in the global settings.

probe:
    http: dummy
    url: http://example.com/
    alert:
        strategy: "regular"   # can be "regular", "increment" or "exponentiation"; default is "regular"
        factor: 1             # the factor controls the alert interval
        max: 10               # the maximum number of notifications to send while the endpoint stays down
settings:
    probe:
        alert:
            strategy: "regular"   # can be "regular", "increment" or "exponentiation"; default is "regular"
            factor: 1             # the factor controls the alert interval
            max: 10               # the maximum number of notifications to send while the endpoint stays down
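
As an illustration of how a probe-level alert section can fall back to the global settings, here is a hedged Go sketch. The AlertConfig struct, its yaml tags, and the applyDefaults helper are assumptions made for this example, not the PR's actual types or merge logic:

    package main

    import "fmt"

    // AlertConfig mirrors the alert section shown above; the yaml tags indicate
    // which key each field corresponds to. Illustrative only, not the PR's struct.
    type AlertConfig struct {
        Strategy string `yaml:"strategy"` // "regular", "increment" or "exponentiation"
        Factor   int    `yaml:"factor"`   // controls the alert interval
        Max      int    `yaml:"max"`      // maximum number of notifications while DOWN
    }

    // applyDefaults fills unset probe-level fields from the global settings,
    // a simplified illustration of "probe-level config overrides the global one".
    func applyDefaults(probe *AlertConfig, global AlertConfig) {
        if probe.Strategy == "" {
            probe.Strategy = global.Strategy
        }
        if probe.Factor == 0 {
            probe.Factor = global.Factor
        }
        if probe.Max == 0 {
            probe.Max = global.Max
        }
    }

    func main() {
        // Global defaults, as in the settings section above.
        global := AlertConfig{Strategy: "regular", Factor: 1, Max: 10}

        // A probe that only overrides the strategy; factor and max stay unset.
        probe := AlertConfig{Strategy: "increment"}

        applyDefaults(&probe, global)
        fmt.Printf("%+v\n", probe) // {Strategy:increment Factor:1 Max:10}
    }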

Close #314

@codecov-commenter

codecov-commenter commented Mar 21, 2023

Codecov Report

Patch coverage: 100.00% and no project coverage change.

Comparison is base (682462a) 99.67% compared to head (1bb6b5b) 99.67%.


Additional details and impacted files
@@           Coverage Diff            @@
##             main     #315    +/-   ##
========================================
  Coverage   99.67%   99.67%            
========================================
  Files          82       83     +1     
  Lines        5503     5618   +115     
========================================
+ Hits         5485     5600   +115     
  Misses         12       12            
  Partials        6        6            
Impacted Files                    Coverage Δ
conf/conf.go                      96.73% <ø> (ø)
global/global.go                  100.00% <ø> (ø)
channel/channel.go                100.00% <100.00%> (ø)
global/probe.go                   100.00% <100.00%> (ø)
probe/base/base.go                100.00% <100.00%> (ø)
probe/data.go                     100.00% <100.00%> (ø)
probe/notification_strategy.go    100.00% <100.00%> (ø)
probe/result.go                   100.00% <100.00%> (ø)


@haoel haoel changed the title from "[WIP] Introducing the Notification Strategies" to "feat: Introducing the Level Trigger Mode" on Mar 21, 2023
Outdated review comments on docs/Manual.md and global/global_test.go were resolved.
@haoel haoel requested a review from localvar March 21, 2023 07:50
Collaborator

@samanhappy samanhappy left a comment

LGTM, wonderful work!

@haoel haoel added this pull request to the merge queue Apr 6, 2023
Merged via the queue into megaease:main with commit 18fd128 Apr 6, 2023
Linked issue: Introduce repeatable alert (#314) · 4 participants