
fix(clapi): use downtime_id to show and cancel rtdowntimes (19.x) #8212

Merged
merged 2 commits on Dec 30, 2019

Conversation

cgagnaire

Description

The CLAPI commands -o RTDOWNTIME -a show -v "HOST" and -o RTDOWNTIME -a show -v "SVC" use the internal_id of the downtimes instead of the downtime_id, which is unique. The same applies to -o RTDOWNTIME -a cancel.

As a result, you cannot cancel the exact downtimes you want.
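
For reference, a minimal sketch of the affected calls, run from the Centreon central server (the credentials and the ID passed to cancel are illustrative placeholders, and the cancel value format is an assumption, not taken from this pull request):

    # List real-time downtimes for hosts, then for services; before this fix
    # the ID column held the internal_id rather than the unique downtime_id.
    centreon -u admin -p 'password' -o RTDOWNTIME -a show -v "HOST"
    centreon -u admin -p 'password' -o RTDOWNTIME -a show -v "SVC"

    # Cancel a downtime by the ID listed above (assumed single-ID value format).
    centreon -u admin -p 'password' -o RTDOWNTIME -a cancel -v "1"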

Type of change

  • Patch fixing an issue (non-breaking change)
  • New functionality (non-breaking change)
  • Breaking change (patch or feature) that might cause side effects breaking part of the Software
  • Updating documentation (missing information, typo...)

Target series

  • 2.8.x
  • 18.10.x
  • 19.04.x
  • 19.10.x
  • 20.04.x (master)

How can this pull request be tested?

  • Add several downtimes on several hosts and services across several pollers
  • Use CLAPI to list the real-time downtimes
  • Try to cancel some of those downtimes using the listed IDs (see the command sketch after this list)
  • The downtimes you selected are cancelled; internally, the external commands sent to Engine still use the internal_id
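
A possible walkthrough of these steps (credentials and the downtime ID below are placeholders; the cancel value format is an assumption):

    # List real-time service downtimes and note the IDs in the output.
    centreon -u admin -p 'password' -o RTDOWNTIME -a show -v "SVC"

    # Cancel one downtime by its listed ID, then list again to check that
    # exactly the intended downtime was removed.
    centreon -u admin -p 'password' -o RTDOWNTIME -a cancel -v "42"
    centreon -u admin -p 'password' -o RTDOWNTIME -a show -v "SVC"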

Checklist

Community contributors & Centreon team

  • I followed the coding style guidelines provided by Centreon
  • I have commented my code, especially new classes, functions or any legacy code modified. (docblock)
  • I have commented my code, especially hard-to-understand areas of the PR.
  • I have made corresponding changes to the documentation.
  • I have rebased my development branch on the base branch (master, maintenance).

Centreon team only

  • I have made sure that the unit tests related to the story are successful.
  • I have made sure that unit tests cover 80% of the code written for the story.
  • I have made sure that acceptance tests related to the story are successful (local and CI).
