A toil-reduction tool that prepares information before you respond to Mackerel alerts.

prepalert consists of two parts: a webhook server that receives Mackerel webhooks and sends the payload to Amazon SQS, and a worker that queries various data sources based on the webhook and posts information useful for the alert response as a Graph Annotation.
$ brew install mashiike/tap/prepalert
Set your Mackerel API key in the MACKEREL_APIKEY environment variable and execute the following command:
$ prepalert init
Or, with an explicit output path:
$ prepalert --config <output config path> init
Usage: prepalert <command>
A webhook server for prepare alert memo
Flags:
-h, --help Show context-sensitive help.
--log-level="info" output log-level ($PREPALERT_LOG_LEVEL)
--mackerel-apikey=STRING for access mackerel API ($MACKEREL_APIKEY)
--config="." config path ($PREPALERT_CONFIG)
Commands:
run
run server (default command)
init
create initial config
validate
validate the configuration
exec <alert-id>
Generate a virtual webhook from past alert to execute the rule
version
Show version
Run "prepalert <command> --help" for more information on a command.
If the command is omitted, the run command is executed.
The configuration file is in HCL (HashiCorp Configuration Language) format. prepalert init
can generate an initial configuration file.
The smallest possible configuration file is as follows:
prepalert {
  required_version = ">=v0.12.0"
  sqs_queue_name   = "prepalert"
}

locals {
  default_message = <<EOF
How do you respond to alerts?
Describe information about your alert response here.
EOF
}

rule "simple" {
  // this rule executes when org_name is "Macker..." and the monitor id is "4gx..."
  when = [
    webhook.org_name == "Macker...",
    get_monitor(webhook.alert).id == "4gx...",
  ]

  update_alert {
    memo = local.default_message
  }
}
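The rule semantics above can be mirrored in plain Go as boolean checks over the webhook fields. This is a hypothetical sketch of what "every `when` condition must hold" means; prepalert itself evaluates these as HCL expressions:

```go
package main

import "fmt"

// Rule mirrors the HCL rule: it fires only when every condition
// is true, and then its memo is attached to the alert.
type Rule struct {
	Name string
	When []func(orgName, monitorID string) bool
	Memo string
}

// Matches reports whether all conditions hold for the given webhook values.
func (r Rule) Matches(orgName, monitorID string) bool {
	for _, cond := range r.When {
		if !cond(orgName, monitorID) {
			return false
		}
	}
	return true
}

func main() {
	simple := Rule{
		Name: "simple",
		When: []func(string, string) bool{
			func(org, _ string) bool { return org == "Macker..." },
			func(_, id string) bool { return id == "4gx..." },
		},
		Memo: "How do you respond to alerts?",
	}
	fmt.Println(simple.Matches("Macker...", "4gx...")) // true
	fmt.Println(simple.Matches("Other-org", "4gx...")) // false
}
```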
prepalert works with AWS Lambda and Amazon SQS.
Two Lambda functions are required: a webhook receiver and a worker.
sequenceDiagram
  autonumber
  Mackerel->>+http lambda function : POST /
  http lambda function ->>+Amazon SQS: SendMessage
  Amazon SQS-->- http lambda function: 200 Ok
  http lambda function-->- Mackerel: 200 Ok
  Amazon SQS ->>+ worker lambda function: triggered by AWS Lambda
  worker lambda function ->>+ Data Source: query
  Data Source -->- worker lambda function: query results
  worker lambda function ->>+ Mackerel: Create Graph Annotation
  Mackerel-->- worker lambda function : 200 Ok
  worker lambda function ->>- Amazon SQS: Delete message (on success)
Build the Lambda deployment package as a zip archive like the following (runtime: provided.al2):
lambda.zip
├── bootstrap # build binary
└── config.hcl # configuration file
See https://docs.aws.amazon.com/lambda/latest/dg/runtimes-custom.html for details on custom runtimes.
To deploy the Lambda functions, see the prepalert example in the lambda directory, which uses lambroll for deployment.
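lambroll deploys from a function definition file. A minimal `function.json` matching the package layout above might look like the following; the function name, role ARN, and environment values are placeholders, not taken from the example directory:

```json
{
  "FunctionName": "prepalert",
  "Runtime": "provided.al2",
  "Handler": "bootstrap",
  "Role": "arn:aws:iam::123456789012:role/prepalert-lambda",
  "MemorySize": 128,
  "Timeout": 30,
  "Environment": {
    "Variables": {
      "MACKEREL_APIKEY": "<your api key>",
      "PREPALERT_CONFIG": "config.hcl"
    }
  }
}
```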
prepalert has a plugin system, so you can add custom providers. A plugin is a gRPC server program; the protocol buffer definition is here.
In Go, an example plugin looks like the following:
package main

import (
	"context"

	"github.com/mashiike/prepalert/plugin"
	"github.com/mashiike/prepalert/provider"
)

type Provider struct {
	// ...
}

func (p *Provider) ValidateProviderParameter(ctx context.Context, pp *provider.ProviderParameter) error {
	/*
		Your custom provider parameter validation
	*/
	return nil
}

func (p *Provider) GetQuerySchema(ctx context.Context) (*plugin.Schema, error) {
	/*
		Your custom provider's query parameter schema
	*/
	return &plugin.Schema{}, nil
}

func (p *Provider) RunQuery(ctx context.Context, req *plugin.RunQueryRequest) (*plugin.RunQueryResponse, error) {
	/*
		Your custom provider's query execution
	*/
	return &plugin.RunQueryResponse{
		Name:    req.QueryName,
		Query:   "<your query string>",
		Columns: []string{"<column name>", "<column name>", "<column name>"},
		Rows: [][]string{
			{"<row value>", "<row value>", "<row value>"},
			{"<row value>", "<row value>", "<row value>"},
			{"<row value>", "<row value>", "<row value>"},
		},
	}, nil
}

func main() {
	plugin.ServePlugin(plugin.WithProviderPlugin(&Provider{}))
}
An example plugin is cmd/example-http-csv-plugin. The configuration is as follows:
prepalert {
  required_version = ">=v0.12.0"
  sqs_queue_name   = "prepalert"
  plugins {
    http = {
      cmd         = "go run github.com/mashiike/prepalert/cmd/example-http-csv-plugin@latest" // your plugin execution command
      sync_output = true // sync plugin output to the prepalert log
    }
  }
}

provider "http" {
  endpoint = "<your csv server endpoint>"
}

query "http" "csv" {}

rule "always" {
  when = true
  update_alert {
    memo = "${query.http.csv.result.query}\n${result_to_table(query.http.csv)}"
  }
}
$ PREPALERT_CANYON_ENV=development go run cmd/prepalert/main.go --config testdata/config/simple.hcl
time=2023-10-01T14:55:47.267+09:00 level=INFO msg="create temporary file backend, canyon request body upload to temporary directory" path=/var/folders/rn/jj26k6s93x9c5yblnjq7_dw80000gp/T/canon-240945814 version=v0.12.0 app=prepalert
time=2023-10-01T14:55:47.268+09:00 level=INFO msg="running canyon" env=development version=v0.12.0 app=prepalert
time=2023-10-01T14:55:47.268+09:00 level=INFO msg="enable in memory queue" visibility_timeout=30s max_receive_count=3 version=v0.12.0 app=prepalert
time=2023-10-01T14:55:47.268+09:00 level=INFO msg="starting up with local httpd" address=:8080 version=v0.12.0 app=prepalert
time=2023-10-01T14:55:47.268+09:00 level=INFO msg="staring polling sqs queue" queue=prepalert on_memory_queue_mode=true version=v0.12.0 app=prepalert
prepalert uses canyon.
For local development, set the PREPALERT_CANYON_ENV=development environment variable.
This enables a local file backend and an in-memory queue that simulates SQS.
MIT License
Copyright (c) 2022 IKEDA Masashi