Puppet module for configuring collectd and plugins.
The simplest use case is to use all of the configurations in the default collectd.conf file shipped with collectd. This can be done by simply including the class:
include ::collectd
Collectd is most useful when configured with customized plugins. This is accomplished by removing the default collectd.conf file and replacing it with a file that includes all alternative configurations. Configure a node with the following class declaration:
class { '::collectd':
purge => true,
recurse => true,
purge_config => true,
minimum_version => '5.4',
}
Set purge, recurse, and purge_config to true in order to override the default configurations shipped in collectd.conf and use custom configurations stored in conf.d. From here you can set up additional plugins as shown below.
Specifying the version or minimum_version of collectd as shown above reduces the need for two Puppet runs to converge. See "Puppet needs two runs to correctly write my conf, why?" below.
Hiera example in YAML of passing install_options to the package resource that manages the collectd package. This parameter must be an array:
collectd::package_install_options:
- '--nogpgcheck'
Example of how to load plugins with no additional configuration:
collectd::plugin { 'battery': }
Where battery is the name of the plugin.
Note: this should only be done when no class for the plugin exists in this module.
The module will enable a repo by default.
On CentOS that will be EPEL.
On Ubuntu that will be the collectd PPA.
If you need to change the server the public key is downloaded from (for example, because the default server is down), you can use the $package_keyserver parameter.
Collectd CI packages are also available from the CI repo; more information is available from the collectd CI project.
You can choose the CI repo with the $ci_package_repo parameter. $ci_package_repo has to match '5.4', '5.5', '5.6', '5.7' or 'master' (RC for the next release), as these are the current branches being built in the collectd CI.
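As a minimal sketch, selecting the CI repo and an alternative keyserver could look like the following (the keyserver value here is only illustrative, not a module default):
class { 'collectd':
  ci_package_repo   => '5.7',
  package_keyserver => 'keyserver.ubuntu.com', # illustrative keyserver, use whichever is reachable for you
}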
Parameters will vary widely between plugins. See the collectd documentation for each plugin for configurable attributes.
aggregation (see collectd::plugin::aggregation below)
amqp (see collectd::plugin::amqp below)
apache (see collectd::plugin::apache below)
battery (see collectd::plugin::battery below)
bind (see collectd::plugin::bind below)
ceph (see collectd::plugin::ceph below)
cgroups (see collectd::plugin::cgroups below)
chain (see collectd::plugin::chain below)
conntrack (see collectd::plugin::conntrack below)
cpu (see collectd::plugin::cpu below)
cpufreq (see collectd::plugin::cpufreq below)
csv (see collectd::plugin::csv below)
cuda (see collectd::plugin::cuda below)
curl (see collectd::plugin::curl below)
curl_json (see collectd::plugin::curl_json below)
dbi (see collectd::plugin::dbi below)
df (see collectd::plugin::df below)
disk (see collectd::plugin::disk below)
dns (see collectd::plugin::dns below)
dcpmm (see collectd::plugin::dcpmm below)
dpdk_telemetry (see collectd::plugin::dpdk_telemetry below)
entropy (see collectd::plugin::entropy below)
exec (see collectd::plugin::exec below)
ethstat (see collectd::plugin::ethstat below)
fhcount (see collectd::plugin::fhcount below)
filecount (see collectd::plugin::filecount below)
filter (see collectd::plugin::filter below)
genericjmx (see collectd::plugin::genericjmx below)
hddtemp (see collectd::plugin::hddtemp below)
hugepages (see collectd::plugin::hugepages below)
intel_pmu (see collectd::plugin::intel_pmu below)
intel_rdt (see collectd::plugin::intel_rdt below)
interface (see collectd::plugin::interface below)
ipc (see collectd::plugin::ipc below)
ipmi (see collectd::plugin::ipmi below)
iptables (see collectd::plugin::iptables below)
iscdhcp (see collectd::plugin::iscdhcp below)
irq (see collectd::plugin::irq below)
java (see collectd::plugin::java below)
load (see collectd::plugin::load below)
logfile (see collectd::plugin::logfile below)
virt (see collectd::plugin::virt below)
lvm (see collectd::plugin::lvm below)
mcelog (see collectd::plugin::mcelog below)
memcached (see collectd::plugin::memcached below)
memory (see collectd::plugin::memory below)
mongodb (see collectd::plugin::mongodb below)
mysql (see collectd::plugin::mysql below)
netlink (see collectd::plugin::netlink below)
network (see collectd::plugin::network below)
nfs (see collectd::plugin::nfs below)
nginx (see collectd::plugin::nginx below)
ntpd (see collectd::plugin::ntpd below)
numa (see collectd::plugin::numa below)
nut (see collectd::plugin::nut below)
openldap (see collectd::plugin::openldap below)
openvpn (see collectd::plugin::openvpn below)
pcie_errors (see collectd::plugin::pcie_errors below)
perl (see collectd::plugin::perl below)
ping (see collectd::plugin::ping below)
postgresql (see collectd::plugin::postgresql below)
processes (see collectd::plugin::processes below)
protocols (see collectd::plugin::protocols below)
python (see collectd::plugin::python below)
redis (see collectd::plugin::redis below)
rabbitmq (see collectd-rabbitmq and below for implementation notes)
rrdcached (see collectd::plugin::rrdcached below)
rrdtool (see collectd::plugin::rrdtool below)
sensors (see collectd::plugin::sensors below)
smart (see collectd::plugin::smart below)
snmp (see collectd::plugin::snmp below)
snmp_agent (see collectd::plugin::snmp_agent below)
statsd (see collectd::plugin::statsd below)
swap (see collectd::plugin::swap below)
syslog (see collectd::plugin::syslog below)
tail (see collectd::plugin::tail below)
target_v5upgrade (see collectd::plugin::target_v5upgrade below)
tcpconns (see collectd::plugin::tcpconns below)
thermal (see collectd::plugin::thermal below)
threshold (see collectd::plugin::threshold below)
unixsock (see collectd::plugin::unixsock below)
uptime (see collectd::plugin::uptime below)
users (see collectd::plugin::users below)
uuid (see collectd::plugin::uuid below)
varnish (see collectd::plugin::varnish below)
vmem (see collectd::plugin::vmem below)
write_graphite (see collectd::plugin::write_graphite below)
write_http (see collectd::plugin::write_http below)
write_kafka (see collectd::plugin::write_kafka below)
write_log (see collectd::plugin::write_log below)
write_prometheus (see collectd::plugin::write_prometheus below)
write_network (see collectd::plugin::write_network below)
write_riemann (see collectd::plugin::write_riemann below)
write_sensu (see collectd::plugin::write_sensu below)
write_tsdb (see collectd::plugin::write_tsdb below)
zfs_arc (see collectd::plugin::zfs_arc below)
zookeeper (see collectd::plugin::zookeeper below)
collectd::plugin::aggregation::aggregator { 'cpu':
plugin => 'cpu',
agg_type => 'cpu',
groupby => ['Host', 'TypeInstance',],
calculateaverage => true,
}
You can also configure this plugin with a parameterized class:
class { 'collectd::plugin::aggregation':
aggregators => {
'cpu' => {
plugin => 'cpu',
agg_type => 'cpu',
groupby => ["Host", "TypeInstance",],
calculateaverage => true,
},
},
}
class { 'collectd::plugin::amqp':
amqphost => '127.0.0.1',
amqpvhost => 'myvirtualhost',
graphiteprefix => 'collectdmetrics',
amqppersistent => true,
}
class { 'collectd::plugin::apache':
instances => {
'apache80' => {
'url' => 'http://localhost/mod_status?auto',
'user' => 'collectd',
'password' => 'hoh2Coo6'
},
'lighttpd8080' => {
'url' => 'http://localhost:8080/mod_status?auto'
}
},
}
class { 'collectd::plugin::battery':
interval => 30,
values_percentage => true,
report_degraded => true,
query_state_fs => true,
}
class { 'collectd::plugin::bind':
url => 'http://localhost:8053/',
}
class { 'collectd::plugin::ceph':
daemons => [
'[clustername]-osd.0',
'[clustername]-osd.1',
'[clustername]-osd.2',
'[clustername]-mon.[hostname].asok'
],
manage_package => true
}
See collectd plugin_cgroups documentation for more details.
class { 'collectd::plugin::cgroups':
ignore_selected => true,
cgroups => ['array', 'of', 'paths']
}
class { 'collectd::plugin::chain':
chainname => "PostCache",
defaulttarget => "write",
rules => [
{
'match' => {
'type' => 'regex',
'matches' => {
'Plugin' => "^cpu$",
'PluginInstance' => "^[0-9]+$",
},
},
'targets' => [
{
'type' => "write",
'attributes' => {
"Plugin" => "aggregation",
},
},
{
'type' => "stop",
},
],
},
],
}
class { 'collectd::plugin::conntrack':
}
reportbystate: available from collectd version >= 5.5
reportbycpu: available from collectd version >= 5.5
valuespercentage: available from collectd version >= 5.5
reportnumcpu: available from collectd version >= 5.6
See collectd plugin_cpu documentation for more details.
class { 'collectd::plugin::cpu':
reportbystate => true,
reportbycpu => true,
valuespercentage => true,
}
class { 'collectd::plugin::cpufreq':
}
class { 'collectd::plugin::csv':
datadir => '/etc/collectd/var/lib/collectd/csv',
storerates => false,
}
class { 'collectd::plugin::cuda':
}
collectd::plugin::curl::page {
'stock_quotes':
url => 'http://finance.google.com/finance?q=NYSE%3AAMD',
user => 'foo',
password => 'bar',
matches => [
{
'dstype' => 'GaugeAverage',
'instance' => 'AMD',
'regex' => ']*> *([0-9]*\\.[0-9]+) *',
'type' => 'stock_value',
}],
}
You can also configure this plugin with a parameterized class:
class { 'collectd::plugin::curl':
pages => {
'stock_GM' => {
url => 'http://finance.google.com/finance?q=NYSE%3AGM',
user => 'foo',
password => 'bar',
matches => [
{
'dstype' => 'GaugeAverage',
'instance' => 'AMD',
'regex' => ']*> *([0-9]*\\.[0-9]+) *',
'type' => 'stock_value',
},
],
},
},
}
collectd::plugin::curl_json {
'rabbitmq_overview':
url => 'http://localhost:55672/api/overview',
host => 'rabbitmq.example.net',
instance => 'rabbitmq_overview',
interval => '300',
user => 'user',
password => 'password',
digest => 'false',
verifypeer => 'false',
verifyhost => 'false',
cacert => '/path/to/ca.crt',
header => 'Accept: application/json',
post => '{secret: \"mysecret\"}',
timeout => '1000',
keys => {
'message_stats/publish' => {
'type' => 'gauge',
'instance' => 'overview',
},
}
}
collectd::plugin::dbi::database{'monitoring_node1':
driver => 'mysql',
driveroption => {
'host' => 'hostname',
'username' => 'user',
'password' => 'password',
'dbname' => 'monitoring'
},
query => ['log_delay'],
}
collectd::plugin::dbi::query{'log_delay':
statement => 'SELECT * FROM log_delay_repli;',
results => [{
type => 'gauge',
instanceprefix => 'log_delay',
instancesfrom => 'inet_server_port',
valuesfrom => 'log_delay',
}],
}
You can also configure this plugin as a parameterized class:
class { 'collectd::plugin::dbi':
package => 'libdbd-mysql',
databases => {
'monitoring_node1' => {
driver => 'mysql',
driveroption => {
'host' => 'hostname',
'username' => 'user',
'password' => 'password',
'dbname' => 'monitoring'
},
query => ['log_delay'],
}
},
}
class { 'collectd::plugin::df':
devices => ['proc','sysfs'],
mountpoints => ['/u'],
fstypes => ['nfs','tmpfs','autofs','gpfs','proc','devpts'],
ignoreselected => true,
}
class { 'collectd::plugin::disk':
disks => ['/^dm/'],
ignoreselected => true,
udevnameattr => 'DM_NAME',
}
class { 'collectd::plugin::dns':
}
See collectd documentation.
ensure: Optional. String that must be 'present' or 'absent'. Default: 'present'
ignore_source: Optional. String of an IP address to ignore. Default: undef
interface: String of the interface to use. May be an interface identifier such as 'eth0' or 'any'. Default: 'any'
interval: Numeric interval in seconds. Changing this can be a disaster. Consult the documentation. Default: undef
manage_package: Boolean to determine whether the system package for collectd's dns plugin should be managed. If set to true, you must specify the package name for any unsupported OS. Default: false
package_name: String for the name of the package. A value of 'USE_DEFAULTS' will set the value based on the platform. This is necessary when setting manage_package on an unsupported platform. Default: 'USE_DEFAULTS'
select_numeric_query_types: Boolean for the SelectNumericQueryTypes configuration option. Default: true
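A minimal sketch of declaring the plugin with a non-default capture interface, assuming the parameter names described above:
class { 'collectd::plugin::dns':
  interface => 'eth0',
}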
class { 'collectd::plugin::dpdk_telemetry':
client_socket_path => '/var/run/.client',
dpdk_socket_path => '/var/run/dpdk/rte/telemetry',
}
class { 'collectd::plugin::dcpmm':
interval => 10.0,
collect_health => false,
collect_perf_metrics => true,
enable_dispatch_all => false,
}
class { 'collectd::plugin::entropy':
}
collectd::plugin::exec::cmd {
'dummy':
user => nobody,
group => nogroup,
exec => ["/bin/echo", "PUTVAL myhost/foo/gauge-flat N:1"],
}
You can also configure this plugin with a parameterized class:
class { 'collectd::plugin::exec':
commands => {
'dummy1' => {
user => nobody,
group => nogroup,
exec => ["/bin/echo", "PUTVAL myhost/foo/gauge-flat1 N:1"],
},
'dummy2' => {
user => nobody,
group => nogroup,
exec => ["/bin/echo", "PUTVAL myhost/foo/gauge-flat2 N:1"],
},
}
}
class { 'collectd::plugin::ethstat':
interfaces => [ 'eth0', 'eth1'],
maps => [
'"rx_csum_offload_errors" "if_rx_errors" "checksum_offload"', '"multicast" "if_multicast"'
],
mappedonly => false,
}
class { 'collectd::plugin::fhcount':
valuesabsolute => true,
valuespercentage => false,
}
class { 'collectd::plugin::fscache':
}
collectd::plugin::filecount::directory {'foodir':
path => '/path/to/dir',
pattern => '*.conf',
mtime => '-5m',
recursive => true,
includehidden => false
}
You can also configure this plugin with a parameterized class:
class { 'collectd::plugin::filecount':
directories => {
'foodir' => {
'path' => '/path/to/dir',
'pattern' => '*.conf',
'mtime' => '-5m',
'recursive' => true,
'includehidden' => false
},
},
}
For backwards compatibility:
class { 'collectd::plugin::filecount':
directories => {
'active' => '/var/spool/postfix/active',
'incoming' => '/var/spool/postfix/incoming'
},
}
The filter plugin implements the powerful filter configuration of collectd. For further details have a look at the collectd manpage.
ensure ('present', 'absent'): When set to absent it will remove all references to the filter plugins. Note: the chain config needs to be purged by the chain define.
precachechain (String): The name of the default pre chain.
postcachechain (String): The name of the default post chain.
class { 'collectd::plugin::filter':
ensure => 'present',
precachechain => 'PreChain',
postcachechain => 'PostChain',
}
This example rewrites the hostname for metrics from the mysql plugin.
include collectd::plugin::filter
# define default chains with default target
collectd::plugin::filter::chain { 'PreChain':
target => 'return'
}
collectd::plugin::filter::chain { 'PostChain':
target => 'write'
}
# create a third chain,
$chainname = 'MyAweseomeChain'
collectd::plugin::filter::chain { $chainname:
ensure => present,
target => 'return'
}
# add a new rule to chain
$rulename = 'MyAweseomeRule'
collectd::plugin::filter::rule { $rulename:
chain => $chainname,
}
# add a new match rule, match metrics of the mysql plugin
collectd::plugin::filter::match { "Match mysql plugin":
chain => $chainname,
rule => $rulename,
plugin => 'regex',
options => {
'Plugin' => '^mysql',
}
}
#rewrite hostname
collectd::plugin::filter::target{ "overwrite hostname":
chain => $chainname,
rule => $rulename,
plugin => 'set',
options => {
'Host' => 'hostname.domain',
},
}
# hook the configured chain in the prechain
collectd::plugin::filter::target{ "1_prechain_jump_${chainname}":
chain => 'PreChain',
plugin => 'jump',
options => {
'Chain' => $chainname,
},
}
This define will create a new chain, which is required by targets, matches and rules.
ensure ('present', 'absent'): When set to absent it will remove the chain with all assigned rules, targets and matches.
target ('notification', 'replace', 'set', 'return', 'stop', 'write', 'jump'): Optional. Sets the default target if no target has been applied. Strongly recommended for default chains.
target_options (Hash): If target is specified, pass an optional hash to the define.
see collectd::plugin::filter above
This define will add a new rule to a specific chain.
chain (String): Assign to this chain.
see collectd::plugin::filter above
This define will add a target to a chain or rule.
chain (String): Assign to this chain.
plugin ('notification', 'replace', 'set', 'return', 'stop', 'write', 'jump'): The plugin of the target.
options (Hash): Optional parameters of the target plugin.
rule (String): Optional. Assign to this rule. If not present, the target will be applied at the end of the chain without rule matching.
see collectd::plugin::filter above
This define will add a match rule.
chain (String): Assign to this chain.
rule (String): Assign to this rule.
plugin ('regex', 'timediff', 'value', 'empty_counter', 'hashed'): The plugin of the match.
options (Hash): Optional parameters of the match plugin.
see collectd::plugin::filter above
include collectd::plugin::genericjmx
collectd::plugin::genericjmx::mbean {
'garbage_collector':
object_name => 'java.lang:type=GarbageCollector,*',
instance_prefix => 'gc-',
instance_from => ['name'],
values => [
{
mbean_type => 'invocations',
table => false,
attribute => 'CollectionCount',
},
{
mbean_type => 'total_time_in_ms',
instance_prefix => 'collection_time',
table => false,
attribute => 'CollectionTime',
},
];
}
collectd::plugin::genericjmx::connection {
'java_app':
host => $fqdn,
service_url => 'service:jmx:rmi:///jndi/rmi://localhost:3637/jmxrmi',
collect => [ 'memory-heap', 'memory-nonheap','garbage_collector' ],
}
class { 'collectd::plugin::hddtemp':
host => '127.0.0.1',
port => 7634,
}
class { 'collectd::plugin::hugepages':
report_per_node_hp => true,
report_root_hp => true,
values_pages => true,
values_bytes => false,
values_percentage => false
}
class { 'collectd::plugin::intel_pmu':
report_hardware_cache_events => true,
report_kernel_pmu_events => true,
report_software_events => true,
}
class { 'collectd::plugin::mcelog':
mceloglogfile => '/var/log/mcelog',
memory => true,
mcelogclientsocket => '/var/run/mcelog-client',
persistentnotification => true,
}
class { 'collectd::plugin::intel_rdt':
cores => ['0-2', '3,4,6', '8-10,15'],
}
class { 'collectd::plugin::interface':
interfaces => ['lo'],
ignoreselected => true
}
class { 'collectd::plugin::irq':
irqs => ['7', '23'],
ignoreselected => true,
}
class { 'collectd::plugin::ipc':
}
class { 'collectd::plugin::ipmi':
ignore_selected => true,
sensors => ['temperature'],
notify_sensor_add => true,
notify_sensor_remove => true,
notify_sensor_not_present => true,
}
class { 'collectd::plugin::iptables':
chains => {
'nat' => 'In_SSH',
'filter' => 'HTTP',
},
chains6 => {
'filter' => 'HTTP6',
},
}
class { 'collectd::plugin::iscdhcp': }
jvmarg options must be declared if declaring loadplugin, as the JVM must be initialized prior to loading collectd java plugins.
class { 'collectd::plugin::java':
jvmarg => ['arg1', 'arg2'],
loadplugin => {"plugin.name" => ["option line 1", "option line 2"]}
}
class { 'collectd::plugin::load':
}
class { 'collectd::plugin::logfile':
log_level => 'warning',
log_file => '/var/log/collected.log'
}
The interface_format parameter was introduced in collectd 5.0 and will therefore be ignored (with a warning) when specified with older versions.
class { 'collectd::plugin::virt':
connection => 'qemu:///system',
interface_format => 'address'
}
class { 'collectd::plugin::lvm': }
The plugin supports multiple instances specified via host+port and socket:
class { 'collectd::plugin::memcached':
instances => {
'sessions1' => {
'host' => '192.168.122.1',
'port' => '11211',
},
'storage1' => {
'host' => '192.168.122.1',
'port' => '11212',
},
'sessions2' => {
'socket' => '/var/run/memcached.sessions.sock',
},
'storage2' => {
'socket' => '/var/run/memcached.storage.sock',
},
}
}
class { 'collectd::plugin::memory':
}
collectd::plugin::mysql::database { 'betadase':
host => 'localhost',
username => 'stahmna',
password => 'secret',
port => '3306',
masterstats => true,
wsrepstats => true,
}
class { 'collectd::plugin::mongodb':
db_user => 'admin',
db_pass => 'adminpass',
}
ensure (String): String that must be 'present' or 'absent'. Default: 'present'
interval (String): Number of seconds that collectd pauses between data collection. Default: undef
db_host (String): String that holds the IP of the MongoDB server. Default: '127.0.0.1'
db_user (String): Required. String that specifies the user name of an account that can log into MongoDB.
db_pass (String): Required. String that specifies the password of an account that can log into MongoDB.
configured_dbs (Array): Optional. Array of strings that lists the databases that should be monitored in addition to the "admin" database.
db_port (String): Required if the configured_dbs parameter is set. Unused otherwise. Integer that specifies which port MongoDB listens on.
class { 'collectd::plugin::mongodb':
db_host => '127.0.0.1',
db_user => 'foo',
db_pass => 'bar',
db_port => '27017',
configured_dbs => ['database', 'names'],
collectd_dir => '/collectd/module/path',
}
class { 'collectd::plugin::netlink':
interfaces => ['eth0', 'eth1'],
verboseinterfaces => ['ppp0'],
qdiscs => ['"eth0" "pfifo_fast-1:0"', '"ppp0"'],
classes => ['"ppp0" "htb-1:10"'],
filters => ['"ppp0" "u32-1:0"'],
ignoreselected => false,
}
collectd::plugin::network::server{'hostname':
port => 25826,
}
collectd::plugin::network::listener{'hostname':
port => 25826,
}
You can also configure this plugin with a parameterized class:
class { 'collectd::plugin::network':
timetolive => '70',
maxpacketsize => '42',
forward => false,
reportstats => true,
servers => { 'hostname' => {
'port' => '25826',
'interface' => 'eth0',
'securitylevel' => '',
'username' => 'foo',
'password' => 'bar',},
},
listeners => { 'hostname' => {
'port' => '25826',
'interface' => 'eth0',
'securitylevel' => '',
'authfile' => '/etc/collectd/passwd',},
},
}
class { 'collectd::plugin::nfs':
}
class { 'collectd::plugin::nginx':
url => 'https://localhost:8433',
user => 'stats',
password => 'uleePi4A',
}
class { 'collectd::plugin::ntpd':
host => 'localhost',
port => 123,
reverselookups => false,
includeunitid => false,
}
class { 'collectd::plugin::numa':
}
class { 'collectd::plugin::nut':
upss => [ 'ups@localhost:port' ]
}
class { 'collectd::plugin::openldap':
instances => {
'foo' => {
'url' => 'ldap://localhost/'
},
'bar' => {
'url' => 'ldaps://localhost/'
}
},
}
statusfile (String or Array): Status file(s) to collect data from. (Default: /etc/openvpn/openvpn-status.log)
improvednamingschema (Bool): When enabled, the filename of the status file will be used as plugin instance and the client's "common name" will be used as type instance. This is required when reading multiple status files. (Default: false)
collectcompression: Sets whether or not statistics about the compression used by OpenVPN should be collected. This information is only available in single mode. (Default: true)
collectindividualusers: Sets whether or not traffic information is collected for each connected client individually. If set to false, currently no traffic data is collected at all because aggregating this data in a safe manner is tricky. (Default: true)
collectusercount: When enabled, the number of currently connected clients or users is collected. This is especially interesting when CollectIndividualUsers is disabled, but can be configured independently from that option. (Default: false)
Watch multiple statusfiles:
class { 'collectd::plugin::openvpn':
statusfile => [
'/etc/openvpn/openvpn-status-tcp.log',
'/etc/openvpn/openvpn-status-udp.log'
],
collectindividualusers => false,
collectusercount => true,
}
Watch the single default statusfile:
class { 'collectd::plugin::openvpn':
collectindividualusers => false,
collectusercount => true,
}
class { 'collectd::plugin::pcie_errors':
source => undef,
access_dir => undef,
report_masked => false,
persistent_notifications => false,
}
This class has no parameters and will load the actual perl plugin.
It will be automatically included if any perl::plugin
is defined.
include collectd::plugin::perl
This define will load a new perl plugin.
module (String): name of the perl module to load (mandatory)
enable_debugger (False or String): whether to load the perl debugger. See the collectd-perl manpage for more details.
include_dir (String or Array): directories to add to @INC
provider ('package', 'cpan', 'file' or false): method to get the plugin code
source (String): this parameter is consumed by the provider to infer the source of the plugin code
destination (String or false): path to the plugin code if provider is 'file'. Ignored otherwise.
order (String containing numbers): order in which the plugin should be loaded. Defaults to '00'
config (Hash): plugin configuration in the form of a hash. This will be converted to a suitable structure understood by liboconfig, which is the collectd configuration parser. Defaults to {}
collectd::plugin::perl::plugin { 'foo':
module => 'Collectd::Plugins::Foo',
enable_debugger => "",
include_dir => '/usr/lib/collectd/perl5/lib',
}
collectd::plugin::perl::plugin { 'baz':
module => 'Collectd::Plugins::Baz',
provider => 'file',
source => 'puppet:///modules/myorg/baz_collectd.pm',
destination => '/path/to/my/perl5/modules'
}
Using a plugin from cpan (requires the puppet cpan module)
collectd::plugin::perl::plugin {
'openafs_vos':
module => 'Collectd::Plugins::OpenAFS::VOS',
provider => 'cpan',
source => 'Collectd::Plugins::OpenAFS',
config => {'VosBin' => '/usr/afsws/etc/vos'},
}
collectd::plugin::perl::plugin {
'bar':
module => 'Collectd::Plugins::Bar',
provider => 'package',
source => 'perl-Collectd-Plugins-Bar',
config => {'foo' => 'bar'},
}
class { 'collectd::plugin::ping':
hosts => ['example.com'],
}
collectd::plugin::postgresql::database{'monitoring_node1':
name => 'monitoring',
port => '5433',
instance => 'node1',
host => 'localhost',
user => 'collectd',
password => 'collectd',
query => 'log_delay',
}
collectd::plugin::postgresql::query{'log_delay':
statement => 'SELECT * FROM log_delay_repli;',
params => ['database'],
results => [{
type => 'gauge',
instanceprefix => 'log_delay',
instancesfrom => 'inet_server_port',
valuesfrom => 'log_delay',
}],
}
collectd::plugin::postgresql::writer{'sqlstore':
statement => 'SELECT collectd_insert($1, $2, $3, $4, $5, $6, $7, $8, $9);',
storerates => 'true',
}
You can also configure this plugin as a parameterized class:
class { 'collectd::plugin::postgresql':
databases => {
'postgres' => {
'host' => '/var/run/postgresql/',
'user' => 'postgres',
'password' => 'postgres',
'sslmode' => 'disable',
'query' => [ 'query_plans', 'queries', 'table_states', 'disk_io' ],
},
'devdb' => {
'host' => 'host.example.com',
'port' => '5432',
'user' => 'postgres',
'password' => 'secret',
'sslmode' => 'prefer',
}
}
}
You can either specify powerdns servers / recursors at once:
class { 'collectd::plugin::powerdns':
recursors => {
'recursor1' => {
'socket' => '/var/run/my-socket',
'collect' => ['cache-hits', 'cache-misses'],
},
'recursor2' => {}
},
servers => {
'server1' => {
'socket' => '/var/run/my-socket',
'collect' => ['latency', 'recursing-answers', 'recursing-questions'],
}
},
}
Or define single server / recursor:
collectd::plugin::powerdns::recursor { 'my-recursor' :
socket => '/var/run/my-socket',
collect => ['cache-hits', 'cache-misses'],
}
collectd::plugin::powerdns::server { 'my-server' :
socket => '/var/run/my-socket',
collect => ['latency', 'recursing-answers', 'recursing-questions'],
}
You can either specify processes / process matches at once:
class { 'collectd::plugin::processes':
processes => ['process1', 'process2'],
process_matches => [
{ name => 'process-all', regex => 'process.*' }
],
}
Or define single processes / process matches:
collectd::plugin::processes::process { 'collectd' : }
collectd::plugin::processes::processmatch { 'elasticsearch' :
regex => '.*java.*org.elasticsearch.bootstrap.Elasticsearch'
}
values is an array of Protocol names, Protocol:ValueName pairs, or a regex; see /proc/net/netstat and /proc/net/snmp for a list of Protocol targets.
See collectd.conf documentation for details
class { 'collectd::plugin::protocols':
values => ['/^Tcp:*/', '/^Udp:*/', 'Icmp:InErrors' ],
ignoreselected => false,
}
The plugin uses a fact python_dir to find the python load path for modules. python or python3 must be installed as a prerequisite for this fact to give a non-default value.
modulepaths is an array of paths where collectd will look for Python modules. Puppet will ensure that each of the specified directories exists and is owned by root (and chmod 0750). If you don't specify any modulepaths, a default value for the given distribution will be used.
modules: a Hash containing the configuration of Python modules, where the key is the module name.
globals: Unlike most other plugins, this one should set Globals true. This will cause collectd to export the names of all objects in the Python interpreter for all plugins to see. If you don't do this or your platform does not support it, the embedded interpreter will start anyway but you won't be able to load certain Python modules, e.g. "time".
interactive: when true it will launch an interactive Python interpreter that reads from and writes to the terminal (default: false).
logtraces: if a Python script throws an exception it will be logged by collectd with the name of the exception and the message (default: false).
conf_name: name of the file that will contain the python module configuration (default: python-config.conf).
See collectd-python documentation for more details.
NOTE: Since v3.4.0 the syntax of this plugin has changed. Make sure to update your existing configuration. Now you can specify multiple Python modules at once:
class { 'collectd::plugin::python':
modulepaths => ['/usr/share/collectd/python'],
modules => {
'elasticsearch' => {
'script_source' => 'puppet:///modules/myorg/elasticsearch_collectd_python.py',
'config' => [{'Cluster' => 'elasticsearch'},],
},
'another-module' => {
'config' => [{'Verbose' => 'true'},],
}
},
logtraces => true,
interactive => false,
}
When script_source is provided, a file called {module}.py will be created in $modulepath.
Or define single module:
collectd::plugin::python::module {'zk-collectd':
script_source => 'puppet:///modules/myorg/zk-collectd.py',
config => [
{'Hosts' => "localhost:2181",
'Verbose' => true,
'Values' => ["abc","def"],
'Name' => 'My Name',
'Limit' => 4.5,
}
]
}
The resulting configuration would be
Import "zk-collectd"
<Module "zk-collectd">
Hosts "localhost:2181"
Verbose true
Values "abc" "def"
Limit 4.5
</Module>
Each plugin might use a different modulepath; however, make sure that all paths are included in the collectd::plugin::python parameter modulepaths. If no modulepath is specified, the OS default will be used.
collectd::plugin::python::module {'my-module':
modulepath => '/var/share/collectd',
script_source => 'puppet:///modules/myorg/my-module.py',
config => [
{'Key' => "value",
'Value' => 3.4,
}
]
}
class { 'collectd::plugin::redis':
nodes => {
'node1' => {
'host' => 'localhost',
},
'node2' => {
'host' => 'node2.example.com',
'port' => '6380',
'timeout' => 3000,
},
'node3' => {
'host' => 'node3.example.com',
'queries' => {
'dbsize' => {
'type' => 'count',
'query' => 'DBSIZE',
},
},
},
}
}
Please note the rabbitmq plugin provides a types.db.custom file. You will need to add this to collectd::config::typesdb via Hiera or in a manifest. Failure to set the types.db.custom content will result in no metrics from the rabbitmq plugin.
The rabbitmq plugin has not been ported to python3 and will fail on CentOS 8 (#75).
Set typesdb to include the collectd-rabbitmq types.db.custom:
collectd::config::typesdb:
- /usr/share/collectd/types.db
- /usr/share/collect-rabbitmq/types.db.custom
class { '::collectd::plugin::rabbitmq':
config => {
'Username' => 'admin',
'Password' => $admin_pass,
'Scheme' => 'https',
'Port' => '15671',
'Host' => $facts['fqdn'],
'Realm' => 'RabbitMQ Management',
},
}
class { 'collectd::plugin::rrdcached':
daemonaddress => 'unix:/var/run/rrdcached.sock',
datadir => '/var/lib/rrdcached/db/collectd',
}
class { 'collectd::plugin::rrdtool':
datadir => '/var/lib/collectd/rrd',
createfilesasync => false,
rrarows => 1200,
rratimespan => [3600, 86400, 604800, 2678400, 31622400],
xff => 0.1,
cacheflush => 900,
cachetimeout => 120,
writespersecond => 50
}
class {'collectd::plugin::sensors':
sensors => ['sensors-coretemp-isa-0000/temperature-temp2', 'sensors-coretemp-isa-0000/temperature-temp3'],
ignoreselected => false,
}
class { '::collectd::plugin::smart':
disks => ['/^dm/'],
ignoreselected => true,
}
class {'collectd::plugin::snmp':
data => {
amavis_incoming_messages => {
'type' => 'counter',
'table' => false,
'instance' => 'amavis.inMsgs',
'values' => ['AMAVIS-MIB::inMsgs.0'],
'ignore' => [ '00:00', '*IgnoreString' ],
'invert_match' => false,
}
},
hosts => {
debianvm => {
'address' => '127.0.0.1',
'version' => 2,
'community' => 'public',
'collect' => ['amavis_incoming_messages'],
'interval' => 10
}
},
}
class { 'collectd::plugin::snmp':
data => {
hc_octets => {
'type' => 'if_octets',
'table' => true,
'instance' => 'IF-MIB::ifName',
'values' => ['IF-MIB::ifHCInOctets', 'IF-MIB::ifHCOutOctets'],
},
},
hosts => {
router => {
'address' => '192.0.2.1',
'version' => 3,
'security_level' => 'authPriv',
'username' => 'collectd',
'auth_protocol' => 'SHA',
'auth_passphrase' => 'mekmitasdigoat',
'privacy_protocol' => 'AES',
'privacy_passphrase' => 'mekmitasdigoat',
'collect' => ['hc_octets'],
'interval' => 10,
},
},
}
class {'collectd::plugin::snmp_agent':
table => {
ifTable => {
'indexoid' => 'IF-MIB::ifIndex',
'sizeoid' => 'IF-MIB::ifNumber',
data => [{
ifDescr => {
'plugin' => 'interface',
'oids' => 'IF-MIB::ifDescr'
},
'ifDescr2' => {
'plugin' => 'interface2',
'oids' => 'IF-MIB::ifDescr2'
}
}]
}
},
data => {
memAvailReal => {
'plugin' => 'memory',
'type' => 'memory',
'oids' => '1.3.6.1.4.1.2021.4.6.0',
'typeinstance' => 'free',
'indexkey' => {
'source' => 'PluginInstance'
}
}
}
}
class { 'collectd::plugin::statsd':
host => '0.0.0.0',
port => 8125,
deletecounters => false,
deletetimers => false,
deletegauges => false,
deletesets => false,
timerpercentile => ['50','90'],
}
class { 'collectd::plugin::swap':
reportbydevice => false,
reportbytes => true
}
class { 'collectd::plugin::syslog':
log_level => 'warning'
}
class { 'collectd::plugin::target_v5upgrade':
}
class { 'collectd::plugin::tcpconns':
localports => ['25', '12026'],
remoteports => ['25'],
listening => false,
}
collectd::plugin::tail::file { 'exim-log':
filename => '/var/log/exim4/mainlog',
instance => 'exim',
matches => [
{
regex => 'S=([1-9][0-9]*)',
dstype => 'CounterAdd',
type => 'ipt_bytes',
instance => 'total',
},
{
regex => '\\<R=local_user\\>',
dstype => 'CounterInc',
type => 'counter',
instance => 'local_user',
}
]
}
class { '::collectd::plugin::tail_csv':
metrics => {
'snort-dropped' => {
'type' => 'gauge',
'values_from' => 1,
'instance' => "dropped"
},
},
files => {
'/var/log/snort/snort.stats' => {
'collect' => ['snort-dropped'],
'plugin' => 'snortstats',
'instance' => 'eth0',
'interval' => 600,
'time_from' => 5,
}
}
}
class { '::collectd::plugin::thermal':
devices => ['foo0'],
ignoreselected => false,
}
class { 'collectd::plugin::threshold':
hosts => [
{
name => 'example.com',
plugins => [
{
name => 'load',
types => [
{
name => 'load',
data_source => 'shortterm',
warning_max => $facts.dig('processors', 'count') * 1.2,
failure_max => $facts.dig('processors', 'count') * 1.9,
},
{
name => 'load',
data_source => 'midterm',
warning_max => $facts.dig('processors', 'count') * 1.1,
failure_max => $facts.dig('processors', 'count') * 1.7,
},
{
name => 'load',
data_source => 'longterm',
warning_max => $facts.dig('processors', 'count'),
failure_max => $facts.dig('processors', 'count') * 1.5,
},
],
},
],
},
],
plugins => [
# See plugin definition above
],
types => [
# See types definition above
],
}
class {'collectd::plugin::unixsock':
socketfile => '/var/run/collectd-sock',
socketgroup => 'nagios',
socketperms => '0770',
deletesocket => false,
}
class {'collectd::plugin::uptime':
}
class {'collectd::plugin::users':
}
class {'collectd::plugin::uuid':
uuid_file => '/etc/uuid',
}
class { 'collectd::plugin::varnish':
instances => {
'instanceName' => {
'CollectCache' => 'true',
'CollectBackend' => 'true',
'CollectConnections' => 'true',
'CollectSHM' => 'true',
'CollectESI' => 'false',
'CollectFetch' => 'true',
'CollectHCB' => 'false',
'CollectTotals' => 'true',
'CollectWorkers' => 'true',
}
},
}
class { 'collectd::plugin::vmem':
verbose => true,
}
The write_graphite
plugin writes data to Graphite, an open-source metrics
storage and graphing project.
collectd::plugin::write_graphite::carbon {'my_graphite':
graphitehost => 'graphite.example.org',
graphiteport => 2003,
graphiteprefix => '',
protocol => 'tcp'
}
You can define multiple Graphite backends to which metrics will be sent. Each backend should have a unique title:
collectd::plugin::write_graphite::carbon {'secondary_graphite':
graphitehost => 'graphite.example.org',
graphiteport => 2004,
graphiteprefix => '',
protocol => 'udp',
escapecharacter => '_',
alwaysappendds => true,
storerates => true,
separateinstances => false,
logsenderrors => true
}
The write_http plugin supports two ways of configuration, the old plugin format using urls:
class { 'collectd::plugin::write_http':
urls => {
'collect1.example.org' => { 'format' => 'JSON' },
'collect2.example.org' => {},
}
}
And the new plugin format using nodes:
class { 'collectd::plugin::write_http':
nodes => {
'collect1' => { 'url' => 'collect1.example.org', 'format' => 'JSON' },
'collect2' => { 'url' => 'collect2.example.org'},
}
}
- Requires the Apache Kafka C/C++ library
- Available in collectd version >= 5.5.
class { 'collectd::plugin::write_kafka':
kafka_host => 'localhost',
kafka_port => 9092,
topics => {
'mytopic' => { 'format' => 'JSON' },
},
properties => {
'myproperty' => 'myvalue',
},
meta => {
'mymeta' => 'myvalue',
}
}
class { 'collectd::plugin::write_log':
format => 'JSON',
}
class { 'collectd::plugin::write_prometheus':
port => '9103',
}
Note: Requires collectd 5.7 or later.
class { 'collectd::plugin::write_network':
servers => {
'collect1.example.org' => { 'serverport' => '25826' },
'collect2.example.org' => { 'serverport' => '25826' }
}
}
class { 'collectd::plugin::write_riemann':
nodes => [
{
'name' => 'riemann.example.org',
'host' => 'riemann.example.org',
'port' => 5555,
'protocol' => 'TCP'
}
],
tags => ['foo'],
attributes => {'bar' => 'baz'},
}
class { 'collectd::plugin::write_sensu':
sensu_host => 'sensu.example.org',
sensu_port => 3030,
}
class { 'collectd::plugin::write_tsdb':
host => 'tsdb.example.org',
port => 4242,
host_tags => ['environment=production', 'colocation=AWS'],
store_rates => false,
always_append_ds => false,
}
class { 'collectd::plugin::zfs_arc':
}
class { 'collectd::plugin::zookeeper':
zookeeper_host => 'localhost',
zookeeper_port => '2181',
}
Collectd needs to know how to handle each collected datapoint. For this it uses a database file called types.db. Those files can be created using the collectd::typesdb and collectd::type defined resources.
$db = '/etc/collectd/types.db'
collectd::typesdb { $db: }
collectd::type { "response_size-${db}":
target => $db,
ds_type => 'ABSOLUTE',
min => 0,
max => 10000000,
ds_name => 'value',
}
class { 'collectd':
typesdb => [
'/usr/share/collectd/types.db',
$db,
],
}
Other software may need to read the collectd types database files. To allow non-root users to read from a collectd::typesdb file, set its mode like so:
$db = '/etc/collectd/types.db'
collectd::typesdb { $db:
mode => '0644',
}
Assuming that the collectdctl command is available on remote nodes, Puppet tasks exist to run collectdctl and collect results from nodes. The tasks also rely on python3 being available.
$ bolt task show collectd::listval
collectd::listval - Lists all available collectd metrics
USAGE:
bolt task run --nodes <node-name> collectd::listval
$ bolt task show collectd::getval
collectd::getval - Get a particular metric for a host
USAGE:
bolt task run --nodes <node-name> collectd::getval metric=<value>
PARAMETERS:
- metric: String[1]
Name of metric, e.g. load/load-relative
$ bolt -u root task run collectd::getval metric=load/load-relative -n aiadm32.example.org
returns the values of the load metric:
{
"metric": "load/load-relative",
"values": {
"shortterm": "1.750000e-01",
"longterm": "8.000000e-02",
"midterm": "8.500000e-02"
}
}
See metadata.json for supported platforms
Some plugins will need two runs of Puppet to fully generate the configuration for collectd. See this issue.
This can be avoided by specifying a minimum version ($minimum_version) for the collectd class. For example, setting this to 1.2.3 will make this module assume on the first run (when the fact responsible for providing the collectd version is not yet available) that your systems are running collectd 1.2.3 and generate the configuration accordingly.
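As a minimal sketch, pinning the assumed version could look like this (the version string is only illustrative; use the collectd version you actually ship):
class { 'collectd':
  minimum_version => '5.8',
}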
This project contains tests for rspec-puppet.
Quickstart:
gem install bundler
bundle install
bundle exec rake lint
bundle exec rake validate
bundle exec rake rubocop
bundle exec rake spec SPEC_OPTS='--format documentation'
Some plugins, or some options in plugins, are only available in recent versions of collectd.
This module shall not use unsupported configuration directives. Look at templates/loadplugin.conf.erb for a hands-on example.
Please make use of the search by branch/tags on the collectd GitHub repository to see when a feature was first released.
Reading the collectd.conf.pod file is good; validating the presence of the code in the .c files is even better.
Puppet-collectd is maintained by VoxPupuli. Before moving to VoxPupuli it was written and maintained by TheCAT in the pdxcat github org.