Update prom rw exporter #1359

Merged Oct 31, 2022 · 27 commits (the diff below shows changes from 17 of them)

Commits
- `64634b4` Proto & Tox updates (Seefooo, Aug 9, 2022)
- `6db8824` Add some tests & test infra. (Seefooo, Aug 11, 2022)
- `b115a46` Update test cases (Seefooo, Sep 21, 2022)
- `ad84c56` More fixes from Example run (Seefooo, Sep 26, 2022)
- `f45a968` Ran linter (Seefooo, Sep 27, 2022)
- `5c2ad66` Version update (Seefooo, Sep 27, 2022)
- `f3a5491` Merge branch 'main' into update_prom_rw_exporter (ocelotl, Sep 30, 2022)
- `2483633` Update setup.cfg to use >=3.7 instead of 3.8 (Seefooo, Sep 30, 2022)
- `11b9e8f` Address automated checks (Seefooo, Oct 4, 2022)
- `c5fcf18` Merge branch 'main' into update_prom_rw_exporter (srikanthccv, Oct 5, 2022)
- `c6f1a8f` Merge branch 'main' into update_prom_rw_exporter (srikanthccv, Oct 10, 2022)
- `d134a52` Updates from review (Seefooo, Oct 12, 2022)
- `15a03b4` Fix the shutdown method (Seefooo, Oct 12, 2022)
- `b1eb757` Merge branch 'main' into update_prom_rw_exporter (aabmass, Oct 12, 2022)
- `b8e249d` Remove extra proto files & update regex (Seefooo, Oct 18, 2022)
- `b9a74bc` Merge branch 'main' into update_prom_rw_exporter (Seefooo, Oct 18, 2022)
- `1cf17d7` More updates from PR review (Seefooo, Oct 20, 2022)
- `c7d635b` Merge branch 'main' into update_prom_rw_exporter (Seefooo, Oct 21, 2022)
- `4944452` Merge branch 'main' into update_prom_rw_exporter (Seefooo, Oct 21, 2022)
- `4450d3a` Merge branch 'main' into update_prom_rw_exporter (srikanthccv, Oct 23, 2022)
- `08028b4` Merge branch 'main' into update_prom_rw_exporter (Seefooo, Oct 26, 2022)
- `e83d9b5` Undo adding 'gen' to ignore list for black (Seefooo, Oct 26, 2022)
- `b0c184f` Merge branch 'main' into update_prom_rw_exporter (Seefooo, Oct 27, 2022)
- `54ac259` Fixes from pylint output (Seefooo, Oct 28, 2022)
- `e0f25d2` Move to `examples` dir to `example` (Seefooo, Oct 28, 2022)
- `0572e16` Cleanup README (Seefooo, Oct 31, 2022)
- `6a62431` Merge branch 'main' into update_prom_rw_exporter (Seefooo, Oct 31, 2022)
1 change: 1 addition & 0 deletions .flake8
@@ -16,6 +16,7 @@ exclude =
    target
    __pycache__
    exporter/opentelemetry-exporter-jaeger/src/opentelemetry/exporter/jaeger/gen/
    exporter/opentelemetry-exporter-prometheus-remote-write/src/opentelemetry/exporter/prometheus_remote_write/gen/
    exporter/opentelemetry-exporter-jaeger/build/*
    docs/examples/opentelemetry-example-app/src/opentelemetry_example_app/grpc/gen/
    docs/examples/opentelemetry-example-app/build/*
3 changes: 2 additions & 1 deletion CHANGELOG.md
@@ -39,7 +39,8 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
  ([#1253](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/1253))
- Add metric instrumentation in starlette
  ([#1327](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/1327))

- Add metric exporter for Prometheus Remote Write
  ([#1359](https://github.com/open-telemetry/opentelemetry-python-contrib/pull/1359))

### Fixed

New file: the exporter package README
@@ -0,0 +1,31 @@
OpenTelemetry Prometheus Remote Write Exporter
==============================================

|pypi|

.. |pypi| image:: https://badge.fury.io/py/opentelemetry-exporter-prometheus-remote-write.svg
:target: https://pypi.org/project/opentelemetry-exporter-prometheus-remote-write/

This package contains an exporter to send `OTLP`_ metrics from the
`OpenTelemetry Python SDK`_ directly to a `Prometheus Remote Write integrated backend`_
(such as Cortex or Thanos) without having to run an instance of the
Prometheus server.


Installation
------------

::

pip install opentelemetry-exporter-prometheus-remote-write
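
Usage
-----

A minimal wiring sketch, adapted from the example app in this PR; the endpoint
and tenant header below are illustrative values for a local Cortex instance:

.. code-block:: python

    from opentelemetry import metrics
    from opentelemetry.exporter.prometheus_remote_write import (
        PrometheusRemoteWriteMetricsExporter,
    )
    from opentelemetry.sdk.metrics import MeterProvider
    from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader

    # Point the exporter at any remote-write compatible backend.
    exporter = PrometheusRemoteWriteMetricsExporter(
        endpoint="http://localhost:9009/api/prom/push",
        headers={"X-Scope-Org-ID": "5"},
    )
    # Push accumulated metrics once per second.
    reader = PeriodicExportingMetricReader(exporter, export_interval_millis=1000)
    metrics.set_meter_provider(MeterProvider(metric_readers=[reader]))

    meter = metrics.get_meter(__name__)
    requests_counter = meter.create_counter("requests", unit="1")
    requests_counter.add(1, {"environment": "testing"})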


.. _OTLP: https://opentelemetry.io/docs/specs/otlp/
.. _OpenTelemetry Python SDK: https://github.com/open-telemetry/opentelemetry-python/
.. _Prometheus Remote Write integrated backend: https://prometheus.io/docs/operating/integrations/


References
----------

* `OpenTelemetry Project <https://opentelemetry.io/>`_
* `Prometheus Remote Write Integration <https://prometheus.io/docs/operating/integrations/>`_
New file: the example Dockerfile
@@ -0,0 +1,11 @@
FROM python:3.8

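# libsnappy-dev provides the snappy C headers that python-snappy builds against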
RUN apt-get update -y && apt-get install libsnappy-dev -y

WORKDIR /code
COPY . .

RUN pip install -e .
RUN pip install -r ./examples/requirements.txt

CMD ["python", "./examples/sampleapp.py"]
New file: the example README
@@ -0,0 +1,42 @@
# Prometheus Remote Write Exporter Example
This example uses [Docker Compose](https://docs.docker.com/compose/) to set up:

1. A Python program that creates 5 instruments with 5 unique
   aggregators and a randomized load generator
2. An instance of [Cortex](https://cortexmetrics.io/) to receive the metrics
   data
3. An instance of [Grafana](https://grafana.com/) to visualize the exported
   data

## Requirements
* Have Docker Compose [installed](https://docs.docker.com/compose/install/)

*You do not need to install Python; the sample app runs inside a Docker container*

## Instructions
1. Run `docker-compose up -d` in the `examples/` directory

   The `-d` flag runs all services in detached mode and frees up your
   terminal session. Detached services do not print logs to the terminal; you can follow a service's logs manually with `docker logs ${CONTAINER_ID} --follow`

2. Log into the Grafana instance at [http://localhost:3000](http://localhost:3000)
   * Log in with `username: admin` and `password: admin`
   * Grafana may show an additional screen for setting a new password; this step is optional and can be skipped

3. Navigate to the `Data Sources` page
   * Look for a gear icon on the left sidebar and select `Data Sources`

4. Add a new Prometheus Data Source
   * Use `http://cortex:9009/api/prom` as the URL
   * (OPTIONAL) Set the scrape interval to `2s` to make updates appear quickly
   * Click `Save & Test`

5. Go to `Metrics Explore` to query metrics
   * Look for a compass icon on the left sidebar
   * Click `Metrics` for a dropdown list of all the available metrics (see the query tip after these instructions)
   * (OPTIONAL) Adjust the time range by clicking the `Last 6 hours` button on the upper right side of the graph
   * (OPTIONAL) Set up auto-refresh by selecting an option under the dropdown next to the refresh button on the upper right side of the graph
   * Click the refresh button and data should show up on the graph

6. Shut down the services when finished
   * Run `docker-compose down` in the `examples/` directory
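
A query tip for step 5: the metric names come from the sample app (`requests`, `requests_active`, `ram_usage`, `cpu_percent`, `request_latency`, and so on). The exact series names depend on how the exporter sanitizes and suffixes them, so if a bare name returns nothing, browse the `Metrics` dropdown for the closest match.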
New file: cortex-config.yml (Cortex configuration for the example)
@@ -0,0 +1,100 @@
# This Cortex Config is copied from the Cortex Project documentation
# Source: https://github.com/cortexproject/cortex/blob/master/docs/configuration/single-process-config.yaml

# Configuration for running Cortex in single-process mode.
# This configuration should not be used in production.
# It is only for getting started and development.

# Disable the requirement that every request to Cortex has a
# X-Scope-OrgID header. `fake` will be substituted in instead.
auth_enabled: false

server:
  http_listen_port: 9009

  # Configure the server to allow messages up to 100MB.
  grpc_server_max_recv_msg_size: 104857600
  grpc_server_max_send_msg_size: 104857600
  grpc_server_max_concurrent_streams: 1000

distributor:
  shard_by_all_labels: true
  pool:
    health_check_ingesters: true

ingester_client:
  grpc_client_config:
    # Configure the client to allow messages up to 100MB.
    max_recv_msg_size: 104857600
    max_send_msg_size: 104857600
    use_gzip_compression: true

ingester:
  # We want our ingesters to flush chunks at the same time to optimise
  # deduplication opportunities.
  spread_flushes: true
  chunk_age_jitter: 0

  walconfig:
    wal_enabled: true
    recover_from_wal: true
    wal_dir: /tmp/cortex/wal

  lifecycler:
    # The address to advertise for this ingester. Will be autodiscovered by
    # looking up address on eth0 or en0; can be specified if this fails.
    # address: 127.0.0.1

    # We want to start immediately and flush on shutdown.
    join_after: 0
    min_ready_duration: 0s
    final_sleep: 0s
    num_tokens: 512
    tokens_file_path: /tmp/cortex/wal/tokens

    # Use an in memory ring store, so we don't need to launch a Consul.
    ring:
      kvstore:
        store: inmemory
      replication_factor: 1

# Use local storage - BoltDB for the index, and the filesystem
# for the chunks.
schema:
  configs:
    - from: 2019-07-29
      store: boltdb
      object_store: filesystem
      schema: v10
      index:
        prefix: index_
        period: 1w

storage:
  boltdb:
    directory: /tmp/cortex/index

  filesystem:
    directory: /tmp/cortex/chunks

  delete_store:
    store: boltdb

purger:
  object_store_type: filesystem

frontend_worker:
  # Configure the frontend worker in the querier to match worker count
  # to max_concurrent on the queriers.
  match_max_concurrent: true

# Configure the ruler to scan the /tmp/cortex/rules directory for prometheus
# rules: https://prometheus.io/docs/prometheus/latest/configuration/recording_rules/#recording-rules
ruler:
  enable_api: true
  enable_sharding: false
  storage:
    type: local
    local:
      directory: /tmp/cortex/rules

New file: docker-compose.yml for the example
@@ -0,0 +1,33 @@
# Copyright The OpenTelemetry Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

version: "3.8"

services:
  cortex:
    image: quay.io/cortexproject/cortex:v1.5.0
    command:
      - -config.file=./config/cortex-config.yml
    volumes:
      - ./cortex-config.yml:/config/cortex-config.yml:ro
    ports:
      - 9009:9009
  grafana:
    image: grafana/grafana:latest
    ports:
      - 3000:3000
  sample_app:
    build:
      context: ../
      dockerfile: ./examples/Dockerfile
New file: examples/requirements.txt
@@ -0,0 +1,7 @@
psutil
protobuf>=3.13.0
requests>=2.25.0
python-snappy>=0.5.4
opentelemetry-api
opentelemetry-sdk
opentelemetry-proto
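
These pins track what the exporter needs on the wire: metrics are serialized to the Prometheus remote-write protobuf (`protobuf`), the body is snappy-compressed (`python-snappy`), and the result is POSTed over HTTP (`requests`). A rough sketch of that wire format follows; the module paths and message names are assumptions based on the package's `gen/` directory (seen in the `.flake8` exclude above) and the standard Prometheus protos, not the exporter's actual code path:

```python
# Hypothetical sketch of a raw remote-write push; module paths and message
# names are assumed, not confirmed from the exporter's source.
import snappy  # python-snappy
import requests

from opentelemetry.exporter.prometheus_remote_write.gen.remote_pb2 import (
    WriteRequest,
)
from opentelemetry.exporter.prometheus_remote_write.gen.types_pb2 import (
    Label,
    Sample,
)

# One time series with one sample, labelled like the sample app's counter.
write_request = WriteRequest()
series = write_request.timeseries.add()
series.labels.append(Label(name="__name__", value="requests"))
series.labels.append(Label(name="environment", value="testing"))
series.samples.append(Sample(value=42.0, timestamp=1667185200000))  # ms epoch

# Remote write expects a snappy-compressed protobuf body and these headers.
response = requests.post(
    "http://cortex:9009/api/prom/push",
    data=snappy.compress(write_request.SerializeToString()),
    headers={
        "Content-Encoding": "snappy",
        "Content-Type": "application/x-protobuf",
        "X-Prometheus-Remote-Write-Version": "0.1.0",
    },
)
response.raise_for_status()
```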
New file: examples/sampleapp.py
@@ -0,0 +1,114 @@
# Copyright The OpenTelemetry Authors
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

import logging
import random
import sys
import time
from logging import INFO

import psutil

from opentelemetry import metrics
from opentelemetry.exporter.prometheus_remote_write import (
    PrometheusRemoteWriteMetricsExporter,
)
from opentelemetry.metrics import Observation
from opentelemetry.sdk.metrics import MeterProvider
from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader

logging.basicConfig(stream=sys.stdout, level=logging.INFO)
logger = logging.getLogger(__name__)


testing_labels = {"environment": "testing"}

exporter = PrometheusRemoteWriteMetricsExporter(
    endpoint="http://cortex:9009/api/prom/push",
    headers={"X-Scope-Org-ID": "5"},
)
reader = PeriodicExportingMetricReader(exporter, 1000)
provider = MeterProvider(metric_readers=[reader])
metrics.set_meter_provider(provider)
meter = metrics.get_meter(__name__)


# Callback to gather cpu usage
def get_cpu_usage_callback(observer):
    for (number, percent) in enumerate(psutil.cpu_percent(percpu=True)):
        labels = {"cpu_number": str(number)}
        yield Observation(percent, labels)


# Callback to gather RAM usage
def get_ram_usage_callback(observer):
    ram_percent = psutil.virtual_memory().percent
    yield Observation(ram_percent, {})


requests_counter = meter.create_counter(
    name="requests",
    description="number of requests",
    unit="1",
)

request_min_max = meter.create_counter(
    name="requests_min_max",
    description="min max sum count of requests",
    unit="1",
)

request_last_value = meter.create_counter(
    name="requests_last_value",
    description="last value number of requests",
    unit="1",
)

requests_active = meter.create_up_down_counter(
    name="requests_active",
    description="number of active requests",
    unit="1",
)

meter.create_observable_counter(
    callbacks=[get_ram_usage_callback],
    name="ram_usage",
    description="ram usage",
    unit="1",
)

meter.create_observable_up_down_counter(
    callbacks=[get_cpu_usage_callback],
    name="cpu_percent",
    description="per-cpu usage",
    unit="1",
)

request_latency = meter.create_histogram("request_latency")

# Load generator
num = random.randint(0, 1000)
while True:
    # counters
    requests_counter.add(num % 131 + 200, testing_labels)
    request_min_max.add(num % 181 + 200, testing_labels)
    request_last_value.add(num % 101 + 200, testing_labels)

    # updown counter
    requests_active.add(num % 7231 + 200, testing_labels)

    request_latency.record(num % 92, testing_labels)
    logger.log(level=INFO, msg="completed metrics collection cycle")
    time.sleep(1)
    num += 9791
New file: the proto README
@@ -0,0 +1,3 @@
## Instructions
1. Install the protobuf tools, either via your package manager or from [GitHub](https://github.com/protocolbuffers/protobuf/releases/tag/v21.7)
2. Run `generate-proto-py.sh` from inside the `proto/` directory