Collection of Python logging, tracing and profiling tools
"Troncos" is the plural of the spanish word "Tronco", which translates to "trunk" or "log".
# With pip
pip install troncos
Troncos is designed to take advantage of ddtrace, made by Datadog.
The ddtrace docs can be found here.
Best practices for traces is a good guide to get started.
- A span attribute is a key/value pair that provides context for its span.
- A resource attribute is a key/value pair that describes the context of how the span was collected.
For more information, read the Attribute and Resource sections in the OpenTelemetry specification.
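To illustrate the distinction (the attribute names below follow common OpenTelemetry semantic conventions and are examples only, not values Troncos sets for you), both kinds are plain key/value pairs:

```python
# Span attributes describe a single operation (illustrative values).
span_attributes = {
    "http.method": "GET",
    "http.status_code": 200,
}

# Resource attributes describe the entity that produced the span,
# e.g. which service and environment it was collected from.
resource_attributes = {
    "service.name": "my-service",
    "deployment.environment": "production",
}

print(span_attributes["http.method"], resource_attributes["service.name"])
```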
Configure ddtrace as usual and run configure_tracer to send spans to Tempo. This is typically done in settings.py if you want to trace a Django application, or in __init__.py in the root project package.
TRACE_HOST is usually the host IP of the K8s pod, and TRACE_PORT is usually 4318 when the Grafana agent is used to collect spans over HTTP.
import ddtrace
from troncos.tracing import configure_tracer, Exporter
# Configure tracer as described in the ddtrace docs.
ddtrace.config.django["service_name"] = 'SERVICE_NAME'
# These are added as span attributes
ddtrace.tracer.set_tags(
    tags={
        "key": "value",
    }
)
# Patch third-party modules
ddtrace.patch_all()
# Configure the ddtrace tracer to send traces to Tempo.
configure_tracer(
    enabled=False,  # Set to True when TRACE_HOST is configured.
    service_name='SERVICE_NAME',
    exporter=Exporter(
        host="127.0.0.1",  # Usually obtained from env variables.
    ),
    resource_attributes={
        "app": "app",
        "component": "component",
        "role": "role",
        "tenant": "tenant",
        "owner": "owner",
        "version": "version",
    },
)
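A minimal sketch of how the exporter host is typically resolved from the TRACE_HOST and TRACE_PORT environment variables mentioned above (the helper function is hypothetical, not part of Troncos):

```python
import os

def resolve_exporter_address() -> tuple[str, str]:
    # Hypothetical helper: fall back to localhost and the default
    # OTLP/HTTP collector port (4318) when the variables are unset.
    host = os.environ.get("TRACE_HOST", "127.0.0.1")
    port = os.environ.get("TRACE_PORT", "4318")
    return host, port

host, port = resolve_exporter_address()
print(host, port)
```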
ddtrace also uses environment variables to configure the service name, environment, version, and so on. Add the following environment variables to your application:
DD_ENV="{{ environment }}"
DD_SERVICE="{{ app }}"
DD_VERSION="{{ version }}"
# tracecontext/w3c is usually used to propagate distributed traces across services.
DD_TRACE_PROPAGATION_STYLE_EXTRACT="tracecontext"
DD_TRACE_PROPAGATION_STYLE_INJECT="tracecontext"
Setting the environment variable OTEL_TRACE_DEBUG=True enables printing traces to stdout via the ConsoleSpanExporter, in addition to exporting them over HTTP/gRPC. Additionally setting OTEL_TRACE_DEBUG_FILE=/some/file/path writes the traces to the specified file path instead of stdout.
Using the gRPC span exporter gives you significant performance gains. If you are running a critical service with high load in production, we recommend using gRPC. The port is usually 4317 when the Grafana agent is used to collect spans over gRPC.
poetry add troncos -E grpc
or
[tool.poetry.dependencies]
troncos = {version="?", extras = ["grpc"]}
from troncos.tracing import configure_tracer, Exporter
configure_tracer(
    enabled=False,  # Set to True when TRACE_HOST is configured.
    service_name='SERVICE_NAME',
    exporter=Exporter(
        host="127.0.0.1",  # Usually obtained from env variables.
        port="4317",
    ),
)
from troncos.tracing import configure_tracer, Exporter
configure_tracer(
    enabled=False,  # Set to True when TRACE_HOST is configured.
    service_name='SERVICE_NAME',
    exporter=Exporter(
        host="127.0.0.1",  # Usually obtained from env variables.
        headers={"my": "header"},
    ),
)
Manual instrumentation of your code is described in the ddtrace docs.
Adding the tracing context to your logs makes it easier to find relevant traces in Grafana. Troncos includes a Structlog processor designed to do this.
import structlog
from troncos.contrib.structlog.processors import trace_injection_processor
structlog.configure(
    processors=[
        trace_injection_processor,
    ],
)
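For context, a Structlog processor is simply a callable that takes the logger, the method name, and the event dict, and returns the (possibly modified) event dict. A hypothetical stand-in for what a trace-injection processor does:

```python
# Hypothetical sketch: a real trace-injection processor reads the IDs
# from the currently active span instead of using fixed values.
def inject_trace_ids(logger, method_name, event_dict):
    event_dict["trace_id"] = "0af7651916cd43dd8448eb211c80319c"
    event_dict["span_id"] = "b7ad6b7169203331"
    return event_dict

print(inject_trace_ids(None, "info", {"event": "request finished"}))
```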
Finding relevant traces in Grafana can be difficult. One way to make this easier is to log every major action in your application. This typically means logging every incoming HTTP request to your server or every task executed by your Celery worker. Note that the Structlog processor above must be enabled for these logs to include the tracing context.
Log ASGI requests.
from starlette.applications import Starlette
from troncos.contrib.asgi.logging.middleware import AsgiLoggingMiddleware
application = AsgiLoggingMiddleware(Starlette())
Log Django requests. This is not needed if you run Django with ASGI and use the ASGI middleware.
MIDDLEWARE = [
    "troncos.contrib.django.logging.middleware.DjangoLoggingMiddleware",
    ...
]
Log Celery tasks. Run the code below when you configure Celery.
from troncos.contrib.celery.logging.signals import (
    connect_troncos_logging_celery_signals,
)

connect_troncos_logging_celery_signals()
Start the profiler by running the start_py_spy_profiler method early in your application. This is typically done in settings.py if you want to profile a Django application, or in __init__.py in the root project package.
from troncos.profiling import start_py_spy_profiler
start_py_spy_profiler(server_address="http://127.0.0.1:4100")
Start the profiler by importing the profiler module early in your application. This is typically done in settings.py if you want to profile a Django application, or in __init__.py in the root project package.
import troncos.profiling.auto
Use one of the methods below based on your selected framework.
Add the profile view to the url config.
from django.urls import path
from troncos.contrib.django.profiling.views import profiling_view
urlpatterns = [
    path("/debug/pprof", profiling_view, name="profiling"),
]
Add the profile view to your router.
from starlette.routing import Route
from troncos.contrib.starlette.profiling.views import profiling_view
routes = [
    Route("/debug/pprof", profiling_view),
]
Mount the generic ASGI profiling application. There is no generic way to do this, please check the relevant ASGI framework documentation.
from troncos.contrib.asgi.profiling.app import profiling_asgi_app
# FastAPI example
from fastapi import FastAPI
app = FastAPI()
app.mount("/debug/pprof", profiling_asgi_app)
You can verify that your setup works with the pprof cli:
pprof -http :6060 "http://localhost:8080/debug/pprof"
When you deploy your application, be sure to use the custom Oda annotations for scraping:
annotations:
  phlare.oda.com/port: "8080"
  phlare.oda.com/scrape: "true"
Troncos is not designed to take control of your logger, but it does include logging-related tools to make instrumenting your code easier.
Troncos contains a helper method that lets you configure Structlog.
First, run poetry add structlog
to install structlog in your project.
You can now replace your existing logger config with
from troncos.contrib.structlog import configure_structlog
configure_structlog(format="json", level="INFO")
Troncos has a Structlog processor that can be used to add the span_id and trace_id properties to your log. More information can be found in the Tracing section of this document. The configure_structlog helper method enables this processor by default.
Finding the relevant traces in Tempo and Grafana can be difficult. The request logging middleware exists to make it easier to connect HTTP requests to traces. More information can be found in the Tracing section of this document.