
Getting Metrics to flow in AWS Distro for OpenTelemetry Lambda Support For Python #85

Closed
rberger opened this issue Aug 4, 2022 · 4 comments

rberger commented Aug 4, 2022

Using the current release `arn:aws:lambda:<region>:901920570463:layer:aws-otel-python-amd64-ver-1-11-1:1`, it looks like OpenTelemetry metrics are available in the SDK used in a Lambda with the above layer if you import `_metrics`. At least the import doesn't raise an error.

But I cannot get metrics to actually flow to an exporter. I tried something based on the Python metrics example in the OpenTelemetry docs (shown below). The only difference is that I could not get the

```yaml
processors:
  batch:
```

section in the collector config.yml to work with the v1.11.1 Lambda layer; it would always crash the extension. I presume the problem is related to that.

I also tried the metrics example from the v1.11.1 opentelemetry-python release, but could not import from `opentelemetry.exporter.otlp.proto.grpc._metric_exporter`, as it does not seem to be available in the AWS Python Lambda layer. That example also used the `processors: batch` lines in the collector config.yaml, which crash the Lambda.

Here is the Lambda code and the collector config.yml I was trying:

```python
import os
import json
import time
from random import randint

from opentelemetry import trace
from opentelemetry import _metrics

# Acquire a tracer
tracer = trace.get_tracer(__name__)

# Acquire a meter and create a counter
meter = _metrics.get_meter(__name__)

my_counter = meter.create_counter(
    "my_counter",
    description="My Counter counting",
)

# The Lambda handler
def handler(event, context):
    with tracer.start_as_current_span("top") as topspan:
        res = randint(1, 6)

        # Count this invocation, tagged with the die roll
        my_counter.add(1, {"res.value": res})

        json_region = os.environ['AWS_REGION']
        topspan.set_attribute("region", json_region)
        topspan.set_attribute("res.value", res)
        time.sleep(10)
        return {
            "statusCode": 200,
            "headers": {
                "Content-Type": "application/json"
            },
            "body": json.dumps({
                "Region ": json_region,
                "res ": res
            })
        }
```

I've tried many variations of this; none end up exporting metrics to Honeycomb (or, as far as I can tell, to the logging exporter). Traces do flow to Honeycomb.

```yaml
receivers:
  otlp:
    protocols:
      grpc:
      http:

exporters:
  logging:
    loglevel: debug
  awsxray:
  otlp:
    endpoint: "api.honeycomb.io:443"
    headers:
      "x-honeycomb-team": "Qf0n7UBOs2sG3DL7SA8CDD"
      "x-honeycomb-dataset": "rob"
  otlp/metrics:
    endpoint: "api.honeycomb.io:443"
    headers:
      "x-honeycomb-team": "Qf0n7UBOs2sG3DL7SA8CDD"
      "x-honeycomb-dataset": "rob"

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: []
      exporters: [logging, awsxray, otlp]
    metrics:
      receivers: [otlp]
      processors: []
      exporters: [logging, otlp/metrics]
```

Either I'm doing something wrong (highly probable) or the v1.11.1 AWS Lambda layer for Python does not quite support metrics yet. It's hard for me to tell right now, and I can't find any explicit examples combining metrics with the AWS Python Lambda layer. Any help would be appreciated!

@mhausenblas
Member

ADOT PM here. We're working hard on getting this done ASAP; please allow for some more time. We'll make sure to keep you posted, and, as usual, when we officially support it there will be a blog post on it.


rberger commented Nov 11, 2022

Any news on this?

@Aneurysm9 Aneurysm9 self-assigned this Dec 22, 2022
@Aneurysm9
Member

I have created open-telemetry/opentelemetry-python-contrib#1613, which will cause the Lambda instrumentation's handler wrapper to invoke `force_flush()` on the `MeterProvider`. This should result in the metric you have defined being exported, assuming the `OTEL_METRICS_EXPORTER` environment variable is set appropriately. It should be safe to set it to `otlp_proto_http`.

This change is not yet available in the ADOT Lambda layers, but we will include it once it is released upstream.
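Once that change lands in the layer, the flush happens in the wrapper, so only the exporter selection remains to be configured. A sketch of setting the environment variable via the AWS CLI (the function name is a placeholder):

```shell
# Hypothetical function name; selects the metrics exporter that the
# SDK autoconfiguration picks up.
aws lambda update-function-configuration \
  --function-name my-instrumented-function \
  --environment "Variables={OTEL_METRICS_EXPORTER=otlp_proto_http}"
```

Note that `--environment` replaces the function's entire variable map, so include any existing variables alongside this one.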
