
Getting "Duplicate TimeSeries encountered" error logs when sending OpenTelemetry measures, and resource info is empty #255

Open
moshevi opened this issue Jul 3, 2023 · 4 comments

moshevi commented Jul 3, 2023

Hi.
I'm running a Kubeflow pipeline on Google Cloud with a Python package that uses these dependencies:

opentelemetry-api>=1.18.0
opentelemetry-sdk>=1.18.0
opentelemetry-resourcedetector-process>=0.3.0
opentelemetry-propagator-gcp>=1.5.0
opentelemetry-resourcedetector-gcp>=1.5.0a0
opentelemetry-exporter-gcp-monitoring>=1.5.0a0
opentelemetry-resourcedetector-kubernetes>=0.3.0
  1. I'm getting an OpenTelemetry error: One or more TimeSeries could not be written: Field timeSeries[3] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.; Field timeSeries[1] had an invalid value: Duplicate TimeSeries encountered. Only one point can be written per TimeSeries per request.
  2. I don't see any resource info for my metric in the Google Cloud Metrics Explorer.

I initialize my metrics as follows:

    from opentelemetry import metrics
    from opentelemetry.exporter.cloud_monitoring import CloudMonitoringMetricsExporter
    from opentelemetry.sdk.metrics import MeterProvider
    from opentelemetry.sdk.metrics.export import PeriodicExportingMetricReader
    from opentelemetry.sdk.metrics.view import ExplicitBucketHistogramAggregation, View
    from opentelemetry.sdk.resources import get_aggregated_resources
    # Detector imports come from the resource-detector packages listed above.
    from opentelemetry.resourcedetector.gcp_resource_detector import GoogleCloudResourceDetector
    from opentelemetry_resourcedetector_kubernetes import KubernetesResourceDetector
    from opentelemetry_resourcedetector_process import ProcessResourceDetector

    # Detect and merge Kubernetes, GCP, and process resource attributes.
    resources = get_aggregated_resources(
        [
            KubernetesResourceDetector(),
            GoogleCloudResourceDetector(),
            ProcessResourceDetector(),
        ],
        timeout=60,
    )

    metrics.set_meter_provider(
        MeterProvider(
            views=[
                # Explicit histogram buckets (in ms) for every *.duration instrument.
                View(
                    instrument_name='*.duration',
                    aggregation=ExplicitBucketHistogramAggregation((
                        0, 25, 50, 75, 100, 200, 400, 600, 800, 1000, 2000,
                        4000, 6000, 10000, 20000, 30000, 60000, 120000)),
                ),
            ],
            metric_readers=[
                PeriodicExportingMetricReader(
                    CloudMonitoringMetricsExporter(
                        add_unique_identifier=True,
                        prefix='custom.googleapis.com/companyName/my-shared-libary',
                    ),
                ),
            ],
            resource=resources,
        )
    )

    counter = metrics.get_meter('MeterName').create_counter(name='counter_metric_name', unit='1')
    hist = metrics.get_meter('MeterName').create_histogram(name='hist_metric_name', unit='ms')

And sending metrics as follows:

    from datetime import datetime

    labels = {'product': 'productA', 'status': 'OK'}
    counter.add(1, labels)
    # startTime was captured with datetime.now() before the measured operation
    duration = round((datetime.now() - startTime).total_seconds() * 1000)
    hist.record(duration, labels)

The measurements look OK in the Google Cloud Metrics Explorer, but I wonder why I see this error in the logs and why I don't see the resource data there. Any idea?

moshevi changed the title from "Getting Duplicate TimeSeries encountered error logs on sending opentelemtry measures" to "Getting Duplicate TimeSeries encountered error logs on sending opentelemtry measures and empty resources info" on Jul 3, 2023
aabmass (Collaborator) commented Jul 17, 2023

@moshevi are you using gunicorn or uwsgi? We sometimes see this error if your process is forking.
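(As an aside, one quick way to check whether forking is involved, as a sketch with a hypothetical helper: log the process ID where the MeterProvider is configured and again where the metrics are recorded; if the PIDs differ, telemetry was set up in the parent process before a fork.)

    import logging
    import os

    # Hypothetical diagnostic helper: call it once where the MeterProvider is
    # set up and again right before counter.add() / hist.record(). Differing
    # PIDs mean the process forked after telemetry was initialized.
    def log_otel_checkpoint(where: str) -> None:
        logging.info("OpenTelemetry checkpoint %r in pid %d", where, os.getpid())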

dashpole added the bug (Something isn't working), question (Further information is requested), and priority: p2 labels and removed the bug (Something isn't working) label on Aug 14, 2023
aabmass (Collaborator) commented Nov 15, 2023

@moshevi is this still an issue you're facing?

moshevi (Author) commented Nov 16, 2023

@aabmass Sorry, I missed your message. What do you mean by gunicorn or uwsgi, in which Python package?
And yes, I still get this error.
Do you see any issue in my code that could cause this error?

aabmass (Collaborator) commented Jul 29, 2024

This is most likely a resource detection issue. The error "Duplicate TimeSeries encountered" usually means multiple copies of your application are writing the exact same metric "stream" (all the same resource and metric labels). Gunicorn and uwsgi are frequently used to run Python applications with multiple worker processes, and there are some caveats with OpenTelemetry in that setup. Do you know if your Kubeflow deployment is using Gunicorn or uwsgi?
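If a pre-fork server does turn out to be involved, the usual workaround is to defer the OpenTelemetry setup to a post-fork hook so that each worker process builds its own MeterProvider, metric reader, and exporter. A minimal Gunicorn sketch, assuming the setup code from this issue is wrapped in a hypothetical init_telemetry() helper in a hypothetical my_shared_library.telemetry module:

    # gunicorn.conf.py -- sketch only; init_telemetry() is a hypothetical helper
    # wrapping the get_aggregated_resources()/MeterProvider setup shown above.
    from my_shared_library.telemetry import init_telemetry


    def post_fork(server, worker):
        # Gunicorn runs this hook in each worker process after forking, so every
        # worker gets its own MeterProvider and exporter (and, with
        # add_unique_identifier=True, its own unique label) instead of inheriting
        # exporter state created in the parent process.
        server.log.info("Initializing OpenTelemetry in worker pid %s", worker.pid)
        init_telemetry()

uWSGI has an equivalent mechanism (the postfork decorator from uwsgidecorators).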
