DataDog scaler causes errors in keda-metrics-apiserver and a keda-operator CrashLoopBackOff #3448
Comments
I have tried to reproduce the issue and I can't, not with v2.7.1 nor with the main branch; every call works. It's likely because my scenario is spun up using the e2e tests, and that's not enough. EDIT: I think that I have caught the issue, but it would be nice if you could share the payload so I can confirm I'm right.
Hi @JorTurFer, here is the payload:
{
"status": "ok",
"resp_version": 1,
"series": [
{
"end": 1659099139000,
"attributes": {},
"metric": "(trace.express.request.hits / kubernetes.cpu.requests)",
"interval": 10,
"tag_set": [],
"start": 1659099000000,
"length": 139,
"query_index": 0,
"aggr": "sum",
"scope": "*",
"pointlist": [
[
1659099000000,
116577.9038193589
],
[
1659099001000,
null
],
[
1659099002000,
null
],
[
1659099003000,
null
],
[
1659099004000,
null
],
[
1659099005000,
null
],
[
1659099006000,
null
],
[
1659099007000,
null
],
[
1659099008000,
null
],
[
1659099009000,
null
],
[
1659099010000,
107776.85848370226
],
[
1659099011000,
null
],
[
1659099012000,
null
],
[
1659099013000,
null
],
[
1659099014000,
null
],
[
1659099015000,
null
],
[
1659099016000,
null
],
[
1659099017000,
null
],
[
1659099018000,
null
],
[
1659099019000,
null
],
[
1659099020000,
105353.55332080477
],
[
1659099021000,
null
],
[
1659099022000,
null
],
[
1659099023000,
null
],
[
1659099024000,
null
],
[
1659099025000,
null
],
[
1659099026000,
null
],
[
1659099027000,
null
],
[
1659099028000,
null
],
[
1659099029000,
null
],
[
1659099030000,
106338.44041912442
],
[
1659099031000,
null
],
[
1659099032000,
null
],
[
1659099033000,
null
],
[
1659099034000,
null
],
[
1659099035000,
null
],
[
1659099036000,
null
],
[
1659099037000,
null
],
[
1659099038000,
null
],
[
1659099039000,
null
],
[
1659099040000,
109651.45491616814
],
[
1659099041000,
null
],
[
1659099042000,
null
],
[
1659099043000,
null
],
[
1659099044000,
null
],
[
1659099045000,
null
],
[
1659099046000,
null
],
[
1659099047000,
null
],
[
1659099048000,
null
],
[
1659099049000,
null
],
[
1659099050000,
107195.36102251594
],
[
1659099051000,
null
],
[
1659099052000,
null
],
[
1659099053000,
null
],
[
1659099054000,
null
],
[
1659099055000,
null
],
[
1659099056000,
null
],
[
1659099057000,
null
],
[
1659099058000,
null
],
[
1659099059000,
null
],
[
1659099060000,
100136.95037469952
],
[
1659099061000,
null
],
[
1659099062000,
null
],
[
1659099063000,
null
],
[
1659099064000,
null
],
[
1659099065000,
null
],
[
1659099066000,
null
],
[
1659099067000,
null
],
[
1659099068000,
null
],
[
1659099069000,
null
],
[
1659099070000,
102741.12424014426
],
[
1659099071000,
null
],
[
1659099072000,
null
],
[
1659099073000,
null
],
[
1659099074000,
null
],
[
1659099075000,
null
],
[
1659099076000,
null
],
[
1659099077000,
null
],
[
1659099078000,
null
],
[
1659099079000,
null
],
[
1659099080000,
100032.39217535657
],
[
1659099081000,
null
],
[
1659099082000,
null
],
[
1659099083000,
null
],
[
1659099084000,
null
],
[
1659099085000,
null
],
[
1659099086000,
null
],
[
1659099087000,
null
],
[
1659099088000,
null
],
[
1659099089000,
null
],
[
1659099090000,
99512.41089939019
],
[
1659099091000,
null
],
[
1659099092000,
null
],
[
1659099093000,
null
],
[
1659099094000,
null
],
[
1659099095000,
null
],
[
1659099096000,
null
],
[
1659099097000,
null
],
[
1659099098000,
null
],
[
1659099099000,
null
],
[
1659099100000,
96548.7361863122
],
[
1659099101000,
null
],
[
1659099102000,
null
],
[
1659099103000,
null
],
[
1659099104000,
null
],
[
1659099105000,
null
],
[
1659099106000,
null
],
[
1659099107000,
null
],
[
1659099108000,
null
],
[
1659099109000,
null
],
[
1659099110000,
97700.42777091639
],
[
1659099111000,
null
],
[
1659099112000,
null
],
[
1659099113000,
null
],
[
1659099114000,
null
],
[
1659099115000,
null
],
[
1659099116000,
null
],
[
1659099117000,
null
],
[
1659099118000,
null
],
[
1659099119000,
null
],
[
1659099120000,
92992.53179363097
],
[
1659099121000,
null
],
[
1659099122000,
null
],
[
1659099123000,
null
],
[
1659099124000,
null
],
[
1659099125000,
null
],
[
1659099126000,
null
],
[
1659099127000,
null
],
[
1659099128000,
null
],
[
1659099129000,
null
],
[
1659099130000,
105294.97427162394
],
[
1659099131000,
null
],
[
1659099132000,
null
],
[
1659099133000,
null
],
[
1659099134000,
null
],
[
1659099135000,
null
],
[
1659099136000,
null
],
[
1659099137000,
null
],
[
1659099138000,
null
]
],
"expression": "(sum:trace.express.request.hits{*}.as_rate() / avg:kubernetes.cpu.requests{*})",
"unit": null,
"display_name": "(trace.express.request.hits / kubernetes.cpu.requests)"
}
],
"to_date": 1659099139000,
"query": "sum:trace.express.request.hits{*}.as_rate()/avg:kubernetes.cpu.requests{*}",
"message": "",
"res_type": "time_series",
"times": [],
"from_date": 1659099000000,
"group_by": [],
"values": []
}
In fact, my suspicions are correct: the latest point doesn't have a value. I don't know whether failing (the current behaviour) is correct, or whether we should fall back to the latest valid value. I don't have any expertise on this, so let's wait until @arapulido gives her opinion.
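For reference, a minimal sketch (not KEDA's actual code) of the fallback being discussed: walking the pointlist backwards to find the latest non-null value instead of blindly reading the last point. This assumes the pointlist is decoded as `[][]*float64`, where a JSON `null` value becomes a `nil` pointer.

```go
package main

import "fmt"

// latestValidPoint walks a Datadog pointlist backwards and returns the most
// recent non-null value. Each point is [timestamp, value]; a JSON null value
// decodes to a nil *float64, so dereferencing it without a check would panic.
func latestValidPoint(points [][]*float64) (float64, bool) {
	for i := len(points) - 1; i >= 0; i-- {
		if len(points[i]) >= 2 && points[i][1] != nil {
			return *points[i][1], true
		}
	}
	return 0, false
}

// fp is a small helper to build *float64 literals for the example.
func fp(v float64) *float64 { return &v }

func main() {
	// Mimics the tail of the payload above: the most recent points are null.
	points := [][]*float64{
		{fp(1659099130000), fp(105294.97427162394)},
		{fp(1659099131000), nil},
		{fp(1659099138000), nil},
	}
	if v, ok := latestValidPoint(points); ok {
		fmt.Println(v) // prints the latest non-null value
	}
}
```

With this approach a query whose freshest points haven't been flushed yet would still yield a usable metric, at the cost of returning slightly stale data.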
Let me also ask Datadog support whether this is expected behavior.
The issue still exists in 2.8.0.

keda-metrics-apiserver logs:

keda-operator logs:
By any chance, could you debug the operator locally? I can't imagine how this could be happening; both accessors are covered by the preceding nil checks...
Hi, we encountered the same issue with 2.7.3 and 2.8.0, though we get this error instead of a panic (in keda-operator):
Report

When I use the query
sum:trace.express.request.hits{*}.as_rate()/avg:kubernetes.cpu.requests{*}
in the Datadog scaler, the metrics-apiserver panics with
panic: runtime error: invalid memory address or nil pointer dereference
and causes keda-operator to crash.

Here is my ScaledObject manifest:

I already confirmed that the query works without problems in the Datadog UI and with a curl command.
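For completeness, a sketch of verifying the same query directly against Datadog's v1 query endpoint, roughly what a curl check would look like (the endpoint and headers follow Datadog's public API; `DD_API_KEY` and `DD_APP_KEY` are placeholders for your own credentials):

```shell
# Evaluate the same expression KEDA uses, over roughly the last 140 seconds.
# DD_API_KEY and DD_APP_KEY must be exported with your Datadog credentials.
now=$(date +%s)
from=$((now - 140))

curl -G "https://api.datadoghq.com/api/v1/query" \
  --data-urlencode "from=${from}" \
  --data-urlencode "to=${now}" \
  --data-urlencode "query=sum:trace.express.request.hits{*}.as_rate()/avg:kubernetes.cpu.requests{*}" \
  -H "DD-API-KEY: ${DD_API_KEY}" \
  -H "DD-APPLICATION-KEY: ${DD_APP_KEY}"
```

Inspecting the tail of the returned pointlist shows whether the most recent datapoints are null, which is what the scaler trips over.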
Expected Behavior

The query
sum:trace.express.request.hits{*}.as_rate()/avg:kubernetes.cpu.requests{*}
returns the result as expected.

Actual Behavior

The metrics-apiserver returns errors and the operator starts crashing.
Steps to Reproduce the Problem

If there is no as_rate() in the query, it works without problems:
sum:trace.express.request.hits{*}/avg:kubernetes.cpu.requests{*}

But as soon as I add as_rate() as below, the metrics-apiserver starts returning errors and keda-operator starts crashing:
sum:trace.express.request.hits{*}.as_rate()/avg:kubernetes.cpu.requests{*}
Logs from KEDA operator
keda-metrics-apiserver
keda-operator
KEDA Version
2.7.1
Kubernetes Version
1.21
Platform
Amazon Web Services
Scaler Details
datadog
Anything else?
No response