v3: normalizer: smarter dedup #3262
Conversation
demmer
left a comment
Overall this looks good -- I have a few comments on style and one general point about propagating errors out of the normalizer that applies both before and after this change.
go/vt/sqlparser/normalizer.go
Outdated
Unrelated to this particular change, but it seems like we should be propagating errors in the Normalize traversal out instead of silently swallowing them?
For PII reasons we actually do depend on normalize working properly, so it would be better for us to fail the query with an error than let it go through unnormalized.
I looked at the error conditions. They are all for invalid inputs, which will be caught downstream. So it's better for the normalizer to ignore them and complete everything else; that way, any other PII will still be converted to bind vars.
go/vt/sqlparser/normalizer.go
Outdated
Following up on the above comment re error propagation, this should also return an error if the value should have been converted to a bind var but failed for some reason.
go/vt/sqlparser/normalizer.go
Outdated
Is the reason for this because the comparison is likely to be expensive CPU wise? If so we should add an additional comment indicating why.
go/vt/sqlparser/normalizer.go
Outdated
I'm not sure what the standard go style is, but I would find this particular bit easier to read as:
if node.Operator != InStr && node.Operator != NotInStr {
return
}
Changed. It might have been my bias against double negations.
go/vt/sqlparser/normalizer.go
Outdated
There was a problem hiding this comment.
If you make the above change, then this comment will be less necessary.
Based on real-life workloads, we found that it may not be a good idea to dedup all values. Specifically, values used in DMLs like INSERT should not be deduped: they end up polluting the plan cache with all kinds of combinations. With this change, values are deduped only if they are within SELECTs. This deduping happens even for subqueries within DMLs, while the DML parts themselves are still not deduped. Additionally, it doesn't make sense to spend the effort to dedup values that are too long, so I've added a check: if a value is longer than 256 bytes, we blindly create a new bind var.
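The decision rule described above can be sketched as a small predicate. The names here (`shouldDedup`, `maxDedupLength`, the `inSelect` flag) are hypothetical, not the actual normalizer code:

```go
package main

import "fmt"

// maxDedupLength mirrors the 256-byte cutoff described above
// (hypothetical constant name).
const maxDedupLength = 256

// shouldDedup sketches the decision: values are deduped only inside
// SELECTs, and only when they are short enough for the comparison
// to be worth the CPU cost.
func shouldDedup(value []byte, inSelect bool) bool {
	if !inSelect {
		// DML values (INSERT etc.) would pollute the plan cache
		// with all kinds of combinations if deduped.
		return false
	}
	if len(value) > maxDedupLength {
		// Too long: blindly create a new bind var instead.
		return false
	}
	return true
}

func main() {
	fmt.Println(shouldDedup([]byte("short"), true))   // true
	fmt.Println(shouldDedup([]byte("short"), false))  // false
	fmt.Println(shouldDedup(make([]byte, 300), true)) // false
}
```

A subquery inside a DML would pass `inSelect = true` for its own values while the enclosing DML's values still pass `false`, matching the behavior described above.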