
Prepared statements fail partition pruning if not PointGet #55630

Closed
mjonss opened this issue Aug 23, 2024 · 1 comment · Fixed by #55631
Labels: affects-8.1, component/tablepartition, severity/major, sig/sql-infra, type/bug

Comments


mjonss commented Aug 23, 2024

Bug Report

Please answer these questions before submitting your issue. Thanks!

1. Minimal reproduce step (Required)

Create a sysbench table:

sysbench /opt/homebrew/share/sysbench/select_random_ranges.lua --mysql-host=127.0.0.1 --mysql-port=4000 --mysql-user=root --mysql-db=test --tables=1 --table_size=1000000 prepare
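
For reference, sysbench's prepare step creates approximately the standard sbtest schema below (a sketch; exact types and defaults depend on the sysbench version):

CREATE TABLE sbtest1 (
  id INT NOT NULL AUTO_INCREMENT,
  k INT NOT NULL DEFAULT 0,
  c CHAR(120) NOT NULL DEFAULT '',
  pad CHAR(60) NOT NULL DEFAULT '',
  PRIMARY KEY (id),
  KEY k_1 (k)
);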

Create a similar table, but partitioned; copy the data from the table created by sysbench, and finally rename it to sbtest1:

create table tpart (id int, k int, c char(120),pad char(60), primary key (id,k) clustered, key (k)) partition by range (k) interval (100000) FIRST PARTITION LESS THAN (200000) LAST PARTITION LESS THAN (10100000);
batch on id limit 1000 insert into tpart select * from sbtest1;
rename table sbtest1 to t, tpart to sbtest1;
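
The INTERVAL clause in the CREATE TABLE above is shorthand for explicit RANGE partitions; it should expand to roughly the following (partition names are illustrative), i.e. 100 partitions of width 100000:

-- Equivalent explicit partition clause (sketch):
PARTITION BY RANGE (k) (
  PARTITION P_LT_200000 VALUES LESS THAN (200000),
  PARTITION P_LT_300000 VALUES LESS THAN (300000),
  -- ... one partition per 100000 step ...
  PARTITION P_LT_10100000 VALUES LESS THAN (10100000)
)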

Run select_random_ranges (note that sysbench uses prepared statements!):

sysbench /opt/homebrew/share/sysbench/select_random_ranges.lua --mysql-host=127.0.0.1 --mysql-port=4000 --mysql-user=root --mysql-db=test --tables=1 --table_size=1000000 --time=30 --threads=64 run
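
select_random_ranges issues a prepared range query of roughly the following shape (a sketch; by default sysbench ORs together several BETWEEN predicates, and the exact select list may differ by version). Because the predicate is a range on k rather than an equality on the full primary key, the plan is not a PointGet, which is the case the issue title refers to:

PREPARE stmt FROM 'SELECT count(k) FROM sbtest1 WHERE k BETWEEN ? AND ? OR k BETWEEN ? AND ?';
SET @a = 500000, @b = 500100, @c = 700000, @d = 700100;
EXECUTE stmt USING @a, @b, @c, @d;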

2. What did you expect to see? (Required)

Performance (QPS / transactions per second) in the same order of magnitude as for the non-partitioned table, not 1/50 of the QPS or less.

3. What did you see instead (Required)

QPS of ~500 for the partitioned table vs ~24k for the non-partitioned table.
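
One way to confirm that missed pruning is the cause (a sketch; the exact EXPLAIN output varies by version) is to inspect the plan of the executed prepared statement and look at the access object column:

PREPARE stmt FROM 'SELECT count(k) FROM sbtest1 WHERE k BETWEEN ? AND ?';
SET @lo = 500000, @hi = 500100;
EXECUTE stmt USING @lo, @hi;
-- From another session: EXPLAIN FOR CONNECTION <this connection's id>;
-- When pruning fails, the IndexRangeScan/TableRangeScan row shows
-- "partition:all" instead of the single partition covering the range.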

4. What is your TiDB version? (Required)

tidb_version(): Release Version: v8.3.0
Edition: Community
Git Commit Hash: 1a0c3ac
Git Branch: release-8.3
UTC Build Time: 2024-08-23 14:33:35
GoVersion: go1.22.1
Race Enabled: false
Check Table Before Drop: false
Store: tikv

mjonss added the type/bug label on Aug 23, 2024

mjonss commented Aug 23, 2024

With this diff, partition pruning works, and QPS is even a few percent higher than for the non-partitioned table:

diff --git a/pkg/planner/core/rule_partition_processor.go b/pkg/planner/core/rule_partition_processor.go
index 1d9e3a0708..092ed936eb 100644
--- a/pkg/planner/core/rule_partition_processor.go
+++ b/pkg/planner/core/rule_partition_processor.go
@@ -1587,9 +1587,11 @@ func (p *rangePruner) extractDataForPrune(sctx base.PlanContext, expr expression
        // the constExpr may not a really constant when coming here.
        // Suppose the partition expression is 'a + b' and we have a condition 'a = 2',
        // the constExpr is '2 + b' after the replacement which we can't evaluate.
-       if !expression.ConstExprConsiderPlanCache(constExpr, sctx.GetSessionVars().StmtCtx.UseCache()) {
-               return ret, false
-       }
+       /*
+               if !expression.ConstExprConsiderPlanCache(constExpr, sctx.GetSessionVars().StmtCtx.UseCache()) {
+                       return ret, false
+               }
+       */
        c, isNull, err := constExpr.EvalInt(sctx.GetExprCtx().GetEvalCtx(), chunk.Row{})
        if err == nil && !isNull {
                ret.c = c
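
For context (an interpretation, not the actual fix in #55631): the commented-out check presumably exists because, under the plan cache, a '?' placeholder is represented as a Constant whose value changes between executions, so a pruning decision evaluated at optimize time must not be baked into a cached plan. A sketch of the hazard that guard protects against:

PREPARE stmt FROM 'SELECT count(k) FROM sbtest1 WHERE k BETWEEN ? AND ?';
SET @lo = 250000, @hi = 250100;   -- falls in the first partition
EXECUTE stmt USING @lo, @hi;
SET @lo = 9950000, @hi = 9950100; -- falls in the last partition
EXECUTE stmt USING @lo, @hi;      -- must not reuse pruning from the first EXECUTE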
