bug: limit with a big number cause memory allocation panic #17337

Closed · 1 of 2 tasks
b41sh opened this issue Jan 20, 2025 · 1 comment · Fixed by #17339
Labels: C-bug (Category: something isn't working), sqlancer

b41sh (Member) commented Jan 20, 2025

Search before asking

  • I had searched in the issues and found no similar issues.

Version

main

What's Wrong?

Query execution time grows as the LIMIT value increases, and a sufficiently large value causes a memory allocation panic.
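
The error message in the reproduction below matches Rust's default allocation-error handler output ("memory allocation of N bytes failed"), so a plausible failure mode is an up-front buffer reservation sized by the user-supplied LIMIT rather than by the number of rows actually available. A minimal sketch of that pattern and the obvious clamp fix; the function names here are hypothetical, not Databend's actual internals:

```rust
// Sketch of the suspected failure mode: pre-sizing an output buffer
// from the user-supplied LIMIT instead of from the real row count.
fn take_limited(rows: &[i64], limit: usize) -> Vec<i64> {
    // BUG (hypothetical): with LIMIT 214748364900000 this asks the
    // allocator for limit * size_of::<i64>() bytes before producing
    // a single row, which aborts with
    // "memory allocation of ... bytes failed".
    let mut out = Vec::with_capacity(limit);
    out.extend(rows.iter().take(limit).copied());
    out
}

fn take_limited_fixed(rows: &[i64], limit: usize) -> Vec<i64> {
    // Fix: clamp the reservation to what can actually be returned.
    let mut out = Vec::with_capacity(limit.min(rows.len()));
    out.extend(rows.iter().take(limit).copied());
    out
}

fn main() {
    let rows = [-1571986382i64, -664763703, 163007680];
    // A huge limit is now harmless: capacity is bounded by rows.len().
    assert_eq!(take_limited_fixed(&rows, usize::MAX), rows.to_vec());
    println!("{:?}", take_limited_fixed(&rows, 2));
}
```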

How to Reproduce?

root@0.0.0.0:48000/default> create table tt10(c0int int64);
0 row written in 0.051 sec. Processed 0 row, 0 B (0 row/s, 0 B/s)

root@0.0.0.0:48000/default> insert into tt10 values(-1571986382),(-664763703),(163007680),(1011646100),(1158832829),(1496463364);
insert into
  tt10
values
(-1571986382),
(-664763703),
(163007680),
(1011646100),
(1158832829),
(1496463364)

┌─────────────────────────┐
│ number of rows inserted │
│          UInt64         │
├─────────────────────────┤
│                       6 │
└─────────────────────────┘
6 rows written in 0.058 sec. Processed 6 rows, 49 B (103.45 rows/s, 844 B/s)

root@0.0.0.0:48000/default>
root@0.0.0.0:48000/default> SELECT tt10.c0int FROM tt10 GROUP BY tt10.c0int LIMIT 10 OFFSET 0;

┌─────────────────┐
│      c0int      │
│ Nullable(Int64) │
├─────────────────┤
│     -1571986382 │
│      -664763703 │
│       163007680 │
│      1011646100 │
│      1158832829 │
│      1496463364 │
└─────────────────┘
6 rows read in 0.071 sec. Processed 6 rows, 49 B (84.51 rows/s, 690 B/s)

root@0.0.0.0:48000/default> SELECT tt10.c0int FROM tt10 GROUP BY tt10.c0int LIMIT 2147483649 OFFSET 0;

┌─────────────────┐
│      c0int      │
│ Nullable(Int64) │
├─────────────────┤
│     -1571986382 │
│      -664763703 │
│       163007680 │
│      1011646100 │
│      1158832829 │
│      1496463364 │
└─────────────────┘
6 rows read in 12.342 sec. Processed 6 rows, 49 B (0.49 row/s, 3 B/s)

root@0.0.0.0:48000/default> SELECT tt10.c0int FROM tt10 GROUP BY tt10.c0int LIMIT 214748364900000 OFFSET 0;

error: APIError: QueryFailed: [1104]memory allocation of 13743895353600000 bytes failed
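
Note the arithmetic: 214748364900000 × 64 = 13743895353600000, i.e. the failed allocation is exactly 64 bytes per potential output row up to the LIMIT, independent of the six rows actually in the table. This is consistent with the pre-reservation pattern sketched above, and it would also explain the 12-second runtime for LIMIT 2147483649: that value presumably still allocates and initializes a limit-sized buffer, it just happens to fit in memory.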

Are you willing to submit PR?

  • Yes I am willing to submit a PR!
forsaken628 (Collaborator) commented:

ref #14629
