forked from apache/spark
[SPARK-40267][DOC] Add description for ExecutorAllocationManager metrics #1
Closed
warrenzhu25 pushed a commit that referenced this pull request on Oct 19, 2022:
…ly equivalent children in `RewriteDistinctAggregates`

### What changes were proposed in this pull request?
In `RewriteDistinctAggregates`, when grouping aggregate expressions by function children, treat children that are semantically equivalent as the same.

### Why are the changes needed?
This PR will reduce the number of projections in the Expand operator when there are multiple distinct aggregations with superficially different children. In some cases, it will eliminate the need for an Expand operator.

Example: In the following query, the Expand operator creates 3\*n rows (where n is the number of incoming rows) because it has a projection for each of the function children `b + 1`, `1 + b` and `c`.
```
create or replace temp view v1 as
select * from values
  (1, 2, 3.0),
  (1, 3, 4.0),
  (2, 4, 2.5),
  (2, 3, 1.0)
  v1(a, b, c);

select
  a,
  count(distinct b + 1),
  avg(distinct 1 + b) filter (where c > 0),
  sum(c)
from v1
group by a;
```
The Expand operator has three projections (each producing a row for each incoming row):
```
[a#87, null, null, 0, null, UnscaledValue(c#89)],   <== projection #1 (for regular aggregation)
[a#87, (b#88 + 1), null, 1, null, null],            <== projection #2 (for distinct aggregation of b + 1)
[a#87, null, (1 + b#88), 2, (c#89 > 0.0), null]],   <== projection #3 (for distinct aggregation of 1 + b)
```
In reality, the Expand only needs one projection for `1 + b` and `b + 1`, because they are semantically equivalent.

With the proposed change, the Expand operator's projections look like this:
```
[a#67, null, 0, null, UnscaledValue(c#69)],   <== projection #1 (for regular aggregations)
[a#67, (b#68 + 1), 1, (c#69 > 0.0), null]],   <== projection #2 (for distinct aggregation on b + 1 and 1 + b)
```
With one less projection, Expand produces 2\*n rows instead of 3\*n rows, but still produces the correct result.

In the case where all distinct aggregates have semantically equivalent children, the Expand operator is not needed at all.

Benchmark code in the JIRA (SPARK-40382).

Before the PR:
```
distinct aggregates:            Best Time(ms)   Avg Time(ms)   Stdev(ms)   Rate(M/s)   Per Row(ns)   Relative
--------------------------------------------------------------------------------------------------------------
all semantically equivalent             14721           14859         195         5.7         175.5       1.0X
some semantically equivalent            14569           14572           5         5.8         173.7       1.0X
none semantically equivalent            14408           14488         113         5.8         171.8       1.0X
```
After the PR:
```
distinct aggregates:            Best Time(ms)   Avg Time(ms)   Stdev(ms)   Rate(M/s)   Per Row(ns)   Relative
--------------------------------------------------------------------------------------------------------------
all semantically equivalent              3658            3692          49        22.9          43.6       1.0X
some semantically equivalent             9124            9214         127         9.2         108.8       0.4X
none semantically equivalent            14601           14777         250         5.7         174.1       0.3X
```

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
New unit tests.

Closes apache#37825 from bersprockets/rewritedistinct_issue.

Authored-by: Bruce Robbins <[email protected]>
Signed-off-by: Wenchen Fan <[email protected]>
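The key primitive here is Catalyst's semantic-equivalence check. The following minimal sketch (illustrative, not the PR's code; it assumes a Spark 3.x-style Catalyst in which commutative operands are canonicalized into a common order) probes that `b + 1` and `1 + b` compare equal:
```scala
import org.apache.spark.sql.functions.{col, lit}

object SemanticEqualsSketch {
  def main(args: Array[String]): Unit = {
    val e1 = (col("b") + lit(1)).expr // Add(UnresolvedAttribute(b), Literal(1))
    val e2 = (lit(1) + col("b")).expr // Add(Literal(1), UnresolvedAttribute(b))
    // Canonicalization reorders commutative operands, so the two forms match.
    println(e1.semanticEquals(e2)) // expected: true
  }
}
```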
warrenzhu25 pushed a commit that referenced this pull request on Apr 10, 2023:
…nto w/ and w/o `ansi` suffix to pass sql analyzer test in ansi mode

### What changes were proposed in this pull request?
After apache#40496, run
```
SPARK_ANSI_SQL_MODE=true build/sbt "sql/testOnly org.apache.spark.sql.SQLQueryTestSuite"
```
One test fails with `spark.sql.ansi.enabled = true`:
```
[info] - timestampNTZ/datetime-special.sql_analyzer_test *** FAILED *** (11 milliseconds)
[info]   timestampNTZ/datetime-special.sql_analyzer_test
[info]   Expected "...date(999999, 3, 18, [false) AS make_date(999999, 3, 18)#x, make_date(-1, 1, 28, fals]e) AS make_date(-1, ...", but got "...date(999999, 3, 18, [true) AS make_date(999999, 3, 18)#x, make_date(-1, 1, 28, tru]e) AS make_date(-1, ..." Result did not match for query #1
[info]   select make_date(999999, 3, 18), make_date(-1, 1, 28) (SQLQueryTestSuite.scala:777)
[info]   org.scalatest.exceptions.TestFailedException:
```
The failure occurs because the last parameter of the function `MakeDate` is `failOnError: Boolean = SQLConf.get.ansiEnabled`. So this PR splits `timestampNTZ/datetime-special.sql` into w/ and w/o `ansi` variants to mask this test difference.

### Why are the changes needed?
Make SQLQueryTestSuite pass with `spark.sql.ansi.enabled = true`.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
- Pass GitHub Actions
- Manually checked `SPARK_ANSI_SQL_MODE=true build/sbt "sql/testOnly org.apache.spark.sql.SQLQueryTestSuite"`

Closes apache#40552 from LuciferYang/SPARK-42921.

Authored-by: yangjie01 <[email protected]>
Signed-off-by: Hyukjin Kwon <[email protected]>
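The mode-dependent plan can be reproduced directly. A minimal sketch (illustrative, not part of the PR; it assumes a local SparkSession, and that the analyzed plan renders the captured `failOnError` flag, as the failure output above indicates):
```scala
import org.apache.spark.sql.SparkSession

object AnsiPlanDiffSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[1]").appName("ansi-diff").getOrCreate()
    for (ansi <- Seq("false", "true")) {
      spark.conf.set("spark.sql.ansi.enabled", ansi)
      // The analyzed plan embeds the flag: make_date(..., false) vs make_date(..., true).
      println(spark.sql("select make_date(999999, 3, 18)").queryExecution.analyzed)
    }
    spark.stop()
  }
}
```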
warrenzhu25 pushed a commit that referenced this pull request on Jun 5, 2024:
### What changes were proposed in this pull request?
This PR uses SMALLINT (as TINYINT ranges [0, 255]) instead of BYTE to fix the ByteType mapping for MsSQLServer JDBC.
```java
[info]   com.microsoft.sqlserver.jdbc.SQLServerException: Column, parameter, or variable #1: Cannot find data type BYTE.
[info]   at com.microsoft.sqlserver.jdbc.SQLServerException.makeFromDatabaseError(SQLServerException.java:265)
[info]   at com.microsoft.sqlserver.jdbc.SQLServerStatement.getNextResult(SQLServerStatement.java:1662)
[info]   at com.microsoft.sqlserver.jdbc.SQLServerStatement.doExecuteStatement(SQLServerStatement.java:898)
[info]   at com.microsoft.sqlserver.jdbc.SQLServerStatement$StmtExecCmd.doExecute(SQLServerStatement.java:793)
[info]   at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7417)
[info]   at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:3488)
[info]   at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:262)
[info]   at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:237)
[info]   at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeUpdate(SQLServerStatement.java:733)
[info]   at org.apache.spark.sql.jdbc.JdbcDialect.createTable(JdbcDialects.scala:267)
```

### Why are the changes needed?
Bugfix.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
New tests.

### Was this patch authored or co-authored using generative AI tooling?
No.

Closes apache#46164 from yaooqinn/SPARK-47938.

Lead-authored-by: Kent Yao <[email protected]>
Co-authored-by: Dongjoon Hyun <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
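For context, a `JdbcDialect`'s `getJDBCType` decides which database type each Catalyst type maps to. The following hedged sketch shows the shape of such a fix (illustrative only; the actual change lives in Spark's built-in MsSqlServerDialect). SQL Server needs SMALLINT here because its TINYINT is unsigned while Spark's ByteType is signed:
```scala
import java.sql.Types
import org.apache.spark.sql.jdbc.{JdbcDialect, JdbcType}
import org.apache.spark.sql.types.{ByteType, DataType}

// Illustrative dialect override, not Spark's actual MsSqlServerDialect code.
object MsSqlServerByteTypeSketch extends JdbcDialect {
  override def canHandle(url: String): Boolean =
    url.toLowerCase.startsWith("jdbc:sqlserver")

  override def getJDBCType(dt: DataType): Option[JdbcType] = dt match {
    // BYTE is not a SQL Server type; SMALLINT safely holds Spark's signed ByteType.
    case ByteType => Some(JdbcType("SMALLINT", Types.SMALLINT))
    case _        => None // fall back to the default mappings
  }
}
```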
warrenzhu25 pushed a commit that referenced this pull request on Aug 12, 2024:
…rtition data results should return user-facing error
### What changes were proposed in this pull request?
Create an example parquet table with partitions and insert data in Spark:
```
create table t(col1 string, col2 string, col3 string) using parquet location 'some/path/parquet-test' partitioned by (col1, col2);
insert into t (col1, col2, col3) values ('a', 'b', 'c');
```
Go into the `parquet-test` path in the filesystem and try to copy the parquet data file from the `col1=a/col2=b` directory into `col1=a`. After that, try to create a new table based on the parquet data in Spark:
```
create table broken_table using parquet location 'some/path/parquet-test';
```
This query fails with an internal error. Stack trace excerpts:
```
org.apache.spark.SparkException: [INTERNAL_ERROR] Eagerly executed command failed. You hit a bug in Spark or the Spark plugins you use. Please, report this bug to the corresponding communities or vendors, and provide the full stack trace. SQLSTATE: XX000
...
Caused by: java.lang.AssertionError: assertion failed: Conflicting partition column names detected: Partition column name list #0: col1
Partition column name list #1: col1, col2
For partitioned table directories, data files should only live in leaf directories.
And directories at the same level should have the same partition column name.
Please check the following directories for unexpected files or inconsistent partition column names: file:some/path/parquet-test/col1=a
file:some/path/parquet-test/col1=a/col2=b
at scala.Predef$.assert(Predef.scala:279)
at org.apache.spark.sql.execution.datasources.PartitioningUtils$.resolvePartitions(PartitioningUtils.scala:391)
...
```
Fix this by changing the internal error to a user-facing error, as sketched below.
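A rough sketch of that shape follows (hypothetical code: the real patch uses Spark's error-class framework inside `PartitioningUtils`, and `SparkException` here is only a stand-in):
```scala
import org.apache.spark.SparkException

object PartitionCheckSketch {
  // Hypothetical helper, not the PR's code: reject inconsistent partition
  // column name lists with a user-facing message instead of an internal assert.
  def checkPartitionColumnConsistency(columnLists: Seq[Seq[String]]): Unit = {
    if (columnLists.distinct.size > 1) {
      val listing = columnLists.zipWithIndex
        .map { case (cols, i) => s"Partition column name list #$i: ${cols.mkString(", ")}" }
        .mkString("\n")
      throw new SparkException(s"Conflicting partition column names detected:\n$listing")
    }
  }
}
```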
### Why are the changes needed?
Replace the internal error with a user-facing one for a valid sequence of Spark SQL operations.
### Does this PR introduce _any_ user-facing change?
Yes, it presents the user with a regular error instead of an internal error.
### How was this patch tested?
Added checks to `ParquetPartitionDiscoverySuite` which simulate the described scenario by manually breaking a parquet table in the filesystem.
### Was this patch authored or co-authored using generative AI tooling?
No.
Closes apache#47668 from nikolamand-db/SPARK-49163.
Authored-by: Nikola Mandic <[email protected]>
Signed-off-by: Wenchen Fan <[email protected]>
warrenzhu25 pushed a commit that referenced this pull request on Aug 9, 2025:
…in/load-spark-env.sh

### What changes were proposed in this pull request?
The last action in [bin/load-spark-env.sh](https://github.com/apache/spark/blob/d5da49d56d7dec5f8a96c5252384d865f7efd4d9/bin/load-spark-env.sh#L68) performs a test to determine whether it is running in a terminal or not, and whether `stdin` is reading from a pipe. A more portable test is needed.

### Why are the changes needed?
The current approach relies on `ps` with options that vary significantly between different Unix-like systems. Specifically, it prints an error message in both `cygwin` and `msys2` (and by extension, in all of the variations of `git-for-windows`). On `Linux` and `OSX/Darwin homebrew` it doesn't print an error message, but it fails to detect a terminal session (it always thinks STDIN is a pipe).

Here's what the problem looks like in a `cygwin64` session (with `set -x` just ahead of the section of interest):

If called directly:
```bash
$ bin/load-spark-env.sh
++ ps -o stat= -p 1947
ps: unknown option -- o
Try `ps --help' for more information.
+ [[ ! '' =~ \+ ]]
+ [[ -p /dev/stdin ]]
+ export 'SPARK_BEELINE_OPTS= -Djline.terminal=jline.UnsupportedTerminal'
+ SPARK_BEELINE_OPTS=' -Djline.terminal=jline.UnsupportedTerminal'
```
Interestingly, due to the 2-part test, it does the right thing w.r.t. the terminal test, the main problem being the error message.

If called downstream from a pipe:
```bash
$ echo "yo" | bin/load-spark-env.sh
++ ps -o stat= -p 1955
ps: unknown option -- o
Try `ps --help' for more information.
+ [[ ! '' =~ \+ ]]
+ [[ -p /dev/stdin ]]
```
Again, it correctly detects the pipe environment, but with an error message.

In WSL2 Ubuntu, the test doesn't correctly detect a non-pipe terminal session:
```bash
# /opt/spark$ bin/load-spark-env.sh
++ ps -o stat= -p 1423
+ [[ ! S+ =~ \+ ]]
# echo "yo!" | bin/load-spark-env.sh
++ ps -o stat= -p 1416
+ [[ ! S+ =~ \+ ]]
```
In `#134-Ubuntu SMP Fri Sep 27 20:20:17 UTC 2024`, the same failure occurs (it doesn't recognize terminal environments).

### Does this PR introduce _any_ user-facing change?
This is a proposed bug fix, and, other than fixing the bug, should be invisible to users.

### How was this patch tested?
The patch was verified to behave as intended in terminal sessions, both interactive and piped, in the following 5 environments:
```
- Linux quadd 5.15.0-124-generic #134-Ubuntu SMP Fri Sep 27 20:20:17 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
- Linux d5 5.15.153.1-microsoft-standard-WSL2 #1 SMP Fri Mar 29 23:14:13 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
- MINGW64_NT-10.0-22631 d5 3.5.4-0bc1222b.x86_64 2024-09-04 18:28 UTC x86_64 Msys
- CYGWIN_NT-10.0-22631 d5 3.5.3-1.x86_64 2024-04-03 17:25 UTC x86_64 Cygwin
- Darwin suemac.local 23.6.0 Darwin Kernel Version 23.6.0: Mon Jul 29 21:14:21 PDT 2024; root:xnu-10063.141.2~1/RELEASE_ARM64_T8103 arm64
```
The test was to manually run the following script, verifying the expected response to both pipe and terminal sessions:
```bash
#!/bin/bash
if [ -e /usr/bin/tty -a "`tty`" != "not a tty" -a ! -p /dev/stdin ]; then
  echo "not a pipe"
else
  echo "is a pipe"
fi
```
The output of the manual test in all 5 tested environments:
```
philwalkquadd:/opt/spark
$ isPipe
not a pipe
#
$ echo "yo" | isPipe
is a pipe
#
```

### Was this patch authored or co-authored using generative AI tooling?
No.

Closes apache#48937 from philwalk/portability-fix-for-load-spark-env.sh.

Authored-by: philwalk <[email protected]>
Signed-off-by: yangjie01 <[email protected]>
(cherry picked from commit 8d26008)
Signed-off-by: yangjie01 <[email protected]>
warrenzhu25 added a commit that referenced this pull request on Nov 25, 2025:
- Identified 5 bugs: 2 critical, 2 medium, 1 low priority
  - Bug #1: Resource profile index mismatch (line 408)
  - Bug #2: Tasks array indexing wrong (lines 420, 426)
  - Bug #3: Incorrect average calculation (line 1341)
  - Bug #4: Missing minimum threshold check (line 1316)
  - Bug #5: Unclear counter semantics (line 121)

Plan includes:
- Detailed root cause analysis with concrete examples
- Step-by-step implementation instructions
- Index mapping approach (Option B) for Bug #2
- 6-commit strategy with tests passing between each
- Comprehensive test coverage additions
### What changes were proposed in this pull request?
Add description for ExecutorAllocationManager metrics.

### Why are the changes needed?
It is hard to tell what some ExecutorAllocationManager metrics stand for from the metric name alone.
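For context, these metrics are Dropwizard gauges registered by the allocation manager. The sketch below is illustrative rather than Spark's actual code (only the gauge name is taken from the real metric set); it shows the registration pattern and why a bare name benefits from a written description:
```scala
import com.codahale.metrics.{Gauge, MetricRegistry}

object AllocationMetricsSketch {
  val registry = new MetricRegistry

  private def gauge(name: String)(value: => Int): Unit =
    registry.register(MetricRegistry.name("executors", name),
      new Gauge[Int] { override def getValue: Int = value })

  // A bare name like this is easy to misread without a description; the PR
  // documents what each such metric actually tracks.
  gauge("numberExecutorsToAdd")(0) // placeholder value for illustration
}
```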
### Does this PR introduce any user-facing change?
No.

### How was this patch tested?
No code change.