
Conversation

@HyukjinKwon

What changes were proposed in this pull request?

(Please fill in changes proposed in this fix)

How was this patch tested?

(Please explain how this patch was tested. E.g. unit tests, integration tests, manual tests)
(If this patch involves UI changes, please attach a screenshot; otherwise, remove this)

Please review http://spark.apache.org/contributing.html before opening a pull request.

@HyukjinKwon HyukjinKwon merged commit 93d094f into MaxGekk:from_csv Oct 15, 2018
@HyukjinKwon HyukjinKwon deleted the address-from_csv_1 branch October 16, 2018 12:40
MaxGekk pushed a commit that referenced this pull request Oct 31, 2018
…/`to_avro`

## What changes were proposed in this pull request?

Previously, in `from_avro`/`to_avro`, we overrode the methods `simpleString` and `sql` for the string output. However, the override only affects the alias naming:
```
Project [from_avro('col,
...
, (mode,PERMISSIVE)) AS from_avro(col, struct<col1:bigint,col2:double>, Map(mode -> PERMISSIVE))#11]
```
It only makes the alias name quite long: `from_avro(col, struct<col1:bigint,col2:double>, Map(mode -> PERMISSIVE))`.

We should follow `from_csv`/`from_json` here and override only the method `prettyName`, which gives a clean alias name:

```
... AS from_avro(col)#11
```
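
A quick way to see the new name from PySpark (a sketch: assumes an active `spark` session on Spark 3.x, where `pyspark.sql.avro.functions` is available, with the `spark-avro` package on the classpath):

```
from pyspark.sql.avro.functions import from_avro, to_avro

# Round-trip a long column through Avro and inspect the generated alias.
df = spark.range(1)
encoded = df.select(to_avro(df.id).alias("col"))
decoded = encoded.select(from_avro("col", '"long"'))  # '"long"' is a primitive Avro schema
print(decoded.columns)  # ['from_avro(col)'] -- the short prettyName-based alias
```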

## How was this patch tested?

Manual check

Closes apache#22890 from gengliangwang/revise_from_to_avro.

Authored-by: Gengliang Wang <[email protected]>
Signed-off-by: gatorsmile <[email protected]>
MaxGekk pushed a commit that referenced this pull request Feb 16, 2019
…from_avro`/`to_avro`

Backport apache#22890 to branch-2.4.
It is a bug fix for this issue:
https://issues.apache.org/jira/browse/SPARK-26063

## What changes were proposed in this pull request?

Previously, in `from_avro`/`to_avro`, we overrode the methods `simpleString` and `sql` for the string output. However, the override only affects the alias naming:
```
Project [from_avro('col,
...
, (mode,PERMISSIVE)) AS from_avro(col, struct<col1:bigint,col2:double>, Map(mode -> PERMISSIVE))#11]
```
It only makes the alias name quite long: `from_avro(col, struct<col1:bigint,col2:double>, Map(mode -> PERMISSIVE))`.

We should follow `from_csv`/`from_json` here and override only the method `prettyName`, which gives a clean alias name:

```
... AS from_avro(col)#11
```

## How was this patch tested?

Manual check

Closes apache#23047 from gengliangwang/backport_avro_pretty_name.

Authored-by: Gengliang Wang <[email protected]>
Signed-off-by: hyukjinkwon <[email protected]>
MaxGekk pushed a commit that referenced this pull request Dec 31, 2019
### Why are the changes needed?
`EnsureRequirements` adds a `ShuffleExchangeExec` (`RangePartitioning`) after `Sort` if a `RoundRobinPartitioning` is behind it. This causes two shuffles, and the number of partitions in the final stage is not the number specified by `RoundRobinPartitioning`.

**Example SQL**
```
SELECT /*+ REPARTITION(5) */ * FROM test ORDER BY a
```

**BEFORE**
```
== Physical Plan ==
*(1) Sort [a#0 ASC NULLS FIRST], true, 0
+- Exchange rangepartitioning(a#0 ASC NULLS FIRST, 200), true, [id=#11]
   +- Exchange RoundRobinPartitioning(5), false, [id=#9]
      +- Scan hive default.test [a#0, b#1], HiveTableRelation `default`.`test`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, [a#0, b#1]
```

**AFTER**
```
== Physical Plan ==
*(1) Sort [a#0 ASC NULLS FIRST], true, 0
+- Exchange rangepartitioning(a#0 ASC NULLS FIRST, 5), true, [id=#11]
   +- Scan hive default.test [a#0, b#1], HiveTableRelation `default`.`test`, org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe, [a#0, b#1]
```
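
A quick way to verify from PySpark (a sketch; assumes the Hive table `test` from the example above, with an integer column `a`):

```
df = spark.sql("SELECT /*+ REPARTITION(5) */ * FROM test ORDER BY a")
df.explain()  # after the fix: a single rangepartitioning(..., 5) exchange
print(df.rdd.getNumPartitions())  # 5 after the fix; 200 (spark.sql.shuffle.partitions) before
```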

### Does this PR introduce any user-facing change?
No

### How was this patch tested?
Ran the test suites and added a new test for this.

Closes apache#26946 from stczwd/RoundRobinPartitioning.

Lead-authored-by: lijunqing <[email protected]>
Co-authored-by: stczwd <[email protected]>
Signed-off-by: Wenchen Fan <[email protected]>
MaxGekk pushed a commit that referenced this pull request Feb 26, 2024
…n properly

### What changes were proposed in this pull request?
Make `ResolveRelations` handle plan id properly

### Why are the changes needed?
Bug fix for Spark Connect; it does not affect classic Spark SQL.

before this PR:
```
from pyspark.sql import functions as sf

spark.range(10).withColumn("value_1", sf.lit(1)).write.saveAsTable("test_table_1")
spark.range(10).withColumnRenamed("id", "index").withColumn("value_2", sf.lit(2)).write.saveAsTable("test_table_2")

df1 = spark.read.table("test_table_1")
df2 = spark.read.table("test_table_2")
df3 = spark.read.table("test_table_1")

join1 = df1.join(df2, on=df1.id==df2.index).select(df2.index, df2.value_2)
join2 = df3.join(join1, how="left", on=join1.index==df3.id)

join2.schema
```

fails with
```
AnalysisException: [CANNOT_RESOLVE_DATAFRAME_COLUMN] Cannot resolve dataframe column "id". It's probably because of illegal references like `df1.select(df2.col("a"))`. SQLSTATE: 42704
```

That is because the existing plan caching in `ResolveRelations` does not work with Spark Connect:

```
=== Applying Rule org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations ===
 '[#12]Join LeftOuter, '`==`('index, 'id)                     '[#12]Join LeftOuter, '`==`('index, 'id)
!:- '[#9]UnresolvedRelation [test_table_1], [], false         :- '[#9]SubqueryAlias spark_catalog.default.test_table_1
!+- '[#11]Project ['index, 'value_2]                          :  +- 'UnresolvedCatalogRelation `spark_catalog`.`default`.`test_table_1`, [], false
!   +- '[#10]Join Inner, '`==`('id, 'index)                   +- '[#11]Project ['index, 'value_2]
!      :- '[#7]UnresolvedRelation [test_table_1], [], false      +- '[#10]Join Inner, '`==`('id, 'index)
!      +- '[#8]UnresolvedRelation [test_table_2], [], false         :- '[#9]SubqueryAlias spark_catalog.default.test_table_1
!                                                                   :  +- 'UnresolvedCatalogRelation `spark_catalog`.`default`.`test_table_1`, [], false
!                                                                   +- '[#8]SubqueryAlias spark_catalog.default.test_table_2
!                                                                      +- 'UnresolvedCatalogRelation `spark_catalog`.`default`.`test_table_2`, [], false

Can not resolve 'id with plan 7
```

`[#7]UnresolvedRelation [test_table_1], [], false` was wrongly resolved to the cached one
```
:- '[#9]SubqueryAlias spark_catalog.default.test_table_1
   +- 'UnresolvedCatalogRelation `spark_catalog`.`default`.`test_table_1`, [], false
```
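
With the fix, the reproduction above resolves; a minimal check (same `join2` as in the snippet above, run against a Spark Connect session):

```
# No longer raises CANNOT_RESOLVE_DATAFRAME_COLUMN.
print(join2.schema)
```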

### Does this PR introduce _any_ user-facing change?
Yes, bug fix.

### How was this patch tested?
Added a unit test.

### Was this patch authored or co-authored using generative AI tooling?
ci

Closes apache#45214 from zhengruifeng/connect_fix_read_join.

Authored-by: Ruifeng Zheng <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
MaxGekk pushed a commit that referenced this pull request Sep 13, 2024
…r `postgreSQL/float4.sql` and `postgreSQL/int8.sql`

### What changes were proposed in this pull request?
This PR regenerates the Java 21 golden files for `postgreSQL/float4.sql` and `postgreSQL/int8.sql` to fix the Java 21 daily test.
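
For reference, a sketch of how such golden files are regenerated (assuming the standard Spark dev setup, with Java 21 active):

```
$ SPARK_GENERATE_GOLDEN_FILES=1 build/sbt "sql/testOnly org.apache.spark.sql.SQLQueryTestSuite"
```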

### Why are the changes needed?
Fix Java 21 daily test:
- https://github.com/apache/spark/actions/runs/10823897095/job/30030200710

```
[info] - postgreSQL/float4.sql *** FAILED *** (1 second, 100 milliseconds)
[info]   postgreSQL/float4.sql
[info]   Expected "...arameters" : {
[info]       "[ansiConfig" : "\"spark.sql.ansi.enabled\"",
[info]       "]expression" : "'N A ...", but got "...arameters" : {
[info]       "[]expression" : "'N A ..." Result did not match for query #11
[info]   SELECT float('N A N') (SQLQueryTestSuite.scala:663)
...
[info] - postgreSQL/int8.sql *** FAILED *** (2 seconds, 474 milliseconds)
[info]   postgreSQL/int8.sql
[info]   Expected "...arameters" : {
[info]       "[ansiConfig" : "\"spark.sql.ansi.enabled\"",
[info]       "]sourceType" : "\"BIG...", but got "...arameters" : {
[info]       "[]sourceType" : "\"BIG..." Result did not match for query apache#66
[info]   SELECT CAST(q1 AS int) FROM int8_tbl WHERE q2 <> 456 (SQLQueryTestSuite.scala:663)
...
[info] *** 2 TESTS FAILED ***
[error] Failed: Total 3559, Failed 2, Errors 0, Passed 3557, Ignored 4
[error] Failed tests:
[error] 	org.apache.spark.sql.SQLQueryTestSuite
[error] (sql / Test / test) sbt.TestsFailedException: Tests unsuccessful
```

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
- Pass GitHub Actions
- Manually checked `build/sbt "sql/testOnly org.apache.spark.sql.SQLQueryTestSuite"` with Java 21; all tests passed

### Was this patch authored or co-authored using generative AI tooling?
No

Closes apache#48089 from LuciferYang/SPARK-49578-FOLLOWUP.

Authored-by: yangjie01 <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
MaxGekk pushed a commit that referenced this pull request Sep 19, 2025
…e` building

### What changes were proposed in this pull request?

This PR aims to add `libwebp-dev` to recover `spark-rm/Dockerfile` building.

### Why are the changes needed?

`Apache Spark` release Docker image compilation has been broken for the last 7 days due to the SparkR package compilation.
- https://github.com/apache/spark/actions/workflows/release.yml
    - https://github.com/apache/spark/actions/runs/17425825244

```
#11 559.4 No package 'libwebpmux' found
...
#11 559.4 -------------------------- [ERROR MESSAGE] ---------------------------
#11 559.4 <stdin>:1:10: fatal error: ft2build.h: No such file or directory
#11 559.4 compilation terminated.
#11 559.4 --------------------------------------------------------------------
#11 559.4 ERROR: configuration failed for package 'ragg'
```
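
A sketch of the kind of change (the exact package list and its place in `spark-rm/Dockerfile` may differ):

```
# libwebp-dev provides the libwebpmux/libwebpdemux development files that the
# R 'ragg' package reports as missing in the error above.
RUN apt-get update && apt-get install -y libwebp-dev
```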

### Does this PR introduce _any_ user-facing change?

No, this is a fix for Apache Spark release tool.

### How was this patch tested?

Manual build:

```
$ cd dev/create-release/spark-rm
$ docker build .
```

**BEFORE**

```
...
Dockerfile:83
--------------------
  82 |     # See more in SPARK-39959, roxygen2 < 7.2.1
  83 | >>> RUN Rscript -e "install.packages(c('devtools', 'knitr', 'markdown',  \
  84 | >>>     'rmarkdown', 'testthat', 'devtools', 'e1071', 'survival', 'arrow',  \
  85 | >>>     'ggplot2', 'mvtnorm', 'statmod', 'xml2'), repos='https://cloud.r-project.org/')" && \
  86 | >>>     Rscript -e "devtools::install_version('roxygen2', version='7.2.0', repos='https://cloud.r-project.org')" && \
  87 | >>>     Rscript -e "devtools::install_version('lintr', version='2.0.1', repos='https://cloud.r-project.org')" && \
  88 | >>>     Rscript -e "devtools::install_version('pkgdown', version='2.0.1', repos='https://cloud.r-project.org')" && \
  89 | >>>     Rscript -e "devtools::install_version('preferably', version='0.4', repos='https://cloud.r-project.org')"
  90 |
--------------------
ERROR: failed to build: failed to solve:
```

**AFTER**
```
...
 => [ 6/22] RUN add-apt-repository 'deb https://cloud.r-project.org/bin/linux/ubuntu jammy-cran40/'                                                             3.8s
 => [ 7/22] RUN Rscript -e "install.packages(c('devtools', 'knitr', 'markdown',      'rmarkdown', 'testthat', 'devtools', 'e1071', 'survival', 'arrow',       892.2s
 => [ 8/22] RUN add-apt-repository ppa:pypy/ppa                                                                                                                15.3s
...
```

After merging this PR, we can validate via the daily release dry-run CI.

- https://github.com/apache/spark/actions/workflows/release.yml

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes apache#52290 from dongjoon-hyun/SPARK-53539.

Authored-by: Dongjoon Hyun <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>