
Conversation


@pull pull bot commented Oct 26, 2022

See Commits and Changes for more details.


Created by pull[bot]

Can you help keep this open source service alive? 💖 Please sponsor : )

gengliangwang and others added 4 commits October 25, 2022 22:39
…if data type is changed

### What changes were proposed in this pull request?

Avoid reordering the operands of `Add` during canonicalization when the expression is of decimal type and reordering would change the result data type.
Expressions are canonicalized for comparisons and explanations. For a non-decimal `Add` expression, the operands can be sorted by hash code and the result stays the same.
However, an `Add` expression of decimal type behaves differently: given `decimal(p1, s1)` and `decimal(p2, s2)`, the integral part of the result is `max(p1-s1, p2-s2) + 1` and the scale is `max(s1, s2)`, so the result data type is `decimal(max(p1-s1, p2-s2) + 1 + max(s1, s2), max(s1, s2))`.
Thus the order matters:

For `(decimal(12,5) + decimal(12,6)) + decimal(3, 2)`, the first add `decimal(12,5) + decimal(12,6)` results in `decimal(14, 6)`, and then `decimal(14, 6) + decimal(3, 2)` results in `decimal(15, 6)`.
For `(decimal(12, 6) + decimal(3,2)) + decimal(12, 5)`, the first add `decimal(12, 6) + decimal(3,2)` results in `decimal(13, 6)`, and then `decimal(13, 6) + decimal(12, 5)` results in `decimal(14, 6)`.
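
To make the asymmetry concrete, here is a minimal Scala sketch (an illustration only, not Spark's actual `DecimalType` arithmetic code) that applies the rule above to both orderings:

```scala
// Hedged sketch: derive the result type of a decimal Add per the rule above:
// precision = max integral digits + 1 + max scale, scale = max scale.
case class Dec(precision: Int, scale: Int)

def addResultType(a: Dec, b: Dec): Dec = {
  val scale = math.max(a.scale, b.scale)
  val intDigits = math.max(a.precision - a.scale, b.precision - b.scale) + 1
  Dec(intDigits + scale, scale)
}

// (decimal(12,5) + decimal(12,6)) + decimal(3,2)
println(addResultType(addResultType(Dec(12, 5), Dec(12, 6)), Dec(3, 2)))  // Dec(15,6)
// (decimal(12,6) + decimal(3,2)) + decimal(12,5)
println(addResultType(addResultType(Dec(12, 6), Dec(3, 2)), Dec(12, 5)))  // Dec(14,6)
```

Because the two orderings produce different result types, reordering a decimal `Add` during canonicalization can change the expression's data type.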

In the following query:
```
create table foo(a decimal(12, 5), b decimal(12, 6)) using orc
select sum(coalesce(a+b+ 1.75, a)) from foo
```
At first, `coalesce(a+b+ 1.75, a)` is resolved as `coalesce(a+b+ 1.75, cast(a as decimal(15, 6)))`. In the canonicalized version, the expression becomes `coalesce(1.75+b+a, cast(a as decimal(15, 6)))`. As explained above, `1.75+b+a` is of `decimal(14, 6)`, which is different from `cast(a as decimal(15, 6))`. Thus the following error occurs:
```
java.lang.IllegalArgumentException: requirement failed: All input types must be the same except nullable, containsNull, valueContainsNull flags. The input types found are
	DecimalType(14,6)
	DecimalType(15,6)
	at scala.Predef$.require(Predef.scala:281)
	at org.apache.spark.sql.catalyst.expressions.ComplexTypeMergingExpression.dataTypeCheck(Expression.scala:1149)
	at org.apache.spark.sql.catalyst.expressions.ComplexTypeMergingExpression.dataTypeCheck$(Expression.scala:1143)
```
This PR fixes the bug.
### Why are the changes needed?

Bug fix
### Does this PR introduce _any_ user-facing change?

No
### How was this patch tested?

A new test case

Closes #38379 from gengliangwang/fixDecimalAdd.

Lead-authored-by: Gengliang Wang <[email protected]>
Co-authored-by: Gengliang Wang <[email protected]>
Signed-off-by: Gengliang Wang <[email protected]>
…P_RATE`

### What changes were proposed in this pull request?
In this PR, I propose to fix a typo by renaming the error class `INCORRECT_RUMP_UP_RATE` to `INCORRECT_RAMP_UP_RATE` and renaming the method `incorrectRumpUpRate()` to `incorrectRampUpRate()`.

### Why are the changes needed?
To avoid confusing users, and to fix the typo.

### Does this PR introduce _any_ user-facing change?
No. The error class hasn't been released yet.

### How was this patch tested?
By running the affected tests:
```
$ build/sbt "test:testOnly *RateStreamProviderSuite"
$ build/sbt "core/testOnly *SparkThrowableSuite"
```

Closes #38391 from MaxGekk/rename-INCORRECT_RUMP_UP_RATE.

Authored-by: Max Gekk <[email protected]>
Signed-off-by: Max Gekk <[email protected]>
…or fatal error

### What changes were proposed in this pull request?
Previously, when the Executor plugin failed to initialize due to a fatal error, the Executor showed as active in the Spark UI but did not accept any job. As expected, the Executor should exit when initialization fails with a fatal error.
The root cause (thanks to mridulm and Ngone51 for the help): the thread pool uses the inappropriate API `submit` instead of `execute`. As a result, `SparkUncaughtExceptionHandler` can't catch the fatal error.
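
For illustration, here is a minimal, self-contained Scala sketch (not Spark's executor code; the names are hypothetical) of why a fatal error thrown from a task passed to `submit` never reaches the default uncaught-exception handler, while the same error passed to `execute` does:

```scala
import java.util.concurrent.Executors

object SubmitVsExecuteDemo {
  def main(args: Array[String]): Unit = {
    // Stand-in for SparkUncaughtExceptionHandler: it only runs when an
    // exception escapes a thread's run() method.
    Thread.setDefaultUncaughtExceptionHandler((t, e) =>
      println(s"uncaught on ${t.getName}: $e"))

    val pool = Executors.newFixedThreadPool(1)

    // submit: the error is captured in the returned Future and silently
    // swallowed unless someone calls get(); the handler above never fires.
    pool.submit(new Runnable {
      override def run(): Unit = throw new OutOfMemoryError("plugin init failed")
    })

    // execute: the error escapes run(), the worker thread dies, and the
    // default uncaught-exception handler is invoked.
    pool.execute(() => throw new OutOfMemoryError("plugin init failed"))

    Thread.sleep(500)
    pool.shutdown()
  }
}
```

Switching from `submit` to `execute` therefore lets `SparkUncaughtExceptionHandler` observe the fatal error and exit the Executor.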

### Why are the changes needed?
As expected, the Executor should exit when initialization fails with a fatal error. Previously the Executor stayed active but couldn't do anything, which was confusing.

### Does this PR introduce any user-facing change?
No
### How was this patch tested?
Newly added test.

Closes #37779 from yabola/executor.

Authored-by: chenliang.lu <[email protected]>
Signed-off-by: Mridul <mridul<at>gmail.com>
### What changes were proposed in this pull request?

This is the final PR that proposes to migrate the execution errors onto temporary error classes with the prefix `_LEGACY_ERROR_TEMP`.

The `_LEGACY_ERROR_TEMP_` prefix indicates dev-facing error messages that won't be exposed to end users.

### Why are the changes needed?

To speed up the error class migration.

Migrating onto temporary error classes allows us to analyze the errors, so we can detect the most popular error classes.

### Does this PR introduce _any_ user-facing change?

No

### How was this patch tested?

```
$ build/sbt "sql/testOnly org.apache.spark.sql.SQLQueryTestSuite"
$ build/sbt "test:testOnly *SQLQuerySuite"
$ build/sbt -Phive-thriftserver "hive-thriftserver/testOnly org.apache.spark.sql.hive.thriftserver.ThriftServerQueryTestSuite"
```

Closes #38177 from itholic/SPARK-40540-2276-2300.

Authored-by: itholic <[email protected]>
Signed-off-by: Max Gekk <[email protected]>
@pull pull bot merged commit bef9797 into huangxiaopingRD:master Oct 26, 2022
pull bot pushed a commit that referenced this pull request Feb 24, 2024
…n properly

### What changes were proposed in this pull request?
Make `ResolveRelations` handle plan id properly

### Why are the changes needed?
Bug fix for Spark Connect; it won't affect classic Spark SQL.

before this PR:
```
from pyspark.sql import functions as sf

spark.range(10).withColumn("value_1", sf.lit(1)).write.saveAsTable("test_table_1")
spark.range(10).withColumnRenamed("id", "index").withColumn("value_2", sf.lit(2)).write.saveAsTable("test_table_2")

df1 = spark.read.table("test_table_1")
df2 = spark.read.table("test_table_2")
df3 = spark.read.table("test_table_1")

join1 = df1.join(df2, on=df1.id==df2.index).select(df2.index, df2.value_2)
join2 = df3.join(join1, how="left", on=join1.index==df3.id)

join2.schema
```

fails with
```
AnalysisException: [CANNOT_RESOLVE_DATAFRAME_COLUMN] Cannot resolve dataframe column "id". It's probably because of illegal references like `df1.select(df2.col("a"))`. SQLSTATE: 42704
```

That is because the existing plan caching in `ResolveRelations` doesn't work with Spark Connect:

```
=== Applying Rule org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations ===
 '[#12]Join LeftOuter, '`==`('index, 'id)                     '[#12]Join LeftOuter, '`==`('index, 'id)
!:- '[#9]UnresolvedRelation [test_table_1], [], false         :- '[#9]SubqueryAlias spark_catalog.default.test_table_1
!+- '[#11]Project ['index, 'value_2]                          :  +- 'UnresolvedCatalogRelation `spark_catalog`.`default`.`test_table_1`, [], false
!   +- '[#10]Join Inner, '`==`('id, 'index)                   +- '[#11]Project ['index, 'value_2]
!      :- '[#7]UnresolvedRelation [test_table_1], [], false      +- '[#10]Join Inner, '`==`('id, 'index)
!      +- '[#8]UnresolvedRelation [test_table_2], [], false         :- '[#9]SubqueryAlias spark_catalog.default.test_table_1
!                                                                   :  +- 'UnresolvedCatalogRelation `spark_catalog`.`default`.`test_table_1`, [], false
!                                                                   +- '[#8]SubqueryAlias spark_catalog.default.test_table_2
!                                                                      +- 'UnresolvedCatalogRelation `spark_catalog`.`default`.`test_table_2`, [], false

Can not resolve 'id with plan 7
```

`[#7]UnresolvedRelation [test_table_1], [], false` was wrongly resolved to the cached one
```
:- '[#9]SubqueryAlias spark_catalog.default.test_table_1
   +- 'UnresolvedCatalogRelation `spark_catalog`.`default`.`test_table_1`, [], false
```

### Does this PR introduce _any_ user-facing change?
yes, bug fix

### How was this patch tested?
added ut

### Was this patch authored or co-authored using generative AI tooling?
ci

Closes apache#45214 from zhengruifeng/connect_fix_read_join.

Authored-by: Ruifeng Zheng <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
pull bot pushed a commit that referenced this pull request Sep 13, 2024
…r `postgreSQL/float4.sql` and `postgreSQL/int8.sql`

### What changes were proposed in this pull request?
This PR regenerates the Java 21 golden files for `postgreSQL/float4.sql` and `postgreSQL/int8.sql` to fix the Java 21 daily test.

### Why are the changes needed?
Fix Java 21 daily test:
- https://github.com/apache/spark/actions/runs/10823897095/job/30030200710

```
[info] - postgreSQL/float4.sql *** FAILED *** (1 second, 100 milliseconds)
[info]   postgreSQL/float4.sql
[info]   Expected "...arameters" : {
[info]       "[ansiConfig" : "\"spark.sql.ansi.enabled\"",
[info]       "]expression" : "'N A ...", but got "...arameters" : {
[info]       "[]expression" : "'N A ..." Result did not match for query #11
[info]   SELECT float('N A N') (SQLQueryTestSuite.scala:663)
...
[info] - postgreSQL/int8.sql *** FAILED *** (2 seconds, 474 milliseconds)
[info]   postgreSQL/int8.sql
[info]   Expected "...arameters" : {
[info]       "[ansiConfig" : "\"spark.sql.ansi.enabled\"",
[info]       "]sourceType" : "\"BIG...", but got "...arameters" : {
[info]       "[]sourceType" : "\"BIG..." Result did not match for query #66
[info]   SELECT CAST(q1 AS int) FROM int8_tbl WHERE q2 <> 456 (SQLQueryTestSuite.scala:663)
...
[info] *** 2 TESTS FAILED ***
[error] Failed: Total 3559, Failed 2, Errors 0, Passed 3557, Ignored 4
[error] Failed tests:
[error] 	org.apache.spark.sql.SQLQueryTestSuite
[error] (sql / Test / test) sbt.TestsFailedException: Tests unsuccessful
```

### Does this PR introduce _any_ user-facing change?
No

### How was this patch tested?
- Pass GitHub Actions
- Manually checked: `build/sbt "sql/testOnly org.apache.spark.sql.SQLQueryTestSuite"` with Java 21; all tests passed

### Was this patch authored or co-authored using generative AI tooling?
No

Closes apache#48089 from LuciferYang/SPARK-49578-FOLLOWUP.

Authored-by: yangjie01 <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
huangxiaopingRD pushed a commit that referenced this pull request Sep 2, 2025
…plan properly

### What changes were proposed in this pull request?
Make `ResolveRelations` handle plan id properly

cherry-pick bugfix apache#45214 to 3.5

### Why are the changes needed?
Bug fix for Spark Connect; it won't affect classic Spark SQL.

before this PR:
```
from pyspark.sql import functions as sf

spark.range(10).withColumn("value_1", sf.lit(1)).write.saveAsTable("test_table_1")
spark.range(10).withColumnRenamed("id", "index").withColumn("value_2", sf.lit(2)).write.saveAsTable("test_table_2")

df1 = spark.read.table("test_table_1")
df2 = spark.read.table("test_table_2")
df3 = spark.read.table("test_table_1")

join1 = df1.join(df2, on=df1.id==df2.index).select(df2.index, df2.value_2)
join2 = df3.join(join1, how="left", on=join1.index==df3.id)

join2.schema
```

fails with
```
AnalysisException: [CANNOT_RESOLVE_DATAFRAME_COLUMN] Cannot resolve dataframe column "id". It's probably because of illegal references like `df1.select(df2.col("a"))`. SQLSTATE: 42704
```

That is because the existing plan caching in `ResolveRelations` doesn't work with Spark Connect:

```
=== Applying Rule org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations ===
 '[#12]Join LeftOuter, '`==`('index, 'id)                     '[#12]Join LeftOuter, '`==`('index, 'id)
!:- '[#9]UnresolvedRelation [test_table_1], [], false         :- '[#9]SubqueryAlias spark_catalog.default.test_table_1
!+- '[#11]Project ['index, 'value_2]                          :  +- 'UnresolvedCatalogRelation `spark_catalog`.`default`.`test_table_1`, [], false
!   +- '[#10]Join Inner, '`==`('id, 'index)                   +- '[#11]Project ['index, 'value_2]
!      :- '[#7]UnresolvedRelation [test_table_1], [], false      +- '[#10]Join Inner, '`==`('id, 'index)
!      +- '[#8]UnresolvedRelation [test_table_2], [], false         :- '[#9]SubqueryAlias spark_catalog.default.test_table_1
!                                                                   :  +- 'UnresolvedCatalogRelation `spark_catalog`.`default`.`test_table_1`, [], false
!                                                                   +- '[#8]SubqueryAlias spark_catalog.default.test_table_2
!                                                                      +- 'UnresolvedCatalogRelation `spark_catalog`.`default`.`test_table_2`, [], false

Can not resolve 'id with plan 7
```

`[#7]UnresolvedRelation [test_table_1], [], false` was wrongly resolved to the cached one
```
:- '[#9]SubqueryAlias spark_catalog.default.test_table_1
   +- 'UnresolvedCatalogRelation `spark_catalog`.`default`.`test_table_1`, [], false
```

### Does this PR introduce _any_ user-facing change?
yes, bug fix

### How was this patch tested?
added ut

### Was this patch authored or co-authored using generative AI tooling?
ci

Closes apache#46291 from zhengruifeng/connect_fix_read_join_35.

Authored-by: Ruifeng Zheng <[email protected]>
Signed-off-by: Ruifeng Zheng <[email protected]>
huangxiaopingRD pushed a commit that referenced this pull request Sep 2, 2025
…plan properly

### What changes were proposed in this pull request?
Make `ResolveRelations` handle plan id properly

cherry-pick bugfix apache#45214 to 3.4

### Why are the changes needed?
Bug fix for Spark Connect; it won't affect classic Spark SQL.

before this PR:
```
from pyspark.sql import functions as sf

spark.range(10).withColumn("value_1", sf.lit(1)).write.saveAsTable("test_table_1")
spark.range(10).withColumnRenamed("id", "index").withColumn("value_2", sf.lit(2)).write.saveAsTable("test_table_2")

df1 = spark.read.table("test_table_1")
df2 = spark.read.table("test_table_2")
df3 = spark.read.table("test_table_1")

join1 = df1.join(df2, on=df1.id==df2.index).select(df2.index, df2.value_2)
join2 = df3.join(join1, how="left", on=join1.index==df3.id)

join2.schema
```

fails with
```
AnalysisException: [CANNOT_RESOLVE_DATAFRAME_COLUMN] Cannot resolve dataframe column "id". It's probably because of illegal references like `df1.select(df2.col("a"))`. SQLSTATE: 42704
```

That is because the existing plan caching in `ResolveRelations` doesn't work with Spark Connect:

```
=== Applying Rule org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations ===
 '[#12]Join LeftOuter, '`==`('index, 'id)                     '[#12]Join LeftOuter, '`==`('index, 'id)
!:- '[#9]UnresolvedRelation [test_table_1], [], false         :- '[#9]SubqueryAlias spark_catalog.default.test_table_1
!+- '[#11]Project ['index, 'value_2]                          :  +- 'UnresolvedCatalogRelation `spark_catalog`.`default`.`test_table_1`, [], false
!   +- '[#10]Join Inner, '`==`('id, 'index)                   +- '[#11]Project ['index, 'value_2]
!      :- '[#7]UnresolvedRelation [test_table_1], [], false      +- '[#10]Join Inner, '`==`('id, 'index)
!      +- '[#8]UnresolvedRelation [test_table_2], [], false         :- '[#9]SubqueryAlias spark_catalog.default.test_table_1
!                                                                   :  +- 'UnresolvedCatalogRelation `spark_catalog`.`default`.`test_table_1`, [], false
!                                                                   +- '[#8]SubqueryAlias spark_catalog.default.test_table_2
!                                                                      +- 'UnresolvedCatalogRelation `spark_catalog`.`default`.`test_table_2`, [], false

Can not resolve 'id with plan 7
```

`[#7]UnresolvedRelation [test_table_1], [], false` was wrongly resolved to the cached one
```
:- '[#9]SubqueryAlias spark_catalog.default.test_table_1
   +- 'UnresolvedCatalogRelation `spark_catalog`.`default`.`test_table_1`, [], false
```

### Does this PR introduce _any_ user-facing change?
yes, bug fix

### How was this patch tested?
added ut

### Was this patch authored or co-authored using generative AI tooling?
ci

Closes apache#46290 from zhengruifeng/connect_fix_read_join_34.

Authored-by: Ruifeng Zheng <[email protected]>
Signed-off-by: Ruifeng Zheng <[email protected]>
pull bot pushed a commit that referenced this pull request Sep 9, 2025
…e` building

### What changes were proposed in this pull request?

This PR aims to add `libwebp-dev` to recover `spark-rm/Dockerfile` building.

### Why are the changes needed?

The `Apache Spark` release Docker image build has been broken for the last 7 days due to the SparkR package compilation.
- https://github.com/apache/spark/actions/workflows/release.yml
    - https://github.com/apache/spark/actions/runs/17425825244

```
#11 559.4 No package 'libwebpmux' found
...
#11 559.4 -------------------------- [ERROR MESSAGE] ---------------------------
#11 559.4 <stdin>:1:10: fatal error: ft2build.h: No such file or directory
#11 559.4 compilation terminated.
#11 559.4 --------------------------------------------------------------------
#11 559.4 ERROR: configuration failed for package 'ragg'
```

### Does this PR introduce _any_ user-facing change?

No, this is a fix for Apache Spark release tool.

### How was this patch tested?

Manually build.

```
$ cd dev/create-release/spark-rm
$ docker build .
```

**BEFORE**

```
...
Dockerfile:83
--------------------
  82 |     # See more in SPARK-39959, roxygen2 < 7.2.1
  83 | >>> RUN Rscript -e "install.packages(c('devtools', 'knitr', 'markdown',  \
  84 | >>>     'rmarkdown', 'testthat', 'devtools', 'e1071', 'survival', 'arrow',  \
  85 | >>>     'ggplot2', 'mvtnorm', 'statmod', 'xml2'), repos='https://cloud.r-project.org/')" && \
  86 | >>>     Rscript -e "devtools::install_version('roxygen2', version='7.2.0', repos='https://cloud.r-project.org')" && \
  87 | >>>     Rscript -e "devtools::install_version('lintr', version='2.0.1', repos='https://cloud.r-project.org')" && \
  88 | >>>     Rscript -e "devtools::install_version('pkgdown', version='2.0.1', repos='https://cloud.r-project.org')" && \
  89 | >>>     Rscript -e "devtools::install_version('preferably', version='0.4', repos='https://cloud.r-project.org')"
  90 |
--------------------
ERROR: failed to build: failed to solve:
```

**AFTER**
```
...
 => [ 6/22] RUN add-apt-repository 'deb https://cloud.r-project.org/bin/linux/ubuntu jammy-cran40/'                                                             3.8s
 => [ 7/22] RUN Rscript -e "install.packages(c('devtools', 'knitr', 'markdown',      'rmarkdown', 'testthat', 'devtools', 'e1071', 'survival', 'arrow',       892.2s
 => [ 8/22] RUN add-apt-repository ppa:pypy/ppa                                                                                                                15.3s
...
```

After merging this PR, we can validate via the daily release dry-run CI.

- https://github.com/apache/spark/actions/workflows/release.yml

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes apache#52290 from dongjoon-hyun/SPARK-53539.

Authored-by: Dongjoon Hyun <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
pull bot pushed a commit that referenced this pull request Nov 16, 2025
…rsion at the end

### What changes were proposed in this pull request?

This PR aims to fix the `spark-rm` Dockerfile to install the `pkgdown` version at the end.

### Why are the changes needed?

Although `pkgdown` is supposed to be `2.0.1`, it is changed by the next package installation, as shown below. We should install `pkgdown` at the end to make sure the pinned version is kept.
https://github.com/apache/spark/blob/0311f44e33e5cf8ba60ccc330de3df4f688f5847/dev/create-release/spark-rm/Dockerfile#L89

- https://github.com/apache/spark/actions/workflows/release.yml
  - https://github.com/apache/spark/actions/runs/19386198324/job/55473421715

```
#11 1007.3 Downloading package from url: https://cloud.r-project.org/src/contrib/Archive/preferably/preferably_0.4.tar.gz
#11 1008.9 pkgdown (2.0.1 -> 2.2.0) [CRAN]
#11 1008.9 Installing 1 packages: pkgdown
#11 1008.9 Installing package into '/usr/local/lib/R/site-library'
#11 1008.9 (as 'lib' is unspecified)
#11 1009.4 trying URL 'https://cloud.r-project.org/src/contrib/pkgdown_2.2.0.tar.gz'
#11 1009.7 Content type 'application/x-gzip' length 1280630 bytes (1.2 MB)
#11 1009.7 ==================================================
#11 1009.7 downloaded 1.2 MB
#11 1009.7
#11 1010.2 * installing *source* package 'pkgdown' ...
#11 1010.2 ** package 'pkgdown' successfully unpacked and MD5 sums checked
#11 1010.2 ** using staged installation
#11 1010.3 ** R
#11 1010.3 ** inst
#11 1010.3 ** byte-compile and prepare package for lazy loading
#11 1013.1 ** help
#11 1013.2 *** installing help indices
#11 1013.2 *** copying figures
#11 1013.2 ** building package indices
#11 1013.5 ** installing vignettes
#11 1013.5 ** testing if installed package can be loaded from temporary location
#11 1013.8 ** testing if installed package can be loaded from final location
#11 1014.1 ** testing if installed package keeps a record of temporary installation path
#11 1014.1 * DONE (pkgdown)
```

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Manual review.

```
$ dev/create-release/do-release-docker.sh -d /tmp/spark-4.1.0 -n -s docs

$ docker run -it --rm --entrypoint /bin/bash spark-rm
spark-rm923a388425fa:/opt/spark-rm/output$ Rscript -e 'installed.packages()' | grep pkgdown | head -n1
pkgdown      "pkgdown"      "/usr/local/lib/R/site-library" "2.0.1"
```

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes apache#53083 from dongjoon-hyun/SPARK-54371.

Authored-by: Dongjoon Hyun <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
huangxiaopingRD pushed a commit that referenced this pull request Nov 25, 2025
…e` building

### What changes were proposed in this pull request?

This PR aims to add `libwebp-dev` to recover `spark-rm/Dockerfile` building.

### Why are the changes needed?

The `Apache Spark` release Docker image build has been broken for the last 7 days due to the SparkR package compilation.
- https://github.com/apache/spark/actions/workflows/release.yml
    - https://github.com/apache/spark/actions/runs/17425825244

```
#11 559.4 No package 'libwebpmux' found
...
#11 559.4 -------------------------- [ERROR MESSAGE] ---------------------------
#11 559.4 <stdin>:1:10: fatal error: ft2build.h: No such file or directory
#11 559.4 compilation terminated.
#11 559.4 --------------------------------------------------------------------
#11 559.4 ERROR: configuration failed for package 'ragg'
```

### Does this PR introduce _any_ user-facing change?

No, this is a fix for Apache Spark release tool.

### How was this patch tested?

Manually build.

```
$ cd dev/create-release/spark-rm
$ docker build .
```

**BEFORE**

```
...
Dockerfile:83
--------------------
  82 |     # See more in SPARK-39959, roxygen2 < 7.2.1
  83 | >>> RUN Rscript -e "install.packages(c('devtools', 'knitr', 'markdown',  \
  84 | >>>     'rmarkdown', 'testthat', 'devtools', 'e1071', 'survival', 'arrow',  \
  85 | >>>     'ggplot2', 'mvtnorm', 'statmod', 'xml2'), repos='https://cloud.r-project.org/')" && \
  86 | >>>     Rscript -e "devtools::install_version('roxygen2', version='7.2.0', repos='https://cloud.r-project.org')" && \
  87 | >>>     Rscript -e "devtools::install_version('lintr', version='2.0.1', repos='https://cloud.r-project.org')" && \
  88 | >>>     Rscript -e "devtools::install_version('pkgdown', version='2.0.1', repos='https://cloud.r-project.org')" && \
  89 | >>>     Rscript -e "devtools::install_version('preferably', version='0.4', repos='https://cloud.r-project.org')"
  90 |
--------------------
ERROR: failed to build: failed to solve:
```

**AFTER**
```
...
 => [ 6/22] RUN add-apt-repository 'deb https://cloud.r-project.org/bin/linux/ubuntu jammy-cran40/'                                                             3.8s
 => [ 7/22] RUN Rscript -e "install.packages(c('devtools', 'knitr', 'markdown',      'rmarkdown', 'testthat', 'devtools', 'e1071', 'survival', 'arrow',       892.2s
 => [ 8/22] RUN add-apt-repository ppa:pypy/ppa                                                                                                                15.3s
...
```

After merging this PR, we can validate via the daily release dry-run CI.

- https://github.com/apache/spark/actions/workflows/release.yml

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes apache#52290 from dongjoon-hyun/SPARK-53539.

Authored-by: Dongjoon Hyun <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
huangxiaopingRD pushed a commit that referenced this pull request Nov 25, 2025
…rsion at the end

### What changes were proposed in this pull request?

This PR aims to fix the `spark-rm` Dockerfile to install the `pkgdown` version at the end.

### Why are the changes needed?

Although `pkgdown` is supposed to be `2.0.1`, it is changed by the next package installation, as shown below. We should install `pkgdown` at the end to make sure the pinned version is kept.
https://github.com/apache/spark/blob/0311f44e33e5cf8ba60ccc330de3df4f688f5847/dev/create-release/spark-rm/Dockerfile#L89

- https://github.com/apache/spark/actions/workflows/release.yml
  - https://github.com/apache/spark/actions/runs/19386198324/job/55473421715

```
#11 1007.3 Downloading package from url: https://cloud.r-project.org/src/contrib/Archive/preferably/preferably_0.4.tar.gz
#11 1008.9 pkgdown (2.0.1 -> 2.2.0) [CRAN]
#11 1008.9 Installing 1 packages: pkgdown
#11 1008.9 Installing package into '/usr/local/lib/R/site-library'
#11 1008.9 (as 'lib' is unspecified)
#11 1009.4 trying URL 'https://cloud.r-project.org/src/contrib/pkgdown_2.2.0.tar.gz'
#11 1009.7 Content type 'application/x-gzip' length 1280630 bytes (1.2 MB)
#11 1009.7 ==================================================
#11 1009.7 downloaded 1.2 MB
#11 1009.7
#11 1010.2 * installing *source* package 'pkgdown' ...
#11 1010.2 ** package 'pkgdown' successfully unpacked and MD5 sums checked
#11 1010.2 ** using staged installation
#11 1010.3 ** R
#11 1010.3 ** inst
#11 1010.3 ** byte-compile and prepare package for lazy loading
#11 1013.1 ** help
#11 1013.2 *** installing help indices
#11 1013.2 *** copying figures
#11 1013.2 ** building package indices
#11 1013.5 ** installing vignettes
#11 1013.5 ** testing if installed package can be loaded from temporary location
#11 1013.8 ** testing if installed package can be loaded from final location
#11 1014.1 ** testing if installed package keeps a record of temporary installation path
#11 1014.1 * DONE (pkgdown)
```

### Does this PR introduce _any_ user-facing change?

No.

### How was this patch tested?

Manual review.

```
$ dev/create-release/do-release-docker.sh -d /tmp/spark-4.1.0 -n -s docs

$ docker run -it --rm --entrypoint /bin/bash spark-rm
spark-rm923a388425fa:/opt/spark-rm/output$ Rscript -e 'installed.packages()' | grep pkgdown | head -n1
pkgdown      "pkgdown"      "/usr/local/lib/R/site-library" "2.0.1"
```

### Was this patch authored or co-authored using generative AI tooling?

No.

Closes apache#53083 from dongjoon-hyun/SPARK-54371.

Authored-by: Dongjoon Hyun <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>