[pull] master from apache:master #60
Merged
…ark Sweep (CMS) Garbage Collector

### What changes were proposed in this pull request?
JEP 363 removed the CMS garbage collector. So, we are removing the recommendation to use it in this PR.

### Why are the changes needed?
Fix misleading doc.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
Doc build.

### Was this patch authored or co-authored using generative AI tooling?
No.

Closes #44746 from yaooqinn/SPARK-46729.

Authored-by: Kent Yao <[email protected]>
Signed-off-by: Kent Yao <[email protected]>
…)` on client side

### What changes were proposed in this pull request?
Before #44689, `df["*"]` and `sf.col("*")` were both converted to `UnresolvedStar`, and `Count(UnresolvedStar)` was then converted to `Count(1)` in the Analyzer:

https://github.com/apache/spark/blob/381f3691bd481abc8f621ca3f282e06db32bea31/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala#L1893-L1897

In that fix, we introduced a new node `UnresolvedDataFrameStar` for `df["*"]`, which is replaced by `ResolvedStar` later. Unfortunately, it no longer matches `Count(UnresolvedStar)`, so it causes:

```
In [1]: from pyspark.sql import functions as sf

In [2]: df1 = spark.createDataFrame([{"id": 1, "val": "v"}])

In [3]: df1.select(sf.count(df1["*"]))
Out[3]: DataFrame[count(id, val): bigint]
```

which should be

```
In [3]: df1.select(sf.count(df1["*"]))
Out[3]: DataFrame[count(1): bigint]
```

In vanilla Spark, it is up to the `count` function to make the conversion `sf.count(df1["*"])` -> `sf.count(sf.lit(1))`, see:

https://github.com/apache/spark/blob/e8dfcd3081abe16b2115bb2944a2b1cb547eca8e/sql/core/src/main/scala/org/apache/spark/sql/functions.scala#L422-L436

So the natural way is to fix this behavior on the client side.

### Why are the changes needed?
To keep the behavior unchanged.

### Does this PR introduce _any_ user-facing change?
It fixes a behavior change introduced in #44689.

### How was this patch tested?
Added a unit test.

### Was this patch authored or co-authored using generative AI tooling?
No.

Closes #44752 from zhengruifeng/connect_fix_count_df_star.

Authored-by: Ruifeng Zheng <[email protected]>
Signed-off-by: Ruifeng Zheng <[email protected]>
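For reference, a minimal Scala sketch (a standalone toy program, assuming only Spark on the classpath; the object name is illustrative) of the classic-API behaviour the Connect client is being aligned with: `count` over a star column should plan as `count(1)`.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions.count

object CountStarSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().master("local[*]").appName("count-star-sketch").getOrCreate()
    import spark.implicits._

    val df = Seq((1, "v")).toDF("id", "val")
    // The resulting column should be named count(1), not count(id, val).
    df.select(count(df("*"))).show()

    spark.stop()
  }
}
```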
### What changes were proposed in this pull request?
This PR is a follow-up of #44689, to fix `dataset.col("*")` in the Scala client.

### Why are the changes needed?
Fix `dataset.col("*")` resolution.

### Does this PR introduce _any_ user-facing change?
Yes, bug fix.

### How was this patch tested?
Added a unit test.

### Was this patch authored or co-authored using generative AI tooling?
No.

Closes #44748 from zhengruifeng/connect_scala_df_star.

Authored-by: Ruifeng Zheng <[email protected]>
Signed-off-by: Ruifeng Zheng <[email protected]>
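A minimal usage sketch of the resolution being fixed (assuming an active `SparkSession` named `spark`, Connect or classic): selecting every column through `Dataset.col("*")`.

```scala
// Assumes an active SparkSession `spark`.
val df = spark.range(3).withColumnRenamed("id", "v")

// With the fix, "*" resolves against df's own schema on the Scala client as well.
df.select(df.col("*")).show()
```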
…or classes

### What changes were proposed in this pull request?
In this PR, I propose to port the existing `classifyException()` method, which accepts a description, to the new one with an error class added by #44358. The modified JDBC dialects are: DB2, H2, Oracle, MS SQL Server, MySQL and PostgreSQL.

### Why are the changes needed?
The old `classifyException()` method, which accepts only a `description`, has already been deprecated by ...

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
By existing integration tests, and the modified test suite:
```
$ build/sbt "test:testOnly *JDBCV2Suite"
```

### Was this patch authored or co-authored using generative AI tooling?
No.

Closes #44739 from MaxGekk/port-jdbc-classifyException.

Authored-by: Max Gekk <[email protected]>
Signed-off-by: Max Gekk <[email protected]>
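To illustrate the general idea (not the actual Spark API), here is a toy, Spark-agnostic Scala sketch of error-class-based classification: driver-specific `SQLException`s are mapped to a stable error class plus message parameters instead of a free-form description string. All names and error codes below are illustrative placeholders.

```scala
import java.sql.SQLException

// Toy stand-in for an error-class-based exception; not Spark's actual class.
final case class ClassifiedError(
    errorClass: String,
    messageParameters: Map[String, String],
    cause: Throwable)

// Toy classifier keyed on vendor error codes; the codes below are made up.
def classify(e: SQLException, tableName: String): ClassifiedError = e.getErrorCode match {
  case 1001 => ClassifiedError("TABLE_ALREADY_EXISTS", Map("relationName" -> tableName), e)
  case 1002 => ClassifiedError("TABLE_NOT_FOUND", Map("relationName" -> tableName), e)
  case _    => ClassifiedError("UNCLASSIFIED", Map("message" -> String.valueOf(e.getMessage)), e)
}
```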
…erators
### What changes were proposed in this pull request?
When using pandas UDFs with iterators, if users enable the profiling Spark conf, a warning indicating that profiling is not supported should be raised, and profiling should be disabled.
However, currently, after the not-supported warning is raised, the memory profiler is still enabled.
This PR fixes that.
### Why are the changes needed?
A bug fix to eliminate misleading behavior.
### Does this PR introduce _any_ user-facing change?
The noticeable change affects only users of the PySpark shell, because in the PySpark shell the memory profiler raises an error, which in turn blocks execution of the UDF.
### How was this patch tested?
Manual test.
Setup:
```py
$ ./bin/pyspark --conf spark.python.profile=true
>>> from typing import Iterator
>>> from pyspark.sql.functions import *
>>> import pandas as pd
>>> pandas_udf("long")
... def plus_one(iterator: Iterator[pd.Series]) -> Iterator[pd.Series]:
... for s in iterator:
... yield s + 1
...
>>> df = spark.createDataFrame(pd.DataFrame([1, 2, 3], columns=["v"]))
```
Before:
```
>>> df.select(plus_one(df.v)).show()
UserWarning: Profiling UDFs with iterators input/output is not supported.
Traceback (most recent call last):
...
OSError: could not get source code
```
After:
```
>>> df.select(plus_one(df.v)).show()
/Users/xinrong.meng/spark/python/pyspark/sql/udf.py:417: UserWarning: Profiling UDFs with iterators input/output is not supported.
+-----------+
|plus_one(v)|
+-----------+
| 2|
| 3|
| 4|
+-----------+
```
### Was this patch authored or co-authored using generative AI tooling?

Closes #44668 from xinrong-meng/fix_mp.
Authored-by: Xinrong Meng <[email protected]>
Signed-off-by: Xinrong Meng <[email protected]>
…ip Pandas/PyArrow tests if not available

### What changes were proposed in this pull request?
This PR aims to skip `Pandas`-related or `PyArrow`-related tests in `pyspark.sql.tests.test_group` if they are not installed. This regression was introduced by
- #44322
- #42767

### Why are the changes needed?
Since `Pandas` and `PyArrow` are optional, we need to skip the tests instead of failing.
- https://github.com/apache/spark/actions/runs/7543495430/job/20534809039

```
======================================================================
ERROR: test_agg_func (pyspark.sql.tests.test_group.GroupTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/dongjoon/APACHE/spark-merge/python/pyspark/sql/pandas/utils.py", line 28, in require_minimum_pandas_version
    import pandas
ModuleNotFoundError: No module named 'pandas'
```

```
======================================================================
ERROR: test_agg_func (pyspark.sql.tests.test_group.GroupTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/__w/spark/spark/python/pyspark/sql/pandas/utils.py", line 61, in require_minimum_pyarrow_version
    import pyarrow
ModuleNotFoundError: No module named 'pyarrow'
```

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
- Manually with a Python installation without Pandas.
```
$ python/run-tests.py --testnames pyspark.sql.tests.test_group
Running PySpark tests. Output is in /Users/dongjoon/APACHE/spark-merge/python/unit-tests.log
Will test against the following Python executables: ['python3.9', 'pypy3']
Will test the following Python tests: ['pyspark.sql.tests.test_group']
python3.9 python_implementation is CPython
python3.9 version is: Python 3.9.18
pypy3 python_implementation is PyPy
pypy3 version is: Python 3.10.13 (f1607341da97ff5a1e93430b6e8c4af0ad1aa019, Sep 28 2023, 20:47:55) [PyPy 7.3.13 with GCC Apple LLVM 13.1.6 (clang-1316.0.21.2.5)]
Starting test(python3.9): pyspark.sql.tests.test_group (temp output: /Users/dongjoon/APACHE/spark-merge/python/target/ac9269b6-f0df-4d06-88b8-e5e710202b60/python3.9__pyspark.sql.tests.test_group__9zjp5i4z.log)
Starting test(pypy3): pyspark.sql.tests.test_group (temp output: /Users/dongjoon/APACHE/spark-merge/python/target/cab6ebed-e49f-4d86-80db-0dc3928079e3/pypy3__pyspark.sql.tests.test_group__thw6hily.log)
Finished test(pypy3): pyspark.sql.tests.test_group (6s) ... 3 tests were skipped
Finished test(python3.9): pyspark.sql.tests.test_group (7s) ... 3 tests were skipped
Tests passed in 7 seconds

Skipped tests in pyspark.sql.tests.test_group with pypy3:
    test_agg_func (pyspark.sql.tests.test_group.GroupTests) ... skipped '[PACKAGE_NOT_INSTALLED] Pandas >= 1.4.4 must be installed; however, it was not found.'
    test_group_by_ordinal (pyspark.sql.tests.test_group.GroupTests) ... skipped '[PACKAGE_NOT_INSTALLED] Pandas >= 1.4.4 must be installed; however, it was not found.'
    test_order_by_ordinal (pyspark.sql.tests.test_group.GroupTests) ... skipped '[PACKAGE_NOT_INSTALLED] Pandas >= 1.4.4 must be installed; however, it was not found.'

Skipped tests in pyspark.sql.tests.test_group with python3.9:
    test_agg_func (pyspark.sql.tests.test_group.GroupTests) ... SKIP (0.000s)
    test_group_by_ordinal (pyspark.sql.tests.test_group.GroupTests) ... SKIP (0.000s)
    test_order_by_ordinal (pyspark.sql.tests.test_group.GroupTests) ... SKIP (0.000s)
```
- Manually with a Python installation without PyArrow.
```
$ python/run-tests.py --testnames pyspark.sql.tests.test_group
Running PySpark tests. Output is in /Users/dongjoon/APACHE/spark-merge/python/unit-tests.log
Will test against the following Python executables: ['python3.9', 'pypy3']
Will test the following Python tests: ['pyspark.sql.tests.test_group']
python3.9 python_implementation is CPython
python3.9 version is: Python 3.9.18
pypy3 python_implementation is PyPy
pypy3 version is: Python 3.10.13 (f1607341da97ff5a1e93430b6e8c4af0ad1aa019, Sep 28 2023, 20:47:55) [PyPy 7.3.13 with GCC Apple LLVM 13.1.6 (clang-1316.0.21.2.5)]
Starting test(pypy3): pyspark.sql.tests.test_group (temp output: /Users/dongjoon/APACHE/spark-merge/python/target/7f1a665e-a679-467c-8ab4-a4532e0b2300/pypy3__pyspark.sql.tests.test_group__i67erhb4.log)
Starting test(python3.9): pyspark.sql.tests.test_group (temp output: /Users/dongjoon/APACHE/spark-merge/python/target/47b90765-8ad7-4da0-aa7b-c12cd266847e/python3.9__pyspark.sql.tests.test_group__190hx0tm.log)
Finished test(python3.9): pyspark.sql.tests.test_group (6s) ... 3 tests were skipped
Finished test(pypy3): pyspark.sql.tests.test_group (7s) ... 3 tests were skipped
Tests passed in 7 seconds

Skipped tests in pyspark.sql.tests.test_group with pypy3:
    test_agg_func (pyspark.sql.tests.test_group.GroupTests) ... skipped '[PACKAGE_NOT_INSTALLED] PyArrow >= 4.0.0 must be installed; however, it was not found.'
    test_group_by_ordinal (pyspark.sql.tests.test_group.GroupTests) ... skipped '[PACKAGE_NOT_INSTALLED] PyArrow >= 4.0.0 must be installed; however, it was not found.'
    test_order_by_ordinal (pyspark.sql.tests.test_group.GroupTests) ... skipped '[PACKAGE_NOT_INSTALLED] PyArrow >= 4.0.0 must be installed; however, it was not found.'

Skipped tests in pyspark.sql.tests.test_group with python3.9:
    test_agg_func (pyspark.sql.tests.test_group.GroupTests) ... SKIP (0.000s)
    test_group_by_ordinal (pyspark.sql.tests.test_group.GroupTests) ... SKIP (0.000s)
    test_order_by_ordinal (pyspark.sql.tests.test_group.GroupTests) ... SKIP (0.000s)
```

### Was this patch authored or co-authored using generative AI tooling?
No.

Closes #44759 from dongjoon-hyun/SPARK-46735.

Authored-by: Dongjoon Hyun <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
…r/map_zip_with`

### What changes were proposed in this pull request?
This PR refines the docstrings of `str_to_map`/`map_filter`/`map_zip_with` and adds some new examples.

### Why are the changes needed?
To improve the PySpark documentation.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
Pass GitHub Actions.

### Was this patch authored or co-authored using generative AI tooling?
No.

Closes #44747 from LuciferYang/SPARK-46730.

Authored-by: yangjie01 <[email protected]>
Signed-off-by: Hyukjin Kwon <[email protected]>
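A minimal Scala sketch (assuming an active `SparkSession` named `spark`) that exercises the three functions through their SQL forms via `expr`; the input literal and aliases are illustrative.

```scala
import org.apache.spark.sql.functions.expr

// Assumes an active SparkSession `spark`.
val df = spark.sql("SELECT 'a:1,b:2,c:3' AS s")

df.select(expr("str_to_map(s, ',', ':')").alias("m"))
  .select(
    // Keep only entries whose value is greater than '1' (string comparison).
    expr("map_filter(m, (k, v) -> v > '1')").alias("filtered"),
    // Zip the map with itself, concatenating the two values per key.
    expr("map_zip_with(m, m, (k, v1, v2) -> concat(v1, v2))").alias("zipped"))
  .show(truncate = false)
```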
…ct's artifact management

### What changes were proposed in this pull request?
Similar to SPARK-44794, propagate the JobArtifactState to broadcast/subquery threads. This is an example:

```scala
val add1 = udf((i: Long) => i + 1)
val tableA = spark.range(2).alias("a")
val tableB = broadcast(spark.range(2).select(add1(col("id")).alias("id"))).alias("b")
tableA.join(tableB).
  where(col("a.id")===col("b.id")).
  select(col("a.id").alias("a_id"), col("b.id").alias("b_id")).
  collect().
  mkString("[", ", ", "]")
```

Before this PR, this example throws `ClassNotFoundException`: subquery and broadcast execution use a separate thread pool which doesn't carry the `JobArtifactState`.

### Why are the changes needed?
Fix a bug: make subquery/broadcast threads work with Connect's artifact management.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
Added a new test to `ReplE2ESuite`.

### Was this patch authored or co-authored using generative AI tooling?
No.

Closes #44753 from xieshuaihu/broadcast_artifact.

Authored-by: xieshuaihu <[email protected]>
Signed-off-by: Hyukjin Kwon <[email protected]>
### What changes were proposed in this pull request?
This PR aims to upgrade the upload-artifact action from v3 to v4. After PR #44698, our environment variable (`PYTHON_TO_TEST`) is correctly passed and assigned a value. We will bring back this PR: #44662

### Why are the changes needed?
- v4.0.0 release notes: https://github.com/actions/upload-artifact/releases/tag/v4.0.0
  They have numerous performance and behavioral improvements.
- v3 vs v4: actions/upload-artifact@v3...v4.0.0

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
Pass GA.

### Was this patch authored or co-authored using generative AI tooling?
No.

Closes #44728 from panbingkun/SPARK-46474_GO_AHEAD.

Authored-by: panbingkun <[email protected]>
Signed-off-by: Hyukjin Kwon <[email protected]>
…bc driver

Hi, thanks for checking the PR. This is a small bug fix to make Scala Spark work with ClickHouse's array type. Let me know if this could cause problems on other DB types. (Please help to trigger CI if possible; I failed to make the build pipeline run - any help is appreciated.)

### Why are the changes needed?
This PR fixes the issue described at ClickHouse/clickhouse-java#1505.

When using Spark to write an array of strings to ClickHouse, the ClickHouse JDBC driver throws a `java.lang.IllegalArgumentException: Unknown data type: string` exception. The exception is due to Spark's JDBC utils passing an invalid type value `string` (it should be `String`). The original type value retrieved from the ClickHouse JDBC driver is correct, but Spark's JDBC utils converts the type string to lower case:

https://github.com/apache/spark/blob/6b931530d75cb4f00236f9c6283de8ef450963ad/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/jdbc/JdbcUtils.scala#L639

### What changes were proposed in this pull request?
- Remove the lowercase cast. The string value retrieved from the JDBC driver implementation should be passed as-is; Spark shouldn't try to modify the value.

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
- Follow the reproduction steps at ClickHouse/clickhouse-java#1505:
  - Create a ClickHouse table with an array-of-string column
  - Write a Scala Spark job that writes to ClickHouse
  - Verify the change fixes the issue

### Was this patch authored or co-authored using generative AI tooling?
No.

Closes #44459 from phanhuyn/do-not-lower-case-jdbc-array-type.

Lead-authored-by: Kent Yao <[email protected]>
Co-authored-by: Nguyen Phan Huy <[email protected]>
Signed-off-by: Kent Yao <[email protected]>
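A minimal write-path sketch of the scenario being fixed (assuming an active `SparkSession` named `spark`; the URL, table name, and driver class are placeholders for an actual ClickHouse setup): persisting an `array<string>` column through a JDBC driver whose type names are case-sensitive.

```scala
// Assumes an active SparkSession `spark`; the JDBC options below are placeholders.
import spark.implicits._

val df = Seq(Seq("a", "b"), Seq("c")).toDF("tags")   // array<string> column

df.write
  .format("jdbc")
  .option("url", "jdbc:clickhouse://localhost:8123/default")   // placeholder URL
  .option("dbtable", "tags_table")                              // placeholder table
  .option("driver", "com.clickhouse.jdbc.ClickHouseDriver")     // placeholder driver class
  .mode("append")
  .save()
```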
…enchmark

### What changes were proposed in this pull request?
This PR aims to use the default ORC compression in `OrcReadBenchmark`.

### Why are the changes needed?
After SPARK-46648, Apache Spark will use `Zstandard` as the default ORC compression codec. We need to benchmark this one.

### Does this PR introduce _any_ user-facing change?
No, this is a test-only PR.

### How was this patch tested?
Manual review.

### Was this patch authored or co-authored using generative AI tooling?
No.

Closes #44761 from dongjoon-hyun/SPARK-46737.

Authored-by: Dongjoon Hyun <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
… `OrcFileFormat`

### What changes were proposed in this pull request?
This PR aims to add ORC compression tests for `hive` module `OrcFileFormat`.

### Why are the changes needed?
- https://github.com/apache/orc/blob/branch-1.9/java/core/src/java/org/apache/orc/CompressionKind.java

```java
public enum CompressionKind {
  NONE,
  ZLIB,
  SNAPPY,
  LZO,
  LZ4,
  ZSTD
}
```

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
Pass the CIs.

### Was this patch authored or co-authored using generative AI tooling?
No.

Closes #44765 from dongjoon-hyun/SPARK-46742.

Authored-by: Dongjoon Hyun <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
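A minimal sketch (assuming an active `SparkSession` named `spark`; the output path is a placeholder) of exercising a subset of the codecs listed above through the DataFrame writer's ORC `compression` option, in the spirit of the tests added here.

```scala
// Assumes an active SparkSession `spark`; /tmp/orc-codec-test is a placeholder path.
Seq("none", "zlib", "snappy", "lz4", "zstd").foreach { codec =>
  spark.range(10)
    .write
    .mode("overwrite")
    .option("compression", codec)          // ORC codec to use for this write
    .orc(s"/tmp/orc-codec-test/$codec")
}
```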
### What changes were proposed in this pull request?
1. Combine the pip installations for lint and doc together.
2. Run `pip list` before running tests.

### Why are the changes needed?
1. To avoid potential conflicts. For example, the existing `sphinx==4.5.0` requires `docutils<0.18,>=0.14`, while unpinned `sphinx` requires `docutils<0.21,>=0.18.1`. If we install them with different commands, the upgrade of `sphinx` might be broken.
2. To make it easier to debug. For example, in #44727 (comment), `sphinxcontrib-*` were installed twice with different versions, which is confusing.

### Does this PR introduce _any_ user-facing change?
No, infra-only.

### How was this patch tested?
CI.

### Was this patch authored or co-authored using generative AI tooling?
No.

Closes #44754 from zhengruifeng/infra_combine_pip_installation.

Authored-by: Ruifeng Zheng <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
pull bot pushed a commit that referenced this pull request on Nov 22, 2024:
…ead pool

### What changes were proposed in this pull request?
This PR aims to use a meaningful class name prefix for the REST Submission API thread pool instead of the default value of Jetty QueuedThreadPool, `"qtp"+super.hashCode()`.

https://github.com/dekellum/jetty/blob/3dc0120d573816de7d6a83e2d6a97035288bdd4a/jetty-util/src/main/java/org/eclipse/jetty/util/thread/QueuedThreadPool.java#L64

### Why are the changes needed?
This is helpful during JVM investigation.

**BEFORE (4.0.0-preview2)**
```
$ SPARK_MASTER_OPTS='-Dspark.master.rest.enabled=true' sbin/start-master.sh
$ jstack 28217 | grep qtp
"qtp1925630411-52" #52 daemon prio=5 os_prio=31 cpu=0.07ms elapsed=19.06s tid=0x0000000134906c10 nid=0xde03 runnable [0x0000000314592000]
"qtp1925630411-53" #53 daemon prio=5 os_prio=31 cpu=0.05ms elapsed=19.06s tid=0x0000000134ac6810 nid=0xc603 runnable [0x000000031479e000]
"qtp1925630411-54" #54 daemon prio=5 os_prio=31 cpu=0.06ms elapsed=19.06s tid=0x000000013491ae10 nid=0xdc03 runnable [0x00000003149aa000]
"qtp1925630411-55" #55 daemon prio=5 os_prio=31 cpu=0.08ms elapsed=19.06s tid=0x0000000134ac9810 nid=0xc803 runnable [0x0000000314bb6000]
"qtp1925630411-56" #56 daemon prio=5 os_prio=31 cpu=0.04ms elapsed=19.06s tid=0x0000000134ac9e10 nid=0xda03 runnable [0x0000000314dc2000]
"qtp1925630411-57" #57 daemon prio=5 os_prio=31 cpu=0.05ms elapsed=19.06s tid=0x0000000134aca410 nid=0xca03 runnable [0x0000000314fce000]
"qtp1925630411-58" #58 daemon prio=5 os_prio=31 cpu=0.04ms elapsed=19.06s tid=0x0000000134acaa10 nid=0xcb03 runnable [0x00000003151da000]
"qtp1925630411-59" #59 daemon prio=5 os_prio=31 cpu=0.06ms elapsed=19.06s tid=0x0000000134acb010 nid=0xcc03 runnable [0x00000003153e6000]
"qtp1925630411-60-acceptor-0108e9815-ServerConnector1e497474{HTTP/1.1, (http/1.1)}{M3-Max.local:6066}" #60 daemon prio=3 os_prio=31 cpu=0.11ms elapsed=19.06s tid=0x00000001317ffa10 nid=0xcd03 runnable [0x00000003155f2000]
"qtp1925630411-61-acceptor-11d90f2aa-ServerConnector1e497474{HTTP/1.1, (http/1.1)}{M3-Max.local:6066}" #61 daemon prio=3 os_prio=31 cpu=0.10ms elapsed=19.06s tid=0x00000001314ed610 nid=0xcf03 waiting on condition [0x00000003157fe000]
```

**AFTER**
```
$ SPARK_MASTER_OPTS='-Dspark.master.rest.enabled=true' sbin/start-master.sh
$ jstack 28317 | grep StandaloneRestServer
"StandaloneRestServer-52" #52 daemon prio=5 os_prio=31 cpu=0.09ms elapsed=60.06s tid=0x00000001284a8e10 nid=0xdb03 runnable [0x000000032cfce000]
"StandaloneRestServer-53" #53 daemon prio=5 os_prio=31 cpu=0.06ms elapsed=60.06s tid=0x00000001284acc10 nid=0xda03 runnable [0x000000032d1da000]
"StandaloneRestServer-54" #54 daemon prio=5 os_prio=31 cpu=0.05ms elapsed=60.06s tid=0x00000001284ae610 nid=0xd803 runnable [0x000000032d3e6000]
"StandaloneRestServer-55" #55 daemon prio=5 os_prio=31 cpu=0.09ms elapsed=60.06s tid=0x00000001284aec10 nid=0xd703 runnable [0x000000032d5f2000]
"StandaloneRestServer-56" #56 daemon prio=5 os_prio=31 cpu=0.06ms elapsed=60.06s tid=0x00000001284af210 nid=0xc803 runnable [0x000000032d7fe000]
"StandaloneRestServer-57" #57 daemon prio=5 os_prio=31 cpu=0.05ms elapsed=60.06s tid=0x00000001284af810 nid=0xc903 runnable [0x000000032da0a000]
"StandaloneRestServer-58" #58 daemon prio=5 os_prio=31 cpu=0.06ms elapsed=60.06s tid=0x00000001284afe10 nid=0xcb03 runnable [0x000000032dc16000]
"StandaloneRestServer-59" #59 daemon prio=5 os_prio=31 cpu=0.05ms elapsed=60.06s tid=0x00000001284b0410 nid=0xcc03 runnable [0x000000032de22000]
"StandaloneRestServer-60-acceptor-04aefbaa8-ServerConnector44284d85{HTTP/1.1, (http/1.1)}{M3-Max.local:6066}" #60 daemon prio=3 os_prio=31 cpu=0.13ms elapsed=60.05s tid=0x000000015cda1a10 nid=0xcd03 runnable [0x000000032e02e000]
"StandaloneRestServer-61-acceptor-148976251-ServerConnector44284d85{HTTP/1.1, (http/1.1)}{M3-Max.local:6066}" #61 daemon prio=3 os_prio=31 cpu=0.12ms elapsed=60.05s tid=0x000000015cd1c810 nid=0xce03 waiting on condition [0x000000032e23a000]
```

### Does this PR introduce _any_ user-facing change?
No, the thread names are only accessed during debugging.

### How was this patch tested?
Manual review.

### Was this patch authored or co-authored using generative AI tooling?
No.

Closes apache#48924 from dongjoon-hyun/SPARK-50385.

Authored-by: Dongjoon Hyun <[email protected]>
Signed-off-by: panbingkun <[email protected]>