[SPARK-45047][PYTHON][CONNECT] DataFrame.groupBy support ordinals
#42767
Closed
Conversation
Contributor (Author)
cc @HyukjinKwon, if this fix is fine, I will support the other APIs in follow-up PRs.
HyukjinKwon approved these changes on Sep 4, 2023.
dongjoon-hyun approved these changes on Sep 4, 2023.
dongjoon-hyun (Member) left a comment:
+1, LGTM. Thank you.
Merged to master for Apache Spark 4.0.
Contributor (Author)
@HyukjinKwon @dongjoon-hyun thanks for the review.
dongjoon-hyun added a commit that referenced this pull request on Jan 16, 2024:
…ip Pandas/PyArrow tests if not available

### What changes were proposed in this pull request?
This PR aims to skip `Pandas`-related or `PyArrow`-related tests in `pyspark.sql.tests.test_group` if they are not installed. This regression was introduced by
- #44322
- #42767

### Why are the changes needed?
Since `Pandas` and `PyArrow` are optional, we need to skip the tests instead of failures.
- https://github.com/apache/spark/actions/runs/7543495430/job/20534809039

```
======================================================================
ERROR: test_agg_func (pyspark.sql.tests.test_group.GroupTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/Users/dongjoon/APACHE/spark-merge/python/pyspark/sql/pandas/utils.py", line 28, in require_minimum_pandas_version
    import pandas
ModuleNotFoundError: No module named 'pandas'
```

```
======================================================================
ERROR: test_agg_func (pyspark.sql.tests.test_group.GroupTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/__w/spark/spark/python/pyspark/sql/pandas/utils.py", line 61, in require_minimum_pyarrow_version
    import pyarrow
ModuleNotFoundError: No module named 'pyarrow'
```

### Does this PR introduce _any_ user-facing change?
No.

### How was this patch tested?
- Manually with the Python installation without Pandas.
```
$ python/run-tests.py --testnames pyspark.sql.tests.test_group
Running PySpark tests. Output is in /Users/dongjoon/APACHE/spark-merge/python/unit-tests.log
Will test against the following Python executables: ['python3.9', 'pypy3']
Will test the following Python tests: ['pyspark.sql.tests.test_group']
python3.9 python_implementation is CPython
python3.9 version is: Python 3.9.18
pypy3 python_implementation is PyPy
pypy3 version is: Python 3.10.13 (f1607341da97ff5a1e93430b6e8c4af0ad1aa019, Sep 28 2023, 20:47:55) [PyPy 7.3.13 with GCC Apple LLVM 13.1.6 (clang-1316.0.21.2.5)]
Starting test(python3.9): pyspark.sql.tests.test_group (temp output: /Users/dongjoon/APACHE/spark-merge/python/target/ac9269b6-f0df-4d06-88b8-e5e710202b60/python3.9__pyspark.sql.tests.test_group__9zjp5i4z.log)
Starting test(pypy3): pyspark.sql.tests.test_group (temp output: /Users/dongjoon/APACHE/spark-merge/python/target/cab6ebed-e49f-4d86-80db-0dc3928079e3/pypy3__pyspark.sql.tests.test_group__thw6hily.log)
Finished test(pypy3): pyspark.sql.tests.test_group (6s) ... 3 tests were skipped
Finished test(python3.9): pyspark.sql.tests.test_group (7s) ... 3 tests were skipped
Tests passed in 7 seconds

Skipped tests in pyspark.sql.tests.test_group with pypy3:
    test_agg_func (pyspark.sql.tests.test_group.GroupTests) ... skipped '[PACKAGE_NOT_INSTALLED] Pandas >= 1.4.4 must be installed; however, it was not found.'
    test_group_by_ordinal (pyspark.sql.tests.test_group.GroupTests) ... skipped '[PACKAGE_NOT_INSTALLED] Pandas >= 1.4.4 must be installed; however, it was not found.'
    test_order_by_ordinal (pyspark.sql.tests.test_group.GroupTests) ... skipped '[PACKAGE_NOT_INSTALLED] Pandas >= 1.4.4 must be installed; however, it was not found.'

Skipped tests in pyspark.sql.tests.test_group with python3.9:
    test_agg_func (pyspark.sql.tests.test_group.GroupTests) ... SKIP (0.000s)
    test_group_by_ordinal (pyspark.sql.tests.test_group.GroupTests) ... SKIP (0.000s)
    test_order_by_ordinal (pyspark.sql.tests.test_group.GroupTests) ... SKIP (0.000s)
```
- Manually with the Python installation without Pyarrow.
```
$ python/run-tests.py --testnames pyspark.sql.tests.test_group
Running PySpark tests. Output is in /Users/dongjoon/APACHE/spark-merge/python/unit-tests.log
Will test against the following Python executables: ['python3.9', 'pypy3']
Will test the following Python tests: ['pyspark.sql.tests.test_group']
python3.9 python_implementation is CPython
python3.9 version is: Python 3.9.18
pypy3 python_implementation is PyPy
pypy3 version is: Python 3.10.13 (f1607341da97ff5a1e93430b6e8c4af0ad1aa019, Sep 28 2023, 20:47:55) [PyPy 7.3.13 with GCC Apple LLVM 13.1.6 (clang-1316.0.21.2.5)]
Starting test(pypy3): pyspark.sql.tests.test_group (temp output: /Users/dongjoon/APACHE/spark-merge/python/target/7f1a665e-a679-467c-8ab4-a4532e0b2300/pypy3__pyspark.sql.tests.test_group__i67erhb4.log)
Starting test(python3.9): pyspark.sql.tests.test_group (temp output: /Users/dongjoon/APACHE/spark-merge/python/target/47b90765-8ad7-4da0-aa7b-c12cd266847e/python3.9__pyspark.sql.tests.test_group__190hx0tm.log)
Finished test(python3.9): pyspark.sql.tests.test_group (6s) ... 3 tests were skipped
Finished test(pypy3): pyspark.sql.tests.test_group (7s) ... 3 tests were skipped
Tests passed in 7 seconds

Skipped tests in pyspark.sql.tests.test_group with pypy3:
    test_agg_func (pyspark.sql.tests.test_group.GroupTests) ... skipped '[PACKAGE_NOT_INSTALLED] PyArrow >= 4.0.0 must be installed; however, it was not found.'
    test_group_by_ordinal (pyspark.sql.tests.test_group.GroupTests) ... skipped '[PACKAGE_NOT_INSTALLED] PyArrow >= 4.0.0 must be installed; however, it was not found.'
    test_order_by_ordinal (pyspark.sql.tests.test_group.GroupTests) ... skipped '[PACKAGE_NOT_INSTALLED] PyArrow >= 4.0.0 must be installed; however, it was not found.'

Skipped tests in pyspark.sql.tests.test_group with python3.9:
    test_agg_func (pyspark.sql.tests.test_group.GroupTests) ... SKIP (0.000s)
    test_group_by_ordinal (pyspark.sql.tests.test_group.GroupTests) ... SKIP (0.000s)
    test_order_by_ordinal (pyspark.sql.tests.test_group.GroupTests) ... SKIP (0.000s)
```

### Was this patch authored or co-authored using generative AI tooling?
No.

Closes #44759 from dongjoon-hyun/SPARK-46735.

Authored-by: Dongjoon Hyun <[email protected]>
Signed-off-by: Dongjoon Hyun <[email protected]>
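The fix described in that commit amounts to guarding the optional-dependency tests behind availability checks. A minimal sketch of that pattern, using plain `unittest` and illustrative names (`have_pandas`, `have_pyarrow`, and the placeholder test body are stand-ins, not the exact helpers used in the Spark test suite):

```python
# Sketch: skip tests when optional packages (Pandas/PyArrow) are missing,
# instead of letting them fail with ModuleNotFoundError.
import unittest

try:
    import pandas  # noqa: F401
    have_pandas = True
except ImportError:
    have_pandas = False

try:
    import pyarrow  # noqa: F401
    have_pyarrow = True
except ImportError:
    have_pyarrow = False


@unittest.skipIf(
    not have_pandas or not have_pyarrow,
    "Pandas and PyArrow are optional; skip instead of failing when they are absent.",
)
class GroupTests(unittest.TestCase):
    def test_group_by_ordinal(self):
        # The real test exercises DataFrame.groupBy with ordinals; this
        # placeholder only demonstrates the skip mechanism.
        self.assertTrue(have_pandas and have_pyarrow)


if __name__ == "__main__":
    unittest.main()
```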
What changes were proposed in this pull request?
Make `DataFrame.groupBy` accept ordinals (a rough usage sketch follows after this description).
Why are the changes needed?
For feature parity; this PR focuses on the `groupBy` method.
Does this PR introduce any user-facing change?
Yes, this is a new feature.
How was this patch tested?
Added unit tests.
Was this patch authored or co-authored using generative AI tooling?
No.
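For illustration, a rough usage sketch of the feature as described above. The data and column names are made up for this example, and the 1-based ordinal semantics are assumed to mirror SQL's `GROUP BY <ordinal>`; the details may differ from the merged API.

```python
# Illustrative sketch only: assumes an int passed to groupBy is a 1-based
# column ordinal, mirroring SQL's GROUP BY <ordinal>.
from pyspark.sql import SparkSession
from pyspark.sql import functions as sf

spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame([(1, 10), (1, 20), (2, 30)], ["a", "b"])

# Group by the first column ("a") via its ordinal...
df.groupBy(1).agg(sf.sum("b")).show()

# ...which is expected to behave like grouping by the column name directly.
df.groupBy("a").agg(sf.sum("b")).show()
```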