
Add a test for a simple function taking array of arrays as input #491

Closed
mbasmanova wants to merge 1 commit into facebookincubator:main from mbasmanova:array-of-arrays

Conversation

@mbasmanova
Contributor

No description provided.

@facebook-github-bot facebook-github-bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Oct 25, 2021
@facebook-github-bot
Contributor

@mbasmanova has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot
Contributor

@mbasmanova merged this pull request in dbf0a6e.

facebook-github-bot pushed a commit that referenced this pull request Sep 15, 2022
Summary:
X-link: pytorch/torcharrow#491

Test and benchmark targets are now decoupled, which means they can be built independently.
Shared functionality has been moved to a common utility library.

Resolves #1704

Pull Request resolved: #2439

Reviewed By: Yuhta

Differential Revision: D39484543

Pulled By: kgpai

fbshipit-source-id: 5ac888c81a6bbfbc5a1a1c4cfd41fa2c86199bc4
rui-mo pushed a commit to rui-mo/velox that referenced this pull request Mar 17, 2023
This patch adds S3 support. Below is the command to build the package:
mvn clean package -Pbackends-velox -Pspark-3.2 -DskipTests -Dcheckstyle.skip -Dbuild_cpp=ON -Dbuild_velox=ON -Dbuild_velox_from_source=ON -Dbuild_arrow=ON -Dvelox_enable_s3=ON

The following S3 configuration properties are required in spark-defaults.conf:

spark.hadoop.fs.s3a.impl           org.apache.hadoop.fs.s3a.S3AFileSystem
spark.hadoop.fs.s3a.aws.credentials.provider org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider
spark.hadoop.fs.s3a.access.key     xxxx
spark.hadoop.fs.s3a.secret.key     xxxx
spark.hadoop.fs.s3a.endpoint https://s3.us-west-1.amazonaws.com
spark.hadoop.fs.s3a.connection.ssl.enabled true
spark.hadoop.fs.s3a.path.style.access false
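As an aside, the same properties can also be set programmatically instead of via spark-defaults.conf. A minimal sketch, mirroring the values above as a plain dict that could be passed to PySpark's `SparkConf.setAll()` (the `xxxx` credential values are placeholders, exactly as in the original snippet):

```python
# The spark-defaults.conf S3A properties above, expressed as key/value pairs.
# Credential values are placeholders; substitute real keys before use.
s3a_conf = {
    "spark.hadoop.fs.s3a.impl": "org.apache.hadoop.fs.s3a.S3AFileSystem",
    "spark.hadoop.fs.s3a.aws.credentials.provider":
        "org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider",
    "spark.hadoop.fs.s3a.access.key": "xxxx",
    "spark.hadoop.fs.s3a.secret.key": "xxxx",
    "spark.hadoop.fs.s3a.endpoint": "https://s3.us-west-1.amazonaws.com",
    "spark.hadoop.fs.s3a.connection.ssl.enabled": "true",
    "spark.hadoop.fs.s3a.path.style.access": "false",
}

# With pyspark installed, these could be applied when building the session:
#   conf = SparkConf().setAll(s3a_conf.items())
#   spark = SparkSession.builder.config(conf=conf).getOrCreate()
print(len(s3a_conf))
```

Whether to prefer spark-defaults.conf or programmatic configuration depends on the deployment; the file-based form shown in the commit applies cluster-wide without code changes.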

This patch also cleans up the HDFS connector: if HDFS support is not required, Gluten will not build or include its dependencies.

Signed-off-by: Yuan Zhou yuan.zhou@intel.com