
Conversation

@agsachin
Contributor

@agsachin agsachin commented Aug 11, 2016

What changes were proposed in this pull request?

Summary: we need to make two changes:

  1. In DataSourceStrategy.scala, use sqlContext.sparkContext.hadoopConfiguration instead of SparkHadoopUtil.get.conf.
  2. Update the def newConfiguration(conf: SparkConf): Configuration = {..} function to also copy the s3- and swift-related configurations (see the sketch that follows).
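
A minimal sketch of what change 2 could look like, assuming the existing spark.hadoop.* copy loop in SparkHadoopUtil; the fs.s3 / fs.swift prefixes are the ones discussed in this PR, not a final list:

    import org.apache.hadoop.conf.Configuration
    import org.apache.spark.SparkConf

    // Sketch only: mirrors the proposed change, not the final implementation.
    def newConfiguration(conf: SparkConf): Configuration = {
      val hadoopConf = new Configuration()
      conf.getAll.foreach { case (key, value) =>
        if (key.startsWith("spark.hadoop.")) {
          // Copy any "spark.hadoop.foo=bar" property into conf as "foo=bar"
          hadoopConf.set(key.substring("spark.hadoop.".length), value)
        } else if (key.startsWith("fs.s3") || key.startsWith("fs.swift")) {
          // Copy s3/swift filesystem settings verbatim so respawned executors see them
          hadoopConf.set(key, value)
        }
      }
      hadoopConf
    }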

How was this patch tested?

This patch was manually tested on a local machine in standalone cluster mode.
To simulate the case of executor failure, we manually killed the executor; the master then started a new executor with the updated config params.

No UI changes.


@lresende
Member

Jenkins test this please

}
// Copy any "spark.hadoop.foo=bar" system properties into conf as "foo=bar"
conf.getAll.foreach { case (key, value) =>
if (key.startsWith("spark.hadoop.")) {
Member

The comment above does not apply to the whole loop anymore and should be moved to this if statement.

Contributor Author

done.

@lresende
Member

Minor, the title should be [CORE] instead of [SPARK CORE]

@lresende
Member

Also, will this fix the scenario where the user has provided the properties programmatically?

@agsachin agsachin changed the title [SPARK-13979][Spark Core][WIP]Killed executor is re spawned without AWS key… [SPARK-13979][Core][WIP]Killed executor is re spawned without AWS key… Aug 16, 2016
@agsachin
Contributor Author

agsachin commented Aug 16, 2016

@lresende yes, this fix works for the scenario where the user has provided the properties programmatically,

e.g. for Swift:
sc.hadoopConfiguration.set("fs.swift2...", "")
...
and for Amazon S3:
sc.hadoopConfiguration.set("fs.s3....", "org.apache.hadoop.fs.s3a.S3AFileSystem")

@agsachin agsachin changed the title [SPARK-13979][Core][WIP]Killed executor is re spawned without AWS key… [SPARK-13979][Core] Killed executor is re spawned without AWS key… Aug 17, 2016
}
// Copy any "fs.swift2d.foo=bar" properties into conf as "fs.swift2d.foo=bar"
else if (key.startsWith("fs.swift2d")){
hadoopConf.set(key, value)
Contributor

What's swift2d? It's not the swift client in hadoop-openstack, which is fs.swift

Contributor Author
@agsachin agsachin Aug 21, 2016

swift2d is used when you are using Stocator --> https://github.com/SparkTC/stocator. Now I have updated for hadoop-openstack also.

 have updated for hadoop-openstack package properties
added comment with jira number
@steveloughran
Contributor

I'd like to propose that the list of filesystem properties to propagate is actually defined as a list in a Spark property; the default could be "fs.s3a, fs.s3n, fs.s3, fs.swift, fs.wasb". This will allow people to extend it (Google GCFS, Aliyun OSS HADOOP-12756, etc.) without having to patch Spark or wait for a new release, and they get the option to turn off the feature altogether. By using the full fs. prefix you could also let people add other properties to carry over.
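
A rough sketch of how such a property-driven list might look; the key name spark.cloud.fs.prefixes and the helper name are made up for illustration, not existing Spark configuration:

    import org.apache.hadoop.conf.Configuration
    import org.apache.spark.SparkConf

    // Hypothetical property name and default list, following the proposal above.
    val defaultPrefixes = "fs.s3a,fs.s3n,fs.s3,fs.swift,fs.wasb"

    def copyFilesystemProps(conf: SparkConf, hadoopConf: Configuration): Unit = {
      val prefixes = conf.get("spark.cloud.fs.prefixes", defaultPrefixes)
        .split(",").map(_.trim).filter(_.nonEmpty)
      // An empty list effectively turns the feature off.
      conf.getAll.foreach { case (key, value) =>
        if (prefixes.exists(p => key.startsWith(p))) {
          hadoopConf.set(key, value)
        }
      }
    }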

One thing I'm confused by though: why do spawned executors get the properties, but not respawned ones?

// Copy any "fs.s3.foo=bar" or "fs.s3a.foo=bar" or "fs.s3n.foo=bar" properties into conf
else if (key.startsWith("fs.s3")){
hadoopConf.set(key, value)
}
Contributor
@steveloughran steveloughran Sep 13, 2016

s3 is the AWS EMR filesystem, but an obsolete one on ASF Hadoop. I would recommend the list of

s3, s3n, s3a, swift, adl, wasb, oss, gs

(edited 10/oct, set list in sync with what I believe is current set)

Contributor

Actually, I would copy everything under fs.s3a, fs.s3, etc. Why? There's a lot more than passwords: for s3a this includes proxy info and passwords, a list of alternate S3 auth mechanisms (e.g. declaring the use of IAM), and so on.
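
For illustration, a few of the hadoop-aws settings beyond the access/secret keys that would be lost if only credentials were copied; the values here are placeholders:

    import org.apache.hadoop.conf.Configuration

    val hadoopConf = new Configuration()
    // Property names come from hadoop-aws (S3AFileSystem); values are placeholders.
    hadoopConf.set("fs.s3a.proxy.host", "proxy.example.com")
    hadoopConf.set("fs.s3a.proxy.port", "8080")
    hadoopConf.set("fs.s3a.aws.credentials.provider",
      "org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider")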

Contributor Author

@steveloughran can you help me with some default settings for adl, wasb, oss, gs to test, or just the syntax for them, so that I can decide on the filter conditions?

@steveloughran
Contributor

Could an automated test be done here? Propagation can be tested with a function run on the executor (such as a map) which fails if the required properties are missing:

  1. (set some properties in the conf)
  2. restart the executor
  3. run a map which fails if things weren't copied
sc.parallelize(Seq("s3n", "s3a", ...)).map { name => (check for fs.s3a properties) or fail }

@lresende
Member

lresende commented Oct 6, 2016

@agsachin Are you planning to address these updates on this PR? It would be good to have this as part of Spark, as it affects multiple usage scenarios on cloud platforms and other cases as well.

@HyukjinKwon
Member

(gentle ping @agsachin)

@agsachin
Contributor Author

agsachin commented Feb 15, 2017

@lresende @steveloughran @HyukjinKwon can we go ahead with key.startsWith("fs.") so that we don't need to check so many conditions?
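
In the copy loop from the earlier diff hunks, that would collapse the per-scheme branches into a single check (a sketch, using the same variable names as the diff):

    conf.getAll.foreach { case (key, value) =>
      if (key.startsWith("spark.hadoop.")) {
        hadoopConf.set(key.substring("spark.hadoop.".length), value)
      } else if (key.startsWith("fs.")) {
        // One prefix check covers s3, s3a, s3n, swift, wasb, adl, oss, gs, ...
        hadoopConf.set(key, value)
      }
    }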

@steveloughran
Contributor

spark.hadoop.fs.* would work.

The (not yet shipped in ASF code) Azure Data Lake FS has, for reasons I don't know and have only just noticed, adopted "dfs.adl" as their prefix. That's going to be fixed before it gets out the door, so fs.* will work there too.

@agsachin
Contributor Author

@steveloughran I have updated the pull request to fs.*

@steveloughran
Contributor

steveloughran commented Feb 25, 2017

  1. It's good to have some tests.
  2. I note that appendS3AndSparkHadoopConfigurations() has a weakness in how it propagates env vars: no propagation of the session environment AWS_SESSION_TOKEN variable to "fs.s3a.session.token", same for region. That's probably significant enough to have its own JIRA though (sketch below).

Update: issue 2 is fixed.
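
For reference on item 2, a hedged sketch of the kind of env-var propagation being described; fs.s3a.session.token is the real s3a property name, while the surrounding code is illustrative only:

    import org.apache.hadoop.conf.Configuration

    val hadoopConf = new Configuration()
    // Mirror the session credentials from the environment into the s3a properties, if present.
    val sessionToken = System.getenv("AWS_SESSION_TOKEN")
    if (sessionToken != null) {
      hadoopConf.set("fs.s3a.session.token", sessionToken)
    }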

@jiangxb1987
Contributor

@agsachin Are you still working on this? If so, would you please update the description and provide some snapshots to demo the behavior before & after the changes? It would also be great if some test cases could be provided (I hope it won't be too hard).

@jiangxb1987
Contributor

add to whitelist

@jiangxb1987
Contributor

test this please

@steveloughran
Contributor

steveloughran commented Jun 21, 2017

Testing should not be too hard. Here's my untested attempt:

    import org.apache.hadoop.conf.Configuration
    import org.apache.spark.SparkConf
    import org.apache.spark.deploy.SparkHadoopUtil

    val sconf = new SparkConf(false)
    sconf.set("fs.example.value", "true")               // an fs.* key that should be carried over
    val conf = new Configuration(false)
    new SparkHadoopUtil().appendS3AndSparkHadoopConfigurations(sconf, conf)
    assert(conf.getBoolean("fs.example.value", false))  // fails if fs.* keys were not copied

@HyukjinKwon
Member

gentle ping @agsachin.

@agsachin
Contributor Author

Thanks @jiangxb1987, will add this test case by tomorrow and will update the PR with results.

@asfgit asfgit closed this in 3a45c7f Aug 5, 2017
@steveloughran
Contributor

I know this hasn't been updated, but it is still important. I can take it on if all it needs is a test case

@jiangxb1987
Contributor

@steveloughran Please feel free to reopen this PR. Thanks!

zifeif2 pushed a commit to zifeif2/spark that referenced this pull request Nov 22, 2025
## What changes were proposed in this pull request?

This PR proposes to close stale PRs, mostly the same instances with apache#18017

Closes apache#14085 - [SPARK-16408][SQL] SparkSQL Added file get Exception: is a directory …
Closes apache#14239 - [SPARK-16593] [CORE] [WIP] Provide a pre-fetch mechanism to accelerate shuffle stage.
Closes apache#14567 - [SPARK-16992][PYSPARK] Python Pep8 formatting and import reorganisation
Closes apache#14579 - [SPARK-16921][PYSPARK] RDD/DataFrame persist()/cache() should return Python context managers
Closes apache#14601 - [SPARK-13979][Core] Killed executor is re spawned without AWS key…
Closes apache#14830 - [SPARK-16992][PYSPARK][DOCS] import sort and autopep8 on Pyspark examples
Closes apache#14963 - [SPARK-16992][PYSPARK] Virtualenv for Pylint and pep8 in lint-python
Closes apache#15227 - [SPARK-17655][SQL]Remove unused variables declarations and definations in a WholeStageCodeGened stage
Closes apache#15240 - [SPARK-17556] [CORE] [SQL] Executor side broadcast for broadcast joins
Closes apache#15405 - [SPARK-15917][CORE] Added support for number of executors in Standalone [WIP]
Closes apache#16099 - [SPARK-18665][SQL] set statement state to "ERROR" after user cancel job
Closes apache#16445 - [SPARK-19043][SQL]Make SparkSQLSessionManager more configurable
Closes apache#16618 - [SPARK-14409][ML][WIP] Add RankingEvaluator
Closes apache#16766 - [SPARK-19426][SQL] Custom coalesce for Dataset
Closes apache#16832 - [SPARK-19490][SQL] ignore case sensitivity when filtering hive partition columns
Closes apache#17052 - [SPARK-19690][SS] Join a streaming DataFrame with a batch DataFrame which has an aggregation may not work
Closes apache#17267 - [SPARK-19926][PYSPARK] Make pyspark exception more user-friendly
Closes apache#17371 - [SPARK-19903][PYSPARK][SS] window operator miss the `watermark` metadata of time column
Closes apache#17401 - [SPARK-18364][YARN] Expose metrics for YarnShuffleService
Closes apache#17519 - [SPARK-15352][Doc] follow-up: add configuration docs for topology-aware block replication
Closes apache#17530 - [SPARK-5158] Access kerberized HDFS from Spark standalone
Closes apache#17854 - [SPARK-20564][Deploy] Reduce massive executor failures when executor count is large (>2000)
Closes apache#17979 - [SPARK-19320][MESOS][WIP]allow specifying a hard limit on number of gpus required in each spark executor when running on mesos
Closes apache#18127 - [SPARK-6628][SQL][Branch-2.1] Fix ClassCastException when executing sql statement 'insert into' on hbase table
Closes apache#18236 - [SPARK-21015] Check field name is not null and empty in GenericRowWit…
Closes apache#18269 - [SPARK-21056][SQL] Use at most one spark job to list files in InMemoryFileIndex
Closes apache#18328 - [SPARK-21121][SQL] Support changing storage level via the spark.sql.inMemoryColumnarStorage.level variable
Closes apache#18354 - [SPARK-18016][SQL][CATALYST][BRANCH-2.1] Code Generation: Constant Pool Limit - Class Splitting
Closes apache#18383 - [SPARK-21167][SS] Set kafka clientId while fetch messages
Closes apache#18414 - [SPARK-21169] [core] Make sure to update application status to RUNNING if executors are accepted and RUNNING after recovery
Closes apache#18432 - resolve com.esotericsoftware.kryo.KryoException
Closes apache#18490 - [SPARK-21269][Core][WIP] Fix FetchFailedException when enable maxReqSizeShuffleToMem and KryoSerializer
Closes apache#18585 - SPARK-21359
Closes apache#18609 - Spark SQL merge small files to big files Update InsertIntoHiveTable.scala

Added:
Closes apache#18308 - [SPARK-21099][Spark Core] INFO Log Message Using Incorrect Executor I…
Closes apache#18599 - [SPARK-21372] spark writes one log file even I set the number of spark_rotate_log to 0
Closes apache#18619 - [SPARK-21397][BUILD]Maven shade plugin adding dependency-reduced-pom.xml to …
Closes apache#18667 - Fix the simpleString used in error messages
Closes apache#18782 - Branch 2.1

Added:
Closes apache#17694 - [SPARK-12717][PYSPARK] Resolving race condition with pyspark broadcasts when using multiple threads

Added:
Closes apache#16456 - [SPARK-18994] clean up the local directories for application in future by annother thread
Closes apache#18683 - [SPARK-21474][CORE] Make number of parallel fetches from a reducer configurable
Closes apache#18690 - [SPARK-21334][CORE] Add metrics reporting service to External Shuffle Server

Added:
Closes apache#18827 - Merge pull request 1 from apache/master

## How was this patch tested?

N/A

Author: hyukjinkwon <[email protected]>

Closes apache#18780 from HyukjinKwon/close-prs.