[SPARK-21099][Spark Core] INFO Log Message Using Incorrect Executor Idle Timeout Value #18308
Conversation
LGTM. BTW, can you please complete the PR description? Thanks.
Jenkins test this please
Test build #78086 has finished for PR 18308 at commit
if (testing || executorsRemoved.nonEmpty) {
  executorsRemoved.foreach { removedExecutorId =>
    newExecutorTotal -= 1
    val hasCachedBlocks = SparkEnv.get.blockManager.master.hasCachedBlocks(executorId);
The variable executorId is not defined here; it should be changed to removedExecutorId.
Also, the trailing semicolon ";" is not necessary.
Btw, there is a chance that by the time we query the executor from the BlockManager, the executor/block manager has already been removed, so we could get false here.
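Restating that point in code (a sketch only, not text from the PR; removedExecutorId is the loop variable from the diff above):

// A false result can mean either "no cached blocks" or "block manager already removed";
// the two cases are indistinguishable at this call site.
val hasCachedBlocks = SparkEnv.get.blockManager.master.hasCachedBlocks(removedExecutorId)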
@jerryshao - I pushed another commit fixing removedExecutorId and removing the ';'...thanks for the feedback on that. Regarding the edge case where the block manager was already removed and we could potentially get false...
ok to test
Test build #78126 has finished for PR 18308 at commit
Yes, that's the case; we cannot differentiate these two scenarios. But I think it is fine, since it is just a log issue and it's hard for us to differentiate them in the current code.
Test build #3799 has finished for PR 18308 at commit
if (testing || executorsRemoved.nonEmpty) {
  executorsRemoved.foreach { removedExecutorId =>
    newExecutorTotal -= 1
    val hasCachedBlocks = SparkEnv.get.blockManager.master.hasCachedBlocks(removedExecutorId)
This is a pretty expensive call just so you can print a more accurate log message... at the very least it should be inside the logInfo call:
logInfo {
  expensiveStuff()
  "This is the log message referencing the expensive stuff."
}
But I'd prefer a solution that doesn't make this call at all, since INFO is more often than not enabled.
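For context (this is how Spark's Logging trait behaves, not something spelled out in the thread): logInfo takes its message by name, so a block passed to it is only evaluated when INFO logging is actually enabled. A rough sketch of the suggestion above, reusing the variable names from the diff:

// Evaluated only when INFO is enabled, because logInfo(msg: => String) is by-name.
logInfo {
  val hasCachedBlocks = SparkEnv.get.blockManager.master.hasCachedBlocks(removedExecutorId)
  val timeout = if (hasCachedBlocks) cachedExecutorIdleTimeoutS else executorIdleTimeoutS
  s"Removing executor $removedExecutorId because it has been idle for " +
    s"$timeout seconds (new desired total will be $newExecutorTotal)"
}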
Can one of the admins verify this patch?
Ping @ihazem
Ping @ihazem to update or close
val timeout = if (hasCachedBlocks) cachedExecutorIdleTimeoutS else executorIdleTimeoutS
logInfo(s"Removing executor $removedExecutorId because it has been idle for " +
  s"$executorIdleTimeoutS seconds (new desired total will be $newExecutorTotal)")
  s"${if (SparkEnv.get.blockManager.master.hasCachedBlocks(executorId)) cachedExecutorIdleTimeoutS else executorIdleTimeoutS} seconds (new desired total will be $newExecutorTotal)")
This is hard to read. @vanzin's idea is better. Can you split this into two statements: the cheap one at info, the expensive one at debug?
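A rough sketch of that split (my reading of the suggestion, not code from the PR; the names follow the diff above):

// Cheap, unconditional message at INFO.
logInfo(s"Removing executor $removedExecutorId because it has been idle for " +
  s"$executorIdleTimeoutS seconds (new desired total will be $newExecutorTotal)")
// Expensive block-manager lookup only at DEBUG (logDebug is also by-name, so this
// block is skipped entirely unless DEBUG logging is enabled).
logDebug {
  val hasCachedBlocks = SparkEnv.get.blockManager.master.hasCachedBlocks(removedExecutorId)
  val timeout = if (hasCachedBlocks) cachedExecutorIdleTimeoutS else executorIdleTimeoutS
  s"Effective idle timeout for executor $removedExecutorId was $timeout seconds"
}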
Can you update the title to: ?
Is this the final modified code, @ihazem? Why do you have this? Can you please at least do a round of self-review before pushing the changes.
@jerryshao - sorry about that...let me remove the logic outside the logInfo statements. Missed that. I'll do some more self-review and peer review going forward before submitting, so I don't waste everyone's time here. @jiangxb1987 - I'll update the title per your recommendation. @srowen - do you or @vanzin have any recommendations on making a cheaper call at the info level? I can definitely make the more expensive call at debug/warn, but could use some direction on a cheaper call for the info level. Basically, we want to use cachedExecutorIdleTimeoutS if the executor has cached blocks (whereas right now it uses executorIdleTimeoutS). Thanks!
There is no cheaper call you can make. Which is the problem here. So either you have to modify the code so that it keeps track of why the executor has timed out (i.e. more state being passed around), or you need to tweak the message so that it says that the timeout is different when there are cached blocks on the executor. I'm not so sure there's a lot to gain from printing the exact timeout value in this message, so I'm leaning towards just tweaking the message a little bit (e.g. add "or $otherTimeout ms if executor had cached blocks"), or even not doing anything.
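A minimal sketch of the message tweak floated above (just illustrating the wording idea; the PR was closed without settling on it):

logInfo(s"Removing executor $removedExecutorId because it has been idle for " +
  s"$executorIdleTimeoutS seconds (or $cachedExecutorIdleTimeoutS seconds if it held " +
  s"cached blocks); new desired total will be $newExecutorTotal")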
## What changes were proposed in this pull request?

This PR proposes to close stale PRs, mostly the same instances with apache#18017

Closes apache#14085 - [SPARK-16408][SQL] SparkSQL Added file get Exception: is a directory …
Closes apache#14239 - [SPARK-16593] [CORE] [WIP] Provide a pre-fetch mechanism to accelerate shuffle stage.
Closes apache#14567 - [SPARK-16992][PYSPARK] Python Pep8 formatting and import reorganisation
Closes apache#14579 - [SPARK-16921][PYSPARK] RDD/DataFrame persist()/cache() should return Python context managers
Closes apache#14601 - [SPARK-13979][Core] Killed executor is re spawned without AWS key…
Closes apache#14830 - [SPARK-16992][PYSPARK][DOCS] import sort and autopep8 on Pyspark examples
Closes apache#14963 - [SPARK-16992][PYSPARK] Virtualenv for Pylint and pep8 in lint-python
Closes apache#15227 - [SPARK-17655][SQL]Remove unused variables declarations and definations in a WholeStageCodeGened stage
Closes apache#15240 - [SPARK-17556] [CORE] [SQL] Executor side broadcast for broadcast joins
Closes apache#15405 - [SPARK-15917][CORE] Added support for number of executors in Standalone [WIP]
Closes apache#16099 - [SPARK-18665][SQL] set statement state to "ERROR" after user cancel job
Closes apache#16445 - [SPARK-19043][SQL]Make SparkSQLSessionManager more configurable
Closes apache#16618 - [SPARK-14409][ML][WIP] Add RankingEvaluator
Closes apache#16766 - [SPARK-19426][SQL] Custom coalesce for Dataset
Closes apache#16832 - [SPARK-19490][SQL] ignore case sensitivity when filtering hive partition columns
Closes apache#17052 - [SPARK-19690][SS] Join a streaming DataFrame with a batch DataFrame which has an aggregation may not work
Closes apache#17267 - [SPARK-19926][PYSPARK] Make pyspark exception more user-friendly
Closes apache#17371 - [SPARK-19903][PYSPARK][SS] window operator miss the `watermark` metadata of time column
Closes apache#17401 - [SPARK-18364][YARN] Expose metrics for YarnShuffleService
Closes apache#17519 - [SPARK-15352][Doc] follow-up: add configuration docs for topology-aware block replication
Closes apache#17530 - [SPARK-5158] Access kerberized HDFS from Spark standalone
Closes apache#17854 - [SPARK-20564][Deploy] Reduce massive executor failures when executor count is large (>2000)
Closes apache#17979 - [SPARK-19320][MESOS][WIP]allow specifying a hard limit on number of gpus required in each spark executor when running on mesos
Closes apache#18127 - [SPARK-6628][SQL][Branch-2.1] Fix ClassCastException when executing sql statement 'insert into' on hbase table
Closes apache#18236 - [SPARK-21015] Check field name is not null and empty in GenericRowWit…
Closes apache#18269 - [SPARK-21056][SQL] Use at most one spark job to list files in InMemoryFileIndex
Closes apache#18328 - [SPARK-21121][SQL] Support changing storage level via the spark.sql.inMemoryColumnarStorage.level variable
Closes apache#18354 - [SPARK-18016][SQL][CATALYST][BRANCH-2.1] Code Generation: Constant Pool Limit - Class Splitting
Closes apache#18383 - [SPARK-21167][SS] Set kafka clientId while fetch messages
Closes apache#18414 - [SPARK-21169] [core] Make sure to update application status to RUNNING if executors are accepted and RUNNING after recovery
Closes apache#18432 - resolve com.esotericsoftware.kryo.KryoException
Closes apache#18490 - [SPARK-21269][Core][WIP] Fix FetchFailedException when enable maxReqSizeShuffleToMem and KryoSerializer
Closes apache#18585 - SPARK-21359
Closes apache#18609 - Spark SQL merge small files to big files Update InsertIntoHiveTable.scala

Added:
Closes apache#18308 - [SPARK-21099][Spark Core] INFO Log Message Using Incorrect Executor I…
Closes apache#18599 - [SPARK-21372] spark writes one log file even I set the number of spark_rotate_log to 0
Closes apache#18619 - [SPARK-21397][BUILD]Maven shade plugin adding dependency-reduced-pom.xml to …
Closes apache#18667 - Fix the simpleString used in error messages
Closes apache#18782 - Branch 2.1

Added:
Closes apache#17694 - [SPARK-12717][PYSPARK] Resolving race condition with pyspark broadcasts when using multiple threads

Added:
Closes apache#16456 - [SPARK-18994] clean up the local directories for application in future by annother thread
Closes apache#18683 - [SPARK-21474][CORE] Make number of parallel fetches from a reducer configurable
Closes apache#18690 - [SPARK-21334][CORE] Add metrics reporting service to External Shuffle Server

Added:
Closes apache#18827 - Merge pull request 1 from apache/master

## How was this patch tested?

N/A

Author: hyukjinkwon <[email protected]>

Closes apache#18780 from HyukjinKwon/close-prs.
[SPARK-21099][Spark Core] INFO Log Message Using Incorrect Executor Idle Timeout Value
What changes were proposed in this pull request?
Updated the INFO log statement (logInfo) to interpolate $timeout instead of $executorIdleTimeoutS, where timeout resolves to cachedExecutorIdleTimeoutS if the executor has cached blocks and to executorIdleTimeoutS otherwise.
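In code terms, a sketch matching the diff discussed in the conversation above (not the merged state of Spark, since this PR was closed):

val hasCachedBlocks = SparkEnv.get.blockManager.master.hasCachedBlocks(removedExecutorId)
val timeout = if (hasCachedBlocks) cachedExecutorIdleTimeoutS else executorIdleTimeoutS
logInfo(s"Removing executor $removedExecutorId because it has been idle for " +
  s"$timeout seconds (new desired total will be $newExecutorTotal)")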
How was this patch tested?
Testing was done with the following settings and steps:
executorIdleTimeout=30
cachedExecutorIdleTimeout=20
shell.log.level=INFO
scala> val textFile = sc.textFile("/user/spark/applicationHistory/app_1234")
scala> textFile.cache().count()
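A hedged reconstruction of how those settings map onto an actual run (the conf key names below are my assumption of what the shorthand above refers to, not text from the PR): start spark-shell with dynamic allocation enabled, e.g. --conf spark.dynamicAllocation.enabled=true --conf spark.dynamicAllocation.executorIdleTimeout=30s --conf spark.dynamicAllocation.cachedExecutorIdleTimeout=20s, then:

// In spark-shell: cache an RDD so the executor holds blocks, then wait past the
// cached-executor idle timeout and check the "Removing executor ..." INFO line
// for which timeout value it reports.
sc.setLogLevel("INFO")
val textFile = sc.textFile("/user/spark/applicationHistory/app_1234")
textFile.cache().count()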