[SPARK-20835][Core] It should exit directly when the --total-executor-cores parameter is set to less than 0 when submitting an application #18060
Conversation
|
I agree it should fail faster, but shouldn't many args be validated here? CC @vanzin |
|
@srowen The other parameters are validated elsewhere; for example, the --executor-memory parameter is validated in org.apache.spark.memory.UnifiedMemoryManager$.getMaxMemory, where the app exits with a "java.lang.NumberFormatException" if the value is negative. But the --total-executor-cores parameter is not validated anywhere, so the app does not exit when it is negative. |
|
Yeah, but maybe it's a good idea to validate that all of these aren't negative upfront. Fail faster and more consistently. |
|
@srowen Thank you for the suggestion. I have validated the other numerical parameters here; any other suggestions? |
|
Test build #3750 has finished for PR 18060 at commit
|
|
Jenkins, retest this please |
|
@SparkQA Retest this please, thanks! |
|
Test build #3757 has finished for PR 18060 at commit
|
|
Test build #3760 has finished for PR 18060 at commit
|
|
Merged to master |
What changes were proposed in this pull request?
In my test, the submitted app ran without an error when --total-executor-cores was less than 0, and only produced warnings such as:
"2017-05-22 17:19:36,319 WARN org.apache.spark.scheduler.TaskSchedulerImpl: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources"
Instead, spark-submit should exit directly when the --total-executor-cores parameter is set to less than 0 when submitting an application.
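The fail-fast check discussed above can be sketched as follows. This is a minimal illustration, not the actual SparkSubmitArguments code; the object and method names here are hypothetical, and it assumes the goal is simply to reject non-positive numeric submit options before the application is launched, rather than letting the job hang with "Initial job has not accepted any resources" warnings:

```scala
import scala.util.Try

// Hypothetical helper sketching the PR's idea: validate numeric
// spark-submit options up front and fail with a clear message.
object SubmitArgValidator {
  // Returns an error message if `value` is not a positive integer,
  // or None if the option is acceptable.
  def validate(option: String, value: String): Option[String] =
    Try(value.toInt).toOption match {
      case Some(n) if n > 0 => None
      case _ => Some(s"$option must be a positive number, got: $value")
    }
}
```

In the real patch, a failed check would call the submit tool's error/exit path so the process terminates immediately instead of waiting for resources that can never be allocated.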
How was this patch tested?
Ran the existing unit tests.