
Conversation

@Astralidea

before offer resouce, first meet meetsConstraints to filter some offer.

Contributor

There's a typo there. I think the comment isn't necessary though, the name of the variable is clear enough.

@dragos
Contributor

dragos commented Jan 15, 2016

ok to test

@SparkQA

SparkQA commented Jan 15, 2016

Test build #49460 has finished for PR 10768 at commit 6e8a028.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@Astralidea
Author

@dragos A question: should I also change the MesosClusterDispatcher.scala code so the constraint can be passed in as a command-line argument?

@SparkQA

SparkQA commented Jan 16, 2016

Test build #49516 has finished for PR 10768 at commit ef698a9.

  • This patch fails Scala style tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@SparkQA

SparkQA commented Jan 16, 2016

Test build #49517 has finished for PR 10768 at commit f4d586f.

  • This patch passes all tests.
  • This patch merges cleanly.
  • This patch adds no public classes.

@dragos
Contributor

dragos commented Jan 16, 2016

@Astralidea I don't understand your question, sorry.

@Astralidea
Author

@dragos
I start a Mesos framework with
./start-mesos-dispatcher.sh --master mesos://zk://xxx:2181,yyy:2181,l-zzz/mesos
Should I modify the code so that a constraint can be passed as an argument to the main method that starts the dispatcher, like this?
./start-mesos-dispatcher.sh --master mesos://zk://xxx:2181,yyy:2181,l-zzz/mesos --constraints colo:cn2
To do that, MesosClusterDispatcher.scala would also need to parse the command-line argument and pass the configuration on to spark.mesos.constraints.
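The --constraints flag discussed above is hypothetical; start-mesos-dispatcher.sh does not accept it. As a rough sketch of what parsing such a value could look like, the snippet below splits a constraint string such as "colo:cn2" (or "attr1:v1,v2;attr2:v3", the shape spark.mesos.constraints accepts) into an attribute-to-values map. ConstraintsParser and parseConstraints are illustrative names, not Spark's actual API.

```scala
// Illustrative sketch only: parse a constraint string of the form
// "attr1:v1,v2;attr2:v3" into a Map of attribute -> accepted values.
object ConstraintsParser {
  def parseConstraints(s: String): Map[String, Set[String]] =
    if (s.trim.isEmpty) Map.empty
    else s.split(";").map { clause =>
      clause.split(":", 2) match {
        // "attr:v1,v2" -> attribute mapped to the set of listed values
        case Array(attr, values) => attr.trim -> values.split(",").map(_.trim).toSet
        // bare "attr" -> attribute must merely be present, any value accepted
        case Array(attr) => attr.trim -> Set.empty[String]
      }
    }.toMap
}
```

For example, parseConstraints("colo:cn2") yields Map("colo" -> Set("cn2")).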

@dragos
Contributor

dragos commented Jan 18, 2016

Thanks for clarifying. I think this is a bit confusing. You want to respect the Mesos constraints that belong to each particular job that is submitted to the dispatcher. Those configuration options should come from a submission request, not from the Spark config options that are used to launch the dispatcher (probably found inside schedulerProperties)

Right now, the code would only launch drivers on one particular set of constraints, defined when the dispatcher is launched. I believe the better solution is to allow each Spark job to define its Mesos constraints independently, when submitting.
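A minimal sketch of the suggestion above, assuming the per-job properties arrive as a plain Map (the real code keeps them in something like schedulerProperties): the dispatcher would look up spark.mesos.constraints in the job's own submission properties and fall back to the dispatcher-wide value only when the job sets none. resolveConstraints is a hypothetical helper name, not the actual patch.

```scala
// Hedged sketch: prefer the job's own constraints over the dispatcher default.
object JobConstraints {
  def resolveConstraints(
      jobProperties: Map[String, String],
      dispatcherDefault: String): String =
    jobProperties.getOrElse("spark.mesos.constraints", dispatcherDefault)
}
```

With this shape, a job submitted with spark.mesos.constraints=colo:cn2 keeps its own value, while a job that sets nothing inherits the dispatcher's.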

@BrickXu

BrickXu commented Jan 18, 2016

@dragos great idea!

@Astralidea
Author

@dragos Thank you for your answers, so the code may not need any other changes.
I will run a test after changing spark-defaults.conf to see whether it takes effect when deploying a driver. That is, when a job is submitted, will spark-defaults.conf be read each time?

Contributor

This is the problem: This setting is read only once, when the dispatcher is started. What you want is to pick up the constraints set on the submitted job. Have a look at how other job-specific settings are treated in the code (there's a submitProperties variable, or something similar).

@dragos
Contributor

dragos commented Jan 19, 2016

@Astralidea The code needs to change, but probably the change is minor. See my comment

@andrewor14
Contributor

FYI #10949 is another patch for the same issue.

@andrewor14
Contributor

Also @Astralidea please change the title to include [MESOS] instead of [CORE]

@Astralidea changed the title from [SPARK-12832][CORE] Fix dispatcher does not have a constraints config to [SPARK-12832][MESOS] Fix dispatcher does not have a constraints config on Feb 2, 2016
@Astralidea
Author

@dragos Sorry, I have been busy these days. The feature in your comment is great, because right now when I change the configuration I have to redeploy the mesos-dispatcher. But in my system the current behavior is enough: I don't change the config every day, and it is fairly stable. I also have not figured out how to reload the configuration every time. If I have time to do this, I will push another patch.

@dragos
Contributor

dragos commented Feb 2, 2016

@Astralidea I think we should focus on getting #10949 in, which implements exactly this behavior.

@Astralidea
Author

@dragos OK, I hope this issue can be fixed in the next version. Reading spark.mesos.constraints only once is enough for my use case.

@Astralidea closed this Feb 3, 2016