[SPARK-4449][Core] Specify port range in spark #3314
Changes from 5 commits: `5f5fda8`, `4203b7f`, `1773b12`, `cd1a88d`, `da05d64`, `fa5bcbb`
```diff
@@ -1652,25 +1652,28 @@ private[spark] object Utils extends Logging {
    * Attempt to start a service on the given port, or fail after a number of attempts.
    * Each subsequent attempt uses 1 + the port used in the previous attempt (unless the port is 0).
    *
-   * @param startPort The initial port to start the service on.
+   * @param port The initial port to start the service on.
    * @param maxRetries Maximum number of retries to attempt.
    *                   A value of 3 means attempting ports n, n+1, n+2, and n+3, for example.
    * @param startService Function to start service on a given port.
    *                     This is expected to throw java.net.BindException on port collision.
    */
  def startServiceOnPort[T](
-      startPort: Int,
+      port: Int,
```
**Contributor** commented on `startPort: Int,`:

> In this API, `startPort` means it will try starting from this port. I think `startPort` is a better name.

**Author** replied:

> Have used
```diff
       startService: Int => (T, Int),
       conf: SparkConf = new SparkConf(),
       serviceName: String = "",
       maxRetries: Int = portMaxRetries): (T, Int) = {
     val serviceString = if (serviceName.isEmpty) "" else s" '$serviceName'"
+    val startPort = conf.getInt("spark.port.min", 1024)
+    val endPort = conf.getInt("spark.port.max", 65536)
     for (offset <- 0 to maxRetries) {
-      // Do not increment port if startPort is 0, which is treated as a special port
-      val tryPort = if (startPort == 0) {
-        startPort
+      // Do not increment port if port is 0, which is treated as a special port
+      val tryPort = if (port == 0) {
+        port
```
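The two `conf.getInt` lookups above read the allowed port range with fallback defaults. A minimal stand-in for that lookup behavior (using a plain `Map` instead of `SparkConf`, which is assumed here for illustration) looks like:

```scala
object ConfSketch {
  // Stand-in for SparkConf.getInt(key, default): parse the value if the
  // key is present, otherwise fall back to the supplied default.
  def getInt(conf: Map[String, String], key: String, default: Int): Int =
    conf.get(key).map(_.toInt).getOrElse(default)

  def main(args: Array[String]): Unit = {
    val conf = Map("spark.port.min" -> "2000")
    println(getInt(conf, "spark.port.min", 1024))  // key set: 2000
    println(getInt(conf, "spark.port.max", 65536)) // key absent: 65536
  }
}
```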
**Contributor:**

> Perhaps I'm missing something: why aren't we applying the restriction when using port 0 (the ephemeral port)? Most of the services default to 0, meaning "pick a port", and we want those to end up in this range. It seems like this would be clearer if, when the range is specified, we just ignored the port passed in and iterated over that range.

**Contributor:**

> Sorry, I think your last comment/question hits on this issue; that seems better. As long as all the services default to port 0 (other than the web UI), this seems fine. That way, if the user does specify a port explicitly, it will still be used.
```diff
       } else {
-        // If the new port wraps around, do not try a privilege port
-        ((startPort + offset - 1024) % (65536 - 1024)) + 1024
+        // If the new port wraps around, ensure it is in range(startPort, endPort)
+        ((port + offset) % (endPort - startPort + 1)) + startPort
       }
       try {
         val (service, port) = startService(tryPort)
```
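For context, the retry pattern `startServiceOnPort` implements can be sketched standalone. This is an illustrative reconstruction, not Spark's actual code: `ServerSocket` stands in for an arbitrary service, the range bounds are hardcoded rather than read from a `SparkConf`, and the modulo subtracts `startPort` before wrapping (an assumption of this sketch, so that offset 0 yields the requested port when it lies inside the range):

```scala
import java.net.{BindException, ServerSocket}

object RetrySketch {
  // Illustrative bounds standing in for spark.port.min / spark.port.max
  val startPort = 1024
  val endPort = 65535

  // Try successive ports until one binds, wrapping within [startPort, endPort].
  // Port 0 is special: it asks the OS for a fresh ephemeral port on each attempt.
  // Assumes port == 0 or port >= startPort.
  def startServiceOnPort(port: Int, maxRetries: Int): (ServerSocket, Int) = {
    for (offset <- 0 to maxRetries) {
      val tryPort =
        if (port == 0) port
        else ((port + offset - startPort) % (endPort - startPort + 1)) + startPort
      try {
        val socket = new ServerSocket(tryPort)
        return (socket, socket.getLocalPort)
      } catch {
        case _: BindException => () // port taken, fall through to the next offset
      }
    }
    throw new BindException(s"Could not bind a port starting from $port")
  }

  def main(args: Array[String]): Unit = {
    val (s1, p1) = startServiceOnPort(0, 3)  // ephemeral: OS picks a free port
    val (s2, p2) = startServiceOnPort(p1, 3) // p1 is now busy, so retry kicks in
    println(p1 > 0 && p2 != p1)
    s1.close(); s2.close()
  }
}
```

Binding the already-taken port `p1` a second time throws `BindException`, so the loop moves to the next offset, which is the behavior the patched loop relies on.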
**Contributor:**

> It seems a bit odd here to be grabbing the SparkConf out of the SecurityManager; generally it would be better to pass it into HttpServer itself. I'll leave that up to one of the core committers, though.

**Contributor:**

> I think both are OK. If we pass a SparkConf into HttpServer, it is actually the same as `securityManager.sparkConf`.