Excluding certain parameter values from future trials by abandoning them does not work #471
Comments
Hi, @mkhan037, thank you for reporting this! This looks like it might be a bug; we'll investigate and get back to you.
Hi @lena-kashtelyan, just following up to see if there is any update on this. I was also considering the following strategy to tackle the issue in case abandoning the trials does not work; please let me know whether it seems feasible.
My concern is whether setting the expected improvement to 0 for specific points could lead to model-fitting issues. Please let me know your opinion. Thank you for your time.
Just to make sure I understand: do you know the memory requirements needed so that runs don't OOM, or do you need to estimate those as well? If these limits are unknown, then you'd have to estimate a binary response (feasible / not feasible). This probably wouldn't work well with our default models; there are specialized strategies for this (see e.g. https://arxiv.org/pdf/1907.10383.pdf), but we currently do not have those methods implemented. If you do know the mapping from nodes to memory, things get easier, since you don't need to estimate it; we'd just have to encode it as parameter constraints. It seems that even with different node types you could still use a linear constraint, if you parameterize your search space with a parameter for the number of nodes of each node type (say …)
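The linear-constraint idea above can be sketched in plain Python. The per-node-type memory table and the node-type names below are hypothetical stand-ins; the resulting string follows the `a*x1 + b*x2 >= c` form that Ax accepts in `parameter_constraints`, though the exact experiment setup would depend on your search space.

```python
# Sketch: build a linear parameter-constraint string over one count
# parameter per node type. Memory sizes and node names are assumptions
# for illustration, not values from the issue.
NODE_MEMORY_GB = {"node_type_a": 48, "node_type_b": 64}

def memory_constraint(required_gb: int) -> str:
    """Return a constraint string such as
    '48*workers_node_type_a + 64*workers_node_type_b >= 192'."""
    terms = " + ".join(
        f"{mem}*workers_{node}" for node, mem in sorted(NODE_MEMORY_GB.items())
    )
    return f"{terms} >= {required_gb}"

print(memory_constraint(192))
# This string could then be passed to
# AxClient.create_experiment(..., parameter_constraints=[memory_constraint(192)]).
```

The constraint expresses "total cluster memory across all node types must meet the requirement" without any model having to learn feasibility.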
Hi @Balandat, thanks for the response. The amount of total cluster memory that avoids OOM can be calculated (it is a formula involving some execution statistics). Thanks for the suggestion on using […]. I hope that clarifies the problem scenario a bit more. Thank you for your time.
Update on this: abandoning trials not working was in fact a bug, and the fix should be on master today. Thank you for pointing it out, @mkhan037!
The fix for this is now on master as of f6457e6; we will be releasing a new stable version imminently as well. |
Hi @lena-kashtelyan, thanks for taking the time to fix the bug! I will check out the master branch and test it later. In the meantime, feel free to close this issue.
This should now be fixed in latest stable release, 0.1.20. |
Hi, I am trying to minimize the execution cost of some distributed applications through optimal resource allocation. Currently, I am exploring the optimization of a small search space, which will be expanded after we figure out the necessary implementation details. The search space right now only consists of a range parameter that indicates the number of worker nodes needed for execution.
When the total memory allocated for one of these applications is below a certain threshold, the application either fails to execute or runs so long that its execution cost is sub-optimal. We want to avoid executing these configurations, as they waste both time and money.
We can estimate the amount of memory these applications need to at least execute successfully. For our toy search space, we can employ a linear constraint on the number of nodes (workers). However, once we expand the search space to contain different node types, a single constraint on the node count no longer works, because different node types have different amounts of memory; the total cluster memory can be calculated as follows.
```
total_cluster_memory = get_memory_amount_for_node_type(node_type) * worker_count
```
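For concreteness, the formula above and the minimum worker count it implies can be written as a short sketch. The function names mirror the pseudocode; the memory table is a hypothetical stand-in for whatever `get_memory_amount_for_node_type` looks up in practice.

```python
import math

# Hypothetical per-node-type memory table, standing in for the real lookup.
_NODE_MEMORY_GB = {"standard": 48}

def get_memory_amount_for_node_type(node_type: str) -> int:
    return _NODE_MEMORY_GB[node_type]

def total_cluster_memory(node_type: str, worker_count: int) -> int:
    # The formula from the issue text.
    return get_memory_amount_for_node_type(node_type) * worker_count

def min_workers(node_type: str, required_gb: int) -> int:
    # Smallest worker_count whose total cluster memory meets the requirement.
    return math.ceil(required_gb / get_memory_amount_for_node_type(node_type))

print(min_workers("standard", 4 * 48))  # matches the 4-node example below
```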
To avoid actually executing the clearly non-optimal and costly cluster configurations, we considered returning artificially high values to deter Ax from suggesting them in future iterations. However, this type of solution was discouraged in this comment.
Another possible solution we pursued was abandoning the suggested cluster configurations that do not have enough memory. According to this comment in #372, the Ax client should not retry abandoned points. However, we see that, even after abandoning a point, it gets suggested again by the Ax client.
We ran the code for an application that needs at least 4 * 48 GB of memory to execute; since each node has 48 GB of memory, we would need 4 worker nodes. We detect whether a suggested configuration should be skipped using the function `skip_sample_based_on_memory_amount`. We re-ran this a few times, and in many cases Ax keeps suggesting configurations that we had marked abandoned. In one such example, Ax repeatedly suggested a configuration with a worker count of 2, which had been marked abandoned. How should we tackle this issue? Another path could be using the total number of cores and the total memory as search space parameters. However, that would lead to many suggested configurations being invalid, as the granularity of cores and memory depends on the type of worker node.