Implement dynamic EMR clusters mode #88
Closed
🔄 Pull Request
📝 Description
Users can now specify an EMR cluster name/ID/ARN dynamically in the query. To enable this, set `dyanmic_emr_cluster_mode` in `config.yaml` to `true`. When using a cluster name, the cluster must be active (since cluster names can be reused).
Added a `dyanmic_emr_cluster_mode` boolean to the config. This mode cannot be used together with a static servers specification.
All tool calls now require a `server_spec` parameter:

```python
server_spec = {
    "static_server_spec": {
        "server_name": str,
        "default_client": bool
    },
    "dynamic_emr_server_spec": {
        "emr_cluster_arn": str,
        "emr_cluster_id": str,
        "emr_cluster_name": str
    }
}
```
In static mode, `static_server_spec` is used; in dynamic mode, `dynamic_emr_server_spec` is used.
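For illustration, the two shapes a call might pass (the server name and cluster ID values here are hypothetical):

```python
# Static mode: route to a preconfigured server by name.
static_call = {
    "static_server_spec": {
        "server_name": "prod-history-server",  # hypothetical server name
        "default_client": False,
    }
}

# Dynamic mode: resolve the EMR cluster at query time; any one identifier suffices.
dynamic_call = {
    "dynamic_emr_server_spec": {
        "emr_cluster_id": "j-2AXXXXXXGAPLF",  # hypothetical cluster ID
    }
}
```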
Dynamically created Spark clients are cached (see the sketch below):
- by ARN: lifetime
- by ID: lifetime
- by name: for the session only (cluster names can be reused)
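A rough sketch of this caching policy (the cache names and the `build_client` callable are hypothetical, not the PR's actual code):

```python
from typing import Any, Callable

# Hypothetical caches, illustrating the policy above.
_lifetime_cache: dict[str, Any] = {}  # keyed by cluster ARN or ID
_session_cache: dict[str, Any] = {}   # keyed by cluster name; cleared per session


def get_cached_client(spec: dict, build_client: Callable[[dict], Any]) -> Any:
    """Return a cached Spark client for a dynamic_emr_server_spec."""
    # An ARN or cluster ID always denotes the same cluster, so cache for the lifetime.
    for key in ("emr_cluster_arn", "emr_cluster_id"):
        if value := spec.get(key):
            if value not in _lifetime_cache:
                _lifetime_cache[value] = build_client(spec)
            return _lifetime_cache[value]
    # A name can be reused by a later cluster, so cache only for the session.
    name = spec["emr_cluster_name"]
    if name not in _session_cache:
        _session_cache[name] = build_client(spec)
    return _session_cache[name]
```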
Created an `EMRclient` to look up the relevant cluster when needed.
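A minimal sketch of how a name-based lookup could work with boto3 (the function is illustrative, not the PR's implementation; only active states are queried because names can be reused across terminated clusters):

```python
import boto3


def find_active_cluster_id_by_name(name: str) -> str | None:
    """Return the ID of an active EMR cluster with the given name, if any."""
    emr = boto3.client("emr")
    paginator = emr.get_paginator("list_clusters")
    # Restrict to active states: terminated clusters may share the same name.
    pages = paginator.paginate(
        ClusterStates=["STARTING", "BOOTSTRAPPING", "RUNNING", "WAITING"]
    )
    for page in pages:
        for cluster in page["Clusters"]:
            if cluster["Name"] == name:
                return cluster["Id"]
    return None
```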
🎯 Type of Change
🧪 Testing
`task test`
🔬 Test Commands Run
🛠️ New Tools Added (if applicable)
new_tool_name
📸 Screenshots (if applicable)
✅ Checklist
📚 Related Issues
Fixes #86
🤔 Additional Context
🎉 Thank you for contributing! Your effort helps make Spark monitoring more intelligent.