Add playbooks to provision AWS VMs for CI #9744
Conversation
test/ci/create_aws_vm.yml
Outdated
assign_public_ip: yes
instance_tags:
  Name: "{{ outer_item.name_template }}-{{ item }}"
  "kubernetes.io/cluster/{{ aws_cluster_id }}": "true"
TODO: This needs fixing, as this var doesn't get templated for some reason
@vrutkovs I suspect the reason is that it's not valid JSON. What I ended up doing internally for the same thing was:
- name: Set ec2_tags fact for EC2 instances
set_fact:
ec2_tags: |
{
"kubernetes.io/cluster/{{ openshift_clusterid }}": "true",
"Name": "{{ name }}"
}
followed by
- name: Add tags to {{ name }} instance
ec2_tag:
region: "{{ aws_region }}"
resource: "{{ instance_id }}"
state: present
tags: "{{ ec2_tags }}"
hth
Also, I believe you want to change the tag's value to one of the two allowed values ("owned" or "shared"): https://github.com/kubernetes/kubernetes/blob/master/pkg/cloudprovider/providers/aws/tags.go#L43-L51
Yeah, that did the trick, thanks!
Not sure if we care about owned vs. shared, might add this later
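For reference, a minimal variant of the set_fact snippet above using one of those allowed values (picking "owned" here is just an assumption; "shared" is the other option):

- name: Set ec2_tags fact for EC2 instances
  set_fact:
    ec2_tags: |
      {
        "kubernetes.io/cluster/{{ aws_cluster_id }}": "owned",
        "Name": "{{ name }}"
      }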
Force-pushed 4852912 to ae4ed43
Force-pushed 103727f to a3ddb54
test/ci/create_aws_vm.yml
Outdated
region: "{{ aws_region }}"
when: is_atomic
with_items: "{{ ec2.results }}"
I suspect you do need to tag your volumes with the same kubernetes.io/cluster/ key and value as above.
You want this especially if you're planning to use PVs ... just saying.
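A rough sketch of what that could look like, assuming the instances were registered as ec2 and the result exposes block_device_mapping (the loop structure and the tag value are assumptions based on the snippets above):

- name: Tag instance volumes with the cluster key
  ec2_tag:
    region: "{{ aws_region }}"
    resource: "{{ item.value.volume_id }}"
    state: present
    tags:
      "kubernetes.io/cluster/{{ aws_cluster_id }}": "true"
  with_dict: "{{ ec2.instances.0.block_device_mapping | default({}) }}"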
This is a volume for Atomic's docker storage; it doesn't know about clusters and such. In any case, it seems the bot on our instance is deleting those correctly without any tags.
okay, fair point
Force-pushed babbd68 to c57dad8
Force-pushed 6130efd to 4475ffc
{{ entry }} ansible_host='{{ hostvars[entry]['ansible_host'] }}' aws_id='{{ hostvars[entry]['aws_id'] }}' {{ addon_opts }}
{% endfor %}
{% endif %}
nit: you may want a nicely generated ini file, and for that you could do with a bit of whitespace trimming - http://jinja.pocoo.org/docs/dev/templates/#whitespace-control
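For illustration, a trimmed variant of the loop above; the minus in -%} strips the newline the block tag would otherwise leave in the rendered file (a sketch, not the final template; the loop variable and group name are assumptions):

{% for entry in groups['all'] -%}
{{ entry }} ansible_host='{{ hostvars[entry]['ansible_host'] }}' aws_id='{{ hostvars[entry]['aws_id'] }}' {{ addon_opts }}
{% endfor -%}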
Yeah, I had this initially and it turned into a nightmare :(
This inventory is temporary anyway; all it needs is the hostnames and host-specific params (just one, actually), so I wouldn't worry about it much.
Force-pushed 1e51abd to 85251f9
test/ci/launch.yml
Outdated
instance_tags: "{{ aws_instance_tags }}"
volumes: "{{ item.aws_volumes | default(omit) }}"
register: ec2
with_items: "{{ vms }}"
aws_instances?
Fixed
test/ci/launch.yml
Outdated
aws_region: "{{ aws_region }}"
aws_ip: "{{ item.instances.0.public_ip }}"
aws_id: "{{ item.instances.0.id }}"
node_group: "{{ item.instances.0.tags['ansible-node-group'] }}"
I don't think 'node_group' is the correct variable name here. openshift_node_group_name ?
Fixed. This was set incorrectly for the in-memory inventory, but was correctly rendered in hosts/inventory.
- nodes
aws_flavor: t2.large
aws_security_group: public
node_group: "node-config-all-in-one"
This could be better handled by group vars.
Possible inventory groups:

[OSEv3:children]
etcd
nodes

[nodes:children]
masters
compute

[etcd:children]
masters

[masters]
master1.example.com

[masters:vars]
openshift_node_group_name: node-config-master-infra

[compute]
node1.example.com

[compute:vars]
openshift_node_group_name: node-config-compute

Obviously, we can define vars sections in appropriate group_vars files, and we can adjust groups for different scenarios.
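For example, the vars sections could then live in group_vars files like these (paths illustrative):

# inventory/group_vars/masters.yml
openshift_node_group_name: node-config-master-infra

# inventory/group_vars/compute.yml
openshift_node_group_name: node-config-compute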
That'd complicate vars.yml - a list of groups to create and assign the instance to the group? In the end I'd still have to set per-instance aws_ip and aws_id, so why not put openshift_node_group_name there too?
> That'd complicate vars.yml - a list of groups to create and assign the instance to the group? In the end I'd still have to set per-instance aws_ip and aws_id, so why not put openshift_node_group_name there too?

I'm not sure what you mean. You don't need to 'create' the groups; you just make this the base template of the inventory file (since we're not using dynamic inventory).
That'd limit our choice of groups. Instead of hardcoding the list of groups in the template, these groups are now created automagically. So later on we could try scale-up tests, by adding hosts to a new_nodes group, for instance.
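Roughly, the automagic group creation described here is Ansible's add_host pattern; a sketch using the variable names from the snippets in this PR (the per-VM groups key is an assumption):

- name: Add instance to its in-memory inventory groups
  add_host:
    name: "{{ item.instances.0.public_ip }}"
    groups: "{{ item.item.groups | join(',') }}"
    aws_ip: "{{ item.instances.0.public_ip }}"
    aws_id: "{{ item.instances.0.id }}"
    openshift_node_group_name: "{{ item.instances.0.tags['ansible-node-group'] }}"
  with_items: "{{ ec2.results }}"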
I'm not a fan of this implementation; I do think we need to update our group reference architecture to reflect what an actual user should do as far as group hierarchy goes. I'm against using host vars where we could use group vars, because the syntax differs between the two in ini files and users never seem to understand what host vars actually are (you'll see inventories with the same host in two groups, e.g. masters and nodes, and competing values set on each instance of the host).
In any case, we can revisit this later.
Force-pushed fa28be3 to 772af74
Force-pushed 772af74 to 6446f05
connection: local
gather_facts: false
tasks:
- include_vars: "{{ item }}"
Are we saying that there should be a vars.yaml on the host that is running ansible that supplies some variables? This file is provided by CI? Let's add a comment if so.
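If so, a guess at what CI would supply as test/ci/vars.yml, with keys inferred from the snippets in this PR (values are illustrative):

aws_region: us-east-1
aws_cluster_id: origin-ci
vms:
  - name_template: origin-ci-aio
    groups:
      - nodes
    aws_flavor: t2.large
    aws_security_group: public
    node_group: "node-config-all-in-one"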
michaelgugino left a comment
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: michaelgugino, vrutkovs

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
/cherrypick release-3.10
@vrutkovs: new pull request created: #9936

In response to this: /cherrypick release-3.10
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
This PR adds several simple playbooks to create AWS VMs, create an inventory file from a template, and deprovision VMs. See the test/ci/README.md update for a suggested workflow.

These playbooks would later be used on Prow CI, similar to GCP. We can't use playbooks/aws/ for this, as an average dev account can't create ELBs, launch configs, etc.

vars.yml is able to define custom AMIs, so Atomic would be supported. It's also able to deploy more complex sets of VMs, e.g. LB + 3 masters + 2 infra nodes + 2 compute nodes.

Cluster config in test/ci/inventory/group_vars/OSEv3 is pretty minimal and requires two env vars to be set (see the sketch below):
- RPM_REPO, to be set in openshift_additional_repos
- IMAGE_FORMAT, for oreg_url
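A sketch of how those two env vars might be consumed in test/ci/inventory/group_vars/OSEv3 (the repo id/name are illustrative):

openshift_additional_repos:
  - id: origin-ci-rpms
    name: origin-ci-rpms
    baseurl: "{{ lookup('env', 'RPM_REPO') }}"
    enabled: 1
    gpgcheck: 0
oreg_url: "{{ lookup('env', 'IMAGE_FORMAT') }}"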
TODO:
- launch.yml to run prerequisites and deprovision automatically. Currently, settings from a temporary inventory do not include test/ci/inventory/group_vars, so a simple include_playbook is insufficient.
- openshift/origin-ansible docker image to make this work
- host_vars/localhost - Reworked this to use test/ci/vars.yml instead. We still need it to run upgrades until we switch to dynamic inventory.

TBD:
- This would allow us to run tests with per-PR settings (for instance, enabling ASB temporarily)