17 changes: 14 additions & 3 deletions README_GCE.md
@@ -39,18 +39,29 @@ Create a gce.ini file for GCE
* gce_service_account_pem_file_path - Full path from previous steps
* gce_project_id - Found under "Projects", which lists all the GCE projects you are associated with, showing their "Project Name" and "Project ID". You want the "Project ID".

Mandatory customization variables (set the values according to your tenant):
* zone = europe-west1-d
* network = default
* gce_machine_type = n1-standard-2
* gce_machine_image = preinstalled-slave-50g-v5


1. vi ~/.gce/gce.ini
1. Make the contents look like this:
```
[gce]
gce_service_account_email_address = long...@developer.gserviceaccount.com
gce_service_account_pem_file_path = /full/path/to/project_id-gce_key_hash.pem
gce_project_id = project_id
zone = europe-west1-d
network = default
gce_machine_type = n1-standard-2
gce_machine_image = preinstalled-slave-50g-v5

```
1. Set up a symlink so that gce.py will pick it up (the link must be in the same dir as gce.py)
1. Define the environment variable GCE_INI_PATH so that gce.py can pick it up and bin/cluster can also read it
```
cd openshift-ansible/inventory/gce
ln -s ~/.gce/gce.ini gce.ini
export GCE_INI_PATH=~/.gce/gce.ini
```


12 changes: 8 additions & 4 deletions bin/cluster
@@ -142,10 +142,14 @@ class Cluster(object):
"""
config = ConfigParser.ConfigParser()
if 'gce' == provider:
config.readfp(open('inventory/gce/hosts/gce.ini'))

for key in config.options('gce'):
os.environ[key] = config.get('gce', key)
gce_ini_default_path = os.path.join(
'inventory/gce/hosts/gce.ini')
gce_ini_path = os.environ.get('GCE_INI_PATH', gce_ini_default_path)
if os.path.exists(gce_ini_path):
config.readfp(open(gce_ini_path))

for key in config.options('gce'):
os.environ[key] = config.get('gce', key)

inventory = '-i inventory/gce/hosts'
elif 'aws' == provider:
9 changes: 6 additions & 3 deletions inventory/gce/hosts/gce.py
@@ -120,6 +120,7 @@ def get_gce_driver(self):
os.path.dirname(os.path.realpath(__file__)), "gce.ini")
gce_ini_path = os.environ.get('GCE_INI_PATH', gce_ini_default_path)


# Create a ConfigParser.
# This provides empty defaults to each key, so that environment
# variable configuration (as opposed to INI configuration) is able
@@ -173,6 +174,7 @@ def get_gce_driver(self):
args[1] = os.environ.get('GCE_PEM_FILE_PATH', args[1])
kwargs['project'] = os.environ.get('GCE_PROJECT', kwargs['project'])


# Retrieve and return the GCE driver.
gce = get_driver(Provider.GCE)(*args, **kwargs)
gce.connection.user_agent_append(
@@ -211,16 +213,17 @@ def node_to_dict(self, inst):
'gce_image': inst.image,
'gce_machine_type': inst.size,
'gce_private_ip': inst.private_ips[0],
'gce_public_ip': inst.public_ips[0],
# Hosts don't always have a public IP name
#'gce_public_ip': inst.public_ips[0],
'gce_name': inst.name,
'gce_description': inst.extra['description'],
'gce_status': inst.extra['status'],
'gce_zone': inst.extra['zone'].name,
'gce_tags': inst.extra['tags'],
'gce_metadata': md,
'gce_network': net,
# Hosts don't have a public name, so we add an IP
'ansible_ssh_host': inst.public_ips[0]
# Hosts don't always have a public IP name
#'ansible_ssh_host': inst.public_ips[0]
}

def get_instance(self, instance_name):
2 changes: 1 addition & 1 deletion inventory/openstack/hosts/nova.py
@@ -34,7 +34,7 @@
# executed with no parameters, return the list of
# all groups and hosts

NOVA_CONFIG_FILES = [os.getcwd() + "/nova.ini",
NOVA_CONFIG_FILES = [os.path.join(os.path.dirname(os.path.realpath(__file__)), "nova.ini"),
Member:
Even though this change makes sense, it seems unrelated to the purpose of this PR. Why is it here?

Member:
Actually, this doesn't make sense to me. The way it's written, it would use the nova.ini in the same location as nova.py (i.e. in the git checkout), which I don't suspect anyone would modify in place. I think it's more likely that they'd expect it to look for it in the cwd. Do you agree?

Contributor Author:
I agree with you that this change is not related to GCE at all; however, it does make sense:

nova.ini is located in the same directory as nova.py. os.getcwd() returns the directory the script is executed from, so nova.py fails whenever it is run from a different directory. This change fixes that.

For AWS, GCE, and libvirt we take the .ini file from the same location as the .py script, so I think using getcwd for OpenStack was just a mistake.

We can take this change out and make another PR; it's up to you to judge whether that's worth it for such a minor change.
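A minimal sketch (not part of the PR) of the two lookup strategies being discussed; the variable names and prints are illustrative only:

```python
import os

# Old behaviour: resolve nova.ini relative to the process's current working
# directory, which breaks when the script is invoked from anywhere other than
# inventory/openstack/hosts.
cwd_ini = os.path.join(os.getcwd(), "nova.ini")

# New behaviour in this change: resolve nova.ini relative to the script file
# itself, so the lookup works regardless of the invocation directory.
script_dir = os.path.dirname(os.path.realpath(__file__))
script_ini = os.path.join(script_dir, "nova.ini")

print(cwd_ini)
print(script_ini)
```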

Member:
I'm fine with leaving it in.

os.path.expanduser(os.environ.get('ANSIBLE_CONFIG', "~/nova.ini")),
"/etc/ansible/nova.ini"]

@@ -0,0 +1,15 @@
---
- set_fact: k8s_type=infra
- set_fact: sub_host_type="{{ type }}"
- set_fact: number_infra="{{ count }}"

- name: Generate infra instance name(s)
set_fact:
scratch_name: "{{ cluster_id }}-{{ k8s_type }}-{{ sub_host_type }}-{{ '%05x' | format(1048576 | random) }}"
register: infra_names_output
with_sequence: count={{ number_infra }}

- set_fact:
infra_names: "{{ infra_names_output.results | default([], true)
| oo_collect('ansible_facts')
| oo_collect('scratch_name') }}"
4 changes: 4 additions & 0 deletions playbooks/gce/openshift-cluster/config.yml
@@ -10,6 +10,8 @@
- set_fact:
g_ssh_user_tmp: "{{ deployment_vars[deployment_type].ssh_user }}"
g_sudo_tmp: "{{ deployment_vars[deployment_type].sudo }}"
use_sdn: "{{ do_we_use_openshift_sdn }}"
sdn_plugin: "{{ sdn_network_plugin }}"

- include: ../../common/openshift-cluster/config.yml
vars:
@@ -22,3 +24,5 @@
openshift_debug_level: 2
openshift_deployment_type: "{{ deployment_type }}"
openshift_hostname: "{{ gce_private_ip }}"
openshift_use_openshift_sdn: "{{ hostvars.localhost.use_sdn }}"
os_sdn_network_plugin_name: "{{ hostvars.localhost.sdn_plugin }}"
49 changes: 49 additions & 0 deletions playbooks/gce/openshift-cluster/join_node.yml
@@ -0,0 +1,49 @@
---
- name: Populate oo_hosts_to_update group
hosts: localhost
gather_facts: no
vars_files:
- vars.yml
tasks:
- name: Evaluate oo_hosts_to_update
add_host:
name: "{{ node_ip }}"
groups: oo_hosts_to_update
ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user }}"
ansible_sudo: "{{ deployment_vars[deployment_type].sudo }}"

- include: ../../common/openshift-cluster/update_repos_and_packages.yml

- name: Populate oo_masters_to_config host group
hosts: localhost
gather_facts: no
vars_files:
- vars.yml
tasks:
- name: Evaluate oo_nodes_to_config
add_host:
name: "{{ node_ip }}"
ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user }}"
ansible_sudo: "{{ deployment_vars[deployment_type].sudo }}"
groups: oo_nodes_to_config

- name: Evaluate oo_first_master
add_host:
name: "{{ groups['tag_env-host-type-' ~ cluster_id ~ '-openshift-master'][0] }}"
ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user }}"
ansible_sudo: "{{ deployment_vars[deployment_type].sudo }}"
groups: oo_first_master
when: "'tag_env-host-type-{{ cluster_id }}-openshift-master' in groups"

#- include: config.yml
- include: ../../common/openshift-node/config.yml
vars:
openshift_cluster_id: "{{ cluster_id }}"
openshift_debug_level: 4
openshift_deployment_type: "{{ deployment_type }}"
openshift_hostname: "{{ ansible_default_ipv4.address }}"
openshift_use_openshift_sdn: true
openshift_node_labels: "{{ lookup('oo_option', 'openshift_node_labels') }} "
os_sdn_network_plugin_name: "redhat/openshift-ovs-subnet"
osn_cluster_dns_domain: "{{ hostvars[groups.oo_first_master.0].openshift.dns.domain }}"
osn_cluster_dns_ip: "{{ hostvars[groups.oo_first_master.0].openshift.dns.ip }}"
54 changes: 27 additions & 27 deletions playbooks/gce/openshift-cluster/launch.yml
@@ -28,33 +28,33 @@
type: "{{ k8s_type }}"
g_sub_host_type: "{{ sub_host_type }}"

- include: ../../common/openshift-cluster/set_node_launch_facts_tasks.yml
vars:
type: "infra"
count: "{{ num_infra }}"
- include: tasks/launch_instances.yml
vars:
instances: "{{ infra_names }}"
cluster: "{{ cluster_id }}"
type: "{{ k8s_type }}"
g_sub_host_type: "{{ sub_host_type }}"

- set_fact:
a_infra: "{{ infra_names[0] }}"
- add_host: name={{ a_infra }} groups=service_master

# - include: ../../common/openshift-cluster/set_infra_launch_facts_tasks.yml
# vars:
# type: "infra"
# count: "{{ num_infra }}"
# - include: tasks/launch_instances.yml
# vars:
# instances: "{{ infra_names }}"
# cluster: "{{ cluster_id }}"
# type: "{{ k8s_type }}"
# g_sub_host_type: "{{ sub_host_type }}"
#
# - set_fact:
# a_infra: "{{ infra_names[0] }}"
# - add_host: name={{ a_infra }} groups=service_master
#
Contributor:
@menren hey, sorry for missing this earlier, but I just noticed it.

We do use infra nodes, so why is this commented out?

We want it to match the AWS playbook, which keeps this section in.

Contributor Author:
@twiest

We use set_node_launch_facts_tasks.yml for the infra nodes (https://github.com/openshift/openshift-ansible/blob/master/playbooks/gce/openshift-cluster/launch.yml#L31), and that obviously isn't working: the playbook stops with an error while trying to launch the infra node instances.

I tried to repair it with set_infra_launch_facts_tasks.yml; the instances were launched correctly, but I ran into many other issues at the time (in August).

Currently the GCE deployment isn't working at all, with or without infra nodes. I suppose nobody is using it; if somebody is, they will have to tell me how it can work in its current state.

With this code commented out, the rest works. Making the infra nodes work will have to wait for another patch.

This PR fixes the standard deployment without infra nodes and will let people work on the GCE deployment and repair the infra nodes.
Contributor:
Ah, I see. I just tried, and I was able to launch by making the infra section look the same on GCE as it does on AWS.

So, it now looks like this in my branch:

  - include: ../../common/openshift-cluster/set_node_launch_facts_tasks.yml
    vars:
      type: "infra"
      count: "{{ num_infra }}"
  - include: tasks/launch_instances.yml
    vars:
      instances: "{{ node_names }}"
      cluster: "{{ cluster_id }}"
      type: "{{ k8s_type }}"
      g_sub_host_type: "{{ sub_host_type }}"

  - add_host:
      name: "{{ master_names.0 }}"
      groups: service_master
    when: master_names is defined and master_names.0 is defined

Locally, I also removed this file as it's no longer being used:

playbooks/common/openshift-cluster/set_infra_launch_facts_tasks.yml

I'm OK with doing this as a separate PR, since I have some other things I had to patch to be able to launch a cluster in GCE. I'll @-mention you when I create my PR to make sure it works for you too.

Contributor Author:
@twiest it would be good if you can repair the infra nodes for GCE, because I work without them and don't know exactly how they work.

- include: update.yml

- name: Deploy OpenShift Services
hosts: service_master
connection: ssh
gather_facts: yes
roles:
- openshift_registry
- openshift_router

- include: ../../common/openshift-cluster/create_services.yml
vars:
g_svc_master: "{{ service_master }}"
#
#- name: Deploy OpenShift Services
# hosts: service_master
# connection: ssh
# gather_facts: yes
# roles:
# - openshift_registry
# - openshift_router
#
#- include: ../../common/openshift-cluster/create_services.yml
# vars:
# g_svc_master: "{{ service_master }}"

- include: list.yml
4 changes: 2 additions & 2 deletions playbooks/gce/openshift-cluster/list.yml
@@ -14,11 +14,11 @@
groups: oo_list_hosts
ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user | default(ansible_ssh_user, true) }}"
ansible_sudo: "{{ deployment_vars[deployment_type].sudo }}"
with_items: groups[scratch_group] | default([]) | difference(['localhost']) | difference(groups.status_terminated)
with_items: groups[scratch_group] | default([], true) | difference(['localhost']) | difference(groups.status_terminated | default([], true))

- name: List instance(s)
hosts: oo_list_hosts
gather_facts: no
tasks:
- debug:
msg: "public ip:{{ hostvars[inventory_hostname].gce_public_ip }} private ip:{{ hostvars[inventory_hostname].gce_private_ip }}"
msg: "private ip:{{ hostvars[inventory_hostname].gce_private_ip }}"
21 changes: 13 additions & 8 deletions playbooks/gce/openshift-cluster/tasks/launch_instances.yml
@@ -10,33 +10,38 @@
service_account_email: "{{ lookup('env', 'gce_service_account_email_address') }}"
pem_file: "{{ lookup('env', 'gce_service_account_pem_file_path') }}"
project_id: "{{ lookup('env', 'gce_project_id') }}"
zone: "{{ lookup('env', 'zone') }}"
network: "{{ lookup('env', 'network') }}"
# unsupported in 1.9.+
#service_account_permissions: "datastore,logging-write"
tags:
- created-by-{{ lookup('env', 'LOGNAME') |default(cluster, true) }}
- env-{{ cluster }}
- host-type-{{ type }}
- sub-host-type-{{ sub_host_type }}
- sub-host-type-{{ g_sub_host_type }}
- env-host-type-{{ cluster }}-openshift-{{ type }}
when: instances |length > 0
register: gce

- name: Add new instances to groups and set variables needed
add_host:
hostname: "{{ item.name }}"
ansible_ssh_host: "{{ item.public_ip }}"
ansible_ssh_host: "{{ item.name }}"
ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user | default(ansible_ssh_user, true) }}"
ansible_sudo: "{{ deployment_vars[deployment_type].sudo }}"
groups: "{{ item.tags | oo_prepend_strings_in_list('tag_') | join(',') }}"
gce_public_ip: "{{ item.public_ip }}"
gce_private_ip: "{{ item.private_ip }}"
with_items: gce.instance_data
with_items: gce.instance_data | default([], true)

- name: Wait for ssh
wait_for: port=22 host={{ item.public_ip }}
with_items: gce.instance_data
wait_for: port=22 host={{ item.name }}
with_items: gce.instance_data | default([], true)

- name: Wait for user setup
command: "ssh -o StrictHostKeyChecking=no -o PasswordAuthentication=no -o ConnectTimeout=10 -o UserKnownHostsFile=/dev/null {{ hostvars[item.name].ansible_ssh_user }}@{{ item.public_ip }} echo {{ hostvars[item.name].ansible_ssh_user }} user is setup"
register: result
until: result.rc == 0
retries: 20
delay: 10
with_items: gce.instance_data
retries: 30
delay: 5
with_items: gce.instance_data | default([], true)
55 changes: 34 additions & 21 deletions playbooks/gce/openshift-cluster/terminate.yml
@@ -1,25 +1,18 @@
---
- name: Terminate instance(s)
hosts: localhost
connection: local
gather_facts: no
vars_files:
- vars.yml
tasks:
- set_fact: scratch_group=tag_env-host-type-{{ cluster_id }}-openshift-node
- set_fact: scratch_group=tag_env-{{ cluster_id }}
- add_host:
name: "{{ item }}"
groups: oo_hosts_to_terminate, oo_nodes_to_terminate
groups: oo_hosts_to_terminate
ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user | default(ansible_ssh_user, true) }}"
ansible_sudo: "{{ deployment_vars[deployment_type].sudo }}"
with_items: groups[scratch_group] | default([]) | difference(['localhost']) | difference(groups.status_terminated)

- set_fact: scratch_group=tag_env-host-type-{{ cluster_id }}-openshift-master
- add_host:
name: "{{ item }}"
groups: oo_hosts_to_terminate, oo_masters_to_terminate
ansible_ssh_user: "{{ deployment_vars[deployment_type].ssh_user | default(ansible_ssh_user, true) }}"
ansible_sudo: "{{ deployment_vars[deployment_type].sudo }}"
with_items: groups[scratch_group] | default([]) | difference(['localhost']) | difference(groups.status_terminated)
with_items: groups[scratch_group] | default([], true) | difference(['localhost']) | difference(groups.status_terminated | default([], true))

- name: Unsubscribe VMs
hosts: oo_hosts_to_terminate
@@ -32,14 +25,34 @@
lookup('oo_option', 'rhel_skip_subscription') | default(rhsub_skip, True) |
default('no', True) | lower in ['no', 'false']

- include: ../openshift-node/terminate.yml
vars:
gce_service_account_email: "{{ lookup('env', 'gce_service_account_email_address') }}"
gce_pem_file: "{{ lookup('env', 'gce_service_account_pem_file_path') }}"
gce_project_id: "{{ lookup('env', 'gce_project_id') }}"
- name: Terminate instance(s)
hosts: localhost
connection: local
gather_facts: no
vars_files:
- vars.yml
tasks:

- name: Terminate instances that were previously launched
local_action:
module: gce
state: 'absent'
name: "{{ item }}"
service_account_email: "{{ lookup('env', 'gce_service_account_email_address') }}"
pem_file: "{{ lookup('env', 'gce_service_account_pem_file_path') }}"
project_id: "{{ lookup('env', 'gce_project_id') }}"
zone: "{{ lookup('env', 'zone') }}"
with_items: groups['oo_hosts_to_terminate'] | default([], true)
when: item is defined

- include: ../openshift-master/terminate.yml
vars:
gce_service_account_email: "{{ lookup('env', 'gce_service_account_email_address') }}"
gce_pem_file: "{{ lookup('env', 'gce_service_account_pem_file_path') }}"
gce_project_id: "{{ lookup('env', 'gce_project_id') }}"
#- include: ../openshift-node/terminate.yml
# vars:
# gce_service_account_email: "{{ lookup('env', 'gce_service_account_email_address') }}"
# gce_pem_file: "{{ lookup('env', 'gce_service_account_pem_file_path') }}"
# gce_project_id: "{{ lookup('env', 'gce_project_id') }}"
#
#- include: ../openshift-master/terminate.yml
# vars:
# gce_service_account_email: "{{ lookup('env', 'gce_service_account_email_address') }}"
# gce_pem_file: "{{ lookup('env', 'gce_service_account_pem_file_path') }}"
# gce_project_id: "{{ lookup('env', 'gce_project_id') }}"
8 changes: 5 additions & 3 deletions playbooks/gce/openshift-cluster/vars.yml
@@ -1,8 +1,11 @@
---
do_we_use_openshift_sdn: true
sdn_network_plugin: redhat/openshift-ovs-subnet
# os_sdn_network_plugin_name can be ovssubnet or multitenant, see https://docs.openshift.org/latest/architecture/additional_concepts/sdn.html#ovssubnet-plugin-operation
deployment_vars:
origin:
image: centos-7
ssh_user:
image: preinstalled-slave-50g-v5
ssh_user: root
sudo: yes
online:
image: libra-rhel7
@@ -12,4 +15,3 @@ deployment_vars:
image: rhel-7
ssh_user:
sudo: yes
