These playbooks are meant to run as a Workflow to demonstrate the use of the Cisco IOS and ServiceNow collections from Galaxy and the Red Hat Automation Hub. Each playbook can run in a different Python virtual environment (venv).
This workflow parses data from a Cisco IOS device to create and update an incident. The output should look like this in ServiceNow:
pip3 install pysnow pyats[library]
ansible-galaxy collection install servicenow.servicenow
ansible-galaxy collection install cisco.ios
ansible-galaxy role install -r roles/requirements.yml
While both roles and collections can be specified in one requirements file, they need to be installed separately. ansible-galaxy role install -r requirements.yml will only install roles, and ansible-galaxy collection install -r requirements.yml -p ./ will only install collections. See Installing roles and collections from the same requirements.yml file.
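For reference, a combined requirements file might look like this (a sketch; the role entry is a hypothetical placeholder, while the collection names come from the install commands above):

```yaml
# requirements.yml -- roles and collections can live in one file,
# but each needs its own ansible-galaxy install command (see above)
roles:
  - name: example.some_role   # hypothetical placeholder role
collections:
  - name: servicenow.servicenow
  - name: cisco.ios
```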
You can pass the extra vars snow_instance, snow_user, and snow_password, or export environment variables:
export SNOW_INSTANCE=<dev12345> # without '.service-now.com'
export SN_USERNAME=<admin>
export SN_PASSWORD=<password>
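One way a playbook can accept either form is to default its vars to environment lookups, so values passed with -e take precedence. This is only a sketch of the pattern, not the exact contents of these playbooks:

```yaml
- name: Using ServiceNow Collection
  hosts: localhost
  gather_facts: false
  vars:
    # extra vars passed with -e override these environment defaults
    snow_instance: "{{ lookup('env', 'SNOW_INSTANCE') }}"
    snow_user: "{{ lookup('env', 'SN_USERNAME') }}"
    snow_password: "{{ lookup('env', 'SN_PASSWORD') }}"
```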
Run any of the playbooks:
⇨ ansible-playbook create-incident.yml
PLAY [Using ServiceNow Collection] ******************************************************************************************************************************************
TASK [Create an incident] ***************************************************************************************************************************************************
changed: [localhost]
TASK [debug] ****************************************************************************************************************************************************************
ok: [localhost] => {
"new_incident.record.number": "INC0010002"
}
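A minimal create-incident.yml along these lines could use the snow_record module from the servicenow.servicenow collection. This is a sketch assuming the credential vars above; the real playbook may differ:

```yaml
---
- name: Using ServiceNow Collection
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Create an incident
      servicenow.servicenow.snow_record:
        instance: "{{ snow_instance }}"
        username: "{{ snow_user }}"
        password: "{{ snow_password }}"
        state: present
        data:
          short_description: Demo incident created by Ansible
      register: new_incident

    - debug:
        var: new_incident.record.number
```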
To run the rest of them, you need to export the incident number you got.
export SNOW_NUMBER=INC0010002
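A follow-up playbook can then read SNOW_NUMBER back from the environment and target that record, for example to append work notes (again a sketch, not the literal playbook task):

```yaml
    - name: Update the incident
      servicenow.servicenow.snow_record:
        instance: "{{ snow_instance }}"
        username: "{{ snow_user }}"
        password: "{{ snow_password }}"
        state: present
        number: "{{ lookup('env', 'SNOW_NUMBER') }}"
        data:
          work_notes: Updated from the workflow
```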
First, you need to export SN_INSTANCE, SN_USERNAME, and SN_PASSWORD. These identify your ServiceNow instance and credentials. For example:
export SN_INSTANCE=<dev12345.service-now.com> # with '.service-now.com'
export SN_USERNAME=<admin>
export SN_PASSWORD=<password>
You can display your full inventory:
⇨ ansible-inventory -i inventory-snow.yml --list
{
    "_meta": {
        "hostvars": {
            "DataNoc_gw01_lanset_net": {
                "sn_fqdn": "",
                "sn_ip_address": "208.187.161.1",
                "sn_name": "DataNoc.gw01.lanset.net",
                "sn_sys_class_name": "Network Gear"
            },
            "IP_Router_1": {
                "sn_fqdn": "",
                "sn_ip_address": "10.0.0.1",
                "sn_name": "IP-Router-1",
                "sn_sys_class_name": "IP Router"
            },
            ...
        }
    },
    "all": {
        "children": [
            "group_IP_Router",
            "group_IP_Switch",
            "group_Network_Gear",
            "ungrouped"
        ]
    },
    "group_IP_Router": {
        "hosts": [
            "IP_Router_1",
            "IP_Router_2",
            "IP_Router_3"
        ]
    },
    "group_IP_Switch": {
        "hosts": [
            "IP_Switch_1"
        ]
    },
    "group_Network_Gear": {
        "hosts": [
            "DataNoc_gw01_lanset_net",
            "San_Diego_Gateway",
            "nc6500_a01",
            "ny8500_nbxs08",
            "ny8500_nbxs09"
        ]
    }
}
Or see it as a graph based on the keyed_groups:
⇨ ansible-inventory -i inventory-snow.yml --graph
@all:
  |--@group_IP_Router:
  |  |--IP_Router_1
  |  |--IP_Router_2
  |  |--IP_Router_3
  |--@group_IP_Switch:
  |  |--IP_Switch_1
  |--@group_Network_Gear:
  |  |--DataNoc_gw01_lanset_net
  |  |--San_Diego_Gateway
  |  |--nc6500_a01
  |  |--ny8500_nbxs08
  |  |--ny8500_nbxs09
  |--@ungrouped:
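An inventory file that produces groups like these would use the servicenow.servicenow.now inventory plugin with keyed_groups. The following inventory-snow.yml is a sketch; the table and field names are illustrative, and credentials come from the SN_* environment variables above:

```yaml
# inventory-snow.yml -- a sketch of the 'now' inventory plugin config
plugin: servicenow.servicenow.now
table: cmdb_ci
fields: [name, ip_address, fqdn, sys_class_name]
keyed_groups:
  # builds group_IP_Router, group_Network_Gear, ... from the device class
  - key: sn_sys_class_name
    prefix: group
```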
In Tower, you will want to import this as an Inventory Source. Set ANSIBLE_INVENTORY_ENABLED: auto as well as the other required variables in the ENVIRONMENT VARIABLES section.
Last but not least, you can add hosts to your inventory with snow_record. Take a look at this example.
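The shape of that pattern, sketched below, is to fetch a record with snow_record and feed it to add_host; the table, fields, and ci_number var here are hypothetical, not taken from the linked example:

```yaml
- name: Add a CMDB host to the in-memory inventory
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Look up a configuration item
      servicenow.servicenow.snow_record:
        table: cmdb_ci              # illustrative table
        state: present
        number: "{{ ci_number }}"   # hypothetical extra var
      register: ci

    - name: Add it as a host Ansible can target
      ansible.builtin.add_host:
        name: "{{ ci.record.name }}"
        ansible_host: "{{ ci.record.ip_address }}"
```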
- Galaxy - Service Now
- Ansible + ServiceNow Part 1: Opening and Closing Tickets
- Ansible + ServiceNow Part 2: Parsing facts from network devices using PyATS/Genie
- Ansible + ServiceNow Part 3: Making outbound RESTful API calls to Red Hat Ansible Tower
- Governing Self-Service Cloud Provisioning
- Using inventory plugins
- Jinja2 to construct vars and groups based on existing inventory