Auto Patch schedule with Satellite 6.3 and Ansible Tower Part 2

In Part One we went over setting up Satellite and katello-cvmanager to perform the publish and promote steps in Satellite.

Now we want to create the playbooks that Ansible Tower will use to patch the servers and then schedule the next lifecycle environment.

You will want to create these playbooks. If you prefer, you can skip this section and grab them from my GitHub repo HERE, but as always, understand what is being done before you just run them.

The publish playbook comes first and can be written simply as follows:

---
- name: publish all new content views
  hosts: satellite
  gather_facts: false
  tasks:
  - name: update all content views
    shell: ./cvmanager --config=publish.yml --wait publish
    args:
      chdir: /var/katello-cvmanager/
I lock this playbook down so that it only ever runs against the Satellite server, as you can see in the hosts line.
Also, in my environment I publish all of my content views together. If you need each content view to be on a separate patch schedule, you will end up with multiple publish config files, so turn publish.yml into a variable, as in the sketch below.
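As a minimal sketch of that idea (the cv_config variable name is my own assumption, not anything katello-cvmanager requires), the publish playbook with a variable config file could look like this:

---
- name: publish new content views for a single patch schedule
  hosts: satellite
  gather_facts: false
  tasks:
  - name: publish content views defined in the given config
    shell: ./cvmanager --config={{ cv_config }}.yml --wait publish
    args:
      chdir: /var/katello-cvmanager/

You would then pass cv_config (for example web_publish or db_publish) as an extra variable on the Tower job template, the same way we pass a variable to the promote playbook below.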
We also need to create a promote playbook, and this one does need to use variables.
---
- name: Promote lifecycle to latest published content view
  hosts: satellite
  gather_facts: false
  tasks:
  - name: update all content views
    shell: ./cvmanager --config={{ item }}.yml --wait promote
    args:
      chdir: /var/katello-cvmanager/
    with_items:
      - "{{ lifecycle }}"
All we have done is make the config file a variable called lifecycle, which we will define in Tower.
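As an illustration (the value here is just an assumed config name; yours must match the promote config files you created for cvmanager in Part One), the extra variables on the Tower job template for this promotion might look like:

---
lifecycle:
  - qa_rhel7

Because the task loops with with_items, you can list a single config or several, and cvmanager will run once per entry with --config=<entry>.yml.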
I also need two patching playbooks: one for non-HA servers and one for HA servers. The only difference is that the HA playbook runs in a serial fashion so we don't take down an entire HA server farm. The two playbooks look like this.
Non-HA
---
- name: Patch non HA Linux Servers
  hosts: foreman_hostgroup_web_non_ha
  tasks:
  - name: upgrade all packages
    yum:
      name: '*'
      state: latest

  - name: Check for reboot hint.
    shell: LAST_KERNEL=$(rpm -q --last kernel | awk 'NR==1{sub(/kernel-/,""); print $1}'); CURRENT_KERNEL=$(uname -r); if [ $LAST_KERNEL != $CURRENT_KERNEL ]; then echo 'reboot'; else echo 'no'; fi
    ignore_errors: true
    register: reboot_hint

  - name: reboot
    shell: ( sleep 3 && /sbin/reboot & )
    async: 0
    poll: 0
    when: reboot_hint.stdout.find("reboot") != -1
    register: reboot

  - name: Wait for the server to come back
    wait_for_connection:
      delay: 15
      timeout: 300
HA Playbook
---
- name: Patch HA Linux Servers
  hosts: foreman_hostgroup_web_ha
  serial:
    - 1
    - 50%

  tasks:
  - name: upgrade all packages
    yum:
      name: '*'
      state: latest

  - name: Check for reboot hint.
    shell: LAST_KERNEL=$(rpm -q --last kernel | awk 'NR==1{sub(/kernel-/,""); print $1}'); CURRENT_KERNEL=$(uname -r); if [ $LAST_KERNEL != $CURRENT_KERNEL ]; then echo 'reboot'; else echo 'no'; fi
    ignore_errors: true
    register: reboot_hint

  - name: reboot
    shell: ( sleep 3 && /sbin/reboot & )
    async: 0
    poll: 0
    when: reboot_hint.stdout.find("reboot") != -1
    register: reboot

  - name: Wait for the server to come back
    wait_for_connection:
      delay: 15
      timeout: 300
Note the serial keyword at the top. I set this as a combination of a number and a percentage because you may have HA setups larger than two nodes. If we set it to just 50% and you have ten servers, there is the potential to break five servers at once. By having it run on just one host first, the worst-case scenario is one broken server while all the others stay in their current state.
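To make the batching concrete, here is how Ansible would split a hypothetical ten-host HA group with the serial values above (percentages are of the total hosts in the play, and the last entry is reused for any remaining hosts):

serial:
  - 1      # batch 1: a single canary host
  - 50%    # batch 2: 5 hosts (50% of the 10 total)
           # batch 3: the remaining 4 hosts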
Once the HA servers have been patched and rebooted where needed, our entire Dev RHEL 7 lifecycle group is patched and we want to proceed with QA. For this example I am going to set this up with a 7-day window; the following playbook will take care of this for us.
Schedule the next version
---

- name: schedule next lifecycle patching run
  hosts: 127.0.0.1
  connection: local
  vars_files:
    - vault.yml
  tasks:
  - name: set rrule variable to be 7 days from today
    set_fact:
      next_cycle_date: "{{ '%Y%m%d' | strftime( ( ansible_date_time.epoch | int ) + ( 86400 * 7 ) ) }}"

  - name: schedule next patch group
    uri:
      url: "https://{{ tower_host }}/api/v2/workflow_job_templates/{{ next_sat_env }}/schedules/"
      method: POST
      body:
        name: "Linux_patching_{{ next_cycle_date }}"
        rrule: "DTSTART:{{ next_cycle_date }}T040000Z RRULE:FREQ=DAILY;INTERVAL=1;COUNT=1"
        enabled: true
      body_format: json
      force_basic_auth: yes
      status_code: 201
      user: "{{ tower_user }}"
      password: "{{ tower_pass }}"
      validate_certs: no
What we are doing here is making an API call back to Tower to set a schedule for the next Ansible workflow to run seven days from now. There are a couple of variables we need: the workflow job template ID, which we will get and set in Tower itself, and the Tower user and password, which we store inside an encrypted vault file.
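As a minimal sketch (the variable names match what the playbook above expects; the values are placeholders you would replace with your own), vault.yml might look like this before encryption:

---
tower_host: tower.example.com
tower_user: admin
tower_pass: changeme

Create or edit it with ansible-vault so the credentials never sit on disk in plain text:

ansible-vault create vault.yml
ansible-vault edit vault.yml

The next_sat_env variable (the workflow job template ID) is set as an extra variable on the job template in Tower rather than in the vault, and you will need to provide the vault password to Tower (or to ansible-playbook with --ask-vault-pass) for the job to decrypt the file at run time.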
At this point we want to have the playbooks in source control and bring them into Ansible Tower.
