
% ins3cure.com

Red Hat Certified Engineer (EX294) Sample Exam

I think it was in late 2019 when Red Hat updated their RHCE exam, which is now based on RHEL 8 and the Ansible Automation Platform. The performance-based exam code is EX294 (more information here), and by passing this exam you become a Red Hat Certified Engineer.

There are not too many EX294 mock exams and the best I have found is this one by lisenet.com. Thanks, Tomas!

Of course there are many ways to solve it, but this is mine.

Lisenet: Ansible Sample Exam for RHCE EX294

The original site is here -> https://www.lisenet.com/2019/ansible-sample-exam-for-ex294/. Please visit the site to read the complete assignments.

Preparation

First of all, I created 5 RHEL 8 virtual machines. Remember that you can get free RHEL 8 developer subscriptions. CentOS 8 / Rocky Linux 8 should also work. The machines are:

  • ansible-control.hl.local
  • ansible2.hl.local
  • ansible3.hl.local
  • ansible4.hl.local
  • ansible5.hl.local

I used 1 vCPU, 2 GB of memory and 20 GB thin-provisioned disks. ansible5 needs an extra 1 GB disk for some tasks. I assigned static IP addresses to all of them and added everything to the /etc/hosts file.

If RHEL 8 is chosen, remember you have to register the systems:

# subscription-manager register --username <username> --password <password> --force
# subscription-manager attach

On the control machine you need some additional steps to install Ansible:

# subscription-manager repos --enable ansible-2-for-rhel-8-x86_64-rpms
# yum -y install ansible

Task 1: Ansible Installation and Configuration

The first task is to manually set up the environment: create the automation user on ansible-control and configure new Ansible defaults:

  • The roles path should include /home/automation/plays/roles, as well as any other path that may be required for the course of the sample exam.
  • The inventory file path is /home/automation/plays/inventory.
  • Privilege escalation is disabled by default.
  • Ansible should be able to manage 10 hosts at a single time.
  • Ansible should connect to all managed nodes using the automation user.

Our working directory will be /home/automation/plays

I created /home/automation/plays/ansible.cfg with this content:

[defaults]
roles_path = /home/automation/plays/roles
inventory = /home/automation/plays/inventory
remote_user = automation
forks = 10

[privilege_escalation]
become = false

And the inventory file:

[proxy]
ansible2.hl.local

[webservers]
ansible3.hl.local
ansible4.hl.local

[database]
ansible5.hl.local

Task 2: Ad-Hoc Commands

For this task we have to use ad-hoc commands to prepare the remaining machines so they can be managed with Ansible. This is my script:

#!/bin/bash

# create user automation
ansible all -u root --ask-pass -m user -a "name=automation state=present" 

# create .ssh directory
ansible all -u root --ask-pass -m file -a "path=/home/automation/.ssh state=directory owner=automation group=automation mode=0700" 

# copy id_rsa.pub
ansible all -u root --ask-pass -m copy -a "src=/home/automation/.ssh/id_rsa.pub dest=/home/automation/.ssh/authorized_keys owner=automation group=automation mode=0600" 

# add sudo permission
ansible all -u root --ask-pass -m copy -a "content='automation ALL=(ALL) NOPASSWD: ALL' dest=/etc/sudoers.d/automation owner=root group=root mode=0600"
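Note that the script assumes the automation user's SSH keypair already exists on the control node, since it copies /home/automation/.ssh/id_rsa.pub to the managed hosts. If it does not exist yet, it can be generated first (run as automation on the control node); a minimal sketch:

```shell
# create a passphrase-less RSA keypair for the automation user;
# the ad-hoc commands above copy the resulting id_rsa.pub to every managed host
mkdir -p "$HOME/.ssh" && chmod 700 "$HOME/.ssh"
[ -f "$HOME/.ssh/id_rsa" ] || ssh-keygen -t rsa -N '' -f "$HOME/.ssh/id_rsa" -q
```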

Task 3: File Content

I used the magic variable inventory_hostname to match the conditions:

---
- name: task 3
  hosts: all
  become: yes
  tasks:
    - name: copy content to HAProxy
      copy:
        content: "Welcome to HAProxy server"
        dest: /etc/motd
      when: inventory_hostname in groups["proxy"]
    - name: copy content to webservers
      copy:
        content: "Welcome to Apache server"
        dest: /etc/motd
      when: inventory_hostname in groups["webservers"]
    - name: copy content to Database
      copy:
        content: "Welcome to MySQL server"
        dest: /etc/motd
      when: inventory_hostname in groups["database"]

Task 4: Configure SSH Server

---
- name: task 4
  hosts: all
  become: yes
  tasks:
    - name: configure sshd daemon
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: "^Banner"
        line: Banner /etc/motd
    - name: disable X11Forwarding
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: "^X11Forwarding"
        line: X11Forwarding no
    - name: set MaxAuthTries = 3
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: "^MaxAuthTries"
        line: MaxAuthTries 3
    - name: restart ssh server
      service:
        name: sshd
        state: restarted
        enabled: yes
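One caveat with this playbook: as written, the last task restarts sshd on every run, even when nothing changed. A handler-based variant only restarts on change; a sketch showing just the Banner task (the other two lineinfile tasks would get the same notify):

```
---
- name: task 4 (handler variant)
  hosts: all
  become: yes
  tasks:
    - name: configure sshd banner
      lineinfile:
        path: /etc/ssh/sshd_config
        regexp: "^Banner"
        line: Banner /etc/motd
      notify: restart sshd
  handlers:
    - name: restart sshd
      service:
        name: sshd
        state: restarted
        enabled: yes
```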

Task 5: Ansible Vault

Ansible vault commands are needed here. For example: ansible-vault view secret.yml --vault-password-file vault_key. At the end of the exercise you must have two files:

  • secret.yml: encrypted file with credentials
  • vault_key: the encryption key
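For example, the two files can be created like this (the password value below is just a placeholder; the actual credentials to store depend on the assignment):

```
# create the vault password file
echo 'SomeVaultPassword' > vault_key
chmod 600 vault_key

# create the encrypted credentials file; this opens $EDITOR,
# where variables such as user_password are added
ansible-vault create secret.yml --vault-password-file vault_key

# verify it can be decrypted
ansible-vault view secret.yml --vault-password-file vault_key
```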

Task 6: Users and Groups

We will have to loop through a list of users:

---
users:
  - username: alice
    uid: 1201
  - username: vincent
    uid: 1202
  - username: sandy
    uid: 2201
  - username: patrick
    uid: 2202

This is my playbook:

---
- name: task 6
  hosts: all
  become: yes

  vars_files:
    - ./vars/user_list.yml
    - ./secret.yml

  tasks:
    - name: ensure group wheel exists
      group:
        name: wheel
        state: present

    - name: create users in webservers group
      loop: "{{ users }}"
      user:
        name: "{{ item.username }}"
        password: "{{ user_password | password_hash('sha512')}}"
        update_password: on_create
        groups: wheel
        shell: /bin/bash
      when:
        - inventory_hostname in groups['webservers']
        - item.uid | string | first == "1"

    - name: create users in database group
      loop: "{{ users }}"
      user:
        name: "{{ item.username }}"
        password: "{{ user_password | password_hash('sha512')}}"
        update_password: on_create
        groups: wheel
        shell: /bin/bash
      when:
        - inventory_hostname in groups['database']
        - item.uid | string | first == "2"

    - name: create .ssh directory (webservers)
      loop: "{{ users }}"
      file:
        name: "/home/{{ item.username }}/.ssh"
        state: directory
        owner: "{{ item.username }}"
        group: "{{ item.username }}"
        mode: 0700
      when:
        - inventory_hostname in groups['webservers']
        - item.uid | string | first == "1"

    - name: create .ssh directory (database)
      loop: "{{ users }}"
      file:
        name: "/home/{{ item.username }}/.ssh"
        state: directory
        owner: "{{ item.username }}"
        group: "{{ item.username }}"
        mode: 0700
      when:
        - inventory_hostname in groups['database']
        - item.uid | string | first == "2"

    - name: copy ssh authorized key (webservers)
      loop: "{{ users }}"
      copy:
        src: "/home/automation/.ssh/id_rsa.pub"
        dest: "/home/{{ item.username }}/.ssh/authorized_keys"
        owner: "{{ item.username }}"
        group: "{{ item.username }}"
        mode: 0600
      when:
        - inventory_hostname in groups['webservers'] 
        - item.uid|string|first == "1"

    - name: copy ssh authorized key (database)
      loop: "{{ users }}"
      copy:
        src: "/home/automation/.ssh/id_rsa.pub"
        dest: "/home/{{ item.username }}/.ssh/authorized_keys"
        owner: "{{ item.username }}"
        group: "{{ item.username }}"
        mode: 0600
      when:
        - inventory_hostname in groups['database'] 
        - item.uid|string|first == "2"

This playbook is run as:

ansible-playbook users.yml --vault-password-file vault_key

It took some time to figure out how to express the “starts with ‘1’” condition. It turns out it is much simpler to just use:

  • item.uid < 2000
  • item.uid >= 2000
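With those numeric comparisons, each webserver/database pair of near-identical tasks collapses into one task with a simpler condition. For example, the user-creation task for the webservers group becomes (a sketch; the uid values come straight from user_list.yml):

```
    - name: create users in webservers group
      loop: "{{ users }}"
      user:
        name: "{{ item.username }}"
        password: "{{ user_password | password_hash('sha512') }}"
        update_password: on_create
        groups: wheel
        shell: /bin/bash
      when:
        - inventory_hostname in groups['webservers']
        - item.uid < 2000
```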

Task 7: Scheduled Tasks

I chose to use an MD5 hash for the job name because it looks fancy to me (and I wanted to learn how to do it). It is not a requirement, though, and a descriptive name may be more appropriate depending on the environment.

---
- name: task 7
  hosts: proxy
  become: yes

  tasks:
    - name: create a cron job with a fancy name on proxy hosts
      cron:
        name: "{{ 'proxy: append date to time.log' | hash('md5') }}"
        minute: "0"
        job: "date >> /var/log/time.log"
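The `hash('md5')` filter simply produces the MD5 hex digest of the string, so the cron entry name it generates can be reproduced from the shell to verify it:

```shell
# same digest the Jinja expression
# "{{ 'proxy: append date to time.log' | hash('md5') }}" produces
echo -n 'proxy: append date to time.log' | md5sum | cut -d' ' -f1
```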

Task 8: Software Repositories

I did not know which module to use, so I ran:

ansible-doc -l|grep yum
yum                       Manages packages with the `yum' package manager                                                    
yum_repository            Add or remove YUM repositories

to find it. This is a possible solution:

---
- name: task 8
  hosts: database
  become: yes

  tasks:
    - name: create yum repository
      yum_repository:
        name: "mysql80-community"
        baseurl: http://repo.mysql.com/yum/mysql-8.0-community/el/8/x86_64/
        description: "MySQL 8.0 YUM Repo"
        enabled: true
        gpgkey: http://repo.mysql.com/RPM-GPG-KEY-mysql
        gpgcheck: true

Task 9: Create and Work with Roles

To create the role template:

cd roles
ansible-galaxy role init --offline sample-mysql
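For reference, the command creates roughly this skeleton under roles/sample-mysql (the exact layout may vary slightly between Ansible versions):

```
sample-mysql/
├── README.md
├── defaults/main.yml
├── files/
├── handlers/main.yml
├── meta/main.yml
├── tasks/main.yml
├── templates/
├── tests/
│   ├── inventory
│   └── test.yml
└── vars/main.yml
```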

The playbook:

---
- name: task 9
  hosts: database
  become: true
  vars_files:
    - secret.yml
  roles:
    - sample-mysql

I spent quite some time on this one. First of all, `mysql-community-server` is not available in the configured repos, so I used `mysql-server` instead. It also took some time to figure out that `mysql` and `python3-PyMySQL` are required as well.

The role file:

---
# tasks file for sample-mysql

- name: create primary partition 
  parted:
    device: /dev/nvme0n2
    number: 1
    flags: [ lvm ]
    state: present
    part_end: 800MB

- name: create VG vg_database using the primary partition created above
  lvg:
    vg: vg_database
    pvs: /dev/nvme0n2p1

- name: create LV lv_mysql size 512MB in the VG vg_database
  lvol:
    vg: vg_database
    lv: lv_mysql
    size: 512m

- name: create an XFS filesystem on lv_mysql
  filesystem:
    fstype: xfs
    dev: /dev/vg_database/lv_mysql

- name: ensure mount point /mnt/mysql_backups exists
  file:
    path: /mnt/mysql_backups
    state: directory
    owner: root
    group: root
    mode: 0775

- name: permanently mount filesystem
  mount:
    path: /mnt/mysql_backups
    src: /dev/vg_database/lv_mysql
    fstype: xfs
    state: mounted

- name: install mysql-server
  yum:
    name: "{{ item }}"
    state: latest
  loop:
    - mysql-server
    - mysql
    - python3-PyMySQL

- name: allow mysql traffic
  firewalld:
    service: mysql
    permanent: true
    immediate: true
    state: enabled

- name: start and enable mysql
  service:
    name: mysqld
    state: started
    enabled: true

- name: configure root user
  mysql_user:
    check_implicit_admin: true
    login_host: localhost
    login_user: root
    login_password: ''
    name: root
    password: "{{ database_password }}"
    state: present
    update_password: always
  # no_log: true

- name: deploy configuration
  template:
    src: mysql.j2
    dest: /etc/my.cnf
    owner: root
    group: root
    mode: 0644
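The content of mysql.j2 depends on what the assignment asks for. A minimal sketch (the bind address, port and paths here are assumptions for illustration, not part of the original task listing):

```
[mysqld]
bind-address = 0.0.0.0
port = 3306
datadir = /var/lib/mysql
socket = /var/lib/mysql/mysql.sock
```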

Task 10: Create and Work with Roles (Some More)

The playbook:

---
- name: task 10
  hosts: webservers
  become: true
  roles:
    - sample-apache

The role:

---
# tasks file for roles/sample-apache

- name: install apache
  yum:
    name: "{{ item }}"
    state: latest
  loop:
    - httpd
    - mod_ssl
    - php

- name: allow incoming http/https traffic
  firewalld:
    service: "{{ item }}"
    immediate: true
    permanent: true
    state: enabled
  loop:
    - http
    - https

- name: start and enable the apache service
  service:
    name: httpd
    state: started
    enabled: true

- name: update index.html
  template:
    src: index.html.j2
    dest: /var/www/html/index.html
    owner: root
    group: root
    mode: 0644
  notify: restart apache
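For reference, index.html.j2 can pull per-host values from the gathered facts. A sketch (the exact wording the assignment expects may differ):

```
Apache is running on {{ ansible_fqdn }} ({{ ansible_default_ipv4.address }})
```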

The handlers file:

---
# handlers file for roles/sample-apache

- name: restart apache
  service:
    name: httpd
    state: restarted

Task 11: Download Roles From Ansible Galaxy and Use Them

Install the role:

ansible-galaxy install geerlingguy.haproxy

The playbook:

---
- name: task 11
  hosts: proxy
  become: true

  vars:
    haproxy_backend_servers:
      - name: ansible3
        address: 172.16.10.203:80
      - name: ansible4
        address: 172.16.10.204:80
    haproxy_backend_balance_method: 'roundrobin'
    haproxy_backend_mode: 'http'

  roles:
    - geerlingguy.haproxy
  
  tasks:
    - name: enable http traffic to proxy
      firewalld:
        service: http
        state: enabled
        immediate: true
        permanent: true

Note: it may be a good idea to add firewalld installation and configuration just in case.

Task 12: Security

Install roles:

yum install rhel-system-roles

Since the installed roles are not in the roles path, ansible.cfg has to be modified:

[defaults]
roles_path = /home/automation/plays/roles:/usr/share/ansible/roles
[...]

The playbook:

---
- name: task 12
  hosts: webservers
  become: true

  vars:
    selinux_booleans:
      - name: httpd_can_network_connect
        state: 'on'
        persistent: true

  roles:
    - rhel-system-roles.selinux
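For reference, the same result can be achieved without the system role by calling the seboolean module directly; a sketch:

```
  tasks:
    - name: enable httpd_can_network_connect
      seboolean:
        name: httpd_can_network_connect
        state: true
        persistent: true
```

The system role is worth practising, though, since managing SELinux with rhel-system-roles is part of the exam objectives.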

Task 13: Use Conditionals to Control Play Execution

To find the fact:

ansible ansible2.hl.local -m setup | grep -A10 memory
[...]
        "ansible_memory_mb": {
            "nocache": {
                "free": 1447,
                "used": 343
            },
            "real": {
                "free": 961,
                "total": 1790,
                "used": 829
            },
            "swap": {
[...]

So the variable to use is ansible_memory_mb.real.total. And the playbook:

---
- name: task 13
  hosts: all
  become: true

  tasks:
    - name: set vm.swappiness to 10 if server has at least 2GB memory
      sysctl:
        name: vm.swappiness
        value: 10
        state: present
      when:
        - ansible_memory_mb.real.total >= 2048 

    - name: report not enough total memory
      debug:
        msg: "Server memory less than 2048MB ({{ ansible_memory_mb.real.total }}MB)"
      when:
        - ansible_memory_mb.real.total < 2048

Task 14: Use Archiving

---
- name: task 14
  hosts: database
  become: true

  tasks:
    - name: create database list file
      copy:
        content: "dev,test,qa,prod"
        dest: /mnt/mysql_backups/database_list.txt

    - name: archive file
      archive:
        path: /mnt/mysql_backups/database_list.txt
        dest: /mnt/mysql_backups/archive.gz
        format: "gz"

Task 15: Work with Ansible Facts

WARNING: I am not 100% sure this one is working properly

The file gets created, but I am not able to get the custom facts this way:

ansible ansible5.hl.local -m setup -a "filter=ansible_local"
ansible5.hl.local | SUCCESS => {
    "ansible_facts": {
        "ansible_local": {},
        "discovered_interpreter_python": "/usr/libexec/platform-python"
    },
    "changed": false
}

As you can see the ansible_local variable is empty. However this works:

    - name: test
      debug:
        msg: "{{ ansible_local }}"

and returns the custom fact:

ok: [ansible5.hl.local] => {
    "msg": {
        "custom": {
            "sample_exam": {
                "server_role": "mysql"
            }
        }
    }
}

I am not really sure if this is the normal behaviour. In any case this is my (maybe wrong) playbook:

---
- name: task 15
  hosts: database
  become: true

  tasks:
    - name: ensure facts directory exists
      file:
        path: /etc/ansible/facts.d
        state: directory
        recurse: true

    - name: create custom fact
      copy:
        content: "[sample_exam]\nserver_role = mysql\n"
        dest: /etc/ansible/facts.d/custom.fact

Task 16: Software Packages

---
- name: task 16
  hosts: all
  become: true

  tasks:
    - name: install software in proxy group
      yum:
        name: "{{ item }}"
        state: latest
      loop:
        - tcpdump
        - mailx
      when:
        - inventory_hostname in groups['proxy']

    - name: install software in database group
      yum:
        name: "{{ item }}"
        state: latest
      loop:
        - lsof
        - mailx
      when:
        - inventory_hostname in groups['database']

Task 17: Services

There is no module that I am aware of to accomplish this. You need to know a bit about systemd internals, but it is as easy as creating a symbolic link.

---
- name: task 17
  hosts: webservers
  become: true

  tasks:
    - name: set default target to multi-user
      file:
        src: /usr/lib/systemd/system/multi-user.target
        dest: /etc/systemd/system/default.target
        state: link

Task 18: Create and Use Templates to Create Customised Configuration Files

The playbook:

---
- name: task 18
  hosts: database
  become: true

  tasks:
    - name: deploy server list
      template:
        src: server_list.j2
        dest: /etc/server_list.txt
        owner: automation
        group: automation
        mode: 0600
        setype: net_conf_t

The template:

{% for host in groups["all"] %}
{{ hostvars[host]['inventory_hostname'] }}
{% endfor %}
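Since the inventory only contains ansible2 through ansible5, the rendered /etc/server_list.txt should end up as (in inventory order):

```
ansible2.hl.local
ansible3.hl.local
ansible4.hl.local
ansible5.hl.local
```

Note that hostvars[host]['inventory_hostname'] is always equal to host itself here, so the loop body could be shortened to just {{ host }}.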
