Containerized Ansible Automation Platform Installation Guide

Red Hat Ansible Automation Platform 2.4

Red Hat Customer Content Services

Abstract

This guide provides the requirements and procedures for installing, configuring, and uninstalling containerized Ansible Automation Platform on a Red Hat Enterprise Linux host.

Making open source more inclusive

Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright’s message.

Providing feedback on Red Hat documentation

If you have a suggestion to improve this documentation, or find an error, please contact technical support at https://access.redhat.com to create an issue on the Ansible Automation Platform Jira project using the docs-product component.

Chapter 1. Ansible Automation Platform containerized installation

Ansible Automation Platform is a commercial offering that helps teams manage complex multi-tier deployments by adding control, knowledge, and delegation to Ansible-powered environments.

This guide helps you understand the installation requirements and processes for the containerized version of Ansible Automation Platform. This initial version is based on Ansible Automation Platform 2.4 and is released as a Technology Preview. See Technology Preview Features Support Scope to understand what a Technology Preview entails.

Prerequisites

  • A RHEL 9.2 based host. A minimal OS base install is recommended.
  • A non-root user for the RHEL host, with sudo or other Ansible-supported privilege escalation (sudo recommended). This user is responsible for the installation of containerized Ansible Automation Platform.
  • It is recommended that you set up SSH public key authentication for the non-root user. For guidance, see How to configure SSH public key authentication for passwordless login.
  • SSH keys are only required when installing on remote hosts. For a self-contained, local VM-based installation, you can use ansible_connection: local (as shown in the sketch after this list), which does not require SSH.
  • Internet access from the RHEL host if using the default online installation method.
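
For reference, a local host entry in the installer inventory looks like the following minimal sketch; it mirrors the example inventory shown later in this guide, with fqdn_of_your_rhel_host as a placeholder:

    [automationcontroller]
    fqdn_of_your_rhel_host ansible_connection=local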

1.1. System Requirements

Your system must meet the following minimum system requirements to install and run Red Hat Containerized Ansible Automation Platform.

  • Memory: 16 GB RAM
  • CPU: 4 CPU
  • Disk space: 40 GB
  • Disk IOPS: 1500
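
You can quickly check a host against these requirements using standard RHEL tooling (a minimal sketch; adjust the mount point in the df command to wherever you plan to install):

    $ free -h        # total memory
    $ nproc          # number of CPUs
    $ df -h /        # available disk space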

1.2. Preparing the RHEL host for containerized installation

Containerized Ansible Automation Platform runs the component services as Podman-based containers on top of a RHEL host. The installer takes care of this once the underlying host has been prepared. Use the following instructions to prepare the host.

Procedure

  1. Log in to your RHEL host as your non-root user.
  2. Run dnf repolist to validate that only the BaseOS and AppStream repositories are set up and enabled on the host:

    $ dnf repolist
    Updating Subscription Management repositories.
    repo id                                                    repo name
    rhel-9-for-x86_64-appstream-rpms                           Red Hat Enterprise Linux 9 for x86_64 - AppStream (RPMs)
    rhel-9-for-x86_64-baseos-rpms                              Red Hat Enterprise Linux 9 for x86_64 - BaseOS (RPMs)
  3. Ensure that these repositories, and only these repositories, are available to the host OS. For instructions, see Chapter 10. Managing custom software repositories in the Red Hat Enterprise Linux documentation.
  4. Ensure that the host has DNS configured and can resolve hostnames and IP addresses using a fully qualified domain name (FQDN). This is essential to ensure that services can talk to one another.
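
As a quick check, assuming standard RHEL tooling, you can confirm that the host resolves its own FQDN before starting the installation (placeholder hostname and documentation-range IP address shown):

    $ hostname -f
    your-host.example.com
    $ getent hosts your-host.example.com
    192.0.2.10      your-host.example.com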

Using unbound DNS

To configure Unbound DNS, see Chapter 2. Setting up an unbound DNS server in the Red Hat Enterprise Linux 9 documentation.

Optional

To have the installer automatically pick up and apply your Ansible Automation Platform subscription manifest license, see Chapter 2. Obtaining a manifest file in the Red Hat Ansible Automation Platform 2.4 documentation to generate a manifest file that can be downloaded for the installer.

1.3. Installing ansible-core

Procedure

  1. Install ansible-core and other tools:

    sudo dnf install -y ansible-core wget git rsync
  2. Set a fully qualified hostname:

    sudo hostnamectl set-hostname your-FQDN-hostname
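
To confirm the tooling is in place, assuming the packages installed cleanly, you can check the installed ansible-core version and the static hostname you just set:

    $ ansible --version
    $ hostnamectl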

1.4. Downloading Ansible Automation Platform

Procedure

  1. Download the latest installer tarball from access.redhat.com. This can be done directly within the RHEL host, which saves time.
  2. If you have downloaded the tarball and optional manifest zip file onto your laptop, copy them onto your RHEL host.

    Decide where you would like the installer to reside on the filesystem. Installation-related files are created under this location and require at least 10 GB for the initial installation.

  3. Unpack the installer tarball into your installation directory, and cd into the unpacked directory.

    1. online installer

      $ tar xfvz ansible-automation-platform-containerized-setup-2.4-2.tar.gz
    2. bundled installer

      $ tar xfvz ansible-automation-platform-containerized-setup-bundle-2.4-2-<arch name>.tar.gz

1.5. Using postinstall feature of containerized Ansible Automation Platform

Use the experimental postinstaller feature of containerized Ansible Automation Platform to define and load the configuration during the initial installation. This uses a configuration-as-code approach, where you define the configuration to be loaded from plain YAML files.

  1. To use this optional feature, you need to uncomment the following variable in the inventory file:

    controller_postinstall=true
  2. The default is false, so you need to enable this to activate the postinstaller. You need an Ansible Automation Platform license for this feature, and it must reside on the local filesystem so that it can be automatically loaded:

    controller_license_file=/full_path_to/manifest_file.zip
  3. You can pull your configuration-as-code from a Git-based repository. To do this, set the following variables to dictate where you pull the content from and where to store it for upload to the Ansible Automation Platform controller:

    controller_postinstall_repo_url=https://your_cac_scm_repo
    controller_postinstall_dir=/full_path_to_where_you_want_the_pulled_content_to_reside
  4. The controller_postinstall_repo_url variable defines the postinstall repository URL, which must include any required authentication information:

    http(s)://<host>/<repo>.git (public repository without http(s) authentication)
    http(s)://<user>:<password>@<host>/<repo>.git (private repository with http(s) authentication)
    git@<host>:<repo>.git (public/private repository with ssh authentication)
    Note

    When using SSH-based authentication, the installer does not configure anything for you, so you must set up the SSH configuration on the installer node yourself.

Definition files use the infra certified collections. The controller_configuration collection is preinstalled as part of the installation and uses the controller credentials that you supply in the inventory file to access the Ansible Automation Platform controller. You only need to provide the YAML configuration files.

You can set up Ansible Automation Platform configuration attributes such as credentials, LDAP settings, users and teams, organizations, projects, inventories and hosts, and job and workflow templates.

The following example shows a sample configuration-as-code directory layout and a job_templates.yml file that defines and loads controller job templates. The example demonstrates a simple change to the preloaded demo example provided with an Ansible Automation Platform installation.

/full_path_to_your_configuration_as_code/
├── controller
    └── job_templates.yml

The job_templates.yml file contains:

controller_templates:
  - name: Demo Job Template
    execution_environment: Default execution environment
    instance_groups:
      - default
    inventory: Demo Inventory
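
As a further illustration, a projects.yml file in the same controller/ directory could define a project. This is a hypothetical sketch: the controller_projects variable and field names are assumptions based on the controller_configuration collection, so check the collection documentation for the exact schema. The Demo Project values match the demo content preloaded by an Ansible Automation Platform installation.

controller_projects:
  - name: Demo Project
    organization: Default
    scm_type: git
    scm_url: https://github.com/ansible/ansible-tower-samples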

1.6. Installing containerized Ansible Automation Platform

Installation of Ansible Automation Platform is controlled with inventory files. Inventory files define the hosts and containers used and created, variables for components, and other information needed to customize the installation.

For convenience, an example inventory file is provided that you can copy and modify to get started quickly.

Note

There is no default database choice given in the inventory file. You must follow the instructions in the inventory file to choose between an installer-managed PostgreSQL database or your own externally managed and supported database.

Edit the inventory file by replacing the < > placeholders with your specific variables, and uncommenting any lines specific to your needs.

# This is the AAP installer inventory file
# Please consult the docs if you're unsure what to add
# For all optional variables please consult the included README.md

# This section is for your AAP Controller host(s)
# -------------------------------------------------
[automationcontroller]
fqdn_of_your_rhel_host ansible_connection=local

# This section is for your AAP Automation Hub host(s)
# -----------------------------------------------------
[automationhub]
fqdn_of_your_rhel_host ansible_connection=local

# This section is for your AAP EDA Controller host(s)
# -----------------------------------------------------
[automationeda]
fqdn_of_your_rhel_host ansible_connection=local

# This section is for your AAP Execution host(s)
# ------------------------------------------------
#[execution_nodes]
#fqdn_of_your_rhel_host

# This section is for the AAP database(s)
# -----------------------------------------
# Uncomment the lines below and amend appropriately if you want AAP to install and manage the postgres databases
# Leave commented out if you intend to use your own external database and just set appropriate _pg_hosts vars
# see mandatory sections under each AAP component
#[database]
#fqdn_of_your_rhel_host ansible_connection=local

[all:vars]

# Common variables needed for installation
# ----------------------------------------
postgresql_admin_username=postgres
postgresql_admin_password=<set your own>
# If using the online (non-bundled) installer, you need to set RHN registry credentials
registry_username=<your RHN username>
registry_password=<your RHN password>
# If using the bundled installer, you need to alter defaults by using:
#bundle_install=true
# The bundle directory must include /bundle in the path
#bundle_dir=<full path to the bundle directory>
# To add more decision environment images you need to set the de_extra_images variable
#de_extra_images=[{'name': 'Custom decision environment', 'image': '<registry>/<namespace>/<image>:<tag>'}]
# To add more execution environment images you need to set the ee_extra_images variable
#ee_extra_images=[{'name': 'Custom execution environment', 'image': '<registry>/<namespace>/<image>:<tag>'}]
# To use custom TLS CA certificate/key you need to set these variables
#ca_tls_cert=<full path to your TLS CA certificate file>
#ca_tls_key=<full path to your TLS CA key file>

# AAP Database - optional
# --------------------------
# To use custom TLS certificate/key you need to set these variables
#postgresql_tls_cert=<full path to your TLS certificate file>
#postgresql_tls_key=<full path to your TLS key file>

# AAP Controller - mandatory
# --------------------------
controller_admin_password=<set your own>
controller_pg_host=fqdn_of_your_rhel_host
controller_pg_password=<set your own>

# AAP Controller - optional
# -------------------------
# To use the postinstall feature you need to set these variables
#controller_postinstall=true
#controller_license_file=<full path to your manifest .zip file>
#controller_postinstall_dir=<full path to your config-as-code directory>
# When using config-as-code in a git repository
#controller_postinstall_repo_url=<url to your config-as-code git repository>
#controller_postinstall_repo_ref=main
# To use custom TLS certificate/key you need to set these variables
#controller_tls_cert=<full path to your TLS certificate file>
#controller_tls_key=<full path to your TLS key file>

# AAP Automation Hub - mandatory
# ------------------------------
hub_admin_password=<set your own>
hub_pg_host=fqdn_of_your_rhel_host
hub_pg_password=<set your own>

# AAP Automation Hub - optional
# -----------------------------
# To use the postinstall feature you need to set these variables
#hub_postinstall=true
#hub_postinstall_dir=<full path to your config-as-code directory>
# When using config-as-code in a git repository
#hub_postinstall_repo_url=<url to your config-as-code git repository>
#hub_postinstall_repo_ref=main
# To customize the number of worker containers
#hub_workers=2
# To use the collection signing feature you need to set these variables
#hub_collection_signing=true
#hub_collection_signing_key=<full path to your gpg key file>
# To use the container signing feature you need to set these variables
#hub_container_signing=true
#hub_container_signing_key=<full path to your gpg key file>
# To use custom TLS certificate/key you need to set these variables
#hub_tls_cert=<full path to your TLS certificate file>
#hub_tls_key=<full path to your TLS key file>

# AAP EDA Controller - mandatory
# ------------------------------
eda_admin_password=<set your own>
eda_pg_host=fqdn_of_your_rhel_host
eda_pg_password=<set your own>

# AAP EDA Controller - optional
# -----------------------------
# When using an external controller node unmanaged by the installer.
#controller_main_url=https://fqdn_of_your_rhel_host
# To customize the number of default/activation worker containers
#eda_workers=2
#eda_activation_workers=2
# To use custom TLS certificate/key you need to set these variables
#eda_tls_cert=<full path to your TLS certificate file>
#eda_tls_key=<full path to your TLS key file>

# AAP Execution Nodes - optional
# -----------------------------
#receptor_port=27199
#receptor_protocol=tcp
# To use custom TLS certificate/key you need to set these variables
#receptor_tls_cert=<full path to your TLS certificate file>
#receptor_tls_key=<full path to your TLS key file>
# To use custom RSA key pair you need to set these variables
#receptor_signing_private_key=<full path to your RSA private key file>
#receptor_signing_public_key=<full path to your RSA public key file>

Use the following command to install containerized Ansible Automation Platform:

ansible-playbook -i inventory ansible.containerized_installer.install
Note
If your privilege escalation requires a password to be entered, append -K to the command line. You are then prompted for the BECOME password.

You can increase the verbosity, up to 4 v's (-vvvv), to see the details of the installation process.

Note

This can significantly increase installation time, so it is recommended that you use it only as needed or requested by Red Hat support.
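
For example, a run that prompts for the privilege escalation password and uses moderate verbosity might look like the following sketch; adjust the options to your environment:

$ ansible-playbook -i inventory ansible.containerized_installer.install -K -vvv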

1.7. Accessing automation controller, automation hub, and Event-Driven Ansible controller

After the installation completes, these are the default protocol and ports used:

  • http/https protocol
  • Ports 8080/8443 for automation controller
  • Ports 8081/8444 for automation hub
  • Ports 8082/8445 for Event-Driven Ansible controller

These can be changed. Consult the README.md for further details. It is recommended that you leave the defaults unless you need to change them due to port conflicts or other factors.

Accessing automation controller UI

The automation controller UI is available by default at:

https://<your_rhel_host>:8443

Log in as the admin user with the password you created for controller_admin_password.

If you supplied the license manifest as part of the installation, the Ansible Automation Platform dashboard is displayed. If you did not supply a license file, the Subscription screen is displayed where you must supply your license details. This is documented here: Chapter 1. Activating Red Hat Ansible Automation Platform.

Accessing automation hub UI

The automation hub UI is available by default at:

https://<hub node>:8444

Log in as the admin user with the password you created for hub_admin_password.

Accessing Event-Driven Ansible UI

The Event-Driven Ansible UI is available by default at:

https://<eda node>:8445

Log in as the admin user with the password you created for eda_admin_password.
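
As a quick reachability check, assuming curl is available and the default ports are in use, you can confirm that each UI answers over HTTPS. The -k option skips certificate verification, which is only appropriate for installer-generated certificates; a 200 or a redirect code such as 302 indicates that the service is responding.

$ curl -ks -o /dev/null -w '%{http_code}\n' https://<your_rhel_host>:8443/
$ curl -ks -o /dev/null -w '%{http_code}\n' https://<your_rhel_host>:8444/
$ curl -ks -o /dev/null -w '%{http_code}\n' https://<your_rhel_host>:8445/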

1.8. Using custom TLS certificates

By default, the installer generates TLS certificates and keys for all services which are signed by a custom Certificate Authority (CA). You can provide a custom TLS certificate/key for each service. If that certificate is signed by a custom CA, you must provide the CA TLS certificate and key.

  • Certificate Authority
ca_tls_cert=/full/path/to/tls/certificate
ca_tls_key=/full/path/to/tls/key
  • Automation Controller
controller_tls_cert=/full/path/to/tls/certificate
controller_tls_key=/full/path/to/tls/key
  • Automation Hub
hub_tls_cert=/full/path/to/tls/certificate
hub_tls_key=/full/path/to/tls/key
  • Automation EDA
eda_tls_cert=/full/path/to/tls/certificate
eda_tls_key=/full/path/to/tls/key
  • Postgresql
postgresql_tls_cert=/full/path/to/tls/certificate
postgresql_tls_key=/full/path/to/tls/key
  • Receptor
receptor_tls_cert=/full/path/to/tls/certificate
receptor_tls_key=/full/path/to/tls/key
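
If you need a throwaway CA and service certificate for testing, you could generate them with openssl as in the following sketch. The file names and subject values are placeholders, and for production you should use certificates issued by your organization's CA and add the subjectAltName entries your clients require:

$ openssl genrsa -out ca.key 4096
$ openssl req -x509 -new -key ca.key -days 365 -subj "/CN=Test AAP CA" -out ca.crt
$ openssl genrsa -out controller.key 4096
$ openssl req -new -key controller.key -subj "/CN=fqdn_of_your_rhel_host" -out controller.csr
$ openssl x509 -req -in controller.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 365 -out controller.crt

You would then set ca_tls_cert/ca_tls_key to ca.crt/ca.key and controller_tls_cert/controller_tls_key to controller.crt/controller.key.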

1.9. Using custom Receptor signing keys

Receptor signing is now enabled by default unless receptor_disable_signing=true is set, and an RSA key pair (public/private) is generated by the installer. However, you can provide custom RSA public/private keys by setting the path variables.

receptor_signing_private_key=/full/path/to/private/key
receptor_signing_public_key=/full/path/to/public/key
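
For example, a minimal sketch using openssl to generate such a key pair:

$ openssl genrsa -out /full/path/to/private/key 4096
$ openssl rsa -in /full/path/to/private/key -pubout -out /full/path/to/public/key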

1.10. Enabling Automation hub collection and container signing

Automation hub allows you to sign Ansible collections and container images. This feature is not enabled by default, and you must provide the GPG key.

hub_collection_signing=true
hub_collection_signing_key=/full/path/to/collections/gpg/key
hub_container_signing=true
hub_container_signing_key=/full/path/to/containers/gpg/key

When the GPG key is protected by a passphrase, you must provide the passphrase.

hub_collection_signing_pass=<collections gpg key passphrase>
hub_container_signing_pass=<containers gpg key passphrase>
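
If you do not already have a signing key, you could create and export one with GnuPG as in the following sketch. The user ID, key size, and output paths are placeholders, and key parameters and passphrase handling should follow your organization's policy:

$ gpg --quick-generate-key "AAP Signing <signing@example.com>" rsa4096 sign
$ gpg --export-secret-keys --armor "AAP Signing <signing@example.com>" > /full/path/to/collections/gpg/key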

1.11. Adding execution nodes

The containerized installer can deploy remote execution nodes. This is handled by the execution_nodes group in the Ansible inventory file.

[execution_nodes]
fqdn_of_your_execution_host

By default, an execution node is configured as the execution type, running on port 27199 (TCP). You can change this with the following variables:

  • receptor_port=27199
  • receptor_protocol=tcp
  • receptor_type=hop

The receptor_type value can be either execution or hop, and the protocol can be either tcp or udp. By default, the nodes in the execution_nodes group are added as peers for the controller node. However, you can change the peer configuration by using the receptor_peers variable.

[execution_nodes]
fqdn_of_your_execution_host
fqdn_of_your_hop_host receptor_type=hop receptor_peers='["fqdn_of_your_execution_host"]'

1.12. Uninstalling containerized Ansible Automation Platform

To uninstall a containerized deployment, execute the uninstall.yml playbook.

$ ansible-playbook -i inventory ansible.containerized_installer.uninstall

This will stop all systemd units and containers and then delete all resources used by the containerized installer such as:

  • config and data directories/files
  • systemd unit files
  • podman containers and images
  • RPM packages

To keep container images, you can set the container_keep_images variable to true.

$ ansible-playbook -i inventory ansible.containerized_installer.uninstall -e container_keep_images=true

To keep the PostgreSQL databases, you can set the postgresql_keep_databases variable to true.

$ ansible-playbook -i </path/to/inventory> ansible.containerized_installer.uninstall -e postgresql_keep_databases=true
Note

If you reinstall and reuse the kept databases, you must provide the same Django secret key values rather than allowing the installer to auto-generate new ones.

Legal Notice

Copyright © 2024 Red Hat, Inc.
The text of and illustrations in this document are licensed by Red Hat under a Creative Commons Attribution–Share Alike 3.0 Unported license ("CC-BY-SA"). An explanation of CC-BY-SA is available at http://creativecommons.org/licenses/by-sa/3.0/. In accordance with CC-BY-SA, if you distribute this document or an adaptation of it, you must provide the URL for the original version.
Red Hat, as the licensor of this document, waives the right to enforce, and agrees not to assert, Section 4d of CC-BY-SA to the fullest extent permitted by applicable law.
Red Hat, Red Hat Enterprise Linux, the Shadowman logo, the Red Hat logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks of Red Hat, Inc., registered in the United States and other countries.
Linux® is the registered trademark of Linus Torvalds in the United States and other countries.
Java® is a registered trademark of Oracle and/or its affiliates.
XFS® is a trademark of Silicon Graphics International Corp. or its subsidiaries in the United States and/or other countries.
MySQL® is a registered trademark of MySQL AB in the United States, the European Union and other countries.
Node.js® is an official trademark of Joyent. Red Hat is not formally related to or endorsed by the official Joyent Node.js open source or commercial project.
The OpenStack® Word Mark and OpenStack logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation, or the OpenStack community.
All other trademarks are the property of their respective owners.