This document describes the process to deploy an OCP cluster on a RHOS infrastructure.
The installation process uses the OpenShift Container Platform installer obtained from https://mirror.openshift.com/.
Glossary of terms used:

- OCP: OpenShift Container Platform
- RHCOS: Red Hat Enterprise Linux CoreOS
- RHOSP: Red Hat OpenStack Platform
Playbooks to deploy and remove an OCP cluster on RHOS.

Playbook File | Description
---|---
`ocp_openstack_install.yml` | Deploy an OCP cluster on RHOS.
`ocp_openstack_remove.yml` | Remove an OCP cluster from RHOS.
`ocp_openstack_info.yml` | Print information from the deployed OCP cluster, such as the Console URL and the kubeadmin password.
The OCP installation process requires the use of the OCP pull secret. This secret can be obtained from https://console.redhat.com/openshift/install/pull-secret. As part of the installation process this information will be added to the install-config.yaml file and used in the OCP installation process.
{
  "auths": {
    "cloud.openshift.com": {"auth": "wwwwwwwwww", "email": "[email protected]"},
    "quay.io": {"auth": "xxxxxxxxxxxxx", "email": "[email protected]"},
    "registry.connect.redhat.com": {"auth": "yyyyyyyyyyyyy", "email": "[email protected]"},
    "registry.redhat.io": {"auth": "zzzzzzzzzzzzzzzzz", "email": "[email protected]"}
  }
}
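A convenient way to hand the secret to the playbooks is through an environment variable. A minimal sketch, assuming the secret was saved locally as pull-secret.txt (the file name and the sample content below are placeholders, not part of the repository):

```shell
# Save the pull secret obtained from console.redhat.com to a file
# (placeholder content shown here; use your real secret).
cat > pull-secret.txt <<'EOF'
{"auths": {"quay.io": {"auth": "xxxxxxxxxxxxx", "email": "[email protected]"}}}
EOF

# Export it for later use, e.g. -e openshift_pull_secret=${OCP_PULL_SECRET}.
export OCP_PULL_SECRET="$(cat pull-secret.txt)"

# Sanity check: the secret must be valid JSON containing an "auths" key.
echo "${OCP_PULL_SECRET}" | python3 -c 'import json,sys; json.load(sys.stdin)["auths"]'
```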
ℹ️ The commands described hereafter use the …
During the installation process several files are downloaded, among which are the OCP installer software and a RHCOS image. During the execution of the OCP installer the downloaded RHCOS image must be uploaded into RHOSP. Although the image is cached locally, this part of the process takes a non-negligible amount of time.
The default installation process uses the controller (localhost) as the installation executor. This means all files are downloaded and uploaded to and from the local workstation. This approach has several drawbacks, such as relying on the network infrastructure of the workstation and being limited by its bandwidth ([download] and [upload]).
To mitigate this problem the installation process can, and we suggest should, be executed from a remote host, which might be a temporary host. This host will be referred to as the bootstrap host hereafter.
To be able to use a temporary bootstrap host it must be created prior to the execution of the installation process. The name of this RHOSP host can be any name, although we recommend using the name of the cluster as prefix and adding a suffix such as -tmp-bootstrap-server.
To create the bootstrap host execute the RHOSP playbook created for that purpose.
ansible-playbook ansible/playbook/openstack/openstack_vm_create_passwordstore.yml -e '{"openstack": {"vm": {"network": "provider_net_shared","image": "Fedora-Cloud-Base-37", "flavor": "m1.small"}}}' -e vm_name=ocp-xyz-tmp-bootstrap-server
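When the inline JSON gets unwieldy, `ansible-playbook` can also read extra variables from a file via `-e @file`. A sketch of the same invocation in that style (the file name bootstrap-vars.json is arbitrary):

```shell
# Write the OpenStack VM parameters from the command above to a vars file.
cat > bootstrap-vars.json <<'EOF'
{"openstack": {"vm": {"network": "provider_net_shared", "image": "Fedora-Cloud-Base-37", "flavor": "m1.small"}}}
EOF

# Equivalent invocation (commented out; requires the repository and RHOSP access):
# ansible-playbook ansible/playbook/openstack/openstack_vm_create_passwordstore.yml \
#   -e @bootstrap-vars.json -e vm_name=ocp-xyz-tmp-bootstrap-server

# Confirm the file parses as JSON before handing it to Ansible.
python3 -c 'import json; json.load(open("bootstrap-vars.json"))'
```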
After creating the bootstrap host execute the steps provided in the Deploy OCP Cluster on RHOS section.
ℹ️ The bootstrap server can optionally be removed once the OCP cluster has been created.
The deployment playbook supports the following variables.
Variable | Description
---|---
(string) | VM name for the bootstrap host. If defined, the installation process will be performed on that host instead of on the controller (`localhost`).
`ocp_cluster_name` (string / required) | Name to be assigned to the OCP cluster.
`ocp_root_directory` (string / required) | Root folder for the installation. A new sub-folder with the cluster name will be created there.
`openshift_pull_secret` (json / required) | String of the OCP pull secret for the user.
(string) | Flavor to be used on the Control Plane hosts. Default ⇒ …
(string) | Flavor to be used on the Compute hosts. Default ⇒ …
Execute the playbook. Please note that this playbook uses sudo permission to create several folders, so the Ansible user must have sudo permission. We use the -K switch to ask for the become password, which is only required if the user has sudo permission with password. The folders created will be owned (`uid:gid`) by the Ansible user used to connect to the host.
ansible-playbook ansible/playbook/ocp/ocp_openstack_install.yml (1)
-e ocp_root_directory=(2)
-e ocp_cluster_name=(3)
-e openshift_pull_secret=(4)
-K (5)
1. Playbook that implements the OCP deployment.
2. Root directory for the installation.
3. Name to be given to the cluster.
4. OCP pull secret for the user.
5. Ask for the become password.
ansible-playbook ansible/playbook/ocp/ocp_openstack_install.yml \
-e ocp_root_directory=/opt/ocp \
-e ocp_cluster_name=ocp-sdev \
-e openshift_pull_secret=${OCP_PULL_SECRET} \
-K
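Since a failed run wastes considerable time, a small pre-flight check before launching can help. A sketch using the example values from the invocation above (the `${var:?message}` expansion aborts the shell with an error when the variable is unset or empty):

```shell
# Fail fast if the pull secret has not been exported yet.
# (The fallback value below only exists so this sketch runs standalone.)
OCP_PULL_SECRET="${OCP_PULL_SECRET:-dummy-secret-for-illustration}"
: "${OCP_PULL_SECRET:?export the OCP pull secret first}"

ocp_root_directory=/opt/ocp   # example value from the invocation above
ocp_cluster_name=ocp-sdev     # example value from the invocation above
echo "deploying ${ocp_cluster_name} under ${ocp_root_directory}"
```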
The playbook results in the deployment of several RHOS VMs for the control plane and worker nodes.
ℹ️ Note on the RHOS VM flavors: the RHOS flavors used for the VMs that will make up the OCP cluster are defined by the Ansible role's default flavor configuration in roles/ocp_cluster/defaults/main.yml. Instructions on how to obtain the list of available flavors are described in our OpenStack CLI README file.
The result of the deployment process is the following:

- OCP cluster deployed on RHOS instances, with the number and flavor of control plane and worker nodes as defined
- RHOS instance that will serve as jump server to the OCP cluster
- Installation directory stored in the passwordstore and copied to the jump server
- OCP authentication information stored in the passwordstore
🔥 At this point the bootstrap server, if used, is no longer required. Check that the installation folder is safely stored both on the jump server and in the local passwordstore before removing it.
ℹ️ For the removal process to be successful the OCP installation directory ( …
The removal playbook supports the following variables.
Variable | Description
---|---
`ocp_bootstrap_host` (string) | VM name for the host that contains the OCP installation folder. If defined, the removal process will be performed on that host instead of on the controller (`localhost`).
`ocp_cluster_name` (string / required) | Name of the OCP cluster.
`ocp_root_directory` (string / required) | Root folder for the installation. TODO: must be added as part of the ansible inventory.
ansible-playbook ansible/playbook/ocp/ocp_openstack_remove.yml \
-e ocp_root_directory=/opt/ocp \
-e ocp_cluster_name=ocp-sdev \
-e ocp_bootstrap_host=ocp-sdev-xxxxx-jump-server
To collect information on the OCP cluster execute the ocp_openstack_info playbook located in the ansible/playbook/ocp/ folder.
Variable | Description
---|---
`ocp_root_directory` (string / required) | Root folder for the OCP installation. Either define the …
`ocp_cluster_name` (string / required) | Name of the OCP cluster. Either define the …
(string) | Location of the installation directory. Either define the …
(string / required) | Root folder for the installation. A new sub-folder with the cluster name will be created there.
ansible-playbook ansible/playbook/ocp/ocp_openstack_info.yml \
-e ocp_root_directory=/opt/ocp \
-e ocp_cluster_name=ocp-sdev \
-e vm_name=ocp-sdev-zzzzz-jump-server -vv
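Beyond what the playbook prints, the openshift-install tool leaves the cluster credentials (kubeconfig and kubeadmin-password) under an auth/ sub-folder of the installation directory. A sketch for using them directly, assuming the `<ocp_root_directory>/<ocp_cluster_name>` layout implied by the examples in this document:

```shell
# Point KUBECONFIG at the file produced by the installer; the exact
# installation-directory path below is an assumption based on the
# example values used throughout this document.
export KUBECONFIG=/opt/ocp/ocp-sdev/auth/kubeconfig

# With valid credentials the usual clients then work, e.g.:
# oc get nodes
# oc whoami --show-console
echo "${KUBECONFIG}"
```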
This playbook initializes a jump server by performing the following tasks:

- Downloads the OCP and k8s CLI binaries onto the jump server
- Copies the OCP cluster installation folder from the passwordstore onto the jump server
Variable | Description
---|---
`ocp_root_directory` (string / required) | Root folder for the OCP installation. Either define the …
`ocp_cluster_name` (string / required) | Name of the OCP cluster. Either define the …
`ocp_cluster_bin_directory` (string) | Folder that will contain the OCP and k8s CLI binaries. Default ⇒ …
(string / required) | Root folder for the installation. A new sub-folder with the cluster name will be created there.
ansible-playbook ansible/playbook/ocp/rhosp_init_jump_server_pass.yml \
-e ocp_root_directory=/home/snowdrop/ocp \
-e ocp_cluster_name=ocp-sdev \
-e vm_name=ocp-jump-server \
-e ocp_cluster_bin_directory=/home/snowdrop/.local/bin \
-vv
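After the playbook finishes, the CLI binaries should live in `ocp_cluster_bin_directory` on the jump server. A quick verification sketch, using the example directory from the invocation above:

```shell
# Prepend the binaries folder to PATH (example value from the invocation above).
export PATH="/home/snowdrop/.local/bin:${PATH}"

# On the jump server these should now resolve (commented out here,
# since the binaries only exist after the playbook has run):
# command -v oc && oc version --client
# command -v kubectl && kubectl version --client
echo "${PATH}" | cut -d: -f1
```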