OpenShift Installation Methods - Examples

By Gineesh · 16 min read

Installing an OKD 4.x Cluster

Reference: Installing an OKD 4.5 Cluster

Install OpenShift 4.x

How to install OpenShift 4 on Bare Metal - (UPI) (Video)

Method I - Setup an OpenShift All-In-One

Install required packages

yum install ansible docker wget -y
systemctl enable docker
systemctl start docker

Disable Firewall

Alternatively, keep firewalld running and open the required ports.

systemctl disable firewalld
systemctl stop firewalld
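If you prefer not to disable firewalld, the ports can be opened instead. A minimal sketch, assuming the defaults used by `oc cluster up` (8443 for the API/web console, 53 and 8053 for DNS); verify the port list for your version:

```shell
## open the ports oc cluster up needs instead of disabling firewalld
## (port list is an assumption: 8443 API/console, 53/8053 DNS)
open_cluster_ports() {
  if ! command -v firewall-cmd >/dev/null 2>&1; then
    echo "firewalld not installed"
    return 0
  fi
  firewall-cmd --permanent --add-port=8443/tcp
  firewall-cmd --permanent --add-port=53/udp
  firewall-cmd --permanent --add-port=8053/udp
  firewall-cmd --reload
}

open_cluster_ports
```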

Installing OpenShift CLI

Method 1 - Standard CentOS repositories

yum -y install centos-release-openshift-origin39
yum -y install origin-clients

Method 2 - Download and extract the OpenShift Origin client tools

Create a directory for data (anywhere)

mkdir /data
cd /data

Check for the latest version if needed.

wget ""
tar -xzvf openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit.tar.gz 
cd openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit

Add the directory where you extracted the release to your PATH:

export PATH=$PATH:/data/openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit/

# or

export PATH=$PATH:`pwd`

Configure the Docker daemon with an insecure registry parameter of 172.30.0.0/16 (the default OpenShift service network):

cat > /etc/docker/daemon.json <<DELIM
{
   "insecure-registries": [
      "172.30.0.0/16"
   ]
}
DELIM

Restart docker service

service docker restart
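Before restarting Docker it is worth confirming that daemon.json is well-formed JSON, since a malformed file prevents the daemon from starting. A small sketch, using python3 as the validator and a sample file in /tmp for illustration (validate /etc/docker/daemon.json for real use):

```shell
## validate that a daemon.json file is parseable JSON
validate_daemon_json() {
  python3 -m json.tool "$1" >/dev/null 2>&1 && echo "valid" || echo "invalid"
}

## sample file for illustration only
printf '{"insecure-registries":["172.30.0.0/16"]}' > /tmp/daemon.json
validate_daemon_json /tmp/daemon.json
```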

Initiate cluster

oc cluster up --base-dir="/data/clusterup" --public-hostname=<IP>
  • --base-dir=BASE_DIR : Directory on Docker host for cluster up configuration

When started with --public-hostname, the generated master kubeconfig is written under the base directory, e.g. openshift.local.clusterup/openshift-controller-manager/openshift-master.kubeconfig



Add a user

# oc create user redhat
user "redhat" created
# oc adm policy add-cluster-role-to-user cluster-admin redhat
cluster role "cluster-admin" added: "redhat"

Method II - Setup minishift

Setup Virtual Environment

Install minishift

  • Download and manually install minishift.
  • On macOS: brew cask install minishift (if the install fails, try export HOMEBREW_NO_ENV_FILTERING=1)

Start minishift cluster

minishift start --vm-driver virtualbox

## set VirtualBox as the vm-driver permanently
minishift config set vm-driver virtualbox

Once started, access the console using the URL displayed, or open it directly in a browser:

minishift console

Setup oc access

$ minishift oc-env
export PATH="/home/john/.minishift/cache/oc/v1.5.0:$PATH"

## Run this command to configure your shell:
# eval $(minishift oc-env)
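What `eval $(minishift oc-env)` does is simply evaluate the export line that minishift prints, putting the cached oc binary on PATH. The effect can be illustrated with the sample output above:

```shell
## simulate evaluating the oc-env output shown above (the path is the
## example one; yours will differ)
oc_env_output='export PATH="/home/john/.minishift/cache/oc/v1.5.0:$PATH"'
eval "$oc_env_output"
echo "$PATH"   # now starts with the minishift oc cache directory
```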

Method III - OpenShift 4 - OKD - All in One Quick Cluster

Method IV - OpenShift Full Cluster

OpenShift 4.x

Create clouds.yaml (replace <KEYSTONE_AUTH_URL> with your OpenStack identity endpoint):

clouds:
  openstack:
    auth:
      auth_url: <KEYSTONE_AUTH_URL>
      project_name: ocp4-dev
      username: ocpadmin
      password: ocpadmin
    region_name: RegionOne

OpenShift 4.2 Installation

Red Hat OpenShift 4.x Installation (Evaluation)

Baremetal Installation

OpenShift on Baremetal (UPI)

OpenShift All-In-One - OKD Using Ansible


Extras - OpenShift 4.x


OpenShift 4.2 Installation

OpenShift 4.1 Installation

Baremetal Installation

Install OpenShift 4.x Cluster on VMWare

Install Pre-Req

$ sudo yum install wget git vim

Download VMWare root CA certificates to System trust

$ wget --no-check-certificate https://vcenter.lab.local/certs/download.zip
$ unzip download.zip
$ sudo cp certs/lin/* /etc/pki/ca-trust/source/anchors/
$ sudo update-ca-trust extract

Ref : Adding vCenter root CA certificates to your system trust

Install Terraform

$ sudo yum install -y yum-utils
$ sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo
$ sudo yum -y install terraform

Refer to Terraform Installation Methods

Add VMWare Credential

$ cat .config/ocp/vsphere.yaml
vsphere-user: [email protected]
vsphere-password: "123!"
vsphere-dc: DC1
vsphere-cluster: AZ1

Configure DNS
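The installation needs an API record and a wildcard apps record resolving to your load balancers. A sketch of BIND-style zone entries, assuming the cluster name ocp4 and base domain lab.local from install-config.yaml below (addresses are hypothetical):

```
; hypothetical zone entries - adjust names and addresses to your environment
api.ocp4.lab.local.     IN A    192.168.1.10   ; API load balancer
*.apps.ocp4.lab.local.  IN A    192.168.1.11   ; wildcard for application routes
```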

Create install-config.yaml

apiVersion: v1
baseDomain: lab.local
compute:
- hyperthreading: Enabled
  name: worker
  replicas: 0
controlPlane:
  hyperthreading: Enabled
  name: master
  replicas: 3
metadata:
  name: ocp4
platform:
  vsphere:
    vcenter: vcenter.lab.local
    username: [email protected]
    password: supersecretpassword
    datacenter: DC1
    defaultDatastore: AZ1
fips: false
pullSecret: 'YOUR_PULL_SECRET'

Init Cluster

## Creating Installer Configuration File
$ ./openshift-install create install-config --dir=ocp46 --log-level=debug

## Generating Kubernetes Manifests
$ ./openshift-install create manifests --dir=ocp46 --log-level=debug

## Make master nodes schedulable
## edit ocp46/manifests/cluster-scheduler-02-config.yml and set mastersSchedulable to true
$ cat manifests/cluster-scheduler-02-config.yml
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  creationTimestamp: null
  name: cluster
spec:
  mastersSchedulable: true
  policy:
    name: ""
status: {}

## Generating Ignition Configuration Files
$ ./openshift-install create ignition-configs --dir=ocp46 --log-level=debug

## Create the cluster
$ ./openshift-install create cluster --dir=ocp46 --log-level=debug

## NOTE: In a pre-existing infrastructure installation, you cannot use the openshift-install
## create cluster command to deploy the cluster because you have already installed the cluster nodes. 
## The bootstrap node installation triggers the cluster installation, so execute the following command 
## sequence to monitor the cluster installation:
[user@demo ~]$ openshift-install wait-for bootstrap-complete --dir=ocp46 --log-level=debug
[user@demo ~]$ openshift-install wait-for install-complete --dir=ocp46 --log-level=debug
## openshift-install wait-for commands do not trigger the cluster installation. 
## It is a recommended practice to use them to monitor the cluster installation.

Monitoring OpenShift Installations

$ export KUBECONFIG=ocp46/auth/kubeconfig
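Beyond exporting KUBECONFIG, a common way to watch progress is to poll the cluster operators until they all report Available. A small helper sketch (assumes oc is on PATH and the kubeconfig above):

```shell
## count cluster operators that are not yet Available (column 3 of
## "oc get clusteroperators"); prints a message if oc is missing
export KUBECONFIG=ocp46/auth/kubeconfig

pending_operators() {
  if ! command -v oc >/dev/null 2>&1; then
    echo "oc not installed"
    return 0
  fi
  oc get clusteroperators --no-headers | awk '$3 != "True"' | wc -l
}

pending_operators
```

A result of 0 means every operator reports Available.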

Deleting a Cluster

## destroy a cluster
$ ./openshift-install destroy cluster --dir=$HOME/ocp46-cluster


Deploy OpenShift Using OpenShift Installer (AWS)

(27 Jan 2021)

The installer configures the entire cloud infrastructure:

  • VMs
  • Load balancers
  • Storage
  • Networking
  • Other resources

Using OpenShift Installer

openshift-install create cluster --dir=$HOME/mycluster

The installer prompts for:

  • SSH public key
  • Platform: aws
  • Region: Default in AWS is us-east-1
  • Base domain: Public Route 53 domain, needs to exist prior to installation
  • Cluster name: Must be unique within AWS account
  • Pull secret: From Get Started with OpenShift as single-line JSON
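Since the pull secret must be pasted as single-line JSON, a quick sanity check before running the installer helps; a sketch using a hypothetical pull-secret.json and python3 as the JSON validator:

```shell
## check that a saved pull secret is valid, single-line JSON
check_pull_secret() {
  f="$1"
  [ -f "$f" ] || { echo "missing"; return 0; }
  if [ "$(wc -l < "$f")" -le 1 ] && python3 -m json.tool "$f" >/dev/null 2>&1; then
    echo "ok"
  else
    echo "not single-line JSON"
  fi
}

## illustration with a dummy secret written to /tmp
printf '{"auths":{"cloud.openshift.com":{"auth":"TOKEN"}}}' > /tmp/pull-secret.json
check_pull_secret /tmp/pull-secret.json
```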

Setup Bastion Host

  • Create a bastion node and login to the host.
$ ssh user@bastion-host
$ echo $GUID

Configure Bastion VM to Run OpenShift Installer

Install AWS CLI

# Download the latest AWS Command Line Interface
curl "" -o ""

# Install the AWS CLI into /bin/aws
./awscli-bundle/install -i /usr/local/aws -b /bin/aws

# Validate that the AWS CLI works
aws --version

# Clean up downloaded files
rm -rf /root/awscli-bundle /root/awscli-bundle.zip

Download OpenShift Installation Binary

tar zxvf openshift-install-linux-${OCP_VERSION}.tar.gz -C /usr/bin
rm -f openshift-install-linux-${OCP_VERSION}.tar.gz /usr/bin/README.md
chmod +x /usr/bin/openshift-install
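A quick way to confirm the extraction worked is to ask the binary for its version (`openshift-install version` is a standard subcommand):

```shell
## report the installer version, or a message if it is not on PATH
check_installer() {
  if command -v openshift-install >/dev/null 2>&1; then
    openshift-install version
  else
    echo "openshift-install not found"
  fi
}

check_installer
```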

Download and Install OC CLI

tar zxvf openshift-client-linux-${OCP_VERSION}.tar.gz -C /usr/bin
rm -f openshift-client-linux-${OCP_VERSION}.tar.gz /usr/bin/README.md
chmod +x /usr/bin/oc
  • Check that the OpenShift installer and CLI are in /usr/bin:
ls -l /usr/bin/{oc,openshift-install}

# Setup auto-completion
oc completion bash >/etc/bash_completion.d/openshift
  • logout from root

Configure AWS CLI Credential

export REGION=us-east-2

mkdir $HOME/.aws
cat << EOF >> $HOME/.aws/credentials
[default]
aws_access_key_id = ${AWSKEY}
aws_secret_access_key = ${AWSSECRETKEY}
region = $REGION
EOF

# test AWS Access
aws sts get-caller-identity
  • Open the Get Started with OpenShift page and select “Try it in the Cloud”, then select AWS
  • Choose “Installer-Provisioned Infrastructure”
  • Copy the Pull Secret (save it to a file for later use)

  • Create an ssh key pair
ssh-keygen -f ~/.ssh/cluster-${GUID}-key -N ''

Install OpenShift Container Platform

  • Installer will generate Ignition configs for the bootstrap, master, and worker machines.
  • The process for bootstrapping a cluster looks like the following:
    • The bootstrap machine boots and starts hosting the remote resources required for the master machines to boot.
    • The master machines fetch the remote resources from the bootstrap machine and finish booting.
    • The master machines use the bootstrap node to form an etcd cluster.
    • The bootstrap node starts a temporary Kubernetes control plane using the newly created etcd cluster.
    • The temporary control plane schedules the production control plane to the master machines.
    • The temporary control plane shuts down, yielding to the production control plane.
    • The bootstrap node injects OpenShift-specific components into the newly formed control plane.
    • The installer then tears down the bootstrap node.
  • The result of this bootstrapping process is a fully running OpenShift cluster. The cluster will then download and configure the remaining components needed for day-to-day operation, including the creation of worker machines on supported platforms.

Run OpenShift Installer

Run the OpenShift installer and answer the prompts:

  • Select your Public Key (which you have created earlier)
  • Select aws as the Platform.
  • Select any Region near you.
  • Select your Base Domain.
  • For the Cluster Name, type cluster-101 (or any other name)
  • When prompted, paste the contents of your Pull Secret in JSON format. Do not include any spaces or whitespace characters, and make sure it is on one line
$ openshift-install create cluster --dir $HOME/cluster-101

# Sample answers
? SSH Public Key /home/user/.ssh/
? Platform aws
INFO Credentials loaded from the "default" profile in file "/home/user/.aws/credentials"
? Region us-east-2 (Ohio)
? Base Domain
? Cluster Name cluster-101
? Pull Secret [? for help] ***************************************************************************************************************************************************************

# wait for installer to finish

INFO Creating infrastructure resources...
INFO Waiting up to 20m0s for the Kubernetes API at 
INFO API v1.19.0+9f84db3 up
INFO Waiting up to 30m0s for bootstrapping to complete...
INFO Destroying the bootstrap resources...        
INFO Waiting up to 40m0s for the cluster to initialize... 
INFO Waiting up to 10m0s for the openshift-console route to be created... 
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/username/cluster-d0ce/auth/kubeconfig'
INFO Access the OpenShift web-console here:
INFO Login to the console with user: "kubeadmin", and password: "YOUR_INITIAL_PASSWORD" 
INFO Time elapsed: 37m56s
  • Make note of the following items from the output of the install command:
    • The location of the kubeconfig file, which is required for setting the KUBECONFIG environment variable and, as suggested, sets the OpenShift user ID to system:admin.
    • The kubeadmin user ID and associated password.
    • The password for the kubeadmin user is also written into the auth/kubeadmin-password file.
    • The URL of the web console and the credentials (again) to log in to it.
  • Refer ${HOME}/cluster-101/.openshift_install.log for logs and troubleshooting.

Multi-Step Installation

# Create the installation configuration:
openshift-install create install-config --dir $HOME/cluster-${GUID}

# Update the generated install-config.yaml file, for example, change the AWS EC2 instance types.

# Create the Kubernetes manifests:
openshift-install create manifests --dir $HOME/cluster-${GUID}
# Changing the manifests is unsupported.

# Create the Ignition configuration files:
openshift-install create ignition-configs --dir $HOME/cluster-${GUID}
# Changing the Ignition configuration files is unsupported.

# Install the cluster:
openshift-install create cluster --dir $HOME/cluster-${GUID}

# To delete the cluster, use:
openshift-install destroy cluster --dir $HOME/cluster-${GUID}

Clean Up Cluster

openshift-install destroy cluster --dir $HOME/cluster-${GUID}

# Delete all of the files created by the OpenShift installer:
rm -rf $HOME/.kube
rm -rf $HOME/cluster-${GUID}

Validate Cluster

# Setup CLI
export KUBECONFIG=$HOME/cluster-${GUID}/auth/kubeconfig
echo "export KUBECONFIG=$HOME/cluster-${GUID}/auth/kubeconfig" >>$HOME/.bashrc

$ oc whoami

# get console and login with kubeadmin & password from installation log
$ oc whoami --show-console

OpenShift Installation on Red Hat Virtualization (RHV)/oVirt

Refer to the same steps as in the VMware setup.



Required DNS records:

  1. API DNS - eg: api.ocp46.ocp4.lab.local
  2. Apps Wildcard - eg: *.apps.ocp46.ocp4.lab.local

Troubleshooting OpenShift Installation

Installing Red Hat Advanced Cluster Management (ACM) for Kubernetes

Setup environment for the ACM Installation

  • Setup OpenShift cluster and verify
$ oc get machinesets -n openshift-machine-api
## production use requires larger instances, but for a demo we will skip this part

Create a new OpenShift Project/Namespace for ACM

$ oc new-project open-cluster-management

Create an image-pull secret

$ oc create secret docker-registry YOUR_SECRET_NAME \
  --docker-server=registry.redhat.io \
  --docker-username=YOUR_REDHAT_USERNAME \
  --docker-password=YOUR_REDHAT_PASSWORD

## example
$ oc create secret docker-registry image-pull-secret \
  --docker-server=registry.redhat.io \
  --docker-username=[email protected] \
  --docker-password='PASSWORD'

Install ACM and subscribe to the ACM Operator group

## create OperatorGroup - acm-operator.yaml
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: acm-operator
  namespace: open-cluster-management
spec:
  targetNamespaces:
  - open-cluster-management

## create it
$ oc apply -f acm-operator.yaml

## Create ACM Subscription - acm-subscription.yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: acm-operator-subscription
  namespace: open-cluster-management
spec:
  sourceNamespace: openshift-marketplace
  source: redhat-operators
  channel: release-1.0
  installPlanApproval: Automatic
  name: advanced-cluster-management

## create it
$ oc apply -f acm-subscription.yaml
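After applying the subscription, the operator install can be verified by checking that its ClusterServiceVersion reaches the Succeeded phase; a sketch (assumes oc on PATH and the namespace created above):

```shell
## list CSVs in the ACM namespace; prints a message if oc is missing
check_acm_csv() {
  if ! command -v oc >/dev/null 2>&1; then
    echo "oc not installed"
    return 0
  fi
  oc get csv -n open-cluster-management
}

check_acm_csv
```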

Install ACM and subscribe using the OpenShift web console

  • Open OpenShift Web Console
  • OperatorHub -> Search advanced cluster -> Find Advanced Cluster Management for Kubernetes
  • Select Project, Version and do install
  • Wait for Operator Installation to be completed

Create the MultiClusterHub resource

## Create the MultiClusterHub from the CLI
## create the file - multicluster-acm.yaml
apiVersion: operator.open-cluster-management.io/v1
kind: MultiClusterHub
metadata:
  name: multiclusterhub
  namespace: open-cluster-management
spec:
  imagePullSecret: YOUR_SECRET_NAME

## create it
$ oc apply -f multicluster-acm.yaml

From the Console -> Select installed operator -> Select Advanced Cluster Management for Kubernetes -> Go to MultiClusterHub -> Create new

Verify the ACM installation

Check events in Advanced Cluster Management for Kubernetes, then check the Route and access it.


