
OpenShift Installation Methods - Examples


Reference:

  • Installing an OKD 4.5 Cluster
  • Install OpenShift 4.x
  • How to install OpenShift 4 on Bare Metal - (UPI) (Video)

Method I - Setup an OpenShift All-In-One

Install required packages

yum install ansible docker wget -y
systemctl enable docker
systemctl start docker

Disable Firewall

Alternatively, keep firewalld running and open the required ports (see the sketch after the commands below).

systemctl disable firewalld
systemctl stop firewalld
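If you prefer not to disable firewalld, here is a minimal sketch of opening ports instead; the port list is an assumption based on typical oc cluster up defaults (8443 for the API/web console, 53 and 8053 for DNS, 80/443 for application routes):

firewall-cmd --permanent --add-port=8443/tcp
firewall-cmd --permanent --add-port=53/udp
firewall-cmd --permanent --add-port=8053/udp
firewall-cmd --permanent --add-port=80/tcp
firewall-cmd --permanent --add-port=443/tcp
firewall-cmd --reload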

Installing OpenShift CLI

Method 1 - Standard CentOS repositories

yum -y install centos-release-openshift-origin39
yum -y install origin-clients

Method 2 - Download and extract openshift origin

Create a directory for data (anywhere)

mkdir /data
cd /data

Check for the latest release version if needed.

wget "https://github.com/openshift/origin/releases/download/v3.11.0/openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit.tar.gz"
tar -xzvf openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit.tar.gz 
cd openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit

Add the directory you extracted the release into to your PATH:

export PATH=$PATH:/data/openshift-origin-client-tools-v3.11.0-0cbc58b-linux-64bit/

# or

export PATH=$PATH:`pwd`

Configure the Docker daemon with an insecure registry parameter of 172.30.0.0/16

cat > /etc/docker/daemon.json <<DELIM
{
   "insecure-registries": [
     "172.30.0.0/16"
   ]
}
DELIM

Restart docker service

service docker restart
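To confirm the daemon picked up the insecure registry setting, a quick hedged check against the standard docker info output:

docker info | grep -A 2 "Insecure Registries"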

Initiate cluster

oc cluster up --base-dir="/data/clusterup" --public-hostname=<IP>
  • --base-dir=BASE_DIR : Directory on Docker host for cluster up configuration

Example: oc cluster up --public-hostname=35.239.51.76 --routing-suffix=35.239.51.76.xip.io. The master kubeconfig is written under openshift.local.clusterup/openshift-controller-manager/openshift-master.kubeconfig.
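Once the cluster is up, a quick hedged health check (standard oc 3.11 client commands; the all-in-one runs a single node):

oc cluster status
oc login -u system:admin
oc get nodes
oc get pods -n default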

Ref:

  • https://github.com/openshift/origin/blob/release-3.11/docs/cluster_up_down.md
  • https://medium.com/@fabiojose/working-with-oc-cluster-up-a052339ea219

Add a user

# oc create user redhat
user.user.openshift.io/redhat created
# oc adm policy add-cluster-role-to-user cluster-admin redhat
cluster role "cluster-admin" added: "redhat"

Method II - Setup minishift

Setup Virtual Environment

Install minishift

  • Download and manually install minishift.
  • On macOS: brew cask install minishift (if the install fails, try export HOMEBREW_NO_ENV_FILTERING=1 first)

Start minishift cluster

minishift start --vm-driver virtualbox

## set VirtualBox as the default vm-driver permanently
minishift config set vm-driver virtualbox

Once started, access the console using the URL shown or open it in a browser:

minishift console

Setup oc access

$ minishift oc-env
export PATH="/home/john/.minishift/cache/oc/v1.5.0:$PATH"

## Run this command to configure your shell:
# eval $(minishift oc-env)
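A few common minishift lifecycle commands for later (standard minishift subcommands):

## check VM and cluster state
minishift status
## stop the VM, keeping the cluster state
minishift stop
## delete the VM and all cluster data
minishift delete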

Method III - OpenShift 4 - OKD - All in One Quick Cluster

https://github.com/openshift/okd/releases

Method IV - OpenShift Full Cluster

CodeReady Containers - CRC (OpenShift 4.x)

Red Hat CodeReady Containers


Download CRC Package

Refer : Install OpenShift on a laptop with CodeReady Containers

  • Visit https://www.openshift.com/try
  • Choose install on Laptop -> https://cloud.redhat.com/openshift/install/crc/installer-provisioned
  • Download the package for your OS (Windows 10, macOS, Linux)
  • Move the package to a folder on your machine
  • Extract the package: tar -xf FILENAME.tar.xz
  • Download and keep the pull secret from the same page. You will need it later during crc start.

Required software packages

CodeReady Containers requires the libvirt and NetworkManager packages.

yum install -y NetworkManager libvirt
wget https://mirror.openshift.com/pub/openshift-v4/clients/crc/latest/crc-linux-amd64.tar.xz

Enable sudo for user

[devops@host ~]$ sudo cat /etc/sudoers.d/devops
[sudo] password for devops:
devops ALL=(ALL) NOPASSWD: ALL
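A minimal sketch of creating that sudoers entry (the file and user name devops are taken from the output above):

$ echo 'devops ALL=(ALL) NOPASSWD: ALL' | sudo tee /etc/sudoers.d/devops
$ sudo chmod 0440 /etc/sudoers.d/devops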

Setup Cluster

$ cd crc-linux-1.11.0-amd64
$ export PATH=/home/devops/crc-linux-1.11.0-amd64:$PATH

## set up your host operating system for the CodeReady Containers virtual machine.
## Use normal user account
$ crc setup

## Start cluster
## If you are on a terminal without a GUI, copying and pasting the pull secret content is difficult; in that case, pass the pull secret file path with -p.
$ crc start -p /path-to/pull-secret

Access your Cluster

$ eval $(crc oc-env)
$ oc login -u developer -p developer

## see credentials
$ crc console --credentials
To login as a regular user, run 'oc login -u developer -p developer https://api.crc.testing:6443'.
To login as an admin, run 'oc login -u kubeadmin -p 8rynV-SeYLc-h8Ij7-YPYcz https://api.crc.testing:6443'

## Access Console
$ crc console
Opening the OpenShift Web Console in the default browser...

Troubleshooting

  • Issue: After crc start and crc console, oc login fails for a while with "Internal error occurred: unexpected response: 503" (crc issue #740)

  • Issue : Unable to connect to console

$ crc ip
$ nmcli conn show
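A few more hedged checks that usually help when the console is unreachable (standard crc and oc commands):

$ crc status
$ crc console --url
## after 'eval $(crc oc-env)' and logging in, check cluster operators
$ oc get co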

OpenShift 4.x

https://github.com/openshift/okd

Create clouds.yaml

clouds:
  ocp4-dev:
    auth:
      auth_url: http://10.6.1.209:35357/
      project_name: ocp4-dev
      username: ocpadmin
      password: ocpadmin
    region_name: RegionOne
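If the python-openstackclient is installed, a quick hedged check that the credentials in clouds.yaml work (the cloud name ocp4-dev comes from the file above):

$ export OS_CLOUD=ocp4-dev
$ openstack token issue
$ openstack network list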

OpenShift 4.2 Installation

https://docs.openshift.com/container-platform/4.2/installing/installing_bare_metal/installing-bare-metal.html

Red Hat OpenShift 4.x Installation (Evaluation)

https://access.redhat.com/documentation/en-us/openshift_container_platform/4.1/html/installing/index

Baremetal Installation

  • https://access.redhat.com/documentation/en-us/openshift_container_platform/4.1/html/installing/installing-on-bare-metal
  • https://docs.openshift.com/container-platform/4.1/installing/installing_bare_metal/installing-bare-metal.html
  • https://blog.openshift.com/openshift-4-bare-metal-install-quickstart/

OpenShift on Baremetal (UPI)

OpenShift All-In-One - OKD Using Ansible

Ref:

  • https://github.com/Gepardec/ansible-role-okd
  • https://galaxy.ansible.com/gepardec/okd
  • https://computingforgeeks.com/setup-openshift-origin-local-cluster-on-centos/

https://blog.openshift.com/revamped-openshift-all-in-one-aio-for-labs-and-fun/

Install OpenShift 4.x Cluster on VMWare

https://labs.consol.de/container/platform/openshift/2020/01/31/ocp43-installation-vmware.html

Install Pre-Req

$ sudo yum install wget git vim

Download VMWare root CA certificates to System trust

$ wget https://vcenter.lab.local/certs/download.zip
$ unzip download.zip
$ sudo cp certs/lin/* /etc/pki/ca-trust/source/anchors/
$ sudo update-ca-trust extract

Ref : Adding vCenter root CA certificates to your system trust

Install Terraform

$ sudo yum install -y yum-utils
$ sudo yum-config-manager --add-repo https://rpm.releases.hashicorp.com/RHEL/hashicorp.repo
$ sudo yum -y install terraform

Refer Terraform Installation Methods

Add VMWare Credential

$ cat .config/ocp/vsphere.yaml
vsphere-user: administrator@vsphere.local
vsphere-password: "123!"
vsphere-server: 192.168.1.100
vsphere-dc: DC1
vsphere-cluster: AZ1

Configure DNS

https://blog.ktz.me/configure-unbound-dns-for-openshift-4/
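Once DNS is configured, a quick sanity check of the records the installer expects (hostnames assume the cluster name ocp4 and base domain lab.local used in install-config.yaml below):

$ dig +short api.ocp4.lab.local
## any name under the *.apps wildcard should resolve
$ dig +short test.apps.ocp4.lab.local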

Create install-config.yaml

apiVersion: v1
baseDomain: lab.local
compute:
- hyperthreading: Enabled
  name: worker
  replicas: 0
controlPlane:
  hyperthreading: Enabled
  name: master
  replicas: 3
metadata:
  name: ocp4
platform:
  vsphere:
    vcenter: 10.6.1.198
    username: administrator@vsphere.local
    password: supersecretpassword
    datacenter: DC1
    defaultDatastore: AZ1
fips: false 
pullSecret: 'YOUR_PULL_SECRET'
sshKey: 'YOUR_SSH_PUBKEY'
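Note that openshift-install consumes install-config.yaml when generating manifests, so keep a copy (a small hedged convenience step; the ocp46 directory name matches the commands below):

$ mkdir -p ocp46
$ cp install-config.yaml ocp46/install-config.yaml
$ cp install-config.yaml install-config.yaml.bak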

Init Cluster

## Creating Installer Configuration File
$ ./openshift-install create install-config --dir=ocp46 --log-level=debug

## Generating Kubernetes Manifests
$ ./openshift-install create manifests --dir=ocp46 --log-level=debug

## Make master nodes schedulable
## edit ocp46/manifests/cluster-scheduler-02-config.yml and set mastersSchedulable: true
$ cat manifests/cluster-scheduler-02-config.yml
apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  creationTimestamp: null
  name: cluster
spec:
  mastersSchedulable: true
  policy:
    name: ""
status: {}
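## (optional, hedged) instead of editing the file by hand, the same change with sed:
$ sed -i 's/mastersSchedulable: false/mastersSchedulable: true/' ocp46/manifests/cluster-scheduler-02-config.yml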

## Generating Ignition Configuration Files
$ ./openshift-install create ignition-configs --dir=ocp46 --log-level=debug

## Create the cluster
$ ./openshift-install create cluster --dir=ocp46 --log-level=debug

## NOTE: In a pre-existing infrastructure (UPI) installation, you cannot use the openshift-install
## create cluster command to deploy the cluster because you have already installed the cluster nodes.
## The bootstrap node installation triggers the cluster installation, so execute the following command
## sequence to monitor the cluster installation:
$ openshift-install wait-for bootstrap-complete --dir=ocp46 --log-level=debug
$ openshift-install wait-for install-complete --dir=ocp46 --log-level=debug
## openshift-install wait-for commands do not trigger the cluster installation. 
## It is a recommended practice to use them to monitor the cluster installation.

Monitoring OpenShift Installations

$ export KUBECONFIG=ocp46/auth/kubeconfig
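With KUBECONFIG set, a few hedged commands that are useful while the installation progresses (standard oc calls; the CSR approval step applies to UPI installs where nodes join manually):

$ oc get nodes
$ oc get clusteroperators

## approve any pending node CSRs so new nodes can join
$ oc get csr
$ oc adm certificate approve <csr-name>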

Deleting a Cluster

## destroy a cluster
$ ./openshift-install destroy cluster --dir=$HOME/ocp46-cluster

Baremetal

https://docs.openshift.com/container-platform/4.3/installing/installing_bare_metal/installing-bare-metal.html#cluster-entitlements_installing-bare-metal

Deploy OpenShift Using OpenShift Installer (AWS)

(27 Jan 2021)

The installer configures the entire cloud infrastructure:

  • VMs
  • Load balancers
  • Storage
  • Networking
  • Other resources

Using OpenShift Installer

openshift-install create cluster --dir=$HOME/mycluster

Installer prompts for

  • SSH public key
  • Platform: aws
  • Region: Default in AWS is us-east-1
  • Base domain: Public route 53 domain, needs to exist prior to installation
  • Cluster name: Must be unique within AWS account
  • Pull secret: From Get Started with OpenShift as single-line JSON

Setup Bastion Host

  • Create a bastion node and log in to the host.
$ ssh <user>@<bastion-host>
$ echo $GUID
d0ce

Configure Bastion VM to Run OpenShift Installer

Install AWS CLI

# Download the latest AWS Command Line Interface
curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
unzip awscli-bundle.zip

# Install the AWS CLI into /usr/local/aws with the aws binary symlinked at /bin/aws
./awscli-bundle/install -i /usr/local/aws -b /bin/aws

# Validate that the AWS CLI works
aws --version

# Clean up downloaded files
rm -rf /root/awscli-bundle /root/awscli-bundle.zip

Download OpenShift Installation Binary

OCP_VERSION=4.6.4
wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/${OCP_VERSION}/openshift-install-linux-${OCP_VERSION}.tar.gz
tar zxvf openshift-install-linux-${OCP_VERSION}.tar.gz -C /usr/bin
rm -f openshift-install-linux-${OCP_VERSION}.tar.gz /usr/bin/README.md
chmod +x /usr/bin/openshift-install

Download and Install OC CLI

wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/${OCP_VERSION}/openshift-client-linux-${OCP_VERSION}.tar.gz
tar zxvf openshift-client-linux-${OCP_VERSION}.tar.gz -C /usr/bin
rm -f openshift-client-linux-${OCP_VERSION}.tar.gz /usr/bin/README.md
chmod +x /usr/bin/oc
  • Check that the OpenShift installer and CLI are in /usr/bin:
ls -l /usr/bin/{oc,openshift-install}

# Setup auto-completion
oc completion bash >/etc/bash_completion.d/openshift
  • Log out from root

Configure AWS CLI Credential

export AWSKEY=<YOURACCESSKEY>
export AWSSECRETKEY=<YOURSECRETKEY>
export REGION=us-east-2

mkdir $HOME/.aws
cat << EOF >>  $HOME/.aws/credentials
[default]
aws_access_key_id = ${AWSKEY}
aws_secret_access_key = ${AWSSECRETKEY}
region = $REGION
EOF

# test AWS Access
aws sts get-caller-identity
  • Open https://www.openshift.com/try and select “Try it in the Cloud”, then select AWS
  • Choose “Installer-Provisioned Infrastructure”
  • Copy the pull secret (save it to a file for later use)

  • Create an ssh key pair
ssh-keygen -f ~/.ssh/cluster-${GUID}-key -N ''

Install OpenShift Container Platform

  • Installer will generate Ignition configs for the bootstrap, master, and worker machines.
  • The process for bootstrapping a cluster looks like the following:
    • The bootstrap machine boots and starts hosting the remote resources required for the master machines to boot.
    • The master machines fetch the remote resources from the bootstrap machine and finish booting.
    • The master machines use the bootstrap node to form an etcd cluster.
    • The bootstrap node starts a temporary Kubernetes control plane using the newly created etcd cluster.
    • The temporary control plane schedules the production control plane to the master machines.
    • The temporary control plane shuts down, yielding to the production control plane.
    • The bootstrap node injects OpenShift-specific components into the newly formed control plane.
    • The installer then tears down the bootstrap node.
  • The result of this bootstrapping process is a fully running OpenShift cluster. The cluster will then download and configure the remaining components needed for day-to-day operation, including the creation of worker machines on supported platforms.
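To watch this bootstrap sequence from the bastion, you can follow the installer log or wait on the bootstrap phase explicitly (a hedged sketch; the cluster-101 directory name matches the install command used below):

# follow the installer log from another terminal
tail -f $HOME/cluster-101/.openshift_install.log

# or block until the bootstrap phase finishes
openshift-install wait-for bootstrap-complete --dir $HOME/cluster-101 --log-level=info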

Run OpenShift Installer

Run the OpenShift installer and answer the prompts:

  • Select your Public Key (which you have created earlier)
  • Select aws as the Platform.
  • Select any Region near you.
  • Select cluster.yourdomain.com as the Base Domain.
  • For the Cluster Name, type cluster-101 (or any other name)
  • When prompted, paste the contents of your Pull Secret in JSON format. Do not include any whitespace characters and make sure it is on one line.
$ openshift-install create cluster --dir $HOME/cluster-101

# Sample answers
? SSH Public Key /home/user/.ssh/cluster-101-key.pub
? Platform aws
INFO Credentials loaded from the "default" profile in file "/home/user/.aws/credentials"
? Region us-east-2 (Ohio)
? Base Domain cluster.yourdomain.com
? Cluster Name cluster-101
? Pull Secret [? for help] ***************************************************************************************************************************************************************

# wait for installer to finish

*********************************************
INFO Creating infrastructure resources...
INFO Waiting up to 20m0s for the Kubernetes API at https://api.cluster-d0ce.d0ce.sandbox1072.opentlc.com:6443... 
INFO API v1.19.0+9f84db3 up
INFO Waiting up to 30m0s for bootstrapping to complete...
INFO Destroying the bootstrap resources...        
INFO Waiting up to 40m0s for the cluster at https://api.cluster-d0ce.d0ce.sandbox1072.opentlc.com:6443 to initialize... 
INFO Waiting up to 10m0s for the openshift-console route to be created... 
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/gineesh.madapparambath-fujitsu.c/cluster-d0ce/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.cluster-d0ce.d0ce.sandbox1072.opentlc.com
INFO Login to the console with user: "kubeadmin", and password: "6TVYn-33gLY-7ZjrA-eqRQE" 
INFO Time elapsed: 37m56s
  • Make note of the following items from the output of the install command:
    • The location of the kubeconfig file, which is needed to set the KUBECONFIG environment variable; using it logs you in as the system:admin user.
    • The kubeadmin user ID and its password (6TVYn-33gLY-7ZjrA-eqRQE in the example).
    • The password for the kubeadmin user is also written to the auth/kubeadmin-password file.
    • The URL of the web console (https://console-openshift-console.apps.cluster-d0ce.d0ce.sandbox1072.opentlc.com in the example) and the credentials (again) to log in to the web console.
  • Refer to ${HOME}/cluster-101/.openshift_install.log for logs and troubleshooting.

Multi-Step Installation

# Create the installation configuration:
openshift-install create install-config --dir $HOME/cluster-${GUID}

# Update the generated install-config.yaml file as needed, for example to change the AWS EC2 instance types.

# Create the YAML manifests:
openshift-install create manifests --dir $HOME/cluster-${GUID}
# Changing the generated manifests is unsupported.

# Create the Ignition configuration files:
openshift-install create ignition-configs --dir $HOME/cluster-${GUID}
# Changing the Ignition configuration files is unsupported.

# Install the cluster:
openshift-install create cluster --dir $HOME/cluster-${GUID}

# To delete the cluster, use:
openshift-install destroy cluster --dir $HOME/cluster-${GUID}

Clean Up Cluster

openshift-install destroy cluster --dir $HOME/cluster-${GUID}

# Delete all of the files created by the OpenShift installer:
rm -rf $HOME/.kube
rm -rf $HOME/cluster-${GUID}

Validate Cluster

# Setup CLI
export KUBECONFIG=$HOME/cluster-${GUID}/auth/kubeconfig
echo "export KUBECONFIG=$HOME/cluster-${GUID}/auth/kubeconfig" >>$HOME/.bashrc

$ oc whoami
system:admin

# get console and login with kubeadmin & password from installation log
$ oc whoami --show-console
https://console-openshift-console.apps.cluster-d0ce.d0ce.sandbox1072.opentlc.com
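A couple of extra hedged checks to confirm the cluster is fully up:

$ oc get nodes
$ oc get clusterversion
$ oc get clusteroperators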

OpenShift Installation on Red Hat Virtualization (RHV)/oVirt

Refer to the same steps as in the VMware setup.

Requirements

DNS

  1. API DNS - eg: api.ocp46.ocp4.lab.local
  2. Apps Wildcard - eg: *.apps.ocp46.ocp4.lab.local

Troubleshooting OpenShift Installation

Installing Red Hat Advanced Cluster Management (ACM) for Kubernetes

Setup environment for the ACM Installation

  • Setup OpenShift cluster and verify
$ oc get machinesets -n openshift-machine-api
## You would normally use larger instances, but for this demo we will skip that (see the hedged example below).
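If you do want larger workers, a hedged sketch of patching an AWS machine set's instance type and scaling it (the machine set name and instance type are placeholders):

$ oc patch machineset <machineset-name> -n openshift-machine-api \
    --type merge -p '{"spec":{"template":{"spec":{"providerSpec":{"value":{"instanceType":"m5.2xlarge"}}}}}}'
$ oc scale machineset <machineset-name> -n openshift-machine-api --replicas=3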

Create a new OpenShift Project/Namespace for ACM

$ oc new-project open-cluster-management

Create an image-pull secret

$ oc create secret docker-registry YOUR_SECRET_NAME \
  --docker-server=registry.access.redhat.com/rhacm1-tech-preview \
  --docker-username=YOUR_REDHAT_USERNAME \
  --docker-password=YOUR_REDHAT_PASSWORD
$ oc create secret docker-registry image-pull-secret --docker-server=registry.access.redhat.com/rhacm1-tech-preview --docker-username=YOUR_REDHAT_USERNAME --docker-password='YOUR_REDHAT_PASSWORD'

Install ACM and subscribe to the ACM Operator group

## create OperatorGroup - acm-operator.yaml
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: acm-operator
spec:
  targetNamespaces:
  - open-cluster-management

## create it
$ oc apply -f acm-operator.yaml

## Create ACM Subscription - acm-subscription.yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: acm-operator-subscription
spec:
  sourceNamespace: openshift-marketplace
  source: redhat-operators
  channel: release-1.0
  installPlanApproval: Automatic
  name: advanced-cluster-management

## create it
$ oc apply -f acm-subscription.yaml
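To confirm the subscription resolved and the operator installed (hedged checks using standard OLM resources):

$ oc get subscription,operatorgroup -n open-cluster-management
## the ClusterServiceVersion should reach the 'Succeeded' phase
$ oc get csv -n open-cluster-management
$ oc get pods -n open-cluster-management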

Install ACM and subscribe using the OpenShift web console

  • Open OpenShift Web Console
  • OperatorHub -> Search advanced cluster -> Find Advanced Cluster Management for Kubernetes
  • Select Project, Version and do install
  • Wait for Operator Installation to be completed

Create the MultiClusterHub resource

## Create the MultiClusterHub from the CLI
## create the file
apiVersion: operators.open-cluster-management.io/v1beta1
kind: MultiClusterHub
metadata:
  name: multiclusterhub
  namespace: open-cluster-management
spec:
  imagePullSecret: YOUR_SECRET_NAME

## create it
$ oc apply -f multicluster-acm.yaml

From the console -> Installed Operators -> Advanced Cluster Management for Kubernetes -> go to MultiClusterHub -> Create New

Verify the ACM installation

Check the events in Advanced Cluster Management for Kubernetes, then check the route and access it (see the CLI checks below).
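The same verification from the CLI (hedged commands; names follow the resources created above):

$ oc get multiclusterhub -n open-cluster-management
$ oc get pods -n open-cluster-management
## the console route exposes the ACM web UI
$ oc get routes -n open-cluster-management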
