OpenShift4: vSphere + Static IP

There are many ways to install OCP4. One of the most common asks is how to install OCP4 with static IP addresses in a vSphere environment. This is a use case I wanted to test out, and I hope to share my lessons learned.

Environment:

  • vSphere 6.7 Update 2
  • Install run from macOS Mojave 10.14.5

Requirements:

  • No DHCP server
  • Need to use static IP addresses

Problems I had:

Error #1: Dracut: FATAL: Sorry, ‘ip=dhcp’ does not make sense for multiple interface configurations.

[Screenshot: dracut error]

Cause:

This happened when I tried to override the IP address by setting the kernel parameter ip=<ip>::<gateway>:<netmask>:<FQDN>:<interface>:none on a VM cloned from the OVA.

Solution:

Set the ip= parameter at boot time when installing from the rhcos-install ISO instead of cloning from the OVA, so the setting is in place before the initramfs runs.

Here are the steps to create a custom ISO with the parameters baked in. You can use the downloaded ISO as-is, but that means a lot of typing at the boot prompt, so the following steps are very useful when creating many VMs from the ISO.

sudo mount rhcos-410.8.20190425.1-installer.iso /mnt/
mkdir /tmp/rhcos
rsync -a /mnt/* /tmp/rhcos/
cd /tmp/rhcos
vi isolinux/isolinux.cfg
  • Modify the boot entry at the end of the file to look similar to this:
label linux
  menu label ^Install RHEL CoreOS
  kernel /images/vmlinuz
  append initrd=/images/initramfs.img nomodeset rd.neednet=1 coreos.inst=yes ip=192.168.1.124::192.168.1.1:255.255.255.0:bootstrap.ocp4.example.com:ens192:none nameserver=192.168.1.188 coreos.inst.install_dev=sda coreos.inst.image_url=http://192.168.1.231:8080/rhcos-4.1.0-x86_64-metal-bios.raw.gz coreos.inst.ignition_url=http://192.168.1.231:8080/static.ign

where:

ip=<ip address>::<gateway>:<netmask>:<hostname>:<interface>:none

nameserver=<DNS> 

coreos.inst.image_url=http://<webserver host:port>/rhcos-4.1.0-x86_64-metal-bios.raw.gz

coreos.inst.ignition_url=http://<webserver host:port>/<master or worker ignition>.ign 

  • Create new ISO as /tmp/rhcos_install.iso
sudo mkisofs -U -A "RHCOS-x86_64" -V "RHCOS-x86_64" -volset "RHCOS-x86_64" -J -joliet-long -r -v -T -x ./lost+found -o /tmp/rhcos_install.iso -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -eltorito-alt-boot -e images/efiboot.img -no-emul-boot .
  • Upload the custom ISO to the datastore for VM creation.
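
If you have govc installed, the upload can also be scripted instead of using the vSphere UI. A rough sketch (the datastore name and target path are placeholders from my environment; govc reads vCenter connection details from the GOVC_* environment variables):

# assumes GOVC_URL, GOVC_USERNAME, and GOVC_PASSWORD are exported for your vCenter
govc datastore.upload -ds datastore /tmp/rhcos_install.iso iso/rhcos_install.iso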

Error #2: No such host

[Screenshot: "no such host" error]

Cause:

Most likely the network was not set up correctly when the master or worker started.

Solution:

In my case, this was an issue when creating masters/workers from the OVA: the network configuration was not applied when RHCOS booted. Installing from the custom ISO with the ip= parameter avoided it.
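
A quick way to confirm is from the console of an affected node (assuming the interface is ens192, as in my environment):

# verify the static address and DNS were actually applied
ip addr show ens192
cat /etc/resolv.conf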

Error #3: Getting EOF from LB

[Screenshot: EOF error from the load balancer]

Cause:

Most likely DNS and web server configuration errors.

Solution:

Make sure all FQDNs resolve to the correct IPs and restart the related services.
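
For example, a quick spot check with dig; the hostnames below are from my example environment, and named/haproxy/httpd are the services from the reference [3] setup:

# the API, internal API, and apps wildcard records must all resolve
dig +short api.ocp4.example.com
dig +short api-int.ocp4.example.com
dig +short test.apps.ocp4.example.com
# restart the related services after fixing the records
sudo systemctl restart named haproxy httpd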

Error #4: X509 cert error

[Screenshot: x509 certificate error]

Cause:

In my case, the clocks on the servers were not in sync, and I also had to regenerate my SSH key.

Solution:

I set up NTP on the DNS and web servers and made sure the clocks were in sync across all machines. I also regenerated my SSH key and updated my install-config.yaml file.
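
A quick way to verify time sync on each helper machine (assuming chrony, the RHEL default):

# both should report the clock as synchronized
chronyc tracking
timedatectl status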

Prerequisites:

The following components are required in my setup: DNS, load balancer, web server, and NTP. I used link [3] in the Reference section to set up the DNS, load balancer, and web server. I configured NTP on the DNS, web server, and load balancer, and made sure the time was configured correctly on my ESXi server as well. The filetranspiler is an awesome tool for manipulating ignition files; I used it throughout this test.

Preparing the infrastructure:

I started my installation with OCP 4 official documentation for vSphere (Reference [1] below).

  • SSH keygen

My example steps are captured here. Please use your own values.

ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/ocp4vsphere
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/ocp4vsphere
  • Download the OpenShift 4 installer
    • extract it
    • chmod +x openshift-install
    • mv it to the /usr/local/bin directory
wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-install-mac-4.1.7.tar.gz
  • Create install-config.yaml. Example as shown below.
apiVersion: v1
baseDomain: example.com 
compute:
- hyperthreading: Enabled   
  name: worker
  replicas: 0 
controlPlane:
  hyperthreading: Enabled   
  name: master
  replicas: 3 
metadata:
  name: ocp4
platform:
  vsphere:
    vcenter: <vCenter host>
    username: <administrator>
    password: <password>
    datacenter: dc
    defaultDatastore: datastore
pullSecret: '<your pull secret>' 
sshKey: '<your public ssh key>'
  • Create ignition files
openshift-install create ignition-configs --dir=<installation_directory>
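
For OCP 4.1 this produces the bootstrap, master, and worker ignition files plus the auth assets:

ls <installation_directory>
# auth/  bootstrap.ign  master.ign  metadata.json  worker.ign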
  • Prepare for creating bootstrap with hostname and the static IP
    • Download filetranspiler:
      • git clone https://github.com/ashcrow/filetranspiler
    • Copy <installation_directory>/bootstrap.ign to <filetranspile_directory>/
    • Create bootstrap hostname file:
      echo "bootstrap.ocp4.example.com" > hostname
    • Move the hostname file to <filetranspile_directory>/bootstrap/etc/
    • Create an ifcfg-ens192 file under

      <filetranspile_directory>/bootstrap/etc/sysconfig/network-scripts with the following content:

      NAME=ens192
      DEVICE=ens192
      TYPE=Ethernet
      BOOTPROTO=none
      ONBOOT=yes
      IPADDR=<bootstrap IP address>
      NETMASK=<netmask>
      GATEWAY=<gateway>
      DOMAIN=example.com
      DNS1=<dns>
      PREFIX=24
      DEFROUTE=yes
      IPV6INIT=no
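
      The fakeroot directory should now look like this:

      bootstrap
      └── etc
          ├── hostname
          └── sysconfig
              └── network-scripts
                  └── ifcfg-ens192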
    • Run this command to create the new bootstrap ignition file:
      cd <filetranspile_directory>
      ./filetranspile -i bootstrap.ign -f bootstrap -o bootstrap-static.ign
    • Upload bootstrap-static.ign to the webserver:
      scp bootstrap-static.ign user@<webserverip>:/var/www/html/bootstrap.ign
    • Create an append-bootstrap.ign. Example as shown below.
      {
        "ignition": {
          "config": {
            "append": [
              {
                "source": "http://<webserverip:port>/bootstrap.ign", 
                "verification": {}
              }
            ]
          },
          "timeouts": {},
          "version": "2.1.0"
        },
        "networkd": {},
        "passwd": {},
        "storage": {},
        "systemd": {}
      }
    • Encode the append-bootstrap.ign file.
      openssl base64 -A -in append-bootstrap.ign -out append-bootstrap.64
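      The encoded output is only needed if you boot the bootstrap VM from the OVA instead of the custom ISO: it goes into the VM's guestinfo.ignition.config.data property, with the encoding declared alongside. A sketch with govc, assuming the bootstrap VM already exists and govc is configured:

      govc vm.change -vm bootstrap \
        -e "guestinfo.ignition.config.data=$(cat append-bootstrap.64)" \
        -e "guestinfo.ignition.config.data.encoding=base64"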
    • Repeat the filetranspiler steps above for each master and worker, then upload master0-static.ign to the webserver:
      scp master0-static.ign user@<webserverip>:/var/www/html/master0.ign
      • Note that master0.ign is used in the coreos.inst.ignition_url kernel parameter when installing from the ISO, as shown below.
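        For example, the append line for master0 would look like this (the IP 192.168.1.125 is hypothetical; adjust it for each server):

        append initrd=/images/initramfs.img nomodeset rd.neednet=1 coreos.inst=yes ip=192.168.1.125::192.168.1.1:255.255.255.0:master0.ocp4.example.com:ens192:none nameserver=192.168.1.188 coreos.inst.install_dev=sda coreos.inst.image_url=http://192.168.1.231:8080/rhcos-4.1.0-x86_64-metal-bios.raw.gz coreos.inst.ignition_url=http://192.168.1.231:8080/master0.ign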
    • Create the VM from the custom ISO
      • Create a VM with 4 CPUs and 16 GB of RAM
      • Select the custom ISO
      • Add “disk.EnableUUID” = TRUE under VM Options -> Edit Configuration.
      • Power on the VM
      • Go to the VM console:
      • [Screenshot: RHCOS boot menu]
      • Hit <Tab>
      • [Screenshot: editable kernel parameters]
      • You can modify the parameters for each server here.
      • Hit <Enter>
      • The server will reboot after installation.
  • Repeat for all masters and workers.

Installation:

  • When you have all the VMs created, run the following command.
$ openshift-install --dir=ocp4 wait-for bootstrap-complete --log-level debug

DEBUG OpenShift Installer v4.1.7-201907171753-dirty 
DEBUG Built from commit 5175a461235612ac64d576aae09939764ac1845d 
INFO Waiting up to 30m0s for the Kubernetes API at https://api.ocp4.example.com:6443... 
INFO API v1.13.4+3a25c9b up                       
INFO Waiting up to 30m0s for bootstrapping to complete... 
DEBUG Bootstrap status: complete                  
INFO It is now safe to remove the bootstrap resources 
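
Once bootstrapping completes, the bootstrap VM can be shut down and removed from the load balancer backends, and the installer can wait for the rest of the installation to finish:

$ openshift-install --dir=ocp4 wait-for install-complete --log-level debug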

Verification:

  • Log in:
$ export KUBECONFIG=ocp4/auth/kubeconfig
$ oc whoami

$ oc get nodes
NAME                       STATUS   ROLES    AGE     VERSION
master0.ocp4.example.com   Ready    master   35m     v1.13.4+205da2b4a
master1.ocp4.example.com   Ready    master   35m     v1.13.4+205da2b4a
master2.ocp4.example.com   Ready    master   35m     v1.13.4+205da2b4a
worker0.ocp4.example.com   Ready    worker   20m     v1.13.4+205da2b4a
worker1.ocp4.example.com   Ready    worker   11m     v1.13.4+205da2b4a
worker2.ocp4.example.com   Ready    worker   5m25s   v1.13.4+205da2b4a
  • Validate that all CSRs are approved
$ oc get csr

NAME        AGE     REQUESTOR                                                                   CONDITION
csr-6vqqn   35m     system:node:master1.ocp4.example.com                                        Approved,Issued
csr-7hlkk   20m     system:node:worker0.ocp4.example.com                                        Approved,Issued
csr-9p6sw   11m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
csr-b4cst   35m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
csr-gx4dz   5m33s   system:node:worker2.ocp4.example.com                                        Approved,Issued
csr-kqcfv   11m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
csr-lh5zg   35m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
csr-m2hvl   35m     system:node:master0.ocp4.example.com                                        Approved,Issued
csr-npb4l   35m     system:node:master2.ocp4.example.com                                        Approved,Issued
csr-rdpgm   20m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
csr-s2d7z   11m     system:node:worker1.ocp4.example.com                                        Approved,Issued
csr-sx2r5   6m      system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
csr-tvgbq   35m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
csr-vvp2h   6m11s   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
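
If any CSRs show Pending instead, approve them before proceeding; a one-liner that approves all pending requests:

$ oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve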
  • Patch the image registry to use ephemeral storage (non-production environments only)
$ oc project openshift-image-registry
$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'
config.imageregistry.operator.openshift.io/cluster patched
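
You can watch the operator apply the change; the image-registry cluster operator should report Available shortly after the patch:

$ oc get clusteroperator image-registry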

Next step?

To improve the process, we need to automate this.

Reference:

[1] OpenShift 4 Official Installation Documentation for vSphere

[2] Using Static IP for OCP4 Installation Guide

[3] Setting Up Pre-requisites Guide

Knative on OCP 4.1 Test Run

Install OCP 4.1.2

This blog assumes that you went to try.openshift.com and created your OCP 4.1 IPI cluster. If you have not, go to try.openshift.com -> Get Started to set up an OCP 4.1 cluster.

Install Istio (Maistra 0.11)

Istio is required before installing Knative. However, the Knative operator will install the minimal Istio components if Istio is not installed on the platform. For my test, I installed Service Mesh on OCP 4.1 using the community version (Maistra). Here are my steps:

  • Install service mesh operator
oc new-project istio-operator
oc new-project istio-system
oc project istio-operator
oc apply -f https://raw.githubusercontent.com/Maistra/istio-operator/maistra-0.11/deploy/maistra-operator.yaml
  • Verify that the Service Mesh operator is up and running
#to get the name of the operator pod
oc get pods
#view the logs of the pod
oc logs <name of the pod from above step>

#log shown as below
{"level":"info","ts":1562602857.4691303,"logger":"kubebuilder.controller","caller":"controller/controller.go:153","msg":"Starting workers","Controller":"servicemeshcontrolplane-controller","WorkerCount":1}
  • Create a custom resource file, cr.yaml, with the content below.
apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
metadata:
  name: basic-install
spec:
  # NOTE, if you remove all children from an element, you should remove the
  # element too.  An empty element is interpreted as null and will override all
  # default values (i.e. no values will be specified for that element, not even
  # the defaults baked into the chart values.yaml).
  istio:
    global:
      proxy:
        # constrain resources for use in smaller environments
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 500m
            memory: 128Mi

    gateways:
      istio-egressgateway:
        # disable autoscaling for use in smaller environments
        autoscaleEnabled: false
      istio-ingressgateway:
        # disable autoscaling for use in smaller environments
        autoscaleEnabled: false
        # set to true to enable IOR
        ior_enabled: true

    mixer:
      policy:
        # disable autoscaling for use in smaller environments
        autoscaleEnabled: false

      telemetry:
        # disable autoscaling for use in smaller environments
        autoscaleEnabled: false
        # constrain resources for use in smaller environments
        resources:
          requests:
            cpu: 100m
            memory: 1G
          limits:
            cpu: 500m
            memory: 4G

    pilot:
      # disable autoscaling for use in smaller environments
      autoscaleEnabled: false
      # increase random sampling rate for development/testing
      traceSampling: 100.0

    kiali:
      # change to false to disable kiali
      enabled: true

      # to use oauth, remove the following 'dashboard' section (note, oauth is broken on OCP 4.0 with kiali 0.16.2)
      # create a secret for accessing kiali dashboard with the following credentials
      dashboard:
        user: admin
        passphrase: admin

    tracing:
      # change to false to disable tracing (i.e. jaeger)
      enabled: true
  • Install service mesh
oc project istio-system
oc create -f cr.yaml

#it will take a while to have all the pods up
watch 'oc get pods'
  • When the service mesh is available:
Every 2.0s: oc get pod -n istio-system                                                                                              Mon Jul  8 16:39:46 2019

NAME                                      READY   STATUS    RESTARTS   AGE
elasticsearch-0                           1/1     Running   0          13m
grafana-86dc5978b8-k2dvl                  1/1     Running   0          9m15s
ior-6656b5cfdb-cjt7z                      1/1     Running   0          9m55s
istio-citadel-7678d4749b-bjqq8            1/1     Running   0          14m
istio-egressgateway-66d8b969b8-wmcfm      1/1     Running   0          9m55s
istio-galley-7f57cd4c6c-6d2r8             1/1     Running   0          11m
istio-ingressgateway-7794d8d4fc-dd72g     1/1     Running   0          9m55s
istio-pilot-77d65868d4-68lzd              2/2     Running   0          10m
istio-policy-7486f4cb6c-fdw6q             2/2     Running   0          11m
istio-sidecar-injector-66d49c6865-clqzm   1/1     Running   0          9m39s
istio-telemetry-799557976b-9ljz4          2/2     Running   0          11m
jaeger-agent-b7bz8                        1/1     Running   0          13m
jaeger-agent-j4dnp                        1/1     Running   0          13m
jaeger-agent-xmwzz                        1/1     Running   0          13m
jaeger-collector-96756f879-n889z          1/1     Running   3          13m
jaeger-query-6f4456546c-mwjkk             1/1     Running   3          13m
kiali-c58c8476d-wzhj6                     1/1     Running   0          8m45s
prometheus-5cb5d7549b-lmjtk               1/1     Running   0          14m

Install Knative 0.6

  • Install Knative serving operator
    • Click Catalog -> OperatorHub -> search for “knative” keyword
    • Click “Knative Serving Operator”
    • Click “Install”

[Screenshot: Knative Serving Operator in OperatorHub]

  • Install Knative eventing operator
    • Click Catalog -> OperatorHub -> search for “knative” keyword
    • Click “Knative Eventing Operator”
    • Click “Install”

[Screenshot: Knative Eventing Operator in OperatorHub]

  • I also manually scaled up my nodes to prepare for the tutorial deployment:
    • Click Compute -> Machine Sets
    • Click the “3 dots” at the end of each machine set -> Edit Count -> enter 2

[Screenshot: editing the machine set count]

  • Validate the installation (check Installed Operators under the openshift-operators project)

[Screenshot: installed operators in the openshift-operators project]

Install Knative Client – kn

The Knative client CLI (kn) can list, create, delete, and update Knative services.

$ which kn
/usr/local/bin/kn
$ kn version
Version:      v20190625-13ff277
Build Date:   2019-06-25 09:52:20
Git Revision: 13ff277
Dependencies:
- serving:    

Let’s Have Some Fun

Knative serving via kn

For this example, I am using an image I already built and pushed to Docker Hub. Here are the steps to create a simple Knative service.

oc new-project knative-demo
oc adm policy add-scc-to-user anyuid -z default -n knative-demo
oc adm policy add-scc-to-user privileged -z default -n knative-demo
kn service create mysvc --image docker.io/piggyvenus/greeter:0.0.1

List Knative service

$ kn service list
NAME    DOMAIN                                                        GENERATION   AGE   CONDITIONS   READY   REASON
mysvc   mysvc.knative-demo.apps.cluster-6c33.sandbox661.opentlc.com   1            29s   3 OK / 3     True    

Execute the service

$ curl mysvc.knative-demo.apps.cluster-6c33.sandbox661.opentlc.com
Hi  greeter => '0bd7a995d27e' : 1

Knative serving via a YAML file

Creating a Knative service from a YAML file can also do the trick. An example is shown below.

apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: greeter
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: docker.io/piggyvenus/greeter:0.0.1
            livenessProbe:
              httpGet:
                path: /healthz
            readinessProbe:
              httpGet:
                path: /healthz

Create Knative service

oc apply -f service.yaml

Check out Knative service resources

oc get deployment
oc get pods
oc get services.serving.knative.dev
oc get configuration.serving.knative.dev
oc get routes.serving.knative.dev

Invoke Knative service

oc get routes.serving.knative.dev
curl mysvc.knative-demo.apps.cluster-6c33.sandbox661.opentlc.com
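
The greeter service created from this YAML gets its own domain following the <service>.<namespace>.<cluster domain> pattern, so on my example cluster it would be reachable like this (domain hypothetical):

curl greeter.knative-demo.apps.cluster-6c33.sandbox661.opentlc.com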

Please check out the Knative tutorial for more examples of Knative. I hope you find this blog useful.