Installing OpenShift using Temporary Credentials

One of the questions I am asked most frequently these days is how to install OpenShift on AWS with temporary credentials. The default OpenShift provisioning flow uses an AWS key and secret that carry Administrator privileges. "Temporary credentials" usually refers to the AWS Security Token Service (STS), which allows end users to assume an IAM role and receive short-lived credentials in return.
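For context, this is essentially the flow that the rest of this post automates. A minimal AWS CLI sketch, with a placeholder role ARN and session name; the call returns a short-lived access key, secret key, and session token:

# Assume an IAM role via STS to obtain short-lived credentials (placeholder values)
aws sts assume-role \
  --role-arn arn:aws:iam::123456789012:role/example-installer-role \
  --role-session-name openshift-install \
  --duration-seconds 3600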

Developers and platform teams need approval from their security team to access the company AWS account, and in some organizations it can be challenging to obtain Administrator privileges.

OpenShift 4.7 support for the AWS Security Token Service in manual mode is in Tech Preview. I decided to explore a little deeper; the exercise is based on information from both the OpenShift documentation and the upstream repos. I am recording the notes from my test run and hope you will find them helpful.

OpenShift 4 version

OCP 4.7.9

Build sts-preflight binary

git clone https://github.com/sjenning/sts-preflight.git
go get github.com/sjenning/sts-preflight
cd <sts-preflight directory>
go build .

Getting the AWS STS

As an AWS administrator, I found the sts-preflight tool helpful in this exercise. The documentation has the manual steps, but I chose to use the sts-preflight tool here.

  • Create STS infrastructure in AWS:
./sts-preflight  create --infra-name <sts infra name> --region <aws region>

# ./sts-preflight  create --infra-name sc-example --region us-west-1
2021/04/28 13:24:42 Generating RSA keypair
2021/04/28 13:24:56 Writing private key to _output/sa-signer
2021/04/28 13:24:56 Writing public key to _output/sa-signer.pub
2021/04/28 13:24:56 Copying signing key for use by installer
2021/04/28 13:24:56 Reading public key
2021/04/28 13:24:56 Writing JWKS to _output/keys.json
2021/04/28 13:24:57 Bucket sc-example-installer created
2021/04/28 13:24:57 OIDC discovery document at .well-known/openid-configuration updated
2021/04/28 13:24:57 JWKS at keys.json updated
2021/04/28 13:24:57 OIDC provider created arn:aws:iam::##########:oidc-provider/s3.us-west-1.amazonaws.com/sc-example-installer
2021/04/28 13:24:57 Role created arn:aws:iam::##########:role/sc-example-installer
2021/04/28 13:24:58 AdministratorAccess attached to Role sc-example-installer
  • Create an OIDC token:
# ./sts-preflight token
2021/04/28 13:27:06 Token written to _output/token
  • Get STS credential:
# ./sts-preflight assume
Run these commands to use the STS credentials
export AWS_ACCESS_KEY_ID=<temporary key>
export AWS_SECRET_ACCESS_KEY=<temporary secret>
export AWS_SESSION_TOKEN=<session token>
  • The above short-lived key, secret, and token can be given to the person who is installing OpenShift.
  • Export all the AWS environment variables before proceeding to installation.

Start the Installation

As a Developer or OpenShift Admin, you will receive the temporary credential information and export the AWS environment variables before installing the OCP cluster.

Extract the CredentialsRequest manifests from the release image and combine them into one file:
# oc adm release extract quay.io/openshift-release-dev/ocp-release:4.7.9-x86_64 --credentials-requests --cloud=aws --to=./credreqs ; cat ./credreqs/*.yaml > credreqs.yaml
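For reference, each extracted file contains a CredentialsRequest object. An abbreviated sketch of what one looks like, with the IAM actions trimmed for illustration:

apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: openshift-image-registry
  namespace: openshift-cloud-credential-operator
spec:
  secretRef:
    name: installer-cloud-credentials
    namespace: openshift-image-registry
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AWSProviderSpec
    statementEntries:
    - effect: Allow
      action:
      - s3:CreateBucket
      - s3:PutObject
      - s3:GetObject
      resource: '*'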
  • Create install-config.yaml for installation:
# ./openshift-install create install-config --dir=./sc-sts
? SSH Public Key /root/.ssh/id_rsa.pub
? Platform aws
INFO Credentials loaded from default AWS environment variables
? Region us-east-1
? Base Domain sc.ocp4demo.live
? Cluster Name sc-sts
? Pull Secret [? for help] 
INFO Install-Config created in: sc-sts
  • Make sure that we install the cluster in Manual mode:
# cd sc-sts
# echo "credentialsMode: Manual" >> install-config.yaml
  • Create install manifests:
# cd ..
# ./openshift-install create manifests --dir=./sc-sts
  • Use the sts-preflight tool to create the AWS resources. Make sure you are in the sts-preflight directory:
# ./sts-preflight create --infra-name sc-example --region us-west-1 --credentials-requests-to-roles ./credreqs.yaml
2021/04/28 13:45:34 Generating RSA keypair
2021/04/28 13:45:42 Writing private key to _output/sa-signer
2021/04/28 13:45:42 Writing public key to _output/sa-signer.pub
2021/04/28 13:45:42 Copying signing key for use by installer
2021/04/28 13:45:42 Reading public key
2021/04/28 13:45:42 Writing JWKS to _output/keys.json
2021/04/28 13:45:42 Bucket sc-example-installer already exists and is owned by us
2021/04/28 13:45:42 OIDC discovery document at .well-known/openid-configuration updated
2021/04/28 13:45:42 JWKS at keys.json updated
2021/04/28 13:45:43 Existing OIDC provider found arn:aws:iam::000000000000:oidc-provider/s3.us-west-1.amazonaws.com/sc-example-installer
2021/04/28 13:45:43 Existing Role found arn:aws:iam::000000000000:role/sc-example-installer
2021/04/28 13:45:43 AdministratorAccess attached to Role sc-example-installer
2021/04/28 13:45:43 Role arn:aws:iam::000000000000:role/sc-example-openshift-machine-api-aws-cloud-credentials created
2021/04/28 13:45:43 Saved credentials configuration to: _output/manifests/openshift-machine-api-aws-cloud-credentials-credentials.yaml
2021/04/28 13:45:43 Role arn:aws:iam::000000000000:role/sc-example-openshift-cloud-credential-operator-cloud-credential- created
2021/04/28 13:45:44 Saved credentials configuration to: _output/manifests/openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml
2021/04/28 13:45:44 Role arn:aws:iam::000000000000:role/sc-example-openshift-image-registry-installer-cloud-credentials created
2021/04/28 13:45:44 Saved credentials configuration to: _output/manifests/openshift-image-registry-installer-cloud-credentials-credentials.yaml
2021/04/28 13:45:44 Role arn:aws:iam::000000000000:role/sc-example-openshift-ingress-operator-cloud-credentials created
2021/04/28 13:45:44 Saved credentials configuration to: _output/manifests/openshift-ingress-operator-cloud-credentials-credentials.yaml
2021/04/28 13:45:45 Role arn:aws:iam::000000000000:role/sc-example-openshift-cluster-csi-drivers-ebs-cloud-credentials created
2021/04/28 13:45:45 Saved credentials configuration to: _output/manifests/openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml
  • Copy the generated manifest files and the tls directory from the sts-preflight/_output directory to the installation directory:
# cp sts-preflight/_output/manifests/* sc-sts/manifests/
# cp -a sts-preflight/_output/tls sc-sts/
  • I ran both ./sts-preflight token and ./sts-preflight assume again to make sure I had enough time to finish my installation.
  • Export the AWS environment variables.
  • I did not further restrict the role in my test.
  • Start provisioning an OCP cluster:
# ./openshift-install create cluster --log-level=debug --dir=./sc-sts
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/root/mufg-sts/sc-sts-test/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.sc-sts-test.xx.live
INFO Login to the console with user: "kubeadmin", and password: "xxxxxxxxxxx"
DEBUG Time elapsed per stage:
DEBUG     Infrastructure: 7m28s
DEBUG Bootstrap Complete: 11m6s
DEBUG  Bootstrap Destroy: 1m21s
DEBUG  Cluster Operators: 12m28s
INFO Time elapsed: 32m38s

#Cluster was created successfully.
  • Verify that the components are assuming the IAM roles (a manual token-exchange sketch follows this list):
# oc get secrets -n openshift-image-registry installer-cloud-credentials -o json | jq -r .data.credentials | base64 --decode
[default]
role_arn = arn:aws:iam::000000000000:role/sc-sts-test-openshift-image-registry-installer-cloud-credentials
web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token
  • Adding and deleting worker nodes works as well:
Increasing the count on one of the MachineSets from the Administrator console provisioned a new worker node.
Decreasing the count on one of the MachineSets from the Administrator console deleted a worker node.
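A hedged way to exercise the same web-identity flow by hand; the deployment name image-registry is an assumption, and the role ARN is the one from the secret above:

# Read the projected service account token from a registry pod
TOKEN=$(oc exec -n openshift-image-registry deploy/image-registry -- \
  cat /var/run/secrets/openshift/serviceaccount/token)

# Exchange the token for short-lived AWS credentials against the same role
aws sts assume-role-with-web-identity \
  --role-arn arn:aws:iam::000000000000:role/sc-sts-test-openshift-image-registry-installer-cloud-credentials \
  --role-session-name manual-check \
  --web-identity-token "$TOKEN"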

Delete the Cluster

  • Obtain a new temporary credential:
cd <sts-preflight directory>
# ./sts-preflight token
2021/04/29 08:19:01 Token written to _output/token

# ./sts-preflight assume
Run these commands to use the STS credentials
export AWS_ACCESS_KEY_ID=<temporary key>
export AWS_SECRET_ACCESS_KEY=<temporary secret>
export AWS_SESSION_TOKEN=<session token>
  • Export all AWS environment variables using the output from the last step
  • Delete the cluster:
# ./openshift-install destroy cluster --log-level=debug --dir=./sc-sts-test
DEBUG OpenShift Installer 4.7.9
DEBUG Built from commit fae650e24e7036b333b2b2d9dfb5a08a29cd07b1
INFO Credentials loaded from default AWS environment variables
DEBUG search for matching resources by tag in us-east-1 matching aws.Filter{"kubernetes.io/cluster/sc-sts-rj4pw":"owned"}
...
INFO Deleted                                       id=vpc-0bbacb9858fe280f9
INFO Deleted                                       id=dopt-071e7bf4cfcc86ad6
DEBUG search for matching resources by tag in us-east-1 matching aws.Filter{"kubernetes.io/cluster/sc-sts-test-rj4pw":"owned"}
DEBUG search for matching resources by tag in us-east-1 matching aws.Filter{"openshiftClusterID":"ab9baacf-a44f-47e8-8096-25df62c3b1dc"}
DEBUG no deletions from us-east-1, removing client
DEBUG search for IAM roles
DEBUG search for IAM users
DEBUG search for IAM instance profiles
DEBUG Search for and remove tags in us-east-1 matching kubernetes.io/cluster/sc-sts-test-rj4pw: shared
DEBUG No matches in us-east-1 for kubernetes.io/cluster/sc-sts-test-rj4pw: shared, removing client
DEBUG Purging asset "Metadata" from disk
DEBUG Purging asset "Master Ignition Customization Check" from disk
DEBUG Purging asset "Worker Ignition Customization Check" from disk
DEBUG Purging asset "Terraform Variables" from disk
DEBUG Purging asset "Kubeconfig Admin Client" from disk
DEBUG Purging asset "Kubeadmin Password" from disk
DEBUG Purging asset "Certificate (journal-gatewayd)" from disk
DEBUG Purging asset "Cluster" from disk
INFO Time elapsed: 4m39s

References

Application Data Replication

My use case is to replicate a stateful Spring Boot application for disaster recovery. The application runs on OpenShift, and we want to leverage the existing toolsets to solve this problem. If it is just replicating data from one data center to another, it should be super simple, right? In this blog, I share my journey of picking a solution.

The requirements are:

  • No code change
  • Cannot use ssh to copy the data
  • Cannot run the pod for replication using privileged containers
  • Must meet the security requirements

Solution 1: Writing the data to object storage

The simplest solution would be to have the application write the data to an object bucket, so we can mirror the object storage directly. However, it requires code changes for all the current applications.

Solution 2: Use rsync replication with VolSync Operator

We tested the rsync-based replication using the VolSync Operator. It is not a good choice because it violates our security policies on using SSH and UID 0 within containers.

Solution 3: Use rsync-tls replication with VolSync Operator

This is the one that meets all the requirements, and I am testing it out.

My test environment includes the following:

  • OpenShift (OCP) 4.11
  • OpenShift Data Foundation (ODF) 4.11
  • Advanced Cluster Security (ACS) 3.74.1
  • VolSync Operator 0.70

Setup

  • Install two OCP 4.11 clusters
  • Install and configure ODF on both OCP clusters
  • Install and configure ACS central on one of the clusters
  • Install and configure the ACS secured cluster services on both clusters
  • Install VolSync Operator on both clusters
  • Install a sample stateful application

Configure the rsync-tls replication CRs on the source and destination clusters

On the secondary cluster, under the namespace of the application

  • Click “Installed Operators” > VolSync
  • Click the “Replication Destination” tab
  • Click “Create ReplicationDestination” and select the “Current namespace only” option
  • On the Create ReplicationDestination screen, select YAML view
  • Replace only the “spec” section in the YAML with the YAML below
spec:
  rsyncTLS:
    accessModes:
      - ReadWriteMany
    capacity: 1Gi
    copyMethod: Snapshot
    serviceType: LoadBalancer
    storageClassName: ocs-storagecluster-cephfs
    volumeSnapshotClassName: ocs-storagecluster-cephfsplugin-snapclass

Notes:
The serviceType is LoadBalancer; see Reference [2] for more details on picking the service type. Since I am using ODF, ocs-storagecluster-cephfs and ocs-storagecluster-cephfsplugin-snapclass are the storageClassName and volumeSnapshotClassName, respectively.

  • Check the status of the ReplicationDestination CR; it should look similar to the output shown below.
status:
 conditions:
   - lastTransitionTime: '2023-03-29T06:02:42Z'
     message: Synchronization in-progress
     reason: SyncInProgress
     status: 'True'
     type: Synchronizing
 lastSyncStartTime: '2023-03-29T06:02:07Z'
 latestMoverStatus: {}
 rsyncTLS:
    address: >-
      a5ac4da21394f4ef4b79b4178c8787ea-d67ec11e8f219710.elb.us-east-2.amazonaws.com
    keySecret: volsync-rsync-tls-ostoy-rep-dest

Notes:
We will need the value of the address and the keySecret under the “rsyncTLS” section to set up the source cluster for replication.

  • Copy the keySecret from the destination cluster to the source cluster
  • Log in to the destination cluster, and run the following command to create the psk.txt file.
oc extract secret/volsync-rsync-tls-ostoy-rep-dest --to=../ --keys=psk.txt -n <application namespace>
  • Log in to the source cluster, and execute the following command to create the keySecret.
oc create secret generic volsync-rsync-tls-ostoy-rep-dest --from-file=psk.txt -n <application namespace>
  • Now you are ready to create the ReplicationSource.
  • Log in to your source cluster from the UI
  • Click “Installed Operators” > VolSync
  • Click the “Replication Source” tab
  • Click “Create ReplicationSource” and select the “Current namespace only” option
  • On the Create ReplicationSource screen, select YAML view
  • Replace only the “spec” section in the YAML with the YAML below
spec:
  rsyncTLS:
    address: >-
      a5ac4da21394f4ef4b79b4178c8787ea-d67ec11e8f219710.elb.us-east-2.amazonaws.com
    copyMethod: Clone
    keySecret: volsync-rsync-tls-ostoy-rep-dest
  sourcePVC: ostoy-pvc
  trigger:
    schedule: '*/5 * * * *'

I am using the address provided in the status of the ReplicationDestination CR and the same keySecret from the destination.

  • On the destination OCP console, click “Storage” > VolumeSnapShots, and you will see a snapshot has been created.

  • Click “PersistentVolumeClaims”. A copy of the source PVC has been created in the namespace where you created your ReplicationDestination CR. Note the name of the PVC, “volsync-ostoy-rep-dest-dst”.
  • Let’s add some new content to the application on the source cluster.
  • Scale down the deployment for this application on the source
  • On the destination cluster, ensure the application uses “volsync-ostoy-rep-dest-dst” as the PVC in the deployment (a sketch follows this list).
  • Deploy the sample application on the destination.
  • Check the application and verify that the new content was copied to the destination.
  • The last task is to verify that the solution does not violate the policies on using SSH and UID 0.
  • Log in to the ACS console and enable the related policies.
  • Check if any related policies are violated under the application namespace and search by namespace from the violation menu.
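For the deployment change mentioned above, here is a minimal sketch of pointing the destination deployment at the replicated PVC. The deployment name, namespace, image, and mount path are placeholders; only the claimName comes from the PVC noted earlier.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ostoy                 # assumed application name
  namespace: ostoy            # assumed application namespace
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ostoy
  template:
    metadata:
      labels:
        app: ostoy
    spec:
      containers:
        - name: ostoy
          image: quay.io/example/ostoy:latest   # placeholder image
          volumeMounts:
            - name: data
              mountPath: /var/data              # placeholder mount path
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: volsync-ostoy-rep-dest-dst   # PVC created by VolSync on the destination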

References:

Getting Started on OpenShift Compliance Operator

There are many documents out there on the OpenShift Compliance Operator. I share this one with customers who want to learn how to work with an OpenShift Operator, and it has helped them get started with the OpenShift Compliance Operator.

In this blog, I will walk you through how to generate the OpenSCAP evaluation report using the OpenShift Compliance Operator.

The OpenShift Compliance Operator can be easily installed on OpenShift 4 as a security feature of the OpenShift Container Platform. The Compliance Operator uses OpenSCAP, a NIST-certified tool, to scan and enforce the security policies provided by the content.

Prerequisites

Overview

The compliance operator uses many custom resources. The diagram below helps me to understand the relationship between all the resources. In addition, the OpenShift documentation has details about the Compliance Operator custom resources.
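If you want to explore these resources directly on a cluster instead of from the diagram, a quick way (assuming the operator is already installed) is to list its CRDs:

# List the custom resource definitions installed by the Compliance Operator
oc get crd | grep compliance.openshift.io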

Steps to Generate OpenSCAP Evaluation Report

Some default custom resources come as part of the Compliance Operator installation, such as ProfileBundle, Profile, and ScanSetting.

First, we need to create the ScanSettingBinding, which binds Profiles to a ScanSetting. The ScanSettingBinding tells the Compliance Operator to evaluate the specified profile(s) with a specific scan setting.

  • Log in to the OpenShift cluster
# oc login -u <username> https://api.<clusterid>.<subdomain>
# oc project openshift-compliance
  • The default compliance profiles are available once the operator is installed. The command below lists all compliance profiles from the Custom Resource Definition (CRD) profiles.compliance.openshift.io.
# oc get profiles.compliance.openshift.io
  • Get the ScanSetting custom resources via the command below. It shows the two default scan settings.
# oc get ScanSetting
NAME                 AGE
default              2d10h
default-auto-apply   2d10h
  • Check out the “default” ScanSetting:
# oc describe scansetting default
Name:         default
Namespace:    openshift-compliance
Labels:       <none>
Annotations:  <none>
API Version:  compliance.openshift.io/v1alpha1
Kind:         ScanSetting
Metadata:
  Creation Timestamp:  2021-10-19T16:22:18Z
  Generation:          1
  Managed Fields:
...
  Resource Version:  776981
  UID:               f453726d-665a-432e-88a9-a4ad60176ac7
Raw Result Storage:
  Pv Access Modes:
    ReadWriteOnce
  Rotation:  3
  Size:      1Gi
Roles:
  worker
  master
Scan Tolerations:
  Effect:    NoSchedule
  Key:       node-role.kubernetes.io/master
  Operator:  Exists
Schedule:    0 1 * * *
Events:      <none>
  • Create ScanSettingBinding as shown in scan-setting-binding-example.yaml below.
# cat scan-setting-binding-example.yaml
apiVersion: compliance.openshift.io/v1alpha1
kind: ScanSettingBinding
metadata:
  name: cis-compliance
profiles:
  - name: ocp4-cis-node
    kind: Profile
    apiGroup: compliance.openshift.io/v1alpha1
  - name: ocp4-cis
    kind: Profile
    apiGroup: compliance.openshift.io/v1alpha1
settingsRef:
  name: default
  kind: ScanSetting
  apiGroup: compliance.openshift.io/v1alpha1
  • Create the above sample ScanSettingBinding custom resource.
# oc create -f scan-setting-binding-example.yaml
  • Verify the creation of the ScanSettingBinding
# oc get scansettingbinding
  • The ComplianceSuite custom resource helps track the state of the scans. The following command checks the state of the scan you defined in your ScanSettingBinding.
# oc get compliancesuite
NAME             PHASE     RESULT
cis-compliance   RUNNING   NOT-AVAILABLE
  • The ComplianceScan custom resource holds all the parameters needed to run OpenSCAP, such as the profile ID, the image to get the content from, and the data stream file path. It can also contain operational parameters.
# oc get compliancescan
NAME                   PHASE   RESULT
ocp4-cis               DONE    NON-COMPLIANT
  • While the ComplianceCheckResult custom resource shows the aggregate result of the scan, it is useful to review the raw results from the scanner. The raw results are produced in ARF format and can be large, so the Compliance Operator creates a persistent volume (PV) for the raw results of each scan. Let’s check whether the PVCs were created for the scans.
# oc get pvc
NAME                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
ocp4-cis               Bound    pvc-5ee57b02-2f6b-4997-a45c-3c4df254099d   1Gi        RWO            gp2            27m
ocp4-cis-node-master   Bound    pvc-57c7c411-fc9f-4a4d-a713-de91c934af1a   1Gi        RWO            gp2            27m
ocp4-cis-node-worker   Bound    pvc-7266404a-6691-4f3d-9762-9e30e50fdadb   1Gi        RWO            gp2            28m
  • Once we know the raw results are created, we need the oc-compliance tool to fetch the raw result XML files. You will need to log in to registry.redhat.io.
# podman login -u <user> registry.redhat.io
  • Download the oc-compliance tool
podman run --rm --entrypoint /bin/cat registry.redhat.io/compliance/oc-compliance-rhel8 /usr/bin/oc-compliance > ~/usr/bin/oc-compliance
  • Fetch the raw results to a temporary location (/tmp/cis-compliance)
# oc-compliance fetch-raw scansettingbindings cis-compliance -o /tmp/cis-compliance
Fetching results for cis-compliance scans: ocp4-cis-node-worker, ocp4-cis-node-master, ocp4-cis
Fetching raw compliance results for scan 'ocp4-cis-node-worker'.....
The raw compliance results are available in the following directory: /tmp/cis-compliance/ocp4-cis-node-worker
Fetching raw compliance results for scan 'ocp4-cis-node-master'.....
The raw compliance results are available in the following directory: /tmp/cis-compliance/ocp4-cis-node-master
Fetching raw compliance results for scan 'ocp4-cis'...........
The raw compliance results are available in the following directory: /tmp/cis-compliance/ocp4-cis
  • Inspect the output filesystem and extract the *.bzip2 file
# cd /tmp/cis-compliance/ocp4-cis
# ls
ocp4-cis-api-checks-pod.xml.bzip2

# bunzip2 -c  ocp4-cis-api-checks-pod.xml.bzip2  > /tmp/cis-compliance/ocp4-cis/ocp4-cis-api-checks-pod.xml

# ls /tmp/cis-compliance/ocp4-cis/ocp4-cis-api-checks-pod.xml
/tmp/cis-compliance/ocp4-cis/ocp4-cis-api-checks-pod.xml
  • Convert the ARF XML to HTML
# oscap xccdf generate report ocp4-cis-api-checks-pod.xml > report.html
  • View the HTML as shown below.

Reference

Thank you Juan Antonio Osorio Robles for sharing the diagram!

Argo CD SSO Setup

  • Go to Administrator Console
  • Create a new project called keycloak
  • Click Operators
  • Click OperatorHub
  • Click on the Red Hat Single Sign-On Operator
  • Click Install
  • Click Install
  • Click “Create instance” in the Keycloak tile
  • The Keycloak CR is shown below
apiVersion: keycloak.org/v1alpha1
kind: Keycloak
metadata:
  name: keycloak-dev
  labels:
    app: keycloak-dev
  namespace: keycloak
spec:
  externalAccess:
    enabled: true
  instances: 1
  • Click Create
  • Go to Workloads > Pods
  • Verify that the keycloak-dev instance is ready; the following command should return “true”
$ oc get keycloak keycloak-dev -n keycloak -o jsonpath='{.status.ready}'
true
  • Operators > Installed Operators > Red Hat Single Sign-On Operator
  • Click Create instance
  • Enter the KeycloakRealm as shown below
apiVersion: keycloak.org/v1alpha1
kind: KeycloakRealm
metadata:
  name: keycloakrealm
  labels:
    realm: keycloakrealm
  namespace: keycloak
spec:
  instanceSelector:
    matchLabels:
      app: keycloak-dev
  realm:
    enabled: true
    displayName: "Keycloak-dev Realm"
    realm: keycloakrealm
  • Click Create
  • Make sure it returns true
$ oc get keycloakrealm keycloakrealm -n keycloak -o jsonpath='{.status.ready}'
true
  • Get the Keycloak Admin user secret name
$ oc get keycloak keycloak-dev --output="jsonpath={.status.credentialSecret}"
credential-keycloak-dev
  • Get the Admin username and password
$ oc get secret credential-keycloak-dev -o go-template='{{range $k,$v := .data}}{{printf "%s: " $k}}{{if not $v}}{{$v}}{{else}}{{$v | base64decode}}{{end}}{{"\n"}}{{end}}'
  • Run the following to find out the URLs of Keycloak:
KEYCLOAK_URL=https://$(oc get route keycloak --template='{{ .spec.host }}')/auth &&
echo "" &&
echo "Keycloak:                 $KEYCLOAK_URL" &&
echo "Keycloak Admin Console:   $KEYCLOAK_URL/admin" &&
echo "Keycloak Account Console: $KEYCLOAK_URL/realms/myrealm/account" &&
echo ""
  • Open a browser with the Admin URL
  • Login with the admin username and password
  • Click Client on the left nav
  • Click Create on the right top corner
  • Enter the Argocd URL and the name of the client as ‘argocd’
  • Click Save
  • Set Access Type to confidential
  • Set Valid Redirect URIs to <argocd-url>/auth/callback
  • Set Base URL to /applications
  • Click Save
  • Scroll up and click “Credential” tab
  • IMPORTANT: Copy the secret and you will need this later
  • Configure the Group claim
  • Click Client Scope on the left nav
  • Click Create on the right
  • Set Name as group
  • Set Protocol as openid-connect
  • Display On Content Scope: on
  • Include to Token Scope: on
  • Click save
  • Click “Mappers” tab
  • Click Create on the top right
  • Set name as groups
  • Set Mapper Type as Group Membership
  • Set Token Claim Name as groups
  • Click Clients on the left nav
  • Click argocd
  • Click “Client Scopes” tab
  • Select groups > Add selected
  • Click Groups on left nav
  • Click Create
  • Set the name as ArgoCDAdmins
  • Click Save
  • Encode the argocd credential you saved before
echo -n '<argocd credential>' | base64
  • Edit the argocd-secret
oc edit secret argocd-secret -n openshift-gitops
  • Add the “oidc.keycloak.clientSecret: <encoded credential>” entry as shown below.
apiVersion: v1
kind: Secret
metadata:
  name: argocd-secret
data:
  oidc.keycloak.clientSecret: <encoded credential>
  • Edit argocd Custom Resource
oc edit argocd -n openshift-gitops
  • Add the following into the YAML. Make sure to update the issuer to match your settings
oidcConfig: |
    name: OpenShift Single Sign-On
    issuer: https://keycloak-keycloak.apps.cluster-72c5r.72c5r.sandbox1784.opentlc.com/auth/realms/keycloakrealm
    clientID: argocd
    clientSecret: $oidc.keycloak.clientSecret
    requestedScopes: ["openid", "profile", "email", "groups"]
  • From OpenShift Console top right corner, click About
  • Copy the API URL from the following screen
  • Go back to Keycloak, click Identity Providers on left nav
  • Select OpenShift v4 from the dropdown list
  • Set Display Name: Login with Openshift
  • Set Client ID: keycloak-broker
  • Set Client Secret: <anything that you can remember>
  • Set Base URL: API URL
  • Set Default Scopes: user:full
  • Click Save
  • Add an OAuth Client
oc create -f <(echo '
kind: OAuthClient
apiVersion: oauth.openshift.io/v1
metadata:
 name: keycloak-broker 
secret: "12345" 
redirectURIs:
- "https://keycloak-keycloak.apps.cluster-72c5r.72c5r.sandbox1784.opentlc.com/auth/realms/keycloakrealm/broker/openshift-v4/endpoint" 
grantMethod: prompt 
')
  • Configure the RBAC
oc edit configmap argocd-rbac-cm -n openshift-gitops
  • Modify the data as shown below
apiVersion: v1
kind: ConfigMap
metadata:
  name: argocd-rbac-cm
data:
  policy.csv: |
    g, ArgoCDAdmins, role:admin
  • Go to the Argo CD URL; you will see the SSO option. Click “LOG IN VIA OPENSHIFT”
  • Click “Login with Openshift”
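One last note on the RBAC step above: policy.csv follows the standard Argo CD RBAC grammar of g, <group>, <role>. A hedged example of extending the ConfigMap with an additional, hypothetical Keycloak group mapped to the built-in read-only role:

data:
  policy.csv: |
    g, ArgoCDAdmins, role:admin
    g, ArgoCDViewers, role:readonly   # hypothetical read-only group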

Red Hat OpenShift on Amazon (ROSA) is GA!

I have previously blogged about the pre-GA ROSA, and now it is GA. I decided to write up my GA experience on ROSA.

Let’s get started here.

Enable ROSA on AWS

After logging into AWS, enter openshift in the search box on the top of the page.

Click on the “Red Hat OpenShift Service on AWS” Service listed.

It will then take you to the page shown below; click to enable the OpenShift service.

Once it is complete, it will show Service enabled.

Click to download the CLI and click on the OS where you run your ROSA CLI. It will start downloading to your local drive.

Set up ROSA CLI

Extract the downloaded CLI file and add rosa to your local path.

tar zxf rosa-macosx.tar.gz
mv rosa /usr/local/bin/rosa

Setting AWS Account

I have set up my AWS account with an IAM user that has the proper access per the documentation. More information about the account access requirements for ROSA is available here.

I have configured my AWS key and secret in my .aws/credentials.
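For reference, a minimal sketch of what that file looks like, with placeholder values:

# ~/.aws/credentials
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx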

Create Cluster

Verify AWS account access.

rosa verify permissions

Returns:

I: Validating SCP policies...
I: AWS SCP policies ok

Verify the quota for the AWS account.

rosa verify quota --region=us-west-2

Returns:

I: Validating AWS quota...
I: AWS quota ok

Obtain an offline access token from the management portal cloud.redhat.com (if you don’t have one yet) by clicking the Create One Now link.

Go to https://cloud.redhat.com/openshift/token/rosa; you will have to log in and will be prompted to accept the terms as shown below.

Click View Terms and Conditions.

Check the box to agree to the terms and click Submit.

Copy the token from cloud.redhat.com.

rosa login --token=<your cloud.redhat.com token>

Returns:

I: Logged in as 'your_username' on 'https://api.openshift.com'

Verify the login

rosa whoami

Returns:

AWS Account ID:               ############
AWS Default Region:           us-west-2
AWS ARN:                      arn:aws:iam::############:user/username
OCM API:                      https://api.openshift.com
OCM Account ID:               xxxxxyyyyyzzzzzwwwwwxxxxxx
OCM Account Name:             User Name
OCM Account Username:         User Name
OCM Account Email:            name@email.com
OCM Organization ID:          xxxxxyyyyyzzzzzwwwwwxxxxxx
OCM Organization Name:        company name
OCM Organization External ID: 11111111

Configure the account and make sure everything is set up correctly.

rosa init

Returns:

I: Logged in as 'your_username' on 'https://api.openshift.com'
I: Validating AWS credentials...
I: AWS credentials are valid!
I: Validating SCP policies...
I: AWS SCP policies ok
I: Validating AWS quota...
I: AWS quota ok
I: Ensuring cluster administrator user 'osdCcsAdmin'...
I: Admin user 'osdCcsAdmin' already exists!
I: Validating SCP policies for 'osdCcsAdmin'...
I: AWS SCP policies ok
I: Validating cluster creation...
I: Cluster creation valid
I: Verifying whether OpenShift command-line tool is available...
I: Current OpenShift Client Version: 4.7.2

Create a cluster using interactive mode.

rosa create cluster -i
I: Interactive mode enabled.
Any optional fields can be left empty and a default will be selected.
? Cluster name: [? for help] 

Enter the name of the ROSA cluster.

? Multiple availability zones (optional): [? for help] (y/N) 

Enter y/N.

? AWS region:  [Use arrows to move, type to filter, ? for more help]
  eu-west-2
  eu-west-3
  sa-east-1
  us-east-1
  us-east-2
  us-west-1
> us-west-2 

Select the AWS region and hit <enter>.

? OpenShift version:  [Use arrows to move, type to filter, ? for more help]
> 4.7.2
  4.7.1
  4.7.0
  4.6.8
  4.6.6
  4.6.4
  4.6.3

Select the version and hit <enter>.

? Install into an existing VPC (optional): [? for help] (y/N)

Enter y/N.

? Compute nodes instance type (optional):  [Use arrows to move, type to filter, ? for more help]
> r5.xlarge
  m5.xlarge
  c5.2xlarge
  m5.2xlarge
  r5.2xlarge
  c5.4xlarge
  m5.4xlarge

Select the type and hit <enter>.

? Enable autoscaling (optional): [? for help] (y/N)

Enter y/N.

? Compute nodes: [? for help] (2)

Enter the number of workers to start.

? Machine CIDR: [? for help] (10.0.0.0/16)

Enter the machine CIDR or use default.

? Service CIDR: [? for help] (172.30.0.0/16)

Enter the service CIDR or use default.

? Pod CIDR: [? for help] (10.128.0.0/14)

Enter the pod CIDR or use default.

? Host prefix: [? for help] (23)

Enter the host prefix or use the default.

? Private cluster (optional): (y/N) 

Enter y/N.

Note:

Restricting the master API endpoint and application routes to direct, private connectivity means you will not be able to access your cluster until you edit the network settings in your cloud provider. I also learned that you need one private subnet and one public subnet per AZ in your existing VPC for a private cluster with the GA version of ROSA. There will be more improvements for private clusters in future releases.

Returns:

I: Creating cluster 'rosa-c1'
I: To create this cluster again in the future, you can run:
   rosa create cluster --cluster-name rosa-c1 --region us-west-2 --version 4.7.2 --compute-nodes 2 --machine-cidr 10.0.0.0/16 --service-cidr 172.30.0.0/16 --pod-cidr 10.128.0.0/14 --host-prefix 23
I: To view a list of clusters and their status, run 'rosa list clusters'
I: Cluster 'rosa-c1' has been created.
I: Once the cluster is installed you will need to add an Identity Provider before you can login into the cluster. See 'rosa create idp --help' for more information.
I: To determine when your cluster is Ready, run 'rosa describe cluster -c rosa-c1'.
I: To watch your cluster installation logs, run 'rosa logs install -c rosa-c1 --watch'.
Name:                       rosa-c1
ID:                         xxxxxxxxxxyyyyyyyyyyyaxxxxxxxxx
External ID:
OpenShift Version:
Channel Group:              stable
DNS:                        rosa-c1.xxxx.p1.openshiftapps.com
AWS Account:                xxxxxxxxxxxx
API URL:
Console URL:
Region:                     us-west-2
Multi-AZ:                   false
Nodes:
 - Master:                  3
 - Infra:                   2
 - Compute:                 2 (m5.xlarge)
Network:
 - Service CIDR:            172.30.0.0/16
 - Machine CIDR:            10.0.0.0/16
 - Pod CIDR:                10.128.0.0/14
 - Host Prefix:             /23
State:                      pending (Preparing account)
Private:                    No
Created:                    Mar 30 2021 03:10:25 UTC
Details Page:               https://cloud.redhat.com/openshift/details/xxxxxxxxxxyyyyyyyyyyyaxxxxxxxxx
 

Copy the URL from the Details Page to the browser and click view logs to see the status of the installation.

When the ROSA installation is complete, you will see a page similar to the one below.

Next, you will need a way to access the OpenShift cluster.

Configure Quick Access

Add cluster-admin user

rosa create admin -c rosa-c1

Returns:

I: Admin account has been added to cluster 'rosa-c1'.
I: Please securely store this generated password. If you lose this password you can delete and recreate the cluster admin user.
I: To login, run the following command:

   oc login https://api.rosa-c1.xxxx.p1.openshiftapps.com:6443 --username cluster-admin --password xxxxx-xxxxx-xxxxx-xxxxx

I: It may take up to a minute for the account to become active.

Test user access

$ oc login https://api.rosa-c1.xxxx.p1.openshiftapps.com:6443 --username cluster-admin --password xxxxx-xxxxx-xxxxx-xxxxx
Login successful.

You have access to 86 projects, the list has been suppressed. You can list all projects with 'oc projects'

Using project "default".

Configure Identity Provider

There are several options for identity providers. I am using GitHub in this example.

I am not going to explain how to set up the identity provider here; I did that in my last blog. I will walk through the steps to configure ROSA to use GitHub.

rosa create idp --cluster=rosa-c1 -i
I: Interactive mode enabled.
Any optional fields can be left empty and a default will be selected.
? Type of identity provider:  [Use arrows to move, type to filter]
> github
  gitlab
  google
  ldap
  openid

Select one IDP

? Identity provider name: [? for help] (github-1)

Enter the name of the IDP configured on the ROSA

? Restrict to members of:  [Use arrows to move, type to filter, ? for more help]
> organizations
  teams

Select organizations

? GitHub organizations:

Enter the name of the organization. My example is `sc-rosa-idp`

? To use GitHub as an identity provider, you must first register the application:
  - Open the following URL:
    https://github.com/organizations/sc-rosa-idp/settings/applications/new?oauth_application%5Bcallback_url%5D=https%3A%2F%2Foauth-openshift.apps.rosa-c1.0z3w.p1.openshiftapps.com%2Foauth2callback%2Fgithub-1&oauth_application%5Bname%5D=rosa-c1&oauth_application%5Burl%5D=https%3A%2F%2Fconsole-openshift-console.apps.rosa-c1.0z3w.p1.openshiftapps.com
  - Click on 'Register application'

Open a browser, use the above URL to register the application, and copy the client ID

? Client ID: [? for help] 

Enter the copied Client ID

? Client Secret: [? for help]

Enter client secret from the registered application.

? GitHub Enterprise Hostname (optional): [? for help] 

Hit <enter>

? Mapping method:  [Use arrows to move, type to filter, ? for more help]
  add
> claim
  generate
  lookup

Select claim

I: Configuring IDP for cluster 'rosa-c1'
I: Identity Provider 'github-1' has been created.
   It will take up to 1 minute for this configuration to be enabled.
   To add cluster administrators, see 'rosa create user --help'.
   To login into the console, open https://console-openshift-console.apps.rosa-c1.xxxx.p1.openshiftapps.com and click on github-1.

Congratulations! The IDP configuration is complete.

Log in with the IDP account

Open a browser with the URL from the IDP configuration. Our example is: https://console-openshift-console.apps.rosa-c1.xxxx.p1.openshiftapps.com.

Click github-1

Click Authorize sc-rosa-idp

Overall, it is straightforward to get started creating a ROSA cluster on AWS. I hope this helps you in some way.

Reference

Red Hat OpenShift on Amazon Documentation

Running ROSA on an Existing Private VPC

Pre-GA ROSA test

Test Run Pre-GA Red Hat OpenShift on AWS (ROSA)

I had an opportunity to try out pre-GA ROSA. ROSA is a fully managed Red Hat OpenShift Container Platform (OCP) service sold by AWS. I am excited to share my experience with ROSA. It installs OCP 4 from soup to nuts without configuring a hosted zone or domain server. As a developer, you may just want to get the cluster up and running so you can start doing the real work :). There are customization options with ROSA, but I am going to leave those for later exploration.

I am going to show you the steps I took to create OCP via ROSA. There are more use cases to test. I hope this blog will give you a taste of ROSA.

Creating OpenShift Cluster using ROSA Command

Since it is a pre-GA version, I downloaded the ROSA command-line tool from here and had aws-cli available where I ran the ROSA installation.

  • I am testing from my MacBook. I just moved the “rosa” command-line tool to /usr/local/bin/.
  • Verify that your AWS account has the necessary permissions using rosa verify permissions:
  • Verify that your AWS account has the necessary quota to deploy a Red Hat OpenShift Service on AWS cluster via rosa verify quota --region=<region>:
  • Log in to your Red Hat account with the ROSA command using rosa login --token=<token from cloud.redhat.com>:
  • Verify the AWS setup using rosa whoami:
  • Initialize AWS for the cluster deployment via rosa init:
  • Since I have the OpenShift Client command line installed, it shows the existing OpenShift Client version. If you don’t have it, you can download the OpenShift Client via rosa download oc and make it available from your PATH.
  • Create ROSA via the rosa create cluster command below:

Note: rosa create cluster -i with the interactive option provides customization options for the ROSA installation, such as multiple AZs, an existing VPC, subnets, etc.

  • Copy the URL from the Details Page to a browser and you can view the status of your ROSA installation.
  • If you click View logs, you can watch the log from here until the cluster is completed.
  • When you see this screen, it means the cluster is created:
  • Now you need a way to log into the OCP cluster. I created an organization called sc-rosa-idp on GitHub and configured the IDP via rosa create idp --cluster=sc-rosa-test --interactive as shown below.
  • Log into the OCP console via the URL in the output from the last step:
  • Click github-1 → authorize the organization on GitHub → log in to GitHub.
  • Once you log in with your GitHub credential, you will see the OCP developer console:
  • Grant cluster-admin role to the github user using rosa grant user cluster-admin --user <github user in your organization> --cluster <name of your rosa cluster>.
  • Click Administrator on the top left and access the OCP admin console with Admin access as shown below:

Delete ROSA

  • Go to the cluster on cloud.redhat.com; from Actions → select Delete cluster:
  • Enter the name of the cluster and click Delete:
  • The cluster shows as Uninstalling

 

Although it is pre-GA without AWS console integration, I found it very easy to get my cluster up and running. If you cannot wait for GA, you can always request preview access from here. Get a head start with ROSA!

References

ARO 4 and AAD Integration Take 2

In my last post on ARO 4, I walked through the steps to set up the Azure environment for creating ARO 4. My second round of testing has the following specific requirements:

  • Use only one app registration
  • Do not use a pull secret
You will need to complete the section on setting up the Azure environment in my previous blog on ARO 4.

Create ARO 4 Cluster with existing service principal

Create a service principal
From the previous test, I learned that the process of creating ARO 4 creates a service principal. This time, I am going to create a service principal before creating the cluster.
$ az ad sp create-for-rbac --role Contributor --name all-in-one-sp
This command will return the appId and password information that we will need for the ARO 4 create command later.
Adding API permission to the service principal
  1. Login to Azure Portal
  2. Go to Azure Active Directory
  3. Click App registrations
  4. Click “All applications”
  5. Search for “all-in-one-sp”
  6. Click “View API permission”
  7. Click “Add a permission”
  8. Click “Azure Active Directory Graph”
  9. Click “Delegated Permissions”
  10. Check “User.Read”
  11. Click “Add permission” button at the bottom.
  12. Click “Grant admin consent …”
  13. A green check mark is shown under Status as shown below
Create ARO with existing service principal without pull secret
az aro create \
--resource-group $RESOURCEGROUP \
--name $CLUSTER \
--client-id <service principal application id> \
--client-secret <service principal password> \
--vnet aro-vnet \
--master-subnet master-subnet \
--worker-subnet worker-subnet \
--domain aro.ocpdemo.online
When I opted out of the pull secret option, I got the following message in the output of the Azure CLI.
No --pull-secret provided: cluster will not include samples or operators from Red Hat or from certified partners.
Adding api and ingress A records to the DNS zone
Using the output from the ARO 4 creation: the IP in the “apiserverProfile” section is for the API server, and the IP in “ingressProfiles” is for the ingress. An example is shown below.
Test out the ARO cluster
az aro list-credentials \ 
--name $CLUSTER \ 
--resource-group $RESOURCEGROUP
Open the following URL in a browser and log in as kubeadmin with the password from the above command
https://console-openshift-console.apps.<DNS domain>/

Integrate Azure Active Directory

The following steps are for getting the OAuth call back URL.
$ oc login -u kubeadmin -p <password> https://api.<DNS domain>:6443/ 
$ oauthCallBack=`oc get route oauth-openshift -n openshift-authentication -o jsonpath='{.spec.host}'` 
$ oauthCallBackURL=https://$oauthCallBack/oauth2callback/AAD
$ echo $oauthCallBackURL
where AAD is the name of the identity provider for OAuth configuration on OpenShift
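For reference, with the aro.ocpdemo.online domain used in this post, the callback URL comes out to something like the value below; the exact host comes from the oauth-openshift route on your cluster.
$ echo $oauthCallBackURL
https://oauth-openshift.apps.aro.ocpdemo.online/oauth2callback/AAD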
Add the OAuth call back URL to the same service principal
  • Go to Azure Active Directory
  • Click App registration
  • Click on “all-in-one-sp” under all applications
  • Under Overview, click right top corner link for “Add a Redirect URI”
  • Click “Add a platform”
  • Click Web Application from the list of Configure platforms
  • Enter the value of the oauthCallBackURL from the previous step to the “Redirect URIs”
  • Click configure
Create a manifest file
cat > manifest.json<< EOF 
[{ "name": "upn", 
"source": null, 
"essential": false, 
"additionalProperties": [] 
}, 
{ "name": "email", 
"source": null, 
"essential": false, 
"additionalProperties": [] 
}] 
EOF
Update service principal with the manifest
$ az ad app update \
 --set optionalClaims.idToken=@manifest.json \
 --id <Service Principal appId>
Create secret to store service principal’s password
oc create secret generic openid-client-secret-azuread \
--namespace openshift-config \
--from-literal=clientSecret=<service principal password>
Create OAuth configuration
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: AAD
    mappingMethod: claim
    type: OpenID
    openID:
      clientID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
      clientSecret:
        name: openid-client-secret-azuread
      extraScopes:
      - email
      - profile
      extraAuthorizeParameters:
        include_granted_scopes: "true"
      claims:
        preferredUsername:
        - email
        - upn
        name:
        - name
        email:
        - email
      issuer: https://login.microsoftonline.com/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Save the above YAML as openid.yaml and apply it
oc apply -f openid.yaml
Log in to the OpenShift console via AAD

Reference

Azure Red Hat OpenShift 4 (ARO 4) integrate with Azure Active Directory

I happened to test out ARO 4 with Azure Active Directory integration. The Azure documentation is good, but I had to change a few things while testing the steps. I am sharing my experience here and hope someone will find it useful.

Setting the requirements

Install or update Azure CLI
brew update && brew install azure-cli
Make sure you have permission to create resources in the resource group. I logged in as a global administrator when I was testing this.

Setup the environment variables
$ cat aro-env
LOCATION=centralus # the location of your cluster
RESOURCEGROUP=aro-rg # the name of the resource group where you want to create your cluster
CLUSTER=poc #cluster-id of the ARO 4 cluster
$ source aro-env
Log in Azure
az login
Create a Resource Group
az group create \
--name $RESOURCEGROUP \
--location $LOCATION
Add DNS zone
If you don’t have a DNS zone already, you can use this step.
  1. Login Azure Portal
  2. Type: “DNS Zones” in the search box on the top and click on “DNS Zones”
  3. Click “+Add” on the top
  4. Select the newly created resource group
  5. Enter your domain
  6. Select the location
  7. Click “Review+Create”

Notes:

  • I am using a domain name outside of the Azure. You will need to add the NS records from the overview page of the DNS zone to your domain.
  • Request a quota increase from the Azure portal if needed; ARO requires a minimum of 40 cores.
Register Resource Provider
az account set --subscription <subscription id>
az provider register -n Microsoft.RedHatOpenShift --wait
az provider register -n Microsoft.Compute --wait
az provider register -n Microsoft.Storage --wait
Create a Virtual Network
az network vnet create \
--resource-group $RESOURCEGROUP \
--name aro-vnet \
--address-prefixes 10.0.0.0/22
Create an empty subnet for master nodes
az network vnet subnet create \
--resource-group $RESOURCEGROUP \
--vnet-name aro-vnet \
--name master-subnet \
--address-prefixes 10.0.0.0/23 \
--service-endpoints Microsoft.ContainerRegistry
Create an empty subnet for worker nodes
az network vnet subnet create \
--resource-group $RESOURCEGROUP \
--vnet-name aro-vnet \
--name worker-subnet \
--address-prefixes 10.0.2.0/23 \
--service-endpoints Microsoft.ContainerRegistry
Disable private endpoint policy
az network vnet subnet update \
--name master-subnet \
--resource-group $RESOURCEGROUP \
--vnet-name aro-vnet \
--disable-private-link-service-network-policies true
Once the above steps are done, you don’t have to redo them if you are going to reuse the names and resources.

Create Cluster

Please make sure you log in to Azure and environment variables are set.

Information that we need for creating a cluster
  • Get a copy of the pull secret from cloud.redhat.com. If you don’t have a user name created, please just register as a user for free.
  • Create an ARO cluster using the following command. Please apply the appropriate values.
    Some of the values used in the example are explained below.
    • aro-vnet – the name of virtual network
    • master-subnet – the name of master subnet
    • worker subnet – the name of worker subnet
    • ./pull-secret.txt – the path to where the pull secret is located
    • aro.ocpdemo.online – custom domain for the cluster
az aro create \
--resource-group $RESOURCEGROUP \
--name $CLUSTER \
--vnet aro-vnet \
--master-subnet master-subnet \
--worker-subnet worker-subnet \
--pull-secret @./pull-secret.txt \
--domain aro.ocpdemo.online

The information in the JSON output of the above command can be useful if you are not familiar with OpenShift 4. You can find your API server IP, API URL, OpenShift console URL, and ingress IP. You will need the API and ingress IPs for the next step.

{- Finished ..
"apiserverProfile": {
"ip": "x.x.x.x",
"url": "https://api.aro.ocpdemo.online:6443/",
"visibility": "Public"
...
},
"consoleProfile": {
"url": "https://console-openshift-console.apps.aro.ocpdemo.online/"
},
....
"ingressProfiles": [
{
"ip": "x.x.x.x",
"name": "default",
"visibility": "Public"
}
....

Post ARO Installation

Adding two A records for api and *.apps in the DNS zone
  1. Login to Azure portal
  2. Go to DNS zone
  3. Click onto the domain for the ARO cluster
  4. Click “+ Record Set” on the top menu to create an A record and add values to Name and IP. You will need to repeat this step for both api and *.apps A records.
    • Name: api or *.apps
    • IP: the corresponding api or ingress IP from the output of the ARO creation
  5. The below screenshot shows the DNS zone configuration and adding 2 A records.

Test ARO Cluster

Getting Kubeadmin credential
az aro list-credentials \
--name $CLUSTER \
--resource-group $RESOURCEGROUP
The command will return the kubeadmin credential.
Log in OpenShift Console
Open a browser and go to the OpenShift console or look for “consoleProfile” from the JSON output from ARO creation
https://console-openshift-console.apps.<DNS domain>/
The login user is kubeadmin and the password is the credential from the last command. Congrats!! The ARO installation is complete!

Azure Active Directory Integration

Getting oauthCallBackURL
  • Download the OpenShift command-line tool from the console.
Download the OpenShift Command Line Interface (CLI) from there. Once you extract it and add it to your PATH, you can move on to the next step.
  • Login to ARO via OC CLI
$ oc login -u kubeadmin -p <password> https://api.<DNS domain>:6443/

$ oauthCallBack=`oc get route oauth-openshift -n openshift-authentication -o jsonpath='{.spec.host}'`

$ oauthCallBackURL=https://$oauthCallBack/oauth2callback/AAD
Note: AAD is the name of the identity provider when configuring OAuth on OpenShift

Creating Application on Azure Active Directory
az ad app create \
  --query appId -o tsv \
  --display-name poc-aro-auth \
  --reply-urls $oauthCallBackURL \
  --password '<ClientSecret>'
Note: Please note that the above command returns the registered Application ID (AppId), which you will need when configuring OAuth on OpenShift.
Get tenant Id
az account show --query tenantId -o tsv
Note: Please note that you will need the tenant Id for the OAuth configuration on OpenShift
Create manifest file
cat > manifest.json<< EOF
[{
"name": "upn",
"source": null,
"essential": false,
"additionalProperties": []
},
{
"name": "email",
"source": null,
"essential": false,
"additionalProperties": []
}]
EOF
Update the Azure Active Directory with a manifest
az ad app update \
--set optionalClaims.idToken=@manifest.json \
--id <AppId>
Update Application permission scope
az ad app permission add \
--api 00000002-0000-0000-c000-000000000000 \
--api-permissions 311a71cc-e848-46a1-bdf8-97ff7156d8e6=Scope \ 
--id <AppId>
Grant admin consent
  1. Log in to the Azure portal
  2. Go to Azure Active Directory
  3. Click App Registrations
  4. Click “All Applications” and search for the newly created application name
  5. Click onto the display name of the application
  6. Click view API permissions
  7. Click on the “check” to grant admin consent for directory
Add service principal
$ az ad sp create-for-rbac --role Contributor --name poc-aro-sp
You will need the “appId” from the output of the above command; that is the appId of the service principal.
$ az role assignment create --role "User Access Administrator" \
--assignee-object-id $(az ad sp list --filter "appId eq '<service-principal-appid>'" \
| jq '.[0].objectId' -r)
$ az ad app permission add --id <appId> \
--api 00000002-0000-0000-c000-000000000000 \
--api-permissions 824c81eb-e3f8-4ee6-8f6d-de7f50d565b7=Role
This will output the following command, as shown below.
$ az ad app permission grant --id <appid> --api 00000002-0000-0000-c000-000000000000
I also granted admin consent for the API permission of the service principal.
Create secret for identity provider on OpenShift
oc create secret generic openid-client-secret-azuread \
--namespace openshift-config \
--from-literal=clientSecret=<your password>
Create YAML for identity provider for AAD
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: AAD
    mappingMethod: claim
    type: OpenID
    openID:
      clientID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
      clientSecret:
        name: openid-client-secret-azuread
      extraScopes:
      - email
      - profile
      extraAuthorizeParameters:
        include_granted_scopes: "true"
      claims:
        preferredUsername:
        - email
        - upn
        name:
        - name
        email:
        - email
      issuer: https://login.microsoftonline.com/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Note:
  • The clientID is the AppId of your registered application.
  • Issuer URL is https://login.microsoftonline.com/<tenant id>.
  • The clientSecret is using the secret (openid-client-secret-azuread) that you created from the previous step.
Alternatively, you can obtain the clientID and tenant id from Azure Portal.
  • Login Azure Portal
  • Click Home
  • Click Azure Active Directory
  • Click App registrations on the left menu
  • Click all applications tab
  • Type the application that you just created in the search area
  • Click onto the application (my application is poc-aro-auth)
  • Under Overview, the information is shown as “Application (client) ID” and “Directory (tenant) ID”, as in the image below.
Update OpenShift OAuth Configuration
oc apply -f openid.yaml
Log in to the OpenShift console via AAD
It will redirect you to the Azure login page.

Troubleshooting

Tip #1: If you are getting errors, you can log in as kubeadmin and check the logs of the oauth-openshift pods under the openshift-authentication project.
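A quick way to pull those logs, assuming the default app=oauth-openshift pod label:
oc logs -n openshift-authentication -l app=oauth-openshift --tail=100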

Tip #2: If you are creating a new registered application to try again, make sure you clean up the user and identity.
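A hedged sketch of that cleanup; the user and identity names are placeholders you would take from the oc get output:
# List the user and identity objects created by the earlier login attempt
oc get users
oc get identity
# Remove them so the next login with the new application starts clean
oc delete user <username>
oc delete identity <idp name>:<identity id>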

Reference

Azure OpenShift 4 documentation

ARO and Azure Active Directory integration

OpenShift 4.3 – Configuring Metering to use AWS Billing information

My task was to figure out how to configure Metering to correlate with AWS billing. The OpenShift documentation in the reference is where I started. I decided to record the end-to-end steps of how I set this up, since there were some lessons learned along the way. I hope this helps you set up Metering with AWS billing much more smoothly.

Prerequisites:

Setting up AWS Report

  1. Before creating anything, you need to have data in the Billing & Cost Management Dashboard already.
  2. If you have a brand new account, you may have to wait until some data shows up before you proceed. You will need access to Cost & Usage Reports under AWS Billing to set up the report.
  3. Log in to AWS, go to My Billing Dashboard
  4. Click Cost & Usage Reports
  5. Click Create reports
  6. Provide a name and check Include resource IDs
  7. Click Next
  8. Click Configure → add the S3 bucket name and Region → click Next
  9. Provide `prefix` and select your options for your report → Click Next
  10. Once created, you will see a report listed similar to the following.
  11. Click on the S3 bucket and validate that reports are being created under the folder.
  12. Click Permissions tab
  13. Click Bucket Policy
  14. Copy and save the bucket policy somewhere you can get back to (an example of what it typically looks like follows this list)
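For reference, the bucket policy that AWS generates for a Cost & Usage Report bucket typically looks roughly like the following; the bucket name is a placeholder:

{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "billingreports.amazonaws.com" },
      "Action": ["s3:GetBucketAcl", "s3:GetBucketPolicy"],
      "Resource": "arn:aws:s3:::<YOUR S3 BUCKET NAME FOR BILLING REPORT>"
    },
    {
      "Effect": "Allow",
      "Principal": { "Service": "billingreports.amazonaws.com" },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::<YOUR S3 BUCKET NAME FOR BILLING REPORT>/*"
    }
  ]
}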

Setting up AWS user permission policy

  1. Go to My Security Credentials
  2. Click Users → click the username that will be used for accessing the reports and for OpenShift Metering.
  3. Click Add Permissions → Attach existing policies directly → Create policy → click JSON
  4. Paste the bucket policy from the Cost & Usage Report S3 bucket (step #14 in the last section).
  5. Use the same step to add the following policy:
    { 
      "Version": "2012-10-17", 
      "Statement": [ 
      { 
        "Sid": "1", 
        "Effect": "Allow", 
        "Action": [ 
          "s3:AbortMultipartUpload", 
          "s3:DeleteObject", 
          "s3:GetObject", 
          "s3:HeadBucket", 
          "s3:ListBucket", 
          "s3:ListMultipartUploadParts", 
          "s3:PutObject" 
         ], 
         "Resource": [ 
            "arn:aws:s3:::<YOUR S3 BUCKET NAME FOR BILLING REPORT>/*",  
            "arn:aws:s3:::<YOUR S3 BUCKET NAME FOR BILLING REPORT>"  
          ] 
        } 
        ] 
    }
  6. Since I am using an S3 bucket for the Metering storage, I also added the following policy to the user (a CLI alternative is shown after this list):
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "1",
                "Effect": "Allow",
                "Action": [
                    "s3:AbortMultipartUpload",
                    "s3:DeleteObject",
                    "s3:GetObject",
                    "s3:HeadBucket",
                    "s3:ListBucket",
                    "s3:CreateBucket",
                    "s3:DeleteBucket",
                    "s3:ListMultipartUploadParts",
                    "s3:PutObject"
                ],
                "Resource": [
                    "arn:aws:s3:::<YOUR S3 BUCKET NAME FOR METERING STORAGE>/*",
                    "arn:aws:s3:::<YOUR S3 BUCKET NAME FOR METERING STORAGE>"
                ]
            }
        ]
    }
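
As an alternative to the console steps above, the same inline policies can be attached with the AWS CLI; a sketch with hypothetical user, policy, and file names:

# attach the policy JSON from step 5 (saved locally) as an inline policy on the metering user
aws iam put-user-policy \
  --user-name <metering user> \
  --policy-name openshift-metering-billing-s3 \
  --policy-document file://metering-s3-policy.json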

Configuration:

Install Metering Operator

  1. Log in to the OpenShift Container Platform web console as cluster-admin, click Administration → Namespaces → Create Namespace
  2. Enter openshift-metering
  3. Add openshift.io/cluster-monitoring=true as a label → click Create.
  4. Click Compute → Machine Sets
  5. If you are like me, the cluster is using the default configuration on AWS. In my test, I added one more worker per AZ.
  6. I noticed that one of the Metering pods requires more resources, and the standard size may not be big enough. I created an m5.2xlarge machine set, which only needs 1 replica.
    1. Create a template machine-set YAML:
      oc project openshift-machine-api
      oc get machineset poc-p6czj-worker-us-west-2a -o yaml > m52xLms.yaml
    2. Modify the YAML file by updating the name of the machine set and instance type, removing the status, timestamp, id, selflink, etc… Here is my example of a machine set for m5.2xlarge.
      apiVersion: machine.openshift.io/v1beta1
      kind: MachineSet
      metadata:
        labels:
          machine.openshift.io/cluster-api-cluster: poc-p6czj
        name: poc-p6czj-xl-worker-us-west-2a
        namespace: openshift-machine-api
      spec:
        replicas: 1
        selector:
          matchLabels:
            machine.openshift.io/cluster-api-cluster: poc-p6czj
            machine.openshift.io/cluster-api-machineset: poc-p6czj-xl-worker-us-west-2a
        template:
          metadata:
            creationTimestamp: null
            labels:
              machine.openshift.io/cluster-api-cluster: poc-p6czj
              machine.openshift.io/cluster-api-machine-role: worker
              machine.openshift.io/cluster-api-machine-type: worker
              machine.openshift.io/cluster-api-machineset: poc-p6czj-xl-worker-us-west-2a
          spec:
            metadata:
              creationTimestamp: null
            providerSpec:
              value:
                ami:
                  id: ami-0f0fac946d1d31e97
                apiVersion: awsproviderconfig.openshift.io/v1beta1
                blockDevices:
                - ebs:
                    iops: 0
                    volumeSize: 120
                    volumeType: gp2
                credentialsSecret:
                  name: aws-cloud-credentials
                deviceIndex: 0
                iamInstanceProfile:
                  id: poc-p6czj-worker-profile
                instanceType: m5.2xlarge
                kind: AWSMachineProviderConfig
                metadata:
                  creationTimestamp: null
                placement:
                  availabilityZone: us-west-2a
                  region: us-west-2
                publicIp: null
                securityGroups:
                - filters:
                  - name: tag:Name
                    values:
                    - poc-p6czj-worker-sg
                subnet:
                  filters:
                  - name: tag:Name
                    values:
                    - poc-p6czj-private-us-west-2a
                tags:
                - name: kubernetes.io/cluster/poc-p6czj
                  value: owned
                userDataSecret:
                  name: worker-user-data
    3. Run:
      oc create -f m52xLms.yaml
      # wait for the new m5.2xlarge machine to be created
      oc get machineset
  7. Create a secret for accessing the AWS account. Make sure you are cluster-admin and run the following commands:
    oc project openshift-metering
    oc create secret -n openshift-metering generic my-aws-secret --from-literal=aws-access-key-id=<YOUR AWS KEY> --from-literal=aws-secret-access-key=<YOUR AWS SECRET>
  8. Back in the console, click Operators → OperatorHub and type `metering` in the filter to find the Metering Operator.
  9. Click Metering (provided by Red Hat), review the package description, and then click Install.
  10. Under Installation Mode, select openshift-metering as the namespace. Specify your update channel and approval strategy, then click Subscribe to install Metering.
  11. Click Installed Operators in the left menu and wait until Succeeded is shown as the status next to the Metering Operator.
  12. Click Workloads → Pods → verify the metering-operator pod is in the Running state
  13. Go back to your terminal, run:
    oc project openshift-metering
  14. We are now ready to create the MeteringConfig object. Create a file `metering-config.yaml` as shown below. See the reference for more details on the MeteringConfig object.
    apiVersion: metering.openshift.io/v1
    kind: MeteringConfig
    metadata:
      name: operator-metering
      namespace: openshift-metering
    spec:
      openshift-reporting:
        spec:
          awsBillingReportDataSource:
            enabled: true
            bucket: "logs4reports"
            prefix: "bubble/ocpreports/"
            region: "us-west-2"
      storage:
        type: hive
        hive:
          s3:
            bucket: shanna-meter/demo
            createBucket: true
            region: us-west-2
            secretName: my-aws-secret
          type: s3
      presto:
        spec:
          config:
            aws:
              secretName: my-aws-secret
      hive:
        spec:
          config:
            aws:
              secretName: my-aws-secret
      reporting-operator:
        spec:
          config:
            aws:
              secretName: my-aws-secret
          resources:
            limits:
              cpu: 1
              memory: 500Mi
            requests:
              cpu: 500m
              memory: 100Mi
  15. Create MeteringConfig:
    oc create -f metering-config.yaml
  16. To monitor the process:
    watch 'oc get pod'
  17. Wait until you see all pods are up and running:
    $ oc get pods
    NAME                              READY STATUS   RESTARTS AGE
    hive-metastore-0                   2/2   Running 0        2m35s
    hive-server-0                      3/3   Running 0        2m36s
    metering-operator-69b664dc57-knd86 2/2   Running 0        31m
    presto-coordinator-0               2/2   Running 0        2m8s
    reporting-operator-674cb5d7b-zxwf4 1/2   Running 0        96s
  18. Verify the AWS report data source:
    $ oc get reportdatasource |grep aws
    aws-billing                                                                                                                                                     3m41s
    aws-ec2-billing-data-raw
  19. Verify the AWS report queries:
    $ oc get reportquery |grep aws
    aws-ec2-billing-data                         5m19s
    aws-ec2-billing-data-raw                     5m19s
    aws-ec2-cluster-cost                         5m19s
    pod-cpu-request-aws                          5m19s
    pod-cpu-usage-aws                            5m19s
    pod-memory-request-aws                       5m18s
    pod-memory-usage-aws                         5m18s

    For more information about the ReportDataSource and the ReportQuery​, please check out the GitHub link in the reference.

  20. Create reports to get AWS billing from the following YAML:
    apiVersion: metering.openshift.io/v1
    kind: Report
    metadata:
      name: pod-cpu-request-billing-run-once
    spec:
      query: "pod-cpu-request-aws"
      reportingStart: '2020-04-12T00:00:00Z'
      reportingEnd: '2020-04-30T00:00:00Z'
      runImmediately: true
    ---
    apiVersion: metering.openshift.io/v1
    kind: Report
    metadata:
      name: pod-memory-request-billing-run-once
    spec:
      query: "pod-memory-request-aws"
      reportingStart: '2020-04-12T00:00:00Z'
      reportingEnd: '2020-04-30T00:00:00Z'
      runImmediately: true
  21. Create reports (status as `RunImmediately`):
    $ oc create -f aws-reports.yaml
    $ oc get reports
    NAME                                  QUERY                    SCHEDULE   RUNNING          FAILED   LAST REPORT TIME   AGE
    pod-cpu-request-billing-run-once      pod-cpu-request-aws                 RunImmediately                               5s
    pod-memory-request-billing-run-once   pod-memory-request-aws              RunImmediately                               5s
  22. Wait until reports are completed (status as `Finished`):
    $ oc get reports
    NAME                                  QUERY                    SCHEDULE   RUNNING    FAILED   LAST REPORT TIME       AGE
    pod-cpu-request-billing-run-once      pod-cpu-request-aws                 Finished            2020-04-30T00:00:00Z   79s
    pod-memory-request-billing-run-once   pod-memory-request-aws              Finished            2020-04-30T00:00:00Z   79s
  23. I created a simple script (viewReport.sh), shown below, to view any report. It takes $1 as the name of the report from oc get reports.
    #!/bin/bash
    # viewReport.sh: fetch a Metering report as CSV via the metering route
    reportName=$1
    reportFormat=csv
    token="$(oc whoami -t)"
    meteringRoute="$(oc get routes metering -o jsonpath='{.spec.host}')"
    curl --insecure -H "Authorization: Bearer ${token}" "https://${meteringRoute}/api/v1/reports/get?name=${reportName}&namespace=openshift-metering&format=$reportFormat"
  24. Before running the script, please make sure you have a valid token via oc whoami -t
  25. View a report by running the simple script from step #23:
    ./viewReport.sh pod-cpu-request-billing-run-once
    period_start,period_end,pod,namespace,node,pod_request_cpu_core_seconds,pod_cpu_usage_percent,pod_cost
    2020-04-12 00:00:00 +0000 UTC,2020-04-30 00:00:00 +0000 UTC,alertmanager-main-0,openshift-monitoring,ip-10-0-174-47.us-west-2.compute.internal,792.000000,0.006587,
    2020-04-12 00:00:00 +0000 UTC,2020-04-30 00:00:00 +0000 UTC,alertmanager-main-1,openshift-monitoring,ip-10-0-138-24.us-west-2.compute.internal,792.000000,0.006587,
    2020-04-12 00:00:00 +0000 UTC,2020-04-30 00:00:00 +0000 UTC,alertmanager-main-2,openshift-monitoring,ip-10-0-148-172.us-west-2.compute.internal,792.000000,0.006587,
    2020-04-12 00:00:00 +0000 UTC,2020-04-30 00:00:00 +0000 UTC,apiserver-9dhcr,openshift-apiserver,ip-10-0-157-2.us-west-2.compute.internal,1080.000000,0.008982,
    2020-04-12 00:00:00 +0000 UTC,2020-04-30 00:00:00 +0000 UTC,apiserver-fr7w5,openshift-apiserver,ip-10-0-171-27.us-west-2.compute.internal,1080.000000,0.008982,
    2020-04-12 00:00:00 +0000 UTC,2020-04-30 00:00:00 +0000 UTC,apiserver-sdlsj,openshift-apiserver,ip-10-0-139-242.us-west-2.compute.internal,1080.000000,0.008982,
    2020-04-12 00:00:00 +0000 UTC,2020-04-30 00:00:00 +0000 UTC,apiservice-cabundle-injector-54ff756f6d-f4vl6,openshift-service-ca,ip-10-0-157-2.us-west-2.compute.internal,72.000000,0.000599,
    2020-04-12 00:00:00 +0000 UTC,2020-04-30 00:00:00 +0000 UTC,authentication-operator-6d865c4957-2jsql,openshift-authentication-operator,ip-10-0-171-27.us-west-2.compute.internal,72.000000,0.000599,
    2020-04-12 00:00:00 +0000 UTC,2020-04-30 00:00:00 +0000 UTC,catalog-operator-868fd6ddb5-rmfk7,openshift-operator-lifecycle-manager,ip-10-0-139-242.us-west-2.compute.internal,72.000000,0.000599,
    2020-04-12 00:00:00 +0000 UTC,2020-04-30 00:00:00 +0000 UTC,certified-operators-58874b4f86-rcbsl,openshift-marketplace,ip-10-0-148-172.us-west-2.compute.internal,20.400000,0.000170,
    2020-04-12 00:00:00 +0000 UTC,2020-04-30 00:00:00 +0000 UTC,certified-operators-5b86f97d6f-pcvqk,openshift-marketplace,ip-10-0-148-172.us-west-2.compute.internal,16.800000,0.000140,
    2020-04-12 00:00:00 +0000 UTC,2020-04-30 00:00:00 +0000 UTC,certified-operators-5fdf46bd6d-hhtqd,openshift-marketplace,ip-10-0-148-172.us-west-2.compute.internal,37.200000,0.000309,
    2020-04-12 00:00:00 +0000 UTC,2020-04-30 00:00:00 +0000 UTC,cloud-credential-operator-868c5f9f7f-tw5pn,openshift-cloud-credential-operator,ip-10-0-157-2.us-west-2.compute.internal,72.000000,0.000599,
    2020-04-12 00:00:00 +0000 UTC,2020-04-30 00:00:00 +0000 UTC,cluster-autoscaler-operator-74b5d8858b-bwtfc,openshift-machine-api,ip-10-0-139-242.us-west-2.compute.internal,144.000000,0.001198,
    2020-04-12 00:00:00 +0000 UTC,2020-04-30 00:00:00 +0000 UTC,cluster-image-registry-operator-9754995-cqm7v,openshift-image-registry,ip-10-0-139-242.us-west-2.compute.internal,144.000000,0.001198,
    ...
  26. The output from the previous step is not very readable, so I downloaded it into a file instead (a quick awk summary is shown after this list).
    ./viewReport.sh pod-cpu-request-billing-run-once > aws-pod-cpu-billing.txt
  27. Import the output file into a spreadsheet as shown below: Screen Shot 2020-04-30 at 11.02.25 PM.png
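
If you just want a quick sanity check from the terminal instead of a spreadsheet, a small awk one-liner works, assuming the column layout shown in step #25 where pod_request_cpu_core_seconds is the 6th column:

# sum the CPU core seconds requested across all pods, skipping the CSV header
awk -F, 'NR > 1 { total += $6 } END { printf "total cpu core seconds: %.2f\n", total }' aws-pod-cpu-billing.txt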

Troubleshoot:

The most useful log for debugging any report issues is the reporting-operator log; an example of pulling it is shown below.
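
A minimal sketch, assuming the default deployment and container names created by the Metering operator; adjust if yours differ:

oc logs -n openshift-metering deploy/reporting-operator -c reporting-operator --tail=100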

Reference:

OpenShift metering documentation: https://docs.openshift.com/container-platform/4.3/metering/metering-about-metering.html

Configure AWS Billing Correlation: https://docs.openshift.com/container-platform/4.6/metering/configuring_metering/metering-configure-aws-billing-correlation.html

Additional information: https://github.com/kube-reporting/metering-operator/blob/master/Documentation/metering-architecture.md

OpenShift4.3: Retest Static IP configuration on vSphere

I learned some lessons from the last test (https://shanna-chan.blog/2019/07/26/openshift4-vsphere-static-ip/), and I got questions asking for clarification on using static IPs. My apologies for the confusion from my last test, since it was done without any real documentation. I want to record all my errors so I can help others troubleshoot.

Anyway, I decided to retest the installation of OCP 4.3 using static IPs. The goal is to clarify the installation instructions from my last blog if you are trying to install OCP4 on a VMware environment manually using static IPs.

Environment:

Screen Shot 2020-03-16 at 2.22.46 PM.png

  • OCP 4.3.5
  • vSphere 6.7

 

List of VMs:

  • Bootstrap 192.168.1.110
  • Master0 192.168.1.111
  • Master1 192.168.1.112
  • Master2 192.168.1.113
  • Worker0 192.168.1.114
  • Worker1 192.168.1.115

Prerequisites:

The following components are already running in my test environment.

DNS Server

  1. Add a zone to /etc/named.conf. An example can be found here: https://github.com/christianh814/openshift-toolbox/blob/master/ocp4_upi/docs/0.prereqs.md#dns
  2. Configure the zone files for all the DNS entries. An example configuration is shown below (a quick dig check follows the zone file).
    ; The api points to the IP of your load balancer
    api.ocp43	IN	A	192.168.1.72
    api-int.ocp43	IN	A	192.168.1.72
    ;
    ; The wildcard also points to the load balancer
    *.apps.ocp43	IN	A	192.168.1.72
    ;
    ; Create entry for the bootstrap host
    bootstrap0.ocp43	IN	A	192.168.1.110
    ;
    ; Create entries for the master hosts
    master01.ocp43	IN	A	192.168.1.111
    master02.ocp43	IN	A	192.168.1.112
    master03.ocp43	IN	A	192.168.1.113
    ;
    ; Create entries for the worker hosts
    worker01.ocp43	IN	A	192.168.1.114
    worker02.ocp43	IN	A	192.168.1.115
    ;
    ; The ETCd cluster lives on the masters...so point these to the IP of the masters
    etcd-0.ocp43	IN	A	192.168.1.111
    etcd-1.ocp43	IN	A	192.168.1.112
    etcd-2.ocp43	IN	A	192.168.1.113
    ;
    ; The SRV records are IMPORTANT....make sure you get these right...note the trailing dot at the end...
    _etcd-server-ssl._tcp.ocp43	IN	SRV	0 10 2380 etcd-0.ocp43.example.com.
    _etcd-server-ssl._tcp.ocp43	IN	SRV	0 10 2380 etcd-1.ocp43.example.com.
    _etcd-server-ssl._tcp.ocp43	IN	SRV	0 10 2380 etcd-2.ocp43.example.com.
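
Before moving on, it is worth spot-checking the records with dig from the installer host; a quick sanity check, assuming the zone above is served for example.com:

dig +short api.ocp43.example.com
dig +short test.apps.ocp43.example.com
dig +short -t srv _etcd-server-ssl._tcp.ocp43.example.com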

Load balancer

  1. Update /etc/haproxy/haproxy.cfg with the cluster information. An example is shown below (validation commands follow the example).
    #---------------------------------------------------------------------
    
    listen stats
        bind *:9000
        mode http
        stats enable
        stats uri /
        monitor-uri /healthz
    
    #---------------------------------------------------------------------
    #Cluster ocp43 - static ip test
    frontend openshift-api-server
        bind *:6443
        default_backend openshift-api-server
        mode tcp
        option tcplog
    
    backend openshift-api-server
        balance source
        mode tcp
        #server bootstrap0.ocp43.example.com 192.168.1.110:6443 check
        server master01.ocp43.example.com 192.168.1.111:6443 check
        server master02.ocp43.example.com 192.168.1.112:6443 check
        server master03.ocp43.example.com 192.168.1.113:6443 check
    
    frontend machine-config-server
        bind *:22623
        default_backend machine-config-server
        mode tcp
        option tcplog
    
    backend machine-config-server
        balance source
        mode tcp
        # server bootstrap0.ocp43.example.com 192.168.1.110:22623 check
        server master01.ocp43.example.com 192.168.1.111:22623 check
        server master02.ocp43.example.com 192.168.1.112:22623 check
        server master03.ocp43.example.com 192.168.1.113:22623 check
    
    frontend ingress-http
        bind *:80
        default_backend ingress-http
        mode tcp
        option tcplog
    
    backend ingress-http
        balance source
        mode tcp
        server worker01.ocp43.example.com 192.168.1.114:80 check
        server worker02.ocp43.example.com 192.168.1.115:80 check
    
    frontend ingress-https
        bind *:443
        default_backend ingress-https
        mode tcp
        option tcplog
    
    backend ingress-https
        balance source
        mode tcp
        server worker01.ocp43.example.com 192.168.1.114:443 check
        server worker02.ocp43.example.com 192.168.1.115:443 check
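
After editing the config, I validate and start haproxy; a minimal sketch, where the SELinux boolean is only needed if SELinux is enforcing and blocks the non-standard ports:

# check the syntax, then enable and start haproxy
sudo haproxy -c -f /etc/haproxy/haproxy.cfg
sudo systemctl enable --now haproxy
# allow haproxy to use ports like 6443 and 22623 when SELinux is enforcing
sudo setsebool -P haproxy_connect_any 1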

Web Server

  1. Configure a web server. In my example, I configure httpd on a RHEL VM (see the note about port 8080 after these commands).
yum -y install httpd
systemctl enable --now httpd
firewall-cmd --add-port=8080/tcp --permanent
firewall-cmd --reload
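
One gotcha: a stock httpd install listens on port 80, while the image and ignition URLs later in this post use port 8080, so the Listen directive needs to change. A minimal sketch, assuming the default /etc/httpd/conf/httpd.conf:

# switch httpd to port 8080 to match the coreos.inst.*_url values used later
sudo sed -i 's/^Listen 80$/Listen 8080/' /etc/httpd/conf/httpd.conf
sudo systemctl restart httpd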

Installation downloads

Installation Using Static IP address

Prepare installation

  1. Generate SSH key:
    $ ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/vsphere-ocp43
  2. Start ssh-agent:
    $ eval "$(ssh-agent -s)"
  3.  Add ssh private key to the ssh-agent:
    $ ssh-add ~/.ssh/vsphere-ocp43
    Identity added: /Users/shannachan/.ssh/vsphere-ocp43 (shannachan@MacBook-Pro)
  4. Download & extract OpenShift Installer:
    wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/4.3.5/openshift-install-mac-4.3.5.tar.gz
    tar zxvf openshift-install-mac-4.3.5.tar.gz
  5. Download & extract OpenShift CLI:
    wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/4.3.5/openshift-client-mac-4.3.5.tar.gz
    tar zxvf openshift-client-mac-4.3.5.tar.gz
  6. Copy or download the pull secret from cloud.redhat.com
    1. Go to cloud.redhat.com
    2. Log in with your credentials (create an account if you don’t have one)
    3. Click “Create Cluster”
    4. Click OpenShift Container Platform
    5. Scroll down and click “VMware vSphere”
    6. Click on “Download Pull Secret” to download the secret

Create Installation manifests and ignition files

  1. Create an installation directory:
    mkdir ocp43
  2. Create `install-config.yaml` as shown below.
    apiVersion: v1
    baseDomain: example.com
    compute:
    - name: worker
      replicas: 0
    controlPlane:
      hyperthreading: Enabled
      name: master
      replicas: 3
    metadata:
      name: ocp43
    platform:
      vsphere:
        vcenter: 192.168.1.200
        username: vsphereadmin
        password: xxxx
        datacenter: Datacenter
        defaultDatastore: datastore3T
    pullSecret: '<copy your pull secret here>'
    sshKey: '<copy your public key here>'
  3. Back up install-config.yaml and copy it into the installation directory
  4. Generate Kubernetes manifests for the cluster:
    $./openshift-install create manifests --dir=./ocp43
    INFO Consuming Install Config from target directory
    WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings
  5. Modify <installation directory>/manifests/cluster-scheduler-02-config.yml
  6. Update mastersSchedulable to false
  7. Obtain Ignition files:
    $ ./openshift-install create ignition-configs --dir=./ocp43
    INFO Consuming Common Manifests from target directory
    INFO Consuming Worker Machines from target directory
    INFO Consuming Master Machines from target directory
    INFO Consuming OpenShift Install (Manifests) from target directory
    INFO Consuming Openshift Manifests from target directory
  8. Files that were created:
    $ tree ocp43
    ocp43
    ├── auth
    │   ├── kubeadmin-password
    │   └── kubeconfig
    ├── bootstrap.ign
    ├── master.ign
    ├── metadata.json
    └── worker.ign

Upload files to the webserver

  1. Upload rhcos-4.3.0-x86_64-metal.raw.gz to the web server location
  2. Upload all the ignition files to the webserver location
  3. Update the file permission on the *.ign files on the webserver:
    chmod 644 *.ign

Note: check and make sure that you can download the ignition files and the gz file from the webserver.
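
A quick way to verify is to request the files over HTTP from another machine; the host and port below match the webserver used in my kernel parameters, so adjust for your environment:

curl -I http://192.168.1.230:8080/bootstrap.ign
curl -I http://192.168.1.230:8080/rhcos-4.3.0-x86_64-metal.raw.gz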

Custom ISO

Create a custom ISO with the parameters that you need for each VM. You can skip this step if you plan to type all the kernel parameters by hand when prompted.

  1. Download rhcos-4.3.0-x86_64-installer.iso and rhcos-4.3.0-x86_64-metal.raw.gz
  2. Extract ISO to a temporary location:
    sudo mount rhcos-4.3.0-x86_64-installer.iso /mnt/
    mkdir /tmp/rhcos 
    rsync -a /mnt/* /tmp/rhcos/ 
    cd /tmp/rhcos 
    vi isolinux/isolinux.cfg
  3. Modify the boot entry similar to this:
    label linux
      menu label ^Install RHEL CoreOS
      kernel /images/vmlinuz
      append initrd=/images/initramfs.img nomodeset rd.neednet=1 coreos.inst=yes ip=192.168.1.110::192.168.1.1:255.255.255.0:bootstrap0.ocp43.example.com:ens192:none nameserver=192.168.1.188 coreos.inst.install_dev=sda coreos.inst.image_url=http://192.168.1.230:8080/rhcos-4.3.0-x86_64-metal.raw.gz coreos.inst.ignition_url=http://192.168.1.230:8080/bootstrap.ign

    where:

    ip=<ip address of the VM>::<gateway>:<netmask>:<hostname of the VM>:<interface>:none

    nameserver=<DNS>

    coreos.inst.image_url=http://<webserver host:port>/rhcos-4.3.0-x86_64-metal.raw.gz

    coreos.inst.ignition_url=http://<webserver host:port>/<bootstrap, master or worker ignition>.ign

  4. Create new ISO as /tmp/rhcos_install.iso:
    sudo mkisofs -U -A "RHCOS-x86_64" -V "RHCOS-x86_64" -volset "RHCOS-x86_64" -J -joliet-long -r -v -T -x ./lost+found -o /tmp/rhcos_install.iso -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -eltorito-alt-boot -e images/efiboot.img -no-emul-boot .
  5.  Upload all the custom ISOs to the datastore for VM creation via vCenter
  6. Repeat the steps for all VMs with the specific IP and ignition file. You only need to create an individual ISO for each VM in the cluster if you don’t want to type the kernel parameters at the prompt when installing via the ISO. I would recommend that, since it actually takes less time than typing the kernel parameters each time (a scripted sketch is shown after this list).
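
Here is the scripted sketch mentioned in step #6. It assumes you saved a copy of isolinux/isolinux.cfg with @IP@, @HOSTNAME@, and @IGN@ placeholders as isolinux/isolinux.cfg.tmpl inside /tmp/rhcos; the template name and placeholders are my own convention, not part of the installer:

cd /tmp/rhcos
while read name ip ign; do
  # fill in the per-VM values, then rebuild the ISO with the same mkisofs options as step 4
  sed "s/@IP@/${ip}/; s/@HOSTNAME@/${name}.ocp43.example.com/; s/@IGN@/${ign}/" \
    isolinux/isolinux.cfg.tmpl > isolinux/isolinux.cfg
  sudo mkisofs -U -A "RHCOS-x86_64" -V "RHCOS-x86_64" -volset "RHCOS-x86_64" -J -joliet-long \
    -r -v -T -x ./lost+found -o /tmp/rhcos_install_${name}.iso -b isolinux/isolinux.bin \
    -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table \
    -eltorito-alt-boot -e images/efiboot.img -no-emul-boot .
done <<'EOF'
bootstrap0 192.168.1.110 bootstrap.ign
master01   192.168.1.111 master.ign
master02   192.168.1.112 master.ign
master03   192.168.1.113 master.ign
worker01   192.168.1.114 worker.ign
worker02   192.168.1.115 worker.ign
EOF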

Create VM using custom ISO

  1. Create a resource folder
    • Action -> New folder -> New VM or Template folder
    • I normally give the name as the cluster id
  2. Create a VM with 4 CPUs and 16 GB RAM
    • Action -> New Virtual Machine
    • Select Create New Virtual Machine -> click Next
    • Add name
    • Select the VM folder -> Next
    • Select datacenter -> Next
    • Select storage -> Next
    • Use ESXi 6.7 -> Next
    • Select Linux and RHEL 7 -> Next
    • Use these parameters:
      • CPU: 4
      • Memory: 16 GB (Reserve all guest memory)
      • 120 GB disk
      • Select the corresponding ISO from Datastore and check “connect”
      • VM Options -> Advanced -> Edit Configuration -> Add Configuration Params -> Add “disk.EnableUUID”: specify TRUE
      • Click OK
      • Click Next
      • Click Finish
  3. Power on the bootstrap, master, and worker VMs following the steps below
  4. Go to the VM console: Screen Shot 2020-03-04 at 12.27.44 PM.png
  5. Hit Enter
  6. You should see the login screen once the VM boots successfully: Screen Shot 2020-03-04 at 12.34.04 PM.png
  7. Repeat on all servers and make sure the specific ISO for the given VM is used.

Tip: you can clone an existing VM and just modify the ISO file for each new VM.

Creating Cluster

  1. Monitor the cluster:
    ./openshift-install --dir=<installation_directory> wait-for bootstrap-complete --log-level=info
    INFO Waiting up to 30m0s for the Kubernetes API at https://api.ocp43.example.com:6443...
    INFO API v1.16.2 up
    INFO Waiting up to 30m0s for bootstrapping to complete...
    INFO It is now safe to remove the bootstrap resources
  2.  From the bootstrap VM, similar log messages are shown:
    $ ssh -i ~/.ssh/vsphere-ocp43 core@bootstrap-vm
    $ journalctl -b -f -u bootkube.service
    ...
    Mar 16 20:03:57 bootstrap0.ocp43.example.com bootkube.sh[2816]: Tearing down temporary bootstrap control plane...
    Mar 16 20:03:57 bootstrap0.ocp43.example.com podman[18629]: 2020-03-16 20:03:57.232567868 +0000 UTC m=+726.128069883 container died 695412d7eece5a9bd099aac5b6bc6a8d412c8037b14391ff54ee33132ebce0e1 (image=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:222fbfd3323ec347babbda1a66929019221fcee82cfc324a173b39b218cf6c4b, name=zen_lamarr)
    Mar 16 20:03:57 bootstrap0.ocp43.example.com podman[18629]: 2020-03-16 20:03:57.379721836 +0000 UTC m=+726.275223886 container remove 695412d7eece5a9bd099aac5b6bc6a8d412c8037b14391ff54ee33132ebce0e1 (image=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:222fbfd3323ec347babbda1a66929019221fcee82cfc324a173b39b218cf6c4b, name=zen_lamarr)
    Mar 16 20:03:57 bootstrap0.ocp43.example.com bootkube.sh[2816]: bootkube.service complete
  3. Check the load balancer status. You can check the status of the LB from the stats page (screenshot below).
  4. Remove the bootstrap from the load balancer (see the reload example after the screenshot).

LB.png
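
On the load balancer VM, removing the bootstrap is just a matter of commenting out the two bootstrap0 server lines in /etc/haproxy/haproxy.cfg (they are shown commented in the example earlier) and reloading:

sudo systemctl reload haproxy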

 

Logging in to the Cluster

  1. Export the kubeadmin credentials:
    export KUBECONFIG=./ocp43/auth/kubeconfig
  2.  Verify cluster role via oc CLI
    $ oc whoami
    system:admin
  3. Approving the CSRs
    $ oc get nodes
    NAME                         STATUS   ROLES    AGE   VERSION
    master01.ocp43.example.com   Ready    master   60m   v1.16.2
    master02.ocp43.example.com   Ready    master   60m   v1.16.2
    master03.ocp43.example.com   Ready    master   60m   v1.16.2
    worker01.ocp43.example.com   Ready    worker   52m   v1.16.2
    worker02.ocp43.example.com   Ready    worker   51m   v1.16.2
    
    $ oc get csr
    NAME        AGE   REQUESTOR                                                                   CONDITION
    csr-66l6l   60m   system:node:master02.ocp43.example.com                                      Approved,Issued
    csr-8r2dc   52m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
    csr-hvt2d   51m   system:node:worker02.ocp43.example.com                                      Approved,Issued
    csr-k2ggg   60m   system:node:master03.ocp43.example.com                                      Approved,Issued
    csr-kg72s   52m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
    csr-qvbg2   60m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
    csr-rtncq   52m   system:node:worker01.ocp43.example.com                                      Approved,Issued
    csr-tsfxx   60m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
    csr-wn7rp   60m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
    csr-zl87q   60m   system:node:master01.ocp43.example.com                                      Approved,Issued
  4. If there are pending CSRs, approve them via the command below (a batch one-liner is shown after this list).
    oc adm certificate approve <csr_name>
  5. Validate that the cluster components are all available:
    $ oc get co
    NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
    authentication                             4.3.5     True        False         False      41m
    cloud-credential                           4.3.5     True        False         False      63m
    cluster-autoscaler                         4.3.5     True        False         False      47m
    console                                    4.3.5     True        False         False      43m
    dns                                        4.3.5     True        False         False      54m
    image-registry                             4.3.5     True        False         False      49m
    ingress                                    4.3.5     True        False         False      48m
    insights                                   4.3.5     True        False         False      58m
    kube-apiserver                             4.3.5     True        False         False      53m
    kube-controller-manager                    4.3.5     True        False         False      54m
    kube-scheduler                             4.3.5     True        False         False      54m
    machine-api                                4.3.5     True        False         False      55m
    machine-config                             4.3.5     True        False         False      55m
    marketplace                                4.3.5     True        False         False      48m
    monitoring                                 4.3.5     True        False         False      42m
    network                                    4.3.5     True        False         False      59m
    node-tuning                                4.3.5     True        False         False      50m
    openshift-apiserver                        4.3.5     True        False         False      51m
    openshift-controller-manager               4.3.5     True        False         False      55m
    openshift-samples                          4.3.5     True        False         False      46m
    operator-lifecycle-manager                 4.3.5     True        False         False      55m
    operator-lifecycle-manager-catalog         4.3.5     True        False         False      55m
    operator-lifecycle-manager-packageserver   4.3.5     True        False         False      51m
    service-ca                                 4.3.5     True        False         False      58m
    service-catalog-apiserver                  4.3.5     True        False         False      50m
    service-catalog-controller-manager         4.3.5     True        False         False      50m
    storage                                    4.3.5     True        False         False      49m
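
For step #4, if there are many pending CSRs, a commonly used one-liner approves everything that is still pending in one pass:

oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve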

Configure the Image Registry to use ephemeral storage for now.

I will cover the image registry in another blog since I want to focus on completing the installation.

To set emptyDir for the image registry:

oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'

Completing the installation:

$ ./openshift-install --dir=./ocp43 wait-for install-complete
INFO Waiting up to 30m0s for the cluster at https://api.ocp43.example.com:6443 to initialize...
INFO Waiting up to 10m0s for the openshift-console route to be created...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/Users/shannachan/projects/ocp4.3/ocp43/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.ocp43.example.com
INFO Login to the console with user: kubeadmin, password: xxxxxxxxxxxxxx

Congratulations, the cluster is up!

Screen Shot 2020-03-16 at 6.22.41 PM.png

Troubleshoot tips:

Access any server via the command below:

ssh -i ~/.ssh/vsphere-ocp43 core@vm-server
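
Once on a node, the unit logs I usually start with are the kubelet and CRI-O services; these unit names exist on RHCOS:

journalctl -b -f -u kubelet.service
journalctl -b -f -u crio.service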

Reference:

https://docs.openshift.com/container-platform/4.3/installing/installing_bare_metal/installing-bare-metal.html

https://docs.openshift.com/container-platform/4.3/installing/installing_vsphere/installing-vsphere.html

https://shanna-chan.blog/2019/07/26/openshift4-vsphere-static-ip/