Installing OpenShift using Temporary Credentials

One of the most frequently asked questions recently is how to install OpenShift on AWS with temporary credentials. The default OpenShift provisioning uses an AWS key and secret, which requires Administrator privileges. Temporary credentials usually refer to AWS Security Token Service (STS), which allows end users to assume an IAM role, resulting in short-lived credentials.

Developers or platform teams will require approval from their security team to access the company AWS account. It can be challenging in some organizations to get access to Administrator privileges.

OpenShift 4.7 support for AWS Security Token Service in manual mode is in Tech Preview. I decided to explore it a little deeper. The exercise is based on the information in the OpenShift documentation and the upstream repos. I am recording the notes from my test run, and I hope you will find them helpful.

OpenShift 4 version

OCP 4.7.9

Build sts-preflight binary

git clone https://github.com/sjenning/sts-preflight.git
go get github.com/sjenning/sts-preflight
cd <sts-preflight directory>
go build .

Getting the AWS STS

As an AWS administrator, I found the sts-preflight tool helpful in this exercise. The documentation has the manual steps, but I chose to use the sts-preflight tool here.

  • Create STS infrastructure in AWS:
./sts-preflight  create --infra-name <sts infra name> --region <aws region>

# ./sts-preflight  create --infra-name sc-example --region us-west-1
2021/04/28 13:24:42 Generating RSA keypair
2021/04/28 13:24:56 Writing private key to _output/sa-signer
2021/04/28 13:24:56 Writing public key to _output/sa-signer.pub
2021/04/28 13:24:56 Copying signing key for use by installer
2021/04/28 13:24:56 Reading public key
2021/04/28 13:24:56 Writing JWKS to _output/keys.json
2021/04/28 13:24:57 Bucket sc-example-installer created
2021/04/28 13:24:57 OIDC discovery document at .well-known/openid-configuration updated
2021/04/28 13:24:57 JWKS at keys.json updated
2021/04/28 13:24:57 OIDC provider created arn:aws:iam::##########:oidc-provider/s3.us-west-1.amazonaws.com/sc-example-installer
2021/04/28 13:24:57 Role created arn:aws:iam::##########:role/sc-example-installer
2021/04/28 13:24:58 AdministratorAccess attached to Role sc-example-installer
  • Create an OIDC token:
# ./sts-preflight token
2021/04/28 13:27:06 Token written to _output/token
  • Get STS credential:
# ./sts-preflight assume
Run these commands to use the STS credentials
export AWS_ACCESS_KEY_ID=<temporary key>
export AWS_SECRET_ACCESS_KEY=<temporary secret>
export AWS_SESSION_TOKEN=<session token>
  • The above short-lived key, secret, and token can be given to the person who is installing OpenShift.
  • Export all the AWS environment variables before proceeding to the installation.
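If the AWS CLI is available, a quick optional sanity check (not part of the sts-preflight flow) is to confirm the assumed-role identity before handing the credentials over:

# Assumes the three AWS_* variables above are exported in the current shell
aws sts get-caller-identity
# The returned Arn should look like arn:aws:sts::<account id>:assumed-role/<sts infra name>-installer/<session>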

Start the Installation

As a Developer or OpenShift Admin, you will receive the temporary credentials and export the AWS environment variables before installing the OCP cluster.

  • Extract the CredentialsRequest objects from the OpenShift release image:
# oc adm release extract quay.io/openshift-release-dev/ocp-release:4.7.9-x86_64 --credentials-requests --cloud=aws --to=./credreqs ; cat ./credreqs/*.yaml > credreqs.yaml
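For reference, each extracted file contains a CredentialsRequest object; an abbreviated sketch of one is shown below (names and permissions trimmed for illustration). sts-preflight turns each of these into an IAM role in a later step.

apiVersion: cloudcredential.openshift.io/v1
kind: CredentialsRequest
metadata:
  name: openshift-image-registry
  namespace: openshift-cloud-credential-operator
spec:
  secretRef:
    name: installer-cloud-credentials
    namespace: openshift-image-registry
  providerSpec:
    apiVersion: cloudcredential.openshift.io/v1
    kind: AWSProviderSpec
    statementEntries:
    - effect: Allow
      action:
      - s3:CreateBucket
      - s3:DeleteBucket
      resource: "*"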
  • Create install-config.yaml for installation:
# ./openshift-install create install-config --dir=./sc-sts
? SSH Public Key /root/.ssh/id_rsa.pub
? Platform aws
INFO Credentials loaded from default AWS environment variables
? Region us-east-1
? Base Domain sc.ocp4demo.live
? Cluster Name sc-sts
? Pull Secret [? for help] 
INFO Install-Config created in: sc-sts
  • Make sure that we install the cluster in Manual mode:
# cd sc-sts
# echo "credentialsMode: Manual" >> install-config.yaml
  • Create install manifests:
# cd ..
# ./openshift-install create manifests --dir=./sc-sts
  • Use the sts-preflight tool to create AWS resources. Make sure you are in the sts-preflight directory:
# ./sts-preflight create --infra-name sc-example --region us-west-1 --credentials-requests-to-roles ./credreqs.yaml
2021/04/28 13:45:34 Generating RSA keypair
2021/04/28 13:45:42 Writing private key to _output/sa-signer
2021/04/28 13:45:42 Writing public key to _output/sa-signer.pub
2021/04/28 13:45:42 Copying signing key for use by installer
2021/04/28 13:45:42 Reading public key
2021/04/28 13:45:42 Writing JWKS to _output/keys.json
2021/04/28 13:45:42 Bucket sc-example-installer already exists and is owned by us
2021/04/28 13:45:42 OIDC discovery document at .well-known/openid-configuration updated
2021/04/28 13:45:42 JWKS at keys.json updated
2021/04/28 13:45:43 Existing OIDC provider found arn:aws:iam::000000000000:oidc-provider/s3.us-west-1.amazonaws.com/sc-example-installer
2021/04/28 13:45:43 Existing Role found arn:aws:iam::000000000000:role/sc-example-installer
2021/04/28 13:45:43 AdministratorAccess attached to Role sc-example-installer
2021/04/28 13:45:43 Role arn:aws:iam::000000000000:role/sc-example-openshift-machine-api-aws-cloud-credentials created
2021/04/28 13:45:43 Saved credentials configuration to: _output/manifests/openshift-machine-api-aws-cloud-credentials-credentials.yaml
2021/04/28 13:45:43 Role arn:aws:iam::000000000000:role/sc-example-openshift-cloud-credential-operator-cloud-credential- created
2021/04/28 13:45:44 Saved credentials configuration to: _output/manifests/openshift-cloud-credential-operator-cloud-credential-operator-iam-ro-creds-credentials.yaml
2021/04/28 13:45:44 Role arn:aws:iam::000000000000:role/sc-example-openshift-image-registry-installer-cloud-credentials created
2021/04/28 13:45:44 Saved credentials configuration to: _output/manifests/openshift-image-registry-installer-cloud-credentials-credentials.yaml
2021/04/28 13:45:44 Role arn:aws:iam::000000000000:role/sc-example-openshift-ingress-operator-cloud-credentials created
2021/04/28 13:45:44 Saved credentials configuration to: _output/manifests/openshift-ingress-operator-cloud-credentials-credentials.yaml
2021/04/28 13:45:45 Role arn:aws:iam::000000000000:role/sc-example-openshift-cluster-csi-drivers-ebs-cloud-credentials created
2021/04/28 13:45:45 Saved credentials configuration to: _output/manifests/openshift-cluster-csi-drivers-ebs-cloud-credentials-credentials.yaml
  • Copy the generated manifest files and the tls directory from the sts-preflight/_output directory to the installation directory:
# cp sts-preflight/_output/manifests/* sc-sts/manifests/
# cp -a sts-preflight/_output/tls sc-sts/
  • I ran both ./sts-preflight token and ./sts-preflight assume again to make sure I had enough time to finish the installation.
  • Export the AWS environment variables.
  • I did not further restrict the role in my test.
  • Start provisioning an OCP cluster:
# ./openshift-install create cluster --log-level=debug --dir=./sc-sts
...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/root/mufg-sts/sc-sts-test/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.sc-sts-test.xx.live
INFO Login to the console with user: "kubeadmin", and password: "xxxxxxxxxxx"
DEBUG Time elapsed per stage:
DEBUG     Infrastructure: 7m28s
DEBUG Bootstrap Complete: 11m6s
DEBUG  Bootstrap Destroy: 1m21s
DEBUG  Cluster Operators: 12m28s
INFO Time elapsed: 32m38s

#Cluster was created successfully.
  • Verify the components are assuming the IAM roles:
# oc get secrets -n openshift-image-registry installer-cloud-credentials -o json | jq -r .data.credentials | base64 --decode
[default]
role_arn = arn:aws:iam::000000000000:role/sc-sts-test-openshift-image-registry-installer-cloud-credentials
web_identity_token_file = /var/run/secrets/openshift/serviceaccount/token
  • Adding and deleting worker nodes works as well:
Increasing the count on one of the MachineSets from the Administrator console provisioned a new worker node.
Decreasing the count on one of the MachineSets from the Administrator console deleted the worker node.
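The other components wired up by sts-preflight can be spot-checked the same way, and the bound-token issuer can be inspected as well. A minimal sketch (the secret name below is the default from the sts-preflight output above; adjust if yours differs):

# Spot-check another component, e.g. the machine-api credentials
oc get secrets -n openshift-machine-api aws-cloud-credentials -o json | jq -r .data.credentials | base64 --decode
# The issuer for bound service account tokens should point at the S3-hosted OIDC endpoint
oc get authentication cluster -o jsonpath='{.spec.serviceAccountIssuer}'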

Delete the Cluster

  • Obtain a new temporary credential:
cd <sts-preflight directory>
# ./sts-preflight token
2021/04/29 08:19:01 Token written to _output/token

# ./sts-preflight assume
Run these commands to use the STS credentials
export AWS_ACCESS_KEY_ID=<temporary key>
export AWS_SECRET_ACCESS_KEY=<temporary secret>
export AWS_SESSION_TOKEN=<session token>
  • Export all AWS environment variables using the output from the last step.
  • Delete the cluster:
# ./openshift-install destroy cluster --log-level=debug --dir=./sc-sts-test
DEBUG OpenShift Installer 4.7.9
DEBUG Built from commit fae650e24e7036b333b2b2d9dfb5a08a29cd07b1
INFO Credentials loaded from default AWS environment variables
DEBUG search for matching resources by tag in us-east-1 matching aws.Filter{"kubernetes.io/cluster/sc-sts-rj4pw":"owned"}
...
INFO Deleted                                       id=vpc-0bbacb9858fe280f9
INFO Deleted                                       id=dopt-071e7bf4cfcc86ad6
DEBUG search for matching resources by tag in us-east-1 matching aws.Filter{"kubernetes.io/cluster/sc-sts-test-rj4pw":"owned"}
DEBUG search for matching resources by tag in us-east-1 matching aws.Filter{"openshiftClusterID":"ab9baacf-a44f-47e8-8096-25df62c3b1dc"}
DEBUG no deletions from us-east-1, removing client
DEBUG search for IAM roles
DEBUG search for IAM users
DEBUG search for IAM instance profiles
DEBUG Search for and remove tags in us-east-1 matching kubernetes.io/cluster/sc-sts-test-rj4pw: shared
DEBUG No matches in us-east-1 for kubernetes.io/cluster/sc-sts-test-rj4pw: shared, removing client
DEBUG Purging asset "Metadata" from disk
DEBUG Purging asset "Master Ignition Customization Check" from disk
DEBUG Purging asset "Worker Ignition Customization Check" from disk
DEBUG Purging asset "Terraform Variables" from disk
DEBUG Purging asset "Kubeconfig Admin Client" from disk
DEBUG Purging asset "Kubeadmin Password" from disk
DEBUG Purging asset "Certificate (journal-gatewayd)" from disk
DEBUG Purging asset "Cluster" from disk
INFO Time elapsed: 4m39s

References

Red Hat OpenShift on Amazon (ROSA) is GA!

I have previously blogged about the pre-GA ROSA, and now it is GA. I decided to write up my GA experience on ROSA.

Let’s get started here.

Enable ROSA on AWS

After logging into AWS, enter openshift in the search box on the top of the page.

Click on the “Red Hat OpenShift Service on AWS” Service listed.

It will take you to a page as shown below; click to enable the OpenShift service.

Once it is complete, it will show Service enabled.

Click to download the CLI and click on the OS where you run your ROSA CLI. It will start downloading to your local drive.

Set up ROSA CLI

Extract the downloaded CLI file and add rosa to your local path.

tar zxf rosa-macosx.tar.gz
mv rosa /usr/local/bin/rosa
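To confirm the CLI is on your PATH (the exact output depends on the release you downloaded):

rosa version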

Setting AWS Account

I have set up my AWS account as an IAM user account with the proper access per the documentation. More information about the account access requirements for ROSA is available here.

I have configured my AWS key and secret in my .aws/credentials.
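For reference, the key and secret go in the standard AWS CLI credentials file; a minimal example with placeholder values looks like this:

# ~/.aws/credentials
[default]
aws_access_key_id = <your access key id>
aws_secret_access_key = <your secret access key>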

Create Cluster

Verify AWS account access.

rosa verify permissions

Returns:

I: Validating SCP policies...
I: AWS SCP policies ok

Verify the quota for the AWS account.

rosa verify quota --region=us-west-2

Returns:

I: Validating AWS quota...
I: AWS quota ok

Obtain an Offline Access Token from the management portal cloud.redhat.com (if you don’t have one yet) by clicking the Create One Now link.

Go to https://cloud.redhat.com/openshift/token/rosa; you will have to log in and will be prompted to accept the terms as shown below.

Click View Terms and Conditions.

Check the box to agree to the terms and click Submit.

Copy the token from cloud.redhat.com.

rosa login --token=<your cloud.redhat.com token>

Returns:

I: Logged in as 'your_username' on 'https://api.openshift.com'

Verify the login

rosa whoami

Returns:

AWS Account ID:               ############
AWS Default Region:           us-west-2
AWS ARN:                      arn:aws:iam::############:user/username
OCM API:                      https://api.openshift.com
OCM Account ID:               xxxxxyyyyyzzzzzwwwwwxxxxxx
OCM Account Name:             User Name
OCM Account Username:         User Name
OCM Account Email:            name@email.com
OCM Organization ID:          xxxxxyyyyyzzzzzwwwwwxxxxxx
OCM Organization Name:        company name
OCM Organization External ID: 11111111

Configure the account and make sure everything is set up correctly.

rosa init

Returns:

I: Logged in as 'your_username' on 'https://api.openshift.com'
I: Validating AWS credentials...
I: AWS credentials are valid!
I: Validating SCP policies...
I: AWS SCP policies ok
I: Validating AWS quota...
I: AWS quota ok
I: Ensuring cluster administrator user 'osdCcsAdmin'...
I: Admin user 'osdCcsAdmin' already exists!
I: Validating SCP policies for 'osdCcsAdmin'...
I: AWS SCP policies ok
I: Validating cluster creation...
I: Cluster creation valid
I: Verifying whether OpenShift command-line tool is available...
I: Current OpenShift Client Version: 4.7.2

Create a cluster using interactive mode.

rosa create cluster -i
I: Interactive mode enabled.
Any optional fields can be left empty and a default will be selected.
? Cluster name: [? for help] 

Enter the name of the ROSA cluster.

? Multiple availability zones (optional): [? for help] (y/N) 

Enter y/N.

? AWS region:  [Use arrows to move, type to filter, ? for more help]
  eu-west-2
  eu-west-3
  sa-east-1
  us-east-1
  us-east-2
  us-west-1
> us-west-2 

Select the AWS region and hit <enter>.

? OpenShift version:  [Use arrows to move, type to filter, ? for more help]
> 4.7.2
  4.7.1
  4.7.0
  4.6.8
  4.6.6
  4.6.4
  4.6.3

Select the version and hit <enter>.

? Install into an existing VPC (optional): [? for help] (y/N)

Enter y/N.

? Compute nodes instance type (optional):  [Use arrows to move, type to filter, ? for more help]
> r5.xlarge
  m5.xlarge
  c5.2xlarge
  m5.2xlarge
  r5.2xlarge
  c5.4xlarge
  m5.4xlarge

Select the type and hit <enter>.

? Enable autoscaling (optional): [? for help] (y/N)

Enter y/N.

? Compute nodes: [? for help] (2)

Enter the number of workers to start.

? Machine CIDR: [? for help] (10.0.0.0/16)

Enter the machine CIDR or use default.

? Service CIDR: [? for help] (172.30.0.0/16)

Enter the service CIDR or use default.

? Pod CIDR: [? for help] (10.128.0.0/14)

Enter the pod CIDR or use default.

? Host prefix: [? for help] (23)

Enter the host prefix or use default

? Private cluster (optional): (y/N) 

Enter y/N.

Note:

Choosing a private cluster restricts the master API endpoint and application routes to direct, private connectivity; you will not be able to access your cluster until you edit the network settings in your cloud provider. I also learned that, for the GA version of ROSA, an existing private VPC needs one private subnet and one public subnet for each AZ. There will be more improvements for private clusters in future releases.

Returns:

I: Creating cluster 'rosa-c1'
I: To create this cluster again in the future, you can run:
   rosa create cluster --cluster-name rosa-c1 --region us-west-2 --version 4.7.2 --compute-nodes 2 --machine-cidr 10.0.0.0/16 --service-cidr 172.30.0.0/16 --pod-cidr 10.128.0.0/14 --host-prefix 23
I: To view a list of clusters and their status, run 'rosa list clusters'
I: Cluster 'rosa-c1' has been created.
I: Once the cluster is installed you will need to add an Identity Provider before you can login into the cluster. See 'rosa create idp --help' for more information.
I: To determine when your cluster is Ready, run 'rosa describe cluster -c rosa-c1'.
I: To watch your cluster installation logs, run 'rosa logs install -c rosa-c1 --watch'.
Name:                       rosa-c1
ID:                         xxxxxxxxxxyyyyyyyyyyyaxxxxxxxxx
External ID:
OpenShift Version:
Channel Group:              stable
DNS:                        rosa-c1.xxxx.p1.openshiftapps.com
AWS Account:                xxxxxxxxxxxx
API URL:
Console URL:
Region:                     us-west-2
Multi-AZ:                   false
Nodes:
 - Master:                  3
 - Infra:                   2
 - Compute:                 2 (m5.xlarge)
Network:
 - Service CIDR:            172.30.0.0/16
 - Machine CIDR:            10.0.0.0/16
 - Pod CIDR:                10.128.0.0/14
 - Host Prefix:             /23
State:                      pending (Preparing account)
Private:                    No
Created:                    Mar 30 2021 03:10:25 UTC
Details Page:               https://cloud.redhat.com/openshift/details/xxxxxxxxxxyyyyyyyyyyyaxxxxxxxxx
 

Copy the URL from the Details Page into a browser and click View logs to see the status of the installation.

When the ROSA installation is complete, you will see a page similar to the one below.
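If you prefer the CLI to the web page, the same status is available with the commands suggested in the installer output above:

# Watch the install logs until the cluster is ready
rosa logs install -c rosa-c1 --watch
# Check the overall state (look for State: ready)
rosa describe cluster -c rosa-c1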

Next, you will need a way to access the OpenShift cluster.

Configure Quick Access

Add cluster-admin user

rosa create admin -c rosa-c1

Returns:

I: Admin account has been added to cluster 'rosa-c1'.
I: Please securely store this generated password. If you lose this password you can delete and recreate the cluster admin user.
I: To login, run the following command:

   oc login https://api.rosa-c1.xxxx.p1.openshiftapps.com:6443 --username cluster-admin --password xxxxx-xxxxx-xxxxx-xxxxx

I: It may take up to a minute for the account to become active.

Test user access

$ oc login https://api.rosa-c1.xxxx.p1.openshiftapps.com:6443 --username cluster-admin --password xxxxx-xxxxx-xxxxx-xxxxx
Login successful.

You have access to 86 projects, the list has been suppressed. You can list all projects with 'oc projects'

Using project "default".

Configure Identity Provider

There are several options for identity providers. I am using GitHub in this example.

I am not going to explain how to set up the identity provider itself here; I did that in my last blog. I will walk through the steps to configure ROSA to use GitHub.

rosa create idp --cluster=rosa-c1 -i
I: Interactive mode enabled.
Any optional fields can be left empty and a default will be selected.
? Type of identity provider:  [Use arrows to move, type to filter]
> github
  gitlab
  google
  ldap
  openid

Select one IDP

? Identity provider name: [? for help] (github-1)

Enter the name for the IDP to be configured on ROSA

? Restrict to members of:  [Use arrows to move, type to filter, ? for more help]
> organizations
  teams

Select organizations

? GitHub organizations:

Enter the name of the organization. My example is `sc-rosa-idp`

? To use GitHub as an identity provider, you must first register the application:
  - Open the following URL:
    https://github.com/organizations/sc-rosa-idp/settings/applications/new?oauth_application%5Bcallback_url%5D=https%3A%2F%2Foauth-openshift.apps.rosa-c1.0z3w.p1.openshiftapps.com%2Foauth2callback%2Fgithub-1&oauth_application%5Bname%5D=rosa-c1&oauth_application%5Burl%5D=https%3A%2F%2Fconsole-openshift-console.apps.rosa-c1.0z3w.p1.openshiftapps.com
  - Click on 'Register application'

Open a browser, use the above URL to register the application, and copy the Client ID

? Client ID: [? for help] 

Enter the copied Client ID

? Client Secret: [? for help]

Enter client secret from the registered application.

? GitHub Enterprise Hostname (optional): [? for help] 

Hit <enter>

? Mapping method:  [Use arrows to move, type to filter, ? for more help]
  add
> claim
  generate
  lookup

Select claim

I: Configuring IDP for cluster 'rosa-c1'
I: Identity Provider 'github-1' has been created.
   It will take up to 1 minute for this configuration to be enabled.
   To add cluster administrators, see 'rosa create user --help'.
   To login into the console, open https://console-openshift-console.apps.rosa-c1.xxxx.p1.openshiftapps.com and click on github-1.

Congratulations! The IDP configuration is complete.
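For repeat setups, the same IDP can also be created non-interactively. A sketch with the values from this example is shown below; the flag names are from the rosa CLI version I used, so double-check them with rosa create idp --help:

rosa create idp --cluster=rosa-c1 --type=github --name=github-1 \
  --client-id=<client id> --client-secret=<client secret> \
  --organizations=sc-rosa-idp --mapping-method=claim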

Log in with the IDP account

Open a browser with the URL from the IDP configuration. Our example is: https://console-openshift-console.apps.rosa-c1.xxxx.p1.openshiftapps.com.

Click github-1

Click Authorize sc-rosa-idp

Overall, it is straightforward to get started on creating a ROSA cluster on AWS. I hope this will help you in some ways.

Reference

Red Hat OpenShift on Amazon Documentation

Running ROSA on an Existing Private VPC

Pre-GA ROSA test

Test Run Pre-GA Red Hat OpenShift on AWS (ROSA)

I had an opportunity to try out the pre-GA ROSA. ROSA is a fully managed Red Hat OpenShift Container Platform (OCP) service sold through AWS. I am excited to share my experience with ROSA. It installs OCP 4 from soup to nuts without configuring a hosted zone or domain server. As a developer, you may want to get the cluster up and running quickly so you can start doing the real work :). There are customization options with ROSA, but I am going to leave those for later exploration.

I am going to show you the steps I took to create OCP via ROSA. There are more use cases to test. I hope this blog will give you a taste of ROSA.

Creating OpenShift Cluster using ROSA Command

Since it is a pre-GA version, I downloaded the ROSA command line tool from here and had aws-cli available where I ran the ROSA installation.

  • I am testing from my MacBook. I just moved the “rosa” command line tool to /usr/local/bin/.
  • Verify that your AWS account has the necessary permissions using rosa verify permissions:
  • Verify that your AWS account has the necessary quota to deploy a Red Hat OpenShift Service on AWS cluster via rosa verify quota --region=<region>:
  • Log in to your Red Hat account with the ROSA command using rosa login --token=<token from cloud.redhat.com>:
  • Verify the AWS setup using rosa whoami:
  • Initialize AWS for the cluster deployment via rosa init:
  • Since I have the OpenShift client command line installed, it shows the existing OpenShift client version. If you don’t have it, you can download the OpenShift client command line via rosa download oc and make it available from your PATH.
  • Create the ROSA cluster via the rosa create cluster command below:

Note: rosa create cluster -i with the interactive option provides customization for the ROSA installation, such as multiple AZs, an existing VPC, subnets, etc.

  • Copy the URL from the Details Page to a browser and you can view the status of your ROSA installation.
  • If you click View logs, you can watch the log from here until the cluster installation is complete.
  • When you see this screen, it means the cluster is created:
  • Now, you need a way to log into the OCP cluster. I created an organization called sc-rosa-idp on GitHub and configured the IDP via rosa create idp --cluster=sc-rosa-test --interactive as shown below.
  • Log into the OCP console via the URL from the output of the last step:
  • Click github-1 -> you will be redirected to authorize the organization on GitHub -> log in to GitHub.
  • Once you log in with your GitHub credentials, you will see the OCP developer console:
  • Grant the cluster-admin role to the GitHub user using rosa grant user cluster-admin --user <github user in your organization> --cluster <name of your rosa cluster>.
  • Click Administrator on the top left and access the OCP admin console with admin access as shown below:

Delete ROSA

  • Go to the cluster on cloud.redhat.com, and from Actions, select Delete cluster:
  • Enter the name of the cluster and click Delete:
  • The cluster shows as Uninstalling.
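The same deletion can be done from the CLI; a sketch using the cluster name from this example (assuming the rosa delete cluster and rosa logs uninstall subcommands in your CLI version):

rosa delete cluster --cluster=sc-rosa-test
rosa logs uninstall -c sc-rosa-test --watch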

 

Although it is pre-GA and lacks the AWS console integration, I found it very easy to get my cluster up and running. If you cannot wait for GA, you can always request preview access from here. Get a head start with ROSA!

References

ARO 4 and AAD Integration Take 2

In my last post on ARO 4, I walked through the steps to set up the Azure environment for creating ARO 4. My second round of testing has the following specific requirements:

  • Use only one app registration
  • Do not use a pull secret
You will need to complete the section on setting up the Azure environment in my previous ARO 4 blog.

Create ARO 4 Cluster with existing service principal

Create a service principal
From the previous test, I learned that the process of creating ARO 4 creates a service principal. This time, I am going to create the service principal before creating the cluster.
$ az ad sp create-for-rbac --role Contributor --name all-in-one-sp
This command returns the appId and password that we will need for the ARO 4 create command later.
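The output looks roughly like the following (values redacted; the exact fields vary slightly by Azure CLI version). Keep the appId and password handy:

{
  "appId": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
  "displayName": "all-in-one-sp",
  "password": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
  "tenant": "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"
}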
Adding API permission to the service principal
  1. Login to Azure Portal
  2. Go to Azure Active Directory
  3. Click App registrations
  4. Click “All applications”
  5. Search for “all-in-one-sp”
  6. Click “View API permission”
  7. Click “Add a permission”
  8. Click “Azure Active Directory Graph”
  9. Click “Delegated Permissions”
  10. Check “User.Read”
  11. Click “Add permission” button at the bottom.
  12. Click “Grant admin consent …”
  13. A green check mark is shown under Status as shown below
Create ARO with existing service principal without pull secret
az aro create \
--resource-group $RESOURCEGROUP \
--name $CLUSTER \
--client-id <service principal application id> \
--client-secret <service principal password> \
--vnet aro-vnet \
--master-subnet master-subnet \
--worker-subnet worker-subnet \
--domain aro.ocpdemo.online
When I opted out of the pull secret option, I got the following message in the Azure CLI output.
No --pull-secret provided: cluster will not include samples or operators from Red Hat or from certified partners.
Adding api and ingress A records to the DNS zone
Using the output from the ARO 4 creation, the IP in the “apiserverProfile” section is for the API server, and the IP in “ingressProfiles” is for the ingress. An example is shown below.
Test out the ARO cluster
az aro list-credentials \
--name $CLUSTER \
--resource-group $RESOURCEGROUP
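The command returns the kubeadmin credentials; the output is roughly as follows (password redacted):

{
  "kubeadminPassword": "xxxxx-xxxxx-xxxxx-xxxxx",
  "kubeadminUsername": "kubeadmin"
}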
Open the following URL in a browser and log in as kubeadmin with the password from the above command:
https://console-openshift-console.apps.<DNS domain>/

Integrate Azure Active Directory

The following steps are for getting the OAuth callback URL.
$ oc login -u kubeadmin -p <password> https://api.<DNS domain>:6443/ 
$ oauthCallBack=`oc get route oauth-openshift -n openshift-authentication -o jsonpath='{.spec.host}'` 
$ oauthCallBackURL=https://$oauthCallBack/oauth2callback/AAD
$ echo $oauthCallBackURL
where AAD is the name of the identity provider for OAuth configuration on OpenShift
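With the custom domain used in this post (aro.ocpdemo.online), the resulting value should look something like this:

https://oauth-openshift.apps.aro.ocpdemo.online/oauth2callback/AAD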
Add the OAuth call back URL to the same service principal
  • Go to Azure Active Directory
  • Click App registrations
  • Click on “all-in-one-sp” under all applications
  • Under Overview, click the “Add a Redirect URI” link at the top right corner
  • Click “Add a platform”
  • Click Web Application from the list of Configure platforms
  • Enter the value of the oauthCallBackURL from the previous step in the “Redirect URIs” field
  • Click Configure
Create a manifest file
cat > manifest.json<< EOF 
[{ "name": "upn", 
"source": null, 
"essential": false, 
"additionalProperties": [] 
}, 
{ "name": "email", 
"source": null, 
"essential": false, 
"additionalProperties": [] 
}] 
EOF
Update service principal with the manifest
$ az ad app update \
 --set optionalClaims.idToken=@manifest.json \
 --id <Service Principal appId>
Create secret to store service principal’s password
oc create secret generic openid-client-secret-azuread \
--namespace openshift-config \
--from-literal=clientSecret=<service principal password>
Create OAuth configuration
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: AAD
    mappingMethod: claim
    type: OpenID
    openID:
      clientID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
      clientSecret:
        name: openid-client-secret-azuread
      extraScopes:
      - email
      - profile
      extraAuthorizeParameters:
        include_granted_scopes: "true"
      claims:
        preferredUsername:
        - email
        - upn
        name:
        - name
        email:
        - email
      issuer: https://login.microsoftonline.com/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Apply the OAuth YAML
oc apply -f openid.yaml
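Applying the OAuth configuration causes the oauth-openshift pods in the openshift-authentication namespace to roll out again; you can watch that happen before trying to log in (assuming cluster-admin access):

oc get pods -n openshift-authentication -w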
Log in to the OpenShift console via AAD

Reference

Azure Red Hat OpenShift 4 (ARO 4) integrate with Azure Active Directory

I happened to test out ARO 4 with Azure Active Directory integration. The Azure documentation is good, but I had to change a few things while testing the steps. I am sharing my experience here and hope someone will find it useful.

Setting the requirements

Install or update Azure CLI
brew update && brew install azure-cli
Make sure you have permission to create resources in the resource group. I logged in as a global administrator when testing this.

Setup the environment variables
$ cat aro-env
LOCATION=centralus # the location of your cluster
RESOURCEGROUP=aro-rg # the name of the resource group where you want to create your cluster
CLUSTER=poc #cluster-id of the ARO 4 cluster
$ source aro-env
Log in Azure
az login
Create a Resource Group
az group create \
--name $RESOURCEGROUP \
--location $LOCATION
Add DNS zone
If you don’t have a DNS zone already, you can use this step.
  1. Login Azure Portal
  2. Type: “DNS Zones” in the search box on the top and click on “DNS Zones”
  3. Click “+Add” on the top
  4. Select the newly created resource group
  5. Enter your domain
  6. Select the location
  7. Click “Review+Create”

Notes:

  • I am using a domain name outside of Azure. You will need to add the NS records from the overview page of the DNS zone to your domain.
  • Request a quota increase from the Azure portal; ARO requires a minimum of 40 cores.
Register Resource Provider
az account set --subscription <subscription id>
az provider register -n Microsoft.RedHatOpenShift --wait
az provider register -n Microsoft.Compute --wait
az provider register -n Microsoft.Storage --wait
Create a Virtual Network
az network vnet create \
--resource-group $RESOURCEGROUP \
--name aro-vnet \
--address-prefixes 10.0.0.0/22
Create an empty subnet for master nodes
az network vnet subnet create \
--resource-group $RESOURCEGROUP \
--vnet-name aro-vnet \
--name master-subnet \
--address-prefixes 10.0.0.0/23 \
--service-endpoints Microsoft.ContainerRegistry
Create an empty subnet for worker nodes
az network vnet subnet create \
--resource-group $RESOURCEGROUP \
--vnet-name aro-vnet \
--name worker-subnet \
--address-prefixes 10.0.2.0/23 \
--service-endpoints Microsoft.ContainerRegistry
Disable private endpoint policy
az network vnet subnet update \
--name master-subnet \
--resource-group $RESOURCEGROUP \
--vnet-name aro-vnet \
--disable-private-link-service-network-policies true
Once the above steps are done, you don’t have to redo them if you are going to reuse the same names and resources.

Create Cluster

Please make sure you are logged in to Azure and the environment variables are set.

Information that we need for creating a cluster
  • Get a copy of the pull secret from cloud.redhat.com. If you don’t have a username yet, please just register as a user for free.
  • Create an ARO cluster using the following command, substituting the appropriate values.
    Some of the values used in the example are explained below.
    • aro-vnet – the name of virtual network
    • master-subnet – the name of master subnet
    • worker subnet – the name of worker subnet
    • ./pull-secret.txt – the path to where the pull secret is located
    • aro.ocpdemo.online – custom domain for the cluster
az aro create \
--resource-group $RESOURCEGROUP \
--name $CLUSTER \
--vnet aro-vnet \
--master-subnet master-subnet \
--worker-subnet worker-subnet \
--pull-secret @./pull-secret.txt \
--domain aro.ocpdemo.online

The information from the JSON output of the above command can be useful if you are not familiar with OpenShift 4. You can find your API server IP, API URL, OpenShift console URL, and ingress IP. You will need the API and ingress IPs for the next step.

{- Finished ..
"apiserverProfile": {
"ip": "x.x.x.x",
"url": "https://api.aro.ocpdemo.online:6443/",
"visibility": "Public"
...
},
"consoleProfile": {
"url": "https://console-openshift-console.apps.aro.ocpdemo.online/"
},
....
"ingressProfiles": [
{
"ip": "x.x.x.x",
"name": "default",
"visibility": "Public"
}
....

Post ARO Installation

Adding two A records for api and *.apps in the DNS zone
  1. Login to Azure portal
  2. Go to DNS zone
  3. Click onto the domain for the ARO cluster
  4. Click “+ Record Set” on the top menu to create an A record and add values to Name and IP. You will need to repeat this step for both api and *.apps A records.
    • Name: api or *.apps
    • IP: the corresponding IP from the ARO creation output (apiserverProfile for api, ingressProfiles for *.apps)
  5. The below screenshot shows the DNS zone configuration and adding 2 A records.
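If you prefer the CLI to the portal, the same two A records can be created with az; a sketch using the zone from this example (substitute your own IPs, and adjust the resource group if your DNS zone lives elsewhere):

az network dns record-set a add-record -g $RESOURCEGROUP -z aro.ocpdemo.online -n api -a <api server IP>
az network dns record-set a add-record -g $RESOURCEGROUP -z aro.ocpdemo.online -n "*.apps" -a <ingress IP>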

Test ARO Cluster

Getting Kubeadmin credential
az aro list-credentials \
--name $CLUSTER \
--resource-group $RESOURCEGROUP
The command will return the kubeadmin credential.
Log in OpenShift Console
Open a browser and go to the OpenShift console or look for “consoleProfile” from the JSON output from ARO creation
https://console-openshift-console.apps.<DNS domain>/
The login user is kubeadmin and the password is the credential from the last command. Congrats!! The ARO installation is completed!

Azure Active Directory Integration

Getting oauthCallBackURL
  • Download the OpenShift command line tool from the console.
Download the OpenShift Command Line Interface (CLI) from there. Once you extract it and add it to your PATH, you can move on to the next step.
  • Login to ARO via OC CLI
$ oc login -u kubeadmin -p <password> https://api.<DNS domain>:6443/

$ oauthCallBack=`oc get route oauth-openshift -n openshift-authentication -o jsonpath='{.spec.host}'`

$ oauthCallBackURL=https://$oauthCallBack/oauth2callback/AAD
Note: AAD is the name of the identity provider when configuring OAuth on OpenShift

Creating Application on Azure Active Directory
az ad app create \
  --query appId -o tsv \
  --display-name poc-aro-auth \
  --reply-urls $oauthCallBackURL \
  --password '<ClientSecret>'
Note: Please note that the above command returns the registered Application Id (AppId), which you will need when configuring the OAuth on OpenShift.
Get tenant Id
az account show --query tenantId -o tsv
Note: Please note that you will need the tenant Id for the OAuth configuration on OpenShift
Create manifest file
cat > manifest.json<< EOF
[{
"name": "upn",
"source": null,
"essential": false,
"additionalProperties": []
},
{
"name": "email",
"source": null,
"essential": false,
"additionalProperties": []
}]
EOF
Update the Azure Active Directory with a manifest
az ad app update \
--set optionalClaims.idToken=@manifest.json \
--id <AppId>
Update Application permission scope
az ad app permission add \
--api 00000002-0000-0000-c000-000000000000 \
--api-permissions 311a71cc-e848-46a1-bdf8-97ff7156d8e6=Scope \
--id <AppId>
Grant admin consent
  1. login Azure portal
  2. Go to Azure Active Directory
  3. Click App Registrations
  4. Click “All applications” and search for the newly created application name
  5. Click onto the display name of the application
  6. Click view API permissions
  7. Click on the “check” to grant admin consent for directory
Add service principal
$ az ad sp create-for-rbac --role Contributor --name poc-aro-sp
You will need the “appId” from the output of the above command; that is the appId of the service principal.
$ az role assignment create --role "User Access Administrator" \
--assignee-object-id $(az ad sp list --filter "appId eq '<service-principal-appid>'" \
| jq '.[0].objectId' -r)
$ az ad app permission add --id <appId> \
--api 00000002-0000-0000-c000-000000000000 \
--api-permissions 824c81eb-e3f8-4ee6-8f6d-de7f50d565b7=Role
This will output a follow-up command, as shown below.
$ az ad app permission grant --id <appid> --api 00000002-0000-0000-c000-000000000000
I also granted admin consent for the API permission for the service principal.
Create secret for identity provider on OpenShift
oc create secret generic openid-client-secret-azuread \
--namespace openshift-config \
--from-literal=clientSecret=<your password>
Create YAML for identity provider for AAD
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: AAD
    mappingMethod: claim
    type: OpenID
    openID:
      clientID: xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
      clientSecret:
        name: openid-client-secret-azuread
      extraScopes:
      - email
      - profile
      extraAuthorizeParameters:
        include_granted_scopes: "true"
      claims:
        preferredUsername:
        - email
        - upn
        name:
        - name
        email:
        - email
      issuer: https://login.microsoftonline.com/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
Note:
  • The clientID is the AppId of your registered application.
  • Issuer URL is https://login.microsoftonline.com/<tenant id>.
  • The clientSecret is using the secret (openid-client-secret-azuread) that you created from the previous step.
Alternatively, you can obtain the clientID and tenant id from Azure Portal.
  • Login Azure Portal
  • Click Home
  • Click Azure Active Directory
  • Click App registrations on the left menu
  • Click all applications tab
  • Type the application that you just created in the search area
  • Click onto the application (my application is poc-aro-auth)
  • Under Overview, the information is shown as “Application (client) ID” and “Directory (tenant) ID”, as in the image below.
Update OpenShift OAuth Configuration
oc apply -f openid.yaml
Log in to the OpenShift console via AAD
It will redirect you to the Azure login page

Troubleshoot

Tip #1: If you are getting errors, you can log in as kubeadmin and check the logs from the oauth-openshift pods under the openshift-authentication project.

Tip #2: If you are creating a new registered application to try again, make sure you clean up the user and identity.

Reference

Azure OpenShift 4 documentation

ARO and Azure Active Directory integration

OpenShift 4.3 – Configuring Metering to use AWS Billing information

My task was to figure out how to configure Metering to correlate with AWS billing. The OpenShift documentation in the reference is where I started. I decided to record the end-to-end steps of how I set this up, since there were some lessons learned in the process. I hope this helps you set up Metering with AWS billing much more smoothly.

Prerequisites:

Setting up AWS Report

  1. Before creating anything, you need to have data in the Billing & Cost Management Dashboard already.
  2. If you have a brand new account, you may have to wait until some data shows up before you proceed. You will need access to Cost & Usage Reports under AWS Billing to set up the report.
  3. Log in to AWS, go to My Billing Dashboard
  4. Click Cost & Usage Reports
  5. Click Create reports
  6. Provide a name and check Include resource IDs
  7. Click Next
  8. Click Configure → add S3 bucket name and Region-> click Next
  9. Provide `prefix` and select your options for your report → Click Next
  10. Once created, you will see a report similar to the screenshot below.
  11. Click onto the S3 bucket and validate reports are being created under the folder.
  12. Click Permissions tab
  13. Click Bucket Policy
  14. Copy and save the bucket policy somewhere you can get back to

Setting up AWS user permission policy

  1. Go to My Security Credentials
  2. Click Users → click the user name that will be used for accessing the reports and for OpenShift Metering.
  3. Click Add Permissions → Attach existing policies directly → Create policy → click JSON
  4. Paste the bucket policy from the Cost & Usage Report S3 bucket (step 14 in the last section).
  5. Use the same step to add the following policy:
    { 
      "Version": "2012-10-17", 
      "Statement": [ 
      { 
        "Sid": "1", 
        "Effect": "Allow", 
        "Action": [ 
          "s3:AbortMultipartUpload", 
          "s3:DeleteObject", 
          "s3:GetObject", 
          "s3:HeadBucket", 
          "s3:ListBucket", 
          "s3:ListMultipartUploadParts", 
          "s3:PutObject" 
         ], 
         "Resource": [ 
            "arn:aws:s3:::<YOUR S3 BUCKET NAME FOR BILLING REPORT>/*",  
            "arn:aws:s3:::<YOUR S3 BUCKET NAME FOR BILLING REPORT>"  
          ] 
        } 
        ] 
    }
  6. Since I am using an S3 bucket for metering storage, I also added the following policy to the user:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "1",
                "Effect": "Allow",
                "Action": [
                    "s3:AbortMultipartUpload",
                    "s3:DeleteObject",
                    "s3:GetObject",
                    "s3:HeadBucket",
                    "s3:ListBucket",
                    "s3:CreateBucket",
                    "s3:DeleteBucket",
                    "s3:ListMultipartUploadParts",
                    "s3:PutObject"
                ],
                "Resource": [
                    "arn:aws:s3:::<YOUR S3 BUCKET NAME FOR METERING STORAGE>/*",
                    "arn:aws:s3:::<YOUR S3 BUCKET NAME FOR METERING STORAGE>"
                ]
            }
        ]
    }
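If you prefer the AWS CLI over the console, the same inline policy can be attached with something like the following (the user name, policy name, and file path are placeholders for this example):

aws iam put-user-policy --user-name <metering user> --policy-name metering-s3 --policy-document file://metering-s3-policy.json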

Configuration:

Install Metering Operator

  1. Log in to the OpenShift Container Platform web console as cluster-admin, click Administration → Namespaces → Create Namespace
  2. Enter openshift-metering
  3. Add openshift.io/cluster-monitoring=true as a label → click Create.
  4. Click Compute → Machine Sets
  5. If you are like me, the cluster is using the default configuration on AWS. In my test, I added one more worker per AZ.
  6. I noticed that one of the Metering pods requires more resources, and the standard size may not be big enough, so I created an m5.2xlarge machine set. I only need 1 replica for this machine set.
    1. Create a template machine-set YAML:
      oc project openshift-machine-api
      oc get machineset poc-p6czj-worker-us-west-2a -o yaml > m52xLms.yaml
    2. Modify the YAML file by updating the name of the machine set and instance type, removing the status, timestamp, id, selflink, etc… Here is my example of a machine set for m5.2xlarge.
      apiVersion: machine.openshift.io/v1beta1
      kind: MachineSet
      metadata:
        labels:
          machine.openshift.io/cluster-api-cluster: poc-p6czj
        name: poc-p6czj-xl-worker-us-west-2a
        namespace: openshift-machine-api
      spec:
        replicas: 1
        selector:
          matchLabels:
            machine.openshift.io/cluster-api-cluster: poc-p6czj
            machine.openshift.io/cluster-api-machineset: poc-p6czj-xl-worker-us-west-2a
        template:
          metadata:
            creationTimestamp: null
            labels:
              machine.openshift.io/cluster-api-cluster: poc-p6czj
              machine.openshift.io/cluster-api-machine-role: worker
              machine.openshift.io/cluster-api-machine-type: worker
              machine.openshift.io/cluster-api-machineset: poc-p6czj-xl-worker-us-west-2a
          spec:
            metadata:
              creationTimestamp: null
            providerSpec:
              value:
                ami:
                  id: ami-0f0fac946d1d31e97
                apiVersion: awsproviderconfig.openshift.io/v1beta1
                blockDevices:
                - ebs:
                    iops: 0
                    volumeSize: 120
                    volumeType: gp2
                credentialsSecret:
                  name: aws-cloud-credentials
                deviceIndex: 0
                iamInstanceProfile:
                  id: poc-p6czj-worker-profile
                instanceType: m5.2xlarge
                kind: AWSMachineProviderConfig
                metadata:
                  creationTimestamp: null
                placement:
                  availabilityZone: us-west-2a
                  region: us-west-2
                publicIp: null
                securityGroups:
                - filters:
                  - name: tag:Name
                    values:
                    - poc-p6czj-worker-sg
                subnet:
                  filters:
                  - name: tag:Name
                    values:
                    - poc-p6czj-private-us-west-2a
                tags:
                - name: kubernetes.io/cluster/poc-p6czj
                  value: owned
                userDataSecret:
                  name: worker-user-data
    3. run:
      oc create -f m52xLms.yaml
      # wait for the new machine for m5.2xlarge created
      oc get machineset
  7. Create a secret to access the AWS account and make sure you are cluster-admin and run the following commands:
    oc project openshift-metering
    oc create secret -n openshift-metering generic my-aws-secret --from-literal=aws-access-key-id=<YOUR AWS KEY> --from-literal=aws-secret-access-key=<YOUR AWS SECRET>
  8. Back in the console, click Operators → OperatorHub and type `metering` in the filter to find the Metering Operator.
  9. Click the Metering (provided by Red Hat) tile, review the package description, and then click Install.
  10. Under Installation Mode, select openshift-metering as the namespace. Specify your update channel and approval strategy, then click Subscribe to install Metering.
  11. Click Installed Operators from the left menu and wait until Succeeded is shown as the status next to the Metering Operator.
  12. Click Workloads → Pods → the metering operator pod is in the Running state
  13. Go back to your terminal, run:
    oc project openshift-metering
  14. We are now ready to create the MeteringConfig Object. Create a file `metering-config.yaml` as shown below. See the reference for more details of the MeteringConfig object.
    apiVersion: metering.openshift.io/v1
    kind: MeteringConfig
    metadata:
      name: operator-metering
      namespace: openshift-metering
    spec:
      openshift-reporting:
        spec:
          awsBillingReportDataSource:
            enabled: true
            bucket: "logs4reports"
            prefix: "bubble/ocpreports/"
            region: "us-west-2"
      storage:
        type: hive
        hive:
          s3:
            bucket: shanna-meter/demo
            createBucket: true
            region: us-west-2
            secretName: my-aws-secret
          type: s3
      presto:
        spec:
          config:
            aws:
              secretName: my-aws-secret
      hive:
        spec:
          config:
            aws:
              secretName: my-aws-secret
      reporting-operator:
        spec:
          config:
            aws:
              secretName: my-aws-secret
          resources:
            limits:
              cpu: 1
              memory: 500Mi
            requests:
              cpu: 500m
              memory: 100Mi
  15. Create MeteringConfig:
    oc create -f metering-config.yaml
  16. To monitor the process:
    watch 'oc get pod'
  17. Wait until you see all pods are up and running:
    $ oc get pods
    NAME                              READY STATUS   RESTARTS AGE
    hive-metastore-0                   2/2   Running 0        2m35s
    hive-server-0                      3/3   Running 0        2m36s
    metering-operator-69b664dc57-knd86 2/2   Running 0        31m
    presto-coordinator-0               2/2   Running 0        2m8s
    reporting-operator-674cb5d7b-zxwf4 1/2   Running 0        96s
  18. Verify the AWS report data source:
    $ oc get reportdatasource |grep aws
    aws-billing                                                                                                                                                     3m41s
    aws-ec2-billing-data-raw
  19. Verify the AWS report queries:
    $ oc get reportquery |grep aws
    aws-ec2-billing-data                         5m19s
    aws-ec2-billing-data-raw                     5m19s
    aws-ec2-cluster-cost                         5m19s
    pod-cpu-request-aws                          5m19s
    pod-cpu-usage-aws                            5m19s
    pod-memory-request-aws                       5m18s
    pod-memory-usage-aws                         5m18s

    For more information about the ReportDataSource and the ReportQuery​, please check out the GitHub link in the reference.

  20. Create reports to get AWS billing from the following YAML:
    apiVersion: metering.openshift.io/v1
    kind: Report
    metadata:
      name: pod-cpu-request-billing-run-once
    spec:
      query: "pod-cpu-request-aws"
      reportingStart: '2020-04-12T00:00:00Z'
      reportingEnd: '2020-04-30T00:00:00Z'
      runImmediately: true
    ---
    apiVersion: metering.openshift.io/v1
    kind: Report
    metadata:
      name: pod-memory-request-billing-run-once
    spec:
      query: "pod-memory-request-aws"
      reportingStart: '2020-04-12T00:00:00Z'
      reportingEnd: '2020-04-30T00:00:00Z'
      runImmediately: true
  21. Create reports (status as `RunImmediately`):
    $ oc create -f aws-reports.yaml
    $ oc get reports
    NAME                                  QUERY                    SCHEDULE   RUNNING          FAILED   LAST REPORT TIME   AGE
    pod-cpu-request-billing-run-once      pod-cpu-request-aws                 RunImmediately                               5s
    pod-memory-request-billing-run-once   pod-memory-request-aws              RunImmediately                               5s
  22. Wait until reports are completed (status as `Finished`):
    $ oc get reports
    NAME                                  QUERY                    SCHEDULE   RUNNING    FAILED   LAST REPORT TIME       AGE
    pod-cpu-request-billing-run-once      pod-cpu-request-aws                 Finished            2020-04-30T00:00:00Z   79s
    pod-memory-request-billing-run-once   pod-memory-request-aws              Finished            2020-04-30T00:00:00Z   79s
  23. I created a simple script (viewReport.sh), shown below, to view any report; it requires $1 to be the name of a report from oc get reports:
    reportName=$1
    reportFormat=csv
    token="$(oc whoami -t)"
    meteringRoute="$(oc get routes metering -o jsonpath='{.spec.host}')"
    curl --insecure -H "Authorization: Bearer ${token}" "https://${meteringRoute}/api/v1/reports/get?name=${reportName}&namespace=openshift-metering&format=$reportFormat"
  24. Before running the script, please make sure you have a valid token via oc whoami -t
  25. View a report by running the simple script from step 23:
    ./viewReport.sh pod-cpu-request-billing-run-once
    period_start,period_end,pod,namespace,node,pod_request_cpu_core_seconds,pod_cpu_usage_percent,pod_cost
    2020-04-12 00:00:00 +0000 UTC,2020-04-30 00:00:00 +0000 UTC,alertmanager-main-0,openshift-monitoring,ip-10-0-174-47.us-west-2.compute.internal,792.000000,0.006587,
    2020-04-12 00:00:00 +0000 UTC,2020-04-30 00:00:00 +0000 UTC,alertmanager-main-1,openshift-monitoring,ip-10-0-138-24.us-west-2.compute.internal,792.000000,0.006587,
    2020-04-12 00:00:00 +0000 UTC,2020-04-30 00:00:00 +0000 UTC,alertmanager-main-2,openshift-monitoring,ip-10-0-148-172.us-west-2.compute.internal,792.000000,0.006587,
    2020-04-12 00:00:00 +0000 UTC,2020-04-30 00:00:00 +0000 UTC,apiserver-9dhcr,openshift-apiserver,ip-10-0-157-2.us-west-2.compute.internal,1080.000000,0.008982,
    2020-04-12 00:00:00 +0000 UTC,2020-04-30 00:00:00 +0000 UTC,apiserver-fr7w5,openshift-apiserver,ip-10-0-171-27.us-west-2.compute.internal,1080.000000,0.008982,
    2020-04-12 00:00:00 +0000 UTC,2020-04-30 00:00:00 +0000 UTC,apiserver-sdlsj,openshift-apiserver,ip-10-0-139-242.us-west-2.compute.internal,1080.000000,0.008982,
    2020-04-12 00:00:00 +0000 UTC,2020-04-30 00:00:00 +0000 UTC,apiservice-cabundle-injector-54ff756f6d-f4vl6,openshift-service-ca,ip-10-0-157-2.us-west-2.compute.internal,72.000000,0.000599,
    2020-04-12 00:00:00 +0000 UTC,2020-04-30 00:00:00 +0000 UTC,authentication-operator-6d865c4957-2jsql,openshift-authentication-operator,ip-10-0-171-27.us-west-2.compute.internal,72.000000,0.000599,
    2020-04-12 00:00:00 +0000 UTC,2020-04-30 00:00:00 +0000 UTC,catalog-operator-868fd6ddb5-rmfk7,openshift-operator-lifecycle-manager,ip-10-0-139-242.us-west-2.compute.internal,72.000000,0.000599,
    2020-04-12 00:00:00 +0000 UTC,2020-04-30 00:00:00 +0000 UTC,certified-operators-58874b4f86-rcbsl,openshift-marketplace,ip-10-0-148-172.us-west-2.compute.internal,20.400000,0.000170,
    2020-04-12 00:00:00 +0000 UTC,2020-04-30 00:00:00 +0000 UTC,certified-operators-5b86f97d6f-pcvqk,openshift-marketplace,ip-10-0-148-172.us-west-2.compute.internal,16.800000,0.000140,
    2020-04-12 00:00:00 +0000 UTC,2020-04-30 00:00:00 +0000 UTC,certified-operators-5fdf46bd6d-hhtqd,openshift-marketplace,ip-10-0-148-172.us-west-2.compute.internal,37.200000,0.000309,
    2020-04-12 00:00:00 +0000 UTC,2020-04-30 00:00:00 +0000 UTC,cloud-credential-operator-868c5f9f7f-tw5pn,openshift-cloud-credential-operator,ip-10-0-157-2.us-west-2.compute.internal,72.000000,0.000599,
    2020-04-12 00:00:00 +0000 UTC,2020-04-30 00:00:00 +0000 UTC,cluster-autoscaler-operator-74b5d8858b-bwtfc,openshift-machine-api,ip-10-0-139-242.us-west-2.compute.internal,144.000000,0.001198,
    2020-04-12 00:00:00 +0000 UTC,2020-04-30 00:00:00 +0000 UTC,cluster-image-registry-operator-9754995-cqm7v,openshift-image-registry,ip-10-0-139-242.us-west-2.compute.internal,144.000000,0.001198,
    ...
  26. The output from the previous step is not very readable, so I downloaded it into a file instead.
    ./viewReport.sh pod-cpu-request-billing-run-once > aws-pod-cpu-billing.txt
  27. Import the output file into a spreadsheet, as shown in the screenshot below.

Troubleshoot:

The most useful log for debugging any report issues is the reporting-operator log.
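A quick way to pull that log (the deployment and container names below are what I saw on my 4.3 cluster; adjust if yours differ):

oc logs -n openshift-metering deploy/reporting-operator -c reporting-operator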

Reference:

OpenShift metering documentation: https://docs.openshift.com/container-platform/4.3/metering/metering-about-metering.html

Configure AWS Billing Correlation: https://docs.openshift.com/container-platform/4.6/metering/configuring_metering/metering-configure-aws-billing-correlation.html

Addition information: https://github.com/kube-reporting/metering-operator/blob/master/Documentation/metering-architecture.md

OpenShift4.3: Retest Static IP configuration on vSphere

There were lessons learned from the last test (https://shanna-chan.blog/2019/07/26/openshift4-vsphere-static-ip/), and I got questions asking for clarification on using static IPs. My apologies for the confusion from my last test; it was done without any real documentation. I want to record all my errors so I can help others troubleshoot.

Anyway, I decided to retest the installation of OCP 4.3 using static IPs. The goal is to clarify the installation instructions from my last blog if you are trying to install OCP 4 on a VMware environment manually using static IPs.

Environment:


  • OCP 4.3.5
  • vSphere 6.7

 

List of VMs:

  • Bootstrap 192.168.1.110
  • Master0 192.168.1.111
  • Master1 192.168.1.112
  • Master2 192.168.1.113
  • Worker0 192.168.1.114
  • Worker1 192.168.1.115

Prerequisites:

The following components are already running in my test environment.

DNS Server

  1. Add a zone to /etc/named.conf. An example can be found here: https://github.com/christianh814/openshift-toolbox/blob/master/ocp4_upi/docs/0.prereqs.md#dns
  2. Configure the zone file for all the DNS entries. An example configuration is shown below.
    ; The api points to the IP of your load balancer
    api.ocp43	IN	A	192.168.1.72
    api-int.ocp43	IN	A	192.168.1.72
    ;
    ; The wildcard also points to the load balancer
    *.apps.ocp43	IN	A	192.168.1.72
    ;
    ; Create entry for the bootstrap host
    bootstrap0.ocp43	IN	A	192.168.1.110
    ;
    ; Create entries for the master hosts
    master01.ocp43	IN	A	192.168.1.111
    master02.ocp43	IN	A	192.168.1.112
    master03.ocp43	IN	A	192.168.1.113
    ;
    ; Create entries for the worker hosts
    worker01.ocp43	IN	A	192.168.1.114
    worker02.ocp43	IN	A	192.168.1.115
    ;
    ; The ETCd cluster lives on the masters...so point these to the IP of the masters
    etcd-0.ocp43	IN	A	192.168.1.111
    etcd-1.ocp43	IN	A	192.168.1.112
    etcd-2.ocp43	IN	A	192.168.1.113
    ;
    ; The SRV records are IMPORTANT....make sure you get these right...note the trailing dot at the end...
    _etcd-server-ssl._tcp.ocp43	IN	SRV	0 10 2380 etcd-0.ocp43.example.com.
    _etcd-server-ssl._tcp.ocp43	IN	SRV	0 10 2380 etcd-1.ocp43.example.com.
    _etcd-server-ssl._tcp.ocp43	IN	SRV	0 10 2380 etcd-2.ocp43.example.com.
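Before moving on, it is worth confirming that the records resolve from the machine where you will run the installer; a quick spot check with dig (names from this example; add @<your DNS server IP> if the machine is not already pointed at that DNS server):

dig +short api.ocp43.example.com
dig +short console-openshift-console.apps.ocp43.example.com
dig +short -t srv _etcd-server-ssl._tcp.ocp43.example.com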

Load balancer

  1. Update /etc/haproxy/haproxy.cfg with cluster information. An example is shown below.
    #---------------------------------------------------------------------
    
    listen stats
        bind *:9000
        mode http
        stats enable
        stats uri /
        monitor-uri /healthz
    
    #---------------------------------------------------------------------
    #Cluster ocp43 - static ip test
    frontend openshift-api-server
        bind *:6443
        default_backend openshift-api-server
        mode tcp
        option tcplog
    
    backend openshift-api-server
        balance source
        mode tcp
        #server bootstrap0.ocp43.example.com 192.168.1.110:6443 check
        server master01.ocp43.example.com 192.168.1.111:6443 check
        server master02.ocp43.example.com 192.168.1.112:6443 check
        server master03.ocp43.example.com 192.168.1.113:6443 check
    
    frontend machine-config-server
        bind *:22623
        default_backend machine-config-server
        mode tcp
        option tcplog
    
    backend machine-config-server
        balance source
        mode tcp
        # server bootstrap0.ocp43.example.com 192.168.1.110:22623 check
        server master01.ocp43.example.com 192.168.1.111:22623 check
        server master02.ocp43.example.com 192.168.1.112:22623 check
        server master03.ocp43.example.com 192.168.1.113:22623 check
    
    frontend ingress-http
        bind *:80
        default_backend ingress-http
        mode tcp
        option tcplog
    
    backend ingress-http
        balance source
        mode tcp
        server worker01.ocp43.example.com 192.168.1.114:80 check
        server worker02.ocp43.example.com 192.168.1.115:80 check
    
    frontend ingress-https
        bind *:443
        default_backend ingress-https
        mode tcp
        option tcplog
    
    backend ingress-https
        balance source
        mode tcp
        server worker01.ocp43.example.com 192.168.1.114:443 check
        server worker02.ocp43.example.com 192.168.1.115:443 check
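
After editing the config, validate it and reload HAProxy. This is a quick sketch; the firewall and SELinux steps are only needed if they block the new frontends on your load balancer host.

# Check the configuration syntax, then enable and reload HAProxy
haproxy -c -f /etc/haproxy/haproxy.cfg
systemctl enable --now haproxy
systemctl reload haproxy
# Open the frontend ports if firewalld is running
firewall-cmd --permanent --add-port=6443/tcp --add-port=22623/tcp --add-port=80/tcp --add-port=443/tcp --add-port=9000/tcp
firewall-cmd --reload
# Allow HAProxy to bind/connect to non-standard ports under SELinux
setsebool -P haproxy_connect_any 1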

Web Server

  1. Configure a web server. In my example, I configure httpd on an RHEL VM; a sketch for moving it to port 8080 follows these commands.
yum -y install httpd
systemctl enable --now httpd
firewall-cmd --add-port=8080/tcp --permanent
firewall-cmd --reload
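
Because the image and ignition URLs later in this post use port 8080, I also move httpd off the default port 80. This is only a sketch, assuming the stock /etc/httpd/conf/httpd.conf layout.

# Serve on 8080 instead of the default 80, then restart httpd
sed -i 's/^Listen 80$/Listen 8080/' /etc/httpd/conf/httpd.conf
# (if SELinux blocks the bind, relabeling the port may be needed, e.g. via semanage port)
systemctl restart httpd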

Installation downloads

Installation Using Static IP address

Prepare installation

  1. Generate SSH key:
    $ ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/vsphere-ocp43
  2. Start ssh-agent:
    $ eval "$(ssh-agent -s)"
  3. Add the SSH private key to the ssh-agent:
    $ ssh-add ~/.ssh/vsphere-ocp43
    Identity added: /Users/shannachan/.ssh/vsphere-ocp43 (shannachan@MacBook-Pro)
  4. Download & extract OpenShift Installer:
    wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/4.3.5/openshift-install-mac-4.3.5.tar.gz
    tar zxvf openshift-install-mac-4.3.5.tar.gz
  5. Download & extract OpenShift CLI:
    wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/4.3.5/openshift-client-mac-4.3.5.tar.gz
    tar zxvf openshift-client-mac-4.3.5.tar.gz
  6. Copy or download the pull secret from cloud.redhat.com
    1. Go to cloud.redhat.com
    2. Login with your credential (create an account if you don’t have one)
    3. Click “Create Cluster”
    4. Click OpenShift Container Platform
    5. Scroll down and click “VMware vSphere”
    6. Click on “Download Pull Secret” to download the secret

Create Installation manifests and ignition files

  1. Create an installation directory:
    mkdir ocp43
  2. Create `install-config.yaml` as shown below.
    apiVersion: v1
    baseDomain: example.com
    compute:
    - name: worker
      replicas: 0
    controlPlane:
      hyperthreading: Enabled
      name: master
      replicas: 3
    metadata:
      name: ocp43
    platform:
      vsphere:
        vcenter: 192.168.1.200
        username: vsphereadmin
        password: xxxx
        datacenter: Datacenter
        defaultDatastore: datastore3T
    pullSecret: '<copy your pull secret here>'
    sshKey: '<copy your public key here>'
  3. Back up install-config.yaml and copy it into the installation directory
  4. Generate Kubernetes manifests for the cluster:
    $./openshift-install create manifests --dir=./ocp43
    INFO Consuming Install Config from target directory
    WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings
  5. Modify <installation directory>/manifests/cluster-scheduler-02-config.yml
  6. Update mastersSchedulable to false (an example of the edited file follows this list)
  7. Obtain Ignition files:
    $ ./openshift-install create ignition-configs --dir=./ocp43
    INFO Consuming Common Manifests from target directory
    INFO Consuming Worker Machines from target directory
    INFO Consuming Master Machines from target directory
    INFO Consuming OpenShift Install (Manifests) from target directory
    INFO Consuming Openshift Manifests from target directory
  8. Files that were created:
    $ tree ocp43
    ocp43
    ├── auth
    │   ├── kubeadmin-password
    │   └── kubeconfig
    ├── bootstrap.ign
    ├── master.ign
    ├── metadata.json
    └── worker.ign
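
For reference, after step 6 the edited manifests/cluster-scheduler-02-config.yml looks roughly like this; the exact field layout can vary slightly between 4.x releases.

apiVersion: config.openshift.io/v1
kind: Scheduler
metadata:
  creationTimestamp: null
  name: cluster
spec:
  mastersSchedulable: false
  policy:
    name: ""
status: {}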

Upload files to the webserver

  1. Upload rhcos-4.3.0-x86_64-metal.raw.gz to the web server location
  2. Upload all the ignition files to the web server location
  3. Update the file permissions on the *.ign files on the web server:
    chmod 644 *.ign

Note: check that you can download the ignition files and the .raw.gz file from the web server before booting any VMs; a quick curl check is shown below.
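
A quick check with curl from the machine that will boot the VMs; 192.168.1.230:8080 is the web server in my environment, so adjust the host and port to yours.

# Each request should return HTTP 200 with a non-zero Content-Length
curl -I http://192.168.1.230:8080/bootstrap.ign
curl -I http://192.168.1.230:8080/master.ign
curl -I http://192.168.1.230:8080/worker.ign
curl -I http://192.168.1.230:8080/rhcos-4.3.0-x86_64-metal.raw.gz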

Custom ISO

Create a custom ISO with the parameters you need for each VM. You can skip this step if you plan to type all the kernel parameters by hand when prompted.

  1. Download rhcos-4.3.0-x86_64-installer.iso and rhcos-4.3.0-x86_64-metal.raw.gz
  2. Extract ISO to a temporary location:
    sudo mount rhcos-4.3.0-x86_64-installer.iso /mnt/ 
    mkdir /tmp/rhcos 
    rsync -a /mnt/* /tmp/rhcos/ 
    cd /tmp/rhcos 
    vi isolinux/isolinux.cfg
  3. Modify the boot entry similar to this:
    label linux
      menu label ^Install RHEL CoreOS
      kernel /images/vmlinuz
      append initrd=/images/initramfs.img nomodeset rd.neednet=1 coreos.inst=yes ip=192.168.1.110::192.168.1.1:255.255.255.0:bootstrap0.ocp43.example.com:ens192:none nameserver=192.168.1.188 coreos.inst.install_dev=sda coreos.inst.image_url=http://192.168.1.230:8080/rhcos-4.3.0-x86_64-metal.raw.gz coreos.inst.ignition_url=http://192.168.1.230:8080/bootstrap.ign

    where:

    ip=<ip address of the VM>::<gateway>:<netmask>:<hostname of the VM>:<interface>:none

    nameserver=<DNS>

    coreos.inst.image_url=http://<webserver host:port>/rhcos-4.3.0-x86_64-metal.raw.gz

    coreos.inst.ignition_url=http://<webserver host:port>/<bootstrap, master or worker ignition>.ign

  4. Create new ISO as /tmp/rhcos_install.iso:
    sudo mkisofs -U -A "RHCOS-x86_64" -V "RHCOS-x86_64" -volset "RHCOS-x86_64" -J -joliet-long -r -v -T -x ./lost+found -o /tmp/rhcos_install.iso -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -eltorito-alt-boot -e images/efiboot.img -no-emul-boot .
  5.  Upload all the custom ISOs to the datastore for VM creation via vCenter
  6. Repeat these steps for each VM with its specific IP and ignition file. You only need per-VM ISOs if you don’t want to type the kernel parameters at the boot prompt; I recommend building them, since it actually takes less time than typing the parameters for every VM. A small helper sketch follows this list.
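
If you keep a template copy of isolinux.cfg, a small helper can stamp out one ISO per VM. This is only a sketch under my own conventions: isolinux/isolinux.cfg.tmpl and the __IP__/__HOSTNAME__/__IGN__ placeholders are hypothetical, and the mkisofs flags are the same ones from step 4.

#!/bin/bash
# Sketch: build one custom ISO per VM from a templated isolinux.cfg
set -euo pipefail
cd /tmp/rhcos

build_iso() {
  local ip=$1 host=$2 ign=$3 out=$4
  # Fill in the per-VM kernel parameters (placeholders are my own convention)
  sed -e "s/__IP__/${ip}/" -e "s/__HOSTNAME__/${host}/" -e "s/__IGN__/${ign}/" \
      isolinux/isolinux.cfg.tmpl > isolinux/isolinux.cfg
  sudo mkisofs -U -A "RHCOS-x86_64" -V "RHCOS-x86_64" -volset "RHCOS-x86_64" \
      -J -joliet-long -r -v -T -x ./lost+found -o "${out}" \
      -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot \
      -boot-load-size 4 -boot-info-table \
      -eltorito-alt-boot -e images/efiboot.img -no-emul-boot .
}

build_iso 192.168.1.110 bootstrap0.ocp43.example.com bootstrap.ign /tmp/rhcos_bootstrap0.iso
build_iso 192.168.1.111 master01.ocp43.example.com   master.ign    /tmp/rhcos_master01.iso
# ...repeat for the remaining masters and workers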

Create VM using custom ISO

  1. Create a resource folder
    • Action -> New folder -> New VM or Template folder
    • I normally give the name as the cluster id
  2. Create a VM with 4 CPUs and 16 GB RAM
    • Action -> New Virtual Machine
    • Select Create New Virtual Machine -> click Next
    • Add name
    • Select the VM folder -> Next
    • Select datacenter -> Next
    • Select storage -> Next
    • Use ESXi 6.7 -> Next
    • Select Linux and RHEL 7 -> Next
    • Use these parameters:
      • CPU: 4
      • Memory: 16 GB (Reserve all guest memory)
      • 120 GB disk
      • Select the corresponding ISO from Datastore and check “connect”
      • VM Options -> Advanced -> Edit Configuration -> Add Configuration Params -> Add “disk.EnableUUID” and set it to TRUE
      • Click OK
      • Click Next
      • Click Finish
  3. Power on the bootstrap, master, and worker VMs following the steps below
  4. Go to the VM console: Screen Shot 2020-03-04 at 12.27.44 PM.png
  5. Hit Enter
  6. You should see the login screen once the VM boots successfully Screen Shot 2020-03-04 at 12.34.04 PM.png
  7. Repeat on all servers, making sure the specific ISO for the given VM is used.

Tip: you can clone an existing VM and just swap the attached ISO when creating each new VM.

Creating Cluster

  1. Monitor the cluster:
    ./openshift-install --dir=<installation_directory> wait-for bootstrap-complete --log-level=info
    INFO Waiting up to 30m0s for the Kubernetes API at https://api.ocp43.example.com:6443...
    INFO API v1.16.2 up
    INFO Waiting up to 30m0s for bootstrapping to complete...
    INFO It is now safe to remove the bootstrap resources
  2.  From the bootstrap VM, similar log messages are shown:
    $ ssh -i ~/.ssh/vsphere-ocp43 core@bootstrap-vm
    $ journalctl -b -f -u bootkube.service
    ...
    Mar 16 20:03:57 bootstrap0.ocp43.example.com bootkube.sh[2816]: Tearing down temporary bootstrap control plane...
    Mar 16 20:03:57 bootstrap0.ocp43.example.com podman[18629]: 2020-03-16 20:03:57.232567868 +0000 UTC m=+726.128069883 container died 695412d7eece5a9bd099aac5b6bc6a8d412c8037b14391ff54ee33132ebce0e1 (image=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:222fbfd3323ec347babbda1a66929019221fcee82cfc324a173b39b218cf6c4b, name=zen_lamarr)
    Mar 16 20:03:57 bootstrap0.ocp43.example.com podman[18629]: 2020-03-16 20:03:57.379721836 +0000 UTC m=+726.275223886 container remove 695412d7eece5a9bd099aac5b6bc6a8d412c8037b14391ff54ee33132ebce0e1 (image=quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:222fbfd3323ec347babbda1a66929019221fcee82cfc324a173b39b218cf6c4b, name=zen_lamarr)
    Mar 16 20:03:57 bootstrap0.ocp43.example.com bootkube.sh[2816]: bootkube.service complete
  3. Check the load balancer status from its stats page
  4. Remove the bootstrap server from the load balancer once bootstrapping completes; a sketch follows this list
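
In the haproxy.cfg example earlier the bootstrap lines are already commented out; during a real install they stay active until bootstrapping completes, and something like this removes them afterwards (a sketch to run on the load balancer).

# Comment out the bootstrap backend entries and reload HAProxy
sed -i 's/^\(\s*server bootstrap0\)/#\1/' /etc/haproxy/haproxy.cfg
haproxy -c -f /etc/haproxy/haproxy.cfg
systemctl reload haproxy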

LB.png

 

Logging in to the Cluster

  1. Export the kubeadmin credentials:
    export KUBECONFIG=./ocp43/auth/kubeconfig
  2.  Verify cluster role via oc CLI
    $ oc whoami
    system:admin
  3. Review and approve the CSRs
    $ oc get nodes
    NAME                         STATUS   ROLES    AGE   VERSION
    master01.ocp43.example.com   Ready    master   60m   v1.16.2
    master02.ocp43.example.com   Ready    master   60m   v1.16.2
    master03.ocp43.example.com   Ready    master   60m   v1.16.2
    worker01.ocp43.example.com   Ready    worker   52m   v1.16.2
    worker02.ocp43.example.com   Ready    worker   51m   v1.16.2
    
    $ oc get csr
    NAME        AGE   REQUESTOR                                                                   CONDITION
    csr-66l6l   60m   system:node:master02.ocp43.example.com                                      Approved,Issued
    csr-8r2dc   52m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
    csr-hvt2d   51m   system:node:worker02.ocp43.example.com                                      Approved,Issued
    csr-k2ggg   60m   system:node:master03.ocp43.example.com                                      Approved,Issued
    csr-kg72s   52m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
    csr-qvbg2   60m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
    csr-rtncq   52m   system:node:worker01.ocp43.example.com                                      Approved,Issued
    csr-tsfxx   60m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
    csr-wn7rp   60m   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
    csr-zl87q   60m   system:node:master01.ocp43.example.com                                      Approved,Issued
  4. If there are pending CSRs, approve them via the command below (a batch-approval one-liner follows this list).
    oc adm certificate approve <csr_name>
  5. Validate that all cluster operators are available:
    $ oc get co
    NAME                                       VERSION   AVAILABLE   PROGRESSING   DEGRADED   SINCE
    authentication                             4.3.5     True        False         False      41m
    cloud-credential                           4.3.5     True        False         False      63m
    cluster-autoscaler                         4.3.5     True        False         False      47m
    console                                    4.3.5     True        False         False      43m
    dns                                        4.3.5     True        False         False      54m
    image-registry                             4.3.5     True        False         False      49m
    ingress                                    4.3.5     True        False         False      48m
    insights                                   4.3.5     True        False         False      58m
    kube-apiserver                             4.3.5     True        False         False      53m
    kube-controller-manager                    4.3.5     True        False         False      54m
    kube-scheduler                             4.3.5     True        False         False      54m
    machine-api                                4.3.5     True        False         False      55m
    machine-config                             4.3.5     True        False         False      55m
    marketplace                                4.3.5     True        False         False      48m
    monitoring                                 4.3.5     True        False         False      42m
    network                                    4.3.5     True        False         False      59m
    node-tuning                                4.3.5     True        False         False      50m
    openshift-apiserver                        4.3.5     True        False         False      51m
    openshift-controller-manager               4.3.5     True        False         False      55m
    openshift-samples                          4.3.5     True        False         False      46m
    operator-lifecycle-manager                 4.3.5     True        False         False      55m
    operator-lifecycle-manager-catalog         4.3.5     True        False         False      55m
    operator-lifecycle-manager-packageserver   4.3.5     True        False         False      51m
    service-ca                                 4.3.5     True        False         False      58m
    service-catalog-apiserver                  4.3.5     True        False         False      50m
    service-catalog-controller-manager         4.3.5     True        False         False      50m
    storage                                    4.3.5     True        False         False      49m
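
If step 4 above leaves you with several Pending CSRs (common as workers join), the one-liner from the OpenShift documentation approves them all at once:

oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{" "}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve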

Configure the Image Registry to use ephemeral storage for now.

I will cover persistent storage for the image registry in another blog, since here I want to focus on completing the installation.

To set emptyDir for the image registry:

oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'
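
To confirm the registry operator picked up the patch, a quick check like this helps (exact output will vary):

# The storage spec should now show emptyDir, and the operator should settle
oc get configs.imageregistry.operator.openshift.io cluster -o jsonpath='{.spec.storage}'
oc get co image-registry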

Completing the installation:

$ ./openshift-install --dir=./ocp43 wait-for install-complete
INFO Waiting up to 30m0s for the cluster at https://api.ocp43.example.com:6443 to initialize...
INFO Waiting up to 10m0s for the openshift-console route to be created...
INFO Install complete!
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/Users/shannachan/projects/ocp4.3/ocp43/auth/kubeconfig'
INFO Access the OpenShift web-console here: https://console-openshift-console.apps.ocp43.example.com
INFO Login to the console with user: kubeadmin, password: xxxxxxxxxxxxxx

Congratulations, the cluster is up!

Screen Shot 2020-03-16 at 6.22.41 PM.png

Troubleshooting tips:

Access any server via the command below:

ssh -i ~/.ssh/vsphere-ocp43 core@vm-server

Reference:

https://docs.openshift.com/container-platform/4.3/installing/installing_bare_metal/installing-bare-metal.html

https://docs.openshift.com/container-platform/4.3/installing/installing_vsphere/installing-vsphere.html

https://shanna-chan.blog/2019/07/26/openshift4-vsphere-static-ip/

OpenShift4: vSphere + Static IP

There are many ways to install OCP4. One of the most common asks is how to install OCP4 with static IP addresses in a vSphere environment. This is one of the use cases I wanted to test, and I hope I can share my lessons learned.

Environment:

  • vSphere 6.7 Update2
  • Run install from macOS Mojave 10.14.5

Requirements:

  • No DHCP server
  • Need to use static IP addresses

Problems I had:

Error #1: Dracut: FATAL: Sorry, ‘ip=dhcp’ does not make sense for multiple interface configurations.

dracut.png

Cause:

This happened when I tried to override the IP address by setting the kernel parameter ip=<ip>::<gateway>:<netmask>:<FQDN>:<interface>:none on a VM cloned from the OVA.

Solution:

Set the IP kernel parameter when booting from the rhcos installer ISO (before the initramfs runs) instead of cloning from the OVA.

Here are the steps to create a custom ISO with the parameters baked in, which simplifies the process. You can use the downloaded ISO as-is, but that means a lot of typing at the boot prompt, so the following steps are very useful when creating many VMs.

sudo mount rhcos-410.8.20190425.1-installer.iso /mnt/
mkdir /tmp/rhcos
rsync -a /mnt/* /tmp/rhcos/
cd /tmp/rhcos
vi isolinux/isolinux.cfg
  • Modify the boot entry at the end of the file similar to this:
label linux
  menu label ^Install RHEL CoreOS
  kernel /images/vmlinuz
  append initrd=/images/initramfs.img nomodeset rd.neednet=1 coreos.inst=yes ip=192.168.1.124::192.168.1.1:255.255.255.0:bootstrap.ocp4.example.com:ens192:none nameserver=192.168.1.188 coreos.inst.install_dev=sda coreos.inst.image_url=http://192.168.1.231:8080/rhcos-4.1.0-x86_64-metal-bios.raw.gz coreos.inst.ignition_url=http://192.168.1.231:8080/static.ign

where:

ip=<ip address>::<gateway>:<netmask>:<hostname>:<interface>:none

nameserver=<DNS> 

coreos.inst.image_url=http://<webserver host:port>/rhcos-4.1.0-x86_64-metal-bios.raw.gz

coreos.inst.ignition_url=http://<webserver host:port>/<master or worker ignition>.ign 

  • Create new ISO as /tmp/rhcos_install.iso
sudo mkisofs -U -A "RHCOS-x86_64" -V "RHCOS-x86_64" -volset "RHCOS-x86_64" -J -joliet-long -r -v -T -x ./lost+found -o /tmp/rhcos_install.iso -b isolinux/isolinux.bin -c isolinux/boot.cat -no-emul-boot -boot-load-size 4 -boot-info-table -eltorito-alt-boot -e images/efiboot.img -no-emul-boot .
  • Upload the custom ISO to the datastore for VM creation.

Error #2: No such host

no such host.png

Cause:

Most likely the network was not set up correctly when the master or worker started.

Solution:

In my case, this happened when creating masters/workers from an OVA, and the network configuration was not applied when RHCOS booted.

Error #3: Getting EOF from LB

EOF.png

Cause:

Most likely DNS or web server configuration errors.

Solution:

Make sure all FQDNs resolve to the correct IPs and restart the related services.

Error #4: X509 cert error

x509error.png

Cause:

In my case, the clocks on the servers were not in sync, and I also had to regenerate my SSH key.

Solution:

I set up NTP on the DNS and web servers and made sure the clocks were in sync across all machines. I also regenerated the SSH key and updated my install-config.yaml file.
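
A quick way to confirm time sync on each helper node (DNS, web server, load balancer), assuming chrony is the NTP client:

# Check that the clock is synchronized and which sources chrony is using
timedatectl status
chronyc sources -v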

Prerequisites:

The above components are required in my setup. I used link [3] in the Reference section to set up the DNS, load balancer, and web server. I configured NTP on the DNS, web server, and load balancer, and made sure the time on my ESXi server was correct as well. The filetranspiler is an awesome tool for manipulating ignition files; I used it throughout this test.

Preparing the infrastructure:

I started my installation with OCP 4 official documentation for vSphere (Reference [1] below).

  • SSH keygen

I captured my example steps here; please use your own values.

ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/ocp4vsphere
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/ocp4vsphere
  • Download the OpenShift 4 installer
    • extract it
    • chmod +x openshift-install
    • mv it to the /usr/local/bin directory
wget https://mirror.openshift.com/pub/openshift-v4/clients/ocp/latest/openshift-install-mac-4.1.7.tar.gz
  • Create install-config.yaml similar to the example below:
apiVersion: v1
baseDomain: example.com 
compute:
- hyperthreading: Enabled   
  name: worker
  replicas: 0 
controlPlane:
  hyperthreading: Enabled   
  name: master
  replicas: 3 
metadata:
  name: ocp4
platform:
  vsphere:
    vcenter: <vCenter host>
    username: <administrator>
    password: <password>
    datacenter: dc
    defaultDatastore: datastore
pullSecret: '<your pull seceret>' 
sshKey: '<your public ssh key>'
  • Create ignition files
openshift-install create ignition-configs --dir=<installation_directory>
  • Prepare for creating bootstrap with hostname and the static IP
    • Download filetranspiler:
      • git clone https://github.com/ashcrow/filetranspiler
    • Copy <installation_directory>/bootstrap.ign to <filetranspile_directory>/
    • Create bootstrap hostname file:
      echo "bootstrap.ocp4.example.com" > hostname
    • move hostname file to <filetranspile_directory>/bootstrap/etc/
    • Create ifcfg-ens192 file under

      <filetranspile_directory>/bootstrap/etc/sysconfig/network-scripts with following content

      NAME=ens192
      DEVICE=ens192
      TYPE=Ethernet
      BOOTPROTO=none
      ONBOOT=yes
      IPADDR=<bootstrap IP address>
      NETMASK=<netmask>
      GATEWAY=<gateway>
      DOMAIN=example.com
      DNS1=<dns>
      PREFIX=24
      DEFROUTE=yes
      IPV6INIT=no
    • Run this command to create the new bootstrap ignition file:
      cd <filetranspile_directory>
      ./filetranspile -i bootstrap.ign -f bootstrap -o bootstrap-static.ign
    • Upload bootstrap-static.ign to the webserver:
      scp bootstrap-static.ign user@<webserverip>:/var/www/html/bootstrap.ign
    • Create an append-bootstrap.ign. Example as shown below.
      {
        "ignition": {
          "config": {
            "append": [
              {
                "source": "http://<webserverip:port>/bootstrap.ign", 
                "verification": {}
              }
            ]
          },
          "timeouts": {},
          "version": "2.1.0"
        },
        "networkd": {},
        "passwd": {},
        "storage": {},
        "systemd": {}
      }
    • Encode the append-bootstrap.ign file.
      openssl base64 -A -in append-bootstrap.ign -out append-bootstrap.64
    • Repeat the hostname/ifcfg/filetranspile steps above for each master and worker to produce their static ignition files (for example, master0-static.ign)
    • Upload master0-static.ign to the webserver:
      scp master0-static.ign user@<webserverip>:/var/www/html/master0.ign
      • Note that master0.ign is used in the kernel parameter when installing the ISO.
    • Create VM from the custom ISO
      • Create VM with 4 CPU and 16 RAM
      • Select the custom ISO
      • Add “disk.EnableUUID” and set it to TRUE under VM Options -> Edit Configuration.
      • Power on the VM
      • Go to the VM console:
      • Screen Shot 2019-07-26 at 1.37.09 PM.png
      • Hit <Tab>
      • Screen Shot 2019-07-26 at 1.37.22 PM.png
      • You can modify the parameters for each server here.
      • Hit <enter>
      • The server will reboot after installation.
  • Repeat for all masters and workers (a loop sketch follows this list).
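
To avoid doing the fake-root edits by hand for every node, something like the loop below can generate the per-node ignition files. This is only a sketch under my own assumptions: fakeroot-template is a hypothetical directory laid out like the bootstrap fake root above, and __IPADDR__ in its ifcfg-ens192 is a placeholder convention of my own.

#!/bin/bash
# Sketch: generate <node>-static.ign for each master from a shared fake-root template
set -euo pipefail
nodes="master0:192.168.1.125 master1:192.168.1.126 master2:192.168.1.127"   # example values only
for entry in $nodes; do
  name=${entry%%:*}
  ip=${entry##*:}
  rm -rf "$name" && cp -r fakeroot-template "$name"                 # hypothetical template dir
  echo "${name}.ocp4.example.com" > "$name/etc/hostname"
  # GNU sed shown; on macOS use: sed -i '' ...
  sed -i "s/__IPADDR__/${ip}/" "$name/etc/sysconfig/network-scripts/ifcfg-ens192"
  ./filetranspile -i master.ign -f "$name" -o "${name}-static.ign"  # use worker.ign for workers
done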

Installation:

  • When you have all the VMs created, run the following command.
$ openshift-install --dir=ocp4 wait-for bootstrap-complete --log-level debug

DEBUG OpenShift Installer v4.1.7-201907171753-dirty 
DEBUG Built from commit 5175a461235612ac64d576aae09939764ac1845d 
INFO Waiting up to 30m0s for the Kubernetes API at https://api.ocp4.example.com:6443... 
INFO API v1.13.4+3a25c9b up                       
INFO Waiting up to 30m0s for bootstrapping to complete... 
DEBUG Bootstrap status: complete                  
INFO It is now safe to remove the bootstrap resources 

 

Verification

  • Log in:
$ export KUBECONFIG=ocp4/auth/kubeconfig
$ oc whoami

$ oc get nodes
NAME                       STATUS   ROLES    AGE     VERSION
master0.ocp4.example.com   Ready    master   35m     v1.13.4+205da2b4a
master1.ocp4.example.com   Ready    master   35m     v1.13.4+205da2b4a
master2.ocp4.example.com   Ready    master   35m     v1.13.4+205da2b4a
worker0.ocp4.example.com   Ready    worker   20m     v1.13.4+205da2b4a
worker1.ocp4.example.com   Ready    worker   11m     v1.13.4+205da2b4a
worker2.ocp4.example.com   Ready    worker   5m25s   v1.13.4+205da2b4a
  • Validate all CSR is approved
$ oc get csr

NAME        AGE     REQUESTOR                                                                   CONDITION
csr-6vqqn   35m     system:node:master1.ocp4.example.com                                        Approved,Issued
csr-7hlkk   20m     system:node:worker0.ocp4.example.com                                        Approved,Issued
csr-9p6sw   11m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
csr-b4cst   35m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
csr-gx4dz   5m33s   system:node:worker2.ocp4.example.com                                        Approved,Issued
csr-kqcfv   11m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
csr-lh5zg   35m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
csr-m2hvl   35m     system:node:master0.ocp4.example.com                                        Approved,Issued
csr-npb4l   35m     system:node:master2.ocp4.example.com                                        Approved,Issued
csr-rdpgm   20m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
csr-s2d7z   11m     system:node:worker1.ocp4.example.com                                        Approved,Issued
csr-sx2r5   6m      system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
csr-tvgbq   35m     system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
csr-vvp2h   6m11s   system:serviceaccount:openshift-machine-config-operator:node-bootstrapper   Approved,Issued
  • Patch the image registry for a non-production environment
$ oc project openshift-image-registry
$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"storage":{"emptyDir":{}}}}'
config.imageregistry.operator.openshift.io/cluster patched

Next step?

To improve the process, we need to automate this.

Reference:

[1] OpenShift 4 Official Installation Documentation for vSphere

[2] Using Static IP for OCP4 Installation Guide

[3] Setting Up Pre-requisites Guide

Knative on OCP 4.1 Test Run

Install OCP 4.1.2

This blog assumes that you went to try.openshift.com and created your OCP 4.1 IPI cluster. If you have not, you can go to try.openshift.com –> Get Started to set up an OCP 4.1 cluster.

Install Istio (Maistra 0.11)

Istio is required before installing Knative; however, the Knative operator will install the minimal Istio components if Istio is not already on the platform. For my test, I installed Service Mesh on OCP 4.1 using the community version. Here are my steps:

  • Install service mesh operator
oc new-project istio-operator
oc new-project istio-system
oc project istio-operator
oc apply -f https://raw.githubusercontent.com/Maistra/istio-operator/maistra-0.11/deploy/maistra-operator.yaml
  • Verify the Service Mesh operator is up and running
#to get the name of the operator pod
oc get pods
#view the logs of the pod
oc logs <name of the pod from above step>

#log shown as below
{"level":"info","ts":1562602857.4691303,"logger":"kubebuilder.controller","caller":"controller/controller.go:153","msg":"Starting workers","Controller":"servicemeshcontrolplane-controller","WorkerCount":1}
  • Create custom resource as cr.yaml using the below content.
apiVersion: maistra.io/v1
kind: ServiceMeshControlPlane
metadata:
  name: basic-install
spec:
  # NOTE, if you remove all children from an element, you should remove the
  # element too.  An empty element is interpreted as null and will override all
  # default values (i.e. no values will be specified for that element, not even
  # the defaults baked into the chart values.yaml).
  istio:
    global:
      proxy:
        # constrain resources for use in smaller environments
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 500m
            memory: 128Mi

    gateways:
      istio-egressgateway:
        # disable autoscaling for use in smaller environments
        autoscaleEnabled: false
      istio-ingressgateway:
        # disable autoscaling for use in smaller environments
        autoscaleEnabled: false
        # set to true to enable IOR
        ior_enabled: true

    mixer:
      policy:
        # disable autoscaling for use in smaller environments
        autoscaleEnabled: false

      telemetry:
        # disable autoscaling for use in smaller environments
        autoscaleEnabled: false
        # constrain resources for use in smaller environments
        resources:
          requests:
            cpu: 100m
            memory: 1G
          limits:
            cpu: 500m
            memory: 4G

    pilot:
      # disable autoscaling for use in smaller environments
      autoscaleEnabled: false
      # increase random sampling rate for development/testing
      traceSampling: 100.0

    kiali:
      # change to false to disable kiali
      enabled: true

      # to use oauth, remove the following 'dashboard' section (note, oauth is broken on OCP 4.0 with kiali 0.16.2)
      # create a secret for accessing kiali dashboard with the following credentials
      dashboard:
        user: admin
        passphrase: admin

    tracing:
      # change to false to disable tracing (i.e. jaeger)
      enabled: true
  • Install service mesh
oc project istio-system
oc create -f cr.yaml

#it will take a while to have all the pods up
watch 'oc get pods'
  • When service mesh is available
Every 2.0s: oc get pod -n istio-system                                                                                              Mon Jul  8 16:39:46 2019

NAME                                      READY   STATUS    RESTARTS   AGE
elasticsearch-0                           1/1     Running   0          13m
grafana-86dc5978b8-k2dvl                  1/1     Running   0          9m15s
ior-6656b5cfdb-cjt7z                      1/1     Running   0          9m55s
istio-citadel-7678d4749b-bjqq8            1/1     Running   0          14m
istio-egressgateway-66d8b969b8-wmcfm	  1/1     Running   0          9m55s
istio-galley-7f57cd4c6c-6d2r8             1/1     Running   0          11m
istio-ingressgateway-7794d8d4fc-dd72g     1/1     Running   0          9m55s
istio-pilot-77d65868d4-68lzd              2/2     Running   0          10m
istio-policy-7486f4cb6c-fdw6q             2/2     Running   0          11m
istio-sidecar-injector-66d49c6865-clqzm   1/1     Running   0          9m39s
istio-telemetry-799557976b-9ljz4          2/2     Running   0          11m
jaeger-agent-b7bz8                        1/1     Running   0          13m
jaeger-agent-j4dnp                        1/1     Running   0          13m
jaeger-agent-xmwzz                        1/1     Running   0          13m
jaeger-collector-96756f879-n889z          1/1     Running   3          13m
jaeger-query-6f4456546c-mwjkk             1/1     Running   3          13m
kiali-c58c8476d-wzhj6                     1/1     Running   0          8m45s
prometheus-5cb5d7549b-lmjtk               1/1     Running   0          14m

Install Knative 0.6

  • Install Knative serving operator
    • Click Catalog -> OperatorHub -> search for “knative” keyword
    • Click “Knative Serving Operator”
    • Click “Install”

Screen Shot 2019-07-08 at 9.45.57 AM.png

  • Install Knative eventing operator
    • Click Catalog -> OperatorHub -> search for “knative” keyword
    • Click “Knative Eventing Operator”
    • Click “Install”

Screen Shot 2019-07-08 at 9.47.24 AM.png

  • I also manually scaled up my nodes to prepare for the tutorial deployment.
  • Click Compute -> Machine Sets
  • Click the “3 dots” at the end of each machine set -> click Edit Count -> enter 2 (a CLI equivalent is sketched below)
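
The same scale-up can be done from the CLI if you prefer; this is a sketch, so substitute your actual machine set names.

# List the machine sets, then bump the replica count on each
oc get machinesets -n openshift-machine-api
oc scale machineset <machineset-name> -n openshift-machine-api --replicas=2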

Screen Shot 2019-07-08 at 9.49.15 AM.png

  • Validate that the operators appear under Installed Operators in the openshift-operators project

Screen Shot 2019-07-08 at 9.52.02 AM.png

Install Knative Client – kn

The Knative client CLI (kn) can list, create, delete, and update Knative services.

$ which kn
/usr/local/bin/kn
$ kn version
Version:      v20190625-13ff277
Build Date:   2019-06-25 09:52:20
Git Revision: 13ff277
Dependencies:
- serving:    

 

Let’s Have Some Fun

Knative serving via kn

I am using an image I already built and pushed to Docker Hub for this example. Here are the steps to create a simple Knative service.

oc new-project knative-demo
oc adm policy add-scc-to-user anyuid -z default -n knative-demo
oc adm policy add-scc-to-user privileged -z default -n knative-demo
kn service create mysvc --image docker.io/piggyvenus/greeter:0.0.1

List Knative service

$ kn service list
NAME    DOMAIN                                                        GENERATION   AGE   CONDITIONS   READY   REASON
mysvc   mysvc.knative-demo.apps.cluster-6c33.sandbox661.opentlc.com   1            29s   3 OK / 3     True    

Execute the service

$ curl mysvc.knative-demo.apps.cluster-6c33.sandbox661.opentlc.com
Hi  greeter => '0bd7a995d27e' : 1

Knative serving via a YAML file

Creating the Knative Service from a YAML file can also do the trick. An example is shown below.

apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: greeter
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: docker.io/piggyvenus/greeter:0.0.1
            livenessProbe:
              httpGet:
                path: /healthz
            readinessProbe:
              httpGet:
                path: /healthz

 

Create Knative service

oc apply -f service.yaml

Check out Knative service resources

oc get deployment
oc get pods
oc get services.serving.knative.dev
oc get configuration.serving.knative.dev
oc get routes.serving.knative.dev

Invoke Knative service

oc get routes.serving.knative.dev
curl mysvc.knative-demo.apps.cluster-6c33.sandbox661.opentlc.com

Please check out the Knative tutorial for more Knative examples. I hope you find this blog useful.