Tuesday, 5 July 2022

A local OpenShift 4.x development environment on your laptop

Having access to a dev OpenShift 4.x cluster that you control is invaluable - Red Hat now provides this through CodeReady Containers, also known as crc.

Setting up the rest of the cluster and dev ecosystem is a little complicated at first, so here's a set of notes documenting how it can be done on an 8-core / 16 GB Fedora 35 machine.

All steps below refer to a machine called devhost, which is an alias for the local server. Code for the REST server and its configuration is available in the vanilla-node-rest-api repository used below.

Initial crc setup

$ sudo dnf install -y podman skopeo

# add 'devhost' to machine IP addr that references the crc i/f
$ echo "192.168.130.1 devhost" | sudo tee -a /etc/hosts

# set up OpenShift - this will require your 'pull secrets' as it downloads its virtual image to run
# this will result in a ~36Gb VM image under ~/.crc
$ wget https://developers.redhat.com/content-gateway/file/pub/openshift-v4/clients/crc/2.5.1/crc-linux-amd64.tar.xz
$ xz -d < crc-linux-amd64.tar.xz | tar xf - && sudo mv crc /usr/local/bin
$ echo 'export PATH=${PATH}:~/.crc/bin/oc/' >> ~/.bashrc

$ crc setup
INFO Using bundle path /home/ray/.crc/cache/crc_libvirt_4.10.18_amd64.crcbundle
INFO Checking if running as non-root
INFO Checking if running inside WSL2
INFO Checking if crc-admin-helper executable is cached
INFO Caching crc-admin-helper executable
INFO Using root access: Changing ownership of /home/ray/.crc/bin/crc-admin-helper-linux
INFO Using root access: Setting suid for /home/ray/.crc/bin/crc-admin-helper-linux
INFO Checking for obsolete admin-helper executable
INFO Checking if running on a supported CPU architecture
INFO Checking minimum RAM requirements
INFO Checking if crc executable symlink exists
INFO Checking if Virtualization is enabled
INFO Checking if KVM is enabled
INFO Checking if libvirt is installed
INFO Checking if user is part of libvirt group
INFO Checking if active user/process is currently part of the libvirt group
INFO Checking if libvirt daemon is running
INFO Checking if a supported libvirt version is installed
INFO Checking if crc-driver-libvirt is installed
INFO Checking crc daemon systemd service
INFO Checking crc daemon systemd socket units
INFO Checking if systemd-networkd is running
INFO Checking if NetworkManager is installed
INFO Checking if NetworkManager service is running
INFO Checking if dnsmasq configurations file exist for NetworkManager
INFO Checking if the systemd-resolved service is running
INFO Checking if /etc/NetworkManager/dispatcher.d/99-crc.sh exists
INFO Checking if libvirt 'crc' network is available
INFO Checking if libvirt 'crc' network is active
INFO Checking if CRC bundle is extracted in '$HOME/.crc'
INFO Checking if /home/ray/.crc/cache/crc_libvirt_4.10.18_amd64.crcbundle exists
INFO Getting bundle for the CRC executable
INFO Downloading crc_libvirt_4.10.18_amd64.crcbundle
119.76 MiB / 3.13 GiB [------>________________________________]
INFO Uncompressing /home/ray/.crc/cache/crc_libvirt_4.10.18_amd64.crcbundle
crc.qcow2: 12.45 GiB / 12.45 GiB [--------------------------------] 100.00%
oc: 117.14 MiB / 117.14 MiB [------------------------------------] 100.00%
Your system is correctly setup for using CRC.
Use 'crc start' to start the instance

# if upgrading, remove existing cluster info (incl projects)
$ crc delete

$ crc start
INFO Checking if running as non-root
INFO Checking if running inside WSL2
INFO Checking if crc-admin-helper executable is cached
INFO Checking for obsolete admin-helper executable
INFO Checking if running on a supported CPU architecture
INFO Checking minimum RAM requirements
INFO Checking if crc executable symlink exists
INFO Checking if Virtualization is enabled
INFO Checking if KVM is enabled
INFO Checking if libvirt is installed
INFO Checking if user is part of libvirt group
INFO Checking if active user/process is currently part of the libvirt group
INFO Checking if libvirt daemon is running
INFO Checking if a supported libvirt version is installed
INFO Checking if crc-driver-libvirt is installed
INFO Checking crc daemon systemd socket units
INFO Checking if systemd-networkd is running
INFO Checking if NetworkManager is installed
INFO Checking if NetworkManager service is running
INFO Checking if dnsmasq configurations file exist for NetworkManager
INFO Checking if the systemd-resolved service is running
INFO Checking if /etc/NetworkManager/dispatcher.d/99-crc.sh exists
INFO Checking if libvirt 'crc' network is available
INFO Checking if libvirt 'crc' network is active
INFO Loading bundle: crc_libvirt_4.10.18_amd64...
INFO Starting CRC VM for OpenShift 4.10.18...
INFO CRC instance is running with IP 192.168.130.11
INFO CRC VM is running
INFO Check internal and public DNS query...
INFO Check DNS query from host...
INFO Verifying validity of the kubelet certificates...
INFO Starting OpenShift kubelet service
INFO Waiting for kube-apiserver availability... [takes around 2min]
INFO Waiting for user's pull secret part of instance disk...
INFO Starting OpenShift cluster... [waiting for the cluster to stabilize]
INFO Operator openshift-apiserver is not yet available
INFO Operator openshift-apiserver is not yet available
INFO All operators are available. Ensuring stability...
INFO Operators are stable (2/3)...
INFO Operators are stable (3/3)...
INFO Adding crc-admin and crc-developer contexts to kubeconfig...
Started the OpenShift cluster.

The server is accessible via web console at:
  https://console-openshift-console.apps-crc.testing

Log in as administrator:
  Username: kubeadmin
  Password: ....

Log in as user:
  Username: developer
  Password: developer

Use the 'oc' command line interface:
  $ eval $(crc oc-env)
  $ oc login -u developer https://api.crc.testing:6443

# convenience if we ever need to log in to the cluster VM directly
$ cat >> ~/.ssh/config << EOF
Host crc
    Hostname 192.168.130.11
    User core
    IdentityFile ~/.crc/machines/crc/id_ecdsa
    StrictHostKeyChecking no
    UserKnownHostsFile /dev/null
EOF
$ chmod 600 ~/.ssh/config
$ ssh core@crc
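
The defaults can be tight on an 8-core / 16 GB machine; before crc start you can size the CRC VM with crc config. A minimal sketch - the values here are only suggestions, tune them to what your laptop can spare:
# optional: size the CRC VM before starting it
$ crc config set cpus 6          # leave a couple of cores for the host
$ crc config set memory 12288    # MiB handed to the CRC VM
$ crc config set disk-size 60    # GiB for the VM disk
$ crc config view                # confirm what the next 'crc start' will use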

Podman as a local registry

Allow podman and its images to be used remotely by crc. This is separate from your local image cache, which is what the podman images command shows you.

The first thing to do is set up the local docker registry: enable the podman socket, open the registry port, add the registry configuration and create a directory to house the registry data.
$ systemctl --user enable --now podman.socket
$ firewall-cmd --permanent --add-port=5000/tcp --zone=libvirt
$ firewall-cmd --reload

$ mkdir ~/.config/containers
$ cat > ~/.config/containers/registries.conf << EOF
unqualified-search-registries = ["registry.fedoraproject.org", "registry.access.redhat.com", "docker.io", "quay.io", "devhost:5000"]

[[registry]]
prefix = "devhost/foo"
insecure = true
blocked = false
location = "devhost:5000"
short-name-mode="enforcing"
EOF

$ mkdir -p ${HOME}/.local/share/containers/registry

# finally run the registry
# this is running on the 'crc' interface (ie 192.168.130.1) that is created when crc starts;
# using the crc i/f ensures that you can run the cluster and registry on an isolated device with no network
$ podman run --privileged -d --name registry \
    -p $(getent hosts devhost | cut -f1 -d\ ):5000:5000 \
    -v ${HOME}/.local/share/containers/registry:/var/lib/registry \
    -e REGISTRY_STORAGE_DELETE_ENABLED=true \
    --rm \
    registry:2
Resolved "registry" as an alias (/etc/containers/registries.conf.d/000-shortnames.conf)
Trying to pull docker.io/library/registry:2...
Getting image source signatures
Copying blob e69d20d3dd20 done
Copying blob ea60b727a1ce done
Copying blob c87369050336 done
Copying blob 2408cc74d12b done
Copying blob fc30d7061437 done
Copying config 773dbf02e4 done
Writing manifest to image destination
Storing signatures
c598323f0a44835e7771c79adb0e1280d7b3347bf96318f3b3be7adb9e20f7ee

$ podman ps
CONTAINER ID  IMAGE                          COMMAND               CREATED         STATUS             PORTS                   NAMES
c598323f0a44  docker.io/library/registry:2   /etc/docker/regis...  15 seconds ago  Up 16 seconds ago  0.0.0.0:5000->5000/tcp  registry
Refs:
https://github.com/containers/podman/blob/main/docs/tutorials/remote_client.md
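
Before wiring the registry into crc it is worth a quick sanity check that it answers on the devhost address. A sketch using the standard Docker Registry HTTP API v2 endpoints (the catalogue will be empty until something is pushed):
# the registry speaks the Docker Registry HTTP API v2
$ curl http://devhost:5000/v2/_catalog
# expect something like: {"repositories":[]}
# once images have been pushed, tags for a repository can be listed too
$ curl http://devhost:5000/v2/foo/vnra/tags/list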

The registry is now available but runs in insecure mode, which means there is an additional step to configure on the crc cluster:
# https://github.com/code-ready/crc/wiki/Adding-an-insecure-registry
$ oc login -u kubeadmin https://api.crc.testing:6443
$ oc patch --type=merge \
    --patch='{ "spec": { "registrySources": { "insecureRegistries": [ "devhost:5000" ] } } }' \
    image.config.openshift.io/cluster
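
To confirm the patch took effect, the image config can be read back (the cluster may take a minute or two to roll the change out to the node):
# the insecure registry should now appear in the cluster image config
$ oc get image.config.openshift.io/cluster \
    -o jsonpath='{.spec.registrySources.insecureRegistries}{"\n"}'
# expect: ["devhost:5000"]
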
Trying to set this up behind a reverse proxy with a self-generated root CA and a self-signed certificate does not work reliably, even with the root CA installed on the cluster - so save yourself the headache and use the insecure-registry configuration above.

How to push an image to the local registry

# this provides a simple nodejs based server, with /api/status endpoint
$ git clone https://github.com/whatdoineed2do/vanilla-node-rest-api

# generate the docker image
$ make package
$ cat > Dockerfile << EOF
FROM node:18-alpine3.15
WORKDIR /app
COPY . .
EXPOSE 8080
CMD node server.js
EOF
$ export UUID=$(uuidgen) && \
  podman build --squash -t vnra:${UUID} . && \
  podman push --tls-verify=false vnra:${UUID} devhost:5000/foo/vnra:${UUID}

# validate it has been pushed
$ podman search --tls-verify=false devhost:5000/
INDEX         NAME                   DESCRIPTION  STARS  OFFICIAL  AUTOMATED
devhost:5000  devhost:5000/foo/vnra               0
$ podman search --tls-verify=false vnra
INDEX         NAME                   DESCRIPTION  STARS  OFFICIAL  AUTOMATED
devhost:5000  devhost:5000/foo/vnra               0

$ skopeo inspect --tls-verify=false docker://devhost:5000/foo/vnra
{
    "Name": "devhost:5000/foo/vnra",
    "Digest": "sha256:e4a7636b834c6287800a3a664ef3f5ce3f06d623437a37b104a81febef69b1e7",
    "RepoTags": [
        "latest",
        "b835dd7",
        "ecd9fe5",
        "66f793b"
    ],
    "Created": "2022-07-05T15:11:41.878665307Z",
    "DockerVersion": "",
    "Labels": {
        "io.buildah.version": "1.23.1"
    },
    "Architecture": "amd64",
    "Os": "linux",
    "Layers": [
        "sha256:8dfb4e6dc5179a0adf4a069e14d984216740f28b088c26090c8f16b97e44b222",
        "sha256:be2771caf87008c0ade639b6debce2ddb8f735e32eeb73d4bc01a6c68c09c933",
        "sha256:be4f0bf8cf1b2cab1e1197378bf7756cae87232d43ef1ec0c031e62cb83f6735",
        "sha256:89383deba3bc0da6d79f88604e4710a8972c9e682412267fd565630d79e90cd4",
        "sha256:0f3180c4d208c7874b0afddd1940fc3f297dd67b90944e40ed630cd5adaa3a4b"
    ],
    "Env": [
        "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
        "NODE_VERSION=18.4.0",
        "YARN_VERSION=1.22.19"
    ]
}
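
skopeo can also list just the tags, which is handy when picking the tag to reference in a deployment later on:
# enumerate the tags held by the local registry for this repository
$ skopeo list-tags --tls-verify=false docker://devhost:5000/foo/vnra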

How to delete an image from the local registry

# registry MUST be run with '-e REGISTRY_STORAGE_DELETE_ENABLED=true'
$ skopeo delete --tls-verify=false \
    docker://devhost:5000/foo/bar:0.0.2

# force garbage collection on running registry
$ podman exec registry \
    /bin/registry garbage-collect \
    --delete-untagged=true /etc/docker/registry/config.yml
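
If you prefer not to use skopeo, the same deletion can be done against the registry's HTTP API: fetch the manifest digest for the tag and DELETE it, then garbage-collect as above. A sketch, assuming the Docker Registry v2 API (repository and tag names are only illustrative):
# grab the manifest digest for the tag (the v2 manifest media type must be requested)
$ DIGEST=$(curl -sI -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
    http://devhost:5000/v2/foo/bar/manifests/0.0.2 \
    | awk 'tolower($1) == "docker-content-digest:" {print $2}' | tr -d '\r')
# delete by digest - deletion by tag is not supported by the registry API
$ curl -X DELETE http://devhost:5000/v2/foo/bar/manifests/${DIGEST}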

Configuring CRC to use our local development docker registry

Now that we have a local registry, we can see how to use it with crc. It is possible to push images directly to the cluster's internal registry, but this is not typical of production environments.

Create a DeploymentConfig that specifies the usual items plus a reference to our local registry, and apply it to the cluster, which will pull the image and spin up the pod:
$ cat > vnra.yaml << EOF
apiVersion: apps.openshift.io/v1
kind: List
items:
- apiVersion: v1
  kind: Service
  metadata:
    name: vnra
  spec:
    ports:
    - port: 8080
      targetPort: 8080
    selector:
      deploymentconfig: vnra
- apiVersion: apps.openshift.io/v1
  kind: DeploymentConfig
  metadata:
    name: vnra
    labels:
      app: vnra
  spec:
    replicas: 1
    selector:
      deploymentconfig: vnra
    strategy:
      # Rolling strategy brings new pods up before old ones are scaled down; use Recreate to scale down first
      type: Rolling
    template:
      metadata:
        labels:
          deploymentconfig: vnra
          app: vnra
      spec:
        restartPolicy: Always
        containers:
        - image: devhost:5000/foo/vnra:66f793b
          name: main
          imagePullPolicy: IfNotPresent
          ports:
          - containerPort: 8080
            protocol: TCP
            name: http
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /api/status
              port: 8080
              scheme: HTTP
            periodSeconds: 60
            successThreshold: 1
    triggers:
    - type: ConfigChange
- apiVersion: route.openshift.io/v1
  kind: Route
  metadata:
    name: vnra
  spec:
    to:
      kind: Service
      name: vnra
EOF

$ oc login -u developer -p developer
$ oc apply -f vnra.yaml

## pods should be spinning up
$ oc get routes
NAME   HOST/PORT                   PATH   SERVICES   PORT   TERMINATION   WILDCARD
vnra   vnra-foo.apps-crc.testing          vnra       None

$ oc get pods
NAME            READY   STATUS      RESTARTS   AGE
vnra-8-deploy   0/1     Completed   0          18m
vnra-8-qvzn6    1/1     Running     0          18m

$ oc logs -f $(oc get pods | grep -v deploy | grep vnra | cut -f1 -d\ )
Server 66f793b running on port 8080
Tue Jul 05 2022 21:42:56 GMT+0000 (Coordinated Universal Time): #1 GET /api/status {"host":"10.217.0.188:8080","user-agent":"kube-probe/1.23","accept":"*/*","connection":"close"}
Tue Jul 05 2022 21:43:56 GMT+0000 (Coordinated Universal Time): #2 GET /api/status {"host":"10.217.0.188:8080","user-agent":"kube-probe/1.23","accept":"*/*","connection":"close"}
Tue Jul 05 2022 21:44:56 GMT+0000 (Coordinated Universal Time): #3 GET /api/status {"host":"10.217.0.188:8080","user-agent":"kube-probe/1.23","accept":"*/*","connection":"close"}
...

$ curl vnra-foo.apps-crc.testing/api/status | jq
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   105    0   105    0     0  44080      0 --:--:-- --:--:-- --:--:-- 52500
{
  "ip": "10.217.0.188",
  "uptime": 1277.743065979,
  "timestamp": 1657053916,
  "version": "66f793b",
  "requests": 23
}
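
Note that with only a ConfigChange trigger, pushing a newer image under the same tag will not redeploy anything by itself; a rollout can be kicked off by hand. A sketch using the standard DeploymentConfig rollout commands:
# start a new deployment of the DeploymentConfig and watch it progress
$ oc rollout latest dc/vnra
$ oc rollout status dc/vnra
# roll back if the new image misbehaves
$ oc rollout undo dc/vnra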

Troubleshooting cluster image pull

If you are struggling with pods not spinning up due to ImagePullBackOff, you can verify that the cluster can communicate with the registry AND that the specified image name is available. A successful manual pull looks like this:
$ oc login -u developer -p developer
$ oc new-app --image=devhost:5000/foo/vnra:latest --name=manual
--> Found container image 66f793b (1 days old) from devhost:5000 for "devhost:5000/foo/vnra:latest"

    * An image stream tag will be created as "manual:latest" that will track this image

--> Creating resources ...
    imagestream.image.openshift.io "manual" created
    deployment.apps "manual" created
--> Success
    Run 'oc status' to view your app.
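
If the manual pull fails instead, the pod events usually say exactly why (a wrong image name, or the insecure-registry patch not applied), for example:
# the Events section shows the exact pull error reported by the kubelet
$ oc describe $(oc get pods -o name | grep vnra | head -1)
# or look at recent events for the whole project
$ oc get events --sort-by=.lastTimestamp | tail
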
Refs:
https://cloud.redhat.com/blog/deploying-applications-from-images-in-openshift-part-one-web-console

Helm

helm can be thought of as a package manager for Kubernetes that lets you define, install and upgrade application configurations. It is particularly useful with templates, where a common set of files can describe different environments.

Firstly we need helm itself, and then a helm chart repository - for the latter we can use ChartMuseum, which is part of the Helm project.
$ mkdir ~/.local/share/containers/helm
$ chartmuseum --debug --port=8089 \
    --storage=local \
    --storage-local-rootdir=~/.local/share/containers/helm

# one time setup
$ helm repo add chartmuseum http://devhost:8089
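
As with the image registry, the cluster VM has to be able to reach ChartMuseum over the crc interface, so the port may need opening on the same firewall zone; ChartMuseum's health and index endpoints give a quick check that it is serving (a sketch, assuming port 8089 as configured above):
# allow the CRC VM to reach ChartMuseum
$ firewall-cmd --permanent --add-port=8089/tcp --zone=libvirt
$ firewall-cmd --reload
# quick check that ChartMuseum is up and serving its index
$ curl http://devhost:8089/health
$ curl http://devhost:8089/index.yaml
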
Once the helm components are available, we need to make crc aware of the repository:
$ oc login -u kubeadmin https://api.crc.testing:6443
$ cat << EOF | oc apply -f -
apiVersion: helm.openshift.io/v1beta1
kind: HelmChartRepository
metadata:
  name: helm-local-repo
spec:
  name: helm-local-repo
  connectionConfig:
    url: http://devhost:8089/
EOF
$ oc login -u developer https://api.crc.testing:6443
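
The repository object is cluster-scoped and can be read back with oc to confirm it was registered; published charts should then also appear in the developer web console's Helm catalogue:
# confirm the custom chart repository has been registered with the cluster
$ oc get helmchartrepositories
$ oc get helmchartrepository helm-local-repo -o yaml
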
Refs:
https://docs.openshift.com/container-platform/4.6/cli_reference/helm_cli/configuring-custom-helm-chart-repositories.html

Creating a boilerplate `helm` chart

$ helm create foo
$ tree foo
foo
├── charts
├── Chart.yaml
├── templates
│   ├── deployment.yaml
│   ├── _helpers.tpl
│   ├── hpa.yaml
│   ├── ingress.yaml
│   ├── NOTES.txt
│   ├── serviceaccount.yaml
│   ├── service.yaml
│   └── tests
│       └── test-connection.yaml
└── values.yaml

3 directories, 10 files
We can then make our modifications, in particular adding a {dev,prod}.yaml, and test:
# https://helm.sh/docs/chart_template_guide/debugging/
$ helm lint --debug -f foo/dev.yaml foo
==> Linting foo
[INFO] Chart.yaml: icon is recommended

1 chart(s) linted, 0 chart(s) failed
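
Beyond lint, the chart can also be rendered entirely offline with helm template, which prints the generated manifests without talking to the cluster - useful for eyeballing what the {dev,prod}.yaml overrides actually change:
# render the chart locally with the dev overrides; nothing is sent to the cluster
$ helm template -f foo/dev.yaml foo | less
# or render a single template file
$ helm template -f foo/dev.yaml foo -s templates/deployment.yaml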

Validating and Installing a helm chart manually

For the first installation following a successful `lint`, you can fully verify that the parameters and rendering are valid by performing an `install --dry-run`. This process, however, will communicate with the cluster (`helm` uses the local api server automatically without the need to specify `--kube-apiserver`) and will fail if the `route`, `service` or `deployment`/`deploymentconfig` already exist.

Preparing

We require at minimum a Chart.yaml, a values file and a template configuration.
$ mkdir -p helm/templates
$ cat > helm/Chart.yaml << EOF
apiVersion: v2
name: vnra
description: Vanilla Node REST api service in K8
type: application
version: 0.0.1
appVersion: "79de471"
EOF

$ cat > helm/values.yaml << EOF
replicaCount: 1
image:
  repository: devhost:5000/foo
  pullPolicy: IfNotPresent
  tag: "79de471"
autoscaling:
  enabled: false
  minReplicas: 1
EOF

$ cat > helm/dev.yaml << EOF
env: dev
replicaCount: 1
autoscaling:
  enabled: true
  maxReplicas: 2
  targetCPUUtilizationPercentage: 80
EOF

$ cat > helm/templates/all.yaml << EOF
apiVersion: apps.openshift.io/v1
kind: List
items:
- apiVersion: v1
  kind: Service
  metadata:
    name: vnra
  spec:
    ports:
    - port: 8080
      targetPort: 8080
    selector:
      deploymentconfig: vnra
- apiVersion: apps.openshift.io/v1
  kind: DeploymentConfig
  metadata:
    name: vnra
    labels:
      app: vnra
      env: {{ .Values.env }}
  spec:
    {{- if not .Values.autoscaling.enabled }}
    replicas: {{ .Values.replicaCount }}
    {{- end }}
    selector:
      deploymentconfig: vnra
    strategy:
      # Rolling strategy brings new pods up before old ones are scaled down; use Recreate to scale down first
      type: Rolling
    template:
      metadata:
        labels:
          deploymentconfig: vnra
          app: vnra
      spec:
        restartPolicy: Always
        containers:
        - image: "{{ .Values.image.repository }}/{{ .Chart.Name }}:{{ .Values.image.tag | default .Chart.AppVersion }}"
          name: {{ .Chart.Name }}
          imagePullPolicy: {{ .Values.image.pullPolicy }}
          ports:
          - containerPort: 8080
            protocol: TCP
            name: http
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /api/status
              port: 8080
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
    triggers:
    - type: ConfigChange
- apiVersion: route.openshift.io/v1
  kind: Route
  metadata:
    name: vnra
  spec:
    to:
      kind: Service
      name: vnra
EOF
Note how this differs from the original deployment config, with the parameterisation available for different environments.

Installing

# login to the cluster and project
$ oc login -u developer
$ oc project foo

# supply override values, the release name and the location of Chart.yaml - note the use of
# -f helm/dev.yaml which overlays the values.yaml that is still implicitly used
$ helm install --dry-run --debug -f helm/dev.yaml vnra ./helm
NAME: vnra
LAST DEPLOYED: Sat Jul 9 10:58:38 2022
NAMESPACE: foo
STATUS: pending-install
REVISION: 1
TEST SUITE: None
USER-SUPPLIED VALUES:
autoscaling:
  enabled: true
  maxReplicas: 2
  targetCPUUtilizationPercentage: 80
env: dev
replicaCount: 1

COMPUTED VALUES:
autoscaling:
  enabled: true
  maxReplicas: 2
  minReplicas: 1
  targetCPUUtilizationPercentage: 80
env: dev
image:
  pullPolicy: IfNotPresent
  repository: devhost:5000/foo
  tag: 79de471
replicaCount: 1

HOOKS:
MANIFEST:
---
# Source: vnra/templates/hpa.yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: vnra
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: vnra
  minReplicas: 1
  maxReplicas: 2
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80
---
# Source: vnra/templates/all.yaml
apiVersion: apps.openshift.io/v1
kind: List
items:
- apiVersion: v1
  kind: Service
  metadata:
    name: vnra
  spec:
    ports:
    - port: 8080
      targetPort: 8080
    selector:
      deploymentconfig: vnra
- apiVersion: apps.openshift.io/v1
  kind: DeploymentConfig
  metadata:
    name: vnra
    labels:
      app: vnra
      env: dev
  spec:
    selector:
      deploymentconfig: vnra
    strategy:
      # Rolling strategy brings new pods up before old ones are scaled down; use Recreate to scale down first
      type: Rolling
    template:
      metadata:
        labels:
          deploymentconfig: vnra
          app: vnra
      spec:
        restartPolicy: Always
        containers:
        - image: "devhost:5000/foo/vnra:79de471"
          name: vnra
          imagePullPolicy: IfNotPresent
          ports:
          - containerPort: 8080
            protocol: TCP
            name: http
          livenessProbe:
            failureThreshold: 5
            httpGet:
              path: /api/status
              port: 8080
              scheme: HTTP
            periodSeconds: 10
            successThreshold: 1
    triggers:
    - type: ConfigChange
- apiVersion: route.openshift.io/v1
  kind: Route
  metadata:
    name: vnra
  spec:
    to:
      kind: Service
      name: vnra
Once we have verified this is all good, we can perform the installation to the cluster:
$ helm install --debug -f helm/dev.yaml vnra ./helm
NAME: vnra
LAST DEPLOYED: Sat Jul 9 11:04:35 2022
NAMESPACE: foo
STATUS: deployed
REVISION: 1
TEST SUITE: None

# verify what's been installed on the cluster via helm
$ helm list
NAME  NAMESPACE  REVISION  UPDATED                                  STATUS    CHART       APP VERSION
vnra  foo        1         2022-07-09 11:04:35.789973015 +0100 BST  deployed  vnra-0.0.1  acba902

# and again confirm what the cluster thinks: note the annotations
$ oc get svc
NAME   TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)    AGE
vnra   ClusterIP   10.217.5.77   <none>        8080/TCP   15h

$ oc describe svc vnra
Name:              vnra
Namespace:         foo
Labels:            app.kubernetes.io/managed-by=Helm
Annotations:       meta.helm.sh/release-name: vnra
                   meta.helm.sh/release-namespace: foo
Selector:          deploymentconfig=vnra
Type:              ClusterIP
IP Family Policy:  SingleStack
IP Families:       IPv4
IP:                10.217.5.77
IPs:               10.217.5.77
Port:              <unset>  8080/TCP
TargetPort:        8080/TCP
Endpoints:         10.217.0.7:8080
Session Affinity:  None
Events:            <none>
Once helm has been used to install, you may later upgrade, roll back or uninstall the release.
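A sketch of the usual lifecycle commands, using the vnra release name from above:
# push new values or a new tag and roll the release forward
$ helm upgrade -f helm/dev.yaml vnra ./helm
# inspect the release history and roll back to a previous revision if needed
$ helm history vnra
$ helm rollback vnra 1
# remove the release (and the objects helm created) from the cluster
$ helm uninstall vnra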

Installing via a helm repository

# in the directory with your Chart.yaml
$ helm package .
Successfully packaged chart and saved it to: /home/ray/dev/Docker/vanilla-node-rest-api/openshift/helm/vnra-0.0.1.tgz

# publish helm chart
$ curl --data-binary @vnra-0.0.1.tgz http://devhost:8089/api/charts
{"saved": true}

$ helm search repo chartmuseum
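
With the chart published, the local repository index can be refreshed and the release installed straight from ChartMuseum rather than from the ./helm directory (a sketch; the chartmuseum repo alias was added earlier and vnra is the chart packaged above):
# refresh the local cache of the repository index
$ helm repo update
# install (or upgrade) the release from the published chart instead of the local directory
$ helm upgrade --install -f helm/dev.yaml vnra chartmuseum/vnra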
