Zero-Touch Provisioning Of Edge Devices Using Microshift And RHEL For Edge
As time goes by, Cloud-Native applications in the Edge Computing vertical are becoming more and more relevant. We want workloads to run as close as possible to the end user to improve the user experience while enjoying the flexibility, availability, scalability, and portability of what Cloud-Native applications offer today.
It all comes down to how we manage those edge devices (deployed at massive scale in different locations) as easily as we manage them today in our core data centers, with platforms such as Kubernetes, for example.
That is why today I’d like to share a way of using two of Red Hat’s products in order to achieve the requirements presented above.
For the sake of this demo, I’ve used Microshift, which is a small-footprint Kubernetes distribution that can run your applications at the edge with a minimum of required resources. Additionally, I’ve used RHEL For Edge, an operating system based on RHEL that uses rpm-ostree, a tool for easily managing operating system upgrades, rollbacks, and configuration.
Let’s dive in!
Prepare Yourself For Being Deployed
In order to prepare the needed installation media, we’ll have to have a server in place that runs RHEL Image Builder.
This utility will allow us to pre-build a bootable ISO image that includes all the needed dependencies and configuration for running Microshift.
In order for this demo to be successful, make sure you have the following prerequisite:
- An installed RHEL 8.7 server (preferably with at least 8 GB of RAM and 4 cores)
Enabling The RHEL Image Builder
First, we have to prepare our deployment server to have all the needed dependencies that will help us build the needed OS images.
For that, make sure you run the following command to install all the needed dependencies. We’ll start with installing cockpit and its osbuild utilities:
$ yum install osbuild-composer composer-cli cockpit-composer bash-completion firewalld createrepo_c podman -y
Make sure that cockpit can be accessed via a web browser (if wanted); for the sake of this article, we'll use composer-cli:
$ firewall-cmd --add-service=cockpit && firewall-cmd --add-service=cockpit --permanent
Enable the needed service sockets:
$ systemctl enable osbuild-composer.socket cockpit.socket --now
If needed, you can also enable auto-completion for composer-cli:
$ source /etc/bash_completion.d/composer-cli
Preparing Offline Repositories
In order for RHEL Image Builder to create the needed OS images, we'll have to enable a few repositories on our bastion server. These will later be used as package sources for Image Builder:
$ sudo subscription-manager repos --enable rhocp-4.12-for-rhel-8-$(uname -i)-rpms --enable fast-datapath-for-rhel-8-$(uname -i)-rpms
Install the needed dependencies to create a local offline repository:
$ sudo yum install -y yum-utils
Mirror the needed packages to a local directory path (this will mirror all needed rpm packages to a directory located on your local server):
$ sudo reposync --arch=$(uname -i) --arch=noarch --gpgcheck \
--download-path /var/repos/microshift-local \
--repo=rhocp-4.12-for-rhel-8-$(uname -i)-rpms \
--repo=fast-datapath-for-rhel-8-$(uname -i)-rpms
Integrate Image Builder With Your Offline Repository
Create a TOML file that describes your repository for Image Builder to use (this file includes the repository reference, quite similar to what a repo file in /etc/yum.repos.d would):
$ sudo tee /var/repos/microshift-local/microshift.toml > /dev/null <<EOF
id = "microshift-local"
name = "MicroShift local repo"
type = "yum-baseurl"
url = "file:///var/repos/microshift-local/"
check_gpg = false
check_ssl = false
system = false
EOF
Turn all mirrored files into a real offline repository using the createrepo command:
$ sudo createrepo /var/repos/microshift-local/
Directory walk started
Directory walk done - 1524 packages
Temporary output repo path: /var/repos/microshift-local/.repodata/
Preparing sqlite DBs
Pool started (with 5 workers)
Pool finished
Add the created repository as a source for our RHEL Image Builder (this configuration points Image Builder to the local repository created in the earlier steps):
$ sudo composer-cli sources add /var/repos/microshift-local/microshift.toml
Validate that our source was added properly by getting its info:
$ sudo composer-cli sources info microshift-local
check_gpg = false
check_ssl = false
id = "microshift-local"
name = "MicroShift local repo"
rhsm = false
system = false
type = "yum-baseurl"
url = "file:///var/repos/microshift-local/"
Creating A RHEL For Edge rpm-ostree Commit Repository
Now that we have the prerequisites ready, we can build our first commit image.
This commit image will act as a repository that will be served for further OS upgrades, and it holds all the initial configuration and dependencies for deploying Microshift.
Create a TOML file named microshift-rpmostree-commit.toml that holds the needed configuration:
name = "microshift-rpmostree-commit"
description = ""
version = "0.0.1"
modules = []
groups = []
distro = ""
[[packages]]
name = "microshift"
version = "*"
[[packages]]
name = "microshift-networking"
version = "*"
[[packages]]
name = "microshift-release-info"
version = "*"
[[packages]]
name = "microshift-selinux"
version = "*"
[customizations]
hostname = "microshift"
[customizations.firewall]
ports = ["6443:tcp"]
[customizations.firewall.services]
enabled = ["http", "https", "ntp", "dhcp", "ssh"]
disabled = ["telnet"]
[customizations.services]
enabled = ["sshd", "microshift"]
[[customizations.user]]
name = "spaz"
password = "$6$m2YSOTwpT7rftgJZ$tps4BIkRAdNguFIjaTZlOk6EAHhQzMCWjPGSMfOHF4N8Q7N2OGT.l2KxhjAmOT6wGysPU0uE1VYTQLPd0O5oP."
key = "<YOUR_SSH_PUBLIC_KEY>"
groups = ["wheel"]
In this file, you’ll find the services and ports that we want started automatically, the creation of users, passwords, and SSH keys, as well as the package dependencies needed for running Microshift.
This is a great way to create a pre-built image for air-gapped environments.
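The password field above is a SHA-512 crypt hash. If you want to set your own, one way to generate a compatible hash is with openssl (the salt and password below are illustrative placeholders, not the ones used in this demo):

```shell
# Generate a SHA-512 ($6$) crypt hash suitable for the blueprint's "password" field.
# "examplesalt" and "changeme" are placeholders; pick your own values.
openssl passwd -6 -salt examplesalt 'changeme'
```

The resulting string (starting with `$6$examplesalt$`) can be pasted directly into the blueprint.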
Now, let’s push this configuration so that Image Builder will be aware of it:
$ composer-cli blueprints push microshift-rpmostree-commit.toml
Building The rpm-ostree Commit Image
Now we can start building the commit image. This image can later be downloaded as a .tar file, which will be used to serve a local repository.
When needed, a new commit image can be published to the repository and the deployed instance will be able to pull new OS configs automatically.
To start the build phase, use the following command:
$ composer-cli compose start microshift-rpmostree-commit edge-commit
You can validate that everything has started properly using the following command:
$ composer-cli compose status
07dac143-f6f6-44f1-a866-e6cd6c710ed9 RUNNING Mon Apr 17 13:29:40 2023 microshift-rpmostree-commit 0.0.1 edge-commit
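If you want to script the wait for a build to finish, the second column of the status output is the compose state. As a sketch, it can be parsed like this (the sample line below is copied from the output above; in practice, pipe the real composer-cli output into awk instead):

```shell
# Extract the state column from a `composer-cli compose status` line.
# The sample line mirrors the output shown above; replace it with real output.
status_line='07dac143-f6f6-44f1-a866-e6cd6c710ed9 RUNNING Mon Apr 17 13:29:40 2023 microshift-rpmostree-commit 0.0.1 edge-commit'
echo "$status_line" | awk '{print $2}'   # prints: RUNNING
```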
After the status is marked as FINISHED, you can download your repository file using the following command:
$ composer-cli compose image 2956e7da-e3aa-4add-b4c7-1846dd54680a
2956e7da-e3aa-4add-b4c7-1846dd54680a-commit.tar
Now, untar the created repository file in order to create the local repository (you should have a repo directory created after running this command):
$ tar xvf 2956e7da-e3aa-4add-b4c7-1846dd54680a-commit.tar
Serving The rpm-ostree Local Repository
An important thing to understand is that the commit image we created in the earlier steps acts as a repository server for both packages and configuration for our RHEL For Edge instance running Microshift.
Basically, we'll point our instance to that repository, and it can pull data and upgrade itself automatically or manually.
In the next steps, we’ll build a container image that is based on nginx and contains all the needed dependencies.
Important!
The extraction in the previous command outputs a directory called repo; make sure it was created, as we need it for the following steps.
Create the nginx configuration file that will represent your local repository web server:
cat > nginx << EOF
events {
}
http {
server{
listen 8080;
root /usr/share/nginx/html;
}
}
pid /run/nginx.pid;
daemon off;
EOF
Create a Containerfile that will help you build the local repository's container image:
cat > Containerfile << 'EOF'
FROM registry.access.redhat.com/ubi8/ubi
RUN yum -y install nginx && yum clean all
COPY repo /usr/share/nginx/html/
COPY nginx /etc/nginx.conf
EXPOSE 8080
CMD ["/usr/sbin/nginx", "-c", "/etc/nginx.conf"]
ARG commit
# ADD auto-extracts the commit tarball under the nginx web root
ADD ${commit} /usr/share/nginx/html/
EOF
Now, we can build the container image using the files we created previously (make sure to use the .tar file name that we downloaded before):
$ podman build -t microshift-image-0.0.1 --build-arg commit=2956e7da-e3aa-4add-b4c7-1846dd54680a-commit.tar .
Run the container in order to serve the rpm-ostree local repository:
$ podman run --name rpm-ostree-repository --rm -d -p 8080:8080 localhost/microshift-image-0.0.1:latest
Create A Bootable ISO Image
Now, in order to create the bootable ISO file to deploy our server, we’ll create another blueprint that will point to our previously created commit image:
$ cat > microshift-rpmostree-installer.toml << EOF
name = "microshift-rpmostree-installer"
description = ""
version = "0.0.1"
modules = []
groups = []
EOF
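This installer blueprint is intentionally minimal, since most of the configuration already lives in the commit. If needed, the same customization syntax used in the commit blueprint can be appended to this file as well; for example, a fallback user (illustrative only; the name and key are placeholders):

```toml
# Illustrative addition, using the same syntax as the commit blueprint above
[[customizations.user]]
name = "edgeuser"
key = "<YOUR_SSH_PUBLIC_KEY>"
groups = ["wheel"]
```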
Let’s push this configuration so that Image Builder will be aware of it:
$ composer-cli blueprints push microshift-rpmostree-installer.toml
Now, we can start building the bootable ISO image that will be used by our server (notice that here we chose the edge-installer image type, whereas in the previous command we used edge-commit).
This command submits a bootable ISO creation request to Image Builder; the resulting image can be downloaded later on:
$ composer-cli compose start-ostree --ref rhel/8/x86_64/edge --url http://127.0.0.1:8080/repo microshift-rpmostree-installer edge-installer
Verify that your new build request has reached the FINISHED state:
$ composer-cli compose status
2956e7da-e3aa-4add-b4c7-1846dd54680a FINISHED Sat Apr 8 11:33:54 2023 microshift 0.0.1 edge-commit
16252a6d-f7e1-4736-93ff-3799d9211dd4 FINISHED Sat Apr 8 12:45:11 2023 microshift 0.0.1 edge-installer
Download the bootable ISO image using the following command:
$ composer-cli compose image 9509b4de-dc66-4020-9d24-86476ffe086f
9509b4de-dc66-4020-9d24-86476ffe086f-installer.iso
Adding Extra Configuration Using Kickstart
Previously, we used the Image Builder TOML files to create a static configuration for our server.
As we have a few prerequisites for Microshift that might change more often, we can use a Kickstart file to gain a more dynamic configuration ability.
Create a Kickstart file using the following command (make sure to change the served rpm-ostree server address to yours):
cat > kickstart.ks << 'KS_EOF'
lang en_US.UTF-8
keyboard us
timezone UTC
zerombr
clearpart --all --initlabel
part /boot/efi --fstype=efi --size=200
part /boot --fstype=xfs --asprimary --size=800
part pv.01 --grow
volgroup rhel pv.01
logvol / --vgname=rhel --fstype=xfs --size=10000 --name=root
reboot
text
network --bootproto=dhcp
ostreesetup --nogpg --url=http://<YOUR_REPO_SERVER>:8080/repo --osname=rhel --remote=edge --ref=rhel/8/x86_64/edge
%post
# Add the pull secret to CRI-O and set root user-only read/write permissions
cat > /etc/crio/openshift-pull-secret << EOF
<YOUR_PULL_SECRET>
EOF
chmod 600 /etc/crio/openshift-pull-secret
%end
%post
# Configure the firewall with the mandatory rules for MicroShift
firewall-offline-cmd --zone=trusted --add-source=10.42.0.0/16
firewall-offline-cmd --zone=trusted --add-source=169.254.169.1
%end
KS_EOF
This Kickstart file configures the local storage requirements for Microshift, and it also points the instance to the rpm-ostree repository that was previously created.
Make sure to use your own pull secret so that you'll be able to pull the images needed for deploying Microshift.
In order for our server to have the wanted configuration, we’ll have to embed the Kickstart file into the created ISO. To do so, use the following command:
$ mkksiso kickstart.ks 9509b4de-dc66-4020-9d24-86476ffe086f-installer.iso microshift-rpmostree-installer.iso
Deploying Microshift Using Zero-Touch-Provisioning
Now that you have the bootable ISO image, you can boot your device however you’d like (for example, can be flashed on a USB thumb drive).
Once your device is up and running, it’ll run Microshift with all of our added configurations.
Validating Microshift’s Installation
Once your device has booted, make sure that your RHEL For Edge instance indeed points to the created rpm-ostree repository:
$ rpm-ostree status
State: idle
Deployments:
* edge:rhel/8/x86_64/edge
Version: 8.7 (2023-04-17T17:44:57Z)
Commit: 0d94e1c6053df35b64d1d1aec36ec9e93ff914f33254ce41b5ac588aba608f7d
Make sure that your Microshift service is up and running:
$ systemctl status microshift
● microshift.service - MicroShift
Loaded: loaded (/usr/lib/systemd/system/microshift.service; disabled; vendor preset: disabled)
Active: active (running) since Sat 2023-04-08 13:26:10 EDT; 1min 9s ago
Main PID: 2551 (microshift)
Tasks: 28 (limit: 23417)
Memory: 350.1M
CPU: 20.059s
CGroup: /system.slice/microshift.service
└─2551 /usr/bin/microshift run
Now, let’s make sure that our cluster is up and running! Copy the kubeconfig from /var/lib/microshift/resources/kubeadmin/kubeconfig to wherever you have the oc or kubectl command line (of course, make sure to change the kube-api server reference from 127.0.0.1 to your Microshift server's address):
$ oc get nodes --kubeconfig kubeconfig
NAME STATUS ROLES AGE VERSION
microshift Ready control-plane,master,worker 7m33s v1.25.0
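The server-address change mentioned above is a one-line sed; here's a sketch that rewrites the loopback address in a copy of the kubeconfig (MICROSHIFT_HOST and the demo file are placeholders for your real values):

```shell
# Point the copied kubeconfig at the device instead of the loopback address.
# MICROSHIFT_HOST and kubeconfig-demo are placeholders for your real values.
MICROSHIFT_HOST=192.168.1.100
printf 'server: https://127.0.0.1:6443\n' > kubeconfig-demo   # stand-in for the real file
sed -i "s/127.0.0.1/${MICROSHIFT_HOST}/g" kubeconfig-demo
cat kubeconfig-demo   # prints: server: https://192.168.1.100:6443
```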
Let’s make sure pods are indeed running and in a ready state:
$ oc get pods -A --kubeconfig kubeconfig
NAMESPACE NAME READY STATUS RESTARTS AGE
openshift-dns dns-default-zjj4z 2/2 Running 0 86s
openshift-dns node-resolver-d7g8s 1/1 Running 0 3m20s
openshift-ingress router-default-7764495c8b-pd76d 1/1 Running 0 3m14s
openshift-ovn-kubernetes ovnkube-master-jl29r 4/4 Running 0 3m20s
openshift-ovn-kubernetes ovnkube-node-jssrx 1/1 Running 0 3m20s
openshift-service-ca service-ca-5556ff5b86-hslw4 1/1 Running 0 3m15s
openshift-storage topolvm-controller-5fb656798-426sq 4/4 Running 0 3m21s
openshift-storage topolvm-node-dfg2q 4/4 Running 0 86s
Great! We have Microshift up and running, and we haven't touched a single thing!
Running Our First Edge Application
Make sure to create a namespace for the demo application:
$ oc create namespace microshift-demo --kubeconfig kubeconfig
Apply the following configuration to your cluster (it holds a simple Flask server that will run in your newly created namespace):
oc --kubeconfig kubeconfig apply -f - << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
name: flask-app
namespace: microshift-demo
labels:
app: flask
spec:
replicas: 1
selector:
matchLabels:
app: flask
template:
metadata:
labels:
app: flask
spec:
containers:
- name: flask-app
image: docker.io/shonpaz123/flask-microshift-demo:latest
ports:
- containerPort: 5000
---
apiVersion: v1
kind: Service
metadata:
name: flask-app-svc
namespace: microshift-demo
spec:
selector:
app: flask
ports:
- protocol: TCP
port: 5000
targetPort: 5000
type: ClusterIP
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
name: flask-app-route
namespace: microshift-demo
spec:
port:
targetPort: 5000
to:
kind: Service
name: flask-app-svc
EOF
Now, validate that the application is indeed running (of course, make sure to point a DNS record to your created Route):
$ oc get pods -n microshift-demo --kubeconfig kubeconfig
NAME READY STATUS RESTARTS AGE
flask-app-54c5bdb95c-bvcdf 1/1 Running 0 151m
Open the browser and see the magic!
Conclusion
In this demo, we saw how we can use RHEL For Edge and Microshift to create a bootable, distributable image that can be used to deploy an edge device (the platform and the operating system) without having to touch anything at all.
I hope you’ve enjoyed this demo, see ya next time :)