This repository contains the source code of a simple OpenShift operator that manages JWS.
This prototype mimics the features provided by the JWS Tomcat8 Basic Template. It allows the automated deployment of Tomcat instances.
The operator is written in Golang. It uses the operator-sdk as its development framework and project manager. This SDK allows the generation of source code to increase productivity. It is solely used to conveniently write and build an OpenShift or Kubernetes operator (the end user does not need the operator-sdk to deploy a pre-built version of the operator).
The development workflow used in this prototype is standard for all operator development; see the Operator SDK documentation for details.
To build the operator, you will first need to install the following:
- [Golang](https://golang.org/doc/install)
- podman and podman-docker (most Linux distributions have them; an install sketch follows this list).
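On Fedora/RHEL-like systems the podman packages can be installed like this (a sketch; the package names are those of the Fedora/RHEL repositories, adjust for your distribution):
$ sudo dnf install podman podman-docker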
Now that the required tools are installed, follow these few steps to build it:
- Make sure you have kube-apiserver, etcd and kubectl installed; they are needed by docker-build to run local tests (see https://book.kubebuilder.io/reference/artifacts.html and the notes below).
- Clone the repo in $GOPATH/src/github.com/web-servers
- Set a name for your image. The default value is docker.io/${USER}/jws-operator:latest
- The first time you build, you have to download controller-gen into bin:
$ make controller-gen
- Sync the vendor directory
$ go mod vendor
- Then, simply run
make manifests docker-build docker-push
to build the operator and push it to your image registry.
You will need to push it to a Docker registry accessible by your OpenShift server in order to deploy it. For example:
$ mkdir -p $GOPATH/src/github.com/web-servers
$ cd $GOPATH/src/github.com/web-servers
$ git clone https://github.com/web-servers/jws-operator.git
$ export IMG=quay.io/${USER}/jws-operator
$ cd jws-operator
$ podman login quay.io
$ make manifests docker-build docker-push
Note that the Makefile uses go mod tidy and go mod vendor, then go build to build the executable, and podman to build and push the image.
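For reference, the manual equivalent looks roughly like this (a sketch; the Makefile passes its own flags and image tags):
$ go mod tidy     # prune go.mod and go.sum
$ go mod vendor   # populate the vendor directory
$ go build ./...  # compile the operator sources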
Note that the build is done inside a Docker image: check the Dockerfile and note the FROM golang:1.17 as builder line, so don't forget to adjust it when changing the Go version in go.mod.
Note: to generate the vendor directory, which is needed to build the operator internally in the RH build system, check out the repository, run go mod vendor (add -v for verbose output) and wait for the directory to get updated.
Note: the TEST_ASSET_KUBE_APISERVER, TEST_ASSET_ETCD and TEST_ASSET_KUBECTL environment variables can be used to point to kube-apiserver, etcd and kubectl if they are not in $PATH (see https://book.kubebuilder.io/reference/envtest.html for more).
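For example, if the test binaries were unpacked under /usr/local/kubebuilder/bin (an assumed location, adjust the paths to your installation):
$ export TEST_ASSET_KUBE_APISERVER=/usr/local/kubebuilder/bin/kube-apiserver
$ export TEST_ASSET_ETCD=/usr/local/kubebuilder/bin/etcd
$ export TEST_ASSET_KUBECTL=/usr/local/kubebuilder/bin/kubectl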
Make sure you have OLM installed, otherwise install it (see https://olm.operatorframework.io/docs/getting-started/; a quick check is sketched after the commands below). To build the bundle and deploy the operator, do something like the following:
make bundle
podman login quay.io
make bundle-build bundle-push BUNDLE_IMG=quay.io/${USER}/jws-operator-bundle:0.0.0
operator-sdk run bundle quay.io/${USER}/jws-operator-bundle:0.0.0
To remove it:
operator-sdk cleanup jws-operator
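If you are unsure whether OLM is present, operator-sdk can report and install it; a minimal sketch:
$ operator-sdk olm status    # reports whether OLM is installed in the cluster
$ operator-sdk olm install   # installs the latest OLM release if it is missing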
Note: check the installModes: in bundle/manifests/jws-operator.clusterserviceversion.yaml (with AllNamespaces the operator is installed in openshift-operators). Note: uninstall other versions of the operator first, otherwise your modifications might not be visible.
The operator is pre-built and containerized in a Docker image. By default, the deployment is configured to use that image. Therefore, deploying the operator can be done by following these simple steps:
make deploy IMG=quay.io/${USER}/jws-operator
Note: uninstall other versions of the operator first, otherwise your modifications might not be visible.
To verify the operator installation, you can check the operator pods:
kubectl get pods -n jws-operator-system
You should get something like:
NAME READY STATUS RESTARTS AGE
jws-operator-controller-manager-789dcf556f-2cl2q 2/2 Running 0 2m13s
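You can also wait for the operator deployment to become available (a sketch; the deployment name is inferred from the pod name above):
$ kubectl wait deployment jws-operator-controller-manager -n jws-operator-system --for=condition=Available --timeout=120s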
- Define a namespace
$ export NAMESPACE="jws-operator"
- Log in to your OpenShift server using
oc login
and use it to create a new project
$ oc new-project $NAMESPACE
- Install the JWS Tomcat Basic image stream in the openshift project namespace. For testing purposes, this repository provides a version of the corresponding script (xpaas-streams/jws56-tomcat9-image-stream.json) using the unsecured Red Hat Registry (registry.access.redhat.com). Please make sure to use the latest version with a secured registry for production use.
$ oc create -f xpaas-streams/jws56-tomcat9-image-stream.json -n openshift
As the image stream is created in the shared openshift namespace, it is convenient to reuse it across multiple namespaces. The following resources, which are more specific, will need to be created for every namespace. If you don't use -n openshift or you use another ImageStream name, you will have to adjust imageStreamNamespace: to $NAMESPACE and imageStreamName: to the correct value in the Custom Resource file config/samples/jws_v1alpha1_tomcat_cr.yaml.
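You can verify that the image stream is available before going on (sketch):
$ oc get imagestreams -n openshift | grep webserver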
- Create a Tomcat instance (Custom Resource). An example has been provided in config/samples/web.servers.org_webservers_imagestream_cr.yaml. Make sure to adjust sourceRepositoryUrl, sourceRepositoryRef (branch) and contextDir (subdirectory) to your webapp sources, branch and context, like:
apiVersion: web.servers.org/v1alpha1
kind: WebServer
metadata:
  name: example-imagestream-webserver
spec:
  applicationName: jws-app
  replicas: 2
  webImageStream:
    imageStreamNamespace: openshift
    imageStreamName: webserver56-openjdk8-tomcat9-ubi8-image-stream
    webSources:
      sourceRepositoryUrl: https://github.com/jboss-openshift/openshift-quickstarts.git
      sourceRepositoryRef: "1.2"
      contextDir: tomcat-websocket-chat
- Then deploy your webapp.
$ oc apply -f config/samples/web.servers.org_webservers_imagestream_cr.yaml
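You can watch the build and the resulting pods while the operator works (a sketch; resource names derive from the applicationName above):
$ oc get builds -w   # watch the source build triggered by the Custom Resource
$ oc get pods        # the jws-app pods appear once the build finishes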
- If the DNS is not set up in your OpenShift installation, you will need to add the resulting route to your local /etc/hosts file in order to resolve the URL. It has to point to the IP address of the node running the router. You can determine this address by running oc get endpoints with a cluster-admin user.
- Finally, to access the newly deployed application, simply use the created route with /demo-1.0/demo:
oc get routes
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
jws-app jws-app-jws-operator.apps.jclere.rhmw-runtimes.net jws-app <all> None
Then go to http://jws-app-jws-operator.apps.jclere.rhmw-runtimes.net/demo-1.0/demo using a browser.
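Or check it from the command line with the host of your own route (sketch):
$ curl http://jws-app-jws-operator.apps.jclere.rhmw-runtimes.net/demo-1.0/demo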
- To remove everything
oc delete webserver.web.servers.org/example-imagestream-webserver
oc delete deployment.apps/jws-operator --namespace jws-operator-system
Note that the first oc delete deletes what the operator created for the example webserver application; the second oc delete deletes the operator and all the resources it needs to run. The ImageStream can be deleted manually if needed.
The operator is pre-built and containerized in a Docker image. By default, the deployment is configured to use that image. Therefore, deploying the operator can be done by following these simple steps:
- Define a namespace
$ export NAMESPACE="jws-operator"
- Log in to your OpenShift server using
oc login
and use it to create a new project
$ oc new-project $NAMESPACE
- Prepare your image and push it somewhere. See https://github.com/jfclere/tomcat-openshift or https://github.com/apache/tomcat/tree/master/modules/stuffed to build the images.
- Create a Tomcat instance (Custom Resource). An example has been provided in config/samples/web.servers.org_webservers_cr.yaml:
apiVersion: web.servers.org/v1alpha1
kind: WebServer
metadata:
  name: example-image-webserver
spec:
  applicationName: jws-app
  replicas: 2
  webImage:
    applicationImage: quay.io/jfclere/tomcat10:latest
- Then deploy your webapp.
$ oc apply -f config/samples/web.servers.org_webservers_cr.yaml
- On Kubernetes you have to create a load balancer to expose the service, and later something depending on your cloud provider to expose the application:
kubectl expose deployment jws-app --type=LoadBalancer --name=jws-balancer
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
jws-balancer LoadBalancer 10.100.57.140 <pending> 8080:32567/TCP 4m6s
The jws-balancer service can then be used to expose the application.
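If your cluster has no external load balancer (the EXTERNAL-IP stays <pending> as above), you can still reach the service locally with a port-forward (sketch):
$ kubectl port-forward svc/jws-balancer 8080:8080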
- To remove everything
oc delete webserver.web.servers.org/example-image-webserver
oc delete deployment.apps/jws-operator --namespace jws-operator-system
or better, to clean everything:
oc delete webserver.web.servers.org/example-image-webserver
make undeploy
Note that the first oc delete deletes what the operator created for the example webserver application; the second command deletes the operator and all the resources it needs to run. The ImageStream can be deleted manually if needed.
serverReadinessScript and serverLivenessScript allow you to use a custom liveness or readiness probe. The following formats are supported:
for a single command:
serverLivenessScript: command
for a script:
serverLivenessScript: command; command; command
In case you don't use the HealthCheckValve, you have to configure at least a serverReadinessScript.
For example, if you are using the JWS 5.4 images, you could use the following:
webServerHealthCheck:
  serverReadinessScript: /usr/bin/curl --noproxy '*' -s 'http://localhost:8080/health' | /usr/bin/grep -i 'status.*UP'
If you are using an openjdk:8-jre-alpine-based image and /test is your health URL:
webServerHealthCheck:
  serverReadinessScript: /bin/busybox wget http://localhost:8080/test -O /dev/null
Note that HealthCheckValve requires Tomcat 9.0.38+ or 10.0.0-M8 to work as expected; it was introduced in 9.0.15.
To run the tests you need a real cluster (Kubernetes or OpenShift). A secret is needed to run a bunch of the tests. You can create the secret using something like:
kubectl create secret generic secretfortests --from-file=.dockerconfigjson=$HOME/.docker/config.json --type=kubernetes.io/dockerconfigjson
Some tests pull from the Red Hat portal; make sure you have access to it (otherwise those tests will fail). Some tests need to push to quay.io; make sure you have access there too. The repositories you have to be able to pull from for the tests are:
registry.redhat.io/jboss-webserver-5/webserver56-openjdk8-tomcat9-openshift-rhel8
quay.io/jfclere/tomcat10-buildah
quay.io/jfclere/tomcat10
The quay.io ones are public.
You also need to be able to push to:
quay.io/${USER}/test
When on OpenShift, the jboss-webserver56-openjdk8-tomcat9-ubi8-image-stream ImageStream is used by the tests; to create it:
oc secrets link default secretfortests --for=pull
oc create -f xpaas-streams/jws56-tomcat9-image-stream.json
To test the routes created by the operator for TLS, we need a secret mounted in the pod containing the certificates for Tomcat. The secret named 'test-tls-secret' must contain a 'server.crt', 'server.key' and 'ca.crt'. The route tests mount this secret at '/tls' and run the tests. You can create the secret with this command:
kubectl create secret generic test-tls-secret --from-file=server.crt=server.crt --from-file=server.key=server.key --from-file=ca.crt=ca.crt
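If you only need throwaway certificates for a local run, a self-signed set can be generated like this (a sketch; real tests should use certificates matching your route host):
$ openssl req -x509 -newkey rsa:2048 -nodes -days 365 -subj '/CN=localhost' -keyout server.key -out server.crt
$ cp server.crt ca.crt   # self-signed: the certificate acts as its own CA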
The PersistentLogs tests require a PersistentVolume (PV) and a StorageClass (SC) to be created; check https://github.com/web-servers/jws-operator/blob/main/test/scripts/README.md to create them before starting the tests.
To run the tests:
make realtest
The whole testsuite takes about 40 minutes...
Note: when running the tests on OpenShift, make sure to test in your own namespace and DON'T use default. Also make sure you have added "anyuid" to the builder ServiceAccount:
oc adm policy add-scc-to-user anyuid -z builder
Note: when using podman, remember that auth.json is in ${XDG_RUNTIME_DIR}/containers; the format is like $HOME/.docker/config.json but uses the username/repo instead of just the username (like "quay.io/jfclere/jws-operator" versus "quay.io/jfclere" in docker).
Below are some features that may be relevant to add in the near future.
Adding Support for Custom Configurations
The JWS image templates provide custom configurations using databases such as MySQL, PostgreSQL, and MongoDB. We could add support for these configurations by defining a custom resource for each of these platforms and managing them in the reconciliation loop.
Handling Image Updates
This may be tricky depending on how we decide to handle Tomcat updates. We may need to implement data migration along with backups to ensure the reliability of the process. The operator can support updates in two ways: pushing a new image into the ImageStream (OpenShift only) or updating the CR yaml file.
Adding Full Support for Kubernetes Clusters
The operator supports some OpenShift-specific resources such as DeploymentConfigs, Routes, and ImageStreams. Those are not available on a Kubernetes cluster. Building from source on Kubernetes requires an additional image-builder image; like the BuildConfig, the builder needs a Docker repository to push what it is building. See https://github.com/web-servers/image-builder-jws for the builder.