This is an example of how to deploy and use Traffic Parrot, a service virtualization, API mocking and API simulation tool, with Docker and OpenShift 3.
Follow the examples below to use Traffic Parrot in an OpenShift 3 environment.
- OpenShift 3 is now in its end-of-life phase in favour of OpenShift 4
- OpenShift 3 Online is replaced by OpenShift 4 cloud services
- Minishift for OpenShift 3 is replaced by OpenShift 4 Local
- OpenShift Origin 3 open source edition is replaced by OKD 4, the community distribution of OpenShift
- This example is still broadly applicable to OpenShift 4, but there may be slight differences that you will run into, e.g. in the YAML templates, permissions and oc commands
- If you are using OpenShift 4, please contact us and we will help you get started with an example we have validated in OpenShift 4
There are a number of ways you can do this, for example:
- Locally using Minishift for OpenShift 3
- OpenShift Origin 3 open source edition hosted in AWS
- OpenShift Origin 3 open source edition hosted on premise
- Red Hat commercial edition hosted in OpenShift 3 Online
- Red Hat commercial edition hosted in AWS (requires a Red Hat subscription)
- Red Hat commercial edition hosted on premise (requires a Red Hat subscription)
You will need 2Gi of free RAM and 2Gi of free persistent volume storage to run the entire CI/CD demo, including a Jenkins pipeline, a demo application deployment and a Traffic Parrot deployment.
- You will need the oc client tool to issue commands to the cluster
- You will need Docker to be able to build and push custom Docker images to the registry
Find your login command by clicking on "Command Line Tools" in the web console:
Then copy the token:
And login to the console:
oc login https://api.<instance>.openshift.com --token=<token>
You can use the following command to log in with the default system user:
oc login -u system:admin
We will work with the project trafficparrot-test
oc new-project trafficparrot-test
If you want to start again at any point, you can do:
oc delete project trafficparrot-test
Set a local variable that contains the cluster registry, for example:
- For OpenShift 3 Online: CLUSTER_REGISTRY=registry.<instance>.openshift.com
- For Minishift: CLUSTER_REGISTRY=$(minishift openshift registry)
Now let's log in to the registry so that we can build and push our Traffic Parrot image:
docker login -u developer -p $(oc whoami -t) ${CLUSTER_REGISTRY}
We build the image locally and tag it ready to be used in the cluster registry.
Note that you must set two build arguments:
- TRAFFIC_PARROT_ZIP is an HTTP location or a local file location. You can download a trial copy.
- ACCEPT_LICENSE should be set to true if you accept the terms of the LICENSE
docker build \
--build-arg TRAFFIC_PARROT_ZIP=<fill this in> \
--build-arg ACCEPT_LICENSE=<fill this in> \
--tag trafficparrot:4.1.6 \
--file openshift/trafficparrot/Dockerfile .
Next, we tag and push to the cluster registry:
docker tag trafficparrot:4.1.6 ${CLUSTER_REGISTRY}/trafficparrot-test/trafficparrot-image
docker push ${CLUSTER_REGISTRY}/trafficparrot-test/trafficparrot-image
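The build arguments above are consumed inside openshift/trafficparrot/Dockerfile. Purely as an illustration of how such a Dockerfile can wire them up (the project's actual Dockerfile, base image and start script will differ; start.sh is an assumed name):

```dockerfile
# Illustrative sketch only; see openshift/trafficparrot/Dockerfile for the real build
FROM openjdk:8-jre-alpine

# HTTP location or local file location of the Traffic Parrot distribution zip
ARG TRAFFIC_PARROT_ZIP
# Must be set to true to indicate you accept the terms of the LICENSE
ARG ACCEPT_LICENSE

# Refuse to build unless the license has been explicitly accepted
RUN test "$ACCEPT_LICENSE" = "true" || (echo "Set ACCEPT_LICENSE=true to accept the license" && exit 1)

# ADD handles both HTTP URLs and files in the build context
ADD ${TRAFFIC_PARROT_ZIP} /opt/trafficparrot.zip
RUN unzip /opt/trafficparrot.zip -d /opt/trafficparrot

WORKDIR /opt/trafficparrot
CMD ["./start.sh"]
```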
You will need Jenkins in the OpenShift 3 cluster to provide CI/CD support for the pipeline.
The easiest way to do this is via the web console catalog:
Change the memory to 750Mi, since Jenkins is quite memory-hungry. Accept the default values for everything else.
NOTE: It is best to wait at least 10 minutes for Jenkins to fully start up the first time. The UI will initially be unresponsive and return an "Application is not available" message.
First we need the ability to build Java images:
oc create -f openshift/openjdk-s2i-imagestream.json
Now we can import the pipeline:
oc create -f openshift/finance/pipeline.yaml
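For reference, a JenkinsPipeline BuildConfig in OpenShift 3 generally has the following shape. This is a minimal sketch, not the contents of the project's pipeline.yaml; the name and stage are illustrative:

```yaml
# Minimal sketch of a JenkinsPipeline strategy BuildConfig (OpenShift 3)
apiVersion: v1
kind: BuildConfig
metadata:
  name: finance-pipeline
spec:
  strategy:
    type: JenkinsPipeline
    jenkinsPipelineStrategy:
      # The pipeline steps themselves are written in a Jenkinsfile
      jenkinsfile: |-
        node {
          stage('Build') {
            // build and deploy steps go here
          }
        }
```

Running `oc start-build finance-pipeline` (or clicking Start Pipeline in the web console) hands the Jenkinsfile to the Jenkins instance deployed earlier.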
Next, run the pipeline:
The pipeline will:
- Build the finance-application demo image
- Deploy Traffic Parrot using the trafficparrot-image we pushed to the registry earlier
- Import the OpenAPI definition markit.yaml into Traffic Parrot
- Deploy the finance-application
- Wait for you to preview the demo
To preview the demo, click on the finance route:
You should see this:
You can push the pipeline forward by clicking the button in Jenkins:
Behind the scenes, we just demonstrated that the demo finance application was able to communicate with Traffic Parrot inside the cluster.
Traffic Parrot was configured by importing an OpenAPI definition markit.yaml using the HTTP Management API.
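Traffic Parrot generates its mock endpoints from that OpenAPI definition. The real markit.yaml ships with the project; purely for illustration, a definition that could be imported the same way looks like this (the path and fields below are made up, not taken from markit.yaml):

```yaml
# Illustrative OpenAPI 2.0 definition; the project's markit.yaml will differ
swagger: "2.0"
info:
  title: Example market data API
  version: "1.0"
paths:
  /quote:
    get:
      responses:
        "200":
          description: A stock quote
          schema:
            type: object
            properties:
              symbol:
                type: string
              price:
                type: number
```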
Have a look at the configuration files in this project to see how it is done. The key files are:
- finance/pipeline.yaml is the pipeline BuildConfig
- finance/jenkinsfile.groovy is the Jenkins pipeline configuration
- finance/build.json is the BuildConfig used to build the finance application
- finance/deploy.json is the Template used to deploy the finance application
- trafficparrot/Dockerfile is the Dockerfile used to build the Traffic Parrot image
- trafficparrot/deploy.json is the Template used to deploy Traffic Parrot
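An OpenShift Template such as trafficparrot/deploy.json bundles the objects to create together with parameters, which is also how the `app=${TRAFFIC_PARROT_ID}` label used during cleanup gets applied. A rough sketch of that shape, with object names and labels assumed rather than taken from the project's actual template:

```yaml
# Illustrative sketch of a deploy Template; the project's deploy.json will differ
apiVersion: v1
kind: Template
metadata:
  name: trafficparrot-deploy
parameters:
  - name: TRAFFIC_PARROT_ID
    required: true
objects:
  - apiVersion: v1
    kind: DeploymentConfig
    metadata:
      name: ${TRAFFIC_PARROT_ID}
      labels:
        app: ${TRAFFIC_PARROT_ID}   # lets "oc delete all -l app=..." clean it up
    spec:
      replicas: 1
      template:
        metadata:
          labels:
            app: ${TRAFFIC_PARROT_ID}
        spec:
          containers:
            - name: trafficparrot
              image: trafficparrot-image
```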
To clean up the pipeline:
oc delete bc finance-pipeline
To clean up the demo app:
oc delete all -l "app=${DEMO_ID}"
oc delete configmap ${DEMO_ID}-config
To clean up Traffic Parrot:
oc delete all -l "app=${TRAFFIC_PARROT_ID}"
oc delete imagestream ${TRAFFIC_PARROT_ID}