This Node.js application is part of the Inventory Service for the IBM Cloud Native Toolkit Journey. It allows users to produce a message to a Kafka topic, notifying all consumers that an update to an item in inventory has occurred.
This application supports the following platforms:
- Confluent
- Local Kafka
- Strimzi
## Operator Setup
Follow the instructions at the following link to set up [Confluent](https://github.ibm.com/ben-cornwell/confluent-operator/) on OpenShift.
Be sure to record the `global.sasl.plain.username` and `global.sasl.plain.password` from the values file in the `confluent-operator` directory for the Secret Creation step below.
Once the operator has finished installing, copy `confluentCA.key` and `confluentCA.pem` to a convenient location. Both will be needed for the Secret Creation step as well.
## Secret Creation
Secrets are needed to connect your Kafka client to the running instance of Kafka. **Two** secrets must be created.
The first will be named `confluent-kafka-cert`. Use the following command to create it:

```shell
oc create secret tls confluent-kafka-cert --cert='./~PATH TO PEM~/confluentCA.pem' --key='./~PATH TO KEY~/confluentCA.key' -n NAMESPACE
```
Replace `PATH TO PEM` and `PATH TO KEY` with the proper directory paths to the files, and `NAMESPACE` with the namespace where you want the secret deployed.
The second secret will be named `kafka-operator-key`. Use the following command to create it:

```shell
oc create secret generic kafka-operator-key --from-literal=username=GLOBAL.SASL.PLAIN.USERNAME --from-literal=password=GLOBAL.SASL.PLAIN.PASSWORD -n NAMESPACE
```
Replace the `GLOBAL.SASL.PLAIN.*` values with those recorded in the Operator Setup step, and `NAMESPACE` with the namespace where you want the secret deployed.
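Once the `kafka-operator-key` secret is surfaced to the pod (for example via `secretKeyRef` entries in the deployment), the client can pick the credentials up from the environment. A minimal sketch, assuming the variable names `KAFKA_USERNAME` and `KAFKA_PASSWORD` (match these to however your deployment exposes the secret):

```javascript
// Sketch: build a SASL/PLAIN credentials object from environment variables.
// KAFKA_USERNAME / KAFKA_PASSWORD are assumed names, not confirmed by this
// repository -- align them with your deployment's secretKeyRef mappings.
function saslConfigFromEnv(env) {
  const username = env.KAFKA_USERNAME;
  const password = env.KAFKA_PASSWORD;
  if (!username || !password) {
    throw new Error('Missing KAFKA_USERNAME or KAFKA_PASSWORD');
  }
  return { mechanism: 'plain', username, password };
}

module.exports = { saslConfigFromEnv };
```

Failing fast on missing credentials makes a misconfigured secret show up at startup rather than as an opaque broker authentication error later.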
## Client Configuration
First we need to set up the `clusterDev` configuration for the newly deployed services. Open the file `/src/env/clusterDev.js` and modify the following capitalized parameters to match your deployment:
```javascript
kafka: {
  TOPIC: 'YOUR TOPIC',
  BROKERS: ['kafka.NAMESPACE.svc:9071'],
  GROUPID: 'GROUPID',
  CLIENTID: 'CLIENTID',
  SASLMECH: 'plain',
  CONNECTIONTIMEOUT: 3000,
  AUTHENTICATIONTIMEOUT: 1000,
  REAUTHENTICATIONTHRESHOLD: 10000,
  RETRIES: 3,
  MAXRETRYTIME: 5
}
```
Check out the documentation for details about the other parameters.
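The parameter names above mirror the options of the KafkaJS client library, so — assuming the client is built on KafkaJS, which this README does not state outright — they would map onto a client config roughly like this sketch:

```javascript
// Sketch (assumption: KafkaJS-backed client). Maps the capitalized env
// parameters above onto a KafkaJS-style client configuration object.
const clusterDev = {
  kafka: {
    TOPIC: 'YOUR TOPIC',
    BROKERS: ['kafka.NAMESPACE.svc:9071'],
    GROUPID: 'GROUPID',
    CLIENTID: 'CLIENTID',
    SASLMECH: 'plain',
    CONNECTIONTIMEOUT: 3000,
    AUTHENTICATIONTIMEOUT: 1000,
    REAUTHENTICATIONTHRESHOLD: 10000,
    RETRIES: 3,
    MAXRETRYTIME: 5,
  },
};

function toKafkaJsConfig(env) {
  const k = env.kafka;
  return {
    clientId: k.CLIENTID,
    brokers: k.BROKERS,
    connectionTimeout: k.CONNECTIONTIMEOUT,
    authenticationTimeout: k.AUTHENTICATIONTIMEOUT,
    reauthenticationThreshold: k.REAUTHENTICATIONTHRESHOLD,
    // SASL username/password would come from the kafka-operator-key secret,
    // not from this file.
    sasl: k.SASLMECH ? { mechanism: k.SASLMECH } : undefined,
    retry: { retries: k.RETRIES, maxRetryTime: k.MAXRETRYTIME },
  };
}
```

`GROUPID` and `TOPIC` are not part of the client config itself; they would be used when creating a consumer and when sending messages, respectively.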
## Running the Client
### Deploying to OpenShift

```shell
oc apply -f openshift/deployment.yaml -n NAMESPACE
```
Once the Deployment is ready, access the Swagger page through the Route that was created. You can find the route with:

```shell
oc get route -n NAMESPACE | grep test-kafka
```
Go to the link that looks like `test-kafka-NAMESPACE.---.us-east.containers.appdomain.cloud`.
## Kafka Setup
Make sure you have an instance of Kafka running, either locally or remotely.
Follow the instructions here for running Kafka locally.
## Local Client Configuration
First we need to set up the `localDev` configuration for the new services. Open the file `/src/env/localDev.js` and modify the following capitalized parameters to match your deployment:
```javascript
kafka: {
  TOPIC: 'YOUR TOPIC',
  BROKERS: ['localhost:9092'],
  GROUPID: 'GROUPID',
  CLIENTID: 'CLIENTID',
  CONNECTIONTIMEOUT: 3000,
  AUTHENTICATIONTIMEOUT: 1000,
  REAUTHENTICATIONTHRESHOLD: 10000,
  RETRIES: 3,
  MAXRETRYTIME: 5
}
```
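Once configured, the application produces inventory-update messages to `TOPIC`. As a rough illustration of the payload shape a KafkaJS-style producer would send (the `itemId`/`action` fields here are illustrative assumptions, not the service's actual schema):

```javascript
// Sketch of a producer payload for an inventory-update event. The message
// schema (itemId, action, ts) is hypothetical -- the real service may use
// different field names.
function buildInventoryUpdate(topic, itemId, action) {
  return {
    topic,
    messages: [
      {
        key: String(itemId), // keying by item keeps updates for one item ordered
        value: JSON.stringify({ itemId, action, ts: Date.now() }),
      },
    ],
  };
}

// With a connected KafkaJS producer this would be sent as, e.g.:
//   await producer.send(buildInventoryUpdate('YOUR TOPIC', 42, 'stock-decrement'));
```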
## Running the Client
Install the dependencies:

```shell
npm install
```

To start the server, run:

```shell
npm run dev
```
Access the Swagger page via `http://localhost:3000`.
Coming Soon...
- Bryan Kribbs ([email protected])