Example Health Analytics

This project is a conceptual Node.js analytics web application for a health records system, designed to showcase best-in-class integration of modern cloud technology with legacy mainframe code.

NOTE: This project is also compatible with the Example Health JEE Application on OpenShift. See notes below for details.

Example Health Context

Example Health is a conceptual healthcare/insurance company. It has been around a long time and has hundreds of thousands of patient records in a SQL database on a mainframe running z/OS. Its health records look very similar to those of most insurance companies.

Here's a view a data analyst might see when interacting with the Example Health Analytics application.

Example Health has recently started exploring how data science and analytics on some of its patient records might surface interesting insights. There is a lot of talk about this among the big data companies.

Example Health has also heard a great deal about cloud computing. There is a lot of legacy code on the mainframe, and it works well for now, but they think it may be a complementary opportunity to explore some data science/analytics in the cloud.

Their CTO sees an architecture for Example Health like this:

Architecture

Using Kubernetes

Using Cloud Foundry

  1. The Data Service API acts as a data pipeline: when triggered, it updates the data lake with the latest health records by calling the API Connect APIs associated with the z/OS Mainframe.
  2. The API Connect APIs pull the relevant health records from the z/OS Mainframe data warehouse and send them through the data pipeline.
  3. The Data Service pipeline processes the data warehouse records and updates the MongoDB data lake.
  4. The user interacts with the UI to view and analyze the analytics.
  5. The app UI that the user interacts with is served by Node.js, which is where the API calls are initiated.
  6. The API calls are processed by the Node.js data service and handled accordingly.
  7. The data service gathers the requested data from the MongoDB data lake.
  8. The app UI handles the API responses and renders the results.
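
To make the flow above concrete, here is a minimal data pipeline sketch in Node.js. It is illustrative only: the /patients path, the record shape, and the database/collection names are assumptions rather than the repo's actual contract; only DATA_SOURCE_API and the MongoDB data lake come from the architecture described above.

// Illustrative data pipeline sketch (not the repo's actual code).
// Assumes Node 18+ (global fetch) and the official "mongodb" driver.
const { MongoClient } = require("mongodb");

async function updateDataLake() {
  // Steps 1-2: pull the latest health records through the API Connect APIs.
  const response = await fetch(`${process.env.DATA_SOURCE_API}/patients`); // hypothetical path
  const patients = await response.json();

  // Step 3: upsert the records into the MongoDB data lake.
  const client = await MongoClient.connect(process.env.MONGODB || "mongodb://localhost:27017");
  const collection = client.db("datalake").collection("patients"); // hypothetical names
  for (const patient of patients) {
    await collection.updateOne({ id: patient.id }, { $set: patient }, { upsert: true });
  }
  await client.close();
}

updateDataLake().catch(console.error);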

Steps

Follow these steps to set up and run this code pattern locally and on the Cloud. The steps are described in detail below.

  1. Clone the repo
  2. Prerequisites
  3. Get Mapbox Access Token
  4. Run the application
  5. Deploy to IBM Cloud

1. Clone the repo

Clone the example-health-analytics repo locally. In a terminal, run:

git clone https://github.com/IBM/example-health-analytics
cd example-health-analytics

2. Prerequisites

For running these services locally without Docker containers, the following will be needed:

  • MongoDB
  • Node.js
  • npm
  • Relevant Node modules: run npm install in /data-service and /web

NOTE: Run the command csvtojson in /generate. If you get a csvtojson: command not found error, run sudo npm install -g csvtojson@latest. If the error persists, uninstall Node, reinstall Node.js LTS, run sudo npm install -g npm, and then run sudo npm install -g csvtojson@latest and csvtojson again.
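
If you prefer converting the generated CSV files from Node rather than with the csvtojson CLI, a minimal sketch using the same csvtojson package looks like the following (the file path is hypothetical):

// Convert a generated CSV file to a JSON array with the csvtojson package.
const csv = require("csvtojson");

csv()
  .fromFile("generate/patients.csv") // hypothetical file name
  .then((records) => {
    console.log(`Converted ${records.length} records`);
  })
  .catch(console.error);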

3. Get Mapbox Access Token

  1. A Mapbox access token is needed to make the API calls that populate the Mapbox map used by the UI.
  2. Assign the access token to MAPBOX_ACCESS_TOKEN in docker-compose.yml.
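
For context, the token is ultimately consumed by Mapbox GL JS in the browser. Below is a minimal initialization sketch; the container ID, style, and map options are illustrative and not the repo's actual code.

// Illustrative Mapbox GL JS setup (assumes the mapbox-gl script is already loaded on the page).
mapboxgl.accessToken = "<your Mapbox access token>"; // supplied via MAPBOX_ACCESS_TOKEN in docker-compose.yml

const map = new mapboxgl.Map({
  container: "map",                          // hypothetical element ID
  style: "mapbox://styles/mapbox/light-v10", // any Mapbox style URL
  center: [-98.35, 39.5],                    // roughly the continental US
  zoom: 3
});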

4. Run the application

z/OS Mainframe Data

NOTE: If using the Example Health JEE Application on OpenShift as your data source, follow these steps.

If your data source for this application is on a z/OS Mainframe, follow these steps for populating the data lake and running the application:

  1. Assign the API Connect URL to DATA_SOURCE_API in docker-compose.yml

NOTE: If using the Example Health JEE Application on OpenShift as your data source, assign that API URL to DATA_SOURCE_API.

  2. Start the application by running docker-compose up --build in this repo's root directory.
  3. Once the containers are created and the application is running, use the Open API Doc (Swagger) at http://localhost:3000 and API.md for instructions on how to use the APIs.
  4. Run curl localhost:3000/api/v1/update -X PUT to connect to the z/OS Mainframe and populate the data lake (a script equivalent is shown after this list). For information on the data lake and data service, read the data service README.md.
  5. Once the data has been populated in the data lake, use http://localhost:4000 to access the Example Health Analytics UI. For information on the analytics data and UI, read the web README.md.
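
If you would rather trigger the same update from a script instead of curl, a minimal Node sketch (Node 18+, which provides a global fetch) is:

// Trigger the data lake update, equivalent to: curl localhost:3000/api/v1/update -X PUT
fetch("http://localhost:3000/api/v1/update", { method: "PUT" })
  .then(async (res) => console.log(res.status, await res.text()))
  .catch(console.error);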

Generate Data

If you do not have a data source for this application and would like to generate mock data, follow these steps for populating the data lake and running the application:

  1. Start the application by running docker-compose up --build in this repo's root directory.
  2. Once the containers are created and the application is running, use the Open API Doc (Swagger) at http://localhost:3000 and API.md for instructions on how to use the APIs.
  3. Use the provided generate/generate.sh script to generate and populate data. Read README.md for instructions on how to use the script. For information on the data lake and data service, read the data service README.md.
  4. Once the data has been populated in the data lake, use http://localhost:4000 to access the Example Health Analytics UI. For information on the analytics data and UI, read the web README.md.

5. Deploy to IBM Cloud

Kubernetes

  1. To allow changes to the Data Service or the UI, create a repository on Docker Hub where the modified container images will be pushed.

NOTE: If a new repository is used for the Docker containers, the container image name in deploy-dataservice.yml and/or deploy-webapp.yml will need to be updated to the name of the new repository.

export DOCKERHUB_USERNAME=<your-dockerhub-username>

docker build -t $DOCKERHUB_USERNAME/examplehealthanalyticsdata:latest data-service/
docker build -t $DOCKERHUB_USERNAME/examplehealthanalyticsweb:latest web/

docker login

docker push $DOCKERHUB_USERNAME/examplehealthanalyticsdata:latest
docker push $DOCKERHUB_USERNAME/examplehealthanalyticsweb:latest
  2. Provision the IBM Cloud Kubernetes Service and follow the instructions for creating a container and cluster based on your cluster type, Standard or Lite.

NOTE: Use --sso if you have a single sign-on account, or omit it for username/password login.

ibmcloud login --sso
  • Set the Kubernetes environment to work with your cluster:
ibmcloud cs cluster-config $CLUSTER_NAME

The output of this command will contain a KUBECONFIG environment variable that must be exported in order to set the context. Copy and paste the output in the terminal window. An example is:

export KUBECONFIG=/home/rak/.bluemix/plugins/container-service/clusters/Kate/kube-config-prod-dal10-<cluster_name>.yml

Lite Cluster Instructions

  1. Get the workers for your Kubernetes cluster:
ibmcloud cs workers <mycluster>

and locate the Public IP. This IP is used to access the Data Service and UI on the Cloud. Update the env values for HOST_IP in deploy-dataservice.yml to <Public IP>:32000 and DATA_SERVER in deploy-webapp.yml to http://<Public IP>:32000. Also in deploy-dataservice.yml, update the env value for SCHEME to http.

  2. Assign the Mapbox access token to MAPBOX_ACCESS_TOKEN in deploy-dataservice.yml and deploy-webapp.yml. If your data source for this application is on a z/OS Mainframe, assign the API Connect URL to DATA_SOURCE_API in deploy-dataservice.yml.

NOTE: If using the Example Health JEE Application on OpenShift as your data source, assign that API URL to DATA_SOURCE_API.

  3. To deploy the services to the IBM Cloud Kubernetes Service, run:
kubectl apply -f deploy-mongodb.yml
kubectl apply -f deploy-dataservice.yml
kubectl apply -f deploy-webapp.yml

## Confirm the services are running - this may take a minute
kubectl get pods
  4. Use http://PUBLIC_IP:32001 to access the UI and the Open API Doc (Swagger) at http://PUBLIC_IP:32000 for instructions on how to make API calls.

Standard Cluster Instructions

  1. Run ibmcloud cs cluster-get <CLUSTER_NAME> and locate the Ingress Subdomain and Ingress Secret. The Ingress Subdomain is the domain of the URL used to access the Data Service and UI on the Cloud. Update the env values for HOST_IP in deploy-dataservice.yml to api.<Ingress Subdomain> and DATA_SERVER in deploy-webapp.yml to https://api.<Ingress Subdomain>. Also in deploy-dataservice.yml, update the env value for SCHEME to https. In addition, update the host and secretName in ingress-dataservice.yml and ingress-webapp.yml to the Ingress Subdomain and Ingress Secret.

  2. Assign the Mapbox access token to MAPBOX_ACCESS_TOKEN in deploy-dataservice.yml and deploy-webapp.yml. If your data source for this application is on a z/OS Mainframe, assign the API Connect URL to DATA_SOURCE_API in deploy-dataservice.yml.

NOTE: If using the Example Health JEE Application on OpenShift as your data source, assign that API URL to DATA_SOURCE_API.

  3. To deploy the services to the IBM Cloud Kubernetes Service, run:
kubectl apply -f deploy-mongodb.yml
kubectl apply -f deploy-dataservice.yml
kubectl apply -f deploy-webapp.yml

## Confirm the services are running - this may take a minute
kubectl get pods

## Update protocol being used to https
kubectl apply -f ingress-dataservice.yml
kubectl apply -f ingress-webapp.yml
  4. Use https://<INGRESS_SUBDOMAIN> to access the UI and the Open API Doc (Swagger) at https://api.<INGRESS_SUBDOMAIN> for instructions on how to make API calls.

Cloud Foundry

  1. Provision two SDK for Node.js applications. One will be for ./data-service and the other will be for ./web.

  2. Provision a Compose for MongoDB database.

  3. Update the following in the manifest.yml file:

  • name with the names of both Cloud Foundry applications provisioned from Step 1.

  • services with the name of the MongoDB service provisioned from Step 2.

  • HOST_IP and DATA_SERVER with the host name and domain of the data-service from Step 1.

  • MONGODB with the HTTPS Connection String of the MongoDB database provisioned from Step 2. This can be found under Manage > Overview on the database dashboard (a connection sketch is shown after these steps).

  • MAPBOX_ACCESS_TOKEN with the Mapbox access token.

  • DATA_SOURCE_API with the API Connect URL if your data source for this application is on a z/OS Mainframe.

NOTE: If using the Example Health JEE Application on OpenShift as your data source, assign that API URL to DATA_SOURCE_API.

  4. Connect the Compose for MongoDB database with the data service Node.js app by going to Connections on the dashboard of the provisioned data service app and clicking Create Connection. Locate the Compose for MongoDB database you provisioned and click Connect.

  5. To deploy the services to IBM Cloud Foundry, go to the dashboard of one of the apps provisioned in Step 1 and follow the Getting Started instructions for connecting and logging in to IBM Cloud from the console (Step 3 of Getting Started). Once logged in, run ibmcloud app push from the root directory.

  6. Use https://<WEB-HOST-NAME>.<WEB-DOMAIN> to access the UI and the Open API Doc (Swagger) at https://<DATA-SERVICE-HOST-NAME>.<DATA-SERVICE-DOMAIN> for instructions on how to make API calls.
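
As a sanity check for the MONGODB value set in manifest.yml, here is a minimal connection sketch using the official Node.js mongodb driver. The database and collection names are assumptions, not the repo's actual schema.

// Verify the Compose for MongoDB connection string (illustrative only).
const { MongoClient } = require("mongodb");

async function checkConnection() {
  const client = await MongoClient.connect(process.env.MONGODB); // connection string from the Compose dashboard
  const count = await client.db("datalake").collection("patients").countDocuments(); // hypothetical names
  console.log(`Data lake reachable, ${count} patient documents found`);
  await client.close();
}

checkConnection().catch(console.error);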

License

This code pattern is licensed under the Apache License, Version 2. Separate third-party code objects invoked within this code pattern are licensed by their respective providers pursuant to their own separate licenses. Contributions are subject to the Developer Certificate of Origin, Version 1.1 and the Apache License, Version 2.

Apache License FAQ
