Services for working with MDS provider data, built as runnable Docker containers. These services are implemented via `mds-provider`, a general-purpose Python library for working with MDS Provider data.

The services are organized around specific functions. A more detailed explanation can be found in each service's README file.
| service | description |
| --- | --- |
| `analytics` | Perform analysis on provider data |
| `client` | pgAdmin4 web client |
| `db` | Work with a provider database |
| `fake` | Generate fake provider data for testing and development |
| `ingest` | Ingest provider data from different sources |
| `server` | Local Postgres database server |
| `validate` | Validate provider data feeds and/or local MDS payload files |
Requires both Docker and Docker Compose.

Commands below should be run from the root of this repository, where the `docker-compose.yml` file lives.
Copy the dev file `docker-compose.dev.yml` and edit as necessary. Compose automatically uses this file for service definitions and configuration. You shouldn't have to make too many (if any) changes; see the next step for environment variable configuration.

```shell
cp docker-compose.dev.yml docker-compose.yml
```
Alternatively, use the dev file as-is by prepending a `-f` switch to `docker-compose` commands, e.g.:

```shell
docker-compose -f docker-compose.dev.yml CMD [OPTIONS] SERVICE [OPTIONS]
```
Copy the sample environment file and edit as necessary. Compose automatically sources this environment file for `docker-compose` commands.

```shell
cp .env.sample .env
```

Modify this file with your own settings; the defaults should be good enough to get going.
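As an illustrative sketch only — the authoritative list of variables lives in `.env.sample` — a minimal `.env` might combine the variables referenced throughout this README:

```shell
# Hypothetical .env sketch: variable names are those used elsewhere in this
# README; check .env.sample for the full, authoritative list and defaults.
POSTGRES_HOSTNAME=server
POSTGRES_DB=postgres
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres_password

[email protected]   # placeholder address; use your own
PGADMIN_DEFAULT_PASSWORD=pgadmin_password
PGADMIN_HOST_PORT=8088

NB_HOST_PORT=8888
```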
Build and start the necessary containers to load and explore a Postgres database:

```shell
bin/initdb.sh
```

Now you can browse to `http://localhost:PGADMIN_HOST_PORT` and log in with the `PGADMIN_DEFAULT` credentials.
Attach to the server `POSTGRES_HOSTNAME`, database `MDS_DB`, with the `MDS` credentials.
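If you prefer a terminal over pgAdmin, one way to attach is a `psql` shell inside the running server container — a sketch, assuming `psql` is available there (it is in the standard Postgres image) and using the default values from this README:

```shell
# Sketch: open a psql shell inside the running server container.
# User and database values come from your .env (defaults shown here).
docker-compose exec server psql -U postgres -d postgres
```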
The other services rely on a common `python:3.7`-based image:

```shell
docker-compose build base
```
Generally, an individual service `SERVICE` can be run with a command like:

```shell
docker-compose run SERVICE [OPTIONS]
```

See the README file in each service folder for more details.
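For instance, a one-off run of the `fake` service might look like the following — a hypothetical invocation; the options each service accepts are documented in its own README:

```shell
# Hypothetical example of the pattern above, using the fake service.
# --rm removes the container once the command exits.
docker-compose run --rm fake
```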
`analytics`, `fake`, and `ingest` all come with Jupyter Notebook servers that can be run locally:

```shell
bin/notebook.sh SERVICE [ARGS]
```
Now browse to `http://localhost:NB_HOST_PORT` and append the `/?token=<token>` param shown in the Notebook container startup output.

Note your `NB_HOST_PORT` may be different than the default shown in the container output (`8888`). Also note that all of the services make use of the same `NB_HOST_PORT` environment variable, so they cannot be run at the same time!
Modify `docker-compose.yml` if you need to use different ports to run Notebook servers on multiple services simultaneously.

Optional `[ARGS]` will be passed directly to the `jupyter notebook` startup command. See `bin/notebook.sh` for details.
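Putting the port and token together, the full Notebook URL can be assembled like so — a sketch, where `abc123` is a stand-in for the real token from the container output:

```shell
# Sketch: build the Notebook URL from NB_HOST_PORT and the startup token.
NB_HOST_PORT=8888      # value from your .env (may differ from the container's 8888)
token=abc123           # stand-in: copy the real token from the container output
echo "http://localhost:${NB_HOST_PORT}/?token=${token}"
# → http://localhost:8888/?token=abc123
```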
Run a local Postgres database server:

```shell
docker-compose up server
```

This container uses the following environment variables to create the Postgres server:

```
POSTGRES_HOSTNAME=server
POSTGRES_DB=postgres
POSTGRES_USER=postgres
POSTGRES_PASSWORD=postgres_password
```
A web client interface into local and remote Postgres databases:

```shell
docker-compose up client
```

This container uses the following environment variables to configure pgAdmin4:

```
[email protected]
PGADMIN_DEFAULT_PASSWORD=pgadmin_password
PGADMIN_HOST_PORT=8088
```

Once running, connect to the container from a web browser at `http://localhost:$PGADMIN_HOST_PORT`. Use the `$PGADMIN_DEFAULT_EMAIL` and `$PGADMIN_DEFAULT_PASSWORD` to log in.