This project demonstrates how Batch Reinforcement Learning (RL) can be effectively used for Decision Support in Clinical Settings. It explores the acquisition, processing, modeling, and application of clinical data for RL-based decision-making.
For more details, you can read the accompanying blog post.
- About
- Data Acquisition
- Pre-Processing
- Creating Trajectories (Episodes) and Feature Encoding
- Modeling with Intel Coach
- Installation and Setup
- Experimental Evaluation
- Outputs
The aim of this experiment is to demonstrate how batch reinforcement learning can be used to support clinical decision-making. The pipeline includes:
- Data Acquisition
- Data Pre-Processing
- Creating Trajectories (Episodes) and Feature Encoding
- Modeling with Reinforcement Learning
The data used in this project comes from the MIMIC-III database, which contains health-related records for critical care patients. To acquire this data, follow these steps:
- Create an account on PhysioNet and request access to the dataset: MIMIC-III Clinical Database v1.4.
- Review the schema of the database here: MIMIC Schema.
If you don't have a data engineering team available to set up data pipelines, it's useful to have a quick way to access powerful computing resources.
Follow this guide on how to set up an AWS EC2 instance for data analysis: Zero to AWS EC2 for Data Science.
To connect to the AWS instance, use the command below (using Transmit and iTerm simultaneously is recommended for easy file transfers).
ssh -i "mimic2.pem" ubuntu@<your-ec2-instance-address>
Screen is a useful utility for managing remote work: it lets you keep multiple terminal sessions open and keeps long-running jobs alive even if your SSH connection drops.
Install Screen using the following commands:
sudo apt-get update
sudo apt-get install screen
screen
Verify that the screen session is running properly by pressing Ctrl-a v.
To download the necessary code:
git clone https://github.com/MLforHealth/MIMIC_Extract.git
cd ~/MIMIC_Extract/data
Use wget to download the MIMIC-III dataset (enter your PhysioNet credentials when prompted):
wget -r -N -c -np --user <your-username> --ask-password https://physionet.org/files/mimiciii/1.4/
(Optional) Uncompress the files:
gunzip *.gz
Install PostgreSQL and create a database for storing the MIMIC-III data:
sudo apt-get update
sudo apt-get install postgresql
sudo -u postgres createuser --interactive
createdb mimic
Connect to the PostgreSQL database and set up the schema:
psql -U ubuntu -d mimic
\c mimic;
CREATE SCHEMA mimiciii;
set search_path to mimiciii;
Clone the MIMIC Code repository:
git clone https://github.com/MIT-LCP/mimic-code/
Create tables using the SQL script provided:
psql 'dbname=mimic user=mimicuser options=--search_path=mimiciii' -f postgres_create_tables.sql
To populate the tables:
psql 'dbname=mimic user=mimicuser options=--search_path=mimiciii' -f postgres_load_data.sql -v mimic_data_dir='<path_to_data>'
Check the sizes of the tables to verify that the data has been correctly populated.
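A quick way to do this is a row count per table. The snippet below is a minimal sketch, assuming psycopg2 is installed (pip install psycopg2-binary) and using the mimicuser role and mimic database from the steps above; adjust the connection parameters to your setup.
import psycopg2

# Minimal sanity check: row counts for a few core MIMIC-III tables.
conn = psycopg2.connect(dbname="mimic", user="mimicuser")
cur = conn.cursor()
for table in ["patients", "admissions", "icustays", "chartevents", "labevents"]:
    cur.execute(f"SELECT count(*) FROM mimiciii.{table}")
    print(f"{table}: {cur.fetchone()[0]} rows")
cur.close()
conn.close()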
To create the materialized views, first run the Postgres Functions Script, then execute:
concepts/postgres_make_concepts.sh
The preprocessing step involves data extraction and feature engineering to create suitable input for modeling.
Edit the user environment setup script as needed:
https://github.com/MLforHealth/MIMIC_Extract/blob/455a2484c1fd2de3809ec2aa52897717379dc1b7/utils/setup_user_env.sh
Source the script:
source ./setup_user_env.sh
Install Anaconda and create a new environment for MIMIC data extraction:
cd ~
wget https://repo.continuum.io/archive/Anaconda2-4.2.0-Linux-x86_64.sh
bash Anaconda2-4.2.0-Linux-x86_64.sh -b -p ~/anaconda
rm Anaconda2-4.2.0-Linux-x86_64.sh
echo 'export PATH="~/anaconda/bin:$PATH"' >> ~/.bashrc
source ~/.bashrc
conda update conda
conda env create --force -f ../mimic_extract_env.yml
source activate mimic_data_extraction
Edit the mimic_direct_extract.py file for specific column names, and then build the curated dataset:
make build_curated_from_psql
Ensure enough disk space is available, expanding the root partition if needed:
lsblk
sudo growpart /dev/nvme0n1 1
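Once the build completes, it is worth sanity-checking the generated HDF5 file before moving on. The filename below is MIMIC_Extract's usual default output (all_hourly_data.h5); adjust the path if your output differs.
import pandas as pd

# List the keys in the curated HDF5 file and the shape of each table.
with pd.HDFStore("all_hourly_data.h5", mode="r") as store:
    for key in store.keys():
        print(key, store.get(key).shape)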
To create episodes and encode features, use the MIMIC_RL notebook.
The input to the model is the curated dataset produced in the previous step:
X = pd.read_hdf(DATAFILE, 'vitals_labs')
Y = pd.read_hdf(DATAFILE, 'interventions')
static = pd.read_hdf(DATAFILE, 'patients')
The generated output is a set of RL trajectories, stored as a CSV with columns such as:
action,all_action_probabilities,episode_id,episode_name,reward,...
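The exact encoding lives in the MIMIC_RL notebook; the sketch below only illustrates the idea of grouping the hourly data into per-stay episodes and assigning a sparse terminal reward. The index level ("icustay_id"), the intervention column ("vent"), and the outcome column ("mort_hosp") follow MIMIC_Extract's usual output but should be treated as assumptions here.
import pandas as pd

DATAFILE = "all_hourly_data.h5"                 # assumed MIMIC_Extract output
X = pd.read_hdf(DATAFILE, "vitals_labs")        # hourly state features
Y = pd.read_hdf(DATAFILE, "interventions")      # hourly interventions (actions)
static = pd.read_hdf(DATAFILE, "patients")      # static data and outcomes

records = []
for ep_id, (stay_id, states) in enumerate(X.groupby(level="icustay_id")):
    actions = Y.loc[states.index, "vent"]       # assumed binary action: ventilation on/off
    died = static.xs(stay_id, level="icustay_id")["mort_hosp"].iloc[0]
    n = len(states)
    for t in range(n):
        records.append({
            "episode_id": ep_id,
            "episode_name": str(stay_id),
            "transition_number": t,
            "action": int(actions.iloc[t]),
            "all_action_probabilities": None,   # filled in from the behaviour-policy estimate
            "reward": 0.0 if t < n - 1 else (1.0 if died == 0 else -1.0),  # sparse terminal reward
        })

pd.DataFrame.from_records(records).to_csv("trajectories.csv", index=False)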
The modeling phase uses Intel's RL Coach framework for the RL experiments. Install it and its dependencies as follows:
# Install necessary packages
sudo -E apt-get install python3-pip cmake zlib1g-dev python3-tk python-opencv -y
sudo -E apt-get install libboost-all-dev -y
sudo -E apt-get install libblas-dev liblapack-dev libatlas-base-dev gfortran -y
sudo -E apt-get install libsdl-dev libsdl-image1.2-dev ...
sudo -E pip3 install virtualenv
virtualenv -p python3 coach_env
. coach_env/bin/activate
git clone https://github.com/NervanaSystems/coach.git
cd coach
pip3 install -e .
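After the editable install finishes, a quick import check (run inside the activated coach_env) confirms the package is on the path:
# Run inside the activated coach_env virtualenv.
import rl_coach
print(rl_coach.__file__)   # should point into the cloned coach repository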
In our experiments, we focus on several popular RL algorithms, including the following (a small NumPy sketch contrasting the first two follows the list):
- Deep Q Learning (DQN)
- Double Deep Q Learning (DDQN)
- DDQN combined with a bootstrapped neural network (Bootstrapped DQN)
- Mixed Monte Carlo (MMC)
- Persistent Advantage Learning (PAL)
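To make the difference between DQN and Double DQN concrete, here is a small NumPy sketch (not Coach's implementation) of how each forms its bootstrap target; the toy Q-values are purely illustrative.
import numpy as np

def dqn_target(r, done, q_next_target, gamma=0.99):
    # DQN: the target network both selects and evaluates the next action.
    return r + gamma * (1.0 - done) * q_next_target.max(axis=1)

def ddqn_target(r, done, q_next_online, q_next_target, gamma=0.99):
    # Double DQN: the online network selects the action, the target network
    # evaluates it, which reduces DQN's over-estimation bias.
    best = q_next_online.argmax(axis=1)
    return r + gamma * (1.0 - done) * q_next_target[np.arange(len(best)), best]

# Toy batch of three transitions with two discrete actions.
r = np.array([0.0, 0.0, 1.0])
done = np.array([0.0, 0.0, 1.0])
q_online = np.array([[0.2, 0.5], [0.1, 0.9], [0.3, 0.4]])
q_target = np.array([[0.3, 0.4], [0.8, 0.2], [0.5, 0.1]])
print(dqn_target(r, done, q_target))              # uses the max over the target network
print(ddqn_target(r, done, q_online, q_target))   # evaluates the online argmax instead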
Refer to the MIMIC RL Notebook for further details on the experimental setup.
We rely on various Off-Policy Evaluation (OPE) metrics to evaluate the performance of the trained RL models without deploying them.
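Coach reports several such estimators during batch RL training. As a standalone illustration, the sketch below computes an ordinary importance-sampling estimate from a trajectory table; it assumes each transition carries the behaviour policy's probability of the taken action (pi_b, derivable from all_action_probabilities) and the evaluation policy's probability (pi_e), and both column names are hypothetical.
import numpy as np
import pandas as pd

def importance_sampling_value(df: pd.DataFrame, gamma: float = 1.0) -> float:
    # Ordinary (per-episode) importance sampling: weight each episode's return
    # by the product of pi_e / pi_b over its transitions, then average.
    estimates = []
    for _, ep in df.groupby("episode_id"):
        rho = np.prod(ep["pi_e"].to_numpy() / ep["pi_b"].to_numpy())
        ret = np.sum(ep["reward"].to_numpy() * gamma ** np.arange(len(ep)))
        estimates.append(rho * ret)
    return float(np.mean(estimates))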
- Training Logs: Use TensorBoard to visualize the training process.
- Checkpoints: Save model checkpoints for evaluation and comparison.
Feel free to explore the repository for additional details or reach out via issues for questions and discussions!