Sensitivity Analysis and Uncertainty Quantification in Cardiac Hemodynamics
A framework for sensitivity analysis and uncertainty quantification in cardiac hemodynamics
Date: June 2024
Mail: [email protected] or [email protected]
This project consists of three major parts:
- Data Analysis
- Surrogate Model Comparison
- Sensitivity Analysis and Uncertainty Quantification
Performing Data Analysis gives you insights into the data you are using. This can help to find outliers and to check whether the program reads the data correctly. You get the following plots:
- Boxplot of distribution of data in columns
- Correlation Matrix of data
Surrogate Model Comparison gives insights into how well different surrogate models perform at predicting the data. To verify their behavior you get the following plots:
- mean of the timings (training and testing) over a certain number of folds
- $R^2$ score, Mean Absolute Error, and Root Mean Squared Error for each model and output parameter (colors)
- actual vs. predicted values for each output parameter (subfigure) and model (color)
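These comparison metrics can be computed as follows. This is a dependency-free sketch for one output parameter; the tool itself presumably relies on scikit-learn's metric implementations:

```python
import math

def regression_metrics(y_true, y_pred):
    """Compute R^2, MAE, and RMSE for one output parameter."""
    n = len(y_true)
    mean_true = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_true) ** 2 for t in y_true)
    r2 = 1.0 - ss_res / ss_tot          # 1.0 = perfect prediction
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    rmse = math.sqrt(ss_res / n)        # penalizes large errors more than MAE
    return r2, mae, rmse
```

For example, `regression_metrics([1, 2, 3, 4], [1.1, 1.9, 3.2, 3.8])` yields an $R^2$ of 0.98.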
The Sensitivity Analysis provides more insight into the dependencies between input and output parameters. For the chosen models you get a heat-map which shows how sensitively a certain output parameter depends on the inputs:
- heat-map with sensitivities

Additionally, it is possible to perform the sensitivity analysis with a changing bounds range. With that one can perform uncertainty quantification. It can be run with (uq).
It is also possible to add a project-specific program.
- Provide your data
- Set preferences in config.txt
- Run the program and choose da, sc, sa, uq, or ps
- don't worry be happy ;)
Adding surrogate models:
- go to `models.py` and add a new model class according to the `NIPCE` class
- you need an `init()`, `fit()`, `predict()`, `get_params()`, and `set_params()` function
- add the new model in the `creatingModels()` function
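As a sketch, a new model class could look like the following. The class name, the baseline logic, and the constructor parameter `shift` are illustrative, not part of the codebase; only the five-method interface is taken from the list above (in Python the constructor is spelled `__init__()`):

```python
class MeanModel:
    """Hypothetical baseline surrogate: predicts the training-set mean,
    optionally offset by a tunable hyperparameter `shift`."""

    def __init__(self, shift=0.0):
        self.shift = shift
        self._mean = None

    def fit(self, X, y):
        # "Training" is just storing the mean of the training targets.
        self._mean = sum(y) / len(y)
        return self

    def predict(self, X):
        # One prediction per input row, independent of the features.
        return [self._mean + self.shift for _ in X]

    def get_params(self, deep=True):
        # Expose hyperparameters, scikit-learn style.
        return {"shift": self.shift}

    def set_params(self, **params):
        for key, value in params.items():
            setattr(self, key, value)
        return self
```

Following the scikit-learn estimator conventions here keeps the new class interchangeable with the existing models in the k-fold comparison loop.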
Adding a preprocessing function:
- define a new function `project_specific_preprocessing()` in `preprocessing.py`
- in `initialization.py` you can add it in `read_data()`
- you can do it like `mv_uq_procect_preprocessing()`
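A sketch of what such a hook might look like. The real function in `preprocessing.py` presumably operates on a pandas DataFrame, so the list-of-dicts input and the specific cleaning steps here are purely illustrative:

```python
def project_specific_preprocessing(rows):
    """Hypothetical preprocessing hook: strip whitespace from column
    names and drop rows with missing values. Illustrative only; the
    actual codebase works on pandas DataFrames."""
    cleaned = []
    for row in rows:
        # Normalize column names (e.g. " y " -> "y").
        row = {key.strip(): value for key, value in row.items()}
        # Keep only complete rows.
        if all(value is not None for value in row.values()):
            cleaned.append(row)
    return cleaned
```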
Adding hyperparameters:
- define your parameter in `config.txt`
- add the parameter in `main.py` like the others in the section *Initialize Hyperparameter*, with: `your_param = parameter['your_param']`
- now you can use `your_param` in the main class
There are a couple of preferences you can set. Write the name of the variable and its value(s) separated by spaces. The order is arbitrary. Text after a `#` is ignored.
Here is a short description:
Name | Example input | Note |
---|---|---|
data_path | data_df.csv or ../03_Results | .xlsx, .csv, and Ansys .out files |
input_parameter | y z alpha | use column names of .csv/.xlsx |
output_parameter | energy-loss wss | use column names of .csv/.xlsx |
output_parameter_sa_plot | Energy_loss WSS | display names for the plots |
normalize | True | normalize the data |
scaler | 'none' or 'minmax' or 'standard' | scale the data |
save_data | True | save the data in a .csv file |
get_mean_of_each_input_pair | True | mean over e.g. timesteps in the data set |
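To make the format concrete, here is a minimal sketch of a parser for the 'name value(s)' format described above, together with illustrative sample lines. The actual parser in the codebase may differ in details:

```python
def read_config(lines):
    """Parse 'name value(s)' lines: the order is arbitrary and
    text after a '#' is ignored, as described above."""
    parameter = {}
    for line in lines:
        line = line.split("#", 1)[0].strip()  # drop comments
        if not line:
            continue
        name, *values = line.split()
        # A single value stays scalar; multiple values become a list.
        parameter[name] = values[0] if len(values) == 1 else values
    return parameter

# Illustrative config.txt content:
example = """
data_path data_df.csv
input_parameter y z alpha   # column names
normalize True
""".splitlines()
parameter = read_config(example)
```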
To define the output folder, go to `plotting.py` (l. 17) and specify the `save_fig_path` variable.
Name | Example input | Explanation |
---|---|---|
models | Svr-Rbf | Support Vector Regression - Radial Basis Function |
models | Svr-Linear | Support Vector Regression - Linear Basis Function |
models | Svr-poly | Support Vector Regression - Polynomial Basis Function |
models | Svr-Sigmoid | Support Vector Regression - Sigmoid Basis Function |
models | RF | Random Forest |
models | KNN | K-Nearest Neighbors |
models | LR | Linear Regression |
models | Bayesian-Ridge | Bayesian Ridge |
models | NIPCE | Non-Intrusive Polynomial Chaos Expansion |
models | GP | Gaussian Process |
models | DecisionTree | Decision Tree |
nipce_order | 2 | order of the NIPCE model |
Name | Example input | Explanation |
---|---|---|
n_splits | 10 | number of splits for k-fold cross-validation |
shuffle | True | for a random order of data points |
random_state | 42 | seed for the split decision |
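To illustrate what these three settings control, here is a dependency-free sketch of k-fold index splitting. The tool itself presumably uses scikit-learn's `KFold`, which behaves analogously:

```python
import random

def kfold_indices(n_samples, n_splits=10, shuffle=True, random_state=42):
    """Split sample indices into n_splits (train, test) pairs.
    `shuffle` randomizes the order; `random_state` makes it reproducible."""
    indices = list(range(n_samples))
    if shuffle:
        random.Random(random_state).shuffle(indices)
    # Distribute any remainder over the first folds.
    fold_sizes = [n_samples // n_splits] * n_splits
    for i in range(n_samples % n_splits):
        fold_sizes[i] += 1
    folds, start = [], 0
    for size in fold_sizes:
        test = indices[start:start + size]
        train = indices[:start] + indices[start + size:]
        folds.append((train, test))
        start += size
    return folds
```

Every sample lands in exactly one test fold, so the comparison metrics are averaged over predictions for the whole data set.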
Name | Example input | Note |
---|---|---|
plot_data | True | pair-plot (scatter) of the data frame (currently not used) |
is_plotting_... | True | currently not used |
Name | Example input | Explanation |
---|---|---|
sa_models | NIPCE GP | defines with which model(s) you want to perform the SA |
sa_sobol_indice | ST or S1 | total-order or first-order SA |
sa_17_segment_model | NIPCE | defines the model for the segment plot |
sa_sample_size | 512 | sample size for the SA |
sa_output_parameter | WSS Eloss ... | defines the output parameters for the SA calculation |
output_parameter_sa_plot | WSS Eloss ... | defines the outputs for plotting |
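For background on the S1 option: a first-order Sobol index measures the share of output variance explained by one input alone. Below is a self-contained pick-and-freeze Monte Carlo sketch of that idea; the tool itself computes the indices from the chosen surrogate models rather than with this estimator:

```python
import random

def first_order_sobol(model, n_inputs, fixed_index, n_samples=20000, seed=0):
    """Estimate the first-order Sobol index S1 of one input for `model`,
    assuming independent uniform inputs on [0, 1). Sketch only."""
    rng = random.Random(seed)
    y_a, y_ab = [], []
    for _ in range(n_samples):
        a = [rng.random() for _ in range(n_inputs)]
        b = [rng.random() for _ in range(n_inputs)]
        b[fixed_index] = a[fixed_index]  # freeze the input of interest
        y_a.append(model(a))
        y_ab.append(model(b))
    mean_a = sum(y_a) / n_samples
    mean_ab = sum(y_ab) / n_samples
    # S1 = Cov(Y, Y') / Var(Y), where Y' shares only the frozen input.
    cov = sum((ya - mean_a) * (yb - mean_ab)
              for ya, yb in zip(y_a, y_ab)) / n_samples
    var = sum((ya - mean_a) ** 2 for ya in y_a) / n_samples
    return cov / var
```

For the toy model $Y = 2X_1 + X_2$ the analytical indices are $S_1 = 0.8$ and $S_2 = 0.2$, which the estimator reproduces up to Monte Carlo error.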
Here you can add your project-specific settings. In the case of the Mitral Valve Uncertainty Quantification they are the following:
Name | Example input | Explanation |
---|---|---|
sa_17_segment_model | Lin-Reg | which model is used for the 17-segment plot |
For this project you need to install the following libraries:

Python: Version 3.9.13

Library | Version |
---|---|
numpy | 1.25.2 |
pandas | 2.0.0 |
matplotlib | 3.8.2 |
seaborn | 0.13.0 |
scipy | 1.12.0 |
scikit-learn | 1.3.2 |
chaospy | 4.3.13 |
time | standard library |
warnings | standard library |
This tool was used in the following publication:
Quantifying the Impact of Mitral Valve Anatomy on Clinical Markers Using Surrogate Models and Sensitivity Analysis
https://engrxiv.org/preprint/view/3785
The input/output pairs used for training the surrogate models were created using Ansys Fluent CFD simulations. More details on using this automated CFD model and the corresponding setup files can be found here: