The following table shows the mapping between each TAI principle and the corresponding knowledge sources covering that principle in the POLARIS framework.
| TAI Principle | Knowledge Source |
|---|---|
| Explainability | Jin et al. - EUCA: the Explainable AI Framework [22]<br>TensorFlow - Responsible AI in your ML workflow [25]<br>CSIRO - Responsible AI Pattern Catalogue [26] |
| Fairness | Amsterdam Intelligence - The Fairness Handbook [23] |
| Security | ENISA - Securing Machine Learning Algorithms [24]<br>ICO - Guidance on AI and data protection [27]<br>TensorFlow - Responsible AI in your ML workflow [25]<br>Microsoft - Threat Modeling AI/ML Systems and Dependencies [28]<br>CSIRO - Responsible AI Pattern Catalogue [26] |
| Privacy | ENISA - Securing Machine Learning Algorithms [24]<br>ICO - Guidance on AI and data protection [27]<br>TensorFlow - Responsible AI in your ML workflow [25]<br>Microsoft - Threat Modeling AI/ML Systems and Dependencies [28]<br>CSIRO - Responsible AI Pattern Catalogue [26] |
More details can be found in the preprint version of the paper:
@misc{baldassarre2024polaris,
      title={POLARIS: A framework to guide the development of Trustworthy AI systems},
      author={Maria Teresa Baldassarre and Domenico Gigante and Marcos Kalinowski and Azzurra Ragone},
      year={2024},
      eprint={2402.05340},
      archivePrefix={arXiv},
      primaryClass={cs.SE}
}