TrustyAI project and Kubeflow #733
Your feedback is appreciated @jbottum @james-jwu @zijianjoy @thesuperzapper @terrytangyuan @johnugeorge @kimwnasptd @andreyvelich @akgraner @StefanoFioravanzo @rimolive
@ruivieira This is a very interesting proposal! It seems like you are proposing to integrate the TrustyAI ecosystem across various popular Kubeflow components. I wonder how we could highlight the added value to the user experience and describe what the success criteria of this initiative are. A few questions:
Again, thanks for the proposal. Looking forward to more!
@StefanoFioravanzo thank you! Regarding your questions:
All of TrustyAI's code (core algorithms, services, and integrations) is fully open-source, released under the Apache 2.0 license, and this is a core requirement for contributors. There will be no dependencies on paid services.
The TrustyAI team is already implementing some of the integrations and will continue, but any contribution from the wider community would be more than welcome.
Jupyter notebooks / workbenches: TrustyAI is available as a Python library, with pre-built workbench container images.
Pipelines: Due to its simple architecture (integrations built on top of a core of algorithms), TrustyAI can be containerised into single-purpose pipeline steps that could be part of a model-building or deployment pipeline. As a real-world example, a global explainability step could score feature importances and check whether regulatory compliance requirements for protected attributes are being met (see the sketch below).
Deployment/monitoring: When a model is deployed, the TrustyAI service provides real-time metrics such as data drift and bias/fairness. A model's bias or potential data drift is published to Prometheus, from which alerting can be set up if a value falls outside acceptable thresholds.
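To make the pipeline example concrete, below is a minimal sketch of such a fairness gate written in plain pandas (no TrustyAI imports; the column names, toy data, and 0.1 threshold are illustrative assumptions). It computes statistical parity difference (SPD), one of the group fairness metrics TrustyAI provides, and fails the step if the model's outcomes breach the threshold:

```python
import pandas as pd

def statistical_parity_difference(df: pd.DataFrame,
                                  protected: str,
                                  outcome: str) -> float:
    """SPD = P(favorable | unprivileged) - P(favorable | privileged).

    Values near 0 indicate similar favorable-outcome rates across groups;
    a common rule of thumb treats |SPD| <= 0.1 as acceptable.
    """
    unprivileged_rate = df[df[protected] == 0][outcome].mean()
    privileged_rate = df[df[protected] == 1][outcome].mean()
    return unprivileged_rate - privileged_rate

# Illustrative pipeline-step logic: block promotion if the model's
# predictions breach the compliance threshold for a protected attribute.
predictions = pd.DataFrame({
    "gender": [0, 0, 1, 1, 1, 0],      # hypothetical protected attribute
    "approved": [1, 0, 1, 1, 0, 1],    # model's favorable outcome
})
spd = statistical_parity_difference(predictions, "gender", "approved")
assert abs(spd) <= 0.1, f"Bias check failed: SPD={spd:.2f}"
```

A containerised step running logic like this after model training could gate deployment on the compliance check passing.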
(This issue aims at capturing the discussion following the community presentation and the kubeflow-discuss mailing list post.) On behalf of the TrustyAI team, I would like to thank you all for the opportunity to present the TrustyAI project and discuss its fit with Kubeflow at the community meeting.
TrustyAI summary
TrustyAI is an open-source community dedicated to providing a diverse toolkit for responsible AI development and deployment. TrustyAI was founded in 2019 as part of Kogito, an open-source business automation community, as a response to growing demand from users in highly regulated industries such as financial services and healthcare.
The TrustyAI community maintains a number of projects within the responsible AI field, mostly revolving around model explainability, model monitoring, and responsible model serving.
TrustyAI provides tools to apply explainability, inspect bias/fairness, monitor data drift, and mitigate harmful content for a number of different user profiles. For Java developers, we provide the TrustyAI Java library containing TrustyAI's core algorithms. For data scientists and developers who are using Python, we expose our Java library via the TrustyAI Python library, which combines the speed of Java with the familiarity of Python. Here, TrustyAI's algorithms are integrated with common data science libraries like NumPy and Pandas. Future work is planned to add native Python algorithms to the library and to broaden TrustyAI's compatibility by integrating with libraries like PyTorch and TensorFlow. One such nascent project is trustyai-detoxify, a module within the TrustyAI Python library that provides guardrails, toxic language detection, and rephrasing capabilities for use with LLMs.
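As a rough illustration of the Python workflow described above, here is a minimal sketch modelled on the trustyai package's documented usage; the Model wrapper, LimeExplainer, and their argument names are assumptions about the current API and may vary between releases:

```python
import numpy as np
from trustyai.model import Model
from trustyai.explainers import LimeExplainer

# Wrap an arbitrary Python prediction function (here a toy linear model)
# so that TrustyAI's Java core can call it.
def predict(x: np.ndarray) -> np.ndarray:
    return np.dot(x, [0.3, -0.2, 0.5]).reshape(-1, 1)

model = Model(predict, output_names=["score"])

# Explain a single prediction: LIME attributes the output to each input
# feature by fitting a local surrogate model around the point of interest.
point = np.array([[1.0, 2.0, 3.0]])
explainer = LimeExplainer(samples=100)
explanation = explainer.explain(inputs=point, outputs=model(point), model=model)
print(explanation.as_dataframe())
```

The same pattern applies to other explainers in the library (e.g. SHAP), since they share the wrapped-model interface.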
For enterprise and MLOps use-cases, TrustyAI provides the TrustyAI Kubernetes Service and Operator which serves TrustyAI bias, explainability, and drift algorithms within Kubernetes. Both the service and operator are integrated into Open Data Hub (ODH) to facilitate coordination between model servers and TrustyAI, bringing easy access to our responsible AI toolkit to users of both platforms. Currently, the TrustyAI Kubernetes service supports tabular models served in KServe or ModelMesh.
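As a purely hypothetical sketch of how a platform component might interact with that service (the service URL, route, and payload fields below are assumptions rather than the confirmed REST API; consult the TrustyAI Kubernetes Service documentation for the actual endpoints), scheduling a recurring bias metric for a deployed model could look like:

```python
import requests

# Hypothetical in-cluster address of the TrustyAI service.
TRUSTYAI_SERVICE = "http://trustyai-service.model-namespace.svc.cluster.local:8080"

# Hypothetical request payload describing which model and attribute to monitor.
payload = {
    "modelId": "credit-model",        # model served via KServe/ModelMesh
    "protectedAttribute": "gender",   # input feature to monitor
    "favorableOutcome": 1,            # output value considered favorable
    "privilegedAttribute": 1,         # value marking the privileged group
    "unprivilegedAttribute": 0,
}

# Ask the service to compute statistical parity difference on live traffic;
# the resulting values are exported to Prometheus, where alerting rules
# can flag breaches of the acceptable threshold.
resp = requests.post(f"{TRUSTYAI_SERVICE}/metrics/spd/request", json=payload)
resp.raise_for_status()
print(resp.json())
```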
Potential integrations with Kubeflow
Presentation
Any feedback would be greatly appreciated.
All the best,
TrustyAI team.