The Intel® AI Analytics Toolkit (AI Kit) gives data scientists, AI developers, and researchers familiar Python* tools and frameworks to accelerate end-to-end data science and analytics pipelines on Intel® architectures. The components are built using oneAPI libraries for low-level compute optimizations. This toolkit maximizes performance from preprocessing through machine learning, and provides interoperability for efficient model development.
You can find more information at AI Kit.
These getting started samples show how to run the samples for the different AI Kit components.
Code samples are licensed under the MIT license. See License.txt for details.
Third-party program licenses can be found here: third-party-programs.txt
Component | Folder | Description |
---|---|---|
daal4py | IntelPython_daal4py_GettingStarted | Batch linear regression using daal4py, the Python API for the oneAPI Data Analytics Library (oneDAL).
Intel® Neural Compressor | INC-Sample-for-Tensorflow | Quantize an FP32 model to INT8 with Intel® Neural Compressor and compare FP32 and INT8 performance.
Modin | IntelModin_GettingStarted | Run Modin-accelerated Pandas functions and observe the performance gain.
PyTorch | IntelPyTorch_GettingStarted | A simple training example for PyTorch.
TensorFlow | IntelTensorFlow_GettingStarted | A simple training example for TensorFlow.
XGBoost | IntelPython_XGBoost_GettingStarted | Set up and train an XGBoost* model on datasets for prediction.

Brief usage sketches for each component are shown below.
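The daal4py sample builds a batch linear regression model. A minimal sketch of the same idea, assuming daal4py and NumPy are installed and using synthetic data in place of the sample's dataset:

```python
import numpy as np
import daal4py as d4p

# Synthetic data: 1000 rows, 10 features, 1 target column (illustrative only).
X = np.random.rand(1000, 10)
y = np.random.rand(1000, 1)

# Train a linear regression model in batch mode.
train_result = d4p.linear_regression_training().compute(X, y)

# Predict with the trained model (here on the training data, for brevity).
predict_result = d4p.linear_regression_prediction().compute(X, train_result.model)
print(predict_result.prediction[:5])
```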
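The Intel® Neural Compressor sample performs post-training quantization of a TensorFlow model. The sketch below follows the Intel® Neural Compressor 2.x Python API and is not the sample's exact workflow; the frozen-graph path and input shape are placeholders:

```python
from neural_compressor.config import PostTrainingQuantConfig
from neural_compressor.data import DataLoader, Datasets
from neural_compressor.quantization import fit

# Dummy calibration data shaped like the model input (placeholder shape).
dataset = Datasets("tensorflow")["dummy"](shape=(100, 224, 224, 3))
calib_loader = DataLoader(framework="tensorflow", dataset=dataset)

# "./fp32_model.pb" is a placeholder path to an FP32 frozen graph.
q_model = fit(
    model="./fp32_model.pb",
    conf=PostTrainingQuantConfig(),
    calib_dataloader=calib_loader,
)
q_model.save("./int8_model")  # write out the quantized INT8 model
```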
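The Modin sample relies on Modin being a drop-in replacement for pandas, so only the import changes. A minimal sketch (the CSV path is a placeholder):

```python
import modin.pandas as pd  # drop-in replacement for "import pandas as pd"

df = pd.read_csv("large_dataset.csv")  # placeholder file; the read is parallelized
print(df.describe())                   # familiar pandas API, Modin execution
```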
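The PyTorch sample trains a small model. The sketch below is a generic PyTorch training loop on synthetic data, not the sample's exact code:

```python
import torch
from torch import nn

# Synthetic regression data (illustrative only).
X = torch.randn(256, 8)
y = torch.randn(256, 1)

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for epoch in range(5):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```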
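Similarly, the TensorFlow sample trains a small model. A generic Keras training sketch on synthetic data, again not the sample's exact code:

```python
import numpy as np
import tensorflow as tf

# Synthetic regression data (illustrative only).
X = np.random.rand(256, 8).astype("float32")
y = np.random.rand(256, 1).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, batch_size=32)
```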
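The XGBoost sample sets up and trains a model for prediction. A minimal sketch using the native XGBoost API on synthetic binary-classification data:

```python
import numpy as np
import xgboost as xgb

# Synthetic binary-classification data (illustrative only).
X = np.random.rand(500, 10)
y = np.random.randint(0, 2, size=500)

dtrain = xgb.DMatrix(X[:400], label=y[:400])
dtest = xgb.DMatrix(X[400:], label=y[400:])

params = {"objective": "binary:logistic", "max_depth": 4, "eta": 0.1}
bst = xgb.train(params, dtrain, num_boost_round=50)
print(bst.predict(dtest)[:5])  # predicted probabilities for the test rows
```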
To run these samples in Intel® DevCloud, please refer to Using AI samples in Intel® DevCloud for oneAPI.