All CK components can be found at cKnowledge.io and in one GitHub repository!
This project is hosted by the cTuning foundation.
The CK-TensorFlow repository provides automation components in the CK format for tedious and repetitive tasks such as detecting and installing different TensorFlow versions, models and data sets across diverse platforms, and running AI/ML workflows in a unified way.
Note that if some third-party automation fails or misses some functionality (software detection, package installation, benchmarking and autotuning workflows, etc.), the CK concept is to continuously and collaboratively improve such reusable components! Please provide your feedback and report bugs via GitHub issues, or get in touch with the community using this public CK mailing list!
- Python 2.x:
$ sudo apt-get install python-dev python-pip python-setuptools python-opencv git
or
- Python 3.x:
$ sudo apt-get install python3-dev python3-pip python3-setuptools
Note that CK will automatically install the following Python dependencies into the CK virtual space for TensorFlow:
protobuf easydict joblib image wheel numpy scipy absl-py
You may also need the following, depending on your use case:
- CUDA/cuDNN if you have a CUDA-enabled GPU
- Android NDK if you want to compile and run TF for Android devices
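Once CK is installed (see the next step), you can check which of these optional dependencies it has already detected and registered on your machine by querying the CK environment entries, in the same way as for TensorFlow below. The tag filter in the second command is only an illustration and depends on which CK software plugins you have installed:
$ ck show env
$ ck show env --tags=cuda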
Follow these instructions to install CK.
$ ck pull repo:ck-tensorflow
$ ck install package:lib-tensorflow-1.8.0-cpu
and/or (CK enables easy co-existence of different versions of tools):
$ ck install package:lib-tensorflow-1.8.0-cuda
Check that TF is installed locally and registered in CK:
$ ck show env --tags=lib,tensorflow
Use the CK virtual environment to test it (similar to a Python virtual environment, but for any binary package installed via CK):
$ ck virtual env --tags=lib,tensorflow
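The shell spawned by the command above should have the environment (including PYTHONPATH) set up for the CK-installed TensorFlow, so a quick sanity check from inside that shell is simply (exit the shell when done):
$ python -c "import tensorflow as tf; print(tf.__version__)"
$ exit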
Install other TF versions available in the CK:
$ ck install package --tags=lib,tensorflow
Run the image classification example:
$ ck run program:tensorflow --cmd_key=classify
Note that you will be asked to select a JPEG image from the available CK data sets. We added standard demo images (cat.jpg, catgrey.jpg, fish-bike.jpg, computer_mouse.jpg) to the 'ctuning-datasets-min' repository. You can list them via:
$ ck pull repo:ctuning-datasets-min
$ ck search dataset --tags=dnn
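To see where a given data set entry (and its images) is located on disk, you can use the standard CK find command with one of the entries returned by the search above:
$ ck find dataset:{one of the entries found above}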
You can find more details about customized TensorFlow builds via CK for Android, Linux, Windows, Raspberry Pi, Odroid, etc. here.
To benchmark TensorFlow via CK (the parameters in parentheses are optional):
$ ck run program:tensorflow (--env.BATCH_SIZE=10) (--env.NUM_BATCHES=5)
Select one of the test_cpu and test_cuda commands; select an available version of TensorFlow, if prompted (when there is more than one choice); select an available benchmark, if prompted; and select a TensorFlow model.
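Before using the CK autotuner described below, you can also sweep such parameters manually; for example, a minimal shell loop over several batch sizes (a sketch that only reuses the environment variables shown above; the usual selection prompts will appear for each run):
$ for BS in 1 2 4 8 ; do ck run program:tensorflow --env.BATCH_SIZE=$BS --env.NUM_BATCHES=5 ; done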
It is now possible to participate in crowd-benchmarking of TensorFlow (early prototype):
$ ck crowdbench tensorflow --user={your email or ID to acknowledge contributions} (--env.BATCH_SIZE=128 --env.NUM_BATCHES=100)
You can see continuously aggregated results in the public Collective Knowledge repository under the 'crowd-benchmark TensorFlow library' scenario.
Note that this is an ongoing, heavily evolving and long-term project to enable collaborative and systematic benchmarking and tuning of realistic workloads across diverse hardware (ARM TechCon'16 talk and demo, DATE'16, CPC'15). We also plan to add crowd-benchmarking and crowd-tuning of Caffe, TensorFlow and other DNN frameworks to our Android application soon - please stay tuned!
It is now possible to take advantage of our universal multi-objective CK autotuner to optimize TensorFlow. As a first simple example, we added batch size tuning via CK. You can invoke it as follows:
$ ck autotune tensorflow
All results will be recorded in the local CK repository, and you will be given command lines to plot graphs or replay experiments, such as:
$ ck plot graph:{experiment UID}
$ ck replay experiment:{experiment UID} --point={specific optimization point}
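To find the experiment UIDs used in these commands, you can list or search the experiment entries recorded in your local CK repository (the tag filter below is only an assumption and depends on how the experiment was recorded):
$ ck list experiment
$ ck search experiment --tags=tensorflow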
We are now working to extend the above autotuner and crowdsource the optimization of the whole SW/HW/model/data set stack (paper 1, paper 2).
We would like to thank the community for their interest and feedback about this collaborative AI optimization approach powered by CK at ARM TechCon'16 and the Embedded Vision Summit'17 - so please stay tuned ;) !
CK allows us to unify AI interfaces while collaboratively optimizing the underlying engines. For example, we added similar support to install, use and evaluate Caffe/Caffe2, CK-PyTorch and MXNet via CK:
$ ck pull repo:ck-caffe2
$ ck pull repo --url=https://github.com/dividiti/ck-caffe
$ ck pull repo:ck-mxnet
$ ck install package:lib-caffe-bvlc-master-cpu-universal --env.CAFFE_BUILD_PYTHON=ON
$ ck install package:lib-caffe2-master-eigen-cpu-universal --env.CAFFE_BUILD_PYTHON=ON
$ ck install package --tags=mxnet
$ ck run program:caffe --cmd_key=classify
$ ck run program:caffe2 --cmd_key=classify
$ ck run program:mxnet --cmd_key=classify
$ ck crowdbench caffe --env.BATCH_SIZE=5
$ ck crowdbench caffe2 --env.BATCH_SIZE=5 --user=i_want_to_ack_my_contribution
$ ck autotune caffe
$ ck autotune caffe2
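Since these program entries expose the same classify command across frameworks, you can easily script unified experiments on top of them; for example, a minimal shell loop over the engines installed above (each run may still ask you to select models, data sets and library versions):
$ for FW in caffe caffe2 mxnet tensorflow ; do ck run program:$FW --cmd_key=classify ; done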
We provided an option in all our AI crowd-tuning tools to let the community report and share mispredictions (the image, the correct label and the wrong prediction) to gradually and collaboratively build realistic data/training sets:
- Public repository (see "mispredictions and unexpected behavior")
- Misclassified images via CK-based AI web-service
- Simple demo to classify images with continuous optimization of DNN engines underneath, sharing of mispredictions and creation of a community training set; and to predict compiler optimizations based on program features.
We use crowd-benchmarking and crowd-tuning of such realistic workloads across diverse hardware for open academic and industrial R&D challenges - join this community effort!
- The SqueezeDet demo currently works well with Python 3.5 and package:squeezedetmodel-squeezedet, so install it first:
$ ck install package:squeezedetmodel-squeezedet
$ ck run program:squeezedet
Get in touch with the ck-tensorflow developers via the CK mailing list: http://groups.google.com/group/collective-knowledge