Releases: mlcommons/ck
MLCommons CM aka CK2 v1.1.3
Stable release of the MLCommons CM automation meta-framework from the MLCommons taskforce on education and reproducibility:
- improved removal of CM entries on Windows
- fixed #574
- improved detection of CM entries with "."
- added --yaml option in "cm add" to save meta in YAML
- added --save_to_json to save output to JSON (useful for web services)
- extended "cm info {automation} {artifact}" (copy to clipboard)
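The new options above can be sketched as a short command session. The artifact type ("script"), the name "my-script", and the exact flag syntax are illustrative assumptions based on the release notes, not confirmed usage:

```shell
# Sketch of the v1.1.3 CLI options; "my-script" and the output file name
# are placeholders for illustration only.
if command -v cm >/dev/null 2>&1; then
  # Save the meta description of a new CM entry in YAML instead of JSON:
  cm add script my-script --yaml

  # Save the command output to a JSON file (useful for web services):
  cm find script my-script --save_to_json=cm-output.json

  # Show information about an artifact (the path is copied to the clipboard):
  cm info script my-script
else
  # The CM framework is not installed here; commands are shown for illustration.
  echo "cm CLI not installed; skipping demo"
fi
STATUS=ok
```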
Stable release of MLCommons CM v1.1.1
Stable release from the MLCommons taskforce on education and reproducibility to automate MLPerf inference at the Student Cluster Competition at SuperComputing'22:
- https://github.com/mlcommons/ck/blob/master/docs/tutorials/sc22-scc-mlperf.md
- https://github.com/mlcommons/ck/blob/master/docs/tutorials/sc22-scc-mlperf2.md
- https://wandb.ai/cmind/cm-mlperf-sc22-scc-retinanet-offline
- https://twitter.com/DrHaiAhNam/status/1592221106290688001
Stable release of MLCommons CM v1.1.0
Stable release from the MLCommons taskforce on education and reproducibility to automate MLPerf inference at the Student Cluster Competition at SuperComputing'22:
- https://github.com/mlcommons/ck/blob/master/docs/tutorials/sc22-scc-mlperf.md
- https://github.com/mlcommons/ck/blob/master/docs/tutorials/sc22-scc-mlperf2.md
- https://wandb.ai/cmind/cm-mlperf-sc22-scc-retinanet-offline
- https://twitter.com/DrHaiAhNam/status/1592221106290688001
cm-v1.0.5
Stable release from the MLCommons taskforce on education and reproducibility to test the MLPerf inference benchmark automation for the Student Cluster Competition at SC'22.
MLCommons CM v1.0.0 - the next generation of the MLCommons Collective Knowledge framework
This is the stable release of the MLCommons Collective Mind framework v1.0.1 with reusable and portable MLOps components - the next generation of the MLCommons Collective Knowledge framework, developed to modularize AI/ML systems and automate their benchmarking, optimization and design space exploration based on the mature MLPerf methodology.
After donating the CK framework to MLCommons, we have been developing this portable workflow automation technology as a community effort within the open education workgroup to modularize MLPerf and make it easier to plug in real-world tasks, models, data sets, software and hardware from the cloud to the edge.
We are very glad to see that more than 80% of all performance results and more than 95% of all power results were automated with MLCommons CK v2.6.1 in the latest MLPerf inference round, thanks to submissions from Qualcomm, Krai, Dell, HPE and Lenovo!
We invite you to join our public workgroup to continue developing this portable workflow framework and reusable automation for MLOps and DevOps as a community effort to:
- develop an open-source educational toolkit to make it easier to plug any real-world ML & AI tasks, models, data sets, software and hardware into the MLPerf benchmarking infrastructure;
- automate design space exploration of diverse ML/SW/HW stacks to trade off performance, accuracy, energy, size and costs;
- help end-users reproduce MLPerf results and deploy the most suitable ML/SW/HW stacks in production;
- support collaborative and reproducible research.
Copyright (C) MLCommons 2022
MLCommons CM toolkit v0.7.24 - the first stable release to modularize and automate MLPerf inference v2.1
Stable release of the MLCommons CM toolkit - the next generation of the CK framework developed in the open workgroup to modularize and automate MLPerf benchmarks.
A fix to support Python 3.9+
This release includes a fix for issue #184.
Stable release for MLCommons CK v2.6.0
This is a stable release of the MLCommons CK framework with a few minor fixes to automate MLPerf inference benchmark v2.0+ submissions.
Several improvements including the possibility to skip arbitrary dependencies from CK workflows
- fixed copyright note in the License file
- improved problem reporting in module:program
- important fix in "module:program", detected while preparing out-of-the-box MLPerf inference benchmarking: clean the tmp directory when running a CK workflow that doesn't include a compilation step!
- added the --remove_deps flag to module:program and module:env, as suggested by CK users, to remove selected dependencies from CK program workflows and thus use natively installed compilers, tools, libraries and other components - useful for debugging and testing. The flag takes a comma-separated list of dependency keys from a given program workflow.
- added the latest contributors to the CK project.
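The --remove_deps flag described above can be sketched as follows. The program name and the dependency keys are hypothetical placeholders; the actual keys come from the meta description of a given CK program workflow:

```shell
# Sketch of skipping dependencies in a CK program workflow; the program name
# "image-classification-tflite" and the keys "compiler,library-tflite" are
# illustrative assumptions, not real artifacts from this release.
if command -v ck >/dev/null 2>&1; then
  # Skip the listed dependencies so natively installed compilers, tools and
  # libraries are used instead of CK-managed ones:
  ck run program:image-classification-tflite --remove_deps=compiler,library-tflite
else
  # The CK framework is not installed here; command is shown for illustration.
  echo "ck CLI not installed; skipping demo"
fi
STATUS=ok
```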
Extra improvements for MLPerf inference
- added 'ck_html_end_note' key to customize CK result dashboard
- fixed Pareto frontier filter
- added "ck filter_2d math.frontier" for MLPerf inference