Releases: openvinotoolkit/openvino

2020.3.1 LTS

12 Nov 17:19
f26da46

What's New

  • This release provides bug fixes for the previous 2020.3 Long-Term Support (LTS) release, a new release type that provides longer-term maintenance and support with a focus on stability and compatibility. Read more about the support details: Long Term Support Release
  • Based on v.2020.3 LTS, the v.2020.3.1 LTS release includes security and functionality bug fixes, and minor capability changes.
  • Includes improved support for 11th Generation Intel® Core™ Processor (formerly codenamed Tiger Lake), which includes Intel® Iris® Xe Graphics and Intel® DL Boost instructions.
  • Intel® Distribution of OpenVINO™ toolkit 2020.3.X LTS releases will continue to support Intel® Vision Accelerator Design with an Intel® Arria® 10 FPGA and the Intel® Programmable Acceleration Card with Intel® Arria® 10 GX FPGA. For questions about next-generation programmable deep-learning solutions based on FPGAs, talk to your sales representative or contact us to get the latest FPGA updates.

You can find OpenVINO™ toolkit 2020.3.1 release here:

Release notes: https://software.intel.com/content/www/us/en/develop/articles/openvino-2020-3-lts-relnotes.html

2021.1

06 Oct 21:31
f557dca

What's New

  • Introducing a major release in October 2020 (v.2021). You are highly encouraged to upgrade to this version because it introduces new and important capabilities, as well as breaking, backward-incompatible changes.
  • Support for TensorFlow 2.2.x. Introduces official support for models trained in the TensorFlow 2.2.x framework.
  • Support for the Latest Hardware. Introduces official support for 11th Generation Intel® Core™ Processor Family for Internet of Things (IoT) Applications (formerly codenamed Tiger Lake) including new inference performance enhancements with Iris® Xe Graphics and Intel® DL Boost instructions, as well as Intel® Gaussian & Neural Accelerators 2.0 for low-power speech processing acceleration.
  • Going Beyond Vision. Enables end-to-end capabilities to leverage the Intel® Distribution of OpenVINO™ toolkit for workloads beyond computer vision, which include audio, speech, language, and recommendation, with new pre-trained models, support for public models, code samples and demos, and support for non-vision workloads in OpenVINO™ toolkit DL Streamer.
  • Coming in Q4 2020: (Beta Release) Integration of DL Workbench and the Intel® DevCloud for the Edge. Developers can now graphically analyze models using the DL Workbench on Intel® DevCloud for the Edge (instead of a local machine only) to compare, visualize and fine-tune a solution against multiple remote hardware configurations.
  • OpenVINO™ Model Server. An add-on to the Intel® Distribution of OpenVINO™ toolkit: a scalable microservice that provides a gRPC or HTTP/REST endpoint for inference, making it easier to deploy models in cloud or edge server environments. It is now implemented in C++, reducing the container footprint (for example, to less than 500 MB) and delivering higher throughput and lower latency.
  • Now available through Gitee* and PyPI* distribution channels. Choose the distribution method that best fits your workflow.
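The HTTP/REST inference endpoint mentioned in the Model Server bullet above follows the TensorFlow Serving REST API shape. A minimal sketch of building a predict request body — the model name, port, and input values below are illustrative placeholders, not taken from the release notes:

```python
import json

def make_predict_request(instances):
    """Build a TensorFlow-Serving-style predict payload, the REST body
    format OpenVINO Model Server accepts (input values are illustrative)."""
    return json.dumps({"instances": instances})

# POST this body to http://<host>:<port>/v1/models/<model_name>:predict
body = make_predict_request([[0.0, 1.0, 2.0]])
```

Under those assumptions, the body could be sent with, for example, `curl -d "$body" http://localhost:9000/v1/models/my_model:predict`.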

You can find OpenVINO™ toolkit 2021.1 release here:

Release notes: https://software.intel.com/content/www/us/en/develop/articles/openvino-relnotes.html

2020.4

14 Jul 17:29
023e7c2

What's New

  • Improves performance while maintaining accuracy close to full precision (for example, FP32 data type) by introducing support for the Bfloat16 data type for inferencing using the 3rd generation Intel® Xeon® Scalable processor (formerly code-named Cooper Lake).
  • Increases accuracy when layers have varying bit-widths by extending the Post-Training Optimization Tool to support mixed-precision quantization.
  • Allows greater compatibility of models by supporting direct reading of the Open Neural Network Exchange (ONNX*) model format into the Inference Engine.
    • Users looking to take full advantage of the Intel® Distribution of OpenVINO™ toolkit should still follow the native workflow: use the Intermediate Representation from the Model Optimizer as input to the Inference Engine.
    • Users with a model already converted to the ONNX format (for example, PyTorch to ONNX using torch.onnx) can now feed the ONNX file directly to the Inference Engine to run models on Intel architecture.
  • Enables initial support for TensorFlow* 2.2.0 for computer vision use cases.
  • Extends the Deep Learning Workbench with remote profiling: users can connect to and profile multiple remote hosts, then collect and store the data in one place for further analysis.
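The direct ONNX reading described above can be sketched as follows. This is a hedged illustration assuming the 2020.4-era Python API (`IECore.read_network`); the helper function and `"model.onnx"` path are hypothetical, and `core` is any IECore-like object:

```python
def read_model(core, model_path):
    """Read a network from disk. Before 2020.4 this required IR files
    (.xml/.bin) produced by the Model Optimizer; since 2020.4 the path
    may point directly at an .onnx file. `core` is an IECore-like
    object (passed in so this sketch stays self-contained)."""
    return core.read_network(model=model_path)

# Typical (assumed) usage with the real API:
#   from openvino.inference_engine import IECore
#   ie = IECore()
#   net = read_model(ie, "model.onnx")
#   exec_net = ie.load_network(network=net, device_name="CPU")
```

The design point is that the same `read_network` entry point handles both IR and ONNX inputs, so existing loading code needs only a different file path.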

You can find OpenVINO™ toolkit 2020.4 release here:

Release notes: https://software.intel.com/content/www/us/en/develop/articles/openvino-relnotes.html

2020.3 LTS

03 Jun 17:11

You can find OpenVINO™ toolkit 2020.3.0 release here:

Release notes: https://software.intel.com/content/www/us/en/develop/articles/openvino-2020-3-lts-relnotes.html

2020.2

13 Apr 19:21

You can find OpenVINO™ toolkit 2020.2 release here:

Release notes: https://software.intel.com/en-us/articles/OpenVINO-RelNotes

2020.1

12 Feb 12:50

You can find OpenVINO™ toolkit 2020.1 release here:

Release notes: https://software.intel.com/en-us/articles/OpenVINO-RelNotes

2019 R3.1

28 Oct 18:37
fe3f978

2019 R3

09 Oct 11:37
1c794d9

2019 R2

09 Aug 16:26

2019_R1.1

27 May 18:24
Publishing 2019 R1.1 content and Myriad plugin sources (#162)