diff --git a/assets/hub/datvuthanh_hybridnets.ipynb b/assets/hub/datvuthanh_hybridnets.ipynb
index 21079390cd80..00c86f24aef0 100644
--- a/assets/hub/datvuthanh_hybridnets.ipynb
+++ b/assets/hub/datvuthanh_hybridnets.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "1a23e4fe",
+ "id": "78ef6f65",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -24,7 +24,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "fa3dbd9b",
+ "id": "14ca72d6",
"metadata": {},
"outputs": [],
"source": [
@@ -34,7 +34,7 @@
},
{
"cell_type": "markdown",
- "id": "c5bd408d",
+ "id": "bf4b68e3",
"metadata": {},
"source": [
"## Model Description\n",
@@ -93,7 +93,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "17c495ce",
+ "id": "ca53724e",
"metadata": {},
"outputs": [],
"source": [
@@ -109,7 +109,7 @@
},
{
"cell_type": "markdown",
- "id": "8bc1f000",
+ "id": "83314455",
"metadata": {},
"source": [
"### Citation\n",
@@ -120,7 +120,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "1a38dc41",
+ "id": "15b8f38a",
"metadata": {
"attributes": {
"classes": [
diff --git a/assets/hub/facebookresearch_WSL-Images_resnext.ipynb b/assets/hub/facebookresearch_WSL-Images_resnext.ipynb
index 12666d4828bb..741f261f6bbb 100644
--- a/assets/hub/facebookresearch_WSL-Images_resnext.ipynb
+++ b/assets/hub/facebookresearch_WSL-Images_resnext.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "f4d2674d",
+ "id": "96811b60",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "80bfff34",
+ "id": "151ce189",
"metadata": {},
"outputs": [],
"source": [
@@ -39,7 +39,7 @@
},
{
"cell_type": "markdown",
- "id": "9b9fe53f",
+ "id": "98fddff0",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -53,7 +53,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "9117ed40",
+ "id": "6ea05ab4",
"metadata": {},
"outputs": [],
"source": [
@@ -67,7 +67,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "08069887",
+ "id": "01cef70e",
"metadata": {},
"outputs": [],
"source": [
@@ -99,7 +99,7 @@
},
{
"cell_type": "markdown",
- "id": "263df839",
+ "id": "fb70ab0d",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/facebookresearch_pytorch-gan-zoo_dcgan.ipynb b/assets/hub/facebookresearch_pytorch-gan-zoo_dcgan.ipynb
index 8691cc2725ba..598290025cfe 100644
--- a/assets/hub/facebookresearch_pytorch-gan-zoo_dcgan.ipynb
+++ b/assets/hub/facebookresearch_pytorch-gan-zoo_dcgan.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "a948b179",
+ "id": "d73c061b",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "c7eb1820",
+ "id": "0c545315",
"metadata": {},
"outputs": [],
"source": [
@@ -34,7 +34,7 @@
},
{
"cell_type": "markdown",
- "id": "ca2a2ff1",
+ "id": "e1db3156",
"metadata": {},
"source": [
"The input to the model is a noise vector of shape `(N, 120)` where `N` is the number of images to be generated.\n",
@@ -45,7 +45,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "d26cd592",
+ "id": "2c8444cd",
"metadata": {},
"outputs": [],
"source": [
@@ -63,7 +63,7 @@
},
{
"cell_type": "markdown",
- "id": "737b158e",
+ "id": "7c8e6840",
"metadata": {},
"source": [
"You should see an image similar to the one on the left.\n",
diff --git a/assets/hub/facebookresearch_pytorch-gan-zoo_pgan.ipynb b/assets/hub/facebookresearch_pytorch-gan-zoo_pgan.ipynb
index f0525a30aaad..9a5d521679b6 100644
--- a/assets/hub/facebookresearch_pytorch-gan-zoo_pgan.ipynb
+++ b/assets/hub/facebookresearch_pytorch-gan-zoo_pgan.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "59201b85",
+ "id": "64d946f7",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -24,7 +24,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "fffb1be6",
+ "id": "aa4f705d",
"metadata": {},
"outputs": [],
"source": [
@@ -44,7 +44,7 @@
},
{
"cell_type": "markdown",
- "id": "c49a6dd3",
+ "id": "93378e01",
"metadata": {},
"source": [
"The input to the model is a noise vector of shape `(N, 512)` where `N` is the number of images to be generated.\n",
@@ -55,7 +55,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "baca3fe4",
+ "id": "797a4471",
"metadata": {},
"outputs": [],
"source": [
@@ -74,7 +74,7 @@
},
{
"cell_type": "markdown",
- "id": "8eb63816",
+ "id": "f1c93db5",
"metadata": {},
"source": [
"You should see an image similar to the one on the left.\n",
diff --git a/assets/hub/facebookresearch_pytorchvideo_resnet.ipynb b/assets/hub/facebookresearch_pytorchvideo_resnet.ipynb
index a13cd01d0eb0..b3623d00a6d6 100644
--- a/assets/hub/facebookresearch_pytorchvideo_resnet.ipynb
+++ b/assets/hub/facebookresearch_pytorchvideo_resnet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "da7aa6a5",
+ "id": "1e4ffd92",
"metadata": {},
"source": [
"# 3D ResNet\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "fd7f021b",
+ "id": "7b063b4c",
"metadata": {},
"outputs": [],
"source": [
@@ -33,7 +33,7 @@
},
{
"cell_type": "markdown",
- "id": "668d4445",
+ "id": "a7f12d3c",
"metadata": {},
"source": [
"Import remaining functions:"
@@ -42,7 +42,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "d082be6b",
+ "id": "44b70ae2",
"metadata": {},
"outputs": [],
"source": [
@@ -64,7 +64,7 @@
},
{
"cell_type": "markdown",
- "id": "d7bd09d9",
+ "id": "41044408",
"metadata": {},
"source": [
"#### Setup\n",
@@ -75,7 +75,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "847bc6b5",
+ "id": "d1d16728",
"metadata": {
"attributes": {
"classes": [
@@ -94,7 +94,7 @@
},
{
"cell_type": "markdown",
- "id": "2e7545d2",
+ "id": "8383c28d",
"metadata": {},
"source": [
"Download the id to label mapping for the Kinetics 400 dataset on which the torch hub models were trained. This will be used to get the category label names from the predicted class ids."
@@ -103,7 +103,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "3a946380",
+ "id": "013a9644",
"metadata": {},
"outputs": [],
"source": [
@@ -116,7 +116,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "8a572111",
+ "id": "c30bdd3c",
"metadata": {},
"outputs": [],
"source": [
@@ -131,7 +131,7 @@
},
{
"cell_type": "markdown",
- "id": "1051e7da",
+ "id": "eac707f1",
"metadata": {},
"source": [
"#### Define input transform"
@@ -140,7 +140,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "aad028fe",
+ "id": "e4eb1ae6",
"metadata": {},
"outputs": [],
"source": [
@@ -174,7 +174,7 @@
},
{
"cell_type": "markdown",
- "id": "4ae67591",
+ "id": "be47c2ac",
"metadata": {},
"source": [
"#### Run Inference\n",
@@ -185,7 +185,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "69c94af8",
+ "id": "1e7c1b5d",
"metadata": {},
"outputs": [],
"source": [
@@ -197,7 +197,7 @@
},
{
"cell_type": "markdown",
- "id": "25909f02",
+ "id": "01bfdda1",
"metadata": {},
"source": [
"Load the video and transform it to the input format required by the model."
@@ -206,7 +206,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "79529928",
+ "id": "1ad6e673",
"metadata": {},
"outputs": [],
"source": [
@@ -231,7 +231,7 @@
},
{
"cell_type": "markdown",
- "id": "09db3efa",
+ "id": "8df56164",
"metadata": {},
"source": [
"#### Get Predictions"
@@ -240,7 +240,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "3ea2dc87",
+ "id": "06fa83b6",
"metadata": {},
"outputs": [],
"source": [
@@ -259,7 +259,7 @@
},
{
"cell_type": "markdown",
- "id": "2c4edde3",
+ "id": "aa6a52b5",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/facebookresearch_pytorchvideo_slowfast.ipynb b/assets/hub/facebookresearch_pytorchvideo_slowfast.ipynb
index 5eb89d4fb9f4..f08cb820c9a3 100644
--- a/assets/hub/facebookresearch_pytorchvideo_slowfast.ipynb
+++ b/assets/hub/facebookresearch_pytorchvideo_slowfast.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "369b9dc8",
+ "id": "7f89a5d1",
"metadata": {},
"source": [
"# SlowFast\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "f30497f1",
+ "id": "76a23419",
"metadata": {},
"outputs": [],
"source": [
@@ -33,7 +33,7 @@
},
{
"cell_type": "markdown",
- "id": "9848c666",
+ "id": "2d7cdf13",
"metadata": {},
"source": [
"Import remaining functions:"
@@ -42,7 +42,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "2c4d7f93",
+ "id": "a7b840b7",
"metadata": {},
"outputs": [],
"source": [
@@ -65,7 +65,7 @@
},
{
"cell_type": "markdown",
- "id": "6580e3d8",
+ "id": "ec699d36",
"metadata": {},
"source": [
"#### Setup\n",
@@ -76,7 +76,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "e7a65bb3",
+ "id": "2a01095e",
"metadata": {
"attributes": {
"classes": [
@@ -95,7 +95,7 @@
},
{
"cell_type": "markdown",
- "id": "a59326f5",
+ "id": "492cf411",
"metadata": {},
"source": [
"Download the id to label mapping for the Kinetics 400 dataset on which the torch hub models were trained. This will be used to get the category label names from the predicted class ids."
@@ -104,7 +104,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "96e6c180",
+ "id": "49754224",
"metadata": {},
"outputs": [],
"source": [
@@ -117,7 +117,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "5c9ffc30",
+ "id": "f38ab730",
"metadata": {},
"outputs": [],
"source": [
@@ -132,7 +132,7 @@
},
{
"cell_type": "markdown",
- "id": "25b7b5ad",
+ "id": "6ce5e7c6",
"metadata": {},
"source": [
"#### Define input transform"
@@ -141,7 +141,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "7d138601",
+ "id": "f26b67eb",
"metadata": {},
"outputs": [],
"source": [
@@ -198,7 +198,7 @@
},
{
"cell_type": "markdown",
- "id": "29176bdd",
+ "id": "a146afb0",
"metadata": {},
"source": [
"#### Run Inference\n",
@@ -209,7 +209,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "eebbdbfb",
+ "id": "2442d76d",
"metadata": {},
"outputs": [],
"source": [
@@ -221,7 +221,7 @@
},
{
"cell_type": "markdown",
- "id": "5967489f",
+ "id": "b1b99cb8",
"metadata": {},
"source": [
"Load the video and transform it to the input format required by the model."
@@ -230,7 +230,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "49386520",
+ "id": "c73f13d1",
"metadata": {},
"outputs": [],
"source": [
@@ -255,7 +255,7 @@
},
{
"cell_type": "markdown",
- "id": "7a4bd371",
+ "id": "19c57afa",
"metadata": {},
"source": [
"#### Get Predictions"
@@ -264,7 +264,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "f650a220",
+ "id": "422fdd97",
"metadata": {},
"outputs": [],
"source": [
@@ -283,7 +283,7 @@
},
{
"cell_type": "markdown",
- "id": "6213b363",
+ "id": "5c07dd08",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/facebookresearch_pytorchvideo_x3d.ipynb b/assets/hub/facebookresearch_pytorchvideo_x3d.ipynb
index 0dd11ddaf5a2..3e0e40066b8f 100644
--- a/assets/hub/facebookresearch_pytorchvideo_x3d.ipynb
+++ b/assets/hub/facebookresearch_pytorchvideo_x3d.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "f86298b7",
+ "id": "cd62969f",
"metadata": {},
"source": [
"# X3D\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "9f591e8f",
+ "id": "305646d6",
"metadata": {},
"outputs": [],
"source": [
@@ -34,7 +34,7 @@
},
{
"cell_type": "markdown",
- "id": "e51ce828",
+ "id": "9dd9c4e1",
"metadata": {},
"source": [
"Import remaining functions:"
@@ -43,7 +43,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "2d9e8178",
+ "id": "492cecca",
"metadata": {},
"outputs": [],
"source": [
@@ -65,7 +65,7 @@
},
{
"cell_type": "markdown",
- "id": "bdd55880",
+ "id": "e39b3a6a",
"metadata": {},
"source": [
"#### Setup\n",
@@ -76,7 +76,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "9845bab5",
+ "id": "19c7e815",
"metadata": {},
"outputs": [],
"source": [
@@ -88,7 +88,7 @@
},
{
"cell_type": "markdown",
- "id": "4fd6c982",
+ "id": "bb61b91f",
"metadata": {},
"source": [
"Download the id to label mapping for the Kinetics 400 dataset on which the torch hub models were trained. This will be used to get the category label names from the predicted class ids."
@@ -97,7 +97,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "dcd17279",
+ "id": "64245d8d",
"metadata": {},
"outputs": [],
"source": [
@@ -110,7 +110,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "3d9a9e96",
+ "id": "71a06ce3",
"metadata": {},
"outputs": [],
"source": [
@@ -125,7 +125,7 @@
},
{
"cell_type": "markdown",
- "id": "9105c8f8",
+ "id": "3e94c13b",
"metadata": {},
"source": [
"#### Define input transform"
@@ -134,7 +134,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "47dc3290",
+ "id": "8c40d050",
"metadata": {},
"outputs": [],
"source": [
@@ -187,7 +187,7 @@
},
{
"cell_type": "markdown",
- "id": "edfececa",
+ "id": "78ad5e70",
"metadata": {},
"source": [
"#### Run Inference\n",
@@ -198,7 +198,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "f016d2fd",
+ "id": "fd021919",
"metadata": {},
"outputs": [],
"source": [
@@ -210,7 +210,7 @@
},
{
"cell_type": "markdown",
- "id": "316b58d5",
+ "id": "adc03f1b",
"metadata": {},
"source": [
"Load the video and transform it to the input format required by the model."
@@ -219,7 +219,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "ce097d7f",
+ "id": "a0b446e8",
"metadata": {},
"outputs": [],
"source": [
@@ -244,7 +244,7 @@
},
{
"cell_type": "markdown",
- "id": "d69d6986",
+ "id": "256da97e",
"metadata": {},
"source": [
"#### Get Predictions"
@@ -253,7 +253,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "e9466eb2",
+ "id": "20c12ebc",
"metadata": {},
"outputs": [],
"source": [
@@ -272,7 +272,7 @@
},
{
"cell_type": "markdown",
- "id": "f1b9fcbc",
+ "id": "db26ca68",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/facebookresearch_semi-supervised-ImageNet1K-models_resnext.ipynb b/assets/hub/facebookresearch_semi-supervised-ImageNet1K-models_resnext.ipynb
index 99d54e544ee1..e6ed767e5361 100644
--- a/assets/hub/facebookresearch_semi-supervised-ImageNet1K-models_resnext.ipynb
+++ b/assets/hub/facebookresearch_semi-supervised-ImageNet1K-models_resnext.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "b4583ef8",
+ "id": "9e0c7f8a",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "ab94b1f7",
+ "id": "cd6d4992",
"metadata": {},
"outputs": [],
"source": [
@@ -47,7 +47,7 @@
},
{
"cell_type": "markdown",
- "id": "98cb306e",
+ "id": "8beacd30",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -61,7 +61,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "ade1233a",
+ "id": "8670302d",
"metadata": {},
"outputs": [],
"source": [
@@ -75,7 +75,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "df59558a",
+ "id": "680b2ab1",
"metadata": {},
"outputs": [],
"source": [
@@ -107,7 +107,7 @@
},
{
"cell_type": "markdown",
- "id": "9acbc34c",
+ "id": "df77a33f",
"metadata": {},
"source": [
"### Model Description\n",
@@ -144,7 +144,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "bbb599eb",
+ "id": "664b222b",
"metadata": {},
"outputs": [],
"source": [
diff --git a/assets/hub/huggingface_pytorch-transformers.ipynb b/assets/hub/huggingface_pytorch-transformers.ipynb
index b8058317a865..eb4d72bb4ff0 100644
--- a/assets/hub/huggingface_pytorch-transformers.ipynb
+++ b/assets/hub/huggingface_pytorch-transformers.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "7028442a",
+ "id": "9b827bbf",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -43,7 +43,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "cc03e50f",
+ "id": "303adbc5",
"metadata": {},
"outputs": [],
"source": [
@@ -53,7 +53,7 @@
},
{
"cell_type": "markdown",
- "id": "8bd7008d",
+ "id": "e480f886",
"metadata": {},
"source": [
"# Usage\n",
@@ -86,7 +86,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "3ae6e31a",
+ "id": "643e20f2",
"metadata": {
"attributes": {
"classes": [
@@ -104,7 +104,7 @@
},
{
"cell_type": "markdown",
- "id": "18460eb6",
+ "id": "669673c4",
"metadata": {},
"source": [
"## Models\n",
@@ -115,7 +115,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "0720e3c1",
+ "id": "2502dfd3",
"metadata": {
"attributes": {
"classes": [
@@ -138,7 +138,7 @@
},
{
"cell_type": "markdown",
- "id": "e25b3b2f",
+ "id": "5b612f51",
"metadata": {},
"source": [
"## Models with a language modeling head\n",
@@ -149,7 +149,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "33f4b198",
+ "id": "b02046df",
"metadata": {
"attributes": {
"classes": [
@@ -172,7 +172,7 @@
},
{
"cell_type": "markdown",
- "id": "1baa6640",
+ "id": "c625844d",
"metadata": {},
"source": [
"## Models with a sequence classification head\n",
@@ -183,7 +183,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "b0883fb5",
+ "id": "cef2d2a5",
"metadata": {
"attributes": {
"classes": [
@@ -206,7 +206,7 @@
},
{
"cell_type": "markdown",
- "id": "46f36af4",
+ "id": "bb33ca6e",
"metadata": {},
"source": [
"## Models with a question answering head\n",
@@ -217,7 +217,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "0b106520",
+ "id": "4000752f",
"metadata": {
"attributes": {
"classes": [
@@ -240,7 +240,7 @@
},
{
"cell_type": "markdown",
- "id": "33172d60",
+ "id": "7077c3dc",
"metadata": {},
"source": [
"## Configuration\n",
@@ -251,7 +251,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "d7f0443b",
+ "id": "daa97e30",
"metadata": {
"attributes": {
"classes": [
@@ -282,7 +282,7 @@
},
{
"cell_type": "markdown",
- "id": "aa7f81fb",
+ "id": "cefa4222",
"metadata": {},
"source": [
"# Example Usage\n",
@@ -295,7 +295,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "b36f7a88",
+ "id": "a25555fb",
"metadata": {},
"outputs": [],
"source": [
@@ -311,7 +311,7 @@
},
{
"cell_type": "markdown",
- "id": "b9c67018",
+ "id": "9da99916",
"metadata": {},
"source": [
"## Using `BertModel` to encode the input sentence in a sequence of last layer hidden-states"
@@ -320,7 +320,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "dc5b7b02",
+ "id": "a1bc32d1",
"metadata": {},
"outputs": [],
"source": [
@@ -339,7 +339,7 @@
},
{
"cell_type": "markdown",
- "id": "dbdf2907",
+ "id": "32034a09",
"metadata": {},
"source": [
"## Using `modelForMaskedLM` to predict a masked token with BERT"
@@ -348,7 +348,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "42b13425",
+ "id": "e06385bf",
"metadata": {},
"outputs": [],
"source": [
@@ -370,7 +370,7 @@
},
{
"cell_type": "markdown",
- "id": "620983e9",
+ "id": "6165c587",
"metadata": {},
"source": [
"## Using `modelForQuestionAnswering` to do question answering with BERT"
@@ -379,7 +379,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "2521c4d1",
+ "id": "856cb80d",
"metadata": {},
"outputs": [],
"source": [
@@ -409,7 +409,7 @@
},
{
"cell_type": "markdown",
- "id": "e26274f1",
+ "id": "c1016402",
"metadata": {},
"source": [
"## Using `modelForSequenceClassification` to do paraphrase classification with BERT"
@@ -418,7 +418,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "a2c9644c",
+ "id": "eb06619f",
"metadata": {},
"outputs": [],
"source": [
diff --git a/assets/hub/hustvl_yolop.ipynb b/assets/hub/hustvl_yolop.ipynb
index f0137bc37429..dea911087aaa 100644
--- a/assets/hub/hustvl_yolop.ipynb
+++ b/assets/hub/hustvl_yolop.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "accb5f3d",
+ "id": "0ca45a05",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -23,7 +23,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "3a85bd34",
+ "id": "259ec082",
"metadata": {},
"outputs": [],
"source": [
@@ -33,7 +33,7 @@
},
{
"cell_type": "markdown",
- "id": "428e60a8",
+ "id": "cff06889",
"metadata": {},
"source": [
"## YOLOP: You Only Look Once for Panoptic driving Perception\n",
@@ -132,7 +132,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "73566b2a",
+ "id": "6edec57f",
"metadata": {},
"outputs": [],
"source": [
@@ -148,7 +148,7 @@
},
{
"cell_type": "markdown",
- "id": "f9338d7c",
+ "id": "b7a6728d",
"metadata": {},
"source": [
"### Citation\n",
diff --git a/assets/hub/intelisl_midas_v2.ipynb b/assets/hub/intelisl_midas_v2.ipynb
index 148272b279a5..e346b5caa0c3 100644
--- a/assets/hub/intelisl_midas_v2.ipynb
+++ b/assets/hub/intelisl_midas_v2.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "1b31d734",
+ "id": "0ca15c14",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -32,7 +32,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "a2df664b",
+ "id": "ec7b4b3a",
"metadata": {
"attributes": {
"classes": [
@@ -48,7 +48,7 @@
},
{
"cell_type": "markdown",
- "id": "553c5b35",
+ "id": "574a2a3e",
"metadata": {},
"source": [
"### Example Usage\n",
@@ -59,7 +59,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "69b18e3c",
+ "id": "c55e126e",
"metadata": {},
"outputs": [],
"source": [
@@ -75,7 +75,7 @@
},
{
"cell_type": "markdown",
- "id": "56adc982",
+ "id": "b7f25cb1",
"metadata": {},
"source": [
"Load a model (see [https://github.com/intel-isl/MiDaS/#Accuracy](https://github.com/intel-isl/MiDaS/#Accuracy) for an overview)"
@@ -84,7 +84,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "bfc1e9fc",
+ "id": "073192f9",
"metadata": {},
"outputs": [],
"source": [
@@ -97,7 +97,7 @@
},
{
"cell_type": "markdown",
- "id": "631d5a8e",
+ "id": "2a15a992",
"metadata": {},
"source": [
"Move model to GPU if available"
@@ -106,7 +106,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "9d983f3d",
+ "id": "3a057738",
"metadata": {},
"outputs": [],
"source": [
@@ -117,7 +117,7 @@
},
{
"cell_type": "markdown",
- "id": "2ee860f0",
+ "id": "d9852dc0",
"metadata": {},
"source": [
"Load transforms to resize and normalize the image for large or small model"
@@ -126,7 +126,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "22e2be03",
+ "id": "2e3c78c9",
"metadata": {},
"outputs": [],
"source": [
@@ -140,7 +140,7 @@
},
{
"cell_type": "markdown",
- "id": "65f992c9",
+ "id": "d838b808",
"metadata": {},
"source": [
"Load image and apply transforms"
@@ -149,7 +149,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "4017cb94",
+ "id": "e477aa09",
"metadata": {},
"outputs": [],
"source": [
@@ -161,7 +161,7 @@
},
{
"cell_type": "markdown",
- "id": "7df48639",
+ "id": "59c054f0",
"metadata": {},
"source": [
"Predict and resize to original resolution"
@@ -170,7 +170,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "6ea19a5d",
+ "id": "3784b95e",
"metadata": {},
"outputs": [],
"source": [
@@ -189,7 +189,7 @@
},
{
"cell_type": "markdown",
- "id": "dc0501c4",
+ "id": "239a9db7",
"metadata": {},
"source": [
"Show result"
@@ -198,7 +198,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "7a8a80a8",
+ "id": "6977b5d4",
"metadata": {},
"outputs": [],
"source": [
@@ -208,7 +208,7 @@
},
{
"cell_type": "markdown",
- "id": "f4463278",
+ "id": "1d9563a9",
"metadata": {},
"source": [
"### References\n",
@@ -222,7 +222,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "dff06581",
+ "id": "96fa2fc6",
"metadata": {
"attributes": {
"classes": [
@@ -244,7 +244,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "a374df73",
+ "id": "b9484d84",
"metadata": {
"attributes": {
"classes": [
diff --git a/assets/hub/mateuszbuda_brain-segmentation-pytorch_unet.ipynb b/assets/hub/mateuszbuda_brain-segmentation-pytorch_unet.ipynb
index 6b77c64ece1f..ce6e3b6a94e7 100644
--- a/assets/hub/mateuszbuda_brain-segmentation-pytorch_unet.ipynb
+++ b/assets/hub/mateuszbuda_brain-segmentation-pytorch_unet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "92ce7789",
+ "id": "42a01a9d",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "74a4458a",
+ "id": "de49234d",
"metadata": {},
"outputs": [],
"source": [
@@ -33,7 +33,7 @@
},
{
"cell_type": "markdown",
- "id": "330d2139",
+ "id": "a814f27f",
"metadata": {},
"source": [
"Loads a U-Net model pre-trained for abnormality segmentation on a dataset of brain MRI volumes [kaggle.com/mateuszbuda/lgg-mri-segmentation](https://www.kaggle.com/mateuszbuda/lgg-mri-segmentation)\n",
@@ -57,7 +57,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "10abff03",
+ "id": "da439230",
"metadata": {},
"outputs": [],
"source": [
@@ -71,7 +71,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "58b06fb0",
+ "id": "ce669ff4",
"metadata": {},
"outputs": [],
"source": [
@@ -100,7 +100,7 @@
},
{
"cell_type": "markdown",
- "id": "0b81da23",
+ "id": "bab671fe",
"metadata": {},
"source": [
"### References\n",
diff --git a/assets/hub/nicolalandro_ntsnet-cub200_ntsnet.ipynb b/assets/hub/nicolalandro_ntsnet-cub200_ntsnet.ipynb
index 9f08fc0cb48d..a95a2343287e 100644
--- a/assets/hub/nicolalandro_ntsnet-cub200_ntsnet.ipynb
+++ b/assets/hub/nicolalandro_ntsnet-cub200_ntsnet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "7003fd9f",
+ "id": "2ef7362f",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "89ff606f",
+ "id": "1a2e681b",
"metadata": {},
"outputs": [],
"source": [
@@ -33,7 +33,7 @@
},
{
"cell_type": "markdown",
- "id": "97db4cbb",
+ "id": "6feeaa26",
"metadata": {},
"source": [
"### Example Usage"
@@ -42,7 +42,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "17800a55",
+ "id": "73a68f3d",
"metadata": {},
"outputs": [],
"source": [
@@ -78,7 +78,7 @@
},
{
"cell_type": "markdown",
- "id": "33a9cb23",
+ "id": "a4ebfc3f",
"metadata": {},
"source": [
"### Model Description\n",
@@ -91,7 +91,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "a9a1b9ac",
+ "id": "87244d31",
"metadata": {
"attributes": {
"classes": [
diff --git a/assets/hub/nvidia_deeplearningexamples_efficientnet.ipynb b/assets/hub/nvidia_deeplearningexamples_efficientnet.ipynb
index 71eb4c55803a..0276772b492d 100644
--- a/assets/hub/nvidia_deeplearningexamples_efficientnet.ipynb
+++ b/assets/hub/nvidia_deeplearningexamples_efficientnet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "41d9852d",
+ "id": "3f0b311f",
"metadata": {},
"source": [
"### This notebook requires a GPU runtime to run.\n",
@@ -42,7 +42,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "37b531b6",
+ "id": "2ec8ad80",
"metadata": {},
"outputs": [],
"source": [
@@ -52,7 +52,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "6ba833e0",
+ "id": "b4b1d16b",
"metadata": {},
"outputs": [],
"source": [
@@ -73,7 +73,7 @@
},
{
"cell_type": "markdown",
- "id": "70e2eabe",
+ "id": "5bf154cc",
"metadata": {},
"source": [
"Load the model pretrained on ImageNet dataset.\n",
@@ -93,7 +93,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "5d648380",
+ "id": "c10c0e59",
"metadata": {},
"outputs": [],
"source": [
@@ -105,7 +105,7 @@
},
{
"cell_type": "markdown",
- "id": "b00e0455",
+ "id": "2029bc18",
"metadata": {},
"source": [
"Prepare sample input data."
@@ -114,7 +114,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "a39647cd",
+ "id": "382f65b6",
"metadata": {},
"outputs": [],
"source": [
@@ -132,7 +132,7 @@
},
{
"cell_type": "markdown",
- "id": "c09b1b82",
+ "id": "c2e836a2",
"metadata": {},
"source": [
"Run inference. Use `pick_n_best(predictions=output, n=topN)` helper function to pick N most probable hypotheses according to the model."
@@ -141,7 +141,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "97cfbdb7",
+ "id": "31108d42",
"metadata": {},
"outputs": [],
"source": [
@@ -153,7 +153,7 @@
},
{
"cell_type": "markdown",
- "id": "ca0bec3d",
+ "id": "87b87bfe",
"metadata": {},
"source": [
"Display the result."
@@ -162,7 +162,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "2764d9df",
+ "id": "b24a2151",
"metadata": {},
"outputs": [],
"source": [
@@ -176,7 +176,7 @@
},
{
"cell_type": "markdown",
- "id": "e265d7d3",
+ "id": "681c2b3a",
"metadata": {},
"source": [
"### Details\n",
diff --git a/assets/hub/nvidia_deeplearningexamples_fastpitch.ipynb b/assets/hub/nvidia_deeplearningexamples_fastpitch.ipynb
index 3b16edf17b52..fe6602be856c 100644
--- a/assets/hub/nvidia_deeplearningexamples_fastpitch.ipynb
+++ b/assets/hub/nvidia_deeplearningexamples_fastpitch.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "3aa96ed9",
+ "id": "6f1cc87b",
"metadata": {},
"source": [
"### This notebook requires a GPU runtime to run.\n",
@@ -51,7 +51,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "35429e35",
+ "id": "b4efa58c",
"metadata": {},
"outputs": [],
"source": [
@@ -66,7 +66,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "b0998a1a",
+ "id": "7d5db952",
"metadata": {},
"outputs": [],
"source": [
@@ -82,7 +82,7 @@
},
{
"cell_type": "markdown",
- "id": "65a6ca60",
+ "id": "147645fe",
"metadata": {},
"source": [
"Download and setup FastPitch generator model."
@@ -91,7 +91,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "a58ad6fe",
+ "id": "01146659",
"metadata": {},
"outputs": [],
"source": [
@@ -100,7 +100,7 @@
},
{
"cell_type": "markdown",
- "id": "11108145",
+ "id": "bd50b9c7",
"metadata": {},
"source": [
"Download and setup vocoder and denoiser models."
@@ -109,7 +109,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "c5359c2a",
+ "id": "dd54d534",
"metadata": {},
"outputs": [],
"source": [
@@ -118,7 +118,7 @@
},
{
"cell_type": "markdown",
- "id": "1d48ad22",
+ "id": "c86d6d34",
"metadata": {},
"source": [
"Verify that generator and vocoder models agree on input parameters."
@@ -127,7 +127,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "6cc473a5",
+ "id": "c5d52a68",
"metadata": {},
"outputs": [],
"source": [
@@ -147,7 +147,7 @@
},
{
"cell_type": "markdown",
- "id": "534c5a2c",
+ "id": "292fc283",
"metadata": {},
"source": [
"Put all models on available device."
@@ -156,7 +156,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "8de3ecad",
+ "id": "ac61dec1",
"metadata": {},
"outputs": [],
"source": [
@@ -167,7 +167,7 @@
},
{
"cell_type": "markdown",
- "id": "a6bda59b",
+ "id": "d444233c",
"metadata": {},
"source": [
"Load text processor."
@@ -176,7 +176,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "dbf6677e",
+ "id": "58476bad",
"metadata": {},
"outputs": [],
"source": [
@@ -185,7 +185,7 @@
},
{
"cell_type": "markdown",
- "id": "2110febe",
+ "id": "412608c4",
"metadata": {},
"source": [
"Set the text to be synthetized, prepare input and set additional generation parameters."
@@ -194,7 +194,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "79d7d94d",
+ "id": "f227fbc0",
"metadata": {},
"outputs": [],
"source": [
@@ -204,7 +204,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "81aca6c3",
+ "id": "2823104b",
"metadata": {},
"outputs": [],
"source": [
@@ -214,7 +214,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "2e3fd969",
+ "id": "8f8dcd38",
"metadata": {},
"outputs": [],
"source": [
@@ -228,7 +228,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "77c776ad",
+ "id": "b3cf00f1",
"metadata": {},
"outputs": [],
"source": [
@@ -242,7 +242,7 @@
},
{
"cell_type": "markdown",
- "id": "969af82a",
+ "id": "6b7c8180",
"metadata": {},
"source": [
"Plot the intermediate spectrogram."
@@ -251,7 +251,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "5951c48f",
+ "id": "b2c5877d",
"metadata": {},
"outputs": [],
"source": [
@@ -265,7 +265,7 @@
},
{
"cell_type": "markdown",
- "id": "def1907c",
+ "id": "6d9ad78d",
"metadata": {},
"source": [
"Synthesize audio."
@@ -274,7 +274,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "6c80e883",
+ "id": "0f7267bd",
"metadata": {},
"outputs": [],
"source": [
@@ -284,7 +284,7 @@
},
{
"cell_type": "markdown",
- "id": "48cb4c27",
+ "id": "4b4a2613",
"metadata": {},
"source": [
"Write audio to wav file."
@@ -293,7 +293,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "1057432b",
+ "id": "f4fdd41d",
"metadata": {},
"outputs": [],
"source": [
@@ -303,7 +303,7 @@
},
{
"cell_type": "markdown",
- "id": "2a8f1730",
+ "id": "ee5e7ba3",
"metadata": {},
"source": [
"### Details\n",
diff --git a/assets/hub/nvidia_deeplearningexamples_gpunet.ipynb b/assets/hub/nvidia_deeplearningexamples_gpunet.ipynb
index 4a6ef47fa4ec..af834c3468d7 100644
--- a/assets/hub/nvidia_deeplearningexamples_gpunet.ipynb
+++ b/assets/hub/nvidia_deeplearningexamples_gpunet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "992eff33",
+ "id": "39b99f1f",
"metadata": {},
"source": [
"### This notebook requires a GPU runtime to run.\n",
@@ -34,7 +34,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "5dde860c",
+ "id": "d71b9d78",
"metadata": {},
"outputs": [],
"source": [
@@ -45,7 +45,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "9fada04e",
+ "id": "82111960",
"metadata": {},
"outputs": [],
"source": [
@@ -73,7 +73,7 @@
},
{
"cell_type": "markdown",
- "id": "69badccf",
+ "id": "3358dfcb",
"metadata": {},
"source": [
"### Load Pretrained model\n",
@@ -97,7 +97,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "76e8a0b6",
+ "id": "97161364",
"metadata": {},
"outputs": [],
"source": [
@@ -113,7 +113,7 @@
},
{
"cell_type": "markdown",
- "id": "2a3f892b",
+ "id": "6bdfa3a9",
"metadata": {},
"source": [
"### Prepare inference data\n",
@@ -123,7 +123,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "0613d4ad",
+ "id": "da5e3ea6",
"metadata": {},
"outputs": [],
"source": [
@@ -146,7 +146,7 @@
},
{
"cell_type": "markdown",
- "id": "d6d6c237",
+ "id": "3e86ed67",
"metadata": {},
"source": [
"### Run inference\n",
@@ -156,7 +156,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "d4f5a916",
+ "id": "b905c895",
"metadata": {},
"outputs": [],
"source": [
@@ -168,7 +168,7 @@
},
{
"cell_type": "markdown",
- "id": "bdfeb855",
+ "id": "f8fbf7a3",
"metadata": {},
"source": [
"### Display result"
@@ -177,7 +177,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "2671e835",
+ "id": "5c8a62f7",
"metadata": {},
"outputs": [],
"source": [
@@ -191,7 +191,7 @@
},
{
"cell_type": "markdown",
- "id": "f02c93b5",
+ "id": "49ac95f2",
"metadata": {},
"source": [
"### Details\n",
diff --git a/assets/hub/nvidia_deeplearningexamples_hifigan.ipynb b/assets/hub/nvidia_deeplearningexamples_hifigan.ipynb
index 56c50a72ef63..0e1d54e89064 100644
--- a/assets/hub/nvidia_deeplearningexamples_hifigan.ipynb
+++ b/assets/hub/nvidia_deeplearningexamples_hifigan.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "5d76a6d0",
+ "id": "e0c8659e",
"metadata": {},
"source": [
"### This notebook requires a GPU runtime to run.\n",
@@ -44,7 +44,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "8c12c19a",
+ "id": "2acf4ea7",
"metadata": {},
"outputs": [],
"source": [
@@ -59,7 +59,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "e37a5c2c",
+ "id": "467188cc",
"metadata": {},
"outputs": [],
"source": [
@@ -75,7 +75,7 @@
},
{
"cell_type": "markdown",
- "id": "11052161",
+ "id": "471e0ef8",
"metadata": {},
"source": [
"Download and setup FastPitch generator model."
@@ -84,7 +84,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "80c81a17",
+ "id": "6219eaa2",
"metadata": {},
"outputs": [],
"source": [
@@ -93,7 +93,7 @@
},
{
"cell_type": "markdown",
- "id": "9f9607e4",
+ "id": "bdefdd69",
"metadata": {},
"source": [
"Download and setup vocoder and denoiser models."
@@ -102,7 +102,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "e3e1dc36",
+ "id": "56041c8f",
"metadata": {},
"outputs": [],
"source": [
@@ -111,7 +111,7 @@
},
{
"cell_type": "markdown",
- "id": "6d3d8190",
+ "id": "1c2b142b",
"metadata": {},
"source": [
"Verify that generator and vocoder models agree on input parameters."
@@ -120,7 +120,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "06433a11",
+ "id": "ef55d591",
"metadata": {},
"outputs": [],
"source": [
@@ -140,7 +140,7 @@
},
{
"cell_type": "markdown",
- "id": "5dc17298",
+ "id": "4168e03a",
"metadata": {},
"source": [
"Put all models on the available device."
@@ -149,7 +149,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "168eb71d",
+ "id": "2a4d7195",
"metadata": {},
"outputs": [],
"source": [
@@ -160,7 +160,7 @@
},
{
"cell_type": "markdown",
- "id": "32115b9f",
+ "id": "c6dd0341",
"metadata": {},
"source": [
"Load text processor."
@@ -169,7 +169,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "abac70d1",
+ "id": "4f2a7019",
"metadata": {},
"outputs": [],
"source": [
@@ -178,7 +178,7 @@
},
{
"cell_type": "markdown",
- "id": "a6e4a7c6",
+ "id": "6e83bfcb",
"metadata": {},
"source": [
"Set the text to be synthesized, prepare the input and set additional generation parameters."
@@ -187,7 +187,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "47712fb1",
+ "id": "a92ab30e",
"metadata": {},
"outputs": [],
"source": [
@@ -197,7 +197,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "16ca9c9a",
+ "id": "d7e5c3a9",
"metadata": {},
"outputs": [],
"source": [
@@ -207,7 +207,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "41b0683b",
+ "id": "57d06a91",
"metadata": {},
"outputs": [],
"source": [
@@ -221,7 +221,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "cea1296f",
+ "id": "4b7c56b8",
"metadata": {},
"outputs": [],
"source": [
@@ -235,7 +235,7 @@
},
{
"cell_type": "markdown",
- "id": "32bdd445",
+ "id": "48c985a2",
"metadata": {},
"source": [
"Plot the intermediate spectrogram."
@@ -244,7 +244,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "db0788a6",
+ "id": "10dfa77d",
"metadata": {},
"outputs": [],
"source": [
@@ -258,7 +258,7 @@
},
{
"cell_type": "markdown",
- "id": "61f46e4b",
+ "id": "692f3b76",
"metadata": {},
"source": [
"Synthesize audio."
@@ -267,7 +267,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "6948a654",
+ "id": "3156ecb0",
"metadata": {},
"outputs": [],
"source": [
@@ -277,7 +277,7 @@
},
{
"cell_type": "markdown",
- "id": "2fe119ac",
+ "id": "a2038b95",
"metadata": {},
"source": [
"Write audio to wav file."
@@ -286,7 +286,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "f856c681",
+ "id": "5fcf8a73",
"metadata": {},
"outputs": [],
"source": [
@@ -296,7 +296,7 @@
},
{
"cell_type": "markdown",
- "id": "ffeebce8",
+ "id": "dadd3057",
"metadata": {},
"source": [
"### Details\n",
diff --git a/assets/hub/nvidia_deeplearningexamples_resnet50.ipynb b/assets/hub/nvidia_deeplearningexamples_resnet50.ipynb
index 57be2d47e758..d3f75b52da15 100644
--- a/assets/hub/nvidia_deeplearningexamples_resnet50.ipynb
+++ b/assets/hub/nvidia_deeplearningexamples_resnet50.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "3946fad6",
+ "id": "0af86b8d",
"metadata": {},
"source": [
"### This notebook requires a GPU runtime to run.\n",
@@ -44,7 +44,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "1870ac2d",
+ "id": "e4574894",
"metadata": {},
"outputs": [],
"source": [
@@ -54,7 +54,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "e20e9ee9",
+ "id": "355d6d37",
"metadata": {},
"outputs": [],
"source": [
@@ -75,7 +75,7 @@
},
{
"cell_type": "markdown",
- "id": "b21a0443",
+ "id": "91e35b43",
"metadata": {},
"source": [
"Load the model pretrained on the ImageNet dataset."
@@ -84,7 +84,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "e3ac9d17",
+ "id": "c4d3137c",
"metadata": {},
"outputs": [],
"source": [
@@ -96,7 +96,7 @@
},
{
"cell_type": "markdown",
- "id": "a6bc3aed",
+ "id": "0e3f775b",
"metadata": {},
"source": [
"Prepare sample input data."
@@ -105,7 +105,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "46dc2ebd",
+ "id": "f1720870",
"metadata": {},
"outputs": [],
"source": [
@@ -123,7 +123,7 @@
},
{
"cell_type": "markdown",
- "id": "2f71ffef",
+ "id": "33b3c91d",
"metadata": {},
"source": [
"Run inference. Use `pick_n_best(predictions=output, n=topN)` helper function to pick N most probable hypotheses according to the model."
@@ -132,7 +132,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "175606ff",
+ "id": "765ae405",
"metadata": {},
"outputs": [],
"source": [
@@ -144,7 +144,7 @@
},
{
"cell_type": "markdown",
- "id": "cc723ca1",
+ "id": "12e7d9a6",
"metadata": {},
"source": [
"Display the result."
@@ -153,7 +153,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "b25739c8",
+ "id": "334ef8e1",
"metadata": {},
"outputs": [],
"source": [
@@ -167,7 +167,7 @@
},
{
"cell_type": "markdown",
- "id": "71002a91",
+ "id": "d1b3f286",
"metadata": {},
"source": [
"### Details\n",
diff --git a/assets/hub/nvidia_deeplearningexamples_resnext.ipynb b/assets/hub/nvidia_deeplearningexamples_resnext.ipynb
index 7b3a18de0caf..1a90780e6b78 100644
--- a/assets/hub/nvidia_deeplearningexamples_resnext.ipynb
+++ b/assets/hub/nvidia_deeplearningexamples_resnext.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "8f2f11ba",
+ "id": "003b3f21",
"metadata": {},
"source": [
"### This notebook requires a GPU runtime to run.\n",
@@ -53,7 +53,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "29f7e379",
+ "id": "19ad5bc0",
"metadata": {},
"outputs": [],
"source": [
@@ -63,7 +63,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "61a40021",
+ "id": "3b73bc7e",
"metadata": {},
"outputs": [],
"source": [
@@ -84,7 +84,7 @@
},
{
"cell_type": "markdown",
- "id": "664858ad",
+ "id": "d47193d3",
"metadata": {},
"source": [
"Load the model pretrained on the ImageNet dataset."
@@ -93,7 +93,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "bf23ced1",
+ "id": "bca7efaa",
"metadata": {},
"outputs": [],
"source": [
@@ -105,7 +105,7 @@
},
{
"cell_type": "markdown",
- "id": "44a96a27",
+ "id": "77d0b667",
"metadata": {},
"source": [
"Prepare sample input data."
@@ -114,7 +114,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "d6a3f078",
+ "id": "66eede1c",
"metadata": {},
"outputs": [],
"source": [
@@ -133,7 +133,7 @@
},
{
"cell_type": "markdown",
- "id": "e2315a94",
+ "id": "eb6b2a76",
"metadata": {},
"source": [
"Run inference. Use `pick_n_best(predictions=output, n=topN)` helper function to pick N most probable hypotheses according to the model."
@@ -142,7 +142,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "bd4edaf4",
+ "id": "a0e19e97",
"metadata": {},
"outputs": [],
"source": [
@@ -154,7 +154,7 @@
},
{
"cell_type": "markdown",
- "id": "5d5a4d58",
+ "id": "52387bc8",
"metadata": {},
"source": [
"Display the result."
@@ -163,7 +163,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "43a2fb9d",
+ "id": "bae07e9f",
"metadata": {},
"outputs": [],
"source": [
@@ -177,7 +177,7 @@
},
{
"cell_type": "markdown",
- "id": "92046fbd",
+ "id": "6b472050",
"metadata": {},
"source": [
"### Details\n",
diff --git a/assets/hub/nvidia_deeplearningexamples_se-resnext.ipynb b/assets/hub/nvidia_deeplearningexamples_se-resnext.ipynb
index 7cec6f755a33..f125a6d793f1 100644
--- a/assets/hub/nvidia_deeplearningexamples_se-resnext.ipynb
+++ b/assets/hub/nvidia_deeplearningexamples_se-resnext.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "14429a4d",
+ "id": "0b3aa65c",
"metadata": {},
"source": [
"### This notebook requires a GPU runtime to run.\n",
@@ -53,7 +53,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "b40c8eb1",
+ "id": "76055072",
"metadata": {},
"outputs": [],
"source": [
@@ -63,7 +63,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "9a6cd557",
+ "id": "75459e65",
"metadata": {},
"outputs": [],
"source": [
@@ -84,7 +84,7 @@
},
{
"cell_type": "markdown",
- "id": "7d906dcd",
+ "id": "1d5656e4",
"metadata": {},
"source": [
"Load the model pretrained on the ImageNet dataset."
@@ -93,7 +93,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "5026ec55",
+ "id": "d52e6483",
"metadata": {},
"outputs": [],
"source": [
@@ -105,7 +105,7 @@
},
{
"cell_type": "markdown",
- "id": "6943ca80",
+ "id": "32204973",
"metadata": {},
"source": [
"Prepare sample input data."
@@ -114,7 +114,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "32b0c631",
+ "id": "d5d015ec",
"metadata": {},
"outputs": [],
"source": [
@@ -133,7 +133,7 @@
},
{
"cell_type": "markdown",
- "id": "f19bce82",
+ "id": "d4a53564",
"metadata": {},
"source": [
"Run inference. Use `pick_n_best(predictions=output, n=topN)` helper function to pick N most probable hypotheses according to the model."
@@ -142,7 +142,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "19ba7f52",
+ "id": "d9e1244f",
"metadata": {},
"outputs": [],
"source": [
@@ -154,7 +154,7 @@
},
{
"cell_type": "markdown",
- "id": "df499441",
+ "id": "07203744",
"metadata": {},
"source": [
"Display the result."
@@ -163,7 +163,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "d52229d3",
+ "id": "d7a0913e",
"metadata": {},
"outputs": [],
"source": [
@@ -177,7 +177,7 @@
},
{
"cell_type": "markdown",
- "id": "f6b0efea",
+ "id": "2b178fdf",
"metadata": {},
"source": [
"### Details\n",
diff --git a/assets/hub/nvidia_deeplearningexamples_ssd.ipynb b/assets/hub/nvidia_deeplearningexamples_ssd.ipynb
index d5085d437425..517de92cabef 100644
--- a/assets/hub/nvidia_deeplearningexamples_ssd.ipynb
+++ b/assets/hub/nvidia_deeplearningexamples_ssd.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "73422ffe",
+ "id": "1e9a7bb0",
"metadata": {},
"source": [
"### This notebook requires a GPU runtime to run.\n",
@@ -56,7 +56,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "25cf29e2",
+ "id": "c9a78065",
"metadata": {},
"outputs": [],
"source": [
@@ -66,7 +66,7 @@
},
{
"cell_type": "markdown",
- "id": "b5ea1514",
+ "id": "f7e5fd7d",
"metadata": {},
"source": [
"Load an SSD model pretrained on the COCO dataset, as well as a set of utility methods for convenient and comprehensive formatting of input and output of the model."
@@ -75,7 +75,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "3210c556",
+ "id": "20007c6f",
"metadata": {},
"outputs": [],
"source": [
@@ -86,7 +86,7 @@
},
{
"cell_type": "markdown",
- "id": "26b638fd",
+ "id": "2c776228",
"metadata": {},
"source": [
"Now, prepare the loaded model for inference"
@@ -95,7 +95,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "be606d5b",
+ "id": "7e70c6dc",
"metadata": {},
"outputs": [],
"source": [
@@ -105,7 +105,7 @@
},
{
"cell_type": "markdown",
- "id": "62391c98",
+ "id": "542ac3e0",
"metadata": {},
"source": [
"Prepare input images for object detection.\n",
@@ -115,7 +115,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "6e3646a4",
+ "id": "a6dfcdb2",
"metadata": {},
"outputs": [],
"source": [
@@ -128,7 +128,7 @@
},
{
"cell_type": "markdown",
- "id": "324c45ee",
+ "id": "a95bd5a9",
"metadata": {},
"source": [
"Format the images to comply with the network input and convert them to tensor."
@@ -137,7 +137,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "f436cae3",
+ "id": "1833bf6f",
"metadata": {},
"outputs": [],
"source": [
@@ -147,7 +147,7 @@
},
{
"cell_type": "markdown",
- "id": "309e3919",
+ "id": "936f0bf7",
"metadata": {},
"source": [
"Run the SSD network to perform object detection."
@@ -156,7 +156,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "d9a59a29",
+ "id": "eecd26a4",
"metadata": {},
"outputs": [],
"source": [
@@ -166,7 +166,7 @@
},
{
"cell_type": "markdown",
- "id": "8f1c8840",
+ "id": "d64fa16f",
"metadata": {},
"source": [
"By default, raw output from SSD network per input image contains\n",
@@ -177,7 +177,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "1485e9b0",
+ "id": "124d34e6",
"metadata": {},
"outputs": [],
"source": [
@@ -187,7 +187,7 @@
},
{
"cell_type": "markdown",
- "id": "77c68753",
+ "id": "5d0083c8",
"metadata": {},
"source": [
"The model was trained on the COCO dataset, which we need to access in order to translate class IDs into object names.\n",
@@ -197,7 +197,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "7d12892e",
+ "id": "f08c74fd",
"metadata": {},
"outputs": [],
"source": [
@@ -206,7 +206,7 @@
},
{
"cell_type": "markdown",
- "id": "7e2b5e34",
+ "id": "2b2fbc6b",
"metadata": {},
"source": [
"Finally, let's visualize our detections"
@@ -215,7 +215,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "4feb9028",
+ "id": "331187df",
"metadata": {},
"outputs": [],
"source": [
@@ -240,7 +240,7 @@
},
{
"cell_type": "markdown",
- "id": "b46905d2",
+ "id": "bd0c7eac",
"metadata": {},
"source": [
"### Details\n",
diff --git a/assets/hub/nvidia_deeplearningexamples_tacotron2.ipynb b/assets/hub/nvidia_deeplearningexamples_tacotron2.ipynb
index 3bff783e33dc..837b8d51ba7f 100644
--- a/assets/hub/nvidia_deeplearningexamples_tacotron2.ipynb
+++ b/assets/hub/nvidia_deeplearningexamples_tacotron2.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "2f9a033e",
+ "id": "37b77db7",
"metadata": {},
"source": [
"### This notebook requires a GPU runtime to run.\n",
@@ -41,7 +41,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "a6a8f9c6",
+ "id": "919f4e48",
"metadata": {},
"outputs": [],
"source": [
@@ -53,7 +53,7 @@
},
{
"cell_type": "markdown",
- "id": "f697f76c",
+ "id": "21e68a49",
"metadata": {},
"source": [
"Load the Tacotron2 model pre-trained on [LJ Speech dataset](https://keithito.com/LJ-Speech-Dataset/) and prepare it for inference:"
@@ -62,7 +62,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "c14a7f6f",
+ "id": "b6caefcf",
"metadata": {},
"outputs": [],
"source": [
@@ -74,7 +74,7 @@
},
{
"cell_type": "markdown",
- "id": "19359e04",
+ "id": "09bdcd81",
"metadata": {},
"source": [
"Load pretrained WaveGlow model"
@@ -83,7 +83,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "3a406aac",
+ "id": "273915a0",
"metadata": {},
"outputs": [],
"source": [
@@ -95,7 +95,7 @@
},
{
"cell_type": "markdown",
- "id": "52919fb0",
+ "id": "c9e8a97b",
"metadata": {},
"source": [
"Now, let's make the model say:"
@@ -104,7 +104,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "7618fa47",
+ "id": "6904a29f",
"metadata": {},
"outputs": [],
"source": [
@@ -113,7 +113,7 @@
},
{
"cell_type": "markdown",
- "id": "3a698108",
+ "id": "63a17615",
"metadata": {},
"source": [
"Format the input using utility methods"
@@ -122,7 +122,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "261d320e",
+ "id": "2e8ef343",
"metadata": {},
"outputs": [],
"source": [
@@ -132,7 +132,7 @@
},
{
"cell_type": "markdown",
- "id": "2eb7cb66",
+ "id": "51cd4516",
"metadata": {},
"source": [
"Run the chained models:"
@@ -141,7 +141,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "14b94fdf",
+ "id": "9864ec34",
"metadata": {},
"outputs": [],
"source": [
@@ -154,7 +154,7 @@
},
{
"cell_type": "markdown",
- "id": "7c3637dc",
+ "id": "d1652768",
"metadata": {},
"source": [
"You can write it to a file and listen to it"
@@ -163,7 +163,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "b54d6afb",
+ "id": "1910ba70",
"metadata": {},
"outputs": [],
"source": [
@@ -173,7 +173,7 @@
},
{
"cell_type": "markdown",
- "id": "5ac64a12",
+ "id": "43507e08",
"metadata": {},
"source": [
"Alternatively, play it right away in a notebook with IPython widgets"
@@ -182,7 +182,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "4acc5ad5",
+ "id": "bcd20388",
"metadata": {},
"outputs": [],
"source": [
@@ -192,7 +192,7 @@
},
{
"cell_type": "markdown",
- "id": "50d4f082",
+ "id": "33c5e2bc",
"metadata": {},
"source": [
"### Details\n",
diff --git a/assets/hub/nvidia_deeplearningexamples_waveglow.ipynb b/assets/hub/nvidia_deeplearningexamples_waveglow.ipynb
index c2598660ecca..29e13bdea6fc 100644
--- a/assets/hub/nvidia_deeplearningexamples_waveglow.ipynb
+++ b/assets/hub/nvidia_deeplearningexamples_waveglow.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "26bcef94",
+ "id": "da13c841",
"metadata": {},
"source": [
"### This notebook requires a GPU runtime to run.\n",
@@ -39,7 +39,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "c0619373",
+ "id": "d5e8a45c",
"metadata": {},
"outputs": [],
"source": [
@@ -51,7 +51,7 @@
},
{
"cell_type": "markdown",
- "id": "fab40be6",
+ "id": "3bb8fcf3",
"metadata": {},
"source": [
"Load the WaveGlow model pre-trained on [LJ Speech dataset](https://keithito.com/LJ-Speech-Dataset/)"
@@ -60,7 +60,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "f0bc68e1",
+ "id": "aecc46e8",
"metadata": {},
"outputs": [],
"source": [
@@ -70,7 +70,7 @@
},
{
"cell_type": "markdown",
- "id": "c41e4201",
+ "id": "a62ae3a0",
"metadata": {},
"source": [
"Prepare the WaveGlow model for inference"
@@ -79,7 +79,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "f6210335",
+ "id": "6b71ea20",
"metadata": {},
"outputs": [],
"source": [
@@ -90,7 +90,7 @@
},
{
"cell_type": "markdown",
- "id": "94ce1327",
+ "id": "7f29b91e",
"metadata": {},
"source": [
"Load a pretrained Tacotron2 model"
@@ -99,7 +99,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "f9e3b4b5",
+ "id": "8dd33c2b",
"metadata": {},
"outputs": [],
"source": [
@@ -110,7 +110,7 @@
},
{
"cell_type": "markdown",
- "id": "7f153697",
+ "id": "cab119d2",
"metadata": {},
"source": [
"Now, let's make the model say:"
@@ -119,7 +119,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "e184c23d",
+ "id": "364878f4",
"metadata": {},
"outputs": [],
"source": [
@@ -128,7 +128,7 @@
},
{
"cell_type": "markdown",
- "id": "1a904610",
+ "id": "873a878c",
"metadata": {},
"source": [
"Format the input using utility methods"
@@ -137,7 +137,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "0b46ea38",
+ "id": "333ed468",
"metadata": {},
"outputs": [],
"source": [
@@ -147,7 +147,7 @@
},
{
"cell_type": "markdown",
- "id": "e95c9731",
+ "id": "a5b73031",
"metadata": {},
"source": [
"Run the chained models"
@@ -156,7 +156,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "5d1563bf",
+ "id": "c19ecedb",
"metadata": {},
"outputs": [],
"source": [
@@ -169,7 +169,7 @@
},
{
"cell_type": "markdown",
- "id": "a03376c9",
+ "id": "c41bb170",
"metadata": {},
"source": [
"You can write it to a file and listen to it"
@@ -178,7 +178,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "c2279d1a",
+ "id": "84a930ee",
"metadata": {},
"outputs": [],
"source": [
@@ -188,7 +188,7 @@
},
{
"cell_type": "markdown",
- "id": "e005d8f2",
+ "id": "99fcb8a9",
"metadata": {},
"source": [
"Alternatively, play it right away in a notebook with IPython widgets"
@@ -197,7 +197,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "3ab1f816",
+ "id": "114dab84",
"metadata": {},
"outputs": [],
"source": [
@@ -207,7 +207,7 @@
},
{
"cell_type": "markdown",
- "id": "d2303af2",
+ "id": "a7e77759",
"metadata": {},
"source": [
"### Details\n",
diff --git a/assets/hub/pytorch_fairseq_roberta.ipynb b/assets/hub/pytorch_fairseq_roberta.ipynb
index 467e34c22f28..fd834b848566 100644
--- a/assets/hub/pytorch_fairseq_roberta.ipynb
+++ b/assets/hub/pytorch_fairseq_roberta.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "2c68650a",
+ "id": "0beca512",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -43,7 +43,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "4e132063",
+ "id": "fad1d5b4",
"metadata": {},
"outputs": [],
"source": [
@@ -53,7 +53,7 @@
},
{
"cell_type": "markdown",
- "id": "53622564",
+ "id": "ab60753f",
"metadata": {},
"source": [
"### Example\n",
@@ -64,7 +64,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "8bcf3cbc",
+ "id": "3643e255",
"metadata": {},
"outputs": [],
"source": [
@@ -75,7 +75,7 @@
},
{
"cell_type": "markdown",
- "id": "126a1565",
+ "id": "83cd9802",
"metadata": {},
"source": [
"##### Apply Byte-Pair Encoding (BPE) to input text"
@@ -84,7 +84,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "4dae253e",
+ "id": "8a92540d",
"metadata": {},
"outputs": [],
"source": [
@@ -95,7 +95,7 @@
},
{
"cell_type": "markdown",
- "id": "7f28f95c",
+ "id": "09a03e94",
"metadata": {},
"source": [
"##### Extract features from RoBERTa"
@@ -104,7 +104,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "934599c3",
+ "id": "67217047",
"metadata": {},
"outputs": [],
"source": [
@@ -120,7 +120,7 @@
},
{
"cell_type": "markdown",
- "id": "151e664a",
+ "id": "aa9ba36d",
"metadata": {},
"source": [
"##### Use RoBERTa for sentence-pair classification tasks"
@@ -129,7 +129,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "7c2a011f",
+ "id": "87865a75",
"metadata": {},
"outputs": [],
"source": [
@@ -151,7 +151,7 @@
},
{
"cell_type": "markdown",
- "id": "6c76bab5",
+ "id": "db2ae6e0",
"metadata": {},
"source": [
"##### Register a new (randomly initialized) classification head"
@@ -160,7 +160,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "a0bbd3e2",
+ "id": "61173e4f",
"metadata": {},
"outputs": [],
"source": [
@@ -170,7 +170,7 @@
},
{
"cell_type": "markdown",
- "id": "e70c04cc",
+ "id": "ffc2fd93",
"metadata": {},
"source": [
"### References\n",
diff --git a/assets/hub/pytorch_fairseq_translation.ipynb b/assets/hub/pytorch_fairseq_translation.ipynb
index d56c46b83db5..2cdb752f3ec3 100644
--- a/assets/hub/pytorch_fairseq_translation.ipynb
+++ b/assets/hub/pytorch_fairseq_translation.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "241513fa",
+ "id": "df7eda32",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -37,7 +37,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "b96812ed",
+ "id": "5bfe24d4",
"metadata": {},
"outputs": [],
"source": [
@@ -47,7 +47,7 @@
},
{
"cell_type": "markdown",
- "id": "6bf5ee20",
+ "id": "c43b9f93",
"metadata": {},
"source": [
"### English-to-French Translation\n",
@@ -59,7 +59,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "2176bcfc",
+ "id": "44608970",
"metadata": {},
"outputs": [],
"source": [
@@ -101,7 +101,7 @@
},
{
"cell_type": "markdown",
- "id": "c497c16f",
+ "id": "64e0f7ee",
"metadata": {},
"source": [
"### English-to-German Translation\n",
@@ -123,7 +123,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "4d478458",
+ "id": "803ea5e1",
"metadata": {},
"outputs": [],
"source": [
@@ -142,7 +142,7 @@
},
{
"cell_type": "markdown",
- "id": "7afabfba",
+ "id": "90ae1dd5",
"metadata": {},
"source": [
"We can also do a round-trip translation to create a paraphrase:"
@@ -151,7 +151,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "27223332",
+ "id": "aca82b31",
"metadata": {},
"outputs": [],
"source": [
@@ -172,7 +172,7 @@
},
{
"cell_type": "markdown",
- "id": "02120c1d",
+ "id": "255eefd1",
"metadata": {},
"source": [
"### References\n",
diff --git a/assets/hub/pytorch_vision_alexnet.ipynb b/assets/hub/pytorch_vision_alexnet.ipynb
index 9be13cd7067f..6006d779ee77 100644
--- a/assets/hub/pytorch_vision_alexnet.ipynb
+++ b/assets/hub/pytorch_vision_alexnet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "57da348c",
+ "id": "92589dbc",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -24,7 +24,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "ed9cefb8",
+ "id": "e299da27",
"metadata": {},
"outputs": [],
"source": [
@@ -35,7 +35,7 @@
},
{
"cell_type": "markdown",
- "id": "c2d8a76d",
+ "id": "27176420",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -49,7 +49,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "9db683ad",
+ "id": "e9c5df0e",
"metadata": {},
"outputs": [],
"source": [
@@ -63,7 +63,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "205ff05f",
+ "id": "6d6ac0ab",
"metadata": {},
"outputs": [],
"source": [
@@ -97,7 +97,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "4ca08a4b",
+ "id": "dae8cf4c",
"metadata": {},
"outputs": [],
"source": [
@@ -108,7 +108,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "fa918d0b",
+ "id": "a7839e7e",
"metadata": {},
"outputs": [],
"source": [
@@ -123,7 +123,7 @@
},
{
"cell_type": "markdown",
- "id": "4e158e54",
+ "id": "06e2fb48",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_deeplabv3_resnet101.ipynb b/assets/hub/pytorch_vision_deeplabv3_resnet101.ipynb
index 04a986048990..f164fec9c22d 100644
--- a/assets/hub/pytorch_vision_deeplabv3_resnet101.ipynb
+++ b/assets/hub/pytorch_vision_deeplabv3_resnet101.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "8b906bc4",
+ "id": "a0690384",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -24,7 +24,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "779d1938",
+ "id": "d4233e44",
"metadata": {},
"outputs": [],
"source": [
@@ -38,7 +38,7 @@
},
{
"cell_type": "markdown",
- "id": "bcdcdfdd",
+ "id": "9287e0db",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -54,7 +54,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "ab80cafc",
+ "id": "7309b7f0",
"metadata": {},
"outputs": [],
"source": [
@@ -68,7 +68,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "50e69bbd",
+ "id": "efd2e539",
"metadata": {},
"outputs": [],
"source": [
@@ -97,7 +97,7 @@
},
{
"cell_type": "markdown",
- "id": "0e2092e0",
+ "id": "3635f551",
"metadata": {},
"source": [
"The output here is of shape `(21, H, W)`, and at each location, there are unnormalized probabilities corresponding to the prediction of each class.\n",
@@ -109,7 +109,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "6fca5f52",
+ "id": "1fa44464",
"metadata": {},
"outputs": [],
"source": [
@@ -129,7 +129,7 @@
},
{
"cell_type": "markdown",
- "id": "80efbc8b",
+ "id": "6f3edc53",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_densenet.ipynb b/assets/hub/pytorch_vision_densenet.ipynb
index d43369f691bf..35415fe54dbf 100644
--- a/assets/hub/pytorch_vision_densenet.ipynb
+++ b/assets/hub/pytorch_vision_densenet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "245baf5b",
+ "id": "a7be5d61",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -24,7 +24,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "c6ff2211",
+ "id": "12b8bede",
"metadata": {},
"outputs": [],
"source": [
@@ -39,7 +39,7 @@
},
{
"cell_type": "markdown",
- "id": "542d3ecb",
+ "id": "515ed829",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -53,7 +53,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "2e633188",
+ "id": "a1dd25a1",
"metadata": {},
"outputs": [],
"source": [
@@ -67,7 +67,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "e2a41b4f",
+ "id": "129a65cb",
"metadata": {},
"outputs": [],
"source": [
@@ -101,7 +101,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "941fa345",
+ "id": "e699af4a",
"metadata": {},
"outputs": [],
"source": [
@@ -112,7 +112,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "2d1ddfe6",
+ "id": "bf108b41",
"metadata": {},
"outputs": [],
"source": [
@@ -127,7 +127,7 @@
},
{
"cell_type": "markdown",
- "id": "467cd44b",
+ "id": "14ee6db2",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_fcn_resnet101.ipynb b/assets/hub/pytorch_vision_fcn_resnet101.ipynb
index bf7a25168110..3324f47a27f8 100644
--- a/assets/hub/pytorch_vision_fcn_resnet101.ipynb
+++ b/assets/hub/pytorch_vision_fcn_resnet101.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "fb556fc0",
+ "id": "cfe922ec",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -24,7 +24,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "2cde013c",
+ "id": "088429ce",
"metadata": {},
"outputs": [],
"source": [
@@ -37,7 +37,7 @@
},
{
"cell_type": "markdown",
- "id": "8e1bb94b",
+ "id": "41ce095b",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -53,7 +53,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "36691cf6",
+ "id": "e7dc4ffb",
"metadata": {},
"outputs": [],
"source": [
@@ -67,7 +67,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "279d2386",
+ "id": "24b4f333",
"metadata": {},
"outputs": [],
"source": [
@@ -96,7 +96,7 @@
},
{
"cell_type": "markdown",
- "id": "9b31877d",
+ "id": "ecd14a9a",
"metadata": {},
"source": [
"The output here is of shape `(21, H, W)`, and at each location, there are unnormalized probabilities corresponding to the prediction of each class.\n",
@@ -108,7 +108,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "6b2f042d",
+ "id": "f9f9ee82",
"metadata": {},
"outputs": [],
"source": [
@@ -128,7 +128,7 @@
},
{
"cell_type": "markdown",
- "id": "9d219512",
+ "id": "5b4ff2e0",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_ghostnet.ipynb b/assets/hub/pytorch_vision_ghostnet.ipynb
index 51181fecfa69..fcf8c9aa4231 100644
--- a/assets/hub/pytorch_vision_ghostnet.ipynb
+++ b/assets/hub/pytorch_vision_ghostnet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "fe47c71b",
+ "id": "8c9dd8b8",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "9a4a37e3",
+ "id": "6d6a6d0f",
"metadata": {},
"outputs": [],
"source": [
@@ -33,7 +33,7 @@
},
{
"cell_type": "markdown",
- "id": "f945d3ba",
+ "id": "4ee97ed5",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -47,7 +47,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "29293b68",
+ "id": "fd7df3c9",
"metadata": {},
"outputs": [],
"source": [
@@ -61,7 +61,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "3476cd7a",
+ "id": "8514596b",
"metadata": {},
"outputs": [],
"source": [
@@ -95,7 +95,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "af92a8ca",
+ "id": "04b47216",
"metadata": {},
"outputs": [],
"source": [
@@ -106,7 +106,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "d8244455",
+ "id": "387eab86",
"metadata": {},
"outputs": [],
"source": [
@@ -121,7 +121,7 @@
},
{
"cell_type": "markdown",
- "id": "8a25e3ba",
+ "id": "3c6fde87",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_googlenet.ipynb b/assets/hub/pytorch_vision_googlenet.ipynb
index 3cdbbb28bfe9..175a41528ac7 100644
--- a/assets/hub/pytorch_vision_googlenet.ipynb
+++ b/assets/hub/pytorch_vision_googlenet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "e3ddfbeb",
+ "id": "f99091bc",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -24,7 +24,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "6ba586c8",
+ "id": "d795924c",
"metadata": {},
"outputs": [],
"source": [
@@ -35,7 +35,7 @@
},
{
"cell_type": "markdown",
- "id": "185db0e1",
+ "id": "1e57e9e6",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -49,7 +49,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "90000237",
+ "id": "e34a9491",
"metadata": {},
"outputs": [],
"source": [
@@ -63,7 +63,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "559d54d9",
+ "id": "d9eaf9dd",
"metadata": {},
"outputs": [],
"source": [
@@ -97,7 +97,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "7b60d675",
+ "id": "2d8ae4ba",
"metadata": {},
"outputs": [],
"source": [
@@ -108,7 +108,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "ef669d6c",
+ "id": "c8508078",
"metadata": {},
"outputs": [],
"source": [
@@ -123,7 +123,7 @@
},
{
"cell_type": "markdown",
- "id": "476fcf59",
+ "id": "03c93a61",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_hardnet.ipynb b/assets/hub/pytorch_vision_hardnet.ipynb
index 7b4ad9e704d7..c7216e799629 100644
--- a/assets/hub/pytorch_vision_hardnet.ipynb
+++ b/assets/hub/pytorch_vision_hardnet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "ec5c25fd",
+ "id": "3a9f8a01",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -24,7 +24,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "71d81a32",
+ "id": "712e087d",
"metadata": {},
"outputs": [],
"source": [
@@ -39,7 +39,7 @@
},
{
"cell_type": "markdown",
- "id": "a36b434f",
+ "id": "8c4744c4",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -53,7 +53,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "6d3f91d6",
+ "id": "d34c7fc1",
"metadata": {},
"outputs": [],
"source": [
@@ -67,7 +67,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "fd4e2221",
+ "id": "6be0716c",
"metadata": {},
"outputs": [],
"source": [
@@ -101,7 +101,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "e27900c1",
+ "id": "10bc39d6",
"metadata": {},
"outputs": [],
"source": [
@@ -112,7 +112,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "64570a83",
+ "id": "9073fcdd",
"metadata": {},
"outputs": [],
"source": [
@@ -127,7 +127,7 @@
},
{
"cell_type": "markdown",
- "id": "acaf677e",
+ "id": "98eeeed3",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_ibnnet.ipynb b/assets/hub/pytorch_vision_ibnnet.ipynb
index a8257c8db461..886e3cb5827a 100644
--- a/assets/hub/pytorch_vision_ibnnet.ipynb
+++ b/assets/hub/pytorch_vision_ibnnet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "ba272198",
+ "id": "2ecb7bab",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "e1dd15ba",
+ "id": "b0158503",
"metadata": {},
"outputs": [],
"source": [
@@ -33,7 +33,7 @@
},
{
"cell_type": "markdown",
- "id": "174e37f7",
+ "id": "86cb9819",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -47,7 +47,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "060ea5e2",
+ "id": "5d1df3b5",
"metadata": {},
"outputs": [],
"source": [
@@ -61,7 +61,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "5c5158bb",
+ "id": "764c5af1",
"metadata": {},
"outputs": [],
"source": [
@@ -95,7 +95,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "9c012cad",
+ "id": "110dfa28",
"metadata": {},
"outputs": [],
"source": [
@@ -106,7 +106,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "db59257a",
+ "id": "97c0fad8",
"metadata": {},
"outputs": [],
"source": [
@@ -121,7 +121,7 @@
},
{
"cell_type": "markdown",
- "id": "cd13f19e",
+ "id": "b9dd9785",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_inception_v3.ipynb b/assets/hub/pytorch_vision_inception_v3.ipynb
index 51a9d5930487..e8f3ec3735f4 100644
--- a/assets/hub/pytorch_vision_inception_v3.ipynb
+++ b/assets/hub/pytorch_vision_inception_v3.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "8cc2dee0",
+ "id": "6ad32bde",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "8cc65753",
+ "id": "e87ad1b1",
"metadata": {},
"outputs": [],
"source": [
@@ -33,7 +33,7 @@
},
{
"cell_type": "markdown",
- "id": "67bfca69",
+ "id": "cf74c0a3",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -47,7 +47,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "d4459f89",
+ "id": "511acedb",
"metadata": {},
"outputs": [],
"source": [
@@ -61,7 +61,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "0871bf81",
+ "id": "a9b8693a",
"metadata": {},
"outputs": [],
"source": [
@@ -95,7 +95,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "7211faa6",
+ "id": "895c9266",
"metadata": {},
"outputs": [],
"source": [
@@ -106,7 +106,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "5de3365d",
+ "id": "6a3fb748",
"metadata": {},
"outputs": [],
"source": [
@@ -121,7 +121,7 @@
},
{
"cell_type": "markdown",
- "id": "a308ef65",
+ "id": "d4efe963",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_meal_v2.ipynb b/assets/hub/pytorch_vision_meal_v2.ipynb
index 9421247a35c0..6d1b54e707ff 100644
--- a/assets/hub/pytorch_vision_meal_v2.ipynb
+++ b/assets/hub/pytorch_vision_meal_v2.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "595748d6",
+ "id": "c0ef5809",
"metadata": {},
"source": [
"### This notebook requires a GPU runtime to run.\n",
@@ -27,7 +27,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "e41c1abf",
+ "id": "c067cfbe",
"metadata": {},
"outputs": [],
"source": [
@@ -38,7 +38,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "644eb539",
+ "id": "0473f4e2",
"metadata": {},
"outputs": [],
"source": [
@@ -51,7 +51,7 @@
},
{
"cell_type": "markdown",
- "id": "7172c98c",
+ "id": "e46d1fdc",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -65,7 +65,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "9a5c0509",
+ "id": "fc792869",
"metadata": {},
"outputs": [],
"source": [
@@ -79,7 +79,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "6ee261ec",
+ "id": "ea97c166",
"metadata": {},
"outputs": [],
"source": [
@@ -113,7 +113,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "0bbac4cd",
+ "id": "255abd4b",
"metadata": {},
"outputs": [],
"source": [
@@ -124,7 +124,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "2289822b",
+ "id": "3a0f4003",
"metadata": {},
"outputs": [],
"source": [
@@ -139,7 +139,7 @@
},
{
"cell_type": "markdown",
- "id": "78b7f60e",
+ "id": "cf8cd41e",
"metadata": {},
"source": [
"### Model Description\n",
@@ -167,7 +167,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "3ce2f8e1",
+ "id": "ea133d77",
"metadata": {},
"outputs": [],
"source": [
@@ -181,7 +181,7 @@
},
{
"cell_type": "markdown",
- "id": "8e1255a8",
+ "id": "1bf27086",
"metadata": {},
"source": [
"@inproceedings{shen2019MEAL,\n",
diff --git a/assets/hub/pytorch_vision_mobilenet_v2.ipynb b/assets/hub/pytorch_vision_mobilenet_v2.ipynb
index c1855b69afa2..71f4f7ed19f1 100644
--- a/assets/hub/pytorch_vision_mobilenet_v2.ipynb
+++ b/assets/hub/pytorch_vision_mobilenet_v2.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "a191e1cb",
+ "id": "0c0a8789",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -24,7 +24,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "fe4a8dc6",
+ "id": "1e9f5c80",
"metadata": {},
"outputs": [],
"source": [
@@ -35,7 +35,7 @@
},
{
"cell_type": "markdown",
- "id": "d49c36c2",
+ "id": "4d533734",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -49,7 +49,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "2c492916",
+ "id": "f5ef6454",
"metadata": {},
"outputs": [],
"source": [
@@ -63,7 +63,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "f3477dd0",
+ "id": "25dfe2a9",
"metadata": {},
"outputs": [],
"source": [
@@ -97,7 +97,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "a73155fd",
+ "id": "67324bd7",
"metadata": {},
"outputs": [],
"source": [
@@ -108,7 +108,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "a96c2276",
+ "id": "3fca99ba",
"metadata": {},
"outputs": [],
"source": [
@@ -123,7 +123,7 @@
},
{
"cell_type": "markdown",
- "id": "7f96758d",
+ "id": "5e176aff",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_once_for_all.ipynb b/assets/hub/pytorch_vision_once_for_all.ipynb
index 79c8874ae8bb..d2aa5e9f61e5 100644
--- a/assets/hub/pytorch_vision_once_for_all.ipynb
+++ b/assets/hub/pytorch_vision_once_for_all.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "e7823e01",
+ "id": "2ab35054",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -29,7 +29,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "d852a914",
+ "id": "cf0f12cd",
"metadata": {},
"outputs": [],
"source": [
@@ -45,7 +45,7 @@
},
{
"cell_type": "markdown",
- "id": "bec74186",
+ "id": "807103b2",
"metadata": {},
"source": [
    "| OFA Network | Design Space | Resolution | Width Multiplier | Depth | Expand Ratio | Kernel Size | \n",
@@ -62,7 +62,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "5760eb32",
+ "id": "24077ee2",
"metadata": {},
"outputs": [],
"source": [
@@ -77,7 +77,7 @@
},
{
"cell_type": "markdown",
- "id": "a516df3d",
+ "id": "a229dc1a",
"metadata": {},
"source": [
"### Get Specialized Architecture"
@@ -86,7 +86,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "1482b0a1",
+ "id": "25bc7f55",
"metadata": {},
"outputs": [],
"source": [
@@ -101,7 +101,7 @@
},
{
"cell_type": "markdown",
- "id": "8422c66d",
+ "id": "4ff98dc6",
"metadata": {},
"source": [
"More models and configurations can be found in [once-for-all/model-zoo](https://github.com/mit-han-lab/once-for-all#evaluate-1)\n",
@@ -111,7 +111,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "04b27be9",
+ "id": "b689604f",
"metadata": {},
"outputs": [],
"source": [
@@ -122,7 +122,7 @@
},
{
"cell_type": "markdown",
- "id": "278c68d7",
+ "id": "cec1eef5",
"metadata": {},
"source": [
    "The model's prediction can be evaluated by"
@@ -131,7 +131,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "73a297dd",
+ "id": "e54d7090",
"metadata": {},
"outputs": [],
"source": [
@@ -173,7 +173,7 @@
},
{
"cell_type": "markdown",
- "id": "2a9a3420",
+ "id": "595d6a19",
"metadata": {},
"source": [
"### Model Description\n",
@@ -189,7 +189,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "96b4d62c",
+ "id": "23040a64",
"metadata": {},
"outputs": [],
"source": [
diff --git a/assets/hub/pytorch_vision_proxylessnas.ipynb b/assets/hub/pytorch_vision_proxylessnas.ipynb
index fc678131ecd9..3b30076016b0 100644
--- a/assets/hub/pytorch_vision_proxylessnas.ipynb
+++ b/assets/hub/pytorch_vision_proxylessnas.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "a301959d",
+ "id": "a02f10f9",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "e2db19bf",
+ "id": "b498a829",
"metadata": {},
"outputs": [],
"source": [
@@ -35,7 +35,7 @@
},
{
"cell_type": "markdown",
- "id": "78c0fae0",
+ "id": "1872688d",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -49,7 +49,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "877e835a",
+ "id": "70de5c9c",
"metadata": {},
"outputs": [],
"source": [
@@ -63,7 +63,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "f253bdf3",
+ "id": "4d84c1f0",
"metadata": {},
"outputs": [],
"source": [
@@ -97,7 +97,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "07c19c24",
+ "id": "bb15d301",
"metadata": {},
"outputs": [],
"source": [
@@ -108,7 +108,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "eea0c60f",
+ "id": "356525e9",
"metadata": {},
"outputs": [],
"source": [
@@ -123,7 +123,7 @@
},
{
"cell_type": "markdown",
- "id": "b491cbd8",
+ "id": "bf565c5d",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_resnest.ipynb b/assets/hub/pytorch_vision_resnest.ipynb
index ee2f328603f0..5ee6a460d833 100644
--- a/assets/hub/pytorch_vision_resnest.ipynb
+++ b/assets/hub/pytorch_vision_resnest.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "f5cea2dd",
+ "id": "6187e8dc",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "0ecbb048",
+ "id": "f3dfdc7e",
"metadata": {},
"outputs": [],
"source": [
@@ -36,7 +36,7 @@
},
{
"cell_type": "markdown",
- "id": "18c35c43",
+ "id": "2e5481ea",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -50,7 +50,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "2c973f73",
+ "id": "2d3cece8",
"metadata": {},
"outputs": [],
"source": [
@@ -64,7 +64,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "eb270781",
+ "id": "0c8c1784",
"metadata": {},
"outputs": [],
"source": [
@@ -98,7 +98,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "d23ad113",
+ "id": "8e9db733",
"metadata": {},
"outputs": [],
"source": [
@@ -109,7 +109,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "2a815dd3",
+ "id": "f26a591c",
"metadata": {},
"outputs": [],
"source": [
@@ -124,7 +124,7 @@
},
{
"cell_type": "markdown",
- "id": "038e56b9",
+ "id": "85c616af",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_resnet.ipynb b/assets/hub/pytorch_vision_resnet.ipynb
index 1d54c525052e..62c67fb8ce69 100644
--- a/assets/hub/pytorch_vision_resnet.ipynb
+++ b/assets/hub/pytorch_vision_resnet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "ce458536",
+ "id": "b4492464",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "03989f31",
+ "id": "9cabea66",
"metadata": {},
"outputs": [],
"source": [
@@ -38,7 +38,7 @@
},
{
"cell_type": "markdown",
- "id": "e7721cc1",
+ "id": "324505a9",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -52,7 +52,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "84bed2bd",
+ "id": "6c0ff5ed",
"metadata": {},
"outputs": [],
"source": [
@@ -66,7 +66,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "eb390021",
+ "id": "a4c9641a",
"metadata": {},
"outputs": [],
"source": [
@@ -100,7 +100,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "28f62d8b",
+ "id": "b1f3fab0",
"metadata": {},
"outputs": [],
"source": [
@@ -111,7 +111,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "5c2b8fb5",
+ "id": "cc04a4d3",
"metadata": {},
"outputs": [],
"source": [
@@ -126,7 +126,7 @@
},
{
"cell_type": "markdown",
- "id": "5acebfe8",
+ "id": "8675ce1a",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_resnext.ipynb b/assets/hub/pytorch_vision_resnext.ipynb
index 87bb86b3f689..f54213285823 100644
--- a/assets/hub/pytorch_vision_resnext.ipynb
+++ b/assets/hub/pytorch_vision_resnext.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "16d8ffc2",
+ "id": "29489f6a",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "10e4c384",
+ "id": "cd35b6f9",
"metadata": {},
"outputs": [],
"source": [
@@ -35,7 +35,7 @@
},
{
"cell_type": "markdown",
- "id": "dbb5ea48",
+ "id": "e6f9234e",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -49,7 +49,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "cef58c62",
+ "id": "dcca7265",
"metadata": {},
"outputs": [],
"source": [
@@ -63,7 +63,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "cd6d8a60",
+ "id": "00424bfb",
"metadata": {},
"outputs": [],
"source": [
@@ -97,7 +97,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "b7ca29c9",
+ "id": "25880085",
"metadata": {},
"outputs": [],
"source": [
@@ -108,7 +108,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "766845c1",
+ "id": "ea038925",
"metadata": {},
"outputs": [],
"source": [
@@ -125,7 +125,7 @@
},
{
"cell_type": "markdown",
- "id": "a1685071",
+ "id": "8d41fdee",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_shufflenet_v2.ipynb b/assets/hub/pytorch_vision_shufflenet_v2.ipynb
index 73c986fcac84..58dc82f8af1c 100644
--- a/assets/hub/pytorch_vision_shufflenet_v2.ipynb
+++ b/assets/hub/pytorch_vision_shufflenet_v2.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "2a3fd405",
+ "id": "23b1d762",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -24,7 +24,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "4f35c58e",
+ "id": "9e8fbff8",
"metadata": {},
"outputs": [],
"source": [
@@ -35,7 +35,7 @@
},
{
"cell_type": "markdown",
- "id": "d1b90097",
+ "id": "7a06f7bf",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -49,7 +49,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "e05d8798",
+ "id": "9b43d958",
"metadata": {},
"outputs": [],
"source": [
@@ -63,7 +63,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "b4cb0bd0",
+ "id": "6114ff6c",
"metadata": {},
"outputs": [],
"source": [
@@ -97,7 +97,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "10b06312",
+ "id": "368abd58",
"metadata": {},
"outputs": [],
"source": [
@@ -108,7 +108,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "e1650622",
+ "id": "356195c6",
"metadata": {},
"outputs": [],
"source": [
@@ -123,7 +123,7 @@
},
{
"cell_type": "markdown",
- "id": "68433d94",
+ "id": "01d8d5b4",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_snnmlp.ipynb b/assets/hub/pytorch_vision_snnmlp.ipynb
index 3f23392d460a..b9b7c9f3ec6b 100644
--- a/assets/hub/pytorch_vision_snnmlp.ipynb
+++ b/assets/hub/pytorch_vision_snnmlp.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "51fb5c1c",
+ "id": "4fb94153",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "fc082212",
+ "id": "e4266eb1",
"metadata": {},
"outputs": [],
"source": [
@@ -37,7 +37,7 @@
},
{
"cell_type": "markdown",
- "id": "50361948",
+ "id": "48f3d42f",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -51,7 +51,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "6e0a9ff3",
+ "id": "4014b664",
"metadata": {},
"outputs": [],
"source": [
@@ -65,7 +65,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "03d4bf60",
+ "id": "dd5e2560",
"metadata": {},
"outputs": [],
"source": [
@@ -97,7 +97,7 @@
},
{
"cell_type": "markdown",
- "id": "49f1f369",
+ "id": "365c19d5",
"metadata": {},
"source": [
"### Model Description\n",
@@ -121,7 +121,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "312607f0",
+ "id": "ac9e2f83",
"metadata": {},
"outputs": [],
"source": [
diff --git a/assets/hub/pytorch_vision_squeezenet.ipynb b/assets/hub/pytorch_vision_squeezenet.ipynb
index 4d7d6e9c984c..77e105d5065a 100644
--- a/assets/hub/pytorch_vision_squeezenet.ipynb
+++ b/assets/hub/pytorch_vision_squeezenet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "4ace5dc0",
+ "id": "d806b7e8",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "2ceaaca1",
+ "id": "b3e608e3",
"metadata": {},
"outputs": [],
"source": [
@@ -35,7 +35,7 @@
},
{
"cell_type": "markdown",
- "id": "8e25b156",
+ "id": "9449860f",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -49,7 +49,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "2181987b",
+ "id": "271a643d",
"metadata": {},
"outputs": [],
"source": [
@@ -63,7 +63,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "376aec3c",
+ "id": "bf3f4fa2",
"metadata": {},
"outputs": [],
"source": [
@@ -97,7 +97,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "5d49c6d0",
+ "id": "a96dff14",
"metadata": {},
"outputs": [],
"source": [
@@ -108,7 +108,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "46a39dfe",
+ "id": "63ef7155",
"metadata": {},
"outputs": [],
"source": [
@@ -123,7 +123,7 @@
},
{
"cell_type": "markdown",
- "id": "d1b9e3c6",
+ "id": "f5df6733",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_vgg.ipynb b/assets/hub/pytorch_vision_vgg.ipynb
index ca974b7997bc..2f497b7457c0 100644
--- a/assets/hub/pytorch_vision_vgg.ipynb
+++ b/assets/hub/pytorch_vision_vgg.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "e2147959",
+ "id": "ef2c7d61",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "0856cebb",
+ "id": "04dbdebe",
"metadata": {},
"outputs": [],
"source": [
@@ -41,7 +41,7 @@
},
{
"cell_type": "markdown",
- "id": "4ece9123",
+ "id": "05e86640",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -55,7 +55,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "150a6e12",
+ "id": "07d46ca5",
"metadata": {},
"outputs": [],
"source": [
@@ -69,7 +69,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "3f5162c7",
+ "id": "2ee8eaf3",
"metadata": {},
"outputs": [],
"source": [
@@ -103,7 +103,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "bfd04549",
+ "id": "83dc855c",
"metadata": {},
"outputs": [],
"source": [
@@ -114,7 +114,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "3879bcc5",
+ "id": "92f4346f",
"metadata": {},
"outputs": [],
"source": [
@@ -129,7 +129,7 @@
},
{
"cell_type": "markdown",
- "id": "01b89edb",
+ "id": "2c5a35ad",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/pytorch_vision_wide_resnet.ipynb b/assets/hub/pytorch_vision_wide_resnet.ipynb
index 6d31cf0f1c35..1295f71f5aa9 100644
--- a/assets/hub/pytorch_vision_wide_resnet.ipynb
+++ b/assets/hub/pytorch_vision_wide_resnet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "4a0dde22",
+ "id": "a982b48a",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "ab5848b6",
+ "id": "6c11f2d4",
"metadata": {},
"outputs": [],
"source": [
@@ -36,7 +36,7 @@
},
{
"cell_type": "markdown",
- "id": "185aa74c",
+ "id": "e6b9c418",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -50,7 +50,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "4e1784be",
+ "id": "e7195b1a",
"metadata": {},
"outputs": [],
"source": [
@@ -64,7 +64,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "a395ea0c",
+ "id": "00d80f43",
"metadata": {},
"outputs": [],
"source": [
@@ -98,7 +98,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "9daee8c1",
+ "id": "021ea52d",
"metadata": {},
"outputs": [],
"source": [
@@ -109,7 +109,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "460a9b93",
+ "id": "8d109350",
"metadata": {},
"outputs": [],
"source": [
@@ -124,7 +124,7 @@
},
{
"cell_type": "markdown",
- "id": "594e1d0e",
+ "id": "f3d0520b",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/sigsep_open-unmix-pytorch_umx.ipynb b/assets/hub/sigsep_open-unmix-pytorch_umx.ipynb
index 7a470010e900..a091b96e3555 100644
--- a/assets/hub/sigsep_open-unmix-pytorch_umx.ipynb
+++ b/assets/hub/sigsep_open-unmix-pytorch_umx.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "4e044cf1",
+ "id": "7966f391",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "b8576f13",
+ "id": "095ab6c4",
"metadata": {},
"outputs": [],
"source": [
@@ -34,7 +34,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "f760d705",
+ "id": "ea3d6ec1",
"metadata": {},
"outputs": [],
"source": [
@@ -59,7 +59,7 @@
},
{
"cell_type": "markdown",
- "id": "0c9ff3e4",
+ "id": "02b1b9d6",
"metadata": {},
"source": [
"### Model Description\n",
@@ -94,7 +94,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "e9447214",
+ "id": "6c8668e1",
"metadata": {},
"outputs": [],
"source": [
@@ -104,7 +104,7 @@
},
{
"cell_type": "markdown",
- "id": "923a9173",
+ "id": "b8603e37",
"metadata": {},
"source": [
"### References\n",
diff --git a/assets/hub/simplenet.ipynb b/assets/hub/simplenet.ipynb
index ddc50e2a09a9..b563a823edfc 100644
--- a/assets/hub/simplenet.ipynb
+++ b/assets/hub/simplenet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "54cd7c44",
+ "id": "1891c6f9",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "8fed2140",
+ "id": "12bdf714",
"metadata": {},
"outputs": [],
"source": [
@@ -41,7 +41,7 @@
},
{
"cell_type": "markdown",
- "id": "de457bd9",
+ "id": "94a62c5d",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -55,7 +55,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "44f054c3",
+ "id": "33b12be2",
"metadata": {},
"outputs": [],
"source": [
@@ -69,7 +69,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "ffd1c4b1",
+ "id": "cf722178",
"metadata": {},
"outputs": [],
"source": [
@@ -103,7 +103,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "1f57a18f",
+ "id": "e62070bf",
"metadata": {},
"outputs": [],
"source": [
@@ -114,7 +114,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "eb0d2450",
+ "id": "2987895b",
"metadata": {},
"outputs": [],
"source": [
@@ -129,7 +129,7 @@
},
{
"cell_type": "markdown",
- "id": "0975f8f8",
+ "id": "7cf2e3d1",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/snakers4_silero-models_stt.ipynb b/assets/hub/snakers4_silero-models_stt.ipynb
index 10fecc4fc00e..69327a58eeb3 100644
--- a/assets/hub/snakers4_silero-models_stt.ipynb
+++ b/assets/hub/snakers4_silero-models_stt.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "b4ca6dfb",
+ "id": "569058b9",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -24,7 +24,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "da0f9896",
+ "id": "9c5ca13a",
"metadata": {},
"outputs": [],
"source": [
@@ -36,7 +36,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "e04156b3",
+ "id": "291ca135",
"metadata": {},
"outputs": [],
"source": [
@@ -69,7 +69,7 @@
},
{
"cell_type": "markdown",
- "id": "25394371",
+ "id": "ac0896f5",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/snakers4_silero-models_tts.ipynb b/assets/hub/snakers4_silero-models_tts.ipynb
index d9ba48cb53d8..250f6f846a7c 100644
--- a/assets/hub/snakers4_silero-models_tts.ipynb
+++ b/assets/hub/snakers4_silero-models_tts.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "17640371",
+ "id": "b30f0294",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -20,7 +20,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "945c2c6f",
+ "id": "6162608a",
"metadata": {},
"outputs": [],
"source": [
@@ -32,7 +32,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "18dfbdcb",
+ "id": "c0043d5f",
"metadata": {},
"outputs": [],
"source": [
@@ -55,7 +55,7 @@
},
{
"cell_type": "markdown",
- "id": "8a1cae7a",
+ "id": "6150d625",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/snakers4_silero-vad_vad.ipynb b/assets/hub/snakers4_silero-vad_vad.ipynb
index 0941879d6926..1fd7933fbf0d 100644
--- a/assets/hub/snakers4_silero-vad_vad.ipynb
+++ b/assets/hub/snakers4_silero-vad_vad.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "b0f87cee",
+ "id": "f80c12bd",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "931edd40",
+ "id": "a09673ee",
"metadata": {},
"outputs": [],
"source": [
@@ -34,7 +34,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "5b619fd0",
+ "id": "c90352b7",
"metadata": {},
"outputs": [],
"source": [
@@ -63,7 +63,7 @@
},
{
"cell_type": "markdown",
- "id": "17b43e50",
+ "id": "8baeb592",
"metadata": {},
"source": [
"### Model Description\n",
diff --git a/assets/hub/ultralytics_yolov5.ipynb b/assets/hub/ultralytics_yolov5.ipynb
index ce8aaf0f5192..2a4bb24a8d4e 100644
--- a/assets/hub/ultralytics_yolov5.ipynb
+++ b/assets/hub/ultralytics_yolov5.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
- "id": "93a68768",
+ "id": "d5b96eba",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -29,7 +29,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "3b62b10a",
+ "id": "38ab273f",
"metadata": {},
"outputs": [],
"source": [
@@ -39,7 +39,7 @@
},
{
"cell_type": "markdown",
- "id": "ead6b696",
+ "id": "bdc9e822",
"metadata": {},
"source": [
"## Model Description\n",
@@ -82,7 +82,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "1a1ff659",
+ "id": "288a7130",
"metadata": {},
"outputs": [],
"source": [
@@ -112,7 +112,7 @@
},
{
"cell_type": "markdown",
- "id": "3c8d0963",
+ "id": "2d3cef04",
"metadata": {},
"source": [
"## Citation\n",
@@ -125,7 +125,7 @@
{
"cell_type": "code",
"execution_count": null,
- "id": "de035c7d",
+ "id": "bccfb9d6",
"metadata": {
"attributes": {
"classes": [
@@ -150,7 +150,7 @@
},
{
"cell_type": "markdown",
- "id": "19ffa3fa",
+ "id": "5ee2bac9",
"metadata": {},
"source": [
"## Contact\n",
diff --git a/assets/quick-start-module.js b/assets/quick-start-module.js
index b7f9e8ab2d97..dfde9363d5e6 100644
--- a/assets/quick-start-module.js
+++ b/assets/quick-start-module.js
@@ -11,8 +11,8 @@ var archInfoMap = new Map([
['accnone', {title: "CPU", platforms: new Set(['linux', 'macos', 'windows'])}]
]);
-let version_map={"nightly": {"accnone": ["cpu", ""], "cuda.x": ["cuda", "11.8"], "cuda.y": ["cuda", "12.1"], "cuda.z": ["cuda", "12.4"], "rocm5.x": ["rocm", "6.1"]}, "release": {"accnone": ["cpu", ""], "cuda.x": ["cuda", "11.8"], "cuda.y": ["cuda", "12.1"], "cuda.z": ["cuda", "12.4"], "rocm5.x": ["rocm", "6.1"]}}
-let stable_version="Stable (2.4.0)";
+let version_map={"nightly": {"accnone": ["cpu", ""], "cuda.x": ["cuda", "11.8"], "cuda.y": ["cuda", "12.1"], "cuda.z": ["cuda", "12.4"], "rocm5.x": ["rocm", "6.2"]}, "release": {"accnone": ["cpu", ""], "cuda.x": ["cuda", "11.8"], "cuda.y": ["cuda", "12.1"], "cuda.z": ["cuda", "12.4"], "rocm5.x": ["rocm", "6.1"]}}
+let stable_version="Stable (2.4.1)";
var default_selected_os = getAnchorSelectedOS() || getDefaultSelectedOS();
var opts = {
@@ -266,7 +266,7 @@ $("[data-toggle='cloud-dropdown']").on("click", function(e) {
});
function commandMessage(key) {
- var object = {"preview,pip,linux,accnone,python": "pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu", "preview,pip,linux,cuda.x,python": "pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu118", "preview,pip,linux,cuda.y,python": "pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu121", "preview,pip,linux,cuda.z,python": "pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu124", "preview,pip,linux,rocm5.x,python": "pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm6.1", "preview,conda,linux,cuda.x,python": "conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch-nightly -c nvidia", "preview,conda,linux,cuda.y,python": "conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch-nightly -c nvidia", "preview,conda,linux,cuda.z,python": "conda install pytorch torchvision torchaudio pytorch-cuda=12.4 -c pytorch-nightly -c nvidia", "preview,conda,linux,rocm5.x,python": "NOTE: Conda packages are not currently available for ROCm, please use pip instead
", "preview,conda,linux,accnone,python": "conda install pytorch torchvision torchaudio cpuonly -c pytorch-nightly", "preview,libtorch,linux,accnone,cplusplus": "Download here (Pre-cxx11 ABI):
https://download.pytorch.org/libtorch/nightly/cpu/libtorch-shared-with-deps-latest.zip
Download here (cxx11 ABI):
https://download.pytorch.org/libtorch/nightly/cpu/libtorch-cxx11-abi-shared-with-deps-latest.zip", "preview,libtorch,linux,cuda.x,cplusplus": "Download here (Pre-cxx11 ABI):
https://download.pytorch.org/libtorch/nightly/cu118/libtorch-shared-with-deps-latest.zip
Download here (cxx11 ABI):
https://download.pytorch.org/libtorch/nightly/cu118/libtorch-cxx11-abi-shared-with-deps-latest.zip", "preview,libtorch,linux,cuda.y,cplusplus": "Download here (Pre-cxx11 ABI):
https://download.pytorch.org/libtorch/nightly/cu121/libtorch-shared-with-deps-latest.zip
Download here (cxx11 ABI):
https://download.pytorch.org/libtorch/nightly/cu121/libtorch-cxx11-abi-shared-with-deps-latest.zip", "preview,libtorch,linux,cuda.z,cplusplus": "Download here (Pre-cxx11 ABI):
https://download.pytorch.org/libtorch/nightly/cu124/libtorch-shared-with-deps-latest.zip
Download here (cxx11 ABI):
https://download.pytorch.org/libtorch/nightly/cu124/libtorch-cxx11-abi-shared-with-deps-latest.zip", "preview,libtorch,linux,rocm5.x,cplusplus": "Download here (Pre-cxx11 ABI):
https://download.pytorch.org/libtorch/nightly/rocm6.1/libtorch-shared-with-deps-latest.zip
Download here (cxx11 ABI):
https://download.pytorch.org/libtorch/nightly/rocm6.1/libtorch-cxx11-abi-shared-with-deps-latest.zip", "preview,pip,macos,cuda.x,python": "# CUDA is not available on MacOS, please use default package
pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu", "preview,pip,macos,cuda.y,python": "# CUDA is not available on MacOS, please use default package
pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu", "preview,pip,macos,cuda.z,python": "# CUDA is not available on MacOS, please use default package
pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu", "preview,pip,macos,rocm5.x,python": "# ROCm is not available on MacOS, please use default package
pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu", "preview,pip,macos,accnone,python": "pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu", "preview,conda,macos,cuda.x,python": "# CUDA is not available on MacOS, please use default package
conda install pytorch-nightly::pytorch torchvision torchaudio -c pytorch-nightly", "preview,conda,macos,cuda.y,python": "# CUDA is not available on MacOS, please use default package
conda install pytorch-nightly::pytorch torchvision torchaudio -c pytorch-nightly", "preview,conda,macos,cuda.z,python": "# CUDA is not available on MacOS, please use default package
conda install pytorch-nightly::pytorch torchvision torchaudio -c pytorch-nightly", "preview,conda,macos,rocm5.x,python": "# ROCm is not available on MacOS, please use default package
conda install pytorch-nightly::pytorch torchvision torchaudio -c pytorch-nightly", "preview,conda,macos,accnone,python": "conda install pytorch-nightly::pytorch torchvision torchaudio -c pytorch-nightly", "preview,libtorch,macos,accnone,cplusplus": "Download arm64 libtorch here (ROCm and CUDA are not supported):
https://download.pytorch.org/libtorch/nightly/cpu/libtorch-macos-arm64-latest.zip", "preview,libtorch,macos,cuda.x,cplusplus": "Download arm64 libtorch here (ROCm and CUDA are not supported):
https://download.pytorch.org/libtorch/nightly/cpu/libtorch-macos-arm64-latest.zip", "preview,libtorch,macos,cuda.y,cplusplus": "Download arm64 libtorch here (ROCm and CUDA are not supported):
https://download.pytorch.org/libtorch/nightly/cpu/libtorch-macos-arm64-latest.zip", "preview,libtorch,macos,cuda.z,cplusplus": "Download arm64 libtorch here (ROCm and CUDA are not supported):
https://download.pytorch.org/libtorch/nightly/cpu/libtorch-macos-arm64-latest.zip", "preview,libtorch,macos,rocm5.x,cplusplus": "Download arm64 libtorch here (ROCm and CUDA are not supported):
https://download.pytorch.org/libtorch/nightly/cpu/libtorch-macos-arm64-latest.zip", "preview,pip,windows,accnone,python": "pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu", "preview,pip,windows,cuda.x,python": "pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu118", "preview,pip,windows,cuda.y,python": "pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu121", "preview,pip,windows,cuda.z,python": "pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu124", "preview,pip,windows,rocm5.x,python": "NOTE: ROCm is not available on Windows", "preview,conda,windows,cuda.x,python": "conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch-nightly -c nvidia", "preview,conda,windows,cuda.y,python": "conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch-nightly -c nvidia", "preview,conda,windows,cuda.z,python": "conda install pytorch torchvision torchaudio pytorch-cuda=12.4 -c pytorch-nightly -c nvidia", "preview,conda,windows,rocm5.x,python": "NOTE: ROCm is not available on Windows", "preview,conda,windows,accnone,python": "conda install pytorch torchvision torchaudio cpuonly -c pytorch-nightly", "preview,libtorch,windows,accnone,cplusplus": "Download here (Release version):
https://download.pytorch.org/libtorch/nightly/cpu/libtorch-win-shared-with-deps-latest.zip
Download here (Debug version):
https://download.pytorch.org/libtorch/nightly/cpu/libtorch-win-shared-with-deps-debug-latest.zip", "preview,libtorch,windows,cuda.x,cplusplus": "Download here (Release version):
https://download.pytorch.org/libtorch/nightly/cu118/libtorch-win-shared-with-deps-latest.zip
Download here (Debug version):
https://download.pytorch.org/libtorch/nightly/cu118/libtorch-win-shared-with-deps-debug-latest.zip", "preview,libtorch,windows,cuda.y,cplusplus": "Download here (Release version):
https://download.pytorch.org/libtorch/nightly/cu121/libtorch-win-shared-with-deps-latest.zip
Download here (Debug version):
https://download.pytorch.org/libtorch/nightly/cu121/libtorch-win-shared-with-deps-debug-latest.zip", "preview,libtorch,windows,cuda.z,cplusplus": "Download here (Release version):
https://download.pytorch.org/libtorch/nightly/cu124/libtorch-win-shared-with-deps-latest.zip
Download here (Debug version):
https://download.pytorch.org/libtorch/nightly/cu124/libtorch-win-shared-with-deps-debug-latest.zip", "preview,libtorch,windows,rocm5.x,cplusplus": "NOTE: ROCm is not available on Windows", "stable,pip,linux,accnone,python": "pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu", "stable,pip,linux,cuda.x,python": "pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118", "stable,pip,linux,cuda.y,python": "pip3 install torch torchvision torchaudio", "stable,pip,linux,cuda.z,python": "pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124", "stable,pip,linux,rocm5.x,python": "pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.1", "stable,conda,linux,cuda.x,python": "conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia", "stable,conda,linux,cuda.y,python": "conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia", "stable,conda,linux,cuda.z,python": "conda install pytorch torchvision torchaudio pytorch-cuda=12.4 -c pytorch -c nvidia", "stable,conda,linux,rocm5.x,python": "NOTE: Conda packages are not currently available for ROCm, please use pip instead
", "stable,conda,linux,accnone,python": "conda install pytorch torchvision torchaudio cpuonly -c pytorch", "stable,libtorch,linux,accnone,cplusplus": "Download here (Pre-cxx11 ABI):
https://download.pytorch.org/libtorch/cpu/libtorch-shared-with-deps-2.4.0%2Bcpu.zip
Download here (cxx11 ABI):
https://download.pytorch.org/libtorch/cpu/libtorch-cxx11-abi-shared-with-deps-2.4.0%2Bcpu.zip", "stable,libtorch,linux,cuda.x,cplusplus": "Download here (Pre-cxx11 ABI):
https://download.pytorch.org/libtorch/cu118/libtorch-shared-with-deps-2.4.0%2Bcu118.zip
Download here (cxx11 ABI):
https://download.pytorch.org/libtorch/cu118/libtorch-cxx11-abi-shared-with-deps-2.4.0%2Bcu118.zip", "stable,libtorch,linux,cuda.y,cplusplus": "Download here (Pre-cxx11 ABI):
https://download.pytorch.org/libtorch/cu121/libtorch-shared-with-deps-2.4.0%2Bcu121.zip
Download here (cxx11 ABI):
https://download.pytorch.org/libtorch/cu121/libtorch-cxx11-abi-shared-with-deps-2.4.0%2Bcu121.zip", "stable,libtorch,linux,cuda.z,cplusplus": "Download here (Pre-cxx11 ABI):
https://download.pytorch.org/libtorch/cu124/libtorch-shared-with-deps-2.4.0%2Bcu124.zip
Download here (cxx11 ABI):
https://download.pytorch.org/libtorch/cu124/libtorch-cxx11-abi-shared-with-deps-2.4.0%2Bcu124.zip", "stable,libtorch,linux,rocm5.x,cplusplus": "Download here (Pre-cxx11 ABI):
https://download.pytorch.org/libtorch/rocm6.1/libtorch-shared-with-deps-2.4.0%2Brocm6.1.zip
Download here (cxx11 ABI):
https://download.pytorch.org/libtorch/rocm6.1/libtorch-cxx11-abi-shared-with-deps-2.4.0%2Brocm6.1.zip", "stable,pip,macos,cuda.x,python": "# CUDA is not available on MacOS, please use default package
pip3 install torch torchvision torchaudio", "stable,pip,macos,cuda.y,python": "# CUDA is not available on MacOS, please use default package
pip3 install torch torchvision torchaudio", "stable,pip,macos,cuda.z,python": "# CUDA is not available on MacOS, please use default package
pip3 install torch torchvision torchaudio", "stable,pip,macos,rocm5.x,python": "# ROCm is not available on MacOS, please use default package
pip3 install torch torchvision torchaudio", "stable,pip,macos,accnone,python": "pip3 install torch torchvision torchaudio", "stable,conda,macos,cuda.x,python": "# CUDA is not available on MacOS, please use default package
conda install pytorch::pytorch torchvision torchaudio -c pytorch", "stable,conda,macos,cuda.y,python": "# CUDA is not available on MacOS, please use default package
conda install pytorch::pytorch torchvision torchaudio -c pytorch", "stable,conda,macos,cuda.z,python": "# CUDA is not available on MacOS, please use default package
conda install pytorch::pytorch torchvision torchaudio -c pytorch", "stable,conda,macos,rocm5.x,python": "# ROCm is not available on MacOS, please use default package
conda install pytorch::pytorch torchvision torchaudio -c pytorch", "stable,conda,macos,accnone,python": "conda install pytorch::pytorch torchvision torchaudio -c pytorch", "stable,libtorch,macos,accnone,cplusplus": "Download arm64 libtorch here (ROCm and CUDA are not supported):
https://download.pytorch.org/libtorch/cpu/libtorch-macos-arm64-2.4.0.zip", "stable,libtorch,macos,cuda.x,cplusplus": "Download arm64 libtorch here (ROCm and CUDA are not supported):
https://download.pytorch.org/libtorch/cpu/libtorch-macos-arm64-2.4.0.zip", "stable,libtorch,macos,cuda.y,cplusplus": "Download arm64 libtorch here (ROCm and CUDA are not supported):
https://download.pytorch.org/libtorch/cpu/libtorch-macos-arm64-2.4.0.zip", "stable,libtorch,macos,cuda.z,cplusplus": "Download arm64 libtorch here (ROCm and CUDA are not supported):
https://download.pytorch.org/libtorch/cpu/libtorch-macos-arm64-2.4.0.zip", "stable,libtorch,macos,rocm5.x,cplusplus": "Download arm64 libtorch here (ROCm and CUDA are not supported):
https://download.pytorch.org/libtorch/cpu/libtorch-macos-arm64-2.4.0.zip", "stable,pip,windows,accnone,python": "pip3 install torch torchvision torchaudio", "stable,pip,windows,cuda.x,python": "pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118", "stable,pip,windows,cuda.y,python": "pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121", "stable,pip,windows,cuda.z,python": "pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124", "stable,pip,windows,rocm5.x,python": "NOTE: ROCm is not available on Windows", "stable,conda,windows,cuda.x,python": "conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia", "stable,conda,windows,cuda.y,python": "conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia", "stable,conda,windows,cuda.z,python": "conda install pytorch torchvision torchaudio pytorch-cuda=12.4 -c pytorch -c nvidia", "stable,conda,windows,rocm5.x,python": "NOTE: ROCm is not available on Windows", "stable,conda,windows,accnone,python": "conda install pytorch torchvision torchaudio cpuonly -c pytorch", "stable,libtorch,windows,accnone,cplusplus": "Download here (Release version):
https://download.pytorch.org/libtorch/cpu/libtorch-win-shared-with-deps-2.4.0%2Bcpu.zip
Download here (Debug version):
https://download.pytorch.org/libtorch/cpu/libtorch-win-shared-with-deps-debug-2.4.0%2Bcpu.zip", "stable,libtorch,windows,cuda.x,cplusplus": "Download here (Release version):
https://download.pytorch.org/libtorch/cu118/libtorch-win-shared-with-deps-2.4.0%2Bcu118.zip
Download here (Debug version):
https://download.pytorch.org/libtorch/cu118/libtorch-win-shared-with-deps-debug-2.4.0%2Bcu118.zip", "stable,libtorch,windows,cuda.y,cplusplus": "Download here (Release version):
https://download.pytorch.org/libtorch/cu121/libtorch-win-shared-with-deps-2.4.0%2Bcu121.zip
Download here (Debug version):
https://download.pytorch.org/libtorch/cu121/libtorch-win-shared-with-deps-debug-2.4.0%2Bcu121.zip", "stable,libtorch,windows,cuda.z,cplusplus": "Download here (Release version):
https://download.pytorch.org/libtorch/cu124/libtorch-win-shared-with-deps-2.4.0%2Bcu124.zip
Download here (Debug version):
https://download.pytorch.org/libtorch/cu124/libtorch-win-shared-with-deps-debug-2.4.0%2Bcu124.zip", "stable,libtorch,windows,rocm5.x,cplusplus": "NOTE: ROCm is not available on Windows"};
+ var object = {"preview,pip,linux,accnone,python": "pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu", "preview,pip,linux,cuda.x,python": "pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu118", "preview,pip,linux,cuda.y,python": "pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu121", "preview,pip,linux,cuda.z,python": "pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu124", "preview,pip,linux,rocm5.x,python": "pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm6.2", "preview,conda,linux,cuda.x,python": "conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch-nightly -c nvidia", "preview,conda,linux,cuda.y,python": "conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch-nightly -c nvidia", "preview,conda,linux,cuda.z,python": "conda install pytorch torchvision torchaudio pytorch-cuda=12.4 -c pytorch-nightly -c nvidia", "preview,conda,linux,rocm5.x,python": "NOTE: Conda packages are not currently available for ROCm, please use pip instead
", "preview,conda,linux,accnone,python": "conda install pytorch torchvision torchaudio cpuonly -c pytorch-nightly", "preview,libtorch,linux,accnone,cplusplus": "Download here (Pre-cxx11 ABI):
https://download.pytorch.org/libtorch/nightly/cpu/libtorch-shared-with-deps-latest.zip
Download here (cxx11 ABI):
https://download.pytorch.org/libtorch/nightly/cpu/libtorch-cxx11-abi-shared-with-deps-latest.zip", "preview,libtorch,linux,cuda.x,cplusplus": "Download here (Pre-cxx11 ABI):
https://download.pytorch.org/libtorch/nightly/cu118/libtorch-shared-with-deps-latest.zip
Download here (cxx11 ABI):
https://download.pytorch.org/libtorch/nightly/cu118/libtorch-cxx11-abi-shared-with-deps-latest.zip", "preview,libtorch,linux,cuda.y,cplusplus": "Download here (Pre-cxx11 ABI):
https://download.pytorch.org/libtorch/nightly/cu121/libtorch-shared-with-deps-latest.zip
Download here (cxx11 ABI):
https://download.pytorch.org/libtorch/nightly/cu121/libtorch-cxx11-abi-shared-with-deps-latest.zip", "preview,libtorch,linux,cuda.z,cplusplus": "Download here (Pre-cxx11 ABI):
https://download.pytorch.org/libtorch/nightly/cu124/libtorch-shared-with-deps-latest.zip
Download here (cxx11 ABI):
https://download.pytorch.org/libtorch/nightly/cu124/libtorch-cxx11-abi-shared-with-deps-latest.zip", "preview,libtorch,linux,rocm5.x,cplusplus": "Download here (Pre-cxx11 ABI):
https://download.pytorch.org/libtorch/nightly/rocm6.2/libtorch-shared-with-deps-latest.zip
Download here (cxx11 ABI):
https://download.pytorch.org/libtorch/nightly/rocm6.2/libtorch-cxx11-abi-shared-with-deps-latest.zip", "preview,pip,macos,cuda.x,python": "# CUDA is not available on MacOS, please use default package
pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu", "preview,pip,macos,cuda.y,python": "# CUDA is not available on MacOS, please use default package
pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu", "preview,pip,macos,cuda.z,python": "# CUDA is not available on MacOS, please use default package
pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu", "preview,pip,macos,rocm5.x,python": "# ROCm is not available on MacOS, please use default package
pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu", "preview,pip,macos,accnone,python": "pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu", "preview,conda,macos,cuda.x,python": "# CUDA is not available on MacOS, please use default package
conda install pytorch-nightly::pytorch torchvision torchaudio -c pytorch-nightly", "preview,conda,macos,cuda.y,python": "# CUDA is not available on MacOS, please use default package
conda install pytorch-nightly::pytorch torchvision torchaudio -c pytorch-nightly", "preview,conda,macos,cuda.z,python": "# CUDA is not available on MacOS, please use default package
conda install pytorch-nightly::pytorch torchvision torchaudio -c pytorch-nightly", "preview,conda,macos,rocm5.x,python": "# ROCm is not available on MacOS, please use default package
conda install pytorch-nightly::pytorch torchvision torchaudio -c pytorch-nightly", "preview,conda,macos,accnone,python": "conda install pytorch-nightly::pytorch torchvision torchaudio -c pytorch-nightly", "preview,libtorch,macos,accnone,cplusplus": "Download arm64 libtorch here (ROCm and CUDA are not supported):
https://download.pytorch.org/libtorch/nightly/cpu/libtorch-macos-arm64-latest.zip", "preview,libtorch,macos,cuda.x,cplusplus": "Download arm64 libtorch here (ROCm and CUDA are not supported):
https://download.pytorch.org/libtorch/nightly/cpu/libtorch-macos-arm64-latest.zip", "preview,libtorch,macos,cuda.y,cplusplus": "Download arm64 libtorch here (ROCm and CUDA are not supported):
https://download.pytorch.org/libtorch/nightly/cpu/libtorch-macos-arm64-latest.zip", "preview,libtorch,macos,cuda.z,cplusplus": "Download arm64 libtorch here (ROCm and CUDA are not supported):
https://download.pytorch.org/libtorch/nightly/cpu/libtorch-macos-arm64-latest.zip", "preview,libtorch,macos,rocm5.x,cplusplus": "Download arm64 libtorch here (ROCm and CUDA are not supported):
https://download.pytorch.org/libtorch/nightly/cpu/libtorch-macos-arm64-latest.zip", "preview,pip,windows,accnone,python": "pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cpu", "preview,pip,windows,cuda.x,python": "pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu118", "preview,pip,windows,cuda.y,python": "pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu121", "preview,pip,windows,cuda.z,python": "pip3 install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu124", "preview,pip,windows,rocm5.x,python": "NOTE: ROCm is not available on Windows", "preview,conda,windows,cuda.x,python": "conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch-nightly -c nvidia", "preview,conda,windows,cuda.y,python": "conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch-nightly -c nvidia", "preview,conda,windows,cuda.z,python": "conda install pytorch torchvision torchaudio pytorch-cuda=12.4 -c pytorch-nightly -c nvidia", "preview,conda,windows,rocm5.x,python": "NOTE: ROCm is not available on Windows", "preview,conda,windows,accnone,python": "conda install pytorch torchvision torchaudio cpuonly -c pytorch-nightly", "preview,libtorch,windows,accnone,cplusplus": "Download here (Release version):
https://download.pytorch.org/libtorch/nightly/cpu/libtorch-win-shared-with-deps-latest.zip
Download here (Debug version):
https://download.pytorch.org/libtorch/nightly/cpu/libtorch-win-shared-with-deps-debug-latest.zip", "preview,libtorch,windows,cuda.x,cplusplus": "Download here (Release version):
https://download.pytorch.org/libtorch/nightly/cu118/libtorch-win-shared-with-deps-latest.zip
Download here (Debug version):
https://download.pytorch.org/libtorch/nightly/cu118/libtorch-win-shared-with-deps-debug-latest.zip", "preview,libtorch,windows,cuda.y,cplusplus": "Download here (Release version):
https://download.pytorch.org/libtorch/nightly/cu121/libtorch-win-shared-with-deps-latest.zip
Download here (Debug version):
https://download.pytorch.org/libtorch/nightly/cu121/libtorch-win-shared-with-deps-debug-latest.zip", "preview,libtorch,windows,cuda.z,cplusplus": "Download here (Release version):
https://download.pytorch.org/libtorch/nightly/cu124/libtorch-win-shared-with-deps-latest.zip
Download here (Debug version):
https://download.pytorch.org/libtorch/nightly/cu124/libtorch-win-shared-with-deps-debug-latest.zip", "preview,libtorch,windows,rocm5.x,cplusplus": "NOTE: ROCm is not available on Windows", "stable,pip,linux,accnone,python": "pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cpu", "stable,pip,linux,cuda.x,python": "pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118", "stable,pip,linux,cuda.y,python": "pip3 install torch torchvision torchaudio", "stable,pip,linux,cuda.z,python": "pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124", "stable,pip,linux,rocm5.x,python": "pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.1", "stable,conda,linux,cuda.x,python": "conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia", "stable,conda,linux,cuda.y,python": "conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia", "stable,conda,linux,cuda.z,python": "conda install pytorch torchvision torchaudio pytorch-cuda=12.4 -c pytorch -c nvidia", "stable,conda,linux,rocm5.x,python": "NOTE: Conda packages are not currently available for ROCm, please use pip instead
", "stable,conda,linux,accnone,python": "conda install pytorch torchvision torchaudio cpuonly -c pytorch", "stable,libtorch,linux,accnone,cplusplus": "Download here (Pre-cxx11 ABI):
https://download.pytorch.org/libtorch/cpu/libtorch-shared-with-deps-2.4.1%2Bcpu.zip
Download here (cxx11 ABI):
https://download.pytorch.org/libtorch/cpu/libtorch-cxx11-abi-shared-with-deps-2.4.1%2Bcpu.zip", "stable,libtorch,linux,cuda.x,cplusplus": "Download here (Pre-cxx11 ABI):
https://download.pytorch.org/libtorch/cu118/libtorch-shared-with-deps-2.4.1%2Bcu118.zip
Download here (cxx11 ABI):
https://download.pytorch.org/libtorch/cu118/libtorch-cxx11-abi-shared-with-deps-2.4.1%2Bcu118.zip", "stable,libtorch,linux,cuda.y,cplusplus": "Download here (Pre-cxx11 ABI):
https://download.pytorch.org/libtorch/cu121/libtorch-shared-with-deps-2.4.1%2Bcu121.zip
Download here (cxx11 ABI):
https://download.pytorch.org/libtorch/cu121/libtorch-cxx11-abi-shared-with-deps-2.4.1%2Bcu121.zip", "stable,libtorch,linux,cuda.z,cplusplus": "Download here (Pre-cxx11 ABI):
https://download.pytorch.org/libtorch/cu124/libtorch-shared-with-deps-2.4.1%2Bcu124.zip
Download here (cxx11 ABI):
https://download.pytorch.org/libtorch/cu124/libtorch-cxx11-abi-shared-with-deps-2.4.1%2Bcu124.zip", "stable,libtorch,linux,rocm5.x,cplusplus": "Download here (Pre-cxx11 ABI):
https://download.pytorch.org/libtorch/rocm6.1/libtorch-shared-with-deps-2.4.1%2Brocm6.1.zip
Download here (cxx11 ABI):
https://download.pytorch.org/libtorch/rocm6.1/libtorch-cxx11-abi-shared-with-deps-2.4.1%2Brocm6.1.zip", "stable,pip,macos,cuda.x,python": "# CUDA is not available on MacOS, please use default package
pip3 install torch torchvision torchaudio", "stable,pip,macos,cuda.y,python": "# CUDA is not available on MacOS, please use default package
pip3 install torch torchvision torchaudio", "stable,pip,macos,cuda.z,python": "# CUDA is not available on MacOS, please use default package
pip3 install torch torchvision torchaudio", "stable,pip,macos,rocm5.x,python": "# ROCm is not available on MacOS, please use default package
pip3 install torch torchvision torchaudio", "stable,pip,macos,accnone,python": "pip3 install torch torchvision torchaudio", "stable,conda,macos,cuda.x,python": "# CUDA is not available on MacOS, please use default package
conda install pytorch::pytorch torchvision torchaudio -c pytorch", "stable,conda,macos,cuda.y,python": "# CUDA is not available on MacOS, please use default package
conda install pytorch::pytorch torchvision torchaudio -c pytorch", "stable,conda,macos,cuda.z,python": "# CUDA is not available on MacOS, please use default package
conda install pytorch::pytorch torchvision torchaudio -c pytorch", "stable,conda,macos,rocm5.x,python": "# ROCm is not available on MacOS, please use default package
conda install pytorch::pytorch torchvision torchaudio -c pytorch", "stable,conda,macos,accnone,python": "conda install pytorch::pytorch torchvision torchaudio -c pytorch", "stable,libtorch,macos,accnone,cplusplus": "Download arm64 libtorch here (ROCm and CUDA are not supported):
https://download.pytorch.org/libtorch/cpu/libtorch-macos-arm64-2.4.1.zip", "stable,libtorch,macos,cuda.x,cplusplus": "Download arm64 libtorch here (ROCm and CUDA are not supported):
https://download.pytorch.org/libtorch/cpu/libtorch-macos-arm64-2.4.1.zip", "stable,libtorch,macos,cuda.y,cplusplus": "Download arm64 libtorch here (ROCm and CUDA are not supported):
https://download.pytorch.org/libtorch/cpu/libtorch-macos-arm64-2.4.1.zip", "stable,libtorch,macos,cuda.z,cplusplus": "Download arm64 libtorch here (ROCm and CUDA are not supported):
https://download.pytorch.org/libtorch/cpu/libtorch-macos-arm64-2.4.1.zip", "stable,libtorch,macos,rocm5.x,cplusplus": "Download arm64 libtorch here (ROCm and CUDA are not supported):
https://download.pytorch.org/libtorch/cpu/libtorch-macos-arm64-2.4.1.zip", "stable,pip,windows,accnone,python": "pip3 install torch torchvision torchaudio", "stable,pip,windows,cuda.x,python": "pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu118", "stable,pip,windows,cuda.y,python": "pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121", "stable,pip,windows,cuda.z,python": "pip3 install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu124", "stable,pip,windows,rocm5.x,python": "NOTE: ROCm is not available on Windows", "stable,conda,windows,cuda.x,python": "conda install pytorch torchvision torchaudio pytorch-cuda=11.8 -c pytorch -c nvidia", "stable,conda,windows,cuda.y,python": "conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia", "stable,conda,windows,cuda.z,python": "conda install pytorch torchvision torchaudio pytorch-cuda=12.4 -c pytorch -c nvidia", "stable,conda,windows,rocm5.x,python": "NOTE: ROCm is not available on Windows", "stable,conda,windows,accnone,python": "conda install pytorch torchvision torchaudio cpuonly -c pytorch", "stable,libtorch,windows,accnone,cplusplus": "Download here (Release version):
https://download.pytorch.org/libtorch/cpu/libtorch-win-shared-with-deps-2.4.1%2Bcpu.zip
Download here (Debug version):
https://download.pytorch.org/libtorch/cpu/libtorch-win-shared-with-deps-debug-2.4.1%2Bcpu.zip", "stable,libtorch,windows,cuda.x,cplusplus": "Download here (Release version):
https://download.pytorch.org/libtorch/cu118/libtorch-win-shared-with-deps-2.4.1%2Bcu118.zip
Download here (Debug version):
https://download.pytorch.org/libtorch/cu118/libtorch-win-shared-with-deps-debug-2.4.1%2Bcu118.zip", "stable,libtorch,windows,cuda.y,cplusplus": "Download here (Release version):
https://download.pytorch.org/libtorch/cu121/libtorch-win-shared-with-deps-2.4.1%2Bcu121.zip
Download here (Debug version):
https://download.pytorch.org/libtorch/cu121/libtorch-win-shared-with-deps-debug-2.4.1%2Bcu121.zip", "stable,libtorch,windows,cuda.z,cplusplus": "Download here (Release version):
https://download.pytorch.org/libtorch/cu124/libtorch-win-shared-with-deps-2.4.1%2Bcu124.zip
Download here (Debug version):
https://download.pytorch.org/libtorch/cu124/libtorch-win-shared-with-deps-debug-2.4.1%2Bcu124.zip", "stable,libtorch,windows,rocm5.x,cplusplus": "NOTE: ROCm is not available on Windows"};
if (!object.hasOwnProperty(key)) {
$("#command").html(
diff --git a/ecosystem/index.html b/ecosystem/index.html
index fef09cdc4a35..fc66b86492ac 100644
--- a/ecosystem/index.html
+++ b/ecosystem/index.html
@@ -364,13 +364,13 @@
baal (bayesian active learning) aims to implement active learning using metrics of uncertainty derived from approximations of bayesian posteriors in neural networks.
+ClearML is a full system ML / DL experiment manager, versioning and ML-Ops solution.
PyTorch Lightning is a Keras-like ML library for PyTorch. It leaves core training and validation logic to you and automates the rest.
+Flair is a very simple framework for state-of-the-art natural language processing (NLP).
Substra is a federated learning Python library to run federated learning experiments at scale on real distributed data.
+A generalizable application framework for segmentation, regression, and classification using PyTorch
Machine learning metrics for distributed, scalable PyTorch applications.
+NeMo: a toolkit for conversational AI.
A PyTorch framework for deep learning on point clouds.
+Pyro is a universal probabilistic programming language (PPL) written in Python and supported by PyTorch on the backend.
library of algorithms to speed up neural network training
+TorchDrift is a data and concept drift library for PyTorch. It lets you monitor your PyTorch models to see if they operate within spec.
TorchDrift is a data and concept drift library for PyTorch. It lets you monitor your PyTorch models to see if they operate within spec.
+An open source hyperparameter optimization framework to automate hyperparameter search.
PySyft is a Python library for encrypted, privacy preserving deep learning.
+AdaptDL is a resource-adaptive deep learning training and scheduling framework.
TensorLy is a high level API for tensor methods and deep tensorized neural networks in Python that aims to make tensor learning simple.
+Deep Graph Library (DGL) is a Python package built for easy implementation of graph neural network model family, on top of PyTorch and other frameworks.
Polyaxon is a platform for building, training, and monitoring large-scale deep learning applications.
+Minimalist Neural Machine Translation toolkit for educational purposes
fastai is a library that simplifies training fast and accurate neural nets using modern best practices.
+Substra is a federated learning Python library to run federated learning experiments at scale on real distributed data.
TorchQuantum is a quantum classical simulation framework based on PyTorch. It supports statevector, density matrix simulation and pulse simulation on different hardware platforms such as CPUs and GPUs.
+skorch is a high-level library for PyTorch that provides full scikit-learn compatibility.
🚀 A simple way to train and use PyTorch models with multi-GPU, TPU, mixed-precision
+TorchIO is a set of tools to efficiently read, preprocess, sample, augment, and write 3D medical images in deep learning applications written in PyTorch.
A unified ensemble framework for PyTorch to improve the performance and robustness of your deep learning model.
+A PyTorch-based knowledge distillation toolkit for natural language processing
PyTorch Geometric Temporal is a temporal (dynamic) extension library for PyTorch Geometric.
+A powerful and flexible machine learning platform for drug discovery.
Pyro is a universal probabilistic programming language (PPL) written in Python and supported by PyTorch on the backend.
+Catalyst helps you write compact, but full-featured deep learning and reinforcement learning pipelines with a few lines of code.
A framework for elegantly configuring complex applications.
+baal (bayesian active learning) aims to implement active learning using metrics of uncertainty derived from approximations of bayesian posteriors in neural networks.
Flexible and powerful tensor operations for readable and reliable code.
+AllenNLP is an open-source research library built on PyTorch for designing and evaluating deep learning models for NLP.
A toolbox for adversarial robustness research. It contains modules for generating adversarial examples and defending against attacks.
+PyKale is a PyTorch library for multimodal learning and transfer learning with deep learning and dimensionality reduction on graphs, images, texts, and videos.
A PyTorch-based knowledge distillation toolkit for natural language processing
+Flexible and powerful tensor operations for readable and reliable code.
The easiest way to use deep metric learning in your application. Modular, flexible, and extensible.
+TorchQuantum is a quantum classical simulation framework based on PyTorch. It supports statevector, density matrix simulation and pulse simulation on different hardware platforms such as CPUs and GPUs.
Data-centric declarative deep learning framework
+Avalanche: an End-to-End Library for Continual Learning
PyTorch Geometric is a library for deep learning on irregular input data such as graphs, point clouds, and manifolds.
+Framework for reproducible classification of Alzheimer's Disease
TIAToolbox provides an easy-to-use API where researchers can use, adapt and create models for CPath.
+depyf is a tool to help users understand and adapt to PyTorch compiler torch.compile.
Avalanche: an End-to-End Library for Continual Learning
+FuseMedML is a python framework accelerating ML based discovery in the medical field by encouraging code reuse
Framework for reproducible classification of Alzheimer's Disease
+Forte is a toolkit for building NLP pipelines featuring composable components, convenient data interfaces, and cross-task interaction.
AdaptDL is a resource-adaptive deep learning training and scheduling framework.
+Hummingbird compiles trained ML models into tensor computation for faster inference.
AllenNLP is an open-source research library built on PyTorch for designing and evaluating deep learning models for NLP.
+Train PyTorch models with Differential Privacy
USB is a PyTorch-based Python package for Semi-Supervised Learning (SSL). It is easy-to-use/extend, affordable to small groups, and comprehensive for developing and evaluating SSL algorithms.
+Data-centric declarative deep learning framework
Train PyTorch models with Differential Privacy
+Ignite is a high-level library for training neural networks in PyTorch. It helps with writing compact, but full-featured training loops.
GPyTorch is a Gaussian process library implemented using PyTorch, designed for creating scalable, flexible Gaussian process models.
+PyTorch Lightning is a Keras-like ML library for PyTorch. It leaves core training and validation logic to you and automates the rest.
PyKale is a PyTorch library for multimodal learning and transfer learning with deep learning and dimensionality reduction on graphs, images, texts, and videos.
+Ray is a fast and simple framework for building and running distributed applications.
Hummingbird compiles trained ML models into tensor computation for faster inference.
+fastai is a library that simplifies training fast and accurate neural nets using modern best practices.
Pipeline Abstractions for Deep Learning in PyTorch
+PyTorch Geometric is a library for deep learning on irregular input data such as graphs, point clouds, and manifolds.
SimulAI is basically a toolkit with pipelines for physics-informed machine learning.
+Polyaxon is a platform for building, training, and monitoring large-scale deep learning applications.
Basic Utilities for PyTorch Natural Language Processing (NLP).
+higher is a library which facilitates the implementation of arbitrarily complex gradient-based meta-learning algorithms and nested optimisation loops with near-vanilla PyTorch.
torchdistill is a coding-free framework built on PyTorch for reproducible deep learning and knowledge distillation studies.
+Kornia is a differentiable computer vision library that consists of a set of routines and differentiable modules to solve generic CV problems.
Flair is a very simple framework for state-of-the-art natural language processing (NLP).
+Renate is a library providing tools for re-training pytorch models over time as new data becomes available.
Detectron2 is FAIR's next-generation platform for object detection and segmentation.
+SimulAI is basically a toolkit with pipelines for physics-informed machine learning.
MONAI provides domain-optimized foundational capabilities for developing healthcare imaging training workflows.
+A complete and open-sourced solution for injecting domain-specific knowledge into pre-trained LLM.
A Python toolbox for data mining on Partially-Observed Time Series (POTS) that helps engineers focus more on the core problems in their data rather than the missing parts.
+CrypTen is a framework for Privacy Preserving ML. Its goal is to make secure computing techniques accessible to ML practitioners.
PennyLane is a library for quantum ML, automatic differentiation, and optimization of hybrid quantum-classical computations.
+Stable Baselines3 (SB3) is a set of reliable implementations of reinforcement learning algorithms in PyTorch.
The Unified Machine Learning Framework
+PyTorch3D provides efficient, reusable components for 3D Computer Vision research with PyTorch.
Colossal-AI is a Unified Deep Learning System for Big Model Era
+PyTorch Geometric Temporal is a temporal (dynamic) extension library for PyTorch Geometric.
PyPose is a robotics-oriented, PyTorch-based library that combines deep perceptual models with physics-based optimization techniques, so that users can focus on their novel applications.
+octoml-profile is a python library and cloud service designed to provide a simple experience for assessing and optimizing the performance of PyTorch models.
Flower - A Friendly Federated Learning Framework
+PyTorch/TorchScript compiler for NVIDIA GPUs using TensorRT
A runtime fault injection tool for PyTorch.
+PennyLane is a library for quantum ML, automatic differentiation, and optimization of hybrid quantum-classical computations.
ML Prediction, Planning and Simulation for Self-Driving built on PyTorch.
+A library for state-of-the-art self-supervised learning
Ray is a fast and simple framework for building and running distributed applications.
+ParlAI is a unified platform for sharing, training, and evaluating dialog models across many tasks.
Horovod is a distributed training library for deep learning frameworks. Horovod aims to make distributed DL fast and easy to use.
+A runtime fault injection tool for PyTorch.
A complete and open-sourced solution for injecting domain-specific knowledge into pre-trained LLM.
+MONAI provides domain-optimized foundational capabilities for developing healthcare imaging training workflows.
Renate is a library providing tools for re-training pytorch models over time as new data becomes available.
+Diffusers provides pretrained diffusion models across multiple modalities, such as vision and audio, and serves as a modular toolbox for inference and training of diffusion models.
Catalyst helps you write compact, but full-featured deep learning and reinforcement learning pipelines with a few lines of code.
+ML Prediction, Planning and Simulation for Self-Driving built on PyTorch.
RoMa is a standalone library to handle rotation representations with PyTorch (rotation matrices, quaternions, rotation vectors, etc). It aims for robustness, ease-of-use, and efficiency.
+OpenMMLab covers a wide range of computer vision research topics including classification, detection, segmentation, and super-resolution.
Minimalist Neural Machine Translation toolkit for educational purposes
+A unified ensemble framework for PyTorch to improve the performance and robustness of your deep learning model.
skorch is a high-level library for PyTorch that provides full scikit-learn compatibility.
+TIAToolbox provides an easy-to-use API where researchers can use, adapt and create models for CPath.
TorchIO is a set of tools to efficiently read, preprocess, sample, augment, and write 3D medical images in deep learning applications written in PyTorch.
+A modular framework for vision & language multimodal research from Facebook AI Research (FAIR).
A generalizable application framework for segmentation, regression, and classification using PyTorch
+library of algorithms to speed up neural network training
Determined is a platform that helps deep learning teams train models more quickly, easily share GPU resources, and effectively collaborate.
+A PyTorch framework for deep learning on point clouds.
pystiche is a framework for Neural Style Transfer (NST) built upon PyTorch.
+PyPose is a robotics-oriented, PyTorch-based library that combines deep perceptual models with physics-based optimization techniques, so that users can focus on their novel applications.
NeMo: a toolkit for conversational AI.
+Determined is a platform that helps deep learning teams train models more quickly, easily share GPU resources, and effectively collaborate.
PyTorch3D provides efficient, reusable components for 3D Computer Vision research with PyTorch.
+A Python package for improving PyTorch performance on Intel platforms
Stable Baselines3 (SB3) is a set of reliable implementations of reinforcement learning algorithms in PyTorch.
+BoTorch is a library for Bayesian Optimization. It provides a modular, extensible interface for composing Bayesian optimization primitives.
Fast and extensible image augmentation library for different CV tasks like classification, segmentation, object detection and pose estimation.
+TensorLy is a high level API for tensor methods and deep tensorized neural networks in Python that aims to make tensor learning simple.
ParlAI is a unified platform for sharing, training, and evaluating dialog models across many tasks.
+🚀 A simple way to train and use PyTorch models with multi-GPU, TPU, mixed-precision
OpenMMLab covers a wide range of computer vision research topics including classification, detection, segmentation, and super-resolution.
+A Python toolbox for data mining on Partially-Observed Time Series (POTS) that helps engineers focus more on the core problems in their data rather than the missing parts.
DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective.
+A deep learning library for video understanding research. Hosts various video-focused models, datasets, training pipelines and more.
State-of-the-art Natural Language Processing for PyTorch.
+Pipeline Abstractions for Deep Learning in PyTorch
Deep Graph Library (DGL) is a Python package built for easy implementation of graph neural network model family, on top of PyTorch and other frameworks.
+RoMa is a standalone library to handle rotation representations with PyTorch (rotation matrices, quaternions, rotation vectors, etc). It aims for robustness, ease-of-use, and efficiency.
FairScale is a PyTorch extension library for high performance and large scale training on one or multiple machines/nodes.
+ONNX Runtime is a cross-platform inferencing and training accelerator.
Lightly is a computer vision framework for self-supervised learning.
+Captum (“comprehension” in Latin) is an open source, extensible library for model interpretability built on PyTorch.
ONNX Runtime is a cross-platform inferencing and training accelerator.
+Detectron2 is FAIR's next-generation platform for object detection and segmentation.
Diffusers provides pretrained diffusion models across multiple modalities, such as vision and audio, and serves as a modular toolbox for inference and training of diffusion models.
+The PopTorch interface library is a simple wrapper for running PyTorch programs directly on Graphcore IPUs.
A lightweight declarative PyTorch wrapper for context switching between devices, distributed modes, mixed-precision, and PyTorch extensions.
+DeepSpeed is a deep learning optimization library that makes distributed training easy, efficient, and effective.
The PopTorch interface library is a simple wrapper for running PyTorch programs directly on Graphcore IPUs.
+A framework for elegantly configuring complex applications.
A deep learning library for video understanding research. Hosts various video-focused models, datasets, training pipelines and more.
+A toolbox for adversarial robustness research. It contains modules for generating adversarial examples and defending against attacks.
Intel® Neural Compressor provides unified APIs for network compression technologies for faster inference
+Flower - A Friendly Federated Learning Framework
A Python package for improving PyTorch performance on Intel platforms
+Intel® Neural Compressor provides unified APIs for network compression technologies for faster inference
FuseMedML is a python framework accelerating ML based discovery in the medical field by encouraging code reuse
+Datasets, transforms, and models for geospatial data
An open source framework for deep learning on satellite and aerial imagery.
+GPyTorch is a Gaussian process library implemented using PyTorch, designed for creating scalable, flexible Gaussian process models.
BoTorch is a library for Bayesian Optimization. It provides a modular, extensible interface for composing Bayesian optimization primitives.
+USB is a PyTorch-based Python package for Semi-Supervised Learning (SSL). It is easy-to-use/extend, affordable to small groups, and comprehensive for developing and evaluating SSL algorithms.
ClearML is a full system ML / DL experiment manager, versioning and ML-Ops solution.
+Machine learning metrics for distributed, scalable PyTorch applications.
octoml-profile is a python library and cloud service designed to provide a simple experience for assessing and optimizing the performance of PyTorch models.
+Basic Utilities for PyTorch Natural Language Processing (NLP).
Kornia is a differentiable computer vision library that consists of a set of routines and differentiable modules to solve generic CV problems.
+Poutyne is a Keras-like framework for PyTorch and handles much of the boilerplating code needed to train neural networks.
An open source hyperparameter optimization framework to automate hyperparameter search.
+State-of-the-art Natural Language Processing for PyTorch.
A library for state-of-the-art self-supervised learning
+PFRL is a deep reinforcement learning library that implements various state-of-the-art deep reinforcement algorithms in Python using PyTorch.
Datasets, transforms, and models for geospatial data
+The Unified Machine Learning Framework
Forte is a toolkit for building NLP pipelines featuring composable components, convenient data interfaces, and cross-task interaction.
+An open source framework for deep learning on satellite and aerial imagery.
Ignite is a high-level library for training neural networks in PyTorch. It helps with writing compact, but full-featured training loops.
+FairScale is a PyTorch extension library for high performance and large scale training on one or multiple machines/nodes.
PyTorch/TorchScript compiler for NVIDIA GPUs using TensorRT
+Lightly is a computer vision framework for self-supervised learning.
depyf is a tool to help users understand and adapt to PyTorch compiler torch.compile.
+Horovod is a distributed training library for deep learning frameworks. Horovod aims to make distributed DL fast and easy to use.
A modular framework for vision & language multimodal research from Facebook AI Research (FAIR).
+pomegranate is a library of probabilistic models that is built in a modular manner and treats all models as the probability distributions that they are.
PFRL is a deep reinforcement learning library that implements various state-of-the-art deep reinforcement algorithms in Python using PyTorch.
+Fast and extensible image augmentation library for different CV tasks like classification, segmentation, object detection and pose estimation.
Captum (“comprehension” in Latin) is an open source, extensible library for model interpretability built on PyTorch.
+The easiest way to use deep metric learning in your application. Modular, flexible, and extensible.
pomegranate is a library of probabilistic models that is built in a modular manner and treats all models as the probability distributions that they are.
+torchdistill is a coding-free framework built on PyTorch for reproducible deep learning and knowledge distillation studies.
A powerful and flexible machine learning platform for drug discovery.
+Colossal-AI is a Unified Deep Learning System for Big Model Era
CrypTen is a framework for Privacy Preserving ML. Its goal is to make secure computing techniques accessible to ML practitioners.
+pystiche is a framework for Neural Style Transfer (NST) built upon PyTorch.