Commit

Deploy to GitHub Pages on master [ci skip]
facebook-circleci-bot committed Oct 12, 2024
1 parent 02f0713 commit 641cae3
Showing 58 changed files with 945 additions and 944 deletions.
12 changes: 6 additions & 6 deletions assets/hub/datvuthanh_hybridnets.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
"id": "98f2c55d",
"id": "276bbb76",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -24,7 +24,7 @@
{
"cell_type": "code",
"execution_count": null,
"id": "3c4d78c1",
"id": "b49ed70a",
"metadata": {},
"outputs": [],
"source": [
@@ -34,7 +34,7 @@
},
{
"cell_type": "markdown",
"id": "617ee1cf",
"id": "6ac5a850",
"metadata": {},
"source": [
"## Model Description\n",
@@ -93,7 +93,7 @@
{
"cell_type": "code",
"execution_count": null,
"id": "44ad36f8",
"id": "ff83c420",
"metadata": {},
"outputs": [],
"source": [
@@ -109,7 +109,7 @@
},
{
"cell_type": "markdown",
"id": "bc7eeba7",
"id": "da8dd274",
"metadata": {},
"source": [
"### Citation\n",
@@ -120,7 +120,7 @@
{
"cell_type": "code",
"execution_count": null,
"id": "884a2b76",
"id": "e4797247",
"metadata": {
"attributes": {
"classes": [
12 changes: 6 additions & 6 deletions assets/hub/facebookresearch_WSL-Images_resnext.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
"id": "7e5330a6",
"id": "c953052f",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
"id": "9af91c0a",
"id": "5d15c01c",
"metadata": {},
"outputs": [],
"source": [
@@ -39,7 +39,7 @@
},
{
"cell_type": "markdown",
"id": "6c2a7fb1",
"id": "c3eae295",
"metadata": {},
"source": [
"All pre-trained models expect input images normalized in the same way,\n",
@@ -53,7 +53,7 @@
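The cell above begins the usual note that all of these pre-trained models expect input images normalized in the same way. As a sketch of what that normalization does, here is the channel-wise operation in NumPy, using the standard ImageNet mean/std that torchvision's pre-trained models document; the exact values are the common convention, not something stated in this diff:

```python
import numpy as np

# Standard ImageNet channel statistics used by torchvision pre-trained models.
IMAGENET_MEAN = np.array([0.485, 0.456, 0.406])
IMAGENET_STD = np.array([0.229, 0.224, 0.225])

def normalize_image(img):
    """Normalize an HxWx3 float image in [0, 1] channel-wise."""
    img = np.asarray(img, dtype=np.float64)
    return (img - IMAGENET_MEAN) / IMAGENET_STD
```

In the notebooks themselves this is done with `torchvision.transforms.Normalize`; the sketch only makes the arithmetic explicit.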
{
"cell_type": "code",
"execution_count": null,
"id": "7ec3379f",
"id": "49c4be66",
"metadata": {},
"outputs": [],
"source": [
@@ -67,7 +67,7 @@
{
"cell_type": "code",
"execution_count": null,
"id": "95fb9f78",
"id": "4fe3451a",
"metadata": {},
"outputs": [],
"source": [
@@ -99,7 +99,7 @@
},
{
"cell_type": "markdown",
"id": "c0cd64a5",
"id": "e4384049",
"metadata": {},
"source": [
"### Model Description\n",
10 changes: 5 additions & 5 deletions assets/hub/facebookresearch_pytorch-gan-zoo_dcgan.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
"id": "961a83c0",
"id": "647bbdbf",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
"id": "6374a401",
"id": "678754c4",
"metadata": {},
"outputs": [],
"source": [
@@ -34,7 +34,7 @@
},
{
"cell_type": "markdown",
"id": "9a57e357",
"id": "6c5d2ed5",
"metadata": {},
"source": [
"The input to the model is a noise vector of shape `(N, 120)` where `N` is the number of images to be generated.\n",
@@ -45,7 +45,7 @@
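The DCGAN cell above feeds the generator a noise tensor of shape `(N, 120)`. A minimal sketch of building such a batch, with NumPy standing in for the hub model's own noise helper (the dimension 120 is the one quoted above; everything else is illustrative):

```python
import numpy as np

NOISE_DIM = 120  # latent size quoted for this DCGAN

def build_noise(num_images, seed=None):
    """Return a (num_images, NOISE_DIM) batch of standard-normal noise."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal((num_images, NOISE_DIM))

noise = build_noise(4, seed=0)  # shape (4, 120)
```

In the actual notebook the equivalent batch comes from the model itself and is then passed to the generator to produce images.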
{
"cell_type": "code",
"execution_count": null,
"id": "8b77cafe",
"id": "da9ca5dc",
"metadata": {},
"outputs": [],
"source": [
@@ -63,7 +63,7 @@
},
{
"cell_type": "markdown",
"id": "93ed0976",
"id": "934f241e",
"metadata": {},
"source": [
"You should see an image similar to the one on the left.\n",
10 changes: 5 additions & 5 deletions assets/hub/facebookresearch_pytorch-gan-zoo_pgan.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
"id": "c3ccf61b",
"id": "f1574a86",
"metadata": {},
"source": [
"### This notebook is optionally accelerated with a GPU runtime.\n",
@@ -24,7 +24,7 @@
{
"cell_type": "code",
"execution_count": null,
"id": "5b216732",
"id": "b26a90ba",
"metadata": {},
"outputs": [],
"source": [
@@ -44,7 +44,7 @@
},
{
"cell_type": "markdown",
"id": "c6bfefc1",
"id": "68c4ed59",
"metadata": {},
"source": [
"The input to the model is a noise vector of shape `(N, 512)` where `N` is the number of images to be generated.\n",
@@ -55,7 +55,7 @@
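PGAN's input is likewise a noise batch, here of shape `(N, 512)`. One common way to explore such a latent space is linear interpolation between two noise vectors; a small sketch (the 512 comes from the text above, while the interpolation itself is a generic technique, not something this notebook necessarily performs):

```python
import numpy as np

LATENT_DIM = 512  # latent size quoted for this PGAN

def interpolate_latents(z0, z1, steps):
    """Linearly interpolate between two latent vectors, endpoints included."""
    t = np.linspace(0.0, 1.0, steps)[:, None]
    return (1.0 - t) * z0 + t * z1

rng = np.random.default_rng(0)
z0 = rng.standard_normal(LATENT_DIM)
z1 = rng.standard_normal(LATENT_DIM)
path = interpolate_latents(z0, z1, steps=8)  # shape (8, 512)
```

Feeding each row of `path` to the generator would yield a smooth morph between the two endpoint images.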
{
"cell_type": "code",
"execution_count": null,
"id": "2c70f540",
"id": "a7ad7cbd",
"metadata": {},
"outputs": [],
"source": [
@@ -74,7 +74,7 @@
},
{
"cell_type": "markdown",
"id": "28ab7dff",
"id": "7ae5efc0",
"metadata": {},
"source": [
"You should see an image similar to the one on the left.\n",
36 changes: 18 additions & 18 deletions assets/hub/facebookresearch_pytorchvideo_resnet.ipynb
@@ -2,7 +2,7 @@
"cells": [
{
"cell_type": "markdown",
"id": "22d57686",
"id": "f84ba213",
"metadata": {},
"source": [
"# 3D ResNet\n",
@@ -22,7 +22,7 @@
{
"cell_type": "code",
"execution_count": null,
"id": "5d80d266",
"id": "70c83534",
"metadata": {},
"outputs": [],
"source": [
@@ -33,7 +33,7 @@
},
{
"cell_type": "markdown",
"id": "b0259433",
"id": "5d3ff11a",
"metadata": {},
"source": [
"Import remaining functions:"
@@ -42,7 +42,7 @@
{
"cell_type": "code",
"execution_count": null,
"id": "4e2f6a64",
"id": "159a75fd",
"metadata": {},
"outputs": [],
"source": [
@@ -64,7 +64,7 @@
},
{
"cell_type": "markdown",
"id": "bcfc8cf9",
"id": "63f3bd7d",
"metadata": {},
"source": [
"#### Setup\n",
@@ -75,7 +75,7 @@
{
"cell_type": "code",
"execution_count": null,
"id": "a7c790df",
"id": "a4bfee3c",
"metadata": {
"attributes": {
"classes": [
@@ -94,7 +94,7 @@
},
{
"cell_type": "markdown",
"id": "2a856bd5",
"id": "a2d0b3f1",
"metadata": {},
"source": [
"Download the id to label mapping for the Kinetics 400 dataset on which the torch hub models were trained. This will be used to get the category label names from the predicted class ids."
@@ -103,7 +103,7 @@
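The cell above downloads the Kinetics-400 class-name file and uses it to turn predicted class ids back into label names. The core of that step is inverting a label-to-id mapping; a sketch on a toy mapping (the real file maps class names to integer ids, but the two entries below are made up for illustration):

```python
# Hypothetical excerpt of the class-name file: {label: id}, as in Kinetics-400.
classnames = {"archery": 0, "baking cookies": 12}

# Invert to id -> label so a predicted class id can be mapped to a name.
id_to_classname = {v: k for k, v in classnames.items()}
```

After inference, indexing `id_to_classname` with each top-k class id yields the human-readable category names.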
{
"cell_type": "code",
"execution_count": null,
"id": "ea52a918",
"id": "84ad580a",
"metadata": {},
"outputs": [],
"source": [
@@ -116,7 +116,7 @@
{
"cell_type": "code",
"execution_count": null,
"id": "47ffca80",
"id": "975a0c57",
"metadata": {},
"outputs": [],
"source": [
@@ -131,7 +131,7 @@
},
{
"cell_type": "markdown",
"id": "6c46a602",
"id": "17a9f21c",
"metadata": {},
"source": [
"#### Define input transform"
@@ -140,7 +140,7 @@
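The input-transform cell (collapsed in this diff) typically includes a uniform temporal subsample that picks a fixed number of frames evenly across the clip. PyTorchVideo ships this as `UniformTemporalSubsample`; the NumPy sketch below only shows the index math behind it, not the library's implementation:

```python
import numpy as np

def uniform_temporal_subsample(frames, num_samples):
    """Pick num_samples frames evenly spaced over the clip length.

    frames: array whose first axis is time (T, ...).
    """
    total = frames.shape[0]
    idx = np.linspace(0, total - 1, num_samples).round().astype(int)
    return frames[idx]
```

For a 10-frame clip sampled down to 4 frames, this keeps frames 0, 3, 6, and 9, so the first and last frames are always retained.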
{
"cell_type": "code",
"execution_count": null,
"id": "cb63e32d",
"id": "5824a1f4",
"metadata": {},
"outputs": [],
"source": [
@@ -174,7 +174,7 @@
},
{
"cell_type": "markdown",
"id": "b74f1e93",
"id": "4d15949e",
"metadata": {},
"source": [
"#### Run Inference\n",
@@ -185,7 +185,7 @@
{
"cell_type": "code",
"execution_count": null,
"id": "ca353b8b",
"id": "db623c98",
"metadata": {},
"outputs": [],
"source": [
@@ -197,7 +197,7 @@
},
{
"cell_type": "markdown",
"id": "7fc64eaf",
"id": "2cd4d7d2",
"metadata": {},
"source": [
"Load the video and transform it to the input format required by the model."
@@ -206,7 +206,7 @@
{
"cell_type": "code",
"execution_count": null,
"id": "d1ae0a27",
"id": "ed4246af",
"metadata": {},
"outputs": [],
"source": [
@@ -231,7 +231,7 @@
},
{
"cell_type": "markdown",
"id": "9350b6be",
"id": "51d5cd60",
"metadata": {},
"source": [
"#### Get Predictions"
@@ -240,7 +240,7 @@
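The prediction cell (collapsed here) conventionally ends by taking a softmax over the raw class scores and reading off the top-k class ids. The same step in NumPy, with toy scores and k chosen to mirror the usual top-5 report (this is a generic sketch of the technique, not the notebook's exact code):

```python
import numpy as np

def topk_predictions(scores, k=5):
    """Softmax over raw class scores, then the k highest-probability ids."""
    e = np.exp(scores - scores.max())  # subtract max for numerical stability
    probs = e / e.sum()
    top_ids = np.argsort(probs)[::-1][:k]
    return top_ids, probs[top_ids]
```

Each returned id would then be looked up in the Kinetics id-to-label mapping to print the predicted action names.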
{
"cell_type": "code",
"execution_count": null,
"id": "ed0e9041",
"id": "40409862",
"metadata": {},
"outputs": [],
"source": [
@@ -259,7 +259,7 @@
},
{
"cell_type": "markdown",
"id": "ee83d0bd",
"id": "8af6b3ec",
"metadata": {},
"source": [
"### Model Description\n",
