From f43702f7117f63cfd0a2594ff617d3faf56bbf1d Mon Sep 17 00:00:00 2001
From: "A. Unique TensorFlower"
Date: Mon, 10 Feb 2025 08:57:24 -0800
Subject: [PATCH] No public description

PiperOrigin-RevId: 725233618
---
 docs/nlp/customize_encoder.ipynb        | 2 +-
 docs/vision/object_detection.ipynb      | 8 ++++----
 docs/vision/semantic_segmentation.ipynb | 6 +++---
 .../circularnet-docs/content/_index.md  | 2 +-
 4 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/docs/nlp/customize_encoder.ipynb b/docs/nlp/customize_encoder.ipynb
index 92baee21da4..7e6fd0f32af 100644
--- a/docs/nlp/customize_encoder.ipynb
+++ b/docs/nlp/customize_encoder.ipynb
@@ -497,7 +497,7 @@
    "source": [
     "#### Customize Feedforward Layer\n",
     "\n",
-    "Similiarly, one could also customize the feedforward layer.\n",
+    "Similarly, one could also customize the feedforward layer.\n",
     "\n",
     "See [the source of `nlp.layers.GatedFeedforward`](https://github.com/tensorflow/models/blob/master/official/nlp/modeling/layers/gated_feedforward.py) for how to implement a customized feedforward layer.\n",
     "\n",
diff --git a/docs/vision/object_detection.ipynb b/docs/vision/object_detection.ipynb
index 8fa9ded6d40..f27c4b0d509 100644
--- a/docs/vision/object_detection.ipynb
+++ b/docs/vision/object_detection.ipynb
@@ -66,7 +66,7 @@
    "This tutorial demonstrates how to:\n",
    "\n",
    "1. Use models from the Tensorflow Model Garden(TFM) package.\n",
-   "2. Fine-tune a pre-trained RetinanNet with ResNet-50 as backbone for object detection.\n",
+   "2. Fine-tune a pre-trained RetinaNet with ResNet-50 as backbone for object detection.\n",
    "3. Export the tuned RetinaNet model"
   ]
  },
@@ -323,7 +323,7 @@
    "\n",
    "Use the `retinanet_resnetfpn_coco` experiment configuration, as defined by `tfm.vision.configs.retinanet.retinanet_resnetfpn_coco`.\n",
    "\n",
-   "The configuration defines an experiment to train a RetinanNet with Resnet-50 as backbone, FPN as decoder. Default Configuration is trained on [COCO](https://cocodataset.org/) train2017 and evaluated on [COCO](https://cocodataset.org/) val2017.\n",
+   "The configuration defines an experiment to train a RetinaNet with Resnet-50 as backbone, FPN as decoder. Default Configuration is trained on [COCO](https://cocodataset.org/) train2017 and evaluated on [COCO](https://cocodataset.org/) val2017.\n",
    "\n",
    "There are also other alternative experiments available such as\n",
    "`retinanet_resnetfpn_coco`, `retinanet_spinenet_coco`, `fasterrcnn_resnetfpn_coco` and more. One can switch to them by changing the experiment name argument to the `get_exp_config` function.\n",
@@ -538,7 +538,7 @@
    "id": "m-QW7DoKbD8z"
   },
   "source": [
-   "### Create category index dictionary to map the labels to coressponding label names."
+   "### Create category index dictionary to map the labels to corresponding label names."
   ]
  },
  {
@@ -573,7 +573,7 @@
   },
   "source": [
    "### Helper function for visualizing the results from TFRecords.\n",
-   "Use `visualize_boxes_and_labels_on_image_array` from `visualization_utils` to draw boudning boxes on the image."
+   "Use `visualize_boxes_and_labels_on_image_array` from `visualization_utils` to draw bounding boxes on the image."
   ]
  },
  {
diff --git a/docs/vision/semantic_segmentation.ipynb b/docs/vision/semantic_segmentation.ipynb
index 12210beb851..76e1230ad1e 100644
--- a/docs/vision/semantic_segmentation.ipynb
+++ b/docs/vision/semantic_segmentation.ipynb
@@ -341,7 +341,7 @@
    "\n",
    "Use the `mnv2_deeplabv3_pascal` experiment configuration, as defined by `tfm.vision.configs.semantic_segmentation.mnv2_deeplabv3_pascal`.\n",
    "\n",
-   "Please find all the registered experiements [here](https://www.tensorflow.org/api_docs/python/tfm/core/exp_factory/get_exp_config)\n",
+   "Please find all the registered experiments [here](https://www.tensorflow.org/api_docs/python/tfm/core/exp_factory/get_exp_config)\n",
    "\n",
    "The configuration defines an experiment to train a [DeepLabV3](https://arxiv.org/pdf/1706.05587.pdf) model with MobilenetV2 as backbone and [ASPP](https://arxiv.org/pdf/1606.00915v2.pdf) as decoder.\n",
    "\n",
@@ -420,7 +420,7 @@
    "exp_config.task.train_data.dtype = 'float32'\n",
    "exp_config.task.train_data.output_size = [HEIGHT, WIDTH]\n",
    "exp_config.task.train_data.preserve_aspect_ratio = False\n",
-   "exp_config.task.train_data.seed = 21 # Reproducable Training Data\n",
+   "exp_config.task.train_data.seed = 21 # Reproducible Training Data\n",
    "\n",
    "# Validation Data Config\n",
    "exp_config.task.validation_data.input_path = val_data_tfrecords\n",
@@ -429,7 +429,7 @@
    "exp_config.task.validation_data.output_size = [HEIGHT, WIDTH]\n",
    "exp_config.task.validation_data.preserve_aspect_ratio = False\n",
    "exp_config.task.validation_data.groundtruth_padded_size = [HEIGHT, WIDTH]\n",
-   "exp_config.task.validation_data.seed = 21 # Reproducable Validation Data\n",
+   "exp_config.task.validation_data.seed = 21 # Reproducible Validation Data\n",
    "exp_config.task.validation_data.resize_eval_groundtruth = True # To enable validation loss"
   ]
  },
diff --git a/official/projects/waste_identification_ml/circularnet-docs/content/_index.md b/official/projects/waste_identification_ml/circularnet-docs/content/_index.md
index 3096e00fce3..5e5d2040517 100644
--- a/official/projects/waste_identification_ml/circularnet-docs/content/_index.md
+++ b/official/projects/waste_identification_ml/circularnet-docs/content/_index.md
@@ -27,7 +27,7 @@
 * [Aperture size (f-number)](/official/projects/waste_identification_ml/circularnet-docs/content/system-req/choose-camera/factors.md#aperture-size-f-number)
 * [Shutter speed](/official/projects/waste_identification_ml/circularnet-docs/content/system-req/choose-camera/factors.md#shutter-speed)
 * [Table of specifications](/official/projects/waste_identification_ml/circularnet-docs/content/system-req/choose-camera/table-of-specs.md)
-* [Choose edge device hardware](/official/projects/waste_identification_ml/circularnet-docs/content/system-req/choose-edge-device.md)
+* [Choose edge device hardware](/official/projects/waste_identification_ml/circularnet-docs/content/system-req/choose-edge-device/_index.md)
 
 **[Deploy CircularNet](/official/projects/waste_identification_ml/circularnet-docs/content/deploy-cn/_index.md)**
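
---
Note (not part of the patch): the notebook cells touched above all drive the
same TFM experiment-config API. A minimal sketch of that flow, assuming
`tf-models-official` is installed; the HEIGHT/WIDTH values below are
placeholders, while the experiment name and override fields are the ones
quoted in the hunks:

    import tensorflow_models as tfm

    # Look up a registered experiment configuration by name, as the
    # semantic segmentation notebook does. Other registered names such as
    # 'retinanet_resnetfpn_coco' work the same way.
    exp_config = tfm.core.exp_factory.get_exp_config('mnv2_deeplabv3_pascal')

    # Override input-pipeline fields, mirroring the patched notebook cells.
    HEIGHT, WIDTH = 512, 512  # placeholder output size
    exp_config.task.train_data.output_size = [HEIGHT, WIDTH]
    exp_config.task.train_data.seed = 21       # reproducible training data
    exp_config.task.validation_data.seed = 21  # reproducible validation data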