
Commit

No public description
PiperOrigin-RevId: 725233618
tensorflower-gardener committed Feb 10, 2025
1 parent 6afb844 commit f43702f
Showing 4 changed files with 9 additions and 9 deletions.
docs/nlp/customize_encoder.ipynb: 2 changes (1 addition & 1 deletion)
@@ -497,7 +497,7 @@
"source": [
"#### Customize Feedforward Layer\n",
"\n",
"Similarly, one could also customize the feedforward layer.\n",
"Similiarly, one could also customize the feedforward layer.\n",
"\n",
"See [the source of `nlp.layers.GatedFeedforward`](https://github.com/tensorflow/models/blob/master/official/nlp/modeling/layers/gated_feedforward.py) for how to implement a customized feedforward layer.\n",
"\n",
docs/vision/object_detection.ipynb: 8 changes (4 additions & 4 deletions)
@@ -66,7 +66,7 @@
"This tutorial demonstrates how to:\n",
"\n",
"1. Use models from the Tensorflow Model Garden(TFM) package.\n",
"2. Fine-tune a pre-trained RetinaNet with ResNet-50 as backbone for object detection.\n",
"2. Fine-tune a pre-trained RetinanNet with ResNet-50 as backbone for object detection.\n",
"3. Export the tuned RetinaNet model"
]
},
@@ -323,7 +323,7 @@
"\n",
"Use the `retinanet_resnetfpn_coco` experiment configuration, as defined by `tfm.vision.configs.retinanet.retinanet_resnetfpn_coco`.\n",
"\n",
"The configuration defines an experiment to train a RetinaNet with Resnet-50 as backbone, FPN as decoder. Default Configuration is trained on [COCO](https://cocodataset.org/) train2017 and evaluated on [COCO](https://cocodataset.org/) val2017.\n",
"The configuration defines an experiment to train a RetinanNet with Resnet-50 as backbone, FPN as decoder. Default Configuration is trained on [COCO](https://cocodataset.org/) train2017 and evaluated on [COCO](https://cocodataset.org/) val2017.\n",
"\n",
"There are also other alternative experiments available such as\n",
"`retinanet_resnetfpn_coco`, `retinanet_spinenet_coco`, `fasterrcnn_resnetfpn_coco` and more. One can switch to them by changing the experiment name argument to the `get_exp_config` function.\n",
@@ -538,7 +538,7 @@
"id": "m-QW7DoKbD8z"
},
"source": [
"### Create category index dictionary to map the labels to corresponding label names."
"### Create category index dictionary to map the labels to coressponding label names."
]
},
{
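The category index this heading describes is a plain dictionary keyed by class id, in the shape the visualization utilities expect; a sketch with made-up ids and names:

```python
# Map numeric labels to label names; ids and names here are illustrative.
category_index = {
    1: {'id': 1, 'name': 'banana'},
    2: {'id': 2, 'name': 'apple'},
    3: {'id': 3, 'name': 'orange'},
}
```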
@@ -573,7 +573,7 @@
},
"source": [
"### Helper function for visualizing the results from TFRecords.\n",
"Use `visualize_boxes_and_labels_on_image_array` from `visualization_utils` to draw bounding boxes on the image."
"Use `visualize_boxes_and_labels_on_image_array` from `visualization_utils` to draw boudning boxes on the image."
]
},
{
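A sketch of calling that helper; the image, boxes, classes, and scores below are placeholder arrays standing in for decoded TFRecord contents:

```python
import numpy as np
from official.vision.utils.object_detection import visualization_utils

image = np.zeros((640, 640, 3), dtype=np.uint8)   # placeholder image
boxes = np.array([[0.1, 0.1, 0.6, 0.5]])          # [ymin, xmin, ymax, xmax], normalized
classes = np.array([1], dtype=np.int32)
scores = np.array([0.9])
category_index = {1: {'id': 1, 'name': 'banana'}}  # illustrative mapping

# Draws the boxes and label names onto `image` in place.
visualization_utils.visualize_boxes_and_labels_on_image_array(
    image, boxes, classes, scores, category_index,
    use_normalized_coordinates=True,
    min_score_thresh=0.4)
```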
docs/vision/semantic_segmentation.ipynb: 6 changes (3 additions & 3 deletions)
@@ -341,7 +341,7 @@
"\n",
"Use the `mnv2_deeplabv3_pascal` experiment configuration, as defined by `tfm.vision.configs.semantic_segmentation.mnv2_deeplabv3_pascal`.\n",
"\n",
"Please find all the registered experiments [here](https://www.tensorflow.org/api_docs/python/tfm/core/exp_factory/get_exp_config)\n",
"Please find all the registered experiements [here](https://www.tensorflow.org/api_docs/python/tfm/core/exp_factory/get_exp_config)\n",
"\n",
"The configuration defines an experiment to train a [DeepLabV3](https://arxiv.org/pdf/1706.05587.pdf) model with MobilenetV2 as backbone and [ASPP](https://arxiv.org/pdf/1606.00915v2.pdf) as decoder.\n",
"\n",
@@ -420,7 +420,7 @@
"exp_config.task.train_data.dtype = 'float32'\n",
"exp_config.task.train_data.output_size = [HEIGHT, WIDTH]\n",
"exp_config.task.train_data.preserve_aspect_ratio = False\n",
"exp_config.task.train_data.seed = 21 # Reproducible Training Data\n",
"exp_config.task.train_data.seed = 21 # Reproducable Training Data\n",
"\n",
"# Validation Data Config\n",
"exp_config.task.validation_data.input_path = val_data_tfrecords\n",
@@ -429,7 +429,7 @@
"exp_config.task.validation_data.output_size = [HEIGHT, WIDTH]\n",
"exp_config.task.validation_data.preserve_aspect_ratio = False\n",
"exp_config.task.validation_data.groundtruth_padded_size = [HEIGHT, WIDTH]\n",
"exp_config.task.validation_data.seed = 21 # Reproducible Validation Data\n",
"exp_config.task.validation_data.seed = 21 # Reproducable Validation Data\n",
"exp_config.task.validation_data.resize_eval_groundtruth = True # To enable validation loss"
]
},
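Once the train and validation data configs above are set, they are consumed through the task/trainer machinery; a sketch under the assumption of a default single-device strategy and a scratch `model_dir`:

```python
import tensorflow as tf
import tensorflow_models as tfm

exp_config = tfm.core.exp_factory.get_exp_config('mnv2_deeplabv3_pascal')
# ...apply the train_data / validation_data overrides shown above...

model_dir = '/tmp/semantic_seg_model'                  # assumed output dir
distribution_strategy = tf.distribute.get_strategy()   # default strategy

task = tfm.core.task_factory.get_task(exp_config.task, logging_dir=model_dir)
model, eval_logs = tfm.core.train_lib.run_experiment(
    distribution_strategy=distribution_strategy,
    task=task,
    mode='train_and_eval',
    params=exp_config,
    model_dir=model_dir,
    run_post_eval=True)
```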
2 changes (1 addition & 1 deletion)
@@ -27,7 +27,7 @@
* [Aperture size (f-number)](/official/projects/waste_identification_ml/circularnet-docs/content/system-req/choose-camera/factors.md#aperture-size-f-number)
* [Shutter speed](/official/projects/waste_identification_ml/circularnet-docs/content/system-req/choose-camera/factors.md#shutter-speed)
* [Table of specifications](/official/projects/waste_identification_ml/circularnet-docs/content/system-req/choose-camera/table-of-specs.md)
-* [Choose edge device hardware](/official/projects/waste_identification_ml/circularnet-docs/content/system-req/choose-edge-device.md)
+* [Choose edge device hardware](/official/projects/waste_identification_ml/circularnet-docs/content/system-req/choose-edge-device/_index.md)

**[Deploy CircularNet](/official/projects/waste_identification_ml/circularnet-docs/content/deploy-cn/_index.md)**

