Commit bb7c64c (PiperOrigin-RevId: 684582762)
No public description
tensorflower-gardener committed Oct 10, 2024
1 changed file with 63 additions and 0 deletions: official/projects/waste_identification_ml/README.md
--config_file="config.yaml"`
10. You can also start a screen session and run the training in the background.
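
    For example, a minimal `screen` workflow (assuming GNU screen is installed;
    the session name here is arbitrary) could look like:

    ```shell
    screen -S train    # start a named session
    # run the training command from step 9 inside the session,
    # then press Ctrl+A followed by D to detach; training keeps running
    screen -r train    # reattach later to check progress
    ```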

## Config file parameters

- `annotation_file` - path to the validation file in COCO JSON format.
- `init_checkpoint` - path to the checkpoints for transfer learning.
- `init_checkpoint_modules` - whether to load the backbone, the decoder, or
  both from the checkpoint.
- `freeze_backbone` - whether to freeze the backbone during training.
- `input_size` - the input image size the model is trained with.
- `num_classes` - total number of classes + 1 (for the background class).
- `per_category_metrics` - set this if you need metrics reported for each class.
- `global_batch_size` - batch size.
- `input_path` - path to the training dataset.
- `parser` - contains the data augmentation operations.
- `steps_per_loop` - number of steps to complete one epoch. It is usually
  `total training data size / batch size`.
- `summary_interval` - how often the training metrics are written to summaries.
- `train_steps` - total steps for training. It is equal to
  `steps_per_loop x epochs`.
- `validation_interval` - how often the validation data is evaluated.
- `validation_steps` - number of steps to cover the validation data. It is
  equal to `validation data size / batch size`.
- `warmup_learning_rate` - the starting value for learning-rate warmup, a
  strategy that gradually increases the learning rate from a very low value to
  the desired initial learning rate over a predefined number of iterations or
  epochs. It stabilizes training in the early stages by letting the model
  adapt to the data slowly before a higher learning rate is used (see the
  sketch after this list).
- `warmup_steps` - number of steps over which the warmup runs.
- `initial_learning_rate` - the value of the learning rate at the very start
  of the training process, i.e. the rate the warmup ramps up to.
- `checkpoint_interval` - number of steps between checkpoint exports.
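
To make the warmup behaviour concrete, here is a minimal Python sketch of a
linear warmup schedule. It is illustrative only, not the scheduler the
training library actually uses; the real schedule is configured in the
optimizer section of the config file.

```python
def warmup_lr(step, warmup_steps, warmup_learning_rate, initial_learning_rate):
  """Linearly ramps the learning rate from warmup_learning_rate to
  initial_learning_rate over the first warmup_steps steps."""
  if step >= warmup_steps:
    return initial_learning_rate  # warmup done; the decay schedule takes over
  fraction = step / warmup_steps
  return warmup_learning_rate + fraction * (
      initial_learning_rate - warmup_learning_rate)

# Sample the schedule at a few points, using the values from the example
# below (warmup_steps = 80, warmup lr = 0.0001, initial lr = 0.001).
for step in (0, 20, 40, 80):
  print(f'step {step:3d}: lr = {warmup_lr(step, 80, 0.0001, 0.001):.6f}')
```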

A common way to calculate these parameters is shown below:

```python
total_training_samples = 4389
total_validation_samples = 485

train_batch_size = 512
val_batch_size = 128
num_epochs = 700
warmup_learning_rate = 0.0001
initial_learning_rate = 0.001

steps_per_loop = total_training_samples // train_batch_size
summary_interval = steps_per_loop
train_steps = num_epochs * steps_per_loop
validation_interval = steps_per_loop
validation_steps = total_validation_samples // val_batch_size
warmup_steps = steps_per_loop * 10
checkpoint_interval = steps_per_loop * 5
decay_steps = int(train_steps)

print(f'steps_per_loop: {steps_per_loop}')
print(f'summary_interval: {summary_interval}')
print(f'train_steps: {train_steps}')
print(f'validation_interval: {validation_interval}')
print(f'validation_steps: {validation_steps}')
print(f'warmup_steps: {warmup_steps}')
print(f'warmup_learning_rate: {warmup_learning_rate}')
print(f'initial_learning_rate: {initial_learning_rate}')
print(f'decay_steps: {decay_steps}')
print(f'checkpoint_interval: {checkpoint_interval}')
```
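
With the sample counts above (4389 training and 485 validation samples), the
snippet prints:

```
steps_per_loop: 8
summary_interval: 8
train_steps: 5600
validation_interval: 8
validation_steps: 3
warmup_steps: 80
warmup_learning_rate: 0.0001
initial_learning_rate: 0.001
decay_steps: 5600
checkpoint_interval: 40
```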

## Authors and Maintainers
- Umair Sabir