This repository has been archived by the owner on Nov 17, 2023. It is now read-only.

fix ssd quantization script error #13843

Merged
merged 5 commits on Jan 15, 2019
Changes from 2 commits
4 changes: 2 additions & 2 deletions example/quantization/README.md
@@ -218,7 +218,7 @@ data/
 |---val.lst
 |---val.idx
 model/
-|---ssd_vgg16_reduced_300.params
+|---ssd_vgg16_reduced_300-0000.params
Contributor
Please add a note that the user needs to rename the downloaded files, e.g. ssd-val-fc19a535.idx to val.idx.
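To make the requested rename step concrete, here is a minimal sketch; the hashed prefix ssd-val-fc19a535 comes from the comment above, and the .rec/.lst extensions are assumed to follow the same naming as the .idx file:

```python
# Hypothetical helper for the rename suggested above; adjust the
# hashed prefix to match the files actually downloaded.
import os

for ext in ('rec', 'lst', 'idx'):
    src = 'data/ssd-val-fc19a535.' + ext
    dst = 'data/val.' + ext
    if os.path.exists(src):
        os.rename(src, dst)  # e.g. ssd-val-fc19a535.idx -> val.idx
```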

Contributor
Another suggestion: move/copy the SSD quantization instructions into the ssd/README as well.

 |---ssd_vgg16_reduced_300-symbol.json
 ```

@@ -322,4 +322,4 @@ by invoking `launch_quantize.sh`.

 **NOTE**:
 - This example has only been tested on Linux systems.
-- Performance is expected to decrease with GPU, however the memory footprint of a quantized model is smaller. The purpose of the quantization implementation is to minimize accuracy loss when converting FP32 models to INT8. MXNet community is working on improving the performance.
+- Performance is expected to decrease with GPU, however the memory footprint of a quantized model is smaller. The purpose of the quantization implementation is to minimize accuracy loss when converting FP32 models to INT8. MXNet community is working on improving the performance.
10 changes: 6 additions & 4 deletions example/ssd/quantization.py
@@ -119,12 +119,14 @@ def save_params(fname, arg_params, aux_params, logger=None):
     exclude_first_conv = args.exclude_first_conv
     excluded_sym_names = []
     rgb_mean = '123,117,104'
-    calib_layer = lambda name: name.endswith('_output')
     for i in range(1,19):
         excluded_sym_names += ['flatten'+str(i)]
     excluded_sym_names += ['relu4_3_cls_pred_conv',
                            'relu7_cls_pred_conv',
-                           'relu4_3_loc_pred_conv']
+                           'relu4_3_loc_pred_conv',
+                           'multibox_loc_pred',
+                           'concat0',
+                           'concat1']
     if exclude_first_conv:
         excluded_sym_names += ['conv1_1']

@@ -156,9 +158,9 @@ def save_params(fname, arg_params, aux_params, logger=None):
                                                     ctx=ctx, excluded_sym_names=excluded_sym_names,
                                                     calib_mode=calib_mode, calib_data=eval_iter,
                                                     num_calib_examples=num_calib_batches * batch_size,
-                                                    calib_layer=calib_layer, quantized_dtype=args.quantized_dtype,
+                                                    calib_layer=None, quantized_dtype=args.quantized_dtype,
                                                     label_names=(label_name,),
-                                                    calib_quantize_op = True,
+                                                    calib_quantize_op=True,
                                                     logger=logger)
     sym_name = '%s-symbol.json' % ('./model/cqssd_vgg16_reduced_300')
     param_name = '%s-%04d.params' % ('./model/cqssd_vgg16_reduced_300', epoch)
Contributor
Why not remove the 0000 here instead of adding 0000 in the README? Does it have any special meaning?

Member
It's a convention of MXNet: the epoch suffix lets us save different parameter files at different epochs.
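For readers unfamiliar with this convention, a minimal sketch (assuming MXNet 1.x and epoch 0): save_checkpoint writes `<prefix>-symbol.json` plus `<prefix>-%04d.params`, which is where the 0000 suffix in the README comes from.

```python
# Minimal illustration of MXNet's checkpoint naming convention.
import mxnet as mx

sym, arg_params, aux_params = mx.model.load_checkpoint('model/ssd_vgg16_reduced_300', 0)
# Writes model/ssd_vgg16_reduced_300-symbol.json and
# model/ssd_vgg16_reduced_300-0000.params (epoch 0 -> "0000").
mx.model.save_checkpoint('model/ssd_vgg16_reduced_300', 0, sym, arg_params, aux_params)
```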

Contributor
Thanks for the explanation. Agreed, the change makes sense :)
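As a usage note beyond the diff itself, here is a minimal sketch of loading the quantized model that quantization.py saves under ./model/cqssd_vgg16_reduced_300; the epoch 0 and the 300x300 input shape are assumptions based on the SSD-300 configuration, not part of the PR:

```python
# Load the calibrated, quantized SSD model saved by quantization.py
# and bind it for CPU inference.
import mxnet as mx

sym, arg_params, aux_params = mx.model.load_checkpoint('./model/cqssd_vgg16_reduced_300', 0)
mod = mx.mod.Module(symbol=sym, context=mx.cpu(), label_names=None)
mod.bind(for_training=False, data_shapes=[('data', (1, 3, 300, 300))])
mod.set_params(arg_params, aux_params)
```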
