diff --git a/docs/tutorials/config.md b/docs/tutorials/config.md
index caf31dfb87..59bcfb8647 100644
--- a/docs/tutorials/config.md
+++ b/docs/tutorials/config.md
@@ -24,7 +24,7 @@ Please refer to [mmcv](https://mmcv.readthedocs.io/en/latest/utils.html#config)
 We follow the below style to name config files. Contributors are advised to follow the same style.

 ```
-{model}_[model setting]_{backbone}_{neck}_[norm setting]_[misc]_[batch_per_gpu x gpu]_{schedule}_{dataset}
+{model}_[model setting]_{backbone}_[neck]_[norm setting]_[misc]_[batch_per_gpu x gpu]_{schedule}_{dataset}
 ```

 `{xxx}` is a required field and `[yyy]` is optional.
@@ -32,7 +32,7 @@ We follow the below style to name config files. Contributors are advised to foll
 - `{model}`: model type like `hv_pointpillars` (Hard Voxelization PointPillars), `VoteNet`, etc.
 - `[model setting]`: specific setting for some model.
 - `{backbone}`: backbone type like `regnet-400mf`, `regnet-1.6gf`.
-- `{neck}`: neck type like `fpn`, `secfpn`.
+- `[neck]`: neck type like `fpn`, `secfpn`.
 - `[norm_setting]`: `bn` (Batch Normalization) is used unless specified, other norm layer types could be `gn` (Group Normalization), `sbn` (Synchronized Batch Normalization). `gn-head`/`gn-neck` indicates GN is applied in the head/neck only, while `gn-all` means GN is applied in the entire model, e.g. backbone, neck, head.
 - `[misc]`: miscellaneous settings/plugins of the model, e.g. `strong-aug` means using stronger augmentation strategies for training.
@@ -383,14 +383,14 @@ evaluation = dict(pipeline=[ # Pipeline is passed by eval_pipeline created befo
 ])
 lr = 0.008 # Learning rate of optimizers
 optimizer = dict( # Config used to build optimizer, support all the optimizers in PyTorch whose arguments are also the same as those in PyTorch
-    type='Adam', # Type of optimizers, # Type of optimizers, refer to https://github.com/open-mmlab/mmdetection/blob/master/mmdet/core/optimizer/default_constructor.py#L13 for more details
+    type='Adam', # Type of optimizers, refer to https://github.com/open-mmlab/mmcv/blob/v1.3.7/mmcv/runner/optimizer/default_constructor.py#L12 for more details
     lr=0.008) # Learning rate of optimizers, see the detailed usage of the parameters in the documentation of PyTorch
-optimizer_config = dict( # Config used to build the optimizer hook, refer to https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/hooks/optimizer.py#L8 for implementation details.
+optimizer_config = dict( # Config used to build the optimizer hook, refer to https://github.com/open-mmlab/mmcv/blob/v1.3.7/mmcv/runner/hooks/optimizer.py#L22 for implementation details.
     grad_clip=dict( # Config used to grad_clip
        max_norm=10, # max norm of the gradients
        norm_type=2)) # Type of the used p-norm. Can be 'inf' for infinity norm.
 lr_config = dict( # Learning rate scheduler config used to register LrUpdater hook
-    policy='step', # The policy of scheduler, also support CosineAnnealing, Cyclic, etc. Refer to details of supported LrUpdater from https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/hooks/lr_updater.py#L9.
+    policy='step', # The policy of the scheduler, also supports CosineAnnealing, Cyclic, etc. Refer to details of the supported LrUpdaters at https://github.com/open-mmlab/mmcv/blob/v1.3.7/mmcv/runner/hooks/lr_updater.py#L9.
     warmup=None, # The warmup policy, also supports `exp` and `constant`.
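     # Illustrative only, not part of the original config: a hypothetical linear
     # warmup would use the standard MMCV LrUpdaterHook arguments, e.g.
     # warmup='linear', warmup_iters=1000, warmup_ratio=1.0 / 1000,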
     step=[24, 32]) # Steps to decay the learning rate
 checkpoint_config = dict( # Config to set the checkpoint hook, refer to https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/hooks/checkpoint.py for implementation.
@@ -417,7 +417,7 @@ gpu_ids = range(0, 1) # ids of gpus

 Sometimes, you may set `_delete_=True` to ignore some of the fields in base configs. You may refer to [mmcv](https://mmcv.readthedocs.io/en/latest/utils.html#inherit-from-base-config-with-ignored-fields) for a simple illustration.

-In MMDetection or MMDetection3D, for example, to change the FPN neck of PointPillars with the following config.
+In MMDetection3D, for example, you can change the FPN neck of PointPillars with the following config.

 ```python
 model = dict(
diff --git a/docs/tutorials/customize_dataset.md b/docs/tutorials/customize_dataset.md
index 26c3f7add2..0501e19630 100644
--- a/docs/tutorials/customize_dataset.md
+++ b/docs/tutorials/customize_dataset.md
@@ -364,4 +364,4 @@ data = dict(

 - Before MMDetection v2.5.0, the dataset will filter out the empty GT images automatically if the classes are set and there is no way to disable that through the config. This is an undesirable behavior and introduces confusion because if the classes are not set, the dataset only filters the empty GT images when `filter_empty_gt=True` and `test_mode=False`. After MMDetection v2.5.0, we decouple the image filtering process and the classes modification, i.e., the dataset will only filter empty GT images when `filter_empty_gt=True` and `test_mode=False`, no matter whether the classes are set. Thus, setting the classes only influences the annotations of the classes used for training and users could decide whether to filter empty GT images by themselves.
 - Since the middle format only has box labels and does not contain the class names, when using `CustomDataset`, users cannot filter out the empty GT images through configs but can only do this offline.
-- The features for setting dataset classes and dataset filtering will be refactored to be more user-friendly in v2.8.0 or v2.9.0 (depends on the progress).
+- The features for setting dataset classes and dataset filtering will be refactored to be more user-friendly in the future (depending on the progress).
diff --git a/docs/tutorials/customize_models.md b/docs/tutorials/customize_models.md
index 2c2587353b..8028e3961d 100644
--- a/docs/tutorials/customize_models.md
+++ b/docs/tutorials/customize_models.md
@@ -56,7 +56,7 @@ custom_imports = dict(

 to the config file to avoid modifying the original code.

-#### 3. Use the backbone in your config file
+#### 3. Use the voxel encoder in your config file

 ```python
 model = dict(
@@ -170,14 +170,17 @@ custom_imports = dict(

 to the config file and avoid modifying the original code.

-#### 3. Modify the config file
+#### 3. Use the neck in your config file

 ```python
-neck=dict(
-    type='SECONDFPN',
-    in_channels=[64, 128, 256],
-    upsample_strides=[1, 2, 4],
-    out_channels=[128, 128, 128])
+model = dict(
+    ...
+    neck=dict(
+        type='SECONDFPN',
+        in_channels=[64, 128, 256],
+        upsample_strides=[1, 2, 4],
+        out_channels=[128, 128, 128]),
+    ...
 ```

 ### Add new heads
@@ -383,7 +386,7 @@ Alternatively, the users can add

 ```python
 custom_imports=dict(
-    imports=['mmdet3d.models.roi_heads.part_aggregation_roi_head', 'mmdet3d.models.bbox_heads.parta2_bbox_head'])
+    imports=['mmdet3d.models.roi_heads.part_aggregation_roi_head', 'mmdet3d.models.roi_heads.bbox_heads.parta2_bbox_head'])
 ```

 to the config file and achieve the same goal.
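For context, once these modules are imported and registered, a config references them by their registered type names. Below is a minimal sketch under the assumption that the registered names match the class names `PartAggregationROIHead` and `PartA2BboxHead`; the comments stand in for the real arguments, which are not reproduced here:

```python
model = dict(
    # ... other fields (voxel encoder, backbone, neck, etc.) ...
    roi_head=dict(
        type='PartAggregationROIHead',  # assumed registered name of the ROI head
        # ... other ROI head arguments ...
        bbox_head=dict(
            type='PartA2BboxHead',  # assumed registered name of the bbox head
            # ... bbox head arguments ...
        )))
```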
diff --git a/docs/tutorials/customize_runtime.md b/docs/tutorials/customize_runtime.md
index d33668e47a..bef219f5db 100644
--- a/docs/tutorials/customize_runtime.md
+++ b/docs/tutorials/customize_runtime.md
@@ -54,10 +54,10 @@ __all__ = ['MyOptimizer']
 You also need to import `optimizer` in `mmdet3d/core/__init__.py` by adding:

 ```python
-from .optimizer import * # noqa: F401, F403
+from .optimizer import *
 ```

-- Use `custom_imports` in the config to manually import it
+Or use `custom_imports` in the config to manually import it

 ```python
 custom_imports = dict(imports=['mmdet3d.core.optimizer.my_optimizer'], allow_failed_imports=False)
@@ -108,7 +108,7 @@ class MyOptimizerConstructor(object):

 ```

-The default optimizer constructor is implemented [here](https://github.com/open-mmlab/mmcv/blob/9ecd6b0d5ff9d2172c49a182eaa669e9f27bb8e7/mmcv/runner/optimizer/default_constructor.py#L11), which could also serve as a template for new optimizer constructor.
+The default optimizer constructor is implemented [here](https://github.com/open-mmlab/mmcv/blob/v1.3.7/mmcv/runner/optimizer/default_constructor.py#L11), which could also serve as a template for a new optimizer constructor.

 ### Additional settings

@@ -122,12 +122,12 @@ Tricks not implemented by the optimizer should be implemented through optimizer
         _delete_=True, grad_clip=dict(max_norm=35, norm_type=2))
     ```

-    If your config inherits the base config which already sets the `optimizer_config`, you might need `_delete_=True` to overide the unnecessary settings. See the [config documenetation](https://mmdetection.readthedocs.io/en/latest/config.html) for more details.
+    If your config inherits a base config that already sets `optimizer_config`, you might need `_delete_=True` to override the unnecessary settings. See the [config documentation](https://mmdetection.readthedocs.io/en/latest/tutorials/config.html) for more details.

 - __Use momentum schedule to accelerate model convergence__:
     We support a momentum scheduler that modifies the model's momentum according to the learning rate, which could make the model converge faster.
     The momentum scheduler is usually used together with the LR scheduler; for example, the following config is used in 3D detection to accelerate convergence.
-    For more details, please refer to the implementation of [CyclicLrUpdater](https://github.com/open-mmlab/mmcv/blob/f48241a65aebfe07db122e9db320c31b685dc674/mmcv/runner/hooks/lr_updater.py#L327) and [CyclicMomentumUpdater](https://github.com/open-mmlab/mmcv/blob/f48241a65aebfe07db122e9db320c31b685dc674/mmcv/runner/hooks/momentum_updater.py#L130).
+    For more details, please refer to the implementation of [CyclicLrUpdater](https://github.com/open-mmlab/mmcv/blob/v1.3.7/mmcv/runner/hooks/lr_updater.py#L358) and [CyclicMomentumUpdater](https://github.com/open-mmlab/mmcv/blob/v1.3.7/mmcv/runner/hooks/momentum_updater.py#L225).

     ```python
     lr_config = dict(
@@ -146,8 +146,8 @@ Tricks not implemented by the optimizer should be implemented through optimizer

 ## Customize training schedules

-By default we use step learning rate with 1x schedule, this calls [`StepLRHook`](https://github.com/open-mmlab/mmcv/blob/f48241a65aebfe07db122e9db320c31b685dc674/mmcv/runner/hooks/lr_updater.py#L153) in MMCV.
-We support many other learning rate schedule [here](https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/hooks/lr_updater.py), such as `CosineAnnealing` and `Poly` schedule. Here are some examples
+By default we use step learning rate with 1x schedule, which calls [`StepLRHook`](https://github.com/open-mmlab/mmcv/blob/v1.3.7/mmcv/runner/hooks/lr_updater.py#L167) in MMCV.
+We support many other learning rate schedules [here](https://github.com/open-mmlab/mmcv/blob/v1.3.7/mmcv/runner/hooks/lr_updater.py), such as the `CosineAnnealing` and `Poly` schedules. Here are some examples:

 - Poly schedule:

@@ -248,7 +248,7 @@ __all__ = [..., 'MyHook']

 ```

-- Use `custom_imports` in the config to manually import it
+Or use `custom_imports` in the config to manually import it

 ```python
 custom_imports = dict(imports=['mmdet3d.core.utils.my_hook'], allow_failed_imports=False)
@@ -295,17 +295,17 @@ There are some common hooks that are not registerd through `custom_hooks`, they

 In those hooks, only the logger hook has the `VERY_LOW` priority, while the others' priority is `NORMAL`.
 The above-mentioned tutorials already cover how to modify `optimizer_config`, `momentum_config`, and `lr_config`.
-Here we reveals how what we can do with `log_config`, `checkpoint_config`, and `evaluation`.
+Here we reveal what we can do with `log_config`, `checkpoint_config`, and `evaluation`.

 #### Checkpoint config

-The MMCV runner will use `checkpoint_config` to initialize [`CheckpointHook`](https://github.com/open-mmlab/mmcv/blob/9ecd6b0d5ff9d2172c49a182eaa669e9f27bb8e7/mmcv/runner/hooks/checkpoint.py#L9).
+The MMCV runner will use `checkpoint_config` to initialize [`CheckpointHook`](https://github.com/open-mmlab/mmcv/blob/v1.3.7/mmcv/runner/hooks/checkpoint.py#L9).

 ```python
 checkpoint_config = dict(interval=1)
 ```

-The users could set `max_keep_ckpts` to only save only small number of checkpoints or decide whether to store state dict of optimizer by `save_optimizer`. More details of the arguments are [here](https://mmcv.readthedocs.io/en/latest/api.html#mmcv.runner.CheckpointHook)
+The users could set `max_keep_ckpts` to save only a small number of checkpoints, or decide whether to store the state dict of the optimizer by `save_optimizer`. More details of the arguments are [here](https://mmcv.readthedocs.io/en/latest/api.html#mmcv.runner.CheckpointHook).

 #### Log config

@@ -323,7 +323,7 @@ log_config = dict(

 #### Evaluation config

-The config of `evaluation` will be used to initialize the [`EvalHook`](https://github.com/open-mmlab/mmdetection/blob/7a404a2c000620d52156774a5025070d9e00d918/mmdet/core/evaluation/eval_hooks.py#L8).
+The config of `evaluation` will be used to initialize the [`EvalHook`](https://github.com/open-mmlab/mmdetection/blob/v2.13.0/mmdet/core/evaluation/eval_hooks.py#L9).
 Except for the key `interval`, other arguments such as `metric` will be passed to `dataset.evaluate()`.

 ```python
diff --git a/docs/tutorials/data_pipeline.md b/docs/tutorials/data_pipeline.md
index 5f1b09ecfa..16f6ce48ba 100644
--- a/docs/tutorials/data_pipeline.md
+++ b/docs/tutorials/data_pipeline.md
@@ -15,7 +15,7 @@ defines how to process the annotations and a data pipeline defines all the steps
 A pipeline consists of a sequence of operations. Each operation takes a dict as input and also outputs a dict for the next transform.
 We present a classical pipeline in the following figure. The blue blocks are pipeline operations. With the pipeline going on, each operator can add new keys (marked as green) to the result dict or update the existing keys (marked as orange).
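Before the figure, a minimal sketch of that dict-in/dict-out contract may help. The class name and the `dummy` key are purely illustrative, and the `PIPELINES` registry import is assumed to follow the pattern used for custom pipelines elsewhere in these tutorials:

```python
from mmdet.datasets import PIPELINES


@PIPELINES.register_module()
class MyTransform(object):
    """Illustrative pipeline operation: consume a result dict, return a dict."""

    def __call__(self, results):
        # add a new key (or update an existing one) for the next transform
        results['dummy'] = True
        return results
```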
-![pipeline figure](../../resources/data_pipeline.png)
+![](../../resources/data_pipeline.png)

 The operations are categorized into data loading, pre-processing, formatting and test-time augmentation.
diff --git a/docs/tutorials/waymo.md b/docs/tutorials/waymo.md
index 1b423691cd..00fcda1aa5 100644
--- a/docs/tutorials/waymo.md
+++ b/docs/tutorials/waymo.md
@@ -2,7 +2,7 @@

 This page provides specific tutorials about the usage of MMDetection3D for the waymo dataset.

-## Prepare datasets
+## Prepare dataset

 Before preparing the waymo dataset, if you have only installed the requirements in `requirements/build.txt` and `requirements/runtime.txt`, please first install the official package for this dataset by running

@@ -43,7 +43,7 @@ mmdetection3d

 ```

-You can download Waymo open dataset V1.2 [HERE](https://waymo.com/open/download/) and its data split [HERE](https://drive.google.com/drive/folders/18BVuF_RYJF0NjZpt8SnfzANiakoRMf0o?usp=sharing). Then put tfrecord files into corresponding folders in `data/waymo/waymo_format/` and put the data split txt files into `data/waymo/kitti_format/ImageSets`. Download ground truth bin file for validation set [HERE](https://console.cloud.google.com/storage/browser/waymo_open_dataset_v_1_2_0/validation/ground_truth_objects) and put it into `data/waymo/waymo_format/`. A tip is that you can use `gsutil` to download the large-scale dataset with commands. You can take this [tool](https://github.com/RalphMao/Waymo-Dataset-Tool) as an example for more details. Subsequently, prepare waymo data by running
+You can download Waymo open dataset V1.2 [HERE](https://waymo.com/open/download/) and its data split [HERE](https://drive.google.com/drive/folders/18BVuF_RYJF0NjZpt8SnfzANiakoRMf0o?usp=sharing). Then put the tfrecord files into the corresponding folders in `data/waymo/waymo_format/` and put the data split txt files into `data/waymo/kitti_format/ImageSets`. Download the ground truth bin files for the validation set [HERE](https://console.cloud.google.com/storage/browser/waymo_open_dataset_v_1_2_0/validation/ground_truth_objects) and put them into `data/waymo/waymo_format/`. A tip is that you can use `gsutil` to download the large-scale dataset from the command line. You can take this [tool](https://github.com/RalphMao/Waymo-Dataset-Tool) as an example for more details. Subsequently, prepare waymo data by running

 ```bash
 python tools/create_data.py waymo --root-path ./data/waymo/ --out-dir ./data/waymo/ --workers 128 --extra-tag waymo
@@ -101,7 +101,7 @@ Considering there are many similar frames in the original dataset, we can basica

 ## Evaluation

-For evaluation on waymo, please follow the [instruction](https://github.com/waymo-research/waymo-open-dataset/blob/master/docs/quick_start.md) to build the binary file `compute_detection_metrics_main` for metrics computation and put it into `mmdet3d/core/evaluation/waymo_utils/`. Basically, you can follow the commands below to install bazel and build the file.
+For evaluation on waymo, please follow the [instruction](https://github.com/waymo-research/waymo-open-dataset/blob/master/docs/quick_start.md/) to build the binary file `compute_detection_metrics_main` for metrics computation and put it into `mmdet3d/core/evaluation/waymo_utils/`. Basically, you can follow the commands below to install bazel and build the file.

 ```shell
 git clone https://github.com/waymo-research/waymo-open-dataset.git waymo-od
@@ -129,7 +129,7 @@ Then you can evaluate your models on waymo.
 An example to evaluate PointPillars
     'submission_prefix=results/waymo-car/kitti_results'
 ```

-`pklfile_prefix` should be given in the `--eval-options` if the bin file is needed to be generated. For metrics, `waymo` is the recommended official evaluation prototype. Currently, evaluating with choice `kitti` is adapted from KITTI and the results for each difficulty are not exactly the same as the definition of KITTI. Instead, most of objects are marked with difficulty 0 currently, which will be fixed in the future. The reasons of its instability include the large computation for evalution, the lack of occlusion and truncation in the converted data, different definition of difficulty and different methods of computing average precision.
+`pklfile_prefix` should be given in the `--eval-options` if the bin file needs to be generated. For metrics, `waymo` is the recommended official evaluation prototype. Currently, evaluating with the choice `kitti` is adapted from KITTI and the results for each difficulty are not exactly the same as the definition of KITTI. Instead, most objects are marked with difficulty 0 currently, which will be fixed in the future. The reasons for its instability include the large computation for evaluation, the lack of occlusion and truncation in the converted data, different definitions of difficulty and different methods of computing average precision.

 **Notice**:

@@ -150,7 +150,7 @@ An example to test PointPillars on waymo with 8 GPUs, generate the bin files and

     'submission_prefix=results/waymo-car/kitti_results'
 ```

-After generating the bin file, you can simply build the binary file `create_submission` and use them to create a submission file by following the [instruction](https://github.com/waymo-research/waymo-open-dataset/blob/master/docs/quick_start.md). Basically, here are some example commands.
+After generating the bin file, you can simply build the binary file `create_submission` and use it to create a submission file by following the [instruction](https://github.com/waymo-research/waymo-open-dataset/blob/master/docs/quick_start.md/). Basically, here are some example commands.

 ```shell
 cd ../waymo-od/