
Testing HoP with nuScenes mini Split #3

Open
MangoDragon opened this issue Apr 20, 2024 · 4 comments
Labels
enhancement New feature or request

Comments

@MangoDragon

Hi,

I wanted to write this in a discussion, but I couldn't find the section for it.

Is it possible to test HoP without using the full (very large) nuScenes dataset or training it? I would just like to give it data from a smaller dataset, such as the mini subset (or my own recordings, etc.), and see the 3D bounding boxes.

Kind regards

@abhi1kumar abhi1kumar added the enhancement New feature or request label Apr 20, 2024
@abhi1kumar
Owner

abhi1kumar commented Apr 20, 2024

Hi @MangoDragon

Thank you for your interest in SeaBird.

Is it possible to test HoP without using the full (very large) nuScenes dataset or training it? I would just like to give it data from a smaller dataset, such as the mini subset (or my own recordings, etc.).

The mini subset should be possible. Please check the converter file; it currently supports the train, val, and test sets. To support the mini set:

    elif nuscenes_version == 'v1.0-mini':
        set = 'mini'
        dataset = pickle.load(
            open('./data/nuscenes/%s_infos_%s.pkl' % (extra_tag, set), 'rb'))
        for id in range(len(dataset['infos'])):
            if id % 10 == 0:
                print('%d/%d' % (id, len(dataset['infos'])))
            info = dataset['infos'][id]
            # get sweep adjacent frame info
            sample = nuscenes.get('sample', info['token'])

            dataset['infos'][id]['scene_token'] = sample['scene_token']
        with open('./data/nuscenes/%s_infos_%s.pkl' % (extra_tag, set),
                'wb') as fid:
            pickle.dump(dataset, fid)

see the 3D bounding boxes.

The HoP baseline builds on the mmdetection3d codebase. You could use its visualization guide to visualize the 3D boxes.

PS: It would be great if you could support the SeaBird repo by starring it.

@MangoDragon
Author

Thank you for your reply! I starred the repo too. What should I use for the pkl files? Are they created from the large dataset, or do I need to download them from somewhere?

@abhi1kumar
Owner

abhi1kumar commented Apr 20, 2024

What should I use for the pkl?

The nuscenes_data_prep() function should generate the pkl files. The add_ann_adj_info() function in the converter file then works on the pkl files created by nuscenes_data_prep().
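To illustrate the hand-off between the two functions, here is a toy round-trip with a synthetic stand-in for the real infos pkl (the file name mirrors the bevdetv2-nuscenes extra_tag from this thread; the scene_token lookup is a placeholder for the real nuScenes devkit call):

```python
import os
import pickle
import tempfile

# Toy stand-in for the pkl produced by nuscenes_data_prep():
# a dict with an 'infos' list, one entry per sample.
dataset = {'infos': [{'token': 'abc'}, {'token': 'def'}]}

path = os.path.join(tempfile.mkdtemp(), 'bevdetv2-nuscenes_infos_mini.pkl')
with open(path, 'wb') as fid:
    pickle.dump(dataset, fid)

# add_ann_adj_info() then reloads this file and annotates each entry,
# e.g. attaching the scene_token looked up from the nuScenes devkit.
with open(path, 'rb') as fid:
    dataset = pickle.load(fid)
for info in dataset['infos']:
    info['scene_token'] = 'scene-' + info['token']  # placeholder lookup
with open(path, 'wb') as fid:
    pickle.dump(dataset, fid)

# The annotated pkl is what training/testing later consumes.
with open(path, 'rb') as fid:
    result = pickle.load(fid)
print(result['infos'][0]['scene_token'])
```

The key point is that add_ann_adj_info() does not create the pkl; it only mutates what nuscenes_data_prep() already wrote, so the prep step must run first.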

I starred the repo too.

Thank you for your support 😄

@abhi1kumar abhi1kumar changed the title Testing HoP without large nuscenes dataset? Testing HoP with mini Split Apr 20, 2024
@abhi1kumar abhi1kumar changed the title Testing HoP with mini Split Testing HoP with nuScenes mini Split Apr 20, 2024
@MangoDragon
Author

MangoDragon commented Apr 21, 2024

Using the following code:

    elif nuscenes_version == 'v1.0-mini':
        # Allow for the mini dataset -------------------------------
        set = 'mini'
        dataset = pickle.load(
            open('./data/nuscenes/%s_infos_%s.pkl' % (extra_tag, set), 'rb'))
        for id in range(len(dataset['infos'])):
            if id % 10 == 0:
                print('%d/%d' % (id, len(dataset['infos'])))
            info = dataset['infos'][id]
            # get sweep adjacent frame info
            sample = nuscenes.get('sample', info['token'])

            dataset['infos'][id]['scene_token'] = sample['scene_token']
        with open('./data/nuscenes/%s_infos_%s.pkl' % (extra_tag, set),
                'wb') as fid:
            pickle.dump(dataset, fid)
    else:
        raise NotImplementedError(f'{nuscenes_version} not supported')

and

if __name__ == '__main__':

    parser = argparse.ArgumentParser(description='Data converter arg parser')
    parser.add_argument('--split', default='trainval', help='split of the dataset')

    args = parser.parse_args()

    dataset = 'nuscenes'
    version = 'v1.0'
    assert args.split in ['trainval', 'test', 'mini']  # added 'mini'

When running the command python tools/create_data_bevdet.py --split mini, I get the following error:

Traceback (most recent call last):
  File "tools/create_data_bevdet.py", line 182, in <module>
    add_ann_adj_info(extra_tag,
  File "tools/create_data_bevdet.py", line 147, in add_ann_adj_info
    open('./data/nuscenes/%s_infos_%s.pkl' % (extra_tag, set), 'rb'))
FileNotFoundError: [Errno 2] No such file or directory: './data/nuscenes/bevdetv2-nuscenes_infos_mini.pkl'

Only 2 pkl files are created.

If I use python tools/create_data.py nuscenes --root-path ./data/nuscenes --out-dir ./data/nuscenes --extra-tag nuscenes --version v1.0-mini
I get the following output:

[>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>] 81/81, 3.7 task/s, elapsed: 22s, ETA:     0s
Create GT Database of NuScenesDataset
[                                                  ] 0/323, elapsed: 0s, ETA:Traceback (most recent call last):
  File "tools/create_data.py", line 267, in <module>
    nuscenes_data_prep(
  File "tools/create_data.py", line 89, in nuscenes_data_prep
    create_groundtruth_database(dataset_name, root_path, info_prefix,
  File "c:\users\user\anaconda3\seabird\hop\tools\data_converter\create_gt_database.py", line 240, in create_groundtruth_database
    example = dataset.pipeline(input_dict)
  File "c:\users\user\anaconda3\seabird\hop\mmdet3d\datasets\pipelines\compose.py", line 49, in __call__
    data = t(data)
  File "c:\users\user\anaconda3\seabird\hop\mmdet3d\datasets\pipelines\loading.py", line 682, in __call__
    results = self._load_bboxes_3d(results)
  File "c:\users\user\anaconda3\seabird\hop\mmdet3d\datasets\pipelines\loading.py", line 577, in _load_bboxes_3d
    results['gt_bboxes_3d'] = results['ann_info']['gt_bboxes_3d']
KeyError: 'ann_info'

Edit: one more thing.

In the tutorial you referenced, I found this code:

import mmcv
import numpy as np
from mmengine import load

from mmdet3d.visualization import Det3DLocalVisualizer
from mmdet3d.structures import CameraInstance3DBoxes

info_file = load('demo/data/kitti/000008.pkl')
cam2img = np.array(info_file['data_list'][0]['images']['CAM2']['cam2img'], dtype=np.float32)
bboxes_3d = []
for instance in info_file['data_list'][0]['instances']:
    bboxes_3d.append(instance['bbox_3d'])
gt_bboxes_3d = np.array(bboxes_3d, dtype=np.float32)
gt_bboxes_3d = CameraInstance3DBoxes(gt_bboxes_3d)
input_meta = {'cam2img': cam2img}

visualizer = Det3DLocalVisualizer()

img = mmcv.imread('demo/data/kitti/000008.png')
img = mmcv.imconvert(img, 'bgr', 'rgb')
visualizer.set_image(img)
# project 3D bboxes to image
visualizer.draw_proj_bboxes_3d(gt_bboxes_3d, input_meta)
visualizer.show()

The pkl file it loads seems to be part of the KITTI dataset. How would the image and pkl loading work for nuScenes?
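From skimming the mmdet3d v1.x docs, the nuScenes info pkls appear to follow the same 'data_list' layout, except that 'images' is keyed by the six camera names (e.g. 'CAM_FRONT') and the boxes are stored in the LiDAR frame. A synthetic sketch of what one entry might hold (the field names and values are my reading of the docs and placeholders, not verified against a real file):

```python
# Synthetic stand-in for one entry of a nuScenes infos pkl
# (mmdet3d v1.x layout -- verify field names against a real file).
info = {
    'images': {
        'CAM_FRONT': {
            'img_path': 'placeholder_CAM_FRONT.jpg',   # hypothetical file name
            'cam2img': [[1266.4, 0.0, 816.3],          # placeholder intrinsics
                        [0.0, 1266.4, 491.5],
                        [0.0, 0.0, 1.0]],
            'lidar2cam': [[0.0, -1.0, 0.0, 0.0],       # placeholder extrinsics
                          [0.0, 0.0, -1.0, -0.3],
                          [1.0, 0.0, 0.0, -0.9],
                          [0.0, 0.0, 0.0, 1.0]],
        },
        # ...five more cameras: CAM_FRONT_LEFT, CAM_BACK, etc.
    },
    'instances': [
        # bbox_3d in the LiDAR frame: x, y, z, l, w, h, yaw
        {'bbox_3d': [10.0, 2.0, -1.0, 4.5, 1.9, 1.7, 0.0], 'bbox_label_3d': 0},
    ],
}

cam = info['images']['CAM_FRONT']
bboxes_3d = [inst['bbox_3d'] for inst in info['instances']]
print(cam['img_path'], len(bboxes_3d))
```

If that layout holds, the image would be read from the per-camera img_path, and the boxes would presumably go through LiDARInstance3DBoxes with a lidar2img matrix in input_meta rather than CameraInstance3DBoxes as in the KITTI snippet above.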
