This repository has been archived by the owner on Jan 1, 2025. It is now read-only.

loading swintransformer #228

Open
BuRr-Lee opened this issue Feb 8, 2024 · 0 comments
BuRr-Lee commented Feb 8, 2024

When loading the Swin-L checkpoint (swin_large_patch4_window12_384_22k.pkl), warnings like the following appear:

[02/08 23:18:31 fvcore.common.checkpoint]: [Checkpointer] Loading from weights/swin_large_patch4_window12_384_22k.pkl ...
[02/08 23:18:33 fvcore.common.checkpoint]: Reading a file from 'third_party'
WARNING [02/08 23:18:33 d2.checkpoint.c2_model_loading]: Shape of norm.bias in checkpoint is torch.Size([1536]), while shape of sem_seg_head.pixel_decoder.adapter_1.norm.bias in model is torch.Size([256]).
WARNING [02/08 23:18:33 d2.checkpoint.c2_model_loading]: norm.bias will not be loaded. Please double check and see if this is desired.
WARNING [02/08 23:18:33 d2.checkpoint.c2_model_loading]: Shape of norm.weight in checkpoint is torch.Size([1536]), while shape of sem_seg_head.pixel_decoder.adapter_1.norm.weight in model is torch.Size([256]).
WARNING [02/08 23:18:33 d2.checkpoint.c2_model_loading]: norm.weight will not be loaded. Please double check and see if this is desired.
WARNING [02/08 23:18:33 d2.checkpoint.c2_model_loading]: Shape of norm.bias in checkpoint is torch.Size([1536]), while shape of sem_seg_head.pixel_decoder.layer_1.norm.bias in model is torch.Size([256]).
WARNING [02/08 23:18:33 d2.checkpoint.c2_model_loading]: norm.bias will not be loaded. Please double check and see if this is desired.
WARNING [02/08 23:18:33 d2.checkpoint.c2_model_loading]: Shape of norm.weight in checkpoint is torch.Size([1536]), while shape of sem_seg_head.pixel_decoder.layer_1.norm.weight in model is torch.Size([256]).
WARNING [02/08 23:18:33 d2.checkpoint.c2_model_loading]: norm.weight will not be loaded. Please double check and see if this is desired.
WARNING [02/08 23:18:33 d2.checkpoint.c2_model_loading]: Shape of norm.bias in checkpoint is torch.Size([1536]), while shape of sem_seg_head.predictor.transformer_cross_attention_layers.0.norm.bias in model is torch.Size([256]).

Do you have any clue about this? Thanks!
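From the log, it looks like the checkpointer's fuzzy name matching is pairing the backbone's unprefixed `norm.bias`/`norm.weight` (1536-dim, the Swin-L final norm) with unrelated 256-dim norm layers in the decoder, then skipping them because the shapes differ. One way to see exactly which entries are affected is to compare shapes between the two state dicts directly. A minimal sketch (the helper name and the toy dicts below are illustrative, not part of Detectron2's API):

```python
import numpy as np

def find_shape_mismatches(ckpt_state, model_state):
    """Return (key, ckpt_shape, model_shape) for keys present in both
    state dicts whose shapes differ -- these are the entries the
    checkpointer warns about and refuses to load."""
    mismatches = []
    for key, ckpt_tensor in ckpt_state.items():
        if key in model_state:
            ckpt_shape = tuple(np.asarray(ckpt_tensor).shape)
            model_shape = tuple(np.asarray(model_state[key]).shape)
            if ckpt_shape != model_shape:
                mismatches.append((key, ckpt_shape, model_shape))
    return mismatches

# Toy example mirroring the warning above: a 1536-dim backbone norm
# matched by name against a 256-dim decoder norm, alongside a key
# whose shapes agree and loads cleanly.
ckpt = {
    "norm.bias": np.zeros(1536),
    "patch_embed.proj.weight": np.zeros((192, 3, 4, 4)),
}
model = {
    "norm.bias": np.zeros(256),
    "patch_embed.proj.weight": np.zeros((192, 3, 4, 4)),
}
print(find_shape_mismatches(ckpt, model))
# -> [('norm.bias', (1536,), (256,))]
```

If the mismatched keys are all backbone-level norms that the decoder does not actually use, the warnings are typically harmless, but it is worth confirming against the model's expected parameter list.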
