weird logic in positional embedding in APTModel (self.wpe)? #65

Open
othertea opened this issue Jan 21, 2024 · 3 comments

Comments

othertea (Collaborator) commented Jan 21, 2024

I'm finding our handling of the initial positional embeddings applied before the APT blocks (self.wpe, or its absence, in the definition of APTModel) a bit weird.
They are initialized here:

if self.position_embedding=="learned" or self.position_embedding == 'rope' or self.position_embedding == 'rerope' or self.position_embedding=="linear_rope_scaling" or self.position_embedding =="dynamic_rope_scaling":
    self.wpe = nn.Embedding(config.max_position_embeddings, self.embed_dim)
    self.alibi = None
elif self.position_embedding=="alibi":
    maxpos = config.n_positions
    attn_heads = config.n_head
    alibi = create_alibi_tensor(attn_heads,maxpos)
    self.register_buffer('alibi',alibi)

and used here:
if self.position_embedding=="learned" or self.position_embedding == 'rope' or self.position_embedding == 'rerope' or self.position_embedding=="linear_rope_scaling" or self.position_embedding =="dynamic_rope_scaling":
    position_embeds = self.wpe(position_ids)
    hidden_states = inputs_embeds + position_embeds
else:
    hidden_states = inputs_embeds

It seems that for the learned embedding as well as for the rope variants, a learned positional embedding is added before passing on to the blocks; only for alibi is this initial embedding omitted. (The APT blocks still apply rope/alibi as specified, so omitting this initial embedding does not mean those positional encodings are never used.)
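For reference, here is roughly how rope injects position inside the attention blocks (a sketch of the standard rotate-half formulation, not our actual code; names are illustrative):

import torch

def rotate_half(x):
    # split the last dim in half and swap with a sign flip: (x1, x2) -> (-x2, x1)
    x1, x2 = x.chunk(2, dim=-1)
    return torch.cat((-x2, x1), dim=-1)

def apply_rope(q, k, cos, sin):
    # cos/sin are precomputed per position; position enters by rotating q and k
    # inside attention, not via an additive embedding at the model input
    return q * cos + rotate_half(q) * sin, k * cos + rotate_half(k) * sin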
This seems weird to me because I don't see why rope should be grouped with learned embeddings. It makes more sense to me for the rope variants to also omit the initial positional embedding (i.e., no self.wpe). I would also be okay with all of them having an initial positional embedding, but that doesn't seem to be how language models are typically implemented, e.g. in llama.
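Concretely, something like this is what I have in mind (just a sketch of the condition change, not a tested patch):

# initialization: only "learned" gets an additive input embedding
if self.position_embedding == "learned":
    self.wpe = nn.Embedding(config.max_position_embeddings, self.embed_dim)
    self.alibi = None
elif self.position_embedding == "alibi":
    alibi = create_alibi_tensor(config.n_head, config.n_positions)
    self.register_buffer('alibi', alibi)
else:
    # rope / rerope / linear_rope_scaling / dynamic_rope_scaling:
    # position is applied inside the APT blocks, so nothing to add here
    self.alibi = None

and in the forward pass:

if self.position_embedding == "learned":
    hidden_states = inputs_embeds + self.wpe(position_ids)
else:
    hidden_states = inputs_embeds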

Tagging @talkhanz, who I think was the original author of this logic, and @jamaliki @jeffreyruffolo @NZ99 @pascalnotin for their thoughts.

jamaliki (Collaborator) commented

This is weird to me too! It should only be there for learned embeddings, IMO.

talkhanz (Contributor) commented

Hey @othertea, thanks for pointing this out! I believe you are correct that the positional embeddings should be ignored for rope and its variants. I think I was pushing a bit too hard to make the code similar to Tranception :P and made this error in that spirit.
As I understand it, the if conditions and the initialization within APTModel need to be rectified? I can make those changes in another PR, @othertea?
@pascalnotin, let me know your thoughts?

othertea (Collaborator, Author) commented

Thanks for confirming my suspicions, @jamaliki and @talkhanz!
@talkhanz don't worry about doing anything; I'll make the PR with the updates and tag you! I'm thinking it might be better to wait until the mup PR #64 is merged so that we avoid creating merge conflicts for @NZ99.

talkhanz pushed a commit to talkhanz/protein-lm-scaling that referenced this issue Mar 15, 2024