
Fix attention mask handling in EoMT-DINOv3 converter #59

Draft

NielsRogge wants to merge 13 commits into codex/integrate-eomt-dinov3-model-into-transformers from codex/write-conversion-script-for-eomt-dinov3-checkpoint

Conversation

@NielsRogge
Owner

Summary

  • update the conversion verifier to build an additive attention mask that mirrors the original EoMT implementation instead of multiplying by a boolean mask (see the sketch after this list)
  • seed PyTorch before running verification so repeated conversions are deterministic
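
A minimal sketch of the two changes, assuming a boolean mask in which True marks positions that may be attended to (the helper name below is illustrative, not the converter's actual function):

```python
import torch

def to_additive_attention_mask(bool_mask: torch.Tensor, dtype: torch.dtype = torch.float32) -> torch.Tensor:
    # Allowed positions (True) map to 0.0; masked positions map to a large negative
    # value, so the mask is *added* to the attention logits before the softmax
    # instead of multiplying the attention weights by a boolean mask.
    additive = torch.zeros(bool_mask.shape, dtype=dtype)
    additive.masked_fill_(~bool_mask, torch.finfo(dtype).min)
    return additive

# Seed PyTorch before verification so repeated conversions are deterministic.
torch.manual_seed(0)

example_mask = torch.tensor([[True, True, False]])
print(to_additive_attention_mask(example_mask))
```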

Testing

  • python -m compileall src/transformers/models/eomt_dinov3/convert_eomt_dinov3_to_hf.py

https://chatgpt.com/codex/tasks/task_b_68d65a86fc4c8336abf69a5c0d342a8a

@github-actions

[For maintainers] Suggested jobs to run (before merge)

run-slow: eomt_dinov3
