Merged
74 commits
05e2cb3
Added Model Documentation.
Sai-Suraj-27 Feb 23, 2026
331f3b4
Added conversion_mapping weight renamings
Sai-Suraj-27 Feb 23, 2026
1c32317
Added Auto Mappings.
Sai-Suraj-27 Feb 23, 2026
4014ee3
init
Sai-Suraj-27 Feb 23, 2026
efe5a39
Modular jina_embeddings_v3
Sai-Suraj-27 Feb 23, 2026
2524630
modular -> modeling + config
Sai-Suraj-27 Feb 23, 2026
3a33e40
__init__.py
Sai-Suraj-27 Feb 23, 2026
2dafe59
Created folder for tests
Sai-Suraj-27 Feb 23, 2026
baa7d91
Added documentation for the jina-embeddings-v3 Model
Sai-Suraj-27 Feb 23, 2026
5dcd47f
Tests
Sai-Suraj-27 Feb 23, 2026
6fb7d95
Update Tests
Sai-Suraj-27 Feb 23, 2026
a9575dd
Merge branch 'main' of github.com:huggingface/transformers into add_j…
Sai-Suraj-27 Feb 23, 2026
1ba2c54
Update Tests
Sai-Suraj-27 Feb 23, 2026
632820f
Update modular
Sai-Suraj-27 Feb 23, 2026
b2a4ec5
Fix failing test
Sai-Suraj-27 Feb 23, 2026
284ffc6
scope
Sai-Suraj-27 Feb 23, 2026
caee0ba
Merge branch 'main' of github.com:huggingface/transformers into add_j…
Sai-Suraj-27 Feb 24, 2026
bad2fc2
Update modular, Add docstring for adapter_mask
Sai-Suraj-27 Feb 24, 2026
ec81999
Testing
Sai-Suraj-27 Feb 24, 2026
ba3390f
Fix failing test
Sai-Suraj-27 Feb 24, 2026
e0a90bf
Added IntegrationTests
Sai-Suraj-27 Feb 24, 2026
ade3631
Updated model doc date
Sai-Suraj-27 Feb 24, 2026
c6ed80f
Merge branch 'main' of github.com:huggingface/transformers into add_j…
Sai-Suraj-27 Feb 24, 2026
a199bf6
post_init()
Sai-Suraj-27 Feb 24, 2026
b44f90b
make style.
Sai-Suraj-27 Feb 24, 2026
6cf10ed
Merge branch 'main' of github.com:huggingface/transformers into add_j…
Sai-Suraj-27 Feb 24, 2026
fe0c7cb
adapter_mask gone
Sai-Suraj-27 Feb 26, 2026
00b93db
Better Modular
Sai-Suraj-27 Feb 26, 2026
af6f908
Add conversion_mapping
Sai-Suraj-27 Feb 27, 2026
05d4190
Modular -> Modeling + Config
Sai-Suraj-27 Feb 27, 2026
8bc7dfa
Update model doc
Sai-Suraj-27 Feb 27, 2026
95e05f6
Update tests
Sai-Suraj-27 Feb 27, 2026
c92493a
Small fix
Sai-Suraj-27 Feb 27, 2026
e3a7c79
Merge branch 'main' of github.com:huggingface/transformers into add_j…
Sai-Suraj-27 Feb 27, 2026
d5e49ce
make fix-repo
Sai-Suraj-27 Feb 27, 2026
081af3d
fix _tied_weights_keys
Sai-Suraj-27 Feb 27, 2026
32a1722
self.is_causal=False
Sai-Suraj-27 Feb 27, 2026
5dee40f
Add tie_word_embeddings in configuration class
Sai-Suraj-27 Feb 27, 2026
c3a26e9
Merge branch 'main' of github.com:huggingface/transformers into add_j…
Sai-Suraj-27 Mar 2, 2026
486aae1
small fix in configuration doc-string
Sai-Suraj-27 Mar 2, 2026
74922d3
config update
Sai-Suraj-27 Mar 2, 2026
0bb18c9
fix check_docstrings.py
Sai-Suraj-27 Mar 3, 2026
45844f4
Merge branch 'main' of github.com:huggingface/transformers into add_j…
Sai-Suraj-27 Mar 3, 2026
03f4375
ruff: Reformat
Sai-Suraj-27 Mar 3, 2026
901d7a9
Remove extra args from config
Sai-Suraj-27 Mar 3, 2026
b0cba60
Merge branch 'main' of github.com:huggingface/transformers into add_j…
Sai-Suraj-27 Mar 6, 2026
8988ceb
update tests + model doc
Sai-Suraj-27 Mar 6, 2026
66b3b2e
Better, modern modular
Sai-Suraj-27 Mar 7, 2026
a0d4011
make fix-repo
Sai-Suraj-27 Mar 7, 2026
ca3a9d5
Merge branch 'main' of github.com:huggingface/transformers into add_j…
Sai-Suraj-27 Mar 7, 2026
f95b90d
Update conversion mapping
Sai-Suraj-27 Mar 7, 2026
457e75e
fix dropout
Sai-Suraj-27 Mar 7, 2026
d561f86
Better modular
Sai-Suraj-27 Mar 9, 2026
76a2807
Update conversion mapping
Sai-Suraj-27 Mar 10, 2026
a8fabb2
Update tests
Sai-Suraj-27 Mar 10, 2026
75ffec8
Update docs
Sai-Suraj-27 Mar 10, 2026
9c53b0f
Merge branch 'main' of github.com:huggingface/transformers into add_j…
Sai-Suraj-27 Mar 10, 2026
d31585f
Better modular
Sai-Suraj-27 Mar 12, 2026
1b7540c
Merge branch 'main' of github.com:huggingface/transformers into add_j…
Sai-Suraj-27 Mar 12, 2026
94e3b2d
Fix license
Sai-Suraj-27 Mar 12, 2026
c4f7cc6
Fix date
Sai-Suraj-27 Mar 12, 2026
0da360e
Merge branch 'main' of github.com:huggingface/transformers into add_j…
Sai-Suraj-27 Mar 18, 2026
0d36657
Better modular, Configuration
Sai-Suraj-27 Mar 18, 2026
c4072d8
make fix-repo
Sai-Suraj-27 Mar 18, 2026
b71bda5
Fix config
Sai-Suraj-27 Mar 18, 2026
5bce77a
Merge branch 'main' of github.com:huggingface/transformers into add_j…
Sai-Suraj-27 Mar 18, 2026
ae8ee84
Use autodocstring
Sai-Suraj-27 Mar 18, 2026
b0df7cd
lets use auto
vasqu Mar 18, 2026
3c2efc2
hmm is it this
vasqu Mar 18, 2026
f71b4ba
make hf version
vasqu Mar 18, 2026
3d38dbe
my bad...
vasqu Mar 18, 2026
6a1c9dc
Merge branch 'main' into add_jina_v3_model
vasqu Mar 18, 2026
0f37b73
retry whats up with ci
vasqu Mar 18, 2026
e5c6f35
ci pls
vasqu Mar 18, 2026
2 changes: 2 additions & 0 deletions docs/source/en/_toctree.yml
@@ -632,6 +632,8 @@
title: Jamba
- local: model_doc/jetmoe
title: JetMoe
- local: model_doc/jina_embeddings_v3
title: jina_embeddings_v3
- local: model_doc/led
title: LED
- local: model_doc/lfm2
165 changes: 165 additions & 0 deletions docs/source/en/model_doc/jina_embeddings_v3.md
@@ -0,0 +1,165 @@
<!--Copyright 2026 The HuggingFace Team. All rights reserved.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->

*This model was released on 2024-09-16 and added to Hugging Face Transformers on 2026-03-18.*

<div style="float: right;">
<div class="flex flex-wrap space-x-1">
<img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-DE3412?style=flat&logo=pytorch&logoColor=white" >
<img alt="FlashAttention" src="https://img.shields.io/badge/%E2%9A%A1%EF%B8%8E%20FlashAttention-eae0c8?style=flat">
<img alt="SDPA" src="https://img.shields.io/badge/SDPA-DE3412?style=flat&logo=pytorch&logoColor=white">
</div>
</div>


# JinaEmbeddingsV3

[Jina-Embeddings-v3](https://huggingface.co/papers/2409.10173) is a multilingual, multi-task text embedding model designed for a variety of NLP applications. Based on the XLM-RoBERTa architecture, it replaces absolute position embeddings with **Rotary Position Embeddings (RoPE)** to support long input sequences of up to 8192 tokens. It also features 5 built-in **task-specific LoRA adapters** that let the model generate task-specific embeddings (e.g., for retrieval vs. classification) without significantly increasing inference latency.
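For intuition, RoPE injects position information by rotating each pair of feature dimensions through a position-dependent angle, so relative offsets between tokens show up as relative rotations. A toy single-pair sketch, not the model's actual implementation (the `inv_freq` argument stands in for the per-pair frequencies derived from the rope theta):

```python
import math

def rope_rotate(pair, position, inv_freq=1.0):
    """Rotate one (x0, x1) feature pair by a position-dependent angle (toy sketch)."""
    angle = position * inv_freq
    cos_a, sin_a = math.cos(angle), math.sin(angle)
    x0, x1 = pair
    return [x0 * cos_a - x1 * sin_a, x0 * sin_a + x1 * cos_a]

# Position 0 leaves the pair unchanged; a rotation never changes the vector's norm
assert rope_rotate([1.0, 0.0], position=0) == [1.0, 0.0]
```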


You can find the original Jina Embeddings v3 checkpoints under the [Jina AI](https://huggingface.co/jinaai) organization.


> [!TIP]
> Click on the Jina Embeddings v3 models in the right sidebar for more examples of how to apply the model to different language tasks.

The example below demonstrates how to extract features (embeddings) with [`Pipeline`], [`AutoModel`], and from the command line.

<hfoptions id="usage">
<hfoption id="Pipeline">

```py
import torch
from transformers import pipeline

pipeline = pipeline(
task="feature-extraction",
model="jinaai/jina-embeddings-v3-hf",
)
# Returns a list of lists containing the embeddings for each token
embeddings = pipeline("Jina Embeddings V3 is great for semantic search.")
```


</hfoption>
<hfoption id="AutoModel">


```py
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("jinaai/jina-embeddings-v3-hf")
model = AutoModel.from_pretrained("jinaai/jina-embeddings-v3-hf", device_map="auto")

prompt = "Jina Embeddings V3 is great for semantic search."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
outputs = model(**inputs)
# The base AutoModel returns the raw hidden states for all tokens
last_hidden_states = outputs.last_hidden_state

print(f"Features shape: {last_hidden_states.shape}")
```

</hfoption>
</hfoptions>

## Task-Specific LoRA Adapters

A key feature of `JinaEmbeddingsV3` is its LoRA adapters, which let you tailor the output embeddings to specific use cases without the overhead of loading entirely different models.

The following tasks are supported:

* **`retrieval.query`**: Used for query embeddings in asymmetric retrieval tasks (e.g., search queries).
* **`retrieval.passage`**: Used for passage embeddings in asymmetric retrieval tasks (e.g., the documents being searched).
* **`separation`**: Used for embeddings in clustering and re-ranking applications.
* **`classification`**: Used for embeddings in classification tasks.
* **`text-matching`**: Used for embeddings in tasks that quantify similarity between two texts, such as Semantic Textual Similarity (STS) or symmetric retrieval tasks.


To generate high-quality sentence or paragraph embeddings, you need to apply **mean pooling** to the model's token embeddings. Mean pooling takes all token embeddings from the model's output and averages them, masking out the padding tokens.

Here is how you can generate sentence embeddings tailored for a retrieval query task using the `AutoModel` API.

```python
import torch
import torch.nn.functional as F
from transformers import AutoTokenizer, AutoModel

def mean_pooling(model_output, attention_mask):
# First element of model_output contains all token embeddings
token_embeddings = model_output[0]
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()

# Sum the embeddings and divide by the number of non-padding tokens
sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
sum_mask = torch.clamp(input_mask_expanded.sum(1), min=1e-9)
return sum_embeddings / sum_mask


sentences = [
"How is the weather today?",
"What is the current weather like today?"
]

tokenizer = AutoTokenizer.from_pretrained("jinaai/jina-embeddings-v3-hf")
model = AutoModel.from_pretrained("jinaai/jina-embeddings-v3-hf")

encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt").to(model.device)

# Set up the adapter mask for your specific task
task = 'retrieval_query'  # one of: retrieval_query, retrieval_passage, separation, classification, text_matching

model.load_adapter("jinaai/jina-embeddings-v3-hf", adapter_name=task, adapter_kwargs={"subfolder": task})

model.set_adapter(task)

with torch.no_grad():
model_output = model(**encoded_input)

embeddings = mean_pooling(model_output, encoded_input["attention_mask"])
embeddings = F.normalize(embeddings, p=2, dim=1)

print(embeddings.shape)
# Output: torch.Size([2, 1024])
```
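Because the embeddings above are L2-normalized, cosine similarity between them reduces to a plain dot product. A minimal pure-Python sketch of the scoring step (the short vectors here are made up for illustration, standing in for the model's 1024-dim outputs):

```python
import math

def cosine_similarity(a, b):
    """dot(a, b) / (|a| * |b|); for unit-norm vectors this is just the dot product."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical 4-dim embeddings; identical vectors score 1.0
query = [0.5, 0.5, 0.5, 0.5]
passage = [0.5, 0.5, 0.5, 0.5]
print(cosine_similarity(query, passage))  # -> 1.0
```

For the two paraphrased weather sentences in the example above, a score close to 1.0 would indicate the model treats them as near-duplicates.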


## JinaEmbeddingsV3Config

[[autodoc]] JinaEmbeddingsV3Config

## JinaEmbeddingsV3Model

[[autodoc]] JinaEmbeddingsV3Model
- forward

## JinaEmbeddingsV3ForMaskedLM

[[autodoc]] JinaEmbeddingsV3ForMaskedLM
- forward

## JinaEmbeddingsV3ForSequenceClassification

[[autodoc]] JinaEmbeddingsV3ForSequenceClassification
- forward

## JinaEmbeddingsV3ForTokenClassification

[[autodoc]] JinaEmbeddingsV3ForTokenClassification
- forward

## JinaEmbeddingsV3ForQuestionAnswering

[[autodoc]] JinaEmbeddingsV3ForQuestionAnswering
- forward
16 changes: 16 additions & 0 deletions src/transformers/conversion_mapping.py
@@ -420,6 +420,22 @@ def _build_checkpoint_conversion_mapping():
target_patterns="LayerNorm.bias",
),
],
"jina_embeddings_v3": [
WeightRenaming(source_patterns="emb_ln", target_patterns="embeddings.LayerNorm"),
WeightRenaming(source_patterns="encoder.layers", target_patterns="layers"),
WeightConverter(
source_patterns="mixer.Wqkv",
target_patterns=[
"self_attn.q_proj",
"self_attn.k_proj",
"self_attn.v_proj",
],
operations=[Chunk(dim=0)],
),
WeightRenaming(source_patterns="mixer.out_proj", target_patterns="self_attn.o_proj"),
WeightRenaming(source_patterns="norm1", target_patterns="post_attention_layernorm"),
WeightRenaming(source_patterns="norm2", target_patterns="post_mlp_layernorm"),
],
}
mapping["legacy"] += [
WeightRenaming(
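The `Chunk(dim=0)` converter above splits the original checkpoint's fused `mixer.Wqkv` projection into separate q/k/v weights. A rough pure-Python sketch of the idea, using a toy state dict of nested lists rather than transformers' actual conversion machinery or real tensors:

```python
def convert_fused_qkv(state_dict, hidden_size):
    """Split keys ending in 'mixer.Wqkv' into q/k/v projections (row-wise chunks along dim 0)."""
    out = {}
    for key, weight in state_dict.items():
        if key.endswith("mixer.Wqkv"):
            prefix = key[: -len("mixer.Wqkv")]
            # The fused weight stacks q, k, v row-wise; chunk it into three equal parts
            q, k, v = (weight[i * hidden_size : (i + 1) * hidden_size] for i in range(3))
            out[prefix + "self_attn.q_proj"] = q
            out[prefix + "self_attn.k_proj"] = k
            out[prefix + "self_attn.v_proj"] = v
        else:
            out[key] = weight
    return out

# Toy fused QKV with hidden_size=2: six rows -> three 2-row projections
toy = {"encoder.layers.0.mixer.Wqkv": [[1, 0], [0, 1], [2, 0], [0, 2], [3, 0], [0, 3]]}
converted = convert_fused_qkv(toy, hidden_size=2)
```

The real mapping additionally applies the `WeightRenaming` rules (e.g. `encoder.layers` → `layers`), which this sketch leaves out.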
1 change: 1 addition & 0 deletions src/transformers/models/__init__.py
@@ -203,6 +203,7 @@
from .jamba import *
from .janus import *
from .jetmoe import *
from .jina_embeddings_v3 import *
from .kosmos2 import *
from .kosmos2_5 import *
from .kyutai_speech_to_text import *
2 changes: 2 additions & 0 deletions src/transformers/models/auto/configuration_auto.py
@@ -237,6 +237,7 @@
("jamba", "JambaConfig"),
("janus", "JanusConfig"),
("jetmoe", "JetMoeConfig"),
("jina_embeddings_v3", "JinaEmbeddingsV3Config"),
("kosmos-2", "Kosmos2Config"),
("kosmos-2.5", "Kosmos2_5Config"),
("kyutai_speech_to_text", "KyutaiSpeechToTextConfig"),
@@ -741,6 +742,7 @@
("jamba", "Jamba"),
("janus", "Janus"),
("jetmoe", "JetMoe"),
("jina_embeddings_v3", "JinaEmbeddingsV3"),
("kosmos-2", "KOSMOS-2"),
("kosmos-2.5", "KOSMOS-2.5"),
("kyutai_speech_to_text", "KyutaiSpeechToText"),
5 changes: 5 additions & 0 deletions src/transformers/models/auto/modeling_auto.py
@@ -234,6 +234,7 @@ class _BaseModelWithGenerate(PreTrainedModel, GenerationMixin):
("jamba", "JambaModel"),
("janus", "JanusModel"),
("jetmoe", "JetMoeModel"),
("jina_embeddings_v3", "JinaEmbeddingsV3Model"),
("kosmos-2", "Kosmos2Model"),
("kosmos-2.5", "Kosmos2_5Model"),
("kyutai_speech_to_text", "KyutaiSpeechToTextModel"),
@@ -1049,6 +1050,7 @@ class _BaseModelWithGenerate(PreTrainedModel, GenerationMixin):
("fnet", "FNetForMaskedLM"),
("funnel", "FunnelForMaskedLM"),
("ibert", "IBertForMaskedLM"),
("jina_embeddings_v3", "JinaEmbeddingsV3ForMaskedLM"),
("layoutlm", "LayoutLMForMaskedLM"),
("longformer", "LongformerForMaskedLM"),
("luke", "LukeForMaskedLM"),
@@ -1232,6 +1234,7 @@ class _BaseModelWithGenerate(PreTrainedModel, GenerationMixin):
("ibert", "IBertForSequenceClassification"),
("jamba", "JambaForSequenceClassification"),
("jetmoe", "JetMoeForSequenceClassification"),
("jina_embeddings_v3", "JinaEmbeddingsV3ForSequenceClassification"),
("layoutlm", "LayoutLMForSequenceClassification"),
("layoutlmv2", "LayoutLMv2ForSequenceClassification"),
("layoutlmv3", "LayoutLMv3ForSequenceClassification"),
@@ -1331,6 +1334,7 @@ class _BaseModelWithGenerate(PreTrainedModel, GenerationMixin):
("gpt_neox", "GPTNeoXForQuestionAnswering"),
("gptj", "GPTJForQuestionAnswering"),
("ibert", "IBertForQuestionAnswering"),
("jina_embeddings_v3", "JinaEmbeddingsV3ForQuestionAnswering"),
("layoutlmv2", "LayoutLMv2ForQuestionAnswering"),
("layoutlmv3", "LayoutLMv3ForQuestionAnswering"),
("led", "LEDForQuestionAnswering"),
@@ -1447,6 +1451,7 @@ class _BaseModelWithGenerate(PreTrainedModel, GenerationMixin):
("gpt_oss", "GptOssForTokenClassification"),
("helium", "HeliumForTokenClassification"),
("ibert", "IBertForTokenClassification"),
("jina_embeddings_v3", "JinaEmbeddingsV3ForTokenClassification"),
("layoutlm", "LayoutLMForTokenClassification"),
("layoutlmv2", "LayoutLMv2ForTokenClassification"),
("layoutlmv3", "LayoutLMv3ForTokenClassification"),
1 change: 1 addition & 0 deletions src/transformers/models/auto/tokenization_auto.py
@@ -161,6 +161,7 @@
("instructblipvideo", "GPT2Tokenizer" if is_tokenizers_available() else None),
("internvl", "Qwen2Tokenizer" if is_tokenizers_available() else None),
("jais2", "GPT2Tokenizer" if is_tokenizers_available() else None),
("jina_embeddings_v3", "XLMRobertaTokenizer" if is_tokenizers_available() else None),
("kosmos-2", "XLMRobertaTokenizer" if is_tokenizers_available() else None),
("lasr_ctc", "LasrTokenizer" if is_tokenizers_available() else None),
("lasr_encoder", "LasrTokenizer" if is_tokenizers_available() else None),
29 changes: 29 additions & 0 deletions src/transformers/models/jina_embeddings_v3/__init__.py
@@ -0,0 +1,29 @@
# Copyright 2026 The HuggingFace Team. All rights reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from typing import TYPE_CHECKING

from ...utils import _LazyModule
from ...utils.import_utils import define_import_structure


if TYPE_CHECKING:
from .configuration_jina_embeddings_v3 import *
from .modeling_jina_embeddings_v3 import *
else:
import sys

_file = globals()["__file__"]
sys.modules[__name__] = _LazyModule(__name__, _file, define_import_structure(_file), module_spec=__spec__)
@@ -0,0 +1,72 @@
# 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
# This file was automatically generated from src/transformers/models/jina_embeddings_v3/modular_jina_embeddings_v3.py.
# Do NOT edit this file manually as any edits will be overwritten by the generation of
# the file from the modular. If any change should be done, please apply the change to the
# modular_jina_embeddings_v3.py file directly. One of our CI enforces this.
# 🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨🚨
# Copyright 2026 The Jina-AI and HuggingFace Inc. teams. All rights reserved.
#
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

from huggingface_hub.dataclasses import strict

from ...configuration_utils import PreTrainedConfig
from ...modeling_rope_utils import RopeParameters
from ...utils import auto_docstring


@auto_docstring(checkpoint="jinaai/jina-embeddings-v3-hf")
@strict(accept_kwargs=True)
class JinaEmbeddingsV3Config(PreTrainedConfig):
r"""
Examples:

```python
>>> from transformers import JinaEmbeddingsV3Config, JinaEmbeddingsV3Model

>>> # Initializing a Jina-Embeddings-V3 jinaai/jina-embeddings-v3-hf style configuration
>>> configuration = JinaEmbeddingsV3Config()

>>> # Initializing a model (with random weights) from the jinaai/jina-embeddings-v3-hf style configuration
>>> model = JinaEmbeddingsV3Model(configuration)

>>> # Accessing the model configuration
>>> configuration = model.config
```"""

model_type = "jina_embeddings_v3"

vocab_size: int = 250002
hidden_size: int = 1024
num_hidden_layers: int = 24
num_attention_heads: int = 16
intermediate_size: int = 4096
hidden_act: str = "gelu"
hidden_dropout_prob: float = 0.1
attention_probs_dropout_prob: float = 0.1
max_position_embeddings: int = 8194
type_vocab_size: int = 1
initializer_range: float = 0.02
layer_norm_eps: float = 1e-5
pad_token_id: int | None = 1
bos_token_id: int | None = 0
eos_token_id: int | None = 2
use_cache: bool = True
classifier_dropout: float | int | None = None
tie_word_embeddings: bool = True
default_theta = 20000.0
rope_parameters: RopeParameters | dict | None = None


__all__ = ["JinaEmbeddingsV3Config"]