This repository contains the official resources for the paper "On the Theoretical Limitations of Embedding-Based Retrieval".

This work introduces the LIMIT dataset, which is designed to stress-test embedding models based on theoretical principles. We show that for any given embedding dimension d, there exists a combination of documents that cannot be returned by any query. We use this theory to instantiate the LIMIT dataset and find that even state-of-the-art models struggle on it, highlighting a fundamental limitation of the current single-vector embedding paradigm.
The datasets used in our experiments are available in the `data/` directory of this repository, formatted in MTEB style (i.e., JSON lines).
Each dataset contains the following files (a minimal loading sketch is shown after this list):

- A `queries.json` file containing a line for each of the 1000 queries, each with an `_id` and a `text` field.
- A `corpus.json` file containing a line for each of the 50k documents (or 46 if using the `small` version), each with an `_id`, a `text`, and an empty `title` field.
- A `qrels.json` file containing a row for each of the 2000 relevant query->doc mappings, mapping the `query-id` of a query to the `corpus-id` of a document, with a `score` indicating relevance.
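The snippet below is a minimal sketch of how these JSON-lines files can be inspected with plain Python. The directory layout (`data/limit/` here) and the exact file names are assumptions for illustration; adjust the paths to match the repository.

```python
import json
from pathlib import Path

# Assumed location of the full dataset; adjust to the actual repository layout.
data_dir = Path("data/limit")

def read_jsonl(path):
    """Read an MTEB-style JSON-lines file into a list of dicts."""
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]

queries = read_jsonl(data_dir / "queries.json")  # [{"_id": ..., "text": ...}, ...]
corpus = read_jsonl(data_dir / "corpus.json")    # [{"_id": ..., "text": ..., "title": ""}, ...]
qrels = read_jsonl(data_dir / "qrels.json")      # [{"query-id": ..., "corpus-id": ..., "score": ...}, ...]

print(len(queries), "queries |", len(corpus), "documents |", len(qrels), "qrels")
```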
Two dataset variants are provided:

- Full Dataset (`limit`): the complete dataset, containing 50k documents.
- Small Sample (`limit-small`): a smaller version with only the 46 documents that are relevant to the queries.
We provide code to generate LIMIT-style datasets, as well as to run the free embedding experiment, in the `code/` folder.

- Dataset Generation: To generate the dataset from scratch, use the Jupyter notebook at `code/generate_limit_dataset.ipynb`. It contains all necessary steps and dependencies.
- Free Embedding Experiments: The script to run the free embedding experiments is at `code/free_embedding_experiment.py` (a conceptual sketch of the idea follows below).
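To make the idea concrete, here is a minimal, hypothetical sketch of a free embedding experiment, not the actual script: query and document vectors of dimension d are optimized directly, with no encoder in between, to reproduce a relevance matrix. If even free parameters cannot realize the qrel pattern at a given d, no d-dimensional single-vector encoder can. The sketch assumes PyTorch and a simple binary cross-entropy loss; the real experiment's data, loss, and evaluation criteria may differ.

```python
import torch

# Toy relevance matrix: rel[q, d] = 1 if document d is relevant to query q.
# In the real experiment, this pattern would come from the LIMIT qrels.
num_queries, num_docs, dim = 32, 64, 8
rel = (torch.rand(num_queries, num_docs) < 0.05).float()

# "Free" embeddings: optimized directly, with no encoder in between.
Q = torch.nn.Parameter(0.1 * torch.randn(num_queries, dim))
D = torch.nn.Parameter(0.1 * torch.randn(num_docs, dim))
opt = torch.optim.Adam([Q, D], lr=1e-2)

for step in range(2000):
    scores = Q @ D.T  # dot-product similarity
    loss = torch.nn.functional.binary_cross_entropy_with_logits(scores, rel)
    opt.zero_grad()
    loss.backward()
    opt.step()

# The qrel pattern is realizable at this dimension if, for every query, all
# relevant documents score strictly higher than all irrelevant ones.
with torch.no_grad():
    scores = Q @ D.T
    realizable = all(
        scores[q][rel[q] > 0].min() > scores[q][rel[q] == 0].max()
        for q in range(num_queries)
        if rel[q].sum() > 0 and (rel[q] == 0).any()
    )
print(f"dim={dim}: qrel pattern realizable -> {realizable}")
```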
If you use the free embedding code, you'll need to install the following requirements. We recommend using the `uv` package manager.
```bash
# Create a virtual environment
uv venv
source .venv/bin/activate

# Install dependencies
uv pip install -r https://raw.githubusercontent.com/google-deepmind/limit/refs/heads/main/code/requirements.txt
```
You can also load the data using the `datasets` library from Hugging Face (LIMIT, LIMIT-small):

```python
from datasets import load_dataset

ds = load_dataset("orionweller/LIMIT-small", "corpus")  # also available: queries, test (contains qrels)
```
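If useful, the other configurations named in the comment above can be loaded the same way (the configuration names are taken from that comment; adjust them if the Hub layout differs):

```python
from datasets import load_dataset

corpus = load_dataset("orionweller/LIMIT-small", "corpus")
queries = load_dataset("orionweller/LIMIT-small", "queries")
qrels = load_dataset("orionweller/LIMIT-small", "test")  # query-id -> corpus-id relevance mappings

# Inspect the splits and columns of each configuration.
print(corpus)
print(queries)
print(qrels)
```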
Evaluation was done using the MTEB framework. You can reproduce it only on the v2.0.0 branch (soon to be `main`). Note that the v2.0.0 branch is changing rapidly, so please install the version pinned in the requirements until it becomes `main`. An example:
```python
import mteb
from sentence_transformers import SentenceTransformer

# Load the model using MTEB
model_name = "sentence-transformers/all-MiniLM-L6-v2"
model = mteb.get_model(model_name)  # defaults to SentenceTransformer(model_name) if not implemented in MTEB

# Or load it directly with Sentence Transformers
model = SentenceTransformer(model_name)

# Select the desired tasks and evaluate
tasks = mteb.get_tasks(tasks=["LIMITSmallRetrieval"])  # or use LIMITRetrieval for the full dataset
results = mteb.evaluate(model, tasks=tasks)
```
Please see the MTEB GitHub repository for more details.
If you use this work, please cite the paper as:
```bibtex
@article{weller2025theoretical,
  title={On the Theoretical Limitations of Embedding-Based Retrieval},
  author={Weller, Orion and Boratko, Michael and Naim, Iftekhar and Lee, Jinhyuk},
  journal={arXiv preprint arXiv:2508.21038},
  year={2025}
}
```
Copyright 2025 Google LLC
All software is licensed under the Apache License, Version 2.0 (Apache 2.0); you may not use this file except in compliance with the Apache 2.0 license. You may obtain a copy of the Apache 2.0 license at: https://www.apache.org/licenses/LICENSE-2.0
All other materials are licensed under the Creative Commons Attribution 4.0 International License (CC-BY). You may obtain a copy of the CC-BY license at: https://creativecommons.org/licenses/by/4.0/legalcode
Unless required by applicable law or agreed to in writing, all software and materials distributed here under the Apache 2.0 or CC-BY licenses are distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the licenses for the specific language governing permissions and limitations under those licenses.
This is not an official Google product.