From 32823b1f696ff91482e30a45ea0b659ec38786e7 Mon Sep 17 00:00:00 2001 From: ChathuminaVimukthi Date: Tue, 1 Apr 2025 03:26:35 +0530 Subject: [PATCH 1/7] Updated model card for distilbert --- docs/source/en/model_doc/distilbert.md | 234 +++++++------------------ 1 file changed, 68 insertions(+), 166 deletions(-) diff --git a/docs/source/en/model_doc/distilbert.md b/docs/source/en/model_doc/distilbert.md index 3f949d9443a6..f6b47d2aff91 100644 --- a/docs/source/en/model_doc/distilbert.md +++ b/docs/source/en/model_doc/distilbert.md @@ -16,197 +16,99 @@ rendered properly in your Markdown viewer. # DistilBERT -
-PyTorch -TensorFlow -Flax -FlashAttention -SDPA +
+
+ PyTorch + TensorFlow + Flax +
-## Overview - -The DistilBERT model was proposed in the blog post [Smaller, faster, cheaper, lighter: Introducing DistilBERT, a -distilled version of BERT](https://medium.com/huggingface/distilbert-8cf3380435b5), and the paper [DistilBERT, a -distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108). DistilBERT is a -small, fast, cheap and light Transformer model trained by distilling BERT base. It has 40% less parameters than -*google-bert/bert-base-uncased*, runs 60% faster while preserving over 95% of BERT's performances as measured on the GLUE language -understanding benchmark. - -The abstract from the paper is the following: - -*As Transfer Learning from large-scale pre-trained models becomes more prevalent in Natural Language Processing (NLP), -operating these large models in on-the-edge and/or under constrained computational training or inference budgets -remains challenging. In this work, we propose a method to pre-train a smaller general-purpose language representation -model, called DistilBERT, which can then be fine-tuned with good performances on a wide range of tasks like its larger -counterparts. While most prior work investigated the use of distillation for building task-specific models, we leverage -knowledge distillation during the pretraining phase and show that it is possible to reduce the size of a BERT model by -40%, while retaining 97% of its language understanding capabilities and being 60% faster. To leverage the inductive -biases learned by larger models during pretraining, we introduce a triple loss combining language modeling, -distillation and cosine-distance losses. Our smaller, faster and lighter model is cheaper to pre-train and we -demonstrate its capabilities for on-device computations in a proof-of-concept experiment and a comparative on-device -study.* +# DistilBERT: The Efficient Alternative to BERT -This model was contributed by [victorsanh](https://huggingface.co/victorsanh). This model jax version was -contributed by [kamalkraj](https://huggingface.co/kamalkraj). The original code can be found [here](https://github.com/huggingface/transformers-research-projects/tree/main/distillation). - -## Usage tips - -- DistilBERT doesn't have `token_type_ids`, you don't need to indicate which token belongs to which segment. Just - separate your segments with the separation token `tokenizer.sep_token` (or `[SEP]`). -- DistilBERT doesn't have options to select the input positions (`position_ids` input). This could be added if - necessary though, just let us know if you need this option. -- Same as BERT but smaller. Trained by distillation of the pretrained BERT model, meaning it’s been trained to predict the same probabilities as the larger model. The actual objective is a combination of: - - * finding the same probabilities as the teacher model - * predicting the masked tokens correctly (but no next-sentence objective) - * a cosine similarity between the hidden states of the student and the teacher model - -### Using Scaled Dot Product Attention (SDPA) - -PyTorch includes a native scaled dot-product attention (SDPA) operator as part of `torch.nn.functional`. This function -encompasses several implementations that can be applied depending on the inputs and the hardware in use. 
See the -[official documentation](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html) -or the [GPU Inference](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#pytorch-scaled-dot-product-attention) -page for more information. - -SDPA is used by default for `torch>=2.1.1` when an implementation is available, but you may also set -`attn_implementation="sdpa"` in `from_pretrained()` to explicitly request SDPA to be used. - -``` -from transformers import DistilBertModel -model = DistilBertModel.from_pretrained("distilbert-base-uncased", torch_dtype=torch.float16, attn_implementation="sdpa") -``` - -For the best speedups, we recommend loading the model in half-precision (e.g. `torch.float16` or `torch.bfloat16`). - -On a local benchmark (NVIDIA GeForce RTX 2060-8GB, PyTorch 2.3.1, OS Ubuntu 20.04) with `float16` and the `distilbert-base-uncased` model with -a MaskedLM head, we saw the following speedups during training and inference. - -#### Training - -| num_training_steps | batch_size | seq_len | is cuda | Time per batch (eager - s) | Time per batch (sdpa - s) | Speedup (%) | Eager peak mem (MB) | sdpa peak mem (MB) | Mem saving (%) | -|--------------------|------------|---------|---------|----------------------------|---------------------------|-------------|---------------------|--------------------|----------------| -| 100 | 1 | 128 | False | 0.010 | 0.008 | 28.870 | 397.038 | 399.629 | -0.649 | -| 100 | 1 | 256 | False | 0.011 | 0.009 | 20.681 | 412.505 | 412.606 | -0.025 | -| 100 | 2 | 128 | False | 0.011 | 0.009 | 23.741 | 412.213 | 412.606 | -0.095 | -| 100 | 2 | 256 | False | 0.015 | 0.013 | 16.502 | 427.491 | 425.787 | 0.400 | -| 100 | 4 | 128 | False | 0.015 | 0.013 | 13.828 | 427.491 | 425.787 | 0.400 | -| 100 | 4 | 256 | False | 0.025 | 0.022 | 12.882 | 594.156 | 502.745 | 18.182 | -| 100 | 8 | 128 | False | 0.023 | 0.022 | 8.010 | 545.922 | 502.745 | 8.588 | -| 100 | 8 | 256 | False | 0.046 | 0.041 | 12.763 | 983.450 | 798.480 | 23.165 | - -#### Inference +DistilBERT offers the power of BERT in a more accessible package. First introduced in the blog post [Smaller, faster, cheaper, lighter: Introducing DistilBERT, a distilled version of BERT](https://medium.com/huggingface/distilbert-8cf3380435b5) and academic paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108), this model delivers impressive performance with fewer resources. 
-| num_batches | batch_size | seq_len | is cuda | is half | use mask | Per token latency eager (ms) | Per token latency SDPA (ms) | Speedup (%) | Mem eager (MB) | Mem BT (MB) | Mem saved (%) | -|-------------|------------|---------|---------|---------|----------|-----------------------------|-----------------------------|-------------|----------------|--------------|---------------| -| 50 | 2 | 64 | True | True | True | 0.032 | 0.025 | 28.192 | 154.532 | 155.531 | -0.642 | -| 50 | 2 | 128 | True | True | True | 0.033 | 0.025 | 32.636 | 157.286 | 157.482 | -0.125 | -| 50 | 4 | 64 | True | True | True | 0.032 | 0.026 | 24.783 | 157.023 | 157.449 | -0.271 | -| 50 | 4 | 128 | True | True | True | 0.034 | 0.028 | 19.299 | 162.794 | 162.269 | 0.323 | -| 50 | 8 | 64 | True | True | True | 0.035 | 0.028 | 25.105 | 160.958 | 162.204 | -0.768 | -| 50 | 8 | 128 | True | True | True | 0.052 | 0.046 | 12.375 | 173.155 | 171.844 | 0.763 | -| 50 | 16 | 64 | True | True | True | 0.051 | 0.045 | 12.882 | 172.106 | 171.713 | 0.229 | -| 50 | 16 | 128 | True | True | True | 0.096 | 0.081 | 18.524 | 191.257 | 191.517 | -0.136 | +Why Choose DistilBERT? +* Lightweight Design: Contains 40% fewer parameters than google-bert/bert-base-uncased +* Speed Advantage: Runs 60% faster than the original BERT +* Minimal Performance Loss: Maintains over 95% of BERT's performance on the GLUE benchmark +* Cost-Effective: Requires less computational power for training and inference -## Resources +DistilBERT achieves this efficiency through knowledge distillation, where a smaller model is trained to reproduce the behavior of a larger one. This makes it ideal for applications with limited computational resources or when deployment speed matters. -A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DistilBERT. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. - - - -- A blog post on [Getting Started with Sentiment Analysis using Python](https://huggingface.co/blog/sentiment-analysis-python) with DistilBERT. -- A blog post on how to [train DistilBERT with Blurr for sequence classification](https://huggingface.co/blog/fastai). -- A blog post on how to use [Ray to tune DistilBERT hyperparameters](https://huggingface.co/blog/ray-tune). -- A blog post on how to [train DistilBERT with Hugging Face and Amazon SageMaker](https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face). -- A notebook on how to [finetune DistilBERT for multi-label classification](https://colab.research.google.com/github/DhavalTaunk08/Transformers_scripts/blob/master/Transformers_multilabel_distilbert.ipynb). 🌎 -- A notebook on how to [finetune DistilBERT for multiclass classification with PyTorch](https://colab.research.google.com/github/abhimishra91/transformers-tutorials/blob/master/transformers_multiclass_classification.ipynb). 🌎 -- A notebook on how to [finetune DistilBERT for text classification in TensorFlow](https://colab.research.google.com/github/peterbayerle/huggingface_notebook/blob/main/distilbert_tf.ipynb). 🌎 -- [`DistilBertForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb). 
-- [`TFDistilBertForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb). -- [`FlaxDistilBertForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification_flax.ipynb). -- [Text classification task guide](../tasks/sequence_classification) - - - - -- [`DistilBertForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification.ipynb). -- [`TFDistilBertForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb). -- [`FlaxDistilBertForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/token-classification). -- [Token classification](https://huggingface.co/course/chapter7/2?fw=pt) chapter of the πŸ€— Hugging Face Course. -- [Token classification task guide](../tasks/token_classification) - - - - -- [`DistilBertForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#robertabertdistilbert-and-masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb). -- [`TFDistilBertForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_mlmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb). -- [`FlaxDistilBertForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/masked_language_modeling_flax.ipynb). -- [Masked language modeling](https://huggingface.co/course/chapter7/3?fw=pt) chapter of the πŸ€— Hugging Face Course. -- [Masked language modeling task guide](../tasks/masked_language_modeling) - - - -- [`DistilBertForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb). -- [`TFDistilBertForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb). 
-- [`FlaxDistilBertForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/question-answering). -- [Question answering](https://huggingface.co/course/chapter7/7?fw=pt) chapter of the πŸ€— Hugging Face Course. -- [Question answering task guide](../tasks/question_answering) - -**Multiple choice** -- [`DistilBertForMultipleChoice`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/multiple-choice) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb). -- [`TFDistilBertForMultipleChoice`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/multiple-choice) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb). -- [Multiple choice task guide](../tasks/multiple_choice) - -βš—οΈ Optimization +This model was contributed by [victorsanh](https://huggingface.co/victorsanh). This model jax version was +contributed by [kamalkraj](https://huggingface.co/kamalkraj). The original code can be found [here](https://github.com/huggingface/transformers-research-projects/tree/main/distillation). -- A blog post on how to [quantize DistilBERT with πŸ€— Optimum and Intel](https://huggingface.co/blog/intel). -- A blog post on how [Optimizing Transformers for GPUs with πŸ€— Optimum](https://www.philschmid.de/optimizing-transformers-with-optimum-gpu). -- A blog post on [Optimizing Transformers with Hugging Face Optimum](https://www.philschmid.de/optimizing-transformers-with-optimum). +You can find the official checkpoints for this model on the [Hugging Face Hub](https://huggingface.co/models). -⚑️ Inference +> [!TIP] +> Click on the right sidebar for more examples of how to use this model for other tasks. -- A blog post on how to [Accelerate BERT inference with Hugging Face Transformers and AWS Inferentia](https://huggingface.co/blog/bert-inferentia-sagemaker) with DistilBERT. -- A blog post on [Serverless Inference with Hugging Face's Transformers, DistilBERT and Amazon SageMaker](https://www.philschmid.de/sagemaker-serverless-huggingface-distilbert). +The examples below demonstrate how to use DistilBERT for text classification with [`Pipeline`] or the [`AutoModel`], and from the command line. -πŸš€ Deploy + -- A blog post on how to [deploy DistilBERT on Google Cloud](https://huggingface.co/blog/how-to-deploy-a-pipeline-to-google-clouds). -- A blog post on how to [deploy DistilBERT with Amazon SageMaker](https://huggingface.co/blog/deploy-hugging-face-models-easily-with-amazon-sagemaker). -- A blog post on how to [Deploy BERT with Hugging Face Transformers, Amazon SageMaker and Terraform module](https://www.philschmid.de/terraform-huggingface-amazon-sagemaker). + + ```python + from transformers import pipeline + classifier = pipeline( + task="text-classification", + model="distilbert-base-uncased-finetuned-sst-2-english" + ) -## Combining DistilBERT and Flash Attention 2 + result = classifier("I love using Hugging Face Transformers!") + print(result) + # Output: [{'label': 'POSITIVE', 'score': 0.9998}] + ``` + -First, make sure to install the latest version of Flash Attention 2 to include the sliding window attention feature. 
+ + ```python + from transformers import AutoTokenizer, AutoModelForSequenceClassification + import torch.nn.functional as F -```bash -pip install -U flash-attn --no-build-isolation -``` + # Load a fine-tuned model for sentiment analysis + model_name = "distilbert-base-uncased-finetuned-sst-2-english" + tokenizer = AutoTokenizer.from_pretrained(model_name) + model = AutoModelForSequenceClassification.from_pretrained(model_name) -Make also sure that you have a hardware that is compatible with Flash-Attention 2. Read more about it in the official documentation of flash-attn repository. Make also sure to load your model in half-precision (e.g. `torch.float16`) + # Tokenize and run inference + inputs = tokenizer("I love using Hugging Face Transformers!", return_tensors="pt") + outputs = model(**inputs) -To load and run a model using Flash Attention 2, refer to the snippet below: + # Convert logits to probabilities + probs = F.softmax(outputs.logits, dim=-1) -```python ->>> import torch ->>> from transformers import AutoTokenizer, AutoModel + # Get prediction + prediction = model.config.id2label[outputs.logits.argmax(-1).item()] + confidence = probs[0][outputs.logits.argmax(-1).item()].item() ->>> device = "cuda" # the device to load the model onto + print(f"Prediction: {prediction}, Confidence: {confidence:.4f}") + # Output: Prediction: POSITIVE, Confidence: 0.9998 + ``` + ->>> tokenizer = AutoTokenizer.from_pretrained('distilbert/distilbert-base-uncased') ->>> model = AutoModel.from_pretrained("distilbert/distilbert-base-uncased", torch_dtype=torch.float16, attn_implementation="flash_attention_2") + + ```bash + echo -e "I love using Hugging Face Transformers!" | transformers-cli run --task text-classification --model distilbert-base-uncased-finetuned-sst-2-english + ``` + ->>> text = "Replace me by any text you'd like." + ->>> encoded_input = tokenizer(text, return_tensors='pt').to(device) ->>> model.to(device) +## Notes ->>> output = model(**encoded_input) -``` +- DistilBERT doesn't have `token_type_ids`, you don't need to indicate which token belongs to which segment. Just + separate your segments with the separation token `tokenizer.sep_token` (or `[SEP]`). +- DistilBERT doesn't have options to select the input positions (`position_ids` input). This could be added if + necessary though, just let us know if you need this option. +- Same as BERT but smaller. Trained by distillation of the pretrained BERT model, meaning it’s been trained to predict the same probabilities as the larger model. 
The actual objective is a combination of: + * finding the same probabilities as the teacher model + * predicting the masked tokens correctly (but no next-sentence objective) + * a cosine similarity between the hidden states of the student and the teacher model ## DistilBertConfig From 7f35b268f826ab2135c0888494d67be4669914e3 Mon Sep 17 00:00:00 2001 From: ChathuminaVimukthi Date: Tue, 1 Apr 2025 04:42:43 +0530 Subject: [PATCH 2/7] Updated the distilbert model card --- docs/source/en/model_doc/distilbert.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/source/en/model_doc/distilbert.md b/docs/source/en/model_doc/distilbert.md index f6b47d2aff91..1deb08f3615a 100644 --- a/docs/source/en/model_doc/distilbert.md +++ b/docs/source/en/model_doc/distilbert.md @@ -50,7 +50,7 @@ The examples below demonstrate how to use DistilBERT for text classification wit - ```python + ```py from transformers import pipeline classifier = pipeline( @@ -65,7 +65,7 @@ The examples below demonstrate how to use DistilBERT for text classification wit - ```python + ```py from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch.nn.functional as F From 1c63d8b136ad8360e06986010f925582e91342ed Mon Sep 17 00:00:00 2001 From: ChathuminaVimukthi Date: Tue, 1 Apr 2025 03:26:35 +0530 Subject: [PATCH 3/7] Updated model card for distilbert --- docs/source/en/model_doc/distilbert.md | 234 +++++++------------------ 1 file changed, 68 insertions(+), 166 deletions(-) diff --git a/docs/source/en/model_doc/distilbert.md b/docs/source/en/model_doc/distilbert.md index 3f949d9443a6..f6b47d2aff91 100644 --- a/docs/source/en/model_doc/distilbert.md +++ b/docs/source/en/model_doc/distilbert.md @@ -16,197 +16,99 @@ rendered properly in your Markdown viewer. # DistilBERT -
-PyTorch -TensorFlow -Flax -FlashAttention -SDPA +
+
+ PyTorch + TensorFlow + Flax +
-## Overview - -The DistilBERT model was proposed in the blog post [Smaller, faster, cheaper, lighter: Introducing DistilBERT, a -distilled version of BERT](https://medium.com/huggingface/distilbert-8cf3380435b5), and the paper [DistilBERT, a -distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108). DistilBERT is a -small, fast, cheap and light Transformer model trained by distilling BERT base. It has 40% less parameters than -*google-bert/bert-base-uncased*, runs 60% faster while preserving over 95% of BERT's performances as measured on the GLUE language -understanding benchmark. - -The abstract from the paper is the following: - -*As Transfer Learning from large-scale pre-trained models becomes more prevalent in Natural Language Processing (NLP), -operating these large models in on-the-edge and/or under constrained computational training or inference budgets -remains challenging. In this work, we propose a method to pre-train a smaller general-purpose language representation -model, called DistilBERT, which can then be fine-tuned with good performances on a wide range of tasks like its larger -counterparts. While most prior work investigated the use of distillation for building task-specific models, we leverage -knowledge distillation during the pretraining phase and show that it is possible to reduce the size of a BERT model by -40%, while retaining 97% of its language understanding capabilities and being 60% faster. To leverage the inductive -biases learned by larger models during pretraining, we introduce a triple loss combining language modeling, -distillation and cosine-distance losses. Our smaller, faster and lighter model is cheaper to pre-train and we -demonstrate its capabilities for on-device computations in a proof-of-concept experiment and a comparative on-device -study.* +# DistilBERT: The Efficient Alternative to BERT -This model was contributed by [victorsanh](https://huggingface.co/victorsanh). This model jax version was -contributed by [kamalkraj](https://huggingface.co/kamalkraj). The original code can be found [here](https://github.com/huggingface/transformers-research-projects/tree/main/distillation). - -## Usage tips - -- DistilBERT doesn't have `token_type_ids`, you don't need to indicate which token belongs to which segment. Just - separate your segments with the separation token `tokenizer.sep_token` (or `[SEP]`). -- DistilBERT doesn't have options to select the input positions (`position_ids` input). This could be added if - necessary though, just let us know if you need this option. -- Same as BERT but smaller. Trained by distillation of the pretrained BERT model, meaning it’s been trained to predict the same probabilities as the larger model. The actual objective is a combination of: - - * finding the same probabilities as the teacher model - * predicting the masked tokens correctly (but no next-sentence objective) - * a cosine similarity between the hidden states of the student and the teacher model - -### Using Scaled Dot Product Attention (SDPA) - -PyTorch includes a native scaled dot-product attention (SDPA) operator as part of `torch.nn.functional`. This function -encompasses several implementations that can be applied depending on the inputs and the hardware in use. 
See the -[official documentation](https://pytorch.org/docs/stable/generated/torch.nn.functional.scaled_dot_product_attention.html) -or the [GPU Inference](https://huggingface.co/docs/transformers/main/en/perf_infer_gpu_one#pytorch-scaled-dot-product-attention) -page for more information. - -SDPA is used by default for `torch>=2.1.1` when an implementation is available, but you may also set -`attn_implementation="sdpa"` in `from_pretrained()` to explicitly request SDPA to be used. - -``` -from transformers import DistilBertModel -model = DistilBertModel.from_pretrained("distilbert-base-uncased", torch_dtype=torch.float16, attn_implementation="sdpa") -``` - -For the best speedups, we recommend loading the model in half-precision (e.g. `torch.float16` or `torch.bfloat16`). - -On a local benchmark (NVIDIA GeForce RTX 2060-8GB, PyTorch 2.3.1, OS Ubuntu 20.04) with `float16` and the `distilbert-base-uncased` model with -a MaskedLM head, we saw the following speedups during training and inference. - -#### Training - -| num_training_steps | batch_size | seq_len | is cuda | Time per batch (eager - s) | Time per batch (sdpa - s) | Speedup (%) | Eager peak mem (MB) | sdpa peak mem (MB) | Mem saving (%) | -|--------------------|------------|---------|---------|----------------------------|---------------------------|-------------|---------------------|--------------------|----------------| -| 100 | 1 | 128 | False | 0.010 | 0.008 | 28.870 | 397.038 | 399.629 | -0.649 | -| 100 | 1 | 256 | False | 0.011 | 0.009 | 20.681 | 412.505 | 412.606 | -0.025 | -| 100 | 2 | 128 | False | 0.011 | 0.009 | 23.741 | 412.213 | 412.606 | -0.095 | -| 100 | 2 | 256 | False | 0.015 | 0.013 | 16.502 | 427.491 | 425.787 | 0.400 | -| 100 | 4 | 128 | False | 0.015 | 0.013 | 13.828 | 427.491 | 425.787 | 0.400 | -| 100 | 4 | 256 | False | 0.025 | 0.022 | 12.882 | 594.156 | 502.745 | 18.182 | -| 100 | 8 | 128 | False | 0.023 | 0.022 | 8.010 | 545.922 | 502.745 | 8.588 | -| 100 | 8 | 256 | False | 0.046 | 0.041 | 12.763 | 983.450 | 798.480 | 23.165 | - -#### Inference +DistilBERT offers the power of BERT in a more accessible package. First introduced in the blog post [Smaller, faster, cheaper, lighter: Introducing DistilBERT, a distilled version of BERT](https://medium.com/huggingface/distilbert-8cf3380435b5) and academic paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108), this model delivers impressive performance with fewer resources. 
-| num_batches | batch_size | seq_len | is cuda | is half | use mask | Per token latency eager (ms) | Per token latency SDPA (ms) | Speedup (%) | Mem eager (MB) | Mem BT (MB) | Mem saved (%) | -|-------------|------------|---------|---------|---------|----------|-----------------------------|-----------------------------|-------------|----------------|--------------|---------------| -| 50 | 2 | 64 | True | True | True | 0.032 | 0.025 | 28.192 | 154.532 | 155.531 | -0.642 | -| 50 | 2 | 128 | True | True | True | 0.033 | 0.025 | 32.636 | 157.286 | 157.482 | -0.125 | -| 50 | 4 | 64 | True | True | True | 0.032 | 0.026 | 24.783 | 157.023 | 157.449 | -0.271 | -| 50 | 4 | 128 | True | True | True | 0.034 | 0.028 | 19.299 | 162.794 | 162.269 | 0.323 | -| 50 | 8 | 64 | True | True | True | 0.035 | 0.028 | 25.105 | 160.958 | 162.204 | -0.768 | -| 50 | 8 | 128 | True | True | True | 0.052 | 0.046 | 12.375 | 173.155 | 171.844 | 0.763 | -| 50 | 16 | 64 | True | True | True | 0.051 | 0.045 | 12.882 | 172.106 | 171.713 | 0.229 | -| 50 | 16 | 128 | True | True | True | 0.096 | 0.081 | 18.524 | 191.257 | 191.517 | -0.136 | +Why Choose DistilBERT? +* Lightweight Design: Contains 40% fewer parameters than google-bert/bert-base-uncased +* Speed Advantage: Runs 60% faster than the original BERT +* Minimal Performance Loss: Maintains over 95% of BERT's performance on the GLUE benchmark +* Cost-Effective: Requires less computational power for training and inference -## Resources +DistilBERT achieves this efficiency through knowledge distillation, where a smaller model is trained to reproduce the behavior of a larger one. This makes it ideal for applications with limited computational resources or when deployment speed matters. -A list of official Hugging Face and community (indicated by 🌎) resources to help you get started with DistilBERT. If you're interested in submitting a resource to be included here, please feel free to open a Pull Request and we'll review it! The resource should ideally demonstrate something new instead of duplicating an existing resource. - - - -- A blog post on [Getting Started with Sentiment Analysis using Python](https://huggingface.co/blog/sentiment-analysis-python) with DistilBERT. -- A blog post on how to [train DistilBERT with Blurr for sequence classification](https://huggingface.co/blog/fastai). -- A blog post on how to use [Ray to tune DistilBERT hyperparameters](https://huggingface.co/blog/ray-tune). -- A blog post on how to [train DistilBERT with Hugging Face and Amazon SageMaker](https://huggingface.co/blog/the-partnership-amazon-sagemaker-and-hugging-face). -- A notebook on how to [finetune DistilBERT for multi-label classification](https://colab.research.google.com/github/DhavalTaunk08/Transformers_scripts/blob/master/Transformers_multilabel_distilbert.ipynb). 🌎 -- A notebook on how to [finetune DistilBERT for multiclass classification with PyTorch](https://colab.research.google.com/github/abhimishra91/transformers-tutorials/blob/master/transformers_multiclass_classification.ipynb). 🌎 -- A notebook on how to [finetune DistilBERT for text classification in TensorFlow](https://colab.research.google.com/github/peterbayerle/huggingface_notebook/blob/main/distilbert_tf.ipynb). 🌎 -- [`DistilBertForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification.ipynb). 
-- [`TFDistilBertForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification-tf.ipynb). -- [`FlaxDistilBertForSequenceClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/text-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/text_classification_flax.ipynb). -- [Text classification task guide](../tasks/sequence_classification) - - - - -- [`DistilBertForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification.ipynb). -- [`TFDistilBertForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/token-classification) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/token_classification-tf.ipynb). -- [`FlaxDistilBertForTokenClassification`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/token-classification). -- [Token classification](https://huggingface.co/course/chapter7/2?fw=pt) chapter of the πŸ€— Hugging Face Course. -- [Token classification task guide](../tasks/token_classification) - - - - -- [`DistilBertForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/language-modeling#robertabertdistilbert-and-masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling.ipynb). -- [`TFDistilBertForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/language-modeling#run_mlmpy) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/language_modeling-tf.ipynb). -- [`FlaxDistilBertForMaskedLM`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/language-modeling#masked-language-modeling) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/masked_language_modeling_flax.ipynb). -- [Masked language modeling](https://huggingface.co/course/chapter7/3?fw=pt) chapter of the πŸ€— Hugging Face Course. -- [Masked language modeling task guide](../tasks/masked_language_modeling) - - - -- [`DistilBertForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering.ipynb). -- [`TFDistilBertForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/question-answering) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/question_answering-tf.ipynb). 
-- [`FlaxDistilBertForQuestionAnswering`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/flax/question-answering). -- [Question answering](https://huggingface.co/course/chapter7/7?fw=pt) chapter of the πŸ€— Hugging Face Course. -- [Question answering task guide](../tasks/question_answering) - -**Multiple choice** -- [`DistilBertForMultipleChoice`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/pytorch/multiple-choice) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice.ipynb). -- [`TFDistilBertForMultipleChoice`] is supported by this [example script](https://github.com/huggingface/transformers/tree/main/examples/tensorflow/multiple-choice) and [notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/examples/multiple_choice-tf.ipynb). -- [Multiple choice task guide](../tasks/multiple_choice) - -βš—οΈ Optimization +This model was contributed by [victorsanh](https://huggingface.co/victorsanh). This model jax version was +contributed by [kamalkraj](https://huggingface.co/kamalkraj). The original code can be found [here](https://github.com/huggingface/transformers-research-projects/tree/main/distillation). -- A blog post on how to [quantize DistilBERT with πŸ€— Optimum and Intel](https://huggingface.co/blog/intel). -- A blog post on how [Optimizing Transformers for GPUs with πŸ€— Optimum](https://www.philschmid.de/optimizing-transformers-with-optimum-gpu). -- A blog post on [Optimizing Transformers with Hugging Face Optimum](https://www.philschmid.de/optimizing-transformers-with-optimum). +You can find the official checkpoints for this model on the [Hugging Face Hub](https://huggingface.co/models). -⚑️ Inference +> [!TIP] +> Click on the right sidebar for more examples of how to use this model for other tasks. -- A blog post on how to [Accelerate BERT inference with Hugging Face Transformers and AWS Inferentia](https://huggingface.co/blog/bert-inferentia-sagemaker) with DistilBERT. -- A blog post on [Serverless Inference with Hugging Face's Transformers, DistilBERT and Amazon SageMaker](https://www.philschmid.de/sagemaker-serverless-huggingface-distilbert). +The examples below demonstrate how to use DistilBERT for text classification with [`Pipeline`] or the [`AutoModel`], and from the command line. -πŸš€ Deploy + -- A blog post on how to [deploy DistilBERT on Google Cloud](https://huggingface.co/blog/how-to-deploy-a-pipeline-to-google-clouds). -- A blog post on how to [deploy DistilBERT with Amazon SageMaker](https://huggingface.co/blog/deploy-hugging-face-models-easily-with-amazon-sagemaker). -- A blog post on how to [Deploy BERT with Hugging Face Transformers, Amazon SageMaker and Terraform module](https://www.philschmid.de/terraform-huggingface-amazon-sagemaker). + + ```python + from transformers import pipeline + classifier = pipeline( + task="text-classification", + model="distilbert-base-uncased-finetuned-sst-2-english" + ) -## Combining DistilBERT and Flash Attention 2 + result = classifier("I love using Hugging Face Transformers!") + print(result) + # Output: [{'label': 'POSITIVE', 'score': 0.9998}] + ``` + -First, make sure to install the latest version of Flash Attention 2 to include the sliding window attention feature. 
+ + ```python + from transformers import AutoTokenizer, AutoModelForSequenceClassification + import torch.nn.functional as F -```bash -pip install -U flash-attn --no-build-isolation -``` + # Load a fine-tuned model for sentiment analysis + model_name = "distilbert-base-uncased-finetuned-sst-2-english" + tokenizer = AutoTokenizer.from_pretrained(model_name) + model = AutoModelForSequenceClassification.from_pretrained(model_name) -Make also sure that you have a hardware that is compatible with Flash-Attention 2. Read more about it in the official documentation of flash-attn repository. Make also sure to load your model in half-precision (e.g. `torch.float16`) + # Tokenize and run inference + inputs = tokenizer("I love using Hugging Face Transformers!", return_tensors="pt") + outputs = model(**inputs) -To load and run a model using Flash Attention 2, refer to the snippet below: + # Convert logits to probabilities + probs = F.softmax(outputs.logits, dim=-1) -```python ->>> import torch ->>> from transformers import AutoTokenizer, AutoModel + # Get prediction + prediction = model.config.id2label[outputs.logits.argmax(-1).item()] + confidence = probs[0][outputs.logits.argmax(-1).item()].item() ->>> device = "cuda" # the device to load the model onto + print(f"Prediction: {prediction}, Confidence: {confidence:.4f}") + # Output: Prediction: POSITIVE, Confidence: 0.9998 + ``` + ->>> tokenizer = AutoTokenizer.from_pretrained('distilbert/distilbert-base-uncased') ->>> model = AutoModel.from_pretrained("distilbert/distilbert-base-uncased", torch_dtype=torch.float16, attn_implementation="flash_attention_2") + + ```bash + echo -e "I love using Hugging Face Transformers!" | transformers-cli run --task text-classification --model distilbert-base-uncased-finetuned-sst-2-english + ``` + ->>> text = "Replace me by any text you'd like." + ->>> encoded_input = tokenizer(text, return_tensors='pt').to(device) ->>> model.to(device) +## Notes ->>> output = model(**encoded_input) -``` +- DistilBERT doesn't have `token_type_ids`, you don't need to indicate which token belongs to which segment. Just + separate your segments with the separation token `tokenizer.sep_token` (or `[SEP]`). +- DistilBERT doesn't have options to select the input positions (`position_ids` input). This could be added if + necessary though, just let us know if you need this option. +- Same as BERT but smaller. Trained by distillation of the pretrained BERT model, meaning it’s been trained to predict the same probabilities as the larger model. 
The actual objective is a combination of: + * finding the same probabilities as the teacher model + * predicting the masked tokens correctly (but no next-sentence objective) + * a cosine similarity between the hidden states of the student and the teacher model ## DistilBertConfig From 3d06b20f1735a84622092fee07bdebc81eec217d Mon Sep 17 00:00:00 2001 From: ChathuminaVimukthi Date: Tue, 1 Apr 2025 04:42:43 +0530 Subject: [PATCH 4/7] Updated the distilbert model card --- docs/source/en/model_doc/distilbert.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/source/en/model_doc/distilbert.md b/docs/source/en/model_doc/distilbert.md index f6b47d2aff91..1deb08f3615a 100644 --- a/docs/source/en/model_doc/distilbert.md +++ b/docs/source/en/model_doc/distilbert.md @@ -50,7 +50,7 @@ The examples below demonstrate how to use DistilBERT for text classification wit - ```python + ```py from transformers import pipeline classifier = pipeline( @@ -65,7 +65,7 @@ The examples below demonstrate how to use DistilBERT for text classification wit - ```python + ```py from transformers import AutoTokenizer, AutoModelForSequenceClassification import torch.nn.functional as F From dc1d8324d896b2f8b1d773cdc9a23af6150529cd Mon Sep 17 00:00:00 2001 From: ChathuminaVimukthi Date: Fri, 4 Apr 2025 02:37:18 +0530 Subject: [PATCH 5/7] Addressed code review comments --- docs/source/en/model_doc/distilbert.md | 99 +++++++++++--------------- 1 file changed, 43 insertions(+), 56 deletions(-) diff --git a/docs/source/en/model_doc/distilbert.md b/docs/source/en/model_doc/distilbert.md index 1deb08f3615a..088aeb717ec5 100644 --- a/docs/source/en/model_doc/distilbert.md +++ b/docs/source/en/model_doc/distilbert.md @@ -14,8 +14,6 @@ rendered properly in your Markdown viewer. --> -# DistilBERT -
PyTorch @@ -24,76 +22,70 @@ rendered properly in your Markdown viewer.
-# DistilBERT: The Efficient Alternative to BERT +# DistilBERT -DistilBERT offers the power of BERT in a more accessible package. First introduced in the blog post [Smaller, faster, cheaper, lighter: Introducing DistilBERT, a distilled version of BERT](https://medium.com/huggingface/distilbert-8cf3380435b5) and academic paper [DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter](https://arxiv.org/abs/1910.01108), this model delivers impressive performance with fewer resources. +[DistilBERT](https://huggingface.co/papers/1910.01108) is pretrained by knowledge distillation to create a smaller model with faster inference and requires less compute to train. Through a triple loss objective during pretraining, language modeling loss, distillation loss, cosine-distance loss, DistilBERT demonstrates similar performance to a larger transformer language model. -Why Choose DistilBERT? +You can find all the original DistilBERT checkpoints under the [DistilBERT](https://huggingface.co/distilbert) organization. -* Lightweight Design: Contains 40% fewer parameters than google-bert/bert-base-uncased -* Speed Advantage: Runs 60% faster than the original BERT -* Minimal Performance Loss: Maintains over 95% of BERT's performance on the GLUE benchmark -* Cost-Effective: Requires less computational power for training and inference +> [!TIP] +> Click on the DistilBERT models in the right sidebar for more examples of how to apply DistilBERT to different language tasks. -DistilBERT achieves this efficiency through knowledge distillation, where a smaller model is trained to reproduce the behavior of a larger one. This makes it ideal for applications with limited computational resources or when deployment speed matters. +The example below demonstrates how to classify text with [`Pipeline`], [`AutoModel`], and from the command line. -This model was contributed by [victorsanh](https://huggingface.co/victorsanh). This model jax version was -contributed by [kamalkraj](https://huggingface.co/kamalkraj). The original code can be found [here](https://github.com/huggingface/transformers-research-projects/tree/main/distillation). + -You can find the official checkpoints for this model on the [Hugging Face Hub](https://huggingface.co/models). + -> [!TIP] -> Click on the right sidebar for more examples of how to use this model for other tasks. +```py +from transformers import pipeline -The examples below demonstrate how to use DistilBERT for text classification with [`Pipeline`] or the [`AutoModel`], and from the command line. 
+classifier = pipeline( + task="text-classification", + model="distilbert-base-uncased-finetuned-sst-2-english" +) - +result = classifier("I love using Hugging Face Transformers!") +print(result) +# Output: [{'label': 'POSITIVE', 'score': 0.9998}] +``` - - ```py - from transformers import pipeline - - classifier = pipeline( - task="text-classification", - model="distilbert-base-uncased-finetuned-sst-2-english" - ) - - result = classifier("I love using Hugging Face Transformers!") - print(result) - # Output: [{'label': 'POSITIVE', 'score': 0.9998}] - ``` - ```py - from transformers import AutoTokenizer, AutoModelForSequenceClassification - import torch.nn.functional as F - # Load a fine-tuned model for sentiment analysis - model_name = "distilbert-base-uncased-finetuned-sst-2-english" - tokenizer = AutoTokenizer.from_pretrained(model_name) - model = AutoModelForSequenceClassification.from_pretrained(model_name) +```py +from transformers import AutoTokenizer, AutoModelForSequenceClassification +import torch.nn.functional as F + +# Load a fine-tuned model for sentiment analysis +model_name = "distilbert-base-uncased-finetuned-sst-2-english" +tokenizer = AutoTokenizer.from_pretrained(model_name) +model = AutoModelForSequenceClassification.from_pretrained(model_name) + +# Tokenize and run inference +inputs = tokenizer("I love using Hugging Face Transformers!", return_tensors="pt") +outputs = model(**inputs) - # Tokenize and run inference - inputs = tokenizer("I love using Hugging Face Transformers!", return_tensors="pt") - outputs = model(**inputs) +# Convert logits to probabilities +probs = F.softmax(outputs.logits, dim=-1) - # Convert logits to probabilities - probs = F.softmax(outputs.logits, dim=-1) +# Get prediction +prediction = model.config.id2label[outputs.logits.argmax(-1).item()] +confidence = probs[0][outputs.logits.argmax(-1).item()].item() - # Get prediction - prediction = model.config.id2label[outputs.logits.argmax(-1).item()] - confidence = probs[0][outputs.logits.argmax(-1).item()].item() +print(f"Prediction: {prediction}, Confidence: {confidence:.4f}") +# Output: Prediction: POSITIVE, Confidence: 0.9998 +``` - print(f"Prediction: {prediction}, Confidence: {confidence:.4f}") - # Output: Prediction: POSITIVE, Confidence: 0.9998 - ``` - ```bash - echo -e "I love using Hugging Face Transformers!" | transformers-cli run --task text-classification --model distilbert-base-uncased-finetuned-sst-2-english - ``` + +```bash +echo -e "I love using Hugging Face Transformers!" | transformers-cli run --task text-classification --model distilbert-base-uncased-finetuned-sst-2-english +``` + @@ -104,11 +96,6 @@ The examples below demonstrate how to use DistilBERT for text classification wit separate your segments with the separation token `tokenizer.sep_token` (or `[SEP]`). - DistilBERT doesn't have options to select the input positions (`position_ids` input). This could be added if necessary though, just let us know if you need this option. -- Same as BERT but smaller. Trained by distillation of the pretrained BERT model, meaning it’s been trained to predict the same probabilities as the larger model. 
The actual objective is a combination of: - - * finding the same probabilities as the teacher model - * predicting the masked tokens correctly (but no next-sentence objective) - * a cosine similarity between the hidden states of the student and the teacher model ## DistilBertConfig From 7f59fb3b9f0ff61084cd093c5f26c3353b1cde39 Mon Sep 17 00:00:00 2001 From: ChathuminaVimukthi Date: Sat, 5 Apr 2025 00:43:07 +0530 Subject: [PATCH 6/7] Addressed review comments --- docs/source/en/model_doc/distilbert.md | 37 +++++++++++++------------- 1 file changed, 19 insertions(+), 18 deletions(-) diff --git a/docs/source/en/model_doc/distilbert.md b/docs/source/en/model_doc/distilbert.md index 088aeb717ec5..1eb99b845ef9 100644 --- a/docs/source/en/model_doc/distilbert.md +++ b/docs/source/en/model_doc/distilbert.md @@ -19,6 +19,8 @@ rendered properly in your Markdown viewer. PyTorch TensorFlow Flax + SDPA + FlashAttention
@@ -55,27 +57,26 @@ print(result) ```py -from transformers import AutoTokenizer, AutoModelForSequenceClassification -import torch.nn.functional as F +import torch +from transformers import AutoModelForSequenceClassification, AutoTokenizer -# Load a fine-tuned model for sentiment analysis -model_name = "distilbert-base-uncased-finetuned-sst-2-english" -tokenizer = AutoTokenizer.from_pretrained(model_name) -model = AutoModelForSequenceClassification.from_pretrained(model_name) - -# Tokenize and run inference -inputs = tokenizer("I love using Hugging Face Transformers!", return_tensors="pt") -outputs = model(**inputs) - -# Convert logits to probabilities -probs = F.softmax(outputs.logits, dim=-1) +tokenizer = AutoTokenizer.from_pretrained( + "distilbert/distilbert-base-uncased-finetuned-sst-2-english", +) +model = AutoModelForSequenceClassification.from_pretrained( + "distilbert/distilbert-base-uncased-finetuned-sst-2-english", + torch_dtype=torch.float16, + device_map="auto", + attn_implementation="sdpa" +) +inputs = tokenizer("I love using Hugging Face Transformers!", return_tensors="pt").to("cuda") -# Get prediction -prediction = model.config.id2label[outputs.logits.argmax(-1).item()] -confidence = probs[0][outputs.logits.argmax(-1).item()].item() +with torch.no_grad(): + outputs = model(**inputs) -print(f"Prediction: {prediction}, Confidence: {confidence:.4f}") -# Output: Prediction: POSITIVE, Confidence: 0.9998 +predicted_class_id = torch.argmax(outputs.logits, dim=-1).item() +predicted_label = model.config.id2label[predicted_class_id] +print(f"Predicted label: {predicted_label}") ``` From 2df01684a791aed3341f1038f8b527ae911d88b2 Mon Sep 17 00:00:00 2001 From: Steven Liu <59462357+stevhliu@users.noreply.github.com> Date: Fri, 4 Apr 2025 14:47:02 -0700 Subject: [PATCH 7/7] fix pipeline --- docs/source/en/model_doc/distilbert.md | 4 +++- 1 file changed, 3 insertions(+), 1 deletion(-) diff --git a/docs/source/en/model_doc/distilbert.md b/docs/source/en/model_doc/distilbert.md index 1eb99b845ef9..cb906234501c 100644 --- a/docs/source/en/model_doc/distilbert.md +++ b/docs/source/en/model_doc/distilbert.md @@ -44,7 +44,9 @@ from transformers import pipeline classifier = pipeline( task="text-classification", - model="distilbert-base-uncased-finetuned-sst-2-english" + model="distilbert-base-uncased-finetuned-sst-2-english", + torch_dtype=torch.float16, + device=0 ) result = classifier("I love using Hugging Face Transformers!")
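
The model card above describes DistilBERT's pretraining objective in prose (soft-target distillation against the teacher, masked language modeling on the hard labels, and a cosine loss between student and teacher hidden states) but never shows what that combination looks like in code. The sketch below is an illustrative reconstruction of that triple loss, not the original training script: the function name `triple_loss`, the default weights, and the temperature value are placeholders chosen for this example, and it assumes you already have student and teacher logits and hidden states for a batch.

```py
import torch.nn.functional as F

def triple_loss(student_logits, teacher_logits, student_hidden, teacher_hidden, labels,
                temperature=2.0, alpha_distil=1.0, alpha_mlm=1.0, alpha_cos=1.0):
    # Illustrative sketch of the triple-loss objective described in the model card;
    # the weights and temperature here are placeholder values, not the published settings.

    # 1. Soft-target distillation loss: KL divergence between temperature-scaled
    #    student and teacher distributions, scaled by T^2 to keep gradients comparable.
    distil = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)

    # 2. Masked language modeling loss on the hard labels (-100 marks ignored positions).
    mlm = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)),
        labels.view(-1),
        ignore_index=-100,
    )

    # 3. Cosine embedding loss pulling student hidden states toward the teacher's.
    flat_student = student_hidden.view(-1, student_hidden.size(-1))
    flat_teacher = teacher_hidden.view(-1, teacher_hidden.size(-1))
    cos = F.cosine_embedding_loss(
        flat_student, flat_teacher, flat_student.new_ones(flat_student.size(0))
    )

    return alpha_distil * distil + alpha_mlm * mlm + alpha_cos * cos
```

Softening both distributions with a temperature is what lets the student learn from the relative probabilities the teacher assigns to tokens other than the correct one, which is the main signal distillation adds on top of plain masked language modeling.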