diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index 098f71f44020..64293ef7a203 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -125,7 +125,7 @@ Follow these steps to start contributing:
    $ git checkout -b a-descriptive-name-for-my-changes
    ```
 
-   **do not** work on the `master` branch.
+   **Do not** work on the `master` branch.
 
 4. Set up a development environment by running the following command in a virtual environment:
 
diff --git a/docs/source/preprocessing.rst b/docs/source/preprocessing.rst
index 10e27814c052..a684f8aaeb2c 100644
--- a/docs/source/preprocessing.rst
+++ b/docs/source/preprocessing.rst
@@ -2,7 +2,6 @@ Preprocessing data
 =======================================================================================================================
 
 In this tutorial, we'll explore how to preprocess your data using 🤗 Transformers. The main tool for this is what we
-
 call a :doc:`tokenizer `. You can build one using the tokenizer class associated to the model
 you would like to use, or directly with the :class:`~transformers.AutoTokenizer` class.
 
@@ -52,7 +51,7 @@ The tokenizer can decode a list of token ids in a proper sentence:
     "[CLS] Hello, I'm a single sentence! [SEP]"
 
 As you can see, the tokenizer automatically added some special tokens that the model expects. Not all models need
-special tokens; for instance, if we had used` gtp2-medium` instead of `bert-base-cased` to create our tokenizer, we
+special tokens; for instance, if we had used `gpt2-medium` instead of `bert-base-cased` to create our tokenizer, we
 would have seen the same sentence as the original one here. You can disable this behavior (which is only advised if
 you have added those special tokens yourself) by passing ``add_special_tokens=False``.
 
diff --git a/docs/source/quicktour.rst b/docs/source/quicktour.rst
index 5b0ca708177f..9d1444e2d6c0 100644
--- a/docs/source/quicktour.rst
+++ b/docs/source/quicktour.rst
@@ -240,7 +240,9 @@ activations of the model.
            [ 0.08181786, -0.04179301]], dtype=float32)>,)
 
 The model can return more than just the final activations, which is why the output is a tuple. Here we only asked for
-the final activations, so we get a tuple with one element. .. note::
+the final activations, so we get a tuple with one element.
+
+.. note::
 
     All 🤗 Transformers models (PyTorch or TensorFlow) return the activations of the model *before* the final
     activation function (like SoftMax) since this final activation function is often fused with the loss.
diff --git a/docs/source/serialization.rst b/docs/source/serialization.rst
index 670a6a3a9db8..e8a646006a08 100644
--- a/docs/source/serialization.rst
+++ b/docs/source/serialization.rst
@@ -70,8 +70,8 @@ inference.
   optimizations afterwards.
 
 .. note::
-    For more information about the optimizations enabled by ONNXRuntime, please have a look at the (`ONNXRuntime Github
-    `_)
+    For more information about the optimizations enabled by ONNXRuntime, please have a look at the `ONNXRuntime Github
+    `_.
 
 Quantization
 -----------------------------------------------------------------------------------------------------------------------
diff --git a/src/transformers/data/data_collator.py b/src/transformers/data/data_collator.py
index 6ad0a6ccd210..d49c661513de 100644
--- a/src/transformers/data/data_collator.py
+++ b/src/transformers/data/data_collator.py
@@ -20,14 +20,14 @@ def default_data_collator(features: List[InputDataClass]) -> Dict[str, torch.Tensor]:
     """
-    Very simple data collator that simply collates batches of dict-like objects and erforms special handling for
+    Very simple data collator that simply collates batches of dict-like objects and performs special handling for
     potential keys named:
 
         - ``label``: handles a single value (int or float) per object
         - ``label_ids``: handles a list of values per object
 
-    Des not do any additional preprocessing: property names of the input object will be used as corresponding inputs to
-    the model. See glue and ner for example of how it's useful.
+    Does not do any additional preprocessing: property names of the input object will be used as corresponding inputs
+    to the model. See glue and ner for example of how it's useful.
     """
 
     # In this function we'll make the assumption that all `features` in the batch