diff --git a/docs/source/ar/custom_models.md b/docs/source/ar/custom_models.md
index 26956af811f5..d46df9cb7298 100644
--- a/docs/source/ar/custom_models.md
+++ b/docs/source/ar/custom_models.md
@@ -280,7 +280,7 @@ resnet50d.model.load_state_dict(pretrained_model.state_dict())
الآن لإرسال النموذج إلى Hub، تأكد من تسجيل الدخول. إما تشغيل في المحطة الأوامر الطرفية الخاصة بك:
```bash
-huggingface-cli login
+hf auth login
```
أو من دفتر ملاحظات:
diff --git a/docs/source/ar/model_sharing.md b/docs/source/ar/model_sharing.md
index b802eb3ef038..b4b1bb821b9b 100644
--- a/docs/source/ar/model_sharing.md
+++ b/docs/source/ar/model_sharing.md
@@ -41,7 +41,7 @@ picture-in-picture" allowfullscreen>
قبل مشاركة نموذج على Hub، ستحتاج إلى بيانات اعتماد حساب Hugging Face الخاصة بك. إذا كنت تستخدم منصة الأوامر، فقم بتشغيل الأمر التالي في بيئة افتراضية حيث تم تثبيت 🤗 Transformers. سيقوم هذا الأمر بتخزين رمز الدخول الخاص بك في مجلد تخزين المؤقت لـ Hugging Face (`~/.cache/` بشكل افتراضي):
```bash
-huggingface-cli login
+hf auth login
```
إذا كنت تستخدم دفتر ملاحظات مثل Jupyter أو Colaboratory، فتأكد من تثبيت مكتبة [`huggingface_hub`](https://huggingface.co/docs/hub/adding-a-library). تسمح لك هذه المكتبة بالتفاعل برمجيًا مع Hub.
diff --git a/docs/source/ar/run_scripts.md b/docs/source/ar/run_scripts.md
index c7aea4eb9611..f7673408ca7d 100644
--- a/docs/source/ar/run_scripts.md
+++ b/docs/source/ar/run_scripts.md
@@ -324,7 +324,7 @@ python examples/pytorch/summarization/run_summarization.py
يمكن لجميع النصوص البرمجية رفع نموذجك النهائي إلى [مركز النماذج](https://huggingface.co/models). تأكد من تسجيل الدخول إلى Hugging Face قبل البدء:
```bash
-huggingface-cli login
+hf auth login
```
ثم أضف المعلمة `push_to_hub` إلى النص البرمجي . ستقوم هذه المعلمة بإنشاء مستودع باستخدام اسم مستخدم Hugging Face واسم المجلد المحدد في `output_dir`.
diff --git a/docs/source/de/model_sharing.md b/docs/source/de/model_sharing.md
index 850d9a3454a9..3b6e55eb4bf9 100644
--- a/docs/source/de/model_sharing.md
+++ b/docs/source/de/model_sharing.md
@@ -56,7 +56,7 @@ Dateien lassen sich auch in einem Repository leicht bearbeiten, und Sie können
Bevor Sie ein Modell für den Hub freigeben, benötigen Sie Ihre Hugging Face-Anmeldedaten. Wenn Sie Zugang zu einem Terminal haben, führen Sie den folgenden Befehl in der virtuellen Umgebung aus, in der 🤗 Transformers installiert ist. Dadurch werden Ihre Zugangsdaten in Ihrem Hugging Face-Cache-Ordner (standardmäßig `~/.cache/`) gespeichert:
```bash
-huggingface-cli login
+hf auth login
```
Wenn Sie ein Notebook wie Jupyter oder Colaboratory verwenden, stellen Sie sicher, dass Sie die [`huggingface_hub`](https://huggingface.co/docs/hub/adding-a-library) Bibliothek installiert haben. Diese Bibliothek ermöglicht Ihnen die programmatische Interaktion mit dem Hub.
diff --git a/docs/source/de/run_scripts.md b/docs/source/de/run_scripts.md
index 4b62c73276e0..069a0c3fd3de 100644
--- a/docs/source/de/run_scripts.md
+++ b/docs/source/de/run_scripts.md
@@ -324,7 +324,7 @@ python examples/pytorch/summarization/run_summarization.py
Alle Skripte können Ihr endgültiges Modell in den [Model Hub](https://huggingface.co/models) hochladen. Stellen Sie sicher, dass Sie bei Hugging Face angemeldet sind, bevor Sie beginnen:
```bash
-huggingface-cli login
+hf auth login
```
Dann fügen Sie dem Skript das Argument `push_to_hub` hinzu. Mit diesem Argument wird ein Repository mit Ihrem Hugging Face-Benutzernamen und dem in `output_dir` angegebenen Ordnernamen erstellt.
diff --git a/docs/source/en/custom_models.md b/docs/source/en/custom_models.md
index a6f9d1238e00..68afc91531fb 100644
--- a/docs/source/en/custom_models.md
+++ b/docs/source/en/custom_models.md
@@ -271,7 +271,7 @@ The model is ready to be pushed to the Hub now. Log in to your Hugging Face acco
```bash
-huggingface-cli login
+hf auth login
```
diff --git a/docs/source/en/model_sharing.md b/docs/source/en/model_sharing.md
index a6ebdfb39657..1a31072dbd08 100644
--- a/docs/source/en/model_sharing.md
+++ b/docs/source/en/model_sharing.md
@@ -28,7 +28,7 @@ To share a model to the Hub, you need a Hugging Face [account](https://hf.co/joi
```bash
-huggingface-cli login
+hf auth login
```
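Editorial note on the change above: both the old and new commands cache the access token under the Hugging Face cache directory. A minimal stdlib sketch of locating that cache, assuming the `~/.cache/huggingface` default described in these docs and the `HF_HOME` override (both are assumptions for illustration, not part of the patch):

```python
import os
from pathlib import Path

# Default Hugging Face cache root; the HF_HOME environment variable
# overrides it (assumption based on the `~/.cache/` default in the docs).
hf_home = Path(os.environ.get("HF_HOME", Path.home() / ".cache" / "huggingface"))

# The login command stores the access token in a file named "token" here.
token_file = hf_home / "token"
print(token_file)
```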
diff --git a/docs/source/en/quicktour.md b/docs/source/en/quicktour.md
index 66055c3371e2..67ef9922b0e9 100755
--- a/docs/source/en/quicktour.md
+++ b/docs/source/en/quicktour.md
@@ -49,7 +49,7 @@ notebook_login()
Make sure the [huggingface_hub[cli]](https://huggingface.co/docs/huggingface_hub/guides/cli#getting-started) package is installed and run the command below. Paste your User Access Token when prompted to log in.
```bash
-huggingface-cli login
+hf auth login
```
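As the quicktour section above notes, login is driven by a User Access Token. For non-interactive environments, the token can instead be supplied through the `HF_TOKEN` environment variable, which the Hub client libraries read when no cached login exists. A hedged sketch (the token value is a placeholder, not a real credential):

```python
import os

# Placeholder token for illustration only; real tokens are created at
# https://huggingface.co/settings/tokens and start with "hf_".
os.environ["HF_TOKEN"] = "hf_xxxxxxxxxxxxxxxxxxxx"

# Client libraries such as huggingface_hub pick up HF_TOKEN automatically.
print(os.environ["HF_TOKEN"].startswith("hf_"))  # → True
```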
diff --git a/docs/source/en/tasks/semantic_segmentation.md b/docs/source/en/tasks/semantic_segmentation.md
index a21ff62edf1a..13aa4804bc1a 100644
--- a/docs/source/en/tasks/semantic_segmentation.md
+++ b/docs/source/en/tasks/semantic_segmentation.md
@@ -289,7 +289,7 @@ You could also create and use your own dataset if you prefer to train with the [
}
)
- # step 3: push to Hub (assumes you have ran the huggingface-cli login command in a terminal/notebook)
+ # step 3: push to Hub (assumes you have run the hf auth login command in a terminal/notebook)
dataset.push_to_hub("your-name/dataset-repo")
# optionally, you can push to a private repo on the Hub
diff --git a/docs/source/es/custom_models.md b/docs/source/es/custom_models.md
index 7e00505b8df6..fec50e4e7a18 100644
--- a/docs/source/es/custom_models.md
+++ b/docs/source/es/custom_models.md
@@ -285,7 +285,7 @@ resnet50d.model.load_state_dict(pretrained_model.state_dict())
Ahora, para enviar el modelo al Hub, asegúrate de haber iniciado sesión. Ejecuta en tu terminal:
```bash
-huggingface-cli login
+hf auth login
```
o desde un _notebook_:
diff --git a/docs/source/es/model_sharing.md b/docs/source/es/model_sharing.md
index 77ee523094f4..aef87578da31 100644
--- a/docs/source/es/model_sharing.md
+++ b/docs/source/es/model_sharing.md
@@ -56,7 +56,7 @@ Los archivos son editados fácilmente dentro de un repositorio. Incluso puedes o
Antes de compartir un modelo al Hub necesitarás tus credenciales de Hugging Face. Si tienes acceso a una terminal ejecuta el siguiente comando en el entorno virtual donde 🤗 Transformers esté instalado. Esto guardará tu token de acceso dentro de tu carpeta cache de Hugging Face (~/.cache/ by default):
```bash
-huggingface-cli login
+hf auth login
```
Si usas un notebook como Jupyter o Colaboratory, asegúrate de tener instalada la biblioteca [`huggingface_hub`](https://huggingface.co/docs/hub/adding-a-library). Esta biblioteca te permitirá interactuar por código con el Hub.
diff --git a/docs/source/es/run_scripts.md b/docs/source/es/run_scripts.md
index a389b2d2fe41..cbabefa47b01 100644
--- a/docs/source/es/run_scripts.md
+++ b/docs/source/es/run_scripts.md
@@ -324,7 +324,7 @@ python examples/pytorch/summarization/run_summarization.py
Todos los scripts pueden cargar tu modelo final en el [Model Hub](https://huggingface.co/models). Asegúrate de haber iniciado sesión en Hugging Face antes de comenzar:
```bash
-huggingface-cli login
+hf auth login
```
Luego agrega el argumento `push_to_hub` al script. Este argumento creará un repositorio con tu nombre de usuario Hugging Face y el nombre de la carpeta especificado en `output_dir`.
diff --git a/docs/source/fr/run_scripts_fr.md b/docs/source/fr/run_scripts_fr.md
index a68d71035f01..561f9f047005 100644
--- a/docs/source/fr/run_scripts_fr.md
+++ b/docs/source/fr/run_scripts_fr.md
@@ -327,7 +327,7 @@ python examples/pytorch/summarization/run_summarization.py
Tous les scripts peuvent télécharger votre modèle final sur le Model Hub. Assurez-vous que vous êtes connecté à Hugging Face avant de commencer :
```bash
-huggingface-cli login
+hf auth login
```
Ensuite, ajoutez l'argument `push_to_hub` au script. Cet argument créera un dépôt avec votre nom d'utilisateur Hugging Face et le nom du dossier spécifié dans `output_dir`.
diff --git a/docs/source/it/custom_models.md b/docs/source/it/custom_models.md
index a564ea606c75..5f3d4cade007 100644
--- a/docs/source/it/custom_models.md
+++ b/docs/source/it/custom_models.md
@@ -285,7 +285,7 @@ resnet50d.model.load_state_dict(pretrained_model.state_dict())
Adesso, per inviare il modello all'Hub, assicurati di aver effettuato l'accesso. Lancia dal tuo terminale:
```bash
-huggingface-cli login
+hf auth login
```
O da un notebook:
diff --git a/docs/source/it/model_sharing.md b/docs/source/it/model_sharing.md
index 6505658616ba..c6efa717efb8 100644
--- a/docs/source/it/model_sharing.md
+++ b/docs/source/it/model_sharing.md
@@ -56,7 +56,7 @@ Anche i file possono essere modificati facilmente in un repository ed è possibi
Prima di condividere un modello nell'Hub, hai bisogno delle tue credenziali di Hugging Face. Se hai accesso ad un terminale, esegui il seguente comando nell'ambiente virtuale in cui è installata la libreria 🤗 Transformers. Questo memorizzerà il tuo token di accesso nella cartella cache di Hugging Face (di default `~/.cache/`):
```bash
-huggingface-cli login
+hf auth login
```
Se stai usando un notebook come Jupyter o Colaboratory, assicurati di avere la libreria [`huggingface_hub`](https://huggingface.co/docs/hub/adding-a-library) installata. Questa libreria ti permette di interagire in maniera programmatica con l'Hub.
diff --git a/docs/source/it/run_scripts.md b/docs/source/it/run_scripts.md
index b7d13f7019fb..71ccf0eed52b 100644
--- a/docs/source/it/run_scripts.md
+++ b/docs/source/it/run_scripts.md
@@ -324,7 +324,7 @@ python examples/pytorch/summarization/run_summarization.py
Tutti gli script possono caricare il tuo modello finale al [Model Hub](https://huggingface.co/models). Prima di iniziare, assicurati di aver effettuato l'accesso su Hugging Face:
```bash
-huggingface-cli login
+hf auth login
```
Poi, aggiungi l'argomento `push_to_hub` allo script. Questo argomento consentirà di creare un repository con il tuo username Hugging Face e la cartella specificata in `output_dir`.
diff --git a/docs/source/ja/custom_models.md b/docs/source/ja/custom_models.md
index cd0e70b1f484..737f5fd36d03 100644
--- a/docs/source/ja/custom_models.md
+++ b/docs/source/ja/custom_models.md
@@ -270,7 +270,7 @@ resnet50d.model.load_state_dict(pretrained_model.state_dict())
モデルをHubに送信するには、ログインしていることを確認してください。ターミナルで次のコマンドを実行します:
```bash
-huggingface-cli login
+hf auth login
```
またはノートブックから:
diff --git a/docs/source/ja/model_sharing.md b/docs/source/ja/model_sharing.md
index 16d47057052b..83df9d8f687e 100644
--- a/docs/source/ja/model_sharing.md
+++ b/docs/source/ja/model_sharing.md
@@ -56,7 +56,7 @@ Model Hubの組み込みバージョニングはgitおよび[git-lfs](https://gi
モデルをHubに共有する前に、Hugging Faceの認証情報が必要です。ターミナルへのアクセス権がある場合、🤗 Transformersがインストールされている仮想環境で以下のコマンドを実行します。これにより、アクセストークンがHugging Faceのキャッシュフォルダに保存されます(デフォルトでは `~/.cache/` に保存されます):
```bash
-huggingface-cli login
+hf auth login
```
JupyterやColaboratoryのようなノートブックを使用している場合、[`huggingface_hub`](https://huggingface.co/docs/hub/adding-a-library)ライブラリがインストールされていることを確認してください。
diff --git a/docs/source/ja/run_scripts.md b/docs/source/ja/run_scripts.md
index 69437819e36b..ca224d75a453 100644
--- a/docs/source/ja/run_scripts.md
+++ b/docs/source/ja/run_scripts.md
@@ -337,7 +337,7 @@ python examples/pytorch/summarization/run_summarization.py
すべてのスクリプトは、最終的なモデルを [Model Hub](https://huggingface.co/models) にアップロードできます。開始する前に Hugging Face にログインしていることを確認してください。
```bash
-huggingface-cli login
+hf auth login
```
次に、スクリプトに `push_to_hub` 引数を追加します。この引数は、Hugging Face のユーザー名と `output_dir` で指定したフォルダ名でリポジトリを作成します。
diff --git a/docs/source/ko/custom_models.md b/docs/source/ko/custom_models.md
index 1a230a04b283..1e76608b1520 100644
--- a/docs/source/ko/custom_models.md
+++ b/docs/source/ko/custom_models.md
@@ -277,7 +277,7 @@ resnet50d.model.load_state_dict(pretrained_model.state_dict())
터미널에서 다음 코드를 실행해 확인할 수 있습니다:
```bash
-huggingface-cli login
+hf auth login
```
주피터 노트북의 경우에는 다음과 같습니다:
diff --git a/docs/source/ko/model_sharing.md b/docs/source/ko/model_sharing.md
index 381150779662..934838c5ffe1 100644
--- a/docs/source/ko/model_sharing.md
+++ b/docs/source/ko/model_sharing.md
@@ -56,7 +56,7 @@ picture-in-picture" allowfullscreen>
모델을 허브에 공유하기 전에 Hugging Face 자격 증명이 필요합니다. 터미널에 액세스할 수 있는 경우, 🤗 Transformers가 설치된 가상 환경에서 다음 명령을 실행합니다. 그러면 Hugging Face 캐시 폴더(기본적으로 `~/.cache/`)에 액세스 토큰을 저장합니다:
```bash
-huggingface-cli login
+hf auth login
```
Jupyter 또는 Colaboratory와 같은 노트북을 사용 중인 경우, [`huggingface_hub`](https://huggingface.co/docs/hub/adding-a-library) 라이브러리가 설치되었는지 확인하세요. 이 라이브러리를 사용하면 API로 허브와 상호 작용할 수 있습니다.
diff --git a/docs/source/ko/run_scripts.md b/docs/source/ko/run_scripts.md
index 70520f1a97f8..7cbf2288880c 100644
--- a/docs/source/ko/run_scripts.md
+++ b/docs/source/ko/run_scripts.md
@@ -347,7 +347,7 @@ python examples/pytorch/summarization/run_summarization.py
모든 스크립트는 최종 모델을 [Model Hub](https://huggingface.co/models)에 업로드할 수 있습니다.
시작하기 전에 Hugging Face에 로그인했는지 확인하세요:
```bash
-huggingface-cli login
+hf auth login
```
그런 다음 스크립트에 `push_to_hub` 인수를 추가합니다.
diff --git a/docs/source/pt/custom_models.md b/docs/source/pt/custom_models.md
index 75376ff6e50f..1866cca182e2 100644
--- a/docs/source/pt/custom_models.md
+++ b/docs/source/pt/custom_models.md
@@ -284,7 +284,7 @@ resnet50d.model.load_state_dict(pretrained_model.state_dict())
Agora para enviar o modelo para o Hub, certifique-se de estar logado. Ou execute no seu terminal:
```bash
-huggingface-cli login
+hf auth login
```
ou a partir do notebook:
diff --git a/docs/source/pt/run_scripts.md b/docs/source/pt/run_scripts.md
index ad19a8fdea09..8aad0f602896 100644
--- a/docs/source/pt/run_scripts.md
+++ b/docs/source/pt/run_scripts.md
@@ -327,7 +327,7 @@ python examples/pytorch/summarization/run_summarization.py
Todos os scripts podem enviar seu modelo final para o [Model Hub](https://huggingface.co/models). Certifique-se de estar conectado ao Hugging Face antes de começar:
```bash
-huggingface-cli login
+hf auth login
```
Em seguida, adicione o argumento `push_to_hub` ao script. Este argumento criará um repositório com seu nome de usuário do Hugging Face e o nome da pasta especificado em `output_dir`.
diff --git a/docs/source/zh/custom_models.md b/docs/source/zh/custom_models.md
index a96f0f545dff..d38aaf4511f2 100644
--- a/docs/source/zh/custom_models.md
+++ b/docs/source/zh/custom_models.md
@@ -246,7 +246,7 @@ resnet50d.model.load_state_dict(pretrained_model.state_dict())
现在要将模型推送到集线器,请确保你已登录。你看可以在终端中运行以下命令:
```bash
-huggingface-cli login
+hf auth login
```
或者在笔记本中运行以下代码:
diff --git a/docs/source/zh/model_sharing.md b/docs/source/zh/model_sharing.md
index 35e317bcac36..c0ce60252537 100644
--- a/docs/source/zh/model_sharing.md
+++ b/docs/source/zh/model_sharing.md
@@ -56,7 +56,7 @@ Model Hub的内置版本控制基于git和[git-lfs](https://git-lfs.github.com/)
```bash
-huggingface-cli login
+hf auth login
```
如果您正在使用像Jupyter或Colaboratory这样的`notebook`,请确保您已安装了[`huggingface_hub`](https://huggingface.co/docs/hub/adding-a-library)库。该库允许您以编程方式与Hub进行交互。
diff --git a/docs/source/zh/run_scripts.md b/docs/source/zh/run_scripts.md
index 8c21266afce0..06ce4ce0d18a 100644
--- a/docs/source/zh/run_scripts.md
+++ b/docs/source/zh/run_scripts.md
@@ -331,7 +331,7 @@ python examples/pytorch/summarization/run_summarization.py
所有脚本都可以将您的最终模型上传到[Model Hub](https://huggingface.co/models)。在开始之前,请确保您已登录Hugging Face:
```bash
-huggingface-cli login
+hf auth login
```
然后,在脚本中添加`push_to_hub`参数。这个参数会创建一个带有您Hugging Face用户名和`output_dir`中指定的文件夹名称的仓库。
diff --git a/examples/flax/README.md b/examples/flax/README.md
index 074aaa292ceb..3b1aa9d494e4 100644
--- a/examples/flax/README.md
+++ b/examples/flax/README.md
@@ -79,5 +79,5 @@ To specify a given repository name, use the `--hub_model_id` argument. You will
A few notes on this integration:
-- you will need to be logged in to the Hugging Face website locally for it to work, the easiest way to achieve this is to run `huggingface-cli login` and then type your username and password when prompted. You can also pass along your authentication token with the `--hub_token` argument.
+- you will need to be logged in to the Hugging Face website locally for it to work; the easiest way to achieve this is to run `hf auth login` and paste your access token when prompted. You can also pass along your authentication token with the `--hub_token` argument.
- the `output_dir` you pick will either need to be a new folder or a local clone of the distant repository you are using.
diff --git a/examples/flax/image-captioning/run_image_captioning_flax.py b/examples/flax/image-captioning/run_image_captioning_flax.py
index f156057212e4..cb7dad8d583f 100644
--- a/examples/flax/image-captioning/run_image_captioning_flax.py
+++ b/examples/flax/image-captioning/run_image_captioning_flax.py
@@ -186,7 +186,7 @@ class ModelArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
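The `token` help string updated in this script (and repeated throughout the example scripts below) belongs to a dataclass-based argument definition. A simplified, self-contained sketch of that pattern; the class and field names mirror the scripts, but this is an illustration rather than the full `ModelArguments` class:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ModelArguments:
    # Mirrors the help text in the example scripts: if unset, the token
    # cached by `hf auth login` is used instead.
    token: Optional[str] = field(
        default=None,
        metadata={
            "help": (
                "The token to use as HTTP bearer authorization for remote files. "
                "If not specified, will use the token generated when running "
                "`hf auth login` (stored in `~/.huggingface`)."
            )
        },
    )

args = ModelArguments()
print(args.token)  # → None
```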
diff --git a/examples/flax/language-modeling/run_bart_dlm_flax.py b/examples/flax/language-modeling/run_bart_dlm_flax.py
index 1c5299ebc949..f3e27be1cc94 100644
--- a/examples/flax/language-modeling/run_bart_dlm_flax.py
+++ b/examples/flax/language-modeling/run_bart_dlm_flax.py
@@ -172,7 +172,7 @@ class ModelArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
diff --git a/examples/flax/language-modeling/run_clm_flax.py b/examples/flax/language-modeling/run_clm_flax.py
index fb3ab65dc8b2..8d2daaf517ed 100755
--- a/examples/flax/language-modeling/run_clm_flax.py
+++ b/examples/flax/language-modeling/run_clm_flax.py
@@ -173,7 +173,7 @@ class ModelArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
diff --git a/examples/flax/language-modeling/run_mlm_flax.py b/examples/flax/language-modeling/run_mlm_flax.py
index 9b83c8db394c..df548de619f7 100755
--- a/examples/flax/language-modeling/run_mlm_flax.py
+++ b/examples/flax/language-modeling/run_mlm_flax.py
@@ -179,7 +179,7 @@ class ModelArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
diff --git a/examples/flax/language-modeling/run_t5_mlm_flax.py b/examples/flax/language-modeling/run_t5_mlm_flax.py
index afe4d202b882..9a64b6d71643 100755
--- a/examples/flax/language-modeling/run_t5_mlm_flax.py
+++ b/examples/flax/language-modeling/run_t5_mlm_flax.py
@@ -173,7 +173,7 @@ class ModelArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
diff --git a/examples/flax/question-answering/run_qa.py b/examples/flax/question-answering/run_qa.py
index 220400c3006b..a8ecf5b64ade 100644
--- a/examples/flax/question-answering/run_qa.py
+++ b/examples/flax/question-answering/run_qa.py
@@ -159,7 +159,7 @@ class ModelArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
diff --git a/examples/flax/summarization/run_summarization_flax.py b/examples/flax/summarization/run_summarization_flax.py
index aab44c88a02c..6c9777cf322a 100644
--- a/examples/flax/summarization/run_summarization_flax.py
+++ b/examples/flax/summarization/run_summarization_flax.py
@@ -192,7 +192,7 @@ class ModelArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
diff --git a/examples/flax/text-classification/run_flax_glue.py b/examples/flax/text-classification/run_flax_glue.py
index ca6e77a0cb45..63544e66581e 100755
--- a/examples/flax/text-classification/run_flax_glue.py
+++ b/examples/flax/text-classification/run_flax_glue.py
@@ -107,7 +107,7 @@ class ModelArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
diff --git a/examples/flax/token-classification/run_flax_ner.py b/examples/flax/token-classification/run_flax_ner.py
index 3a59328a54db..8257e963f50f 100644
--- a/examples/flax/token-classification/run_flax_ner.py
+++ b/examples/flax/token-classification/run_flax_ner.py
@@ -155,7 +155,7 @@ class ModelArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
diff --git a/examples/flax/vision/run_image_classification.py b/examples/flax/vision/run_image_classification.py
index 4eddd36f962f..4c6550e6b803 100644
--- a/examples/flax/vision/run_image_classification.py
+++ b/examples/flax/vision/run_image_classification.py
@@ -163,7 +163,7 @@ class ModelArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
diff --git a/examples/pytorch/README.md b/examples/pytorch/README.md
index 9022fdb57026..6ff3389c162c 100644
--- a/examples/pytorch/README.md
+++ b/examples/pytorch/README.md
@@ -90,7 +90,7 @@ To specify a given repository name, use the `--hub_model_id` argument. You will
A few notes on this integration:
-- you will need to be logged in to the Hugging Face website locally for it to work, the easiest way to achieve this is to run `huggingface-cli login` and then type your username and password when prompted. You can also pass along your authentication token with the `--hub_token` argument.
+- you will need to be logged in to the Hugging Face website locally for it to work; the easiest way to achieve this is to run `hf auth login` and paste your access token when prompted. You can also pass along your authentication token with the `--hub_token` argument.
- the `output_dir` you pick will either need to be a new folder or a local clone of the distant repository you are using.
## Distributed training and mixed precision
diff --git a/examples/pytorch/audio-classification/README.md b/examples/pytorch/audio-classification/README.md
index bc4581089c3f..6f9069b331ab 100644
--- a/examples/pytorch/audio-classification/README.md
+++ b/examples/pytorch/audio-classification/README.md
@@ -115,10 +115,10 @@ On 4 V100 GPUs (16GB), this script should run in ~1 hour and yield accuracy of *
$ apt install git-lfs
```
-2. Log in with your HuggingFace account credentials using `huggingface-cli`
+2. Log in with your HuggingFace account credentials using `hf`
```bash
-$ huggingface-cli login
+$ hf auth login
# ...follow the prompts
```
diff --git a/examples/pytorch/audio-classification/run_audio_classification.py b/examples/pytorch/audio-classification/run_audio_classification.py
index bb5651ab25f7..3ab87693bbc3 100644
--- a/examples/pytorch/audio-classification/run_audio_classification.py
+++ b/examples/pytorch/audio-classification/run_audio_classification.py
@@ -167,7 +167,7 @@ class ModelArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
diff --git a/examples/pytorch/contrastive-image-text/run_clip.py b/examples/pytorch/contrastive-image-text/run_clip.py
index 229c97dd4f33..477c5622c1bc 100644
--- a/examples/pytorch/contrastive-image-text/run_clip.py
+++ b/examples/pytorch/contrastive-image-text/run_clip.py
@@ -100,7 +100,7 @@ class ModelArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
diff --git a/examples/pytorch/image-classification/README.md b/examples/pytorch/image-classification/README.md
index 62996ee19e37..0ef36c75c726 100644
--- a/examples/pytorch/image-classification/README.md
+++ b/examples/pytorch/image-classification/README.md
@@ -129,7 +129,7 @@ dataset = load_dataset("imagefolder", data_files={"train": ["path/to/file1", "pa
Next, push it to the hub!
```python
-# assuming you have ran the huggingface-cli login command in a terminal
+# assuming you have run the hf auth login command in a terminal
dataset.push_to_hub("name_of_your_dataset")
# if you want to push to a private repo, simply pass private=True:
@@ -152,10 +152,10 @@ $ git config --global user.email "you@example.com"
$ git config --global user.name "Your Name"
```
-2. Log in with your HuggingFace account credentials using `huggingface-cli`:
+2. Log in with your HuggingFace account credentials using `hf`:
```bash
-$ huggingface-cli login
+$ hf auth login
# ...follow the prompts
```
diff --git a/examples/pytorch/image-classification/run_image_classification.py b/examples/pytorch/image-classification/run_image_classification.py
index baf8f15d92c9..51cdaf71f0c0 100755
--- a/examples/pytorch/image-classification/run_image_classification.py
+++ b/examples/pytorch/image-classification/run_image_classification.py
@@ -168,7 +168,7 @@ class ModelArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
diff --git a/examples/pytorch/image-pretraining/README.md b/examples/pytorch/image-pretraining/README.md
index 5a5e83af8d9d..bca37f24135a 100644
--- a/examples/pytorch/image-pretraining/README.md
+++ b/examples/pytorch/image-pretraining/README.md
@@ -239,10 +239,10 @@ $ git config --global user.email "you@example.com"
$ git config --global user.name "Your Name"
```
-2. Log in with your HuggingFace account credentials using `huggingface-cli`
+2. Log in with your HuggingFace account credentials using `hf`
```bash
-$ huggingface-cli login
+$ hf auth login
# ...follow the prompts
```
diff --git a/examples/pytorch/image-pretraining/run_mae.py b/examples/pytorch/image-pretraining/run_mae.py
index 20ae2b531659..ea4fabf0651e 100644
--- a/examples/pytorch/image-pretraining/run_mae.py
+++ b/examples/pytorch/image-pretraining/run_mae.py
@@ -156,7 +156,7 @@ class ModelArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
diff --git a/examples/pytorch/image-pretraining/run_mim.py b/examples/pytorch/image-pretraining/run_mim.py
index 8d6b1ba6d5af..8ee44521863f 100644
--- a/examples/pytorch/image-pretraining/run_mim.py
+++ b/examples/pytorch/image-pretraining/run_mim.py
@@ -166,7 +166,7 @@ class ModelArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
diff --git a/examples/pytorch/image-pretraining/run_mim_no_trainer.py b/examples/pytorch/image-pretraining/run_mim_no_trainer.py
index 97b5e9577840..fe560ba901f3 100644
--- a/examples/pytorch/image-pretraining/run_mim_no_trainer.py
+++ b/examples/pytorch/image-pretraining/run_mim_no_trainer.py
@@ -200,7 +200,7 @@ def parse_args():
default=None,
help=(
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
),
)
parser.add_argument(
diff --git a/examples/pytorch/instance-segmentation/run_instance_segmentation.py b/examples/pytorch/instance-segmentation/run_instance_segmentation.py
index 6c277e49caea..a0c1554f4da9 100644
--- a/examples/pytorch/instance-segmentation/run_instance_segmentation.py
+++ b/examples/pytorch/instance-segmentation/run_instance_segmentation.py
@@ -97,7 +97,7 @@ class Arguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
diff --git a/examples/pytorch/language-modeling/run_clm.py b/examples/pytorch/language-modeling/run_clm.py
index c8447cd4c4ec..3a87d0731399 100755
--- a/examples/pytorch/language-modeling/run_clm.py
+++ b/examples/pytorch/language-modeling/run_clm.py
@@ -130,7 +130,7 @@ class ModelArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
diff --git a/examples/pytorch/language-modeling/run_fim.py b/examples/pytorch/language-modeling/run_fim.py
index dd2b5ab6b88f..cf31af2b9e28 100644
--- a/examples/pytorch/language-modeling/run_fim.py
+++ b/examples/pytorch/language-modeling/run_fim.py
@@ -133,7 +133,7 @@ class ModelArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
diff --git a/examples/pytorch/language-modeling/run_mlm.py b/examples/pytorch/language-modeling/run_mlm.py
index 57b2dbc7135b..7c56aa7cf229 100755
--- a/examples/pytorch/language-modeling/run_mlm.py
+++ b/examples/pytorch/language-modeling/run_mlm.py
@@ -127,7 +127,7 @@ class ModelArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
diff --git a/examples/pytorch/language-modeling/run_plm.py b/examples/pytorch/language-modeling/run_plm.py
index 844350af1fbd..175cf8130700 100755
--- a/examples/pytorch/language-modeling/run_plm.py
+++ b/examples/pytorch/language-modeling/run_plm.py
@@ -114,7 +114,7 @@ class ModelArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
diff --git a/examples/pytorch/multiple-choice/run_swag.py b/examples/pytorch/multiple-choice/run_swag.py
index b1487cdbe993..90374bb7cccf 100755
--- a/examples/pytorch/multiple-choice/run_swag.py
+++ b/examples/pytorch/multiple-choice/run_swag.py
@@ -94,7 +94,7 @@ class ModelArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
diff --git a/examples/pytorch/object-detection/README.md b/examples/pytorch/object-detection/README.md
index 3c0ce460f0d5..0459f3f9e7d6 100644
--- a/examples/pytorch/object-detection/README.md
+++ b/examples/pytorch/object-detection/README.md
@@ -217,7 +217,7 @@ dataset = load_dataset("imagefolder", data_dir="custom_dataset/")
# ... })
# ... })
-# Push to hub (assumes you have ran the huggingface-cli login command in a terminal/notebook)
+# Push to hub (assumes you have run the hf auth login command in a terminal/notebook)
dataset.push_to_hub("name of repo on the hub")
# optionally, you can push to a private repo on the hub
diff --git a/examples/pytorch/object-detection/run_object_detection.py b/examples/pytorch/object-detection/run_object_detection.py
index 51f5d6ddca79..412eb9414d0b 100644
--- a/examples/pytorch/object-detection/run_object_detection.py
+++ b/examples/pytorch/object-detection/run_object_detection.py
@@ -320,7 +320,7 @@ class ModelArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
diff --git a/examples/pytorch/question-answering/run_qa.py b/examples/pytorch/question-answering/run_qa.py
index 0a1985ce8a8e..74e5fb548737 100755
--- a/examples/pytorch/question-answering/run_qa.py
+++ b/examples/pytorch/question-answering/run_qa.py
@@ -84,7 +84,7 @@ class ModelArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
diff --git a/examples/pytorch/question-answering/run_qa_beam_search.py b/examples/pytorch/question-answering/run_qa_beam_search.py
index ed03610c6e03..74c76d6b1987 100755
--- a/examples/pytorch/question-answering/run_qa_beam_search.py
+++ b/examples/pytorch/question-answering/run_qa_beam_search.py
@@ -82,7 +82,7 @@ class ModelArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
diff --git a/examples/pytorch/question-answering/run_seq2seq_qa.py b/examples/pytorch/question-answering/run_seq2seq_qa.py
index eb252729d112..98ad4585de59 100644
--- a/examples/pytorch/question-answering/run_seq2seq_qa.py
+++ b/examples/pytorch/question-answering/run_seq2seq_qa.py
@@ -84,7 +84,7 @@ class ModelArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
diff --git a/examples/pytorch/semantic-segmentation/README.md b/examples/pytorch/semantic-segmentation/README.md
index 287870694c62..e54be51ee899 100644
--- a/examples/pytorch/semantic-segmentation/README.md
+++ b/examples/pytorch/semantic-segmentation/README.md
@@ -66,7 +66,7 @@ dataset = DatasetDict({
}
)
-# step 3: push to hub (assumes you have ran the huggingface-cli login command in a terminal/notebook)
+# step 3: push to hub (assumes you have run the hf auth login command in a terminal/notebook)
dataset.push_to_hub("name of repo on the hub")
# optionally, you can push to a private repo on the hub
@@ -98,7 +98,7 @@ The script leverages the [🤗 Trainer API](https://huggingface.co/docs/transfor
Here we show how to fine-tune a [SegFormer](https://huggingface.co/nvidia/mit-b0) model on the [segments/sidewalk-semantic](https://huggingface.co/datasets/segments/sidewalk-semantic) dataset:
In order to use `segments/sidewalk-semantic`:
- - Log in to Hugging Face with `huggingface-cli login` (token can be accessed [here](https://huggingface.co/settings/tokens)).
+ - Log in to Hugging Face with `hf auth login` (token can be accessed [here](https://huggingface.co/settings/tokens)).
- Accept terms of use for `sidewalk-semantic` on [dataset page](https://huggingface.co/datasets/segments/sidewalk-semantic).
```bash
diff --git a/examples/pytorch/semantic-segmentation/run_semantic_segmentation.py b/examples/pytorch/semantic-segmentation/run_semantic_segmentation.py
index cf38403ca886..37729637ea10 100644
--- a/examples/pytorch/semantic-segmentation/run_semantic_segmentation.py
+++ b/examples/pytorch/semantic-segmentation/run_semantic_segmentation.py
@@ -168,7 +168,7 @@ class ModelArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
diff --git a/examples/pytorch/speech-recognition/README.md b/examples/pytorch/speech-recognition/README.md
index d0e898b8b76b..2889919655f4 100644
--- a/examples/pytorch/speech-recognition/README.md
+++ b/examples/pytorch/speech-recognition/README.md
@@ -278,7 +278,7 @@ accordingly be called `adapter.{/wav2vec2-2-bart-base
cd wav2vec2-2-bart-base
```
diff --git a/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py b/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py
index e4cf9385618a..cf297b71bd78 100755
--- a/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py
+++ b/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py
@@ -258,7 +258,7 @@ class DataTrainingArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
diff --git a/examples/pytorch/speech-recognition/run_speech_recognition_ctc_adapter.py b/examples/pytorch/speech-recognition/run_speech_recognition_ctc_adapter.py
index c7b5df009ac9..86f8dc8350c7 100755
--- a/examples/pytorch/speech-recognition/run_speech_recognition_ctc_adapter.py
+++ b/examples/pytorch/speech-recognition/run_speech_recognition_ctc_adapter.py
@@ -248,7 +248,7 @@ class DataTrainingArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
diff --git a/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py b/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py
index d68f97d280ab..6f1c6d39c1c8 100755
--- a/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py
+++ b/examples/pytorch/speech-recognition/run_speech_recognition_seq2seq.py
@@ -102,7 +102,7 @@ class ModelArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
diff --git a/examples/pytorch/summarization/run_summarization.py b/examples/pytorch/summarization/run_summarization.py
index bca7ef6dc8e8..6403a2fe9a54 100755
--- a/examples/pytorch/summarization/run_summarization.py
+++ b/examples/pytorch/summarization/run_summarization.py
@@ -119,7 +119,7 @@ class ModelArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
diff --git a/examples/pytorch/text-classification/run_classification.py b/examples/pytorch/text-classification/run_classification.py
index 9e5344d69a79..1b2695592e08 100755
--- a/examples/pytorch/text-classification/run_classification.py
+++ b/examples/pytorch/text-classification/run_classification.py
@@ -250,7 +250,7 @@ class ModelArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
diff --git a/examples/pytorch/text-classification/run_glue.py b/examples/pytorch/text-classification/run_glue.py
index 6072433b588f..f3b5aba72f73 100755
--- a/examples/pytorch/text-classification/run_glue.py
+++ b/examples/pytorch/text-classification/run_glue.py
@@ -208,7 +208,7 @@ class ModelArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
diff --git a/examples/pytorch/text-classification/run_xnli.py b/examples/pytorch/text-classification/run_xnli.py
index f62b0ee248b8..16d7a268c5ca 100755
--- a/examples/pytorch/text-classification/run_xnli.py
+++ b/examples/pytorch/text-classification/run_xnli.py
@@ -171,7 +171,7 @@ class ModelArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
diff --git a/examples/pytorch/token-classification/run_ner.py b/examples/pytorch/token-classification/run_ner.py
index 609f1f57ddfe..3e6986050a4c 100755
--- a/examples/pytorch/token-classification/run_ner.py
+++ b/examples/pytorch/token-classification/run_ner.py
@@ -95,7 +95,7 @@ class ModelArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
diff --git a/examples/pytorch/translation/run_translation.py b/examples/pytorch/translation/run_translation.py
index 671cda334049..2bc0adc55221 100755
--- a/examples/pytorch/translation/run_translation.py
+++ b/examples/pytorch/translation/run_translation.py
@@ -108,7 +108,7 @@ class ModelArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
diff --git a/examples/tensorflow/contrastive-image-text/run_clip.py b/examples/tensorflow/contrastive-image-text/run_clip.py
index 4cf5dbe429b9..ebaab0c13697 100644
--- a/examples/tensorflow/contrastive-image-text/run_clip.py
+++ b/examples/tensorflow/contrastive-image-text/run_clip.py
@@ -96,7 +96,7 @@ class ModelArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
diff --git a/examples/tensorflow/image-classification/README.md b/examples/tensorflow/image-classification/README.md
index a343b443ef1a..e779a29c1b97 100644
--- a/examples/tensorflow/image-classification/README.md
+++ b/examples/tensorflow/image-classification/README.md
@@ -122,7 +122,7 @@ dataset = load_dataset("imagefolder", data_files={"train": ["path/to/file1", "pa
Next, push it to the hub!
```python
-# assuming you have ran the huggingface-cli login command in a terminal
+# assuming you have run the hf auth login command in a terminal
dataset.push_to_hub("name_of_your_dataset")
# if you want to push to a private repo, simply pass private=True:
@@ -145,10 +145,10 @@ $ git config --global user.email "you@example.com"
$ git config --global user.name "Your Name"
```
-2. Log in with your HuggingFace account credentials using `huggingface-cli`:
+2. Log in with your HuggingFace account credentials using the `hf` CLI:
```bash
-$ huggingface-cli login
+$ hf auth login
# ...follow the prompts
```
diff --git a/examples/tensorflow/image-classification/run_image_classification.py b/examples/tensorflow/image-classification/run_image_classification.py
index 3f10ca6e47c8..7b515bc04bd9 100644
--- a/examples/tensorflow/image-classification/run_image_classification.py
+++ b/examples/tensorflow/image-classification/run_image_classification.py
@@ -162,7 +162,7 @@ class ModelArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
diff --git a/examples/tensorflow/language-modeling-tpu/README.md b/examples/tensorflow/language-modeling-tpu/README.md
index 0c068df8f26d..094c95cd395a 100644
--- a/examples/tensorflow/language-modeling-tpu/README.md
+++ b/examples/tensorflow/language-modeling-tpu/README.md
@@ -35,7 +35,7 @@ python train_unigram.py --batch_size 1000 --vocab_size 25000 --export_to_hub
The script will automatically load the `train` split of the WikiText dataset and train a [Unigram tokenizer](https://huggingface.co/course/chapter6/7?fw=pt) on it.
-> 💡 **Note**: In order for `export_to_hub` to work, you must authenticate yourself with the `huggingface-cli`. Run `huggingface-cli login` and follow the on-screen instructions.
+> 💡 **Note**: In order for `export_to_hub` to work, you must authenticate yourself with the `hf` CLI. Run `hf auth login` and follow the on-screen instructions.
## Preparing the dataset
diff --git a/examples/tensorflow/language-modeling/run_clm.py b/examples/tensorflow/language-modeling/run_clm.py
index d43530669b97..0d776229d251 100755
--- a/examples/tensorflow/language-modeling/run_clm.py
+++ b/examples/tensorflow/language-modeling/run_clm.py
@@ -116,7 +116,7 @@ class ModelArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
diff --git a/examples/tensorflow/language-modeling/run_mlm.py b/examples/tensorflow/language-modeling/run_mlm.py
index edae71252d5e..5dcbd35729ef 100755
--- a/examples/tensorflow/language-modeling/run_mlm.py
+++ b/examples/tensorflow/language-modeling/run_mlm.py
@@ -114,7 +114,7 @@ class ModelArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
diff --git a/examples/tensorflow/multiple-choice/run_swag.py b/examples/tensorflow/multiple-choice/run_swag.py
index 92441f9391ae..ee396d509814 100644
--- a/examples/tensorflow/multiple-choice/run_swag.py
+++ b/examples/tensorflow/multiple-choice/run_swag.py
@@ -87,7 +87,7 @@ class ModelArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
diff --git a/examples/tensorflow/question-answering/run_qa.py b/examples/tensorflow/question-answering/run_qa.py
index 2acf7cb0623a..41d63e800c5a 100755
--- a/examples/tensorflow/question-answering/run_qa.py
+++ b/examples/tensorflow/question-answering/run_qa.py
@@ -95,7 +95,7 @@ class ModelArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
diff --git a/examples/tensorflow/summarization/run_summarization.py b/examples/tensorflow/summarization/run_summarization.py
index 714daa341fc8..6be43eb71387 100644
--- a/examples/tensorflow/summarization/run_summarization.py
+++ b/examples/tensorflow/summarization/run_summarization.py
@@ -103,7 +103,7 @@ class ModelArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
diff --git a/examples/tensorflow/text-classification/run_glue.py b/examples/tensorflow/text-classification/run_glue.py
index e2f36635f63d..6664da523b95 100644
--- a/examples/tensorflow/text-classification/run_glue.py
+++ b/examples/tensorflow/text-classification/run_glue.py
@@ -168,7 +168,7 @@ class ModelArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
diff --git a/examples/tensorflow/text-classification/run_text_classification.py b/examples/tensorflow/text-classification/run_text_classification.py
index 45b4a3e607e4..7546c7bd327f 100644
--- a/examples/tensorflow/text-classification/run_text_classification.py
+++ b/examples/tensorflow/text-classification/run_text_classification.py
@@ -188,7 +188,7 @@ class ModelArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
diff --git a/examples/tensorflow/token-classification/run_ner.py b/examples/tensorflow/token-classification/run_ner.py
index 8a50b2a65039..0bada558fb93 100644
--- a/examples/tensorflow/token-classification/run_ner.py
+++ b/examples/tensorflow/token-classification/run_ner.py
@@ -79,7 +79,7 @@ class ModelArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
diff --git a/examples/tensorflow/translation/run_translation.py b/examples/tensorflow/translation/run_translation.py
index 26c9eefbc517..bbf69e0943a2 100644
--- a/examples/tensorflow/translation/run_translation.py
+++ b/examples/tensorflow/translation/run_translation.py
@@ -97,7 +97,7 @@ class ModelArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+ "generated when running `hf auth login` (stored in `~/.huggingface`)."
)
},
)
diff --git a/setup.py b/setup.py
index 2810374ee5ba..8082853a8a30 100644
--- a/setup.py
+++ b/setup.py
@@ -117,7 +117,7 @@
"GitPython<3.1.19",
"hf-doc-builder>=0.3.0",
"hf_xet",
- "huggingface-hub>=0.30.0,<1.0",
+ "huggingface-hub>=0.34.0,<1.0",
"importlib_metadata",
"ipadic>=1.0.0,<2.0",
"jax>=0.4.1,<=0.4.13",
diff --git a/src/transformers/configuration_utils.py b/src/transformers/configuration_utils.py
index 243622d895fe..8abdb94c9641 100755
--- a/src/transformers/configuration_utils.py
+++ b/src/transformers/configuration_utils.py
@@ -550,7 +550,7 @@ def from_pretrained(
'http://hostname': 'foo.bar:4012'}.` The proxies are used on each request.
token (`str` or `bool`, *optional*):
The token to use as HTTP bearer authorization for remote files. If `True`, or not specified, will use
- the token generated when running `huggingface-cli login` (stored in `~/.huggingface`).
+ the token generated when running `hf auth login` (stored in `~/.huggingface`).
revision (`str`, *optional*, defaults to `"main"`):
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any
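The docstrings updated above all describe the same fallback: an explicit `token` wins, otherwise the credentials cached by `hf auth login` are used. As a rough illustration only (this is an assumption about the resolution order, not huggingface_hub's actual implementation, and the real cache path may differ), the behavior can be sketched as:

```python
import os
from pathlib import Path


def resolve_token(token=None):
    """Hypothetical sketch of the token fallback the docstrings describe.

    Assumed order: explicit string > HF_TOKEN env var > file written by
    `hf auth login`. Not the real huggingface_hub logic.
    """
    if isinstance(token, str):
        return token  # an explicit token always wins
    if token is False:
        return None  # caller opted out of authentication
    env_token = os.environ.get("HF_TOKEN")
    if env_token:
        return env_token
    cached = Path.home() / ".huggingface" / "token"  # assumed cache location
    if cached.is_file():
        return cached.read_text().strip()
    return None
```

Passing `token=True` (or nothing) falls through to the cached credentials, which is why the help strings say the login-generated token is used "if not specified".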
diff --git a/src/transformers/dependency_versions_table.py b/src/transformers/dependency_versions_table.py
index 953fa6f56043..a5f7daf188b8 100644
--- a/src/transformers/dependency_versions_table.py
+++ b/src/transformers/dependency_versions_table.py
@@ -24,7 +24,7 @@
"GitPython": "GitPython<3.1.19",
"hf-doc-builder": "hf-doc-builder>=0.3.0",
"hf_xet": "hf_xet",
- "huggingface-hub": "huggingface-hub>=0.30.0,<1.0",
+ "huggingface-hub": "huggingface-hub>=0.34.0,<1.0",
"importlib_metadata": "importlib_metadata",
"ipadic": "ipadic>=1.0.0,<2.0",
"jax": "jax>=0.4.1,<=0.4.13",
diff --git a/src/transformers/dynamic_module_utils.py b/src/transformers/dynamic_module_utils.py
index 7a498721a911..ff0cd22fabfb 100644
--- a/src/transformers/dynamic_module_utils.py
+++ b/src/transformers/dynamic_module_utils.py
@@ -322,7 +322,7 @@ def get_cached_module_file(
'http://hostname': 'foo.bar:4012'}.` The proxies are used on each request.
token (`str` or *bool*, *optional*):
The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated
- when running `huggingface-cli login` (stored in `~/.huggingface`).
+ when running `hf auth login` (stored in `~/.huggingface`).
revision (`str`, *optional*, defaults to `"main"`):
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any
@@ -513,7 +513,7 @@ def get_class_from_dynamic_module(
'http://hostname': 'foo.bar:4012'}.` The proxies are used on each request.
token (`str` or `bool`, *optional*):
The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated
- when running `huggingface-cli login` (stored in `~/.huggingface`).
+ when running `hf auth login` (stored in `~/.huggingface`).
revision (`str`, *optional*, defaults to `"main"`):
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any
diff --git a/src/transformers/feature_extraction_utils.py b/src/transformers/feature_extraction_utils.py
index c539288cbed8..d9d6ca0e215c 100644
--- a/src/transformers/feature_extraction_utils.py
+++ b/src/transformers/feature_extraction_utils.py
@@ -311,7 +311,7 @@ def from_pretrained(
'http://hostname': 'foo.bar:4012'}.` The proxies are used on each request.
token (`str` or `bool`, *optional*):
The token to use as HTTP bearer authorization for remote files. If `True`, or not specified, will use
- the token generated when running `huggingface-cli login` (stored in `~/.huggingface`).
+ the token generated when running `hf auth login` (stored in `~/.huggingface`).
revision (`str`, *optional*, defaults to `"main"`):
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any
diff --git a/src/transformers/generation/configuration_utils.py b/src/transformers/generation/configuration_utils.py
index 7d2cd21effb2..84b892251d50 100644
--- a/src/transformers/generation/configuration_utils.py
+++ b/src/transformers/generation/configuration_utils.py
@@ -921,7 +921,7 @@ def from_pretrained(
'http://hostname': 'foo.bar:4012'}.` The proxies are used on each request.
token (`str` or `bool`, *optional*):
The token to use as HTTP bearer authorization for remote files. If `True`, or not specified, will use
- the token generated when running `huggingface-cli login` (stored in `~/.huggingface`).
+ the token generated when running `hf auth login` (stored in `~/.huggingface`).
revision (`str`, *optional*, defaults to `"main"`):
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any
diff --git a/src/transformers/image_processing_base.py b/src/transformers/image_processing_base.py
index 4f4597dcff8e..9b40b0da7bc3 100644
--- a/src/transformers/image_processing_base.py
+++ b/src/transformers/image_processing_base.py
@@ -132,7 +132,7 @@ def from_pretrained(
'http://hostname': 'foo.bar:4012'}.` The proxies are used on each request.
token (`str` or `bool`, *optional*):
The token to use as HTTP bearer authorization for remote files. If `True`, or not specified, will use
- the token generated when running `huggingface-cli login` (stored in `~/.huggingface`).
+ the token generated when running `hf auth login` (stored in `~/.huggingface`).
revision (`str`, *optional*, defaults to `"main"`):
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any
diff --git a/src/transformers/integrations/peft.py b/src/transformers/integrations/peft.py
index 3fc1fcff28f8..c944a2359e09 100644
--- a/src/transformers/integrations/peft.py
+++ b/src/transformers/integrations/peft.py
@@ -131,7 +131,7 @@ def load_adapter(
token (`str`, `optional`):
Whether to use authentication token to load the remote folder. Useful to load private repositories
- that are on HuggingFace Hub. You might need to call `huggingface-cli login` and paste your tokens to
+ that are on HuggingFace Hub. You might need to call `hf auth login` and paste your tokens to
cache it.
device_map (`str` or `dict[str, Union[int, str, torch.device]]` or `int` or `torch.device`, *optional*):
A map that specifies where each submodule should go. It doesn't need to be refined to each
diff --git a/src/transformers/keras_callbacks.py b/src/transformers/keras_callbacks.py
index f40590691215..77c8fe428c94 100644
--- a/src/transformers/keras_callbacks.py
+++ b/src/transformers/keras_callbacks.py
@@ -306,7 +306,7 @@ class PushToHubCallback(keras.callbacks.Callback):
Will default to the name of `output_dir`.
hub_token (`str`, *optional*):
The token to use to push the model to the Hub. Will default to the token in the cache folder obtained with
- `huggingface-cli login`.
+ `hf auth login`.
checkpoint (`bool`, *optional*, defaults to `False`):
Whether to save full training checkpoints (including epoch and optimizer state) to allow training to be
resumed. Only usable when `save_strategy` is `"epoch"`.
diff --git a/src/transformers/modeling_flax_utils.py b/src/transformers/modeling_flax_utils.py
index dfc0631abe0e..3f94a3c6cef5 100644
--- a/src/transformers/modeling_flax_utils.py
+++ b/src/transformers/modeling_flax_utils.py
@@ -588,7 +588,7 @@ def from_pretrained(
Whether or not to only look at local files (i.e., do not try to download the model).
token (`str` or `bool`, *optional*):
The token to use as HTTP bearer authorization for remote files. If `True`, or not specified, will use
- the token generated when running `huggingface-cli login` (stored in `~/.huggingface`).
+ the token generated when running `hf auth login` (stored in `~/.huggingface`).
revision (`str`, *optional*, defaults to `"main"`):
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any
@@ -1112,7 +1112,7 @@ def save_pretrained(
token (`str` or `bool`, *optional*):
The token to use as HTTP bearer authorization for remote files. If `True`, or not specified, will use
- the token generated when running `huggingface-cli login` (stored in `~/.huggingface`).
+ the token generated when running `hf auth login` (stored in `~/.huggingface`).
kwargs (`dict[str, Any]`, *optional*):
Additional key word arguments passed along to the [`~utils.PushToHubMixin.push_to_hub`] method.
safe_serialization (`bool`, *optional*, defaults to `False`):
diff --git a/src/transformers/modeling_tf_utils.py b/src/transformers/modeling_tf_utils.py
index 46699710983a..3e2564bc7a10 100644
--- a/src/transformers/modeling_tf_utils.py
+++ b/src/transformers/modeling_tf_utils.py
@@ -2386,7 +2386,7 @@ def save_pretrained(
Whether to save the model using `safetensors` or the traditional TensorFlow way (that uses `h5`).
token (`str` or `bool`, *optional*):
The token to use as HTTP bearer authorization for remote files. If `True`, or not specified, will use
- the token generated when running `huggingface-cli login` (stored in `~/.huggingface`).
+ the token generated when running `hf auth login` (stored in `~/.huggingface`).
kwargs (`dict[str, Any]`, *optional*):
Additional key word arguments passed along to the [`~utils.PushToHubMixin.push_to_hub`] method.
"""
@@ -2600,7 +2600,7 @@ def from_pretrained(
Whether or not to only look at local files (e.g., not try downloading the model).
token (`str` or `bool`, *optional*):
The token to use as HTTP bearer authorization for remote files. If `True`, or not specified, will use
- the token generated when running `huggingface-cli login` (stored in `~/.huggingface`).
+ the token generated when running `hf auth login` (stored in `~/.huggingface`).
revision (`str`, *optional*, defaults to `"main"`):
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any
@@ -3145,7 +3145,7 @@ def push_to_hub(
Whether to make the repo private. If `None` (default), the repo will be public unless the organization's default is private. This value is ignored if the repo already exists.
token (`bool` or `str`, *optional*):
The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated
- when running `huggingface-cli login` (stored in `~/.huggingface`). Will default to `True` if `repo_url`
+ when running `hf auth login` (stored in `~/.huggingface`). Will default to `True` if `repo_url`
is not specified.
max_shard_size (`int` or `str`, *optional*, defaults to `"10GB"`):
Only applicable for models. The maximum size for a checkpoint before being sharded. Checkpoints shard
diff --git a/src/transformers/modeling_utils.py b/src/transformers/modeling_utils.py
index 1577e7db584d..5c4226ad2cae 100644
--- a/src/transformers/modeling_utils.py
+++ b/src/transformers/modeling_utils.py
@@ -3872,7 +3872,7 @@ def save_pretrained(
If specified, weights are saved in the format pytorch_model..bin.
token (`str` or `bool`, *optional*):
The token to use as HTTP bearer authorization for remote files. If `True`, or not specified, will use
- the token generated when running `huggingface-cli login` (stored in `~/.huggingface`).
+ the token generated when running `hf auth login` (stored in `~/.cache/huggingface/token` by default).
save_peft_format (`bool`, *optional*, defaults to `True`):
For backward compatibility with PEFT library, in case adapter weights are attached to the model, all
keys of the state dict of adapters needs to be prepended with `base_model.model`. Advanced users can
@@ -4520,7 +4520,7 @@ def from_pretrained(
Whether or not to only look at local files (i.e., do not try to download the model).
token (`str` or `bool`, *optional*):
The token to use as HTTP bearer authorization for remote files. If `True`, or not specified, will use
- the token generated when running `huggingface-cli login` (stored in `~/.huggingface`).
+ the token generated when running `hf auth login` (stored in `~/.cache/huggingface/token` by default).
revision (`str`, *optional*, defaults to `"main"`):
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any
diff --git a/src/transformers/models/auto/feature_extraction_auto.py b/src/transformers/models/auto/feature_extraction_auto.py
index fcc93165de83..0878c5ce3014 100644
--- a/src/transformers/models/auto/feature_extraction_auto.py
+++ b/src/transformers/models/auto/feature_extraction_auto.py
@@ -183,7 +183,7 @@ def get_feature_extractor_config(
'http://hostname': 'foo.bar:4012'}.` The proxies are used on each request.
token (`str` or *bool*, *optional*):
The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated
- when running `huggingface-cli login` (stored in `~/.huggingface`).
+ when running `hf auth login` (stored in `~/.cache/huggingface/token` by default).
revision (`str`, *optional*, defaults to `"main"`):
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any
@@ -300,7 +300,7 @@ def from_pretrained(cls, pretrained_model_name_or_path, **kwargs):
'http://hostname': 'foo.bar:4012'}.` The proxies are used on each request.
token (`str` or *bool*, *optional*):
The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated
- when running `huggingface-cli login` (stored in `~/.huggingface`).
+ when running `hf auth login` (stored in `~/.cache/huggingface/token` by default).
revision (`str`, *optional*, defaults to `"main"`):
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any
diff --git a/src/transformers/models/auto/image_processing_auto.py b/src/transformers/models/auto/image_processing_auto.py
index 6857eda31723..3d7be9f18b77 100644
--- a/src/transformers/models/auto/image_processing_auto.py
+++ b/src/transformers/models/auto/image_processing_auto.py
@@ -264,7 +264,7 @@ def get_image_processor_config(
'http://hostname': 'foo.bar:4012'}.` The proxies are used on each request.
token (`str` or *bool*, *optional*):
The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated
- when running `huggingface-cli login` (stored in `~/.huggingface`).
+ when running `hf auth login` (stored in `~/.cache/huggingface/token` by default).
revision (`str`, *optional*, defaults to `"main"`):
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any
@@ -389,7 +389,7 @@ def from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs):
'http://hostname': 'foo.bar:4012'}.` The proxies are used on each request.
token (`str` or *bool*, *optional*):
The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated
- when running `huggingface-cli login` (stored in `~/.huggingface`).
+ when running `hf auth login` (stored in `~/.cache/huggingface/token` by default).
revision (`str`, *optional*, defaults to `"main"`):
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any
diff --git a/src/transformers/models/auto/processing_auto.py b/src/transformers/models/auto/processing_auto.py
index 5b73e1a137c8..69f65e23e151 100644
--- a/src/transformers/models/auto/processing_auto.py
+++ b/src/transformers/models/auto/processing_auto.py
@@ -216,7 +216,7 @@ def from_pretrained(cls, pretrained_model_name_or_path, **kwargs):
'http://hostname': 'foo.bar:4012'}.` The proxies are used on each request.
token (`str` or *bool*, *optional*):
The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated
- when running `huggingface-cli login` (stored in `~/.huggingface`).
+ when running `hf auth login` (stored in `~/.cache/huggingface/token` by default).
revision (`str`, *optional*, defaults to `"main"`):
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any
diff --git a/src/transformers/models/auto/tokenization_auto.py b/src/transformers/models/auto/tokenization_auto.py
index eda3d29ae777..6c4e3e98c757 100644
--- a/src/transformers/models/auto/tokenization_auto.py
+++ b/src/transformers/models/auto/tokenization_auto.py
@@ -808,7 +808,7 @@ def get_tokenizer_config(
'http://hostname': 'foo.bar:4012'}.` The proxies are used on each request.
token (`str` or *bool*, *optional*):
The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated
- when running `huggingface-cli login` (stored in `~/.huggingface`).
+ when running `hf auth login` (stored in `~/.cache/huggingface/token` by default).
revision (`str`, *optional*, defaults to `"main"`):
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any
diff --git a/src/transformers/models/auto/video_processing_auto.py b/src/transformers/models/auto/video_processing_auto.py
index 619d67c561f8..77a8c458bd33 100644
--- a/src/transformers/models/auto/video_processing_auto.py
+++ b/src/transformers/models/auto/video_processing_auto.py
@@ -134,7 +134,7 @@ def get_video_processor_config(
'http://hostname': 'foo.bar:4012'}.` The proxies are used on each request.
token (`str` or *bool*, *optional*):
The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated
- when running `huggingface-cli login` (stored in `~/.huggingface`).
+ when running `hf auth login` (stored in `~/.cache/huggingface/token` by default).
revision (`str`, *optional*, defaults to `"main"`):
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any
@@ -249,7 +249,7 @@ def from_pretrained(cls, pretrained_model_name_or_path, *inputs, **kwargs):
'http://hostname': 'foo.bar:4012'}.` The proxies are used on each request.
token (`str` or *bool*, *optional*):
The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated
- when running `huggingface-cli login` (stored in `~/.huggingface`).
+ when running `hf auth login` (stored in `~/.cache/huggingface/token` by default).
revision (`str`, *optional*, defaults to `"main"`):
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any
diff --git a/src/transformers/models/wav2vec2/modeling_wav2vec2.py b/src/transformers/models/wav2vec2/modeling_wav2vec2.py
index d92519f89002..ab3f38d644be 100755
--- a/src/transformers/models/wav2vec2/modeling_wav2vec2.py
+++ b/src/transformers/models/wav2vec2/modeling_wav2vec2.py
@@ -1175,7 +1175,7 @@ def load_adapter(self, target_lang: str, force_load=True, **kwargs):
Whether or not to only look at local files (i.e., do not try to download the model).
token (`str` or `bool`, *optional*):
The token to use as HTTP bearer authorization for remote files. If `True`, or not specified, will use
- the token generated when running `huggingface-cli login` (stored in `~/.huggingface`).
+ the token generated when running `hf auth login` (stored in `~/.cache/huggingface/token` by default).
revision (`str`, *optional*, defaults to `"main"`):
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any
diff --git a/src/transformers/pipelines/__init__.py b/src/transformers/pipelines/__init__.py
index bf7abe324ed8..b84e1bf9d4eb 100755
--- a/src/transformers/pipelines/__init__.py
+++ b/src/transformers/pipelines/__init__.py
@@ -761,7 +761,7 @@ def pipeline(
Whether or not to use a Fast tokenizer if possible (a [`PreTrainedTokenizerFast`]).
use_auth_token (`str` or *bool*, *optional*):
The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated
- when running `huggingface-cli login` (stored in `~/.huggingface`).
+ when running `hf auth login` (stored in `~/.cache/huggingface/token` by default).
device (`int` or `str` or `torch.device`):
Defines the device (*e.g.*, `"cpu"`, `"cuda:1"`, `"mps"`, or a GPU ordinal rank like `1`) on which this
pipeline will be allocated.
diff --git a/src/transformers/tokenization_mistral_common.py b/src/transformers/tokenization_mistral_common.py
index c93ed4161d51..95bd64049bf9 100644
--- a/src/transformers/tokenization_mistral_common.py
+++ b/src/transformers/tokenization_mistral_common.py
@@ -1726,7 +1726,7 @@ def from_pretrained(
exist.
token (`str` or *bool*, *optional*):
The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated
- when running `huggingface-cli login` (stored in `~/.huggingface`).
+ when running `hf auth login` (stored in `~/.cache/huggingface/token` by default).
local_files_only (`bool`, *optional*, defaults to `False`):
Whether or not to only rely on local files and not to attempt to download any files.
revision (`str`, *optional*, defaults to `"main"`):
diff --git a/src/transformers/tokenization_utils_base.py b/src/transformers/tokenization_utils_base.py
index 51bcc9a321e7..a8d0336b4663 100644
--- a/src/transformers/tokenization_utils_base.py
+++ b/src/transformers/tokenization_utils_base.py
@@ -1790,7 +1790,7 @@ def from_pretrained(
'http://hostname': 'foo.bar:4012'}`. The proxies are used on each request.
token (`str` or *bool*, *optional*):
The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated
- when running `huggingface-cli login` (stored in `~/.huggingface`).
+ when running `hf auth login` (stored in `~/.cache/huggingface/token` by default).
local_files_only (`bool`, *optional*, defaults to `False`):
Whether or not to only rely on local files and not to attempt to download any files.
revision (`str`, *optional*, defaults to `"main"`):
diff --git a/src/transformers/training_args.py b/src/transformers/training_args.py
index cf5ece295ee7..8314e271343b 100644
--- a/src/transformers/training_args.py
+++ b/src/transformers/training_args.py
@@ -690,7 +690,7 @@ class TrainingArguments:
hub_token (`str`, *optional*):
The token to use to push the model to the Hub. Will default to the token in the cache folder obtained with
- `huggingface-cli login`.
+ `hf auth login`.
hub_private_repo (`bool`, *optional*):
Whether to make the repo private. If `None` (default), the repo will be public unless the organization's default is private. This value is ignored if the repo already exists.
hub_always_push (`bool`, *optional*, defaults to `False`):
@@ -2930,7 +2930,7 @@ def set_push_to_hub(
token (`str`, *optional*):
The token to use to push the model to the Hub. Will default to the token in the cache folder obtained
- with `huggingface-cli login`.
+ with `hf auth login`.
private_repo (`bool`, *optional*, defaults to `False`):
Whether to make the repo private. If `None` (default), the repo will be public unless the organization's default is private. This value is ignored if the repo already exists.
always_push (`bool`, *optional*, defaults to `False`):
diff --git a/src/transformers/utils/hub.py b/src/transformers/utils/hub.py
index b4c4c276b36d..c4921459a791 100644
--- a/src/transformers/utils/hub.py
+++ b/src/transformers/utils/hub.py
@@ -286,7 +286,7 @@ def cached_file(
'http://hostname': 'foo.bar:4012'}.` The proxies are used on each request.
token (`str` or *bool*, *optional*):
The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated
- when running `huggingface-cli login` (stored in `~/.huggingface`).
+ when running `hf auth login` (stored in `~/.cache/huggingface/token` by default).
revision (`str`, *optional*, defaults to `"main"`):
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any
@@ -363,7 +363,7 @@ def cached_files(
'http://hostname': 'foo.bar:4012'}.` The proxies are used on each request.
token (`str` or *bool*, *optional*):
The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated
- when running `huggingface-cli login` (stored in `~/.huggingface`).
+ when running `hf auth login` (stored in `~/.cache/huggingface/token` by default).
revision (`str`, *optional*, defaults to `"main"`):
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any
@@ -508,7 +508,7 @@ def cached_files(
raise OSError(
f"{path_or_repo_id} is not a local folder and is not a valid model identifier "
"listed on 'https://huggingface.co/models'\nIf this is a private repository, make sure to pass a token "
- "having permission to this repo either by logging in with `huggingface-cli login` or by passing "
+ "having permission to this repo either by logging in with `hf auth login` or by passing "
"`token=`"
) from e
elif isinstance(e, RevisionNotFoundError):
@@ -699,7 +699,7 @@ def has_file(
raise OSError(
f"{path_or_repo} is a gated repository. Make sure to request access at "
f"https://huggingface.co/{path_or_repo} and pass a token having permission to this repo either by "
- "logging in with `huggingface-cli login` or by passing `token=`."
+ "logging in with `hf auth login` or by passing `token=`."
) from e
except RepositoryNotFoundError as e:
logger.error(e)
@@ -873,7 +873,7 @@ def push_to_hub(
Whether to make the repo private. If `None` (default), the repo will be public unless the organization's default is private. This value is ignored if the repo already exists.
token (`bool` or `str`, *optional*):
The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated
- when running `huggingface-cli login` (stored in `~/.huggingface`). Will default to `True` if `repo_url`
+ when running `hf auth login` (stored in `~/.cache/huggingface/token` by default). Will default to `True` if `repo_url`
is not specified.
max_shard_size (`int` or `str`, *optional*, defaults to `"5GB"`):
Only applicable for models. The maximum size for a checkpoint before being sharded. Checkpoints shard
diff --git a/src/transformers/utils/peft_utils.py b/src/transformers/utils/peft_utils.py
index 0a7263b5a6b3..e3976acf168b 100644
--- a/src/transformers/utils/peft_utils.py
+++ b/src/transformers/utils/peft_utils.py
@@ -59,7 +59,7 @@ def find_adapter_config_file(
'http://hostname': 'foo.bar:4012'}.` The proxies are used on each request.
token (`str` or *bool*, *optional*):
The token to use as HTTP bearer authorization for remote files. If `True`, will use the token generated
- when running `huggingface-cli login` (stored in `~/.huggingface`).
+ when running `hf auth login` (stored in `~/.cache/huggingface/token` by default).
revision (`str`, *optional*, defaults to `"main"`):
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any
diff --git a/src/transformers/video_processing_utils.py b/src/transformers/video_processing_utils.py
index 715912846cdb..0db55024e6d2 100644
--- a/src/transformers/video_processing_utils.py
+++ b/src/transformers/video_processing_utils.py
@@ -456,7 +456,7 @@ def from_pretrained(
'http://hostname': 'foo.bar:4012'}.` The proxies are used on each request.
token (`str` or `bool`, *optional*):
The token to use as HTTP bearer authorization for remote files. If `True`, or not specified, will use
- the token generated when running `huggingface-cli login` (stored in `~/.huggingface`).
+ the token generated when running `hf auth login` (stored in `~/.cache/huggingface/token` by default).
revision (`str`, *optional*, defaults to `"main"`):
The specific model version to use. It can be a branch name, a tag name, or a commit id, since we use a
git-based system for storing models and other artifacts on huggingface.co, so `revision` can be any
diff --git a/templates/adding_a_new_example_script/{{cookiecutter.directory_name}}/run_{{cookiecutter.example_shortcut}}.py b/templates/adding_a_new_example_script/{{cookiecutter.directory_name}}/run_{{cookiecutter.example_shortcut}}.py
index 83916cc58736..3cd69eb95630 100755
--- a/templates/adding_a_new_example_script/{{cookiecutter.directory_name}}/run_{{cookiecutter.example_shortcut}}.py
+++ b/templates/adding_a_new_example_script/{{cookiecutter.directory_name}}/run_{{cookiecutter.example_shortcut}}.py
@@ -120,7 +120,7 @@ class ModelArguments:
metadata={
"help": (
"The token to use as HTTP bearer authorization for remote files. If not specified, will use the token "
- "generated when running `huggingface-cli login` (stored in `~/.huggingface`)."
+                 "generated when running `hf auth login` (stored in `~/.cache/huggingface/token` by default)."
)
},
)
diff --git a/tests/sagemaker/scripts/pytorch/run_glue_model_parallelism.py b/tests/sagemaker/scripts/pytorch/run_glue_model_parallelism.py
index 64b86c8de1c7..2b1a19b9ab25 100644
--- a/tests/sagemaker/scripts/pytorch/run_glue_model_parallelism.py
+++ b/tests/sagemaker/scripts/pytorch/run_glue_model_parallelism.py
@@ -180,7 +180,7 @@ class ModelArguments:
default=False,
metadata={
"help": (
- "Will use the token generated when running `huggingface-cli login` (necessary to use this script "
+ "Will use the token generated when running `hf auth login` (necessary to use this script "
"with private models)."
)
},
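The docstrings touched by this patch all describe the same token-resolution rule: an explicit string is used as the bearer token, `True` (or leaving the argument unset) falls back to the token cached by `hf auth login`, and `False` disables authentication. A minimal sketch of that rule follows; the helper names and the cache-reading stub are illustrative, not the actual `transformers`/`huggingface_hub` internals, and the path assumes the default `HF_HOME` location:

```python
from pathlib import Path
from typing import Optional, Union

# Default location where `hf auth login` stores the token
# (HF_HOME defaults to ~/.cache/huggingface).
TOKEN_PATH = Path.home() / ".cache" / "huggingface" / "token"

def read_cached_token() -> Optional[str]:
    """Return the token saved by `hf auth login`, or None if absent."""
    try:
        return TOKEN_PATH.read_text().strip() or None
    except OSError:
        return None

def resolve_token(token: Union[str, bool, None] = None) -> Optional[str]:
    """Mirror the documented rule: a string is used as-is; True or an
    unspecified value falls back to the cached login token; False means
    no authentication."""
    if isinstance(token, str):
        return token
    if token is False:
        return None
    return read_cached_token()
```

In the error paths rewritten above (`cached_files`, `has_file`), this is why the suggested remedies are exactly "log in with `hf auth login`" or "pass `token=`": both end up feeding the same resolution step.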