Merged

87 commits
f48a47b
remove attributes and add all missing sub processors to their auto cl…
yonigozlan Oct 15, 2025
d5d5c58
remove all mentions of .attributes
yonigozlan Oct 15, 2025
dd505b5
cleanup
yonigozlan Oct 15, 2025
6a1448f
fix processor tests
yonigozlan Oct 15, 2025
a292900
fix modular
yonigozlan Oct 15, 2025
63a255d
remove last attributes
yonigozlan Oct 16, 2025
ef73759
fixup
yonigozlan Oct 16, 2025
b5e8b2e
Merge remote-tracking branch 'upstream/main' into remove-attributes-f…
yonigozlan Oct 16, 2025
f14ff3c
fixes after merge
yonigozlan Oct 16, 2025
0306430
fix wrong tokenizer in auto florence2
yonigozlan Oct 16, 2025
01cb815
fix missing audio_processor + nits
yonigozlan Oct 17, 2025
49ec906
Override __init__ in NewProcessor and change hf-internal-testing-repo…
yonigozlan Oct 17, 2025
7dd5682
Merge remote-tracking branch 'upstream/main' into remove-attributes-f…
yonigozlan Oct 17, 2025
946cc5c
fix auto tokenizer test
yonigozlan Oct 17, 2025
b0cb3e0
add init to markup_lm
yonigozlan Oct 17, 2025
3b9e846
update CustomProcessor in custom_processing
yonigozlan Oct 17, 2025
53de7a4
remove print
yonigozlan Oct 17, 2025
93d2c4d
Merge branch 'main' into remove-attributes-from-processors
yonigozlan Oct 17, 2025
feeec28
Merge remote-tracking branch 'upstream/main' into remove-attributes-f…
yonigozlan Oct 22, 2025
4a6b080
nit
yonigozlan Oct 22, 2025
02402a0
Merge branch 'remove-attributes-from-processors' of https://github.co…
yonigozlan Oct 22, 2025
757e1f1
fix test modeling owlv2
yonigozlan Oct 22, 2025
bf763b2
fix test_processing_layoutxlm
yonigozlan Oct 22, 2025
0799a0a
Fix owlv2, wav2vec2, markuplm, voxtral issues
yonigozlan Oct 22, 2025
bf1a4b6
Merge remote-tracking branch 'upstream/main' into remove-attributes-f…
yonigozlan Oct 31, 2025
e3f130d
add support for loading and saving multiple tokenizer natively
yonigozlan Oct 31, 2025
cc45a7e
remove exclude_attributes from save_pretrained
yonigozlan Oct 31, 2025
6b9e7c9
Run slow v2 (#41914)
ydshieh Nov 1, 2025
0ccb0e3
Fix `detectron2` installation in docker files (#41975)
ydshieh Nov 2, 2025
1eeece5
Fix `autoawq[kernels]` installation in quantization docker file (#41978)
ydshieh Nov 2, 2025
e4d0a09
add support for saving encoder only so any parakeet model can be load…
nithinraok Nov 2, 2025
09702b2
Use indices as position_ids in modernebert (#41789)
remi-or Nov 3, 2025
688a79c
test tensor parallel: make tests for dense model more robust (#41968)
3outeille Nov 3, 2025
e44838b
fix: dict[RopeParameters] to dict[str, RopeParameters] (#41963)
RyanMullins Nov 3, 2025
98287d9
docs: add continuous batching page (#41847)
McPatate Nov 3, 2025
85b0bd9
Fix `torchcodec` version in quantization docker file (#41988)
ydshieh Nov 3, 2025
e798fe4
[kernels] Add Tests & CI for kernels (#41765)
MekkCyber Nov 3, 2025
c7a631b
Move the Mi355 to regular docker (#41989)
remi-or Nov 3, 2025
55938f4
More data in benchmarking (#41848)
remi-or Nov 3, 2025
76fbe5a
fix (CI): Refactor SSH runners (#41991)
glegendre01 Nov 3, 2025
3b87190
fix 3 failed test cases for video_llama_3 model on Intel XPU (#41931)
kaixuanliu Nov 3, 2025
c33037b
Integrate colqwen2.5 using colqwen2 modelling code (#40600)
sahil-kabir Nov 3, 2025
de10840
Fixed wrong padding value in OWLv2 (#41938)
gjamesgoenawan Nov 3, 2025
f639ad6
Fix `run slow v2`: empty report when there is only one model (#42002)
ydshieh Nov 4, 2025
135543a
[kernels] change import time in KernelConfig (#42004)
MekkCyber Nov 4, 2025
adf6777
DOC Fix typo in argument name: pseudoquant (#41994)
BenjaminBossan Nov 4, 2025
f37903b
Fix `torch+deepspeed` docker file (#41985)
ydshieh Nov 4, 2025
6a5d5ce
Correct syntax error in trainer.md (#42001)
Yacklin Nov 4, 2025
1f8ae37
Reduce the number of benchmark in the CI (#42008)
remi-or Nov 4, 2025
9488b26
Fix continuous batching tests (#42012)
Rocketknight1 Nov 4, 2025
0a703ee
add back `logging_dir` (#42013)
SunMarc Nov 4, 2025
2f2a82c
Fix issue with from pretrained and kwargs in image processors (#41997)
yonigozlan Nov 4, 2025
0143c60
Fix default image_rows and image_cols initialization in Idefics3 and …
MilkClouds Nov 4, 2025
af380ff
Add GLPNImageProcessorFast (#41725)
Aravind-11 Nov 4, 2025
98c0528
add fuyu fast image processors (#41817)
DeXtAr47-oss Nov 4, 2025
8ec8436
[kernels] Fix XPU layernorm kernel (#41583)
MekkCyber Nov 4, 2025
a63b6da
[v5] Deprecate Text2Text and related pipelines (#41996)
Rocketknight1 Nov 4, 2025
a3f3937
[FPQuant] MXFP8 and MXFP4 backwards support (#41897)
BlackSamorez Nov 4, 2025
5b552a9
Merge remote-tracking branch 'upstream/main' into support-auto_doctri…
yonigozlan Nov 6, 2025
09d5527
add working auto_docstring for processors
yonigozlan Nov 6, 2025
b542e95
add auto_docstring to processors first part
yonigozlan Nov 7, 2025
552509c
add auto_docstring to processors part 2
yonigozlan Nov 7, 2025
8979645
modifs after review
yonigozlan Nov 7, 2025
6cc30f9
Merge remote-tracking branch 'upstream/main' into remove-attributes-f…
yonigozlan Nov 7, 2025
30f1b92
Merge branch 'remove-attributes-from-processors' into support-auto_do…
yonigozlan Nov 7, 2025
bd5aae2
Merge remote-tracking branch 'upstream/main' into support-auto_doctri…
yonigozlan Nov 7, 2025
0af5a60
Merge remote-tracking branch 'upstream/main' into support-auto_doctri…
yonigozlan Dec 18, 2025
4f8a7ba
fully working auto_docstring and check_docstring with placeholder doc…
yonigozlan Jan 6, 2026
b9136ef
Working check_docstrings for Typed dicts
yonigozlan Jan 6, 2026
68f178b
Add recurring processor args to auto_docstring and add support for re…
yonigozlan Jan 6, 2026
1ce14e9
replace placeholders with real docstrings
yonigozlan Jan 6, 2026
0ee2c3f
Merge remote-tracking branch 'upstream/main' into support-auto_doctri…
yonigozlan Jan 6, 2026
22b29b8
fix copies
yonigozlan Jan 6, 2026
8d5ffa8
fixup
yonigozlan Jan 6, 2026
ab1f03b
remove unwanted changes
yonigozlan Jan 6, 2026
525804c
fix unprotected imports
yonigozlan Jan 6, 2026
852b458
Fix unprotected imports
yonigozlan Jan 6, 2026
03d1cd3
fix unprotected imports
yonigozlan Jan 6, 2026
22721cd
Add __call__ to all docs of processors
yonigozlan Jan 6, 2026
b170599
Merge remote-tracking branch 'upstream/main' into support-auto_doctri…
yonigozlan Jan 7, 2026
b73220d
nits docs
yonigozlan Jan 7, 2026
dcea25a
Merge branch 'main' into support-auto_doctring-in-processor
yonigozlan Jan 7, 2026
80b849f
Merge branch 'main' into support-auto_doctring-in-processor
yonigozlan Jan 8, 2026
edae136
Merge remote-tracking branch 'upstream/main' into support-auto_doctri…
yonigozlan Jan 8, 2026
14a5070
Merge branch 'support-auto_doctring-in-processor' of https://github.c…
yonigozlan Jan 8, 2026
b3bf0e3
add flaky test
yonigozlan Jan 8, 2026
d639cd9
Merge remote-tracking branch 'upstream/main' into support-auto_doctri…
yonigozlan Jan 8, 2026
1 change: 1 addition & 0 deletions docs/source/en/model_doc/align.md
@@ -166,6 +166,7 @@ for label, score in zip(candidate_labels, probs):
## AlignProcessor

[[autodoc]] AlignProcessor
- __call__

## AlignModel

1 change: 1 addition & 0 deletions docs/source/en/model_doc/altclip.md
@@ -126,3 +126,4 @@ for label, prob in zip(labels, probs[0]):
## AltCLIPProcessor

[[autodoc]] AltCLIPProcessor
- __call__
1 change: 1 addition & 0 deletions docs/source/en/model_doc/aria.md
@@ -149,6 +149,7 @@ print(response)
## AriaProcessor

[[autodoc]] AriaProcessor
- __call__

## AriaTextConfig

1 change: 1 addition & 0 deletions docs/source/en/model_doc/audioflamingo3.md
@@ -390,6 +390,7 @@ are forwarded, so you can tweak padding or tensor formats just like when calling
## AudioFlamingo3Processor

[[autodoc]] AudioFlamingo3Processor
- __call__

## AudioFlamingo3Encoder

1 change: 1 addition & 0 deletions docs/source/en/model_doc/aya_vision.md
@@ -260,6 +260,7 @@ print(processor.tokenizer.decode(generated[0], skip_special_tokens=True))
## AyaVisionProcessor

[[autodoc]] AyaVisionProcessor
- __call__

## AyaVisionConfig

1 change: 1 addition & 0 deletions docs/source/en/model_doc/blip-2.md
@@ -72,6 +72,7 @@ If you're interested in submitting a resource to be included here, please feel f
## Blip2Processor

[[autodoc]] Blip2Processor
- __call__

## Blip2VisionModel

1 change: 1 addition & 0 deletions docs/source/en/model_doc/blip.md
@@ -99,6 +99,7 @@ Refer to this [notebook](https://github.com/huggingface/notebooks/blob/main/exam
## BlipProcessor

[[autodoc]] BlipProcessor
- __call__

## BlipImageProcessor

1 change: 1 addition & 0 deletions docs/source/en/model_doc/chameleon.md
@@ -182,6 +182,7 @@ model = ChameleonForConditionalGeneration.from_pretrained(
## ChameleonProcessor

[[autodoc]] ChameleonProcessor
- __call__

## ChameleonImageProcessor

1 change: 1 addition & 0 deletions docs/source/en/model_doc/chinese_clip.md
@@ -98,6 +98,7 @@ Currently, following scales of pretrained Chinese-CLIP models are available on
## ChineseCLIPProcessor

[[autodoc]] ChineseCLIPProcessor
- __call__

## ChineseCLIPModel

1 change: 1 addition & 0 deletions docs/source/en/model_doc/clap.md
@@ -79,6 +79,7 @@ print(f"Text embeddings: {text_features}")
## ClapProcessor

[[autodoc]] ClapProcessor
- __call__

## ClapModel

1 change: 1 addition & 0 deletions docs/source/en/model_doc/clip.md
@@ -119,6 +119,7 @@ print(f"Most likely label: {most_likely_label} with probability: {probs[0][most_
## CLIPProcessor

[[autodoc]] CLIPProcessor
- __call__

## CLIPModel

1 change: 1 addition & 0 deletions docs/source/en/model_doc/clipseg.md
@@ -84,6 +84,7 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
## CLIPSegProcessor

[[autodoc]] CLIPSegProcessor
- __call__

## CLIPSegModel

1 change: 1 addition & 0 deletions docs/source/en/model_doc/cohere2_vision.md
@@ -139,3 +139,4 @@ print(outputs)
## Cohere2VisionProcessor

[[autodoc]] Cohere2VisionProcessor
- __call__
1 change: 1 addition & 0 deletions docs/source/en/model_doc/colpali.md
@@ -164,6 +164,7 @@ print(scores)
## ColPaliProcessor

[[autodoc]] ColPaliProcessor
- __call__

## ColPaliForRetrieval

1 change: 1 addition & 0 deletions docs/source/en/model_doc/colqwen2.md
@@ -189,6 +189,7 @@ processor = ColQwen2Processor.from_pretrained(model_name)
## ColQwen2Processor

[[autodoc]] ColQwen2Processor
- __call__

## ColQwen2ForRetrieval

1 change: 1 addition & 0 deletions docs/source/en/model_doc/deepseek_vl.md
@@ -209,6 +209,7 @@ model = DeepseekVLForConditionalGeneration.from_pretrained(
## DeepseekVLProcessor

[[autodoc]] DeepseekVLProcessor
- __call__

## DeepseekVLImageProcessor

1 change: 1 addition & 0 deletions docs/source/en/model_doc/deepseek_vl_hybrid.md
@@ -208,6 +208,7 @@ model = DeepseekVLHybridForConditionalGeneration.from_pretrained(
## DeepseekVLHybridProcessor

[[autodoc]] DeepseekVLHybridProcessor
- __call__

## DeepseekVLHybridImageProcessor

1 change: 1 addition & 0 deletions docs/source/en/model_doc/emu3.md
@@ -155,6 +155,7 @@ for i, image in enumerate(images['pixel_values']):
## Emu3Processor

[[autodoc]] Emu3Processor
- __call__

## Emu3ImageProcessor

1 change: 1 addition & 0 deletions docs/source/en/model_doc/ernie4_5_vl_moe.md
@@ -201,6 +201,7 @@ print(output_text)
## Ernie4_5_VL_MoeProcessor

[[autodoc]] Ernie4_5_VL_MoeProcessor
- __call__

## Ernie4_5_VL_MoeTextModel

1 change: 1 addition & 0 deletions docs/source/en/model_doc/flava.md
@@ -63,6 +63,7 @@ This model was contributed by [aps](https://huggingface.co/aps). The original co
## FlavaProcessor

[[autodoc]] FlavaProcessor
- __call__

## FlavaImageProcessor

1 change: 1 addition & 0 deletions docs/source/en/model_doc/florence2.md
@@ -171,6 +171,7 @@ print(parsed_answer)
## Florence2Processor

[[autodoc]] Florence2Processor
- __call__

## Florence2Model

1 change: 1 addition & 0 deletions docs/source/en/model_doc/gemma3.md
@@ -243,6 +243,7 @@ visualizer("<img>What is shown in this image?")
## Gemma3Processor

[[autodoc]] Gemma3Processor
- __call__

## Gemma3TextConfig

1 change: 1 addition & 0 deletions docs/source/en/model_doc/gemma3n.md
@@ -161,6 +161,7 @@ echo -e "Plants create energy through a process known as" | transformers run --t
## Gemma3nProcessor

[[autodoc]] Gemma3nProcessor
- __call__

## Gemma3nTextConfig

1 change: 1 addition & 0 deletions docs/source/en/model_doc/glm46v.md
@@ -39,6 +39,7 @@ rendered properly in your Markdown viewer.
## Glm46VProcessor

[[autodoc]] Glm46VProcessor
- __call__

## Glm46VModel

1 change: 1 addition & 0 deletions docs/source/en/model_doc/glm4v.md
@@ -196,6 +196,7 @@ print(output_text)
## Glm4vProcessor

[[autodoc]] Glm4vProcessor
- __call__

## Glm4vVisionModel

3 changes: 2 additions & 1 deletion docs/source/en/model_doc/glmasr.md
@@ -16,7 +16,7 @@ limitations under the License.
⚠️ Note that this file is in Markdown but contain specific syntax for our doc-builder (similar to MDX) that may not be rendered properly in your Markdown viewer.

-->
*This model was released on {release_date} and added to Hugging Face Transformers on 2025-12-15.*
*This model was released on {release_date} and added to Hugging Face Transformers on 2025-12-24.*


# GlmAsr
@@ -162,6 +162,7 @@ print(decoded_outputs)
## GlmAsrProcessor

[[autodoc]] GlmAsrProcessor
- __call__

## GlmAsrEncoder

1 change: 1 addition & 0 deletions docs/source/en/model_doc/got_ocr2.md
@@ -281,6 +281,7 @@ alt="drawing" width="600"/>
## GotOcr2Processor

[[autodoc]] GotOcr2Processor
- __call__

## GotOcr2Model

1 change: 1 addition & 0 deletions docs/source/en/model_doc/granite_speech.md
@@ -160,6 +160,7 @@ for i, transcription in enumerate(transcriptions):
## GraniteSpeechProcessor

[[autodoc]] GraniteSpeechProcessor
- __call__

## GraniteSpeechFeatureExtractor

1 change: 1 addition & 0 deletions docs/source/en/model_doc/granitevision.md
@@ -85,6 +85,7 @@ This model was contributed by [Alexander Brooks](https://huggingface.co/abrooks9
## LlavaNextProcessor

[[autodoc]] LlavaNextProcessor
- __call__

## LlavaNextForConditionalGeneration

1 change: 1 addition & 0 deletions docs/source/en/model_doc/grounding-dino.md
@@ -114,6 +114,7 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
## GroundingDinoProcessor

[[autodoc]] GroundingDinoProcessor
- __call__
- post_process_grounded_object_detection

## GroundingDinoConfig
1 change: 1 addition & 0 deletions docs/source/en/model_doc/instructblip.md
@@ -57,6 +57,7 @@ The attributes can be obtained from model config, as `model.config.num_query_tok
## InstructBlipProcessor

[[autodoc]] InstructBlipProcessor
- __call__

## InstructBlipVisionModel

1 change: 1 addition & 0 deletions docs/source/en/model_doc/internvl.md
@@ -348,6 +348,7 @@ This example showcases how to handle a batch of chat conversations with interlea
## InternVLProcessor

[[autodoc]] InternVLProcessor
- __call__

## InternVLVideoProcessor

1 change: 1 addition & 0 deletions docs/source/en/model_doc/janus.md
@@ -205,6 +205,7 @@ for i, image in enumerate(images['pixel_values']):
## JanusProcessor

[[autodoc]] JanusProcessor
- __call__

## JanusImageProcessor

1 change: 1 addition & 0 deletions docs/source/en/model_doc/kosmos2_5.md
@@ -224,6 +224,7 @@ print(generated_text[0])
## Kosmos2_5Processor

[[autodoc]] Kosmos2_5Processor
- __call__

## Kosmos2_5Model

1 change: 1 addition & 0 deletions docs/source/en/model_doc/lfm2_vl.md
@@ -82,6 +82,7 @@ processor.batch_decode(outputs, skip_special_tokens=True)[0]
## Lfm2VlProcessor

[[autodoc]] Lfm2VlProcessor
- __call__

## Lfm2VlConfig

1 change: 1 addition & 0 deletions docs/source/en/model_doc/llama4.md
@@ -416,6 +416,7 @@ model = Llama4ForConditionalGeneration.from_pretrained(
## Llama4Processor

[[autodoc]] Llama4Processor
- __call__

## Llama4ImageProcessorFast

1 change: 1 addition & 0 deletions docs/source/en/model_doc/llava.md
@@ -250,6 +250,7 @@ A list of official Hugging Face and community (indicated by 🌎) resources to h
## LlavaProcessor

[[autodoc]] LlavaProcessor
- __call__

## LlavaModel

1 change: 1 addition & 0 deletions docs/source/en/model_doc/llava_next.md
@@ -206,6 +206,7 @@ print(processor.decode(output[0], skip_special_tokens=True))
## LlavaNextProcessor

[[autodoc]] LlavaNextProcessor
- __call__

## LlavaNextModel

1 change: 1 addition & 0 deletions docs/source/en/model_doc/llava_onevision.md
@@ -298,6 +298,7 @@ model = LlavaOnevisionForConditionalGeneration.from_pretrained(
## LlavaOnevisionProcessor

[[autodoc]] LlavaOnevisionProcessor
- __call__

## LlavaOnevisionImageProcessor

1 change: 1 addition & 0 deletions docs/source/en/model_doc/mllama.md
@@ -108,6 +108,7 @@ print(processor.decode(output[0], skip_special_tokens=True))
## MllamaProcessor

[[autodoc]] MllamaProcessor
- __call__

## MllamaImageProcessor

1 change: 1 addition & 0 deletions docs/source/en/model_doc/musicgen.md
@@ -272,6 +272,7 @@ Tips:
## MusicgenProcessor

[[autodoc]] MusicgenProcessor
- __call__

## MusicgenModel

1 change: 1 addition & 0 deletions docs/source/en/model_doc/musicgen_melody.md
@@ -266,6 +266,7 @@ Tips:
## MusicgenMelodyProcessor

[[autodoc]] MusicgenMelodyProcessor
- __call__
- get_unconditional_inputs

## MusicgenMelodyFeatureExtractor
1 change: 1 addition & 0 deletions docs/source/en/model_doc/omdet-turbo.md
@@ -164,6 +164,7 @@ Detected statue with confidence 0.2 at location [428.1, 205.5, 767.3, 759.5] in
## OmDetTurboProcessor

[[autodoc]] OmDetTurboProcessor
- __call__
- post_process_grounded_object_detection

## OmDetTurboForObjectDetection
1 change: 1 addition & 0 deletions docs/source/en/model_doc/oneformer.md
@@ -85,6 +85,7 @@ The resource should ideally demonstrate something new instead of duplicating an
## OneFormerProcessor

[[autodoc]] OneFormerProcessor
- __call__

## OneFormerModel

1 change: 1 addition & 0 deletions docs/source/en/model_doc/ovis2.md
@@ -107,3 +107,4 @@ with torch.inference_mode():
## Ovis2Processor

[[autodoc]] Ovis2Processor
- __call__
1 change: 1 addition & 0 deletions docs/source/en/model_doc/paddleocr_vl.md
@@ -242,6 +242,7 @@ model = AutoModelForImageTextToText.from_pretrained("PaddlePaddle/PaddleOCR-VL",
## PaddleOCRVLProcessor

[[autodoc]] PaddleOCRVLProcessor
- __call__

## PaddleOCRVisionTransformer

1 change: 1 addition & 0 deletions docs/source/en/model_doc/paligemma.md
@@ -175,6 +175,7 @@ visualizer("<img> What is in this image?")
## PaliGemmaProcessor

[[autodoc]] PaliGemmaProcessor
- __call__

## PaliGemmaModel

1 change: 1 addition & 0 deletions docs/source/en/model_doc/perception_lm.md
@@ -48,6 +48,7 @@ The original code can be found [here](https://github.com/facebookresearch/percep
## PerceptionLMProcessor

[[autodoc]] PerceptionLMProcessor
- __call__

## PerceptionLMImageProcessorFast

1 change: 1 addition & 0 deletions docs/source/en/model_doc/phi4_multimodal.md
@@ -152,6 +152,7 @@ print(f'>>> Response\n{response}')
## Phi4MultimodalProcessor

[[autodoc]] Phi4MultimodalProcessor
- __call__

## Phi4MultimodalAudioConfig
