
Conversation


@Narsil Narsil commented Dec 15, 2022

What does this PR do?

Fixes the slow test by making sure we're loading the FeatureExtractor.

LayoutLM doesn't have a FeatureExtractor, while LayoutLMv2 does, and this model repo uses a combination of both.

Putting LayoutLM in the MULTI_MODAL config enables the pipeline to load the feature_extractor regardless of FEATURE_EXTRACTION_MAPPING.
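
For illustration only (not the exact slow test), here is a minimal sketch of the behaviour this enables, assuming the `impira/layoutlm-document-qa` checkpoint, which ships a LayoutLM model together with a LayoutLMv2 feature extractor in the same repo; the checkpoint and task here are illustrative and may differ from the ones used in the test:

```python
# Minimal sketch, not the actual slow test.
# Assumption: "impira/layoutlm-document-qa" pairs a LayoutLM model with a
# LayoutLMv2 feature extractor in the same repo; the real test may use a
# different checkpoint or task.
from transformers import pipeline

pipe = pipeline("document-question-answering", model="impira/layoutlm-document-qa")

# With LayoutLM treated as multi-modal, the pipeline should load the repo's
# feature extractor even though LayoutLM itself has no entry in
# FEATURE_EXTRACTION_MAPPING.
assert pipe.feature_extractor is not None
```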

Fixes # (issue)

Before submitting

  • This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
  • Did you read the contributor guideline,
    Pull Request section?
  • Was this discussed/approved via a Github issue or the forum? Please add a link
    to it if that's the case.
  • Did you make sure to update the documentation with your changes? Here are the
    documentation guidelines, and
    here are tips on formatting docstrings.
  • Did you write any new necessary tests?

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@Narsil Narsil requested review from sgugger and ydshieh December 15, 2022 10:37
@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint.


@ydshieh ydshieh left a comment


Thank you! I confirm it works.


@sgugger sgugger left a comment


Thanks for the fix!

@ydshieh ydshieh merged commit fca66ab into huggingface:main Dec 15, 2022
Narsil added a commit to Narsil/transformers that referenced this pull request Dec 16, 2022
@Narsil Narsil mentioned this pull request Dec 16, 2022
Narsil added a commit that referenced this pull request Dec 16, 2022
* Revert "Fixing object detection with `layoutlm` (#20776)"

This reverts commit fca66ab.

* Better fix for layoutlm object detection.

* Style.
gsarti added a commit to gsarti/transformers that referenced this pull request Dec 16, 2022
… add_get_encoder_decoder_fsmt

* 'main' of ssh://github.com/huggingface/transformers: (1433 commits)
  Add Universal Segmentation class + mapping (huggingface#20766)
  Stop calling expand_1d on newer TF versions (huggingface#20786)
  Fix object detection2 (huggingface#20798)
  [Pipeline] skip feature extraction test if in `IMAGE_PROCESSOR_MAPPING` (huggingface#20790)
  Recompile `apex` in `DeepSpeed` CI image (huggingface#20788)
  Move convert_to_rgb to image_transforms module (huggingface#20784)
  Generate: use `GenerationConfig` as the basis for `.generate()` parametrization (huggingface#20388)
  Install video dependency for pipeline CI (huggingface#20777)
  Fixing object detection with `layoutlm` (huggingface#20776)
  [Pipeline] fix failing bloom `pipeline` test (huggingface#20778)
  Patch for FlanT5-XXL 8bit support (huggingface#20760)
  Install vision for TF pipeline tests (huggingface#20771)
  Even more validation. (huggingface#20762)
  Add Swin backbone (huggingface#20769)
  Install `torch-tensorrt 1.3.0` for DeepSpeed CI (huggingface#20764)
  Replaces xxx_required with requires_backends (huggingface#20715)
  [CI-Test] Fixes but also skips the mT5 tests (huggingface#20755)
  Fix attribute error problem  (huggingface#20765)
  [Tests] Improve test_attention_outputs (huggingface#20701)
  Fix missing `()` in some usage of `is_flaky` (huggingface#20749)
  ...