This repository has been archived by the owner on Jan 15, 2024. It is now read-only.

Quantize QuestionAnswering models #1581

Open. Wants to merge 10 commits into base: master

Conversation

@bgawrych (Contributor) commented Oct 6, 2021

Description

This PR enables quantization in the question answering scripts.
A custom calibration collector was added to avoid a significant accuracy drop.
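The calibration-collector idea can be sketched in plain Python (hypothetical names; the actual PR implements this against MXNet's quantization API): instead of calibrating each layer with the raw global min/max of its outputs, which rare outliers can skew badly, clip the range to a high percentile of the observed values.

```python
class PercentileCalibCollector:
    """Hypothetical sketch of a custom calibration collector.

    Records activation values per layer during calibration passes and
    returns a clipped (min, max) range based on a percentile, so that a
    handful of outlier activations does not stretch the quantization
    range and destroy int8 accuracy.
    """

    def __init__(self, percentile=99.9):
        self.percentile = percentile
        self.stats = {}  # layer name -> list of observed activation values

    def collect(self, name, values):
        # Called once per forward pass per layer with that layer's outputs.
        self.stats.setdefault(name, []).extend(values)

    def min_max(self, name):
        # Clip the upper bound to the requested percentile of observations.
        vals = sorted(self.stats[name])
        k = max(0, int(len(vals) * self.percentile / 100.0) - 1)
        return vals[0], vals[k]
```

This is only a conceptual stand-in: in the real script the collector hooks into MXNet's calibration flow rather than receiving plain Python lists.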

@bgawrych bgawrych requested a review from a team as a code owner October 6, 2021 12:03

@bartekkuncer bartekkuncer left a comment


Looks good to me.


@@ -287,10 +289,14 @@ def forward(self, tokens, token_types, valid_length, p_mask, start_position):
Shape (batch_size, sequence_length)
answerable_logits
"""
backbone_net = self.backbone
if self.quantized:


Suggested change
if self.quantized:
if self.quantized_bacbone is not None:

And remove the quantized flag?

@bgawrych (Contributor, Author) replied:


I thought about it, but I kept the quantized flag as an on/off switch for the quantized model. I am not sure it is really useful. What do you think?


It does not seem useful to me.
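The suggested change amounts to dispatching on whether a quantized backbone is attached, rather than on a separate boolean flag. A minimal sketch of that dispatch, with hypothetical names as a plain-Python stand-in for the actual Gluon blocks:

```python
class QAModel:
    """Hypothetical sketch: prefer the quantized backbone when one has
    been attached, otherwise fall back to the float backbone, so no
    separate 'quantized' flag is needed."""

    def __init__(self, backbone, quantized_backbone=None):
        self.backbone = backbone
        self.quantized_backbone = quantized_backbone

    def forward(self, tokens):
        # Dispatch on the presence of a quantized backbone, mirroring
        # the suggested `if self.quantized_backbone is not None:` check.
        net = (self.quantized_backbone
               if self.quantized_backbone is not None
               else self.backbone)
        return net(tokens)
```

The upside of this style is that the attribute itself is the single source of truth; a separate flag can drift out of sync with whether a quantized backbone actually exists.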
