
What is R_MULT used in forward_convolutional_layer_q #81

LiangLeon opened this issue Apr 1, 2020 · 1 comment

@LiangLeon

Hi,
I have learned how to get the weights multiplier and the input multiplier, but I have a question about the following code in the forward_convolutional_layer_q function:

```c
float ALPHA1 = R_MULT / (l.input_quant_multipler * l.weights_quant_multipler);
for (i = 0; i < l.outputs; ++i) {
    l.output[i] = output_q[i] * ALPHA1; // cuDNN: alpha1
}
```

R_MULT is a constant with the value 32.
Could you give a brief explanation of why R_MULT is needed and how its value is set?
I really appreciate your time and great work.

ArtyZe commented May 21, 2021

> Could you give a brief explanation of why R_MULT is needed and how its value is set?

In the gemm function:

[screenshot of the gemm code, which divides by R_MULT during the integer accumulation]

Maybe it is there to protect against overflow? Since the gemm divides by R_MULT, when dequantizing the output from int back to float you need to multiply by 32 as well.
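For intuition, here is a minimal sketch of that idea (not the repo's actual gemm; the multiplier values and array sizes are made up for illustration): each int8×int8 product is divided by R_MULT before accumulation to keep the running sum in a smaller range, and the dequantization scale ALPHA1 = R_MULT / (input_quant_multipler * weights_quant_multipler) multiplies R_MULT back in while mapping the result to float scale.

```c
#include <stdint.h>
#include <stdio.h>

#define R_MULT 32  /* same constant as in the question */

/* Sketch of a scaled-down int8 dot product: dividing each product by
 * R_MULT keeps the accumulator small (useful if it were held in int16). */
static int32_t dot_int8_scaled(const int8_t *a, const int8_t *b, int k)
{
    int32_t sum = 0;
    for (int i = 0; i < k; ++i) {
        sum += (a[i] * b[i]) / R_MULT;  /* scaled-down accumulation */
    }
    return sum;
}

int main(void)
{
    /* Hypothetical quantization multipliers (placeholders, not taken from the repo). */
    const float input_quant_multipler   = 16.0f;
    const float weights_quant_multipler = 64.0f;

    /* Toy quantized input and weight values. */
    int8_t in[4] = { 10, -20, 30, -40 };
    int8_t w[4]  = {  5,   6,  -7,   8 };

    int32_t output_q = dot_int8_scaled(in, w, 4);

    /* Dequantization: the R_MULT factor undoes the division inside the gemm,
     * and dividing by the two multipliers maps back to float scale
     * (this is the ALPHA1 from forward_convolutional_layer_q). */
    float ALPHA1 = R_MULT / (input_quant_multipler * weights_quant_multipler);
    float output = output_q * ALPHA1;

    printf("output_q = %d, output = %f\n", (int)output_q, output);
    return 0;
}
```

In other words, R_MULT appears in ALPHA1 only to compensate for the division performed inside the integer gemm; the per-product division does cost some precision compared to dividing the final sum once.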
