# fix: bigfield veridise audit fixes #16842

Merged: suyash67 merged 17 commits into merge-train/barretenberg from bigfield-veridise-fixes on Sep 22, 2025

Conversation

suyash67 (Contributor) commented Sep 7, 2025

Summary of the fixes based on the audit of the `bigfield` class by Veridise (on commit [480f49d](https://github.com/AztecProtocol/aztec-packages/tree/480f49dad314ea4e753ff1e5180992a16926d2b3)):

#### V-AZT-VUL-001: Missing zero-check for division by constant

Added an assertion that triggers when the divisor is a constant zero and the flag `check_for_zero` is set to `true`. Also added a test that validates this behaviour.

Response: Fixed in bbf2f73
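As a sketch of the guard's behaviour (the names below are illustrative, not the actual bigfield interface; the real assertion lives inside the division code path):

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical reduced sketch of the new guard: a *constant* divisor equal to
// zero must be rejected up front when check_for_zero is requested. Witness
// divisors are instead constrained to be nonzero in-circuit, so they pass here.
bool constant_divisor_ok(bool divisor_is_constant, uint64_t divisor_value, bool check_for_zero)
{
    if (check_for_zero && divisor_is_constant && divisor_value == 0) {
        return false; // the real code would trigger an assertion failure here
    }
    return true;
}
```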

#### V-AZT-VUL-002: Incomplete `assert_is_not_equal` assumes random distribution

The function `assert_is_not_equal` is no longer exposed as a public function (via Noir). In fact, none of the non-native field methods (or group operations) are exposed to developers via Noir; Noir instead uses the natively written [noir-bignum](https://github.com/noir-lang/noir-bignum) library. (The `bigint_constraint.cpp` file, which contains the interface for non-native field methods between Noir and barretenberg, is yet to be removed from the repository; this will be done in an upcoming PR.)

Response: Acknowledged

#### V-AZT-VUL-003: `borrow_lo` rounds down

Updated the computation of `borrow_lo_value` to use ceiling instead of floor:

```cpp
uint256_t borrow_lo_value = (max_remainders_lo + ((uint256_t(1) << (2 * NUM_LIMB_BITS)) - 1)) >> (2 * NUM_LIMB_BITS);
```

Response: Fixed in c1bdd88
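The fix is the standard add-then-shift ceiling idiom: adding `2^k - 1` before shifting right by `k` rounds up instead of down. A reduced 64-bit sketch (helper names are illustrative, not from the codebase):

```cpp
#include <cassert>
#include <cstdint>

// ceil(x / 2^k) computed as (x + (2^k - 1)) >> k, mirroring the
// borrow_lo_value fix. The old code computed the floor, which under-counts
// whenever the low k bits of x are nonzero.
uint64_t ceil_shift(uint64_t x, unsigned k)
{
    return (x + ((uint64_t(1) << k) - 1)) >> k;
}

uint64_t floor_shift(uint64_t x, unsigned k)
{
    return x >> k;
}
```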

#### V-AZT-VUL-004: Unsafe default for `msub_div()`

Changed the default value of `enable_divisor_nz_check` to `true` and explicitly passed it as `false` at its call sites in the `biggroup` class. In a few instances, passing this parameter as `false` can lead to incorrect handling of the point at infinity; TODOs were added at such places, and these instances will be analysed separately to avoid any unexpected behaviour.

Response: Fixed in 25e820c, usage of msub_div will be analysed separately
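The safe-default pattern can be sketched as follows (the signature here is illustrative, not the real `msub_div` interface): the check defaults to on, so any caller wanting the unchecked behaviour must opt out explicitly.

```cpp
#include <cassert>

// Illustrative sketch: the nonzero check on the divisor now defaults to on,
// so a caller that handles the zero case itself (as some biggroup internals
// do) must pass false explicitly rather than getting unchecked behaviour
// by accident.
int checked_div(int numerator, int divisor, bool enable_divisor_nz_check = true)
{
    if (enable_divisor_nz_check) {
        assert(divisor != 0); // stands in for the in-circuit nonzero assertion
    }
    return divisor == 0 ? 0 : numerator / divisor;
}
```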

#### V-AZT-VUL-005: Limbs with a maximum value of 0 trigger compilation failure

In the function `decompose_into_default_range` we have an assertion:

```cpp
ASSERT(num_bits > 0)
```

because a circuit variable with 0 bits does not really make sense. The only case in which `carry_hi_msb` or `carry_lo_msb` can be 0 is when:

```cpp
template <typename Builder, typename T>
void bigfield<Builder, T>::unsafe_evaluate_multiply_add(const bigfield& input_left,
                                                        const bigfield& input_to_mul,
                                                        const std::vector<bigfield>& to_add,
                                                        const bigfield& input_quotient,
                                                        const std::vector<bigfield>& input_remainders)
```

is called with `input_to_mul` as constant 0 and/or `input_quotient` as constant 0. In every usage of `unsafe_evaluate_multiply_add` (or `unsafe_evaluate_multiple_multiply_add`), the `input_quotient` is always a circuit witness. This ensures that `carry_lo_msb > 0` and `carry_hi_msb > 0` (assertions were added to verify this empirically).

Response: We think it does not make sense to support `num_bits = 0` in range-constraint functions, but we are open to discussion if there is any way `carry_lo_msb` or `carry_hi_msb` can actually be 0 in practice.

#### V-AZT-VUL-006: Off-by-one comparison of `maximum_unreduced_limb_value`

Changed the definition of `get_maximum_unreduced_limb_value()` to $2^Q - 1$ (from $2^Q$).

Response: Fixed in 3da35ba
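The off-by-one matters because reduction checks compare a limb's maximum against this bound: with the bound defined as $2^Q - 1$, a `<=` comparison accepts exactly the $Q$-bit values, whereas the old $2^Q$ bound let one extra value (namely $2^Q$ itself) slip through the same comparison. A reduced-width sketch (the helper name is illustrative):

```cpp
#include <cassert>
#include <cstdint>

// With the bound defined as 2^Q - 1, "value <= bound" accepts exactly the
// Q-bit values, i.e. value < 2^Q. The old definition (2^Q) admitted the
// out-of-range value 2^Q through the same comparison.
bool within_unreduced_limb_bound(uint64_t value, unsigned Q)
{
    uint64_t max_unreduced_limb = (uint64_t(1) << Q) - 1; // new definition
    return value <= max_unreduced_limb;
}
```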

#### V-AZT-VUL-007: Maintainability Improvements

1. Removed unused constants:
    - `bigfield.prime_basis_maximum_limb`
    - `bigfield.shift_right_1`
    - `bigfield.shift_right_2`
2. Renamed `negative_prime_modulus_mod_binary_basis` to `negative_prime_modulus_mod_native_basis` and used it while checking the prime-limb consistency (in `evaluate_polynomial_identity`).
3. Used `NUM_LIMBS` instead of the hard-coded 4.
4. Fixed comments (using $L$ instead of $t$).
5. Changed the definition of `get_maximum_unreduced_value()` to the form $(2^N - 1)$ and used the comparison operators correctly.
6. Fixed `static constexpr uint64_t MAX_UNREDUCED_LIMB_BITS = NUM_LIMB_BITS + MAX_ADDITION_LOG`.
7. Got rid of the variable `neg_prime`.
8. Used `is_constant()` instead of iterating over each limb to check constant-ness in `operator+` and `operator-`.
9. Removed the duplicate/redundant assertion: `ASSERT(input_left.size() == input_right.size() && input_left.size() < 1024);`
10. Fixed miscellaneous comments.

Response: Fixed in e2024df

#### V-AZT-VUL-008: Incorrect term used during calculation of the bound on the upper limb

Corrected the calculation of the resultant maximum bound of the upper and lower limbs:

$$D_{\textsf{hi}} < 2^{2Q + L + \textcolor{red}{3}},$$ $$D_{\textsf{lo}} < 2^{2Q + L + \textcolor{red}{2}}.$$

Response: Fixed in f41813e

#### V-AZT-VUL-009: Optimization Improvements

1. In the function `add_two`, we could add another `if` condition for when two of the three inputs are constant and reduced by the prime. But the current implementation should handle that implicitly when calling `add_two` on each limb. Note that no additional gates would be added, since each `add_two` call on limbs would just update the additive constants on the witness.
2. We agree with the observation, but we think it's fine to have redundant reductions in the constant case, as that would have a negligible impact on prover performance.
3. It is a good suggestion to use a tighter bound for the range constraint on the quotient when reduction is not necessary. However, the quotient witness is created using the constructor:

    ```cpp
    template <typename Builder, typename T>
    bigfield<Builder, T> bigfield<Builder, T>::create_from_u512_as_witness(Builder* ctx,
                                                                           const uint512_t& value,
                                                                           const bool can_overflow,
                                                                           const size_t maximum_bitlength)
    ```

    and when `can_overflow` is set to `false`, `maximum_bitlength` must be greater than $3L$. This assertion breaks if we use a tighter range constraint on the quotient. This needs to be investigated further.

suyash67 changed the base branch from merge-train/barretenberg to sb/bigfield-veridise-tests September 7, 2025 22:20
suyash67 force-pushed the bigfield-veridise-fixes branch from 7b18af0 to f41813e September 7, 2025 23:52
suyash67 marked this pull request as ready for review September 8, 2025 01:15
suyash67 requested a review from Rumata888 September 8, 2025 01:17
suyash67 force-pushed the sb/bigfield-veridise-tests branch from 700d0f4 to d67c99b September 8, 2025 10:58
suyash67 force-pushed the bigfield-veridise-fixes branch from f41813e to 2d5ea7e September 8, 2025 11:02
Rumata888 (Contributor) left a comment:

Great, thanks for doing this

Base automatically changed from sb/bigfield-veridise-tests to merge-train/barretenberg September 18, 2025 20:46
suyash67 force-pushed the bigfield-veridise-fixes branch from d8f4dd0 to 781740b September 18, 2025 20:59
suyash67 force-pushed the bigfield-veridise-fixes branch from 0da7b91 to 2a85245 September 22, 2025 10:08
suyash67 merged commit 2166a06 into merge-train/barretenberg September 22, 2025
6 checks passed
suyash67 deleted the bigfield-veridise-fixes branch September 22, 2025 10:42
github-merge-queue bot pushed a commit that referenced this pull request Sep 22, 2025
BEGIN_COMMIT_OVERRIDE
chore: Tube-related clean up (#17064)
chore: cycle group cleanup #5 (#17026)
chore: `field_conversion` clean-up/int audit (#16898)
fix: bigfield veridise audit fixes (#16842)
END_COMMIT_OVERRIDE
fcarreiro pushed a commit that referenced this pull request Sep 22, 2025
mralj pushed a commit that referenced this pull request Oct 13, 2025
ludamad pushed a commit that referenced this pull request Dec 16, 2025