chore: cycle group cleanup #2 #16876

Merged
ledwards2225 merged 7 commits into merge-train/barretenberg from lde/cycle-group-2
Sep 10, 2025
Conversation

@ledwards2225
Contributor

@ledwards2225 ledwards2225 commented Sep 8, 2025

Cleanup and minor performance-related bugfixes for `cycle_group::_variable_base_batch_mul_internal()`:

  • Use native hints in stdlib straus lookup table construction, where they were erroneously not being used before
  • Remove duplicate construction of native straus lookup tables (see PR comments for more details)
  • Fix misuse of `emplace_back` (virtually every instance was being used incorrectly)
  • Remove `std::optional` logic associated with broken support for scalars of different sizes (now protected with an assert)

@ledwards2225 ledwards2225 changed the title chore: more cycle group cleanup chore: cycle group cleanup #2 Sep 8, 2025
@ledwards2225 ledwards2225 marked this pull request as ready for review September 8, 2025 23:21
@ledwards2225 ledwards2225 changed the base branch from next to merge-train/barretenberg September 8, 2025 23:21
std::vector<cycle_group> points_to_add;
for (size_t i = 0; i < num_rounds; ++i) {
    for (size_t j = 0; j < num_points; ++j) {
        const std::optional<field_t> scalar_slice = scalar_slices[j].read(num_rounds - i - 1);
Contributor Author
Removing the optional logic, which was only needed to support scalars of different sizes (a case that was not actually used or supported).
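A rough sketch of the simplification (all types here are stand-ins, not barretenberg's real `field_t`/`straus_scalar_slice` API): once the mixed-size case is asserted away, the slice read no longer needs to return an optional, and the round loop loses its branching.

```cpp
#include <cassert>
#include <cstddef>
#include <optional>
#include <vector>

// Stand-in for a per-scalar slice table (field elements mocked as int).
struct ScalarSlices {
    std::vector<int> slices;

    // Old shape: returns nullopt when a shorter scalar has no slice for
    // this round (the mixed-size case that was never actually supported).
    std::optional<int> read_optional(size_t round) const {
        return round < slices.size() ? std::optional<int>(slices[round]) : std::nullopt;
    }

    // New shape: all scalars are asserted to have the same number of
    // rounds, so a plain read suffices and callers drop the optional.
    int read(size_t round) const {
        assert(round < slices.size() && "scalars of different sizes are not supported");
        return slices[round];
    }
};

// Round loop in the style of _variable_base_batch_mul_internal, with no
// optional branching left in the hot path.
int sum_slices(const std::vector<ScalarSlices>& all, size_t num_rounds) {
    int acc = 0;
    for (size_t i = 0; i < num_rounds; ++i) {
        for (const auto& s : all) {
            acc += s.read(num_rounds - i - 1);
        }
    }
    return acc;
}
```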


// perform each round's additions
for (size_t j = 0; j < num_points; ++j) {
    const std::optional<field_t> scalar_slice = scalar_slices[j].read(num_rounds - i - 1);
Contributor Author

Again, just removing the no-longer-needed optional logic here.

EXPECT_EQ(proof_result, true);
}

TYPED_TEST(CycleGroupTest, TestBatchMul)
Contributor Author

All I've done here is split a single test for BatchMul with many cases into multiple tests with one case each. Previously we were using the same builder for ALL cases and then doing a single check_circuit, which makes failures very hard to debug. Splitting them provides immediate feedback about which case is failing.

{
    // Construct native straus lookup table for each point
    std::vector<std::vector<Element>> native_straus_tables;
    for (size_t i = 0; i < num_points; ++i) {
Contributor Author
@ledwards2225 ledwards2225 Sep 9, 2025

It turns out that this code was computing the native straus lookup tables twice. This may have been intentional for the following somewhat convoluted reason: to avoid edge cases in the stdlib version of the tables, we replace the point at infinity with a generator point (one), then perform the additions required to compute the straus table values. To create "hints" for these in-circuit computations (i.e. the underlying native witness values), we would need to handle the point at infinity the same way in the native table construction. Fine.

BUT we also need the genuine native tables (i.e. where infinity is treated as infinity) to compute the native intermediate accumulator values in the straus algorithm. So, if we want to be able to provide hints to the in-circuit version, we would need to construct the native tables twice, in the two ways just described.

This is needlessly complicated, so what I do instead is construct the native tables once (correctly treating infinity as infinity) and tell the in-circuit version that no hint exists if the point is the point at infinity. There is really no efficiency loss here, since performing an on-the-fly addition with the point at infinity is basically free.
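A minimal sketch of the single-construction approach (toy types, not barretenberg's real `Element`): the native table is built once, treating infinity honestly, and the hint provider simply reports "no hint" whenever the base point is infinity.

```cpp
#include <cassert>
#include <cstddef>
#include <optional>
#include <vector>

// Toy native group element: infinity is a flag, "addition" is just +.
struct Element {
    long x = 0;
    bool infinity = false;
    Element add(const Element& o) const {
        if (infinity) return o;      // adding the point at infinity is free
        if (o.infinity) return *this;
        return {x + o.x, false};
    }
};

// Build the genuine native straus table once, treating infinity as infinity.
std::vector<Element> build_native_table(const Element& base, size_t table_size) {
    std::vector<Element> table(table_size);
    table[0] = Element{0, true};     // identity / point at infinity
    for (size_t i = 1; i < table_size; ++i) {
        table[i] = table[i - 1].add(base);
    }
    return table;
}

// Hints for the in-circuit table: none when the base is infinity, since the
// stdlib table substitutes a generator there and the values would not match.
std::vector<std::optional<Element>> make_hints(const Element& base,
                                               const std::vector<Element>& native_table) {
    std::vector<std::optional<Element>> hints;
    for (const auto& entry : native_table) {
        hints.push_back(base.infinity ? std::nullopt : std::optional<Element>(entry));
    }
    return hints;
}
```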

Contributor

nice simplification, the original logic seems quite convoluted indeed

std::span<AffineElement> table_hints(&operation_hints[i * hints_per_table], hints_per_table);
// Merge tags
tag = OriginTag(tag, scalars[i].get_origin_tag(), base_points[i].get_origin_tag());
point_tables.emplace_back(straus_lookup_table(context, base_points[i], offset_generators[i + 1], TABLE_BITS));
Contributor Author

Note a bug here: we were not even using the hints constructed a few lines above when constructing the straus_lookup_table! Corrected on the right.
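The shape of the bug, in a hypothetical sketch (the defaulted-parameter signature is illustrative, not barretenberg's actual constructor): when the hints argument has a default, a call site that builds hints but forgets to pass them still compiles cleanly and silently falls back to recomputing everything.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Illustrative table constructor: hints default to "none", so forgetting
// to pass them produces no error, just silently slower circuit logic.
struct LookupTable {
    bool used_hints;
    explicit LookupTable(size_t table_bits, const std::vector<int>* hints = nullptr)
        : used_hints(hints != nullptr)
    {
        (void)table_bits; // actual table construction elided in this sketch
    }
};
```

This is why the fix only shows up in the diff, not in the compiler output: both call sites are valid C++.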

size_t table_bits)
{
    const size_t table_size = 1UL << table_bits;
    Element base = base_point.is_point_at_infinity() ? Group::one : base_point;
Contributor Author
@ledwards2225 ledwards2225 Sep 9, 2025

As explained previously, this conditional was necessary in order for the native hints (in the case of infinity) to match what was expected by the stdlib version of this code (i.e. infinity is replaced by one). But it's far simpler (and no less efficient) to have this code construct the correct tables, so they can also be used in the actual native straus algorithm, and then simply not use the "hints" when encountering infinity in the stdlib straus table construction. (Indeed, before this PR the hints were not being used anyway!)

std::vector<straus_scalar_slice> scalar_slices;
scalar_slices.reserve(num_points);
for (size_t i = 0; i < num_points; ++i) {
    scalar_slices.emplace_back(straus_scalar_slice(context, scalars[i], TABLE_BITS));
Contributor Author
@ledwards2225 ledwards2225 Sep 9, 2025

The purpose of emplace_back is to pass the constructor arguments directly so that the object can be constructed in place rather than constructed and then moved. It's possible this would be optimized to be equivalent anyway, but at the very least it's poor code. I also fixed cases using emplace_back on already-constructed objects, which should use push_back instead. Maybe a bit pedantic, but I prefer to have correctness on this issue.

@ledwards2225 ledwards2225 changed the base branch from merge-train/barretenberg to next September 9, 2025 19:01
@ledwards2225 ledwards2225 changed the base branch from next to merge-train/barretenberg September 9, 2025 19:01
size_t num_rounds = (num_bits + TABLE_BITS - 1) / TABLE_BITS;

const size_t num_points = scalars.size();
size_t num_rounds = numeric::ceil_div(num_bits, TABLE_BITS);
Contributor

oops, I've done ceil_div by hand in several tests
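For reference, the hand-rolled expression and a ceiling-division helper are equivalent (a sketch; `numeric::ceil_div` is assumed to compute exactly this):

```cpp
#include <cassert>
#include <cstddef>

// Ceiling division: the smallest n such that n * denominator >= numerator.
constexpr size_t ceil_div(size_t numerator, size_t denominator) {
    return (numerator + denominator - 1) / denominator;
}
```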

@@ -90,15 +93,15 @@ straus_lookup_table<Builder>::straus_lookup_table(Builder* context,
point_table[0] = cycle_group<Builder>::from_constant_witness(_context, offset_generator.get_value());
for (size_t i = 1; i < table_size; ++i) {
    std::optional<AffineElement> hint =
Contributor

are you also removing optional<AffineElement> stuff in follow-ups?

Contributor Author
@ledwards2225 ledwards2225 Sep 10, 2025

Optionals are being used incorrectly in a number of places in this module, but this one is actually valid. Optional makes sense when both paths are meaningful, i.e. when the parameter is not provided, take some alternative meaningful path. That's the case here: if we have no precomputed hint, we simply compute it on the fly. In other places (e.g. with the different-sized scalars) there was no viable alternate path to take if the optional was null.
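The distinction in a tiny sketch (illustrative names only, not the real hint API): an optional hint has a meaningful fallback path, whereas the scalar-size optional had none.

```cpp
#include <cassert>
#include <optional>

// When a precomputed hint exists we use it directly; otherwise we take the
// meaningful alternative path and compute the value on the fly. The counter
// stands in for the extra circuit work the fallback would incur.
int add_with_hint(int a, int b, std::optional<int> hint, int& on_the_fly_count) {
    if (hint.has_value()) {
        return *hint;       // cheap path: use the precomputed witness value
    }
    ++on_the_fly_count;     // valid fallback: compute it ourselves
    return a + b;
}
```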

Contributor
@iakovenkos iakovenkos left a comment

lgtm!

@ledwards2225 ledwards2225 merged commit 0275e5f into merge-train/barretenberg Sep 10, 2025
9 checks passed
@ledwards2225 ledwards2225 deleted the lde/cycle-group-2 branch September 10, 2025 16:32
github-merge-queue bot pushed a commit that referenced this pull request Sep 11, 2025
BEGIN_COMMIT_OVERRIDE
fix: Origin Tags edgecase (#16921)
chore: cycle group cleanup #2 (#16876)
chore: civc tidy 3 (#16671)
refactor(bb): optimize batch_mul_with_endomorphism (#16905)
feat: check op queue wires are zero past minicircuit in Translator
(#16858)
feat: Add CPU scaling benchmark script for remote execution (#16918)
fix: Add free witness tag to field constructor (#16827)
fix(bb): darwin build (#16957)
END_COMMIT_OVERRIDE
Umarb97 pushed a commit to Umarb97/aztec-packages that referenced this pull request Sep 16, 2025
mralj pushed a commit that referenced this pull request Oct 13, 2025
ludamad pushed a commit that referenced this pull request Dec 16, 2025