Fix docs (easy) (#291)

* Improve rustdoc

- Updated references in documentation comments, changing from Kotlin-style to rustdoc style.
- Corrected bracket-only references to use backticks for proper Rust code referencing in the comments in the `lib.rs` and `r1cs/mod.rs` files.
- The changes made were purely stylistic and cosmetic. No modifications were made to the actual code logic or implementation.

* doc: fix Rustdoc

- Updated hyperlink format in `HyperKZG` module documentation.
- CI should be testing this when rust-lang/rust#56232 resolves.
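
For readers unfamiliar with the two link styles, the sketch below (illustrative only, not part of the commit; the struct and method are hypothetical stand-ins) contrasts the plain-bracket form being replaced with the backticked intra-doc links and angle-bracketed URLs that rustdoc expects.

```rust
/// A stand-in type used only to illustrate the doc-link styles touched by this commit.
pub struct R1CSWithArity;

impl R1CSWithArity {
    /// Old style: a reference written as [R1CSWithArity] (no backticks) still resolves
    /// as an intra-doc link, but it renders without code styling, and a bare URL such as
    /// https://eprint.iacr.org/2022/420.pdf trips the `rustdoc::bare_urls` lint.
    ///
    /// New style: [`R1CSWithArity`] is a code-styled intra-doc link, and
    /// <https://eprint.iacr.org/2022/420.pdf> is rendered as a clickable autolink.
    pub fn digest(&self) -> u64 {
        0 // placeholder body; only the doc comments above matter for this example
    }
}
```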
huitseeker authored Feb 1, 2024
1 parent 28a4395 commit 22616f5
Showing 12 changed files with 30 additions and 30 deletions.
4 changes: 2 additions & 2 deletions src/lib.rs
@@ -80,7 +80,7 @@ impl<E: Engine> R1CSWithArity<E> {
}
}

/// Return the [R1CSWithArity]' digest.
/// Return the [`R1CSWithArity`]' digest.
pub fn digest(&self) -> E::Scalar {
let dc: DigestComputer<'_, <E as Engine>::Scalar, Self> = DigestComputer::new(self);
dc.digest().expect("Failure in computing digest")
@@ -1038,7 +1038,7 @@ where
}
}

/// Compute the circuit digest of a [StepCircuit].
/// Compute the circuit digest of a [`StepCircuit`].
///
/// Note for callers: This function should be called with its performance characteristics in mind.
/// It will synthesize and digest the full `circuit` given.
2 changes: 1 addition & 1 deletion src/provider/hyperkzg.rs
@@ -1,5 +1,5 @@
//! This module implements Nova's evaluation engine using `HyperKZG`, a KZG-based polynomial commitment for multilinear polynomials
//! HyperKZG is based on the transformation from univariate PCS to multilinear PCS in the Gemini paper (section 2.4.2 in https://eprint.iacr.org/2022/420.pdf).
//! HyperKZG is based on the transformation from univariate PCS to multilinear PCS in the Gemini paper (section 2.4.2 in `<https://eprint.iacr.org/2022/420.pdf>`).
//! However, there are some key differences:
//! (1) HyperKZG works with multilinear polynomials represented in evaluation form (rather than in coefficient form in Gemini's transformation).
//! This means that Spartan's polynomial IOP can commit to its polynomials as-is without incurring any interpolations or FFTs.
2 changes: 1 addition & 1 deletion src/provider/non_hiding_kzg.rs
@@ -227,7 +227,7 @@ pub type UVKZGPoly<F> = crate::spartan::polys::univariate::UniPoly<F>;
#[derive(Debug, Eq, PartialEq, Default)]
/// KZG Polynomial Commitment Scheme on univariate polynomial.
/// Note: this is non-hiding, which is why we will implement traits on this token struct,
/// as we expect to have several impls for the trait pegged on the same instance of a pairing::Engine.
/// as we expect to have several impls for the trait pegged on the same instance of a `pairing::Engine`.
#[allow(clippy::upper_case_acronyms)]
pub struct UVKZGPCS<E> {
#[doc(hidden)]
6 changes: 3 additions & 3 deletions src/provider/non_hiding_zeromorph.rs
@@ -128,8 +128,8 @@ pub struct ZMProof<E: Engine> {

#[derive(Debug, Clone, Eq, PartialEq, Default)]
/// Zeromorph Polynomial Commitment Scheme on multilinear polynomials.
/// Note: this is non-hiding, which is why we will implement the EvaluationEngineTrait on this token struct,
/// as we will have several impls for the trait pegged on the same instance of a pairing::Engine.
/// Note: this is non-hiding, which is why we will implement the `EvaluationEngineTrait` on this token struct,
/// as we will have several impls for the trait pegged on the same instance of a `pairing::Engine`.
#[allow(clippy::upper_case_acronyms)]
pub struct ZMPCS<E, NE> {
#[doc(hidden)]
@@ -314,7 +314,7 @@ where
///
/// where `poly(point)` is the evaluation of `poly` at `point`, and each `q_k` is a polynomial in `k` variables.
///
/// Since our evaluations are presented in order reverse from the coefficients, if we want to interpret index q_k
/// Since our evaluations are presented in order reverse from the coefficients, if we want to interpret index `q_k`
/// to be the k-th coefficient in the polynomials returned here, the equality that holds is:
///
/// ```text
2 changes: 1 addition & 1 deletion src/provider/pedersen.rs
@@ -30,7 +30,7 @@ where
ck: Vec<<E::GE as PrimeCurve>::Affine>,
}

/// [CommitmentKey]s are often large, and this helps with cloning bottlenecks
/// [`CommitmentKey`]s are often large, and this helps with cloning bottlenecks
impl<E> Clone for CommitmentKey<E>
where
E: Engine,
4 changes: 2 additions & 2 deletions src/r1cs/mod.rs
@@ -160,7 +160,7 @@ impl<E: Engine> R1CSShape<E> {
})
}

/// Generate a random [R1CSShape] with the specified number of constraints, variables, and public inputs/outputs.
/// Generate a random [`R1CSShape`] with the specified number of constraints, variables, and public inputs/outputs.
pub fn random<R: RngCore + CryptoRng>(
num_cons: usize,
num_vars: usize,
@@ -197,7 +197,7 @@ impl<E: Engine> R1CSShape<E> {
}
}

/// Generate a satisfying [RelaxedR1CSWitness] and [RelaxedR1CSInstance] for this [R1CSShape].
/// Generate a satisfying [`RelaxedR1CSWitness`] and [`RelaxedR1CSInstance`] for this [`R1CSShape`].
pub fn random_witness_instance<R: RngCore + CryptoRng>(
&self,
commitment_key: &CommitmentKey<E>,
2 changes: 1 addition & 1 deletion src/r1cs/sparse.rs
@@ -31,7 +31,7 @@ pub struct SparseMatrix<F: PrimeField> {
pub cols: usize,
}

/// [SparseMatrix]s are often large, and this helps with cloning bottlenecks
/// [`SparseMatrix`]s are often large, and this helps with cloning bottlenecks
impl<F: PrimeField> Clone for SparseMatrix<F> {
fn clone(&self) -> Self {
Self {
8 changes: 4 additions & 4 deletions src/spartan/sumcheck/engine.rs
@@ -31,7 +31,7 @@ pub trait SumcheckEngine<E: Engine>: Send + Sync {
fn final_claims(&self) -> Vec<Vec<E::Scalar>>;
}

/// The [WitnessBoundSumcheck] ensures that the witness polynomial W defined over n = log(N) variables,
/// The [`WitnessBoundSumcheck`] ensures that the witness polynomial W defined over n = log(N) variables,
/// is zero outside of the first `num_vars = 2^m` entries.
///
/// # Details
@@ -132,7 +132,7 @@ pub(in crate::spartan) struct MemorySumcheckInstance<E: Engine> {
}

impl<E: Engine> MemorySumcheckInstance<E> {
/// Computes witnesses for MemoryInstanceSumcheck
/// Computes witnesses for `MemoryInstanceSumcheck`
///
/// # Description
/// We use the logUp protocol to prove that
@@ -147,8 +147,8 @@ impl<E: Engine> MemorySumcheckInstance<E> {
/// W_col[i] = addr_col[i] * gamma + addr_col[i]
/// = z[col[i]] * gamma + addr_col[i]
/// and
/// TS_row, TS_col are integer-valued vectors representing the number of reads
/// to each memory cell of L_row, L_col
/// `TS_row`, `TS_col` are integer-valued vectors representing the number of reads
/// to each memory cell of `L_row`, `L_col`
///
/// The function returns oracles for the polynomials TS[i]/(T[i] + r), 1/(W[i] + r),
/// as well as auxiliary polynomials T[i] + r, W[i] + r
24 changes: 12 additions & 12 deletions src/supernova/mod.rs
@@ -67,19 +67,19 @@ impl<E: Engine> std::ops::Deref for CircuitDigests<E> {
}

impl<E: Engine> CircuitDigests<E> {
/// Construct a new [CircuitDigests]
/// Construct a new [`CircuitDigests`]
pub fn new(digests: Vec<E::Scalar>) -> Self {
Self { digests }
}

/// Return the [CircuitDigests]' digest.
/// Return the [`CircuitDigests`]' digest.
pub fn digest(&self) -> E::Scalar {
let dc: DigestComputer<'_, <E as Engine>::Scalar, Self> = DigestComputer::new(self);
dc.digest().expect("Failure in computing digest")
}
}

/// A vector of [R1CSWithArity] adjoined to a set of [PublicParams]
/// A vector of [`R1CSWithArity`] adjoined to a set of [`PublicParams`]
#[derive(Debug, Serialize, Deserialize)]
#[serde(bound = "")]
pub struct PublicParams<E1, E2, C1, C2>
@@ -109,9 +109,9 @@ where
_p: PhantomData<(C1, C2)>,
}

/// Auxiliary [PublicParams] information about the commitment keys and
/// Auxiliary [`PublicParams`] information about the commitment keys and
/// secondary circuit. This is used as a helper struct when reconstructing
/// [PublicParams] downstream in lurk.
/// [`PublicParams`] downstream in lurk.
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
#[serde(bound = "")]
pub struct AuxParams<E1, E2>
@@ -244,7 +244,7 @@ where
C1: StepCircuit<E1::Scalar>,
C2: StepCircuit<E2::Scalar>,
{
/// Construct a new [PublicParams]
/// Construct a new [`PublicParams`]
///
/// # Note
///
@@ -344,7 +344,7 @@ where
pp
}

/// Breaks down an instance of [PublicParams] into the circuit params and auxiliary params.
/// Breaks down an instance of [`PublicParams`] into the circuit params and auxiliary params.
pub fn into_parts(self) -> (Vec<R1CSWithArity<E1>>, AuxParams<E1, E2>) {
let digest = self.digest();

@@ -379,7 +379,7 @@ where
(circuit_shapes, aux_params)
}

/// Create a [PublicParams] from a vector of raw [R1CSWithArity] and auxiliary params.
/// Create a [`PublicParams`] from a vector of raw [`R1CSWithArity`] and auxiliary params.
pub fn from_parts(circuit_shapes: Vec<R1CSWithArity<E1>>, aux_params: AuxParams<E1, E2>) -> Self {
let pp = Self {
circuit_shapes,
@@ -403,7 +403,7 @@ where
pp
}

/// Create a [PublicParams] from a vector of raw [R1CSWithArity] and auxiliary params.
/// Create a [`PublicParams`] from a vector of raw [`R1CSWithArity`] and auxiliary params.
/// We don't check that the `aux_params.digest` is a valid digest for the created params.
pub fn from_parts_unchecked(
circuit_shapes: Vec<R1CSWithArity<E1>>,
@@ -440,7 +440,7 @@ where
E1::CE::setup(b"ck", size_primary)
}

/// Return the [PublicParams]' digest.
/// Return the [`PublicParams`]' digest.
pub fn digest(&self) -> E1::Scalar {
self
.digest
@@ -452,7 +452,7 @@ where
.expect("Failure in retrieving digest")
}

/// All of the primary circuit digests of this [PublicParams]
/// All of the primary circuit digests of this [`PublicParams`]
pub fn circuit_param_digests(&self) -> CircuitDigests<E1> {
let digests = self
.circuit_shapes
@@ -1178,7 +1178,7 @@ where
{
}

/// Compute the circuit digest of a supernova [StepCircuit].
/// Compute the circuit digest of a supernova [`StepCircuit`].
///
/// Note for callers: This function should be called with its performance characteristics in mind.
/// It will synthesize and digest the full `circuit` given.
2 changes: 1 addition & 1 deletion src/traits/evaluation.rs
@@ -28,7 +28,7 @@ pub trait EvaluationEngineTrait<E: Engine>: Clone + Send + Sync {
/// A method to perform any additional setup needed to produce proofs of evaluations
///
/// **Note:** This method should be cheap and should not copy most of the
/// commitment key. Look at CommitmentEngineTrait::setup for generating SRS data.
/// commitment key. Look at `CommitmentEngineTrait::setup` for generating SRS data.
fn setup(
ck: Arc<<<E as Engine>::CE as CommitmentEngineTrait<E>>::CommitmentKey>,
) -> (Self::ProverKey, Self::VerifierKey);
2 changes: 1 addition & 1 deletion src/traits/mod.rs
@@ -117,7 +117,7 @@ pub trait ROCircuitTrait<Base: PrimeField> {
) -> Result<Vec<AllocatedBit>, SynthesisError>;
}

/// An alias for constants associated with E::RO
/// An alias for constants associated with `E::RO`
pub type ROConstants<E> =
<<E as Engine>::RO as ROTrait<<E as Engine>::Base, <E as Engine>::Scalar>>::Constants;

2 changes: 1 addition & 1 deletion src/traits/snark.rs
@@ -77,7 +77,7 @@ pub trait BatchedRelaxedR1CSSNARKTrait<E: Engine>:
/// Produces the keys for the prover and the verifier
///
/// **Note:** This method should be cheap and should not copy most of the
/// commitment key. Look at CommitmentEngineTrait::setup for generating SRS data.
/// commitment key. Look at `CommitmentEngineTrait::setup` for generating SRS data.
fn setup(
ck: Arc<CommitmentKey<E>>,
S: Vec<&R1CSShape<E>>,

1 comment on commit 22616f5

@github-actions
Contributor

Benchmarks

Table of Contents

Overview

This benchmark report shows the Arecibo GPU benchmarks.
NVIDIA L4
Intel(R) Xeon(R) CPU @ 2.20GHz
32 vCPUs
125 GB RAM
Workflow run: https://github.com/lurk-lab/arecibo/actions/runs/7748806075

Benchmark Results

RecursiveSNARK-NIVC-2

|                        | ref=28a4395          | ref=22616f5                |
|:-----------------------|:---------------------|:---------------------------|
| Prove-NumCons-6540     | 52.81 ms (✅ 1.00x)   | 52.76 ms (✅ 1.00x faster)  |
| Verify-NumCons-6540    | 32.99 ms (✅ 1.00x)   | 33.20 ms (✅ 1.01x slower)  |
| Prove-NumCons-1028888  | 324.48 ms (✅ 1.00x)  | 343.01 ms (✅ 1.06x slower) |
| Verify-NumCons-1028888 | 233.75 ms (✅ 1.00x)  | 256.19 ms (✅ 1.10x slower) |

CompressedSNARK-NIVC-Commitments-2

|                        | ref=28a4395          | ref=22616f5                |
|:-----------------------|:---------------------|:---------------------------|
| Prove-NumCons-6540     | 14.07 s (✅ 1.00x)    | 13.91 s (✅ 1.01x faster)   |
| Verify-NumCons-6540    | 78.72 ms (✅ 1.00x)   | 78.59 ms (✅ 1.00x faster)  |
| Prove-NumCons-1028888  | 111.61 s (✅ 1.00x)   | 111.36 s (✅ 1.00x faster)  |
| Verify-NumCons-1028888 | 774.49 ms (✅ 1.00x)  | 775.31 ms (✅ 1.00x slower) |

Made with criterion-table
