
[SessionManager] Implement off-chain proof params usage #765

Merged · 16 commits · Sep 6, 2024
Changes from 1 commit
feat: Implement off-chain proof params usage
red-0ne committed Aug 27, 2024

Verified — this commit was created on GitHub.com and signed with GitHub’s verified signature. The key has expired.
commit 8a9a8fa31c3df95f5af42a83ce0a24424946395e
40 changes: 36 additions & 4 deletions e2e/tests/0_settlement.feature
@@ -17,14 +17,46 @@ Feature: Tokenomics Namespace
And the "application" account for "app1" is staked
And the service "anvil" registered for application "app1" has a compute units per relay of "1"
# Start servicing
When the supplier "supplier1" has serviced a session with "10" relays for service "anvil" for application "app1"
# The number of relays serviced is set to make the resulting compute units sum
# above the current ProofRequirementThreshold governance parameter so a proof
# is always required.
# TODO_TECHDEBT(#745): Once the SMST is updated with the proper weights,
# using the appropriate compute units per relay, we can then restore the
# previous relay count.
When the supplier "supplier1" has serviced a session with "21" relays for service "anvil" for application "app1"
# Wait for the Claim & Proof lifecycle
And the user should wait for the "proof" module "CreateClaim" Message to be submitted
And the user should wait for the "proof" module "SubmitProof" Message to be submitted
And the user should wait for the "tokenomics" module "ClaimSettled" end block event to be broadcast
And the user should wait for the "tokenomics" module "ClaimSettled" end block event with "THRESHOLD" proof requirement to be broadcast
# Validate the results
Then the account balance of "supplier1" should be "420" uPOKT "more" than before
And the "application" stake of "app1" should be "420" uPOKT "less" than before
Then the account balance of "supplier1" should be "882" uPOKT "more" than before
And the "application" stake of "app1" should be "882" uPOKT "less" than before

Scenario: Emissions equals burn when a claim is created but a proof is not required
# Baseline
Given the user has the pocketd binary installed
# Network preparation
And an account exists for "supplier1"
And the "supplier" account for "supplier1" is staked
And an account exists for "app1"
And the "application" account for "app1" is staked
And the service "anvil" registered for application "app1" has a compute units per relay of "1"
# Set proof_request_probability to 0 and proof_requirement_threshold to 100 to make sure a proof is not required.
And the proof governance parameters are set as follows to not require a proof
| name | value | type |
| relay_difficulty_target_hash | ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff | bytes |
| proof_request_probability | 0 | float |
| proof_requirement_threshold | 100 | int64 |
| proof_missing_penalty | 320 | coin |
# Start servicing
When the supplier "supplier1" has serviced a session with "21" relays for service "anvil" for application "app1"
# Wait for the Claim & Proof lifecycle
And the user should wait for the "proof" module "CreateClaim" Message to be submitted
# We intentionally skip waiting for the proof to be submitted since the event will not be emitted.
And the user should wait for the "tokenomics" module "ClaimSettled" end block event with "NOT_REQUIRED" proof requirement to be broadcast
# Validate the results
Then the account balance of "supplier1" should be "882" uPOKT "more" than before
And the "application" stake of "app1" should be "882" uPOKT "less" than before
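For reference, the settlement amounts asserted in both scenarios follow directly from the relay count: with 1 compute unit per relay and an effective rate of 42 uPOKT per compute unit (inferred from the 10-relay/420-uPOKT values this diff replaces, not quoted from a governance parameter), 21 relays settle for 21 × 42 = 882 uPOKT. A minimal sketch of that arithmetic:

```go
package main

import "fmt"

// computeSettlementAmount mirrors the arithmetic implied by this feature file:
// relays × compute units per relay × uPOKT per compute unit.
// The 42 uPOKT rate is inferred from the 10-relay/420-uPOKT values above,
// not taken from an actual governance parameter.
func computeSettlementAmount(numRelays, computeUnitsPerRelay, upoktPerComputeUnit uint64) uint64 {
	return numRelays * computeUnitsPerRelay * upoktPerComputeUnit
}

func main() {
	fmt.Println(computeSettlementAmount(10, 1, 42)) // previous scenario value: 420
	fmt.Println(computeSettlementAmount(21, 1, 42)) // updated scenario value: 882
}
```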

# TODO_ADDTEST: Implement the following scenarios
# Scenario: Emissions equals burn when a claim is created and a valid proof is submitted but not required
12 changes: 10 additions & 2 deletions e2e/tests/session.feature
@@ -2,12 +2,20 @@ Feature: Session Namespace

Scenario: Supplier completes claim/proof lifecycle for a valid session
Given the user has the pocketd binary installed
When the supplier "supplier1" has serviced a session with "5" relays for service "svc1" for application "app1"
# The number of relays serviced is set to make the resulting compute units sum
# above the current ProofRequirementThreshold governance parameter so a proof
# is always required.
# TODO_TECHDEBT(#745): Once the SMST is updated with the proper weights,
# using the appropriate compute units per relay, we can then restore the
# previous relay count.
When the supplier "supplier1" has serviced a session with "21" relays for service "anvil" for application "app1"
And the user should wait for the "proof" module "CreateClaim" Message to be submitted
And the user should wait for the "proof" module "ClaimCreated" tx event to be broadcast
Then the claim created by supplier "supplier1" for service "svc1" for application "app1" should be persisted on-chain
Contributor
Am I understanding correctly that removing this step reduced flakiness because the claim was being pruned (after settlement) too quickly for the test?

My main concern is that while the ClaimSettled event may be a valid signal that the claim was created (and subsequently settled and pruned), the step removed here attempts to assert that:

  1. The claim is retrievable via the CLI (end-to-end-to-end test)
  2. The retrieved claim satisfies our expectations

Do you see a way we could maintain coverage of those assertions?

Contributor Author
I think this is coming from the assertion we make about the claims count:

	require.Greater(s, len(allClaimsRes.Claims), len(preExistingClaims), "number of claims must have increased")

which in some circumstances (tests repeated too quickly) fails because the previous claim gets removed.

I'll put the step back, even at the cost of some flakiness.

And the user should wait for the "proof" module "SubmitProof" Message to be submitted
And the user should wait for the "proof" module "ProofSubmitted" tx event to be broadcast
# Observing the "ClaimSettled" block event also asserts that the claim was
# present at the moment of settlement. So there is no need to query the claim
# directly.
Then the claim created by supplier "supplier1" for service "anvil" for application "app1" should be successfully settled

# TODO_BLOCKER(@red-0ne): Make sure to implement and validate this test
50 changes: 15 additions & 35 deletions e2e/tests/session_steps_test.go
@@ -10,6 +10,7 @@ import (

abci "github.com/cometbft/cometbft/abci/types"
cosmostypes "github.com/cosmos/cosmos-sdk/types"
"github.com/regen-network/gocuke"
"github.com/stretchr/testify/require"

"github.com/pokt-network/poktroll/pkg/client/block"
@@ -49,49 +50,16 @@ func (s *suite) TheUserShouldWaitForTheModuleTxEventToBeBroadcast(module, eventT
s.waitForTxResultEvent(newEventTypeMatchFn(module, eventType))
}

func (s *suite) TheUserShouldWaitForTheModuleEndBlockEventToBeBroadcast(module, eventType string) {
func (s *suite) TheUserShouldWaitForTheModuleEndBlockEventWithProofRequirementToBeBroadcast(module, eventType, proofRequirement string) {
s.waitForNewBlockEvent(
combineEventMatchFns(
newEventTypeMatchFn(module, eventType),
newEventModeMatchFn("EndBlock"),
newEventAttributeMatchFn("proof_requirement", fmt.Sprintf("%q", proofRequirement)),
),
)
}

// TODO_FLAKY: See how 'TheClaimCreatedBySupplierForServiceForApplicationShouldBeSuccessfullySettled'
// was modified to using an event replay client, instead of a query, to eliminate the flakiness.
func (s *suite) TheClaimCreatedBySupplierForServiceForApplicationShouldBePersistedOnchain(supplierOperatorName, serviceId, appName string) {
ctx := context.Background()

allClaimsRes, err := s.proofQueryClient.AllClaims(ctx, &prooftypes.QueryAllClaimsRequest{
Filter: &prooftypes.QueryAllClaimsRequest_SupplierOperatorAddress{
SupplierOperatorAddress: accNameToAddrMap[supplierOperatorName],
},
})
require.NoError(s, err)
require.NotNil(s, allClaimsRes)

// Assert that the number of claims has increased by one.
preExistingClaims, ok := s.scenarioState[preExistingClaimsKey].([]prooftypes.Claim)
require.Truef(s, ok, "%s not found in scenarioState", preExistingClaimsKey)
// NB: We are avoiding the use of require.Len here because it provides unreadable output
// TODO_TECHDEBT: Due to the speed of the blocks of the LocalNet validator, along with the small number
// of blocks per session, multiple claims may be created throughout the duration of the test. Until
// these values are appropriately adjusted
require.Greater(s, len(allClaimsRes.Claims), len(preExistingClaims), "number of claims must have increased")

// TODO_IMPROVE: assert that the root hash of the claim contains the correct
// SMST sum. The sum can be retrieved by parsing the last 8 bytes as a
// binary-encoded uint64; e.g. something like:
// `binary.Uvarint(claim.RootHash[len(claim.RootHash-8):])`

// TODO_IMPROVE: add assertions about serviceId and appName and/or incorporate
// them into the scenarioState key(s).

claim := allClaimsRes.Claims[0]
require.Equal(s, accNameToAddrMap[supplierOperatorName], claim.SupplierOperatorAddress)
}

func (s *suite) TheSupplierHasServicedASessionWithRelaysForServiceForApplication(supplierOperatorName, numRelaysStr, serviceId, appName string) {
ctx := context.Background()

@@ -151,6 +119,18 @@ func (s *suite) TheClaimCreatedBySupplierForServiceForApplicationShouldBeSuccess
s.waitForNewBlockEvent(isValidClaimSettledEvent)
}

func (suite *suite) TheProofGovernanceParametersAreSetAsFollowsToNotRequireAProof(params gocuke.DataTable) {
suite.AnAuthzGrantFromTheAccountToTheAccountForTheMessageExists(
"gov",
"module",
"pnf",
"user",
"/poktroll.proof.MsgUpdateParams",
)

suite.TheAccountSendsAnAuthzExecMessageToUpdateAllModuleParams("pnf", "proof", params)
}

func (s *suite) sendRelaysForSession(
appName string,
supplierOperatorName string,
106 changes: 103 additions & 3 deletions pkg/relayer/session/claim.go
@@ -5,6 +5,7 @@ import (
"fmt"

"github.com/pokt-network/poktroll/pkg/client"
poktrand "github.com/pokt-network/poktroll/pkg/crypto/rand"
"github.com/pokt-network/poktroll/pkg/either"
"github.com/pokt-network/poktroll/pkg/observable"
"github.com/pokt-network/poktroll/pkg/observable/channel"
@@ -180,9 +181,26 @@ func (rs *relayerSessionsManager) newMapClaimSessionsFn(
return either.Success(sessionTrees), false
}

// Map key is the supplier operator address.
claimMsgs := make([]client.MsgCreateClaim, 0)
claimMsgs := make([]client.MsgCreateClaim, 0, len(sessionTrees))
// sessionTreesWithProofRequired will accumulate all the sessionTrees that
// will require a proof to be submitted.
sessionTreesWithProofRequired := make([]relayer.SessionTree, 0)
for _, sessionTree := range sessionTrees {
isProofRequired, err := rs.isProofRequired(ctx, sessionTree)

// If an error is encountered while determining if a proof is required,
// do not create the claim since the proof requirement is unknown.
if err != nil {
failedCreateClaimsSessionsPublishCh <- sessionTrees
rs.logger.Error().Err(err).Msg("failed to determine if proof is required")
continue
}

// If a proof is required, add the session to the list of sessions that require a proof.
if isProofRequired {
sessionTreesWithProofRequired = append(sessionTreesWithProofRequired, sessionTree)
}

claimMsgs = append(claimMsgs, &prooftypes.MsgCreateClaim{
RootHash: sessionTree.GetClaimRoot(),
SessionHeader: sessionTree.GetSessionHeader(),
@@ -197,7 +215,14 @@ func (rs *relayerSessionsManager) newMapClaimSessionsFn(
return either.Error[[]relayer.SessionTree](err), false
}

return either.Success(sessionTrees), false
// Skip the proof submission logic if none of the sessionTrees require a proof.
if len(sessionTreesWithProofRequired) == 0 {
return either.Success(sessionTreesWithProofRequired), true
}

// Only return the sessionTrees that require a proof so that they can be further
// processed by the proof submission logic.
return either.Success(sessionTreesWithProofRequired), false
}
}

@@ -231,3 +256,78 @@ func (rs *relayerSessionsManager) goCreateClaimRoots(
failSubmitProofsSessionsCh <- failedClaims
claimsFlushedCh <- flushedClaims
}

// isProofRequired determines whether a proof is required for the given session's
// claim based on the proof governance parameters.
func (rs *relayerSessionsManager) isProofRequired(
ctx context.Context,
sessionTree relayer.SessionTree,
) (isProofRequired bool, err error) {
logger := rs.logger.With(
"session_id", sessionTree.GetSessionHeader().GetSessionId(),
"claim_root", fmt.Sprintf("%x", sessionTree.GetClaimRoot()),
"supplier_operator_address", sessionTree.GetSupplierOperatorAddress().String(),
)

// Create the claim object and use its methods to determine if a proof is required.
claim := prooftypes.Claim{
SupplierOperatorAddress: sessionTree.GetSupplierOperatorAddress().String(),
SessionHeader: sessionTree.GetSessionHeader(),
RootHash: sessionTree.GetClaimRoot(),
}

// Get the number of compute units accumulated through the given session.
numClaimComputeUnits, err := claim.GetNumComputeUnits()
if err != nil {
return false, err
}

proofParams, err := rs.proofQueryClient.GetParams(ctx)
if err != nil {
return false, err
}

logger = logger.With(
"num_claim_compute_units", numClaimComputeUnits,
"proof_requirement_threshold", proofParams.GetProofRequirementThreshold(),
)

// Require a proof if the claim's compute units meet or exceed the threshold.
if numClaimComputeUnits >= proofParams.GetProofRequirementThreshold() {
logger.Info().Msg("compute units is above threshold, claim requires proof")

return true, nil
}

// Get the hash of the claim to seed the random number generator.
var claimHash []byte
claimHash, err = claim.GetHash()
if err != nil {
return false, err
}

// Sample a pseudo-random value between 0 and 1 to determine if a proof is required probabilistically.
var randFloat float32
randFloat, err = poktrand.SeededFloat32(claimHash[:])
if err != nil {
return false, err
}

logger = logger.With(
"claim_hash", fmt.Sprintf("%x", claimHash),
"rand_float", randFloat,
"proof_request_probability", proofParams.GetProofRequestProbability(),
)

// Require a proof probabilistically based on the proof_request_probability param.
// NB: A random value between 0 and 1 will be less than or equal to proof_request_probability
// with probability equal to the proof_request_probability.
if randFloat <= proofParams.GetProofRequestProbability() {
logger.Info().Msg("claim hash seed is below proof request probability, claim requires proof")

return true, nil
}

logger.Info().Msg("claim does not require proof")
return false, nil
}
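The probabilistic branch above hinges on deriving a deterministic value in [0, 1] from the claim hash, so that the supplier and any verifier reach the same proof-requirement decision for a given claim. A standalone sketch of that idea, using SHA-256 and the first 8 bytes of the digest as the seed (an illustrative stand-in — poktroll's `poktrand.SeededFloat32` may differ in detail):

```go
package main

import (
	"crypto/sha256"
	"encoding/binary"
	"fmt"
)

// seededFloat derives a deterministic value in [0, 1] from arbitrary seed
// bytes. This is an illustrative stand-in for poktrand.SeededFloat32; the
// real implementation may use a different construction.
func seededFloat(seed []byte) float64 {
	digest := sha256.Sum256(seed)
	n := binary.BigEndian.Uint64(digest[:8])
	return float64(n) / float64(^uint64(0))
}

// isProofRequired mirrors the two-step decision above: always require a proof
// at or above the compute-unit threshold; otherwise require one
// probabilistically, seeded by the claim hash so the outcome is deterministic
// per claim.
func isProofRequired(numComputeUnits, threshold uint64, requestProbability float64, claimHash []byte) bool {
	if numComputeUnits >= threshold {
		return true
	}
	return seededFloat(claimHash) <= requestProbability
}

func main() {
	claimHash := sha256.Sum256([]byte("example-claim"))
	// Below threshold: probabilistic, but the same claim always decides the same way.
	fmt.Println(isProofRequired(20, 21, 0.25, claimHash[:]))
	// At or above threshold: always required.
	fmt.Println(isProofRequired(21, 21, 0.25, claimHash[:]))
}
```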
5 changes: 5 additions & 0 deletions pkg/relayer/session/session.go
@@ -54,13 +54,17 @@ type relayerSessionsManager struct {

// sharedQueryClient is used to query shared module parameters.
sharedQueryClient client.SharedQueryClient
// proofQueryClient is used to query for the proof requirement threshold and
// requirement probability to determine whether a submitted claim requires a proof.
proofQueryClient client.ProofQueryClient
}

// NewRelayerSessions creates a new relayerSessions.
//
// Required dependencies:
// - client.BlockClient
// - client.SupplierClientMap
// - client.ProofQueryClient
//
// Available options:
// - WithStoresDirectory
@@ -81,6 +85,7 @@ func NewRelayerSessions(
&rs.blockClient,
&rs.supplierClients,
&rs.sharedQueryClient,
&rs.proofQueryClient,
); err != nil {
return nil, err
}