
Conversation

@vahidlazio (Contributor)

No description provided.

response, err := instance.Resolve(request)

// If instance is closed, retry with the current instance (which may have been swapped)
if err != nil && errors.Is(err, ErrInstanceClosed) {
Contributor:

Should this rather be a while loop?

Member:

For how long?

Contributor (author):

It only happens once, since the instance had to be swapped for this to happen.

Contributor:

Yes. It's very rare, but technically it could happen more than once if for some reason we flush more frequently.

Member:

We won't swap the instance very often, so it's unlikely we end up in a loop of tries on unloaded instances. I think we can keep this code as is
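
For reference, a hedged sketch of the bounded-retry variant discussed in this thread; the stub types stand in for the PR's real ones, and holding the instance in an `atomic.Value` follows the swap snippet below:

```go
// Sketch only: stub types stand in for the PR's real ones.
package resolver

import (
	"errors"
	"sync/atomic"
)

type (
	ResolveRequest  struct{}
	ResolveResponse struct{}
	ResolverApi     struct{}
)

var ErrInstanceClosed = errors.New("resolver instance closed")

func (r *ResolverApi) Resolve(req *ResolveRequest) (*ResolveResponse, error) {
	return &ResolveResponse{}, nil
}

type SwapResolver struct {
	currentInstance atomic.Value // holds *ResolverApi
}

// Resolve retries on ErrInstanceClosed with a small bound: a swap that lands
// mid-flight is retried against the new instance, without risking an
// unbounded loop if swaps somehow happen back to back.
func (s *SwapResolver) Resolve(request *ResolveRequest) (*ResolveResponse, error) {
	const maxAttempts = 3
	var lastErr error
	for attempt := 0; attempt < maxAttempts; attempt++ {
		instance := s.currentInstance.Load().(*ResolverApi)
		response, err := instance.Resolve(request)
		if err == nil || !errors.Is(err, ErrInstanceClosed) {
			return response, err
		}
		lastErr = err // instance was swapped; next attempt picks up the new one
	}
	return nil, lastErr
}
```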

oldInstance := s.currentInstance.Swap(newInstance).(*ResolverApi)

// Close the old instance
oldInstance.Close(ctx)
Contributor:

There might be concurrent resolves while we close, so close can't call into the wasm module.

Contributor (author):

We have a lock in the resolver API. There might be multiple calls to resolve, but if it's closing, they all return the isClosing error and a new instance is acquired.

Contributor (author):

The flush is also guarded by the same lock, so it's safe.
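
A minimal sketch of the locking being described here, assuming the `mu` and `isClosing` fields shown in the FlushLogs snippet below and a hypothetical `resolveInWasm` helper:

```go
// Resolve takes the same mutex as FlushLogs, so once isClosing is set no
// call can re-enter the wasm module; callers get ErrInstanceClosed and
// re-acquire the current (possibly swapped) instance.
func (r *ResolverApi) Resolve(request *ResolveRequest) (*ResolveResponse, error) {
	r.mu.Lock()
	defer r.mu.Unlock()
	if r.isClosing {
		return nil, ErrInstanceClosed
	}
	return r.resolveInWasm(request) // hypothetical wasm entry point
}
```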

func (r *ResolverApi) FlushLogs() error {
	r.mu.Lock()
	// Mark instance as closing to prevent new resolves
	r.isClosing = true
Contributor:

Seems odd that we close on FlushLogs?

Contributor (author):

No, why? We do the same in Java: every time we close, we flush logs and create a new instance.

@nicklasl (Member) left a comment:

Not really done here, but leaving some comments to start.
Also: this needs a full suite of unit tests.

Comment on lines 40 to 42
# WASM file path (default: ../../../bin/resolver.wasm)
export CONFIDENCE_WASM_FILE_PATH="/path/to/resolver.wasm"

Member:

Not used.

Suggested change
# WASM file path (default: ../../../bin/resolver.wasm)
export CONFIDENCE_WASM_FILE_PATH="/path/to/resolver.wasm"


// Do initial state fetch to get initial state for SwapWasmResolverApi
if err := stateFetcher.Reload(ctx); err != nil {
	log.Printf("Initial state fetch failed, using empty state: %v", err)
Member:

Should this return an error?

// Get account name from token
token, err := tokenHolder.GetToken(ctx)
if err != nil {
	log.Printf("Warning: failed to get initial token, account name will be unknown: %v", err)
Member:

Shouldn't this also be a big problem?

@vahidlazio and others added 15 commits on November 4, 2025:
  Embed the Confidence resolver WASM module directly into the Go provider
  binary using Go 1.16+ embed directive. This eliminates runtime file
  dependencies and ensures version consistency between the Go code and
  WASM module.

  ## Changes

  **Embedding mechanism:**
  - Add `wasm_data.go` with `//go:embed wasm/confidence_resolver.wasm`
  - WASM file committed to `openfeature-provider/go/wasm/` as source
  - Update `provider_builder.go` to use `defaultWasmBytes` instead of
    `os.ReadFile()`
  - Remove `os` import (no longer needed)

  **Version safety with Docker validation:**
  - Add `openfeature-provider-go.validate-wasm` Docker stage that compares
    built WASM (from artifact) with committed WASM (from git)
  - Build fails if files don't match, preventing version drift
  - Validation runs automatically in CI as part of 'all' target

  **Developer workflow:**
  - Add `make sync-wasm-go` target that:
    1. Builds WASM using Docker (ensures correct toolchain/dependencies)
    2. Extracts artifact to local directory
    3. Copies to `openfeature-provider/go/wasm/`
    4. Prompts developer to commit
  - Update `.gitignore` to track embedded WASM while ignoring build artifact

  **User experience:**
  - SDK users: `go get` just works, WASM embedded in module
  - No manual WASM file management required
  - Version consistency guaranteed by CI

  **Backwards compatibility:**
  - `ProviderConfigWithStateProvider.WasmBytes` still supported for
    testing/advanced use cases
  - Falls back to embedded WASM if not provided
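
A minimal sketch of the embedding mechanism this commit describes; the package name is an assumption, while the file name, directive, and variable name are taken from the commit message:

```go
// wasm_data.go — sketch; the package name is hypothetical.
package confidence

import _ "embed" // required for the go:embed directive below

// defaultWasmBytes holds the resolver module compiled into the provider
// binary at build time, so no WASM file needs to exist at runtime.
//
//go:embed wasm/confidence_resolver.wasm
var defaultWasmBytes []byte
```
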
@fabriziodemaria (Member) left a comment:

Feel free to disregard the nit comments if you don't think they are worth doing now.

Comment on lines +40 to +55
// NewLocalResolverFactory creates a new LocalResolverFactory with gRPC clients and WASM bytes
// If customStateProvider is not nil, it will be used; otherwise creates a FlagsAdminStateFetcher
// Exposure logging is automatically enabled with gRPC services, disabled with custom StateProvider
// accountId is required when using customStateProvider, otherwise it's extracted from the token
func NewLocalResolverFactory(
	ctx context.Context,
	runtime wazero.Runtime,
	wasmBytes []byte,
	resolverStateServiceAddr string,
	flagLoggerServiceAddr string,
	authServiceAddr string,
	apiClientID string,
	apiClientSecret string,
	customStateProvider StateProvider,
	accountId string,
) (*LocalResolverFactory, error) {
Member:

nit: We could create two separate constructors for gRPC-based vs. custom state provider, to make it a bit easier to read and prevent errors

Member:

On second thought, we probably can avoid exposing the ResolverFactory to the integrator (see comment below): in that case I don't mind keeping these constructors as they are
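
If the split were done, a sketch could simply delegate to the existing constructor; the two wrapper names below are hypothetical:

```go
// gRPC-based variant: exposure logging on, account id derived from the token.
func NewGRPCLocalResolverFactory(
	ctx context.Context,
	runtime wazero.Runtime,
	wasmBytes []byte,
	resolverStateServiceAddr, flagLoggerServiceAddr, authServiceAddr string,
	apiClientID, apiClientSecret string,
) (*LocalResolverFactory, error) {
	return NewLocalResolverFactory(ctx, runtime, wasmBytes,
		resolverStateServiceAddr, flagLoggerServiceAddr, authServiceAddr,
		apiClientID, apiClientSecret, nil, "")
}

// Custom-state variant: exposure logging off, accountID must be supplied.
func NewCustomStateLocalResolverFactory(
	ctx context.Context,
	runtime wazero.Runtime,
	wasmBytes []byte,
	stateProvider StateProvider,
	accountID string,
) (*LocalResolverFactory, error) {
	return NewLocalResolverFactory(ctx, runtime, wasmBytes,
		"", "", "", "", "", stateProvider, accountID)
}
```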

func NewLocalResolverFactory(
	ctx context.Context,
	runtime wazero.Runtime,
	wasmBytes []byte,
Member:

nit: let's rename this to ConfidenceResolverWasm or something like that?

if err != nil {
	log.Printf("Initial state fetch failed, using empty state: %v", err)
	// TODO should we return an error here?
	// return nil, fmt.Errorf("failed to get initial state: %w", err)
Member:

I think we should return an error here: missing the initial state would mean resolving to default values, which is worse than "stale" values if a later fetch fails.

Member:

I would like to have tests to verify that throwing here does not escape through OpenFeature as an exception/error.

Member:

> I would like to have tests to verify that throwing here does not escape through OpenFeature as an exception/error.

Good point. The OF strategy is to never throw, so I'm not sure what we gain by returning an error here; thinking in OF terms, I suppose we should set the error code PROVIDER_NOT_READY on all evaluations, but I don't think this is a blocker for getting this PR merged. We can improve the error codes in a future PR.

In other words, I think we can leave this as is for now
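
A sketch of what that could look like with the OpenFeature Go SDK's resolution-detail helpers; `FlagProvider` and its `ready` flag are hypothetical, the point being that a failed initial state fetch surfaces as PROVIDER_NOT_READY rather than escaping as a Go error:

```go
package confidence

import (
	"context"

	"github.com/open-feature/go-sdk/openfeature"
)

// FlagProvider and its ready flag are hypothetical stand-ins.
type FlagProvider struct {
	ready bool
}

func (p *FlagProvider) BooleanEvaluation(
	ctx context.Context, flag string, defaultValue bool, evalCtx openfeature.FlattenedContext,
) openfeature.BoolResolutionDetail {
	if !p.ready {
		// Surface the problem through the resolution detail, never a panic
		// or escaping error, in line with the OF strategy described above.
		return openfeature.BoolResolutionDetail{
			Value: defaultValue,
			ProviderResolutionDetail: openfeature.ProviderResolutionDetail{
				ResolutionError: openfeature.NewProviderNotReadyResolutionError("initial state fetch failed"),
				Reason:          openfeature.ErrorReason,
			},
		}
	}
	// ...normal resolve path elided...
	return openfeature.BoolResolutionDetail{Value: defaultValue}
}
```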

return err
}

f.accountID = response.Account
Member:

We have unsynchronized access to f.accountID for read and write, which could cause race conditions. Not sure how dangerous they might be, but perhaps we can avoid this line altogether as we don't expect the account id to change between two fetches within the same session?
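
One way to make that access race-free, sketched with a mutex and set-once semantics; the method names are hypothetical:

```go
package confidence

import "sync"

// LocalResolverFactory guards accountID with a mutex and only sets it once,
// since it isn't expected to change within a session.
type LocalResolverFactory struct {
	mu        sync.Mutex
	accountID string
}

func (f *LocalResolverFactory) setAccountID(id string) {
	f.mu.Lock()
	defer f.mu.Unlock()
	if f.accountID == "" {
		f.accountID = id // first writer wins; later fetches don't overwrite
	}
}

func (f *LocalResolverFactory) getAccountID() string {
	f.mu.Lock()
	defer f.mu.Unlock()
	return f.accountID
}
```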

if _, err := stub.WriteFlagLogs(rpcCtx, request); err != nil {
	log.Printf("Failed to write flag logs: %v", err)
} else {
	log.Printf("Successfully sent flag log with %d entries", len(request.FlagAssigned))
Member:

nit: we might need to make the logging levels configurable to not "spam" the integrator's console
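
A sketch of one option: a small pluggable logger interface (hypothetical) so the WriteFlagLogs call sites above could demote success messages to debug level while keeping failures visible:

```go
package confidence

// Logger is a hypothetical interface the integrator could supply.
type Logger interface {
	Debugf(format string, args ...any)
	Errorf(format string, args ...any)
}

// NopLogger discards everything; a quiet default for the console.
type NopLogger struct{}

func (NopLogger) Debugf(string, ...any) {}
func (NopLogger) Errorf(string, ...any) {}
```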

go func() {
	defer logger.wg.Done()
	// Create a context with timeout for the RPC
	rpcCtx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
Member:

Probably just a nit: to my understanding it is usually better to use ctx rather than creating a new detached background context here. AI suggests that, if we really want a detached context, a slightly better way is `detachedCtx := context.WithoutCancel(ctx)` (but I haven't tested this).
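
A sketch of that pattern, using context.WithoutCancel from the standard library (Go 1.21+); the helper name is hypothetical:

```go
package confidence

import (
	"context"
	"time"
)

// sendDetached derives the RPC context from the caller's ctx via
// context.WithoutCancel, so it keeps ctx's values but survives the caller's
// cancellation; the 30s timeout still bounds the call.
func sendDetached(ctx context.Context, send func(context.Context) error) error {
	rpcCtx, cancel := context.WithTimeout(context.WithoutCancel(ctx), 30*time.Second)
	defer cancel()
	return send(rpcCtx)
}
```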

Disable provenance attestations and set SOURCE_DATE_EPOCH to fix
non-deterministic builds in docker/build-push-action@v6.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
Pin protoc and protobuf-dev to version 29.4-r0 in all Alpine
stages to ensure consistent proto code generation across
different build environments.

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <[email protected]>
Comment on lines +356 to +361
case int:
	return structpb.NewNumberValue(float64(v)), nil
case int64:
	return structpb.NewNumberValue(float64(v)), nil
case float64:
	return structpb.NewNumberValue(v), nil
Member:

Should we add other common types like int32, float32 (and maybe uint?)
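
A sketch of the widened switch; the function name is hypothetical, and one caveat is that float64 represents integers exactly only up to 2^53, so very large int64/uint64 values silently lose precision:

```go
package confidence

import (
	"fmt"

	"google.golang.org/protobuf/types/known/structpb"
)

// toNumberValue covers the common Go numeric types by widening everything
// to float64, which is protobuf's only number representation.
func toNumberValue(v any) (*structpb.Value, error) {
	switch n := v.(type) {
	case int:
		return structpb.NewNumberValue(float64(n)), nil
	case int32:
		return structpb.NewNumberValue(float64(n)), nil
	case int64:
		return structpb.NewNumberValue(float64(n)), nil
	case uint:
		return structpb.NewNumberValue(float64(n)), nil
	case uint32:
		return structpb.NewNumberValue(float64(n)), nil
	case uint64:
		return structpb.NewNumberValue(float64(n)), nil
	case float32:
		return structpb.NewNumberValue(float64(n)), nil
	case float64:
		return structpb.NewNumberValue(n), nil
	default:
		return nil, fmt.Errorf("unsupported numeric type %T", v)
	}
}
```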

@nicklasl changed the title from "feat: move go local provider" to "feat: add go local provider" on Nov 4, 2025.
@nicklasl merged commit 5c0895b into main on Nov 4, 2025.
10 checks passed.
@nicklasl deleted the move-go-provider branch on November 4, 2025 at 15:16.
@github-actions bot mentioned this pull request on Nov 4, 2025.