LaunchDarkly Token Analyzer #3948

Open
kashifkhan0771 wants to merge 19 commits into main from feat/oss-95-launchdarkly-analyzer

Conversation

@kashifkhan0771 (Contributor) commented Feb 27, 2025

Description:

This PR adds a new analyzer for LaunchDarkly tokens.
OSS Ticket: OSS-95
[Screenshot from 2025-03-04 12-10-31]

Checklist:

  • Tests passing (make test-community)?
  • Lint passing (make lint; this requires golangci-lint)?

@kashifkhan0771 kashifkhan0771 force-pushed the feat/oss-95-launchdarkly-analyzer branch from 897b201 to cc2ac00 Compare March 4, 2025 06:08
@kashifkhan0771 kashifkhan0771 marked this pull request as ready for review March 4, 2025 09:12
@kashifkhan0771 kashifkhan0771 requested a review from a team as a code owner March 4, 2025 09:12
@kashifkhan0771 kashifkhan0771 changed the title LaunchDarkly Token Analyzer [WIP] LaunchDarkly Token Analyzer Mar 4, 2025
@kashifkhan0771 kashifkhan0771 requested a review from a team as a code owner March 5, 2025 14:24

@ahrav (Collaborator) left a comment

Left a couple questions, but otherwise this looks great! Nice work.


// getPermissionType returns what type of permission is assigned to the token
func getPermissionType(token Token) string {
	permission := ""

Collaborator:

nit (potential alternative solution): A switch is a little more idiomatic, but totally up to you.

Ex:

switch {
case token.Role != "":
    return token.Role
case token.hasInlineRole():
    return "Inline Policy"
case token.hasCustomRoles():
    return "Custom Roles"
default:
    return ""
}

Permissions []string
Resources []Resource
// to concurrently read and write
mu sync.RWMutex

Collaborator:

nit: I've found it helpful to use mu almost like a "hat" for the resource(s) it protects by placing it directly above them. This makes it visually clear what it's guarding and can even replace the need for a comment.

That said, I’m not entirely sure which resources mu is protecting in this case. 🤔 This is definitely an opinionated take, so feel free to ignore it! 🤣

I forget where I first read about this, but here is a separate article about it. 😅

Ex:

type SecretInfo struct {
    ...
    mu sync.RWMutex // <-- see, like a "hat" for "Resources"
    Resources []Resource
}

/*
policy is a set of statements

Jargon:

Collaborator:

🤣


// listResourceByType returns a list of resources matching the given type.
func (s *SecretInfo) listResourceByType(resourceType string) []Resource {
s.mu.RLock()

Collaborator:

question: Do we actually need the mutex? 🤔 I didn’t notice any concurrent operations, but I might have missed something.

@kashifkhan0771 (Contributor, author):

I guess you missed the requests.go file. Large diffs are not rendered by default.

Collaborator:

😢 I knew there had to be a reason. Sorry about that.
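
For context, the concurrency the author points to lives in requests.go: CaptureResources (quoted further down) fans work out to multiple goroutines that all write into SecretInfo.Resources, so both appendResource and listResourceByType take the mutex. A minimal sketch of the write side, assuming appendResource is implemented roughly like this:

Ex:

// appendResource adds a resource to SecretInfo.
// mu guards Resources because the capture goroutines append concurrently.
func (s *SecretInfo) appendResource(resource Resource) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.Resources = append(s.Resources, resource)
}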

@ahrav (Collaborator) left a comment

Sorry I missed some of the files in my initial review.

I left a few suggestions, but only one of them is really blocking. The rest of this looks great. Nice job!

var (
wg sync.WaitGroup
errChan = make(chan error)
aggregatedErrs = make([]string, 0)

Collaborator:

suggestion: If we use a []error, we can avoid repeatedly converting between string → error → string → error.

optional: Since aggregatedErrs isn’t used in any of the goroutines, moving its declaration right before it's built makes the code a bit easier to read—no need to jump back and forth.

Ex:

var aggregatedErrs []error
for err := range errChan {
    aggregatedErrs = append(aggregatedErrs, err)
}

if len(aggregatedErrs) > 0 {
    return errors.Join(aggregatedErrs...)
}

return responseBodyByte, resp.StatusCode, nil
}

func CaptureResources(client *http.Client, token string, secretInfo *SecretInfo) error {

Collaborator:

suggestion: It looks like the current approach has a potential deadlock issue.

Using an unbuffered error channel means that if there’s no active consumer, any goroutine trying to send an error will block indefinitely. In this case, the consumer for errChan only starts after wg.Wait(), meaning a goroutine could be stuck waiting to send, preventing wg.Wait() from ever completing.

Possible Solutions:

  1. Buffer the error channel – This would allow it to accommodate all potential errors. However, since the number of projects is unknown when this function is called, we don’t know how many errors to expect. We could use an arbitrarily large buffer size, but that comes with extra memory overhead.
  2. Consume errors as they occur – Instead of trying to predict the number of errors, we could read from the channel continuously and build aggregatedErrs dynamically. I’d personally lean toward this approach, but I’ll leave it up to you.

Ex:

func CaptureResources(client *http.Client, token string, secretInfo *SecretInfo) error {
	errChan := make(chan error, 1)

	// Aggregator goroutine: collects errors as they come.
	var aggregatedErrs []error
	var errAggWg sync.WaitGroup
	errAggWg.Add(1)
	go func() {
		defer errAggWg.Done()
		for err := range errChan {
			aggregatedErrs = append(aggregatedErrs, err)
		}
	}()

	var wg sync.WaitGroup
	// Helper to launch tasks concurrently.
	launch := func(task func() error) {
		wg.Add(1)
		go func() {
			defer wg.Done()
			if err := task(); err != nil {
				errChan <- err
			}
		}()
	}

	// Launch top-level tasks.
	launch(func() error { return captureApplications(client, token, secretInfo) })
	launch(func() error { return captureRepositories(client, token, secretInfo) })
	// Capture projects.
	launch(func() error {
		if err := captureProjects(client, token, secretInfo); err != nil {
			return err
		}
		projects := secretInfo.listResourceByType(projectKey)
		for _, proj := range projects {
			launch(func() error { return captureProjectFeatureFlags(client, token, proj, secretInfo) })
			launch(func() error { return captureProjectEnv(client, token, proj, secretInfo) })
		}
		return nil
	})
	launch(func() error { return captureMembers(client, token, secretInfo) })
	launch(func() error { return captureDestinations(client, token, secretInfo) })
	launch(func() error { return captureTemplates(client, token, secretInfo) })
	launch(func() error { return captureTeams(client, token, secretInfo) })
	launch(func() error { return captureWebhooks(client, token, secretInfo) })

	wg.Wait()
	close(errChan)
	errAggWg.Wait()

	if len(aggregatedErrs) > 0 {
		return errors.Join(aggregatedErrs...)
	}
	return nil
}

// makeLaunchDarklyRequest sends an HTTP GET request to the given endpoint using the passed token and returns the response body and status code
func makeLaunchDarklyRequest(client *http.Client, endpoint, token string) ([]byte, int, error) {
	// create request
	req, err := http.NewRequest(http.MethodGet, baseURL+endpoint, http.NoBody)

Collaborator:

suggestion/question: Do you think we should use a context with a timeout (ctx) to prevent any API calls from hanging? It might help avoid cases where a request gets stuck indefinitely.
Ex:

const defaultTimeout = 10 * time.Second // I chose this value arbitrarily
ctx, cancel := context.WithTimeout(context.Background(), defaultTimeout)
defer cancel()

// create request
req, err := http.NewRequestWithContext(ctx, http.MethodGet, baseURL+endpoint, http.NoBody)
...

var (
baseURL = "https://app.launchdarkly.com/api"

endpoints = map[string]string{

Collaborator:

I like this pattern. Makes finding all the endpoints so much easier.
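
For reference, only the callerIdentity entry is visible in the quoted diff; the pattern being praised looks roughly like this (the comment and any entries beyond callerIdentity are illustrative, not taken from the PR):

Ex:

var (
	baseURL = "https://app.launchdarkly.com/api"

	// endpoints maps a resource name to its API path, so every URL lives in one place.
	endpoints = map[string]string{
		"callerIdentity": "/v2/caller-identity",
		// ...one entry per captured resource (projects, members, webhooks, etc.)
	}
)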

Type: applicationKey,
}

resource.updateResourceMetadata("Maintainer Email", application.Maintainer.Email)

Collaborator:

question: Do we actually need the updateResourceMetadata method on Resource?

It looks like all it does is check if the map is nil, initialize it if needed, and then set entries. Could we just handle this inline when constructing the Resource above? That would eliminate the need for nil checks altogether.

Am I missing a different use case?

Similar question for this pattern in the functions below for the other resource types.

Ex:

resource := Resource{
    ID:   fmt.Sprintf("launchdarkly/app/%s", application.Key),
    Name: application.Name,
    Type: applicationKey,
    MetaData: map[string]string{
	    MetadataKey: application.Key,
	    "Maintainer Email": application.Maintainer.Email,
	    "Kind": application.Kind,
    },
}

secretInfo.appendResource(resource)

}

resource.updateResourceMetadata("Default branch", repository.DefaultBranch)
resource.updateResourceMetadata("Version", fmt.Sprintf("%d", repository.Version))

Collaborator:

nit: strconv.Itoa(repository.Version) is a bit more idiomatic for converting a single int to a string. It’s also slightly more performant, though that’s probably not a big deal here 😅.
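
Applied to the quoted line, the suggested change would look like this (sketch; it also needs the strconv import):

Ex:

resource.updateResourceMetadata("Version", strconv.Itoa(repository.Version))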

}

resource.updateResourceMetadata("Kind", destination.Kind)
resource.updateResourceMetadata("Version", fmt.Sprintf("%d", destination.Version))

Collaborator:

nit: ditto strconv.Itoa(destination.Version)

Type: teamsKey,
}

resource.updateResourceMetadata("Total Roles Count", fmt.Sprintf("%d", team.Roles.TotalCount))

Collaborator:

nit: strconv.Itoa for these as well.

baseURL = "https://app.launchdarkly.com/api"

endpoints = map[string]string{
"callerIdentity": "/v2/caller-identity",

Collaborator:

question: How come these three are raw strings instead of constants like the others? 🤔 If this was done on purpose, would you mind including a brief comment explaining why?
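
A sketch of what the constant-based variant could look like, using a hypothetical callerIdentityKey constant (the name is not from the PR):

Ex:

// callerIdentityKey follows the same pattern as keys like applicationKey and projectKey.
const callerIdentityKey = "callerIdentity"

endpoints = map[string]string{
	callerIdentityKey: "/v2/caller-identity",
	// ...
}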
