
Update limitador to 0.9.0 #608

Merged · 7 commits merged into main from update-limitador on Jul 23, 2024

Conversation

@eguzki (Contributor) commented May 2, 2024

What

Update Limitador to v0.9.0

Bring Limitador's new features to the Kuadrant CRD:

  • rate limit headers
  • redis-cached -> options -> response-timeout
  • telemetry
  • tracing -> purposefully left out, for now.
  • verbosity

Added some integration tests to increase coverage.

Refactored the reconciliation logic to enforce the Limitador CR configuration only when it is defined in the Kuadrant CR object. If a field is not defined in the Kuadrant CR object, the controller will not revert changes made directly to the Limitador object.
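In rough strokes, the pattern looks like this (a minimal sketch assembled from the snippets reviewed below; the kuadranttools.LimitadorMutateFn type name is an assumption, not necessarily the real one):

// Only append a mutator for fields the Kuadrant CR actually sets. Fields left
// unset in the Kuadrant CR are never reconciled, so direct changes to the
// Limitador object survive.
limitadorMutators := make([]kuadranttools.LimitadorMutateFn, 0) // hypothetical type name
if kObj.Spec.Limitador != nil {
	if kObj.Spec.Limitador.Verbosity != nil {
		limitadorMutators = append(limitadorMutators, kuadranttools.LimitadorVerbosityMutator)
	}
	// ... one conditional append per optional field (replicas, telemetry, storage, ...)
}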

codecov bot commented May 2, 2024

Codecov Report

Attention: Patch coverage is 69.23077% with 32 lines in your changes missing coverage. Please review.

Project coverage is 82.89%. Comparing base (ece13e8) to head (20735d0).
Report is 136 commits behind head on main.

Additional details and impacted files
@@            Coverage Diff             @@
##             main     #608      +/-   ##
==========================================
+ Coverage   80.20%   82.89%   +2.68%     
==========================================
  Files          64       77      +13     
  Lines        4492     6023    +1531     
==========================================
+ Hits         3603     4993    +1390     
- Misses        600      681      +81     
- Partials      289      349      +60     
Flag Coverage Δ
bare-k8s-integration 4.52% <0.00%> (?)
controllers-integration 72.54% <53.84%> (?)
gatewayapi-integration 10.95% <25.00%> (?)
integration ?
istio-integration 56.16% <25.00%> (?)
unit 33.27% <41.34%> (+3.23%) ⬆️

Flags with carried forward coverage won't be shown.

Components Coverage Δ
api/v1beta1 (u) 71.42% <ø> (ø)
api/v1beta2 (u) 92.17% <93.93%> (+0.75%) ⬆️
pkg/common (u) 88.13% <ø> (-0.70%) ⬇️
pkg/istio (u) 73.88% <ø> (-0.03%) ⬇️
pkg/log (u) 94.73% <ø> (ø)
pkg/reconcilers (u) ∅ <ø> (∅)
pkg/rlptools (u) 83.33% <ø> (+3.87%) ⬆️
controllers (i) 82.44% <84.23%> (+5.64%) ⬆️
Files Coverage Δ
api/v1beta1/kuadrant_types.go 71.42% <ø> (ø)
controllers/kuadrant_controller.go 64.76% <78.37%> (+11.19%) ⬆️
pkg/kuadranttools/limitador_mutators.go 64.17% <64.17%> (ø)

... and 40 files with indirect coverage changes

@eguzki eguzki marked this pull request as ready for review May 2, 2024 18:12
@eguzki eguzki requested a review from a team as a code owner May 2, 2024 18:12
@alexsnaps (Member)

There are a few things here that we deliberately left out, e.g. tracing. We purposefully decided not to make them part of the API. This is a fairly big change and I think we should be cautious about adding all of these. It would also possibly mean needing to support them going forward!

@eguzki (Contributor, Author) commented May 3, 2024

> There are a few things here that we deliberately left out, e.g. tracing. We purposefully decided not to make them part of the API. This is a fairly big change and I think we should be cautious about adding all of these. It would also possibly mean needing to support them going forward!

Removing tracing. Anything else?

@alexsnaps (Member)

Honestly, anything around the cached_redis makes me uncomfortable...

@eguzki (Contributor, Author) commented May 3, 2024

> Honestly, anything around the cached_redis makes me uncomfortable...

The cached_redis option was already there. This PR adds the extra response-timeout option to the Kuadrant CRD. We should make a call: either we allow all available params for redis_cached, or we remove redis_cached entirely from the Kuadrant CRD. Given that redis_cached is the storage used for global rate limiting, I opt to add it.

@eguzki eguzki force-pushed the update-limitador branch 4 times, most recently from 312603b to bab68a7 Compare May 7, 2024 15:51
@eguzki eguzki changed the title from "Update limitador" to "Update limitador to 0.9.0" Jul 10, 2024
@eguzki eguzki marked this pull request as draft July 10, 2024 16:06
On parallel runs, multiple Kuadrant CRs were created at the same time
When the Kuadrant CR object defined limitador replicas: 1, the Limitador object was not updated
@eguzki eguzki marked this pull request as ready for review July 15, 2024 09:32
@eguzki
Copy link
Contributor Author

eguzki commented Jul 15, 2024

Ready for review @Kuadrant/engineering

Comment on lines +126 to +133
// LimitadorVerbosityMutator copies the desired verbosity into the existing
// Limitador CR spec and reports whether an update is needed.
func LimitadorVerbosityMutator(desired, existing *limitadorv1alpha1.Limitador) bool {
	update := false
	if !reflect.DeepEqual(existing.Spec.Verbosity, desired.Spec.Verbosity) {
		existing.Spec.Verbosity = desired.Spec.Verbosity
		update = true
	}
	return update
}
Member:

Do you think it's possible to DRY these funcs a bit, with something like...

func UpdateFieldIfNotEqual(existing, desired interface{}, fieldName string) bool {
	existingVal := reflect.ValueOf(existing).Elem().FieldByName(fieldName)
	desiredVal := reflect.ValueOf(desired).Elem().FieldByName(fieldName)

	if existingVal.IsValid() && desiredVal.IsValid() {
		if !existingVal.Equal(desiredVal) {
			existingVal.Set(desiredVal)
			return true
		}
	}
	return false
}
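
For reference, here is a self-contained version of that suggestion (a sketch assuming Go 1.20+, where reflect.Value.Equal exists; the LimitadorSpec struct is a hypothetical stand-in for the real limitadorv1alpha1 spec type):

package main

import (
	"fmt"
	"reflect"
)

// LimitadorSpec is a hypothetical stand-in, not the real spec type.
type LimitadorSpec struct {
	Replicas  int
	Verbosity int
}

// UpdateFieldIfNotEqual copies desired's field into existing's field when they
// differ, reporting whether anything changed.
func UpdateFieldIfNotEqual(existing, desired interface{}, fieldName string) bool {
	existingVal := reflect.ValueOf(existing).Elem().FieldByName(fieldName)
	desiredVal := reflect.ValueOf(desired).Elem().FieldByName(fieldName)

	if existingVal.IsValid() && desiredVal.IsValid() {
		if !existingVal.Equal(desiredVal) {
			existingVal.Set(desiredVal)
			return true
		}
	}
	return false
}

func main() {
	existing := &LimitadorSpec{Replicas: 1}
	desired := &LimitadorSpec{Replicas: 3}
	fmt.Println(UpdateFieldIfNotEqual(existing, desired, "Replicas")) // true
	fmt.Println(existing.Replicas)                                    // 3
}

Note that this only works on exported fields, and reflect.Value.Equal compares pointer fields by address rather than by the values they point to.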

Since we are updating fields when they don't match the desired state, we can collect the field names in a slice and process them in a for loop or a reducer fn like...

limitadorSpecFieldNames := make([]string, 0)
limitadorSpecFieldNames = append(limitadorSpecFieldNames, "Affinity")

....

for _, fieldName := range limitadorSpecFieldNames {
	if UpdateFieldIfNotEqual(existing, desired, fieldName) {
		update = true
	}
}

Not sure if we can even use the limitador spec types for the iteration instead of strings and interface{}.

Maybe worth a try... maybe not.

@eguzki (Contributor, Author):

It seems a great idea to DRY these up. However, I have a few small issues with the proposed implementation:

  • What are the semantics of that Equal method? Each field may need a different comparison method (see the sketch below).
  • The decision of which fields need to be "updated if not equal" belongs to the caller, not to the mutator.
  • TBH: I am not a big fan of the reflection lib; it should be used only when strictly needed.
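
To illustrate the first bullet (a minimal sketch, again assuming Go 1.20+): the optional spec fields are pointers, and reflect.Value.Equal compares pointers by address, while reflect.DeepEqual, which the current mutators use, follows them and compares the pointed-to values:

package main

import (
	"fmt"
	"reflect"
)

func main() {
	a, b := int32(5), int32(5)
	pa, pb := &a, &b // distinct pointers to equal values

	// reflect.Value.Equal on pointers compares addresses:
	fmt.Println(reflect.ValueOf(pa).Equal(reflect.ValueOf(pb))) // false
	// reflect.DeepEqual follows the pointers and compares the values:
	fmt.Println(reflect.DeepEqual(pa, pb)) // true
}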

}
if kObj.Spec.Limitador.Verbosity != nil {
	limitadorMutators = append(limitadorMutators, kuadranttools.LimitadorVerbosityMutator)
}
if kObj.Spec.Limitador != nil {
Member:

If you feel the DRYing is worthwhile, maybe the reflect lib can help with the iteration...

if kObj.Spec.Limitador != nil {
	v := reflect.ValueOf(kObj.Spec.Limitador).Elem()
	for i := 0; i < v.NumField(); i++ {
		// here check if the value v.Field(i).Interface() is nil and
		// call the mutator with the desired and existing objects
	}
}

@didierofrivia (Member)

It looks good. Maybe there's a pattern for crafting the mutators other than the suggested one; we could put a pin in it and look into it later.

@didierofrivia (Member) left a comment

Let's leave the DRYing pending until we know a common pattern for comparing specs. 👍🏼

@eguzki eguzki merged commit 8cc83c6 into main Jul 23, 2024
25 of 26 checks passed
@eguzki eguzki deleted the update-limitador branch July 23, 2024 14:16
Labels: kind/enhancement (New feature or request)
Projects: Status: Done
3 participants