🩹 Fix limiter middleware db connection #1813
Conversation
I will check this tomorrow
@qracer I think releasing the memory and putting it back into the pool is important for performance. You are right that if another storage is used, this construct makes no sense. I think we should execute the get on the key in the body of the block and set it again after the change; then the release can also remain in it. That should work, right?
@ReneWerner87 I reckon we can do without calling
Ok. I thought the position was binding, because you only have the results after the Next.
@ReneWerner87 I've updated the pull request and changed the bugfix approach. Sorry for the missing test; I don't know how to mock every available database efficiently to test this.
Ok, no problem
middleware/limiter/limiter_fixed.go (Outdated)

@@ -69,12 +72,6 @@ func (FixedWindow) New(cfg Config) fiber.Handler {
 		// Set how many hits we have left
 		remaining := cfg.Max - e.currHits

-		// Update storage
-		manager.set(key, e, cfg.Expiration)
I think moving the set call must also be done in the part where the limit was reached.
middleware/limiter/limiter_fixed.go (Outdated)

@@ -44,6 +44,9 @@ func (FixedWindow) New(cfg Config) fiber.Handler {

 		// Lock entry
 		mux.Lock()
 		defer func() {
 			mux.Unlock()
This is not a good idea; unlocking the mutex in a defer would block reading for too long, because with c.Next() all possible handlers are executed while the lock is still held.
@qracer I added some comments, can you please check it again?
@ReneWerner87 I've read your comments and agree with your point; the mutex usage in my solution is inefficient. I'll push changes right now.
Closes #1812

This PR just removes the release method from manager.go so that SkipSuccessfulRequests and SkipFailedRequests from the limiter config will work properly. Close my PR if you find my approach inappropriate to merge. I just used the simplest way to fix the bug, but it could also be fixed by refactoring the code in limited_config.go.