Make 3scale batching policy compatible with the caching policy #757
Conversation
Force-pushed from c616be6 to 90f498e
```lua
-- Note: "update_func" should be one of the handlers exposed by the caching
-- policy.
function _M:update(transaction, backend_status, update_func)
```
I'll move `update_func` to the initializer so it becomes an attribute of the class.
What about not passing `update_func` at all? It is passed here only to be called; this function just gets keys for a transaction.

Actually, this class feels a bit weird: it has several collaborators, but they are grouped by only one property, namely that they need keys for the transaction. One reads from the shdict and the other is a function to be executed. It seems to me that they don't really belong together.
You're right. I was having second thoughts about this class when I opened this PR, and that's why I added the comment. The objective was to isolate the management of the backend downtime cache, but we need to use a handler stored in the context and that makes it a bit weird.
In the end, I left the cache handling responsibility in the main class of the policy. I think that's better for now.
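For reference, a minimal sketch of the two designs discussed (stubbed and with illustrative names; not the PR's final code):

```lua
-- Stub standing in for the real keys helper module.
local keys_helper = {
  key_for_cached_auth = function(transaction) return 'auth:' .. tostring(transaction) end
}

-- Option A (original): the cache class receives the handler on every call.
local BackendDowntimeCache = {}
BackendDowntimeCache.__index = BackendDowntimeCache

function BackendDowntimeCache:update(transaction, backend_status, update_func)
  local key = keys_helper.key_for_cached_auth(transaction)
  update_func(self.cache, key, backend_status)
end

-- Option B (what the PR settled on): the policy keeps the cache handling
-- itself, so the extra class and its oddly-grouped collaborators go away.
local function update_downtime_cache(cache, transaction, backend_status, cache_handler)
  local key = keys_helper.key_for_cached_auth(transaction)
  cache_handler(cache, key, backend_status)
end
```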
```diff
@@ -136,6 +164,9 @@ function _M:access(context)
   ensure_report_timer_on(self, service_id, backend)

+  -- The caching policy sets a cache handler in the context
+  self.cache_handler = context.cache_handler
```
Ideally, the cache handler should be set in the initializer and passed to `BackendDowntimeCache.new()`. Unfortunately, that's not possible, because the exported `cache_handler` is not available there.
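The limitation comes from how policies are built: the initializer only receives the policy's own configuration, while the shared context (where the caching policy publishes its handler) is only passed to phase methods such as `access()`. A rough sketch, assuming a simplified instance constructor:

```lua
function _M.new(config)
  local self = setmetatable({}, _M)  -- simplified instance creation
  -- Only `config` is available here; the context, and with it
  -- context.cache_handler, is not.
  return self
end

function _M:access(context)
  -- The handler only becomes reachable once a phase receives the context.
  local cache_handler = context.cache_handler
end
```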
Is it necessary to mutate `self`? I know that right now each service gets its own copy of the policy, but that might not be necessary in the future. I'd like to keep policies "multi-tenant", in the sense that they could operate with the same configuration on multiple services.
Right 👍
Fixed.
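A minimal sketch of the fix, assuming the handler is simply read from the per-request context instead of being stored on `self` (names follow the diff above):

```lua
function _M:access(context)
  -- ... rest of the access phase ...

  -- The caching policy sets a cache handler in the context; keep it in a
  -- local so the (potentially shared) policy instance is never mutated.
  local cache_handler = context.cache_handler
  if cache_handler then
    -- pass cache_handler down to wherever backend responses are processed
  end
end
```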
```lua
local cached = self.cache_handler and
               self.backend_downtime_cache:get(transaction)

if not cached or cached ~= 200 then
```
Might it be better to do the positive check first? `if cached == 200 then ... else ... end`
Changed. Looks better 👍

I was thinking of the case where `cached` is a table; that's why I added that nil check. But it's an integer, so there's no need for it.
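The before and after, spelled out (`cached` holds an integer status or nil, never a table, so the extra nil guard added nothing):

```lua
local cached = backend_downtime_cache:get(transaction)  -- illustrative lookup

-- Before: negated check with a redundant nil guard.
if not cached or cached ~= 200 then
  -- call the 3scale backend
end

-- After: positive check first, as suggested.
if cached == 200 then
  -- cached authorization; skip the call to the 3scale backend
else
  -- call the 3scale backend
end
```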
…e context when backend is down
Seems to be needed with the new version of OpenResty.
Force-pushed from 2ce1533 to 4497c73
```diff
@@ -5,7 +5,7 @@ local concat = table.concat
 local sort = table.sort
 local unpack = table.unpack
 local ngx_re = ngx.re
-local table = table
+local new_tab = require('resty.core.base').new_tab
```
@mikz I remember that we discussed this in a previous PR.
I needed to require this in order to make the test pass. I guess it has to do with the OpenResty upgrade that we merged today.
👍
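For context, `new_tab` comes from lua-resty-core and preallocates table storage so the table does not need to grow while being filled. A small usage sketch (the fragments shown are illustrative):

```lua
local new_tab = require('resty.core.base').new_tab

-- Preallocate 4 array slots and 0 hash slots, then fill and join.
local parts = new_tab(4, 0)
parts[1], parts[2], parts[3], parts[4] = 'usage', '[', 'hits', ']'
local key = table.concat(parts)
```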
```diff
-local function handle_backend_ok(self, transaction)
+local function update_downtime_cache(cache, transaction, backend_status, cache_handler)
+  local key = keys_helper.key_for_cached_auth(transaction)
+  cache_handler(cache, key, backend_status)
```
Isn't this handler being called twice? Once by the APIcast policy and once by this one?
No, because when this policy is enabled, APIcast neither authorizes nor reports to the 3scale backend. And the handler is only called when there is a backend response to handle.
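A sketch of that single call path (`backend_client` and `res` are illustrative; `update_downtime_cache` and `keys_helper` are as in the diff above): with batching enabled, only this policy receives the 3scale backend response, so the handler runs exactly once per authorization.

```lua
local function update_downtime_cache(cache, transaction, backend_status, cache_handler)
  local key = keys_helper.key_for_cached_auth(transaction)
  cache_handler(cache, key, backend_status)
end

-- The one place that talks to the 3scale backend when batching is enabled:
local res = backend_client:authorize(transaction)  -- illustrative call
update_downtime_cache(cache, transaction, res.status, cache_handler)
```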
👍 Great!
Closes #738