Make 3scale batching policy compatible with the caching policy #757

Merged
@davidor merged 5 commits into master from batching-policy-combined-with-caching-policy on Jun 11, 2018

Conversation

@davidor (Contributor) commented on Jun 8, 2018:

Closes #738

@davidor requested a review from a team as a code owner on June 8, 2018 15:11
@davidor force-pushed the batching-policy-combined-with-caching-policy branch 3 times, most recently from c616be6 to 90f498e, on June 8, 2018 16:00
@davidor changed the title from "[WIP] Make 3scale batching policy compatible with the caching policy" to "Make 3scale batching policy compatible with the caching policy" on Jun 8, 2018
@davidor requested review from a team and removed the request for a team on June 8, 2018 16:06

-- Note: "update_func" should be one of the handlers exposed by the caching
-- policy.
function _M:update(transaction, backend_status, update_func)
@davidor (Contributor, Author):

I'll move update_func to the initializer so it becomes an attr of the class.

@mikz (Contributor):

What about not passing update_func at all?

It is passed here only to be called, and this function just gets the keys for a transaction.

Actually, this class feels a bit weird: it has several collaborators, but they are grouped by only one property, namely that they need keys for the transaction. One reads from the shdict and the other is a function to be executed. It seems to me they don't really belong together.
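A rough sketch of what I mean (names and the require path are illustrative, not the actual code): the class would only resolve keys, and the caller would apply whatever handler it needs.

-- Illustrative sketch only: a class whose single job is turning a transaction
-- into a key. keys_helper is the same helper used later in this PR; the
-- require path here is assumed, not the real one.
local keys_helper = require('keys_helper')

local _M = {}

function _M.key_for(transaction)
  return keys_helper.key_for_cached_auth(transaction)
end

-- The caller applies the handler itself, so update_func never needs to be injected:
-- cache_handler(cache, _M.key_for(transaction), backend_status)

return _M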

@davidor (Contributor, Author):

You're right. I was having second thoughts about this class when I opened this PR, which is why I added the comment. The objective was to isolate the management of the backend downtime cache, but we need to use a handler stored in the context, and that makes it a bit weird.

In the end, I left the cache-handling responsibility in the main class of the policy. I think that's better for now.

@@ -136,6 +164,9 @@ function _M:access(context)

ensure_report_timer_on(self, service_id, backend)

-- The caching policy sets a cache handler in the context
self.cache_handler = context.cache_handler
@davidor (Contributor, Author):

Ideally, the cache handler should be set in the initializer and passed to BackendDowntimeCache.new(). Unfortunately, that's not possible because the exported cache_handler is not available there.

@mikz (Contributor):

Is it necessary to mutate self? I know that right now each service gets its own copy of the policy, but that might not be the case in the future.
I'd like to keep policies "multi-tenant", in the sense that they could operate with the same configuration on multiple services.

@davidor (Contributor, Author):

Right 👍
Fixed.
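The non-mutating version looks roughly like this sketch (simplified, the helper name is hypothetical): the handler stays a local read from the per-request context and gets passed down as an argument instead of being stored on self.

-- Simplified sketch, not the exact final code.
local _M = {}

function _M:access(context)
  -- The caching policy exports its handler into the shared context; it is only
  -- available here, per request, never in the initializer.
  local cache_handler = context.cache_handler

  -- Passing it down as an argument (instead of self.cache_handler = ...) keeps
  -- the policy instance immutable, so one instance could serve several services.
  return handle_request(self, context, cache_handler) -- handle_request is a hypothetical helper
end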

local cached = self.cache_handler and
               self.backend_downtime_cache:get(transaction)

if not cached or cached ~= 200 then
@mikz (Contributor):

Might it be better to do the positive check first? if cached == 200 then ... else ... end

@davidor (Contributor, Author):

Changed. Looks better 👍

I was thinking of the case where cached is a table, which is why I added that nil check, but it's an integer, so there's no need for it.
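So the check ends up shaped roughly like this (the handler function names are illustrative, not the real ones):

-- cached holds the cached backend status code (an integer) or nil.
if cached == 200 then
  -- Authorization is cached as OK: let the request through and batch the report.
  handle_cached_auth_ok(self, transaction)   -- illustrative name
else
  -- No cached status, or a cached denial: fall back to the usual path.
  handle_no_cached_auth(self, transaction)   -- illustrative name
end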

@davidor force-pushed the batching-policy-combined-with-caching-policy branch from 2ce1533 to 4497c73 on June 11, 2018 09:38
@@ -5,7 +5,7 @@ local concat = table.concat
local sort = table.sort
local unpack = table.unpack
local ngx_re = ngx.re
local table = table
local new_tab = require('resty.core.base').new_tab
@davidor (Contributor, Author):

@mikz I remember that we discussed this in a previous PR.
I needed to require this in order to make the test pass. I guess it has to do with the OpenResty upgrade that we merged today.
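For reference, new_tab just preallocates a table with the given array and hash sizes, e.g.:

-- new_tab(narr, nrec) preallocates a Lua table with room for narr array slots
-- and nrec hash slots, which avoids reallocations when the final size is known.
local new_tab = require('resty.core.base').new_tab

local parts = new_tab(4, 0)  -- example: room for 4 array entries, no hash part
parts[1] = 'usage'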

@mikz (Contributor):

👍

@davidor requested a review from a team on June 11, 2018 09:47
local function handle_backend_ok(self, transaction)
local function update_downtime_cache(cache, transaction, backend_status, cache_handler)
local key = keys_helper.key_for_cached_auth(transaction)
cache_handler(cache, key, backend_status)
@mikz (Contributor):

Isn't this handler being called twice? Once by the APIcast policy and once by this one?

@davidor (Contributor, Author):

No, because when this policy is enabled, APIcast does not authorize against or report to the 3scale backend, and the handler is only called when there is a backend response to be handled.
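In other words, the call site looks roughly like this (the backend call below is illustrative, not the exact code in the PR):

-- The downtime cache handler runs only after this policy itself has received a
-- response from the 3scale backend, so it cannot also have been run by APIcast
-- for the same request.
local res = backend_client:authorize(formatted_usage, auth_params)  -- illustrative call
if res and res.status then
  update_downtime_cache(self.backend_downtime_cache, transaction, res.status, cache_handler)
end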

@mikz (Contributor) left a review:

👍 Great!

@davidor merged commit b94979d into master on Jun 11, 2018
@davidor deleted the batching-policy-combined-with-caching-policy branch on June 11, 2018 10:49