Make 3scale batching policy compatible with the caching policy #757

Merged 5 commits on Jun 11, 2018
CHANGELOG.md: 2 changes (1 addition, 1 deletion)
@@ -10,7 +10,7 @@ and this project adheres to [Semantic Versioning](http://semver.org/).

- OpenTracing support [PR #669](https://github.com/3scale/apicast/pull/669)
- Generate new policy scaffold from the CLI [PR #682](https://github.com/3scale/apicast/pull/682)
- - 3scale batcher policy [PR #685](https://github.com/3scale/apicast/pull/685), [PR #710](https://github.com/3scale/apicast/pull/710)
+ - 3scale batcher policy [PR #685](https://github.com/3scale/apicast/pull/685), [PR #710](https://github.com/3scale/apicast/pull/710), [PR #757](https://github.com/3scale/apicast/pull/757)
- Liquid templating support in the headers policy configuration [PR #716](https://github.com/3scale/apicast/pull/716)
- Ability to modify query parameters in the URL rewriting policy [PR #724](https://github.com/3scale/apicast/pull/724)
- 3scale referrer policy [PR #728](https://github.com/3scale/apicast/pull/728)
gateway/src/apicast/policy/3scale_batcher/3scale_batcher.lua: 43 changes (38 additions, 5 deletions)
@@ -1,6 +1,7 @@
local backend_client = require('apicast.backend_client')
local AuthsCache = require('auths_cache')
local ReportsBatcher = require('reports_batcher')
local keys_helper = require('keys_helper')
local policy = require('apicast.policy')
local errors = require('apicast.errors')
local reporter = require('reporter')
@@ -38,6 +39,12 @@ function _M.new(config)
end
self.semaphore_report_timer = semaphore_report_timer

-- Cache for authorizations to be used in the event of a 3scale backend
-- downtime.
-- This cache allows us to use this policy in combination with the caching
-- one.
self.backend_downtime_cache = ngx.shared.api_keys

return self
end
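
Note that `ngx.shared.api_keys` is the same shared dictionary that the caching policy reads and writes; reusing it here is what lets the two policies interoperate. The dictionary only exists when it is declared in the nginx configuration, as the integration test below does:

```nginx
lua_shared_dict api_keys 10m;
```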

@@ -110,17 +117,42 @@ local function error(service, rejection_reason)
end
end

-local function handle_backend_ok(self, transaction)
+local function update_downtime_cache(cache, transaction, backend_status, cache_handler)
local key = keys_helper.key_for_cached_auth(transaction)
cache_handler(cache, key, backend_status)
Contributor: Isn't this handler being called twice? Once by the APIcast policy and once by this one?

Contributor (Author): No, because when this policy is enabled, APIcast does not authorize nor report to the 3scale backend. And the handler is only called when there's a backend response to be handled.
end

local function handle_backend_ok(self, transaction, cache_handler)
if cache_handler then
update_downtime_cache(self.backend_downtime_cache, transaction, 200, cache_handler)
end

self.auths_cache:set(transaction, 200)
self.reports_batcher:add(transaction)
end

-local function handle_backend_denied(self, service, transaction, status, headers)
+local function handle_backend_denied(self, service, transaction, status, headers, cache_handler)
if cache_handler then
update_downtime_cache(self.backend_downtime_cache, transaction, status, cache_handler)
end

local rejection_reason = rejection_reason_from_headers(headers)
self.auths_cache:set(transaction, status, rejection_reason)
return error(service, rejection_reason)
end

local function handle_backend_error(self, service, transaction, cache_handler)
local key = keys_helper.key_for_cached_auth(transaction)
local cached = cache_handler and self.backend_downtime_cache:get(key)

if cached == 200 then
self.reports_batcher:add(transaction)
else
-- The caching policy does not store the rejection reason, so we can only
-- return a generic error.
return error(service)
end
end

-- Note: when an entry in the cache expires, there might be several requests
-- with those credentials and all of them will call auth() on backend with the
-- same parameters until the auth status is cached again. In the future, we
@@ -142,14 +174,15 @@ function _M:access(context)
local formatted_usage = format_usage(usage)
local backend_res = backend:authorize(formatted_usage, credentials)
local backend_status = backend_res.status
local cache_handler = context.cache_handler -- Set by Caching policy

if backend_status == 200 then
-handle_backend_ok(self, transaction)
+handle_backend_ok(self, transaction, cache_handler)
elseif backend_status >= 400 and backend_status < 500 then
handle_backend_denied(
-self, service, transaction, backend_status, backend_res.headers)
+self, service, transaction, backend_status, backend_res.headers, cache_handler)
else
-return error(service)
+handle_backend_error(self, service, transaction, cache_handler)
end
else
if cached_auth.status == 200 then
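For context on the `cache_handler` contract used above: the handler is provided by the caching policy through `context`, and this policy invokes it as `cache_handler(cache, key, backend_status)`. A minimal sketch of what a resilient-style handler could look like (a hypothetical illustration consistent with the test at the end of this PR, not the caching policy's actual code):

```lua
-- Hypothetical resilient-style handler: cache definitive backend answers
-- (2xx/4xx) and keep the last known status on 5xx, so that
-- handle_backend_error() can fall back to it while the backend is down.
local function resilient_handler(cache, key, backend_status)
  if backend_status and backend_status < 500 then
    cache:set(key, backend_status)
  end
end
```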
gateway/src/apicast/policy/3scale_batcher/keys_helper.lua: 4 changes (2 additions, 2 deletions)
@@ -5,7 +5,7 @@ local concat = table.concat
local sort = table.sort
local unpack = table.unpack
local ngx_re = ngx.re
-local table = table
+local new_tab = require('resty.core.base').new_tab
Contributor (Author): @mikz I remember that we discussed this in a previous PR. I needed to require this in order to make the test pass. I guess it has to do with the OpenResty upgrade that we merged today.

Contributor: 👍

local _M = {}

@@ -20,7 +20,7 @@ local function creds_part_in_key(creds)
end

local function metrics_part_in_key(usage)
-local usages = table.new(#usage.metrics, 0)
+local usages = new_tab(#usage.metrics, 0)

local deltas = usage.deltas

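A side note on the `new_tab` change discussed above: `table.new` is a LuaJIT extension that may not be loaded by default, so requiring it explicitly via `resty.core.base` is the safer route. A brief, hypothetical usage sketch:

```lua
-- new_tab(narr, nrec) pre-allocates a table with narr array slots and
-- nrec hash slots, which avoids rehashing as the table is filled.
local new_tab = require('resty.core.base').new_tab

local usages = new_tab(2, 0) -- we know we will append exactly two items
usages[1] = 'hits=2'
usages[2] = 'other_metric=1'
```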
spec/policy/3scale_batcher/3scale_batcher_spec.lua: 75 changes (74 additions, 1 deletion)
@@ -42,11 +42,17 @@ describe('3scale batcher policy', function()
batcher_policy.auths_cache = AuthsCache.new(lrucache.new(10), 10)
stub(batcher_policy.reports_batcher, 'add')

stub(batcher_policy, 'backend_downtime_cache')

context = {
service = service,
usage = usage,
-credentials = credentials
+credentials = credentials,
+-- cache_handler does nothing because we just need to check if it's called
+cache_handler = function() end
}

stub(context, 'cache_handler')
end)

describe('when the request is cached', function()
@@ -79,5 +85,72 @@
end)
end)
end)

describe('when the request is not cached', function()
describe('and backend is available', function()
before_each(function()
local backend_client = require('apicast.backend_client')
stub(backend_client, 'authorize').returns({ status = 200 })
end)

it('updates the backend downtime cache using the handler in the context', function()
batcher_policy:access(context)

assert.stub(context.cache_handler).was_called()
end)
end)

describe('and backend is not available', function()
before_each(function()
local backend_client = require('apicast.backend_client')
stub(backend_client, 'authorize').returns({ status = 500 })
end)

describe('and the authorization is in the downtime cache', function()
describe('and it is OK', function()
before_each(function()
stub(batcher_policy.backend_downtime_cache, 'get').returns(200)
end)

it('adds the report to the batcher', function()
batcher_policy:access(context)

assert.stub(batcher_policy.reports_batcher.add).was_called_with(
batcher_policy.reports_batcher, transaction)
end)
end)

describe('and it is denied', function()
before_each(function()
stub(batcher_policy.backend_downtime_cache, 'get').returns(409)
end)

it('does not add the report to the batcher', function()
batcher_policy:access(context)

assert.stub(batcher_policy.reports_batcher.add).was_not_called()
end)

it('returns an error', function()
batcher_policy:access(context)

assert.is_true(ngx.status >= 400 and ngx.status < 500)
end)
end)
end)

describe('and the authorization is not in the downtime cache', function()
before_each(function()
stub(batcher_policy.backend_downtime_cache, 'get').returns(nil)
end)

it('returns an error', function()
batcher_policy:access(context)

assert.is_true(ngx.status >= 400 and ngx.status < 500)
end)
end)
end)
end)
end)
end)
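
The spec above leans on luassert stubs. As a self-contained sketch of the pattern for readers unfamiliar with it (generic luassert usage, independent of this PR):

```lua
local stub = require('luassert.stub')
local assert = require('luassert')

local cache = { get = function() return nil end }

-- Replace cache.get with a stub that records its calls and returns 200.
stub(cache, 'get').returns(200)

assert.are.equal(200, cache.get('some-key'))
assert.stub(cache.get).was_called_with('some-key')

cache.get:revert() -- restore the original function
```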
t/apicast-policy-3scale-batcher.t: 84 changes (84 additions, 0 deletions)
@@ -372,3 +372,87 @@ push $res, "GET /force_report_to_backend";
$res
--- no_error_log
[error]

=== TEST 5: with caching policy (resilient mode)
The purpose of this test is to verify that the 3scale batcher policy works
correctly when combined with the caching one.
In this case, the caching policy is configured as "resilient". We define a
backend that returns "limits exceeded" on the first request and an error on
all the subsequent ones. The caching policy caches the first result and
returns it while the backend is down. Notice that the caching policy does not
store the rejection reason; it just returns a generic error (403/Authentication
failed).
To make sure that nothing is cached in the 3scale batcher policy, we flush its
auth cache on every request (see rewrite_by_lua_block).
--- http_config
include $TEST_NGINX_UPSTREAM_CONFIG;
lua_shared_dict api_keys 10m;
lua_shared_dict cached_auths 1m;
lua_shared_dict batched_reports 1m;
lua_shared_dict batched_reports_locks 1m;
lua_package_path "$TEST_NGINX_LUA_PATH";

init_by_lua_block {
require('apicast.configuration_loader').mock({
services = {
{
id = 42,
backend_version = 1,
backend_authentication_type = 'service_token',
backend_authentication_value = 'token-value',
proxy = {
backend = { endpoint = "http://127.0.0.1:$TEST_NGINX_SERVER_PORT" },
api_backend = "http://127.0.0.1:$TEST_NGINX_SERVER_PORT/api-backend/",
proxy_rules = {
{ pattern = '/', http_method = 'GET', metric_system_name = 'hits', delta = 2 }
},
policy_chain = {
{
name = 'apicast.policy.3scale_batcher',
configuration = { }
},
{
name = 'apicast.policy.apicast'
},
{
name = 'apicast.policy.caching',
configuration = { caching_type = 'resilient' }
}
}
}
}
}
})
}

rewrite_by_lua_block {
ngx.shared.cached_auths:flush_all()
}
--- config
include $TEST_NGINX_APICAST_CONFIG;

location /transactions/authorize.xml {
content_by_lua_block {
local test_counter = ngx.shared.test_counter or 0
if test_counter == 0 then
ngx.shared.test_counter = test_counter + 1
ngx.header['3scale-rejection-reason'] = 'limits_exceeded'
ngx.status = 409
ngx.exit(ngx.HTTP_OK)
else
ngx.shared.test_counter = test_counter + 1
ngx.exit(502)
end
}
}

location /api-backend {
echo 'yay, api backend';
}
--- request eval
["GET /test?user_key=foo", "GET /foo?user_key=foo", "GET /?user_key=foo"]
--- response_body eval
["Limits exceeded", "Authentication failed", "Authentication failed"]
--- error_code eval
[ 429, 403, 403 ]
--- no_error_log
[error]