access-api: invoking store/list in production errors with HandlerExecutionError + HTTPError #363
The failure surfaces as a `HandlerExecutionError` wrapping an uncaught `HTTPError`.
…y non-ok responses (to debug) (#364) …that logs unexpected responses with status=530

Motivation:
* gather more info for [this error happening in production](#363 (comment))

Plan:
* gather info, then revert this PR
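The PR's code isn't shown here, but the described behavior amounts to wrapping the upstream fetch so any non-ok response gets logged before being returned. A minimal sketch of that idea; the `loggingFetch` name and the log shape are illustrative, not taken from #364:

```ts
// Sketch only: wrap the fetch used for proxied upstream requests so that any
// non-ok response (e.g. the unexpected status=530) is logged with enough
// detail to debug, then returned unchanged to the caller.
async function loggingFetch(
  input: RequestInfo | URL,
  init?: RequestInit
): Promise<Response> {
  const response = await fetch(input, init)
  if (!response.ok) {
    // Clone before reading so the caller can still consume the body.
    const body = await response
      .clone()
      .text()
      .catch(() => '<body could not be read>')
    console.warn('unexpected upstream response', {
      url: input instanceof Request ? input.url : String(input),
      status: response.status,
      headers: Object.fromEntries(response.headers),
      body,
    })
  }
  return response
}
```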
…or, by default catches HTTPError -> error result w/ status=502 (#366) …which defaults to something that catches HTTPErrors and rewrites them to a status=502 error result

Motivation:
* relates to: #363
* instead of the current behavior of getting a `HandlerExecutionError` (due to internally having an uncaught `HTTPError`), after this PR we should:
  * not have the uncaught `HTTPError`, so no uncaught `HandlerExecutionError`
  * get a result with `status=502` and `x-proxy-error` carrying information about the proxy invocation request that errored (which will help debug #363)

Limitations:
* This includes in `result['x-proxy-error']` any info from the underlying `@ucanto/transport/http` `HTTPError`, but unless/until we put more info on that error, we still won't have the raw response object and e.g. won't have response headers.
  * but if those properties are ever added to `HTTPError`, they should show up in `x-proxy-error`
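In effect the handler wrapper described above catches the transport error and turns it into a result instead of letting it throw. A rough sketch of that shape; `withHTTPErrorHandling` is an illustrative name, and the assumption that the thrown error can be recognized via `error.name === 'HTTPError'` may not match the real code in #366:

```ts
// Sketch only: rewrite an uncaught HTTPError from the proxied invocation into
// an error result with status=502 and an `x-proxy-error` field, so ucanto does
// not surface it as a HandlerExecutionError.
type ProxyErrorResult = {
  error: true
  status: number
  message: string
  'x-proxy-error': Record<string, unknown>
}

function withHTTPErrorHandling<Invocation, Result>(
  handler: (invocation: Invocation) => Promise<Result>
) {
  return async (invocation: Invocation): Promise<Result | ProxyErrorResult> => {
    try {
      return await handler(invocation)
    } catch (error) {
      // Assumption: the error thrown by @ucanto/transport/http is identifiable
      // by its name; the real middleware may match it differently.
      if (error instanceof Error && error.name === 'HTTPError') {
        return {
          error: true,
          status: 502,
          message: 'proxied invocation failed',
          'x-proxy-error': {
            // whatever info the underlying HTTPError carries
            cause: error.message,
          },
        }
      }
      throw error
    }
  }
}
```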
phew I recovered response text from the mysterious 530 response:
#status - we worked around this by using a different prod URL that goes directly to AWS, but will leave this open until there is more of an update from my support ticket to Cloudflare
Motivation:
* cloudflare support asked for a URL to reproduce the underlying cause of #363
* this adds a route just for that. once they use it to reproduce and diagnose, we remove it
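Such a route can be as small as a path check in the worker's fetch handler that replays the failing upstream request. A sketch, assuming a plain module-worker handler; the `/debug/upstream` path and the `UPSTREAM_URL` binding are made up for illustration:

```ts
// Sketch only: a temporary debug route that reproduces the failing upstream
// request so Cloudflare support can observe the 530 directly.
export default {
  async fetch(request: Request, env: { UPSTREAM_URL: string }): Promise<Response> {
    const url = new URL(request.url)
    if (url.pathname === '/debug/upstream') {
      // Re-issue the same kind of request the access-api proxy makes and
      // return the raw upstream status/body for inspection.
      const upstream = await fetch(env.UPSTREAM_URL, { method: 'POST' })
      return new Response(await upstream.text(), { status: upstream.status })
    }
    return new Response('not found', { status: 404 })
  },
}
```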
cloudflare support got back to me basically saying this is because it's resolving …
Motivation:

Context:
* `can=store/list aud=did:web:staging.web3.storage` invocations through access-api staging (see the sketch at the end of this issue body)

Problem:
* `env.{DID,PRIVATE_KEY}` are the same as production. This leads me to think it's something specific about our production deployment running in cloudflare workers.

Possible Solutions:
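For reference, the `store/list` invocation mentioned under Context above is roughly of this shape. This is a sketch assuming the ucanto client API of that era; the `encoder`/`decoder` connection options, the placeholder endpoint URL, the agent/space identifiers, and the omitted proof handling are all assumptions, not taken from the actual test code:

```ts
// Sketch only: execute a store/list invocation against the access-api over HTTP.
import { connect, invoke } from '@ucanto/client'
import * as CAR from '@ucanto/transport/car'
import * as CBOR from '@ucanto/transport/cbor'
import * as HTTP from '@ucanto/transport/http'
import * as DID from '@ipld/dag-ucan/did'
import { ed25519 } from '@ucanto/principal'

async function listStoredItems(spaceDID: `did:${string}:${string}`) {
  const agent = await ed25519.generate()
  const accessApi = DID.parse('did:web:staging.web3.storage')

  const connection = connect({
    id: accessApi,
    encoder: CAR,
    decoder: CBOR,
    channel: HTTP.open({
      // Placeholder endpoint; the real staging/production URLs differ.
      url: new URL('https://access.staging.example'),
      method: 'POST',
    }),
  })

  return await invoke({
    issuer: agent,
    audience: accessApi,
    capability: { can: 'store/list', with: spaceDID },
    // proofs: [delegationFromSpaceOwner], // required in practice, omitted here
  }).execute(connection)
}
```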