
[core] LambdaRuntimeClient does nothing when the server closes the connection #465

Open · sebsto opened this issue Jan 15, 2025 · 1 comment
Labels: kind/bug Feature doesn't work as expected.

sebsto (Contributor) commented Jan 15, 2025

Expected behavior

When the server closes the connection, the client should try to reconnect and eventually shut down if it can't.

We should implement a retry strategy to re-establish the connection (ideally with an exponential backoff) and gracefully shut down the client if it can't reconnect after n attempts.
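A minimal sketch of what such a strategy could look like, assuming a hypothetical `connect()` entry point; the function name, limits, and delays are illustrative and not the actual LambdaRuntimeClient API:

```swift
// Hypothetical sketch of the proposed reconnect strategy; the `connect`
// closure and the limits are illustrative, not the LambdaRuntimeClient API.
func reconnectWithBackoff(
    maxAttempts: Int = 5,
    initialDelay: Duration = .milliseconds(100),
    connect: () async throws -> Void
) async throws {
    var delay = initialDelay
    for attempt in 1...maxAttempts {
        do {
            try await connect()
            return // reconnected, resume polling /next
        } catch {
            guard attempt < maxAttempts else {
                // out of attempts: surface the error so the caller can
                // gracefully shut the client down
                throw error
            }
            try await Task.sleep(for: delay)
            delay = delay * 2 // exponential backoff
        }
    }
}
```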

Actual behavior

The LambdaRuntimeClient changes its internal state but does not attempt to reconnect or exit.
See this line of code

Steps to reproduce

  1. Check out this branch to use the new MockServer:
     https://github.com/sebsto/swift-aws-lambda-runtime/tree/sebsto/mockserver

  2. Start the Mock server

LOG_LEVEL=trace  swift run

2025-01-15T11:43:07+0100 info MockServer : host="127.0.0.1" maxInvocations=1 port=7000 [MockServer] Server started and listening
  3. From another terminal, start the HelloWorld example in DEBUG mode.
cd Examples/HelloWorld
export AWS_LAMBDA_RUNTIME_API=127.0.0.1:7000
LAMBDA_USE_LOCAL_DEPS=../.. LOG_LEVEL=trace swift run

Build of product 'MyLambda' complete! (0.12s)
2025-01-15T11:52:18+0100 debug LambdaRuntime : [AWSLambdaRuntimeCore] LambdaRuntime initialized
2025-01-15T11:52:18+0100 trace LambdaRuntime : lambda_ip=127.0.0.1 lambda_port=7000 [AWSLambdaRuntimeCore] Connection to control plane created

[hangs after the /next and /response requests have been processed and the mock server has shut down]

The mock server closes the connection after 2 HTTP requests on 1 connection.
The runtime sends the /next and /response requests and then stays up and running without retrying the connection.

Adding a print() statement at line 267 shows that the connection close event is trapped, but no action is taken.

If possible, minimal yet complete reproducer code (or URL to code)

n/a

What version of this project (swift-aws-lambda-runtime) are you using?

main

Swift version

swift-driver version: 1.115.1 Apple Swift version 6.0.3 (swiftlang-6.0.3.1.10 clang-1600.0.30.1)
Target: arm64-apple-macosx15.0

Amazon Linux 2 docker image version

n/a

@sebsto sebsto added the kind/bug Feature doesn't work as expected. label Jan 15, 2025
@sebsto sebsto added this to the 2.0 milestone Jan 15, 2025
@sebsto sebsto self-assigned this Jan 15, 2025
sebsto (Contributor, Author) commented Jan 15, 2025

Having given it a second thought, maybe we can just be a bad citizen and crash when the connection is closed. The Lambda service will recreate another microVM to continue to serve client requests.

The current performance testing script measures performance by calculating the runtime of an executable target. If we implement a retry-with-backoff strategy, it will impact the performance measurement (or we will need to redesign the perf testing).

Furthermore, the retry logic will add a few KB to the library size that will only be needed in the rare occurrence of the service no longer serving requests to the LambdaRuntime.

Maybe a simple fatalError() would be good enough in this situation.

WDYT?
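For illustration only, a sketch of what the fatalError() option could look like in a NIO channel handler; the handler name and its placement are hypothetical, not the actual LambdaRuntimeClient code:

```swift
import NIOCore

// Hypothetical handler showing the "crash and let Lambda restart the
// execution environment" option; the real LambdaRuntimeClient wires its
// connection-state handling differently.
final class CrashOnCloseHandler: ChannelInboundHandler {
    typealias InboundIn = NIOAny

    func channelInactive(context: ChannelHandlerContext) {
        // The control-plane connection is gone and we do not retry:
        // terminate the process and let the Lambda service provision
        // a fresh microVM for the next invocation.
        fatalError("Connection to the Lambda control plane was closed")
    }
}
```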

sebsto added a commit that referenced this issue Jan 22, 2025
Rewrite of the MockServer (used only for performance testing and part of a
separate target) to comply with Swift 6 language mode and concurrency.

The new MockServer now uses `NIOAsyncChannel` and structured
concurrency.

Instead of adding support for the
[MAX_REQUEST](https://github.com/swift-server/swift-aws-lambda-runtime/blob/11756b4e00ca75894826b41666bdae506b6eb496/Sources/AWSLambdaRuntimeCore/LambdaConfiguration.swift#L53)
environment variable like v1 did, we implemented support for the
`MAX_REQUEST` environment variable in the MockServer itself. It closes
the connection and shuts down the server after servicing MAX_INVOCATIONS
Lambda requests. This allows the MAX_REQUEST penalty to fall on the
MockServer and not on the LambdaRuntimeClient.

However, currently, the LambdaRuntimeClient does not shut down when the
MockServer ends. I created
#465 to
track this issue.

See #377
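For context, a rough sketch of the per-connection cap described in the commit message above, assuming a `NIOAsyncChannel`-based HTTP server; the function name, the `maxInvocations` parameter, and the elided response writing are illustrative, not the actual MockServer code:

```swift
import NIOCore
import NIOHTTP1

// Illustrative only: serve at most `maxInvocations` requests on one
// connection, then return from the closure so the channel is closed.
func serve(
    connection: NIOAsyncChannel<HTTPServerRequestPart, HTTPPart<HTTPResponseHead, ByteBuffer>>,
    maxInvocations: Int
) async throws {
    try await connection.executeThenClose { inbound, outbound in
        var served = 0
        for try await part in inbound {
            guard case .end = part else { continue } // one reply per full request
            // ... write the mocked /next or /response reply to `outbound` here ...
            served += 1
            if served >= maxInvocations { break } // cap reached: close the connection
        }
    }
}
```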