refactor: proxy/preview server #231
Conversation
🦋 Changeset detected. Latest commit: c3ea521. The changes in this PR will be included in the next version bump. This PR includes changesets to release 1 package.
Force-pushed from e7e6a1f to ed94cd8.
OK so I found it quite hard to follow what had changed and what was going on - mainly due to my lack of experience in this part of the code base. But I think it looks like it should work.
.changeset/angry-schools-walk.md (Outdated)

refactor: proxy/preview server

This PR refactors how we setup the proxy server between the developer and the edge preview service during `wrangler dev`. Of note, we start the server immediately. We also buffers requests/streams and hold on to the, when starting/refreshing the token. This means a developer should never see `ERR_CONNECTION_REFUSED` error page, or have an older worker respond after making a change to the code. And when the token does get refreshed, we flush said streams/requests with the newer values, making the iteration process a lot smoother and predictable.
Suggested change: "We also buffers requests/streams and hold on to the," -> "We also buffer requests/streams and hold on to them,"
/**
 * Creates a HTTP/1 proxy that sends requests over HTTP/2.
 * `usePreviewServer` is a React hook that creates a local development
When you say "local" do you really mean "on the local machine" (e.g. Miniflare)?
Or is this preview server actually a real Worker running at the Edge?
By the way, this comment seems in the wrong place. Shouldn't it be directly above the `usePreview()` function below?
It's both - the worker we upload gets exposed by the edge preview service, and we start up a local development server on our own machine that acts as a proxy to it. In local mode, it proxies to miniflare instead.
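For anyone trying to picture that, here is a minimal sketch of the shape, with made-up names (`REMOTE_PREVIEW_HOST`, `LOCAL_PORT`); the real implementation speaks HTTP/2 upstream and attaches the preview token header, but the basic proxy idea is the same:

```ts
import { createServer, request as httpRequest, IncomingMessage, ServerResponse } from "node:http";
import { request as httpsRequest } from "node:https";

// Illustrative values only.
const LOCAL_PORT = 8787;
const REMOTE_PREVIEW_HOST = "example.preview.invalid"; // or 127.0.0.1:<miniflare port> in local mode
const USE_TLS_UPSTREAM = true;

createServer((req: IncomingMessage, res: ServerResponse) => {
  const makeRequest = USE_TLS_UPSTREAM ? httpsRequest : httpRequest;
  // Forward the incoming request upstream, rewriting the Host header so the
  // preview service routes it to the uploaded worker.
  const upstream = makeRequest(
    {
      host: REMOTE_PREVIEW_HOST,
      method: req.method,
      path: req.url,
      headers: { ...req.headers, host: REMOTE_PREVIEW_HOST },
    },
    (upstreamRes) => {
      res.writeHead(upstreamRes.statusCode ?? 502, upstreamRes.headers);
      upstreamRes.pipe(res);
    }
  );
  req.pipe(upstream);
}).listen(LOCAL_PORT);
```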
  host: string,
  port: number
) {
  for (const [name, value] of Object.entries(headers)) {
I do prefer this approach to iteration over using `.forEach()` etc. Nice!
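For comparison (an illustrative snippet, not code from this PR), the two styles over the same headers object:

```ts
const headers: Record<string, string> = { host: "example.com", accept: "*/*" };

// Entries iteration, as in this PR: works with `break`, `continue` and `await`.
for (const [name, value] of Object.entries(headers)) {
  console.log(`${name}: ${value}`);
}

// The .forEach() alternative: one callback per entry, no early exit.
Object.entries(headers).forEach(([name, value]) => {
  console.log(`${name}: ${value}`);
});
```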
packages/wrangler/src/proxy.ts (Outdated)
  host: string,
  port: number
NIT - consider renaming these for clarity: `host` -> `remoteHost`, `port` -> `localPort`.
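i.e. something along these lines (the enclosing function's name isn't shown in the diff, so `createProxyServer` here is just a placeholder):

```ts
function createProxyServer(
  remoteHost: string, // where the edge preview service is reachable
  localPort: number // where the local dev server listens
) {
  // ...
}
```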
proxy.on("request", bufferRequestResponse);
cleanupListeners.push(() => proxy.off("request", bufferRequestResponse));
return () => {
  cleanupListeners.forEach((cleanup) => cleanup());
Do we need to worry about memory leaks here? Would it be safer to also empty the `cleanupListeners` array?
Thinking this out loud - every reference inside the effect is dereferenced after this function is called, so I don't think clearing the array would help any more. Specifically, `cleanupListeners` itself is dereferenced. If any of the listeners is itself hanging on to something, then wiping this array wouldn't help anyway.
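To make that concrete, a stripped-down sketch of the pattern under discussion (a hypothetical hook, not the actual `usePreviewServer` code): once the effect cleanup runs and the effect is discarded, nothing reachable points at `cleanupListeners` or the listeners any more, so emptying the array would be belt-and-braces at most.

```ts
import { useEffect } from "react";
import { EventEmitter } from "node:events";

// Hypothetical hook illustrating the listener/cleanup pattern.
export function useProxyListeners(proxy: EventEmitter) {
  useEffect(() => {
    const cleanupListeners: Array<() => void> = [];

    const onRequest = () => {
      /* handle or buffer the request */
    };
    proxy.on("request", onRequest);
    cleanupListeners.push(() => proxy.off("request", onRequest));

    return () => {
      // After this runs, the closure (and with it `cleanupListeners` and
      // `onRequest`) is dropped by React, so everything is collectable.
      cleanupListeners.forEach((cleanup) => cleanup());
    };
  }, [proxy]);
}
```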
packages/wrangler/src/proxy.ts (Outdated)
if (assetPath) {
  const handleAsset = serveStatic(assetPath, {
    cacheControl: false,
  });
  const handleAssetsAndRequests = (
    request: IncomingMessage,
    response: ServerResponse
  ) => {
    handleAsset(request, response, () => {
      handleRequest(request, response);
    });
  };

  proxy.on("request", handleAssetsAndRequests);
  cleanupListeners.push(() =>
    proxy.off("request", handleAssetsAndRequests)
  );

  // flush and replay buffered requests
  requestResponseBufferRef.current.forEach(({ request, response }) =>
    handleAssetsAndRequests(request, response)
  );
  requestResponseBufferRef.current = [];
} else {
  proxy.on("request", handleRequest);
  cleanupListeners.push(() => proxy.off("request", handleRequest));
  // flush and replay buffered requests
  requestResponseBufferRef.current.forEach(({ request, response }) =>
    handleRequest(request, response)
  );
  requestResponseBufferRef.current = [];
}
NIT - this whole block could be simplified to remove some of the duplication - for example:
const actualHandleRequest = assetPath
  ? createHandleAssetsRequest(assetPath, handleRequest)
  : handleRequest;

proxy.on("request", actualHandleRequest);
cleanupListeners.push(() => proxy.off("request", actualHandleRequest));

// flush and replay buffered requests
requestResponseBufferRef.current.forEach(({ request, response }) =>
  actualHandleRequest(request, response)
);
requestResponseBufferRef.current = [];

// ... somewhere else
function createHandleAssetsRequest(assetPath: string, handleRequest: RequestListener) {
  const handleAsset = serveStatic(assetPath, {
    cacheControl: false,
  });
  return (request: IncomingMessage, response: ServerResponse) => {
    handleAsset(request, response, () => {
      handleRequest(request, response);
    });
  };
}
I like this!
addCfPreviewTokenHeader(headers, previewToken.value);
headers["host"] = previewToken.host;
const localWebsocket = new WebSocket(message, socket, body);
// TODO(soon): Custom WebSocket protocol is not working?
Is this where we need to handshake over the "Subprotocol negotiation" of websocket that is being requested?
I'm actually not sure, this code is from the previous impl.
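For reference: subprotocol negotiation is mostly header plumbing. The client sends `Sec-WebSocket-Protocol`, and a proxy in the middle has to pass the requested protocols upstream and echo the accepted one back in the handshake. A rough, purely illustrative sketch using the `ws` package, not what proxy.ts actually does:

```ts
import { WebSocketServer, WebSocket } from "ws";

// Hypothetical upstream for illustration only.
const UPSTREAM_URL = "wss://example.preview.invalid";

const wss = new WebSocketServer({
  port: 8788,
  // Pick (here: the first) requested subprotocol so it is echoed back to the client.
  handleProtocols: (protocols) => protocols.values().next().value ?? false,
});

wss.on("connection", (client, request) => {
  // Forward the client's requested subprotocols when dialling upstream.
  const header = request.headers["sec-websocket-protocol"];
  const requested = (Array.isArray(header) ? header.join(",") : header ?? "")
    .split(",")
    .map((p) => p.trim())
    .filter(Boolean);

  const upstream = new WebSocket(UPSTREAM_URL, requested);
  upstream.on("open", () => {
    client.on("message", (data) => upstream.send(data));
    upstream.on("message", (data) => client.send(data));
  });
});
```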
Closes #205. This PR refactors how we set up the proxy server between the developer and the edge preview service during `wrangler dev`. Of note, we start the server immediately. We also buffer requests/streams and hold on to them when starting/refreshing the token. This means a developer should never see an `ERR_CONNECTION_REFUSED` error page, or have an older worker respond after making a change to the code. When the token does get refreshed, we flush said streams/requests with the newer values.
Force-pushed from ed94cd8 to c3ea521.
Closes #205.

This PR refactors how we set up the proxy server between the developer and the edge preview service during `wrangler dev`. Of note, we start the server immediately. We also buffer requests/streams and hold on to them when starting/refreshing the token. This means a developer should never see an `ERR_CONNECTION_REFUSED` error page, or have an older worker respond after making a change to the code. And when the token does get refreshed, we flush said streams/requests with the newer values, making the iteration process a lot smoother and more predictable.
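To make the buffering behaviour concrete, here is a minimal, simplified sketch of the idea with made-up names (the real logic lives in the `usePreviewServer` hook and uses `requestResponseBufferRef`): start listening immediately, park request/response pairs while there is no token, and flush them through the real handler once a token arrives or is refreshed.

```ts
import { createServer, IncomingMessage, ServerResponse } from "node:http";

type Buffered = { request: IncomingMessage; response: ServerResponse };

let previewToken: string | undefined; // not fetched yet
const buffer: Buffered[] = [];

function handleRequest(request: IncomingMessage, response: ServerResponse) {
  // In the real code this forwards the request to the preview host and adds
  // the preview token header; here we just acknowledge it.
  response.end(`proxied ${request.url} with token ${previewToken}`);
}

// Start listening straight away, so the browser never gets ERR_CONNECTION_REFUSED.
createServer((request, response) => {
  if (previewToken === undefined) {
    buffer.push({ request, response }); // hold on to it until the token is ready
  } else {
    handleRequest(request, response);
  }
}).listen(8787);

// Called whenever a (new) preview token is fetched: flush and replay the buffer.
export function onTokenRefreshed(token: string) {
  previewToken = token;
  buffer.forEach(({ request, response }) => handleRequest(request, response));
  buffer.length = 0;
}
```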