
A way to stream content into an element #2142

Open
jakearchibald opened this issue Dec 7, 2016 · 21 comments
Labels
addition/proposal New features or enhancements

Comments

@jakearchibald
Contributor

Use-case: I work on a news site and I want to create visual transitions between articles, but I don't want to lose the benefits of streaming. So:

  • User clicks on link
  • Create a new article element to contain the incoming article
  • Start fetching content
  • Pipe content into new article element
  • Begin the visual transition once elements appear in the new article element

Not only is innerHTML a slower way to do this (due to a lack of streaming), it also introduces a number of behavioural differences. It'd be great to try to limit these, e.g. by allowing inline scripts to execute before additional elements are inserted.

@jakearchibald
Contributor Author

Parts of this can be hacked using document.write and <iframe>, but it'd be good to have a non-hacky way.

Here's code for the above use-case, assuming element.writable provided a way to stream HTML into the element:

const article = document.createElement('article');
const response = await fetch('article.include');
const articleHasContent = new Promise(resolve => {
  const observer = new MutationObserver(() => {
    observer.disconnect();
    resolve();
  });
  observer.observe(article, {childList: true});
});

response.body
  .pipeThrough(new TextDecoderStream())
  .pipeTo(article.writable);

await articleHasContent;
performTransition();

@zcorpan
Member

zcorpan commented Dec 7, 2016

There is also https://w3c.github.io/DOM-Parsing/#idl-def-range-createcontextualfragment(domstring) which does execute scripts (when inserted into a document). As a possible alternative to document.write in the meantime...

@jakearchibald
Contributor Author

TIL! I guess you mean an alternative to innerHTML?

@zcorpan
Member

zcorpan commented Dec 7, 2016

No, as an alternative to document.write in your hack. Don't even need an iframe, just a Range instance.

http://software.hixie.ch/utilities/js/live-dom-viewer/saved/4716

@jakearchibald
Contributor Author

@zcorpan ah, that doesn't allow partial trees http://software.hixie.ch/utilities/js/live-dom-viewer/?saved=4717, which you kinda need if you're trying to stream something like an article.

@rianby64

rianby64 commented Dec 7, 2016

Hello! I'm curious about the possibility of inserting scripts.

I'm trying to execute a script, and I see that the script is executed after insertion.

<!DOCTYPE html>
<body>
<script>
var r = new Range();
// Write some more content - this should be done async:
document.body.appendChild(r.createContextualFragment('<p>hello'));
document.body.appendChild(r.createContextualFragment(' world</p>'));
document.body.appendChild(r.createContextualFragment('<script>console.log("yeap");<\/script>'));
// done!!
</script>
</body>

I have two questions:

What if the inserted script has the async or defer flag? Those flags will have no effect, right?

Will you keep this function in the standard? Or are you planning to remove it?

Finally, I found (you gave me) the best way to insert content into the document. Thanks a lot!

@jakearchibald
Contributor Author

@rianby64 note that the example above creates two paragraphs rather than one.

What if the inserted script has the async or defer flag? Those flags will have no effect, right?

The scripts will be async, as if you'd created them with document.createElement('script'). For the streaming solution I mentioned in the OP, I'd like the parser to queue DOM modifications while a non-async/defer script downloads and executes, but allow something like a look-ahead parser.

Will you keep this function in the standard?

Which function? createContextualFragment? I don't see why it'd be removed.

@rianby64

rianby64 commented Dec 7, 2016

OK. Thanks a lot again.

@zcorpan
Member

zcorpan commented Dec 7, 2016

ah, that doesn't allow partial trees

Indeed.

@wanderview
Member

In general I think we want to be able to provide ReadableStream or Response objects to APIs that currently take a URL. @jakearchibald, would something that let you assign a ReadableStream or Response (backed by a stream) to an iframe.src satisfy your use case?

@bzbarsky
Contributor

bzbarsky commented Dec 7, 2016

The key part here is to not have to use a separate iframe plus adoption of the current parser insertion point into a different document. Instead, we just want to parse into an existing document location.

@wanderview
Member

This means creating an element that has the concept of a partially loaded state, right? An iframe already has all of that, but do other HTML container elements? So wouldn't we need to create something that has all the load event, error event, and other stateful information of an iframe? Or maybe all that exists today. HTML always catches me out.

@jakearchibald
Contributor Author

jakearchibald commented Dec 8, 2016

@wanderview

This means creating an element that has the concept of partially loaded state, right?

I think we can get away without this. If an element has a writable endpoint you'll get locking on that writable for free. However, during streaming you'll be able to modify the children of the element, even set the element's innerHTML. The HTML parser already has to deal with this during page load, so I don't think we need to do anything different.

So wouldn't we need to create something that has all the load event, error event, and other stateful information of an iframe?

We probably don't need this either. htmlStream.pipeTo(div.writable) - since pipeTo already returns a promise you can use that for success/failure info.
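Since the element endpoint is still hypothetical, the success/failure behaviour can be sketched with plain web streams (runnable in Node 18+), using a chunk-collecting WritableStream as a stand-in for the proposed element writable:

```javascript
// Stand-in for the proposed element writable: it just collects HTML chunks.
const chunks = [];
const writable = new WritableStream({
  write(chunk) { chunks.push(chunk); },
});

// A fake htmlStream delivering an article in two pieces.
const htmlStream = new ReadableStream({
  start(controller) {
    controller.enqueue('<p>hello ');
    controller.enqueue('world</p>');
    controller.close();
  },
});

// pipeTo's promise doubles as the load/error signal:
// it fulfills on success and rejects on failure.
const done = htmlStream.pipeTo(writable).then(
  () => 'success',
  (err) => 'failure: ' + err,
);
```

No extra load/error events are needed on the element itself: whoever starts the pipe already holds the promise that reports the outcome.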

@zcorpan zcorpan added the addition/proposal New features or enhancements label Dec 8, 2016
@blaine

blaine commented Dec 10, 2016

How would this interact / compare with the following scenario:

Rather than fetching HTML snippets from the server, I'm much more likely to be able to fetch [we'll assume newline-delimited to enable stream parsing] a minimal JSON encoding of whatever entity I'm trying to display.

Partially, this is just down to the fact that most web servers wrap HTML output in a series of filters, one of which is a base "..." template. Obviously, that can [easily, from a purely technical perspective] change, but spitting out independent <div>s is going to take some cultural change on the server side. JSON we have today, and it won't take any convincing.

... so, assuming that we use JSON, is there a performance win to being able to render JSON snippets [as they come over the network] to HTML? The trade-off I'd assume we're making is on triggering additional layouts; put another way, is it faster to do:

  1. [ JSON blob representing n divs ]: JSON -> HTML -> DOM in one step -or-
  2. [ streaming JSON blob representing n divs ]: while (nextJSONitem) { JSON -> HTML -> DOM }
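Option 2 can be sketched with a TransformStream that splits newline-delimited JSON into parsed records as chunks arrive (runnable in Node 18+; NDJSONParser is a made-up name, and the record-to-DOM step is left out):

```javascript
// Parses newline-delimited JSON incrementally: each complete line becomes
// one parsed record, and a trailing partial line is buffered until the
// rest of it arrives.
class NDJSONParser extends TransformStream {
  constructor() {
    let buffer = '';
    super({
      transform(chunk, controller) {
        buffer += chunk;
        const lines = buffer.split('\n');
        buffer = lines.pop(); // keep the trailing partial line
        for (const line of lines) {
          if (line.trim()) controller.enqueue(JSON.parse(line));
        }
      },
      flush(controller) {
        if (buffer.trim()) controller.enqueue(JSON.parse(buffer));
      },
    });
  }
}

// A fake network source that splits a record across chunk boundaries.
const items = [];
const source = new ReadableStream({
  start(c) {
    c.enqueue('{"title":"A"}\n{"ti');
    c.enqueue('tle":"B"}\n');
    c.close();
  },
});

const parsed = source
  .pipeThrough(new NDJSONParser())
  .pipeTo(new WritableStream({ write(item) { items.push(item); } }));
```

Each record surfaces as soon as its closing newline arrives, so the consumer can choose per-item, batched, or all-at-once DOM insertion independently of the parsing.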

My expectation is that the answer is "it depends"; I don't have a sufficiently reliable playground for testing this to any degree of accuracy, but I would expect we'd want to keep the render pipeline as unobtrusive as possible while minimizing network->screen latency for individual items, using the following as trade-offs:

  • time to render & re-compute layout for:

    • each item in an array individually vs
    • all items in an array as a single DOM manipulation vs
    • batches of items
  • total time to:

    • fetch all items
    • pull a single item out of a streaming JSON blob

... ideally all while minimizing client complexity ("they wrote a lot of code to make things that slow"). Thankfully that part should be hidden in frameworks.

... OR am I totally barking up the wrong tree with the idea that JSON is the right delivery mechanism, and we should aim to generate server-side HTML snippets for pretty much anything that can be fetched-with-latency?

@jakearchibald
Contributor Author

@blaine I think I cover what you're asking over at https://jakearchibald.com/2016/fun-hacks-faster-content/

@wanderview
Member

@jakearchibald It kind of feels like there should be a way for code other than the one writing to the element to know if it's complete. The pipeTo promise, while useful, does not seem adequate for that.

For example, code that uses a query selector to get an element and operate on it should have some way to know if the element is in a good state. Seems like that kind of code is usually pretty independent.

@hemanth

hemanth commented Dec 10, 2016

response.body
  .pipeThrough(new TextDecoderStream())
  .pipeTo(article.writable);

Would indeed be a big win! ❤️

@blaine

blaine commented Dec 10, 2016

@jakearchibald durr. I'd read that a few days ago and forgotten the second part of your post in this context. Sorry, I blame lack of coffee. ;-)

Re-reading this more carefully, the element.writable pipe makes a ton of sense, and it'd be trivial for a rendering pipeline to make use of it, even in the JSON case. +1

@isonmad
Contributor

isonmad commented Jan 24, 2017

Wait, how would the element.writable getter even work, since a WritableStream usually (bar explicitly passing 'preventClose') can only be pipeTo'd once, after which it becomes closed and can't be written to again?

htmlStream.pipeTo(div.writable).then(() => htmlStream2.pipeTo(div.writable) /* cancels source stream and does nothing? */);

What happens when it's already locked to a previous, still incomplete, still streaming request but you changed your mind/ the user clicked to the next article already?

htmlStream.pipeTo(div.writable); // locked
htmlStream2.pipeTo(div.writable); // doesn't work, stuck waiting?

Would it have to produce a fresh WritableStream on every access? Then every access would have to instantly invalidate all the previous writable streams, so that writing to them does nothing and only the latest affects the element's contents?
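The locking behaviour in question is observable with plain web streams today (runnable in Node 18+; the never-finishing sink is contrived): while one pipe holds the destination's lock, a second pipeTo rejects immediately rather than queueing behind it.

```javascript
// A sink whose write never completes, so the first pipe stays in flight.
const writable = new WritableStream({
  write() { return new Promise(() => {}); },
});

const slowSource = new ReadableStream({
  start(c) { c.enqueue('<p>article 1</p>'); },
});
const nextSource = new ReadableStream({
  start(c) { c.enqueue('<p>article 2</p>'); c.close(); },
});

slowSource.pipeTo(writable).catch(() => {}); // acquires the lock
const second = nextSource.pipeTo(writable).then(
  () => 'unexpected success',
  (err) => err.constructor.name, // rejects: destination is already locked
);
```

So an element-level `writable` getter returning one shared stream would make the second article's pipe fail outright, not wait its turn.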

@domenic
Member

domenic commented Feb 1, 2017

@jakearchibald I'm curious how you respond to @isonmad's comment; it seems like a valid argument against a WritableStream here. And of course the lack of cancelable promises is hurting us here...

@jakearchibald
Contributor Author

Yeah, this seems like a good argument against element.writable and for something like:

htmlStream.pipeTo(div.getWritable());

or

const domStreamer = new DOMStreamer();
div.appendChild(domStreamer);
htmlStream.pipeTo(domStreamer.writable);

What happens when it's already locked to a previous, still incomplete, still streaming request but you changed your mind/ the user clicked to the next article already?

This could be done with domStreamer.abort() or somesuch, but maybe it's a more general problem to solve - how to abort a pipe.
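For what it's worth, the general "abort a pipe" problem can be sketched with an AbortSignal passed to pipeTo (this option exists in today's Streams spec and in Node 18+; whether an element API would expose it the same way is speculation):

```javascript
// A source that delivers one chunk and then stalls, like a slow network.
const htmlStream = new ReadableStream({
  start(c) { c.enqueue('<p>article'); },
  pull() { return new Promise(() => {}); },
});

const writable = new WritableStream({
  write(chunk) { /* a real sink would parse and insert the HTML here */ },
});

// pipeTo accepts an AbortSignal; aborting it tears the pipe down and
// rejects the pipe promise with an AbortError.
const controller = new AbortController();
const pipe = htmlStream
  .pipeTo(writable, { signal: controller.signal })
  .then(() => 'finished', (err) => err.name);

controller.abort(); // e.g. the user clicked through to another article
```

A hypothetical domStreamer.abort() could be sugar over exactly this kind of signal plumbing.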

Would it have to produce a fresh WritableStream on every access? Then every access would have to instantly invalidate all the previous writable streams, so that writing to them does nothing and only the latest affects the element's contents?

Taking the above models, would it be bad to allow two streams to operate within the same element? Sure you could get interleaving, but that's already true with two bits of code calling appendChild asynchronously.

The browser already has to cope with the html parser and appendChild operating on the same element, so it doesn't feel like anything new.
