REST API: Progressively load data into components through pagination #6694
If we went this route (makes sense to me), it would be useful for the JS to log responses with the maximum number of results.
I may be wrong, but isn't this the case for the current core approach as well? As far as I can tell, the select rendered by core loads everything too. Is there a difference in memory usage between what gets rendered server-side vs. how it would be done with an API call and rendered through React?
Sorry for the confusion. To clarify, the arbitrarily high limit was a workaround to a current bug in WordPress core that will need to be addressed in the ticket. The end goal is to pass `-1` through to the endpoint. If you were to comment out the workaround and run the request, you'd hit that core bug.
Ultimately, the problem is
I've updated the Trac ticket accordingly: https://core.trac.wordpress.org/ticket/43998#comment:2
Can you explain what the user experience for this might be, particularly as it relates to issues like #6487?
Correct, and neither does
For a summary of the path core went down, see #5921 (comment)
My suggestion from earlier: a number high enough (20,000 or whatever) will probably just crash most servers anyway. Returning the first 20,000 results and outputting a console warning might be better than 500ing.
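As a rough illustration of both suggestions (logging the result count, and warning instead of 500ing), here is a minimal sketch assuming a hypothetical `MAX_RESULTS` cap and relying on the `X-WP-Total` header that WP REST API collection responses include:

```js
// Hypothetical cap; 20,000 is just the number floated above, not a vetted value.
const MAX_RESULTS = 20000;

async function fetchCapped( path ) {
	const response = await window.fetch( `/wp-json${ path }` );
	// X-WP-Total is a standard WP REST API collection response header.
	const total = parseInt( response.headers.get( 'X-WP-Total' ), 10 );
	if ( total > MAX_RESULTS ) {
		console.warn(
			`Collection has ${ total } items; results capped at ${ MAX_RESULTS }.`
		);
	}
	return response.json();
}
```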
I would expect the REST API to perform significantly worse than
Right, this is the main part I disagree with. I think it is better to always set a reasonable limit to the number of results the REST API can return.
I suspect that 20,000 is way too high for most servers.
I second this concern (previously raised on #6657). We should not enable unbounded requests, even for authenticated users.
@nylen @adamsilverstein To move the conversation forward, can you suggest an alternative implementation with design- and accessibility-approved UX?
Great question. I still think my original concept of a select that also performs an ajax search is both the best UX and the least resource-intensive design. Unfortunately, my original implementation in #5921 was with
I've explored leveraging our autocompletion code here; this would work similar to the current tags selector (except you can't add authors, and you can only select one)... The goal is for it to look something like this:
Unless we can find an accessible select component we like, this seems like a reasonable second choice.
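As a sketch of that select-plus-ajax-search concept, the REST API's standard `search` collection parameter could back the lookup; the debounce interval and the `setOptions` callback below are placeholders for the component wiring, not a settled design:

```js
import apiFetch from '@wordpress/api-fetch';
import { debounce } from 'lodash';

// Query authors matching the typed term instead of loading every user up
// front; /wp/v2/users supports ?search= out of the box.
const searchAuthors = debounce( async ( term, setOptions ) => {
	const users = await apiFetch( {
		path: `/wp/v2/users?search=${ encodeURIComponent( term ) }&per_page=20`,
	} );
	setOptions(
		users.map( ( { id, name } ) => ( { value: id, label: name } ) )
	);
}, 250 );
```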
Building a UI that uses the existing pagination features of the API would be ideal. I realize this is difficult, and this is not the right issue to debate how this is accomplished. So my proposal here is not related to UI implementation at all; instead, it is just a robustness improvement to the API code merged in #6657:
There was a time when WP Core was working with and auditing Select2 for compatibility; perhaps that could be used as an alternative (not sure how far along that project got before it seemed to stall out)? In any case, I further agree that a limit should be used, likely not more than 5k if we want to account for the average server of an average WP user. The average WordPress site has resources that are often significantly less than an average WP dev's localhost resources. An ajax-based approach has the ability to scale, meet accessibility requirements, provide a significantly better user experience, and also meet the design of Gutenberg.
See #5921 (comment), which I mentioned earlier.
Can you share your data and methodology for coming to this amount?
Again, none of the prior research has identified an AJAX-based approach that meets accessibility requirements. If you have a specific proposal to share, please do.
Also, as I mentioned in Post Status Slack, this was a Hard decision, not an Easy one. Easy decisions make sense to everyone. Hard decisions involve trade-offs, and require a great deal of background information to understand fully. #4622 sat for months before it was fixed with #6627. It was not a rushed decision, but rather weeks of discussion and consideration leveraging years of domain experience. The decision we came to is documented in #6180 (comment). If someone wants to put in the effort towards producing an accessible and acceptable UI alternative, we welcome your participation. Keep in mind that it will be multiple weeks of research, documentation, and planning — not a quick pull request. And, if you have time to spend on other Hard problems, we have an entire Back Compat milestone to work through.
Again, this issue is not the right place to discuss alternative UI implementations, because that task is orders of magnitude more complicated than what I am proposing here. @danielbachhuber So far, I have not seen a response to the specific suggestion of moving to a tested and considered higher limit that is not essentially infinite. Is this something worth pursuing, or is there a concern with this approach that I am not aware of? I created this issue because I think the current use of the `-1` value needs to be revisited.
WordPress is run on plenty of shared hosts. Plenty of developers working on Gutenberg are using quad-core MacBook Pros with 16GB of RAM. I think it's a pretty safe assumption.
But that is the core of the problem: if you introduce some arbitrary upper limit to the request, you'll need to communicate (in some way) to the end user why they can't access all potential authors, categories, pages, etc.
If you can get design sign-off on the user experience, then by all means. To the best of my knowledge, this would be an inadequate user experience and is unlikely to be approved from a design/UX perspective.
Just so it's stated, this is an assumption, not data and methodology.
To take this a step forward... The specific data I'd like to see is:
Some other mitigation techniques we can explore are:
How about moving from
Actually, we already identified the Gutenberg autocomplete component that fills the need. It's accessible, already in Gutenberg, and performs Ajax searches.
Can you open a new issue with the specifics of your proposal, please?
Sorry, I just realized you posted details in #6694 (comment). I'll defer to design as to whether autocomplete is an appropriate replacement for author, category, and page selection. It doesn't seem like it would solve the problem for shared blocks.
I've posted some follow-up work on enabling autocomplete as an author search mechanism; see #5921 (comment).
I've looked a bit at the super-select example http://alsoscotland.github.io/react-super-select/react-super-select-examples.html#basic_example and, although it doesn't fully meet the ARIA patterns, it doesn't differ so much from the Gutenberg autocompleters. From a UX perspective, the main difference is that it presents an initial set of results, as also mentioned by Adam on #5921 (comment). I will comment further there.
I just ran into this in a pre-production deployment of Gutenberg. We have a few thousand pages, so trying to show the Page Attributes brings MySQL to its knees. Of course there are other factors; this site is using Visual Composer, which overloads
It may not come as a surprise that my opinion is that all unbound queries should be an absolute no-go if we want WordPress to be able to scale. 100k pages, or 100k users, should be perfectly possible with WordPress, and right now with Gutenberg that is not the case. I mainly just want to add one datapoint though: we have Gutenberg on a site with a moderate amount of content (maybe 6GB DB in total) and I'm now having to work out how to disable this Gutenberg behaviour.
Taking this a little further, there are a lot of optimisations that could be done in the REST API endpoints. In the above example from @danielbachhuber, the pages endpoint is running hooks for the content, author, and pretty much all other fields. I believe the controllers are not optimised for
What you've identified is likely the true cause of the problem. I'd be curious to know if there's anything else.
This is fixed in WordPress trunk: https://core.trac.wordpress.org/changeset/43087 Can you apply that change in your pre-prod environment to assess its impact? |
Nice patch! I'm leaving for paternity break but I'll see if I can pick up testing this after that.
I don't know how much this line will be taken forever more to try to justify unbound queries, but as I said, I mainly wanted to add a data point: one practical reason why this is going to cause issues in the wild.
Thanks @danielbachhuber! New ticket looks good.
As of this morning, I think we have an alternative path forward:
To resolve this particular issue, we can update the API fetching code to traverse pagination after a general purpose solution for #6723 lands. The JavaScript code for the former doesn't necessarily need to be blocked by the latter. |
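A sketch of what that traversal could look like with `@wordpress/api-fetch`, using the `X-WP-TotalPages` response header to know when to stop; the endpoint path and page size below are illustrative, not the eventual implementation:

```js
import apiFetch from '@wordpress/api-fetch';

// Walk ?page= until X-WP-TotalPages is exhausted. per_page=100 is the
// REST API's default per-request maximum.
async function fetchAllPages( path ) {
	const first = await apiFetch( {
		path: `${ path }?per_page=100&page=1`,
		parse: false, // keep the raw Response so headers are readable
	} );
	const totalPages = parseInt( first.headers.get( 'X-WP-TotalPages' ), 10 );
	let items = await first.json();
	for ( let page = 2; page <= totalPages; page++ ) {
		items = items.concat(
			await apiFetch( { path: `${ path }?per_page=100&page=${ page }` } )
		);
	}
	return items;
}

// Usage: fetchAllPages( '/wp/v2/pages' ).then( ( pages ) => { /* ... */ } );
```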
So an idea: What about using localStorage? For any collection, we could add a
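The comment above is cut off, but as a sketch of the general localStorage idea, assuming each cached collection is keyed by its path and stamped with a fetch time (the key scheme and TTL are invented for illustration):

```js
// Hypothetical cache helpers; the TTL and key format are placeholders.
const CACHE_TTL_MS = 5 * 60 * 1000;

function readCachedCollection( path ) {
	const raw = window.localStorage.getItem( `collection:${ path }` );
	if ( ! raw ) {
		return null;
	}
	const { fetchedAt, items } = JSON.parse( raw );
	// Treat stale entries as misses so fresh data gets refetched.
	return Date.now() - fetchedAt < CACHE_TTL_MS ? items : null;
}

function writeCachedCollection( path, items ) {
	window.localStorage.setItem(
		`collection:${ path }`,
		JSON.stringify( { fetchedAt: Date.now(), items } )
	);
}
```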
@aaronjorbin as a data cache? I think the
👍 to this. Let's fix the blocker first.
I've begun work on an implementation of the recursive page fetching; I will update by tomorrow at the latest with progress.
Just so it's noted, this is a blocker for 5.0 beta1. Because https://core.trac.wordpress.org/ticket/43998 was closed, this issue needs to be resolved prior to beta1. Without a solution to this issue, Gutenberg the editor will query with `per_page=-1`.
Notes from @joehoyle on this paginated approach:
This is good feedback, and this approach raises a few questions:
In the core REST-API channel I've proposed upping the
@kadamwhite Our options at this point are:
Arbitrarily capping the number of results is not an option.
Like I've mentioned several times before, an accessible typeahead search dropdown doesn't solve for all five UI elements that are impacted by this:
|
"Arbitrarily capping the results is not an option", as in raising the cap to 500? May I ask why? It feels like that would be a nicely complementary modification to any other changes we make here.
Just to clarify, if we raise the cap to 500, would I be able to access my 501st item? If yes, then we're talking about option 2. If no, then it's not a viable option.
We'd need to evaluate performance / memory consumption. My gut says 200 would be fine, 350 might be ok, and 500 is where we get to a magically bad number for
I was specifically proposing upping the limit in conjunction with paginating over each page response to assemble the complete set. The goal would be to balance the cost of bootstrapping WordPress with the cost of requesting too many items; so, for example, if we cap at 350, a request for 800 items will generate three requests to the server.
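In other words, the request count is `Math.ceil( total / cap )`. A sketch under the assumption that the server-side maximum has been raised to 350, per the discussion above:

```js
import apiFetch from '@wordpress/api-fetch';

// Hypothetical raised cap; assumes the server now accepts per_page=350.
const PER_PAGE_CAP = 350;

async function fetchInChunks( path, total ) {
	const pageCount = Math.ceil( total / PER_PAGE_CAP ); // 800 items -> 3 requests
	const requests = [];
	for ( let page = 1; page <= pageCount; page++ ) {
		requests.push(
			apiFetch( {
				path: `${ path }?per_page=${ PER_PAGE_CAP }&page=${ page }`,
			} )
		);
	}
	// Issue the requests in parallel and flatten into one result set.
	return ( await Promise.all( requests ) ).flat();
}
```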
Ok. I'd be fine with 350 unless @pento has opinions otherwise.
To be clear though, we should land #10762 first and foremost. We can more easily raise the limit after beta1.
For ideal implementations of these, do you think that there is ever a need to load all content? I'm not sure if it's widely accepted that it's pretty much always a bad idea to have to load all of
We're not blue-sky thinking right now. We're a day away from 5.0 beta1, this issue is blocking it, and we're producing the most practical solution possible given constraints and circumstances.
FYI: I am unsubscribing from this issue because, despite repeating myself multiple times and asking follow-up questions, there was one single comment from a Gutenberg team member where the original goal of the issue was understood; the rest were either unrelated to what I was proposing, worse solutions, or orders of magnitude more difficult.
This was my goal from the start with this issue. If you'd like to discuss what this could look like, pick a Slack instance and ping me on it.
I have opened #10845 implementing paging handling directly within
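The PR has the actual details; purely as a sketch of the middleware-style approach, `@wordpress/api-fetch` exposes `apiFetch.use()`, so intercepting `per_page=-1` and fanning it out into bounded page requests could look roughly like this (an assumption about the shape, not the PR's code):

```js
import apiFetch from '@wordpress/api-fetch';

// Transparently satisfy per_page=-1 by fetching successive bounded pages.
apiFetch.use( async ( options, next ) => {
	if ( ! options.path || ! options.path.includes( 'per_page=-1' ) ) {
		return next( options );
	}
	let page = 1;
	let items = [];
	let batch;
	do {
		batch = await next( {
			...options,
			path: options.path.replace(
				'per_page=-1',
				`per_page=100&page=${ page++ }`
			),
		} );
		items = items.concat( batch );
	} while ( batch.length === 100 );
	return items;
} );
```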
Fixed by #10762
Follow-up to #6657 - cc @danielbachhuber. This PR introduced a magic value of `-1` into a couple of API endpoints, which means to return an "unbounded" number of items (gutenberg/lib/rest-api.php, lines 512 to 513 in fb804e2).
However, this value doesn't actually do what it says - it really means some arbitrarily high limit instead of unbounded (gutenberg/lib/rest-api.php, lines 532 to 536 in fb804e2).
This is a bit unclear and requires more complicated code than necessary, as well as a core patch referenced on the initial PR.
Allowing unbounded (or arbitrarily high) requests is also basically guaranteed to break for some sites due to memory or execution time constraints. This is a pretty opaque failure mode which doesn't leave the average user much chance of recovery.
I think a better trade-off would be to increase the limit for users who can `edit_posts` to a larger, but still plausible, value, such as 1000 or 2000. This value should also be filterable by hosting providers. This way a site with a very large number of items would still work, but the cut-off for when items stop appearing in the UI would be much higher. This will lead to simpler API code with more predictable failure modes. Ideally, a site with this high number of posts would also be tested so that the performance characteristics are roughly understood.