
Reordering of pages takes a long time and also times out #1954

Closed
tdelam opened this issue Sep 18, 2012 · 14 comments

Comments

@tdelam
Contributor

tdelam commented Sep 18, 2012

The current site has about 15 pages and 2 engines. When I reorder pages and click "Done reordering pages", it sits there with the spinner for a good 10-12 seconds.

After I noticed this I doubled the count to 30 pages and tried to reorder again, and it times out completely.

@Matho
Contributor

Matho commented Sep 19, 2012

Are you using 2.0.8? In versions < 2.0.8 there is a bug which triggers a full tree rebuild on every reorder, which of course takes more time. But yes, acts_as_nested_set produces many SQL queries when reordering the tree; it should be more efficient. Try hiding all children by default with `config.auto_expand_admin_tree = false` in initializers/refinery/pages.rb.
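For reference, a sketch of what that setting could look like, assuming the standard Refinery initializer block (the exact file layout may differ per app):

```ruby
# config/initializers/refinery/pages.rb
Refinery::Pages.configure do |config|
  # Collapse all children in the admin tree by default, so the reorder
  # view doesn't have to render the entire expanded tree up front.
  config.auto_expand_admin_tree = false
end
```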

@tdelam
Contributor Author

tdelam commented Sep 19, 2012

@Matho I am using edge.

I was talking with parndt and robyurkowski, and it sounds like the slugs get regenerated instead of just slotting the newly ordered set into the lft/rght positions, so it sounds more like that would be the bottleneck.
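To illustrate the lft/rght positions mentioned above: in the nested set model, reordering only needs each node's left/right bounds reassigned, with no slug work at all. A toy sketch (illustrative names, not Refinery code):

```ruby
# Minimal nested-set numbering: each node gets a (lft, rgt) pair such
# that a node's descendants fall strictly inside its own bounds.
Node = Struct.new(:name, :children, :lft, :rgt)

def assign_bounds(node, counter = 1)
  node.lft = counter
  counter += 1
  node.children.each { |child| counter = assign_bounds(child, counter) }
  node.rgt = counter
  counter + 1
end

tree = Node.new('home', [Node.new('about', []), Node.new('contact', [])])
assign_bounds(tree)
p [tree.lft, tree.rgt]                      # => [1, 6]
p tree.children.map { |c| [c.lft, c.rgt] }  # => [[2, 3], [4, 5]]
```

Reordering siblings just swaps their bound pairs; slugs never need to be touched.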

@Matho
Contributor

Matho commented Sep 19, 2012

I dug a little deeper and I think I have found the problem (but only partially). In crud.rb (under core/lib/refinery in refinerycms), the update_positions method calls `#{class_name}.rebuild!`, which is aliased to rebuild_with_invalidate_cached_urls!. That calls find_each — it selects all pages from the DB and runs invalidate_cached_urls on each one. And there the problem is `[self, children].flatten.each`: if you have 140 pages, like I have, that produces 140 SELECT queries looking up the children of each page just to build its URL and flush it from disk. If the main goal of that method is to flush the page cache (on disk), it is extremely inefficient. We can flush the disk cache by calling expire_cache from the PageSweeper, which removes the whole cache folder via `cache_root.rmtree`.
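A standalone sketch (not Refinery code) of the cost difference described above: per-page invalidation issues one children lookup per page, while flushing the cache directory is a single operation regardless of page count.

```ruby
require 'tmpdir'
require 'fileutils'
require 'pathname'

Page = Struct.new(:id, :parent_id)
pages = (1..140).map { |i| Page.new(i, i > 1 ? 1 : nil) }

# Per-page invalidation: every page triggers its own children lookup,
# the in-memory stand-in for one "SELECT ... WHERE parent_id = ?" each.
queries = 0
pages.each do |page|
  queries += 1
  pages.select { |p| p.parent_id == page.id }
end
puts queries # => 140, linear in the number of pages

# Flushing the whole cache folder, as expire_cache is described to do,
# is one call no matter how many pages exist:
cache_root = Pathname.new(Dir.mktmpdir)
FileUtils.touch(cache_root.join('about.html'))
cache_root.rmtree # one flush, zero per-page queries
puts cache_root.exist? # => false
```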

As I read the commit message in #1706 (comment), the main problem was that the cache was not flushed after update_positions, so he had problems with cached content on the frontend.

But as I see it, there is an after filter — `after_filter lambda{::Refinery::Page.expire_page_caching}, :only => [:update_positions]` — which should expire the cache after update_positions. So simply deleting the code `#{class_name}.rebuild! if #{class_name}.respond_to?(:rebuild!)`, and all code associated with it, should speed up this process.

@parndt
Member

parndt commented Sep 19, 2012

The problem is that if we delete the rebuild! code then nothing will get reordered, right? That's what rebuild! does :-)

I think the better way forward is to optimise so that we don't get 140 SELECT queries using efficient invalidation as @Matho says.

@Matho
Contributor

Matho commented Sep 19, 2012

No, no — reordering works with that line commented out. All that line does is invalidate the page cache, very inefficiently. (Except for the method rebuild_without_invalidate_cached_urls! — I'm unable to find where that comes from.)

@parndt
Member

parndt commented Sep 19, 2012

Looks like it was only ever merged to 2-0-stable (cc @ugisozols) so it is defined here:

https://github.com/refinery/refinerycms/blob/2-0-stable/pages/app/models/refinery/page.rb#L191-L195

alias_method_chain aliases the original method as method_name_without_alias_chain, i.e. rebuild_without_invalidate_cached_urls!, so that is the original rebuild! (which is defined in awesome_nested_set).
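A framework-free toy example of what that aliasing expands to (illustrative class and return values, not actual Refinery code):

```ruby
class Tree
  def rebuild!            # stands in for awesome_nested_set's original rebuild!
    'rebuilt'
  end

  def rebuild_with_invalidate_cached_urls!
    # Extra behaviour wrapped around the original method.
    original = rebuild_without_invalidate_cached_urls!
    "#{original} + cache invalidated"
  end

  # `alias_method_chain :rebuild!, :invalidate_cached_urls` expands to
  # roughly these two aliases:
  alias_method :rebuild_without_invalidate_cached_urls!, :rebuild!
  alias_method :rebuild!, :rebuild_with_invalidate_cached_urls!
end

puts Tree.new.rebuild!
# => "rebuilt + cache invalidated"  (the wrapped version is now rebuild!)
puts Tree.new.rebuild_without_invalidate_cached_urls!
# => "rebuilt"  (the original is still reachable under the _without_ name)
```

This is why rebuild_without_invalidate_cached_urls! has no visible definition in the Refinery source: it only exists as the alias created by the chain.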

@Matho
Contributor

Matho commented Sep 19, 2012

Ah, now it makes sense where it comes from. With that line commented out, reordering worked. It rebuilds only if the tree is not valid, and in my situation the tree always was valid, so no rebuild was needed. But those 140 queries were produced by the code `find_each { |page| page.send :invalidate_cached_urls }`.

@parndt
Member

parndt commented Sep 19, 2012

Makes sense. We need to optimise what happens when find_each is called (perhaps not calling it but just invalidating all cached urls?)
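One way the "invalidate all at once" idea could work is loading the tree once and resolving children in memory, so invalidation needs a single query instead of one per page. A hypothetical sketch (names are illustrative, not Refinery's API):

```ruby
Page = Struct.new(:id, :parent_id, :slug)

# One query's worth of data: all pages loaded up front.
pages = [
  Page.new(1, nil, 'home'),
  Page.new(2, 1,   'about'),
  Page.new(3, 1,   'team')
]

# Single pass builds a parent -> children index in memory...
children_of = Hash.new { |h, k| h[k] = [] }
pages.each { |p| children_of[p.parent_id] << p }

# ...so invalidating each page's URL plus its children's needs no
# further database lookups at all.
invalidated = pages.flat_map do |page|
  [page, *children_of[page.id]].map(&:slug)
end
p invalidated.uniq # => ["home", "about", "team"]
```

The per-page children SELECTs disappear because the lookup is a hash access rather than a query.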

@Matho
Contributor

Matho commented Sep 19, 2012

Yes — drop that find_each line. I think the page cache still gets flushed — there is an after filter in pages_controller. But I haven't tested yet whether it works.

@Matho
Contributor

Matho commented Sep 20, 2012

Tested with find_each commented out (but without running the test suite — I have problems running the tests): the page cache is correctly flushed after update_positions by the expire_page_caching call in the after_filter.

@ugisozols
Member

Speaking of the cache — is it even created? As I understand it, https://github.com/refinery/refinerycms/blob/2-0-stable/pages/app/models/refinery/page.rb#L339-341 should create new cache entries when it can't find them, but that method isn't called anywhere in the code.

Am I missing something?

Btw, with find_each removed the specs still pass (because we don't have any spec coverage for that part ;)).

@parndt
Member

parndt commented Oct 17, 2012

So what should we do here?

ugisozols added a commit that referenced this issue Oct 24, 2012
It's not needed because:

* cache is cleared by after_filter in admin pages controller
* when rebuild is called it runs #invalidate_cached_urls on every page
and its children so in case user has many pages it causes a lot of
queries which increases load time and sometimes even time out.

See #1954 for discussion.
@ugisozols
Member

@parndt see ^ commit message.

ugisozols added a commit that referenced this issue Oct 30, 2012 (same commit message as above)
ugisozols added a commit that referenced this issue Nov 7, 2012 (same commit message as above)
ugisozols added a commit that referenced this issue Nov 9, 2012 (same commit message as above)
@ugisozols
Member

#1998 was merged. Closing.
