
Cache is invalidated at each new release #1550

Open
vuntz opened this issue Jul 28, 2024 · 4 comments
Labels: need:reporter feedback (feedback from reporter required), performance

Comments

@vuntz
Contributor

vuntz commented Jul 28, 2024

At every update, radicale recreates the cache from scratch. It's a relatively slow process on my server (it can take 5 minutes for one calendar; I'm not sure it should be that slow). This got me wondering whether we could avoid invalidating the cache like this.

The version of the cache depends on the versions of radicale / vobject / python-dateutil (see https://github.com/Kozea/Radicale/blob/master/radicale/storage/__init__.py#L41-L44). This is understandable given the use of pickle. While radicale has no control over its dependencies, it does know when there are incompatible changes in radicale itself. Would it be acceptable to use an internal version for radicale instead of the package version? This would require some discipline to bump it whenever the internal structure changes, which may or may not be acceptable...
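To illustrate the idea (this is a hedged sketch, not Radicale's actual code — the function names, version strings, and the `CACHE_SCHEMA_VERSION` constant are hypothetical): a tag derived from package versions changes on every release, while a hand-maintained schema version only changes when the pickled layout actually changes.

```python
import hashlib


def version_tag_from_packages(versions):
    """Cache tag derived from package versions.

    Any release of any listed package (including radicale itself)
    produces a new tag, so the cache is invalidated on every update.
    """
    joined = ";".join(f"{name}={ver}" for name, ver in sorted(versions.items()))
    return hashlib.sha256(joined.encode()).hexdigest()[:16]


# Hypothetical internal schema version, bumped manually only when the
# on-disk pickle structure changes incompatibly.
CACHE_SCHEMA_VERSION = 1


def version_tag_internal(dependency_versions):
    """Cache tag that ignores radicale's own package version.

    Dependencies whose pickled objects radicale cannot control
    (vobject, python-dateutil) still participate in the tag.
    """
    joined = ";".join(f"{n}={v}" for n, v in sorted(dependency_versions.items()))
    joined += f";schema={CACHE_SCHEMA_VERSION}"
    return hashlib.sha256(joined.encode()).hexdigest()[:16]
```

With the second scheme, a radicale release that only touches unrelated code would leave the tag (and hence the cache) untouched; the trade-off is the manual discipline mentioned above.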

@pbiering pbiering added the need:reporter feedback feedback from reporter required label Jul 28, 2024
@pbiering
Collaborator

@vuntz :

  • what do you mean with "update"?
  • which version are you using?
  • which client is in use?

If I create a new item with e.g. "Thunderbird", all existing files in the related collection's cache directory .Radicale.cache/item are untouched; only a new file is created for the new item.

I use this command to check:

watch "ls -ltr .Radicale.cache/item/ | tail -10"

@vuntz
Contributor Author

vuntz commented Jul 28, 2024

> @vuntz :
>
> • what do you mean with "update"?

Updating radicale to a new version on my setup (3.2.1 -> 3.2.2, for instance).

> • which version are you using?

3.2.2. But it's been happening with every update for a while; I just hadn't investigated until now.

> • which client is in use?

It happens only the first time a calendar is "used" after the update (usually via a PROPFIND request). That first request triggers the cache invalidation (since the cache version is not what radicale expects) and the cache is re-created. Afterwards, the cache works fine.

I've seen it with evolution, caldav-sync on android, standard iOS, etc.

It's really 100% reproducible here, but you need to bump radicale's version before testing.

@vuntz
Contributor Author

vuntz commented Jul 28, 2024

As an example of the first request for a calendar:

[2024-07-28 18:31:40 +0200] [1404023/Thread-37 (process_request_thread)] [INFO] PROPFIND request for '/xxx/calendar/' with depth '0' received from 127.0.0.1 (forwarded for 'a.b.c.d') using 'Evolution/3.50.3'
[2024-07-28 18:31:40 +0200] [1404023/Thread-37 (process_request_thread)] [INFO] Successful login: 'xxx'
[2024-07-28 18:41:23 +0200] [1404023/Thread-37 (process_request_thread)] [INFO] PROPFIND response status for '/xxx/calendar/' with depth '0' in 582.752 seconds: 207 Multi-Status

This calendar has 1377 items.

The first PROPFIND request for another calendar with 2616 items took 738.828 seconds.
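For scale, the two timings above imply a roughly constant per-item rebuild cost (illustrative arithmetic only):

```python
# Per-item cost implied by the two first-PROPFIND timings reported above.
timings = [(1377, 582.752), (2616, 738.828)]
for items, seconds in timings:
    print(f"{items} items: {seconds / items:.2f} s/item")
```

That works out to roughly 0.3–0.4 seconds per item, which suggests the rebuild time scales with collection size rather than being a fixed overhead.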

@pbiering
Collaborator

@vuntz : now I understand the situation; I had never watched for it so far.

Touching the storage cache code for this "upgrade" scenario alone sounds like overkill for now; it would potentially be better to run a storage verification as a post-upgrade task (useful anyhow...).

Can you check whether running such a verification after the upgrade helps? See here for hints:
https://github.com/Kozea/Radicale/wiki/Server-Diagnostics---Troubleshooting
