Error when assembling chunks, status code 504 when using S3 #17992
Comments
@Corrupto what web server are you using? What type of PHP (Apache or FPM)? |
PHP-FPM 7.2. I hope that answers everything you asked. |
@Corrupto We were seeing this too... because of the size of the file involved, nginx is timing out before PHP is done assembling the final file. With the current version of Nextcloud, you would want to adjust your nginx configuration to increase timeouts, to give NC more time to assemble files after upload:
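(The configuration excerpt that originally followed is not preserved in this thread; below is a minimal sketch, assuming nginx in front of PHP-FPM, of the kind of timeout directives meant — 1800s matches the 30 minutes mentioned next, and the proxy_* lines only matter if another reverse proxy sits in front.)

```nginx
# In the server/location block that passes PHP requests to PHP-FPM
fastcgi_read_timeout 1800s;   # wait up to 30 min for PHP to finish assembling chunks
fastcgi_send_timeout 1800s;

# Only needed if nginx itself sits behind another reverse proxy
proxy_read_timeout 1800s;
proxy_send_timeout 1800s;
```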
This allows up to 30 minutes to pass. It probably doesn't need to be quite that high, but it all depends on how fast your server's disk I/O is. It'd be great if a future version of Nextcloud had a daemon to handle this kind of task out-of-process (like Seafile does), since larger files are becoming pretty normal. |
Can you let me know the location of the nginx configuration? It's not in the nextcloud folder. Maybe in /etc/php/7.2/fpm? Which file? |
@Corrupto It depends on which nginx container you are using... In our case, it's located in |
Issue still reproducible in Nextcloud 20.0. I cannot set nginx/PHP-FPM timeouts to high values, as that only shifts the problem instead of solving it, and at some point I start getting network disconnects instead of 504s. My proposed solution would be to mark files as ready to be reassembled and handle the lengthy reassembly process in a cron task. |
This comment was marked as off-topic.
It returns this error on big file uploads even when the files are actually uploaded correctly (checksums match, no integrity issues; the timeout event is triggered during the post-upload processing step while the uploaded chunks are still in the PHP temp folder and assembly is still in progress). Configurations where I have reproduced this issue: [1] file sizes uploaded with this error: 12 GB, 33 GB; [2] file size uploaded with this error: 40 GB. |
This comment was marked as off-topic.
After adjusting the timeouts we have the situation that big uploads end with "Error when assembling chunks, status code 504" errors, only to have the files appear "magically" in Nextcloud after some time. It seems that the feedback on screen can also be misleading. |
This comment was marked as off-topic.
I confirm this with 21.0.1.1. I hope we won't have to upgrade to version 22 in order to have this fixed, since some of us on shared hosting don't have access to the newer MySQL required to run it. |
Did anyone manage to find a solution for this? In short, I can upload seemingly any size of file, but when it says "processing files" it fails with that 504 error, even though the file IS there and matches the checksum of the original. I thought it might be the chunk size (default 10 MB); I couldn't get it changed to a different value for some reason, so I changed the code in PHP to 100 * 1024 * 1024 instead of the default 10 * 1024 * 1024, but the problem still persists. Any other ideas? I have now tried adding that timeout parameter, but I somewhat doubt that is the reason, because in my tests the failure point is QUITE close to 2.5 GB: 2.3 GB uploads with no problem (either on a 1 Gbit LAN or a 10 Mbit ADSL line), but 2.6 GB fails (which is why I thought maybe it can only handle around 256 chunks). I am running it inside a Docker container using an SSD as storage and the image linuxserver/nextcloud:amd64-version-21.0.1 |
OK, I think I have resolved this after hours of debugging — I got desperate, posted the comment above, and resumed investigating the issue. I tried mounting the upload and destination directories separately as a ramdisk (tmpfs), then both, but the benefit was negligible. After realizing it is some kind of timeout, and timing it with my kitchen timer, I found it to be 60 seconds. This resolved my issue: in the nc container I edited /config/nginx/site-confs/default, the default site config for nginx, and ADDED (just the bold part: fastcgi_read_timeout 3600s;):
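(The edited snippet itself did not survive in this thread; a minimal sketch of what the addition could look like, assuming the stock PHP handler block of the linuxserver image — only the fastcgi_read_timeout line is the actual addition, everything else stays as shipped.)

```nginx
# /config/nginx/site-confs/default — inside the existing PHP location block
location ~ \.php(?:$|/) {
    # ... keep the existing fastcgi_* directives as they are ...
    # Added: give PHP up to an hour to assemble uploaded chunks before nginx gives up
    fastcgi_read_timeout 3600s;
}
```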
|
@voyager-hr This is a bug that must be fixed on the Nextcloud side, by keeping the connection alive until completion rather than by increasing timeouts. As I have said above, using a huge timeout to work around the issue isn't good practice. |
This comment was marked as off-topic.
This comment was marked as resolved.
This comment was marked as resolved.
If you are using Nginx, did you make sure to configure it correctly? |
The server is configured correctly; the bug is caused by the fact that Nextcloud doesn't keep the connection alive until completion, so when a very large file is uploaded the request times out during the time needed to assemble all the chunks. |
Hm... I am not sure if this is fixable by us since it looks like it runs into a timeout issue from the webserver side if it takes too long. cc @nextcloud/server-triage for more input on this. |
This comment was marked as resolved.
This comment was marked as off-topic.
Since about version 23 (I don't remember exactly), it is no longer possible to upload large files because of a "float is not integer" type error. Because of that inability to handle large files, and because it is marked as "won't fix", I uninstalled Nextcloud long ago. IMHO, due to its inability to handle large files at all (a runtime type exception when handling the file size), the chunking issue is no longer relevant. |
Handling large files is not a problem at all with e.g. https://github.com/nextcloud/all-in-one. So in my experience it is really only a configuration issue when it doesn't work. |
I am on 25.0.2 and the issue persists. |
Did you follow all the steps of https://docs.nextcloud.com/server/stable/admin_manual/configuration_files/big_file_upload_configuration.html to enable large file uploads? |
I am on 25.0.3 and have this issue also.
Followed to the letter, and still encountered this issue on any file above 1GB. This suggestion worked: #17992 (comment) |
@szaimen Following the provided documentation, which isn't the clearest about all the settings it describes, only mitigates the issue in certain cases. As many users report, there are timeouts when uploading large files. Timeouts are particularly hard to reproduce, especially with networked storage, or with slow connections between the uploading user and the receiving nextcloud server.

The solution is to not make operations that potentially take a long time dependent on a single HTTP request being held open for the length of that operation. In the case of large files, the timeouts can also be caused by the network settings of the uploading user. You cannot tell everyone to switch internet providers or get another job where the company network is configured differently 😉 Instead, nextcloud needs to become more resilient against timeouts, which are always possible; they just surface most for users working with large files.

Even if increasing timeout settings and the maximum POST request size across a dozen config files on nextcloud infrastructure solved all uploading issues — which, as explained above, it doesn't — having such long timeouts is bad practice for performance and security. It makes it possible for a malicious client to keep loads of HTTP requests active without sending much data at all, quickly exhausting the threads the webserver or reverse proxy can handle. This then prevents regular users from using the service at all.

So, please register this issue for what it is and give it the priority it deserves. If this behavior were fixed, I could stop renting temporary VMs located in close proximity to my nextcloud deployment just to upload large files to it. |
I was forced to switch to the community version of ownCloud and it works perfectly without any changes to the default code. I have uploaded and downloaded a 100+ GB zip from the web page and synced a lot of 4+ GB files, and it never fails. |
This only proves that I am right that you need to adjust the values that are mentioned in https://docs.nextcloud.com/server/stable/admin_manual/configuration_files/big_file_upload_configuration.html. If you don't want to adjust the values on your instance manually, feel free to switch to https://github.com/nextcloud/all-in-one where things should work out of the box. |
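(For reference, the nginx-side values that document talks about look roughly like the sketch below; the numbers are illustrative rather than the documented defaults, and the matching PHP limits — upload_max_filesize, post_max_size, max_input_time, max_execution_time — have to be raised alongside them.)

```nginx
# Nextcloud server block: allow big request bodies and slow uploads
client_max_body_size 16G;      # largest single PUT/POST nginx will accept
client_body_timeout 3600s;     # keep slow uploads from being cut off
fastcgi_read_timeout 3600s;    # time PHP may spend assembling chunks
```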
@szaimen I think there is a general misunderstanding. While changing these values helps some users, it doesn't help others. There would be no reason for users to change values if all Nextcloud clients used chunked uploads and the re-assembly of the chunks into one file didn't depend on a single HTTP request not timing out. Even the AIO instructions mention
As long as the maximum file size that can be put on nextcloud is restricted by the maximum size of a POST or PUT request, this problem will persist. |
Actually all Nextcloud clients should indeed be able to use chunked uploads. This is more a general hint for e.g. public uploads. I'll adjust the AIO docs accordingly, see nextcloud/all-in-one#1843. Also, this is actually a problem with your infrastructure if Cloudflare does not support uploads bigger than 100 MB, or rather a feature request to use chunked uploads in the public upload case as well. |
Thanks for changing the AIO documentation @szaimen. It is true that even nextcloud's basic browser uploader in standard configuration splits large files into 10MB chunks, except for the public file drop situation. This is great because the data transfer part of an upload is very unlikely to time out then. A client could even be smart enough that if a single chunk transfer times out it would be retried with a smaller chunk size, which I believe the desktop client already does.

Now once the chunks are transferred to nextcloud, they are re-assembled into a single file. Depending on the storage used as well as the number and size of the chunks, this process can take quite some time. The re-assembling must be finished within the timeout set for a single HTTP request. Probably since nextcloud doesn't send any data to a client during a lengthy re-assembly process, proxies and/or clients are likely to discard the connection, typically to protect from (unintentional) DOS. Discarding the connection will fail the upload.

The time for re-assembly can increase massively if nextcloud is configured to use networked storage (like iSCSI, NFS, SFTP, S3) and moves chunks and final file in between remote storage and local temp storage several times.

As a user, currently I have two choices to transfer a large file:
Changing timeouts and chunk sizes in configuration can mitigate the problem under certain conditions, while degrading security and performance in some cases, but it is not a reliable solution. Ideally clients, and in particular the desktop client, would be able to wait for the re-assembling of chunks and register a successful upload. Currently this doesn't seem to be implemented, or is not working.

This is a problem for users that have to deal with large files. It will hardly affect users that deal with typical office documents, but those working with audio, video, large documents for print, disk images, etc., will have a very hard time. I'm one of the folks who has their nextcloud configuration cranked up to 11 regarding timeouts, request body sizes, etc., and I am still unable to get my 8 GB digital video file uploaded, while I have no issues, for instance, uploading it to my peertube instance which is self-hosted just the same.

When exchanging large files with collaborators I would love to avoid having to use WeTransfer or DropBox or other proprietary and centralized tools, but sometimes I have to rely on them, because these services figured out how to deal with large files reliably. It would also be amazing for people currently using CreativeCloud to move to sovereign storage with nextcloud. But as things stand I would only be able to recommend nextcloud for collaborating on light office files. |
This jumped out at me, from @despens's comment above:
We find ourselves in exactly the same situation, and this makes remarks like #17992 (comment) so frustrating. Our organisation uses a large S3 configuration provided as a managed OpenStack instance, and we run individual compartmentalised VMs, all running the same VM image. We are able to effortlessly send 10-12 GB files to Peertube out of the box, but experience easily reproducible and consistent failures on Nextcloud. Peertube is built by a team of one paid individual and is completely staffed by voluntary contributors beyond that financially supported developer.

What's worse is that the upload method via the Desktop client described by @despens has additional ironic bugs that are intrinsically linked to this issue: error messages are truncated and unreadable on chunk failure, which confuses users. In certain circumstances, the UI will even reset after a period of time and display a large "✅ ALL FILES SYNCED" interface to the user, despite a very clear and serious discrepancy between the client and server directory structure. |
To illustrate the agonizing everyday impact of this bug:
In short, this bug is not only affecting large files, but also
💀 |
I can reproduce this too. Further to this, when a user is experiencing this issue, it will hang the web interface for all users (for example, the nextcloud server takes 30+ seconds to even respond to an http request), and furthermore this bug destroys file versioning over S3. |
This comment was marked as off-topic.
So, is there any possibility to raise the priority of this task? Because it seems to me that this is quite a sensitive issue :) |
This is no longer an issue for me since nextcloud dropped support for large files altogether. Now trying to upload anything larger than 2GB triggers a 500 because |
This comment was marked as off-topic.
Now you can upload to S3 with chunks directly, so this bug should be closed, I think. Can anybody confirm? |
This comment was marked as off-topic.
This shouldn't be an issue anymore with:
|
Steps to reproduce
Expected behaviour
Expected the file to be uploaded without errors
Actual behaviour
Error 504 when assembling chunks, and the file is locked and cannot be moved or deleted
Server configuration
Operating system:
Ubuntu 18.04
Web server:
Database:
Type: pgsql
Version: PostgreSQL 10.10 (Ubuntu 10.10-0ubuntu0.18.04.1) on x86_64-pc-linux-gnu, compiled by gcc (Ubuntu 7.4.0-1ubuntu1~18.04.1) 7.4.0, 64-bit
Size: 14.4 MB
PHP version:
PHP
Version: 7.2.24
Memory Limit: 512 MB
Max Execution Time: 3600
Upload max size: 1000 MB
Nextcloud version: (see Nextcloud admin page)
Updated from an older Nextcloud/ownCloud or fresh install:
Where did you install Nextcloud from:
Professional install (Daniel Hansson, T&M Hansson IT AB, https://www.hanssonit.se); also installed from the Unraid app from 2 different sources, same issue
Signing status:
List of activated apps:
App list
Nextcloud configuration:
Config report
Are you using external storage, if yes which one: local/smb/sftp/...
Are you using encryption: yes/no
Are you using an external user-backend, if yes which one: LDAP/ActiveDirectory/Webdav/...
LDAP configuration (delete this part if not used)
LDAP config
Client configuration
Browser:
Google Chrome latest version
Operating system:
Windows 10
Logs
Web server error log
Nextcloud log (data/nextcloud.log)
Browser log