Slow write speed with nfs #712
Hi, I have used a similar setup for about eight years. Using larger block sizes than you, I recently saw for plain NFS:
This was a lot slower than your data. The destination was a Btrfs file system in an LVM partition on a rotational RAID-1 array.

For encrypted data, I saw:
I used no additional flags. On the server, my exports were declared with:

Kind regards
Beautiful ASCII art ❤️ I'm running essentially the same setup. The 132 kByte/s is horrifying. Here's what I get:
Looking at /etc/exports on this Synology NAS:
Maybe "async" is what makes the difference? Can you test?
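For anyone trying this: the sync/async switch lives in /etc/exports on the server. A minimal sketch, with a placeholder path and client subnet rather than the NAS's real ones:

```
# /etc/exports on the NFS server -- placeholder path and client subnet.
# "async" acknowledges writes before they hit disk; "sync" (the default)
# forces a commit for every write and is what hurts small writes.
/volume1/data  192.168.1.0/24(rw,async,no_subtree_check)
```

After editing, `sudo exportfs -ra` reloads the export table without restarting the NFS server. Note that async trades crash safety for speed.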
I changed to async in exports and the speed goes up to 6.8 MByte/s - much better! Is the speed on the plain dir dependent on the CPU? The gocryptfs thread uses 6-8% CPU on an i7-6500U with 4 cores.
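One quick way to check whether raw crypto speed could be the limit: gocryptfs has a built-in crypto benchmark (this measures in-memory encryption only, nothing NFS-related):

```
# print the throughput of the available crypto backends (AES-GCM etc.)
# on this machine; with AES-NI this is usually well above 1 GB/s
gocryptfs -speed
```

On an AES-NI-capable CPU such as the i7-6500U this typically reports far more than single-digit MB/s, which would point at NFS round-trips rather than encryption as the bottleneck.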
What do your effective mount flags look like? Here are mine:
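For reference, the options actually in effect on the client (which can differ from what fstab requests) can be listed like this; a generic sketch, not the output referred to above:

```
# show the NFS mounts and the options the kernel negotiated
nfsstat -m

# alternative view via findmnt
findmnt -t nfs,nfs4 -o TARGET,SOURCE,OPTIONS
```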
I did not understand this question.
Additional info for this: I mount sub-dirs of the topmost cipher dir on my client. For that purpose I copied the gocryptfs.conf from the topmost cipher dir to the corresponding subdirs (only on subdir level 1):
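A rough sketch of that layout, with placeholder paths (the cipher-side directory name would really be its encrypted form):

```
# the whole cipher tree is NFS-mounted on the client at /mnt/cipher;
# give a level-1 subdir its own copy of the config so it can be
# mounted on its own
cp /mnt/cipher/gocryptfs.conf /mnt/cipher/SUBDIR1/

# mount only that subdir as a decrypted view
gocryptfs /mnt/cipher/SUBDIR1 ~/plain/subdir1
```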
The difference between my 6.8 MB/s and your 14.0 MB/s does not seem to depend on the network, target HDD or similar underlying layers, since I am able to write to the mounted cipher dir on the server at 47.3 MB/s, correct?
I get about the same directly to nfs:
I'm not sure why you only get half the data rate via gocryptfs. One difference I see is that you use nfs4, while I have nfs3.
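If someone wants to rule the protocol version in or out, the client can pin it explicitly; a sketch with placeholder names:

```
# force NFSv3 for comparison (use vers=4.1 or vers=4.2 to test NFSv4)
sudo mount -t nfs -o nfsvers=3 server:/srv/data/cipher ~/data/cipher
```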
Hm okay, thank you very much. The 7 MB/s will work for me. If I have time I may execute some more tests with different parameters.
Hi, I can also warmly recommend running:
Wow, looks like I broke the isConsecutiveWrite optimization during the v2.0 refactor! With that fixed, my numbers got a 4x boost. Before:
After:
Wow, thanks! Is that enough for a release? It's been a little while.
Hi, thanks for the release! For the purpose of completeness, here are my numbers before upgrading, but after dropping the option discussed above.

For straight NFS:
For encrypted data:
Those speed-ups are modest, but meaningful. I also ran the smaller block size of bs=16k (as in your tests).

For straight NFS:
(That process did not play well with my Emacs/EXWM environment.)

For encrypted data:
I'll also post additional results after upgrading to 2.3.1 in a little while.

Kind regards
This is amazing. I just updated to v2.3.1 (77a0410), still async for nfs. Results:
I also measured over Wi-Fi:

This is a factor of 15.7 compared to v2.2.1 with async, and 833x compared to v2.2.1 with sync 😮 All measurements lasted 20s. Maybe the numbers would go down a bit if the test lasted longer, since dd starts at 240 MB/s, which is impossible with my 1 GBit LAN (seems to be a caching thing). Basically I have full LAN speed now, could not wish for more.
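About the caching effect: for short tests like these, dd can be told to flush before reporting, so the figure includes the time to actually push the data over NFS. A sketch with a placeholder path:

```
# conv=fsync makes dd call fsync() at the end, so the reported rate
# is not inflated by data still sitting in the client's page cache
dd if=/dev/zero of=~/data/plain/ddtest bs=1M count=2048 conv=fsync status=progress
```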
I encountered a problem when using my "archive" folders from my home server. When writing to a gocryptfs plain dir for which the cipher dir is mounted via NFS, the speed is surprisingly slow.
This is my setup:
This is my configuration for the mounts:
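As a stand-in for the concrete configuration, a minimal chain of this kind could look as follows; every path, hostname and option here is a placeholder, not the actual setup:

```
# on the server: export the cipher directory (/etc/exports)
/srv/data/cipher  client(rw,async,no_subtree_check)

# on the client: mount the export, then decrypt it locally with gocryptfs
sudo mount -t nfs server:/srv/data/cipher ~/data/cipher
gocryptfs ~/data/cipher ~/data/plain
```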
And these are the results, measured with dd with bs=16k (an example invocation is sketched after the list):
client:/dev/zero -> client:data/plain: 132 kByte/s
client:/dev/zero -> client:data/cipher: 47.3 MByte/s
server:/dev/zero -> server:data/plain: 104 MByte/s
server:/dev/zero -> server:data/cipher: 203 MByte/s
client:/data/plain -> client:/desktop: 78 MByte/s
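For reference, a dd run of the kind listed above could look like this; the path and count are placeholders:

```
# write 100 MiB in 16 KiB blocks into the gocryptfs plain dir on the client
dd if=/dev/zero of=~/data/plain/ddtest bs=16k count=6400 status=progress
```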
The value that bothers me is the 132 kByte/s when writing to the plain dir on the client. Reading from the same dir works at nearly the full speed of the LAN connection. I think I may have messed up some configuration of NFS or gocryptfs. But since only the gocryptfs plain dir is this slow, it seems that gocryptfs is struggling with something.