NFS Slower than VirtualBox Shared Folders? #46
Comments
After reading Mitchell's filesystem performance article, realizing that one of my graphs had a scale of a different order of magnitude, and thinking about my actual workload, I will have to re-run my tests. Since most of the activity on this particular VM is going to be web development, the workload will probably be a ton of reads of shared source files in the 5-64 KB range, a lot of database activity (but I chose to keep my MariaDB on the VM's native filesystem anyway, so I won't bother with that), and a small fraction of small writes to shared source files. I'll focus on shared reads in the desired size range. Also, I don't know much about partial file reads (which I think is what iozone's "record size" controls), but I suspect all or most of my shared file reads are of the complete file, so I'll focus on the cases where record size = file size (see the sketch below).
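In case it's useful, here is a minimal sketch of what that focused test could look like: record size pinned to file size for a handful of sizes in my target range. The size list and mount path are illustrative assumptions, not my final script:

```sh
#!/bin/sh
# Sketch: benchmark only full-file I/O, i.e. record size == file size.
# The size list and the /theshare mount path are illustrative assumptions.
for s in 8k 16k 32k 64k; do
  # -i 0 (write/rewrite) creates the test file; -i 1 (read/reread) is the
  # measurement of interest. -s sets file size, -r sets record size.
  iozone -i 0 -i 1 -s "$s" -r "$s" -f /theshare/iozonetest
done
```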
Hi, thanks for your study, it's really interesting. I'm looking forward to the next one. Greetings, Yann Schepens
Benchmarking winnfsd vs. VirtualBox Shared Folder I/O Where File Size Equals Record Size
[Chart: read and reread compared]
[Chart: NFS read and reread ignored]
Conclusion
NFS dwarfs vboxsf's read/reread performance, while in the other I/O tests, vboxsf outperforms NFS. Unfortunately, there are pros and cons to each of the many host/guest file-sharing solutions, even more than I had anticipated. For my needs, NFS would seem to perform best, but if I were doing development of another type (where, say, writes were more important), I'd stick with VirtualBox shared folders (or some other solution).
Notes
[Chart legend: NFS, vboxsf]
@jamiejackson Wow, great work! Could you put that on the wiki? I think it would be worth it, and hiding it in an issue would be a shame.
Hopefully my methodology was valid. Here's the article.
Unfortunately, the slow write performance of NFS leads to 404 errors when developing Symfony2 apps and using winnfsd on Windows systems. Symfony has a profiler that collects profile data and writes it to the filesystem. Subrequests (like rendering different controllers in your app) also collect profile data and write it (separately) to the fs. The significant part of the time section in the profiler is the ProfilerListener, which runs as long as the data is being collected and saved to the filesystem:
vboxsf (1 main request, 4 subrequests): 170 ms
NFS (1 main request, 4 subrequests): 1779 ms
Symfony also has a debug toolbar which is loaded by AJAX after the page has been processed by the server and sent to the client. When using NFS, it tries to grab the profile data, which is not yet written to the filesystem; this results in a 404, so it retries after a while. Anyway, the read performance is AWESOME! Best
@exchentrix please take a look at winnfsd/winnfsd#22 and try again. @jamiejackson would you run your tests again with the version from the issue linked above?
@marcharding You’re heaven-sent! I tried it: things are awesome now. Thank you very much. :)
I re-ran @jamiejackson's iozone scripts using vagrant-winnfsd 1.3, which I believe includes winnfsd 2.2. Unfortunately, the results were nearly identical. It's still terribly slow in a Symfony2 environment...
vboxsf should be even faster as of VirtualBox 5.0.2, per https://web.archive.org/web/20170722174402/http://techblog.en.klab-blogs.com/archives/11851752.html
Explanation
I'm trying to evaluate whether the pros (purported I/O performance) outweigh the cons (extra configuration) of NFS in my team's Vagrant-based development.
As part of this exercise, I wanted to benchmark I/O performance of both methods.
Vagrantfile: (see the sketch below)
With type: "nfs", I end up with a mount of:
192.168.56.1:/C/www/apps/big_directory /theshare nfs rw,vers=3,udp,nolock,addr=192.168.56.1 0 0
With no type specified, I end up with a mount of:
vagrant /vagrant vboxsf uid=1001,gid=1001,rw 0 0
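Since the exact Vagrantfile contents aren't shown above, here is a hypothetical reconstruction of synced-folder settings that would produce mounts like these. The host path, private-network IP, and box name are assumptions:

```ruby
# Hypothetical Vagrantfile sketch, reconstructed from the mount output above.
Vagrant.configure("2") do |config|
  config.vm.box = "ubuntu/trusty64"  # assumed box

  # NFS requires a private network on the host
  # (served by vagrant-winnfsd on Windows hosts).
  config.vm.network "private_network", ip: "192.168.56.10"

  # NFS variant: yields the 192.168.56.1:/C/www/apps/big_directory mount.
  config.vm.synced_folder "C:/www/apps/big_directory", "/theshare", type: "nfs"

  # Default variant (vboxsf): with no :type, Vagrant's implicit project
  # share produces the /vagrant vboxsf mount.
end
```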
To make things easier (and because iozone seems to want entries in /etc/fstab for unmounting and remounting), I added those mount options to /etc/fstab and commented out one or the other, depending on which I was testing (unmounting and re-mounting afterward; see the sketch below). I then used iozone in the VM to gather some benchmark data.
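In practice, each run went something like this (a sketch, assuming both fstab entries exist at /theshare with one commented out):

```sh
# Sketch: switching the share under test between benchmark runs.
sudo umount /theshare     # drop the currently mounted share
sudoedit /etc/fstab       # toggle which entry is commented out
sudo mount /theshare      # remount using the now-active entry
```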
NFS:
sudo iozone -a -f /theshare/iozonetest -g 512k -i 0 -i 1 -i 2 -i 3 -i 4 -i 6 -i 7 -i 8 -i 9 -i 10 -i 11 -i 12 -U /theshare -Racb /vagrant/iozone_nfs.xls
vboxsf:
sudo iozone -a -f /theshare/iozonetest -g 512k -i 0 -i 1 -i 2 -i 3 -i 4 -i 6 -i 7 -i 8 -i 9 -i 10 -i 11 -i 12 -U /theshare -Rab /vagrant/iozone_vboxsf.xls
(Note: I skipped test #5, strided reads, because it didn't seem to work with NFS.)
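For readers unfamiliar with iozone's flags, here is an annotated restatement of the NFS invocation, with the flag meanings paraphrased from the iozone man page. The vboxsf run is identical apart from -Rab instead of -Racb, i.e. without -c:

```sh
# -a       auto mode: sweep file sizes and record sizes
# -f FILE  temporary test file, placed on the mount under test
# -g 512k  cap the maximum file size for the sweep at 512 KB
# -i N     include test N (0=write, 1=read, 2=random, ...; 5=strided read, skipped here)
# -U MOUNT unmount/remount the mountpoint between tests (needs an fstab entry)
# -R       generate an Excel-style report
# -c       include close() in the timing (relevant for NFS, which flushes on close)
# -b FILE  write Excel-compatible binary output to FILE
sudo iozone -a -f /theshare/iozonetest -g 512k \
  -i 0 -i 1 -i 2 -i 3 -i 4 -i 6 -i 7 -i 8 -i 9 -i 10 -i 11 -i 12 \
  -U /theshare -Racb /vagrant/iozone_nfs.xls
```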
I then ran the output of those through iozone_3D.xls to generate some graphs.*
Results
I got some counter-intuitive results:
[Read graphs: NFS vs. vboxsf]
[Write graphs: NFS vs. vboxsf]
Conclusion
Either I'm benchmarking/interpreting incorrectly, or NFS is significantly slower than vboxsf for many use cases (except for reads with large file and record sizes). (However, I am a noob with regard to NFS and I/O benchmarking.)
Can anyone (who knows more than I do about this stuff) explain what I'm seeing and why I'm seeing it?
* Note to self: I opened the results files in Excel, renamed the data sheet as data, and re-saved the files as .xls (97-2003) in Excel, before selecting them with iozone_3D.xls.