Benchmark command #449

Merged
merged 2 commits into master on Jul 2, 2018
Conversation

gilbertchen
Owner

This adds a new benchmark command that can be used to measure disk access and network speed.

Usage:

NAME:
   duplicacy benchmark - Run a set of benchmarks to test download and upload speeds

USAGE:
   duplicacy benchmark [command options]  

OPTIONS:
   -file-size <size>		the size of the local file to write to and read from (in MB, default to 256)
   -chunk-count <count>		the number of chunks to upload and download (default to 64)
   -chunk-size <size>		the size of chunks to upload and download (in MB, default to 4)
   -upload-threads <n>		the number of upload threads (default to 1)
   -download-threads <n>	the number of download threads (default to 1)

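For example (the numbers here are purely illustrative), a run with a larger test file and four threads in each direction would look like:

   duplicacy benchmark -file-size 512 -upload-threads 4 -download-threads 4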
Sample output:

duplicacy benchmark
Storage set to sftp://[email protected]/storage
Generating 244.14M byte random data in memory
Writing random data to local disk
Wrote 244.14M bytes in 3.05s: 80.00M/s
Reading the random data from local disk
Read 244.14M bytes in 0.18s: 1388.05M/s
Split 244.14M bytes into 53 chunks without compression/encryption in 1.69s: 144.25M/s
Split 244.14M bytes into 53 chunks with compression but without encryption in 2.32s: 105.02M/s
Split 244.14M bytes into 53 chunks with compression and encryption in 2.44s: 99.90M/s
Generating 64 chunks
Uploaded 256.00M bytes in 62.88s: 4.07M/s
Downloaded 256.00M bytes in 63.01s: 4.06M/s
Deleting 64 temporary files

@TheBestPessimist
Contributor

Neat idea!

@TheBestPessimist
Contributor

TheBestPessimist commented Jun 18, 2018

Won't the read speed be "wrong" due to caching, since we write the data to disk first and read it straight back? I see this even in your example: write = 80 MB/s, read = 1388 MB/s (it's over one thousand!).

Second question: why 244 MB for the disk operations, and 256 MB for the network ones?
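For illustration only, here is a minimal Go sketch of the write-then-read-back pattern described in the first question (this is not Duplicacy's actual benchmark code). Because the file was written moments earlier, the OS page cache usually serves the read, which is why the read figure can land far above the physical disk speed:

package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical test data; the real command generates random bytes.
	data := make([]byte, 256*1024*1024)

	// Time the write.
	start := time.Now()
	if err := os.WriteFile("benchmark.tmp", data, 0644); err != nil {
		panic(err)
	}
	fmt.Printf("wrote %d bytes in %.2fs\n", len(data), time.Since(start).Seconds())

	// Time the read immediately afterwards. The freshly written file is almost
	// certainly still in the OS page cache, so this tends to measure memory
	// bandwidth rather than disk read speed.
	start = time.Now()
	if _, err := os.ReadFile("benchmark.tmp"); err != nil {
		panic(err)
	}
	fmt.Printf("read %d bytes in %.2fs\n", len(data), time.Since(start).Seconds())

	os.Remove("benchmark.tmp")
}

Avoiding the cache effect would require something like a test file larger than RAM or platform-specific flags such as O_DIRECT, which is probably more complexity than a quick benchmark warrants.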

@leftytennis
Contributor

Shouldn't this benchmark command accept a -storage parameter?

@gilbertchen
Owner Author

I'll merge this PR first and then add a -storage parameter.

@TheBestPessimist Yes, it is most likely due to caching, but the disk read speed is unlikely to be the bottleneck, so there isn't really a need to figure out the exact read speed. The difference between 244MB and 256MB is due to the units being used, which is being addressed by #437.
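For reference, the two figures likely describe the same nominal amount of data reported in different units: a 256 MB test file is 256 x 1,000,000 = 256,000,000 bytes, and 256,000,000 / 1,048,576 ≈ 244.14, which matches the "244.14M" in the local-disk lines, while 64 chunks of 4 MiB each come out to exactly 256.00 MiB in the upload/download lines (the MiB chunk size is an assumption here).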

gilbertchen merged commit dfdbfed into master on Jul 2, 2018
@gilbertchen
Owner Author

This pull request has been mentioned on Duplicacy Forum. There might be relevant details there:

http://forum.duplicacy.com/t/benchmark-command-details/1078/1
