SFTP performance #100
Below are the results I get with an 843 KB file hosted on a VM (OpenSSH_6.6.1p1).

WinSCP download test, 50 iteration(s).
___________________________________________________________________
| Run | Client Cr | Connection | Download | Delete |
| 1 | 3,4 ms | 1 422,9 ms | 56,0 ms | 101,7 ms |
| 2 | 101,7 ms | 910,2 ms | 50,9 ms | 101,2 ms |
| 3 | 101,3 ms | 910,2 ms | 50,9 ms | 101,7 ms |
| 4 | 101,8 ms | 913,8 ms | 50,8 ms | 101,3 ms |
| 5 | 101,3 ms | 912,3 ms | 50,9 ms | 101,2 ms |
| 6 | 101,2 ms | 909,3 ms | 50,9 ms | 102,6 ms |
| 7 | 102,7 ms | 912,3 ms | 50,8 ms | 102,1 ms |
| 8 | 102,2 ms | 914,6 ms | 51,8 ms | 102,8 ms |
| 9 | 103,0 ms | 914,4 ms | 50,8 ms | 101,4 ms |
| 10 | 101,5 ms | 911,8 ms | 50,6 ms | 103,0 ms |
| 11 | 103,2 ms | 914,2 ms | 51,7 ms | 101,3 ms |
| 12 | 101,4 ms | 913,8 ms | 50,9 ms | 102,4 ms |
| 13 | 102,5 ms | 912,7 ms | 50,8 ms | 100,9 ms |
| 14 | 101,0 ms | 909,6 ms | 50,6 ms | 101,1 ms |
| 15 | 101,2 ms | 909,3 ms | 50,8 ms | 100,9 ms |
| 16 | 100,9 ms | 911,3 ms | 50,8 ms | 102,3 ms |
| 17 | 102,5 ms | 913,7 ms | 50,4 ms | 100,9 ms |
| 18 | 100,9 ms | 910,7 ms | 51,5 ms | 101,5 ms |
| 19 | 101,5 ms | 911,8 ms | 101,2 ms | 101,4 ms |
| 20 | 101,5 ms | 915,2 ms | 51,6 ms | 100,8 ms |
| 21 | 100,9 ms | 910,4 ms | 50,5 ms | 101,9 ms |
| 22 | 102,0 ms | 915,1 ms | 50,8 ms | 101,8 ms |
| 23 | 101,9 ms | 911,3 ms | 51,6 ms | 101,7 ms |
| 24 | 101,9 ms | 914,8 ms | 51,4 ms | 101,0 ms |
| 25 | 101,1 ms | 912,4 ms | 50,9 ms | 100,7 ms |
| 26 | 100,8 ms | 910,8 ms | 50,8 ms | 101,4 ms |
| 27 | 101,5 ms | 916,5 ms | 50,8 ms | 101,1 ms |
| 28 | 101,1 ms | 910,1 ms | 50,7 ms | 102,6 ms |
| 29 | 102,7 ms | 913,5 ms | 51,3 ms | 101,2 ms |
| 30 | 101,2 ms | 912,3 ms | 50,8 ms | 101,5 ms |
| 31 | 101,5 ms | 914,6 ms | 51,6 ms | 101,1 ms |
| 32 | 101,1 ms | 909,6 ms | 50,8 ms | 100,9 ms |
| 33 | 100,9 ms | 911,7 ms | 50,9 ms | 101,3 ms |
| 34 | 101,4 ms | 913,5 ms | 51,7 ms | 101,3 ms |
| 35 | 101,4 ms | 912,7 ms | 50,4 ms | 101,0 ms |
| 36 | 101,0 ms | 911,5 ms | 50,8 ms | 101,6 ms |
| 37 | 101,7 ms | 911,4 ms | 51,8 ms | 101,5 ms |
| 38 | 101,6 ms | 910,7 ms | 50,8 ms | 101,3 ms |
| 39 | 101,4 ms | 912,2 ms | 50,7 ms | 101,7 ms |
| 40 | 101,8 ms | 911,5 ms | 50,3 ms | 102,0 ms |
| 41 | 102,1 ms | 915,6 ms | 50,8 ms | 101,5 ms |
| 42 | 101,5 ms | 914,5 ms | 51,2 ms | 101,5 ms |
| 43 | 101,6 ms | 912,8 ms | 50,7 ms | 101,0 ms |
| 44 | 101,1 ms | 912,5 ms | 50,9 ms | 101,2 ms |
| 45 | 101,3 ms | 914,5 ms | 51,4 ms | 100,9 ms |
| 46 | 101,0 ms | 911,5 ms | 50,5 ms | 100,8 ms |
| 47 | 100,8 ms | 911,7 ms | 50,8 ms | 102,3 ms |
| 48 | 102,5 ms | 914,8 ms | 101,9 ms | 101,5 ms |
| 49 | 101,6 ms | 913,2 ms | 51,3 ms | 101,0 ms |
| 50 | 101,0 ms | 907,6 ms | 50,6 ms | 101,0 ms |
|=======|============|==============|================|============|
| avg | 99,6 ms | 922,6 ms | 53,1 ms | 101,5 ms |
|_______|____________|______________|________________|____________|
*** Average download speed for 43 161 600 bytes: 15 881,22 KB/s
SSH.NET download test, 50 iteration(s).
___________________________________________________________________
| Run | Client Cr | Connection | Download | Delete |
| 1 | 0,8 ms | 156,5 ms | 35,2 ms | 4,2 ms |
| 2 | 4,3 ms | 162,4 ms | 33,5 ms | 4,1 ms |
| 3 | 4,1 ms | 165,7 ms | 33,0 ms | 3,7 ms |
| 4 | 3,7 ms | 162,9 ms | 34,2 ms | 3,6 ms |
| 5 | 3,6 ms | 170,9 ms | 33,0 ms | 4,3 ms |
| 6 | 4,4 ms | 171,1 ms | 36,4 ms | 4,0 ms |
| 7 | 4,1 ms | 181,1 ms | 33,8 ms | 3,6 ms |
| 8 | 3,6 ms | 161,5 ms | 33,9 ms | 3,7 ms |
| 9 | 3,7 ms | 176,6 ms | 36,8 ms | 4,0 ms |
| 10 | 4,0 ms | 193,9 ms | 32,9 ms | 3,9 ms |
| 11 | 3,9 ms | 179,4 ms | 33,9 ms | 3,6 ms |
| 12 | 3,6 ms | 190,0 ms | 33,6 ms | 4,1 ms |
| 13 | 4,1 ms | 170,9 ms | 33,7 ms | 3,5 ms |
| 14 | 3,5 ms | 178,8 ms | 33,7 ms | 3,9 ms |
| 15 | 3,9 ms | 170,0 ms | 32,7 ms | 4,0 ms |
| 16 | 4,1 ms | 176,7 ms | 34,5 ms | 4,7 ms |
| 17 | 4,8 ms | 169,6 ms | 33,2 ms | 4,0 ms |
| 18 | 4,0 ms | 189,9 ms | 35,0 ms | 3,6 ms |
| 19 | 3,6 ms | 173,6 ms | 33,4 ms | 3,5 ms |
| 20 | 3,6 ms | 168,4 ms | 33,0 ms | 3,7 ms |
| 21 | 3,8 ms | 166,5 ms | 34,3 ms | 3,7 ms |
| 22 | 3,7 ms | 168,0 ms | 35,6 ms | 3,6 ms |
| 23 | 3,6 ms | 169,1 ms | 33,5 ms | 3,7 ms |
| 24 | 3,8 ms | 163,9 ms | 33,1 ms | 3,5 ms |
| 25 | 3,5 ms | 167,4 ms | 34,8 ms | 4,9 ms |
| 26 | 4,9 ms | 159,9 ms | 36,3 ms | 3,9 ms |
| 27 | 3,9 ms | 177,8 ms | 32,9 ms | 3,5 ms |
| 28 | 3,5 ms | 165,8 ms | 38,3 ms | 3,9 ms |
| 29 | 3,9 ms | 160,1 ms | 32,1 ms | 3,7 ms |
| 30 | 3,7 ms | 168,0 ms | 33,0 ms | 3,9 ms |
| 31 | 4,0 ms | 174,2 ms | 35,2 ms | 4,0 ms |
| 32 | 4,0 ms | 170,8 ms | 36,1 ms | 3,5 ms |
| 33 | 3,6 ms | 177,5 ms | 35,4 ms | 3,9 ms |
| 34 | 3,9 ms | 178,4 ms | 36,4 ms | 3,6 ms |
| 35 | 3,6 ms | 163,0 ms | 33,3 ms | 3,9 ms |
| 36 | 4,0 ms | 194,6 ms | 35,3 ms | 3,5 ms |
| 37 | 3,5 ms | 165,6 ms | 34,4 ms | 4,1 ms |
| 38 | 4,2 ms | 172,6 ms | 32,9 ms | 3,7 ms |
| 39 | 3,8 ms | 160,1 ms | 33,9 ms | 3,6 ms |
| 40 | 3,7 ms | 162,1 ms | 41,3 ms | 4,1 ms |
| 41 | 4,1 ms | 173,9 ms | 32,2 ms | 3,9 ms |
| 42 | 3,9 ms | 162,5 ms | 33,7 ms | 4,0 ms |
| 43 | 4,0 ms | 163,7 ms | 33,5 ms | 3,5 ms |
| 44 | 3,5 ms | 202,2 ms | 33,6 ms | 3,5 ms |
| 45 | 3,6 ms | 170,2 ms | 33,5 ms | 3,8 ms |
| 46 | 3,9 ms | 168,7 ms | 33,3 ms | 4,7 ms |
| 47 | 4,7 ms | 164,4 ms | 33,7 ms | 3,8 ms |
| 48 | 3,9 ms | 166,5 ms | 36,2 ms | 3,6 ms |
| 49 | 3,6 ms | 165,6 ms | 32,8 ms | 3,6 ms |
| 50 | 3,6 ms | 157,6 ms | 32,0 ms | 4,4 ms |
|=======|============|==============|================|============|
| avg | 3,8 ms | 171,0 ms | 34,2 ms | 3,9 ms |
|_______|____________|______________|________________|____________|
*** Average download speed for 43 161 600 bytes: 24 624,47 KB/s |
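As a sanity check on the two tables, the reported averages can be recomputed from the totals. The published speeds come from per-iteration timings, so recomputing from the rounded "Download" averages lands close to, but not exactly on, the reported figures:

```python
# Sanity-check the tables above: recompute average throughput from the
# rounded "Download" averages. The reported speeds come from per-iteration
# timings, so this lands close to (not exactly on) the published figures.

TOTAL_BYTES = 43_161_600              # 50 iterations x 863 232 bytes

def avg_speed_kb_s(avg_download_ms, iterations=50):
    total_seconds = avg_download_ms / 1000 * iterations
    return TOTAL_BYTES / 1024 / total_seconds

print(f"WinSCP : {avg_speed_kb_s(53.1):,.0f} KB/s")   # reported: 15 881,22
print(f"SSH.NET: {avg_speed_kb_s(34.2):,.0f} KB/s")   # reported: 24 624,47
```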
Thank you, I'll have to try it with the dev build. |
Following because I'm having timeout issues, also with an SFTP server I can't control. |
I pulled the dev code today. Using the same test file and machines, I saw big improvements over the 2016.0.0 release. Looking good! I just wish I knew more about that server; it's got to have something to do with these transfer numbers. Not to change the subject, but if anyone has suggestions on exploratory/diagnostic code to get more information about the server or the transfer, please let me know.
|
I haven't done any thorough testing as above, but I was just evaluating the library, and the performance issue is large in the current NuGet package (v2016.0.0). Once I get a chance in the near future I will revisit and test further, but I thought it might help if I added a comment. The server is internal to the corporate network and always/easily provides near-gigabit downloads when using a standalone SFTP client (FileZilla). Connectivity is > 1 Gbps. Using SSH.NET's SftpClient to a MemoryStream will not go any faster than ~2.78 MB/s. I also tried providing a buffer the full size of the desired file to the MemoryStream constructor, without luck. I just downloaded the WinSCP NuGet package and it achieves ~70 MB/s, but there are various issues with that project as well. Thanks, |
@lochnar187 I'll do some tests with WS-FTP later this week, or early next week. I may need more information on your setup at that point. |
@ceastwood Those numbers really trouble me. Can you provide more info on that SSH server (software, version, crypto used, ...)? If you can't, I could provide a custom-built version of SSH.NET that adds more debug output. |
@drieseng I'll give you all the info I have, sadly that's not much for the WS-FTP server as it's not mine. Just let me know what you need. For the moment, here's a verbose log from FileZilla's connection to that server:
|
The reason for the poor performance is that requests aren't pipelined, despite the code appearing to be built to do so. Say the buffers are configured such that you can receive 5 SSH_FXP_READ responses (SSH_FXP_DATA) before you can read any more. The pattern I'm seeing for downloads, in shorthand, is:

READ 0 -> DATA 0 -> READ 1 -> DATA 1 -> READ 2 -> DATA 2 -> ...

What you want to see is something more like this:

READ 0 -> READ 1 -> READ 2 -> READ 3 -> READ 4 -> DATA 0 -> READ 5 -> DATA 1 -> ...

Does that make sense? Since SFTP has a 32 KB max data size, with the pattern I'm seeing now there is a round-trip time inflicted for each 32 KB in the file, which is why buffer sizes don't really matter much: it's only actually buffering 32 KB before writing it right away.

From my cursory reading of the code, it may be due to the locking on the _requests dictionary in SftpSession, or perhaps the SftpFileStream itself. Another thought I had: since response handling is event-driven, you're also likely to be stuck handling a response when you could be sending a request out, especially when there is low latency on the connection. Disk I/O is more costly than sending a new request, and since you're handling responses right away and locking _requests in the meantime, the client can't get another request out by the time it has received a new response to handle from the previous request.

It seems to be an issue on writes as well. You want to send out as many writes as you can until you block waiting for WINDOW_ADJUST. That could be a similar thing, where WRITEs get built/sent while a lock is held, preventing further WRITEs from being built/sent simultaneously because you're already handling a STATUS.

Not really sure I'm anywhere close on the underlying cause in the code. But hey, it's pretty to look at, so maybe I'll look some more soon. If it does have something to do with locking or event handling, I'm not sure of the best solution for your code.
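A rough way to see how much a round trip per 32 KB hurts: without pipelining, throughput is bounded by block size divided by round-trip time, regardless of available bandwidth. A back-of-the-envelope sketch (the RTT values are made up for illustration):

```python
# Back-of-the-envelope for the round-trip cost described above: with one
# outstanding 32 KB read at a time, throughput is capped at block_size / RTT
# no matter how much bandwidth is available. RTT values are illustrative.

BLOCK = 32 * 1024  # SFTP's 32 KB max data size

def ceiling_mb_s(rtt_ms):
    return BLOCK / (rtt_ms / 1000) / (1024 * 1024)

for rtt_ms in (1, 10, 50):
    print(f"RTT {rtt_ms:>2} ms -> at most {ceiling_mb_s(rtt_ms):6.3f} MB/s")
```

So even a modest 10 ms RTT caps an un-pipelined transfer at about 3 MB/s, which matches the kind of numbers reported in this thread.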
When I've implemented pipelining in an SFTP client in the past, I had to refactor the response handling so that it was closely in tune with request sending, giving me more direct control over when to send a new request (i.e. immediately after receiving a response). In any case, the most important thing is to completely fill the pipe at the start of the transfer; after that, the existing paradigm should work relatively well. That said, you really don't want to assume it won't start falling behind by handling more responses than it makes new requests, but handling that is a lot more difficult than simply putting a blocker on your response handling while you're busy filling the pipe at the start. I'd probably start there to see if it even tries to pipeline the requests in the first place. You might be surprised at the results. |
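The fill-the-pipe argument can be put in a toy model: ignoring bandwidth limits, a batch of `window` concurrent READs completes in one round trip, so wall-clock time measured in RTTs is roughly ceil(chunks / window):

```python
import math

# Toy model, ignoring bandwidth limits: a batch of `window` concurrent READs
# completes in one round trip, so wall-clock time measured in RTTs is
# ceil(chunks / window). With window=1 (the stop-and-wait pattern described
# above), every 32 KB chunk costs a full round trip.

def transfer_time_rtts(total_chunks, window):
    return math.ceil(total_chunks / window)

chunks = 1000                            # ~32 MB file in 32 KB chunks
print(transfer_time_rtts(chunks, 1))     # 1000 RTTs: stop-and-wait
print(transfer_time_rtts(chunks, 10))    # 100 RTTs: 10 requests in flight
```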
Just occurred to me that SftpSession.RequestRead is essentially synchronous. That would do it. |
We may effectively be able to send multiple read requests at once: either by first checking the size of the remote file and determining the optimal number of requests to send in parallel, or by sending a given number of requests and ignoring those responses that return an EOF (and stopping further requests once any of these async requests returns an EOF). We would, however, need to write the responses to those requests in sequence, so we would need to buffer more data and, as a consequence, consume more memory. I currently don't have time to do a POC for this, but please don't hesitate to take a stab at it. |
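The in-sequence write requirement can be sketched like this: responses may arrive out of order, so chunks ahead of the write cursor are held in a dictionary keyed by chunk index and flushed as soon as the gap fills (the names here are hypothetical, not SSH.NET's):

```python
# Sketch of the in-order write buffering described above: read responses may
# arrive out of order, so chunks ahead of the write cursor are held in a
# dictionary and flushed as soon as the gap fills. Names are hypothetical,
# not SSH.NET's actual code.

def reassemble(responses):
    """responses: iterable of (chunk_index, data) pairs in arrival order."""
    pending = {}
    next_index = 0
    out = []
    for index, data in responses:
        pending[index] = data
        while next_index in pending:       # flush every contiguous chunk
            out.append(pending.pop(next_index))
            next_index += 1
    return b"".join(out)

# Chunks 1 and 2 arrive before chunk 0; output is still in file order.
print(reassemble([(1, b"bb"), (2, b"cc"), (0, b"aa")]))  # -> b'aabbcc'
```

The memory cost drieseng mentions is visible here: `pending` holds every chunk that arrives ahead of the write cursor.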
I've started working on this. |
My (draft) read-ahead implementation of DownloadFile shows a "slight" increase from 31488 KB/s. The increase should be even higher on a slower connection. I'll test this tomorrow. |
What I can say for my part is that the raw channel overhead is non-existent: I can hit my gigabit local-network limit with the exec channel, meaning any throughput slowdown happens within the SFTP handling code. I have some ideas I could try if I can reproduce the slowdown. I'm not too familiar with the SFTP subsystem of SSH, though. |
I'm using the dev code and seeing a huge slowdown as well: a 28 MB file takes 8 seconds with FileZilla but 2 minutes with SSH.NET. Increasing the buffer to 63K gets it down to a minute. I don't know where the overhead issue is, but could it be that some of these other tools are multi-threading the download? |
I committed a preliminary version of my read-ahead changes in the develop branch. Please hammer away at them, and let me know if it improves download speed for you.
Will do - probably tomorrow.
|
@drieseng I'm stuck firefighting atm; I hope to get to this in a few days. Thanks, all, for working on this issue! |
I did some debugging, and isCompleted is not being set at the end of the transfer. It's processing 32K chunks up until a short read at end of file, but it doesn't set the isCompleted flag, so it loops in Monitor.Wait and never returns. I did a little MORE debugging: after the short last chunk is read, we never get back to another attempt, but isCompleted is still not set. I'm going to try once more to see where we might be able to set it, but so far I haven't been able to debug the right area to get there. In the public byte[] Read() method, this code ... is falling into Monitor.Wait and never coming out: the while condition is true, as there's no next chunk, but isCompleted is not set to true. I tried setting it on the short read, but that fails to write out the last chunk in that case. The speed is phenomenal up to that point. |
I "fixed" this as follows: in SftpFileReader, see the code marked //tah ... I am NOT at all certain this is the best/correct way to fix it, but it does work. Update: this doesn't work, unfortunately. Because it is async, it fails to write the complete file out, so this has to be set elsewhere; I just do not know where. |
The _isCompleted change that you added should not be necessary, but I could be wrong. Please check if my latest commit fixes the problem for you. |
Ignore my last comment, I'm looking into it. |
I just committed a new version that should fix this issue. |
I know you say it's a WIP but I'd call it a rousing success. WinSCP which was the fastest transfer I could find has been doing 29-31 seconds on my sample file. Consistently getting 15-16 using the changes you put into SSH.NET. I say go for it and put it in the next beta! THANK YOU! Happy Anniversary to your gf too! |
@drieseng I will definitely help test this out when I get a chance over the next couple of days. |
@lochnar187 Let's keep the discussion around the file/folder not found problem in issue #94. |
I've now hardened the implementation so that a broken session or an unresponsive SSH server interrupts the read-ahead loop (and any blocking waits). This adds quite some complexity, which of course does not come for free. To compensate for that additional cost (and the corresponding performance degradation), I've introduced some new optimizations. The net result is that - even after the "hardening" - performance has increased even a little further (compared to the draft). I started working on SSH.NET after the 2013.4.7 release, so I used this as a baseline for my performance tests. Downloading a 50 MB file 100 times takes 259850 ms on v2013.4.7, 159730 ms on v2016.0.0 and 90852 ms when using the develop branch. If you look at the transfer speed, we went from 19703 KB/s for v2013.4.7, to 32054 KB/s for v2016.0.0 and finally settled at 56355 KB/s for that same 50 MB file when using the develop branch. Note that I tested with different file sizes, to make sure there are no regressions for small files. |
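The quoted figures are internally consistent; recomputing the transfer rates from the stated times reproduces them to within rounding:

```python
# Cross-check the figures above: 100 downloads of a 50 MB file, divided by
# the quoted total times, reproduces the quoted KB/s values (to rounding).

TOTAL_KB = 50 * 1024 * 100            # 100 downloads of a 50 MB file

def speed_kb_s(total_ms):
    return TOTAL_KB / (total_ms / 1000)

print(f"{speed_kb_s(259850):,.0f} KB/s")  # v2013.4.7, reported 19703 KB/s
print(f"{speed_kb_s(159730):,.0f} KB/s")  # v2016.0.0, reported 32054 KB/s
print(f"{speed_kb_s(90852):,.0f} KB/s")   # develop,   reported 56355 KB/s
```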
I’ll pull the developer branch and give it some run throughs.
|
I downloaded the latest dev build and I'm back in slow land. With the last version, which didn't have the robust recovery, I was getting 9 seconds on my download. With this version I get a full minute and 5 seconds. This is the same throughput I get using the release version with a 32K buffer. This version, sadly, isn't really improved for me. |
We could see it as bad news, or consider it a challenge. I prefer the latter. Let's try to see what made it regress for you. Did you explicitly set a BufferSize? In the version that you previously tested, the read-ahead used a hard-coded buffer size of 32 KB. Now we use whatever is configured (the default for SftpClient is 32 KB, which is a good default), with a small adjustment to take the protocol overhead into account. Would it be possible for you to debug into ServiceFactory.CreateSftpFileReader(...)? When we're unable to determine the size of the file (SSH_FXP_LSTAT failed), we fall back to having a maximum of 3 pending reads. When the file size is known, the maximum number of pending reads is based on the file size and the buffer size, with a maximum of 10 pending reads. Also, what (exact) size of file(s) are you testing with? |
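The pending-reads heuristic described here can be sketched as follows (a hypothetical helper for illustration, not SSH.NET's actual code):

```python
import math

# A plausible sketch of the heuristic described above (hypothetical helper,
# not SSH.NET's actual implementation): with a known file size, pend up to
# 10 reads, fewer when the file fits in fewer buffers; fall back to 3
# pending reads when SSH_FXP_LSTAT fails and the size is unknown.

def max_pending_reads(file_size, buffer_size=32 * 1024, size_known=True):
    if not size_known:
        return 3
    return min(10, max(1, math.ceil(file_size / buffer_size)))

print(max_pending_reads(24_901_370))           # large file -> 10
print(max_pending_reads(40 * 1024))            # fits in 2 buffers -> 2
print(max_pending_reads(0, size_known=False))  # size unknown -> 3
```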
@imapangolin Also, you don't happen to have a server I can test against to reproduce this? |
Can you send me a private email address and I’ll share the actual server and credentials.
send to thane at software simple dot com
|
On Feb 7, 2017, at 3:17 PM, Gert Driesen wrote (replies interleaved):

> Did you explicitly set a BufferSize? In the version that you previously tested, the read-ahead used a hard-coded buffer size of 32 KB. Now we use whatever is configured (the default for SftpClient is 32 KB, which is a good default), with a small adjustment to take the protocol overhead into account.

I did set 4K. Going with the default buffer got it back down to 19 seconds, vs. the 9 seconds I saw previously. I know there's some overhead to make it robust, but I'd rather see it faster, not 100% slower than what we have seen is possible. Not a criticism - an observation.

> Would it be possible for you to debug into ServiceFactory.CreateSftpFileReader(...)? When we're unable to determine the size of the file (SSH_FXP_LSTAT failed), we fall back to having maximum 3 pending reads. Also what (exact) size of file(s) are you testing with?

The file is 24,901,370 bytes with this iteration. When debugging now, the buffer size is 32K and the pending-read count is 10.

> When the file size is known, the maximum number of pending reads is based on the file size and the buffer size with a maximum of 10 pending reads. In the version that you previously tested, the maximum number of pending reads was hard coded to 15. With the new version, there will be a lot fewer read-aheads for small files (to avoid wasting round trips to the server). For large(r) files, there will be 10 read-aheads in the new version (vs. 15 before).

Why 10, not 15? Not that it really matters, or that this is where the "slow down" is. "Slow down" really is a misnomer, as we have gone from 2+ minutes to 19 seconds in my example.
|
There's almost no reason to play around with the buffer size. On my system, with a local (and responsive) SSH server, there's almost no difference between 2 and 10 read-aheads. I guess 10 was the last value that I tested with; there was no million dollar usage study to get to that value :-) Please do some tests with values higher than 10, and let me know if that changes much for you. |
15 seems to be a sweet spot as far as read-aheads go for my environment, and with a specific server. There’s not a truly significant difference though over 10. Going above 15, however, tends to actually slow down the transfers.
I think I can be content with the value of 10 that is present currently.
|
I did some further tests and attempts to improve the transfer rate, but I failed to make a notable difference. PuTTY is still quite a lot faster. I'm going to try to see if using the crypto classes from the .NET base class library makes a difference. But to be honest, I'm not a crypto wizard and have no ambition to become one :) |
Before making the transfer more robust you were faster. I wonder if there's something there that can be done.
|
Please try using version 2016.1.0-beta2, and let me know if performance is good now. |
Did a quick run with 2016.1.0-beta2, and I'm seeing a major boost! The speed almost doubled. Here are the results, same test program as above.
Keep up the great work and thank you. |
Thanks for the feedback! |
@lochnar187 I've discovered a performance regression when used with Sun SSH (issue #292). The fix for that issue should also improve performance a little for other SFTP servers. I might call upon you to perform another test when beta3 is available. |
I was out of town but I will be happy to test with beta3 when available or with any developer build.
|
@drieseng I'd be happy to do that! Let me know when. |
@lochnar187 @imapangolin I've committed the fix for issue #292. I had to revert sending SSH_FXP_OPEN and SSH_FXP_LSTAT requests in parallel since Sun SSH does not handle this well. This may result in a minor performance regression, but this should only be measurable for small files. On the other hand, we now no longer send unnecessary SSH_FXP_READ requests once we've reached the reported file size. This should have a positive effect on all downloads. Can you perform some tests with the develop branch? |
@drieseng I gave it another run with the dev branch code and I'm seeing another 15-20% increase in speed over the last run (Aug 16). Mostly this was due to less variation in today's runs, with each cycle coming in at 600 to 750 ms, while the last run saw wider variance. Very nice :) I'm not done yet; I want to write a new bit of code to play with the pending-read count and see what that does. I'll post more in a while. Thank you. |
Hi @drieseng, great work on this project! All I did was add this simple debug statement:

    if (_queue.Count > 1)
    {
        Debug.WriteLine("Queue count is : " + _queue.Count);
    }

and my results showed that over the course of my "large" download (a 290 MB file) over a high-latency, high-bandwidth network, I only ever reached a _queue count of 2 a couple of times. Do I understand correctly that my queue is not being filled, and as such I am downloading the entire file almost without any read-aheads? Should I open a new issue for this? Thank you very much in advance. UPDATE: |
We're also still seeing very poor performance on large file downloads in 2020.0.1 - 250kB transfers quickly (a few seconds), but 61MB transfers very slowly (7-8 minutes, 100-200kB/sec throughput during the transfer). Setting a large (256kB) buffer for large files doesn't help. The same file downloads very quickly in WinSCP (7-8 MB/sec). |
@jhardin-accumula I write an extension for VS Code and Visual Studio. The npm package for SFTP downloads a 70 MB file in a minute, while it takes 10 minutes or more to download the same file using SSH.NET. |
Actually, my quick comparison above may have been misleading: I'm doing C# dev in a macOS-hosted Parallels VM, and I was actually comparing SSH.NET in the VM to sftp from the Mac environment... Running WinSCP in the VM gives similar mediocre network performance, and the network throughput reported by Task Manager in the VM is much higher than the throughput for the VM environment reported by the macOS host. I'm not sure SSH.NET is the problem for me. The production install at AWS is showing > 400kB/s throughput from the same SFTP server. Modified SSH.NET parameters: timeout = 10m, keepalive = 30s (to allow for connection idle delay from processing the file after download), and if the file being retrieved is > 1MB the buffer size is set to 256kB (even though Noah's comment suggests that's pointless... I'm anticipating a fix :) ). |
I've added two PRs today that improve SFTP performance (by a LOT, in my case), and simultaneously reduce CPU usage: #865 and #866. I'm not sure if the SCP file transfers also benefit, I didn't test/check that. According to my benchmarks, both FileUpload and FileDownload have massive speed gains and are now comparable to Filezilla. These changes are for SftpClient.UploadFile/DownloadFile and variants. The Stream/CopyTo versions will also benefit a little on the CPU-usage side, but since they are synchronous and don't queue requests, I don't think they'll benefit that much performance-wise. However, if you are using (like I was) the Stream versions because they support Resume, you might want to check out #864. I'd be very interested to know if these changes also help you guys :) |
I am seeing some performance issues and would like to ask others to take my test code and see if they have the same problem. Mostly, this is because I am running it against an SFTP server that I don't control or have any knowledge about; it's a "black box" to me. So I want to make sure it's not that.
After noticing I wasn't getting the speeds I expected (sometimes as little as half), I wrote some code that gives a head-to-head comparison of speeds between SSH.NET and WinSCP. I used NuGet to get the latest versions of both SSH.NET and WinSCP. The test file is an 843 KB file.
Using VS 2015
Code build is targeted on .NET Framework 4.5.2
OS is Windows 10 (dev machine) and Windows Server 2012 R2 (test machine)
SSH.NET version, 2016.0.0
WinSCP version, 5.9.2
SFTP server reports it is: SSH-2.0-WS_FTP-SSH_7.6.3
Test Code:
Output from command prompt session on test machine:
I'd welcome any suggestions to improve performance.
Thanks
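The real test code was attached to the issue and isn't reproduced here, but a harness along the lines described might look like this (`download` is a stand-in for whichever client call, SSH.NET or WinSCP, is being measured):

```python
import time

# Minimal sketch of a head-to-head timing harness like the one described.
# The actual test code was attached to the issue; `download` here is a
# stand-in for whichever client call is being measured.

def benchmark(download, iterations=50):
    times_ms = []
    for _ in range(iterations):
        start = time.perf_counter()
        download()
        times_ms.append((time.perf_counter() - start) * 1000)
    return sum(times_ms) / len(times_ms)

# Example with a dummy 1 ms "download".
avg_ms = benchmark(lambda: time.sleep(0.001), iterations=5)
print(f"average: {avg_ms:.1f} ms over 5 iterations")
```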