add optional allocator #344
Conversation
After processing a packet we keep the allocated slices in memory and reuse them for new packets.

Slices are allocated in:

- recvPacket
- when we receive a sshFxpReadPacket (downloads)

The allocated slices have a fixed size = maxMsgLength. Allocated slices are referenced by the request order id and are marked for reuse after a request is served in maybeSendPackets. The allocator is added to the packetManager struct and is cleaned at the end of the Serve() function. This allocation mode is optional and disabled by default.

I tested several different approaches (please take a look at my branches); the included benchmark shows that this is the best approach. Here are some profiling results:

- Upload 1GB file
- Download 1GB file

Fixes #334
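A minimal sketch of the reuse scheme described above, under the assumption of a fixed page size; the identifiers and the maxMsgLength value here are illustrative, not the PR's exact code:

```go
package sftp

import "sync"

// maxMsgLength is assumed here; the real value comes from the package.
const maxMsgLength = 256 * 1024

// allocator sketches the scheme: fixed-size pages are keyed by the
// request order id while in use, then returned to a free list once served.
type allocator struct {
	sync.Mutex
	available [][]byte            // pages ready for reuse
	used      map[uint32][][]byte // in-flight pages, keyed by order id
}

func newAllocator() *allocator {
	return &allocator{used: make(map[uint32][][]byte)}
}

// GetPage returns a maxMsgLength-sized slice, reusing a free one if possible.
func (a *allocator) GetPage(requestOrderID uint32) []byte {
	a.Lock()
	defer a.Unlock()

	var page []byte
	if n := len(a.available); n > 0 {
		page = a.available[n-1]
		a.available = a.available[:n-1]
	} else {
		page = make([]byte, maxMsgLength)
	}
	a.used[requestOrderID] = append(a.used[requestOrderID], page)
	return page
}

// ReleasePages moves a served request's pages back to the free list
// (the sketch equivalent of what happens in maybeSendPackets).
func (a *allocator) ReleasePages(requestOrderID uint32) {
	a.Lock()
	defer a.Unlock()
	a.available = append(a.available, a.used[requestOrderID]...)
	delete(a.used, requestOrderID)
}

// Free drops every page; called once at the end of Serve().
func (a *allocator) Free() {
	a.Lock()
	defer a.Unlock()
	a.available = nil
	a.used = make(map[uint32][][]byte)
}
```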
Force-pushed from f5a3cb4 to 083e69c
@puellanivis thank you for your review. Please let me know if I need to change anything else.
Other minor changes as per review comments
We can keep compatibility by removing the error return value from RequestServerOption.
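For illustration, a sketch of what an error-free functional option looks like; the stub types and the WithAllocator name are assumptions for this example, not necessarily the PR's final API:

```go
// Stub types for the sketch; the real RequestServer and packetManager
// live in the package.
type packetManager struct{ alloc *allocator }
type RequestServer struct{ pktMgr *packetManager }

// RequestServerOption configures a RequestServer in place. Returning no
// error keeps the existing option signature backwards compatible, since
// callers never have to handle a failure from an option.
type RequestServerOption func(*RequestServer)

// WithAllocator enables slice reuse (illustrative name and field).
func WithAllocator() RequestServerOption {
	return func(rs *RequestServer) {
		rs.pktMgr.alloc = newAllocator()
	}
}
```

A caller would then pass the option at construction time, e.g. `NewRequestServer(conn, handlers, WithAllocator())` (assumed usage).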
Wanted to add some feedback on circumstances on my end. I do intend to evaluate the last changes, and you’ve probably covered all the important stuff. But work has been nuts with the COVID stuff, and work from home, and everything has just been quite hectic here, and I have not been able to give this the attention it deserves. I’m sure I’ll be back to being able to provide the help to get this improvement put in soon.
@puellanivis no problem, take your time, and thanks for all your suggestions.
The allocator is now enabled in SFTPGo. I don't expect any issues; in any case, this patch will now be tested by more users. Thanks.
Awesome. I think the code is probably all set here, so with some actual real-world testing, we can then have extra safety on top.
@puellanivis Do you think this is good to merge? I keep meaning to review it but I've just had no time. I had a bit today and did a quick skim review and things looked good. I'll trust your opinion and if you think it is ready, please go ahead and merge it (or let me know and I can do it if you don't want to).
Yeah, I think any concerns I would have would be lint-like comments anyway. The solid chunk of concerns have all been addressed.
A polite ping here: I would like to release SFTPGo 1.0.0 before July 15. Do you think this patch can be merged before then?
We need a POSIX path; filepath.IsAbs can give unexpected results on Windows.
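A small runnable illustration of the difference, using only the standard library:

```go
package main

import (
	"fmt"
	"path"          // slash-separated (POSIX-style) paths
	"path/filepath" // host-OS path rules
)

func main() {
	const p = "/home/user/file.txt"

	// On Windows, filepath.IsAbs reports false here: the path is rooted
	// but has no volume name (e.g. "C:"). SFTP paths are always
	// slash-separated, so path.IsAbs gives the intended answer.
	fmt.Println(filepath.IsAbs(p)) // false on Windows, true on Unix
	fmt.Println(path.IsAbs(p))     // true everywhere
}
```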
Oh, sorry. I was up to my eyeballs in stuff as of late.
We’ll probably need to release at least a version 1.11.1, right? Or should we instead do a 1.12? I think we’re reasonably backwards compatible, so all existing users of the code will still get the same behavior if untouched, right?
Thank you! Yes, it should be fully backwards compatible. Please note that I pushed #355 to this branch by mistake (without the test case), since I included it in SFTPGo 1.0.0. Do you want a patch to revert it? Or a patch to add the test case only? Sorry, my bad.
Uh, depends on whether it’s actually a bug or not. Various parts of the SFTP standard dictate that we should be using POSIX paths.
I think so. Not so much due to compatibility issues, but to help highlight all the performance work drakkan contributed. I'm going to add a 1.12.0 release milestone and add all the relevant issues/PRs to it, so we have some semblance of a changelog.