fix in upload session #377

Merged 1 commit on Aug 5, 2021

31 changes: 16 additions & 15 deletions office365/sharepoint/actions/upload_session.py
@@ -1,5 +1,6 @@
 import os
 import uuid
+from functools import partial
 
 from office365.runtime.client_result import ClientResult
 from office365.runtime.queries.service_operation_query import ServiceOperationQuery
@@ -66,23 +67,23 @@ def _build_upload_session_query(self, response):
         :type response: requests.Response
         """
         st = os.stat(self._source_path)
-        # upload a file in chunks
-        f_pos = 0
-        fh = open(self._source_path, 'rb')
-        for piece in read_in_chunks(fh, size=self._chunk_size):
-            if f_pos == 0:
-                upload_result = self.file.start_upload(self._upload_id, piece)
-                self._upload_results.append(upload_result)
-            elif f_pos + len(piece) < st.st_size:
-                upload_result = self.file.continue_upload(self._upload_id, f_pos, piece)
-                self._upload_results.append(upload_result)
Contributor Author:
@vgrem
Also, I am questioning the need to store the result: it is not used anywhere, yet it consumes memory. Or am I missing the place where it is utilized?

Owner (@vgrem), Aug 5, 2021:
Greetings! First of all, thank you for catching it! I agree: as implemented in the current version, nothing prevents the whole file from being read into memory. That contradicts the point of the lazy-load approach, which was supposed to bring efficiency in terms of memory consumption.

And last but not least, thank you for the proposal and the PR; let's merge it and see if it works as expected.
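
For context, the lazy-read idiom this PR switches to can be sketched as follows (the file name and chunk size are illustrative, not taken from the PR):

from functools import partial

CHUNK_SIZE = 1024 * 1024  # illustrative: 1 MB per chunk

total = 0
with open('large_file.bin', 'rb') as fh:
    # iter(callable, sentinel) calls fh.read(CHUNK_SIZE) repeatedly and stops
    # once it returns b'' (end of file), so at most one chunk is held in
    # memory at a time.
    for piece in iter(partial(fh.read, CHUNK_SIZE), b''):
        total += len(piece)  # stand-in for the per-chunk upload call
print(total)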

Owner:
Speaking of storing the results: right call! 👍

Contributor Author:
Great, thank you!

-            else:
-                self._return_type = self.file.finish_upload(self._upload_id, f_pos, piece)
-            f_pos += len(piece)
-        fh.close()
 
         if callable(self._chunk_uploaded):
             self.context.after_execute(self._process_chunk_upload)
+        # upload a file in chunks
+        f_pos = 0
+        with open(self._source_path, 'rb') as fh:
+            for piece in iter(partial(fh.read, self._chunk_size), b''):
+                if f_pos == 0:
+                    upload_result = self.file.start_upload(self._upload_id, piece)
+                    self._upload_results.append(upload_result)
+                elif f_pos + len(piece) < st.st_size:
+                    upload_result = self.file.continue_upload(self._upload_id, f_pos, piece)
+                    self._upload_results.append(upload_result)
+                else:
+                    self._return_type = self.file.finish_upload(self._upload_id, f_pos, piece)
+                f_pos += len(piece)
 
         self.context.execute_query()

     def _process_chunk_upload(self, resp):
         """
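
For reference, the chunk-dispatch logic introduced above can be summarized in isolation as follows. This is a sketch, not the library's API: start, cont and finish stand in for File.start_upload, File.continue_upload and File.finish_upload.

import os
from functools import partial


def upload_in_chunks(path, chunk_size, start, cont, finish):
    """Mirror the loop above: route each chunk by its file offset."""
    size = os.stat(path).st_size
    f_pos = 0
    with open(path, 'rb') as fh:
        for piece in iter(partial(fh.read, chunk_size), b''):
            if f_pos == 0:
                start(piece)  # the first chunk opens the upload session
            elif f_pos + len(piece) < size:
                cont(f_pos, piece)  # middle chunks are appended at their offset
            else:
                finish(f_pos, piece)  # the final chunk closes the session
            f_pos += len(piece)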