Merged
2 changes: 1 addition & 1 deletion setup.cfg

@@ -28,7 +28,7 @@ install_requires=
     dictdiffer>=0.8.1
     pygtrie>=2.3.2
     shortuuid>=0.5.0
-    dvc-objects==0.16.0
+    dvc-objects==0.17.0
     diskcache>=5.2.1
     nanotime>=0.5.2
     attrs>=21.3.0
16 changes: 11 additions & 5 deletions src/dvc_data/hashfile/gc.py

@@ -23,18 +23,24 @@ def _is_dir_hash(_hash):
         return _hash.endswith(HASH_DIR_SUFFIX)

     removed = False
-    # hashes must be sorted to ensure we always remove .dir files first
Review thread on the removed comment line:

Contributor Author (@daavoo):

Just a doubt I have: why do .dir files have to be removed first?
Collaborator (@skshetry), Dec 27, 2022:

We consider .dir files (and the tracked objects inside them) to be one single unit. We use this for optimization (assumptions such as: if the .dir file exists in the remote, all of the directory contents also exist, etc.).

https://github.com/iterative/dvc-data/blob/e0d19abd5d25525d8d4bc0068c9ac748f3c2aad6/src/dvc_data/hashfile/status.py#L63-L64

We also try to upload all directory contents first, and only if they succeed do we upload the .dir files.

So ideally here, we should delete the .dir file first, then all of its contents in bulk, and so on for every directory object. Then do a bulk delete for the remaining HashFile objects.
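A rough sketch of that per-directory ordering, for illustration only. gc_ideal, its arguments, and the iter_dir_contents helper are hypothetical names, not existing dvc-data API; only odb.oid_to_path and odb.fs.remove come from the code in this PR.

# Hypothetical sketch (not part of this PR): delete each unused .dir object
# first, then its contents in one bulk call, and finally bulk-delete the
# remaining loose HashFile objects.
def gc_ideal(odb, unused_dir_hashes, unused_file_hashes, iter_dir_contents):
    removed = False
    for dir_hash in unused_dir_hashes:
        # Remove the .dir object before its contents so the "if the .dir
        # exists, its contents exist" assumption is never violated mid-gc.
        odb.fs.remove(odb.oid_to_path(dir_hash))
        content_paths = [odb.oid_to_path(h) for h in iter_dir_contents(dir_hash)]
        if content_paths:
            odb.fs.remove(content_paths)  # bulk delete per directory
        removed = True

    loose_paths = [odb.oid_to_path(h) for h in unused_file_hashes]
    if loose_paths:
        odb.fs.remove(loose_paths)  # bulk delete the remaining loose objects
        removed = True
    return removed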

Contributor Author (@daavoo):

> So ideally here, we should delete the .dir file first, then all of its contents in bulk, and so on for every directory object. Then do a bulk delete for the remaining HashFile objects.

Should we open a separate issue for that?

This PR just keeps the previous behavior, but benefits from filesystems that implement bulk delete, like gcsfs and s3fs.
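For context on the bulk-delete point: fsspec-based filesystems accept a list of paths in a single rm() call, which backends such as s3fs and gcsfs can turn into batched delete requests. A minimal, self-contained illustration using fsspec's in-memory filesystem (not dvc-data code; the /odb paths are made up):

# Illustration only: one rm() call over a list of paths, the pattern the
# new gc code uses via odb.fs.remove(paths).
import fsspec

fs = fsspec.filesystem("memory")
fs.makedirs("/odb", exist_ok=True)
paths = ["/odb/00aa", "/odb/11bb", "/odb/22cc"]
for p in paths:
    fs.pipe_file(p, b"dummy object")  # create a few fake objects

fs.rm(paths)  # a single bulk call instead of one call per path
assert not any(fs.exists(p) for p in paths)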

Contributor:

@daavoo You probably don't want to remove that comment completely.

Contributor Author (@daavoo):

Would it be better to leave a # TODO: comment to implement what @skshetry described above?

Collaborator (@skshetry), Jan 17, 2023:

@daavoo, let's just leave it out. It's harder to associate later in practice (e.g., what if there are overlaps between different .dir files, etc.).

It is unsafe, but gc always is.

Contributor (@pmrowla), Jan 17, 2023:

As long as we remove .dir files before all other files, gc should be safe. The optimization isn't affected by overlaps between dirs. The only thing we assume when checking remote status is that if a .dir file exists on the remote, then all files inside that .dir exist. It would actually be more unsafe for us to remove one .dir plus its directory contents, then another .dir plus its contents, and so on, because then we would have to figure out directory overlaps.

The existing behavior (and the current state of the PR) should be fine for now.
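For illustration, the remote-status shortcut being referred to looks roughly like this (simplified sketch, not the actual dvc-data status code). Removing .dir objects before loose files during gc means the shortcut can never report contents as present after they have been deleted, whereas the reverse order could.

# Simplified sketch of the assumption discussed above (not dvc-data code):
# if the .dir object exists on the remote, assume every object listed in it
# also exists and skip the per-file existence checks.
def dir_contents_exist(fs, dir_path, content_paths):
    if fs.exists(dir_path):
        return True  # shortcut: trust the .dir object
    return all(fs.exists(p) for p in content_paths)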


-    hashes = QueryingProgress(odb.all(jobs), name=odb.path)
-    for hash_ in sorted(hashes, key=_is_dir_hash, reverse=True):
+    dir_paths = []
+    file_paths = []
+    for hash_ in QueryingProgress(odb.all(jobs), name=odb.path):
         if hash_ in used_hashes:
             continue
         path = odb.oid_to_path(hash_)
         if _is_dir_hash(hash_):
             # backward compatibility
             # pylint: disable=protected-access
             odb._remove_unpacked_dir(hash_)
-        odb.fs.remove(path)
-        removed = True
+            dir_paths.append(path)
+        else:
+            file_paths.append(path)
+
+    for paths in (dir_paths, file_paths):
+        if paths:
+            removed = True
+            odb.fs.remove(paths)

     return removed