Cannot restart daemon after files API ops and repo gc
#2698
The final command, `ipfs daemon`, will hang. When I press Ctrl-C I get:

This is very likely related to #2697.

Note: after the steps above, nothing will work, even when offline:
To fix this I need to somehow remove the key "/local/filesroot" from ".ipfs/datastore/". Or just start over by killing the ".ipfs/datastore" directory, or, if I don't care about the cache or my node id, just kill ".ipfs" and do an `ipfs init`.
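For anyone wanting to apply that datastore workaround programmatically, here is a minimal one-off sketch in Go. It assumes the default repo location (`~/.ipfs`), a leveldb datastore, and a recent go-datastore API where `Delete` takes a context (older releases omit it); it is illustrative, not an official ipfs tool. Stop the daemon and back up the repo before trying anything like this.

```go
// delfilesroot.go -- one-off repair sketch: removes the "/local/filesroot"
// key from the repo's leveldb datastore so the daemon can start again.
package main

import (
	"context"
	"fmt"
	"log"
	"os"
	"path/filepath"

	ds "github.com/ipfs/go-datastore"
	leveldb "github.com/ipfs/go-ds-leveldb"
)

func main() {
	home, err := os.UserHomeDir()
	if err != nil {
		log.Fatal(err)
	}
	// Assumes the default repo path; adjust if IPFS_PATH is set elsewhere.
	dsPath := filepath.Join(home, ".ipfs", "datastore")

	store, err := leveldb.NewDatastore(dsPath, nil)
	if err != nil {
		log.Fatal(err)
	}
	defer store.Close()

	key := ds.NewKey("/local/filesroot")
	if err := store.Delete(context.Background(), key); err != nil {
		log.Fatal(err)
	}
	fmt.Println("deleted", key)
}
```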
Can you also give us the output of `ipfs version --commit`?
Hrm... We should probably do things a little differently. The files API isn't designed to pin content added to it, but we should probably 'pin' at least the directories (or maybe the top-level directory) to ensure things like this don't happen.
If something is accessible via the files API, I as a user would not want it to be garbage collected. See #2697. A recursive pin on the root should indirectly pin anything accessible via the files API. If a file is appended to or modified, then I don't care if the old content gets garbage collected; it is just the current version I care about.
Think of this use case: I have a large (TB-scale) directory I want to be able to work on without having to have it all locally. I can do

Pinning definitely needs some work though, and I think the solution here is to have a files-API-specific pinning command that pins content by path instead of by hash directly.
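A pin-by-path could look something like the sketch below: the pin records the files API path and re-resolves it to a hash whenever the GC needs the protected roots, so it always covers the current version. The `resolver` interface and `pathPin` type are hypothetical, not part of go-ipfs.

```go
package pinsketch

import (
	"context"

	cid "github.com/ipfs/go-cid"
)

// resolver maps a files API path to the CID it currently points at.
// Hypothetical; stands in for the files API's own path resolution.
type resolver interface {
	ResolvePath(ctx context.Context, path string) (cid.Cid, error)
}

// pathPin stores the path rather than freezing a hash.
type pathPin struct {
	Path string // e.g. a files API path like "/mydir/data"
}

// Root re-resolves at collection time, so the protected DAG follows the
// file as it is rewritten, and old versions become collectable.
func (p pathPin) Root(ctx context.Context, r resolver) (cid.Cid, error) {
	return r.ResolvePath(ctx, p.Path)
}
```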
Then we need some way to distinguish between files added locally via the files API and files copied/linked in from somewhere else. The latest version of any file added via `files write` should also be pinned, while old versions should be allowed to be garbage collected.
We could do a 'best effort' pin on the files API content. Essentially, when gathering pins, we take every block referenced by the files API that is local and add them to the 'do not gc' set.
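As a rough illustration of that gathering step, here is a sketch (not the actual go-ipfs GC code) that walks the DAG under the files root and collects every locally present block into the keep-set, silently skipping anything that is not local. It assumes the current go-ipfs-blockstore interfaces (with context parameters) and that all nodes are dag-pb, which held for files API content at the time.

```go
package gcsketch

import (
	"context"

	cid "github.com/ipfs/go-cid"
	blockstore "github.com/ipfs/go-ipfs-blockstore"
	merkledag "github.com/ipfs/go-merkledag"
)

// bestEffortPinSet walks the DAG below the files API root and adds every
// block that is locally present to the do-not-GC set. Blocks that are
// not local are skipped rather than treated as errors -- that is the
// "best effort" part.
func bestEffortPinSet(ctx context.Context, bs blockstore.Blockstore, root cid.Cid, keep *cid.Set) error {
	ok, err := bs.Has(ctx, root)
	if err != nil {
		return err
	}
	if !ok {
		return nil // not local: skip instead of failing the GC
	}
	if !keep.Visit(root) {
		return nil // already collected via another path
	}
	blk, err := bs.Get(ctx, root)
	if err != nil {
		return err
	}
	// Assumes dag-pb (unixfs) nodes throughout.
	nd, err := merkledag.DecodeProtobuf(blk.RawData())
	if err != nil {
		return err
	}
	for _, lnk := range nd.Links() {
		if err := bestEffortPinSet(ctx, bs, lnk.Cid, keep); err != nil {
			return err
		}
	}
	return nil
}
```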
This is quite a good idea in my opinion.
Doing a 'best effort' pin would work. The only problem is, in the use case of a large (TB-scale) directory, if parts of the directory in any way make their way into the local cache, there will be no way to remove them locally from the cache without also removing them from the files API. For now, I guess, we can ignore this problem, as I don't think it will come up that often.
Yeah, @kevina, I can see that being a problem, but I think you're right, it likely won't come up that often yet.
Okay, should I try to implement something? For now I will just add a special case and read the "/local/filesroot" key.
@whyrusleeping @Kubuxu I would like to move forward on this. Eventually I would like a new general-purpose 'best effort' pin type, as that will work nicely with my new filestore (#2634), but I can start small and just see how much work it will be to support a best-effort pin as a special case for the files API. Does this need more input from others before something is implemented?
@kevina: @whyrusleeping is on vacation at the moment, and he should be back early next week. Just so you know!
@kevina we probably can start by just augmenting what we pass into the GC function with the computed 'best effort' pinset from the files API. I agree though that having a 'best effort' pin type would be really nice, and we should get to that at some point.
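Concretely, the augmentation could look like the sketch below, continuing the earlier example (same package, reusing `bestEffortPinSet`); `loadFilesRoot` and `runGC` are placeholders for the real go-ipfs plumbing, not actual APIs.

```go
// Sketch of the GC call site with the files API folded in: ordinary pin
// roots are kept unconditionally, then the best-effort files set is
// unioned in before the sweep.
func collectGarbage(ctx context.Context, bs blockstore.Blockstore, pinRoots []cid.Cid) error {
	keep := cid.NewSet()
	for _, r := range pinRoots {
		keep.Add(r) // ordinary pins (a real GC also walks their children)
	}

	filesRoot, found, err := loadFilesRoot(ctx) // placeholder: reads "/local/filesroot"
	if err != nil {
		return err
	}
	if found {
		// Best effort: protect whatever parts of the files DAG are local.
		if err := bestEffortPinSet(ctx, bs, filesRoot, keep); err != nil {
			return err
		}
	}

	return runGC(ctx, bs, keep) // placeholder: deletes every block not in keep
}
```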
Closes ipfs#2697. Closes ipfs#2698.
License: MIT
Signed-off-by: Kevin Atkinson <[email protected]>
I have the same issue when executing `ipfs daemon`:

```
tonycai@dolphin:~$ ipfs version --commit
```

How to fix it? Thank you so much!