please revert to stable versions until 0.8rc bugs are fixed #40
...since "going offline" just ruined updates on all my machines. :-(
Comments
...any reaction? According to what I see in ipfs/kubo#7707, progress on the 0.8 release is taking forever, so I think waiting for the release isn't the way to go. :-( Can I help you with the downgrade somehow?
What we really need is a second cluster. It's silly that the pacman-on-ipfs premise was decentralising package delivery, and yet here we are, stuck with a new single point of failure. Ruben can't take on that burden alone. If we make sure the second cluster publishes the same CIDs as Ruben's one, then they aren't even competing: members of both clusters will share their packages with whoever seeks them.
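One way to check that two independently run import servers really do end up with the same CIDs is to hash the packages without importing them and compare the output. A minimal sketch, assuming both sides use identical add settings (chunker, CID version, raw leaves); the package path and CID options here are illustrative assumptions, not the cluster's actual configuration:

```sh
#!/usr/bin/env bash
# Hypothetical sketch: compute the CID each package *would* get, without
# importing it, so two independent clusters can verify they agree.
set -euo pipefail

PKG_DIR="/var/cache/pacman/pkg"   # assumed package location

for pkg in "$PKG_DIR"/*.pkg.tar.zst; do
    # --only-hash computes the CID without writing blocks to the local repo;
    # -Q prints only the resulting hash.
    cid=$(ipfs add --only-hash -Q --cid-version 1 --raw-leaves "$pkg")
    printf '%s  %s\n' "$cid" "$(basename "$pkg")"
done
```

If both sides print the same lines, a second cluster pinning those CIDs just duplicates the content, and the two clusters never diverge.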
Hey guys, sorry for the inconvenience, really! I had trouble with my machines as well. :/

We don't need a second cluster; what we need is to have the CIDs of the packages integrated directly into the package databases. I talked about this a while ago on the pacman-dev mailing list, but it was more or less rejected: https://lists.archlinux.org/pipermail/pacman-dev/2020-April/024179.html

This would still require the databases, as well as the packages, to be stored centrally in a cluster, but in the best case it would be done by an "official" team instead of me. That way the IPFS update push would happen automatically, and updates would be faster and somewhat more reliable than an rsync-to-ipfs script written by some random guy on the internet. It would also allow multiple writing servers on the same cluster to do updates seamlessly: when they perform the same update, the cluster just merges it as the same change. This means we can completely eliminate any single point of failure.

Anyway, I tracked down the issue to the garbage collector yesterday with some confidence. So it's not safe to use it, regardless of whether it's 0.6 or 0.8. But this only affects the "import server". That's good news: we don't have to downgrade at all, and I can avoid the issue in the future by deactivating the garbage collector. The server has enough space to run the cluster for a year without any garbage collection, so no issue there either.

I'll start resetting the cluster content now; it should be up and running again in a few hours. There are zero interventions necessary on the cluster members' side.
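For reference, go-ipfs only runs the garbage collector automatically when the daemon is started with `--enable-gc` (or when `ipfs repo gc` is run by hand), so "deactivating" it on the import server mostly means making sure neither of those happens and giving the repo room to grow. A hedged sketch of what that could look like; the storage limit and layout are assumptions, not the actual server setup:

```sh
# Sketch: run the import server's daemon without automatic garbage collection.
# Assumption: roughly a year's worth of packages fits within 2 TB; adjust to taste.

# Raise the soft storage limit so the repo can grow without GC pressure.
ipfs config Datastore.StorageMax "2TB"

# Start the daemon WITHOUT --enable-gc, so no automatic GC ever runs.
ipfs daemon

# And simply never invoke manual collection on this machine:
#   ipfs repo gc
```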
The issue is "fixed" and everything is reimported. :) Happy updating!