
please revert to stable versions until 0.8rc bugs are fixed #40

Closed
rpodgorny opened this issue Jan 4, 2021 · 4 comments
Assignees
Labels
bug (Something isn't working), task

Comments

@rpodgorny

...since "going offline" just ruined updates on all my machines. :-(

@rpodgorny rpodgorny mentioned this issue Jan 5, 2021
@rpodgorny
Author

...any reaction? According to what I see in ipfs/kubo#7707, progress on the 0.8 release is taking forever, so I don't think waiting for the release is the way to go. :-(

Can I help you with the downgrade somehow?

@guysv

guysv commented Jan 19, 2021

What we really need is a second cluster.

It's silly that the pacman-on-ipfs premise was decentralising package delivery and yet here we are stuck with a new single point of failure. Ruben can't take on that burden alone.

If we make sure the second cluster publishes the same CIDs as Ruben's, then they aren't even competing: members of both clusters will share their packages with whoever seeks them.

@RubenKelevra
Owner

RubenKelevra commented Jan 19, 2021

Hey guys,

sorry for the inconvenience, really! I had trouble with my machines as well. :/

We don't need a second cluster; what we need is to have the CIDs of the packages integrated directly into the package databases.

I talked about this a while ago on the pacman-dev mailing list, but it was more or less rejected:

https://lists.archlinux.org/pipermail/pacman-dev/2020-April/024179.html

This would still require the databases, as well as the packages, to be stored centrally in a cluster, but in the best case it would be run by an "official" team instead of me.
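As a sketch of what that integration could look like: a package entry in the sync database's desc file might carry the package's CID next to the existing checksum fields. The %IPFSCID% field name and the values below are purely hypothetical illustrations, not part of any pacman format or of the mailing-list proposal:

```
%FILENAME%
example-1.0-1-x86_64.pkg.tar.zst

%SHA256SUM%
<existing checksum>

%IPFSCID%
<hypothetical field: the package's IPFS CID, letting clients fetch from any peer>
```

A client with such a field could fall back to any IPFS gateway or peer when the HTTP mirrors are unreachable.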

This way the IPFS update push would happen automatically, and updates would be faster and somewhat more reliable than an rsync-to-ipfs script written by some random guy on the internet.

It would also allow multiple writing servers on the same cluster, which could apply updates seamlessly: when they perform the same update, the cluster simply merges them as the same change. This would completely eliminate any single point of failure.
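The merge behaviour described above follows from content addressing: a CID is derived from the bytes it names, so two writers that produce byte-identical output produce identical CIDs. A minimal sketch of that property, using plain sha256sum as a stand-in for real CID generation (which additionally encodes codec and chunking):

```shell
# Two writers producing byte-identical package archives get identical
# content hashes, so a content-addressed cluster sees one change, not two.
printf 'same package bytes' > writer-a.pkg
printf 'same package bytes' > writer-b.pkg
a=$(sha256sum writer-a.pkg | cut -d' ' -f1)
b=$(sha256sum writer-b.pkg | cut -d' ' -f1)
[ "$a" = "$b" ] && echo "merged as one change"
```
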


Anyway, I tracked the issue down to the garbage collector yesterday with some confidence. So the garbage collector isn't safe to use, regardless of whether it's 0.6 or 0.8. But this only affects the "import server".

This is good news: we don't have to downgrade at all, and I can avoid the issue in the future by deactivating the garbage collector. The server has enough space to run the cluster for a year without any garbage collection, so no issue there either.
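For reference, a sketch of what "deactivating the garbage collector" could mean, assuming go-ipfs behaviour around the 0.8 era: automatic GC only runs when the daemon is started with --enable-gc, so omitting that flag (and never running a manual GC) keeps all blocks in the repo:

```shell
# Automatic GC runs only when the daemon is started with --enable-gc,
# so a plain start leaves the garbage collector deactivated:
ipfs daemon

# What NOT to run on the import server while the bug is unfixed:
#   ipfs daemon --enable-gc    # periodic GC (per Datastore.GCPeriod)
#   ipfs repo gc               # one-shot manual GC
```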

I'll now start resetting the cluster content; it should be up and running again in a few hours. Zero interventions are necessary on the cluster-member side.

@RubenKelevra RubenKelevra self-assigned this Jan 19, 2021
@RubenKelevra RubenKelevra added the bug, task labels Jan 19, 2021
@RubenKelevra
Owner

The issue is "fixed" and everything is reimported. :)

Happy updating!
