Keep others’ IPNS records alive #1958
You'll be able to pin IPNS records like anything else once we have IPRS
Awesome
Waiting for this feature 👍
But doesn't it make more sense if they are automatically pinned by nodes? Or would it be too resource-heavy?
Consider that, if pinned, those records would have to be updated constantly via signatures, etc.
The issue here is that the signature on IPNS records currently expires and random nodes won't be able to re-sign them as they'd need the associated private key. We expire them because the DHT isn't persistent and will eventually forget these records anyways. When it does, an attacker would be able to replay an old IPNS record from any point in time.
Is it really considered more dangerous than the possibility of everything published under a certain IPNS key practically disappearing if the one (just one!) publisher node holding its private key disappears too? Doesn't this publisher node look like a central point of failure? Are outdated but valid records really worse than no records at all? I don't think the ability to replay is a critical security issue, at least as long as the user is explicitly notified that the obtained result could be outdated. After all, «it will always return valid records (even if a bit stale)», as mentioned in the 0.4.18 changelog. So what do you think about
@lockedshadow I've been thinking about (and discussing) this and, well, you're right. Record authors should be able to specify a timeout, but there's no reason to remove expired records from the network. Whether or not to accept an expired record would be up to the client.
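A minimal sketch of what "up to the client" could look like, assuming a simplified record shape: the real IPNS record is a protobuf whose validity field carries an EOL timestamp, and the `ipnsRecord` struct and `acceptRecord` function below are hypothetical names used only for illustration.

```go
package main

import (
	"fmt"
	"time"
)

// ipnsRecord is a hypothetical, simplified view of an IPNS record:
// only the fields relevant to the expiry decision are modeled.
type ipnsRecord struct {
	Value string    // e.g. the "/ipfs/<cid>" path the name currently points to
	EOL   time.Time // end-of-life parsed from the record's validity field
}

// acceptRecord implements the client-side policy discussed above:
// expired records are not silently discarded; the caller is told the
// result may be stale and decides what to do with it.
func acceptRecord(rec ipnsRecord, now time.Time) (value string, stale bool) {
	return rec.Value, now.After(rec.EOL)
}

func main() {
	rec := ipnsRecord{
		Value: "/ipfs/bafy...",                 // placeholder path
		EOL:   time.Now().Add(-48 * time.Hour), // expired two days ago
	}
	value, stale := acceptRecord(rec, time.Now())
	if stale {
		fmt.Println("warning: record is past its EOL, result may be outdated:", value)
	} else {
		fmt.Println("fresh record:", value)
	}
}
```

The point is that the expiry check becomes advisory: the value is still returned, and the caller decides whether "stale" is acceptable for its use case.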
@Stebalien What is the best way to go about introducing this change to the protocol?
@T0admomo since this is mostly a client and UX change rather than a spec one, I would propose what the UX should be, along with the various changes that would need to happen in order to enable it. Some of the work here is in ironing out the UX, and some is in implementation. Discussing your proposed plan in advance makes it easier to ensure that your work is likely to be reviewed and accepted.
According to the IPNS spec, the signature contains the concatenated … That means that as long as … Moreover, since … Am I understanding this correctly?
I think this could be an attack vector, as a malicious node could publish a lot of signed records with near-infinite validity. They would accumulate on the DHT and clog it sooner or later, and never be flushed out. So other clients need to reject very old records, even if the original publisher wanted them to have very long validity. (An attacker could also spawn many nodes and publish records from them, with the same effect.)
I recently read that DHT nodes will drop stored values after ~24 hours, no matter what Lifetime and TTL you set. So it's not really possible to clog the DHT or use this as an attack vector. As far as I understand, clients don't reject old records as they have no way of knowing a record's age; they just drop them after 24 hours, when a newer sequence number comes, or once they expire (the earliest of the three).
I believe that this is what Fierro allows you to do, though without any malicious intent.
Yes, you're right. Dropping records is not based on age, I oversimplified. The point is that they are not in the DHT after some time if they are not republished, so they can't accumulate.
Yes, but since records are dropped by clients after about 24 hours, they still can't accumulate
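For concreteness, here is a small sketch restating the drop conditions described a few comments above (local age limit, superseded sequence number, past EOL, whichever comes first). It only illustrates that description, not Kubo's actual code, and the 24-hour constant is the approximate figure quoted above.

```go
package main

import (
	"fmt"
	"time"
)

// localMaxAge is the approximate local retention window mentioned above.
const localMaxAge = 24 * time.Hour

// shouldDrop returns true as soon as any of the three conditions holds:
// the record has been held locally for too long, a newer sequence number
// has been seen, or the record itself has passed its EOL.
func shouldDrop(receivedAt, eol time.Time, seq, newestSeenSeq uint64, now time.Time) bool {
	return now.Sub(receivedAt) > localMaxAge ||
		newestSeenSeq > seq ||
		now.After(eol)
}

func main() {
	now := time.Now()
	// Held for 30h: dropped locally even though its EOL is a month away.
	fmt.Println(shouldDrop(now.Add(-30*time.Hour), now.Add(720*time.Hour), 1, 1, now))
}
```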
When keeping someone else's IPNS record alive, what do you do when you learn about a new record for the same name? I see these possibilities:
An IPNS record is typically of little use without the data to which it points. I guess that in many applications, someone keeping the IPNS name alive might also want to (recursively) keep the pointed-to data alive ("recursive pinning"). If you've recursively pinned a name and you receive an update for that name, that would make you unpin the old pointed-to data and pin the new pointed-to data. One potential issue with this is that the new data might be arbitrarily large, and therefore much larger than the storage space you'd be willing to spend on it.

"Pinning the record" does not have this issue. There are applications where receiving old data isn't harmful, and where receiving old data is always better than receiving no data. For such applications, "pinning the record" might be the preferred choice, in combination with an application process that gets to decide what to do with a record update. It might, for instance, make an application-level choice to pin only certain parts of the pointed-to DAG, to stay below a storage quota. And only once the pointed-to data is (partially) downloaded and pinned would the application replace the old pinned record with the new one.
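A rough sketch of the "recursive pinning" update path described above, assuming a local Kubo daemon on the default RPC port (127.0.0.1:5001). The `/pin/add` and `/pin/rm` RPC endpoints mirror the CLI's `ipfs pin add`/`ipfs pin rm`; the application-level quota decision from the comment is deliberately left out, and the paths in `main` are placeholders.

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
)

const api = "http://127.0.0.1:5001/api/v0"

// rpc POSTs a single Kubo RPC call with query-string arguments and
// reports non-200 responses as errors.
func rpc(path string, args url.Values) error {
	resp, err := http.Post(api+path+"?"+args.Encode(), "", nil)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("%s: %s", path, resp.Status)
	}
	return nil
}

// swapPins pins the target of a newly received record and only then
// unpins the previously pinned target, so the old data stays available
// if fetching the new data fails or is rejected by the application.
func swapPins(oldTarget, newTarget string) error {
	if err := rpc("/pin/add", url.Values{"arg": {newTarget}, "recursive": {"true"}}); err != nil {
		return err
	}
	if oldTarget == "" || oldTarget == newTarget {
		return nil
	}
	return rpc("/pin/rm", url.Values{"arg": {oldTarget}})
}

func main() {
	// Placeholder paths; in practice these come from resolving the IPNS name.
	if err := swapPins("/ipfs/bafyOld...", "/ipfs/bafyNew..."); err != nil {
		fmt.Println("pin swap failed:", err)
	}
}
```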
As a poor man's solution, wouldn't it be possible to have an application run alongside Kubo, which periodically polls Kubo for the name? If I understand correctly, Kubo caches objects for 24 hours after the last time they were touched, so if the application asks for the name every 12 hours, say, it'll always stay in Kubo's cache. As a bonus, the application could store a copy of the latest record received for the name. If Kubo somehow still loses the name, the application can re-upload the last-known record to Kubo[*]. This would double the storage requirement for names, but name records shouldn't be that big.

[*] apparently
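Under those assumptions, a keep-warm poller could look roughly like this. It talks to a local Kubo daemon's RPC API via `/api/v0/name/resolve`; the IPNS name and the 12-hour interval are placeholders, and the sketch stores the resolved path rather than the raw signed record (re-uploading the record itself, as in the footnote, would go through the routing get/put commands instead, if I'm not mistaken).

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
	"net/url"
	"time"
)

const api = "http://127.0.0.1:5001/api/v0"

// resolveName asks the local Kubo daemon to resolve an IPNS name and
// returns the /ipfs/... path from the JSON response.
func resolveName(name string) (string, error) {
	args := url.Values{"arg": {name}}
	resp, err := http.Post(api+"/name/resolve?"+args.Encode(), "", nil)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return "", fmt.Errorf("name/resolve: %s", resp.Status)
	}
	var out struct{ Path string }
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		return "", err
	}
	return out.Path, nil
}

func main() {
	const name = "/ipns/k51qzi5uqu5d..." // placeholder IPNS name
	var lastKnown string                 // last successfully resolved path

	for {
		if path, err := resolveName(name); err != nil {
			fmt.Println("resolve failed, keeping last known value:", lastKnown, "-", err)
		} else {
			lastKnown = path
			fmt.Println(name, "->", lastKnown)
		}
		time.Sleep(12 * time.Hour)
	}
}
```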
Periodically get and store the IPNS record and keep serving the latest seen version to the network until the record’s EOL.
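That proposal could be sketched against go-libp2p's generic `routing.ValueStore` interface (which Kubo's DHT client implements), roughly as below. Constructing and bootstrapping an actual DHT client is omitted, the `key` is assumed to be the name's routing key, and `eol` would come from parsing the record's validity field; this is an assumption-laden sketch, not working Kubo code.

```go
package keepalive

import (
	"context"
	"time"

	"github.com/libp2p/go-libp2p/core/routing"
)

// KeepAlive fetches a record once and periodically puts the same bytes
// back into the DHT until the record's EOL. No private key is needed:
// the signature covers the record's contents, so republishing it
// unchanged keeps it valid.
func KeepAlive(ctx context.Context, vs routing.ValueStore, key string, eol time.Time, every time.Duration) error {
	rec, err := vs.GetValue(ctx, key)
	if err != nil {
		return err
	}
	ticker := time.NewTicker(every)
	defer ticker.Stop()
	for time.Now().Before(eol) {
		if err := vs.PutValue(ctx, key, rec); err != nil {
			return err
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
	return nil
}
```

Past the EOL the record would be rejected as invalid by peers anyway, which is exactly the boundary the earlier comments argue about.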