Dandelion++ Algorithm Changes #2176
This would make a lot of sense to me. The D++ paper also talks about using approximate 4-regular graphs, i.e. at the start of each epoch every node would get roughly two inbound and two outbound connections, with a rule on how to forward transactions from specific nodes (a transaction coming from a given inbound node is always forwarded to the same outbound node, as is the node's own transaction). The end of §4.2 notes:
Are we deliberately avoiding the 4-regular graph approach here due to concerns about the linkability of transactions?
I believe we went with a simplified "one outbound relay" originally after discussions with Giulia Fanti.
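For comparison, here is a rough sketch of what per-epoch state for the approximate 4-regular routing rule described above could look like. All names are illustrative placeholders, not Grin's actual types; this is only a sketch of the §4.2 forwarding rule, not an implementation.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashMap;
use std::hash::{Hash, Hasher};

// Illustrative placeholder for a real peer identifier.
type PeerAddr = String;

/// Per-epoch routing state for the approximate 4-regular variant.
struct EpochRouting {
    /// The two outbound relays chosen at the start of the epoch.
    outbound: [PeerAddr; 2],
    /// Which of the two relays our own txs are always sent to.
    own_route: usize,
    /// Fixed per-epoch mapping: each inbound peer is pinned to one relay.
    inbound_map: HashMap<PeerAddr, usize>,
    /// Per-epoch randomness so the mapping changes between epochs.
    epoch_seed: u64,
}

impl EpochRouting {
    /// All stem txs received from `from` follow the same outbound route for
    /// the whole epoch (the §4.2 forwarding rule).
    fn relay_for(&mut self, from: &PeerAddr) -> &PeerAddr {
        let seed = self.epoch_seed;
        let idx = *self.inbound_map.entry(from.clone()).or_insert_with(|| {
            let mut h = DefaultHasher::new();
            (seed, from).hash(&mut h);
            (h.finish() % 2) as usize
        });
        &self.outbound[idx]
    }

    /// Our own txs always take the same pre-selected route this epoch.
    fn relay_for_own_tx(&self) -> &PeerAddr {
        &self.outbound[self.own_route]
    }
}
```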
How would having one of two possible routes have an impact on aggregation? Based on the approach proposed in this issue, the likelihood of aggregation will depend on the number of 'sink nodes' in each epoch, i.e. the number of nodes that are set to fluff, and the time it takes for a transaction to end up at one of them. That said, I wonder whether the current approach would allow for the creation of "stem loops", where a transaction is forwarded around in circles between stem-only nodes that happen to be linked in a cycle? Is that a real issue?
I believe it has to do with the rule of forwarding multiple transactions received from a node along the same path. From the paper (§4.2):
As a circuit breaker, if you are a stem node and receive an already seen stem transaction, you can fluff it.
@Kargakis This would allow an adversary to know which transactions are in your stempool.
@quentinlesceller @antiochp in the new epoch scheme, are you able to easily tell whether your peers are stem or fluff nodes at a certain point in time, without having to closely monitor everyone? If so, you could ensure that these "stem-loop" transactions are sent to a fluff node if you happen to have such a peer, which most nodes could have, assuming the number of peers a node connects to on average (e.g. 12) is large relative to the fraction of fluff nodes in the network (e.g. one in ten).
A passive observer should be able to identify this easily, so we might as well make it part of the peer API.
@Kargakis not sure I'm following you.

Please consider an active attacker who could connect to as many nodes as possible and send messages to them. If I understood your idea correctly: if sending the same message twice to any node makes it reveal its stemming buffer, then wouldn't the attacker easily discover, via timing measurements, where a tx originated?

In general I think that any stemming node should never react to outside "signaling", since any signal may be fake. Only on an incoming non-aggregated tx being transmitted should a node act out its current epoch role (fluff or stem-forward). So instant fluffing would only occur if the incoming tx asks for it, the node's epoch role is to fluff, or the embargo ends.

The user can sabotage their own anonymity by resending a tx (thinking it was lost) while it is still being slowly stemmed. To prevent this, wallets could show a progress bar along with the string "Sending... (anonymizing xx%)" and/or offer sending via an aggregating service, fire-and-forget style.

Are there any other edge cases?
It's not immediately obvious in the Dandelion++ paper but there is a footnote on page 21 -
Other nodes would not know whether the node doing the fluffing was doing so because it had seen a loop/cycle (via seeing a tx twice) or because it had entered a new epoch and was now fluffing. So I guess this is one valid approach. The other is to wait for the embargo timer to expire (but this gets tricky, if not impossible, with the fast propagation in the proposed Dandelion++ impl).

Edit: Ooh, I think this maybe explains why we see a lot of embargo timer expirations in our current Dandelion implementation. If we see the same tx a second time in stem phase, I think we just silently drop it and do not forward it on. In which case the stem comes to a halt.
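For illustration, a minimal sketch of that "fluff on second sighting" alternative to silently dropping the duplicate; the types and function names here are placeholders, not Grin's actual API:

```rust
use std::collections::HashSet;

// Placeholder type; not Grin's actual tx hash type.
type TxHash = [u8; 32];

enum StemAction {
    ForwardToRelay, // normal stem hop to our outbound relay
    Fluff,          // broadcast immediately
}

fn on_stem_tx(seen: &mut HashSet<TxHash>, tx: TxHash, fluff_epoch: bool) -> StemAction {
    if fluff_epoch {
        // Our role this epoch is fluff, regardless of history.
        return StemAction::Fluff;
    }
    if !seen.insert(tx) {
        // Second sighting of the same stem tx: instead of silently dropping it
        // (which stalls the stem until the embargo timer fires), break the
        // loop by fluffing. Observers cannot tell this apart from a node that
        // simply entered a fluff epoch.
        return StemAction::Fluff;
    }
    StemAction::ForwardToRelay
}
```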
In the current impl in Grin? It means we can potentially aggregate txs in different ways and then forward them down multiple different paths. So we run the very real risk of having conflicting txs in the network at fluff time. Tx1 comes in and we forward it to peer B. This would not be an issue for Dandelion++ in something like Bitcoin, where it's being used purely to anonymize the entry point of the tx.
No, in your current proposal, with D++.
Hang on a sec. The paper proposes a 4-regular graph with one-to-one routing. A 4-regular graph:
Where
These routes are kept consistent and do not change throughout the duration of the epoch. So now going back to what you describe:
Based on the above behaviour, wouldn't multiple transactions coming in to
Based on
which I read as (6) being about an "artifact of our experimental setup" - bug/feature? And what @antiochp said:
It does seem we might have a privacy leak currently. Scenario:
Possible defences? I think we can't differentiate between "new, should be stemmed" and "partially stemmed, in the process of stemming"? That might be a bad idea anyway. Maybe make sure a node knows which of its peers are upstream stemming peers, and if a tx is seen twice from upstreams (logically from different upstreams, or else one of them is faulty) we fluff; otherwise we don't reveal that we've seen that tx before and just send it to our downstream stemming peer.
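To make that defence concrete, here is a rough sketch under the assumption that a node can tell which of its peers are its upstream stemming peers; all names are hypothetical, not Grin's actual types:

```rust
use std::collections::{HashMap, HashSet};

// Hypothetical types for illustration only.
type PeerAddr = String;
type TxHash = [u8; 32];

enum Action {
    StemForward, // pass to our downstream stem relay, revealing nothing
    Fluff,       // loop confirmed via two distinct upstream stem peers
}

struct StemState {
    /// Peers we believe are currently stemming towards us.
    upstream_stem_peers: HashSet<PeerAddr>,
    /// Which upstream peer we first saw each stem tx from.
    first_seen_from: HashMap<TxHash, PeerAddr>,
}

fn on_stem_tx(state: &mut StemState, tx: TxHash, from: PeerAddr) -> Action {
    if !state.upstream_stem_peers.contains(&from) {
        // Don't let arbitrary peers probe our stempool by replaying txs:
        // behave exactly as if we had never seen the tx before.
        return Action::StemForward;
    }
    let first = state.first_seen_from.entry(tx).or_insert_with(|| from.clone());
    if *first != from {
        // Seen from two different upstream stem peers: a loop (or a faulty
        // peer), so break it by fluffing.
        Action::Fluff
    } else {
        // First sighting, or a duplicate from the same upstream: keep stemming
        // without acknowledging that we've seen it.
        Action::StemForward
    }
}
```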
We cannot cut-through conflicting txs.
So we have to drop one of them. Which results in
To help my understanding, based on the setup outlined above (4-regular graphs with one-to-one routing), could you give me an example of how two conflicting txs (
Simplest scenario would be Mallory sends
But I guess this would apply whether stem routing was done with single paths or with 4-regular graphs like this. So yeah, maybe with Dandelion++ we can consider alternative stem routing without it impacting tx aggregation.
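As a side note on the conflicting-tx point, a minimal sketch of the check involved: two txs conflict when they spend the same input, and aggregation has to drop one of them because cut-through cannot reconcile a double-spend. The types are simplified stand-ins, not Grin's:

```rust
use std::collections::HashSet;

// Simplified stand-ins for Grin's actual commitment and transaction types.
type Commitment = Vec<u8>;

struct Tx {
    inputs: Vec<Commitment>,
}

/// True if `candidate` spends an input already spent by something in `pool`,
/// i.e. aggregating it with the pool would produce an invalid aggregate.
fn conflicts_with_pool(pool: &[Tx], candidate: &Tx) -> bool {
    let spent: HashSet<&Commitment> = pool.iter().flat_map(|t| t.inputs.iter()).collect();
    candidate.inputs.iter().any(|i| spent.contains(i))
}
```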
We can approach this in two discrete stages I think -
If I understand the routing correctly: the additional pseudorandom routing over two different outbound connections allows a single node to effectively have two different but deterministic routes passing through and crossing over it, which intuitively adds significant complexity to the possible layouts of the stem paths for a given set of connected nodes. But we would also get a lot of the benefits by simply reworking the "per epoch" changes first as a discrete piece of work.
Agreed - makes sense to get a quick win in with the changes you suggest above first and then look at 4-regular graphs second.
[WIP] PR is here - #2344.
Fixed by #2628.
The Dandelion++ paper talks about nodes choosing stem vs. fluff per epoch and not per tx.
i.e. every 600s (in Grin) we choose a new outbound Dandelion relay and set our own relay to either stem or fluff for the duration of this "epoch".
This might allow us to remove the "patience timer" on all relay nodes set in stem mode.
i.e. We would immediately forward a stem tx to the next relay node.
Only on nodes currently set as "fluff" would the patience timer be applicable.
These would then act as "sinks", txs would flow into them from various stem paths and would enable aggregation until the patience timer fired.
This would provide (temporary) places in the network for unaggregated txs to collect - so we could have a longer patience timer, and intuitively, a better chance of achieving tx aggregation as other txs would not be delayed in prior stem phase nodes on the way to the fluff node.
Right now in Grin we delay txs for 10-15s at _every_ relay node. And I think this is actually detrimental to tx aggregation as it makes it less likely for 2 txs traveling along the same stem path to actually meet at the same node at the same point in time. One is always delayed behind the other and will never catch up to the earlier one.
At the start of every "epoch" (every 600s) -
Then for every stem tx received (from a peer) -
This seems to be simpler than our current approach and reduces unnecessary latency while allowing a longer patience timer, so potentially allowing more aggregation.
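As a rough sketch of the per-epoch state this proposal implies: the names, the 600s epoch length and the fluff probability below are illustrative only, and the coin flip assumes the `rand` crate.

```rust
use std::time::{Duration, Instant};

// Illustrative placeholder for a real peer identifier.
type PeerAddr = String;

const EPOCH: Duration = Duration::from_secs(600);

#[derive(Clone, Copy, PartialEq)]
enum Role {
    // Forward stem txs immediately to our single outbound relay, no patience timer.
    Stem,
    // Act as a "sink": buffer stem txs and broadcast the aggregate when the
    // patience timer fires.
    Fluff,
}

struct DandelionEpoch {
    started: Instant,
    role: Role,
    relay: PeerAddr,
}

impl DandelionEpoch {
    /// Has this epoch run its course? If so, start a new one.
    fn expired(&self) -> bool {
        self.started.elapsed() > EPOCH
    }

    /// Start a new epoch: pick a fresh outbound relay and re-roll our role.
    fn next(relay: PeerAddr, fluff_probability: f64) -> DandelionEpoch {
        let role = if rand::random::<f64>() < fluff_probability {
            Role::Fluff
        } else {
            Role::Stem
        };
        DandelionEpoch { started: Instant::now(), role, relay }
    }
}
```

A stem tx received during a `Stem` epoch would then be forwarded to `relay` immediately, while during a `Fluff` epoch it would sit in the local stempool until the patience timer fires and everything collected is aggregated and broadcast.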
Related - #1982.