Change DAG read offsets every iteration to prevent ASIC optimization #13
Conversation
…of memory subsystem
Based on suggestion from PR #13
This is a unique ASIC structure that we had not analyzed before. I don't think it's a practical system, but it is plausible. In both Ethash and the 0.9.0 ProgPoW spec, a single lane will only ever access the same word(s) within the data loaded from the DAG. This means the DAG can be split into 32 (Ethash) or 16 (ProgPoW) chunks that each reside on a single chip. Each chip would require ~128 MB (Ethash) or ~256 MB (ProgPoW) of eDRAM for the system to hold a 4 GB DAG and remain viable for a few years. For reference, the IBM POWER9 has 120 MB of eDRAM, so this is plausible.

In both algorithms every load address comes from a different lane, so the address needs to be broadcast across lanes. The most reasonable structure would be a central coordinator chip that broadcasts address data to all the per-lane chips. A DAG load across all the chips that consumes 128 bytes (Ethash) or 256 bytes (ProgPoW) requires a 4-byte address to be broadcast. For ProgPoW, a GPU + DRAM system with 256 GB/sec of memory bandwidth could be replaced at equal performance by 16 eDRAM chips and 1 central coordinator in a system with 4 GB/sec of broadcast bandwidth. Without a lot more analysis it's unclear what the overall performance, power, or cost of this multi-chip system would be.

Since it's easy enough to break this architecture, we've decided to update the spec. The 0.9.1 version XORs the loop iteration with the lane_id when accessing words within the loaded DAG element. Your suggestion to ADD the two means lane data could be rotated across a single-directional ring bus while the DAG data remained in place. By doing an XOR there would need to be a full mesh network or a high-bandwidth switch so any chip could shuffle data to any other chip, which almost certainly makes the system impractical.

Some quick benchmarking shows this makes no performance difference on either AMD or NVIDIA hardware.
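To make the ring-bus vs. full-mesh distinction concrete, here is a small sketch (function and constant names are illustrative, not from the spec) comparing the two indexing schemes. With ADD, the lane-to-chunk mapping on every iteration is a cyclic rotation, which a one-way ring bus can serve by shifting data one hop per step; with XOR, the mapping is a permutation that is generally not a rotation, so arbitrary chip-to-chip routing is needed:

```python
NUM_LANES = 16  # ProgPoW chunk count from the discussion above


def chunk_for_lane_add(lane, i):
    # The suggested ADD scheme: lane + loop iteration
    return (lane + i) % NUM_LANES


def chunk_for_lane_xor(lane, i):
    # The 0.9.1 scheme: lane XOR loop iteration
    return (lane ^ i) % NUM_LANES


def is_rotation(perm):
    """A rotation maps lane -> (lane + k) % n for some fixed k,
    so it can be served by shifting data around a one-way ring bus."""
    n = len(perm)
    k = perm[0]
    return all(perm[lane] == (lane + k) % n for lane in range(n))


add_rotations = all(
    is_rotation([chunk_for_lane_add(l, i) for l in range(NUM_LANES)])
    for i in range(64))
xor_rotations = all(
    is_rotation([chunk_for_lane_xor(l, i) for l in range(NUM_LANES)])
    for i in range(64))

print(add_rotations)  # True: ADD is always a ring-bus rotation
print(xor_rotations)  # False: XOR permutations need all-to-all routing
```

For example, at iteration 1 the XOR scheme swaps lanes pairwise (0↔1, 2↔3, ...), which no single ring shift can realize.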
Based on suggestion from PR ifdefelse#13
Merge from ethminer master
Just to have this thought written down where I think it belongs: for the fix introduced in ProgPoW 0.9.1+ and described above to be effective, it's crucial that each lane's mix state be no smaller than the lane's DAG reads per loop iteration. Otherwise, inter-chip transfer of lanes' mix state between loop iterations would allow the original attack at a fraction of the cost of full inter-chip DAG reads. Luckily and quite obviously, this condition holds with quite some margin: we have 32 mix registers (128 bytes) but only 4 DAG reads (16 bytes) per lane.

Maybe this should also be somewhere in the ProgPoW documentation, as part of the design rationale and constraints on parameter values.
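The margin above can be checked with back-of-the-envelope arithmetic using the parameters as stated in the comment (constant names here are illustrative):

```python
MIX_REGS_PER_LANE = 32   # 32-bit mix-state registers per lane
DAG_LOADS_PER_LANE = 4   # 32-bit words read from the DAG per lane per loop
WORD_BYTES = 4           # size of a 32-bit word

mix_state_bytes = MIX_REGS_PER_LANE * WORD_BYTES   # 128 bytes per lane
dag_read_bytes = DAG_LOADS_PER_LANE * WORD_BYTES   # 16 bytes per lane

# Shuffling a lane's mix state between chips must cost at least as much as
# the inter-chip DAG read it would replace, or the fix can be sidestepped.
print(mix_state_bytes >= dag_read_bytes)           # True, with an 8x margin
print(mix_state_bytes // dag_read_bytes)           # 8
```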