
A new protocol #171

Closed
chris-belcher opened this issue Aug 3, 2015 · 71 comments

Comments

@chris-belcher
Collaborator

Issue which combines all the features we want when the protocol is updated.

Talking to some people, it seems doing a breaking change, a "hardfork", is quite an unpopular idea. It would split the liquidity, and updating would end up being rushed by some people, leaving not enough time to check the code. So the compromise solution seems to be to code it so that new bots accept both protocols and the old protocol is gradually phased out.

Stuff to fix:
#88 Fix bug where maker must know the private key of the coinjoin address.
#83 Using p2sh / multisig as inputs.
#45 Some cleanup, changing names, adding a field
#29 Minimum other maker count field

#90 (comment) Allowing taker to not authenticate
#120 Fixing miner fees. Changing maker's tx fee contribution to a fee/kB and having a new field of "lowest fee/kB tx this maker is willing to sign off for"

It should also be noted that the patientsendpayment.py script does not work at all in the current protocol, because makers must know the private key of the coinjoin address they're sending to.
#90 Some code already written

The branch: https://github.com/chris-belcher/joinmarket/tree/newprotocol

@chris-belcher
Collaborator Author

More code / bugfix to add #170

@chris-belcher
Collaborator Author

Allow option of maker not having a change address #27 (comment)

@dcousens

@chris-belcher is the current algorithm/protocol written up anywhere?

@chris-belcher
Collaborator Author

Not really, the code is the documentation right now.

@kristovatlas

I might try to help with documentation in the future...

@chris-belcher
Collaborator Author

Worth noting that this won't be the last time we update the protocol. We'll probably end up adding segwit and op_schnorr to JoinMarket eventually too. So the code for the new protocol should be written in a way that makes adding more order types easier.

@chris-belcher
Collaborator Author

And anything we decide on regarding this issue will be included in a new protocol too: #156

@AdamISZ
Member

AdamISZ commented Jul 9, 2016

#568 is now a protocol change (so as to find a logical way to use multiple messaging servers without impersonation).

I am not sure what people feel comfortable with as a reasonable set of changes; anti-#156 is priority 1, it seems; in line with the #343 goals for 0.2, I think the above multi-mc change should also be included. Re: the above list, I'm not sure, but maybe we should hash out here what set is both highly desired and also a realistic goal for a new version.

@chris-belcher
Collaborator Author

I agree with that. I was thinking of including a few other small things in the first new protocol if there's time/effort (like fixing patientsendpayment.py and the #537 signing change).

@AdamISZ
Member

AdamISZ commented Jul 11, 2016

@chris-belcher I'm thinking of starting a 0.2.0 branch to get up and running (take current develop + #568 and go from there). (I know there's an existing "newprotocol" branch, but it seems too old.) Make sense?

@chris-belcher
Collaborator Author

Yes that's a good idea.

One thing we should get from this protocol break is a codebase written so that it's easier to add new offer types in future (coinswap, segwit, #229).

@AdamISZ
Member

AdamISZ commented Jul 11, 2016

OK, got started. Tweaking/testing a basic podle PR on the now-existing 0.2.0 branch. Won't be final but will be something to look at.

Re: new ordertypes, yeah, the good thing is I think it mostly works out of the box (bots just ignore what's not in their list of known ordertypes, right?).

Although: when I did segwit I noticed there was some rather ugly hardcoding of ordertypes allowed, but it's just a matter of cleanup.

@chris-belcher
Collaborator Author

That's what I meant regarding the hardcoded ordertypes. Also, both relorder and absorder today invoke the same functions when the order is filled (CoinJoinOrder in maker.py); adding segwit or coinswap would need entirely different functions.
I was thinking of an OOP way of organising this: an abstract class with methods like handle_offer(), which subclasses can override to implement segwit or whatever it might be.

Plus a nitpick: rename it all to offer instead of order.
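A minimal sketch of that shape (class and method names here are illustrative only, not existing JoinMarket code):

```python
# Illustrative only: an abstract offer handler that new offer types could subclass.
from abc import ABC, abstractmethod


class OfferHandler(ABC):
    """One subclass per offer type (reloffer, absoffer, a future segwit type, ...)."""

    @abstractmethod
    def handle_offer(self, offer, cj_amount):
        """Return this maker's response (fee etc.) to a fill of `offer`."""


class RelOfferHandler(OfferHandler):
    def handle_offer(self, offer, cj_amount):
        # relative offers charge a fraction of the coinjoin amount
        return {'ordertype': 'reloffer', 'cjfee': int(cj_amount * float(offer['cjfee']))}


class AbsOfferHandler(OfferHandler):
    def handle_offer(self, offer, cj_amount):
        # absolute offers charge a fixed satoshi amount
        return {'ordertype': 'absoffer', 'cjfee': int(offer['cjfee'])}
```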

@AdamISZ
Member

AdamISZ commented Jul 11, 2016

https://gist.github.com/AdamISZ/baf93ce2589854a7992383b3c69fae13

An overview, so we can see what the new message protocol looks like. Of course, only a first cut.

I'll just push the initial PoDLE update to the 0.2.0 branch now; after all, we're just going to keep working on it, so I guess there's no point in having a PR process, really.

@AdamISZ
Member

AdamISZ commented Jul 12, 2016

43f6b2a is specifically to address the formatting inconsistencies of blockchaininterface.py and support.py; it does nothing else. It's a little painful, but you might want to pass an eye over it to make sure nothing insane happens. The code passes regtest.

It seemed wise to isolate it in that one commit.

@AdamISZ
Member

AdamISZ commented Jul 12, 2016

7d83041 addresses #88 after prompting by @chris-belcher on IRC.

This implementation simply takes the first utxo that the maker has decided to spend, extracts its key from the wallet, and uses that for the authorisation pubkey and signature.

Hence the protocol is changed in line with the proposal in #88:
from

M*: !ioauth ulist cjpub changeA B(mencpubkey) (NS)

to

M*: !ioauth ulist auth_pub cj_addr changeA B(mencpubkey) (NS)

and the taker verifies that the address derived from auth_pub appears in one of the input utxos.
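As an illustration of that check on the taker side (a sketch only; the helper names are placeholders, not JoinMarket's actual functions):

```python
# Sketch of the taker-side verification described above. pubkey_to_address and
# fetch_utxo_address are placeholder helpers, not JoinMarket's API.
def verify_maker_auth(auth_pub, maker_utxos, pubkey_to_address, fetch_utxo_address):
    """Return True if the address derived from auth_pub owns one of the utxos
    the maker has offered as coinjoin inputs."""
    auth_address = pubkey_to_address(auth_pub)
    input_addresses = [fetch_utxo_address(u) for u in maker_utxos]
    return auth_address in input_addresses
```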

Presumably other cases could be handled with additional code, but without needing to change the protocol again.

I'll update the protocol gist to reflect this change.

@chris-belcher
Collaborator Author

I've looked at 43f6b2a and it looks fine. 7d83041 is very good. Thanks for doing it.

@AdamISZ
Member

AdamISZ commented Jul 12, 2016

Another thing I want to mention before I forget: the current PoDLE implementation requires secp256k1, since it makes extensive use of EC point arithmetic. I'm leaning more towards ditching the non-secp256k1 code path (it also finally clears out the mess from the bitcoin module).
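For context on why the point arithmetic matters, here is a rough sketch of the PoDLE idea as a generic Chaum-Pedersen-style proof of discrete log equivalence; this is not JoinMarket's exact encoding, and J below is just a stand-in for the real NUMS second generator:

```python
# Sketch only: commitment H(P2) with P2 = x*J, plus a proof that
# log_G(P) == log_J(P2). Not JoinMarket's actual serialization or NUMS point.
import hashlib
import os

from ecdsa import SECP256k1

G = SECP256k1.generator
N = SECP256k1.order
J = 5 * G  # placeholder second generator, for the sketch only


def point_bytes(P):
    return bytes.fromhex('%064x%064x' % (P.x(), P.y()))


def podle_commit(x):
    """x is the private key of the commitment utxo; P = x*G is its pubkey."""
    P, P2 = x * G, x * J
    commitment = hashlib.sha256(point_bytes(P2)).hexdigest()
    k = int.from_bytes(os.urandom(32), 'big') % N
    KG, KJ = k * G, k * J
    e = int(hashlib.sha256(b''.join(map(point_bytes, (KG, KJ, P, P2)))).hexdigest(), 16) % N
    s = (k + x * e) % N
    return commitment, (P, P2, s, e)


def podle_verify(P, P2, s, e):
    KG = s * G + (-e % N) * P    # equals k*G if the proof is valid
    KJ = s * J + (-e % N) * P2   # equals k*J if the proof is valid
    e2 = int(hashlib.sha256(b''.join(map(point_bytes, (KG, KJ, P, P2)))).hexdigest(), 16) % N
    return e == e2
```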

@AdamISZ
Member

AdamISZ commented Jul 12, 2016

As per IRC, 8ecae43 for standardised bitcoin signature format with prefix.

Whole build test passes except tx_broadcast. I think I know what that is, but can be addressed later.

@AdamISZ
Member

AdamISZ commented Jul 13, 2016

@chris-belcher thanks for paying attention; fixed in ad1d876. It is not actually a change, just a tidy-up: as you said, the pybitcointools version was already "standard".

However, I did get a chance to investigate more how it works in libsecp256k1 + Core; they use SignCompact with a recoverable signature, which is a recid single byte + 64 bytes. I'm left unsure about whether, to support hardware wallets, we should be using this kind of signature; I guess probably so. Afaict this should not be a problem, as the python binding supports it, but it's slightly non-trivial work.

@chris-belcher
Collaborator Author

I hope one day we figure out an alternative to the age restriction because it's not always great for user experience; often people want to receive coins and then spend them right away.

@AdamISZ
Member

AdamISZ commented Aug 2, 2016

Re: bad user exp, yeah, but it won't be a problem for those with "fuller" wallets. I guess a 1hr delay is quite annoying for a completely new user but given all the other steps you have to go through to get started, I feel like it's not a dealbreaker. At least it doesn't cost or lock up any money. And a heavy user may choose to create external commitments as a backup to avoid problems.

Re: ordernames, I like the suggestion; I'm a little worried that it gets confused with all the other stuff in the code, but let's go with it. We could change the use of the word "order" later in the code and logs without breaking anything.

@AdamISZ
Member

AdamISZ commented Aug 2, 2016

f152495 PROTOCOL BREAK (let's write this from now on for any commits that change the protocol; hopefully there will be very few more, just feature adds/changes).

All hardcoded relorder/absorder changed to reloffer/absoffer. This could be encapsulated better to remove the hardcoded instances at the top level; but that can be done later easily. Note it does not change the use of code variables called 'order' anywhere. Tests OK. I deliberately checked the different ygs in the tests too.

Edit: guess it's worth mentioning, though anyone reading this probably knows: offers with a different ordername (so, including 'relorder') are just ignored if they're not in jm_single().ordername_list. Enforced here
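For clarity, the ignore-unknown-ordertypes behaviour amounts to something like this (illustrative helper; jm_single().ordername_list is the real config hook, the function around it is not actual code):

```python
# Sketch: old-protocol announcements ('relorder'/'absorder') are silently dropped
# because they aren't in the new bot's list of known ordertypes.
def filter_known_offers(offers, known_ordernames):
    return [o for o in offers if o['ordertype'] in known_ordernames]

# filter_known_offers(offers, ['reloffer', 'absoffer']) keeps only new-style offers.
```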

@AdamISZ
Member

AdamISZ commented Aug 2, 2016

fe9f6d5 prints out detailed debug info in a file commitments_debug.txt in case of failed commitment generation. It shows (1) the list of utxos which were rejected due to too many retries, (2) the list of utxos which were rejected due to being too new and (3) the same for too small. It gives brief advice for each case, and at the end prints out the full set of utxos in the wallet also. The user is pointed to this file in the terminal output.

This is a bit crude (just a single re-used static file), but is the least that's needed to give a user some guidance. A more sophisticated approach would be better; note that it's written to a file as it's too much stuff for a terminal output (which is already super noisy). This is the kind of thing where a GUI really helps.

I used the ygrunner.py tool to do a bunch of manual cases to check it produces sensible output.

Here is an example of commitments_debug.txt :

THIS IS A TEMPORARY FILE FOR DEBUGGING; IT CAN BE SAFELY DELETED ANY TIME.
***
1: Utxos that passed age and size limits, but have been used too many times (see taker_utxo_retries in the config):
None
2: Utxos that have less than 9 confirmations:
28d46cacb295d493a62c5bdc4486febd31ed9704cbeb77e26b5ef579a0251091:1
0a143a3a228a8cba93e492a17194aa04c5d3633cb91fe4e03bdfe0a2c730f268:0
ab6c495fbf4b8d2105a7aa1afb462ec31e70469725e62e768914849a1b1f6761:1
db71144041842e5625ae68c268d8ece2079d1b84593af06eddb37fead8f04865:0
e3174ba8c8bb88d67e737095c79cd8f21ee8807be9ea6dc1082f31a8f960b042:1
3: Utxos that were not at least 20% of the size of the coinjoin amount 199164661
None
***
Utxos that appeared in item 1 cannot be used again.
Utxos only in item 2 can be used by waiting for more confirmations, (set by the value of taker_utxo_age).
Utxos only in item 3 are not big enough for this coinjoin transaction, set by the value of taker_utxo_amtpercent.
If you cannot source a utxo from your wallet according to these rules, use the tool add-utxo.py to source a utxo external to your joinmarket wallet. Read the help with 'python add-utxo.py --help'

You can also reset the rules in the joinmarket.cfg file, but this is generally inadvisable.
***
For reference, here are the utxos in your wallet:

{u'28d46cacb295d493a62c5bdc4486febd31ed9704cbeb77e26b5ef579a0251091:1': {'value': 100000000, 'address': u'n19n1b9LEqUp2T8FJS9t7NEQXiKhwwvbNi'}, u'0a143a3a228a8cba93e492a17194aa04c5d3633cb91fe4e03bdfe0a2c730f268:0': {'value': 100000000, 'address': u'miTgCc9DQjRaiuyUN3JTomjXmTGMt9DaA5'}, u'ab6c495fbf4b8d2105a7aa1afb462ec31e70469725e62e768914849a1b1f6761:1': {'value': 100000000, 'address': u'mhZJMVoZAv7BzJqBMpfodUxQEtPrSi3fLj'}, u'db71144041842e5625ae68c268d8ece2079d1b84593af06eddb37fead8f04865:0': {'value': 100000000, 'address': u'mxtRWHA87F4RRs7CtJbXBgoCcPuk5eWbGQ'}, u'e3174ba8c8bb88d67e737095c79cd8f21ee8807be9ea6dc1082f31a8f960b042:1': {'value': 100000000, 'address': u'mmy54TphEoHmrruJXP2XkWQVSwo6ieYbxr'}}

While doing this I noticed something very important that I hadn't noticed about sendpayment, which happens to affect the commitment creation algo: sendpayment previously constructed a Wallet object only up to the spending mixdepth being used, but to get a commitment from the whole wallet we need an object containing all mixdepths. For this reason, I added an option -a to specify the actual max mixdepth of the wallet, so all utxos get pulled in for commitment generation. People using the default 5 won't be affected, of course.

Tests passing.

@AdamISZ
Member

AdamISZ commented Aug 2, 2016

c5860c6: nkuttler noticed something when I removed the legacy sig conversion - the bot can crash on an invalid signature during verify (asserts in the Python binding code), so I wrapped the parsing in a try block; only for verify, not sign, of course.
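Roughly, the shape of that change (the verify call is a stand-in for the actual binding call):

```python
# Sketch: a malformed signature from a counterparty should yield False, not
# crash the bot. ecdsa_verify stands in for the secp256k1 binding call.
def safe_verify(message, signature, pubkey, ecdsa_verify):
    try:
        return ecdsa_verify(message, signature, pubkey)
    except Exception:
        # garbage signature data raised inside the binding; treat as failed verify
        return False
```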

@AdamISZ
Member

AdamISZ commented Aug 4, 2016

db8a2ae PROTOCOL BREAK -
from @chris-belcher: use the announced network field from the IRC server to specify the hostid variable (obviously it's intended that this could also be set by other non-IRC networks; that can be done later). Doing some live testing in the testpit today. Did one tx OK with my own bots (not others, due to the protocol break), but noticed that the network name for Freenode doesn't seem to be getting picked up; will look at it later. It's due to stuff like:

005 J5F6kNS1xHAhCWMv CHANTYPES=# EXCEPTS INVEX CHANMODES=eIbq,k,flj,CFLMPQScgimnprstz CHANLIMIT=#:120 PREFIX=(ov)@+ MAXLIST=bqeI:100 MODES=4 NETWORK=freenode KNOCK STATUSMSG=@+ CALLERID=g :are supported by this server

note the ":" in CHANLIMIT=#:120, hence parsing up to ":" doesn't work.
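For reference, a more robust way to pull the network name out of the 005 (ISUPPORT) line is to split on whitespace and look for the NETWORK= token rather than scanning for ":" (a sketch; not necessarily what the eventual fix does):

```python
# Sketch: extract NETWORK=<name> from an IRC 005 / ISUPPORT line.
def get_irc_network(isupport_line):
    for token in isupport_line.split():
        if token.startswith('NETWORK='):
            return token[len('NETWORK='):]
    return None

# get_irc_network("005 nick CHANLIMIT=#:120 NETWORK=freenode :are supported ...")
# -> 'freenode'
```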

126a206 just removes hostid from config basically.

@chris-belcher
Collaborator Author

I missed the ":" thing in testing; in hindsight I should've tried it on many different IRC networks. Will fix soon.

@AdamISZ
Member

AdamISZ commented Aug 4, 2016

np, and thanks. btw feel free to push the fix directly to the branch.

@AdamISZ
Member

AdamISZ commented Aug 4, 2016

a255d5d allows Makers to broadcast used commitments to the pit with a command !hp2 <hash>. Makers can choose whether or not to ignore these broadcasts with the config variable accept_commitment_broadcasts in POLICY. Note this is not a protocol break.

An important subtlety is when the broadcast occurs: in a typical scenario !fill, containing the commitment, is sent out to N makers at the same time; if they broadcast immediately, this could lead to other makers taking part in the transaction erroneously adding it to their blacklist before processing it themselves; therefore, the value is broadcast immediately if acceptance fails (i.e. if it's already in my blacklist), or otherwise broadcast after sending ioauth (i.e. after all checks have passed).
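In rough pseudocode, the timing looks like this (method names are illustrative, not the actual Maker code):

```python
class CommitmentBroadcastSketch(object):
    """Illustrative only: when a maker tells the pit that a commitment is used."""

    def on_fill(self, nick, commitment):
        if commitment in self.blacklist:
            # acceptance fails: broadcast immediately so the pit learns it's burned
            self.broadcast_commitment(commitment)
            return self.reject(nick)
        if not self.run_remaining_checks(nick, commitment):
            return self.reject(nick)
        self.send_ioauth(nick)
        # all checks passed: broadcast only after sending !ioauth, so co-makers in
        # the same join don't blacklist it before handling their own !fill
        self.broadcast_commitment(commitment)
```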

It's important to remember here that each individual hash value/commitment is intended to be used only once globally; that's why we have "retries", which allow multiple commitments (default 3) for the same utxo. I ummed and ahhed about this broadcast element for a while, but the more concretely I think about it, the more I'm convinced it should be used to get the real rate-limiting effect. So currently accept_commitment_broadcasts = 1 is in the default config, i.e. switched on. Switching it off is always an option; there is no issue with agreement/consensus here.

Tests passing.

@AdamISZ
Member

AdamISZ commented Aug 6, 2016

891cfe9 is designed to reduce the effect whereby broadcasting commitments on the public pit channel unambiguously marks the maker as a participant in a transaction, making it easier for a spy to focus their queries on certain maker bots. This is accomplished with a new method Maker.transfer_commitment, which randomly chooses another counterparty with active orders (for this purpose, as suggested by @chris-belcher, the Maker class now inherits from OrderbookWatch rather than CoinjoinerPeer, a change which I don't believe has any undesirable side effects, afaict) and sends the commitment to them with the hp2 command name as before, over private message. The recipient, having an on_commitment_transferred callback, simply broadcasts it with a public message to the pit. Thus the origin of the commitment broadcast is obfuscated.
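A sketch of that relay logic (method names follow the description above, but the details are illustrative, not the actual Maker code):

```python
import random


class CommitmentRelaySketch(object):
    """Illustrative only: relay a used commitment via one random counterparty."""

    def transfer_commitment(self, commitment):
        # hand the used commitment to one random counterparty over private
        # message instead of broadcasting it ourselves
        candidates = [cp for cp in self.get_counterparties_with_offers()
                      if cp != self.nick]
        if candidates:
            self.msgchan.privmsg(random.choice(candidates), '!hp2 ' + commitment)

    def on_commitment_transferred(self, nick, commitment):
        # recipient side: relay to the pit, so the original sender is obscured
        self.msgchan.pubmsg('!hp2 ' + commitment)
```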

Worth noting the possible imperfections - the receiving party may simply choose not to broadcast; the receiving party may be acting on behalf of a snooper, or be a snooper. But, this seems unambiguously better than direct public broadcast, since failure to broadcast is not in any way critical (this is especially true since there will usually be ~ 3-5 different makers broadcasting the commitment at once).

Note that none of this changes the existing accept_commitment_broadcasts setting either way; that controls whether a maker responds to broadcasted commitments (i.e. adds them to their blacklist); that code is only triggered by the public message.

Not a protocol break.

@AdamISZ
Member

AdamISZ commented Aug 7, 2016

d50ae7c addresses a bug: if a bot cycles its nick on one channel but not on the other, the message channel layer is left confused. I just changed get_irc_nick so that the fixed-length nick (without underscores) is picked up. (Hmm, this may need a tweak actually; will update in a bit.)

@chris-belcher
Collaborator Author

Fixed bug in IRC network name finding: 66fdcce

Protocol break, because Freenode is now correctly identified.

@AdamISZ
Member

AdamISZ commented Aug 7, 2016

53e30d7 seems to be the most logical way to deal with nicks changing, after discussion with @chris-belcher : force the nick/username change on all channels if it happens on one. I'd like to get a solid test of this, but it'll take some considerable work, and it's a very unusual condition, so will start running it in the test pit straight away. Review would be appreciated.

@AdamISZ
Member

AdamISZ commented Aug 8, 2016

I'm thinking of making a yg algo that has the best privacy effect. At the moment all I have is: the offer announce is the max from maxmixdepth (so one price only, not like the current *mixdepth), requests are responded to with utxos from the appropriate mixdepth depending on the requested amount, and then (here's a big difference from earlier, perhaps) the cjout goes to some other mixdepth than maxmixdepth, to avoid re-announcement. Two obvious limitations: this will not always avoid re-announcement (presumably that's impossible, which is fine), and also coins tend to de-concentrate. Thoughts?

Edited to add:
A commenter on my blog made a couple of other suggestions, e.g. randomize fees and maximums on restart to make pseudonym linking harder. These are anti-features economically, so problematic of course, but a small variation in both may be acceptable; not sure about this. Another good one along the same lines: re-announcement with the above randomness during one run, to add more confusion to the pattern.

@AdamISZ
Member

AdamISZ commented Aug 9, 2016

0537a69 (edited after bugfix, seems to work in live tests now) does 2 things: (1) a yieldgenerator.py module within joinmarket, which provides a base class for YGs to extend (with abstract methods for create_my_orders etc). This wasn't too hard at all really, although it's arguable whether ygmain with the option parsing should go in there. Anyway, it's fine for now, and avoids a chunk of code duplication; it can be refined later.

Second, re the previous comment, I created yg-pe.py (meaning "privacy enhancing") as a new yield generator. It only implements the simplest (and I think the soundest) of the ideas listed above: minimize re-announcement by sourcing utxos from a mixdepth other than the largest, where possible, and sending cjouts to, again, not the largest (in fact the smallest) mixdepth, for the same reason. Review would be appreciated; it passes regtest and I've done several runs to check it; it seems OK.
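A sketch of the mixdepth-selection idea (illustrative, not the actual yg-pe code; the rule that the source mixdepth must still cover the requested amount is my assumption):

```python
# Sketch: avoid touching the largest mixdepth (whose balance sets the announced
# maximum, so spending from it forces a re-announcement), and send the coinjoin
# output to the smallest mixdepth for the same reason.
def choose_mixdepths(balances_by_mixdepth, cj_amount):
    """balances_by_mixdepth: dict of {mixdepth: balance in satoshis}."""
    largest = max(balances_by_mixdepth, key=balances_by_mixdepth.get)
    candidates = [m for m, bal in balances_by_mixdepth.items()
                  if m != largest and bal >= cj_amount]
    source = min(candidates, key=lambda m: balances_by_mixdepth[m]) if candidates else largest
    destination = min(balances_by_mixdepth, key=balances_by_mixdepth.get)
    return source, destination
```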

@AdamISZ
Member

AdamISZ commented Aug 11, 2016

cdcbc67 removes the legacy pybitcointools entirely. Some notes:

  • I've retried the secp256k1 install on fresh Ubuntu 14.04 and Win7 machines. It works, but there are some points of note. First, I believe that the instructions on package dependencies here are correct, except that they are technically missing two python dependencies: python-dev and maybe python-pip (obviously you need pip somehow). So the full dependency list from a standing start seems to be: python-dev python-pip build-essential automake pkg-config libtool libffi-dev libgmp-dev. This is assuming you don't already have a Bitcoin Core installed (and visible) such that libsecp256k1 already exists, of course. Another note is that the latest merged PR to secp256k1-py seems to create wheels (meaning binary distributions) automatically for new releases for Linux and OSX. And, although it refers to this in the README, there appear to be no wheels as yet at pypi, still only the source tarball. I'll hit up Ludvik on this at some point, although he's not often around. Once that is in place it may make life considerably easier for some users (although many won't want that trust model, perhaps).

With respect to Windows, the existing instructions here seem to work exactly as before, albeit it's an ugly business (installing MinGW to get 1 DLL file).

With respect to OSX, I have no idea, but the dependencies are listed; one assumes they probably work.

  • My custom nonce PR was merged, and hence the donation code now uses it. I refactored the donation code, putting the address generation in the configure module, and tested it in test_donations, which is now a better test because it uses the same code, and checks that the funds are accessible using the standard address regeneration from nonce*G in the tx input signature. I also added a donate flag to the test_regtest suite, which I'm not including in the build, but ran manually a few times.

There are also a few redactions from the tests due to the removal of the old code, but that's a very minor matter; the tests were mostly already ignoring the legacy code.

I've attempted to reach out to users about this change via IRC, reddit, bitcointalk. Not much response. I hope people can appreciate how hard it is to maintain two different Bitcoin interfaces.

@AdamISZ
Member

AdamISZ commented Aug 11, 2016

A complete tumbler test run (unedited) ran OK, so that's a good sign, although it isn't really a test of how commitments can gum things up right now, since it starts with a "big" wallet.

@AdamISZ
Member

AdamISZ commented Aug 12, 2016

59443a2 does the following:

  1. Change create-unsigned-tx.py (and a couple of minor changes to taker.py) to support the new protocol. It no longer needs an auth_utxo provided or private key entered; however, something similar is required: a user would have to provide external utxo commitment(s) via the add-utxo.py tool.

    Implementing this and testing it threw up a few issues:

  2. Add a showutxos method to wallet-tool.py for convenience (my use case was to use external commitments from other joinmarket wallets, but there may be other situations of course where it's useful).

  3. Makers crashed when I tried to send spent utxos as unspent (commitments); I added a check to make sure the gettxout return value is parsed correctly (see the sketch after this list).

  4. There was an insidious bug in the json parsing in the podle module that unexpectedly deleted the list of used commitments; fixed.
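The check in item 3 amounts to something like this (a sketch; rpc here stands in for the blockchain interface call, relying on the fact that Bitcoin Core's gettxout returns null for spent or unknown outputs):

```python
# Sketch: validate the gettxout result before using a utxo as a commitment input.
def get_unspent_utxo(rpc, txid, vout):
    res = rpc('gettxout', [txid, vout, True])  # True: include mempool
    if res is None or 'value' not in res:
        # spent, unknown, or malformed response: treat the utxo as unusable
        return None
    return {'value': int(round(res['value'] * 1e8)),
            'script': res['scriptPubKey']['hex']}
```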

@AdamISZ
Member

AdamISZ commented Aug 15, 2016

c6c67ba is trying to streamline the processes for adding external commitments, or extending the utxos in a wallet, which will be important for anyone trying to act as a Taker (especially tumbler) in a "from scratch" scenario - i.e. fund a new wallet and immediately run. There is a new cmttools directory in which the commitments.json will now be stored. The contents of that file are now readable (although you won't generally do that) using the indent=4 option to json.dumps.

So your problem is to make sure you have enough commitments to do the job. If you're doing 1 sendpayment your life is much easier; you could just wait 5 blocks before starting, and/or you can fund with multiple utxos instead of 1. You can also add a bunch of external utxo commitments for longer term use.

With tumbler your job is harder: even funding with multiple utxos and waiting 5 blocks might not be enough, since they are liable to get consumed during the run. You can make the wait times long enough that they safely cover 5 blocks, but that's long. If you're starting with a fresh wallet you may well need to follow the "external commitments add" workflows described here.

So there are various workflows for this: (a) you want to add external commitments, (a1) you have an existing joinmarket wallet, (a2) you only have other wallets (like Core, Electrum); (b) you want to make a bunch of utxos in your wallet instead of only 1, (b1) you have an existing joinmarket wallet, (b2) you only have other wallets.

(a) add-utxos.py
(a1)
An easy two-step process to grab all the utxos from another wallet (like one used for a yieldgenerator) and use them for commitments when spending from a new wallet (of course, it will generally be far easier to just use the same wallet; then there's no messing around like this).

Run python wallet-tool.py -p -u walletname showutxos -> output is walletname.utxos (e.g. wallet.json.utxos) in the wallets/ directory containing utxos and corresponding private keys. Then cd cmttools ; python add-utxos.py -R ../wallets/walletname.utxos (there is an option to verify the utxos as valid unspent first using the -o option). You should then see these added into commitments.json in the "external" field. Then delete the file with the private keys: rm wallets/walletname.utxos.
Update: I've changed this insecure and clunky approach; now you can just run python add-utxo.py -w walletname and add the utxos to your external list that way. 82f09da

(a2)
More realistic case: you have utxos say in Electrum. You'd need to gather the utxos and their corresponding private keys and enter them into a csv file one by one (each line format txid:N, wif-compressed-privkey). Then do python add-utxos.py -r csvfile to achieve the same result as before. And again delete the file with the private keys...

One can also enter single utxos with python add-utxos.py txid:N and enter wif-comp-key on command line prompt (as already discussed for the first version of add-utxos.py).

The script has other minor options like delete and verify utxos too.

(b1) Second tool: cmttools/sendtomany.py
To produce several utxos from one: python sendtomany.py utxo destaddr1 destaddr2 .... This does what you'd expect: it sends the coins in equal amounts, after fees, to each of several destinations (intended to be different "external" addresses in a joinmarket wallet).

(b2) This send-to-many feature exists in other wallets like Electrum, and Core (I think?). This is obviously better where possible: you can just send 1 utxo to 5-10 new addresses in your joinmarket wallet, instead of only 1, without doing anything difficult.

The bottom line as I see it: we want to use (b2), and in any case not (a), because it doesn't require any exotic signing and therefore no private key manipulation. But I doubt one can really run tumbler this way (even spreading the deposit across multiple mixdepths can't work, since it breaks the privacy model), and so, since we want to keep instructions simpler for users, I'm more inclined to tell them to use (a) as a mostly one-off job when they start: give yourself say 5-10 utxos (so 15-30 "free tries") from another wallet, probably using (a2). But this does mean messing around with private keys. It could also mean some extra privacy loss to makers, since they read the utxo that was used on commitment opening.

Perhaps the only way to cut the Gordian knot here is to require tumblers to run with a wait > N blocks, and perhaps 5 is too aggressive?

@AdamISZ
Member

AdamISZ commented Aug 16, 2016

I've been doing a lot of testing to see how this looks in practice. A couple of things:

  • I think the best way to deal with the above problem is simply to add a config field that a tumbler can set meaning "if a commitment can't be found, wait a few minutes and try again" (sketched after this list). This means the tumbler will run at a rate set by the user until it runs out of commitment utxos, then just wait ~5 confirmations before resuming. It's a simple enough solution and doesn't require any interaction, but it will make a tumbler run longer than before in most scenarios. Testing this now. Edit: Added in 077d3ed
  • The tools I made for adding commitments involve creating a file with private keys; that makes it much too easy for a user to have private keys lying around, which is a bit ridiculous. I'll make amendments so those files are encrypted with the same key as the wallet.
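A sketch of the wait-and-retry behaviour from the first bullet (names are illustrative, not the actual config keys):

```python
import time


# Sketch: keep retrying commitment generation at a user-set interval until one
# of the wallet's utxos has aged enough (roughly the ~5 confirmation rule).
def source_commitment_with_retry(try_get_commitment, wait_minutes, max_attempts=None):
    attempts = 0
    while True:
        commitment = try_get_commitment()
        if commitment is not None:
            return commitment
        attempts += 1
        if max_attempts is not None and attempts >= max_attempts:
            return None
        time.sleep(wait_minutes * 60)
```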

Apologies for using this as a dumping ground, but I don't want to forget this either: the privacy-enhanced yield generator could implement this feature: if the list of mixdepths that have enough has more than one entry, and the last-used coinjoin output utxo is in one of them, then filter it out. To be clear, the utxo must be spent eventually, but I would assume the spy's analysis is made at least somewhat harder by the delay.

Edit: Also in 077d3ed: bugfix in choose_sweep_orders - previously, if no orders were left after filtering, the call to weighted_order_choose via chooseOrdersBy could crash, because element 0 of the list is accessed at the start of that function. The liquidity check for that condition (no orders) was previously only called before the for loop.

@AdamISZ
Member

AdamISZ commented Aug 16, 2016

82f09da: a couple of bugfixes, and add-utxo now reads the wallet directly rather than creating a file with private keys as above (insecure and inconvenient).

@AdamISZ
Member

AdamISZ commented Aug 21, 2016

dcf9333 - update the test_broadcast_method.py to work with the new system (dummy commitments). Now all the test suite passes.

a4ac313 was essentially a bugfix: externally sourced commitments weren't being checked against the age/size limits; now they are.

The full tumbler has been run a couple of times, and I've done a variety of other transactions; I think this is largely stable now.

I even wrote some primitive attacker code to try to sweep up utxos from makers, although it needs a lot of work. But with only a few bots in the pit and no real activity, there's not much that can be done in terms of simulation.

I spent some time trying to get sensible measures of how many utxos the spy needs; it's nearly impossible, there are so many variables*, but overall I get the impression that at current levels they really won't need many. Could be anything from 10-200 in an hour I think, for the current pit ("hour" is a sensible measure if we stick to the ~5 block age requirement, since that's the "refresh rate" they'll need over the long term). And they have to have the right amount range, of course.

I think the deterrent factor of creating fresh utxos all the time will be limited, but a combination of people using more "guarded" yg algos that don't advertise tons of information, forcing them to use up more utxos, plus the fact that their own utxos will be very visible and easily identified, might change the dynamics. That's the short term perspective, the long term perspective, if being optimistic, is that if we can scale up, it'll get harder and harder for them.

* Here are some I looked at; some of them aren't easy to define, let alone measure:

  • Pit Tx/hour - global coinjoin rate
  • N makers - 50 today, 100 in the future?
  • Utxo age reqmt - (5 blocks now)
  • Utxo retries allowance - (3 now)
  • Mixdepths/maker - 5 default now
  • Tx btc size (mean) - (a better model would involve a distribution, perhaps power-law, but it's completely murky anyway, maybe a spy is only interested in the bigger ones)
  • Maker-requests/commit - How many maker requests you can make simultaneously with the same commitment.
  • Makermixdepthfindfactor - Proportion of mixdepths you need to query before finding the one with the utxo you're looking for. This factor strongly depends on how visible are the mixdepth boundaries. It might be larger than one if you don't know them.
  • Maker success factor - Proportion of makers you need to query before finding the utxos you're looking for. This is lower than it might seem because this is a “dragnet” operation, trying to find all cjouts in all relevant transactions.

@AdamISZ
Member

AdamISZ commented Aug 29, 2016

a9ede54 - noticed that if multiple bots use the same directory (not something planned for, but in any case), the shared blacklist file means transactions get unnecessarily blocked. This pushes back adding commitments to the blacklist to the point where they're actually used, i.e. on sending io_auth.

@AdamISZ
Member

AdamISZ commented Aug 30, 2016

Minor bugfix: 980c536. README update (also updated/added wiki articles): a6f191e, which includes a reference to 00301e1 formalizing the external requirements libnacl and secp256k1(-transient) for installation. Also now testing successfully on anarplex (and got confirmation from the server owner that it's OK, at least for now), so tentatively suggesting this default server set:

[MESSAGING]
host = irc.cyberguerrilla.org, agora.anarplex.net, irc.rizon.net
channel = joinmarket-pit, joinmarket-pit, joinmarket-pit
port = 6697, 14716, 6697
usessl = true, true, true
socks5 = false, false, false
socks5_host = localhost, localhost, localhost
socks5_port = 9150, 9150, 9150

with this alternative for those wishing to connect to HS:

[MESSAGING]
host = 6dvj6v5imhny3anf.onion, cfyfz6afpgfeirst.onion
channel = joinmarket-pit, joinmarket-pit
port = 6697, 6667
usessl = false, false
socks5 = true, true
socks5_host = localhost, localhost
socks5_port = 9150, 9150

It seems like, if the default uses all 3, a maker requiring Tor-only is not at a disadvantage; but if you disagree, let me know. If a third option supporting Tor+clearnet becomes available, we can add/change to that.

@AdamISZ
Member

AdamISZ commented Sep 9, 2016

1f7252a removes all yield generators except yg-basic and yg-pe; the former now inherits from YieldGenerator in the joinmarket module, so there is no longer significant code duplication.

The intention is twofold: (1) we can't support multiple customised yield generators, it's impractical, but that doesn't mean an attempt to stop them: I'd suggest making a separate repo in the Org to host other implementations, starting with what we already had (but note they actually have to be supported by coders, including testing, of course); (2) to emphasize that simpler, less information-providing yield generators are better for privacy and for the overall system. This point is not completely uncontroversial, of course, but again: this is not an attempt to "ban" customizing, it's an attempt to limit the scope of the core project.

@adlai
Contributor

adlai commented Sep 15, 2016

Closing this issue now, as the s/rd/ff/ protocol upgrade (f152495) has been frozen by release. Addressing issues left pending (#120, #29/#45, #27/#423) thus requires another "protocol softfork" (eg s/(abs|rel)offer/$0v2/), distinct from the one midwifed in this thread. Discussion should continue in respective threads.
