
Operation_history_id mismatch among nodes #585

Closed · abitmore opened this issue Jan 17, 2018 · 10 comments

@abitmore (Member)

A node with track-account configured has smaller history object IDs (1.11.x) than nodes with default configuration. Not sure whether it's related to the version.

@abitmore (Member Author)

Update: not smaller, but bigger.

@abitmore (Member Author) commented Jan 17, 2018

Data in my node:

locked >>> get_object 2.6.12376
get_object 2.6.12376
[{
    "id": "2.6.12376",
    "owner": "1.2.12376",
    "most_recent_op": "2.9.2079548",
    "total_ops": 4080,
    "removed_ops": 0,
    "total_core_in_orders": "1000000000000",
    "lifetime_fees_paid": "4425871219",
    "pending_fees": 0,
    "pending_vested_fees": 0
  }
]
locked >>> get_object 2.9.2079548
get_object 2.9.2079548
[{
    "id": "2.9.2079548",
    "account": "1.2.12376",
    "operation_id": "1.11.119415835",
    "sequence": 4080,
    "next": "2.9.2079472"
  }
]

Data on OpenLedger:

$ curl -d '{"id":1,"method":"call","params":["database","get_objects",[["2.6.12376"]]]}' https://bitshares.openledger.info/ws;echo
{"id":1,"jsonrpc":"2.0","result":[{"id":"2.6.12376","owner":"1.2.12376","most_recent_op":"2.9.121485848","total_ops":4080,"removed_ops":3080,"total_core_in_orders":"1000000000000","lifetime_fees_paid":"4425871219","pending_fees":0,"pending_vested_fees":0}]}
$ curl -d '{"id":1,"method":"call","params":["database","get_objects",[["2.9.121485848"]]]}' https://bitshares.openledger.info/ws;echo
{"id":1,"jsonrpc":"2.0","result":[{"id":"2.9.121485848","account":"1.2.12376","operation_id":"1.11.119412594","sequence":4080,"next":"2.9.121480855"}]}

Note: on my node the 2.9.x IDs are smaller because track-account is enabled. But it's strange that the 1.11.x ID is bigger.

@abitmore (Member Author)

Update: after a replay, the data on my node became identical to other nodes'. Not sure what was wrong.

@xeroc (Member) commented Jan 18, 2018

Am I right in assuming that at least the 1.11.x objects are still all the same?

@pmconrad (Contributor)

> Am I right in assuming that at least the 1.11.x objects are still all the same?

No, that's the problem.

Are you sure it's related to track-account?

@abitmore (Member Author)

> Are you sure it's related to track-account?

I didn't see anything wrong with track-account. Perhaps it's related to replay.

@pmconrad (Contributor)

I've checked the history code and haven't found anything that could cause a miscount of operation IDs.

The main differences between a replay and normal operation that I can think of are:

  • during replay, we never see forks / rollbacks
  • during replay, we have more active skip_flags
  • during replay, we don't continuously receive new transactions, apply them, and undo them before applying the block
  • during replay, we don't send signals to listeners on object creation/modification/removal, nor on pending transactions

skip_flags shouldn't influence the operation count.
Rollback/undo seems a likely cause, but I haven't found any obvious errors in the undo code.
Signal listeners should be passive only, and should not cause operations that are included in the history but don't turn up in the chain.
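
To make the rollback mechanism concrete, here is a minimal toy model of the undo machinery; the names here are assumptions for illustration, not the actual bitshares-core classes:

```cpp
// Toy model of an undo database (assumed names, NOT the real bitshares-core
// classes): each session remembers the per-space "next object id" counters
// it touches, and popping the session restores them. During a replay no
// sessions are taken, so this path never runs.
#include <cstdint>
#include <iostream>
#include <map>
#include <vector>

struct undo_state {
    // id space -> counter value when this session first touched the space
    std::map<int, uint64_t> old_index_next_ids;
};

struct toy_undo_database {
    std::vector<undo_state> sessions; // one state per open session
    std::map<int, uint64_t> next_ids; // live "next id" counter per id space

    void start_session() { sessions.push_back({}); }

    // Normal creation path: save the old counter the first time this
    // session touches the space, then hand out the next id.
    uint64_t create(int space) {
        if (!sessions.empty())
            sessions.back().old_index_next_ids.emplace(space, next_ids[space]);
        return next_ids[space]++;
    }

    // Popping a block (e.g. on a fork switch): restore every counter
    // this session had touched, exactly as it was.
    void undo() {
        for (const auto& entry : sessions.back().old_index_next_ids)
            next_ids[entry.first] = entry.second;
        sessions.pop_back();
    }
};

int main() {
    toy_undo_database db;
    db.start_session();
    db.create(11);                      // allocate an id on a fork...
    db.undo();                          // ...then the fork is abandoned
    db.start_session();
    std::cout << db.create(11) << "\n"; // prints 0 again: no drift
}
```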

@abitmore (Member Author)

I still suspect that the bug is related to track-account.

The first time I heard of this issue was from Bittrex.
Then it appeared on my own node, which was using track-account.
Today I got another report from another exchange that is also using track-account.

The data is corrected after a replay, which means the issue only happens while continuously receiving new transactions/blocks from the p2p network (and RPC). Perhaps after use_next_id() is called, the counter isn't reverted when there is a reorganization or something?

If someone can provide a full node (without track-account), we can check whether the data on memory-reduced nodes is correct (I'm assuming it is).

abitmore added this to the Future Non-Consensus-Changing Release milestone on Apr 24, 2018
@abitmore (Member Author)

@pmconrad I'm 90% sure that using use_next_id() alone will cause the issue. Most of the relevant code is in undo_database.cpp:

When new objects are created, old_index_next_ids is updated in on_create(); when undoing, the value is reverted to the saved old ID.

But in the history plugin we don't save old IDs in old_index_next_ids, so no reversion is done when undoing (see the sketch below).

Thoughts?
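
A minimal sketch of the failure mode described above; the names are toy stand-ins (only use_next_id() itself appears in the thread), not the real bitshares-core code:

```cpp
// Toy reproduction of the suspected bug (assumed names; only use_next_id()
// comes from the thread). create() goes through the undo bookkeeping and is
// reverted when a block is popped; use_next_id() bypasses it, so its counter
// leaks forward after a fork switch and 1.11.x ids drift between nodes.
#include <cstdint>
#include <iostream>
#include <map>
#include <vector>

struct undo_state {
    std::map<int, uint64_t> old_index_next_ids; // space -> saved counter
};

struct toy_db {
    std::vector<undo_state> sessions;
    std::map<int, uint64_t> next_ids;

    void start_session() { sessions.push_back({}); }

    // Tracked path: on_create() saves the old counter before advancing it.
    uint64_t create(int space) {
        if (!sessions.empty())
            sessions.back().old_index_next_ids.emplace(space, next_ids[space]);
        return next_ids[space]++;
    }

    // The history plugin's path: advances the counter but never tells
    // the undo database about it.
    uint64_t use_next_id(int space) { return next_ids[space]++; }

    void undo() {
        for (const auto& entry : sessions.back().old_index_next_ids)
            next_ids[entry.first] = entry.second;
        sessions.pop_back();
    }
};

int main() {
    toy_db db;
    db.start_session();
    db.create(9);       // account history entry: tracked, will be reverted
    db.use_next_id(11); // operation history entry: NOT tracked
    db.undo();          // block popped on a fork switch

    db.start_session();
    std::cout << "next 2.9 id:  " << db.create(9) << "\n";       // 0 again
    std::cout << "next 1.11 id: " << db.use_next_id(11) << "\n"; // 1: leaked
}
```

Since undo() only runs when blocks are popped, a node that never sees a fork (such as one doing a fresh replay) never triggers the leak, which matches the observation that a replay corrects the data.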

@pmconrad (Contributor)

I think that's it, good catch! use_next_id() bypasses the undo_db and will therefore not be rolled back in all cases where a block is popped.
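
One plausible shape of the fix, sketched in the same toy model (assumed names; the actual change landed in the commits referenced below): route use_next_id() through the undo bookkeeping so that popping a block restores the operation-history counter too.

```cpp
// Sketch of the fix direction (assumed names, not the real change): make
// use_next_id() record the old counter in the undo state, just like
// on_create() does, so undo() can revert it when a block is popped.
#include <cstdint>
#include <iostream>
#include <map>
#include <vector>

struct undo_state {
    std::map<int, uint64_t> old_index_next_ids;
};

struct toy_db {
    std::vector<undo_state> sessions;
    std::map<int, uint64_t> next_ids;

    void start_session() { sessions.push_back({}); }

    // Fixed: save the old counter before advancing, so undo() can revert it.
    uint64_t use_next_id(int space) {
        if (!sessions.empty())
            sessions.back().old_index_next_ids.emplace(space, next_ids[space]);
        return next_ids[space]++;
    }

    void undo() {
        for (const auto& entry : sessions.back().old_index_next_ids)
            next_ids[entry.first] = entry.second;
        sessions.pop_back();
    }
};

int main() {
    toy_db db;
    db.start_session();
    db.use_next_id(11);                      // allocate 1.11.0 on a fork
    db.undo();                               // fork abandoned
    db.start_session();
    std::cout << db.use_next_id(11) << "\n"; // 0 again: nodes stay in sync
}
```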

abitmore modified the milestones: Future Non-Consensus-Changing Release → 201805 - Non-Consensus-Changing Release on Apr 24, 2018
pmconrad self-assigned this on Apr 24, 2018
abitmore pushed a commit that referenced this issue Apr 25, 2018
jmjatlanta pushed a commit to jmjatlanta/bitshares-core that referenced this issue Apr 27, 2018