Fix #1270 Call price is inconsistent when MCR changed #1324
Conversation
Note: this should not break compatibility in production because the result should be the same after hard fork #343.
@abitmore, let me know what you think about this possible framework for modifying the current test cases for the new post-HF logic; I am trying to do this without duplicating too much code.
With the flag available inside each test case we can change their logic to match/test the new features in the existing test cases.
@oxarbitrage we can have a better test framework for hard forks, but I don't think this solution is good enough. It's incorrect to add a flag to all test cases; for now we should do it on a case-by-case basis. It doesn't make much sense to have one flag at the top, and worse, we (e.g. Travis) won't modify it when running test cases. Ideally some test cases should be executed once after EVERY hard fork time; from this point of view we should modify all the test cases and have a mechanism to avoid future modifications. IMHO this is out of this PR's scope, although it may be helpful for future work (we need to make a decision about priority). Update:
Yes, that is what I am trying, but I was thinking of modifying only the needed parts with the flag inside the same test case. My approach was/is to identify where the tests fail, then try to understand why and work around it. With this I was able to identify where the first 2-3 cases fail. Anyway, I understand your point; I can create separate new test cases for each of the current tests that advance to the HF date and modify what is needed there, at least for now. Thanks.
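For illustration, a minimal sketch of the kind of duplicated post-HF test case being discussed, assuming the usual database_fixture helpers (generate_blocks, set_expiration) and the HARDFORK_CORE_1270_TIME constant; the test name and the elided body are placeholders, not code from this PR:

BOOST_AUTO_TEST_CASE( existing_scenario_after_hf1270 )
{ try {
   // advance past the hard fork time and through the next maintenance interval
   generate_blocks( HARDFORK_CORE_1270_TIME );
   generate_blocks( db.get_dynamic_global_properties().next_maintenance_time );
   set_expiration( db, trx );

   // ... same setup and assertions as the existing pre-HF test, with
   //     expectations adjusted for the new margin call / call_price logic ...
} FC_LOG_AND_RETHROW() }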
@abitmore please take a look when you can at oxarbitrage@d9bb527, specifically the cross test (the last one). Thank you.
add support for existing tests after hf1270
Resolved conflicts:
- libraries/chain/db_maint.cpp
- libraries/chain/include/graphene/chain/config.hpp
- libraries/chain/include/graphene/chain/database.hpp
- tests/tests/swan_tests.cpp
Very good. Only "serious" issue is virtual op IDs in global settlement.
Is it possible to get rid of a sub-index in a boost multi_index? That would free some resources. Would have to be delayed until LIB is past the HF time though.
wlog( "Done updating all call orders for hardfork core-343 at block ${n}", ("n",db.head_block_num()) );
}

/// Reset call_price of all call orders to (1,1) since it won't be used in the future.
This is superfluous, since as you say it won't be used anymore in the future. Getting rid of this and not tying the hf to a maintenance interval will simplify the PR significantly.
My concerns:
- For the UI: if old positions' call_price are unchanged but new positions' are 1, people may get confused.
- I guess it's faster to compare 1 with 1 when inserting new data (when creating new positions), so it benefits in the long run at the cost of a one-time process.

BTW, as you've noticed later, we need to tie the hf to a maintenance interval anyway.
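For readers following the thread, a rough sketch of the kind of one-time maintenance-time reset being debated; this is only an illustration built from types that appear elsewhere in this PR (call_order_index, call_order_object, price, asset), not the actual change:

const auto& call_idx = get_index_type<call_order_index>().indices();
for( auto itr = call_idx.begin(); itr != call_idx.end(); ++itr )
{
   modify( *itr, []( call_order_object& call ) {
      // call_price is no longer used for matching after hf-1270, so park it at 1/1
      call.call_price = price( asset( 1, call.call_price.base.asset_id ),
                               asset( 1, call.call_price.quote.asset_id ) );
   });
}

(Iterating the default by-id index keeps the loop valid while call_price, a key of the by_price sub-index, is being rewritten.)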
libraries/chain/db_market.cpp
Outdated
@@ -62,15 +62,14 @@ void database::globally_settle_asset( const asset_object& mia, const price& sett
    const asset_dynamic_data_object& mia_dyn = mia.dynamic_asset_data_id(*this);
    auto original_mia_supply = mia_dyn.current_supply;

-   const call_order_index& call_index = get_index_type<call_order_index>();
-   const auto& call_price_index = call_index.indices().get<by_price>();
+   const auto& call_index = get_index_type<call_order_index>().indices().get<by_collateral>();
Switching to a different index here has the potential to change the order in which call orders are closed.
This shouldn't change the outcome in terms of amounts, but it can change the ID of virtual operations. We usually try to avoid that.
Will revert this.
Done via b494609.
@@ -100,6 +100,9 @@ void graphene::chain::asset_bitasset_data_object::update_median_feeds(time_point
    if( current_feed.core_exchange_rate != median_feed.core_exchange_rate )
       feed_cer_updated = true;
    current_feed = median_feed;
+   // Note: perhaps can defer updating current_maintenance_collateralization for better performance
+   if( after_core_hardfork_1270 )
+      current_maintenance_collateralization = current_feed.maintenance_collateralization();
Unconditionally, i.e. always, setting this will simplify the code significantly. I believe the performance penalty will be insignificant, since you get rid of a method parameter and several checks at the same time.
Because there are several 128-bit multiplications and divisions in maintenance_collateralization(), I thought it would cost more. Anyway, we can benchmark it (I haven't done so).
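If someone does want to benchmark it, a self-contained skeleton along these lines could be a starting point; scale_128 here is only a stand-in for the 128-bit arithmetic, not the real maintenance_collateralization() formula:

#include <chrono>
#include <cstdint>
#include <iostream>

// Stand-in for the per-feed-update 128-bit multiply/divide; NOT the real formula.
static uint64_t scale_128( uint64_t a, uint64_t b, uint64_t num, uint64_t den )
{
   unsigned __int128 x = (unsigned __int128)a * num;
   unsigned __int128 y = (unsigned __int128)b * den;
   return (uint64_t)( x / ( y ? y : 1 ) );
}

int main()
{
   constexpr int N = 10000000;
   volatile uint64_t sink = 0;

   auto t0 = std::chrono::steady_clock::now();
   for( int i = 1; i <= N; ++i )
      sink = sink + scale_128( i, i + 7, 1750, 1000 );   // an MCR of 1.75 written as a ratio
   auto t1 = std::chrono::steady_clock::now();

   std::cout << N << " iterations: "
             << std::chrono::duration_cast<std::chrono::milliseconds>( t1 - t0 ).count()
             << " ms (sink=" << sink << ")\n";
   return 0;
}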
Because all positions will be closed anyway, which index we use doesn't matter in terms of amounts; however, using another index may change the ID of historical virtual operations, specifically `fill_order_operation`, which we usually try to avoid.
WRT getting rid of a sub-index in a boost multi_index, I have an idea:
I guess it's possible to implement, however it would be a bit complicated, and I'm not sure whether it's worth the effort.
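To make the trade-off concrete, here is a toy Boost.MultiIndex container with a similar shape (simplified value type, not the real call_order_object): each extra ordered sub-index costs roughly three pointers of node overhead per element, and because the index list is part of the container's type, dropping one after the hard fork would mean migrating every element into a differently-typed container, which is where the complication comes from.

#include <cstdint>
#include <boost/multi_index_container.hpp>
#include <boost/multi_index/ordered_index.hpp>
#include <boost/multi_index/member.hpp>

struct toy_call_order
{
   uint64_t id;
   double   call_price;          // simplified stand-ins for the real price types
   double   collateralization;
};

struct by_id {};
struct by_price {};
struct by_collateral {};

typedef boost::multi_index_container<
   toy_call_order,
   boost::multi_index::indexed_by<
      boost::multi_index::ordered_unique<
         boost::multi_index::tag<by_id>,
         boost::multi_index::member<toy_call_order, uint64_t, &toy_call_order::id> >,
      // each of the two ordered indices below adds per-element node overhead
      boost::multi_index::ordered_non_unique<
         boost::multi_index::tag<by_price>,
         boost::multi_index::member<toy_call_order, double, &toy_call_order::call_price> >,
      boost::multi_index::ordered_non_unique<
         boost::multi_index::tag<by_collateral>,
         boost::multi_index::member<toy_call_order, double, &toy_call_order::collateralization> >
   >
> toy_call_order_index;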
Good to go, IMO. Any potential optimizations can be implemented after the release.
Thanks!
PR for #1270.