Trivial journal for private transactions #10056
Conversation
  private_hash: H256,
  transaction: SignedTransaction,
- validators: Vec<Address>,
+ validators: &Vec<Address>,
Rather than taking validators by reference which is then cloned for insertion, I would keep it as a move and remove the clone call in line 236.
Unfortunately this validators array is required in two different places, so I cannot just move it into one of them.
If necessary the Vec can be cloned on the call site, no? My point was that add_transaction needs validators by value.
Yes, it could be cloned on the call site, but maybe @grbIzl is concerned by the massive call overhead of passing 3 words instead of 1 😄
Also, I noticed that transaction is cloned, which is not needed because it is moved into the function, but that is probably out of scope for this PR.
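The trade-off discussed above can be sketched as follows. These are illustrative stand-ins, not the PR's actual `add_transaction` signature; `Address` is a placeholder for `ethereum_types::Address`.

```rust
#[derive(Clone, Debug, PartialEq)]
struct Address(u64); // placeholder for ethereum_types::Address

// Variant A (current PR): take validators by reference, then clone
// internally for insertion -- the clone the reviewer points at.
fn add_transaction_borrowed(validators: &Vec<Address>) -> Vec<Address> {
    validators.clone()
}

// Variant B (suggested): take by value; the *call site* clones only
// when it still needs the Vec afterwards.
fn add_transaction_owned(validators: Vec<Address>) -> Vec<Address> {
    validators // moved straight into storage, no clone inside
}
```

With variant B, a caller that needs the vector in two places clones once explicitly at the call site, which makes the cost visible instead of hiding it inside the function.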
// Flush all logs on drop
impl Drop for Logging {
    fn drop(&mut self) {
        self.flush_logs();
    }
}
I am not sure that writing the JSON file in a destructor is the best approach. One cannot return errors from it, and as a destructor it executes quite slowly. Would it not be possible to offer explicit read/write functionality in the logging API?
I was weighing these two approaches (log every event into the file vs. flush all logs in the destructor). The problem with logging every event is that the events are not sequential; as a result I would need to rewrite the whole log file in order to update one log record, which is also not very efficient. So I decided to settle on the flush in the dtor. But I'm open to discussion ;-) I've also refactored this part and added a test for serialization in order to proceed with any changes easily.
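A minimal sketch of the flush-in-destructor pattern under discussion. The real code writes a JSON file; here the sink is an in-memory buffer shared via `Rc<RefCell<...>>` so the behaviour is observable without touching the filesystem. All names are illustrative.

```rust
use std::cell::RefCell;
use std::rc::Rc;

struct Logging {
    entries: Vec<String>,
    sink: Rc<RefCell<Vec<String>>>, // stand-in for the log file
}

impl Logging {
    fn log(&mut self, entry: &str) {
        self.entries.push(entry.to_string());
    }

    fn flush_logs(&mut self) {
        // Rewrite the whole "file" at once: entries are not sequential,
        // so per-event appends would force a full rewrite anyway.
        *self.sink.borrow_mut() = self.entries.clone();
    }
}

impl Drop for Logging {
    fn drop(&mut self) {
        self.flush_logs(); // note: errors cannot be reported from here
    }
}
```

Once the `Logging` value goes out of scope, everything accumulated in `entries` appears in the sink in one write.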
My comment was only concerned with offering explicit read/write functions which return Result, instead of reading and writing in the constructor and destructor with no way to report errors. The granularity of writes should be unaffected.
Discussed this offline. As a result, I would like to re-phrase @twittner's concern (as I understand its application): should we return an error from an RPC call of the private transaction methods when flushing/reading logs during that call fails? My opinion is that we should not: 1) logging is only secondary functionality, and the primary functionality must not suffer from its flaws; informing the client via warnings is enough IMO for such cases; 2) in half of the cases, private transaction methods are called during the automatic processing of incoming packets, where returning an error makes no sense at all, because nobody can handle it.
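The policy described above (a failing log flush must not fail the RPC call itself) could look roughly like this. The function names and the `RpcError` type are hypothetical, not the PR's API.

```rust
#[derive(Debug, PartialEq)]
#[allow(dead_code)]
enum RpcError {
    Other,
}

// Stand-in for the log-flushing step that may fail.
fn flush_logs(fail: bool) -> Result<(), String> {
    if fail { Err("disk full".into()) } else { Ok(()) }
}

fn private_tx_rpc_call(logging_fails: bool) -> Result<&'static str, RpcError> {
    // Primary functionality first ...
    let result = "tx accepted";
    // ... then logging; its error is only surfaced as a warning,
    // never propagated to the RPC caller.
    if let Err(e) = flush_logs(logging_fails) {
        eprintln!("warning: could not flush private tx logs: {}", e);
    }
    Ok(result) // still Ok even if flushing failed
}
```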
Refactoring this code in order to address the comments from @twittner
27e26d5 to ecec173
pub struct FileLogsSerializer {
    logs_dir: Option<PathBuf>,
}
Personally I would prefer a slightly different API:

#[derive(Default)]
pub struct FileLogsSerializer { .... }

impl FileLogsSerializer {
    pub fn with_path(path: PathBuf) -> Self { ... }
}
Any reason why it isn't #[derive(Default)]?
I came here to ask the same. IMO FileLogsSerializer should just have a PathBuf field; without it, it cannot perform its work. So instead of working around this with an internal Option, I would rather change Provider to contain the field logging: Option<Logger>.
twittner left a comment
LGTM.
@niklasad1: Maybe you want to catch up with the changes that happened since your approval?
@grbIzl shall this be mentioned in the release notes? If so, please add the label :P
* Journal for private txs added
* Tests after adding logging to private tx fixed
* Logs getter and tests added
* Time and amount limit for logs added
* RPC method for log retrieving added
* Correct path name and time validation implemented
* References for parameters added, redundant cloning reworked
* References for parameters added, redundant cloning reworked
* Work with json moved to the separate struct
* Serialization test added
* Fixed build after the merge with head
* Documentation for methods fixed, redundant field removed
* Fixed error usages
* Timestamp trait implemented for std struct
* Commented code removed
* Remove timestamp source, rework serialization test
* u64 replaced with SystemTime
* Path made mandatory for logging
* Source of monotonic time added
* into_system_time method renamed
* Initialize time source by max from current system time and max creation time from already saved logs
* Redundant conversions removed, code a little bit reworked according to review comments
* One more redundant conversion removed, rpc call simplified
* master:
  * docs: Add ProgPoW Rust docs to ethash module (#10653)
  * fix: Move PR template into .github/ folder (#10663)
  * docs: Add PR template (#10654)
  * Trivial journal for private transactions (#10056)
  * fix(compilation warnings) (#10649)
  * [whisper] Move needed aes_gcm crypto in-crate (#10647)
  * Adds parity_getRawBlockByNumber, parity_submitRawBlock (#10609)
  * Fix rinkeby petersburg fork (#10632)
  * ci: publish docs debug (#10638)
For private tx diagnostic purposes, a trivial journal has been implemented.
Details of the solution:
/// Original signed transaction hash (used as a source for private tx)
pub tx_hash: H256,
/// Current status of the private transaction
pub status: PrivateTxStatus,
/// Creation timestamp
pub creation_timestamp: u64,
/// List of validators
pub validators: Vec<ValidatorLog>,
/// Timestamp of the resulting public tx deployment
pub deployment_timestamp: Option<u64>,
/// Hash of the resulting public tx
pub public_tx_hash: Option<H256>,
/// Private tx was created but no validation received yet
Created,
/// Several validators (but not all) validated the transaction
Validating,
/// All validators validated the private tx
/// Corresponding public tx was created and added into the pool
Deployed,
[
  {
    "tx_hash": "0x957251b3acfa1cabaed21234afa40cdb0c51952c6ac426d538f8b559bc50c271",
    "status": "Created",
    "creation_timestamp": 1544627108,
    "validators": [
      {
        "account": "0x7ffbe3512782069be388f41be4d8eb350672d3a5",
        "validated": false,
        "validation_timestamp": null
      }
    ],
    "deployment_timestamp": null,
    "public_tx_hash": null
  },
  {
    "tx_hash": "0xc868b873b74259935f74b8b858bf44f692ad0ead3339844aaad1e6df6a29c982",
    "status": "Deployed",
    "creation_timestamp": 1544626975,
    "validators": [
      {
        "account": "0x7ffbe3512782069be388f41be4d8eb350672d3a5",
        "validated": true,
        "validation_timestamp": 1544626975
      }
    ],
    "deployment_timestamp": 1544626975,
    "public_tx_hash": "0xe56b48fd040e14e38d6f6dae7846fec10ec20a7edc903b31ba87d3b1bd9413f0"
  }
]
-- Number of logs stored (currently hardcoded to 1000)
-- Lifetime of the logs (currently hardcoded to 20 days); older logs will be dropped with the next processing
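The two retention limits above could be applied roughly as sketched below, using a simplified log record. The constants mirror the hardcoded values from the description; the `prune` function and its name are illustrative, not the PR's actual implementation.

```rust
const MAX_JOURNAL_LEN: usize = 1000;
const MAX_STORING_TIME_SECS: u64 = 20 * 24 * 60 * 60; // 20 days

struct TransactionLog {
    creation_timestamp: u64, // unix seconds, as in the JSON example
}

fn prune(mut logs: Vec<TransactionLog>, now: u64) -> Vec<TransactionLog> {
    // Drop logs older than the lifetime ...
    logs.retain(|l| now.saturating_sub(l.creation_timestamp) <= MAX_STORING_TIME_SECS);
    // ... then keep at most MAX_JOURNAL_LEN of the newest entries.
    logs.sort_by_key(|l| std::cmp::Reverse(l.creation_timestamp));
    logs.truncate(MAX_JOURNAL_LEN);
    logs
}
```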
Closes #9641