
synch with master #1

Merged
egieseke merged 62 commits into egieseke:master from algorand:master
Jun 24, 2019

Conversation

@egieseke
Owner

No description provided.

Vervious and others added 30 commits June 12, 2019 12:55
fix flaky agreement test: make sortition deterministic
* test concurrent SQL writes and reads

We see unexpected behavior when we have 1 thread doing inserts, and 2
threads doing selects of recently-inserted rows.  One of the threads
doing selects will get sql.ErrNoRows even though it is querying a row
that was already committed by an insert.

The issue might be that shared-cache connections provide serializability
but not strict serializability; see:

    http://mailinglists.sqlite.org/cgi-bin/mailman/private/sqlite-users/2019-June/084813.html

* workaround for sqlite issue: never use shared cache

The shared cache isn't that important for our workload, and I can't
reproduce the ledger bug when running in a no-shared-cache configuration.

By using private caches (no shared cache), we also maintain the
performance optimization of allowing concurrent reads and writes (and,
specifically, TestDBConcurrency passes).
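As a rough illustration of the workaround, SQLite's URI filename syntax lets a connection opt out of the shared cache via the `cache=private` query parameter. The sketch below only builds such a DSN string; the driver name `sqlite3` and the `sql.Open` call in the comment assume the mattn/go-sqlite3 driver, which is an assumption on my part, not something stated in this PR.

```go
package main

import "fmt"

// privateCacheDSN builds a SQLite URI filename that explicitly requests
// a private page cache, sidestepping the shared-cache serializability
// issue described above. cache=private is standard SQLite URI syntax.
func privateCacheDSN(path string) string {
	return "file:" + path + "?cache=private"
}

func main() {
	dsn := privateCacheDSN("ledger.sqlite")
	fmt.Println(dsn)
	// A driver such as mattn/go-sqlite3 (assumed here) would then be
	// opened with: db, err := sql.Open("sqlite3", dsn)
}
```

With a private cache, each connection keeps its own page cache, so concurrent readers no longer observe the stale-read anomaly at the cost of some extra memory.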
New installs should be able to specify -g mainnet to directly join mainnet.
Updating an existing installation should preserve the genesis.json network even if someone manually replaced the original genesis.json file (ignore wallet-genesis.id).
* goal clerk inspect: print msig PKs using base32+checksum

* put boilerplate on top of inspect.go
This fixes a deviation from the spec. Currently, the code
(accidentally) filters all votePresent and voteVerified events
from the next round if myPlayer.Period is large. Instead,
whether we filter votes from the next round should not be a
function of the current period. As specified in the spec, we allow
votes with vote.Round == player.Round + 1, vote.Period == 0,
and vote.Step in {propose, soft, cert, next}.
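The corrected filter condition can be sketched as a small predicate. The function and parameter names below are illustrative, not the actual go-algorand agreement code; the point is that the player's current period plays no role in the decision.

```go
package main

import "fmt"

// allowNextRoundVote sketches the fixed filter: a vote for the next
// round is accepted whenever it targets period 0 of player.Round+1,
// no matter how large the player's current period is.
func allowNextRoundVote(voteRound, votePeriod, playerRound, playerPeriod uint64) bool {
	// The buggy behavior made this depend on playerPeriod; the fix is
	// that playerPeriod does not appear in the condition at all.
	return voteRound == playerRound+1 && votePeriod == 0
}

func main() {
	// Even at a large current period, next-round period-0 votes pass.
	fmt.Println(allowNextRoundVote(101, 0, 100, 57))
}
```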
AlgorandFullNode::BroadcastSignedTxn checks, logs, and returns an error if an error occurs at any step, except for the final step where the transaction is sent to the networking stack for broadcast. This commit checks, logs, and returns that error if it occurs as well.
Link to the Algorand vulnerability submission form and bug bounty program.
Actually display the checksum for multisig PKs; also display txn Sender,
Receiver, and CloseRemainderTo using the same address encoding; and fix
the counter for displaying multiple txns in a file.
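A minimal sketch of base32+checksum display, assuming the Algorand-style address convention (append the last 4 bytes of the key's SHA-512/256 digest, then base32-encode without padding, so a 32-byte key yields a 58-character string). This is not the actual `goal clerk inspect` code, just the general technique.

```go
package main

import (
	"crypto/sha512"
	"encoding/base32"
	"fmt"
)

// checksumAddress appends a 4-byte checksum (the trailing bytes of the
// key's SHA-512/256 digest) to a 32-byte public key and base32-encodes
// the result without padding.
func checksumAddress(pk [32]byte) string {
	digest := sha512.Sum512_256(pk[:])
	body := append(pk[:], digest[len(digest)-4:]...)
	return base32.StdEncoding.WithPadding(base32.NoPadding).EncodeToString(body)
}

func main() {
	var zero [32]byte
	fmt.Println(len(checksumAddress(zero))) // 36 bytes -> 58 base32 chars
}
```

The checksum lets a decoder reject mistyped addresses before they reach the ledger, which is why the same encoding is reused for Sender, Receiver, and CloseRemainderTo.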
In particular, tell GitHub's language autodetection feature that
crypto/libsodium-fork/* is vendored code. This way GitHub will show go-algorand
as a Go project rather than a C project.
See https://github.com/github/linguist#vendored-code
This commit allows streaming cadaver files into the coroner debug tool. This
reduces the memory footprint when processing large cadaver files, and it also
allows a user or other tools to inspect coroner's output as soon as it is
produced.

This commit removes support for relative round bounds for trimming coroner,
since end-relative round bounds are hard to compute while streaming.
Existing code would retry retrieving a block regardless of whether the previous block was retrieved successfully. While (functionally) it won't hurt to make the subsequent attempt, it generates excessive network traffic when the first block retrieval is being delayed.

We want to keep the happy path of retrieving blocks as fast as possible, while slowing down once we run into network failures.
It is possible to go to/from a wallet recovery mnemonic, but previously this functionality was not mirrored in account seed mnemonics: one could only import an account's mnemonic (for example, from algokey), and not export it. This PR proposes to add a new command `goal account export`, which recovers a stored account's mnemonic.

Additionally, the distinctions between wallet mnemonics and account mnemonics are clarified somewhat.

Testing: Tested this by destroying and then recovering an account on TestNet.
…r arguments (#30)

Example from config.json:
"NodeExporterPath": "./node_exporter --collector.systemd"

This should result in the following process being launched when metrics are enabled:
./node_exporter --collector.systemd --web.listen-address=:9100 --web.telemetry-path=/metrics
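A rough sketch of how such a config value can be split into a command plus its extra arguments, with the standard flags appended afterwards. The function name is hypothetical and the whitespace split (`strings.Fields`) does not handle quoted arguments; it only illustrates the behavior described above.

```go
package main

import (
	"fmt"
	"strings"
)

// nodeExporterCmd splits a configured value such as
// "./node_exporter --collector.systemd" into the executable path and
// its arguments, then appends the standard web flags. Note that a
// plain whitespace split cannot handle quoted arguments.
func nodeExporterCmd(configured string) (string, []string) {
	fields := strings.Fields(configured)
	if len(fields) == 0 {
		return "", nil
	}
	args := append(fields[1:],
		"--web.listen-address=:9100",
		"--web.telemetry-path=/metrics")
	return fields[0], args
}

func main() {
	cmd, args := nodeExporterCmd("./node_exporter --collector.systemd")
	fmt.Println(cmd, strings.Join(args, " "))
}
```

The returned pair would typically feed into `exec.Command(cmd, args...)` when metrics are enabled.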
* Add Origin: and Label: fields to Debian package releases

* Add configuration file-handling directives for the Debian package
The existing sendReceive_test family of tests is ping-pong-like, so this change makes those tests move 45x ("a lot") more money.

The tests were also restructured somewhat to make them run faster.
winder and others added 25 commits June 19, 2019 12:15
Disconnect slow peers based on their network activity rather than pending buffer size.
* Add support for zone exporting.

* Add space between functions.
Remove extra fields from Relay, remove dead code, and add copyright headers.
* Updating the docker file to work

* Updating dockerfile after review

* Slight tweak to dockerfile apt-get line(s)
…40)

* GOAL2-614 Updated algoh so that logs will be captured if algod terminates before block watcher is initialized.
Fix code analysis warnings.

* Consolidated error collection into captureErrorLogs().
Removed blank identifiers.
fix filter for next round period 0 votes
TestDBConcurrencyRW assumes `/dev/shm` device exists.  It doesn't on Mac, so use tmp folder in that case.
Fix TestDBConcurrencyRW test to run on Macs
)

* Skip test for taking port 8080 when this port is already in use.

* fix typo.
Fix nit from #7 and run 'make sanity'
Changing the encoding of expected hash value in AuctionMinion to Base32
Add data-driven utility for automatic relay configuration
@egieseke egieseke merged commit c055ddb into egieseke:master Jun 24, 2019
egieseke pushed a commit that referenced this pull request Jun 30, 2020
The test was creating a proxy which delays request execution as a way to slow down the catchpoint catchup process.
This is important so that we can monitor from the goal command that the catchup is working as intended.
However, delaying request execution caused an issue where the node was trying to issue multiple requests for blocks 1-16, in parallel, which reached the proxy in an arbitrary order. As a result, the request for block #1 could be delayed by more than 4 seconds, causing it to time out.

The solution was to reconfigure the number of blocks being retrieved in parallel to 2. This ensures we only fetch two blocks at a time. Since the delay is configured to 1.5 seconds, the worst-case wait stays well under 4 seconds.
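The arithmetic behind the fix can be sketched with a back-of-envelope model (my own simplification, not the actual test code): if the proxy serializes requests and delays each by d seconds, a given block request can queue behind up to n-1 others, so the worst-case wait is roughly n*d.

```go
package main

import "fmt"

// worstCaseWait models the maximum time a single block request can wait
// when n requests are in flight and the proxy delays each by delaySec.
func worstCaseWait(parallel int, delaySec float64) float64 {
	return float64(parallel) * delaySec
}

func main() {
	// 16 parallel requests at 1.5s each can stall a block well past
	// the 4-second timeout; 2 parallel requests stay safely under it.
	fmt.Println(worstCaseWait(16, 1.5)) // 24 seconds
	fmt.Println(worstCaseWait(2, 1.5))  // 3 seconds
}
```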
egieseke pushed a commit that referenced this pull request Oct 19, 2020
…2.0.14

Normalize differences between master and rel/beta
egieseke pushed a commit that referenced this pull request Mar 15, 2022
egieseke pushed a commit that referenced this pull request Sep 13, 2022
Tests: Remove using unreleased semicolon support in AVM test