The height function apparently used to return the index of
the last block, rather than the height of the chain. Judging
by the code, that comment now seems to be incorrect, so we remove
the now-wrong comment, as well as a couple of +/- 1 adjustments
which caused the median calculation to differ from the
original blockchain_storage version.
It was only used by the older blockchain_storage.
We also move the code to the calling blockchain level, to avoid
replicating the code in every DB implementation. This also makes
the get_random_out method obsolete, and we delete it.
Pros:
- smaller on the blockchain
- shorter integrated addresses
Cons:
- less sparseness
- less ability to embed actual information
The boolean argument to encrypt payment ids is now gone from the
RPC calls, since the decision is made based on the length of the
payment id passed.
A payment ID may be encrypted using the tx secret key and the
receiver's public view key. The receiver can decrypt it with
the tx public key and the receiver's secret view key.
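A minimal sketch of the idea, with assumed helper name and tag byte (the shared secret is the standard Diffie-Hellman-style key derivation between the tx key and the view key; XOR makes encryption and decryption the same operation):

#include <cstring>
#include "crypto/crypto.h"  // key_derivation
#include "crypto/hash.h"    // hash8, cn_fast_hash
// Sketch only: XOR an 8-byte payment ID with a hash of the shared secret.
// The sender derives the secret from (tx secret key, receiver view public key);
// the receiver derives the same from (receiver view secret key, tx public key).
void xor_payment_id(crypto::hash8 &payment_id, const crypto::key_derivation &derivation)
{
  char data[sizeof(derivation) + 1];
  memcpy(data, &derivation, sizeof(derivation));
  data[sizeof(derivation)] = 0x8d;  // domain-separation tag byte (assumed value)
  const crypto::hash h = crypto::cn_fast_hash(data, sizeof(data));
  for (size_t i = 0; i < sizeof(payment_id); ++i)
    payment_id.data[i] ^= h.data[i];  // same call encrypts and decrypts
}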
Using integrated addresses now causes the payment IDs to be
encrypted. Payment IDs used manually are not encrypted by default,
but can be encrypted using the new 'encrypt_payment_id' field
in the transfer and transfer_split RPC calls. It is not possible
to use an encrypted payment ID by specifying a manual simplewallet
transfer/transfer_new command, though this is just a limitation
due to input parsing.
Blockchain:
1. Optim: Multi-thread long-hash computation when encountering groups of blocks (see the sketch after this list).
2. Optim: Cache verified txs and return result from cache instead of re-checking whenever possible.
3. Optim: Preload output-keys when encountering groups of blocks. Sort by amount and global-index before bulk-querying the database, and multi-thread when possible.
4. Optim: Disable double spend check on block verification, double spend is already detected when trying to add blocks.
5. Optim: Multi-thread signature computation whenever possible.
6. Patch: Disable locking (recursive mutex) on functions called from check_tx_inputs, which causes slowdowns (only seems to happen on Ubuntu/VMs??? Reason: TBD)
7. Optim: Removed looped full-tx hash computation when retrieving transactions from pool (???).
8. Optim: Cache difficulty/timestamps (735 blocks) for next-difficulty calculations so that only 2 db reads are needed when a new block arrives (instead of 1470 reads).
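A rough sketch of the group hashing in item 1 (illustrative only: 'Block' and the hash functor stand in for the real types, and the real code uses the project's own thread utilities):

#include <thread>
#include <vector>
#include "crypto/hash.h"
// Split a batch of blocks across worker threads; each thread computes the
// expensive PoW ("long") hash for a strided subset of the batch.
template <typename Block, typename HashFn>
std::vector<crypto::hash> hash_blocks_parallel(const std::vector<Block> &blocks, HashFn long_hash, unsigned nthreads)
{
  std::vector<crypto::hash> hashes(blocks.size());
  std::vector<std::thread> workers;
  for (unsigned t = 0; t < nthreads; ++t)
    workers.emplace_back([&, t] {
      for (size_t i = t; i < blocks.size(); i += nthreads)
        hashes[i] = long_hash(blocks[i]);  // distinct indices, no data race
    });
  for (auto &w : workers)
    w.join();
  return hashes;
}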
Berkeley-DB:
1. Fix: 32-bit data errors causing wrong output global indices and failure to send blocks to peers (etc).
2. Fix: Unable to pop blocks on reorganize due to transaction errors.
3. Patch: Large number of transaction aborts when running multi-threaded bulk queries.
4. Patch: Insufficient locks error when running full sync.
5. Patch: Incorrect db stats when returning from an immediate exit from "pop block" operation.
6. Optim: Add bulk queries to get output global indices.
7. Optim: Modified output_keys table to store public_key+unlock_time+height, so a single lookup retrieves all three (vs. 3 lookups); see the struct sketch after this list.
8. Optim: Used the output_keys table to retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key
9. Optim: Added thread-safe buffers used when multi-threading bulk queries.
10. Optim: Added support for nosync/write_nosync options for improved performance (*see --db-sync-mode option for details)
11. Mod: Added checkpoint thread and auto-remove-logs option.
12. *Now usable on 32-bit systems like RPI2.
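A sketch of the packed record behind items 7-8 (field names are illustrative; the point is that one table row now carries everything needed to check an input):

#include <cstdint>
#include "crypto/crypto.h"  // public_key
#pragma pack(push, 1)
// One row of the modified output_keys table: a single lookup returns the
// output's key, unlock time and creation height, instead of three lookups.
struct output_data_t
{
  crypto::public_key pubkey;
  uint64_t unlock_time;
  uint64_t height;
};
#pragma pack(pop)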
LMDB:
1. Optim: Added custom comparison for 256-bit key tables (minor speed-up; TBD: measure actual effect). See the comparator sketch after this list.
2. Optim: Modified output_keys table to store public_key+unlock_time+height, so a single lookup retrieves all three (vs. 3 lookups)
3. Optim: Used the output_keys table to retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key
4. Optim: Added support for sync/writemap options for improved performance (*see --db-sync-mode option for details)
5. Mod: Auto resize to +1GB instead of multiplier x1.5
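A sketch of items 1 and 5 against the plain LMDB C API (illustrative; for fixed 32-byte keys a raw memcmp is a valid total order, and the growth step is applied via mdb_env_set_mapsize):

#include <lmdb.h>
#include <string.h>
// Item 1: custom comparator for tables keyed by 256-bit hashes/keys.
static int compare_key32(const MDB_val *a, const MDB_val *b)
{
  return memcmp(a->mv_data, b->mv_data, 32);
}
// Registered once per dbi, inside a write transaction:
//   mdb_set_compare(txn, dbi, compare_key32);
// Item 5: when the map fills up, grow it by a fixed 1 GiB step instead of
// multiplying the current size by 1.5:
//   mdb_env_set_mapsize(env, current_size + (1ULL << 30));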
ETC:
1. Minor optimizations for slow-hash for ARM (RPI2). Incomplete.
2. Fix: 32-bit saturation bug when computing next difficulty on large blocks.
[PENDING ISSUES]
1. Berkeley-DB has a very slow "pop-block" operation. This is very noticeable on the RPI2, as it sometimes takes > 10 MINUTES to pop a block during reorganization.
This does not happen very often, however; most reorgs seem to take a few seconds, but it possibly depends on the number of outputs present. TBD.
2. Berkeley-DB: possible bug, "unable to allocate memory". TBD.
[NEW OPTIONS] (*Currently all enabled for testing purposes)
1. --fast-block-sync arg=[0:1] (default: 1)
a. 0 = Compute long hash per block (may take a while depending on CPU)
b. 1 = Skip long-hash and verify blocks based on embedded known good block hashes (faster, minimal CPU dependence)
2. --db-sync-mode arg=[[safe|fast|fastest]:[sync|async]:[nblocks_per_sync]] (default: fastest:async:1000); example invocations follow this list.
a. safe = fdatasync/fsync (or equivalent) per stored block. Very slow, but safest option to protect against power-out/crash conditions.
b. fast/fastest = Enables asynchronous fdatasync/fsync (or equivalent). Useful for battery-operated devices or STABLE systems with a UPS, and/or systems with battery-backed write cache/solid-state cache.
Fast - Write meta-data but defer data flush.
Fastest - Defer meta-data and data flush.
Sync - Flush data after nblocks_per_sync and wait.
Async - Flush data after nblocks_per_sync but do not wait for the operation to finish.
3. --prep-blocks-threads arg=[n] (default: 4 or system max threads, whichever is lower)
Max number of threads to use when computing long-hash in groups.
4. --show-time-stats arg=[0:1] (default: 1)
Show benchmark related time stats.
5. --db-auto-remove-logs arg=[0:1] (default: 1)
For berkeley-db only. Auto remove logs if enabled.
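Hypothetical example invocations for option 2, based on the syntax above:
$ bitmonerod --db-sync-mode=safe                # fdatasync/fsync per stored block
$ bitmonerod --db-sync-mode=fast:sync:1000      # defer data flush, sync every 1000 blocks and wait
$ bitmonerod --db-sync-mode=fastest:async:1000  # the default listed above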
**Note: lmdb and berkeley-db have changes to the tables and are not compatible with official git head version.
At the moment, you need a full resync to use this optimized version.
[PERFORMANCE COMPARISON]
**Some figures are approximations only.
Using a baseline machine of an i7-2600K + SSD (with full PoW computation):
1. The optimized lmdb/blockchain core can process blocks up to 585K in ~1.25 hours + download time, so it usually takes 2.5 hours to sync the full chain.
2. The current head with the in-memory DB can process blocks up to 585K in ~4.2 hours + download time, so it usually takes 5.5 hours to sync the full chain.
3. The current head with lmdb can process blocks up to 585K in ~32 hours + download time, and usually takes 36 hours to sync the full chain.
Average processing times (with full PoW computation):
lmdb-optimized:
1. tx_ave = 2.5 ms / tx
2. block_ave = 5.87 ms / block
memory-official-repo:
1. tx_ave = 8.85 ms / tx
2. block_ave = 19.68 ms / block
lmdb-official-repo (0f4a036437):
1. tx_ave = 47.8 ms / tx
2. block_ave = 64.2 ms / block
**Note: The following data denotes processing times only (does not include p2p download time)
lmdb-optimized processing times (with full pow computation):
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 1.25 hours processing time (--db-sync-mode=fastest:async:1000).
2. Laptop, Dual-core / 4-threads U4200 (3Mb) - 4.90 hours processing time (--db-sync-mode=fastest:async:1000).
3. Embedded, Quad-core / 4-threads Z3735F (2x1Mb) - 12.0 hours processing time (--db-sync-mode=fastest:async:1000).
lmdb-optimized processing times (with per-block-checkpoint)
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 10 minutes processing time (--db-sync-mode=fastest:async:1000).
berkeley-db optimized processing times (with full pow computation)
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 1.8 hours processing time (--db-sync-mode=fastest:async:1000).
2. RPI2: improved from an estimated 3 months (???) to 2.5 days (*needs 2A supply + 1GHz clock + [USB+SSD] to achieve this speed) (--db-sync-mode=fastest:async:1000).
berkeley-db optimized processing times (with per-block-checkpoint)
1. RPI2: 12-15 hours (*needs 2A supply + 1GHz clock + [USB+SSD] to achieve this speed) (--db-sync-mode=fastest:async:1000).
Add public method blockchain_storage::debug_pop_block_from_blockchain()
Ensure blockchain_import calls destructors before exit.
To test:
DATABASE=memory make release
// create blockchain.bin from blockchain.raw if needed
build/release/bin/blockchain_import --block-stop 1000
// try popping a single block
build/release/bin/blockchain_import --pop-blocks 1
b43716c Do store transaction's blob size in transaction_chain_entry (Sergey Kazenyuk)
3be518f Use single get_transaction_hash to get both id and blob size (Sergey Kazenyuk)
There will need to be some more refactoring for these changes to be
considered complete/correct, but for now it's working.
new daemon CLI argument "--db-type"; works for LMDB and BerkeleyDB.
A good deal of refactoring is also present in this commit, namely
Blockchain no longer instantiates BlockchainDB, but rather is passed a
pointer to an already-instantiated BlockchainDB on init().
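A sketch of the new wiring (hypothetical call sequence; the exact init() signature and header paths are assumptions):

#include <string>
#include "blockchain_db/blockchain_db.h"  // codebase headers (paths assumed)
// The caller (daemon or utility) now constructs and opens the DB itself,
// then hands it to Blockchain, which no longer instantiates BlockchainDB.
bool init_core_db(Blockchain &chain, const std::string &db_path)
{
  BlockchainDB *db = new BlockchainLMDB();
  db->open(db_path);      // the caller chooses the backend and opens it
  return chain.init(db);  // Blockchain just uses the injected pointer
}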
Add support to:
- BlockchainDB, BlockchainLMDB
- blockchain_import utility to open LMDB database with one or more
LMDB flags.
Sample use:
$ blockchain_import --database lmdb#nosync
$ blockchain_import --database lmdb#nosync,nometasync
In order to make things more general, BlockchainDB now has get_db_name()
which should return a string with the "name" of that type of db.
This "name" will be the subfolder name that holds that db type's files
within the monero folder.
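A sketch of the addition (class bodies trimmed to the new method; the override value shown is an assumption):

#include <string>
class BlockchainDB
{
public:
  // "name" of this DB type, used as the subfolder under the data directory
  virtual std::string get_db_name() const = 0;
};
class BlockchainLMDB : public BlockchainDB
{
public:
  std::string get_db_name() const { return "lmdb"; }  // assumed value
};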
Small bugfix: blockchain_converter was not correctly appending this in
the prior hard-coded-string implementation of the subfolder data
directory concept.
Ostensibly janitorial work, but should be more relevant later down the
line. Things that depend on core cryptonote things (i.e.
cryptonote_core) don't necessarily depend on BlockchainDB and thus
have no need to have BlockchainDB baked in with them.
The RPC calls the daemon executable uses to talk to the running daemon
instance have mostly been added back in. Rate limiting has not been
added in upstream, but is on its way in a separate effort, so those
calls are still NOPed out.
many RPC functions added by the daemonize changes
(and related changes on the upstream dev branch that were not merged)
were commented out (apart from return). Other than that, this *should*
work...at any rate, it builds, and that's something.
new update of the PR with network limits
more debug options:
discarding downloaded blocks, either all or after a given height.
trying to trigger the locking errors.
debug levels polished/tuned to sane values.
debug/logging improved.
warning: this PR should be correct code, but it could make
an existing (in the master version) locking error appear more often.
it's a race on the list (map) of peers, e.g. between closing/deleting
them versus working on them in the net-limit sleep while sending a chunk.
the bug is not in this code/this PR, but in the master version.
the locking problem of master will be fixed in another PR.
the problem is UB, and in practice it seems to usually cause program abort
(tested on Debian stable with updated gcc). see --help for the option
to add a sleep to trigger the error faster.
Update of the PR with network limits
works very well for all speeds
(but remember that low download speed can stop upload
because we then slow down downloading of blockchain
requests too)
more debug options
fixed pedantic warnings in our code
should work again on Mac OS X and FreeBSD
fixed warning about size_t
tested on Debian, Ubuntu, Windows (testing now)
TCP options and ToS (QoS) flag
FIXED peer number limit
FIXED some spikes in ingress/download
FIXED problems when up and down limits differ
commands and options for network limiting
works very well e.g. for 50 KiB/sec up and down
ToS (QoS) flag
peer number limit
TODO some spikes in ingress/download
TODO problems when up and down limits differ
added "otshell utils" - simple logging (with colors, text files channels)
Usage:
default is lmdb for blockchain branch:
$ make release
same as:
$ DATABASE=lmdb make release
for original in-memory implementation:
$ DATABASE=memory make release
It expects the total number of blocks of the main chain, not the last
block id (off-by-one error).
This again behaves like the same height assertion done in original
implementation in blockchain_storage::handle_alternative_block().
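A minimal illustration of the off-by-one:

#include <cstdint>
// A main chain of N blocks has block ids (indices) 0 .. N-1, so:
//   total number of blocks = top block id + 1
// Passing the top block id where a block count is expected (or the
// reverse) is exactly the off-by-one being fixed here.
inline uint64_t block_count_from_top_id(uint64_t top_block_id) { return top_block_id + 1; }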
This allows a reorganization to proceed after an alternative block has
been added.
Fix comparison between main and alternate chain's cumulative difficulty.
This fixes the continual reorganization between a main and alternate
chain, using the same two latest blocks from each.
The check that cumulative difficulty of the alternate chain is bigger
than main's was not using main's last block, but incorrectly using the
passed-in block's previous block. main_chain_cumulative_difficulty was
being used in two different ways. This has been split up to keep use
of main_chain_cumulative_difficulty consistent.
Remove have_block() check from Blockchain::handle_block_to_main_chain().
Add logging to have_block().
This allows blockchain reorganization to proceed further.
have_block() check here causes an error after a blockchain reorganize
begins with error: "Attempting to add block to main chain, but it's
already either there or in an alternate chain."
While reorganizing to become the main chain, a block in the
alternative chain would be refused due to have_block() rightfully
finding it in the alternative chain. The reorganization would end in
rollback, restoring to previous blockchain.
Original implementation didn't call it here, and it doesn't appear
necessary to be called from here in this implementation either. When
needed, it appears it's called prior to handle_block_to_main_chain().
Complete method BlockchainLMDB::remove_output()
- use output index as the key for:
m_output_indices, m_output_txs, m_output_keys
- call new method BlockchainLMDB::remove_amount_output_index()
Add method to remove amount output index.
- BlockchainLMDB::remove_amount_output_index()
- for m_output_amounts
This also fixes the segfault when blockchain reorganization is
attempted.
Use last block id, not number of blocks (off-by-one error).
Fixes error at start of blockchain reorganization: "Attempt to get
cumulative difficulty from height <XXXXXX> failed -- difficulty not in
db"
Implement BlockchainLMDB::get_output_global_index()
- returns global output index for a given amount and amount output
index.
Add information to debug statement for failed ring signature check
within Blockchain::check_tx_inputs()
Fixes bitmonerod RPC call "/getrandom_outs.bin" to return correct
output keys, used in creating a transaction with mixins.
TODO: get_output_global_index() could be refactored with part of
get_output_tx_and_index(), as the latter uses the former's
functionality.
Keep track of LMDB read transaction.
Fix Blockchain::get_tx_outputs_gindexs() to return amount output
indices.
Implement BlockchainLMDB::get_tx_amount_output_indices() and call it
from the function instead of BlockchainLMDB::get_tx_output_indices()
Previously, Blockchain::get_tx_outputs_gindexs() was instead returning
global output indices, which are internal to LMDB databases.
Allows bitmonerod RPC /get_o_indexes.bin to return the amount output
indices as expected.
Allows simplewallet refresh to set correct amount output indices for
incoming transfers. simplewallet can now construct and send valid
transactions (currently only without mixins).
This is a fix that doesn't require altering the structure of the
current LMDB databases.
TODO:
This can be done more efficiently by adding another LMDB database
(key-value table).
It's not used during regular transaction validation by bitmonerod. I
think it's currently used only or mainly by simplewallet for just its
own incoming transactions. So the current behavior is not a primary
bottleneck.
Currently, it's using the "output_amounts" database, walking through a
given amount's list of values, comparing each one to a given global
output index. The iteration number of the match is the desired result:
the amount output index. This is done for each global output index of
the transaction.
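A sketch of that walk (illustrative; the vector stands in for iterating the amount's stored values in the output_amounts database):

#include <cstdint>
#include <stdexcept>
#include <vector>
// The amount output index is the position of the tx output's global index
// within the list of global indices stored for that amount.
uint64_t amount_output_index(const std::vector<uint64_t> &amount_global_indices, uint64_t global_index)
{
  for (uint64_t i = 0; i < amount_global_indices.size(); ++i)
    if (amount_global_indices[i] == global_index)
      return i;  // the iteration number of the match is the result
  throw std::runtime_error("global output index not found for this amount");
}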
A tx's amount output indices can be stored in various other ways
allowing for faster lookup. Since a tx is only written once, there are
no special future write requirements for its list of indices.
As it is useful for functions calling BlockchainDB functions to know
whether an exception is expected (for example, attempting to get a block
and counting it as missing if an exception is thrown, to save time
checking whether it exists first), the inline functions throw{0,1} need
to keep the exception type information.
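A sketch of the shape this takes (assumed form; templating on the exception type is what preserves it for the caller's catch clauses):

#include <iostream>
// throw0/throw1 log at two different levels (hence two helpers), then throw.
// Taking the concrete exception type as a template parameter means callers
// can still catch e.g. BLOCK_DNE rather than a sliced base class.
template <typename T>
inline void throw0(const T &e)
{
  std::cerr << "L0: " << e.what() << std::endl;  // stand-in for the project's level-0 log
  throw e;
}
template <typename T>
inline void throw1(const T &e)
{
  std::cerr << "L1: " << e.what() << std::endl;  // stand-in for the level-1 log
  throw e;
}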
Slight comment update due to copy/paste failure.
Fixes problem of obtaining incorrect outputs used for tx input.
Reverts to earlier intended behavior that was fixed in previous
commit's split of get_output_tx_and_index into two functions.
Ideally, the log would go in the exception's ctor, but
two log levels are used, so I'd need to specify the level
in the ctor, which isn't great as it's not really related
to the exception.
hard-coded config folder, hard-coded BlockchainDB subclass.
Needs finessing, but should be testable this way.
update for rebase (warptangent 2015-01-04)
fix conflicts with upstream CMakeLists.txt files
src/CMakeLists.txt (edit original commit)
src/blockchain_converter/CMakeLists.txt (add)
There are quite a few debug prints in this commit that will need to be
removed later, but for posterity (in case someone wants to debug this
while I'm away), I left them in.
Currently errors when syncing on the first block that has a "real"
transaction. Seems to not be able to validate the ring signature, but I
can't for the life of me figure out what's going wrong.
Blockchain and BlockchainLMDB classes now have a debug print at the
beginning of each function at log level 2. These can be removed at any
time, but for now are quite useful.
Blockchain runs, and adds the genesis block just fine, but for some
reason isn't getting new blocks.
Probably needs more looking at -- a lot of things were done...in a rushed
sort of way. That said, it all builds and *should* be at least
testable.
update for rebase (warptangent 2015-01-04)
fix conflicts with upstream CMakeLists.txt files
src/CMakeLists.txt (remove edits from original commit)
tests/CMakeLists.txt (remove edits from original commit)
src/cryptonote_core/CMakeLists.txt (edit)
- use blockchain db .cpp and .h files
- add LMDB_LIBRARIES
All of the functionality for the LMDB implementation of BlockchainDB is
implemented, but only what is in tests/unit_tests/BlockchainDB.cpp has
been tested. That test basically adds a block, then checks that the
block and a tx from it can be fetched. More tests should be added at
some point.
Still needs testing (and need to write a few more unit tests), but
everything should be there. Lots of unfortunate duplication,
but...well, I can't see a way around it using LMDB.
A couple of other minor changes in this commit, only slightly relevant.