Commit Graph

254 Commits

Riccardo Spagni
5a26676932
Merge pull request #343
e20a4dd blockchain: fix testnet syncing (to not use blocks.dat) (moneromooo-monero)
2015-07-18 22:59:02 +02:00
moneromooo-monero
e20a4ddc76
blockchain: fix testnet syncing (to not use blocks.dat)
These are mainnet blocks, and would cause syncing on testnet to
reject all incoming blocks.
2015-07-18 10:25:22 +01:00
NoodleDoodleNoodleDoodleNoodleDoodleNoo
70ae2ee711 Fixed threadpool bug when running on single-core systems.
*Thanks to freshman for reporting the bug.
2015-07-17 20:02:29 -07:00
NoodleDoodleNoodleDoodleNoodleDoodleNoo
94ea3e8ed2 Removed on_idle() calls to Blockchain::store_blockchain() for lmdb.
Added option to cache tx-input verification results.
2015-07-15 23:20:25 -07:00
NoodleDoodleNoodleDoodleNoodleDoodleNoo
2e293a563e Fixed binary size issue due to embedded checkpoint data.
Fixed OSX compilation issues due to random lmdb resize points.
Fixed infinite loop bug when calling core::get_block_template(..).
2015-07-15 23:20:20 -07:00
NoodleDoodleNoodleDoodleNoodleDoodleNoo
e5d2680094 ** CHANGES ARE EXPERIMENTAL (FOR TESTING ONLY)
Blockchain:
1. Optim: Multi-thread long-hash computation when encountering groups of blocks.
2. Optim: Cache verified txs and return result from cache instead of re-checking whenever possible.
3. Optim: Preload output-keys when encountering groups of blocks. Sort by amount and global-index before bulk querying the database, and multi-thread when possible.
4. Optim: Disable double-spend check on block verification; double spends are already detected when trying to add blocks.
5. Optim: Multi-thread signature computation whenever possible.
6. Patch: Disable locking (recursive mutex) on functions called from check_tx_inputs, which causes slowdowns (only seems to happen on Ubuntu/VMs; reason TBD).
7. Optim: Removed looped full-tx hash computation when retrieving transactions from pool (???).
8. Optim: Cache difficulty/timestamps (735 blocks) for next-difficulty calculations so that only 2 db reads are needed when a new block arrives (instead of 1470 reads).

Berkeley-DB:
1. Fix: 32-bit data errors causing wrong output global indices and failure to send blocks to peers (etc).
2. Fix: Unable to pop blocks on reorganize due to transaction errors.
3. Patch: Large number of transaction aborts when running multi-threaded bulk queries.
4. Patch: Insufficient locks error when running full sync.
5. Patch: Incorrect db stats when returning from an immediate exit from "pop block" operation.
6. Optim: Add bulk queries to get output global indices.
7. Optim: Modified output_keys table to store public_key+unlock_time+height, so a single lookup per transaction is needed (vs 3).
8. Optim: Used output_keys table to retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key.
9. Optim: Added thread-safe buffers used when multi-threading bulk queries.
10. Optim: Added support for nosync/write_nosync options for improved performance (*see --db-sync-mode option for details)
11. Mod: Added checkpoint thread and auto-remove-logs option.
12. *Now usable on 32-bit systems like RPI2.

LMDB:
1. Optim: Added custom comparison for 256-bit key tables (minor speed-up, TBD: get actual effect)
2. Optim: Modified output_keys table to store public_key+unlock_time+height, so a single lookup per transaction is needed (vs 3).
3. Optim: Used output_keys table to retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key.
4. Optim: Added support for sync/writemap options for improved performance (*see --db-sync-mode option for details)
5. Mod: Auto-resize by +1GB instead of by a 1.5x multiplier.

ETC:
1. Minor optimizations for slow-hash for ARM (RPI2). Incomplete.
2. Fix: 32-bit saturation bug when computing next difficulty on large blocks.

[PENDING ISSUES]
1. Berkeley DB has a very slow "pop-block" operation. This is very noticeable on the RPI2, as it sometimes takes > 10 MINUTES to pop a block during reorganization.
   This does not happen very often, however; most reorgs seem to take a few seconds, but it possibly depends on the number of outputs present. TBD.
2. Berkeley db, possible bug "unable to allocate memory". TBD.

[NEW OPTIONS] (*Currently all enabled for testing purposes; see the example invocation after this list)
1. --fast-block-sync arg=[0:1] (default: 1)
	a. 0 = Compute long hash per block (may take a while depending on CPU)
	b. 1 = Skip long-hash and verify blocks based on embedded known good block hashes (faster, minimal CPU dependence)
2. --db-sync-mode arg=[[safe|fast|fastest]:[sync|async]:[nblocks_per_sync]] (default: fastest:async:1000)
	a. safe = fdatasync/fsync (or equivalent) per stored block. Very slow, but safest option to protect against power-out/crash conditions.
	b. fast/fastest = Enables asynchronous fdatasync/fsync (or equivalent). Useful for battery-operated devices, stable systems with a UPS, and/or systems with battery-backed write cache / solid-state cache.
	Fast    - Write meta-data but defer data flush.
	Fastest - Defer meta-data and data flush.
	Sync    - Flush data after nblocks_per_sync and wait.
	Async   - Flush data after nblocks_per_sync but do not wait for the operation to finish.
3. --prep-blocks-threads arg=[n] (default: 4 or system max threads, whichever is lower)
        Max number of threads to use when computing long-hash in groups.
4. --show-time-stats arg=[0:1] (default: 1)
	Show benchmark related time stats.
5. --db-auto-remove-logs arg=[0:1] (default: 1)
	For berkeley-db only. Auto remove logs if enabled.
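For illustration, the options above might be combined on one command line roughly as follows. This is a sketch only: the bitmonerod binary name is an assumption, and the =value spelling mirrors the --db-sync-mode=fastest:async:1000 form used in the timings below.

	# fastest sync: deferred flushes, bulk-flush every 1000 blocks, skip per-block pow via embedded block hashes
	$ bitmonerod --db-sync-mode=fastest:async:1000 --fast-block-sync=1 --prep-blocks-threads=4 --show-time-stats=1
	# safest sync: fdatasync/fsync per stored block, full pow computation per block
	$ bitmonerod --db-sync-mode=safe:sync:1 --fast-block-sync=0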

**Note: lmdb and berkeley-db have changes to the tables and are not compatible with the official git HEAD version.
	At the moment, you need a full resync to use this optimized version.

[PERFORMANCE COMPARISON]
**Some figures are approximations only.
Using a baseline machine of an i7-2600K + SSD (with full pow computation):
1. The optimized lmdb/blockchain core can process blocks up to 585K in ~1.25 hours + download time, so it usually takes 2.5 hours to sync the full chain.
2. The current head with the in-memory db can process blocks up to 585K in ~4.2 hours + download time, so it usually takes 5.5 hours to sync the full chain.
3. The current head with lmdb can process blocks up to 585K in ~32 hours + download time, and usually takes 36 hours to sync the full chain.

Average processing times (with full pow computation):
lmdb-optimized:
1. tx_ave = 2.5 ms / tx
2. block_ave = 5.87 ms / block
memory-official-repo:
1. tx_ave = 8.85 ms / tx
2. block_ave = 19.68 ms / block
lmdb-official-repo (0f4a036437):
1. tx_ave = 47.8 ms / tx
2. block_ave = 64.2 ms / block

**Note: The following data denotes processing times only (does not include p2p download time)
lmdb-optimized processing times (with full pow computation):
1. Desktop,  Quad-core / 8-threads 2600k  (8MB) - 1.25 hours processing time (--db-sync-mode=fastest:async:1000).
2. Laptop,   Dual-core / 4-threads U4200  (3MB) - 4.90 hours processing time (--db-sync-mode=fastest:async:1000).
3. Embedded, Quad-core / 4-threads Z3735F (2x1MB) - 12.0 hours processing time (--db-sync-mode=fastest:async:1000).

lmdb-optimized processing times (with per-block-checkpoint)
1. Desktop,  Quad-core / 8-threads 2600k  (8MB) - 10 minutes processing time (--db-sync-mode=fastest:async:1000).

berkeley-db optimized processing times (with full pow computation)
1. Desktop, Quad-core / 8-threads 2600k  (8MB) - 1.8 hours processing time (--db-sync-mode=fastest:async:1000).
2. RPI2. Improved from an estimated 3 months(???) to 2.5 days (*Needs a 2A supply + 1GHz clock + [USB+SSD] to achieve this speed) (--db-sync-mode=fastest:async:1000).

berkeley-db optimized processing times (with per-block-checkpoint)
1. RPI2. 12-15 hours (*Needs a 2A supply + 1GHz clock + [USB+SSD] to achieve this speed) (--db-sync-mode=fastest:async:1000).
2015-07-15 23:20:16 -07:00
NoodleDoodleNoodleDoodleNoodleDoodleNoo
1f83444d3d Update blockchain.cpp
Fix compilation error
2015-07-15 23:20:15 -07:00
NoodleDoodleNoodleDoodleNoodleDoodleNoo
da1d3c01de
Experimental BDB workaround optimizations 2015-07-15 21:13:42 -07:00
NoodleDoodleNoodleDoodleNoodleDoodleNoo
5d304cabfd Fix loop bug when calling core::get_block_template, causing calling thread to lock up. 2015-07-10 22:09:21 -07:00
moneromooo-monero
6a0f61d800
account: allow creating an account from a public address and view secret key 2015-06-20 17:33:06 +01:00
moneromooo-monero
f6da25a32e
Fix standard address deserialization 2015-06-16 15:36:20 +01:00
Riccardo Spagni
2d9d6c7621
Merge pull request #321
bbc5475 Fix DNS checkpoint consensus code (moneromooo-monero)
2015-06-14 13:10:18 +02:00
moneromooo-monero
bbc5475418
Fix DNS checkpoint consensus code
It's supposed to load all records and pick one that it finds twice.
2015-06-14 11:05:57 +01:00
moneromooo-monero
7bd6efe313
account: add a couple consts 2015-06-12 16:48:52 +01:00
moneromooo-monero
63741d8264
Integrated addresses (standard address plus payment id) 2015-06-12 16:48:41 +01:00
Riccardo Spagni
5bee2d2edf
Merge pull request #303
c882af6 wallet: add watch only wallet support (moneromooo-monero)
f7767c6 account: add a forget_spend_key method (moneromooo-monero)
2015-06-11 10:27:00 +02:00
moneromooo-monero
f7767c6508
account: add a forget_spend_key method 2015-05-31 15:32:54 +01:00
Riccardo Spagni
e01d32e52d
cleaning up, removing redundant files, renaming, fixing incorrect licenses 2015-05-31 13:40:18 +02:00
warptangent
d1eac1b71c
Support debugging command --pop-blocks on in-memory blockchain
Add public method blockchain_storage::debug_pop_block_from_blockchain()

Ensure blockchain_import calls destructors before exit.

To test:

DATABASE=memory make release

# create blockchain.bin from blockchain.raw if needed
build/release/bin/blockchain_import --block-stop 1000

# try popping a single block
build/release/bin/blockchain_import --pop-blocks 1
2015-05-16 19:38:52 -07:00
Thomas Winget
b1d92bcc37
Fixes changes to sort tx by fee per kb 2015-05-13 20:27:06 -04:00
Riccardo Spagni
1d42deb767
Merge pull request #281
ac011b4 Rename src/blockchain_converter/ to src/blockchain_utilities/ (warptangent)
ed9c639 Add --block-number option to blockchain_import (warptangent)
1eb4c66 Update blockchain utilities with portable bootstrap file format (warptangent)
54bd9c1 Add MDB_NORDAHEAD as a supported LMDB flag for blockchain_import (warptangent)
a52496d Condense #if directives (warptangent)
8c1a188 Add basic "pop blocks" command to blockchain_import for debugging (warptangent)
71af046 Update log statements (warptangent)
2015-05-13 11:21:42 +02:00
Riccardo Spagni
012164fff8
resolved merge conflict in tx_pool.cpp 2015-05-13 11:18:22 +02:00
warptangent
71af04669c
Update log statements
Use filesystem path conversion to string() instead of c_str().
Windows may otherwise output a memory address.
2015-05-08 14:12:06 -07:00
Riccardo Spagni
8005a0c7a1
Merge pull request #269
641d824 Keep memory pool consistent when stuck tx removed (warptangent)
b76857f Add mempool output to daemon via command and RPC (warptangent)
2015-05-06 08:09:31 +02:00
Thomas Winget
385d7c0495
Sort txs by per-kb-fee for miners 2015-04-30 01:02:12 -04:00
Thomas Winget
1b2614ba83
When removing 'stuck' transactions, don't ignore the first tx in the pool 2015-04-30 00:23:00 -04:00
warptangent
641d824f37
Keep memory pool consistent when stuck tx removed
When a stuck tx is removed from memory pool, first remove the associated
spent key images.
2015-04-23 07:04:36 -07:00
warptangent
b76857f9d9
Add mempool output to daemon via command and RPC
This is for the "print_pool" command and "get_transaction_pool" RPC
method.

Add mempool's spent key images to the results.
2015-04-23 07:04:36 -07:00
Thomas Winget
2717883dba
DNS Checkpoint updating-related fixes/changes
Only one thread will be doing the updating.

Two valid responses must match, and the first two that match will be
used.
2015-04-22 04:36:39 -04:00
Thomas Winget
ae08be5394
Disable DNS checkpoint updating on testnet 2015-04-08 18:07:46 -04:00
Thomas Winget
a8bc7182ea
Merge BlockchainDB into upstream 2015-04-07 17:56:18 -04:00
Thomas Winget
9519526224
Only compile BerkeleyDB as an option in non-static 2015-04-07 15:02:20 -04:00
Javier Smooth
83ddc942c1 handle unlikely rounding up after sqrt 2015-04-05 04:39:09 -07:00
Javier Smooth
f2e8348be0 triangular distribution to choose recent outputs more often for mixins 2015-04-05 04:01:00 -07:00
rfree2monero
c511abf005 remerged; commands JSON. logging upgrade. doxygen 2015-04-01 19:00:45 +02:00
rfree2monero
3cbdf198f1 Merge remote-tracking branch 'monero-official/master' into network-1.6-work1 2015-04-01 18:24:45 +02:00
Thomas Winget
94cb295db4
Merge upstream into blockchain 2015-03-29 09:58:18 -04:00
Riccardo Spagni
65d6d36449
Merge pull request #244
e6740ee Enforce DNSSEC for checkpoint updates (Thomas Winget)
dbf46a7 DNSSEC added (hardcoded key) (Thomas Winget)
2015-03-26 13:50:06 +02:00
Riccardo Spagni
c1187fabcf
Merge pull request #242
b43716c Do store transaction's blob size in transaction_chain_entry (Sergey Kazenyuk)
3be518f Use single get_transaction_hash to get both id and blob size (Sergey Kazenyuk)
2015-03-26 13:47:36 +02:00
Thomas Winget
7b14d4a17f
Steps toward multiple dbs available -- working
There will need to be some more refactoring for these changes to be
considered complete/correct, but for now it's working.

New daemon CLI argument "--db-type"; works for LMDB and BerkeleyDB.

A good deal of refactoring is also present in this commit, namely
Blockchain no longer instantiates BlockchainDB, but rather is passed a
pointer to an already-instantiated BlockchainDB on init().
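For illustration, selecting a backend might look like the following. This is a sketch only: the bitmonerod binary name and the lmdb/berkeley type strings are assumptions; only the --db-type argument itself comes from this change.

$ bitmonerod --db-type=lmdb      # use the LMDB backend
$ bitmonerod --db-type=berkeley  # use the BerkeleyDB backend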
2015-03-25 12:09:44 -04:00
Thomas Winget
5c0bc0050c
Merge upstream updates into blockchain branch 2015-03-25 05:56:36 -04:00
Thomas Winget
e6740ee103
Enforce DNSSEC for checkpoint updates 2015-03-24 06:59:38 -04:00
Thomas Winget
8855a32044
Merge upstream to daemonize changes
Preparation for PR
2015-03-24 02:47:15 -04:00
warptangent
4bedd68d2c
Update Blockchain::get_db() to return reference instead of pointer
Where this method is used, a BlockchainDB object is always expected, so
a pointer is unnecessary and less safe.
2015-03-22 15:45:36 -07:00
warptangent
275cbd4348
Add support for database open with flags
Add support to:
  - BlockchainDB, BlockchainLMDB
  - blockchain_import utility to open LMDB database with one or more
    LMDB flags.

Sample use:
  $ blockchain_import --database lmdb#nosync
  $ blockchain_import --database lmdb#nosync,nometasync
2015-03-16 00:26:59 -07:00
warptangent
ca75b4789c
Blockchain: add get_db() accessor, needed for blockchain_import
This handling may be changed in the future.
2015-03-15 13:22:52 -07:00
Sergey Kazenyuk
b43716c756 Do store transaction's blob size in transaction_chain_entry 2015-03-15 04:35:34 +03:00
Sergey Kazenyuk
3be518ff40 Use single get_transaction_hash to get both id and blob size 2015-03-15 04:33:34 +03:00
Thomas Winget
eee3ee7073
BlockchainDB implementations have names now
In order to make things more general, BlockchainDB now has get_db_name()
which should return a string with the "name" of that type of db.
This "name" will be the subfolder name that holds that db type's files
within the monero folder.

Small bugfix: blockchain_converter was not correctly appending this in
the prior hard-coded-string implementation of the subfolder data
directory concept.
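For illustration, with the LMDB backend the database files would then live under a db-named subfolder of the data directory, roughly as below. This is a sketch only: the ~/.bitmonero path and the data.mdb/lock.mdb file names are assumptions; the per-backend subfolder naming is what this change introduces.

$ ls ~/.bitmonero/lmdb
data.mdb  lock.mdb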
2015-03-13 21:39:27 -04:00
Thomas Winget
5eab480cb1
Moved BlockchainDB into its own src/ subfolder
Ostensibly janitorial work, but should be more relevant later down the line.
Things that depend on core cryptonote things (i.e. cryptonote_core) don't
necessarily depend on BlockchainDB and thus have no need to have BlockchainDB
baked in with them.
2015-03-06 15:20:45 -05:00