Commit Graph

29 Commits

Author SHA1 Message Date
Howard Chu
c22d22e2db
Cleanup test impact of adding safesyncmode() method 2017-08-22 15:11:09 +01:00
Riccardo Spagni
4466b6d1b0
Merge pull request #2303
5a283078 cryptonote_protocol: large block sync size before v4 (moneromooo-monero)
7b747607 cryptonote_protocol: kick idle synchronizing peers (moneromooo-monero)
2017-08-17 21:39:44 +02:00
moneromooo-monero
5a283078ec
cryptonote_protocol: large block sync size before v4 2017-08-17 13:11:52 +01:00
moneromooo-monero
827afcb7ea
protocol: pass blockchain cumulative difficulty when syncing
Not used yet.
2017-08-15 21:03:37 +01:00
moneromooo-monero
a1891ebea9
tests: fix tests build
Add get_fork_version and add_ideal_fork_version to core so
cryptonote_protocol does not need to depend on the Blockchain
class directly, as it is not in its dependencies, and add
those to the fake core classes in the tests too.
2017-08-10 11:12:56 +01:00
moneromooo-monero
5be43fcdba
cryptonote_protocol_handler: sync speedup
A block queue is now placed between block download and
block processing. Blocks are now requested only from one
peer (unless starved).

Includes a new sync_info command.
2017-08-07 09:33:04 +01:00
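
A minimal sketch of the block-queue idea described above, assuming illustrative names (block_span, block_queue) rather than the actual cryptonote_protocol_handler types: a queue decouples the download side from block processing, and spans are requested from a single peer unless the queue is starved.

    // Sketch only: a mutex-guarded queue between a downloader thread (push)
    // and a processing thread (pop); "starved" means nothing is queued.
    #include <condition_variable>
    #include <cstdint>
    #include <deque>
    #include <mutex>
    #include <string>
    #include <vector>

    struct block_span {
      uint64_t start_height;                 // height of the first block in the span
      std::vector<std::string> block_blobs;  // raw block blobs (placeholder type)
      std::string peer_id;                   // peer the span was downloaded from
    };

    class block_queue {
    public:
      void push(block_span span) {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_spans.push_back(std::move(span));
        m_cv.notify_one();
      }

      // Blocks until a span is available, then hands it to the processing thread.
      block_span pop() {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_cv.wait(lock, [this] { return !m_spans.empty(); });
        block_span span = std::move(m_spans.front());
        m_spans.pop_front();
        return span;
      }

      // When starved, the downloader may request blocks from additional peers.
      bool starved() const {
        std::lock_guard<std::mutex> lock(m_mutex);
        return m_spans.empty();
      }

    private:
      mutable std::mutex m_mutex;
      std::condition_variable m_cv;
      std::deque<block_span> m_spans;
    };
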
moneromooo-monero
b52abd1370
Move txpool to the database
Integration could go further (i.e., return_tx_to_pool calls should
not be needed anymore, possibly other things).

poolstate.bin is now obsolete.
2017-05-25 22:23:37 +01:00
Riccardo Spagni
c3599fa7b9
update copyright year, fix occasional lack of newline at line end 2017-02-21 19:38:18 +02:00
moneromooo-monero
0288310e3b
blockchain_db: add "raw" blobdata getters for block and transaction
This speeds up operations such as serving blocks to syncing peers
2017-02-13 21:11:37 +00:00
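
A minimal sketch of why a raw blob getter helps when serving blocks to syncing peers: the stored bytes are returned as-is, skipping a deserialize/re-serialize round trip. The names below (block_store_sketch, a toy map-backed store) are illustrative, not the actual BlockchainDB interface.

    // Sketch only: a relay path can forward the stored blob untouched.
    #include <iostream>
    #include <map>
    #include <stdexcept>
    #include <string>

    using blobdata = std::string;  // raw serialized bytes as stored in the DB

    class block_store_sketch {
    public:
      void put(const std::string &hash, blobdata blob) {
        m_blobs[hash] = std::move(blob);
      }

      // "Raw" getter: hands back the stored bytes without parsing them.
      const blobdata &get_block_blob(const std::string &hash) const {
        auto it = m_blobs.find(hash);
        if (it == m_blobs.end())
          throw std::out_of_range("unknown block hash");
        return it->second;
      }

    private:
      std::map<std::string, blobdata> m_blobs;
    };

    int main() {
      block_store_sketch db;
      db.put("deadbeef", "serialized-block-bytes");
      // A sync server can send the blob as-is, with no deserialization step.
      std::cout << db.get_block_blob("deadbeef").size() << " bytes\n";
    }
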
moneromooo-monero
9faef1f83a
cryptonote_protocol: misc fluffy block fixes
- fix wrong block being used when a new block is received between
  a node relaying a fluffy block and sending a new fluffy block
  with txes a peer did not have
- fix a never-ending ping pong requesting the same missing txids
  when a new block is received in the meantime, causing the top
  block to not be the one we need
- send the original fluffy block message block height when sending
  a new fluffy block, not the current top height, which might
  have been updated since
- avoid sending back the whole block blob when asking for txes,
  send only the hash instead
- plus misc cleanup and additional debugging logs
2017-02-12 12:33:45 +00:00
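
A minimal sketch of the last point above, assuming illustrative field names rather than the actual wire format: the request for missing transactions carries the block hash and the original message's height instead of echoing the whole block blob back.

    // Sketch only: identify the block by hash when asking for its missing txes.
    #include <cstdint>
    #include <string>
    #include <vector>

    struct fluffy_missing_tx_request {
      std::string block_hash;                    // identifies the block, instead of resending its blob
      uint64_t current_blockchain_height;        // height from the original fluffy block message
      std::vector<uint64_t> missing_tx_indices;  // which txes the peer still needs
    };
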
kenshi84
8027ce0c75 extract some basic code from libcryptonote_core into libcryptonote_basic 2017-02-08 22:45:15 +09:00
moneromooo-monero
81c384e408
fix do_not_relay not preventing relaying on a timer
Also print its value when printing pool
2017-01-14 13:07:05 +00:00
moneromooo-monero
2078cb6f2f
tests: fix tests builds after fluffy blocks merge 2016-11-11 18:17:16 +00:00
moneromooo-monero
1e163666f3
core: notify the txpool when transactions are relayed 2016-10-23 00:32:49 +01:00
moneromooo-monero
baa3e80140
tests: fix build after addition of cryptonote_core::get_block_sync_size 2016-10-01 10:09:51 +01:00
moneromooo-monero
9e82b694da
remove original Cryptonote blockchain_storage blockchain format 2016-08-28 21:27:32 +01:00
moneromooo-monero
f7301c3563
Revert "Print stack trace upon exceptions"
Ain't nobody got time for link/cmake skullduggery.

This reverts commit fff238ec94.
2016-03-21 10:12:23 +00:00
moneromooo-monero
fff238ec94
Print stack trace upon exceptions
Useful for debugging users' logs
2016-03-19 21:48:36 +00:00
Riccardo Spagni
de03926850
updated copyright year 2015-12-31 08:39:56 +02:00
moneromooo-monero
217792351d
Tone down L0 logs a bit during daemon sync 2015-12-14 00:36:37 +00:00
moneromooo-monero
932994c0cb
Relay transactions when they linger too long in the pool
The last relayed time of a transaction is maintained, and
transactions will be relayed again if they are still in the
pool after a certain amount of time, which increases with
the transaction's age. All such transactions are resent,
whether or not they originated on the local node.
2015-11-21 00:56:21 +00:00
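
A minimal sketch of the re-relay rule described above, with made-up constants: keep the last relay time per pool transaction and relay it again once it has sat unrelayed for longer than a delay that grows with the transaction's age.

    // Sketch only: the delay before re-relaying grows with the tx's age in the pool.
    #include <cstdint>
    #include <vector>

    struct pool_tx {
      uint64_t receive_time;       // when the tx entered the pool (seconds)
      uint64_t last_relayed_time;  // when it was last broadcast (seconds)
    };

    // Illustrative backoff: 5 minutes, doubling per hour of age, capped at one day.
    inline uint64_t relay_backoff(uint64_t age_seconds) {
      uint64_t delay = 300;
      for (uint64_t buckets = age_seconds / 3600; buckets > 0 && delay < 86400; --buckets)
        delay *= 2;
      return delay;
    }

    // Collect the transactions due for another relay at time `now`.
    std::vector<const pool_tx *> txs_to_relay(const std::vector<pool_tx> &pool, uint64_t now) {
      std::vector<const pool_tx *> due;
      for (const pool_tx &tx : pool) {
        const uint64_t age = now - tx.receive_time;
        if (now - tx.last_relayed_time >= relay_backoff(age))
          due.push_back(&tx);  // resent whether or not it originated locally
      }
      return due;
    }
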
NoodleDoodleNoodleDoodleNoodleDoodleNoo
e5d2680094 ** CHANGES ARE EXPERIMENTAL (FOR TESTING ONLY)
Blockchain:
1. Optim: Multi-thread long-hash computation when encountering groups of blocks.
2. Optim: Cache verified txs and return result from cache instead of re-checking whenever possible.
3. Optim: Preload output-keys when encountering groups of blocks. Sort by amount and global-index before bulk querying database and multi-thread when possible.
4. Optim: Disable double spend check on block verification, double spend is already detected when trying to add blocks.
5. Optim: Multi-thread signature computation whenever possible.
6. Patch: Disable locking (recursive mutex) on functions called from check_tx_inputs, which causes slowdowns (only seems to happen on ubuntu/VMs??? Reason: TBD)
7. Optim: Removed looped full-tx hash computation when retrieving transactions from pool (???).
8. Optim: Cache difficulty/timestamps (735 blocks) for next-difficulty calculations so that only 2 db reads are needed when a new block arrives (instead of 1470 reads).

Berkeley-DB:
1. Fix: 32-bit data errors causing wrong output global indices and failure to send blocks to peers (etc).
2. Fix: Unable to pop blocks on reorganize due to transaction errors.
3. Patch: Large number of transaction aborts when running multi-threaded bulk queries.
4. Patch: Insufficient locks error when running full sync.
5. Patch: Incorrect db stats when returning from an immediate exit from "pop block" operation.
6. Optim: Add bulk queries to get output global indices.
7. Optim: Modified output_keys table to store public_key+unlock_time+height for single transaction lookup (vs 3)
8. Optim: Used output_keys table to retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key
9. Optim: Added thread-safe buffers used when multi-threading bulk queries.
10. Optim: Added support for nosync/write_nosync options for improved performance (*see --db-sync-mode option for details)
11. Mod: Added checkpoint thread and auto-remove-logs option.
12. *Now usable on 32-bit systems like RPI2.

LMDB:
1. Optim: Added custom comparison for 256-bit key tables (minor speed-up, TBD: get actual effect)
2. Optim: Modified output_keys table to store public_key+unlock_time+height for single transaction lookup (vs 3)
3. Optim: Used output_keys table to retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key
4. Optim: Added support for sync/writemap options for improved performance (*see --db-sync-mode option for details)
5. Mod: Auto resize to +1GB instead of multiplier x1.5

ETC:
1. Minor optimizations for slow-hash for ARM (RPI2). Incomplete.
2. Fix: 32-bit saturation bug when computing next difficulty on large blocks.

[PENDING ISSUES]
1. Berkeley db has a very slow "pop-block" operation. This is very noticeable on the RPI2, as it sometimes takes > 10 MINUTES to pop a block during reorganization.
   This does not happen very often, however; most reorgs seem to take a few seconds, but it possibly depends on the number of outputs present. TBD.
2. Berkeley db, possible bug "unable to allocate memory". TBD.

[NEW OPTIONS] (*Currently all enabled for testing purposes)
1. --fast-block-sync arg=[0:1] (default: 1)
	a. 0 = Compute long hash per block (may take a while depending on CPU)
	b. 1 = Skip long-hash and verify blocks based on embedded known good block hashes (faster, minimal CPU dependence)
2. --db-sync-mode arg=[[safe|fast|fastest]:[sync|async]:[nblocks_per_sync]] (default: fastest:async:1000)
	a. safe = fdatasync/fsync (or equivalent) per stored block. Very slow, but safest option to protect against power-out/crash conditions.
	b. fast/fastest = Enables asynchronous fdatasync/fsync (or equivalent). Useful for battery operated devices or STABLE systems with UPS and/or systems with battery backed write cache/solid state cache.
	Fast    - Write meta-data but defer data flush.
	Fastest - Defer meta-data and data flush.
	Sync    - Flush data after nblocks_per_sync and wait.
	Async   - Flush data after nblocks_per_sync but do not wait for the operation to finish.
3. --prep-blocks-threads arg=[n] (default: 4 or system max threads, whichever is lower)
	Max number of threads to use when computing long-hash in groups.
4. --show-time-stats arg=[0:1] (default: 1)
	Show benchmark related time stats.
5. --db-auto-remove-logs arg=[0:1] (default: 1)
	For berkeley-db only. Auto remove logs if enabled.

**Note: lmdb and berkeley-db have changes to the tables and are not compatible with the official git head version.
	At the moment, you need a full resync to use this optimized version.

[PERFORMANCE COMPARISON]
**Some figures are approximations only.
Using a baseline machine of an i7-2600K+SSD (with full pow computation):
1. The optimized lmdb/blockchain core can process blocks up to 585K in ~1.25 hours + download time, so it usually takes 2.5 hours to sync the full chain.
2. The current head with memory can process blocks up to 585K in ~4.2 hours + download time, so it usually takes 5.5 hours to sync the full chain.
3. The current head with lmdb can process blocks up to 585K in ~32 hours + download time and usually takes 36 hours to sync the full chain.

Average processing times (with full pow computation):
lmdb-optimized:
1. tx_ave = 2.5 ms / tx
2. block_ave = 5.87 ms / block
memory-official-repo:
1. tx_ave = 8.85 ms / tx
2. block_ave = 19.68 ms / block
lmdb-official-repo (0f4a036437):
1. tx_ave = 47.8 ms / tx
2. block_ave = 64.2 ms / block

**Note: The following data denotes processing times only (does not include p2p download time)
lmdb-optimized processing times (with full pow computation):
1. Desktop,  Quad-core / 8-threads 2600k  (8Mb) - 1.25 hours processing time (--db-sync-mode=fastest:async:1000).
2. Laptop,   Dual-core / 4-threads U4200  (3Mb) - 4.90 hours processing time (--db-sync-mode=fastest:async:1000).
3. Embedded, Quad-core / 4-threads Z3735F (2x1Mb) - 12.0 hours processing time (--db-sync-mode=fastest:async:1000).

lmdb-optimized processing times (with per-block-checkpoint)
1. Desktop,  Quad-core / 8-threads 2600k  (8Mb) - 10 minutes processing time (--db-sync-mode=fastest:async:1000).

berkeley-db optimized processing times (with full pow computation)
1. Desktop, Quad-core / 8-threads 2600k  (8Mb) - 1.8 hours processing time (--db-sync-mode=fastest:async:1000).
2. RPI2. Improved from an estimated 3 months(???) to 2.5 days (*Need 2AMP supply + Clock:1Ghz + [usb+ssd] to achieve this speed) (--db-sync-mode=fastest:async:1000).

berkeley-db optimized processing times (with per-block-checkpoint)
1. RPI2. 12-15 hours (*Need 2AMP supply + Clock:1Ghz + [usb+ssd] to achieve this speed) (--db-sync-mode=fastest:async:1000).
2015-07-15 23:20:16 -07:00
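
A minimal sketch of parsing a --db-sync-mode value in the form documented above, [safe|fast|fastest]:[sync|async]:[nblocks_per_sync] (e.g. fastest:async:1000). It illustrates the option format only; the daemon's actual parser may differ.

    // Sketch only: split the colon-separated value into its three fields.
    #include <cstdint>
    #include <iostream>
    #include <sstream>
    #include <stdexcept>
    #include <string>

    struct db_sync_mode {
      std::string durability = "fastest";  // safe | fast | fastest
      std::string flush = "async";         // sync | async
      uint64_t nblocks_per_sync = 1000;    // flush interval in blocks
    };

    db_sync_mode parse_db_sync_mode(const std::string &value) {
      db_sync_mode mode;
      std::istringstream in(value);
      std::string field;
      if (std::getline(in, field, ':')) {
        if (field != "safe" && field != "fast" && field != "fastest")
          throw std::invalid_argument("bad durability: " + field);
        mode.durability = field;
      }
      if (std::getline(in, field, ':')) {
        if (field != "sync" && field != "async")
          throw std::invalid_argument("bad flush mode: " + field);
        mode.flush = field;
      }
      if (std::getline(in, field, ':'))
        mode.nblocks_per_sync = std::stoull(field);
      return mode;
    }

    int main() {
      const db_sync_mode m = parse_db_sync_mode("fastest:async:1000");
      std::cout << m.durability << " / " << m.flush << " / " << m.nblocks_per_sync << "\n";
    }
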
rfree2monero
32c19c6c3d
[fix] log level change. compilation: dns, tests
the old unbound #warning no longer blocks compilation
unit tests build fine, even though the RPC/P2P network type is required again
2015-04-10 16:54:21 +02:00
rfree2monero
0198ffb220 2014 network limit 1.3 fix log/path/data +utils
+toc -doc -drmonero

Fixed the Windows path, improved logging and data logging
(for graphs), fixed some locks and added more checks.

There is still a locking error,
not introduced by my patches but present in the master version
(locking of the map/list of peers).
2015-02-24 20:12:56 +01:00
rfree2monero
eabb519605 2014 network limit 1.0a +utils +toc -doc -drmonero
commands and options for network limiting
works very well, e.g. for 50 KiB/sec up and down
ToS (QoS) flag
peer number limit
TODO: some spikes in ingress/download remain
TODO: problems when up and down limits differ
added "otshell utils" - simple logging (with colors, text file channels)
2015-02-20 22:13:00 +01:00
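
A minimal sketch of the kind of bandwidth cap described above (e.g. 50 KiB/sec up and down), using a token bucket; this is an illustration, not the code added in this commit.

    // Sketch only: a token bucket tells the caller how many bytes may be sent now.
    #include <algorithm>
    #include <chrono>
    #include <cstdint>

    class token_bucket {
    public:
      explicit token_bucket(double bytes_per_second)
          : m_rate(bytes_per_second), m_capacity(bytes_per_second),
            m_tokens(bytes_per_second),
            m_last(std::chrono::steady_clock::now()) {}

      // Returns how many of `wanted` bytes may be sent right now.
      uint64_t take(uint64_t wanted) {
        refill();
        const uint64_t granted = std::min<uint64_t>(wanted, static_cast<uint64_t>(m_tokens));
        m_tokens -= static_cast<double>(granted);
        return granted;
      }

    private:
      void refill() {
        const auto now = std::chrono::steady_clock::now();
        const double elapsed = std::chrono::duration<double>(now - m_last).count();
        m_last = now;
        m_tokens = std::min(m_capacity, m_tokens + elapsed * m_rate);
      }

      double m_rate;      // sustained rate in bytes per second
      double m_capacity;  // burst size, here one second's worth of traffic
      double m_tokens;    // bytes currently available to send
      std::chrono::steady_clock::time_point m_last;
    };

    // Usage: token_bucket up(50 * 1024);  // 50 KiB/sec upload cap
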
Riccardo Spagni
f4b69d553a
year updated in license 2015-01-02 18:52:46 +02:00
fluffypony
6fc995fe5d License updated to BSD 3-clause 2014-07-23 15:03:52 +02:00
Neozaru
7fea5645e2 'getinfo' daemon HTTP-RPC returns 'target_height' for progress estimations 2014-06-04 22:50:13 +02:00
Antonio Juarez
296ae46ed8 moved all stuff to github 2014-03-03 22:07:58 +00:00