// Copyright (c) 2014-2015, The Monero Project
//
// All rights reserved.
//
// Redistribution and use in source and binary forms, with or without modification, are
// permitted provided that the following conditions are met:
//
// 1. Redistributions of source code must retain the above copyright notice, this list of
// conditions and the following disclaimer.
//
// 2. Redistributions in binary form must reproduce the above copyright notice, this list
// of conditions and the following disclaimer in the documentation and/or other
// materials provided with the distribution.
//
// 3. Neither the name of the copyright holder nor the names of its contributors may be
// used to endorse or promote products derived from this software without specific
// prior written permission.
//
// THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY
// EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF
// MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL
// THE COPYRIGHT HOLDER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
// SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
// PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
// INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT,
// STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF
// THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
//
// Parts of this file are originally copyright (c) 2012-2013 The Cryptonote developers
#pragma once

#include <algorithm>

#include <boost/date_time/posix_time/posix_time.hpp>
#include <boost/thread/thread.hpp>
#include <atomic>

#include "version.h"
#include "string_tools.h"
#include "common/util.h"
#include "common/dns_utils.h"
#include "net/net_helper.h"
#include "math_helper.h"
#include "p2p_protocol_defs.h"
#include "net_peerlist_boost_serialization.h"
#include "net/local_ip.h"
#include "crypto/crypto.h"
#include "storages/levin_abstract_invoke2.h"
#include "data_logger.hpp"
#include "daemon/command_line_args.h"

// We have to look for miniupnpc headers in different places, depending on
// whether the bundled (static) copy or an external installation is used.
#ifdef UPNP_STATIC
#include <miniupnpc/miniupnpc.h>
#include <miniupnpc/upnpcommands.h>
#include <miniupnpc/upnperrors.h>
#else
#include "miniupnpc.h"
#include "upnpcommands.h"
#include "upnperrors.h"
#endif

#define NET_MAKE_IP(b1,b2,b3,b4) ((LPARAM)(((DWORD)(b1)<<24)+((DWORD)(b2)<<16)+((DWORD)(b3)<<8)+((DWORD)(b4))))
namespace nodetool
{
namespace
{
/*
** CHANGES ARE EXPERIMENTAL (FOR TESTING ONLY)
Blockchain:
1. Optim: Multi-thread long-hash computation when encountering groups of blocks.
2. Optim: Cache verified txs and return result from cache instead of re-checking whenever possible.
3. Optim: Preload output-keys when encountering groups of blocks. Sort by amount and global-index before bulk querying database and multi-thread when possible.
4. Optim: Disable double spend check on block verification, double spend is already detected when trying to add blocks.
5. Optim: Multi-thread signature computation whenever possible.
6. Patch: Disable locking (recursive mutex) on called functions from check_tx_inputs which causes slowdowns (only seems to happen on ubuntu/VMs??? Reason: TBD)
7. Optim: Removed looped full-tx hash computation when retrieving transactions from pool (???).
8. Optim: Cache difficulty/timestamps (735 blocks) for next-difficulty calculations so that only 2 db reads per new block are needed when a new block arrives (instead of 1470 reads).
Berkeley-DB:
1. Fix: 32-bit data errors causing wrong output global indices and failure to send blocks to peers (etc).
2. Fix: Unable to pop blocks on reorganize due to transaction errors.
3. Patch: Large number of transaction aborts when running multi-threaded bulk queries.
4. Patch: Insufficient locks error when running full sync.
5. Patch: Incorrect db stats when returning from an immediate exit from "pop block" operation.
6. Optim: Add bulk queries to get output global indices.
7. Optim: Modified output_keys table to store public_key+unlock_time+height for single transaction lookup (vs 3).
8. Optim: Used output_keys table to retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key.
9. Optim: Added thread-safe buffers used when multi-threading bulk queries.
10. Optim: Added support for nosync/write_nosync options for improved performance (*see --db-sync-mode option for details).
11. Mod: Added checkpoint thread and auto-remove-logs option.
12. *Now usable on 32-bit systems like RPI2.
LMDB:
1. Optim: Added custom comparison for 256-bit key tables (minor speed-up, TBD: get actual effect).
2. Optim: Modified output_keys table to store public_key+unlock_time+height for single transaction lookup (vs 3).
3. Optim: Used output_keys table to retrieve public_keys instead of going through output_amounts->output_txs+output_indices->txs->output:public_key.
4. Optim: Added support for sync/writemap options for improved performance (*see --db-sync-mode option for details).
5. Mod: Auto resize to +1GB instead of multiplier x1.5.
ETC:
1. Minor optimizations for slow-hash for ARM (RPI2). Incomplete.
2. Fix: 32-bit saturation bug when computing next difficulty on large blocks.
[PENDING ISSUES]
1. Berkeley DB has a very slow "pop-block" operation. This is very noticeable on the RPI2 as it sometimes takes > 10 MINUTES to pop a block during reorganization.
   This does not happen very often; most reorgs seem to take a few seconds, but it possibly depends on the number of outputs present. TBD.
2. Berkeley DB, possible bug "unable to allocate memory". TBD.
[NEW OPTIONS] (*Currently all enabled for testing purposes)
1. --fast-block-sync arg=[0:1] (default: 1)
   a. 0 = Compute long hash per block (may take a while depending on CPU)
   b. 1 = Skip long-hash and verify blocks based on embedded known good block hashes (faster, minimal CPU dependence)
2. --db-sync-mode arg=[[safe|fast|fastest]:[sync|async]:[nblocks_per_sync]] (default: fastest:async:1000)
   a. safe = fdatasync/fsync (or equivalent) per stored block. Very slow, but the safest option to protect against power-out/crash conditions.
   b. fast/fastest = Enables asynchronous fdatasync/fsync (or equivalent). Useful for battery-operated devices or STABLE systems with UPS and/or systems with battery-backed write cache/solid state cache.
      Fast    - Write meta-data but defer data flush.
      Fastest - Defer meta-data and data flush.
      Sync    - Flush data after nblocks_per_sync and wait.
      Async   - Flush data after nblocks_per_sync but do not wait for the operation to finish.
3. --prep-blocks-threads arg=[n] (default: 4 or system max threads, whichever is lower)
   Max number of threads to use when computing long-hash in groups.
4. --show-time-stats arg=[0:1] (default: 1)
   Show benchmark-related time stats.
5. --db-auto-remove-logs arg=[0:1] (default: 1)
   For Berkeley DB only. Auto remove logs if enabled.
**Note: lmdb and berkeley-db have changes to the tables and are not compatible with the official git head version.
At the moment, you need a full resync to use this optimized version.
[PERFORMANCE COMPARISON]
**Some figures are approximations only.
Using a baseline machine of an i7-2600K+SSD (with full PoW computation):
1. The optimized lmdb/blockchain core can process blocks up to 585K in ~1.25 hours + download time, so it usually takes 2.5 hours to sync the full chain.
2. The current head with the in-memory db can process blocks up to 585K in ~4.2 hours + download time, so it usually takes 5.5 hours to sync the full chain.
3. The current head with lmdb can process blocks up to 585K in ~32 hours + download time and usually takes 36 hours to sync the full chain.
Average processing times (with full PoW computation):
lmdb-optimized:
1. tx_ave = 2.5 ms / tx
2. block_ave = 5.87 ms / block
memory-official-repo:
1. tx_ave = 8.85 ms / tx
2. block_ave = 19.68 ms / block
lmdb-official-repo (0f4a036437fd41a5498ee5e74e2422ea6177aa3e):
1. tx_ave = 47.8 ms / tx
2. block_ave = 64.2 ms / block
**Note: The following data denotes processing times only (does not include p2p download time).
lmdb-optimized processing times (with full PoW computation):
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 1.25 hours processing time (--db-sync-mode=fastest:async:1000).
2. Laptop, Dual-core / 4-threads U4200 (3Mb) - 4.90 hours processing time (--db-sync-mode=fastest:async:1000).
3. Embedded, Quad-core / 4-threads Z3735F (2x1Mb) - 12.0 hours processing time (--db-sync-mode=fastest:async:1000).
lmdb-optimized processing times (with per-block-checkpoint):
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 10 minutes processing time (--db-sync-mode=fastest:async:1000).
berkeley-db optimized processing times (with full PoW computation):
1. Desktop, Quad-core / 8-threads 2600k (8Mb) - 1.8 hours processing time (--db-sync-mode=fastest:async:1000).
2. RPI2. Improved from an estimated 3 months(???) to 2.5 days (*Needs a 2A supply + clock:1GHz + [usb+ssd] to achieve this speed) (--db-sync-mode=fastest:async:1000).
berkeley-db optimized processing times (with per-block-checkpoint):
1. RPI2. 12-15 hours (*Needs a 2A supply + clock:1GHz + [usb+ssd] to achieve this speed) (--db-sync-mode=fastest:async:1000).
*/
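
// For illustration only (all flags and defaults are documented in the note above): a typical
// fast-sync test run would combine --fast-block-sync=1 --db-sync-mode=fastest:async:1000
// --prep-blocks-threads=4 on the daemon command line.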
const int64_t default_limit_up = 2048;
const int64_t default_limit_down = 8192;

const command_line::arg_descriptor<std::string> arg_p2p_bind_ip = {"p2p-bind-ip", "Interface for p2p network protocol", "0.0.0.0"};
const command_line::arg_descriptor<std::string> arg_p2p_bind_port = {
"p2p-bind-port"
, "Port for p2p network protocol"
, std::to_string(config::P2P_DEFAULT_PORT)
};
const command_line::arg_descriptor<std::string> arg_testnet_p2p_bind_port = {
"testnet-p2p-bind-port"
, "Port for testnet p2p network protocol"
, std::to_string(config::testnet::P2P_DEFAULT_PORT)
};
const command_line::arg_descriptor<uint32_t> arg_p2p_external_port = {"p2p-external-port", "External port for p2p network protocol (if port forwarding used with NAT)", 0};
const command_line::arg_descriptor<bool> arg_p2p_allow_local_ip = {"allow-local-ip", "Allow local ip to be added to the peer list, mostly for debug purposes"};
const command_line::arg_descriptor<std::vector<std::string> > arg_p2p_add_peer = {"add-peer", "Manually add peer to local peerlist"};
const command_line::arg_descriptor<std::vector<std::string> > arg_p2p_add_priority_node = {"add-priority-node", "Specify list of peers to connect to and attempt to keep the connection open"};
const command_line::arg_descriptor<std::vector<std::string> > arg_p2p_add_exclusive_node = {"add-exclusive-node", "Specify list of peers to connect to only."
" If this option is given, the options add-priority-node and seed-node are ignored"};
const command_line::arg_descriptor<std::vector<std::string> > arg_p2p_seed_node = {"seed-node", "Connect to a node to retrieve peer addresses, and disconnect"};
const command_line::arg_descriptor<bool> arg_p2p_hide_my_port = {"hide-my-port", "Do not announce yourself as peerlist candidate", false, true};

const command_line::arg_descriptor<bool> arg_no_igd = {"no-igd", "Disable UPnP port mapping"};
const command_line::arg_descriptor<int64_t> arg_out_peers = {"out-peers", "set max limit of out peers", -1};
const command_line::arg_descriptor<int> arg_tos_flag = {"tos-flag", "set TOS flag", -1};
const command_line::arg_descriptor<int64_t> arg_limit_rate_up = {"limit-rate-up", "set limit-rate-up [kB/s]", -1};
const command_line::arg_descriptor<int64_t> arg_limit_rate_down = {"limit-rate-down", "set limit-rate-down [kB/s]", -1};
const command_line::arg_descriptor<int64_t> arg_limit_rate = {"limit-rate", "set limit-rate [kB/s]", -1};
const command_line::arg_descriptor<bool> arg_save_graph = {"save-graph", "Save data for dr monero", false};
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::init_options(boost::program_options::options_description& desc)
{
command_line::add_arg(desc, arg_p2p_bind_ip);
command_line::add_arg(desc, arg_p2p_bind_port);
command_line::add_arg(desc, arg_testnet_p2p_bind_port);
command_line::add_arg(desc, arg_p2p_external_port);
command_line::add_arg(desc, arg_p2p_allow_local_ip);
command_line::add_arg(desc, arg_p2p_add_peer);
command_line::add_arg(desc, arg_p2p_add_priority_node);
command_line::add_arg(desc, arg_p2p_add_exclusive_node);
command_line::add_arg(desc, arg_p2p_seed_node);
command_line::add_arg(desc, arg_p2p_hide_my_port);
command_line::add_arg(desc, arg_no_igd);
command_line::add_arg(desc, arg_out_peers);
command_line::add_arg(desc, arg_tos_flag);
command_line::add_arg(desc, arg_limit_rate_up);
command_line::add_arg(desc, arg_limit_rate_down);
command_line::add_arg(desc, arg_limit_rate);
command_line::add_arg(desc, arg_save_graph);
}
//-----------------------------------------------------------------------------------
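// Loads previously serialized p2p state from the P2P_NET_DATA_FILENAME file in the config
// folder if present, otherwise generates a default config (random peer id), then applies the
// hard-coded network limits (handshake interval, packet size, timeouts, peerlist size).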
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::init_config()
{
//
TRY_ENTRY();
std::string state_file_path = m_config_folder + "/" + P2P_NET_DATA_FILENAME;
std::ifstream p2p_data;
p2p_data.open( state_file_path , std::ios_base::binary | std::ios_base::in);
if(!p2p_data.fail())
{
boost::archive::binary_iarchive a(p2p_data);
a >> *this;
}else
{
make_default_config();
}
//at this moment we have hardcoded config
m_config.m_net_config.handshake_interval = P2P_DEFAULT_HANDSHAKE_INTERVAL;
m_config.m_net_config.packet_max_size = P2P_DEFAULT_PACKET_MAX_SIZE; //20 MB limit
m_config.m_net_config.config_id = 0; // initial config
m_config.m_net_config.connection_timeout = P2P_DEFAULT_CONNECTION_TIMEOUT;
m_config.m_net_config.ping_connection_timeout = P2P_DEFAULT_PING_CONNECTION_TIMEOUT;
m_config.m_net_config.send_peerlist_sz = P2P_DEFAULT_PEERS_IN_HANDSHAKE;
m_first_connection_maker_call = true;
CATCH_ENTRY_L0("node_server::init_config", false);
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::for_each_connection(std::function<bool(typename t_payload_net_handler::connection_context&, peerid_type)> f)
{
m_net_server.get_config_object().foreach_connection([&](p2p_connection_context& cntx){
return f(cntx, cntx.peer_id);
});
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::make_default_config()
{
m_config.m_peer_id = crypto::rand<uint64_t>();
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::parse_peer_from_string(nodetool::net_address& pe, const std::string& node_addr)
{
return epee::string_tools::parse_peer_from_string(pe.ip, pe.port, node_addr);
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::handle_command_line(
const boost::program_options::variables_map& vm
, bool testnet
)
{
auto p2p_bind_arg = testnet ? arg_testnet_p2p_bind_port : arg_p2p_bind_port;

m_bind_ip = command_line::get_arg(vm, arg_p2p_bind_ip);
m_port = command_line::get_arg(vm, p2p_bind_arg);
m_external_port = command_line::get_arg(vm, arg_p2p_external_port);
m_allow_local_ip = command_line::get_arg(vm, arg_p2p_allow_local_ip);
m_no_igd = command_line::get_arg(vm, arg_no_igd);
if (command_line::has_arg(vm, arg_p2p_add_peer))
{
std::vector<std::string> perrs = command_line::get_arg(vm, arg_p2p_add_peer);
for(const std::string& pr_str: perrs)
{
nodetool::peerlist_entry pe = AUTO_VAL_INIT(pe);
pe.id = crypto::rand<uint64_t>();
bool r = parse_peer_from_string(pe.adr, pr_str);
CHECK_AND_ASSERT_MES(r, false, "Failed to parse address from string: " << pr_str);
m_command_line_peers.push_back(pe);
}
}
if(command_line::has_arg(vm, arg_save_graph))
{
set_save_graph(true);
}
if (command_line::has_arg(vm,arg_p2p_add_exclusive_node))
{
if (!parse_peers_and_add_to_container(vm, arg_p2p_add_exclusive_node, m_exclusive_peers))
return false;
}
if (command_line::has_arg(vm, arg_p2p_add_priority_node))
{
if (!parse_peers_and_add_to_container(vm, arg_p2p_add_priority_node, m_priority_peers))
return false;
}
if (command_line::has_arg(vm, arg_p2p_seed_node))
{
if (!parse_peers_and_add_to_container(vm, arg_p2p_seed_node, m_seed_nodes))
return false;
}
if(command_line::has_arg(vm, arg_p2p_hide_my_port))
m_hide_my_port = true;
if ( !set_max_out_peers(vm, command_line::get_arg(vm, arg_out_peers) ) )
return false;
if ( !set_tos_flag(vm, command_line::get_arg(vm, arg_tos_flag) ) )
return false;
if ( !set_rate_up_limit(vm, command_line::get_arg(vm, arg_limit_rate_up) ) )
return false;
if ( !set_rate_down_limit(vm, command_line::get_arg(vm, arg_limit_rate_down) ) )
return false;
if ( !set_rate_limit(vm, command_line::get_arg(vm, arg_limit_rate) ) )
return false;
return true;
}
//-----------------------------------------------------------------------------------
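// Splits "host:port", resolves the host via the system resolver and appends every IPv4
// result to seed_nodes; IPv6 results are skipped (not supported yet).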
inline void append_net_address(
std::vector<net_address> & seed_nodes
, std::string const & addr
)
{
using namespace boost::asio;
size_t pos = addr.find_last_of(':');
CHECK_AND_ASSERT_MES_NO_RET(std::string::npos != pos && addr.length() - 1 != pos && 0 != pos, "Failed to parse seed address from string: '" << addr << '\'');
std::string host = addr.substr(0, pos);
std::string port = addr.substr(pos + 1);
io_service io_srv;
ip::tcp::resolver resolver(io_srv);
ip::tcp::resolver::query query(host, port);
boost::system::error_code ec;
ip::tcp::resolver::iterator i = resolver.resolve(query, ec);
CHECK_AND_ASSERT_MES_NO_RET(!ec, "Failed to resolve host name '" << host << "': " << ec.message() << ':' << ec.value());
ip::tcp::resolver::iterator iend;
for (; i != iend; ++i)
{
ip::tcp::endpoint endpoint = *i;
if (endpoint.address().is_v4())
{
nodetool::net_address na;
na.ip = boost::asio::detail::socket_ops::host_to_network_long(endpoint.address().to_v4().to_ulong());
na.port = endpoint.port();
seed_nodes.push_back(na);
LOG_PRINT_L4("Added seed node: " << endpoint.address().to_v4().to_string(ec) << ':' << na.port);
}
else
{
LOG_PRINT_L2("IPv6 is not supported, skipping '" << host << "' -> " << endpoint.address().to_v6().to_string(ec));
}
}
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::init(const boost::program_options::variables_map& vm)
{
std::set<std::string> full_addrs;
bool testnet = command_line::get_arg(vm, daemon_args::arg_testnet_on);
if (testnet)
{
memcpy(&m_network_id, &::config::testnet::NETWORK_ID, 16);
full_addrs.insert("197.242.158.240:28080");
full_addrs.insert("107.152.130.98:28080");
full_addrs.insert("5.9.25.103:28080");
full_addrs.insert("5.9.55.70:28080");
}
else
{
memcpy(&m_network_id, &::config::NETWORK_ID, 16);
// for each hostname in the seed nodes list, attempt to DNS resolve and
// add the result addresses as seed nodes
// TODO: at some point add IPv6 support, but that won't be relevant
// for some time yet.
std::vector<std::vector<std::string>> dns_results;
dns_results.resize(m_seed_nodes_list.size());
std::list<boost::thread*> dns_threads;
uint64_t result_index = 0;
for (const std::string& addr_str : m_seed_nodes_list)
{
boost::thread* th = new boost::thread([=, &dns_results, &addr_str]
{
LOG_PRINT_L4("dns_threads[" << result_index << "] created for: " << addr_str)
// TODO: care about dnssec avail/valid
bool avail, valid;
std::vector<std::string> addr_list;
try
{
addr_list = tools::DNSResolver::instance().get_ipv4(addr_str, avail, valid);
LOG_PRINT_L4("dns_threads[" << result_index << "] DNS resolve done");
boost::this_thread::interruption_point();
}
catch(const boost::thread_interrupted&)
{
// thread interruption request
// even if we now have results, finish thread without setting
// result variables, which are now out of scope in main thread
LOG_PRINT_L4("dns_threads[" << result_index << "] interrupted");
return;
}
LOG_PRINT_L4("dns_threads[" << result_index << "] addr_str: " << addr_str << " number of results: " << addr_list.size());
dns_results[result_index] = addr_list;
});
dns_threads.push_back(th);
++result_index;
}
LOG_PRINT_L4("dns_threads created, now waiting for completion or timeout of " << CRYPTONOTE_DNS_TIMEOUT_MS << "ms");
boost::chrono::system_clock::time_point deadline = boost::chrono::system_clock::now() + boost::chrono::milliseconds(CRYPTONOTE_DNS_TIMEOUT_MS);
uint64_t i = 0;
for (boost::thread* th : dns_threads)
{
if (! th->try_join_until(deadline))
{
LOG_PRINT_L4("dns_threads[" << i << "] timed out, sending interrupt");
th->interrupt();
}
++i;
}
i = 0;
for (const auto& result : dns_results)
{
LOG_PRINT_L4("DNS lookup for " << m_seed_nodes_list[i] << ": " << result.size() << " results");
// if no results for node, thread's lookup likely timed out
if (result.size())
{
for (const auto& addr_string : result)
full_addrs.insert(addr_string + ":18080");
}
++i;
}
if (!full_addrs.size())
{
LOG_PRINT_L0("DNS seed node lookup either timed out or failed, falling back to defaults");
full_addrs.insert("46.165.232.77:18080");
full_addrs.insert("63.141.254.186:18080");
full_addrs.insert("119.81.118.164:18080");
full_addrs.insert("60.191.33.112:18080");
full_addrs.insert("198.74.231.92:18080");
full_addrs.insert("5.9.55.70:18080");
full_addrs.insert("119.81.118.165:18080");
full_addrs.insert("202.112.0.100:18080");
full_addrs.insert("84.106.163.174:18080");
full_addrs.insert("178.206.94.87:18080");
full_addrs.insert("119.81.118.163:18080");
full_addrs.insert("95.37.217.253:18080");
full_addrs.insert("161.67.132.39:18080");
full_addrs.insert("119.81.48.114:18080");
full_addrs.insert("119.81.118.166:18080");
full_addrs.insert("93.120.240.209:18080");
full_addrs.insert("46.183.145.69:18080");
full_addrs.insert("108.170.123.66:18080");
full_addrs.insert("5.9.83.204:18080");
full_addrs.insert("104.130.19.193:18080");
full_addrs.insert("119.81.48.115:18080");
full_addrs.insert("80.71.13.36:18080");
}
}
for (const auto& full_addr : full_addrs)
{
LOG_PRINT_L2("Seed node: " << full_addr);
append_net_address(m_seed_nodes, full_addr);
}
LOG_PRINT_L1("Number of seed nodes: " << m_seed_nodes.size());
bool res = handle_command_line(vm, testnet);
CHECK_AND_ASSERT_MES(res, false, "Failed to handle command line");
auto config_arg = testnet ? command_line::arg_testnet_data_dir : command_line::arg_data_dir;
m_config_folder = command_line::get_arg(vm, config_arg);
res = init_config();
CHECK_AND_ASSERT_MES(res, false, "Failed to init config.");
res = m_peerlist.init(m_allow_local_ip);
CHECK_AND_ASSERT_MES(res, false, "Failed to init peerlist.");
for(auto& p: m_command_line_peers)
m_peerlist.append_with_peer_white(p);
//only set this if we are really sure that we have an externally visible ip
m_have_address = true;
m_ip_address = 0;
m_last_stat_request_time = 0;
//configure self
m_net_server.set_threads_prefix("P2P");
m_net_server.get_config_object().m_pcommands_handler = this;
m_net_server.get_config_object().m_invoke_timeout = P2P_DEFAULT_INVOKE_TIMEOUT;
//try to bind
LOG_PRINT_L0("Binding on " << m_bind_ip << ":" << m_port);
res = m_net_server.init_server(m_port, m_bind_ip);
CHECK_AND_ASSERT_MES(res, false, "Failed to bind server");
m_listenning_port = m_net_server.get_binded_port();
LOG_PRINT_GREEN("Net service bound to " << m_bind_ip << ":" << m_listenning_port, LOG_LEVEL_0);
if(m_external_port)
LOG_PRINT_L0("External port defined as " << m_external_port);
// Add UPnP port mapping
if(m_no_igd == false) {
LOG_PRINT_L0("Attempting to add IGD port mapping.");
int result;
UPNPDev* deviceList = upnpDiscover(1000, NULL, NULL, 0, 0, &result);
UPNPUrls urls;
IGDdatas igdData;
char lanAddress[64];
result = UPNP_GetValidIGD(deviceList, &urls, &igdData, lanAddress, sizeof lanAddress);
freeUPNPDevlist(deviceList);
if (result != 0) {
if (result == 1) {
std::ostringstream portString;
portString << m_listenning_port;
// Delete the port mapping before we create it, just in case we have dangling port mapping from the daemon not being shut down correctly
UPNP_DeletePortMapping(urls.controlURL, igdData.first.servicetype, portString.str().c_str(), "TCP", 0);
int portMappingResult;
portMappingResult = UPNP_AddPortMapping(urls.controlURL, igdData.first.servicetype, portString.str().c_str(), portString.str().c_str(), lanAddress, CRYPTONOTE_NAME, "TCP", 0, "0");
if (portMappingResult != 0) {
LOG_ERROR("UPNP_AddPortMapping failed, error: " << strupnperror(portMappingResult));
} else {
LOG_PRINT_GREEN("Added IGD port mapping.", LOG_LEVEL_0);
}
} else if (result == 2) {
LOG_PRINT_L0("IGD was found but reported as not connected.");
} else if (result == 3) {
LOG_PRINT_L0("UPnP device was found but not recognized as an IGD.");
} else {
LOG_ERROR("UPNP_GetValidIGD returned an unknown result code.");
}
FreeUPNPUrls(&urls);
} else {
LOG_PRINT_L0("No IGD was found.");
}
}
return res;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
typename node_server<t_payload_net_handler>::payload_net_handler& node_server<t_payload_net_handler>::get_payload_object()
{
return m_payload_handler;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::run()
{
// creating thread to log number of connections
mPeersLoggerThread.reset(new std::thread([&]()
{
_note("Thread monitor number of peers - start");
while (!is_closing)
{ // main loop of thread
//number_of_peers = m_net_server.get_config_object().get_connections_count();
unsigned int number_of_peers = 0;
m_net_server.get_config_object().foreach_connection([&](const p2p_connection_context& cntxt)
{
if (!cntxt.m_is_income) ++number_of_peers;
return true;
}); // lambda
m_current_number_of_out_peers = number_of_peers;
if (epee::net_utils::data_logger::is_dying())
break;
epee::net_utils::data_logger::get_instance().add_data("peers", number_of_peers);
std::this_thread::sleep_for(std::chrono::seconds(1));
} // main loop of thread
_note("Thread monitor number of peers - done");
})); // lambda
//here you can set worker threads count
int thrds_count = 10;
m_net_server.add_idle_handler(boost::bind(&node_server<t_payload_net_handler>::idle_worker, this), 1000);
m_net_server.add_idle_handler(boost::bind(&t_payload_net_handler::on_idle, &m_payload_handler), 1000);
boost::thread::attributes attrs;
attrs.set_stack_size(THREAD_STACK_SIZE);
//go to loop
LOG_PRINT("Run net_service loop( " << thrds_count << " threads)...", LOG_LEVEL_0);
if(!m_net_server.run_server(thrds_count, true, attrs))
{
LOG_ERROR("Failed to run net tcp server!");
}
LOG_PRINT("net_service loop stopped.", LOG_LEVEL_0);
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
uint64_t node_server<t_payload_net_handler>::get_connections_count()
{
return m_net_server.get_config_object().get_connections_count();
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::deinit()
{
kill();
m_peerlist.deinit();
m_net_server.deinit_server();
return store_config();
}
//-----------------------------------------------------------------------------------
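// Serializes the current p2p state back to the P2P_NET_DATA_FILENAME file in the config
// folder, creating the data directory first if necessary.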
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::store_config()
{
TRY_ENTRY();
if (!tools::create_directories_if_necessary(m_config_folder))
{
LOG_PRINT_L0("Failed to create data directory: " << m_config_folder);
return false;
}
std::string state_file_path = m_config_folder + "/" + P2P_NET_DATA_FILENAME;
std::ofstream p2p_data;
p2p_data.open( state_file_path , std::ios_base::binary | std::ios_base::out| std::ios::trunc);
if(p2p_data.fail())
{
LOG_PRINT_L0("Failed to save config to file " << state_file_path);
return false;
};
boost::archive::binary_oarchive a(p2p_data);
a << *this;
return true;
CATCH_ENTRY_L0("node_server::store_config", false);
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::send_stop_signal()
{
m_net_server.send_stop_signal();
LOG_PRINT_L0("[node] Stop signal sent");
return true;
}
//-----------------------------------------------------------------------------------
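// Performs a COMMAND_HANDSHAKE on the given connection: checks the remote network id,
// merges the received peerlist and, unless just_take_peerlist is set, forwards the payload
// sync data to the payload handler and records the peer as just seen. The connection is
// closed if any of these steps fails.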
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::do_handshake_with_peer(peerid_type& pi, p2p_connection_context& context_, bool just_take_peerlist)
{
typename COMMAND_HANDSHAKE::request arg;
typename COMMAND_HANDSHAKE::response rsp;
get_local_node_data(arg.node_data);
m_payload_handler.get_payload_sync_data(arg.payload_data);
epee::simple_event ev;
std::atomic<bool> hsh_result(false);
bool r = epee::net_utils::async_invoke_remote_command2<typename COMMAND_HANDSHAKE::response>(context_.m_connection_id, COMMAND_HANDSHAKE::ID, arg, m_net_server.get_config_object(),
[this, &pi, &ev, &hsh_result, &just_take_peerlist](int code, const typename COMMAND_HANDSHAKE::response& rsp, p2p_connection_context& context)
{
epee::misc_utils::auto_scope_leave_caller scope_exit_handler = epee::misc_utils::create_scope_leave_handler([&](){ev.raise();});
if(code < 0)
{
LOG_PRINT_CC_RED(context, "COMMAND_HANDSHAKE invoke failed. (" << code << ", " << epee::levin::get_err_descr(code) << ")", LOG_LEVEL_1);
return;
}
if(rsp.node_data.network_id != m_network_id)
{
LOG_ERROR_CCONTEXT("COMMAND_HANDSHAKE Failed, wrong network! (" << epee::string_tools::get_str_from_guid_a(rsp.node_data.network_id) << "), closing connection.");
return;
}
if(!handle_remote_peerlist(rsp.local_peerlist, rsp.node_data.local_time, context))
{
LOG_ERROR_CCONTEXT("COMMAND_HANDSHAKE: failed to handle_remote_peerlist(...), closing connection.");
return;
}
hsh_result = true;
if(!just_take_peerlist)
{
if(!m_payload_handler.process_payload_sync_data(rsp.payload_data, context, true))
{
LOG_ERROR_CCONTEXT("COMMAND_HANDSHAKE invoked, but process_payload_sync_data returned false, dropping connection.");
hsh_result = false;
return;
}
pi = context.peer_id = rsp.node_data.peer_id;
m_peerlist.set_peer_just_seen(rsp.node_data.peer_id, context.m_remote_ip, context.m_remote_port);
if(rsp.node_data.peer_id == m_config.m_peer_id)
{
LOG_PRINT_CCONTEXT_L2("Connection to self detected, dropping connection");
hsh_result = false;
return;
}
LOG_PRINT_CCONTEXT_L1(" COMMAND_HANDSHAKE INVOKED OK");
}else
{
LOG_PRINT_CCONTEXT_L1(" COMMAND_HANDSHAKE(AND CLOSE) INVOKED OK");
}
}, P2P_DEFAULT_HANDSHAKE_INVOKE_TIMEOUT);
if(r)
{
ev.wait();
}
if(!hsh_result)
{
LOG_PRINT_CC_L1(context_, "COMMAND_HANDSHAKE Failed");
m_net_server.get_config_object().close(context_.m_connection_id);
}
return hsh_result;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::do_peer_timed_sync(const epee::net_utils::connection_context_base& context_, peerid_type peer_id)
{
typename COMMAND_TIMED_SYNC::request arg = AUTO_VAL_INIT(arg);
m_payload_handler.get_payload_sync_data(arg.payload_data);
bool r = epee::net_utils::async_invoke_remote_command2<typename COMMAND_TIMED_SYNC::response>(context_.m_connection_id, COMMAND_TIMED_SYNC::ID, arg, m_net_server.get_config_object(),
[this](int code, const typename COMMAND_TIMED_SYNC::response& rsp, p2p_connection_context& context)
{
if(code < 0)
{
LOG_PRINT_CC_RED(context, "COMMAND_TIMED_SYNC invoke failed. (" << code << ", " << epee::levin::get_err_descr(code) << ")", LOG_LEVEL_1);
return;
}
if(!handle_remote_peerlist(rsp.local_peerlist, rsp.local_time, context))
{
LOG_ERROR_CCONTEXT("COMMAND_TIMED_SYNC: failed to handle_remote_peerlist(...), closing connection.");
m_net_server.get_config_object().close(context.m_connection_id );
}
if(!context.m_is_income)
m_peerlist.set_peer_just_seen(context.peer_id, context.m_remote_ip, context.m_remote_port);
m_payload_handler.process_payload_sync_data(rsp.payload_data, context, false);
});
if(!r)
{
LOG_PRINT_CC_L2(context_, "COMMAND_TIMED_SYNC Failed");
return false;
}
return true;
}
//-----------------------------------------------------------------------------------
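// Returns a random index in [0, max_index], biased towards small values: for a uniformly
// random x, the result is x^3 / max_index^2, so entries near the front of the peerlist are
// selected more often than those near the end.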
template<class t_payload_net_handler>
size_t node_server<t_payload_net_handler>::get_random_index_with_fixed_probability(size_t max_index)
{
//divide by zero workaround
if(!max_index)
return 0;
size_t x = crypto::rand<size_t>()%(max_index+1);
size_t res = (x*x*x)/(max_index*max_index); //parabola \/
LOG_PRINT_L3("Random connection index=" << res << "(x="<< x << ", max_index=" << max_index << ")");
return res;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::is_peer_used(const peerlist_entry& peer)
{
if(m_config.m_peer_id == peer.id)
return true;//don't make connections to ourselves
bool used = false;
m_net_server.get_config_object().foreach_connection([&](const p2p_connection_context& cntxt)
{
if(cntxt.peer_id == peer.id || (!cntxt.m_is_income && peer.adr.ip == cntxt.m_remote_ip && peer.adr.port == cntxt.m_remote_port))
{
used = true;
return false;//stop enumerating
}
return true;
});
return used;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::is_addr_connected(const net_address& peer)
{
bool connected = false;
m_net_server.get_config_object().foreach_connection([&](const p2p_connection_context& cntxt)
{
if(!cntxt.m_is_income && peer.ip == cntxt.m_remote_ip && peer.port == cntxt.m_remote_port)
{
connected = true;
return false;//stop enumerating
}
return true;
});
return connected;
}
#define LOG_PRINT_CC_PRIORITY_NODE(priority, con, msg) \
do { \
if (priority) {\
LOG_PRINT_CC_L1(con, msg); \
} else {\
LOG_PRINT_CC_L1(con, msg); \
} \
} while(0)
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::try_to_connect_and_handshake_with_new_peer(const net_address& na, bool just_take_peerlist, uint64_t last_seen_stamp, bool white)
{
if (m_current_number_of_out_peers == m_config.m_net_config.connections_count) // out peers limit
{
return false;
}
else if (m_current_number_of_out_peers > m_config.m_net_config.connections_count)
{
m_net_server.get_config_object().del_out_connections(1);
m_current_number_of_out_peers --; // atomic variable, update time = 1s
return false;
}
LOG_PRINT_L1("Connecting to " << epee::string_tools::get_ip_string_from_int32(na.ip) << ":"
<< epee::string_tools::num_to_string_fast(na.port) << "(white=" << white << ", last_seen: "
<< (last_seen_stamp ? epee::misc_utils::get_time_interval_string(time(NULL) - last_seen_stamp):"never")
<< ")...");
typename net_server::t_connection_context con = AUTO_VAL_INIT(con);
bool res = m_net_server.connect(epee::string_tools::get_ip_string_from_int32(na.ip),
epee::string_tools::num_to_string_fast(na.port),
m_config.m_net_config.connection_timeout,
con);
if(!res)
{
bool is_priority = is_priority_node(na);
LOG_PRINT_CC_PRIORITY_NODE(is_priority, con, "Connect failed to "
<< epee::string_tools::get_ip_string_from_int32(na.ip)
<< ":" << epee::string_tools::num_to_string_fast(na.port)
/*<< ", try " << try_count*/);
//m_peerlist.set_peer_unreachable(pe);
return false;
}
peerid_type pi = AUTO_VAL_INIT(pi);
res = do_handshake_with_peer(pi, con, just_take_peerlist);
if(!res)
{
bool is_priority = is_priority_node(na);
LOG_PRINT_CC_PRIORITY_NODE(is_priority, con, "Failed to HANDSHAKE with peer "
<< epee::string_tools::get_ip_string_from_int32(na.ip)
<< ":" << epee::string_tools::num_to_string_fast(na.port)
/*<< ", try " << try_count*/);
return false;
}
if(just_take_peerlist)
{
m_net_server.get_config_object().close(con.m_connection_id);
LOG_PRINT_CC_GREEN(con, "CONNECTION HANDSHAKED OK AND CLOSED.", LOG_LEVEL_2);
return true;
}
peerlist_entry pe_local = AUTO_VAL_INIT(pe_local);
pe_local.adr = na;
pe_local.id = pi;
time_t last_seen;
time(&last_seen);
pe_local.last_seen = static_cast<int64_t>(last_seen);
m_peerlist.append_with_peer_white(pe_local);
//update last seen and push it to peerlist manager
LOG_PRINT_CC_GREEN(con, "CONNECTION HANDSHAKED OK.", LOG_LEVEL_2);
return true;
}
#undef LOG_PRINT_CC_PRIORITY_NODE
//-----------------------------------------------------------------------------------
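// Picks random peerlist entries (biased towards the front of the white or gray list via
// get_random_index_with_fixed_probability) and tries to connect and handshake, giving up
// after a bounded number of attempts or when a stop signal has been received.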
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::make_new_connection_from_peerlist(bool use_white_list)
{
size_t local_peers_count = use_white_list ? m_peerlist.get_white_peers_count():m_peerlist.get_gray_peers_count();
if(!local_peers_count)
return false;//no peers
size_t max_random_index = std::min<uint64_t>(local_peers_count -1, 20);
std::set<size_t> tried_peers;
size_t try_count = 0;
size_t rand_count = 0;
while(rand_count < (max_random_index+1)*3 && try_count < 10 && !m_net_server.is_stop_signal_sent())
{
++rand_count;
size_t random_index = get_random_index_with_fixed_probability(max_random_index);
CHECK_AND_ASSERT_MES(random_index < local_peers_count, false, "random_starter_index < peers_local.size() failed!!");
if(tried_peers.count(random_index))
continue;
tried_peers.insert(random_index);
peerlist_entry pe = AUTO_VAL_INIT(pe);
bool r = use_white_list ? m_peerlist.get_white_peer_by_index(pe, random_index):m_peerlist.get_gray_peer_by_index(pe, random_index);
CHECK_AND_ASSERT_MES(r, false, "Failed to get random peer from peerlist(white:" << use_white_list << ")");
++try_count;
_note("Considering connecting (out) to peer: " << pe.id << " " << epee::string_tools::get_ip_string_from_int32(pe.adr.ip) << ":" << boost::lexical_cast<std::string>(pe.adr.port));
if(is_peer_used(pe)) {
_note("Peer is used");
continue;
}
LOG_PRINT_L1("Selected peer: " << pe.id << " " << epee::string_tools::get_ip_string_from_int32(pe.adr.ip)
<< ":" << boost::lexical_cast<std::string>(pe.adr.port)
<< "[white=" << use_white_list
<< "] last_seen: " << (pe.last_seen ? epee::misc_utils::get_time_interval_string(time(NULL) - pe.last_seen) : "never"));
if(!try_to_connect_and_handshake_with_new_peer(pe.adr, false, pe.last_seen, use_white_list)) {
_note("Handshake failed");
continue;
}
return true;
}
return false;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::connections_maker()
{
if (!connect_to_peerlist(m_exclusive_peers)) return false;
if (!m_exclusive_peers.empty()) return true;
if(!m_peerlist.get_white_peers_count() && m_seed_nodes.size())
{
size_t try_count = 0;
size_t current_index = crypto::rand<size_t>()%m_seed_nodes.size();
while(true)
{
if(m_net_server.is_stop_signal_sent())
return false;
if(try_to_connect_and_handshake_with_new_peer(m_seed_nodes[current_index], true))
break;
if(++try_count > m_seed_nodes.size())
{
LOG_PRINT_RED_L0("Failed to connect to any of seed peers, continuing without seeds");
break;
}
if(++current_index >= m_seed_nodes.size())
current_index = 0;
}
}
if (!connect_to_peerlist(m_priority_peers)) return false;
size_t expected_white_connections = (m_config.m_net_config.connections_count*P2P_DEFAULT_WHITELIST_CONNECTIONS_PERCENT)/100;
size_t conn_count = get_outgoing_connections_count();
if(conn_count < m_config.m_net_config.connections_count)
{
if(conn_count < expected_white_connections)
{
//start from white list
if(!make_expected_connections_count(true, expected_white_connections))
return false;
//and then do grey list
if(!make_expected_connections_count(false, m_config.m_net_config.connections_count))
return false;
}else
{
//start from grey list
if(!make_expected_connections_count(false, m_config.m_net_config.connections_count))
return false;
//and then do white list
if(!make_expected_connections_count(true, m_config.m_net_config.connections_count))
return false;
}
}
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::make_expected_connections_count(bool white_list, size_t expected_connections)
{
size_t conn_count = get_outgoing_connections_count();
//add new connections from white peers
while(conn_count < expected_connections)
{
if(m_net_server.is_stop_signal_sent())
return false;
if(!make_new_connection_from_peerlist(white_list))
break;
conn_count = get_outgoing_connections_count();
}
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
size_t node_server<t_payload_net_handler>::get_outgoing_connections_count()
{
size_t count = 0;
m_net_server.get_config_object().foreach_connection([&](const p2p_connection_context& cntxt)
{
if(!cntxt.m_is_income)
++count;
return true;
});
return count;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::idle_worker()
{
m_peer_handshake_idle_maker_interval.do_call(boost::bind(&node_server<t_payload_net_handler>::peer_sync_idle_maker, this));
m_connections_maker_interval.do_call(boost::bind(&node_server<t_payload_net_handler>::connections_maker, this));
m_peerlist_store_interval.do_call(boost::bind(&node_server<t_payload_net_handler>::store_config, this));
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::peer_sync_idle_maker()
{
LOG_PRINT_L2("STARTED PEERLIST IDLE HANDSHAKE");
typedef std::list<std::pair<epee::net_utils::connection_context_base, peerid_type> > local_connects_type;
local_connects_type cncts;
m_net_server.get_config_object().foreach_connection([&](const p2p_connection_context& cntxt)
{
if(cntxt.peer_id)
cncts.push_back(local_connects_type::value_type(cntxt, cntxt.peer_id));//do idle sync only with handshaked connections
return true;
});
std::for_each(cncts.begin(), cncts.end(), [&](const typename local_connects_type::value_type& vl){do_peer_timed_sync(vl.first, vl.second);});
LOG_PRINT_L2("FINISHED PEERLIST IDLE HANDSHAKE");
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::fix_time_delta(std::list<peerlist_entry>& local_peerlist, time_t local_time, int64_t& delta)
{
//fix time delta
time_t now = 0;
time(&now);
delta = now - local_time;
BOOST_FOREACH(peerlist_entry& be, local_peerlist)
{
if(be.last_seen > local_time)
{
LOG_PRINT_RED_L1("FOUND FUTURE peerlist for entry " << epee::string_tools::get_ip_string_from_int32(be.adr.ip) << ":" << be.adr.port << " last_seen: " << be.last_seen << ", local_time(on remote node):" << local_time);
return false;
}
be.last_seen += delta;
}
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::handle_remote_peerlist(const std::list<peerlist_entry>& peerlist, time_t local_time, const epee::net_utils::connection_context_base& context)
{
int64_t delta = 0;
std::list<peerlist_entry> peerlist_ = peerlist;
if(!fix_time_delta(peerlist_, local_time, delta))
return false;
LOG_PRINT_CCONTEXT_L2("REMOTE PEERLIST: TIME_DELTA: " << delta << ", remote peerlist size=" << peerlist_.size());
LOG_PRINT_CCONTEXT_L3("REMOTE PEERLIST: " << print_peerlist_to_string(peerlist_));
return m_peerlist.merge_peerlist(peerlist_);
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::get_local_node_data(basic_node_data& node_data)
{
time_t local_time;
time(&local_time);
node_data.local_time = local_time;
node_data.peer_id = m_config.m_peer_id;
if(!m_hide_my_port)
node_data.my_port = m_external_port ? m_external_port : m_listenning_port;
else
node_data.my_port = 0;
node_data.network_id = m_network_id;
return true;
}
//-----------------------------------------------------------------------------------
#ifdef ALLOW_DEBUG_COMMANDS
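// Verifies a proof_of_trust for debug commands: it must be less than 24 hours
// old, newer than the last accepted request, addressed to our peer_id and
// signed with the trusted remote-debug public key.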
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::check_trust(const proof_of_trust& tr)
{
uint64_t local_time = time(NULL);
uint64_t time_delta = local_time > tr.time ? local_time - tr.time : tr.time - local_time;
if(time_delta > 24*60*60 )
{
LOG_ERROR("check_trust failed to check time conditions, local_time=" << local_time << ", proof_time=" << tr.time);
return false;
}
if(m_last_stat_request_time >= tr.time )
{
LOG_ERROR("check_trust failed to check time conditions, last_stat_request_time=" << m_last_stat_request_time << ", proof_time=" << tr.time);
return false;
}
if(m_config.m_peer_id != tr.peer_id)
{
LOG_ERROR("check_trust failed: peer_id mismatch (passed " << tr.peer_id << ", expected " << m_config.m_peer_id<< ")");
return false;
}
crypto::public_key pk = AUTO_VAL_INIT(pk);
epee::string_tools::hex_to_pod(::config::P2P_REMOTE_DEBUG_TRUSTED_PUB_KEY, pk);
crypto::hash h = tools::get_proof_of_trust_hash(tr);
if(!crypto::check_signature(h, pk, tr.sign))
{
LOG_ERROR("check_trust failed: sign check failed");
return false;
}
//update last request time
m_last_stat_request_time = tr.time;
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
int node_server<t_payload_net_handler>::handle_get_stat_info(int command, typename COMMAND_REQUEST_STAT_INFO::request& arg, typename COMMAND_REQUEST_STAT_INFO::response& rsp, p2p_connection_context& context)
{
if(!check_trust(arg.tr))
{
drop_connection(context);
return 1;
}
rsp.connections_count = m_net_server.get_config_object().get_connections_count();
rsp.incoming_connections_count = rsp.connections_count - get_outgoing_connections_count();
rsp.version = MONERO_VERSION_FULL;
rsp.os_version = tools::get_os_version_string();
m_payload_handler.get_stat_info(rsp.payload_info);
return 1;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
int node_server<t_payload_net_handler>::handle_get_network_state(int command, COMMAND_REQUEST_NETWORK_STATE::request& arg, COMMAND_REQUEST_NETWORK_STATE::response& rsp, p2p_connection_context& context)
{
if(!check_trust(arg.tr))
{
drop_connection(context);
return 1;
}
m_net_server.get_config_object().foreach_connection([&](const p2p_connection_context& cntxt)
{
connection_entry ce;
ce.adr.ip = cntxt.m_remote_ip;
ce.adr.port = cntxt.m_remote_port;
ce.id = cntxt.peer_id;
ce.is_income = cntxt.m_is_income;
rsp.connections_list.push_back(ce);
return true;
});
m_peerlist.get_peerlist_full(rsp.local_peerlist_gray, rsp.local_peerlist_white);
rsp.my_id = m_config.m_peer_id;
rsp.local_time = time(NULL);
return 1;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
int node_server<t_payload_net_handler>::handle_get_peer_id(int command, COMMAND_REQUEST_PEER_ID::request& arg, COMMAND_REQUEST_PEER_ID::response& rsp, p2p_connection_context& context)
{
rsp.my_id = m_config.m_peer_id;
return 1;
}
#endif
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::request_callback(const epee::net_utils::connection_context_base& context)
{
m_net_server.get_config_object().request_callback(context.m_connection_id);
}
//-----------------------------------------------------------------------------------
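// Relays a notification to every handshaked connection except the one the
// data originally arrived on.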
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::relay_notify_to_all(int command, const std::string& data_buff, const epee::net_utils::connection_context_base& context)
{
std::list<boost::uuids::uuid> connections;
m_net_server.get_config_object().foreach_connection([&](const p2p_connection_context& cntxt)
{
if(cntxt.peer_id && context.m_connection_id != cntxt.m_connection_id)
connections.push_back(cntxt.m_connection_id);
return true;
});
BOOST_FOREACH(const auto& c_id, connections)
{
m_net_server.get_config_object().notify(command, data_buff, c_id);
}
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::callback(p2p_connection_context& context)
{
m_payload_handler.on_callback(context);
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::invoke_notify_to_peer(int command, const std::string& req_buff, const epee::net_utils::connection_context_base& context)
{
int res = m_net_server.get_config_object().notify(command, req_buff, context.m_connection_id);
return res > 0;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::invoke_command_to_peer(int command, const std::string& req_buff, std::string& resp_buff, const epee::net_utils::connection_context_base& context)
{
int res = m_net_server.get_config_object().invoke(command, req_buff, resp_buff, context.m_connection_id);
return res > 0;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::drop_connection(const epee::net_utils::connection_context_base& context)
{
m_net_server.get_config_object().close(context.m_connection_id);
return true;
}
//-----------------------------------------------------------------------------------
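// Back-pings a peer on the port it advertised during the handshake; cb is
// invoked only when the ping succeeds and the responding peer_id matches the
// one from the handshake.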
template<class t_payload_net_handler> template<class t_callback>
bool node_server<t_payload_net_handler>::try_ping(basic_node_data& node_data, p2p_connection_context& context, t_callback cb)
{
if(!node_data.my_port)
return false;
uint32_t actual_ip = context.m_remote_ip;
if(!m_peerlist.is_ip_allowed(actual_ip))
return false;
std::string ip = epee::string_tools::get_ip_string_from_int32(actual_ip);
std::string port = epee::string_tools::num_to_string_fast(node_data.my_port);
peerid_type pr = node_data.peer_id;
bool r = m_net_server.connect_async(ip, port, m_config.m_net_config.ping_connection_timeout, [cb, /*context,*/ ip, port, pr, this](
const typename net_server::t_connection_context& ping_context,
const boost::system::error_code& ec)->bool
{
if(ec)
{
LOG_PRINT_CC_L2(ping_context, "back ping connect failed to " << ip << ":" << port);
return false;
}
COMMAND_PING::request req;
COMMAND_PING::response rsp;
//vc2010 workaround
/*std::string ip_ = ip;
std::string port_=port;
peerid_type pr_ = pr;
auto cb_ = cb;*/
// GCC 5.1.0 gives an error on the second use of the captured uint64_t (peerid_type) variable, so take a local copy.
peerid_type pr_ = pr;
bool inv_call_res = epee::net_utils::async_invoke_remote_command2<COMMAND_PING::response>(ping_context.m_connection_id, COMMAND_PING::ID, req, m_net_server.get_config_object(),
[=](int code, const COMMAND_PING::response& rsp, p2p_connection_context& context)
{
if(code <= 0)
{
LOG_PRINT_CC_L2(ping_context, "Failed to invoke COMMAND_PING to " << ip << ":" << port << "(" << code << ", " << epee::levin::get_err_descr(code) << ")");
return;
}
if(rsp.status != PING_OK_RESPONSE_STATUS_TEXT || pr != rsp.peer_id)
{
LOG_PRINT_CC_L2(ping_context, "back ping invoke wrong response \"" << rsp.status << "\" from" << ip << ":" << port << ", hsh_peer_id=" << pr_ << ", rsp.peer_id=" << rsp.peer_id);
return;
}
m_net_server.get_config_object().close(ping_context.m_connection_id);
cb();
});
if(!inv_call_res)
{
LOG_PRINT_CC_L2(ping_context, "back ping invoke failed to " << ip << ":" << port);
m_net_server.get_config_object().close(ping_context.m_connection_id);
return false;
}
return true;
});
if(!r)
{
LOG_ERROR("Failed to call connect_async, network error.");
}
return r;
}
//-----------------------------------------------------------------------------------
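// Handles COMMAND_TIMED_SYNC: processes the peer's payload sync data and
// replies with the local time, the peerlist head and our payload sync data.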
template<class t_payload_net_handler>
int node_server<t_payload_net_handler>::handle_timed_sync(int command, typename COMMAND_TIMED_SYNC::request& arg, typename COMMAND_TIMED_SYNC::response& rsp, p2p_connection_context& context)
{
if(!m_payload_handler.process_payload_sync_data(arg.payload_data, context, false))
{
LOG_ERROR_CCONTEXT("Failed to process_payload_sync_data(), dropping connection");
drop_connection(context);
return 1;
}
//fill response
rsp.local_time = time(NULL);
m_peerlist.get_peerlist_head(rsp.local_peerlist);
m_payload_handler.get_payload_sync_data(rsp.payload_data);
LOG_PRINT_CCONTEXT_L2("COMMAND_TIMED_SYNC");
return 1;
}
//-----------------------------------------------------------------------------------
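// Handles COMMAND_HANDSHAKE on an incoming connection: checks the network id,
// processes the payload sync data, associates the peer_id with the connection
// and, if the peer advertises a reachable port, back-pings it before adding
// it to the white peerlist.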
template<class t_payload_net_handler>
int node_server<t_payload_net_handler>::handle_handshake(int command, typename COMMAND_HANDSHAKE::request& arg, typename COMMAND_HANDSHAKE::response& rsp, p2p_connection_context& context)
{
if(arg.node_data.network_id != m_network_id)
{
LOG_PRINT_CCONTEXT_L1("WRONG NETWORK AGENT CONNECTED! id=" << epee::string_tools::get_str_from_guid_a(arg.node_data.network_id));
drop_connection(context);
return 1;
}
if(!context.m_is_income)
{
LOG_ERROR_CCONTEXT("COMMAND_HANDSHAKE came not from incoming connection");
drop_connection(context);
return 1;
}
if(context.peer_id)
{
LOG_ERROR_CCONTEXT("COMMAND_HANDSHAKE came, but seems that connection already have associated peer_id (double COMMAND_HANDSHAKE?)");
drop_connection(context);
return 1;
}
if(!m_payload_handler.process_payload_sync_data(arg.payload_data, context, true))
{
LOG_ERROR_CCONTEXT("COMMAND_HANDSHAKE came, but process_payload_sync_data returned false, dropping connection.");
drop_connection(context);
return 1;
}
//associate peer_id with this connection
context.peer_id = arg.node_data.peer_id;
if(arg.node_data.peer_id != m_config.m_peer_id && arg.node_data.my_port)
{
peerid_type peer_id_l = arg.node_data.peer_id;
uint32_t port_l = arg.node_data.my_port;
//try to ping the peer to make sure we can add it to the peerlist
try_ping(arg.node_data, context, [peer_id_l, port_l, context, this]()
{
//called only(!) if the ping succeeded; update the local peerlist
peerlist_entry pe;
pe.adr.ip = context.m_remote_ip;
pe.adr.port = port_l;
time_t last_seen;
time(&last_seen);
pe.last_seen = static_cast<int64_t>(last_seen);
pe.id = peer_id_l;
this->m_peerlist.append_with_peer_white(pe);
LOG_PRINT_CCONTEXT_L2("PING SUCCESS " << epee::string_tools::get_ip_string_from_int32(context.m_remote_ip) << ":" << port_l);
});
}
//fill response
m_peerlist.get_peerlist_head(rsp.local_peerlist);
get_local_node_data(rsp.node_data);
m_payload_handler.get_payload_sync_data(rsp.payload_data);
LOG_PRINT_CCONTEXT_GREEN("COMMAND_HANDSHAKE", LOG_LEVEL_1);
return 1;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
int node_server<t_payload_net_handler>::handle_ping(int command, COMMAND_PING::request& arg, COMMAND_PING::response& rsp, p2p_connection_context& context)
{
LOG_PRINT_CCONTEXT_L2("COMMAND_PING");
rsp.status = PING_OK_RESPONSE_STATUS_TEXT;
rsp.peer_id = m_config.m_peer_id;
return 1;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::log_peerlist()
{
std::list<peerlist_entry> pl_wite;
std::list<peerlist_entry> pl_gray;
m_peerlist.get_peerlist_full(pl_gray, pl_wite);
LOG_PRINT_L0(ENDL << "Peerlist white:" << ENDL << print_peerlist_to_string(pl_wite) << ENDL << "Peerlist gray:" << ENDL << print_peerlist_to_string(pl_gray) );
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::log_connections()
{
LOG_PRINT_L0("Connections: \r\n" << print_connections_container() );
return true;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
std::string node_server<t_payload_net_handler>::print_connections_container()
{
std::stringstream ss;
m_net_server.get_config_object().foreach_connection([&](const p2p_connection_context& cntxt)
{
ss << epee::string_tools::get_ip_string_from_int32(cntxt.m_remote_ip) << ":" << cntxt.m_remote_port
<< " \t\tpeer_id " << cntxt.peer_id
<< " \t\tconn_id " << epee::string_tools::get_str_from_guid_a(cntxt.m_connection_id) << (cntxt.m_is_income ? " INC":" OUT")
<< std::endl;
return true;
});
std::string s = ss.str();
return s;
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::on_connection_new(p2p_connection_context& context)
{
LOG_PRINT_L2("["<< epee::net_utils::print_connection_context(context) << "] NEW CONNECTION");
}
//-----------------------------------------------------------------------------------
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::on_connection_close(p2p_connection_context& context)
{
LOG_PRINT_L2("["<< epee::net_utils::print_connection_context(context) << "] CLOSE CONNECTION");
}
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::is_priority_node(const net_address& na)
{
return (std::find(m_priority_peers.begin(), m_priority_peers.end(), na) != m_priority_peers.end()) || (std::find(m_exclusive_peers.begin(), m_exclusive_peers.end(), na) != m_exclusive_peers.end());
}
template<class t_payload_net_handler> template <class Container>
bool node_server<t_payload_net_handler>::connect_to_peerlist(const Container& peers)
{
for(const net_address& na: peers)
{
if(m_net_server.is_stop_signal_sent())
return false;
if(is_addr_connected(na))
continue;
try_to_connect_and_handshake_with_new_peer(na);
}
return true;
}
template<class t_payload_net_handler> template <class Container>
bool node_server<t_payload_net_handler>::parse_peers_and_add_to_container(const boost::program_options::variables_map& vm, const command_line::arg_descriptor<std::vector<std::string> > & arg, Container& container)
{
std::vector<std::string> perrs = command_line::get_arg(vm, arg);
for(const std::string& pr_str: perrs)
{
nodetool::net_address na = AUTO_VAL_INIT(na);
bool r = parse_peer_from_string(na, pr_str);
CHECK_AND_ASSERT_MES(r, false, "Failed to parse address from string: " << pr_str);
container.push_back(na);
}
return true;
}
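// Sets the outgoing connections limit; -1 restores P2P_DEFAULT_CONNECTIONS_COUNT.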
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::set_max_out_peers(const boost::program_options::variables_map& vm, int64_t max)
{
if(max == -1) {
m_config.m_net_config.connections_count = P2P_DEFAULT_CONNECTIONS_COUNT;
epee::net_utils::data_logger::get_instance().add_data("peers_limit", m_config.m_net_config.connections_count);
return true;
}
epee::net_utils::data_logger::get_instance().add_data("peers_limit", max);
m_config.m_net_config.connections_count = max;
return true;
}
template<class t_payload_net_handler>
void node_server<t_payload_net_handler>::delete_connections(size_t count)
{
m_net_server.get_config_object().del_out_connections(count);
}
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::set_tos_flag(const boost::program_options::variables_map& vm, int flag)
{
if(flag==-1){
return true;
}
epee::net_utils::connection<epee::levin::async_protocol_handler<p2p_connection_context> >::set_tos_flag(flag);
_dbg1("Set ToS flag " << flag);
return true;
}
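// Sets the global upload rate limit in kB/s; -1 selects default_limit_up.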
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::set_rate_up_limit(const boost::program_options::variables_map& vm, int64_t limit)
{
this->islimitup=true;
if (limit==-1) {
limit=default_limit_up;
this->islimitup=false;
}
limit *= 1024;
epee::net_utils::connection<epee::levin::async_protocol_handler<p2p_connection_context> >::set_rate_up_limit( limit );
LOG_PRINT_L0("Set limit-up to " << limit/1024 << " kB/s");
return true;
}
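// Sets the global download rate limit in kB/s; -1 selects default_limit_down.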
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::set_rate_down_limit(const boost::program_options::variables_map& vm, int64_t limit)
{
this->islimitdown=true;
if(limit==-1) {
limit=default_limit_down;
this->islimitdown=false;
}
limit *= 1024;
epee::net_utils::connection<epee::levin::async_protocol_handler<p2p_connection_context> >::set_rate_down_limit( limit );
LOG_PRINT_L0("Set limit-down to " << limit/1024 << " kB/s");
return true;
}
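// Sets both rate limits at once (kB/s); -1 selects the defaults. A direction
// already configured via its own limit option (islimitup/islimitdown) is left
// untouched.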
template<class t_payload_net_handler>
bool node_server<t_payload_net_handler>::set_rate_limit(const boost::program_options::variables_map& vm, int64_t limit)
{
int64_t limit_up = 0;
int64_t limit_down = 0;
if(limit == -1)
{
limit_up = default_limit_up * 1024;
limit_down = default_limit_down * 1024;
}
else
{
limit_up = limit * 1024;
limit_down = limit * 1024;
}
if(!this->islimitup) {
epee::net_utils::connection<epee::levin::async_protocol_handler<p2p_connection_context> >::set_rate_up_limit(limit_up);
LOG_PRINT_L0("Set limit-up to " << limit_up/1024 << " kB/s");
}
if(!this->islimitdown) {
epee::net_utils::connection<epee::levin::async_protocol_handler<p2p_connection_context> >::set_rate_down_limit(limit_down);
LOG_PRINT_L0("Set limit-down to " << limit_down/1024 << " kB/s");
}
return true;
}
}