6137a0b9 blockchain: reject unsorted ins and outs from v7 (moneromooo-monero)
16afab90 core: sort ins and outs by key image and public key, respectively (moneromooo-monero)
0c36b9f9 common: add apply_permutation file and function (moneromooo-monero)
0299cb77 Fix various oversights/bugs in ZMQ RPC server code (Thomas Winget)
77986023 json serialization for rpc-relevant monero types (Thomas Winget)
5c1e08fe Refactor some things into more composable (smaller) functions (Thomas Winget)
9ac2ad07 DRY refactoring (Thomas Winget)
And optimize import startup:
Remember the start_height position during the initial count_blocks pass
to avoid having to re-read the entire file to arrive at start_height
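A minimal sketch of that idea, not the real blockchain_import code: the
skip_one_block() helper is hypothetical, and the point is only that the
counting pass records the file offset of start_height so the import pass
can seek straight to it instead of re-reading from the beginning.

    #include <cstdint>
    #include <fstream>

    // hypothetical helper: advances the stream past one serialized block,
    // returning false at end of file
    bool skip_one_block(std::ifstream &f);

    // single pass: count blocks and remember where start_height begins
    std::streampos count_blocks_and_find_offset(std::ifstream &f,
                                                 uint64_t start_height,
                                                 uint64_t &block_count)
    {
      std::streampos start_offset = 0;
      block_count = 0;
      for (;;)
      {
        if (block_count == start_height)
          start_offset = f.tellg();  // import can later seekg() here directly
        if (!skip_one_block(f))
          break;
        ++block_count;
      }
      return start_offset;
    }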
Structured {de-,}serialization methods for (many new) types
which are used for requests or responses in the RPC.
New types include RPC requests and responses, and structs which compose
types within those.
This commit refactors some of the rpc-related functions in the
Blockchain class to be more composable. This change was made
in order to make implementing the new zmq rpc easier without
trampling on the old rpc.
New functions:
Blockchain::get_num_mature_outputs
Blockchain::get_random_outputs
Blockchain::get_output_key
Blockchain::get_output_key_mask_unlocked
Blockchain::find_blockchain_supplement (overload)
functions which previously had this functionality inline now call these
functions as necessary.
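A sketch of the composition this enables; the getter name comes from the
list above, but its signature and the handler around it are assumptions
for illustration, not the actual interface.

    #include <cstdint>

    // trimmed stand-in for cryptonote::Blockchain; the signature is assumed
    class Blockchain
    {
    public:
      uint64_t get_num_mature_outputs(uint64_t amount) const;
      // ... get_random_outputs, get_output_key, get_output_key_mask_unlocked
    };

    // Both the old JSON RPC handler and the new ZMQ RPC handler can delegate
    // to the same small getter instead of each inlining the query.
    uint64_t handle_num_outputs_request(const Blockchain &bc, uint64_t amount)
    {
      return bc.get_num_mature_outputs(amount);
    }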
5807529e blockchain: cap memory size of retrieved blocks (moneromooo-monero)
c1b10381 rpc: decrease memory usage a bit in getblocks.bin (moneromooo-monero)
If monerod is started with default sync mode, set it to SAFE after
synchronization completes. Set it back to FAST if synchronization
restarts (e.g. because another peer has a longer blockchain).
If monerod is started with an explicit sync mode, none of this
automation takes effect.
5d4ef719 core: speed up output index unique set calculation (moneromooo-monero)
19d7f568 perf_timer: allow profiling more granular than millisecond (moneromooo-monero)
bda8c598 epee: add nanosecond timer and pause/restart profiling macros (moneromooo-monero)
A sort+uniq step was done for every tx in a 200 block chunk,
causing a lot of repeated scanning as the size of the offset
map got larger with every added tx. We now do the step only
once at the end of the loop.
Doing it this way potentially uses more memory, but testing
shows that it's currently only about 2% more.
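A simplified illustration of the change (the names are made up, not the
actual offset-gathering code): the per-tx sort+unique moves to a single
pass after the loop.

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    std::vector<uint64_t>
    collect_offsets(const std::vector<std::vector<uint64_t>> &per_tx_offsets)
    {
      std::vector<uint64_t> offsets;
      for (const auto &tx_offsets : per_tx_offsets)
      {
        // previously, a sort+unique ran here for every tx in the chunk
        offsets.insert(offsets.end(), tx_offsets.begin(), tx_offsets.end());
      }
      // now: one sort+unique at the end of the loop
      std::sort(offsets.begin(), offsets.end());
      offsets.erase(std::unique(offsets.begin(), offsets.end()), offsets.end());
      return offsets;
    }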
If the number of blocks to check was not a multiple of the
number of preparation threads, the last few blocks would
not be included in the threaded long hash calculation.
Those would still get calculated when the block gets added
to the chain, however, so this was only a tiny performance
hit, rather than a security bug.
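An illustrative way to split the work so the remainder is not dropped (not
the actual preparation code): the first n_blocks % n_threads threads each
take one extra block, so every block is covered.

    #include <cstddef>
    #include <utility>
    #include <vector>

    // returns one [begin, end) block range per thread, covering all blocks
    std::vector<std::pair<size_t, size_t>>
    split_ranges(size_t n_blocks, size_t n_threads)
    {
      std::vector<std::pair<size_t, size_t>> ranges;
      const size_t base  = n_blocks / n_threads;
      const size_t extra = n_blocks % n_threads;  // blocks the buggy split missed
      size_t begin = 0;
      for (size_t t = 0; t < n_threads; ++t)
      {
        const size_t len = base + (t < extra ? 1 : 0);
        ranges.emplace_back(begin, begin + len);
        begin += len;
      }
      return ranges;
    }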
Minimum mixin 4 and enforced ringct is moved from v5 to v6.
v5 is now used for an increased minimum block size (from 60000
to 300000) to cater for larger typical/minimum transaction size.
The fee algorithm is also changed to decrease the base per kB
fee, and add a cheap tier for those transactions which we do
not care if they get delayed (or even included in a block).
If the blocks aren't being linked against a binary (such as
one of the blockchain utilities), the symbol will not be
NULL, but the size will be 0. This avoids a spurious warning
about the data hash.
When scanning for outputs used in a set of incoming blocks,
we expect that some of the inputs in their transactions will
not be found in the blockchain, as they could be in previous
blocks in that set. Those outputs will be scanned there at
a later point. In this case, we add a flag to control whether
an output not being found is expected or not.
This replaces the epee and data_loggers logging systems with
a single one, and also adds filename:line and explicit severity
levels. Categories may be defined, and logging severity set
by category (or set of categories). epee style 0-4 log level
maps to a sensible severity configuration. Log files now also
rotate when reaching 100 MB.
To select which logs to output, use the MONERO_LOGS environment
variable, with a comma separated list of categories (globs are
supported), with their requested severity level after a colon.
If a log matches more than one such setting, the last one in
the configuration string applies. A few examples:
This one is (mostly) silent, only outputting fatal errors:
MONERO_LOGS=*:FATAL
This one is very verbose:
MONERO_LOGS=*:TRACE
This one is totally silent (logwise):
MONERO_LOGS=""
This one outputs all errors and warnings, except for the
"verify" category, which prints just fatal errors (the verify
category is used for logs about incoming transactions and
blocks, and it is expected that some/many will fail to verify,
hence we don't want the spam):
MONERO_LOGS=*:WARNING,verify:FATAL
Log levels are, in decreasing order of priority:
FATAL, ERROR, WARNING, INFO, DEBUG, TRACE
Subcategories may be added using prefixes and globs. This
example will output net.p2p logs at the TRACE level, but all
other net* logs only at INFO:
MONERO_LOGS=*:ERROR,net*:INFO,net.p2p:TRACE
Logs which are intended for the user (which Monero was using
a lot through epee, but really isn't a nice way to do things)
should use the "global" category. There are a few helper macros
for using this category, eg: MGINFO("this shows up by default")
or MGINFO_RED("this is red"), to try to keep a similar look
and feel for now.
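A short usage sketch: MGINFO/MGINFO_RED are the helpers mentioned above,
while the header name and the per-category MCINFO macro are assumptions
about the logging layer rather than something stated in this log.

    #include "misc_log_ex.h"   // assumed header providing the macros

    void report_startup()
    {
      MGINFO("this shows up by default");       // "global" category, user facing
      MGINFO_RED("this is red");                // same category, colored
      MCINFO("net.p2p", "handshake complete");  // assumed per-category macro;
                                                // shown with MONERO_LOGS=net.p2p:INFO
    }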
Existing epee log macros still exist, and map to the new log
levels, but since they're used as a "user facing" UI element
as much as a logging system, they often don't map well to log
severities (ie, a log level 0 log may be an error, or may be
something we want the user to see, such as important information).
In those cases, I tried to use the new macros. In other cases,
I left the existing macros in. When modifying logs, it is
probably best to switch to the new macros with explicit levels.
The --log-level options and set_log commands now also accept
category settings, in addition to the epee style log levels.
3ff54bdd Check for correct thread before ending batch transaction (Howard Chu)
eaf8470b Must wait for previous batch to finish before starting new one (Howard Chu)
c903c554 Don't cache block height, always get from DB (Howard Chu)
eb1fb601 Tweak default db-sync-mode to fast:async:1 (Howard Chu)
0693cff9 Use batch transactions when syncing (Howard Chu)
It can now be queried by RPC, so it needs to be set before
it is otherwise needed for consensus, even if no blocks had
to be added (ie, exit and restart quickly).
Continue filling until we reach the block size limit, or the
resulting coinbase decreases.
Also remove the old sanity check on block size, which is no
longer wanted.
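A sketch of the fill rule under the stated behavior: expected_coinbase()
is a placeholder for the real reward-with-penalty calculation, and the
types are illustrative rather than the actual tx pool code.

    #include <cstdint>
    #include <vector>

    struct pending_tx { size_t blob_size; uint64_t fee; };   // illustrative

    // placeholder for the real block reward (with size penalty) plus fees
    uint64_t expected_coinbase(size_t block_size, uint64_t fees);

    void fill_block(const std::vector<pending_tx> &pool, size_t block_size_limit,
                    std::vector<pending_tx> &block_txs)
    {
      size_t size = 0;
      uint64_t fees = 0;
      uint64_t coinbase = expected_coinbase(size, fees);
      for (const pending_tx &tx : pool)
      {
        const size_t new_size = size + tx.blob_size;
        if (new_size > block_size_limit)
          break;                                  // reached the block size limit
        const uint64_t new_coinbase = expected_coinbase(new_size, fees + tx.fee);
        if (new_coinbase < coinbase)
          break;                                  // coinbase would decrease, stop
        block_txs.push_back(tx);
        size = new_size;
        fees += tx.fee;
        coinbase = new_coinbase;
      }
    }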
After popping blocks from the old chain, the hard fork object's
notion of the current version was not in line with the new height,
causing the first blocks from the new chain to be rejected due
to a false expectation of a newer version.
If the block reward to use for the fee calculation can't be
calculated (should not happen in practice), use a high bound,
so we use a fee overestimate that will be accepted by the network.
The fee will vary based on the base reward and the current
block size limit:
fee = (R/R0) * (M0/M) * F0
R: base reward
R0: reference base reward (10 monero)
M: block size limit
M0: minimum block size limit (60000)
F0: 0.002 monero
Starts applying at v4
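A quick numerical check of the formula with the constants above
(illustrative only; the real code works in atomic units with its own
rounding):

    #include <cstdio>
    #include <initializer_list>

    int main()
    {
      const double R  = 10.0;      // base reward, in monero
      const double R0 = 10.0;      // reference base reward
      const double M0 = 60000.0;   // minimum block size limit, in bytes
      const double F0 = 0.002;     // reference fee, in monero per kB

      for (double M : { 60000.0, 120000.0, 300000.0 })
        std::printf("limit %6.0f -> fee %.6f monero/kB\n",
                    M, (R / R0) * (M0 / M) * F0);
      // prints 0.002000, 0.001000 and 0.000400 respectively
      return 0;
    }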
25% of the outputs are selected from the last 5 days (if possible),
in order to avoid the common case of sending recently received
outputs again. 25% and 5 days are subject to review later, since
it's just a wallet level change.
Since this queries block heights for blocks that may or may not
exist, queries for non-existing blocks would throw an exception,
and that would slow down the loop a lot: 7 seconds to go through
a 30 hash list.
Fix this by adding an optional return block height to block_exists
and using this instead. Actual errors will still throw an
exception.
This also cuts down on log exception spam.
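A sketch of the resulting lookup pattern; the function shape is assumed
here and the hash type is a stand-in, since the point is only that
existence and height come from one non-throwing call.

    #include <array>
    #include <cstddef>
    #include <cstdint>

    using block_hash = std::array<uint8_t, 32>;   // stand-in for crypto::hash

    // assumed shape: returns whether the block exists, optionally its height
    bool block_exists(const block_hash &h, uint64_t *height);

    // scanning a peer-supplied hash list without throwing for unknown blocks
    bool find_first_known(const block_hash *hashes, size_t count, uint64_t &height)
    {
      for (size_t i = 0; i < count; ++i)
      {
        uint64_t h = 0;
        if (block_exists(hashes[i], &h))
        {
          height = h;    // same call yields the height, no second query
          return true;
        }
        // not found: just continue, no exception thrown, no log spam
      }
      return false;
    }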
The code used to cap at 5000 blocks per sync. It also treated 0 as 1.
Remove these checks; if specified as 0 do no periodic syncs at all.
Then the user is responsible for syncing in some external process.
When RingCT is enabled, outputs from coinbase transactions
are created as a single output, and stored as RingCT output,
with a fake mask. Their amount is not hidden on the blockchain
itself, but they are then able to be used as fake inputs in
a RingCT ring. Since the output amounts are hidden, their
"dustiness" is not an obstacle anymore to mixing, and this
makes the coinbase transactions a lot smaller, as well as
helping the TXO set to grow more slowly.
Also add a new "Null" type of rct signature, which decreases
the size required when no signatures are to be stored, as
in a coinbase tx.
The whole rct data apart from the MLSAGs is now included in
the signed message, to avoid malleability issues.
Instead of passing the data that's not serialized as extra
parameters to the verification API, the transaction is modified
to fill in all that information. This means the transaction can
not be const anymore, but it is cleaner in other ways.
Since these are needed at the same time as the output pubkeys,
this is a whole lot faster, and takes less space. Only outputs
of 0 amount store the commitment. When reading other outputs,
a fake commitment is regenerated on the fly. This avoids having
to rewrite the database to add space for fake commitments for
existing outputs.
This code relies on two things:
- LMDB must support fixed size records per key, rather than
per database (ie, all records on key 0 are the same size, all
records for non-0 keys are the same size, but records for key 0
and non-0 keys may have different sizes).
- the commitment must be directly after the rest of the data
in outkey and output_data_t.
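A layout sketch of that second requirement (field names are illustrative,
not the actual LMDB record definitions): because the commitment is the
trailing field, pre-existing records are simply the same struct without
it, and no database rewrite is needed.

    #include <cstdint>

    struct output_record_old          // what existing, non-RingCT outputs keep storing
    {
      uint8_t  pubkey[32];
      uint64_t unlock_time;
      uint64_t height;
    };

    struct output_record_rct          // what 0-amount (RingCT) outputs store
    {
      uint8_t  pubkey[32];
      uint64_t unlock_time;
      uint64_t height;
      uint8_t  commitment[32];        // appended last: old records are a truncated prefix
    };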
The mixRing (output keys and commitments) and II fields (key images)
can be reconstructed from vin data.
This saves some modest amount of space in the tx.
This plugs a privacy leak from the wallet to the daemon, as
the daemon could previously tell which ring member was the real
one being spent: it was the one the daemon had not itself
supplied. Now, the wallet requests a particular set of
outputs, including the real one.
This can result in transactions that can't be accepted if
the wallet happens to select too many outputs with non standard
unlock times. The daemon could know this and select another
output, but the wallet is blind to it. It's currently very
unlikely since I don't think anything uses non default
unlock times. The wallet requests more outputs than necessary
so it can use spares if any of the returned outputs are still
locked. If there are not enough spares to reach the desired
mixin, the transaction will fail.
This constrains the number of instances of any amount
to the unlocked ones (as defined by the default unlock time
setting: outputs with a non-default unlock time are not checked,
so they may be counted as unlocked even if they are not actually
unlocked).
It sets the max number of threads to use for a parallel job.
This is different from the total number of threads, since monero
binaries typically start a lot of them.
When reaching the tail emission phase, the amount of coins will
eventually go over MONEY_SUPPLY, overflowing 64 bits. There was
a check added to blockchain_storage, but this was not ported to
the blockchain DB version.
Reported by smooth.
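A minimal sketch of that kind of guard (not the exact blockchain DB code):
clamp the running total at the supply cap instead of letting the 64-bit
sum wrap around.

    #include <cstdint>

    // returns the new cumulative emission, capped so it never wraps past
    // money_supply (assumes already_generated <= money_supply on entry)
    uint64_t add_emission(uint64_t already_generated, uint64_t reward,
                          uint64_t money_supply)
    {
      if (reward > money_supply - already_generated)  // would exceed the cap
        return money_supply;
      return already_generated + reward;
    }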
d5d46e6 tests: obligatory hardfork unit build fix after interface change (moneromooo-monero)
25672d3 wallet: pass std::function by const ref, not value (moneromooo-monero)
0be6e08 wallet: do not leak owned amounts to the daemon unless --trusted-daemon (moneromooo-monero)
12146da wallet: change sweep_dust to sweep_unmixable (moneromooo-monero)
600a3cf New RPC and daemon command to get output histogram (moneromooo-monero)
f9a2fd2 wallet: handle rare case where fee adjustment can bump to the next kB (moneromooo-monero)
f26651a wallet: factor fee calculation (moneromooo-monero)