A timedsync is issued every minute on a connection, but the input
timeout is 2 minutes. This means a new sync request could be issued
while a slow sync request was already in progress. The additional
request will further clog the network on a slow connection, and
cause a premature timeout.
- http_simple_client now uses std::chrono for timeouts
- http_simple_client accepts timeouts per connect / invoke call
- shortened names of epee http invoke functions
- invoke command functions take only a relative path; the connection
is no longer performed automatically (see the sketch below)
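A minimal sketch of the resulting call pattern; the class, method names
and signatures below are assumptions based on the list above, not
verified against the epee headers:

    #include <chrono>

    // Hypothetical usage sketch, not the authoritative epee API.
    bool fetch_height(epee::net_utils::http::http_simple_client &client)
    {
      // The connection is no longer implicit; connect explicitly, with a
      // timeout that applies to this call only.
      if (!client.connect(std::chrono::seconds(10)))
        return false;

      // Invoke functions take only a relative path, plus their own timeout.
      const epee::net_utils::http::http_response_info *info = nullptr;
      return client.invoke_get("/getheight", std::chrono::seconds(30), "", &info);
    }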
This replaces the epee and data_loggers logging systems with
a single one, and also adds filename:line and explicit severity
levels. Categories may be defined, and logging severity set
by category (or set of categories). The epee-style 0-4 log levels
map to a sensible severity configuration. Log files now also
rotate when reaching 100 MB.
To select which logs to output, use the MONERO_LOGS environment
variable, with a comma separated list of categories (globs are
supported), with their requested severity level after a colon.
If a log matches more than one such setting, the last one in
the configuration string applies. A few examples:
This one is (mostly) silent, only outputting fatal errors:
MONERO_LOGS=*:FATAL
This one is very verbose:
MONERO_LOGS=*:TRACE
This one is totally silent (logwise):
MONERO_LOGS=""
This one outputs all errors and warnings, except for the
"verify" category, which prints just fatal errors (the verify
category is used for logs about incoming transactions and
blocks, and it is expected that some/many will fail to verify,
hence we don't want the spam):
MONERO_LOGS=*:WARNING,verify:FATAL
Log levels are, in decreasing order of priority:
FATAL, ERROR, WARNING, INFO, DEBUG, TRACE
Subcategories may be added using prefixes and globs. This
example will output net.p2p logs at the TRACE level, but all
other net* logs only at INFO:
MONERO_LOGS=*:ERROR,net*:INFO,net.p2p:TRACE
Logs which are intended for the user (which Monero was using
a lot through epee, but really isn't a nice way to do things)
should use the "global" category. There are a few helper macros
for using this category, eg: MGINFO("this shows up by default")
or MGINFO_RED("this is red"), to try to keep a similar look
and feel for now.
Existing epee log macros still exist, and map to the new log
levels, but since they're used as a "user facing" UI element
as much as a logging system, they often don't map well to log
severities (ie, a log level 0 log may be an error, or may be
something we want the user to see, such as an important informational
message).
In those cases, I tried to use the new macros. In other cases,
I left the existing macros in. When modifying logs, it is
probably best to switch to the new macros with explicit levels.
The --log-level options and set_log commands now also accept
category settings, in addition to the epee style log levels.
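For example, the same category string shown above should now be
usable both on the command line and at the daemon prompt (shown here
purely as an illustration of the syntax, using the existing option
and command names):
--log-level=*:WARNING,verify:FATAL
set_log *:WARNING,verify:FATAL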
5783dd8c tests: add unit tests for uri parsing (moneromooo-monero)
82ba2108 wallet: add API and RPC to create/parse monero: URIs (moneromooo-monero)
d9001b43 epee: add functions to convert from URL format (ie, %XX values) (moneromooo-monero)
When receiving an answer packet, the command code was passed
to the callback instead of the error code. This was hiding
the "command not found" failure from the peer, and in turn
causing the code to attempt to deserialize a non-existent
reply string.
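A hypothetical sketch of the fix, not the actual epee levin code
(names and types here are purely illustrative):

    #include <cstdint>
    #include <functional>
    #include <string>

    // Illustrative only: when an answer packet arrives, hand the peer's
    // return code to the callback, not the command identifier; otherwise
    // failures such as "command not found" stay hidden and an empty reply
    // buffer gets deserialized anyway.
    void on_answer(uint32_t command, int32_t return_code, const std::string &reply_buf,
                   const std::function<void(int32_t, const std::string &)> &callback)
    {
      (void)command;                     // not what the callback needs
      callback(return_code, reply_buf);  // was: callback(command, reply_buf)
    }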
This is intended to catch traffic coming from a web browser,
so we avoid issues with a web page sending a transfer RPC to
the wallet. Requiring a particular user agent can act as a
simple password scheme, while we wait for 0MQ and proper
authentication to be merged.
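A rough sketch of the kind of check described, with a hypothetical
helper name (the real check lives in the HTTP server layer):

    #include <string>

    // Hypothetical illustration: reject RPC requests whose User-Agent
    // header does not match the configured value, as a stop-gap until
    // proper authentication is available.
    bool user_agent_allowed(const std::string &configured_agent,
                            const std::string &request_agent)
    {
      if (configured_agent.empty())
        return true;                      // no restriction configured
      return request_agent == configured_agent;
    }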
The noexcept specs were added to make GCC 6.1.1 happy (#846), but this
one was missing (because GCC did not complain about it on Linux, but
does complain on OSX).
The destructors get a noexcept(true) spec by default, but these
destructors in fact throw exceptions. An alternative fix might be to not
throw (most if not all of these throws are non-essential
error-reporting/logging).
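A small standalone example of the underlying C++ rule, independent of
the Monero code: destructors are noexcept(true) by default, so a
destructor that throws needs an explicit noexcept(false) spec (or must
not throw at all), otherwise std::terminate is called:

    #include <stdexcept>

    struct reporter
    {
      // Without noexcept(false), the implicit noexcept(true) spec means a
      // throw here terminates the program instead of propagating.
      ~reporter() noexcept(false)
      {
        throw std::runtime_error("error detected during cleanup");
      }
    };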
When the send queue limit is reached, it is likely to not drain
any time soon. If we call close on the connection, it will stay
alive, waiting for the queue to drain before actually closing,
and will hit that check again and again. Since the queue size
limit is the reason we're closing in the first place, we call
shutdown directly.
If we reach the send queue size limit, we need to release the lock,
or we will deadlock and it will never drain.
If we reach that limit, though, it's likely there's another problem in
the first place, so the queue will probably not drain in practice either,
unless the cause was some kind of transient network timeout.
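A hypothetical sketch of the combined behaviour described by these two
changes; the real code is epee's connection class, and the names below
are illustrative only:

    #include <cstddef>
    #include <deque>
    #include <functional>
    #include <mutex>
    #include <string>
    #include <utility>

    // Illustrative only, not the epee code: on hitting the send queue size
    // limit, release the queue lock first (so other threads are not
    // deadlocked on it), then shut the connection down directly instead of
    // requesting a graceful close that would wait for the already-full
    // queue to drain.
    bool queue_for_send(std::deque<std::string> &send_queue, std::mutex &queue_mutex,
                        std::string chunk, std::size_t queue_limit,
                        const std::function<void()> &shutdown)
    {
      std::unique_lock<std::mutex> lock(queue_mutex);
      if (send_queue.size() > queue_limit)
      {
        lock.unlock();   // keep holding it and the queue can never drain
        shutdown();      // close() would wait on the very queue that is full
        return false;
      }
      send_queue.push_back(std::move(chunk));
      return true;
    }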
Since connections from the ::connect method are now kept in
a deque to be able to cancel them on exit, this leaks both
memory and a file descriptor. Here, we clean those up after
30 seconds, to avoid this. 30 seconds is higher than the
5 second timeout used in the async code, so this should be
safe. However, this is an assumption which would break if
that async code were to start relying on longer timeouts.
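A sketch of the pruning described, with assumed types (the actual code
keeps epee connection pointers in the deque):

    #include <chrono>
    #include <deque>
    #include <memory>
    #include <utility>

    struct connection;  // stand-in for the real epee connection type

    using time_point = std::chrono::steady_clock::time_point;
    using entry = std::pair<time_point, std::shared_ptr<connection>>;

    // Drop connections queued more than 30 seconds ago; 30 seconds is
    // comfortably above the 5 second timeout used by the async connect
    // code, so anything older should no longer have pending work.
    void prune_old_connections(std::deque<entry> &connections)
    {
      const time_point cutoff = std::chrono::steady_clock::now() - std::chrono::seconds(30);
      while (!connections.empty() && connections.front().first < cutoff)
        connections.pop_front();
    }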
When the boost ioservice is stopped, pending work notifications
will not happen. This includes deadline timers, which would
otherwise time out the now cancelled I/O operations. When this
happens just after starting a new connect operation, this can
leave that operation in a state where it will receive neither
the completion notification nor a timeout, causing a hang.
This is fixed by keeping a list of connections corresponding
to the connect operations, and cancelling them before stopping
the boost ioservice.
Note that the list of these connections can grow unbounded, as
they're never cleaned up. Cleaning them up would involve
working out which connections do not have any pending work,
and it's not quite clear yet how to go about this.
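A minimal boost::asio sketch of the shutdown ordering described, with
the connection tracking simplified to a deque of sockets (the real code
tracks epee connection objects):

    #include <boost/asio.hpp>
    #include <deque>
    #include <memory>

    using boost::asio::ip::tcp;

    // Cancel (by closing) every tracked connect-in-progress socket before
    // stopping the io_service, so each pending async_connect gets its
    // completion handler invoked with operation_aborted instead of being
    // silently dropped along with its deadline timer.
    void stop_server(boost::asio::io_service &io_service,
                     std::deque<std::shared_ptr<tcp::socket>> &pending_connections)
    {
      for (const auto &sock : pending_connections)
      {
        boost::system::error_code ec;
        sock->close(ec);   // also cancels outstanding async operations
      }
      io_service.stop();
    }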