For printf, %f and %lf are synonymous, since floats are promoted to
doubles when passed as varargs. It's only for scanf that we need to
say "%lf" for doubles and "%f" for floats.
Apparently, some older compilers think it's naughty to say %lf and like
to spew warnings about it.
Found by grarpamp.
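For reference, a minimal standalone C illustration of the distinction
(not code from this change):

    #include <stdio.h>

    int main(void)
    {
      float f = 1.5f;
      double d = 2.5;

      /* printf: float arguments are promoted to double as varargs,
       * so "%f" works for floats and doubles alike. */
      printf("%f %f\n", f, d);

      /* scanf: arguments are pointers, so the length modifier matters:
       * "%f" expects a float*, "%lf" expects a double*. */
      if (scanf("%f %lf", &f, &d) != 2)
        return 1;
      return 0;
    }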
When we added the check for key size, we required that keys be
128 bytes long. But RSA_size (which defers to BN_num_bytes) returns
128 for any key of 1017..1024 bits. This patch adds a new
crypto_pk_num_bits() that returns the actual number of significant
bits in the modulus, and uses that to enforce key sizes.
Also, credit the original bug3318 in the changes file.
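A minimal sketch of what such a helper can look like on top of OpenSSL,
assuming the key wrapper gives access to the RSA modulus; the name here
is illustrative, not the actual Tor function:

    #include <openssl/rsa.h>
    #include <openssl/bn.h>

    /* Hypothetical sketch: return the number of significant bits in an
     * RSA modulus.  Unlike RSA_size()*8, this distinguishes a 1017-bit
     * key from a 1024-bit one.  (Direct field access shown for older
     * OpenSSL, where the RSA struct is not opaque.) */
    static int
    rsa_num_bits(const RSA *rsa)
    {
      return BN_num_bits(rsa->n);
    }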
When we introduced NEED_KEY_1024 in routerparse.c back in
0.2.0.1-alpha, I forgot to add a *8 when logging the length of a
bad-length key.
Bugfix for 3318 on 0.2.0.1-alpha.
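The shape of the fix is just a unit conversion at the logging call site;
a hedged, hypothetical sketch (function and variable names are made up
for illustration):

    #include <stdio.h>

    /* Hypothetical sketch: key lengths are handled in bytes internally,
     * so the warning must multiply by 8 to report bits. */
    static void
    warn_bad_key_length(size_t key_len_bytes)
    {
      fprintf(stderr, "Received a %zu-bit key; expected a 1024-bit key.\n",
              key_len_bytes * 8);
    }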
Rename crypto_pk_check_key_public_exponent to crypto_pk_public_exponent_ok:
it's nice to name predicates such that you can tell how to interpret true
and false.
This patch introduces a few new functions in router.c to produce a
more helpful description of a node than its nickname, and then tweaks
nearly all log messages taking a nickname as an argument to call these
functions instead.
There are a few cases where I left the old log messages alone: in
these cases, the nickname was that of an authority (whose nicknames
are useful and unique), or the message already included an identity
and/or an address. I might have missed a couple more too.
This is a fix for bug 3045.
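A hedged sketch of the general idea, with a hypothetical helper name and
format; the real functions live in router.c as described above:

    #include <stdio.h>

    /* Hypothetical sketch: instead of logging a bare (possibly
     * non-unique) nickname, build a description that also carries the
     * identity fingerprint and address. */
    static const char *
    describe_node(char *buf, size_t buflen,
                  const char *nickname, const char *id_hex,
                  const char *address)
    {
      snprintf(buf, buflen, "$%s~%s at %s", id_hex, nickname, address);
      return buf;
    }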
Conflicts throughout. All resolved in favor of taking HEAD and
adding tor_mem* or fast_mem* ops as appropriate.
src/common/Makefile.am
src/or/circuitbuild.c
src/or/directory.c
src/or/dirserv.c
src/or/dirvote.c
src/or/networkstatus.c
src/or/rendclient.c
src/or/rendservice.c
src/or/router.c
src/or/routerlist.c
src/or/routerparse.c
src/or/test.c
Here I looked at the results of the automated conversion and cleaned
them up as follows:
- If there was a tor_memcmp or tor_memeq that was in fact "safe"[*], I
  changed it to a fast_memcmp or fast_memeq.
- Otherwise, if there was a tor_memcmp that could turn into a
  tor_memneq or tor_memeq, I converted it.
This wants close attention.
[*] I'm erring on the side of caution here, and leaving some things
as tor_memcmp that could in my opinion use the data-dependent
fast_memcmp variant.
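For context, a hedged standalone sketch of the difference between the
two families of comparison functions; these are illustrative stand-ins,
not the real tor_memeq/fast_memeq implementations:

    #include <string.h>
    #include <stddef.h>

    /* The "fast" variant is an ordinary memcmp, fine for non-secret data
     * such as parsing. */
    static int
    fast_memeq_sketch(const void *a, const void *b, size_t n)
    {
      return memcmp(a, b, n) == 0;
    }

    /* The "safe" variant runs in time independent of where the buffers
     * differ, so it does not leak the position of a mismatch when
     * comparing secrets. */
    static int
    safe_memeq_sketch(const void *a, const void *b, size_t n)
    {
      const unsigned char *pa = a, *pb = b;
      unsigned char acc = 0;
      size_t i;
      for (i = 0; i < n; ++i)
        acc |= pa[i] ^ pb[i];   /* accumulate differences without branching */
      return acc == 0;
    }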
This fixes a remotely triggerable assert on directory authorities, which
don't handle descriptors with IPv6 contents well yet. We will want to
revert this once we're ready to handle IPv6.
Issue raised by lorth on #tor, who wasn't able to use Tor anymore.
Analyzed with help from Christian Fromme. Fix suggested by arma. Bugfix
on 0.2.1.3-alpha.
If we got a signed digest that was shorter than the required digest
length, but longer than 20 bytes, we would accept it as long
enough.... and then immediately fail when we want to check it.
Fixes bug 2409; bug in 0.2.2.20-alpha; found by piebeer.
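A hedged sketch of the corrected length check, with hypothetical names;
the point is to compare against the required digest length rather than
the 20-byte minimum:

    #include <stddef.h>

    /* Hypothetical sketch: a signed digest must be at least as long as
     * the digest algorithm requires, not merely longer than the 20-byte
     * SHA-1 minimum the old code tested against. */
    static int
    signed_digest_len_ok(size_t siglen, size_t required_digest_len)
    {
      return siglen >= required_digest_len;
    }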
We need to make sure that the worst thing a weird consensus param can
do to us is break our Tor (and only if the other Tors are reliably
broken in the same way), so that a majority of directory authorities
can't pull off any attack worse than the DoS they could trigger by
simply shutting down.
One of these worse things was the cbtnummodes parameter, which could
lead to heap corruption on some systems if the value was sufficiently
large.
This commit fixes this particular issue and also introduces sanity
checking for all consensus parameters.
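A hedged sketch of the kind of sanity check this introduces; the helper
name and types are illustrative, not the actual consensus-parameter API:

    #include <stdint.h>

    /* Hypothetical sketch: every value taken from the consensus is
     * clamped to a range the code can tolerate, so a hostile or bogus
     * parameter cannot drive allocations or array sizes past what we
     * expect. */
    static int32_t
    clamp_consensus_param(int32_t value, int32_t min_val, int32_t max_val)
    {
      if (value < min_val)
        return min_val;
      if (value > max_val)
        return max_val;
      return value;
    }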
Our public key functions assumed that they were always writing into a
large enough buffer. In one case, they weren't.
(Incorporates fixes from sebastian)
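A hedged, hypothetical sketch of the missing check, using the
OpenSSL 1.0-era RSA API for illustration:

    #include <openssl/rsa.h>

    /* Hypothetical sketch: before writing an RSA-sized result, verify
     * the caller's buffer can actually hold it instead of assuming it
     * is large enough. */
    static int
    checked_public_encrypt(RSA *key, unsigned char *to, size_t tolen,
                           const unsigned char *from, int fromlen)
    {
      if ((size_t)RSA_size(key) > tolen)
        return -1;                 /* output would not fit: refuse */
      return RSA_public_encrypt(fromlen, from, to, key,
                                 RSA_PKCS1_OAEP_PADDING);
    }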
An object, you'll recall, is something between -----BEGIN----- and
-----END----- tags in a directory document. Some of our code, as
doorss has noted in bug 2352, could assert if one of these ever
overflowed SIZE_T_CEILING but not INT_MAX. As a solution, I'm setting
a maximum size on a single object such that neither of these limits
will ever be hit. I'm also fixing the INT_MAX checks, just to be sure.
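A hedged sketch of the approach, with an illustrative constant standing
in for the actual cap:

    #include <stddef.h>

    /* Illustrative stand-in for the cap on one BEGIN/END object. */
    #define MAX_UNPARSED_OBJECT_SIZE (128*1024)

    /* By rejecting any single object above a fixed cap far below both
     * INT_MAX and the size_t ceiling, neither overflow check can ever
     * be reached, let alone tripped as an assertion. */
    static int
    object_size_ok(size_t body_len)
    {
      return body_len <= MAX_UNPARSED_OBJECT_SIZE;
    }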
We would never actually enforce multiplicity rules when parsing
annotations, because the counts array never got entries for annotations
that earlier calls to tokenize_string had already added to the token
list.
Found by piebeer.
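A hedged, simplified sketch of the idea behind the fix; the token
structure and helper are hypothetical:

    #include <stddef.h>

    typedef struct token_t { int type; struct token_t *next; } token_t;

    /* Before checking multiplicity rules, count the tokens already on
     * the list from earlier tokenizer calls, not just the ones added by
     * the current call. */
    static void
    count_existing_tokens(const token_t *list, int *counts, int n_types)
    {
      const token_t *tok;
      for (tok = list; tok; tok = tok->next) {
        if (tok->type >= 0 && tok->type < n_types)
          ++counts[tok->type];
      }
    }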