We need to keep these around for TAP and old-style hidden services,
but they're obsolete, and we shouldn't encourage anyone to use them.
So I've added "obsolete" to their names, and a comment explaining
what the problem is.
Closes ticket 23026.
- Move some crypto structures so that they are visible to tests.
- Introduce a function to count the number of hops in a cpath, which the
tests will use (a sketch follows this list).
- Mark a function as mockable.
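A minimal sketch of what such a hop-counting helper could look like,
assuming (as with Tor's crypt_path_t) that the cpath is a circular linked
list; the struct and function names here are illustrative, not the real ones:

  /* Simplified stand-in for Tor's crypt_path_t circular list. */
  typedef struct cpath_node_t {
    struct cpath_node_t *next;  /* The last hop points back to the head. */
  } cpath_node_t;

  static int
  count_cpath_hops(const cpath_node_t *head)
  {
    const cpath_node_t *node = head;
    int n_hops = 0;
    if (!head)
      return 0;
    do {
      ++n_hops;
      node = node->next;
    } while (node != head);
    return n_hops;
  }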
A fair number of our MOCK_IMPL declarations were messed up so that
even our special AM_ETAGSFLAGS couldn't find them.
This should be a whitespace-only patch.
Check size argument to memwipe() for underflow.
Closes bug #18089. Reported by "gk", patch by "teor".
Bugfix on 0.2.3.25 and 0.2.4.6-alpha (#7352),
commit 49dd5ef3 on 7 Nov 2012.
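A hedged sketch of the kind of check meant here: when the length passed to
memwipe() is computed as a difference of two size_t values, verify the
ordering first, since an underflow would wrap around to a huge size. The
helper below is illustrative; memwipe() is Tor's real function.

  static void
  wipe_unused_tail(uint8_t *buf, size_t total, size_t used)
  {
    if (used > total)
      return;  /* Would underflow: refuse to wipe a bogus length. */
    memwipe(buf + used, 0, total - used);
  }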
I got confused when I saw my Tor saying it was opening a file
that doesn't exist. It turns out it isn't opening it; it's just
calling open() on it and moving on when it's not there.
Tor 0.2.9 has a broader range of fixes and workarounds here, but for
0.2.8, we're just going to maintain the existing behavior.
(The alternative would be to backport both
1eba088054 and
16fcbd21c9 , but the latter is kind of
a subtle kludge in the configure.ac script, and I'm not a fan of
backporting that kind of thing.)
In our code to write public keys to a string, for some unfathomable
reason since 253f0f160e, we would allocate a memory BIO, then
set the NOCLOSE flag on it, extract its memory buffer, and free it.
Then a little while later we'd free the memory buffer with
BUF_MEM_free().
As of openssl 1.1 this doesn't work any more, since there is now a
BIO_BUF_MEM structure that wraps the BUF_MEM structure. This
BIO_BUF_MEM doesn't get freed in our code.
So, we had a memory leak!
Is this an openssl bug? Maybe. But our code was already pretty
silly. Why mess around with the NOCLOSE flag here when we can just
keep the BIO object around until we don't need the buffer any more?
Fixes bug 20553; bugfix on 0.0.2pre8
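A hedged sketch of the simpler approach described above, using standard
OpenSSL calls (the function name and the plain malloc() are illustrative;
Tor's own code uses its own allocators):

  #include <openssl/bio.h>
  #include <openssl/buffer.h>
  #include <openssl/pem.h>
  #include <openssl/rsa.h>
  #include <stdlib.h>
  #include <string.h>

  static char *
  public_key_to_string(RSA *rsa, size_t *len_out)
  {
    BIO *b = BIO_new(BIO_s_mem());
    BUF_MEM *buf = NULL;
    char *result = NULL;
    if (!b)
      return NULL;
    if (PEM_write_bio_RSAPublicKey(b, rsa)) {
      BIO_get_mem_ptr(b, &buf);
      if ((result = malloc(buf->length + 1))) {
        memcpy(result, buf->data, buf->length);
        result[buf->length] = '\0';
        if (len_out)
          *len_out = buf->length;
      }
    }
    /* Keep the BIO until we're done with its buffer, then free it normally:
     * no BIO_NOCLOSE, no manual BUF_MEM_free(), no leak. */
    BIO_free(b);
    return result;
  }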
Previously, the IV and key were stored in the structure, even though
they mostly weren't needed. The only purpose they had was to
support a seldom-used API where you could pass NULL when creating
a cipher in order to get a random key/IV, and then pull that key/IV
back out.
This saves 32 bytes per AES instance, and makes it easier to support
different key lengths.
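A hedged sketch of the replacement pattern, on the assumption that callers
can use Tor's crypto_rand() and crypto_cipher_new() directly (treat the
exact names and signatures here as my reading of the API, not gospel):

  char key[CIPHER_KEY_LEN];
  crypto_cipher_t *cipher;

  crypto_rand(key, sizeof(key));     /* Caller picks the random key... */
  cipher = crypto_cipher_new(key);   /* ...so the cipher needn't store it. */
  /* ... use the cipher; 'key' is still in hand if it needs to be encoded. */
  crypto_cipher_free(cipher);
  memwipe(key, 0, sizeof(key));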
We have a mock for our RSA key generation function, so we now wire
it to pk_generate(). This covers all the cases that were not using
pk_generate() before -- all ~93 of them.
The Autoconf macro AC_USE_SYSTEM_EXTENSIONS defines preprocessor macros
which turn on extensions to C and POSIX. The macro also makes it easier
for developers to use the extensions without needing (or forgetting) to
define them manually.
The macro can be safely used because it was introduced in Autoconf 2.60
and Tor requires Autoconf 2.63 or later.
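Illustrative effect only (a hedged sketch, not part of this change): with
AC_USE_SYSTEM_EXTENSIONS in configure.ac, the generated orconfig.h defines
feature-test macros such as _GNU_SOURCE before any system header is
included, so extension declarations like asprintf() are visible without
each file defining the macros by hand.

  #include "orconfig.h"   /* Must come before any system header. */
  #include <stdio.h>
  #include <stdlib.h>

  static char *
  format_port(int port)
  {
    char *s = NULL;
    /* asprintf() is an extension declared only under _GNU_SOURCE (or an
     * equivalent feature-test macro). */
    if (asprintf(&s, "port=%d", port) < 0)
      return NULL;
    return s;
  }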
There are accessors to get at things, but it ends up being rather
cumbersome. The only place where behavior should change is that the
code will fail instead of attempting to generate a new DH key if our
internal sanity check fails.
Like the previous commit, this probably breaks snapshots prior to pre5.
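A hedged sketch of the accessor-based pattern for OpenSSL 1.1, where the DH
structure is opaque; the surrounding function is illustrative:

  #include <openssl/bn.h>
  #include <openssl/dh.h>

  static int
  setup_dh(DH *dh, BIGNUM *p, BIGNUM *g)
  {
    const BIGNUM *pub = NULL, *priv = NULL;
    /* DH_set0_pqg() takes ownership of p and g on success; q may be NULL. */
    if (!DH_set0_pqg(dh, p, NULL, g))
      return -1;
    if (!DH_generate_key(dh))
      return -1;
    DH_get0_key(dh, &pub, &priv);
    /* The sanity check mentioned above would inspect the key here and fail
     * outright instead of quietly generating a new one. */
    return (pub && priv) ? 0 : -1;
  }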
Instead of `ERR_remove_thread_state()` having a modified prototype, it
now has the old prototype and a deprecation annotation. Since it's
pointless to add extra complexity just to remain compatible with an old
OpenSSL development snapshot, update the code to work with 1.1.0pre5
and later.
If OpenSSL fails to generate an RSA key, do not retain a dangling
pointer to the previous (uninitialized) key value. The impact here
should be limited to a difficult-to-trigger crash, if OpenSSL is
running an engine that makes key generation failures possible, or if
OpenSSL runs out of memory. Fixes bug 19152; bugfix on
0.2.1.10-alpha. Found by Yuan Jochen Kang, Suman Jana, and Baishakhi
Ray.
This is potentially scary stuff, so let me walk through my analysis.
I think this is a bug, and a backport candidate, but not remotely
triggerable in any useful way.
Observation 1a:
Looking over the OpenSSL code here, the only way we can really fail in
the non-engine case is if malloc() fails. But if malloc() is failing,
then tor_malloc() calls should be tor_asserting -- the only way that an
attacker could do an exploit here would be to figure out some way to
make malloc() fail when openssl does it, but work whenever Tor does it.
(Also ordinary malloc() doesn't fail on platforms like Linux that
overcommit.)
Observation 1b:
Although engines are _allowed_ to fail in extra ways, I can't find much
evidence online that they actually _do_ fail in practice. More evidence
would be nice, though.
Observation 2:
We don't call crypto_pk_generate*() all that often, and we don't do it
in response to external inputs. The only way to get it to happen
remotely would be by causing a hidden service to build new introduction
points.
Observation 3a:
So, let's assume that both of the above observations are wrong, and the
attacker can make us generate a crypto_pk_env_t with a dangling pointer
in its 'key' field, and not immediately crash.
This dangling pointer will point to what used to be an RSA structure,
with the fields all set to NULL. Actually using this RSA structure,
before the memory is reused for anything else, will cause a crash.
In nearly every function where we call crypto_pk_generate*(), we quickly
use the RSA key pointer -- either to sign something, or to encode the
key, or to free the key. The only exception is when we generate an
intro key in rend_consider_services_intro_points(). In that case, we
don't actually use the key until the intro circuit is opened -- at which
point we encode it, and use it to sign an introduction request.
So in order to exploit this bug to do anything besides crash Tor, the
attacker needs to make sure that by the time the introduction circuit
completes, either:
* the e, d, and n BNs look valid, and at least one of the other BNs is
still NULL.
OR
* all 8 of the BNs look valid.
To look like a valid BN, each one needs to have its 'top' index plus
its 'd' pointer indicate an addressable region in memory.
So actually getting useful data out of this, rather than a crash, is
going to be pretty damn hard. You'd have to force an introduction point
to be created (or wait for one to be created), and force that particular
crypto_pk_generate*() to fail, and then arrange for the memory that the
RSA structure points to, in turn, to point to 3...8 valid BNs, all by the time the
introduction circuit completes.
Naturally, the signature won't check as valid [*], so the intro point
will reject the ESTABLISH_INTRO cell. So you need to _be_ the
introduction point, or you don't actually see this information.
[*] Okay, so if you could somehow make the 'rsa' pointer point to a
different valid RSA key, then you'd get a valid signature of an
ESTABLISH_INTRO cell using a key that was supposed to be used for
something else ... but nothing else looks like that, so you can't use
that signature elsewhere.
Observation 3b:
Your best bet as an attacker would be to make the dangling RSA pointer
actually contain a fake method, with a fake RSA_private_encrypt
function that actually pointed to code you wanted to execute. You'd
still need to traverse 3 or 4 pointers deep, though, in order to make that
work.
Conclusion:
By 1, you probably can't trigger this without Tor crashing from OOM.
By 2, you probably can't trigger this reliably.
By 3, even if I'm wrong about 1 and 2, you have to jump through a pretty
big array of hoops in order to get any kind of data leak or code
execution.
So I'm calling it a bug, but not a security hole. Still worth
patching.
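For reference, a hedged sketch of the shape of the fix for bug 19152
described at the top of this message: generate into a temporary, and only
swap it in once OpenSSL reports success, so a failure never leaves a
dangling 'key' field behind. The wrapper is illustrative, not Tor's actual
function:

  #include <openssl/bn.h>
  #include <openssl/rsa.h>

  static int
  replace_rsa_key(RSA **key_inout, int bits)
  {
    RSA *new_key = RSA_new();
    BIGNUM *e = BN_new();
    int ok = new_key && e &&
             BN_set_word(e, 65537) &&
             RSA_generate_key_ex(new_key, bits, e, NULL);
    if (ok) {
      RSA_free(*key_inout);   /* Only now do we let go of the old key. */
      *key_inout = new_key;
      new_key = NULL;
    }
    RSA_free(new_key);        /* No-op on NULL; frees the half-made key
                               * if generation failed. */
    BN_free(e);
    return ok ? 0 : -1;
  }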
Apparently somewhere along the line we decided that MIN might be
missing.
But we already defined it (if it was missing) in compat.h, which
everybody includes.
Closes ticket 18889.
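For reference, the compat.h fallback is the usual guarded definition,
roughly:

  #ifndef MIN
  #define MIN(a, b) (((a) < (b)) ? (a) : (b))
  #endif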
This marks some lines as unreachable by the unit tests, so that they are
excluded from test coverage.
(Note: This convention is only for lines that are absolutely
unreachable. Don't use it anywhere you wouldn't add a
tor_fragile_assert().)
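A hedged example of the sort of line this applies to, assuming the usual
lcov exclusion markers alongside tor_fragile_assert(); the enum, case
labels, and return values are illustrative only:

  typedef enum { PURPOSE_GENERAL, PURPOSE_INTRO } purpose_t;

  static int
  dispatch(purpose_t purpose)
  {
    switch (purpose) {
      case PURPOSE_GENERAL:
        return 1;
      case PURPOSE_INTRO:
        return 2;
      default:
        /* Unreachable: every purpose value is handled above. */
        tor_fragile_assert();
        return -1;  /* LCOV_EXCL_LINE */
    }
  }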
Did you know that crypto_digest_all is a substring of
crypto_digest_alloc_bytes()? Hence the mysterious emergence of
"crypto_common_digestsoc_bytes".
Next time I should use the \b assertion in my regexen.
Spotted by Mike.
They are no longer "all" digests, but only the "common" digests.
Part of 17795.
This is an automated patch I made with a couple of perl one-liners:
perl -i -pe 's/crypto_digest_all/crypto_common_digests/g;' src/*/*.[ch]
perl -i -pe 's/\bdigests_t\b/common_digests_t/g;' src/*/*.[ch]
We use sensible parameters taken from common sources, and no longer
have dynamic DH groups as an option, but it feels prudent to have
OpenSSL validate p and g at initialization time.
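A hedged sketch of such a check using OpenSSL's DH_check(); the wrapper and
its policy are illustrative:

  #include <openssl/dh.h>

  /* Return 0 if OpenSSL is satisfied with p and g, -1 otherwise. */
  static int
  validate_dh_params(DH *dh)
  {
    int codes = 0;
    if (!DH_check(dh, &codes))
      return -1;   /* The check itself could not be run. */
    if ((codes & DH_CHECK_P_NOT_PRIME) ||
        (codes & DH_CHECK_P_NOT_SAFE_PRIME) ||
        (codes & DH_UNABLE_TO_CHECK_GENERATOR) ||
        (codes & DH_NOT_SUITABLE_GENERATOR))
      return -1;
    return 0;
  }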
OpenSSL doesn't use them, and as far as I can tell they were never called.
If some version of OpenSSL *does* start using them, we should test them before
we turn them back on.
See ticket 17926
This is an eXtendable-Output Function with the following claimed
security strengths against *all* adversaries:
Collision: min(d/2, 256)
Preimage: >= min(d, 256)
2nd Preimage: min(d, 256)
where d is the amount of output used, in bits.
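(For example, with d = 256 bits of output taken, that works out to
min(128, 256) = 128 bits of collision resistance and 256 bits of preimage
and 2nd-preimage resistance.)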
* DIGEST_SHA3_[256,512] added as supported algorithms, which do
exactly what is said on the tin.
* test/bench now benchmarks all of the supported digest algorithms,
so it's possible to see just how slow SHA-3 is. The message sizes could
probably use tweaking, since the results depend heavily on the message
size relative to the SHA-3 rate.
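(For context on that last point: SHA3-256 absorbs input 136 bytes per
permutation and SHA3-512 only 72 bytes, so very short messages mostly
measure padding and a single permutation call rather than steady-state
hashing throughput.)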