* CHANGE .travis.yml so that commands for different purposes (e.g. getting
dependencies, building, testing) are in separate config lines and sections.
* CHANGE .travis.yml to use Travis CI's mechanism for installing dependencies
via apt. [0] This also lets us avoid needing sudo (the "sudo: false" line).
* CHANGE Travis CI tests (the "script:" section) to build and run tests in the
same manner as Jenkins (i.e. configure with --enable-fatal-warnings and
--disable-silent-rules, then run `make check`).
* ADD Travis configuration to do all the target builds with both GCC and clang.
* ADD make flags to build using both of the available cores.
* ADD notifications for IRC, and configure email notifications (to the author
of the commit) only if the branch was previously building successfully and
the latest commit broke it.
* ADD the ability to run the Travis build matrix for OSX as well, but leave it
commented out by default (because it takes roughly ten times longer, due to a
shortage of OSX build machines).
* ADD Travis config option to cancel/fail the build early if one target has
already failed ("fast_finish: true").
* ADD comments to describe what our Travis config is doing and why it is
configured that way. (A minimal sketch of the overall layout follows the
footnote below.)
[0]: https://docs.travis-ci.com/user/installing-dependencies/#Installing-Packages-on-Container-Based-Infrastructure
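For concreteness, here is a minimal sketch of the kind of .travis.yml layout
these changes describe. The package list, IRC channel, and exact job entries
are placeholders for illustration, not the real configuration.

  language: c
  sudo: false            # container-based infrastructure; no sudo needed
  compiler:
    - gcc
    - clang
  os:
    - linux
    ## Uncomment to also run the matrix on OSX (roughly ten times slower):
    # - osx
  addons:
    apt:
      packages:          # dependencies installed via the apt addon [0]
        - libevent-dev   # placeholder package list
        - libssl-dev
  matrix:
    fast_finish: true    # stop early once one target has already failed
  script:
    - ./autogen.sh
    - ./configure --enable-fatal-warnings --disable-silent-rules
    - make -j2           # use both available cores
    - make -j2 check
  notifications:
    irc:
      channels:
        - "irc.example.net#placeholder-channel"
    email:
      on_success: never
      on_failure: change # mail only when a previously passing branch breaks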
Installs dependencies (including rust) and runs the existing test suite.
TODO: Introduce build matrix utilizing the rust toolchain to run test
suites both with and without the rust components.
This mistake causes two possible bugs. I believe they are both
harmless IRL.
BUG 1: memory stomping
When we call the memset, we are overwriting two 0 bytes past the end
of packed_cell_t.body. But I think that's harmless in practice,
because the definition of packed_cell_t is:
  // ...
  typedef struct packed_cell_t {
    TOR_SIMPLEQ_ENTRY(packed_cell_t) next;
    char body[CELL_MAX_NETWORK_SIZE];
    uint32_t inserted_time;
  } packed_cell_t;
So we will overwrite either two bytes of inserted_time, or two bytes
of padding, depending on how the platform handles alignment.
If we're overwriting padding, that's safe.
If we are overwriting the inserted_time field, that's also safe: In
every case where we call cell_pack() from connection_or.c, we ignore
the inserted_time field. When we call cell_pack() from relay.c, we
don't set or use inserted_time until right after we have called
cell_pack(). So I believe we're safe in that case too.
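To make the padding question concrete, here is a small standalone C sketch
(not Tor code; the simplified "next" member and the CELL_MAX_NETWORK_SIZE
value are assumptions) that reports where two bytes written just past body[]
would land on a given platform:

  #include <stdio.h>
  #include <stddef.h>
  #include <stdint.h>

  #define CELL_MAX_NETWORK_SIZE 514  /* assumed value for the sketch */

  typedef struct packed_cell_t {
    void *next;  /* stands in for TOR_SIMPLEQ_ENTRY(packed_cell_t) */
    char body[CELL_MAX_NETWORK_SIZE];
    uint32_t inserted_time;
  } packed_cell_t;

  int main(void)
  {
    size_t end_of_body =
      offsetof(packed_cell_t, body) + CELL_MAX_NETWORK_SIZE;
    size_t padding =
      offsetof(packed_cell_t, inserted_time) - end_of_body;
    /* If padding >= 2, the stray memset only touches padding bytes;
     * otherwise it clobbers the low-order bytes of inserted_time. */
    printf("padding bytes between body and inserted_time: %zu\n", padding);
    return 0;
  }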
BUG 2: memory exposure
The original reason for this memset was to avoid the possibility of
accidentally leaking uninitialized RAM to the network. Now
remember, if wide_circ_ids is false on a connection, we shouldn't
actually be sending more than 512 bytes of packed_cell_t.body, so
these two bytes can only leak to the network if there is another bug
somewhere else in the code that sends more data than is correct.
Fortunately, in relay.c, where we allocate packed_cell_t in
packed_cell_new(), we allocate it with tor_malloc_zero(), which
clears the RAM, right before we call cell_pack(). So those
packed_cell_t.body bytes can't leak any information.
That leaves the two calls to cell_pack() in connection_or.c, which
use stack-allocated packed_cell_t instances.
In or_handshake_state_record_cell(), we pass the cell's contents to
crypto_digest_add_bytes(). When we do so, we get the number of
bytes to pass using the same setting of wide_circ_ids as we passed
to cell_pack(). So I believe that's safe.
In connection_or_write_cell_to_buf(), we also use the same setting
of wide_circ_ids in both calls. So I believe that's safe too.
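As a sketch of the pattern that argument relies on (simplified; the real
callers in connection_or.c are more involved, and the 514/512 sizes plus the
helper function shown here are assumptions), the same wide_circ_ids value
controls both how cell_pack() fills the body and how many bytes the caller
reads back out:

  /* Hedged sketch, not the actual connection_or.c code. */
  static void
  digest_packed_cell(crypto_digest_t *digest, const cell_t *cell,
                     int wide_circ_ids)
  {
    packed_cell_t packed;  /* stack-allocated, like the callers above */
    size_t cell_network_size = wide_circ_ids ? 514 : 512;  /* assumed sizes */

    cell_pack(&packed, cell, wide_circ_ids);
    /* Only the bytes cell_pack() defined for this setting of wide_circ_ids
     * are consumed, so the uninitialized tail of body[] never leaks. */
    crypto_digest_add_bytes(digest, packed.body, cell_network_size);
  }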
I introduced this bug with 1c0e87f6d8
back in 0.2.4.11-alpha; it is bug 22737 and CID 1401591
On a hidden service rendezvous circuit, a BEGIN_DIR cell could be sent
(maliciously) which would trigger a tor_assert(), because
connection_edge_process_relay_cell() assumed the circuit was an
or_circuit_t when it is actually an origin circuit.
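A minimal sketch of the kind of guard the fix implies (the exact placement
and log message are assumptions; CIRCUIT_IS_ORIGIN and TO_OR_CIRCUIT are
existing Tor helpers named here only for illustration):

  /* Hedged sketch, not the actual patch: reject the cell instead of
   * asserting when the circuit turns out to be an origin circuit. */
  if (CIRCUIT_IS_ORIGIN(circ)) {
    log_fn(LOG_PROTOCOL_WARN, LD_PROTOCOL,
           "Relay begin cell on an origin (rendezvous) circuit; dropping.");
    return 0;
  }
  /* Only now is it safe to treat the circuit as an or_circuit_t. */
  or_circuit_t *or_circ = TO_OR_CIRCUIT(circ);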
Fixes #22494
Reported-by: Roger Dingledine <arma@torproject.org>
Signed-off-by: David Goulet <dgoulet@torproject.org>
Fix for TROVE-2017-001 and bug 21278.
(Note: Instead of handling signed ints "correctly", we keep the old
behavior, except for the part where we would crash with -ftrapv.)
Check size argument to memwipe() for underflow.
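A minimal sketch of the kind of check being added (the exact bound and the
assertion style are assumptions, not the actual patch; the wiping itself is
elided):

  void
  memwipe(void *mem, uint8_t byte, size_t sz)
  {
    if (sz == 0)
      return;                        /* nothing to wipe */
    tor_assert(mem != NULL);
    /* A size computed from a subtraction that underflowed would be close
     * to SIZE_MAX; refuse it rather than trying to wipe most of the
     * address space.  The exact ceiling here is an assumption. */
    tor_assert(sz < SIZE_MAX / 2);
    /* ... actual wiping of sz bytes of mem follows ... */
  }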
Closes bug #18089. Reported by "gk", patch by "teor".
Bugfix on 0.2.3.25 and 0.2.4.6-alpha (#7352),
commit 49dd5ef3 on 7 Nov 2012.
The length of auth_data from an INTRODUCE2 cell is checked when the
auth_type is recognized (1 or 2), but not for any other non-zero
auth_type. Later, auth_data is assumed to have at least
REND_DESC_COOKIE_LEN bytes, leading to a client-triggered out-of-bounds
read.
Fixed by checking auth_len before comparing the descriptor cookie
against known clients.
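A sketch of the shape of that check (variable names follow the description
above; the surrounding parsing code, return value, and log message are
assumptions):

  /* Hedged sketch, not the actual patch: require enough auth_data for a
   * descriptor cookie before comparing it against known clients, for any
   * non-zero auth_type rather than only the recognized ones. */
  if (auth_type != 0) {
    if (auth_len < REND_DESC_COOKIE_LEN) {
      log_warn(LD_REND, "Authentication data in INTRODUCE2 cell is too "
               "short; dropping cell.");
      return -1;
    }
    /* Safe to read REND_DESC_COOKIE_LEN bytes of auth_data from here on. */
  }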
Fixes #15823; bugfix on 0.2.1.6-alpha.
In get_token(), we could read one byte past the end of the
region. This is only a big problem in the case where the region
itself is (a) potentially hostile, and (b) not explicitly
NUL-terminated.
This patch fixes the underlying bug, and also makes sure that the
one remaining case of not-NUL-terminated potentially hostile data
gets NUL-terminated.
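A sketch of the underlying pattern (simplified; not the actual get_token()
code, and the helper below is hypothetical): the bound has to be tested
before the byte is read, or we dereference one byte past the region.

  /* Hedged sketch: find the end of a token without reading past 'end'.
   * Checking 'cp < end' before dereferencing *cp is what prevents the
   * one-byte overread. */
  static const char *
  token_end(const char *cp, const char *end)
  {
    while (cp < end && !TOR_ISSPACE(*cp))
      ++cp;
    return cp;
  }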
Fix for bug 21018, TROVE-2016-12-002, and CVE-2016-1254
If OpenSSL fails to generate an RSA key, do not retain a dangling
pointer to the previous (uninitialized) key value. The impact here
should be limited to a difficult-to-trigger crash, if OpenSSL is
running an engine that makes key generation failures possible, or if
OpenSSL runs out of memory. Fixes bug 19152; bugfix on
0.2.1.10-alpha. Found by Yuan Jochen Kang, Suman Jana, and Baishakhi
Ray.
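A sketch of the failure path being fixed (simplified; the function shape and
names are assumptions, not the actual crypto.c patch): on failure, free the
half-built objects and leave the existing key pointer untouched instead of
storing a pointer to a key that was never generated.

  #include <openssl/bn.h>
  #include <openssl/rsa.h>

  /* Hedged sketch: only overwrite env->key once generation succeeded. */
  static int
  generate_key_sketch(crypto_pk_env_t *env, int bits)
  {
    BIGNUM *e = BN_new();
    RSA *rsa = RSA_new();
    if (!e || !rsa || !BN_set_word(e, 65537) ||
        !RSA_generate_key_ex(rsa, bits, e, NULL)) {
      BN_free(e);
      RSA_free(rsa);
      return -1;  /* env->key is left as it was; no dangling pointer */
    }
    BN_free(e);
    if (env->key)
      RSA_free(env->key);
    env->key = rsa;
    return 0;
  }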
This is potentially scary stuff, so let me walk through my analysis.
I think this is a bug, and a backport candidate, but not remotely
triggerable in any useful way.
Observation 1a:
Looking over the OpenSSL code here, the only way we can really fail in
the non-engine case is if malloc() fails. But if malloc() is failing,
then tor_malloc() calls should be tor_asserting -- the only way that an
attacker could do an exploit here would be to figure out some way to
make malloc() fail when OpenSSL does it, but work whenever Tor does it.
(Also ordinary malloc() doesn't fail on platforms like Linux that
overcommit.)
Observation 1b:
Although engines are _allowed_ to fail in extra ways, I can't find much
evidence online that they actually _do_ fail in practice. More evidence
would be nice, though.
Observation 2:
We don't call crypto_pk_generate*() all that often, and we don't do it
in response to external inputs. The only way to get it to happen
remotely would be by causing a hidden service to build new introduction
points.
Observation 3a:
So, let's assume that both of the above observations are wrong, and the
attacker can make us generate a crypto_pk_env_t with a dangling pointer
in its 'key' field, and not immediately crash.
This dangling pointer will point to what used to be an RSA structure,
with the fields all set to NULL. Actually using this RSA structure,
before the memory is reused for anything else, will cause a crash.
In nearly every function where we call crypto_pk_generate*(), we quickly
use the RSA key pointer -- either to sign something, or to encode the
key, or to free the key. The only exception is when we generate an
intro key in rend_consider_services_intro_points(). In that case, we
don't actually use the key until the intro circuit is opened -- at which
point we encode it, and use it to sign an introduction request.
So in order to exploit this bug to do anything besides crash Tor, the
attacker needs to make sure that by the time the introduction circuit
completes, either:
* the e, d, and n BNs look valid, and at least one of the other BNs is
still NULL.
OR
* all 8 of the BNs look valid.
To look like a valid BN, each of them needs to have its 'top' index plus
its 'd' pointer indicate an addressable region in memory.
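For reference, the fields in question look roughly like this in OpenSSL
1.0.x (internal layout, shown only to make the 'top' / 'd' requirement
concrete; treat the exact definition as an assumption):

  struct bignum_st {
    BN_ULONG *d;   /* limb array; must point somewhere addressable */
    int top;       /* number of limbs considered in use */
    int dmax;      /* allocated size of d, in limbs */
    int neg;       /* sign flag */
    int flags;
  };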
So actually getting useful data out of this, rather than a crash, is
going to be pretty damn hard. You'd have to force an introduction point
to be created (or wait for one to be created), and force that particular
crypto_pk_generate*() to fail, and then arrange for the memory that the
RSA structure points to, in turn, to point to 3...8 valid BNs, all by the
time the introduction circuit completes.
Naturally, the signature won't check as valid [*], so the intro point
will reject the ESTABLISH_INTRO cell. So you need to _be_ the
introduction point, or you don't actually see this information.
[*] Okay, so if you could somehow make the 'rsa' pointer point to a
different valid RSA key, then you'd get a valid signature of an
ESTABLISH_INTRO cell using a key that was supposed to be used for
something else ... but nothing else looks like that, so you can't use
that signature elsewhere.
Observation 3b:
Your best bet as an attacker would be to make the dangling RSA pointer
actually contain a fake method, with a fake RSA_private_encrypt
function that actually pointed to code you wanted to execute. You'd
still need to chase 3 or 4 pointers deep, though, in order to make that
work.
Conclusion:
By 1, you probably can't trigger this without Tor crashing from OOM.
By 2, you probably can't trigger this reliably.
By 3, even if I'm wrong about 1 and 2, you have to jump through a pretty
big array of hoops in order to get any kind of data leak or code
execution.
So I'm calling it a bug, but not a security hole. Still worth
patching.