Each piece of queued work now has an associated priority value; each
priority goes on a separate queue.
With probability (N-1)/N, the workers will take work from the
highest-priority nonempty queue. Otherwise, they'll look for work in
a queue of lower priority. This behavior is meant to keep
lower-priority tasks from starving.
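For illustration, here is a minimal sketch of that selection rule,
taking N to be the number of priority levels; the queue layout and
helper names are simplified stand-ins, not Tor's actual workqueue code.

#include <stddef.h>
#include <stdlib.h>

#define N_PRIORITIES 3               /* assumed number of priorities ("N") */

typedef struct work_item_t {
  struct work_item_t *next;
  void (*fn)(void *arg);
  void *arg;
} work_item_t;

static work_item_t *queues[N_PRIORITIES];  /* index 0 == highest priority */

static work_item_t *
pop(int p)
{
  work_item_t *w = queues[p];
  if (w)
    queues[p] = w->next;
  return w;
}

/* Choose the next piece of work for a worker to run. */
static work_item_t *
next_work(void)
{
  int best = -1, p;
  for (p = 0; p < N_PRIORITIES; ++p) {
    if (queues[p]) { best = p; break; }
  }
  if (best < 0)
    return NULL;                     /* all queues are empty */

  /* With probability (N-1)/N, take from the highest-priority nonempty
   * queue.  The remaining 1/N of the time, prefer lower-priority work so
   * that it cannot starve. */
  if (rand() % N_PRIORITIES != 0)
    return pop(best);

  for (p = best + 1; p < N_PRIORITIES; ++p) {
    if (queues[p])
      return pop(p);
  }
  return pop(best);                  /* no lower-priority work available */
}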
This reverts part of commit 706c44a6ce.
It was a mistake to remove these includes: they were needed on
systems where we have openssl 1.1.0 *and* libscrypt, and where we
were validating the one against the other.
Fixes bug 22892; bugfix on 0.3.1.1-alpha.
When setting the maximum number of connections allowed by the OS,
always allow some extra file descriptors for other files.
Fixes bug 22797; bugfix on 0.2.0.10-alpha.
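A minimal sketch of the idea, using the standard getrlimit()/setrlimit()
interface; the reserve size and function name here are illustrative, not
the exact code.

#include <limits.h>
#include <sys/resource.h>

#define FD_RESERVE 32   /* descriptors held back for logs, DNS, etc. */

/* Raise the fd limit as far as the OS allows, and return how many fds we
 * may use for connections, keeping FD_RESERVE in reserve for other files. */
static long
configure_max_connections(void)
{
  struct rlimit rlim;
  if (getrlimit(RLIMIT_NOFILE, &rlim) < 0)
    return -1;
  rlim.rlim_cur = rlim.rlim_max;     /* ask for the hard limit */
  if (setrlimit(RLIMIT_NOFILE, &rlim) < 0)
    return -1;
  if (rlim.rlim_cur <= FD_RESERVE)
    return -1;                       /* too few descriptors to run at all */
  if (rlim.rlim_cur - FD_RESERVE > INT_MAX)
    return INT_MAX;                  /* cap "unlimited" at something sane */
  return (long)(rlim.rlim_cur - FD_RESERVE);
}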
We just have to suppress these warnings: Mingw's math.h uses gcc's
__builtin_choose_expr() facility to declare isnan, isfinite, and
signbit. But as implemented in at least some versions of gcc,
__builtin_choose_expr() can generate type warnings even from
branches that are not taken.
Fixes bug 22801; bugfix on 0.2.8.1-alpha.
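For illustration, this is the general shape of a local suppression using
GCC's diagnostic pragmas; the specific warning name shown is an assumption
of this sketch, not necessarily the exact flag we disable.

#include <math.h>

static int
isnan_without_warning(double d)
{
  int r;
#if defined(__GNUC__)
#pragma GCC diagnostic push
#pragma GCC diagnostic ignored "-Wfloat-conversion" /* assumed warning name */
#endif
  r = isnan(d);   /* expands via __builtin_choose_expr() on mingw */
#if defined(__GNUC__)
#pragma GCC diagnostic pop
#endif
  return r;
}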
When the new path selection logic went into place, I accidentally
dropped the code that considered the _family_ of the exit node when
deciding if the guard was usable, and we didn't catch that during
code review.
This patch makes the guard_restriction_t code consider the exit
family as well, and adds some (hopefully redundant) checks for the
case where we lack a node_t for a guard but we have a bridge_info_t
for it.
Fixes bug 22753; bugfix on 0.3.0.1-alpha. Tracked as TROVE-2017-006
and CVE-2017-0377.
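As a rough illustration of the family check involved, here is a
simplified sketch; the types below are illustrative stand-ins, not the
actual node_t or guard_restriction_t structures.

#include <stdbool.h>
#include <string.h>

#define DIGEST_LEN 20

typedef struct node_t {
  char id[DIGEST_LEN];                /* identity digest of this node */
  const char (*family)[DIGEST_LEN];   /* digests of declared family members */
  int n_family;
} node_t;

typedef struct restriction_t {
  char exclude_id[DIGEST_LEN];        /* identity digest of the chosen exit */
} restriction_t;

/* Return true iff this guard may be combined with the restricted exit:
 * it must not be the exit itself, and must not be in the exit's family. */
static bool
guard_obeys_restriction(const node_t *guard, const restriction_t *rst)
{
  int i;
  if (!rst)
    return true;
  if (!memcmp(guard->id, rst->exclude_id, DIGEST_LEN))
    return false;                     /* the guard *is* the excluded exit */
  for (i = 0; i < guard->n_family; ++i) {
    if (!memcmp(guard->family[i], rst->exclude_id, DIGEST_LEN))
      return false;                   /* the guard claims the exit as family */
  }
  return true;
}

(A complete check would also look in the other direction, i.e., whether
the exit's own family declaration lists the guard.)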
This patch fixes a crash in our LZMA module: liblzma will allocate
slightly more data than the limit we give it allows, and that
over-allocation is what triggers the crash.
See: https://bugs.torproject.org/22751
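The shape of the workaround is to hand liblzma a limit with a little
slack, so that its slight over-allocation no longer exceeds it. A minimal
sketch, with an illustrative margin and helper name:

#include <stdint.h>

#define LZMA_LIMIT_SLACK (512 * 1024)  /* illustrative slack, in bytes */

static uint64_t
lzma_limit_with_slack(uint64_t wanted_limit)
{
  if (wanted_limit > UINT64_MAX - LZMA_LIMIT_SLACK)
    return UINT64_MAX;                 /* avoid overflow on huge limits */
  return wanted_limit + LZMA_LIMIT_SLACK;
}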
As of ac2f6b608a in 0.2.1.19-alpha,
Sebastian fixed bug 888 by marking descriptors as "impossible" by
digest if they got rejected during the
router_load_routers_from_string() phase. This fix stopped clients
and relays from downloading the same thing over and over.
But we never made the same change for descriptors rejected during
dirserv_add_{descriptor,extrainfo}. Instead, we tried to notice in
advance, using dirserv_would_reject(), that we'd reject them.
This notice-in-advance check stopped working once we added
key-pinning and didn't make a corresponding key-pinning change to
dirserv_would_reject() [since a routerstatus_t doesn't include an
ed25519 key].
So as a fix, let's make the dirserv_add_*() functions mark digests
as undownloadable when they are rejected.
Fixes bug 22349; I am calling this a fix on 0.2.1.19-alpha, though
you could also argue for it being a fix on 0.2.7.2-alpha.
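A simplified sketch of the marking pattern; the names and the toy
fixed-size digest store are illustrative, since the real code uses Tor's
download-status machinery.

#include <stdbool.h>
#include <string.h>

#define DIGEST_LEN 20
#define MAX_REJECTED 4096

static char rejected_digests[MAX_REJECTED][DIGEST_LEN];
static int n_rejected = 0;

/* Remember that a descriptor with this digest was rejected, so that we
 * never try to download the same thing again. */
static void
mark_digest_undownloadable(const char *digest)
{
  if (n_rejected < MAX_REJECTED)
    memcpy(rejected_digests[n_rejected++], digest, DIGEST_LEN);
}

/* Before requesting a descriptor, check whether we already rejected it. */
static bool
digest_is_undownloadable(const char *digest)
{
  int i;
  for (i = 0; i < n_rejected; ++i) {
    if (!memcmp(rejected_digests[i], digest, DIGEST_LEN))
      return true;
  }
  return false;
}

The point of the fix is that the dirserv_add_*() rejection paths now do
the equivalent of mark_digest_undownloadable(), just as the load-time
rejection path has done since 0.2.1.19-alpha.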
This mistake causes two possible bugs. I believe they are both
harmless IRL.
BUG 1: memory stomping
When we call the memset, we write two zero bytes past the end of
packed_cell_t.body. But I think that's harmless in practice,
because the definition of packed_cell_t is:
// ...
typedef struct packed_cell_t {
  TOR_SIMPLEQ_ENTRY(packed_cell_t) next;
  char body[CELL_MAX_NETWORK_SIZE];
  uint32_t inserted_time;
} packed_cell_t;
So we will overwrite either two bytes of inserted_time, or two bytes
of padding, depending on how the platform handles alignment.
If we're overwriting padding, that's safe.
If we are overwriting the inserted_time field, that's also safe: In
every case where we call cell_pack() from connection_or.c, we ignore
the inserted_time field. When we call cell_pack() from relay.c, we
don't set or use inserted_time until right after we have called
cell_pack(). So I believe we're safe in that case too.
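To see which of the two cases applies on a given platform, a standalone
check along these lines can be compiled; the struct is a simplified
stand-in (TOR_SIMPLEQ_ENTRY() boils down to a pointer), and
CELL_MAX_NETWORK_SIZE = 514 is an assumption of this sketch.

#include <stddef.h>
#include <stdint.h>
#include <stdio.h>

#define CELL_MAX_NETWORK_SIZE 514

typedef struct packed_cell_t {
  struct packed_cell_t *next;
  char body[CELL_MAX_NETWORK_SIZE];
  uint32_t inserted_time;
} packed_cell_t;

int
main(void)
{
  size_t end_of_body =
    offsetof(packed_cell_t, body) + CELL_MAX_NETWORK_SIZE;
  size_t time_offset = offsetof(packed_cell_t, inserted_time);
  /* A gap of 2 or more means the stray bytes land in padding; a gap of
   * 0 means they land in inserted_time itself. */
  printf("bytes of padding after body: %zu\n", time_offset - end_of_body);
  return 0;
}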
BUG 2: memory exposure
The original reason for this memset was to avoid the possibility of
accidentally leaking uninitialized RAM to the network. Now
remember, if wide_circ_ids is false on a connection, we shouldn't
actually be sending more than 512 bytes of packed_cell_t.body, so
these two bytes can only leak to the network if there is another bug
somewhere else in the code that sends more data than is correct.
Fortunately, in relay.c, where we allocate packed_cell_t in
packed_cell_new(), we allocate it with tor_malloc_zero(), which
clears the RAM, right before we call cell_pack(). So those
packed_cell_t.body bytes can't leak any information.
That leaves the two calls to cell_pack() in connection_or.c, which
use stack-allocated packed_cell_t instances.
In or_handshake_state_record_cell(), we pass the cell's contents to
crypto_digest_add_bytes(). When we do so, we get the number of
bytes to pass using the same setting of wide_circ_ids as we passed
to cell_pack(). So I believe that's safe.
In connection_or_write_cell_to_buf(), we also use the same setting
of wide_circ_ids in both calls. So I believe that's safe too.
I introduced this bug with 1c0e87f6d8
back in 0.2.4.11-alpha; it is bug 22737 and CID 1401591.
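For reference, here is a bounds-safe shape for that kind of memset; a
minimal sketch using the same simplified struct as above, not Tor's
actual cell_pack(). Computing both the start and the length from the
body field itself keeps the write inside the struct member.

#include <stdint.h>
#include <string.h>

#define CELL_MAX_NETWORK_SIZE 514      /* assumed value for this sketch */

typedef struct packed_cell_t {
  struct packed_cell_t *next;          /* simplified stand-in for the entry */
  char body[CELL_MAX_NETWORK_SIZE];
  uint32_t inserted_time;
} packed_cell_t;

/* Zero whatever part of the body a shorter (narrow-circ-id) cell leaves
 * unused, without ever writing past the end of the field. */
static void
zero_unused_tail(packed_cell_t *dst, size_t bytes_used)
{
  if (bytes_used >= sizeof(dst->body))
    return;                            /* nothing (sane) left to zero */
  memset(dst->body + bytes_used, 0, sizeof(dst->body) - bytes_used);
}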