Right now, all our curve25519 backends ignore the high bit of the
public key. But other backends might treat the high bit of the
public key as encoding out-of-bounds values, or as something to be
preserved. This could be used to distinguish clients with different
backends, at the cost of killing a circuit.
As a workaround, let's just clear the high bit of each public key
indiscriminately before we use it. Fix for bug 8121, reported by
rransom. Bugfix on 0.2.4.8-alpha.
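For illustration, the workaround boils down to masking one bit of the
32-byte key before use; a rough sketch (the helper name is mine, not
the patch's):

#include <stdint.h>
#include <stddef.h>

#define CURVE25519_PUBKEY_LEN 32

/* Hypothetical helper: clear the high bit of the final byte of a
 * curve25519 public key so every backend sees the same canonical
 * value. */
static void
clear_high_bit(uint8_t *pubkey, size_t len)
{
  if (len == CURVE25519_PUBKEY_LEN)
    pubkey[CURVE25519_PUBKEY_LEN - 1] &= 0x7f;
}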
If we're deciding on a node's bandwidth based on "Bandwidth="
declarations, clip it to 20, or to the maxunmeasuredbw consensus
parameter if one is voted on.
This adds a new consensus method.
This is "part A" of bug 2286
Authorities don't set is_possible_guard on node_t, so they were
never deciding that they could build enough paths. This is a quick
and dirty fix.
Bug not in any released version of Tor
These seem to have gotten conflicted out of existence while mike was
working on path bias stuff.
Thanks to sysrqb for collecting these in a handy patch.
The fix is to move the two functions that format/parse base64
curve25519 public keys into a new "crypto_format.c" file. I could
have put them in crypto.c, but that's a big file worth splitting
anyway.
Fixes bug 8153; bugfix on 0.2.4.8-alpha where I did the fix for 7869.
Now we can specify to skip bridges that wouldn't be able to answer the
type of dir fetch we're launching.
It's still the responsibility of the rest of the code to prevent us from
launching a given dir fetch if we have no bridges that could handle it.
Now, as we move into a future where most bridges can handle microdescs,
we will generally find ourselves using them, rather than holding back
just because one of our bridges doesn't use them.
When we first implemented TLS, we assumed in connection_handle_write
that a TOR_TLS_WANT_WRITE from flush_buf_tls meant that nothing had
been written. But when we moved our buffers to a ring buffer
implementation back in 0.1.0.5-rc (!), we broke that invariant: it's
possible that some bytes have been written even though we still get
TOR_TLS_WANT_WRITE.
That's bad. It means that if we do a sequence of TLS writes that ends
with a WANTWRITE, we don't notice that we flushed any bytes, and we
don't (I think) decrement buckets.
Fixes bug 7708; bugfix on 0.1.0.5-rc
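In outline, the accounting the fix restores looks like this (a hedged
sketch, not the actual connection_handle_write() change):

#include <stddef.h>

/* Hypothetical sketch: even when flush_buf_tls() reports "want
 * write", some bytes may already have left the ring buffer, and those
 * bytes must still be charged against the connection's write bucket. */
static void
account_for_partial_flush(size_t bytes_flushed, size_t *write_bucket)
{
  if (bytes_flushed >= *write_bucket)
    *write_bucket = 0;
  else
    *write_bucket -= bytes_flushed;
}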
Also, deprecate the torrc options for the scaling values. It's unlikely anyone
but developers will ever tweak them, even if we provided a single ratio value.
This is meant to avoid conflict with the built-in log() function in
math.h. It resolves ticket 7599. First reported by dhill.
This was generated with the following perl script:
#!/usr/bin/perl -w -i -p
s/\blog\(LOG_(ERR|WARN|NOTICE|INFO|DEBUG)\s*,\s*/log_\L$1\(/g;
s/\blog\(/tor_log\(/g;
Instead of hardcoding the minimum fraction of possible paths to 0.6, we
take it from the user, and failing that from the consensus, and
failing that we fall back to 0.6.
Previously we did this based on the fraction of descriptors we
had. But really, we should be going based on what fraction of paths
we're able to build based on weighted bandwidth, since otherwise a
directory guard or two could make us behave quite oddly.
Implementation for feature 5956
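The fallback chain is straightforward; a sketch under assumed names
(not the actual option or parameter names):

/* Hypothetical sketch: prefer the user's torrc value, then the
 * consensus parameter, then the hardcoded default of 0.6. */
static double
min_path_fraction(double torrc_value, int torrc_is_set,
                  double consensus_value, int consensus_is_set)
{
  if (torrc_is_set)
    return torrc_value;
  if (consensus_is_set)
    return consensus_value;
  return 0.6;
}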
This is allowed by the C standard, which permits you to represent
doubles any way you like, but in practice we have some code that
assumes that memset() clears doubles in structs. Noticed as part of
7802 review; see 8081 for more info.
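For example, code along these lines assumes that the all-zero bit
pattern is 0.0, which is true on IEEE-754 platforms but not promised
by the standard (struct and field names here are made up):

#include <string.h>

struct stats {
  double mean;
  double variance;
};

static void
reset_stats(struct stats *s)
{
  /* Assumes all-bits-zero represents 0.0; C does not guarantee this,
   * but it holds on every platform we care about. */
  memset(s, 0, sizeof(*s));
}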
When we implemented #5823 and removed v2 directory request info, we
never actually changed the unit tests not to expect it.
Fixes bug 8084; bug not in any released version of Tor.
It looks like there was a compilation error for 6826 on some
platforms. Removing even more now-uncallable code to handle detecting
libevent versions before 1.3e.
Fixes bug 8012; bug not in any released Tor.
If any circuits were opened during a scaling event, we were scaling attempts
and successes by different amounts. This leads to rounding error.
The fix is to record how many circuits are in a state that hasn't been fully
counted yet, subtract that before scaling, and add it back afterwards.
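Schematically, the corrected scaling looks like this (a sketch with
made-up names, not the actual path bias code):

/* Hypothetical sketch: circuits that are open but not yet fully
 * counted are pulled out before scaling and restored afterwards, so
 * attempts and successes get scaled by the same factor. */
static void
scale_counts(double *attempts, double *successes,
             int pending_attempts, int pending_successes,
             double factor)
{
  *attempts -= pending_attempts;
  *successes -= pending_successes;

  *attempts *= factor;
  *successes *= factor;

  *attempts += pending_attempts;
  *successes += pending_successes;
}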
Since they use RELAY_EARLY (which can be seen by all hops on the path),
it's not safe to say they actually count as a successful use.
There are also problems with trying to allow them to finish extending due to
the circuit purpose state machine logic. It is way less complicated (and
possibly more semantically coherent) to simply wait until we actually try to
do something with them before claiming we 'used' them.
Also, we shouldn't call timed out circuits 'used' either, for semantic
consistency.
An adversary could let the first stream request succeed (i.e., the resolve), but
then tag and timeout the remainder (via cell dropping), forcing them on new
circuits.
Rolling back the state will cause us to probe such circuits, which should lead
to probe failures in the event of such tagging due to either unrecognized
cells coming in while we wait for the probe, or the cipher state getting out
of sync in the case of dropped cells.
Path use bias measures how often we succeed at using the circuits we
actually try to use. It is a subset of path bias accounting, but it is
computed as a separate statistic because the rate of client circuit use may
vary depending on use case.
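As a rough formula (illustrative only):

/* Hypothetical sketch: the use-bias rate is simply the fraction of
 * circuits we actually tried to use that worked. */
static double
path_use_success_rate(double use_successes, double use_attempts)
{
  if (use_attempts <= 0.0)
    return 1.0;  /* No data yet; assume the best. */
  return use_successes / use_attempts;
}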
This is a minimal refactoring to expose the weighted bandwidth
calculations for each node so I can use them to see what fraction of
nodes, weighted by bandwidth, we have descriptors for.
This is ticket 7706, reported by "bugcatcher." The rationale here
is that if somebody says 'ExcludeNodes {tv}', then they probably
don't just want to block nodes that are definitely in Tuvalu: they also want
to block nodes that have unknown country, since for all they know
such nodes are also in Tuvalu.
This behavior is controlled by a new GeoIPExcludeUnknown autobool
option. With the default (auto) setting, we exclude ?? and A1 if
any country is excluded. If the option is 1, we add ?? and A1
unconditionally; if the option is 0, we never add them.
(Right now our geoip file doesn't actually seem to include A1: I'm
including it here in case it comes back.)
This feature only takes effect if you have a GeoIP file. Otherwise
you'd be excluding every node.
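The decision logic amounts to something like this (a sketch; the real
code works on the routerset and the parsed options):

/* Hypothetical sketch: should "??" (and "A1") be added to the
 * exclusion set?  geoip_exclude_unknown is the autobool: 1, 0, or -1
 * for "auto". */
static int
should_exclude_unknown(int geoip_exclude_unknown,
                       int some_country_excluded,
                       int have_geoip_file)
{
  if (!have_geoip_file)
    return 0;  /* Otherwise we would exclude every node. */
  if (geoip_exclude_unknown == 1)
    return 1;
  if (geoip_exclude_unknown == 0)
    return 0;
  /* auto: only when at least one real country is excluded. */
  return some_country_excluded;
}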
This won't actually break them any worse than they were broken before:
it just removes a set of warnings that nobody was actually seeing, I
hope.
Closes 6826
The implementation is pretty straightforward: parse_extended_hostname() is
modified to drop any leading components from an address like
'foo.aaaaaaaaaaaaaaaa.onion'.
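The truncation step amounts to keeping only the last two labels; a
hedged sketch (the real parse_extended_hostname() also validates the
address):

#include <string.h>

/* Hypothetical sketch: given "foo.bar.<16 chars>.onion", return a
 * pointer to the final "<16 chars>.onion" portion. */
static const char *
drop_leading_components(const char *address)
{
  const char *last_dot, *p;

  last_dot = strrchr(address, '.');   /* the dot before "onion" */
  if (!last_dot || last_dot == address)
    return address;
  p = last_dot - 1;
  while (p > address && *p != '.')
    --p;
  return (*p == '.') ? p + 1 : address;
}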
This is an automatically generated commit, from the following perl script,
run with the options "-w -i -p".
s/smartlist_string_num_isin/smartlist_contains_int_as_string/g;
s/smartlist_string_isin((?:_case)?)/smartlist_contains_string$1/g;
s/smartlist_digest_isin/smartlist_contains_digest/g;
s/smartlist_isin/smartlist_contains/g;
s/digestset_isin/digestset_contains/g;
In 6fbdf635 we added a couple of statements like:
if (test) {
  ...
};
The extraneous semicolons there get flagged as worrisome empty
statements by the cparser library, so let's fix them.
Patch by Christian Grothoff; fixes bug 7115.
Otherwise, it's possible to create streams or circuits with these
bogus IDs, leading to orphaned circuits or streams, or to ones that
can cause bandwidth DOS problems.
Fixes bug 7889; bugfix on all released Tors.
In general, if we tried to use a circ for a stream, but then decided to place
that stream on a different circuit, we need to probe the original circuit
before deciding it was a "success".
We also need to do the same for cannibalized circuits that go unused.
This makes removing items from the middle of the queue into an O(1)
operation, which could prove important as we let onionqueues grow
longer.
Doing this actually makes the code slightly smaller, too.
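The O(1) claim is just the usual doubly-linked-list property; a sketch
(not the actual onion queue code):

#include <stddef.h>

/* Hypothetical sketch: with prev/next pointers, unlinking an entry
 * from the middle of the queue needs no traversal. */
struct queue_entry {
  struct queue_entry *prev, *next;
};

static void
queue_remove(struct queue_entry **head, struct queue_entry **tail,
             struct queue_entry *e)
{
  if (e->prev)
    e->prev->next = e->next;
  else
    *head = e->next;
  if (e->next)
    e->next->prev = e->prev;
  else
    *tail = e->prev;
  e->prev = e->next = NULL;
}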
The right way to set "MaxOnionsPending" was to adjust it until the
processing delay was appropriate. So instead, let's measure how long
it takes to process onionskins (sampling them once we have a big
number), and then limit the queue based on its expected time to
finish.
This change is extra-necessary for ntor, since there is no longer a
reasonable way to set MaxOnionsPending without knowing what mix of
onionskins you'll get.
This patch also reserves 1/3 of the onionskin spots for ntor
handshakes, on the theory that TAP handshakes shouldn't be allowed to
starve their speedier cousins. We can change this later if need be.
Resolves 7291.
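In outline, the new limit is derived from the measured cost per
onionskin (a sketch with assumed names and a caller-supplied target
delay; not the actual implementation):

#include <stdint.h>

/* Hypothetical sketch: cap the queue so that draining it should take
 * no longer than target_delay_usec, given the measured average cost
 * of one onionskin, and reserve a third of the slots for ntor. */
static unsigned
onion_queue_limit(uint64_t avg_usec_per_onionskin,
                  uint64_t target_delay_usec)
{
  if (avg_usec_per_onionskin == 0)
    return 0;  /* No estimate yet; the caller uses a default. */
  return (unsigned)(target_delay_usec / avg_usec_per_onionskin);
}

static unsigned
ntor_reserved_slots(unsigned total_slots)
{
  return total_slots / 3;  /* Keep TAP from starving ntor. */
}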
The unit of work sent to a cpuworker is now a create_cell_t; its
response is now a created_cell_t. Several of the things that call or
get called by this chain of logic now take create_cell_t or
created_cell_t too.
Since all cpuworkers are forked or spawned by Tor, they don't need a
stable wire protocol, so we can just send structs. This saves us some
insanity, and helps p
As elsewhere, when adding or extending a cell type, it makes sense to
put the code that parses it into a separate, tested function.
This commit doesn't actually make anything use these new functions;
that's for a later commit.
The handshake_digest field was never meaningfully a digest *of* the
handshake, but rather is a digest *from* the handshake that we exapted
to prevent replays of ESTABLISH_INTRO cells. The ntor handshake will
generate it as more key material rather than taking it from any part
of the circuit handshake reply.
I'm going to want a generic "onionskin" type and set of wrappers, and
for that, it will be helpful to isolate the different circuit creation
handshakes. Now the original handshake is in onion_tap.[ch], the
CREATE_FAST handshake is in onion_fast.[ch], and onion.[ch] now
handles the onion queue.
This commit does nothing but move code and adjust header files.
Here we try to handle curve25519 onion keys: generating them,
loading and storing them, publishing them in our descriptors, putting
them in microdescriptors, and so on.
This commit is untested and probably buggy like whoa
This patch moves curve25519_keypair_t from src/or/onion_ntor.h to
src/common/crypto_curve25519.h, and adds new functions to generate,
load, and store keypairs.
Previously, we only used the strong OS entropy source as part of
seeding OpenSSL's RNG. But with curve25519, we'll have occasion to
want to generate some keys using extremely good entropy, as well as the
means to do so. So let's!
This patch refactors the OS-entropy wrapper into its own
crypto_strongest_rand() function, and makes our new
curve25519_secret_key_generate function try it as appropriate.
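A hedged sketch of the generation path (the stand-in below reads
/dev/urandom so it is self-contained; the real wrapper pulls from the
OS entropy source directly):

#include <stdint.h>
#include <stddef.h>
#include <stdio.h>

/* Illustrative stand-in for the OS-entropy wrapper described above. */
static int
sketch_strongest_rand(uint8_t *out, size_t len)
{
  FILE *f = fopen("/dev/urandom", "rb");
  size_t n;

  if (!f)
    return -1;
  n = fread(out, 1, len, f);
  fclose(f);
  return n == len ? 0 : -1;
}

/* Hypothetical sketch of secret-key generation: draw 32 strong random
 * bytes, then apply the standard curve25519 clamping. */
static int
sketch_curve25519_secret_key_generate(uint8_t sk[32])
{
  if (sketch_strongest_rand(sk, 32) < 0)
    return -1;
  sk[0] &= 248;
  sk[31] &= 127;
  sk[31] |= 64;
  return 0;
}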
The ntor handshake--described in proposal 216 and in a paper by
Goldberg, Stebila, and Ustaoglu--gets us much better performance than
our current approach.
We want to use donna-c64 when we have a GCC with support for
64x64->uint128_t multiplying. If not, we want to use libnacl if we
can, unless it's giving us the unsafe "ref" implementation. And if
that isn't going to work, we'd like to use the
portable-and-safe-but-slow 32-bit "donna" implementation.
We might need more library searching for the correct libnacl,
especially once the next libnacl release is out -- it's likely to have
bunches of better curve25519 implementations.
I also define a set of curve25519 wrapper functions, though it really
shouldn't be necessary.
We should eventually make the -donna*.c files get built with
-fomit-frame-pointer, since that can make a difference.
There was one place in curve25519-donna-c64 that was relying on
unaligned access and on little-endian values. This patch
fixes that.
I've sent Adam a pull request.
Our old warn_nonlocal_client_ports() would give a bogus warning for
every nonlocal port every time it parsed any ports at all. So if it
parsed a nonlocal socksport, it would complain that it had a nonlocal
socksport...and then turn around and complain about the nonlocal
socksport again, calling it a nonlocal transport or nonlocal dnsport,
if it had any of those.
Fixes bug 7836; bugfix on 0.2.3.3-alpha.
mr-4 reports on #7799 that he was seeing it several times per second,
which suggests that things had gone very wrong.
This isn't a real fix, but it should make Tor usable till we can
figure out the real issue.
This implements the server-side of proposal 198 by detecting when
clients lack the magic list of ciphersuites that indicates that
they're faking support for ciphers they don't really have. When
clients lack this list, we can choose any cipher that we'd actually
like. The newly allowed ciphersuites are, currently, "All ECDHE-RSA
ciphers that openssl supports, except for ECDHE-RSA-RC4".
The code to detect the cipher list relies on (ab)use of
SSL_set_session_secret_cb.
We already use this classification for deciding whether (as a server)
to do a v2/v3 handshake, and we're about to start using it for
deciding whether we can use good ciphersuites too.
This is less easy than you might think; we can't just look at the
client ciphers list, since openssl doesn't remember client ciphers if
it doesn't know about them. So we have to keep a list of the "v2"
ciphers, with the ones we don't know about removed.
It's important not to call choose_array_element_by_weight and then
pass its return value unchecked to smartlist_get: it is allowed to
return -1.
Fixes bug 7756; bugfix on 4e3d07a6 (not in any released Tor)
This is good enough to give P_success >= 999,999,999/1,000,000,000 so
long as the address space is less than 97.95% full. It'd be ridiculous
for that to happen for IPv6, and under some reasonable assumptions, it
would also be pretty silly for IPv4.
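The arithmetic behind that bound, assuming on the order of 1000 random
draws per mapping (the retry count is an assumption here, not stated
above):

#include <math.h>
#include <stdio.h>

/* Back-of-the-envelope check: if a fraction p of the address space is
 * already used, 1000 independent random draws all collide with
 * probability p^1000, and 0.9795^1000 is about 1e-9. */
int
main(void)
{
  printf("P(all draws collide) = %g\n", pow(0.9795, 1000.0)); /* ~1e-09 */
  return 0;
}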
With an IPv6 virtual address map, we can basically hand out a new
IPv6 address for _every_ address we connect to. That'll be cool, and
will let us maybe get around prop205 issues.
This uses some fancy logic to try to make the code paths in the ipv4
and the ipv6 case as close as possible, and moves to randomly
generated addresses so we don't need to maintain those stupid counters
that will collide if Tor restarts but apps don't.
Also has some XXXX items to fix to make this useful. More design
needed.
This function gives us a single place to set reasonable default flags
for port_cfg_t entries, to avoid bugs like the one where we weren't
setting ipv4_traffic_ok to 1 on SocksPorts initialized in an older
way.
(This is part 2 of making DNS cache use enabled/disabled on a
per-client port basis. This implements the CacheIPv[46]DNS options,
but not the UseCachedIPv[46] ones.)
(This is part 1 of making DNS cache use enabled/disabled on a
per-client port basis. These options are shuffled around correctly,
but don't do anything yet.)
We want to be saying fast_mem{cmp,eq,neq} when we're doing a
comparison that's allowed to exit early, or tor_mem{cmp,eq,neq} when
we need a data-invariant timing. Direct use of memcmp tends to imply
that we haven't thought about the issue.
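The data-invariant flavor is the usual accumulate-the-differences
pattern; a hedged sketch (not Tor's actual tor_memeq()):

#include <stddef.h>
#include <stdint.h>

/* Hypothetical sketch: examine every byte no matter where the first
 * difference is, so the running time does not reveal how much of the
 * two buffers matched. */
static int
sketch_mem_eq(const void *a, const void *b, size_t len)
{
  const uint8_t *pa = a, *pb = b;
  uint8_t diff = 0;
  size_t i;

  for (i = 0; i < len; ++i)
    diff |= pa[i] ^ pb[i];
  return diff == 0;
}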
This has several advantages, including more resilience to ambient failure.
I still need to rename all the first_hop vars, though; saving that for a separate
commit.
Turns out there's more than one way to block a tagged circuit.
This seems to successfully handle all of the normal exit circuits. Hidden
services need additional tweaks, still.
This replaces the old FallbackConsensus notion, and should provide a
way -- assuming we pick reasonable nodes! -- to give clients
suggestions of places to go to get their first consensus.
Now creating a dir_server_t and adding it are separate functions, and
there are frontend functions for adding a trusted dirserver and a
fallback dirserver.