When the new path selection logic went into place, I accidentally
dropped the code that considered the _family_ of the exit node when
deciding if the guard was usable, and we didn't catch that during
code review.
This patch makes the guard_restriction_t code consider the exit
family as well, and adds some (hopefully redundant) checks for the
case where we lack a node_t for a guard but we have a bridge_info_t
for it.
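Roughly, the restored check looks like this (a minimal sketch with
simplified types; the lookup helpers are assumptions, not the actual
entrynodes.c API):

    #include <stdbool.h>
    #include <string.h>

    #define DIGEST_LEN 20

    typedef struct node_t node_t;
    typedef struct {
      char exclude_id[DIGEST_LEN]; /* identity digest of the chosen exit */
    } guard_restriction_t;

    /* Assumed lookups, standing in for the real nodelist helpers. */
    const node_t *node_get_by_id(const char *id);
    bool nodes_in_same_family(const node_t *a, const node_t *b);

    static bool
    guard_obeys_exit_restriction(const char *guard_id,
                                 const node_t *guard_node,
                                 const guard_restriction_t *rst)
    {
      /* The guard must not be the chosen exit itself... */
      if (!memcmp(guard_id, rst->exclude_id, DIGEST_LEN))
        return false;
      /* ...and (the check this patch restores) it must not be in the
       * exit's family either. */
      const node_t *exit_node = node_get_by_id(rst->exclude_id);
      if (guard_node && exit_node &&
          nodes_in_same_family(guard_node, exit_node))
        return false;
      return true;
    }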
Fixes bug 22753; bugfix on 0.3.0.1-alpha. Tracked as TROVE-2017-006
and CVE-2017-0377.
We previously did not set the guard state in
launch_direct_bridge_descriptor_fetch(). So when a bridge descriptor
fetch failed, the guard subsystem would never learn about the failure
(and hence the guard's reachability state would not be updated).
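A sketch of the shape of the fix, with assumed API names loosely
modeled on the directory code:

    #include <stddef.h>

    typedef struct directory_request_t directory_request_t;
    typedef struct circuit_guard_state_t circuit_guard_state_t;

    /* Assumed API shapes; illustrative declarations only. */
    directory_request_t *directory_request_new(int purpose);
    void directory_request_set_guard_state(directory_request_t *req,
                                           circuit_guard_state_t *state);
    circuit_guard_state_t *
    guard_state_for_bridge_desc_fetch(const char *identity_digest);

    static void
    launch_bridge_desc_fetch_sketch(const char *bridge_digest, int purpose)
    {
      directory_request_t *req = directory_request_new(purpose);
      /* The missing step: attach a guard state to the request, so
       * that a failed fetch is reported back to the guard subsystem
       * and the guard's reachability state gets updated. */
      circuit_guard_state_t *st =
        guard_state_for_bridge_desc_fetch(bridge_digest);
      if (st)
        directory_request_set_guard_state(req, st);
      /* ... launch the request as before ... */
      (void)req;
    }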
When calculating the max sampled size, Tor would only count the
number of bridges in torrc, without considering that our state file
might already have sampled bridges in it. This caused problems when
people swapped bridges, since the following error would trigger:

    [warn] Not expanding the guard sample any further; just hit the
    maximum sample threshold of 1
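A toy example of the corrected arithmetic (names and numbers are
illustrative; only the counting logic matters):

    #include <stdio.h>

    /* Toy version of the corrected count: the sample ceiling must
     * cover bridges already recorded in the state file, not just
     * those currently listed in torrc. */
    static int
    max_sample_size(int n_torrc_bridges, int n_state_bridges)
    {
      /* Buggy version: return n_torrc_bridges; with one old bridge
       * in the state file and one new bridge in torrc, that yields a
       * ceiling of 1 and triggers the warning above. */
      return n_torrc_bridges + n_state_bridges;
    }

    int
    main(void)
    {
      printf("ceiling = %d\n", max_sample_size(1, 1)); /* 2, not 1 */
      return 0;
    }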
In that chutney test, the bridge client is configured to connect to
the same bridge at 127.0.0.1:5003 _and_ at [::1]:5003, with no
change in transports.
That meant, I think, that the descriptor was only assigned to the
first bridge when it arrived, and never to the second.
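If that's right, the fix is to hand an arriving descriptor to every
matching bridge, as in this simplified sketch (hypothetical names):

    #include <stddef.h>
    #include <string.h>

    #define DIGEST_LEN 20

    typedef struct routerinfo_t routerinfo_t;
    typedef struct {
      char identity[DIGEST_LEN];
      const routerinfo_t *descriptor;
    } bridge_info_t;

    static void
    learned_bridge_descriptor_sketch(bridge_info_t *bridges, size_t n,
                                     const routerinfo_t *ri,
                                     const char *digest)
    {
      for (size_t i = 0; i < n; ++i) {
        if (!memcmp(bridges[i].identity, digest, DIGEST_LEN)) {
          bridges[i].descriptor = ri;
          /* No early exit: the same identity may be configured at
           * several addr:port pairs, e.g. 127.0.0.1:5003 and
           * [::1]:5003. */
        }
      }
    }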
- Make sure we check at least two guards for descriptors before
building circuits (see the sketch after this list). We typically
use the first primary guard for circuits, but it can also happen
that we use the second primary guard (e.g. if we pick our first
primary guard as an exit), so we should make sure we have
descriptors for both of them.
- Remove the BUG() from the guard_has_descriptor() check, since we
now know that this can happen in rare but legitimate situations as
well; in that case we should just move on to the next guard.
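A minimal sketch of the first point, with simplified types
(guard_has_descriptor is the only name taken from the real code):

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct entry_guard_t entry_guard_t;
    bool guard_has_descriptor(const entry_guard_t *g);

    #define NUM_PRIMARY_TO_CHECK 2

    static bool
    primary_guards_ready(entry_guard_t *const *primary, size_t n)
    {
      size_t limit = n < NUM_PRIMARY_TO_CHECK ? n : NUM_PRIMARY_TO_CHECK;
      for (size_t i = 0; i < limit; ++i) {
        if (!guard_has_descriptor(primary[i]))
          return false; /* Not ready; wait instead of hitting a BUG(). */
      }
      return true;
    }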
Previously I'd made a bad assumption in the implementation of
prop271 in 0.3.0.1-alpha: I'd assumed that there couldn't be two
guards with the same identity. That's true for non-bridges, but in
the bridge case, we allow two bridges to have the same ID as long as
they have different addr:port combinations -- so that one bridge
identity can run multiple pluggable transports (PTs).
Fortunately, this assumption wasn't deeply ingrained: we now stop
enforcing the "one guard per ID" rule in the bridge case, and instead
enforce "one guard per <id,addr,port>".
We also needed to tweak our implementation of
get_bridge_info_for_guard, since it made the same incorrect
assumption.
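In sketch form (illustrative types, not the real structs), the new
uniqueness rule is:

    #include <stdbool.h>
    #include <stdint.h>
    #include <string.h>

    #define DIGEST_LEN 20

    typedef struct {
      char id[DIGEST_LEN];
      char addr[64];   /* stringified address, for brevity */
      uint16_t port;
    } guard_key_t;

    /* Two bridge-guard entries are "the same guard" only if
     * identity, address, and port all match. */
    static bool
    same_bridge_guard(const guard_key_t *a, const guard_key_t *b)
    {
      return !memcmp(a->id, b->id, DIGEST_LEN) &&
             !strcmp(a->addr, b->addr) &&
             a->port == b->port;
    }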
Fixes bug 21027; bugfix on 0.3.0.1-alpha.
Since we can call this function more than once before we update all
the confirmed_idx fields, we can't rely on all the relays having an
accurate confirmed_idx.
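One way to avoid trusting stale values (an illustration, not
necessarily what the patch does) is to derive the next index from
the maximum actually in use:

    #include <stddef.h>

    typedef struct {
      int confirmed;      /* nonzero once the guard is confirmed */
      int confirmed_idx;  /* may be stale mid-update */
    } guard_t;

    static int
    next_confirmed_idx(const guard_t *guards, size_t n)
    {
      int max_idx = -1;
      for (size_t i = 0; i < n; ++i) {
        if (guards[i].confirmed && guards[i].confirmed_idx > max_idx)
          max_idx = guards[i].confirmed_idx;
      }
      return max_idx + 1;
    }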
Fixes bug 21129; bugfix on 0.3.0.1-alpha.
In addition to not wanting to build circuits until we can see most
of the paths in the network, and in addition to not wanting to build
circuits until we have a consensus ... we shouldn't build circuits
till all of our (in-use) primary guards have descriptors that we can
use for them.
This is another bug 21242 fix.
Actually, it's _fine_ to use a descriptorless guard for fetching
directory info -- we just shouldn't use it when building circuits.
Fortunately, we already have a "usage" flag that we can use here.
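Sketched with simplified types (the real code draws a similar usage
distinction, but the details here are assumptions):

    #include <stdbool.h>

    typedef enum {
      GUARD_USAGE_DIRGUARD, /* fetching directory info */
      GUARD_USAGE_TRAFFIC,  /* building circuits for traffic */
    } guard_usage_t;

    typedef struct entry_guard_t entry_guard_t;
    bool guard_has_descriptor(const entry_guard_t *g);

    static bool
    guard_usable_for(const entry_guard_t *g, guard_usage_t usage)
    {
      /* A descriptorless guard is fine for directory fetches, but
       * we can't extend a circuit through a guard we have no
       * descriptor for. */
      if (usage == GUARD_USAGE_TRAFFIC && !guard_has_descriptor(g))
        return false;
      return true;
    }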
Partial fix for bug 21242.
This relates to the 21242 fix -- entry_guard_pick_for_circuit()
should never return a node without a descriptor when the node is
going to be used for traffic, since we won't be able to extend
through it.
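As a tiny illustration, the invariant could be checked at the pick
site (plain assert; purely a sketch):

    #include <assert.h>
    #include <stdbool.h>

    typedef struct entry_guard_t entry_guard_t;
    bool guard_has_descriptor(const entry_guard_t *g);

    /* Post-condition at the pick site: a guard handed out for
     * traffic must come with a descriptor. */
    static void
    check_picked_guard(const entry_guard_t *g, bool for_traffic)
    {
      if (for_traffic)
        assert(guard_has_descriptor(g));
    }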
In the past, when we had exhausted all guards in our sampled set, we
just waited until we marked a guard for retry again (usually 10
minutes for a primary guard, an hour for a non-primary guard). This
patch marks all guards as maybe-reachable when we exhaust the whole
set (this can happen when the network is down for some time).
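In sketch form, with simplified types standing in for the real guard
structures:

    #include <stdbool.h>
    #include <stddef.h>

    typedef enum {
      GUARD_REACHABLE_NO,
      GUARD_REACHABLE_MAYBE,
      GUARD_REACHABLE_YES,
    } guard_reachable_t;

    typedef struct {
      guard_reachable_t is_reachable;
    } guard_t;

    static void
    mark_all_maybe_reachable_if_exhausted(guard_t *guards, size_t n)
    {
      bool all_down = n > 0;
      for (size_t i = 0; i < n; ++i) {
        if (guards[i].is_reachable != GUARD_REACHABLE_NO)
          all_down = false;
      }
      if (!all_down)
        return;
      /* Every guard has failed: reset them all so we retry right
       * away instead of waiting out the per-guard retry timers. */
      for (size_t i = 0; i < n; ++i)
        guards[i].is_reachable = GUARD_REACHABLE_MAYBE;
    }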