When calculating the maximum sampled size, Tor would only count the
number of bridges in torrc, without considering that our state file
might already have sampled bridges in it. This caused problems when
people swapped bridges, since the following warning would trigger:
[warn] Not expanding the guard sample any further; just hit the
maximum sample threshold of 1
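To illustrate, a minimal sketch of the corrected bound, assuming the
limit should cover both the bridges listed in torrc and any bridges
already sampled into the state file (all identifiers here are made up,
not Tor's actual ones):

    /* Hypothetical sketch: the sample limit must account for guards
     * already sampled into the state file, not just torrc bridges. */
    static int
    bridge_sample_limit(int n_bridges_in_torrc, int n_sampled_in_state)
    {
      /* Before the fix, only n_bridges_in_torrc was counted, so after
       * swapping bridges the old, too-small limit could still apply. */
      int n = n_bridges_in_torrc;
      if (n_sampled_in_state > n)
        n = n_sampled_in_state;
      return n;
    }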
In that chutney test, the bridge client is configured to connect to
the same bridge at 127.0.0.1:5003 _and_ at [::1]:5003, with no
change in transports.
That meant, I think, that the descriptor was only assigned to the
first bridge when it arrived, and never to the second.
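For reference, the bridge lines in that test look roughly like this
(the fingerprint is a placeholder; the exact chutney configuration may
differ):

    UseBridges 1
    Bridge 127.0.0.1:5003 <fingerprint>
    Bridge [::1]:5003 <fingerprint>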
- Make sure we check at least two guards for descriptors before making
circuits. We typically use the first primary guard for circuits, but
it can also happen that we use the second primary guard (e.g. if we
pick our first primary guard as an exit), so we should make sure we
have descriptors for both of them.
- Remove BUG() from the guard_has_descriptor() check, since we now know
that this can happen in rare but legitimate situations as well, and we
should just move on to the next guard in that case (see the sketch
after this list).
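A rough sketch of the intended check (guard_has_descriptor() is the
real function name; the simplified guard type and everything else here
is illustrative):

    #include <stdbool.h>
    #include <stddef.h>

    /* Toy model of a guard; Tor's real entry_guard_t is much larger. */
    typedef struct guard_t {
      bool has_descriptor;
    } guard_t;

    /* Return true iff we have descriptors for the first two primary
     * guards, so that whichever of them a circuit ends up using is
     * actually buildable. */
    static bool
    first_two_primaries_have_descriptors(guard_t *const *primaries,
                                         size_t n_primaries)
    {
      size_t n_check = n_primaries < 2 ? n_primaries : 2;
      for (size_t i = 0; i < n_check; ++i) {
        if (!primaries[i]->has_descriptor)
          return false; /* Not a BUG(): wait, or try the next guard. */
      }
      return true;
    }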
Previously I'd made a bad assumption in the implementation of
prop271 in 0.3.0.1-alpha: I'd assumed that there couldn't be two
guards with the same identity. That's true for non-bridges, but in
the bridge case, we allow two bridges to have the same ID if they
have different addr:port combinations -- in order to have the same
bridge ID running multiple PTs.
Fortunately, this assumption wasn't deeply ingrained: it was enough to
stop enforcing the "one guard per ID" rule in the bridge case, and
instead enforce "one guard per <id,addr,port>".
We also needed to tweak our implementation of
get_bridge_info_for_guard, since it made the same incorrect
assumption.
Fixes bug 21027; bugfix on 0.3.0.1-alpha.
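In other words, bridge-guard identity is now keyed on the whole
<id,addr,port> tuple, along these lines (a toy model, not Tor's actual
structures):

    #include <stdbool.h>
    #include <string.h>

    typedef struct bridge_guard_t {
      char identity[20];  /* RSA identity digest (20 bytes). */
      char address[64];   /* Stringified address, e.g. "127.0.0.1". */
      unsigned port;
    } bridge_guard_t;

    /* Two bridge guards are "the same" only if identity, address, and
     * port all match; the same identity at a different addr:port is a
     * distinct guard (e.g. the same bridge running multiple PTs). */
    static bool
    bridge_guard_eq(const bridge_guard_t *a, const bridge_guard_t *b)
    {
      return memcmp(a->identity, b->identity, sizeof(a->identity)) == 0 &&
             strcmp(a->address, b->address) == 0 &&
             a->port == b->port;
    }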
Since we can call this function more than once before we update all
the confirmed_idx fields, we can't rely on all the relays having an
accurate confirmed_idx.
Fixes bug 21129; bugfix on 0.3.0.1-alpha.
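The safe approach is to recompute the next index by scanning, rather
than trusting a cached counter -- roughly like this (sketch only;
confirmed_idx is the real field name, the rest is made up):

    #include <stddef.h>

    typedef struct guard_t {
      int confirmed_idx; /* Confirmation order; -1 if not confirmed. */
    } guard_t;

    /* Derive the next confirmation index from the guards themselves,
     * since earlier calls may not have updated every confirmed_idx
     * field yet. */
    static int
    next_confirmed_idx(guard_t *const *guards, size_t n_guards)
    {
      int max_seen = -1;
      for (size_t i = 0; i < n_guards; ++i) {
        if (guards[i]->confirmed_idx > max_seen)
          max_seen = guards[i]->confirmed_idx;
      }
      return max_seen + 1;
    }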
In addition to not wanting to build circuits until we can see most
of the paths in the network, and in addition to not wanting to build
circuits until we have a consensus ... we shouldn't build circuits
until all of our (in-use) primary guards have descriptors that we can
use for them.
This is another bug 21242 fix.
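Taken together, the gate on circuit building looks something like this
(all three predicates are hypothetical stand-ins for Tor's real
checks):

    #include <stdbool.h>

    extern bool have_enough_path_info(void);
    extern bool have_live_consensus(void);
    extern bool in_use_primary_guards_have_descriptors(void);

    /* Build circuits only once all three preconditions hold. */
    static bool
    ready_to_build_circuits(void)
    {
      return have_enough_path_info() &&
             have_live_consensus() &&
             in_use_primary_guards_have_descriptors();
    }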
Actually, it's _fine_ to use a descriptorless guard for fetching
directory info -- we just shouldn't use it when building circuits.
Fortunately, we already have a "usage" flag that we can use here.
Partial fix for bug 21242.
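Sketched out, the usage distinction looks like this (the enum values
reflect my understanding of the guard_usage_t names in entrynodes.h;
treat the rest as illustrative):

    #include <stdbool.h>

    typedef enum {
      GUARD_USAGE_TRAFFIC = 0,  /* Building ordinary circuits. */
      GUARD_USAGE_DIRGUARD = 1, /* Fetching directory information. */
    } guard_usage_t;

    typedef struct guard_t {
      bool has_descriptor;
    } guard_t;

    /* A descriptorless guard is fine as a directory guard, but not for
     * traffic: we can't extend a circuit through a relay whose
     * descriptor we don't have. */
    static bool
    guard_usable_for(const guard_t *g, guard_usage_t usage)
    {
      if (usage == GUARD_USAGE_DIRGUARD)
        return true;
      return g->has_descriptor;
    }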
This relates to the 21242 fix -- entry_guard_pick_for_circuit()
should never yield a node without a descriptor when the node is going
to be used for traffic, since we won't be able to extend through it.
In the past, when we exhausted all guards in our sampled set, we just
waited until we marked a guard for retry again (usually 10 minutes for
a primary guard, 1 hour for a non-primary guard). This patch marks all
guards as maybe-reachable when we exhaust all guards (which can happen
when the network is down for some time).
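Schematically (the reachability states and names are simplified here):

    #include <stddef.h>

    typedef enum {
      GUARD_REACHABLE_NO,
      GUARD_REACHABLE_MAYBE,
      GUARD_REACHABLE_YES,
    } guard_reachable_t;

    typedef struct guard_t {
      guard_reachable_t is_reachable;
    } guard_t;

    /* Once every sampled guard has been tried and failed (say, the
     * network was down), flip the failed ones back to "maybe
     * reachable" so we retry right away instead of waiting out the
     * per-guard retry timers. */
    static void
    mark_all_guards_maybe_reachable(guard_t **guards, size_t n_guards)
    {
      for (size_t i = 0; i < n_guards; ++i) {
        if (guards[i]->is_reachable == GUARD_REACHABLE_NO)
          guards[i]->is_reachable = GUARD_REACHABLE_MAYBE;
      }
    }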
Previously, we had NumEntryGuards more or less hardwired to 1. Now we
have the code (but not the configurability) to choose randomly from
among the first N primary guards that would work, where N defaults
to 1.
Part of 20831 support for making NumEntryGuards work again.
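The selection amounts to something like the following sketch (Tor uses
its own RNG and data structures; everything here is simplified):

    #include <stddef.h>
    #include <stdlib.h>

    typedef struct guard_t {
      int usable; /* Nonzero if this primary guard could work now. */
    } guard_t;

    /* Pick uniformly at random among the first num_entry_guards usable
     * primary guards; with num_entry_guards == 1 this reproduces the
     * old behavior of always taking the first one that works. */
    static guard_t *
    pick_primary_guard(guard_t **primaries, size_t n_primaries,
                       size_t num_entry_guards)
    {
      /* Count how many leading usable primaries we may choose from. */
      size_t n_usable = 0;
      for (size_t i = 0;
           i < n_primaries && n_usable < num_entry_guards; ++i) {
        if (primaries[i]->usable)
          n_usable++;
      }
      if (n_usable == 0)
        return NULL;
      /* Return the k-th usable one, for uniformly random k. */
      size_t k = (size_t)rand() % n_usable;
      for (size_t i = 0; i < n_primaries; ++i) {
        if (primaries[i]->usable && k-- == 0)
          return primaries[i];
      }
      return NULL; /* Unreachable. */
    }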
Since we already had a separate function for getting the universe of
possible guards, all we had to do was tweak it to handle the
GS_TYPE_RESTRICTED case.
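Conceptually, the universe now dispatches on the selection type
(the GS_TYPE_* names are as above; the helper functions are
hypothetical):

    struct guard_list; /* Stands in for a smartlist of candidates. */
    extern struct guard_list *all_consensus_guards(void);
    extern struct guard_list *configured_bridges(void);
    extern struct guard_list *guards_permitted_by_restrictions(void);

    typedef enum {
      GS_TYPE_NORMAL,     /* Sample from all usable consensus guards. */
      GS_TYPE_BRIDGE,     /* Sample from the configured bridges. */
      GS_TYPE_RESTRICTED, /* Sample only from permitted guards (e.g.
                           * when EntryNodes restricts the choice). */
    } guard_selection_type_t;

    static struct guard_list *
    guard_sample_universe(guard_selection_type_t type)
    {
      switch (type) {
      case GS_TYPE_NORMAL:
        return all_consensus_guards();
      case GS_TYPE_BRIDGE:
        return configured_bridges();
      case GS_TYPE_RESTRICTED:
        return guards_permitted_by_restrictions();
      }
      return 0; /* Unreachable with a valid type. */
    }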
While testing, asn found that this function can be reached with
GUARD_STATE_COMPLETE circuits; I believe this happens when
cannibalization occurs.
The added complexity of handling one more state made it reasonable
to turn the main logic here into a switch statement.
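In simplified form (the state names follow the one mentioned above;
the handler itself is only a sketch):

    typedef enum {
      GUARD_STATE_USABLE_ON_COMPLETION,
      GUARD_STATE_USABLE_IF_NO_BETTER_GUARD,
      GUARD_STATE_WAITING_FOR_BETTER_GUARD,
      GUARD_STATE_COMPLETE,
    } guard_state_t;

    /* With one more state to cover (COMPLETE, reachable here via
     * circuit cannibalization), a switch reads more clearly than a
     * chain of if/else. */
    static int
    circuit_guard_state_usable(guard_state_t st)
    {
      switch (st) {
      case GUARD_STATE_COMPLETE:
        return 1; /* Already judged usable, e.g. cannibalized. */
      case GUARD_STATE_USABLE_ON_COMPLETION:
        return 1;
      case GUARD_STATE_USABLE_IF_NO_BETTER_GUARD:
      case GUARD_STATE_WAITING_FOR_BETTER_GUARD:
        return 0;
      }
      return 0; /* Unreachable with a valid state. */
    }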
Letting the maximum sample size grow proportionally to the number of
guards defeats its purpose to a certain extent. Noted by asn during
code review.
Fixes bug 20920; bug not in any released (or merged) version of Tor.
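The fixed computation clamps the proportional term with an absolute
ceiling, roughly like so (the constants are illustrative, not
prop271's actual values):

    #define MIN_SAMPLE_SIZE 15 /* Never sample fewer than this. */
    #define MAX_SAMPLE_SIZE 60 /* Absolute cap, however big the universe. */
    #define SAMPLE_PERCENT  30 /* Proportional target. */

    static int
    max_sample_size(int n_guards_in_universe)
    {
      int size = (n_guards_in_universe * SAMPLE_PERCENT) / 100;
      if (size < MIN_SAMPLE_SIZE)
        size = MIN_SAMPLE_SIZE;
      if (size > MAX_SAMPLE_SIZE)
        size = MAX_SAMPLE_SIZE; /* The fix: never exceed the cap. */
      return size;
    }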