Use IP address, effective family, and contact info to detect fallbacks
run by the same operator, and limit the list to one fallback per
operator.
Also analyse netblock, ports, IP version, and Exit flag,
and print the results. Don't exclude any fallbacks from
the list because of netblocks, ports, IP version, or
Exit flag.
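
For illustration, a minimal sketch of the one-per-operator rule, assuming
each candidate is an Onionoo-style details dict with 'contact',
'effective_family', 'fingerprint', and 'or_addresses' fields; the real
script's field names, ordering, and tie-breaking may differ:

  def dedupe_by_operator(candidates):
      # candidates: best-first list of Onionoo-style relay dicts
      seen_contacts = set()
      seen_family_members = set()
      seen_ipv4 = set()
      kept = []
      for relay in candidates:
          contact = (relay.get('contact') or '').strip().lower()
          family = set(relay.get('effective_family') or [])
          family.add(relay['fingerprint'])
          # or_addresses entries look like '192.0.2.1:9001' or '[2001:db8::1]:9001'
          ipv4 = relay['or_addresses'][0].rsplit(':', 1)[0]
          if contact and contact in seen_contacts:
              continue                      # same operator by contact info
          if not seen_family_members.isdisjoint(family):
              continue                      # same operator by effective family
          if ipv4 in seen_ipv4:
              continue                      # same operator by IPv4 address
          kept.append(relay)
          if contact:
              seen_contacts.add(contact)
          seen_family_members.update(family)
          seen_ipv4.add(ipv4)
      return kept
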
But because advertised bandwidth is controlled by the relays themselves
(and so can be inflated), use consensus weight and the median
weight-to-bandwidth ratio to approximate measured bandwidth.
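
A rough sketch of that approximation (attribute names are illustrative,
not the script's actual data structure):

  def estimate_measured_bandwidth(relay, all_relays):
      # median of the (consensus weight / measured bandwidth) ratio over
      # relays that have a measured bandwidth
      ratios = sorted(r['consensus_weight'] / float(r['measured_bandwidth'])
                      for r in all_relays if r.get('measured_bandwidth'))
      if not ratios:
          return None
      median_ratio = ratios[len(ratios) // 2]
      # dividing a relay's consensus weight by the median ratio yields an
      # estimate of its measured bandwidth in the same units
      return relay['consensus_weight'] / median_ratio
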
Includes minor comment changes and parameter reordering.
Previously, we would cut the list down to 100 fallbacks, then check
whether they could serve a consensus, and comment them out if they
couldn't. That left us with fewer than 100 active fallbacks.
Now, we keep testing candidates and stop once there are 100 active
fallbacks.
Also count fallbacks with identical contact info.
Also fix minor logging issues.
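
A sketch of the new stopping rule; the names are hypothetical, and
can_serve_consensus() stands in for the consensus download check and is
assumed to return True or False:

  TARGET_FALLBACK_COUNT = 100   # hypothetical constant name

  def pick_active_fallbacks(candidates, can_serve_consensus):
      # candidates are in preference order
      active = []
      for fallback in candidates:
          if len(active) >= TARGET_FALLBACK_COUNT:
              break
          if can_serve_consensus(fallback):
              active.append(fallback)
          # candidates that fail are skipped rather than commented out, so
          # the final list reaches the full target (when enough candidates pass)
      return active
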
Give each fallback a fixed weight of 10.0 for client selection.
Fallbacks must have at least 3000 consensus weight.
This is (nominally) 100 times the expected extra load of
20 kilobytes per second (50 GB per month).
Fixes issue #17905.
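
For reference, the 50 GB per month figure follows directly from
20 kilobytes per second (plain arithmetic, not code from the script):

  KILOBYTE = 1000                          # decimal units, as for bandwidth
  SECONDS_PER_MONTH = 30 * 24 * 60 * 60    # 2,592,000 seconds in a 30-day month
  extra_load_bytes = 20 * KILOBYTE * SECONDS_PER_MONTH
  print(extra_load_bytes / 1e9)            # ~51.8, i.e. roughly 50 GB per month
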
Improve the download test:
* Allow IPv4 DirPort checks to be turned off.
* Add a timeout to stem's consensus download.
* Actually check for download errors, rather than ignoring them.
* Simplify the timeout and download error checking logic.
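
A sketch of the improved download test using stem's remote descriptor
module; the timeout value, retry count, and error reporting shown here
are illustrative:

  from stem.descriptor.remote import DescriptorDownloader

  def fallback_serves_consensus(address, dirport, timeout=30):
      downloader = DescriptorDownloader()
      query = downloader.get_consensus(endpoints=[(address, dirport)],
                                       timeout=timeout,
                                       retries=0)
      query.run(suppress=True)        # don't raise, inspect the error instead
      if query.error:
          print('consensus download from %s:%d failed: %s'
                % (address, dirport, query.error))
          return False
      return True
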
Tweak whitelist/blacklist checks to be more robust.
Improve logging, make it warn by default.
Cleanse fallback comments more thoroughly:
* non-printables (yes, ContactInfo can have these)
* // comments (don't rely on newlines to prevent // */ escapes)
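
A sketch of that cleansing, assuming the contact info ends up inside a
/* ... */ comment in the generated C file; the exact substitutions in the
script may differ:

  import string

  def cleanse_c_comment(raw):
      # drop non-printable characters (ContactInfo can contain them);
      # whitespace handling is simplified here
      cleansed = ''.join(c for c in raw if c in string.printable)
      # neutralise C comment delimiters instead of relying on newlines,
      # so the text can't open or close a comment by itself
      cleansed = cleansed.replace('*/', '* /')
      cleansed = cleansed.replace('/*', '/ *')
      cleansed = cleansed.replace('//', '/ /')
      return cleansed
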
Allow fallback directories which have been stable for 7 days
to work around #18050, which causes relays to submit descriptors
with 0 DirPorts when restarted. (Particularly during Tor version
upgrades.)
Ignore low fallback directory count in alpha builds.
Set the target count to 50.
Allow fallback directories which have been stable for 30 days
to work around #18050, which causes relays to submit descriptors
with 0 DirPorts when restarted. (Particularly during Tor version
upgrades.)
Ignore low fallback directory count in alpha builds.
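
A sketch of the stability check behind both entries above, using
Onionoo's last_restarted field; the threshold constant and date parsing
are illustrative:

  from datetime import datetime

  STABLE_DAYS = 30    # 7 in the earlier entry, 30 here

  def has_been_stable(details, now=None):
      # details: an Onionoo details document for one relay
      now = now or datetime.utcnow()
      restarted = datetime.strptime(details['last_restarted'],
                                    '%Y-%m-%d %H:%M:%S')
      return (now - restarted).days >= STABLE_DAYS
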
* support maximum history age in _avg_generic_history()
* fix division-by-zero trap in _avg_generic_history()
* skip missing (i.e. null/None) intervals in _avg_generic_history()
* Python timedelta.total_seconds() function not available in 2.6;
replace with equivalent expression
* set DEBUG logging level to make relay exclusion reasons visible
* move CUTOFF_GUARD test to end in order to expose more exclusion
reasons
Patch by "starlight", merge modifications by "teor".
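
The history-averaging fixes amount to something like the following
sketch; the (value, interval length, age) layout and parameter names are
illustrative, not the script's actual history format:

  def timedelta_to_seconds(td):
      # equivalent of td.total_seconds(), which Python 2.6 lacks
      return (td.microseconds
              + (td.seconds + td.days * 24 * 3600) * 10**6) / 10.0**6

  def avg_history(intervals, max_age_seconds=None):
      # intervals: iterable of (value, interval_seconds, age_seconds)
      total = 0.0
      weight = 0.0
      for value, interval_seconds, age_seconds in intervals:
          if value is None:
              continue                  # skip missing (null/None) intervals
          if max_age_seconds is not None and age_seconds > max_age_seconds:
              continue                  # honour the maximum history age
          total += value * interval_seconds
          weight += interval_seconds
      if weight <= 0:
          return None                   # avoid the division-by-zero trap
      return total / weight
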
Allow cached or outdated Onionoo data to be used to choose
fallback directories, as long as it's less than a day old.
Modify the last-modified date checks in preparation for an Onionoo change
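
A sketch of the freshness check on cached Onionoo data; the timestamp
format and the one-day limit mirror the text above, but the real script
may key off the HTTP Last-Modified header instead:

  import datetime

  MAX_ONIONOO_AGE = datetime.timedelta(days=1)

  def cached_onionoo_is_fresh(last_modified_string, now=None):
      now = now or datetime.datetime.utcnow()
      last_modified = datetime.datetime.strptime(last_modified_string,
                                                 '%Y-%m-%d %H:%M:%S')
      return now - last_modified < MAX_ONIONOO_AGE
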
"Tor has included a feature to fetch the initial consensus from nodes
other than the authorities for a while now. We just haven't shipped a
list of alternate locations for clients to go to yet.
One reason why we might want to ship tor with a list of additional places
where clients can find the consensus is that it makes authority
reachability and bandwidth less important.
We want them to have been around and using their current key, address,
and port for a while now (120 days), and have been running, a guard,
and a v2 directory mirror for most of that time."
Features:
* whitelist and blacklist for an opt-in/opt-out trial.
* excludes BadExits, tor versions that aren't recommended, and low
consensus weight directory mirrors.
* reduces the weighting of Exits to avoid overloading them.
* places limits on the weight of any one fallback.
* includes an IPv6 address and orport for each FallbackDir, as
implemented in #17327. (Tor won't bootstrap using IPv6 fallbacks
until #17840 is merged.)
* generated output includes timestamps & Onionoo URL for traceability.
* unit test ensures that we successfully load all included default
fallback directories.
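
For a sense of the generated output mentioned above, here is a sketch of
one emitted FallbackDir entry; the exact field layout, quoting, and
weight formatting are assumptions, not the script's verbatim output:

  def format_fallback_entry(ipv4, dirport, orport, fingerprint,
                            ipv6=None, weight=10.0):
      # one element of the C string array consumed by Tor's fallback list
      entry = '"%s:%d orport=%d id=%s' % (ipv4, dirport, orport, fingerprint)
      if ipv6:
          entry += ' ipv6=%s' % ipv6        # e.g. [2001:db8::1]:443
      entry += ' weight=%.1f",' % weight
      return entry

  print(format_fallback_entry('192.0.2.1', 80, 443,
                              '0123456789ABCDEF0123456789ABCDEF01234567',
                              ipv6='[2001:db8::1]:443'))
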
Closes ticket #15775. Patch by "teor".
OnionOO script by "weasel", "teor", "gsathya", and "karsten".
These scripts are now a little more bulletproof, cache data a little
better, and generate more information. Notably, they search for the
vertices or edges to cut that would lower the size of the largest
strongly connected component (SCC).
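
As a sketch of that search, assuming the callgraph is available as a
networkx DiGraph (networkx is an assumption here, not necessarily what
the scripts use): try removing each edge inside the largest SCC and rank
the removals by how much they shrink it.

  import networkx as nx

  def best_single_edge_cuts(callgraph, top_n=5):
      # callgraph: a networkx DiGraph of caller -> callee edges
      largest = max(nx.strongly_connected_components(callgraph), key=len)
      scc = callgraph.subgraph(largest).copy()
      results = []
      for edge in list(scc.edges()):
          trial = scc.copy()
          trial.remove_edge(*edge)
          new_size = max(len(c)
                         for c in nx.strongly_connected_components(trial))
          results.append((new_size, edge))
      results.sort(key=lambda pair: pair[0])
      return results[:top_n]
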
Additional fixes to make the change work:
- fix Python 2 vs 3 issues
- fix some PEP 8 warnings
- handle paths with numbers correctly
- mention the make rule in doc/HACKING.