So, long ago, XXX012 meant, "before Tor 0.1.2 is released, we
had better revisit this comment and fix it!"
But we have a huge pile of such comments accumulated for a large
number of released versions! Not cool.
So, here's what I tried to do:
* The 0.2.9 and 0.2.8 ones (XXX029, XXX028) are retained, since
those versions are not yet released.
* XXX+ or XXX++ or XXX++++ or whatever means, "This one looks
quite important!"
* The others, after one-by-one examination, are downgraded to
plain old XXX. Which doesn't mean they aren't a problem -- just
that they cannot possibly be a release-blocking problem.
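For example, after this pass the markers look roughly like this
(illustrative comments, not real lines from the source):

  /* XXX012 revisit this before 0.1.2 ships.  (the old, version-tagged style) */
  /* XXX revisit this someday; not a release blocker.  (downgraded) */
  /* XXX++ this one can deadlock under load.  (flagged as important) */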
In dirserv_compute_performance_thresholds, we allocate arrays based
on the length of 'routers', a list of routerinfo_t, but loop over
the nodelist. The 'routers' list can be shorter than the nodelist
when relays have been filtered out by routers_make_ed_keys_unique,
leading to an out-of-bounds write on directory authorities.
This bug was originally introduced in 26e89742, but it doesn't
appear to have been triggerable until routers_make_ed_keys_unique
was introduced in 13a31e72.
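A minimal sketch of the shape of the fix, assuming the usual
smartlist/nodelist helpers; the per-node computation is elided and
not the literal diff:

  /* Size the per-relay arrays from the same list we iterate over (the
   * nodelist), instead of from rl->routers, which
   * routers_make_ed_keys_unique() may have shrunk. */
  const smartlist_t *nodes = nodelist_get_list();
  uint32_t *uptimes = tor_calloc(smartlist_len(nodes), sizeof(uint32_t));
  int i = 0;
  SMARTLIST_FOREACH_BEGIN(nodes, const node_t *, node) {
    (void)node;
    uptimes[i++] = 0; /* ...per-node stats, computed as before... */
  } SMARTLIST_FOREACH_END(node);
  tor_free(uptimes);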
Fixes bug 19032; bugfix on tor 0.2.8.2-alpha.
We should avoid launching a consensus fetch if we don't want one,
but if we do end up with an extra one, we should let the other checks
take care of it.
This change simply renames them by removing the "Testing" prefix;
they no longer require TestingTorNetwork to be enabled.
Fixes #18481
Signed-off-by: David Goulet <dgoulet@ev0ke.net>
We've got to make sure that every single subsequent calculation in
dirserv_generate_networkstatus_vote_obj() is based on the list of
routerinfo_t *after* we've removed possible duplicates, not before.
Fortunately, none of the functions that were taking a routerlist_t
as an argument were actually using any fields other than this list
of routers.
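In other words, something like the sketch below, where everything
after routers_make_ed_keys_unique() works from the filtered
smartlist. The surrounding details, including the helper's exact
signature, are assumptions:

  /* rl is the routerlist_t we are voting from. Copy its routers, drop
   * duplicate ed25519 keys, and pass the *same* filtered list to every
   * later step of vote generation. */
  smartlist_t *routers = smartlist_new();
  smartlist_add_all(routers, rl->routers);
  routers_make_ed_keys_unique(routers);
  /* ...every subsequent helper takes 'routers', not the routerlist_t... */
  smartlist_free(routers);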
Resolves issue 18318.DG3.
They are no longer "all" digests, but only the "common" digests.
Part of 17795.
This is an automated patch I made with a couple of perl one-liners:
perl -i -pe 's/crypto_digest_all/crypto_common_digests/g;' src/*/*.[ch]
perl -i -pe 's/\bdigests_t\b/common_digests_t/g;' src/*/*.[ch]
Once tor is downloading a usable consensus, no other connection
attempts are needed.
Choose a connection to keep, favouring:
* fallback directories over authorities,
* connections initiated earlier over later connections
Close all other connections downloading a consensus.
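A minimal sketch of that selection rule, assuming a hypothetical
connection_is_to_fallback() helper; the real code may structure this
differently:

  static dir_connection_t *
  choose_consensus_conn_to_keep(const smartlist_t *consensus_conns)
  {
    dir_connection_t *best = NULL;
    SMARTLIST_FOREACH_BEGIN(consensus_conns, dir_connection_t *, conn) {
      int conn_fb = connection_is_to_fallback(conn);  /* hypothetical helper */
      int best_fb = best ? connection_is_to_fallback(best) : 0;
      if (!best ||
          (conn_fb && !best_fb) ||
          (conn_fb == best_fb &&
           conn->base_.timestamp_created < best->base_.timestamp_created))
        best = conn;
    } SMARTLIST_FOREACH_END(conn);
    return best; /* every other consensus fetch gets closed */
  }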
Prop210: Add attempt-based connection schedules
Existing tor schedules increment the schedule position on failure,
then retry the connection after the scheduled time.
To make multiple simultaneous connections, we need to increment the
schedule position when making each attempt, then retry a (potentially
simultaneous) connection after the scheduled time.
(Also change find_dl_schedule_and_len to find_dl_schedule, as it no
longer takes or returns len.)
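Roughly the difference, in a simplified sketch: the attempt-counter
field and the plain int array here stand in for the real schedule
plumbing.

  /* Advance the schedule on every *attempt*, not only after a failure,
   * so several fetches can be scheduled before the first one has had a
   * chance to fail. */
  time_t
  next_attempt_time(download_status_t *dls, const int *schedule,
                    int schedule_len, time_t now)
  {
    int pos = MIN(dls->n_download_attempts, schedule_len - 1);
    dls->n_download_attempts++;   /* previously bumped only on failure */
    return now + schedule[pos];
  }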
Prop210: Add multiple simultaneous consensus downloads for clients
Make connections on TestingClientBootstrapConsensus*DownloadSchedule,
incrementing the schedule each time the client attempts to connect.
Check if the number of downloads is less than
TestingClientBootstrapConsensusMaxInProgressTries before trying any
more connections.
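Something like this guard, where the counting helper is hypothetical
and the option is the one named above:

  /* Only launch another consensus fetch while the number of fetches
   * already in progress is below the configured maximum. */
  static int
  should_launch_consensus_fetch(const or_options_t *options)
  {
    const int max = options->TestingClientBootstrapConsensusMaxInProgressTries;
    /* count_consensus_fetches_in_progress() is a hypothetical helper. */
    return count_consensus_fetches_in_progress() < max;
  }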
Decrease minimum consensus interval to 10 seconds
when TestingTorNetwork is set. (Or 5 seconds for
the first consensus.)
Fix code that assumes larger interval values.
This assists in quickly bootstrapping a testing
Tor network.
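Concretely, that amounts to constants along these lines (macro names
here are illustrative):

  #define MIN_VOTE_INTERVAL_TESTING          10 /* s, when TestingTorNetwork is set */
  #define MIN_VOTE_INTERVAL_TESTING_INITIAL   5 /* s, for the first consensus only */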
Fixes bugs 13718 & 13823.
Now, if a router ever changes its microdescriptor, but the new
microdescriptor SHA256 hash has the same 160-bit prefix as the old
one, we treat it as a new microdescriptor when deciding whether to
copy status information.
(This function is also used to compare SHA1 digests of router
descriptors, but don't worry: the descriptor_digest field holds
either a SHA256 hash or a SHA1 hash padded with 0 bytes.)
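Sketched with illustrative variable names, the comparison change is:

  /* Previously this compared only DIGEST_LEN (20) bytes; now the whole
   * 32-byte descriptor_digest is compared. rs_old/rs_new are illustrative. */
  const int same_descriptor =
    tor_memeq(rs_old->descriptor_digest, rs_new->descriptor_digest,
              DIGEST256_LEN);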
One pain point in evolving the Tor design and implementation has
been adding code that makes clients reject directory documents that
they previously would have accepted, if those documents actually
exist.
When this happened, the clients would get the document, reject it,
and then decide to try downloading it again, ad infinitum. This
problem becomes particularly obnoxious with authorities, since if
some authorities accept a descriptor that others don't, the ones
that don't accept it would go crazy trying to re-fetch it over and
over. (See for example ticket #9286.)
This patch tries to solve the problem by tracking, when a descriptor
isn't parseable, what its digest was, and whether it is invalid
because of some flaw in the portion of the document covered by the
digest. (This excludes RSA signature problems: RSA signatures aren't
included in the digest. This means that a directory authority can
still put another directory authority into a loop by mentioning a
descriptor, and then serving that descriptor with an invalid RSA
signature. But that would also make the misbehaving directory
authority get DoSed by the server it's attacking, so it's not much
of an issue.)
We already have a mechanism to mark something undownloadable with
downloadstatus_mark_impossible(); we use that here for
microdescriptors, extrainfos, and router descriptors.
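The rough shape of that, with a hypothetical lookup helper around the
existing downloadstatus_mark_impossible():

  /* When a document fails to parse, look up the download status entry
   * for its digest and mark it so we never try to fetch it again. */
  static void
  note_unparseable_document(const char *digest)
  {
    /* lookup_download_status_by_digest() is a hypothetical helper. */
    download_status_t *dls = lookup_download_status_by_digest(digest);
    if (dls)
      downloadstatus_mark_impossible(dls);
  }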
Unit tests to follow in another patch.
Closes ticket #11243.