This is in accordance with our usual policy against freelists,
now that working allocators are everywhere.
It should also improve memarea.c's test coverage.
I also doubt that this code ever helped performance.
They are no longer "all" digests, but only the "common" digests.
Part of 17795.
This is an automated patch I made with a couple of perl one-liners:
  perl -i -pe 's/crypto_digest_all/crypto_common_digests/g;' src/*/*.[ch]
  perl -i -pe 's/\bdigests_t\b/common_digests_t/g;' src/*/*.[ch]
We've never actually tested this support, and we should probably assume
it's broken.
To the best of my knowledge, only OpenVMS has this, and even on
OpenVMS it's a compile-time option to disable it. And I don't think
we build on OpenVMS anyway. (Everybody else seems to be working
around the 2038 problem by using a 64-bit time_t, which won't
overflow for roughly 292 billion years.)
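For the record, here is the arithmetic behind that figure as a
standalone sketch (the year length is the average Gregorian year):

  #include <stdio.h>
  #include <stdint.h>

  /* Back-of-the-envelope check: the largest value a signed 64-bit
   * time_t can hold, divided into years of 31,556,952 seconds
   * (365.2425 days), is on the order of 292 billion years. */
  int
  main(void)
  {
    int64_t max_time = INT64_MAX;      /* 9223372036854775807 seconds */
    int64_t secs_per_year = 31556952;
    printf("%lld billion years\n",
           (long long)(max_time / secs_per_year / 1000000000));
    return 0;
  }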
Closes ticket 18184.
Avoid using a pronoun where it makes comments unclear.
Avoid using gender for things that don't have it.
Avoid assigning gender to people unnecessarily.
When a relay does not have an open directory port but has an ORPort
configured and is accepting client connections, it can now service
tunnelled directory requests, too. This was already true of relays
with a DirPort configured.
We also conditionally stop advertising this functionality when the
relay is nearing its bandwidth usage limit, the same way DirPort
advertisement is determined.
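A minimal sketch of that advertisement rule, with illustrative names
and an illustrative threshold (this is not Tor's actual API):

  #include <stdbool.h>
  #include <stdint.h>

  /* Advertise tunnelled directory support only while accepting client
   * connections and not nearing the bandwidth usage limit. The 90%
   * threshold here is illustrative, not taken from the patch. */
  static bool
  should_advertise_begindir(bool accepting_client_conns,
                            uint64_t bytes_used, uint64_t usage_limit)
  {
    if (!accepting_client_conns)
      return false;
    if (usage_limit && bytes_used >= usage_limit - usage_limit / 10)
      return false;   /* within 10% of the limit: stop advertising */
    return true;
  }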
Partial implementation of prop 237, ticket 12538
Prop210: Add attempt-based connection schedules
Existing Tor schedules increment the schedule position on failure,
then retry the connection after the scheduled time.
To make multiple simultaneous connections, we need to increment the
schedule position when making each attempt, then retry a (potentially
simultaneous) connection after the scheduled time.
(Also change find_dl_schedule_and_len to find_dl_schedule, as it no
longer takes or returns len.)
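The difference between the two styles, as a hedged sketch
(illustrative code, not the actual schedule implementation):

  #include <time.h>

  /* Failure-based (old behavior): advance the schedule position only
   * when an attempt fails, then wait out the current delay. */
  static time_t
  next_try_failure_based(const int *schedule, int n, int *pos,
                         int failed)
  {
    if (failed && *pos < n - 1)
      (*pos)++;
    return time(NULL) + schedule[*pos];
  }

  /* Attempt-based (new behavior): advance the position on every
   * attempt, so several connections can be pending at once. */
  static time_t
  next_try_attempt_based(const int *schedule, int n, int *pos)
  {
    time_t next = time(NULL) + schedule[*pos];
    if (*pos < n - 1)
      (*pos)++;
    return next;
  }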
Prop210: Add multiple simultaneous consensus downloads for clients
Make connections on TestingClientBootstrapConsensus*DownloadSchedule,
incrementing the schedule each time the client attempts to connect.
Check that the number of in-progress downloads is less than
TestingClientBootstrapConsensusMaxInProgressTries before trying any
more connections.
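A sketch of that cap (the option name is real; the helper and its
signature are hypothetical):

  #include <stdbool.h>

  /* Launch another (possibly simultaneous) consensus download attempt
   * only while fewer than the configured maximum are in progress;
   * max_tries stands for
   * TestingClientBootstrapConsensusMaxInProgressTries. */
  static bool
  may_launch_consensus_attempt(int n_in_progress, int max_tries)
  {
    return n_in_progress < max_tries;
  }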
Test that TestingDirAuthVote{Exit,Guard,HSDir}[Strict] work on
routersets matching all routers, one router, and no routers.
TestingDirAuthVote{Exit,Guard,HSDir} set the corresponding flag
on routerstatuses which match the routerset, but leave other flags
unmodified.
TestingDirAuthVote{Exit,Guard,HSDir}Strict clear the corresponding flag
on routerstatuses which don't match the routerset.
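The intended semantics, as an illustrative sketch using the Exit flag
(simplified types, not the actual routerstatus/routerset code):

  #include <stdbool.h>

  typedef struct {
    bool is_exit;   /* stand-in for the flag being voted on */
    /* ... all other flags are left untouched ... */
  } routerstatus_sketch_t;

  static void
  apply_testing_dirauth_vote_exit(routerstatus_sketch_t *rs,
                                  bool matches_routerset, bool strict)
  {
    if (matches_routerset)
      rs->is_exit = true;   /* TestingDirAuthVoteExit: set on match */
    else if (strict)
      rs->is_exit = false;  /* ...ExitStrict: clear on non-match */
    /* otherwise the flag keeps the value normal voting computed */
  }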
Extrainfo documents are now ed-signed just as router descriptors
are, according to proposal 220. This patch also includes some more
tests for successful/failing parsing, and fixes a crash bug in
ed25519 descriptor parsing.
Routers now use TAP and ntor onion keys to sign their identity keys,
and put these signatures in their descriptors. That allows other
parties to be confident that the onion keys are indeed controlled by
the router that generated the descriptor.
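Conceptually, verification looks like this (every name below is a
hypothetical stand-in, not Tor's actual API):

  #include <stddef.h>

  typedef struct onion_key_t onion_key_t;  /* opaque: a TAP/ntor key */
  int onion_key_verify(const onion_key_t *key,
                       const unsigned char *sig, size_t sig_len,
                       const unsigned char *msg, size_t msg_len);

  /* A valid signature over the identity-key digest, checked with the
   * onion key published in the same descriptor, shows the descriptor's
   * author controls the private half of that onion key. */
  static int
  crosscert_ok(const onion_key_t *onion_key,
               const unsigned char *sig, size_t sig_len,
               const unsigned char *identity_digest, size_t digest_len)
  {
    return onion_key_verify(onion_key, sig, sig_len,
                            identity_digest, digest_len);
  }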
By now, support in the network is widespread and it's time to require
more modern crypto on all Tor instances, whether they're clients or
servers. By doing this early in 0.2.6, we can be sure that at some point
all clients will have reasonable support.
1. The test that adds things to the cache needs to set the clock back so
that the descriptors it adds are valid.
2. We split ROUTER_TOO_OLD out of ROUTER_NOT_NEW, so that we can
distinguish "already had it" from "rejected because of old published
date".
3. We make extrainfo_insert() return a was_router_added_t, and we
make its caller use it correctly. This is probably redundant with
the extrainfo_is_bogus flag.
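A sketch of the distinction in (2) and the return type in (3); the
exact enumerators and values in the source may differ:

  typedef enum {
    ROUTER_ADDED_SUCCESSFULLY = 0,
    ROUTER_NOT_NEW = -1,  /* we already had this descriptor */
    ROUTER_TOO_OLD = -2,  /* rejected: published date too old */
  } was_router_added_t;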
We didn't really have test coverage for these parsing functions, so
I went and made some. These tests also verify that the parsing
functions set the list of invalid digests correctly.
One pain point in evolving and implementing the Tor design has been
adding code that makes clients reject directory documents that they
previously would have accepted, if those documents actually exist.
When this happened, the clients would get the document, reject it,
and then decide to try downloading it again, ad infinitum. This
problem becomes particularly obnoxious with authorities, since if
some authorities accept a descriptor that others don't, the ones
that don't accept it would go crazy trying to re-fetch it over and
over. (See for example ticket #9286.)
This patch tries to solve this problem by tracking, if a descriptor
isn't parseable, what its digest was, and whether it is invalid
because of some flaw that applies to the portion covered by the
digest. (This excludes RSA signature problems: RSA signatures
aren't included in the digest. This means that a directory
authority can still put another directory authority into a loop by
mentioning a descriptor, and then serving that descriptor with an
invalid RSA signature. But that would also make the misbehaving
directory authority get DoSed by the server it's attacking, so it's
not much of an issue.)
We already have a mechanism to mark something undownloadable with
downloadstatus_mark_impossible(); we use that here for
microdescriptors, extrainfos, and router descriptors.
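Sketched out below; only downloadstatus_mark_impossible() is named in
this message, and everything else is a hypothetical stand-in for the
surrounding code:

  typedef struct download_status_t download_status_t;  /* opaque */
  download_status_t *find_download_status_by_digest(const char *digest);
  void downloadstatus_mark_impossible(download_status_t *dls);

  /* When a document fails to parse for a reason covered by its digest,
   * record that digest as impossible so we never re-request it. */
  static void
  note_unparseable_document(const char *digest)
  {
    download_status_t *dls = find_download_status_by_digest(digest);
    if (dls)
      downloadstatus_mark_impossible(dls);
  }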
Unit tests to follow in another patch.
Closes ticket #11243.