Merge remote-tracking branch 'origin/maint-0.2.4'

Conflicts:
	src/or/routerlist.c
This commit is contained in:
Nick Mathewson 2013-03-15 12:20:17 -04:00
commit b163e801bc
21 changed files with 16 additions and 731 deletions

changes/bug7280 (new file, 4 lines)

@@ -0,0 +1,4 @@
o Minor bugfixes:
  - Fix some bugs in tor-fw-helper-natpmp when trying to build and
    run it on Windows. More bugs likely remain. Patch from Gisle Vanem.
    Fixes bug 7280; bugfix on 0.2.3.1-alpha.


@@ -1,479 +0,0 @@
Tor Incentives Design Brainstorms
1. Goals: what do we want to achieve with an incentive scheme?
1.1. Encourage users to provide good relay service (throughput, latency).
1.2. Encourage users to allow traffic to exit the Tor network from
their node.
2. Approaches to learning who should get priority.
2.1. "Hard" or quantitative reputation tracking.
In this design, we track the number of bytes and throughput in and
out of nodes we interact with. When a node asks to send or receive
bytes, we provide service proportional to our current record of the
node's value. One approach is to let each circuit be either a normal
circuit or a premium circuit, and nodes can "spend" their value by
sending and receiving bytes on premium circuits: see section 4.1 for
details of this design. Another approach (section 4.2) would treat
all traffic from the node with the same priority class, and so nodes
that provide resources will get and provide better service on average.
This approach could be complemented with an anonymous e-cash
implementation to let people spend reputations gained from one context
in another context.
2.2. "Soft" or qualitative reputation tracking.
Rather than accounting for every byte (if I owe you a byte, I don't
owe it anymore once you've spent it), instead I keep a general opinion
about each server: my opinion increases when they do good work for me,
and it decays with time, but it does not decrease as they send traffic.
Therefore we reward servers who provide value to the system without
nickel-and-diming them at each step. We also let them benefit from
relaying traffic for others without having to "reserve" some of the
payment for their own use. See section 4.3 for a possible design.
2.3. Centralized opinions from the reputation servers.
The above approaches are complex and we don't have all the answers
for them yet. A simpler approach is just to let some central set
of trusted servers (say, the Tor directory servers) measure whether
people are contributing to the network, and provide a signal about
which servers should be rewarded. They can even do the measurements
via Tor so servers can't easily perform only when they're being
tested. See section 4.4.
2.4. Reputation servers that aggregate opinions.
The option above has the directory servers doing all of the
measurements. This doesn't scale. We can set it up so we have "deputy
testers" -- trusted other nodes that do performance testing and report
their results.
If we want to be really adventurous, we could even
accept claims from every Tor user and build a complex weighting /
reputation system to decide which claims are "probably" right.
One possible way to implement the latter is something similar to
EigenTrust [http://www.stanford.edu/~sdkamvar/papers/eigentrust.pdf],
where the opinions of nodes with high reputation are weighted more
heavily.
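A toy illustration of EigenTrust-style aggregation (the fixed network size and all names are assumptions for the example, not a Tor design): each node i reports normalized local trust c[i][j] in node j, and the global reputation vector is found by power iteration, so that opinions are weighted by the reputation of whoever holds them:

```c
#include <stddef.h>

#define N_NODES 3  /* assumed tiny network for illustration */

/* c[i][j] is node i's normalized local trust in node j (rows sum to 1).
 * t converges toward the global reputation vector. */
static void
global_reputation(const double c[N_NODES][N_NODES],
                  double t[N_NODES], int iterations)
{
  double next[N_NODES];
  for (size_t i = 0; i < N_NODES; i++)
    t[i] = 1.0 / N_NODES;             /* start from a uniform vector */
  while (iterations-- > 0) {
    for (size_t j = 0; j < N_NODES; j++) {
      next[j] = 0.0;
      for (size_t i = 0; i < N_NODES; i++)
        next[j] += c[i][j] * t[i];    /* opinions weighted by reputation */
    }
    for (size_t j = 0; j < N_NODES; j++)
      t[j] = next[j];
  }
}
```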
3. Related issues we need to keep in mind.
3.1. Relay and exit configuration needs to be easy and usable.
Implicit in all of the above designs is the need to make it easy to
run a Tor server out of the box. We need to make it stable on all
common platforms (including XP), it needs to detect its available
bandwidth and not overreach that, and it needs to help the operator
through opening up ports on his firewall. Then we need a slick GUI
that lets people click a button or two rather than editing text files.
Once we've done all this, we'll hit our first big question: is
most of the barrier to growth caused by the unusability of the current
software? If so, are the rest of these incentive schemes superfluous?
3.2. The network effect: how many nodes will you interact with?
One of the concerns with pairwise reputation systems is that as the
network gets thousands of servers, the chance that you're going to
interact with a given server decreases. So if 90% of interactions
don't have any prior information, the "local" incentive schemes above
are going to degrade. This doesn't mean they're pointless -- it just
means we need to be aware that this is a limitation, and plan in the
background for what step to take next. (It seems that e-cash solutions
would scale better, though they have issues of their own.)
3.3. Guard nodes
As of Tor 0.1.1.11, Tor users pick from a small set of semi-permanent
"guard nodes" for their first hop of each circuit. This seems like it
would have a big impact on pairwise reputation systems since you
will only be cashing in on your reputation to a few people, and it is
unlikely that a given pair of nodes will use each other as guard nodes.
What does this imply? For one, it means that we don't care at all
about the opinions of most of the servers out there -- we should
focus on keeping our guard nodes happy with us.
One conclusion from that is that our design needs to judge performance
not just through direct interaction (beginning of the circuit) but
also through indirect interaction (middle of the circuit). That way
you can never be sure when your guards are measuring you.
Both 3.2 and 3.3 may be solved by having a global notion of reputation,
as in 2.3 and 2.4. However, computing the global reputation from local
views could be expensive (O(n^2)) when the network is really large.
3.4. Restricted topology: benefits and roadmap.
As the Tor network continues to grow, we will need to make design
changes to the network topology so that each node does not need
to maintain connections to an unbounded number of other nodes. For
anonymity's sake, we may partition the network such that all
the nodes have the same belief about the divisions and each node is
in only one partition. (The alternative is that every user fetches
his own random subset of the overall node list -- this is bad because
of intersection attacks.)
Therefore the "network horizon" for each user will stay bounded,
which helps against the above issues in 3.2 and 3.3.
It could be that the core of long-lived servers will all get to know
each other, and so the critical point that decides whether you get
good service is whether the core likes you. Or perhaps it will turn
out to work some other way.
A special case here is the social network, where the network isn't
partitioned randomly but instead based on some external properties.
Social network topologies can provide incentives in other ways, because
people may be more inclined to help out their friends, and more willing
to relay traffic if most of the traffic they are relaying comes
from their friends. It also opens the door for out-of-band incentive
schemes because of the out-of-band links in the graph.
3.5. Profit-maximizing vs. Altruism.
There are some interesting game theory questions here.
First, in a volunteer culture, success is measured in public utility
or in public esteem. If we add a reward mechanism, there's a risk that
reward-maximizing behavior will surpass utility- or esteem-maximizing
behavior.
Specifically, if most of our servers right now are relaying traffic
for the good of the community, we may actually *lose* those volunteers
if we turn the act of relaying traffic into a selfish act.
I am not too worried about this issue for now, since we're aiming
for an incentive scheme so effective that it produces tens of
thousands of new servers.
3.6. What part of the node's performance do you measure?
We keep referring to having a node measure how well the other nodes
receive bytes. But don't leeching clients receive bytes just as well
as servers?
Further, many transactions in Tor involve fetching lots of
bytes and not sending very many. So it seems that we want to turn
things around: we need to measure how quickly a node is _sending_
us bytes, and then only send it bytes in proportion to that.
However, a sneaky user could simply connect to a node and send some
traffic through it, and voila, he has performed for the network. This
is no good. The first fix is that we only count if you're receiving
bytes "backwards" in the circuit. Now the sneaky user needs to
construct a circuit such that his node appears later in the circuit,
and then send some bytes back quickly.
Maybe that complexity is sufficient to deter most lazy users. Or
maybe it's an argument in favor of a more penny-counting reputation
approach.
Addendum: I was thinking more of measuring based on who is the service
provider and service receiver for the circuit. Say Alice builds a
circuit to Bob. Then Bob is providing service to Alice, since he
otherwise wouldn't need to spend his bandwidth. So traffic in either
direction should be charged to Alice. Of course, the same attack would
work, namely, Bob could cheat by sending bytes back quickly. So someone
close to the origin needs to detect this and close the circuit, if
necessary. -JN
3.7. What is the appropriate resource balance for servers vs. clients?
If we build a good incentive system, we'll still need to tune it
to provide the right bandwidth allocation -- if we reserve too much
bandwidth for fast servers, then we're wasting some potential, but
if we reserve too little, then fewer people will opt to become servers.
In fact, finding an optimum balance is especially hard because it's
a moving target: the better our incentive mechanism (and the lower
the barrier to setup), the more servers there will be. How do we find
the right balance?
One answer is that it doesn't have to be perfect: we can err on the
side of providing extra resources to servers. Then we will achieve our
desired goal -- when people complain about speed, we can tell them to
run a server, and they will in fact get better performance.
3.8. Anonymity attack: fast connections probably come from good servers.
If only fast servers can consistently get good performance in the
network, they will stand out. "Oh, that connection probably came from
one of the top ten servers in the network." Intersection attacks over
time can improve the certainty of the attack.
I'm not too worried about this. First, in periods of low activity,
many different people might be getting good performance. This dirties
the intersection attack. Second, with many of these schemes, we will
still be uncertain whether the fast node originated the traffic, or
was the entry node for some other lucky user -- and we already accept
this level of attack in other cases such as the Murdoch-Danezis attack
[http://freehaven.net/anonbib/#torta05].
3.9. How do we allocate bandwidth over the course of a second?
This may be a simple matter of engineering, but it still needs to be
addressed. Our current token bucket design refills each bucket once a
second. If we have N tokens in our bucket, and we don't know ahead of
time how many connections are going to want to send out how many bytes,
how do we balance providing quick service to the traffic that is
already here compared to providing service to potential high-importance
future traffic?
If we have only two classes of service, here is a simple design:
At each point, when we are 1/t through the second, the total number
of non-priority bytes we are willing to send out is N/t. Thus if N
priority bytes are waiting at the beginning of the second, we drain
our whole bucket then, and otherwise we provide some delayed service
to the non-priority bytes.
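The simple two-class design above might be sketched as follows (the struct and names are invented for illustration; this is not Tor's actual scheduler):

```c
/* Two-class token bucket: N tokens are refilled once per second.
 * Priority traffic may drain the bucket at any time; non-priority
 * traffic is limited to the fraction of the second already elapsed. */
typedef struct twoclass_bucket_t {
  long tokens;         /* tokens left in the bucket this second */
  long capacity;       /* N: tokens added at each refill */
  long nonprio_spent;  /* non-priority bytes already sent this second */
} twoclass_bucket_t;

/* elapsed_frac: how far we are through the current second, in [0,1].
 * Returns how many non-priority bytes we may send right now. */
static long
nonprio_allowance(const twoclass_bucket_t *b, double elapsed_frac)
{
  long budget = (long)(b->capacity * elapsed_frac) - b->nonprio_spent;
  if (budget < 0)
    budget = 0;
  if (budget > b->tokens)
    budget = b->tokens;  /* priority traffic may have drained the bucket */
  return budget;
}
```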
Does this design expand to cover the case of three priority classes?
Ideally we'd give each remote server its own priority number. Or
hopefully there's an easy design in the literature to point to --
this is clearly not my field.
Is our current flow control mechanism (each circuit and each stream
start out with a certain window, and once they've exhausted it they
need to receive an ack before they can send more) going to have
problems with this new design now that we'll be queueing more bytes
for less preferred nodes? If it turns out we do, the first fix is
to have the windows start out at zero rather than start out full --
it will slow down the startup phase but protect us better.
While we have outgoing cells queued for a given server, we have the
option of reordering them based on the priority of the previous hop.
Is this going to turn out to be useful? If we're the exit node (that
is, there is no previous hop) what priority do those cells get?
Should we do this prioritizing just for sending out bytes (as I've
described here) or would it help to do it also for receiving bytes?
See next section.
3.10. Different-priority cells arriving on the same TCP connection.
In some of the proposed designs, servers want to give specific circuits
priority rather than having all circuits from them get the same class
of service.
Since Tor uses TCP's flow control for rate limiting, this constrains
our design choices -- it is easy to give different TCP connections
different priorities, but it is hard to give different cells on the
same connection priority, because you have to read them to know what
priority they're supposed to get.
There are several possible solutions though. First is that we rely on
the sender to reorder them so the highest priority cells (circuits) are
more often first. Second is that we open two TCP connections -- one
for the high-priority cells, and one for the low-priority cells. (But
this prevents us from changing the priority of a circuit because
we would need to migrate it from one connection to the other.) A
third approach is to remember which connections have recently sent
us high-priority cells, and preferentially read from those connections.
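The third approach could be sketched like this (a hypothetical structure, not Tor's actual connection type):

```c
#include <stddef.h>
#include <time.h>

/* Remember when each connection last carried high-priority cells, and
 * prefer reading from the freshest one. */
typedef struct prio_conn_t {
  int fd;                /* the underlying socket */
  time_t last_highprio;  /* last time this conn sent us priority cells */
} prio_conn_t;

/* Return the connection to read from next: the one that most recently
 * carried high-priority cells (NULL if there are none). */
static prio_conn_t *
pick_conn_to_read(prio_conn_t *conns, size_t n)
{
  prio_conn_t *best = NULL;
  for (size_t i = 0; i < n; i++) {
    if (!best || conns[i].last_highprio > best->last_highprio)
      best = &conns[i];
  }
  return best;
}
```

A real scheduler would also need to age the marks so a connection cannot coast forever on one burst of priority cells.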
Hopefully we can get away with not solving this section at all. But if
necessary, we can consult Ed Knightly, a Professor at Rice
[http://www.ece.rice.edu/~knightly/], for his extensive experience on
networking QoS.
3.11. Global reputation system: Congestion on high reputation servers?
If the notion of reputation is global (as in 2.3 or 2.4), circuits that
go through successive high reputation servers would be the fastest and
most reliable. This would incentivize everyone, regardless of their own
reputation, to choose only the highest-reputation servers for their
circuits, causing over-congestion on those servers.
One could argue, though, that once those servers are over-congested,
their bandwidth per circuit drops, which would in turn lower their
reputation in the future. A question is whether this would overall
stabilize.
Another possibility is to cap reputation. That way, a sizable fraction
of servers would share the same high reputation, balancing the load
among them.
3.12. Another anonymity attack: learning from service levels.
If reputation is local, it may be possible for an evil node to learn
the identity of the origin through provision of differential service.
For instance, the evil node provides crappy bandwidth to everyone,
until it finds a circuit that it wants to trace the origin, then it
provides good bandwidth. Now, since only nodes that directly or
indirectly observed this circuit will have a raised opinion of the evil
node, it can test each candidate by building a circuit through it to
another evil node. If the bandwidth is high, it is (somewhat) likely
that the node was part of the circuit.
This problem does not exist if the reputation is global and nodes only
follow the global reputation, i.e., completely ignore their own view.
3.13. DoS through high priority traffic.
Assume there is an evil node with high reputation (or high value on
Alice) and this evil node wants to deny the service to Alice. What it
needs to do is to send a lot of traffic to Alice. To Alice, all traffic
from this evil node is of high priority. If circuit scheduling is too
biased toward high-priority circuits, Alice would spend most of her
available bandwidth on this circuit, thus providing poor bandwidth to
everyone else. Everyone else would start to dislike Alice, making it
even harder for her to forward other nodes' traffic. This could cause
Alice to have a low reputation, and the only high bandwidth circuit
Alice could use would be via the evil node.
3.14. If you run a fast server, can you run your client elsewhere?
A lot of people want to run a fast server at a colocation facility,
and then reap the rewards using their cablemodem or DSL Tor client.
If we use anonymous micropayments, where reputation can literally
be transferred, this is trivial.
If we pick a design where servers accrue reputation and can only
use it themselves, though, the clients can configure the servers as
their entry nodes and "inherit" their reputation. In this approach
we would let servers configure a set of IP addresses or keys that get
"like local" service.
4. Sample designs.
4.1. Two classes of service for circuits.
Whenever a circuit is built, the origin specifies which class, either
"premium" or "normal", the circuit belongs to. A premium circuit
gets preferred treatment at each node. A node "spends" its value, which
it earned a priori by providing service, to the next node by sending
and receiving bytes. Once a node has overspent its value, the circuit
cannot remain premium. It either breaks or converts into a normal
circuit. Each node also reserves a small portion of bandwidth for
normal circuits to prevent starvation.
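The accounting in this design might be sketched as follows (the types and names are invented; a real design would need much more, such as receipts for earned value):

```c
/* A node's balance grows as it provides service; a premium circuit
 * draws it down as bytes cross it, and falls back to normal service
 * once the balance is exhausted. */
typedef enum { CIRC_NORMAL, CIRC_PREMIUM } circ_class_t;

typedef struct value_account_t {
  long balance;  /* bytes' worth of service earned a priori */
} value_account_t;

/* Charge n_bytes of premium traffic; return the class the circuit is
 * entitled to after the charge. */
static circ_class_t
charge_premium_bytes(value_account_t *acct, long n_bytes)
{
  if (acct->balance < n_bytes) {
    acct->balance = 0;   /* overspent: demote to a normal circuit */
    return CIRC_NORMAL;
  }
  acct->balance -= n_bytes;
  return CIRC_PREMIUM;
}
```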
Pro: Even if a node has no value to spend, it can still use normal
circuits. This allows casual users to use Tor without forcing them to
run a server.
Pro: Nodes have an incentive to forward traffic as quickly and as much as
possible to accumulate value.
Con: There is no proactive method for a node to rebalance its debt. It
has to wait until there happens to be a circuit in the opposite
direction.
Con: A node needs to build circuits in such a way that each node in the
circuit holds sufficient value with the next node. This requires
non-local knowledge and makes circuits less reliable as the values are
used up in the circuit.
Con: May discourage nodes from forwarding traffic on some circuits, as
they worry about spending more useful value to get less useful value in
return.
4.2. Treat all the traffic from the node with the same service;
hard reputation system.
This design is similar to 4.1, except that instead of having two
classes of circuits, there is only one. All the circuits are
prioritized based on the value of the interacting node.
Pro: It is simpler to design and give priority based on connections,
not circuits.
Con: A node only needs to keep a few guard nodes happy to forward their
traffic.
Con: Same as in 4.1, this may discourage nodes from forwarding traffic
on some circuits, as they worry about spending more useful value to get
less useful value in return.
4.3. Treat all the traffic from the node with the same service;
soft reputation system.
Rather than a guaranteed system with accounting (as 4.1 and 4.2),
we instead try for a best-effort system. All bytes are in the same
class of service. You keep track of other Tors by key, and give them
service proportional to the service they have given you. That is, in
the past when you have tried to push bytes through them, you track the
number of bytes and the average bandwidth, and use that to weight the
priority of their connections if they try to push bytes through you.
Now you're going to get minimum service if you don't ever push bytes
for other people, and you get increasingly improved service the more
active you are. We should have memories fade over time (we'll have
to tune that, which could be quite hard).
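One plausible weighting rule for this best-effort scheme (the formula, the megabyte scaling, and the floor constant are all assumptions, not part of the proposal):

```c
/* A peer's connections get scheduling weight proportional to the bytes
 * it has pushed for us, with a floor so that unknown peers still get
 * minimum service -- which also blunts Sybil attacks, since fresh
 * identities start at the floor. */
#define MIN_WEIGHT 1.0  /* assumed floor for unknown peers */

static double
conn_priority_weight(double bytes_relayed_for_us)
{
  double w = bytes_relayed_for_us / (1024.0 * 1024.0);  /* scale to MB */
  return w > MIN_WEIGHT ? w : MIN_WEIGHT;
}
```

The tracked byte count would itself fade over time, as described above.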
Pro: Sybil attacks are pointless because new identities get lowest
priority.
Pro: Smoothly handles periods of both low and high network load. Rather
than keeping track of the ratio/difference between what he's done for
you and what you've done for him, simply keep track of what he's done
for you, and give him priority based on that.
Based on 3.3 above, it seems we should reward all the nodes in our
path, not just the first one -- otherwise the node can provide good
service only to its guards. On the other hand, there might be a
second-order effect where you want nodes to like you so that *when*
your guards choose you for a circuit, they'll be able to get good
performance. This tradeoff needs more simulation/analysis.
This approach focuses on incenting people to relay traffic, but it
doesn't do much for incenting them to allow exits. It may help in
one way, though: if there are few exits, then they will attract a
lot of use, so lots of people will like them, so when they try to
use the network they will find their first hop to be particularly
pleasant. After that they're like the rest of the world though. (An
alternative would be to reward exit nodes with higher values. At the
extreme, we could even ask the directory servers to suggest the extra
values, based on the current availability of exit nodes.)
Pro: this is a pretty easy design to add; and it can be phased in
incrementally simply by having new nodes behave differently.
4.4. Centralized opinions from the reputation servers.
Have a set of official measurers who spot-check servers from the
directory to see if they really do offer roughly the bandwidth
they advertise. Include these observations in the directory. (For
simplicity, the directory servers could be the measurers.) Then Tor
servers give priority to other servers. We'd like to weight the
priority by advertised bandwidth to encourage people to donate more,
but it seems hard to distinguish between a slow server and a busy
server.
The spot-checking can be done anonymously to keep servers from
performing well only for the measurers, because hey, we have an anonymity
network.
We could also reward exit nodes by giving them better priority, but
like above this only will affect their first hop. Another problem
is that it's darn hard to spot-check whether a server allows exits
to all the pieces of the Internet that it claims to. If necessary,
perhaps this can be solved by a distributed reporting mechanism,
where clients that can reach a site from one exit but not another
anonymously submit that site to the measurers, who verify.
A last problem is that since directory servers will be doing their
tests directly (easy to detect) or indirectly (through other Tor
servers), a server knows it can get away with poor performance for
people who aren't listed in the directory. Maybe we can turn this
around and call it a feature though -- another reason to get listed
in the directory.
5. Recommendations and next steps.
5.1. Simulation.
For simulation traces, we can use two sources: one obtained from Tor,
and one from existing web traces.
We want to simulate all four cases in 4.1-4.4. For 4.4, we may want
to look at two variations: (1) the directory servers check the
bandwidth themselves through Tor; (2) each node reports their perceived
values on other nodes, while the directory servers use EigenTrust to
compute global reputation and broadcast those.
5.2. Deploying into the existing Tor network.


@@ -675,11 +675,6 @@ median_int32(int32_t *array, int n_elements)
{
return find_nth_int32(array, n_elements, (n_elements-1)/2);
}
static INLINE long
median_long(long *array, int n_elements)
{
return find_nth_long(array, n_elements, (n_elements-1)/2);
}
#endif


@@ -113,8 +113,8 @@ crypto_get_rsa_padding_overhead(int padding)
{
switch (padding)
{
- case RSA_PKCS1_OAEP_PADDING: return 42;
- case RSA_PKCS1_PADDING: return 11;
+ case RSA_PKCS1_OAEP_PADDING: return PKCS1_OAEP_PADDING_OVERHEAD;
+ case RSA_PKCS1_PADDING: return PKCS1_PADDING_OVERHEAD;
default: tor_assert(0); return -1;
}
}


@@ -1176,119 +1176,10 @@ escaped(const char *s)
return escaped_val_;
}
/** Rudimentary string wrapping code: given a un-wrapped <b>string</b> (no
* newlines!), break the string into newline-terminated lines of no more than
* <b>width</b> characters long (not counting newline) and insert them into
* <b>out</b> in order. Precede the first line with prefix0, and subsequent
* lines with prefixRest.
*/
/* This uses a stupid greedy wrapping algorithm right now:
* - For each line:
* - Try to fit as much stuff as possible, but break on a space.
* - If the first "word" of the line will extend beyond the allowable
* width, break the word at the end of the width.
*/
void
wrap_string(smartlist_t *out, const char *string, size_t width,
const char *prefix0, const char *prefixRest)
{
size_t p0Len, pRestLen, pCurLen;
const char *eos, *prefixCur;
tor_assert(out);
tor_assert(string);
tor_assert(width);
if (!prefix0)
prefix0 = "";
if (!prefixRest)
prefixRest = "";
p0Len = strlen(prefix0);
pRestLen = strlen(prefixRest);
tor_assert(width > p0Len && width > pRestLen);
eos = strchr(string, '\0');
tor_assert(eos);
pCurLen = p0Len;
prefixCur = prefix0;
while ((eos-string)+pCurLen > width) {
const char *eol = string + width - pCurLen;
while (eol > string && *eol != ' ')
--eol;
/* eol is now the last space that can fit, or the start of the string. */
if (eol > string) {
size_t line_len = (eol-string) + pCurLen + 2;
char *line = tor_malloc(line_len);
memcpy(line, prefixCur, pCurLen);
memcpy(line+pCurLen, string, eol-string);
line[line_len-2] = '\n';
line[line_len-1] = '\0';
smartlist_add(out, line);
string = eol + 1;
} else {
size_t line_len = width + 2;
char *line = tor_malloc(line_len);
memcpy(line, prefixCur, pCurLen);
memcpy(line+pCurLen, string, width - pCurLen);
line[line_len-2] = '\n';
line[line_len-1] = '\0';
smartlist_add(out, line);
string += width-pCurLen;
}
prefixCur = prefixRest;
pCurLen = pRestLen;
}
if (string < eos) {
size_t line_len = (eos-string) + pCurLen + 2;
char *line = tor_malloc(line_len);
memcpy(line, prefixCur, pCurLen);
memcpy(line+pCurLen, string, eos-string);
line[line_len-2] = '\n';
line[line_len-1] = '\0';
smartlist_add(out, line);
}
}
/* =====
* Time
* ===== */
/**
* Converts struct timeval to a double value.
* Preserves microsecond precision, but just barely.
* Error is approx +/- 0.1 usec when dealing with epoch values.
*/
double
tv_to_double(const struct timeval *tv)
{
double conv = tv->tv_sec;
conv += tv->tv_usec/1000000.0;
return conv;
}
/**
* Converts timeval to milliseconds.
*/
int64_t
tv_to_msec(const struct timeval *tv)
{
int64_t conv = ((int64_t)tv->tv_sec)*1000L;
/* Round ghetto-style */
conv += ((int64_t)tv->tv_usec+500)/1000L;
return conv;
}
/**
* Converts timeval to microseconds.
*/
int64_t
tv_to_usec(const struct timeval *tv)
{
int64_t conv = ((int64_t)tv->tv_sec)*1000000L;
conv += tv->tv_usec;
return conv;
}
/** Return the number of microseconds elapsed between *start and *end.
*/
long


@@ -112,7 +112,6 @@ extern int dmalloc_free(const char *file, const int line, void *pnt,
#define tor_malloc(size) tor_malloc_(size DMALLOC_ARGS)
#define tor_malloc_zero(size) tor_malloc_zero_(size DMALLOC_ARGS)
#define tor_calloc(nmemb,size) tor_calloc_(nmemb, size DMALLOC_ARGS)
#define tor_malloc_roundup(szp) _tor_malloc_roundup(szp DMALLOC_ARGS)
#define tor_realloc(ptr, size) tor_realloc_(ptr, size DMALLOC_ARGS)
#define tor_strdup(s) tor_strdup_(s DMALLOC_ARGS)
#define tor_strndup(s, n) tor_strndup_(s, n DMALLOC_ARGS)
@@ -216,8 +215,6 @@ int tor_digest256_is_zero(const char *digest);
char *esc_for_log(const char *string) ATTR_MALLOC;
const char *escaped(const char *string);
struct smartlist_t;
void wrap_string(struct smartlist_t *out, const char *string, size_t width,
const char *prefix0, const char *prefixRest);
int tor_vsscanf(const char *buf, const char *pattern, va_list ap)
#ifdef __GNUC__
__attribute__((format(scanf, 2, 0)))
@@ -240,9 +237,6 @@ void base16_encode(char *dest, size_t destlen, const char *src, size_t srclen);
int base16_decode(char *dest, size_t destlen, const char *src, size_t srclen);
/* Time helpers */
double tv_to_double(const struct timeval *tv);
int64_t tv_to_msec(const struct timeval *tv);
int64_t tv_to_usec(const struct timeval *tv);
long tv_udiff(const struct timeval *start, const struct timeval *end);
long tv_mdiff(const struct timeval *start, const struct timeval *end);
int tor_timegm(const struct tm *tm, time_t *time_out);


@@ -76,7 +76,6 @@ int directory_fetches_from_authorities(const or_options_t *options);
int directory_fetches_dir_info_early(const or_options_t *options);
int directory_fetches_dir_info_later(const or_options_t *options);
int directory_caches_v2_dir_info(const or_options_t *options);
#define directory_caches_v1_dir_info(o) directory_caches_v2_dir_info(o)
int directory_caches_unknown_auth_certs(const or_options_t *options);
int directory_caches_dir_info(const or_options_t *options);
int directory_permits_begindir_requests(const or_options_t *options);


@@ -506,10 +506,6 @@ accounting_run_housekeeping(time_t now)
}
}
/** When we have no idea how fast we are, how long do we assume it will take
* us to exhaust our bandwidth? */
#define GUESS_TIME_TO_USE_BANDWIDTH (24*60*60)
/** Based on our interval and our estimated bandwidth, choose a
* deterministic (but random-ish) time to wake up. */
static void


@@ -158,10 +158,6 @@ int can_complete_circuit=0;
/** How long do we let a directory connection stall before expiring it? */
#define DIR_CONN_MAX_STALL (5*60)
/** How long do we let OR connections handshake before we decide that
* they are obsolete? */
#define TLS_HANDSHAKE_TIMEOUT (60)
/** Decides our behavior when no logs are configured/before any
* logs have been configured. For 0, we log notice to stdout as normal.
* For 1, we log warnings only. For 2, we log nothing.


@@ -1432,18 +1432,6 @@ consensus_is_waiting_for_certs(void)
? 1 : 0;
}
/** Return the network status with a given identity digest. */
networkstatus_v2_t *
networkstatus_v2_get_by_digest(const char *digest)
{
SMARTLIST_FOREACH(networkstatus_v2_list, networkstatus_v2_t *, ns,
{
if (tor_memeq(ns->identity_digest, digest, DIGEST_LEN))
return ns;
});
return NULL;
}
/** Return the most recent consensus that we have downloaded, or NULL if we
* don't have one. */
networkstatus_t *


@@ -75,7 +75,6 @@ void update_certificate_downloads(time_t now);
int consensus_is_waiting_for_certs(void);
int client_would_use_router(const routerstatus_t *rs, time_t now,
const or_options_t *options);
networkstatus_v2_t *networkstatus_v2_get_by_digest(const char *digest);
networkstatus_t *networkstatus_get_latest_consensus(void);
networkstatus_t *networkstatus_get_latest_consensus_by_flavor(
consensus_flavor_t f);


@@ -4465,15 +4465,6 @@ typedef struct vote_timing_t {
/********************************* geoip.c **************************/
/** Round all GeoIP results to the next multiple of this value, to avoid
* leaking information. */
#define DIR_RECORD_USAGE_GRANULARITY 8
/** Time interval: Flush geoip data to disk this often. */
#define DIR_ENTRY_RECORD_USAGE_RETAIN_IPS (24*60*60)
/** How long do we have to have observed per-country request history before
* we are willing to talk about it? */
#define DIR_RECORD_USAGE_MIN_OBSERVATION_TIME (12*60*60)
/** Indicates an action that we might be noting geoip statistics on.
* Note that if we're noticing CONNECT, we're a bridge, and if we're noticing
* the others, we're not.


@@ -1452,13 +1452,6 @@ rend_process_relay_cell(circuit_t *circ, const crypt_path_t *layer_hint,
command);
}
/** Return the number of entries in our rendezvous descriptor cache. */
int
rend_cache_size(void)
{
return strmap_size(rend_cache);
}
/** Allocate and return a new rend_data_t with the same
* contents as <b>query</b>. */
rend_data_t *


@@ -49,7 +49,6 @@ int rend_cache_store(const char *desc, size_t desc_len, int published,
int rend_cache_store_v2_desc_as_client(const char *desc,
const rend_data_t *rend_query);
int rend_cache_store_v2_desc_as_dir(const char *desc);
int rend_cache_size(void);
int rend_encode_v2_descriptors(smartlist_t *descs_out,
rend_service_descriptor_t *desc, time_t now,
uint8_t period, rend_auth_type_t auth_type,

@@ -337,7 +337,6 @@ trusted_dirs_remove_old_certs(void)
time_t now = time(NULL);
#define DEAD_CERT_LIFETIME (2*24*60*60)
#define OLD_CERT_LIFETIME (7*24*60*60)
#define CERT_EXPIRY_SKEW (60*60)
if (!trusted_dir_certs)
return;

@@ -124,10 +124,6 @@ static INLINE void free_execve_args(char **arg);
#define PROTO_CMETHODS_DONE "CMETHODS DONE"
#define PROTO_SMETHODS_DONE "SMETHODS DONE"
/** Number of environment variables for managed proxy clients/servers. */
#define ENVIRON_SIZE_CLIENT 3
#define ENVIRON_SIZE_SERVER 7 /* XXX known to be too high, but that's ok */
/** The first and only supported - at the moment - configuration
protocol version. */
#define PROTO_VERSION_ONE 1

@@ -2066,11 +2066,6 @@ const struct testcase_setup_t legacy_setup = {
#define ENT(name) \
{ #name, legacy_test_helper, 0, &legacy_setup, test_ ## name }
#define SUBENT(group, name) \
{ #group "_" #name, legacy_test_helper, 0, &legacy_setup, \
test_ ## group ## _ ## name }
#define DISABLED(name) \
{ #name, legacy_test_helper, TT_SKIP, &legacy_setup, test_ ## name }
#define FORK(name) \
{ #name, legacy_test_helper, TT_FORK, &legacy_setup, test_ ## name }

@@ -407,10 +407,8 @@ test_dir_split_fps(void *testdata)
"0123456789ABCdef0123456789ABCdef0123456789ABCdef0123456789ABCdef"
#define B64_1 "/g2v+JEnOJvGdVhpEjEjRVEZPu4"
#define B64_2 "3q2+75mZmZERERmZmRERERHwC6Q"
#define B64_3 "sz/wDbM/8A2zP/ANsz/wDbM/8A0"
#define B64_256_1 "8/Pz8/u7vz8/Pz+7vz8/Pz+7u/Pz8/P7u/Pz8/P7u78"
#define B64_256_2 "zMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMzMw"
#define B64_256_3 "ASNFZ4mrze8BI0VniavN7wEjRWeJq83vASNFZ4mrze8"
/* no flags set */
dir_split_resource_into_fingerprints("A+C+B", sl, NULL, 0);

@@ -1054,79 +1054,6 @@ test_util_strmisc(void)
test_assert(!tor_memstr(haystack, 7, "ababcade"));
}
/* Test wrap_string */
{
smartlist_t *sl = smartlist_new();
wrap_string(sl,
"This is a test of string wrapping functionality: woot. "
"a functionality? w00t w00t...!",
10, "", "");
cp = smartlist_join_strings(sl, "", 0, NULL);
test_streq(cp,
"This is a\ntest of\nstring\nwrapping\nfunctional\nity: woot.\n"
"a\nfunctional\nity? w00t\nw00t...!\n");
tor_free(cp);
SMARTLIST_FOREACH(sl, char *, cp, tor_free(cp));
smartlist_clear(sl);
wrap_string(sl, "This is a test of string wrapping functionality: woot.",
16, "### ", "# ");
cp = smartlist_join_strings(sl, "", 0, NULL);
test_streq(cp,
"### This is a\n# test of string\n# wrapping\n# functionality:\n"
"# woot.\n");
tor_free(cp);
SMARTLIST_FOREACH(sl, char *, cp, tor_free(cp));
smartlist_clear(sl);
wrap_string(sl, "A test of string wrapping...", 6, "### ", "# ");
cp = smartlist_join_strings(sl, "", 0, NULL);
test_streq(cp,
"### A\n# test\n# of\n# stri\n# ng\n# wrap\n# ping\n# ...\n");
tor_free(cp);
SMARTLIST_FOREACH(sl, char *, cp, tor_free(cp));
smartlist_clear(sl);
wrap_string(sl, "Wrapping test", 6, "#### ", "# ");
cp = smartlist_join_strings(sl, "", 0, NULL);
test_streq(cp, "#### W\n# rapp\n# ing\n# test\n");
tor_free(cp);
SMARTLIST_FOREACH(sl, char *, cp, tor_free(cp));
smartlist_clear(sl);
wrap_string(sl, "Small test", 6, "### ", "#### ");
cp = smartlist_join_strings(sl, "", 0, NULL);
test_streq(cp, "### Sm\n#### a\n#### l\n#### l\n#### t\n#### e"
"\n#### s\n#### t\n");
tor_free(cp);
SMARTLIST_FOREACH(sl, char *, cp, tor_free(cp));
smartlist_clear(sl);
wrap_string(sl, "First null", 6, NULL, "> ");
cp = smartlist_join_strings(sl, "", 0, NULL);
test_streq(cp, "First\n> null\n");
tor_free(cp);
SMARTLIST_FOREACH(sl, char *, cp, tor_free(cp));
smartlist_clear(sl);
wrap_string(sl, "Second null", 6, "> ", NULL);
cp = smartlist_join_strings(sl, "", 0, NULL);
test_streq(cp, "> Seco\nnd\nnull\n");
tor_free(cp);
SMARTLIST_FOREACH(sl, char *, cp, tor_free(cp));
smartlist_clear(sl);
wrap_string(sl, "Both null", 6, NULL, NULL);
cp = smartlist_join_strings(sl, "", 0, NULL);
test_streq(cp, "Both\nnull\n");
tor_free(cp);
SMARTLIST_FOREACH(sl, char *, cp, tor_free(cp));
smartlist_free(sl);
/* Can't test prefixes that have the same length as the line width, because
the function has an assert */
}
/* Test hex_str */
{
char binary_data[68];

@@ -93,16 +93,20 @@ wait_until_fd_readable(tor_socket_t fd, struct timeval *timeout)
{
int r;
fd_set fds;
+#ifndef WIN32
if (fd >= FD_SETSIZE) {
fprintf(stderr, "E: NAT-PMP FD_SETSIZE error %d\n", fd);
return -1;
}
+#endif
FD_ZERO(&fds);
FD_SET(fd, &fds);
r = select(fd+1, &fds, NULL, NULL, timeout);
if (r == -1) {
fprintf(stderr, "V: select failed in wait_until_fd_readable: %s\n",
-strerror(errno));
+tor_socket_strerror(tor_socket_errno(fd)));
return -1;
}
/* XXXX we should really check to see whether fd was readable, or we timed
@@ -140,12 +144,12 @@ tor_natpmp_add_tcp_mapping(uint16_t internal_port, uint16_t external_port,
if (is_verbose)
fprintf(stderr, "V: attempting to readnatpmpreponseorretry...\n");
r = readnatpmpresponseorretry(&(state->natpmp), &(state->response));
-sav_errno = errno;
+sav_errno = tor_socket_errno(state->natpmp.s);
if (r<0 && r!=NATPMP_TRYAGAIN) {
fprintf(stderr, "E: readnatpmpresponseorretry failed %d\n", r);
fprintf(stderr, "E: errno=%d '%s'\n", sav_errno,
-strerror(sav_errno));
+tor_socket_strerror(sav_errno));
}
} while (r == NATPMP_TRYAGAIN);
@@ -198,7 +202,7 @@ tor_natpmp_fetch_public_ip(tor_fw_options_t *tor_fw_options,
if (tor_fw_options->verbose)
fprintf(stderr, "V: NAT-PMP attempting to read reponse...\n");
r = readnatpmpresponseorretry(&(state->natpmp), &(state->response));
-sav_errno = errno;
+sav_errno = tor_socket_errno(state->natpmp.s);
if (tor_fw_options->verbose)
fprintf(stderr, "V: NAT-PMP readnatpmpresponseorretry returned"
@@ -208,7 +212,7 @@ tor_natpmp_fetch_public_ip(tor_fw_options_t *tor_fw_options,
fprintf(stderr, "E: NAT-PMP readnatpmpresponseorretry failed %d\n",
r);
fprintf(stderr, "E: NAT-PMP errno=%d '%s'\n", sav_errno,
-strerror(sav_errno));
+tor_socket_strerror(sav_errno));
}
} while (r == NATPMP_TRYAGAIN );

@@ -100,7 +100,7 @@ usage(void)
" [-T|--Test]\n"
" [-v|--verbose]\n"
" [-g|--fetch-public-ip]\n"
-" [-p|--forward-port ([<external port>]:<internal port>])\n");
+" [-p|--forward-port ([<external port>]:<internal port>)]\n");
}
/** Log commandline options to a hardcoded file <b>tor-fw-helper.log</b> in the