mirror of https://gitlab.torproject.org/tpo/core/tor.git (synced 2024-11-10 21:23:58 +01:00)
r12489@catbus: nickm | 2007-04-21 13:48:39 -0400
The ten thousandth Tor commit: add two new proposals (one from Mike Perry about randomized path length, and one from me about simplifying authority operation) and expand and/or refine several older ones. Most notably, there are changes to 103 that will allow us to make authorities more resistant to key compromise.
svn:r10000
parent f9cf90b597
commit c277b742f4

@@ -30,3 +30,5 @@ Proposals by number:
 109  No more than one server per IP address [ACCEPTED]
 110  Avoiding infinite length circuits [OPEN]
 111  Prioritizing local traffic over relayed traffic [OPEN]
+112  Bring Back Pathlen Coin Weight [OPEN]
+113  Simplifying directory authority administration [OPEN]

@@ -90,9 +90,14 @@ Proposal:

 2. Details.

+2.0. Versioning
+
+   All documents generated here have version "3" given in their
+   network-status-version entries.
+
 2.1. Vote specifications

-   Votes in v2.1 are similar to v2 network status documents.  We add these
+   Votes in v3 are similar to v2 network status documents.  We add these
    fields to the preamble:

    "vote-status" -- the word "vote".

@@ -122,7 +127,7 @@ Proposal:

 2.2. Consensus directory specifications

-   Consensuses are like v2.1 votes, except for the following fields:
+   Consensuses are like v3 votes, except for the following fields:

    "vote-status" -- the word "consensus".

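[Illustration, not part of the diff: given the fields above, a v3 vote's
preamble would begin roughly like

   network-status-version 3
   vote-status vote

while a consensus would carry "vote-status consensus" instead; the exact
preamble layout is specified in the rest of proposal 101.]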

@@ -57,10 +57,99 @@ Proposal:
 (who will expect descriptors to be signed by the identity keys they know
 and love, and who will not understand signing keys) happy.

-I'd enumerate designs here, but I'm hoping that somebody will come up with
-a better one, so I'll try not to prejudice them with more ideas yet.
-
-Oh, and of course, we'll want to make sure that the keys are
-cross-certified. :)
-
-Ideas? -NM
+A possible solution:
+
+One thing to consider is that router identity keys are not very sensitive:
+if an OR disappears and reappears with a new key, the network treats it as
+though an old router had disappeared and a new one had joined the network.
+The Tor network continues unharmed; this isn't a disaster.
+
+Thus, the ideas above are mostly relevant for authorities.
+
+The most straightforward solution for the authorities is probably to take
+advantage of the protocol transition that will come with proposal 101, and
+introduce a new set of signing _and_ identity keys used only to sign votes
+and consensus network-status documents.  Signing and identity keys could be
+delivered to users in a separate, rarely changing "keys" document, so that
+the consensus network-status documents wouldn't need to include N signing
+keys, N identity keys, and N certifications.
+
+Note also that there is no reason that the identity/signing keys used by
+directory authorities would necessarily have to be the same as the identity
+keys those authorities use in their capacity as routers.  Decoupling these
+keys would give directory authorities the following set of keys:
+
+   Directory authority identity:
+      Highly confidential; stored encrypted and/or offline.  Used to
+      identify directory authorities.  Shipped with clients.  Used to
+      sign directory authority signing keys.
+
+   Directory authority signing key:
+      Stored online, accessible to the regular Tor process.  Used to sign
+      votes and consensus directories.  Downloaded as part of a "keys"
+      document.
+
+      [Administrators SHOULD rotate their signing keys every month or
+      two, just to keep in practice and keep from forgetting the
+      password to the authority identity.]
+
+   V1-V2 directory authority identity:
+      Stored online, never changed.  Used to sign legacy network-status
+      and directory documents.
+
+   Router identity:
+      Stored online, seldom changed.  Used to sign server descriptors
+      for this authority in its role as a router.  Implicitly certified
+      by being listed in network-status documents.
+
+   Onion key, link key:
+      As in tor-spec.txt.
+
+
+Extensions to Proposal 101.
+
+   Add the following elements to vote documents:
+
+   "dir-identity-key": The long-term identity key for this authority.
+   "dir-key-published": The time when this directory's signing key was last
+      changed.
+   "dir-key-certification": A signature of the fields "fingerprint",
+      "dir-key-published", "dir-signing-key", and "dir-identity-key",
+      concatenated, in that order.  The signed material extends from the
+      beginning of "fingerprint" through the newline after
+      "dir-key-certification".  The identity key is used to generate this
+      signature.
+
+   The elements "fingerprint", "dir-key-published", "dir-signing-key",
+   "dir-identity-key", and "dir-key-certification" together constitute a
+   "key certificate".  These are generated offline when starting a v2.1
+   authority.
+
+   The elements "dir-signing-key", "dir-key-published", "dir-identity-key",
+   and "dir-key-certification" MUST NOT appear in consensus documents.
+
+   The "fingerprint" field is generated based on the identity key, not
+   the signing key.
+
+   Consensus network statuses change as follows:
+
+      Remove dir-signing-key.
+
+      Change "directory-signature" to take a fingerprint of the authority's
+      identity key rather than the authority's nickname.
+
+   Add a new document type:
+
+      A "keys" document contains all currently known key certificates.
+      All authorities serve it at
+
+         http://<hostname>/tor/status/keys.z
+
+      Caches and clients download the keys document whenever they receive a
+      consensus vote that uses a key they do not recognize.  Caches download
+      from authorities; clients download from caches.
+
+   Verification:
+
+      [XXXX write me]
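
[Illustration, not part of the diff: a key certificate as described above
would render as a block of the form

   fingerprint <hex fingerprint of the identity key>
   dir-key-published <time the signing key was last changed>
   dir-signing-key
   <signing key object>
   dir-identity-key
   <identity key object>
   dir-key-certification
   <signature, made with the identity key, over the material from
    "fingerprint" through the newline after "dir-key-certification">

with the exact encoding of the key and signature objects left to the
eventual dir-spec text.]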

@@ -122,6 +122,13 @@ Specification:
      the digest of the extra-info document.
    * The published fields in the two documents match.

+   Authorities SHOULD drop extra-info documents that do not meet these
+   criteria.
+
+   Extra-info documents MAY be uploaded as part of the same HTTP post as
+   the router descriptor, or separately.  Authorities MUST accept both
+   methods.
+
    Authorities SHOULD try to fetch extra-info documents from one another if
    they do not have one matching the digest declared in a router
    descriptor.

doc/spec/proposals/112-bring-back-pathlencoinweight.txt (new file, 209 lines)
@@ -0,0 +1,209 @@
Filename: 112-bring-back-pathlencoinweight.txt
Title: Bring Back Pathlen Coin Weight
Version:
Last-Modified:
Author: Mike Perry
Created:
Status: Open


Overview:

  The idea is that users should be able to choose a weight which
  probabilistically chooses their path lengths to be 2 or 3 hops.  This
  weight will essentially be a biased coin that indicates an
  additional hop (beyond 2) with probability P.  The user should be
  allowed to choose 0 for this weight to always get 2 hops and 1 to
  always get 3.

  This value should be modifiable from the controller, and should be
  available from Vidalia.


Motivation:

  The Tor network is slow and overloaded.  Increasingly often I hear
  stories about friends and friends of friends who are behind firewalls,
  annoying censorware, or under surveillance that interferes with their
  productivity and Internet usage, or chills their speech.  These people
  know about Tor, but they choose to put up with the censorship because
  Tor is too slow to be usable for them.  In fact, to download a fresh,
  complete copy of levine-timing.pdf for the Anonymity Implications
  section of this proposal over Tor took me 3 tries.

  There are many ways to improve the speed problem, and of course we
  should and will implement as many as we can.  Johannes's GSoC project
  and my reputation system are longer term, higher-effort things that
  will still provide benefit independent of this proposal.

  However, reducing the path length to 2 for those who do not need the
  (questionable) extra anonymity 3 hops provide not only improves
  their Tor experience but also reduces their load on the Tor network by
  33%, and can be done in less than 10 lines of code.  That's not just
  Win-Win, it's Win-Win-Win.

  Furthermore, when blocking resistance measures insert an extra relay
  hop into the equation, 4 hops will certainly be completely unusable
  for these users, especially since it will be considerably more
  difficult to balance the load across a dark relay net than balancing
  the load on Tor itself (which today is still not without its flaws).


Anonymity Implications:

  It has long been established that timing attacks against mixed
  networks are extremely effective, and that regardless of path
  length, if the adversary has compromised the first and last
  hops of your path, you can assume they have compromised your
  identity for that connection.

  In [1], it is demonstrated that for all but the slowest, lossiest
  networks, error rates for false positives and false negatives were
  very near zero.  Only for constant streams of traffic over slow and
  (more importantly) extremely lossy network links did the error rate
  hit 20%.  For loss rates typical to the Internet, even the error rate
  for slow nodes with constant traffic streams was 13%.

  When you take into account that most Tor streams are not constant,
  but probably much more like their "HomeIP" dataset, which consists
  mostly of web traffic that exists over finite intervals at specific
  times, error rates drop to fractions of 1%, even for the "worst"
  network nodes.

  Therefore, the user gains little from the extra hop, assuming
  the adversary does timing correlation on their nodes.  The real
  protection is the probability of getting both the first and last hop,
  and this is constant whether the client chooses 2 hops, 3 hops, or 42.
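
  (A sketch of why, not text from the proposal: if the adversary controls a
  fraction $f$ of the entry and exit bandwidth, then for any path of two or
  more hops, $\Pr[\text{first and last hop compromised}] \approx f^2$,
  independent of how many middle hops are added.)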

  Partitioning attacks form another concern.  Since Tor uses telescoping
  to build circuits, it is possible to tell that a user is constructing only
  two hop paths at the entry node.  It is questionable whether this data is
  actually worth anything, though, especially if the majority of users
  have easy access to this option, and do actually choose their path
  lengths semi-randomly.

  Nick has postulated that exits may also be able to tell that you are
  using only 2 hops by the amount of time between sending their
  RELAY_CONNECTED cell and the first bit of RELAY_DATA traffic they
  see from the OP.  I doubt that they will be able to make much use
  of this timing pattern, since it will likely vary widely depending
  upon the type of node selected for that first hop, and the user's
  connection rate to that first hop.  It is also questionable whether this
  data is worth anything, especially if many users are using this
  option (and I imagine many will).

  Perhaps most seriously, two hop paths do allow malicious guards
  to easily fail circuits if they do not extend to their colluding peers
  for the exit hop.  Since guards can detect the number of hops in a
  path, they could always fail the 3 hop circuits and focus on
  selectively failing the two hop ones until a peer was chosen.

  I believe currently guards are rotated if circuits fail, which does
  provide some protection, but this could be changed so that an entry
  guard is completely abandoned after a certain number of extend or
  general circuit failures, though perhaps this also could be gamed
  to increase guard turnover.  Such a game would be much more noticeable
  than an individual guard failing circuits, though, since it would
  affect all clients, not just those who chose a particular guard.


Why not fix Pathlen=2?:

  The main reason I am not advocating that we always use 2 hops is that
  in some situations, timing correlation evidence by itself may not be
  considered as solid and convincing as an actual, uninterrupted, fully
  traced path.  Are these timing attacks as effective on a real network
  as they are in simulation?  Would an extralegal adversary or authoritarian
  government even care?  In the face of these situation-dependent unknowns,
  it should be up to the user to decide if this is a concern for them or not.


Implementation:

  new_route_len() can be modified directly with a check of the
  PathlenCoinWeight option (converted to percent) and a call to
  crypto_rand_int(0,100) for the weighted coin.
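
  A minimal sketch of that check (illustrative only, not the patch this
  proposal would land as; it assumes PathlenCoinWeight has already been
  converted to an integer percentage, and that crypto_rand_int(max) returns
  a uniform value in [0, max) as Tor's crypto code does):

    /* Pick 2 or 3 hops, weighted by the PathlenCoinWeight percentage. */
    static int
    choose_route_len(int coin_weight_pct)
    {
      int routelen = 2;                      /* always at least two hops */
      if (crypto_rand_int(100) < coin_weight_pct)
        ++routelen;                          /* "heads": add the third hop */
      return routelen;
    }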

  The Vidalia setting should probably be in the network status window
  as a slider, complete with tooltip, help documentation, and perhaps
  an "Are you Sure?" checkbox.

  The entry_guard_t structure could have a num_circ_failed member
  such that if it exceeds N circuit extend failures to a second hop,
  the guard is removed from the entry list.  N should be sufficiently high
  to avoid churn from normal Tor circuit failure, and could possibly be
  represented as a ratio of failed to successful circuits through that
  guard.
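
  A sketch of the bookkeeping this suggests (the new fields, constants, and
  helper here are hypothetical; only entry_guard_t itself is an existing Tor
  structure):

    #define GUARD_MIN_CIRCS_TO_JUDGE 20    /* don't judge on tiny samples */
    #define GUARD_MAX_FAIL_RATIO     0.30  /* tune during phase two below */

    typedef struct entry_guard_t {
      /* ...existing fields... */
      unsigned num_circ_failed;     /* extend failures to the second hop */
      unsigned num_circ_succeeded;  /* circuits built through this guard */
    } entry_guard_t;

    /* Return 1 if this guard has failed enough circuits to be dropped. */
    static int
    entry_guard_too_flaky(const entry_guard_t *g)
    {
      unsigned total = g->num_circ_failed + g->num_circ_succeeded;
      if (total < GUARD_MIN_CIRCS_TO_JUDGE)
        return 0;
      return g->num_circ_failed > GUARD_MAX_FAIL_RATIO * total;
    }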

Migration:

  Phase one: Re-enable the config option and modify new_route_len() to add
  an extra hop if the coin comes up "heads".

  Phase two: Experiment with the proper ratio of circuit failures
  used to expire garbage or malicious guards.

  Phase three: Make a slider or entry box in Vidalia, along with a help
  entry that explains in layman's terms the risks involved.


[1] http://www.cs.umass.edu/~mwright/papers/levine-timing.pdf


============================================================

I love replying to myself.  I can't resist doing it.  Sorry.  "Think twice,
post once" is a concept totally lost on me, especially when I'm wrong
the first two times ;)

Thus spake Mike Perry (mikepery@fscked.org):

> Why not fix Pathlen=2?:
>
> The main reason I am not advocating that we always use 2 hops is that
> in some situations, timing correlation evidence by itself may not be
> considered as solid and convincing as an actual, uninterrupted, fully
> traced path.  Are these timing attacks as effective on a real network
> as they are in simulation?  Would an extralegal adversary or authoritarian
> government even care?  In the face of these situation-dependent unknowns,
> it should be up to the user to decide if this is a concern for them or not.

Hrmm.. it should probably also be noted that even a false positive
rate of 1% for a 200k concurrent-user network could mean that for a
given node, a given stream could be confused with something like 10
users, assuming ~200 nodes carry most of the traffic (i.e. 1000 users
each).  Though of course to really know for sure, someone needs to do
an attack on a real network, unfortunately.
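
(Sketch of that arithmetic, not part of the quoted mail: $200{,}000 / 200
= 1000$ users per busy node, and a $1\%$ false-positive rate over $1000$
candidates leaves about $0.01 \times 1000 = 10$ plausible users per stream.)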

For this reason, this option should instead be represented not as a
slider, but as a straight boolean value, at least in Vidalia.

Perhaps something like a radio button:

 * "I use Tor for Censorship Resistance, not Anonymity. Speed is more
   important to me than Anonymity."
 * "I use Tor for Anonymity. I need extra protection at the cost of speed."

and then some explanation in the help for exactly what this means, and
the risks involved in eliminating the adversary's need for timing attacks
with respect to false positives, etc.

This radio button can then also be used to toggle Johannes's work,
should it be discovered that using latency/bandwidth measurements
gives the adversary some information as to your location or likely
node choices.  Or we can create a series of choices along these lines
as more load balancing/path choice optimizations are developed.

----

So what does this change mean wrt the proposal process?  Should I
submit a new proposal?  I'm still on the fence about whether the underlying
torrc option and Tor implementation should be a coin weight or a fixed
value, so at this point really all this changes is the proposed
Vidalia behavior (Vidalia is an important part of this proposal,
because it would be nice to take 33% of the load off the network for
all users who do not need 3 hops).

doc/spec/proposals/113-fast-authority-interface.txt (new file, 80 lines)
@@ -0,0 +1,80 @@
Filename: 113-fast-authority-interface.txt
Title: Simplifying directory authority administration
Version: $Revision: 12412 $
Last-Modified: $Date: 2007-04-16T19:11:29.511998Z $
Author: Nick Mathewson
Created:
Status: Open

Overview

The problem:

  Administering a directory authority is a pain: you need to go through
  emails and manually add new nodes as "named".  When bad things come up,
  you need to mark nodes (or whole regions) as invalid, badexit, etc.

  This means that mostly, authority admins don't: only 2/4 current authority
  admins actually bind names or list bad exits, and those two have often
  complained about how annoying it is to do so.

  Worse, name binding is a common path, but it's a pain in the neck: nobody
  has done it for a couple of months.

Digression: who knows what?

  It's trivial for Tor to automatically keep track of all of the
  following information about a server:
    name, fingerprint, IP, last-seen time, first-seen time, declared
    contact.

  All we need to have the administrator set is:
    - Is this name/fingerprint pair bound?
    - Is this fingerprint/IP a bad exit?
    - Is this fingerprint/IP an invalid node?
    - Is this fingerprint/IP to be rejected?

  The workflow for authority admins has two parts:
    - Periodically, go through tor-ops and add new names.  This doesn't
      need to be done urgently.
    - Less often, mark badly behaved servers as badly behaved.  This is
      more urgent.

Possible solution #1: Web interface for name binding.

  Deprecate use of the tor-ops mailing list; instead, have operators go to a
  webform and enter their server info.  This would put the information in a
  standardized format, thus allowing quick, nearly-automated approval and
  reply.

Possible solution #2: Self-binding names.

  Peter Palfrader has proposed that names be assigned automatically to nodes
  that have been up and running and valid for a while.

Possible solution #3: Self-maintaining approved-routers file

  Mixminion alpha has a neat feature where whenever a new server is seen,
  a stub line gets added to a configuration file.  For Tor, it could look
  something like this:

    ## First seen with this key on 2007-04-21 13:13:14
    ## Stayed up for at least 12 hours on IP 192.168.10.10
    #RouterName AAAABBBBCCCCDDDDEFEF

  (Note that the implementation needs to parse commented lines to make sure
  that it doesn't add duplicates, but that's not so hard.)
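
  A rough sketch of that duplicate check (illustrative only; none of this
  is existing Tor code):

    #include <stdio.h>
    #include <string.h>

    /* Return 1 if the fingerprint already appears anywhere in the
     * approved-routers file, including in commented-out stub lines. */
    static int
    fingerprint_already_listed(FILE *approved, const char *fingerprint)
    {
      char line[512];
      rewind(approved);
      while (fgets(line, sizeof(line), approved)) {
        const char *cp = line;
        while (*cp == '#' || *cp == ' ')   /* look inside commented stubs too */
          ++cp;
        if (strstr(cp, fingerprint))       /* stub or active entry matches */
          return 1;
      }
      return 0;
    }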

  To add a router as named, administrators would only need to uncomment the
  entry.  This automatically maintained file could be kept separate from a
  manually maintained one.

  This could be combined with solution #2, such that Tor would do the hard
  work of uncommenting entries for routers that should get Named, but
  operators could override its decisions.

Possible solution #4: A separate mailing list for authority operators.

  Right now, the tor-ops list is very high volume.  There should be another
  list that's only for dealing with problems that need prompt action, like
  marking a router as !badexit.