Merge commit 'mikeperry/bandwidth-proposals-final'
commit 6423091f07
@@ -42,31 +42,25 @@ Status: Open
 slices of 50 nodes each, grouped according to advertised node bandwidth.

 Two hop circuits are built using nodes from the same slice, and a large
-file is downloaded via these circuits. For nodes in the first 15% of the
-network, a 500K file will be used. For nodes in the next 15%, a 250K file
-will be used. For nodes in next 15%, a 100K file will be used. The
-remainder of the nodes will fetch a 75K file.[1]
+file is downloaded via these circuits. The file sizes are set based
+on node percentile rank as follows:

-This process is repeated 250 times, and average stream capacities are
-assigned to each node from these results.
+0-10: 2M
+10-20: 1M
+20-30: 512k
+30-50: 256k
+50-100: 128k

-In the future, a node generator type can be created to ensure that
-each node is chosen to participate in an equal number of circuits,
-and the selection will continue until every live node is chosen
-to participate in at least 7 circuits.
+These sizes are based on measurements performed during test scans.

+This process is repeated until each node has been chosen to participate
+in at least 5 circuits.
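
To make the added schedule concrete, here is a minimal Python sketch of mapping
a node's percentile rank (by advertised bandwidth, 0 = fastest) to a download
size. The function name and exact byte values are illustrative assumptions,
not part of the proposal:

    # Illustrative sketch: pick a test-file size from a node's percentile rank.
    def file_size_for_percentile(rank):
        """rank: node's percentile by advertised bandwidth, 0 = fastest."""
        schedule = [
            (10, 2 * 1024 * 1024),   # 0-10:   2M
            (20, 1 * 1024 * 1024),   # 10-20:  1M
            (30, 512 * 1024),        # 20-30:  512k
            (50, 256 * 1024),        # 30-50:  256k
            (100, 128 * 1024),       # 50-100: 128k
        ]
        for upper_bound, size in schedule:
            if rank < upper_bound:
                return size
        return 128 * 1024            # rank == 100 falls into the last bucket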


-4. Ratio Calculation Options
+4. Ratio Calculation

-There are two options for deriving the ratios themselves. They can
-be obtained by dividing each nodes' average stream capacity by
-either the average for the slice, or the average for the network as a
-whole.
-
-Dividing by the network-wide average has the advantage that it will
-account for issues related to unbalancing between higher vs lower
-capacity, such as Steven Murdoch's queuing theory weighting result.
-For this reason, we will opt for network-wide averages.
+The ratios are calculated by dividing each measured value by the
+network-wide average.


 5. Ratio Filtering
@@ -77,10 +71,8 @@ Status: Open
 with capacity of one standard deviation below a node's average
 are also removed.

-The final ratio result will be calculated as the maximum of
-these two resulting ratios if both are less than 1.0, the minimum
-if both are greater than 1.0, and the mean if one is greater
-and one is less than 1.0.
+The final ratio result will be greater of the unfiltered ratio
+and the filtered ratio.


 6. Pseudocode for Ratio Calculation Algorithm
@@ -95,11 +87,8 @@ Status: Open
   BW_measured(N) = MEAN(b | b is bandwidth of a stream through N)
   Bw_stddev(N) = STDDEV(b | b is bandwidth of a stream through N)
   Bw_avg(S) = MEAN(b | b = BW_measured(N) for all N in S)
-  Normal_Routers(S) = {N | Bw_measured(N)/Bw_avg(S) > 0.5 }
  for N in S:
-   Normal_Streams(N) =
-     {stream via N | all nodes in stream not in {Normal_Routers(S)-N}
-      and bandwidth > BW_measured(N)-Bw_stddev(N)}
+   Normal_Streams(N) = {stream via N | bandwidth >= BW_measured(N)}
   BW_Norm_measured(N) = MEAN(b | b is a bandwidth of Normal_Streams(N))

 Bw_net_avg(Slices) = MEAN(BW_measured(N) for all N in Slices)
@@ -107,14 +96,9 @@ Status: Open

 for N in all Slices:
   Bw_net_ratio(N) = Bw_measured(N)/Bw_net_avg(Slices)
-  Bw_Norm_net_ratio(N) = Bw_measured2(N)/Bw_Norm_net_avg(Slices)
+  Bw_Norm_net_ratio(N) = BW_Norm_measured(N)/Bw_Norm_net_avg(Slices)

-  if Bw_net_ratio(N) < 1.0 and Bw_Norm_net_ratio(N) < 1.0:
-    ResultRatio(N) = MAX(Bw_net_ratio(N), Bw_Norm_net_ratio(N))
-  else if Bw_net_ratio(N) > 1.0 and Bw_Norm_net_ratio(N) > 1.0:
-    ResultRatio(N) = MIN(Bw_net_ratio(N), Bw_Norm_net_ratio(N))
-  else:
-    ResultRatio(N) = MEAN(Bw_net_ratio(N), Bw_Norm_net_ratio(N))
+  ResultRatio(N) = MAX(Bw_net_ratio(N), Bw_Norm_net_ratio(N))
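
The updated pseudocode maps fairly directly onto code. Below is a minimal
Python sketch, assuming per-node lists of measured stream bandwidths are
already available; the data layout and names are illustrative, not taken from
the scanner implementation:

    from statistics import mean

    # streams: dict mapping node id -> list of measured stream bandwidths
    # (across all slices).  Returns node id -> ResultRatio(N) as defined above.
    def compute_ratios(streams):
        bw_measured = {n: mean(bws) for n, bws in streams.items()}
        # Filtered value: keep only streams at or above the node's own average.
        bw_norm_measured = {
            n: mean(b for b in bws if b >= bw_measured[n])
            for n, bws in streams.items()
        }
        bw_net_avg = mean(bw_measured.values())
        bw_norm_net_avg = mean(bw_norm_measured.values())
        result = {}
        for n in streams:
            bw_net_ratio = bw_measured[n] / bw_net_avg
            bw_norm_net_ratio = bw_norm_measured[n] / bw_norm_net_avg
            # Final ratio: the greater of the unfiltered and filtered ratios.
            result[n] = max(bw_net_ratio, bw_norm_net_ratio)
        return result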


 7. Security implications
@@ -126,13 +110,13 @@ Status: Open

 This scheme will not address nodes that try to game the system by
 providing better service to scanners. The scanners can be detected
-at the entry by IP address, and at the exit by the destination fetch.
+at the entry by IP address, and at the exit by the destination fetch
+IP.

 Measures can be taken to obfuscate and separate the scanners' source
 IP address from the directory authority IP address. For instance,
 scans can happen offsite and the results can be rsynced into the
-authorities. The destination fetch can also be obscured by using SSL
-and periodically changing the large document that is fetched.
+authorities. The destination server IP can also change.

 Neither of these methods are foolproof, but such nodes can already
 lie about their bandwidth to attract more traffic, so this solution
@@ -148,14 +132,14 @@ Status: Open
 over a portion of the network, outputting files of the form:

 node_id=<idhex> SP strm_bw=<BW_measured(N)> SP
-filt_bw=<BW_Norm_measured(N)> NL
+filt_bw=<BW_Norm_measured(N)> ns_bw=<CurrentConsensusBw(N)> NL

 The most recent file from each scanner will be periodically gathered
 by another script that uses them to produce network-wide averages
 and calculate ratios as per the algorithm in section 6. Because nodes
 may shift in capacity, they may appear in more than one slice and/or
-appear more than once in the file set. The line that yields a ratio
-closest to 1.0 will be chosen in this case.
+appear more than once in the file set. The most recently measured
+line will be chosen in this case.
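
As an illustration of how a gathering script might consume these files, here
is a short Python sketch that parses lines of the form above and keeps only
the most recently measured entry per node. The timestamp source (passed in
alongside each line, e.g. taken from the scan file's mtime) and all names are
assumptions:

    # Illustrative sketch: parse scanner output lines such as
    #   node_id=<idhex> strm_bw=<n> filt_bw=<n> ns_bw=<n>
    def parse_line(line):
        fields = dict(kv.split("=", 1) for kv in line.split())
        return fields["node_id"], {
            "strm_bw": int(fields["strm_bw"]),
            "filt_bw": int(fields["filt_bw"]),
            "ns_bw": int(fields.get("ns_bw", "0")),
        }

    def latest_measurements(timestamped_lines):
        """timestamped_lines: iterable of (timestamp, line) pairs."""
        latest = {}
        for ts, line in timestamped_lines:
            node_id, values = parse_line(line)
            # A node may appear in several slices/files; keep the newest line.
            if node_id not in latest or ts > latest[node_id][0]:
                latest[node_id] = (ts, values)
        return {node: values for node, (ts, values) in latest.items()}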


 9. Integration with Proposal 160
@@ -166,10 +150,15 @@ Status: Open
 scan, and taking the weighted average with the previous consensus
 bandwidth:

-Bw_new = (Bw_current * Alpha + Bw_scan_avg*Bw_ratio)/(Alpha + 1)
+Bw_new = Round((Bw_current * Alpha + Bw_scan_avg*Bw_ratio)/(Alpha + 1))

 The Alpha parameter is a smoothing parameter intended to prevent
-rapid oscillation between loaded and unloaded conditions.
+rapid oscillation between loaded and unloaded conditions. It is
+currently fixed at 0.333.
+
+The Round() step consists of rounding to the 3 most significant figures
+in base10, and then rounding that result to the nearest 1000, with
+a minimum value of 1000.
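
As a rough sketch of the smoothing and rounding described above (Alpha = 0.333;
Round() to 3 significant figures in base 10, then to the nearest 1000, with a
floor of 1000), here it is in Python; the helper names are assumptions:

    ALPHA = 0.333

    def round_bw(value):
        if value <= 0:
            return 1000
        # Round to the 3 most significant base-10 figures...
        digits = len(str(int(value)))
        sig3 = round(value, 3 - digits)
        # ...then to the nearest 1000, with a minimum value of 1000.
        return max(1000, int(round(sig3 / 1000.0)) * 1000)

    def new_consensus_bw(bw_current, bw_scan_avg, bw_ratio, alpha=ALPHA):
        # Bw_new = Round((Bw_current*Alpha + Bw_scan_avg*Bw_ratio)/(Alpha + 1))
        return round_bw((bw_current * alpha + bw_scan_avg * bw_ratio) / (alpha + 1))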

 This will produce a new bandwidth value that will be output into a
 file consisting of lines of the form:
@@ -183,6 +172,3 @@ Status: Open
 This file can be either copied or rsynced into a directory readable
 by the directory authority.

-
-1. Exact values for each segment are still being determined via
-test scans.