mirror of https://gitlab.torproject.org/tpo/core/tor.git
Update 161 to reflect current implementation.
Also mention rounding step.
parent 9da7b22355
commit 011b732436
@@ -42,31 +42,26 @@ Status: Open
    slices of 50 nodes each, grouped according to advertised node bandwidth.
 
    Two hop circuits are built using nodes from the same slice, and a large
-   file is downloaded via these circuits. For nodes in the first 15% of the
-   network, a 500K file will be used. For nodes in the next 15%, a 250K file
-   will be used. For nodes in the next 15%, a 100K file will be used. The
-   remainder of the nodes will fetch a 75K file.[1]
-
+   file is downloaded via these circuits. The file sizes are set based
+   on node percentile rank as follows:
+
-   This process is repeated 250 times, and average stream capacities are
-   assigned to each node from these results.
-
+     0-10: 4M
+     10-20: 2M
+     20-30: 1M
+     30-50: 512k
+     50-75: 256k
+     75-100: 128k
+
-   In the future, a node generator type can be created to ensure that
-   each node is chosen to participate in an equal number of circuits,
-   and the selection will continue until every live node is chosen
-   to participate in at least 7 circuits.
+   These sizes are based on measurements performed during test scans.
+
+   This process is repeated until each node has been chosen to participate
+   in at least 5 circuits.
 
 
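As a sketch of the size schedule above (illustrative Python; the function
and variable names are assumptions, not code from the scanner):

   # Group nodes, pre-sorted by advertised bandwidth, into slices of 50.
   def slices_of_50(nodes_by_bw):
       return [nodes_by_bw[i:i + 50]
               for i in range(0, len(nodes_by_bw), 50)]

   # Map a node's percentile rank (0 = fastest) to a download size,
   # following the 0-10: 4M ... 75-100: 128k table above.
   FILE_SIZE_BY_PCT = [
       (10, 4 * 1024 * 1024),
       (20, 2 * 1024 * 1024),
       (30, 1 * 1024 * 1024),
       (50, 512 * 1024),
       (75, 256 * 1024),
       (100, 128 * 1024),
   ]

   def choose_file_size(percentile):
       for upper, size in FILE_SIZE_BY_PCT:
           if percentile < upper:
               return size
       return FILE_SIZE_BY_PCT[-1][1]  # percentile == 100
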
-4. Ratio Calculation Options
+4. Ratio Calculation
 
-   There are two options for deriving the ratios themselves. They can
-   be obtained by dividing each node's average stream capacity by
-   either the average for the slice, or the average for the network as a
-   whole.
-
-   Dividing by the network-wide average has the advantage that it will
-   account for issues related to unbalancing between higher vs lower
-   capacity, such as Steven Murdoch's queuing theory weighting result.
-   For this reason, we will opt for network-wide averages.
+   The ratios are calculated by dividing each measured value by the
+   network-wide average.
 
 
 5. Ratio Filtering
@@ -77,10 +72,8 @@ Status: Open
    with capacity of one standard deviation below a node's average
    are also removed.
 
-   The final ratio result will be calculated as the maximum of
-   these two resulting ratios if both are less than 1.0, the minimum
-   if both are greater than 1.0, and the mean if one is greater
-   and one is less than 1.0.
+   The final ratio result will be the unfiltered ratio if it is
+   close to 1.0, otherwise it will be the filtered ratio.
 
 
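Read literally, the filtering step drops a node's below-average streams
before re-averaging. A minimal Python sketch under that assumption
(statistics is the standard-library module; filtered_mean is an
illustrative name):

   import statistics

   def filtered_mean(stream_bws):
       # Drop streams more than one standard deviation below this
       # node's mean capacity, then average the remainder (assumed
       # reading of "are also removed" above).
       mean = statistics.mean(stream_bws)
       dev = statistics.pstdev(stream_bws)
       kept = [bw for bw in stream_bws if bw >= mean - dev]
       return statistics.mean(kept) if kept else mean
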
 6. Pseudocode for Ratio Calculation Algorithm
@@ -109,12 +102,7 @@ Status: Open
    Bw_net_ratio(N) = Bw_measured(N)/Bw_net_avg(Slices)
    Bw_Norm_net_ratio(N) = Bw_Norm_measured(N)/Bw_Norm_net_avg(Slices)
 
-   if Bw_net_ratio(N) < 1.0 and Bw_Norm_net_ratio(N) < 1.0:
-     ResultRatio(N) = MAX(Bw_net_ratio(N), Bw_Norm_net_ratio(N))
-   else if Bw_net_ratio(N) > 1.0 and Bw_Norm_net_ratio(N) > 1.0:
-     ResultRatio(N) = MIN(Bw_net_ratio(N), Bw_Norm_net_ratio(N))
-   else:
-     ResultRatio(N) = MEAN(Bw_net_ratio(N), Bw_Norm_net_ratio(N))
+   ResultRatio(N) = ClosestToOne(Bw_net_ratio(N), Bw_Norm_net_ratio(N))
 
 
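ClosestToOne() is not spelled out in this hunk; a plausible reading,
consistent with the "close to 1.0" rule in Section 5, is to pick
whichever ratio lies nearest to 1.0 (illustrative sketch only):

   def closest_to_one(net_ratio, norm_net_ratio):
       # Assumed semantics: return the ratio whose distance from 1.0
       # is smallest, i.e. the more conservative adjustment.
       if abs(net_ratio - 1.0) <= abs(norm_net_ratio - 1.0):
           return net_ratio
       return norm_net_ratio
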
 7. Security implications
@@ -126,13 +114,13 @@ Status: Open
 
    This scheme will not address nodes that try to game the system by
    providing better service to scanners. The scanners can be detected
-   at the entry by IP address, and at the exit by the destination fetch.
+   at the entry by IP address, and at the exit by the destination fetch
+   IP.
 
    Measures can be taken to obfuscate and separate the scanners' source
    IP address from the directory authority IP address. For instance,
    scans can happen offsite and the results can be rsynced into the
-   authorities. The destination fetch can also be obscured by using SSL
-   and periodically changing the large document that is fetched.
+   authorities. The destination server IP can also change.
 
    Neither of these methods is foolproof, but such nodes can already
    lie about their bandwidth to attract more traffic, so this solution
@@ -148,7 +136,7 @@ Status: Open
    over a portion of the network, outputting files of the form:
 
      node_id=<idhex> SP strm_bw=<BW_measured(N)> SP
-     filt_bw=<BW_Norm_measured(N)> NL
+     filt_bw=<BW_Norm_measured(N)> ns_bw=<CurrentConsensusBw(N)> NL
 
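One way to consume lines in this format (SP = space, NL = newline) is a
simple key=value split; this parser is an illustration, not the actual
gathering script:

   def parse_scanner_line(line):
       # "node_id=<idhex> strm_bw=<int> filt_bw=<int> ns_bw=<int>"
       fields = dict(kv.split("=", 1) for kv in line.split())
       return {
           "node_id": fields["node_id"],
           "strm_bw": int(fields["strm_bw"]),
           "filt_bw": int(fields["filt_bw"]),
           "ns_bw": int(fields["ns_bw"]),
       }
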
    The most recent file from each scanner will be periodically gathered
    by another script that uses them to produce network-wide averages
@@ -166,10 +154,15 @@ Status: Open
    scan, and taking the weighted average with the previous consensus
    bandwidth:
 
-     Bw_new = (Bw_current * Alpha + Bw_scan_avg*Bw_ratio)/(Alpha + 1)
+     Bw_new = Round((Bw_current * Alpha + Bw_scan_avg*Bw_ratio)/(Alpha + 1))
 
    The Alpha parameter is a smoothing parameter intended to prevent
-   rapid oscillation between loaded and unloaded conditions.
+   rapid oscillation between loaded and unloaded conditions. It is
+   currently fixed at 0.333.
+
+   The Round() step consists of rounding to the 3 most significant figures
+   in base10, and then rounding that result to the nearest 1000, with
+   a minimum value of 1000.
 
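Putting the smoothing formula and the Round() step together (a Python
sketch; only Alpha = 0.333 and the two-stage rounding rule come from the
text, the function names are illustrative):

   ALPHA = 0.333  # smoothing parameter, fixed per the text above

   def round_bw(bw):
       # Round to the 3 most significant base-10 figures, then to the
       # nearest 1000, with a minimum value of 1000.
       if bw < 1:
           return 1000
       sig3 = round(bw, 3 - len(str(int(bw))))  # 3 significant figures
       return max(1000, int(round(sig3 / 1000.0)) * 1000)

   def bw_new(bw_current, bw_scan_avg, bw_ratio):
       return round_bw((bw_current * ALPHA + bw_scan_avg * bw_ratio)
                       / (ALPHA + 1))
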
    This will produce a new bandwidth value that will be output into a
    file consisting of lines of the form:
@@ -183,6 +176,3 @@ Status: Open
    This file can be either copied or rsynced into a directory readable
    by the directory authority.
 
-
-   1. Exact values for each segment are still being determined via
-      test scans.