Merge remote-tracking branch 'teor/fallbacks-20161219'

Nick Mathewson 2016-12-20 18:38:45 -05:00
commit df87812b41
4 changed files with 494 additions and 198 deletions

changes/fallbacks-201612 (new file, 37 lines)

@@ -0,0 +1,37 @@
o Minor features (fallback directories):
- Select 200 fallback directories for each release.
Closes ticket 20881.
- Provide bandwidth and consensus weight for each candidate fallback in
updateFallbackDirs.py.
Closes ticket 20878.
- Require fallback directories to have the same address and port for
7 days. (Due to the limited number of relays with enough stability.) Relays
whose OnionOO stability timer is reset on restart by bug 18050 should
upgrade to Tor 0.2.8.7 or later, which has a fix for this issue.
Closes ticket 20880; maintains short-term fix in e220214 in
tor-0.2.8.2-alpha.
- Make it easier to change the output sort order of fallbacks.
Closes ticket 20822.
- Exclude relays affected by bug 20499 from the fallback list. Exclude
known affected versions, and any relay that delivers a stale consensus
that expired more than 24 hours ago.
Closes ticket 20539.
- Require fallbacks to have flags for 90% of the time (weighted decaying
average), rather than 95%. This allows at least 73% of clients to
bootstrap in the first 5 seconds without contacting an authority.
Part of ticket 18828.
- Display the fingerprint when downloading consensuses from fallbacks.
Closes ticket 20908.
- Allow 3 fallbacks per operator. (This is safe now that we are
choosing 200 fallbacks.) Closes ticket 20912.
- Reduce the minimum fallback bandwidth to 1 MByte/s.
Part of ticket 18828.
o Minor bugfixes (fallback directories):
- Stop failing when OUTPUT_COMMENTS is True in updateFallbackDirs.py.
Closes ticket 20877; bugfix on commit 9998343 in tor-0.2.8.3-alpha.
- Avoid checking fallback candidates' DirPorts if they are down in
OnionOO. When a relay operator has multiple relays, this prioritises
relays that are up over relays that are down.
Closes ticket 20926; bugfix on tor-0.2.8.3-alpha.
- Stop failing when a relay has no uptime data in updateFallbackDirs.py.
Closes ticket 20945; bugfix on tor-0.2.8.1-alpha.

scripts/maint/fallback.blacklist

@@ -27,11 +27,6 @@
# https://lists.torproject.org/pipermail/tor-relays/2015-December/008384.html
80.82.215.199:80 orport=443 id=3BEFAB76461B6B99DCF34C285E933562F5712AE4 ipv6=[2001:4ba0:cafe:a18::1]:443
# https://lists.torproject.org/pipermail/tor-relays/2016-January/008515.html
# later opt-out in
# https://lists.torproject.org/pipermail/tor-relays/2016-January/008521.html
5.9.158.75:80 orport=443 id=F1BE15429B3CE696D6807F4D4A58B1BFEC45C822 ipv6=[2a01:4f8:190:514a::2]:443
# Email sent directly to teor, verified using relay contact info
5.34.183.168:80 orport=443 id=601C92108A568742A7A6D9473FE3A414F7149070
217.12.199.208:8080 orport=22 id=BCFB0933367D626715DA32A147F417194A5D48D6
@@ -132,7 +127,6 @@
85.114.135.20:9030 orport=9001 id=ED8A9291A3139E34BBD35037B082081EC6C26C80 ipv6=[2001:4ba0:fff5:2d::8]:9001
148.251.128.156:9030 orport=9001 id=E382042E06A0A68AFC533E5AD5FB6867A12DF9FF ipv6=[2a01:4f8:210:238a::8]:9001
62.210.115.147:9030 orport=9001 id=7F1D94E2C36F8CC595C2AB00022A5AE38171D50B ipv6=[2001:bc8:3182:101::8]:9001
212.47.250.24:9030 orport=9001 id=33DA0CAB7C27812EFF2E22C9705630A54D101FEB
# Email sent directly to teor, verified using relay contact info
74.208.220.222:60000 orport=59999 id=4AA22235F0E9B3795A33930343CBB3EDAC60C5B0
@@ -227,3 +221,34 @@ id=9C8A123081EFBE022EF795630F447839DDFDDDEC
# Fallback was on 0.2.8.2-alpha list, but opted-out before 0.2.8.6
37.187.1.149:9030 orport=9001 id=08DC0F3C6E3D9C527C1FC8745D35DD1B0DE1875D ipv6=[2001:41d0:a:195::1]:9001
# Email sent directly to teor, verified using relay contact info
195.154.15.227:9030 orport=9001 id=6C3E3AB2F5F03CD71B637D433BAD924A1ECC5796
# Fallback was on 0.2.8.6 list, but changed IPv4 before 0.2.9
195.154.8.111:80 orport=443 id=FCB6695F8F2DC240E974510A4B3A0F2B12AB5B64
# Same operator, not on 0.2.8.6 list, also changed IPv4
51.255.235.246:80 orport=443 id=9B99C72B02AF8E3E5BE3596964F9CACD0090D132
# Fallback was on 0.2.8.6 list, but changed IPv4 before 0.2.9
5.175.233.86:80 orport=443 id=5525D0429BFE5DC4F1B0E9DE47A4CFA169661E33
# Fallbacks were on 0.2.8.6 list, but went down before 0.2.9
194.150.168.79:11112 orport=11111 id=29F1020B94BE25E6BE1AD13E93CE19D2131B487C
94.126.23.174:9030 orport=9001 id=6FC6F08270D565BE89B7C819DD8E2D487397C073
195.191.233.221:80 orport=443 id=DE134FC8E5CC4EC8A5DE66934E70AC9D70267197
176.31.180.157:143 orport=22 id=E781F4EC69671B3F1864AE2753E0890351506329 ipv6=[2001:41d0:8:eb9d::1]:22
# Fallback was on 0.2.8.6 list, but opted-out before 0.2.9
144.76.73.140:9030 orport=9001 id=6A640018EABF3DA9BAD9321AA37C2C87BBE1F907
# https://lists.torproject.org/pipermail/tor-relays/2016-December/011114.html
# no dirport
86.107.110.34:0 orport=9001 id=A0E3D30A660DB70CA0B6D081BA54D094DED6F28D
94.242.59.147:80 orport=9001 id=674DCBB0D9C1C4C4DBFB4A9AE024AF59FE4E7F46 ipv6=[2a00:1838:35:42::b648]:9001
# Email sent directly to teor, verified using relay contact info
167.114.152.100:9030 orport=443 id=0EF5E5FFC5D1EABCBDA1AFF6F6D6325C5756B0B2 ipv6=[2607:5300:100:200::1608]:443
# Email sent directly to teor, verified using relay contact info
163.172.35.245:80 orport=443 id=B771AA877687F88E6F1CA5354756DF6C8A7B6B24

scripts/maint/fallback.whitelist

@@ -50,17 +50,14 @@
167.114.35.28:9030 orport=9001 id=E65D300F11E1DB12C534B0146BDAB6972F1A8A48
# https://lists.torproject.org/pipermail/tor-relays/2015-December/008374.html
170.130.1.7:9030 orport=9001 id=FA3415659444AE006E7E9E5375E82F29700CFDFD
104.243.35.196:9030 orport=9001 id=FA3415659444AE006E7E9E5375E82F29700CFDFD
# https://lists.torproject.org/pipermail/tor-relays/2015-December/008378.html
144.76.14.145:110 orport=143 id=14419131033443AE6E21DA82B0D307F7CAE42BDB ipv6=[2a01:4f8:190:9490::dead]:443
# https://lists.torproject.org/pipermail/tor-relays/2015-December/008379.html
# Email sent directly to teor, verified using relay contact info
91.121.84.137:4951 orport=4051 id=6DE61A6F72C1E5418A66BFED80DFB63E4C77668F
# https://lists.torproject.org/pipermail/tor-relays/2015-December/008380.html
5.175.233.86:80 orport=443 id=5525D0429BFE5DC4F1B0E9DE47A4CFA169661E33
91.121.84.137:4951 orport=4051 id=6DE61A6F72C1E5418A66BFED80DFB63E4C77668F ipv6=[2001:41d0:1:8989::1]:4051
# https://lists.torproject.org/pipermail/tor-relays/2015-December/008381.html
# Sent additional email to teor with more relays
@@ -99,17 +96,14 @@
178.62.199.226:80 orport=443 id=CBEFF7BA4A4062045133C053F2D70524D8BBE5BE ipv6=[2a03:b0c0:2:d0::b7:5001]:443
# Emails sent directly to teor, verified using relay contact info
217.12.199.208:80 orport=443 id=DF3AED4322B1824BF5539AE54B2D1B38E080FF05
217.12.199.208:80 orport=443 id=DF3AED4322B1824BF5539AE54B2D1B38E080FF05 ipv6=[2a02:27a8:0:2::7e]:443
# Email sent directly to teor, verified using relay contact info
94.23.204.175:9030 orport=9001 id=5665A3904C89E22E971305EE8C1997BCA4123C69
# https://twitter.com/binarytenshi/status/717952514327453697
94.126.23.174:9030 orport=9001 id=6FC6F08270D565BE89B7C819DD8E2D487397C073
# Email sent directly to teor, verified using relay contact info
171.25.193.78:80 orport=443 id=A478E421F83194C114F41E94F95999672AED51FE ipv6=[2001:67c:289c:3::78]:443
171.25.193.77:80 orport=443 id=A10C4F666D27364036B562823E5830BC448E046A ipv6=[2001:67c:289c:3::77]:443
171.25.193.78:80 orport=443 id=A478E421F83194C114F41E94F95999672AED51FE ipv6=[2001:67c:289c:3::78]:443
171.25.193.131:80 orport=443 id=79861CF8522FC637EF046F7688F5289E49D94576
171.25.193.20:80 orport=443 id=DD8BD7307017407FCC36F8D04A688F74A0774C02 ipv6=[2001:67c:289c::20]:443
# OK, but same machine as 79861CF8522FC637EF046F7688F5289E49D94576
@@ -118,9 +112,9 @@
#171.25.193.25:80 orport=443 id=185663B7C12777F052B2C2D23D7A239D8DA88A0F ipv6=[2001:67c:289c::25]:443
# Email sent directly to teor, verified using relay contact info
212.47.229.2:9030 orport=9001 id=20462CBA5DA4C2D963567D17D0B7249718114A68
212.47.229.2:9030 orport=9001 id=20462CBA5DA4C2D963567D17D0B7249718114A68 ipv6=[2001:bc8:4400:2100::f03]:9001
93.115.97.242:9030 orport=9001 id=B5212DB685A2A0FCFBAE425738E478D12361710D
46.28.109.231:9030 orport=9001 id=F70B7C5CD72D74C7F9F2DC84FA9D20D51BA13610 ipv6=[2a02:2b88:2:1::4205:42]:9001
46.28.109.231:9030 orport=9001 id=F70B7C5CD72D74C7F9F2DC84FA9D20D51BA13610 ipv6=[2a02:2b88:2:1::4205:1]:9001
# Email sent directly to teor, verified using relay contact info
85.235.250.88:80 orport=443 id=72B2B12A3F60408BDBC98C6DF53988D3A0B3F0EE
@@ -154,31 +148,19 @@
178.16.208.59:80 orport=443 id=136F9299A5009A4E0E96494E723BDB556FB0A26B ipv6=[2a00:1c20:4089:1234:bff6:e1bb:1ce3:8dc6]:443
# Email sent directly to teor, verified using relay contact info
195.154.8.111:80 orport=443 id=FCB6695F8F2DC240E974510A4B3A0F2B12AB5B64
51.255.235.246:80 orport=443 id=9B99C72B02AF8E3E5BE3596964F9CACD0090D132
5.39.76.158:80 orport=443 id=C41F60F8B00E7FEF5CCC5BC6BB514CA1B8AAB651
# Email sent directly to teor, verified using relay contact info
109.163.234.5:80 orport=443 id=5C84C35936B7100B949AC75764EEF1352550550B
109.163.234.7:80 orport=443 id=C46524E586E1B997329703D356C07EE12B28C722
109.163.234.9:80 orport=443 id=5714542DCBEE1DD9864824723638FD44B2122CEA
77.247.181.162:80 orport=443 id=7BB160A8F54BD74F3DA5F2CE701E8772B841859D
109.163.234.4:80 orport=443 id=6B1E001929AF4DDBB747D02EC28340792B7724A6
77.247.181.164:80 orport=443 id=10E13E340651D0EF66B4DEBF610B3C0981168107
109.163.234.8:80 orport=443 id=20B0038D7A2FD73C696922551B8344CB0893D1F8
77.247.181.166:80 orport=443 id=06E123865C590189B3181114F23F0F13A7BC0E69
109.163.234.2:80 orport=443 id=B4F883DB3D478C7AE569C9F6CB766FD58650DC6A
109.163.234.2:80 orport=443 id=14F92FF956105932E9DEC5B82A7778A0B1BD9A52
109.163.234.4:80 orport=443 id=4888770464F0E900EFEF1BA181EA873D13F7713C
109.163.234.5:80 orport=443 id=5EB8D862E70981B8690DEDEF546789E26AB2BD24
109.163.234.7:80 orport=443 id=23038A7F2845EBA2234ECD6651BD4A7762F51B18
109.163.234.8:80 orport=443 id=0818DAE0E2DDF795AEDEAC60B15E71901084F281
109.163.234.9:80 orport=443 id=ABF7FBF389C9A747938B639B20E80620B460B2A9
62.102.148.67:80 orport=443 id=4A0C3E177AF684581EF780981AEAF51A98A6B5CF
109.163.234.5:80 orport=443 id=5C84C35936B7100B949AC75764EEF1352550550B
109.163.234.7:80 orport=443 id=C46524E586E1B997329703D356C07EE12B28C722
109.163.234.9:80 orport=443 id=5714542DCBEE1DD9864824723638FD44B2122CEA
77.247.181.162:80 orport=443 id=7BB160A8F54BD74F3DA5F2CE701E8772B841859D
109.163.234.4:80 orport=443 id=6B1E001929AF4DDBB747D02EC28340792B7724A6
77.247.181.164:80 orport=443 id=10E13E340651D0EF66B4DEBF610B3C0981168107
109.163.234.8:80 orport=443 id=20B0038D7A2FD73C696922551B8344CB0893D1F8
77.247.181.166:80 orport=443 id=06E123865C590189B3181114F23F0F13A7BC0E69
109.163.234.2:80 orport=443 id=B4F883DB3D478C7AE569C9F6CB766FD58650DC6A
62.102.148.67:80 orport=443 id=4A0C3E177AF684581EF780981AEAF51A98A6B5CF
# https://twitter.com/biotimylated/status/718994247500718080
212.47.252.149:9030 orport=9001 id=2CAC39BAA996791CEFAADC9D4754D65AF5EB77C0
@@ -215,9 +197,7 @@
# Email sent directly to teor, verified using relay contact info
86.59.119.88:80 orport=443 id=ACD889D86E02EDDAB1AFD81F598C0936238DC6D0
# Email sent directly to teor, verified using relay contact info
144.76.73.140:9030 orport=9001 id=6A640018EABF3DA9BAD9321AA37C2C87BBE1F907
86.59.119.83:80 orport=443 id=FC9AC8EA0160D88BCCFDE066940D7DD9FA45495B
# Email sent directly to teor, verified using relay contact info
193.11.164.243:9030 orport=9001 id=FFA72BD683BC2FCF988356E6BEC1E490F313FB07 ipv6=[2001:6b0:7:125::243]:9001
@@ -278,8 +258,8 @@
# Email sent directly to teor, verified using relay contact info
178.62.22.36:80 orport=443 id=A0766C0D3A667A3232C7D569DE94A28F9922FCB1 ipv6=[2a03:b0c0:1:d0::174:1]:9050
188.166.23.127:80 orport=443 id=3771A8154DEA98D551607806C80A209CDAA74535 ipv6=[2a03:b0c0:2:d0::27b:7001]:9050
198.199.64.217:80 orport=443 id=FAD306BAA59F6A02783F8606BDAA431F5FF7D1EA ipv6=[2604:a880:400:d0::1a9:b001]:9050
188.166.23.127:80 orport=443 id=8672E8A01B4D3FA4C0BBE21C740D4506302EA487 ipv6=[2a03:b0c0:2:d0::27b:7001]:9050
198.199.64.217:80 orport=443 id=B1D81825CFD7209BD1B4520B040EF5653C204A23 ipv6=[2604:a880:400:d0::1a9:b001]:9050
159.203.32.149:80 orport=443 id=55C7554AFCEC1062DCBAC93E67B2E03C6F330EFC ipv6=[2604:a880:cad:d0::105:f001]:9050
# Email sent directly to teor, verified using relay contact info
@@ -300,9 +280,6 @@
# Email sent directly to teor, verified using relay contact info
212.47.230.49:9030 orport=9001 id=3D6D0771E54056AEFC28BB1DE816951F11826E97
# Email sent directly to teor, verified using relay contact info
176.31.180.157:143 orport=22 id=E781F4EC69671B3F1864AE2753E0890351506329 ipv6=[2001:41d0:8:eb9d::1]:22
# Email sent directly to teor, verified using relay contact info
192.99.55.69:80 orport=443 id=0682DE15222A4A4A0D67DBA72A8132161992C023
192.99.59.140:80 orport=443 id=3C9148DA49F20654730FAC83FFF693A4D49D0244
@@ -318,7 +295,7 @@
151.80.42.103:9030 orport=9001 id=9007C1D8E4F03D506A4A011B907A9E8D04E3C605 ipv6=[2001:41d0:e:f67::114]:9001
# Email sent directly to teor, verified using relay contact info
5.39.92.199:80 orport=443 id=0BEA4A88D069753218EAAAD6D22EA87B9A1319D6
5.39.92.199:80 orport=443 id=0BEA4A88D069753218EAAAD6D22EA87B9A1319D6 ipv6=[2001:41d0:8:b1c7::1]:443
# Email sent directly to teor, verified using relay contact info
176.31.159.231:80 orport=443 id=D5DBCC0B4F029F80C7B8D33F20CF7D97F0423BB1
@@ -332,10 +309,7 @@
212.47.241.21:80 orport=443 id=892F941915F6A0C6E0958E52E0A9685C190CF45C
# Email sent directly to teor, verified using relay contact info
195.191.233.221:80 orport=443 id=DE134FC8E5CC4EC8A5DE66934E70AC9D70267197
# Email sent directly to teor, verified using relay contact info
62.210.238.33:9030 orport=9001 id=FDF845FC159C0020E2BDDA120C30C5C5038F74B4
212.129.38.254:9030 orport=9001 id=FDF845FC159C0020E2BDDA120C30C5C5038F74B4
# Email sent directly to teor, verified using relay contact info
37.157.195.87:8030 orport=443 id=12FD624EE73CEF37137C90D38B2406A66F68FAA2
@@ -405,12 +379,12 @@
91.219.237.229:80 orport=443 id=1ECD73B936CB6E6B3CD647CC204F108D9DF2C9F7
# Email sent directly to teor, verified using relay contact info
# Suitable, check with operator before adding
#212.47.240.10:82 orport=443 id=2A4C448784F5A83AFE6C78DA357D5E31F7989DEB
212.47.240.10:81 orport=993 id=72527E3242CB15AADE28374AE0D35833FC083F60
212.47.240.10:82 orport=443 id=2A4C448784F5A83AFE6C78DA357D5E31F7989DEB
# Ok, but on the same machine as 2A4C448784F5A83AFE6C78DA357D5E31F7989DEB
#212.47.240.10:81 orport=993 id=72527E3242CB15AADE28374AE0D35833FC083F60
163.172.131.88:80 orport=443 id=AD253B49E303C6AB1E048B014392AC569E8A7DAE ipv6=[2001:bc8:4400:2100::2:1009]:443
# Suitable, check with operator before adding
#163.172.131.88:81 orport=993 id=D5F3FB17504744FB7ECEF46F4B1D155258A6D942 ipv6=D5F3FB17504744FB7ECEF46F4B1D155258A6D942
# Ok, but on the same machine as AD253B49E303C6AB1E048B014392AC569E8A7DAE
#163.172.131.88:81 orport=993 id=D5F3FB17504744FB7ECEF46F4B1D155258A6D942 ipv6=[2001:bc8:4400:2100::2:1009]:993
# Email sent directly to teor, verified using relay contact info
46.101.151.222:80 orport=443 id=1DBAED235E3957DE1ABD25B4206BE71406FB61F8
@@ -443,9 +417,6 @@
# Email sent directly to teor, verified using relay contact info
188.166.133.133:9030 orport=9001 id=774555642FDC1E1D4FDF2E0C31B7CA9501C5C9C7 ipv6=[2a03:b0c0:2:d0::5:f001]:9001
# Email sent directly to teor, verified using relay contact info
5.196.88.122:9030 orport=9001 id=0C2C599AFCB26F5CFC2C7592435924C1D63D9484
# Email sent directly to teor, verified using relay contact info
46.8.249.10:80 orport=443 id=31670150090A7C3513CB7914B9610E786391A95D
@@ -485,11 +456,10 @@
5.9.146.203:80 orport=443 id=1F45542A24A61BF9408F1C05E0DCE4E29F2CBA11
# Email sent directly to teor, verified using relay contact info
167.114.152.100:9030 orport=443 id=0EF5E5FFC5D1EABCBDA1AFF6F6D6325C5756B0B2 ipv6=[2607:5300:100:200::1608]:443
# Email sent directly to teor, verified using relay contact info
192.99.168.102:80 orport=443 id=230A8B2A8BA861210D9B4BA97745AEC217A94207
167.114.153.21:80 orport=443 id=0B85617241252517E8ECF2CFC7F4C1A32DCD153F
# Updated details from atlas based on ticket #20010
163.172.176.167:80 orport=443 id=230A8B2A8BA861210D9B4BA97745AEC217A94207
163.172.149.155:80 orport=443 id=0B85617241252517E8ECF2CFC7F4C1A32DCD153F
163.172.149.122:80 orport=443 id=A9406A006D6E7B5DA30F2C6D4E42A338B5E340B2
# Email sent directly to teor, verified using relay contact info
204.11.50.131:9030 orport=9001 id=185F2A57B0C4620582602761097D17DB81654F70
@@ -497,9 +467,6 @@
# Email sent directly to teor, verified using relay contact info
151.236.222.217:44607 orport=9001 id=94D58704C2589C130C9C39ED148BD8EA468DBA54
# Email sent directly to teor, verified using relay contact info
194.150.168.79:11112 orport=11111 id=29F1020B94BE25E6BE1AD13E93CE19D2131B487C
# Email sent directly to teor, verified using relay contact info
185.35.202.221:9030 orport=9001 id=C13B91384CDD52A871E3ECECE4EF74A7AC7DCB08 ipv6=[2a02:ed06::221]:9001
@@ -513,7 +480,7 @@
92.222.20.130:80 orport=443 id=0639612FF149AA19DF3BCEA147E5B8FED6F3C87C
# Email sent directly to teor, verified using relay contact info
80.112.155.100:9030 orport=9001 id=1163378F239C36CA1BDC730AC50BF4F2976141F5 ipv6=[2001:470:7b02::38]:9001
80.112.155.100:9030 orport=9001 id=53B000310984CD86AF47E5F3CD0BFF184E34B383 ipv6=[2001:470:7b02::38]:9001
# Email sent directly to teor, verified using relay contact info
83.212.99.68:80 orport=443 id=DDBB2A38252ADDA53E4492DDF982CA6CC6E10EC0 ipv6=[2001:648:2ffc:1225:a800:bff:fe3d:67b5]:443
@@ -522,7 +489,7 @@
95.130.11.147:9030 orport=443 id=6B697F3FF04C26123466A5C0E5D1F8D91925967A
# Email sent directly to teor, verified using relay contact info
176.31.191.26:9030 orport=9001 id=7350AB9ED7568F22745198359373C04AC783C37C
176.31.191.26:80 orport=443 id=7350AB9ED7568F22745198359373C04AC783C37C
# Email sent directly to teor, verified using relay contact info
128.199.55.207:9030 orport=9001 id=BCEF908195805E03E92CCFE669C48738E556B9C5 ipv6=[2a03:b0c0:2:d0::158:3001]:9001
@@ -540,16 +507,17 @@
80.240.139.111:80 orport=443 id=DD3BE7382C221F31723C7B294310EF9282B9111B
# Email sent directly to teor, verified using relay contact info
185.97.32.18:9030 orport=9001 id=3BAB316CAAEC47E71905EB6C65584636D5689A8A
185.97.32.18:9030 orport=9001 id=04250C3835019B26AA6764E85D836088BE441088
# Email sent directly to teor, verified using relay contact info
149.56.45.200:9030 orport=9001 id=FE296180018833AF03A8EACD5894A614623D3F76
# Email sent directly to teor, verified using relay contact info
81.2.209.10:443 orport=80 id=B6904ADD4C0D10CDA7179E051962350A69A63243
81.2.209.10:443 orport=80 id=B6904ADD4C0D10CDA7179E051962350A69A63243 ipv6=[2001:15e8:201:1::d10a]:80
# Email sent directly to teor, verified using relay contact info
195.154.164.243:80 orport=443 id=AC66FFA4AB35A59EBBF5BF4C70008BF24D8A7A5C ipv6=[2001:bc8:399f:f000::1]:993
# IPv6 address unreliable
195.154.164.243:80 orport=443 id=AC66FFA4AB35A59EBBF5BF4C70008BF24D8A7A5C #ipv6=[2001:bc8:399f:f000::1]:993
138.201.26.2:80 orport=443 id=6D3A3ED5671E4E3F58D4951438B10AE552A5FA0F
81.7.16.182:80 orport=443 id=51E1CF613FD6F9F11FE24743C91D6F9981807D82 ipv6=[2a02:180:1:1::517:10b6]:993
134.119.36.135:80 orport=443 id=763C9556602BD6207771A7A3D958091D44C43228 ipv6=[2a00:1158:3::2a8]:993
@@ -563,7 +531,7 @@
217.12.208.117:80 orport=443 id=E6E18151300F90C235D3809F90B31330737CEB43 ipv6=[2a00:1ca8:a7::1bb]:993
81.7.10.251:80 orport=443 id=8073670F8F852971298F8AF2C5B23AE012645901 ipv6=[2a02:180:1:1::517:afb]:993
46.36.39.50:80 orport=443 id=ED4B0DBA79AEF5521564FA0231455DCFDDE73BB6 ipv6=[2a02:25b0:aaaa:aaaa:8d49:b692:4852:0]:995
91.194.90.103:80 orport=443 id=75C4495F4D80522CA6F6A3FB349F1B009563F4B7 ipv6=[2a02:c200:0:10:3:0:5449:1]:993
91.194.90.103:80 orport=443 id=75C4495F4D80522CA6F6A3FB349F1B009563F4B7 ipv6=[2a02:c205:3000:5449::1]:993
163.172.25.118:80 orport=22 id=0CF8F3E6590F45D50B70F2F7DA6605ECA6CD408F
188.138.88.42:80 orport=443 id=70C55A114C0EF3DC5784A4FAEE64388434A3398F
81.7.13.84:80 orport=443 id=0C1E7DD9ED0676C788933F68A9985ED853CA5812 ipv6=[2a02:180:1:1::5b8f:538c]:993
@@ -587,11 +555,10 @@
91.229.20.27:9030 orport=9001 id=9A0D54D3A6D2E0767596BF1515E6162A75B3293F
# Email sent directly to teor, verified using relay contact info
# Awaiting confirmation of new ORPort from relay operator
80.127.137.19:80 orport=443 id=6EF897645B79B6CB35E853B32506375014DE3621 ipv6=[2001:981:47c1:1::6]:443
# Email sent directly to teor, verified using relay contact info
163.172.138.22:80 orport=443 id=8664DC892540F3C789DB37008236C096C871734D
163.172.138.22:80 orport=443 id=8664DC892540F3C789DB37008236C096C871734D ipv6=[2001:bc8:4400:2100::1:3]:443
# Email sent directly to teor, verified using relay contact info
97.74.237.196:9030 orport=9001 id=2F0F32AB1E5B943CA7D062C03F18960C86E70D94
@@ -603,7 +570,7 @@
178.62.98.160:9030 orport=9001 id=8B92044763E880996A988831B15B2B0E5AD1544A
# Email sent directly to teor, verified using relay contact info
195.154.15.227:9030 orport=9001 id=6C3E3AB2F5F03CD71B637D433BAD924A1ECC5796
163.172.217.50:9030 orport=9001 id=02ECD99ECD596013A8134D46531560816ECC4BE6
# Email sent directly to teor, verified using relay contact info
185.100.86.100:80 orport=443 id=0E8C0C8315B66DB5F703804B3889A1DD66C67CE0
@@ -617,10 +584,11 @@
178.62.86.96:9030 orport=9001 id=439D0447772CB107B886F7782DBC201FA26B92D1 ipv6=[2a03:b0c0:1:d0::3cf:7001]:9050
# Email sent directly to teor, verified using relay contact info
91.233.106.121:80 orport=443 id=896364B7996F5DFBA0E15D1A2E06D0B98B555DD6
# Very low bandwidth, stale consensuses, excluded to cut down on warnings
#91.233.106.121:80 orport=443 id=896364B7996F5DFBA0E15D1A2E06D0B98B555DD6
# Email sent directly to teor, verified using relay contact info
167.114.113.48:9030 orport=443 id=2EC0C66EA700C44670444280AABAB1EC78B722A0
167.114.113.48:9030 orport=403 id=2EC0C66EA700C44670444280AABAB1EC78B722A0
# Email sent directly to teor, verified using relay contact info
79.120.16.42:9030 orport=9001 id=BD552C165E2ED2887D3F1CCE9CFF155DDA2D86E6
@@ -675,7 +643,7 @@
46.4.111.124:9030 orport=9001 id=D9065F9E57899B3D272AA212317AF61A9B14D204
# Email sent directly to teor, verified using relay contact info
78.46.164.129:9030 orport=9001 id=52AEA31188331F421B2EDB494DB65CD181E5B257
138.201.130.32:9030 orport=9001 id=52AEA31188331F421B2EDB494DB65CD181E5B257
# Email sent directly to teor, verified using relay contact info
185.100.85.61:80 orport=443 id=025B66CEBC070FCB0519D206CF0CF4965C20C96E
@@ -684,11 +652,12 @@
108.166.168.158:80 orport=443 id=CDAB3AE06A8C9C6BF817B3B0F1877A4B91465699
# Email sent directly to teor, verified using relay contact info
91.219.236.222:80 orport=443 id=EC413181CEB1C8EDC17608BBB177CD5FD8535E99
91.219.236.222:80 orport=443 id=20704E7DD51501DC303FA51B738D7B7E61397CF6
# Email sent directly to teor, verified using relay contact info
185.14.185.240:9030 orport=443 id=D62FB817B0288085FAC38A6DC8B36DCD85B70260
192.34.63.137:9030 orport=443 id=ABCB4965F1FEE193602B50A365425105C889D3F8
128.199.197.16:9030 orport=443 id=DEE5298B3BA18CDE651421CD2DCB34A4A69F224D
# Email sent directly to teor, verified using relay contact info
185.13.38.75:9030 orport=9001 id=D2A1703758A0FBBA026988B92C2F88BAB59F9361
@@ -719,7 +688,7 @@
166.70.207.2:9030 orport=9001 id=E3DB2E354B883B59E8DC56B3E7A353DDFD457812
# Emails sent directly to teor, verified using relay contact info
#69.162.139.9:9030 orport=9001 id=4791FC0692EAB60DF2BCCAFF940B95B74E7654F6 ipv6=[2607:f128:40:1212::45a2:8b09]:9001
69.162.139.9:9030 orport=9001 id=4791FC0692EAB60DF2BCCAFF940B95B74E7654F6 ipv6=[2607:f128:40:1212::45a2:8b09]:9001
# Email sent directly to teor, verified using relay contact info
213.239.217.18:1338 orport=1337 id=C37BC191AC389179674578C3E6944E925FE186C2 ipv6=[2a01:4f8:a0:746a:101:1:1:1]:1337
@@ -749,7 +718,6 @@
# Email sent directly to teor, verified using relay contact info
163.172.35.249:80 orport=443 id=C08DE49658E5B3CFC6F2A952B453C4B608C9A16A
163.172.35.247:80 orport=443 id=71AB4726D830FAE776D74AEF790CF04D8E0151B4
163.172.13.124:80 orport=443 id=B771AA877687F88E6F1CA5354756DF6C8A7B6B24
# Email sent directly to teor, verified using relay contact info
64.113.32.29:9030 orport=9001 id=30C19B81981F450C402306E2E7CFB6C3F79CB6B2
@@ -768,3 +736,95 @@
# Email sent directly to teor, verified using relay contact info
62.216.5.120:9030 orport=9001 id=D032D4D617140D6B828FC7C4334860E45E414FBE
# Email sent directly to teor, verified using relay contact info
51.254.136.195:80 orport=443 id=7BB70F8585DFC27E75D692970C0EEB0F22983A63
# Email sent directly to teor, verified using relay contact info
163.172.13.165:9030 orport=9001 id=33DA0CAB7C27812EFF2E22C9705630A54D101FEB ipv6=[2001:bc8:38cb:201::8]:9001
# Email sent directly to teor, verified using relay contact info
5.196.88.122:9030 orport=9001 id=0C2C599AFCB26F5CFC2C7592435924C1D63D9484 ipv6=[2001:41d0:a:fb7a::1]:9001
# Email sent directly to teor, verified using relay contact info
5.9.158.75:80 orport=443 id=1AF72E8906E6C49481A791A6F8F84F8DFEBBB2BA ipv6=[2a01:4f8:190:514a::2]:443
# Email sent directly to teor, verified using relay contact info
46.101.169.151:9030 orport=9001 id=D760C5B436E42F93D77EF2D969157EEA14F9B39C ipv6=[2a03:b0c0:3:d0::74f:a001]:9001
# Email sent directly to teor, verified using relay contact info
199.249.223.81:80 orport=443 id=F7447E99EB5CBD4D5EB913EE0E35AC642B5C1EF3
199.249.223.79:80 orport=443 id=D33292FEDE24DD40F2385283E55C87F85C0943B6
199.249.223.78:80 orport=443 id=EC15DB62D9101481F364DE52EB8313C838BDDC29
199.249.223.77:80 orport=443 id=CC4A3AE960E3617F49BF9887B79186C14CBA6813
199.249.223.76:80 orport=443 id=43209F6D50C657A56FE79AF01CA69F9EF19BD338
199.249.223.75:80 orport=443 id=60D3667F56AEC5C69CF7E8F557DB21DDF6C36060
199.249.223.74:80 orport=443 id=5F4CD12099AF20FAF9ADFDCEC65316A376D0201C
199.249.223.73:80 orport=443 id=5649CB2158DA94FB747415F26628BEC07FA57616
199.249.223.72:80 orport=443 id=B028707969D8ED84E6DEA597A884F78AAD471971
199.249.223.71:80 orport=443 id=B6320E44A230302C7BF9319E67597A9B87882241
199.249.223.60:80 orport=443 id=B7047FBDE9C53C39011CA84E5CB2A8E3543066D0
199.249.223.61:80 orport=443 id=40E7D6CE5085E4CDDA31D51A29D1457EB53F12AD
199.249.223.62:80 orport=443 id=0077BCBA7244DB3E6A5ED2746E86170066684887
199.249.223.63:80 orport=443 id=1DB25DF59DAA01B5BE3D3CEB8AFED115940EBE8B
199.249.223.64:80 orport=443 id=9F2856F6D2B89AD4EF6D5723FAB167DB5A53519A
199.249.223.65:80 orport=443 id=9D21F034C3BFF4E7737D08CF775DC1745706801F
199.249.223.66:80 orport=443 id=C5A53BCC174EF8FD0DCB223E4AA929FA557DEDB2
199.249.223.67:80 orport=443 id=155D6F57425F16C0624D77777641E4EB1B47C6F0
199.249.223.68:80 orport=443 id=DF20497E487A979995D851A5BCEC313DF7E5BC51
199.249.223.69:80 orport=443 id=7FA8E7E44F1392A4E40FFC3B69DB3B00091B7FD3
# https://lists.torproject.org/pipermail/tor-relays/2016-December/011114.html
86.105.212.130:9030 orport=443 id=9C900A7F6F5DD034CFFD192DAEC9CCAA813DB022
# Email sent directly to teor, verified using relay contact info
178.33.183.251:80 orport=443 id=DD823AFB415380A802DCAEB9461AE637604107FB ipv6=[2001:41d0:2:a683::251]:443
# Email sent directly to teor, verified using relay contact info
#31.185.104.19:80 orport=443 id=9EAD5B2D3DBD96DBC80DCE423B0C345E920A758D
# OK, but on same machine as 9EAD5B2D3DBD96DBC80DCE423B0C345E920A758D
31.185.104.20:80 orport=443 id=ADB2C26629643DBB9F8FE0096E7D16F9414B4F8D
#31.185.104.21:80 orport=443 id=C2AAB088555850FC434E68943F551072042B85F1
#31.185.104.22:80 orport=443 id=5BA3A52760A0EABF7E7C3ED3048A77328FF0F148
# Email sent directly to teor, verified using relay contact info
185.34.60.114:80 orport=443 id=7F7A695DF6F2B8640A70B6ADD01105BC2EBC5135
# Email sent directly to teor, verified using relay contact info
94.142.242.84:80 orport=443 id=AA0D167E03E298F9A8CD50F448B81FBD7FA80D56 ipv6=[2a02:898:24:84::1]:443
# Email sent directly to teor, verified using relay contact info
185.129.62.62:9030 orport=9001 id=ACDD9E85A05B127BA010466C13C8C47212E8A38F ipv6=[2a06:d380:0:3700::62]:9001
# Email sent directly to teor, verified using relay contact info
# The e84 part of the IPv6 address does not have a leading 0 in the consensus
81.30.158.213:9030 orport=9001 id=789EA6C9AE9ADDD8760903171CFA9AC5741B0C70 ipv6=[2001:4ba0:cafe:e84::1]:9001
# https://lists.torproject.org/pipermail/tor-relays/2016-December/011209.html
5.9.159.14:9030 orport=9001 id=0F100F60C7A63BED90216052324D29B08CFCF797
# Email sent directly to teor, verified using relay contact info
45.62.255.25:80 orport=443 id=3473ED788D9E63361D1572B7E82EC54338953D2A
# Email sent directly to teor, verified using relay contact info
217.79.179.177:9030 orport=9001 id=3E53D3979DB07EFD736661C934A1DED14127B684 ipv6=[2001:4ba0:fff9:131:6c4f::90d3]:9001
# Email sent directly to teor, verified using relay contact info
212.47.244.38:8080 orport=443 id=E81EF60A73B3809F8964F73766B01BAA0A171E20
163.172.157.213:8080 orport=443 id=4623A9EC53BFD83155929E56D6F7B55B5E718C24
163.172.139.104:8080 orport=443 id=68F175CCABE727AA2D2309BCD8789499CEE36ED7
# Email sent directly to teor, verified using relay contact info
163.172.223.200:80 orport=443 id=998BF3ED7F70E33D1C307247B9626D9E7573C438
195.154.122.54:80 orport=443 id=64E99CB34C595A02A3165484BD1215E7389322C6
# Email sent directly to teor, verified using relay contact info
185.100.86.128:9030 orport=9001 id=9B31F1F1C1554F9FFB3455911F82E818EF7C7883
185.100.85.101:9030 orport=9001 id=4061C553CA88021B8302F0814365070AAE617270
31.171.155.108:9030 orport=9001 id=D3E5EDDBE5159388704D6785BE51930AAFACEC6F
# Email sent directly to teor, verified using relay contact info
89.163.247.43:9030 orport=9001 id=BC7ACFAC04854C77167C7D66B7E471314ED8C410 ipv6=[2001:4ba0:fff7:25::5]:9001
# Email sent directly to teor, verified using relay contact info
95.85.8.226:80 orport=443 id=1211AC1BBB8A1AF7CBA86BCE8689AA3146B86423

scripts/maint/updateFallbackDirs.py

@@ -38,7 +38,8 @@ import dateutil.parser
#from bson import json_util
import copy
from stem.descriptor.remote import DescriptorDownloader
from stem.descriptor import DocumentHandler
from stem.descriptor.remote import get_consensus
import logging
# INFO tells you why each relay was included or excluded
@@ -80,7 +81,27 @@ PERFORM_IPV4_DIRPORT_CHECKS = False if OUTPUT_CANDIDATES else True
# Don't check ~1000 candidates when OUTPUT_CANDIDATES is True
PERFORM_IPV6_DIRPORT_CHECKS = False if OUTPUT_CANDIDATES else False
# Output fallback name, flags, and ContactInfo in a C comment?
# Must relays be running now?
MUST_BE_RUNNING_NOW = (PERFORM_IPV4_DIRPORT_CHECKS
or PERFORM_IPV6_DIRPORT_CHECKS)
# Clients have been using microdesc consensuses by default for a while now
DOWNLOAD_MICRODESC_CONSENSUS = True
# If a relay delivers an expired consensus, if it expired less than this many
# seconds ago, we still allow the relay. This should never be less than -90,
# as all directory mirrors should have downloaded a consensus 90 minutes
# before it expires. It should never be more than 24 hours, because clients
# reject consensuses that are older than REASONABLY_LIVE_TIME.
# For the consensus expiry check to be accurate, the machine running this
# script needs an accurate clock.
# We use 24 hours to compensate for #20909, where relays on 0.2.9.5-alpha and
# 0.3.0.0-alpha-dev and later deliver stale consensuses, but typically recover
# after ~12 hours.
# We should make this lower when #20909 is fixed, see #20942.
CONSENSUS_EXPIRY_TOLERANCE = 24*60*60
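A minimal sketch of how this tolerance is applied (the real check is in fallback_consensus_download_speed() below; valid_until is a hypothetical stand-in for the consensus field):
import datetime
def expiry_within_tolerance(valid_until):
  # positive means the consensus has already expired
  time_since_expiry = (datetime.datetime.utcnow()
                       - valid_until).total_seconds()
  return time_since_expiry <= CONSENSUS_EXPIRY_TOLERANCE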
# Output fallback name, flags, bandwidth, and ContactInfo in a C comment?
OUTPUT_COMMENTS = True if OUTPUT_CANDIDATES else False
# Output matching ContactInfo in fallbacks list or the blacklist?
@@ -88,6 +109,12 @@ OUTPUT_COMMENTS = True if OUTPUT_CANDIDATES else False
CONTACT_COUNT = True if OUTPUT_CANDIDATES else False
CONTACT_BLACKLIST_COUNT = True if OUTPUT_CANDIDATES else False
# How the list should be sorted:
# fingerprint: is useful for stable diffs of fallback lists
# measured_bandwidth: is useful when pruning the list based on bandwidth
# contact: is useful for contacting operators once the list has been pruned
OUTPUT_SORT_FIELD = 'contact' if OUTPUT_CANDIDATES else 'fingerprint'
## OnionOO Settings
ONIONOO = 'https://onionoo.torproject.org/'
@@ -127,16 +154,21 @@ MAX_LIST_FILE_SIZE = 1024 * 1024
## Eligibility Settings
# Reduced due to a bug in tor where a relay submits a 0 DirPort when restarted
# This causes OnionOO to (correctly) reset its stability timer
# This issue will be fixed in 0.2.7.7 and 0.2.8.2
# Until then, the CUTOFFs below ensure a decent level of stability.
# Require fallbacks to have the same address and port for a set amount of time
#
# There was a bug in Tor 0.2.8.1-alpha and earlier where a relay temporarily
# submits a 0 DirPort when restarted.
# This causes OnionOO to (correctly) reset its stability timer.
# Affected relays should upgrade to Tor 0.2.8.7 or later, which has a fix
# for this issue.
ADDRESS_AND_PORT_STABLE_DAYS = 7
# We ignore relays that have been down for more than this period
MAX_DOWNTIME_DAYS = 0 if MUST_BE_RUNNING_NOW else 7
# What time-weighted-fraction of these flags must FallbackDirs
# Equal or Exceed?
CUTOFF_RUNNING = .95
CUTOFF_V2DIR = .95
CUTOFF_GUARD = .95
CUTOFF_RUNNING = .90
CUTOFF_V2DIR = .90
CUTOFF_GUARD = .90
# What time-weighted-fraction of these flags must FallbackDirs
# Equal or Fall Under?
# .00 means no bad exits
@@ -155,12 +187,19 @@ ONIONOO_SCALE_ONE = 999.
_FB_POG = 0.2
FALLBACK_PROPORTION_OF_GUARDS = None if OUTPUT_CANDIDATES else _FB_POG
# We want exactly 100 fallbacks for the initial release
# This gives us scope to add extra fallbacks to the list as needed
# Limit the number of fallbacks (eliminating lowest by advertised bandwidth)
MAX_FALLBACK_COUNT = None if OUTPUT_CANDIDATES else 100
# Emit a C #error if the number of fallbacks is below
MIN_FALLBACK_COUNT = 100
MAX_FALLBACK_COUNT = None if OUTPUT_CANDIDATES else 200
# Emit a C #error if the number of fallbacks is less than expected
MIN_FALLBACK_COUNT = 0 if OUTPUT_CANDIDATES else MAX_FALLBACK_COUNT*0.75
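As a quick sanity check of the non-candidate defaults above: 200 fallbacks are selected, and the generated C file gets an #error if fewer than 150 survive the filters:
assert 200 * 0.75 == 150.0  # MIN_FALLBACK_COUNT when MAX_FALLBACK_COUNT = 200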
# The maximum number of fallbacks on the same address, contact, or family
# With 200 fallbacks, this means each operator can see 1% of client bootstraps
# (The directory authorities used to see ~12% of client bootstraps each.)
MAX_FALLBACKS_PER_IP = 1
MAX_FALLBACKS_PER_IPV4 = MAX_FALLBACKS_PER_IP
MAX_FALLBACKS_PER_IPV6 = MAX_FALLBACKS_PER_IP
MAX_FALLBACKS_PER_CONTACT = 3
MAX_FALLBACKS_PER_FAMILY = 3
## Fallback Bandwidth Requirements
@@ -171,12 +210,12 @@ MIN_FALLBACK_COUNT = 100
EXIT_BANDWIDTH_FRACTION = 1.0
# If a single fallback's bandwidth is too low, it's pointless adding it
# We expect fallbacks to handle an extra 30 kilobytes per second of traffic
# We expect fallbacks to handle an extra 10 kilobytes per second of traffic
# Make sure they can support a hundred times the expected extra load
# (Use 102.4 to make it come out nicely in MB/s)
# (Use 102.4 to make it come out nicely in MByte/s)
# We convert this to a consensus weight before applying the filter,
# because all the bandwidth amounts are specified by the relay
MIN_BANDWIDTH = 102.4 * 30.0 * 1024.0
MIN_BANDWIDTH = 102.4 * 10.0 * 1024.0
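The new constant works out to exactly 1 MByte/s, matching the changelog entry above; a quick check:
assert 102.4 * 10.0 * 1024.0 == 1024.0 * 1024.0  # MIN_BANDWIDTH == 1 MByte/s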
# Clients will time out after 30 seconds trying to download a consensus
# So allow fallback directories half that to deliver a consensus
@@ -367,8 +406,8 @@ def onionoo_fetch(what, **kwargs):
params = kwargs
params['type'] = 'relay'
#params['limit'] = 10
params['first_seen_days'] = '%d-'%(ADDRESS_AND_PORT_STABLE_DAYS,)
params['last_seen_days'] = '-7'
params['first_seen_days'] = '%d-'%(ADDRESS_AND_PORT_STABLE_DAYS)
params['last_seen_days'] = '-%d'%(MAX_DOWNTIME_DAYS)
params['flag'] = 'V2Dir'
url = ONIONOO + what + '?' + urllib.urlencode(params)
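With the defaults above (ADDRESS_AND_PORT_STABLE_DAYS = 7, and MAX_DOWNTIME_DAYS = 7 when the DirPort checks are off), the resulting OnionOO URL looks roughly like this; parameter order depends on urllib.urlencode, and any fields passed in kwargs are appended, so this is illustrative only:
# https://onionoo.torproject.org/details?type=relay&first_seen_days=7-&last_seen_days=-7&flag=V2Dir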
@@ -497,6 +536,8 @@ class Candidate(object):
if (not 'effective_family' in details
or details['effective_family'] is None):
details['effective_family'] = []
if not 'platform' in details:
details['platform'] = None
details['last_changed_address_or_port'] = parse_ts(
details['last_changed_address_or_port'])
self._data = details
@@ -511,6 +552,7 @@ class Candidate(object):
self._compute_ipv6addr()
if not self.has_ipv6():
logging.debug("Failed to get an ipv6 address for %s."%(self._fpr,))
self._compute_version()
def _stable_sort_or_addresses(self):
# replace self._data['or_addresses'] with a stable ordering,
@@ -623,6 +665,59 @@ class Candidate(object):
self.ipv6orport = int(port)
return
def _compute_version(self):
# parse the version out of the platform string
# The platform looks like: "Tor 0.2.7.6 on Linux"
self._data['version'] = None
if self._data['platform'] is None:
return
# be tolerant of weird whitespacing, use a whitespace split
tokens = self._data['platform'].split()
for token in tokens:
vnums = token.split('.')
# if it's at least a.b.c.d, with potentially an -alpha-dev, -alpha, -rc
if (len(vnums) >= 4 and vnums[0].isdigit() and vnums[1].isdigit() and
vnums[2].isdigit()):
self._data['version'] = token
return
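A standalone sketch of the same parsing logic, for a platform string like the one named in the comment above:
tokens = 'Tor 0.2.9.4-alpha on Linux'.split()
for token in tokens:
  vnums = token.split('.')
  if (len(vnums) >= 4 and vnums[0].isdigit() and vnums[1].isdigit() and
      vnums[2].isdigit()):
    print token  # prints '0.2.9.4-alpha'
    break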
# From #20509
# bug #20499 affects versions from 0.2.9.1-alpha-dev to 0.2.9.4-alpha-dev
# and version 0.3.0.0-alpha-dev
# Exhaustive lists are hard to get wrong
STALE_CONSENSUS_VERSIONS = ['0.2.9.1-alpha-dev',
'0.2.9.2-alpha',
'0.2.9.2-alpha-dev',
'0.2.9.3-alpha',
'0.2.9.3-alpha-dev',
'0.2.9.4-alpha',
'0.2.9.4-alpha-dev',
'0.3.0.0-alpha-dev'
]
def is_valid_version(self):
# call _compute_version before calling this
# is the version of the relay a version we want as a fallback?
# checks both recommended versions and bug #20499 / #20509
#
# if the relay doesn't have a recommended version field, exclude the relay
if not self._data.has_key('recommended_version'):
logging.info('%s not a candidate: no recommended_version field',
self._fpr)
return False
if not self._data['recommended_version']:
logging.info('%s not a candidate: version not recommended', self._fpr)
return False
# if the relay doesn't have version field, exclude the relay
if not self._data.has_key('version'):
logging.info('%s not a candidate: no version field', self._fpr)
return False
if self._data['version'] in Candidate.STALE_CONSENSUS_VERSIONS:
logging.warning('%s not a candidate: version delivers stale consensuses',
self._fpr)
return False
return True
@staticmethod
def _extract_generic_history(history, which='unknown'):
# given a tree like this:
@@ -767,41 +862,42 @@ class Candidate(object):
self._badexit = self._avg_generic_history(badexit) / ONIONOO_SCALE_ONE
def is_candidate(self):
must_be_running_now = (PERFORM_IPV4_DIRPORT_CHECKS
or PERFORM_IPV6_DIRPORT_CHECKS)
if (must_be_running_now and not self.is_running()):
logging.info('%s not a candidate: not running now, unable to check ' +
'DirPort consensus download', self._fpr)
return False
if (self._data['last_changed_address_or_port'] >
self.CUTOFF_ADDRESS_AND_PORT_STABLE):
logging.info('%s not a candidate: changed address/port recently (%s)',
self._fpr, self._data['last_changed_address_or_port'])
return False
if self._running < CUTOFF_RUNNING:
logging.info('%s not a candidate: running avg too low (%lf)',
self._fpr, self._running)
return False
if self._v2dir < CUTOFF_V2DIR:
logging.info('%s not a candidate: v2dir avg too low (%lf)',
self._fpr, self._v2dir)
return False
if self._badexit is not None and self._badexit > PERMITTED_BADEXIT:
logging.info('%s not a candidate: badexit avg too high (%lf)',
self._fpr, self._badexit)
return False
# if the relay doesn't report a version, also exclude the relay
if (not self._data.has_key('recommended_version')
or not self._data['recommended_version']):
logging.info('%s not a candidate: version not recommended', self._fpr)
return False
if self._guard < CUTOFF_GUARD:
logging.info('%s not a candidate: guard avg too low (%lf)',
self._fpr, self._guard)
return False
if (not self._data.has_key('consensus_weight')
or self._data['consensus_weight'] < 1):
logging.info('%s not a candidate: consensus weight invalid', self._fpr)
try:
if (MUST_BE_RUNNING_NOW and not self.is_running()):
logging.info('%s not a candidate: not running now, unable to check ' +
'DirPort consensus download', self._fpr)
return False
if (self._data['last_changed_address_or_port'] >
self.CUTOFF_ADDRESS_AND_PORT_STABLE):
logging.info('%s not a candidate: changed address/port recently (%s)',
self._fpr, self._data['last_changed_address_or_port'])
return False
if self._running < CUTOFF_RUNNING:
logging.info('%s not a candidate: running avg too low (%lf)',
self._fpr, self._running)
return False
if self._v2dir < CUTOFF_V2DIR:
logging.info('%s not a candidate: v2dir avg too low (%lf)',
self._fpr, self._v2dir)
return False
if self._badexit is not None and self._badexit > PERMITTED_BADEXIT:
logging.info('%s not a candidate: badexit avg too high (%lf)',
self._fpr, self._badexit)
return False
# this function logs a message depending on which check fails
if not self.is_valid_version():
return False
if self._guard < CUTOFF_GUARD:
logging.info('%s not a candidate: guard avg too low (%lf)',
self._fpr, self._guard)
return False
if (not self._data.has_key('consensus_weight')
or self._data['consensus_weight'] < 1):
logging.info('%s not a candidate: consensus weight invalid', self._fpr)
return False
except BaseException as e:
logging.warning("Exception %s when checking if fallback is a candidate",
str(e))
return False
return True
@@ -1062,42 +1158,63 @@ class Candidate(object):
return True
return False
# report how long it takes to download a consensus from dirip:dirport
# log how long it takes to download a consensus from dirip:dirport
# returns True if the download failed, False if it succeeded within max_time
@staticmethod
def fallback_consensus_download_speed(dirip, dirport, nickname, max_time):
def fallback_consensus_download_speed(dirip, dirport, nickname, fingerprint,
max_time):
download_failed = False
downloader = DescriptorDownloader()
start = datetime.datetime.utcnow()
# some directory mirrors respond to requests in ways that hang python
# sockets, which is why we log this line here
logging.info('Initiating consensus download from %s (%s:%d).', nickname,
dirip, dirport)
logging.info('Initiating %sconsensus download from %s (%s:%d) %s.',
'microdesc ' if DOWNLOAD_MICRODESC_CONSENSUS else '',
nickname, dirip, dirport, fingerprint)
# there appears to be about 1 second of overhead when comparing stem's
# internal trace time and the elapsed time calculated here
TIMEOUT_SLOP = 1.0
start = datetime.datetime.utcnow()
try:
downloader.get_consensus(endpoints = [(dirip, dirport)],
timeout = (max_time + TIMEOUT_SLOP),
validate = True,
retries = 0,
fall_back_to_authority = False).run()
consensus = get_consensus(
endpoints = [(dirip, dirport)],
timeout = (max_time + TIMEOUT_SLOP),
validate = True,
retries = 0,
fall_back_to_authority = False,
document_handler = DocumentHandler.BARE_DOCUMENT,
microdescriptor = DOWNLOAD_MICRODESC_CONSENSUS
).run()[0]
end = datetime.datetime.utcnow()
time_since_expiry = (end - consensus.valid_until).total_seconds()
except Exception, stem_error:
end = datetime.datetime.utcnow()
logging.info('Unable to retrieve a consensus from %s: %s', nickname,
stem_error)
status = 'error: "%s"' % (stem_error)
level = logging.WARNING
download_failed = True
elapsed = (datetime.datetime.utcnow() - start).total_seconds()
if elapsed > max_time:
elapsed = (end - start).total_seconds()
if download_failed:
# keep the error failure status, and avoid using the variables
pass
elif elapsed > max_time:
status = 'too slow'
level = logging.WARNING
download_failed = True
elif (time_since_expiry > 0):
status = 'outdated consensus, expired %ds ago'%(int(time_since_expiry))
if time_since_expiry <= CONSENSUS_EXPIRY_TOLERANCE:
status += ', tolerating up to %ds'%(CONSENSUS_EXPIRY_TOLERANCE)
level = logging.INFO
else:
status += ', invalid'
level = logging.WARNING
download_failed = True
else:
status = 'ok'
level = logging.DEBUG
logging.log(level, 'Consensus download: %0.1fs %s from %s (%s:%d), ' +
logging.log(level, 'Consensus download: %0.1fs %s from %s (%s:%d) %s, ' +
'max download time %0.1fs.', elapsed, status, nickname,
dirip, dirport, max_time)
dirip, dirport, fingerprint, max_time)
return download_failed
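A hypothetical call to the check above (the address, nickname, and fingerprint are illustrative; CONSENSUS_DOWNLOAD_SPEED_MAX is the script's existing limit, half the 30-second client timeout per the comment earlier in the file):
download_failed = Candidate.fallback_consensus_download_speed(
  '203.0.113.1',                               # hypothetical DirPort IPv4
  80,                                          # hypothetical DirPort
  'examplerelay',                              # hypothetical nickname
  '0123456789ABCDEF0123456789ABCDEF01234567',  # hypothetical fingerprint
  CONSENSUS_DOWNLOAD_SPEED_MAX)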
# does this fallback download the consensus fast enough?
@@ -1109,12 +1226,14 @@ class Candidate(object):
ipv4_failed = Candidate.fallback_consensus_download_speed(self.dirip,
self.dirport,
self._data['nickname'],
self._fpr,
CONSENSUS_DOWNLOAD_SPEED_MAX)
if self.has_ipv6() and PERFORM_IPV6_DIRPORT_CHECKS:
# Clients assume the IPv6 DirPort is the same as the IPv4 DirPort
ipv6_failed = Candidate.fallback_consensus_download_speed(self.ipv6addr,
self.dirport,
self._data['nickname'],
self._fpr,
CONSENSUS_DOWNLOAD_SPEED_MAX)
return ((not ipv4_failed) and (not ipv6_failed))
@@ -1151,6 +1270,7 @@ class Candidate(object):
# /*
# nickname
# flags
# adjusted bandwidth, consensus weight
# [contact]
# [identical contact counts]
# */
@@ -1162,6 +1282,13 @@ class Candidate(object):
s += 'Flags: '
s += cleanse_c_multiline_comment(' '.join(sorted(self._data['flags'])))
s += '\n'
# this is an adjusted bandwidth, see calculate_measured_bandwidth()
bandwidth = self._data['measured_bandwidth']
weight = self._data['consensus_weight']
s += 'Bandwidth: %.1f MByte/s, Consensus Weight: %d'%(
bandwidth/(1024.0*1024.0),
weight)
s += '\n'
if self._data['contact'] is not None:
s += cleanse_c_multiline_comment(self._data['contact'])
if CONTACT_COUNT or CONTACT_BLACKLIST_COUNT:
@@ -1183,6 +1310,7 @@ class Candidate(object):
s += '\n'
s += '*/'
s += '\n'
return s
# output the fallback info C string for this fallback
# this is the text that would go after FallbackDir in a torrc
@@ -1251,7 +1379,8 @@ class CandidateList(dict):
d = fetch('details',
fields=('fingerprint,nickname,contact,last_changed_address_or_port,' +
'consensus_weight,advertised_bandwidth,or_addresses,' +
'dir_address,recommended_version,flags,effective_family'))
'dir_address,recommended_version,flags,effective_family,' +
'platform'))
logging.debug('Loading details document done.')
if not 'relays' in d: raise Exception("No relays found in document.")
@@ -1297,10 +1426,9 @@ class CandidateList(dict):
self.fallbacks.sort(key=lambda f: f._data['measured_bandwidth'],
reverse=True)
# sort fallbacks by their fingerprint, lowest to highest
# this is useful for stable diffs of fallback lists
def sort_fallbacks_by_fingerprint(self):
self.fallbacks.sort(key=lambda f: f._fpr)
# sort fallbacks by the data field data_field, lowest to highest
def sort_fallbacks_by(self, data_field):
self.fallbacks.sort(key=lambda f: f._data[data_field])
@staticmethod
def load_relaylist(file_name):
@@ -1429,8 +1557,8 @@ class CandidateList(dict):
# the bandwidth we log here is limited by the relay's consensus weight
as well as its advertised bandwidth. See set_measured_bandwidth
# for details
logging.info('%s not a candidate: bandwidth %.1fMB/s too low, must ' +
'be at least %.1fMB/s', f._fpr,
logging.info('%s not a candidate: bandwidth %.1fMByte/s too low, ' +
'must be at least %.1fMByte/s', f._fpr,
f._data['measured_bandwidth']/(1024.0*1024.0),
MIN_BANDWIDTH/(1024.0*1024.0))
self.fallbacks = above_min_bw_fallbacks
@@ -1470,49 +1598,85 @@ class CandidateList(dict):
else:
return None
# does exclusion_list contain attribute?
# return a new bag suitable for storing attributes
@staticmethod
def attribute_new():
return dict()
# get the count of attribute in attribute_bag
# if attribute is None or the empty string, return 0
@staticmethod
def attribute_count(attribute, attribute_bag):
if attribute is None or attribute == '':
return 0
if attribute not in attribute_bag:
return 0
return attribute_bag[attribute]
# does attribute_bag contain more than max_count instances of attribute?
# if so, return False
# if not, return True
# if attribute is None or the empty string, always return True
# if attribute is None or the empty string, or max_count is invalid,
# always return True
@staticmethod
def allow(attribute, exclusion_list):
if attribute is None or attribute == '':
def attribute_allow(attribute, attribute_bag, max_count=1):
if attribute is None or attribute == '' or max_count <= 0:
return True
elif attribute in exclusion_list:
elif CandidateList.attribute_count(attribute, attribute_bag) >= max_count:
return False
else:
return True
# make sure there is only one fallback per IPv4 address, and per IPv6 address
# add attribute to attribute_bag, incrementing the count if it is already
# present
# if attribute is None or the empty string, or count is invalid,
# do nothing
@staticmethod
def attribute_add(attribute, attribute_bag, count=1):
if attribute is None or attribute == '' or count <= 0:
return
attribute_bag.setdefault(attribute, 0)
attribute_bag[attribute] += count
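A short usage sketch of the attribute-bag helpers above (the address is hypothetical):
bag = CandidateList.attribute_new()                       # {}
CandidateList.attribute_add('192.0.2.1', bag)             # {'192.0.2.1': 1}
print CandidateList.attribute_count('192.0.2.1', bag)     # 1
print CandidateList.attribute_allow('192.0.2.1', bag, 1)  # False: at the limit
print CandidateList.attribute_allow('192.0.2.1', bag, 3)  # True
print CandidateList.attribute_allow(None, bag)            # True: never limited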
# make sure there are only MAX_FALLBACKS_PER_IP fallbacks per IPv4 address,
# and per IPv6 address
# there is only one IPv4 address on each fallback: the IPv4 DirPort address
# (we choose the IPv4 ORPort which is on the same IPv4 as the DirPort)
# there is at most one IPv6 address on each fallback: the IPv6 ORPort address
# we try to match the IPv4 ORPort, but will use any IPv6 address if needed
# (clients assume the IPv6 DirPort is the same as the IPv4 DirPort, but
# typically only use the IPv6 ORPort)
# (clients only use the IPv6 ORPort)
# if there is no IPv6 address, only the IPv4 address is checked
# return the number of candidates we excluded
def limit_fallbacks_same_ip(self):
ip_limit_fallbacks = []
ip_list = []
ip_list = CandidateList.attribute_new()
for f in self.fallbacks:
if (CandidateList.allow(f.dirip, ip_list)
and CandidateList.allow(f.ipv6addr, ip_list)):
if (CandidateList.attribute_allow(f.dirip, ip_list,
MAX_FALLBACKS_PER_IPV4)
and CandidateList.attribute_allow(f.ipv6addr, ip_list,
MAX_FALLBACKS_PER_IPV6)):
ip_limit_fallbacks.append(f)
ip_list.append(f.dirip)
CandidateList.attribute_add(f.dirip, ip_list)
if f.has_ipv6():
ip_list.append(f.ipv6addr)
elif not CandidateList.allow(f.dirip, ip_list):
logging.info('Eliminated %s: already have fallback on IPv4 %s'%(
f._fpr, f.dirip))
elif f.has_ipv6() and not CandidateList.allow(f.ipv6addr, ip_list):
logging.info('Eliminated %s: already have fallback on IPv6 %s'%(
f._fpr, f.ipv6addr))
CandidateList.attribute_add(f.ipv6addr, ip_list)
elif not CandidateList.attribute_allow(f.dirip, ip_list,
MAX_FALLBACKS_PER_IPV4):
logging.info('Eliminated %s: already have %d fallback(s) on IPv4 %s'
%(f._fpr, CandidateList.attribute_count(f.dirip, ip_list),
f.dirip))
elif (f.has_ipv6() and
not CandidateList.attribute_allow(f.ipv6addr, ip_list,
MAX_FALLBACKS_PER_IPV6)):
logging.info('Eliminated %s: already have %d fallback(s) on IPv6 %s'
%(f._fpr, CandidateList.attribute_count(f.ipv6addr,
ip_list),
f.ipv6addr))
original_count = len(self.fallbacks)
self.fallbacks = ip_limit_fallbacks
return original_count - len(self.fallbacks)
# make sure there is only one fallback per ContactInfo
# make sure there are only MAX_FALLBACKS_PER_CONTACT fallbacks for each
# ContactInfo
# if there is no ContactInfo, allow the fallback
# this check can be gamed by providing no ContactInfo, or by setting the
# ContactInfo to match another fallback
@@ -1520,37 +1684,45 @@ class CandidateList(dict):
# go down at similar times, its usefulness outweighs the risk
def limit_fallbacks_same_contact(self):
contact_limit_fallbacks = []
contact_list = []
contact_list = CandidateList.attribute_new()
for f in self.fallbacks:
if CandidateList.allow(f._data['contact'], contact_list):
if CandidateList.attribute_allow(f._data['contact'], contact_list,
MAX_FALLBACKS_PER_CONTACT):
contact_limit_fallbacks.append(f)
contact_list.append(f._data['contact'])
CandidateList.attribute_add(f._data['contact'], contact_list)
else:
logging.info(('Eliminated %s: already have fallback on ' +
'ContactInfo %s')%(f._fpr, f._data['contact']))
logging.info(
'Eliminated %s: already have %d fallback(s) on ContactInfo %s'
%(f._fpr, CandidateList.attribute_count(f._data['contact'],
contact_list),
f._data['contact']))
original_count = len(self.fallbacks)
self.fallbacks = contact_limit_fallbacks
return original_count - len(self.fallbacks)
# make sure there is only one fallback per effective family
# make sure there are only MAX_FALLBACKS_PER_FAMILY fallbacks per effective
# family
# if there is no family, allow the fallback
# this check can't be gamed, because we use effective family, which ensures
# mutual family declarations
# we use effective family, which ensures mutual family declarations
# but the check can be gamed by not declaring a family at all
# if any indirect families exist, the result depends on the order in which
# fallbacks are sorted in the list
def limit_fallbacks_same_family(self):
family_limit_fallbacks = []
fingerprint_list = []
fingerprint_list = CandidateList.attribute_new()
for f in self.fallbacks:
if CandidateList.allow(f._fpr, fingerprint_list):
if CandidateList.attribute_allow(f._fpr, fingerprint_list,
MAX_FALLBACKS_PER_FAMILY):
family_limit_fallbacks.append(f)
fingerprint_list.append(f._fpr)
fingerprint_list.extend(f._data['effective_family'])
CandidateList.attribute_add(f._fpr, fingerprint_list)
for family_fingerprint in f._data['effective_family']:
CandidateList.attribute_add(family_fingerprint, fingerprint_list)
else:
# technically, we already have a fallback with this fallback in its
# effective family
logging.info('Eliminated %s: already have fallback in effective ' +
'family'%(f._fpr))
# we already have a fallback with this fallback in its effective
# family
logging.info(
'Eliminated %s: already have %d fallback(s) in effective family'
%(f._fpr, CandidateList.attribute_count(f._fpr, fingerprint_list)))
original_count = len(self.fallbacks)
self.fallbacks = family_limit_fallbacks
return original_count - len(self.fallbacks)
@@ -1878,8 +2050,8 @@ class CandidateList(dict):
min_bw = min_fb._data['measured_bandwidth']
max_fb = self.fallback_max()
max_bw = max_fb._data['measured_bandwidth']
s += 'Bandwidth Range: %.1f - %.1f MB/s'%(min_bw/(1024.0*1024.0),
max_bw/(1024.0*1024.0))
s += 'Bandwidth Range: %.1f - %.1f MByte/s'%(min_bw/(1024.0*1024.0),
max_bw/(1024.0*1024.0))
s += '\n'
s += '*/'
if fallback_count < MIN_FALLBACK_COUNT:
@@ -1985,12 +2157,14 @@ def list_fallbacks():
for s in fetch_source_list():
print describe_fetch_source(s)
# sort the list differently depending on why we've created it:
# if we're outputting the final fallback list, sort by fingerprint
# this makes diffs much more stable
# otherwise, leave sorted by bandwidth, which allows operators to be
# contacted in priority order
if not OUTPUT_CANDIDATES:
candidates.sort_fallbacks_by_fingerprint()
# otherwise, if we're trying to find a bandwidth cutoff, or we want to
# contact operators in priority order, sort by bandwidth (not yet
# implemented)
# otherwise, if we're contacting operators, sort by contact
candidates.sort_fallbacks_by(OUTPUT_SORT_FIELD)
for x in candidates.fallbacks:
print x.fallbackdir_line(candidates.fallbacks, prefilter_fallbacks)