There is a bug that causes busy loops in Libevent and infinite loops
in the Shadow simulator. The bug is triggered by a connection that is
marked for close and held open so that it can flush, but that is rate
limited (its token bucket is empty) and therefore cannot flush.
This commit fixes the bug. Details are below.
This currently happens in the read and write callbacks when the
active socket is marked for close. In that case, Tor does not actually
try to complete the read or write (those functions return early when
the connection is marked), but instead tries to clear the connection
with conn_close_if_marked(). Tor will not close a marked connection
that still contains data: the data must be flushed first. The bug
occurs when this flush cannot happen because the connection is rate
limited (its write token bucket is empty).
The fix is to detect when rate limiting prevents a marked connection
from flushing properly. In that case, the connection is flagged as
read/write_blocked_on_bandwidth and its read/write events are
de-registered from Libevent. When the token bucket gets refilled, the
refill code checks the associated read/write_blocked_on_bandwidth
flags and adds the events back to Libevent, causing them to fire. This
time, the connection is properly flushed and closed.
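A minimal sketch of the detection half follows; the type and helper
names are simplified stand-ins, not Tor's actual identifiers.

    #include <event2/event.h>

    typedef struct conn_t {
      struct event *read_event, *write_event;
      int read_blocked_on_bandwidth;
      int write_blocked_on_bandwidth;
      long write_bucket;          /* write tokens currently available */
    } conn_t;

    /* Called when a marked connection still has data to flush but the
     * rate limiter will not let any bytes out. */
    static void
    block_marked_conn_on_bandwidth(conn_t *conn)
    {
      if (conn->write_bucket <= 0) {
        /* Remember why we are blocked... */
        conn->read_blocked_on_bandwidth = 1;
        conn->write_blocked_on_bandwidth = 1;
        /* ...and stop Libevent from re-firing events that we cannot
         * act on until the bucket is refilled. */
        event_del(conn->read_event);
        event_del(conn->write_event);
      }
    }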
The reason that both the read and write events are de-registered when
the marked connection cannot flush is that both result in the same
behavior: on a marked connection, neither event will ever again do an
actual read or write; the events are only useful to trigger the flush
and close the connection. By setting the associated
read/write_blocked_on_bandwidth flags, we ensure that the events will
get added back to Libevent and that the connection will be properly
flushed and closed.
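The refill half of that hand-off might look roughly like this, again
as a hedged sketch with simplified, hypothetical names:

    #include <event2/event.h>

    typedef struct conn_t {
      struct event *read_event, *write_event;
      int read_blocked_on_bandwidth;
      int write_blocked_on_bandwidth;
    } conn_t;

    /* Called for each blocked connection once its token bucket has
     * been refilled: re-register the events so that the pending
     * flush-and-close finally runs. */
    static void
    unblock_conn_after_refill(conn_t *conn)
    {
      if (conn->read_blocked_on_bandwidth) {
        conn->read_blocked_on_bandwidth = 0;
        event_add(conn->read_event, NULL);
      }
      if (conn->write_blocked_on_bandwidth) {
        conn->write_blocked_on_bandwidth = 0;
        event_add(conn->write_event, NULL);
      }
    }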
Why is this important? Every Shadow event occurs at a discrete time
instant. If Tor does not properly deregister Libevent events that fire
but result in Tor doing essentially nothing, Libevent will fire the
same event over and over. In Shadow this means an infinite loop;
outside of Shadow it means wasted CPU cycles.
This is a feature removal: we no longer fake any ciphersuite other
than the not-really-standard SSL_RSA_FIPS_WITH_3DES_EDE_CBC_SHA
(0xfeff). This change will let servers rely on our actually
supporting what we claim to support, and thereby let Tor migrate to
better TLS ciphersuites.
As a drawback, Tor instances that use old OpenSSL versions, or
OpenSSL builds with some ciphers disabled, will no longer give the
"firefox" cipher list.
Manually removed the range 0.116.0.0 to 0.119.255.255, which Maxmind
says is assigned to AT. This is very likely a bug in their database,
because 0.0.0.0/8 is a reserved range.
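Assuming the geoip file's comma-separated numeric range format (an
assumption about the file as shipped at the time), the removed entry
would have looked roughly like this, where 7602176 and 7864319 are the
integer forms of 0.116.0.0 and 0.119.255.255:

    7602176,7864319,AT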
From what I can tell, this configuration is usually a mistake, and
leads people to think that all their traffic is getting proxied when
in fact practically none of it is. Resolves the issue behind "bug"
4663.
First, specify -Werror when we are testing each option; if it causes
a warning to appear, we shouldn't be adding it.
Second, do not attempt to add these options until after we have found
the libraries we want. Previously, I would hit a bug where the linker
hardening options worked fine when we weren't linking against
anything, but failed completely once we added OpenSSL or Libevent.
Nearly everywhere, we end options with "(Default: foo)". But in a
few places, we inserted an extra period before or after the closing
parenthesis, and in a few other places we said "(Defaults to foo)".
Let's not do that.
The function is not guaranteed to NUL-terminate its output. It *is*,
however, guaranteed not to generate more than two bytes per multibyte
character (plus the terminating NUL), so the general approach I'm
taking is to try to allocate enough space, AND to manually add a NUL
at the end of each buffer just in case I screwed up the "enough
space" calculation.
Fixes bug 5909.
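A minimal sketch of that defensive pattern, using the standard
wcstombs() as a stand-in for the function in question (that it is
wcstombs is an assumption here):

    #include <stdlib.h>
    #include <wchar.h>

    static char *
    convert_with_forced_nul(const wchar_t *input)
    {
      /* Worst case per the guarantee above: two output bytes per
       * character, plus the terminating NUL. */
      size_t buflen = wcslen(input) * 2 + 1;
      char *buf = malloc(buflen);
      if (!buf)
        return NULL;
      if (wcstombs(buf, input, buflen) == (size_t)-1) {
        free(buf);              /* invalid wide character: give up */
        return NULL;
      }
      buf[buflen - 1] = '\0';   /* never trust the conversion to
                                 * NUL-terminate for us */
      return buf;
    }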
This feature can make Tor relays less identifiable by their use of the
mod_ssl DH group, but at the cost of some usability (#4721) and bridge
tracing (#6087) regressions.
We should try to turn this on by default again if we find that the
mod_ssl group is uncommon and/or we move to a different DH group size
(see #6088). Before we can do so, we need a fix for bugs #6087 and
#4721.
Resolves ticket #5598 for now.