Since svn r1475/git 5b6099e8 in tor-0.0.6, we have responded to the
exhaustion of all 65535 stream IDs on a circuit by marking that
circuit for close. That's not the right response. Instead, we
should mark the circuit as "too dirty for new streams".
Of course, this isn't really right either. If somebody has managed
to cram 65535 streams onto a circuit, the circuit is probably not
going to work well for any of them, so perhaps we should instead
limit the number of concurrent streams on an origin circuit.
Closing the stream in this case is probably also the wrong thing to
do, but fixing that can wait.
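Below is a minimal sketch of the intended behavior, assuming a
deliberately simplified circuit structure; the names used here
(circuit_t's fields, get_unique_stream_id(), attach_stream()) are
illustrative stand-ins, not Tor's actual API:

    #include <stdint.h>
    #include <stdio.h>

    typedef struct circuit_t {
      uint32_t n_stream_ids_used;          /* how many IDs are gone */
      unsigned unusable_for_new_streams:1; /* set once we run out */
    } circuit_t;

    /* Return a fresh nonzero stream ID, or 0 on exhaustion.
     * (Grossly simplified: real code must also skip IDs still in
     * use on the circuit.) */
    static uint16_t
    get_unique_stream_id(circuit_t *circ)
    {
      if (circ->n_stream_ids_used >= 65535)
        return 0;                   /* IDs 1..65535; 0 is reserved */
      return (uint16_t)++circ->n_stream_ids_used;
    }

    static int
    attach_stream(circuit_t *circ)
    {
      uint16_t id = get_unique_stream_id(circ);
      if (id == 0) {
        /* Old: circuit_mark_for_close(circ, ...) -- kills every
         * stream on the circuit.
         * New: existing streams keep working; we just never pick
         * this circuit for a new stream again. */
        circ->unusable_for_new_streams = 1;
        return -1;
      }
      return 0;
    }

    int
    main(void)
    {
      circuit_t circ = { 0, 0 };
      int i, failures = 0;
      for (i = 0; i < 70000; i++)
        failures += attach_stream(&circ) < 0;
      printf("failures=%d unusable=%u\n", failures,
             circ.unusable_for_new_streams);
      return 0;
    }

The point is that ID exhaustion should degrade the circuit for new
streams without tearing down the streams that already work.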
We fixed bug 539 (where directories would say "503" but send data
anyway) back in 0.2.0.16-alpha/0.1.2.19. Because most directory
versions at the time were affected, we added a workaround that
examined the contents of 503 replies in case they contained usable
data. But now that such routers are nonexistent, we can remove this
code. (Even if somebody fired up an 0.1.2.19 directory cache today,
it would still be fine to ignore the data in its erroneous 503
replies.)
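For illustration, here is roughly what the simplified handling
looks like once the workaround is gone. This is a hedged sketch:
handle_dir_response() and its signature are hypothetical, not the
real directory-client code.

    #include <stddef.h>
    #include <stdio.h>

    /* With no live caches that send data after a 503, the body of
     * a 503 reply can simply be ignored. */
    static int
    handle_dir_response(int status_code, const char *body,
                        size_t body_len)
    {
      if (status_code == 503) {
        /* Removed workaround: previously we inspected body/body_len
         * here, because pre-0.2.0.16-alpha caches sent data anyway. */
        (void)body;
        (void)body_len;
        return -1;          /* directory busy; try another cache */
      }
      /* Handle 200 and other status codes here. */
      return 0;
    }

    int
    main(void)
    {
      printf("%d\n", handle_dir_response(503, "stale data", 10));
      return 0;
    }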
The first was genuinely impossible, I think: it could only happen
when the amount we read differed from the amount we wanted to read
by more than INT_MAX.
The second was just very unlikely: it would have given incorrect
results to the controller if you somehow wrote or read more than
4GB on one edge conn in one second. That one is a bugfix on
0.1.2.8-beta.
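To make the second failure mode concrete, here is a hedged sketch
with hypothetical names (edge_stats_t and note_io() are not the
real code): a signed or 32-bit per-second counter wraps on a >4GB
burst, while a 64-bit accumulator reports the right number to the
controller.

    #include <stdint.h>
    #include <stdio.h>

    typedef struct edge_stats_t {
      uint64_t n_read_in_second;    /* wide enough that a >4GB     */
      uint64_t n_written_in_second; /* second cannot wrap it       */
    } edge_stats_t;

    static void
    note_io(edge_stats_t *st, uint64_t n_read, uint64_t n_written)
    {
      st->n_read_in_second += n_read;
      st->n_written_in_second += n_written;
    }

    int
    main(void)
    {
      edge_stats_t st = { 0, 0 };
      /* Five gigabytes in one second: a 32-bit counter would wrap
       * to ~1GB and the controller would see a bogus value. */
      note_io(&st, 5ULL * 1024 * 1024 * 1024, 0);
      printf("read this second: %llu\n",
             (unsigned long long)st.n_read_in_second);
      return 0;
    }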
In afe414 (tor-0.1.0.1-rc~173), when we moved to
connection_edge_end_errno(), we used it to handle errors from
connection_connect(). That's not so good: by the time
connection_connect() returns, the socket is no longer set, and
we're supposed to look at the socket_errno value that
connection_connect() hands back instead. So do what we should have
done all along, and use the socket_errno value we get from
connection_connect().
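A hedged sketch of the corrected pattern follows; my_connect(),
end_with_reason(), and the errno_to_stream_end_reason() stub below
are simplified stand-ins for the real functions. The point is to
capture the connect-time error through the out-parameter and report
that, instead of asking a socket that has already been closed.

    #include <errno.h>
    #include <stdio.h>

    /* Stand-in for connection_connect(): returns -1 on failure and
     * stores the connect-time errno in *socket_error.  By the time
     * it returns, the socket itself has been cleaned up. */
    static int
    my_connect(const char *addr, int port, int *socket_error)
    {
      (void)addr; (void)port;
      *socket_error = ECONNREFUSED;  /* pretend the connect failed */
      return -1;
    }

    static int
    errno_to_stream_end_reason(int e)
    {
      return e == ECONNREFUSED ? 7   /* hypothetical CONNECTREFUSED */
                               : 1;  /* hypothetical MISC */
    }

    static void
    end_with_reason(int reason)
    {
      printf("ending stream, reason=%d\n", reason);
    }

    int
    main(void)
    {
      int socket_error = 0;
      if (my_connect("192.0.2.1", 80, &socket_error) < 0) {
        /* Wrong: consulting errno (or the socket) here -- the
         * error from connect() may already be gone.
         * Right: use the value the connect wrapper saved for us. */
        end_with_reason(errno_to_stream_end_reason(socket_error));
      }
      return 0;
    }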
Autoconf adds -g -O2 to CFLAGS by default, so adding those flags
ourselves is not required. Doing so also caused a warning with
clang for every source file, so remove them here. Fixes the last
issue of ticket 2696.
Ian's original message:
The current code actually correctly handles queued data at the
Exit; if there is queued data in an EXIT_CONN_STATE_CONNECTING
stream, that data will be immediately sent when the connection
succeeds. If the connection fails, the data will be correctly
ignored and freed. The problem with the current server code is
that the server currently drops DATA cells on streams in the
EXIT_CONN_STATE_CONNECTING state. Also, if you try to queue data
in the EXIT_CONN_STATE_RESOLVING state, bad things happen because
streams in that state don't yet have conn->write_event set, and so
some existing sanity checks (which assume that any stream with
queued data is at least potentially writable) are no longer sound.
The solution is to simply not drop received DATA cells while in
the EXIT_CONN_STATE_CONNECTING state. Also do not send SENDME
cells in this state, so that the OP cannot send more than one
window's worth of data to be queued at the Exit. Finally, patch
the sanity checks so that streams in the EXIT_CONN_STATE_RESOLVING
state that have buffered data can pass.
[...] Here is a simple patch. It seems to work with both regular
streams and hidden services, but there may be other corner cases
I'm not aware of. (Do streams used for directory fetches, hidden
services, etc. take a different code path?)
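Here is a compact sketch of the three changes Ian describes, in
deliberately simplified code. The state names echo Tor's, but
queue_data(), consider_sending_sendme(), and the structure layout
are illustrative assumptions, not the actual patch:

    #include <stddef.h>
    #include <stdio.h>

    enum conn_state {
      EXIT_CONN_STATE_RESOLVING,
      EXIT_CONN_STATE_CONNECTING,
      EXIT_CONN_STATE_OPEN
    };

    typedef struct edge_conn_t {
      enum conn_state state;
      size_t outbuf_len;
      void *write_event;        /* still NULL while resolving */
    } edge_conn_t;

    static void
    queue_data(edge_conn_t *conn, const char *data, size_t n)
    {
      (void)data;
      conn->outbuf_len += n;    /* buffered until connect finishes */
    }

    static void
    consider_sending_sendme(edge_conn_t *conn)
    {
      (void)conn;               /* window bookkeeping elided */
    }

    /* Change 1: queue, rather than drop, DATA cells that arrive
     * before the TCP connection is open. */
    static void
    exit_handle_data_cell(edge_conn_t *conn, const char *data,
                          size_t n)
    {
      queue_data(conn, data, n);
      /* Change 2: no stream-level SENDMEs until the stream is
       * open, so the OP can queue at most one window's worth. */
      if (conn->state == EXIT_CONN_STATE_OPEN)
        consider_sending_sendme(conn);
    }

    /* Change 3: relax the sanity check -- a RESOLVING stream has
     * no write_event yet, but may legitimately hold buffered data. */
    static int
    buffered_data_ok(const edge_conn_t *conn)
    {
      return conn->write_event != NULL ||
             conn->state == EXIT_CONN_STATE_RESOLVING;
    }

    int
    main(void)
    {
      edge_conn_t conn = { EXIT_CONN_STATE_RESOLVING, 0, NULL };
      exit_handle_data_cell(&conn, "hello", 5);  /* no SENDME yet */
      printf("queued=%zu sane=%d\n", conn.outbuf_len,
             buffered_data_ok(&conn));
      return 0;
    }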
Right now, we consider sending stream-level SENDME cells only when
we have completely flushed a connection_edge's outbuf, or when it
sends us a DATA cell. Neither of these is ideal for throughput.
This patch changes the behavior so we now call
connection_edge_consider_sending_sendme when we flush _some_ data from
an edge outbuf.
Fix for bug 2756; bugfix on svn r152.
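A minimal sketch of the change follows; edge_flushed() and the
struct are simplified, hypothetical stand-ins for the real
connection machinery:

    #include <stddef.h>
    #include <stdio.h>

    typedef struct edge_conn_t {
      size_t outbuf_len;        /* bytes still waiting to flush */
    } edge_conn_t;

    static void
    consider_sending_sendme(edge_conn_t *conn)
    {
      printf("considering SENDME (outbuf=%zu)\n", conn->outbuf_len);
    }

    /* Called whenever n_flushed bytes leave the outbuf. */
    static void
    edge_flushed(edge_conn_t *conn, size_t n_flushed)
    {
      conn->outbuf_len -= n_flushed;
      /* Old: we checked only once conn->outbuf_len reached 0.
       * New: any partial flush frees up window; check every time. */
      if (n_flushed > 0)
        consider_sending_sendme(conn);
    }

    int
    main(void)
    {
      edge_conn_t conn = { 4096 };
      edge_flushed(&conn, 1024); /* partial flush now triggers a check */
      edge_flushed(&conn, 3072);
      return 0;
    }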