The first was genuinely impossible, I think: it could only happen
when the amount we read differed from the amount we wanted to read
by more than INT_MAX.
The second is just very unlikely: it would give incorrect results to
the controller if you somehow wrote or read more than 4GB on one
edge conn in one second. That one is a bugfix on 0.1.2.8-beta.
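As an illustration only (a standalone sketch, not Tor's actual code),
this is how a 32-bit per-second byte counter wraps once more than 4GB
pass through it in one interval, producing a wrong value:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
      /* Hypothetical per-second counters, for illustration only. */
      uint32_t n_written_32 = 0;
      uint64_t n_written_64 = 0;
      uint64_t bytes = UINT64_C(5) * 1024 * 1024 * 1024; /* 5GB in one second */

      n_written_32 += (uint32_t)bytes; /* wraps; reports about 1GB */
      n_written_64 += bytes;           /* correct; reports 5GB */

      printf("32-bit counter: %u\n64-bit counter: %llu\n",
             (unsigned)n_written_32, (unsigned long long)n_written_64);
      return 0;
    }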
In afe414 (tor-0.1.0.1-rc~173), when we moved to
connection_edge_end_errno(), we used it in handling errors from
connection_connect(). That's not so good: by the time
connection_connect() returns, the connection's socket is no longer
set, and we're supposed to be looking at the socket_errno value that
connection_connect() hands back instead. So do what we should've done
all along, and use the socket_errno value we get from
connection_connect().
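A self-contained sketch of the principle (hypothetical names, not the
real Tor prototypes): capture the error where the connect helper hands
it back, instead of re-reading errno after the socket is gone:

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    /* Pretend connect helper: on failure it forgets the socket, but
     * hands the error back through *socket_error. */
    static int
    try_connect(int *socket_error)
    {
      errno = ECONNREFUSED;  /* the connect() failure */
      *socket_error = errno; /* captured while it is still meaningful */
      errno = 0;             /* cleanup clobbers errno; socket is gone */
      return -1;
    }

    int
    main(void)
    {
      int socket_error = 0;
      if (try_connect(&socket_error) < 0) {
        /* Wrong: errno has been clobbered by now. */
        printf("errno says:        %s\n", strerror(errno));
        /* Right: use the value the connect helper handed back. */
        printf("socket_error says: %s\n", strerror(socket_error));
      }
      return 0;
    }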
Autoconf adds -g -O2 to CFLAGS by default, so adding those flags
ourselves is redundant. Doing so also caused a clang warning for every
source file, so remove them here. Fixes the last issue of ticket 2696.
Right now, we only consider sending stream-level SENDME cells when we
have completely flushed a connection_edge's outbuf, or when it sends
us a DATA cell. Neither of these is ideal for throughput.
This patch changes the behavior so we now call
connection_edge_consider_sending_sendme when we flush _some_ data from
an edge outbuf.
Fix for bug 2756; bugfix on svn r152.
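A simplified sketch of the new flow (hypothetical names, not the actual
Tor flush path): the SENDME check now runs whenever some bytes leave
the outbuf, not only once the outbuf is empty:

    #include <stdio.h>

    struct edge_conn {
      int outbuf_len; /* bytes still waiting in the outbuf */
    };

    /* Stand-in for connection_edge_consider_sending_sendme(). */
    static void
    consider_sending_sendme(const struct edge_conn *conn)
    {
      printf("considering stream SENDME (outbuf_len=%d)\n", conn->outbuf_len);
    }

    /* Flush up to max_bytes from the outbuf; return how many went out. */
    static int
    flush_some(struct edge_conn *conn, int max_bytes)
    {
      int n = conn->outbuf_len < max_bytes ? conn->outbuf_len : max_bytes;
      conn->outbuf_len -= n;
      return n;
    }

    int
    main(void)
    {
      struct edge_conn conn = { 4096 };
      while (conn.outbuf_len > 0) {
        int n = flush_some(&conn, 1000);
        /* Old: only consider a SENDME once outbuf_len reaches 0.
         * New: consider one whenever we flushed _some_ data. */
        if (n > 0)
          consider_sending_sendme(&conn);
      }
      return 0;
    }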
Name the magic value "10" rather than re-deriving it.
Comment more.
Use the pattern that works for periodic timers, not the pattern that
doesn't work. ;)
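For reference, a guess at the kind of periodic-timer pattern meant
here (a static "next run" timestamp checked against now); the constant
and function names below are made up for illustration:

    #include <stdio.h>
    #include <time.h>

    #define CHECK_INTERVAL 10 /* the named value instead of a bare "10" */

    static void
    maybe_do_periodic_work(time_t now)
    {
      static time_t time_to_run = 0;
      if (time_to_run <= now) {
        time_to_run = now + CHECK_INTERVAL; /* schedule the next run first */
        printf("running periodic work at %ld\n", (long)now);
      }
    }

    int
    main(void)
    {
      time_t start = time(NULL);
      int i;
      for (i = 0; i < 25; ++i)
        maybe_do_periodic_work(start + i);
      return 0;
    }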
It is important to verify a relay's uptime claim instead of just
trusting it; otherwise it becomes too easy to blackhole a specific
hidden service. rephist already has data available that we can use here.
Bugfix on 0.2.0.10-alpha.
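One plausible form of the check, as a standalone sketch with made-up
names (claimed_uptime, observed_uptime); the real code consults rephist:

    #include <stdio.h>

    int
    main(void)
    {
      /* What the relay claims in its descriptor. */
      long claimed_uptime = 60L * 60 * 24 * 30;   /* "30 days" */
      /* What our own history (rephist) has actually observed. */
      long observed_uptime = 60L * 60 * 2;        /* 2 hours */

      /* Don't credit a relay with more uptime than we've seen it have. */
      long uptime = claimed_uptime < observed_uptime
                      ? claimed_uptime : observed_uptime;
      printf("uptime used for the decision: %ld seconds\n", uptime);
      return 0;
    }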
- When compiling with clang (2.9 or lower), do not enable
  -Wnormalized=id or -Woverride-init when --enable-gcc-warnings
  or --enable-gcc-warnings-advisory is set, since clang does not
  support these options.