A v0 HS authority stores v0 HS descriptors in the same descriptor
cache that its HS client functionality uses. Thus, if the HS
authority operator clears its client HS descriptor cache, ALL v0
HS descriptors will be lost. That would be bad.
If the user sent a SIGNAL NEWNYM command after we fetched a rendezvous
descriptor, while we were building the introduction-point circuit, we
would give up entirely on trying to connect to the hidden service.
Original patch by rransom, slightly edited to go into 0.2.1.
Previously, it would remove every trackhostexits-derived mapping
*from* xyz.<exitname>.exit; it was supposed to remove every
trackhostexits-derived mapping *to* xyz.<exitname>.exit.
Bugfix on 0.2.0.20-rc: fixes an XXX020 added while staring at bug-1090
issues.
Now we believe it to be the case that we never build a circuit for our
stream that has an unsuitable exit, so we'll never need to use such
a circuit. The risk is that we have some code that builds the circuit,
but now we refuse to use it, meaning we just build a bazillion circuits
and ignore them all.
This looked at first like another fun way around our node selection
logic: if we had introduction circuits, and we wound up building too
many, we would turn extras into general-purpose circuits. But when we
did so, we wouldn't necessarily check whether the general-purpose
circuits conformed to our node constraints. For example, the last
node could totally be in ExcludedExitNodes and we wouldn't have cared...
...except that the circuit should already be internal, so it won't get user
streams attached to it, so the transition should generally be allowed.
Add an assert to make sure we're right about this, and have it not
check whether ExitNodes is set, since that's irrelevant to internal
circuits.
IOW, if we were using TrackExitHosts, and we added an excluded node or
removed a node from ExitNodes, we wouldn't actually remove the mapping
that points us at the new node.
Also, note with an XXX022 comment a place that I think we are looking
at the wrong string.
The routerset_equal function explicitly handles NULL inputs, so
there's no need to check inputs for NULL before calling it.
Also fix a bug in routerset_equal where a non-NULL routerset with no
entries didn't get counted as equal to a NULL routerset. This was
untriggerable, I think, but potentially annoying down the road.
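A rough sketch of the intended semantics, using a simplified stand-in
rather than Tor's real routerset_t (which holds smartlists of country
codes, digests, and nickname patterns): NULL and an allocated-but-empty
set compare equal, and nobody needs a separate NULL check first.

    #include <stddef.h>

    /* Simplified stand-in for routerset_t; illustrative only. */
    typedef struct {
      size_t n_entries;
      const char **entries;
    } toy_routerset_t;

    static int
    toy_routerset_is_empty(const toy_routerset_t *set)
    {
      return set == NULL || set->n_entries == 0;
    }

    /* NULL-safe equality: an empty set counts as equal to NULL. */
    static int
    toy_routerset_equal(const toy_routerset_t *a, const toy_routerset_t *b)
    {
      if (toy_routerset_is_empty(a) && toy_routerset_is_empty(b))
        return 1;
      if (toy_routerset_is_empty(a) || toy_routerset_is_empty(b))
        return 0;
      if (a->n_entries != b->n_entries)
        return 0;
      /* ... entry-by-entry comparison elided ... */
      return 1;
    }
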
ExcludeExitNodes foo now means that foo.exit doesn't work. If
StrictNodes is set, then ExcludeNodes foo also overrides foo.exit.
foo.exit, however, still works even if foo is not listed in ExitNodes.
This once maybe made sense when ExitNodes meant "Here are 3 exits;
use them all", but now it more typically means "Here are 3
countries; exit from there." Using non-Fast/Stable exits created a
potential partitioning opportunity and an annoying stability
problem.
(Don't worry about the case where all of our ExitNodes are non-Fast
or non-Stable: we handle that later in the function by retrying with
need_capacity and need_uptime set to 0.)
If we're picking a random directory node, never pick an excluded one.
But if we've chosen a specific one (or all), allow it unless StrictNodes
is set (in which case warn so the user knows it's their fault).
When warning that we won't connect to a strictly excluded node,
log what it was we were trying to do at that node.
When ExcludeNodes is set but StrictNodes is not set, we only use
non-excluded nodes if we can, but fall back to using excluded nodes
if none of those nodes is usable.
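Roughly, selection becomes a two-pass affair; this sketch uses made-up
types and names, not the real node-selection code:

    #include <stddef.h>

    /* Hypothetical, simplified node record. */
    typedef struct {
      const char *nickname;
      int usable;     /* running, valid, has the flags we need, ... */
      int excluded;   /* listed in ExcludeNodes */
    } toy_node_t;

    /* Prefer usable non-excluded nodes; if there are none and StrictNodes
     * is not set, fall back to a usable excluded node instead of failing. */
    static const toy_node_t *
    pick_first_usable(const toy_node_t *nodes, size_t n, int strict_nodes)
    {
      for (size_t i = 0; i < n; ++i)
        if (nodes[i].usable && !nodes[i].excluded)
          return &nodes[i];
      if (strict_nodes)
        return NULL;                 /* StrictNodes: excluded means excluded */
      for (size_t i = 0; i < n; ++i)
        if (nodes[i].usable)
          return &nodes[i];          /* fall back to an excluded node */
      return NULL;
    }
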
This is a tweak to the bug2917 fix. Basically, if we want to simulate
a signal arriving in the controller, we shouldn't have to pretend that
we're Libevent, or depend on how Tor sets up its Libevent callbacks.
The last entry of the *Maxima values in the state file was inflated by a
factor of NUM_SECS_ROLLING_MEASURE (currently 10). This could lead to
a wrong maximum value propagating through the state file history.
When reading the bw history from the state file, we'd add the 900-second
value as traffic that occurred during one second. Fix that by adding the
average value to each second.
This bug was present since 0.2.0.5-alpha, but was hidden until
0.2.23-alpha when we started using the saved values.
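A minimal sketch of the corrected import, assuming a 900-second interval
and a hypothetical per-second hook add_obs(); the real code lives in the
bandwidth-history state-loading path:

    #include <stdint.h>

    #define NUM_SECS_BW_SUM_INTERVAL 900

    /* Instead of crediting the whole 900-second total to a single second
     * (which also inflated the rolling maxima), spread the interval's
     * average over every second of the interval. */
    static void
    import_interval_total(uint64_t interval_total,
                          void (*add_obs)(uint64_t bytes_this_second))
    {
      uint64_t per_second = interval_total / NUM_SECS_BW_SUM_INTERVAL;
      for (int i = 0; i < NUM_SECS_BW_SUM_INTERVAL; ++i)
        add_obs(per_second);
    }
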
Some Tor relays would report lines like these in their extrainfo
documents:
    dirreq-write-history 2011-03-14 16:46:44 (900 s)
This was confusing to some people who look at the stats. It would happen
whenever a relay first starts up, or when a relay has dirport disabled.
Change this so that lines without actual bw entries are omitted.
Implements ticket 2497.
While doing so, get rid of the now unnecessary function
control_signal_act().
Fixes bug 2917, reported by Robert Ransom. Bugfix on commit
9b4aa8d2ab. This patch is loosely based on
a patch by Robert (Changelog entry).
- Document the structure and variables.
- Make circuits_for_buffer_stats into a static variable.
- Don't die horribly if interval_length is 0.
- Remove the unused local_circ_id field.
- Reorder the fields of circ_buffer_stats_t for cleaner alignment layout.
Instead of answering GETINFO requests about our geoip usage only after
running for 24 hours, this patch makes us answer GETINFO requests
immediately. We still round and quantize as before.
Implements bug2711.
Also, refactor the heck out of the bridge usage formatting code. No
longer should we need to do a generate-parse-and-regenerate cycle to
get the controller string, and that lets us simplify the code a lot.
We've got millisecond timers now, so we might as well use them.
This change won't actually make circuits get expired with microsecond
precision, since we only call the expiry functions once per second.
Still, it should avoid the situation where we have a circuit get
expired too early because of rounding.
A couple of the expiry functions now call tor_gettimeofday: this
should be cheap since we're only doing it once per second. If it gets
to be called more often, though, we should consider having the current
time be an argument again.
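For illustration, the expiry comparison can now work on struct timeval
rather than whole seconds, along these lines (a sketch, not the actual
expiry code):

    #include <sys/time.h>

    /* Compare a circuit's creation time against "now" at sub-second
     * granularity, so a circuit can't be judged too old purely because
     * of rounding to whole seconds. */
    static int
    circuit_is_too_old(const struct timeval *created,
                       const struct timeval *now, long cutoff_ms)
    {
      long elapsed_ms = (now->tv_sec - created->tv_sec) * 1000L
                        + (now->tv_usec - created->tv_usec) / 1000L;
      return elapsed_ms >= cutoff_ms;
    }
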
Since svn r1475/git 5b6099e8 in tor-0.0.6, we have responded to an
exhaustion of all 65535 stream IDs on a circuit by marking that
circuit for close. That's not the right response. Instead, we
should mark the circuit as "too dirty for new streams".
Of course in reality this isn't really right either. If somebody
has managed to cram 65535 streams onto a circuit, the circuit is
probably not going to work well for any of those streams, so maybe
we should be limiting the number of streams on an origin circuit
concurrently.
Closing the stream in this case is probably the wrong thing to do
as well, but fixing that can wait too.
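A toy version of the new behaviour (stand-in struct, not the real
origin_circuit_t; assume next_stream_id starts at 1, so 0 means the
16-bit counter has wrapped):

    #include <stdint.h>

    typedef struct {
      uint16_t next_stream_id;              /* starts at 1 */
      int unusable_for_new_streams;
    } toy_origin_circuit_t;

    static uint16_t
    get_unique_stream_id(toy_origin_circuit_t *circ)
    {
      if (circ->next_stream_id == 0) {      /* counter wrapped: IDs exhausted */
        circ->unusable_for_new_streams = 1; /* "too dirty", but stay open */
        return 0;                           /* 0 means "no ID available" */
      }
      return circ->next_stream_id++;
    }
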
We fixed bug 539 (where directories would say "503" but send data
anyway) back in 0.2.0.16-alpha/0.1.2.19. Because most directory
versions were affected, we added a workaround to make sure that we
examined the contents of 503 replies to see whether they contained
data after all. But now that such routers are nonexistent,
we can remove this code. (Even if somebody fired up an 0.1.2.19
directory cache today, it would still be fine to ignore data in its
erroneous 503 replies.)
The first was genuinely impossible, I think: it could only happen
when the amount we read differed from the amount we wanted to read
by more than INT_MAX.
The second is just very unlikely: it would give incorrect results to
the controller if you somehow wrote or read more than 4GB on one
edge conn in one second. That one is a bugfix on 0.1.2.8-beta.
In afe414 (tor-0.1.0.1-rc~173), when we moved to
connection_edge_end_errno(), we used it in handling errors from
connection_connect(). That's not so good, since by the time
connection_connect() returns, the socket is no longer set, and we're
supposed to be looking at the socket_errno return value from
connection_connect() instead. So do what we should've done, and
look at the socket_errno value that we get from connection_connect().
Right now, we only consider sending stream-level SENDME cells when we
have completely flushed a connection_edge's outbuf, or when it sends
us a DATA cell. Neither of these is ideal for throughput.
This patch changes the behavior so we now call
connection_edge_consider_sending_sendme when we flush _some_ data from
an edge outbuf.
Fix for bug 2756; bugfix on svn r152.
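Very roughly, the new call pattern looks like this (toy types; Tor's
default stream-window constants of 500/50 assumed):

    /* Toy stand-in for edge_connection_t's flow-control state. */
    typedef struct {
      int deliver_window;   /* DATA cells we are still willing to receive */
    } toy_edge_conn_t;

    /* Send a stream-level SENDME for each full increment the deliver
     * window has dropped.  With this patch the caller runs this after
     * every partial flush of the edge outbuf, not only once the outbuf
     * is empty. */
    static void
    consider_sending_stream_sendme(toy_edge_conn_t *conn,
                                   void (*send_sendme_cell)(void))
    {
      while (conn->deliver_window <= 500 - 50) {
        conn->deliver_window += 50;
        send_sendme_cell();
      }
    }
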
hid_serv_responsible_for_desc_id's return value is never negative, and
there is no need to search through the consensus to find out whether we
are responsible for a descriptor ID before we look in our cache for a
descriptor.
Name the magic value "10" rather than re-deriving it.
Comment more.
Use the pattern that works for periodic timers, not the pattern that
doesn't work. ;)
It is important to verify the uptime claim of a relay instead of just
trusting it, otherwise it becomes too easy to blackhole a specific
hidden service. rephist already has data available that we can use here.
Bugfix on 0.2.0.10-alpha.
Partial backport of daa0326aaa .
Resolves bug 2402. Bugfix on 0.2.1.15 (for the part where we switched to
git) and on 0.2.1.30 (for the part where we dumped micro-revisions).
The calculation of when to send the log message was correct, but we
didn't give the correct number of relays required: We want more than
half of all authorities we know about. Fixes bug 2663.
This fixes a remotely triggerable assert on directory authorities, who
don't handle descriptors with ipv6 contents well yet. We will want to
revert this once we're ready to handle ipv6.
Issue raised by lorth on #tor, who wasn't able to use Tor anymore.
Analyzed with help from Christian Fromme. Fix suggested by arma. Bugfix
on 0.2.1.3-alpha.
This should fix a bug that special ran into, where if your state file
didn't record period maxima, it would never decide that it had
successfully parsed itself unless you got lucky with your
uninitialized-variable values.
This patch also tries to improve error messages in the case where a
maximum value legitimately doesn't parse.
In private networks, the defaults for some options are changed. This
means that in options_validate(), where we're testing that the defaults
are what we think they are, we fail. Use a workaround by setting a
hidden configuration option _UsingTestingTorNetwork when we have altered
the configuration this way, so that options_validate() can do the right
thing.
Fixes bug 2250, bugfix on 0.2.1.2-alpha (the version introducing private
network options).
Our regular DH parameters that we use for circuit and rendezvous
crypto are unchanged. This is yet another small step on the path of
protocol fingerprinting resistance.
(Backport from 0.2.2's 5ed73e3807)
rransom noticed that a change of ORPort is just as bad as a change of IP
address from a client's perspective, because both mean that the relay is
not available to them until the new information has propagated.
Change the bug1035 fix accordingly.
Also make sure we don't log a bridge's IP address (which might happen
when we are the bridge authority).
When calling circuit_build_times_shuffle_and_store_array, we were
passing a uint32_t as an int. arma is pretty sure that this can't
actually cause a bug, because of checks elsewhere in the code, but
it's best not to pass a uint32_t as an int anyway.
Found by doorss; fix on 0.2.2.4-alpha.
We detect and reject said attempts if there is no chosen exit node or
circuit: connecting to a private addr via a randomly chosen exit node
will usually fail (if all exits reject private addresses), is always
ill-defined (you're not asking for any particular host or service),
and usually an error (you've configured all requests to go over Tor
when you really wanted to configure all _remote_ requests to go over
Tor).
This can also help detect forwarding loop requests.
Found as part of bug2279.
Left circuit_build_times_get_bw_scale() uncommented because it is in the wrong
place due to an improper bug2317 fix. It needs to be moved and renamed, as it
is not a cbt parameter.
To quote arma: "So instead of stopping your CBT from screaming, you're just
going to throw it in the closet and hope you can't hear it?"
Yep. The log message can happen because at the 95% point on the curve, we can be
way beyond the max timeout we've seen, if the curve has few points and is
shallow.
Also applied Nick's rule of thumb for rewriting some other notice log messages
to read like how you would explain them to a raving lunatic on #tor who was
shouting at you demanding what they meant. Hopefully the changes live up to
that standard.
If we got a signed digest that was shorter than the required digest
length, but longer than 20 bytes, we would accept it as long
enough... and then immediately fail when we want to check it.
Fixes bug 2409; bug in 0.2.2.20-alpha; found by piebeer.
When we added support for separate client TLS certs on bridges in
a2bb0bfdd5, we forgot to correctly initialize this when changing
from relay to bridge or vice versa while Tor is running. Fix that
by always initializing keys when the state changes.
Fixes bug 2433.
When we stopped using svn, 0.2.1.x lost the ability to notice its svn
revision and report it in the version number. However, it kept
looking at the micro-revision.i file... so if you switched to master,
built tor, then switched to 0.2.1.x, you'd get a micro-revision.i file
from master reported as an SVN tag. This patch takes out the "include
the svn tag" logic entirely.
Bugfix on 0.2.1.15-rc; fixes bug 2402.
Our regular DH parameters that we use for circuit and rendezvous
crypto are unchanged. This is yet another small step on the path of
protocol fingerprinting resistance.
We need to make sure that the worst thing that a weird consensus param
can do to us is to break our Tor (and only if the other Tors are
reliably broken in the same way) so that the majority of directory
authorities can't pull any attacks that are worse than the DoS that
they can trigger by simply shutting down.
One of these worse things was the cbtnummodes parameter, which could
lead to heap corruption on some systems if the value was sufficiently
large.
This commit fixes this particular issue and also introduces sanity
checking for all consensus parameters.
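The shape of the check is roughly this (a sketch; the real helper also
takes the parameter's name and a default value):

    #include <stdint.h>

    /* Clamp a consensus parameter into a per-parameter sane range before
     * it is ever used for sizing or arithmetic, so a hostile majority of
     * authorities can at worst degrade behaviour, not corrupt memory. */
    static int32_t
    clamp_consensus_param(int32_t value, int32_t min_val, int32_t max_val)
    {
      if (value < min_val)
        return min_val;
      if (value > max_val)
        return max_val;
      return value;
    }
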
Our public key functions assumed that they were always writing into a
large enough buffer. In one case, they weren't.
(Incorporates fixes from sebastian)
In dnsserv_resolved(), we carefully made a nul-terminated copy of the
answer in a PTR RESOLVED cell... then never used that nul-terminated
copy. Ouch.
Surprisingly this one isn't as huge a security problem as it could be.
The only place where the input to dnsserv_resolved wasn't necessarily
nul-terminated was when it was called indirectly from relay.c with the
contents of a relay cell's payload. If the end of the payload was
filled with junk, eventdns.c would take the strdup() of the name [This
part is bad; we might crash there if the cell is in a bad part of the
stack or the heap] and get a name of at least length
495[*]. eventdns.c then rejects any name of length over 255, so the
bogus data would be neither transmitted nor altered.
[*] If the name was less than 495 bytes long, the client wouldn't
actually be reading off the end of the cell.
Nonetheless this is a reasonably annoying bug. Better fix it.
Found while looking at bug 2332, reported by doorss. Bugfix on
0.2.0.1-alpha.
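For reference, the fix amounts to making the NUL-terminated copy and
then actually using it, roughly like this (made-up helper name):

    #include <stdlib.h>
    #include <string.h>

    /* The answer bytes from a RESOLVED cell are not NUL-terminated, so
     * copy them into a fresh, terminated buffer and pass *that* along,
     * instead of the raw cell payload. */
    static char *
    copy_answer_string(const char *answer, size_t answer_len)
    {
      char *hostname = malloc(answer_len + 1);
      if (!hostname)
        return NULL;
      memcpy(hostname, answer, answer_len);
      hostname[answer_len] = '\0';
      return hostname;   /* caller must free() it, and must use it */
    }
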
Previously, we only looked at up to 128 bytes. This is a bad idea
since SOCKS messages can be at least 256+x bytes long. Now we look at
up to 512 bytes; this should be enough for 0.2.2.x to handle all valid
SOCKS messages. For 0.2.3.x, we can think about handling trickier
cases.
Fixes bug 2330. Bugfix on 0.2.0.16-alpha.
Right now, Tor routers don't save the maxima values from the
bw_history_t between sessions. That's no good, since we use those
values to determine bandwidth. This code adds a new BWHist.*Maximum
set of values to the state file. If they're not present, we estimate
them by taking the observed total bandwidth and dividing it by the
period length, which provides a lower bound.
This should fix bug 1863. I'm calling it a feature.
Previously, our state parsing code would fail to parse a bwhist
correctly if the Interval was anything other than the default
hardcoded 15 minutes. This change makes the parsing less incorrect,
though the resulting history array might get strange values in it if
the intervals don't match the one we're using. (That is, if stuff was
generated in 15 minute intervals, and we read it into an array that
expects 30 minute intervals, we're fine, since values can be combined
pairwise. But if we generate data at 30 minute intervals and read it
into 15 minute intervals, alternating buckets will be empty.)
Bugfix on 0.1.1.11-alpha.
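To see why one direction is fine and the other leaves holes, here's a
toy model of the import (made-up function; intervals in seconds):

    #include <stdint.h>

    /* Add values recorded at src_interval seconds apart into buckets that
     * are dst_interval seconds wide.  15-minute data into 30-minute
     * buckets combines pairs (fine); 30-minute data into 15-minute
     * buckets only lands in every other bucket, leaving the rest empty. */
    static void
    import_history(const uint64_t *values, int n_values, int src_interval,
                   uint64_t *buckets, int n_buckets, int dst_interval)
    {
      for (int i = 0; i < n_values; ++i) {
        int t = i * src_interval;                /* seconds since start */
        buckets[(t / dst_interval) % n_buckets] += values[i];
      }
    }
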
The trick of looping from i=0..4, switching on i to set up some
variables, then running some common code is much better expressed by
just calling a function 4 times with 4 sets of arguments. This should
make the code a little easier to follow and maintain here.
An object, you'll recall, is something between -----BEGIN----- and
-----END----- tags in a directory document. Some of our code, as
doorss has noted in bug 2352, could assert if one of these ever
overflowed SIZE_T_CEILING but not INT_MAX. As a solution, I'm setting
a maximum size on a single object such that neither of these limits
will ever be hit. I'm also fixing the INT_MAX checks, just to be sure.
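Conceptually the new bound looks like this (the constant is
illustrative, not necessarily the exact one used in the tree):

    #include <limits.h>
    #include <stddef.h>

    /* Cap a single -----BEGIN/END----- object well below both INT_MAX and
     * SIZE_T_CEILING, so later length arithmetic can't overflow either. */
    #define MAX_UNPARSED_OBJECT_SIZE (128 * 1024)

    static int
    object_size_ok(size_t body_len)
    {
      return body_len <= MAX_UNPARSED_OBJECT_SIZE && body_len < INT_MAX;
    }
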
When using libevent 2, we use evdns_base_resolve_*(). When not, we
fake evdns_base_resolve_*() using evdns_resolve_*().
Our old check was looking for negative values (like libevent 2
returns), but our eventdns.c code returns 1. This code makes the
check just test for nonzero.
Note that this broken check was not for _resolve_ failures or even for
failures to _launch_ a resolve: it was for failures to _create_ or
_encode_ a resolve request.
Bug introduced in 81eee0ecfff3dac1e9438719d2f7dc0ba7e84a71; found by
lodger; uploaded to trac by rransom. Bug 2363. Fix on 0.2.2.6-alpha.
We were not decrementing "available" every time we did
++next_virtual_addr in addressmap_get_virtual_address: we left out the
--available when we skipped .0 and .255 addresses.
This didn't actually cause a bug in most cases, since the failure mode
was to keep looping around the virtual addresses until we found one,
or until available hit zero. It could have given you an infinite loop
rather than a useful message, however, if you said "VirtualAddrNetwork
127.0.0.255/32" or something broken like that.
Spotted by cypherpunks
We were decrementing "available" twice for each in-use address we ran
across. This would make us declare that we ran out of virtual
addresses when the address space was only half full.
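Both of these "available" accounting fixes boil down to one rule in the
allocation loop: decrement exactly once per candidate address we
consider. A toy model (made-up names, IPv4 only):

    #include <stdint.h>

    /* net holds the (already masked) network bits; hostmask is the host
     * part.  Whether we skip a candidate (.0/.255), find it in use, or
     * hand it out, it costs exactly one unit of "available". */
    static int
    pick_virtual_ipv4(uint32_t net, uint32_t hostmask, uint32_t *next_host,
                      uint32_t *available, int (*in_use)(uint32_t),
                      uint32_t *addr_out)
    {
      while (*available > 0) {
        uint32_t candidate = net | (*next_host & hostmask);
        *next_host = (*next_host + 1) & hostmask;
        --*available;                     /* one decrement per candidate */
        uint8_t last_octet = (uint8_t)(candidate & 0xff);
        if (last_octet == 0 || last_octet == 255)
          continue;                       /* skipped, but still counted */
        if (in_use(candidate))
          continue;                       /* in use: not counted twice */
        *addr_out = candidate;
        return 0;
      }
      return -1;                          /* address space exhausted */
    }
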