This fixes a remotely triggerable assert on directory authorities, who
don't handle descriptors with ipv6 contents well yet. We will want to
revert this once we're ready to handle ipv6.
Issue raised by lorth on #tor, who wasn't able to use Tor anymore.
Analyzed with help from Christian Fromme. Fix suggested by arma. Bugfix
on 0.2.1.3-alpha.
The regular DH parameters we use for circuit and rendezvous crypto
are unchanged. This is yet another small step toward protocol
fingerprinting resistance.
(Backport from 0.2.2's 5ed73e3807)
When we stopped using svn, 0.2.1.x lost the ability to notice its svn
revision and report it in the version number. However, it kept
looking at the micro-revision.i file... so if you switched to master,
built tor, then switched to 0.2.1.x, you'd get a micro-revision.i file
from master reported as an SVN tag. This patch takes out the "include
the svn tag" logic entirely.
Bugfix on 0.2.1.15-rc; fixes bug 2402.
Our public key functions assumed that they were always writing into a
large enough buffer. In one case, they weren't.
(Incorporates fixes from sebastian)
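A minimal sketch of the kind of check involved (the function and names
here are illustrative, not the real crypto code):

    /* Illustrative sketch only: check the caller's buffer before
     * writing into it, instead of assuming it is big enough. */
    #include <stddef.h>
    #include <string.h>

    static int
    copy_digest_checked(char *out, size_t out_len,
                        const char *digest, size_t digest_len)
    {
      if (out_len < digest_len)
        return -1;                /* refuse rather than overflow */
      memcpy(out, digest, digest_len);
      return 0;
    }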
In dnsserv_resolved(), we carefully made a nul-terminated copy of the
answer in a PTR RESOLVED cell... then never used that nul-terminated
copy. Ouch.
Surprisingly, this one isn't as huge a security problem as it could be.
The only place where the input to dnsserv_resolved() wasn't necessarily
nul-terminated was when it was called indirectly from relay.c with the
contents of a relay cell's payload. If the end of the payload was
filled with junk, eventdns.c would strdup() the name [this part is
bad; we might crash there if the cell sits in a bad part of the stack
or heap] and get a name of length at least 495[*]. eventdns.c then
rejects any name longer than 255 bytes, so the bogus data would be
neither transmitted nor altered.
[*] If the name was less than 495 bytes long, the client wouldn't
actually be reading off the end of the cell.
Nonetheless this is a reasonably annoying bug. Better fix it.
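In other words, the terminated copy needs to be the thing we actually
use. A simplified sketch of that pattern (not the exact dnsserv.c
code):

    /* Simplified sketch; not the exact dnsserv.c code. */
    #include <stdlib.h>
    #include <string.h>

    static char *
    copy_answer_nul_terminated(const char *answer, size_t answer_len)
    {
      char *cp = malloc(answer_len + 1);
      if (!cp)
        return NULL;
      memcpy(cp, answer, answer_len);
      cp[answer_len] = '\0';      /* guaranteed nul-termination */
      return cp;                  /* callers must use (and later free)
                                   * this copy, never the raw payload */
    }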
Found while looking at bug 2332, reported by doorss. Bugfix on
0.2.0.1-alpha.
The C standard says that INT32_MAX is supposed to be a signed
integer. On platforms that have it, we get the correct
platform-defined value. Our own replacement, however, was
unsigned. That's going to cause a bug somewhere eventually.
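For illustration only (the macro names below are made up; the real
definition lives in our replacement headers):

    /* Illustration only; these names are not the real header's. */
    #include <stdint.h>

    /* An unsigned replacement (the bug): comparisons against it quietly
     * promote signed values to unsigned. */
    #define MY_INT32_MAX_WRONG 0x7fffffffu

    /* A signed replacement (the fix): stays a signed 32-bit value, as
     * the C standard requires of INT32_MAX. */
    #define MY_INT32_MAX_RIGHT ((int32_t)0x7fffffff)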
An object, you'll recall, is something between -----BEGIN----- and
-----END----- tags in a directory document. Some of our code, as
doorss has noted in bug 2352, could assert if one of these ever
exceeded SIZE_T_CEILING but not INT_MAX. As a solution, I'm setting
a maximum size on a single object such that neither of these limits
will ever be hit. I'm also fixing the INT_MAX checks, just to be sure.
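A hedged sketch of the shape of that check (the constant name and its
value here are made up for illustration):

    /* Illustrative sketch only; the limit's name and value are made up. */
    #include <limits.h>
    #include <stddef.h>

    #define MAX_OBJECT_BODY_SIZE (128*1024)  /* far below both INT_MAX
                                              * and SIZE_T_CEILING */
    static int
    object_body_size_ok(size_t body_len)
    {
      /* The INT_MAX comparison is redundant given the limit above, but
       * it keeps the "just to be sure" check explicit. */
      return body_len <= MAX_OBJECT_BODY_SIZE && body_len <= INT_MAX;
    }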
We were not decrementing "available" every time we did
++next_virtual_addr in addressmap_get_virtual_address: we left out the
--available when we skipped the .0 and .255 addresses.
This didn't actually cause a bug in most cases, since the failure mode
was to keep looping around the virtual addresses until we found one,
or until available hit zero. It could have given you an infinite loop
rather than a useful message, however, if you said "VirtualAddrNetwork
127.0.0.255/32" or something broken like that.
Spotted by cypherpunks.
We were decrementing "available" twice for each in-use address we ran
across. This would make us declare that we ran out of virtual
addresses when the address space was only half full.
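In both cases the intended invariant is that "available" drops by
exactly one per candidate address tried. A simplified sketch (the
helper and parameter names here are illustrative, not the real
addressmap code):

    /* Simplified sketch; helper and parameter names are illustrative. */
    #include <stdint.h>

    static uint32_t
    pick_virtual_address(uint32_t next_virtual_addr, uint32_t available,
                         int (*addr_in_use)(uint32_t))
    {
      while (available) {
        ++next_virtual_addr;
        --available;              /* exactly once per candidate, even for
                                   * the .0 and .255 addresses we skip */
        if ((next_virtual_addr & 0xff) == 0 ||
            (next_virtual_addr & 0xff) == 0xff)
          continue;
        if (!addr_in_use(next_virtual_addr))
          return next_virtual_addr;   /* found a free address */
      }
      return 0;                   /* address space genuinely exhausted */
    }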
On Windows, we never use pthreads, since it doesn't usually exist,
and when it does it tends to be a little weirdly-behaved. But some
mingw installations have pthreads installed, so autoconf detects
pthread.h and tells us about it. This would make us include
pthread.h, which could make for trouble when the iffy pthread.h
tried to include config.h.
This patch changes compat.h so that we never include pthread.h on
Windows. Fixes bug 2313; bugfix on 0.1.0.1-rc.
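The guard amounts to something like this (a sketch, not the verbatim
compat.h text; _WIN32 stands in for however the Windows build is
detected):

    /* Sketch only; not the verbatim compat.h text. */
    #if defined(HAVE_PTHREAD_H) && !defined(_WIN32)
    #include <pthread.h>
    #endif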
It's all too easy in C to convert a negative signed value to an
unsigned one, which will (on all modern computers) give you a huge
unsigned value. If
you have a size_t value of size greater than SSIZE_T_MAX, that is way
likelier to be an underflow than it is to be an actual request for
more than 2gb of memory in one go. (There's nothing in Tor that
should be trying to allocate >2gb chunks.)
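Concretely, this motivates a check along these lines (a sketch, not
the exact tor_malloc() code):

    /* Sketch; not the exact tor_malloc() code. */
    #include <assert.h>
    #include <stdint.h>
    #include <stdlib.h>

    static void *
    checked_malloc(size_t size)
    {
      /* A size above SIZE_MAX/2 (roughly SSIZE_T_MAX) is almost
       * certainly an underflowed computation, not a real request for
       * more than 2gb in one allocation. */
      assert(size < SIZE_MAX / 2);
      return malloc(size);
    }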