With this patch, I dump the old kludge of using magic negative
numbers to indicate unknown bandwidths. I also compute each node's
weighted bandwidth exactly once, rather than computing it once in
the loop that totals up the weighted bandwidths and a second time in
the loop that finds which node we picked.
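As a rough illustration of the single-pass idea (the names here are
made up for the sketch, not the real Tor identifiers):

    #include <stdint.h>
    #include <stddef.h>

    typedef struct node_t node_t;        /* stands in for Tor's node type */
    /* Hypothetical helper standing in for the real weighting logic. */
    uint64_t weighted_bw_of(const node_t *node);

    /* Compute each node's weighted bandwidth exactly once, storing it in
     * bandwidths[], and accumulate the total as we go. */
    static uint64_t
    compute_weighted_bandwidths(const node_t **nodes, size_t n_nodes,
                                uint64_t *bandwidths)
    {
      uint64_t total = 0;
      for (size_t i = 0; i < n_nodes; ++i) {
        bandwidths[i] = weighted_bw_of(nodes[i]);
        total += bandwidths[i];
      }
      return total;
    }

The selection loop (sketched further below) can then reuse
bandwidths[] instead of recomputing each weight.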
Previously, we had incremented rand_bw so that when we later tested
"tmp >= rand_bw", we wouldn't have an off-by-one error. But instead,
it makes more sense to leave rand_bw alone and test "tmp > rand_bw".
Note that this is still safe. To take the example from the bug1203
writeup: Suppose that we have 3 nodes with bandwidth 1. So the
bandwidth array is { 1, 1, 1 }, and the total bandwidth is 3. We
choose rand_bw == 0, 1, or 2. With the first iteration of the loop,
tmp is now 1; with the second, tmp is 2; with the third, tmp is 3.
Now that our check is tmp > rand_bw, we will set i in the first
iteration of the loop iff rand_bw == 0; in the second iteration of
the loop iff rand_bw == 1; and in the third iff rand_bw == 2.
That's what we want.
Incidentally, this change makes the bug 6538 fix more ironclad: once
rand_bw is set to UINT64_MAX, tmp > rand_bw is obviously false
regardless of the value of tmp.
The old approach, because of its "tmp >= rand_bw &&
!i_has_been_chosen" check, would run through the second part of the
loop slightly slower than the first part. Now, we remove
i_has_been_chosen, and instead set rand_bw = UINT64_MAX, so that
every instance of the loop will do exactly the same amount of work
regardless of the initial value of rand_bw.
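A minimal sketch of the selection loop as described above (again with
illustrative names; the real code works over Tor's own list and node
types rather than a bare array):

    #include <stdint.h>
    #include <stddef.h>

    /* Pick an index with probability proportional to bandwidths[i], given
     * rand_bw chosen uniformly in [0, total_bw).  Once a winner is found,
     * rand_bw is set to UINT64_MAX, so every later iteration does the same
     * amount of work but can never satisfy "tmp > rand_bw" again. */
    static size_t
    choose_by_bandwidth(const uint64_t *bandwidths, size_t n_nodes,
                        uint64_t rand_bw)
    {
      uint64_t tmp = 0;
      size_t chosen = n_nodes;           /* sentinel: nothing chosen yet */
      for (size_t i = 0; i < n_nodes; ++i) {
        tmp += bandwidths[i];
        if (tmp > rand_bw) {
          chosen = i;
          rand_bw = UINT64_MAX;
        }
      }
      return chosen;
    }

With the { 1, 1, 1 } example, rand_bw == 2 only satisfies
tmp > rand_bw on the third iteration, as described above.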
Fix for bug 6538.
This should make our preferred solution to #6538 easier to
implement, avoid a bunch of potential nastiness with excessive
int-vs-double math, and generally make the code there a little less
scary.
"But wait!" you say. "Is it really safe to do this? Won't the
results come out differently?"
Yes, but not by much. We now round every weighted bandwidth to the
nearest byte before computing on it. This will make every node whose
weighted bandwidth previously had a fractional part either slightly
more likely or slightly less likely to be chosen. Further, the rand_bw
value was only ever set with integer precision, so it can't
accurately sample routers with tiny fractional bandwidth values
anyway. Finally, doing repeated double-vs-uint64 comparisons is
just plain sad; it will involve an implicit cast to double, which is
never a fun thing.
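The rounding step might look roughly like this (the helper name and
the clamping behaviour are assumptions for this sketch, not the
actual code):

    #include <stdint.h>
    #include <math.h>

    /* Round a (possibly fractional) weighted bandwidth to the nearest whole
     * byte, returning it as a uint64_t so that all later arithmetic and
     * comparisons stay in integer math. */
    static uint64_t
    weighted_bw_round_to_u64(double weighted_bw)
    {
      if (weighted_bw <= 0.0)
        return 0;
      double rounded = floor(weighted_bw + 0.5);
      if (rounded >= (double) UINT64_MAX)
        return UINT64_MAX;
      return (uint64_t) rounded;
    }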
This gives us a few benefits:
1) "make -j clean all"
   This will start working, as it should. It currently doesn't.
2) Increased parallel build.
   Recursive make's parallelism maxes out at the number of files in a
   single directory; non-recursive make has no such limitation (see the
   sketch after this list).
3) Removal of duplicate information in the makefiles, which is less
   error-prone.
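To sketch what the non-recursive layout looks like (the paths and
fragment names here are illustrative, not necessarily the exact ones
in the tree): instead of listing subdirectories in SUBDIRS and
recursing, the top-level Makefile.am pulls in per-directory fragments,
so a single make instance sees every target:

    # Top-level Makefile.am (illustrative)
    ACLOCAL_AMFLAGS = -I m4

    noinst_LIBRARIES =
    bin_PROGRAMS =

    include src/common/include.am
    include src/or/include.am

    # Each include.am fragment appends to the variables above, e.g.:
    #   noinst_LIBRARIES += src/common/libexample.a
    #   src_common_libexample_a_SOURCES = src/common/example.c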
I've also slightly updated how we call AM_INIT_AUTOMAKE, since the way
we were calling it is not only deprecated but will be *removed* in the
next major automake release (1.13)... so it's probably best to make
sure we can continue to build tor without requiring an old automake.
(see http://www.gnu.org/software/automake/manual/html_node/Public-Macros.html )
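Concretely, the change is along these lines (the version string and
option list below are placeholders, not the exact ones used):

    dnl Deprecated style: package name and version passed directly to
    dnl AM_INIT_AUTOMAKE; this multi-argument form goes away in automake 1.13.
    dnl   AC_INIT
    dnl   AM_INIT_AUTOMAKE(tor, x.y.z)

    dnl Current style: the name and version live in AC_INIT, and
    dnl AM_INIT_AUTOMAKE only takes automake options.
    AC_INIT([tor], [x.y.z])
    AM_INIT_AUTOMAKE([foreign])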
For more reasons why, see resources such as:
http://miller.emu.id.au/pmiller/books/rmch/
I don't personally agree that this is likely to be easy to exploit,
and some initial experimentation I've done suggests that cache-miss
times are just plain too fast to get useful info out of when they're
mixed up with the rest of Tor's timing noise. Nevertheless, I'm
leaving Robert's initial changelog entry in the git history so that he
can be the voice of reason if I'm wrong. :)
This makes the V=1 and V=0 automake silent build options display (or
hide) the full command lines used. With the silent rules active (V=0),
a short line like

    GEN foo.bar

will be seen rather than the full command.
As with all automake silent rules, "make V=1" will output the full command.
    $ make V=1   # will temporarily disable them

Otherwise you see:

    CC foo.c

rather than the giant long build line.
This makes it significantly easier to spot compiler warnings etc.
Additionally, the silent rules are made conditional, so we won't error
out on automake < 1.11.
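The conditional bit is the usual m4_ifdef guard in configure.ac,
roughly:

    dnl Only enable silent rules if this automake actually provides them
    dnl (AM_SILENT_RULES first appeared in automake 1.11), so that older
    dnl automakes don't choke on an unknown macro.
    m4_ifdef([AM_SILENT_RULES], [AM_SILENT_RULES([yes])])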
(commits squashed by nickm.)