Previously we had code like this for handling bad things that happen in
signal handlers, but it makes sense to use the same logic to handle
cases where something goes wrong at a level too low for log.c to be
involved.
My raw_assert*() stuff now uses this code.
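Roughly, the idea is a fatal-error path that uses only async-signal-safe
calls and never touches the logging code. A minimal sketch (illustrative
only, not the actual raw_assert*() implementation):

    #include <string.h>
    #include <stdlib.h>
    #include <unistd.h>

    /* Sketch: write the message straight to fd 2 and abort.  write(2) and
     * abort(3) are async-signal-safe; log.c and fprintf() are not. */
    static void
    raw_die(const char *msg)
    {
      (void) write(STDERR_FILENO, msg, strlen(msg));
      abort();
    }

    #define raw_assert(expr)                                \
      do {                                                  \
        if (!(expr))                                        \
          raw_die("Assertion failed: " #expr "\n");         \
      } while (0)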
Our main function, though accurate on all platforms, can be very
slow on 32-bit hosts. This one is faster on all 32-bit hosts, and
accurate everywhere except Apple, where it will typically be off by
1%. But since 32-bit Apple is a relic anyway, I think we should be
fine.
The goal here is to replace our use of msec-based timestamps with
something less precise, but easier to calculate. We're doing this
because calculating lots of msec-based timestamps requires lots of
64/32 division operations, which can be inefficient on 32-bit
platforms.
We make sure that these stamps can be calculated using only the
coarse monotonic timer and 32-bit bitwise operations.
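For illustration, converting a coarse nanosecond reading into such a
stamp can be a single shift; the unit here (2^20 ns, about 1.05 ms) is
just an example, not necessarily the unit we actually picked:

    #include <stdint.h>

    /* Example stamp conversion: one stamp unit = 2^20 ns (~1.05 ms), so no
     * 64/32 division is needed, only a shift and a truncation. */
    static inline uint32_t
    coarse_ns_to_stamp(uint64_t ns)
    {
      return (uint32_t)(ns >> 20);
    }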
Sometimes when we call exit(), it's because the process is
completely hopeless: openssl has a broken AES-CTR implementation, or
the clock is in the 1960s, or something like that.
But sometimes, we should return cleanly from tor_main() instead, so
that embedders can keep embedding us and start another Tor process.
I've gone through all the exit() and _exit() calls to annotate them
with "exit ok" or "XXXX bad exit" -- the next step will be to fix
the bad exit()s.
First step towards 23848.
This came up on #21035, where somebody tried to build on a Linux
system whose kernel headers included CLOCK_MONOTONIC_COARSE, then
ran the binary on a kernel that didn't support it.
I've adopted a belt-and-suspenders approach here: we detect failures
at initialization time, and we also detect failures (loudly) later on.
Fixes bug 21035; bugfix on 0.2.9.1-alpha, when we started using
monotonic time.
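In rough outline (simplified, not the exact code), the check works like
this: probe the coarse clock once at startup, and fall back to
CLOCK_MONOTONIC if the running kernel rejects it:

    #include <time.h>

    static int coarse_clock_works = 0;

    /* Probe CLOCK_MONOTONIC_COARSE once at startup; the headers may define
     * it even when the running kernel refuses it. */
    static void
    coarse_clock_init(void)
    {
    #ifdef CLOCK_MONOTONIC_COARSE
      struct timespec ts;
      if (clock_gettime(CLOCK_MONOTONIC_COARSE, &ts) == 0)
        coarse_clock_works = 1;
    #endif
    }

    static void
    coarse_clock_get(struct timespec *out)
    {
    #ifdef CLOCK_MONOTONIC_COARSE
      if (coarse_clock_works) {
        if (clock_gettime(CLOCK_MONOTONIC_COARSE, out) == 0)
          return;
        /* The real code would log this late failure loudly before falling
         * back. */
      }
    #endif
      clock_gettime(CLOCK_MONOTONIC, out);
    }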
To convert to nanoseconds without losing precision, we were multiplying
our tick count by a billion, then dividing by ticks-per-second. But
that apparently isn't such a great idea, since ticks-per-second is
sometimes a billion on its own, so our intermediate result was
effectively counting in attoseconds.
When you're counting in attoseconds, you can only fit about 9
seconds into an int64_t, which is not so great for our purposes.
Instead, we now simplify the 1000000000/ticks-per-second fraction before
we start messing with nanoseconds. This has the potential to mess us
up if some future MS version declares that performance counters will
use 1,000,000,007 units per second, but let's burn that bridge when
we come to it.
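The simplification is just a one-time GCD reduction of the conversion
ratio, something along these lines (sketch, not the exact code):

    #include <stdint.h>

    #define NSEC_PER_SEC 1000000000

    static uint64_t
    gcd64(uint64_t a, uint64_t b)
    {
      while (b) {
        uint64_t t = a % b;
        a = b;
        b = t;
      }
      return a;
    }

    /* Reduce NSEC_PER_SEC/ticks_per_sec once at startup.  When the counter
     * already ticks a billion times per second, this collapses to 1/1, so
     * the intermediate product can no longer overflow int64_t. */
    static void
    setup_ratio(uint64_t ticks_per_sec, uint64_t *num, uint64_t *den)
    {
      uint64_t d = gcd64(NSEC_PER_SEC, ticks_per_sec);
      *num = NSEC_PER_SEC / d;
      *den = ticks_per_sec / d;
    }

    static int64_t
    ticks_to_nsec(int64_t ticks, uint64_t num, uint64_t den)
    {
      return ticks * (int64_t)num / (int64_t)den;
    }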
This code uses QueryPerformanceCounter() [**] on Windows,
mach_absolute_time() on OSX, clock_gettime() where available, and
gettimeofday() [*] elsewhere.
Timer types are stored in an opaque OS-specific format; the only
supported operation is to compute the difference between two timers.
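The API shape is roughly as follows (the field and function names here
are illustrative, not necessarily the real ones):

    #include <stdint.h>
    #include <time.h>

    /* Opaque, OS-specific timer value: callers never look inside it. */
    typedef struct monotime_t {
    #ifdef _WIN32
      int64_t pcount_;      /* QueryPerformanceCounter() ticks */
    #elif defined(__APPLE__)
      uint64_t abstime_;    /* mach_absolute_time() ticks */
    #else
      struct timespec ts_;  /* clock_gettime() or ratcheted gettimeofday() */
    #endif
    } monotime_t;

    void monotime_get(monotime_t *out);
    /* The only supported operation on two timer values: their difference. */
    int64_t monotime_diff_nsec(const monotime_t *start, const monotime_t *end);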
[*] As you know, gettimeofday() isn't monotonic, so we include
a simple ratchet function to ensure that it only moves forward.
[**] As you may not know, QueryPerformanceCounter() isn't actually
always as monotonic as you might like it to be, so we ratchet that
one too.
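A ratchet, in its simplest form, just refuses to let the value move
backwards (sketch only; the real code may compensate with an offset
rather than freezing the value):

    #include <stdint.h>

    static uint64_t ratchet_last = 0;

    /* Return a value that never decreases, even if the raw reading does. */
    static uint64_t
    ratchet(uint64_t raw)
    {
      if (raw < ratchet_last)
        return ratchet_last;
      ratchet_last = raw;
      return raw;
    }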
We also include a "coarse monotonic timer" for cases where we don't
actually need high-resolution time. This is GetTickCount{,64}() on
Windows, clock_gettime(CLOCK_MONOTONIC_COARSE) on Linux, and falls
back to regular monotonic time elsewhere.