Our main function, though accurate on all platforms, can be very
slow on 32-bit hosts. This one is faster on all 32-bit hosts, and
accurate everywhere except Apple, where it will typically be off by
1%. But since 32-bit Apple is a relic anyway, I think we should be
fine.
The goal here is to replace our use of msec-based timestamps with
something less precise, but easier to calculate. We're doing this
because calculating lots of msec-based timestamps requires lots of
64-bit-by-32-bit division operations, which can be inefficient on
32-bit platforms.
We make sure that these stamps can be calculated using only the
coarse monotonic timer and 32-bit bitwise operations.
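
Concretely, the stamp can be something like this minimal sketch (the
function name and the exact unit are made up for illustration, and I'm
assuming the Linux CLOCK_MONOTONIC_COARSE backend here):

    #include <stdint.h>
    #include <time.h>

    /* Hypothetical sketch: a 32-bit "stamp" in units of roughly
     * 1/1024 of a second, computed from the coarse monotonic clock
     * with only shifts and adds -- no 64-bit division anywhere.
     * (2^20 ns is about 1.05 ms, so the unit is approximate, and the
     * value wraps after about 48 days; callers should only ever
     * subtract two stamps, never treat one as an absolute time.) */
    static uint32_t
    coarse_stamp_now(void)
    {
      struct timespec ts;
      clock_gettime(CLOCK_MONOTONIC_COARSE, &ts);
      return ((uint32_t)ts.tv_sec << 10) + ((uint32_t)ts.tv_nsec >> 20);
    }
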
This code uses QueryPerformanceCounter() [**] on Windows,
mach_absolute_time() on OSX, clock_gettime() where available, and
gettimeofday() [*] elsewhere.
Timer values are stored in an opaque, OS-specific format; the only
supported operation is to compute the difference between two timers.
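
As a rough sketch of what that interface looks like (the names and
field layout here are illustrative, not a promise about the actual
code):

    #include <stdint.h>
    #ifdef _WIN32
    #include <windows.h>
    #elif defined(__APPLE__)
    #include <mach/mach_time.h>
    #else
    #include <time.h>
    #endif

    /* Opaque, OS-specific timer value; callers must not look inside. */
    typedef struct monotime_t {
    #ifdef _WIN32
      LARGE_INTEGER pcount_;   /* QueryPerformanceCounter() ticks */
    #elif defined(__APPLE__)
      uint64_t abstime_;       /* mach_absolute_time() units */
    #else
      struct timespec ts_;     /* clock_gettime() or ratcheted
                                * gettimeofday() */
    #endif
    } monotime_t;

    /* Fill *out with the current time. */
    void monotime_get(monotime_t *out);

    /* The only supported operation: (end - start), in microseconds. */
    int64_t monotime_diff_usec(const monotime_t *start,
                               const monotime_t *end);
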
[*] As you know, gettimeofday() isn't monotonic, so we include
a simple ratchet function to ensure that it only moves forward.
[**] As you may not know, QueryPerformanceCounter() isn't actually
always as monotonic as you might like it to be, so we ratchet that
one too.
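
The ratchet itself doesn't need to be clever; a sketch of the idea
(not the actual code) is just to remember the largest value we've
handed out and refuse to go below it:

    #include <stdint.h>

    /* Illustrative ratchet: if the underlying clock jumps backward,
     * grow a correction offset so the values we report never
     * decrease.  (A real version would also need to be thread-safe.) */
    static uint64_t
    ratchet_usec(uint64_t raw_usec)
    {
      static uint64_t last_reported = 0;
      static uint64_t offset = 0;

      uint64_t adjusted = raw_usec + offset;
      if (adjusted < last_reported) {
        offset += last_reported - adjusted;
        adjusted = last_reported;
      }
      last_reported = adjusted;
      return adjusted;
    }
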
We also include a "coarse monotonic timer" for cases where we don't
actually need high-resolution time. This is GetTickCount{,64}() on
Windows and clock_gettime(CLOCK_MONOTONIC_COARSE) on Linux; elsewhere
it falls back to regular monotonic time.
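
For illustration, the backend selection for the coarse timer might
look something like this (a sketch with a made-up helper name, not
the real code):

    #include <stdint.h>
    #ifdef _WIN32
    #include <windows.h>
    #else
    #include <time.h>
    #endif

    /* Hypothetical helper: milliseconds from whatever coarse clock
     * the platform offers, falling back to the regular monotonic
     * clock when there is no cheaper one. */
    static uint64_t
    coarse_monotonic_msec(void)
    {
    #ifdef _WIN32
      return (uint64_t)GetTickCount64();
    #elif defined(CLOCK_MONOTONIC_COARSE)
      struct timespec ts;
      clock_gettime(CLOCK_MONOTONIC_COARSE, &ts);
      return (uint64_t)ts.tv_sec * 1000 + (uint64_t)ts.tv_nsec / 1000000;
    #else
      struct timespec ts;
      clock_gettime(CLOCK_MONOTONIC, &ts);
      return (uint64_t)ts.tv_sec * 1000 + (uint64_t)ts.tv_nsec / 1000000;
    #endif
    }
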