To convert ticks to nanoseconds without losing precision, we were
multiplying our tick count by a billion, then dividing by
ticks-per-second. That turns out not to be such a great idea:
ticks-per-second is sometimes a billion on its own, so the
intermediate product was effectively measured in attoseconds.
When you're counting in attoseconds, you can only fit about 9
seconds into an int64_t, which is not so great for our purposes.
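
For a quick sanity check on that "about 9 seconds" figure (this is
just back-of-the-envelope arithmetic, not code from the library):

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* With a 10^9 ticks-per-second counter, the naive
           ticks * 1000000000 intermediate is measured in
           attoseconds (10^-18 s).  The longest interval whose
           attosecond count still fits in an int64_t: */
        int64_t max_seconds = INT64_MAX / 1000000000 / 1000000000;
        printf("naive conversion overflows after %lld s\n",
               (long long)max_seconds);   /* prints 9 */
        return 0;
    }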
Instead, we now reduce the 1000000000 / ticks-per-second fraction
to lowest terms before we start messing with nanoseconds; in the
troublesome case above it collapses to 1/1. This has the potential
to mess us up if some future MS version declares that performance
counters will use 1,000,000,007 units per second (a prime, so
nothing cancels), but let's burn that bridge when we come to it.
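
The reduction itself is just a GCD; a minimal sketch of the idea
(function names here are illustrative, not the library's actual
identifiers):

    #include <stdint.h>

    /* Greatest common divisor, for reducing the conversion
       fraction to lowest terms. */
    static int64_t gcd64(int64_t a, int64_t b)
    {
        while (b != 0) {
            int64_t t = a % b;
            a = b;
            b = t;
        }
        return a;
    }

    /* Convert ticks to nanoseconds via the reduced fraction
       (1000000000/g) / (ticks_per_second/g), keeping the
       intermediate product small whenever the frequency shares
       factors with 10^9. */
    static int64_t ticks_to_ns(int64_t ticks, int64_t ticks_per_second)
    {
        int64_t g = gcd64(1000000000, ticks_per_second);
        return ticks * (1000000000 / g) / (ticks_per_second / g);
    }

When ticks-per-second is exactly a billion, the fraction collapses
to 1/1 and the troublesome multiply disappears entirely.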
This code uses QueryPerformanceCounter() [**] on Windows,
mach_absolute_time() on OSX, clock_gettime() where available, and
gettimeofday() [*] elsewhere.
Timer types are stored in an opaque OS-specific format; the only
supported operation is to compute the difference between two timers.
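
The shape of the interface is roughly as follows; the type and
function names are illustrative, and this sketch assumes the
clock_gettime() backend:

    #include <stdint.h>
    #include <time.h>

    /* Opaque on purpose: callers may only take differences. */
    typedef struct { struct timespec ts; } monotime;

    static monotime monotime_now(void)
    {
        monotime t;
        clock_gettime(CLOCK_MONOTONIC, &t.ts);
        return t;
    }

    /* The only supported operation: nanoseconds elapsed from
       'start' to 'end'. */
    static int64_t monotime_diff_ns(monotime start, monotime end)
    {
        return (int64_t)(end.ts.tv_sec - start.ts.tv_sec) * 1000000000
             + (end.ts.tv_nsec - start.ts.tv_nsec);
    }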
[*] As you know, gettimeofday() isn't monotonic, so we include
a simple ratchet function to ensure that it only moves forward.
[**] As you may not know, QueryPerformanceCounter() isn't actually
always as monotonic as you might like it to be, so we ratchet that
one too.
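
In both cases the ratchet is just a sticky maximum. A sketch,
assuming a single-threaded caller (the real thing would need an
atomic or a lock around 'last'):

    #include <stdint.h>

    /* Clamp a possibly-backward-moving clock reading so that
       successive calls never decrease.  'last' persists across
       calls, e.g. as a per-clock static. */
    static int64_t ratchet_ns(int64_t *last, int64_t now)
    {
        if (now < *last)
            now = *last;   /* clock stepped backward: hold position */
        else
            *last = now;
        return now;
    }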
We also include a "coarse monotonic timer" for cases where we don't
actually need high-resolution time. This is GetTickCount{,64}() on
Windows and clock_gettime(CLOCK_MONOTONIC_COARSE) on Linux; elsewhere
it falls back to the regular monotonic timer.
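
A sketch of that selection on the POSIX side, with the fallback
compiled in where CLOCK_MONOTONIC_COARSE is unavailable (error
handling elided; the function name is hypothetical):

    #include <stdint.h>
    #include <time.h>

    /* Millisecond-resolution monotonic time; cheaper to read
       than the high-resolution clock when coarse ticks will do. */
    static int64_t coarse_monotonic_ms(void)
    {
        struct timespec ts;
    #ifdef CLOCK_MONOTONIC_COARSE
        if (clock_gettime(CLOCK_MONOTONIC_COARSE, &ts) != 0)
    #endif
            clock_gettime(CLOCK_MONOTONIC, &ts);  /* fall back */
        return (int64_t)ts.tv_sec * 1000 + ts.tv_nsec / 1000000;
    }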