Fix a completely wrong calculation in mach monotime_init_internal()

Bug 1: We were purporting to calculate milliseconds per tick, when we
*should* have been computing ticks per millisecond.

Bug 2: Instead of computing either one of those, we were _actually_
computing femtoseconds per tick.

These two bugs covered for one another on x86 hardware, where 1 tick
== 1 nanosecond.  But on M1 OSX, 1 tick is about 41 nanoseconds,
causing surprising results.

Fixes bug 40684; bugfix on 0.3.3.1-alpha.
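
To make the two bugs concrete, here is a standalone sketch (not part of the patch) that runs both calculations side by side. The 125/3 timebase is an assumed, typical Apple Silicon value, and floor_log2() is a stand-in for tor's tor_log2():

#include <stdint.h>
#include <stdio.h>

#define ONE_MILLION 1000000

/* Stand-in for tor_log2(): floor(log2(x)), with 0 mapped to 0. */
static unsigned
floor_log2(uint64_t x)
{
  unsigned r = 0;
  while (x > 1) {
    x >>= 1;
    ++r;
  }
  return r;
}

int
main(void)
{
  /* Assumed Apple Silicon timebase: 125/3, i.e. ~41.67 ns per tick. */
  uint64_t numer = 125, denom = 3;

  /* Old code: ns per tick, then multiplied by one million.
   * That is femtoseconds per tick, not ticks per millisecond. */
  uint64_t ns_per_tick = numer / denom;               /* 41 (truncated) */
  uint64_t fs_per_tick = ns_per_tick * ONE_MILLION;   /* 41,000,000 */

  /* Fixed code: ticks per millisecond, multiplying before dividing. */
  uint64_t ticks_per_ms = (ONE_MILLION * denom) / numer;  /* 24,000 */

  printf("old shift: %u\n", floor_log2(fs_per_tick));   /* 25: far too coarse */
  printf("new shift: %u\n", floor_log2(ticks_per_ms));  /* 14 */
  return 0;
}

On x86, where the timebase is 1/1, both expressions come out to 1,000,000 and both paths pick shift 19, which is why the two bugs masked each other there.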
Nick Mathewson 2022-10-13 13:40:10 -04:00
parent d52a5f2181
commit e531d4d1b9
2 changed files with 13 additions and 4 deletions

changes/bug40684 (new file, 6 additions)
@@ -0,0 +1,6 @@
+  o Major bugfixes (OSX):
+    - Fix coarse-time computation on Apple platforms (like Mac M1) where
+      the Mach absolute time ticks do not correspond directly to
+      nanoseconds. Previously, we computed our shift value wrong, which
+      led us to give incorrect timing results.
+      Fixes bug 40684; bugfix on 0.3.3.1-alpha.

@@ -253,11 +253,14 @@ monotime_init_internal(void)
   tor_assert(mach_time_info.denom != 0);
   {
-    // approximate only.
-    uint64_t ns_per_tick = mach_time_info.numer / mach_time_info.denom;
-    uint64_t ms_per_tick = ns_per_tick * ONE_MILLION;
+    // We want to compute this, approximately:
+    //   uint64_t ns_per_tick = mach_time_info.numer / mach_time_info.denom;
+    //   uint64_t ticks_per_ms = ONE_MILLION / ns_per_tick;
+    // This calculation multiplies first, though, to improve accuracy.
+    uint64_t ticks_per_ms = (ONE_MILLION * mach_time_info.denom)
+      / mach_time_info.numer;
     // requires that tor_log2(0) == 0.
-    monotime_shift = tor_log2(ms_per_tick);
+    monotime_shift = tor_log2(ticks_per_ms);
   }
   {
     // For converting ticks to milliseconds in a 32-bit-friendly way, we
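
The "multiplies first, though, to improve accuracy" comment in the new code is about integer truncation. A minimal sketch, again assuming a 125/3 timebase, shows what the two evaluation orders produce:

#include <stdint.h>
#include <assert.h>

int
main(void)
{
  uint64_t numer = 125, denom = 3;  /* assumed Apple Silicon timebase */

  /* Dividing first truncates 125/3 down to 41 ns per tick, so the result drifts. */
  uint64_t divide_first = 1000000 / (numer / denom);    /* 24,390 */

  /* Multiplying first keeps full precision until the final division. */
  uint64_t multiply_first = (1000000 * denom) / numer;  /* 24,000, exact */

  assert(divide_first == 24390);
  assert(multiply_first == 24000);
  return 0;
}

In this particular example the floor-log2 shift comes out the same either way (14), so the truncation would not change the chosen shift, but multiplying first avoids the error at no cost.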