Include the beginnings of a FAQ about which timer to use.

Nick Mathewson 2019-01-11 09:08:51 -05:00
parent 2f0e187191
commit 5dd926caa6


@@ -15,6 +15,102 @@
* of tens of milliseconds.
*/
/* Q: Should you use monotime or monotime_coarse as your source?
*
* A: Generally, you get better precision with monotime, but better
* performance with monotime_coarse.
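*
* For illustration, a minimal sketch of timing an interval both ways,
* assuming the monotime_get()/monotime_diff_usec() family of helpers
* declared later in this header:
*
*     monotime_t t1, t2;
*     monotime_coarse_t c1, c2;
*     monotime_get(&t1);           // precise: may cost a system call
*     monotime_coarse_get(&c1);    // coarse: usually just a memory read
*     do_work();                   // (hypothetical workload)
*     monotime_get(&t2);
*     monotime_coarse_get(&c2);
*     int64_t fine_usec  = monotime_diff_usec(&t1, &t2);
*     int64_t rough_msec = monotime_coarse_diff_msec(&c1, &c2);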
*
* Q: Should you use monotime_t or monotime_coarse_t directly? Should you use
* usec? msec? "stamp units?"
*
* A: Using monotime_t and monotime_coarse_t directly is most time-efficient,
* since no conversion needs to happen. But they can potentially use more
* memory than you would need for a usec/msec/"stamp unit" count.
*
* Converting to usec or msec on some platforms, and working with them in
* general, creates a risk of doing a 64-bit division. 64-bit division is
* expensive on 32-bit platforms, which still do exist.
*
* The "stamp unit" type is designed to give a type that is cheap to convert
* from monotime_coarse, has resolution of about 1-2ms, and fits nicely in a
* 32-bit integer. Its downside is that it does not correspond directly
* to a natural unit of time.
*
* There is not much point in using "coarse usec" or "coarse nsec", since the
* current coarse monotime implementations give you on the order of
* milliseconds of precision.
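*
* A rough sketch of the "stamp unit" workflow, assuming the
* monotime_coarse_to_stamp() and
* monotime_coarse_stamp_units_to_approx_msec() helpers declared later in
* this header:
*
*     monotime_coarse_t now;
*     monotime_coarse_get(&now);
*     uint32_t stamp = monotime_coarse_to_stamp(&now); // cheap; fits 32 bits
*     // ... store the stamp in a space-constrained structure ...
*     uint64_t approx_msec =
*       monotime_coarse_stamp_units_to_approx_msec(stamp); // only if needed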
*
* Q: So, what backends is monotime_coarse using?
*
* A: Generally speaking, it uses "whatever monotonic-ish time implementation
* does not require a context switch." The various implementations provide
* this by having a view of the current time in a read-only memory page that
* is updated with a frequency corresponding to the kernel's tick count.
*
* On Windows, monotime_coarse uses GetTickCount64() [or GetTickCount() on
* obsolete systems]. MSDN claims that the resolution is "typically in the
* range of 10-16 msec", but it has said that for years. Storing
* monotime_coarse_t uses 8 bytes.
*
* On OSX/iOS, monotime_coarse uses mach_approximate_time() where
* available, and falls back to regular monotime. The precision is not
* documented, but the implementation is open-source: it reads from a page
* that the kernel updates. Storing monotime_coarse_t uses 8 bytes.
*
* On unixy systems, monotime_coarse uses clock_gettime() with
* CLOCK_MONOTONIC_COARSE where available, and falls back to CLOCK_MONOTONIC.
* It typically uses vdso tricks to read from a page that the kernel updates.
* Its precision isn't fixed, but you can get it with clock_getres(): on my Linux
* desktop, it claims to be 1 msec, but it will depend on the system HZ
* setting. Storing monotime_coarse_t uses 16 bytes.
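*
* Roughly what that backend does on Linux (a sketch, not Tor's exact code):
*
*     struct timespec ts, res;
*     clock_gettime(CLOCK_MONOTONIC_COARSE, &ts); // vdso read; no syscall
*     clock_getres(CLOCK_MONOTONIC_COARSE, &res); // e.g. 1000000 ns == 1 msec
*     // (Where CLOCK_MONOTONIC_COARSE is not defined, substitute
*     //  CLOCK_MONOTONIC.)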
*
* [TODO: Try CLOCK_MONOTONIC_FAST on foobsd.]
*
* Q: What backends is regular monotonic time using?
*
* A: In general, regular monotime uses something that requires a system call.
* On platforms where system calls are cheap, you win! Otherwise, you lose.
*
* On Windows, monotonic time uses QueryPerformanceCounter. Storing
* monotime_t costs 8 bytes.
*
* On OSX/Apple, monotonic time uses mach_absolute_time. Storing
* monotime_t costs 8 bytes.
*
* On unixy systems, monotonic time uses CLOCK_MONOTONIC. Storing
* monotime_t costs 16 bytes.
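*
* For the curious, the Windows precise path is roughly this (a sketch, not
* Tor's exact code; note the division that the next question worries about):
*
*     LARGE_INTEGER count, freq;
*     QueryPerformanceCounter(&count);   // raw monotonic tick count
*     QueryPerformanceFrequency(&freq);  // ticks per second; never changes
*     int64_t usec = count.QuadPart * 1000000 / freq.QuadPart;
*     // ^ naive and overflow-prone, but it shows the 64-bit division that
*     //   makes this conversion expensive on 32-bit builds.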
*
* Q: Tell me about the costs of converting to a 64-bit nsec, usec, or msec
* count.
*
* A: Windows, coarse: Cheap, since it's all multiplication.
*
* Windows, precise: Expensive on 32-bit: it needs 64-bit division.
*
* Apple, all: Expensive on 32-bit: it needs 64-bit division.
*
* Unixy, all: Fairly cheap, since the only division required is dividing
* tv_nsec by 1000, and tv_nsec (always under a billion) fits in a 32-bit
* value. (See the sketch after this list.)
*
* All, "timestamp units": Cheap everywhere: it never divides.
*
* Q: This is only somewhat related, but how much precision could I hope for
* from a libevent timer?
*
* A: Actually, it's _very_ related if you're timing in order to have a
* timeout happen.
*
* On Windows, it uses select: you could in theory have a microsecond
* resolution, but it usually isn't that accurate.
*
* On OSX, iOS, and BSD, you have kqueue: You could in theory have a nanosecond
* resolution, but it usually isn't that accurate.
*
* On Linux, you have epoll: It has a millisecond resolution. Some recent
* Libevent versions can also use timerfd for higher resolution if
* EVENT_BASE_FLAG_PRECISE_TIMER is set; Tor doesn't set that flag.
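*
* Were Tor ever to want that flag, the Libevent 2 incantation would be
* roughly this (a sketch; not something Tor currently does):
*
*     struct event_config *cfg = event_config_new();
*     event_config_set_flag(cfg, EVENT_BASE_FLAG_PRECISE_TIMER);
*     struct event_base *base = event_base_new_with_config(cfg);
*     event_config_free(cfg);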
*/
#ifndef TOR_COMPAT_TIME_H
#define TOR_COMPAT_TIME_H