On slow systems, 1 msec between one read and the next was too tight. For
instance, it failed on armel with a 4 msec gap:
https://buildd.debian.org/status/package.php?p=tor&suite=experimental
Increase to 10 msec for now to accommodate slow systems. It is important that
we keep this OP_LE test in so we make sure the msec/usec/nsec reads aren't
desynchronized by huge gaps. We'll adjust again if we ever encounter a system
that takes more than 10 msec between calls.
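For reference, the shape of the check is roughly this (the helper and variable
names are stand-ins, not the actual test code):

    static void
    test_msec_gap(void *arg)
    {
      (void)arg;
      /* Two back-to-back reads must not be more than 10 msec apart.
       * read_msec_timestamp() is a placeholder for the real helper. */
      uint64_t t1 = read_msec_timestamp();
      uint64_t t2 = read_msec_timestamp();
      tt_u64_op(t2 - t1, OP_LE, 10);
     done:
      ;
    }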
Fixes #25113
Signed-off-by: David Goulet <dgoulet@torproject.org>
The goal here is to replace our use of msec-based timestamps with
something less precise, but easier to calculate. We're doing this
because calculating lots of msec-based timestamps requires lots of
64/32 division operations, which can be inefficient on 32-bit
platforms.
We make sure that these stamps can be calculated using only the
coarse monotonic timer and 32-bit bitwise operations.
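A rough sketch of what such a stamp calculation can look like (the shift
amount and names here are assumptions for illustration, not the final API):

    /* Derive a 32-bit "stamp" directly from the coarse monotonic timer's
     * raw tick count with a shift, instead of first converting to msec
     * (which needs a 64/32 division on 32-bit platforms). */
    static inline uint32_t
    ticks_to_stamp(uint64_t coarse_ticks)
    {
      return (uint32_t)(coarse_ticks >> 20);
    }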
OpenBSD doesn't like tricks where you use a too-wide sscanf argument
for a too-narrow array, even when you know the input string
statically. The fix here is just to use bigger buffers.
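The rejected pattern, and the fix, look roughly like this (buffer sizes and
variable names invented for the example):

    unsigned num = 0;
    char tp[4];
    /* OpenBSD objects here: "%10s" can store up to 11 bytes, which does not
     * fit in tp[4], even if we know the parsed string is actually short. */
    sscanf(line, "%10s %u", tp, &num);

    /* Fix: size the buffer for the widest string the format can produce. */
    char tp_ok[16];
    sscanf(line, "%10s %u", tp_ok, &num);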
Fixes 15582; bugfix on a3dafd3f58 in 0.2.6.2-alpha.
This patch fixes the operator usage in src/test/*.c to use the symbolic
operators instead of the normal C comparison operators.
This patch was generated using:
./scripts/coccinelle/test-operator-cleanup src/test/*.[ch]
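The rewrite is mechanical; a representative case (not an actual hunk from the
diff):

    /* Before: bare C comparison inside the test macro. */
    tt_int_op(smartlist_len(sl), ==, 3);

    /* After: symbolic operator, so tinytest can print both sides of the
     * comparison when the test fails. */
    tt_int_op(smartlist_len(sl), OP_EQ, 3);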
config_get_lines is now split into two functions:
- config_get_lines, which behaves the same as before we had %include
- config_get_lines_include, which actually processes %include directives
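Roughly (parameter lists here are approximate; see the source for the exact
prototypes):

    /* Parses a config string into a list of lines; does not interpret
     * %include, matching the old behavior. */
    int config_get_lines(const char *string, config_line_t **result,
                         int extended);

    /* Like the above, but also expands %include directives encountered
     * while parsing. */
    int config_get_lines_include(const char *string, config_line_t **result,
                                 int extended, int *has_include);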
These required some special-casing, since some of the assumptions
about real compression algorithms don't actually hold for the
identity transform. Specifically, we had assumed:
- compression functions typically change the lengths of their inputs
- decompression functions can detect truncated inputs
- compression functions have detectable headers
None of those is true for the identity transformation.
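So, in sketch form, the shared test code has to branch on the method (the
local names here are illustrative):

    /* Only real compressors have a detectable header and change the size
     * of their input; skip those checks for the identity transform. */
    if (method != NO_METHOD) {
      tt_int_op(detect_compression_method(out, out_len), OP_EQ, method);
      tt_uint_op(out_len, OP_NE, in_len);
    }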
There were two issues here: first, zstd didn't exhibit the right
behavior unless it got a very large input. That's fine.
The second issue was a genuine bug, fixed by 39cfaba9e2.
This patch refactors our compression tests such that deflate, gzip,
lzma, and zstd are all tested using the same code.
Additionally, we use run-time checks to see whether a given compression
method is supported, instead of relying on HAVE_LZMA and HAVE_ZSTD.
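In outline, the shared loop just probes each method at run time (sketch only;
the real test does more per method):

    static const compress_method_t methods[] = {
      GZIP_METHOD, ZLIB_METHOD, LZMA_METHOD, ZSTD_METHOD,
    };
    for (size_t i = 0; i < sizeof(methods) / sizeof(methods[0]); ++i) {
      if (!tor_compress_supports_method(methods[i]))
        continue;  /* Backend not compiled in; skip it. */
      /* ... run the common compress/decompress checks for methods[i] ... */
    }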
See: https://bugs.torproject.org/22085
Since we have a streaming API for each compression backend, we don't
need a non-streaming API for each: we can build a common
non-streaming API at the front-end.
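A sketch of the shape this takes (the streaming helpers named here are
placeholders, not the real API):

    /* The one-shot entry point is written once, in terms of a generic
     * streaming state, so each backend only has to provide the streaming
     * pieces. */
    static int
    compress_all_in_one(char **out, size_t *out_len,
                        const char *in, size_t in_len,
                        compress_method_t method)
    {
      stream_state_t *st = stream_state_new(method);       /* placeholder */
      int r = stream_state_run_to_end(st, out, out_len,    /* placeholder */
                                      in, in_len);
      stream_state_free(st);                                /* placeholder */
      return r;
    }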
This patch adds support for checking if a given `compress_method_t` is
supported by the currently running Tor instance using
`tor_compress_supports_method()`.
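Example usage (the fallback choice here is just an example):

    /* Pick LZMA only if this Tor binary was built with liblzma support;
     * otherwise fall back to a method that is always available. */
    compress_method_t method =
      tor_compress_supports_method(LZMA_METHOD) ? LZMA_METHOD : ZLIB_METHOD;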
See: https://bugs.torproject.org/21662
This patch refactors our streaming compression code to allow us to
extend it with non-zlib/non-gzip compression schemes.
See https://bugs.torproject.org/21663
To allow us to use the API names `tor_compress` and `tor_uncompress` as the
main entry points for all compression and decompression, not just gzip
and zlib.
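With that rename, a caller selects the backend by passing a method value;
roughly (the parameter list shown is approximate):

    char *out = NULL;
    size_t out_len = 0;
    /* Same entry point regardless of backend; the method argument picks
     * zlib, gzip, lzma, or zstd. */
    if (tor_compress(&out, &out_len, in, in_len, ZSTD_METHOD) < 0) {
      /* handle the error */
    }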
See https://bugs.torproject.org/21663
This patch changes some of the tt_assert() usage in test_util_gzip() to
use tt_int_op(), to get better error messages upon failure.
Additionally, we move to explicit NULL checks.
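The change is of this shape (the locals are illustrative):

    /* Before: failures just say "assertion failed". */
    tt_assert(r == 0);
    tt_assert(buf2);

    /* After: tt_int_op() prints both sides on failure, and the NULL check
     * is explicit. */
    tt_int_op(r, OP_EQ, 0);
    tt_ptr_op(buf2, OP_NE, NULL);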
See https://bugs.torproject.org/21663
This patch fixes a regression described in bug #21757 that first
appeared after commit 6e78ede73f, which was an attempt to fix bug #21654.
When switching from buffered I/O to direct file descriptor I/O, the
output strings from get_string_from_pipe() might contain newline
characters (\n). In this patch we modify tor_get_lines_from_handle() to
split the newly read string at newline characters, so that it can
return multiple lines from a single call to get_string_from_pipe().
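In sketch form, the splitting step looks like this (the buffer and list names
are illustrative, not the real code):

    /* Split the chunk we just read at every '\n' so each piece becomes its
     * own line for the caller. */
    const char *start = buf;
    const char *nl;
    while ((nl = strchr(start, '\n')) != NULL) {
      smartlist_add(lines, tor_strndup(start, nl - start));
      start = nl + 1;
    }
    if (*start)
      smartlist_add(lines, tor_strdup(start));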
Additionally, we add a test case to test_util_string_from_pipe() to
ensure that get_string_from_pipe() correctly returns multiple lines in a
single call.
See: https://bugs.torproject.org/21757
See: https://bugs.torproject.org/21654