This exposes the new fallback behavior in hashx via a new AUTOBOOL
configuration option, available to both clients and services. The
default should be fine for nearly everyone, but it might be necessary
to enable or disable the compiler manually for diagnostic purposes.
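As a rough sketch of how such an option might be consumed (the option
plumbing and the EQUIX_CTX_MUST_COMPILE flag name here are assumptions;
only EQUIX_CTX_TRY_COMPILE is named later in this series):

    /* Hypothetical mapping from an AUTOBOOL setting to equix context
     * flags: 0 = interpreter only, 1 = require the compiler, auto =
     * prefer the compiler with automatic fallback. */
    #include "equix.h"

    static equix_ctx_flags
    pow_flags_from_autobool(int setting)  /* -1 = auto, 0 = off, 1 = on */
    {
      switch (setting) {
        case 0:  return (equix_ctx_flags) 0;
        case 1:  return EQUIX_CTX_MUST_COMPILE;
        default: return EQUIX_CTX_TRY_COMPILE;
      }
    }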
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
This change adapts the hs_pow layer and unit tests to API changes
in hashx and equix which modify the fault recovery responsibilities
and reporting behavior.
This and the corresponding implementation changes in hashx and equix
form the fix for #40794, both solving the segfault and giving hashx a
way to report those failures up the call chain without them being
mistaken for a different error (unusable seed) that would warrant a
retry.
To handle these new late compiler failures with a minimum of fuss or
inefficiency, the failover is delegated to the internals of hashx and
tor need only pass in an EQUIX_CTX_TRY_COMPILE flag to get the behavior
that tor was previously responsible for implementing.
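A minimal sketch of that delegation, assuming the equix_alloc() entry
point and the EQUIX_CTX_SOLVE flag from upstream Equi-X:

    /* Prefer the compiled hash function, but let equix/hashx fall back
     * to the interpreter internally if compilation fails at runtime. */
    #include "equix.h"

    static equix_ctx *
    pow_solver_ctx_new(void)
    {
      return equix_alloc(EQUIX_CTX_SOLVE | EQUIX_CTX_TRY_COMPILE);
    }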
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
This change adapts Equi-X to the corresponding HashX API changes that
added HASHX_TRY_COMPILE. The new regularized HashX return codes are
reflected by revised corresponding Equi-X return codes.
Both solve and verify operations now return an error/success code, and a
new equix_solutions_buffer struct includes both the solution buffer
and information about the solution count and hashx implementation.
With this change, it's possible to distinguish between hash construction
failures (invalid seed) and some external error like an mprotect()
failure.
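A sketch of what checking the new codes might look like; the enum
members and buffer fields here are assumptions loosely matching the
description above:

    #include <stdio.h>
    #include "equix.h"

    static int
    solve_and_report(equix_ctx *ctx, const void *challenge, size_t len)
    {
      equix_solutions_buffer output;
      equix_result res = equix_solve(ctx, challenge, len, &output);

      switch (res) {
        case EQUIX_OK:
          printf("found %u solution(s)\n", (unsigned) output.count);
          return 0;
        case EQUIX_FAIL_CHALLENGE:
          /* Hash construction failed (unusable seed): retry with a new
           * challenge rather than treating it as a hard error. */
          return 1;
        default:
          /* External failure, e.g. mprotect() denied with no fallback. */
          return -1;
      }
    }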
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
This is an API breaking change to hashx, which modifies the error
handling strategy. The main goal here is to allow unproblematic
recovery from hashx_compile failures.
hashx_alloc can no longer fail for reasons other than memory
allocation. All platform-specific compile failures are now reported via
hashx_make(), both to allow failures to surface later and to avoid requiring
users of the API to maintain and test multiple failure paths.
Note that late failures may be more common in actual use than early
failures. Early failures represent architectures other than x86_64 and
aarch64. Late failures could represent a number of system configurations
where syscalls are restricted.
The definition of a hashx context no longer tries to overlay storage for
the different types of program, and instead allows one context to always
contain an interpretable description of the program as well as an optional
buffer for compiled code.
The hashx_type enum is now used to mean either a specific type of hash
function or a type of hashx context. You can allocate a context for use
only with interpreted or compiled functions, or you can use
HASHX_TRY_COMPILE to prefer the compiler with an automatic fallback to
the interpreter. After calling hashx_make(), the new hashx_query_type()
can be used if needed to determine which implementation was actually
chosen.
The error return types have been overhauled so that everyone uses the
hashx_result enum, and seed failures vs compile failures are always
clearly distinguishable.
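Taken together, a caller might use the reworked API roughly as follows;
the exact signatures and any HASHX_FAIL_* member names beyond those
mentioned above are assumptions based on this description:

    #include <stdint.h>
    #include "hashx.h"

    static hashx_result
    hash_one(const void *seed, size_t seed_len, uint64_t input, void *out)
    {
      /* Prefer the compiler, fall back on the interpreter automatically. */
      hashx_ctx *ctx = hashx_alloc(HASHX_TRY_COMPILE);
      if (!ctx)
        return HASHX_FAIL_UNDEFINED;  /* memory allocation is the only
                                       * remaining early failure */
      hashx_result res = hashx_make(ctx, seed, seed_len);
      if (res == HASHX_OK) {
        hashx_type chosen;
        hashx_query_type(ctx, &chosen);  /* compiled or interpreted? */
        hashx_exec(ctx, input, out);
      }
      /* HASHX_FAIL_SEED: pick a new seed and call hashx_make() again.
       * HASHX_FAIL_COMPILE: only reachable without the TRY_COMPILE
       * fallback, e.g. a compiled-only context when mprotect() fails. */
      hashx_free(ctx);
      return res;
    }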
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
This is a minimal portion of the fix for tor issue #40794, in which
hashx segfaults due to denial of mprotect() syscalls at runtime.
Prior to this fix, hashx makes the assumption that if the JIT is
supported on the current architecture, it will also be usable at
runtime. This isn't true if mprotect() fails on Linux, which it may for
various reasons: the tor built-in sandbox, the shadow simulator, or
external security software that implements a syscall filter.
The necessary error propagation was missing internally in hashx,
causing us to obliviously call into code which was never made
executable. With this fix, hashx_make() will instead fail by returning
zero.
A proper fix will require API changes so that callers can distinguish
between different types of failures. Zero already means that a program
couldn't be constructed from the seed, which calls for a different
response (choosing a new seed) than a compile failure does (switching
implementations). Callers would also benefit
from a way to use one context (with its already-built program) to
run in either compiled or interpreted mode.
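For illustration, the ambiguity as it stands after this interim fix; a
hypothetical caller, using the existing int-returning hashx_make():

    #include "hashx.h"

    static int
    make_or_give_up(hashx_ctx *ctx, const void *seed, size_t seed_len)
    {
      if (!hashx_make(ctx, seed, seed_len)) {
        /* Zero could mean an unusable seed (retry with another seed) or
         * a failed JIT compile (switch implementations); the caller
         * can't tell which without the later API changes. */
        return -1;
      }
      return 0;
    }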
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
The code style in equix and hashx sometimes uses bitwise operators
in place of logical ones in cases where it doesn't really matter
either way. This sometimes annoys our static analyzer tools.
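For illustration only (not a quote from the code), the kind of pattern
in question:

    /* Both forms compute the same value for boolean operands, but the
     * bitwise form is what static analyzers tend to flag. */
    static int
    in_range(int x, int lo, int hi)
    {
      int bitwise = (x >= lo) & (x <= hi);   /* flagged */
      int logical = (x >= lo) && (x <= hi);  /* preferred */
      return bitwise == logical;             /* always 1 here */
    }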
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
This is an additional test case for test_sandbox that runs a small
subset of test_crypto_equix() inside the syscall sandbox, where
mprotect() is filtered.
It's reasonable for the sandbox to disallow JIT. We could revise this
policy if we want, but it seems a good default for now. The problem
in issue 40794 is that both equix and hashx need improvements in their
API to handle failures after allocation time, and this failure occurs
while the hash function is being compiled.
With this commit only, the segfault from issue 40794 is reproduced.
Subsequent commits will fix the segfault and revise the API.
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
This patch makes Tor log state transitions within the PT layer at the
info log-level. This should make it easier to figure out if Tor ends up
in a strange state.
See: tpo/core/tor#33669
This started as a response to ticket #40792 where Coverity is
complaining about a potential year 2038 bug where we cast time_t from
approx_time() to uint32_t for use in token_bucket_ctr.
There was a larger can of worms though, since token_bucket really
doesn't want to be using wallclock time here. I audited the call sites
for approx_time() and changed any that used a 32-bit cast or made
inappropriate use of wallclock time. Things like certificate lifetime,
consensus intervals, etc. need wallclock time. Measurements of rates
over time, however, are better served with a monotonic timer that does
not try to sync with the wallclock.
Looking closer at token_bucket, its design is a bit odd because it was
initially intended for use with tick units but later forked into
token_bucket_rw which uses ticks to count bytes per second, and
token_bucket_ctr which uses seconds to count slower events. The rates
represented by either token bucket can't be lower than 1 per second, so
the slower timer in 'ctr' is necessary to represent the slower rates of
things like connections or introduction packets or rendezvous attempts.
I considered modifying token_bucket to use 64-bit timestamps overall
instead of 32-bit, but that seemed like an unnecessarily invasive change
that would grant some peace of mind but probably not help much. I was
more interested in removing the dependency on wallclock time. The
token_bucket_rw timer already uses monotonic time. This patch converts
token_bucket_ctr to use monotonic time as well. It introduces a new
monotime_coarse_absolute_sec(), which is currently the same as nsec
divided by a billion but could be optimized easily if we ever need to.
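A sketch of the new helper, assuming the existing coarse nsec accessor
in compat_time; the real definition may differ in detail:

    #include <stdint.h>

    uint64_t monotime_coarse_absolute_nsec(void);  /* from compat_time */

    static inline uint64_t
    monotime_coarse_absolute_sec(void)
    {
      /* Coarse monotonic time in whole seconds; never syncs with the
       * wallclock, so it's safe for measuring rates over time. */
      return monotime_coarse_absolute_nsec() / 1000000000;
    }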
This patch might also fix a rollover bug. I haven't tested this
extensively but I don't think the previous version of the rollover code
on either token bucket was correct, and I would expect it to get stuck
after the first rollover.
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
This adds a bit more to hs_descriptor/test_decode_descriptor, mostly
testing pow-params and triggering the tor_assert() in issue #40793.
There was no mechanism for adding arbitrary test strings to the
encrypted portion of the desc without duplicating encode logic. One
option might be to publicize get_inner_encrypted_layer_plaintext enough
to add a mock implementation. In this patch I opt for what seems like
the simplest solution, at the cost of a small amount of #ifdef noise.
The unpacked descriptor grows a new test-only member that's used for
dropping arbitrary data in at encode time.
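A sketch of the #ifdef approach; the struct and member names below are
illustrative assumptions rather than the real descriptor types:

    /* Test-only member on the unpacked descriptor's encrypted layer,
     * compiled in only for unit tests and appended verbatim to the
     * plaintext at encode time. */
    typedef struct encrypted_layer_t {
      char *intro_auth_types;          /* ...the regular fields... */
    #ifdef TOR_UNIT_TESTS
      /** Test only: arbitrary extra plaintext, e.g. a bad pow-params
       * line, injected when the inner layer is encoded. */
      char *test_extra_plaintext;
    #endif /* defined(TOR_UNIT_TESTS) */
    } encrypted_layer_t;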
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
The descriptor validation table had an out of date minimum length
for pow-params (3) whereas the spec and the current code expect at
least 4 parameters. This was an opportunity for a malicious service
to cause an assert failure in clients which attempted to parse its
descriptor.
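The fix amounts to bumping the minimum argument count in the token
table; a sketch using tor's parsecommon macros (the identifiers here
are assumptions about the real table in hs_descriptor.c):

    #include "feature/dirparse/parsecommon.h"

    static const token_rule_t hs_desc_encrypted_v3_token_table[] = {
      /* ... */
      T01(str_pow_params, R3_POW_PARAMS, GE(4), NO_OBJ),  /* was GE(3) */
      /* ... */
      END_OF_TABLE
    };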
Addresses issue #40793
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
This should fix one of the warnings in issue #40792.
I was sloppy with freeing memory in the failure cases for
test_crypto_hashx. ASAN didn't notice but Coverity did. Okay, I'll eat
my vegetables and put hashx_ctx's deinit in an upper scope and use
'goto done' correctly like a properly diligent C programmer.
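In sketch form, the cleanup pattern referred to above: the context
lives in the outer scope so every failure path reaches the same
teardown through 'goto done':

    #include "hashx.h"
    #include "test/test.h"   /* tt_assert and the done: label convention */

    static void
    test_crypto_hashx_sketch(void *arg)
    {
      hashx_ctx *ctx = NULL;
      (void) arg;

      ctx = hashx_alloc(HASHX_TRY_COMPILE);
      tt_assert(ctx);
      /* ... exercise the context; failed assertions jump to done ... */

     done:
      if (ctx)
        hashx_free(ctx);   /* runs on success and failure alike */
    }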
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
This addresses one of the warnings in issue #40792. As far as I can tell
this is a false positive, since the use of "ctx->type" in hashx_free()
can only be hit after the unioned code/program pointer is non-NULL. It's
no big deal to zero this value explicitly to silence the warning though.
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
Switch rate limiting will likely be helpful for limiting OOQ, but according to
shadow it was the cause of slower performance in Hong Kong endpoints.
So let's disable it, and then optimize for OOQ later.
This is a protocol breaking change that implements nickm's
changes to prop 327 to add an algorithm personalization string
and blinded HS id to the EquiX challenge string for our onion
service client puzzle.
This corresponds with the spec changes in torspec!130,
and it fixes a proposed vulnerability documented in
ticket tor#40789.
Clients and services prior to this patch will no longer
be compatible with the proposed "v1" proof-of-work protocol.
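A sketch of a challenge built from the fields named above
(personalization string, blinded HS id, seed, nonce, effort); the
ordering, string value, and field widths here are placeholders, with
the authoritative layout in torspec!130:

    #include <stdint.h>
    #include <string.h>

    #define POW_PSTRING     "Tor hs intro v1"   /* placeholder value */
    #define POW_PSTRING_LEN (sizeof(POW_PSTRING) - 1)
    #define HS_ID_LEN       32   /* blinded service identity */
    #define SEED_LEN        32
    #define NONCE_LEN       16

    static size_t
    build_pow_challenge(uint8_t *out, const uint8_t *blinded_id,
                        const uint8_t *seed, const uint8_t *nonce,
                        uint32_t effort)
    {
      uint8_t *p = out;
      memcpy(p, POW_PSTRING, POW_PSTRING_LEN);  p += POW_PSTRING_LEN;
      memcpy(p, blinded_id, HS_ID_LEN);         p += HS_ID_LEN;
      memcpy(p, seed, SEED_LEN);                p += SEED_LEN;
      memcpy(p, nonce, NONCE_LEN);              p += NONCE_LEN;
      /* Effort encoded as a big-endian 32-bit integer. */
      p[0] = (uint8_t)(effort >> 24); p[1] = (uint8_t)(effort >> 16);
      p[2] = (uint8_t)(effort >> 8);  p[3] = (uint8_t) effort;
      return (size_t)((p + 4) - out);
    }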
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
This lets controller apps see the outgoing PoW effort on client
circuits, and the validated effort received on an incoming service
circuit.
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>