Switch rate limiting will likely be helpful for limiting out-of-order
queuing (OOQ), but according to Shadow it was the cause of slower
performance at Hong Kong endpoints.
So let's disable it for now, and optimize for OOQ later.
This is a protocol-breaking change that implements nickm's
changes to prop 327 to add an algorithm personalization string
and blinded HS id to the EquiX challenge string for our onion
service client puzzle.
This corresponds with the spec changes in torspec!130,
and it fixes a proposed vulnerability documented in
ticket tor#40789.
Clients and services prior to this patch will no longer
be compatible with the proposed "v1" proof-of-work protocol.
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
This lets controller apps see the outgoing PoW effort on client
circuits, and the validated effort received on an incoming service
circuit.
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
This error path with the "PoW cpuworker returned with no solution.
Will retry soon." message was usually lying. It's concerning
now because we expect to always find a solution no matter how
long it takes, rather than re-enter the solver repeatedly, so any
exit without a solution is a sign of a problem.
In fact when this error path gets hit, we are usually missing a
circuit instead because the request is quite old and the circuits
have been destroyed. This is not an emergency, it's just a sign
of client-side overload.
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
I think we're done with these?
Also swap in a nonfatal assert to replace one of the comments.
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
This dequeue path has been through a few revisions by now, first
limiting us to a fixed number per event loop callback, then an
additional limit based on a token bucket, then the current version
which has only the token bucket.
The thinking behind processing multiple requests per callback was to
optimize our usage of libevent, but in effect this creates a
prioritization problem. I think even a small fixed limit would be less
reliable than just backing out this optimization and always allowing
other callbacks to interrupt us in-between dequeues.
With this patch I'm seeing much smoother queueing behavior when I add
artificial delays to the main thread in testing.
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
This centralizes the logic for deciding on these magic thresholds,
and tries to reduce them to just two: a min and max. The min should be a
"nearly empty" threshold, indicating that the queue only contains work
we expect to be able to complete very soon. The max level triggers a
bulk culling process that reduces the queue to half that amount.
This patch calculates both thresholds based on the torrc pqueue rate
settings if they're present, and uses generic defaults if the user asked
for an unlimited dequeue rate in torrc.
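In sketch form, the idea looks something like this (the constants and
names here are illustrative assumptions, not the patch's exact values):
```
/* Illustrative only: derive the "nearly empty" min and the culling max
 * from the configured dequeue rate; fall back to generic defaults when
 * the rate is unlimited (0). */
static void
pqueue_thresholds(unsigned rate_per_sec, unsigned *min_out, unsigned *max_out)
{
  if (rate_per_sec == 0) {
    *min_out = 16;                 /* generic defaults for unlimited rate */
    *max_out = 16384;
  } else {
    *min_out = rate_per_sec / 4;   /* roughly 250ms of completable work */
    *max_out = rate_per_sec * 8;   /* culling reduces the queue to max/2 */
  }
  if (*min_out < 1)
    *min_out = 1;
}
```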
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
The worker job queue for hs_pow needs what's effectively a weak pointer
to two circuits, but there's not a generic mechanism for this in c-tor.
The previous approach of circuit_get_by_global_id() is straightforward
but not efficient. These global IDs are normally only used by the
control port protocol. To reduce the number of O(N) lookups we have over
the whole circuit list, we can use hs_circuitmap to look up the rend
circuit by its auth cookie.
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
This is trying to be an AIMD event-driven algorithm, but we ended up with
two different add paths with diverging behavior. This fix makes the AIMD
events more explicit, and it fixes an earlier behavior where the effort
could be decreased (by the add/recalculate branch) even when the pqueue
was not emptying at all. With this patch we shouldn't drop down to an
effort of zero as long as even low-effort attacks are flooding the
pqueue.
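A minimal sketch of the intended shape, with event and constant names as
assumptions rather than the patch's actual identifiers:
```
#include <stdint.h>

#define EFFORT_INCREMENT 1  /* assumed additive step */

typedef enum { PQUEUE_EMPTIED, PQUEUE_OVERFLOWED } aimd_event_t;

/* Explicit AIMD events: additive increase only under sustained load,
 * multiplicative decrease only when the pqueue actually drains. */
static void
update_suggested_effort(aimd_event_t ev, uint32_t *effort)
{
  switch (ev) {
    case PQUEUE_OVERFLOWED:
      *effort += EFFORT_INCREMENT;
      break;
    case PQUEUE_EMPTIED:
      *effort /= 2;
      break;
  }
}
```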
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
The goal of this patch is to add an additional mechanism for adjusting
PoW effort upwards, where clients rather than services can choose to
solve their puzzles at a higher effort than what was suggested in the
descriptor.
I wanted to use hs_cache's existing unreachability stats to drive this
effort bump, but this revealed some cases where a circuit (intro or
rend) closed early on can end up in hs_cache with an all zero intro
point key, where nobody will find it. This moves intro_auth_pk
initialization earlier in a couple places and adds nonfatal asserts to
catch the problem if it shows up elsewhere.
The actual effort adjustment method I chose is to multiply the suggested
effort by (1 + unresponsive_count), then ensure the result is at least
1. If a service has suggested effort of 0 but we fail to connect,
retries will all use an effort of 1. If the suggestion was 50, we'll try
50, 100, 150, 200, etc. This is bounded both by our client effort limit
and by the limit on unresponsive_count (currently 5).
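A minimal sketch of that retry scaling; the two cap constants are
stand-ins for the real limits described above:
```
#include <stdint.h>

#define ASSUMED_CLIENT_MAX_EFFORT 10000  /* stand-in for the client limit */
#define ASSUMED_MAX_UNRESPONSIVE 5       /* limit on unresponsive_count */

static uint32_t
client_pow_effort(uint32_t suggested, unsigned unresponsive_count)
{
  if (unresponsive_count > ASSUMED_MAX_UNRESPONSIVE)
    unresponsive_count = ASSUMED_MAX_UNRESPONSIVE;
  uint32_t effort = suggested * (1 + unresponsive_count);
  if (effort < 1)
    effort = 1;   /* effort-0 services still get effort 1 on retry */
  if (effort > ASSUMED_CLIENT_MAX_EFFORT)
    effort = ASSUMED_CLIENT_MAX_EFFORT;
  return effort;
}
```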
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
Asan catches this pretty readily when ending a service gracefully while
a DoS is in progress and the queue is full of items that haven't yet
timed out.
The module boundaries in hs_circuit are quite fuzzy here, but I'm trying
to follow the vibe of the existing hs_pow code.
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
500 was quite low, but this limit was helpful when the suggested-effort
estimation algorithm was likely to give us large abrupt increases. Now
that this should be fixed, let's allow spending a bit more time on the
client puzzles if it's actually necessary.
Solving a puzzle with effort=10000 usually completes within a minute
on my old x86_64 machine. We may want to fine-tune this further, and it
should probably be made into a config option.
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
I don't think the concept of "minimum effort" is really useful to us,
so this patch removes it entirely and consequently changes the way
that "total" effort is calculated, so that we don't rely on any minimum
and we instead ramp up effort no faster than necessary.
If at least some portion of the attack is conducted by clients that
avoid PoW or provide incorrect solutions, those (potentially very
cheap) attacks will end up keeping the pqueue full. Prior to this patch,
that would cause suggested efforts to be unnecessarily high, because
rounding these very cheap requests up to even a minimum of 1 will
overestimate how much actual attack effort is being spent.
The result is that this patch is a simplification and it also allows a
slower start, where PoW effort jumps up either by a single unit or by an
amount calculated from actual effort in the queue.
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
This patch is intended to clarify the points at which we convert
between the internal representation of an equix_solution and a portable
but opaque byte array representation.
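For illustration, assuming a solution is a fixed array of 16-bit indices
encoded little-endian (the sizes here are assumptions):
```
#include <stdint.h>

#define ASSUMED_NUM_IDX 8  /* assumed index count per solution */

/* One well-defined conversion point: internal indices to a portable,
 * opaque little-endian byte array. */
static void
solution_to_bytes(const uint16_t idx[ASSUMED_NUM_IDX],
                  uint8_t out[ASSUMED_NUM_IDX * 2])
{
  for (int i = 0; i < ASSUMED_NUM_IDX; i++) {
    out[i*2]     = (uint8_t)(idx[i] & 0xff);
    out[i*2 + 1] = (uint8_t)(idx[i] >> 8);
  }
}
```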
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
This fixes a failure that was showing up on i386 Debian hosts
with sandboxing enabled, now that cpuworker is enabled on clients.
We already had allowances for creating threads and creating stacks
in the sandbox, but prot_none (probably used for a stack guard)
was not allowed so thread creation failed.
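The shape of the allowance, sketched with plain libseccomp calls rather
than tor's own sandbox layer, so treat the exact rule as an assumption:
```
#include <seccomp.h>
#include <sys/mman.h>

/* Allow mmap() with PROT_NONE so pthread can create stack guard pages. */
static int
allow_prot_none_mmap(scmp_filter_ctx ctx)
{
  return seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(mmap), 1,
                          SCMP_A2(SCMP_CMP_EQ, PROT_NONE));
}
```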
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
This leak was showing up in address sanitizer runs of test_hs_pow,
but it will also happen during normal operation as seeds are rotated.
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
This is more consistent with the specification, and it's much
less confusing with endianness. This resolves the underlying
cause of the earlier byte-swap. This patch itself does not
change the wire protocol at all, it's just tidying up the
types we use at the trunnel layer.
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
We were using a native uint128_t to represent the hs_pow nonce,
but as the comments note it's more portable and more flexible to
use a byte array. Indeed the uint128_t was a problem for 32-bit
platforms. This swaps in a new implementation that uses multiple
machine words to implement the nonce incrementation.
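A byte-wise sketch of the idea; the real patch uses machine words, but
the carry logic is the same:
```
#include <stdint.h>
#include <stddef.h>

/* Increment a little-endian nonce stored as a byte array, propagating
 * the carry; wraps around to zero if every byte overflows. */
static void
nonce_increment(uint8_t *nonce, size_t len)
{
  for (size_t i = 0; i < len; i++) {
    if (++nonce[i] != 0)
      break;  /* no carry into the next byte; done */
  }
}
```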
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
In proposal 327, "POW_SEED is the first 4 bytes of the seed used".
The proposal doesn't specifically mention the data type of this field,
and the code in hs_pow so far treats it as an integer but semantically
it's more like the first four bytes of an already-encoded little-endian
blob. This leads to a byte swap, since the type confusion takes place
in a little-endian subsystem but the wire encoding of seed_head uses
tor's default of big endian.
This patch does not address the underlying type confusion, it's a
minimal change that only swaps the byte order and updates unit tests
accordingly. Further changes will clean up the data types.
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
Much faster per hash; affects both verify and solve.
Only implemented on x86_64 and aarch64; other platforms
always use the interpreted version of hashx.
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
This adds test vectors for the overall client puzzle at the
hs_pow and hs_cell layers.
These are similar to the crypto/equix tests, but they also cover
particulars of our hs_pow format like the conversion to byte arrays,
the replay cache, the effort test, and the formatting of the equix
challenge string.
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
Fixes some type nitpicks that show up in Tor development builds,
which usually run with -Wall -Werror. Tested on x86_64 and aarch64
for clean build and passing equix-tests + hashx-tests.
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
This replaces the sketchy cmake invocation we had inside configure.
The libs are always built and always used in unit tests, but only
included in libtor and tor when --enable-gpl is set.
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
This forgoes another external library dependency, and instead
introduces a compatibility header so that interested parties
(who already depend on equix, like hs_pow and unit tests) can
use the implementation of blake2b included in hashx.
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
This adds test vectors for the Equi-X proof of work algorithm and the
Hash-X function it's based on. The overall Equi-X test takes about
10 seconds to run on my machine, so it's in test_crypto_slow. The hashx
test still covers both the compiled and interpreted versions of the
hash function.
There aren't any official test vectors for Equi-X or for its particular
configuration of Hash-X, so I made some up based on the current
implementation.
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
I'm planning on swapping blake2b implementations, and this test
is intended to prevent regressions. Right now blake2b is only used by
hs_pow.
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
This adds a new "pow" module in ./configure for the user-visible proof
of work support; disabling the module removes
src/feature/hs/hs_pow at compile time.
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
This change on its own doesn't use the option for anything, but
it includes support for configure and a message in 'tor --version'.
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
This was apparently misinterpreting "zero solutions" as an error
instead of just moving on to the next nonce. Additionally, equix
could have been returning up to 8 solutions and we would only
give one of those a chance to succeed.
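A sketch of the fixed solver loop; all types and helpers here are
hypothetical stand-ins, not the equix API:
```
#include <stdint.h>

/* Hypothetical stand-ins; not the equix API. */
typedef struct { uint16_t idx[8]; } solution_t;
typedef struct { uint8_t bytes[32]; } challenge_t;
typedef struct { uint8_t bytes[16]; } nonce_t;

extern int solve_for_nonce(const challenge_t *c, const nonce_t *n,
                           solution_t sols[8]);  /* returns 0..8 solutions */
extern int solution_meets_effort(const solution_t *s, uint32_t effort);
extern void increment_nonce(nonce_t *n);

/* Zero solutions is not an error, just move on to the next nonce; and
 * every returned solution gets a chance to pass the effort check. */
static int
solve_pow(const challenge_t *chal, nonce_t *nonce, uint32_t effort,
          solution_t *out)
{
  solution_t sols[8];
  for (;;) {
    int n = solve_for_nonce(chal, nonce, sols);
    for (int i = 0; i < n; i++) {
      if (solution_meets_effort(&sols[i], effort)) {
        *out = sols[i];
        return 0;
      }
    }
    increment_nonce(nonce);
  }
}
```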
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
We may want to choose something larger eventually, but 20 seemed
much too large. Very low nonzero efforts are still useful against
a script kiddie level DoS attack.
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
This adds a token bucket ratelimiter on the dequeue side
of hs_pow's priority queue. It adds config options and docs
for those options. (HiddenServicePoWQueueRate/Burst)
I'm testing this as a way to limit the overhead of circuit
creation when we're experiencing a flood of rendezvous requests.
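A self-contained sketch of the dequeue-side bucket; field names and units
are assumptions, the real knobs are the two torrc options above:
```
#include <math.h>
#include <stdint.h>

typedef struct {
  double tokens;            /* current token count */
  double rate;              /* HiddenServicePoWQueueRate, per second */
  double burst;             /* HiddenServicePoWQueueBurst */
  uint64_t last_refill_ms;
} pow_bucket_t;

/* Returns 1 if we may dequeue one rendezvous request now, else 0. */
static int
pow_bucket_take(pow_bucket_t *b, uint64_t now_ms)
{
  double elapsed_sec = (now_ms - b->last_refill_ms) / 1000.0;
  b->tokens = fmin(b->burst, b->tokens + elapsed_sec * b->rate);
  b->last_refill_ms = now_ms;
  if (b->tokens < 1.0)
    return 0;               /* rate-limited: leave the request queued */
  b->tokens -= 1.0;
  return 1;
}
```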
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
Without this check, we never actually refetch the hs descriptor
when PoW parameters expire, because can_client_refetch_desc
deems the descriptor to be still good.
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
Adds two new metrics for hs_pow, and an internal parameter within
hs_metrics for implementing gauge parameters that reset before
every update.
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
We mark the intro circuit with a new flag saying that the pow is
in the cpuworker queue. When the cpuworker comes back, it either
has a solution, in which case we proceed with sending the intro1
cell, or it has no solution, in which case we unmark the intro
circuit and let the whole process restart on the next iteration of
connection_ap_handshake_attach_circuit().
This splits the introduce1 sending logic into two parts:
* a "consider whether to send an introduce1 cell" part (now called
consider_sending_introduce1()), and
* an "actually send it" part (now called send_introduce1()).
This prepares the way for client-side PoW cpuworkers.
It also happens to resolve bug https://bugs.torproject.org/tpo/core/tor/40617
(which went into 0.4.7.4-alpha), because now we survive initializing the
cpuworker subsystem when we're not a relay.
First (both client and service), make descriptor parsing not fail when
suggested_effort is 0.
Second (client side), if we get a descriptor with a pow_params section
but with suggested_effort of 0, treat it as not requiring a pow.
Third (service side), when deciding whether the suggested effort has
changed, don't treat "previous suggested effort 0, new suggested effort 0"
as a change.
An alternative design to resolve 'first' and 'second' above would be
to omit the pow_params from the descriptor when suggested_effort is 0,
so clients never see the pow_params so they don't compute a pow. But
I decided to include a pow_params with an explicit suggested_effort
of 0, since this way the client knows the seed etc so they can solve
a higher-effort pow if they want. The tradeoff is that the descriptor
reveals whether HiddenServicePoWDefensesEnabled is set to 1 for this onion
service, even if the AIMD calculation is currently requiring effort 0.
Our pqueue implementation does bizarre unspecified things with the
ordering of elements that are equal. It certainly doesn't provide the
"first in, first out" property I was expecting.
Now we make it explicit, by saying that "equal-effort, added-earlier" is
higher priority.
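A sketch of that comparator; the struct and field names are assumptions:
```
#include <stdint.h>

typedef struct {
  uint32_t effort;          /* validated PoW effort */
  uint64_t enqueued_order;  /* monotonic counter assigned at enqueue time */
} pending_req_t;

/* Higher effort sorts first; among equal efforts, the request that was
 * added earlier sorts first, making the FIFO tie-break explicit. */
static int
compare_pending_req(const void *a_, const void *b_)
{
  const pending_req_t *a = a_, *b = b_;
  if (a->effort != b->effort)
    return a->effort > b->effort ? -1 : 1;
  if (a->enqueued_order != b->enqueued_order)
    return a->enqueued_order < b->enqueued_order ? -1 : 1;
  return 0;
}
```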
Specifically, if we have 16 in-flight rend circs, and the next
one at the top of the pqueue has effort lower than our suggested effort,
then don't launch it yet.
This way we always launch adequate-effort requests immediately, we
always handle *some* low-effort requests, and we are ready at any
moment to handle a few new adequate-effort requests.
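In sketch form, with the helper names and the in-flight constant as
assumptions:
```
#define ASSUMED_MAX_REND_INFLIGHT 16

/* Hypothetical helpers standing in for the real pqueue accessors. */
extern int n_inflight_rend_circs(void);
extern unsigned top_of_pqueue_effort(void);
extern unsigned suggested_effort(void);

/* Hold back a low-effort request while we're saturated, so capacity
 * stays free for adequate-effort requests arriving at any moment. */
static int
ok_to_launch_next_rend(void)
{
  if (n_inflight_rend_circs() >= ASSUMED_MAX_REND_INFLIGHT &&
      top_of_pqueue_effort() < suggested_effort())
    return 0;
  return 1;
}
```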
This change makes us reach the callback *after* each mainloop
run, rather than as the next event to run immediately after
activation.
With the old behavior, we were starving everything else in order to
drain the pqueue entirely, each time we got a new intro2 cell.
Now we at least get to other activities as well.
Now we let ourselves queue up to twice as many requests as we expect to
handle, and when we reach the limit we make a new pqueue and move over
the first n elements that we like most.
(The old approach, calling SMARTLIST_DEL_CURRENT_KEEPORDER() on
elements in a pqueue, would destroy its heapify property.)
We also discard elements that are too old, either during the trimming
process or when they come up as the next request to respond to.
Lastly, this fixes a fencepost error in how many rend reqs we would
handle per iteration.
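A sketch of the rebuild-style trim, using tor's smartlist pqueue helpers
(assumed to be in scope via tor's container headers) but with assumed
element and helper names:
```
#include <stddef.h>
#include <stdint.h>
#include <time.h>

/* Assumed element layout; "pqueue_idx" is the heap-index field that
 * tor's smartlist pqueue helpers require. */
typedef struct {
  int pqueue_idx;
  uint32_t effort;
  time_t received_at;
} pending_req_t;

/* Hypothetical helpers; the smartlist_pqueue_* calls are tor's own. */
extern int compare_pending_req(const void *a, const void *b);
extern int request_is_too_old(const pending_req_t *req);
extern void free_pending_req(pending_req_t *req);

static smartlist_t *
trim_pqueue(smartlist_t *old_pq, int keep_n)
{
  smartlist_t *new_pq = smartlist_new();
  /* Pop the entries we like most into a fresh heap; deleting in place
   * would break heap order. */
  while (smartlist_len(old_pq) > 0 && smartlist_len(new_pq) < keep_n) {
    pending_req_t *req =
      smartlist_pqueue_pop(old_pq, compare_pending_req,
                           offsetof(pending_req_t, pqueue_idx));
    if (request_is_too_old(req))
      free_pending_req(req);   /* stale entries are discarded during trim */
    else
      smartlist_pqueue_add(new_pq, compare_pending_req,
                           offsetof(pending_req_t, pqueue_idx), req);
  }
  /* Whatever remains is the excess we are culling. */
  while (smartlist_len(old_pq) > 0)
    free_pending_req(smartlist_pqueue_pop(old_pq, compare_pending_req,
                                 offsetof(pending_req_t, pqueue_idx)));
  smartlist_free(old_pq);
  return new_pq;
}
```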
If PoW is enabled, use a priority queue by effort for the rendezvous
requests hooked into the mainloop.
Signed-off-by: David Goulet <dgoulet@torproject.org>
When parsing an INTRODUCE2 cell, we extract data in order to launch the
rendezvous circuit. This commit creates a data structure just for that
data, so future commits for prop327 can copy it onto a priority queue
instead of the whole intro data structure, which contains pointers that
could disappear.
Signed-off-by: David Goulet <dgoulet@torproject.org>
At this commit, the tor main loop solves it. We might consider moving
this to the CPU pool at some point.
Signed-off-by: David Goulet <dgoulet@torproject.org>
We discovered two cases where edge connections can stall during testing:
1. Due to final data sitting in the edge inbuf when it was resumed
2. Due to flag synchronization between the token bucket and XON/XOFF
The first issue has always existed in C-Tor, but we were able to tickle it
in scp testing. If the last data from the protocol fits in the inbuf but
is not large enough to send, and an XOFF or connection block comes in at
exactly that point, then when the edge connection resumes there will be
no data to read from the socket, but the inbuf can just sit there, never
draining.
We noticed the second issue along the way to finding the first. It seems
wrong, but it didn't seem to affect anything in practice.
These are extremely rare in normal operation, but with conflux, XON/XOFF
activity is more common, so we hit these.
Signed-off-by: David Goulet <dgoulet@torproject.org>
In https://gitlab.torproject.org/tpo/core/tor/-/issues/40623, we changed the
DESTROY propagation to ensure memory was freed quickly at relays. This was a
good move, but it exacerbates the condition where a stream is closed on a
circuit, and then the circuit is immediately closed because it is dirty. This
creates a race between the DESTROY and the last data sent on the stream. This
race is visible in shadow, and does happen.
This could be backported. A better solution to these kinds of problems is to
create an ENDED cell, and not close any circuits until the ENDED comes back.
But this will also require thinking, since this ENDED cell can also get lost,
so some kind of timeout may be needed either way. The ENDED cell could just
allow us to have much longer timeouts for this case.
This adds utility functions to help stream block decisions, as well as cpath
layer_hint checks for stream cell acceptance, and syncing stream lists
for conflux circuits.
These functions are then called throughout the codebase to properly manage
conflux streams.
Streams can get blocked on a circuit in two ways:
1. When the circuit package window is full
2. When the channel's cell queue is too high
Conflux needs to decouple stream blocking from both of these conditions,
because streams can continue on another circuit, even if the primary circuit
is blocked for either of these cases.
However, both conflux and congestion control need to know if the channel's
cell queue hit the highwatermark and is still draining, because this condition
is used by those components, independent of stream state.
Therefore, this commit renames the 'streams_blocked_on_chan' variable to
signify that it refers to the cell queue state, and also refactors the actual
stream blocking bits out, so they can be handled separately if conflux is
present.
Because circuit-level sendmes are sent before relay data cells are processed,
we can safely move this to before the conflux decision. In this way,
regardless of conflux being negotiated, we still send sendmes as soon as data
cells are received. This avoids introducing conflux queue delay into RTT
measurement, which is important for measuring actual circuit capacity.
The circuit-level tracking must happen inside the call to send a data cell,
since that call now chooses a circuit to send on. It turns out we were
already sort of doing this here, but only for the digest. Now we do both
things here.
We use the oldest-circ-first method here, since that seems good for conflux:
queues could briefly spike, but the bad case is if they are maliciously
bloated to stick around for a long time.
The tradeoff here is that it is possible to kill old circuits on a relay
quickly, but that has always been the case with this algorithm choice.
Signed-off-by: David Goulet <dgoulet@torproject.org>
This adds 2 histogram metrics for hidden services:
* `tor_hs_rend_circ_build_time` - the rendezvous circuit build time in milliseconds
* `tor_hs_intro_circ_build_time` - the introduction circuit build time in milliseconds
The text representation of the new metrics looks like this:
```
# HELP tor_hs_rend_circ_build_time The rendezvous circuit build time in milliseconds
# TYPE tor_hs_rend_circ_build_time histogram
tor_hs_rend_circ_build_time_bucket{onion="<elided>",le="1000.00"} 2
tor_hs_rend_circ_build_time_bucket{onion="<elided>",le="5000.00"} 10
tor_hs_rend_circ_build_time_bucket{onion="<elided>",le="10000.00"} 10
tor_hs_rend_circ_build_time_bucket{onion="<elided>",le="30000.00"} 10
tor_hs_rend_circ_build_time_bucket{onion="<elided>",le="60000.00"} 10
tor_hs_rend_circ_build_time_bucket{onion="<elided>",le="+Inf"} 10
tor_hs_rend_circ_build_time_sum{onion="<elided>"} 10824
tor_hs_rend_circ_build_time_count{onion="<elided>"} 10
# HELP tor_hs_intro_circ_build_time The introduction circuit build time in milliseconds
# TYPE tor_hs_intro_circ_build_time histogram
tor_hs_intro_circ_build_time_bucket{onion="<elided>",le="1000.00"} 0
tor_hs_intro_circ_build_time_bucket{onion="<elided>",le="5000.00"} 6
tor_hs_intro_circ_build_time_bucket{onion="<elided>",le="10000.00"} 6
tor_hs_intro_circ_build_time_bucket{onion="<elided>",le="30000.00"} 6
tor_hs_intro_circ_build_time_bucket{onion="<elided>",le="60000.00"} 6
tor_hs_intro_circ_build_time_bucket{onion="<elided>",le="+Inf"} 6
tor_hs_intro_circ_build_time_sum{onion="<elided>"} 9843
tor_hs_intro_circ_build_time_count{onion="<elided>"} 6
```
Signed-off-by: Gabriela Moldovan <gabi@torproject.org>
This adds a `reason` label to the `hs_intro_rejected_intro_req_count` and
`hs_rdv_error_count` metrics introduced in #40755.
Metric lookup and initialization is now a bit more involved. This may be
fine for now, but it will become unwieldy if/when we add more labels (and as
such will need to be refactored).
Also, in the future, we may want to introduce finer grained `reason` labels.
For example, the `invalid_introduce2` label actually covers multiple types of
errors that can happen during the processing of an INTRODUCE2 cell (such as
cell parse errors, replays, decryption errors).
Signed-off-by: Gabriela Moldovan <gabi@torproject.org>
This introduces a couple of new service side metrics:
* `hs_intro_rejected_intro_req_count`, which counts the number of introduction
requests rejected by the hidden service
* `hs_rdv_error_count`, which counts the number of rendezvous errors as seen by
the hidden service (this number includes the number of circuit establishment
failures, failed retries, end-to-end circuit setup failures)
Closes #40755. This partially addresses #40717.
Signed-off-by: Gabriela Moldovan <gabi@torproject.org>
Directory authorities now include their AuthDirMaxServersPerAddr
config option in the consensus parameter section of their vote. Now
external tools can better predict how they will behave.
In particular, the value should make its way to the
https://consensus-health.torproject.org/#consensusparams page.
Once enough dir auths vote this param, they should also compute a
consensus value for it in the consensus document. Nothing uses this
consensus value yet, but we could imagine having dir auths consult it
in the future.
Implements ticket 40753.
This updates the docs to stop suggesting `pk` can be NULL, as that doesn't seem
to be the case anymore (`tor_assert(pk)`).
Signed-off-by: Gabriela Moldovan <gabi@torproject.org>
When I copied this to arti, I messed up and thought that the default
time period was 1440 seconds for some weird testing reason. That led
to confusion.
This commit adds a test case that time period 1440 is May 20, 1973:
now arti and c tor match!
Add new liblzma enums (LZMA_SEEK_NEEDED and LZMA_RET_INTERNAL*)
conditional to the API version they arrived in. The first stable
version of liblzma this affects is 5.4.0
Fixes #40741
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
If a circuit only sends a tiny amount of data such that its cwnd is not
full, it won't increase its cwnd above the minimum. Since slow start circuits
should never hit the minimum otherwise, we can just ignore them for RTT reset
to handle this.
Since these are derived from the number of SENDMEs in a cwnd/cc update,
and a cwnd should not exceed ~10k, there's plenty of room in uint16_t
for them, even if the network gets significantly faster.
Also provides some stickiness, so that once full, the congestion window is
considered still full for the rest of an update cycle, or the entire
congestion window.
In this way, we avoid increasing the congestion window if it is not fully
utilized, but we can still back off in this case. This substantially reduces
queue use in Shadow.
Having no TotalBuildTimes along with a positive CircuitBuildAbandonedCount
led to a segfault. We check for that condition and then BUG + log a
warning if that is the case.
It should never happen in theory, but if someone modified their state
file, it can lead to this problem, so instead of segfaulting, warn.
Fixes #40437
Signed-off-by: David Goulet <dgoulet@torproject.org>
The logic was inverted. Introduced in commit
9155e08450.
This was reported through our bug bounty program on H1. It fixes
TROVE-2022-002.
Fixes #40730
Signed-off-by: David Goulet <dgoulet@torproject.org>
We need the fallbackdir file to be the same so our release CI can
generate a new list and apply it uniformly on all series.
(Same as geoip)
Signed-off-by: David Goulet <dgoulet@torproject.org>
We need all geoip files to be the same so our release CI can generate a
new list and apply it uniformly on all series.
Signed-off-by: David Goulet <dgoulet@torproject.org>
Rotate the relay identity key and v3 identity key for moria1. They
have been online for more than a decade, there was a known potential
compromise, and anyway refreshing keys periodically is good practice.
Advertise new ports too, to avoid confusion.
Closes ticket 40722.
This change mitigates DNS-based website oracles by making the time that
a domain name is cached uncertain (+- 4 minutes of what's measurable).
Resolves TROVE-2021-009.
Fixes #40674
We cap our number of CPU worker threads to at least 2, even if we have a
single core. But also, we used to always add one extra thread
regardless of the number of cores.
This meant that we were off by one whenever we re-used the get_num_cpus()
function to calculate our onionskin work overhead.
This commit makes us always use the number of threads our actual
thread pool was configured with.
Fixes #40719
Signed-off-by: David Goulet <dgoulet@torproject.org>
Cap this to 2 threads always because we need a low and high priority
thread even with a single core.
Fixes #40713
Signed-off-by: David Goulet <dgoulet@torproject.org>
Created and Rejected connections are monotonically increasing counters,
while Opened connections is a gauge that goes up and down.
Fixes #40712
Signed-off-by: David Goulet <dgoulet@torproject.org>
This is part of the fast path, so we need to cache consensus parameters
instead of querying them every time we need to learn a value.
Part of #40704
Signed-off-by: David Goulet <dgoulet@torproject.org>
Until now, there was this magic number (64) used as the maximum number
of tasks a CPU worker can take at once.
This commit makes it a consensus parameter so our future selves can
think of a better value depending on network conditions.
Part of #40704
Signed-off-by: David Goulet <dgoulet@torproject.org>
Transform the hardcoded value ONIONQUEUE_WAIT_CUTOFF into a consensus
parameter so we can control it network wide.
Closes #40704
Signed-off-by: David Goulet <dgoulet@torproject.org>
This also incidentally removes a use of uninitialized stack data from the
connection_or_set_ext_or_identifier() function.
Fixes #40648
Signed-off-by: David Goulet <dgoulet@torproject.org>
FreeBSD's libc also uses sys/param.h to track its version, so:
* present that to tor_libc_get_version_str() as the libc version.
While here, we also fix the return of the FreeBSD version:
* __FreeBSD_version is the correct variable tracking the OSVERSION;
* we use OSVERSION here (defined by __FreeBSD__);
* it's part of the <sys/param.h> include;
* it tracks all noteworthy changes made to the base system.
* __BSD_VISIBLE is defined by systems like FreeBSD and OpenBSD;
* that also extends to DragonFlyBSD;
* it's used in stdlib.h and ctypes.h on those systems.
This BUG() was added when the code was written to see if this callback
was ever executed after we marked the handle as EOF. It turns out, it
does, but we handle it gracefully. We can therefore remove the BUG().
Fixes tpo/core/tor#40596.
Remove a harmless "Bug" log message that can happen in
relay_addr_learn_from_dirauth() on relays during startup:
tor_bug_occurred_(): Bug: ../src/feature/relay/relay_find_addr.c:225: relay_addr_learn_from_dirauth: Non-fatal assertion !(!ei) failed. (on Tor 0.4.7.10 )
Bug: Tor 0.4.7.10: Non-fatal assertion !(!ei) failed in relay_addr_learn_from_dirauth at ../src/feature/relay/relay_find_addr.c:225. Stack trace: (on Tor 0.4.7.10 )
Finishes fixing bug 40231.
Fixes bug 40523; bugfix on 0.4.5.4-rc.
Move the retry from circuit_expire_building() to when the offending
circuit is being closed.
Fixes #40695
Signed-off-by: David Goulet <dgoulet@torproject.org>
The logic is too convoluted, and we can't efficiently apply a specific
timeout depending on the purpose.
Remove it, and instead rely on the right circuit cutoff rather than
keeping this flagged circuit open forever.
Part of #40694
Signed-off-by: David Goulet <dgoulet@torproject.org>
Explicitly set the S_CONNECT_REND purpose to a 4-hop cutoff.
As for the established rendezvous circuit waiting on the RENDEZVOUS2,
set one that is very long considering the possible waiting time for the
service to get the request and join our rendezvous.
Part of #40694
Signed-off-by: David Goulet <dgoulet@torproject.org>
Change it to an "unreachable" error so the intro point can be retried
and not flagged as a failure and never retried again.
Closes #40692
Signed-off-by: David Goulet <dgoulet@torproject.org>
This adds two consensus parameters to control the outbound max circuit
queue cell size limit and how many times it is allowed to reach that
limit for a single client IP.
Closes #40680
Signed-off-by: David Goulet <dgoulet@torproject.org>
Directory authorities and relays now interact properly with directory
authorities if they change addresses. In the past, they would continue
to upload votes, signatures, descriptors, etc to the hard-coded address
in the configuration. Now, if the directory authority is listed in
the consensus at a different address, they will direct queries to this
new address.
Specifically, these three activities have changed:
* Posting a vote, a signature, or a relay descriptor to all the dir auths.
* Dir auths fetching missing votes or signatures from all the dir auths.
* Dir auths fetching new descriptors from a specific dir auth when they
just learned about them from that dir auth's vote.
We already do this desired behavior (prefer the address in the consensus,
but fall back to the hard-coded dirservers info if needed) when fetching
missing certs.
There is a fifth case, in router_pick_trusteddirserver(), where clients
and relays are trying to reach a random dir auth to fetch something. I
left that case alone for now because the interaction with fallbackdirs
is complicated.
Implements ticket 40705.
Directory authorities stop voting a consensus "Measured" weight
for relays with the Authority flag. Now these relays will be
considered unmeasured, which should reserve their bandwidth
for their dir auth role and minimize distractions from other roles.
In place of the "Measured" weight, they now include a
"MeasuredButAuthority" weight (not used by anything) so the bandwidth
authority's opinion on this relay can be recorded for posterity.
Resolves ticket 40698.
The AuthDirDontVoteOnDirAuthBandwidth torrc option never worked, and it
was implemented in a way that could have produced consensus conflicts
if it had.
Resolves bug 40700.
Bug 1: We were purporting to calculate milliseconds per tick, when we
*should* have been computing ticks per millisecond.
Bug 2: Instead of computing either one of those, we were _actually_
computing femtoseconds per tick.
These two bugs covered for one another on x86 hardware, where 1 tick
== 1 nanosecond. But on M1 OSX, 1 tick is about 41 nanoseconds,
causing surprising results.
Fixes bug 40684; bugfix on 0.3.3.1-alpha.
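For reference, a sketch of the correct conversion on macOS using the
platform's mach_timebase_info API (tor wraps this in its own monotime
layer, so treat this as illustration rather than the patched code):
```
#include <stdint.h>
#include <mach/mach_time.h>

/* mach ticks -> nanoseconds: ns = ticks * numer / denom. On x86 numer ==
 * denom == 1, which hid the unit mixup; on Apple silicon they differ. */
static uint64_t
ticks_to_nsec(uint64_t ticks)
{
  mach_timebase_info_data_t tb;
  mach_timebase_info(&tb);
  return ticks * tb.numer / tb.denom;
}

/* Ticks per millisecond, i.e. what the buggy code meant to compute. */
static uint64_t
ticks_per_msec(void)
{
  mach_timebase_info_data_t tb;
  mach_timebase_info(&tb);
  return (UINT64_C(1000000) * tb.denom) / tb.numer;
}
```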
Patch to address #40673. An additional check has been added to
onion_pending_add() in order to ensure that we avoid counting create
cells from clients.
In cpuworker.c's assign_onionskin_to_cpuworker(), if
total_pending_tasks >= max_pending_tasks and
channel_is_client(circ->p_chan) returns false, then
rep_hist_note_circuit_handshake_dropped() will be called but
rep_hist_note_circuit_handshake_assigned() will not. This causes
relays to run into errors, because the number of dropped handshakes
exceeds the total number of assigned handshakes.
To avoid this situation, a check has been added to
onion_pending_add() to ensure that these erroneous calls to
rep_hist_note_circuit_handshake_dropped() are not made.
See the #40673 ticket for the conversation with armadev about this issue.
We need to tune these, but we're not likely to need the subtle differences
between a few of them. Removing them will prevent our consensus parameter
string from becoming too long in the event of tuning.