our pqueue implementation does bizarre unspecified things with the
ordering of elements that are equal. it certainly doesn't provide the
"first in, first out" behavior that i was expecting.
now make it explicit, by specifying that "equal-effort, added-earlier"
entries get higher priority.
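roughly, the comparator now looks like this (a sketch with hypothetical
struct and field names, not the actual code):

```c
#include <stdint.h>

/* each queued request carries its claimed effort plus a monotonically
 * increasing insertion counter, so ties break toward older entries. */
typedef struct pending_rend_sketch_t {
  uint32_t pow_effort;   /* effort from the client's PoW solution */
  uint64_t enqueued_seq; /* insertion order, for FIFO tie-breaking */
  int idx;               /* bookkeeping slot for smartlist_pqueue_* */
} pending_rend_sketch_t;

static int
compare_rend_request_by_effort_(const void *a_, const void *b_)
{
  const pending_rend_sketch_t *a = a_;
  const pending_rend_sketch_t *b = b_;
  if (a->pow_effort != b->pow_effort)
    return a->pow_effort > b->pow_effort ? -1 : 1; /* higher effort first */
  if (a->enqueued_seq != b->enqueued_seq)
    return a->enqueued_seq < b->enqueued_seq ? -1 : 1; /* older first */
  return 0;
}
```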
specifically, if we have 16 in-flight rend circs, and the effort of the
request at the top of the pqueue is lower than our suggested effort,
then don't launch it yet.
this way we always launch adequate-effort requests immediately, and
we always handle *some* low-effort requests, but we are ready at any
moment to handle a few new adequate-effort requests.
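the gate itself is tiny (a sketch, names hypothetical):

```c
#include <stdbool.h>
#include <stdint.h>

#define MAX_INFLIGHT_REND_CIRCS 16 /* hypothetical constant */

/* with a free slot, launch anything; once 16 circs are in flight,
 * only launch requests whose effort meets the suggested effort. */
static bool
can_launch_next_rend_request(unsigned int inflight,
                             uint32_t top_effort,
                             uint32_t suggested_effort)
{
  if (inflight < MAX_INFLIGHT_REND_CIRCS)
    return true;
  return top_effort >= suggested_effort;
}
```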
this change makes us reach the callback *after* each mainloop
run, rather than as the next event to run immediately after
activation.
with the old behavior, we were starving everything else in order to
drain the pqueue entirely, each time we got a new intro2 cell. now we
at least get to other activities as well.
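in evloop terms, this is the difference between a regular mainloop event
and a "postloop" one; assuming the current compat_libevent API, the
change is roughly:

```c
#include "lib/evloop/compat_libevent.h"

static void handle_rend_pqueue_cb(mainloop_event_t *ev, void *arg);

static mainloop_event_t *rend_pqueue_ev = NULL;

static void
setup_rend_pqueue_event(void)
{
  /* before: mainloop_event_new() ran the callback as the very next
   * event after activation, starving everything else.
   * after: a postloop event waits for the current mainloop run to
   * finish, so other pending events get a turn first. */
  rend_pqueue_ev = mainloop_event_postloop_new(handle_rend_pqueue_cb, NULL);
}
```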
now we let ourselves queue up to twice as many as we expect, and when
we get to the limit we make a new pqueue and move over the first n
elements that we like most.
(the old approach, of calling SMARTLIST_DEL_CURRENT_KEEPORDER() on
elements in a pqueue, would destroy its heap property.)
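the trim, sketched with the hypothetical types from above: pop the n
best entries into a fresh heap, free the rest.

```c
#include <stddef.h> /* offsetof */
#include "lib/container/smartlist.h"

static void
trim_rend_pqueue(smartlist_t **pqueue_ptr, int keep_n)
{
  smartlist_t *old = *pqueue_ptr;
  smartlist_t *fresh = smartlist_new();
  for (int i = 0; i < keep_n && smartlist_len(old) > 0; i++) {
    pending_rend_sketch_t *req =
      smartlist_pqueue_pop(old, compare_rend_request_by_effort_,
                           offsetof(pending_rend_sketch_t, idx));
    smartlist_pqueue_add(fresh, compare_rend_request_by_effort_,
                         offsetof(pending_rend_sketch_t, idx), req);
  }
  /* everything still in the old heap didn't make the cut */
  SMARTLIST_FOREACH(old, pending_rend_sketch_t *, req,
                    pending_rend_sketch_free(req)); /* hypothetical free */
  smartlist_free(old);
  *pqueue_ptr = fresh;
}
```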
we also discard elements that are too old, either during the trimming
process or if they come up as the next request to respond to.
lastly, fix a fencepost error on how many rend reqs we would handle
per iteration.
If PoW is enabled, use a priority queue, ordered by effort, for the
rendezvous requests hooked into the mainloop.
Signed-off-by: David Goulet <dgoulet@torproject.org>
When parsing an INTRODUCE2 cell, we extract data in order to launch the
rendezvous circuit. This commit creates a data structure just for that
data, so that future commits for prop327 can copy it onto a priority
queue, instead of copying the whole intro data structure, which contains
pointers that could disappear.
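The shape of the idea, with a hypothetical name and fields (not the actual
struct from this commit): own copies of exactly what is needed to launch the
rendezvous circuit, and no pointers into the larger intro machinery that
could be freed while the request waits in the queue.

```c
#include <stdint.h>
#include "lib/container/smartlist.h"
#include "lib/crypt_ops/crypto_curve25519.h"

#define REND_COOKIE_LEN_SKETCH 20

typedef struct rend_request_data_sketch_t {
  uint8_t rendezvous_cookie[REND_COOKIE_LEN_SKETCH];
  curve25519_public_key_t rendezvous_point_onion_key;
  smartlist_t *rendezvous_point_link_specifiers; /* owned copies */
} rend_request_data_sketch_t;
```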
Signed-off-by: David Goulet <dgoulet@torproject.org>
As of this commit, the tor main loop handles these requests. We might
consider moving this to the CPU pool at some point.
Signed-off-by: David Goulet <dgoulet@torproject.org>
We discovered two cases where edge connections can stall during testing:
1. Due to final data sitting in the edge inbuf when it was resumed
2. Due to flag synchronization between the token bucket and XON/XOFF
The first issue has always existed in C-Tor, but we were able to tickle
it in scp testing. If the last data from the protocol fits in the inbuf
but is not large enough to send, and an XOFF or connection block comes
in at exactly that point, then when the edge connection resumes, there
will be no data to read from the socket, but the inbuf can just sit
there, never draining.
We noticed the second issue along the way to finding the first. It
seemed wrong, but it didn't appear to affect anything in practice.
Both cases are extremely rare in normal operation, but with conflux,
XON/XOFF activity is more common, so we hit them.
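A sketch of the fix for the first issue (function placement and names
hypothetical; the calls themselves are existing C-Tor functions): when an
edge connection resumes, explicitly drain whatever already sits in the
inbuf, because no further socket readability will arrive to trigger it.

```c
static void
resume_edge_reading_sketch(edge_connection_t *conn)
{
  connection_start_reading(TO_CONN(conn));
  /* The last protocol data may already be buffered; the socket will
   * never become readable again, so process the inbuf by hand. */
  if (connection_get_inbuf_len(TO_CONN(conn)) > 0)
    connection_edge_process_inbuf(conn, 1 /* package_partial */);
}
```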
Signed-off-by: David Goulet <dgoulet@torproject.org>
In https://gitlab.torproject.org/tpo/core/tor/-/issues/40623, we changed the
DESTROY propagation to ensure memory was freed quickly at relays. This was a
good move, but it exacerbates the condition where a stream is closed on a
circuit, and then the circuit is immediately closed because it is dirty. This
creates a race between the DESTROY and the last data sent on the stream. This
race is visible in Shadow, and does happen.
This could be backported. A better solution to these kinds of problems is to
create an ENDED cell, and not close any circuits until the ENDED comes back.
But this will also require thinking, since this ENDED cell can also get lost,
so some kind of timeout may be needed either way. The ENDED cell could just
allow us to have much longer timeouts for this case.
This adds utility functions to help with stream block decisions, as well as
cpath layer_hint checks for stream cell acceptance, and syncing of stream
lists for conflux circuits.
These functions are then called throughout the codebase to properly manage
conflux streams.
Streams can get blocked on a circuit in two ways:
1. When the circuit package window is full
2. When the channel's cell queue is too high
Conflux needs to decouple stream blocking from both of these conditions,
because streams can continue on another circuit, even if the primary circuit
is blocked for either of these cases.
However, both conflux and congestion control need to know if the channel's
cell queue hit the high watermark and is still draining, because this
condition is used by those components, independent of stream state.
Therefore, this commit renames the 'streams_blocked_on_chan' variable to
signify that it refers to the cell queue state, and also refactors the actual
stream blocking bits out, so they can be handled separately if conflux is
present.
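A sketch of the resulting split (field and helper names approximate):

```c
#include <stdbool.h>

struct circuit_sketch_t {
  /* Formerly streams_blocked_on_chan: now records only that the
   * channel's cell queue passed the high watermark and has not yet
   * drained, with no claim about stream state. */
  unsigned int circuit_blocked_on_chan : 1;
};

/* Stream blocking is decided separately, so a conflux circuit can keep
 * its streams running on another leg even while this leg is blocked. */
static bool
streams_should_block_sketch(const struct circuit_sketch_t *circ,
                            bool conflux_present)
{
  if (conflux_present)
    return false;
  return circ->circuit_blocked_on_chan;
}
```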
Because circuit-level sendmes are sent before relay data cells are processed,
we can safely move this to before the conflux decision. In this way,
regardless of conflux being negotiated, we still send sendmes as soon as data
cells are received. This avoids introducing conflux queue delay into RTT
measurement, which is important for measuring actual circuit capacity.
The circuit-level tracking must happen inside the call to send a data cell,
since that call now chooses a circuit to send on. It turns out we were
already doing something like this here, but only for the digest. Now we do
both things here.
We use the oldest-circ-first method here, since that seems good for conflux:
queues could briefly spike, but the bad case is if they are maliciously
bloated to stick around for a long time.
The tradeoff here is that it is possible to kill old circuits on a relay
quickly, but that has always been the case with this algorithm choice.
Signed-off-by: David Goulet <dgoulet@torproject.org>
This adds 2 histogram metrics for hidden services:
* `tor_hs_rend_circ_build_time` - the rendezvous circuit build time in milliseconds
* `tor_hs_intro_circ_build_time` - the introduction circuit build time in milliseconds
The text representation of the new metrics looks like this:
```
# HELP tor_hs_rend_circ_build_time The rendezvous circuit build time in milliseconds
# TYPE tor_hs_rend_circ_build_time histogram
tor_hs_rend_circ_build_time_bucket{onion="<elided>",le="1000.00"} 2
tor_hs_rend_circ_build_time_bucket{onion="<elided>",le="5000.00"} 10
tor_hs_rend_circ_build_time_bucket{onion="<elided>",le="10000.00"} 10
tor_hs_rend_circ_build_time_bucket{onion="<elided>",le="30000.00"} 10
tor_hs_rend_circ_build_time_bucket{onion="<elided>",le="60000.00"} 10
tor_hs_rend_circ_build_time_bucket{onion="<elided>",le="+Inf"} 10
tor_hs_rend_circ_build_time_sum{onion="<elided>"} 10824
tor_hs_rend_circ_build_time_count{onion="<elided>"} 10
# HELP tor_hs_intro_circ_build_time The introduction circuit build time in milliseconds
# TYPE tor_hs_intro_circ_build_time histogram
tor_hs_intro_circ_build_time_bucket{onion="<elided>",le="1000.00"} 0
tor_hs_intro_circ_build_time_bucket{onion="<elided>",le="5000.00"} 6
tor_hs_intro_circ_build_time_bucket{onion="<elided>",le="10000.00"} 6
tor_hs_intro_circ_build_time_bucket{onion="<elided>",le="30000.00"} 6
tor_hs_intro_circ_build_time_bucket{onion="<elided>",le="60000.00"} 6
tor_hs_intro_circ_build_time_bucket{onion="<elided>",le="+Inf"} 6
tor_hs_intro_circ_build_time_sum{onion="<elided>"} 9843
tor_hs_intro_circ_build_time_count{onion="<elided>"} 6
```
Signed-off-by: Gabriela Moldovan <gabi@torproject.org>
This adds a `reason` label to the `hs_intro_rejected_intro_req_count` and
`hs_rdv_error_count` metrics introduced in #40755.
Metric lookup and initialization is now a bit more involved. This may be
fine for now, but it will become unwieldy if/when we add more labels (and as
such will need to be refactored).
Also, in the future, we may want to introduce finer grained `reason` labels.
For example, the `invalid_introduce2` label actually covers multiple types of
errors that can happen during the processing of an INTRODUCE2 cell (such as
cell parse errors, replays, and decryption errors).
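The registration now looks roughly like this (a sketch against tor's
lib/metrics API; the exact metrics_store_add() signature, particularly the
histogram bucket arguments, may differ by version):

```c
static void
register_rdv_error_metric_sketch(metrics_store_t *store)
{
  metrics_store_entry_t *entry =
    metrics_store_add(store, METRICS_TYPE_COUNTER,
                      "hs_rdv_error_count",
                      "Total number of rendezvous errors",
                      0, NULL);
  /* One entry per reason; lookup must now match on the label too. */
  metrics_store_entry_add_label(entry,
      metrics_format_label("reason", "invalid_introduce2"));
}
```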
Signed-off-by: Gabriela Moldovan <gabi@torproject.org>
This introduces a couple of new service side metrics:
* `hs_intro_rejected_intro_req_count`, which counts the number of introduction
requests rejected by the hidden service
* `hs_rdv_error_count`, which counts the number of rendezvous errors as seen by
the hidden service (this includes circuit establishment failures, failed
retries, and end-to-end circuit setup failures)
Closes #40755. This partially addresses #40717.
Signed-off-by: Gabriela Moldovan <gabi@torproject.org>
Directory authorities now include their AuthDirMaxServersPerAddr
config option in the consensus parameter section of their vote. Now
external tools can better predict how they will behave.
In particular, the value should make its way to the
https://consensus-health.torproject.org/#consensusparams page.
Once enough dir auths vote this param, they should also compute a
consensus value for it in the consensus document. Nothing uses this
consensus value yet, but we could imagine having dir auths consult it
in the future.
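For example, a vote's params line would then carry the new entry alongside
the existing ones (values here illustrative):

```
params AuthDirMaxServersPerAddr=2 CircuitPriorityHalflifeMsec=30000
```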
Implements ticket 40753.
This updates the docs to stop suggesting `pk` can be NULL, as that doesn't seem
to be the case anymore (`tor_assert(pk)`).
Signed-off-by: Gabriela Moldovan <gabi@torproject.org>
When I copied this to arti, I messed up and thought that the default
time period was 1440 seconds for some weird testing reason. That led
to confusion.
This commit adds a test case checking that time period 1440 corresponds to
May 20, 1973: now arti and c tor match!
Add new liblzma enums (LZMA_SEEK_NEEDED and LZMA_RET_INTERNAL*),
conditional on the API version they arrived in. The first stable
version of liblzma this affects is 5.4.0.
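The guard looks roughly like this (the numeric threshold follows the
LZMA_VERSION encoding from lzma/version.h; 50040002 is 5.4.0 stable):

```c
#include <lzma.h>

static const char *
lzma_error_str_sketch(lzma_ret ret)
{
  switch (ret) {
    case LZMA_OK:
      return "Operation completed successfully";
#if LZMA_VERSION >= 50040002 /* enums absent from older headers */
    case LZMA_SEEK_NEEDED:
      return "Request to change the input file position";
#endif
    default:
      return "Unknown error";
  }
}
```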
Fixes #40741
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
If a circuit only sends a tiny amount of data, such that its cwnd is never
full, it won't increase its cwnd above the minimum. Since slow start
circuits should never hit the minimum otherwise, we can just ignore them
for RTT reset to handle this.
Since these counters are derived from the number of SENDMEs in a cwnd/cc
update, and a cwnd should not exceed ~10k, there's plenty of room in
uint16_t for them, even if the network gets significantly faster.
This also provides some stickiness, so that once full, the congestion window
is considered still full for the rest of the update cycle, or for an entire
congestion window's worth of cells.
In this way, we avoid increasing the congestion window if it is not fully
utilized, but we can still back off in this case. This substantially reduces
queue use in Shadow.
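A sketch of the bookkeeping (field names approximate):

```c
#include <stdbool.h>
#include <stdint.h>

typedef struct cc_sketch_t {
  uint16_t cwnd_full_counter; /* update cycles in which cwnd was full */
  unsigned int cwnd_full : 1; /* sticky for the rest of the cycle */
} cc_sketch_t;

/* Only let the congestion window grow if it was actually used; an
 * under-utilized cwnd can still be backed off on congestion. */
static bool
should_increase_cwnd_sketch(const cc_sketch_t *cc)
{
  return cc->cwnd_full;
}
```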
Having no TotalBuildTimes along with a positive CircuitBuildAbandonedCount
led to a segfault. We now check for that condition, and BUG + log a warning
if it occurs.
It should never happen in theory, but if someone modified their state file
it can lead to this problem, so instead of segfaulting, warn.
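A sketch of the check (variable names hypothetical; BUG() and log_warn()
are tor's existing macros):

```c
static bool
cbt_counts_are_sane(unsigned int total_build_times,
                    unsigned int abandoned_count)
{
  if (BUG(total_build_times == 0 && abandoned_count > 0)) {
    log_warn(LD_CIRC, "Missing TotalBuildTimes with a positive "
             "CircuitBuildAbandonedCount; was the state file modified? "
             "Ignoring these values.");
    return false;
  }
  return true;
}
```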
Fixes #40437
Signed-off-by: David Goulet <dgoulet@torproject.org>
The logic was inverted. Introduced in commit
9155e08450.
This was reported through our bug bounty program on H1. It fixes
TROVE-2022-002.
Fixes #40730
Signed-off-by: David Goulet <dgoulet@torproject.org>
We need the fallbackdir file to be the same so our release CI can
generate a new list and apply it uniformly on all series.
(Same as geoip)
Signed-off-by: David Goulet <dgoulet@torproject.org>
We need all geoip files to be the same so our release CI can generate a
new list and apply it uniformly on all series.
Signed-off-by: David Goulet <dgoulet@torproject.org>