Without this check, we never actually refetch the hs descriptor
when its PoW parameters expire, because can_client_refetch_desc()
deems the descriptor still good.
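Roughly, the new check looks like this (a sketch with assumed field
names, not the actual code):

    /* If the descriptor's pow_params have expired, it is no longer
     * usable as-is, so allow a refetch. */
    if (desc->encrypted_data.pow_params &&
        desc->encrypted_data.pow_params->expiration_time <= approx_time()) {
      log_info(LD_REND, "Descriptor PoW params expired; refetching.");
      return true;
    }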
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
Adds two new metrics for hs_pow, and an internal parameter within
hs_metrics for implementing gauges that are reset before every
update.
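For illustration, the kind of entry this enables (the metric and field
names here are assumptions, not the real definitions):

    { .key  = HS_METRICS_POW_NUM_PQUEUE_RDV,
      .type = METRICS_TYPE_GAUGE,
      .name = METRICS_NAME(hs_rdv_pow_pqueue_count),
      .help = "Number of requests waiting in the PoW priority queue",
      /* zero the gauge before every update so it reports the latest
       * sample instead of an accumulated value */
      .reset_before_update = true },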
Signed-off-by: Micah Elizabeth Scott <beth@torproject.org>
We mark the intro circuit with a new flag saying that the pow is
in the cpuworker queue. When the cpuworker comes back, it either
has a solution, in which case we proceed with sending the intro1
cell, or it has no solution, in which case we unmark the intro
circuit and let the whole process restart on the next iteration of
connection_ap_handshake_attach_circuit().
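A sketch of the two sides of that handoff (the flag and helper names
are illustrative):

    /* In the attach-circuit path: */
    if (intro_circ->hs_pow_in_cpuworker_queue)
      return 0; /* solution pending; try again on the next iteration */

    /* In the cpuworker reply handler: */
    if (job->solution_found) {
      send_introduce1(intro_circ, rend_circ);
    } else {
      intro_circ->hs_pow_in_cpuworker_queue = 0; /* unmark and retry */
    }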
Split the client-side introduction logic into two parts:
* a "consider whether to send an introduce1 cell" part (now called
consider_sending_introduce1()), and
* an "actually send it" part (now called send_introduce1()).
prepares the way for client-side pow cpuworkers
also happens to resolve bug https://bugs.torproject.org/tpo/core/tor/40617
(which went into 0.4.7.4-alpha) because now we survive initializing the
cpuworker subsystem when we're not a relay.
First (both client and service), make descriptor parsing not fail when
suggested_effort is 0.
Second (client side), if we get a descriptor with a pow_params section
but with suggested_effort of 0, treat it as not requiring a pow.
Third (service side), when deciding whether the suggested effort has
changed, don't treat "previous suggested effort 0, new suggested effort 0"
as a change.
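The client-side rule from 'second' looks roughly like this (field
names assumed):

    const hs_pow_desc_params_t *pow_params =
      desc->encrypted_data.pow_params;
    if (pow_params == NULL || pow_params->suggested_effort == 0) {
      /* No pow required. We still parsed the seed, so the client
       * could choose to solve a higher-effort pow anyway. */
    }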
An alternative design to resolve 'first' and 'second' above would be
to omit pow_params from the descriptor when suggested_effort is 0, so
that clients never see pow_params and thus never compute a pow. But
I decided to include a pow_params with an explicit suggested_effort
of 0, since this way the client knows the seed etc so they can solve
a higher-effort pow if they want. The tradeoff is that the descriptor
reveals whether HiddenServicePoWDefensesEnabled is set to 1 for this onion
service, even if the AIMD calculation is currently requiring effort 0.
our pqueue implementation does bizarre unspecified things with the
ordering of elements that compare equal. it certainly doesn't provide
the "first in, first out" behavior i was expecting.
now make it explicit by saying that "equal-effort, added-earlier" is
higher priority.
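a sketch of that comparator (names are illustrative):

    static int
    compare_rend_request_by_effort_(const void *a_, const void *b_)
    {
      const pending_rend_t *a = a_, *b = b_;
      /* higher effort is higher priority */
      if (a->rdv_data.pow_effort != b->rdv_data.pow_effort)
        return a->rdv_data.pow_effort > b->rdv_data.pow_effort ? -1 : 1;
      /* equal-effort: added-earlier is higher priority */
      if (a->enqueued_ts != b->enqueued_ts)
        return a->enqueued_ts < b->enqueued_ts ? -1 : 1;
      return 0;
    }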
specifically, if we have 16 in-flight rend circs, and the next
request at the top of the pqueue has an effort lower than our
suggested effort, then don't launch it yet.
this way we always launch adequate-effort requests immediately, and
we always handle *some* low-effort requests, but we are ready at any
moment to handle a few new adequate-effort requests.
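in sketch form (the names here are illustrative):

    if (n_inflight_rend_circs >= 16 &&
        top_req->rdv_data.pow_effort < pow_state->suggested_effort) {
      /* hold this one back until a circuit slot frees up or a
       * higher-effort request arrives */
      return;
    }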
this change makes us reach the callback *after* each mainloop
run, rather than as the next event to run immediately after
activation.
with the old behavior, we were starving everything else to drain the
pqueue entirely, each time we got a new intro2 cell.
now we at least will get to other activities as well.
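tor's event loop already has a primitive for this; the change amounts
to creating the event with (callback name illustrative):

    /* runs after each full mainloop pass, not as the next event */
    ev = mainloop_event_postloop_new(handle_rend_pqueue_cb, NULL);

rather than with mainloop_event_new().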
now we let ourselves queue up to twice as many requests as we expect
to handle, and when we get to the limit we make a new pqueue and move
over the first n elements that we like most.
(the old approach, of calling SMARTLIST_DEL_CURRENT_KEEPORDER() on
elements in a pqueue, would destroy its heap property.)
we also discard elements that are too old, either during the trimming
process or if they come up as the next request to respond to.
lastly, fix a fencepost error on how many rend reqs we would handle
per iteration.
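the trim step, roughly (helper names are illustrative; the pqueue
calls are tor's smartlist_pqueue API):

    smartlist_t *trimmed = smartlist_new();
    while (smartlist_len(trimmed) < max_to_keep &&
           smartlist_len(pqueue) > 0) {
      pending_rend_t *req =
        smartlist_pqueue_pop(pqueue, compare_rend_request_by_effort_,
                             offsetof(pending_rend_t, idx));
      if (rend_req_is_too_old(req)) {
        pending_rend_free(req); /* too old: discard during trimming */
        continue;
      }
      smartlist_pqueue_add(trimmed, compare_rend_request_by_effort_,
                           offsetof(pending_rend_t, idx), req);
    }
    /* whatever remains in the old pqueue is the excess we discard */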
If PoW is enabled, use a priority queue, ordered by effort, for the
rendezvous requests hooked into the mainloop.
Signed-off-by: David Goulet <dgoulet@torproject.org>
When parsing an INTRODUCE2 cell, we extract data in order to launch the
rendezvous circuit. This commit creates a data structure just for that
data, so future prop327 commits can copy that data onto a priority
queue instead of the whole intro data structure, which contains
pointers that could disappear.
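A sketch of the idea (the struct and field names are illustrative):

    /* A self-contained, by-value copy of what we need from the
     * INTRODUCE2 cell to launch the rendezvous circuit later. */
    typedef struct rend_request_data_t {
      uint8_t rendezvous_cookie[HS_REND_COOKIE_LEN];
      curve25519_public_key_t client_pk;
      smartlist_t *rdv_link_specifiers; /* owned copies */
      uint32_t pow_effort; /* doubles as the pqueue priority */
    } rend_request_data_t;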
Signed-off-by: David Goulet <dgoulet@torproject.org>
At this commit, the tor main loop solves the PoW. We might consider
moving this work to the CPU pool at some point.
Signed-off-by: David Goulet <dgoulet@torproject.org>
We discovered two cases where edge connections can stall during testing:
1. Due to final data sitting in the edge inbuf when it was resumed
2. Due to flag synchronization issues between the token bucket and XON/XOFF
The first issue has always existed in C-Tor, but we were able to tickle it
in scp testing. If the last data from the protocol fits in the inbuf but
is not large enough to send, and an XOFF or connection block arrives at
exactly that point, then when the edge connection resumes there will be
no data to read from the socket, but the inbuf can just sit there, never
draining.
We noticed the second issue along the way to finding the first. It seems
wrong, but it didn't seem to affect anything in practice.
These are extremely rare in normal operation, but with conflux, XON/XOFF
activity is more common, so we hit these.
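A sketch of the fix for the first case (treat the exact calls as
illustrative):

    /* On resume, drain anything already sitting in the inbuf, since
     * no new socket data will arrive to trigger processing. */
    if (connection_get_inbuf_len(TO_CONN(edge_conn)) > 0)
      connection_edge_process_inbuf(edge_conn, 1 /* package_partial */);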
Signed-off-by: David Goulet <dgoulet@torproject.org>
In https://gitlab.torproject.org/tpo/core/tor/-/issues/40623, we changed the
DESTROY propagation to ensure memory was freed quickly at relays. This was a
good move, but it exacerbates the condition where a stream is closed on a
circuit, and then the circuit is immediately closed because it is dirty. This
creates a race between the DESTROY and the last data sent on the stream. This
race is visible in shadow, and does happen.
This could be backported. A better solution to these kinds of problems is to
create an ENDED cell, and not close any circuits until the ENDED comes back.
But this will also require careful thought, since the ENDED cell can also get lost,
so some kind of timeout may be needed either way. The ENDED cell could just
allow us to have much longer timeouts for this case.
This adds utility functions to help with stream blocking decisions, as well
as cpath layer_hint checks for stream cell acceptance, and syncing of stream
lists for conflux circuits.
These functions are then called throughout the codebase to properly manage
conflux streams.
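One such helper, in sketch form (the name and exact check are
illustrative):

    /* A stream cell on a conflux circuit is only acceptable if it
     * arrived at the cpath layer this edge connection uses. */
    static bool
    edge_uses_layer_hint(const edge_connection_t *edge,
                         const crypt_path_t *layer_hint)
    {
      return edge->cpath_layer == layer_hint;
    }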