This commit adds helper functions to look up the diff from one
consensus and to make sure that applying it yields another. We then
use them throughout the existing test cases. Doing this turned
up a reference-leaking bug in consensus_diff_worker_replyfn.
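A minimal sketch of the kind of helper this describes (the lookup_diff_from()
and apply_diff() names are hypothetical; the tt_* macros and tor_free() are the
ones the test suite already uses):

  /* Look up the stored diff from "from" and check that applying it
   * reproduces "to".  Illustrative only, not the exact helpers added. */
  static void
  check_diff_applies(const char *from, const char *to)
  {
    char *diff = NULL, *applied = NULL;

    diff = lookup_diff_from(from);        /* hypothetical lookup helper */
    tt_assert(diff);
    applied = apply_diff(from, diff);     /* hypothetical apply helper */
    tt_assert(applied);
    tt_str_op(applied, OP_EQ, to);

   done:
    tor_free(diff);
    tor_free(applied);
  }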
In several places in the old code, we had problems that only an
in-memory index of diff status could solve, including:
* Remembering which diffs were in-progress, so that we didn't
re-launch them.
* Remembering which diffs had failed, so that we didn't try to
recompute them over and over.
* Having a fast way to look up the diff from a given consensus to
the latest consensus of a given flavor.
This patch adds a hashtable keyed on (flavor, source diff) to
solve these problems. It maps to a cache entry handle, rather than to
a cache entry directly, so that it doesn't affect the reference
counts of the cache entries, and so that we don't otherwise need to
worry about lifetime management.
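As a rough sketch of the data structure (illustrative only; field names and
types are approximations), each hashtable entry carries the lookup key plus a
handle, so the table itself never holds a counted reference:

  typedef struct cdm_diff_t {
    HT_ENTRY(cdm_diff_t) node;              /* hashtable linkage */
    consensus_flavor_t flavor;              /* part of the lookup key */
    uint8_t from_sha3[DIGEST256_LEN];       /* digest of the source consensus */
    consensus_cache_entry_handle_t *entry;  /* handle, not a counted reference */
  } cdm_diff_t;

The idea is that if the cache entry goes away, the handle simply stops
resolving, so the table never keeps an entry alive.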
This flag tells conscache that it should munmap the document as soon
as reasonably possible, since the document's usage pattern is not
expected to have much time-locality.
Initial tests. These just try adding a few consensuses, looking
them up, and making sure that consensus diffs are generated in a
more or less reasonable-looking way. It's enough for 87% coverage,
but it leaves out a lot of functionality.
This module's job is to remember old consensus documents and to
calculate their diffs on demand.
There are some incomplete points in this code; I've marked them with
"XXXX". I intend to fix them in separate commits, since doing so
should make the branch easier to review.
The reason for not making the temporary list public is to keep it encapsulated
in the rendservice subsystem, so the prop224 code does not have direct access
to it and can only affect it through the rendservice pruning function.
The pruning function has also been modified to not take lists as arguments but
rather to use the global lists (the main and temporary ones), because the
prop224 code will call it to actually prune the rendservice's lists. The
function does the needed rotation of pointers between those lists and then
prunes if needed.
In order to make the unit test work and not be completely horrible, there is an
"impl_" version of the function that doesn't free memory; it simply moves
pointers around. It is used directly in the unit test, and two setter functions
for those lists' pointers have been added solely for unit testing.
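A rough sketch of the pattern (illustrative; the real function does more
bookkeeping than a bare swap):

  static smartlist_t *rend_service_list = NULL;          /* main list */
  static smartlist_t *rend_service_staging_list = NULL;  /* temporary list */

  /* Rotate pointers only; nothing is freed, so the unit test can call this
   * directly and inspect both lists afterwards. */
  static void
  rend_service_prune_list_impl_(void)
  {
    smartlist_t *tmp = rend_service_list;
    rend_service_list = rend_service_staging_list;
    rend_service_staging_list = tmp;
  }

  void
  rend_service_prune_list(void)
  {
    rend_service_prune_list_impl_();
    /* ...then free whatever services were left behind on the staging list. */
  }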
Signed-off-by: David Goulet <dgoulet@torproject.org>
Now we have separate getters and setters for the service side and the relay
side. I took this approach, rather than adding arguments to the already
existing functions, to get more explicit type-checking, and also because some
functions would otherwise have grown too large and messy.
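For example (hypothetical prototypes, not the exact new names), splitting the
getters per side lets the compiler enforce that service-side lookups deal with
origin circuits and relay-side lookups with or circuits:

  /* Service side: a circuit we originated ourselves. */
  origin_circuit_t *hs_circuitmap_get_intro_circ_service_side(
                                    const ed25519_public_key_t *auth_key);

  /* Relay side: a circuit somebody else built through us. */
  or_circuit_t *hs_circuitmap_get_intro_circ_relay_side(
                                    const uint8_t *token, size_t token_len);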
This commit also fixes every call site to use the new function names, which
modifies both the legacy HS (v2) and the prop224 (v3) code.
Signed-off-by: David Goulet <dgoulet@torproject.org>
One of the goals of this change is to make the trunnel API/ABI more explicit,
so we namespace it with "trn_*". Furthermore, we can now create
hs_cells.[ch] without confusing it with the trunnel code, which used the
"hs_cell_*" prefix before this change.
Here are the perl lines that were used for this rename:
perl -i -pe 's/cell_extension/trn_cell_extension/g;' src/*/*.[ch]
perl -i -pe 's/cell_extension/trn_cell_extension/g;' src/trunnel/hs/*.trunnel
perl -i -pe 's/hs_cell_/trn_cell_/g;' src/*/*.[ch]
perl -i -pe 's/hs_cell_/trn_cell_/g;' src/trunnel/hs/*.trunnel
And then "./scripts/codegen/run_trunnel.sh" with trunnel commit id
613fb1b98e58504e2b84ef56b1602b6380629043.
Fixes #21919
Signed-off-by: David Goulet <dgoulet@torproject.org>
Pinning EntryNodes together with hidden services can be harmful (see for
instance #14917 and #21155), so at the very least warn the operator when this
is the case.
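The check could look roughly like this (a sketch only; the helper name, the
service-count argument, and the warning text are made up here):

  static void
  warn_if_entry_nodes_pinned(const or_options_t *options, int n_onion_services)
  {
    if (options->EntryNodes && n_onion_services > 0) {
      log_warn(LD_CONFIG, "EntryNodes is set while onion services are "
               "configured. Pinning entry nodes together with onion services "
               "can be harmful; see tickets #14917 and #21155.");
    }
  }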
Fixes #21155
Signed-off-by: David Goulet <dgoulet@torproject.org>
These tests used ridiculously large buffer sizes to exercise the
sanity-checking in the code; but since the sanity-checking has changed,
the tests need to change too.
Now that base64_decode() checks the destination buffer length against
the actual number of bytes as they're produced, shared_random.c no
longer needs the "SR_COMMIT_LEN+2" workaround.
Remove base64_decode_nopad() because it is redundant now that
base64_decode() correctly handles both padded and unpadded base64
encodings with "right-sized" output buffers.
Test base64_decode() with odd-sized decoded lengths, including
unpadded encodings and padded encodings with "right-sized" output
buffers. Convert calls to base64_decode_nopad() to base64_decode()
because base64_decode_nopad() is redundant.
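An illustrative test along those lines (the test name is made up; base64 of
"hello" is "aGVsbG8=" padded, "aGVsbG8" unpadded):

  static void
  test_util_base64_odd_sizes(void *arg)
  {
    char buf[5];
    (void)arg;
    /* Padded input, output buffer sized exactly for the 5 decoded bytes. */
    tt_int_op(base64_decode(buf, sizeof(buf), "aGVsbG8=", 8), OP_EQ, 5);
    tt_mem_op(buf, OP_EQ, "hello", 5);
    /* Same payload without padding. */
    tt_int_op(base64_decode(buf, sizeof(buf), "aGVsbG8", 7), OP_EQ, 5);
    tt_mem_op(buf, OP_EQ, "hello", 5);
   done:
    ;
  }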
base64_decode() was applying an overly conservative check on the
output buffer length that could incorrectly produce an error if the
input encoding contained padding or newlines. Fix this by checking
the output buffer length against the actual decoded length produced
during decoding.
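In practice that means something like the following now succeeds (illustrative
values only): padded input with a trailing newline, decoded into a buffer sized
exactly for the payload:

  /* Returns 5 ("hello"); before the fix, the overly conservative length
   * check could reject out[] as too small for this input. */
  static int
  decode_example(void)
  {
    char out[5];
    return base64_decode(out, sizeof(out), "aGVsbG8=\n", 9);
  }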