Previously we could sometimes change our signing key, but not
regenerate the certificates (signing->link and signing->auth) that
were signed with it. Also, we would regularly replace our TLS x.509
link certificate (by rotating our TLS context) but not replace our
signing->link ed25519 certificate. In both cases, the resulting
inconsistency would make other relays reject our link handshakes.
Fixes two cases of bug 22460; bugfix on 0.3.0.1-alpha.
The encrypted_data_length_is_valid() function wasn't correctly validating the
length of the encrypted data of a v3 descriptor. As a side effect, an HSDir
would reject the descriptor and ultimately not store it.
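For illustration, the kind of lower-bound check involved looks roughly like
this (constant names and values here are assumptions, not the exact tor code):

  /* The encrypted blob must leave room for at least one byte of ciphertext
   * beyond the salt and the MAC. */
  #define SALT_LEN_SKETCH 16
  #define MAC_LEN_SKETCH  32

  static int
  encrypted_data_length_is_valid_sketch(size_t len)
  {
    if (len <= SALT_LEN_SKETCH + MAC_LEN_SKETCH)
      return 0;
    return 1;
  }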
Fixes #22447
Signed-off-by: David Goulet <dgoulet@torproject.org>
This adds a couple of configure options to control whether we require
all dependencies to be available locally (the default) or not
(--enable-cargo-online-mode). When building from a tarball, we require
the RUST_DEPENDENCIES variable to point to the local repository of
crates. This also adds src/ext/rust as a git submodule that contains
such a local repository for easy setup.
Passing --enable-cargo-online-mode during configure allows cargo to make
network requests while building Tor or running tests. If this flag is
not supplied, the dependencies need to be available in the form of a
local mirror.
This gives an indication in the log that Tor was built with Rust
support, and lays some groundwork for converting further string-returning
APIs to Rust.
config_get_lines is now split into two functions (rough signatures sketched
below):
- config_get_lines, which behaves the same as it did before we had %include
- config_get_lines_include, which actually processes %include
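A rough sketch of the resulting split (signatures approximated, not
verbatim):

  /* Parse a config string into a list of key/value lines; %include
   * directives are left untouched. */
  int config_get_lines(const char *string, config_line_t **result,
                       int extended);

  /* As above, but expand %include directives; *has_include is set when
   * any were seen. */
  int config_get_lines_include(const char *string, config_line_t **result,
                               int extended, int *has_include);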
Cleanup logic in test_intro_point_registration() invoked tt_assert()
in a way that could cause it to jump backward into the cleanup code if
the assertion failed, causing Coverity to see a double free (CID
1397192). Move the tt_assert() calls into a helper function having
the well-defined task of testing hs_circuitmap_free_all().
Fixes #22231.
A descriptor only contains the curve25519 public key in the enc-key field, so
the private key should not be in that data structure. The service data
structures will have access to the full keypair (#20657).
Furthermore, ticket #21871 highlighted an issue in proposal 224 where the
encryption key and legacy key were treated as mutually exclusive. This is very
wrong, and this commit fixes the code to follow the change made to the
proposal in that ticket.
Signed-off-by: David Goulet <dgoulet@torproject.org>
A for-loop in test_channelpadding_timers() would never run because it
was trying to increment a counter up to CHANNELS_TO_TEST/3 after an
earlier block had already incremented it to CHANNELS_TO_TEST/2.
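The shape of the bug, simplified (not the actual test code):

  int ch;
  for (ch = 0; ch < CHANNELS_TO_TEST / 2; ch++) {
    /* ... set up and exercise the first batch of channels ... */
  }
  for (; ch < CHANNELS_TO_TEST / 3; ch++) {
    /* Never entered: ch is already CHANNELS_TO_TEST/2, which is >=
     * CHANNELS_TO_TEST/3. */
  }

The fix resets the counter before the second loop.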
Fixes #22221, CID 1405983.
Deprecated in 0.2.9.2-alpha, this commit marks it as OBSOLETE() and cleans
up the code associated with it.
Partially fixes #22060
Signed-off-by: David Goulet <dgoulet@torproject.org>
Deprecated in 0.2.9.2-alpha, this commit marks it as OBSOLETE() and cleans
up the code associated with it.
Partially fixes #22060
Signed-off-by: David Goulet <dgoulet@torproject.org>
Deprecated in 0.2.9.2-alpha, this commit marks it as OBSOLETE() and cleans
up the code associated with it.
Partially fixes #22060
Signed-off-by: David Goulet <dgoulet@torproject.org>
Deprecated in 0.2.9.2-alpha, this commit marks it as OBSOLETE() and cleans
up the code associated with it.
Partially fixes #22060
Signed-off-by: David Goulet <dgoulet@torproject.org>
The descriptor fields can't be validated properly during encoding because they
are signed by a descriptor signing key that we don't have in the unit test.
Removing the test case for now, but ultimately we need an independent
implementation that can encode descriptors so we can test our decoding
functions against it.
Signed-off-by: David Goulet <dgoulet@torproject.org>
Create the hs_test_helpers.{c|h} files, which contain helper functions to
create introduction points and descriptors and to compare descriptors.
These helpers are used by both the hs cache and hs descriptor tests; unifying
them avoids code duplication.
Also, this commit fixes the usage of the signing key, which was wrongly used
when creating a cross-signed certificate.
Signed-off-by: David Goulet <dgoulet@torproject.org>
This change prevents a no-longer-supported behavior in which we changed
options that would later be written back to torrc by SAVECONF.
Also, use the "Pointer to final pointer" trick to build the
normalized list, to avoid special-casing the first element.
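For reference, the idiom looks roughly like this (illustrative, not the exact
code in the patch):

  config_line_t *normalized = NULL;
  config_line_t **next = &normalized;
  for (const config_line_t *line = options; line; line = line->next) {
    config_line_t *copy = tor_malloc_zero(sizeof(*copy));
    copy->key = tor_strdup(line->key);
    copy->value = tor_strdup(line->value);
    *next = copy;        /* works for the first and every later element */
    next = &copy->next;
  }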
ASan was finding an alignment issue with a cast, so we now set the field in
the trunnel struct and then encode it instead. Also, enable log capture and
verification.
Checking all of these parameter lists for every single connection every second
seems like it could be an expensive waste.
Updating globally cached versions when there is a new consensus will still
allow us to apply consensus parameter updates to all existing connections
immediately.
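A sketch of the cached approach (function and parameter names here are
illustrative):

  static int32_t cached_pad_low_ms;
  static int32_t cached_pad_high_ms;

  /* Called once when a new consensus arrives, instead of re-reading the
   * parameter lists for every connection every second. */
  void
  channelpadding_update_consensus_params_sketch(const networkstatus_t *ns)
  {
    cached_pad_low_ms  = networkstatus_get_param(ns, "nf_ito_low",
                                                 1500, 0, 60000);
    cached_pad_high_ms = networkstatus_get_param(ns, "nf_ito_high",
                                                 9500, 0, 60000);
  }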
IMO, these tests should be calling options_init() to properly set everything
to default values, but when that is done, about a dozen tests fail. Setting
the one default value that broke the tests for my branch. Sorry for being
lame.
This unifies CircuitIdleTimeout and PredictedCircsRelevanceTime into a single
option, and randomizes it.
It also gives us control over the default value as well as relay-to-relay
connection lifespan through the consensus.
Conflicts:
src/or/circuituse.c
src/or/config.c
src/or/main.c
src/test/testing_common.c
This defense will cause Cisco, Juniper, Fortinet, and other routers operating
in the default configuration to collapse netflow records that would normally
be split due to the 15 second flow idle timeout.
Collapsing these records should greatly reduce the utility of default netflow
data for correlation attacks, since all client-side records should become 30
minute chunks of total bytes sent/received, rather than creating multiple
separate records for every webpage load/ssh command interaction/XMPP chat/whatever
else happens to be inactive for more than 15 seconds.
The defense adds consensus parameters to govern the range of timeout values
for sending padding packets, as well as for keeping connections open.
The defense only sends padding when connections are otherwise inactive, and it
does not pad connections used solely for directory traffic at all. By default
it also doesn't pad inter-relay connections.
Statistics on the total padding in the last 24 hours are exported to the
extra-info descriptors.
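The timeout selection itself is roughly a uniform draw between the two
consensus-governed bounds (parameter names and defaults below are
illustrative):

  int32_t low_ms  = networkstatus_get_param(NULL, "nf_ito_low",
                                            1500, 0, 60000);
  int32_t high_ms = networkstatus_get_param(NULL, "nf_ito_high",
                                            9500, 0, 60000);
  int timeout_ms = (int)low_ms +
    (int)crypto_rand_int((unsigned)(high_ms - low_ms + 1));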
We need to index diffs by the digest-as-signed of their source
consensus, so that we can find them even from consensuses whose
signatures are encoded differently.
In this patch I add support for "delete through end of file" in our
ed diff handler, and generate our diffs so that they remove
everything in the consensus after the signatures begin.
test_options_validate_impl() incorrectly executed subsequent phases of
config parsing and validation after an expected error. This caused
msg to leak when those later phases (which would likely produce errors
as well) overwrote it.
This was introduced in 90562fc23a, which added a code path where we pass a
NULL pointer for the HSDir fingerprint to the control event subsystem. The
HS desc failed event function wasn't handling a NULL value for that pointer
properly.
Two unit tests are also added in this commit to make sure we properly handle
the cases of a NULL HSDir fingerprint and a NULL content.
Fixes #22138
Signed-off-by: David Goulet <dgoulet@torproject.org>
Code movement in the commit introducing tests for #22103 uncovered a
latent memory management bug.
Refactor the log message checking from test_options_checkmsgs() into a
helper test_options_checklog(). This avoids a memory leak (and
possible double-free) in a test failure condition.
Don't reuse variables (especially pointers to allocated memory!) for
multiple unrelated purposes.
Fixes CID 1405778.
Also factor out the error message comparisons from
test_options_validate_impl() into a separate function so it can check
for error messages in different phases of config parsing.
These required some special-casing, since some of the assumptions
about real compression algorithms don't actually hold for the
identity transform. Specifically, we had assumed:
- compression functions typically change the lengths of their inputs.
- decompression functions can detect truncated inputs.
- compression functions have detectable headers.
None of those is true for the identity transformation.
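For example, the identity backend is essentially this (simplified sketch, not
the real implementation):

  #include <string.h>

  static int
  identity_compress_sketch(char *out, size_t *out_len,
                           const char *in, size_t in_len)
  {
    if (*out_len < in_len)
      return -1;              /* output buffer too small */
    memcpy(out, in, in_len);  /* output length always equals input length */
    *out_len = in_len;
    return 0;                 /* no header, no way to spot truncation */
  }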
Introduce a way to optionally enable Rust integration for our builds. No
actual Rust code is added yet and specifying the flag has no effect
other than failing the build if rustc and cargo are unavailable.
There were two issues here: first, zstd didn't exhibit the right
behavior unless it got a very large input. That's fine.
The second issue was a genuine bug, fixed by 39cfaba9e2.
This patch refactors our compression tests such that deflate, gzip,
lzma, and zstd are all tested using the same code.
Additionally we use run-time checks to see if the given compression
method is supported instead of using HAVE_LZMA and HAVE_ZSTD.
See: https://bugs.torproject.org/22085
Since we have a streaming API for each compression backend, we don't
need a non-streaming API for each: we can build a common
non-streaming API at the front-end.
This commit adds the src/trace directory containing the basics for our tracing
subsystem. It is not yet used in the code base. The "src/trace/debug.h" file
contains an example of how we can map our tor trace events to log_debug().
The tracing subsystem can only be enabled by selecting a tracing framework at
compile time. This commit introduces the "--enable-tracing-debug" option,
which makes every "tor_trace()" call map to "log_debug()".
Closes #13802
Signed-off-by: David Goulet <dgoulet@torproject.org>
This patch adds support for checking whether a given `compress_method_t` is
supported by the currently running Tor instance, using
`tor_compress_supports_method()`.
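Typical use looks like this (the check itself is the new API; the surrounding
code is illustrative):

  if (!tor_compress_supports_method(ZSTD_METHOD)) {
    log_info(LD_GENERAL, "Zstandard support not available; skipping.");
    return;
  }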
See: https://bugs.torproject.org/21662
This patch adds an option to our configure script for enabling Zstandard
support. By default, the --enable-zstd option is set to "auto", which means
we'll build Tor with Zstandard support if libzstd is available.
See: https://bugs.torproject.org/21662
This patch adds an option to our configure script for enabling LZMA
support. By default, the --enable-lzma option is set to "auto", which
means we'll build Tor with LZMA support if liblzma is available.
See: https://bugs.torproject.org/21662
The item referred to the cdm_ht_set_status() case where the item was
not already in the hashtable. But that already happens naturally
when we scan the directory on startup... and we already have a test
for that.
This commit adds some helper functions to look up the diff from one
consensus and to make sure that applying it leads to another. Then
we add them throughout the existing test cases. Doing this turned
up a reference-leaking bug in consensus_diff_worker_replyfn.
Initial tests. These just try adding a few consensuses, looking
them up, and making sure that consensus diffs are generated in a
more or less reasonable-looking way. It's enough for 87% coverage,
but it leaves out a lot of functionality.
This patch refactors our streaming compression code to allow us to
extend it with non-zlib/non-gzip based compression schemes.
See https://bugs.torproject.org/21663
This allows us to use the API names `tor_compress` and `tor_uncompress` as
the main entry points for all compression and decompression, not just gzip
and zlib.
See https://bugs.torproject.org/21663
This patch changes some of the tt_assert() usage in test_util_gzip() to
use tt_int_op() to get better error messages upon failure.
Additionally, we now use explicit NULL checks.
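For example (illustrative):

  /* Before: a failure only reports that the assertion was false. */
  tt_assert(out_len == 0);
  /* After: a failure reports both values. */
  tt_int_op(out_len, OP_EQ, 0);
  /* Explicit NULL check instead of a bare truthiness assert. */
  tt_ptr_op(out, OP_NE, NULL);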
See https://bugs.torproject.org/21663
The consdiff generation logic would skip over lines added at the start of the
second file, and generate a diff that it would then immediately refuse,
because the diff couldn't be used to reproduce the second file from the first.
Fixes #21996.
The reason for making the temporary list public is to keep it encapsulated in
the rendservice subsystem, so that the prop224 code does not have direct
access to it and can only affect it through the rendservice pruning function.
The function has also been modified to not take lists as arguments but rather
to use the global lists (the main and temporary ones), because the prop224
code will call it to actually prune the rendservice lists. The function does
the needed rotation of pointers between those lists and then prunes if needed.
In order to make the unit test work and not be completely horrible, there is
an "impl_" version of the function that doesn't free memory; it simply moves
pointers around. It is used directly in the unit test, and two setter
functions for those lists' pointers have been added only for unit testing.
Signed-off-by: David Goulet <dgoulet@torproject.org>
Now we have separate getters and setters for the service side and relay side.
I took this approach over adding arguments to the already existing functions
to get more explicit type-checking, and also because some functions would
otherwise grow too large and dirty.
This commit also fixes every call site to use the new function names, which
modifies both the legacy HS (v2) and the prop224 (v3) code.
One of the goals of this change is to make the trunnel API/ABI more explicit,
so we namespace it with "trn_*". Furthermore, we can now create
hs_cells.[ch] without confusing it with the trunnel code, which used to be
named "hs_cell_*" before this change.
Here are the perl lines that were used for this rename:
perl -i -pe 's/cell_extension/trn_cell_extension/g;' src/*/*.[ch]
perl -i -pe 's/cell_extension/trn_cell_extension/g;' src/trunnel/hs/*.trunnel
perl -i -pe 's/hs_cell_/trn_cell_/g;' src/*/*.[ch]
perl -i -pe 's/hs_cell_/trn_cell_/g;' src/trunnel/hs/*.trunnel
And then "./scripts/codegen/run_trunnel.sh" with trunnel commit id
613fb1b98e58504e2b84ef56b1602b6380629043.
Fixes #21919
Signed-off-by: David Goulet <dgoulet@torproject.org>
These tests tried to use ridiculously large buffer sizes to check
the sanity-checking in the code; but since the sanity-checking
changed, these need to change too.
Test base64_decode() with odd sized decoded lengths, including
unpadded encodings and padded encodings with "right-sized" output
buffers. Convert calls to base64_decode_nopad() to base64_decode()
because base64_decode_nopad() is redundant.
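For example, one of the kinds of cases now covered is an unpadded input with
an exactly-sized output buffer (values here are illustrative):

  char buf[2];
  /* "aGk" is the unpadded encoding of "hi"; the decoded length is 2. */
  int n = base64_decode(buf, sizeof(buf), "aGk", 3);
  tt_int_op(n, OP_EQ, 2);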