Our mock network put all the guards on the same IPv4 address, which
doesn't fly when we start applying EnforceDistinctSubnets. So in
this commit, I disable EnforceDistinctSubnets when running the old
guard_restriction_t test.
This commit also adds a regression test for #22753.
When the new path selection logic went into place, I accidentally
dropped the code that considered the _family_ of the exit node when
deciding if the guard was usable, and we didn't catch that during
code review.
This patch makes the guard_restriction_t code consider the exit
family as well, and adds some (hopefully redundant) checks for the
case where we lack a node_t for a guard but we have a bridge_info_t
for it.
Fixes bug 22753; bugfix on 0.3.0.1-alpha. Tracked as TROVE-2017-006
and CVE-2017-0377.
This patch fixes a crash in our LZMA module where liblzma will allocate
slightly more memory than its limit allows.
See: https://bugs.torproject.org/22751
This makes our directory code check if a client is trying to fetch a
document that matches a digest from our latest consensus document.
See: https://bugs.torproject.org/22702
This patch ensures that the published_out output parameter is set to the
current consensus cache entry's "valid after" field.
See: https://bugs.torproject.org/22702
As of ac2f6b608a in 0.2.1.19-alpha,
Sebastian fixed bug 888 by marking descriptors as "impossible" by
digest if they got rejected during the
router_load_routers_from_string() phase. This fix stopped clients
and relays from downloading the same thing over and over.
But we never made the same change for descriptors rejected during
dirserv_add_{descriptor,extrainfo}. Instead, we tried to notice in
advance that we'd reject them with dirserv_would_reject().
This notice-in-advance check stopped working once we added
key-pinning and didn't make a corresponding key-pinning change to
dirserv_would_reject() [since a routerstatus_t doesn't include an
ed25519 key].
So as a fix, let's make the dirserv_add_*() functions mark digests
as undownloadable when they are rejected.
Fixes bug 22349; I am calling this a fix on 0.2.1.19-alpha, though
you could also argue for it being a fix on 0.2.7.2-alpha.
This mistake causes two possible bugs. I believe they are both
harmless IRL.
BUG 1: memory stomping
When we call the memset, we write two zero bytes past the end
of packed_cell_t.body. But I think that's harmless in practice,
because the definition of packed_cell_t is:
// ...
typedef struct packed_cell_t {
TOR_SIMPLEQ_ENTRY(packed_cell_t) next;
char body[CELL_MAX_NETWORK_SIZE];
uint32_t inserted_time;
} packed_cell_t;
So we will overwrite either two bytes of inserted_time, or two bytes
of padding, depending on how the platform handles alignment.
If we're overwriting padding, that's safe.
If we are overwriting the inserted_time field, that's also safe: In
every case where we call cell_pack() from connection_or.c, we ignore
the inserted_time field. When we call cell_pack() from relay.c, we
don't set or use inserted_time until right after we have called
cell_pack(). So I believe we're safe in that case too.
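As a sanity check on that claim, here is a small stand-alone sketch (not Tor
code: the CELL_MAX_NETWORK_SIZE value and the simplified next pointer are
assumptions standing in for the real definitions) that reports where the two
stray bytes land on the platform it is compiled for:

#include <stdio.h>
#include <stddef.h>
#include <stdint.h>

#define CELL_MAX_NETWORK_SIZE 514   /* assumed value for the sketch */

typedef struct packed_cell_sketch_t {
  struct packed_cell_sketch_t *next;  /* stands in for TOR_SIMPLEQ_ENTRY */
  char body[CELL_MAX_NETWORK_SIZE];
  uint32_t inserted_time;
} packed_cell_sketch_t;

int main(void)
{
  size_t body_end = offsetof(packed_cell_sketch_t, body) + CELL_MAX_NETWORK_SIZE;
  size_t time_start = offsetof(packed_cell_sketch_t, inserted_time);
  printf("body ends at %zu; inserted_time starts at %zu\n", body_end, time_start);
  printf("the two stray bytes hit %s\n",
         (time_start >= body_end + 2) ? "padding" : "inserted_time");
  return 0;
}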
BUG 2: memory exposure
The original reason for this memset was to avoid the possibility of
accidentally leaking uninitialized RAM to the network. Now
remember, if wide_circ_ids is false on a connection, we shouldn't
actually be sending more than 512 bytes of packed_cell_t.body, so
these two bytes can only leak to the network if there is another bug
somewhere else in the code that sends more data than is correct.
Fortunately, in relay.c, where we allocate packed_cell_t in
packed_cell_new(), we allocate it with tor_malloc_zero(), which
clears the RAM, right before we call cell_pack. So those
packed_cell_t.body bytes can't leak any information.
That leaves the two calls to cell_pack() in connection_or.c, which
use stack-allocated packed_cell_t instances.
In or_handshake_state_record_cell(), we pass the cell's contents to
crypto_digest_add_bytes(). When we do so, we get the number of
bytes to pass using the same setting of wide_circ_ids as we passed
to cell_pack(). So I believe that's safe.
In connection_or_write_cell_to_buf(), we also use the same setting
of wide_circ_ids in both calls. So I believe that's safe too.
I introduced this bug with 1c0e87f6d8
back in 0.2.4.11-alpha; it is bug 22737 and CID 1401591.
This introduces node_supports_v3_hsdir() and node_supports_ed25519_hs_intro(),
which check the routerstatus_t of a node and, if it is not present, fall back
to the routerinfo_t.
This is groundwork for proposal 224 service implementation in #20657.
Signed-off-by: David Goulet <dgoulet@torproject.org>
It is possible that at some point in time a client will encounter unknown or
new fields for an introduction point in a descriptor, so let the client ignore
them for forward compatibility.
Signed-off-by: David Goulet <dgoulet@torproject.org>
If `validate_only` is true, then just validate the configuration without warning
about it. This way, we only emit warnings when the listener is actually opened.
(Otherwise, every time we parse the config we might re-warn and we would
need to keep state; whereas the listeners are only opened once.)
Fixes#4019.
- authdir_mode_handles_descs(options, ROUTER_PURPOSE_BRIDGE) to authdir_mode_bridge(options).
- authdir_mode_handles_descs(options, ROUTER_PURPOSE_GENERAL) to authdir_mode_v3(options).
If COMPRESS_OK occurs but data is neither consumed nor generated,
treat it as a BUG and a COMPRESS_ERROR.
This change is meant to prevent infinite loops in the case where
we've made a mistake in one of our compression backends.
Closes ticket 22672.
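A minimal sketch of the guard (hypothetical names, not the real compress.c
API): if a backend claims success while consuming no input and producing no
output, the driving loop below bails out instead of spinning forever.

#include <stddef.h>

typedef enum { SK_OK, SK_DONE, SK_ERR } sk_result_t;

static sk_result_t
drive(sk_result_t (*backend)(const char **in, size_t *in_left,
                             char **out, size_t *out_left),
      const char *in, size_t in_left, char *out, size_t out_left)
{
  while (1) {
    const size_t in_before = in_left, out_before = out_left;
    sk_result_t r = backend(&in, &in_left, &out, &out_left);
    if (r != SK_OK)
      return r;
    if (in_before == in_left && out_before == out_left) {
      /* "OK" but no progress: treat it as a bug in the backend and give up,
       * rather than looping forever. */
      return SK_ERR;
    }
  }
}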
(This approach can lose accuracy, but it's only in debug-level messages.)
Fixes windows compilation. Bugfix on recent compress.c changes; bug
not in any released Tor.
This prevents us from calling
allowed_anonymous_connection_compression_method() on the unused
guessed method (if any), and rejecting something that was already
safe to use.
Rationale: When we use a guessed compression method, we already gave a
PROTOCOL_WARN when our guess differed from the declared method,
AND we gave a PROTOCOL_WARN when the declared method failed. It is
not a protocol problem that the guessed method failed too; it's just
a recovery attempt that failed.
A cached_dir_t object (for now) is always compressed with
DEFLATE_METHOD, but in handle_get_status_vote() we were using the
general compression-negotiation code to decide what compression to
claim we were using.
This was one of the reasons behind 22502.
Fixes bug 22669; bugfix on 0.3.1.1-alpha.
Move the HTTPProxy option to the deprecated list, so for now it will only warn
users; the feature is still in the code and will be removed in a future
stable version.
Fixes#20575
Signed-off-by: David Goulet <dgoulet@torproject.org>
On a hidden service rendezvous circuit, a BEGIN_DIR cell could be sent
(maliciously), which would trigger a tor_assert() because
connection_edge_process_relay_cell() thought that the circuit was an
or_circuit_t when in reality it was an origin circuit.
Fixes#22494
Reported-by: Roger Dingledine <arma@torproject.org>
Signed-off-by: David Goulet <dgoulet@torproject.org>
This fixes an assertion failure in relay_send_end_cell_from_edge_() when it
was passed an origin circuit with a NULL cpath_layer.
A service rendezvous circuit could do such a thing when a malformed BEGIN cell
is received, but it shouldn't in the first place, because the service needs to
send an END cell on the circuit, which it cannot do without a cpath_layer.
Fixes#22493
Reported-by: Roger Dingledine <arma@torproject.org>
Signed-off-by: David Goulet <dgoulet@torproject.org>
Apparently, the unit tests relied on being able to make ed->x509
link certs even when they hadn't set any server flags in the
options. So instead of making "client" mean "never generate an
ed->x509 cert", we'll have it mean "it's okay not to generate an
ed->x509 cert".
(Going with a minimal fix here, since this is supposed to be a
stable version.)
The tests previously assumed that the link handshake code would be
calling get_my_certs() -- when I changed it to call get_own_cert()
instead for the (case 2) 22460 fix, the tests failed, since the tls
connection wasn't really there.
This change makes us start mocking out the tor_tls_get_own_cert()
function too.
It also corrects the behavior of the mock_get_peer_cert() function
-- it should have been returning a newly allocated copy.
It's okay to call add_ed25519_cert with a NULL argument: so,
document that. Also, add a tor_assert_nonfatal() to catch any case
where we have failed to set own_link_cert when conn_in_server_mode.
Whenever we rotate our TLS context, we change our Ed25519
Signing->Link certificate. But if we've already started a TLS
connection, then we've already sent the old X509 link certificate,
so the new Ed25519 Signing->Link certificate won't match it.
To fix this, we now store a copy of the Signing->Link certificate
when we initialize the handshake state, and send that certificate
as part of our CERTS cell.
Fixes one case of bug 22460; bugfix on 0.3.0.1-alpha.
Previously we could sometimes change our signing key, but not
regenerate the certificates (signing->link and signing->auth) that
were signed with it. Also, we would regularly replace our TLS x.509
link certificate (by rotating our TLS context) but not replace our
signing->link ed25519 certificate. In both cases, the resulting
inconsistency would make other relays reject our link handshakes.
Fixes two cases of bug 22460; bugfix on 0.3.0.1-alpha.
The encrypted_data_length_is_valid() function wasn't correctly validating the
length of the encrypted data of a v3 descriptor. The side effect of this is
that an HSDir would reject the descriptor and ultimately not store it.
Fixes#22447
Signed-off-by: David Goulet <dgoulet@torproject.org>
A fair number of our mock_impl declarations were messed up so that
even our special AM_ETAGSFLAGS couldn't find them.
This should be a whitespace-only patch.
When directory authorities reject a router descriptor due to keypinning,
free the router descriptor rather than leaking the memory.
Fixes bug 22370; bugfix on 0.2.7.2-alpha.
If we add the element itself, we will later free it when we free the
descriptor, and the next time we go to look at MyFamily, things will
go badly.
Fixes the rest of bug 22368; bugfix on 0.3.1.1-alpha.
If we free them here, we will still attempt to access the freed memory
later on, and also we will double-free when we are freeing the config.
Fixes part of bug 22368.
This patch lifts the return value variable, rv, to the beginning of the
function, adds a 'done' label for clean-up and function exit, and makes
the rest of the function set rv and 'goto done;' instead of
cleaning up in multiple places.
See: https://bugs.torproject.org/22305
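For illustration, a minimal sketch of the resulting shape (the buffer and the
do_*_step() helpers are hypothetical, not the code from that ticket):

static int
example_handler(void)
{
  int rv = -1;                  /* pessimistic default */
  char *buf = tor_malloc_zero(64);

  if (!do_first_step(buf))      /* hypothetical helper */
    goto done;
  if (!do_second_step(buf))     /* hypothetical helper */
    goto done;

  rv = 0;                       /* success */
 done:
  tor_free(buf);                /* single cleanup point */
  return rv;
}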
We used to not set the guard state in launch_direct_bridge_descriptor_fetch().
So when a bridge descriptor fetch failed, the guard subsystem would never
learn about the failure (and hence the guard's reachability state would not
be updated).
Directory authorities now reject relays running versions
0.2.9.1-alpha through 0.2.9.4-alpha, because those relays
suffer from bug 20499 and don't keep their consensus cache
up-to-date.
Resolves ticket 20509.
parse_config_line_from_str_verbose() already looks for strings
that are surrounded by quotes, and processes them with
unescape_string(). So things were getting decoded twice, which was
(in turn) playing havoc with backslashes on Windows.
This adds a couple of configure commands to control whether we're
requiring all dependencies to be available locally (default) or not
(--enable-cargo-online-mode). When building from a tarball, we require
the RUST_DEPENDENCIES variable to point to the local repository of
crates. This also adds src/ext/rust as a git submodule that contains
such a local repository for easy setup.
Passing --enable-cargo-online-mode during configure allows cargo to make
network requests while building Tor or running tests. If this flag is
not supplied, the dependencies need to be available in the form of a
local mirror.
This gives an indication in the log that Tor was built with Rust
support, as well as laying some groundwork for further string-returning
APIs to be converted to Rust.
config_get_lines is now split into two functions:
- config_get_lines which is the same as before we had %include
- config_get_lines_include which actually processes %include
Before we've set our options, we can neither call get_options() nor
networkstatus_get_latest_consensus().
Fixes bug 22252; bugfix on 4d9d2553ba
in 0.2.9.3-alpha.
Replace the 177 fallbacks originally introduced in Tor 0.2.9.8 in
December 2016 (of which ~126 were still functional), with a list of
151 fallbacks (32 new, 119 existing, 58 removed) generated in May 2017.
Resolves ticket 21564.
This patch makes us use FALLBACK_COMPRESS_METHOD to try to fetch an
object from the consensus diff manager in case no mutually supported
result was found. This object, if found, is then decompressed and sent to
the client using the spooling system.
See: https://bugs.torproject.org/21667
This patch removes the calls to spooled_resource_new() when trying to
download the consensus. All calls should now be going through the
consdiff manager.
See: https://bugs.torproject.org/21667
This patch ensures that we use the current consensus in the case where
no consensus diff was found or a consensus diff wasn't requested.
See: https://bugs.torproject.org/21667
These still won't do anything till I get the values to be filled in.
Also, I changed the API a little (with corresponding changes in
directory.c) to match things that it's easier to store.
This patch changes handle_get_current_consensus() to make it read the
current consensus document from the consensus caching subsystem.
See: https://bugs.torproject.org/21667
Failure to do this caused an assertion failure with #22246. This
assertion failure can be triggered remotely, so we're tracking it as
medium-severity TROVE-2017-002.
Closes bug 22245; bugfix on 0.0.9rc1, when bandwidth accounting was
first introduced.
Found by Andrey Karpov and reported at https://www.viva64.com/en/b/0507/
This patch refactors connection_dir_client_reached_eof() to use
compression_method_get_human_name() to set description1 and
description2 variables.
See: https://bugs.torproject.org/21667
One (HeapEnableTerminationOnCorruption) is on-by-default since win8;
the other (PROCESS_DEP_DISABLE_ATL_THUNK_EMULATION) supposedly only
affects ATL, which (we think) we don't use. Still, these are good
hygiene. Closes ticket 21953.
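A simplified sketch of the two calls (Windows-only; return values unchecked
here, and the exact placement in Tor's startup code is not shown):

#include <windows.h>

static void
enable_heap_and_dep_hardening(void)
{
  /* Terminate the process on heap corruption (the default since Windows 8). */
  HeapSetInformation(NULL, HeapEnableTerminationOnCorruption, NULL, 0);
  /* Permanently enable DEP, without the ATL thunk emulation carve-out. */
  SetProcessDEPPolicy(PROCESS_DEP_ENABLE |
                      PROCESS_DEP_DISABLE_ATL_THUNK_EMULATION);
}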
Cleanup logic in test_intro_point_registration() invoked tt_assert()
in a way that could cause it to jump backward into the cleanup code if
the assertion failed, causing Coverity to see a double free (CID
1397192). Move the tt_assert() calls into a helper function having
the well-defined task of testing hs_circuitmap_free_all().
Fixes#22231.
A descriptor only contains the curve25519 public key in the enc-key field so
the private key should not be in that data structure. The service data
structures will have access to the full keypair (#20657).
Furthermore, ticket #21871 has highlighted an issue in the proposal 224 about
the encryption key and legacy key being mutually exclusive. This is very wrong
and this commit fixes the code to follow the change to the proposal of that
ticket.
Signed-off-by: David Goulet <dgoulet@torproject.org>
A for-loop in test_channelpadding_timers() would never run because it
was trying to increment a counter up to CHANNELS_TO_TEST/3 after an
earlier block already incremented it to CHANNELS_TO_TEST/2.
Fixes#22221, CID 1405983.
We do this by treating the presence of .z as meaning ZLIB_METHOD,
even if Accept-Encoding does not include deflate.
This fixes bug 22206; bug not in any released tor.
Nothing was setting or inspecting these fields, and they were marked
as OBSOLETE() in config.c -- but somehow we still had them in the
or_options_t structure. Ouch.
There was a bug that got exposed with the removal of ORListenAddress. Within
server_mode(), we now only check ORPort_set, which is set in parse_ports().
However, options_validate() uses server_mode() at the start to check whether we
need to look at the uname, but at that point ORPort_set is still unset because
the port parsing is done just after. This commit fixes that.
Signed-off-by: David Goulet <dgoulet@torproject.org>
Deprecated in 0.2.9.2-alpha, this commit changes it to OBSOLETE() and cleans
up the code associated with it.
Partially fixes#22060
Signed-off-by: David Goulet <dgoulet@torproject.org>
Deprecated in 0.2.9.2-alpha, this commit changes it to OBSOLETE() and cleans
up the code associated with it.
Partially fixes#22060
Signed-off-by: David Goulet <dgoulet@torproject.org>
Deprecated in 0.2.9.2-alpha, this commit changes it to OBSOLETE() and cleans
up the code associated with it.
Partially fixes#22060
Signed-off-by: David Goulet <dgoulet@torproject.org>
Deprecated in 0.2.9.2-alpha, this commit changes it to OBSOLETE() and cleans
up the code associated with it.
Partially fixes#22060
Signed-off-by: David Goulet <dgoulet@torproject.org>
Deprecated in 0.2.9.2-alpha, this commit changes it to OBSOLETE() and cleans
up the code associated with it.
Partially fixes#22060
Signed-off-by: David Goulet <dgoulet@torproject.org>
Deprecated in 0.2.9.2-alpha, this commit changes it to OBSOLETE() and cleans
up the code associated with it.
Partially fixes#22060
Signed-off-by: David Goulet <dgoulet@torproject.org>
Deprecated in 0.2.9.2-alpha, this commit changes it to OBSOLETE() and cleans
up the code associated with it.
Partially fixes#22060
Signed-off-by: David Goulet <dgoulet@torproject.org>
Deprecated in 0.2.9.2-alpha, this commit changes it to OBSOLETE() and cleans
up the code associated with it.
Partially fixes#22060
Signed-off-by: David Goulet <dgoulet@torproject.org>
Deprecated in 0.2.9.2-alpha, this commit changes it to OBSOLETE() and cleans
up the code associated with it.
Partially fixes#22060
Signed-off-by: David Goulet <dgoulet@torproject.org>
Deprecated in 0.2.9.2-alpha, this commit changes it to OBSOLETE() and cleans
up the code associated with it.
Partially fixes#22060
Signed-off-by: David Goulet <dgoulet@torproject.org>
The descriptor fields can't be validated properly during encoding because they
are signed by a descriptor signing key that we don't have in the unit test.
Removing the test case for now but ultimately we need an independent
implementation that can encode descriptors and test our decoding functions with
that.
Signed-off-by: David Goulet <dgoulet@torproject.org>
Create the hs_test_helpers.{c|h} files that contain helper functions to
create introduction points and descriptors, and to compare descriptors.
Used by both the hs cache and hs descriptor tests. Unify them to avoid code
duplication.
Also, this commit fixes the usage of the signing key that was wrongly used
when creating a cross signed certificate.
Signed-off-by: David Goulet <dgoulet@torproject.org>
This change prevents a no-longer-supported behavior where we change
options that would later be written back to torrc with a SAVECONF.
Also, use the "Pointer to final pointer" trick to build the
normalized list, to avoid special-casing the first element.
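For readers unfamiliar with the trick, a minimal self-contained sketch with a
generic list type (not the actual option-list code):

#include <stdlib.h>

typedef struct item_t { struct item_t *next; int value; } item_t;

static item_t *
build_list(const int *values, int n)
{
  item_t *head = NULL;
  item_t **nextp = &head;        /* always points at the next slot to fill */
  for (int i = 0; i < n; ++i) {
    item_t *it = calloc(1, sizeof(*it));
    it->value = values[i];
    *nextp = it;                 /* first and later elements handled identically */
    nextp = &it->next;
  }
  return head;
}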
asan was finding an alignment issue with a cast, so set the field in the
trunnel struct and then encode it instead. Also, enable log capture and
verification.
Checking all of these parameter lists for every single connection every second
seems like it could be an expensive waste.
Updating globally cached versions when there is a new consensus will still
allow us to apply consensus parameter updates to all existing connections
immediately.
IMO, these tests should be calling options_init() to properly set everything
to default values, but when that is done, about a dozen tests fail. Setting
the one default value that broke the tests for my branch. Sorry for being
lame.
Accomplished via the following:
1. Use NETINFO cells to determine if both peers will agree on canonical
status. Prefer connections where they agree to those where they do not.
2. Alter channel_is_better() to prefer older orconns in the case of multiple
canonical connections, and use the orconn with more circuits on it in case
of age ties.
Also perform some hourly accounting on how many of these types of connections
there are and log it at info or notice level.
This unifies CircuitIdleTimeout and PredictedCircsRelevanceTime into a single
option, and randomizes it.
It also gives us control over the default value as well as relay-to-relay
connection lifespan through the consensus.
Conflicts:
src/or/circuituse.c
src/or/config.c
src/or/main.c
src/test/testing_common.c
This defense will cause Cisco, Juniper, Fortinet, and other routers operating
in the default configuration to collapse netflow records that would normally
be split due to the 15 second flow idle timeout.
Collapsing these records should greatly reduce the utility of default netflow
data for correlation attacks, since all client-side records should become 30
minute chunks of total bytes sent/received, rather than creating multiple
separate records for every webpage load/ssh command interaction/XMPP chat/whatever
else happens to be inactive for more than 15 seconds.
The defense adds consensus parameters to govern the range of timeout values
for sending padding packets, as well as for keeping connections open.
The defense only sends padding when connections are otherwise inactive, and it
does not pad connections used solely for directory traffic at all. By default
it also doesn't pad inter-relay connections.
Statistics on the total padding in the last 24 hours are exported to the
extra-info descriptors.
Right now it just sets an if-modified-since header, but it's about
to get even bigger.
This patch avoids changing indentation; the next patch will be
whitespace fixes.
We need to index diffs by the digest-as-signed of their source
consensus, so that we can find them even from consensuses whose
signatures are encoded differently.
In this patch I add support for "delete through end of file" in our
ed diff handler, and generate our diffs so that they remove
everything in the consensus after the signatures begin.
test_options_validate_impl() incorrectly executed subsequent phases of
config parsing and validation after an expected error. This caused
msg to leak when those later phases (which would likely produce errors
as well) overwrote it.
This was introduced in 90562fc23a, which added a code
path where we pass a NULL pointer for the HSDir fingerprint to the control
event subsystem. The HS desc failed function wasn't properly handling a NULL
value for that pointer.
Two unit tests are also added in this commit to make sure we properly handle
the case of a NULL HSDir fingerprint as well as NULL content.
Fixes#22138
Signed-off-by: David Goulet <dgoulet@torproject.org>
Code movement in the commit introducing tests for #22103 uncovered a
latent memory management bug.
Refactor the log message checking from test_options_checkmsgs() into a
helper test_options_checklog(). This avoids a memory leak (and
possible double-free) in a test failure condition.
Don't reuse variables (especially pointers to allocated memory!) for
multiple unrelated purposes.
Fixes CID 1405778.
Also factor out the error message comparisons from
test_options_validate_impl() into a separate function so it can check
for error messages in different phases of config parsing.
config_parse_interval() and config_parse_msec_interval() were checking
whether the variable "ok" (a pointer to an int) was null, rather than
dereferencing it. Both functions are static, and all existing callers
pass a valid pointer to those static functions. The callers do check
the variables (also confusingly named "ok") whose addresses they pass
as the "ok" arguments, so even if the pointer check were corrected to
be a dereference, it would be redundant.
Fixes#22103.
This was a >630-line function, which doesn't make anybody happy. It
was also mostly composed of a bunch of if-statements that handled
different directory responses differently depending on the original
purpose of the directory connection. The logical refactoring here
is to move the body of each switch statement into a separate handler
function, and to invoke those functions from a separate switch
statement.
This commit leaves whitespace mostly untouched, for ease of review.
I'll reindent in the next commit.
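The resulting shape is roughly the following (handler names here are
illustrative, not necessarily the exact ones in directory.c):

switch (conn->base_.purpose) {
  case DIR_PURPOSE_FETCH_CONSENSUS:
    rv = handle_response_fetch_consensus(conn, &args);
    break;
  case DIR_PURPOSE_FETCH_CERTIFICATE:
    rv = handle_response_fetch_certificate(conn, &args);
    break;
  /* ... one small handler per purpose ... */
  default:
    tor_assert_nonfatal_unreached();
    rv = -1;
    break;
}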
These required some special-casing, since some of the assumptions
about real compression algorithms don't actually hold for the
identity transform. Specifically, we had assumed:
- compression functions typically change the lengths of their
inputs.
- decompression functions can detect truncated inputs
- compression functions have detectable headers
None of those is true for the identity transformation.
This will allow us to treat NO_METHOD as a real compression method,
and to simplify code that currently does
if (compressing) {
compress
} else {
copy
}
Inform the control port with an HS_DESC failed event when the client is unable
to pick an HSDir. It's followed by an empty HS_DESC_CONTENT event. In order to
achieve that, some control port code had to be modified to accept a NULL HSDir
identity digest.
This commit also adds a trigger of a failed event when we are unable to
base64-decode the descriptor cookie.
Fixes#22042
Signed-off-by: David Goulet <dgoulet@torproject.org>
Introduce a way to optionally enable Rust integration for our builds. No
actual Rust code is added yet and specifying the flag has no effect
other than failing the build if rustc and cargo are unavailable.
Increase the maximum allowed size passed to mprotect(PROT_WRITE)
from 1MB to 16MB. This was necessary with the glibc allocator
in order to allow worker threads to allocate more memory --
which in turn is necessary because of our new use of worker
threads for compression.
Closes ticket #22096. Found while working on #21648.
This patch changes two things in our LZMA compression backend:
- We lower the preset values for all `compression_level_t` values to
ensure that we can run the LZMA decoder with less than 65 MB of memory
available. This seems to have a small impact on the real world usage
and fits well with our needs.
- We set the upper bound of memory usage for the LZMA decoder to 16 MB.
See: https://bugs.torproject.org/21665
There were two issues here: first, zstd didn't exhibit the right
behavior unless it got a very large input. That's fine.
The second issue was a genuine bug, fixed by 39cfaba9e2.
This patch refactors our compression tests such that deflate, gzip,
lzma, and zstd are all tested using the same code.
Additionally we use run-time checks to see if the given compression
method is supported instead of using HAVE_LZMA and HAVE_ZSTD.
See: https://bugs.torproject.org/22085
This patch adds support for measuring the approximated memory usage by
the individual `tor_zstd_compress_state_t` object instances.
See: https://bugs.torproject.org/22066
This patch fixes the documentation string for `tor_uncompress()` to
ensure that it does not explicitly mention zlib or gzip since we now
support multiple compression backends.
This patch adds support for measuring the approximated memory usage by
the individual `tor_lzma_compress_state_t` object instances.
The LZMA library provides the functions `lzma_easy_encoder_memusage()`
and `lzma_easy_decoder_memusage()`, which are used to find the estimated
usage in bytes.
See: https://bugs.torproject.org/22066
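A stand-alone usage sketch of those liblzma calls (the preset value here is
just an example, not necessarily the one Tor picks):

#include <lzma.h>
#include <stdio.h>

int main(void)
{
  const uint32_t preset = 6;  /* example preset */
  printf("encoder: ~%llu bytes, decoder: ~%llu bytes\n",
         (unsigned long long)lzma_easy_encoder_memusage(preset),
         (unsigned long long)lzma_easy_decoder_memusage(preset));
  return 0;
}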
We hadn't needed this before, because most getpid() callers on Linux
were looking at the vDSO version of getpid(). I don't know why at
least one version of OpenSSL seems to be ignoring the vDSO, but this
change should fix it.
Fixes bug 21943; bugfix on 0.2.5.1-alpha when the sandbox was
introduced.
The `tor_compress_state_t` data-type is used as a wrapper around the
more specialized state-types used by the various compression backends.
This patch ensures that the overhead of this "thin" wrapper type is
included in the value returned by `tor_compress_get_total_allocation()`.
See: https://bugs.torproject.org/22066
OS X's ar(1) doesn't allow us to create an archive with no object files.
This patch adds a stub file with a stub function in it to make OS X
happy again.
Since we have a streaming API for each compression backend, we don't
need a non-streaming API for each: we can build a common
non-streaming API at the front-end.
This commit adds the src/trace directory containing the basics for our tracing
subsystem. It is not used in the code base. The "src/trace/debug.h" file
contains an example on how we can map our tor trace events to log_debug().
The tracing subsystem can only be enabled by a tracing framework at compile
time. This commit introduces the "--enable-tracing-debug" option that will
make all "tor_trace()" functions be mapped to "log_debug()".
Closes#13802
Signed-off-by: David Goulet <dgoulet@torproject.org>
That log statement can be triggered if somebody on the Internet behaves badly,
which is possible with a buggy implementation, for instance.
Fixes#21293
Signed-off-by: David Goulet <dgoulet@torproject.org>
This patch renames the `compress` parameter of the
`tor_zlib_compress_new()` function to `_compress` to avoid shadowing the
`compress()` function in zlib.h.
This patch splits up `tor_compress_memory_level()` into static functions
in the individual compression backends, which allows us to tune the
values per compression backend rather than globally.
See: https://bugs.torproject.org/21662
Use a switch-statement in `tor_compress()` and `tor_uncompress()` for
the given `compress_method_t` parameter. This allows us to have the
compiler detect if we forgot add a handler in these functions for a
newly added enumeration value.
See: https://bugs.torproject.org/21662
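A self-contained sketch of why the switch helps (the enum here only loosely
mirrors compress_method_t):

typedef enum {
  SK_NO_METHOD, SK_GZIP, SK_ZLIB, SK_LZMA, SK_ZSTD,
} sk_method_t;

static const char *
method_label(sk_method_t m)
{
  switch (m) {
    case SK_NO_METHOD: return "identity";
    case SK_GZIP:      return "gzip";
    case SK_ZLIB:      return "deflate";
    case SK_LZMA:      return "lzma";
    case SK_ZSTD:      return "zstd";
    /* No default: if a new sk_method_t value is added without a case here,
     * the compiler warns (-Wswitch), catching the omission at build time. */
  }
  return "unknown";
}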
This patch ensures that Tor checks if a given compression method is
supported before printing the version string when calling `tor
--library-versions`.
Additionally, we use `tor_compress_supports_method()` to check if a
given method is supported for Tor's start-up version string, but here
we print "N/A" if a given compression method is unavailable.
See: https://bugs.torproject.org/21662
This patch adds `tor_compress_version_str()` and
`tor_compress_header_version_str()` to get the version strings of the
different compression schema providers. Both functions return `NULL` in
case a given `compress_method_t` is unknown or unsupported.
See: https://bugs.torproject.org/21662
This patch adds support for checking if a given `compress_method_t` is
supported by the currently running Tor instance using
`tor_compress_supports_method()`.
See: https://bugs.torproject.org/21662
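A usage sketch (the log call is only illustrative):

if (!tor_compress_supports_method(LZMA_METHOD)) {
  log_info(LD_GENERAL, "This Tor was built without LZMA support.");
}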
This patch adds the `tor_compress_get_total_allocation()` which returns
an approximate number of bytes currently in use by all the different
compression backends.
See: https://bugs.torproject.org/21662
This patch adds support for enabling Zstandard in our configure
script. By default, the --enable-zstd option is set to "auto", which means that
if libzstd is available we'll build Tor with Zstandard support.
See: https://bugs.torproject.org/21662
This patch adds support for enabling LZMA in our configure
script. By default, the --enable-lzma option is set to "auto", which
means that if liblzma is available we'll build Tor with LZMA support.
See: https://bugs.torproject.org/21662
This patch changes the way `tor_compress_new()`,
`tor_compress_process()`, and `tor_compress_free()` handles different
compression methods. This should give us compiler warnings in case an
additional compression method is added, but the developer forgets to add
handlers in the three aforementioned functions.
See https://bugs.torproject.org/21663
This patch refactors the `torgzip` module to allow us to extend a common
compression API to support multiple compression backends.
Additionally we move the gzip/zlib code into its own module under the
name `compress_zlib`.
See https://bugs.torproject.org/21664
Remove duplicate code that validates a service object which is now in
rend_validate_service().
Add some comments on why we nullify a service in the code path of
rend_config_services().
Signed-off-by: David Goulet <dgoulet@torproject.org>
This new function validates a service object and is used every time a service
is successfully loaded from the configuration file.
It is currently copying the validation that rend_add_service() also does which
means both functions validate. It will be decoupled in the next commit.
Signed-off-by: David Goulet <dgoulet@torproject.org>
The item referred to the cdm_ht_set_status() case where the item was
not already in the hashtable. But that already happens naturally
when we scan the directory on startup... and we already have a test
for that.
This commit adds some helper functions to look up the diff from one
consensus and to make sure that applying it leads to another. Then
we add them throughout the existing test cases. Doing this turned
up a reference-leaking bug in consensus_diff_worker_replyfn.
In several places in the old code, we had problems that only an
in-memory index of diff status could solve, including:
* Remembering which diffs were in-progress, so that we didn't
re-launch them.
* Remembering which diffs had failed, so that we didn't try to
recompute them over and over.
* Having a fast way to look up the diff from a given consensus to
the latest consensus of a given flavor.
This patch adds a hashtable mapping from (flavor, source-consensus digest) to
solve the problem. It maps to a cache entry handle, rather than to
a cache entry directly, so that it doesn't affect the reference
counts of the cache entries, and so that we don't otherwise need to
worry about lifetime management.
This conscache flag tells conscache that it should munmap the
document as soon as reasonably possible, since its usage pattern is
expected to not have a lot of time-locality.
Initial tests. These just try adding a few consensuses, looking
them up, and making sure that consensus diffs are generated in a
more or less reasonable-looking way. It's enough for 87% coverage,
but it leaves out a lot of functionality.
This module's job is to remember old consensus documents, to
calculate their diffs on demand, and to cache the results.
There are some incomplete points in this code; I've marked them with
"XXXX". I intend to fix them in separate commits, since I believe
doing it in separate commits will make the branch easier to review.
The GETINFO extra-info/digest/<digest> command broke in commit 568dc27a19,
which refactored the base16_decode() API to return the decoded length.
Unfortunately, that if() condition should have checked for the correct length
instead of an error, which broke the command in tor-0.2.9.1-alpha.
Fixes#22034
Signed-off-by: David Goulet <dgoulet@torproject.org>
This commit mainly moves the responsibility for directory request
construction one level higher. It also allows a directory request
to contain a pointer to a routerstatus, which will get turned into
the correct contact information at the last minute.
We do dump HS stats now at log info every time the intro circuit creation retry
period limit has been reached. However, the log was upgraded to warning if we
actually were over the elapsed time (plus an extra slop).
It is actually something that will happen in tor in the normal case. For
instance, if the network goes down for 10 minutes and then back up again,
have_completed_a_circuit() returns false, which results in never updating that
retry period marker for a service.
Fixes#22032
Signed-off-by: David Goulet <dgoulet@torproject.org>
This patch makes the internal `get_memlevel()` a part of the public
compression API as `tor_compress_memory_level()`.
See https://bugs.torproject.org/21663
This patch refactors our streaming compression code to allow us to
extend it with non-zlib/non-gzip based compression schemas.
See https://bugs.torproject.org/21663
To allow us to use the API names `tor_compress` and `tor_uncompress` as
the main entry points for all compression/uncompression and not just gzip
and zlib.
See https://bugs.torproject.org/21663
This patch removes the unused `is_gzip_supported()` and changes the
documentation string around the `compress_method_t` enumeration to
explicitly state that `ZLIB_METHOD` and `GZIP_METHOD` are both
always supported.
Zlib version 1.2.0 was released on the 9th of March, 2003, according to
their ChangeLog.
See https://bugs.torproject.org/21663
This patch changes some of the tt_assert() usage in test_util_gzip() to
use tt_int_op() to get better error messages upon failure.
Additionally we move to use explicit NULL checks.
See https://bugs.torproject.org/21663
The consdiff generation logic would skip over lines added at the start of the
second file, and generate a diff that it would then immediately refuse because
it couldn't be used to reproduce the second file from the first. Fixes#21996.
The reason for making the temporary list public is to keep it encapsulated in
the rendservice subsystem so the prop224 code does not have direct access to
it and can only affect it through the rendservice pruning function.
It also has been modified to not take lists as arguments but rather use the
global lists (main and temporary ones), because prop224 code will call it to
actually prune the rendservice's lists. The function does the needed rotation
of pointers between those lists and then prunes if needed.
In order to make the unit test work and not completely horrible, there is an
"impl_" version of the function that doesn't free memory; it simply moves
pointers around. It is directly used in the unit test, and two setter functions
for those lists' pointers have been added only for unit tests.
Signed-off-by: David Goulet <dgoulet@torproject.org>
Now we have separate getters and setters for service-side and relay-side. I
took this approach over adding arguments to the already existing methods to
have more explicit type-checking, and also because some functions would grow
too large and dirty.
This commit also fixes every callsite to use the new function names which
modifies the legacy HS (v2) and the prop224 (v3) code.
Signed-off-by: David Goulet <dgoulet@torproject.org>
One of the goals of this change is to make the trunnel API/ABI more explicit,
so we namespace them with "trn_*". Furthermore, we can now create
hs_cells.[ch] without having to confuse it with trunnel, which used to be
"hs_cell_*" before that change.
Here are the perl lines that were used for this rename:
perl -i -pe 's/cell_extension/trn_cell_extension/g;' src/*/*.[ch]
perl -i -pe 's/cell_extension/trn_cell_extension/g;' src/trunnel/hs/*.trunnel
perl -i -pe 's/hs_cell_/trn_cell_/g;' src/*/*.[ch]
perl -i -pe 's/hs_cell_/trn_cell_/g;' src/trunnel/hs/*.trunnel
And then "./scripts/codegen/run_trunnel.sh" with trunnel commit id
613fb1b98e58504e2b84ef56b1602b6380629043.
Fixes#21919
Signed-off-by: David Goulet <dgoulet@torproject.org>
Pinning EntryNodes along with hidden services can be harmful (for
instance #14917 and #21155) so at the very least warn the operator if this is
the case.
Fixes#21155
Signed-off-by: David Goulet <dgoulet@torproject.org>
These tests tried to use ridiculously large buffer sizes to check
the sanity-checking in the code; but since the sanity-checking
changed, these need to change too.
Now that base64_decode() checks the destination buffer length against
the actual number of bytes as they're produced, shared_random.c no
longer needs the "SR_COMMIT_LEN+2" workaround.
Remove base64_decode_nopad() because it is redundant now that
base64_decode() correctly handles both padded and unpadded base64
encodings with "right-sized" output buffers.
Test base64_decode() with odd sized decoded lengths, including
unpadded encodings and padded encodings with "right-sized" output
buffers. Convert calls to base64_decode_nopad() to base64_decode()
because base64_decode_nopad() is redundant.
base64_decode() was applying an overly conservative check on the
output buffer length that could incorrectly produce an error if the
input encoding contained padding or newlines. Fix this by checking
the output buffer length against the actual decoded length produced
during decoding.
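For example, a "right-sized" output buffer now works even when the input is
padded (a sketch of the intended behavior, not a test from the patch):

char out[11];   /* exactly the decoded length of "hello world" */
int n = base64_decode(out, sizeof(out), "aGVsbG8gd29ybGQ=", 16);
/* Previously the conservative length check could reject this because the
 * input ends with '='; now it succeeds and n is the decoded length (11). */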
When we "fixed" #18280 in 4e4a7d2b0c
in 0291 it appears that we introduced a bug: The base32_encode
function can read off the end of the input buffer, if the input
buffer size modulo 5 is not equal to 0 or 3.
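As a worked illustration (assuming the encoder consumes its input 5 bits at a
time and peeks one byte ahead whenever more encoded bits remain): a 4-byte
input has 32 bits, so output characters are produced for bit offsets 0, 5,
..., 30; producing the character at offset 25 peeks at byte index
25/8 + 1 = 4, one byte past the end of the buffer. With a 20-byte input
(DIGEST_LEN), the last such peek lands at byte 19, safely in bounds.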
This is not completely horrible, for two reasons:
* The extra bits that are read are never actually used: so this
is only a crash when asan is enabled, in the worst case. Not a
data leak.
* The input sizes passed to base32_encode are only ever multiples
of 5. They are all either DIGEST_LEN (20), REND_SERVICE_ID_LEN
(10), sizeof(rand_bytes) in addressmap.c (10), or an input in
crypto.c that is forced to a multiple of 5.
So this bug can't actually trigger in today's Tor.
Closes bug 21894; bugfix on 0.2.9.1-alpha.
It looks like the base32_encoded_size/base64_encode_size APIs are inconsistent
not only in the number of "d"s they have, but also in whether they
count the terminating NUL. Taylor noted this in 86477f4e3f,
but I think we should note the inconsistency more loudly in order
to avoid trouble.
(I ran into trouble with this when writing 30b13fd82e243713c6a0d.)
Note down in the routerstatus_t of a node if the router supports the HSIntro=4
version for the ed25519 authentication key and HSDir=2 version for v3
descriptor support.
Signed-off-by: David Goulet <dgoulet@torproject.org>
Some of those defines will be used by the v3 HS protocol so move them to a
common header out of rendservice.c. This is also ground work for prop224
service implementation.
Signed-off-by: David Goulet <dgoulet@torproject.org>
Another building block for prop224 service work. This also makes the function
take specific arguments instead of the or_options_t object.
Signed-off-by: David Goulet <dgoulet@torproject.org>
When a client tried to connect to an invalid port of a hidden service, a
warning was printed:
[warn] connection_edge_process_relay_cell (at origin) failed.
This is because the connection subsystem wants to close the circuit because
the port can't be found and then returns a negative reason to achieve that.
However, that specific situation triggered a warning. This commit prevents it
for the specific case of an invalid hidden service port.
Fixes#16706
Signed-off-by: David Goulet <dgoulet@torproject.org>
In order to avoid src/or/hs_service.o containing no symbols, and thus making
clang throw a warning, the functions are now exposed, not just to unit tests.
Signed-off-by: David Goulet <dgoulet@torproject.org>
Check that route_len_for_purpose() (helper for new_route_len())
correctly fails a non-fatal bug assertion if it encounters an
unhandled circuit purpose when it is called with exit node info.
Add a new helper function route_len_for_purpose(), which explicitly
lists all of the known circuit purposes for a circuit with a chosen
exit node (unlike previously, where the default route length for a
chosen exit was DEFAULT_ROUTE_LEN + 1 except for two purposes). Add a
non-fatal assertion for unhandled purposes that conservatively returns
DEFAULT_ROUTE_LEN + 1.
Add copious comments documenting which circuits need an extra hop and
why.
Thanks to nickm and dgoulet for providing background information.
This change makes it so those APIs will not require prior
inclusion of openssl headers. I've left some APIs alone-- those
will change to be extra-private.
The old implementation had duplicated code in a bunch of places, and
it interspersed spool-management with resource management. The new
implementation should make it easier to add new resource types and
maintain the spooling code.
Closing ticket 21651.
When calculating max sampled size, Tor would only count the number of
bridges in torrc, without considering that our state file might already
have sampled bridges in it. This caused problems when people swap
bridges, since the following error would trigger:
[warn] Not expanding the guard sample any further; just hit the
maximum sample threshold of 1