The temporary list is kept private so that it stays encapsulated in the
rendservice subsystem: the prop224 code does not have direct access to it and
can only affect it through the rendservice pruning function.
The pruning function has also been modified to no longer take lists as
arguments but rather to use the global lists (the main and temporary ones),
because the prop224 code will call it to actually prune the rendservice
lists. The function does the needed rotation of pointers between those lists
and then prunes if needed.
To keep the unit tests workable, there is an "impl_" version of the function
that doesn't free memory; it simply moves pointers around. It is used
directly in the unit tests, and two setter functions for those lists'
pointers have been added solely for unit testing.
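As an illustration of that arrangement, here is a minimal sketch with
simplified names (not the exact rendservice.c code; it assumes Tor's
smartlist helpers and the STATIC test-visibility macro):
/* The lists stay private to this file, so prop224 code can only affect
 * them through the public pruning entry point. */
static smartlist_t *rend_service_list = NULL;          /* main list */
static smartlist_t *rend_service_staging_list = NULL;  /* temporary list */
/* "impl_" variant: rotates pointers between the two lists but frees
 * nothing, so unit tests can inspect the result. */
STATIC void
rend_service_prune_list_impl_(void)
{
  smartlist_t *pruned = rend_service_list;
  rend_service_list = rend_service_staging_list;
  rend_service_staging_list = pruned;
  /* ...move still-configured services back onto the main list... */
}
/* Public entry point called by the prop224 code: rotate, then free
 * whatever was actually pruned. */
void
rend_service_prune_list(void)
{
  rend_service_prune_list_impl_();
  /* ...free the services left behind on the staging list... */
}
/* Test-only setters so the unit tests can install their own lists. */
STATIC void
set_rend_service_list(smartlist_t *new_list)
{
  rend_service_list = new_list;
}
STATIC void
set_rend_service_staging_list(smartlist_t *new_list)
{
  rend_service_staging_list = new_list;
}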
Signed-off-by: David Goulet <dgoulet@torproject.org>
Now we have separate getters and setters for the service side and the relay
side. I took this approach over adding arguments to the already-existing
functions in order to have more explicit type checking, and also because some
functions would have grown too large and messy.
This commit also fixes every call site to use the new function names, which
touches both the legacy HS (v2) and the prop224 (v3) code.
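For illustration, the kind of split described here looks roughly like this
(hypothetical declarations; the actual names and signatures in the commit may
differ):
/* Separate accessors per side mean the compiler checks the circuit type
 * for us instead of relying on a "side" flag argument. */
origin_circuit_t *
hs_circuitmap_get_intro_circ_v3_service_side(const ed25519_public_key_t *key);
or_circuit_t *
hs_circuitmap_get_intro_circ_v3_relay_side(const ed25519_public_key_t *key);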
Signed-off-by: David Goulet <dgoulet@torproject.org>
One of the goals of this change is to make the trunnel API/ABI more explicit,
so we namespace it with "trn_*". Furthermore, we can now create hs_cells.[ch]
without confusing it with the trunnel code, which used the "hs_cell_*" prefix
before this change.
Here are the perl lines that were used for this rename:
perl -i -pe 's/cell_extension/trn_cell_extension/g;' src/*/*.[ch]
perl -i -pe 's/cell_extension/trn_cell_extension/g;' src/trunnel/hs/*.trunnel
perl -i -pe 's/hs_cell_/trn_cell_/g;' src/*/*.[ch]
perl -i -pe 's/hs_cell_/trn_cell_/g;' src/trunnel/hs/*.trunnel
And then "./scripts/codegen/run_trunnel.sh" with trunnel commit id
613fb1b98e58504e2b84ef56b1602b6380629043.
Fixes #21919
Signed-off-by: David Goulet <dgoulet@torproject.org>
Pinning EntryNodes together with hidden services can be harmful (see for
instance #14917 and #21155), so at the very least warn the operator if this
is the case.
Fixes #21155
Signed-off-by: David Goulet <dgoulet@torproject.org>
These tests tried to use ridiculously large buffer sizes to check
the sanity-checking in the code; but since the sanity-checking
changed, these need to change too.
Now that base64_decode() checks the destination buffer length against
the actual number of bytes as they're produced, shared_random.c no
longer needs the "SR_COMMIT_LEN+2" workaround.
Remove base64_decode_nopad() because it is redundant now that
base64_decode() correctly handles both padded and unpadded base64
encodings with "right-sized" output buffers.
Test base64_decode() with odd-sized decoded lengths, including unpadded
encodings and padded encodings with "right-sized" output buffers. Convert
calls to base64_decode_nopad() to base64_decode(), because
base64_decode_nopad() is redundant.
base64_decode() was applying an overly conservative check on the
output buffer length that could incorrectly produce an error if the
input encoding contained padding or newlines. Fix this by checking
the output buffer length against the actual decoded length produced
during decoding.
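As a small standalone illustration of the difference (this is not Tor's
actual check, just the arithmetic involved):
#include <stdio.h>
#include <string.h>
/* Exact decoded length of a padded base64 string with no newlines:
 * every 4 input chars yield 3 bytes, minus one byte per trailing '='. */
static size_t
base64_exact_decoded_len(const char *src)
{
  size_t len = strlen(src);
  size_t pad = 0;
  if (len >= 1 && src[len - 1] == '=') pad++;
  if (len >= 2 && src[len - 2] == '=') pad++;
  return (len / 4) * 3 - pad;
}
int
main(void)
{
  const char *enc = "aGVsbG8=";                       /* "hello", 5 bytes */
  size_t conservative = ((strlen(enc) + 3) / 4) * 3;  /* 6: overestimates */
  size_t exact = base64_exact_decoded_len(enc);       /* 5: right-sized */
  printf("conservative=%zu exact=%zu\n", conservative, exact);
  return 0;
}
A right-sized 5-byte destination buffer is enough here, but the conservative
estimate would have rejected it.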
When we "fixed" #18280 in 4e4a7d2b0c
in 0291 it appears that we introduced a bug: The base32_encode
function can read off the end of the input buffer, if the input
buffer size modulo 5 is not equal to 0 or 3.
This is not completely horrible, for two reasons:
* The extra bits that are read are never actually used: so, in the worst
case, this is only a crash when ASan is enabled, not a data leak.
* The input sizes passed to base32_encode are only ever multiples
of 5. They are all either DIGEST_LEN (20), REND_SERVICE_ID_LEN
(10), sizeof(rand_bytes) in addressmap.c (10), or an input in
crypto.c that is forced to a multiple of 5.
So this bug can't actually trigger in today's Tor.
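The following standalone sketch models the pre-fix read pattern, assuming the
guard for reading the second input byte was written in terms of bits left to
encode rather than bytes available (an assumption, not the exact Tor code);
it reproduces the modulo-5 pattern described above:
#include <stdio.h>
#include <stddef.h>
static void
check_reads(size_t srclen)
{
  size_t nbits = srclen * 8;
  int oob = 0;
  for (size_t bit = 0; bit < nbits; bit += 5) {
    size_t first = bit / 8;            /* always read: src[first] */
    (void)first;
    if (bit + 5 < nbits) {             /* assumed pre-fix guard, in bits */
      size_t second = bit / 8 + 1;     /* also read: src[second] */
      if (second >= srclen)
        oob = 1;
    }
  }
  printf("srclen=%zu (mod 5 = %zu): %s\n", srclen, srclen % 5,
         oob ? "reads one byte past the end" : "stays in bounds");
}
int
main(void)
{
  for (size_t len = 1; len <= 10; ++len)
    check_reads(len);
  return 0;
}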
Closes bug 21894; bugfix on 0.2.9.1-alpha.
It looks like the base32_encoded_size/base64_encode_size APIs are
inconsistent not only in the number of "d"s they have, but also in whether
they count the terminating NUL. Taylor noted this in 86477f4e3f,
but I think we should note the inconsistency more loudly in order
to avoid trouble.
(I ran into trouble with this when writing 30b13fd82e243713c6a0d.)
Note down in the routerstatus_t of a node whether the router supports the
HSIntro=4 protocol version (needed for the ed25519 authentication key) and
the HSDir=2 protocol version (needed for v3 descriptor support).
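A sketch of the idea, assuming Tor's protover helper
protocol_list_supports_protocol(); the routerstatus_t field names and this
helper are assumptions, not necessarily what the commit uses:
static void
note_hs_support(routerstatus_t *rs, const char *protocols_line)
{
  /* One-bit flags assumed to exist on routerstatus_t:
   *   unsigned int supports_ed25519_hs_intro : 1;   (HSIntro >= 4)
   *   unsigned int supports_v3_hsdir : 1;           (HSDir >= 2)   */
  rs->supports_ed25519_hs_intro =
    protocol_list_supports_protocol(protocols_line, PRT_HSINTRO, 4);
  rs->supports_v3_hsdir =
    protocol_list_supports_protocol(protocols_line, PRT_HSDIR, 2);
}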
Signed-off-by: David Goulet <dgoulet@torproject.org>
Some of those defines will be used by the v3 HS protocol, so move them out of
rendservice.c into a common header. This is also groundwork for the prop224
service implementation.
Signed-off-by: David Goulet <dgoulet@torproject.org>
Another building block for the prop224 service work. This also makes the
function take specific arguments instead of the or_options_t object.
Signed-off-by: David Goulet <dgoulet@torproject.org>
When a client tried to connect to an invalid port of a hidden service, a
warning was printed:
[warn] connection_edge_process_relay_cell (at origin) failed.
This is because the connection subsystem wants to close the circuit, since
the port can't be found, and it returns a negative reason code to achieve
that. However, that specific situation triggered a warning. This commit
suppresses the warning for the specific case of an invalid hidden service
port.
Fixes #16706
Signed-off-by: David Goulet <dgoulet@torproject.org>
In order to avoid src/or/hs_service.o containing no symbols, which makes
clang throw a warning, the functions are now exposed all the time instead of
only to the unit tests.
Signed-off-by: David Goulet <dgoulet@torproject.org>
Check that route_len_for_purpose() (a helper for new_route_len())
correctly triggers a non-fatal bug assertion if it encounters an
unhandled circuit purpose when it is called with exit node info.
Add a new helper function route_len_for_purpose(), which explicitly
lists all of the known circuit purposes for a circuit with a chosen
exit node (unlike previously, where the default route length for a
chosen exit was DEFAULT_ROUTE_LEN + 1 except for two purposes). Add a
non-fatal assertion for unhandled purposes that conservatively returns
DEFAULT_ROUTE_LEN + 1.
Add copious comments documenting which circuits need an extra hop and
why.
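Structurally, the helper looks something like the sketch below; the actual
purpose list is not reproduced here, and the sketch assumes Tor's
DEFAULT_ROUTE_LEN constant and tor_assert_nonfatal_unreached() macro:
/* Structural sketch only; the real per-purpose cases are omitted. */
static int
route_len_for_purpose(uint8_t purpose, const extend_info_t *exit_ei)
{
  if (!exit_ei)
    return DEFAULT_ROUTE_LEN;       /* no chosen exit: default length */
  switch (purpose) {
    /* ...explicit cases for every known purpose with a chosen exit,
     * returning DEFAULT_ROUTE_LEN or DEFAULT_ROUTE_LEN + 1... */
    default:
      /* Unhandled purpose: complain non-fatally, then pick the
       * conservative, longer route. */
      tor_assert_nonfatal_unreached();
      return DEFAULT_ROUTE_LEN + 1;
  }
}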
Thanks to nickm and dgoulet for providing background information.
This change makes it so that those APIs will not require prior
inclusion of openssl headers. I've left some APIs alone; those
will change to be extra-private.
The old implementation had duplicated code in a bunch of places, and
it interspersed spool-management with resource management. The new
implementation should make it easier to add new resource types and
maintain the spooling code.
Closing ticket 21651.
When calculating the maximum sample size, Tor would only count the number of
bridges in torrc, without considering that our state file might already
have sampled bridges in it. This caused problems when people swapped
bridges, since the following error would trigger:
[warn] Not expanding the guard sample any further; just hit the
maximum sample threshold of 1
This patch changes the way we decide when to check whether it's time
to rotate and/or expire our onion keys. Due to proposal #274 the keys can now
rotate at different frequencies than before, so we do the check once an hour
when our Tor daemon is running in server mode.
This should allow us to quickly notice if the network consensus
parameters have changed while we are running, instead of having to wait
until the current parameter's timeout value has passed.
See: https://bugs.torproject.org/21641
This patch adds a new timer that is executed when it is time to expire
our current set of old onion keys. Because of proposal #274 this can no
longer be assumed to be at the same time we rotate our onion keys since
they will be updated less frequently.
See: https://bugs.torproject.org/21641
This patch adds an API to get the current grace period, in days, defined
by the consensus parameter "onion-key-grace-period-days".
As per proposal #274, "onion-key-grace-period-days" has a default value of
7 days, a minimum value of 1 day, and a maximum value defined by the other
consensus parameter, "onion-key-rotation-days", which is also expressed in
days.
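A minimal sketch of such an accessor, assuming Tor's
networkstatus_get_param() helper; the function name here is illustrative,
and get_onion_key_lifetime_days() stands in for the rotation-period accessor
described in the next commit:
/* Illustrative accessor; returns the grace period in days. */
static int
get_onion_key_grace_period_days(void)
{
  return networkstatus_get_param(NULL, "onion-key-grace-period-days",
                                 7,                               /* default */
                                 1,                               /* min */
                                 get_onion_key_lifetime_days());  /* max */
}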
See: https://bugs.torproject.org/21641
This patch turns `MIN_ONION_KEY_LIFETIME` into a new function
`get_onion_key_lifetime()` which gets its value from a network consensus
parameter named "onion-key-rotation-days". This allows us to tune the
value at a later point in time with no code modifications.
We also bump the default onion key lifetime from 7 to 28 days as per
proposal #274.
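A sketch of the replacement, again assuming networkstatus_get_param(); the
minimum and maximum bounds shown here are placeholders, not values taken
from the commit:
#define DEFAULT_ONION_KEY_LIFETIME_DAYS 28  /* per proposal #274 */
static int
get_onion_key_lifetime_days(void)
{
  return networkstatus_get_param(NULL, "onion-key-rotation-days",
                                 DEFAULT_ONION_KEY_LIFETIME_DAYS,
                                 1,    /* min: placeholder */
                                 90);  /* max: placeholder */
}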
See: https://bugs.torproject.org/21641
This patch fixes a regression described in bug #21757 that first
appeared after commit 6e78ede73f, which was an attempt to fix bug #21654.
When switching from buffered I/O to direct file descriptor I/O, our
output strings from get_string_from_pipe() might contain newline
characters (\n). In this patch we modify tor_get_lines_from_handle() to
ensure that the function splits the newly read string at newline
characters and thus may return multiple lines from a single call to
get_string_from_pipe().
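Roughly, the splitting step looks like this; an illustrative sketch only,
using Tor's smartlist helpers, with handle_line() as a hypothetical consumer:
/* Split the chunk we just read from the pipe into individual lines and
 * hand each one to the caller, instead of returning the raw chunk. */
static void
process_pipe_chunk(const char *buf, void (*handle_line)(const char *))
{
  smartlist_t *lines = smartlist_new();
  smartlist_split_string(lines, buf, "\n",
                         SPLIT_SKIP_SPACE | SPLIT_IGNORE_BLANK, 0);
  SMARTLIST_FOREACH(lines, char *, line, handle_line(line));
  SMARTLIST_FOREACH(lines, char *, line, tor_free(line));
  smartlist_free(lines);
}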
Additionally, we add a test case to test_util_string_from_pipe() to
ensure that get_string_from_pipe() correctly returns multiple lines in a
single call.
See: https://bugs.torproject.org/21757
See: https://bugs.torproject.org/21654
We could use one of these for holding "junk" descriptors and
unparseable things -- but we'll _need_ it for having cached
consensuses and diffs between them.
There was a frequent block of code that did "find the next router
line, see if we've hit the end of the list, get the ID hash from the
line, and enforce well-ordering." Per Ahf's review, I'm extracting
it to its own function.
Previously, we operated on smartlists of NUL-terminated strings,
which required us to copy both inputs to produce the NUL-terminated
strings. Then we copied parts of _those_ inputs to produce an
output smartlist of NUL-terminated strings. And finally, we
concatenated everything into a final resulting string.
This implementation, instead, uses a pointer-and-extent pattern to
represent each line as a pointer into the original inputs and a
length. These line objects are then added by reference into the
output. No actual bytes are copied from the original strings until
we finally concatenate the final result together.
Bookkeeping structures and newly allocated strings (like ed
commands) are allocated inside a memarea, to avoid needless mallocs
or complicated should-I-free-this-or-not bookkeeping.
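A minimal sketch of the pointer-and-extent idea, assuming Tor's smartlist
and memarea helpers; the names are illustrative, not necessarily those used
in consdiff.c:
typedef struct line_ref_t {
  const char *s;   /* points into the original input; NOT NUL-terminated */
  size_t len;      /* bytes in the line, excluding the newline */
} line_ref_t;
/* Split `body` (len bytes) into line references without copying bytes:
 * every entry in `out` borrows memory owned by `body`, and the small
 * bookkeeping structs come out of the memarea. */
static void
split_into_line_refs(smartlist_t *out, memarea_t *area,
                     const char *body, size_t len)
{
  const char *end = body + len;
  while (body < end) {
    const char *nl = memchr(body, '\n', (size_t)(end - body));
    size_t linelen = nl ? (size_t)(nl - body) : (size_t)(end - body);
    line_ref_t *line = memarea_alloc(area, sizeof(*line));
    line->s = body;
    line->len = linelen;
    smartlist_add(out, line);
    body += linelen + (nl ? 1 : 0);
  }
}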
In my measurements, this improves CPU performance by something like
18%. The memory savings should be much, much higher.
This takes two fuzzers: one which generates a diff and makes sure it
works, and one which applies a diff.
So far, they won't crash, but there's a bug in my
string-manipulation code someplace that I'm having to work around,
related to the case where you have a blank line at the end of a
file, or where you diff a file with itself.
Also, add very strict split/join functions, and totally forbid
nonempty files that end with something besides a newline. This
change is necessary to ensure that diff/apply are actually reliable
inverse operations.
The 2-line diff change is needed to make the unit tests actually
test the cases that they thought they were testing.
The bogus free was found while testing those cases.
(This commit was extracted by nickm based on the final outcome of
the project, taking only the changes in the files touched by this
commit from the consdiff_rebased branch. The directory-system
changes are going to get worked on separately.)
Windows doesn't let you check the socket error for a socket more than once,
whether via WSAGetLastError() or getsockopt(SO_ERROR):
getsockopt(SO_ERROR) clears the error on the socket, so you can't
call it again for the same error.
When we introduced recv_ni to help drain alert sockets, back in
0.2.6.3-alpha, we had the failure path for recv_ni call getsockopt()
twice, though: once to check for EINTR and once to check for EAGAIN.
Of course, the second call never saw the EAGAIN (the first call had already
cleared the error), so we treated it as an error and warned about "No error".
The fix here is to have these functions return -errno on failure.
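The shape of the fix looks roughly like this; an illustrative sketch, not
the exact Tor code (on Windows the error would come from WSAGetLastError()
instead of errno):
#include <errno.h>
#include <sys/types.h>
#include <sys/socket.h>
/* Fold the error into the return value, so callers never have to query
 * the socket error a second time. */
static ssize_t
recv_ni(int fd, void *buf, size_t n, int flags)
{
  ssize_t r = recv(fd, buf, n, flags);
  if (r < 0)
    return -errno;  /* e.g. -EINTR or -EAGAIN; checked once by the caller */
  return r;
}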
Fixes bug 21540; bugfix on 0.2.6.3-alpha.
The 64-bit load and store code was generating pretty bad output with
my compiler, so I extracted the code from csiphash and used that instead.
Close ticket 21737
So we require that SMARTLIST_FOREACH_END() have the name of the loop
variable in it. But right now the only enforcement for that is to
clear the variable at the end of the loop, which is really not
sufficient: I spent 45 minutes earlier today debugging an issue
where I had said:
SMARTLIST_FOREACH_BEGIN(spool, spooled_resource_t *, spooled) {
...
} SMARTLIST_FOREACH_END(spool);
This patch makes it so that ONLY loop variables can be used, by
referring to the _sl_idx variable.
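A simplified, self-contained analogue of the trick (not Tor's exact macros):
the BEGIN macro declares an index named <var>_sl_idx, and the END macro
references that name, so passing anything other than the loop variable to
the END macro fails to compile.
#include <stdio.h>
#define FOREACH_BEGIN(arr, n, type, var)                         \
  do {                                                           \
    int var ## _sl_idx;                                          \
    for (var ## _sl_idx = 0; var ## _sl_idx < (n);               \
         ++ var ## _sl_idx) {                                    \
      type var = (arr)[var ## _sl_idx];
#define FOREACH_END(var)                                         \
      (void) var ## _sl_idx; /* compile error if var is wrong */ \
    }                                                            \
  } while (0)
int
main(void)
{
  const char *names[] = { "alice", "bob", "carol" };
  FOREACH_BEGIN(names, 3, const char *, name) {
    printf("%s\n", name);
  } FOREACH_END(name);      /* FOREACH_END(names) would not compile */
  return 0;
}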
Also, the checks in encrypted_data_length_is_valid() have been relaxed, since
now only one encrypted section has padding requirements and we don't actually
care to check that all the padding is there.
Consider starting code review from function encode_superencrypted_data().
- Refactor our HS desc crypto funcs to be able to differentiate between
the superencrypted layer and the encrypted layer, so that different
crypto constants and padding are used in each layer.
- Introduce some string constants.
- Add some comments.
As part of the work for proposal #274 we are going to remove the need
for MIN_ONION_KEY_LIFETIME and turn it into a dynamic value defined by a
consensus parameter.
See: https://bugs.torproject.org/21641
The bridges+ipv6-min integration test has a client with bridges:
Bridge 127.0.0.1:5003
Bridge [::1]:5003
which got stuck in guard_selection_have_enough_dir_info_to_build_circuits()
because it couldn't find the descriptors of both bridges.
Specifically, the guard_has_descriptor() function could not find the
node_t of the [::1] bridge, because the [::1] bridge had no identity
digest assigned to it.
After further examination, it seems that while fetching the descriptors
for our bridges, we used the CERTS cell to fill in the identity digest of
127.0.0.1:5003 properly. However, when we received a CERTS cell from
[::1]:5003 we actually ignored its identity digest, because the
learned_router_identity() function was using
get_configured_bridge_by_addr_port_digest(), which was returning the
127.0.0.1 bridge instead of the [::1] bridge (because it prioritizes
digest matching over addrport matching).
The fix replaces get_configured_bridge_by_addr_port_digest() with the
recent get_configured_bridge_by_exact_addr_port_digest() function. It
also relaxes the constraints of the
get_configured_bridge_by_exact_addr_port_digest() function by making it
return bridges whose identity digest is not yet known.
By using the _exact_() function, learned_router_identity() actually
fills in the identity digest of the [::1] bridge, which then allows
guard_has_descriptor() to find the right node_t and verify that the
descriptor is there.
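A simplified sketch of the relaxed lookup described here; the bridge_info_t
field names and the exact matching rules in bridges.c may differ:
/* Match on the exact address and port, and accept a configured bridge
 * whose identity digest is still unknown. */
static bridge_info_t *
find_bridge_by_exact_addr_port_digest(const smartlist_t *bridges,
                                      const tor_addr_t *addr, uint16_t port,
                                      const char *digest)
{
  SMARTLIST_FOREACH_BEGIN(bridges, bridge_info_t *, bridge) {
    if (!tor_addr_eq(&bridge->addr, addr) || bridge->port != port)
      continue;
    if (tor_digest_is_zero(bridge->identity) ||   /* digest not yet known */
        !digest ||
        tor_memeq(bridge->identity, digest, DIGEST_LEN))
      return bridge;
  } SMARTLIST_FOREACH_END(bridge);
  return NULL;
}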
FWIW, in the bridges+ipv6-min test both 127.0.0.1 and [::1] bridges
correspond to the same node_t, which I guess makes sense given that it's
actually the same underlying bridge.
This patch removes the `tor_fgets()` wrapper around `fgets(3)` since it
is no longer needed. The function was created due to inconsistencies
between the return values of `fgets(3)` on different versions of Unix
when using `fgets(3)` on non-blocking file descriptors, but with the
recent changes in bug #21654 we switched from buffered to direct I/O on
non-blocking file descriptors in our utility module.
We continue to use `fgets(3)` directly in the geoip and dirserv modules
since this usage is considered safe.
This patch also removes the test case that was created to detect
differences in the implementations of `fgets(3)`, as well as the changes
file, since these changes were not included in any release yet.
See: https://bugs.torproject.org/21654
This patch changes a number of read loops in the util module to use a
less-than comparison instead of a not-equal-to comparison. We do this in
case a bug elsewhere causes `numread` to become larger than `count`, which
would otherwise turn the loop into an infinite loop.
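For illustration only, the defensive loop condition looks like this (a
standalone sketch, not the util module's actual code):
#include <unistd.h>
#include <sys/types.h>
/* Read up to `count` bytes; the "<" condition ends the loop even if
 * numread ever overshoots count, where "!=" would spin forever. */
static ssize_t
read_all_sketch(int fd, char *buf, size_t count)
{
  size_t numread = 0;
  while (numread < count) {        /* previously: numread != count */
    ssize_t result = read(fd, buf + numread, count - numread);
    if (result == 0)
      break;                       /* EOF before count bytes */
    else if (result < 0)
      return -1;
    numread += (size_t)result;
  }
  return (ssize_t)numread;
}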
This patch removes the buffered I/O stream usage in process_handle_t and
its related utility functions. This simplifies the code and avoids racy
code where we used buffered I/O on non-blocking file descriptors.
See: https://bugs.torproject.org/21654
This patch modifies `tor_read_all_handle()` to use read(2) instead of
fgets(3) when reading the stdout from the child process. This should
eliminate the race condition that can be triggered in the 'slow/util/*'
tests on slower machines running OpenBSD, FreeBSD and HardenedBSD.
See: https://bugs.torproject.org/21654
Make hidden services with 8 to 10 introduction points check for failed
circuits immediately after startup. Previously, they would wait for 5
minutes before performing their first checks.
Fixes bug 21594; bugfix on commit 190aac0eab in Tor 0.2.3.9-alpha.
Reported by alecmuffett.