/* Copyright 2001,2002 Roger Dingledine, Matej Pfajfar. */
/* See LICENSE for licensing information */
/* $Id$ */

#include "or.h"

/********* START VARIABLES **********/

extern or_options_t options; /* command-line and config-file options */

char *conn_type_to_string[] = {
  "",            /* 0 */
  "OP listener", /* 1 */
  "OP",          /* 2 */
  "OR listener", /* 3 */
  "OR",          /* 4 */
  "Exit",        /* 5 */
  "App listener",/* 6 */
  "App",         /* 7 */
  "Dir listener",/* 8 */
  "Dir",         /* 9 */
  "DNS master",  /* 10 */
};

char *conn_state_to_string[][15] = {
  { }, /* no type associated with 0 */
  { "ready" }, /* op listener, 0 */
  { "awaiting keys", /* op, 0 */
    "open",          /* 1 */
    "close",         /* 2 */
    "close_wait" },  /* 3 */
  { "ready" }, /* or listener, 0 */
  { "connecting (as OP)",            /* or, 0 */
    "sending keys (as OP)",          /* 1 */
    "connecting (as client)",        /* 2 */
    "sending auth (as client)",      /* 3 */
    "waiting for auth (as client)",  /* 4 */
    "sending nonce (as client)",     /* 5 */
    "waiting for auth (as server)",  /* 6 */
    "sending auth (as server)",      /* 7 */
    "waiting for nonce (as server)", /* 8 */
    "open" },                        /* 9 */
  { "waiting for dest info", /* exit, 0 */
    "connecting",            /* 1 */
    "open" },                /* 2 */
  { "ready" }, /* app listener, 0 */
  { "awaiting dest info",        /* app, 0 */
    "waiting for OR connection", /* 1 */
    "open" },                    /* 2 */
  { "ready" }, /* dir listener, 0 */
  { "connecting",       /* 0 */
    "sending command",  /* 1 */
    "reading",          /* 2 */
    "awaiting command", /* 3 */
    "writing" },        /* 4 */
  { "open" }, /* dns master, 0 */
};
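
/* These tables map conn->type (and conn->state within each type) to
 * human-readable names for log messages. For example, taking the index
 * comments above at face value, conn_type_to_string[4] is "OR" and
 * conn_state_to_string[4][9] is "open". */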

/********* END VARIABLES ************/

/**************************************************************/
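
/* Small helpers for struct timeval arithmetic, used by the cell-pacing
 * code. tv_cmp() orders two timevals; tv_add() and tv_addms() add to one
 * in place, normalizing tv_usec into [0, 1000000). For example, adding
 * 1500ms to {tv_sec=5, tv_usec=900000} yields {tv_sec=7, tv_usec=400000}. */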
int tv_cmp(struct timeval *a, struct timeval *b) {
  if (a->tv_sec > b->tv_sec)
    return 1;
  if (a->tv_sec < b->tv_sec)
    return -1;
  if (a->tv_usec > b->tv_usec)
    return 1;
  if (a->tv_usec < b->tv_usec)
    return -1;
  return 0;
}
void tv_add(struct timeval *a, struct timeval *b) {
  a->tv_usec += b->tv_usec;
  a->tv_sec += b->tv_sec + (a->tv_usec / 1000000);
  a->tv_usec %= 1000000;
}
void tv_addms(struct timeval *a, long ms) {
  a->tv_usec += (ms * 1000) % 1000000;
  a->tv_sec += ((ms * 1000) / 1000000) + (a->tv_usec / 1000000);
  a->tv_usec %= 1000000;
}

/**************************************************************/
connection_t *connection_new(int type) {
  connection_t *conn;
  struct timeval now;

  if(gettimeofday(&now,NULL) < 0)
    return NULL;

  conn = (connection_t *)malloc(sizeof(connection_t));
  if(!conn)
    return NULL;
  memset(conn,0,sizeof(connection_t)); /* zero it out to start */

  conn->type = type;
  if(buf_new(&conn->inbuf, &conn->inbuflen, &conn->inbuf_datalen) < 0 ||
     buf_new(&conn->outbuf, &conn->outbuflen, &conn->outbuf_datalen) < 0) {
    free((void *)conn); /* don't leak conn when buffer allocation fails */
    return NULL;
  }

  conn->receiver_bucket = 10240; /* should be enough to do the handshake */
  conn->bandwidth = conn->receiver_bucket / 10; /* give it a default */

  conn->timestamp_created = now.tv_sec;
  conn->timestamp_lastread = now.tv_sec;
  conn->timestamp_lastwritten = now.tv_sec;

  if (connection_speaks_cells(conn)) {
    conn->f_crypto = crypto_new_cipher_env(CRYPTO_CIPHER_DES);
    if (!conn->f_crypto) {
      free((void *)conn);
      return NULL;
    }
    conn->b_crypto = crypto_new_cipher_env(CRYPTO_CIPHER_DES);
    if (!conn->b_crypto) {
      crypto_free_cipher_env(conn->f_crypto);
      free((void *)conn);
      return NULL;
    }
  }

  if(type == CONN_TYPE_OR) {
    directory_set_dirty();
  }

#ifdef USE_ZLIB
  if (type == CONN_TYPE_AP || type == CONN_TYPE_EXIT) {
    if (buf_new(&conn->z_outbuf, &conn->z_outbuflen, &conn->z_outbuf_datalen) < 0)
      return NULL;
    if (! (conn->compression = malloc(sizeof(z_stream))))
      return NULL;
    if (! (conn->decompression = malloc(sizeof(z_stream))))
      return NULL;
    memset(conn->compression, 0, sizeof(z_stream));
    memset(conn->decompression, 0, sizeof(z_stream));
    if (deflateInit(conn->compression, Z_DEFAULT_COMPRESSION) != Z_OK) {
      log(LOG_ERR, "Error initializing zlib: %s", conn->compression->msg);
      return NULL;
    }
    if (inflateInit(conn->decompression) != Z_OK) {
      log(LOG_ERR, "Error initializing zlib: %s", conn->decompression->msg);
      return NULL;
    }
  } else {
    conn->compression = conn->decompression = NULL;
  }
#endif

  conn->done_sending = conn->done_receiving = 0;
  return conn;
}
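
/* A minimal usage sketch (not part of the original file): callers pair
 * connection_new() with connection_add() and connection_free(), e.g.
 *
 *   connection_t *c = connection_new(CONN_TYPE_OR);
 *   if(!c || connection_add(c) < 0) {
 *     if(c) connection_free(c);
 *     return -1;
 *   }
 *
 * which is the pattern connection_create_listener() and
 * connection_handle_listener_read() follow below. */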
void connection_free(connection_t *conn) {
  assert(conn);

  buf_free(conn->inbuf);
  buf_free(conn->outbuf);
  if(conn->address)
    free(conn->address);
  if(conn->dest_addr)
    free(conn->dest_addr);

  if(connection_speaks_cells(conn)) {
    if (conn->f_crypto)
      crypto_free_cipher_env(conn->f_crypto);
    if (conn->b_crypto)
      crypto_free_cipher_env(conn->b_crypto);
  }

  if (conn->pkey)
    crypto_free_pk_env(conn->pkey);

  if(conn->s > 0) {
    log(LOG_INFO,"connection_free(): closing fd %d.",conn->s);
    close(conn->s);
  }

  if(conn->type == CONN_TYPE_OR) {
    directory_set_dirty();
  }

#ifdef USE_ZLIB
  if (conn->compression) {
    if (inflateEnd(conn->decompression) != Z_OK)
      log(LOG_ERR,"connection_free(): while closing zlib: %s",
          conn->decompression->msg);
    if (deflateEnd(conn->compression) != Z_OK)
      log(LOG_ERR,"connection_free(): while closing zlib: %s",
          conn->compression->msg);
    free(conn->compression);
    free(conn->decompression);
    buf_free(conn->z_outbuf);
  }
#endif

  free(conn);
}
int connection_create_listener(struct sockaddr_in *bindaddr, int type) {
  connection_t *conn;
  int s;
  int one=1;

  s = socket(PF_INET,SOCK_STREAM,IPPROTO_TCP);
  if (s < 0) {
    log(LOG_ERR,"connection_create_listener(): Socket creation failed.");
    return -1;
  }

  setsockopt(s, SOL_SOCKET, SO_REUSEADDR, &one, sizeof(one));

  if(bind(s,(struct sockaddr *)bindaddr,sizeof(*bindaddr)) < 0) {
    perror("bind ");
    log(LOG_ERR,"Could not bind to port %u.",ntohs(bindaddr->sin_port));
    return -1;
  }

  if(listen(s,SOMAXCONN) < 0) {
    log(LOG_ERR,"Could not listen on port %u.",ntohs(bindaddr->sin_port));
    return -1;
  }

  fcntl(s, F_SETFL, O_NONBLOCK); /* set s to non-blocking */

  conn = connection_new(type);
  if(!conn) {
    log(LOG_DEBUG,"connection_create_listener(): connection_new failed. Giving up.");
    return -1;
  }
  conn->s = s;

  if(connection_add(conn) < 0) { /* no space, forget it */
    log(LOG_DEBUG,"connection_create_listener(): connection_add failed. Giving up.");
    connection_free(conn);
    return -1;
  }

  log(LOG_DEBUG,"connection_create_listener(): Listening on port %u.",ntohs(bindaddr->sin_port));

  conn->state = LISTENER_STATE_READY;
  connection_start_reading(conn);

  return 0;
}
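
/* Sketch of the expected call pattern (mirroring the bindaddr setup in
 * retry_all_connections() below; the port number here is hypothetical):
 *
 *   struct sockaddr_in bindaddr;
 *   memset(&bindaddr,0,sizeof(bindaddr));
 *   bindaddr.sin_family = AF_INET;
 *   bindaddr.sin_addr.s_addr = htonl(INADDR_ANY);
 *   bindaddr.sin_port = htons(9001);
 *   connection_create_listener(&bindaddr, CONN_TYPE_OR_LISTENER);
 */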
int connection_handle_listener_read(connection_t *conn, int new_type, int new_state) {
  int news; /* the new socket */
  connection_t *newconn;
  struct sockaddr_in remote; /* information about the remote peer when connecting to other routers */
  int remotelen = sizeof(struct sockaddr_in); /* length of the remote address */

  news = accept(conn->s,(struct sockaddr *)&remote,&remotelen);
  if (news == -1) { /* accept() error */
    if(errno==EAGAIN)
      return 0; /* he hung up before we could accept(). that's fine. */
    /* else there was a real error. */
    log(LOG_ERR,"connection_handle_listener_read(): accept() failed. Closing.");
    return -1;
  }
  log(LOG_INFO,"Connection accepted on socket %d (child of fd %d).",news, conn->s);

  fcntl(news, F_SETFL, O_NONBLOCK); /* set news to non-blocking */

  newconn = connection_new(new_type);
  if(!newconn) { /* don't dereference a failed allocation */
    close(news);
    return 0; /* no need to tear down the parent */
  }
  newconn->s = news;

  if(!connection_speaks_cells(newconn)) {
    newconn->receiver_bucket = -1;
    newconn->bandwidth = -1;
  }

  newconn->address = strdup(inet_ntoa(remote.sin_addr)); /* remember the remote address */
  newconn->addr = ntohl(remote.sin_addr.s_addr);
  newconn->port = ntohs(remote.sin_port);

  if(connection_add(newconn) < 0) { /* no space, forget it */
    connection_free(newconn);
    return 0; /* no need to tear down the parent */
  }

  log(LOG_DEBUG,"connection_handle_listener_read(): socket %d entered state %d.",newconn->s, new_state);
  newconn->state = new_state;
  connection_start_reading(newconn);

  return 0;
}
int retry_all_connections(int role, uint16_t or_listenport,
                          uint16_t op_listenport, uint16_t ap_listenport, uint16_t dir_listenport) {
  /* start all connections that should be up but aren't */

  struct sockaddr_in bindaddr; /* where to bind */

  if(role & ROLE_OR_CONNECT_ALL) {
    router_retry_connections();
  }

  memset(&bindaddr,0,sizeof(struct sockaddr_in));
  bindaddr.sin_family = AF_INET;
  bindaddr.sin_addr.s_addr = htonl(INADDR_ANY); /* anyone can connect */

  if(role & ROLE_OR_LISTEN) {
    bindaddr.sin_port = htons(or_listenport);
    if(!connection_get_by_type(CONN_TYPE_OR_LISTENER)) {
      connection_or_create_listener(&bindaddr);
    }
  }

  if(role & ROLE_OP_LISTEN) {
    bindaddr.sin_port = htons(op_listenport);
    if(!connection_get_by_type(CONN_TYPE_OP_LISTENER)) {
      connection_op_create_listener(&bindaddr);
    }
  }

  if(role & ROLE_DIR_LISTEN) {
    bindaddr.sin_port = htons(dir_listenport);
    if(!connection_get_by_type(CONN_TYPE_DIR_LISTENER)) {
      connection_dir_create_listener(&bindaddr);
    }
  }

  if(role & ROLE_AP_LISTEN) {
    bindaddr.sin_port = htons(ap_listenport);
    inet_aton("127.0.0.1", &(bindaddr.sin_addr)); /* the AP listens only on localhost! */
    if(!connection_get_by_type(CONN_TYPE_AP_LISTENER)) {
      connection_ap_create_listener(&bindaddr);
    }
  }

  return 0;
}
int connection_read_to_buf(connection_t *conn) {
  int read_result;
  struct timeval now;

  if(connection_speaks_cells(conn)) {
    assert(conn->receiver_bucket >= 0);
  }
  if(!connection_speaks_cells(conn)) {
    assert(conn->receiver_bucket < 0);
  }

  if(gettimeofday(&now,NULL) < 0)
    return -1;
  conn->timestamp_lastread = now.tv_sec;

  read_result = read_to_buf(conn->s, conn->receiver_bucket, &conn->inbuf, &conn->inbuflen,
                            &conn->inbuf_datalen, &conn->inbuf_reached_eof);
//  log(LOG_DEBUG,"connection_read_to_buf(): read_to_buf returned %d.",read_result);
  if(read_result >= 0 && connection_speaks_cells(conn)) {
    conn->receiver_bucket -= read_result;
    if(conn->receiver_bucket <= 0) {

//      log(LOG_DEBUG,"connection_read_to_buf() stopping reading, receiver bucket empty.");
      connection_stop_reading(conn);

      /* If we're not in 'open' state here, then we're never going to finish the
       * handshake, because we'll never increment the receiver_bucket. But we
       * can't check for that here, because the buf we just read might have enough
       * on it to finish the handshake. So we check for that in check_conn_read().
       */
    }
  }

  return read_result;
}
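
/* Flow of the read-side throttle above: read_to_buf() takes
 * conn->receiver_bucket as its byte cap, the bytes actually read are
 * debited from the bucket, and once the bucket reaches zero the
 * connection stops reading until connection_increment_receiver_bucket()
 * (below) refills it and re-enables reading. */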
int connection_fetch_from_buf(char *string, int len, connection_t *conn) {
  return fetch_from_buf(string, len, &conn->inbuf, &conn->inbuflen, &conn->inbuf_datalen);
}
#ifdef USE_ZLIB
int connection_compress_from_buf(char *string, int len, connection_t *conn,
                                 int flush) {
  return compress_from_buf(string, len,
                           &conn->inbuf, &conn->inbuflen, &conn->inbuf_datalen,
                           conn->compression, flush);
}
int connection_decompress_to_buf(char *string, int len, connection_t *conn,
                                 int flush) {
  int n;
  struct timeval now;

  if (len) {
    if (write_to_buf(string, len,
                     &conn->z_outbuf, &conn->z_outbuflen, &conn->z_outbuf_datalen) < 0)
      return -1;
  }

  /* If we have more than 10 payloads' worth of data waiting in outbuf,
   * don't uncompress any more; leave this data queued in z_outbuf.
   *
   * This threshold may need to be tuned.
   */
  if (connection_outbuf_too_full(conn)) /* takes the connection, not its outbuf */
    return 0;

  n = decompress_buf_to_buf(
        &conn->z_outbuf, &conn->z_outbuflen, &conn->z_outbuf_datalen,
        &conn->outbuf, &conn->outbuflen, &conn->outbuf_datalen,
        conn->decompression, flush);

  if (n < 0)
    return -1;

  if(gettimeofday(&now,NULL) < 0)
    return -1;

  if(!n)
    return 0;

  if(conn->marked_for_close)
    return 0;

  conn->timestamp_lastwritten = now.tv_sec;
  conn->outbuf_flushlen += n;

  return n;
}
#endif
int connection_find_on_inbuf(char *string, int len, connection_t *conn) {
  return find_on_inbuf(string, len, conn->inbuf, conn->inbuf_datalen);
}
int connection_wants_to_flush(connection_t *conn) {
  return conn->outbuf_flushlen;
}
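
/* "Too full" means more than ten cells' worth of payload is already
 * queued for this connection; connection_decompress_to_buf() above uses
 * this as its backpressure check before decompressing more data into
 * outbuf. */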
int connection_outbuf_too_full(connection_t *conn) {
  return (conn->outbuf_flushlen > 10*CELL_PAYLOAD_SIZE);
}
int connection_flush_buf(connection_t *conn) {
  return flush_buf(conn->s, &conn->outbuf, &conn->outbuflen, &conn->outbuf_flushlen, &conn->outbuf_datalen);
}
int connection_write_to_buf(char *string, int len, connection_t *conn) {
  struct timeval now;

  if(gettimeofday(&now,NULL) < 0)
    return -1;

  if(!len)
    return 0;

  if(conn->marked_for_close)
    return 0;

  conn->timestamp_lastwritten = now.tv_sec;

  if( (!connection_speaks_cells(conn)) ||
      (!connection_state_is_open(conn)) ||
      (options.LinkPadding == 0) ) {
    /* connection types other than or and op, or or/op not in 'open' state, should flush immediately */
    /* also flush immediately if we're not doing LinkPadding, since otherwise it will never flush */
    connection_start_writing(conn);
    conn->outbuf_flushlen += len;
  }

  return write_to_buf(string, len, &conn->outbuf, &conn->outbuflen, &conn->outbuf_datalen);
}
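
/* Rate limiting, per the original design notes: each socket reads at most
 * 'bandwidth' bytes per second sustained, but can handle bursts of up to
 * 10*bandwidth bytes. Cells are sent out at evenly-spaced intervals, with
 * padding cells sent otherwise; setting LinkPadding=0 in the rc file sends
 * cells as soon as they're available and never sends padding. */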
int connection_receiver_bucket_should_increase(connection_t *conn) {
  assert(conn);

  if(!connection_speaks_cells(conn))
    return 0; /* edge connections don't use receiver_buckets */

  if(conn->receiver_bucket > 10*conn->bandwidth)
    return 0;

  return 1;
}
void connection_increment_receiver_bucket (connection_t *conn) {
  assert(conn);

  if(connection_receiver_bucket_should_increase(conn)) {
    /* yes, the receiver_bucket can become overfull here. But not by much. */
    conn->receiver_bucket += conn->bandwidth*1.1;
    if(connection_state_is_open(conn)) {
      /* if we're in state 'open', then start reading again */
      connection_start_reading(conn);
    }
  }
}
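
/* Worked example (assuming the caller invokes this roughly once per
 * second, which this file does not itself guarantee): with
 * bandwidth = 10240 bytes/s, each call adds 11264 bytes (bandwidth*1.1)
 * until the bucket exceeds 102400 bytes (10*bandwidth), so an idle
 * connection can burst about ten seconds' worth of traffic before being
 * throttled back to the sustained rate. */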
int connection_speaks_cells(connection_t *conn) {
  assert(conn);

  if(conn->type == CONN_TYPE_OR || conn->type == CONN_TYPE_OP)
    return 1;

  return 0;
}
int connection_is_listener(connection_t *conn) {
  if(conn->type == CONN_TYPE_OP_LISTENER ||
     conn->type == CONN_TYPE_OR_LISTENER ||
     conn->type == CONN_TYPE_AP_LISTENER ||
     conn->type == CONN_TYPE_DIR_LISTENER)
    return 1;
  return 0;
}
int connection_state_is_open(connection_t *conn) {
  assert(conn);

  if((conn->type == CONN_TYPE_OR && conn->state == OR_CONN_STATE_OPEN) ||
     (conn->type == CONN_TYPE_OP && conn->state == OP_CONN_STATE_OPEN) ||
     (conn->type == CONN_TYPE_AP && conn->state == AP_CONN_STATE_OPEN) ||
     (conn->type == CONN_TYPE_EXIT && conn->state == EXIT_CONN_STATE_OPEN))
    return 1;

  return 0;
}
void connection_send_cell(connection_t *conn) {
  cell_t cell;
  int bytes_in_full_flushlen;

  /* this function only gets called if options.LinkPadding is 1 */
  assert(options.LinkPadding == 1);
  assert(conn);

  if(!connection_speaks_cells(conn)) {
    /* this conn doesn't speak cells. do nothing. */
    return;
  }

  if(!connection_state_is_open(conn)) {
    /* it's not in 'open' state, all data should already be waiting to be flushed */
    assert(conn->outbuf_datalen == conn->outbuf_flushlen);
    return;
  }

#if 0 /* use to send evenly spaced cells, but not padding */
  if(conn->outbuf_datalen - conn->outbuf_flushlen >= sizeof(cell_t)) {
    conn->outbuf_flushlen += sizeof(cell_t); /* instruct it to send a cell */
    connection_start_writing(conn);
  }
#endif

  connection_increment_send_timeval(conn); /* update when we'll send the next cell */

  bytes_in_full_flushlen = conn->bandwidth / 100; /* 10ms worth */
  if(bytes_in_full_flushlen < 10*sizeof(cell_t))
    bytes_in_full_flushlen = 10*sizeof(cell_t); /* but at least 10 cells worth */

  if(conn->outbuf_flushlen > bytes_in_full_flushlen - sizeof(cell_t)) {
    /* if we would exceed bytes_in_full_flushlen by adding a new cell */
    return;
  }

  if(conn->outbuf_datalen - conn->outbuf_flushlen < sizeof(cell_t)) {
    /* we need to queue a padding cell first */
    memset(&cell,0,sizeof(cell_t));
    cell.command = CELL_PADDING;
    connection_write_cell_to_buf(&cell, conn);
  }

  /* ???? If we might not have added a cell above, why are we
   * ???? increasing outbuf_flushlen? -NM */
  /* The connection_write_cell_to_buf() call doesn't increase the flushlen
   * (if link padding is on). So if there isn't a whole cell waiting-but-
   * not-yet-flushed, we add a padding cell. Thus in any case the gap between
   * outbuf_datalen and outbuf_flushlen is at least sizeof(cell_t). -RD
   */
  /* XXXX actually, there are some subtle bugs lurking in here. They
   * have to do with the fact that we don't handle connection failure
   * cleanly. Sometimes we mark things to be closed later. Inside
   * connection_write_cell_to_buf, it returns successfully without
   * writing if the connection has been marked for close. We need to
   * look at all our failure cases more carefully and make sure they do
   * the right thing.
   */
  conn->outbuf_flushlen += sizeof(cell_t); /* instruct it to send a cell */
  connection_start_writing(conn);
}
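
/* Schedule the next cell send: each cell occupies
 * 1000*sizeof(cell_t)/bandwidth milliseconds of link time. For example,
 * if sizeof(cell_t) were 128 bytes and conn->bandwidth were 64000
 * bytes/s, tv_addms() below would advance send_timeval by 1+2 = 3ms
 * (the "+1" is the FIXME's stand-in for rounding up). */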
void connection_increment_send_timeval(connection_t *conn) {
  /* add "1000000 * sizeof(cell_t) / conn->bandwidth" microseconds to conn->send_timeval */
  /* FIXME should perhaps use ceil() of this. For now I simply add 1. */
  tv_addms(&conn->send_timeval, 1+1000 * sizeof(cell_t) / conn->bandwidth);
}
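
/* Set conn->send_timeval to now, then advance it by one cell interval. */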
void connection_init_timeval(connection_t *conn) {
  assert(conn);

  if(gettimeofday(&conn->send_timeval,NULL) < 0)
    return;

  connection_increment_send_timeval(conn);
}
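
/* Tear down circuit 'aci' from conn's point of view: if conn is an edge
 * connection, just mark it for close; otherwise queue a destroy cell.
 * Return 0 on success; pass through any error from queueing the cell. */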
int connection_send_destroy(aci_t aci, connection_t *conn) {
  cell_t cell;

  assert(conn);

  if(!connection_speaks_cells(conn)) {
    log(LOG_INFO,"connection_send_destroy(): Aci %d: At an edge. Marking connection for close.", aci);
    conn->marked_for_close = 1;
    return 0;
  }

  memset(&cell, 0, sizeof(cell_t));
  cell.aci = aci;
  cell.command = CELL_DESTROY;
  log(LOG_INFO,"connection_send_destroy(): Sending destroy (aci %d).",aci);
  return connection_write_cell_to_buf(&cell, conn);
}
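
/* Marshal *cellp into network order, encrypt it, and append the result
 * to conn's outbuf. As laid out below, the wire format is: aci (2 bytes,
 * network order), command (1 byte), length (1 byte), seq (4 bytes,
 * reserved as zero), then CELL_PAYLOAD_SIZE bytes of payload. */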
int connection_write_cell_to_buf(const cell_t *cellp, connection_t *conn) {
  char networkcell[CELL_NETWORK_SIZE];
  char *n = networkcell;

  memset(n,0,CELL_NETWORK_SIZE); /* zero it out to start */
  *(aci_t *)n = htons(cellp->aci);
  *(n+2) = cellp->command;
  *(n+3) = cellp->length;
  /* seq is reserved, leave zero */
  memcpy(n+8,cellp->payload,CELL_PAYLOAD_SIZE);

  if(connection_encrypt_cell(n,conn)<0) {
    return -1;
  }

  return connection_write_to_buf(n, CELL_NETWORK_SIZE, conn);
}
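
/* Encrypt the CELL_NETWORK_SIZE bytes at cellp in place, using conn's
 * f_crypto cipher. Return 0 on success, -1 on failure. */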
int connection_encrypt_cell(char *cellp, connection_t *conn) {
  char cryptcell[CELL_NETWORK_SIZE];
#if 0
  int x;
  char *px;

  printf("Sending: Cell header plaintext: ");
  px = (char *)cellp;
  for(x=0;x<8;x++) {
    printf("%u ",px[x]);
  }
  printf("\n");
#endif

  if(crypto_cipher_encrypt(conn->f_crypto, cellp, CELL_NETWORK_SIZE, cryptcell)) {
    log(LOG_ERR,"Could not encrypt cell for connection %s:%u.",conn->address,conn->port);
    return -1;
  }

#if 0
  printf("Sending: Cell header crypttext: ");
  px = (char *)cryptcell; /* was '&newcell', which no longer exists */
  for(x=0;x<8;x++) {
    printf("%u ",px[x]);
  }
  printf("\n");
#endif

  memcpy(cellp,cryptcell,CELL_NETWORK_SIZE);
  return 0;
}
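
/* Dispatch conn's freshly read inbuf data to the handler for its
 * connection type. Return -1 if the type is unexpected. */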
int connection_process_inbuf(connection_t *conn) {
  assert(conn);

  switch(conn->type) {
    case CONN_TYPE_OP:
      return connection_op_process_inbuf(conn);
    case CONN_TYPE_OR:
      return connection_or_process_inbuf(conn);
    case CONN_TYPE_EXIT:
      return connection_exit_process_inbuf(conn);
    case CONN_TYPE_AP:
      return connection_ap_process_inbuf(conn);
    case CONN_TYPE_DIR:
      return connection_dir_process_inbuf(conn);
    case CONN_TYPE_DNSMASTER:
      return connection_dns_process_inbuf(conn);
    default:
      log(LOG_DEBUG,"connection_process_inbuf() got unexpected conn->type.");
      return -1;
  }
}
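
/* Package raw bytes from an edge connection's inbuf into data cells and
 * deliver them into the circuit, repeating until the inbuf is empty or
 * the topic receive window reaches 0. */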
int connection_package_raw_inbuf(connection_t *conn) {
|
2003-03-17 03:42:45 +01:00
|
|
|
int amount_to_process, len;
|
Integrated onion proxy into or/
The 'or' process can now be told (by the global_role variable) what
roles this server should play -- connect to all ORs, listen for ORs,
listen for OPs, listen for APs, or any combination.
* everything in /src/op/ is now obsolete.
* connection_ap.c now handles all interactions with application proxies
* "port" is now or_port, op_port, ap_port. But routers are still always
referenced (say, in conn_get_by_addr_port()) by addr / or_port. We
should make routers.c actually read these new ports (currently I've
kludged it so op_port = or_port+10, ap_port=or_port+20)
* circuits currently know if they're at the beginning of the path because
circ->cpath is set. They use this instead for crypts (both ways),
if it's set.
* I still obey the "send a 0 back to the AP when you're ready" protocol,
but I think we should phase it out. I can simply not read from the AP
socket until I'm ready.
I need to do a lot of cleanup work here, but the code appears to work, so
now's a good time for a checkin.
svn:r22
2002-07-02 11:36:58 +02:00
|
|
|
cell_t cell;
|
|
|
|
circuit_t *circ;
|
|
|
|
|
|
|
|
assert(conn);
|
2002-07-18 08:37:58 +02:00
|
|
|
assert(!connection_speaks_cells(conn));
|
major overhaul: dns slave subsystem, topics
on startup, it forks off a master dns handler, which forks off dns
slaves (like the apache model). slaves as spawned as load increases,
and then reused. excess slaves are not ever killed, currently.
implemented topics. each topic has a receive window in each direction
at each edge of the circuit, and sends sendme's at the data level, as
per before. each circuit also has receive windows in each direction at
each hop; an edge sends a circuit-level sendme as soon as enough data
cells have arrived (regardless of whether the data cells were flushed
to the exit conns). removed the 'connected' cell type, since it's now
a topic command within data cells.
at the edge of the circuit, there can be multiple connections associated
with a single circuit. you find them via the linked list conn->next_topic.
currently each new ap connection starts its own circuit, so we ought
to see comparable performance to what we had before. but that's only
because i haven't written the code to reattach to old circuits. please
try to break it as-is, and then i'll make it reuse the same circuit and
we'll try to break that.
svn:r152
2003-01-26 10:02:24 +01:00
|
|
|
/* this function should never get called if the receive_topicwindow is 0 */
|
|
|
|
|
|
|
|
repeat_connection_package_raw_inbuf:
|
Integrated onion proxy into or/
The 'or' process can now be told (by the global_role variable) what
roles this server should play -- connect to all ORs, listen for ORs,
listen for OPs, listen for APs, or any combination.
* everything in /src/op/ is now obsolete.
* connection_ap.c now handles all interactions with application proxies
* "port" is now or_port, op_port, ap_port. But routers are still always
referenced (say, in conn_get_by_addr_port()) by addr / or_port. We
should make routers.c actually read these new ports (currently I've
kludged it so op_port = or_port+10, ap_port=or_port+20)
* circuits currently know if they're at the beginning of the path because
circ->cpath is set. They use this instead for crypts (both ways),
if it's set.
* I still obey the "send a 0 back to the AP when you're ready" protocol,
but I think we should phase it out. I can simply not read from the AP
socket until I'm ready.
I need to do a lot of cleanup work here, but the code appears to work, so
now's a good time for a checkin.
svn:r22
2002-07-02 11:36:58 +02:00
|
|
|
|
|
|
|
amount_to_process = conn->inbuf_datalen;
|
2003-03-17 03:42:45 +01:00
|
|
|
|
Integrated onion proxy into or/
The 'or' process can now be told (by the global_role variable) what
roles this server should play -- connect to all ORs, listen for ORs,
listen for OPs, listen for APs, or any combination.
* everything in /src/op/ is now obsolete.
* connection_ap.c now handles all interactions with application proxies
* "port" is now or_port, op_port, ap_port. But routers are still always
referenced (say, in conn_get_by_addr_port()) by addr / or_port. We
should make routers.c actually read these new ports (currently I've
kludged it so op_port = or_port+10, ap_port=or_port+20)
* circuits currently know if they're at the beginning of the path because
circ->cpath is set. They use this instead for crypts (both ways),
if it's set.
* I still obey the "send a 0 back to the AP when you're ready" protocol,
but I think we should phase it out. I can simply not read from the AP
socket until I'm ready.
I need to do a lot of cleanup work here, but the code appears to work, so
now's a good time for a checkin.
svn:r22
2002-07-02 11:36:58 +02:00
|
|
|
if(!amount_to_process)
|
|
|
|
return 0;
|
|
|
|
|
2003-03-11 22:38:38 +01:00
|
|
|
/* Initialize the cell with 0's */
|
|
|
|
memset(&cell, 0, sizeof(cell_t));
|
|
|
|
|
2003-03-17 03:42:45 +01:00
|
|
|
#ifdef USE_ZLIB
|
|
|
|
/* This compression logic is not necessarily optimal:
|
|
|
|
* 1) Maybe we should try to read as much as we can onto the inbuf before
|
|
|
|
* compressing.
|
|
|
|
* 2)
|
|
|
|
*/
|
|
|
|
len = connection_compress_from_buf(cell.payload + TOPIC_HEADER_SIZE,
|
|
|
|
CELL_PAYLOAD_SIZE - TOPIC_HEADER_SIZE,
|
|
|
|
conn, Z_SYNC_FLUSH);
|
|
|
|
if (len < 0)
|
|
|
|
return -1;
|
|
|
|
|
|
|
|
cell.length = len;
|
|
|
|
#else
|
major overhaul: dns slave subsystem, topics
on startup, it forks off a master dns handler, which forks off dns
slaves (like the apache model). slaves as spawned as load increases,
and then reused. excess slaves are not ever killed, currently.
implemented topics. each topic has a receive window in each direction
at each edge of the circuit, and sends sendme's at the data level, as
per before. each circuit also has receive windows in each direction at
each hop; an edge sends a circuit-level sendme as soon as enough data
cells have arrived (regardless of whether the data cells were flushed
to the exit conns). removed the 'connected' cell type, since it's now
a topic command within data cells.
at the edge of the circuit, there can be multiple connections associated
with a single circuit. you find them via the linked list conn->next_topic.
currently each new ap connection starts its own circuit, so we ought
to see comparable performance to what we had before. but that's only
because i haven't written the code to reattach to old circuits. please
try to break it as-is, and then i'll make it reuse the same circuit and
we'll try to break that.
svn:r152
2003-01-26 10:02:24 +01:00
|
|
|
if(amount_to_process > CELL_PAYLOAD_SIZE - TOPIC_HEADER_SIZE) {
|
|
|
|
cell.length = CELL_PAYLOAD_SIZE - TOPIC_HEADER_SIZE;
|
Integrated onion proxy into or/
The 'or' process can now be told (by the global_role variable) what
roles this server should play -- connect to all ORs, listen for ORs,
listen for OPs, listen for APs, or any combination.
* everything in /src/op/ is now obsolete.
* connection_ap.c now handles all interactions with application proxies
* "port" is now or_port, op_port, ap_port. But routers are still always
referenced (say, in conn_get_by_addr_port()) by addr / or_port. We
should make routers.c actually read these new ports (currently I've
kludged it so op_port = or_port+10, ap_port=or_port+20)
* circuits currently know if they're at the beginning of the path because
circ->cpath is set. They use this instead for crypts (both ways),
if it's set.
* I still obey the "send a 0 back to the AP when you're ready" protocol,
but I think we should phase it out. I can simply not read from the AP
socket until I'm ready.
I need to do a lot of cleanup work here, but the code appears to work, so
now's a good time for a checkin.
svn:r22
2002-07-02 11:36:58 +02:00
|
|
|
} else {
|
|
|
|
cell.length = amount_to_process;
|
|
|
|
}
|
|
|
|
|
major overhaul: dns slave subsystem, topics
on startup, it forks off a master dns handler, which forks off dns
slaves (like the apache model). slaves as spawned as load increases,
and then reused. excess slaves are not ever killed, currently.
implemented topics. each topic has a receive window in each direction
at each edge of the circuit, and sends sendme's at the data level, as
per before. each circuit also has receive windows in each direction at
each hop; an edge sends a circuit-level sendme as soon as enough data
cells have arrived (regardless of whether the data cells were flushed
to the exit conns). removed the 'connected' cell type, since it's now
a topic command within data cells.
at the edge of the circuit, there can be multiple connections associated
with a single circuit. you find them via the linked list conn->next_topic.
currently each new ap connection starts its own circuit, so we ought
to see comparable performance to what we had before. but that's only
because i haven't written the code to reattach to old circuits. please
try to break it as-is, and then i'll make it reuse the same circuit and
we'll try to break that.
svn:r152
2003-01-26 10:02:24 +01:00
|
|
|
if(connection_fetch_from_buf(cell.payload+TOPIC_HEADER_SIZE, cell.length, conn) < 0)
|
Integrated onion proxy into or/
The 'or' process can now be told (by the global_role variable) what
roles this server should play -- connect to all ORs, listen for ORs,
listen for OPs, listen for APs, or any combination.
* everything in /src/op/ is now obsolete.
* connection_ap.c now handles all interactions with application proxies
* "port" is now or_port, op_port, ap_port. But routers are still always
referenced (say, in conn_get_by_addr_port()) by addr / or_port. We
should make routers.c actually read these new ports (currently I've
kludged it so op_port = or_port+10, ap_port=or_port+20)
* circuits currently know if they're at the beginning of the path because
circ->cpath is set. They use this instead for crypts (both ways),
if it's set.
* I still obey the "send a 0 back to the AP when you're ready" protocol,
but I think we should phase it out. I can simply not read from the AP
socket until I'm ready.
I need to do a lot of cleanup work here, but the code appears to work, so
now's a good time for a checkin.
svn:r22
2002-07-02 11:36:58 +02:00
|
|
|
return -1;
|
2003-03-17 03:42:45 +01:00
|
|
|
#endif
|
Integrated onion proxy into or/
The 'or' process can now be told (by the global_role variable) what
roles this server should play -- connect to all ORs, listen for ORs,
listen for OPs, listen for APs, or any combination.
* everything in /src/op/ is now obsolete.
* connection_ap.c now handles all interactions with application proxies
* "port" is now or_port, op_port, ap_port. But routers are still always
referenced (say, in conn_get_by_addr_port()) by addr / or_port. We
should make routers.c actually read these new ports (currently I've
kludged it so op_port = or_port+10, ap_port=or_port+20)
* circuits currently know if they're at the beginning of the path because
circ->cpath is set. They use this instead for crypts (both ways),
if it's set.
* I still obey the "send a 0 back to the AP when you're ready" protocol,
but I think we should phase it out. I can simply not read from the AP
socket until I'm ready.
I need to do a lot of cleanup work here, but the code appears to work, so
now's a good time for a checkin.
svn:r22
2002-07-02 11:36:58 +02:00
|
|
|
|
|
|
|
circ = circuit_get_by_conn(conn);
|
|
|
|
if(!circ) {
|
major overhaul: dns slave subsystem, topics
on startup, it forks off a master dns handler, which forks off dns
slaves (like the apache model). slaves as spawned as load increases,
and then reused. excess slaves are not ever killed, currently.
implemented topics. each topic has a receive window in each direction
at each edge of the circuit, and sends sendme's at the data level, as
per before. each circuit also has receive windows in each direction at
each hop; an edge sends a circuit-level sendme as soon as enough data
cells have arrived (regardless of whether the data cells were flushed
to the exit conns). removed the 'connected' cell type, since it's now
a topic command within data cells.
at the edge of the circuit, there can be multiple connections associated
with a single circuit. you find them via the linked list conn->next_topic.
currently each new ap connection starts its own circuit, so we ought
to see comparable performance to what we had before. but that's only
because i haven't written the code to reattach to old circuits. please
try to break it as-is, and then i'll make it reuse the same circuit and
we'll try to break that.
svn:r152
2003-01-26 10:02:24 +01:00
|
|
|
log(LOG_DEBUG,"connection_package_raw_inbuf(): conn has no circuits!");
|
Integrated onion proxy into or/
The 'or' process can now be told (by the global_role variable) what
roles this server should play -- connect to all ORs, listen for ORs,
listen for OPs, listen for APs, or any combination.
* everything in /src/op/ is now obsolete.
* connection_ap.c now handles all interactions with application proxies
* "port" is now or_port, op_port, ap_port. But routers are still always
referenced (say, in conn_get_by_addr_port()) by addr / or_port. We
should make routers.c actually read these new ports (currently I've
kludged it so op_port = or_port+10, ap_port=or_port+20)
* circuits currently know if they're at the beginning of the path because
circ->cpath is set. They use this instead for crypts (both ways),
if it's set.
* I still obey the "send a 0 back to the AP when you're ready" protocol,
but I think we should phase it out. I can simply not read from the AP
socket until I'm ready.
I need to do a lot of cleanup work here, but the code appears to work, so
now's a good time for a checkin.
svn:r22
2002-07-02 11:36:58 +02:00
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
|
2003-03-01 00:49:52 +01:00
|
|
|
log(LOG_DEBUG,"connection_package_raw_inbuf(): (%d) Packaging %d bytes (%d waiting).",conn->s,cell.length, amount_to_process);
|
major overhaul: dns slave subsystem, topics
on startup, it forks off a master dns handler, which forks off dns
slaves (like the apache model). slaves as spawned as load increases,
and then reused. excess slaves are not ever killed, currently.
implemented topics. each topic has a receive window in each direction
at each edge of the circuit, and sends sendme's at the data level, as
per before. each circuit also has receive windows in each direction at
each hop; an edge sends a circuit-level sendme as soon as enough data
cells have arrived (regardless of whether the data cells were flushed
to the exit conns). removed the 'connected' cell type, since it's now
a topic command within data cells.
at the edge of the circuit, there can be multiple connections associated
with a single circuit. you find them via the linked list conn->next_topic.
currently each new ap connection starts its own circuit, so we ought
to see comparable performance to what we had before. but that's only
because i haven't written the code to reattach to old circuits. please
try to break it as-is, and then i'll make it reuse the same circuit and
we'll try to break that.
svn:r152
2003-01-26 10:02:24 +01:00
|
|
|
|
2003-02-07 00:48:35 +01:00
|
|
|
*(uint16_t *)(cell.payload+2) = htons(conn->topic_id);
|
major overhaul: dns slave subsystem, topics
on startup, it forks off a master dns handler, which forks off dns
slaves (like the apache model). slaves as spawned as load increases,
and then reused. excess slaves are not ever killed, currently.
implemented topics. each topic has a receive window in each direction
at each edge of the circuit, and sends sendme's at the data level, as
per before. each circuit also has receive windows in each direction at
each hop; an edge sends a circuit-level sendme as soon as enough data
cells have arrived (regardless of whether the data cells were flushed
to the exit conns). removed the 'connected' cell type, since it's now
a topic command within data cells.
at the edge of the circuit, there can be multiple connections associated
with a single circuit. you find them via the linked list conn->next_topic.
currently each new ap connection starts its own circuit, so we ought
to see comparable performance to what we had before. but that's only
because i haven't written the code to reattach to old circuits. please
try to break it as-is, and then i'll make it reuse the same circuit and
we'll try to break that.
svn:r152
2003-01-26 10:02:24 +01:00
|
|
|
*cell.payload = TOPIC_COMMAND_DATA;
|
|
|
|
cell.length += TOPIC_HEADER_SIZE;
|
|
|
|
cell.command = CELL_DATA;
|
|
|
|
|
2003-02-06 09:00:49 +01:00
|
|
|
if(conn->type == CONN_TYPE_EXIT) {
|
Integrated onion proxy into or/
The 'or' process can now be told (by the global_role variable) what
roles this server should play -- connect to all ORs, listen for ORs,
listen for OPs, listen for APs, or any combination.
* everything in /src/op/ is now obsolete.
* connection_ap.c now handles all interactions with application proxies
* "port" is now or_port, op_port, ap_port. But routers are still always
referenced (say, in conn_get_by_addr_port()) by addr / or_port. We
should make routers.c actually read these new ports (currently I've
kludged it so op_port = or_port+10, ap_port=or_port+20)
* circuits currently know if they're at the beginning of the path because
circ->cpath is set. They use this instead for crypts (both ways),
if it's set.
* I still obey the "send a 0 back to the AP when you're ready" protocol,
but I think we should phase it out. I can simply not read from the AP
socket until I'm ready.
I need to do a lot of cleanup work here, but the code appears to work, so
now's a good time for a checkin.
svn:r22
2002-07-02 11:36:58 +02:00
|
|
|
cell.aci = circ->p_aci;
|
major overhaul: dns slave subsystem, topics
on startup, it forks off a master dns handler, which forks off dns
slaves (like the apache model). slaves as spawned as load increases,
and then reused. excess slaves are not ever killed, currently.
implemented topics. each topic has a receive window in each direction
at each edge of the circuit, and sends sendme's at the data level, as
per before. each circuit also has receive windows in each direction at
each hop; an edge sends a circuit-level sendme as soon as enough data
cells have arrived (regardless of whether the data cells were flushed
to the exit conns). removed the 'connected' cell type, since it's now
a topic command within data cells.
at the edge of the circuit, there can be multiple connections associated
with a single circuit. you find them via the linked list conn->next_topic.
currently each new ap connection starts its own circuit, so we ought
to see comparable performance to what we had before. but that's only
because i haven't written the code to reattach to old circuits. please
try to break it as-is, and then i'll make it reuse the same circuit and
we'll try to break that.
svn:r152
2003-01-26 10:02:24 +01:00
|
|
|
if(circuit_deliver_data_cell_from_edge(&cell, circ, EDGE_EXIT) < 0) {
|
|
|
|
log(LOG_DEBUG,"connection_package_raw_inbuf(): circuit_deliver_data_cell_from_edge (backward) failed. Closing.");
|
Integrated onion proxy into or/
The 'or' process can now be told (by the global_role variable) what
roles this server should play -- connect to all ORs, listen for ORs,
listen for OPs, listen for APs, or any combination.
* everything in /src/op/ is now obsolete.
* connection_ap.c now handles all interactions with application proxies
* "port" is now or_port, op_port, ap_port. But routers are still always
referenced (say, in conn_get_by_addr_port()) by addr / or_port. We
should make routers.c actually read these new ports (currently I've
kludged it so op_port = or_port+10, ap_port=or_port+20)
* circuits currently know if they're at the beginning of the path because
circ->cpath is set. They use this instead for crypts (both ways),
if it's set.
* I still obey the "send a 0 back to the AP when you're ready" protocol,
but I think we should phase it out. I can simply not read from the AP
socket until I'm ready.
I need to do a lot of cleanup work here, but the code appears to work, so
now's a good time for a checkin.
svn:r22
2002-07-02 11:36:58 +02:00
|
|
|
circuit_close(circ);
|
|
|
|
return 0;
|
|
|
|
}
|
major overhaul: dns slave subsystem, topics
on startup, it forks off a master dns handler, which forks off dns
slaves (like the apache model). slaves as spawned as load increases,
and then reused. excess slaves are not ever killed, currently.
implemented topics. each topic has a receive window in each direction
at each edge of the circuit, and sends sendme's at the data level, as
per before. each circuit also has receive windows in each direction at
each hop; an edge sends a circuit-level sendme as soon as enough data
cells have arrived (regardless of whether the data cells were flushed
to the exit conns). removed the 'connected' cell type, since it's now
a topic command within data cells.
at the edge of the circuit, there can be multiple connections associated
with a single circuit. you find them via the linked list conn->next_topic.
currently each new ap connection starts its own circuit, so we ought
to see comparable performance to what we had before. but that's only
because i haven't written the code to reattach to old circuits. please
try to break it as-is, and then i'll make it reuse the same circuit and
we'll try to break that.
svn:r152
2003-01-26 10:02:24 +01:00
|
|
|
assert(conn->n_receive_topicwindow > 0);
|
|
|
|
if(--conn->n_receive_topicwindow <= 0) { /* is it 0 after decrement? */
|
2003-03-01 00:49:52 +01:00
|
|
|
connection_stop_reading(conn);
|
major overhaul: dns slave subsystem, topics
on startup, it forks off a master dns handler, which forks off dns
slaves (like the apache model). slaves as spawned as load increases,
and then reused. excess slaves are not ever killed, currently.
implemented topics. each topic has a receive window in each direction
at each edge of the circuit, and sends sendme's at the data level, as
per before. each circuit also has receive windows in each direction at
each hop; an edge sends a circuit-level sendme as soon as enough data
cells have arrived (regardless of whether the data cells were flushed
to the exit conns). removed the 'connected' cell type, since it's now
a topic command within data cells.
at the edge of the circuit, there can be multiple connections associated
with a single circuit. you find them via the linked list conn->next_topic.
currently each new ap connection starts its own circuit, so we ought
to see comparable performance to what we had before. but that's only
because i haven't written the code to reattach to old circuits. please
try to break it as-is, and then i'll make it reuse the same circuit and
we'll try to break that.
svn:r152
2003-01-26 10:02:24 +01:00
|
|
|
log(LOG_DEBUG,"connection_package_raw_inbuf(): receive_topicwindow at exit reached 0.");
|
2002-07-18 08:37:58 +02:00
|
|
|
return 0; /* don't process the inbuf any more */
|
|
|
|
}
|
major overhaul: dns slave subsystem, topics
on startup, it forks off a master dns handler, which forks off dns
slaves (like the apache model). slaves as spawned as load increases,
and then reused. excess slaves are not ever killed, currently.
implemented topics. each topic has a receive window in each direction
at each edge of the circuit, and sends sendme's at the data level, as
per before. each circuit also has receive windows in each direction at
each hop; an edge sends a circuit-level sendme as soon as enough data
cells have arrived (regardless of whether the data cells were flushed
to the exit conns). removed the 'connected' cell type, since it's now
a topic command within data cells.
at the edge of the circuit, there can be multiple connections associated
with a single circuit. you find them via the linked list conn->next_topic.
currently each new ap connection starts its own circuit, so we ought
to see comparable performance to what we had before. but that's only
because i haven't written the code to reattach to old circuits. please
try to break it as-is, and then i'll make it reuse the same circuit and
we'll try to break that.
svn:r152
2003-01-26 10:02:24 +01:00
|
|
|
log(LOG_DEBUG,"connection_package_raw_inbuf(): receive_topicwindow at exit is %d",conn->n_receive_topicwindow);
|
Integrated onion proxy into or/
The 'or' process can now be told (by the global_role variable) what
roles this server should play -- connect to all ORs, listen for ORs,
listen for OPs, listen for APs, or any combination.
* everything in /src/op/ is now obsolete.
* connection_ap.c now handles all interactions with application proxies
* "port" is now or_port, op_port, ap_port. But routers are still always
referenced (say, in conn_get_by_addr_port()) by addr / or_port. We
should make routers.c actually read these new ports (currently I've
kludged it so op_port = or_port+10, ap_port=or_port+20)
* circuits currently know if they're at the beginning of the path because
circ->cpath is set. They use this instead for crypts (both ways),
if it's set.
* I still obey the "send a 0 back to the AP when you're ready" protocol,
but I think we should phase it out. I can simply not read from the AP
socket until I'm ready.
I need to do a lot of cleanup work here, but the code appears to work, so
now's a good time for a checkin.
svn:r22
2002-07-02 11:36:58 +02:00
|
|
|
} else { /* send it forward. we're an AP */
|
2003-02-06 09:00:49 +01:00
|
|
|
assert(conn->type == CONN_TYPE_AP);
|
Integrated onion proxy into or/
The 'or' process can now be told (by the global_role variable) what
roles this server should play -- connect to all ORs, listen for ORs,
listen for OPs, listen for APs, or any combination.
* everything in /src/op/ is now obsolete.
* connection_ap.c now handles all interactions with application proxies
* "port" is now or_port, op_port, ap_port. But routers are still always
referenced (say, in conn_get_by_addr_port()) by addr / or_port. We
should make routers.c actually read these new ports (currently I've
kludged it so op_port = or_port+10, ap_port=or_port+20)
* circuits currently know if they're at the beginning of the path because
circ->cpath is set. They use this instead for crypts (both ways),
if it's set.
* I still obey the "send a 0 back to the AP when you're ready" protocol,
but I think we should phase it out. I can simply not read from the AP
socket until I'm ready.
I need to do a lot of cleanup work here, but the code appears to work, so
now's a good time for a checkin.
svn:r22
2002-07-02 11:36:58 +02:00
|
|
|
cell.aci = circ->n_aci;
|
major overhaul: dns slave subsystem, topics
on startup, it forks off a master dns handler, which forks off dns
slaves (like the apache model). slaves as spawned as load increases,
and then reused. excess slaves are not ever killed, currently.
implemented topics. each topic has a receive window in each direction
at each edge of the circuit, and sends sendme's at the data level, as
per before. each circuit also has receive windows in each direction at
each hop; an edge sends a circuit-level sendme as soon as enough data
cells have arrived (regardless of whether the data cells were flushed
to the exit conns). removed the 'connected' cell type, since it's now
a topic command within data cells.
at the edge of the circuit, there can be multiple connections associated
with a single circuit. you find them via the linked list conn->next_topic.
currently each new ap connection starts its own circuit, so we ought
to see comparable performance to what we had before. but that's only
because i haven't written the code to reattach to old circuits. please
try to break it as-is, and then i'll make it reuse the same circuit and
we'll try to break that.
svn:r152
2003-01-26 10:02:24 +01:00
|
|
|
if(circuit_deliver_data_cell_from_edge(&cell, circ, EDGE_AP) < 0) {
|
|
|
|
log(LOG_DEBUG,"connection_package_raw_inbuf(): circuit_deliver_data_cell_from_edge (forward) failed. Closing.");
|
Integrated onion proxy into or/
The 'or' process can now be told (by the global_role variable) what
roles this server should play -- connect to all ORs, listen for ORs,
listen for OPs, listen for APs, or any combination.
* everything in /src/op/ is now obsolete.
* connection_ap.c now handles all interactions with application proxies
* "port" is now or_port, op_port, ap_port. But routers are still always
referenced (say, in conn_get_by_addr_port()) by addr / or_port. We
should make routers.c actually read these new ports (currently I've
kludged it so op_port = or_port+10, ap_port=or_port+20)
* circuits currently know if they're at the beginning of the path because
circ->cpath is set. They use this instead for crypts (both ways),
if it's set.
* I still obey the "send a 0 back to the AP when you're ready" protocol,
but I think we should phase it out. I can simply not read from the AP
socket until I'm ready.
I need to do a lot of cleanup work here, but the code appears to work, so
now's a good time for a checkin.
svn:r22
2002-07-02 11:36:58 +02:00
|
|
|
circuit_close(circ);
|
|
|
|
return 0;
|
|
|
|
}
|
major overhaul: dns slave subsystem, topics
on startup, it forks off a master dns handler, which forks off dns
slaves (like the apache model). slaves as spawned as load increases,
and then reused. excess slaves are not ever killed, currently.
implemented topics. each topic has a receive window in each direction
at each edge of the circuit, and sends sendme's at the data level, as
per before. each circuit also has receive windows in each direction at
each hop; an edge sends a circuit-level sendme as soon as enough data
cells have arrived (regardless of whether the data cells were flushed
to the exit conns). removed the 'connected' cell type, since it's now
a topic command within data cells.
at the edge of the circuit, there can be multiple connections associated
with a single circuit. you find them via the linked list conn->next_topic.
currently each new ap connection starts its own circuit, so we ought
to see comparable performance to what we had before. but that's only
because i haven't written the code to reattach to old circuits. please
try to break it as-is, and then i'll make it reuse the same circuit and
we'll try to break that.
svn:r152
2003-01-26 10:02:24 +01:00
|
|
|
assert(conn->p_receive_topicwindow > 0);
|
|
|
|
if(--conn->p_receive_topicwindow <= 0) { /* is it 0 after decrement? */
|
2003-03-01 00:49:52 +01:00
|
|
|
connection_stop_reading(conn);
|
major overhaul: dns slave subsystem, topics
on startup, it forks off a master dns handler, which forks off dns
slaves (like the apache model). slaves as spawned as load increases,
and then reused. excess slaves are not ever killed, currently.
implemented topics. each topic has a receive window in each direction
at each edge of the circuit, and sends sendme's at the data level, as
per before. each circuit also has receive windows in each direction at
each hop; an edge sends a circuit-level sendme as soon as enough data
cells have arrived (regardless of whether the data cells were flushed
to the exit conns). removed the 'connected' cell type, since it's now
a topic command within data cells.
at the edge of the circuit, there can be multiple connections associated
with a single circuit. you find them via the linked list conn->next_topic.
currently each new ap connection starts its own circuit, so we ought
to see comparable performance to what we had before. but that's only
because i haven't written the code to reattach to old circuits. please
try to break it as-is, and then i'll make it reuse the same circuit and
we'll try to break that.
svn:r152
2003-01-26 10:02:24 +01:00
|
|
|
log(LOG_DEBUG,"connection_package_raw_inbuf(): receive_topicwindow at AP reached 0.");
|
2002-07-18 08:37:58 +02:00
|
|
|
return 0; /* don't process the inbuf any more */
|
|
|
|
}
|
major overhaul: dns slave subsystem, topics
on startup, it forks off a master dns handler, which forks off dns
slaves (like the apache model). slaves as spawned as load increases,
and then reused. excess slaves are not ever killed, currently.
implemented topics. each topic has a receive window in each direction
at each edge of the circuit, and sends sendme's at the data level, as
per before. each circuit also has receive windows in each direction at
each hop; an edge sends a circuit-level sendme as soon as enough data
cells have arrived (regardless of whether the data cells were flushed
to the exit conns). removed the 'connected' cell type, since it's now
a topic command within data cells.
at the edge of the circuit, there can be multiple connections associated
with a single circuit. you find them via the linked list conn->next_topic.
currently each new ap connection starts its own circuit, so we ought
to see comparable performance to what we had before. but that's only
because i haven't written the code to reattach to old circuits. please
try to break it as-is, and then i'll make it reuse the same circuit and
we'll try to break that.
svn:r152
2003-01-26 10:02:24 +01:00
|
|
|
log(LOG_DEBUG,"connection_package_raw_inbuf(): receive_topicwindow at AP is %d",conn->p_receive_topicwindow);
|
|
|
|
}
|
2003-03-17 03:42:45 +01:00
|
|
|
if (conn->inbuf_datalen) {
|
2003-01-26 10:02:24 +01:00
|
|
|
log(LOG_DEBUG,"connection_package_raw_inbuf(): recursing.");
|
|
|
|
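/* jump back and package the next pending cell from the inbuf */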
goto repeat_connection_package_raw_inbuf;
|
Integrated onion proxy into or/
The 'or' process can now be told (by the global_role variable) what
roles this server should play -- connect to all ORs, listen for ORs,
listen for OPs, listen for APs, or any combination.
* everything in /src/op/ is now obsolete.
* connection_ap.c now handles all interactions with application proxies
* "port" is now or_port, op_port, ap_port. But routers are still always
referenced (say, in conn_get_by_addr_port()) by addr / or_port. We
should make routers.c actually read these new ports (currently I've
kludged it so op_port = or_port+10, ap_port=or_port+20)
* circuits currently know if they're at the beginning of the path because
circ->cpath is set. They use this instead for crypts (both ways),
if it's set.
* I still obey the "send a 0 back to the AP when you're ready" protocol,
but I think we should phase it out. I can simply not read from the AP
socket until I'm ready.
I need to do a lot of cleanup work here, but the code appears to work, so
now's a good time for a checkin.
svn:r22
2002-07-02 11:36:58 +02:00
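The role selection is most naturally a bitmask; a minimal sketch under the assumption that the ROLE_* flags are single bits (the actual names and values in or.h may differ), together with the temporary port kludge described above:

/* Sketch only: assumed single-bit role flags; real definitions are in or.h. */
#define ROLE_OR_CONNECT_ALL (1<<0)  /* connect out to every known OR */
#define ROLE_OR_LISTEN      (1<<1)  /* accept incoming OR connections */
#define ROLE_OP_LISTEN      (1<<2)  /* accept incoming OP connections */
#define ROLE_AP_LISTEN      (1<<3)  /* accept incoming AP connections */

static int role_is_set(int global_role, int role) {
  return (global_role & role) != 0;
}

/* The current kludge: op/ap ports are derived from or_port. */
static void derive_ports(int or_port, int *op_port, int *ap_port) {
  *op_port = or_port + 10;
  *ap_port = or_port + 20;
}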
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2003-01-26 10:02:24 +01:00
|
|
|
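/* Decide whether this edge has consumed enough of its topic receive
 * window to owe the peer credit; if so, bump the window by
 * TOPICWINDOW_INCREMENT and queue a TOPIC_COMMAND_SENDME data cell
 * (backward along the circuit from an exit, forward from an AP). */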
int connection_consider_sending_sendme(connection_t *conn, int edge_type) {
|
2002-07-18 08:37:58 +02:00
|
|
|
circuit_t *circ;
|
2003-01-26 10:02:24 +01:00
|
|
|
cell_t cell;
|
2002-07-18 08:37:58 +02:00
|
|
|
|
|
|
|
if(connection_outbuf_too_full(conn))
|
|
|
|
return 0;
|
|
|
|
|
|
|
|
circ = circuit_get_by_conn(conn);
|
|
|
|
if(!circ) {
|
2002-09-09 06:06:59 +02:00
|
|
|
/* this can legitimately happen if the destroy has already arrived and torn down the circuit */
|
|
|
|
log(LOG_DEBUG,"connection_consider_sending_sendme(): No circuit associated with conn. Skipping.");
|
|
|
|
return 0;
|
2002-07-18 08:37:58 +02:00
|
|
|
}
|
|
|
|
|
2003-03-11 22:38:38 +01:00
|
|
|
memset(&cell, 0, sizeof(cell_t));
|
2003-02-07 00:48:35 +01:00
|
|
|
*(uint16_t *)(cell.payload+2) = htons(conn->topic_id);
|
2003-01-26 10:02:24 +01:00
|
|
|
*cell.payload = TOPIC_COMMAND_SENDME;
|
|
|
|
cell.length += TOPIC_HEADER_SIZE;
|
|
|
|
cell.command = CELL_DATA;
|
|
|
|
|
|
|
|
if(edge_type == EDGE_EXIT) { /* we're at an exit */
|
|
|
|
if(conn->p_receive_topicwindow < TOPICWINDOW_START - TOPICWINDOW_INCREMENT) {
|
|
|
|
log(LOG_DEBUG,"connection_consider_sending_sendme(): Outbuf %d, Queueing topic sendme back.", conn->outbuf_flushlen);
|
|
|
|
conn->p_receive_topicwindow += TOPICWINDOW_INCREMENT;
|
|
|
|
cell.aci = circ->p_aci;
|
|
|
|
if(circuit_deliver_data_cell_from_edge(&cell, circ, edge_type) < 0) {
|
|
|
|
log(LOG_DEBUG,"connection_consider_sending_sendme(): circuit_deliver_data_cell_from_edge (backward) failed. Closing.");
|
|
|
|
circuit_close(circ);
|
|
|
|
return 0;
|
|
|
|
}
|
2002-07-18 08:37:58 +02:00
|
|
|
}
|
|
|
|
} else { /* we're at an AP */
|
2003-01-26 10:02:24 +01:00
|
|
|
assert(edge_type == EDGE_AP);
|
|
|
|
if(conn->n_receive_topicwindow < TOPICWINDOW_START-TOPICWINDOW_INCREMENT) {
|
|
|
|
log(LOG_DEBUG,"connection_consider_sending_sendme(): Outbuf %d, Queueing topic sendme forward.", conn->outbuf_flushlen);
|
|
|
|
conn->n_receive_topicwindow += TOPICWINDOW_INCREMENT;
|
|
|
|
cell.aci = circ->n_aci;
|
|
|
|
if(circuit_deliver_data_cell_from_edge(&cell, circ, edge_type) < 0) {
|
|
|
|
log(LOG_DEBUG,"connection_consider_sending_sendme(): circuit_deliver_data_cell_from_edge (forward) failed. Closing.");
|
|
|
|
circuit_close(circ);
|
|
|
|
return 0;
|
|
|
|
}
|
2002-07-18 08:37:58 +02:00
|
|
|
}
|
|
|
|
}
|
|
|
|
return 0;
|
|
|
|
}
|
|
|
|
|
2002-06-27 00:45:49 +02:00
|
|
|
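/* Called when conn's outbuf has just been fully flushed; dispatch to the
 * per-type handler so it can decide what to do next (e.g. resume reading). */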
int connection_finished_flushing(connection_t *conn) {
|
|
|
|
|
|
|
|
assert(conn);
|
|
|
|
|
2002-07-18 08:37:58 +02:00
|
|
|
// log(LOG_DEBUG,"connection_finished_flushing() entered. Socket %u.", conn->s);
|
2002-06-27 00:45:49 +02:00
|
|
|
|
|
|
|
switch(conn->type) {
|
2002-07-02 11:36:58 +02:00
|
|
|
case CONN_TYPE_AP:
|
|
|
|
return connection_ap_finished_flushing(conn);
|
2002-06-27 00:45:49 +02:00
|
|
|
case CONN_TYPE_OP:
|
|
|
|
return connection_op_finished_flushing(conn);
|
|
|
|
case CONN_TYPE_OR:
|
|
|
|
return connection_or_finished_flushing(conn);
|
2002-06-30 09:37:49 +02:00
|
|
|
case CONN_TYPE_EXIT:
|
|
|
|
return connection_exit_finished_flushing(conn);
|
2002-09-26 14:09:10 +02:00
|
|
|
case CONN_TYPE_DIR:
|
|
|
|
return connection_dir_finished_flushing(conn);
|
2003-01-26 10:02:24 +01:00
|
|
|
case CONN_TYPE_DNSMASTER:
|
|
|
|
return connection_dns_finished_flushing(conn);
|
2002-06-27 00:45:49 +02:00
|
|
|
default:
|
|
|
|
log(LOG_DEBUG,"connection_finished_flushing() got unexpected conn->type.");
|
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
}
|
|
|
|
|
|
|
|
int connection_process_cell_from_inbuf(connection_t *conn) {
|
|
|
|
/* check if there's a whole cell there.
|
|
|
|
* if yes, pull it off, decrypt it, and process it.
|
|
|
|
*/
|
2002-10-02 22:12:44 +02:00
|
|
|
char crypted[CELL_NETWORK_SIZE];
|
2002-06-27 00:45:49 +02:00
|
|
|
char outbuf[1024];
|
2002-07-18 08:37:58 +02:00
|
|
|
// int x;
|
2002-10-02 22:12:44 +02:00
|
|
|
cell_t cell;
|
2002-06-27 00:45:49 +02:00
|
|
|
|
2002-10-02 22:12:44 +02:00
|
|
|
if(conn->inbuf_datalen < CELL_NETWORK_SIZE) /* entire cell available? */
|
2002-06-27 00:45:49 +02:00
|
|
|
return 0; /* not yet */
|
|
|
|
|
2002-10-02 22:12:44 +02:00
|
|
|
if(connection_fetch_from_buf(crypted,CELL_NETWORK_SIZE,conn) < 0) {
|
2002-06-27 00:45:49 +02:00
|
|
|
return -1;
|
|
|
|
}
|
|
|
|
|
2002-07-16 03:12:15 +02:00
|
|
|
#if 0
|
2002-06-27 00:45:49 +02:00
|
|
|
printf("Cell header crypttext: ");
|
|
|
|
for(x=0;x<8;x++) {
|
|
|
|
printf("%u ",crypted[x]);
|
|
|
|
}
|
|
|
|
printf("\n");
|
2002-07-16 03:12:15 +02:00
|
|
|
#endif
|
2002-06-27 00:45:49 +02:00
|
|
|
/* decrypt */
|
2002-10-02 22:12:44 +02:00
|
|
|
if(crypto_cipher_decrypt(conn->b_crypto,crypted,CELL_NETWORK_SIZE,outbuf)) {
|
2002-06-27 00:45:49 +02:00
|
|
|
log(LOG_ERR,"connection_process_cell_from_inbuf(): Decryption failed, dropping.");
|
|
|
|
return connection_process_inbuf(conn); /* process the remainder of the buffer */
|
|
|
|
}
|
2002-07-18 08:37:58 +02:00
|
|
|
// log(LOG_DEBUG,"connection_process_cell_from_inbuf(): Cell decrypted (%d bytes).",outlen);
|
2002-07-16 03:12:15 +02:00
|
|
|
#if 0
|
2002-06-27 00:45:49 +02:00
|
|
|
printf("Cell header plaintext: ");
|
|
|
|
for(x=0;x<8;x++) {
|
|
|
|
printf("%u ",outbuf[x]);
|
|
|
|
}
|
|
|
|
printf("\n");
|
2002-07-16 03:12:15 +02:00
|
|
|
#endif
|
2002-06-27 00:45:49 +02:00
|
|
|
|
2003-03-05 21:03:05 +01:00
|
|
|
/* retrieve cell info from outbuf (create the host-order struct from the network-order string) */
|
2002-10-02 22:12:44 +02:00
|
|
|
memset(&cell,0,sizeof(cell_t)); /* zero it out to start */
|
|
|
|
cell.aci = ntohs(*(aci_t *)outbuf);
|
|
|
|
cell.command = *(outbuf+2);
|
|
|
|
cell.length = *(outbuf+3);
|
|
|
|
memcpy(cell.payload, outbuf+8, CELL_PAYLOAD_SIZE);
|
|
|
|
|
2002-07-18 08:37:58 +02:00
|
|
|
// log(LOG_DEBUG,"connection_process_cell_from_inbuf(): Decrypted cell is of type %u (ACI %u).",cellp->command,cellp->aci);
|
2002-10-02 22:12:44 +02:00
|
|
|
command_process_cell(&cell, conn);
|
2002-06-27 00:45:49 +02:00
|
|
|
|
|
|
|
return connection_process_inbuf(conn); /* process the remainder of the buffer */
|
|
|
|
}
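For reference, the offsets unpacked above imply this wire layout: bytes 0-1 carry the aci in network order, byte 2 the command, byte 3 the length, bytes 4-7 are unused, and the payload starts at byte 8. A minimal sketch of the matching pack step (the real sender may differ; this just mirrors the parse):

/* Sketch only: pack a cell_t using the same offsets the parser reads. */
static void cell_pack_sketch(char *dest, const cell_t *cell) {
  memset(dest, 0, CELL_NETWORK_SIZE);
  *(uint16_t *)dest = htons(cell->aci);   /* bytes 0-1: aci, network order */
  *(dest+2) = cell->command;              /* byte 2: command */
  *(dest+3) = cell->length;               /* byte 3: payload length */
  memcpy(dest+8, cell->payload, CELL_PAYLOAD_SIZE); /* bytes 8..: payload */
}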
|
|
|
|
|