- Oct 17, 2004
- Oct 16, 2004
- Oct 15, 2004
- Oct 14, 2004
* Allow for a configurable tunnel "growth factor", rather than trying to achieve a steady state. This will let us grow gradually when the router is needed more, rather than blindly accepting the request or arbitrarily choking it at an averaged value. Configure this with "router.tunnelGrowthFactor" in the router.config (default "1.5").
* Adjust the tunnel test timeouts dynamically - rather than the old flat 30s (!!!) timeout, we set the timeout to 2x the average tunnel test time (the deviation factor can be adjusted by setting "router.tunnelTestDeviation" to "3.0" or whatever). This should help find the 'good' tunnels (see the sketch below).
* Added some crazy debugging to try to track down an intermittent hang.
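A rough sketch of the dynamic timeout calculation described above. Only the "2x the average test time" rule and the "router.tunnelTestDeviation" knob come from the entry; the class name, the clamping bounds, and the fallback when no data is available are assumptions for illustration.
```java
// Sketch: derive the tunnel test timeout from the measured average test time
// instead of a flat 30s. Names and the min/max clamps are illustrative only.
public class TunnelTestTimeout {
    /** multiplier applied to the average test time (router.tunnelTestDeviation) */
    private final double _deviationFactor;
    private static final double DEFAULT_DEVIATION = 2.0; // "2x the average tunnel test time"
    private static final long MIN_TIMEOUT = 5 * 1000;    // assumed floor
    private static final long MAX_TIMEOUT = 30 * 1000;   // the old flat ceiling

    public TunnelTestTimeout(double deviationFactor) {
        _deviationFactor = (deviationFactor > 0) ? deviationFactor : DEFAULT_DEVIATION;
    }

    /** @param avgTestTimeMs average tunnel test time (ms) over the recent period */
    public long getTimeout(double avgTestTimeMs) {
        if (avgTestTimeMs <= 0) return MAX_TIMEOUT; // no data yet, be generous
        long timeout = (long) (avgTestTimeMs * _deviationFactor);
        return Math.max(MIN_TIMEOUT, Math.min(MAX_TIMEOUT, timeout));
    }
}
```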
- Oct 13, 2004
- Oct 12, 2004
* Disable the probabilistic drop by default (enable via the router config property "tcp.dropProbabalistically=true")
* Disable the actual watchdog shutdown by default, but keep track of more variables and log a lot more when it occurs (enable via the router config property "watchdog.haltOnHang=true")
* Implement some tunnel participation smoothing by refusing requests probabilistically as our participating tunnel count exceeds the previous hour's, or when the 10 minute average tunnel test time exceeds the 60 minute average tunnel test time. The probability in both cases is oldAverage / #current, so if you're suddenly flooded with 200 tunnels and you had previously only participated in 50, you'll have a 25% chance of accepting a subsequent request (see the sketch below).
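A minimal sketch of the acceptance rule from the last item. Only the oldAverage / #current probability comes from the entry; the class and method names are made up for illustration.
```java
import java.util.Random;

// Sketch of the probabilistic tunnel-request throttle described above: accept
// with probability (previous hour's participation) / (current participation)
// once the current count exceeds the previous hour's. Names are illustrative.
public class ParticipationThrottle {
    private final Random _random = new Random();

    /**
     * @param previousHourCount tunnels we participated in during the previous hour
     * @param currentCount      tunnels we are currently participating in
     * @return true if we should accept the new tunnel request
     */
    public boolean shouldAccept(int previousHourCount, int currentCount) {
        if (currentCount <= previousHourCount) return true; // not above the old level: accept
        double probability = (double) previousHourCount / currentCount;
        return _random.nextDouble() < probability;
    }
}
// e.g. previousHourCount=50, currentCount=200 -> accept with probability 0.25
```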
- Oct 11, 2004
- Oct 10, 2004
* Added a watchdog timer to do some baseline liveness checking to help debug some odd errors (see the sketch below).
* Added a pair of summary stats for bandwidth usage, allowing easy export with the other stats ("bw.sendBps" and "bw.receiveBps").
* Trimmed another memory allocation on message reception.
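A rough sketch of what such a liveness watchdog might look like. The heartbeat mechanism, the 60 second period, and the halt behavior (tied to the "watchdog.haltOnHang" property mentioned in the later entry above) are assumptions for illustration, not the router's actual implementation.
```java
// Sketch: a watchdog thread that checks how long it has been since the router
// last "pinged" it, logging (and optionally halting) if the gap grows too large.
public class RouterWatchdog implements Runnable {
    private volatile long _lastHeartbeat = System.currentTimeMillis();
    private static final long PERIOD = 60 * 1000;   // assumed check interval
    private static final long MAX_GAP = 2 * PERIOD; // assumed hang threshold
    private final boolean _haltOnHang;              // e.g. driven by watchdog.haltOnHang

    public RouterWatchdog(boolean haltOnHang) { _haltOnHang = haltOnHang; }

    /** called periodically by the router's main loop to prove it is still alive */
    public void heartbeat() { _lastHeartbeat = System.currentTimeMillis(); }

    public void run() {
        while (true) {
            try { Thread.sleep(PERIOD); } catch (InterruptedException ie) { return; }
            long gap = System.currentTimeMillis() - _lastHeartbeat;
            if (gap > MAX_GAP) {
                System.err.println("Watchdog: no heartbeat for " + gap + "ms");
                if (_haltOnHang) Runtime.getRuntime().halt(1);
            }
        }
    }
}
```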
- Oct 08, 2004
- Oct 07, 2004
- Oct 06, 2004
* Implement an active queue management scheme on the TCP transports, dropping messages probabilistically as the queue fills up. The estimated queue capacity is determined by the rate at which messages have been sent to the peer (averaged over 1, 5, and 60 minute periods). As we exceed 1/2 of the estimated capacity, we drop messages throughout the queue probabilistically with regard to their size. This is based on RFC 2309's RED, with the minimum threshold set to 1/2 the estimated connection capacity (a sketch of the drop decision follows below). We may want to consider using a send rate and queue size measured across all connections, to deal with our own local bandwidth saturation, but we'll try the per-connection metrics first.
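A sketch of the RED-style drop decision: below half the estimated capacity nothing is dropped, above it the drop probability ramps up and is weighted by message size. Only the 1/2-capacity minimum threshold and the size-dependent drop come from the entry; the class name, the reference message size, and the exact weighting are assumptions.
```java
import java.util.Random;

// Sketch of the RED-style active queue management described above. Names and
// the size weighting are illustrative, not the transport's actual code.
public class REDDropper {
    private static final int MAX_MESSAGE_SIZE = 32 * 1024; // assumed reference size
    private final Random _random = new Random();

    /**
     * @param queuedBytes       bytes currently queued for this peer
     * @param estimatedCapacity estimated queue capacity (from the 1/5/60m send rates)
     * @param messageSize       size of the message under consideration
     * @return true if the message should be dropped
     */
    public boolean shouldDrop(long queuedBytes, long estimatedCapacity, int messageSize) {
        long minThreshold = estimatedCapacity / 2;
        if (queuedBytes <= minThreshold) return false;     // below the RED minimum threshold
        if (queuedBytes >= estimatedCapacity) return true; // queue full: always drop
        // linear ramp between the minimum threshold and full capacity...
        double fill = (double) (queuedBytes - minThreshold) / (estimatedCapacity - minThreshold);
        // ...weighted so larger messages are more likely to be dropped (assumption)
        double sizeWeight = Math.min(1.0, (double) messageSize / MAX_MESSAGE_SIZE);
        double dropProbability = fill * sizeWeight;
        return _random.nextDouble() < dropProbability;
    }
}
```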
- Oct 05, 2004
- Oct 04, 2004
- Oct 03, 2004