- Mar 31, 2021
zzz authored
Enable ipv6 check in locked_rebuild():
- Change locked_needsRebuild() to return separate codes for IPv4 and IPv6
- Change locked_needsRebuild() for introducers so it only returns true if more are available
- Change rebuildExternalAddress() so an IPv6 rebuild can be done without an IP
- Only call rebuildIfNecessary() on a peer drop if the peer could have been an introducer
- Fix the check in pickInbound() for support of the AliceIP field
- Log tweaks
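A rough sketch of what the first two changes might look like, assuming a bitmask return value; all names, helpers, and constants below are hypothetical illustrations, not the actual I2P transport code.

```java
// Hypothetical sketch only -- not the actual I2P transport code.
// Shows one way locked_needsRebuild() could return separate codes
// for IPv4 and IPv6 instead of a single boolean.
public class RebuildCheckSketch {
    static final int REBUILD_NONE = 0;
    static final int REBUILD_IPV4 = 1 << 0;
    static final int REBUILD_IPV6 = 1 << 1;

    // Stub state standing in for the transport's real address tracking
    boolean ipv4Changed, ipv6Changed;
    boolean introducersRequired, moreIntroducersAvailable;

    int locked_needsRebuild() {
        int rv = REBUILD_NONE;
        if (ipv4Changed)
            rv |= REBUILD_IPV4;
        if (ipv6Changed)              // an IPv6 rebuild no longer requires an IP
            rv |= REBUILD_IPV6;
        // For introducers, only report a rebuild if more are actually available
        if (introducersRequired && !moreIntroducersAvailable)
            rv &= ~REBUILD_IPV4;
        return rv;
    }
}
```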
- Mar 20, 2021
zzz authored
Part 1: Change the bandwidth estimate to an exponential moving average (similar to the Westwood+ Simple Bandwidth Estimator in streaming) instead of a 40 ms bucket, and also use it for the tunnel.participatingBandwidthOut stat. Remove the linear moving average code previously used for that stat. Reduce the RED threshold from 120% to 95% of the limit.

Part 2: Fix the other part of RED, which is the dropping calculation. Previously, it simply used the bandwidth to start dropping when it was higher than a threshold; the drop percentage rose linearly from 0 to 100% based on how far the bandwidth was above the threshold. That was far, far from the RED paper. Now we follow the RED paper (see the reference in the SyntheticREDQueue javadoc) to calculate an average queue size, using the exact same exponential-moving-average method used for bandwidth. Similar to CoDel, it also keeps a count of how long the size has been over the threshold and increases the drop probability with the count. The unadjusted drop probability rises from 0 to 2%, and then everything is dropped, as in the RED paper. The low and high thresholds are configured at 77 ms and 333 ms of queued data, respectively.

The queue is "synthetic" in that there is no actual queue; it only calculates how big the queue would be if it were real and were being emptied at exactly the target rate. The actual queueing happens downstream, in the transports and in UDP-Sender.

The goals, for the 80% default share, are to do most of the participating-traffic dropping here in RED rather than downstream in UDP-Sender, while fully utilizing the configured share bandwidth. If the router goes into high-message-delay mode, that means we are not dropping enough in RED. Above an 80% share this probably doesn't work as well. More tuning may be required, in particular to achieve the goal of "protecting" the UDP-Sender queue and local client/router traffic by dropping more aggressively in RED.

This patch also improves the overhead estimate for outbound participating tunnel traffic at the OBEP.

Reviewed, tested, acked by zlatinb
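For concreteness, below is a minimal Java sketch of a synthetic RED dropper along the lines described above. The 2% cap and the 77/333 ms thresholds come from the commit message; the EWMA weight, the class shape, and all names are assumptions for illustration, not the actual SyntheticREDQueue code.

```java
import java.util.Random;

// Illustrative sketch of the RED dropper described above, following the
// Floyd/Jacobson RED paper. Thresholds and the 2% cap are from the commit
// message; the EWMA weight and everything else here is assumed.
public class RedSketch {
    static final double MIN_TH_MS = 77;    // low threshold, ms of queued data
    static final double MAX_TH_MS = 333;   // high threshold, ms of queued data
    static final double MAX_P = 0.02;      // unadjusted drop probability cap
    static final double WEIGHT = 0.02;     // EWMA smoothing weight (assumed)

    final double bytesPerMs;               // configured target drain rate
    double queueBytes;                     // the "synthetic" queue size
    double avgQueueMs;                     // EWMA of queue size, in ms of data
    int count;                             // packets since the last drop
    long lastTime = System.currentTimeMillis();
    final Random rand = new Random();

    RedSketch(double bytesPerMs) { this.bytesPerMs = bytesPerMs; }

    /** @return true if the packet should be dropped */
    synchronized boolean shouldDrop(int sizeBytes) {
        long now = System.currentTimeMillis();
        // There is no real queue: just compute how big it would be if it
        // were drained at exactly the target rate since the last packet.
        queueBytes = Math.max(0, queueBytes - (now - lastTime) * bytesPerMs);
        lastTime = now;
        double queueMs = (queueBytes + sizeBytes) / bytesPerMs;
        // Same exponential-moving-average method as the bandwidth estimate
        avgQueueMs = (1 - WEIGHT) * avgQueueMs + WEIGHT * queueMs;

        boolean drop;
        if (avgQueueMs < MIN_TH_MS) {
            count = 0;
            drop = false;
        } else if (avgQueueMs >= MAX_TH_MS) {
            drop = true;                   // above the high threshold, drop everything
        } else {
            // Unadjusted probability rises linearly from 0 to MAX_P, then is
            // scaled up by how long we've been over the threshold (the count)
            double pb = MAX_P * (avgQueueMs - MIN_TH_MS) / (MAX_TH_MS - MIN_TH_MS);
            double pa = (count * pb >= 1) ? 1.0 : pb / (1 - count * pb);
            count++;
            drop = rand.nextDouble() < pa;
            if (drop)
                count = 0;
        }
        if (!drop)
            queueBytes += sizeBytes;       // accepted packets join the synthetic queue
        return drop;
    }
}
```

The "synthetic" part is the first few lines of shouldDrop(): instead of measuring a real queue, it drains a counter at exactly the target rate and treats whatever remains as the queue size.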
- Mar 10, 2021
zzz authored
Do both writes and removes in the writer thread, as suggested by jogger (http://zzz.i2p/topics/3082). Log tweaks.
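A hypothetical sketch of the pattern, not the actual I2P code: once removes are funneled through the same writer thread as writes, the backing store is only ever touched from one thread and needs no locking of its own.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// Hypothetical sketch, not the actual I2P code: callers enqueue both
// writes and removes; a single writer thread applies them in order.
public class WriterSketch implements Runnable {
    // A remove is encoded as an Op with a null value
    static final class Op {
        final String key;
        final byte[] value;
        Op(String key, byte[] value) { this.key = key; this.value = value; }
    }

    private final BlockingQueue<Op> queue = new LinkedBlockingQueue<>();
    private final Map<String, byte[]> store = new HashMap<>();

    public void write(String key, byte[] value) { queue.offer(new Op(key, value)); }
    public void remove(String key)              { queue.offer(new Op(key, null)); }

    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                Op op = queue.take();
                if (op.value == null)
                    store.remove(op.key);  // removes happen here too, not on the caller's thread
                else
                    store.put(op.key, op.value);
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}
```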