I2P Address: [http://git.idk.i2p]

  Mar 20, 2021
    • I2CP: Ensure nickname properties are set · 4e1848c3
      zzz authored
      Verified
    • Boolean.valueOf() -> Boolean.parseBoolean() · b55fbbf0
      zzz authored
      Verified
    • Tunnels: Fix RED dropping for part. tunnels (Gitlab MR !24) · 005ac387
      zzz authored
      Part 1:
      Change the bandwidth estimate from a 40 ms bucket to an exponential
      moving average (similar to the Westwood+ Simple Bandwidth Estimator
      in streaming).
      Also use it for the tunnel.participatingBandwidthOut stat.
      Remove the linear moving average code previously used for that stat.
      Reduce the RED threshold from 120% to 95% of the limit.
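
The bucket-to-EMA change can be sketched as follows. The class, field names, smoothing factor, and 100 ms sample interval here are illustrative assumptions, not I2P's actual code or constants:

```java
// Sketch of an exponential-moving-average bandwidth estimator replacing a
// fixed 40 ms bucket. ALPHA and the sample interval are assumed values.
class EwmaBandwidthEstimator {
    private static final double ALPHA = 0.125;  // smoothing factor (assumed)
    private double bytesPerSecond = 0.0;        // current EMA estimate
    private long lastSampleTime;
    private long bytesSinceSample = 0;

    EwmaBandwidthEstimator(long now) {
        lastSampleTime = now;
    }

    /** Record bytes sent; fold a rate sample into the average each interval. */
    synchronized void addSample(int bytes, long now) {
        bytesSinceSample += bytes;
        long elapsed = now - lastSampleTime;
        if (elapsed >= 100) {                   // sample interval in ms (assumed)
            double sample = bytesSinceSample * 1000.0 / elapsed;  // bytes/sec
            bytesPerSecond = ALPHA * sample + (1 - ALPHA) * bytesPerSecond;
            bytesSinceSample = 0;
            lastSampleTime = now;
        }
    }

    synchronized double getBandwidth() {
        return bytesPerSecond;
    }
}
```

Unlike a bucket, the EMA never resets to zero between intervals, so short bursts are smoothed rather than producing a spiky estimate.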
      
      Part 2:
      Fix the other part of RED: the dropping calculation.
      Previously, dropping simply began once the bandwidth exceeded a
      threshold, with the drop percentage rising linearly from 0 to 100%
      based on how far the bandwidth was above the threshold.
      This was far, far from the RED paper.
      
      Now, we follow the RED paper (see ref. in SyntheticREDQueue javadoc)
      to calculate an average queue size, using the exact same
      exponential moving average method used for bandwidth.
      Similar to CoDel, it also includes a count of how long
      the size is over the threshold, and increases the drop probability with the count.
      The unadjusted drop probability rises from 0 to 2%
      and then everything is dropped, as in the RED paper.
      The low and high thresholds are configured at 77 ms and 333 ms of queued data, respectively.
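
Under those rules the drop decision looks roughly like the sketch below. The 2% maximum probability and the 77 ms / 333 ms thresholds come from the commit message; the count-based adjustment follows the RED paper; the class and method names are invented for illustration:

```java
// Sketch of the RED drop decision. Thresholds are expressed as the queue
// sizes corresponding to 77 ms and 333 ms of data at the target rate.
class RedDropper {
    private static final double MAX_P = 0.02;  // unadjusted max drop probability (2%)
    private final double minTh;                // 77 ms of data, in bytes
    private final double maxTh;                // 333 ms of data, in bytes
    private int count = -1;                    // messages since the last drop

    RedDropper(double targetBytesPerSecond) {
        minTh = targetBytesPerSecond * 0.077;
        maxTh = targetBytesPerSecond * 0.333;
    }

    /**
     * @param avg the exponential moving average of the (synthetic) queue
     *            size, in bytes
     * @return true if this message should be dropped
     */
    boolean shouldDrop(double avg, java.util.Random rnd) {
        if (avg < minTh) {
            count = -1;
            return false;                      // below the low threshold: never drop
        }
        if (avg >= maxTh) {
            count = 0;
            return true;                       // above the high threshold: drop everything
        }
        count++;
        // Unadjusted probability rises linearly from 0 to MAX_P...
        double pb = MAX_P * (avg - minTh) / (maxTh - minTh);
        // ...and the adjusted probability grows with the count, per the paper.
        double pa = (count * pb >= 1.0) ? 1.0 : pb / (1.0 - count * pb);
        if (rnd.nextDouble() < pa) {
            count = 0;
            return true;
        }
        return false;
    }
}
```

The count term is what distinguishes this from the old linear scheme: the longer the average stays between the thresholds without a drop, the more likely the next drop becomes, spreading drops out instead of clustering them.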
      
      The queue is "synthetic" in that there's not actually a queue.
      It only calculates how big the queue would be if it were
      a real queue and were being emptied at exactly the target rate.
      The actual queueing is done downstream in the transports and in UDP-Sender.
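
A minimal sketch of that idea, assuming a fixed target drain rate; the names are hypothetical, not those of I2P's SyntheticREDQueue:

```java
// "Synthetic" queue: nothing is actually queued. We only track how large a
// real queue would be if it were emptied at exactly the target rate.
class SyntheticQueue {
    private final double drainBytesPerMs;  // target emptying rate
    private double queuedBytes = 0.0;      // virtual queue size
    private long lastUpdate;

    SyntheticQueue(double targetBytesPerSecond, long now) {
        drainBytesPerMs = targetBytesPerSecond / 1000.0;
        lastUpdate = now;
    }

    /** Account for a message of the given size; return the virtual queue size. */
    double offer(int bytes, long now) {
        // Drain at the target rate since the last update, never below empty.
        queuedBytes = Math.max(0.0, queuedBytes - (now - lastUpdate) * drainBytesPerMs);
        lastUpdate = now;
        queuedBytes += bytes;
        return queuedBytes;
    }
}
```

The returned virtual size is what feeds the averaged-queue-size calculation above, while the real queueing still happens downstream in the transports.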
      
      The goals are, for an 80% default share, to do most of the
      part. traffic dropping here in RED, not downstream in UDP-Sender,
      while fully utilizing the configured share bandwidth.
      If the router goes into high message delay mode, that means we are not dropping enough in RED.
      Above 80% share this probably doesn't work as well.
      
      There may be more tuning required, in particular to achieve the goal of "protecting" the UDP-Sender
      queue and local client/router traffic by dropping more aggressively in RED.
      
      This patch also improves the overhead estimate for outbound part. tunnel traffic at the OBEP.
      
      Reviewed, tested, acked by zlatinb
      Verified