
Commit 8b9ee4df authored by jrandom's avatar jrandom Committed by zzz

updated to reflect what was implemented

parent 6e8e77b9
<code>$Id: tunnel-alt.html,v 1.6 2005/01/25 00:46:22 jrandom Exp $</code>
<pre>
1) <a href="#tunnel.overview">Tunnel overview</a>
2) <a href="#tunnel.operation">Tunnel operation</a>
...@@ -91,8 +91,8 @@ requested.</p>
tunnel ID to listen for messages with and what tunnel ID they should be forwarded
on as to the next hop, and each hop chooses the tunnel ID which they receive messages
on. Tunnels themselves are short lived (10 minutes at the
moment), and even if subsequent tunnels are built using the same sequence of
peers, each hop's tunnel ID will change.</p>
<h3>2.1) <a name="tunnel.preprocessing">Message preprocessing</a></h3>
...@@ -103,9 +103,9 @@ each I2NP message should be handled by the tunnel endpoint, encoding that
data into the raw tunnel payload:</p>
<ul>
<li>the first 4 bytes of the SHA256 of the remaining preprocessed data concatenated
with the IV, using the IV as will be seen on the tunnel endpoint (for
outbound tunnels) or the IV as was seen on the tunnel gateway (for inbound
tunnels) (see below for IV processing).</li>
<li>0 or more bytes containing random nonzero integers</li>
<li>1 byte containing 0x00</li>
<li>a series of zero or more { instructions, message } pairs</li>
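The payload layout above can be sketched in Java (an illustrative fragment with hypothetical helper names, not the actual router code): the 4-byte verification value is the truncated SHA256 of the preprocessed data concatenated with the IV, and the padding region is random nonzero bytes terminated by a single 0x00.

```java
import java.security.MessageDigest;
import java.security.SecureRandom;
import java.util.Arrays;

// Sketch of the tunnel payload header described above (hypothetical
// helper names; a reading of the spec text, not the I2P implementation).
public class PayloadHeader {
    /** First 4 bytes of SHA256(data || iv), as recovered at the endpoint/gateway. */
    public static byte[] checksum(byte[] data, byte[] iv) {
        try {
            MessageDigest sha = MessageDigest.getInstance("SHA-256");
            sha.update(data);
            sha.update(iv);
            return Arrays.copyOf(sha.digest(), 4);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    /** padLen random nonzero bytes, terminated by the single 0x00 marker. */
    public static byte[] padding(int padLen) {
        SecureRandom rnd = new SecureRandom();
        byte[] pad = new byte[padLen + 1];
        for (int i = 0; i < padLen; i++) {
            byte b;
            do { b = (byte) rnd.nextInt(256); } while (b == 0); // nonzero only
            pad[i] = b;
        }
        pad[padLen] = 0x00; // zero byte separates padding from the pairs
        return pad;
    }
}
```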
...@@ -114,7 +114,7 @@ data into the raw tunnel payload:</p>
<p>The instructions are encoded with a single control byte, followed by any
necessary additional information. The first bit in that control byte determines
how the remainder of the header is interpreted - if it is not set, the message
is either not fragmented or this is the first fragment in the message. If it is
set, this is a follow on fragment.</p>
<p>With the first bit being 0, the instructions are:</p>
...@@ -155,35 +155,34 @@ preprocessed payload must be padded to a multiple of 16 bytes.</p>
<h3>2.2) <a name="tunnel.gateway">Gateway processing</a></h3>
<p>After the preprocessing of messages into a padded payload, the gateway builds
a random 16 byte IV value, iteratively encrypting it and the tunnel message as
necessary, and forwards the tuple {tunnelID, IV, encrypted tunnel message} to the next hop.</p>
<p>How encryption at the gateway is done depends on whether the tunnel is an
inbound or an outbound tunnel. For inbound tunnels, they simply select a random
IV, postprocessing and updating it to generate the IV for the gateway and using
that IV alongside their own layer key to encrypt the preprocessed data. For outbound
tunnels they must iteratively decrypt the (unencrypted) IV and preprocessed
data with the IV and layer keys for all hops in the tunnel. The result of the outbound
tunnel encryption is that when each peer encrypts it, the endpoint will recover
the initial preprocessed data.</p>
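The outbound pre-decryption can be sketched as follows (an illustrative Java fragment, not the router's code; for brevity it uses one fixed IV for every layer and omits the per-hop IV transformation): the gateway decrypts through each hop's layer key, last hop first, so that each participant's encryption peels one layer and the endpoint recovers the preprocessed data.

```java
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

// Illustrative sketch of outbound-gateway layered encryption. Simplified:
// a single fixed IV per layer, no per-hop IV re-keying (see the text).
public class LayeredTunnel {
    static byte[] aes(int mode, byte[] key, byte[] iv, byte[] data) {
        try {
            Cipher c = Cipher.getInstance("AES/CBC/NoPadding");
            c.init(mode, new SecretKeySpec(key, "AES"), new IvParameterSpec(iv));
            return c.doFinal(data);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    /** Gateway: iteratively decrypt with the layer keys, last hop first. */
    public static byte[] gatewayPreprocess(byte[][] layerKeys, byte[] iv, byte[] payload) {
        byte[] out = payload;
        for (int i = layerKeys.length - 1; i >= 0; i--)
            out = aes(Cipher.DECRYPT_MODE, layerKeys[i], iv, out);
        return out;
    }

    /** Each participant simply encrypts with its own layer key. */
    public static byte[] hopEncrypt(byte[] layerKey, byte[] iv, byte[] data) {
        return aes(Cipher.ENCRYPT_MODE, layerKey, iv, data);
    }
}
```

Under these assumptions, applying each hop's encryption in path order to the gateway's output yields the original payload again.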
<h3>2.3) <a name="tunnel.participant">Participant processing</a></h3>
<p>When a peer receives a tunnel message, it checks that the message came from
the same previous hop as before (initialized when the first message comes through
the tunnel). If the previous peer is a different router, or if the message has
already been seen, the message is dropped. The participant then encrypts the
data with AES256/CBC using the participant's layer key and the received IV,
updates the IV by encrypting it with AES256/ECB using the participant's IV key,
then forwards the tuple {nextTunnelId, nextIV, encryptedData} to the next hop.</p>

<p>Duplicate message detection is handled by a decaying Bloom filter on message
IVs. Each router maintains a single Bloom filter to contain all of the IVs for
all of the tunnels it is participating in, modified to drop seen entries after
10-20 minutes (when the tunnels will have expired). The size of the Bloom
filter and the parameters used are sufficient to more than saturate the router's
network connection with a negligible chance of false positive.</p>
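The per-hop cryptographic step above can be sketched directly with the JCA (hypothetical class and method names; this is a reading of the spec text, not the router's implementation): AES-256/CBC over the payload with the layer key and received IV, then AES-256/ECB over the 16-byte IV itself with the separate IV key.

```java
import javax.crypto.Cipher;
import javax.crypto.spec.IvParameterSpec;
import javax.crypto.spec.SecretKeySpec;

// Sketch of participant processing: one CBC pass over the fixed-size
// tunnel message, one ECB pass over the IV (a single 16-byte block).
public class ParticipantHop {
    /** Encrypt the tunnel message with the layer key and received IV. */
    public static byte[] encryptData(byte[] layerKey, byte[] iv, byte[] data) {
        try {
            Cipher c = Cipher.getInstance("AES/CBC/NoPadding");
            c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(layerKey, "AES"),
                   new IvParameterSpec(iv));
            return c.doFinal(data);
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    /** Derive the IV forwarded to the next hop by encrypting the received IV. */
    public static byte[] nextIV(byte[] ivKey, byte[] iv) {
        try {
            Cipher c = Cipher.getInstance("AES/ECB/NoPadding");
            c.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(ivKey, "AES"));
            return c.doFinal(iv); // 16 bytes in, 16 bytes out
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }
}
```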
<h3>2.4) <a name="tunnel.endpoint">Endpoint processing</a></h3>
...@@ -192,8 +191,8 @@ how the endpoint recovers the data encoded by the gateway depends upon whether
the tunnel is an inbound or an outbound tunnel. For outbound tunnels, the
endpoint encrypts the message with its layer key just like any other participant,
exposing the preprocessed data. For inbound tunnels, the endpoint is also the
tunnel creator so they can merely iteratively decrypt the IV and message, using the
layer and IV keys of each step in reverse order.</p>
<p>At this point, the tunnel endpoint has the preprocessed data sent by the gateway,
which it may then parse out into the included I2NP messages and forward them as
...@@ -211,24 +210,28 @@ requested in their delivery instructions.</p>
<li>Padding to the closest exponential size (2^n bytes)</li>
</ul>
<p>These padding strategies can be used on a variety of levels, addressing the
exposure of message size information to different adversaries. After gathering
and reviewing some <a href="http://dev.i2p.net/~jrandom/messageSizes/">statistics</a>
from the 0.4 network, as well as exploring the anonymity tradeoffs, we're starting
with a fixed tunnel message size of 1024 bytes. Within this however, the fragmented
messages themselves are not padded by the tunnel at all (though for end to end
messages, they may be padded as part of the garlic wrapping).</p>
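The size arithmetic behind these strategies is small enough to sketch (helper names are illustrative, and the fixed-size calculation deliberately ignores per-fragment header overhead): the "closest exponential" scheme rounds a payload up to the next power of two, while the fixed scheme counts how many 1024-byte tunnel messages a payload occupies.

```java
// Sketch of the padding-size arithmetic for the strategies listed above.
public class PaddingSize {
    /** Pad up to the next power of two (2^n bytes). */
    public static int nextExponential(int len) {
        int size = 1;
        while (size < len) size <<= 1;
        return size;
    }

    /**
     * Fixed-size scheme: number of 1024-byte tunnel messages a payload
     * occupies, ignoring per-fragment header overhead for simplicity.
     */
    public static int tunnelMessages(int len) {
        return (len + 1023) / 1024;
    }
}
```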
<h3>2.6) <a name="tunnel.fragmentation">Tunnel fragmentation</a></h3>
<p>To prevent adversaries from tagging the messages along the path by adjusting
the message size, all tunnel messages are a fixed 1024 bytes in size. To accommodate
larger I2NP messages as well as to support smaller ones more efficiently, the
gateway splits up the larger I2NP messages into fragments contained within each
tunnel message. The endpoint will attempt to rebuild the I2NP message from the
fragments for a short period of time, but will discard them as necessary.</p>

<p>Routers have a lot of leeway as to how the fragments are arranged, whether
they are stuffed inefficiently as discrete units, batched for a brief period to
fit more payload into the 1024 byte tunnel messages, or opportunistically padded
with other messages that the gateway wanted to send out.</p>
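Gateway-side fragmentation can be sketched as follows (illustrative only: the real delivery instruction headers vary in size per fragment, so a constant per-fragment overhead is assumed here purely for demonstration).

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of splitting one I2NP message into chunks that fit the payload
// space of fixed 1024-byte tunnel messages. OVERHEAD is an assumed
// constant standing in for the variable-size instruction headers.
public class Fragmenter {
    static final int TUNNEL_MESSAGE_SIZE = 1024;
    static final int OVERHEAD = 64; // assumed, not the actual header size

    public static List<byte[]> fragment(byte[] i2npMessage) {
        int chunk = TUNNEL_MESSAGE_SIZE - OVERHEAD;
        List<byte[]> fragments = new ArrayList<>();
        for (int off = 0; off < i2npMessage.length; off += chunk) {
            int len = Math.min(chunk, i2npMessage.length - off);
            byte[] f = new byte[len];
            System.arraycopy(i2npMessage, off, f, 0, len);
            fragments.add(f);
        }
        return fragments;
    }
}
```

The endpoint would perform the inverse, concatenating fragments in order until the full I2NP message is rebuilt or the reassembly timeout expires.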
<h3>2.7) <a name="tunnel.alternatives">Alternatives</a></h3>
<h4>2.7.1) <a name="tunnel.reroute">Adjust tunnel processing midstream</a></h4>
...@@ -268,17 +271,17 @@ along the same peers.</p>
<h4>2.7.3) <a name="tunnel.backchannel">Backchannel communication</a></h4>
<p>At the moment, the IV values used are random values. However, it is
possible for that 16 byte value to be used to send control messages from the
gateway to the endpoint, or on outbound tunnels, from the gateway to any of the
peers. The inbound gateway could encode certain values in the IV once, which
the endpoint would be able to recover (since it knows the endpoint is also the
creator). For outbound tunnels, the creator could deliver certain values to the
participants during the tunnel creation (e.g. "if you see 0x0 as the IV, that
means X", "0x1 means Y", etc). Since the gateway on the outbound tunnel is also
the creator, they can build an IV so that any of the peers will receive the
correct value. The tunnel creator could even give the inbound tunnel gateway
a series of IV values which that gateway could use to communicate with
individual participants exactly one time (though this would have issues regarding
collusion detection).</p>
...@@ -308,17 +311,14 @@ still exists as peers could use the frequency of each size as the carrier (e.g.
two 1024 byte messages followed by an 8192). Smaller messages do incur the
overhead of the headers (IV, tunnel ID, hash portion, etc), but larger fixed size
messages either increase latency (due to batching) or dramatically increase
overhead (due to padding). Fragmentation helps amortize the overhead, at the
cost of potential message loss due to lost fragments.</p>

<p>Timing attacks are also relevant when reviewing the effectiveness of fixed
size messages, though they require a substantial view of network activity
patterns to be effective. Excessive artificial delays in the tunnel will be
detected by the tunnel's creator, due to periodic testing, causing that entire
tunnel to be scrapped and the profiles for peers within it to be adjusted.</p>
<h2>3) <a name="tunnel.building">Tunnel building</a></h2>
...@@ -364,6 +364,10 @@ the hop after A may be B, B may never be before A. Other configuration options
include the ability for just the inbound tunnel gateways and outbound tunnel
endpoints to be fixed, or rotated on an MTBF rate.</p>

<p>In the initial implementation, only random ordering has been implemented,
though more strict ordering will be developed and deployed over time, as well
as controls for the user to select which strategy to use for individual clients.</p>
<h3>3.2) <a name="tunnel.request">Request delivery</a></h3>
<p>As mentioned above, once the tunnel creator knows what peers should go into
...@@ -372,11 +376,11 @@ messages, each containing the necessary information for that peer. For instance
participating tunnels will be given the 4 byte nonce with which to reply,
the 4 byte tunnel ID on which they are to send out the messages,
the 32 byte hash of the next hop's identity, the 32 byte layer key used to
remove a layer from the tunnel, and a 32 byte IV key used to encrypt the IV.
Of course, outbound tunnel endpoints are not
given any "next hop" or "next tunnel ID" information. To allow
replies, the request contains a random session tag and a random session key with
which the peer should garlic encrypt their decision, as well as the tunnel to which
that garlic should be sent. In addition to the above information, various client
specific options may be included, such as what throttling to place on the tunnel,
what padding or batch strategies to use, etc.</p>
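The per-hop request fields listed above can be collected into a simple container (class and field names are hypothetical, and the reply session tag/key sizes are the usual 32-byte I2P values; this is an illustration of the field inventory, not the wire format).

```java
// Illustrative container for the per-hop tunnel creation request fields
// described above. Field names are hypothetical; byte sizes follow the text.
public class HopRequest {
    byte[] replyNonce = new byte[4];       // nonce to reply with
    byte[] tunnelId = new byte[4];         // tunnel ID to send messages out on
    byte[] nextHopHash = new byte[32];     // hash of the next hop's identity
    byte[] layerKey = new byte[32];        // layer key removing one layer
    byte[] ivKey = new byte[32];           // key used to encrypt the IV
    byte[] replySessionTag = new byte[32]; // random tag for the garlic reply
    byte[] replySessionKey = new byte[32]; // random key for the garlic reply
    // Outbound endpoints omit nextHopHash and the next tunnel ID (see text);
    // client-specific options (throttling, padding, batching) would follow.
}
```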
...@@ -391,6 +395,13 @@ router on which that tunnel listens). Upon receipt of the reply at the tunnel
creator, the tunnel is considered valid on that hop (if accepted). Once all
peers have accepted, the tunnel is active.</p>

<p>Peers may reject tunnel creation requests for a variety of reasons, though
a series of four increasingly severe rejections are known: probabilistic rejection
(due to approaching the router's capacity, or in response to a flood of requests),
transient overload, bandwidth overload, and critical failure. When received,
those four are interpreted by the tunnel creator to help adjust their profile of
the router in question.</p>
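The four rejection severities can be sketched as an ordered enumeration (the enum and its numeric weights are illustrative, not the wire encoding; how the creator weighs each one in the peer profile is left open by the text).

```java
// Sketch of the four increasingly severe rejection reasons described above.
// The severity weights are illustrative values for profile adjustment.
public enum TunnelRejection {
    PROBABILISTIC(1),      // approaching capacity, or flood response
    TRANSIENT_OVERLOAD(2), // temporarily overloaded
    BANDWIDTH_OVERLOAD(3), // sustained bandwidth shortage
    CRITICAL_FAILURE(4);   // severe failure on the rejecting router

    private final int severity;

    TunnelRejection(int severity) { this.severity = severity; }

    /** Harsher rejections weigh more against the rejecting peer's profile. */
    public int severity() { return severity; }
}
```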
<h3>3.3) <a name="tunnel.pooling">Pooling</a></h3>
<p>To allow efficient operation, the router maintains a series of tunnel pools,
...