Compare commits

...

48 Commits

Author SHA1 Message Date
jrandom
130399a1e7 0.3.2.3 (coming soon to a hard drive near you) 2004-07-16 21:12:27 +00:00
jrandom
37d5531737 logging, including replacing the scary monster with its true self (we had data queued up, but were unable to get an ACK on our last write) 2004-07-16 20:48:40 +00:00
jrandom
f0b6cbaf89 logging 2004-07-16 19:14:39 +00:00
jrandom
707b173e77 differentiate between an explicit tunnel rejection (due to overload, etc) and an implicit one (the request timed out, the tunnels delivering the request failed, etc)
also, within the profile implementation, only mark explicit rejections as rejections
2004-07-16 00:17:28 +00:00
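
A minimal Java sketch of the distinction this commit draws; the boolean flag mirrors the tunnelRejected(Hash peer, long responseTimeMs, boolean explicit) signature added to ProfileManager in this changeset (see the diff below), while the penalty value itself is invented for illustration.

class TunnelRejectionSketch {
    private long lastRejected;
    private double reliabilityPenalty;

    /**
     * @param explicit true if the peer sent back an actual rejection (overload, etc);
     *                 false if the request merely timed out or its delivery tunnels failed
     */
    void tunnelRejected(long responseTimeMs, boolean explicit) {
        lastRejected = System.currentTimeMillis();
        if (explicit) {
            reliabilityPenalty += 10;   // invented value: only explicit rejections count
        }
        // implicit failures may be our own tunnels' fault, so no penalty here
    }
}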
jrandom
4381bb5026 don't rip the peer's head off after multiple tunnel rejections - penalize them *once* for the instance (not once *per* instance) 2004-07-16 00:15:34 +00:00
duck
5850ad1217 typo fix, thanks Connelly
(duck)
2004-07-15 16:02:53 +00:00
jrandom
806d598a04 _context/getContext()
(missed one)
2004-07-15 05:23:18 +00:00
jrandom
e737e5c950 * work around the disagreement between different versions of Sun's compiler and JVM:
Some of them think that it's OK for an inner class of a subclass to access protected data of the
outer class's parent when the parent is in another package.
Others do not.
Kaffe doesn't care (but that's because Kaffe doesn't do much for verification ;)
The JLS is apparently confusing, but it doesn't matter whether it's a code or javac bug; we've got to change the code.
The simplest change would be to just make JobImpl._context public, but I loathe public data, so we make it private and add an accessor
(and change dozens of files)
whee
2004-07-15 05:12:37 +00:00
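
The fix this commit describes, reduced to a sketch: the JobImpl and RouterContext shapes mirror the diff further down, while SomeJob is a hypothetical subclass added for illustration.

class RouterContext { }                 // stub standing in for the real class

abstract class JobImpl {
    private RouterContext _context;     // was: protected RouterContext _context;
    protected JobImpl(RouterContext context) { _context = context; }
    public final RouterContext getContext() { return _context; }
}

// a subclass's inner class (possibly in another package) now compiles
// and verifies everywhere, since it goes through the accessor:
class SomeJob extends JobImpl {
    SomeJob(RouterContext ctx) { super(ctx); }
    private class Helper {
        RouterContext ctx() { return getContext(); }  // was: return _context;
    }
}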
jrandom
bbcde2f52b 0.3.2.2 (a lil installation testing and then i'll push) 2004-07-15 01:08:54 +00:00
jrandom
f6ef77429c some boundary cases for the queue pumper's wait time 2004-07-15 01:04:13 +00:00
jrandom
44491c1514 added nickster2.i2p and irc.nickster.i2p (pointing at iip) 2004-07-14 22:12:43 +00:00
jrandom
71a6cf4ee6 * adjust the algorithm to deal with IO bound requests:
if more tokens become available while the first pending request is still blocked on
  read/write (aka after allocation and before next .waitForAllocation()), give the tokens
  to the next request
* refactor the satisfy{In,Out}boundRequests methods into smaller logical units
2004-07-14 21:07:57 +00:00
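
A rough sketch of the allocation loop described above, assuming a simple FIFO of pending requests; all names here are illustrative stand-ins for the refactored satisfy{In,Out}boundRequests logic.

import java.util.LinkedList;
import java.util.List;

class AllocationSketch {
    static class Request {
        int tokensNeeded;
        boolean blockedOnIO;  // allocated earlier, not yet back in waitForAllocation()
    }

    private int availableTokens;
    private final List<Request> pending = new LinkedList<Request>();

    void satisfyRequests() {
        for (Request req : pending) {
            if (availableTokens <= 0) break;
            if (req.blockedOnIO) continue;  // skip the IO-bound head rather than
                                            // letting new tokens sit idle behind it
            int grant = Math.min(availableTokens, req.tokensNeeded);
            req.tokensNeeded -= grant;
            availableTokens -= grant;
        }
    }
}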
jrandom
744ce6966f add a new throttle (and stats) based on send processing time
high send processing time and low job lag mean the latency is coming from outside the jobQueue - aka bandwidth throttling
2004-07-14 20:01:40 +00:00
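
The diagnostic rule in this commit, as a one-line predicate (thresholds invented for illustration):

class SendProcessingSketch {
    static boolean bandwidthBound(long avgSendProcessingMs, long avgJobLagMs) {
        // slow sends plus an idle jobQueue => the delay is in the bandwidth limiter
        return avgSendProcessingMs > 1000 && avgJobLagMs < 100;
    }
}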
jrandom
d25cec02c2 clean up sorting for peer reliability
increase penalties for tunnel rejection, and keep track of the 10 minute rate as well as the 1 and 60 minute rates
2004-07-14 19:56:38 +00:00
jrandom
f02bf37fd3 stats and stats and stats
track the total allocated bytes correctly (even if we're throttled)
2004-07-14 19:54:04 +00:00
jrandom
304b9d41d7 on kaffe i've periodically seen some hangs in the jobqueue, so let's try being a bit more conservative with the synchronization, and include some debugging output in the router console to help track it down (if this doesn't fix it) 2004-07-13 20:19:28 +00:00
jrandom
2d6af89f60 safer operation (for use in the sim where some things aren't always available) 2004-07-13 20:17:15 +00:00
jrandom
d6425973e2 include an objectId flag for use in the logging 2004-07-13 20:16:05 +00:00
jrandom
d5ad56c4de use smaller writes to make it look more normal 2004-07-13 20:14:18 +00:00
jrandom
4f1f2cc99e since people are using small buckets, the 10s replenish frequency is a really really bad idea (so default to 1s) 2004-07-13 05:49:16 +00:00
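
Why the 10s replenish hurt: with a bucket smaller than rate x frequency, most of each coarse refill overflows and is discarded. A worked sketch (numbers invented):

class RefillSketch {
    public static void main(String[] args) {
        int ratePerSecond = 6 * 1024;   // 6KB/s allowed
        int bucketSize = 8 * 1024;      // a small 8KB bucket

        // 10s frequency: 60KB offered at once, only 8KB fits => ~0.8KB/s effective
        int keptCoarse = Math.min(bucketSize, ratePerSecond * 10);
        // 1s frequency: 6KB offered each second, all of it fits => the full 6KB/s
        int keptFine = Math.min(bucketSize, ratePerSecond * 1);

        System.out.println("kept per cycle: 10s=" + keptCoarse + " 1s=" + keptFine);
    }
}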
jrandom
da439dd127 sanity checking for a kooky race condition 2004-07-12 21:33:32 +00:00
jrandom
1375d01bdf new bandwidth allocation policy and usage to include support for partial allocations (and in turn, partial write(...)) while still keeping the FIFO ordering
this will give a much smoother traffic pattern, as instead of waiting 6 seconds to write a 32KB message under a 6KB rate, it'll write 6KB for each of the first 5 seconds, and 2KB the next
this also allows people to have small buckets (but again, bucket sizes smaller than the rate just don't make sense)
2004-07-12 21:09:05 +00:00
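
The worked example from the commit message, as runnable arithmetic: a 32KB message under a 6KB/s rate now goes out as partial writes instead of blocking ~6 seconds for one full allocation.

class PartialWriteSketch {
    public static void main(String[] args) {
        int remaining = 32 * 1024;       // 32KB message
        int ratePerSecond = 6 * 1024;    // 6KB/s allocation
        int second = 0;
        while (remaining > 0) {
            int grant = Math.min(ratePerSecond, remaining);
            remaining -= grant;
            System.out.println("second " + (++second) + ": wrote " + (grant / 1024) + "KB");
        }
        // prints 6KB for each of the first 5 seconds, then 2KB the 6th
    }
}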
jrandom
7b9db07f13 target=1.3 and source=1.3, not target=1.1 and source=1.3
(this is what caused the runtime errors on sun jvms but not on kaffe)
((aka i slacked and didn't test sufficiently.  off with my head))
this now builds and runs fine in sun 1.3-1.5 jvms, as well as kaffe
2004-07-12 16:39:22 +00:00
ugha
f2f26136c1 Minor cleanups -- removed commented-out debugging code, wrote better
comments.
(ugha)
2004-07-12 05:12:22 +00:00
jrandom
0f60ac5acf 0.3.2.1 (backwards compatible blah blah blah) 2004-07-11 18:57:01 +00:00
jrandom
c28f19fe8a less painful and/or redundant penalties for failures 2004-07-11 18:50:23 +00:00
mpc
09a6dbc755 FreeBSD port 2004-07-11 13:22:37 +00:00
jrandom
3bc0e0fc8a added source and target declarations for the javac commands so we can build with the 1.5^W5.0 JDK
(also added deprecation, since, well, we can :)
2004-07-11 04:16:59 +00:00
jrandom
eb0e187a54 throttle tunnel participation based on whether we've had to throttle our network connection > some number of times in the last 10-20 minutes (rather than a simple "are we throttled *right now*?") 2004-07-10 22:28:34 +00:00
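
A sketch of the shift from an instantaneous check to a windowed one; the event log and threshold here are illustrative stand-ins for the router's rate stats.

import java.util.ArrayList;
import java.util.List;

class ParticipationThrottleSketch {
    private static final long WINDOW_MS = 15 * 60 * 1000;  // ~10-20 minutes
    private static final int MAX_RECENT_THROTTLES = 10;    // invented threshold
    private final List<Long> throttleEvents = new ArrayList<Long>();

    void networkThrottled() { throttleEvents.add(System.currentTimeMillis()); }

    boolean acceptTunnelRequest() {
        long cutoff = System.currentTimeMillis() - WINDOW_MS;
        int recent = 0;
        for (long when : throttleEvents)
            if (when >= cutoff) recent++;
        return recent < MAX_RECENT_THROTTLES;  // not "are we throttled *right now*?"
    }
}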
jrandom
a788d30f34 added support for new 'clientoptions' command which alters the properties passed when creating subsequent I2CP connections
e.g.: -e "clientoptions tunnels.depthInbound=0" -e "httpclient 6666"
this updates so many files because they all need a reference to an I2PTunnel object on construction, so they can query tunnel.getClientOptions() instead of System.getProperties
2004-07-10 16:59:49 +00:00
jrandom
591dfc961e give the reliability more positive influence so it doesn't go negative so easily
update the peerProfile's CLI to make the resulting stats easier to read
2004-07-10 04:15:51 +00:00
jrandom
809e12b034 logging 2004-07-10 04:13:42 +00:00
jrandom
1669d174e1 use mihi's template engine to set a random timestamper password so people don't need to think about that stuff
don't use the dyndns anymore for seeding (use dev.i2p.net/i2pdb)
2004-07-10 02:36:27 +00:00
jrandom
3cfd28de43 add a new unit test for repeated fast reconnections 2004-07-10 01:58:05 +00:00
jrandom
4888207eca if a client reconnects, we always want to get a new leaseSet ASAP (even if the pool hadn't been marked as stopped yet)
logging
2004-07-10 01:46:57 +00:00
jrandom
294cb96107 if the job's startAfter is changed, tell the jobQueue to go through the timed jobs again in case the new time changes the scheduling 2004-07-10 01:44:27 +00:00
jrandom
b648fa2b70 send the stats page out in chunks (more mem efficient, blah blah blah) 2004-07-10 01:39:54 +00:00
jrandom
ab99122211 render status HTML in pieces (continued) 2004-07-09 05:33:19 +00:00
jrandom
dd014fee88 send the router console out bit by bit rather than building it all up and sending it (thereby reducing its memory footprint dramatically) 2004-07-09 05:29:02 +00:00
jrandom
c81f864de3 reduce the throttle threshold from 5s lag to 2s lag 2004-07-09 05:22:29 +00:00
jrandom
90fe7dceec include the expiration in the error message if it's dropped 2004-07-09 05:20:26 +00:00
jrandom
3a568096f2 new throttling code which rejects tunnel create requests, networkDb lookup requests, and even tells the I2NP components to stop reading from the network (it doesn't affect writing to the network)
the simple RouterThrottleImpl bases its decision entirely on how congested the jobQueue is - if there are jobs that have been waiting 5+ seconds, reject everything and stop reading from the network
(each i2npMessageReader randomly waits .5-1s when throttled before rechecking it)
minor adjustments in the stats published - removing a few useless ones and adding the router.throttleNetworkCause (which is the average ms lag in the jobQueue when an I2NP reader is throttled)
2004-07-09 03:56:22 +00:00
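
A minimal sketch of the RouterThrottleImpl decision described above; the 5-second threshold and the getMaxLag() stat come from this changeset (see the JobQueue diff below), while the class shape is illustrative.

class RouterThrottleSketch {
    private static final long MAX_LAG_MS = 5 * 1000;  // jobs waiting 5+ seconds => congested
    private final JobQueueStats jobQueue;

    RouterThrottleSketch(JobQueueStats jobQueue) { this.jobQueue = jobQueue; }

    // gates I2NP reads; each reader sleeps 0.5-1s before rechecking when refused
    boolean acceptNetworkMessage() { return jobQueue.getMaxLag() < MAX_LAG_MS; }
    // the same congestion signal rejects tunnel create and netDb lookup requests
    boolean acceptTunnelRequest()  { return jobQueue.getMaxLag() < MAX_LAG_MS; }

    interface JobQueueStats { long getMaxLag(); }
}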
jrandom
94e694fc61 reduce the job pipeline to send a message by fetching the bids and adding the message to the connection queue synchronously
these had been broken out into separate jobs before to reduce thread and lock contention, but that isn't as serious an issue anymore (in these cases) and the non-contention-related delays of these mini-jobs are trivial
2004-07-09 03:48:12 +00:00
jrandom
bdfa6e4af5 don't penalize send failures (beyond what we already do for comm errors)
keep a rate for tunnel rejection, rather than a simple 'last' occurrence, and penalize the reliability with it
2004-07-09 03:45:11 +00:00
jrandom
8e64ffb4f6 keep the relay message size rate data for the 10 minute period (so we can throttle on logical periods) 2004-07-09 03:43:07 +00:00
jrandom
6c162643cb expose stat for throttling (# tunnels we're currently participating in) 2004-07-09 03:41:27 +00:00
jrandom
ff7742bca3 expose some stats useful for throttling (# ready & waiting jobs and the max lag of those jobs) 2004-07-09 03:39:38 +00:00
jrandom
9685884279 deal with null peer (used by the SubmitMessageHistoryJob to bw limit the history)
current 0.3.2 throws an NPE which causes the submitMessageHistory functionality to fail, which isn't really a loss since i send that data to /dev/null at the moment ;)
(but you'll want to set router.keepHistory=false and router.submitHistory=false)
this'll go into the next rev, whenever it comes out
(thanks ugha!)
2004-07-07 22:23:25 +00:00
135 changed files with 2077 additions and 1299 deletions

View File

@@ -9,12 +9,12 @@
<target name="compile">
<mkdir dir="./build" />
<mkdir dir="./build/obj" />
<javac srcdir="./src" debug="true" destdir="./build/obj" includes="**/*.java" excludes="net/i2p/heartbeat/gui/**" classpath="../../../core/java/build/i2p.jar" />
<javac srcdir="./src" debug="true" deprecation="on" source="1.3" target="1.3" destdir="./build/obj" includes="**/*.java" excludes="net/i2p/heartbeat/gui/**" classpath="../../../core/java/build/i2p.jar" />
</target>
<target name="compileGUI">
<mkdir dir="./build" />
<mkdir dir="./build/obj" />
<javac debug="true" destdir="./build/obj">
<javac debug="true" source="1.3" target="1.3" deprecation="on" destdir="./build/obj">
<src path="src/" />
<classpath path="../../../core/java/build/i2p.jar" />
<classpath path="../../jfreechart/jfreechart-0.9.17/lib/jcommon-0.9.2.jar" />

View File

@@ -11,7 +11,7 @@
<mkdir dir="./build/obj" />
<javac
srcdir="./src"
debug="true"
debug="true" deprecation="on" source="1.3" target="1.3"
destdir="./build/obj"
classpath="../../../core/java/build/i2p.jar:../../ministreaming/java/build/mstreaming.jar" />
</target>

View File

@@ -11,7 +11,7 @@
<mkdir dir="./build/obj" />
<javac
srcdir="./src"
debug="true"
debug="true" deprecation="on" source="1.3" target="1.3"
destdir="./build/obj"
classpath="../../../core/java/build/i2p.jar:../../ministreaming/java/build/mstreaming.jar" />
</target>

View File

@@ -46,6 +46,7 @@ import java.util.HashSet;
import java.util.Iterator;
import java.util.LinkedList;
import java.util.List;
import java.util.Properties;
import java.util.Set;
import java.util.StringTokenizer;
@@ -69,6 +70,7 @@ public class I2PTunnel implements Logging, EventDispatcher {
private I2PAppContext _context;
private static long __tunnelId = 0;
private long _tunnelId;
private Properties _clientOptions;
public static final int PACKET_DELAY = 100;
@@ -104,6 +106,9 @@ public class I2PTunnel implements Logging, EventDispatcher {
_tunnelId = ++__tunnelId;
_log = _context.logManager().getLog(I2PTunnel.class);
_event = new EventDispatcherImpl();
_clientOptions = new Properties();
_clientOptions.putAll(System.getProperties());
addConnectionEventListener(lsnr);
boolean gui = true;
boolean checkRunByE = true;
@@ -167,6 +172,8 @@ public class I2PTunnel implements Logging, EventDispatcher {
}
}
public Properties getClientOptions() { return _clientOptions; }
private void addtask(I2PTunnelTask tsk) {
tsk.setTunnel(this);
if (tsk.isOpen()) {
@@ -197,6 +204,8 @@ public class I2PTunnel implements Logging, EventDispatcher {
if ("help".equals(cmdname)) {
runHelp(l);
} else if ("clientoptions".equals(cmdname)) {
runClientOptions(args, l);
} else if ("server".equals(cmdname)) {
runServer(args, l);
} else if ("textserver".equals(cmdname)) {
@@ -262,6 +271,29 @@ public class I2PTunnel implements Logging, EventDispatcher {
l.log("list");
l.log("run <commandfile>");
}
/**
* Configure the extra I2CP options to use in any subsequent I2CP sessions.
* Usage: "clientoptions[ key=value]*" .
*
* Sets the event "clientoptions_onResult" = "ok" after completion.
*
* @param args each args[i] is a key=value pair to add to the options
* @param l logger to receive events and output
*/
public void runClientOptions(String args[], Logging l) {
_clientOptions.clear();
if (args != null) {
for (int i = 0; i < args.length; i++) {
int index = args[i].indexOf('=');
if (index <= 0) continue;
String key = args[i].substring(0, index);
String val = args[i].substring(index+1);
_clientOptions.setProperty(key, val);
}
}
notifyEvent("clientoptions_onResult", "ok");
}
/**
* Run the server pointing at the host and port specified using the private i2p
@@ -304,7 +336,7 @@ public class I2PTunnel implements Logging, EventDispatcher {
notifyEvent("serverTaskId", new Integer(-1));
return;
}
I2PTunnelServer serv = new I2PTunnelServer(serverHost, portNum, privKeyFile, args[2], l, (EventDispatcher) this);
I2PTunnelServer serv = new I2PTunnelServer(serverHost, portNum, privKeyFile, args[2], l, (EventDispatcher) this, this);
serv.setReadTimeout(readTimeout);
serv.startRunning();
addtask(serv);
@@ -350,7 +382,7 @@ public class I2PTunnel implements Logging, EventDispatcher {
return;
}
I2PTunnelServer serv = new I2PTunnelServer(serverHost, portNum, args[2], l, (EventDispatcher) this);
I2PTunnelServer serv = new I2PTunnelServer(serverHost, portNum, args[2], l, (EventDispatcher) this, this);
serv.setReadTimeout(readTimeout);
serv.startRunning();
addtask(serv);
@@ -386,7 +418,7 @@ public class I2PTunnel implements Logging, EventDispatcher {
return;
}
I2PTunnelTask task;
task = new I2PTunnelClient(port, args[1], l, ownDest, (EventDispatcher) this);
task = new I2PTunnelClient(port, args[1], l, ownDest, (EventDispatcher) this, this);
addtask(task);
notifyEvent("clientTaskId", new Integer(task.getId()));
} else {
@@ -423,7 +455,7 @@ public class I2PTunnel implements Logging, EventDispatcher {
proxy = args[1];
}
I2PTunnelTask task;
task = new I2PTunnelHTTPClient(port, l, ownDest, proxy, (EventDispatcher) this);
task = new I2PTunnelHTTPClient(port, l, ownDest, proxy, (EventDispatcher) this, this);
addtask(task);
notifyEvent("httpclientTaskId", new Integer(task.getId()));
} else {
@@ -460,7 +492,7 @@ public class I2PTunnel implements Logging, EventDispatcher {
}
I2PTunnelTask task;
task = new I2PSOCKSTunnel(port, l, ownDest, (EventDispatcher) this);
task = new I2PSOCKSTunnel(port, l, ownDest, (EventDispatcher) this, this);
addtask(task);
notifyEvent("sockstunnelTaskId", new Integer(task.getId()));
} else {
@@ -779,7 +811,7 @@ public class I2PTunnel implements Logging, EventDispatcher {
if (allargs.length() != 0) {
I2PTunnelTask task;
// pings always use the main destination
task = new I2Ping(allargs, l, false, (EventDispatcher) this);
task = new I2Ping(allargs, l, false, (EventDispatcher) this, this);
addtask(task);
notifyEvent("pingTaskId", new Integer(task.getId()));
} else {

View File

@@ -19,8 +19,8 @@ public class I2PTunnelClient extends I2PTunnelClientBase {
private static final long DEFAULT_READ_TIMEOUT = 5*60*1000; // -1
protected long readTimeout = DEFAULT_READ_TIMEOUT;
public I2PTunnelClient(int localPort, String destination, Logging l, boolean ownDest, EventDispatcher notifyThis) {
super(localPort, ownDest, l, notifyThis, "SynSender");
public I2PTunnelClient(int localPort, String destination, Logging l, boolean ownDest, EventDispatcher notifyThis, I2PTunnel tunnel) {
super(localPort, ownDest, l, notifyThis, "SynSender", tunnel);
if (waitEventValue("openBaseClientResult").equals("error")) {
notifyEvent("openClientResult", "error");

View File

@@ -60,8 +60,8 @@ public abstract class I2PTunnelClientBase extends I2PTunnelTask implements Runna
// I2PTunnelClientBase(localPort, ownDest, l, (EventDispatcher)null);
//}
public I2PTunnelClientBase(int localPort, boolean ownDest, Logging l, EventDispatcher notifyThis, String handlerName) {
super(localPort + " (uninitialized)", notifyThis);
public I2PTunnelClientBase(int localPort, boolean ownDest, Logging l, EventDispatcher notifyThis, String handlerName, I2PTunnel tunnel) {
super(localPort + " (uninitialized)", notifyThis, tunnel);
_clientId = ++__clientId;
this.localPort = localPort;
this.l = l;
@@ -103,16 +103,25 @@ public abstract class I2PTunnelClientBase extends I2PTunnelTask implements Runna
private static I2PSocketManager socketManager;
protected static synchronized I2PSocketManager getSocketManager() {
protected synchronized I2PSocketManager getSocketManager() {
return getSocketManager(getTunnel());
}
protected static synchronized I2PSocketManager getSocketManager(I2PTunnel tunnel) {
if (socketManager == null) {
socketManager = buildSocketManager();
socketManager = buildSocketManager(tunnel);
}
return socketManager;
}
protected static I2PSocketManager buildSocketManager() {
protected I2PSocketManager buildSocketManager() {
return buildSocketManager(getTunnel());
}
protected static I2PSocketManager buildSocketManager(I2PTunnel tunnel) {
Properties props = new Properties();
props.putAll(System.getProperties());
if (tunnel == null)
props.putAll(System.getProperties());
else
props.putAll(tunnel.getClientOptions());
I2PSocketManager sockManager = I2PSocketManagerFactory.createManager(I2PTunnel.host, Integer.parseInt(I2PTunnel.port), props);
sockManager.setName("Client");
return sockManager;

View File

@@ -59,7 +59,7 @@ public class I2PTunnelHTTPClient extends I2PTunnelClientBase implements Runnable
"Cache-control: no-cache\r\n"+
"\r\n"+
"<html><body><H1>I2P ERROR: DESTINATION NOT FOUND</H1>"+
"That I2P Desitination was not found. Perhaps you pasted in the "+
"That I2P Destination was not found. Perhaps you pasted in the "+
"wrong BASE64 I2P Destination or the link you are following is "+
"bad. The host (or the WWW proxy, if you're using one) could also "+
"be temporarily offline. You may want to <b>retry</b>. "+
@@ -71,7 +71,7 @@ public class I2PTunnelHTTPClient extends I2PTunnelClientBase implements Runnable
"Content-Type: text/html; charset=iso-8859-1\r\n"+
"Cache-control: no-cache\r\n\r\n"+
"<html><body><H1>I2P ERROR: TIMEOUT</H1>"+
"That Desitination was reachable, but timed out getting a "+
"That Destination was reachable, but timed out getting a "+
"response. This is likely a temporary error, so you should simply "+
"try to refresh, though if the problem persists, the remote "+
"destination may have issues. Could not get a response from "+
@@ -81,8 +81,8 @@ public class I2PTunnelHTTPClient extends I2PTunnelClientBase implements Runnable
/** used to assign unique IDs to the threads / clients. no logic or functionality */
private static volatile long __clientId = 0;
public I2PTunnelHTTPClient(int localPort, Logging l, boolean ownDest, String wwwProxy, EventDispatcher notifyThis) {
super(localPort, ownDest, l, notifyThis, "HTTPHandler " + (++__clientId));
public I2PTunnelHTTPClient(int localPort, Logging l, boolean ownDest, String wwwProxy, EventDispatcher notifyThis, I2PTunnel tunnel) {
super(localPort, ownDest, l, notifyThis, "HTTPHandler " + (++__clientId), tunnel);
if (waitEventValue("openBaseClientResult").equals("error")) {
notifyEvent("openHTTPClientResult", "error");
@@ -127,7 +127,7 @@ public class I2PTunnelHTTPClient extends I2PTunnelClientBase implements Runnable
if (pos == -1) break;
method = line.substring(0, pos);
String request = line.substring(pos + 1);
if (request.startsWith("/") && System.getProperty("i2ptunnel.noproxy") != null) {
if (request.startsWith("/") && getTunnel().getClientOptions().getProperty("i2ptunnel.noproxy") != null) {
request = "http://i2p" + request;
}
pos = request.indexOf("//");

View File

@@ -45,15 +45,15 @@ public class I2PTunnelServer extends I2PTunnelTask implements Runnable {
/** default timeout to 3 minutes - override if desired */
private long readTimeout = DEFAULT_READ_TIMEOUT;
public I2PTunnelServer(InetAddress host, int port, String privData, Logging l, EventDispatcher notifyThis) {
super(host + ":" + port + " <- " + privData, notifyThis);
public I2PTunnelServer(InetAddress host, int port, String privData, Logging l, EventDispatcher notifyThis, I2PTunnel tunnel) {
super(host + ":" + port + " <- " + privData, notifyThis, tunnel);
ByteArrayInputStream bais = new ByteArrayInputStream(Base64.decode(privData));
init(host, port, bais, privData, l);
}
public I2PTunnelServer(InetAddress host, int port, File privkey, String privkeyname, Logging l,
EventDispatcher notifyThis) {
super(host + ":" + port + " <- " + privkeyname, notifyThis);
EventDispatcher notifyThis, I2PTunnel tunnel) {
super(host + ":" + port + " <- " + privkeyname, notifyThis, tunnel);
try {
init(host, port, new FileInputStream(privkey), privkeyname, l);
} catch (IOException ioe) {
@@ -62,8 +62,8 @@ public class I2PTunnelServer extends I2PTunnelTask implements Runnable {
}
}
public I2PTunnelServer(InetAddress host, int port, InputStream privData, String privkeyname, Logging l, EventDispatcher notifyThis) {
super(host + ":" + port + " <- " + privkeyname, notifyThis);
public I2PTunnelServer(InetAddress host, int port, InputStream privData, String privkeyname, Logging l, EventDispatcher notifyThis, I2PTunnel tunnel) {
super(host + ":" + port + " <- " + privkeyname, notifyThis, tunnel);
init(host, port, privData, privkeyname, l);
}
@@ -73,7 +73,7 @@ public class I2PTunnelServer extends I2PTunnelTask implements Runnable {
this.remotePort = port;
I2PClient client = I2PClientFactory.createClient();
Properties props = new Properties();
props.putAll(System.getProperties());
props.putAll(getTunnel().getClientOptions());
synchronized (slock) {
sockMgr = I2PSocketManagerFactory.createManager(privData, I2PTunnel.host, Integer.parseInt(I2PTunnel.port),
props);

View File

@@ -26,16 +26,19 @@ public abstract class I2PTunnelTask implements EventDispatcher {
// I2PTunnelTask(name, (EventDispatcher)null);
//}
protected I2PTunnelTask(String name, EventDispatcher notifyThis) {
protected I2PTunnelTask(String name, EventDispatcher notifyThis, I2PTunnel tunnel) {
attachEventDispatcher(notifyThis);
this.name = name;
this.id = -1;
this.tunnel = tunnel;
}
/** for apps that use multiple I2PTunnel instances */
public void setTunnel(I2PTunnel pTunnel) {
tunnel = pTunnel;
}
public I2PTunnel getTunnel() { return tunnel; }
public int getId() {
return this.id;

View File

@@ -47,15 +47,15 @@ public class I2Ping extends I2PTunnelTask implements Runnable {
// I2Ping(cmd, l, (EventDispatcher)null);
//}
public I2Ping(String cmd, Logging l, boolean ownDest, EventDispatcher notifyThis) {
super("I2Ping [" + cmd + "]", notifyThis);
public I2Ping(String cmd, Logging l, boolean ownDest, EventDispatcher notifyThis, I2PTunnel tunnel) {
super("I2Ping [" + cmd + "]", notifyThis, tunnel);
this.l = l;
command = cmd;
synchronized (slock) {
if (ownDest) {
sockMgr = I2PTunnelClient.buildSocketManager();
sockMgr = I2PTunnelClient.buildSocketManager(tunnel);
} else {
sockMgr = I2PTunnelClient.getSocketManager();
sockMgr = I2PTunnelClient.getSocketManager(tunnel);
}
}
Thread t = new I2PThread(this);

View File

@@ -10,6 +10,7 @@ import java.net.Socket;
import net.i2p.client.streaming.I2PSocket;
import net.i2p.data.Destination;
import net.i2p.i2ptunnel.I2PTunnel;
import net.i2p.i2ptunnel.I2PTunnelClientBase;
import net.i2p.i2ptunnel.I2PTunnelRunner;
import net.i2p.i2ptunnel.Logging;
@@ -26,8 +27,8 @@ public class I2PSOCKSTunnel extends I2PTunnelClientBase {
// I2PSOCKSTunnel(localPort, l, ownDest, (EventDispatcher)null);
//}
public I2PSOCKSTunnel(int localPort, Logging l, boolean ownDest, EventDispatcher notifyThis) {
super(localPort, ownDest, l, notifyThis, "SOCKSHandler");
public I2PSOCKSTunnel(int localPort, Logging l, boolean ownDest, EventDispatcher notifyThis, I2PTunnel tunnel) {
super(localPort, ownDest, l, notifyThis, "SOCKSHandler", tunnel);
if (waitEventValue("openBaseClientResult").equals("error")) {
notifyEvent("openSOCKSTunnelResult", "error");

View File

@@ -8,7 +8,7 @@
<target name="compile">
<mkdir dir="./build" />
<mkdir dir="./build/obj" />
<javac srcdir="./src" debug="true" destdir="./build/obj" classpath="../../../core/java/build/i2p.jar" />
<javac srcdir="./src" debug="true" deprecation="on" source="1.3" target="1.3" destdir="./build/obj" classpath="../../../core/java/build/i2p.jar" />
</target>
<target name="jar" depends="compile">
<jar destfile="./build/mstreaming.jar" basedir="./build/obj" includes="**/*.class" />

View File

@@ -454,7 +454,8 @@ class I2PSocketImpl implements I2PSocket {
_log.debug(getPrefix() + "Message size is: " + data.length);
boolean sent = sendBlock(data);
if (!sent) {
_log.error(getPrefix() + "Error sending message to peer. Killing socket runner");
if (_log.shouldLog(Log.WARN))
_log.warn(getPrefix() + "Error sending message to peer. Killing socket runner");
errorOccurred();
return false;
}
@@ -475,9 +476,10 @@ class I2PSocketImpl implements I2PSocket {
packetsHandled++;
}
if ((bc.getCurrentSize() > 0) && (packetsHandled > 1)) {
_log.error(getPrefix() + "A SCARY MONSTER HAS EATEN SOME DATA! " + "(input stream: "
+ in.hashCode() + "; "
+ "queue size: " + bc.getCurrentSize() + ")");
if (_log.shouldLog(Log.WARN))
_log.warn(getPrefix() + "We lost some data queued up due to a network send error (input stream: "
+ in.hashCode() + "; "
+ "queue size: " + bc.getCurrentSize() + ")");
}
synchronized (flagLock) {
closed2 = true;
@@ -492,7 +494,8 @@ class I2PSocketImpl implements I2PSocket {
byte[] packet = I2PSocketManager.makePacket(getMask(0x02), remoteID, new byte[0]);
boolean sent = manager.getSession().sendMessage(remote, packet);
if (!sent) {
_log.error(getPrefix() + "Error sending close packet to peer");
if (_log.shouldLog(Log.WARN))
_log.warn(getPrefix() + "Error sending close packet to peer");
errorOccurred();
}
}

View File

@@ -264,7 +264,8 @@ public class I2PSocketManager implements I2PSessionListener {
s.queueData(payload);
return;
} else {
_log.error(getName() + ": Null socket with data available");
if (_log.shouldLog(Log.WARN))
_log.warn(getName() + ": Null socket with data available");
throw new IllegalStateException("Null socket with data available");
}
}

View File

@@ -9,13 +9,13 @@
<target name="compile">
<mkdir dir="./build" />
<mkdir dir="./build/obj" />
<javac srcdir="./src" debug="true" destdir="./build/obj" includes="net/**/*.java" excludes="net/i2p/netmonitor/gui/**" classpath="../../../core/java/build/i2p.jar" />
<javac srcdir="./src" debug="true" source="1.3" target="1.3" deprecation="on" destdir="./build/obj" includes="net/**/*.java" excludes="net/i2p/netmonitor/gui/**" classpath="../../../core/java/build/i2p.jar" />
</target>
<target name="compileGUI" depends="builddep">
<mkdir dir="./build" />
<mkdir dir="./build/obj" />
<javac debug="true" destdir="./build/obj">
<javac debug="true" source="1.3" target="1.3" deprecation="on" destdir="./build/obj">
<src path="src/" />
<classpath path="../../../core/java/build/i2p.jar" />
<classpath path="../../jfreechart/jfreechart-0.9.17/lib/jcommon-0.9.2.jar" />

View File

@@ -8,7 +8,7 @@
<target name="compile">
<mkdir dir="./build" />
<mkdir dir="./build/obj" />
<javac srcdir="./src" debug="true" destdir="./build/obj" classpath="../../../core/java/build/i2p.jar:lib/javax.servlet.jar" />
<javac srcdir="./src" debug="true" source="1.3" target="1.3" deprecation="on" destdir="./build/obj" classpath="../../../core/java/build/i2p.jar:lib/javax.servlet.jar" />
</target>
<target name="jar" depends="compile">
<war destfile="./build/phttprelay.war" webxml="web.xml">

View File

@@ -0,0 +1,63 @@
#
# This Makefile is compatible with GNU Make (gmake) and should work on FreeBSD
#
#
# Your operating system
#
OS = FREEBSD
#
# Directories
#
INCDIR = inc
LIBDIR = lib
OBJDIR = obj
SRCDIR = src
#
# Programs
#
AR = ar
CC = gcc
#
# Flags
#
CFLAGS = -g -O2 -pipe -std=c99 -Wall
CFLAGS += -DOS=$(OS)
CFLAGS += -I$(INCDIR)
#
# Object files
#
OBJS = $(OBJDIR)/sam.o
#
# Build rules
#
all: depend libsam

depend:
	$(CC) $(CFLAGS) -MM $(SRCDIR)/*.c > .depend

$(OBJDIR)/%.o: $(SRCDIR)/%.c
	$(CC) $(CFLAGS) -o $@ -c $<

libsam: $(OBJS)
	$(AR) rcs $(LIBDIR)/libsam.a $(OBJS)

#
# Cleanup rules
#
clean:
	-rm -f $(LIBDIR)/libsam.a $(OBJDIR)/* .depend

tidy: clean

View File

@@ -1,4 +1,5 @@
v1.20
v1.20 2004-07-11
* Ported to FreeBSD (Makefile.freebsd)
* Full winsock compatibility - all Windows functions now return appropriate
error strings

View File

@@ -34,7 +34,7 @@
/*
* Operating system
*/
#define FREEBSD 0 // FreeBSD (untested)
#define FREEBSD 0 // FreeBSD
#define MINGW 1 // Windows native (Mingw)
#define LINUX 2 // Linux
#define CYGWIN 3 // Cygwin
@@ -83,9 +83,10 @@
#include <winsock.h>
#else
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/select.h>
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/types.h>
#endif
#include <assert.h>
#include <errno.h>

View File

@@ -189,7 +189,7 @@ samerr_t sam_dgram_send(const sam_pubkey_t dest, const void *data, size_t size)
#ifdef NO_Z_FORMAT
SAMLOG("Invalid data send size (%u bytes)", size);
#else
SAMLOG("Invalid data send size (%z bytes)", size);
SAMLOG("Invalid data send size (%dz bytes)", size);
#endif
return SAM_TOO_BIG;
}
@@ -197,7 +197,7 @@ samerr_t sam_dgram_send(const sam_pubkey_t dest, const void *data, size_t size)
snprintf(cmd, sizeof cmd, "DATAGRAM SEND DESTINATION=%s SIZE=%u\n",
dest, size);
#else
snprintf(cmd, sizeof cmd, "DATAGRAM SEND DESTINATION=%s SIZE=%z\n",
snprintf(cmd, sizeof cmd, "DATAGRAM SEND DESTINATION=%s SIZE=%dz\n",
dest, size);
#endif
sam_write(cmd, strlen(cmd));
@@ -957,7 +957,7 @@ samerr_t sam_stream_send(sam_sid_t stream_id, const void *data, size_t size)
SAMLOG("Invalid data send size (%u bytes) for stream %d",
size, stream_id);
#else
SAMLOG("Invalid data send size (%z bytes) for stream %d",
SAMLOG("Invalid data send size (%dz bytes) for stream %d",
size, stream_id);
#endif
return SAM_TOO_BIG;
@@ -971,7 +971,7 @@ samerr_t sam_stream_send(sam_sid_t stream_id, const void *data, size_t size)
stream_id, size);
#endif
#else
snprintf(cmd, sizeof cmd, "STREAM SEND ID=%d SIZE=%z\n",
snprintf(cmd, sizeof cmd, "STREAM SEND ID=%d SIZE=%dz\n",
stream_id, size);
#endif
sam_write(cmd, strlen(cmd));

View File

@@ -11,7 +11,7 @@
<mkdir dir="./build/obj" />
<javac
srcdir="./src"
debug="true"
debug="true" deprecation="on" source="1.3" target="1.3"
destdir="./build/obj"
classpath="../../../core/java/build/i2p.jar:../../ministreaming/java/build/mstreaming.jar" />
</target>

View File

@@ -7,6 +7,7 @@ import java.io.OutputStream;
import java.net.Socket;
import net.i2p.util.Log;
import net.i2p.util.Clock;
public class TestCreateSessionRaw {
private static Log _log = new Log(TestCreateSessionRaw.class);
@@ -15,6 +16,7 @@ public class TestCreateSessionRaw {
testTransient(samHost, samPort, conOptions);
testNewDest(samHost, samPort, conOptions);
testOldDest(samHost, samPort, conOptions);
testFast(samHost, samPort, conOptions);
}
private static void testTransient(String host, int port, String conOptions) {
@@ -36,21 +38,34 @@ public class TestCreateSessionRaw {
_log.debug("\n\nTest of subsequent contact complete\n\n");
}
private static void testFast(String host, int port, String conOptions) {
String destName = "Alice" + Math.random();
long totalTime = 0;
for (int i = 0; i < 10; i++) {
long before = Clock.getInstance().now();
testDest(host, port, conOptions, destName);
long after = Clock.getInstance().now();
long difference = after-before;
_log.debug("Time to test destination: " + difference + " \n\n");
totalTime += difference;
}
_log.debug("\n\nTime to test fast reconnection: " + totalTime + " over 10 runs");
}
private static void testDest(String host, int port, String conOptions, String destName) {
_log.info("\n\nTesting creating a new destination (should come back with 'SESSION STATUS RESULT=OK DESTINATION=someName)\n\n\n");
//_log.info("\n\nTesting creating a new destination (should come back with 'SESSION STATUS RESULT=OK DESTINATION=someName)\n\n\n");
try {
Socket s = new Socket(host, port);
OutputStream out = s.getOutputStream();
out.write("HELLO VERSION MIN=1.0 MAX=1.0\n".getBytes());
BufferedReader reader = new BufferedReader(new InputStreamReader(s.getInputStream()));
String line = reader.readLine();
_log.debug("line read for valid version: " + line);
//_log.debug("line read for valid version: " + line);
String req = "SESSION CREATE STYLE=RAW DESTINATION=" + destName + " " + conOptions + "\n";
out.write(req.getBytes());
line = reader.readLine();
_log.info("Response to creating the session with destination " + destName + ": " + line);
_log.debug("The above should contain SESSION STATUS RESULT=OK\n\n\n");
try { Thread.sleep(5*1000); } catch (InterruptedException ie) {}
_log.debug("The above should contain SESSION STATUS RESULT=OK");
s.close();
} catch (Exception e) {
_log.error("Error testing for valid version", e);
@@ -60,7 +75,7 @@ public class TestCreateSessionRaw {
public static void main(String args[]) {
// "i2cp.tcp.host=www.i2p.net i2cp.tcp.port=7765";
// "i2cp.tcp.host=localhost i2cp.tcp.port=7654 tunnels.inboundDepth=0";
String conOptions = "i2cp.tcp.host=dev.i2p.net i2cp.tcp.port=7002 tunnels.inboundDepth=0";
String conOptions = "i2cp.tcp.host=dev.i2p.net i2cp.tcp.port=7002 tunnels.depthInbound=0 tunnels.depthOutbound=0";
if (args.length > 0) {
conOptions = "";
for (int i = 0; i < args.length; i++)

View File

@@ -8,7 +8,7 @@
<target name="compile">
<mkdir dir="./build" />
<mkdir dir="./build/obj" />
<javac srcdir="./src" debug="true" destdir="./build/obj" includes="**/*.java" classpath="../../../core/java/build/i2p.jar" />
<javac srcdir="./src" debug="true" source="1.3" target="1.3" deprecation="on" destdir="./build/obj" includes="**/*.java" classpath="../../../core/java/build/i2p.jar" />
</target>
<target name="jar" depends="compile">
<jar destfile="./build/timestamper.jar" basedir="./build/obj" includes="**/*.class">

View File

@@ -2,15 +2,12 @@
#include <gmp.h>
#include "jbigi.h"
/********/
//function prototypes
//FIXME: should these go into jbigi.h? -- ughabugha
/******** prototypes */
void convert_j2mp(JNIEnv* env, jbyteArray jvalue, mpz_t* mvalue);
void convert_mp2j(JNIEnv* env, mpz_t mvalue, jbyteArray* jvalue);
/********/
/******** nativeModPow() */
/*
* Class: net_i2p_util_NativeBigInteger
* Method: nativeModPow
@@ -27,47 +24,41 @@ void convert_mp2j(JNIEnv* env, mpz_t mvalue, jbyteArray* jvalue);
JNIEXPORT jbyteArray JNICALL Java_net_i2p_util_NativeBigInteger_nativeModPow
(JNIEnv* env, jclass cls, jbyteArray jbase, jbyteArray jexp, jbyteArray jmod) {
// convert base, exponent, modulus into the format libgmp understands
// call libgmp's modPow
// convert libgmp's result into a big endian twos complement number
/* 1) Convert base, exponent, modulus into the format libgmp understands
* 2) Call libgmp's modPow.
* 3) Convert libgmp's result into a big endian twos complement number.
*
* Luckily we can use GMP's mpz_import() and mpz_export() functions.
*/
mpz_t mbase;
mpz_t mexp;
mpz_t mmod;
//mpz_t mresult;
jbyteArray jresult;
convert_j2mp(env, jbase, &mbase);
convert_j2mp(env, jexp, &mexp);
convert_j2mp(env, jmod, &mmod);
//gmp_printf("mbase =%Zd\n", mbase);
//gmp_printf("mexp =%Zd\n", mexp);
//gmp_printf("mmod =%Zd\n", mmod);
/* Perform the actual powmod. We use mmod for the result because it is
* always at least as big as the result.
*/
mpz_powm(mmod, mbase, mexp, mmod);
//we use mod for the result because it is always at least as big
//gmp_printf("mresult=%Zd\n", mmod);
convert_mp2j(env, mmod, &jresult);
//convert_j2mp(env, jresult, &mresult);
//gmp_printf("", mpz_cmp(mmod, mresult) == 0 ? "true" : "false");
mpz_clear(mbase);
mpz_clear(mexp);
mpz_clear(mmod);
//mpz_clear(mresult);
return jresult;
}
/********/
/******** convert_j2mp() */
/*
* Initializes the GMP value with enough preallocated size, and converts the
* Java value into the GMP value. The value that mvalue is pointint to
* should be uninitialized
* Java value into the GMP value. The value that mvalue points to should be
* uninitialized
*/
void convert_j2mp(JNIEnv* env, jbyteArray jvalue, mpz_t* mvalue)
@@ -81,18 +72,29 @@ void convert_j2mp(JNIEnv* env, jbyteArray jvalue, mpz_t* mvalue)
mpz_init2(*mvalue, sizeof(jbyte) * 8 * size); //preallocate the size
/*
* void mpz_import (mpz_t rop, size_t count, int order, int size, int endian, size_t nails, const void *op)
* order = 1 - order can be 1 for most significant word first or -1 for least significant first.
* endian = 1 - Within each word endian can be 1 for most significant byte first, -1 for least significant first
* nails = 0 - The most significant nails bits of each word are skipped, this can be 0 to use the full words
* void mpz_import(
* mpz_t rop, size_t count, int order, int size, int endian,
* size_t nails, const void *op);
*
* order = 1
* order can be 1 for most significant word first or -1 for least
* significant first.
* endian = 1
* Within each word endian can be 1 for most significant byte first,
* -1 for least significant first.
* nails = 0
* The most significant nails bits of each word are skipped, this can
* be 0 to use the full words.
*/
mpz_import(*mvalue, size, 1, sizeof(jbyte), 1, 0, (void*)jbuffer);
(*env)->ReleaseByteArrayElements(env, jvalue, jbuffer, JNI_ABORT);
}
/********/
/******** convert_mp2j() */
/*
* Converts the GMP value into the Java value; Doesn't do anything else.
* Pads the resulting jbyte array with 0, so the twos complement value is always
* positive.
*/
void convert_mp2j(JNIEnv* env, mpz_t mvalue, jbyteArray* jvalue)
@@ -103,21 +105,25 @@ void convert_mp2j(JNIEnv* env, mpz_t mvalue, jbyteArray* jvalue)
copy = JNI_FALSE;
size = (mpz_sizeinbase(mvalue, 2) + 7) / 8 + sizeof(jbyte); //+7 => ceil division
/* sizeinbase() + 7 => Ceil division */
size = (mpz_sizeinbase(mvalue, 2) + 7) / 8 + sizeof(jbyte);
*jvalue = (*env)->NewByteArray(env, size);
buffer = (*env)->GetByteArrayElements(env, *jvalue, &copy);
buffer[0] = 0;
buffer[0] = 0x00;
/*
* void *mpz_export (void *rop, size_t *count, int order, int size, int endian, size_t nails, mpz_t op)
* void *mpz_export(
* void *rop, size_t *count, int order, int size,
* int endian, size_t nails, mpz_t op);
*/
mpz_export((void*)&buffer[1], &size, 1, sizeof(jbyte), 1, 0, mvalue);
/* mode has (supposedly) no effect if elems is not a copy of the
* elements in array
*/
(*env)->ReleaseByteArrayElements(env, *jvalue, buffer, 0);
//mode has (supposedly) no effect if elems is not a copy of the elements in array
}
/********/
/******** eof */

View File

@@ -8,7 +8,7 @@
<target name="compile">
<mkdir dir="./build" />
<mkdir dir="./build/obj" />
<javac srcdir="./src:./test" debug="true" destdir="./build/obj" />
<javac srcdir="./src:./test" debug="true" source="1.3" target="1.3" deprecation="on" destdir="./build/obj" />
</target>
<target name="jar" depends="compile">
<jar destfile="./build/i2p.jar" basedir="./build/obj" includes="**/*.class" />

View File

@@ -14,8 +14,8 @@ package net.i2p;
*
*/
public class CoreVersion {
public final static String ID = "$Revision: 1.9 $ $Date: 2004/06/25 14:25:33 $";
public final static String VERSION = "0.3.2";
public final static String ID = "$Revision: 1.12 $ $Date: 2004/07/14 20:08:54 $";
public final static String VERSION = "0.3.2.3";
public static void main(String args[]) {
System.out.println("I2P Core version: " + VERSION);

View File

@@ -31,9 +31,13 @@ class MessageState {
private long _created;
private Object _lock = new Object();
private static long __stateId = 0;
private long _stateId;
public MessageState(long nonce, String prefix) {
_stateId = ++__stateId;
_nonce = nonce;
_prefix = prefix;
_prefix = prefix + "[" + _stateId + "]: ";
_id = null;
_receivedStatus = new HashSet();
_cancelled = false;

View File

@@ -1,6 +1,7 @@
; TC's hosts.txt guaranteed freshness
; $Id: hosts.txt,v 1.12 2004/06/25 13:48:33 jrandom Exp $
; $Id: hosts.txt,v 1.13 2004/07/02 08:31:09 duck Exp $
; changelog:
; (1.40) added nickster2.i2p and irc.nickster.i2p (pointing at iip)
; (1.39) added jdot.i2p
; (1.38) added forum.i2p
; (1.36) added ferret.i2p
@@ -102,3 +103,6 @@ anonynanny.i2p=U-xjIR-MQZ7iLKViaaIpfMzOWthI8jSMyougGi6catC4fmBJjkmo18SZ1Vyy1mXZV
ferret.i2p=P9EUYEBRkuA6R4nqLHrJzcqA64KjOpnQahDeb2SJZdGywI7E70Rf2TzXguryTmNLJb9riDiFouAzsb30FwF0I93Qx8iMq9ADVH6NowUgcXUaBJWoFh6dEnwwCcSbVsn1CvqoEgl-9GA2nm4NSqQv8RU4g3qrr~bfMP-gZLU4lroHb6LqCF3JNz8dege100n994nqxqdUaX1JpxDacq8jG4W5p3bUhlovQNj3cLcHG5K5PVB4jGqjkq-7UHvh0-blHoBhnYLABCe8IM2GAxgGEmS0zjQIXinsUu1wtLVmVMXF6Iy7bVf4~q80ngVNA7M~MVyNzNAocVkgb4bfhAX8WAuAA17XwsG3thWMJFYrWqKKq0hN5cDdbUNYjgx-ue~D~Sm0oJvUGjJVJnPfkbu-mO2bEQvGZaYml7kKF2kxmdmcFWAa-1HKnUXf35kKwYFWx10i8mSmLWhIx~Wvk~RIzNqjqVSy3Nbc6OkfCLvGcTQOtK-5b-BcU8HiZJJH2XOLAAAA
forum.i2p=XaZscxXGaXxuIkZDX87dfN0dcEG1xwSXktDbMX9YBOQ1LWbf0j6Kzde37j8dlPUhUK9kqVRZWpDtP7a2QBRl3aT~t~bYRj5bgTOIf9hTW46iViKdKObR-wPPjej~Px8OSYrXbFv2KUekS4baXcqHS7aJMy4rcbC1hsJm3qcXtut~7VFwEhg9w-HrHhsT5aYtcr4u79HvNvUva38NQ4NJn7vI9OPhPVgb5gxkefgM1tF0QC6QO1b~RADN~BW~X2S-YRPyKKxv6xx9mfqEbl5lVA1nBTaoFsN5ZfLZoJOFIVNpNoXxCrCQhvG2zjS-pJD2NF6g0bCcT4cKBWPYtJenLiK3L6fKJuVJ-og5ootLdJNBXGsO~FSwdabvDUaPDbTKmqS-ibFjmq1C7vEde3TGo3cRZgqG0YZi3S3BpBTYN9kGhYHrThGH69ECViUJnUWlUsWux5FI4pZL5Du7TwDYT0BwnX2kTdZQ8WGSFlflXgVQIh1n0XpElShWrOQPR0jGAAAA
jdot.i2p=yUqfBnnApqi3gcDlh8cafqd6T0Btw2yT8AA97Qtadu~wDbNd4znyrHnitrSCqGNk3KBAq~-Y5wxQ933NMQXyqgxQ-6U-2nNCS6EbH01dJfwCsoa1WayQR9k~LheSW7Vu6L12-STuNP~qhgaquSFOohpRuFEWwEzQ71zVcCpOYfAgkxD1zdIa8Dz-9ZcbPP5Nr4xCq08ls5CCROWyP0EKiB45GxPOfDTZmZxLGGcRly0NoU9uAB441GKJyPtuhrk-XhR~UhlH~BB9kaGfv09saQaEV~KsHOPU3u02O6uweE2EqzJ7oPrVM40L9OHsrrVKLZlUBwoReiwHxSjAZJlZfg4LkXGH~i7cczDaVQFOwBdj3MuzYuMhnnoIZu2I-Kg2J7aLM1vH1piO8-IH9mzItLOVUXS1gExAcflp9lmoXnGzojkn6~so6cL-iXHj7LY~~301o4-JnfqB65B4keZF4v2Fc4TKBnIERcAmx6adVOwjPXLL9nYBQNreoUFVxULsAAAA
irc.nickster.i2p=4xIbFi15l6BFLkKCPDEVcb23aQia4Ry1pQeC5C0RGzqy5IednmnDQqG5l8mDID8vL831rUmCrj~sC537iQiUXlkKFJvdiuI0HEL4c6a7NCYz3cPncc2Uz~gnlG1YOPv-CkxcXSHxxGrv0-HA281a87hrEc7uQ7hBLPybMl6-Z4k-qsyABDdaZwbqEJDWxJKWNfEWfhj2fHSuYB9c6CJgkPektLdMEIxIO4fWgRaIvyr0jt7ObBcB9QhvZAUnP5~iD9gnl~sxfSg~Zi7UW2sB0ewrb63KLZtRDXmnb-Gc3Cn-6oqvqt~YeNXW2OKiEMggkonLJR8RmdTsgMSwbHvXhyp0utqwgIIP7W0if0IIcDg7t38JzSo67uKs3m9aBf1kJWL~d31v6enPjIpgeLllJB6OaJVtKojn~Yi7Sje~5DJnLxiZVGf~Dn3a9IynFCQ6KXiPo-6418Wl2-vkrq0~cWjmlYMASR9AMILZMr1rOQUf748e92~oYwX2W5saRVnoAAAA
nickster2.i2p=aVMk7wdk-ebssXJmuthnwIy8IdAWLiNiftdNHVvqQCmbJUC1fSVuEkUPkAEW2Hdhzzv9M8t1-MH5Ip2UnTIe5XWjwLvfyg0skX4QouLWX~EZIMq6SrTpIBguYuG9pV5XKfVslwnYyYU-S-lL82S98YJDRWw4eGO~Alg2Sisp8x8iuvLrWwXX61hOcQp6e5OVM5Al3kHcVBFwIGfgIglXy6d7lPZHBRDj7W4jTKKj4OUOaJ84vdRtMZaVXbM~xHZJTZxRo99SmVQpxr4j2n0f4RfYDQxjz2eMPLfhl1IF4svBGbQ1d0PGQlTy0vftRqMjxrS4XoGdlYGyACpDE5eMjIgRhnAyAXbZ4OY7htXKvLgpM6VBQ0JjBwQITsI~MO1IUJXvc3yEg~rLnBtv01YNspiGyE8v0~Va6rps3KjEc6E-C4ZN6pNHBqtKajFPLz4kgrJ-E6P2bXoHMadbNIkEEM5vKSPTFAf~y1ElBH9RD5yaxOxGEzyS076vJ6phK~w2AAAA

View File

@@ -15,7 +15,7 @@
<target name="compile">
<mkdir dir="./build" />
<mkdir dir="./build/obj" />
<javac srcdir="./src" debug="true" destdir="./build/obj" />
<javac srcdir="./src" debug="true" source="1.3" target="1.3" deprecation="on" destdir="./build/obj" />
</target>
<target name="jar" depends="installer, guiinstaller" />
<target name="fetchseeds" depends="compile">

View File

@@ -79,7 +79,7 @@ public class FetchSeeds {
public static void main(String[] args) {
switch (args.length) {
case 1:
fetchSeeds(new File(args[0]), "http://i2p.dnsalias.net/i2pdb/");
fetchSeeds(new File(args[0]), "http://dev.i2p.net/i2pdb/");
return;
case 2:
fetchSeeds(new File(args[0]), args[1]);
@@ -93,7 +93,7 @@ public class FetchSeeds {
System.out.println("Usage: FetchSeeds <outDir>");
System.out.println(" or FetchSeeds <outDir> <seedURL>");
System.out.println(" or FetchSeeds <outDir> <seedURL> <secondsBetweenFetch>");
System.out.println("The default seedURL is http://i2p.dnsalias.net/i2pdb/");
System.out.println("The default seedURL is http://dev.i2p.net/i2pdb/");
return;
}
}

View File

@@ -325,6 +325,10 @@ public abstract class Install {
_i2cpPort = ((Integer)_answers.get("i2cpPort")).intValue();
_inBPS = ((Integer)_answers.get("inBPS")).intValue();
_outBPS = ((Integer)_answers.get("outBPS")).intValue();
long num = new java.util.Random().nextLong();
if (num < 0)
num = 0 - num;
_answers.put("timestamperPassword", new Long(num));
}
private void useTemplate(String templateName, File destFile) {

View File

@@ -1,3 +1,3 @@
cd ##_scripts_installdir##
java -jar lib\fetchseeds.jar netDb
java -jar lib\fetchseeds.jar netDb http://dev.i2p.net/i2pdb/
pause

View File

@@ -1,4 +1,4 @@
#!/bin/sh
cd ##_scripts_installdir##
java -jar lib/fetchseeds.jar netDb
java -jar lib/fetchseeds.jar netDb http://dev.i2p.net/i2pdb/
echo Router network database reseeded

View File

@@ -125,13 +125,13 @@ tunnels.tunnelDuration=600000
clientApp.0.main=net.i2p.time.Timestamper
clientApp.0.name=Timestamper
clientApp.0.onBoot=true
clientApp.0.args=http://localhost:7655/setTime?putTheValueFromBelowHere pool.ntp.org pool.ntp.org pool.ntp.org
clientApp.0.args=http://localhost:7655/setTime?##timestamperPassword## pool.ntp.org pool.ntp.org pool.ntp.org
# The admin time passphrase, used to prevent unauthorized people from updating your
# routers time. The value should be included in the timestamper's args above,
# otherwise it wont honor timestamp updates. You shouldnt include any spaces or funky
# characters - just pick some random numbers.
adminTimePassphrase=pleaseSetSomeValueHere
adminTimePassphrase=##timestamperPassword##
# SAM bridge (a simplified socket based protocol for using I2P - listens on port 7656. see
# the specs at http://www.i2p.net/node/view/144 for more info)

View File

@@ -8,7 +8,7 @@
<target name="compile">
<mkdir dir="./build" />
<mkdir dir="./build/obj" />
<javac srcdir="./src:./test" debug="true" destdir="./build/obj" classpath="../../core/java/build/i2p.jar" />
<javac srcdir="./src:./test" debug="true" source="1.3" target="1.3" deprecation="on" destdir="./build/obj" classpath="../../core/java/build/i2p.jar" />
</target>
<target name="jar" depends="compile">
<jar destfile="./build/router.jar" basedir="./build/obj" includes="**/*.class" />

View File

@@ -114,6 +114,9 @@ public class I2NPMessageReader {
public void run() {
while (_stayAlive) {
while (_doRun) {
while (!_context.throttle().acceptNetworkMessage()) {
try { Thread.sleep(500 + _context.random().nextInt(512)); } catch (InterruptedException ie) {}
}
// do read
try {
I2NPMessage msg = _handler.readMessage(_stream);

View File

@@ -8,6 +8,9 @@ package net.i2p.router;
*
*/
import java.io.IOException;
import java.io.OutputStream;
import net.i2p.data.Destination;
import net.i2p.data.Hash;
import net.i2p.data.LeaseSet;
@@ -67,7 +70,7 @@ public abstract class ClientManagerFacade implements Service {
*
*/
public abstract SessionConfig getClientSessionConfig(Destination dest);
public String renderStatusHTML() { return ""; }
public void renderStatusHTML(OutputStream out) throws IOException { }
}
class DummyClientManagerFacade extends ClientManagerFacade {

View File

@@ -8,6 +8,9 @@ package net.i2p.router;
*
*/
import java.io.IOException;
import java.io.OutputStream;
import java.util.HashSet;
import java.util.Set;
@@ -19,7 +22,7 @@ import java.util.Set;
public abstract class CommSystemFacade implements Service {
public abstract void processMessage(OutNetMessage msg);
public String renderStatusHTML() { return ""; }
public void renderStatusHTML(OutputStream out) throws IOException { }
/** Create the set of RouterAddress structures based on the router's config */
public Set createAddresses() { return new HashSet(); }

View File

@@ -131,8 +131,10 @@ public class InNetMessagePool {
_log.info("Dropping unhandled delivery status message created " + timeSinceSent + "ms ago: " + msg);
_context.statManager().addRateData("inNetPool.droppedDeliveryStatusDelay", timeSinceSent, timeSinceSent);
} else {
if (_log.shouldLog(Log.ERROR))
_log.error("Message " + messageBody + " was not handled by a HandlerJobBuilder - DROPPING: "
if (_log.shouldLog(Log.WARN))
_log.warn("Message " + messageBody + " expiring on "
+ (messageBody != null ? (messageBody.getMessageExpiration()+"") : " [unknown]")
+ " was not handled by a HandlerJobBuilder - DROPPING: "
+ msg, new Exception("DROPPED MESSAGE"));
_context.statManager().addRateData("inNetPool.dropped", 1, 0);
}

View File

@@ -13,7 +13,7 @@ import net.i2p.util.Log;
* Base implementation of a Job
*/
public abstract class JobImpl implements Job {
protected RouterContext _context;
private RouterContext _context;
private JobTiming _timing;
private static int _idSrc = 0;
private int _id;
@@ -31,6 +31,8 @@ public abstract class JobImpl implements Job {
public int getJobId() { return _id; }
public JobTiming getTiming() { return _timing; }
public final RouterContext getContext() { return _context; }
public String toString() {
StringBuffer buf = new StringBuffer(128);
buf.append(super.toString());

View File

@@ -8,6 +8,9 @@ package net.i2p.router;
*
*/
import java.io.IOException;
import java.io.OutputStream;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Date;
@@ -117,7 +120,7 @@ public class JobQueue {
I2PThread pumperThread = new I2PThread(_pumper);
pumperThread.setDaemon(true);
pumperThread.setName("QueuePumper");
//pumperThread.setPriority(I2PThread.MIN_PRIORITY);
//pumperThread.setPriority(I2PThread.NORM_PRIORITY+1);
pumperThread.start();
}
@@ -186,6 +189,26 @@ public class JobQueue {
return;
}
public void timingUpdated() {
synchronized (_timedJobs) {
_timedJobs.notifyAll();
}
}
public int getReadyCount() {
synchronized (_readyJobs) {
return _readyJobs.size();
}
}
public long getMaxLag() {
synchronized (_readyJobs) {
if (_readyJobs.size() <= 0) return 0;
// first job is the one that has been waiting the longest
long startAfter = ((Job)_readyJobs.get(0)).getTiming().getStartAfter();
return _context.clock().now() - startAfter;
}
}
/**
* are we so overloaded that we should drop the given job?
* This is driven both by the numReady and waiting jobs, the type of job
@@ -350,10 +373,13 @@ public class JobQueue {
*/
private void awaken(int numMadeReady) {
// notify a sufficient number of waiting runners
for (int i = 0; i < numMadeReady; i++) {
synchronized (_runnerLock) {
_runnerLock.notify();
}
//for (int i = 0; i < numMadeReady; i++) {
// synchronized (_runnerLock) {
// _runnerLock.notify();
// }
//}
synchronized (_runnerLock) {
_runnerLock.notify();
}
}
@@ -391,17 +417,12 @@ public class JobQueue {
timeToWait = timeLeft;
}
}
if (toAdd == null) {
if (_log.shouldLog(Log.DEBUG))
_log.debug("Waiting " + timeToWait + " before rechecking the timed queue");
try {
_timedJobs.wait(timeToWait);
} catch (InterruptedException ie) {}
}
}
if (toAdd != null) {
if (_log.shouldLog(Log.DEBUG))
_log.debug("Not waiting - we have " + toAdd.size() + " newly ready jobs");
synchronized (_readyJobs) {
// rather than addAll, which allocs a byte array rv before adding,
// we iterate, since toAdd is usually going to only be 1 or 2 entries
@@ -413,6 +434,18 @@ public class JobQueue {
}
awaken(toAdd.size());
} else {
if (timeToWait < 100)
timeToWait = 100;
if (timeToWait > 10*1000)
timeToWait = 10*1000;
if (_log.shouldLog(Log.DEBUG))
_log.debug("Waiting " + timeToWait + " before rechecking the timed queue");
try {
synchronized (_timedJobs) {
_timedJobs.wait(timeToWait);
}
} catch (InterruptedException ie) {}
}
}
} catch (Throwable t) {
@@ -520,13 +553,20 @@ public class JobQueue {
// the remainder are utility methods for dumping status info
////
public String renderStatusHTML() {
public void renderStatusHTML(OutputStream out) throws IOException {
ArrayList readyJobs = null;
ArrayList timedJobs = null;
ArrayList activeJobs = new ArrayList(1);
ArrayList justFinishedJobs = new ArrayList(4);
out.write("<!-- jobQueue rendering -->\n".getBytes());
out.flush();
synchronized (_readyJobs) { readyJobs = new ArrayList(_readyJobs); }
out.write("<!-- jobQueue rendering: after readyJobs sync -->\n".getBytes());
out.flush();
synchronized (_timedJobs) { timedJobs = new ArrayList(_timedJobs); }
out.write("<!-- jobQueue rendering: after timedJobs sync -->\n".getBytes());
out.flush();
int numRunners = 0;
synchronized (_queueRunners) {
for (Iterator iter = _queueRunners.values().iterator(); iter.hasNext();) {
JobQueueRunner runner = (JobQueueRunner)iter.next();
@@ -538,13 +578,15 @@ public class JobQueue {
justFinishedJobs.add(job);
}
}
numRunners = _queueRunners.size();
}
StringBuffer buf = new StringBuffer(20*1024);
out.write("<!-- jobQueue rendering: after queueRunners sync -->\n".getBytes());
out.flush();
StringBuffer buf = new StringBuffer(32*1024);
buf.append("<h2>JobQueue</h2>");
buf.append("# runners: ");
synchronized (_queueRunners) {
buf.append(_queueRunners.size());
}
buf.append("# runners: ").append(numRunners);
buf.append("<br />\n");
long now = _context.clock().now();
@@ -583,13 +625,20 @@ public class JobQueue {
buf.append(new Date(j.getTiming().getStartAfter())).append("</li>\n");
}
buf.append("</ol>\n");
buf.append(getJobStats());
return buf.toString();
out.write("<!-- jobQueue rendering: after main buffer, before stats -->\n".getBytes());
out.flush();
getJobStats(buf);
out.write("<!-- jobQueue rendering: after stats -->\n".getBytes());
out.flush();
out.write(buf.toString().getBytes());
}
/** render the HTML for the job stats */
private String getJobStats() {
StringBuffer buf = new StringBuffer(16*1024);
private void getJobStats(StringBuffer buf) {
buf.append("<table border=\"1\">\n");
buf.append("<tr><td><b>Job</b></td><td><b>Runs</b></td>");
buf.append("<td><b>Time</b></td><td><b><i>Avg</i></b></td><td><b><i>Max</i></b></td><td><b><i>Min</i></b></td>");
@@ -608,7 +657,7 @@ public class JobQueue {
synchronized (_jobStats) {
tstats = new TreeMap(_jobStats);
}
for (Iterator iter = tstats.values().iterator(); iter.hasNext(); ) {
JobStats stats = (JobStats)iter.next();
buf.append("<tr>");
@@ -658,6 +707,5 @@ public class JobQueue {
buf.append("</tr>\n");
buf.append("</table>\n");
return buf.toString();
}
}

View File

@@ -33,7 +33,15 @@ public class JobTiming implements Clock.ClockUpdateListener {
*
*/
public long getStartAfter() { return _start; }
public void setStartAfter(long startTime) { _start = startTime; }
public void setStartAfter(long startTime) {
_start = startTime;
// sure, this current job object may not already be on the queue, so
// telling the queue of the update may be irrelevent...
// but...
// ...
// who cares? this helps in the case where it is on the queue
_context.jobQueue().timingUpdated();
}
/**
* # of milliseconds after the epoch the job actually started

View File

@@ -8,6 +8,9 @@ package net.i2p.router;
*
*/
import java.io.IOException;
import java.io.OutputStream;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
@@ -44,7 +47,6 @@ public abstract class NetworkDatabaseFacade implements Service {
public abstract void publish(LeaseSet localLeaseSet);
public abstract void unpublish(LeaseSet localLeaseSet);
public abstract void fail(Hash dbEntry);
public String renderStatusHTML() { return ""; }
}
@@ -84,4 +86,6 @@ class DummyNetworkDatabaseFacade extends NetworkDatabaseFacade {
public void fail(Hash dbEntry) {}
public Set findNearestRouters(Hash key, int maxNumRouters, Set peersToIgnore) { return new HashSet(_routers.values()); }
public void renderStatusHTML(OutputStream out) throws IOException {}
}

View File

@@ -8,6 +8,7 @@ package net.i2p.router;
*
*/
import java.io.OutputStream;
import java.util.List;
/**
@@ -29,6 +30,6 @@ public interface PeerManagerFacade extends Service {
class DummyPeerManagerFacade implements PeerManagerFacade {
public void shutdown() {}
public void startup() {}
public String renderStatusHTML() { return ""; }
public void renderStatusHTML(OutputStream out) { }
public List selectPeers(PeerSelectionCriteria criteria) { return null; }
}

View File

@@ -46,9 +46,13 @@ public interface ProfileManager {
/**
* Note that a router explicitly rejected joining a tunnel
*
*
* @param peer who rejected us
* @param responseTimeMs how long it took to get the rejection
* @param explicit true if the tunnel request was explicitly rejected, false
* if we just didn't get a reply back in time.
*/
void tunnelRejected(Hash peer, long responseTimeMs);
void tunnelRejected(Hash peer, long responseTimeMs, boolean explicit);
/**
* Note that a tunnel that the router is participating in

View File

@@ -10,6 +10,7 @@ package net.i2p.router;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.text.DecimalFormat;
import java.util.Calendar;
import java.util.Date;
@@ -220,34 +221,32 @@ public class Router {
_context.inNetMessagePool().registerHandlerJobBuilder(SourceRouteReplyMessage.MESSAGE_TYPE, new SourceRouteReplyMessageHandler(_context));
}
public String renderStatusHTML() {
StringBuffer buf = new StringBuffer();
buf.append("<html><head><title>I2P Router Console</title></head><body>\n");
buf.append("<h1>Router console</h1>\n");
buf.append("<i><a href=\"/routerConsole.html\">console</a> | <a href=\"/routerStats.html\">stats</a></i><br>\n");
buf.append("<form action=\"/routerConsole.html\">");
buf.append("<select name=\"go\" onChange='location.href=this.value'>");
buf.append("<option value=\"/routerConsole.html#bandwidth\">Bandwidth</option>\n");
buf.append("<option value=\"/routerConsole.html#clients\">Clients</option>\n");
buf.append("<option value=\"/routerConsole.html#transports\">Transports</option>\n");
buf.append("<option value=\"/routerConsole.html#profiles\">Peer Profiles</option>\n");
buf.append("<option value=\"/routerConsole.html#tunnels\">Tunnels</option>\n");
buf.append("<option value=\"/routerConsole.html#jobs\">Jobs</option>\n");
buf.append("<option value=\"/routerConsole.html#shitlist\">Shitlist</option>\n");
buf.append("<option value=\"/routerConsole.html#pending\">Pending messages</option>\n");
buf.append("<option value=\"/routerConsole.html#netdb\">Network Database</option>\n");
buf.append("<option value=\"/routerConsole.html#logs\">Log messages</option>\n");
buf.append("</select>");
buf.append("</form>");
buf.append("<form action=\"/shutdown\" method=\"GET\">");
buf.append("<b>Shut down the router:</b>");
buf.append("<input type=\"password\" name=\"password\" size=\"8\" />");
buf.append("<input type=\"submit\" value=\"shutdown!\" />");
buf.append("</form>");
buf.append("<hr />\n");
public void renderStatusHTML(OutputStream out) throws IOException {
out.write(("<html><head><title>I2P Router Console</title></head><body>\n" +
"<h1>Router console</h1>\n" +
"<i><a href=\"/routerConsole.html\">console</a> | <a href=\"/routerStats.html\">stats</a></i><br>\n" +
"<form action=\"/routerConsole.html\">" +
"<select name=\"go\" onChange='location.href=this.value'>" +
"<option value=\"/routerConsole.html#bandwidth\">Bandwidth</option>\n" +
"<option value=\"/routerConsole.html#clients\">Clients</option>\n" +
"<option value=\"/routerConsole.html#transports\">Transports</option>\n" +
"<option value=\"/routerConsole.html#profiles\">Peer Profiles</option>\n" +
"<option value=\"/routerConsole.html#tunnels\">Tunnels</option>\n" +
"<option value=\"/routerConsole.html#jobs\">Jobs</option>\n" +
"<option value=\"/routerConsole.html#shitlist\">Shitlist</option>\n" +
"<option value=\"/routerConsole.html#pending\">Pending messages</option>\n" +
"<option value=\"/routerConsole.html#netdb\">Network Database</option>\n" +
"<option value=\"/routerConsole.html#logs\">Log messages</option>\n" +
"</select>" +"</form>" +
"<form action=\"/shutdown\" method=\"GET\">" +
"<b>Shut down the router:</b>" +
"<input type=\"password\" name=\"password\" size=\"8\" />" +
"<input type=\"submit\" value=\"shutdown!\" />" +
"</form>" +
"<hr />\n").getBytes());
StringBuffer buf = new StringBuffer(32*1024);
if ( (_routerInfo != null) && (_routerInfo.getIdentity() != null) )
buf.append("<b>Router: </b> ").append(_routerInfo.getIdentity().getHash().toBase64()).append("<br />\n");
buf.append("<b>As of: </b> ").append(new Date(_context.clock().now())).append(" (uptime: ").append(DataHelper.formatDuration(getUptime())).append(") <br />\n");
@@ -352,24 +351,43 @@ public class Router {
buf.append("trying to transfer data. Lifetime averages count how many elephants there are on the moon [like anyone reads this text]</i>");
buf.append("\n");
buf.append(_context.bandwidthLimiter().renderStatusHTML());
out.write(buf.toString().getBytes());
_context.bandwidthLimiter().renderStatusHTML(out);
buf.append("<hr /><a name=\"clients\"> </a>\n");
buf.append(_context.clientManager().renderStatusHTML());
buf.append("\n<hr /><a name=\"transports\"> </a>\n");
buf.append(_context.commSystem().renderStatusHTML());
buf.append("\n<hr /><a name=\"profiles\"> </a>\n");
buf.append(_context.peerManager().renderStatusHTML());
buf.append("\n<hr /><a name=\"tunnels\"> </a>\n");
buf.append(_context.tunnelManager().renderStatusHTML());
buf.append("\n<hr /><a name=\"jobs\"> </a>\n");
buf.append(_context.jobQueue().renderStatusHTML());
buf.append("\n<hr /><a name=\"shitlist\"> </a>\n");
buf.append(_context.shitlist().renderStatusHTML());
buf.append("\n<hr /><a name=\"pending\"> </a>\n");
buf.append(_context.messageRegistry().renderStatusHTML());
buf.append("\n<hr /><a name=\"netdb\"> </a>\n");
buf.append(_context.netDb().renderStatusHTML());
out.write("<hr /><a name=\"clients\"> </a>\n".getBytes());
_context.clientManager().renderStatusHTML(out);
out.write("\n<hr /><a name=\"transports\"> </a>\n".getBytes());
_context.commSystem().renderStatusHTML(out);
out.write("\n<hr /><a name=\"profiles\"> </a>\n".getBytes());
_context.peerManager().renderStatusHTML(out);
out.write("\n<hr /><a name=\"tunnels\"> </a>\n".getBytes());
_context.tunnelManager().renderStatusHTML(out);
out.write("\n<hr /><a name=\"jobs\"> </a>\n".getBytes());
_context.jobQueue().renderStatusHTML(out);
out.write("\n<hr /><a name=\"shitlist\"> </a>\n".getBytes());
_context.shitlist().renderStatusHTML(out);
out.write("\n<hr /><a name=\"pending\"> </a>\n".getBytes());
_context.messageRegistry().renderStatusHTML(out);
out.write("\n<hr /><a name=\"netdb\"> </a>\n".getBytes());
_context.netDb().renderStatusHTML(out);
buf.setLength(0);
buf.append("\n<hr /><a name=\"logs\"> </a>\n");
List msgs = _context.logManager().getBuffer().getMostRecentMessages();
buf.append("\n<h2>Most recent console messages:</h2><table border=\"1\">\n");
@@ -380,7 +398,7 @@ public class Router {
}
buf.append("</table>");
buf.append("</body></html>\n");
return buf.toString();
out.write(buf.toString().getBytes());
}
public void shutdown() {

View File

@@ -49,6 +49,7 @@ public class RouterContext extends I2PAppContext {
private Shitlist _shitlist;
private MessageValidator _messageValidator;
private MessageStateMonitor _messageStateMonitor;
private RouterThrottle _throttle;
private Calculator _isFailingCalc;
private Calculator _integrationCalc;
private Calculator _speedCalc;
@@ -83,6 +84,7 @@ public class RouterContext extends I2PAppContext {
_statPublisher = new StatisticsManager(this);
_shitlist = new Shitlist(this);
_messageValidator = new MessageValidator(this);
_throttle = new RouterThrottleImpl(this);
_isFailingCalc = new IsFailingCalculator(this);
_integrationCalc = new IntegrationCalculator(this);
_speedCalc = new SpeedCalculator(this);
@@ -188,6 +190,11 @@ public class RouterContext extends I2PAppContext {
* well as other criteria for "validity".
*/
public MessageValidator messageValidator() { return _messageValidator; }
/**
* Component to coordinate our accepting/rejecting of requests under load
*
*/
public RouterThrottle throttle() { return _throttle; }
/** how do we rank the failure of profiles? */
public Calculator isFailingCalculator() { return _isFailingCalc; }

View File

@@ -0,0 +1,34 @@
package net.i2p.router;
import net.i2p.data.Hash;
import net.i2p.data.i2np.TunnelCreateMessage;
/**
* Gatekeeper for deciding whether to throttle the further processing
* of messages through the router. This is separate from the bandwidth
* limiting, which simply makes sure the bytes transferred don't exceed the
* bytes allowed (though the router throttle should take into account the
* current bandwidth usage and limits when determining whether to accept or
* reject certain activities, such as tunnels)
*
*/
public interface RouterThrottle {
/**
* Should we accept any more data from the network for any sort of message,
* taking into account our current load, or should we simply slow down?
*
*/
public boolean acceptNetworkMessage();
/**
* Should we accept the request to participate in the given tunnel,
* taking into account our current load and bandwidth usage commitments?
*
*/
public boolean acceptTunnelRequest(TunnelCreateMessage msg);
/**
* Should we accept the netDb lookup message, replying either with the
* value or some closer peers, or should we simply drop it due to overload?
*
*/
public boolean acceptNetDbLookupRequest(Hash key);
}
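
How a caller might consult the throttle before taking on work, sketched under assumptions (none of the surrounding names below come from this diff):

// Hedged usage sketch: drop or reject when the router is congested.
RouterThrottle throttle = _context.throttle();
if (!throttle.acceptNetworkMessage())
    return; // too lagged: stop reading more data off the wire
if (msg instanceof TunnelCreateMessage) {
    TunnelCreateMessage req = (TunnelCreateMessage)msg;
    if (!throttle.acceptTunnelRequest(req)) {
        sendExplicitRejection(req); // hypothetical helper
        return;
    }
}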

View File

@@ -0,0 +1,132 @@
package net.i2p.router;
import net.i2p.data.Hash;
import net.i2p.data.i2np.TunnelCreateMessage;
import net.i2p.stat.RateStat;
import net.i2p.stat.Rate;
import net.i2p.util.Log;
/**
* Simple throttle that basically stops accepting messages or nontrivial
* requests if the jobQueue lag is too large.
*
*/
class RouterThrottleImpl implements RouterThrottle {
private RouterContext _context;
private Log _log;
/**
* arbitrary hard limit of 2 seconds - if it's taking this long to get
* to a job, we're congested.
*
*/
private static int JOB_LAG_LIMIT = 2000;
/**
* Arbitrary hard limit - if we throttle our network connection this many
* times in the previous 10-20 minute period, don't accept requests to
* participate in tunnels.
*
*/
private static int THROTTLE_EVENT_LIMIT = 300;
public RouterThrottleImpl(RouterContext context) {
_context = context;
_log = context.logManager().getLog(RouterThrottleImpl.class);
_context.statManager().createRateStat("router.throttleNetworkCause", "How lagged the jobQueue was when an I2NP was throttled", "Throttle", new long[] { 60*1000, 10*60*1000, 60*60*1000, 24*60*60*1000 });
_context.statManager().createRateStat("router.throttleNetDbCause", "How lagged the jobQueue was when a networkDb request was throttled", "Throttle", new long[] { 60*1000, 10*60*1000, 60*60*1000, 24*60*60*1000 });
_context.statManager().createRateStat("router.throttleTunnelCause", "How lagged the jobQueue was when a tunnel request was throttled", "Throttle", new long[] { 60*1000, 10*60*1000, 60*60*1000, 24*60*60*1000 });
_context.statManager().createRateStat("tunnel.bytesAllocatedAtAccept", "How many bytes had been 'allocated' for participating tunnels when we accepted a request?", "Tunnels", new long[] { 10*60*1000, 60*60*1000, 24*60*60*1000 });
_context.statManager().createRateStat("router.throttleTunnelProcessingTime1m", "How long it takes to process a message (1 minute average) when we throttle a tunnel?", "Throttle", new long[] { 60*1000, 10*60*1000, 60*60*1000, 24*60*60*1000 });
_context.statManager().createRateStat("router.throttleTunnelProcessingTime10m", "How long it takes to process a message (10 minute average) when we throttle a tunnel?", "Throttle", new long[] { 60*1000, 10*60*1000, 60*60*1000, 24*60*60*1000 });
}
public boolean acceptNetworkMessage() {
long lag = _context.jobQueue().getMaxLag();
if (lag > JOB_LAG_LIMIT) {
if (_log.shouldLog(Log.DEBUG))
_log.debug("Throttling network reader, as the job lag is " + lag);
_context.statManager().addRateData("router.throttleNetworkCause", lag, lag);
return false;
} else {
return true;
}
}
public boolean acceptNetDbLookupRequest(Hash key) {
long lag = _context.jobQueue().getMaxLag();
if (lag > JOB_LAG_LIMIT) {
if (_log.shouldLog(Log.DEBUG))
_log.debug("Refusing netDb request, as the job lag is " + lag);
_context.statManager().addRateData("router.throttleNetDbCause", lag, lag);
return false;
} else {
return true;
}
}
public boolean acceptTunnelRequest(TunnelCreateMessage msg) {
long lag = _context.jobQueue().getMaxLag();
RateStat rs = _context.statManager().getRate("router.throttleNetworkCause");
Rate r = null;
if (rs != null)
r = rs.getRate(10*60*1000);
long throttleEvents = (r != null ? r.getCurrentEventCount() + r.getLastEventCount() : 0);
if (throttleEvents > THROTTLE_EVENT_LIMIT) {
if (_log.shouldLog(Log.DEBUG))
_log.debug("Refusing tunnel request with the job lag of " + lag
+ " since there have been " + throttleEvents
+ " throttle events in the last 15 minutes or so");
_context.statManager().addRateData("router.throttleTunnelCause", lag, lag);
return false;
}
rs = _context.statManager().getRate("transport.sendProcessingTime");
r = null;
if (rs != null)
r = rs.getRate(10*60*1000);
double processTime = (r != null ? r.getAverageValue() : 0);
if (processTime > 1000) {
if (_log.shouldLog(Log.DEBUG))
_log.debug("Refusing tunnel request with the job lag of " + lag
+ "since the 10 minute message processing time is too slow (" + processTime + ")");
_context.statManager().addRateData("router.throttleTunnelProcessingTime10m", (long)processTime, (long)processTime);
return false;
}
if (rs != null)
r = rs.getRate(60*1000);
processTime = (r != null ? r.getAverageValue() : 0);
if (processTime > 2000) {
if (_log.shouldLog(Log.DEBUG))
_log.debug("Refusing tunnel request with the job lag of " + lag
+ "since the 1 minute message processing time is too slow (" + processTime + ")");
_context.statManager().addRateData("router.throttleTunnelProcessingTime1m", (long)processTime, (long)processTime);
return false;
}
// ok, we're not hosed, but can we handle the bandwidth requirements
// of another tunnel?
rs = _context.statManager().getRate("tunnel.participatingMessagesProcessed");
r = null;
if (rs != null)
r = rs.getRate(10*60*1000);
double msgsPerTunnel = (r != null ? r.getAverageValue() : 0);
r = null;
rs = _context.statManager().getRate("tunnel.relayMessageSize");
if (rs != null)
r = rs.getRate(10*60*1000);
double bytesPerMsg = (r != null ? r.getAverageValue() : 0);
double bytesPerTunnel = msgsPerTunnel * bytesPerMsg;
int numTunnels = _context.tunnelManager().getParticipatingCount();
double bytesAllocated = (numTunnels + 1) * bytesPerTunnel;
_context.statManager().addRateData("tunnel.bytesAllocatedAtAccept", (long)bytesAllocated, msg.getTunnelDurationSeconds()*1000);
// todo: actually throttle on bandwidth (include bw usage of the netDb, our own
// tunnels, and the clients), and check that the total is less than the bandwidth limits
if (_log.shouldLog(Log.DEBUG))
_log.debug("Accepting a new tunnel request (now allocating " + bytesAllocated + " bytes across " + numTunnels
+ " tunnels with lag of " + lag + " and " + throttleEvents + " throttle events)");
return true;
}
}
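
To make the bytesAllocated estimate concrete, here is a worked example with invented numbers (nothing below is measured data):

// Hypothetical numbers, just to illustrate the arithmetic:
double msgsPerTunnel  = 300;     // avg participating msgs per tunnel per 10 min
double bytesPerMsg    = 1024;    // avg relayed message size
double bytesPerTunnel = msgsPerTunnel * bytesPerMsg;        // 307,200 bytes
int    numTunnels     = 9;       // tunnels we already participate in
double bytesAllocated = (numTunnels + 1) * bytesPerTunnel;  // 3,072,000 bytes
// Accepting the 10th tunnel commits roughly 3 MB per 10-minute window,
// i.e. about 5 KB/s -- the figure a finished throttle would compare
// against the bandwidth limits per the todo above.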

View File

@@ -15,8 +15,8 @@ import net.i2p.CoreVersion;
*
*/
public class RouterVersion {
public final static String ID = "$Revision: 1.8 $ $Date: 2004/06/25 14:25:33 $";
public final static String VERSION = "0.3.2";
public final static String ID = "$Revision: 1.11 $ $Date: 2004/07/14 20:08:55 $";
public final static String VERSION = "0.3.2.3";
public static void main(String args[]) {
System.out.println("I2P Router version: " + VERSION);
System.out.println("Router ID: " + RouterVersion.ID);

View File

@@ -8,6 +8,9 @@ package net.i2p.router;
*
*/
import java.io.IOException;
import java.io.OutputStream;
/**
* Define the manageable service interface for the subsystems in the I2P router
*
@@ -28,5 +31,5 @@ public interface Service {
*/
public void shutdown();
public String renderStatusHTML();
public void renderStatusHTML(OutputStream out) throws IOException;
}
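
Switching the interface from returning a String to writing an OutputStream means a large status page never has to exist as one giant String. A minimal sketch of a conforming implementation (the class and its content are hypothetical, and it assumes Service also declares startup()/shutdown(), as the other implementations in this diff suggest):

import java.io.IOException;
import java.io.OutputStream;

// Hypothetical Service showing the new streaming signature:
public class ExampleService implements Service {
    public void startup() {}
    public void shutdown() {}
    public void renderStatusHTML(OutputStream out) throws IOException {
        StringBuffer buf = new StringBuffer(1024);
        buf.append("<h2>Example service</h2>");
        buf.append("<i>status: ok</i>\n");
        out.write(buf.toString().getBytes()); // stream chunks instead of returning one String
    }
}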

View File

@@ -4,6 +4,7 @@ import java.io.File;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import net.i2p.crypto.PersistentSessionKeyManager;
import net.i2p.crypto.SessionKeyManager;
@@ -89,7 +90,7 @@ public class SessionKeyPersistenceHelper implements Service {
}
}
public String renderStatusHTML() { return ""; }
public void renderStatusHTML(OutputStream out) { }
private class SessionKeyWriterJob extends JobImpl {
public SessionKeyWriterJob() {

View File

@@ -8,6 +8,9 @@ package net.i2p.router;
*
*/
import java.io.IOException;
import java.io.OutputStream;
import java.util.Date;
import java.util.HashMap;
import java.util.Iterator;
@@ -79,8 +82,8 @@ public class Shitlist {
}
}
public String renderStatusHTML() {
StringBuffer buf = new StringBuffer();
public void renderStatusHTML(OutputStream out) throws IOException {
StringBuffer buf = new StringBuffer(1024);
buf.append("<h2>Shitlist</h2>");
Map shitlist = new HashMap();
synchronized (_shitlist) {
@@ -99,6 +102,6 @@ public class Shitlist {
buf.append("<li><b>").append(key.toBase64()).append("</b> was shitlisted on ").append(shitDate).append("</li>\n");
}
buf.append("</ul>\n");
return buf.toString();
out.write(buf.toString().getBytes());
}
}

View File

@@ -8,6 +8,9 @@ package net.i2p.router;
*
*/
import java.io.IOException;
import java.io.OutputStream;
import java.text.DecimalFormat;
import java.text.DecimalFormatSymbols;
import java.util.Locale;
@@ -106,7 +109,7 @@ public class StatisticsManager implements Service {
includeRate("crypto.garlic.decryptFail", stats, new long[] { 60*60*1000, 24*60*60*1000 });
includeRate("tunnel.unknownTunnelTimeLeft", stats, new long[] { 60*60*1000, 24*60*60*1000 });
includeRate("jobQueue.readyJobs", stats, new long[] { 60*1000, 60*60*1000 });
includeRate("jobQueue.droppedJobs", stats, new long[] { 60*60*1000, 24*60*60*1000 });
//includeRate("jobQueue.droppedJobs", stats, new long[] { 60*60*1000, 24*60*60*1000 });
includeRate("inNetPool.dropped", stats, new long[] { 60*60*1000, 24*60*60*1000 });
includeRate("tunnel.participatingTunnels", stats, new long[] { 5*60*1000, 60*60*1000 });
includeRate("tunnel.testSuccessTime", stats, new long[] { 60*60*1000l, 24*60*60*1000l });
@@ -114,6 +117,7 @@ public class StatisticsManager implements Service {
includeRate("tunnel.inboundMessagesProcessed", stats, new long[] { 10*60*1000, 60*60*1000 });
includeRate("tunnel.participatingMessagesProcessed", stats, new long[] { 10*60*1000, 60*60*1000 });
includeRate("tunnel.expiredAfterAcceptTime", stats, new long[] { 10*60*1000l, 60*60*1000l, 24*60*60*1000l });
includeRate("tunnel.bytesAllocatedAtAccept", stats, new long[] { 10*60*1000l, 60*60*1000l, 24*60*60*1000l });
includeRate("netDb.lookupsReceived", stats, new long[] { 5*60*1000, 60*60*1000 });
includeRate("netDb.lookupsHandled", stats, new long[] { 5*60*1000, 60*60*1000 });
includeRate("netDb.lookupsMatched", stats, new long[] { 5*60*1000, 60*60*1000 });
@@ -121,16 +125,17 @@ public class StatisticsManager implements Service {
includeRate("netDb.successPeers", stats, new long[] { 60*60*1000 });
includeRate("netDb.failedPeers", stats, new long[] { 60*60*1000 });
includeRate("netDb.searchCount", stats, new long[] { 3*60*60*1000});
includeRate("inNetMessage.timeToDiscard", stats, new long[] { 5*60*1000, 10*60*1000, 60*60*1000 });
includeRate("outNetMessage.timeToDiscard", stats, new long[] { 5*60*1000, 10*60*1000, 60*60*1000 });
//includeRate("inNetMessage.timeToDiscard", stats, new long[] { 5*60*1000, 10*60*1000, 60*60*1000 });
//includeRate("outNetMessage.timeToDiscard", stats, new long[] { 5*60*1000, 10*60*1000, 60*60*1000 });
includeRate("router.throttleNetworkCause", stats, new long[] { 10*60*1000, 60*60*1000 });
includeRate("transport.receiveMessageSize", stats, new long[] { 5*60*1000, 60*60*1000 });
includeRate("transport.sendMessageSize", stats, new long[] { 5*60*1000, 60*60*1000 });
includeRate("transport.sendMessageSmall", stats, new long[] { 5*60*1000, 60*60*1000 });
includeRate("transport.sendMessageMedium", stats, new long[] { 5*60*1000, 60*60*1000 });
includeRate("transport.sendMessageLarge", stats, new long[] { 5*60*1000, 60*60*1000 });
includeRate("transport.receiveMessageSmall", stats, new long[] { 5*60*1000, 60*60*1000 });
includeRate("transport.receiveMessageMedium", stats, new long[] { 5*60*1000, 60*60*1000 });
includeRate("transport.receiveMessageLarge", stats, new long[] { 5*60*1000, 60*60*1000 });
//includeRate("transport.sendMessageSize", stats, new long[] { 5*60*1000, 60*60*1000 });
//includeRate("transport.sendMessageSmall", stats, new long[] { 5*60*1000, 60*60*1000 });
//includeRate("transport.sendMessageMedium", stats, new long[] { 5*60*1000, 60*60*1000 });
//includeRate("transport.sendMessageLarge", stats, new long[] { 5*60*1000, 60*60*1000 });
//includeRate("transport.receiveMessageSmall", stats, new long[] { 5*60*1000, 60*60*1000 });
//includeRate("transport.receiveMessageMedium", stats, new long[] { 5*60*1000, 60*60*1000 });
//includeRate("transport.receiveMessageLarge", stats, new long[] { 5*60*1000, 60*60*1000 });
includeRate("client.sendAckTime", stats, new long[] { 60*60*1000, 24*60*60*1000l }, true);
stats.setProperty("stat_uptime", DataHelper.formatDuration(_context.router().getUptime()));
stats.setProperty("stat__rateKey", "avg;maxAvg;pctLifetime;[sat;satLim;maxSat;maxSatLim;][num;lifetimeFreq;maxFreq]");
@@ -257,5 +262,5 @@ public class StatisticsManager implements Service {
private final String num(double num) { synchronized (_fmt) { return _fmt.format(num); } }
private final String pct(double num) { synchronized (_pct) { return _pct.format(num); } }
public String renderStatusHTML() { return ""; }
public void renderStatusHTML(OutputStream out) { }
}

View File

@@ -60,13 +60,13 @@ public class SubmitMessageHistoryJob extends JobImpl {
I2PThread t = new I2PThread(new Runnable() {
public void run() {
_log.debug("Submitting data");
_context.messageHistory().setPauseFlushes(true);
String filename = _context.messageHistory().getFilename();
getContext().messageHistory().setPauseFlushes(true);
String filename = getContext().messageHistory().getFilename();
send(filename);
_context.messageHistory().setPauseFlushes(false);
Job job = new SubmitMessageHistoryJob(_context);
job.getTiming().setStartAfter(_context.clock().now() + getRequeueDelay());
_context.jobQueue().addJob(job);
getContext().messageHistory().setPauseFlushes(false);
Job job = new SubmitMessageHistoryJob(getContext());
job.getTiming().setStartAfter(getContext().clock().now() + getRequeueDelay());
getContext().jobQueue().addJob(job);
}
});
t.setName("SubmitData");
@@ -88,7 +88,7 @@ public class SubmitMessageHistoryJob extends JobImpl {
if (size > 0)
expectedSend += (int)size/10; // compression
FileInputStream fin = new FileInputStream(dataFile);
BandwidthLimitedInputStream in = new BandwidthLimitedInputStream(_context, fin, null, true);
BandwidthLimitedInputStream in = new BandwidthLimitedInputStream(getContext(), fin, null, true);
boolean sent = HTTPSendData.postData(url, size, in);
fin.close();
boolean deleted = dataFile.delete();
@@ -99,7 +99,7 @@ public class SubmitMessageHistoryJob extends JobImpl {
}
private String getURL() {
String str = _context.router().getConfigSetting(PARAM_SUBMIT_URL);
String str = getContext().router().getConfigSetting(PARAM_SUBMIT_URL);
if ( (str == null) || (str.trim().length() <= 0) )
return DEFAULT_SUBMIT_URL;
else
@@ -107,7 +107,7 @@ public class SubmitMessageHistoryJob extends JobImpl {
}
private boolean shouldSubmit() {
String str = _context.router().getConfigSetting(PARAM_SUBMIT_DATA);
String str = getContext().router().getConfigSetting(PARAM_SUBMIT_DATA);
if (str == null) {
_log.debug("History submit config not specified [" + PARAM_SUBMIT_DATA + "], default = " + DEFAULT_SUBMIT_DATA);
return DEFAULT_SUBMIT_DATA;

View File

@@ -60,4 +60,7 @@ public interface TunnelManagerFacade extends Service {
*
*/
boolean isInUse(Hash peer);
/** how many tunnels are we participating in? */
public int getParticipatingCount();
}

View File

@@ -1,5 +1,8 @@
package net.i2p.router.admin;
import java.io.IOException;
import java.io.OutputStream;
import net.i2p.router.RouterContext;
import net.i2p.router.Service;
import net.i2p.util.I2PThread;
@@ -18,7 +21,7 @@ public class AdminManager implements Service {
_log = context.logManager().getLog(AdminManager.class);
}
public String renderStatusHTML() { return ""; }
public void renderStatusHTML(OutputStream out) { }
public void shutdown() {
if (_listener != null) {

View File

@@ -47,7 +47,15 @@ class AdminRunner implements Runnable {
if (command.indexOf("favicon") >= 0) {
reply(out, "this is not a website");
} else if (command.indexOf("routerStats.html") >= 0) {
reply(out, _generator.generateStatsPage());
try {
out.write("HTTP/1.1 200 OK\nConnection: close\nCache-control: no-cache\nContent-type: text/html\n\n".getBytes());
_generator.generateStatsPage(out);
out.close();
} catch (IOException ioe) {
if (_log.shouldLog(Log.WARN))
_log.warn("Error writing out the admin reply");
throw ioe;
}
} else if (command.indexOf("/profile/") >= 0) {
replyText(out, getProfile(command));
} else if (command.indexOf("setTime") >= 0) {
@@ -60,7 +68,15 @@ class AdminRunner implements Runnable {
} else if (command.indexOf("/shutdown") >= 0) {
reply(out, shutdown(command));
} else if (true || command.indexOf("routerConsole.html") > 0) {
reply(out, _context.router().renderStatusHTML());
try {
out.write("HTTP/1.1 200 OK\nConnection: close\nCache-control: no-cache\nContent-type: text/html\n\n".getBytes());
_context.router().renderStatusHTML(out);
out.close();
} catch (IOException ioe) {
if (_log.shouldLog(Log.WARN))
_log.warn("Error writing out the admin reply");
throw ioe;
}
}
}
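
Both branches above repeat the same header-then-stream-then-close sequence; a hedged sketch of how it could be factored out (streamReply and StreamRenderer are assumptions, not code from this diff):

// Hypothetical refactoring of the shared reply pattern:
private void streamReply(OutputStream out, StreamRenderer renderer) throws IOException {
    out.write(("HTTP/1.1 200 OK\n" +
               "Connection: close\n" +
               "Cache-control: no-cache\n" +
               "Content-type: text/html\n\n").getBytes());
    renderer.render(out);  // e.g. _generator.generateStatsPage(out)
    out.close();
}

private interface StreamRenderer {
    void render(OutputStream out) throws IOException;
}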

View File

@@ -29,183 +29,180 @@ public class StatsGenerator {
_log = context.logManager().getLog(StatsGenerator.class);
}
public String generateStatsPage() {
ByteArrayOutputStream baos = new ByteArrayOutputStream(32*1024);
try {
generateStatsPage(baos);
} catch (IOException ioe) {
_log.error("Error generating stats", ioe);
}
return new String(baos.toByteArray());
}
public void generateStatsPage(OutputStream out) throws IOException {
PrintWriter pw = new PrintWriter(out);
pw.println("<html><head><title>I2P Router Stats</title></head><body>");
pw.println("<h1>Router statistics</h1>");
pw.println("<i><a href=\"/routerConsole.html\">console</a> | <a href=\"/routerStats.html\">stats</a></i><hr />");
StringBuffer buf = new StringBuffer(16*1024);
buf.append("<html><head><title>I2P Router Stats</title></head><body>");
buf.append("<h1>Router statistics</h1>");
buf.append("<i><a href=\"/routerConsole.html\">console</a> | <a href=\"/routerStats.html\">stats</a></i><hr />");
buf.append("<form action=\"/routerStats.html\">");
buf.append("<select name=\"go\" onChange='location.href=this.value'>");
out.write(buf.toString().getBytes());
buf.setLength(0);
Map groups = _context.statManager().getStatsByGroup();
pw.println("<form action=\"/routerStats.html\">");
pw.println("<select name=\"go\" onChange='location.href=this.value'>");
for (Iterator iter = groups.keySet().iterator(); iter.hasNext(); ) {
String group = (String)iter.next();
Set stats = (Set)groups.get(group);
pw.print("<option value=\"/routerStats.html#");
pw.print(group);
pw.print("\">");
pw.print(group);
pw.println("</option>\n");
buf.append("<option value=\"/routerStats.html#").append(group).append("\">");
buf.append(group).append("</option>\n");
for (Iterator statIter = stats.iterator(); statIter.hasNext(); ) {
String stat = (String)statIter.next();
pw.print("<option value=\"/routerStats.html#");
pw.print(stat);
pw.print("\">...");
pw.print(stat);
pw.println("</option>\n");
buf.append("<option value=\"/routerStats.html#");
buf.append(stat);
buf.append("\">...");
buf.append(stat);
buf.append("</option>\n");
}
out.write(buf.toString().getBytes());
buf.setLength(0);
}
pw.println("</select>");
pw.println("</form>");
buf.append("</select>");
buf.append("</form>");
pw.print("Statistics gathered during this router's uptime (");
buf.append("Statistics gathered during this router's uptime (");
long uptime = _context.router().getUptime();
pw.print(DataHelper.formatDuration(uptime));
pw.println("). The data gathered is quantized over a 1 minute period, so should just be used as an estimate<p />");
buf.append(DataHelper.formatDuration(uptime));
buf.append("). The data gathered is quantized over a 1 minute period, so should just be used as an estimate<p />");
out.write(buf.toString().getBytes());
buf.setLength(0);
for (Iterator iter = groups.keySet().iterator(); iter.hasNext(); ) {
String group = (String)iter.next();
Set stats = (Set)groups.get(group);
pw.print("<h2><a name=\"");
pw.print(group);
pw.print("\">");
pw.print(group);
pw.println("</a></h2>");
pw.println("<ul>");
buf.append("<h2><a name=\"");
buf.append(group);
buf.append("\">");
buf.append(group);
buf.append("</a></h2>");
buf.append("<ul>");
out.write(buf.toString().getBytes());
buf.setLength(0);
for (Iterator statIter = stats.iterator(); statIter.hasNext(); ) {
String stat = (String)statIter.next();
pw.print("<li><b><a name=\"");
pw.print(stat);
pw.print("\">");
pw.print(stat);
pw.println("</a></b><br />");
buf.append("<li><b><a name=\"");
buf.append(stat);
buf.append("\">");
buf.append(stat);
buf.append("</a></b><br />");
if (_context.statManager().isFrequency(stat))
renderFrequency(stat, pw);
renderFrequency(stat, buf);
else
renderRate(stat, pw);
renderRate(stat, buf);
out.write(buf.toString().getBytes());
buf.setLength(0);
}
pw.println("</ul><hr />");
out.write("</ul><hr />".getBytes());
}
pw.println("</body></html>");
pw.flush();
out.write("</body></html>".getBytes());
}
private void renderFrequency(String name, PrintWriter pw) throws IOException {
private void renderFrequency(String name, StringBuffer buf) {
FrequencyStat freq = _context.statManager().getFrequency(name);
pw.print("<i>");
pw.print(freq.getDescription());
pw.println("</i><br />");
buf.append("<i>");
buf.append(freq.getDescription());
buf.append("</i><br />");
long periods[] = freq.getPeriods();
Arrays.sort(periods);
for (int i = 0; i < periods.length; i++) {
renderPeriod(pw, periods[i], "frequency");
renderPeriod(buf, periods[i], "frequency");
Frequency curFreq = freq.getFrequency(periods[i]);
pw.print(" <i>avg per period:</i> (");
pw.print(num(curFreq.getAverageEventsPerPeriod()));
pw.print(", max ");
pw.print(num(curFreq.getMaxAverageEventsPerPeriod()));
buf.append(" <i>avg per period:</i> (");
buf.append(num(curFreq.getAverageEventsPerPeriod()));
buf.append(", max ");
buf.append(num(curFreq.getMaxAverageEventsPerPeriod()));
if ( (curFreq.getMaxAverageEventsPerPeriod() > 0) && (curFreq.getAverageEventsPerPeriod() > 0) ) {
pw.print(", current is ");
pw.print(pct(curFreq.getAverageEventsPerPeriod()/curFreq.getMaxAverageEventsPerPeriod()));
pw.print(" of max");
buf.append(", current is ");
buf.append(pct(curFreq.getAverageEventsPerPeriod()/curFreq.getMaxAverageEventsPerPeriod()));
buf.append(" of max");
}
pw.print(")");
buf.append(")");
//buf.append(" <i>avg interval between updates:</i> (").append(num(curFreq.getAverageInterval())).append("ms, min ");
//buf.append(num(curFreq.getMinAverageInterval())).append("ms)");
pw.print(" <i>strict average per period:</i> ");
pw.print(num(curFreq.getStrictAverageEventsPerPeriod()));
pw.print(" events (averaged ");
pw.print(" using the lifetime of ");
pw.print(num(curFreq.getEventCount()));
pw.print(" events)");
pw.println("<br />");
buf.append(" <i>strict average per period:</i> ");
buf.append(num(curFreq.getStrictAverageEventsPerPeriod()));
buf.append(" events (averaged ");
buf.append(" using the lifetime of ");
buf.append(num(curFreq.getEventCount()));
buf.append(" events)");
buf.append("<br />");
}
pw.println("<br />");
buf.append("<br />");
}
private void renderRate(String name, PrintWriter pw) throws IOException {
private void renderRate(String name, StringBuffer buf) {
RateStat rate = _context.statManager().getRate(name);
pw.print("<i>");
pw.print(rate.getDescription());
pw.println("</i><br />");
buf.append("<i>");
buf.append(rate.getDescription());
buf.append("</i><br />");
long periods[] = rate.getPeriods();
Arrays.sort(periods);
pw.println("<ul>");
buf.append("<ul>");
for (int i = 0; i < periods.length; i++) {
pw.println("<li>");
renderPeriod(pw, periods[i], "rate");
buf.append("<li>");
renderPeriod(buf, periods[i], "rate");
Rate curRate = rate.getRate(periods[i]);
pw.print( "<i>avg value:</i> (");
pw.print(num(curRate.getAverageValue()));
pw.print(" peak ");
pw.print(num(curRate.getExtremeAverageValue()));
pw.print(", [");
pw.print(pct(curRate.getPercentageOfExtremeValue()));
pw.print(" of max");
pw.print(", and ");
pw.print(pct(curRate.getPercentageOfLifetimeValue()));
pw.print(" of lifetime average]");
buf.append( "<i>avg value:</i> (");
buf.append(num(curRate.getAverageValue()));
buf.append(" peak ");
buf.append(num(curRate.getExtremeAverageValue()));
buf.append(", [");
buf.append(pct(curRate.getPercentageOfExtremeValue()));
buf.append(" of max");
buf.append(", and ");
buf.append(pct(curRate.getPercentageOfLifetimeValue()));
buf.append(" of lifetime average]");
pw.print(")");
pw.print(" <i>highest total period value:</i> (");
pw.print(num(curRate.getExtremeTotalValue()));
pw.print(")");
buf.append(")");
buf.append(" <i>highest total period value:</i> (");
buf.append(num(curRate.getExtremeTotalValue()));
buf.append(")");
if (curRate.getLifetimeTotalEventTime() > 0) {
pw.print(" <i>saturation:</i> (");
pw.print(pct(curRate.getLastEventSaturation()));
pw.print(")");
pw.print(" <i>saturated limit:</i> (");
pw.print(num(curRate.getLastSaturationLimit()));
pw.print(")");
pw.print(" <i>peak saturation:</i> (");
pw.print(pct(curRate.getExtremeEventSaturation()));
pw.print(")");
pw.print(" <i>peak saturated limit:</i> (");
pw.print(num(curRate.getExtremeSaturationLimit()));
pw.print(")");
buf.append(" <i>saturation:</i> (");
buf.append(pct(curRate.getLastEventSaturation()));
buf.append(")");
buf.append(" <i>saturated limit:</i> (");
buf.append(num(curRate.getLastSaturationLimit()));
buf.append(")");
buf.append(" <i>peak saturation:</i> (");
buf.append(pct(curRate.getExtremeEventSaturation()));
buf.append(")");
buf.append(" <i>peak saturated limit:</i> (");
buf.append(num(curRate.getExtremeSaturationLimit()));
buf.append(")");
}
pw.print(" <i>events per period:</i> ");
pw.print(num(curRate.getLastEventCount()));
buf.append(" <i>events per period:</i> ");
buf.append(num(curRate.getLastEventCount()));
long numPeriods = curRate.getLifetimePeriods();
if (numPeriods > 0) {
double avgFrequency = curRate.getLifetimeEventCount() / (double)numPeriods;
double peakFrequency = curRate.getExtremeEventCount();
pw.print(" (lifetime average: ");
pw.print(num(avgFrequency));
pw.print(", peak average: ");
pw.print(num(curRate.getExtremeEventCount()));
pw.println(")");
buf.append(" (lifetime average: ");
buf.append(num(avgFrequency));
buf.append(", peak average: ");
buf.append(num(curRate.getExtremeEventCount()));
buf.append(")");
}
pw.print("</li>");
buf.append("</li>");
if (i + 1 == periods.length) {
// last one, so lets display the strict average
pw.print("<li><b>lifetime average value:</b> ");
pw.print(num(curRate.getLifetimeAverageValue()));
pw.print(" over ");
pw.print(num(curRate.getLifetimeEventCount()));
pw.println(" events<br /></li>");
buf.append("<li><b>lifetime average value:</b> ");
buf.append(num(curRate.getLifetimeAverageValue()));
buf.append(" over ");
buf.append(num(curRate.getLifetimeEventCount()));
buf.append(" events<br /></li>");
}
}
pw.print("</ul>");
pw.println("<br />");
buf.append("</ul>");
buf.append("<br />");
}
private static void renderPeriod(PrintWriter pw, long period, String name) throws IOException {
pw.print("<b>");
pw.print(DataHelper.formatDuration(period));
pw.print(" ");
pw.print(name);
pw.print(":</b> ");
private static void renderPeriod(StringBuffer buf, long period, String name) {
buf.append("<b>");
buf.append(DataHelper.formatDuration(period));
buf.append(" ");
buf.append(name);
buf.append(":</b> ");
}
private final static DecimalFormat _fmt = new DecimalFormat("###,##0.00");
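
The rewrite above replaces the PrintWriter with a single reused StringBuffer that gets flushed to the OutputStream at stat and group boundaries, so memory stays bounded while writes remain batched. The rhythm, reduced to a skeleton (renderOneStat is a hypothetical stand-in for renderRate/renderFrequency):

// Buffer-and-flush skeleton of the pattern used throughout the rewrite:
StringBuffer buf = new StringBuffer(16*1024);
for (Iterator iter = stats.iterator(); iter.hasNext(); ) {
    String stat = (String)iter.next();
    renderOneStat(stat, buf);             // append HTML for one stat (hypothetical)
    out.write(buf.toString().getBytes()); // flush this chunk to the socket
    buf.setLength(0);                     // reuse the buffer, bounding memory
}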

View File

@@ -495,16 +495,16 @@ public class ClientConnectionRunner {
+ MessageStatusMessage.getStatusString(msg.getStatus())
+ " for session [" + _sessionId.getSessionId()
+ "] before they knew the messageId! delaying .5s");
_lastTried = ClientConnectionRunner.this._context.clock().now();
_lastTried = _context.clock().now();
requeue(REQUEUE_DELAY);
return;
}
boolean alreadyProcessed = false;
long beforeLock = MessageDeliveryStatusUpdate.this._context.clock().now();
long beforeLock = _context.clock().now();
long inLock = 0;
synchronized (_alreadyProcessed) {
inLock = MessageDeliveryStatusUpdate.this._context.clock().now();
inLock = _context.clock().now();
if (_alreadyProcessed.contains(_messageId)) {
_log.warn("Status already updated");
alreadyProcessed = true;
@@ -514,7 +514,7 @@ public class ClientConnectionRunner {
_alreadyProcessed.remove(0);
}
}
long afterLock = MessageDeliveryStatusUpdate.this._context.clock().now();
long afterLock = _context.clock().now();
if (afterLock - beforeLock > 50) {
_log.warn("MessageDeliveryStatusUpdate.locking took too long: " + (afterLock-beforeLock)
@@ -529,7 +529,7 @@ public class ClientConnectionRunner {
+ MessageStatusMessage.getStatusString(msg.getStatus())
+ " for session [" + _sessionId.getSessionId()
+ "] (with nonce=2), retrying after ["
+ (ClientConnectionRunner.this._context.clock().now() - _lastTried)
+ (_context.clock().now() - _lastTried)
+ "]", getAddedBy());
} else {
if (_log.shouldLog(Log.DEBUG))

View File

@@ -8,6 +8,9 @@ package net.i2p.router.client;
*
*/
import java.io.IOException;
import java.io.OutputStream;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Iterator;
@@ -305,8 +308,8 @@ public class ClientManager {
}
}
public String renderStatusHTML() {
StringBuffer buf = new StringBuffer();
public void renderStatusHTML(OutputStream out) throws IOException {
StringBuffer buf = new StringBuffer(8*1024);
buf.append("<h2>Clients</h2><ul>");
Map runners = null;
synchronized (_runners) {
@@ -325,7 +328,7 @@ public class ClientManager {
buf.append(runner.getLeaseSet()).append("</pre>\n");
}
buf.append("</ul>\n");
return buf.toString();
out.write(buf.toString().getBytes());
}
public void messageReceived(ClientMessage msg) {
@@ -335,7 +338,7 @@ public class ClientManager {
private class HandleJob extends JobImpl {
private ClientMessage _msg;
public HandleJob(ClientMessage msg) {
super(ClientManager.this._context);
super(_context);
_msg = msg;
}
public String getName() { return "Handle Inbound Client Messages"; }
@@ -347,7 +350,7 @@ public class ClientManager {
runner = getRunner(_msg.getDestinationHash());
if (runner != null) {
HandleJob.this._context.statManager().addRateData("client.receiveMessageSize",
_context.statManager().addRateData("client.receiveMessageSize",
_msg.getPayload().getSize(), 0);
runner.receiveMessage(_msg.getDestination(), null, _msg.getPayload());
} else {

View File

@@ -8,6 +8,9 @@ package net.i2p.router.client;
*
*/
import java.io.IOException;
import java.io.OutputStream;
import net.i2p.data.Destination;
import net.i2p.data.Hash;
import net.i2p.data.LeaseSet;
@@ -148,12 +151,8 @@ public class ClientManagerFacadeImpl extends ClientManagerFacade {
}
}
public String renderStatusHTML() {
public void renderStatusHTML(OutputStream out) throws IOException {
if (_manager != null)
return _manager.renderStatusHTML();
else {
_log.error("Null manager on renderStatusHTML!");
return null;
}
_manager.renderStatusHTML(out);
}
}

View File

@@ -65,6 +65,6 @@ class CreateSessionJob extends JobImpl {
// and load 'em up (using anything not yet set as the software defaults)
settings.readFromProperties(props);
_context.tunnelManager().createTunnels(_runner.getConfig().getDestination(), settings, LEASE_CREATION_TIMEOUT);
getContext().tunnelManager().createTunnels(_runner.getConfig().getDestination(), settings, LEASE_CREATION_TIMEOUT);
}
}

View File

@@ -48,7 +48,7 @@ class RequestLeaseSetJob extends JobImpl {
if (_runner.isDead()) return;
LeaseRequestState oldReq = _runner.getLeaseRequest();
if (oldReq != null) {
if (oldReq.getExpiration() > _context.clock().now()) {
if (oldReq.getExpiration() > getContext().clock().now()) {
_log.error("Old *current* leaseRequest already exists! Why are we trying to request too quickly?", getAddedBy());
return;
} else {
@@ -76,7 +76,7 @@ class RequestLeaseSetJob extends JobImpl {
try {
_runner.setLeaseRequest(state);
_runner.doSend(msg);
_context.jobQueue().addJob(new CheckLeaseRequestStatus(state));
getContext().jobQueue().addJob(new CheckLeaseRequestStatus(state));
return;
} catch (I2CPMessageException ime) {
_log.error("Error sending I2CP message requesting the lease set", ime);
@@ -97,7 +97,7 @@ class RequestLeaseSetJob extends JobImpl {
private LeaseRequestState _req;
public CheckLeaseRequestStatus(LeaseRequestState state) {
super(RequestLeaseSetJob.this._context);
super(RequestLeaseSetJob.this.getContext());
_req = state;
getTiming().setStartAfter(state.getExpiration());
}
@@ -111,7 +111,7 @@ class RequestLeaseSetJob extends JobImpl {
_log.error("Failed to receive a leaseSet in the time allotted (" + new Date(_req.getExpiration()) + ")");
_runner.disconnectClient("Took too long to request leaseSet");
if (_req.getOnFailed() != null)
RequestLeaseSetJob.this._context.jobQueue().addJob(_req.getOnFailed());
RequestLeaseSetJob.this.getContext().jobQueue().addJob(_req.getOnFailed());
// only zero out the request if its the one we know about
if (_req == _runner.getLeaseRequest())

View File

@@ -75,18 +75,18 @@ public class BuildTestMessageJob extends JobImpl {
_log.debug("Building garlic message to test " + _target.getIdentity().getHash().toBase64());
GarlicConfig config = buildGarlicCloveConfig();
// TODO: make the last params on this specify the correct sessionKey and tags used
ReplyJob replyJob = new JobReplyJob(_context, _onSend, config.getRecipient().getIdentity().getPublicKey(), config.getId(), null, new HashSet());
ReplyJob replyJob = new JobReplyJob(getContext(), _onSend, config.getRecipient().getIdentity().getPublicKey(), config.getId(), null, new HashSet());
MessageSelector sel = buildMessageSelector();
SendGarlicJob job = new SendGarlicJob(_context, config, null, _onSendFailed, replyJob, _onSendFailed, _timeoutMs, _priority, sel);
_context.jobQueue().addJob(job);
SendGarlicJob job = new SendGarlicJob(getContext(), config, null, _onSendFailed, replyJob, _onSendFailed, _timeoutMs, _priority, sel);
getContext().jobQueue().addJob(job);
}
private MessageSelector buildMessageSelector() {
return new TestMessageSelector(_testMessageKey, _timeoutMs + _context.clock().now());
return new TestMessageSelector(_testMessageKey, _timeoutMs + getContext().clock().now());
}
private GarlicConfig buildGarlicCloveConfig() {
_testMessageKey = _context.random().nextLong(I2NPMessage.MAX_ID_VALUE);
_testMessageKey = getContext().random().nextLong(I2NPMessage.MAX_ID_VALUE);
if (_log.shouldLog(Log.INFO))
_log.info("Test message key: " + _testMessageKey);
GarlicConfig config = new GarlicConfig();
@@ -105,8 +105,8 @@ public class BuildTestMessageJob extends JobImpl {
config.setCertificate(new Certificate(Certificate.CERTIFICATE_TYPE_NULL, null));
config.setDeliveryInstructions(instructions);
config.setId(_context.random().nextLong(I2NPMessage.MAX_ID_VALUE));
config.setExpiration(_timeoutMs+_context.clock().now()+2*Router.CLOCK_FUDGE_FACTOR);
config.setId(getContext().random().nextLong(I2NPMessage.MAX_ID_VALUE));
config.setExpiration(_timeoutMs+getContext().clock().now()+2*Router.CLOCK_FUDGE_FACTOR);
config.setRecipient(_target);
config.setRequestAck(false);
@@ -126,16 +126,16 @@ public class BuildTestMessageJob extends JobImpl {
ackInstructions.setDelaySeconds(0);
ackInstructions.setEncrypted(false);
DeliveryStatusMessage msg = new DeliveryStatusMessage(_context);
msg.setArrival(new Date(_context.clock().now()));
DeliveryStatusMessage msg = new DeliveryStatusMessage(getContext());
msg.setArrival(new Date(getContext().clock().now()));
msg.setMessageId(_testMessageKey);
if (_log.shouldLog(Log.DEBUG))
_log.debug("Delivery status message key: " + _testMessageKey + " arrival: " + msg.getArrival());
ackClove.setCertificate(new Certificate(Certificate.CERTIFICATE_TYPE_NULL, null));
ackClove.setDeliveryInstructions(ackInstructions);
ackClove.setExpiration(_timeoutMs+_context.clock().now());
ackClove.setId(_context.random().nextLong(I2NPMessage.MAX_ID_VALUE));
ackClove.setExpiration(_timeoutMs+getContext().clock().now());
ackClove.setId(getContext().random().nextLong(I2NPMessage.MAX_ID_VALUE));
ackClove.setPayload(msg);
ackClove.setRecipient(_target);
ackClove.setRequestAck(false);
@@ -187,9 +187,9 @@ public class BuildTestMessageJob extends JobImpl {
if ( (_keyDelivered != null) &&
(_sessionTagsDelivered != null) &&
(_sessionTagsDelivered.size() > 0) )
_context.sessionKeyManager().tagsDelivered(_target, _keyDelivered, _sessionTagsDelivered);
getContext().sessionKeyManager().tagsDelivered(_target, _keyDelivered, _sessionTagsDelivered);
_context.jobQueue().addJob(_job);
getContext().jobQueue().addJob(_job);
}
public void setMessage(I2NPMessage message) {

View File

@@ -47,7 +47,7 @@ public class HandleGarlicMessageJob extends JobImpl {
public HandleGarlicMessageJob(RouterContext context, GarlicMessage msg, RouterIdentity from, Hash fromHash) {
super(context);
_log = context.logManager().getLog(HandleGarlicMessageJob.class);
_context.statManager().createRateStat("crypto.garlic.decryptFail", "How often garlic messages are undecryptable", "Encryption", new long[] { 5*60*1000, 60*60*1000, 24*60*60*1000 });
getContext().statManager().createRateStat("crypto.garlic.decryptFail", "How often garlic messages are undecryptable", "Encryption", new long[] { 5*60*1000, 60*60*1000, 24*60*60*1000 });
if (_log.shouldLog(Log.DEBUG))
_log.debug("New handle garlicMessageJob called w/ message from [" + from + "]", new Exception("Debug"));
_message = msg;
@@ -60,9 +60,9 @@ public class HandleGarlicMessageJob extends JobImpl {
public String getName() { return "Handle Inbound Garlic Message"; }
public void runJob() {
CloveSet set = _parser.getGarlicCloves(_message, _context.keyManager().getPrivateKey());
CloveSet set = _parser.getGarlicCloves(_message, getContext().keyManager().getPrivateKey());
if (set == null) {
Set keys = _context.keyManager().getAllKeys();
Set keys = getContext().keyManager().getAllKeys();
if (_log.shouldLog(Log.DEBUG))
_log.debug("Decryption with the router's key failed, now try with the " + keys.size() + " leaseSet keys");
// our router key failed, which means that it was either encrypted wrong
@@ -95,8 +95,8 @@ public class HandleGarlicMessageJob extends JobImpl {
_log.error("CloveMessageParser failed to decrypt the message [" + _message.getUniqueId()
+ "] to us when received from [" + _fromHash + "] / [" + _from + "]",
new Exception("Decrypt garlic failed"));
_context.statManager().addRateData("crypto.garlic.decryptFail", 1, 0);
_context.messageHistory().messageProcessingError(_message.getUniqueId(),
getContext().statManager().addRateData("crypto.garlic.decryptFail", 1, 0);
getContext().messageHistory().messageProcessingError(_message.getUniqueId(),
_message.getClass().getName(),
"Garlic could not be decrypted");
}
@@ -116,7 +116,7 @@ public class HandleGarlicMessageJob extends JobImpl {
// this should be in its own thread perhaps? and maybe _cloves should be
// synced to disk?
List toRemove = new ArrayList(32);
long now = _context.clock().now();
long now = getContext().clock().now();
synchronized (_cloves) {
for (Iterator iter = _cloves.keySet().iterator(); iter.hasNext();) {
Long id = (Long)iter.next();
@@ -139,7 +139,7 @@ public class HandleGarlicMessageJob extends JobImpl {
_log.debug("Clove " + clove.getCloveId() + " expiring on " + clove.getExpiration()
+ " is not known");
}
long now = _context.clock().now();
long now = getContext().clock().now();
if (clove.getExpiration().getTime() < now) {
if (clove.getExpiration().getTime() < now + Router.CLOCK_FUDGE_FACTOR) {
_log.warn("Expired garlic received, but within our fudge factor ["
@@ -148,7 +148,7 @@ public class HandleGarlicMessageJob extends JobImpl {
if (_log.shouldLog(Log.DEBUG))
_log.error("Expired garlic clove received - replay attack in progress? [cloveId = "
+ clove.getCloveId() + " expiration = " + clove.getExpiration()
+ " now = " + (new Date(_context.clock().now())));
+ " now = " + (new Date(getContext().clock().now())));
return false;
}
}
@@ -174,7 +174,7 @@ public class HandleGarlicMessageJob extends JobImpl {
}
public void dropped() {
_context.messageHistory().messageProcessingError(_message.getUniqueId(),
getContext().messageHistory().messageProcessingError(_message.getUniqueId(),
_message.getClass().getName(),
"Dropped due to overload");
}

View File

@@ -42,7 +42,7 @@ public class HandleSourceRouteReplyMessageJob extends JobImpl {
public HandleSourceRouteReplyMessageJob(RouterContext context, SourceRouteReplyMessage msg, RouterIdentity from, Hash fromHash) {
super(context);
_log = _context.logManager().getLog(HandleSourceRouteReplyMessageJob.class);
_log = getContext().logManager().getLog(HandleSourceRouteReplyMessageJob.class);
_message = msg;
_from = from;
_fromHash = fromHash;
@@ -53,9 +53,9 @@ public class HandleSourceRouteReplyMessageJob extends JobImpl {
public String getName() { return "Handle Source Route Reply Message"; }
public void runJob() {
try {
long before = _context.clock().now();
_message.decryptHeader(_context.keyManager().getPrivateKey());
long after = _context.clock().now();
long before = getContext().clock().now();
_message.decryptHeader(getContext().keyManager().getPrivateKey());
long after = getContext().clock().now();
if ( (after-before) > 1000) {
if (_log.shouldLog(Log.WARN))
_log.warn("Took more than a second (" + (after-before)
@@ -71,7 +71,7 @@ public class HandleSourceRouteReplyMessageJob extends JobImpl {
+ _message.getUniqueId() + ")", dfe);
if (_log.shouldLog(Log.WARN))
_log.warn("Message header could not be decrypted: " + _message, getAddedBy());
_context.messageHistory().messageProcessingError(_message.getUniqueId(),
getContext().messageHistory().messageProcessingError(_message.getUniqueId(),
_message.getClass().getName(),
"Source route message header could not be decrypted");
return;
@@ -85,7 +85,7 @@ public class HandleSourceRouteReplyMessageJob extends JobImpl {
DeliveryInstructions instructions = _message.getDecryptedInstructions();
long now = _context.clock().now();
long now = getContext().clock().now();
long expiration = _message.getDecryptedExpiration();
// if it's expiring really soon, jack the expiration up 30 seconds
if (expiration < now+10*1000)
@@ -97,7 +97,7 @@ public class HandleSourceRouteReplyMessageJob extends JobImpl {
}
private boolean isValid() {
long now = _context.clock().now();
long now = getContext().clock().now();
if (_message.getDecryptedExpiration() < now) {
if (_message.getDecryptedExpiration() < now + Router.CLOCK_FUDGE_FACTOR) {
_log.info("Expired message received, but within our fudge factor");
@@ -135,7 +135,7 @@ public class HandleSourceRouteReplyMessageJob extends JobImpl {
// this should be in its own thread perhaps, or job? and maybe _seenMessages should be
// synced to disk?
List toRemove = new ArrayList(32);
long now = _context.clock().now()-Router.CLOCK_FUDGE_FACTOR;
long now = getContext().clock().now()-Router.CLOCK_FUDGE_FACTOR;
synchronized (_seenMessages) {
for (Iterator iter = _seenMessages.keySet().iterator(); iter.hasNext();) {
Long id = (Long)iter.next();
@@ -149,7 +149,7 @@ public class HandleSourceRouteReplyMessageJob extends JobImpl {
}
public void dropped() {
_context.messageHistory().messageProcessingError(_message.getUniqueId(),
getContext().messageHistory().messageProcessingError(_message.getUniqueId(),
_message.getClass().getName(),
"Dropped due to overload");
}

View File

@@ -52,7 +52,7 @@ public class HandleTunnelMessageJob extends JobImpl {
_handler = new I2NPMessageHandler(ctx);
ctx.statManager().createRateStat("tunnel.unknownTunnelTimeLeft", "How much time is left on tunnel messages we receive that are for unknown tunnels?", "Tunnels", new long[] { 5*60*1000l, 60*60*1000l, 24*60*60*1000l });
ctx.statManager().createRateStat("tunnel.gatewayMessageSize", "How large are the messages we are forwarding on as an inbound gateway?", "Tunnels", new long[] { 60*1000l, 60*60*1000l, 24*60*60*1000l });
ctx.statManager().createRateStat("tunnel.relayMessageSize", "How large are the messages we are forwarding on as a participant in a tunnel?", "Tunnels", new long[] { 60*1000l, 60*60*1000l, 24*60*60*1000l });
ctx.statManager().createRateStat("tunnel.relayMessageSize", "How large are the messages we are forwarding on as a participant in a tunnel?", "Tunnels", new long[] { 60*1000l, 10*60*1000l, 60*60*1000l, 24*60*60*1000l });
ctx.statManager().createRateStat("tunnel.endpointMessageSize", "How large are the messages we are forwarding in as an outbound endpoint?", "Tunnels", new long[] { 60*1000l, 60*60*1000l, 24*60*60*1000l });
ctx.statManager().createRateStat("tunnel.expiredAfterAcceptTime", "How long after expiration do we finally start running an expired tunnel message?", "Tunnels", new long[] { 10*60*1000l, 60*60*1000l, 24*60*60*1000l });
_message = msg;
@@ -64,7 +64,7 @@ public class HandleTunnelMessageJob extends JobImpl {
public void runJob() {
TunnelId id = _message.getTunnelId();
long excessLag = _context.clock().now() - _message.getMessageExpiration().getTime();
long excessLag = getContext().clock().now() - _message.getMessageExpiration().getTime();
if (excessLag > Router.CLOCK_FUDGE_FACTOR) {
// expired while on the queue
if (_log.shouldLog(Log.WARN))
@@ -72,8 +72,8 @@ public class HandleTunnelMessageJob extends JobImpl {
+ id.getTunnelId() + " expiring "
+ excessLag
+ "ms ago");
_context.statManager().addRateData("tunnel.expiredAfterAcceptTime", excessLag, excessLag);
_context.messageHistory().messageProcessingError(_message.getUniqueId(),
getContext().statManager().addRateData("tunnel.expiredAfterAcceptTime", excessLag, excessLag);
getContext().messageHistory().messageProcessingError(_message.getUniqueId(),
TunnelMessage.class.getName(),
"tunnel message expired on the queue");
return;
@@ -86,18 +86,18 @@ public class HandleTunnelMessageJob extends JobImpl {
+ "ms ago");
}
TunnelInfo info = _context.tunnelManager().getTunnelInfo(id);
TunnelInfo info = getContext().tunnelManager().getTunnelInfo(id);
if (info == null) {
Hash from = _fromHash;
if (_from != null)
from = _from.getHash();
_context.messageHistory().droppedTunnelMessage(id, from);
getContext().messageHistory().droppedTunnelMessage(id, from);
if (_log.shouldLog(Log.ERROR))
_log.error("Received a message for an unknown tunnel [" + id.getTunnelId()
+ "], dropping it: " + _message, getAddedBy());
long timeRemaining = _message.getMessageExpiration().getTime() - _context.clock().now();
_context.statManager().addRateData("tunnel.unknownTunnelTimeLeft", timeRemaining, 0);
long timeRemaining = _message.getMessageExpiration().getTime() - getContext().clock().now();
getContext().statManager().addRateData("tunnel.unknownTunnelTimeLeft", timeRemaining, 0);
return;
}
@@ -107,8 +107,8 @@ public class HandleTunnelMessageJob extends JobImpl {
if (info == null) {
if (_log.shouldLog(Log.ERROR))
_log.error("We are not part of a known tunnel?? wtf! drop.", getAddedBy());
long timeRemaining = _message.getMessageExpiration().getTime() - _context.clock().now();
_context.statManager().addRateData("tunnel.unknownTunnelTimeLeft", timeRemaining, 0);
long timeRemaining = _message.getMessageExpiration().getTime() - getContext().clock().now();
getContext().statManager().addRateData("tunnel.unknownTunnelTimeLeft", timeRemaining, 0);
return;
} else {
if (_log.shouldLog(Log.DEBUG))
@@ -123,7 +123,7 @@ public class HandleTunnelMessageJob extends JobImpl {
_log.debug("We are the gateway to tunnel " + id.getTunnelId());
byte data[] = _message.getData();
I2NPMessage msg = getBody(data);
_context.jobQueue().addJob(new HandleGatewayMessageJob(msg, info, data.length));
getContext().jobQueue().addJob(new HandleGatewayMessageJob(msg, info, data.length));
return;
} else {
if (_log.shouldLog(Log.DEBUG))
@@ -131,14 +131,14 @@ public class HandleTunnelMessageJob extends JobImpl {
if (_log.shouldLog(Log.DEBUG))
_log.debug("Process locally");
if (info.getDestination() != null) {
if (!_context.clientManager().isLocal(info.getDestination())) {
if (!getContext().clientManager().isLocal(info.getDestination())) {
if (_log.shouldLog(Log.WARN))
_log.warn("Received a message on a tunnel allocated to a client that has disconnected - dropping it!");
if (_log.shouldLog(Log.DEBUG))
_log.debug("Dropping message for disconnected client: " + _message);
_context.messageHistory().droppedOtherMessage(_message);
_context.messageHistory().messageProcessingError(_message.getUniqueId(),
getContext().messageHistory().droppedOtherMessage(_message);
getContext().messageHistory().messageProcessingError(_message.getUniqueId(),
_message.getClass().getName(),
"Disconnected client");
return;
@@ -147,7 +147,7 @@ public class HandleTunnelMessageJob extends JobImpl {
I2NPMessage body = getBody(_message.getData());
if (body != null) {
_context.jobQueue().addJob(new HandleLocallyJob(body, info));
getContext().jobQueue().addJob(new HandleLocallyJob(body, info));
return;
} else {
if (_log.shouldLog(Log.ERROR))
@@ -167,7 +167,7 @@ public class HandleTunnelMessageJob extends JobImpl {
} else {
// participant
TunnelVerificationStructure struct = _message.getVerificationStructure();
boolean ok = struct.verifySignature(_context, info.getVerificationKey().getKey());
boolean ok = struct.verifySignature(getContext(), info.getVerificationKey().getKey());
if (!ok) {
if (_log.shouldLog(Log.WARN))
_log.warn("Failed tunnel verification! Spoofing / tagging attack? " + _message, getAddedBy());
@@ -179,18 +179,18 @@ public class HandleTunnelMessageJob extends JobImpl {
+ " received where we're not the gateway and there are remaining hops, so forward it on to "
+ info.getNextHop().toBase64() + " via SendTunnelMessageJob");
_context.statManager().addRateData("tunnel.relayMessageSize",
getContext().statManager().addRateData("tunnel.relayMessageSize",
_message.getData().length, 0);
_context.jobQueue().addJob(new SendMessageDirectJob(_context, _message,
getContext().jobQueue().addJob(new SendMessageDirectJob(getContext(), _message,
info.getNextHop(),
_context.clock().now() + FORWARD_TIMEOUT,
getContext().clock().now() + FORWARD_TIMEOUT,
FORWARD_PRIORITY));
return;
} else {
if (_log.shouldLog(Log.DEBUG))
_log.debug("No more hops, unwrap and follow the instructions");
_context.jobQueue().addJob(new HandleEndpointJob(info));
getContext().jobQueue().addJob(new HandleEndpointJob(info));
return;
}
}
@@ -227,20 +227,20 @@ public class HandleTunnelMessageJob extends JobImpl {
_log.error("Unable to recover the body from the tunnel", getAddedBy());
return;
} else {
_context.jobQueue().addJob(new ProcessBodyLocallyJob(body, instructions, ourPlace));
getContext().jobQueue().addJob(new ProcessBodyLocallyJob(body, instructions, ourPlace));
}
}
}
private void honorInstructions(DeliveryInstructions instructions, I2NPMessage body) {
_context.statManager().addRateData("tunnel.endpointMessageSize", _message.getData().length, 0);
getContext().statManager().addRateData("tunnel.endpointMessageSize", _message.getData().length, 0);
switch (instructions.getDeliveryMode()) {
case DeliveryInstructions.DELIVERY_MODE_LOCAL:
sendToLocal(body);
break;
case DeliveryInstructions.DELIVERY_MODE_ROUTER:
if (_context.routerHash().equals(instructions.getRouter())) {
if (getContext().routerHash().equals(instructions.getRouter())) {
if (_log.shouldLog(Log.DEBUG))
_log.debug("Delivery instructions point at a router, but we're that router, so send to local");
sendToLocal(body);
@@ -261,7 +261,7 @@ public class HandleTunnelMessageJob extends JobImpl {
private void sendToDest(Hash dest, I2NPMessage body) {
if (body instanceof DataMessage) {
boolean isLocal = _context.clientManager().isLocal(dest);
boolean isLocal = getContext().clientManager().isLocal(dest);
if (isLocal) {
deliverMessage(null, dest, (DataMessage)body);
return;
@@ -282,17 +282,17 @@ public class HandleTunnelMessageJob extends JobImpl {
if (_log.shouldLog(Log.DEBUG))
_log.debug("Sending on to requested tunnel " + id.getTunnelId() + " on router "
+ router.toBase64());
TunnelMessage msg = new TunnelMessage(_context);
TunnelMessage msg = new TunnelMessage(getContext());
msg.setTunnelId(id);
try {
ByteArrayOutputStream baos = new ByteArrayOutputStream(1024);
body.writeBytes(baos);
msg.setData(baos.toByteArray());
long exp = _context.clock().now() + FORWARD_TIMEOUT;
_context.jobQueue().addJob(new SendMessageDirectJob(_context, msg, router, exp, FORWARD_PRIORITY));
long exp = getContext().clock().now() + FORWARD_TIMEOUT;
getContext().jobQueue().addJob(new SendMessageDirectJob(getContext(), msg, router, exp, FORWARD_PRIORITY));
String bodyType = body.getClass().getName();
_context.messageHistory().wrap(bodyType, body.getUniqueId(), TunnelMessage.class.getName(), msg.getUniqueId());
getContext().messageHistory().wrap(bodyType, body.getUniqueId(), TunnelMessage.class.getName(), msg.getUniqueId());
} catch (DataFormatException dfe) {
if (_log.shouldLog(Log.ERROR))
_log.error("Error writing out the message to forward to the tunnel", dfe);
@@ -306,26 +306,26 @@ public class HandleTunnelMessageJob extends JobImpl {
// TODO: we may want to send it via a tunnel later on, but for now, direct will do.
if (_log.shouldLog(Log.DEBUG))
_log.debug("Sending on to requested router " + router.toBase64());
long exp = _context.clock().now() + FORWARD_TIMEOUT;
_context.jobQueue().addJob(new SendMessageDirectJob(_context, body, router, exp, FORWARD_PRIORITY));
long exp = getContext().clock().now() + FORWARD_TIMEOUT;
getContext().jobQueue().addJob(new SendMessageDirectJob(getContext(), body, router, exp, FORWARD_PRIORITY));
}
private void sendToLocal(I2NPMessage body) {
InNetMessage msg = new InNetMessage(_context);
InNetMessage msg = new InNetMessage(getContext());
msg.setMessage(body);
msg.setFromRouter(_from);
msg.setFromRouterHash(_fromHash);
_context.inNetMessagePool().add(msg);
getContext().inNetMessagePool().add(msg);
}
private void deliverMessage(Destination dest, Hash destHash, DataMessage msg) {
boolean valid = _context.messageValidator().validateMessage(msg.getUniqueId(), msg.getMessageExpiration().getTime());
boolean valid = getContext().messageValidator().validateMessage(msg.getUniqueId(), msg.getMessageExpiration().getTime());
if (!valid) {
if (_log.shouldLog(Log.WARN))
_log.warn("Duplicate data message received [" + msg.getUniqueId()
+ " expiring on " + msg.getMessageExpiration() + "]");
_context.messageHistory().droppedOtherMessage(msg);
_context.messageHistory().messageProcessingError(msg.getUniqueId(), msg.getClass().getName(),
getContext().messageHistory().droppedOtherMessage(msg);
getContext().messageHistory().messageProcessingError(msg.getUniqueId(), msg.getClass().getName(),
"Duplicate payload");
return;
}
@@ -344,9 +344,9 @@ public class HandleTunnelMessageJob extends JobImpl {
cmsg.setPayload(payload);
cmsg.setReceptionInfo(info);
_context.messageHistory().receivePayloadMessage(msg.getUniqueId());
getContext().messageHistory().receivePayloadMessage(msg.getUniqueId());
// if the destination isn't local, the ClientMessagePool forwards it off as an OutboundClientMessageJob
_context.clientMessagePool().add(cmsg);
getContext().clientMessagePool().add(cmsg);
}
private I2NPMessage getBody(byte body[]) {
@@ -364,9 +364,9 @@ public class HandleTunnelMessageJob extends JobImpl {
private I2NPMessage decryptBody(byte encryptedMessage[], SessionKey key) {
byte iv[] = new byte[16];
Hash h = _context.sha().calculateHash(key.getData());
Hash h = getContext().sha().calculateHash(key.getData());
System.arraycopy(h.getData(), 0, iv, 0, iv.length);
byte decrypted[] = _context.AESEngine().safeDecrypt(encryptedMessage, key, iv);
byte decrypted[] = getContext().AESEngine().safeDecrypt(encryptedMessage, key, iv);
if (decrypted == null) {
if (_log.shouldLog(Log.ERROR))
_log.error("Error decrypting the message", getAddedBy());
@@ -378,9 +378,9 @@ public class HandleTunnelMessageJob extends JobImpl {
private DeliveryInstructions getInstructions(byte encryptedInstructions[], SessionKey key) {
try {
byte iv[] = new byte[16];
Hash h = _context.sha().calculateHash(key.getData());
Hash h = getContext().sha().calculateHash(key.getData());
System.arraycopy(h.getData(), 0, iv, 0, iv.length);
byte decrypted[] = _context.AESEngine().safeDecrypt(encryptedInstructions, key, iv);
byte decrypted[] = getContext().AESEngine().safeDecrypt(encryptedInstructions, key, iv);
if (decrypted == null) {
if (_log.shouldLog(Log.ERROR))
_log.error("Error decrypting the instructions", getAddedBy());
@@ -400,7 +400,7 @@ public class HandleTunnelMessageJob extends JobImpl {
}
private TunnelInfo getUs(TunnelInfo info) {
Hash us = _context.routerHash();
Hash us = getContext().routerHash();
while (info != null) {
if (us.equals(info.getThisHop()))
return info;
@@ -423,7 +423,7 @@ public class HandleTunnelMessageJob extends JobImpl {
return false;
}
if (!vstruct.verifySignature(_context, info.getVerificationKey().getKey())) {
if (!vstruct.verifySignature(getContext(), info.getVerificationKey().getKey())) {
if (_log.shouldLog(Log.ERROR))
_log.error("Received a tunnel message with an invalid signature!");
// shitlist the sender?
@@ -431,7 +431,7 @@ public class HandleTunnelMessageJob extends JobImpl {
}
// now validate the message
Hash msgHash = _context.sha().calculateHash(_message.getData());
Hash msgHash = getContext().sha().calculateHash(_message.getData());
if (msgHash.equals(vstruct.getMessageHash())) {
// hash matches. good.
return true;
@@ -444,7 +444,7 @@ public class HandleTunnelMessageJob extends JobImpl {
}
public void dropped() {
_context.messageHistory().messageProcessingError(_message.getUniqueId(), _message.getClass().getName(),
getContext().messageHistory().messageProcessingError(_message.getUniqueId(), _message.getClass().getName(),
"Dropped due to overload");
}
@@ -459,13 +459,13 @@ public class HandleTunnelMessageJob extends JobImpl {
private TunnelInfo _info;
public HandleGatewayMessageJob(I2NPMessage body, TunnelInfo tunnel, int length) {
super(HandleTunnelMessageJob.this._context);
super(HandleTunnelMessageJob.this.getContext());
_body = body;
_length = length;
_info = tunnel;
}
public void runJob() {
RouterContext ctx = HandleTunnelMessageJob.this._context;
RouterContext ctx = HandleTunnelMessageJob.this.getContext();
if (_body != null) {
ctx.statManager().addRateData("tunnel.gatewayMessageSize", _length, 0);
if (_log.shouldLog(Log.INFO))
@@ -488,7 +488,7 @@ public class HandleTunnelMessageJob extends JobImpl {
private TunnelInfo _info;
public HandleLocallyJob(I2NPMessage body, TunnelInfo tunnel) {
super(HandleTunnelMessageJob.this._context);
super(HandleTunnelMessageJob.this.getContext());
_body = body;
_info = tunnel;
}
@@ -507,11 +507,11 @@ public class HandleTunnelMessageJob extends JobImpl {
_log.info("Message for tunnel " + _info.getTunnelId() +
" received at the gateway (us), but its a 0 length tunnel though it is a "
+ _body.getClass().getName() + ", so process it locally");
InNetMessage msg = new InNetMessage(HandleLocallyJob.this._context);
InNetMessage msg = new InNetMessage(HandleLocallyJob.this.getContext());
msg.setFromRouter(_from);
msg.setFromRouterHash(_fromHash);
msg.setMessage(_body);
HandleLocallyJob.this._context.inNetMessagePool().add(msg);
HandleLocallyJob.this.getContext().inNetMessagePool().add(msg);
if (_log.shouldLog(Log.DEBUG))
_log.debug("Message added to Inbound network pool for local processing: " + _message);
}
@@ -523,7 +523,7 @@ public class HandleTunnelMessageJob extends JobImpl {
private class HandleEndpointJob extends JobImpl {
private TunnelInfo _info;
public HandleEndpointJob(TunnelInfo info) {
super(HandleTunnelMessageJob.this._context);
super(HandleTunnelMessageJob.this.getContext());
_info = info;
}
public void runJob() {
@@ -538,7 +538,7 @@ public class HandleTunnelMessageJob extends JobImpl {
private TunnelInfo _ourPlace;
private DeliveryInstructions _instructions;
public ProcessBodyLocallyJob(I2NPMessage body, DeliveryInstructions instructions, TunnelInfo ourPlace) {
super(HandleTunnelMessageJob.this._context);
super(HandleTunnelMessageJob.this.getContext());
_body = body;
_instructions = instructions;
_ourPlace = ourPlace;

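The mechanical change running through the file above, and through every file that follows, is the same: the job's RouterContext is no longer touched as a field but through an accessor, and inner job classes name the enclosing instance explicitly when handing the context to their own super() call. A minimal sketch of that shape, assuming the accessor is public and that Job requires runJob()/getName() as the hunks suggest; SomeJob and InnerJob are hypothetical names:

    public abstract class JobImpl implements Job {
        private RouterContext _context;   // private now, so subclasses go through the accessor
        protected JobImpl(RouterContext context) { _context = context; }
        public RouterContext getContext() { return _context; }
    }

    public class SomeJob extends JobImpl {
        public SomeJob(RouterContext ctx) { super(ctx); }
        public String getName() { return "Some job"; }
        public void runJob() {
            long now = getContext().clock().now();   // was: _context.clock().now()
        }
        // an inner job extends JobImpl too, so it qualifies the accessor to make
        // sure the enclosing job's context is the one passed up to super()
        private class InnerJob extends JobImpl {
            public InnerJob() { super(SomeJob.this.getContext()); }
            public String getName() { return "Inner job"; }
            public void runJob() {}
        }
    }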
View File

@@ -121,7 +121,7 @@ public class OutboundClientMessageJob extends JobImpl {
}
}
_overallExpiration = timeoutMs + _context.clock().now();
_overallExpiration = timeoutMs + getContext().clock().now();
_status = new OutboundClientMessageStatus(msg);
_nextStep = new NextStepJob();
_lookupLeaseSetFailed = new LookupLeaseSetFailedJob();
@@ -137,11 +137,11 @@ public class OutboundClientMessageJob extends JobImpl {
if (_log.shouldLog(Log.DEBUG))
_log.debug(getJobId() + ": Clove built");
Hash to = _status.getTo().calculateHash();
long timeoutMs = _overallExpiration - _context.clock().now();
long timeoutMs = _overallExpiration - getContext().clock().now();
if (_log.shouldLog(Log.DEBUG))
_log.debug(getJobId() + ": Send outbound client message - sending off leaseSet lookup job");
_status.incrementLookups();
_context.netDb().lookupLeaseSet(to, _nextStep, _lookupLeaseSetFailed, timeoutMs);
getContext().netDb().lookupLeaseSet(to, _nextStep, _lookupLeaseSetFailed, timeoutMs);
}
/**
@@ -163,7 +163,7 @@ public class OutboundClientMessageJob extends JobImpl {
return;
}
long now = _context.clock().now();
long now = getContext().clock().now();
if (now >= _overallExpiration) {
if (_log.shouldLog(Log.WARN))
_log.warn(getJobId() + ": sendNext() - Expired");
@@ -183,13 +183,13 @@ public class OutboundClientMessageJob extends JobImpl {
_log.warn(getJobId() + ": No more leases, and we still haven't heard back from the peer"
+ ", refetching the leaseSet to try again");
_status.setLeaseSet(null);
long remainingMs = _overallExpiration - _context.clock().now();
long remainingMs = _overallExpiration - getContext().clock().now();
if (_status.getNumLookups() < MAX_LEASE_LOOKUPS) {
_status.incrementLookups();
Hash to = _status.getMessage().getDestination().calculateHash();
_status.clearAlreadySent(); // so we can send down old tunnels again
_context.netDb().fail(to); // so we don't just fetch what we have
_context.netDb().lookupLeaseSet(to, _nextStep, _lookupLeaseSetFailed, remainingMs);
getContext().netDb().fail(to); // so we don't just fetch what we have
getContext().netDb().lookupLeaseSet(to, _nextStep, _lookupLeaseSetFailed, remainingMs);
return;
} else {
if (_log.shouldLog(Log.WARN))
@@ -200,7 +200,7 @@ public class OutboundClientMessageJob extends JobImpl {
}
}
_context.jobQueue().addJob(new SendJob(nextLease));
getContext().jobQueue().addJob(new SendJob(nextLease));
}
/**
@@ -214,7 +214,7 @@ public class OutboundClientMessageJob extends JobImpl {
private Lease getNextLease() {
LeaseSet ls = _status.getLeaseSet();
if (ls == null) {
ls = _context.netDb().lookupLeaseSetLocally(_status.getTo().calculateHash());
ls = getContext().netDb().lookupLeaseSetLocally(_status.getTo().calculateHash());
if (ls == null) {
if (_log.shouldLog(Log.INFO))
_log.info(getJobId() + ": Lookup locally didn't find the leaseSet");
@@ -225,7 +225,7 @@ public class OutboundClientMessageJob extends JobImpl {
}
_status.setLeaseSet(ls);
}
long now = _context.clock().now();
long now = getContext().clock().now();
// get the possible leases
List leases = new ArrayList(4);
@@ -285,7 +285,7 @@ public class OutboundClientMessageJob extends JobImpl {
_log.warn(getJobId() + ": Bundle leaseSet probability overridden incorrectly ["
+ str + "]", nfe);
}
if (probability >= _context.random().nextInt(100))
if (probability >= getContext().random().nextInt(100))
return true;
else
return false;
@@ -303,16 +303,16 @@ public class OutboundClientMessageJob extends JobImpl {
*
*/
private void send(Lease lease) {
long token = _context.random().nextLong(I2NPMessage.MAX_ID_VALUE);
long token = getContext().random().nextLong(I2NPMessage.MAX_ID_VALUE);
PublicKey key = _status.getLeaseSet().getEncryptionKey();
SessionKey sessKey = new SessionKey();
Set tags = new HashSet();
LeaseSet replyLeaseSet = null;
if (_shouldBundle) {
replyLeaseSet = _context.netDb().lookupLeaseSetLocally(_status.getFrom().calculateHash());
replyLeaseSet = getContext().netDb().lookupLeaseSetLocally(_status.getFrom().calculateHash());
}
GarlicMessage msg = OutboundClientMessageJobHelper.createGarlicMessage(_context, token,
GarlicMessage msg = OutboundClientMessageJobHelper.createGarlicMessage(getContext(), token,
_overallExpiration, key,
_status.getClove(),
_status.getTo(), sessKey,
@@ -338,12 +338,12 @@ public class OutboundClientMessageJob extends JobImpl {
_log.debug(getJobId() + ": Sending tunnel message out " + outTunnelId + " to "
+ lease.getTunnelId() + " on "
+ lease.getRouterIdentity().getHash().toBase64());
SendTunnelMessageJob j = new SendTunnelMessageJob(_context, msg, outTunnelId,
SendTunnelMessageJob j = new SendTunnelMessageJob(getContext(), msg, outTunnelId,
lease.getRouterIdentity().getHash(),
lease.getTunnelId(), null, onReply,
onFail, selector, SEND_TIMEOUT_MS,
SEND_PRIORITY);
_context.jobQueue().addJob(j);
getContext().jobQueue().addJob(j);
} else {
if (_log.shouldLog(Log.ERROR))
_log.error(getJobId() + ": Could not find any outbound tunnels to send the payload through... wtf?");
@@ -360,7 +360,7 @@ public class OutboundClientMessageJob extends JobImpl {
TunnelSelectionCriteria crit = new TunnelSelectionCriteria();
crit.setMaximumTunnelsRequired(1);
crit.setMinimumTunnelsRequired(1);
List tunnelIds = _context.tunnelManager().selectOutboundTunnelIds(crit);
List tunnelIds = getContext().tunnelManager().selectOutboundTunnelIds(crit);
if (tunnelIds.size() <= 0)
return null;
else
@@ -375,7 +375,7 @@ public class OutboundClientMessageJob extends JobImpl {
private void dieFatal() {
if (_status.getSuccess()) return;
boolean alreadyFailed = _status.failed();
long sendTime = _context.clock().now() - _status.getStart();
long sendTime = getContext().clock().now() - _status.getStart();
ClientMessage msg = _status.getMessage();
if (alreadyFailed) {
if (_log.shouldLog(Log.DEBUG))
@@ -390,10 +390,10 @@ public class OutboundClientMessageJob extends JobImpl {
new Exception("Message send failure"));
}
_context.messageHistory().sendPayloadMessage(msg.getMessageId().getMessageId(), false, sendTime);
_context.clientManager().messageDeliveryStatusUpdate(msg.getFromDestination(), msg.getMessageId(), false);
_context.statManager().updateFrequency("client.sendMessageFailFrequency");
_context.statManager().addRateData("client.sendAttemptAverage", _status.getNumSent(), sendTime);
getContext().messageHistory().sendPayloadMessage(msg.getMessageId().getMessageId(), false, sendTime);
getContext().clientManager().messageDeliveryStatusUpdate(msg.getFromDestination(), msg.getMessageId(), false);
getContext().statManager().updateFrequency("client.sendMessageFailFrequency");
getContext().statManager().addRateData("client.sendAttemptAverage", _status.getNumSent(), sendTime);
}
/** build the payload clove that will be used for all of the messages, placing the clove in the status structure */
@@ -411,9 +411,9 @@ public class OutboundClientMessageJob extends JobImpl {
clove.setCertificate(new Certificate(Certificate.CERTIFICATE_TYPE_NULL, null));
clove.setDeliveryInstructions(instructions);
clove.setExpiration(_overallExpiration);
clove.setId(_context.random().nextLong(I2NPMessage.MAX_ID_VALUE));
clove.setId(getContext().random().nextLong(I2NPMessage.MAX_ID_VALUE));
DataMessage msg = new DataMessage(_context);
DataMessage msg = new DataMessage(getContext());
msg.setData(_status.getMessage().getPayload().getEncryptedData());
clove.setPayload(msg);
@@ -450,7 +450,7 @@ public class OutboundClientMessageJob extends JobImpl {
_failure = false;
_numLookups = 0;
_previousSent = 0;
_start = _context.clock().now();
_start = getContext().clock().now();
}
/** raw payload */
@@ -572,7 +572,7 @@ public class OutboundClientMessageJob extends JobImpl {
/** queued by the db lookup success and the send timeout to get us to try the next lease */
private class NextStepJob extends JobImpl {
public NextStepJob() {
super(OutboundClientMessageJob.this._context);
super(OutboundClientMessageJob.this.getContext());
}
public String getName() { return "Process next step for outbound client message"; }
public void runJob() { sendNext(); }
@@ -585,7 +585,7 @@ public class OutboundClientMessageJob extends JobImpl {
*/
private class LookupLeaseSetFailedJob extends JobImpl {
public LookupLeaseSetFailedJob() {
super(OutboundClientMessageJob.this._context);
super(OutboundClientMessageJob.this.getContext());
}
public String getName() { return "Lookup for outbound client message failed"; }
public void runJob() {
@@ -597,7 +597,7 @@ public class OutboundClientMessageJob extends JobImpl {
private class SendJob extends JobImpl {
private Lease _lease;
public SendJob(Lease lease) {
super(OutboundClientMessageJob.this._context);
super(OutboundClientMessageJob.this.getContext());
_lease = lease;
}
public String getName() { return "Send outbound client message through the lease"; }
@@ -620,7 +620,7 @@ public class OutboundClientMessageJob extends JobImpl {
*
*/
public SendSuccessJob(Lease lease, SessionKey key, Set tags) {
super(OutboundClientMessageJob.this._context);
super(OutboundClientMessageJob.this.getContext());
_lease = lease;
_key = key;
_tags = tags;
@@ -628,7 +628,7 @@ public class OutboundClientMessageJob extends JobImpl {
public String getName() { return "Send client message successful to a lease"; }
public void runJob() {
long sendTime = _context.clock().now() - _status.getStart();
long sendTime = getContext().clock().now() - _status.getStart();
boolean alreadySuccessful = _status.success();
MessageId msgId = _status.getMessage().getMessageId();
if (_log.shouldLog(Log.INFO))
@@ -639,8 +639,10 @@ public class OutboundClientMessageJob extends JobImpl {
+ _status.getNumSent() + " sends");
if ( (_key != null) && (_tags != null) && (_tags.size() > 0) ) {
_context.sessionKeyManager().tagsDelivered(_status.getLeaseSet().getEncryptionKey(),
_key, _tags);
LeaseSet ls = _status.getLeaseSet();
if (ls != null)
getContext().sessionKeyManager().tagsDelivered(ls.getEncryptionKey(),
_key, _tags);
}
if (alreadySuccessful) {
@@ -651,13 +653,13 @@ public class OutboundClientMessageJob extends JobImpl {
return;
}
long dataMsgId = _status.getClove().getId();
_context.messageHistory().sendPayloadMessage(dataMsgId, true, sendTime);
_context.clientManager().messageDeliveryStatusUpdate(_status.getFrom(), msgId, true);
getContext().messageHistory().sendPayloadMessage(dataMsgId, true, sendTime);
getContext().clientManager().messageDeliveryStatusUpdate(_status.getFrom(), msgId, true);
_lease.setNumSuccess(_lease.getNumSuccess()+1);
_context.statManager().addRateData("client.sendAckTime", sendTime, 0);
_context.statManager().addRateData("client.sendMessageSize", _status.getMessage().getPayload().getSize(), sendTime);
_context.statManager().addRateData("client.sendAttemptAverage", _status.getNumSent(), sendTime);
getContext().statManager().addRateData("client.sendAckTime", sendTime, 0);
getContext().statManager().addRateData("client.sendMessageSize", _status.getMessage().getPayload().getSize(), sendTime);
getContext().statManager().addRateData("client.sendAttemptAverage", _status.getNumSent(), sendTime);
}
public void setMessage(I2NPMessage msg) {}
@@ -672,7 +674,7 @@ public class OutboundClientMessageJob extends JobImpl {
private Lease _lease;
public SendTimeoutJob(Lease lease) {
super(OutboundClientMessageJob.this._context);
super(OutboundClientMessageJob.this.getContext());
_lease = lease;
}

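Beyond the accessor conversion, the last hunk above (in SendSuccessJob) carries a behavioral fix: the cached leaseSet can be cleared by the retry path while an ACK is still in flight, so the tags-delivered call now checks for null first. The two sides of that race, condensed from the hunks above:

    // retry path (sendNext): clear the cache so the refetch is genuine
    _status.setLeaseSet(null);
    getContext().netDb().fail(to);   // so we don't just fetch what we have
    getContext().netDb().lookupLeaseSet(to, _nextStep, _lookupLeaseSetFailed, remainingMs);

    // ACK path (SendSuccessJob.runJob): the leaseSet may be gone by now
    LeaseSet ls = _status.getLeaseSet();
    if (ls != null)
        getContext().sessionKeyManager().tagsDelivered(ls.getEncryptionKey(), _key, _tags);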
View File

@@ -76,20 +76,20 @@ public class SendGarlicJob extends JobImpl {
public String getName() { return "Build Garlic Message"; }
public void runJob() {
long before = _context.clock().now();
_message = GarlicMessageBuilder.buildMessage(_context, _config, _wrappedKey, _wrappedTags);
long after = _context.clock().now();
long before = getContext().clock().now();
_message = GarlicMessageBuilder.buildMessage(getContext(), _config, _wrappedKey, _wrappedTags);
long after = getContext().clock().now();
if ( (after - before) > 1000) {
_log.warn("Building the garlic took too long [" + (after-before)+" ms]", getAddedBy());
} else {
_log.debug("Building the garlic was fast! " + (after - before) + " ms");
}
_context.jobQueue().addJob(new SendJob());
getContext().jobQueue().addJob(new SendJob());
}
private class SendJob extends JobImpl {
public SendJob() {
super(SendGarlicJob.this._context);
super(SendGarlicJob.this.getContext());
}
public String getName() { return "Send Built Garlic Message"; }
public void runJob() {
@@ -102,7 +102,7 @@ public class SendGarlicJob extends JobImpl {
}
private void sendGarlic() {
OutNetMessage msg = new OutNetMessage(_context);
OutNetMessage msg = new OutNetMessage(getContext());
long when = _message.getMessageExpiration().getTime(); // + Router.CLOCK_FUDGE_FACTOR;
msg.setExpiration(when);
msg.setMessage(_message);
@@ -116,7 +116,7 @@ public class SendGarlicJob extends JobImpl {
//_log.info("Sending garlic message to [" + _config.getRecipient() + "] encrypted with " + _config.getRecipientPublicKey() + " or " + _config.getRecipient().getIdentity().getPublicKey());
//_log.debug("Garlic config data:\n" + _config);
//msg.setTarget(_target);
_context.outNetMessagePool().add(msg);
getContext().outNetMessagePool().add(msg);
_log.debug("Garlic message added to outbound network message pool");
}
}

View File

@@ -36,7 +36,7 @@ public class SendMessageAckJob extends JobImpl {
}
public void runJob() {
_context.jobQueue().addJob(new SendReplyMessageJob(_context, _block, createAckMessage(), ACK_PRIORITY));
getContext().jobQueue().addJob(new SendReplyMessageJob(getContext(), _block, createAckMessage(), ACK_PRIORITY));
}
/**
@@ -48,8 +48,8 @@ public class SendMessageAckJob extends JobImpl {
*
*/
protected I2NPMessage createAckMessage() {
DeliveryStatusMessage statusMessage = new DeliveryStatusMessage(_context);
statusMessage.setArrival(new Date(_context.clock().now()));
DeliveryStatusMessage statusMessage = new DeliveryStatusMessage(getContext());
statusMessage.setArrival(new Date(getContext().clock().now()));
statusMessage.setMessageId(_ackId);
return statusMessage;
}

View File

@@ -50,7 +50,7 @@ public class SendMessageDirectJob extends JobImpl {
}
public SendMessageDirectJob(RouterContext ctx, I2NPMessage message, Hash toPeer, Job onSend, ReplyJob onSuccess, Job onFail, MessageSelector selector, long expiration, int priority) {
super(ctx);
_log = _context.logManager().getLog(SendMessageDirectJob.class);
_log = getContext().logManager().getLog(SendMessageDirectJob.class);
_message = message;
_targetHash = toPeer;
_router = null;
@@ -67,7 +67,7 @@ public class SendMessageDirectJob extends JobImpl {
if (_targetHash == null)
throw new IllegalArgumentException("Attempt to send a message to a null peer");
_sent = false;
long remaining = expiration - _context.clock().now();
long remaining = expiration - getContext().clock().now();
if (remaining < 50*1000) {
_log.info("Sending message to expire in " + remaining + "ms containing " + message.getUniqueId() + " (a " + message.getClass().getName() + ")", new Exception("SendDirect from"));
}
@@ -75,7 +75,7 @@ public class SendMessageDirectJob extends JobImpl {
public String getName() { return "Send Message Direct"; }
public void runJob() {
long now = _context.clock().now();
long now = getContext().clock().now();
if (_expiration == 0)
_expiration = now + DEFAULT_TIMEOUT;
@@ -95,7 +95,7 @@ public class SendMessageDirectJob extends JobImpl {
_log.debug("Router specified, sending");
send();
} else {
_router = _context.netDb().lookupRouterInfoLocally(_targetHash);
_router = getContext().netDb().lookupRouterInfoLocally(_targetHash);
if (_router != null) {
if (_log.shouldLog(Log.DEBUG))
_log.debug("Router not specified but lookup found it");
@@ -104,14 +104,14 @@ public class SendMessageDirectJob extends JobImpl {
if (!_alreadySearched) {
if (_log.shouldLog(Log.DEBUG))
_log.debug("Router not specified, so we're looking for it...");
_context.netDb().lookupRouterInfo(_targetHash, this, this,
_expiration - _context.clock().now());
_searchOn = _context.clock().now();
getContext().netDb().lookupRouterInfo(_targetHash, this, this,
_expiration - getContext().clock().now());
_searchOn = getContext().clock().now();
_alreadySearched = true;
} else {
if (_log.shouldLog(Log.WARN))
_log.warn("Unable to find the router to send to: " + _targetHash
+ " after searching for " + (_context.clock().now()-_searchOn)
+ " after searching for " + (getContext().clock().now()-_searchOn)
+ "ms, message: " + _message, getAddedBy());
}
}
@@ -126,10 +126,10 @@ public class SendMessageDirectJob extends JobImpl {
}
_sent = true;
Hash to = _router.getIdentity().getHash();
Hash us = _context.router().getRouterInfo().getIdentity().getHash();
Hash us = getContext().router().getRouterInfo().getIdentity().getHash();
if (us.equals(to)) {
if (_selector != null) {
OutNetMessage outM = new OutNetMessage(_context);
OutNetMessage outM = new OutNetMessage(getContext());
outM.setExpiration(_expiration);
outM.setMessage(_message);
outM.setOnFailedReplyJob(_onFail);
@@ -139,23 +139,23 @@ public class SendMessageDirectJob extends JobImpl {
outM.setPriority(_priority);
outM.setReplySelector(_selector);
outM.setTarget(_router);
_context.messageRegistry().registerPending(outM);
getContext().messageRegistry().registerPending(outM);
}
if (_onSend != null)
_context.jobQueue().addJob(_onSend);
getContext().jobQueue().addJob(_onSend);
InNetMessage msg = new InNetMessage(_context);
InNetMessage msg = new InNetMessage(getContext());
msg.setFromRouter(_router.getIdentity());
msg.setMessage(_message);
_context.inNetMessagePool().add(msg);
getContext().inNetMessagePool().add(msg);
if (_log.shouldLog(Log.DEBUG))
_log.debug("Adding " + _message.getClass().getName()
+ " to inbound message pool as it was destined for ourselves");
//_log.debug("debug", _createdBy);
} else {
OutNetMessage msg = new OutNetMessage(_context);
OutNetMessage msg = new OutNetMessage(getContext());
msg.setExpiration(_expiration);
msg.setMessage(_message);
msg.setOnFailedReplyJob(_onFail);
@@ -165,7 +165,7 @@ public class SendMessageDirectJob extends JobImpl {
msg.setPriority(_priority);
msg.setReplySelector(_selector);
msg.setTarget(_router);
_context.outNetMessagePool().add(msg);
getContext().outNetMessagePool().add(msg);
if (_log.shouldLog(Log.DEBUG))
_log.debug("Adding " + _message.getClass().getName()
+ " to outbound message pool targeting "

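One subtlety in send() above: when the resolved target is our own router, nothing goes over the wire. If a reply selector was supplied, a never-sent OutNetMessage is registered purely so the eventual reply can be matched, and the payload itself drops straight into the inbound pool. Condensed from the hunks above (the onSend/onFail/priority wiring is elided):

    Hash to = _router.getIdentity().getHash();
    Hash us = getContext().router().getRouterInfo().getIdentity().getHash();
    if (us.equals(to)) {
        if (_selector != null) {
            OutNetMessage pending = new OutNetMessage(getContext());
            pending.setMessage(_message);
            pending.setExpiration(_expiration);
            pending.setReplySelector(_selector);
            pending.setTarget(_router);
            getContext().messageRegistry().registerPending(pending);   // matched later, never sent
        }
        InNetMessage msg = new InNetMessage(getContext());
        msg.setFromRouter(_router.getIdentity());
        msg.setMessage(_message);
        getContext().inNetMessagePool().add(msg);   // loopback delivery
    }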
View File

@@ -37,7 +37,7 @@ public class SendReplyMessageJob extends JobImpl {
}
public void runJob() {
SourceRouteReplyMessage msg = new SourceRouteReplyMessage(_context);
SourceRouteReplyMessage msg = new SourceRouteReplyMessage(getContext());
msg.setMessage(_message);
msg.setEncryptedHeader(_block.getData());
msg.setMessageExpiration(_message.getMessageExpiration());
@@ -56,8 +56,8 @@ public class SendReplyMessageJob extends JobImpl {
*/
protected void send(I2NPMessage msg) {
_log.info("Sending reply with " + _message.getClass().getName() + " in a sourceRouteeplyMessage to " + _block.getRouter().toBase64());
SendMessageDirectJob j = new SendMessageDirectJob(_context, msg, _block.getRouter(), _priority);
_context.jobQueue().addJob(j);
SendMessageDirectJob j = new SendMessageDirectJob(getContext(), msg, _block.getRouter(), _priority);
getContext().jobQueue().addJob(j);
}
public String getName() { return "Send Reply Message"; }

View File

@@ -83,11 +83,11 @@ public class SendTunnelMessageJob extends JobImpl {
new Exception("SendTunnel from"));
}
//_log.info("Send tunnel message " + msg.getClass().getName() + " to " + _destRouter + " over " + _tunnelId + " targetting tunnel " + _targetTunnelId, new Exception("SendTunnel from"));
_expiration = _context.clock().now() + timeoutMs;
_expiration = getContext().clock().now() + timeoutMs;
}
public void runJob() {
TunnelInfo info = _context.tunnelManager().getTunnelInfo(_tunnelId);
TunnelInfo info = getContext().tunnelManager().getTunnelInfo(_tunnelId);
if (info == null) {
if (_log.shouldLog(Log.DEBUG))
_log.debug("Message for unknown tunnel [" + _tunnelId
@@ -124,21 +124,21 @@ public class SendTunnelMessageJob extends JobImpl {
*
*/
private void forwardToGateway() {
TunnelMessage msg = new TunnelMessage(_context);
TunnelMessage msg = new TunnelMessage(getContext());
try {
ByteArrayOutputStream baos = new ByteArrayOutputStream(1024);
_message.writeBytes(baos);
msg.setData(baos.toByteArray());
msg.setTunnelId(_tunnelId);
msg.setMessageExpiration(new Date(_expiration));
_context.jobQueue().addJob(new SendMessageDirectJob(_context, msg,
getContext().jobQueue().addJob(new SendMessageDirectJob(getContext(), msg,
_destRouter, _onSend,
_onReply, _onFailure,
_selector, _expiration,
_priority));
String bodyType = _message.getClass().getName();
_context.messageHistory().wrap(bodyType, _message.getUniqueId(),
getContext().messageHistory().wrap(bodyType, _message.getUniqueId(),
TunnelMessage.class.getName(), msg.getUniqueId());
} catch (IOException ioe) {
if (_log.shouldLog(Log.ERROR))
@@ -162,7 +162,7 @@ public class SendTunnelMessageJob extends JobImpl {
if (_log.shouldLog(Log.ERROR))
_log.error("We are not participating in this /known/ tunnel - was the router reset?");
if (_onFailure != null)
_context.jobQueue().addJob(_onFailure);
getContext().jobQueue().addJob(_onFailure);
} else {
// we're the gateway, so sign, encrypt, and forward to info.getNextHop()
TunnelMessage msg = prepareMessage(info);
@@ -170,20 +170,20 @@ public class SendTunnelMessageJob extends JobImpl {
if (_log.shouldLog(Log.ERROR))
_log.error("wtf, unable to prepare a tunnel message to the next hop, when we're the gateway and hops remain? tunnel: " + info);
if (_onFailure != null)
_context.jobQueue().addJob(_onFailure);
getContext().jobQueue().addJob(_onFailure);
return;
}
if (_log.shouldLog(Log.DEBUG))
_log.debug("Tunnel message created: " + msg + " out of encrypted message: "
+ _message);
long now = _context.clock().now();
long now = getContext().clock().now();
if (_expiration < now + 15*1000) {
if (_log.shouldLog(Log.WARN))
_log.warn("Adding a tunnel message that will expire shortly ["
+ new Date(_expiration) + "]", getAddedBy());
}
msg.setMessageExpiration(new Date(_expiration));
_context.jobQueue().addJob(new SendMessageDirectJob(_context, msg,
getContext().jobQueue().addJob(new SendMessageDirectJob(getContext(), msg,
info.getNextHop(), _onSend,
_onReply, _onFailure,
_selector, _expiration,
@@ -205,7 +205,7 @@ public class SendTunnelMessageJob extends JobImpl {
if (_log.shouldLog(Log.ERROR))
_log.error("Cannot inject non-tunnel messages as a participant!" + _message, getAddedBy());
if (_onFailure != null)
_context.jobQueue().addJob(_onFailure);
getContext().jobQueue().addJob(_onFailure);
return;
}
@@ -216,29 +216,29 @@ public class SendTunnelMessageJob extends JobImpl {
if (_log.shouldLog(Log.ERROR))
_log.error("No verification key for the participant? tunnel: " + info, getAddedBy());
if (_onFailure != null)
_context.jobQueue().addJob(_onFailure);
getContext().jobQueue().addJob(_onFailure);
return;
}
boolean ok = struct.verifySignature(_context, info.getVerificationKey().getKey());
boolean ok = struct.verifySignature(getContext(), info.getVerificationKey().getKey());
if (!ok) {
if (_log.shouldLog(Log.WARN))
_log.warn("Failed tunnel verification! Spoofing / tagging attack? " + _message, getAddedBy());
if (_onFailure != null)
_context.jobQueue().addJob(_onFailure);
getContext().jobQueue().addJob(_onFailure);
return;
} else {
if (info.getNextHop() != null) {
if (_log.shouldLog(Log.INFO))
_log.info("Message for tunnel " + info.getTunnelId().getTunnelId() + " received where we're not the gateway and there are remaining hops, so forward it on to "
+ info.getNextHop().toBase64() + " via SendMessageDirectJob");
_context.jobQueue().addJob(new SendMessageDirectJob(_context, msg, info.getNextHop(), _onSend, null, _onFailure, null, _message.getMessageExpiration().getTime(), _priority));
getContext().jobQueue().addJob(new SendMessageDirectJob(getContext(), msg, info.getNextHop(), _onSend, null, _onFailure, null, _message.getMessageExpiration().getTime(), _priority));
return;
} else {
if (_log.shouldLog(Log.ERROR))
_log.error("Should not be reached - participant, but no more hops?!");
if (_onFailure != null)
_context.jobQueue().addJob(_onFailure);
getContext().jobQueue().addJob(_onFailure);
return;
}
}
@@ -247,7 +247,7 @@ public class SendTunnelMessageJob extends JobImpl {
/** find our place in the tunnel */
private TunnelInfo getUs(TunnelInfo info) {
Hash us = _context.routerHash();
Hash us = getContext().routerHash();
TunnelInfo lastUs = null;
while (info != null) {
if (us.equals(info.getThisHop()))
@@ -277,9 +277,9 @@ public class SendTunnelMessageJob extends JobImpl {
*
*/
private TunnelMessage prepareMessage(TunnelInfo info) {
TunnelMessage msg = new TunnelMessage(_context);
TunnelMessage msg = new TunnelMessage(getContext());
SessionKey key = _context.keyGenerator().generateSessionKey();
SessionKey key = getContext().keyGenerator().generateSessionKey();
DeliveryInstructions instructions = new DeliveryInstructions();
instructions.setDelayRequested(false);
@@ -329,7 +329,7 @@ public class SendTunnelMessageJob extends JobImpl {
TunnelVerificationStructure verification = createVerificationStructure(encryptedMessage, info);
String bodyType = _message.getClass().getName();
_context.messageHistory().wrap(bodyType, _message.getUniqueId(), TunnelMessage.class.getName(), msg.getUniqueId());
getContext().messageHistory().wrap(bodyType, _message.getUniqueId(), TunnelMessage.class.getName(), msg.getUniqueId());
if (_log.shouldLog(Log.DEBUG))
_log.debug("Tunnel message prepared: instructions = " + instructions);
@@ -347,8 +347,8 @@ public class SendTunnelMessageJob extends JobImpl {
*/
private TunnelVerificationStructure createVerificationStructure(byte encryptedMessage[], TunnelInfo info) {
TunnelVerificationStructure struct = new TunnelVerificationStructure();
struct.setMessageHash(_context.sha().calculateHash(encryptedMessage));
struct.sign(_context, info.getSigningKey().getKey());
struct.setMessageHash(getContext().sha().calculateHash(encryptedMessage));
struct.sign(getContext(), info.getSigningKey().getKey());
return struct;
}
@@ -363,9 +363,9 @@ public class SendTunnelMessageJob extends JobImpl {
struct.writeBytes(baos);
byte iv[] = new byte[16];
Hash h = _context.sha().calculateHash(key.getData());
Hash h = getContext().sha().calculateHash(key.getData());
System.arraycopy(h.getData(), 0, iv, 0, iv.length);
return _context.AESEngine().safeEncrypt(baos.toByteArray(), key, iv, paddedSize);
return getContext().AESEngine().safeEncrypt(baos.toByteArray(), key, iv, paddedSize);
} catch (IOException ioe) {
if (_log.shouldLog(Log.ERROR))
_log.error("Error writing out data to encrypt", ioe);
@@ -389,12 +389,12 @@ public class SendTunnelMessageJob extends JobImpl {
if (_onSend != null) {
if (_log.shouldLog(Log.DEBUG))
_log.debug("Firing onSend as we're honoring the instructions");
_context.jobQueue().addJob(_onSend);
getContext().jobQueue().addJob(_onSend);
}
// since we are the gateway, we don't need to decrypt the delivery instructions or the payload
RouterIdentity ident = _context.router().getRouterInfo().getIdentity();
RouterIdentity ident = getContext().router().getRouterInfo().getIdentity();
if (_destRouter != null) {
honorSendRemote(info, ident);
@@ -416,7 +416,7 @@ public class SendTunnelMessageJob extends JobImpl {
+ " message off to remote tunnel "
+ _targetTunnelId.getTunnelId() + " on router "
+ _destRouter.toBase64());
TunnelMessage tmsg = new TunnelMessage(_context);
TunnelMessage tmsg = new TunnelMessage(getContext());
tmsg.setEncryptedDeliveryInstructions(null);
tmsg.setTunnelId(_targetTunnelId);
tmsg.setVerificationStructure(null);
@@ -438,7 +438,7 @@ public class SendTunnelMessageJob extends JobImpl {
+ " message off to remote router " + _destRouter.toBase64());
msg = _message;
}
long now = _context.clock().now();
long now = getContext().clock().now();
//if (_expiration < now) {
//_expiration = now + Router.CLOCK_FUDGE_FACTOR;
//_log.info("Fudging the message send so it expires in the fudge factor...");
@@ -451,11 +451,11 @@ public class SendTunnelMessageJob extends JobImpl {
}
String bodyType = _message.getClass().getName();
_context.messageHistory().wrap(bodyType, _message.getUniqueId(),
getContext().messageHistory().wrap(bodyType, _message.getUniqueId(),
TunnelMessage.class.getName(), msg.getUniqueId());
// don't specify a selector, since createFakeOutNetMessage already does that
_context.jobQueue().addJob(new SendMessageDirectJob(_context, msg, _destRouter,
getContext().jobQueue().addJob(new SendMessageDirectJob(getContext(), msg, _destRouter,
_onSend, _onReply, _onFailure,
null, _expiration, _priority));
}
@@ -471,22 +471,22 @@ public class SendTunnelMessageJob extends JobImpl {
// its a network message targeting us...
if (_log.shouldLog(Log.DEBUG))
_log.debug("Destination is null or its not a DataMessage - pass it off to the InNetMessagePool");
InNetMessage msg = new InNetMessage(_context);
InNetMessage msg = new InNetMessage(getContext());
msg.setFromRouter(ident);
msg.setFromRouterHash(ident.getHash());
msg.setMessage(_message);
msg.setReplyBlock(null);
_context.inNetMessagePool().add(msg);
getContext().inNetMessagePool().add(msg);
} else {
if (_log.shouldLog(Log.DEBUG))
_log.debug("Destination is not null and it is a DataMessage - pop it into the ClientMessagePool");
DataMessage msg = (DataMessage)_message;
boolean valid = _context.messageValidator().validateMessage(msg.getUniqueId(), msg.getMessageExpiration().getTime());
boolean valid = getContext().messageValidator().validateMessage(msg.getUniqueId(), msg.getMessageExpiration().getTime());
if (!valid) {
if (_log.shouldLog(Log.WARN))
_log.warn("Duplicate data message received [" + msg.getUniqueId() + " expiring on " + msg.getMessageExpiration() + "]");
_context.messageHistory().droppedOtherMessage(msg);
_context.messageHistory().messageProcessingError(msg.getUniqueId(), msg.getClass().getName(), "Duplicate");
getContext().messageHistory().droppedOtherMessage(msg);
getContext().messageHistory().messageProcessingError(msg.getUniqueId(), msg.getClass().getName(), "Duplicate");
return;
}
@@ -501,8 +501,8 @@ public class SendTunnelMessageJob extends JobImpl {
clientMessage.setDestination(info.getDestination());
clientMessage.setPayload(payload);
clientMessage.setReceptionInfo(receptionInfo);
_context.clientMessagePool().add(clientMessage);
_context.messageHistory().receivePayloadMessage(msg.getUniqueId());
getContext().clientMessagePool().add(clientMessage);
getContext().messageHistory().receivePayloadMessage(msg.getUniqueId());
}
}
@@ -510,7 +510,7 @@ public class SendTunnelMessageJob extends JobImpl {
// now we create a fake outNetMessage to go onto the registry so we can select
if (_log.shouldLog(Log.DEBUG))
_log.debug("Registering a fake outNetMessage for the message tunneled locally since we have a selector");
OutNetMessage outM = new OutNetMessage(_context);
OutNetMessage outM = new OutNetMessage(getContext());
outM.setExpiration(_expiration);
outM.setMessage(_message);
outM.setOnFailedReplyJob(_onFailure);
@@ -520,7 +520,7 @@ public class SendTunnelMessageJob extends JobImpl {
outM.setPriority(_priority);
outM.setReplySelector(_selector);
outM.setTarget(null);
_context.messageRegistry().registerPending(outM);
getContext().messageRegistry().registerPending(outM);
// we dont really need the data
outM.discardData();
}

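A detail shared by the encrypt helper above and the decrypt helpers in HandleTunnelMessageJob earlier: the AES IV is never transmitted. Both sides derive it deterministically as the first 16 bytes of the session key's SHA-256 hash, so holding the session key is enough to recompute it. The convention, condensed (data and paddedSize stand in for the real arguments):

    byte iv[] = new byte[16];
    Hash h = getContext().sha().calculateHash(key.getData());
    System.arraycopy(h.getData(), 0, iv, 0, iv.length);   // IV = leading bytes of SHA-256(key)
    byte encrypted[] = getContext().AESEngine().safeEncrypt(data, key, iv, paddedSize);
    // the receiver recomputes the identical IV from the shared session key
    byte decrypted[] = getContext().AESEngine().safeDecrypt(encrypted, key, iv);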
View File

@@ -16,6 +16,7 @@ import net.i2p.data.i2np.SourceRouteBlock;
import net.i2p.router.HandlerJobBuilder;
import net.i2p.router.Job;
import net.i2p.router.RouterContext;
import net.i2p.util.Log;
/**
* Build a HandleDatabaseLookupMessageJob whenever a DatabaseLookupMessage arrives
@@ -23,14 +24,24 @@ import net.i2p.router.RouterContext;
*/
public class DatabaseLookupMessageHandler implements HandlerJobBuilder {
private RouterContext _context;
private Log _log;
public DatabaseLookupMessageHandler(RouterContext context) {
_context = context;
_log = context.logManager().getLog(DatabaseLookupMessageHandler.class);
_context.statManager().createRateStat("netDb.lookupsReceived", "How many netDb lookups have we received?", "Network Database", new long[] { 5*60*1000l, 60*60*1000l, 24*60*60*1000l });
_context.statManager().createRateStat("netDb.lookupsDropped", "How many netDb lookups did we drop due to throttling?", "Network Database", new long[] { 5*60*1000l, 60*60*1000l, 24*60*60*1000l });
}
public Job createJob(I2NPMessage receivedMessage, RouterIdentity from, Hash fromHash, SourceRouteBlock replyBlock) {
_context.statManager().addRateData("netDb.lookupsReceived", 1, 0);
// ignore the reply block for the moment
return new HandleDatabaseLookupMessageJob(_context, (DatabaseLookupMessage)receivedMessage, from, fromHash);
if (_context.throttle().acceptNetDbLookupRequest(((DatabaseLookupMessage)receivedMessage).getSearchKey())) {
return new HandleDatabaseLookupMessageJob(_context, (DatabaseLookupMessage)receivedMessage, from, fromHash);
} else {
if (_log.shouldLog(Log.INFO))
_log.info("Dropping lookup request as throttled");
_context.statManager().addRateData("netDb.lookupsDropped", 1, 1);
return null;
}
}
}

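This handler carries the one functionally new piece in this group: netDb lookups are now gated by the router throttle, with netDb.lookupsReceived and netDb.lookupsDropped tracking both sides, and returning null from createJob evidently drops the message. The same logic reads flatter with an early return; a behavior-preserving sketch:

    public Job createJob(I2NPMessage receivedMessage, RouterIdentity from, Hash fromHash, SourceRouteBlock replyBlock) {
        _context.statManager().addRateData("netDb.lookupsReceived", 1, 0);
        DatabaseLookupMessage msg = (DatabaseLookupMessage)receivedMessage;
        if (!_context.throttle().acceptNetDbLookupRequest(msg.getSearchKey())) {
            if (_log.shouldLog(Log.INFO))
                _log.info("Dropping lookup request as throttled");
            _context.statManager().addRateData("netDb.lookupsDropped", 1, 1);
            return null;   // no job: the lookup is dropped
        }
        // ignore the reply block for the moment
        return new HandleDatabaseLookupMessageJob(_context, msg, from, fromHash);
    }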
View File

@@ -49,9 +49,9 @@ public class HandleDatabaseLookupMessageJob extends JobImpl {
public HandleDatabaseLookupMessageJob(RouterContext ctx, DatabaseLookupMessage receivedMessage, RouterIdentity from, Hash fromHash) {
super(ctx);
_log = _context.logManager().getLog(HandleDatabaseLookupMessageJob.class);
_context.statManager().createRateStat("netDb.lookupsHandled", "How many netDb lookups have we handled?", "Network Database", new long[] { 5*60*1000l, 60*60*1000l, 24*60*60*1000l });
_context.statManager().createRateStat("netDb.lookupsMatched", "How many netDb lookups did we have the data for?", "Network Database", new long[] { 5*60*1000l, 60*60*1000l, 24*60*60*1000l });
_log = getContext().logManager().getLog(HandleDatabaseLookupMessageJob.class);
getContext().statManager().createRateStat("netDb.lookupsHandled", "How many netDb lookups have we handled?", "Network Database", new long[] { 5*60*1000l, 60*60*1000l, 24*60*60*1000l });
getContext().statManager().createRateStat("netDb.lookupsMatched", "How many netDb lookups did we have the data for?", "Network Database", new long[] { 5*60*1000l, 60*60*1000l, 24*60*60*1000l });
_message = receivedMessage;
_from = from;
_fromHash = fromHash;
@@ -70,14 +70,14 @@ public class HandleDatabaseLookupMessageJob extends JobImpl {
}
// might as well grab what they sent us
_context.netDb().store(fromKey, _message.getFrom());
getContext().netDb().store(fromKey, _message.getFrom());
// whatdotheywant?
handleRequest(fromKey);
}
private void handleRequest(Hash fromKey) {
LeaseSet ls = _context.netDb().lookupLeaseSetLocally(_message.getSearchKey());
LeaseSet ls = getContext().netDb().lookupLeaseSetLocally(_message.getSearchKey());
if (ls != null) {
// send that lease set to the _message.getFromHash peer
if (_log.shouldLog(Log.DEBUG))
@@ -85,7 +85,7 @@ public class HandleDatabaseLookupMessageJob extends JobImpl {
+ " locally as a lease set. sending to " + fromKey.toBase64());
sendData(_message.getSearchKey(), ls, fromKey, _message.getReplyTunnel());
} else {
RouterInfo info = _context.netDb().lookupRouterInfoLocally(_message.getSearchKey());
RouterInfo info = getContext().netDb().lookupRouterInfoLocally(_message.getSearchKey());
if (info != null) {
// send that routerInfo to the _message.getFromHash peer
if (_log.shouldLog(Log.DEBUG))
@@ -94,7 +94,7 @@ public class HandleDatabaseLookupMessageJob extends JobImpl {
sendData(_message.getSearchKey(), info, fromKey, _message.getReplyTunnel());
} else {
// not found locally - return closest peer routerInfo structs
Set routerInfoSet = _context.netDb().findNearestRouters(_message.getSearchKey(),
Set routerInfoSet = getContext().netDb().findNearestRouters(_message.getSearchKey(),
MAX_ROUTERS_RETURNED,
_message.getDontIncludePeers());
if (_log.shouldLog(Log.DEBUG))
@@ -109,7 +109,7 @@ public class HandleDatabaseLookupMessageJob extends JobImpl {
if (_log.shouldLog(Log.DEBUG))
_log.debug("Sending data matching key key " + key.toBase64() + " to peer " + toPeer.toBase64()
+ " tunnel " + replyTunnel);
DatabaseStoreMessage msg = new DatabaseStoreMessage(_context);
DatabaseStoreMessage msg = new DatabaseStoreMessage(getContext());
msg.setKey(key);
if (data instanceof LeaseSet) {
msg.setLeaseSet((LeaseSet)data);
@@ -118,8 +118,8 @@ public class HandleDatabaseLookupMessageJob extends JobImpl {
msg.setRouterInfo((RouterInfo)data);
msg.setValueType(DatabaseStoreMessage.KEY_TYPE_ROUTERINFO);
}
_context.statManager().addRateData("netDb.lookupsMatched", 1, 0);
_context.statManager().addRateData("netDb.lookupsHandled", 1, 0);
getContext().statManager().addRateData("netDb.lookupsMatched", 1, 0);
getContext().statManager().addRateData("netDb.lookupsHandled", 1, 0);
sendMessage(msg, toPeer, replyTunnel);
}
@@ -127,15 +127,15 @@ public class HandleDatabaseLookupMessageJob extends JobImpl {
if (_log.shouldLog(Log.DEBUG))
_log.debug("Sending closest routers to key " + key.toBase64() + ": # peers = "
+ routerInfoSet.size() + " tunnel " + replyTunnel);
DatabaseSearchReplyMessage msg = new DatabaseSearchReplyMessage(_context);
msg.setFromHash(_context.router().getRouterInfo().getIdentity().getHash());
DatabaseSearchReplyMessage msg = new DatabaseSearchReplyMessage(getContext());
msg.setFromHash(getContext().router().getRouterInfo().getIdentity().getHash());
msg.setSearchKey(key);
if (routerInfoSet.size() <= 0) {
// always include something, so lets toss ourselves in there
routerInfoSet.add(_context.router().getRouterInfo());
routerInfoSet.add(getContext().router().getRouterInfo());
}
msg.addReplies(routerInfoSet);
_context.statManager().addRateData("netDb.lookupsHandled", 1, 0);
getContext().statManager().addRateData("netDb.lookupsHandled", 1, 0);
sendMessage(msg, toPeer, replyTunnel); // should this go via garlic messages instead?
}
@@ -146,21 +146,21 @@ public class HandleDatabaseLookupMessageJob extends JobImpl {
} else {
if (_log.shouldLog(Log.DEBUG))
_log.debug("Sending reply directly to " + toPeer);
send = new SendMessageDirectJob(_context, message, toPeer, REPLY_TIMEOUT+_context.clock().now(), MESSAGE_PRIORITY);
send = new SendMessageDirectJob(getContext(), message, toPeer, REPLY_TIMEOUT+getContext().clock().now(), MESSAGE_PRIORITY);
}
_context.netDb().lookupRouterInfo(toPeer, send, null, REPLY_TIMEOUT);
getContext().netDb().lookupRouterInfo(toPeer, send, null, REPLY_TIMEOUT);
}
private void sendThroughTunnel(I2NPMessage message, Hash toPeer, TunnelId replyTunnel) {
TunnelInfo info = _context.tunnelManager().getTunnelInfo(replyTunnel);
TunnelInfo info = getContext().tunnelManager().getTunnelInfo(replyTunnel);
// the sendTunnelMessageJob can't handle injecting into the tunnel anywhere but the beginning
// (and if we are the beginning, we have the signing key)
if ( (info == null) || (info.getSigningKey() != null)) {
if (_log.shouldLog(Log.DEBUG))
_log.debug("Sending reply through " + replyTunnel + " on " + toPeer);
_context.jobQueue().addJob(new SendTunnelMessageJob(_context, message, replyTunnel, toPeer, null, null, null, null, null, REPLY_TIMEOUT, MESSAGE_PRIORITY));
getContext().jobQueue().addJob(new SendTunnelMessageJob(getContext(), message, replyTunnel, toPeer, null, null, null, null, null, REPLY_TIMEOUT, MESSAGE_PRIORITY));
} else {
// its a tunnel we're participating in, but we're NOT the gateway, so
sendToGateway(message, toPeer, replyTunnel, info);
@@ -177,19 +177,19 @@ public class HandleDatabaseLookupMessageJob extends JobImpl {
return;
}
long expiration = REPLY_TIMEOUT + _context.clock().now();
long expiration = REPLY_TIMEOUT + getContext().clock().now();
TunnelMessage msg = new TunnelMessage(_context);
TunnelMessage msg = new TunnelMessage(getContext());
try {
ByteArrayOutputStream baos = new ByteArrayOutputStream(1024);
message.writeBytes(baos);
msg.setData(baos.toByteArray());
msg.setTunnelId(replyTunnel);
msg.setMessageExpiration(new Date(expiration));
_context.jobQueue().addJob(new SendMessageDirectJob(_context, msg, toPeer, null, null, null, null, expiration, MESSAGE_PRIORITY));
getContext().jobQueue().addJob(new SendMessageDirectJob(getContext(), msg, toPeer, null, null, null, null, expiration, MESSAGE_PRIORITY));
String bodyType = message.getClass().getName();
_context.messageHistory().wrap(bodyType, message.getUniqueId(), TunnelMessage.class.getName(), msg.getUniqueId());
getContext().messageHistory().wrap(bodyType, message.getUniqueId(), TunnelMessage.class.getName(), msg.getUniqueId());
} catch (IOException ioe) {
if (_log.shouldLog(Log.ERROR))
_log.error("Error writing out the tunnel message to send to the tunnel", ioe);
@@ -202,7 +202,7 @@ public class HandleDatabaseLookupMessageJob extends JobImpl {
public String getName() { return "Handle Database Lookup Message"; }
public void dropped() {
_context.messageHistory().messageProcessingError(_message.getUniqueId(),
getContext().messageHistory().messageProcessingError(_message.getUniqueId(),
_message.getClass().getName(),
"Dropped due to overload");
}

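The reply path above chooses among three routes: with no reply tunnel, the answer goes directly to the peer (once its routerInfo is known); with a reply tunnel whose signing key we hold, we are its gateway and can inject via SendTunnelMessageJob; otherwise we are a mere participant and must wrap the reply in a TunnelMessage addressed to the gateway hop. Condensed from the hunks above into one hypothetical method:

    private void sendReply(I2NPMessage message, Hash toPeer, TunnelId replyTunnel) {
        if (replyTunnel == null) {
            // direct reply; the lookup fires the send job once the peer is known
            Job send = new SendMessageDirectJob(getContext(), message, toPeer,
                                                REPLY_TIMEOUT + getContext().clock().now(), MESSAGE_PRIORITY);
            getContext().netDb().lookupRouterInfo(toPeer, send, null, REPLY_TIMEOUT);
            return;
        }
        TunnelInfo info = getContext().tunnelManager().getTunnelInfo(replyTunnel);
        if ((info == null) || (info.getSigningKey() != null)) {
            // unknown tunnel, or we're its gateway: inject at the head
            getContext().jobQueue().addJob(new SendTunnelMessageJob(getContext(), message, replyTunnel, toPeer,
                                           null, null, null, null, null, REPLY_TIMEOUT, MESSAGE_PRIORITY));
        } else {
            // participant mid-tunnel: wrap it and hand the TunnelMessage to the gateway
            sendToGateway(message, toPeer, replyTunnel, info);
        }
    }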
View File

@@ -38,7 +38,7 @@ public class HandleDatabaseSearchReplyMessageJob extends JobImpl {
if (_log.shouldLog(Log.DEBUG))
_log.debug("Handling database search reply message for key " + _message.getSearchKey().toBase64() + " with " + _message.getNumReplies() + " replies");
if (_message.getNumReplies() > 0)
_context.jobQueue().addJob(new HandlePeerJob(0));
getContext().jobQueue().addJob(new HandlePeerJob(0));
}
/**
@@ -49,7 +49,7 @@ public class HandleDatabaseSearchReplyMessageJob extends JobImpl {
private final class HandlePeerJob extends JobImpl {
private int _curReply;
public HandlePeerJob(int reply) {
super(HandleDatabaseSearchReplyMessageJob.this._context);
super(HandleDatabaseSearchReplyMessageJob.this.getContext());
_curReply = reply;
}
public void runJob() {
@@ -63,7 +63,7 @@ public class HandleDatabaseSearchReplyMessageJob extends JobImpl {
if (_log.shouldLog(Log.INFO))
_log.info("On search for " + _message.getSearchKey().toBase64() + ", received " + info.getIdentity().getHash().toBase64());
HandlePeerJob.this._context.netDb().store(info.getIdentity().getHash(), info);
HandlePeerJob.this.getContext().netDb().store(info.getIdentity().getHash(), info);
_curReply++;
return _message.getNumReplies() > _curReply;
}

View File

@@ -50,15 +50,15 @@ public class HandleDatabaseStoreMessageJob extends JobImpl {
boolean wasNew = false;
if (_message.getValueType() == DatabaseStoreMessage.KEY_TYPE_LEASESET) {
Object match = _context.netDb().store(_message.getKey(), _message.getLeaseSet());
Object match = getContext().netDb().store(_message.getKey(), _message.getLeaseSet());
wasNew = (null == match);
} else if (_message.getValueType() == DatabaseStoreMessage.KEY_TYPE_ROUTERINFO) {
if (_log.shouldLog(Log.INFO))
_log.info("Handling dbStore of router " + _message.getKey() + " with publishDate of "
+ new Date(_message.getRouterInfo().getPublished()));
Object match = _context.netDb().store(_message.getKey(), _message.getRouterInfo());
Object match = getContext().netDb().store(_message.getKey(), _message.getRouterInfo());
wasNew = (null == match);
_context.profileManager().heardAbout(_message.getKey());
getContext().profileManager().heardAbout(_message.getKey());
} else {
if (_log.shouldLog(Log.ERROR))
_log.error("Invalid DatabaseStoreMessage data type - " + _message.getValueType()
@@ -71,16 +71,16 @@ public class HandleDatabaseStoreMessageJob extends JobImpl {
if (_from != null)
_fromHash = _from.getHash();
if (_fromHash != null)
_context.profileManager().dbStoreReceived(_fromHash, wasNew);
_context.statManager().addRateData("netDb.storeHandled", 1, 0);
getContext().profileManager().dbStoreReceived(_fromHash, wasNew);
getContext().statManager().addRateData("netDb.storeHandled", 1, 0);
}
private void sendAck() {
DeliveryStatusMessage msg = new DeliveryStatusMessage(_context);
DeliveryStatusMessage msg = new DeliveryStatusMessage(getContext());
msg.setMessageId(_message.getReplyToken());
msg.setArrival(new Date(_context.clock().now()));
msg.setArrival(new Date(getContext().clock().now()));
TunnelId outTunnelId = selectOutboundTunnel();
_context.jobQueue().addJob(new SendTunnelMessageJob(_context, msg, outTunnelId,
getContext().jobQueue().addJob(new SendTunnelMessageJob(getContext(), msg, outTunnelId,
_message.getReplyGateway(), _message.getReplyTunnel(),
null, null, null, null, ACK_TIMEOUT, ACK_PRIORITY));
}
@@ -92,7 +92,7 @@ public class HandleDatabaseStoreMessageJob extends JobImpl {
criteria.setReliabilityPriority(20);
criteria.setMaximumTunnelsRequired(1);
criteria.setMinimumTunnelsRequired(1);
List tunnelIds = _context.tunnelManager().selectOutboundTunnelIds(criteria);
List tunnelIds = getContext().tunnelManager().selectOutboundTunnelIds(criteria);
if (tunnelIds.size() <= 0) {
_log.error("No outbound tunnels?!");
return null;
@@ -104,6 +104,6 @@ public class HandleDatabaseStoreMessageJob extends JobImpl {
public String getName() { return "Handle Database Store Message"; }
public void dropped() {
_context.messageHistory().messageProcessingError(_message.getUniqueId(), _message.getClass().getName(), "Dropped due to overload");
getContext().messageHistory().messageProcessingError(_message.getUniqueId(), _message.getClass().getName(), "Dropped due to overload");
}
}

View File

@@ -32,25 +32,25 @@ public class PublishLocalRouterInfoJob extends JobImpl {
public String getName() { return "Publish Local Router Info"; }
public void runJob() {
RouterInfo ri = new RouterInfo(_context.router().getRouterInfo());
RouterInfo ri = new RouterInfo(getContext().router().getRouterInfo());
if (_log.shouldLog(Log.DEBUG))
_log.debug("Old routerInfo contains " + ri.getAddresses().size()
+ " addresses and " + ri.getOptions().size() + " options");
Properties stats = _context.statPublisher().publishStatistics();
Properties stats = getContext().statPublisher().publishStatistics();
try {
ri.setPublished(_context.clock().now());
ri.setPublished(getContext().clock().now());
ri.setOptions(stats);
ri.setAddresses(_context.commSystem().createAddresses());
ri.sign(_context.keyManager().getSigningPrivateKey());
_context.router().setRouterInfo(ri);
ri.setAddresses(getContext().commSystem().createAddresses());
ri.sign(getContext().keyManager().getSigningPrivateKey());
getContext().router().setRouterInfo(ri);
if (_log.shouldLog(Log.INFO))
_log.info("Newly updated routerInfo is published with " + stats.size()
+ "/" + ri.getOptions().size() + " options on "
+ new Date(ri.getPublished()));
_context.netDb().publish(ri);
getContext().netDb().publish(ri);
} catch (DataFormatException dfe) {
_log.error("Error signing the updated local router info!", dfe);
}
requeue(PUBLISH_DELAY + _context.random().nextInt((int)PUBLISH_DELAY));
requeue(PUBLISH_DELAY + getContext().random().nextInt((int)PUBLISH_DELAY));
}
}
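
The requeue above schedules the next publish with uniform jitter, landing somewhere in [PUBLISH_DELAY, 2*PUBLISH_DELAY) so routers don't republish in lockstep. A standalone sketch with a hypothetical 10-minute base delay (the real PUBLISH_DELAY constant isn't shown in this hunk):

import java.util.Random;

// Hypothetical base delay; illustrates the jitter range only.
public class JitterDemo {
    public static void main(String[] args) {
        long PUBLISH_DELAY = 10*60*1000;
        long next = PUBLISH_DELAY + new Random().nextInt((int)PUBLISH_DELAY);
        System.out.println("next publish in " + next + "ms"); // 10..20 min
    }
}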


@@ -55,8 +55,8 @@ class DataPublisherJob extends JobImpl {
new Exception("Publish expired lease?"));
}
}
StoreJob store = new StoreJob(_context, _facade, key, data, null, null, STORE_TIMEOUT);
_context.jobQueue().addJob(store);
StoreJob store = new StoreJob(getContext(), _facade, key, data, null, null, STORE_TIMEOUT);
getContext().jobQueue().addJob(store);
}
requeue(RERUN_DELAY_MS);
}


@@ -118,7 +118,7 @@ class DataRepublishingSelectorJob extends JobImpl {
private long rankPublishNeed(Hash key, Long lastPublished) {
int bucket = _facade.getKBuckets().pickBucket(key);
long sendPeriod = (bucket+1) * RESEND_BUCKET_FACTOR;
long now = _context.clock().now();
long now = getContext().clock().now();
if (lastPublished.longValue() < now-sendPeriod) {
RouterInfo ri = _facade.lookupRouterInfoLocally(key);
if (ri != null) {
@@ -158,7 +158,7 @@ class DataRepublishingSelectorJob extends JobImpl {
if (_facade.lookupRouterInfoLocally(key) != null) {
// randomize the chance of rebroadcast for leases if we haven't
// sent it within 5 minutes
int val = _context.random().nextInt(LEASE_REBROADCAST_PROBABILITY_SCALE);
int val = getContext().random().nextInt(LEASE_REBROADCAST_PROBABILITY_SCALE);
if (val <= LEASE_REBROADCAST_PROBABILITY) {
if (_log.shouldLog(Log.INFO))
_log.info("Randomized rebroadcast of leases tells us to send "

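The gate above is a uniform draw against a threshold. A sketch with stand-in constants (the real LEASE_REBROADCAST_PROBABILITY_SCALE and LEASE_REBROADCAST_PROBABILITY values aren't shown in this hunk):

import java.util.Random;

// With stand-in values SCALE = 100 and PROBABILITY = 5, draws 0..5 out
// of 0..99 pass, i.e. roughly a 6% chance per evaluation.
public class RebroadcastGate {
    static final int SCALE = 100;      // stand-in
    static final int PROBABILITY = 5;  // stand-in
    public static void main(String[] args) {
        if (new Random().nextInt(SCALE) <= PROBABILITY)
            System.out.println("rebroadcast the lease");
    }
}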

@@ -69,7 +69,7 @@ class ExpireRoutersJob extends JobImpl {
private Set selectKeysToExpire() {
Set possible = getNotInUse();
Set expiring = new HashSet(16);
long earliestPublishDate = _context.clock().now() - EXPIRE_DELAY;
long earliestPublishDate = getContext().clock().now() - EXPIRE_DELAY;
for (Iterator iter = possible.iterator(); iter.hasNext(); ) {
Hash key = (Hash)iter.next();
@@ -94,7 +94,7 @@ class ExpireRoutersJob extends JobImpl {
Set possible = new HashSet(16);
for (Iterator iter = _facade.getAllRouters().iterator(); iter.hasNext(); ) {
Hash peer = (Hash)iter.next();
if (!_context.tunnelManager().isInUse(peer)) {
if (!getContext().tunnelManager().isInUse(peer)) {
possible.add(peer);
} else {
if (_log.shouldLog(Log.DEBUG))


@@ -68,7 +68,7 @@ class ExploreJob extends SearchJob {
* @param expiration when the search should stop
*/
protected DatabaseLookupMessage buildMessage(TunnelId replyTunnelId, RouterInfo replyGateway, long expiration) {
DatabaseLookupMessage msg = new DatabaseLookupMessage(_context);
DatabaseLookupMessage msg = new DatabaseLookupMessage(getContext());
msg.setSearchKey(getState().getTarget());
msg.setFrom(replyGateway);
msg.setDontIncludePeers(getState().getAttempted());
@@ -95,7 +95,7 @@ class ExploreJob extends SearchJob {
*
*/
protected DatabaseLookupMessage buildMessage(long expiration) {
return buildMessage(null, _context.router().getRouterInfo(), expiration);
return buildMessage(null, getContext().router().getRouterInfo(), expiration);
}
/** max # of concurrent searches */
@@ -110,7 +110,7 @@ class ExploreJob extends SearchJob {
protected void newPeersFound(int numNewPeers) {
// who cares about how many new peers. well, maybe we do. but for now,
// we'll do the simplest thing that could possibly work.
_facade.setLastExploreNewDate(_context.clock().now());
_facade.setLastExploreNewDate(getContext().clock().now());
}
/*


@@ -11,6 +11,7 @@ package net.i2p.router.networkdb.kademlia;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;
import java.util.Collection;
import java.util.Date;
import java.util.HashMap;
@@ -605,16 +606,19 @@ public class KademliaNetworkDatabaseFacade extends NetworkDatabaseFacade {
return routers;
}
public String renderStatusHTML() {
StringBuffer buf = new StringBuffer();
public void renderStatusHTML(OutputStream out) throws IOException {
StringBuffer buf = new StringBuffer(10*1024);
buf.append("<h2>Kademlia Network DB Contents</h2>\n");
if (!_initialized) {
buf.append("<i>Not initialized</i>\n");
return buf.toString();
out.write(buf.toString().getBytes());
return;
}
Set leases = getLeases();
buf.append("<h3>Leases</h3>\n");
buf.append("<table border=\"1\">\n");
out.write(buf.toString().getBytes());
buf.setLength(0);
for (Iterator iter = leases.iterator(); iter.hasNext(); ) {
LeaseSet ls = (LeaseSet)iter.next();
Hash key = ls.getDestination().calculateHash();
@@ -625,6 +629,8 @@ public class KademliaNetworkDatabaseFacade extends NetworkDatabaseFacade {
else
buf.append("<td valign=\"top\" align=\"left\"><b>Last sent successfully:</b> never</td></tr>");
buf.append("<tr><td valign=\"top\" align=\"left\" colspan=\"2\"><pre>\n").append(ls.toString()).append("</pre></td></tr>\n");
out.write(buf.toString().getBytes());
buf.setLength(0);
}
buf.append("</table>\n");
@@ -632,6 +638,9 @@ public class KademliaNetworkDatabaseFacade extends NetworkDatabaseFacade {
Set routers = getRouters();
buf.append("<h3>Routers</h3>\n");
buf.append("<table border=\"1\">\n");
out.write(buf.toString().getBytes());
buf.setLength(0);
for (Iterator iter = routers.iterator(); iter.hasNext(); ) {
RouterInfo ri = (RouterInfo)iter.next();
Hash key = ri.getIdentity().getHash();
@@ -648,10 +657,10 @@ public class KademliaNetworkDatabaseFacade extends NetworkDatabaseFacade {
buf.append("<td valign=\"top\" align=\"left\"><a href=\"/profile/").append(key.toBase64().substring(0, 32)).append("\">Profile</a></td></tr>");
}
buf.append("<tr><td valign=\"top\" align=\"left\" colspan=\"3\"><pre>\n").append(ri.toString()).append("</pre></td></tr>\n");
out.write(buf.toString().getBytes());
buf.setLength(0);
}
buf.append("</table>\n");
return buf.toString();
out.write("</table>\n".getBytes());
}
}
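
The hunks in this file replace a build-one-big-String renderer with incremental writes: append a bounded chunk, flush it to the stream, reset the buffer. A reduced sketch of that idiom, with a hypothetical rows list standing in for the lease/router iteration:

import java.io.IOException;
import java.io.OutputStream;
import java.util.Iterator;
import java.util.List;

// Flushing at natural boundaries (per table row) keeps memory bounded
// instead of accumulating the whole rendered page in one String.
class StreamingRenderSketch {
    static void renderRows(OutputStream out, List rows) throws IOException {
        StringBuffer buf = new StringBuffer(10*1024);
        buf.append("<table border=\"1\">\n");
        for (Iterator iter = rows.iterator(); iter.hasNext(); ) {
            buf.append("<tr><td>").append(iter.next()).append("</td></tr>\n");
            out.write(buf.toString().getBytes());
            buf.setLength(0); // reuse the buffer for the next row
        }
        out.write("</table>\n".getBytes());
    }
}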


@@ -36,14 +36,14 @@ public class RepublishLeaseSetJob extends JobImpl {
public String getName() { return "Republish a local leaseSet"; }
public void runJob() {
try {
if (_context.clientManager().isLocal(_dest)) {
if (getContext().clientManager().isLocal(_dest)) {
LeaseSet ls = _facade.lookupLeaseSetLocally(_dest);
if (ls != null) {
_log.warn("Client " + _dest + " is local, so we're republishing it");
if (!ls.isCurrent(Router.CLOCK_FUDGE_FACTOR)) {
_log.warn("Not publishing a LOCAL lease that isn't current - " + _dest, new Exception("Publish expired LOCAL lease?"));
} else {
_context.jobQueue().addJob(new StoreJob(_context, _facade, _dest, ls, null, null, REPUBLISH_LEASESET_DELAY));
getContext().jobQueue().addJob(new StoreJob(getContext(), _facade, _dest, ls, null, null, REPUBLISH_LEASESET_DELAY));
}
} else {
_log.warn("Client " + _dest + " is local, but we can't find a valid LeaseSet? perhaps its being rebuilt?");


@@ -71,21 +71,21 @@ class SearchJob extends JobImpl {
super(context);
if ( (key == null) || (key.getData() == null) )
throw new IllegalArgumentException("Search for null key? wtf");
_log = _context.logManager().getLog(SearchJob.class);
_log = getContext().logManager().getLog(SearchJob.class);
_facade = facade;
_state = new SearchState(_context, key);
_state = new SearchState(getContext(), key);
_onSuccess = onSuccess;
_onFailure = onFailure;
_timeoutMs = timeoutMs;
_keepStats = keepStats;
_isLease = isLease;
_peerSelector = new PeerSelector(_context);
_expiration = _context.clock().now() + timeoutMs;
_context.statManager().createRateStat("netDb.successTime", "How long a successful search takes", "Network Database", new long[] { 60*60*1000l, 24*60*60*1000l });
_context.statManager().createRateStat("netDb.failedTime", "How long a failed search takes", "Network Database", new long[] { 60*60*1000l, 24*60*60*1000l });
_context.statManager().createRateStat("netDb.successPeers", "How many peers are contacted in a successful search", "Network Database", new long[] { 60*60*1000l, 24*60*60*1000l });
_context.statManager().createRateStat("netDb.failedPeers", "How many peers fail to respond to a lookup?", "Network Database", new long[] { 60*60*1000l, 24*60*60*1000l });
_context.statManager().createRateStat("netDb.searchCount", "Overall number of searches sent", "Network Database", new long[] { 60*60*1000l, 3*60*60*1000l, 24*60*60*1000l });
_peerSelector = new PeerSelector(getContext());
_expiration = getContext().clock().now() + timeoutMs;
getContext().statManager().createRateStat("netDb.successTime", "How long a successful search takes", "Network Database", new long[] { 60*60*1000l, 24*60*60*1000l });
getContext().statManager().createRateStat("netDb.failedTime", "How long a failed search takes", "Network Database", new long[] { 60*60*1000l, 24*60*60*1000l });
getContext().statManager().createRateStat("netDb.successPeers", "How many peers are contacted in a successful search", "Network Database", new long[] { 60*60*1000l, 24*60*60*1000l });
getContext().statManager().createRateStat("netDb.failedPeers", "How many peers fail to respond to a lookup?", "Network Database", new long[] { 60*60*1000l, 24*60*60*1000l });
getContext().statManager().createRateStat("netDb.searchCount", "Overall number of searches sent", "Network Database", new long[] { 60*60*1000l, 3*60*60*1000l, 24*60*60*1000l });
if (_log.shouldLog(Log.DEBUG))
_log.debug("Search (" + getClass().getName() + " for " + key.toBase64(), new Exception("Search enqueued by"));
}
@@ -93,7 +93,7 @@ class SearchJob extends JobImpl {
public void runJob() {
if (_log.shouldLog(Log.INFO))
_log.info(getJobId() + ": Searching for " + _state.getTarget()); // , getAddedBy());
_context.statManager().addRateData("netDb.searchCount", 1, 0);
getContext().statManager().addRateData("netDb.searchCount", 1, 0);
searchNext();
}
@@ -136,7 +136,7 @@ class SearchJob extends JobImpl {
private boolean isLocal() { return _facade.getDataStore().isKnown(_state.getTarget()); }
private boolean isExpired() {
return _context.clock().now() >= _expiration;
return getContext().clock().now() >= _expiration;
}
/** max # of concurrent searches */
@@ -199,15 +199,15 @@ class SearchJob extends JobImpl {
private void requeuePending() {
if (_pendingRequeueJob == null)
_pendingRequeueJob = new RequeuePending();
long now = _context.clock().now();
long now = getContext().clock().now();
if (_pendingRequeueJob.getTiming().getStartAfter() < now)
_pendingRequeueJob.getTiming().setStartAfter(now+5*1000);
_context.jobQueue().addJob(_pendingRequeueJob);
getContext().jobQueue().addJob(_pendingRequeueJob);
}
private class RequeuePending extends JobImpl {
public RequeuePending() {
super(SearchJob.this._context);
super(SearchJob.this.getContext());
}
public String getName() { return "Requeue search with pending"; }
public void runJob() { searchNext(); }
@@ -220,7 +220,7 @@ class SearchJob extends JobImpl {
* @return ordered list of Hash objects
*/
private List getClosestRouters(Hash key, int numClosest, Set alreadyChecked) {
Hash rkey = _context.routingKeyGenerator().getRoutingKey(key);
Hash rkey = getContext().routingKeyGenerator().getRoutingKey(key);
if (_log.shouldLog(Log.DEBUG))
_log.debug(getJobId() + ": Current routing key for " + key + ": " + rkey);
return _peerSelector.selectNearestExplicit(rkey, numClosest, alreadyChecked, _facade.getKBuckets());
@@ -231,7 +231,7 @@ class SearchJob extends JobImpl {
*
*/
protected void sendSearch(RouterInfo router) {
if (router.getIdentity().equals(_context.router().getRouterInfo().getIdentity())) {
if (router.getIdentity().equals(getContext().router().getRouterInfo().getIdentity())) {
// don't search ourselves
if (_log.shouldLog(Log.ERROR))
_log.error(getJobId() + ": Dont send search to ourselves - why did we try?");
@@ -257,26 +257,26 @@ class SearchJob extends JobImpl {
TunnelId inTunnelId = getInboundTunnelId();
if (inTunnelId == null) {
_log.error("No tunnels to get search replies through! wtf!");
_context.jobQueue().addJob(new FailedJob(router));
getContext().jobQueue().addJob(new FailedJob(router));
return;
}
TunnelInfo inTunnel = _context.tunnelManager().getTunnelInfo(inTunnelId);
RouterInfo inGateway = _context.netDb().lookupRouterInfoLocally(inTunnel.getThisHop());
TunnelInfo inTunnel = getContext().tunnelManager().getTunnelInfo(inTunnelId);
RouterInfo inGateway = getContext().netDb().lookupRouterInfoLocally(inTunnel.getThisHop());
if (inGateway == null) {
_log.error("We can't find the gateway to our inbound tunnel?! wtf");
_context.jobQueue().addJob(new FailedJob(router));
getContext().jobQueue().addJob(new FailedJob(router));
return;
}
long expiration = _context.clock().now() + PER_PEER_TIMEOUT; // getTimeoutMs();
long expiration = getContext().clock().now() + PER_PEER_TIMEOUT; // getTimeoutMs();
DatabaseLookupMessage msg = buildMessage(inTunnelId, inGateway, expiration);
TunnelId outTunnelId = getOutboundTunnelId();
if (outTunnelId == null) {
_log.error("No tunnels to send search out through! wtf!");
_context.jobQueue().addJob(new FailedJob(router));
getContext().jobQueue().addJob(new FailedJob(router));
return;
}
@@ -286,18 +286,18 @@ class SearchJob extends JobImpl {
+ msg.getFrom().getIdentity().getHash().toBase64() + "] via tunnel ["
+ msg.getReplyTunnel() + "]");
SearchMessageSelector sel = new SearchMessageSelector(_context, router, _expiration, _state);
SearchMessageSelector sel = new SearchMessageSelector(getContext(), router, _expiration, _state);
long timeoutMs = PER_PEER_TIMEOUT; // getTimeoutMs();
SearchUpdateReplyFoundJob reply = new SearchUpdateReplyFoundJob(_context, router, _state, _facade, this);
SendTunnelMessageJob j = new SendTunnelMessageJob(_context, msg, outTunnelId, router.getIdentity().getHash(),
SearchUpdateReplyFoundJob reply = new SearchUpdateReplyFoundJob(getContext(), router, _state, _facade, this);
SendTunnelMessageJob j = new SendTunnelMessageJob(getContext(), msg, outTunnelId, router.getIdentity().getHash(),
null, null, reply, new FailedJob(router), sel,
timeoutMs, SEARCH_PRIORITY);
_context.jobQueue().addJob(j);
getContext().jobQueue().addJob(j);
}
/** we're searching for a router, so we can just send direct */
protected void sendRouterSearch(RouterInfo router) {
long expiration = _context.clock().now() + PER_PEER_TIMEOUT; // getTimeoutMs();
long expiration = getContext().clock().now() + PER_PEER_TIMEOUT; // getTimeoutMs();
DatabaseLookupMessage msg = buildMessage(expiration);
@@ -305,12 +305,12 @@ class SearchJob extends JobImpl {
_log.info(getJobId() + ": Sending router search to " + router.getIdentity().getHash().toBase64()
+ " for " + msg.getSearchKey().toBase64() + " w/ replies to us ["
+ msg.getFrom().getIdentity().getHash().toBase64() + "]");
SearchMessageSelector sel = new SearchMessageSelector(_context, router, _expiration, _state);
SearchMessageSelector sel = new SearchMessageSelector(getContext(), router, _expiration, _state);
long timeoutMs = PER_PEER_TIMEOUT;
SearchUpdateReplyFoundJob reply = new SearchUpdateReplyFoundJob(_context, router, _state, _facade, this);
SendMessageDirectJob j = new SendMessageDirectJob(_context, msg, router.getIdentity().getHash(),
SearchUpdateReplyFoundJob reply = new SearchUpdateReplyFoundJob(getContext(), router, _state, _facade, this);
SendMessageDirectJob j = new SendMessageDirectJob(getContext(), msg, router.getIdentity().getHash(),
reply, new FailedJob(router), sel, expiration, SEARCH_PRIORITY);
_context.jobQueue().addJob(j);
getContext().jobQueue().addJob(j);
}
/**
@@ -322,7 +322,7 @@ class SearchJob extends JobImpl {
TunnelSelectionCriteria crit = new TunnelSelectionCriteria();
crit.setMaximumTunnelsRequired(1);
crit.setMinimumTunnelsRequired(1);
List tunnelIds = _context.tunnelManager().selectOutboundTunnelIds(crit);
List tunnelIds = getContext().tunnelManager().selectOutboundTunnelIds(crit);
if (tunnelIds.size() <= 0) {
return null;
}
@@ -339,7 +339,7 @@ class SearchJob extends JobImpl {
TunnelSelectionCriteria crit = new TunnelSelectionCriteria();
crit.setMaximumTunnelsRequired(1);
crit.setMinimumTunnelsRequired(1);
List tunnelIds = _context.tunnelManager().selectInboundTunnelIds(crit);
List tunnelIds = getContext().tunnelManager().selectInboundTunnelIds(crit);
if (tunnelIds.size() <= 0) {
return null;
}
@@ -354,7 +354,7 @@ class SearchJob extends JobImpl {
* @param expiration when the search should stop
*/
protected DatabaseLookupMessage buildMessage(TunnelId replyTunnelId, RouterInfo replyGateway, long expiration) {
DatabaseLookupMessage msg = new DatabaseLookupMessage(_context);
DatabaseLookupMessage msg = new DatabaseLookupMessage(getContext());
msg.setSearchKey(_state.getTarget());
msg.setFrom(replyGateway);
msg.setDontIncludePeers(_state.getAttempted());
@@ -369,9 +369,9 @@ class SearchJob extends JobImpl {
*
*/
protected DatabaseLookupMessage buildMessage(long expiration) {
DatabaseLookupMessage msg = new DatabaseLookupMessage(_context);
DatabaseLookupMessage msg = new DatabaseLookupMessage(getContext());
msg.setSearchKey(_state.getTarget());
msg.setFrom(_context.router().getRouterInfo());
msg.setFrom(getContext().router().getRouterInfo());
msg.setDontIncludePeers(_state.getAttempted());
msg.setMessageExpiration(new Date(expiration));
msg.setReplyTunnel(null);
@@ -381,7 +381,7 @@ class SearchJob extends JobImpl {
void replyFound(DatabaseSearchReplyMessage message, Hash peer) {
long duration = _state.replyFound(peer);
// this processing can take a while, so split 'er up
_context.jobQueue().addJob(new SearchReplyJob((DatabaseSearchReplyMessage)message, peer, duration));
getContext().jobQueue().addJob(new SearchReplyJob((DatabaseSearchReplyMessage)message, peer, duration));
}
/**
@@ -403,7 +403,7 @@ class SearchJob extends JobImpl {
private int _duplicatePeers;
private long _duration;
public SearchReplyJob(DatabaseSearchReplyMessage message, Hash peer, long duration) {
super(SearchJob.this._context);
super(SearchJob.this.getContext());
_msg = message;
_peer = peer;
_curIndex = 0;
@@ -415,7 +415,7 @@ class SearchJob extends JobImpl {
public String getName() { return "Process Reply for Kademlia Search"; }
public void runJob() {
if (_curIndex >= _msg.getNumReplies()) {
_context.profileManager().dbLookupReply(_peer, _newPeers, _seenPeers,
getContext().profileManager().dbLookupReply(_peer, _newPeers, _seenPeers,
_invalidPeers, _duplicatePeers, _duration);
if (_newPeers > 0)
newPeersFound(_newPeers);
@@ -462,7 +462,7 @@ class SearchJob extends JobImpl {
*
*/
public FailedJob(RouterInfo peer, boolean penalizePeer) {
super(SearchJob.this._context);
super(SearchJob.this.getContext());
_penalizePeer = penalizePeer;
_peer = peer.getIdentity().getHash();
}
@@ -471,12 +471,12 @@ class SearchJob extends JobImpl {
if (_penalizePeer) {
if (_log.shouldLog(Log.WARN))
_log.warn("Penalizing peer for timeout on search: " + _peer.toBase64());
_context.profileManager().dbLookupFailed(_peer);
getContext().profileManager().dbLookupFailed(_peer);
} else {
if (_log.shouldLog(Log.ERROR))
_log.error("NOT (!!) Penalizing peer for timeout on search: " + _peer.toBase64());
}
_context.statManager().addRateData("netDb.failedPeers", 1, 0);
getContext().statManager().addRateData("netDb.failedPeers", 1, 0);
searchNext();
}
public String getName() { return "Kademlia Search Failed"; }
@@ -493,12 +493,12 @@ class SearchJob extends JobImpl {
_log.debug(getJobId() + ": State of successful search: " + _state);
if (_keepStats) {
long time = _context.clock().now() - _state.getWhenStarted();
_context.statManager().addRateData("netDb.successTime", time, 0);
_context.statManager().addRateData("netDb.successPeers", _state.getAttempted().size(), time);
long time = getContext().clock().now() - _state.getWhenStarted();
getContext().statManager().addRateData("netDb.successTime", time, 0);
getContext().statManager().addRateData("netDb.successPeers", _state.getAttempted().size(), time);
}
if (_onSuccess != null)
_context.jobQueue().addJob(_onSuccess);
getContext().jobQueue().addJob(_onSuccess);
resend();
}
@@ -513,7 +513,7 @@ class SearchJob extends JobImpl {
if (ds == null)
ds = _facade.lookupRouterInfoLocally(_state.getTarget());
if (ds != null)
_context.jobQueue().addJob(new StoreJob(_context, _facade, _state.getTarget(),
getContext().jobQueue().addJob(new StoreJob(getContext(), _facade, _state.getTarget(),
ds, null, null, RESEND_TIMEOUT,
_state.getSuccessful()));
}
@@ -528,11 +528,11 @@ class SearchJob extends JobImpl {
_log.debug(getJobId() + ": State of failed search: " + _state);
if (_keepStats) {
long time = _context.clock().now() - _state.getWhenStarted();
_context.statManager().addRateData("netDb.failedTime", time, 0);
long time = getContext().clock().now() - _state.getWhenStarted();
getContext().statManager().addRateData("netDb.failedTime", time, 0);
}
if (_onFailure != null)
_context.jobQueue().addJob(_onFailure);
getContext().jobQueue().addJob(_onFailure);
}
public String getName() { return "Kademlia NetDb Search"; }


@@ -58,7 +58,7 @@ class SearchUpdateReplyFoundJob extends JobImpl implements ReplyJob {
_log.error(getJobId() + ": Unknown db store type?!@ " + msg.getValueType());
}
_context.profileManager().dbLookupSuccessful(_peer, timeToReply);
getContext().profileManager().dbLookupSuccessful(_peer, timeToReply);
} else if (_message instanceof DatabaseSearchReplyMessage) {
_job.replyFound((DatabaseSearchReplyMessage)_message, _peer);
} else {


@@ -47,7 +47,7 @@ class StartExplorersJob extends JobImpl {
for (Iterator iter = toExplore.iterator(); iter.hasNext(); ) {
Hash key = (Hash)iter.next();
//_log.info("Starting explorer for " + key, new Exception("Exploring!"));
_context.jobQueue().addJob(new ExploreJob(_context, _facade, key));
getContext().jobQueue().addJob(new ExploreJob(getContext(), _facade, key));
}
long delay = getNextRunDelay();
if (_log.shouldLog(Log.DEBUG))
@@ -63,12 +63,12 @@ class StartExplorersJob extends JobImpl {
long delay = getNextRunDelay();
if (_log.shouldLog(Log.DEBUG))
_log.debug("Updating exploration schedule with a delay of " + delay);
getTiming().setStartAfter(_context.clock().now() + delay);
getTiming().setStartAfter(getContext().clock().now() + delay);
}
/** how long should we wait before exploring? */
private long getNextRunDelay() {
long delay = _context.clock().now() - _facade.getLastExploreNewDate();
long delay = getContext().clock().now() - _facade.getLastExploreNewDate();
if (delay < MIN_RERUN_DELAY_MS)
return MIN_RERUN_DELAY_MS;
else if (delay > MAX_RERUN_DELAY_MS)


@@ -73,11 +73,11 @@ class StoreJob extends JobImpl {
DataStructure data, Job onSuccess, Job onFailure, long timeoutMs, Set toSkip) {
super(context);
_log = context.logManager().getLog(StoreJob.class);
_context.statManager().createRateStat("netDb.storeSent", "How many netDb store messages have we sent?", "Network Database", new long[] { 5*60*1000l, 60*60*1000l, 24*60*60*1000l });
_context.statManager().createRateStat("netDb.storePeers", "How many peers each netDb must be sent to before success?", "Network Database", new long[] { 5*60*1000l, 60*60*1000l, 24*60*60*1000l });
_context.statManager().createRateStat("netDb.ackTime", "How long does it take for a peer to ack a netDb store?", "Network Database", new long[] { 5*60*1000l, 60*60*1000l, 24*60*60*1000l });
getContext().statManager().createRateStat("netDb.storeSent", "How many netDb store messages have we sent?", "Network Database", new long[] { 5*60*1000l, 60*60*1000l, 24*60*60*1000l });
getContext().statManager().createRateStat("netDb.storePeers", "How many peers each netDb must be sent to before success?", "Network Database", new long[] { 5*60*1000l, 60*60*1000l, 24*60*60*1000l });
getContext().statManager().createRateStat("netDb.ackTime", "How long does it take for a peer to ack a netDb store?", "Network Database", new long[] { 5*60*1000l, 60*60*1000l, 24*60*60*1000l });
_facade = facade;
_state = new StoreState(_context, key, data, toSkip);
_state = new StoreState(getContext(), key, data, toSkip);
_onSuccess = onSuccess;
_onFailure = onFailure;
_timeoutMs = timeoutMs;
@@ -91,7 +91,7 @@ class StoreJob extends JobImpl {
}
private boolean isExpired() {
return _context.clock().now() >= _expiration;
return getContext().clock().now() >= _expiration;
}
/**
@@ -168,7 +168,7 @@ class StoreJob extends JobImpl {
* @return ordered list of Hash objects
*/
private List getClosestRouters(Hash key, int numClosest, Set alreadyChecked) {
Hash rkey = _context.routingKeyGenerator().getRoutingKey(key);
Hash rkey = getContext().routingKeyGenerator().getRoutingKey(key);
//if (_log.shouldLog(Log.DEBUG))
// _log.debug(getJobId() + ": Current routing key for " + key + ": " + rkey);
@@ -181,7 +181,7 @@ class StoreJob extends JobImpl {
*
*/
private void sendStore(RouterInfo router) {
DatabaseStoreMessage msg = new DatabaseStoreMessage(_context);
DatabaseStoreMessage msg = new DatabaseStoreMessage(getContext());
msg.setKey(_state.getTarget());
if (_state.getData() instanceof RouterInfo)
msg.setRouterInfo((RouterInfo)_state.getData());
@@ -189,9 +189,9 @@ class StoreJob extends JobImpl {
msg.setLeaseSet((LeaseSet)_state.getData());
else
throw new IllegalArgumentException("Storing an unknown data type! " + _state.getData());
msg.setMessageExpiration(new Date(_context.clock().now() + _timeoutMs));
msg.setMessageExpiration(new Date(getContext().clock().now() + _timeoutMs));
if (router.getIdentity().equals(_context.router().getRouterInfo().getIdentity())) {
if (router.getIdentity().equals(getContext().router().getRouterInfo().getIdentity())) {
// don't send it to ourselves
if (_log.shouldLog(Log.ERROR))
_log.error(getJobId() + ": Dont send store to ourselves - why did we try?");
@@ -205,15 +205,15 @@ class StoreJob extends JobImpl {
}
private void sendStore(DatabaseStoreMessage msg, RouterInfo peer, long expiration) {
_context.statManager().addRateData("netDb.storeSent", 1, 0);
getContext().statManager().addRateData("netDb.storeSent", 1, 0);
sendStoreThroughGarlic(msg, peer, expiration);
}
private void sendStoreThroughGarlic(DatabaseStoreMessage msg, RouterInfo peer, long expiration) {
long token = _context.random().nextLong(I2NPMessage.MAX_ID_VALUE);
long token = getContext().random().nextLong(I2NPMessage.MAX_ID_VALUE);
TunnelId replyTunnelId = selectInboundTunnel();
TunnelInfo replyTunnel = _context.tunnelManager().getTunnelInfo(replyTunnelId);
TunnelInfo replyTunnel = getContext().tunnelManager().getTunnelInfo(replyTunnelId);
if (replyTunnel == null) {
_log.error("No reply inbound tunnels available!");
return;
@@ -229,7 +229,7 @@ class StoreJob extends JobImpl {
SendSuccessJob onReply = new SendSuccessJob(peer);
FailedJob onFail = new FailedJob(peer);
StoreMessageSelector selector = new StoreMessageSelector(_context, getJobId(), peer, token, expiration);
StoreMessageSelector selector = new StoreMessageSelector(getContext(), getJobId(), peer, token, expiration);
TunnelId outTunnelId = selectOutboundTunnel();
if (outTunnelId != null) {
@@ -238,12 +238,12 @@ class StoreJob extends JobImpl {
// + peer.getIdentity().getHash().toBase64());
TunnelId targetTunnelId = null; // not needed
Job onSend = null; // not wanted
SendTunnelMessageJob j = new SendTunnelMessageJob(_context, msg, outTunnelId,
SendTunnelMessageJob j = new SendTunnelMessageJob(getContext(), msg, outTunnelId,
peer.getIdentity().getHash(),
targetTunnelId, onSend, onReply,
onFail, selector, STORE_TIMEOUT_MS,
STORE_PRIORITY);
_context.jobQueue().addJob(j);
getContext().jobQueue().addJob(j);
} else {
if (_log.shouldLog(Log.ERROR))
_log.error("No outbound tunnels to send a dbStore out!");
@@ -258,7 +258,7 @@ class StoreJob extends JobImpl {
criteria.setReliabilityPriority(20);
criteria.setMaximumTunnelsRequired(1);
criteria.setMinimumTunnelsRequired(1);
List tunnelIds = _context.tunnelManager().selectOutboundTunnelIds(criteria);
List tunnelIds = getContext().tunnelManager().selectOutboundTunnelIds(criteria);
if (tunnelIds.size() <= 0) {
_log.error("No outbound tunnels?!");
return null;
@@ -274,7 +274,7 @@ class StoreJob extends JobImpl {
criteria.setReliabilityPriority(20);
criteria.setMaximumTunnelsRequired(1);
criteria.setMinimumTunnelsRequired(1);
List tunnelIds = _context.tunnelManager().selectInboundTunnelIds(criteria);
List tunnelIds = getContext().tunnelManager().selectInboundTunnelIds(criteria);
if (tunnelIds.size() <= 0) {
_log.error("No inbound tunnels?!");
return null;
@@ -292,7 +292,7 @@ class StoreJob extends JobImpl {
private RouterInfo _peer;
public SendSuccessJob(RouterInfo peer) {
super(StoreJob.this._context);
super(StoreJob.this.getContext());
_peer = peer;
}
@@ -302,8 +302,8 @@ class StoreJob extends JobImpl {
if (_log.shouldLog(Log.INFO))
_log.info(StoreJob.this.getJobId() + ": Marking store of " + _state.getTarget()
+ " to " + _peer.getIdentity().getHash().toBase64() + " successful after " + howLong);
_context.profileManager().dbStoreSent(_peer.getIdentity().getHash(), howLong);
_context.statManager().addRateData("netDb.ackTime", howLong, howLong);
getContext().profileManager().dbStoreSent(_peer.getIdentity().getHash(), howLong);
getContext().statManager().addRateData("netDb.ackTime", howLong, howLong);
if (_state.getSuccessful().size() >= REDUNDANCY) {
succeed();
@@ -326,14 +326,14 @@ class StoreJob extends JobImpl {
private RouterInfo _peer;
public FailedJob(RouterInfo peer) {
super(StoreJob.this._context);
super(StoreJob.this.getContext());
_peer = peer;
}
public void runJob() {
if (_log.shouldLog(Log.WARN))
_log.warn(StoreJob.this.getJobId() + ": Peer " + _peer.getIdentity().getHash().toBase64() + " timed out");
_state.replyTimeout(_peer.getIdentity().getHash());
_context.profileManager().dbStoreFailed(_peer.getIdentity().getHash());
getContext().profileManager().dbStoreFailed(_peer.getIdentity().getHash());
sendNext();
}
@@ -349,9 +349,9 @@ class StoreJob extends JobImpl {
if (_log.shouldLog(Log.DEBUG))
_log.debug(getJobId() + ": State of successful send: " + _state);
if (_onSuccess != null)
_context.jobQueue().addJob(_onSuccess);
getContext().jobQueue().addJob(_onSuccess);
_facade.noteKeySent(_state.getTarget());
_context.statManager().addRateData("netDb.storePeers", _state.getAttempted().size(), _state.getWhenCompleted()-_state.getWhenStarted());
getContext().statManager().addRateData("netDb.storePeers", _state.getAttempted().size(), _state.getWhenCompleted()-_state.getWhenStarted());
}
/**
@@ -363,6 +363,6 @@ class StoreJob extends JobImpl {
if (_log.shouldLog(Log.DEBUG))
_log.debug(getJobId() + ": State of failed send: " + _state, new Exception("Who failed me?"));
if (_onFailure != null)
_context.jobQueue().addJob(_onFailure);
getContext().jobQueue().addJob(_onFailure);
}
}


@@ -27,18 +27,18 @@ class EvaluateProfilesJob extends JobImpl {
public String getName() { return "Evaluate peer profiles"; }
public void runJob() {
try {
long start = _context.clock().now();
Set allPeers = _context.profileOrganizer().selectAllPeers();
long afterSelect = _context.clock().now();
long start = getContext().clock().now();
Set allPeers = getContext().profileOrganizer().selectAllPeers();
long afterSelect = getContext().clock().now();
for (Iterator iter = allPeers.iterator(); iter.hasNext(); ) {
Hash peer = (Hash)iter.next();
PeerProfile profile = _context.profileOrganizer().getProfile(peer);
PeerProfile profile = getContext().profileOrganizer().getProfile(peer);
if (profile != null)
profile.coallesceStats();
}
long afterCoallesce = _context.clock().now();
_context.profileOrganizer().reorganize();
long afterReorganize = _context.clock().now();
long afterCoallesce = getContext().clock().now();
getContext().profileOrganizer().reorganize();
long afterReorganize = getContext().clock().now();
if (_log.shouldLog(Log.DEBUG))
_log.debug("Profiles coallesced and reorganized. total: " + allPeers.size() + ", selectAll: " + (afterSelect-start) + "ms, coallesce: " + (afterCoallesce-afterSelect) + "ms, reorganize: " + (afterReorganize-afterSelect));


@@ -8,6 +8,9 @@ package net.i2p.router.peermanager;
*
*/
import java.io.IOException;
import java.io.OutputStream;
import java.util.HashSet;
import java.util.Iterator;
import java.util.Set;
@@ -124,5 +127,7 @@ class PeerManager {
return rv;
}
public String renderStatusHTML() { return _organizer.renderStatusHTML(); }
public void renderStatusHTML(OutputStream out) throws IOException {
_organizer.renderStatusHTML(out);
}
}


@@ -8,6 +8,9 @@ package net.i2p.router.peermanager;
*
*/
import java.io.IOException;
import java.io.OutputStream;
import java.util.ArrayList;
import java.util.List;
@@ -51,5 +54,7 @@ public class PeerManagerFacadeImpl implements PeerManagerFacade {
return new ArrayList(_manager.selectPeers(criteria));
}
public String renderStatusHTML() { return _manager.renderStatusHTML(); }
public void renderStatusHTML(OutputStream out) throws IOException {
_manager.renderStatusHTML(out);
}
}


@@ -2,6 +2,8 @@ package net.i2p.router.peermanager;
import java.io.File;
import java.text.DecimalFormat;
import net.i2p.data.Hash;
import net.i2p.router.RouterContext;
import net.i2p.stat.RateStat;
@@ -297,6 +299,8 @@ public class PeerProfile {
*/
public static void main(String args[]) {
RouterContext ctx = new RouterContext(new net.i2p.router.Router());
DecimalFormat fmt = new DecimalFormat("0,000.0");
fmt.setPositivePrefix("+");
ProfilePersistenceHelper helper = new ProfilePersistenceHelper(ctx);
try { Thread.sleep(5*1000); } catch (InterruptedException e) {}
StringBuffer buf = new StringBuffer(1024);
@@ -308,9 +312,9 @@ public class PeerProfile {
}
//profile.coallesceStats();
buf.append("Peer " + profile.getPeer().toBase64()
+ ":\t Speed:\t" + profile.calculateSpeed()
+ " Reliability:\t" + profile.calculateReliability()
+ " Integration:\t" + profile.calculateIntegration()
+ ":\t Speed:\t" + fmt.format(profile.calculateSpeed())
+ " Reliability:\t" + fmt.format(profile.calculateReliability())
+ " Integration:\t" + fmt.format(profile.calculateIntegration())
+ " Active?\t" + profile.getIsActive()
+ " Failing?\t" + profile.calculateIsFailing()
+ '\n');
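
For reference, that pattern and prefix produce fixed-width, signed output (standard java.text.DecimalFormat behavior):

import java.text.DecimalFormat;

// "0,000.0" pads to four integer digits with grouping and one decimal;
// the positive prefix makes the sign explicit in both directions.
public class FormatDemo {
    public static void main(String[] args) {
        DecimalFormat fmt = new DecimalFormat("0,000.0");
        fmt.setPositivePrefix("+");
        System.out.println(fmt.format(1234.56)); // +1,234.6
        System.out.println(fmt.format(-3.2));    // -0,003.2
    }
}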


@@ -53,7 +53,7 @@ public class PeerTestJob extends JobImpl {
public void startTesting(PeerManager manager) {
_manager = manager;
_keepTesting = true;
_context.jobQueue().addJob(this);
getContext().jobQueue().addJob(this);
if (_log.shouldLog(Log.INFO))
_log.info("Start testing peers");
}
@@ -97,7 +97,7 @@ public class PeerTestJob extends JobImpl {
Set peers = new HashSet(peerHashes.size());
for (Iterator iter = peerHashes.iterator(); iter.hasNext(); ) {
Hash peer = (Hash)iter.next();
RouterInfo peerInfo = _context.netDb().lookupRouterInfoLocally(peer);
RouterInfo peerInfo = getContext().netDb().lookupRouterInfoLocally(peer);
if (peerInfo != null) {
peers.add(peerInfo);
} else {
@@ -119,17 +119,17 @@ public class PeerTestJob extends JobImpl {
return;
}
TunnelInfo inTunnel = _context.tunnelManager().getTunnelInfo(inTunnelId);
RouterInfo inGateway = _context.netDb().lookupRouterInfoLocally(inTunnel.getThisHop());
TunnelInfo inTunnel = getContext().tunnelManager().getTunnelInfo(inTunnelId);
RouterInfo inGateway = getContext().netDb().lookupRouterInfoLocally(inTunnel.getThisHop());
if (inGateway == null) {
_log.error("We can't find the gateway to our inbound tunnel?! wtf");
return;
}
long timeoutMs = getTestTimeout();
long expiration = _context.clock().now() + timeoutMs;
long expiration = getContext().clock().now() + timeoutMs;
long nonce = _context.random().nextLong(I2NPMessage.MAX_ID_VALUE);
long nonce = getContext().random().nextLong(I2NPMessage.MAX_ID_VALUE);
DatabaseStoreMessage msg = buildMessage(peer, inTunnelId, inGateway.getIdentity().getHash(), nonce, expiration);
TunnelId outTunnelId = getOutboundTunnelId();
@@ -137,7 +137,7 @@ public class PeerTestJob extends JobImpl {
_log.error("No tunnels to send search out through! wtf!");
return;
}
TunnelInfo outTunnel = _context.tunnelManager().getTunnelInfo(outTunnelId);
TunnelInfo outTunnel = getContext().tunnelManager().getTunnelInfo(outTunnelId);
if (_log.shouldLog(Log.DEBUG))
_log.debug(getJobId() + ": Sending peer test to " + peer.getIdentity().getHash().toBase64()
@@ -145,12 +145,12 @@ public class PeerTestJob extends JobImpl {
+ "] via tunnel [" + msg.getReplyTunnel() + "]");
ReplySelector sel = new ReplySelector(peer.getIdentity().getHash(), nonce, expiration);
PeerReplyFoundJob reply = new PeerReplyFoundJob(_context, peer, inTunnel, outTunnel);
PeerReplyTimeoutJob timeoutJob = new PeerReplyTimeoutJob(_context, peer, inTunnel, outTunnel);
SendTunnelMessageJob j = new SendTunnelMessageJob(_context, msg, outTunnelId, peer.getIdentity().getHash(),
PeerReplyFoundJob reply = new PeerReplyFoundJob(getContext(), peer, inTunnel, outTunnel);
PeerReplyTimeoutJob timeoutJob = new PeerReplyTimeoutJob(getContext(), peer, inTunnel, outTunnel);
SendTunnelMessageJob j = new SendTunnelMessageJob(getContext(), msg, outTunnelId, peer.getIdentity().getHash(),
null, null, reply, timeoutJob, sel,
timeoutMs, TEST_PRIORITY);
_context.jobQueue().addJob(j);
getContext().jobQueue().addJob(j);
}
@@ -163,7 +163,7 @@ public class PeerTestJob extends JobImpl {
TunnelSelectionCriteria crit = new TunnelSelectionCriteria();
crit.setMaximumTunnelsRequired(1);
crit.setMinimumTunnelsRequired(1);
List tunnelIds = _context.tunnelManager().selectOutboundTunnelIds(crit);
List tunnelIds = getContext().tunnelManager().selectOutboundTunnelIds(crit);
if (tunnelIds.size() <= 0) {
return null;
}
@@ -180,7 +180,7 @@ public class PeerTestJob extends JobImpl {
TunnelSelectionCriteria crit = new TunnelSelectionCriteria();
crit.setMaximumTunnelsRequired(1);
crit.setMinimumTunnelsRequired(1);
List tunnelIds = _context.tunnelManager().selectInboundTunnelIds(crit);
List tunnelIds = getContext().tunnelManager().selectInboundTunnelIds(crit);
if (tunnelIds.size() <= 0) {
return null;
}
@@ -191,7 +191,7 @@ public class PeerTestJob extends JobImpl {
* Build a message to test the peer with
*/
private DatabaseStoreMessage buildMessage(RouterInfo peer, TunnelId replyTunnel, Hash replyGateway, long nonce, long expiration) {
DatabaseStoreMessage msg = new DatabaseStoreMessage(_context);
DatabaseStoreMessage msg = new DatabaseStoreMessage(getContext());
msg.setKey(peer.getIdentity().getHash());
msg.setRouterInfo(peer);
msg.setReplyGateway(replyGateway);
@@ -220,7 +220,7 @@ public class PeerTestJob extends JobImpl {
if (message instanceof DeliveryStatusMessage) {
DeliveryStatusMessage msg = (DeliveryStatusMessage)message;
if (_nonce == msg.getMessageId()) {
long timeLeft = _expiration - _context.clock().now();
long timeLeft = _expiration - getContext().clock().now();
if (timeLeft < 0)
_log.warn("Took too long to get a reply from peer " + _peer.toBase64()
+ ": " + (0-timeLeft) + "ms too slow");
@@ -247,30 +247,30 @@ public class PeerTestJob extends JobImpl {
}
public String getName() { return "Peer test successful"; }
public void runJob() {
long responseTime = _context.clock().now() - _testBegin;
long responseTime = getContext().clock().now() - _testBegin;
if (_log.shouldLog(Log.DEBUG))
_log.debug("successful peer test after " + responseTime + " for "
+ _peer.getIdentity().getHash().toBase64() + " using outbound tunnel "
+ _sendTunnel.getTunnelId().getTunnelId() + " and inbound tunnel "
+ _replyTunnel.getTunnelId().getTunnelId());
_context.profileManager().dbLookupSuccessful(_peer.getIdentity().getHash(), responseTime);
getContext().profileManager().dbLookupSuccessful(_peer.getIdentity().getHash(), responseTime);
_sendTunnel.setLastTested(_context.clock().now());
_replyTunnel.setLastTested(_context.clock().now());
_sendTunnel.setLastTested(getContext().clock().now());
_replyTunnel.setLastTested(getContext().clock().now());
TunnelInfo cur = _replyTunnel;
while (cur != null) {
Hash peer = cur.getThisHop();
if ( (peer != null) && (!_context.routerHash().equals(peer)) )
_context.profileManager().tunnelTestSucceeded(peer, responseTime);
if ( (peer != null) && (!getContext().routerHash().equals(peer)) )
getContext().profileManager().tunnelTestSucceeded(peer, responseTime);
cur = cur.getNextHopInfo();
}
cur = _sendTunnel;
while (cur != null) {
Hash peer = cur.getThisHop();
if ( (peer != null) && (!_context.routerHash().equals(peer)) )
_context.profileManager().tunnelTestSucceeded(peer, responseTime);
if ( (peer != null) && (!getContext().routerHash().equals(peer)) )
getContext().profileManager().tunnelTestSucceeded(peer, responseTime);
cur = cur.getNextHopInfo();
}
}
@@ -298,7 +298,7 @@ public class PeerTestJob extends JobImpl {
private boolean getShouldFailPeer() { return true; }
public void runJob() {
if (getShouldFailPeer())
_context.profileManager().dbLookupFailed(_peer.getIdentity().getHash());
getContext().profileManager().dbLookupFailed(_peer.getIdentity().getHash());
if (_log.shouldLog(Log.DEBUG))
_log.debug("failed peer test for "
@@ -308,21 +308,21 @@ public class PeerTestJob extends JobImpl {
if (getShouldFailTunnels()) {
_sendTunnel.setLastTested(_context.clock().now());
_replyTunnel.setLastTested(_context.clock().now());
_sendTunnel.setLastTested(getContext().clock().now());
_replyTunnel.setLastTested(getContext().clock().now());
TunnelInfo cur = _replyTunnel;
while (cur != null) {
Hash peer = cur.getThisHop();
if ( (peer != null) && (!_context.routerHash().equals(peer)) )
_context.profileManager().tunnelFailed(peer);
if ( (peer != null) && (!getContext().routerHash().equals(peer)) )
getContext().profileManager().tunnelFailed(peer);
cur = cur.getNextHopInfo();
}
cur = _sendTunnel;
while (cur != null) {
Hash peer = cur.getThisHop();
if ( (peer != null) && (!_context.routerHash().equals(peer)) )
_context.profileManager().tunnelFailed(peer);
if ( (peer != null) && (!getContext().routerHash().equals(peer)) )
getContext().profileManager().tunnelFailed(peer);
cur = cur.getNextHopInfo();
}
}
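
The reply selector above keys entirely off the nonce: the outbound test message stores it as the reply token, and only a DeliveryStatusMessage echoing the same id counts as an answer. A reduced sketch of that check (a hypothetical helper, assuming the message classes shown in this diff and their usual i2np package placement):

import net.i2p.data.i2np.DeliveryStatusMessage;
import net.i2p.data.i2np.I2NPMessage;

// Accepts only the status message that echoes our stored nonce back.
class NonceMatchSketch {
    static boolean isMatch(I2NPMessage message, long nonce) {
        if (!(message instanceof DeliveryStatusMessage))
            return false;
        return ((DeliveryStatusMessage)message).getMessageId() == nonce;
    }
}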


@@ -14,7 +14,7 @@ class PersistProfilesJob extends JobImpl {
public PersistProfilesJob(RouterContext ctx, PeerManager mgr) {
super(ctx);
_mgr = mgr;
getTiming().setStartAfter(_context.clock().now() + PERSIST_DELAY);
getTiming().setStartAfter(getContext().clock().now() + PERSIST_DELAY);
}
public String getName() { return "Persist profiles"; }
@@ -24,14 +24,14 @@ class PersistProfilesJob extends JobImpl {
int i = 0;
for (Iterator iter = peers.iterator(); iter.hasNext(); )
hashes[i++] = (Hash)iter.next();
_context.jobQueue().addJob(new PersistProfileJob(hashes));
getContext().jobQueue().addJob(new PersistProfileJob(hashes));
}
private class PersistProfileJob extends JobImpl {
private Hash _peers[];
private int _cur;
public PersistProfileJob(Hash peers[]) {
super(PersistProfilesJob.this._context);
super(PersistProfilesJob.this.getContext());
_peers = peers;
_cur = 0;
}
@@ -42,11 +42,11 @@ class PersistProfilesJob extends JobImpl {
}
if (_cur >= _peers.length) {
// no more left, requeue up the main persist-em-all job
PersistProfilesJob.this.getTiming().setStartAfter(_context.clock().now() + PERSIST_DELAY);
PersistProfilesJob.this._context.jobQueue().addJob(PersistProfilesJob.this);
PersistProfilesJob.this.getTiming().setStartAfter(getContext().clock().now() + PERSIST_DELAY);
PersistProfilesJob.this.getContext().jobQueue().addJob(PersistProfilesJob.this);
} else {
// we've got peers left to persist, so requeue the persist profile job
PersistProfilesJob.this._context.jobQueue().addJob(PersistProfileJob.this);
PersistProfilesJob.this.getContext().jobQueue().addJob(PersistProfileJob.this);
}
}
public String getName() { return "Persist profile"; }


@@ -91,14 +91,17 @@ public class ProfileManagerImpl implements ProfileManager {
}
/**
* Note that a router explicitly rejected joining a tunnel
* Note that a router explicitly rejected joining a tunnel.
*
* @param explicit true if the tunnel request was explicitly rejected, false
* if we just didn't get a reply back in time.
*/
public void tunnelRejected(Hash peer, long responseTimeMs) {
public void tunnelRejected(Hash peer, long responseTimeMs, boolean explicit) {
PeerProfile data = getProfile(peer);
if (data == null) return;
data.setLastHeardFrom(_context.clock().now());
data.getTunnelHistory().incrementRejected();
if (explicit)
data.getTunnelHistory().incrementRejected();
}
/**

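With the new flag, call sites can separate a peer that actually said no from one that merely never answered; only the former should touch the rejection history. Hypothetical call sites (class and method names here are illustrative, not from the diff):

import net.i2p.data.Hash;
import net.i2p.router.ProfileManager;

// Only an explicit "no" from the peer counts against its history.
class TunnelRejectionExample {
    static void onReject(ProfileManager mgr, Hash peer, long responseTimeMs) {
        mgr.tunnelRejected(peer, responseTimeMs, true);  // peer refused
    }
    static void onTimeout(ProfileManager mgr, Hash peer, long responseTimeMs) {
        mgr.tunnelRejected(peer, responseTimeMs, false); // no reply in time
    }
}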

@@ -56,6 +56,8 @@ public class ProfileOrganizer {
/** integration value, separating well integrated from not well integrated */
private double _thresholdIntegrationValue;
private InverseReliabilityComparator _calc;
/**
* Defines what percentage of the average reliability will be used as the
* reliability threshold. For example, .5 means all peers with the reliability
@@ -82,12 +84,13 @@ public class ProfileOrganizer {
public ProfileOrganizer(RouterContext context) {
_context = context;
_log = context.logManager().getLog(ProfileOrganizer.class);
_calc = new InverseReliabilityComparator();
_fastAndReliablePeers = new HashMap(16);
_reliablePeers = new HashMap(16);
_wellIntegratedPeers = new HashMap(16);
_notFailingPeers = new HashMap(16);
_failingPeers = new HashMap(16);
_strictReliabilityOrder = new TreeSet(new InverseReliabilityComparator());
_strictReliabilityOrder = new TreeSet(_calc);
_thresholdSpeedValue = 0.0d;
_thresholdReliabilityValue = 0.0d;
_thresholdIntegrationValue = 0.0d;
@@ -98,25 +101,40 @@ public class ProfileOrganizer {
* Order profiles by their reliability, but backwards (most reliable / highest value first).
*
*/
private static final class InverseReliabilityComparator implements Comparator {
private static final Comparator _comparator = new InverseReliabilityComparator();
private final class InverseReliabilityComparator implements Comparator {
/**
* Compare the two objects backwards. The standard comparator returns
* -1 if lhs is less than rhs, 1 if lhs is greater than rhs, or 0 if they're
* equal. To keep a strict ordering, we measure peers with equal reliability
* values according to their hashes
*
* @return -1 if the right hand side is smaller, 1 if the left hand side is
* smaller, or 0 if they are the same peer (Comparator.compare() inverted)
*/
public int compare(Object lhs, Object rhs) {
if ( (lhs == null) || (rhs == null) || (!(lhs instanceof PeerProfile)) || (!(rhs instanceof PeerProfile)) )
throw new ClassCastException("Only profiles can be compared - lhs = " + lhs + " rhs = " + rhs);
PeerProfile left = (PeerProfile)lhs;
PeerProfile right= (PeerProfile)rhs;
double rval = right.getReliabilityValue();
double lval = left.getReliabilityValue();
// note below that yes, we are treating left and right backwards. see: classname
int diff = (int)(right.getReliabilityValue() - left.getReliabilityValue());
// we can't just return that, since the set would b0rk on equal values (just because two profiles
// rank the same way doesn't mean they're the same peer!) So if their reliabilities are equal, we
// order them by the peer's hash
if (diff != 0)
return diff;
if (left.getPeer().equals(right.getPeer()))
return 0;
else
if (lval == rval) // note the following call inverts right and left (see: classname)
return DataHelper.compareTo(right.getPeer().getData(), left.getPeer().getData());
boolean rightBigger = rval > lval;
if (_log.shouldLog(Log.DEBUG))
_log.debug("The reliability of " + right.getPeer().toBase64()
+ " and " + left.getPeer().toBase64() + " marks " + (rightBigger ? "right" : "left")
+ " as larger: r=" + right.getReliabilityValue() + " l="
+ left.getReliabilityValue());
if (rightBigger)
return 1;
else
return -1;
}
}
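
The rewrite also removes a truncation bug: the old (int)(right - left) collapsed any reliability gap smaller than 1.0 to zero, so distinct peers compared as equal and the TreeSet silently dropped one of them. A quick demonstration of the difference:

// Any sub-1.0 gap truncates to 0 under the old int cast.
public class CastTruncation {
    public static void main(String[] args) {
        double lval = 0.25, rval = 0.75;
        int diff = (int)(rval - lval);      // 0 -- the gap is lost
        boolean rightBigger = rval > lval;  // true -- the gap is kept
        System.out.println(diff + " " + rightBigger); // prints "0 true"
    }
}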
@@ -319,8 +337,11 @@ public class ProfileOrganizer {
_reliablePeers.clear();
_fastAndReliablePeers.clear();
Set reordered = new TreeSet(InverseReliabilityComparator._comparator);
reordered.addAll(_strictReliabilityOrder);
Set reordered = new TreeSet(_calc);
for (Iterator iter = _strictReliabilityOrder.iterator(); iter.hasNext(); ) {
PeerProfile prof = (PeerProfile)iter.next();
reordered.add(prof);
}
_strictReliabilityOrder = reordered;
calculateThresholds(allPeers);
@@ -332,11 +353,17 @@ public class ProfileOrganizer {
locked_unfailAsNecessary();
locked_promoteFastAsNecessary();
}
if (_log.shouldLog(Log.DEBUG)) {
_log.debug("Profiles reorganized. averages: [integration: " + _thresholdIntegrationValue + ", reliability: " + _thresholdReliabilityValue + ", speed: " + _thresholdSpeedValue + "]");
_log.debug("Strictly organized: " + _strictReliabilityOrder);
if (_log.shouldLog(Log.DEBUG)) {
_log.debug("Profiles reorganized. averages: [integration: " + _thresholdIntegrationValue + ", reliability: " + _thresholdReliabilityValue + ", speed: " + _thresholdSpeedValue + "]");
StringBuffer buf = new StringBuffer(512);
for (Iterator iter = _strictReliabilityOrder.iterator(); iter.hasNext(); ) {
PeerProfile prof = (PeerProfile)iter.next();
buf.append('[').append(prof.toString()).append('=').append(prof.getReliabilityValue()).append("] ");
}
_log.debug("Strictly organized (most reliable first): " + buf.toString());
_log.debug("fast and reliable: " + _fastAndReliablePeers.values());
}
}
}
@@ -421,7 +448,7 @@ public class ProfileOrganizer {
*
*/
private void calculateThresholds(Set allPeers) {
Set reordered = new TreeSet(InverseReliabilityComparator._comparator);
Set reordered = new TreeSet(_calc);
for (Iterator iter = allPeers.iterator(); iter.hasNext(); ) {
PeerProfile profile = (PeerProfile)iter.next();
@@ -563,7 +590,7 @@ public class ProfileOrganizer {
_persistenceHelper.writeProfile(prof, out);
}
public String renderStatusHTML() {
public void renderStatusHTML(OutputStream out) throws IOException {
Set peers = selectAllPeers();
long hideBefore = _context.clock().now() - 6*60*60*1000;
@@ -581,7 +608,7 @@ public class ProfileOrganizer {
int reliable = 0;
int integrated = 0;
int failing = 0;
StringBuffer buf = new StringBuffer(8*1024);
StringBuffer buf = new StringBuffer(16*1024);
buf.append("<h2>Peer Profiles</h2>\n");
buf.append("<table border=\"1\">");
buf.append("<tr>");
@@ -660,7 +687,7 @@ public class ProfileOrganizer {
buf.append("<b>Speed:</b> ").append(num(_thresholdSpeedValue)).append(" (").append(fast).append(" fast peers)<br />");
buf.append("<b>Reliability:</b> ").append(num(_thresholdReliabilityValue)).append(" (").append(reliable).append(" reliable peers)<br />");
buf.append("<b>Integration:</b> ").append(num(_thresholdIntegrationValue)).append(" (").append(integrated).append(" well integrated peers)<br />");
return buf.toString();
out.write(buf.toString().getBytes());
}


@@ -1,6 +1,7 @@
package net.i2p.router.peermanager;
import net.i2p.router.RouterContext;
import net.i2p.stat.RateStat;
import net.i2p.util.Log;
/**
@@ -23,34 +24,48 @@ public class ReliabilityCalculator extends Calculator {
return profile.getReliabilityBonus();
long val = 0;
val += profile.getSendSuccessSize().getRate(60*1000).getCurrentEventCount() * 5;
val += profile.getSendSuccessSize().getRate(60*1000).getLastEventCount() * 2;
val += profile.getSendSuccessSize().getRate(60*60*1000).getLastEventCount();
val += profile.getSendSuccessSize().getRate(60*60*1000).getCurrentEventCount();
val += profile.getSendSuccessSize().getRate(60*1000).getCurrentEventCount() * 20;
val += profile.getSendSuccessSize().getRate(60*1000).getLastEventCount() * 10;
val += profile.getSendSuccessSize().getRate(60*60*1000).getLastEventCount() * 1;
val += profile.getSendSuccessSize().getRate(60*60*1000).getCurrentEventCount() * 5;
val += profile.getTunnelCreateResponseTime().getRate(10*60*1000).getLastEventCount() * 5;
val += profile.getTunnelCreateResponseTime().getRate(60*60*1000).getCurrentEventCount();
val += profile.getTunnelCreateResponseTime().getRate(60*60*1000).getLastEventCount();
val -= profile.getSendFailureSize().getRate(60*1000).getLastEventCount() * 5;
val -= profile.getSendFailureSize().getRate(60*60*1000).getCurrentEventCount()*2;
val -= profile.getSendFailureSize().getRate(60*60*1000).getLastEventCount()*2;
//val -= profile.getSendFailureSize().getRate(60*1000).getLastEventCount() * 5;
//val -= profile.getSendFailureSize().getRate(60*60*1000).getCurrentEventCount()*2;
//val -= profile.getSendFailureSize().getRate(60*60*1000).getLastEventCount()*2;
// penalize them heavily for dropping netDb requests
val -= profile.getDBHistory().getFailedLookupRate().getRate(60*1000).getCurrentEventCount() * 10;
val -= profile.getDBHistory().getFailedLookupRate().getRate(60*1000).getLastEventCount() * 5;
//val -= profile.getDBHistory().getFailedLookupRate().getRate(60*60*1000).getCurrentEventCount();
//val -= profile.getDBHistory().getFailedLookupRate().getRate(60*60*1000).getLastEventCount();
//val -= profile.getDBHistory().getFailedLookupRate().getRate(24*60*60*1000).getCurrentEventCount() * 50;
//val -= profile.getDBHistory().getFailedLookupRate().getRate(24*60*60*1000).getLastEventCount() * 20;
RateStat rejRate = profile.getTunnelHistory().getRejectionRate();
if (rejRate.getRate(60*1000).getCurrentEventCount() > 0)
val -= 200;
if (rejRate.getRate(60*1000).getLastEventCount() > 0)
val -= 100;
if (rejRate.getRate(10*60*1000).getCurrentEventCount() > 0)
val -= 10;
if (rejRate.getRate(10*60*1000).getLastEventCount() > 0)
val -= 5;
val -= profile.getCommError().getRate(60*1000).getCurrentEventCount() * 200;
val -= profile.getCommError().getRate(60*1000).getLastEventCount() * 200;
// penalize them heavily for dropping netDb requests (though these could have
// failed due to tunnel timeouts, so don't be too mean)
if (profile.getDBHistory().getFailedLookupRate().getRate(60*1000).getCurrentEventCount() > 0)
val -= 10;
if (profile.getDBHistory().getFailedLookupRate().getRate(60*1000).getLastEventCount() > 0)
val -= 5;
val -= profile.getCommError().getRate(60*60*1000).getCurrentEventCount() * 50;
val -= profile.getCommError().getRate(60*60*1000).getLastEventCount() * 50;
// scream and shout on network errors
if (profile.getCommError().getRate(60*1000).getCurrentEventCount() > 0)
val -= 200;
if (profile.getCommError().getRate(60*1000).getLastEventCount() > 0)
val -= 200;
val -= profile.getCommError().getRate(24*60*60*1000).getCurrentEventCount() * 10;
if (profile.getCommError().getRate(60*60*1000).getCurrentEventCount() > 0)
val -= 10;
if (profile.getCommError().getRate(60*60*1000).getLastEventCount() > 0)
val -= 10;
val -= profile.getCommError().getRate(24*60*60*1000).getCurrentEventCount() * 1;
long now = _context.clock().now();
@@ -65,10 +80,10 @@ public class ReliabilityCalculator extends Calculator {
val -= 100; // we got a rejection within the last minute
}
if ( (profile.getLastSendSuccessful() > 0) && (now - 24*60*60*1000 > profile.getLastSendSuccessful()) ) {
// we know they're real, but we haven't sent them a message successfully in over a day.
val -= 1000;
}
//if ( (profile.getLastSendSuccessful() > 0) && (now - 24*60*60*1000 > profile.getLastSendSuccessful()) ) {
// // we know they're real, but we haven't sent them a message successfully in over a day.
// val -= 1000;
//}
val += profile.getReliabilityBonus();
return val;
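
Note the shape change in the penalties: the old code subtracted per event, so a burst could sink a score without bound, while the new code mostly tests whether any event landed in the window and applies a flat hit. An illustrative comparison (the numbers are made up):

// Compares the old linear penalty to the new presence-based one for a
// hypothetical burst of comm errors in the last minute.
public class PenaltyShape {
    public static void main(String[] args) {
        long errorsLastMinute = 3;
        long oldStyle = -(errorsLastMinute * 200);         // -600, unbounded
        long newStyle = (errorsLastMinute > 0) ? -200 : 0; // flat -200
        System.out.println(oldStyle + " vs " + newStyle);  // -600 vs -200
    }
}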


@@ -5,6 +5,8 @@ import java.io.OutputStream;
import java.util.Properties;
import net.i2p.router.RouterContext;
import net.i2p.stat.RateStat;
import net.i2p.util.Log;
/**
* Tunnel related history information
@@ -12,21 +14,29 @@ import net.i2p.router.RouterContext;
*/
public class TunnelHistory {
private RouterContext _context;
private Log _log;
private volatile long _lifetimeAgreedTo;
private volatile long _lifetimeRejected;
private volatile long _lastAgreedTo;
private volatile long _lastRejected;
private volatile long _lifetimeFailed;
private volatile long _lastFailed;
private RateStat _rejectRate;
public TunnelHistory(RouterContext context) {
_context = context;
_log = context.logManager().getLog(TunnelHistory.class);
_lifetimeAgreedTo = 0;
_lifetimeFailed = 0;
_lifetimeRejected = 0;
_lastAgreedTo = 0;
_lastFailed = 0;
_lastRejected = 0;
createRates();
}
private void createRates() {
_rejectRate = new RateStat("tunnelHistory.rejectRate", "How often does this peer reject a tunnel request?", "tunnelHistory", new long[] { 60*1000l, 10*60*1000l, 60*60*1000l, 24*60*60*1000l });
}
/** total tunnels the peer has agreed to participate in */
@@ -48,6 +58,7 @@ public class TunnelHistory {
}
public void incrementRejected() {
_lifetimeRejected++;
_rejectRate.addData(1, 1);
_lastRejected = _context.clock().now();
}
public void incrementFailed() {
@@ -62,6 +73,8 @@ public class TunnelHistory {
public void setLastRejected(long when) { _lastRejected = when; }
public void setLastFailed(long when) { _lastFailed = when; }
public RateStat getRejectionRate() { return _rejectRate; }
private final static String NL = System.getProperty("line.separator");
public void store(OutputStream out) throws IOException {
@@ -77,6 +90,7 @@ public class TunnelHistory {
add(buf, "lifetimeFailed", _lifetimeFailed, "How many tunnels has the peer ever agreed to participate in that failed prematurely?");
add(buf, "lifetimeRejected", _lifetimeRejected, "How many tunnels has the peer ever refused to participate in?");
out.write(buf.toString().getBytes());
_rejectRate.store(out, "tunnelHistory.rejectRate");
}
private void add(StringBuffer buf, String name, long val, String description) {
@@ -91,6 +105,14 @@ public class TunnelHistory {
_lifetimeAgreedTo = getLong(props, "tunnels.lifetimeAgreedTo");
_lifetimeFailed = getLong(props, "tunnels.lifetimeFailed");
_lifetimeRejected = getLong(props, "tunnels.lifetimeRejected");
try {
_rejectRate.load(props, "tunnelHistory.rejectRate", true);
_log.debug("Loading tunnelHistory.rejectRate");
} catch (IllegalArgumentException iae) {
_log.warn("TunnelHistory reject rate is corrupt, resetting", iae);
createRates();
}
}
private final static long getLong(Properties props, String key) {

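The persist/reload added above follows a simple round trip: store the rate under a stable prefix, reload it later, and fall back to a fresh stat if the stored data fails validation. A sketch using only the RateStat calls the hunks themselves show:

import java.io.IOException;
import java.io.OutputStream;
import java.util.Properties;
import net.i2p.stat.RateStat;

// Round trip: save, restore, reset on corruption.
class RejectRateRoundTrip {
    private static RateStat newRate() {
        return new RateStat("tunnelHistory.rejectRate",
                "How often does this peer reject a tunnel request?",
                "tunnelHistory",
                new long[] { 60*1000l, 10*60*1000l, 60*60*1000l, 24*60*60*1000l });
    }
    static void save(RateStat rate, OutputStream out) throws IOException {
        rate.store(out, "tunnelHistory.rejectRate");
    }
    static RateStat restore(Properties props) {
        RateStat rate = newRate();
        try {
            rate.load(props, "tunnelHistory.rejectRate", true);
        } catch (IllegalArgumentException iae) {
            rate = newRate(); // stored data was corrupt, start fresh
        }
        return rate;
    }
}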

@@ -28,23 +28,23 @@ public class BootCommSystemJob extends JobImpl {
public void runJob() {
// start up the network comm system
_context.commSystem().startup();
_context.tunnelManager().startup();
_context.peerManager().startup();
getContext().commSystem().startup();
getContext().tunnelManager().startup();
getContext().peerManager().startup();
Job bootDb = new BootNetworkDbJob(_context);
Job bootDb = new BootNetworkDbJob(getContext());
boolean useTrusted = false;
String useTrustedStr = _context.router().getConfigSetting(PROP_USE_TRUSTED_LINKS);
String useTrustedStr = getContext().router().getConfigSetting(PROP_USE_TRUSTED_LINKS);
if (useTrustedStr != null) {
useTrusted = Boolean.TRUE.toString().equalsIgnoreCase(useTrustedStr);
}
if (useTrusted) {
_log.debug("Using trusted links...");
_context.jobQueue().addJob(new BuildTrustedLinksJob(_context, bootDb));
getContext().jobQueue().addJob(new BuildTrustedLinksJob(getContext(), bootDb));
return;
} else {
_log.debug("Not using trusted links - boot db");
_context.jobQueue().addJob(bootDb);
getContext().jobQueue().addJob(bootDb);
}
}
}

Some files were not shown because too many files have changed in this diff Show More