A few days ago, while sharing "implementing your own wget", our request was one-off, so the HTTP header was set to Connection: Close. HTTP/1.1 added persistent connections (keep-alive) to improve on HTTP/1.0's one-request-per-connection behavior, and browsers send the Connection: Keep-Alive header with their requests. But how is this actually implemented? We know that on the server side (nginx) the keepalive_timeout directive controls how long a connection stays open, so does keeping an HTTP connection alive also need support from the browser (the client)? Today, let's walk through the java.net.HttpURLConnection source code and see how the client maintains these HTTP connections.
Test code
package net.mengkang.demo;

import java.io.*;
import java.net.HttpURLConnection;
import java.net.URL;

public class Demo {
    public static void main(String[] args) throws IOException {
        test();
        test();
    }

    private static void test() throws IOException {
        URL url = new URL("http://static.mengkang.net/upload/image/2019/0921/1569075837628814.jpeg");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setRequestProperty("Charset", "UTF-8");
        connection.setRequestProperty("Connection", "Keep-Alive");
        connection.setRequestMethod("GET");
        connection.connect();

        BufferedInputStream bufferedInputStream = new BufferedInputStream(connection.getInputStream());
        File file = new File("./xxx.jpeg");
        OutputStream out = new FileOutputStream(file);
        int size;
        byte[] buf = new byte[1024];
        while ((size = bufferedInputStream.read(buf)) != -1) {
            out.write(buf, 0, size);
        }
        out.close();
        connection.disconnect();
    }
}
Parsing the response headers
When the client obtains the response byte stream from the server via connection.getInputStream(), HttpClient parses the response headers. I have simplified the code and extracted the most important logic:
private boolean parseHTTPHeader(MessageHeader var1, ProgressSource var2, HttpURLConnection var3) throws IOException {
    String var15 = var1.findValue("Connection");
    ...
    if (var15 != null && var15.toLowerCase(Locale.US).equals("keep-alive")) {
        HeaderParser var11 = new HeaderParser(var1.findValue("Keep-Alive"));
        this.keepAliveConnections = var11.findInt("max", this.usingProxy ? 50 : 5);
        this.keepAliveTimeout = var11.findInt("timeout", this.usingProxy ? 60 : 5);
    }
    ...
}
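For reference, a keep-alive response from a server typically carries headers along these lines (the values here are only illustrative and depend on the server configuration); the HeaderParser above pulls max and timeout out of the Keep-Alive header:

HTTP/1.1 200 OK
Content-Type: image/jpeg
Connection: keep-alive
Keep-Alive: timeout=60, max=100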
Whether the connection is kept alive is requested by the client but decided by the server, so the server's response headers are authoritative. For example, if the client sends Connection: Keep-Alive but the server replies with Connection: Close, the server's decision wins.
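If you want to see what the server actually decided, you can read the response headers in the demo itself. A minimal sketch using the standard HttpURLConnection API, dropped into test() right after connection.connect() (everything else as in the demo above):

// what the server actually returned; getHeaderField returns null if the header is absent
String conn = connection.getHeaderField("Connection");
String keepAlive = connection.getHeaderField("Keep-Alive");
System.out.println("Connection: " + conn + ", Keep-Alive: " + keepAlive);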
When the client request completes
When bufferedInputStream.read(buf) is executed for the first time, HttpClient runs the finished() method:
public void finished() {
    if (!this.reuse) {
        --this.keepAliveConnections;
        this.poster = null;
        if (this.keepAliveConnections > 0 && this.isKeepingAlive() && !this.serverOutput.checkError()) {
            this.putInKeepAliveCache();
        } else {
            this.closeServer();
        }
    }
}
Adding to the HTTP keep-alive cache
protected static KeepAliveCache kac = new KeepAliveCache();

protected synchronized void putInKeepAliveCache() {
    if (this.inCache) {
        assert false : "Duplicate put to keep alive cache";
    } else {
        this.inCache = true;
        kac.put(this.url, (Object)null, this);
    }
}
public class KeepAliveCache extends HashMap<KeepAliveKey, ClientVector> implements Runnable {
    ...
    public synchronized void put(URL var1, Object var2, HttpClient var3) {
        KeepAliveKey var5 = new KeepAliveKey(var1, var2); // var2 is null
        ClientVector var6 = (ClientVector)super.get(var5);
        if (var6 == null) {
            int var7 = var3.getKeepAliveTimeout();
            var6 = new ClientVector(var7 > 0 ? var7 * 1000 : 5000);
            var6.put(var3);
            super.put(var5, var6);
        } else {
            var6.put(var3);
        }
    }
    ...
}
Two classes are involved here: KeepAliveKey and ClientVector.
class KeepAliveKey {
private String protocol = null;
private String host = null;
private int port = 0;
private Object obj = null;
}
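The equals() and hashCode() methods are omitted above; conceptually they just compare these fields, so two URLs land on the same cache entry only when protocol, host and port (plus the extra obj, which is null here) all match. A rough sketch of that idea, not the exact JDK code:

@Override
public boolean equals(Object o) {
    if (!(o instanceof KeepAliveKey)) return false;
    KeepAliveKey k = (KeepAliveKey) o;
    // same destination = same protocol + host + port (obj is null for plain HTTP)
    return protocol.equals(k.protocol) && host.equals(k.host)
            && port == k.port && obj == k.obj;
}

@Override
public int hashCode() {
    return (protocol + host + port).hashCode();
}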
KeepAliveKey is used as the key of KeepAliveCache precisely because only the combination protocol + host + port identifies connections to the same server. ClientVector is a stack: every time a request to the same destination finishes, its connection is pushed onto it.
class ClientVector extends Stack<KeepAliveEntry> {
    private static final long serialVersionUID = -8680532108106489459L;
    int nap;

    ClientVector(int var1) {
        this.nap = var1;
    }

    synchronized void put(HttpClient var1) {
        if (this.size() >= KeepAliveCache.getMaxConnections()) {
            var1.closeServer();
        } else {
            this.push(new KeepAliveEntry(var1, System.currentTimeMillis()));
        }
    }
    ...
}
"Disconnecting" the connection
If the connection is being kept alive, connection.disconnect() does not simply tear the socket down: it closes some streams and then calls closeIdleConnection(), which closes the underlying connection only if it is still sitting idle in the keep-alive cache.
public void disconnect() {
    ...
    boolean var2 = var1.isKeepingAlive();
    if (var2) {
        var1.closeIdleConnection();
    }
    ...
}

public void closeIdleConnection() {
    HttpClient var1 = kac.get(this.url, (Object)null);
    if (var1 != null) {
        var1.closeServer();
    }
}
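A practical takeaway from the code above (my reading of it, not an official rule): if you want the socket to stay in the pool for the next request, it is enough to read the response body to the end and close the stream; calling disconnect() right afterwards may close the idle connection that was just cached. A sketch of the reuse-friendly cleanup for the demo:

// read the body to the end, then close the streams instead of calling disconnect()
while ((size = bufferedInputStream.read(buf)) != -1) {
    out.write(buf, 0, size);
}
out.close();
bufferedInputStream.close(); // closing the fully-read stream lets HttpClient keep the connection cached
// connection.disconnect();  // skip this if you want the connection to be reused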
Reusing the connection
When the second test() call connects to the same URL, HttpClient.New first tries to take an existing client out of the keep-alive cache:
public static HttpClient New(URL var0, Proxy var1, int var2, boolean var3, HttpURLConnection var4) throws IOException {
    ...
    HttpClient var5 = null;
    if (var3) {
        var5 = kac.get(var0, (Object)null);
        ...
    }
    if (var5 == null) {
        var5 = new HttpClient(var0, var1, var2);
    } else {
        ...
        var5.url = var0;
    }
    return var5;
}
public class KeepAliveCache extends HashMap<KeepAliveKey, ClientVector> implements Runnable {
    ...
    public synchronized HttpClient get(URL var1, Object var2) {
        KeepAliveKey var3 = new KeepAliveKey(var1, var2);
        ClientVector var4 = (ClientVector)super.get(var3);
        return var4 == null ? null : var4.get();
    }
    ...
}
When retrieving from ClientVector, entries are popped off the stack; if a popped connection has already sat idle longer than its timeout, its connection to the server is closed and popping continues:
class ClientVector extends Stack<KeepAliveEntry> {
    private static final long serialVersionUID = -8680532108106489459L;
    int nap;

    ClientVector(int var1) {
        this.nap = var1;
    }

    synchronized HttpClient get() {
        if (this.empty()) {
            return null;
        } else {
            HttpClient var1 = null;
            long var2 = System.currentTimeMillis();
            do {
                KeepAliveEntry var4 = (KeepAliveEntry)this.pop();
                if (var2 - var4.idleStartTime > (long)this.nap) {
                    var4.hc.closeServer();
                } else {
                    var1 = var4.hc;
                }
            } while(var1 == null && !this.empty());
            return var1;
        }
    }
    ...
}
This is how client-side reuse of HTTP connections is achieved.
Summary
The storage structure is as follows: KeepAliveCache is a HashMap whose keys are KeepAliveKey objects and whose values are ClientVector stacks of idle HttpClient entries.
The criterion for reusing a TCP connection is protocol + host + port. The client also should not keep too many connections open to the server: by default HttpURLConnection caches at most 5 idle connections per destination, and anything beyond that is closed immediately (see the ClientVector#put method above). Holding too many open connections puts considerable pressure on both the client and the server.
KeepAliveCache also runs a scan every 5 seconds to evict HttpClient instances whose idle timeout has expired.
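If you need to adjust these defaults, the standard Java networking properties http.keepAlive and http.maxConnections control whether connections are reused and how many idle connections are cached per destination. They have to be set before the first HTTP request is made, for example:

// equivalent to -Dhttp.keepAlive=true -Dhttp.maxConnections=10 on the command line
System.setProperty("http.keepAlive", "true");    // default: true; set to "false" to disable reuse
System.setProperty("http.maxConnections", "10"); // idle connections kept per destination, default: 5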