Java NIO Pipe vs BlockingQueue

I just found out that there is an NIO facility, the Java NIO Pipe, designed for passing data between threads. Is there any advantage to using this mechanism over the more conventional way of passing messages through a queue, such as an ArrayBlockingQueue?

Usually the simplest way to pass data to another thread is to use an ExecutorService. This bundles both a queue and a thread pool (which can have just one thread).
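For example, a minimal sketch of this pattern (the task and the data being passed are just placeholders):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ExecutorHandoff {
    public static void main(String[] args) throws Exception {
        // A single-threaded pool: its internal work queue hands tasks (and their
        // data) from the submitting thread to the worker thread.
        ExecutorService executor = Executors.newSingleThreadExecutor();

        String data = "payload";                        // data to pass to the other thread
        Future<Integer> result = executor.submit(() -> {
            // This runs on the worker thread.
            return data.length();
        });

        System.out.println("processed on worker: " + result.get());
        executor.shutdown();
    }
}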

A pipe can be used when you have a library that works with NIO channels. It is also useful if you want to pass ByteBuffer data between threads.
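A minimal sketch of handing bytes from one thread to another through a Pipe (the class and thread names are illustrative):

import java.nio.ByteBuffer;
import java.nio.channels.Pipe;

public class PipeHandoff {
    public static void main(String[] args) throws Exception {
        Pipe pipe = Pipe.open();

        Thread producer = new Thread(() -> {
            try {
                // Any thread can write to the sink end of the pipe.
                ByteBuffer buf = ByteBuffer.wrap("hello".getBytes());
                pipe.sink().write(buf);
            } catch (Exception e) {
                e.printStackTrace();
            }
        });
        producer.start();

        // The consumer reads the same bytes from the source end.
        // The channel is in blocking mode by default, so read() waits for data.
        ByteBuffer dst = ByteBuffer.allocate(64);
        int n = pipe.source().read(dst);
        System.out.println("read " + n + " bytes");
    }
}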

Otherwise it is usually simpler and faster to use an ArrayBlockingQueue, as in the sketch below.
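A small producer/consumer sketch with a bounded queue (the capacity and message are arbitrary):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueHandoff {
    public static void main(String[] args) throws Exception {
        // Bounded queue: put() blocks when full, take() blocks when empty.
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(1024);

        Thread consumer = new Thread(() -> {
            try {
                String msg = queue.take();   // waits for the producer
                System.out.println("got: " + msg);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        consumer.start();

        queue.put("hello");                  // hand the message to the consumer thread
        consumer.join();
    }
}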

If you want a faster way to exchange data between threads, I suggest you have a look at Exchanger, but it is not as general-purpose as an ArrayBlockingQueue. A sketch follows the link below.
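A minimal sketch of the Exchanger rendezvous (the StringBuilder payload is just an example of reusing buffers instead of allocating new messages):

import java.util.concurrent.Exchanger;

public class ExchangerDemo {
    public static void main(String[] args) throws Exception {
        // Both threads meet at exchange() and swap their buffers, so no new
        // message objects need to be allocated per hand-off.
        Exchanger<StringBuilder> exchanger = new Exchanger<>();

        Thread producer = new Thread(() -> {
            try {
                StringBuilder buf = new StringBuilder();
                buf.append("batch of data");
                // Hand over the filled buffer and get the consumer's buffer back.
                buf = exchanger.exchange(buf);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        StringBuilder empty = new StringBuilder();
        StringBuilder received = exchanger.exchange(empty);  // blocks until the producer arrives
        System.out.println("consumer got: " + received);
        producer.join();
    }
}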

Exchanger and GC-less Java

So after running into a lot of problems with pipes (click here), I decided to favor non-blocking concurrent queues over NIO pipes. So I did some benchmarking of Java's ConcurrentLinkedQueue. See below:

public static void main(String[] args) throws Exception {

    ConcurrentLinkedQueue<String> queue = new ConcurrentLinkedQueue<String>();

    // first test nothing:
    for (int j = 0; j < 20; j++) {
        Benchmarker bench = new Benchmarker();
        String s = "asd";
        for (int i = 0; i < 1000000; i++) {
            bench.mark();
            // s = queue.poll();
            bench.measure();
        }
        System.out.println(bench.results());
        Thread.sleep(100);
    }

    System.out.println();

    // first test empty queue:
    for (int j = 0; j < 20; j++) {
        Benchmarker bench = new Benchmarker();
        String s = "asd";
        for (int i = 0; i < 1000000; i++) {
            bench.mark();
            s = queue.poll();
            bench.measure();
        }
        System.out.println(bench.results());
        Thread.sleep(100);
    }

    System.out.println();

    // now test polling one element on a queue with size one
    for (int j = 0; j < 20; j++) {
        Benchmarker bench = new Benchmarker();
        String s = "asd";
        String x = "pela";
        for (int i = 0; i < 1000000; i++) {
            queue.offer(x);
            bench.mark();
            s = queue.poll();
            bench.measure();
            if (s != x) throw new Exception("bad!");
        }
        System.out.println(bench.results());
        Thread.sleep(100);
    }

    System.out.println();

    // now test polling one element on a queue with size two
    for (int j = 0; j < 20; j++) {
        Benchmarker bench = new Benchmarker();
        String s = "asd";
        String x = "pela";
        for (int i = 0; i < 1000000; i++) {
            queue.offer(x);
            queue.offer(x);
            bench.mark();
            s = queue.poll();
            bench.measure();
            if (s != x) throw new Exception("bad!");
            queue.poll();
        }
        System.out.println(bench.results());
        Thread.sleep(100);
    }
}
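The Benchmarker class is not shown in the post; a minimal sketch that would produce output in the same shape as the results below (the field names and averaging are assumptions) could look like this:

// Hypothetical stand-in for the Benchmarker used above: it records the elapsed
// time between mark() and measure() and accumulates count/min/max/average.
public class Benchmarker {

    private long start;
    private long count;
    private long total;
    private long min = Long.MAX_VALUE;
    private long max = Long.MIN_VALUE;

    public void mark() {
        start = System.nanoTime();
    }

    public void measure() {
        long elapsed = System.nanoTime() - start;
        count++;
        total += elapsed;
        if (elapsed < min) min = elapsed;
        if (elapsed > max) max = elapsed;
    }

    public String results() {
        double avg = count == 0 ? 0 : (double) total / count;
        return "totalLogs=" + count + ", minTime=" + min
                + ", maxTime=" + max + ", avgTime=" + avg + " (times in nanos)";
    }
}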

Results:

totalLogs=1000000, minTime=0, maxTime=85000, avgTime=58.61 (times in nanos)
totalLogs=1000000, minTime=0, maxTime=5281000, avgTime=63.35 (times in nanos)
totalLogs=1000000, minTime=0, maxTime=725000, avgTime=59.71 (times in nanos)
totalLogs=1000000, minTime=0, maxTime=25000, avgTime=58.13 (times in nanos)
totalLogs=1000000, minTime=0, maxTime=378000, avgTime=58.45 (times in nanos)
totalLogs=1000000, minTime=0, maxTime=15000, avgTime=57.71 (times in nanos)
totalLogs=1000000, minTime=0, maxTime=170000, avgTime=58.11 (times in nanos)
totalLogs=1000000, minTime=0, maxTime=1495000, avgTime=59.87 (times in nanos)
totalLogs=1000000, minTime=0, maxTime=232000, avgTime=63.0 (times in nanos)
totalLogs=1000000, minTime=0, maxTime=184000, avgTime=57.89 (times in nanos)
totalLogs=1000000, minTime=0, maxTime=2600000, avgTime=65.22 (times in nanos)
totalLogs=1000000, minTime=0, maxTime=850000, avgTime=60.5 (times in nanos)
totalLogs=1000000, minTime=0, maxTime=150000, avgTime=63.83 (times in nanos)
totalLogs=1000000, minTime=0, maxTime=43000, avgTime=59.75 (times in nanos)
totalLogs=1000000, minTime=0, maxTime=276000, avgTime=60.02 (times in nanos)
totalLogs=1000000, minTime=0, maxTime=457000, avgTime=61.69 (times in nanos)
totalLogs=1000000, minTime=0, maxTime=204000, avgTime=60.44 (times in nanos)
totalLogs=1000000, minTime=0, maxTime=154000, avgTime=63.67 (times in nanos)
totalLogs=1000000, minTime=0, maxTime=355000, avgTime=60.75 (times in nanos)
totalLogs=1000000, minTime=0, maxTime=338000, avgTime=60.44 (times in nanos)

totalLogs=1000000, minTime=0, maxTime=345000, avgTime=110.93 (times in nanos)
totalLogs=1000000, minTime=0, maxTime=396000, avgTime=100.32 (times in nanos)
totalLogs=1000000, minTime=0, maxTime=298000, avgTime=98.93 (times in nanos)
totalLogs=1000000, minTime=0, maxTime=1891000, avgTime=101.9 (times in nanos)
totalLogs=1000000, minTime=0, maxTime=254000, avgTime=103.06 (times in nanos)
totalLogs=1000000, minTime=0, maxTime=1894000, avgTime=100.97 (times in nanos)
totalLogs=1000000, minTime=0, maxTime=230000, avgTime=99.21 (times in nanos)
totalLogs=1000000, minTime=0, maxTime=348000, avgTime=99.63 (times in nanos)
totalLogs=1000000, minTime=0, maxTime=922000, avgTime=99.53 (times in nanos)
totalLogs=1000000, minTime=0, maxTime=168000, avgTime=99.12 (times in nanos)
totalLogs=1000000, minTime=0, maxTime=686000, avgTime=107.41 (times in nanos)
totalLogs=1000000, minTime=0, maxTime=320000, avgTime=95.58 (times in nanos)
totalLogs=1000000, minTime=0, maxTime=248000, avgTime=94.94 (times in nanos)
totalLogs=1000000, minTime=0, maxTime=217000, avgTime=95.01 (times in nanos)
totalLogs=1000000, minTime=0, maxTime=159000, avgTime=93.62 (times in nanos)
totalLogs=1000000, minTime=0, maxTime=155000, avgTime=95.28 (times in nanos)
totalLogs=1000000, minTime=0, maxTime=106000, avgTime=98.57 (times in nanos)
totalLogs=1000000, minTime=0, maxTime=370000, avgTime=95.01 (times in nanos)
totalLogs=1000000, minTime=0, maxTime=1836000, avgTime=96.21 (times in nanos)
totalLogs=1000000, minTime=0, maxTime=212000, avgTime=98.62 (times in nanos)

Conclusion:

The maxTime values may look scary, but I think it is safe to conclude that polling a concurrent queue is in the 50-nanosecond range.

I believe the NIO pipe was designed so that you can send data in a thread-safe way to a channel that lives inside the selector loop. In other words, any thread can write to the pipe, and the data will be handled at the other end of the pipe, inside the selector loop. When you write to the pipe, you make the channel on the other side readable.
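A minimal sketch of that pattern, assuming a single selector thread and one writer thread (class and variable names are illustrative):

import java.nio.ByteBuffer;
import java.nio.channels.Pipe;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;

public class PipeIntoSelector {
    public static void main(String[] args) throws Exception {
        Selector selector = Selector.open();
        Pipe pipe = Pipe.open();

        // Register the read end of the pipe with the selector (must be non-blocking).
        pipe.source().configureBlocking(false);
        pipe.source().register(selector, SelectionKey.OP_READ);

        // Any other thread can write to the sink end; this makes the source readable.
        new Thread(() -> {
            try {
                pipe.sink().write(ByteBuffer.wrap(new byte[] { 42 }));
            } catch (Exception e) {
                e.printStackTrace();
            }
        }).start();

        // Selector loop: the data written by the other thread is consumed here,
        // on the selector thread.
        while (selector.select() > 0) {
            for (SelectionKey key : selector.selectedKeys()) {
                if (key.isReadable()) {
                    ByteBuffer buf = ByteBuffer.allocate(16);
                    int n = ((Pipe.SourceChannel) key.channel()).read(buf);
                    System.out.println("read " + n + " byte(s) inside the selector loop");
                    return;   // done for this demo
                }
            }
            selector.selectedKeys().clear();
        }
    }
}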

I think the pipe will have better latency, because it could very well be implemented with coroutines behind the scenes. Therefore the producer yields to the consumer as soon as data is available, not when the thread scheduler decides.

Pipes in general model a consumer-producer problem, and they are very likely to be implemented that way, so that both threads cooperate and are not preempted externally.