Forcing java's GZIPOutputStream to flush

We are working on a program in which we need to flush a GZIPOutputStream (i.e. force it to compress and send whatever data it has buffered). The problem is that GZIPOutputStream's flush method does not work as expected (force compression and sending of the data); instead, the stream waits for more data so it can compress efficiently.

When you call finish(), the buffered data is compressed and sent through the output stream, but the GZIPOutputStream (not the underlying stream) is closed afterwards, so we cannot write any more data until we create a new GZIPOutputStream, which costs time and performance.
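The behavior described above is easy to reproduce. A minimal sketch (the class and method names are mine, for illustration): writing a small payload to a default-constructed GZIPOutputStream and then calling flush() leaves only the 10-byte GZIP header on the underlying stream, because the Deflater is still holding the input.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

public class GzipFlushDemo {

    // Returns how many bytes the underlying stream has received
    // after write() + flush() on a default GZIPOutputStream.
    static int bytesAfterFlush() throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        GZIPOutputStream gzip = new GZIPOutputStream(baos);
        gzip.write("some data that should be pushed out now".getBytes("UTF-8"));
        gzip.flush(); // only flushes the underlying stream, not the deflater
        return baos.size();
    }

    public static void main(String[] args) throws IOException {
        // Only the 10-byte GZIP header has been written so far; the
        // payload is still buffered inside the Deflater.
        System.out.println(bytesAfterFlush()); // prints 10
    }
}
```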

Hopefully someone can help with this.

Best regards.

I couldn't get the other answer to work. It still refused to flush, because the native code that GZIPOutputStream uses was holding on to the data.

Thankfully, I discovered that someone had implemented a FlushableGZIPOutputStream as part of the Apache Tomcat project. Here is the magic part:

    @Override
    public synchronized void flush() throws IOException {
        if (hasLastByte) {
            // - do not allow the gzip header to be flushed on its own
            // - do not do anything if there is no data to send

            // trick the deflater to flush
            /**
             * Now this is tricky: We force the Deflater to flush its data by
             * switching compression level. As yet, a perplexingly simple workaround
             * for
             * http://developer.java.sun.com/developer/bugParade/bugs/4255743.html
             */
            if (!def.finished()) {
                def.setLevel(Deflater.NO_COMPRESSION);
                flushLastByte();
                flagReenableCompression = true;
            }
        }
        out.flush();
    }

You can find the whole class in this jar (if you use Maven):

    <dependency>
        <groupId>org.apache.tomcat</groupId>
        <artifactId>tomcat-coyote</artifactId>
        <version>7.0.8</version>
    </dependency>

Or just go grab the source code: FlushableGZIPOutputStream.java

It is released under the Apache-2.0 license.

I haven't tried this yet, and this advice won't be useful until we can get our hands on Java 7, but the documentation for GZIPOutputStream's flush() method inherited from DeflaterOutputStream relies on the flush mode specified at construction time with the syncFlush argument (related to Deflater#SYNC_FLUSH) to decide whether to flush pending data to be compressed. GZIPOutputStream also accepts this syncFlush argument at construction time.

It sounds like you want to use Deflater#SYNC_FLUSH or maybe even Deflater#FULL_FLUSH, but before digging down that far, first try the two- or three-argument GZIPOutputStream constructor and pass true for the syncFlush argument. That will activate the flushing behavior you want.
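On Java 7 or later this is a one-line change. A minimal sketch (class and method names are mine, for illustration) using the GZIPOutputStream(OutputStream, boolean) constructor, where the boolean is syncFlush:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.util.zip.GZIPOutputStream;

public class SyncFlushDemo {

    // Returns how many bytes the underlying stream has received
    // after write() + flush() with syncFlush enabled.
    static int bytesAfterFlush() throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        // syncFlush=true: flush() now uses Deflater.SYNC_FLUSH internally
        GZIPOutputStream gzip = new GZIPOutputStream(baos, true);
        gzip.write("some data that should be pushed out now".getBytes("UTF-8"));
        gzip.flush(); // forces the pending input through the compressor
        return baos.size();
    }

    public static void main(String[] args) throws IOException {
        // More than the 10-byte header: compressed data followed it downstream.
        System.out.println(bytesAfterFlush() > 10); // prints true
    }
}
```

Unlike finish(), this leaves the stream open, so you can keep writing and flushing on the same GZIPOutputStream.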

Bug ID 4813885 tracks this issue. A comment submitted by "DamonHD" on 9 Sep 2006 (about halfway through the bug report) contains an example FlushableGZIPOutputStream built on top of Jazzlib's net.sf.jazzlib.DeflaterOutputStream.

For reference, here is a (reformatted) extract:

    /**
     * Substitute for GZIPOutputStream that maximises compression and has a usable
     * flush(). This is also more careful about its output writes for efficiency,
     * and indeed buffers them to minimise the number of write()s downstream which
     * is especially useful where each write() has a cost such as an OS call, a disc
     * write, or a network packet.
     */
    public class FlushableGZIPOutputStream extends net.sf.jazzlib.DeflaterOutputStream {
        private final CRC32 crc = new CRC32();
        private final static int GZIP_MAGIC = 0x8b1f;
        private final OutputStream os;

        /** Set when input has arrived and not yet been compressed and flushed downstream. */
        private boolean somethingWritten;

        public FlushableGZIPOutputStream(final OutputStream os) throws IOException {
            this(os, 8192);
        }

        public FlushableGZIPOutputStream(final OutputStream os, final int bufsize)
                throws IOException {
            super(new FilterOutputStream(new BufferedOutputStream(os, bufsize)) {
                /** Suppress inappropriate/inefficient flush()es by DeflaterOutputStream. */
                @Override
                public void flush() {
                }
            }, new net.sf.jazzlib.Deflater(net.sf.jazzlib.Deflater.BEST_COMPRESSION, true));
            this.os = os;
            writeHeader();
            crc.reset();
        }

        public synchronized void write(byte[] buf, int off, int len) throws IOException {
            somethingWritten = true;
            super.write(buf, off, len);
            crc.update(buf, off, len);
        }

        /**
         * Flush any accumulated input downstream in compressed form. We overcome
         * some bugs/misfeatures here so that:
         * <ul>
         * <li>We won't allow the GZIP header to be flushed on its own without real
         * compressed data in the same write downstream.</li>
         * <li>We ensure that any accumulated uncompressed data really is forced
         * through the compressor.</li>
         * <li>We prevent spurious empty compressed blocks being produced from
         * successive flush()es with no intervening new data.</li>
         * </ul>
         */
        @Override
        public synchronized void flush() throws IOException {
            if (!somethingWritten) { return; }

            // We call this to get def.flush() called,
            // but suppress the (usually premature) out.flush() called internally.
            super.flush();

            // Since super.flush() seems to fail to reliably force output,
            // possibly due to over-cautious def.needsInput() guard following def.flush(),
            // we try to force the issue here by bypassing the guard.
            int len;
            while ((len = def.deflate(buf, 0, buf.length)) > 0) {
                out.write(buf, 0, len);
            }

            // Really flush the stream below us...
            os.flush();

            // Further flush()es ignored until more input data written.
            somethingWritten = false;
        }

        public synchronized void close() throws IOException {
            if (!def.finished()) {
                def.finish();
                do {
                    int len = def.deflate(buf, 0, buf.length);
                    if (len <= 0) {
                        break;
                    }
                    out.write(buf, 0, len);
                } while (!def.finished());
            }

            // Write trailer
            out.write(generateTrailer());

            out.close();
        }

        // ...
    }

You might find it useful.

This code is working great for me in my application.

    public class StreamingGZIPOutputStream extends GZIPOutputStream {

        public StreamingGZIPOutputStream(OutputStream out) throws IOException {
            super(out);
        }

        @Override
        protected void deflate() throws IOException {
            // SYNC_FLUSH is the key here, because it causes writing to the output
            // stream in a streaming manner instead of waiting until the entire
            // contents of the response are known. For a large 1 MB json example
            // this took the size from around 48k to around 50k, so the benefits
            // of sending data to the client sooner seem to far outweigh the
            // added data sent due to less efficient compression.
            int len = def.deflate(buf, 0, buf.length, Deflater.SYNC_FLUSH);
            if (len > 0) {
                out.write(buf, 0, len);
            }
        }
    }
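The effect of SYNC_FLUSH can be seen with the raw Deflater/Inflater API as well. A minimal sketch (class and method names are mine, for illustration): a single deflate call with Deflater.SYNC_FLUSH pushes all pending input out, byte-aligned, so the receiving side can decompress it immediately without waiting for the stream to finish:

```java
import java.util.zip.DataFormatException;
import java.util.zip.Deflater;
import java.util.zip.Inflater;

public class SyncFlushRoundTrip {

    // Compresses the input with a single SYNC_FLUSH deflate call, then
    // decompresses only those flushed bytes to show nothing was held back.
    static String roundTrip() throws DataFormatException {
        byte[] input = "flush me immediately".getBytes();

        Deflater def = new Deflater();
        def.setInput(input);
        byte[] compressed = new byte[256];
        // SYNC_FLUSH forces everything given to the deflater so far out,
        // byte-aligned, instead of buffering it for a better compression ratio.
        int n = def.deflate(compressed, 0, compressed.length, Deflater.SYNC_FLUSH);

        Inflater inf = new Inflater();
        inf.setInput(compressed, 0, n);
        byte[] restored = new byte[256];
        int m = inf.inflate(restored);
        return new String(restored, 0, m);
    }

    public static void main(String[] args) throws DataFormatException {
        System.out.println(roundTrip()); // prints "flush me immediately"
    }
}
```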

Android has the same problem. The accepted answer doesn't work there, because def.setLevel(Deflater.NO_COMPRESSION); throws an exception. Judging by the flush method, it changes the Deflater's compression level, so I suppose the compression level should be changed before the data is written, but I'm not sure.

There are also two other options:

  • If your app's API level is higher than 19, you can try the constructor that takes a syncFlush parameter
  • The other solution is to use jzlib.