How to get frame samples (JPEG) from a video file (MOV) in Java

I want to get frame samples (JPEG) from a video file (MOV) using Java. Is there an easy way to do this? When I search Google, I only find how to build a MOV out of multiple JPGs. Maybe I just haven't found the right keywords.

I know the original question has already been solved, but I'm posting this answer in case anyone else gets stuck the way I did.

Since yesterday I have tried everything, and I mean everything, to get this done. All of the available Java libraries are outdated, no longer maintained, or missing any usable documentation (seriously?!).

I tried JMF (old and useless), JCodec (no documentation whatsoever), JJMpeg (looks promising, but very difficult and tedious to use because of the missing Java-level documentation), the OpenCV auto Java builds, and a few other libraries I can't remember.

Finally, I decided to look at JavaCV's classes (GitHub link), and voilà! It contains FFMPEG bindings along with detailed documentation.

    <dependency>
        <groupId>org.bytedeco</groupId>
        <artifactId>javacv</artifactId>
        <version>1.0</version>
    </dependency>

It turns out there is a very easy way to extract video frames from a video file into a BufferedImage and, from there, into a JPEG file. The FFmpegFrameGrabber class can easily be used to grab individual frames and convert them to BufferedImage. A code sample is below:

    FFmpegFrameGrabber g = new FFmpegFrameGrabber("textures/video/anim.mp4");
    g.start();

    for (int i = 0; i < 50; i++) {
        ImageIO.write(g.grab().getBufferedImage(), "png",
                new File("frame-dump/video-frame-" + System.currentTimeMillis() + ".png"));
    }

    g.stop();

Basically, this code dumps the first 50 frames of the video and saves them as PNG files. The nice part is that the internal seek functionality works on actual frames rather than only on keyframes (the problem I ran into with JCodec).
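A hedged note (not part of the original answer): in current JavaCV releases, grab() returns a Frame rather than an IplImage, so the getBufferedImage() call above may not compile; a Java2DFrameConverter is typically used instead, and the seeking mentioned above can be done with setFrameNumber() before grabbing. A minimal sketch under those assumptions, with illustrative file paths:

    import java.io.File;
    import javax.imageio.ImageIO;
    import org.bytedeco.javacv.FFmpegFrameGrabber;
    import org.bytedeco.javacv.Frame;
    import org.bytedeco.javacv.Java2DFrameConverter;

    public class GrabSingleFrame {
        public static void main(String[] args) throws Exception {
            // file path is illustrative; point this at your own video
            FFmpegFrameGrabber g = new FFmpegFrameGrabber("textures/video/anim.mp4");
            Java2DFrameConverter converter = new Java2DFrameConverter();
            g.start();

            // seek to (roughly) frame 100, then grab the next decoded video frame
            g.setFrameNumber(100);
            Frame frame = g.grabImage();

            if (frame != null) {
                // convert the Frame to a BufferedImage and write it out as a JPEG
                ImageIO.write(converter.convert(frame), "jpg", new File("frame-100.jpg"));
            }
            g.stop();
        }
    }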

You can refer to the JavaCV homepage to learn about the other classes available, for example for capturing frames from webcams. Hope this answer helps :-)

Xuggler did the job. They even provide sample code that does exactly what I needed. The link is below:

http://xuggle.googlecode.com/svn/trunk/java/xuggle-xuggler/src/com/xuggle/mediatool/demos/DecodeAndCaptureFrames.java

I modified the code from that link so that it only saves the first frame of the video.

    import javax.imageio.ImageIO;

    import java.io.File;
    import java.awt.image.BufferedImage;

    import com.xuggle.mediatool.IMediaReader;
    import com.xuggle.mediatool.MediaListenerAdapter;
    import com.xuggle.mediatool.ToolFactory;
    import com.xuggle.mediatool.event.IVideoPictureEvent;
    import com.xuggle.xuggler.Global;

    /**
     * @author aclarke
     * @author trebor
     */
    public class DecodeAndCaptureFrames extends MediaListenerAdapter {

        private int mVideoStreamIndex = -1;
        private boolean gotFirst = false;
        private String saveFile;
        private Exception e;

        /**
         * Construct a DecodeAndCaptureFrames which reads and captures
         * frames from a video file.
         *
         * @param videoFile the name of the media file to read
         * @param saveFile  the file the first decoded frame is written to
         */
        public DecodeAndCaptureFrames(String videoFile, String saveFile) throws Exception {
            // create a media reader for processing video
            this.saveFile = saveFile;
            this.e = null;
            IMediaReader reader = ToolFactory.makeReader(videoFile);

            // stipulate that we want BufferedImages created in BGR 24-bit color space
            reader.setBufferedImageTypeToGenerate(BufferedImage.TYPE_3BYTE_BGR);

            // note that DecodeAndCaptureFrames is derived from
            // MediaReader.ListenerAdapter and thus may be added as a listener
            // to the MediaReader. DecodeAndCaptureFrames implements
            // onVideoPicture().
            reader.addListener(this);

            // read out the contents of the media file; note that nothing else
            // happens here. The action happens in the onVideoPicture() method,
            // which is called when complete video pictures are extracted from
            // the media source
            while (reader.readPacket() == null && !gotFirst);

            if (e != null)
                throw e;
        }

        /**
         * Called after a video frame has been decoded from a media stream.
         * Optionally a BufferedImage version of the frame may be passed
         * if the calling {@link IMediaReader} instance was configured to
         * create BufferedImages.
         *
         * This method blocks, so return quickly.
         */
        public void onVideoPicture(IVideoPictureEvent event) {
            try {
                // if the stream index does not match the selected stream index,
                // then have a closer look
                if (event.getStreamIndex() != mVideoStreamIndex) {
                    // if the selected video stream id is not yet set, go ahead and
                    // select this lucky video stream
                    if (-1 == mVideoStreamIndex)
                        mVideoStreamIndex = event.getStreamIndex();
                    // otherwise return, no need to handle frames from other video streams
                    else
                        return;
                }
                ImageIO.write(event.getImage(), "jpg", new File(saveFile));
                gotFirst = true;
            } catch (Exception e) {
                this.e = e;
            }
        }
    }
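For completeness, a minimal, hypothetical driver for the class above; the input and output paths are illustrative and not part of the original sample:

    public class CaptureFirstFrame {
        public static void main(String[] args) throws Exception {
            // paths are illustrative; the constructor reads the video and
            // writes the first decoded frame to the given JPEG file
            new DecodeAndCaptureFrames("input/clip.mov", "output/first-frame.jpg");
        }
    }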
Another variant based on JavaCV's FFmpegFrameGrabber, grabbing a single frame and saving it as a PNG:

    import java.awt.image.BufferedImage;
    import java.io.File;
    import java.io.IOException;

    import javax.imageio.ImageIO;

    import org.bytedeco.javacpp.opencv_core.IplImage;
    import org.bytedeco.javacv.FFmpegFrameGrabber;
    import org.bytedeco.javacv.FrameGrabber.Exception;

    public class Read {

        public static void main(String[] args) throws IOException, Exception {
            FFmpegFrameGrabber frameGrabber = new FFmpegFrameGrabber("C:/Users/Digilog/Downloads/Test.mp4");
            frameGrabber.start();
            IplImage i;
            try {
                // grab the first frame, convert it and write it to disk
                i = frameGrabber.grab();
                BufferedImage bi = i.getBufferedImage();
                ImageIO.write(bi, "png", new File("D:/Img.png"));
                frameGrabber.stop();
            } catch (Exception e) {
                e.printStackTrace();
            }
        }
    }

Maybe this will help you:

    Buffer buf = frameGrabber.grabFrame();

    // Convert the frame to a buffered image so it can be processed and saved
    Image img = (new BufferToImage((VideoFormat) buf.getFormat()).createImage(buf));

    buffImg = new BufferedImage(img.getWidth(this), img.getHeight(this), BufferedImage.TYPE_INT_RGB);

    // TODO saving the buffImg
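A hedged completion of that fragment, assuming it lives inside a java.awt.Component subclass (so this can serve as the ImageObserver) and that java.awt.Graphics2D, javax.imageio.ImageIO and java.io.File are imported; the output path is illustrative and exception handling is omitted:

    // Hypothetical continuation of the snippet above (not part of the original answer):
    // draw the decoded Image into buffImg, then write it out as a JPEG.
    Graphics2D g2 = buffImg.createGraphics();
    g2.drawImage(img, 0, 0, this); // 'this' acts as the ImageObserver
    g2.dispose();
    ImageIO.write(buffImg, "jpg", new File("snapshot.jpg"));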

For more information, see:

How do I take a single snapshot from a webcam?

The basic code for requesting frames from a media file is shown below.

For the complete source code and a video demo, see the "Media file processing" sample of the Marvin Framework.

    public class MediaFileExample implements Runnable {

        private MarvinVideoInterface videoAdapter;
        private MarvinImage videoFrame;

        public MediaFileExample() {
            try {
                // Create the VideoAdapter used to load the video file
                videoAdapter = new MarvinJavaCVAdapter();
                videoAdapter.loadResource("./res/snooker.wmv");

                // Start the thread for requesting the video frames
                new Thread(this).start();
            } catch (MarvinVideoInterfaceException e) {
                e.printStackTrace();
            }
        }

        @Override
        public void run() {
            try {
                while (true) {
                    // Request a video frame
                    videoFrame = videoAdapter.getFrame();
                }
            } catch (MarvinVideoInterfaceException e) {
                e.printStackTrace();
            }
        }

        public static void main(String[] args) {
            MediaFileExample m = new MediaFileExample();
        }
    }
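If, as in the original question, each frame should also be dumped to disk, a hedged extension of the run() loop above could use Marvin's MarvinImageIO helper; the class name reflects the Marvin API as I understand it, and the output path is illustrative:

    // inside the run() loop, right after videoFrame = videoAdapter.getFrame():
    // save the current frame as a JPEG (output path is illustrative)
    MarvinImageIO.saveImage(videoFrame, "./out/frame-" + System.currentTimeMillis() + ".jpg");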

And here is the BoofCV approach:

    String fileName = UtilIO.pathExample("tracking/chipmunk.mjpeg");

    MediaManager media = DefaultMediaManager.INSTANCE;

    ConfigBackgroundBasic configBasic = new ConfigBackgroundBasic(30, 0.005f);
    ImageType imageType = ImageType.single(GrayF32.class);

    BackgroundModelMoving background =
            FactoryBackgroundModel.movingBasic(configBasic, new PointTransformHomography_F32(), imageType);

    SimpleImageSequence video = media.openVideo(fileName, background.getImageType());

    ImageBase nextFrame;
    while (video.hasNext()) {
        nextFrame = video.next();
        // Now do something with it...
    }
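To get from there to the JPEG files the question asks about, a hedged sketch of how the loop above could be extended: SimpleImageSequence can hand back the current frame as a BufferedImage (getGuiImage() in the BoofCV versions I'm aware of), which ImageIO can then write out. The counter and output path are illustrative, and IOException handling is omitted for brevity:

    // hypothetical extension of the loop above (assumes java.awt.image.BufferedImage,
    // javax.imageio.ImageIO and java.io.File are imported)
    int index = 0;
    while (video.hasNext()) {
        nextFrame = video.next();

        // fetch the current frame as a BufferedImage and save it as a JPEG
        BufferedImage gui = video.getGuiImage();
        ImageIO.write(gui, "jpg", new File("boofcv-frame-" + (index++) + ".jpg"));
    }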