OpenCV: process every frame

This article covers how to process every frame captured from a camera with OpenCV, which may be a useful reference if you are facing the same problem.

Problem description

I want to write a cross-platform application that uses OpenCV for video capture. In all the examples I've found, frames from the camera are grabbed and then the code waits for a while before grabbing the next one. Instead, I want to process every frame in sequence: I want to define my own callback function that is executed every time a new frame is ready to be processed (like in DirectShow on Windows, where you define your own filter and insert it into the graph for exactly this purpose).

So the question is: how can I do this?

Recommended answer

According to the code below, all callbacks have to follow this definition:

IplImage* custom_callback(IplImage* frame);

This signature means the callback is executed on each frame retrieved by the system. In my example, make_it_gray() allocates a new image to hold the result of the grayscale conversion and returns it, which means you must free this frame later in your code. I added comments about this to the code.
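
To illustrate that the prototype can host any per-frame operation, here is a second callback. This is a hedged sketch of mine, not part of the original answer: it runs Canny edge detection instead of a grayscale conversion and follows the same allocate-and-return convention, so the caller still releases the result. The thresholds are arbitrary example values.

/*
 * detect_edges: an alternative callback with the same signature (sketch, not from the original answer).
 * It converts the frame to grayscale, runs Canny on it and returns the single-channel
 * edge map, which the caller must release with cvReleaseImage().
 */
IplImage* detect_edges(IplImage* frame)
{
    IplImage* gray  = cvCreateImage(cvSize(frame->width, frame->height), IPL_DEPTH_8U, 1);
    IplImage* edges = cvCreateImage(cvSize(frame->width, frame->height), IPL_DEPTH_8U, 1);
    if (!gray || !edges)
    {
        if (gray)  cvReleaseImage(&gray);
        if (edges) cvReleaseImage(&edges);
        return NULL;
    }

    cvCvtColor(frame, gray, CV_BGR2GRAY); // camera frames come in BGR order
    cvCanny(gray, edges, 50, 150, 3);     // thresholds: arbitrary example values

    cvReleaseImage(&gray);                // the intermediate image is no longer needed
    return edges;                         // released by the caller, just like make_it_gray()
}

It plugs into the same loop as the original callback: process_video(detect_edges);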

Note that if your callback demands a lot of processing, the system might skip a few frames from the camera. Consider the suggestions Paul R and diverscuba23 made; a simple frame-skipping variation is also sketched after the first complete example below.

#include <stdio.h>
#include "cv.h"
#include "highgui.h"


typedef IplImage* (*callback_prototype)(IplImage*);


/* 
 * make_it_gray: our custom callback to convert a colored frame to its grayscale version.
 * Remember that you must deallocate the returned IplImage* yourself after calling this function.
 */
IplImage* make_it_gray(IplImage* frame)
{
    // Allocate space for a new image
    IplImage* gray_frame = 0;
    gray_frame = cvCreateImage(cvSize(frame->width, frame->height), frame->depth, 1);
    if (!gray_frame)
    {
      fprintf(stderr, "!!! cvCreateImage failed!\n");
      return NULL;
    }

    cvCvtColor(frame, gray_frame, CV_BGR2GRAY); // camera frames come in BGR order
    return gray_frame; 
}

/*
 * process_video: retrieves frames from camera and executes a callback to do individual frame processing.
 * Keep in mind that if your callback takes too much time to execute, you might loose a few frames from 
 * the camera.
 */
void process_video(callback_prototype custom_cb)
{           
    // Initialize camera
    CvCapture *capture = 0;
    capture = cvCaptureFromCAM(-1);
    if (!capture) 
    {
      fprintf(stderr, "!!! Cannot initialize webcam!\n");
      return;
    }

    // Create a window for the video 
    cvNamedWindow("result", CV_WINDOW_AUTOSIZE);

    IplImage* frame = 0;
    char key = 0;
    while (key != 27) // ESC
    {    
      frame = cvQueryFrame(capture);
      if(!frame) 
      {
          fprintf(stderr, "!!! cvQueryFrame failed!\n");
          break;
      }

      // Execute callback on each frame
      IplImage* processed_frame = (*custom_cb)(frame);

      // Display processed frame
      cvShowImage("result", processed_frame);

      // Release resources
      cvReleaseImage(&processed_frame);

      // Exit when user press ESC
      key = cvWaitKey(10);
    }

    // Free memory
    cvDestroyWindow("result");
    cvReleaseCapture(&capture);
}

int main( int argc, char **argv )
{
    process_video(make_it_gray);

    return 0;
}
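
Regarding the frame-skipping concern mentioned above, one simple mitigation is to run the expensive callback only on every nth frame. This is my own sketch under the assumption that occasionally displaying an unprocessed frame is acceptable (not necessarily what Paul R and diverscuba23 had in mind); the function name and the every_nth parameter are hypothetical.

/*
 * process_video_decimated: same idea as process_video(), but the callback runs
 * only on every nth frame, so a slow callback does not stall the capture loop.
 * every_nth must be >= 1.
 */
void process_video_decimated(callback_prototype custom_cb, int every_nth)
{
    CvCapture* capture = cvCaptureFromCAM(-1);
    if (!capture)
    {
      fprintf(stderr, "!!! Cannot initialize webcam!\n");
      return;
    }

    cvNamedWindow("result", CV_WINDOW_AUTOSIZE);

    int counter = 0;
    char key = 0;
    while (key != 27) // ESC
    {
      IplImage* frame = cvQueryFrame(capture);
      if (!frame)
          break;

      // Run the callback only on every nth frame
      IplImage* processed = NULL;
      if (++counter % every_nth == 0)
          processed = (*custom_cb)(frame);

      // Show the processed frame when available, the raw frame otherwise
      cvShowImage("result", processed ? processed : frame);

      if (processed)
          cvReleaseImage(&processed);

      key = cvWaitKey(10);
    }

    cvDestroyWindow("result");
    cvReleaseCapture(&capture);
}

Calling process_video_decimated(make_it_gray, 3) would convert roughly one frame in three and display the remaining frames untouched.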

I changed the first example above so it prints the current framerate and performs a manual grayscale conversion. These are small tweaks to the code, done for educational purposes, so that you can see how to perform operations at the pixel level.

#include <stdio.h>
#include <math.h>
#include <time.h>

#include "cv.h"
#include "highgui.h"


typedef IplImage* (*callback_prototype)(IplImage*);


/* 
 * make_it_gray: our custom callback to convert a colored frame to its grayscale version.
 * Remember that you must deallocate the returned IplImage* yourself after calling this function.
 */
IplImage* make_it_gray(IplImage* frame)
{
    // New IplImage* to store the processed image
    IplImage* gray_frame = 0; 

    // Manual grayscale conversion: ugly, but shows how to access each channel of the pixels individually
    gray_frame = cvCreateImage(cvSize(frame->width, frame->height), frame->depth, frame->nChannels);
    if (!gray_frame)
    {
      fprintf(stderr, "!!! cvCreateImage failed!\n");
      return NULL;
    }

    // Cast to unsigned char so pixel values above 127 are averaged correctly.
    // Note: this simple loop assumes an 8-bit, 3-channel frame with no row padding
    // (widthStep == width * nChannels).
    unsigned char* src = (unsigned char*) frame->imageData;
    unsigned char* dst = (unsigned char*) gray_frame->imageData;
    for (int i = 0; i < frame->width * frame->height * frame->nChannels; i += frame->nChannels)
    {
        unsigned char gray = (unsigned char) ((src[i] + src[i+1] + src[i+2]) / 3);
        dst[i]   = gray; // B
        dst[i+1] = gray; // G
        dst[i+2] = gray; // R
    }

    return gray_frame; 
}

/*
 * process_video: retrieves frames from camera and executes a callback to do individual frame processing.
 * Keep in mind that if your callback takes too much time to execute, you might loose a few frames from 
 * the camera.
 */
void process_video(callback_prototype custom_cb)
{           
    // Initialize camera
    CvCapture *capture = 0;
    capture = cvCaptureFromCAM(-1);
    if (!capture) 
    {
      fprintf(stderr, "!!! Cannot initialize webcam!\n");
      return;
    }

    // Create a window for the video 
    cvNamedWindow("result", CV_WINDOW_AUTOSIZE);    

    double elapsed = 0;
    int last_time = 0;
    int num_frames = 0;

    IplImage* frame = 0;
    char key = 0;
    while (key != 27) // ESC
    {    
      frame = cvQueryFrame(capture);
      if(!frame) 
      {
          fprintf(stderr, "!!! cvQueryFrame failed!\n");
          break;
      }

      // Calculating framerate
      num_frames++;
      elapsed = clock() - last_time;
      int fps = 0;
      if (elapsed > 1)
      {
          fps = floor(num_frames / (float)(1 + (float)elapsed / (float)CLOCKS_PER_SEC));
          num_frames = 0;
          last_time = clock() + 1 * CLOCKS_PER_SEC;
          printf("FPS: %d\n", fps);
      }

      // Execute callback on each frame
      IplImage* processed_frame = (*custom_cb)(frame);  

      // Display processed frame
      cvShowImage("result", processed_frame);

      // Release resources
      cvReleaseImage(&processed_frame);

      // Exit when user press ESC
      key = cvWaitKey(10);
    }

    // Free memory
    cvDestroyWindow("result");
    cvReleaseCapture(&capture);
}

int main( int argc, char **argv )
{
    process_video(make_it_gray);

    return 0;
}
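
As a side note on the framerate measurement: clock() measures CPU time rather than wall-clock time on some platforms (POSIX in particular), so the printed FPS can be off when the process spends most of its time waiting for the camera. A hedged alternative sketch, meant to replace the clock()-based block inside the loop above, uses OpenCV's tick counter instead:

// Sketch: wall-clock FPS estimate with OpenCV's tick counter (cvGetTickCount
// returns ticks, cvGetTickFrequency returns ticks per microsecond).
int64 start_ticks = cvGetTickCount();
int frames = 0;

// ... inside the capture loop, after each successful cvQueryFrame(): ...
frames++;
double seconds = (double)(cvGetTickCount() - start_ticks) / (cvGetTickFrequency() * 1.0e6);
if (seconds >= 1.0)
{
    printf("FPS: %.1f\n", frames / seconds);
    frames = 0;
    start_ticks = cvGetTickCount();
}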
