Program framework for binocular visuals, code and some explanations

xiaoxiao2021-03-06  18

This is a framework I mostly finished a week ago; I have now made some modifications so it is easier to test. The modifications are probably far from over. Right now there is still a problem, and while it stays buried in the code I can't see it clearly, so I intend to write this article and revise my code as I go.

For binocular vision, the two important parts are camera calibration and feature extraction from the images. Neither of those is in this program, however; this program is just a platform for implementing binocular vision, a scaffold on which a stereo implementation can be built.

Generally, binocular vision used for ranging employs CCD cameras together with image capture cards. With such cameras, image acquisition is no problem: the capture card comes with its own acquisition functions. Here, however, we use USB cameras, the ordinary webcams normally used for video chat. With cameras like these, before you can run any binocular vision experiment you must first grab the video stream yourself, which means writing your own code. On the Windows platform there are two common ways to capture a video stream: one is VFW, the other is DirectShow. In the program below, I use DirectShow.

OK, the program is described below.

This program consists of several parts: video capture, the binocular vision algorithm, numerical computation, and overall program control. To get real-time binocular vision, rather than just grabbing two pictures and then processing them offline, the control section of this program uses several threads. These thread functions are global functions; that is to say, they drive the concrete processing implemented in the video capture class, the binocular vision algorithm class, and the numerical computation class.
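In the program this hand-off is Win32-specific: the grabber callback fills a buffer and calls SetEvent, and a global thread function blocks in WaitForSingleObject before processing. As a rough, portable sketch of the same producer/consumer pattern using std::thread and a condition variable instead of Win32 events (every name here is illustrative, not the program's own):

```cpp
#include <condition_variable>
#include <cstddef>
#include <mutex>
#include <thread>
#include <vector>

// A stand-in for the program's DataBuffer plus HANDLE event pair:
// the capture side deposits a frame and notifies; the processing
// side waits for the notification, then consumes the frame.
struct FrameChannel {
    std::mutex                 m;
    std::condition_variable    cv;
    std::vector<unsigned char> frame;    // plays the role of pcb->pBuffer
    bool ready = false;                  // plays the role of the Win32 event
};

void captureSide(FrameChannel &ch)       // what the grabber callback does, in spirit
{
    std::lock_guard<std::mutex> lk(ch.m);
    ch.frame.assign(640 * 480 * 3, 0);   // pretend we copied an RGB24 frame
    ch.ready = true;
    ch.cv.notify_one();                  // the SetEvent() analogue
}

std::size_t processSide(FrameChannel &ch) // what a global thread function does
{
    std::unique_lock<std::mutex> lk(ch.m);
    ch.cv.wait(lk, [&]{ return ch.ready; });  // the WaitForSingleObject analogue
    ch.ready = false;                    // auto-reset, like the Win32 event
    return ch.frame.size();              // "process" the frame
}
```

The design point is the same either way: the callback runs on the grabber's delivery thread, so it only copies data and signals; the heavy processing happens on a thread the application owns.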

First, let's talk about the video stream capture code of the two cameras.

This, again, is the process of building a filter graph. Here I implement it in a class.

//buildtwocamfiltergraph.h

#ifndef _buildtwocamfiltergraph_h_
#define _buildtwocamfiltergraph_h_

// (The header names after #include were lost in the page formatting;
//  these are the usual DirectShow headers for this code.)
#include <windows.h>
#include <dshow.h>
#include <qedit.h>     // ISampleGrabberCB: a class below inherits this DirectShow interface,
                       // so this header must be added
#include <streams.h>   // DirectShow base-class library (strmbase)

#include "databuffer.h"

#pragma comment(lib, "strmiids.lib")
#pragma comment(lib, "strmbase.lib")

// The following are auxiliary globals, used when capturing the image of the current frame.

extern DataBuffer cb1;    // for camera 1
extern DataBuffer cb2;    // for camera 2

extern bool g_bOneShot1;
extern bool g_bOneShot2;

extern HANDLE hEventProcessData1;
extern HANDLE hEventProcessData2;

// Note: this object is a SEMI-COM object. We use this little semi-COM
// object to handle the sample-grab callback, since the callback must
// provide a COM interface. We could have had an interface where you
// provided a function-call callback, but that's really messy, so we
// did it this way. You can put anything you want into this C++ object,
// even a pointer to a CDialog. Be aware of multi-thread issues though.
//
class CSampleGrabberCB : public ISampleGrabberCB
{
public:
    // These will get set by the main thread below. We need to
    // know this in order to write out the bmp.
    long lWidth;
    long lHeight;

    DataBuffer *pcb;
    bool       *pbOneShot;
    HANDLE     *phEventProcessData;

    TCHAR m_szCapDir[MAX_PATH];        // the directory we want to capture to
    TCHAR m_szSnappedName[MAX_PATH];

    // Constructor
    CSampleGrabberCB(DataBuffer *pDataBuffer, bool *pg_bOneShot, HANDLE *phEvent)
        : pcb(pDataBuffer), pbOneShot(pg_bOneShot), phEventProcessData(phEvent)
    {
        ZeroMemory(m_szCapDir, sizeof(m_szCapDir));
        ZeroMemory(m_szSnappedName, sizeof(m_szSnappedName));
    }

    // Fake out any COM ref counting
    STDMETHODIMP_(ULONG) AddRef()  { return 2; }
    STDMETHODIMP_(ULONG) Release() { return 1; }

    // Fake out any COM QI'ing
    STDMETHODIMP QueryInterface(REFIID riid, void **ppv)
    {
        if (riid == IID_ISampleGrabberCB || riid == IID_IUnknown)
        {
            *ppv = (void *)static_cast<ISampleGrabberCB *>(this);
            return NOERROR;
        }
        return E_NOINTERFACE;
    }

    // We don't implement this interface for this example
    STDMETHODIMP SampleCB(double SampleTime, IMediaSample *pSample)
    {
        return 0;
    }

    // The sample grabber is calling us back on its deliver thread.
    // This is NOT the main app thread!
    //
    //           !!!!! WARNING WARNING WARNING !!!!!
    //
    // On Windows 9x systems, you are not allowed to call most of the
    // Windows API functions in this callback. Why not? Because the
    // video renderer might hold the global Win16 lock so that the video
    // surface can be locked while you copy its data. This is not an
    // issue on Windows 2000, but is a limitation on Win95, 98, 98SE and ME.
    // Calling a 16-bit legacy function could lock the system, because
    // it would wait forever for the Win16 lock, which would be forever
    // held by the video renderer.
    //
    // As a workaround, copy the bitmap data during the callback and
    // write the data out later.
    //
    STDMETHODIMP BufferCB(double dblSampleTime, BYTE *pBuffer, long lBufferSize)
    {
        // Every frame enters this function, so a check is needed:
        // the data is extracted only when a frame save is requested.
        if (!*pbOneShot)
            return 0;

        if (!pBuffer)
            return E_POINTER;

        if (pcb->lBufferSize != lBufferSize)
        {
            delete[] pcb->pBuffer;
            pcb->pBuffer = NULL;
            pcb->lBufferSize = 0;
        }

        // Since we can't access Windows API functions in this callback, just
        // copy the bitmap data to a global structure for later reference.
        pcb->dblSampleTime = dblSampleTime;

        // ------------------------------------------------------------------
        // Record the timestamps of the saved frames, so the frame arrival
        // rate can be displayed later. The code is written this way, but
        // the numbers it produces still look wrong to me, so for now they
        // are not used.
        pcb->dblSampleTimeBefore    = pcb->dblSampleTimeNow;
        pcb->dblSampleTimeNow       = dblSampleTime;
        pcb->dblSampleTimeBeforeAll = pcb->dblSampleTimeBefore;
        pcb->dblSampleTimeNowAll    = pcb->dblSampleTimeNow;

        pcb->dblSampleTimeBetween = pcb->dblSampleTimeNow - pcb->dblSampleTimeBefore;

        pcb->cnt++;    // frame counter
        // ------------------------------------------------------------------

        // If we haven't yet allocated the data buffer, do it now:
        // we need to store the new bitmap.
        if (!pcb->pBuffer)
        {
            pcb->pBuffer = new BYTE[lBufferSize];
            pcb->lBufferSize = lBufferSize;
        }

        if (!pcb->pBuffer)
        {
            pcb->lBufferSize = 0;
            return E_OUTOFMEMORY;
        }

        // Copy the bitmap data into our global buffer
        memcpy(pcb->pBuffer, pBuffer, lBufferSize);

        // Set the size of the bitmap
        // pcb->lWidth  = lWidth;
        // pcb->lHeight = lHeight;

        // Here the app should be notified that a frame has been captured,
        // so it can process the data: post a message to the application
        // telling it to write the saved data to a bitmap file on the
        // user's disk, or, as here, activate an event.

        // Set g_bOneShot back to false ...
        *pbOneShot = false;
        // ... and signal the event
        SetEvent(*phEventProcessData);
        return 0;
    }
};

// Capture truly starts here
class BuildTwoCamFilterGraph
{
public:
    BuildTwoCamFilterGraph(DataBuffer *pcb1, DataBuffer *pcb2);
    ~BuildTwoCamFilterGraph();

    // Re-establish the filter graphs of the two cameras
    void rebuildFilterGraph(void);
    // Rebuild the filter graph of one camera at a time
    void rebuildFilterGraph1(void);
    void rebuildFilterGraph2(void);
    // Release the connections etc. inside the filter graphs of the two cameras
    void tearDownFilterGraph(void);
    // Release the filter graph of one camera at a time
    void tearDownFilterGraph1(void);
    void tearDownFilterGraph2(void);

    // The inline functions ---------------------------------
    // For the first camera's filter graph
    // void setPinCam1(void);           // no use now; this must move to another class
    // void setCaptureFilterCam1(void);
    IVideoWindow          *getIVideoWindow1()          { return m_pVidWin1; }
    IMediaControl         *getIMediaControl1()         { return m_pMediaControl1; }
    IGraphBuilder         *getIGraphBuilder1()         { return m_pGraph1; }
    IBaseFilter           *getCaptureFilter1()         { return m_pVCap1; }
    ICaptureGraphBuilder2 *getICaptureGraphBuilder21() { return m_pGraphBuilder21; }

    // For the second camera's filter graph
    // void setPinCam2(void);           // no use now; this must move to another class
    // void setCaptureFilterCam2(void);
    IVideoWindow          *getIVideoWindow2()          { return m_pVidWin2; }
    IMediaControl         *getIMediaControl2()         { return m_pMediaControl2; }
    IGraphBuilder         *getIGraphBuilder2()         { return m_pGraph2; }
    IBaseFilter           *getCaptureFilter2()         { return m_pVCap2; }
    ICaptureGraphBuilder2 *getICaptureGraphBuilder22() { return m_pGraphBuilder22; }

    // Set the bool flags
    void setFilterGraph1Run(bool bValue)   { m_bPreview1 = bValue; }
    void setFilterGraph2Run(bool bValue)   { m_bPreview2 = bValue; }
    void setFilterGraph1Build(bool bValue) { m_bFilterGraphCam1 = bValue; }
    void setFilterGraph2Build(bool bValue) { m_bFilterGraphCam2 = bValue; }

    // Have the filter graphs been built?
    bool isFilterGraph1Built(void) { return m_bFilterGraphCam1; }
    bool isFilterGraph2Built(void) { return m_bFilterGraphCam2; }
    // Are the filter graphs running?
    bool isFilterGraph1Run(void) { return m_bPreview1; }
    bool isFilterGraph2Run(void) { return m_bPreview2; }

private:
    // Used to set display properties etc.
    IVideoWindow *m_pVidWin1;
    IVideoWindow *m_pVidWin2;

    // Used to control the created filter graphs: Stop, Run and Pause
    IGraphBuilder *m_pGraph1;
    IGraphBuilder *m_pGraph2;

    // Convenience objects for building the capture filter graphs
    ICaptureGraphBuilder2 *m_pGraphBuilder21;
    ICaptureGraphBuilder2 *m_pGraphBuilder22;

    // The filters for grabbing images
    IBaseFilter *m_pGrabberF1;
    IBaseFilter *m_pGrabberF2;

    // The sample-grabber interfaces
    ISampleGrabber *m_pGrabber1;
    ISampleGrabber *m_pGrabber2;

    // The capture filters
    IBaseFilter *m_pVCap1;
    IBaseFilter *m_pVCap2;

    // The render filters
    IBaseFilter *m_pRenderer1;
    IBaseFilter *m_pRenderer2;

    // The media controls
    IMediaControl *m_pMediaControl1;
    IMediaControl *m_pMediaControl2;

    // Whether the filter graphs have been built
    bool m_bFilterGraphCam1;
    bool m_bFilterGraphCam2;

    // Whether previewing, i.e. whether the graphs are running
    bool m_bPreview1;
    bool m_bPreview2;

    // Some auxiliary functions --------------------------------

    // Device-enumeration function for the two cameras; sets the capture filters directly
    void DeviceEnum(void);

    void InitStillGraph(IGraphBuilder         **ppGraph,
                        ICaptureGraphBuilder2 **ppGraphBuilder2,
                        IBaseFilter           **ppVCap,
                        IBaseFilter           **ppGrabberF,
                        IBaseFilter           **ppRenderer,
                        ISampleGrabber        **ppGrabber);

    // Functions for disconnecting the downstream filters
    void NukeDownstream(IBaseFilter *pf, IGraphBuilder *pFg);
    void TearDownGraph(IGraphBuilder *pFg, IVideoWindow *pVW, IBaseFilter *pVCap);

    /* -----------------------------------------------------------------
       These semi-COM objects will receive the sample callbacks for us
       ----------------------------------------------------------------- */
    CSampleGrabberCB mCB1;
    CSampleGrabberCB mCB2;

};

#endif

//buildtwocamfiltergraph.cpp

#include "stdafx.h"

#include "buildtwocamfiltergraph.h"

#include "databuffer.h" // store the definition of the global structure

/* ------------------------------------------------------------------
   Constructor: creates the two filter graphs. First initialize COM,
   then enumerate the hardware devices, then build the two graphs.
   ------------------------------------------------------------------ */
BuildTwoCamFilterGraph::BuildTwoCamFilterGraph(DataBuffer *pcb1, DataBuffer *pcb2)
    : m_pVidWin1(NULL), m_pVidWin2(NULL),
      m_pGraph1(NULL), m_pGraph2(NULL),
      m_pGraphBuilder21(NULL), m_pGraphBuilder22(NULL),
      m_pGrabberF1(NULL), m_pGrabberF2(NULL),
      m_pGrabber1(NULL), m_pGrabber2(NULL),
      m_pVCap1(NULL), m_pVCap2(NULL),
      m_pRenderer1(NULL), m_pRenderer2(NULL),
      m_pMediaControl1(NULL), m_pMediaControl2(NULL),
      m_bFilterGraphCam1(false), m_bFilterGraphCam2(false),
      m_bPreview1(false), m_bPreview2(false),
      mCB1(pcb1, &g_bOneShot1, &hEventProcessData1),
      mCB2(pcb2, &g_bOneShot2, &hEventProcessData2)
{
    HRESULT hr;

    // Step 1 --------------
    // Initialize the COM library.
    CoInitialize(NULL);

    // Step 2 ---------------
    // Enumerate the devices and create the capture filters
    DeviceEnum();

    // Step 3 ---------------
    // Connect the filters, build the filter graph
    //
    // 1. For camera 1 .................................................
    InitStillGraph(&m_pGraph1, &m_pGraphBuilder21, &m_pVCap1,
                   &m_pGrabberF1, &m_pRenderer1, &m_pGrabber1);

    // Ask for the connection media type so we know how big it is,
    // so we can write out bitmaps
    AM_MEDIA_TYPE mt;
    hr = m_pGrabber1->GetConnectedMediaType(&mt);
    if (FAILED(hr))    // something wrong happened
    {
        MessageBox(NULL, TEXT("something wrong"),
                   TEXT("Could not read the connected media type"), 0);
        return;
    }
    VIDEOINFOHEADER *vih = (VIDEOINFOHEADER *)mt.pbFormat;
    mCB1.lWidth  = vih->bmiHeader.biWidth;
    mCB1.lHeight = vih->bmiHeader.biHeight;
    FreeMediaType(mt);

    // Don't buffer the samples as they pass through
    hr = m_pGrabber1->SetBufferSamples(FALSE);
    if (SUCCEEDED(hr)) { MessageBox(NULL, "SetBufferSamples success", "ok", 0); }
    if (FAILED(hr))    { MessageBox(NULL, "SetBufferSamples failed", "wrong", 0); }

    // Only grab one at a time, stop stream after grabbing one sample
    hr = m_pGrabber1->SetOneShot(FALSE);
    if (SUCCEEDED(hr)) { MessageBox(NULL, "SetOneShot success", "ok", 0); }
    if (FAILED(hr))    { MessageBox(NULL, "SetOneShot failed", "wrong", 0); }

    // Set the callback, so we can grab the one sample
    hr = m_pGrabber1->SetCallback(&mCB1, 1);
    if (SUCCEEDED(hr)) { MessageBox(NULL, "SetCallback success", "ok", 0); }
    if (FAILED(hr))    { MessageBox(NULL, "SetCallback failed", "wrong", 0); }

    // Get this just so we can show the video on screen
    hr = m_pGraph1->QueryInterface(IID_IVideoWindow, (void **)&m_pVidWin1);

    // Get the media control object, used to run the filter graph
    hr = m_pGraph1->QueryInterface(IID_IMediaControl, (void **)&m_pMediaControl1);

    // 2. For camera 2 .................................................
    InitStillGraph(&m_pGraph2, &m_pGraphBuilder22, &m_pVCap2,
                   &m_pGrabberF2, &m_pRenderer2, &m_pGrabber2);

    // Ask for the connection media type so we know how big it is,
    // so we can write out bitmaps
    hr = m_pGrabber2->GetConnectedMediaType(&mt);
    if (FAILED(hr))
    {
        MessageBox(NULL, TEXT("something wrong"),
                   TEXT("Could not read the connected media type"), 0);
        return;
    }
    vih = (VIDEOINFOHEADER *)mt.pbFormat;
    mCB2.lWidth  = vih->bmiHeader.biWidth;
    mCB2.lHeight = vih->bmiHeader.biHeight;
    FreeMediaType(mt);

    // Don't buffer the samples as they pass through
    hr = m_pGrabber2->SetBufferSamples(FALSE);

    // Only grab one at a time, stop stream after grabbing one sample
    hr = m_pGrabber2->SetOneShot(FALSE);

    // Set the callback, so we can grab the one sample
    hr = m_pGrabber2->SetCallback(&mCB2, 1);
    if (SUCCEEDED(hr)) { MessageBox(NULL, "SetCallback success", "ok", 0); }

    // Get this just so we can show the video on screen
    hr = m_pGraph2->QueryInterface(IID_IVideoWindow, (void **)&m_pVidWin2);
    // Get the media control object, used to run the filter graph
    hr = m_pGraph2->QueryInterface(IID_IMediaControl, (void **)&m_pMediaControl2);

    // Now set the flags to true
    m_bFilterGraphCam1 = true;
    m_bFilterGraphCam2 = true;
}

/* ------------------------------------------------------------------
   Destructor: releases the resources
   ------------------------------------------------------------------ */
BuildTwoCamFilterGraph::~BuildTwoCamFilterGraph()
{
    // Step 1: stop the previews
    m_pMediaControl1->Stop();
    m_pMediaControl2->Stop();

    // Step 2: release the resources
    m_pVidWin1->Release();
    m_pGraph1->Release();
    m_pGraphBuilder21->Release();
    m_pVCap1->Release();
    m_pMediaControl1->Release();
    m_pGrabberF1->Release();
    m_pGrabber1->Release();
    m_pRenderer1->Release();

    m_pVidWin2->Release();
    m_pGraph2->Release();
    m_pGraphBuilder22->Release();
    m_pVCap2->Release();
    m_pMediaControl2->Release();
    m_pGrabberF2->Release();
    m_pGrabber2->Release();
    m_pRenderer2->Release();

    // Step 3: uninitialize COM
    CoUninitialize();

}

/* ------------------------------------------------------------------
   This function re-establishes the filter graphs. It is needed because
   the connections inside the graphs may have been released; at that
   point only the capture filters are left, so rebuilding can reuse the
   helper function already called in the constructor. ^_^  This is code
   lifted straight from the constructor; not a great piece of packaging
   on my part.
   ------------------------------------------------------------------ */
void BuildTwoCamFilterGraph::rebuildFilterGraph(void)
{
    HRESULT hr;

    // Connect the filters, build the filter graph
    //
    // 1. For camera 1 .................................................
    InitStillGraph(&m_pGraph1, &m_pGraphBuilder21, &m_pVCap1,
                   &m_pGrabberF1, &m_pRenderer1, &m_pGrabber1);

    // Ask for the connection media type so we know how big it is,
    // so we can write out bitmaps
    AM_MEDIA_TYPE mt;
    hr = m_pGrabber1->GetConnectedMediaType(&mt);
    if (FAILED(hr))
    {
        MessageBox(NULL, TEXT("something wrong"),
                   TEXT("Could not read the connected media type"), 0);
        return;
    }
    VIDEOINFOHEADER *vih = (VIDEOINFOHEADER *)mt.pbFormat;
    mCB1.lWidth  = vih->bmiHeader.biWidth;
    mCB1.lHeight = vih->bmiHeader.biHeight;
    FreeMediaType(mt);

    // Don't buffer the samples as they pass through
    hr = m_pGrabber1->SetBufferSamples(FALSE);

    // Only grab one at a time, stop stream after grabbing one sample
    hr = m_pGrabber1->SetOneShot(FALSE);

    // Set the callback, so we can grab the one sample
    hr = m_pGrabber1->SetCallback(&mCB1, 1);

    // Get this just so we can show the video on screen
    hr = m_pGraph1->QueryInterface(IID_IVideoWindow, (void **)&m_pVidWin1);

    // Get the media control object, used to run the filter graph
    hr = m_pGraph1->QueryInterface(IID_IMediaControl, (void **)&m_pMediaControl1);

    // 2. For camera 2 .................................................
    InitStillGraph(&m_pGraph2, &m_pGraphBuilder22, &m_pVCap2,
                   &m_pGrabberF2, &m_pRenderer2, &m_pGrabber2);

    // Ask for the connection media type so we know how big it is,
    // so we can write out bitmaps
    hr = m_pGrabber2->GetConnectedMediaType(&mt);
    if (FAILED(hr))
    {
        MessageBox(NULL, TEXT("something wrong"),
                   TEXT("Could not read the connected media type"), 0);
        return;
    }
    vih = (VIDEOINFOHEADER *)mt.pbFormat;
    mCB2.lWidth  = vih->bmiHeader.biWidth;
    mCB2.lHeight = vih->bmiHeader.biHeight;
    FreeMediaType(mt);

    // Don't buffer the samples as they pass through
    hr = m_pGrabber2->SetBufferSamples(FALSE);

    // Only grab one at a time, stop stream after grabbing one sample
    hr = m_pGrabber2->SetOneShot(FALSE);

    // Set the callback, so we can grab the one sample
    hr = m_pGrabber2->SetCallback(&mCB2, 1);

    // Get this just so we can show the video on screen
    hr = m_pGraph2->QueryInterface(IID_IVideoWindow, (void **)&m_pVidWin2);

    // Get the media control object, used to run the filter graph
    hr = m_pGraph2->QueryInterface(IID_IMediaControl, (void **)&m_pMediaControl2);

    // Now set the flags to true
    m_bFilterGraphCam1 = true;
    m_bFilterGraphCam2 = true;

}

void BuildTwoCamFilterGraph::rebuildFilterGraph1(void)
{
    HRESULT hr;

    // Connect the filters, build the filter graph
    //
    // 1. For camera 1 .................................................
    InitStillGraph(&m_pGraph1, &m_pGraphBuilder21, &m_pVCap1,
                   &m_pGrabberF1, &m_pRenderer1, &m_pGrabber1);

    // Ask for the connection media type so we know how big it is,
    // so we can write out bitmaps
    AM_MEDIA_TYPE mt;
    hr = m_pGrabber1->GetConnectedMediaType(&mt);
    if (FAILED(hr))
    {
        MessageBox(NULL, TEXT("something wrong"),
                   TEXT("Could not read the connected media type"), 0);
        return;
    }
    VIDEOINFOHEADER *vih = (VIDEOINFOHEADER *)mt.pbFormat;
    mCB1.lWidth  = vih->bmiHeader.biWidth;
    mCB1.lHeight = vih->bmiHeader.biHeight;
    FreeMediaType(mt);

    // Don't buffer the samples as they pass through
    hr = m_pGrabber1->SetBufferSamples(FALSE);

    // Only grab one at a time, stop stream after grabbing one sample
    hr = m_pGrabber1->SetOneShot(FALSE);

    // Set the callback, so we can grab the one sample
    hr = m_pGrabber1->SetCallback(&mCB1, 1);

    // Get this just so we can show the video on screen
    hr = m_pGraph1->QueryInterface(IID_IVideoWindow, (void **)&m_pVidWin1);

    // Get the media control object, used to run the filter graph
    hr = m_pGraph1->QueryInterface(IID_IMediaControl, (void **)&m_pMediaControl1);

    m_bFilterGraphCam1 = true;
}

void BuildTwoCamFilterGraph::rebuildFilterGraph2(void)
{
    HRESULT hr;
    AM_MEDIA_TYPE mt;
    VIDEOINFOHEADER *vih;

    // 2. For camera 2 .................................................
    InitStillGraph(&m_pGraph2, &m_pGraphBuilder22, &m_pVCap2,
                   &m_pGrabberF2, &m_pRenderer2, &m_pGrabber2);

    // Ask for the connection media type

    hr = m_pGrabber2->GetConnectedMediaType(&mt);
    if (FAILED(hr))
    {
        MessageBox(NULL, TEXT("something wrong"),
                   TEXT("Could not read the connected media type"), 0);
        return;
    }
    vih = (VIDEOINFOHEADER *)mt.pbFormat;
    mCB2.lWidth  = vih->bmiHeader.biWidth;
    mCB2.lHeight = vih->bmiHeader.biHeight;
    FreeMediaType(mt);

    // Don't buffer the samples as they pass through
    hr = m_pGrabber2->SetBufferSamples(FALSE);

    // Only grab one at a time, stop stream after grabbing one sample
    hr = m_pGrabber2->SetOneShot(FALSE);

    // Set the callback, so we can grab the one sample
    hr = m_pGrabber2->SetCallback(&mCB2, 1);

    // Get this just so we can show the video on screen
    hr = m_pGraph2->QueryInterface(IID_IVideoWindow, (void **)&m_pVidWin2);

    // Get the media control object, used to run the filter graph
    hr = m_pGraph2->QueryInterface(IID_IMediaControl, (void **)&m_pMediaControl2);

    // Now set the flag to true
    m_bFilterGraphCam2 = true;

}

// The following functions are the auxiliary functions -------------------
void BuildTwoCamFilterGraph::tearDownFilterGraph()
{
    // for camera 1
    TearDownGraph(m_pGraph1, m_pVidWin1, m_pVCap1);
    m_bFilterGraphCam1 = false;
    m_bPreview1 = false;

    // for camera 2
    TearDownGraph(m_pGraph2, m_pVidWin2, m_pVCap2);
    m_bFilterGraphCam2 = false;
    m_bPreview2 = false;
}

void BuildTwoCamFilterGraph::tearDownFilterGraph1()
{
    // for camera 1
    TearDownGraph(m_pGraph1, m_pVidWin1, m_pVCap1);
    m_bFilterGraphCam1 = false;
    m_bPreview1 = false;
}

void BuildTwoCamFilterGraph::tearDownFilterGraph2()
{
    // for camera 2
    TearDownGraph(m_pGraph2, m_pVidWin2, m_pVCap2);
    m_bFilterGraphCam2 = false;
    m_bPreview2 = false;
}

/* ------------------------------------------------------------------
   This function builds one filter graph
   ------------------------------------------------------------------ */
void BuildTwoCamFilterGraph::InitStillGraph(IGraphBuilder         **ppGraph,
                                            ICaptureGraphBuilder2 **ppGraphBuilder2,
                                            IBaseFilter           **ppVCap,
                                            IBaseFilter           **ppGrabberF,
                                            IBaseFilter           **ppRenderer,
                                            ISampleGrabber        **ppGrabber)
{
    // We want to change the values the pointers point at,
    // so first create temporary local copies
    IGraphBuilder         *pGraph         = *ppGraph;
    ICaptureGraphBuilder2 *pGraphBuilder2 = *ppGraphBuilder2;
    IBaseFilter           *pVCap          = *ppVCap;
    IBaseFilter           *pGrabberF      = *ppGrabberF;
    IBaseFilter           *pRenderer      = *ppRenderer;
    ISampleGrabber        *pGrabber       = *ppGrabber;

    HRESULT hr;

    // Step 1: -------------------
    // Create a filter graph
    hr = CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER,
                          IID_IGraphBuilder, (void **)&pGraph);
    if (FAILED(hr))
    {
        MessageBox(NULL, TEXT("Can not create filter graph manager!"),
                   TEXT("something wrong"), 0);
        return;
    }

    // Step 2: -----------------------
    // Create the capture graph builder
    hr = CoCreateInstance(CLSID_CaptureGraphBuilder2, NULL, CLSCTX_INPROC_SERVER,
                          IID_ICaptureGraphBuilder2, (void **)&pGraphBuilder2);
    hr = pGraphBuilder2->SetFiltergraph(pGraph);
    if (SUCCEEDED(hr))
    {
        MessageBox(NULL, TEXT("m_pGraphBuilder2 success"), TEXT("ok"), 0);
    }
    else
    {
        MessageBox(NULL, TEXT("m_pGraphBuilder2 failed"), TEXT("something wrong"), 0);
        return;
    }

    // Step 3: ---------------------------
    // Connect the filters, do some setting
    //
    // Create a sample grabber (stolen from the SDK)
    hr = CoCreateInstance(CLSID_SampleGrabber, NULL, CLSCTX_INPROC_SERVER,
                          IID_IBaseFilter, (void **)&pGrabberF);
    if (FAILED(hr))
    {
        MessageBox(NULL, "GrabberF create failed", "wrong", 0);
        return;
    }
    if (SUCCEEDED(hr))
    {
        MessageBox(NULL, "GrabberF create succeeded", "right", 0);
    }

    hr = pGrabberF->QueryInterface(IID_ISampleGrabber, (void **)&pGrabber);
    if (FAILED(hr))    { MessageBox(NULL, "pGrabber failed", "wrong", 0); }
    if (SUCCEEDED(hr)) { MessageBox(NULL, "pGrabber succeeded", "ok", 0); }

    // Force it to connect to video, 24 bit
    CMediaType VideoType;
    VideoType.SetType(&MEDIATYPE_Video);
    VideoType.SetSubtype(&MEDIASUBTYPE_RGB24);
    hr = pGrabber->SetMediaType(&VideoType);    // shouldn't fail
    if (FAILED(hr))
    {
        MessageBox(NULL, TEXT("Could not set media type"),
                   TEXT("something wrong"), 0);
        return;
    }

    // Add the capture filter to the filter graph
    hr = pGraph->AddFilter(pVCap, L"Capture Filter");
    if (SUCCEEDED(hr))
    {
        MessageBox(NULL, TEXT("added capture filter to the filter graph"), "ok", 0);
    }

    // Add the grabber to the graph
    hr = pGraph->AddFilter(pGrabberF, L"Grabber");
    if (FAILED(hr))
    {
        MessageBox(NULL, TEXT("Could not put sample grabber in graph"),
                   TEXT("something wrong"), 0);
        return;
    }

    // Connect the filters:
    // try to render the preview pin
    hr = pGraphBuilder2->RenderStream(&PIN_CATEGORY_PREVIEW, &MEDIATYPE_Video,
                                      pVCap, pGrabberF, pRenderer);
    if (FAILED(hr))    { MessageBox(NULL, "Cannot RenderStream", "wrong", 0); }
    if (SUCCEEDED(hr)) { MessageBox(NULL, "RenderStream succeeded", "ok", 0); }

    // Write the results back through the pointers
    *ppGraph         = pGraph;
    *ppGraphBuilder2 = pGraphBuilder2;
    *ppVCap          = pVCap;
    *ppGrabberF      = pGrabberF;
    *ppRenderer      = pRenderer;
    *ppGrabber       = pGrabber;
}

/* ------------------------------------------------------------------
   This function just obtains the two capture filters and saves them
   into two private member variables; it does not add them to the
   filter graphs yet.
   ------------------------------------------------------------------ */
void BuildTwoCamFilterGraph::DeviceEnum()
{
    HRESULT hr;
    // ---------------------------------------------------------------
    // Enumerate all video capture devices

    // Create the system device enumerator
    ICreateDevEnum *pCreateDevEnum = 0;
    hr = CoCreateInstance(CLSID_SystemDeviceEnum, NULL, CLSCTX_INPROC_SERVER,
                          IID_ICreateDevEnum, (void **)&pCreateDevEnum);
    if (hr != NOERROR)
    {
        MessageBox(NULL, TEXT("Error creating device enumerator"),
                   TEXT("Error creating device enumerator"), 0);
        return;
    }

    // Create an enumerator for the video capture category
    IEnumMoniker *pEnum = 0;
    hr = pCreateDevEnum->CreateClassEnumerator(CLSID_VideoInputDeviceCategory, &pEnum, 0);
    if (hr != NOERROR)
    {
        MessageBox(NULL,
                   TEXT("Sorry, you have no video capture hardware.\r\n\r\n")
                   TEXT("Video capture will not function properly."),
                   TEXT("something wrong"), 0);
        return;
    }

    IMoniker *pMoniker;
    if (pEnum->Next(1, &pMoniker, NULL) == S_OK)    // get the first one
    {
        hr = pMoniker->BindToObject(0, 0, IID_IBaseFilter, (void **)&m_pVCap1);
        if (SUCCEEDED(hr)) { MessageBox(NULL, "Capture filter 1 succeeded", "ok", 0); }
        if (FAILED(hr))    { MessageBox(NULL, "Capture filter 1 failed", "ok", 0); }
    }
    if (pEnum->Next(1, &pMoniker, NULL) == S_OK)    // get the second one
    {
        hr = pMoniker->BindToObject(0, 0, IID_IBaseFilter, (void **)&m_pVCap2);
        if (SUCCEEDED(hr)) { MessageBox(NULL, "Capture filter 2 succeeded", "ok", 0); }
        if (FAILED(hr))    { MessageBox(NULL, "Capture filter 2 failed", "ok", 0); }
    }
    else
        MessageBox(NULL, TEXT("Need one more camera on the computer"),
                   TEXT("something wrong"), 0);
    pMoniker->Release();

}

// ---------------------------------------------------------------
// The following two functions are copied from the example program.
// They are auxiliary: when the pins of the capture graph are set,
// all downstream filter connections need to be broken first.
//
/* ------------------------------------------------------------------
   Tear down everything downstream of a given filter: deletes all
   filters below the given filter in a graph builder, implemented
   with a recursive call. Stolen from the SDK's AMCap.
   ------------------------------------------------------------------ */
void BuildTwoCamFilterGraph::NukeDownstream(IBaseFilter *pf, IGraphBuilder *pFg)
{
    IPin *pP = 0, *pTo = 0;
    ULONG u;
    IEnumPins *pins = NULL;
    PIN_INFO pininfo;

    if (!pf)
        return;

    HRESULT hr = pf->EnumPins(&pins);
    pins->Reset();    // so the enumeration starts from the latest data?

    while (hr == NOERROR)
    {
        hr = pins->Next(1, &pP, &u);          // find a pin
        if (hr == S_OK && pP)
        {
            pP->ConnectedTo(&pTo);            // find the pin on the other filter
                                              // connected to this pin
            if (pTo)
            {
                hr = pTo->QueryPinInfo(&pininfo);
                if (hr == NOERROR)
                {
                    if (pininfo.dir == PINDIR_INPUT)    // only looking for input pins
                    {
                        NukeDownstream(pininfo.pFilter, pFg);   // recursive call
                        pFg->Disconnect(pTo);           // disconnect both ends of the pin
                        pFg->Disconnect(pP);
                        pFg->RemoveFilter(pininfo.pFilter);     // in the recursion, the
                                                                // deepest filter is removed first
                    }
                    pininfo.pFilter->Release();
                }
                pTo->Release();
            }
            pP->Release();
        }
    }    // end while

    if (pins)
        pins->Release();
}

/* ------------------------------------------------------------------
   Tear down everything downstream of the capture filters, so we can
   build a different capture graph. Note that we never destroy the
   capture filters and the WDM filters upstream of them, because then
   all the capture settings we've set would be lost. In other words,
   this deletes the filters behind the capture filter so that a new
   capture graph can be built; the capture filter itself is kept,
   because deleting it would throw away all of our original settings.
   Stolen from the SDK's AMCap.
   ------------------------------------------------------------------ */
void BuildTwoCamFilterGraph::TearDownGraph(IGraphBuilder *pFg, IVideoWindow *pVW,
                                           IBaseFilter *pVCap)
{
    if (pVW)    // IVideoWindow: stop showing the video
    {
        // Stop drawing in our window, or we may get weird repaint effects
        pVW->put_Owner(NULL);
        pVW->put_Visible(OAFALSE);
        pVW->Release();
        pVW = NULL;
    }

    // Destroy the graph downstream of our capture filters
    if (pVCap)
        NukeDownstream(pVCap, pFg);
    // NukeDownstream tears down everything downstream of the given
    // filter; the filters themselves are freed by the Release calls inside it
}

Below is the implementation of the binocular vision class. Some test utilities are part of it, such as saving the captured video stream as BMP pictures; that part is still kept, so that image processing can later be performed in the global multi-threaded functions. The image-processing functions themselves have not been written yet.
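The BMP-saving helpers are only declared in this chunk, not shown. As a rough, portable sketch of what such a function has to do, laying out the BITMAPFILEHEADER / BITMAPINFOHEADER fields by hand so it does not need `<windows.h>` (and assuming the grabbed RGB24 buffer is already in BMP's bottom-up, BGR pixel order, as DirectShow delivers it):

```cpp
#include <cstdint>
#include <cstdio>

// Little-endian field writers, so no packed structs are needed.
static void putU16(FILE *f, uint16_t v) { fputc(v & 0xFF, f); fputc(v >> 8, f); }
static void putU32(FILE *f, uint32_t v) { for (int i = 0; i < 4; ++i) fputc((v >> (8 * i)) & 0xFF, f); }

// Write a raw 24-bit pixel buffer out as a .bmp file.
bool saveBmp24(const char *path, const uint8_t *pixels, int width, int height)
{
    FILE *f = fopen(path, "wb");
    if (!f) return false;

    const uint32_t stride   = (width * 3 + 3) & ~3u;   // rows padded to 4 bytes
    const uint32_t dataSize = stride * height;
    const uint32_t offBits  = 14 + 40;                 // file header + info header

    // BITMAPFILEHEADER (14 bytes)
    putU16(f, 0x4D42);                 // magic 'BM'
    putU32(f, offBits + dataSize);     // total file size
    putU16(f, 0); putU16(f, 0);        // reserved
    putU32(f, offBits);                // offset to pixel data

    // BITMAPINFOHEADER (40 bytes)
    putU32(f, 40);                     // header size
    putU32(f, (uint32_t)width);
    putU32(f, (uint32_t)height);       // positive height = bottom-up rows
    putU16(f, 1);                      // planes
    putU16(f, 24);                     // bits per pixel
    putU32(f, 0);                      // BI_RGB, no compression
    putU32(f, dataSize);
    putU32(f, 0); putU32(f, 0);        // x/y pixels per meter
    putU32(f, 0); putU32(f, 0);        // colors used / important

    // Pixel rows, padded
    const uint8_t pad[3] = { 0, 0, 0 };
    for (int y = 0; y < height; ++y) {
        fwrite(pixels + (size_t)y * width * 3, 1, (size_t)width * 3, f);
        fwrite(pad, 1, stride - width * 3, f);
    }
    fclose(f);
    return true;
}
```

In the program itself the equivalent of `pixels`, `width` and `height` would come from `pcb->pBuffer` and the `lWidth`/`lHeight` members that the constructor fills in from the connected media type.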

//BinoView.h

#ifndef _BINOVIEW_H_
#define _BINOVIEW_H_

#include <windows.h>
#include <stdlib.h>      // _itoa: how to change a number to char*

#include "DataBuffer.h"  // definition of the global buffer structure

//extern DataBuffer cb1;   // for camera 1
//extern DataBuffer cb2;   // for camera 2

/* ------------------------------------------------------------------
   Description: this class implements the binocular vision algorithm
   ------------------------------------------------------------------ */

// the position of an object in an image is stored in this structure
typedef struct tagPositionInImage {
    double x;
    double y;
} IMGPOS;

// the position of an object in the world coordinate system
typedef struct tagPositionInWorld {
    double x;
    double y;
    double z;
} WORLDPOS;

// an object to be located, both in the two images and in the world
typedef struct tagObjectPos {
    IMGPOS inImage1;
    IMGPOS inImage2;
    WORLDPOS inWorld;
} OBJECTPOS;

class BinoView
{
public:
    BinoView(DataBuffer *pDataBuffer1, DataBuffer *pDataBuffer2);
    ~BinoView();

    // process the two images and get the positions of the objects in them
    void processImg1(void);
    void processImg2(void);

    // the last step of binocular vision
    void binocularVision(void);

    // inline accessors
    OBJECTPOS getBall(void)    { return m_ball; }
    OBJECTPOS getDoor(void)    { return m_door; }
    OBJECTPOS getPillar1(void) { return m_pillar1; }
    OBJECTPOS getPillar2(void) { return m_pillar2; }

    // compute the M matrix of each camera from its intrinsic (M1) and
    // extrinsic (M2) matrices; recomputed whenever the parameters change
    void cam1MMatrix(void);
    void cam2MMatrix(void);

    // ----- functions below are for testing only -----
    // save the data in the corresponding buffer as a BMP picture
    void copyBitmap1(void);   // save to bmp for buffer 1
    void copyBitmap2(void);   // save to bmp for buffer 2

    // a handy method that stores the buffer data into a bitmap file
    bool copyBitmap(DataBuffer *pDataBuffer, LPCTSTR m_szSnappedName);

    // this function is empty; it exists only for testing
    void theLastStep(void);

    // -------------------------------
    // this is a little try
    //int count;
    // -------------------------------

    // camera parameters -------------
    // camera 1:
    double cam1M[3][4];    // M matrix of camera 1, M = M1 x M2
    double cam1M1[3][4];   // intrinsic matrix of camera 1
    double cam1M2[4][4];   // extrinsic matrix of camera 1

    // camera 2:
    double cam2M[3][4];    // M matrix of camera 2, M = M1 x M2
    double cam2M1[3][4];   // intrinsic matrix of camera 2
    double cam2M2[4][4];   // extrinsic matrix of camera 2

private:
    // the data buffers handed in for processing
    DataBuffer *pBufferCam1;
    DataBuffer *pBufferCam2;

    // the matched object positions in the two images
    OBJECTPOS m_ball;      // only the ball is considered for now
    OBJECTPOS m_door;      // the goal
    OBJECTPOS m_pillar1;   // corner pillar 1
    OBJECTPOS m_pillar2;   // corner pillar 2

    // ----- helper function -----
    void processImg(DataBuffer *pBufferCam, int numOfImg);
};

#endif

//BinoView.cpp

#include "stdafx.h"
#include "BinoView.h"
#include "MathUtilities.h"   // the class that performs the numerical calculations
#include <string.h>          // strcpy/strcat

// event handles used for multi-thread synchronization; they are defined
// at global scope elsewhere, so here they can only be declared extern
extern HANDLE hEventGetData;
extern HANDLE hEventProcessData1;
extern HANDLE hEventProcessData2;
extern HANDLE hEventProcessData1Finish;
extern HANDLE hEventProcessData2Finish;

// public methods -----------------------------------------------------
BinoView::BinoView(DataBuffer *pDataBuffer1, DataBuffer *pDataBuffer2)
    : pBufferCam1(pDataBuffer1), pBufferCam2(pDataBuffer2)
{
    // initialize the camera parameters
    for (int i = 0; i < 3; i++) {
        for (int j = 0; j < 4; j++) {
            cam1M[i][j] = 0;  cam2M[i][j] = 0;
            cam1M1[i][j] = 0; cam2M1[i][j] = 0;
        }
    }
    for (int i = 0; i < 4; i++) {
        for (int j = 0; j < 4; j++) {
            cam1M2[i][j] = 0; cam2M2[i][j] = 0;
        }
    }

    // special values
    cam1M1[2][2] = 1; cam2M1[2][2] = 1;
    cam1M2[3][3] = 1; cam2M2[3][3] = 1;

    // for the ball
    m_ball.inImage1.x = 0; m_ball.inImage1.y = 0;
    m_ball.inImage2.x = 0; m_ball.inImage2.y = 0;
    m_ball.inWorld.x = 0; m_ball.inWorld.y = 0; m_ball.inWorld.z = 0;
    // for the door
    m_door.inImage1.x = 0; m_door.inImage1.y = 0;
    m_door.inImage2.x = 0; m_door.inImage2.y = 0;
    m_door.inWorld.x = 0; m_door.inWorld.y = 0; m_door.inWorld.z = 0;
    // for pillar 1
    m_pillar1.inImage1.x = 0; m_pillar1.inImage1.y = 0;
    m_pillar1.inImage2.x = 0; m_pillar1.inImage2.y = 0;
    m_pillar1.inWorld.x = 0; m_pillar1.inWorld.y = 0; m_pillar1.inWorld.z = 0;
    // for pillar 2
    m_pillar2.inImage1.x = 0; m_pillar2.inImage1.y = 0;
    m_pillar2.inImage2.x = 0; m_pillar2.inImage2.y = 0;
    m_pillar2.inWorld.x = 0; m_pillar2.inWorld.y = 0; m_pillar2.inWorld.z = 0;

    // just for debug
    //count = 0;
}

BinoView::~BinoView()
{
}

// the next two methods obtain the object positions in each image
void BinoView::processImg1(void)
{
    processImg(pBufferCam1, 1);

    // set the value by hand, just for debug
    //m_ball.inImage1.x = 20; m_ball.inImage1.y = 20;

    // set the event
    SetEvent(hEventProcessData1Finish);
}

void BinoView::processImg2(void)
{
    processImg(pBufferCam2, 2);

    // set the value by hand, just for debug
    //m_ball.inImage2.x = 20; m_ball.inImage2.y = 20;

    // set the event
    SetEvent(hEventProcessData2Finish);
}

/* ------------------------------------------------------------------
   This method does the last step of binocular vision. Before it runs,
   the object positions in the two images must already have been
   obtained, and the M matrices must already have been calculated; there
   is no error handling here yet. Note: the M matrices of the cameras
   are not calculated here -- they are recomputed whenever the user
   supplies new parameters. The method computes the world positions of
   the ball, the door and the two corner pillars; if an object could not
   be seen in both images, its world coordinates are set to -1.
   ------------------------------------------------------------------ */
void BinoView::binocularVision(void)
{
    double K[4][3];
    double U[4][1];
    double M[3][1];            // the result
    // intermediate results
    double transeposeK[3][4];
    double multiKK[3][3];
    double multiKKK[3][4];

    // for the ball --------------------------------
    if ((m_ball.inImage1.x > 0) && (m_ball.inImage1.y > 0) &&
        (m_ball.inImage2.x > 0) && (m_ball.inImage2.y > 0)) {
        // give values to matrix K and matrix U
        K[0][0] = m_ball.inImage1.x*cam1M[2][0] - cam1M[0][0];
        K[0][1] = m_ball.inImage1.x*cam1M[2][1] - cam1M[0][1];
        K[0][2] = m_ball.inImage1.x*cam1M[2][2] - cam1M[0][2];
        K[1][0] = m_ball.inImage1.y*cam1M[2][0] - cam1M[1][0];
        K[1][1] = m_ball.inImage1.y*cam1M[2][1] - cam1M[1][1];
        K[1][2] = m_ball.inImage1.y*cam1M[2][2] - cam1M[1][2];
        K[2][0] = m_ball.inImage2.x*cam2M[2][0] - cam2M[0][0];
        K[2][1] = m_ball.inImage2.x*cam2M[2][1] - cam2M[0][1];
        K[2][2] = m_ball.inImage2.x*cam2M[2][2] - cam2M[0][2];
        K[3][0] = m_ball.inImage2.y*cam2M[2][0] - cam2M[1][0];
        K[3][1] = m_ball.inImage2.y*cam2M[2][1] - cam2M[1][1];
        K[3][2] = m_ball.inImage2.y*cam2M[2][2] - cam2M[1][2];

        U[0][0] = cam1M[0][3] - m_ball.inImage1.x*cam1M[2][3];
        U[1][0] = cam1M[1][3] - m_ball.inImage1.y*cam1M[2][3];
        U[2][0] = cam2M[0][3] - m_ball.inImage2.x*cam2M[2][3];
        U[3][0] = cam2M[1][3] - m_ball.inImage2.y*cam2M[2][3];

        // least squares: M = (K^T * K)^-1 * K^T * U
        MathUtilities::matrixTransepose(K[0], 4, 3, transeposeK[0]);
        MathUtilities::matrixMultiple(transeposeK[0], K[0], 3, 4, 3, multiKK[0]);
        MathUtilities::matrixInv(multiKK[0], 3);
        MathUtilities::matrixMultiple(multiKK[0], transeposeK[0], 3, 3, 4, multiKKK[0]);
        MathUtilities::matrixMultiple(multiKKK[0], U[0], 3, 4, 1, M[0]);

        // now store the result
        m_ball.inWorld.x = M[0][0];
        m_ball.inWorld.y = M[1][0];
        m_ball.inWorld.z = M[2][0];
    } else {
        m_ball.inWorld.x = -1; m_ball.inWorld.y = -1; m_ball.inWorld.z = -1;
    }

    // for the door --------------------------------
    if ((m_door.inImage1.x > 0) && (m_door.inImage1.y > 0) &&
        (m_door.inImage2.x > 0) && (m_door.inImage2.y > 0)) {
        // give values to matrix K and matrix U
        K[0][0] = m_door.inImage1.x*cam1M[2][0] - cam1M[0][0];
        K[0][1] = m_door.inImage1.x*cam1M[2][1] - cam1M[0][1];
        K[0][2] = m_door.inImage1.x*cam1M[2][2] - cam1M[0][2];
        K[1][0] = m_door.inImage1.y*cam1M[2][0] - cam1M[1][0];
        K[1][1] = m_door.inImage1.y*cam1M[2][1] - cam1M[1][1];
        K[1][2] = m_door.inImage1.y*cam1M[2][2] - cam1M[1][2];
        K[2][0] = m_door.inImage2.x*cam2M[2][0] - cam2M[0][0];
        K[2][1] = m_door.inImage2.x*cam2M[2][1] - cam2M[0][1];
        K[2][2] = m_door.inImage2.x*cam2M[2][2] - cam2M[0][2];
        K[3][0] = m_door.inImage2.y*cam2M[2][0] - cam2M[1][0];
        K[3][1] = m_door.inImage2.y*cam2M[2][1] - cam2M[1][1];
        K[3][2] = m_door.inImage2.y*cam2M[2][2] - cam2M[1][2];

        U[0][0] = cam1M[0][3] - m_door.inImage1.x*cam1M[2][3];
        U[1][0] = cam1M[1][3] - m_door.inImage1.y*cam1M[2][3];
        U[2][0] = cam2M[0][3] - m_door.inImage2.x*cam2M[2][3];
        U[3][0] = cam2M[1][3] - m_door.inImage2.y*cam2M[2][3];

        // least squares: M = (K^T * K)^-1 * K^T * U
        MathUtilities::matrixTransepose(K[0], 4, 3, transeposeK[0]);
        MathUtilities::matrixMultiple(transeposeK[0], K[0], 3, 4, 3, multiKK[0]);
        MathUtilities::matrixInv(multiKK[0], 3);
        MathUtilities::matrixMultiple(multiKK[0], transeposeK[0], 3, 3, 4, multiKKK[0]);
        MathUtilities::matrixMultiple(multiKKK[0], U[0], 3, 4, 1, M[0]);

        // now store the result
        m_door.inWorld.x = M[0][0];
        m_door.inWorld.y = M[1][0];
        m_door.inWorld.z = M[2][0];
    } else {
        m_door.inWorld.x = -1; m_door.inWorld.y = -1; m_door.inWorld.z = -1;
    }

    // for pillar 1 --------------------------------
    if ((m_pillar1.inImage1.x > 0) && (m_pillar1.inImage1.y > 0) &&
        (m_pillar1.inImage2.x > 0) && (m_pillar1.inImage2.y > 0)) {
        // give values to matrix K and matrix U
        K[0][0] = m_pillar1.inImage1.x*cam1M[2][0] - cam1M[0][0];
        K[0][1] = m_pillar1.inImage1.x*cam1M[2][1] - cam1M[0][1];
        K[0][2] = m_pillar1.inImage1.x*cam1M[2][2] - cam1M[0][2];
        K[1][0] = m_pillar1.inImage1.y*cam1M[2][0] - cam1M[1][0];
        K[1][1] = m_pillar1.inImage1.y*cam1M[2][1] - cam1M[1][1];
        K[1][2] = m_pillar1.inImage1.y*cam1M[2][2] - cam1M[1][2];
        K[2][0] = m_pillar1.inImage2.x*cam2M[2][0] - cam2M[0][0];
        K[2][1] = m_pillar1.inImage2.x*cam2M[2][1] - cam2M[0][1];
        K[2][2] = m_pillar1.inImage2.x*cam2M[2][2] - cam2M[0][2];
        K[3][0] = m_pillar1.inImage2.y*cam2M[2][0] - cam2M[1][0];
        K[3][1] = m_pillar1.inImage2.y*cam2M[2][1] - cam2M[1][1];
        K[3][2] = m_pillar1.inImage2.y*cam2M[2][2] - cam2M[1][2];

        U[0][0] = cam1M[0][3] - m_pillar1.inImage1.x*cam1M[2][3];
        U[1][0] = cam1M[1][3] - m_pillar1.inImage1.y*cam1M[2][3];
        U[2][0] = cam2M[0][3] - m_pillar1.inImage2.x*cam2M[2][3];
        U[3][0] = cam2M[1][3] - m_pillar1.inImage2.y*cam2M[2][3];

        // least squares: M = (K^T * K)^-1 * K^T * U
        MathUtilities::matrixTransepose(K[0], 4, 3, transeposeK[0]);
        MathUtilities::matrixMultiple(transeposeK[0], K[0], 3, 4, 3, multiKK[0]);
        MathUtilities::matrixInv(multiKK[0], 3);
        MathUtilities::matrixMultiple(multiKK[0], transeposeK[0], 3, 3, 4, multiKKK[0]);
        MathUtilities::matrixMultiple(multiKKK[0], U[0], 3, 4, 1, M[0]);

        // now store the result
        m_pillar1.inWorld.x = M[0][0];
        m_pillar1.inWorld.y = M[1][0];
        m_pillar1.inWorld.z = M[2][0];
    } else {
        m_pillar1.inWorld.x = -1; m_pillar1.inWorld.y = -1; m_pillar1.inWorld.z = -1;
    }

    // for pillar 2 --------------------------------
    if ((m_pillar2.inImage1.x > 0) && (m_pillar2.inImage1.y > 0) &&
        (m_pillar2.inImage2.x > 0) && (m_pillar2.inImage2.y > 0)) {
        // give values to matrix K and matrix U
        K[0][0] = m_pillar2.inImage1.x*cam1M[2][0] - cam1M[0][0];
        K[0][1] = m_pillar2.inImage1.x*cam1M[2][1] - cam1M[0][1];
        K[0][2] = m_pillar2.inImage1.x*cam1M[2][2] - cam1M[0][2];
        K[1][0] = m_pillar2.inImage1.y*cam1M[2][0] - cam1M[1][0];
        K[1][1] = m_pillar2.inImage1.y*cam1M[2][1] - cam1M[1][1];
        K[1][2] = m_pillar2.inImage1.y*cam1M[2][2] - cam1M[1][2];
        K[2][0] = m_pillar2.inImage2.x*cam2M[2][0] - cam2M[0][0];
        K[2][1] = m_pillar2.inImage2.x*cam2M[2][1] - cam2M[0][1];
        K[2][2] = m_pillar2.inImage2.x*cam2M[2][2] - cam2M[0][2];
        K[3][0] = m_pillar2.inImage2.y*cam2M[2][0] - cam2M[1][0];
        K[3][1] = m_pillar2.inImage2.y*cam2M[2][1] - cam2M[1][1];
        K[3][2] = m_pillar2.inImage2.y*cam2M[2][2] - cam2M[1][2];

        U[0][0] = cam1M[0][3] - m_pillar2.inImage1.x*cam1M[2][3];
        U[1][0] = cam1M[1][3] - m_pillar2.inImage1.y*cam1M[2][3];
        U[2][0] = cam2M[0][3] - m_pillar2.inImage2.x*cam2M[2][3];
        U[3][0] = cam2M[1][3] - m_pillar2.inImage2.y*cam2M[2][3];

        // least squares: M = (K^T * K)^-1 * K^T * U
        MathUtilities::matrixTransepose(K[0], 4, 3, transeposeK[0]);
        MathUtilities::matrixMultiple(transeposeK[0], K[0], 3, 4, 3, multiKK[0]);
        MathUtilities::matrixInv(multiKK[0], 3);
        MathUtilities::matrixMultiple(multiKK[0], transeposeK[0], 3, 3, 4, multiKKK[0]);
        MathUtilities::matrixMultiple(multiKKK[0], U[0], 3, 4, 1, M[0]);

        // now store the result
        m_pillar2.inWorld.x = M[0][0];
        m_pillar2.inWorld.y = M[1][0];
        m_pillar2.inWorld.z = M[2][0];
    } else {
        m_pillar2.inWorld.x = -1; m_pillar2.inWorld.y = -1; m_pillar2.inWorld.z = -1;
    }

    // now everything is finished; set the event
    SetEvent(hEventGetData);
}
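The least-squares step above can be checked in isolation. The sketch below is a standalone reimplementation, not the class code: `solvePoint` and `triangulate` are hypothetical helper names, and the small 3x3 inverse uses the adjugate formula instead of the Gaussian routine from MathUtilities. It builds K and U exactly as binocularVision does, from the relation (u·m3 - m1)·[X,Y,Z] = m14 - u·m34.

```cpp
#include <cmath>

// Solve the 4x3 least-squares system K*M = U, i.e. M = (K^T K)^-1 K^T U.
// The 3x3 inverse is done with the adjugate formula for brevity.
bool solvePoint(const double K[4][3], const double U[4], double M[3])
{
    double A[3][3] = {{0}}, b[3] = {0};          // A = K^T K, b = K^T U
    for (int i = 0; i < 3; i++) {
        for (int j = 0; j < 3; j++)
            for (int r = 0; r < 4; r++)
                A[i][j] += K[r][i] * K[r][j];
        for (int r = 0; r < 4; r++)
            b[i] += K[r][i] * U[r];
    }
    double det = A[0][0]*(A[1][1]*A[2][2]-A[1][2]*A[2][1])
               - A[0][1]*(A[1][0]*A[2][2]-A[1][2]*A[2][0])
               + A[0][2]*(A[1][0]*A[2][1]-A[1][1]*A[2][0]);
    if (std::fabs(det) < 1e-12) return false;    // degenerate geometry
    double inv[3][3] = {
        { (A[1][1]*A[2][2]-A[1][2]*A[2][1])/det, -(A[0][1]*A[2][2]-A[0][2]*A[2][1])/det,  (A[0][1]*A[1][2]-A[0][2]*A[1][1])/det },
        {-(A[1][0]*A[2][2]-A[1][2]*A[2][0])/det,  (A[0][0]*A[2][2]-A[0][2]*A[2][0])/det, -(A[0][0]*A[1][2]-A[0][2]*A[1][0])/det },
        { (A[1][0]*A[2][1]-A[1][1]*A[2][0])/det, -(A[0][0]*A[2][1]-A[0][1]*A[2][0])/det,  (A[0][0]*A[1][1]-A[0][1]*A[1][0])/det }};
    for (int i = 0; i < 3; i++) {
        M[i] = 0;
        for (int j = 0; j < 3; j++) M[i] += inv[i][j]*b[j];
    }
    return true;
}

// Triangulate a world point from its projections in two cameras, filling
// K and U row by row the same way binocularVision does from the M matrices.
bool triangulate(const double P1[3][4], const double P2[3][4],
                 double u1, double v1, double u2, double v2, double M[3])
{
    double K[4][3], U[4];
    const double (*P[2])[4] = { P1, P2 };
    double uv[2][2] = { {u1, v1}, {u2, v2} };
    for (int c = 0; c < 2; c++)
        for (int r = 0; r < 2; r++) {
            int row = 2*c + r;
            for (int j = 0; j < 3; j++)
                K[row][j] = uv[c][r]*P[c][2][j] - P[c][r][j];
            U[row] = P[c][r][3] - uv[c][r]*P[c][2][3];
        }
    return solvePoint(K, U, M);
}
```

With two idealized projection matrices (focal length 500, second camera shifted by a 0.5 baseline), the point (1, 2, 10) projects to (50, 100) and (25, 100), and the solver recovers it exactly, which is a quick way to convince yourself the K/U sign convention is right.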

// private methods ----------------------------------------------------
// This function is not well designed: the second parameter says which
// image is being processed, so the result can be stored in the right place.
void BinoView::processImg(DataBuffer *pBufferCam, int numOfImg)
{
    // step 1 ---------------------------------------------------------
    // Image processing: extract the objects from the image and get their
    // coordinates in the image; if an object does not appear in the
    // image, its coordinates are set to -1.
    // Temporary variables for the coordinates in the image:
    IMGPOS ball;      // only the ball is considered for now
    IMGPOS door;      // the goal
    IMGPOS pillar1;   // corner pillar 1
    IMGPOS pillar2;   // corner pillar 2

    // the image processing starts here; for now just give debug results
    ball.x = -1; ball.y = -1;
    door.x = -1; door.y = -1;
    pillar1.x = -1; pillar1.y = -1;
    pillar2.x = -1; pillar2.y = -1;

    // step 2 ---------------------------------------------------------
    // store the result into the members for the corresponding image
    if (numOfImg == 1) {
        m_ball.inImage1 = ball;       m_door.inImage1 = door;
        m_pillar1.inImage1 = pillar1; m_pillar2.inImage1 = pillar2;
    } else {
        m_ball.inImage2 = ball;       m_door.inImage2 = door;
        m_pillar1.inImage2 = pillar1; m_pillar2.inImage2 = pillar2;
    }
}

// compute the M matrix of camera 1
void BinoView::cam1MMatrix(void)
{
    MathUtilities::matrixMultiple(cam1M1[0], cam1M2[0], 3, 4, 4, cam1M[0]);
}

// compute the M matrix of camera 2
void BinoView::cam2MMatrix(void)
{
    MathUtilities::matrixMultiple(cam2M1[0], cam2M2[0], 3, 4, 4, cam2M[0]);
}

// --------------------------------------------------------------------
// the functions below are mainly for testing
void BinoView::theLastStep()
{
    Sleep(100);

    // and this is just a try
    //count++;

    // set the event
    SetEvent(hEventGetData);
}

bool BinoView::copyBitmap(DataBuffer *pDataBuffer, LPCTSTR m_szSnappedName)
{
    // write out a bmp file
    HANDLE hf = CreateFile(m_szSnappedName, GENERIC_WRITE, FILE_SHARE_READ,
                           NULL, CREATE_ALWAYS, 0, NULL);
    if (hf == INVALID_HANDLE_VALUE)
        return 0;

    // write out the file header
    BITMAPFILEHEADER bfh;
    memset(&bfh, 0, sizeof(BITMAPFILEHEADER));
    bfh.bfType = 'MB';   // 'BM' with the bytes swapped
    // set the size of the bitmap file
    bfh.bfSize = sizeof(BITMAPFILEHEADER) + sizeof(BITMAPINFOHEADER) + pDataBuffer->lBufferSize;
    // set the offset of the bitmap bits
    bfh.bfOffBits = sizeof(BITMAPFILEHEADER) + sizeof(BITMAPINFOHEADER);

    DWORD dwWritten = 0;
    WriteFile(hf, &bfh, sizeof(bfh), &dwWritten, NULL);

    // and the bitmap format
    BITMAPINFOHEADER bih;
    memset(&bih, 0, sizeof(BITMAPINFOHEADER));
    bih.biSize = sizeof(BITMAPINFOHEADER);
    bih.biWidth = pDataBuffer->lWidth;
    bih.biHeight = pDataBuffer->lHeight;
    bih.biPlanes = 1;
    bih.biBitCount = 24;

    dwWritten = 0;
    WriteFile(hf, &bih, sizeof(bih), &dwWritten, NULL);

    // and the bits themselves
    dwWritten = 0;
    WriteFile(hf, pDataBuffer->pBuffer, pDataBuffer->lBufferSize, &dwWritten, NULL);

    CloseHandle(hf);
    //bFileWritten = true;

    // save the BITMAPINFOHEADER for later
    //memcpy(&(cb2.bih), &bih, sizeof(bih));
    return true;
}
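The file layout copyBitmap produces is plain enough to reproduce off Windows for testing. The sketch below is a stand-in, not the class code: it assembles the same 14-byte file header plus 40-byte info header in a byte vector instead of using BITMAPFILEHEADER/WriteFile, emitting the fields byte by byte in little-endian order so no struct packing is needed. `makeBmp` is a hypothetical helper name.

```cpp
#include <cstdint>
#include <vector>

void putU16(std::vector<uint8_t> &v, uint16_t x) {
    v.push_back((uint8_t)(x & 0xFF)); v.push_back((uint8_t)(x >> 8));
}
void putU32(std::vector<uint8_t> &v, uint32_t x) {
    for (int i = 0; i < 4; i++) v.push_back((uint8_t)((x >> (8*i)) & 0xFF));
}

// Build a 24-bit BMP the same way copyBitmap does:
// BITMAPFILEHEADER (14 bytes) + BITMAPINFOHEADER (40 bytes) + raw pixels.
std::vector<uint8_t> makeBmp(long width, long height, const std::vector<uint8_t> &pixels)
{
    const uint32_t headerSize = 14 + 40;              // file header + info header
    std::vector<uint8_t> out;
    // BITMAPFILEHEADER
    out.push_back('B'); out.push_back('M');           // bfType ('MB' as a word constant is 'B','M' in byte order)
    putU32(out, headerSize + (uint32_t)pixels.size()); // bfSize = headers + pixel data
    putU16(out, 0); putU16(out, 0);                   // bfReserved1 / bfReserved2
    putU32(out, headerSize);                          // bfOffBits: where the bits start
    // BITMAPINFOHEADER
    putU32(out, 40);                                  // biSize
    putU32(out, (uint32_t)width);                     // biWidth
    putU32(out, (uint32_t)height);                    // biHeight
    putU16(out, 1);                                   // biPlanes
    putU16(out, 24);                                  // biBitCount
    putU32(out, 0); putU32(out, 0);                   // biCompression, biSizeImage
    putU32(out, 0); putU32(out, 0);                   // biXPelsPerMeter, biYPelsPerMeter
    putU32(out, 0); putU32(out, 0);                   // biClrUsed, biClrImportant
    // and the bits themselves
    out.insert(out.end(), pixels.begin(), pixels.end());
    return out;
}
```

One caveat the real code shares: each BMP pixel row must be padded to a multiple of 4 bytes, which holds automatically for the frame widths DirectShow typically delivers but is worth remembering for odd widths.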

void BinoView::copyBitmap1(void)
{
    //copyBitmap(pBufferCam1, "e:\\capturebmp\\cam1.bmp");
    static int cnt = 0;
    char buffer[50];
    _itoa(cnt, buffer, 10);

    char *pathname = new char[100];
    const char *path = "e:\\capturebmp\\cam1";
    strcpy(pathname, path);
    strcat(pathname, buffer);
    strcat(pathname, ".bmp");

    copyBitmap(pBufferCam1, pathname);
    delete [] pathname;

    cnt++;
    SetEvent(hEventProcessData1Finish);
}

void BinoView::copyBitmap2(void)
{
    //copyBitmap(pBufferCam2, "e:\\capturebmp\\cam2.bmp");
    static int cnt = 0;
    char buffer[50];
    _itoa(cnt, buffer, 10);

    char *pathname = new char[100];
    const char *path = "e:\\capturebmp\\cam2";
    strcpy(pathname, path);
    strcat(pathname, buffer);
    strcat(pathname, ".bmp");

    copyBitmap(pBufferCam2, pathname);
    delete [] pathname;

    cnt++;
    SetEvent(hEventProcessData2Finish);
}
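The `_itoa`/`strcpy`/`strcat` dance in copyBitmap1/copyBitmap2 builds a numbered filename into a raw heap buffer. A sketch of the same step with `std::string` (the helper name `makeSnapName` and the directory are illustrative, not the program's code) avoids both the fixed-size buffer and the manual delete:

```cpp
#include <string>

// Build a numbered snapshot name like "cam1" + counter + ".bmp",
// the same name copyBitmap1 assembles with _itoa/strcpy/strcat.
std::string makeSnapName(const std::string &dir, const std::string &stem, int cnt)
{
    return dir + stem + std::to_string(cnt) + ".bmp";
}
```

`std::to_string` (C++11) plays the role of `_itoa`, and the returned string frees itself, so there is nothing to leak between frames.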

For binocular vision, numerical computation is indispensable. For the related calculations I took the simple route and adapted source code from a book on numerical methods. Only a few of its functions are called so far, but those few numerical routines are enough for binocular vision.

//MathUtilities.h

#ifndef MATHUTILITIES_H
#define MATHUTILITIES_H

#include "math.h"

class MathUtilities
{
public:
    // multiply two matrices and store the result: C = A * B
    static void matrixMultiple(double a[],  // left matrix A
                               double b[],  // right matrix B
                               int m,       // rows of A, also rows of the result C
                               int n,       // columns of A, also rows of B
                               int k,       // columns of B, also columns of C
                               double c[]); // result matrix C

    // transpose matrix A, putting the result into B
    static void matrixTransepose(double a[],  // input matrix
                                 int m,       // rows of A
                                 int n,       // columns of A
                                 double b[]); // result, the transpose of A

    // invert a real matrix in place using Gaussian elimination with full
    // pivoting; returns 0 if the matrix is singular
    static int matrixInv(double a[],  // matrix to invert; returns its inverse
                         int n);      // order of the matrix
};

#endif
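Before wiring routines like these into the vision code, the flat-array conventions are worth sanity-checking, since a wrong `i*n + j` index silently scrambles every result downstream. The sketch below reimplements the multiply and transpose with the same row-major calling conventions (self-contained copies with assumed names `matMul`/`matTranspose`, not the class itself):

```cpp
// Same row-major flat-array convention as MathUtilities:
// C (m x k) = A (m x n) * B (n x k), element (i,j) stored at [i*cols + j]
void matMul(const double a[], const double b[], int m, int n, int k, double c[])
{
    for (int i = 0; i < m; i++)
        for (int j = 0; j < k; j++) {
            double s = 0.0;
            for (int l = 0; l < n; l++)
                s += a[i*n + l] * b[l*k + j];
            c[i*k + j] = s;
        }
}

// B (n x m) = transpose of A (m x n)
void matTranspose(const double a[], int m, int n, double b[])
{
    for (int i = 0; i < m; i++)
        for (int j = 0; j < n; j++)
            b[j*m + i] = a[i*n + j];
}
```

Checking a 2x3 times 3x2 product against a hand-computed result, and confirming the transpose swaps strides, catches the usual row/column mix-ups before they reach the least-squares step.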

//MathUtilities.cpp

#include "stdafx.h"
#include "MathUtilities.h"
#include <stdlib.h>   // malloc/free
#include <stdio.h>    // printf

// matrix multiplication ----------------------------------------------
// C (m x k) = A (m x n) * B (n x k), all stored row-major in flat arrays
void MathUtilities::matrixMultiple(double a[], double b[], int m, int n, int k, double c[])
{
    int i, j, l, u;
    for (i = 0; i <= m-1; i++)
        for (j = 0; j <= k-1; j++) {
            u = i*k + j;
            c[u] = 0.0;
            for (l = 0; l <= n-1; l++)
                c[u] = c[u] + a[i*n + l]*b[l*k + j];
        }
    return;
}

// transpose matrix A (m x n), putting the result into B (n x m) -------
void MathUtilities::matrixTransepose(double a[], int m, int n, double b[])
{
    for (int i = 0; i <= m-1; i++)
        for (int j = 0; j <= n-1; j++)
            b[j*m + i] = a[i*n + j];
}

// in-place inversion of a real matrix by Gaussian elimination with full
// pivoting; returns 0 if the matrix is singular ------------------------
int MathUtilities::matrixInv(double a[], int n)
{
    int *is, *js, i, j, k, l, u, v;
    double d, p;
    is = (int *)malloc(n*sizeof(int));
    js = (int *)malloc(n*sizeof(int));
    for (k = 0; k <= n-1; k++) {
        d = 0.0;
        for (i = k; i <= n-1; i++)
            for (j = k; j <= n-1; j++) {
                l = i*n + j; p = fabs(a[l]);
                if (p > d) { d = p; is[k] = i; js[k] = j; }
            }
        if (d + 1.0 == 1.0) {
            free(is); free(js);
            printf("err**not inv\n");
            return (0);
        }
        if (is[k] != k)
            for (j = 0; j <= n-1; j++) {
                u = k*n + j; v = is[k]*n + j;
                p = a[u]; a[u] = a[v]; a[v] = p;
            }
        if (js[k] != k)
            for (i = 0; i <= n-1; i++) {
                u = i*n + k; v = i*n + js[k];
                p = a[u]; a[u] = a[v]; a[v] = p;
            }
        l = k*n + k;
        a[l] = 1.0/a[l];
        for (j = 0; j <= n-1; j++)
            if (j != k) { u = k*n + j; a[u] = a[u]*a[l]; }
        for (i = 0; i <= n-1; i++)
            if (i != k)
                for (j = 0; j <= n-1; j++)
                    if (j != k) {
                        u = i*n + j;
                        a[u] = a[u] - a[i*n + k]*a[k*n + j];
                    }
        for (i = 0; i <= n-1; i++)
            if (i != k) { u = i*n + k; a[u] = -a[u]*a[l]; }
    }
    for (k = n-1; k >= 0; k--) {
        if (js[k] != k)
            for (j = 0; j <= n-1; j++) {
                u = k*n + j; v = js[k]*n + j;
                p = a[u]; a[u] = a[v]; a[v] = p;
            }
        if (is[k] != k)
            for (i = 0; i <= n-1; i++) {
                u = i*n + k; v = i*n + is[k];
                p = a[u]; a[u] = a[v]; a[v] = p;
            }
    }
    free(is); free(js);
    return (1);
}

The structure defined in the following header is where the captured image data is stored.

//DataBuffer.h

#ifndef _DATABUFFER_H_
#define _DATABUFFER_H_

// structures --------------------------------------------------------
// the following structure is used to store the captured data
typedef struct _callbackinfo {
    double dblSampleTime;
    long   lBufferSize;
    BYTE  *pBuffer;
    // ------------
    // the following members are used to measure the frame rate
    double dblSampleTimeBefore;
    double dblSampleTimeAfter;
    double dblSampleTimeNow;
    double dblSampleTimeBetween;
    double cnt;
    // ------------
    BITMAPINFOHEADER bih;
    long lWidth;
    long lHeight;
} DataBuffer;

#endif

The major classes are now in place. Below is the main program, which calls the classes above to achieve real-time video stream capture and processing. It contains several global threads, several events, and two global data areas holding the video frame data; it is also the file where I implement the dialog box. The dialog class itself is just an ordinary interface class, so this file is rather messy.

// TwoCamCaptureBmpDlg.h : header file
//

#pragma once

#include <dshow.h>
#include <qedit.h>     // ISampleGrabberCB

#include "afxwin.h"

#pragma comment(lib, "strmiids.lib")
#pragma comment(lib, "strmbase.lib")

// CtwoCamCaptureBmpDlg dialog
class CtwoCamCaptureBmpDlg : public CDialog
{
// Construction
public:
    CtwoCamCaptureBmpDlg(CWnd *pParent = NULL);   // standard constructor

// Dialog data
    enum { IDD = IDD_TWOCAMCAPTUREBMP_DIALOG };

protected:
    virtual void DoDataExchange(CDataExchange *pDX);   // DDX/DDV support

// Implementation
protected:
    HICON m_hIcon;

    // generated message-map functions
    virtual BOOL OnInitDialog();
    afx_msg void OnSysCommand(UINT nID, LPARAM lParam);
    afx_msg void OnPaint();
    afx_msg HCURSOR OnQueryDragIcon();
    DECLARE_MESSAGE_MAP()

private:
    // helper functions
    // used to set the capture filter
    void SetCaptureFilter(IBaseFilter *pVCap);

    // handles of the threads
    HANDLE hGetData;
    HANDLE hProcessData1;
    HANDLE hProcessData2;
    HANDLE hGetResult;

public:
    // the window for the preview of camera 1
    CStatic m_previewCam1;
    // the window for the preview of camera 2
    CStatic m_previewCam2;

    // ----------------------
    // public methods
    // this function is not used for anything yet, but it will be useful later
    void showResult(void);

    afx_msg BOOL OnEraseBkgnd(CDC *pDC);
    afx_msg void OnBnClickedBegin();
    afx_msg void OnBnClickedSavegrf();
    afx_msg void OnBnClickedSetpincam1();
    afx_msg void OnBnClickedSetfiltercam1();
    afx_msg void OnBnClickedSetpincam2();
    afx_msg void OnBnClickedSetfiltercam2();
    afx_msg void OnBnClickedStop();
    afx_msg void OnBnClickedStopcap();
    afx_msg void OnBnClickedCamval1();
    afx_msg void OnBnClickedCamval2();
    afx_msg void OnBnClickedProcessbmp();
};

// TwoCamCaptureBmpDlg.cpp : implementation file
//

#include "stdafx.h"
#include "TwoCamCaptureBmp.h"
#include "TwoCamCaptureBmpDlg.h"

#ifdef _DEBUG
#define new DEBUG_NEW
#endif

// ---------------------------------------
// headers
//
#include "DataBuffer.h"               // definition of the global structure
#include "BuildTwoCamFilterGraph.h"
#include "BinoView.h"

// dialogs that display and set the camera parameters
#include "CamValue1.h"
#include "CamValue2.h"

// --------------------------------------------------
// the global data
//
DataBuffer cb1 = {0};
DataBuffer cb2 = {0};
BOOL g_bOneShot1 = FALSE;
BOOL g_bOneShot2 = FALSE;

// a pointer to the dialog, made global just for convenience
CtwoCamCaptureBmpDlg *pDlg = NULL;

BuildTwoCamFilterGraph *twoCam = new BuildTwoCamFilterGraph(&cb1, &cb2);

BinoView *binoCular = new BinoView(&cb1, &cb2);

// -----------------------------------
// the events
//
// the first event starts data acquisition; it is signaled when the user
// presses the Start button
HANDLE hEventGetData = CreateEvent(NULL,
                                   FALSE,   // auto-reset: returns to non-signaled automatically
                                   FALSE,   // initially non-signaled
                                   TEXT("getDataEvent"));
HANDLE hEventProcessData1 = CreateEvent(NULL, FALSE, FALSE, TEXT("processData1Event"));
HANDLE hEventProcessData2 = CreateEvent(NULL, FALSE, FALSE, TEXT("processData2Event"));
HANDLE hEventProcessData1Finish = CreateEvent(NULL, FALSE, FALSE, TEXT("finishProcessData1Event"));
HANDLE hEventProcessData2Finish = CreateEvent(NULL, FALSE, FALSE, TEXT("finishProcessData2Event"));
// used for WaitForMultipleObjects
HANDLE hEventProcessDataFinish[2] = { hEventProcessData1Finish, hEventProcessData2Finish };

// -----------------------------------
// the global functions
//
/* ------------------------------------------------------------------
   The thread below starts data acquisition (that is, it sets the
   g_bOneShot flags); it waits on an event before each round.
   ------------------------------------------------------------------ */
DWORD WINAPI g_getData(LPVOID p)
{
    while (true) {
        WaitForSingleObject(hEventGetData, INFINITE);
        //MessageBox(NULL, "getData signaled", "ok", 0);
        g_bOneShot1 = TRUE;
        g_bOneShot2 = TRUE;
        // note: the processing threads are not activated here -- they are
        // activated only when the data has really been placed into the buffers
    }
}

/* ------------------------------------------------------------------
   The thread below processes the image data acquired by camera 1; it
   can run only after the data has been completely acquired, so it waits
   on an event.
   ------------------------------------------------------------------ */
DWORD WINAPI g_processData1(LPVOID p)
{
    while (true) {
        WaitForSingleObject(hEventProcessData1, INFINITE);
        //MessageBox(NULL, "hEventProcessData1 signaled", "ok", 0);
        // now process the data from camera 1
        //binoCular->copyBitmap1();   // just for debug
        binoCular->processImg1();
        // when processing finishes, hEventProcessData1Finish is signaled;
        // I think that event must be set by the method above
        //SetEvent(hEventProcessData1Finish);
    }
}

/* ------------------------------------------------------------------
   The thread below processes the image data acquired by camera 2, with
   the same structure as above.
   ------------------------------------------------------------------ */
DWORD WINAPI g_processData2(LPVOID p)
{
    while (true) {
        WaitForSingleObject(hEventProcessData2, INFINITE);
        //MessageBox(NULL, "hEventProcessData2 signaled", "ok", 0);
        // now process the data from camera 2
        //binoCular->copyBitmap2();   // just for debug
        binoCular->processImg2();
        // the finish event should be set by the method above, so it can
        // confirm that the function really finished
        //SetEvent(hEventProcessData2Finish);
    }
}

/* ------------------------------------------------------------------
   The thread below fuses the results processed above, so it must wait
   for both processing threads to finish (two events). After the data
   has been handled it signals hEventGetData, so that the acquisition
   thread can continue to run.
   ------------------------------------------------------------------ */
DWORD WINAPI g_getResult(LPVOID p)
{
    while (true) {
        WaitForMultipleObjects(2, hEventProcessDataFinish, TRUE, INFINITE);
        //MessageBox(NULL, "hEventProcessDataFinish signaled", "ok", 0);
        // all data in the buffers has been processed, so we can use it
        // and compute the final result
        //binoCular->theLastStep();   // just for debug
        binoCular->binocularVision();

        // call the display function of the dialog
        pDlg->showResult();

        // now signal hEventGetData; maybe this event should be set by the
        // function above instead
        //SetEvent(hEventGetData);
    }
}
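The event choreography above (acquire, then process camera 1 and camera 2 concurrently, then fuse, then loop) is built on Win32 events, but the same pattern can be sketched portably with `std::condition_variable`. The sketch below is an illustrative analog, not the program's code: the `Pipeline` struct and its members are invented, and it runs one round instead of an infinite loop so it can be tested.

```cpp
#include <condition_variable>
#include <mutex>
#include <thread>

// One round of the pipeline: the two "process" workers wait until data is
// ready and run concurrently; the fuser waits until both have finished --
// the roles played by hEventProcessData1/2 and hEventProcessData*Finish.
struct Pipeline {
    std::mutex m;
    std::condition_variable cv;
    bool dataReady = false;
    int finished = 0;
    int result = 0;

    void process(int value) {                 // plays g_processData1 / g_processData2
        std::unique_lock<std::mutex> lk(m);
        cv.wait(lk, [this]{ return dataReady; });   // like WaitForSingleObject
        result += value;                      // stand-in for processImgN()
        ++finished;
        cv.notify_all();                      // like SetEvent(...Finish)
    }

    int run() {                               // plays g_getData + g_getResult
        std::thread t1(&Pipeline::process, this, 1);
        std::thread t2(&Pipeline::process, this, 2);
        {
            std::lock_guard<std::mutex> lk(m);
            dataReady = true;                 // like SetEvent(hEventProcessDataN)
        }
        cv.notify_all();
        {
            std::unique_lock<std::mutex> lk(m);
            cv.wait(lk, [this]{ return finished == 2; });  // like WaitForMultipleObjects(..., TRUE, ...)
        }
        t1.join();
        t2.join();
        return result;                        // stand-in for binocularVision()
    }
};
```

The difference worth noting: Win32 auto-reset events carry their own state, while a condition variable must always be paired with a mutex-protected flag (`dataReady`, `finished`), which is why each wait takes a predicate.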

// CAboutDlg dialog used for the application's "About" menu item

class CAboutDlg : public CDialog
{
public:
    CAboutDlg();

// Dialog data
    enum { IDD = IDD_ABOUTBOX };

protected:
    virtual void DoDataExchange(CDataExchange *pDX);   // DDX/DDV support

// Implementation
protected:
    DECLARE_MESSAGE_MAP()
public:
    afx_msg void OnClose();
};

CAboutDlg::CAboutDlg() : CDialog(CAboutDlg::IDD)
{
}

void CAboutDlg::DoDataExchange(CDataExchange *pDX)
{
    CDialog::DoDataExchange(pDX);
}

BEGIN_MESSAGE_MAP(CAboutDlg, CDialog)
    ON_WM_CLOSE()
END_MESSAGE_MAP()

// CtwoCamCaptureBmpDlg dialog

CtwoCamCaptureBmpDlg::CtwoCamCaptureBmpDlg(CWnd *pParent /*=NULL*/)
    : CDialog(CtwoCamCaptureBmpDlg::IDD, pParent)
{
    m_hIcon = AfxGetApp()->LoadIcon(IDR_MAINFRAME);
}

void CtwoCamCaptureBmpDlg::DoDataExchange(CDataExchange *pDX)
{
    CDialog::DoDataExchange(pDX);
    DDX_Control(pDX, IDC_WINDOWCAM1, m_previewCam1);
    DDX_Control(pDX, IDC_WINDOWCAM2, m_previewCam2);
}

BEGIN_MESSAGE_MAP(CtwoCamCaptureBmpDlg, CDialog)
    ON_WM_SYSCOMMAND()
    ON_WM_PAINT()
    ON_WM_QUERYDRAGICON()
    //}}AFX_MSG_MAP
    ON_WM_ERASEBKGND()
    ON_BN_CLICKED(IDC_BEGIN, OnBnClickedBegin)
    ON_BN_CLICKED(IDC_SAVEGRF, OnBnClickedSavegrf)
    ON_BN_CLICKED(IDC_SETPINCAM1, OnBnClickedSetpincam1)
    ON_BN_CLICKED(IDC_SETFILTERCAM1, OnBnClickedSetfiltercam1)
    ON_BN_CLICKED(IDC_SETPINCAM2, OnBnClickedSetpincam2)
    ON_BN_CLICKED(IDC_SETFILTERCAM2, OnBnClickedSetfiltercam2)
    ON_BN_CLICKED(IDC_STOP, OnBnClickedStop)
    ON_BN_CLICKED(IDC_STOPCAP, OnBnClickedStopcap)
    ON_BN_CLICKED(IDC_CAMVAL1, OnBnClickedCamval1)
    ON_BN_CLICKED(IDC_CAMVAL2, OnBnClickedCamval2)
    ON_BN_CLICKED(IDC_PROCESSBMP, OnBnClickedProcessbmp)
END_MESSAGE_MAP()

// CtwoCamCaptureBmpDlg message handlers

BOOL CtwoCamCaptureBmpDlg::OnInitDialog()
{
    CDialog::OnInitDialog();

    // Add the "About..." menu item to the system menu.

    // IDM_ABOUTBOX must be in the system command range.
    ASSERT((IDM_ABOUTBOX & 0xFFF0) == IDM_ABOUTBOX);
    ASSERT(IDM_ABOUTBOX < 0xF000);

    CMenu* pSysMenu = GetSystemMenu(FALSE);
    if (pSysMenu != NULL)
    {
        CString strAboutMenu;
        strAboutMenu.LoadString(IDS_ABOUTBOX);
        if (!strAboutMenu.IsEmpty())
        {
            pSysMenu->AppendMenu(MF_SEPARATOR);
            pSysMenu->AppendMenu(MF_STRING, IDM_ABOUTBOX, strAboutMenu);
        }
    }

    // Set the icon for this dialog. The framework does this automatically
    // when the application's main window is not a dialog.
    SetIcon(m_hIcon, TRUE);   // Set big icon
    SetIcon(m_hIcon, FALSE);  // Set small icon

    // TODO: Add extra initialization here

    // Since we're embedding video in a child window of a dialog,
    // we must set the WS_CLIPCHILDREN style to prevent the bounding
    // rectangle from drawing over our video frames.
    //
    // Neglecting to set this style can lead to situations where the video
    // is erased and replaced with the default color of the bounding rectangle.
    m_previewCam1.ModifyStyle(0, WS_CLIPCHILDREN);
    m_previewCam2.ModifyStyle(0, WS_CLIPCHILDREN);

    //---------------------------------------------------------------
    // Configure the video windows so the captured video is displayed
    // inside the dialog. Note: these settings must be applied when
    // the filter graph is created, otherwise they have no effect.
    //---------------------------------------------------------------

    // For camera 1:
    HRESULT hr;
    hr = (twoCam->getIVideoWindow1())->put_Owner((OAHWND)m_previewCam1.GetSafeHwnd());
    if (SUCCEEDED(hr))
    {
        // The video window must have the WS_CHILD style
        hr = (twoCam->getIVideoWindow1())->put_WindowStyle(WS_CHILD);

        // Read coordinates of the video container window
        RECT rc;
        m_previewCam1.GetClientRect(&rc);
        long width  = rc.right - rc.left;
        long height = rc.bottom - rc.top;

        // Ignore the video's original size and stretch to fit the bounding rectangle
        hr = (twoCam->getIVideoWindow1())->SetWindowPosition(rc.left, rc.top, width, height);
        (twoCam->getIVideoWindow1())->put_Visible(OATRUE);
    }

    // For camera 2:
    hr = (twoCam->getIVideoWindow2())->put_Owner((OAHWND)m_previewCam2.GetSafeHwnd());
    if (SUCCEEDED(hr))
    {
        // The video window must have the WS_CHILD style
        hr = (twoCam->getIVideoWindow2())->put_WindowStyle(WS_CHILD);

        // Read coordinates of the video container window
        RECT rc;
        m_previewCam2.GetClientRect(&rc);
        long width  = rc.right - rc.left;
        long height = rc.bottom - rc.top;

        // Ignore the video's original size and stretch to fit the bounding rectangle
        hr = (twoCam->getIVideoWindow2())->SetWindowPosition(rc.left, rc.top, width, height);
        (twoCam->getIVideoWindow2())->put_Visible(OATRUE);
    }

    // Run the graphs now so the preview starts as soon as the window is built
    hr = (twoCam->getIMediaControl1())->Run();
    if (FAILED(hr))
    {
        AfxMessageBox(TEXT("Can not run the graph"));
    }
    (twoCam->getIMediaControl2())->Run();

    twoCam->setFilterGraph1Run(TRUE);
    twoCam->setFilterGraph2Run(TRUE);

    // Now give pDlg a value
    pDlg = this;

    return TRUE;  // return TRUE unless you set the focus to a control
}

void CtwoCamCaptureBmpDlg::OnSysCommand(UINT nID, LPARAM lParam)
{
    if ((nID & 0xFFF0) == IDM_ABOUTBOX)
    {
        CAboutDlg dlgAbout;
        dlgAbout.DoModal();
    }
    else
    {
        CDialog::OnSysCommand(nID, lParam);
    }
}

// If you add a minimize button to the dialog, you need the code below
// to draw the icon. For MFC applications using the document/view model,
// this is done automatically by the framework.

void CtwoCamCaptureBmpDlg::OnPaint()
{
    if (IsIconic())
    {
        CPaintDC dc(this); // device context for painting

        SendMessage(WM_ICONERASEBKGND, reinterpret_cast<WPARAM>(dc.GetSafeHdc()), 0);

        // Center the icon in the client rectangle
        int cxIcon = GetSystemMetrics(SM_CXICON);
        int cyIcon = GetSystemMetrics(SM_CYICON);
        CRect rect;
        GetClientRect(&rect);
        int x = (rect.Width() - cxIcon + 1) / 2;
        int y = (rect.Height() - cyIcon + 1) / 2;

        // Draw the icon
        dc.DrawIcon(x, y, m_hIcon);
    }
    else
    {
        CDialog::OnPaint();
    }
}

// The system calls this function to obtain the cursor to display
// while the user drags the minimized window.
HCURSOR CtwoCamCaptureBmpDlg::OnQueryDragIcon()
{
    return static_cast<HCURSOR>(m_hIcon);
}

/*-----------------------------------------------------------------
  This function is used to display the property pages of a
  capture filter.
-----------------------------------------------------------------*/
void CtwoCamCaptureBmpDlg::SetCaptureFilter(IBaseFilter* pVCap)
{
    CWnd tt;
    tt.GetActiveWindow();

    ISpecifyPropertyPages* pProp;
    HRESULT hr = pVCap->QueryInterface(IID_ISpecifyPropertyPages, (void**)&pProp);
    if (SUCCEEDED(hr))
    {
        // Get the filter's name and IUnknown pointer.
        FILTER_INFO FilterInfo;
        hr = pVCap->QueryFilterInfo(&FilterInfo);
        IUnknown* pFilterUnk;
        pVCap->QueryInterface(IID_IUnknown, (void**)&pFilterUnk);

        // Show the page.
        CAUUID caGUID;
        pProp->GetPages(&caGUID);
        pProp->Release();
        OleCreatePropertyFrame(
            tt.m_hWnd,           // Parent window
            0, 0,                // Reserved
            FilterInfo.achName,  // Caption for the dialog box
            1,                   // Number of objects (just the filter)
            &pFilterUnk,         // Array of object pointers
            caGUID.cElems,       // Number of property pages
            caGUID.pElems,       // Array of property page CLSIDs
            0,                   // Locale identifier
            0, NULL);            // Reserved

        // Clean up.
        pFilterUnk->Release();
        FilterInfo.pGraph->Release();
        CoTaskMemFree(caGUID.pElems);
    }
}

BOOL CtwoCamCaptureBmpDlg::OnEraseBkgnd(CDC* pDC)
{
    // Exclude the video preview rectangles from the background erase
    // so it does not paint over the video frames.
    CRect rc;
    m_previewCam1.GetWindowRect(&rc);
    ScreenToClient(&rc);
    pDC->ExcludeClipRect(&rc);

    m_previewCam2.GetWindowRect(&rc);
    ScreenToClient(&rc);
    pDC->ExcludeClipRect(&rc);

    return CDialog::OnEraseBkgnd(pDC);
}

void CAboutDlg::OnClose()
{
    // TODO: Add your message handler code here and/or call default
    delete twoCam;
    delete binocular;

    CDialog::OnClose();
}

void CtwoCamCaptureBmpDlg::OnBnClickedBegin()
{
    // In this "begin" handler we first check whether each filter graph is
    // connected; if not, we rebuild it. Next we configure the video windows
    // so the captured stream is displayed, then run the filter graphs.
    // Finally, we re-create the worker threads and signal the event so the
    // threads start running.

    // For camera 1 ---------------------------------
    if (!twoCam->isFilterGraph1Built())
    {
        twoCam->rebuildFilterGraph1();
        twoCam->setFilterGraph1Build(TRUE);

        HRESULT hr;
        hr = (twoCam->getIVideoWindow1())->put_Owner((OAHWND)m_previewCam1.GetSafeHwnd());
        if (SUCCEEDED(hr))
        {
            // The video window must have the WS_CHILD style
            hr = (twoCam->getIVideoWindow1())->put_WindowStyle(WS_CHILD);

            // Read coordinates of the video container window
            RECT rc;
            m_previewCam1.GetClientRect(&rc);
            long width  = rc.right - rc.left;
            long height = rc.bottom - rc.top;

            // Ignore the video's original size and stretch to fit the bounding rectangle
            hr = (twoCam->getIVideoWindow1())->SetWindowPosition(rc.left, rc.top, width, height);
            (twoCam->getIVideoWindow1())->put_Visible(OATRUE);
        }
    }
    if (!twoCam->isFilterGraph1Run())
    {
        twoCam->getIMediaControl1()->Run();
        twoCam->setFilterGraph1Run(TRUE);
    }

    // For camera 2 ---------------------------------
    if (!twoCam->isFilterGraph2Built())
    {
        twoCam->rebuildFilterGraph2();
        twoCam->setFilterGraph2Build(TRUE);

        HRESULT hr;
        hr = (twoCam->getIVideoWindow2())->put_Owner((OAHWND)m_previewCam2.GetSafeHwnd());
        if (SUCCEEDED(hr))
        {
            hr = (twoCam->getIVideoWindow2())->put_WindowStyle(WS_CHILD);

            RECT rc;
            m_previewCam2.GetClientRect(&rc);
            long width  = rc.right - rc.left;
            long height = rc.bottom - rc.top;

            hr = (twoCam->getIVideoWindow2())->SetWindowPosition(rc.left, rc.top, width, height);
            (twoCam->getIVideoWindow2())->put_Visible(OATRUE);
        }
    }
    if (!twoCam->isFilterGraph2Run())
    {
        twoCam->getIMediaControl2()->Run();
        twoCam->setFilterGraph2Run(TRUE);
    }

    // Create the worker threads here so we can control the processing
    hGetData      = CreateThread(NULL, 0, g_getData,      0, 0, NULL);
    hProcessData1 = CreateThread(NULL, 0, g_processData1, 0, 0, NULL);
    hProcessData2 = CreateThread(NULL, 0, g_processData2, 0, 0, NULL);
    hGetResult    = CreateThread(NULL, 0, g_getResult,    0, 0, NULL);

    // Signal hEventGetData to start the capture/processing pipeline
    SetEvent(hEventGetData);
}

void CtwoCamCaptureBmpDlg::OnBnClickedSavegrf()
{
    HRESULT hr;
    CFileDialog dlg(TRUE);

    if (dlg.DoModal() == IDOK)
    {
        WCHAR wFileName[MAX_PATH];
        MultiByteToWideChar(CP_ACP, 0, dlg.GetPathName(), -1, wFileName, MAX_PATH);

        IStorage* pStorage = NULL;

        // First, create a document file to hold the GRF file.
        hr = ::StgCreateDocfile(wFileName,
                                STGM_CREATE | STGM_TRANSACTED | STGM_READWRITE | STGM_SHARE_EXCLUSIVE,
                                0, &pStorage);
        if (FAILED(hr))
        {
            AfxMessageBox(TEXT("Can not create a document"));
            return;
        }

        // Next, create a stream to store the graph.
        WCHAR wszStreamName[] = L"ActiveMovieGraph";
        IStream* pStream;
        hr = pStorage->CreateStream(wszStreamName,
                                    STGM_WRITE | STGM_CREATE | STGM_SHARE_EXCLUSIVE,
                                    0, 0, &pStream);
        if (FAILED(hr))
        {
            AfxMessageBox(TEXT("Can not create a stream"));
            pStorage->Release();
            return;
        }

        // The IPersistStream::Save method writes the graph
        // into the stream as a persistent object.
        IPersistStream* pPersist = NULL;
        (twoCam->getIGraphBuilder1())->QueryInterface(IID_IPersistStream,
                                                      reinterpret_cast<void**>(&pPersist));
        hr = pPersist->Save(pStream, TRUE);
        pStream->Release();
        pPersist->Release();

        if (SUCCEEDED(hr))
        {
            hr = pStorage->Commit(STGC_DEFAULT);
            if (FAILED(hr))
            {
                AfxMessageBox(TEXT("Can not store it"));
            }
        }
        pStorage->Release();
    }
}

void CtwoCamCaptureBmpDlg::OnBnClickedSetpincam1()
{
    CWnd tt;
    tt.GetActiveWindow();

    HRESULT hr;
    IAMStreamConfig* pSC;

    if (twoCam->isFilterGraph1Run())
    {
        twoCam->getIMediaControl1()->Stop();
        twoCam->setFilterGraph1Run(FALSE);
    }

    if (twoCam->isFilterGraph1Built())
    {
        twoCam->setFilterGraph1Build(FALSE);
        twoCam->tearDownFilterGraph1();  // a connected graph could prevent the dialog from working
    }

    hr = (twoCam->getICaptureGraphBuilder21())->FindInterface(&PIN_CATEGORY_CAPTURE,
                                                              &MEDIATYPE_Video,
                                                              twoCam->getCaptureFilter1(),
                                                              IID_IAMStreamConfig,
                                                              (void**)&pSC);
    ISpecifyPropertyPages* pSpec;
    CAUUID cauuid;
    hr = pSC->QueryInterface(IID_ISpecifyPropertyPages, (void**)&pSpec);
    if (hr == S_OK)
    {
        hr = pSpec->GetPages(&cauuid);

        // Display the properties page
        hr = OleCreatePropertyFrame(tt.m_hWnd, 30, 30, NULL, 1,
                                    (IUnknown**)&pSC,
                                    cauuid.cElems, (GUID*)cauuid.pElems,
                                    0, 0, NULL);

        // !!! What if changing the output format couldn't reconnect
        // and the graph is broken? Shouldn't be possible...
        CoTaskMemFree(cauuid.pElems);
        pSpec->Release();
        pSC->Release();
    }
}

void CtwoCamCaptureBmpDlg::OnBnClickedSetfiltercam1()
{
    SetCaptureFilter(twoCam->getCaptureFilter1());
}

void CtwoCamCaptureBmpDlg::OnBnClickedSetpincam2()
{
    CWnd tt;
    tt.GetActiveWindow();

    HRESULT hr;
    IAMStreamConfig* pSC;

    if (twoCam->isFilterGraph2Run())
    {
        twoCam->getIMediaControl2()->Stop();
        twoCam->setFilterGraph2Run(FALSE);
    }

    if (twoCam->isFilterGraph2Built())
    {
        twoCam->setFilterGraph2Build(FALSE);
        twoCam->tearDownFilterGraph2();  // a connected graph could prevent the dialog from working
    }

    hr = (twoCam->getICaptureGraphBuilder22())->FindInterface(&PIN_CATEGORY_CAPTURE,
                                                              &MEDIATYPE_Video,
                                                              twoCam->getCaptureFilter2(),
                                                              IID_IAMStreamConfig,
                                                              (void**)&pSC);

    ISpecifyPropertyPages* pSpec;
    CAUUID cauuid;
    hr = pSC->QueryInterface(IID_ISpecifyPropertyPages, (void**)&pSpec);
    if (hr == S_OK)
    {
        hr = pSpec->GetPages(&cauuid);

        // Display the properties page
        hr = OleCreatePropertyFrame(tt.m_hWnd, 30, 30, NULL, 1,
                                    (IUnknown**)&pSC,
                                    cauuid.cElems, (GUID*)cauuid.pElems,
                                    0, 0, NULL);

        // !!! What if changing the output format couldn't reconnect
        // and the graph is broken? Shouldn't be possible...
        CoTaskMemFree(cauuid.pElems);
        pSpec->Release();
        pSC->Release();
    }
}

void CtwoCamCaptureBmpDlg::OnBnClickedSetfiltercam2()
{
    SetCaptureFilter(twoCam->getCaptureFilter2());
}

void CtwoCamCaptureBmpDlg::OnBnClickedStop()
{
    // Terminate all the worker threads. This only ends the threads used
    // for frame grabbing and processing; it has no effect on whether the
    // video streams themselves keep running.
    DWORD dwExitCode = 0;
    BOOL bSuccess;
    bSuccess = TerminateThread(hGetData, dwExitCode);
    bSuccess = TerminateThread(hProcessData1, dwExitCode);
    bSuccess = TerminateThread(hProcessData2, dwExitCode);
    bSuccess = TerminateThread(hGetResult, dwExitCode);
}
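A caveat on the handler above: `TerminateThread` ends a thread abruptly, so its stack is not unwound and any lock it holds stays held, which is why it is usually reserved for debugging scenarios like this one. The safer pattern is a stop flag that the worker loop polls. The sketch below uses standard C++ as a portable stand-in for the Win32 version (an event plus `WaitForSingleObject` with a zero timeout); the names `g_stopRequested`, `workerLoop`, and `runAndStop` are illustrative, not from the original program:

```cpp
#include <atomic>
#include <thread>

// Shared stop flag checked by the worker's loop; setting it replaces
// the forcible TerminateThread call with a cooperative, clean exit.
std::atomic<bool> g_stopRequested{false};

int workerLoop() {
    int iterations = 0;
    while (!g_stopRequested.load()) {
        ++iterations;                   // stand-in for one capture/process cycle
        if (iterations >= 1000) break;  // safety bound for this sketch
    }
    return iterations;                  // destructors run, locks are released
}

int runAndStop() {
    g_stopRequested.store(false);
    int result = 0;
    std::thread worker([&] { result = workerLoop(); });
    g_stopRequested.store(true);        // the "Stop" button handler would do this
    worker.join();                      // wait for the thread to exit cleanly
    return result;
}
```

With this design the stop handler only sets the flag and joins; the worker decides where in its cycle it is safe to exit.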

void CtwoCamCaptureBmpDlg::ShowResult(void)
{
    char buffer[50];

    _itoa(binocular->count, buffer, 10);
    SetDlgItemText(IDC_EDIT1, buffer);

    _itoa(cb1.cnt, buffer, 10);
    SetDlgItemText(IDC_EDIT2, buffer);
}

void CtwoCamCaptureBmpDlg::OnBnClickedStopcap()
{
    // This function exists for debugging convenience: it freezes and
    // displays the frames needed for image processing.
    //   1. End the threads
    //   2. Stop the filter graphs
    //   3. Draw the last captured images into the dialog

    // Step 1: kill the threads ---------------------------
    DWORD dwExitCode = 0;
    BOOL bSuccess;
    bSuccess = TerminateThread(hGetData, dwExitCode);
    bSuccess = TerminateThread(hProcessData1, dwExitCode);
    bSuccess = TerminateThread(hProcessData2, dwExitCode);
    bSuccess = TerminateThread(hGetResult, dwExitCode);

    // Step 2: stop the filter graphs ---------------------------
    // For camera 1:
    if (twoCam->isFilterGraph1Run())
    {
        twoCam->getIMediaControl1()->Stop();
        twoCam->setFilterGraph1Run(FALSE);
    }
    // For camera 2:
    if (twoCam->isFilterGraph2Run())
    {
        twoCam->getIMediaControl2()->Stop();
        twoCam->setFilterGraph2Run(FALSE);
    }

    // Step 3: show the last bitmaps we captured ---------------------------
    // For camera 1:
    BITMAPINFOHEADER bih1;
    memset(&bih1, 0, sizeof(BITMAPINFOHEADER));
    bih1.biSize = sizeof(BITMAPINFOHEADER);
    bih1.biWidth = cb1.lWidth;
    bih1.biHeight = cb1.lHeight;
    bih1.biPlanes = 1;
    bih1.biBitCount = 24;

    CWnd* pWndCam1 = GetDlgItem(IDC_WINDOWCAM1);
    CDC* theDCCam1 = pWndCam1->GetDC();
    CRect rectCam1;
    pWndCam1->GetClientRect(&rectCam1);

    StretchDIBits(theDCCam1->m_hDC,
                  rectCam1.left, rectCam1.top,
                  rectCam1.right - rectCam1.left, rectCam1.bottom - rectCam1.top,
                  0, 0, bih1.biWidth, bih1.biHeight,
                  cb1.pBuffer, (LPBITMAPINFO)&bih1, DIB_RGB_COLORS, SRCCOPY);

    // For camera 2:
    BITMAPINFOHEADER bih2;
    memset(&bih2, 0, sizeof(BITMAPINFOHEADER));
    bih2.biSize = sizeof(BITMAPINFOHEADER);
    bih2.biWidth = cb2.lWidth;
    bih2.biHeight = cb2.lHeight;
    bih2.biPlanes = 1;
    bih2.biBitCount = 24;

    CWnd* pWndCam2 = GetDlgItem(IDC_WINDOWCAM2);
    CDC* theDCCam2 = pWndCam2->GetDC();
    CRect rectCam2;
    pWndCam2->GetClientRect(&rectCam2);

    StretchDIBits(theDCCam2->m_hDC,
                  rectCam2.left, rectCam2.top,
                  rectCam2.right - rectCam2.left, rectCam2.bottom - rectCam2.top,
                  0, 0, bih2.biWidth, bih2.biHeight,
                  cb2.pBuffer, (LPBITMAPINFO)&bih2, DIB_RGB_COLORS, SRCCOPY);

    // Step 4: store the bitmaps into memory ---------------------------
    binocular->copyBitmap1();
    binocular->copyBitmap2();
}
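The `StretchDIBits` calls above hand Windows a raw 24-bit buffer, and a detail that is easy to get wrong with such buffers is that each DIB scanline must be padded to a DWORD (4-byte) boundary. A small portable helper for that computation, shown as a sketch (the original `cb1`/`cb2` buffers are assumed to already respect this padding):

```cpp
// Each scanline of a Windows DIB is padded to a 4-byte (DWORD) boundary.
// For 24-bit frames whose width is not a multiple of 4, forgetting this
// padding skews the image passed to StretchDIBits.
long dibStride(long width, int bitCount) {
    return ((width * bitCount + 31) / 32) * 4;
}

// Total size in bytes of the pixel data for a DIB of the given dimensions.
long dibImageSize(long width, long height, int bitCount) {
    return dibStride(width, bitCount) * height;
}
```

For example, a 24-bit frame 321 pixels wide occupies 963 bytes of pixel data per row but has a stride of 964 bytes.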

void CtwoCamCaptureBmpDlg::OnBnClickedCamval1()
{
    CCamValue1 dlg;
    if (dlg.DoModal() != IDOK)
    {
        return;
    }
}

void CtwoCamCaptureBmpDlg::OnBnClickedCamval2()
{
    CCamValue2 dlg;
    if (dlg.DoModal() != IDOK)
    {
        return;
    }
}

void CtwoCamCaptureBmpDlg::OnBnClickedProcessbmp()
{
    // Step 1: process the two images ---------------------------
    // For cam 1:
    binocular->processImg1();

    // For cam 2:
    binocular->processImg2();

    // Step 2: show the processed bitmaps ---------------------------
    // For camera 1:
    BITMAPINFOHEADER bih1;
    memset(&bih1, 0, sizeof(BITMAPINFOHEADER));
    bih1.biSize = sizeof(BITMAPINFOHEADER);
    bih1.biWidth = cb1.lWidth;
    bih1.biHeight = cb1.lHeight;
    bih1.biPlanes = 1;
    bih1.biBitCount = 24;

    CWnd* pWndCam1 = GetDlgItem(IDC_WINDOWCAM1);
    CDC* theDCCam1 = pWndCam1->GetDC();
    CRect rectCam1;
    pWndCam1->GetClientRect(&rectCam1);

    StretchDIBits(theDCCam1->m_hDC,
                  rectCam1.left, rectCam1.top,
                  rectCam1.right - rectCam1.left, rectCam1.bottom - rectCam1.top,
                  0, 0, bih1.biWidth, bih1.biHeight,
                  cb1.pBuffer, (LPBITMAPINFO)&bih1, DIB_RGB_COLORS, SRCCOPY);

    // For camera 2:
    BITMAPINFOHEADER bih2;
    memset(&bih2, 0, sizeof(BITMAPINFOHEADER));
    bih2.biSize = sizeof(BITMAPINFOHEADER);
    bih2.biWidth = cb2.lWidth;
    bih2.biHeight = cb2.lHeight;
    bih2.biPlanes = 1;
    bih2.biBitCount = 24;

    CWnd* pWndCam2 = GetDlgItem(IDC_WINDOWCAM2);
    CDC* theDCCam2 = pWndCam2->GetDC();
    CRect rectCam2;
    pWndCam2->GetClientRect(&rectCam2);

    StretchDIBits(theDCCam2->m_hDC,
                  rectCam2.left, rectCam2.top,
                  rectCam2.right - rectCam2.left, rectCam2.bottom - rectCam2.top,
                  0, 0, bih2.biWidth, bih2.biHeight,
                  cb2.pBuffer, (LPBITMAPINFO)&bih2, DIB_RGB_COLORS, SRCCOPY);

    // Step 3: run the stereo computation ---------------------------
    binocular->binocularVision();

    // Step 4: show the result ---------------------------
    CString str;
    str.Format("x = %f\ny = %f\nz = %f",
               binocular->getBall().inWorld.x,
               binocular->getBall().inWorld.y,
               binocular->getBall().inWorld.z);
    MessageBox(str);
}
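The handler above ends by reporting the tracked ball's (x, y, z) world coordinates from `binocularVision()`. The source does not show that function, but for an ideal rectified stereo rig the textbook triangulation it would rely on looks like the sketch below. All names and parameters here are illustrative; a real implementation would first calibrate, undistort, and rectify the two cameras, which this article deliberately leaves out of the framework:

```cpp
// For an ideal rectified stereo pair with focal length f (in pixels),
// baseline B (in world units), and principal point (cx, cy), a point
// seen at column xL in the left image and xR in the right image (same
// row y after rectification) has disparity d = xL - xR and:
//   Z = f * B / d,   X = (xL - cx) * Z / f,   Y = (y - cy) * Z / f
struct Point3 { double x, y, z; };

Point3 triangulate(double xL, double xR, double y,
                   double f, double B, double cx, double cy) {
    double d = xL - xR;        // disparity in pixels; must be > 0
    double Z = f * B / d;      // depth grows as disparity shrinks
    double X = (xL - cx) * Z / f;
    double Y = (y - cy) * Z / f;
    return {X, Y, Z};
}
```

For instance, with f = 500 px and B = 0.1 m, a 20-pixel disparity puts the point 2.5 m from the cameras.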

That is most of the code for this binocular vision program framework, and it still needs improvement. The camera parameters are set through two dialogs, but the code inside them is routine grunt work, so I won't include it here. Part of the video-stream capture code in the program comes from the DirectShow samples.
