In television interviews, some interviewees are reluctant to show their faces. In such cases the interviewee may simply turn away from the camera; sometimes, however, the interviewee still faces the lens, and the face is obscured in post-production with a mosaic effect. The mosaic prevents the audience from recognizing the interviewee's true face, so the wish to remain anonymous is respected. As a programmer, have you ever wondered how to implement this effect? This article introduces a simple, easy-to-program method for applying mosaic processing to a local region of a video image.
1. The Principle of Mosaic Processing and Its Implementation
Let us first compare the same video frame before and after mosaic processing, as shown in Figure 1.
Figure 1 Comparison of a portrait face before and after mosaic processing
After the mosaic is applied, you can no longer identify her true face. So how is the mosaic effect achieved? As you know, an image is composed of pixels, and the size of the pixel particles determines how finely the image is rendered (this is why a small television often looks sharper than a large one). If we enlarge the pixels in a designated area, doesn't that produce a mosaic effect? Not so fast; there is a key problem here: for a given display device, the physical pixel size is fixed, so how can a pixel be "enlarged"? There is a way: if several neighboring pixels are all given the same value, isn't that equivalent to enlarging a pixel? But if we simply perform such enlargement inside the specified area, the mosaic will spill beyond the area originally specified (if the user specifies a region of width W and the horizontal magnification ratio of the pixels is RatioX, the processed region would cover W × RatioX pixels). How, then, do we confine the mosaic to exactly the area specified by the user? The author's approach is to sample the pixels within the specified area. As shown in Figure 2, suppose we want to apply mosaic processing to region R1 in the image.
Figure 2 The region requiring mosaic processing
Suppose the pixels of region R1 are arranged as shown in Figure 3:
Figure 3 Pixel arrangement of region R1
Now assume that the horizontal magnification ratio of the pixels is 3 and the vertical magnification ratio is also 3. The pixel value at each position after mosaic processing is then distributed as shown in Figure 4:
Figure 4 Region R1 after mosaic processing
We can see that in region R1 one pixel is sampled every 3 pixels horizontally (P00, P03, P06, P09, P30, P33, P36, P39, P60, P63, P66, P69, and so on are the sample points), and each row of sampled pixels is repeated 3 times vertically (rows 2 and 3 repeat the contents of row 1, rows 5 and 6 repeat row 4, and so on). Each sample pixel is thus expanded into a 3 × 3 macroblock; in other words, each sample pixel is magnified 9 times.
C++ Implementation of Mosaic Processing for a Specified Image Region
    // Pointer to the image frame data
    PBYTE pImage;
    // ... obtain the image data ...

    // Pointer to the first (top) line of the image
    PBYTE pImageTopLine = NULL;
    // Image stride (in bytes)
    long imageStride = 0;

    // If the image data is stored in bottom-up scan order, the top line of
    // the image is the last line of the pImage buffer;
    // if it is stored in top-down scan order, the top line is at pImage itself
    if (m_bIsBottomUp)
    {
        imageStride = -m_nImageStride;
        pImageTopLine = pImage + m_nImageStride * (m_nImageHeight - 1);
    }
    else
    {
        imageStride = m_nImageStride;
        pImageTopLine = pImage;
    }

    // ratioX is the horizontal magnification ratio of the pixels
    // ratioY is the vertical magnification ratio of the pixels
    // maskStride is the width in bytes of the region to be mosaicked
    /* macroWidth and macroHeight are computed as follows:
       RECT m_MaskRect;   // the rectangular region to process (specified by the user)
       int maskWidth  = m_MaskRect.right  - m_MaskRect.left + 1;
       int maskHeight = m_MaskRect.bottom - m_MaskRect.top  + 1;
       macroWidth  = maskWidth  / m_nRatioX;
       macroHeight = maskHeight / m_nRatioY;
    */
    int macroWidth, macroHeight, maskStride, ratioX, ratioY;

    // Mosaic processing:
    // pMaskPixel points to the current pixel,
    // pMaskLine points to the current row,
    // pMaskNextLine points to the next row
    PBYTE pMaskTopLine, pMaskLine, pMaskNextLine, pMaskPixel;

    // pMaskTopLine points to the first line of the region to be mosaicked;
    // m_nPixelBytes is the number of bytes occupied by a single pixel
    pMaskTopLine = pImageTopLine + m_MaskRect.top * imageStride
                 + m_MaskRect.left * m_nPixelBytes;
    macroWidth  = m_nMacroWidth;
    macroHeight = m_nMacroHeight;
    maskStride  = m_nMaskStride;
    ratioX      = m_nRatioX;
    ratioY      = m_nRatioY;
    // Scan the pixels of the specified region and perform the mosaic processing
    pMaskLine = pMaskTopLine;
    for (int i = 0; i < macroHeight; i++)
    {
        // Horizontally, take every ratioX-th pixel as the sample point and
        // copy its value into the (ratioX - 1) pixels that follow it
        pMaskPixel = pMaskLine;
        for (int j = 0; j < macroWidth; j++)
        {
            for (int k = 1; k < ratioX; k++)
            {
                memcpy(pMaskPixel + k * m_nPixelBytes, pMaskPixel, m_nPixelBytes);
            }
            pMaskPixel += ratioX * m_nPixelBytes;
        }
        // Vertically, duplicate the sample row into the (ratioY - 1) rows below it
        pMaskNextLine = pMaskLine + imageStride;
        for (int k = 1; k < ratioY; k++)
        {
            memcpy(pMaskNextLine, pMaskLine, maskStride);
            pMaskNextLine += imageStride;
        }
        pMaskLine += ratioY * imageStride;
    }

2. Component Development and Demonstration

With the mosaic processing algorithm in hand, the next question is: how do we obtain continuous frames of video image data? Here we can use GraphEdit, a tool that ships with the DirectX SDK (bin/dxutils/graphedt.exe under the SDK directory). Run GraphEdit, as shown in Figure 5:

Figure 5 The GraphEdit tool

Execute the menu command File | Render Media File..., and select a multimedia file in the dialog box that follows (for example, the MPEG-2 file mp2_sales.mpg). GraphEdit automatically builds the link shown in Figure 6:

Figure 6 The playback link built with GraphEdit

Then execute the menu command Graph | Play to play the mp2_sales.mpg file; likewise, Graph | Pause and Graph | Stop pause or stop playback. It is worth noting that GraphEdit plays the mp2_sales.mpg file using DirectShow technology! As everyone knows, DirectX is a set of programming interfaces that Microsoft provides for high-performance graphics, sound, input, output, and online gaming on the Windows platform; DirectShow, a member of the DirectX family, is used mainly for audio/video capture and multimedia file playback. The most basic functional module in DirectShow is called a filter (each rectangular block in Figure 6 represents a filter). Each filter has at least one pin, used to receive or output data, and each filter performs some specific function. In Figure 6, the first filter on the left is the file source, and the MPEG-2 Splitter is responsible for separating the audio and video in the MPEG-2 data stream.
The CyberLink Audio Decoder decodes the MPEG-format audio data, the HQ MPEG-2 Video Decoder decodes the MPEG-format video data, the Default DirectSound Device plays back the audio, and the Video Renderer displays the video. The filters are connected in series in a particular order, and data flows along the arrows from filter to filter until it reaches the Default DirectSound Device and the Video Renderer.

DirectShow is a modular, open application framework. We can develop our own filter component and insert it at some location in the filter chain to get an opportunity to process the data stream. For the video mosaic processing this article sets out to implement, we can realize the mosaic algorithm inside a filter and connect that filter after the video decoder filter, where it receives continuous, uncompressed image frame data. We name this filter "HQ Video Mosaic". Because the filter modifies the data of the input image frames in place, it can use the transform-in-place model; the filter accepts data input in 16-bit, 24-bit, and 32-bit RGB formats. When the development of HQ Video Mosaic is complete, it produces the file HQMosaic.ax (assume it is placed under C:\); the filter is then registered with the system's Regsvr32.exe (by executing the command line Regsvr32 C:\HQMosaic.ax). (Note: A more detailed introduction to DirectShow filter development is beyond the scope of this article; interested readers can refer to the author's two books, "DirectShow Development Guide" and "DirectShow Practices". The source code of the HQ Video Mosaic filter can be downloaded from http://hqtech.nese.net.)

Once the filter component has been developed and successfully registered, it can be used in GraphEdit. First, build the filter link of Figure 6 as before.
Then execute the menu command Graph | Insert Filters..., open the "DirectShow Filters" category in the dialog box that pops up, find "HQ Video Mosaic", and double-click it to add it to the graph. Next, disconnect the HQ MPEG-2 Video Decoder from the Video Renderer (click the arrow between the two filters to select it and press the Delete key). Then connect the HQ MPEG-2 Video Decoder to HQ Video Mosaic, and connect HQ Video Mosaic to the Video Renderer. (To connect two filters: press and hold the left mouse button on the output pin of the upstream filter, drag the mouse to the input pin of the downstream filter, and release the button.) The final filter link is shown in Figure 7:

Figure 7 The filter link in GraphEdit with HQ Video Mosaic inserted

Now execute the menu command Graph | Play, and we can see a demonstration of local-area video mosaic processing similar to Figure 1. In addition, through the property page of the HQ Video Mosaic filter (to open the property page: select HQ Video Mosaic with the mouse, right-click, and execute the menu item "Filter Properties..."), we can dynamically update the region that needs mosaic processing as well as the horizontal/vertical magnification ratios of the pixels.

Figure 8 The property page of the HQ Video Mosaic filter

3. Summary

This article has described the principle of video mosaic processing and presented a C++ implementation of the algorithm. It then used DirectShow to demonstrate the effect of mosaic processing on a local region of video.