VC Digital Image Processing Programming Lecture
2003-10-30, Liu Tao (authorized reprint)

Foreword

Digital image processing technology and theory form an important field of computer applications, and many engineering applications involve image processing. I have long had a strong desire to write a systematic lecture series on digital image processing, and because work has been very busy, only today has it been realized. The "picture" is the distribution of light transmitted or reflected by an object; the "image" is the impression or understanding formed in the brain when the human visual system receives that picture. An image is the combination of the two. Humans acquire external information through hearing, vision, touch, smell, taste and so on, but the majority (about 80%) is image information received through vision. Image processing transforms image information to meet the needs of human visual psychology and of practical applications. Simply put, any computer-based processing of images, for whatever purpose, is what we call digital image processing. The purpose of early digital image processing was to improve the quality of images in order to satisfy human visual perception: the input is an image of poor quality, the output is an improved image, and the commonly used methods include image enhancement, restoration and so on. With the development of computer technology, a class of image processing has emerged whose object is the image content itself and whose purpose is to recognize targets automatically; this is called image recognition. Because it involves some complex pattern-recognition theory, our subsequent lectures discuss only the most basic content. Since practical applications of digital image processing frequently involve algorithms, and this is also of interest to many programming enthusiasts, this lecture series discusses how to use Microsoft's Visual C++ development tools to implement common digital image processing algorithms, presenting the theory of image processing and giving the source code of the VC implementation. The series is divided into basic, intermediate and advanced parts, and its main contents include:
1. Image file formats;
2. The basis of image programming: operating the palette;
3. Reading, storing and displaying image data, and how to obtain the size of an image;
4. Using images to beautify the user interface;
5. Basic operations on images: translation, rotation, mirroring, scaling, and clipboard operations;
6. Various special effects for displaying images;
7. Basic processing of images: binarization, brightness and contrast adjustment, edge enhancement, obtaining the image histogram and histogram equalization, smoothing and sharpening, conversion of color images to black and white, searching for object edges, and so on;
8. Processing of binary images: erosion, dilation, thinning, distance transform, etc.;
9. Image analysis: recognition of lines, circles and specific objects;
10. File formats such as JPEG, GIF and PCX;
11. Conversion between image file formats;
12. Common image transforms: Fourier transform, DCT transform, Walsh transform, etc.;
13. AVI video streams.
Image processing technology is profound: it requires not only solid mathematical skills but also mastery of a computer language. Among the currently popular languages, I personally regard Visual C++ as the preferred tool of the image developer. This lecture series is merely a brick thrown out to attract jade; I hope to exchange ideas with the readers.
Section 1: Image File Formats

To process a digital image with a computer, we must first have a clear understanding of image file formats. Previously, image signals captured by a camera (CCD) were generally digitized by an image acquisition card, whose output is usually in raw bitmap form; if the user wants to generate an image file of a target format, it must be produced according to that file format. With the development of technology, digital cameras and digital camcorders have entered ordinary homes, and we can use these devices as the input devices of an image processing system, providing the information source for subsequent image processing. Whatever the device, it always delivers information in some image file format, commonly BMP, JPEG, GIF and so on; therefore, before processing images, we must first understand these formats clearly, and only on this basis can further development and processing proceed.

Before discussing image file formats, let us classify images briefly. Except for the simplest ones, all images have color. Monochrome images are the simplest of the images with color: they typically consist of black regions and white regions, and a single bit can represent one pixel, with "1" indicating black and "0" indicating white (of course, this can also be reversed); such an image is called a binary image. We can also use 8 bits (one byte) to represent a pixel, which is equivalent to dividing the range from black to white into 256 levels: "0" is black and "255" is white, and the value of the byte indicates the grayscale (luminance) of the corresponding pixel; the closer the value is to 0, the darker the pixel, and the closer to 255, the brighter. Such an image is generally referred to as a grayscale image. Monochrome images and grayscale images are collectively referred to as black-and-white images. There are also color images, which are more complex. Common color models for representing images are the RGB model, the CMYK model and the HSI model; in general we use only the RGB model, where R corresponds to red, G to green and B to blue. These are collectively called the three primary colors, and their different proportions can be mixed into all kinds of real-world colors. In this case, one pixel of a color image requires a group of three samples, each representing one primary color of the pixel.

Among the existing image file formats we mainly introduce the BMP image file format, in which the image data is uncompressed. Because digital image processing mainly applies the corresponding processing to each pixel of the image, the pixel values of an uncompressed BMP correspond exactly to the digital image actually being processed, so files in this format are best suited for digital processing. Readers should remember that a compressed image cannot be processed digitally as-is: for files such as JPEG and GIF, the image file must first be decompressed, which involves some complicated compression algorithms. In a subsequent chapter we will address converting those special file formats into the BMP format; after conversion, we can use the uncompressed BMP file format for subsequent processing.
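Before moving on, the three pixel representations described above can be made concrete in code. The following is a minimal sketch, not from the lecture itself; the helper names are illustrative only, and the byte order B, G, R for true-color pixels follows the BMP convention explained below.

#include <cstdint>

// Binary image: 1 bit per pixel, 8 pixels packed into one byte.
inline bool GetBinaryPixel(const uint8_t* row, int x)
{
    return (row[x / 8] >> (7 - x % 8)) & 1;   // 1 = black by the convention above (may be reversed)
}

// Grayscale image: 1 byte per pixel, 0 = black ... 255 = white.
inline uint8_t GetGrayPixel(const uint8_t* row, int x)
{
    return row[x];
}

// True-color image: 3 bytes per pixel; in a BMP they are stored as B, G, R.
struct RGBPixel { uint8_t b, g, r; };
inline RGBPixel GetColorPixel(const uint8_t* row, int x)
{
    return RGBPixel{ row[3 * x], row[3 * x + 1], row[3 * x + 2] };
}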
As for JPEG, GIF and the like, their compression algorithms require the reader to master some information theory; a full treatment could fill a book, so for reasons of space we give only a general account, and interested readers can consult the relevant literature.

I. BMP file structure
1. Composition of a BMP file
A BMP file consists of four parts: the file header, the bitmap information header, the color information (color table), and the bitmap data.
The file header mainly contains information such as the file type, the file size, and the offset of the bitmap data from the file header. The bitmap information header contains the size of the image, the number of bits used to represent one pixel, whether the image is compressed, the number of colors used, and so on. The color information contains the color table used by the image; when displaying the image, this table is needed to generate the palette. If the image is true color, that is, each pixel of the image is represented by 24 bits, this part is absent and no palette is required. The data block in the file holds the corresponding pixel values of the image. It should be noted that the pixel values are stored left to right within a line but bottom to top across lines: the first row stored in the BMP file is the last (bottom) row of the image, and the last row stored is the first (top) row, while the pixels of the same row are stored in left-to-right order. Another detail the reader should note: when storing a scan line of pixel values, if its length in bytes is a multiple of 4 it is stored as-is; otherwise zeros are appended at the end to pad it to a multiple of 4.

2. BMP file header
The BMP file header contains information such as the type of the BMP file, the file size and the starting position of the bitmap data. Its structure is defined as follows:

typedef struct tagBITMAPFILEHEADER {
    WORD  bfType;       // bitmap file type, must be "BM"
    DWORD bfSize;       // bitmap file size, in bytes
    WORD  bfReserved1;  // reserved word, must be 0
    WORD  bfReserved2;  // reserved word, must be 0
    DWORD bfOffBits;    // starting position of the bitmap data, in bytes from the file header
} BITMAPFILEHEADER;

This structure occupies 14 bytes.

3. Bitmap information header
The BMP bitmap information header describes the dimensions and format of the bitmap. Its structure is as follows:

typedef struct tagBITMAPINFOHEADER {
    DWORD biSize;          // number of bytes occupied by this structure
    LONG  biWidth;         // bitmap width, in pixels
    LONG  biHeight;        // bitmap height, in pixels
    WORD  biPlanes;        // number of planes for the target device, must be 1
    WORD  biBitCount;      // bits per pixel: must be 1 (two-color), 4 (16 colors), 8 (256 colors) or 24 (true color)
    DWORD biCompression;   // bitmap compression type: must be 0 (uncompressed), 1 (BI_RLE8) or 2 (BI_RLE4)
    DWORD biSizeImage;     // size of the bitmap data, in bytes
    LONG  biXPelsPerMeter; // horizontal resolution, in pixels per meter
    LONG  biYPelsPerMeter; // vertical resolution, in pixels per meter
    DWORD biClrUsed;       // number of colors in the color table actually used by the bitmap
    DWORD biClrImportant;  // number of colors important for displaying the bitmap
} BITMAPINFOHEADER;

This structure occupies 40 bytes. Note: in the BMP file format, monochrome images and true-color images are never compressed, no matter how large the image data is; in general, if a bitmap is compressed, a 16-color image uses the RLE4 compression algorithm and a 256-color image uses the RLE8 compression algorithm.

4. Color table
The color table explains the colors in the bitmap. It has a number of entries, each of which is an RGBQUAD structure defining one color.
The RGBQUAD structure is defined as follows:

typedef struct tagRGBQUAD {
    BYTE rgbBlue;     // blue component (value range 0-255)
    BYTE rgbGreen;    // green component (value range 0-255)
    BYTE rgbRed;      // red component (value range 0-255)
    BYTE rgbReserved; // reserved, must be 0
} RGBQUAD;

The number of RGBQUAD entries in the color table is determined by the biBitCount field of BITMAPINFOHEADER: when biBitCount = 1, 4 or 8 there are 2, 16 or 256 color entries respectively; when biBitCount = 24 the image is true color, the color of each pixel is represented directly by three bytes corresponding to its R, G and B values, and the image file has no color table. The bitmap information header and the color table together form the bitmap information; the BITMAPINFO structure is defined as follows:

typedef struct tagBITMAPINFO {
    BITMAPINFOHEADER bmiHeader; // bitmap information header
    RGBQUAD bmiColors[1];       // color table
} BITMAPINFO;

Note: the RGBQUAD structure includes a reserved field, rgbReserved, which does not represent any color and must take the fixed value 0. Also note that the order of the color values defined in RGBQUAD is blue, green, red, the opposite of the arrangement in most color image files: if the bytes of a pixel's color entry in a bitmap are "00, 00, FF", the point is red, not blue.

5. Bitmap data
The bitmap data records each pixel value of the bitmap, or the index of the corresponding pixel in the color table. The recording order is from left to right within a scan line, and the scan lines run from bottom to top; this format is also referred to as a bottom-up bitmap. There is of course the opposite, the top-down bitmap, in which the scan lines run from top to bottom; bitmaps in this form cannot be compressed. The number of bytes occupied by the pixel values of a bitmap: when biBitCount = 1, 8 pixels occupy 1 byte; when biBitCount = 4, 2 pixels occupy 1 byte; when biBitCount = 8, 1 pixel occupies 1 byte; when biBitCount = 24, one pixel occupies 3 bytes and the image is a true-color image. When the image is not true color, the image file contains a color table and the bitmap data holds index values into it; when it is true color, each pixel is represented by three bytes corresponding to its R, G and B components, and there is no color table in the image file. As stated above, Windows requires the number of bytes occupied by one scan line in the image file to be a multiple of 4 (that is, DWORD-aligned), padded with zeros when it falls short. The size of one scan line is computed as:

DataSizePerLine = ((biWidth * biBitCount + 31) / 32) * 4; // bytes occupied by one scan line

and the size of the (uncompressed) bitmap data is computed as:

DataSize = DataSizePerLine * biHeight.

The above is a description of the BMP file format. Once the structures above are clear, the format can be operated on correctly and read from or written to.
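To make the row-padding rule above concrete, here is a small self-contained sketch; the function name and the example dimensions are illustrative only.

#include <cstdio>

// Bytes occupied by one scan line, rounded up to a multiple of 4.
long BytesPerLine(long biWidth, int biBitCount)
{
    return ((biWidth * biBitCount + 31) / 32) * 4;
}

int main()
{
    // Example: a 101 x 50 8-bit grayscale bitmap.
    long perLine  = BytesPerLine(101, 8);   // 101 bytes of pixels, padded to 104
    long dataSize = perLine * 50;           // size of the whole uncompressed pixel array
    printf("bytes per line = %ld, image data = %ld\n", perLine, dataSize);
    return 0;
}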
II. The GIF image file format
GIF is the abbreviation of Graphics Interchange Format; as the name suggests, this image format was designed mainly for transferring images over networks. GIF does not support 24-bit true-color images; it supports images of at most 256 colors, or grayscale images. GIF files cannot store image data in the CMY or HSI models. In addition, the various data areas of a GIF image file generally have no fixed length or storage order, so to make it easy for programs to locate a data area, the first byte of each data area serves as a flag. Finally, the reader should note that GIF stores image data in one of two arrangements: sequential or interlaced. The interlaced arrangement is suited to network transmission, allowing the user to obtain the outline of the image before the complete image data has arrived.

The GIF file format is divided into two versions, 87 and 89. A version-87 file mainly has five parts, in order: the file header, the logical screen descriptor block, an optional global color table, the image data blocks, and finally the trailer marking the end of the file, which always takes the fixed value 3BH. The first and second blocks can be described with the GIF image file header structure:

GIFHEADER: {
    DB Signature;       // six bytes: the first three characters must be "GIF",
                        // the last three specify the version, 87a or 89a
    DW ScreenWidth;     // two bytes: width of the image, in pixels
    DW ScreenDepth;     // two bytes: height of the image, in pixels
    DB GlobalFlagByte;  // flags describing the global color table
    DB BackgroundColor; // background color index of the image
    DB AspectRatio;     // aspect ratio of the image
}

The GIF format has both a global palette and local palettes, because GIF allows a single file to store multiple images: the global palette applies to all images in the file, while a local palette is valid for only one image. The data area of the format is generally divided into four parts: the image descriptor, the local palette data, the image data obtained by the compression algorithm, and the terminator. GIF89 contains seven parts: the file header, the global palette data, the image data area, and four supplementary (extension) data areas, which are mainly used to prompt how the image should be handled.

III. The JPEG image file
JPEG is the abbreviation of the Joint Photographic Experts Group; as a technology it is mainly used as a standard coding for digital images. JPEG mainly uses lossy compression coding, which is more complicated than the GIF and BMP image file formats and cannot be made clear in a few pages; fortunately, we can convert this format to the BMP format through other means. Readers should know that encoding a JPEG file generally requires the following four steps: color conversion, DCT transform, quantization, and encoding.

The above introduces some commonly used image file formats; the more complex ones such as GIF and JPEG received only a cursory introduction, and we will encounter them again later. In practical applications there are many more image formats not mentioned in this article; readers who need to study further should consult the literature on image formats.
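As a small illustration of the signature field just described, the following standalone sketch (not part of the lecture code) recognizes a GIF file and its version using only standard C I/O; the function name is illustrative.

#include <cstdio>
#include <cstring>

// Returns 87 or 89 for a GIF file, 0 otherwise.
int GifVersion(const char* path)
{
    FILE* fp = fopen(path, "rb");
    if (!fp) return 0;
    char sig[7] = { 0 };
    size_t n = fread(sig, 1, 6, fp);        // the six-byte signature field
    fclose(fp);
    if (n != 6 || memcmp(sig, "GIF", 3) != 0) return 0;  // first three characters must be "GIF"
    if (memcmp(sig + 3, "87a", 3) == 0) return 87;       // version field: "87a" or "89a"
    if (memcmp(sig + 3, "89a", 3) == 0) return 89;
    return 0;
}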
Section 2: Basic Operations on BMP Images

In the previous section we mainly introduced image formats, focusing on the storage format of BMP files and briefly introducing common formats such as JPEG and GIF. This section mainly explains how to operate on BMP files: reading, writing and displaying them. Digital image processing achieves its intended effect mainly by applying various image processing algorithms to each pixel of the image, so the first step of image processing, and the one we care most about, is how to obtain the brightness value of each pixel in the image; and in order to observe and verify the effect of processing, another problem that needs to be resolved is how to display the image correctly before and after processing. This section addresses these problems.

With the development of science and technology, image processing technology has penetrated every field of human life and found more and more applications, but one prominent complication is the growing number of image formats involved; current image processing involves many formats, such as TIF, JPEG, BMP and so on. In general, for simplicity and convenience, digital image processing works with image files in the BMP format (sometimes referred to as the DIB format), uncompressed. From such a file we can obtain the palette information needed for correct display, the size information of the image, and the brightness of each pixel, to which the developer can then apply the various processing algorithms. If images in other formats, such as GIF or JPEG, need to be processed, they can first be converted to the BMP format and then handled. Readers should be clear about this.

Image files in the BMP format can be divided into many types, such as true-color bitmaps, 256-color bitmaps, BMP bitmaps in RLE (run-length encoded) compressed format, and more. Because practical engineering applications and the verification of image algorithms most often handle uncompressed 256-level grayscale BMP images (for example, the images obtained by black-and-white acquisition cards are in this format), the file format processed throughout this lecture series is the grayscale BMP image. If the reader can operate skillfully on this format, operating on the remaining kinds of BMP bitmaps will not be difficult. The grayscale BMP image is one of the main image formats in the Windows environment, popular for its simple format and wide applicability. As we said in the previous section, this file format uses 8 bits per pixel and the displayed image is a black-and-white effect: the grayscale (also called brightness) value of a black pixel is 0, the grayscale value of a white pixel is 255, and the grayscale values of all the pixels of the image are distributed between 0 and 255; the closer a pixel's grayscale value is to 0, the darker it is, and the closer to 255, the brighter. In the file's color table, the R, G and B component values of each color entry are equal, and the number of color entries is 256. When performing image processing, we operate on the pixel values, so the image array must be obtained; the pixel values of the processed image need to be stored; and when the image is displayed, the palette must be realized correctly and the size information of the bitmap obtained.
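The properties just listed can be verified in code. The following is a standalone sketch, not part of the lecture's program, that checks whether a file is the uncompressed 8-bit grayscale BMP this series works with, by testing biBitCount, biCompression and the gray color table; the function name is illustrative and Windows headers are assumed.

#include <windows.h>
#include <cstdio>

bool IsGrayscaleBmp(const char* path)
{
    FILE* fp = fopen(path, "rb");
    if (!fp) return false;
    BITMAPFILEHEADER bfh;
    BITMAPINFOHEADER bih;
    RGBQUAD pal[256];
    bool ok = fread(&bfh, sizeof(bfh), 1, fp) == 1
           && bfh.bfType == 0x4D42                  // "BM"
           && fread(&bih, sizeof(bih), 1, fp) == 1
           && bih.biBitCount == 8                   // 256-level image
           && bih.biCompression == BI_RGB           // uncompressed
           && fread(pal, sizeof(RGBQUAD), 256, fp) == 256;
    if (ok)
        for (int i = 0; i < 256 && ok; i++)         // gray palette: R = G = B
            ok = pal[i].rgbRed == pal[i].rgbGreen && pal[i].rgbGreen == pal[i].rgbBlue;
    fclose(fp);
    return ok;
}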
Around these issues, the main functions for operating on grayscale BMP images are implemented below.

I. Operating the BMP bitmap
First let us review the important points: a BMP bitmap consists of four parts, the bitmap file header structure BITMAPFILEHEADER, the bitmap information header structure BITMAPINFOHEADER, the bitmap color table of RGBQUAD entries, and the bitmap pixel data. When processing a bitmap, we rely on this file structure to obtain the bitmap file size, the width and height of the image, the palette, the pixel value at a given position, and the like.

What should be noted here is that in a BMP bitmap, each line of pixel values is padded to a four-byte boundary, that is, the memory length of each line of the bitmap is a multiple of four bytes, filled with zeros when it falls short. With the above knowledge we can start writing the image processing program; as for how to develop programs on the VC platform, the author assumes the reader already has some VC development experience. In developing this image processing program, the author does not use a fully object-oriented approach: although object orientation can encapsulate data and protect the data in a class from outside interference, this improved safety comes at the cost of the program's runtime efficiency. Instead, we make full use of the program's document/view architecture and call some API functions directly to operate on the image. Microsoft's MSDN contains an example called DibLook that demonstrates how to operate on DIB bitmaps; interested readers can refer to it and will surely profit from it.

Start Visual C++ and generate a multi-document program named DIB; set the base class of the CDibView class to CScrollView, so that scroll bars are supported when displaying a bitmap. In addition, declare the following macro and public variables in the document class of the image application (DibDoc.h):

#define WIDTHBYTES(bits) (((bits) + 31) / 32 * 4) // computes the number of bytes occupied by one line of the image
HANDLE m_hDIB;        // handle storing the bitmap data
CPalette* m_palDIB;   // pointer to the palette, a CPalette object
CSize m_sizeDoc;      // initial size of the view, i.e. the size of the bitmap

Finally, modify the program's document-template string resource IDR_DIBTYPE to: "\nDIB\nDIB\nDIB Files (*.bmp;*.dib)\n.bmp\nDIB.Document\nDIB Document". The purpose of this is to let the program's File dialogs select bitmap files in BMP or DIB format.

1. Reading the grayscale BMP bitmap
According to the structure of the BMP bitmap file, we operate on the BMP file and read its image data. To make full use of VC's document/view architecture, we overload the document class's OnOpenDocument() function, so that when the user selects a bitmap file to open in the program's automatically generated Open File dialog, the program automatically calls this function to carry out the read operation.
The implementation code of this function is as follows:

BOOL CDibDoc::OnOpenDocument(LPCTSTR lpszPathName)
{
    CFile file;
    CFileException fe;
    if (!file.Open(lpszPathName, CFile::modeRead | CFile::shareDenyWrite, &fe))
    {   // open the file for reading
        AfxMessageBox("Image file cannot be opened!");
        return FALSE;
    }
    DeleteContents();               // clear the document
    BeginWaitCursor();

    BITMAPFILEHEADER bmfHeader;     // bitmap file header structure
    LPBITMAPINFO lpbmi;
    DWORD dwBitsSize;
    HANDLE hDIB;
    LPSTR pDIB;                     // pointer to the bitmap data
    BITMAPINFOHEADER* bmhdr;        // pointer to the bitmap information header

    dwBitsSize = file.GetLength();
    if (file.Read((LPSTR)&bmfHeader, sizeof(bmfHeader)) != sizeof(bmfHeader))
        return FALSE;               // read the file header structure of the bitmap file
    if (bmfHeader.bfType != 0x4d42) // check whether the file is in BMP format ("BM")
        return FALSE;
    hDIB = (HANDLE)::GlobalAlloc(GMEM_MOVEABLE | GMEM_ZEROINIT, dwBitsSize);
                                    // allocate a buffer for the image file data
    if (hDIB == 0)
        return FALSE;
    pDIB = (LPSTR)::GlobalLock((HGLOBAL)hDIB);  // get a pointer to the allocated buffer
    if (file.ReadHuge(pDIB, dwBitsSize - sizeof(BITMAPFILEHEADER))
            != dwBitsSize - sizeof(BITMAPFILEHEADER))
    {
        ::GlobalUnlock((HGLOBAL)hDIB);
        hDIB = NULL;
        return FALSE;
    }
    // the data read into pDIB contains the bitmap information header,
    // the bitmap color table and the pixel grayscale values
    bmhdr = (BITMAPINFOHEADER*)pDIB;    // point at the bitmap information header structure
    ::GlobalUnlock((HGLOBAL)hDIB);
    if (bmhdr->biBitCount != 8)         // verify that this is an 8-bit bitmap
    {
        AfxMessageBox("This file is not in grayscale bitmap format!");
        return FALSE;
    }
    m_hDIB = hDIB;                      // hand the local handle over to the member variable
    // record the size of the bitmap
    m_sizeDoc.cx = bmhdr->biWidth;
    m_sizeDoc.cy = bmhdr->biHeight;
    // generate the palette from the color table
    m_palDIB = new CPalette;
    LOGPALETTE* pPal = (LOGPALETTE*)new BYTE[sizeof(LOGPALETTE)
                                             + 255 * sizeof(PALETTEENTRY)];
        // a LOGPALETTE with 256 entries must be allocated with room for them
    pPal->palVersion = 0x300;           // fill in the logical color table
    pPal->palNumEntries = 256;
    lpbmi = (LPBITMAPINFO)bmhdr;
    for (int i = 0; i < 256; i++)
    {   // the R, G and B values of each color entry are equal,
        // increasing from 0 to 255 in sequence
        pPal->palPalEntry[i].peRed   = lpbmi->bmiColors[i].rgbRed;
        pPal->palPalEntry[i].peGreen = lpbmi->bmiColors[i].rgbGreen;
        pPal->palPalEntry[i].peBlue  = lpbmi->bmiColors[i].rgbBlue;
        pPal->palPalEntry[i].peFlags = 0;
    }
    m_palDIB->CreatePalette(pPal);      // create the palette from the data just read
    if (pPal)
        delete[] (BYTE*)pPal;
    EndWaitCursor();
    SetPathName(lpszPathName);          // set the storage path
    SetModifiedFlag(FALSE);             // set the file-modified flag to FALSE
    return TRUE;
}

The above method reads the bitmap file through a CFile object; it has to analyze the file header information of the bitmap to determine the length of the image data to read. This is relatively cumbersome. The bitmap data can also be read in a simpler way: first define a resource of type "DIB" in the program's resources, then add bitmaps under this type and read the image data in resource form; the image can then be displayed according to the bitmap information structure within the acquired data. The following function loads image file data in resource form:

HANDLE LoadDIB(UINT uIDS, LPCSTR lpszDIBType)
{
    LPSTR lpszDIBRes = MAKEINTRESOURCE(uIDS);   // determine the resource name from its ID
    HINSTANCE hInst = AfxGetInstanceHandle();   // get the application's instance handle
    HRSRC hRes = ::FindResource(hInst, lpszDIBRes, lpszDIBType);
        // get a handle to the resource; lpszDIBType is the resource type name "DIB"
    if (hRes == NULL)
        return NULL;
    HGLOBAL hData = ::LoadResource(hInst, hRes); // load the resource data and return its handle
    return hData;
}

2. Storing grayscale bitmap data
To save the processed image values, we overload the document class's OnSaveDocument() function, so that when the user clicks the Save or Save As menu item, the program automatically calls this function to store the image data.
The specific implementation of this function is as follows:

BOOL CDibDoc::OnSaveDocument(LPCTSTR lpszPathName)
{
    CFile file;
    CFileException fe;
    BITMAPFILEHEADER bmfHdr;    // bitmap file header structure
    LPBITMAPINFOHEADER lpBI;    // pointer to the bitmap information header structure
    DWORD dwDIBSize;
    if (!file.Open(lpszPathName, CFile::modeCreate | CFile::modeReadWrite
                   | CFile::shareExclusive, &fe))
    {
        AfxMessageBox("File cannot be opened");
        return FALSE;
    }                           // open the file for writing
    BeginWaitCursor();
    lpBI = (LPBITMAPINFOHEADER)::GlobalLock((HGLOBAL)m_hDIB);
    if (lpBI == NULL)
        return FALSE;
    dwDIBSize = *(LPDWORD)lpBI + 256 * sizeof(RGBQUAD);
        // bytes occupied by the image's information header and color table
    DWORD dwBmBitsSize;         // bytes occupied by the bitmap pixels in the BMP file
    dwBmBitsSize = WIDTHBYTES((lpBI->biWidth) * (DWORD)lpBI->biBitCount)
                   * lpBI->biHeight;    // total bytes of all pixels, with line padding
    dwDIBSize += dwBmBitsSize;  // total bytes of all data except the file header structure
    lpBI->biSizeImage = dwBmBitsSize;   // total bytes of all pixels of the bitmap

    // fill in the file header structure
    bmfHdr.bfType = 0x4D42;     // the file type is "BM"
    bmfHdr.bfSize = dwDIBSize + sizeof(BITMAPFILEHEADER);   // total file length
    bmfHdr.bfReserved1 = 0;
    bmfHdr.bfReserved2 = 0;
    bmfHdr.bfOffBits = (DWORD)sizeof(BITMAPFILEHEADER) + lpBI->biSize
                       + 256 * sizeof(RGBQUAD);
        // offset of the bitmap data from the file header
    file.Write((LPSTR)&bmfHdr, sizeof(BITMAPFILEHEADER));   // write the file header to the file
    file.WriteHuge(lpBI, dwDIBSize);
        // write the bitmap information (information header, color table and pixel data)
    ::GlobalUnlock((HGLOBAL)m_hDIB);
    EndWaitCursor();
    SetModifiedFlag(FALSE);     // set the document's "clean" flag so no save prompt is needed
    return TRUE;
}

II. Operating the palette
Through the above operations we can obtain the data in the image; the remaining question is how to display the image data in the window. To display a grayscale image correctly, the logical palette and the system palette must be handled properly.
First we introduce the logical palette structure LOGPALETTE, which is defined as follows:

typedef struct tagLOGPALETTE {
    WORD palVersion;              // version of the palette; should be given the value 0x300
    WORD palNumEntries;           // number of palette entries; for grayscale images this value is 256
    PALETTEENTRY palPalEntry[1];  // the number of color entries in a palette is not fixed,
                                  // so the array length is declared as 1; for our grayscale
                                  // images the actual length is 256
} LOGPALETTE;

The color entry structure PALETTEENTRY defines the color and usage of each color entry in the palette, and is defined as follows:

typedef struct tagPALETTEENTRY {
    BYTE peRed;    // R component value
    BYTE peGreen;  // G component value
    BYTE peBlue;   // B component value
    BYTE peFlags;  // how this color is used; 0 in the general case
} PALETTEENTRY;

The Windows system uses a palette manager to manage operations related to the palette. Normally the palette of the active window is the current system palette, and all inactive windows must display their colors using this system palette; the palette manager automatically maps their display colors to the nearest colors in the system palette. If a window or application wants to display with its own palette, it must load its own palette into the system palette; this operation is called realizing the palette, and it consists of two steps: first select the palette into the device context, then realize it in the device context. These two steps can be carried out with CDC::SelectPalette() and CDC::RealizePalette(), or with the corresponding API functions.

To realize the palette, we handle the Windows-defined messages WM_QUERYNEWPALETTE and WM_PALETTECHANGED in the main frame class, plus a custom message WM_DOREALIZE in the view class (this message is defined in the main frame window as follows: #define WM_DOREALIZE (WM_USER + 101)). The system sends WM_QUERYNEWPALETTE when a window is about to be activated and receive the input focus, giving it a chance to realize its own logical palette; when the system palette changes, the main frame window receives a WM_PALETTECHANGED message informing it that the system palette has changed, at which point each window should realize its logical palette and redraw its client area. Since these palette-change messages are sent to the main frame window, we can only respond to them in the main frame, and then have the main frame notify each view window, so that the program can load its own palette automatically. Our user-defined message WM_DOREALIZE is how the main frame notifies a view that a palette change message has arrived and that the view should realize its own palette.
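Before the full message handlers below, here is a condensed sketch of the two realization steps in isolation. It assumes an MFC build (afxwin.h) and that pPal was created from the 256-entry gray color table as shown elsewhere in this section; the function name is illustrative.

#include <afxwin.h>

void DrawWithPalette(CDC* pDC, CPalette* pPal)
{
    CPalette* pOldPal = pDC->SelectPalette(pPal, FALSE); // step 1: select the logical palette into the DC
    pDC->RealizePalette();                               // step 2: realize it into the system palette
    // ... StretchDIBits(...) or other drawing code goes here ...
    pDC->SelectPalette(pOldPal, TRUE);                   // restore the old palette as a background palette
}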
Below we give the specific implementation of the handler for each message, with notes:

void CMainFrame::OnPaletteChanged(CWnd* pFocusWnd)
{
    // realize the palette of the active view
    CMDIFrameWnd::OnPaletteChanged(pFocusWnd);
    CMDIChildWnd* pMDIChildWnd = MDIGetActive();    // get a pointer to the active child window
    if (pMDIChildWnd == NULL)
        return;
    CView* pView = pMDIChildWnd->GetActiveView();   // get a pointer to the view
    ASSERT(pView != NULL);
    SendMessageToDescendants(WM_DOREALIZE, (WPARAM)pView->m_hWnd);
        // notify all child windows that the system palette has changed
}

BOOL CMainFrame::OnQueryNewPalette()    // provides an opportunity to realize the system palette
{
    // realize the active view's palette
    CMDIChildWnd* pMDIChildWnd = MDIGetActive();    // get a pointer to the active child window
    if (pMDIChildWnd == NULL)
        return FALSE;                   // no active MDI child frame (no new palette)
    CView* pView = pMDIChildWnd->GetActiveView();   // get the view pointer of the active child window
    ASSERT(pView != NULL);
    pView->SendMessage(WM_DOREALIZE, (WPARAM)pView->m_hWnd);
        // notify the active view to realize the system palette
    return TRUE;
}

BOOL CDibView::OnDoRealize(WPARAM wParam, LPARAM)   // realize the system palette
{
    ASSERT(wParam != NULL);
    CDibDoc* pDoc = GetDocument();
    if (pDoc->m_hDIB == NULL)
        return FALSE;                   // must be a new document
    CPalette* pPal = pDoc->m_palDIB;
        // the palette was created from the color table when the document was loaded
    if (pPal != NULL)
    {
        CMainFrame* pAppFrame = (CMainFrame*)AfxGetApp()->m_pMainWnd;
            // get a pointer to the program's main frame
        ASSERT_KINDOF(CMainFrame, pAppFrame);
        CClientDC appDC(pAppFrame);     // device context of the main frame
        CPalette* oldPalette = appDC.SelectPalette(pPal, ((HWND)wParam) != m_hWnd);
            // only the active view may pass FALSE here, i.e. select its palette
            // as the "foreground" palette; the others get background priority
        if (oldPalette != NULL)
        {
            UINT nColorsChanged = appDC.RealizePalette();   // realize the system palette
            if (nColorsChanged > 0)
                pDoc->UpdateAllViews(NULL);                 // update the views
            appDC.SelectPalette(oldPalette, TRUE);
                // restore the original system palette as a background palette
        }
        else
        {
            TRACE0("\tSelectPalette failed in CDibView::OnDoRealize\n");
        }
    }
    return TRUE;
}

Note: when calling the API functions to display the bitmap, do not forget to first select and realize the logical palette, otherwise the bitmap will not display correctly; the reader can see how the logical palette is used in the implementation of the display code below. The processing above is relatively complicated and may be rather difficult for beginners to understand, so if our program is limited to processing grayscale images, another relatively simple approach can be used: define a grayscale palette in the document class's initialization stage, and then realize it in the device context. The benefit of this is that the color table information in the file need not be read again each time a grayscale bitmap is loaded, which improves file reading speed; the author used this method when developing a machine-vision project, with quite satisfactory results. First define a pointer pPal to the logical color table structure LOGPALETTE, fill it in, and then create the palette from it. The specific implementation of this method is as follows:

CDibDoc::CDibDoc()
{
    ..........
    LOGPALETTE* pPal;
    pPal = (LOGPALETTE*)new BYTE[sizeof(LOGPALETTE) + 255 * sizeof(PALETTEENTRY)];
        // allocate the logical palette with room for 256 entries
    m_palDIB = new CPalette;
    pPal->palVersion = 0x300;
    pPal->palNumEntries = 256;
    for (int i = 0; i < 256; i++)
    {   // the R, G and B values of each color entry are equal,
        // increasing from 0 to 255 in sequence
        pPal->palPalEntry[i].peRed   = i;
        pPal->palPalEntry[i].peGreen = i;
        pPal->palPalEntry[i].peBlue  = i;
        pPal->palPalEntry[i].peFlags = 0;
    }
    m_palDIB->CreatePalette(pPal);
    delete[] (BYTE*)pPal;
    ..........
}

III. Displaying the image
DIB bitmap data can be displayed through the device context member functions CDC::BitBlt() or CDC::StretchBlt(), or through the API functions SetDIBitsToDevice() or StretchDIBits(); the meaning of each function's parameters can be found in MSDN. StretchDIBits() and CDC::StretchBlt() can enlarge or reduce the display. When a bitmap file is loaded into the document, the CDibView class's OnInitialUpdate() function is called, so the view size can be set in this function to fit the bitmap being displayed; the bitmap can then be displayed correctly in the view class's OnDraw() function.
The specific implementation of these two functions is as follows:

void CDibView::OnInitialUpdate()
{
    CScrollView::OnInitialUpdate();
    CDibDoc* pDoc = GetDocument();
    if (pDoc->m_hDIB == NULL)
    {   // if the bitmap data is empty, set a default size for m_sizeDoc
        pDoc->m_sizeDoc.cx = pDoc->m_sizeDoc.cy = 100;
    }
    SetScrollSizes(MM_TEXT, pDoc->m_sizeDoc);
}

void CDibView::OnDraw(CDC* pDC)
{
    BITMAPINFOHEADER* lpDIBHdr;     // pointer to the bitmap information header structure
    BYTE* lpDIBBits;                // pointer to the bitmap pixel values
    BOOL bSuccess = FALSE;
    CPalette* oldPal = NULL;        // palette pointer
    HDC hDC = pDC->GetSafeHdc();    // get the handle of the current device context
    CDibDoc* pDoc = GetDocument();  // get a pointer to the active document
    if (pDoc->m_hDIB == NULL)
    {   // check whether the image data is empty
        AfxMessageBox("The image data is empty; read the image data first!");
        return;
    }
    lpDIBHdr = (BITMAPINFOHEADER*)::GlobalLock(pDoc->m_hDIB);   // get the image header information
    lpDIBBits = (BYTE*)lpDIBHdr + sizeof(BITMAPINFOHEADER) + 256 * sizeof(RGBQUAD);
        // get a pointer to the buffer holding the image pixel values
    if (pDoc->m_palDIB)
    {   // if there is palette information, realize the logical palette
        oldPal = pDC->SelectPalette(pDoc->m_palDIB, TRUE);
        pDC->RealizePalette();
    }
    else
    {
        AfxMessageBox("The image's palette data is empty; read the palette information first!");
        return;
    }
    ::SetStretchBltMode(hDC, COLORONCOLOR);
    // display the image
    bSuccess = ::StretchDIBits(hDC,
                    0, 0, pDoc->m_sizeDoc.cx, pDoc->m_sizeDoc.cy,   // destination rectangle
                    0, 0, pDoc->m_sizeDoc.cx, pDoc->m_sizeDoc.cy,   // source rectangle
                    lpDIBBits, (LPBITMAPINFO)lpDIBHdr,
                    DIB_RGB_COLORS, SRCCOPY);
    ::GlobalUnlock(pDoc->m_hDIB);
    if (oldPal)     // restore the palette
        pDC->SelectPalette(oldPal, FALSE);
    return;
}

IV. Summary
In this installment we mainly introduced how to operate on grayscale bitmaps. This material is strongly representative and prepares the way for the subsequent image-processing programming; with it mastered, learning how to operate on the other types of BMP images should come easily.
Section 3: Special-Effect Display of BMP Images

In the previous sections we covered accessing BMP image data, displaying the image and operating the palette. Building on that study, we can now go a step further and learn and master the techniques of special-effect image display. With these techniques you can beautify your software's interface and improve its visual effect in future project development. In today's commercial software almost every image display uses them: the Windows screen savers familiar to readers, for example, use all kinds of special-effect displays that dazzle and delight, and professional image processing software such as Photoshop and Authorware offers users rich display methods that can easily be applied in programs. This section mainly describes how to implement the relief, engraving, venetian-blind, rotation, scanning, grid, mosaic and fade effects; after studying it, readers should be able to produce software with impressive display effects.

For displaying images we have already covered functions such as BitBlt(), SetDIBitsToDevice() and StretchDIBits(). Readers should note that not all of them suit special-effect display: the BitBlt() function is mainly used to display device-dependent bitmaps (DDBs), while the latter two functions display device-independent bitmaps (DIBs). Since this lecture series deals with device-independent bitmaps, we mainly care about the latter two functions, and of those, SetDIBitsToDevice() is rather rigid, far less flexible, and powerless for most special effects; so to implement image effects we need the StretchDIBits() function to display the image. As for the reason, I think it may lie in how Microsoft implemented these functions. How these functions are used and what each parameter means can be found in Microsoft's MSDN.

The basic idea of special-effect display is either to operate on the pixels of the image, or to display or erase image blocks in a certain direction or order. For the second kind of display, the key points are: 1. divide the image into blocks; 2. determine the operation order of the blocks; 3. display or erase the corresponding blocks; 4. insert a fixed delay between two consecutively displayed blocks. The division of the image into blocks determines the display style, and the display order of the blocks determines the direction and level of detail of the display. Different effects call for different ways of dividing the blocks and different display orders, and we will introduce how to divide and order the blocks with each effect below. To make the display process of the image visible as an effect, a fixed delay must be inserted between blocks during the display. Readers may think of producing the delay with the Sleep() function or with SetTimer(); but since Windows is a message-based multitasking operating system, the delays these methods produce are not accurate enough for image display. To obtain a more precise, machine-independent delay, you can use the timeGetTime() function, which produces delays at millisecond precision. When using this function, add winmm.lib to the project's link settings and include the header file "mmsystem.h".
Here we first give a delay function, used to implement a fixed time delay:

void DelayTime(DWORD time)
{
    DWORD beginTime, endTime;
    beginTime = timeGetTime();      // get the current system time, in milliseconds
    do
    {
        endTime = timeGetTime();    // get the current system time again
    } while (endTime - beginTime < time);   // loop until the delay time has elapsed
}

I. Effects implemented by operating on the bitmap's pixels
We first describe effects realized by operating on the grayscale values of the pixels in the image, mainly showing how to implement the relief and engraving effects. Friends who often watch television may have noticed that some serials display special-effect images at the start of each episode; for instance, "The Long March" and "Kangxi Dynasty" open with what are called "relief" and "engraved" effect images. These special effects enhance the visual experience of the audience and look as if 3D technology were used, which may be why this technique became popular. In fact, we can implement these seemingly complex and impressive display effects with some simple digital image processing algorithms. Below, using the standard Lena grayscale image, the processed renderings are given along with the core implementation source code on the VC development platform.

1. The "relief" image
The "relief" effect makes the image's foreground protrude from its background. The so-called "relief" algorithm takes the difference between each pixel of the image and the pixel diagonally up and to its left; in order to keep the image at a certain brightness and present a gray tone, a constant value of 128 is added during processing.
Readers should note that when setting a pixel's value, its up-left neighbor is used, so to avoid operating on pixels that have already been set, processing should start from the lower right of the image. The source code follows:

void CDibView::OnFDImage()      // generate the "relief" effect image
{
    HANDLE data1handle;         // handle for the temporary image data
    LPBITMAPINFOHEADER lpBi;    // image information header structure
    CDibDoc* pDoc = GetDocument();  // get the document pointer
    HDIB hdib;                  // handle of the image data (HDIB is a HANDLE typedef from the DibLook sample)
    unsigned char* pData;       // pointer to the original image data
    unsigned char* data;        // pointer to the processed image data

    hdib = pDoc->m_hDIB;        // copy the handle of the image file data already read in
    lpBi = (LPBITMAPINFOHEADER)::GlobalLock((HGLOBAL)hdib);    // get the image information header
    pData = (unsigned char*)FindDIBBits((LPSTR)lpBi);
        // FindDIBBits is a function defined by the author that returns a pointer
        // to the bitmap's grayscale data according to the image structure
    pDoc->SetModifiedFlag(TRUE);    // mark the document modified, preparing for later saving
    data1handle = ::GlobalAlloc(GMEM_SHARE,
                        WIDTHBYTES(lpBi->biWidth * 8) * lpBi->biHeight);
        // declare a buffer used to hold the processed image data temporarily
    data = (unsigned char*)::GlobalLock((HGLOBAL)data1handle);  // get a pointer to the buffer
    AfxGetApp()->BeginWaitCursor();
    int i, j, buf;
    for (i = lpBi->biHeight; i >= 2; i--)       // start relief processing from the lower right corner
        for (j = lpBi->biWidth; j >= 2; j--)
        {   // relief: difference with the diagonally adjacent pixel, plus 128
            buf = *(pData + (lpBi->biHeight - i) * WIDTHBYTES(lpBi->biWidth * 8) + j)
                - *(pData + (lpBi->biHeight - i + 1) * WIDTHBYTES(lpBi->biWidth * 8) + j - 1)
                + 128;
            if (buf > 255) buf = 255;
            if (buf < 0)   buf = 0;
            *(data + (lpBi->biHeight - i) * WIDTHBYTES(lpBi->biWidth * 8) + j) = (BYTE)buf;
        }
    for (j = 0; j < lpBi->biHeight; j++)
        for (i = 0; i < lpBi->biWidth; i++)     // copy the result back to the original image buffer
            *(pData + j * WIDTHBYTES(lpBi->biWidth * 8) + i)
                = *(data + j * WIDTHBYTES(lpBi->biWidth * 8) + i);
    AfxGetApp()->EndWaitCursor();
    pDoc->m_hDIB = hdib;        // write the processed image data handle back into the document
    ::GlobalUnlock((HGLOBAL)hdib);          // unlock and release the buffers
    ::GlobalUnlock((HGLOBAL)data1handle);
    ::GlobalFree((HGLOBAL)data1handle);
    Invalidate(TRUE);           // redraw to display the image
}

2. The "engraved" image
In contrast to the "relief" effect described above, the "engraved" image is generated by taking the difference between each pixel and the pixel diagonally down and to its right, plus a constant, for which I again take 128. After doing so, an "engraved" image is obtained, in which the foreground is sunk into the background. Readers should again note that to avoid reusing pixels that have already been processed, the pixels should be processed starting from the corner opposite to the relief case, proceeding from the top left of the image. The code is as follows:

void CDibView::OnDKImage()
{
    // TODO: Add your command handler code here
    // the variables here have the same meaning as in the relief function above
    HANDLE data1handle;
    LPBITMAPINFOHEADER lpBi;
    CDibDoc* pDoc = GetDocument();
    HDIB hdib;
    unsigned char* pData;
    unsigned char* data;

    hdib = pDoc->m_hDIB;        // copy the image data handle
    lpBi = (LPBITMAPINFOHEADER)::GlobalLock((HGLOBAL)hdib);
    pData = (unsigned char*)FindDIBBits((LPSTR)lpBi);
    pDoc->SetModifiedFlag(TRUE);
    data1handle = ::GlobalAlloc(GMEM_SHARE,
                        WIDTHBYTES(lpBi->biWidth * 8) * lpBi->biHeight);    // allocate the working buffer
    data = (unsigned char*)::GlobalLock((HGLOBAL)data1handle);  // get the new buffer pointer
    AfxGetApp()->BeginWaitCursor();
    int i, j, buf;
    for (i = 0; i <= lpBi->biHeight - 2; i++)       // "engrave" every pixel of the image in turn
        for (j = 0; j <= lpBi->biWidth - 2; j++)
        {
            buf = *(pData + (lpBi->biHeight - i) * WIDTHBYTES(lpBi->biWidth * 8) + j)
                - *(pData + (lpBi->biHeight - i - 1) * WIDTHBYTES(lpBi->biWidth * 8) + j + 1)
                + 128;      // engraving: difference with the diagonally lower-right neighbor
            if (buf > 255) buf = 255;
            if (buf < 0)   buf = 0;
            *(data + (lpBi->biHeight - i) * WIDTHBYTES(lpBi->biWidth * 8) + j) = (BYTE)buf;
        }
    for (j = 0; j < lpBi->biHeight; j++)
        for (i = 0; i < lpBi->biWidth; i++)     // write the processed data back to the original image buffer
            *(pData + j * WIDTHBYTES(lpBi->biWidth * 8) + i)
                = *(data + j * WIDTHBYTES(lpBi->biWidth * 8) + i);
    pDoc->m_hDIB = hdib;        // write the processed image data handle back into the document
    ::GlobalUnlock((HGLOBAL)hdib);      // unlock and release the buffers
    ::GlobalUnlock((HGLOBAL)data1handle);
    ::GlobalFree((HGLOBAL)data1handle);
    Invalidate(TRUE);       // redraw to display the image
}

3. Rotating the image
By adjusting the positions of the image's pixels, many display effects can be realized, such as rotation, flipping and the like. Grayscale image rotation follows this idea: it means rotating a given image around its center by a certain angle, counterclockwise or clockwise, typically counterclockwise around the center of the image.
First, from the rotation angle, calculate the maximum width and height spanned by the image's diagonals after rotation, and generate a new buffer according to this maximum width and height. Suppose the upper left corner of the image is (left, top) and the lower right corner is (right, bottom); the image is rotated counterclockwise by angle θ around its center (xCenter, yCenter), and the new position (x1, y1) of a point (x, y) is computed as:

xCenter = (width + 1) / 2 + left;
yCenter = (height + 1) / 2 + top;
x1 = (x - xCenter) * cos θ - (y - yCenter) * sin θ + xCenter;
y1 = (x - xCenter) * sin θ + (y - yCenter) * cos θ + yCenter;

As with the other geometric transforms of the image, the next step is to copy the grayscale value of pixel (x, y) in the original image to the point (x1, y1) in the new buffer. Note that points in the new buffer that receive no corresponding pixel should be filled with white or with a specified grayscale value.
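The following is a simplified sketch of the forward mapping just described. It assumes an 8-bit grayscale image stored one byte per pixel with no row padding; the function name and parameters are illustrative, and the destination buffer's own center is used so the rotated image stays inside the enlarged buffer.

#include <cmath>
#include <cstring>
#include <cstdint>

void RotateGray(const uint8_t* src, int w, int h,
                uint8_t* dst, int dw, int dh, double angle)
{
    std::memset(dst, 255, (size_t)dw * dh);     // unmapped points become white
    const double c = std::cos(angle), s = std::sin(angle);
    const double xc = (w + 1) / 2.0,  yc = (h + 1) / 2.0;
    const double dxc = (dw + 1) / 2.0, dyc = (dh + 1) / 2.0;
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            // the lecture's formulas, with the destination center substituted
            int x1 = (int)((x - xc) * c - (y - yc) * s + dxc);
            int y1 = (int)((x - xc) * s + (y - yc) * c + dyc);
            if (x1 >= 0 && x1 < dw && y1 >= 0 && y1 < dh)
                dst[y1 * dw + x1] = src[y * w + x];
        }
}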
II. Block-based display and erasure of the image
1. Scan display and erasure
Scan display of an image is the most basic special-effect display method. It shows (or erases) the image line by line (or column by column) on the screen, with an effect like a curtain being drawn. According to the direction of the scan, six kinds can be distinguished: downward, upward, leftward, rightward, horizontal split and vertical split. Here we take the downward direction as an example to show how display and erasure are implemented respectively; the remaining scan effects can be deduced by analogy. The downward scan display is implemented by copying scan lines, starting from the top of the image, into the top of the target area: after each copy the number of copied lines increases by one, and a small delay is added. The downward erase moves the image downward while the erased region at the top of the display area keeps growing.

1) Scan display code:

void CDibView::OnImageDownScan()
{
    CDibDoc* pDoc = GetDocument();
    HDIB hdib;
    CClientDC pDC(this);
    hdib = pDoc->m_hDIB;            // get the image data handle
    BITMAPINFOHEADER* lpDIBHdr;     // pointer to the bitmap information header structure
    BYTE* lpDIBBits;                // pointer to the bitmap grayscale values
    HDC hDC = pDC.GetSafeHdc();     // get the handle of the current device context
    lpDIBHdr = (BITMAPINFOHEADER*)::GlobalLock(hdib);   // get the image header information
    lpDIBBits = (BYTE*)lpDIBHdr + sizeof(BITMAPINFOHEADER) + 256 * sizeof(RGBQUAD);
        // get a pointer to the image pixel values
    ::SetStretchBltMode(hDC, COLORONCOLOR);     // display the image
    for (int i = 1; i <= lpDIBHdr->biHeight; i++)
    {   // loop body reconstructed from the description above:
        // each pass shows one more scan line of the top of the image
        ::StretchDIBits(hDC, 0, 0, lpDIBHdr->biWidth, i,
                        0, lpDIBHdr->biHeight - i, lpDIBHdr->biWidth, i,
                        lpDIBBits, (LPBITMAPINFO)lpDIBHdr, DIB_RGB_COLORS, SRCCOPY);
        DelayTime(10);  // fixed delay between two copies (the value here is illustrative)
    }
    ::GlobalUnlock(hdib);
}
2. Venetian-blind display
In the venetian-blind display, the scan lines are shown in interleaved passes: in the l-th pass, scan lines l-1, l-1+m, l-1+2m, ... are displayed, where m is the number of groups. The horizontal venetian-blind display is implemented like the vertical one; the difference lies in the orientation of the drawn rectangles. In the following example, I divide the image into 8 groups:

..........
int m = 8;
int n = lpDIBHdr->biHeight / m;     // the height of the image is assumed divisible by 8
for (int l = 1; l <= m; l++)
    for (int k = 0; k < n; k++)
    {   // inner body reconstructed from the description above:
        // draw scan line k*m + l - 1 in this pass
        ::StretchDIBits(hDC, 0, k * m + l - 1, lpDIBHdr->biWidth, 1,
                        0, lpDIBHdr->biHeight - (k * m + l - 1) - 1, lpDIBHdr->biWidth, 1,
                        lpDIBBits, (LPBITMAPINFO)lpDIBHdr, DIB_RGB_COLORS, SRCCOPY);
        DelayTime(1);
    }
..........
3. Grid display
The grid (mosaic) display divides the image into small square blocks of side RectSize and shows them block by block until the whole image is displayed. The partial implementation is as follows:

..........
int m, n;
if (lpDIBHdr->biWidth % RectSize != 0)      // number of blocks horizontally
    m = lpDIBHdr->biWidth / RectSize + 1;
else
    m = lpDIBHdr->biWidth / RectSize;
if (lpDIBHdr->biHeight % RectSize != 0)     // number of blocks vertically
    n = lpDIBHdr->biHeight / RectSize + 1;
else
    n = lpDIBHdr->biHeight / RectSize;
POINT* point = new POINT[n * m];    // array recording the coordinates of the upper left corner of each block
POINT point1;
for (int a = 0; a < n * m; a++)
{   // loop body reconstructed from the comment above:
    // initial coordinates of block a, laid out row by row
    point[a].x = (a % m) * RectSize;
    point[a].y = (a / m) * RectSize;
}
..........
Below is the code we implemented for the second method:

..........
// allocate a buffer the same size as the image
hDIBCopy = (HDIB)::GlobalAlloc(GMEM_SHARE, lpDIBHdr->biWidth * lpDIBHdr->biHeight);
lpBits = (BYTE*)::GlobalLock(hDIBCopy);
// initialize the data of the buffer
for (int k = 0; k < lpDIBHdr->biHeight; k++)
..........
In addition, our study so far has dealt with ready-made BMP images. In practical work, most image processing takes place within a complete system environment, that is, acquisition equipment needs to be used directly. In general, computer image processing systems can be divided into three levels; currently the more popular is the low-end system, which consists of three parts: a CCD (camera), an image capture card, and a computer. Its structure is simple, it is convenient to use, it is quite effective, and the images obtained are clear, so it is widely used in engineering applications. This poses a practical problem for developers: how to use the image capture card? At present, although there are many articles on VC development experience among the various programming resources, how to operate an image capture card on the VC development platform is indeed hard to find, so the author takes the valuable opportunity of this lecture to add material on how to write one's own code to operate the image capture card and build a complete image processing system. I hope that after studying this part of the content, readers will have the concept of a complete image operating system in mind, and that it will also help friends who are currently using an image capture card to develop their own image processing systems.

1. Jittered images
In the previous installment we discussed how to implement the "engraving" and "relief" effects of an image; both are obtained from "the difference between two neighboring pixels plus a constant". If we drop the restriction of "operating only on pixels not yet processed" and instead allow "pixels that have already been operated on", the grayscale value of one pixel can be propagated to several pixels adjacent to it. In fact, sometimes we must do this to achieve certain effects, such as the jitter effect of an image. For example, to make the image look as if it is moving from the upper left corner toward the lower right corner, generating a feeling of motion, we must repeatedly copy the grayscale values of the pixels toward the upper left and gradually blend them together, so that the image appears to leave trailing shadows behind it: this is the jitter effect we are describing.
The implementation code of this effect is given below:

void CDibView::OnDouDongImage()     // generate the "jitter" effect image
{
    HANDLE data1handle;         // handle for the temporary image data
    LPBITMAPINFOHEADER lpBi;    // image information header structure
    CDibDoc* pDoc = GetDocument();  // get the document pointer
    HDIB hdib;                  // handle of the image data
    unsigned char* pData;       // pointer to the original image data
    unsigned char* data;        // pointer to the processed image data

    hdib = pDoc->m_hDIB;        // copy the handle of the image file data already read in
    lpBi = (LPBITMAPINFOHEADER)::GlobalLock((HGLOBAL)hdib);    // get the image information header
    pData = (unsigned char*)FindDIBBits((LPSTR)lpBi);
        // FindDIBBits is a function defined by the author that returns the
        // bitmap's grayscale data according to the image structure
    pDoc->SetModifiedFlag(TRUE);    // mark the document modified, preparing for later saving
    data1handle = ::GlobalAlloc(GMEM_SHARE,
                        WIDTHBYTES(lpBi->biWidth * 8) * lpBi->biHeight);
        // declare a buffer used to hold the image data temporarily
    data = (unsigned char*)::GlobalLock((HGLOBAL)data1handle);  // get a pointer to the buffer
    AfxGetApp()->BeginWaitCursor();
    int i, j, buf;
    for (i = lpBi->biHeight; i >= 2; i--)   // start "jitter" processing from the lower right corner
        for (j = lpBi->biWidth; j >= 2; j--)
        {   // jitter: average the pixel with its diagonally adjacent neighbor
            buf = (*(pData + (lpBi->biHeight - i) * WIDTHBYTES(lpBi->biWidth * 8) + j)
                 + *(pData + (lpBi->biHeight - i + 1) * WIDTHBYTES(lpBi->biWidth * 8) + j - 1)) / 2;
            if (buf > 255) buf = 255;   // limit the grayscale range of the pixel to 0-255
            if (buf < 0)   buf = 0;
            *(data + (lpBi->biHeight - i) * WIDTHBYTES(lpBi->biWidth * 8) + j) = (BYTE)buf;
        }
    for (j = 0; j < lpBi->biHeight; j++)
        for (i = 0; i < lpBi->biWidth; i++)     // copy the result back to the original image buffer
            *(pData + j * WIDTHBYTES(lpBi->biWidth * 8) + i)
                = *(data + j * WIDTHBYTES(lpBi->biWidth * 8) + i);
    AfxGetApp()->EndWaitCursor();
    pDoc->m_hDIB = hdib;        // write the processed image data handle back into the document
    ::GlobalUnlock((HGLOBAL)hdib);      // unlock and release the buffers
    ::GlobalUnlock((HGLOBAL)data1handle);
    ::GlobalFree((HGLOBAL)data1handle);
    Invalidate(TRUE);       // redraw to display the image
}

For complicated images, the jitter effect produced by averaging the current pixel's grayscale with its diagonal neighbor may not be obvious. To solve this problem, the author's solution is to compute across an interval of rows and columns: for example, to compute the grayscale value at the current position (i, j), I take the average of the grayscale values of the pixels at positions (i, j) and (i-2, j-2).

2. Image composition
Image composition techniques are important: by operating on two or more images, their information is fused together, producing a "1 + 1 > 2" effect. When compositing images we can use the alpha-blending method. Let us look at how to composite two pictures with an alpha value. In alpha composition, each pixel of the final composite image is obtained by mixing the corresponding pixels of the two source pictures, with the mix determined by the alpha value. The specific formula is as follows:

ResultPixel = (Pixel1 * (255 - alpha) + Pixel2 * alpha) / 255;

Here alpha ranges from 0 to 255; Pixel1 is the grayscale value of the current pixel of image 1, Pixel2 is the grayscale value of the current pixel of image 2, and alpha can be viewed as the weight of the two pixels in the final composite. It can be seen that merely by modifying the value of alpha, the proportions of the two images in the composite can be changed, changing the displayed result. If we modify the value of alpha at fixed intervals, we can easily produce a vivid fade-in/fade-out effect, achieving a smooth transition between the two images. The specific source code for producing the composite is given below:

BOOL CompoundImage(HANDLE hDIB1, HANDLE hDIB2, int alpha)
{
    BYTE *lpData1, *lpData2;    // pointers to the data of the two images to be composited;
                                // the two images are assumed to have the same size, so the
                                // image information of only one file needs to be obtained
    LPBITMAPINFO lpbi = (LPBITMAPINFO)hDIB2;    // compute the image data offset
    lpData2 = (BYTE*)((LPBYTE)lpbi->bmiColors + 256 * sizeof(RGBQUAD));
        // get the image data of source image 2
    lpbi = (LPBITMAPINFO)hDIB1;
    lpData1 = (BYTE*)((LPBYTE)lpbi->bmiColors + 256 * sizeof(RGBQUAD));
    // blend the pixel values of the two images according to the alpha value
    for (int i = 0; i < lpbi->bmiHeader.biWidth; i++)
        for (int j = 0; j < lpbi->bmiHeader.biHeight; j++)
        {   // loop body reconstructed from the blending formula above;
            // the result is written into image 1's data
            long offset = j * WIDTHBYTES(lpbi->bmiHeader.biWidth * 8) + i;
            lpData1[offset] = (BYTE)((lpData1[offset] * (255 - alpha)
                              + lpData2[offset] * alpha) / 255);
        }
    return TRUE;
}
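A sketch of the fade-in described above follows. It assumes it runs inside the view class; because CompoundImage as given blends in place, a pristine copy of image 1's data is assumed to be restored before each call, and RestoreImage is a hypothetical helper introduced here only for illustration.

for (int alpha = 0; alpha <= 255; alpha += 5)
{
    RestoreImage(hDIB1, hDib1Copy);     // hypothetical helper: copy the saved pixels back into hDIB1
    CompoundImage(hDIB1, hDIB2, alpha); // the weight shifts toward image 2 as alpha grows
    Invalidate(TRUE);                   // repaint the view with the newly blended data
    UpdateWindow();
    DelayTime(30);                      // fixed delay so the transition is visible
}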
In general, using an image capture card takes three steps. First, install the capture card's driver and copy the virtual driver file vxd.vxd into the Windows system directory. Then enter the VC development platform and generate a new project: because the manufacturer provides a dynamic-link library implementing functions such as initializing the hardware and acquiring images, to use these functions the new project must be linked with this library. The last step is to acquire the image and display it. In this step the system palette must be set, because the capture card delivers the image in raw form, that is, pure image data without any image specification or palette information; the developer must supply these himself. The implementation code is as follows (the W32_... functions below are supplied by the card manufacturer's library):

CTestView::CTestView()
{
    W32_Init_MPE1000();             // initialize the capture card
    W32_Modify_Contrast(50);        // the following functions preset the capture card
    W32_Modify_Brightness(45);      // set the brightness
    W32_Set_HP_Value(945);          // set the horizontal acquisition points
    wCurrent_Frame = 1;             // the current frame is 1; acquisition starts from this frame
    // set the signal source; only valid for the MPE1000
    W32_Set_Input_Source(1);        // the card supports three video channels;
                                    // the image is currently acquired from the second input
    W32_Set_PAL_Range(1250, 1024);  // set the horizontal acquisition range
    W32_Set_VGA_Mode(1);            // use the PAL standard
    wGrabWinX1 = 0;                 // coordinates of the upper left corner of the acquisition window
    wGrabWinY1 = 0;
    firstTime = TRUE;               // first acquisition
    bGrabMode = FRAME;              // the acquisition mode is frame
    bZipMode = ZIPPLE;              // the compression mode is ZIPPLE
    lpDib = NULL;                   // the buffer storing acquired image data is empty
}

CTestView::~CTestView()
{
    W32_Close_MPE1000();            // close the capture card
}

void CTestView::OnGrabOneFrame()    // display the acquired image; double-click the mouse to stop
{
    // TODO: Add your command handler code here
    wCurrent_Frame = 1;
    // set the acquisition destination to memory
    W32_CACardParam(AD_SETGRABDEST, CA_GRABMEM);
    // start the acquisition
    if (lpDib != NULL)
    {   // if the image buffer is not empty, release it
        ::GlobalUnlock(hglbDib);
        ::GlobalFree(hglbDib);
    }
    // allocate memory for the acquired image data
    hglbDib = ::GlobalAlloc(GHND, (DWORD)wImgWidth * (DWORD)wImgHeight);
    lpDib = (BYTE*)::GlobalLock(hglbDib);   // get the image data pointer
    hDC = GetDC()->GetSafeHdc();            // get the device context handle
    if (lpDib != NULL)
    {
        cxDib = wImgWidth;
        cyDib = wImgHeight;
        SetLogicPal(hDC, cxDib, cyDib, 8);  // set the palette
        ::SetStretchBltMode(hDC, COLORONCOLOR);
        bGrabMark = TRUE;