Pixel format


As I said earlier, when you write to memory in palettized mode, you write one byte per pixel, and each byte is an index into the color lookup table. In RGB modes you write the color value itself to memory, but each pixel takes more than one byte; how many depends on the color depth. For 16-bit color you have to prepare two bytes (16 bits) for each pixel, so you can guess how 32-bit color works. This is all easy to understand. For a 32-bit color pixel, the bits are laid out like this:

AAAA AAAA RRRR RRRR GGGG GGGG BBBB BBBB

"A" is representative "alpha", represents a transparent value, which is prepared for Direct3D. I have said before, DirectDraw does not support α mix, so when you create a 32-bit color for DirectDraw, you set the high position to 0 well. The next 8 digits represents the value of the red intensity, and then the next 8 digits represent green, and the last 8 digits represent blue. A 32-bit color pixel requires 32 bits, so we generally define the corresponding variable type with the UINT type, which is an unsigned real type. Usually I use a macro to convert RGB data into the correct pixel format. Let me look at it look, I hope this can help you understand the pixel format:

#define RGB_32BIT(r, g, b) (((r) << 16) | ((g) << 8) | (b))

As you can see, this macro shifts the red, green, and blue intensity values into their proper positions, exactly matching the pixel format. Starting to get the idea? To create a 32-bit pixel you just call this macro. The red, green, and blue intensities are each 8 bits, so their values range from 0 to 255. For example, to build a white pixel you can do this:

UINT white_pixel = RGB_32BIT(255, 255, 255);
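
Just to reinforce the layout, here is a small sketch of my own (these GET_ macros are not part of DirectDraw, just placeholder names) that pulls the intensities back out of a 32-bit pixel by shifting and masking:

#define GET_A_32BIT(pixel) (((pixel) >> 24) & 0xFF)  // alpha, always 0 for DirectDraw
#define GET_R_32BIT(pixel) (((pixel) >> 16) & 0xFF)  // red
#define GET_G_32BIT(pixel) (((pixel) >> 8) & 0xFF)   // green
#define GET_B_32BIT(pixel) ((pixel) & 0xFF)          // blue

UINT red = GET_R_32BIT(white_pixel);  // gives back 255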

24-bit color works basically the same way; the idea is the same, except that 24-bit has no alpha component, that is, the 8 alpha bits are missing. The pixel format looks like this:

RRRR RRRR GGGG GGGG BBBB BBBB

So red, green, and blue are still 8 bits each, which means 24-bit color and 32-bit color actually have the same color depth; 32-bit just adds the alpha channel. Now you might think 24-bit is better than 32-bit. Is it really? No, because 24-bit is a hassle to work with: there is no native 24-bit data type, so when you build a pixel you have to write the red, green, and blue intensities separately instead of in one shot as with 32-bit. Although 32-bit color takes more memory, it is faster on most machines. In fact, many video cards do not support a 24-bit color mode at all, because 3 bytes per pixel is awkward to address.
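
Here is a rough sketch of my own, not DirectDraw-specific code, just to show that difference. Assume video_buffer points to locked surface memory, pitch is the surface pitch in bytes, and x and y are pixel coordinates; all of these names are placeholders, and the byte order in the 24-bit case assumes the usual little-endian BGR layout:

void Plot24(UCHAR *video_buffer, int pitch, int x, int y,
            UCHAR red, UCHAR green, UCHAR blue)
{
    // no 3-byte data type exists, so write the channels one byte at a time
    UCHAR *pixel = video_buffer + y * pitch + x * 3;
    pixel[0] = blue;
    pixel[1] = green;
    pixel[2] = red;
}

void Plot32(UCHAR *video_buffer, int pitch, int x, int y,
            UCHAR red, UCHAR green, UCHAR blue)
{
    // the whole pixel goes out in a single aligned 32-bit write
    UINT *pixel = (UINT *)(video_buffer + y * pitch) + x;
    *pixel = RGB_32BIT(red, green, blue);
}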

Now it is 16-bit color's turn, and it is a little trickier, because with 16-bit color not every video card uses the same pixel format! There are two formats. The more popular one gives red 5 bits, green 6 bits, and blue the remaining 5 bits. The other gives each channel 5 bits and leaves the remaining bit, the high bit, unused; some older video cards use this format. The two formats look like this:

565 format: RRRR RGGG GGGB BBBB
555 format: 0RRR RRGG GGGB BBBB

When you work at 16-bit color depth, you first have to detect whether the video card supports the 565 format or the 555 format, and then use the appropriate one. This is annoying, but if you insist on using 16-bit color there is no way around it. Since there are two formats, you need two macros:

#define RGB_16BIT565(r, g, b) (((r) << 11) | ((g) << 5) | (b))
#define RGB_16BIT555(r, g, b) (((r) << 10) | ((g) << 5) | (b))

For the 565 format, the red and blue values range from 0 to 31 and green from 0 to 63; for the 555 format, all three range from 0 to 31. So when you want to create a white pixel, it looks a little different in each case:

USHORT white_pixel_565 = RGB_16BIT565(31, 63, 31);
USHORT white_pixel_555 = RGB_16BIT555(31, 31, 31);
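
If your source colors are in the usual 0-255 range, you have to scale them down before feeding them to these macros: red and blue lose their low 3 bits, and green loses 2 bits (565) or 3 bits (555). Here is a little sketch of my own (the _FROM_8BIT names are just placeholders):

#define RGB_16BIT565_FROM_8BIT(r, g, b) RGB_16BIT565((r) >> 3, (g) >> 2, (b) >> 3)
#define RGB_16BIT555_FROM_8BIT(r, g, b) RGB_16BIT555((r) >> 3, (g) >> 3, (b) >> 3)

USHORT white_565 = RGB_16BIT565_FROM_8BIT(255, 255, 255);  // same as RGB_16BIT565(31, 63, 31)
USHORT white_555 = RGB_16BIT555_FROM_8BIT(255, 255, 255);  // same as RGB_16BIT555(31, 31, 31)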

USHORT is the unsigned short type, so these variables are only 16 bits wide. Having two formats does complicate things, but in actual game programming you will find it is not as annoying as it sounds. By the way, the 555 format is sometimes called 15-bit color depth, so if I mention it that way, don't be thrown off! ^_^ Now let me show you how to detect whether the video card's 16-bit mode is 555 or 565. The easiest way is to call the GetPixelFormat() function of the IDirectDrawSurface7 interface, whose prototype looks like this:

HRESULT GetPixelFormat(LPDDPIXELFORMAT lpDDPixelFormat);

The parameter is a pointer to a DDPIXELFORMAT structure. You just declare one, initialize it, and pass its address, and everything is OK. The structure itself is huge, so I won't list it here, but I will tell you about three DWORD fields: dwRBitMask, dwGBitMask, and dwBBitMask. From them you can read the bit masks for red, green, and blue (new stuff, don't worry about it too much), and you can also use them to detect which format the video card supports. If the card supports the 565 format, dwGBitMask will be 0x07E0; if it is the 555 format, dwGBitMask will be 0x03E0.

Now we have covered all the pixel formats we are likely to use, and we can move on to actually displaying images under DirectX. You have been waiting a long time, haven't you? Before we can put pixels on a surface we need to lock the surface, or at least part of it. Locking the surface returns a pointer to the surface memory, and then we can do whatever we want with it.
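
Before we get to locking, here is a minimal sketch of the 565/555 check described above. Only GetPixelFormat() and the DDPIXELFORMAT fields come from DirectDraw; lpddsprimary stands for an IDirectDrawSurface7 pointer you have already created elsewhere, and the surrounding setup is assumed:

DDPIXELFORMAT ddpf;                   // the pixel format structure
ZeroMemory(&ddpf, sizeof(ddpf));      // clear it out
ddpf.dwSize = sizeof(DDPIXELFORMAT);  // DirectDraw wants the size filled in

if (SUCCEEDED(lpddsprimary->GetPixelFormat(&ddpf)))
{
    if (ddpf.dwGBitMask == 0x07E0)
    {
        // 565 format, so use RGB_16BIT565()
    }
    else if (ddpf.dwGBitMask == 0x03E0)
    {
        // 555 format, so use RGB_16BIT555()
    }
}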

