Game Engine Analysis 2

zhaozj · 2021-02-16

Part 2: Lighting and Texturing the 3D Environment

Lighting the World

During transformation -- typically into the coordinate space known as view space -- we run into one of the most important operations of all: lighting. It is one of those things that you never notice when it works, and notice all too much when it doesn't. There are many different approaches to lighting, ranging from simply computing a polygon's orientation toward a light and adding a percentage of the light's color (based on that orientation and distance) to the polygon's color, all the way up to generating smooth-edged light maps to overlay on the base textures. Some APIs also provide pre-built lighting: OpenGL, for example, offers lighting on a per-polygon, per-vertex, and per-pixel basis.

With vertex lighting, you determine how many polygons share a given vertex, average the face normals of all those polygons, and assign the resulting averaged normal to the vertex (the vertex normal). Each vertex of a given polygon will therefore have a slightly different normal, so you shade, or interpolate, the lighting colors across the polygon to get a smoother lit appearance; you give up the faceted, per-polygon look with this approach. Its advantage is that hardware transform and lighting (T&L) can usually accelerate it. The drawback is that it does not produce shadows: for example, both arms of a model will be lit the same way even when the light is on the model's right side and the left arm should be in the shadow cast by the body.
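
To make the normal-averaging step concrete, here is a minimal sketch of how per-vertex normals might be computed from an indexed triangle mesh. The Vec3 type and the mesh layout are illustrative assumptions, not any particular engine's API:

```cpp
#include <cmath>
#include <vector>

struct Vec3 {
    float x, y, z;
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
};

static Vec3 cross(const Vec3& a, const Vec3& b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}

static Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return len > 0.0f ? Vec3{v.x / len, v.y / len, v.z / len} : v;
}

// For every vertex, sum the face normals of all triangles that share it,
// then normalize the sum -- that average becomes the vertex normal.
std::vector<Vec3> computeVertexNormals(const std::vector<Vec3>& positions,
                                       const std::vector<unsigned>& indices) {
    std::vector<Vec3> normals(positions.size(), Vec3{0.0f, 0.0f, 0.0f});
    for (size_t i = 0; i + 2 < indices.size(); i += 3) {
        const Vec3& a = positions[indices[i]];
        const Vec3& b = positions[indices[i + 1]];
        const Vec3& c = positions[indices[i + 2]];
        Vec3 faceNormal = cross(b - a, c - a);  // unnormalized: bigger faces weigh more
        normals[indices[i]]     = normals[indices[i]]     + faceNormal;
        normals[indices[i + 1]] = normals[indices[i + 1]] + faceNormal;
        normals[indices[i + 2]] = normals[indices[i + 2]] + faceNormal;
    }
    for (Vec3& n : normals) n = normalize(n);
    return normals;
}
```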

These simpler approaches use shading to achieve their effect. When drawing a polygon with flat lighting, you ask the rendering engine to fill the entire polygon with one specified color. This is called flat shading: the whole polygon gets a single light intensity, every point on its surface is displayed with that same value, and the result looks flat and faceted, with the polygon edges plainly visible.

For vertex shading (Gouraud shading), you ask the rendering engine to assign each vertex a specific color. As each pixel of the projected polygon is drawn, its color is interpolated from the vertex colors according to the pixel's distance from each vertex. (This is actually what Quake III uses for its models, and the effect can be surprisingly good.)

Then there is Phong shading. Like Gouraud shading, it works across the whole polygon, but instead of interpolating a color computed at each vertex, it interpolates the normal itself between the vertices and performs the full lighting calculation at every pixel. Where Gouraud shading only needs to know how the light falls on each vertex, Phong shading needs that information for every single pixel.

Not surprisingly, Phong shading produces a much smoother result, but because every pixel requires a lighting calculation, it is very expensive at draw time. Flat shading is fast but crude; Phong is costlier than Gouraud but looks best of all, and can produce specular highlights ("hotspots"). These are the trade-offs you have to weigh in game development.
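
The difference between the two interpolation orders can be shown in a few lines. This sketch (reusing the Vec3 helpers from the earlier listing) evaluates a standard Lambert diffuse term for a point partway between two vertices -- Gouraud interpolates the lit colors, Phong interpolates the normal and lights per pixel; the single-axis interpolation and white light are simplifying assumptions:

```cpp
// Lambert diffuse term: clamp(N . L, 0, 1), with L pointing toward the light.
float diffuse(const Vec3& n, const Vec3& lightDir) {
    float d = n.x * lightDir.x + n.y * lightDir.y + n.z * lightDir.z;
    return d > 0.0f ? (d < 1.0f ? d : 1.0f) : 0.0f;
}

// Gouraud: light at the vertices, then interpolate the resulting intensities.
float gouraudIntensity(const Vec3& n0, const Vec3& n1, const Vec3& lightDir, float t) {
    return (1.0f - t) * diffuse(n0, lightDir) + t * diffuse(n1, lightDir);
}

// Phong: interpolate the normal, renormalize, then light at the pixel.
// The per-pixel renormalize-and-dot is what costs more -- and what lets
// a specular hotspot appear in the middle of a polygon.
float phongIntensity(const Vec3& n0, const Vec3& n1, const Vec3& lightDir, float t) {
    Vec3 n = normalize(Vec3{(1.0f - t) * n0.x + t * n1.x,
                            (1.0f - t) * n0.y + t * n1.y,
                            (1.0f - t) * n0.z + t * n1.z});
    return diffuse(n, lightDir);
}
```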

Light Maps

Next comes light mapping, in which a second texture map (the light map) is blended with the existing texture to produce the lighting effect. This works nicely, but it is essentially a canned effect, pre-generated before rendering. If you use dynamic lights (lights that move, or that the program switches on and off), you have to regenerate the light maps every frame, modifying them to follow the motion of the dynamic lights. Light maps render quickly, but storing all those lighting textures is expensive in memory. You can use compression tricks to make them take up less space, reduce their size, or even make them monochromatic (giving up colored lighting), and so on. And if you do have several dynamic lights in a scene, regenerating the light maps can end up costing a great many CPU cycles.

Many games therefore use a mixed lighting approach. Quake III, for example, uses light maps for the world and vertex lighting for the animated models. Pre-processed light maps don't produce the correct effect on animated models -- the whole polygonal model would receive the light value sampled at a single point -- so dynamic lighting is applied to them instead to get the right effect. A mixed approach is a compromise that most people never notice, and it usually looks "right". That is what games are all about: doing whatever is necessary to make the effect look right, without necessarily being right.
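
At the texel level, applying a light map is just a multiply of the base texture by the light map. A minimal CPU-side sketch, assuming 8-bit RGB texels; in practice the card performs this modulate itself, as the multitexturing section below describes:

```cpp
#include <cstdint>

struct RGB { uint8_t r, g, b; };

// Modulate the base texel by the lightmap texel: 255 in the light map
// leaves the texture at full brightness, 0 turns it black.
RGB applyLightMap(RGB base, RGB light) {
    return { static_cast<uint8_t>(base.r * light.r / 255),
             static_cast<uint8_t>(base.g * light.g / 255),
             static_cast<uint8_t>(base.b * light.b / 255) };
}
```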

Of course, all of this goes out the window with the new Doom engine -- but then, seeing all of its effects requires at least a 1GHz CPU and a GeForce 2. Progress it is, but it all comes at a price.

Once the scene has been transformed and lit, we move on to clipping. Without getting into the gory details, clipping determines which triangles are completely inside the scene (that is, inside the viewing frustum) and which are only partially inside. Triangles completely inside are said to be trivially accepted and can be processed directly. For triangles that are only partially inside, the portion outside the frustum is clipped away, and the remaining polygon inside the frustum must be retessellated so that it lies entirely within the visible scene. (For more detail, see our 3D Pipeline Guide.)
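
A sketch of the trivial accept/reject classification that precedes the actual clipping, reusing the Vec3 type from the earlier listings. The inward-facing Plane representation and the exact enum names are illustrative assumptions:

```cpp
struct Plane { Vec3 n; float d; };  // points with n.p + d >= 0 are inside

float signedDistance(const Plane& p, const Vec3& v) {
    return p.n.x * v.x + p.n.y * v.y + p.n.z * v.z + p.d;
}

enum class ClipResult { TriviallyAccepted, TriviallyRejected, NeedsClipping };

// Test a triangle against the six planes of the viewing frustum.
ClipResult classifyTriangle(const Plane frustum[6], const Vec3 tri[3]) {
    bool allInside = true;
    for (int p = 0; p < 6; ++p) {
        int out = 0;
        for (int v = 0; v < 3; ++v)
            if (signedDistance(frustum[p], tri[v]) < 0.0f) ++out;
        if (out == 3) return ClipResult::TriviallyRejected;  // fully outside one plane
        if (out > 0) allInside = false;                      // straddles this plane
    }
    return allInside ? ClipResult::TriviallyAccepted : ClipResult::NeedsClipping;
}
```

Only triangles that come back as NeedsClipping have to pay for the more expensive clip-and-retessellate step.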

After the scene has been clipped, the next stage in the pipeline is triangle setup (also called scan-line conversion), in which the scene is mapped to 2D screen coordinates. This is where the rendering (drawing) operations actually take place.

Textures and MIP Mapping

Textures are crucial to making 3D scenes look real: they are the small pictures that get broken up and applied to the polygons of an area or object in the scene. Multiple textures can eat up large amounts of memory, and various techniques exist to help manage their size. Texture compression is one way of making texture data smaller while preserving the picture information. Compressed textures take up less space on the game CD and, more importantly, less space in memory and in 3D graphics card memory. A further benefit is that when you first ask the card to display a texture, it is the compressed (smaller) version that gets sent from PC main memory to the 3D card, which is that much faster. Texture compression is a good thing; we discuss it further below.

MIP Mapping

Another technique used to reduce the memory and bandwidth demands of textures is MIP mapping. MIP mapping preprocesses a texture into multiple copies, each successive copy half the size of the previous one. Why would you do this? To answer that, you need to understand how a 3D card displays a texture. In the simplest case, you take a texture, stick it on a polygon, and send it out to the screen. Say there is a one-to-one relationship: one texel (texture element) in the texture map corresponds to one pixel on the polygon being textured. If the polygon you display is then scaled down to half size, the texture effectively shows only every other texel. That is usually fine -- but in some cases it leads to visual weirdness. Take a brick wall. Suppose the original texture shows many bricks, but the mortar between them is only one pixel wide. Scale the polygon to half size, so only every other texel is applied, and suddenly all the mortar vanishes -- it has been scaled out -- and you get strange-looking images.

With MIP mapping, you scale the image yourself before the card ever sees it, and because you can preprocess it, you can do a better job, so the mortar isn't simply dropped. When the 3D card draws the polygon, it detects the scale factor and says, "You know, rather than using the largest texture, I'll use a smaller one, and it will look better." MIP mapping: a win for the card, and for everyone.
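
A sketch of how such a MIP chain might be built offline: each level averages 2x2 blocks of the level above (a box filter), so single-texel details like the mortar get blended into their neighbors instead of being dropped outright. Power-of-two dimensions and 8-bit grayscale texels are simplifying assumptions:

```cpp
#include <cstdint>
#include <vector>

// One box-filtered halving step: each output texel is the average of a
// 2x2 block of input texels, so thin features fade rather than vanish.
std::vector<uint8_t> halveLevel(const std::vector<uint8_t>& src, int w, int h) {
    std::vector<uint8_t> dst((w / 2) * (h / 2));
    for (int y = 0; y < h / 2; ++y)
        for (int x = 0; x < w / 2; ++x) {
            int sum = src[(2 * y) * w + 2 * x]     + src[(2 * y) * w + 2 * x + 1]
                    + src[(2 * y + 1) * w + 2 * x] + src[(2 * y + 1) * w + 2 * x + 1];
            dst[y * (w / 2) + x] = static_cast<uint8_t>(sum / 4);
        }
    return dst;
}

// Build the full chain: e.g. 256x256, 128x128, ..., down to 1x1.
std::vector<std::vector<uint8_t>> buildMipChain(std::vector<uint8_t> level0, int size) {
    std::vector<std::vector<uint8_t>> chain{std::move(level0)};
    for (int s = size; s > 1; s /= 2)
        chain.push_back(halveLevel(chain.back(), s, s));
    return chain;
}
```

Note the whole chain adds only about one third more memory than level 0 alone (1/4 + 1/16 + ... of the base size), which is why the trade is almost always worth it.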

Multiple Textures and Bump Mapping

Single texture mapping makes a huge difference to the look of a rendered 3D world, but multiple textures can achieve even more memorable effects. This used to require multiple rendering passes, which eats up pixel fill rate. But on 3D accelerators with multiple texture pipelines, such as ATI's Radeon and NVIDIA's GeForce 2 and later cards, multiple textures can often be applied in a single rendering pass. To build a multi-texture effect, you draw a polygon with one texture, then draw another texture over the top of it with some transparency. This lets you make textures appear to move, or pulse, or even carry shadows (as described in the lighting section): draw the base texture map, then draw an all-black texture with a transparency layer over the top of it, and voilà -- instant shadowing. The technique is called light mapping (and sometimes dark mapping), and it was the traditional way of lighting levels in id's engines, right up until the new Doom.
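
On fixed-function hardware of that generation, single-pass light mapping looks roughly like this in OpenGL: texture unit 0 carries the base texture and unit 1 modulates it with the light map. A sketch, not id's actual code; it assumes an OpenGL 1.3+ context (older headers need the ARB multitexture extension), and the two texture ids are created elsewhere:

```cpp
#include <GL/gl.h>

// Single-pass light mapping on two fixed-function texture units.
// baseTex and lightMapTex are texture object ids created elsewhere.
void bindLightMappedPass(GLuint baseTex, GLuint lightMapTex) {
    glActiveTexture(GL_TEXTURE0);            // unit 0: the wall/floor texture
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, baseTex);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);

    glActiveTexture(GL_TEXTURE1);            // unit 1: the precomputed light map
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, lightMapTex);
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);  // multiply the two
}
```

Geometry drawn after this setup gets both textures in one pass, instead of the two blended passes the older cards needed.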

Bump mapping is an old technology that has recently come back to the fore. A few years ago, Matrox was the first to promote various forms of bump mapping in popular 3D games. The idea is to generate a texture that represents how light falls on a surface, to convey bumpiness or crevices in that surface. Bump mapping does not move with the lights -- it is designed to express fine detail on a surface rather than large irregularities. In a flight simulator, for example, you could use bump mapping to produce random-looking ground detail, instead of repeating the same texture over and over, which looks uninteresting.
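
One of the simplest variants supported by hardware of that era is emboss-style bump mapping: sample a height map twice, slightly offset along the light direction, and use the difference to brighten or darken the texel. A sketch under those assumptions -- the offsets, scale factor, and sign convention are all illustrative:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Emboss-style bump term: the height difference between a texel and its
// neighbor shifted toward the light approximates the surface slope there.
// Returns a brightness multiplier: ~1.0 on flat ground, above 1 for slopes
// facing the light, below 1 for slopes facing away.
float embossTerm(const std::vector<uint8_t>& height, int w, int h,
                 int x, int y, int lightDx, int lightDy) {
    int sx = std::clamp(x + lightDx, 0, w - 1);
    int sy = std::clamp(y + lightDy, 0, h - 1);
    int diff = int(height[y * w + x]) - int(height[sy * w + sx]);
    return 1.0f + diff / 255.0f;
}
```

Multiplying each texel of the base texture by this term gives the bumpy look -- and, as the text notes, the result is baked relative to one light direction rather than truly reacting to the viewer.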

Bump mapping produces a considerable amount of apparent surface detail, although it is something of a sleight of hand: strictly speaking, the effect does not change with your viewing angle. With the per-pixel operations that newer ATI and NVIDIA cards can perform, that caveat is no longer a hard-and-fast rule. Either way, few game developers have made much use of it so far; more games could, and should.

Cache Thrash = Bad Thing

Texture cache management is vital to game engine speed. As with any cache, hits are good and misses are bad. If textures are being swapped in and out of the graphics card's memory over and over, that is texture cache thrashing. When it happens, the API will often dump every texture, with the result that all of them must be reloaded the next frame -- time-consuming and wasteful. To the gamer, it shows up as frame-rate stutter while the API reloads the texture cache.

There are various techniques for minimizing texture cache thrashing, and together they are a decisive factor in making any 3D game engine fast. Texture management is a good thing: it means asking the card to load each texture only once, rather than over and over again. That sounds contradictory, but in effect it means telling the card, "Look, all of these polygons use this one texture -- can we load it just once instead of many times?" This stops the API (or the graphics driver) from uploading the same texture to the card repeatedly. An API like OpenGL actually handles texture cache management itself: according to certain rules, such as how recently a texture was accessed, the API decides which textures live in graphics memory and which stay in main memory. The real problems are: a) you often cannot know the exact rules the API is using, and b) you often ask for more textures in a single frame than graphics memory can hold.
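
The cheapest piece of texture management is simply not re-requesting a texture that is already bound. A sketch of the classic redundant-bind filter; glBindTexture and GLuint are the real OpenGL names, while the caching wrapper itself is an illustrative assumption:

```cpp
#include <GL/gl.h>

// Skip the bind entirely when the requested texture is already current,
// so the driver never has to consider an upload or a state change.
void bindTextureCached(GLuint tex) {
    static GLuint currentTex = 0;
    if (tex == currentTex) return;  // cache hit: no API traffic at all
    glBindTexture(GL_TEXTURE_2D, tex);
    currentTex = tex;
}
```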

Another texture cache management technique is the texture compression discussed earlier. Much as sound waveforms are compressed into MP3 files, textures can be compressed, though not at anything like the same ratio: wave-to-MP3 compression reaches about 11:1, while most hardware-supported texture compression algorithms manage only about 4:1 -- a 256x256 32-bit texture drops from 256KB to roughly 64KB -- yet even that makes a big difference. Better still, during rendering the hardware decompresses textures dynamically, and only when they are needed. That is great, and we have only scratched the surface of what this approach can do.

As mentioned above, another technique is to ensure the renderer asks the card for each texture only once: render all the polygons that use the same texture together, rather than a model here, another model there, and then back to the original texture again. Ship the texture just once, and it crosses the AGP interface just once. Quake III does this with its shader system: as polygons are processed, they are added to an internal shader list; once all polygons have been processed, the renderer walks that list, sending each texture along with all the polygons that use it in one go.
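
A sketch of that "gather, then draw per texture" idea: polygons are bucketed by texture as the scene is processed, then each bucket is flushed in one go. The structures below are illustrative, not id's actual shader list:

```cpp
#include <cstdint>
#include <unordered_map>
#include <vector>

struct Triangle { unsigned v[3]; };  // indices into the scene's vertex pool

// Bucket triangles by the texture they use while the scene is walked...
std::unordered_map<uint32_t, std::vector<Triangle>> batches;

void queueTriangle(uint32_t textureId, const Triangle& tri) {
    batches[textureId].push_back(tri);
}

// ...then, once the whole scene has been processed, draw one bucket at a
// time: each texture crosses the AGP bus at most once per frame.
void flushBatches() {
    for (auto& [textureId, tris] : batches) {
        bindTextureCached(textureId);   // from the earlier sketch
        // drawTriangles(tris);         // hypothetical: hand the whole run to the card
        tris.clear();
    }
}
```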

The above process tends to work less well when the card's hardware T&L is in use. What you end up with is lots of small groups of polygons scattered across the screen that share the same texture but use different transformation matrices. That means more time spent setting up the card's hardware T&L engine, and more time wasted. It still works fine for the actual on-screen models, since they tend to use one uniform texture across the whole model anyway. But it often plays hell with world rendering, because many polygons tend to share the same wall texture. Usually it is not that serious, since world textures are generally not so large, so the API's texture caching system will handle this for you and keep the texture on the card ready for reuse.

On game consoles there is usually no texture cache system at all (unless you write one). On the PS2, you are better off staying away from the "texture once" approach. On the Xbox it is immaterial, because the machine has no dedicated graphics memory (it is a UMA architecture) and all textures stay in main memory at all times anyway.

In today's PC FPS games, trying to push large numbers of textures across the AGP interface is actually the second most common bottleneck. The biggest bottleneck is the processing needed simply to get things where they need to be on screen: computing the world-space position of every vertex in every model is clearly the most time-consuming job a modern 3D FPS does. If you don't keep a scene's textures within budget, shipping masses of textures over the AGP interface follows as a close second. You do have the ability to influence this, though. By dropping the top MIP level (remember how the system keeps subdividing textures for you?), you can halve the dimensions of the textures the system tries to push at the card. Visual quality drops -- most noticeably in cinematic sequences -- but the frame rate rises. This approach is especially helpful for online games. In fact, Soldier of Fortune II and Jedi Knight II: Outcast were both designed with cards in mind that weren't really on the market yet: to view their textures at maximum size, your 3D card needs at least 128MB of memory. Both products were designed with the future in mind.
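
In OpenGL, dropping the top MIP level is essentially a one-liner: raising GL_TEXTURE_BASE_LEVEL above 0 tells the driver never to sample (or upload) the biggest levels. The parameter is core OpenGL since version 1.2; the wrapper function around it is an illustrative assumption:

```cpp
#include <GL/gl.h>

// Trade texture sharpness for memory and bus bandwidth: skipping N top MIP
// levels divides the texture's footprint by 4^N (half the width and half
// the height per skipped level).
void setTextureDetail(GLuint tex, int levelsToDrop) {
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_BASE_LEVEL, levelsToDrop);
}
```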

That concludes Part 2. In the following sections we will cover many more topics, including memory management, fog effects, depth testing, anti-aliasing, vertex shaders, APIs, and more.

Reprint notice: please credit the original source: https://www.9cbs.com/read-22032.html
