Game Engine Analysis, Part 3
Original author: Jake Simpson. Translator: to the sea. Email: GameWorldChina@myway.com
Part 3: Memory Usage, Effects, and APIs

Let's think about how 3D card memory is actually used today, and how it will be used in the future. Most 3D cards now handle 32-bit pixel color: 8 bits of red, 8 bits of green, 8 bits of blue, and 8 bits of alpha (transparency). With 256 shades each of red, green, and blue, that gives roughly 16.7 million colors - every color you can distinguish on a monitor.

So why does game design guru John Carmack want 64-bit color resolution? If we can't see the difference, what's the point? The point is this: say a dozen lights fall on a model, each a different color. We take the model's original color, compute the contribution of one light, and the color value changes. Then we compute another light, and the value changes again. The problem is that because each color component is only 8 bits, after applying four lights those 8 bits just aren't enough to give a good representation of the final color. The shortfall in resolution comes from quantization error, which is essentially rounding error caused by having too few bits. You can use up the available bits very quickly, and when you do, the colors wash out. With 16 or even 32 bits per component you have far more resolution, so you can layer color upon color and still represent the final result accurately.
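To make the quantization point concrete, here is a minimal C++ sketch (my own illustration, not from the original article) that applies several fractional light contributions once in 8-bit integer math and once in floating point; the 8-bit path drifts because every step discards the fraction.

    #include <cstdint>
    #include <cstdio>

    int main() {
        // Hypothetical example: apply four lights, each scaling the channel by 0.7.
        const float scale = 0.7f;
        uint8_t c8 = 200;      // 8-bit color channel
        float   cf = 200.0f;   // high-precision channel

        for (int i = 0; i < 4; ++i) {
            c8 = static_cast<uint8_t>(c8 * scale);  // truncates the fraction every pass
            cf = cf * scale;                        // keeps fractional precision
        }

        // The 8-bit result has accumulated rounding (quantization) error.
        printf("8-bit result: %u, float result: %.2f\n",
               static_cast<unsigned>(c8), cf);
        return 0;
    }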
Such color depths do, however, consume a lot of storage, which brings us to overall graphics memory and texture memory. The point here is that every 3D card has only a finite amount of onboard memory, and it has to hold the front and back buffers, the Z-buffer, and all of those glorious textures. The original Voodoo 1 card had only 2MB of memory; later the Riva TNT went up to 16MB. Then the GeForce and ATI Rage had 32MB, and now some GeForce 2 through 4 cards and Radeons carry 64MB to 128MB.

Why does this matter? Okay, let's look at some numbers. Say you want your game to look its best, so you run it with a 32-bit screen at 1280x1024 and a 32-bit Z-buffer. That's 4 bytes per pixel for the screen, plus 4 bytes per pixel for the Z-buffer, since both are 32 bits per pixel. We have 1280 x 1024 = 1,310,720 pixels. Counting the bytes for the front buffer and the Z-buffer, we multiply that by 8, which gives 10,485,760 bytes. Include a back buffer and it's 1280 x 1024 x 12, or 15,728,640 bytes - roughly 15MB. On a 16MB card, that leaves only 1MB for all of the textures. Now if those textures are true 32-bit color, that is, 4 bytes per pixel, then we can keep 1MB / 4 bytes per pixel = 262,144 texture pixels on the card per frame. That's about four 256x256 texture pages. Clearly, this example shows that an older 16MB card doesn't have what a modern game needs to look its best.
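As a quick sanity check on that arithmetic, here is a small C++ sketch (my own, not part of the original article) that computes the same buffer footprint and the texture space left on a hypothetical 16MB card:

    #include <cstdio>

    int main() {
        const long width = 1280, height = 1024;
        const long pixels = width * height;                 // 1,310,720 pixels

        const long bytesPerPixel = 4;                       // 32-bit color
        const long frontBuffer = pixels * bytesPerPixel;
        const long backBuffer  = pixels * bytesPerPixel;
        const long zBuffer     = pixels * bytesPerPixel;    // 32-bit Z

        const long total = frontBuffer + backBuffer + zBuffer;  // 15,728,640 bytes
        const long cardMemory = 16L * 1024 * 1024;               // 16MB card

        const long leftForTextures = cardMemory - total;
        printf("Buffers use %ld bytes, leaving %ld bytes for textures\n",
               total, leftForTextures);
        printf("That is %ld 32-bit texels, or about %ld 256x256 texture pages\n",
               leftForTextures / 4, (leftForTextures / 4) / (256L * 256L));
        return 0;
    }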
Obviously, this means that while drawing each frame we have to upload textures to the card on the fly. In fact, that is exactly what the AGP bus was designed to do, but AGP is still slower than the 3D card's own frame buffer memory, so you take some performance hit. And obviously, if you drop your textures to 16 bits, you can push twice as many of them, at lower color resolution, across AGP. If your game runs at a lower color depth per pixel, more of the card's memory is also left free for keeping commonly used textures resident (called texture caching). But you can never really predict how users will configure their systems; if they have a card capable of running at high resolutions and color depths,
the chances are that's exactly how they'll set it up.

Now let's talk about fog, which is something of a visual effect. Most engines today can handle fog, because fog is very handy for fading out the far end of the world, so that models and scene geometry don't suddenly pop into view as they cross the far clipping plane into visible range. There is also a technique called volumetric fog. Rather than being a function of an object's distance from the camera, this fog is an actual volume that you can see, walk into, and pass through, coming out the other side - and the visible level of fog changes as you move through it. Think of passing through a cloud bank - that's a perfect example of volumetric fog. A couple of good implementations of volumetric fog are the red fog in some levels of Quake III, and the new Rogue Squadron II from LucasArts for the GameCube. The latter has some of the best-looking clouds I've ever seen - about as real as you can get.
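Distance fog of the kind described above usually boils down to blending each pixel toward the fog color by a factor derived from depth. Here is a minimal C++ sketch of a linear fog factor (my own illustration of the general idea, not code from any particular engine):

    #include <algorithm>

    struct Color { float r, g, b; };

    // Linear fog: no fog before fogStart, full fog at fogEnd and beyond.
    Color applyDistanceFog(const Color& pixel, const Color& fogColor,
                           float distance, float fogStart, float fogEnd) {
        float f = (fogEnd - distance) / (fogEnd - fogStart); // 1 = clear, 0 = fogged
        f = std::max(0.0f, std::min(1.0f, f));
        return { pixel.r * f + fogColor.r * (1.0f - f),
                 pixel.g * f + fogColor.g * (1.0f - f),
                 pixel.b * f + fogColor.b * (1.0f - f) };
    }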
While we're talking about fog, it might be a good time to briefly mention alpha testing and texture alpha blending. When the renderer goes to put a particular pixel on the screen, then assuming it has passed the Z-buffer test (defined below), we might end up doing some alpha testing. We may discover that, in order to show what's behind it, the pixel needs to be drawn transparently. That means we have to fetch the pixel's existing value, blend it with the new value, and write the blended result back. This is called a read-modify-write operation, and it takes far longer than an ordinary pixel write.

There are different kinds of blending you can do, and these different effects are called blend modes. Straight alpha blending simply adds a percentage of the background pixel to the inverse percentage of the new pixel. There is also additive blending, which takes a percentage of the old pixel and adds a specific amount (rather than a percentage) of the new pixel; the effect is far more pronounced (Kyle's lightsaber effect in Jedi Knight II, for instance). Every time a vendor ships a new card we get hardware support for newer and more complicated blend modes, making ever more dazzling effects possible. The pixel operations offered by the GF3/4 and the latest Radeon cards already push the limits.
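To make the two blend modes concrete, here is a small C++ sketch of the read-modify-write step (my own illustration of the math, not any particular card's implementation):

    #include <algorithm>
    #include <cstdint>

    // Straight alpha blend: result = new * alpha + old * (1 - alpha).
    uint8_t alphaBlend(uint8_t oldPixel, uint8_t newPixel, float alpha) {
        float result = newPixel * alpha + oldPixel * (1.0f - alpha);
        return static_cast<uint8_t>(result);
    }

    // Additive blend: a percentage of the old pixel plus a fixed amount of the
    // new one, clamped so it saturates toward full brightness (good for glows).
    uint8_t additiveBlend(uint8_t oldPixel, uint8_t newPixel, float oldFactor) {
        float result = oldPixel * oldFactor + newPixel;
        return static_cast<uint8_t>(std::min(result, 255.0f));
    }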
Stencil shadowing and depth testing
With stencil shadowing, creating shadow effects gets more complicated and expensive. I won't go into too much detail here (it could be an article in itself), but the idea is to render a view of a model from the perspective of the light source, and then use the resulting polygon texture to generate, or project, shadows onto the surfaces of the affected objects. You are effectively 'casting' the shadow onto other polygons in the view. In the end you get what looks like real lighting, right down into shadowed corners. It is expensive because you are creating textures on the fly and making multiple passes over the same scene.

You can create shadows in many different ways, and, as is often the case, rendering quality is proportional to the rendering work required. There are what are known as hard shadows and soft shadows, the latter being preferable because they more accurately mimic how shadows usually behave in the real world. There are usually a few 'good enough' methods favored by game developers. For more on shadows, see Dave Salvator's 3D Pipeline article.
Depth testing
Now we come to depth testing, which is where hidden pixels get discarded and the subject of overdraw comes up. Overdraw is quite simple - it just means drawing the same pixel location more than once in a single frame. It is a function of how many elements exist in the Z (depth) dimension of the 3D scene, and it's also known as depth complexity. If you consistently overdraw too much - say, with dazzling spell effects like those in Heretic II - your frame rate goes to pieces. When some of the effects in Heretic II were first designed, with several people on screen casting spells at once, they were drawing every pixel on the screen 40 times in a single frame! Needless to say, this had to be tuned down, especially for the software renderer, which simply could not handle that load without reducing the game to a slide show.

Depth testing is how we decide, for a given pixel location, which objects are in front of other objects, so we can avoid drawing the ones that are hidden. Look at a scene and think about what you can't see. In other words, what is in front of, or hides, other scene objects? That decision is made by the depth test.

Let me explain further how depth testing helps the frame rate. Imagine a very busy scene, with lots of polygons (and pixels) stacked one behind another, and no quick way for the renderer to discard them. By sorting the non-alpha-blended polygons in the Z direction and rendering the nearest ones first, you fill the screen with the closest pixels first. Then, when you come to render the pixels behind them (as determined by the Z, or depth, test), those pixels are rejected quickly, avoiding the blend step and saving time. If you rendered back to front instead, every hidden object would be drawn in full and then completely overwritten by whatever is in front of it. The more occluded the scene, the more this happens, so depth testing is a very good thing.
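In a software renderer, the depth test is just a per-pixel comparison against a Z-buffer. A minimal C++ sketch of the idea (my own illustration, with a hypothetical frame buffer layout) looks like this:

    #include <cstdint>
    #include <vector>

    const int WIDTH = 640, HEIGHT = 480;
    std::vector<float>    zBuffer(WIDTH * HEIGHT, 1.0f);   // 1.0 = far plane
    std::vector<uint32_t> colorBuffer(WIDTH * HEIGHT, 0);

    // Write a pixel only if it is nearer than what is already stored there.
    // Hidden pixels fail the comparison and are rejected without a color write.
    void plotPixel(int x, int y, float depth, uint32_t color) {
        int index = y * WIDTH + x;
        if (depth < zBuffer[index]) {   // depth test: smaller Z means closer
            zBuffer[index] = depth;
            colorBuffer[index] = color;
        }
    }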
Anti-aliasing
Let's take a quick look at anti-aliasing. When rendering an individual polygon, the 3D card looks carefully at what has already been rendered and softens the edges of the new polygon, so you don't get the visibly jagged, stair-stepped pixel edges you would otherwise see. One of two techniques is generally used. The first works at the individual polygon level and requires you to render the polygons from back to front, so that each polygon can be blended appropriately with what is behind it. If you don't render in order, you end up with all sorts of strange artifacts. The second method renders the whole frame at a larger resolution than will actually be displayed, and then, when the image is scaled down, the sharp jagged edges get blended away. This second approach gives nice results, but because the card has to render more pixels than end up in the final frame, it demands a large amount of memory and plenty of memory bandwidth. Most newer cards can handle this, but there are still a variety of anti-aliasing modes to choose from, so you can trade performance against quality. For a more detailed discussion of today's popular anti-aliasing techniques, see Dave Salvator's 3D Pipeline article.
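The second method described above (supersampling) amounts to averaging blocks of high-resolution pixels down into one displayed pixel. Here is a minimal C++ sketch of a 2x2 box-filter downsample over one color channel (my own illustration, not how any particular card implements it):

    #include <cstdint>
    #include <vector>

    // Average each 2x2 block of the oversized render target into one output pixel.
    // 'src' is (2*outWidth) x (2*outHeight), one 8-bit channel per pixel.
    std::vector<uint8_t> downsample2x(const std::vector<uint8_t>& src,
                                      int outWidth, int outHeight) {
        std::vector<uint8_t> dst(outWidth * outHeight);
        int srcWidth = outWidth * 2;
        for (int y = 0; y < outHeight; ++y) {
            for (int x = 0; x < outWidth; ++x) {
                int sx = x * 2, sy = y * 2;
                int sum = src[sy * srcWidth + sx]       + src[sy * srcWidth + sx + 1]
                        + src[(sy + 1) * srcWidth + sx] + src[(sy + 1) * srcWidth + sx + 1];
                dst[y * outWidth + x] = static_cast<uint8_t>(sum / 4); // blended edge
            }
        }
        return dst;
    }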
Vertex and pixel shaders
To finish up our discussion of rendering techniques, let's talk quickly about vertex and pixel shaders, which have been getting a lot of attention lately. Vertex shaders are a way of using the graphics hardware's features directly, without going through the API much at all. For example, if a card supports hardware T&L, you can program in DirectX or OpenGL and hope your vertices go through the T&L unit (there's no way to be sure, since it's all handled inside the driver), or you can go straight at the hardware. Vertex shaders allow you to write code specifically tailored to the card's features - your own specialized code that uses the T&L engine and whatever else the card has to offer, to your greatest advantage. In fact, both NVIDIA and ATI now provide this feature on a large number of their cards.

Unfortunately, the way cards expose vertex shaders is not consistent. You can't write one piece of vertex shader code that will run on any card, the way you can with DirectX or OpenGL, and that is bad news. However, because you are talking directly to the hardware, it offers the greatest promise for fast rendering of the effects vertex shaders make possible (as well as for creating very cool effects - things the APIs give you no way to do). In fact, vertex shaders are really bringing 3D cards back to the coding approach of the game consoles: direct access to the hardware, making the most of what the system offers, rather than relying on an API to do everything for you. For some programmers this style of coding will come as a shock, but that is the price of progress.

To spell it out: a vertex shader is a program or routine that computes and applies effects to each vertex before the vertex is submitted to the card for rendering. You can do this in software on the main CPU, or run it in hardware on the card itself. Transforming the mesh of an animated model is a prime candidate for a vertex program.
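As a point of reference, here is a minimal C++ sketch of the kind of work a vertex program takes over: transforming a vertex and computing simple diffuse lighting for it before it goes off to be rendered. This is my own CPU-side illustration of the concept, not actual shader code for any particular card:

    #include <cmath>

    struct Vec3 { float x, y, z; };

    float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

    struct Vertex {
        Vec3  position;
        Vec3  normal;
        float r, g, b;      // vertex color produced by the "shader"
    };

    // Per-vertex routine: rotate the vertex about the Y axis and light it with a
    // single directional light (normal and lightDir assumed unit length). A
    // hardware vertex shader does this sort of math on the card instead.
    void shadeVertex(Vertex& v, float angle, const Vec3& lightDir) {
        float c = std::cos(angle), s = std::sin(angle);
        Vec3 p = v.position;
        v.position = { c * p.x + s * p.z, p.y, -s * p.x + c * p.z };
        Vec3 n = v.normal;
        v.normal = { c * n.x + s * n.z, n.y, -s * n.x + c * n.z };

        // Simple diffuse term: cosine of the angle to the light, clamped at zero.
        float diffuse = dot(v.normal, lightDir);
        if (diffuse < 0.0f) diffuse = 0.0f;
        v.r = v.g = v.b = diffuse;
    }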
Pixel shaders are routines that you write, and that are executed per pixel when a texture is drawn. You effectively override the blend-mode calculations that the card's hardware would normally perform, which lets you do some very cool pixel effects: blurring textures in the distance, adding gun-smoke haze, producing reflections in water, and so on. Once ATI and NVIDIA can actually agree on a pixel shader version (and DX9's new high-level shading language should help push that standard along), I wouldn't be surprised to see DirectX and OpenGL go the way of Glide - helpful to get started with, but ultimately not the best way to push any given card to its limit. I know I'll be interested to watch how it unfolds.
Finally, the renderer is where the game programmer gets judged the most. Visual flair counts for a great deal in this business, so it pays to know what you're doing. One of the worst aspects of life as a renderer programmer is the speed at which the 3D card industry changes. One day you're trying to get some flashy image effect to work correctly; the next day NVIDIA is giving presentations on vertex shader programming. And it moves very fast: by and large, code written four years ago for the 3D cards of that era is obsolete today and needs to be rewritten. Even John Carmack has described it this way - he knows that the clever code he wrote four years ago to get the most out of the cards of that period is quite ordinary now, hence his desire to completely rewrite the renderer for each new id project. Epic's Tim Sweeney agrees - here is a comment he gave me last year:

"We've spent a good nine months replacing all of the rendering code. The original Unreal was designed for software rendering and later extended to hardware rendering. The next-generation engine is designed for GeForce and better graphics cards, with a polygon throughput 100 times that of Unreal Tournament. This requires replacing the renderer wholesale. Fortunately, the engine is modular enough that we've been able to keep the rest of the engine - the editor, physics, AI, networking - unchanged, though we have been improving them in many ways."
Sidebar: APIs - a blessing and a curse
So what is an API? It's an Application Programming Interface, which presents a consistent front end on top of what may be an inconsistent back end. For example, the way 3D is implemented differs, to a large extent, from one 3D card to the next. However, they all present a consistent front end to the end user or the programmer, so the programmer knows that the code written for 3D card X will give the same results on 3D card Y. Well, that's the theory anyway. About three years ago that might have been a fairly accurate statement, but since then things have changed in the 3D card industry, with NVIDIA leading the way.

Today on the PC, unless you are planning to build your own software rasterizer, using the CPU to draw all your sprites, polygons, and particles - and people still do this; like Unreal, Age of Empires II: Age of Kings has an excellent software renderer - you will be using one of two possible graphics APIs: OpenGL or DirectX. OpenGL is a genuinely cross-platform API (software written with it can run on Linux, Windows, and MacOS), it has been around for years and is well understood, but it is also starting to show its age a little. Until about four years ago, defining the OpenGL driver feature set was the direction all the card vendors were working toward. Once that target was reached, however, there was no agreed road map of features to keep working toward, and at that point the card developers started to go their separate ways, using OpenGL extensions. 3dfx created the T-buffer. NVIDIA pursued hardware transform and lighting. Matrox went after bump mapping. And so on. My earlier statement, "things have changed in the 3D card world over the past few years," puts it mildly.
The other API of choice is DirectX. This one is controlled by Microsoft and is superbly supported on the PC and the Xbox. For obvious reasons, DirectX has no Apple or Linux version. Because Microsoft controls it, it tends to be well integrated into Windows. The fundamental difference between OpenGL and DirectX is that the former is owned by the 'community' while the latter is owned by Microsoft. If you want DirectX to support a new feature of your 3D card, you have to lobby Microsoft, hope they adopt your wish, and then wait for a new release of DirectX. With OpenGL, because the card manufacturer supplies the driver for the card, you can get at a new feature immediately via an OpenGL extension. That's fine, but as a game developer you can't count on those extensions being widespread when you code your game. They might speed your game up by 50%, but you can't require someone to own a GeForce 3 just to run your game. Well, you can, but it's a pretty silly idea if you want to stay in this business.
This is a vast simplification of the issue, and there are exceptions to just about everything I've described, but the general idea holds true. With DirectX you can easily know exactly what features you can expect to get from a card, and if a feature isn't available, DirectX will emulate it in software (not always a good thing either, since emulation can be painfully slow, but that's another story). With OpenGL you can get much closer to the metal of the card, but the price is that you can't be sure exactly which features will be available.
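As a practical illustration of that last point, a game might check at run time whether a given OpenGL extension is actually present before relying on it. Here is a minimal C++ sketch of that check (assuming a GL context has already been created; the extension name used below is just an example):

    #include <cstring>
    #include <GL/gl.h>

    // Returns true if the current OpenGL driver advertises the named extension.
    // GL_EXTENSIONS is a single space-separated string of extension names.
    bool hasExtension(const char* name) {
        const char* extensions =
            reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
        return extensions != nullptr && std::strstr(extensions, name) != nullptr;
    }

    // Usage (example extension name): fall back to a simpler path if missing.
    // if (hasExtension("GL_ARB_multitexture")) { /* use the fast path */ }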