Game Engine Anatomy
Original author: Jake Simpson
Translator: GameWorldChina@myway.com
Part 1: Introduction to the Game Engine, Rendering, and Building the 3D World
We have come a long way since the days of Doom. Doom was not just a great game; it also ushered in a new game-programming model: the game "engine". This modular, extensible, expandable design lets gamers and programmers dig into the heart of a game and create new games with new models, scenery, and sounds, or add new material to existing games. A large number of new games have been built on existing game engines, most of them based on id Software's Quake engine, including Counter-Strike, Team Fortress, TAC OPS, Strike Force, and Quake Soccer.
TAC OPS and Strike Force actually use the Unreal Tournament engine. "Game engine" has in fact become part of the standard vocabulary gamers use with one another, but where does the engine stop and the game start? Pixel rendering, sound playback, monsters thinking, game events triggering: what exactly is going on behind the scenes of a game? If you have ever wondered about any of this and want to know what drives a game, this article will tell you. This series dissects the engine in multiple parts, with particular attention to the Quake engine, because Raven Software (the author's company) has developed several games based on it, including the famous Soldier of Fortune.
Let's Begin
Let's first look at the main difference between a game engine and the game itself. Many people confuse the game engine with the entire game. That's a bit like confusing a car engine with the whole car. You can take the engine out of the car, build another body around it, and use the engine again. Games work the same way. The game engine is defined as all the technology that is not specific to one game. The game part is all the content known as "assets" (models, animations, sounds, artificial intelligence, and physics) plus the program code written specifically to make that game run or to control how it runs, such as the AI code.
For those who have looked at the structure of Quake, the game engine is quake.exe, and the game parts are qagame.dll and cgame.dll. If you don't know what that means, it doesn't matter; until someone explained it to me, I didn't know what it meant either. But by the end you will fully understand. This guide to game engines is divided into eleven parts. Yes, count them, eleven parts! Each part runs about 3,000 words. Let's start our exploration with part one and dig down into the core of the games we play, where we'll cover some basic concepts that lay the groundwork for the later chapters.
The Renderer
Let's start our look at game-engine design with the renderer, and we'll explore these issues from the perspective of a game developer (the author of this article). In fact, throughout this article we will frequently discuss things from the game developer's point of view, and get you thinking about the problems we think about!
What is a renderer, and why is it so important? Well, without one you wouldn't see anything. It visualizes the game scene so the player/viewer can see it and make appropriate decisions based on what they see on the screen. Some of what follows may sound intimidating, but don't let that put you off; we will explain the important questions of what the renderer does and why it is necessary. When building a game engine, the first thing you usually want to do is build the renderer, because if you can't see anything, how do you know your code is working? More than 50% of CPU processing time is typically spent in the renderer, and it is usually the part by which game developers are judged most harshly. If we do a poor job here, things get very bad: our programming technique, our game, and our company become an industry joke within ten days. It is also the area where we depend most on outside vendors and outside forces, and where they address the widest range of potential performance targets. So building a renderer really isn't as glamorous as it sounds, but without a good one a game may never make it into the top ten of the charts.
These days, getting pixels onto the screen involves 3D accelerator cards, APIs, 3D-space mathematics, an understanding of how 3D hardware works, and so on. For console games the same kind of knowledge is required, but at least with a console you aren't trying to hit a moving target: a console's hardware configuration is a fixed "snapshot in time", and unlike a PC, it does not change over the console's lifetime.
In a general sense, the renderer's job is to create the visual flash of the game, and actually achieving that takes a great deal of skill. 3D graphics is essentially the art of creating the maximum effect with the least effort, because additional 3D processing is extremely expensive in both processor time and memory bandwidth. It is also a budgeting exercise: you have to decide where you want to spend processor time and where you would rather save it, in the interest of the best overall effect. Next we'll introduce some of the tools used here and how to put them to work.
Building the 3D World
Recently I spoke with someone who has worked in computer graphics for several years. She confided to me that when she first saw a computer 3D image being manipulated in real time, she had no idea how it was done, or how a computer could even store a 3D image. The same is probably true today for the ordinary person on the street, even those who regularly play PC games, console games, or arcade games.
Below we will discuss some of the details from a game designer's perspective; you should also look at Dave Salvator's "3D Pipeline Tutorial" to get an overall picture of the main steps involved in generating a 3D image.
3D objects are stored as sets of points in the 3D world (called vertices), with relationships to one another, so the computer knows how to draw lines or fill surfaces between those points. A cube consists of 8 points, one for each corner, and of 6 surfaces, one for each of its faces. This is the basis of how 3D objects are stored. For more complicated 3D objects, such as a Quake level, there will be thousands (sometimes hundreds of thousands) of vertices and thousands of polygonal surfaces.
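Purely as an illustration (the article itself shows no code), here is a minimal sketch of how a mesh like the cube just described might be stored in memory. The structure names are invented for this example and are not from any particular engine.

```cpp
#include <array>
#include <vector>

// A point in 3D space.
struct Vertex {
    float x, y, z;
};

// A face that indexes into the mesh's shared vertex list.
// Hardware ultimately wants triangles, so each quad face of the
// cube would be split into two triangles before rendering.
struct Face {
    std::array<int, 4> indices;
};

// A mesh: shared vertices plus the faces built from them.
struct Mesh {
    std::vector<Vertex> vertices;
    std::vector<Face>   faces;
};

// A unit cube: 8 corner vertices and 6 quad faces.
Mesh MakeCube() {
    Mesh cube;
    cube.vertices = {
        {0,0,0}, {1,0,0}, {1,1,0}, {0,1,0},   // back corners
        {0,0,1}, {1,0,1}, {1,1,1}, {0,1,1}    // front corners
    };
    cube.faces = {
        {{0,1,2,3}}, {{4,5,6,7}},             // back, front
        {{0,1,5,4}}, {{2,3,7,6}},             // bottom, top
        {{1,2,6,5}}, {{0,3,7,4}}              // right, left
    };
    return cube;
}
```

A Quake level is the same idea, only with thousands of vertices and faces instead of eight and six.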
See the wireframe representation above (Note: there is a picture here in the original). It is essentially the same idea as the cube example above, just a complex scene composed of many small polygons. How models and the world are stored is part of the renderer, not part of the application/game side. The game logic doesn't need to know how objects are represented in memory, or how the renderer will display them. The game simply needs to know that the renderer will draw objects using the correct field of view and will display the correct models in the correct animation frames. In a good engine, the renderer should be completely replaceable by a new renderer without changing a single line of game code. Many cross-platform engines, and many in-house console engines, work this way; with the Unreal engine, for example, the renderer in the GameCube version of a game could be swapped out at will.
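A hedged sketch of what that kind of separation might look like in code is shown below. The interface and its methods are invented for illustration and are not taken from any particular engine.

```cpp
#include <vector>

// Minimal placeholder types, invented for this sketch.
struct Camera {
    float position[3];
    float forward[3];
    float fovDegrees;
};

struct ModelInstance {
    int   modelId;         // which model to draw
    int   animationFrame;  // which frame of its animation
    float position[3];     // where it sits in the world
};

// An abstract renderer interface: the game only ever talks to this,
// so the concrete back end (Direct3D, OpenGL, a console renderer...)
// can be swapped out without touching the game code.
class IRenderer {
public:
    virtual ~IRenderer() = default;
    virtual void BeginFrame(const Camera& camera) = 0;
    virtual void DrawModel(const ModelInstance& instance) = 0;
    virtual void EndFrame() = 0;
};

// Game-side code: hand the renderer whatever should be drawn this frame.
void RenderScene(IRenderer& renderer, const Camera& camera,
                 const std::vector<ModelInstance>& visibleObjects) {
    renderer.BeginFrame(camera);
    for (const ModelInstance& object : visibleObjects) {
        renderer.DrawModel(object);
    }
    renderer.EndFrame();
}
```

The point of the design is that the game never sees how the renderer stores or culls geometry; it only describes what should appear on screen.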
Let's take a look at the internal representation. Besides using coordinates, there are other ways to represent points in space in the computer's memory.
In mathematics you can describe a line or a curve with an equation and derive polygons from it, and polygons are what almost all 3D display cards use as their final rendering primitive. A primitive is the lowest-level drawing (rendering) unit you can use on any display card, and almost all hardware uses the three-vertex polygon, the triangle. The newer nVidia and ATI cards do allow you to render in a mathematical way (known as higher-order surfaces), but since this isn't standard across all graphics cards, you can't rely on it as your rendering strategy.
From a computational standpoint this is usually somewhat expensive, but it is often the foundation of new experimental techniques, such as terrain rendering or softening the hard edges of objects. We will say more about these higher-order surfaces in the curved-surfaces (patches) section below.
Culling Overview
Here's the problem. I now have a world described by hundreds of thousands of vertices/polygons. I'm standing at one point in this 3D world, looking in a particular direction in first-person view. Some of the world's polygons are in view, while others are not, because objects such as visible walls block them. Even the best game coders cannot push 300,000 triangles in view through a current 3D card and still hold 60 fps (a key target). The card simply cannot handle it, so we have to write code that removes the polygons that cannot be seen before they are handed to the card. That process is called culling. There are many different culling approaches, but before we look at them, let's explore why the graphics card cannot handle a super-high polygon count. Didn't I just say the latest cards can process millions of polygons per second? Shouldn't they be able to handle this? First you need to understand the difference between the polygon rates claimed in marketing and real-world polygon rates. The marketing polygon rate is the rate the card can achieve in theory.
If all the polygons are on screen, using the same texture and the same size, and the application feeding the card is doing nothing except transmitting polygons, then the number the graphics chip maker quotes is how many polygons the chip can process.
However, in a real game situation the application is often doing lots of other things in the background: 3D transforms on the polygons, lighting calculations, copying more textures to the card's memory, and so on. Not only the textures are sent to the card, but also the details of every polygon. Some of the newer cards let you store model/world geometry in the card's own memory, but this can be expensive, eating memory that would normally be used for textures, so you had better be sure you are using those model vertices every frame, otherwise you are just wasting storage on the card. We'll leave it there. The important point is that in actual use you won't reach the numbers you see on the card's box, and if you have a slow CPU or not enough memory, the gap is especially wide.
Basic Culling Methods
The simplest way to cull is to divide the world into regions, where each region holds a list of the other regions visible from it. That way, for any given point, you only need to display the parts marked as visible. The trick is how you generate the list of visible regions. There are many ways to generate these visibility lists, such as BSP trees, portals, and so on.
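As a minimal, hypothetical sketch of that region-to-region visibility-list idea (the structures here are invented for illustration):

```cpp
#include <vector>

// Each region stores the indices of the other regions that can be
// seen from inside it (its precomputed visibility list).
struct Region {
    std::vector<int> visibleRegions;
    std::vector<int> polygonIndices;   // polygons that live in this region
};

// Given the region the camera is currently in, gather only the polygons
// belonging to regions on its visibility list (plus its own).
std::vector<int> GatherVisiblePolygons(const std::vector<Region>& world,
                                       int cameraRegion) {
    std::vector<int> result = world[cameraRegion].polygonIndices;
    for (int regionIndex : world[cameraRegion].visibleRegions) {
        const Region& region = world[regionIndex];
        result.insert(result.end(),
                      region.polygonIndices.begin(),
                      region.polygonIndices.end());
    }
    return result;   // everything else is culled before it reaches the card
}
```

Generating those visibility lists in the first place is the hard part, and that is where techniques like BSP trees and portals come in.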
You can be sure that if you've heard Doom or Quake discussed, you've heard the term BSP. It stands for Binary Space Partitioning.
A BSP is a way of dividing the world into small sections and organizing the world's polygons so that it is easy to determine which areas are visible and which are not, which is handy for software-based renderers that cannot afford to do too much work. It also lets you find out where you are in the world in a very efficient way.
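As a rough, illustrative sketch (not the actual Quake code), a BSP node and the walk that finds which leaf region contains a point might look something like this. The names and layout are assumptions made for this example.

```cpp
#include <memory>

// A plane in the form ax + by + cz + d = 0.
struct Plane {
    float a, b, c, d;
    float Side(float x, float y, float z) const {
        return a * x + b * y + c * z + d;   // > 0 means in front, < 0 behind
    }
};

// Each internal node splits space with a plane; leaves are the convex
// regions the world ends up divided into. For simplicity this sketch
// assumes internal nodes always have both children.
struct BspNode {
    Plane splitter;
    int leafRegion = -1;                    // valid only for leaves
    std::unique_ptr<BspNode> front, back;
    bool IsLeaf() const { return !front && !back; }
};

// Walk down the tree, choosing front or back at every splitter, until
// we land in the leaf that contains the point. This is how an engine
// can answer "where am I in the world?" very cheaply.
int FindLeaf(const BspNode& node, float x, float y, float z) {
    if (node.IsLeaf()) {
        return node.leafRegion;
    }
    const BspNode& next =
        node.splitter.Side(x, y, z) >= 0.0f ? *node.front : *node.back;
    return FindLeaf(next, x, y, z);
}
```

Once you know which leaf the camera is in, a precomputed visibility list for that leaf tells you which other leaves can possibly be seen.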
In a portal-based engine (an approach pioneered by the Prey project that 3D Realms later canceled), each area (or room) is built as its own model, and the areas are connected to one another by doors (or portals) through which the neighboring area can be seen. The renderer draws each area individually as a separate scene. That's the general idea. Suffice it to say that some form of visibility determination is a required part of any renderer, and it is very important.
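Purely to illustrate the idea (all names and helpers here are invented placeholders), a portal renderer can be thought of as a recursive walk: draw the room the camera is in, then peer through each of its portals and draw whatever room lies beyond, clipped to the portal opening.

```cpp
#include <vector>

struct Frustum { /* planes bounding the currently visible volume */ };

struct Portal {
    int destinationRoom;   // the room that can be seen through this opening
};

struct Room {
    std::vector<int> polygonIndices;
    std::vector<Portal> portals;
};

// Placeholder helpers; a real engine supplies these. PortalVisible must
// eventually return false (the clipped frustum shrinks to nothing),
// which is what terminates the recursion below.
void DrawPolygons(const Room&, const Frustum&) {}
bool PortalVisible(const Portal&, const Frustum&) { return false; }
Frustum ClipFrustumToPortal(const Frustum& view, const Portal&) { return view; }

// Draw the room the camera is in, then recurse through each visible
// portal into the room beyond it, shrinking the view frustum to the
// portal opening, so rooms hidden behind solid walls are never drawn.
void DrawRoom(const std::vector<Room>& world, int roomIndex,
              const Frustum& view) {
    const Room& room = world[roomIndex];
    DrawPolygons(room, view);
    for (const Portal& portal : room.portals) {
        if (PortalVisible(portal, view)) {
            DrawRoom(world, portal.destinationRoom,
                     ClipFrustumToPortal(view, portal));
        }
    }
}
```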
Although some of these techniques fall under the heading of occlusion culling, they all have the same purpose: to eliminate unnecessary work as early as possible.
For an FPS (first-person shooter), where there are often many triangles in view and the player takes control of the camera, discarding or culling the triangles that cannot be seen is absolutely essential. The same goes for space simulations, where you can see a very long way: culling things beyond the visual range is very important. For games where the view is constrained, such as an RTS (real-time strategy game), this is usually easier to implement. Today this part of the job is usually done in software by the renderer rather than by the graphics card, but it is only a matter of time before the card takes it over.
The Basic Graphics Pipeline
As a simple example, the flow from the game through the polygon graphics pipeline looks roughly like this:
· The game determines which objects are in the game, their models, the textures they use, what animation frame they might be in, and where they are in the game world. The game also determines the camera's position and orientation.
· The game passes this information to the renderer. Taking a model as an example, the renderer first looks at the size of the model and the position of the camera, then determines whether the model is fully visible on screen, off to the left of the viewer (the camera view), behind the viewer, or too far away to be seen. It may even use some world-determination scheme to work out whether the model is visible (see the next point).
· The world visibility system determines where the camera is in the world and which regions/polygons of the world can be seen from the camera's viewpoint. There are many ways to do this, from the brute-force approach of dividing the world into regions, where each region simply states "from region D I can see regions A, B and C", up to the more sophisticated BSP (binary space partitioning) worlds. All polygons that survive these culling tests are passed on to the polygon renderer for drawing.
· For each polygon passed to the renderer, the renderer transforms it according to the local math (that is, the model animation) and the world math (where is it relative to the camera?), and then checks whether the polygon is back-facing (that is, facing away from the camera). Back-facing polygons are discarded (a small sketch of this test appears after this list). Non-back-facing polygons are lit by the renderer according to the nearby lights it finds. Then the renderer looks at the texture the polygon uses and makes sure the API/graphics card is using that texture as its rendering basis. At this point the polygon is handed to the rendering API and then on to the graphics card.
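To make one of those steps concrete, here is a hedged sketch, written for this article rather than taken from any engine, of the back-face test just described: a triangle whose normal points away from the camera can be thrown out before it is ever lit or textured. It assumes the triangle's vertices are wound counter-clockwise when seen from the front.

```cpp
struct Vec3 {
    float x, y, z;
};

Vec3 Subtract(const Vec3& a, const Vec3& b) {
    return {a.x - b.x, a.y - b.y, a.z - b.z};
}

Vec3 Cross(const Vec3& a, const Vec3& b) {
    return {a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
}

float Dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// A triangle is back-facing when its surface normal points away from
// the camera, i.e. the angle between the normal and the direction from
// the triangle to the camera is more than 90 degrees.
bool IsBackFacing(const Vec3& v0, const Vec3& v1, const Vec3& v2,
                  const Vec3& cameraPosition) {
    Vec3 normal   = Cross(Subtract(v1, v0), Subtract(v2, v0));
    Vec3 toCamera = Subtract(cameraPosition, v0);
    return Dot(normal, toCamera) <= 0.0f;   // discard these polygons
}
```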
Obviously this is oversimplified, but you get the idea. The outline below is taken from Dave Salvator's 3D pipeline article and gives you some more specific details:
3D Pipeline - High-Level Overview
1. Application / Scene
· Scene/geometry database traversal
· Movement of objects, and aiming and movement of the view camera
· Animated movement of object models
· Description of the contents of the 3D world
· Object visibility check, including possible occlusion culling
· Level-of-detail (LOD) selection
2. Geometry
· Transforms (rotation, translation, scaling)
· Transform from model space to world space (Direct3D)
· Transform from world space to view space
· View projection
· Trivial accept/reject culling
· Back-face culling (can also be done later, in screen space)
· Lighting
· Perspective divide - transform to clip space
· Clipping
· Transform to screen space
3. Triangle Setup
· Back-face culling (or done in view space before lighting)
· Slope/delta calculations
· Scan-line conversion
4. Rendering / Rasterization
· Shading
· Texturing
· Fog
· Alpha translucency tests
· Depth buffering
· Antialiasing (optional)
· Display
Usually you put all the polygons into a list and then sort that list by texture (so you only have to send each texture to the card once, rather than once per polygon), and so on. In the past, polygons were also sorted by their distance from the camera, and you drew the ones furthest from the camera first, but with the arrival of Z-buffers that approach is far less important now. The exception is transparent polygons: they have to be drawn after all the non-translucent polygons, so that everything behind them shows through correctly in the scene, and of course those transparent polygons themselves then have to be drawn back to front. But in any given FPS scene there usually aren't that many transparent polygons. It may look as though there are, but their share is actually quite low compared with the opaque polygons.
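As a rough, hypothetical illustration of that batching and ordering (not code from any particular engine):

```cpp
#include <algorithm>
#include <vector>

struct PolygonRef {
    int   textureId;        // which texture this polygon uses
    float distanceToCamera; // used only to order transparent polygons
    bool  transparent;
};

// Sort opaque polygons by texture so each texture is bound only once,
// then append transparent polygons sorted back to front so everything
// behind them shows through correctly.
void OrderForDrawing(std::vector<PolygonRef>& polygons) {
    // Opaque polygons first, transparent ones at the end.
    auto firstTransparent = std::partition(
        polygons.begin(), polygons.end(),
        [](const PolygonRef& p) { return !p.transparent; });

    // Batch the opaque polygons by texture; the Z-buffer handles depth.
    std::sort(polygons.begin(), firstTransparent,
              [](const PolygonRef& a, const PolygonRef& b) {
                  return a.textureId < b.textureId;
              });

    // Transparent polygons are drawn farthest-first.
    std::sort(firstTransparent, polygons.end(),
              [](const PolygonRef& a, const PolygonRef& b) {
                  return a.distanceToCamera > b.distanceToCamera;
              });
}
```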
Once the application hands the scene to the API, the API can take advantage of hardware-accelerated transform and lighting (T&L), which is commonplace in today's 3D cards. This involves matrix math (see Dave's 3D pipeline introduction); the geometric transforms allow the 3D card to draw a polygon in the world, wherever it is supposed to be, at the correct angle from the point of view of the camera's position and direction.
There are a lot of calculations for each point, or vertex, including clipping operations, which determine whether any given polygon is actually visible, completely off screen, or only partially visible. Lighting operations work out how bright the textures need to be, depending on how the world's lights fall on the vertex. In the past the CPU handled these calculations, but contemporary graphics hardware can now do all of this for you, which means your CPU is free to do other things. Obviously this is A Good Thing (TM), but since you can't count on every 3D card on the market having hardware T&L, you will have to write all of these routines yourself anyway (speaking once again from the game developer's point of view). You will see the phrase "A Good Thing (TM)" in various places in this article; I use it for features that make a very useful contribution to making games look better. Not surprisingly, you will also see its opposite; you guessed it, "A Bad Thing (TM)". I am trying hard to secure the copyright on these phrases, and you'll have to pay me a small fee to use them.
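To give a feel for the kind of routine you would otherwise write in software, here is a minimal, hedged sketch; the row-major 4x4 matrix layout and the names are assumptions made for this example, not any particular API's.

```cpp
// Transform a vertex by a 4x4 matrix, the core operation of a software
// transform path (model space -> world space -> view space -> projection).
struct Vec4 {
    float x, y, z, w;
};

struct Mat4 {
    float m[4][4];   // row-major in this sketch
};

Vec4 Transform(const Mat4& mat, const Vec4& v) {
    Vec4 out;
    out.x = mat.m[0][0] * v.x + mat.m[0][1] * v.y + mat.m[0][2] * v.z + mat.m[0][3] * v.w;
    out.y = mat.m[1][0] * v.x + mat.m[1][1] * v.y + mat.m[1][2] * v.z + mat.m[1][3] * v.w;
    out.z = mat.m[2][0] * v.x + mat.m[2][1] * v.y + mat.m[2][2] * v.z + mat.m[2][3] * v.w;
    out.w = mat.m[3][0] * v.x + mat.m[3][1] * v.y + mat.m[3][2] * v.z + mat.m[3][3] * v.w;
    return out;
}

// After projection, the perspective divide maps the vertex into
// normalized device coordinates before the screen-space step.
Vec4 PerspectiveDivide(const Vec4& v) {
    return {v.x / v.w, v.y / v.w, v.z / v.w, 1.0f};
}
```

Hardware T&L does exactly this kind of work, per vertex, so the CPU doesn't have to.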
Curved Surfaces (Higher-Order Surfaces)
Apart from triangles, the use of curved surfaces, or patches, is becoming more common these days. Patches (another name for higher-order surfaces) are very good because they describe geometry with a mathematical expression (usually involving some kind of curve), rather than simply listing a large number of polygons and their positions in the game world. Using them, you can actually build (and deform) the polygon mesh from the equation on the fly and decide how many polygons you actually want the surface to produce. So, for example, you can describe a pipe once and then place many instances of that pipe in the world. In a room where you are already showing 10,000 polygons you can say, "Since we're already displaying a lot of polygons and any more will hurt the frame rate, this pipe should only have 100 polygons." But in another room, with only 5,000 visible polygons in view, you can say, "Since we haven't reached our polygon budget yet, this pipe can have 500 polygons." Very wonderful stuff, but you have to know all of this up front and build the mesh from the equation, which is not trivial. Sending the equation for the same object across the AGP bus, rather than all of its polygons, really is a cost saving. SOF2 uses a variant of this approach to build its terrain system.
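A toy sketch of that budgeting decision is shown below; it is entirely invented for illustration (real patch tessellation, for example of Bezier patches, is considerably more involved), and the constants are the ones used in the pipe example above.

```cpp
#include <algorithm>

// Decide how finely to tessellate one curved-surface object, given how
// much of the frame's polygon budget is still unspent.
int ChoosePatchPolygonCount(int polygonsAlreadyInScene,
                            int framePolygonBudget,
                            int minPolys = 100,
                            int maxPolys = 500) {
    int remaining = framePolygonBudget - polygonsAlreadyInScene;
    if (remaining <= minPolys) {
        return minPolys;                   // busy room: coarse pipe
    }
    return std::min(remaining, maxPolys);  // quiet room: detailed pipe
}

// Example: with a budget of 10,100 polygons, a room already showing
// 10,000 gives the pipe 100 polygons; one showing 5,000 gives it 500.
```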
In fact, current ATI cards have TruForm, which can take a triangle-based model, convert it into a higher-order-surface model to smooth it out, and then convert it back into a triangle model with roughly ten times the triangle count (a process called retessellation). The model is then sent down the rest of the pipeline for further processing. In effect, ATI has simply added a stage in front of the T&L engine to handle this. The drawback is that you have little control over which parts of the model get smoothed and which do not. You often want some edges to stay sharp, such as a nose, and smoothing them is inappropriate. It is still a nice technology, and I can foresee it getting more use in the future.
That's it for part one. In part two we will go on to look at lighting and texturing, and the later chapters will dig even deeper.