Game Engine Analysis 1-1


Original author: Jake Simpson. Translator: "to the sea". Email: GameWorldChina@myway.com

Part 1: Introducing the Game Engine, the Renderer, and Building the 3D World

Introduction

We have come a long way since the days of Doom. Doom was not just a great game; it also ushered in a new game-programming model: the game "engine". This modular, extensible design lets gamers and programmers dig into the core of a game to create new games with new models, scenery, and sounds, or to add new content to existing games. A large number of new games have been built on existing engines, and most of those are based on id Software's Quake engine, including Counter-Strike, Team Fortress, TacOps, Strike Force, and Quake Soccer.

TacOps and Strike Force actually use the Unreal Tournament engine. Indeed, "game engine" has become part of the standard vocabulary gamers use, but where does the engine stop and the game start? Pixel rendering, sound playback, monster AI, the triggering of game events — what is going on behind the scenes of a game? If you have ever wondered about any of these things, and want to know what drives a game, this article is for you. This series dissects game engines in depth across multiple parts, with particular attention to the Quake engine, because Raven Software (the author's company) has developed several games based on it, including the well-known Soldier of Fortune.

Getting Started

Let's first look at the main difference between a game engine and the game itself. Many people confuse the engine with the entire game; that is a bit like confusing a car's engine with the whole car. You can take the engine out of the car, build another body around it, and use the engine again. Games work the same way. The game engine is defined as all the technology that is not specific to one particular game. The game part is all of the content known as "assets" (models, animations, sounds, artificial intelligence, and physics) plus the program code written specifically to make that particular game run, such as its AI code.

For those who have looked at the structure of Quake, the game engine is quake.exe, while the game part is qagame.dll and cgame.dll. If you don't know what that means, it doesn't matter; neither did I before someone explained it to me. By the end, though, you will fully understand what it means. This guided tour of game engines is divided into eleven parts. Yes, count them, eleven! Each part runs to roughly 3,000 words. Let's begin our exploration with Part 1, diving into the kernel of the games we play, where we'll cover some basics that lay the groundwork for the chapters to come...

Renderer

Let's begin our look at game engine design with the renderer, which we will explore from the perspective of a game developer (this article's author). In fact, throughout this article we will often discuss things from the game developer's point of view — and get you thinking about these problems the way we do!

What is the renderer, and why is it so important? Well, without it you would see nothing at all. It visualizes the game scene so that the player/viewer can see it and can make appropriate decisions based on what appears on screen. What exactly does the renderer do? Why is it even necessary? We will address these important questions below.

When building a game engine, the first thing you usually want to build is the renderer — because until you can see something, how do you know your code is working? More than 50% of CPU processing time is typically spent in the renderer, and it is also here that game developers are judged most harshly. If we do a poor job in this area, things get very bad indeed: our programming technique, our game, and our company can become an industry joke within ten days. It is also where we are most dependent on outside vendors and outside forces, and where they handle the widest range of potential operational targets. So building a renderer really isn't as glamorous as it might sound, but without a good one, a game may never make it into the top ten of the sales charts.

Getting pixels onto the screen today involves 3D accelerator cards, APIs, 3D spatial math, an understanding of how 3D hardware works, and more. Console games require the same kind of knowledge, but at least with consoles you are not trying to hit a moving target: a console's hardware configuration is a fixed "snapshot in time" and, unlike a PC's, it does not change during the console's lifetime.

In the general sense, the renderer's job is to create the visual flash of the game, and achieving that actually takes a great deal of skill. 3D graphics is essentially the art of creating the greatest effect with the least effort, because extra 3D processing is extremely expensive in both processor time and memory bandwidth. It is also a budgeting exercise: you figure out where you want to spend processor time, and where you would rather economize, in the interest of the best overall effect. Next we will introduce some of the tools of this trade and how they can best be put to work.

Building the 3D World

Recently I spoke with someone who has worked in computer graphics for several years. She confided to me that when she first saw 3D computer images being manipulated in real time, she had no idea how it was done, and no idea how a computer could even store a 3D image. That is probably true today of the average person on the street, even someone who regularly plays PC games, console games, or arcade games. Below we will discuss some of the details of creating a 3D world from a game designer's perspective; you should also read Dave Salvator's "3D Pipeline Tutorial" to get an overall picture of the main processes involved in generating a 3D image.

3D objects are stored as collections of points in the 3D world (called vertices), together with relationships between them, so the computer knows how to draw lines or fill surfaces between those points. A cube consists of 8 points, one for each corner, and has 6 surfaces, one for each of its faces. This is the basis of how 3D objects are stored. For more complicated 3D objects, such as a Quake level, there can be thousands (sometimes hundreds of thousands) of vertices and thousands of polygonal surfaces.
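As a minimal sketch of the cube example above, the layout below stores 8 corner vertices plus 6 faces that each index 4 of those corners. The structure names are illustrative, not taken from any real engine; a production format would also carry normals, texture coordinates, and so on.

```cpp
#include <array>

// A point in 3D space (a vertex).
struct Vec3 { float x, y, z; };

// Hypothetical minimal storage for the article's cube: 8 vertices,
// and 6 faces that tell the computer which points to connect and fill.
struct CubeMesh {
    std::array<Vec3, 8> vertices {{
        {0,0,0}, {1,0,0}, {1,1,0}, {0,1,0},   // four back corners
        {0,0,1}, {1,0,1}, {1,1,1}, {0,1,1}    // four front corners
    }};
    // Each face lists the indices of its 4 corner vertices.
    std::array<std::array<int, 4>, 6> faces {{
        {{0,1,2,3}}, {{4,5,6,7}},  // back, front
        {{0,1,5,4}}, {{3,2,6,7}},  // bottom, top
        {{0,3,7,4}}, {{1,2,6,5}}   // left, right
    }};
};
```

A Quake-scale level would simply scale these arrays up to thousands of vertices and faces; the principle is identical.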

See the wireframe representation above (translator's note: a figure appears here in the original). It is essentially just like the cube example, only the scene is more complex and composed of many small polygons. How models and the world are stored is the renderer's business, not the application/game's. Game logic does not need to know how objects are represented in memory, nor how the renderer will display them. The game simply needs to know that the renderer will represent objects using the correct field of view and will display the correct models in the correct animation frames.

In a good engine, it should be possible to replace the renderer completely with a new one without changing a line of the game's code. Many cross-platform engines, and many in-house console engines, are built this way — with the Unreal engine, for example, the renderer in a game's GameCube version could be swapped out at will.

Let's look at the internal representation. Besides coordinate systems, there are other ways to represent points in space in computer memory. In mathematics you can describe a line or curve with an equation and derive a polygon from it, and polygons are what almost all 3D display cards use as their final rendering primitive. A primitive is the lowest-level drawing (rendering) unit you can use on any display card, and almost all hardware uses the three-vertex polygon: the triangle. The newer generations of nVidia and ATI cards do allow you to render mathematically described surfaces (known as higher-order surfaces), but since this is not standard across all graphics cards, you cannot yet rely on it as a rendering strategy. Doing so is usually somewhat expensive computationally, but it is often the foundation of new experimental techniques, such as terrain rendering or softening the hard edges of objects. We will further define these higher-order surfaces in the curved-surfaces section below.
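The "equation in, triangles out" idea above can be sketched concretely. A circle is described by a single equation, but the hardware wants triangle primitives, so the helper below (an illustrative function, not part of any real API) approximates the circle with a fan of triangles; the more segments, the smoother the result.

```cpp
#include <cmath>
#include <vector>

struct Vertex   { float x, y; };
struct Triangle { Vertex a, b, c; };

// Turn a mathematically described circle into triangle primitives by
// building a fan of `segments` triangles around the centre point.
std::vector<Triangle> tessellateCircle(float cx, float cy, float r, int segments) {
    std::vector<Triangle> tris;
    const float step = 2.0f * 3.14159265f / segments;
    for (int i = 0; i < segments; ++i) {
        float a0 = step * i, a1 = step * (i + 1);
        tris.push_back({
            {cx, cy},                                        // centre
            {cx + r * std::cos(a0), cy + r * std::sin(a0)},  // rim point i
            {cx + r * std::cos(a1), cy + r * std::sin(a1)}   // rim point i+1
        });
    }
    return tris;  // a list of primitives, ready to hand to the card
}
```

This trade-off — triangle count versus smoothness — is exactly the budget decision the renderer has to make everywhere.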

Culling Overview

Here's the problem: I now have a world described by hundreds of thousands of vertices/polygons, and I am standing on one side of this 3D world in first-person view. Some of the world's polygons are in my field of view, while others are not, because some objects — say, a wall in front of me — occlude them. Even the best game coders cannot process 300,000 triangles in view on a current 3D card and still maintain 60 fps (a primary goal). The card simply cannot handle it, so we must write code that removes the polygons that cannot be seen before passing the rest to the card. This process is called culling.

There are many different culling approaches. Before getting into them, let's explore why the graphics card cannot handle a super-high polygon count. Didn't I just say the latest cards can handle millions of polygons per second? Shouldn't they cope? First, you have to understand the difference between marketing-claimed polygon rates and real-world polygon rates. The claimed rate is the rate the card could achieve under ideal conditions.

If every polygon is on screen, all with the same texture and the same size, and the application is doing nothing but feeding polygons to the card, then the figure the graphics chip maker quotes is the number of polygons it can process. In a real game situation, however, the application is doing many other things in the background: 3D polygon transforms, lighting calculations, copying additional textures into graphics card memory, and so on. Not only textures but also the details of each polygon must be sent to the card. Some of the newer cards let you store model/world geometry in the card's own memory, but this can be costly, since it consumes memory that textures would normally use; you had better be sure you will use those model vertices every frame, or you are just wasting storage on the card. The point is this: what you actually get out of a graphics card will not necessarily match the figures on its box, and if you have a slow CPU, or not enough memory, the gap is especially wide.

Basic Culling Methods

The simplest way to cull is to divide the world into regions, each with a list of the other regions visible from it. That way, you only need to display the parts visible from any given point. Generating the list of visible regions is the tricky part. There are many ways to generate visible-region lists, such as BSP trees, portals, and the like.
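The region-list idea reduces run-time culling to a table lookup. The sketch below uses assumed names (`Region`, `regionsToDraw`); Quake's precomputed "potentially visible set" works on a broadly similar principle, though the real thing is far more involved.

```cpp
#include <vector>

// Each region stores a precomputed list of the other regions
// that can be seen from inside it.
struct Region {
    std::vector<int> visibleRegions;  // indices of regions seen from here
};

// At run time, culling is just: draw the region the player stands in,
// plus everything on its visibility list. Every other region is
// discarded without any further per-polygon work.
std::vector<int> regionsToDraw(const std::vector<Region>& world, int playerRegion) {
    std::vector<int> result { playerRegion };
    const std::vector<int>& vis = world[playerRegion].visibleRegions;
    result.insert(result.end(), vis.begin(), vis.end());
    return result;
}
```

The expensive part — deciding which regions see which — happens offline, when the level is built, which is exactly why the approach is so fast in-game.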

You have surely heard the term BSP used when people talk about Doom or Quake. It stands for Binary Space Partitioning. A BSP is a way of dividing the world into small regions and organizing the world's polygons so that it is easy to determine which regions are visible and which are not — very handy for software-based renderers that want to avoid doing too much work. It also lets you determine, in a very efficient way, exactly where you are in the world.
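A toy version of the "where am I in the world" query can illustrate the principle, under assumed structure names. Each internal node of the tree splits space with a plane; leaves are numbered regions. Locating a point takes one plane test per tree level, rather than a search of the whole world.

```cpp
#include <memory>

// One node of a toy BSP tree: internal nodes carry a splitting plane
// (ax + by + cz = d); leaf nodes carry a region id.
struct BspNode {
    float a = 0, b = 0, c = 0, d = 0;    // splitting plane
    int leafRegion = -1;                 // valid only at leaves
    std::unique_ptr<BspNode> front, back;
    bool isLeaf() const { return !front && !back; }
};

// Walk down the tree, choosing a side at each plane, until a leaf
// names the region containing the point. Cost is O(tree depth).
int locateRegion(const BspNode& node, float x, float y, float z) {
    if (node.isLeaf()) return node.leafRegion;
    float side = node.a * x + node.b * y + node.c * z - node.d;
    return side >= 0 ? locateRegion(*node.front, x, y, z)
                     : locateRegion(*node.back, x, y, z);
}
```

Real BSP compilers also sort the polygons themselves into the tree, which is what makes visibility determination cheap; this sketch shows only the point-location half.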

In a portal-based engine (an approach first popularized by 3D Realms' long-cancelled Prey project), each region (or room) is built as its own model, with doors (or portals) through which the adjacent regions can be seen. The renderer draws each region individually as a standalone scene. That is the general idea. Suffice it to say that culling is a required part of any renderer, and it is very important. Some of these techniques fall under the heading of "occlusion culling", but they all have the same purpose: eliminate unnecessary work as early as possible.
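The portal traversal just described can be sketched as a short recursion, with illustrative names throughout (a real engine would also clip the view frustum down through each portal rather than use a simple `open` flag).

```cpp
#include <vector>

struct Portal { int toRoom; bool open; };          // a door into another room
struct Room   { std::vector<Portal> portals; };

// Starting from the room the player is in, collect every room reachable
// through open portals. Rooms behind closed doors are never visited,
// so their polygons are culled without being examined.
void collectVisibleRooms(const std::vector<Room>& rooms, int current,
                         std::vector<bool>& visited, std::vector<int>& drawList) {
    if (visited[current]) return;      // render each room at most once
    visited[current] = true;
    drawList.push_back(current);       // this room is drawn as its own scene
    for (const Portal& p : rooms[current].portals)
        if (p.open)
            collectVisibleRooms(rooms, p.toRoom, visited, drawList);
}
```

Note how the recursion naturally matches the article's description: each room is an independent scene, and the portals decide how far visibility propagates.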

For an FPS (first-person shooter), where there are often many triangles in the field of view and the player controls the viewpoint, discarding or culling the triangles that cannot be seen is absolutely essential. The same holds for space simulations, where you can see a very long way: culling what lies beyond the visual range is very important. For games with a constrained view — an RTS (real-time strategy game), say — this is usually easier to implement. This part of the renderer is usually done in software rather than by the graphics card; having the card take it over is only a matter of time.

Please credit the original source when reposting: https://www.9cbs.com/read-22022.html
