Game Engine Analysis 5
Original author: Jake Simpson. Translator: to the sea. Email: GameWorldChina@myway.com
Part 5: Physics, Movement, Effects
World construction
When you create a game with any 3D component, at some point you are going to try to build the 3D environment in which the game's action takes place. You want this environment to be easy to modify, efficient, low in polygon count, easy to render, and easy to apply physics to. Simple, right? And what should I do with my left hand while I'm doing all this? Right. Got it.
Although there is no shortage of 3D construction programs, from CAD/CAM packages to 3D Studio Max, building a game world is a different job from modeling an interior or an exterior scene. You have triangle-count issues: any given renderer can only push so many polygons at a time, and that number is never enough for your genius level designers. On top of that, you can only store a certain number of polygons per level. So even if your renderer can handle 250,000 polygons in view at once, if you can only store 500,000 polygons in a reasonable amount of space, then depending on how you slice it, your level ends up being about two rooms big. Not good.
Either way, developers need to come up with creation tools - ideally flexible enough to allow for all the things the game engine needs - for example, placing objects in the world, previewing the level as you build it, and accurate lighting previews. These capabilities let designers see what a level will look like in the game before spending three hours converting it into an 'engine-digestible' format. Developers also need statistics on the level: polygon counts, mesh counts, and so on. They need a friendly way to texture-map the world geometry, easy access to polygon-reduction tools, and so on. The list goes on.
Some of this functionality is available in existing tools. Many developers use Max or Maya to build their levels, but even 3D Studio Max needs some task-specific extensions to do level construction efficiently. It may even make sense to use a dedicated level-construction tool like QERadiant (see figure below) and add an exporter that writes a format your engine can interpret.
[Figure: the QERadiant level-construction tool]
Think back to the BSP (binary space partitioning) trees we discussed in the first part. You may also have heard the term potentially visible set (PVS) thrown around in the same discussions. Both have the same goal: to reduce the world to the smallest subset of its geometry that could possibly be seen from any given position. When queried, they return only the surfaces you might see, not the ones hidden behind walls. You can imagine the benefit this gives a software renderer, where every pixel rendered (and possibly re-rendered) is precious. They can also return those walls in front-to-back order, which is handy when rendering, because you can place an object correctly in the rendering order.
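The PVS idea described above can be sketched in a few lines. This is a hypothetical, minimal model (the leaf names and table layout are illustrative, not any real engine's format): the world is split into leaves, and each leaf carries a precomputed set of leaves visible from it, so runtime visibility becomes a cheap lookup rather than a trace.

```python
# Hypothetical PVS sketch: each leaf maps to the set of leaves visible from it.
PVS = {
    "hall":  {"hall", "lobby"},           # from the hall you can see the lobby
    "lobby": {"lobby", "hall", "vault"},
    "vault": {"vault", "lobby"},
}

def visible_objects(camera_leaf, objects):
    """Cull down to objects standing in leaves visible from the camera's leaf."""
    vis = PVS[camera_leaf]
    return [name for name, leaf in objects if leaf in vis]

objects = [("guard", "vault"), ("barrel", "hall"), ("chest", "lobby")]
# From the hall, the guard in the vault is culled from rendering (and from AI).
print(visible_objects("hall", objects))  # ['barrel', 'chest']
```

The same lookup serves both the renderer and, as discussed below, the AI and animation culling.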
Generally speaking, though, BSP trees have fallen out of favor lately, partly because of some of their quirks, and partly because, with the pixel throughput we get from today's 3D cards, BSP traversal is often a redundant expense. They are handy for working out exactly where you are in the world and what geometry is around you, but there are often better and more intuitive ways to store that information than a BSP tree.
The potentially visible set is about as good as it sounds. It is a way of determining, for any given position in the world, which surfaces and which objects can actually be seen from there. It is commonly used to cull objects before rendering, and also to cull them from AI and animation processing. After all, if you can't see them, why spend brainpower on them? It really doesn't matter much whether an out-of-view non-player character (NPC) is animating, or even thinking.

Game physics
Now that we have the world in memory, we have to keep our characters from falling out through it, and handle floors, slopes, walls, doors, and moving platforms. On top of that, we have to properly handle gravity, velocity changes, inertia, and collisions with the other objects placed in the world. This is what's thought of as game physics. And before we discuss it any further, I want to dispel a myth right here and now. Any time you see physics in a game world, or anyone claims 'real physics' in a complex game environment, well, it's BS. More than 80% of building an efficient game physics system is simplifying the true equations for how objects behave in the world. And even then, you often ignore what's 'real' and make up something that's fun - which, after all, is the goal.
Gamers will happily ignore real-world Newtonian physics and play in their own, more entertaining version of reality. For example, in Quake II you can accelerate from 0 to 35 mph instantly, and stop just as fast. There's no friction, and slopes don't exert the same kind of gravitational pull that real slopes do. Bodies don't behave the way real bodies would in every situation - you won't see a body draped over the top or edge of a table - and gravity itself may even be variable. Face it: in the real world, spacecraft in space do not behave like fighter jets operating in atmosphere. Out there it's all forces and counter-forces, forces acting around the center of mass, and so on - nothing like Luke Skywalker's X-Wing. Though the X-Wing way is a lot more fun!
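The "fun over real" point above can be made concrete with a toy comparison. This is a sketch under assumed values (the 35 mph cap comes from the Quake II example; everything else is illustrative): arcade movement snaps straight to top speed with no friction, while a Newtonian version integrates acceleration over time.

```python
# Hypothetical movement sketch contrasting arcade vs. Newtonian velocity.
MAX_SPEED = 35.0  # speed cap, echoing the Quake II example above

def arcade_velocity(input_dir):
    """Arcade style: -1/0/+1 input snaps instantly to full speed or a dead stop."""
    return input_dir * MAX_SPEED

def real_velocity(v, input_dir, accel, dt):
    """Newtonian style: velocity builds up gradually from acceleration."""
    v += input_dir * accel * dt
    return max(-MAX_SPEED, min(MAX_SPEED, v))

print(arcade_velocity(1))                 # 35.0 - instantly at top speed
print(real_velocity(0.0, 1, 10.0, 0.5))   # 5.0  - still winding up
```

The arcade version is "wrong", cheap, and responsive, which is exactly why games ship it.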
As game developers, whatever else we do, we need to detect walls, detect floors, and handle collisions with the other objects in the world. These are the essentials of a modern game engine - how much further we go beyond them is up to us and our game's needs.
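Those essentials - wall/object collision and not falling through the floor - boil down to a couple of very small tests. A minimal sketch, assuming axis-aligned bounding boxes and a flat floor plane (real engines use far richer shapes and swept tests; all names here are illustrative):

```python
# Collision essentials sketch: AABB overlap for walls/objects, plus a
# floor clamp so the character can't drop out of the world.

def aabb_overlap(a_min, a_max, b_min, b_max):
    """True if two axis-aligned boxes intersect on every axis."""
    return all(a_min[i] <= b_max[i] and b_min[i] <= a_max[i] for i in range(3))

def clamp_to_floor(pos, floor_height=0.0):
    """Keep a character's position from sinking below the floor plane."""
    x, y, z = pos
    return (x, max(y, floor_height), z)

# Player box vs. a crate that pokes into it:
print(aabb_overlap((0, 0, 0), (1, 2, 1), (0.5, 0, 0.5), (2, 1, 2)))  # True
print(clamp_to_floor((3.0, -0.2, 4.0)))  # (3.0, 0.0, 4.0)
```

Everything else in a physics system - slopes, doors, moving platforms - is layered on top of primitives like these.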
Effect system
The vast majority of game engines now have some kind of effects generator built in, which lets us show off all the lovely eye candy that discerning gamers have come to expect. However, what goes on behind the effects system can have a drastic impact on frame rate, so this is an area that needs special care. Now that we have great 3D cards, we can throw huge numbers of triangles at them, and they will still ask for more, right? Not always. In Heretic II, with its lovely software rendering mode, Raven ran into considerable overdraw problems because of its beautiful spell effects. Recall that overdraw happens when you draw the same pixel more than once per frame. When you have lots of effects, you have, by their nature, lots of triangles, and multiple triangles can stack on top of one another. The result is pixels drawn over and over. Add alpha into the mix - which means the old pixel must be read back and blended with the new one before being rewritten - and it consumes even more CPU time.
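The read-modify-write cost of alpha just described can be shown per channel. A sketch of the standard 'over' blend (the layer colors and alpha value below are illustrative):

```python
# Why alpha-blended overdraw is expensive: every blended write must first
# READ the old pixel, blend, then write - shown here per color channel.

def alpha_blend(src, dst, alpha):
    """Standard 'over' blend: result = src*alpha + dst*(1 - alpha)."""
    return tuple(s * alpha + d * (1.0 - alpha) for s, d in zip(src, dst))

pixel = (0.0, 0.0, 0.0)                  # background
for layer in [(1.0, 0.5, 0.0)] * 4:      # four overlapping translucent triangles
    pixel = alpha_blend(layer, pixel, 0.25)  # 4 reads + 4 writes for ONE pixel
```

In a software renderer, each stacked translucent triangle multiplies that per-pixel cost, which is exactly the overdraw problem Raven hit.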
Some of Heretic II's effects made this painfully clear: at times the entire screen was being redrawn 40 times in a single frame. Surprising, isn't it? So they implemented a system in the effects code that sampled the frame rate over the previous 30 frames. If the speed started to drop, it automatically reduced the number of triangles any given effect was allowed to draw. That did the main job and kept the frame rate up, though some effects ended up looking ugly. Anyway, because most effects now tend to use lots of small particles to simulate fire, smoke, and the like, you end up with a great many triangles per frame in the effects code. You have to move them from one frame to the next, decide whether they are finished, and even apply some physics so they bounce properly off the floor. All of this is quite expensive on the PC, so even now there are practical limits on what you can do. For instance, producing fire with single-pixel particles might look good, but don't expect much else to be happening on screen while you do it.
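The frame-rate-driven throttle described above can be sketched roughly like this. This is not Raven's actual code - only a hypothetical illustration of the idea: keep a window of the last 30 frame times and scale each effect's triangle budget down when the average frame rate dips below target.

```python
from collections import deque

class EffectThrottle:
    """Sample recent frame times; shrink effect triangle budgets when slow."""

    def __init__(self, target_fps=30.0, window=30):
        self.times = deque(maxlen=window)  # last `window` frame durations
        self.target_fps = target_fps

    def record_frame(self, dt):
        self.times.append(dt)

    def triangle_scale(self):
        """1.0 at or above the target fps, shrinking proportionally below it."""
        if not self.times:
            return 1.0
        fps = len(self.times) / sum(self.times)
        return min(1.0, fps / self.target_fps)

throttle = EffectThrottle()
for _ in range(30):
    throttle.record_frame(1.0 / 60.0)   # running fast: full effect detail
print(throttle.triangle_scale())         # 1.0
```

An effect would then multiply its particle or triangle count by this scale each frame, trading visual quality for a held frame rate.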
Particles are defined as very small, drawn objects that carry their own world position and velocity. They are distinct from directed sprites, which larger effects use - a billowing puff of smoke, for example. Sprites automatically rotate, scale, and change their transparency level, so they can fade out over time. We tend to allow lots of particles but limit the number of sprites - although the real difference between the two is blurring these days. In the future, especially once DX9 and more advanced graphics hardware arrive, we may well see more people using procedural shaders to produce effects similar to, or better than, particle systems, creating some great animated effects.
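A particle as defined above - own position, own velocity, fading out over a lifetime - fits in a few lines. A minimal sketch (field names and the gravity constant are illustrative; a sprite-style object would add rotation and scale on top):

```python
# Minimal particle: tiny object with its own world position and velocity
# that fades out (alpha -> 0) across its lifetime, then reports itself dead.

class Particle:
    def __init__(self, pos, vel, life):
        self.pos, self.vel = list(pos), list(vel)
        self.life = self.max_life = life

    def update(self, dt, gravity=-9.8):
        self.vel[1] += gravity * dt          # a dab of physics, as in the text
        for i in range(3):
            self.pos[i] += self.vel[i] * dt
        self.life -= dt

    @property
    def alpha(self):
        """Transparency ramps down to zero as the particle dies."""
        return max(0.0, self.life / self.max_life)

    def dead(self):
        return self.life <= 0.0
```

The per-frame cost the article warns about is exactly this `update` loop run over thousands of particles, plus the triangles each one submits.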
When talking about effects systems, you will probably hear the word 'primitive' tossed around. A primitive is the smallest unit of physical representation your effects system deals with. To expand on that: to a renderer, a triangle is a primitive. That's what the vast majority of engines ultimately draw - lots of triangles. As you move up the system, the definition of a primitive changes. For example, the top-level game programmer doesn't want to think about handling individual triangles. He just wants to say, 'this effect happens here', and have the system deal with it as a black box. So to him, an effects primitive is something like, 'keep generating a stream of particles at this point, with this much gravity, for this long'. Inside the effects system, a primitive might be each individual effect it is generating at that moment - say, a group of triangles that all follow the same physics rules - which it then hands off to the renderer as separate triangles. So at the renderer level, the primitive is back to being a single triangle. A little confusing, but you get the general idea.
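The layered meaning of 'primitive' can be sketched as three thin layers (all function and field names here are hypothetical): the game layer's primitive is a whole effect, the effect system's primitive is a batch of triangles sharing the same rules, and the renderer's primitive is a single triangle.

```python
# Three layers, three meanings of "primitive".

def renderer_draw(triangle):
    """Renderer level: the primitive is one triangle."""
    return 1  # one triangle drawn

def effect_system_emit(effect):
    """Effect-system level: the primitive is a batch of triangles that
    follow the same physics rules; it expands into renderer primitives."""
    return sum(renderer_draw(tri) for tri in effect["triangles"])

def game_spawn_effect(kind, position, num_particles):
    """Game level: the primitive is the whole effect - a black box:
    'this effect happens here'."""
    effect = {"kind": kind, "pos": position,
              "triangles": [object() for _ in range(num_particles)]}
    return effect_system_emit(effect)

print(game_spawn_effect("sparks", (0, 0, 0), 8))  # 8 triangles reached the renderer
```

Each layer only speaks in its own primitives, which is what keeps the top-level game code free of triangle bookkeeping.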
That wraps up the fifth part. The next part covers sound systems: the various audio APIs, 3D audio effects, handling occlusion and obstruction, the effects of different materials on sound, audio mixing, and so on.