Chapter 1: Introducing Direct3D
Translation: Clayman
Create a device
The Device class is required for all drawing in Direct3D. You can think of this class as representing a physical graphics adapter; every other graphics object in the scene depends on a Device. Your computer may contain several adapters, and in Managed Direct3D you can control any number of devices.
Device has three constructors. We will only discuss one of them now and come back to the others later. Let's look at the constructor with the following signature:
public Device(int adapter, DeviceType deviceType, Control renderWindow, CreateFlags behaviorFlags, params PresentParameters[] presentationParameters);
(The second overload of the constructor is similar to the one above, but it accepts a window handle as renderWindow so you can render into an unmanaged, non-Windows-Forms window. The overload with a single IntPtr parameter wraps an unmanaged COM pointer to an IDirect3DDevice9 interface; use it when your code needs to interoperate with unmanaged programs.)
OK, what do these parameters mean, and how do we use them? The adapter parameter indicates which physical graphics adapter we will use. Every adapter in the computer has a unique identifier (normally 0 through the number of adapters minus 1); the default adapter is always identified by 0.
The next parameter, DeviceType, tells Direct3D which type of device you want to create. The most common value here is DeviceType.Hardware, indicating that you want a hardware device. Another option is DeviceType.Reference, which uses the "reference rasterizer": every feature is emulated in software by Direct3D and runs very, very, very slowly ^_^. This option should only be used for debugging, or to test features that your graphics card does not support.
(Note that the reference rasterizer is only included with the DirectX SDK, so the end-user DirectX runtime cannot use this feature. The last value, DeviceType.Software, allows a custom software rasterizer to be used. Unless you know such a rasterizer is present, ignore this option ^_^.)
renderWindow indicates the window the device is bound to. Because the Windows Forms Control class exposes a window handle, it is easy to use any control as the render window: you can pass a Form, a Panel, or any other control for this parameter. For now, we will only use a Form.
The next parameter describes how the device behaves after it is created. Most members of the CreateFlags enumeration can be combined to give the device several behaviors, although some flags are mutually exclusive; we will discuss those later. For now we use only the SoftwareVertexProcessing flag, which performs all vertex processing on the CPU. This is naturally slower than processing vertices on the graphics card, but since we do not yet know whether your card supports hardware vertex processing, safety comes first: assume the CPU can handle the job.
The last parameter tells the device how to present its data to the display; the members of the PresentParameters class control this. We will discuss its constructor later. For now we only care about the Windowed and SwapEffect members. Windowed is a Boolean value that determines whether the device runs full screen or in a window.
The SwapEffect member controls how buffer swapping behaves. If you choose SwapEffect.Flip, an extra back buffer is created at run time and its contents are copied to the front buffer when presented. SwapEffect.Copy is similar to Flip, but it requires you to limit the swap chain to a single back buffer. We will choose SwapEffect.Discard, which simply discards the contents of the buffer if it isn't ready to be presented.
We've learned enough; now let's create a device. Back in the code, first create a Device object for our program:
(Code omitted here; see DirectX SDK Tutorial 1: Create a Device.)
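A minimal sketch of what that creation code typically looks like, assuming a form-level field named device and using directives for Microsoft.DirectX and Microsoft.DirectX.Direct3D (the SDK tutorial's version differs slightly):

private Device device = null;

public void InitializeGraphics()
{
    // Describe how the device presents to the screen: windowed, discard swap effect.
    PresentParameters presentParams = new PresentParameters();
    presentParams.Windowed = true;
    presentParams.SwapEffect = SwapEffect.Discard;

    // Default adapter (0), hardware device, this form as the render window,
    // software vertex processing so we do not depend on the card's capabilities.
    device = new Device(0, DeviceType.Hardware, this,
        CreateFlags.SoftwareVertexProcessing, presentParams);
}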
Now let's rewrite the OnPaint() method:
protected override void OnPaint(System.Windows.Forms.PaintEventArgs e)
{
    device.Clear(ClearFlags.Target, System.Drawing.Color.Blue, 1.0f, 0);
    device.Present();
}
We use the Clear() method to fill the window with a solid color. Its first parameter specifies what we want to clear; in this example we clear the render target. The other members of the ClearFlags enumeration will be discussed later. The second parameter is the color we want to fill with. The remaining two parameters can be ignored for now. After the device is cleared, we must update the display: the Present() method does that for us. Present() also has several overloads; the one used above presents the entire area of the device. The others will also be discussed later.
A bit boring, isn't it? OK, now let's really draw some graphics.
The most basic primitive in 3D graphics is the triangle. With enough triangles we can represent anything, even very smooth surfaces, so nothing is more fitting than drawing a simple triangle first. To keep the process as simple as possible, we will avoid "world space" and the various transformations for now (of course, we will get to them shortly) and draw a simple triangle using screen coordinates. Before we can draw our charming triangle, we need to do two things: first, we need a data structure to hold the triangle's data; second, we need to tell the device to draw it.
Fortunately, DirectX already provides such data structures. The CustomVertex class in the Direct3D namespace contains the vertex format structures used by most Direct3D applications.
A vertex format structure stores data in a layout that Direct3D understands and can use. We will discuss these structures at length, but first look at the TransformedColored structure we will use to create our triangle. This structure tells the Direct3D runtime that our triangle does not need coordinate transformations (such as rotation or translation), because we are specifying positions directly in screen coordinates. It also holds the color of each point (vertex). Back in the OnPaint method, add the following code:

CustomVertex.TransformedColored[] verts = new CustomVertex.TransformedColored[3];
verts[0].SetPosition(new Vector4(this.Width / 2.0f, 50.0f, 0.5f, 1.0f));
verts[0].Color = System.Drawing.Color.Aqua.ToArgb();
// verts[1] and verts[2] are initialized the same way, each with its own position and color (omitted here)
(See DirectX SDK Tutorial 2: Rendering Vertices.)
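The other two vertices follow the same pattern; one plausible completion (these positions and colors are illustrative assumptions, not necessarily the original's values):

verts[1].SetPosition(new Vector4(this.Width - (this.Width / 5.0f), this.Height - (this.Height / 5.0f), 0.5f, 1.0f));
verts[1].Color = System.Drawing.Color.Black.ToArgb();
verts[2].SetPosition(new Vector4(this.Width / 5.0f, this.Height - (this.Height / 5.0f), 0.5f, 1.0f));
verts[2].Color = System.Drawing.Color.Purple.ToArgb();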
Each element of the array represents one vertex of the triangle, so we created three elements. We then call the SetPosition method on each member with a newly created Vector4. A transformed vertex position contains the x and y coordinates on the screen (relative to the screen's (0,0) point), and also a z coordinate and an rhw member (reciprocal of homogeneous w). Ignore those two for now. The Vector4 structure (note: Vector4 actually stores (x, y, z, w), which becomes (x/w, y/w, z/w) after the homogeneous divide) is the most convenient way to store this information. Then we set the color of each point. Note that we call the ToArgb method on a standard color: Direct3D expects colors as 32-bit integers.
Now that we have the data, we can tell Direct3D that we want to draw this triangle and have it drawn. Add the following code to the OnPaint override:
device.BeginScene();
device.VertexFormat = CustomVertex.TransformedColored.Format;
device.DrawUserPrimitives(PrimitiveType.TriangleList, 1, verts);  // translator's note: differs from the SDK sample
device.EndScene();
OK, what do these lines of code mean? It's actually quite simple. The BeginScene method tells Direct3D that we are about to draw something and lets it prepare. Now that we have told Direct3D we are going to draw, we must also tell it what to draw; that is the role of the VertexFormat property. It determines which "fixed function pipeline" format Direct3D uses. In our example, we use the transformed, colored vertex pipeline.
Don't worry if you don't understand what that means yet; we will discuss it soon.
The DrawUserPrimitives method is where the drawing really happens. So, what do its parameters mean? The first parameter is the type of primitive we want to draw. There are many available types, but for now we just want to draw a list of triangles, so we choose PrimitiveType.TriangleList. The second parameter is the number of primitives to draw; for a triangle list, this should be your vertex count divided by 3. We are drawing a single triangle, so we pass 1. The last parameter is the data Direct3D uses to do the drawing. Finally, the EndScene method notifies Direct3D that we are done drawing; you must call it once for every call to BeginScene. If you compile and run now, you will find that moving or resizing the window does not update the display. The reason is that Windows does not consider the whole window invalid every time it needs redrawing, so only the newly exposed area is repainted and the previously drawn content is never refreshed. Fortunately, there is an easy fix: we can tell Windows that the entire window always needs to be repainted. Add this at the end of OnPaint:
this.Invalidate();
Try again... oh, it seems we've broken the program! Now only a blank window is displayed, and our triangle flickers, especially when the window is resized. What happened? It turns out that "clever" Windows always tries to repaint the window background (i.e., a blank window) whenever Invalidate() is called, while our own drawing happens in OnPaint! This is easy to fix by changing the window's style flags. Add the following code to the constructor:
this.SetStyle(ControlStyles.AllPaintingInWmPaint | ControlStyles.Opaque, true);
Ah, finally everything works as expected. What we have done is tell Windows that all painting is done in OnPaint.
Three-dimensional triangle
Look at our program: it doesn't look very "three-dimensional", and we could have done the same thing easily with GDI+. So how do we draw in 3D space and make a deeper impression? In fact, a few simple modifications will achieve that effect.
If you remember, when we created the first triangle we used "transformed" coordinates. These coordinates are specified directly in the screen area of the display, and they are rarely used in practice. What if we use untransformed coordinates instead? In fact, untransformed coordinates are what modern game scenes use almost everywhere.
When we define these coordinates, in contrast to screen coordinates, each vertex should be defined in world space. You can think of world space as an infinite three-dimensional Cartesian coordinate system: you can place your objects anywhere in this "world". Now let's modify the program to draw an untransformed triangle defined in world coordinates.
First, change the triangle data to use one of the untransformed vertex format types. Here we only care about the position and the color of each vertex, so we use CustomVertex.PositionColored.
CustomVertex.PositionColored[] verts = new CustomVertex.PositionColored[3];
verts[0].SetPosition(new Vector3(0.0f, 1.0f, 1.0f));
verts[0].Color = System.Drawing.Color.Aqua.ToArgb();
// verts[1] and verts[2] are initialized the same way (omitted here)
(See DirectX SDK Tutorial 3: Using Matrices.)
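One plausible completion for the other two vertices (again, the exact positions and colors are assumptions for illustration):

verts[1].SetPosition(new Vector3(-1.0f, -1.0f, 1.0f));
verts[1].Color = System.Drawing.Color.Black.ToArgb();
verts[2].SetPosition(new Vector3(1.0f, -1.0f, 1.0f));
verts[2].Color = System.Drawing.Color.Purple.ToArgb();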
Also change the VertexFormat property:
device.VertexFormat = CustomVertex.PositionColored.Format;
OK, now run the program: nothing happens, you just get a cleared window. Before discussing why, let's review what we changed. As you can see, we chose the PositionColored structure to hold the data. This structure stores each vertex's position in world coordinates along with its color. Because the vertices are untransformed, we use the Vector3 class instead of Vector4; untransformed vertices have no rhw value. The members of the Vector3 structure map directly to the x, y, and z values in world coordinates. At the same time, we need to make sure Direct3D knows about the change, so we set the new VertexFormat property to tell the fixed-function pipeline to use the new untransformed, colored vertices.
So why doesn't the program display anything? The problem is that we are only drawing in world coordinates, but we have not given Direct3D any information about how to display them. We need to add a camera to the scene to determine how our vertices are viewed. Transformed coordinates did not require a camera because Direct3D already knew exactly where on the screen to display those vertices.
The camera is controlled through two different transforms on the device. Each transform is defined as a 4 × 4 matrix that you pass to Direct3D.
The projection transform defines how the scene is projected onto the display. The simplest way to generate a projection matrix is the Matrix class's PerspectiveFovLH method, which creates a perspective projection for a scene using a left-handed coordinate system. (For details on left- and right-handed coordinate systems, see the SDK, or your math and physics textbooks ^_^.) Direct3D normally uses a left-handed coordinate system.
The following is the signature of the projection function:
public static Matrix PerspectiveFovLH(float fieldOfViewY, float aspectRatio, float znearPlane, float zfarPlane);
The projection transform describes the view frustum of the scene. The frustum is the solid defined by a field-of-view angle together with a near plane and a far plane (think of the part of a pyramid between a cross-section and the base; heaven help us, you do still remember your high-school geometry); only what lies inside this frustum is visible. The znearPlane and zfarPlane parameters in the signature describe the frustum's boundaries: zfarPlane is the base of the pyramid, and znearPlane is the cross-section.
The fieldOfViewY parameter describes the angle of the frustum. aspectRatio is like a television's aspect ratio; for example, a widescreen TV has an aspect ratio of 1.85. You can compute this value by dividing the width of the visible area by its height. Direct3D only draws what lies inside this frustum.
Since we never set a projection, there is no view frustum at all, so Direct3D draws nothing. But even if we had set a projection transform, we still would not have a view transform containing the camera information. That can be done with this function: public static Matrix LookAtLH(Vector3 cameraPosition, Vector3 cameraTarget, Vector3 cameraUpVector);
You can tell how to use this function just from the names of its parameters. The three parameters describe the camera's properties: its position, the point it is looking at, and which direction is "up". With both the projection transform and the view transform in place, Direct3D has enough information to draw the triangle. Add the code (see the SetupMatrices() function in DirectX SDK Tutorial 3: Using Matrices):
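A minimal sketch of that setup, assuming a helper that is called before drawing each frame (the field of view, clip planes, and camera position are assumed values; the SDK's SetupMatrices() differs slightly):

private void SetupMatrices()
{
    // Perspective projection: 45-degree vertical field of view, the window's
    // aspect ratio, near plane at 1.0 and far plane at 100.0 (assumed values).
    device.Transform.Projection = Matrix.PerspectiveFovLH(
        (float)Math.PI / 4.0f,
        this.Width / (float)this.Height,
        1.0f, 100.0f);

    // View transform: camera 5 units back along the z axis, looking at the origin,
    // with +y as the "up" direction.
    device.Transform.View = Matrix.LookAtLH(
        new Vector3(0.0f, 0.0f, 5.0f), new Vector3(), new Vector3(0.0f, 1.0f, 0.0f));
}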
Try again... oh, we now have a triangle, but it is completely black! What went wrong? For untransformed vertices, Direct3D uses lights to calculate the color of each pixel in the scene, and we haven't defined any lights: no light falls on the triangle, so it is completely black. Since we have already defined a color for every vertex, for now we can safely and simply turn off lighting in the scene. Add the following code:
device.RenderState.Lighting = false;

Try again; finally, we get back the look we had with transformed coordinates. So what was the benefit of all these changes? The biggest benefit, compared with drawing in screen coordinates, is that the triangle now lives in three-dimensional space: the first step toward a great 3D work! ^_^
Now that the triangle is in 3D space, what can we do to make it really look like a triangle in space? The easiest thing is to make it rotate. How? Very simply: change the world transform.
The world transform on the device converts each vertex position defined in model (local) space, where each vertex is defined relative to the model, into a position in world space, where each vertex is actually placed in the world. The Matrix class has many methods that build such transforms:
device.Transform.World = Matrix.RotationZ((float)Math.PI / 6.0f);
This tells Direct3D that, unless a new world transform is specified, everything drawn after this line will be transformed this way. The world transform above rotates around the z axis by the given angle in radians; note that the parameter must be in radians, not degrees. Changing the parameter value regularly makes the triangle rotate smoothly (code omitted here; refer to the example in the SDK).
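A minimal sketch of that animation, assuming a float field named angle on the form and a helper called from OnPaint each frame:

private float angle = 0.0f;

private void SetupWorld()
{
    // Rotate a little further around the z axis every frame (value in radians).
    device.Transform.World = Matrix.RotationZ(angle);
    angle += 0.1f;
}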
Our rotating triangle still doesn't make much of an impression. Let's try to make it more interesting by rotating around several axes at once. Luckily, that is just as easy; update the code:

device.Transform.World = Matrix.RotationAxis(
    new Vector3(angle / ((float)Math.PI * 2.0f),
                angle / ((float)Math.PI * 4.0f),
                angle / ((float)Math.PI * 6.0f)),
    angle / (float)Math.PI);
Here we use the RotationAxis function. With it we first define the rotation axis, using a simple formula in each dimension so the axis itself changes over time, and then pass in the angle the triangle should rotate around that axis, just as we did before.
Run the program again, and we really do get a triangle rotating around a changing axis, but the triangle seems to disappear at regular intervals and then reappear. Remember back-face culling? This is the best example of it. When Direct3D renders an object and finds that a face is not oriented toward the camera, it does not draw it; that is called back-face culling. So how does Direct3D know whether a given face is oriented toward the camera? A quick look at the culling options may give you a hint. The three available culling options are None, Clockwise, and CounterClockwise. With Clockwise or CounterClockwise, a primitive whose vertices are wound in that order as seen from the camera is not drawn.
Look at our triangle: its vertices are arranged counterclockwise (note: for more on vertex order, see the SDK topic "Face and Vertex Normal Vectors"). Direct3D's default culling mode is CounterClockwise.
You can simply swap the first and third elements of the vertex array and see what difference it makes.
Now that we know how back-face culling works, it is clear that our simple program does not need it. A simple render state controls the culling mode; add the following code:
device.RenderState.CullMode = Cull.None;
Once again, everything works as expected.
Automatically resetting the device when the window is resized
Anyone who has used C++ or VB to develop with Direct3D knows that when the window size changes you need to reset the device; otherwise Direct3D keeps rendering the scene at the original resolution and copies the result onto the newly sized window. When the device is created from a Windows Forms control, Managed DirectX is smart enough to detect that the window size changed and resets the device for you. Of course, the program may not always behave the way you want, and you can still reset the device yourself. An event called DeviceResizing is raised before the device is automatically reset; handle this event and set the Cancel member of its CancelEventArgs to true to turn off the default behavior. Add the following code after creating the device.
private void CancelResize(object sender, CancelEventArgs e)
{
    e.Cancel = true;
}
As you can see, this method simply says yes, we really do want to cancel the operation. Now subscribe the event handler so the device knows not to reset itself:
device.DeviceResizing += new CancelEventHandler(this.CancelResize);

Run the program and maximize the window. The triangle stays in the same position as before, but this time it looks terrible: the edges are jagged and ugly. You can delete the code we just added; Managed DirectX's default resizing behavior already handles this for us, so we can simply rely on it.
I said: "If you have light," so I have light.
We have drawn a triangle and made it spin; how can we make it look even better? With light, of course. We touched on lighting earlier, when we turned it off completely. The first thing to do is to go back to that dark scene:
device.RenderState.Lighting = true;
In fact, you could even delete this whole line, since the device's default behavior is to enable lighting; we keep it just to make the code clearer. Now we get a black rotating triangle again. Perhaps we should define a light first and then turn it on. You may have noticed that the Device class exposes an array of lights, and each element of this array has a large number of properties. We want to define the first light in the scene and enable it, so where the triangle is defined in the OnPaint method (note: this differs from the SDK, but the effect is the same ^_^), add the following code:
device.Lights[0].Type = LightType.Point;
device.Lights[0].Position = new Vector3();
device.Lights[0].Diffuse = System.Drawing.Color.White;
device.Lights[0].Attenuation0 = 0.2f;
device.Lights[0].Range = 1000.0f;
device.Lights[0].Commit();
device.Lights[0].Enabled = true;
What does this code mean? First, the type of light to create: we chose a point light, which shines in all directions like a light bulb. There is also a directional light, whose light travels in a specified direction; directional lights only have a direction and a color and ignore the other lighting properties (such as attenuation and range), so they are the cheapest lights. Finally, you can use a spot light, similar to the spotlights used to illuminate actors on a theater stage. Spot lights are described by many properties (position, direction, cone angles, and so on), so they are the most expensive lights in the system.
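For comparison, a hypothetical directional light needs only a direction and a color; a sketch under the same assumptions as the code above (not part of the sample program):

// Illustrative alternative: white light shining along the positive z axis.
device.Lights[0].Type = LightType.Directional;
device.Lights[0].Direction = new Vector3(0.0f, 0.0f, 1.0f);
device.Lights[0].Diffuse = System.Drawing.Color.White;
device.Lights[0].Commit();
device.Lights[0].Enabled = true;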
We will return to the light types after this brief discussion. Next we set the position of the light: because the center of the triangle is at (0, 0, 0), we place the light there, and the parameterless Vector3 constructor does exactly that. We set the light's diffuse color to white so that it illuminates surfaces evenly. Next we set the attenuation property, which controls how the light's intensity falls off with distance, and the range, which is the farthest distance at which the light has any effect. The range used in the example is far more than we need. See the SDK for much more about lights.
Finally, we commit the light to the device and enable it. If you browse the light's properties, you will notice a Boolean called "Deferred". By default this value is false, so you must call Commit() before the light is ready to be used. Setting it to true lets you skip the Commit() call, at some cost in performance. Before you can see the light's effect, you must make sure it is both enabled and committed. Back in the program, you will find that even though we defined a light for the scene, the triangle is still black! The light is on, but nothing is lit: Direct3D must not be illuminating our triangle, and indeed it isn't. Lighting is only calculated when the geometry's faces have normals. Knowing this, let's add normals to the triangle so it shows up in the scene. The easiest way is to change the vertex format to one that includes a normal. There happens to be such a structure, so change the code that creates the triangle:
CustomVertex.PositionNormalColored[] verts = new CustomVertex.PositionNormalColored[3];
verts[0].SetPosition(new Vector3(0.0f, 1.0f, 1.0f));
verts[0].SetNormal(new Vector3(0.0f, 0.0f, -1.0f));
verts[0].Color = System.Drawing.Color.White.ToArgb();
// verts[1] and verts[2] are initialized the same way (omitted here)
Update the vertex format to match the new data:
device.VertexFormat = CustomVertex.PositionNormalColored.Format;
The biggest change here is that we now use a vertex structure that contains a normal, and we have changed the triangle's color to white. As you can see, we define each vertex normal pointing straight out of the face. Because the points all lie in the z = 1 plane, the negative z direction is the direction of the normal vector. Now the program works as expected. You can try changing the light's diffuse color and see what happens.
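One plausible completion for the remaining two vertices, reusing the same normal (the positions are assumptions matching the earlier untransformed-triangle sketch):

verts[1].SetPosition(new Vector3(-1.0f, -1.0f, 1.0f));
verts[1].SetNormal(new Vector3(0.0f, 0.0f, -1.0f));
verts[1].Color = System.Drawing.Color.White.ToArgb();
verts[2].SetPosition(new Vector3(1.0f, -1.0f, 1.0f));
verts[2].SetNormal(new Vector3(0.0f, 0.0f, -1.0f));
verts[2].Color = System.Drawing.Color.White.ToArgb();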
One more thing to remember: lighting is calculated per vertex, so on low-polygon models (like our simple triangle) the lighting may not look realistic. We will discuss more advanced lighting techniques, such as per-pixel lighting, in later chapters; those can create truly realistic lighting.
Device states and transforms
So far, the sample code has touched two device concepts: device states and transforms. A device has three different kinds of state: render states, sampler states, and texture states. We have used only a few of the render states; the latter two are used for texturing. Don't worry, we will get to textures soon. The render states specify how Direct3D rasterizes the scene. You can change many properties through this class, including the lighting and culling states we have already used. Other available render states include the fill mode (such as wire-frame mode) and various fog parameters. We will discuss them in depth in the next chapter. As mentioned earlier, transforms are a set of matrices used to move geometry positions from one coordinate system to another. The three major transforms used on the device are the world, view, and projection transforms, but there are others as well, such as the transforms that control the texture stages, as well as up to 255 world matrices.
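As a quick illustration, other render states are set through the same property; for example, a hypothetical switch to wire-frame rendering would look like this (not part of the sample program):

// Draw only the edges of each triangle instead of filling them.
device.RenderState.FillMode = FillMode.WireFrame;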
Swap chains and render targets
What does the device do behind the scenes to draw these triangles? The device has built-in mechanisms that determine where and how objects are drawn. Every device has a swap chain and a render target.
A swap chain is essentially a series of buffers used to control rendering. All drawing happens in a back buffer of the swap chain. When a swap chain is created with SwapEffect.Flip, the back buffer is flipped to become the front buffer, the one the graphics hardware actually reads for display. At the same time, a third buffer becomes the new back buffer, and the previous front buffer becomes the unused third buffer.
The real flip operation works by changing the memory region the graphics card currently reads from: it simply swaps the addresses of the front buffer and the back buffer. True flips only happen in full-screen mode. In windowed mode a flip is actually just a copy of the data, because the device does not control the entire display, only a small part of it, although the results look the same in both modes. In full-screen mode, some drivers also implement SwapEffect.Discard or SwapEffect.Copy with flip operations.
If you create a swap chain with SwapEffect.Copy or SwapEffect.Flip, the runtime guarantees that a Present() call will not affect the contents of the back buffer, enforcing this, if necessary, by creating an extra hidden buffer at run time. SwapEffect.Discard is recommended to avoid this potential cost: it lets the driver choose the most efficient way to manage the back buffers, at the price of making no promises about what the back buffer contains after a present. When using SwapEffect.Discard, you will therefore want to clear the entire back buffer before starting new drawing operations. The debug runtime fills discarded back buffers with random data so that developers can see whether they forgot to call Clear(). (Translator's note: the SDK's description of the SwapEffect enumeration defines what happens to the back buffer after Present() is called. A Flip swap chain is a circular queue with one or more back buffers, and a Copy swap chain has exactly one back buffer. With Flip and Copy the back buffer's contents are preserved across Present(), which may require extra memory and carries a performance cost. With Discard, the contents of a back buffer are discarded once it has been displayed, so an application should update the entire back buffer before presenting it; the debug version of the runtime overwrites discarded back buffers with random data precisely so that developers can verify they are updating the whole back-buffer surface correctly.)
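As a rough sketch of how these choices appear in code, here is a hypothetical PresentParameters setup that requests two back buffers with the discard behavior, following the naming of the earlier device-creation sketch:

PresentParameters presentParams = new PresentParameters();
presentParams.Windowed = true;
presentParams.SwapEffect = SwapEffect.Discard;  // driver chooses the fastest strategy
presentParams.BackBufferCount = 2;              // assumed value, for illustration only

// With Discard, the back buffer contents are undefined after Present(),
// so clear the whole back buffer at the start of every frame.
device.Clear(ClearFlags.Target, System.Drawing.Color.Blue, 1.0f, 0);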