Realistic Computer Graphics (2) - Hidden Surface Removal and Realistic Image Generation


Realistic computer graphics (two) - blanking and realistic image synthesis Author: Tian Jingcheng Published: 2001/02/07

Abstract:

In the article "Realistic Computer Graphics (1)", the four basic tasks that must be completed to display realistic graphics on a computer's graphics device were described, together with the techniques related to the first task, the three-dimensional modeling and simulation of natural scenes. This article focuses on the third and fourth tasks: determining all the visible surfaces in the scene (hidden surface removal) and computing the illumination of the scene (illumination models, textures, color models, and so on).

Main text:

1 Hidden Surface Removal

In computer graphics there are three ways to display a three-dimensional object: as a wireframe drawing, as a drawing with hidden lines and surfaces removed, and as a realistic (shaded) image. Generating a realistic image also requires illumination calculations on top of hidden surface removal. Hidden surface removal is the process of determining, for a given set of three-dimensional objects and a projection mode (viewing constraints), which lines, surfaces, or volumes are visible. According to the space in which the computation is carried out, hidden surface algorithms fall into two categories.

Object-space algorithms work in the normalized projection space. Each of the K polygons making up the objects is compared with the remaining K-1 polygons to determine exactly which edges or faces occlude one another, so the amount of computation is proportional to K².

Image-space algorithms work in the screen coordinate system. For every pixel on the screen they decide which surface is visible at that pixel; if the screen resolution is m × n and there are K polygons in the scene, the amount of computation is proportional to m·n·K.

Most hidden surface algorithms involve the concepts of sorting and coherence. Sorting, usually carried out along the X, Y, and Z directions, establishes the occlusion relationships among the objects, and the efficiency of a hidden surface algorithm depends largely on the efficiency of this sorting. Coherence refers to the property that objects, or their transformed images, change only gradually from place to place; exploiting coherence is an important means of speeding up the sorting. Commonly used object-space methods include the polygon area-sorting algorithm and list-priority algorithms. The Z-buffer is the simplest image-space hidden surface algorithm: a depth buffer array avoids the complex sorting process, and for a fixed resolution the cost of the algorithm is simply proportional to the number of polygons. The algorithm is also easy to implement in hardware and to parallelize. Building on it, the scan-line Z-buffer algorithm exploits the coherence between polygons and pixels to improve efficiency further, and scan-line algorithms also provide a good hidden-surface foundation for simple illumination models.
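As a minimal sketch of the depth-buffer idea, the Python fragment below keeps, at each pixel, only the nearest sample seen so far. The `Polygon` objects with a `rasterize()` method and the smaller-z-is-nearer convention are assumptions made for this example, not part of the original text.

```python
import numpy as np

def zbuffer_render(polygons, width, height, background=0.0):
    """Minimal depth-buffer rasterizer sketch (illustrative interface)."""
    frame = np.full((height, width), background)   # color buffer
    depth = np.full((height, width), np.inf)       # z-buffer, +inf = farthest
    for poly in polygons:
        # rasterize() is assumed to yield (x, y, z, color) samples in screen space
        for x, y, z, color in poly.rasterize(width, height):
            if z < depth[y, x]:        # keep the sample only if it is nearer
                depth[y, x] = z
                frame[y, x] = color
    return frame
```

Because each sample is tested independently against the stored depth, no global sorting of polygons is needed, which is what makes the method so easy to parallelize.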
2 Simple Illumination Model

An illumination model is a mathematical model that computes, according to the laws of optics, the intensity and color of the light reaching the observer's eye from a point on an object's surface. The simple local illumination model assumes that the light sources are point sources and that the objects are opaque, so refraction is ignored; the reflected light is the sum of ambient light, diffusely reflected light, and specularly reflected light.

Ambient light reaches the object from all directions and is reflected uniformly in all directions. Its contribution is Ie = Ka · Ia, where Ia is the ambient light intensity (a constant) and Ka is the reflection coefficient of the surface for ambient light.

Diffusely reflected light is reflected uniformly in all directions around the object. By Lambert's cosine law its contribution is Id = Kd · Ip,j · cos θi, where Kd is the diffuse reflection coefficient of the surface, Ip,j is the intensity of the light arriving from point source j, and θi is the angle of incidence, i.e. the angle between the surface normal N and the unit vector Li toward the point source (cos θi = N · Li).

To simulate highlights, Bui Tuong Phong proposed the Phong specular reflection model Is = Ks · Ip,j · cos^n(αi), where Ks is the specular reflection coefficient of the surface, n is the exponent that controls how tightly the specularly reflected light is concentrated, and αi is the angle between the specular reflection direction Ri of light source j and the viewing direction V (cos αi = Ri · V).

In summary, the intensity reflected from any point on the surface toward the observer is I = Ie + Id + Is, or, written in terms of the unit vectors above, I = Ka·Ia + Σj Ip,j [ Kd (N·Lj) + Ks (Rj·V)^n ]. In practice the red, green, and blue components are processed separately.
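The combined formula can be evaluated directly; the following is a small Python sketch of this simple local model for a single channel. The list of `(Ip, L)` light pairs and the requirement that N and V be unit vectors are assumptions of the example.

```python
import numpy as np

def normalize(v):
    return v / np.linalg.norm(v)

def phong_intensity(N, V, lights, Ia, Ka, Kd, Ks, n):
    """I = Ka*Ia + sum_j Ip_j * (Kd*(N.Lj) + Ks*(Rj.V)**n), one channel."""
    I = Ka * Ia                               # ambient term
    for Ip, L in lights:                      # lights: list of (intensity, direction)
        L = normalize(L)
        nl = np.dot(N, L)                     # cos(theta_i) = N . L
        if nl <= 0.0:
            continue                          # light is behind the surface
        R = 2.0 * nl * N - L                  # specular reflection direction
        cos_alpha = max(np.dot(R, V), 0.0)    # cos(alpha_i) = R . V
        I += Ip * (Kd * nl + Ks * cos_alpha ** n)
    return I
```

In a color renderer this function would be called once per red, green, and blue component, as the text notes.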

In computer graphics curved surfaces are usually approximated by polygons, and two incremental shading algorithms are used to smooth the resulting discontinuities of brightness and color across polygon boundaries: bilinear intensity interpolation (Gouraud shading) and bilinear normal-vector interpolation (Phong shading). Gouraud's method is aimed mainly at the diffuse term of the simple illumination model: the average intensity at each polygon vertex is computed first, and the intensity at every interior point is then obtained by bilinear interpolation. The method is cheap, but it cannot completely eliminate the Mach band effect, there are degenerate cases that the algorithm handles poorly, and highlights are not rendered well. Phong's method also uses incremental linear interpolation, but what is interpolated is the averaged vertex normal; the interpolated normal at each pixel inside the polygon is then used to evaluate the illumination model. This overcomes some of the shortcomings of intensity interpolation and handles specular reflection well, but its cost is higher than Gouraud's method.

There are also many shadow generation algorithms based on local illumination models and shading. A shadow is a region that the light source cannot illuminate directly but that the viewer can see. In computer-generated realistic images, shadows convey the relative positions of the objects in the picture, add solidity and depth, and enrich the realism of the image. Shadows consist of two parts, the umbra and the penumbra; the umbra together with the surrounding penumbra forms a soft shadow region. A single point light source produces only an umbra; multiple point sources and linear sources also produce penumbrae. For objects represented by polygons, one way to compute the umbra is the shadow-volume polygon method, in which the shadow volume of an object is defined as the intersection of the viewing volume with the polyhedron swept out from the light source past the object's silhouette; the method can be implemented with an existing scan-line hidden surface algorithm. Atherton et al. proposed the surface-detail polygon method, which is based on the polygon area-sorting hidden surface algorithm and performs hidden surface removal from both the light source and the viewpoint. Both of these shadow generation methods apply only to scenes represented by polygons and cannot generate shadows on smooth curved surfaces. To address this, Williams proposed the shadow Z-buffer method: the scene is first rendered with the Z-buffer algorithm from the direction of the light source, and then rendered again with the Z-buffer algorithm from the viewpoint, testing each visible point against the depth buffer of the light source. This method conveniently handles arbitrarily complex scenes containing smooth surfaces, but it requires a large amount of storage and is prone to aliasing near shadow boundaries.
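The core of Williams's second pass is a single depth comparison per visible point; a minimal sketch follows, assuming a hypothetical `world_to_light` transform and a `light_depth` buffer already rendered from the light source (both are illustrative names, not from the original text).

```python
def in_shadow(p_world, light_depth, world_to_light, bias=1e-3):
    """Shadow Z-buffer test: is the visible point hidden from the light?"""
    u, v, d = world_to_light(p_world)     # assumed to give light-image coords + depth
    iu, iv = int(u), int(v)
    h, w = light_depth.shape
    if not (0 <= iu < w and 0 <= iv < h):
        return False                      # outside the light's view: treat as lit
    # shadowed if something nearer to the light was recorded at this cell
    return d > light_depth[iv, iu] + bias
```

The small `bias` term is a common guard against self-shadowing caused by the limited resolution of the light-source depth buffer, which is one source of the aliasing mentioned above.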
3 Global Illumination Models and Ray Tracing

The light arriving at an object comes not only from the light sources but also from light reflected or refracted by other objects. A local illumination model can handle only direct illumination; to simulate accurately the various interreflections among the objects in an environment, a global illumination model is required. Relative to the local model, the global contribution can be written as I_global = Kr·Ir + Kt·It, where I_global is the additional intensity at a point on the object; Ir is the intensity of light arriving from other objects along the specular reflection direction R of the line of sight and Kr is the reflection coefficient; It is the intensity of light arriving from other objects along the refraction direction T of the line of sight and Kt is the transmission (refraction) coefficient. Adding I_global to the result of the local illumination model gives the intensity seen at that point on the object.

The ray tracing algorithm is a typical global illumination method. It was first studied by Goldstein, Nagel, Appel, and others; Appel used ray tracing to compute shadows, and Whitted and Kay extended the algorithm to handle specular reflection and refraction. The basic idea is as follows: for each pixel on the screen, trace the ray from the viewpoint through that pixel and find its nearest intersection with the objects in the environment. At the intersection point the ray splits in two, and tracing continues recursively along the specular reflection direction and, for transparent objects, along the refraction direction. Each time a ray is reflected or refracted its contribution is attenuated by the reflection or refraction coefficient of the material, and when the contribution falls below a given threshold the tracing of that ray stops.
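The recursive structure just described can be sketched as follows. `scene.intersect`, `local_shade`, `reflected`, and `refracted` are hypothetical helpers, intensities are treated as single scalars for brevity, and the stopping rule mirrors the weight-threshold idea above; this is a skeleton under those assumptions, not a complete renderer.

```python
def trace(ray, scene, weight=1.0, threshold=0.01, max_depth=8, depth=0):
    """Recursive ray tracing skeleton: local shade + reflection + refraction."""
    if weight < threshold or depth > max_depth:
        return 0.0                         # contribution too small: stop tracing
    hit = scene.intersect(ray)             # nearest intersection, or None
    if hit is None:
        return scene.background
    color = hit.local_shade()              # local illumination incl. shadow rays
    m = hit.material
    if m.kr > 0:                           # follow the specular reflection ray
        color += m.kr * trace(reflected(ray, hit), scene,
                              weight * m.kr, threshold, max_depth, depth + 1)
    if m.kt > 0:                           # follow the refraction ray
        color += m.kt * trace(refracted(ray, hit), scene,
                              weight * m.kt, threshold, max_depth, depth + 1)
    return color
```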

Shadow handling in ray tracing is also very simple: a shadow test ray is cast from the intersection point toward each light source to determine whether any other object blocks it (if the blocking object is transparent, the transmitted intensity is attenuated further), thereby simulating umbrae and the shadows cast by transparent objects. Ray tracing naturally solves hidden surface removal, shadows, and the specular reflection and refraction among all the objects in the environment, can produce very realistic images, and is comparatively simple to implement. As a recursive algorithm, however, its computational cost is very high. Reducing the cost of the intersection calculations is the key to improving its efficiency; common techniques include spatial partitioning and other acceleration methods.

Ray tracing is a typical sampling process in which the brightness of each screen pixel is computed independently, so it suffers from aliasing, and the cost of the algorithm makes the traditional remedy of simply raising the sampling frequency difficult. Pixel subdivision is an anti-aliasing technique suited to ray tracing. The method is: first trace rays through the four corners of each pixel to obtain their brightness; then compare the brightness values at the corners, and if the difference is large, divide the pixel into four sub-regions and trace rays through the five new corner points; repeat the comparison and subdivision until the brightness difference between sub-regions falls below a given threshold; finally take a weighted average as the displayed brightness of the pixel. Unlike pixel subdivision, the distributed ray tracing proposed by Cook, Porter, and Carpenter is a stochastic sampling method: at each intersection point, several rays are traced within solid angles around the specular reflection and refraction directions according to a distribution function, and the results are weighted and averaged. Cook et al. also proposed using distributed stochastic sampling to simulate penumbrae, depth of field, and motion blur.

Another problem with ray tracing is that, because rays are cast from the viewpoint and shadow test rays require separate handling, light that reaches a surface only indirectly through reflection or refraction cannot be handled; for example, the effect of a mirror or a lens concentrating a light source is hard to simulate. To solve this, bidirectional tracing can be carried out from both the light source and the viewpoint. However, a large fraction of the rays cast from the light source never reach the screen at all, which greatly increases the cost of bidirectional ray tracing and makes it hard to use. The solution proposed by Heckbert and Hanrahan is to use tracing from the light source only as a supplement to conventional ray tracing from the viewpoint; Arvo's method pre-traces the rays cast from the light source; Shao Min and Peng Qunsheng have also proposed a bidirectional ray tracing algorithm that uses a spatial subdivision structure to optimize the rays cast from the light source.
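The pixel-subdivision loop described above is easy to express recursively. The sketch below assumes a hypothetical `shoot(x, y)` function that traces a ray through an image-plane point and returns its brightness; the threshold and recursion limit are illustrative, and a real implementation would cache corner samples rather than recompute them.

```python
def pixel_brightness(x0, y0, x1, y1, shoot, eps=0.02, depth=0, max_depth=3):
    """Adaptive pixel-subdivision anti-aliasing over the rectangle (x0,y0)-(x1,y1)."""
    corners = [shoot(x0, y0), shoot(x1, y0), shoot(x0, y1), shoot(x1, y1)]
    if depth >= max_depth or max(corners) - min(corners) < eps:
        return sum(corners) / 4.0            # corners agree: average and stop
    xm, ym = (x0 + x1) / 2.0, (y0 + y1) / 2.0
    # otherwise split into four sub-regions, recurse, and average the results
    return 0.25 * (pixel_brightness(x0, y0, xm, ym, shoot, eps, depth + 1, max_depth)
                 + pixel_brightness(xm, y0, x1, ym, shoot, eps, depth + 1, max_depth)
                 + pixel_brightness(x0, ym, xm, y1, shoot, eps, depth + 1, max_depth)
                 + pixel_brightness(xm, ym, x1, y1, shoot, eps, depth + 1, max_depth))
```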
4 Diffuse Interreflection and the Radiosity Method

Conventional illumination models treat the diffuse interreflection between objects as a constant ambient term, and even bidirectional ray tracing can handle only the specular reflection and refraction between objects, not their diffuse interreflection. The radiosity method, proposed by Goral et al. in 1984 and by Nishita et al., originated in methods of thermal engineering; it replaces the ambient term with the emission and reflection of radiant light energy and thereby handles the diffuse interreflection between objects accurately. The radiosity method treats the scene and the light sources as a closed system in which the light energy is in equilibrium, and assumes that all surfaces in the scene are ideal diffuse reflectors. The radiosity, denoted B, is the light energy leaving a unit area of surface per unit time; since an ideal diffuse surface reflects uniformly in all directions, the radiosity can be used to approximate the brightness of the surface. By conservation of energy the radiosity of each patch satisfies

Bi = Ei + ρi Σj Bj Fij

where Bi is the radiosity of the differential patch dAi on surface i; Ei is the light energy that dAi, if it is a light source, radiates uniformly into space; ρi is the reflectivity of dAi; Bj·Fij is the light energy radiated from surface j to dAi, Fij being the form factor of the differential patch dAj with respect to dAi.
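Given emission values, reflectivities, and a form-factor matrix, this balance equation can be solved by simple iteration. The Python sketch below uses a Jacobi-style gathering step; the input arrays are assumed to be given, and a production system would use a more careful solver or progressive refinement.

```python
import numpy as np

def solve_radiosity(E, rho, F, iterations=50):
    """Iterate B_i = E_i + rho_i * sum_j F[i, j] * B_j until (roughly) converged.

    E   : emitted energy per patch (nonzero only for light sources)
    rho : reflectivity per patch
    F   : N x N form-factor matrix, F[i, j] as defined in the text
    """
    E, rho = np.asarray(E, float), np.asarray(rho, float)
    B = E.copy()                       # start from the emitted energy
    for _ in range(iterations):
        B = E + rho * (F @ B)          # gather light arriving from all patches
    return B
```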

Every pair of patches in the environment is related in this way, so for a scene with N patches a system of N linear equations in the radiosities is obtained, in which Ei is nonzero only when patch i is itself an emitting surface and represents the source of energy in the system, while the form factors Fij depend only on the geometry of the scene. Solving this linear system for the radiosity Bi of each patch, which also equals the brightness Ii of the patch, is not difficult. With these patch values as a starting point, the brightness at the patch vertices can be computed; finally, for a particular viewpoint, bilinear interpolation gives the brightness of every screen pixel and the final image is generated.

The main computational cost of the radiosity method lies in the form factors. The hemicube method proposed by Cohen and Greenberg is an efficient way to approximate the form factors in a closed environment. First a hemicube is erected over the center of patch i, with the patch normal as the z-axis, and its five faces are divided into a uniform grid whose per-cell delta form factors can be precomputed (a small sketch of this precomputation is given below). Then all the other patches in the scene are projected onto the hemicube; when several patches project onto the same grid cell, only the nearest one is kept, a process equivalent to the Z-buffer algorithm. Finally, summing the delta form factors of all the grid cells covered by patch j gives the form factor Fij of patch j with respect to patch i.

The advantage of the radiosity method is that the solution is independent of the viewpoint, so the radiosity computation and the image generation can be carried out separately, and effects such as color bleeding can be simulated; its drawback is that specular reflection and refraction cannot be handled. In the basic radiosity method a patch is characterized only by its total radiosity, regardless of direction. Immel, Cohen, and Greenberg generalized the method: instead of a single radiosity per patch, the hemisphere of directions above each patch is divided into a finite number of solid angles and the incoming and outgoing energy is computed for each of them, with a bidirectional reflectance function determining how much energy is radiated into a given direction; the intensity at each vertex is then interpolated from the radiosities in the several directions closest to the viewing direction, and the image is finally generated. This extension can handle complex scenes containing mirrors and transparent objects, but its time and space overhead are enormous.
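For the top face of a unit hemicube (top face at height 1, side length 2), a cell centered at (x, y, 1) with area dA has the standard delta form factor dF = dA / (π (x² + y² + 1)²). The sketch below precomputes these values; the grid resolution parameter and face layout are assumptions of the example.

```python
import numpy as np

def hemicube_top_delta_form_factors(res):
    """Delta form factors for the res x res grid on the top face of a unit hemicube.

    Summing the entries covered by the projection of patch j approximates
    the top-face contribution to the form factor F_ij.
    """
    dA = (2.0 / res) ** 2                               # area of one grid cell
    centers = (np.arange(res) + 0.5) * (2.0 / res) - 1.0  # cell centers in (-1, 1)
    x, y = np.meshgrid(centers, centers)
    return dA / (np.pi * (x ** 2 + y ** 2 + 1.0) ** 2)
```

The four side faces have analogous precomputed tables; because the tables depend only on the hemicube geometry, they are computed once and reused for every patch.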
Another approach is to combine radiosity with ray tracing. Simply adding the results of the two methods is not sufficient, because the exchange of illumination between diffuse surfaces and mirrors must also be handled. Wallace, Cohen, and Greenberg proposed a two-pass method: the first pass is a viewpoint-independent radiosity computation in which the influence of mirrors on the radiosities is taken into account, simulated with a mirror-world (virtual image) technique; the second pass is a viewpoint-dependent ray tracing computation that handles the global specular reflections and refractions and generates the image. The efficiency of the algorithm hinges on the first pass: the mirror-world simplification only has to handle reflection in ideal planar mirrors, by adding corrected form factors, but the number of form factors to be computed grows markedly as the number of mirrors increases. Sillion and Puech extended this two-pass method further: instead of the mirror-world technique, the first pass uses recursive ray tracing to compute the form factors, so that scenes with any number of mirrors and transparent surfaces can be handled.

5 Texture Mapping

Texture mapping is the process of adding surface detail by covering an object's surface with, or projecting onto it, a digitized texture image. Texture images can be obtained by sampling or generated by mathematical functions. Much surface detail is difficult to represent with polygonal approximation or other geometric modeling methods, so texture mapping can make computer-generated objects look considerably more realistic. Texture mapping was first proposed by Catmull and, after improvements by Blinn and Newell, came into wide use and became an important technique in computer graphics. Mapping a texture onto the surface of an object can be thought of as projecting each screen pixel into the corresponding region of texture space and computing the average color over that region, which yields the best approximation to the true color of the pixel.
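As a minimal illustration of looking up a texture value for a surface point, here is a bilinear sampling sketch in Python. The array layout and the clamping behavior are assumptions of the example, and a full implementation would average over the pixel's projected footprint in texture space, as described above, rather than take a single point sample.

```python
import numpy as np

def sample_texture(tex, u, v):
    """Bilinearly sample a texture image at coordinates (u, v) in [0, 1]."""
    h, w = tex.shape[:2]
    x = np.clip(u, 0.0, 1.0) * (w - 1)       # map (u, v) to pixel coordinates
    y = np.clip(v, 0.0, 1.0) * (h - 1)
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0                  # fractional offsets inside the cell
    top    = (1 - fx) * tex[y0, x0] + fx * tex[y0, x1]
    bottom = (1 - fx) * tex[y1, x0] + fx * tex[y1, x1]
    return (1 - fy) * top + fy * bottom
```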

Please credit the original source when reposting: https://www.9cbs.com/read-4982.html
