Arbitrary deformation effects of graphic images in C# with GDI+?


How can one implement arbitrary deformation effects similar to Photoshop's? The current GDI+ can easily map an image onto any parallelogram, but it cannot map it onto a trapezoid, a triangle or an arbitrary quadrilateral (for example, one of the twist/distort deformation effects). In the next-generation operating system Vista, WPF's three-dimensional mapping of graphics and images can solve this (see my earlier post on building a prism with trapezoidal sides in Expression Blend), but is there a more direct algorithm?
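For reference, here is a minimal C# sketch of the parallelogram mapping GDI+ does support: the DrawImage overload that takes three destination points (upper-left, upper-right and lower-left corners). The fourth corner is implied, so the destination is always a parallelogram, which is exactly why a trapezoid or an arbitrary quadrilateral cannot be produced this way. The file names are placeholders.

    using System.Drawing;

    class ParallelogramDemo
    {
        static void Main()
        {
            using (Bitmap src = new Bitmap("source.png"))           // placeholder input file
            using (Bitmap dst = new Bitmap(src.Width + 80, src.Height + 40))
            using (Graphics g = Graphics.FromImage(dst))
            {
                // Three destination points: upper-left, upper-right, lower-left.
                // GDI+ derives the fourth corner itself, so the result is always a
                // sheared parallelogram, never a trapezoid or arbitrary quadrilateral.
                Point[] destPoints =
                {
                    new Point(40, 0),                 // upper-left corner of the destination
                    new Point(src.Width + 40, 20),    // upper-right corner
                    new Point(0, src.Height)          // lower-left corner
                };
                g.DrawImage(src, destPoints);
                dst.Save("sheared.png");              // placeholder output file
            }
        }
    }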
Searching online, I found the following paper:
Two-Dimensional Deformation of Color Images (author: Xiang Hui)

Abstract: This article discusses deformation techniques for color images and gives a fast and satisfactory algorithm for two-dimensional deformation.
Keywords: deformation, inverse transform, bilinear interpolation, incremental calculation

1. In order to obtain particular effects, it is often necessary to transform an image onto a two-dimensional region with an arbitrary irregular boundary, or onto a surface in three-dimensional space; this, simply put, is the so-called image deformation (warping) technique. This article focuses on the deformation of an image onto an arbitrary two-dimensional polygonal region and gives a practical algorithm for color images. In the three-dimensional case the problem belongs to texture mapping in computer graphics; it generally also involves hidden-surface removal and shading of three-dimensional graphics and is rather complicated, so it is not discussed in depth here.

2. The Transformation Principle

The two-dimensional deformation problem discussed here can be stated as follows: the image is defined over a rectangular area ABCD; the source polygon P = P1P2...PnP1 (with vertices Pi, i = 1, 2, ..., n) is completely contained in ABCD; the deformation converts the image on P, through a transform F, onto the destination polygon Q = Q1Q2...QnQ1 (with vertices Qi, i = 1, 2, ..., n), where the vertices of P and Q correspond one to one, that is, Qi = F(Pi) (i = 1, 2, ..., n). Figure 1 is a simple example of such a deformation: the source polygon is the rectangle ABCD itself, the destination polygon is an arbitrary quadrilateral EFGH, and the shaded portions clearly show the effect before and after the transformation.

(Figure 1: t5s13200.gif)

So how should the transformation be carried out? A direct idea is to work out an explicit expression for the transform F. F can be applied in two ways. The first is the forward transformation: use F to map every pixel in P to a point in Q and assign it the pixel value there. Since the numbers of pixels contained in P and Q are generally different, and may differ greatly, some destination pixels are never assigned, forming annoying holes, while others are assigned several times, wasting time; the overall effect is not ideal. The second is to use the inverse transform F^-1: map each pixel in Q back to its corresponding point in P, which generally has real (non-integer) coordinates, and determine its pixel value by interpolation. In this way every pixel of the result image is assigned exactly once, which both improves accuracy and avoids redundant assignments, so the result is better.

Working out an explicit expression for the transform (or its inverse) is more accurate, but it often involves solving complicated multivariate equations and is not easy to do. The idea adopted in this paper is different: the vertices of P and Q form transformation pairs, that is, any vertex Pi (i = 1, 2, ..., n) of the source polygon P is mapped to the vertex Qi (i = 1, 2, ..., n) of the destination polygon Q, so the inverse image of Qi must be Pi. Therefore, for each pixel A inside Q, the inverse-transform coordinates of the vertices can be used to approximate its inverse-transform point B by bilinear interpolation; the coordinates of B are then used to interpolate in the source image, and the resulting pixel value is displayed at A.
This second method avoids explicitly solving for the transform expression while retaining a certain precision, and it is simple. Based on this idea, this paper designs a fast deformation algorithm; in addition, the algorithm borrows the scan-conversion technique for polygonal regions in order to scan every pixel in Q efficiently. Below, the paper first introduces the interpolation and incremental-calculation techniques, and then gives the detailed steps of the two-dimensional deformation algorithm.
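Before going into the details, here is a minimal C# skeleton of the destination-to-source loop the paper describes. InverseMap and SampleBilinear are placeholders of my own for the bilinear vertex interpolation and the color interpolation covered in the following sections, and pixelsInsideQ is assumed to come from scan-converting the destination polygon.

    using System;
    using System.Collections.Generic;
    using System.Drawing;

    static class InverseWarp
    {
        // Placeholder: approximate the inverse-transform point of destination pixel (x, y),
        // i.e. the point of the source polygon P that maps to it, via the bilinear vertex
        // interpolation described in section 3.
        static PointF InverseMap(int x, int y)
        {
            return PointF.Empty; // stub
        }

        // Placeholder: bilinear color interpolation in the source image at a real-valued
        // coordinate, as the paper (and [1]) recommend.
        static Color SampleBilinear(Bitmap source, PointF p)
        {
            return Color.Black; // stub
        }

        // For every pixel A inside the destination polygon Q, find its (approximate)
        // inverse image B in the source and copy the interpolated color. Each destination
        // pixel is written exactly once, so there are no holes and no double work.
        static void Warp(Bitmap source, Bitmap destination, IEnumerable<Point> pixelsInsideQ)
        {
            foreach (Point a in pixelsInsideQ)
            {
                PointF b = InverseMap(a.X, a.Y);   // real-valued point in the source polygon P
                destination.SetPixel(a.X, a.Y, SampleBilinear(source, b));
            }
        }
    }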

3. Interpolation Technique

Knowing the inverse-transform coordinates of each vertex Qi (i = 1, 2, ..., n), how do we find the inverse-transform coordinates of an arbitrary pixel inside Q? Bilinear interpolation is a simple and fast approximate method. The specific procedure is: first, from the inverse-transform coordinates of the polygon vertices Qi, interpolate along the edges to obtain the inverse-transform coordinates of the intersections of the current scan line with the polygon; then interpolate between these intersections to obtain the inverse-transform coordinates of every pixel on the portion of the scan line lying inside the polygon. After all scan lines have been processed, the inverse-transform coordinates of every pixel in Q are available.

As shown in Figure 2, the scan line y (ordinate = y) intersects the polygon at points A and B, and D is an arbitrary point on the segment AB inside the polygon. Let the inverse-transform coordinates of the three vertices Qi (i = 1, 2, 3) of the polygon be (rxi, ryi), and let the inverse-transform coordinates of A, B and D be (rxA, ryA), (rxB, ryB) and (rxD, ryD) respectively. Then rxD can be found as follows:

    rxA = u·rx1 + (1 - u)·rx2    (Formula 1)
    rxB = v·rx1 + (1 - v)·rx3    (Formula 2)
    rxD = t·rxA + (1 - t)·rxB    (Formula 3)

where u = |AQ2|/|Q1Q2|, v = |BQ3|/|Q1Q3| and t = |DB|/|AB| are called the interpolation parameters. The value of ryD is found in a completely analogous way; the interpolation parameters do not even need to be recomputed. (rxD, ryD) is the approximation of the coordinates of the point in the original image that corresponds to D.

(Figure 2: t5s13201.gif)

The bilinear interpolation above can be sped up by incremental calculation. In the horizontal direction, the inverse-transform coordinates of the pixels on the span inside the polygon can be computed incrementally from left to right along the scan line. Take the inverse-transform x coordinate as an example. As shown in Figure 2, on scan line y, C and D are adjacent pixels; for point C the interpolation parameter is tC = |CB|/|AB|, for point D it is tD = |DB|/|AB|, and the difference of the interpolation parameters is Δt = |CD|/|AB|. Since C and D are adjacent and lie on the same scan line, |CD| = 1, so Δt = 1/|AB|, which is constant on the segment AB. From Formulas 1 to 3, the inverse-transform x coordinates of D and C are related by

    rxD = rxC + (rxA - rxB)·Δt = rxC + Δrxx

Since Δrxx is constant on the segment AB, the inverse-transform x coordinate of every pixel on AB can be computed incrementally, starting from rxA at point A. The incremental computation of the inverse-transform y coordinate is exactly the same. In this way, computing the inverse-transform coordinates of each pixel on AB is reduced to two additions, and the time saved is remarkable.

In fact, in the vertical direction, the inverse-transform coordinates of the intersections of each edge with successive scan lines can also be computed incrementally. Take the edge Q1Q2 in Figure 2, which intersects the adjacent scan lines y and y - 1 at points A and E respectively. The difference between their interpolation parameters is Δu = |AE|/|Q1Q2|. Let θ be the angle between the edge Q1Q2 and the scan line; since the y coordinates of A and E differ by 1, |AE| = 1/sin θ, which is constant for Q1Q2, so Δu is also constant on this edge. The inverse-transform x coordinates of the two points are therefore related by

    rxA = rxE + (rx1 - rx2)·Δu = rxE + Δrxy

Obviously Δrxy is also a constant along Q1Q2, so the inverse-transform coordinates of the intersections of successive scan lines with each edge can be obtained with only a few floating-point additions per scan line. Thus the inverse transformation of every pixel in the region can be carried out efficiently by incremental calculation, which greatly increases the speed of the entire deformation algorithm.
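As an illustration of the horizontal incremental computation, the following small C# sketch (names of my own choosing, not the paper's) walks one span from its left intersection to its right intersection, updating the inverse-transform coordinates with one addition each per pixel:

    using System;

    static class SpanWalker
    {
        // Walk one scan-line span [xLeft, xRight] inside Q. (lrx, lry) and (rrx, rry) are
        // the inverse-transform coordinates of its left and right endpoints. After the two
        // initial divisions, each pixel costs only two additions, as the paper observes.
        static void WalkSpan(int xLeft, int xRight,
                             float lrx, float lry, float rrx, float rry,
                             Action<int, float, float> visit)
        {
            if (xRight <= xLeft) { visit(xLeft, lrx, lry); return; }

            float drxx = (rrx - lrx) / (xRight - xLeft);  // constant rx increment along the span
            float dryx = (rry - lry) / (xRight - xLeft);  // constant ry increment along the span

            float rx = lrx, ry = lry;
            for (int x = xLeft; x <= xRight; x++)
            {
                visit(x, rx, ry);   // e.g. sample the source at (rx, ry) and set pixel (x, y)
                rx += drxx;         // one addition for rx
                ry += dryx;         // one addition for ry
            }
        }
    }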

In addition, as mentioned earlier, the inverse-transform point generally has real (non-integer) coordinates, so its color value cannot be read directly from the original image. A so-called digital image, however, is just a continuous image discretized at the integer grid points, so interpolation can be used to obtain a color value at any point of the area. Interpolation computes the color value at an arbitrary coordinate from the color values of several surrounding pixels (whose coordinates are integers and whose color values are known). Common methods include the nearest-neighbor method, bilinear interpolation and the cubic sampling-function method; weighing the requirements of accuracy against speed, bilinear interpolation is appropriate for the vast majority of applications. It should be particularly pointed out that the interpolation must be performed on the three primary color components, not on the color index number read from the pixel. A detailed discussion can be found in [1].

4. The Two-Dimensional Deformation Algorithm for Color Images

The algorithm given below is a scan-line algorithm for color images: it borrows the classified data structures used in polygon scan conversion, scans the destination polygonal region, and applies the techniques discussed above to every pixel encountered. First, the data structures, described in C:

    struct Edge {
        float x;            /* in the edge table ET: the x coordinate of the lower endpoint of the edge;
                               in the active edge list AEL: the x coordinate of the intersection of the
                               edge with the current scan line */
        float dx;           /* reciprocal of the slope, i.e. the increment of x from one scan line to the next */
        int   ymax;         /* y coordinate of the upper endpoint of the edge */
        float rx, ry;       /* in ET: the inverse-transform coordinates of the lower endpoint;
                               in AEL: the inverse-transform coordinates of the intersection of the edge
                               with the current scan line */
        float drx, dry;     /* increments of the inverse-transform coordinates (rx, ry) from one scan line
                               to the next */
        struct Edge *next;  /* pointer to the next edge */
    };                                 /* information for one polygon edge */

    struct Edge *ET[YRESOLUTION];      /* edge classification table: the non-horizontal edges are bucketed
                                          by the ordinate y of their lower endpoint; an edge whose lower
                                          endpoint has y = i belongs to class i; within a class the edges
                                          are ordered by their x value and then by their dx value;
                                          YRESOLUTION is the number of scan lines */
    struct Edge *AEL;                  /* active edge list: all polygon edges intersecting the current scan
                                          line, recording the order of the intersections of the current
                                          scan line with the polygon edges */

    struct Point {
        int   x, y;                    /* vertex coordinates */
        float rx, ry;                  /* inverse-transform coordinates */
    };                                 /* information for one polygon vertex */

    struct Polygon {
        int nPts;                      /* number of polygon vertices */
        struct Point *pts;             /* vertex sequence */
    };                                 /* polygon information */
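The interpolation in the source image, done on the R, G and B components as stressed above, can be sketched in C# as follows (my own illustration, not code from the paper; GetPixel is slow and used only for clarity, a real implementation would lock the bitmap bits):

    using System;
    using System.Drawing;

    static class Sampler
    {
        // Bilinear interpolation of the R, G and B components at the real-valued point
        // (rx, ry): the four surrounding integer-grid pixels are weighted by the
        // fractional parts of the coordinates.
        public static Color SampleBilinear(Bitmap source, float rx, float ry)
        {
            int x0 = (int)Math.Floor(rx), y0 = (int)Math.Floor(ry);
            // Clamp so that the 2x2 neighbourhood stays inside the image.
            x0 = Math.Max(0, Math.Min(source.Width  - 2, x0));
            y0 = Math.Max(0, Math.Min(source.Height - 2, y0));
            float fx = Math.Max(0f, Math.Min(1f, rx - x0));
            float fy = Math.Max(0f, Math.Min(1f, ry - y0));

            Color c00 = source.GetPixel(x0,     y0);
            Color c10 = source.GetPixel(x0 + 1, y0);
            Color c01 = source.GetPixel(x0,     y0 + 1);
            Color c11 = source.GetPixel(x0 + 1, y0 + 1);

            // Interpolate each primary component separately, never a palette index.
            Func<int, int, int, int, int> lerp2 = (a, b, c, d) =>
                (int)Math.Round((a * (1 - fx) + b * fx) * (1 - fy)
                              + (c * (1 - fx) + d * fx) * fy);

            return Color.FromArgb(
                lerp2(c00.R, c10.R, c01.R, c11.R),
                lerp2(c00.G, c10.G, c01.G, c11.G),
                lerp2(c00.B, c10.B, c01.B, c11.B));
        }
    }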

Note that the "lower endpoint" mentioned in the comments of the Edge structure above refers to the endpoint of an edge with the smaller ordinate; the other endpoint is the upper endpoint. The detailed steps of the algorithm are as follows.

1. Data preparation. For each non-horizontal edge QiQi+1, take the coordinates (xi, yi) and (xi+1, yi+1) of Qi and Qi+1 and their inverse-transform coordinates (rxi, ryi) and (rxi+1, ryi+1), and fill in the fields of the edge's information structure:

    x  = xi   if yi < yi+1, otherwise xi+1
    rx = rxi  if yi < yi+1, otherwise rxi+1
    ry = ryi  if yi < yi+1, otherwise ryi+1
    dx = (xi - xi+1) / (yi - yi+1)
    ymax = max(yi, yi+1)
    drx = (rxi - rxi+1) / (yi - yi+1)
    dry = (ryi - ryi+1) / (yi - yi+1)

Then insert the structure into the list ET[min(yi, yi+1)]. The active edge list AEL starts out empty. The current scan-line ordinate y is set to 0, the smallest scan-line number.

2. Scan conversion. Repeat the following steps until y equals YRESOLUTION.

(1) If ET[y] is non-empty, insert all of its edges into the AEL.
(2) If the AEL is non-empty, sort its edges by their x values (and by dx where the x values are equal) from small to large and go to (3); otherwise go to (5).
(3) Pair the edges of the AEL off two by two. Each pair forms a horizontal span [xLeft, xRight] along the current scan line y; let the inverse-transform coordinates of its left and right endpoints be (lrx, lry) and (rrx, rry). For each such span do the following. Compute

    drxx = (lrx - rrx) / (xLeft - xRight)
    dryx = (lry - rry) / (xLeft - xRight)

Assume the original image has been read into the two-dimensional array image. Let xx = xLeft, rxx = lrx, ryx = lry. For each pixel with coordinates (xx, y) satisfying xLeft ≤ xx ≤ xRight, its inverse-transform coordinates (rxx, ryx) are obtained incrementally by

    rxx = rxx + drxx
    ryx = ryx + dryx

Interpolate in the array image at (rxx, ryx) (see [1]) and display the pixel with the resulting color value. Then set xx = xx + 1 and process the next pixel.
(4) Delete from the AEL the edges for which y = ymax, then update the information of each remaining edge of the AEL:

    x  = x + dx
    rx = rx + drx
    ry = ry + dry

(5) Set y = y + 1 and go back to (1) for the next scan line.
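For step 1, here is a small C# sketch of my own showing how one edge record could be initialized from the two endpoints of a non-horizontal edge (the fields mirror the C structure given earlier):

    using System;

    // One edge record, mirroring the fields of the C struct Edge above.
    sealed class EdgeRec
    {
        public float X, Dx, Rx, Ry, Drx, Dry;
        public int YMax;
    }

    static class EdgeBuilder
    {
        // Build the record for the edge from vertex (xi, yi) to vertex (xj, yj), following
        // the formulas of step 1: the "lower endpoint" fields come from the endpoint with
        // the smaller ordinate, and all increments are per unit step in y.
        public static EdgeRec MakeEdge(int xi, int yi, float rxi, float ryi,
                                       int xj, int yj, float rxj, float ryj)
        {
            float dy = yi - yj;              // never zero, since horizontal edges are skipped
            bool iIsLower = yi < yj;
            return new EdgeRec
            {
                X    = iIsLower ? xi : xj,
                Rx   = iIsLower ? rxi : rxj,
                Ry   = iIsLower ? ryi : ryj,
                Dx   = (xi - xj) / dy,       // reciprocal of the slope
                Drx  = (rxi - rxj) / dy,
                Dry  = (ryi - ryj) / dy,
                YMax = Math.Max(yi, yj)      // the record is then inserted into ET[min(yi, yj)]
            };
        }
    }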
5. Discussion

The above algorithm gives a simple and fast implementation for the two-dimensional deformation of color images. Three-dimensional deformation is more complicated, because it generally involves hidden-surface elimination. In some cases, however, the hidden-surface problem can be avoided: if the target surface is relatively simple and none of its parts overlap after projection onto the screen, then no hidden-surface (blanking) technique is needed and the surface can be projected directly; in that case the two-dimensional deformation technique described in this article can still be used.
The method is to approximate the surface with many small planar polygons and project them onto the screen, forming two-dimensional polygons. Once the correspondence between these small polygons and the original image is determined, the three-dimensional problem is reduced to a two-dimensional one; it is fast and can still achieve a convincing effect. If hidden-surface (blanking) techniques are mastered as well, deformation onto an arbitrary surface can be handled with the same idea.
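A hedged C# sketch of this subdivision idea, assuming a half cylinder as the target surface (chosen precisely because its projection onto the screen does not overlap itself, so no hidden-surface removal is needed); WarpQuad stands in for the two-dimensional warp of section 4 and is left as a stub here:

    using System;
    using System.Drawing;

    static class SurfaceWrap
    {
        // Placeholder for the two-dimensional polygon warp of section 4: it should map
        // the source-image rectangle srcRect onto the destination quadrilateral destQuad.
        static void WarpQuad(Bitmap src, Rectangle srcRect, Bitmap dst, PointF[] destQuad)
        {
            /* ... the scan-line algorithm described above ... */
        }

        // Orthographic projection of a point on a half cylinder of radius r
        // (u in [0,1] sweeps the half circumference, v in [0,1] runs along the axis).
        static PointF Project(double u, double v, double r, double height)
        {
            double theta = Math.PI * u;
            return new PointF((float)(r * (1.0 - Math.Cos(theta))), (float)(v * height));
        }

        // Approximate the surface by an nu x nv grid of small quadrilaterals and warp the
        // corresponding piece of the source image into each projected quad.
        static void MapOntoCylinder(Bitmap src, Bitmap dst, int nu, int nv, double r)
        {
            for (int i = 0; i < nu; i++)
            for (int j = 0; j < nv; j++)
            {
                Rectangle srcRect = new Rectangle(i * src.Width / nu, j * src.Height / nv,
                                                  src.Width / nu, src.Height / nv);
                double u0 = (double)i / nu, u1 = (double)(i + 1) / nu;
                double v0 = (double)j / nv, v1 = (double)(j + 1) / nv;
                PointF[] quad =
                {
                    Project(u0, v0, r, dst.Height), Project(u1, v0, r, dst.Height),
                    Project(u1, v1, r, dst.Height), Project(u0, v1, r, dst.Height)
                };
                WarpQuad(src, srcRect, dst, quad);   // one small 2D deformation per patch
            }
        }
    }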

References

[1] Xiang Hui, "Realistic Image Color Interpolation and Its Application", Computer World monthly, October 1992.

