These are notes on interpolation that I came across in several digital image processing books and graphics references; the material was scattered, so I have organized it here.
Practice has shown that these interpolation algorithms give perfectly acceptable results for modest scale factors; in general, for any image, the results remain acceptable when the scale factor lies roughly between 0.5x and 3.0x.

Nearest-neighbor interpolation (the nearest-neighbor sampling method) is the simplest. For the floating-point source coordinate obtained by the inverse transform, simply truncate it to an integer coordinate, and take the pixel value at that integer coordinate as the value of the destination pixel. In other words, the destination pixel receives the value stored at the neighbor to the upper left of the floating-point coordinate (for a DIB this is the lower-left neighbor, because its scan lines are stored in reverse, bottom-up order). As can be seen, nearest-neighbor interpolation is simple and intuitive, but the resulting image quality is not high.

For a destination pixel, suppose the inverse transform yields the floating-point coordinate (i+u, j+v), where i and j are non-negative integers and u and v lie in the interval [0, 1). Then the pixel value f(i+u, j+v) can be determined from the four surrounding pixels of the original image at coordinates (i, j), (i, j+1), (i+1, j), and (i+1, j+1):

f(i+u, j+v) = (1-u)(1-v) f(i, j) + (1-u) v f(i, j+1) + u (1-v) f(i+1, j) + u v f(i+1, j+1)

where f(i, j) denotes the pixel value of the source image at (i, j). This is the bilinear interpolation method. Bilinear interpolation requires more computation, but the image quality after scaling is high, and the pixel values show no discontinuities.
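The two sampling rules above can be sketched in code. This is a minimal illustration, not an optimized implementation: it assumes a grayscale image stored as a list of rows in top-down order (not a bottom-up DIB), treats the first index as the row, and the helper names (`nearest_sample`, `bilinear_sample`, `scale`) are my own.

```python
def nearest_sample(img, x, y):
    # Nearest-neighbor rule from the text: truncate the floating-point
    # coordinate to its integer part and read that source pixel.
    return img[int(x)][int(y)]

def bilinear_sample(img, x, y):
    # Bilinear rule from the text: write (x, y) as (i+u, j+v) with
    # integer i, j and u, v in [0, 1), then blend the four neighbors.
    i, j = int(x), int(y)
    u, v = x - i, y - j
    # Clamp the (i+1, j+1) neighbors so sampling near the border stays valid.
    i1 = min(i + 1, len(img) - 1)
    j1 = min(j + 1, len(img[0]) - 1)
    return ((1 - u) * (1 - v) * img[i][j]
            + (1 - u) * v       * img[i][j1]
            + u       * (1 - v) * img[i1][j]
            + u       * v       * img[i1][j1])

def scale(img, factor, sample=bilinear_sample):
    # Inverse mapping: for each destination pixel, transform its
    # coordinate back into the source image and sample there.
    h, w = len(img), len(img[0])
    return [[sample(img, di / factor, dj / factor)
             for dj in range(int(w * factor))]
            for di in range(int(h * factor))]
```

For example, sampling the 2x2 image `[[0, 100], [100, 200]]` at the center coordinate (0.5, 0.5) gives 0 with `nearest_sample` (it truncates to pixel (0, 0)) but 100.0 with `bilinear_sample` (the average of all four neighbors), which illustrates why bilinear results vary smoothly while nearest-neighbor results jump.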