When trying to improve VB performance, people often benchmark an algorithm's strengths and weaknesses while ignoring the "algorithm" of the test itself! Here I really want to tell a story: I ran into exactly this problem while studying someone's alpha-blending code. He compared his VB algorithm against the AlphaBlend API, and his results showed that the VB version was faster. Since I had also written alpha-blending code, I analyzed his source and found two differences. First, his code was based on a DIB while mine was based on a DDB, so his speed was consistent, while mine naturally varied with the display's color depth (16-bit color is always slower to process). Second, he accessed the pixels through a SafeArray structure pointer and I did not. I could not use such a pointer for a good reason: his DIB was a DIBSection with the data pre-loaded into it, which hands you a data pointer directly, saving the extra copy step that a DDB requires.
Pointers are not some unique trick of VB, so could AlphaBlend have used one too? Looking at the test method in that code, I understood: the DIB was pre-loaded, and only the computation part was timed, so AlphaBlend got no such head start. Although AlphaBlend operated on that same DIB, it did not have the DIB's data pointer; it still had to fetch the DIB's data, process it, write it back into the DIB, and then VB output it to the screen, which is of course slower.
In fact, processing with a DIB really is simpler, because you can unify the bitmap data format so that one algorithm handles every color depth. So why didn't I use a DIB? For one simple reason: DIBs are slow in practice. An alpha effect is a dynamic on-screen effect, and the original image is a DDB, so in a real application there is no ready-made DIB available. To use a DIB you must convert first, blitting the DDB into a newly created DIBSection. Do not underestimate this conversion: in my tests, when a complete alpha operation was done this way, blitting the DDB into the DIBSection accounted for 70% of the total processing time!!!
Another surprise: once compiled, an algorithm written with array indexing runs at almost the same speed as the same algorithm written with pointers. So when I used "read the source from a DDB, write the result back to a DDB" as the test procedure, my DDB-based routine was more than 3 times faster than the DIB-based one, yet still much slower than AlphaBlend. And that is how my use of pointers in VB began.
Therefore, benchmarks must never stray from the practical application, or you really will go astray. Bluedog's several functions, apart from the assembly version, use the same methods I use in my own processing; but I do that work inline as part of the processing, which is naturally faster than calling out to a function. The assembly version is, I estimate, about on par with calling a C-written function to do the shift operations. I was disappointed by CopyMemory's speed: it is a function call, and I dare not hold too many illusions about it.
Based on results I have tested repeatedly, I have long wanted to state a conclusion that will shock many people: CopyMemory is sometimes no faster than assigning values directly from the array!
The old conclusion to the contrary came from a misunderstanding of the test conditions. When we test, comparisons between different algorithms are almost always made in the debugging environment. In the debugging environment, replacing a copy loop over the array with CopyMemory does improve speed. Why? Because CopyMemory is an already-compiled function! Once the loop itself is compiled to native code, it can be faster than the function call, because doing the same work inline saves the overhead of the call. However, as the amount of data grows, CopyMemory's call overhead stops mattering and the loop falls behind, so for large blocks CopyMemory is still recommended. The break-even length is not fixed and must be determined by testing; generally, at around 32 bytes and up, the function call becomes worthwhile.