This series covers common optimization techniques for C programs, divided into four parts: I/O, memory, algorithms, and MMX assembly.
Part Two: Memory
The previous part discussed how to optimize reads and writes. This part covers the optimization of memory operations, mainly array addressing and linked lists, along with some practical tricks.
I. Optimize array addressing
When writing a program, we often use a one-dimensional array a[m*n] to simulate a two-dimensional array a[n][m]. To access this one-dimensional array we then write a[j*m+i] (standing for a[j][i]). This is of course perfectly correct, but notice that every such addressing expression j*m+i costs a multiplication. Now let's look at how a two-dimensional array is addressed. To explain that, we have to dig into how the C compiler handles two-dimensional versus one-dimensional arrays: the compiler treats the two differently, and a (dynamically allocated, pointer-to-pointer) array a[n][m] actually occupies more space than an array a[m*n]. The structure of such a two-dimensional array has two parts:
1. A pointer array that stores the starting address of each row. This is why, in a[n][m], a[j] is a pointer rather than the datum a[j][0].
2. The real m*n contiguous data block. This explains why a two-dimensional array can also be addressed as if it were one-dimensional (i.e., a[j][i] is equivalent to (a[0])[j*m+i]).
Once this is clear, we can see that addressing such a two-dimensional array is more efficient than addressing the simulated one-dimensional array: evaluating a[j][i] just fetches the start address of row j and then adds i; no multiplication is involved!
Therefore, when we do work with a one-dimensional array, we often apply the following optimization (pseudocode):
int a[m * n];
int *b = a;
for (...) {
    b[...] = ...;
    ...
    b[...] = ...;
    b += m;    /* move b to the head of the next row */
}
This is an optimized sketch of traversing the array row by row: at the end of each pass, b += m updates b to point at the head of the next row. Of course, if you prefer, you can instead build an array of pointers holding the start address of each row and then address the one-dimensional data the same way a two-dimensional array is addressed. But here I would suggest you simply allocate a real two-dimensional array. Below is C code that dynamically allocates and frees a two-dimensional array.
int get_mem2Dint(int ***array2D, int rows, int columns)   // from the H.263 reference source
{
    int i;

    if ((*array2D = (int **)calloc(rows, sizeof(int *))) == NULL)
        no_mem_exit(1);
    if (((*array2D)[0] = (int *)calloc(rows * columns, sizeof(int))) == NULL)
        no_mem_exit(1);
    for (i = 1; i < rows; i++)
        (*array2D)[i] = (*array2D)[i - 1] + columns;

    return rows * columns * sizeof(int);
}

void free_mem2D(byte **array2D)
{
    if (array2D) {
        if (array2D[0])
            free(array2D[0]);
        else
            error("free_mem2D: trying to free unused memory", 100);
        free(array2D);
    } else {
        error("free_mem2D: trying to free unused memory", 100);
    }
}

By the way, if your array is accessed with an offset, don't write a[x + offset]; compute b = a + offset once and then access b[x]. That said, unless the code has special speed requirements, this optimization is unnecessary. Remember: in an ordinary program, readability and maintainability come first.

II. Arrays that start from a negative index

When programming, do you often have to handle boundary cases? When handling a boundary, the subscript often needs to start from a negative number, and we usually split the boundary handling off from the main loop and write it separately. Once you know how to use an array whose index starts from a negative number, handling boundaries becomes much more convenient. Here is a static example of an array indexed from -1:

int a[M];
int *pa = a + 1;

If you now access a through pa, the valid subscripts run from -1 to M-2. It's that simple. (Note that if a were allocated dynamically, you must free(a) and not free(pa), because pa is not the start address of the allocated block.)

III. Do we really need a linked list?

I'm sure that after studying "data structures" you are quite familiar with linked lists, so some people reach for a linked list when writing time-consuming algorithms. Written that way, the code certainly saves memory (or so it seems), but what about the speed? Run a test: allocate and traverse a linked list of 10,000 elements and compare it with an array, and you will find the times differ enormously! (I once tested an algorithm: with a linked list it took 1 minute, with an array only 4 seconds.) So my suggestion here is: when writing performance-critical code, avoid linked lists as much as possible! In fact, a linked list only truly saves memory in limited situations.
For many algorithms, we know in advance (at least roughly) how much memory they will need, so rather than burn memory on the per-node overhead of a linked list, it is better to use an array. Linked lists are really only suited to cases where the number of elements is small, or to parts of the code that are not time-critical. (My guess is that linked lists are slow because memory is allocated node by node; if the nodes could be allocated in one large block, as with an array, the cost probably wouldn't be high. I haven't tested this specifically; it's just a guess. :P)