When reprinting, please credit the source: http://blog.9cbs.net/tingya
3.4.5 Merging Hash Tables
Apache frequently needs to merge two hash tables into a new one, so APR provides a dedicated merge function, apr_hash_merge, which is declared as follows:
APR_DECLARE(apr_hash_t *) apr_hash_merge(apr_pool_t *p,
                                         const apr_hash_t *h1,
                                         const apr_hash_t *h2,
                                         void * (*merger)(apr_pool_t *p,
                                                          const void *key,
                                                          apr_ssize_t klen,
                                                          const void *h1_val,
                                                          const void *h2_val,
                                                          const void *data),
                                         const void *data);
h1 and h2 are the two hash tables to be merged. Because the same key may appear in both h1 and h2 at the same time, but may occur only once in the merged table, the function must decide how to handle keys that exist in both source tables. The merger callback among the parameters handles exactly this situation, and data is an extra argument that is passed through to merger. In the implementation the two tables are referred to as overlay and base, and the merged table is built as follows:
{
    apr_hash_t *res;
    apr_hash_entry_t *new_vals = NULL;
    apr_hash_entry_t *iter;
    apr_hash_entry_t *ent;
    unsigned int i, j, k;

    res = apr_palloc(p, sizeof(apr_hash_t));
    res->pool = p;
    res->free = NULL;
    res->hash_func = base->hash_func;
    res->count = base->count;
    res->max = (overlay->max > base->max) ? overlay->max : base->max;
    if (base->count + overlay->count > res->max) {
        res->max = res->max * 2 + 1;
    }
    res->array = alloc_array(res, res->max);
    if (base->count + overlay->count) {
        new_vals = apr_palloc(p, sizeof(apr_hash_entry_t) *
                              (base->count + overlay->count));
    }
At a high level the merge is simple: first create a new hash table, then fill it with the nodes of the two source tables. The whole process can be broken into the following steps:
1) Allocate the apr_hash_t structure itself. This is done with a single apr_palloc call and is straightforward.
2) Initialize the array member inside apr_hash_t. The size of this array depends on two things:
■ The new table's capacity max is tentatively set to the larger of the two max values of the tables being merged.
■ If the total number of elements already stored in the two tables exceeds this max, then max slots are not enough to hold all of the elements, so the final capacity is adjusted to max * 2 + 1 (see the sketch after this list).
3) Allocate the memory block that will hold the merged nodes. The number of apr_hash_entry_t nodes in the two tables being merged is base->count + overlay->count, so the total memory needed is sizeof(apr_hash_entry_t) * (base->count + overlay->count). Before the merge the nodes of each table hang off their buckets as linked lists; after the merge all nodes live in this one contiguous memory block, very much like the hash-table copy described earlier.
4) Once the required memory block has been allocated, the pointers in the array member must be adjusted to point into it.
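As a quick, standalone illustration of the sizing rule in step 2 (merged_max is a made-up helper, and 15 is used here as the default capacity of a freshly created table):

#include <stdio.h>

/* Hypothetical helper that mirrors the sizing logic quoted above */
static unsigned int merged_max(unsigned int base_max, unsigned int base_count,
                               unsigned int overlay_max, unsigned int overlay_count)
{
    unsigned int max = (overlay_max > base_max) ? overlay_max : base_max;
    if (base_count + overlay_count > max) {
        max = max * 2 + 1;      /* 15 -> 31 -> 63 ..., always of the form 2^n - 1 */
    }
    return max;
}

int main(void)
{
    printf("%u\n", merged_max(15, 10, 15, 9));  /* 10 + 9 = 19 > 15, prints 31 */
    printf("%u\n", merged_max(15, 4, 7, 5));    /* 4 + 5 = 9 <= 15, prints 15 */
    return 0;
}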
    j = 0;
    for (k = 0; k <= base->max; k++) {
        for (iter = base->array[k]; iter; iter = iter->next) {
            i = iter->hash & res->max;
            new_vals[j].klen = iter->klen;
            new_vals[j].key = iter->key;
            new_vals[j].val = iter->val;
            new_vals[j].hash = iter->hash;
            new_vals[j].next = res->array[i];
            res->array[i] = &new_vals[j];
            j++;
        }
    }
The function first copies every node of the first hash table (base) into the memory block described above and adjusts the pointers in the array member to point at these copies. The memory layout after the copy is shown in the figure.
As the figure shows, for the bucket list at a given index in the source table, the order in which the nodes appear in the list is exactly the reverse of their order in the memory block: in the source table array[index] holds the address of the first node of the list, while in the target table array[index] holds the address of the node that was copied last. This is simply a consequence of inserting each copied node at the head of the target bucket list, as illustrated below.
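The reversal is a property of head insertion into a singly linked list, as this small standalone sketch (unrelated to the APR types) shows:

#include <stdio.h>

struct node { int id; struct node *next; };

int main(void)
{
    struct node nodes[3] = { {1, NULL}, {2, NULL}, {3, NULL} };
    struct node *head = NULL;
    struct node *iter;
    int k;

    for (k = 0; k < 3; k++) {     /* visit nodes 1, 2, 3 in source order...  */
        nodes[k].next = head;     /* ...and prepend each one, just like      */
        head = &nodes[k];         /*   new_vals[j].next = res->array[i];     */
    }                             /*   res->array[i] = &new_vals[j];         */

    for (iter = head; iter; iter = iter->next)
        printf("%d ", iter->id);  /* prints: 3 2 1 */
    printf("\n");
    return 0;
}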
Another important aspect of this first copy is how each node's bucket index is determined. The index in the new table is recomputed from the hash value stored in the node, as these two lines show:
i = iter->hash & res->max;
res->array[i] = &new_vals[j];
Because the hash value is kept in every node, the key never has to be rehashed; and whenever the new table keeps the same capacity max as the source table, each node lands at exactly the same index it had in the source table. If the capacity grew to max * 2 + 1, the index is simply recomputed against the larger mask.
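The masking works because max is always kept of the form 2^n - 1 (15, 31, 63, ...), so hash & max is just the hash value modulo the number of buckets. A tiny standalone sketch of this:

#include <stdio.h>

int main(void)
{
    unsigned int hash = 0xDEADBEEFu;   /* a stored hash value */
    unsigned int max  = 15;            /* 16 buckets, indices 0..15 */

    printf("%u\n", hash & max);        /* bucket index */
    printf("%u\n", hash % (max + 1));  /* the same value */
    return 0;
}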
5) After the first hash table has been copied into the new table, processing of the second hash table (overlay) begins:
    for (k = 0; k <= overlay->max; k++) {
        for (iter = overlay->array[k]; iter; iter = iter->next) {
            i = iter->hash & res->max;
            for (ent = res->array[i]; ent; ent = ent->next) {
                if ((ent->klen == iter->klen) &&
                    (memcmp(ent->key, iter->key, iter->klen) == 0)) {
                    if (merger) {
                        ent->val = (*merger)(p, iter->key, iter->klen,
                                             iter->val, ent->val, data);
                    }
                    else {
                        ent->val = iter->val;
                    }
                    break;
                }
            }
            if (!ent) {
                new_vals[j].klen = iter->klen;
                new_vals[j].key = iter->key;
                new_vals[j].val = iter->val;
                new_vals[j].hash = iter->hash;
                new_vals[j].next = res->array[i];
                res->array[i] = &new_vals[j];
                res->count++;
                j++;
            }
        }
    }
    return res;
}
Compared with the simple copy of the first hash table, processing the second table means examining each of its nodes and copying it into the new hash table, so the outermost loops look the same:
    for (k = 0; k <= overlay->max; k++) {
        for (iter = overlay->array[k]; iter; iter = iter->next) {
            ......
        }
    }
For each node, the function first computes its bucket index i in the new table and then proceeds according to what is already stored at that index:
1) If, after the first copy, slot i of the new hash table is still NULL, the node is simply inserted into the (empty) bucket list for index i.
2) If a bucket list already exists at index i in the new hash table, two further cases are distinguished:
■ If the bucket list at index i already contains a node with exactly the same key (the key lengths are equal and the key bytes compare equal), this is a key conflict: hash table 1 and hash table 2 contain the very same key. Such a conflict is normally resolved by the dedicated merger callback; if no merger was supplied, the existing node's val is simply overwritten with the value from the second table being processed.
■ If no node with this key exists in the bucket list at index i, the node is new; it is copied into the memory block and linked into the bucket list for index i at its head. For example, the nodes of the bucket list at index 3 in hash table 2 end up as nodes 10 and 11 of the target hash table. Nodes 10 and 11 are then no longer contiguous with the node that came from index 3 of the first hash table, but they remain connected through their next pointers, as shown in the figure.
The memory layout of the new hash table after the merge is shown in the figure.
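To make the interface concrete, here is a minimal sketch (not taken from the Apache sources) of how apr_hash_merge might be called with a custom merger. The function concat_merger and the keys used are invented for illustration; conflicting string values are simply concatenated from the pool:

#include <stdio.h>
#include <apr_general.h>
#include <apr_pools.h>
#include <apr_strings.h>
#include <apr_hash.h>

/* Called only for keys present in both tables; h1_val and h2_val are the
   values stored under the key in h1 and h2 respectively. */
static void *concat_merger(apr_pool_t *p, const void *key, apr_ssize_t klen,
                           const void *h1_val, const void *h2_val,
                           const void *data)
{
    (void)key; (void)klen; (void)data;
    return apr_pstrcat(p, (const char *)h1_val, "+", (const char *)h2_val, NULL);
}

int main(void)
{
    apr_pool_t *mp;
    apr_hash_t *h1, *h2, *merged;

    apr_initialize();
    apr_pool_create(&mp, NULL);

    h1 = apr_hash_make(mp);
    h2 = apr_hash_make(mp);
    apr_hash_set(h1, "color", APR_HASH_KEY_STRING, "red");
    apr_hash_set(h1, "shape", APR_HASH_KEY_STRING, "circle");
    apr_hash_set(h2, "color", APR_HASH_KEY_STRING, "blue");  /* same key as in h1 */
    apr_hash_set(h2, "size",  APR_HASH_KEY_STRING, "large");

    merged = apr_hash_merge(mp, h1, h2, concat_merger, NULL);
    printf("color = %s\n", (char *)apr_hash_get(merged, "color", APR_HASH_KEY_STRING));
    printf("count = %u\n", apr_hash_count(merged));   /* 3 distinct keys */

    apr_pool_destroy(mp);
    apr_terminate();
    return 0;
}

With a NULL merger the conflict is resolved by plain overwriting, exactly as the else branch in the implementation above shows.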
3.4.6 Hash Table Usage Example
The following code demonstrates the use of a hash table, covering the following aspects:
1) creating a hash table;
2) adding, deleting, and overriding key/value pairs;
3) looking up a value by its key;
4) iterating over all entries in the table.
#include <stdio.h>
#include <stdlib.h>
#include <apr_general.h>
#include <apr_pools.h>
#include <apr_strings.h>
#include <apr_hash.h>
// Add, delete, and override elements in the hash table
static void modify_hashtab(apr_hash_t *ht, apr_pool_t *mp)
{
    apr_hash_set(ht, "foo", APR_HASH_KEY_STRING, "foo");
    apr_hash_set(ht, apr_pstrdup(mp, "bar"), APR_HASH_KEY_STRING, apr_pstrdup(mp, "bar"));
    apr_hash_set(ht, apr_pstrdup(mp, "foobar"), APR_HASH_KEY_STRING, apr_pstrdup(mp, "bar"));
    apr_hash_set(ht, apr_pstrdup(mp, "barfoo"), APR_HASH_KEY_STRING, apr_pstrdup(mp, "foo"));

    /* To delete an entry, pass NULL as the value */
    apr_hash_set(ht, apr_pstrdup(mp, "to-del"), APR_HASH_KEY_STRING, apr_pstrdup(mp, "to-del"));
    apr_hash_set(ht, "to-del", APR_HASH_KEY_STRING, NULL);

    /* Setting the same key again overrides the previous value */
    apr_hash_set(ht, apr_pstrdup(mp, "override"), APR_HASH_KEY_STRING, apr_pstrdup(mp, "old-val"));
    apr_hash_set(ht, apr_pstrdup(mp, "override"), APR_HASH_KEY_STRING, apr_pstrdup(mp, "new-val"));
}
// Iterate over the entire hash table
static void iterate_hashtab(apr_hash_t *ht)
{
    apr_hash_index_t *hi;
    for (hi = apr_hash_first(NULL, ht); hi; hi = apr_hash_next(hi)) {
        const char *k;
        const char *v;
        apr_hash_this(hi, (const void **)&k, NULL, (void **)&v);
        printf("ht iteration: key=%s, val=%s\n", k, v);
    }
}
int main(int argc, const char *argv[])
{
    apr_pool_t *mp;
    apr_hash_t *ht;

    apr_initialize();
    apr_pool_create(&mp, NULL);

    ht = apr_hash_make(mp);
    modify_hashtab(ht, mp);
    {
        const char *val = apr_hash_get(ht, "foo", APR_HASH_KEY_STRING);
        printf("val for \"foo\" is %s\n", val);
    }
    iterate_hashtab(ht);

    apr_pool_destroy(mp);
    apr_terminate();
    return 0;
}
Running the program first prints the value looked up for "foo" and then one line for each entry visited during the iteration; the deleted to-del entry no longer appears, and override shows its latest value. The sample can be built against APR 1.x using the compiler and linker flags reported by apr-1-config.
About the author: Zhang Zhongqing's main research interests are embedded browsers, mobile middleware, and large-scale server design. He is currently working on a source-code analysis of Apache, to be published as "Apache Source Code Panorama Analysis". The Apache series of articles is part of that book; readers interested in Apache can contact the author at flydish1234 at sina.com.cn.