Inside memory management



Dynamic allocation: choices, tradeoffs, and implementations

Level: Introductory

Jonathan Bartlett, Technical Director, New Media WORX

November 29, 2004

This article gives an overview of the memory management techniques available to Linux™ programmers. Although the focus is on C, the ideas apply to other languages as well. It explains what memory management involves, then shows how to manage memory manually, how to manage it semi-automatically with reference counting or memory pools, and how to manage it automatically with garbage collection.

Why manage memory?

Memory management is one of the most fundamental areas of computer programming. In many scripting languages you don't have to worry about how memory is managed, but that doesn't make memory management any less important. Knowing the abilities and limitations of your memory manager is critical for effective programming. In most systems languages, such as C and C++, you have to do memory management yourself. This article covers the basic concepts of manual, semi-automatic, and automatic memory management.

Back in the days of assembly language programming on the Apple II, memory management was not a big concern. You were essentially running on the whole machine, and whatever memory the machine had, you had. You didn't even have to figure out how much that was, because every machine had the same amount. So if your memory needs were fixed, you simply picked a range of memory and used it.

However, even on such a simple computer you run into problems, especially when you don't know in advance how much memory you will need. If space is limited and your memory requirements change, you need some way to:

Determine whether you have enough memory to process the data.
Grab a piece of memory from the pool of available memory.
Return a piece of memory to the pool of available memory so that it can be used by other parts of the program or by other programs.

The libraries that implement these requirements are called allocators, because they are responsible for allocating and reclaiming memory. The more dynamic a program is, the more memory management matters, and the more your choice of allocator matters. Let's look at the different methods available for memory management, their benefits and drawbacks, and the situations in which each works best.

C-style memory allocation

The C programming language provides two functions that cover these three needs:

malloc: This function allocates the given number of bytes and returns a pointer to them. If not enough memory is available, it returns a null pointer.
free: This function takes a pointer to a segment of memory allocated by malloc and returns it so that it can be used again later by the program or by the operating system (in fact, some malloc implementations can only return memory to the program, not to the operating system).
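For readers who have not used these functions recently, here is a minimal usage sketch (the buffer size and message are arbitrary illustration values, not part of the original listings):

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main()
{
    /* Ask the allocator for 50 bytes */
    char *buffer = malloc(50);

    /* malloc returns a null pointer when no memory is available */
    if (buffer == NULL) {
        fprintf(stderr, "out of memory\n");
        return 1;
    }

    /* Use the memory... */
    strcpy(buffer, "hello, allocator");
    printf("%s\n", buffer);

    /* ...then hand it back so it can be reused */
    free(buffer);
    return 0;
}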

Physical memory and virtual memory

To understand how memory gets allocated, you first need to understand how memory gets to your program from the operating system. Every process on your computer thinks it can access all of physical memory. Obviously, since multiple programs are running at once, each process cannot own all the memory. What the processes are actually using is virtual memory.

Just as an example, let's say your program accesses address 629. The virtual memory system, however, does not necessarily keep that data at RAM location 629. In fact, it may not be in RAM at all - it may even have been moved to the hard disk if physical RAM is full! Because such addresses don't have to reflect the physical location of the memory, they are called virtual memory. The operating system maintains a table of virtual-address-to-physical-address translations so that the computer hardware can respond correctly to address requests. And if the address is on the hard disk rather than in RAM, the operating system temporarily halts your process, unloads other memory to disk, loads in the requested memory from disk, and restarts your process. This way, each process gets its own address space to play with and can access more memory than is physically installed. On 32-bit x86 systems, each process can access 4 GB of memory. Now, most people's systems don't have 4 GB of memory, and even counting swap, the memory used by each process is almost certainly less than 4 GB. Therefore, when a process is loaded, it gets an initial memory allocation up to a certain address, called the system break. Beyond that address lies unmapped memory - memory that has no corresponding physical location, either in RAM or on disk. Therefore, if a process outgrows its initial allocation, it has to ask the operating system to "map in" more memory. (Mapping is a mathematical term for a correspondence - memory is "mapped" when its virtual address has a corresponding physical address to store its contents.)

UNIX-based systems have two basic system calls for mapping in additional memory:

brk: brk() is a very simple system call. Remember the system break, the location that marks the boundary of a process's mapped memory? brk() simply moves that location forward or backward, adding memory to the process or removing it.
mmap: mmap(), or "memory map," is similar to brk() but much more flexible. First, it can map memory anywhere, not just at the end of the process. Second, it can map virtual addresses not only to physical RAM or swap but also to files and file offsets, so that reading and writing the memory reads and writes data in the file. Here, though, all we care about is mmap's ability to add mapped memory to a process. munmap() does the opposite of mmap().

As you can see, either brk() or mmap() can be used to add extra virtual memory to our process. We will use brk() in our example because it is simpler and more common.
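To make the interface concrete, here is a small sketch (not one of the article's listings; the 4096-byte increment is an arbitrary choice) that uses sbrk(), the library wrapper around brk(), to extend the process's data segment and report the old and new break:

#include <stdio.h>
#include <unistd.h>

int main()
{
    /* Where does our mapped memory currently end? */
    void *old_break = sbrk(0);

    /* Ask the kernel for 4096 more bytes; sbrk returns (void *) -1 on failure */
    if (sbrk(4096) == (void *) -1) {
        perror("sbrk");
        return 1;
    }

    void *new_break = sbrk(0);
    printf("break moved from %p to %p\n", old_break, new_break);
    return 0;
}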

Implementing a simple allocator

If you have written many C programs, you have probably used malloc() and free() many times. You may not, however, have taken the time to think about how they are implemented by your operating system. This section shows you the code for a very simple malloc and free to help illustrate what is involved in managing memory.

To try these examples, copy the code listings and paste them into a file called malloc.c. I'll explain the code one piece at a time.

In most operating systems, memory allocation is handled by two simple functions:

void *malloc(long numbytes): Allocates numbytes bytes of memory and returns a pointer to the first byte.
void free(void *firstbyte): Given a pointer that was returned by a previous malloc, returns the allocated space to the process's "free space."
malloc_init will be the function that initializes our memory allocator. It has to do three things: flag the allocator as initialized, find the last valid memory address in the system, and set up the pointer to the start of the memory we manage. These three pieces of information are held in global variables:

Listing 1. Global variables of our simple allocator

int has_initialized = 0;
void *managed_memory_start;
void *last_valid_address;

As mentioned earlier, the boundary of mapped memory - the last valid address - is often called the system break or the current break. On many UNIX® systems, you find the current break with the sbrk(0) call. sbrk moves the break by the number of bytes given in its argument; calling it with an argument of 0 simply returns the current break. Here is our malloc initialization code, which finds the current break and initializes our variables:

Listing 2. Allocator initialization function

/* Include the sbrk function */
#include <unistd.h>

void malloc_init()
{
    /* Grab the last valid address from the OS */
    last_valid_address = sbrk(0);

    /* We don't have any memory to manage yet, so
     * just set the beginning to be last_valid_address
     */
    managed_memory_start = last_valid_address;

    /* Okay, we're initialized and ready to go */
    has_initialized = 1;
}

Now, in order to manage memory properly, we need to be able to track which memory is allocated and which has been reclaimed. After free is called on a block, we need to mark that block as unused, and when malloc is called, we need to be able to locate unused blocks. Therefore, every block of memory handed out by malloc begins with this structure:

Listing 3. Memory control block structure definition

struct mem_control_block {
    int is_available;
    int size;
};

Now, you might think this would cause problems for programs calling malloc - how do they know about this structure? The answer is that they don't have to; before we return the pointer, we move it past this structure. That makes the returned pointer point to memory that is not used for anything else. From the calling program's point of view, all it gets is free, open memory. Then, when the pointer is passed to free(), we simply back up a few bytes to find this structure again.

Before we discuss allocating memory, let's discuss freeing, because it is simpler. To free memory, all we have to do is take the pointer we are given, back up sizeof(struct mem_control_block) bytes, and mark the block as available. Here is the code:

Listing 4. Deallocation function

void free(void *firstbyte) {
    struct mem_control_block *mcb;

    /* Backup from the given pointer to find the
     * mem_control_block
     */
    mcb = firstbyte - sizeof(struct mem_control_block);

    /* Mark the block as being available */
    mcb->is_available = 1;

    /* That's it! We're done. */
    return;
}

As you can see, in this allocator, freeing memory uses a very simple mechanism and completes in constant time. Allocating memory is slightly harder. Here is an outline of the algorithm:

Listing 5. Pseudocode for the main allocator

1. If our allocator has not been initialized, initialize it.
2. Add sizeof(struct mem_control_block) to the size requested.
3. Start at managed_memory_start.
4. Are we at last_valid_address?
5. If we are:
   A. We didn't find any existing space that was large enough
      -- ask the operating system for more and return that.
6. Otherwise:
   A. Is the current space available (check is_available from
      the mem_control_block)?
   B. If it is:
      i)   Is it large enough (check "size" from the
           mem_control_block)?
      ii)  If so:
           a. Mark it as unavailable
           b. Move past the mem_control_block and return the
              pointer
      iii) Otherwise:
           a. Move forward "size" bytes
           b. Go back to step 4
   C. Otherwise:
      i)   Move forward "size" bytes
      ii)  Go back to step 4

Most of the work is walking through memory, using the control blocks as links, looking for an open block. Here is the code:

Listing 6. The main allocator

void *malloc(long numbytes) {
    /* Holds where we are looking in memory */
    void *current_location;

    /* This is the same as current_location, but cast to a
     * memory_control_block
     */
    struct mem_control_block *current_location_mcb;

    /* This is the memory location we will return. It will
     * be set to 0 until we find something suitable
     */
    void *memory_location;

    /* Initialize if we haven't already done so */
    if(!has_initialized) {
        malloc_init();
    }

    /* The memory we search for has to include the memory
     * control block, but the users of malloc don't need
     * to know this, so we'll just add it in for them.
     */
    numbytes = numbytes + sizeof(struct mem_control_block);

    /* Set memory_location to 0 until we find a suitable
     * location
     */
    memory_location = 0;

    /* Begin searching at the start of managed memory */
    current_location = managed_memory_start;

    /* Keep going until we have searched all allocated space */
    while(current_location != last_valid_address)
    {
        /* current_location and current_location_mcb point
         * to the same address. However, current_location_mcb
         * is of the correct type, so we can use it as a struct.
         * current_location is a void pointer so we can use it
         * to calculate addresses.
         */
        current_location_mcb =
            (struct mem_control_block *)current_location;

        if(current_location_mcb->is_available)
        {
            if(current_location_mcb->size >= numbytes)
            {
                /* Woohoo! We've found an open,
                 * appropriately-sized location.
                 */

                /* It is no longer available */
                current_location_mcb->is_available = 0;

                /* We own it */
                memory_location = current_location;

                /* Leave the loop */
                break;
            }
        }

        /* If we made it here, it's because the current memory
         * block is not suitable; move to the next one
         */
        current_location = current_location +
            current_location_mcb->size;
    }

    /* If we still don't have a valid location, we'll
     * have to ask the operating system for more memory
     */
    if(!memory_location)
    {
        /* Move the program break numbytes further */
        sbrk(numbytes);

        /* The new memory will be where the last valid
         * address left off
         */
        memory_location = last_valid_address;

        /* We'll move the last valid address forward
         * numbytes
         */
        last_valid_address = last_valid_address + numbytes;

        /* We need to initialize the mem_control_block */
        current_location_mcb = memory_location;
        current_location_mcb->is_available = 0;
        current_location_mcb->size = numbytes;
    }

    /* Now, no matter what (well, except for error conditions),
     * memory_location has the address of the memory, including
     * the mem_control_block
     */

    /* Move the pointer past the mem_control_block */
    memory_location = memory_location + sizeof(struct mem_control_block);

    /* Return the pointer */
    return memory_location;
}

That's our memory manager. Now we just need to build it and use it in our programs.

Run the following command to build the malloc-compatible allocator (we have actually left out a few functions such as realloc(), but malloc() and free() are the most important ones):

Listing 7. Compiling the allocator

gcc -shared -fpic malloc.c -o malloc.so

This produces a file named malloc.so, a shared library containing our code.

On UNIX systems, you can now use your allocator in place of the system's malloc() like this:

Listing 8. Replacing the standard malloc

LD_PRELOAD=/path/to/malloc.so
export LD_PRELOAD

The LD_PRELOAD environment variable causes the dynamic linker to load the symbols of the given shared library before loading any executable, giving that library's symbols precedence. So, from this point on, any application started in this session will use our malloc() rather than the system's. Some applications don't use malloc(), but they are the exception. Applications that use realloc() or other memory management functions, or that make incorrect assumptions about malloc()'s internal behavior, are likely to crash. The ash shell appears to work fine with our new malloc().
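If you would rather try the allocator on a single command instead of the whole session, a one-shot invocation along these lines should also work (ls is just an example program):

LD_PRELOAD=/path/to/malloc.so ls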

If you want to make sure your malloc() is actually being used, you can test it by adding a write() call at the function's entry point.
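A hypothetical version of that check is a single line inserted at the top of malloc() in Listing 6 (the message text is arbitrary; write() is used rather than printf() because printf() may itself call malloc() and recurse into our allocator):

/* At the very top of malloc() in Listing 6; write() and STDERR_FILENO come from unistd.h */
write(STDERR_FILENO, "our malloc called\n", 18);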

Our memory manager leaves a lot to be desired, but it effectively shows what a memory manager needs to do. Its shortcomings include:

Because it operates on the system break (a global value), it cannot coexist with any other allocator or with mmap.
When allocating, in the worst case it has to walk through all of the process's memory, which may include a lot of memory that has been paged out to disk, meaning the operating system spends time moving data to and from the disk.
There is no decent error handling (malloc simply assumes that memory allocation succeeds).
It does not implement the other memory functions, such as realloc().
Because sbrk() may hand back more memory than we asked for, some memory is wasted at the end of the heap.
The is_available flag holds only one bit of information but occupies a full 4-byte word.
The allocator is not thread-safe.
The allocator cannot coalesce free space into larger memory blocks.
The allocator's overly simple matching algorithm can lead to a lot of memory fragmentation.
I'm sure there are many other problems. That's why it's only an example!

Other malloc implementations

There are many implementations of malloc(), each with its own strengths and weaknesses. When designing an allocator, there are many tradeoffs to weigh, including:

Allocation speed.
Deallocation speed.
Behavior in threaded environments.
Behavior when memory is running low.
Cache locality.
Bookkeeping memory overhead.
Behavior in a virtual memory environment.
Small versus large objects.
Real-time guarantees.

Each implementation has its own set of benefits and drawbacks. In our simple allocator, allocation is very slow but deallocation is very fast. Also, because of the way it uses the virtual memory system, it works best with large objects.

There are many other allocators available. They include:

Doug Lea Malloc: Doug Lea Malloc is really a family of allocators, including Doug Lea's original allocator, the GNU libc allocator, and ptmalloc. Doug Lea's allocator has a basic structure very similar to our version, but it adds indexes that make searching faster, and it can combine multiple unused chunks into one large chunk. It also caches recently freed memory so that it can be reused quickly. ptmalloc is an extended version of Doug Lea Malloc with multithreading support. An article describing Doug Lea's malloc implementation is listed in the References at the end of this article.
BSD Malloc: BSD Malloc, the implementation distributed with 4.2 BSD and included in FreeBSD, allocates objects from pools of objects of predetermined sizes. It has a set of size classes for objects, each a power of two minus a constant. When you request an object of a given size, it simply allocates from whichever size class fits. This makes for a fast implementation, but it may waste memory. An article describing this implementation is listed in the References.
Hoard: Hoard was written with the goal of making memory allocation very fast in multithreaded environments. It is therefore organized around the careful use of locks so that threads do not have to wait on one another to allocate memory. It can significantly speed up multithreaded processes that do a lot of allocation and deallocation. An article describing this implementation is listed in the References.

The allocators listed above are among the best known of the many available. If your program has special allocation needs, you may prefer to write a custom allocator tailored to the way your program allocates memory. However, if you aren't familiar with allocator design, a custom allocator usually creates more problems than it solves. For a good introduction to the subject, see section 2.5, "Dynamic Storage Allocation," of Donald Knuth's The Art of Computer Programming Volume 1: Fundamental Algorithms (see the References for a link). It is a bit dated because it does not take virtual memory environments into account, but most of the algorithms in use are based on the ones presented there. In C++, you can implement a per-class or per-template allocator by overloading operator new(). A small-object allocator is described in Chapter 4, "Small Object Allocation," of Andrei Alexandrescu's Modern C++ Design (see the References for a link).

Drawbacks of malloc()-based memory management

Our memory manager is not the only one with drawbacks; malloc()-based memory management has several shortcomings no matter which allocator you use. Managing memory with malloc() can be very frustrating for programs that need to keep data around for a long time. If you have lots of floating references to memory, it is often difficult to know when it should be released. Memory whose lifetime is limited to the current function is fairly easy to manage, but memory that outlives the scope in which it was allocated is much harder. Moreover, many APIs are unclear about whether responsibility for managing memory lies with the calling program or with the called function.

Because of the difficulty of managing memory, many programs end up written around their own ad hoc memory management rules, and C++ exception handling makes the task even trickier. Sometimes it seems as if more code is devoted to managing memory allocation and cleanup than to actually doing the computation! We will therefore look at other options for memory management.


Semi-automatic memory management strategies

Reference counting

Reference counting is a semi-automatic memory management technique: it requires some programming support, but it does not require you to know when an object is no longer in use. The reference counting mechanism handles that part of memory management for you.

With reference counting, every shared data structure has a field containing the number of currently active "references" to it. When a pointer to the structure is handed to a piece of code, that code increments the reference count. In essence, you are telling the data structure how many places it is being stored. Then, when a piece of code is done with it, it decrements the reference count and afterward checks whether the count has dropped to zero. If it has, the memory is freed.

The benefit is that you don't have to track every path a given data structure might follow through your program. Each time a reference is handed out or dropped, the count is incremented or decremented accordingly. This keeps the structure from being freed while some part of the program is still using it. However, you have to remember to call the reference counting functions whenever you use a reference-counted structure. Also, built-in functions and third-party libraries will not know about or be able to use your reference counting scheme. Reference counting also has trouble with data structures that contain circular references.

To implement reference counting, you only need two functions: one that increments the reference count, and one that decrements it and frees the memory when the count drops to zero.

An example set of reference counting functions might look like this:

Listing 9. Basic reference counting functions

/* Structure definitions */

/* Base structure that holds a refcount */
struct refcountedstruct
{
    int refcount;
};

/* All refcounted structures must mirror struct
 * refcountedstruct for their first variables
 */

/* Refcount maintenance functions */

/* Increase reference count */
void ref(void *data)
{
    struct refcountedstruct *rstruct;

    rstruct = (struct refcountedstruct *) data;
    rstruct->refcount++;
}

/* Decrease reference count */
void unref(void *data)
{
    struct refcountedstruct *rstruct;

    rstruct = (struct refcountedstruct *) data;
    rstruct->refcount--;

    /* Free the structure if there are no more users */
    if(rstruct->refcount == 0)
    {
        free(rstruct);
    }
}

ref and unref could be more complicated, depending on what you want to do. For example, you might want to add locking for multithreaded programs, or you might want to extend refcountedstruct so that it also includes a pointer to a function to be called right before the memory is freed (like a destructor in object-oriented languages - this is required if your structures contain pointers of their own).
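As a rough sketch of that second idea (the destructor field and the null check are additions invented here for illustration, not part of the original listings), the base structure could carry a cleanup callback that unref() invokes just before freeing the memory:

struct refcountedstruct
{
    int refcount;
    void (*destructor)(void *data); /* called right before the memory is freed */
};

void unref(void *data)
{
    struct refcountedstruct *rstruct;

    rstruct = (struct refcountedstruct *) data;
    rstruct->refcount--;

    if(rstruct->refcount == 0)
    {
        /* Let the structure release anything it points to first */
        if(rstruct->destructor != NULL)
            rstruct->destructor(rstruct);

        free(rstruct);
    }
}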

When using ref and unref, you need to follow these conventions for pointer assignments:

unref the value that the left-hand-side pointer points to before the assignment.
ref the value that the left-hand-side pointer points to after the assignment.

In functions that are passed reference-counted structures, the functions need to follow these rules:

ref each pointer at the beginning of the function, and unref each pointer at the end of the function.

The following is a quick example of code that uses reference counting:

Listing 10. Example of using a reference count

/* EXAMPLES OF USAGE */

/* Data type to be refcounted */
struct mydata
{
    int refcount; /* Same as refcountedstruct */
    int datafield1; /* Fields specific to this struct */
    int datafield2;
    /* Other declarations would go here as appropriate */
};

/* Use the functions in code */
void dosomething(struct mydata *data)
{
    ref(data);

    /* Process data */

    /* When we are through */
    unref(data);
}

struct mydata *globalvar1;

/* Note that in this one, we don't decrease the
 * refcount since we are maintaining the reference
 * past the end of the function call through the
 * global variable
 */
void storesomething(struct mydata *data)
{
    ref(data); /* passed as a parameter */
    globalvar1 = data;
    ref(data); /* ref because of assignment */
    unref(data); /* function finished */
}

Because reference counting is so simple, most programmers implement it themselves rather than use a library. They still rely on a low-level allocator such as malloc and free, however, to actually allocate and release the memory.
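For example, a typical hand-rolled constructor for the struct mydata type from Listing 10 might look like the following sketch (new_mydata() is a name invented here, not part of the original listings):

#include <stdlib.h>

struct mydata *new_mydata(void)
{
    /* The underlying allocation still comes from malloc */
    struct mydata *data = malloc(sizeof(struct mydata));
    if (data == NULL)
        return NULL;

    data->refcount = 1;   /* the caller holds the first reference */
    data->datafield1 = 0;
    data->datafield2 = 0;
    return data;
}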

High-level languages such as Perl use reference counting for their memory management. In these languages, the reference counting is handled automatically by the language, so you don't have to worry about it at all unless you are writing an extension module. Because everything must be reference counted, this has some cost in speed, but it greatly improves the safety and convenience of programming. Here are the benefits of reference counting:

Simple implementation.
Easy to use.
Because the reference count is part of the data structure, it has good cache locality.

However, it also has its shortcomings:

It requires that you never forget to call the reference counting functions.
It cannot free structures that are part of a cyclic data structure.
It slows down nearly every pointer assignment.
You need extra handling to keep the counts of in-use objects correct under exceptional control flow (such as try or setjmp()/longjmp()).
It requires extra memory to hold the reference counts.
The reference count takes up the first slot in the structure, which on most machines is the fastest one to access.
It is slower and harder to use in multithreaded environments.

C++ can mitigate some of the mistakes programmers make through the use of smart pointers, which can handle the reference counting and other pointer chores for you. However, if you have to work with older code that cannot deal with smart pointers (such as linking against a C library), using them often brings more trouble and complication than doing without. So smart pointers are usually only worthwhile in pure C++ projects. If you want to use smart pointers, you should read the "Smart Pointers" chapter of Alexandrescu's Modern C++ Design.

Memory pool

Memory pools are another semi-automatic method of memory management. Memory pools help automate memory management for programs that go through specific stages, where each stage has memory allocations that live only for that stage. For example, many network server processes make a lot of allocations per connection - allocations whose maximum lifetime is the current connection. Apache uses pooled memory and organizes its connections into stages, each with its own memory pool. At the end of each stage, the entire pool is freed at once.

In pooled memory management, each allocation specifies a memory pool to allocate from, and each pool has a different lifetime. In Apache, there is a pool that lasts the lifetime of the server, a pool that lasts for a connection, a pool that lasts for a request, and others besides. So if my series of functions produces no data that lives longer than the connection, I can allocate everything from the connection pool, knowing it will be freed automatically when the connection ends. In addition, some implementations let you register cleanup functions that are called right before the pool is cleared, to take care of any other work that needs to happen before the memory goes away (similar to destructors in object-oriented programming). To use pools in your own programs, you can use either GNU libc's obstack implementation or Apache's Apache Portable Runtime. The advantage of GNU obstacks is that they are included by default in every GNU libc-based Linux distribution. The advantage of the Apache Portable Runtime is that it comes with many other utilities for handling all aspects of writing multi-platform server software. To learn more about GNU obstacks and Apache's pooled memory implementation, see the links to their documentation in the References section.

The following hypothetical code listing shows how obstacks are used:

Listing 11. Sample code for Obstack

#include <obstack.h>
#include <stdlib.h>

/* Example code listing for using obstacks */

/* Used for obstack macros (xmalloc is
   a malloc function that exits if memory
   is exhausted) */
#define obstack_chunk_alloc xmalloc
#define obstack_chunk_free free

/* Pools */
/* Only permanent allocations should go in this pool */
struct obstack *global_pool;

/* This pool is for per-connection data */
struct obstack *connection_pool;

/* This pool is for per-request data */
struct obstack *request_pool;

void allocation_failed()
{
    exit(1);
}

int main()
{
    /* Initialize pools */
    global_pool = (struct obstack *)
        xmalloc(sizeof(struct obstack));
    obstack_init(global_pool);

    connection_pool = (struct obstack *)
        xmalloc(sizeof(struct obstack));
    obstack_init(connection_pool);

    request_pool = (struct obstack *)
        xmalloc(sizeof(struct obstack));
    obstack_init(request_pool);

    /* Set the error handling function */
    obstack_alloc_failed_handler = &allocation_failed;

    /* Server main loop */
    while(1)
    {
        wait_for_connection();

        /* We are in a connection */
        while(more_requests_available())
        {
            /* Handle request */
            handle_request();

            /* Free all of the memory allocated
             * in the request pool
             */
            obstack_free(request_pool, NULL);
        }

        /* We're finished with the connection, time
         * to free this pool
         */
        obstack_free(connection_pool, NULL);
    }
}

int handle_request()
{
    /* Be sure that all object allocations are allocated
     * from the request pool
     */
    int bytes_i_need = 400;
    void *data1 = obstack_alloc(request_pool, bytes_i_need);

    /* Do stuff to process the request */

    /* Return */
    return 0;
}

Basically, after each major stage of operation, the obstack for that stage is freed. Note, though, that if a procedure needs to allocate data that must last longer than the current stage, it can use a longer-lived obstack, such as the connection or global pool. Passing NULL to obstack_free() indicates that it should free the entire contents of the obstack. Other values can be used, but they are usually not very practical.
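As a sketch of the non-NULL case (the variable names here are purely illustrative), passing a specific object frees that object together with everything allocated in the pool after it, while earlier allocations stay intact:

void *first = obstack_alloc(request_pool, 100);
void *second = obstack_alloc(request_pool, 100);

/* Frees "second" and "first", plus anything allocated after "first";
 * allocations made before "first" remain untouched.
 */
obstack_free(request_pool, first);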

The benefits of pooled memory allocation are as follows:

Memory management is simple for the application.
Allocation and deallocation are faster because they are done a whole pool at a time. Allocation is O(1), and freeing a pool is close to O(1) (it is actually O(n), but divided by a large enough factor to make it effectively O(1) in most cases).
Error-handling pools can be pre-allocated so that the program can still recover when regular memory is exhausted.
The standard implementations are very easy to use.

The drawbacks of pooled memory are:

Memory pools are only useful for programs whose operation can be divided into stages.
Memory pools often do not work well with third-party libraries.
If the structure of the program changes, the pools may have to change too, which may force a redesign of the memory management scheme.
You have to remember which pool each allocation should come from. If you get this wrong, the resulting bug can be very hard to track down.

Garbage collection

Garbage collection is the fully automatic detection and removal of data objects that are no longer in use. A garbage collector usually runs when available memory drops below a certain threshold. It typically starts from a set of "base" data the program can reach directly - data on the stack, global variables, and registers - and then tries to trace every piece of data reachable through them. Whatever the collector finds is live data; whatever it does not find is garbage, which can be destroyed and reused. To manage memory effectively, many kinds of garbage collectors need to know the layout of pointers inside data structures, so they have to be part of the language itself in order to work properly.

Types of collectors

Copying: These collectors divide memory into two halves and only allow data to live in one half at a time. Periodically, they copy all live data reachable from the "base" elements into the other half. The newly populated half then becomes active, and everything left in the other half is considered garbage. Because the copy moves the data, every pointer must be updated to point to each item's new location. Therefore, to use this method of garbage collection, the collector must be integrated with the programming language.

Mark and sweep: Each piece of data gets a tag. Periodically, all the tags are set to 0 and the collector walks the data starting from the "base" elements. As it encounters memory, it sets the tag to 1. Afterward, anything whose tag is still 0 is known to be garbage and is reused for later allocations.

Incremental: Incremental garbage collectors do not have to traverse the full set of data objects in one go. Scanning all of memory at once causes problems, both because of the sudden pause during collection and because of the cache and paging costs of touching all live data (everything has to be paged in). Incremental collectors avoid these problems.

Conservative: Conservative garbage collectors do not need to know anything about the layout of your data structures in order to manage memory. They simply look at every word of data and assume any of them could be a pointer. So if a sequence of bytes happens to look like a pointer into allocated memory, the block it points to is marked as referenced. This means unreferenced memory is sometimes retained anyway - for example, when an integer field happens to hold a value matching an allocated address - but such cases are rare, and only a small amount of memory is wasted. The advantage of conservative collectors is that they can be used with any programming language. Hans Boehm's conservative garbage collector is one of the most popular, because it is free and is both conservative and incremental. It can be built with the --enable-redirect-malloc option so that it acts as a drop-in replacement for the system allocator (using malloc/free instead of its own API). In fact, if you do that, you can use the same LD_PRELOAD trick we used with our sample allocator to enable garbage collection in almost any program on your system. If you suspect a program is leaking memory, you can use this garbage collector to rein in the process; many people used this technique in the early days when Mozilla was leaking memory heavily. The collector runs under both Windows® and UNIX.
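As a brief illustration of what using the Boehm collector through its own API looks like (a sketch, assuming the library is installed and the program is linked with -lgc), allocations go through GC_MALLOC() and are simply never freed by hand:

#include <gc.h>
#include <stdio.h>

int main()
{
    int i;

    GC_INIT(); /* recommended before the first allocation */

    for (i = 0; i < 100000; i++) {
        /* Memory that becomes unreachable is reclaimed by the collector */
        char *buffer = GC_MALLOC(4096);
        buffer[0] = 'x';
    }

    printf("collector heap size: %lu\n",
           (unsigned long) GC_get_heap_size());
    return 0;
}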

Some advantages of garbage collection:

You never have to worry about double-freeing memory or about object lifetimes.
With some collectors, you can use the same API as for regular allocation.

Its disadvantages include:

With most collectors, you have no say in when memory is freed.
In most cases, garbage collection is slower than other forms of memory management.
Bugs caused by garbage collection mistakes are hard to debug.
If you forget to set unused pointers to NULL, you can still have memory leaks.

Conclusion

Everything involves tradeoffs: performance, ease of use, ease of implementation, thread support, and so on, to name just a few. There are many memory management patterns available to suit your project's requirements, and each pattern has many implementations, each with its own strengths. For many projects, the defaults of your programming environment are fine, but when your project has special needs, it helps to know the options. The following table compares the memory management strategies covered in this article.

Table 1. Comparison of memory allocation strategies

Each strategy is compared on allocation speed, deallocation speed, cache locality, ease of use, real-time suitability, and SMP thread friendliness.

Custom allocator: allocation speed, deallocation speed, cache locality, ease of use, real-time suitability, and SMP thread friendliness all depend on the implementation.
Simple allocator (this article's): slow allocation, fast deallocation, poor cache locality, easy to use, not real-time, not SMP thread-friendly.
GNU malloc: moderate allocation, fast deallocation, moderate cache locality, easy to use, not real-time, moderately SMP thread-friendly.
Hoard: moderate allocation, moderate deallocation, moderate cache locality, easy to use, not real-time, SMP thread-friendly.
Reference counting: allocation and deallocation speed are N/A (they depend on the underlying allocator), excellent cache locality, moderate ease of use, real-time depending on the malloc implementation, SMP thread friendliness depends on the implementation.
Pooling: moderate allocation, very fast deallocation, excellent cache locality, moderate ease of use, real-time depending on the malloc implementation, SMP thread friendliness depends on the implementation.
Garbage collection: moderate allocation (slow when a collection runs), moderate deallocation, poor cache locality, easy to use, not real-time, rarely SMP thread-friendly.
Incremental garbage collection: moderate allocation, moderate deallocation, moderate cache locality, easy to use, not real-time, rarely SMP thread-friendly.
Incremental conservative garbage collection: moderate allocation, moderate deallocation, moderate cache locality, easy to use, not real-time, rarely SMP thread-friendly.

References

You can refer to the English original of this article on the developerWorks global site.

Documentation on the Web

The Obstacks section of the GNU C Library manual documents the obstack programming interface. The Apache Portable Runtime documentation describes the interface to its pooled allocator.

Basic allocators

Doug Lea's Malloc is one of the most popular memory allocators. BSD Malloc is used on most BSD-based systems. ptmalloc, derived from Doug Lea's malloc, is used in GNU libc. Hoard is a malloc implementation optimized for multithreaded applications. GNU Memory-Mapped Malloc (a component of GDB) is an mmap()-based malloc implementation. The Electric Fence malloc debugger is a malloc implementation for debugging memory problems.

Pooled allocators

GNU obstacks (a component of GNU libc) is probably the most widely installed pooled allocator, because it is present on every glibc-based system. Apache's pooled allocator (in the Apache Portable Runtime) is the most widely used pooled allocator. Squid has its own pooled allocator. NetBSD also has its own pooled allocator. talloc, part of Samba, is a pooled allocator.

Smart pointers and custom allocators

The Loki C++ Library includes implementations of many common patterns for C++, including smart pointers and a custom small-object allocator.

Garbage collectors

The Hans Boehm conservative garbage collector is the most popular open source garbage collector and can be used in regular C/C++ programs.

Articles on virtual memory in modern operating systems

A New Virtual Memory Implementation for Berkeley UNIX, by Marshall Kirk McKusick and Michael J. Karels, discusses the BSD VM system. Mel Gorman's Linux VM documentation discusses the Linux VM system.

Articles about malloc

Malloc in Modern Virtual Memory Environments, by Poul-Henning Kamp, discusses malloc and how it interacts with BSD virtual memory. Hoard - A Scalable Memory Allocator for Multithreaded Environments, by Berger, McKinley, Blumofe, and Wilson, discusses the implementation of the Hoard allocator. Design of a General Purpose Memory Allocator for the 4.3BSD UNIX Kernel, by Marshall Kirk McKusick and Michael J. Karels, discusses kernel-level allocators. A Memory Allocator, by Doug Lea, gives an overview of allocator design and implementation, including design choices and tradeoffs. Memory Management for High-Performance Applications, by Emery D. Berger, discusses custom memory management and how it affects high-performance applications.

Articles about custom allocators

Some Storage Management Techniques for Container Classes, by Doug Lea, describes custom allocators for C++ classes. Composing High-Performance Memory Allocators, by Berger, Zorn, and McKinley, discusses how to write custom allocators to speed up specific workloads. Reconsidering Custom Memory Allocation, by Berger, Zorn, and McKinley, revisits the topic of custom allocation to see whether it is really worth it.

Articles about garbage collection

Uniprocessor Garbage Collection Techniques, by Paul R. Wilson, gives a basic overview of garbage collection. The Measured Cost of Garbage Collection, by Benjamin Zorn, gives hard data on garbage collection and performance. Memory Allocation Myths and Half-Truths, by Hans-Juergen Boehm, dispels common myths about garbage collection. Space Efficient Conservative Garbage Collection, by Hans-Juergen Boehm, describes his garbage collector for C/C++.

General References on the Web

The Memory Management Reference has many links to memory management reference material and technical articles. The OOPS Group Papers on memory management and memory hierarchy are a very good set of technical papers on the topic. Memory Management in C++ discusses custom allocators for C++. Programming Alternatives: Memory Management discusses several options programmers have for memory management. The garbage collection FAQ covers everything you need to know about garbage collection. Richard Jones's Garbage Collection Bibliography has links to just about any article on garbage collection you could want. Debugging Tools for Dynamic Storage Allocation and Memory Management lists malloc implementations used for finding memory problems in programs.

Books

C++ Pointers and Dynamic Memory Management, by Michael Daconta, describes many techniques for memory management. Memory as a Programming Concept in C and C++, by Frantisek Franek, discusses the techniques and tools for using memory effectively and the role that memory-related errors play in programs. Garbage Collection: Algorithms for Automatic Dynamic Memory Management, by Richard Jones and Rafael Lins, describes the most common garbage collection algorithms in use today. Section 2.5, "Dynamic Storage Allocation," of Donald Knuth's The Art of Computer Programming Volume 1: Fundamental Algorithms describes techniques for implementing basic allocators, and section 2.3.5, "Lists and Garbage Collection," discusses garbage collection algorithms for lists. Chapter 4, "Small Object Allocation," of Andrei Alexandrescu's Modern C++ Design describes a high-speed small-object allocator that is much faster than the standard C++ allocator. Chapter 7, "Smart Pointers," of Modern C++ Design describes the implementation of smart pointers in C++. Chapter 8, "Intermediate Memory Topics," of Jonathan Bartlett's Programming from the Ground Up has an assembly-language version of the simple allocator used in this article.

From developerWorks

Self-managing data buffer memory (developerWorks, January 2004) describes a pseudo-C implementation of self-managing abstract data buffers. A framework for the user defined malloc replacement feature (developerWorks, February 2002) shows how to use a facility in AIX to replace the memory subsystem with your own. Mastering Linux debugging techniques (developerWorks, August 2002) describes four scenarios where debugging methods apply: segmentation faults, memory overruns, memory leaks, and hangs. Handling memory leaks in Java programs (developerWorks, February 2001) describes the causes of Java memory leaks and when you need to worry about them. In the developerWorks Linux zone, you can find more resources for Linux developers. In the developerWorks Speed-start your Linux app area, you can download free trial versions of IBM middleware products that run on Linux, including WebSphere® Studio Application Developer, WebSphere Application Server, DB2® Universal Database, Tivoli® Access Manager, and Tivoli Directory Server, and find how-to articles and technical support. Join the developerWorks community by participating in developerWorks blogs. Browse for Linux books in the Linux section of the Developer Bookstore.

About the author

Jonathan Bartlett is the author of Programming from the Ground Up, a book introducing Linux assembly-language programming. Jonathan is the director of technology at New Media Worx, where he develops Web, video, kiosk, and desktop applications for clients. You can reach Jonathan at johnnyb@eskimo.com.

