Flow Model and Packet Model
2 Conceptual Overview
In this section, we describe the basic simulation methodology used by the flow (fluid) model and by the packet model simulator, pointing out the differences between them and their respective advantages and shortcomings. We then show how we integrate these two models into a single simulation process while preserving the advantages of each. Subsequent chapters describe our integration method in more detail, including the experiments we used to validate our approach.
2.1 Fluid Models (Flow Model)
A simple fluid network model is shown in Figure 1:
Figure 1: A simple fluid model
In a network modeled with the fluid method, the data flowing between systems is treated as fluid flowing through pipes. To simplify the discussion, we assume that there are no routing loops and that the network contains only feed-forward flows; however, with a small amount of modification, the basic modeling method can also be applied to networks that contain feedback. At each location in the network model, the fluid approach describes the following six processes:
1. α(t): The input flow rate at this location. It models the rate at which data arrives at this location (bits/sec). It may be the output of an upstream location, or the output of an external data source model (the inputs in Figure 1).
2. β(t): The output flow rate that the output communication link can provide, i.e., the maximum rate at which traffic can flow out of the server. It models the capacity of the output communication link at this location (bits/sec).
3. c(t): The buffer size. In general the buffer size may be a function of time, for example when modeling a buffer shared by multiple competing flows; here, however, we treat it as a constant positive value C bits.
4. χ(t): The buffer occupancy. It indicates the amount of traffic currently in the buffer, i.e., the data queued and waiting for the output link.
5. δ(t): The output flow rate of the server. It indicates the rate at which data leaves this location (bits/sec) and is delivered (as expected) to the next location; as shown in Figure 1, the input flow rate of location three is the sum of the output flow rates of locations one and two. Note that the output flow rate δ(t) cannot exceed the rate β(t) that the output communication link can provide. In other words, when one location transmits data to another, it cannot do so faster than the maximum bandwidth of the physical link connecting the two locations.
6. γ(t): The data loss rate due to buffer overflow. It represents the rate at which data is lost at this location (bits/sec).
The first three processes, α(t), β(t), and c(t), are called predefined processes; they characterize the behavior of a fluid location. The other three processes, χ(t), δ(t), and γ(t), are determined by the predefined processes and are therefore called derived processes. Reference [25] describes some of the basic principles of fluid network models.
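To make the relationship between predefined and derived processes concrete, the following Python sketch (not part of the original model description; the names FluidLocation and derive are ours) computes δ(t) and γ(t) at a single location from the current values of α(t), β(t), c(t), and χ(t), assuming the rates stay constant between events.

```python
from dataclasses import dataclass

@dataclass
class FluidLocation:
    """One location in the fluid model, holding the predefined processes."""
    alpha: float       # alpha(t): input flow rate at this location (bits/sec)
    beta: float        # beta(t):  capacity of the output link (bits/sec)
    C: float           # c(t):     buffer size, treated here as a constant (bits)
    chi: float = 0.0   # chi(t):   current buffer occupancy (bits)

    def derive(self):
        """Compute the derived processes delta(t) and gamma(t).

        - If the queue is non-empty, the server drains it at link speed beta.
        - If the queue is empty, the output rate cannot exceed the input rate.
        - Fluid is lost only while the buffer is full and input exceeds output.
        """
        if self.chi > 0.0:                      # backlog present: output at link capacity
            delta = self.beta
        else:                                   # empty queue: output limited by the input
            delta = min(self.alpha, self.beta)

        if self.chi >= self.C and self.alpha > delta:
            gamma = self.alpha - delta          # overflow rate while the buffer is full
        else:
            gamma = 0.0                         # buffer absorbs any excess (or there is none)

        # Between events the buffer occupancy evolves linearly:
        #   d(chi)/dt = alpha(t) - delta(t) - gamma(t)
        return delta, gamma
```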
In a network simulation built with the fluid method, the derived processes (χ(t), δ(t), γ(t)) at each location can be computed exactly, at any time point t, from the current values of the predefined processes. When the computed output δ(t) of one location is the input α(t) of another location, the derived processes of that location are computed as well. The computation is repeated until the derived processes have been calculated at every location affected by the change in the predefined processes. At this point the network has reached a steady state, and no further computation is needed until one of the following three events occurs:
1. An external data source model announces a new data generation rate. For example, an on-off data source model announces that the source has turned on or off.
2. A queue transitions from non-empty to empty (i.e., χ(t) = 0 at some location). This causes a change in the input flow rate of the downstream location.
3. A queue transitions from non-full to full (i.e., χ(t) = C at some location). This causes a change in the overflow rate at that location. When any of these three events occurs, the derived processes at each affected location are recomputed as described above, and the network again reaches a steady state; a sketch of this event-driven recomputation follows below.
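Continuing the FluidLocation sketch above, the following illustrative code (the helper names are ours, not the simulator's actual interface) shows how the time until the next event of type 1, 2, or 3 can be computed, and how the buffer occupancy is advanced to that event before the derived processes are recomputed.

```python
import math

def time_to_next_event(loc: FluidLocation, dt_source_change: float) -> float:
    """Return the time until the earliest of the three rate-change events at this
    location, assuming the predefined rates stay constant until then.
    dt_source_change is the time until the next external source rate change (event 1)."""
    delta, gamma = loc.derive()
    net = loc.alpha - delta - gamma             # d(chi)/dt between events

    # Event 2: the queue drains to empty (chi reaches 0).
    dt_empty = loc.chi / -net if net < 0 and loc.chi > 0 else math.inf
    # Event 3: the queue fills up (chi reaches C).
    dt_full = (loc.C - loc.chi) / net if net > 0 and loc.chi < loc.C else math.inf
    return min(dt_source_change, dt_empty, dt_full)

def advance(loc: FluidLocation, dt: float):
    """Advance the buffer occupancy by dt, then re-derive delta(t) and gamma(t).
    In a full simulator the new delta(t) becomes the alpha(t) of the downstream
    location, and the recomputation repeats until no derived value changes."""
    delta, gamma = loc.derive()
    loc.chi = min(max(loc.chi + (loc.alpha - delta - gamma) * dt, 0.0), loc.C)
    return loc.derive()
```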
Modeling a network in this way involves a number of tradeoffs:
1. In some cases, fluid model simulation is computationally very efficient. Because the simulation only has to handle rate-change events, its computational cost can be much lower than that of a packet-level simulation. In earlier experiments we showed that, when modeling a simple network, the fluid model simulation runs much faster than the corresponding packet-level simulation. However, when more complex networks are modeled, this computational advantage becomes less pronounced. The main reason is the well-known ripple effect in fluid model simulation: a change in the input rate of one flow at a non-empty queue also changes the output flow rates of all flows sharing that queue (illustrated in the sketch below). In other words, a single rate change in one flow can trigger rate changes in many flows at related queues. For a more detailed discussion of the ripple effect, see Liu et al. [12, 11].
2. The fluid model approach is more amenable to mathematical analysis than other modeling methods. Wardi et al. [24] have shown that describing network behavior as a set of differential equations enables precise infinitesimal perturbation analysis, which can yield good predictions of future network performance. Towsley et al. [16] used fluid models and systems of differential equations to analyze the behavior of TCP endpoints and active queue management techniques.
3. The fluid model approach reduces the accuracy of the network performance metrics obtained from the simulation, because it always works with the average network state over some period of time and therefore cannot necessarily capture the consequences of transient network conditions. For example, when several flows arrive at a full queue, the fluid model computes each flow's loss rate in proportion to its share of the total input rate (see the sketch below). In a real network, the loss rates of different flows can differ greatly at different points in time, often because of chance factors or stroboscopic effects caused by packet arrival patterns.
4. As mentioned earlier, a network that contains feedback (either potential routing loops in the network topology, or data sources that adjust their sending rates according to network conditions) is difficult to simulate. Routing loops greatly increase the number of rate-change events the fluid model simulator must process, which causes simulation efficiency to drop rapidly.
Despite these tradeoffs and potential accuracy problems, a large number of researchers have reported good results using fluid model simulators [27, 21, 22, 17, 10, 15].
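As a concrete illustration of points 1 and 3 above, the following sketch (our own illustrative code, not taken from any cited simulator) shares the overflow of a full queue among competing flows in proportion to their input rates. Note that changing the input rate of a single flow changes the derived output and loss rates of every flow sharing the queue, which is exactly the ripple effect described in point 1.

```python
def share_queue(input_rates: dict[str, float], beta: float):
    """Split a shared full queue's service rate beta and its overflow among flows,
    in proportion to each flow's input rate (the fluid-model approximation)."""
    total = sum(input_rates.values())
    overflow = max(0.0, total - beta)
    out, loss = {}, {}
    for flow, rate in input_rates.items():
        share = rate / total if total > 0 else 0.0
        loss[flow] = share * overflow           # proportional loss rate
        out[flow] = rate - loss[flow]           # proportional share of the output
    return out, loss

# A single rate change ripples to every flow sharing the queue:
out1, loss1 = share_queue({"f1": 6e6, "f2": 4e6}, beta=8e6)
out2, loss2 = share_queue({"f1": 7e6, "f2": 4e6}, beta=8e6)   # only f1's rate changed...
# ...yet both f1's and f2's output and loss rates differ between the two cases.
```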
2.2 Packet Models
Figure 2: A simple packet model
An example of a packet-model network simulation is shown in Figure 2. With this method, the simulator models every packet generated by every source and models the path of each packet through every point in the network. Packets are tracked through each link, each queue, each data source, and each receiver. Loss computations can be performed at the level of individual packets, which yields fairly accurate and repeatable network behavior. A large number of research projects and commercial products use packet-level modeling for network simulation, including ns-2 [14], PDNS [19, 20], GloMoSim [28], SSF [4, 3], JavaSim [23], and OPNET [1].
The disadvantage of packet-level modeling is its higher computational cost. Because the simulator tracks and models every packet event (enqueue, dequeue, transmission, reception, loss), the total number of packet events the system must process can be enormous. In addition, the memory required grows with the number of packets present in the network at any point during the simulation, so memory can become the bottleneck of packet-level simulation.
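For contrast with the fluid approach, the following toy sketch (purely illustrative, not the interface of any of the simulators cited above) counts the per-packet events on a single FIFO link; the event count, and the memory needed for queued packets, grows with the number of packets rather than with the number of rate changes.

```python
def simulate_link(arrival_times, service_time, buffer_packets):
    """Toy packet-level FIFO link: every packet contributes its own arrival,
    dequeue, transmit (and possibly drop) events."""
    events = delivered = dropped = 0
    in_system = []          # departure times of packets currently queued or in service
    last_departure = 0.0
    for t in sorted(arrival_times):
        events += 1                                   # arrival/enqueue event
        done = [d for d in in_system if d <= t]       # packets finished before this arrival
        events += 2 * len(done)                       # one dequeue and one transmit event each
        delivered += len(done)
        in_system = [d for d in in_system if d > t]
        if len(in_system) >= buffer_packets:          # queue (incl. packet in service) is full
            dropped += 1
            events += 1                               # per-packet loss event
        else:
            last_departure = max(last_departure, t) + service_time
            in_system.append(last_departure)
    events += 2 * len(in_system)                      # drain the remaining packets
    delivered += len(in_system)
    return events, delivered, dropped

# e.g. 10,000 packets arriving every 1 ms over a link that needs 1.2 ms per packet:
events, ok, lost = simulate_link([i * 0.001 for i in range(10_000)], 0.0012, buffer_packets=50)
```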