DirectShow RTP and Networked Multimedia Applications


Abstract

Interactive collaboration applications and distributed games consist of many independent multimedia programs that run concurrently and play multiple audio and video streams. As individual streams change and as streams and applications start and stop, both the total amount of available resources and the demand for them vary. A networked multimedia (NetMM) application must be prepared to adapt to these changes, using whatever resources it can obtain to provide users with an acceptable level of service. This paper focuses on the problem of adding network and host adaptation to the component-based DirectShow RTP framework.

DirectShow is Microsoft's system architecture for capturing and displaying audio and video data. DirectShow RTP is a framework that extends DirectShow with support for transmitting multimedia application data over a network using the RTP protocol. The DirectShow RTP framework is designed to support highly scalable multimedia streaming tasks, and it ships as part of the Windows NT 5.0 operating system. We have extended this architecture with support for streaming applications that can dynamically compensate for changes in the resources available on the local host and on the computer network while sending and receiving multimedia data.

Our extensions include components that gather the information needed to make adaptation choices, make decisions based on that information, and carry out those decisions within the infrastructure. The lessons reported here are useful both to application developers and to architects designing frameworks that support such applications.

1. Introduction

NetMM applications are among the most demanding programs executed on client computers and single-user workstations. They place high demands on host processing power and network bandwidth, and they often require near-real-time access to resources provided by the underlying operating system and the network. A delay in access to any of these resources leads to a noticeable decline in the quality experienced by the user. As the resource requirements of individual NetMM programs change and as programs and system services start and stop, the resources available on the local host and network vary considerably. Because both the available resources and the required resources change significantly at run time, NetMM applications must be prepared to adapt smoothly.

In this paper we focus on two kinds of adaptation: network adaptation and host adaptation. Network adaptation is the ability to make the best use of network resources in the presence of varying bandwidth, network jitter, packet loss, and so on. Host adaptation is the ability of an application to change its behavior based on the state of the local host, including CPU utilization and available memory. The following examples illustrate situations where network and host adaptation are useful:

Multiple streams competing for resources. Suppose an application has an audio stream, a high-bit-rate video stream, and a bursty slide-show stream, and that the audio stream has the highest priority for the user. If the quality of the audio stream suffers, the application can adapt the slide show and video streams to reduce their use of system resources and preserve the user's priorities. The application must also detect competition for network resources and adjust the sending and receiving behavior of each stream accordingly.

Allowing users with different network and processor resources to participate. In a video conference or interactive meeting among heterogeneous users with different bandwidths and processing capabilities, not all nodes can receive all streams. With hierarchical (layered) coding, all participants can still take part under these conditions. Compared with presentations that do not use layered streams, this form of adaptation allows heterogeneous participants with very modest resources to be far better satisfied.

Compensating for changes in the relative importance of applications. In a single-user environment, the importance of one program relative to others changes over time. For example, when a user switches from watching a news broadcast to a compilation or design task, an operating system such as Windows 95 typically schedules the multimedia program into the background at a lower priority. When the user switches back to the news broadcast because something of interest appears (say, the latest baseball score), the operating system raises the program's execution priority accordingly. A NetMM application should respond to being moved to the background or foreground in the same way: it should reduce its network and CPU usage after being switched to the background and restore them when it returns to the foreground. This response goes beyond the operating system's priority-scheduling policy and redirects resources to where they are most urgently needed. Researchers have proposed several ways for NetMM programs to adapt, including various forms of source rate control, receiver-driven adaptation using layered video, and adaptation to host resources. Although protocols and mechanisms such as RSVP and ATM can provide QoS guarantees that satisfy NetMM programs, researchers continue to study adaptation under conditions of constrained and changing resources.

Assuming that adaptation has a positive effect on application quality, we set out to study how adaptation can be added to applications without forcing application developers to deal with its complexity. This paper examines in detail how NetMM programs built on a component-based multimedia streaming architecture can adapt to network and host conditions. It presents our results in this area, including new observations on resource constraints, the construction of middleware for NetMM that supports network and host adaptation, and lessons that are useful to application developers and architects building adaptive systems.

We use Microsoft's DirectShow as the foundation of our architecture. DirectShow provides a modular, extensible system for implementing multimedia applications. On top of the DirectShow architecture we added a framework that implements the RTP protocol, which we call DirectShow RTP and which is used to create NetMM applications.

The remainder of this paper is organized as follows. Section 2 describes the DirectShow architecture and DirectShow RTP. Section 3 analyzes several ways of adding adaptive behavior to components based on DirectShow RTP. Section 4 presents an implementation of source-based adaptation, and Section 5 presents an implementation of receiver-driven adaptation using layered multicast. Section 6 gives an overall empirical analysis of implementing network and host adaptation in a component-based streaming framework. Section 7 discusses areas where adaptation in NetMM could be improved further. Finally, the paper concludes with the benefits of implementing adaptation on a component-based architecture.

2. The DirectShow architecture

2.1 Microsoft DirectShow

The DirectShow architecture provides basic multimedia stream-processing functions for a wide range of applications. It covers all aspects of multimedia processing, including capture, encoding and decoding, playback, and storage of audio and video data. DirectShow operates on multimedia data through four main abstractions: filters, pins, media samples, and media types.

A DirectShow filter encapsulates one or more tasks related to multimedia streaming. Examples include a video capture filter, which controls a camera device and outputs raw RGB video frames, and an H.261 codec filter, which compresses raw RGB video into H.261 frames and decompresses it again. Similar filters exist for audio streams, such as an audio capture filter and a G.711 codec filter. Filters are also used to render (play back) video on local devices. To let a program combine these functions into audio or video processing, DirectShow uses filter graphs. A filter graph is a set of connected filters that process multimedia data buffers in sequence. Figure 1 shows an example filter graph containing a file reader filter, an MPEG decoder filter, and a video renderer filter for playing back video. Filters connect into a filter graph through pins. Pins serve two main purposes in DirectShow. The first is to negotiate media types and memory allocation when filters are connected. Media type negotiation is the way two connecting filters agree on the format of the data they exchange. Memory allocation negotiation specifies where the multimedia buffers (also called media samples) are allocated and how: how much memory, what byte alignment, whether to use memory from a memory-mapped device, and so on.

The second task of pins in DirectShow is to hide the details of how data is exchanged between filters. Once a connection has been established, filters simply receive media samples from, and deliver them to, their pins; the pins do the actual work of delivering the samples to the next filter in the filter graph.
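
As a concrete illustration, the following minimal sketch builds and runs a playback graph like the one in Figure 1 through the standard DirectShow COM interfaces; RenderFile lets the graph manager insert and connect the file reader, MPEG decoder, and renderer filters automatically. The file name is only an example and error handling is abbreviated.

```cpp
#include <dshow.h>
#pragma comment(lib, "strmiids.lib")

int main()
{
    CoInitialize(NULL);

    // Create the filter graph manager.
    IGraphBuilder *pGraph = NULL;
    CoCreateInstance(CLSID_FilterGraph, NULL, CLSCTX_INPROC_SERVER,
                     IID_IGraphBuilder, (void **)&pGraph);

    // Let the graph builder assemble the file reader, MPEG decoder,
    // and video renderer filters (the graph of Figure 1).
    pGraph->RenderFile(L"example.mpg", NULL);   // file name is illustrative

    // Run the graph and wait for playback to finish.
    IMediaControl *pControl = NULL;
    IMediaEvent   *pEvent   = NULL;
    pGraph->QueryInterface(IID_IMediaControl, (void **)&pControl);
    pGraph->QueryInterface(IID_IMediaEvent,   (void **)&pEvent);

    pControl->Run();
    long evCode = 0;
    pEvent->WaitForCompletion(INFINITE, &evCode);

    pControl->Release();
    pEvent->Release();
    pGraph->Release();
    CoUninitialize();
    return 0;
}
```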

A media sample is DirectShow's abstraction of a buffer holding multimedia data. In addition to the multimedia data it contains, a media sample carries start and stop timestamps that define the sample's lifetime. Renderers can use these values to decide when to play a sample and to detect performance problems.

A DirectShow media type specifies the format of the data carried in the media samples exchanged between filters. A media type has several parts, the most important of which are the major and minor (sub) types. The major type conveys the highest-level semantics; MAJORTYPE_VIDEO and MAJORTYPE_AUDIO are two examples. Minor types distinguish particular formats, such as MINORTYPE_AUDIO_G711A and MINORTYPE_AUDIO_G723. If the pins of two filters can find a media type they both accept, they can establish a connection. DirectShow allows new filters, pins, and media types to be defined. Taking advantage of this built-in extensibility, we defined two new media types and several filters that stream multimedia data over the network with the RTP protocol inside the DirectShow architecture. We call this new NetMM framework DirectShow RTP.
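
The media-type check performed during pin connection amounts to comparing these GUIDs. The helper below is a minimal sketch of such a check; the GUID arguments stand in for the MAJORTYPE_*/MINORTYPE_* identifiers named above.

```cpp
#include <dshow.h>

// Returns true if the offered media type matches the major/minor type
// pair this pin understands. The GUID arguments would be the framework's
// MAJORTYPE_* / MINORTYPE_* identifiers.
bool PinAcceptsType(const AM_MEDIA_TYPE *pmt,
                    REFGUID majorWanted, REFGUID minorWanted)
{
    return IsEqualGUID(pmt->majortype, majorWanted) &&
           IsEqualGUID(pmt->subtype,   minorWanted);
}
```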

2.2 DirectShow RTP

DirectShow RTP defines a set of filters and media types that support multimedia streaming over the network using the RTP protocol. The new filters are the RTP Source filter, RTP Render filter, RTP Demux filter, RTP Receive Payload Handler (RPH) filter, and RTP Send Payload Handler (SPH) filter. Using these five filters, together with standard codec and capture/rendering filters, an application can send and receive video streams over the network with RTP. The RTP Source filter receives RTP and RTCP packets from a single RTP session and delivers them, encapsulated in media samples, into the filter graph. Its output pin exposes a media type with the major type RTP_MAJORTYPE_MULTIPLE_STREAM and the subtype RTP_MINORTYPE_PAYLOAD_ANY, both part of the DirectShow RTP framework. This media type combination indicates that the filter can produce a stream containing one or more RTP streams, which may be of a single payload type or of several payload types. The filter provides interfaces for specifying the RTCP information reported to other hosts and for specifying the network address and port of the RTP session to receive.

The counterpart of the RTP Source filter is the RTP Render filter. This filter accepts a connection whose media type has the major type RTP_MAJORTYPE_SINGLE_STREAM and any subtype. The more restrictive major type is necessary so that the sender complies with the rules of the RTP audio/video profile (AVP) concerning payload types and SSRCs within a single RTP stream. The filter provides interfaces similar to those of the RTP Source filter.

The RTP Demux filter separates the RTP packets delivered by an RTP Source filter. Demultiplexing is performed according to the SSRC and payload type of each packet. The filter's input pin accepts the major type RTP_MAJORTYPE_MULTIPLE_STREAM and the subtype RTP_MINORTYPE_PAYLOAD_ANY. The filter exposes one or more output pins whose major type is RTP_MAJORTYPE_SINGLE_STREAM and whose subtype corresponds to the payload of the single stream delivered on that pin. The filter provides interfaces for controlling how streams are demultiplexed and how they are assigned to specific output pins. The RTP RPH filter converts RTP packets from a source of a single, fixed payload type into the corresponding depacketized form. One version of this filter, for example, accepts RTP H.261 packets and produces H.261-compressed video frames. Versions of the filter have been designed to support H.261, H.263, Indeo, G.711, G.723, and G.729, as well as generic audio and video payload types. These filters depacketize media samples in accordance with the RTP AVP specification. The RPH filter also provides an interface for specifying the amount of buffering used to trade added latency against loss (or reordering), in particular when frames must be reassembled from packets that arrive out of order. The RTP SPH filter is the counterpart of the RTP RPH filter: its task is to fragment the output of audio and video compression filters into RTP packets. It provides interfaces for specifying the maximum generated packet size (to respect the MTU, or maximum transmission unit, of different networks) and the RTP payload type (PT) value to use, allowing dynamic RTP PT values.

Figures 2 and 3 show how the filters defined by DirectShow RTP are used. Figure 2 is a filter graph that captures local multimedia data and transmits it over the network using RTP. It contains a video capture filter that outputs raw video frames, followed by an encoder filter that produces compressed frames. Once compressed, the frames are sent to an RTP SPH filter, which fragments them into RTP packets and delivers them to an RTP Render filter for transmission over the network. Figure 3 shows a filter graph that receives the RTP packets of a video stream and plays the video. The graph consists of an RTP Source filter that receives the packets, an RTP Demux filter that sorts them by source and payload type, and an RTP RPH filter that converts the RTP packets into compressed video frames. The frames are then passed to a decoder filter and to a video renderer that displays the uncompressed frames. The most important parts of the DirectShow RTP framework are the media types defined to represent RTP streams and the five filters used to receive, process, and send RTP packets. The media type definitions provide a uniform way to describe RTP streams within the DirectShow architecture and allow new ways of processing RTP streams to be added in the future. The filter implementations provide a set of components from which NetMM applications can easily be built, and an architecture that can be used for further NetMM research.
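
For illustration, the fragment below sketches how an application might assemble the receiving graph of Figure 3. The CLSIDs of the DirectShow RTP filters are not given in this paper, so the zeroed values here are placeholders, and the session-configuration interfaces are only referred to in comments.

```cpp
#include <dshow.h>

// Placeholder CLSIDs for the DirectShow RTP filters described in this
// paper; the real registered GUID values are not given here, so these
// zeroed GUIDs are for illustration only.
static const CLSID CLSID_RTPSourceFilter = { 0 };
static const CLSID CLSID_RTPDemuxFilter  = { 0 };
static const CLSID CLSID_RTPRPHFilter    = { 0 };

// Build the receiving half of Figure 3: RTP Source -> Demux -> RPH.
// The decoder and video renderer are added the same way; error handling
// and interface Release calls are omitted.
HRESULT BuildReceiveGraph(IGraphBuilder *pGraph)
{
    IBaseFilter *pSource = NULL, *pDemux = NULL, *pRph = NULL;

    CoCreateInstance(CLSID_RTPSourceFilter, NULL, CLSCTX_INPROC_SERVER,
                     IID_IBaseFilter, (void **)&pSource);
    CoCreateInstance(CLSID_RTPDemuxFilter,  NULL, CLSCTX_INPROC_SERVER,
                     IID_IBaseFilter, (void **)&pDemux);
    CoCreateInstance(CLSID_RTPRPHFilter,    NULL, CLSCTX_INPROC_SERVER,
                     IID_IBaseFilter, (void **)&pRph);

    pGraph->AddFilter(pSource, L"RTP Source");
    pGraph->AddFilter(pDemux,  L"RTP Demux");
    pGraph->AddFilter(pRph,    L"RTP RPH (H.261)");

    // In a full implementation the application would configure the RTP
    // session address and port through the source filter's own
    // (framework-specific) interface, then enumerate the pins of each
    // filter (IBaseFilter::EnumPins) and connect each adjacent pair with
    // IGraphBuilder::Connect(outPin, inPin).
    return S_OK;
}
```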

3. DirectShow RTP adaptation

DirectShow lets a programmer write multimedia programs without dealing with the details of media processing, and DirectShow RTP extends the architecture to support NetMM applications that use the RTP protocol. To support adaptation, several additions to DirectShow are needed: components that measure and monitor the delivery of multimedia streams, determine the cause of any decline in quality under adverse conditions, initiate adaptation to those conditions, and verify that the changes to the delivery and presentation of the stream achieve the expected improvement. A priority scheme is also needed to rank the different streams belonging to the same program or to different applications on the same host.

The DirectShow architecture already supports one mechanism that allows a filter graph to adapt to adverse conditions affecting its load. This mechanism is based on the generation and interpretation of quality messages. A quality message indicates that data is being produced too quickly or consumed too slowly by the filters in a graph. Quality messages propagate through the filter graph in the direction opposite to the data flow. Each filter is responsible for acting on a quality message or passing it further upstream. A quality message contains a field describing the severity of the flood or famine, which tells an upstream filter by how much to change its output rate. Beyond this default behavior, a filter can be configured to deliver its quality messages to another system component instead. This allows DirectShow to support two modes of adaptation. The default mode, in which quality messages are passed upstream, enables adaptation within a single stream. Re-routing the messages to a different recipient enables an adaptation controller to coordinate adaptation across multiple streams. These two ways of delivering quality messages lead to two different ways of adding adaptation support to the DirectShow RTP architecture.
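
In the DirectShow SDK these quality messages are carried by the IQualityControl interface and the Quality structure. The fragment below sketches how an upstream filter might interpret one; the field names are the standard SDK ones, while the adjustment logic is purely illustrative.

```cpp
#include <dshow.h>   // Quality, QualityMessageType, IQualityControl

// Illustrative handler: an upstream filter (e.g., a capture filter)
// receives a quality message from a downstream filter and scales its
// output rate. Proportion is expressed in units of 1/1000 of the
// current rate; Flood means "too much data", Famine "too little".
HRESULT OnQualityNotify(const Quality &q, double &frameRate)
{
    if (q.Type == Flood && q.Proportion < 1000) {
        frameRate = frameRate * q.Proportion / 1000.0;   // slow down
    } else if (q.Type == Famine && q.Proportion > 1000) {
        frameRate = frameRate * q.Proportion / 1000.0;   // speed up
    }
    return S_OK;   // handled; do not pass further upstream
}
```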

3.1 In-stream quality control

In this mode (shown in Figure 4), the filter that generates a quality message sends it to the next filter upstream. Each filter in the graph may change its output in response, or pass the message further upstream without acting on it. This approach has the following characteristics:

Each filter monitors QoS and provides an indication of its own performance. Each filter may contain logic for interpreting quality information and reacting to it. The adaptation task is distributed across the components, and the interaction between them is simple. The approach also has disadvantages, however: each stream adapts on its own rather than in coordination with other streams; an application cannot share the available resources with other applications; and it is difficult to express policies such as "adjust the capture settings before the codec settings." In-stream quality control thus provides a simple, easy-to-implement mechanism for learning about adverse conditions in a filter graph and responding to them, but it is severely limited by the fact that it adapts only within a single filter graph rather than across an application or the system as a whole.

3.2 Adaptation control

With an adaptation controller (in DirectShow terms, a quality manager) designated for a filter graph, the application sets a quality-message sink on each of the filters' pins. If a sink has been set, the filter delivers its quality messages to that sink instead of passing them upstream. An adaptation controller may monitor a single filter, all the filters in a filter graph, all the filter graphs in a program, or even filter graphs running in several different applications. The adaptation controller then carries out the adaptation decisions. Figure 5 illustrates an adaptation controller managing two streams: it takes its adaptation input from the two renderer filters and implements its policy by modifying the source (capture) and codec filters.
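
A minimal sketch of such a controller, assuming it is written directly against DirectShow's IQualityControl interface: the controller registers itself on a pin with SetSink, after which that pin's quality messages arrive at Notify instead of traveling upstream. The centralized policy shown here is only a placeholder.

```cpp
#include <dshow.h>

// Skeleton of an adaptation controller that receives quality messages
// from all pins it is registered on.
class AdaptationController : public IQualityControl
{
public:
    // Called by a filter when it generates a quality message.
    STDMETHODIMP Notify(IBaseFilter *pSelf, Quality q)
    {
        // Centralized policy placeholder: record which filter is flooded
        // or starved, then decide how to adjust the capture and codec
        // filters of *all* streams under this controller's control.
        RecordCondition(pSelf, q);
        return S_OK;
    }

    STDMETHODIMP SetSink(IQualityControl *) { return E_NOTIMPL; }

    // Register this controller as the quality sink of one pin.
    void Attach(IPin *pPin)
    {
        IQualityControl *pqc = NULL;
        if (SUCCEEDED(pPin->QueryInterface(IID_IQualityControl,
                                           (void **)&pqc))) {
            pqc->SetSink(this);   // re-route quality messages to us
            pqc->Release();
        }
    }

    // Trivial IUnknown implementation for illustration only
    // (not reference-counted correctly for production use).
    STDMETHODIMP QueryInterface(REFIID riid, void **ppv)
    {
        if (riid == IID_IUnknown || riid == IID_IQualityControl) {
            *ppv = static_cast<IQualityControl *>(this);
            AddRef();
            return S_OK;
        }
        *ppv = NULL;
        return E_NOINTERFACE;
    }
    STDMETHODIMP_(ULONG) AddRef()  { return 2; }
    STDMETHODIMP_(ULONG) Release() { return 1; }

private:
    void RecordCondition(IBaseFilter *, const Quality &) { /* ... */ }
};
```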

A centralized adaptation-control architecture raises several issues. One that must be resolved is how the adaptation controller interacts with the application. It must also be possible to write the adaptation controller in a flexible way so that different kinds of components can be controlled. Finally, adaptation policies for adverse network and host conditions must be defined, and these policies must work both for a single filter graph (one media stream) and across multiple filter graphs. Applications can be divided into three categories according to the kind of interaction they have with the adaptation controller: applications that supply adaptation policies to the controller, applications that merely interact with the controller, and applications that have no direct interaction with the controller at all.

An application that supplies policies loads the controller in the form of a DLL. After creating its filter graphs, the application calls an API provided by the adaptation controller to establish the connections that allow the controller to adapt each filter graph. The application configures which filters may be adapted and the policies used to adapt them.

For applications where adding full support for interaction with the adaptation controller is not feasible, a smaller level of support can be added in the form of a call that creates the adaptation controller. In this case the application only needs to hand the controller a pointer to each filter graph. The adaptation controller parses the graph, obtains the various interfaces, and interacts with them transparently to the application. Finally, applications that need adaptation but cannot be modified at all can be handled by adding an adaptation proxy filter to the application's filter graph. DirectShow's ability to store filter graphs in file form can be exploited for this purpose: the proxy filter is added to the stored filter graph without the application's knowledge. When the filter graph is instantiated, the proxy filter represents the adaptation controller inside the graph, handing the interfaces of the other filters and their quality messages over to the adaptation controller.

To make the DirectShow RTP architecture as widely applicable as possible, our adaptation controller supports all three kinds of application interaction.

3.3 Controlling filters to cause adaptation

Two ways for the adaptation controller to operate on filters are considered here. In the first, the adaptation controller does not intervene directly; quality messages are simply passed upstream through the filter graph. The adaptation controller then does not need to know about the individual filters or how to control each of them. However, we found that many filters (such as the standard DirectShow video capture filter and common codecs) cannot process quality messages at all. Other filters react poorly to them; the MPEG decoder filter, for example, discards all P and B frames after receiving a flood message. The adaptation controller we created for DirectShow RTP can simply send quality messages upstream on behalf of filters whose output lacks such handling. This method, however, is only a fallback in the absence of better control, because the adaptation controller gets no feedback on the effect of the quality messages it sends. Many filters provide interfaces that allow the adaptation controller to control their output directly. To take advantage of this, the second mode of operation uses interfaces on the filters themselves to carry out the adaptation. By directly controlling the output of each filter, the controller can choose which filters adapt and by how much. This gives the adaptation controller finer and more precise control than the previous mode, which relies on sending a quality message upstream and hoping that some filter responds as expected.

The adaptation control engine we created supports both modes. The engine coordinates the adaptation of all multimedia streams in a single process within a single component, which yields more effective adaptation than the default DirectShow behavior, in which adaptation decisions are made for each filter graph's stream in an uncoordinated fashion. In the next two sections we discuss two adaptation mechanisms: source-based adaptation control, and receiver-driven control using layered multicast. We describe how these mechanisms are implemented in our component-based architecture and how they are used to adapt to both network and host conditions.

4. Source-based adaptation control

With adaptive source rate control (ASRC), the sender responds to changes in network resources by changing the capture frame rate and compression settings of its filters. In our work we extended ASRC so that it also adapts to detected changes in locally available resources. Because many of the components supplied with DirectShow lack good support for adaptation, we had to create new components that meet our needs. We also had to create an adaptation controller that can manipulate each component's interfaces and execute ASRC.

4.1 Video capture filter control

To support adaptation, the video capture filter must allow parameters such as the capture frame rate and resolution to be adjusted while the graph is running. The video capture filter must also generate media samples with timestamps accurate enough for the renderer to generate meaningful quality messages. Finally, it must respond to quality messages from downstream filters.

In the current implementation, changing the frame rate or resolution of the capture filter requires stopping capture, changing the capture parameters, and restarting capture. Changing the capture resolution produces a pause in the rendered video that is hard to notice only on a fast CPU such as a P2-200 Pro. This characteristic of the video capture process led us to an adaptation strategy that, whenever possible, adjusts the frame rate rather than the resolution. Changing the capture frame rate, however, has a significant effect on DirectShow timestamps. The timestamp of a DirectShow media sample is an elapsed-time value based on a frame-time counter placed in each video frame by the capture driver:

Sample Start Time = Graph Start Time + Elapsed Frame Time (taken from the video frame)

The renderer filter uses the sample timestamps to determine whether media samples are arriving on time. When video capture is restarted, the frame-time counter in the video frames starts again at zero, so the resulting DirectShow timestamps appear to the renderer to be older than they should be. We also observed that after a few minutes the video renderer reported that samples were late even though the system load was low. This happens because the timestamps generated by the video capture driver gradually fall behind wall-clock time. The capture filter derives these timestamps from the frame counter in each video frame, which is driven by the capture device's own clock source. That clock source has coarse resolution, is inaccurate, and is potentially inconsistent with the clock the video renderer compares against, so samples slowly appear to be late. To solve both problems, we modified the capture filter to generate timestamps from the DirectShow reference clock (which is based on a more accurate wall-clock source) rather than from the values found in the video frames. This eliminates the restart and drift problems and is equivalent to the following sample start time:

Sample Start Time = Graph Start Time + Elapsed Wall Clock Time
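
A sketch of the modified timestamping, using DirectShow's reference clock and media-sample interfaces; the stored graph start time and the nominal frame duration are assumptions of this example.

```cpp
#include <dshow.h>

// Stamp a captured sample with wall-clock-based times instead of the
// capture driver's frame counter. tGraphStart is the reference-clock
// reading taken when the graph started running; frameDuration is the
// nominal duration of one frame (both in 100 ns REFERENCE_TIME units).
HRESULT StampSample(IReferenceClock *pClock,
                    REFERENCE_TIME   tGraphStart,
                    REFERENCE_TIME   frameDuration,
                    IMediaSample    *pSample)
{
    REFERENCE_TIME tNow = 0;
    HRESULT hr = pClock->GetTime(&tNow);          // accurate wall clock
    if (FAILED(hr)) return hr;

    REFERENCE_TIME tStart = tNow - tGraphStart;   // elapsed wall-clock time
    REFERENCE_TIME tStop  = tStart + frameDuration;
    return pSample->SetTime(&tStart, &tStop);     // sample start/stop times
}
```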

In addition to using a more accurate clock for timestamp generation, the capture filter's buffer-management policy also matters for quality management. As originally built, the filter discards the oldest buffer and places each new video frame at the head of the buffer chain. The downstream filters therefore always get the newest video frames rather than old, stale buffers, which makes the renderer believe that all samples are arriving on time; it then sees no need to adapt or to send quality messages even when the CPU is heavily loaded. One solution is for the capture filter to take action directly when it runs out of buffers, instead of relying on quality-message instructions from downstream. Alternatively, if the newest frames are discarded instead, the renderer can detect that the samples it does receive are arriving too late.

To compensate for this shortcoming of the video capture filter, we modified its quality-message handling so that the capture filter processes these messages instead of ignoring them. A downstream filter (such as the video renderer) invokes this behavior by sending a quality message indicating that the rate at which samples are delivered must change. The capture filter's response to quality messages is implemented in a separate thread, to avoid doing too much work in the context of the calling thread. This is necessary to avoid further rendering delays, which could occur if the renderer's thread were forced to do excessive processing on behalf of the source filter.

4.2 Codec support for adaptation

The codec in a filter graph is the main consumer of processor resources, so it is the first candidate for adaptation. The video codecs we use provide interfaces that control the output bit rate and video quality. The adaptation controller uses these interfaces to influence CPU usage and network bandwidth. We found these interfaces very useful for our design intent: an adaptation controller that can interact directly with individual filters.

4.3 Adaptation in the RTP Source and Render filters

To enable adaptation to network and host conditions, the RTP Render filter must be changed; its task is to transmit the RTP stream over the network and to monitor RTCP reports. This filter must generate messages reflecting the feedback contained in the RTCP receiver reports sent by the hosts receiving and rendering the stream.

One way to accomplish this is for the RTP Render filter itself to parse the RTCP receiver reports. It would then translate changes in the loss fraction reported in the RTCP packets into the proportion field of DirectShow quality messages. The drawback of this method is that it limits the ability to change how RTCP packets are interpreted and to implement different adaptation algorithms. Therefore, to support source-based network adaptation, the RTCP packets are also delivered to the network adaptation controller: the controller can intercept the raw RTCP messages through a general-purpose interface supported by the RTP Render filter.

To enable host adaptation at the sender, the RTP Render filter must generate quality messages that reflect the state of local resources such as the CPU. This was easy to implement because the RTP Render filter inherits from the DirectShow video rendering base class, which already supports generating quality messages for audio and video streams.

4.4 Network adaptation control

Source-based network adaptation is most useful in point-to-point conferences, where feedback about the single receiver is available to the sender. It can also be used in multicast conferences, but this assumes that the participating hosts have comparable network bandwidth. The best use of ASRC is therefore a conference on a local area network, where the available network capacity is homogeneous. In such an environment, adapting the sender's transmission bandwidth is more effective than receiver-driven layered multicast (RLM). The algorithm used by our network adaptation controller is similar to that of Busse et al. The adaptation controller lets the application set the loss thresholds and the minimum and maximum bandwidth. We extended the ASRC concept by adding the ability to set priorities among the video streams of a single application. Typically this means that when a high-priority stream shows visible loss, the low-priority streams are the ones adapted.

Our implementation of source-based network adaptation consists of three steps: RTCP analysis, network state estimation, and bandwidth adjustment.

When a new RTCP receiver report arrives at the sender, it is used to update the packet-loss estimate for the receiver that sent it. We compute a smoothed packet loss with a low-pass filter, giving a weight of 0.3 to the loss reported in the most recent RTCP report. A list of receivers is maintained for as long as they participate in the session. An RTCP BYE message removes a receiver from the list of active receivers, because keeping stale entries would lead to inaccurate adjustment decisions. We plan to add a timestamp to each receiver's record so that a receiver can be expired when no RTCP packet arrives from it within a reasonable time.

Based on its smoothed packet loss, each receiver is classified as congested, loaded, or unloaded. A count of receivers is maintained for each state; when a receiver moves from one state to another, the count for its old state is decremented and the count for its new state is incremented. A receiver whose smoothed packet loss exceeds the loss threshold is considered congested, one whose loss is below 2% is considered unloaded, and anything in between is considered loaded.

The decision to decrease, increase, or hold the current bit rate depends on the number of receivers in each state. If more than 10% of the receivers are congested, we reduce the target bit rate used on the network. Otherwise, if more than 10% of the receivers are loaded, we hold the current bit rate. Finally, if neither condition applies, we raise the target network bit rate.

The sender uses an additive-increase, multiplicative-decrease algorithm to adjust its bit rate. The rate adjustment is done in two steps, because we found that the codec occasionally deviates from the target bit rate: when there is a lot of motion in the frame, the codec can drastically exceed the configured rate. In that case we adjust the capture frame rate as a second control to bring the actual output to the intended bit rate. We plan to enhance the source-based network adaptation so that users can choose the trade-off between bit-rate and frame-rate adjustment.
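
The following sketch pulls the three steps together. Only the 0.3 smoothing weight, the 2% unloaded threshold, the 10% decision rule, and the AIMD shape come from the text; the congestion threshold, rate bounds, decrease factor, and increase step are application-set parameters or placeholders.

```cpp
#include <map>
#include <cstdint>

// Sketch of the sender-side network adaptation described above.
class SourceRateController
{
public:
    SourceRateController(double lossThreshold, double minBps, double maxBps)
        : lossThreshold_(lossThreshold), minBps_(minBps), maxBps_(maxBps) {}

    // Step 1: RTCP analysis -- smooth each receiver's reported loss.
    void OnReceiverReport(std::uint32_t ssrc, double reportedLossFraction)
    {
        double &s = smoothedLoss_[ssrc];
        s = 0.7 * s + 0.3 * reportedLossFraction;       // low-pass filter
    }

    void OnBye(std::uint32_t ssrc) { smoothedLoss_.erase(ssrc); }

    // Steps 2 and 3: classify receivers and adjust the target bit rate.
    double AdjustTargetBitrate(double currentBps)
    {
        int congested = 0, loaded = 0, total = 0;
        for (const auto &kv : smoothedLoss_) {
            ++total;
            if (kv.second > lossThreshold_) ++congested;
            else if (kv.second >= 0.02)     ++loaded;    // between 2% and threshold
            // below 2% counts as unloaded
        }
        if (total == 0) return currentBps;

        double target = currentBps;
        if (congested * 10 > total)   target = currentBps * 0.75;  // multiplicative decrease (placeholder factor)
        else if (loaded * 10 > total) target = currentBps;         // hold
        else                          target = currentBps + 64000; // additive increase (placeholder step)

        if (target < minBps_) target = minBps_;
        if (target > maxBps_) target = maxBps_;
        // A second step (lowering the capture frame rate when the codec
        // overshoots the target) is applied by the controller but not shown.
        return target;
    }

private:
    double lossThreshold_, minBps_, maxBps_;
    std::map<std::uint32_t, double> smoothedLoss_;
};
```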

4.5 Host adaptation control

The host adaptation controller uses the interfaces provided by the filters in a graph to initiate adaptation in response to changes in CPU load. This section discusses how the host adaptation controller provides adaptation for a NetMM application that is sending data. Figure 2 shows a typical filter graph for sending a video stream: a video capture filter, a video encoder filter, an RTP SPH filter, and an RTP Render filter. To add adaptation to this filter graph, the host adaptation controller configures the RTP Render filter to deliver its quality messages to the adaptation controller instead of passing them upstream. The proportion field of each delivered quality message specifies by how much the bit rate needs to change. The controller translates these quality messages into the media parameters used to control video capture and the codec: the codec bit rate, the capture frame rate, and the capture resolution. Changing the capture frame rate and resolution has a significant impact on both CPU and network bandwidth consumption, while changing the codec bit rate has only a small effect on CPU consumption. The controller combines these parameters according to a policy when it adapts. For example, one policy we implemented follows a fixed change path: if the codec bit rate is above its minimum, reduce it to the minimum; if the bit rate is already at its minimum and the frame rate is above its minimum, reduce the frame rate; and if the frame rate has also reached its minimum, reduce the frame quality (resolution).
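
A sketch of this fixed change path; the minimum values and step sizes are placeholders, and the calls that push the new values to the capture and codec filters through their control interfaces are only indicated in comments.

```cpp
// Parameters the host adaptation controller manipulates on the sending
// graph of Figure 2. Minimum values and step sizes are placeholders.
struct SendParams {
    int bitrateKbps;      // codec target bit rate
    int frameRate;        // capture frame rate (frames/s)
    int qualityLevel;     // frame quality / resolution step
};

// Apply one step of the fixed change path described above when a
// Flood-type quality message reports CPU or bandwidth overload.
void ReduceOneStep(SendParams &p)
{
    const int kMinBitrateKbps = 64;   // assumed minimum
    const int kMinFrameRate   = 5;    // assumed minimum
    const int kMinQuality     = 0;

    if (p.bitrateKbps > kMinBitrateKbps) {
        p.bitrateKbps = kMinBitrateKbps;          // 1) drop the codec bit rate
    } else if (p.frameRate > kMinFrameRate) {
        p.frameRate -= 5;                         // 2) then reduce the frame rate
        if (p.frameRate < kMinFrameRate) p.frameRate = kMinFrameRate;
    } else if (p.qualityLevel > kMinQuality) {
        p.qualityLevel -= 1;                      // 3) finally reduce frame quality
    }
    // The new values are then pushed to the capture and codec filters
    // through their control interfaces (not shown).
}
```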

Changing the frame rate and the bit rate while the graph is running is easy. Changing the resolution, however, is complicated, because it changes the DirectShow media type that the filters negotiated when they were connected to form the filter graph and that is then considered fixed. Although DirectShow allows media types to change while the graph is running, few filters can actually accommodate such changes, so for those filters other methods must be used to change the media type dynamically. The H.263 and H.261 codec filters belong to this class of filters that cannot accept a change. In addition, the standard video renderer filter does not automatically adjust its display size to a new media type. To handle resolution changes in a network video source, we wrote a video splitter filter that connects to a pair of decoder and video renderer filters and routes media samples according to their format, so that resolution changes in the incoming stream are invisible to the codecs and renderers.

When the media type does change, the application must be notified so that it can display the output in an appropriately sized video rendering window. An application therefore needs to be aware of certain forms of adaptation, in particular those that change the user interface.

5. Receiver-driven adaptation using layered multicast

The receiver-side control we implemented is based on receiver-driven layered multicast (RLM). This type of adaptation requires each layer of the compressed stream to be sent to a different multicast IP address (or to a different unicast address and port). In RLM, layers are added or dropped based on network conditions such as the available bandwidth and the detected loss. Our receiver-side adaptation also adapts to the host CPU load, since receiving layers has a direct impact on system resources such as processor load and bus bandwidth. To support RLM, we had to create new components and modify existing DirectShow components. We used C++ class inheritance to create a set of adaptation controllers that implement network and host adaptation using RLM. In this section we describe this version of the adaptation controller and the other changes to DirectShow required to add RLM receive adaptation to the component-based architecture.

The DirectShow RTP send and receive payload handlers expect a single stream of compressed data to process for playback or network transmission. To support layered compression, we need to split a single stream containing all the layers into multiple independent streams, each containing a separate layer and connected to its own network filters. Conversely, multiple layers must be recombined into a single video stream and delivered to the decoder. To provide this, we added a new type of filter to DirectShow RTP called the layered payload handler (LPH). An LPH is written specifically for each payload type that supports layered compression. There are two kinds of LPH filter: the send LPH (SLPH) filter and the receive LPH (RLPH) filter. The SLPH filter implements the policy that assigns each video frame to the corresponding stream when layered multicast is used. It receives its input from the codec, which generates a single stream containing frames from all layers. The streams produced by the SLPH filter each contain a subset of the video frames and are packetized by RTP SPH filters. No changes to the RTP SPH filters were needed, because each layer contains only legal frames of the payload type required by its stream. The packets are then transmitted on independent multicast streams by RTP Render filters.

Reception of each stream in the layered multicast is handled by RTP Source filters, which support this function without modification. Once received, each stream is delivered through an RTP Demux filter to an RTP RPH filter. Each RPH filter depacketizes its stream into frames of the appropriate media type. These frames are then delivered to the RLPH filter, which is responsible for recombining the frames from each layer into a single stream of the appropriate media type and delivering it to the codec for decoding and rendering.

5.1 Poorman Codec

When we began this work, we did not yet have access to hierarchical codecs such as layered H.263 or Indeo 5.0. We therefore decided to implement a simple SLPH (the Poorman encoder) and RLPH (the Poorman decoder) that allow temporal video scaling using standard codecs. The Poorman encoder distributes video frames across its output pins, and the Poorman decoder merges them back into a single stream. The Poorman encoder assigns frames to each pin using a weighted round-robin interleaving algorithm, as sketched below. This simple algorithm works only if inter-frame dependencies are avoided, which we achieve by forcing the video codec to produce only key frames. Although clearly not an optimal layering scheme, we found it a very effective tool for studying multi-level adaptation, and it proved very useful in the absence of true layered coding. Figures 7 and 8 show the filter graphs that implement RLM with the Poorman coding.
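
A sketch of the weighted round-robin interleaving the Poorman encoder might use; the number of layers and the weights are illustrative, not values taken from the paper.

```cpp
#include <vector>
#include <cstddef>

// Weighted round-robin assignment of key frames to output pins (layers).
// Layer i receives weights[i] consecutive frames per cycle; the exact
// weights used by the Poorman encoder are an assumption of this sketch.
class WeightedRoundRobin
{
public:
    explicit WeightedRoundRobin(std::vector<int> weights)
        : weights_(weights), layer_(0),
          remaining_(weights.empty() ? 0 : weights[0]) {}

    // Returns the index of the output pin (layer) for the next frame.
    std::size_t NextLayer()
    {
        if (remaining_ == 0) {                      // current layer's quota used up
            layer_ = (layer_ + 1) % weights_.size();
            remaining_ = weights_[layer_];
        }
        --remaining_;
        return layer_;
    }

private:
    std::vector<int> weights_;
    std::size_t layer_;
    int remaining_;
};

// Example: three layers with weights {4, 2, 1} -- the base layer carries
// 4 of every 7 frames. Because every frame is a key frame, dropping an
// enhancement layer only lowers the frame rate (temporal scaling) and
// never breaks decoding.
```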

Preserving timestamp information is very important when video frames are distributed across multiple outputs with the weighted round-robin algorithm. This information is needed so that the receiver can recombine the streams into a single video stream in the correct time order. In our implementation, the RTP SPH filters guarantee consistency among the RTP timestamps of the related layers, because they derive each RTP timestamp from the DirectShow timestamp of the frame being packetized. At the receiver, the RPH filters place the RTP timestamps on the DirectShow samples, which allows the frames to be synchronized and ordered before Poorman decoding. Once the frames have been ordered and synchronized, the Poorman decoder must assign a DirectShow timestamp to each frame. This information is needed so that the video renderer can generate quality messages when host or network conditions require adaptation. We assign the timestamps according to the following formulas:

time_offset = first_rtp_time - stream_start_time
sample_start_time = current_rtp_time - time_offset
interval_time = current_rtp_time - previous_rtp_time

first_rtp_time is obtained by taking the smallest RTP timestamp among the first packets to arrive, after they have been ordered. The number of packets that must be received before this ordering is performed (and thus before first_rtp_time is known) is programmable. This mechanism is needed to eliminate the effects that network jitter, packet reordering, and the sending and receiving hosts' operating systems have on the packets.
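
A small sketch of this timestamp reconstruction, converting RTP timestamps into DirectShow sample start times according to the formulas above; the 90 kHz RTP clock rate usually used for video is passed in as a parameter rather than assumed.

```cpp
#include <cstdint>

typedef long long REFERENCE_TIME;   // 100 ns units, as in DirectShow

// Convert an RTP timestamp into a DirectShow sample start time using the
// formulas above. streamStartTime and firstRtpTime are captured once the
// first packets have been received and ordered.
REFERENCE_TIME SampleStartTime(std::uint32_t currentRtpTime,
                               std::uint32_t firstRtpTime,
                               REFERENCE_TIME streamStartTime,
                               unsigned rtpClockRate)
{
    // Elapsed media time since the first ordered packet, in RTP ticks
    // (32-bit subtraction handles timestamp wrap-around).
    std::uint32_t elapsedTicks = currentRtpTime - firstRtpTime;

    // Convert RTP ticks to 100 ns REFERENCE_TIME units.
    REFERENCE_TIME elapsed =
        (REFERENCE_TIME)elapsedTicks * 10000000LL / rtpClockRate;

    // sample_start_time = current_rtp_time - (first_rtp_time - stream_start_time)
    return streamStartTime + elapsed;
}
```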

5.2 H.263 and Indeo layered payload handlers

In the initial design, layered payload handlers were designed for two scalable video codecs, Indeo and H.263. The two codecs have different layering structures, so the functions performed by each LPH differ somewhat. Indeo is band-based: multiple bands appear in each compressed video frame. Each band can be transmitted in a separate stream, or all bands can be transmitted in one stream. A tool called a band splitter is used to separate the bands within a stream. In our implementation the SLPH plays the role of the band splitter: it receives a composite frame from the compressor, splits it into band data, and passes the bands to RTP SPH filters for packetization and transmission as RTP streams. The bands are recombined by the RLPH into complete video frames and supplied to the decoder. Indeo requires a base layer, which can be decoded on its own but must also be received if any other layer is to be processed. The other layers depend on each other sequentially: layer 1 depends on layer 0 (the base layer), layer 2 depends on layer 1, and so on.

The layering structure of H.263 is more complicated than Indeo's: there are three basic types of layers, or forms of scalability [18]. The three types are temporal, spatial, and SNR (signal-to-noise ratio) scalability. Temporal scalability, provided by B (bidirectionally predicted) frames, yields a higher frame rate. Spatial scalability refers to a change in picture size, for example from QCIF (176 × 144) to CIF (352 × 288). SNR scalability adds correction data during compression, improving the picture relative to the uncorrected version. The three kinds of scalability can be combined to produce videos with many different layers; however, each video frame belongs to exactly one layer. Two or more such frames can be combined by the decoder to create a displayable picture. Unlike Indeo, the layers of H.263 are not arranged in a fixed order: there is a base layer, but any enhancement layer may depend on any other layer below it. The layered payload handler is configured with the dependencies of each layer and uses this information to decide which layers to drop or add.

In addition to supporting H.263's complex layer relationships, a specific frame order is required when the layers are recombined. This order is described in detail in the ITU H.263 specification: a B frame must reach the decoder after the frames on which its bidirectional prediction is based. For example, in a run of three frames in which frame 2 is bidirectionally predicted from frames 1 and 3, the order is 1, 3, 2. This is also the order in which the encoder emits the frames, which means that the frames received by the RLPH arrive in a different order from the original sampling or capture order. Because the RTP specification requires RTP timestamps to be based on the sampling instant, the payload handler must re-timestamp the frames when sending, reconstructing an estimated sampling instant from which to generate the RTP timestamp. For the same reason, the RTP timestamp cannot be used to order the frames for delivery to the decoder. Reconstructing the timestamps is not a problem, although it yields only an estimate of the sampling instant. Ordering the frames for the decoder without help from the RTP timestamps, however, is much harder, because the number of B frames between two reference frames can vary. We solve this problem by demultiplexing the B frames separately from the other frames and using the reference frames as the split points of the original coding sequence. Because the expected number of B frames in any "prediction window" varies, we add a configurable buffering timeout: during this period the receiving layered payload handler collects the B frames of that window and then delivers those that have arrived. We can also adapt this timeout dynamically based on the number of B frames that arrive after the timeout.

The adaptation manager uses the LPH filters to dynamically add and drop layers at the sender or receiver. By placing most of the RLM-specific complexity in the SLPH and RLPH filters and reusing the other filters, it was easy to add RLM support to our adaptation engine. Doing the same in a monolithic architecture, rather than in a component-based architecture such as DirectShow, would have added significantly more complexity.

5.3 Implementing RLM in the network adaptation controller

Our network adaptation control component implements support for the RLM adaptation mechanism. To simplify the implementation, the RLM adaptation controller exposes the adaptation policy as a set of parameters that the application can set. The parameters include a loss threshold (the total amount of loss tolerated before leaving a layer) and the TTL of leave/join messages (used to limit the propagation of leave/join messages to the local subnet). In addition to these parameters, all RLM-related timers are configurable. The RLM algorithm bases its decisions to add or remove layers on packet loss. Because this information is currently kept in the RTP RPH filter, we needed a way to report it to the network adaptation controller. To do this, we modified the RTP RPH filter to add a callback interface that delivers packet-loss information to the adaptation controller. The network adaptation controller receives the packet-loss information from each RTP RPH filter and aggregates it across the layers.
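
The sketch below shows one plausible shape for this callback path and for the layer-drop decision. The interface name and the numeric choices are assumptions; only the mechanism (per-layer loss reported to the controller, aggregated across layers, and compared against the application-set loss threshold) comes from the text.

```cpp
#include <vector>

// Hypothetical shape of the callback the modified RTP RPH filter uses to
// report loss to the controller; the real interface name is not given
// in the paper.
struct ILossSink {
    virtual void OnPacketLoss(int layer, double lossFraction) = 0;
    virtual ~ILossSink() {}
};

class RlmNetworkController : public ILossSink
{
public:
    RlmNetworkController(int layers, double lossThreshold)
        : loss_(layers, 0.0), lossThreshold_(lossThreshold),
          subscribedLayers_(1) {}                 // start with the base layer

    void OnPacketLoss(int layer, double lossFraction) override
    {
        loss_[layer] = 0.7 * loss_[layer] + 0.3 * lossFraction;   // smooth

        // Aggregate loss across the layers currently subscribed to.
        double aggregate = 0.0;
        for (int i = 0; i < subscribedLayers_; ++i) aggregate += loss_[i];

        if (aggregate > lossThreshold_ && subscribedLayers_ > 1) {
            --subscribedLayers_;    // drop the highest layer:
            // -> command the RTP Source filter to leave its multicast
            //    group and tell the RLPH not to wait for that layer.
        }
        // Adding a layer back is driven by RLM's join-experiment timers,
        // which are outside the scope of this sketch.
    }

private:
    std::vector<double> loss_;
    double lossThreshold_;
    int subscribedLayers_;
};
```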

Our network adaptation controller can operate in either of two modes. In the first, it compensates for adverse network conditions directly. This may involve commanding an RTP Source filter to leave or join a multicast layer and signaling the RLPH, which carries out the resulting behavior, so that it knows whether it should still wait for data from a particular layer. In the second mode, the adaptation controller is used by the application: the application acts on the information provided by the network adaptation controller (for example, by operating the RTP Source filters and the RLPH itself).

5.4 Receive-side host adaptation using the host adaptation controller

Host adaptation control allows the receiver of a video stream to select the set of layers best suited to the host. The host adaptation controller is instantiated either by the application (in an adaptation-aware application) or by the adaptation proxy filter. The host adaptation controller sets itself as the sink for the DirectShow quality messages sent by the video renderer filter. When a message indicates a flood, the controller uses the RTP Source filter's multicast leave/join interface to drop a layer; when a famine message is received, it adds a layer. These operations are performed on a separate thread to avoid interfering with the data flow.
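
A sketch of the receive-side decision, assuming the quality-sink mechanism of Section 3.2 delivers the renderer's messages to the controller; the resulting join/leave calls on the RTP Source filter (whose interface the paper does not name) are then executed on the worker thread.

```cpp
#include <dshow.h>

enum LayerAction { kNoChange, kDropLayer, kAddLayer };

// Decide how to change the set of subscribed layers based on a quality
// message from the video renderer. The chosen action is carried out on a
// separate worker thread through the RTP Source filter's multicast
// join/leave interface, so the streaming thread is never blocked on
// network operations.
LayerAction OnRendererQuality(const Quality &q, int currentLayers, int maxLayers)
{
    if (q.Type == Flood && currentLayers > 1)           // overloaded: shed load
        return kDropLayer;
    if (q.Type == Famine && currentLayers < maxLayers)  // headroom: add a layer
        return kAddLayer;
    return kNoChange;
}
```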

6. Application experience and testing

To test our software architecture, we created a series of adaptive applications. They include an ActiveX control that displays RTP streams inside an HTML web page and provides functionality similar to VIC and VAT. The properties of the ActiveX control can be set from the HTML page in a variety of scripting languages.

Using DirectShow RTP and software encoding/decoding, we can receive and decode three simultaneous H.263 CIF video streams at 20 frames per second plus an audio stream, with a total bandwidth above 3 Mbit/s, on a host with a P2-266 processor. Using a specially written DirectShow video rendering filter that allows full-screen display, we can present full-screen, high-quality Indeo video with audio (comparable to 5 Mbit/s MPEG-2 SDTV quality) at 50% CPU usage.

We used these applications on the test platform shown in Figure 11 to test various scenarios. The modems allow us to test the ASRC and RLM functions across a wide range of available host bandwidths. In addition to using modems to create a wide range of client bandwidths, we wrote components that simulate network loss. The behavior of these components is fully configurable through parameters, so we can control the distribution of the loss. Finally, the processing power of the hosts in our test environment varies widely, from a P90 to a P2-300.

To demonstrate host adaptation in a multi-stream application, we created a web-page video conferencing application that uses host adaptation. The video streams in the conference use layered video, which allows the receiver to adapt. Participants in the conference can hear and see each other and watch a movie clip in a separate video window. The movie plays at a frame rate of 20 fps with 44.1 kHz Indeo audio. As the number of participants increases, the quality of each video and audio stream declines: the video becomes jerky and the audio becomes noisy. When the received streams and the locally generated stream adapt, the received movie drops to a very low video layer (2 fps) and the sent stream drops to 5 fps. After adaptation, the quality of the audio and the remaining video returns to an acceptable level. This scenario involves explicit user interaction; in the next section we focus on how to minimize such user input.

7. Future work

We plan to extend our adaptation architecture to include support for multi-resource and cross-application adaptation. We also intend to improve our design to allow user-driven policies and priority settings. Extending adaptation to the system as a whole, rather than reacting to each adverse condition in isolation, goes beyond the current implementation. In such a system-level scheme, information that is unavailable to an individual adaptation controller can be taken into account, including the user's preferences. We believe that combining system-level adaptation with user input will allow us to reach a near-optimal level of adaptation.

7.1 Adaptation across multiple resources

The mechanisms for monitoring performance, the algorithms, and the controls used for adaptation are clearly specific to each resource. We believe, however, that a common interface for interacting with resource-specific adaptation controllers can be defined. Given a standard set of interfaces, an application can instantiate and interact with a particular adaptation controller without needing deep knowledge of the resource being managed. To allow applications to adapt across multiple resources, we plan to extend our system with a hierarchical organization of adaptation controllers. In this arrangement the application interacts with a single adaptation controller, which in turn manages other adaptation controllers on the application's behalf. For example, to adapt to both host and network resources, an application can instantiate a composite adaptation controller that uses a host adaptation controller and a network adaptation controller. An adaptation controller that is a leaf of this tree does not act on its own; instead it consults its parent about whether conditions permit an adaptation. The root of the tree takes action based on the current state of all its children. We believe this hierarchy will provide a way to assess the current quality of the presentation and to manipulate a wide range of resources so as to significantly improve the quality experienced by the user.

7.2 User-specified adaptation policies

An important area for further work is letting users set parameters such as the relative importance and priority of streams. Users should also be able to set the adaptation policy, perhaps guided by hints from the application and the adaptation controller. Allowing the user to indicate which streams matter most is critical to achieving the best possible user experience, because such information (for example, that a user gives the sports channel or the financial channel higher priority) is difficult to obtain by any other means. The user's preferences also matter for how the adaptation is performed: one user may prefer a high frame rate, while another prefers high picture quality. Finally, giving the user hints from the adaptation controller about which forms of adaptation are least disruptive would help the user define useful adaptation policies.

To explore these issues, we plan to create two entities: a policy tool and a policy controller. The policy tool provides a graphical user interface that lets users select applications and specify policies. These policies can apply to a single application or to the whole system. We envision a policy tool that allows the specified policies to be prioritized across applications and modified at run time. Finally, the policy tool will contain a set of default policies for users who provide no input.

Once a set of policies has been defined (and whenever a policy is changed dynamically), the policy tool loads it into the system policy controller. The policy controller is responsible for interacting with each adaptation controller and each application to enforce the policies specified by the policy tool. The policy controller also monitors the status of each adaptation controller and sends feedback to the policy tool, which may interact with the user to update the policy specification.

8. Conclusions

Building on DirectShow, we developed a framework for RTP streaming called DirectShow RTP. By taking full advantage of DirectShow's flexibility, and by designing DirectShow RTP with the same inherent extensibility, we implemented several network adaptation methods and explored new directions in host adaptation. Extending the streaming functions to adapt automatically to network and host conditions maximizes the quality of the multimedia presentation while still using standard codecs and protocols. The component-based DirectShow RTP framework made the adaptation work easier, because the framework improves the reuse and maintainability of previously created components.

The adaptation extensions to the DirectShow RTP framework are built on the quality-monitoring mechanism that already existed in the system. This makes it easy to add the adaptation provided by this framework to NetMM applications without changing them. Because DirectShow RTP is built on a component framework, the NetMM applications that use it see dramatic yet straightforward improvements. This demonstrates both the value of adaptability and the value of a component framework that makes it simple and fast to exploit.

