Developing your own streaming client v0.01 on the basis of LiveMedia



Guitang East

xiaoguizi@gmail.com

2004-10

2004-12

Friendly note: this document is intended for brothers and sisters who are new to streaming media or to network transport protocols (especially RTSP/RTP/SDP). It is still at a very early stage and will be improved in the near future; I hope everyone will give comments and suggestions.

If reproduced, please indicate the source. Copyright © 2004

First, background

Second, LiveMedia framework introduction

1. Overall framework

2. Client framework

2.1 Client openRTSP process

2.2 Adding a new media type

2.2.1 Adding the media's format

2.2.2 Issues to consider for new media

2.3 Detailed description

2.3.1 BasicUsageEnvironment, UsageEnvironment

2.3.2 GroupSock

2.3.3 LiveMedia

Third, some summary

A. Buffer management

B. How to control the receive loop

C. Pause & seek

D. Releasing resources

First, background

Nowadays streaming media is everywhere. The mainstream streaming servers are RealNetworks' server, Windows Media Server, and Apple Darwin Server, and the client programs handle everything from session establishment through reception and decoding to playback. How can you use open source code to implement your own streaming client, and can it support new media formats? That is the focus of this article.

Our company took on a project that required implementing the RTSP/RTP protocols, parsing RTSP/RTP and the RTP packets in accordance with the 3GPP standard, then decoding and playing. Because time was tight, we considered doing secondary development on a relatively stable and comprehensive open source code base, mainly adding support for new media types, bug fixes, and architecture adjustments.

Second, the LiveMedia framework introduction

Detailed help documentation: www.live.com/livemedia

1. Overall framework

There is help documentation generated by Doxygen on Live's website, so there is no need to repeat it here. One reminder though: Live's library code can be used for servers and clients at the same time, so if you are developing only a client program, or need to separate the server and client programs, it is best to peel apart the code first. Here you can refer to Live's reference examples openRTSP and testOnDemandRTSPServer.

2. Client framework

2.1 Client openRTSP process

The openRTSP flow is given here; it ends with the sequence of operations in the packet loop, and finally summarizes the process a client needs to establish.

Ø "||" Present That this item is depend on the Input Execute Parameter

1. Socket environment initialization.

2. Parse the input parameters.

3. Create the client: construct the RTSPClient class (which constructs the Medium base class and some variables).

3.1 For the Medium class:

3.1.1 // First generate a name for the new medium, and put it into the result buffer

3.1.2 // Then add it to our table (a hash table storing the medium sessions; it should have a maximum size, in other words the client should handle a limited number of medium sessions)

3.2 Initialize the RTSPClient variables and construct the RTSP "User-Agent:" header

4. Send an RTSP "OPTIONS" request and get the options from the server.

4.1 Create the socket connection

4.2 Send the OPTIONS string

4.3 Get the response from the server (check that the response code is 200, and read the supported public methods from the OPTIONS reply)

5. Get the SDP description from the server's URL (return value: the SDP string)

5.1 Create the socket connection

5.2 Check whether the URL carries a username and password

5.3 Send the DESCRIBE string

5.3.1 Construct authentication

5.3.2 Construct the DESCRIBE string and send it

5.4 Get the response from the server

5.4.1 Check whether the response code is one we can handle

5.4.2 Find the SDP descriptor and do some validity checks

6. Create the media session from the SDP descriptor obtained above.

6.1 session = MediaSession::createNew(...)

6.1.1 For the Medium class (this is a different Medium instance from the one in RTSPClient):

(1.1) // First generate a name for the new medium, and put it into the result buffer

(1.2) // Then add it to our table (a hash table storing the medium sessions; it should have a maximum size, in other words the client should handle a limited number of medium sessions)

(2) Initialize some variables, such as the subsessions (each "m=" line represents a new subsession) and the CNAME, etc.

6.1.2 Initialize the MediaSession with the SDP info:

(1) Parse the SDP string, assigning each key's value to the related variable.

(2) Get the "m=" lines (if present) and create the subsessions:

decide whether to use raw UDP or RTP; the medium name; the protocol; the payload format, etc.

6.2 Initialize each MediaSubsession item (using the session and the "m=" subsession)

6.2.1 Check the subsession's properties and set some variables.

6.2.2 For receivers [receive data but do not 'play' the stream(s)]:

(1) subsession->initiate()

(1.1) Create RTP and RTCP 'groupsocks' on which to receive incoming data.

(1.2) According to the protocol name, create a raw-UDP or 'RTP' special source

(1.3) Create the RTCPInstance

(1.3.1) // Arrange to handle incoming reports from others:

(1.3.2) // fRTCPInterface.startNetworkReading(handler);

(1.3.3) // Send our first report, composed of an RR and an SDES (CNAME), to the server

(2) Set the threshold time for reordering incoming packets and restoring their order. Maybe set the receive buffer size (if we set it in the input parameters).

6.2.3 For players (not recording the stream; instead, 'play' the stream(s)):

Do nothing here; wait for the following actions.

6.3 setupStreams (RTSP "SETUP")

Perform additional 'SETUP' on each subsession before playing it:

For each subsession, call rtspClient->setupMediaSubsession(...)

6.3.1 // First, construct an authenticator string:

6.3.2 // When sending more than one "SETUP" request, include a "Session:" header in the 2nd and later "SETUP"s.

6.3.3 // Construct a standard "Transport:" header. [See note (1) at the end of this section]

6.3.4 Send the request string and get the response:

(1) Check validity (such as the response code)

(2) // Look for a "Session:" header (to set the session ID), and a "Transport:" header (to set the server address/port)

(3) If the subsession receives RTP (and sends/receives RTCP) over the RTSP stream itself, then change the socket connection accordingly

7. Create output files: only for the receiver (store the stream but do not play it)

For different file formats, use different *FileSink classes. This uses the QuickTime file format as a demo, outputting to ::stdout.

7.1 qtOut = QuickTimeFileSink::createNew(...)

7.1.1 Construct the Medium class again; see above for details.

7.1.2 Some variables get their initial values

7.1.3 // Set up I/O state for each input subsession:

(1) // Ignore subsessions without a data source:

(2) // If the subsession's SDP description specified screen dimension or frame rate parameters, then use these.

(3) Maybe create a hint track (if the input parameters ask for it)

(4) // Also set a 'BYE' handler for this subsession's RTCP instance:

(5) // Use the current time as the file's creation and modification time. Use Apple's time format: seconds since January 1, 1904

7.1.4 startPlaying (details in 7.2)

|| 7.2 Ordinary file output

7.2.1 fileSink = FileSink::createNew(...)

(1) First use MediaSink (which uses the Medium class constructor again, see above)

(2) Some variables get their initial values.

7.2.2 fileSink->startPlaying (actually using the parent function MediaSink::startPlaying)

(1) Checks, such as // Make sure we're not already being played, and that our source is compatible:

(2) continuePlaying()

(2.1) FramedSource::getNextFrame (the source type was assigned in startPlaying... as a FramedSource)

Check and assign some callback functions: // Make sure we're not already being read:

"different media source"->doGetNextFrame() // a virtual function, e.g. MP3FromADUSource's

In this function: // Before returning a frame, we must enqueue at least one ADU:

or // Return a frame now:

8. startPlayingStreams

// Finally, start playing each subsession, to start the data flow:

8.1 rtspClient->playMediaSession(...)

8.1.1 Check validity

// First, make sure that we have an RTSP session in progress

8.1.2 Send the PLAY command:

(1) // First, construct an authenticator string:

(2) // ...and then a "Range:" string:

(3) Construct the "PLAY" string

(4) Send it to the server

(5) Get the response, and check the response code / CSeq / ...

8.2 // Figure out how long to delay (if at all) before shutting down, or repeating the playing

|| 8.3 checkForPacketArrival // See if any packets are coming in on the subsessions.

|| 8.4 checkInterPacketGaps // Check each subsession, counting up how many packets have been received:

9. env->taskScheduler().doEventLoop()

The main loop: get the data from the server, parse it, and store or play it directly.

9.1 BasicTaskScheduler0::doEventLoop,

which loops using SingleStep

9.2 BasicTaskScheduler::SingleStep

sees whether any readable socket is in fReadSet (which stores the socket descriptors of the subsessions), and if so handles it:

(1) fDelayQueue.handleAlarm();

(2) (*handler->handlerProc)(handler->clientData, SOCKET_READABLE); loop over and handle the subsession tasks.

[here the handler is MultiFramedRTPSource::networkReadHandler]

(3) MultiFramedRTPSource::networkReadHandler

// Get a free BufferedPacket descriptor to hold the new network packet:
BufferedPacket* bPacket
  = source->fReorderingBuffer->getFreePacket(source);
// Read the network packet, and perform sanity checks on the RTP header:
if (!bPacket->fillInData(...)) // the incoming packet does not belong to the current session
// Handle the RTP header part
// The rest of the packet is the usable data. Record and save it (into the reordering buffer)
Boolean usableInJitterCalculation // RTCP jitter calculation
  = source->packetIsUsableInJitterCalculation(bPacket->data(), bPacket->dataSize());
source->receptionStatsDB() // note that we have received an RTP packet
  .noteIncomingPacket(rtpSSRC, rtpSeqNo, rtpTimestamp,
                      source->timestampFrequency(),
                      usableInJitterCalculation, presentationTime,
                      hasBeenSyncedUsingRTCP, bPacket->dataSize());
// Fill in the rest of the packet descriptor, and store it:
bPacket->assignMiscParams(rtpSeqNo, rtpTimestamp, presentationTime,
                          hasBeenSyncedUsingRTCP, rtpMarkerBit,
                          timeNow);
// Store the packet.
source->fReorderingBuffer->storePacket(bPacket);

Then

source->doGetNextFrame1(); // if we didn't get proper data this time, we'll get another chance

9.3 MultiFramedRTPSource::doGetNextFrame1()

This goes to MultiFramedRTPSource or some other inheriting class.

(1) // If we already have packet data available, then deliver it now.

BufferedPacket* nextPacket
  = fReorderingBuffer->getNextCompletedPacket(packetLossPrecededThis);

(2) // Before using the packet, check whether it has a special header
// that needs to be processed:

if (!processSpecialHeader(nextPacket, specialHeaderSize))

This is what the particular inheriting class does, for its own packet format...

(3) Handle the packet data; different RTP packet formats have different structures, so ***

(4) // The packet is usable. Deliver all or part of it to our caller:

nextPacket->use(fTo, fMaxSize, frameSize, fNumTruncatedBytes,
                fCurPacketRTPSeqNum, fCurPacketRTPTimestamp,
                fPresentationTime, fCurPacketHasBeenSynchronizedUsingRTCP,
                fCurPacketMarkerBit);

-------- unsigned frameSize = nextEnclosedFrameSize(newFramePtr, fTail - fHead);

(5) If we have all the data that the client wants:

// Call our own 'after getting' function. Because we're preceded
// by a network read, we can call this directly, without risking
// infinite recursion.
afterGetting(this);

------------ void FramedSource::afterGetting(FramedSource* source)

--------- void FileSink::afterGettingFrame

void FileSink::afterGettingFrame1

a. addData(fBuffer, frameSize, presentationTime)

b. continuePlaying(); // Then try getting the next frame:

9.4 Boolean FileSink::continuePlaying()

fSource->getNextFrame -------- FramedSource::getNextFrame -------- MultiFramedRTPSource::doGetNextFrame

9.5 void MultiFramedRTPSource::doGetNextFrame()

(1) TaskScheduler::BackgroundHandlerProc* handler
  = (TaskScheduler::BackgroundHandlerProc*)&networkReadHandler;
fRTPInterface.startNetworkReading(handler);
doGetNextFrame1(); [back to section 9.3]

Note:

(1) For RealNetworks streams, use a special "Transport:" header, and also add a 'Challenge response'.

(2) The detailed relationships between the modules are not listed here, because that would take some time.

(3) When we reach the end time obtained from the SDP lines, or the server sends TEARDOWN info, the client will stop.

In the start function "startPlayingStreams" it adds the "sessionTimerHandler" to the schedule.

From the above, we can see that to create a traditional streaming client using the Live code, we need to establish the following process.
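To make this concrete, here is a minimal sketch of the same flow using the old synchronous live555 client API of that era; the URL is a placeholder, error handling is omitted, and exact signatures may differ between library versions.

#include "liveMedia.hh"
#include "BasicUsageEnvironment.hh"

int main() {
  // Steps 1-3: environment, scheduler, and RTSP client
  TaskScheduler* scheduler = BasicTaskScheduler::createNew();
  UsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);
  RTSPClient* client = RTSPClient::createNew(*env, 0 /*verbosity*/, "myClient");

  // Step 5: DESCRIBE, returning the SDP description
  char* sdp = client->describeURL("rtsp://example.com/stream"); // placeholder URL
  if (sdp == NULL) return 1;

  // Step 6: build the MediaSession (and its subsessions) from the SDP
  MediaSession* session = MediaSession::createNew(*env, sdp);
  delete[] sdp;

  // Steps 6.2/6.3/7: initiate, SETUP, and attach a sink to each subsession
  MediaSubsessionIterator iter(*session);
  MediaSubsession* sub;
  while ((sub = iter.next()) != NULL) {
    if (!sub->initiate()) continue;        // creates RTP/RTCP groupsocks + source
    client->setupMediaSubsession(*sub);    // RTSP "SETUP"
    sub->sink = FileSink::createNew(*env, sub->mediumName()); // file named after medium
    sub->sink->startPlaying(*(sub->readSource()), NULL, NULL);
  }

  // Steps 8-9: RTSP "PLAY", then the event loop
  client->playMediaSession(*session);
  env->taskScheduler().doEventLoop();
  return 0;
}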

2.2 Adding a new media type

Generally, a multi-frame-based digital medium can implement its own media class by inheriting MultiFramedRTPSource, while also inheriting BufferedPacket to implement buffer management; you can then work according to the new medium's RTP payload format. This is how we implemented a new media type; a detailed description follows below.
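For instance, a skeleton of such a subclass might look like the following; the class name and header layout are hypothetical, and the key override is processSpecialHeader(), which consumes the payload-format header in front of the MAU data (the constructor arguments follow MultiFramedRTPSource's, assuming its packet-factory parameter defaults to NULL):

#include "MultiFramedRTPSource.hh"

// Hypothetical new media source, mirroring how live555's concrete
// *RTPSource classes subclass MultiFramedRTPSource.
class MyMediaRTPSource : public MultiFramedRTPSource {
public:
  static MyMediaRTPSource* createNew(UsageEnvironment& env, Groupsock* RTPgs,
                                     unsigned char rtpPayloadFormat,
                                     unsigned rtpTimestampFrequency) {
    return new MyMediaRTPSource(env, RTPgs, rtpPayloadFormat, rtpTimestampFrequency);
  }

protected:
  MyMediaRTPSource(UsageEnvironment& env, Groupsock* RTPgs,
                   unsigned char rtpPayloadFormat, unsigned rtpTimestampFrequency)
    : MultiFramedRTPSource(env, RTPgs, rtpPayloadFormat, rtpTimestampFrequency) {}

  // Parse the payload-format-specific header that precedes the MAU data:
  virtual Boolean processSpecialHeader(BufferedPacket* packet,
                                       unsigned& resultSpecialHeaderSize) {
    unsigned char const* headerStart = packet->data();
    unsigned packetSize = packet->dataSize();
    if (packetSize < 1) return False;  // sanity check
    // ... parse the section bit fields here, and set fCurrentPacketBeginsFrame /
    // fCurrentPacketCompletesFrame from the fragmentation flags ...
    resultSpecialHeaderSize = 1;       // placeholder size
    return True;
  }
};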

2.2.1 Adding the media's format

The new media type is also based on a frame format, where each frame is called a MAU (Media Access Unit), and MAUs are packed into RTP packets in a similar way. As shown in the figure, the RTP payload format header is divided into three sections. Each section starts with a one-byte bit field and is followed by one or more optional fields. In some cases, up to two entire sections may be omitted from the header...

All RTP payload format fields should be transmitted in network byte order, which means that the most significant byte of each field is transmitted first.

The RTP Payload Format header is followed by a payload. The payload can consist of a complete MAU or a MAU fragment. The payload can contain a partial MAU, allowing large MAUs to be fragmented across multiple payloads in multiple RTP packets.
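For example, multi-byte header fields in network byte order can be read with helpers like these (a generic sketch, not tied to any particular field of this format):

#include <stdint.h>

// Network byte order: the most significant byte arrives first.
static inline uint16_t readUint16BE(const unsigned char* p) {
  return (uint16_t)((p[0] << 8) | p[1]);
}

static inline uint32_t readUint32BE(const unsigned char* p) {
  return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16)
       | ((uint32_t)p[2] << 8) | (uint32_t)p[3];
}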


The possible combinations of MAUs in each packet are the following:

2.2.2 Issues to consider for new media

A. As can be seen from the above, each RTP packet of the new media may contain one or more MAUs, or a MAU fragment, and after parsing each RTP packet every complete MAU (its data, size, and PT: presentation time, DTS, etc.) must be passed to the decoder. The multimedia formats that the Live code supports, however, basically put a single frame in one packet, or several frames in one packet with all additional information concentrated at the front of the packet, i.e. in the standard RTP header :-). Therefore, after an RTP packet is received and its RTP header is processed (MultiFramedRTPSource::networkReadHandler(...)), the packet is thrown into the reordering buffer; when a packet is taken out, its special header needs special processing, and all the MAU or MAU-fragment header information inside the packet is processed together in MultiFramedRTPSource::doGetNextFrame1():

void MultiFramedRTPSource::doGetNextFrame1()
{
  while (fNeedDelivery) // sure, see above
  {
    // If we already have packet data available, then deliver it now.
    Boolean packetLossPrecededThis;
    BufferedPacket* nextPacket // get the head packet, maybe the one we just handled
      = fReorderingBuffer->getNextCompletedPacket(packetLossPrecededThis);
    if (nextPacket == NULL)
      break;

    fNeedDelivery = False;

    if (nextPacket->useCount() == 0)
    {
      // Before using the packet, check whether it has a special header
      // that needs to be processed:
      unsigned specialHeaderSize;
      if (!processSpecialHeader(nextPacket, specialHeaderSize)) {
        // Something's wrong with the header; reject the packet:
        fReorderingBuffer->releaseUsedPacket(nextPacket);
        fNeedDelivery = True;
        break;
      }
      nextPacket->skip(specialHeaderSize);
    }

    // Check whether we're part of a multi-packet frame, and whether
    // there was packet loss that would render this packet unusable:
    if (fCurrentPacketBeginsFrame) // processSpecialHeader() changes this
    {
      unsigned pt_tem = 0; // alexis
      framePresentationTime(pt_tem);
      nextPacket->setPresentTime(pt_tem); // alexis 04-11-10

      if (packetLossPrecededThis || fPacketLossInFragmentedFrame) // packet loss, and the former frame has an unhandled fragment
      {
        // We didn't get all of the previous frame.
        // Forget any data that we used from it:
        fTo = fSavedTo;
        fMaxSize = fSavedMaxSize;
        fFrameSize = 0;
      }
      fPacketLossInFragmentedFrame = False; // this packet begins a frame, so...
    }
    else if (packetLossPrecededThis)
    {
      // We're in a multi-packet frame, with preceding packet loss
      fPacketLossInFragmentedFrame = True;
    }

    if (fPacketLossInFragmentedFrame)
    {
      // --- alexis 10-28
      unsigned mauFragLength;
      doLossFrontPacket(mauFragLength); // get the length from the current MAU fragment to the next MAU start
      if (mauFragLength != 0)
      {
        nextPacket->skip(mauFragLength);
        fNeedDelivery = True;
        break;
      }
      else // the original part...
      {
        // Normal case: this packet is unusable; reject it:
        fReorderingBuffer->releaseUsedPacket(nextPacket);
        fNeedDelivery = True;
        break;
      }
    }

    // The packet is usable. Deliver all or part of it to our caller:
    unsigned frameSize;
    nextPacket->use(fTo, fMaxSize, frameSize, fNumTruncatedBytes,
                    fCurPacketRTPSeqNum, fCurPacketRTPTimestamp,
                    fPresentationTime, fCurPacketHasBeenSynchronizedUsingRTCP,
                    fCurPacketMarkerBit);
    fFrameSize += frameSize;

    if (!nextPacket->hasUsableData()) {
      // We're completely done with this packet now
      fReorderingBuffer->releaseUsedPacket(nextPacket);
    }

    if (fCurrentPacketCompletesFrame || fNumTruncatedBytes > 0)
    {
      // We have all the data that the client wants.
      if (fNumTruncatedBytes > 0) {
        envir() << "MultiFramedRTPSource::doGetNextFrame1(): The total received frame size exceeds the client's buffer size ("
                << fSavedMaxSize << "). "
                << fNumTruncatedBytes << " bytes of trailing data will be dropped!\n";
      }
      // Call our own 'after getting' function. Because we're preceded
      // by a network read, we can call this directly, without risking
      // infinite recursion.
      afterGetting(this);
      // It will store the whole frame into the output buffer (here, the file)
    }
    else
    {
      // This packet contained fragmented data, and does not complete
      // the data that the client wants. Keep getting data:
      fTo += frameSize;
      fMaxSize -= frameSize;
      fNeedDelivery = True;
    }
  }
}

B. In addition, since each MAU has its own time information, the way Live's code takes the time from the standard RTP timestamp no longer applies; the time must be set for each MAU itself when a single MAU is delivered. That is what nextPacket->setPresentTime(pt_tem); // alexis 04-11-10 means above.

C.

2.3 Detailed description

Live's library is divided into four parts: BasicUsageEnvironment, UsageEnvironment, GroupSock, and LiveMedia. BasicUsageEnvironment and UsageEnvironment are responsible for task scheduling and configuration, GroupSock is responsible for creating the sockets and sending/receiving the corresponding information (control information and data information), and LiveMedia is the core of the entire project, responsible for the RTSP (client, server), session (subsession), RTCPInstance, *Source, and *Sink operations. Each is described in detail below.

2.3.1 BasicUsageEnvironment, UsageEnvironment

UsageEnvironment:

HashTable (http://www.live.com/livemedia/doxygen/html/classhashtable.html)

Builds and maintains a hash chain table.

Defines an inner class Iterator // used to iterate through the members of the table:

TaskScheduler

// Schedules a task to occur (after a delay) when we next reach a scheduling point.

scheduleDelayedTask(...) unscheduleDelayedTask(...)

// For handling socket reads in the background:

BackgroundHandlerProc(...)

turnOnBackgroundReadHandling(int socketNum) turnOffBackgroundReadHandling(int socketNum)

doEventLoop(char* watchVariable = NULL) = 0;

// Stops the current thread of control from proceeding, but allows delayed tasks (and/or background I/O handling) to proceed.

strDup

Copies a character string.

UsageEnvironment (http://www.live.com/livemedia/doxygen/html/classusageenvironment.html)

// An abstract base class, subclassed for each use of the library

BasicUsageEnvironment:

BasicHashTable (http://www.live.com/livemedia/doxygen/html/classbasichashtable.html)

// A simple hash table implementation, inspired by the hash table

// implementation used in Tcl 7.6: class BasicHashTable: public HashTable

Inner class Iterator: public HashTable::Iterator

BasicUsageEnvironment0

// An abstract base class, useful for subclassing (e.g., to redefine the implementation of "operator<<")

class BasicUsageEnvironment0: public UsageEnvironment

Defines the variables unsigned fCurBufferSize; unsigned fBufferMaxSize;

// An abstract base class, useful for subclassing (e.g., to redefine the implementation of socket event handling)

class BasicTaskScheduler0: public TaskScheduler

BasicUsageEnvironment

class BasicUsageEnvironment: public BasicUsageEnvironment0

Constructed via static BasicUsageEnvironment* createNew(TaskScheduler& taskScheduler);

class BasicTaskScheduler: public BasicTaskScheduler0

Defines int fMaxNumSockets; fd_set fReadSet;
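As a small illustration of how these classes fit together, here is a sketch that schedules a periodic task; the one-second period and the message are placeholders:

#include "BasicUsageEnvironment.hh"

static void printTick(void* clientData) {
  UsageEnvironment* env = (UsageEnvironment*)clientData;
  *env << "tick\n";
  // Re-arm ourselves to run again one second (1,000,000 usec) later:
  env->taskScheduler().scheduleDelayedTask(1000000, (TaskFunc*)printTick, env);
}

int main() {
  TaskScheduler* scheduler = BasicTaskScheduler::createNew();
  UsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);
  env->taskScheduler().scheduleDelayedTask(1000000, (TaskFunc*)printTick, env);
  env->taskScheduler().doEventLoop(); // runs delayed tasks and socket handlers
  return 0;
}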

2.3.2 Groupsock

// "MTunnel" MultiCast Access Service

Netcommon

For different platforms, including different SOCK header files Win32 Wince VxWorks Unix Solaris

NetAddress // Definition of a type representing a low-level network address.

// At present, this is 32 bits, for IPv4. Later, generalize it,

// to allow for IPv6. class NetAddress

class NetAddressList

class Port

class AddressPortLookupTable // A generic table for looking up objects by (address1, address2, port)

NetAddressList is used when MediaSubsession::initiate() parses the SDP information returned by the server, taking the IP address information behind the "c=" line.

NetInterface (http://www.live.com/livemedia/doxygen/html/classnetInterface.html)

class NetInterface

class DirectedNetInterface: public NetInterface, responsible for writing

class DirectedNetInterfaceSet, responsible for managing DirectedNetInterface instances

class Socket: public NetInterface, responsible for reading

class SocketLookupTable, responsible for finding a Socket by its port

class NetInterfaceTrafficStats, responsible for counting traffic (how many packets and bytes)

Inet.c defines initializeWinsockIfNecessary and the random number generation functions; it is called by GroupsockHelper

GroupsockHelper defines creating UDP datagram or TCP stream socket connections (and bind()ing the port), reading and writing sockets, setting socket parameters, and support for SSM (source-specific multicast)

TunnelEncaps // Encapsulation trailer for tunnels: class TunnelEncapsulationTrailer

IoHandlers // Not used in the current version? // Handles incoming data on sockets:

GroupEId: endpoint ID, used by Groupsock

Groupsock (http://www.live.com/livemedia/doxygen/html/classgroupsock.html)

// An "OutputSocket" is (by default) used only to send packets.

// No packets are received on it (unless a subclass arranges this)

class OutputSocket: public Socket

class DestRecord

class Groupsock: public OutputSocket

// A "Groupsock" is used to both send and receive packets.

// As the name suggests, it was originally designed to send/receive

// multicast, but it can send/receive unicast as well.

It includes adding and removing destination addresses, and functions for distinguishing (multicast/unicast), TTL, etc.

class GroupsockLookupTable

// A data structure for looking up a 'Groupsock'

// by (multicast address, port), or by socket number
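For illustration, a client-side RTP/RTCP groupsock pair (similar to what MediaSubsession::initiate() creates) might be set up as in this sketch; the port numbers are placeholders, and address 0 means plain unicast reception:

#include "Groupsock.hh"
#include "BasicUsageEnvironment.hh"

void createRtpRtcpSocks(UsageEnvironment& env) {
  struct in_addr destAddr;
  destAddr.s_addr = 0;        // our own address: plain unicast reception

  const Port rtpPort(6970);   // even port for RTP (placeholder)
  const Port rtcpPort(6971);  // the next (odd) port, RTP port + 1, for RTCP

  unsigned char ttl = 255;
  Groupsock rtpGroupsock(env, destAddr, rtpPort, ttl);
  Groupsock rtcpGroupsock(env, destAddr, rtcpPort, ttl);
  // These would then be handed to the RTPSource and RTCPInstance.
}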

2.3.3 LiveMedia

This part is the core of Live.com's code. It implements establishing and controlling RTSP sessions and RTP transmission, as well as packing and parsing all kinds of RTP payloads. I divide it into two sections: the first introduces how the basic environment is established, and the second focuses on the RTP payload formats.

2.3.3.1 Establishing the minimal environment

1. our_md5 / our_md5hl: if you don't need to do authentication for the URL, this part can be ignored.

2. Overview

A. Medium

// The base class of LiveMedia; defines some pure virtual functions

class Medium

All the classes that follow inherit from Medium or its derived classes.

For each medium (subsession?), a mediumName is created, while multiple Medium instances share one UsageEnvironment class variable (which internally defines the character output operators and the task scheduling functions).

B. RTSPClient (http://www.live.com/livemedia/doxygen/html/classrtspclient.html)

For RTSPClient, Medium is a relay station for using the UsageEnvironment variable; or rather, Medium is the relay station for all the derived classes that follow ^_^. RTSPClient has the following functions:

describeURL() // the first function the client uses, to determine whether the given URL is valid

Sends the OPTIONS, ANNOUNCE, and DESCRIBE requests to the host

setupMediaSubsession

playMediaSession

playMediaSubsession

pauseMediaSubsession

pauseMediaSession

recordMediaSubsession

setMediaSessionParameter

teardownMediaSubsession

teardownMediaSession

These are the public functions, while RTSPClient also has some internal functions for the communication and information exchange with the server:

getResponse

parseResponseCode parseTransportResponse parseRTPInfoHeader parseScaleHeader

For the SDP information sent by the server during RTSP session establishment, Live's code puts the handling in the MediaSession class, which matches the actual situation, because the establishment of sessions and subsessions depends on the SDP information.

The purpose of the functions above is to send the several requests specified by RTSP to the destination URL and get the different responses; the most important is DESCRIBE, whose response is the SDP information given by the server.

BTW: the connection to the server in RTSPClient is based on TCP, so...

C. RTCPInstance

The RTCP instance is actually attached to the RTP connection. Its port is the corresponding RTP socket's port + 1, and the arrival of RTP packets causes changes to the internal (statistics) data in the RTCP instance.
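A sketch of creating the RTCPInstance for a receiving subsession, again similar to what MediaSubsession::initiate() does; the bandwidth estimate and CNAME are placeholders:

#include "liveMedia.hh"

RTCPInstance* createClientRTCP(UsageEnvironment& env,
                               Groupsock& rtcpGroupsock,
                               RTPSource* rtpSource) {
  unsigned totSessionBW = 500; // estimated session bandwidth in kbps (placeholder)
  const unsigned char* cname = (const unsigned char*)"myhost"; // placeholder CNAME
  // We pass NULL for the RTPSink because we only receive; the instance
  // starts handling incoming reports and sends our first RR + SDES(CNAME):
  return RTCPInstance::createNew(env, &rtcpGroupsock, totSessionBW,
                                 cname, NULL, rtpSource);
}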

D. MediaSession (http://www.live.com/livemedia/doxygen/html/classmediasession.html)

It should be noted that MediaSubsession is the basic component; the MediaSession class only provides unified management of the subsessions below it, using an Iterator.

D.1 MediaSession

Operations made on the whole session, for example getting the session's start time and end time, its scale, and seeking, etc.

It also initializes the entire MediaSession, using the SDP information from RTSPClient (which inevitably requires parsing the various pieces of information contained in the SDP).

D.2 MediaSubsession

It controls each independent medium.

It parses in detail the SDP information under each "m=" line, which gives the specific information of a single medium; it is associated with the Groupsock, *Source, and *Sink, which then receive and sink the RTP packet data, respectively.
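A sketch of building a MediaSession from the SDP string returned by DESCRIBE and walking its subsessions with the iterator:

#include "liveMedia.hh"

void listSubsessions(UsageEnvironment& env, char const* sdpDescription) {
  MediaSession* session = MediaSession::createNew(env, sdpDescription);
  if (session == NULL) return;

  MediaSubsessionIterator iter(*session);
  MediaSubsession* subsession;
  while ((subsession = iter.next()) != NULL) {
    // One subsession per "m=" line; print e.g. "audio/MPA" or "video/H263-1998":
    env << subsession->mediumName() << "/" << subsession->codecName() << "\n";
  }
  Medium::close(session);
}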

E. MediaSink

F. MediaSource, FramedSource

class MediaSource: public Medium; class FramedSource: public MediaSource. I put them together here because I think they could be merged (BTW: that is what I did).

This class is a transit station for the entire data-reading process, and it is also the base class of what follows, defining some pure virtual functions for callbacks (implemented by e.g. MultiFramedRTPSource). FramedSource, MultiFramedRTPSource, and the *Sinks form a complete frame data processing loop. Let's introduce the functions FramedSource implements.

void FramedSource::getNextFrame(unsigned char* to, unsigned maxSize,
                                afterGettingFunc* afterGettingFunc,
                                void* afterGettingClientData,
                                onCloseFunc* onCloseFunc,
                                void* onCloseClientData)
// Common to all the frame source classes, so it is defined here; used by the *Sinks.
// This function passes in the data buffer pointer and its size, and the post-processing
// functions (who the data is passed to | what to do at the end).
{
  // Make sure we aren't already being read:
  if (fIsCurrentlyAwaitingData)
  {
    envir() << "FramedSource[" << this << "]::getNextFrame(): attempting to read more than once at the same time!\n";
    exit(1);
  }

  fTo = to; // input buffer pointer
  fMaxSize = maxSize; // remaining buffer size
  fNumTruncatedBytes = 0; // default value, can be changed by doGetNextFrame()
  fAfterGettingFunc = afterGettingFunc;
  fAfterGettingClientData = afterGettingClientData;
  fOnCloseFunc = onCloseFunc;
  fOnCloseClientData = onCloseClientData;
  fIsCurrentlyAwaitingData = True;

  doGetNextFrame(); // pure virtual function, to be defined by subclasses
}

At the same time, doGetNextFrame() is called to process the next frame of data; here the processing function called is the one in MultiFramedRTPSource.

The following function does the handling after a frame of data has been obtained; the function it invokes through the pointer is the processing function stored above, which in Live.com's code lives in the *Sinks. // after the whole frame (MAU) has been got

void FramedSource::afterGetting(FramedSource* source)
{
  source->fIsCurrentlyAwaitingData = False;
  // indicates that we can be read again
  // Note that this needs to be done here, in case the "fAfterGettingFunc"
  // called below tries to read another frame (which it usually will)
  if (source->fAfterGettingFunc != NULL) {
    (*(source->fAfterGettingFunc))(source->fAfterGettingClientData,
                                   source->fFrameSize, source->fNumTruncatedBytes,
                                   source->fPresentationTime,
                                   source->fDurationInMicroseconds);
  }
}

Here, having handled one frame of data is used as one of the flags judging whether the socket can be read or written, mainly out of fear that reading or writing too frequently or in amounts too large would cause waste, or that data processing would not be timely. I think this could be changed, because current video media put one frame in several packets, so...

For stopping the retrieval of frame data, there are functions in FramedSource and its derived classes; the entire thread is stopped and then handled by handleClosure. See the following functions, which still call the function pointer passed in at the beginning.

void FramedSource::stopGettingFrames()
{
  fIsCurrentlyAwaitingData = False;
  doStopGettingFrames();
}

void FramedSource::doStopGettingFrames()
{
  // Default implementation: do nothing
  // Subclasses may wish to specialize this so as to ensure that a
  // subsequent reader can pick up where this one left off.
}

void FramedSource::handleClosure(void* clientData) // This should be called if the source is discovered to be closed (i.e., no longer readable)
{
  FramedSource* source = (FramedSource*)clientData;
  source->fIsCurrentlyAwaitingData = False; // because we got a close instead
  if (source->fOnCloseFunc != NULL)
  {
    (*(source->fOnCloseFunc))(source->fOnCloseClientData);
  }
}
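To see the consumer half of this loop, here is a minimal sketch of a custom sink in the style of FileSink (the DumpSink class is ours, not part of the library): continuePlaying() asks the source for a frame, and afterGettingFrame() is invoked through the pointer stored by getNextFrame():

#include "liveMedia.hh"

class DumpSink : public MediaSink {
public:
  static DumpSink* createNew(UsageEnvironment& env) { return new DumpSink(env); }

protected:
  DumpSink(UsageEnvironment& env) : MediaSink(env) {}

  virtual Boolean continuePlaying() {
    if (fSource == NULL) return False;
    // Ask the source for the next frame; afterGettingFrame is the callback:
    fSource->getNextFrame(fBuffer, sizeof fBuffer,
                          afterGettingFrame, this,
                          onSourceClosure, this);
    return True;
  }

private:
  static void afterGettingFrame(void* clientData, unsigned frameSize,
                                unsigned /*numTruncatedBytes*/,
                                struct timeval /*presentationTime*/,
                                unsigned /*durationInMicroseconds*/) {
    DumpSink* sink = (DumpSink*)clientData;
    sink->envir() << "got a frame of " << frameSize << " bytes\n";
    sink->continuePlaying(); // then try getting the next frame
  }

  unsigned char fBuffer[100000];
};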

2.3.3.2 Some introduction to the RTP payload formats

Third, some summary

A. Buffer management

How to control bursty input packets is a big topic. The leaky bucket model may be useful; however, if a long burst of higher-rate packets arrives (in our system), the bucket will overflow and our control function will take action against the packets in transit.
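For reference, a minimal sketch of such a leaky-bucket check; the capacity and drain rate are tuning parameters, not values from our system:

#include <algorithm>

// A leaky bucket: arriving packets fill the bucket, which drains at a fixed
// rate; a packet that would overflow it is the signal to take action.
class LeakyBucket {
public:
  LeakyBucket(double capacityBytes, double drainBytesPerSec)
    : fCapacity(capacityBytes), fDrainRate(drainBytesPerSec),
      fLevel(0.0), fLastTime(0.0) {}

  // Returns true if the packet conforms; false if the burst is too long.
  bool onPacket(double nowSeconds, unsigned packetBytes) {
    fLevel = std::max(0.0, fLevel - (nowSeconds - fLastTime) * fDrainRate);
    fLastTime = nowSeconds;
    if (fLevel + packetBytes > fCapacity) return false;
    fLevel += packetBytes;
    return true;
  }

private:
  double fCapacity, fDrainRate, fLevel, fLastTime;
};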

In our client system, in order to decouple the library (which manages the session and the receiving thread) from the player (used to display the picture and put the sound to the speaker, and which here includes the decoder), we put a middle layer between the server and the player, which makes porting easy.

The following gives a more detailed description.

BTW: the UPC (Usage Parameter Control) and the process of handling exceptions, such as packet loss, are complicated and we will not give a full description here.

1. Receiver buffer

When the session has been set up, we are ready to receive the streaming packets. Suppose, for example, there are two media subsessions, one audio and one video; we then have one buffer for each of them. The following are the details of the receiver buffer manager.

1) In the receive part, we have defined a 'Packet' class, which is used to store and handle one RTP packet.

2) For each subsession, there is one buffer queue whose length is variable; according to the maximum delay time, we determine the number of entries in the buffer queue.

3) The buffer queue is responsible for the packet re-ordering and other things (see the sketch after the figures below).

4) In the receiver buffer, we handle each packet as soon as possible (except when a packet is delayed by the network, in which case we wait for it until the delay threshold is reached), and we leave buffer overflow and underflow management to the player.

Figure 1: Packet receive flow

Figure 2: Packet handling flow (with the decoder)
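As an illustration of items 2) and 3) above, a simplified reordering queue keyed by RTP sequence number; the delay-threshold timeout is omitted, and sequence-number wrap-around is handled only approximately:

#include <cstdint>
#include <map>

struct RtpPacket { uint16_t seqNo; /* payload, timestamps, ... */ };

// Holds out-of-order packets and releases them strictly in sequence order.
class ReorderingQueue {
public:
  explicit ReorderingQueue(uint16_t firstSeq) : fNextSeq(firstSeq) {}

  void store(const RtpPacket& pkt) {
    if ((int16_t)(pkt.seqNo - fNextSeq) >= 0) // ignore late duplicates
      fPackets[pkt.seqNo] = pkt;
  }

  // Returns true and fills 'out' when the next in-order packet is present.
  bool getNextCompleted(RtpPacket& out) {
    std::map<uint16_t, RtpPacket>::iterator it = fPackets.find(fNextSeq);
    if (it == fPackets.end()) return false; // gap: wait (or time out)
    out = it->second;
    fPackets.erase(it);
    ++fNextSeq;                             // uint16_t wraps around naturally
    return true;
  }

private:
  uint16_t fNextSeq;
  std::map<uint16_t, RtpPacket> fPackets;
};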

2. Player (decoding) buffer

The player stores media data from the RTSP client into buffers, one for every stream. The player allocates memory for every stream according to the maximum preroll length. In the initial phase, the player waits for buffering until every stream has received at least preroll time of content, so every buffer length will be prerollMax + C (here C is a constant). When every buffer is ready, the player starts the playback thread and plays the contents.

Figure 3: Playback with stream buffers

The playback thread counts time stamps for every stream. During playback, one of the streams may be delayed, and then the corresponding buffer will underrun. If the video stream is delayed, the audio will play normally but the video stalls; the playback thread will continue to count the time stamp for the audio stream, but the video time stamp will not increase. When new video data arrives, the playback thread decides whether it should skip some video frames until the next key frame, or play faster to catch up with the audio time stamp. Usually the player may choose playing faster if it is only delayed a short time.

On the other hand, if it is the audio stream that stalls, the player will buffer again until every buffer stores more than T time of data. Here T is a short time related to the audio stream's preroll time, and it can be smaller than or equal to the preroll. This treatment reduces audio discontinuity when the network jitters; to avoid this case, choose a higher T value or a better network.

If one of the buffers overflows, this is treated as an error. For the video stream, the error handler will drop some data until the next key frame arrives; for the audio stream, the error handler will simply drop some data.

Figure 4: Processing buffer overflow or underflow
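A compact sketch of the video catch-up decision described above; the threshold is a placeholder:

// Decide how to recover once delayed video data arrives again.
enum RecoveryAction { PLAY_FASTER, SKIP_TO_NEXT_KEYFRAME };

RecoveryAction onVideoCatchUp(double audioClockSec, double videoClockSec) {
  const double kShortDelaySec = 0.5; // placeholder threshold
  double lag = audioClockSec - videoClockSec;
  // A short lag is cheaper to hide by playing faster; a long one is better
  // handled by dropping frames up to the next key frame:
  return (lag <= kShortDelaySec) ? PLAY_FASTER : SKIP_TO_NEXT_KEYFRAME;
}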

B. How to control the receive loop

The main loop of Live's openRTSP code is

env->taskScheduler().doEventLoop()

The function doEventLoop has a default parameter (a watch variable) that can be used to stop the loop: setting this variable breaks out of the receive loop. Alternatively, you can directly call the pause method of C below, or the resource-release code of D below, to stop receiving or to end the entire thread.
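A sketch of stopping the loop through the watch variable; here a scheduled task flips it after five seconds, but in a real client any callback (a UI action, the teardown path) could do the same:

#include "BasicUsageEnvironment.hh"

char stopFlag = 0; // doEventLoop() returns once this becomes nonzero

static void requestStop(void* /*clientData*/) {
  stopFlag = 1;
}

int main() {
  TaskScheduler* scheduler = BasicTaskScheduler::createNew();
  UsageEnvironment* env = BasicUsageEnvironment::createNew(*scheduler);

  env->taskScheduler().scheduleDelayedTask(5000000, (TaskFunc*)requestStop, NULL);
  env->taskScheduler().doEventLoop(&stopFlag); // returns when stopFlag != 0
  return 0;
}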

C. Pause & Seek

The openRTSP example does not give a specific implementation; the latest LiveMedia version can support seeking (including the server part).

// Pause:
playerIn.rtspClient->pauseMediaSession(*(playerIn.session));
playerIn.rtspClient->playMediaSession(*(playerIn.session), -1);
// will resume

// Seek:
float sessionLength = session->playEndTime();
// First get the playable time range, parsed from the SDP

First pause ***, then:

rtspClient->playMediaSession(*session, start);
// where start is less than sessionLength

D. Releasing resources

The solution given by openRTSP is its shutdown() function, but in the process of connecting the library with the player we hit a release scheme in which a thread could not be stopped. So, referring to the release scheme of MPlayer (whose RTSP support is based on the Live code), we give the following code, which runs normally.

void outRTSPClient() // rtpState is a data structure we defined, saving some session information
{
  if (rtpState->session == NULL)
    return;

  if (rtpState->rtspClient != NULL) {
    MediaSubsessionIterator iter(*(rtpState->session));
    MediaSubsession* subsession;
    while ((subsession = iter.next()) != NULL) {
      Medium::close(subsession->sink);
      subsession->sink = NULL;
      rtpState->rtspClient->teardownMediaSubsession(*subsession);
    }
  }

  UsageEnvironment* env = NULL;
  TaskScheduler* scheduler = NULL;
  if (rtpState->session != NULL) {
    env = &(rtpState->session->envir());
    scheduler = &(env->taskScheduler());
  }
  Medium::close(rtpState->session);
  Medium::close(rtpState->rtspClient);
  env->reclaim();
  delete scheduler;
}

