Point-to-Point Voice Communication Based on a Local Area Network


VCKBASE ONLINE HELP JOURNAL NO.10

Point-to-Point Voice Communication Based on a Local Area Network, by Zhen Wei, Yan Yuxiang, Cai Xuanping (Changsha)

Introduction

As computer networks become more and more widespread, communicating over the network is increasingly important, and a series of voice communication programs have appeared, such as NetMeeting, IPPhone, MediaRing, and VoxPhone. However polished these programs are, they are stand-alone applications: they are not meant to be integrated into software you develop yourself. Yet sometimes we do want to build this kind of voice communication directly into our own software, especially when the users of a company's local area network are scattered across different rooms. This article presents a flexible and simple implementation, using a dialog-based program. All that is needed is a sound card with dual DMA channels (which current sound cards generally support) and a headset; everything else is done in software. The program was compiled with Visual C++ 6.0 under Windows 98/2000 and runs well on Windows NT over 100M Ethernet.

Design ideas

The principle of point-to-point voice communication is simple: as long as you can capture, process, and play back voice in real time, and can transmit and receive it reliably, the two sides can talk to each other. For the former, the Windows low-level audio services are a good fit, because their callback mechanism gives us great convenience. The application keeps supplying audio data blocks to the device driver, while the driver controls the actual recording and playback on the audio device in the background; through the callback mechanism we learn as soon as a data block has been used, so we can send it off in time and hand over the next block, which keeps the sound continuous. Once real-time capture, processing, and playback work on a single machine, the next task is to transfer the voice data over the network. For point-to-point transmission we chose the connection-oriented TCP protocol. TCP automatically handles packet loss and in-order delivery, so we do not have to worry about those problems and only need to use the connection well: before playing back the captured voice, each side sends its own voice to the network, and at the same time it receives the other party's voice from the network, which yields point-to-point voice communication. The block diagram of this structure is as follows:

Implementation

First, real-time capture, processing, and playback of voice

We first introduce the Windows low-level waveform audio data block structure WAVEHDR, which is defined as follows:

typedef struct wavehdr_tag {
    LPSTR lpData;                // pointer to the locked data buffer
    DWORD dwBufferLength;        // size of the data buffer
    DWORD dwBytesRecorded;       // amount of data in the buffer when recording
    DWORD dwUser;                // user data
    DWORD dwFlags;               // flags providing information about the buffer
    DWORD dwLoops;               // number of times to loop playback
    struct wavehdr_tag *lpNext;  // reserved
    DWORD reserved;              // reserved
} WAVEHDR;

Capturing and playing back sound is essentially a matter of operating on this audio data block structure, in particular on its first member, lpData. So when we allocate the buffers (memory) we also allocate the corresponding WAVEHDR data block structures, and assign each buffer pointer to the lpData member of its data block structure. Whenever a buffer is filled, an audio data block is complete; through the message mechanism it can be processed and played back in the message handler, and afterwards the buffer is handed back to the audio input device driver so capture and playback continue. Once several buffers and data block structures have been allocated and given to the audio input device driver, no human intervention is needed to decide which buffer gets filled next or which empty buffer is given back to the driver; Windows controls all of this. This is a simple and clever way to capture voice in real time with a dynamically cycled set of buffers.

The implementation steps are as follows.

1. Initialization

(1) Use waveInGetNumDevs() and waveOutGetNumDevs() to check whether the current system has waveform audio input and output devices;

(2) Fill in the members of a WAVEFORMATEX structure for the 11025 Hz, 16-bit, mono format (about 22 KB/s), or change it to another WAVE format;

(3) Call waveInOpen(...) and waveOutOpen(...) with the WAVE_FORMAT_QUERY flag to check whether the waveform input and output devices support the chosen format;

(4) Call waveInOpen(...) and waveOutOpen(...) again, this time with the CALLBACK_WINDOW flag, to actually open the waveform input and output devices;

(5) Allocate the audio data block structures and the audio data buffers, and lock the global memory;

(6) Initialize each member of the audio data block structures, mainly assigning each buffer pointer to the lpData member of the corresponding data block structure; then call waveInPrepareHeader(...) and waveInAddBuffer(...) to hand the audio data blocks to the input device driver;

(7) Call the waveInStart(...) function to start recording. A sketch of these initialization steps is given below.
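The following is a minimal sketch of the initialization sequence described above, written against the waveIn/waveOut API. The buffer count, buffer size, global variable names, and the InitAudio() helper are assumptions for illustration, and error handling is reduced to a bare minimum.

#include <windows.h>
#include <mmsystem.h>
#pragma comment(lib, "winmm.lib")

#define NUM_BUFFERS 4          // number of cycled data blocks (assumption)
#define BUFFER_SIZE 4000       // bytes per data block (assumption)

HWAVEIN  g_hWaveIn;
HWAVEOUT g_hWaveOut;
WAVEHDR  g_inHdr[NUM_BUFFERS];

BOOL InitAudio(HWND hWnd)      // hWnd will receive MM_WIM_DATA / MM_WOM_DONE
{
    if (waveInGetNumDevs() == 0 || waveOutGetNumDevs() == 0)
        return FALSE;                          // step (1): no audio devices

    WAVEFORMATEX fmt;                          // step (2): 11025 Hz, 16 bit, mono
    fmt.wFormatTag      = WAVE_FORMAT_PCM;
    fmt.nChannels       = 1;
    fmt.nSamplesPerSec  = 11025;
    fmt.wBitsPerSample  = 16;
    fmt.nBlockAlign     = fmt.nChannels * fmt.wBitsPerSample / 8;
    fmt.nAvgBytesPerSec = fmt.nSamplesPerSec * fmt.nBlockAlign;   // ~22 KB/s
    fmt.cbSize          = 0;

    // step (3): query whether the devices support this format
    if (waveInOpen(NULL, WAVE_MAPPER, &fmt, 0, 0, WAVE_FORMAT_QUERY) ||
        waveOutOpen(NULL, WAVE_MAPPER, &fmt, 0, 0, WAVE_FORMAT_QUERY))
        return FALSE;

    // step (4): really open both devices, with window-message callbacks
    waveInOpen(&g_hWaveIn, WAVE_MAPPER, &fmt, (DWORD)hWnd, 0, CALLBACK_WINDOW);
    waveOutOpen(&g_hWaveOut, WAVE_MAPPER, &fmt, (DWORD)hWnd, 0, CALLBACK_WINDOW);

    // steps (5) and (6): allocate and lock the buffers, fill in the WAVEHDRs,
    // and hand them to the input device driver
    for (int i = 0; i < NUM_BUFFERS; i++)
    {
        memset(&g_inHdr[i], 0, sizeof(WAVEHDR));
        g_inHdr[i].lpData = (LPSTR)GlobalLock(GlobalAlloc(GMEM_MOVEABLE, BUFFER_SIZE));
        g_inHdr[i].dwBufferLength = BUFFER_SIZE;
        waveInPrepareHeader(g_hWaveIn, &g_inHdr[i], sizeof(WAVEHDR));
        waveInAddBuffer(g_hWaveIn, &g_inHdr[i], sizeof(WAVEHDR));
    }

    return waveInStart(g_hWaveIn) == MMSYSERR_NOERROR;   // step (7)
}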

2. Message handling

After recording starts, whenever the sampled data fills a data block, the device driver posts the message MM_WIM_DATA to the user window. The corresponding message handler OnMMWimData(...) processes the sampled data in the block and can then hand it to the output device for playback. Whenever an audio data block has finished playing, the device driver posts the message MM_WOM_DONE, and the corresponding handler OnMMWomDone(...) gives the block, after the necessary unpreparation, back to the input device so it is ready for subsequent sampling. In this way the audio data blocks prepared for the input device are cycled between the input and output devices under the control of these messages, without any manual intervention, which realizes real-time capture, processing, and playback.
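A minimal sketch of the two handlers, in an MFC dialog whose window handle was passed to waveInOpen/waveOutOpen, might look like the following. The class name CVoiceDlg and the SendVoice() call into the network layer are assumptions; the handlers are registered with ON_MESSAGE(MM_WIM_DATA, ...) and ON_MESSAGE(MM_WOM_DONE, ...) in the message map.

LRESULT CVoiceDlg::OnMMWimData(WPARAM wParam, LPARAM lParam)
{
    WAVEHDR* pHdr = (WAVEHDR*)lParam;                 // the block just filled by waveIn

    SendVoice(pHdr->lpData, pHdr->dwBytesRecorded);   // hand the captured block to the network layer

    // In the single-machine test the same block is played back locally; in the
    // networked program the data written to waveOut comes from OnReceive instead.
    waveInUnprepareHeader(g_hWaveIn, pHdr, sizeof(WAVEHDR));
    waveOutPrepareHeader(g_hWaveOut, pHdr, sizeof(WAVEHDR));
    waveOutWrite(g_hWaveOut, pHdr, sizeof(WAVEHDR));
    return 0;
}

LRESULT CVoiceDlg::OnMMWomDone(WPARAM wParam, LPARAM lParam)
{
    WAVEHDR* pHdr = (WAVEHDR*)lParam;                 // the block that has finished playing

    // unprepare it on the output side and give it back to the input driver
    waveOutUnprepareHeader(g_hWaveOut, pHdr, sizeof(WAVEHDR));
    waveInPrepareHeader(g_hWaveIn, pHdr, sizeof(WAVEHDR));
    waveInAddBuffer(g_hWaveIn, pHdr, sizeof(WAVEHDR));
    return 0;
}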

When the call ends, close the audio input device; when the audio device driver sends the MM_WIM_CLOSE message, the corresponding message handler OnMMWimClose(...) can release the audio data blocks that were allocated for the input and output devices.
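A possible shutdown sequence, again only a sketch using the variables from the earlier sketch and assuming the blocks are currently owned by the input side:

void CVoiceDlg::StopAudio()
{
    waveInStop(g_hWaveIn);
    waveInReset(g_hWaveIn);               // returns all pending buffers to the application
    for (int i = 0; i < NUM_BUFFERS; i++)
        waveInUnprepareHeader(g_hWaveIn, &g_inHdr[i], sizeof(WAVEHDR));
    waveInClose(g_hWaveIn);               // the driver then posts MM_WIM_CLOSE
    waveOutReset(g_hWaveOut);
    waveOutClose(g_hWaveOut);
}

LRESULT CVoiceDlg::OnMMWimClose(WPARAM wParam, LPARAM lParam)
{
    // free the data blocks allocated during initialization
    for (int i = 0; i < NUM_BUFFERS; i++)
        if (g_inHdr[i].lpData)
        {
            HGLOBAL h = GlobalHandle(g_inHdr[i].lpData);
            GlobalUnlock(h);
            GlobalFree(h);
            g_inHdr[i].lpData = NULL;
        }
    return 0;
}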

Second, point-to-point voice transmission based on the TCP protocol

The sending and receiving of sound uses the connection-oriented TCP protocol, programmed with Windows Sockets. The send and receive calls are placed inside the OnMMWimData(...) handler, so that data is sent as soon as a block has been filled and played back as soon as it has been received. Windows Sockets will be familiar to anyone who has done network programming. Because we want point-to-point communication, the client and server patterns have to be merged into a single model, so that the server can act as a client and the client can act as a server; then both parties can place a call and accept the other party's call, and all this requires is adding a listening socket.

Once the call connection is established, a data stream exists between the two endpoints. Even when neither side is speaking, each point keeps capturing and sending data, so whenever one party does speak, the voice reaches the other party with this stream. The key problem is how to read the voice data from the stream, because the byte-stream service provided by TCP does not preserve message boundaries. When the sender wants to send 4000 bytes at once and calls Send(sBuffer, 4000), there is no guarantee that 4000 bytes are actually sent; likewise, when the receiver prepares to receive and calls Receive(rBuffer, 4000), there is no guarantee that 4000 bytes are received. The number of bytes actually sent or received can be anything from 1 to 4000, and in the worst case just one byte. Conversely, if small amounts of data are sent repeatedly with Send, say 400 bytes at a time, ten times in a row, the receiver may get all 4000 bytes in a single Receive call. For playback, however, we want one send call to deliver a full buffer of voice data and one receive operation to obtain exactly the voice data the other party sent.

A relatively simple and practical approach when sending data over TCP is to prepend a header to every packet of voice data. This header contains the size of the voice data as a long (4 bytes) and a flag string, and offsets can be maintained for these two header fields as well as for the voice data itself. Reception is done by overriding OnReceive(...). Before reception starts the offsets are 0; once reception begins, the handler checks whether the 4-byte voice data size and the flag string have been completely received. If not, the offsets are used to keep receiving exactly up to those fields, and the flag string is then checked for correctness. Once both header fields have been received correctly, the actual voice data is received according to the size value. If a Receive(...) call inside OnReceive(...) does not obtain the full amount, the socket is used in non-blocking mode and AsyncSelect(...) is called so that reception continues until everything has arrived; in this way OnReceive(...) assembles one complete buffer of the other party's data at a time. Similarly, we can override CAsyncSocket::OnSend so that one buffer of data is sent completely. Then the send call only has to be placed in the OnMMWimData(...) handler, and the transmission and playback of voice over the network is achieved.
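As an illustration of this framing scheme, here is a minimal sketch of a CAsyncSocket-derived class that prefixes every voice block with a 4-byte length and a flag string and reassembles complete blocks in OnReceive. The class name, the "VOIC" flag value, and the PlayVoice() callback are assumptions; error handling and the OnSend side are omitted.

// Minimal sketch of length-prefixed framing over CAsyncSocket (MFC).
#include <afxsock.h>

#define VOICE_FLAG      "VOIC"          // assumed 4-character flag string
#define FLAG_LEN        4
#define HEADER_LEN      (sizeof(long) + FLAG_LEN)
#define MAX_VOICE_BLOCK 4000            // must match the capture buffer size

class CVoiceSocket : public CAsyncSocket
{
public:
    CVoiceSocket() : m_recvOffset(0) {}

    // send one complete voice block with its header
    void SendVoiceBlock(const char* pData, long len)
    {
        char packet[HEADER_LEN + MAX_VOICE_BLOCK];
        memcpy(packet, &len, sizeof(long));              // 4-byte data size
        memcpy(packet + sizeof(long), VOICE_FLAG, FLAG_LEN);
        memcpy(packet + HEADER_LEN, pData, len);
        Send(packet, HEADER_LEN + len);                  // partial sends handled in OnSend in a full version
    }

protected:
    virtual void OnReceive(int nErrorCode)
    {
        // keep appending to the assembly buffer from the current offset
        int n = Receive(m_recvBuf + m_recvOffset,
                        HEADER_LEN + MAX_VOICE_BLOCK - m_recvOffset);
        if (n <= 0)
            return;
        m_recvOffset += n;

        while (m_recvOffset >= HEADER_LEN)               // header complete?
        {
            long dataLen;
            memcpy(&dataLen, m_recvBuf, sizeof(long));
            if (memcmp(m_recvBuf + sizeof(long), VOICE_FLAG, FLAG_LEN) != 0)
            {   m_recvOffset = 0; break; }               // bad flag: resynchronize

            if (m_recvOffset < HEADER_LEN + dataLen)     // voice data not yet complete
                break;                                   // wait for the next OnReceive

            PlayVoice(m_recvBuf + HEADER_LEN, dataLen);  // hand a full block to playback

            // shift any bytes of the next packet to the front of the buffer
            m_recvOffset -= HEADER_LEN + dataLen;
            memmove(m_recvBuf, m_recvBuf + HEADER_LEN + dataLen, m_recvOffset);
        }
        CAsyncSocket::OnReceive(nErrorCode);
    }

private:
    void PlayVoice(const char* pData, long len);         // assumed: copies the data into an output WAVEHDR
    char m_recvBuf[HEADER_LEN + MAX_VOICE_BLOCK];
    int  m_recvOffset;
};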

Third, interface and other functions

A few interface and auxiliary features also have to be implemented:

1. A "Find Neighbors" item lets you see who is online. The results are shown in a ListBox; selecting a neighbor in the ListBox puts that neighbor into the corresponding edit box, which makes it easy to place a call. The "Find Neighbors" function is implemented with WNetOpenEnum(...), WNetEnumResource(...), and WNetCloseEnum(...); a sketch is given after this list.

2. The program is dialog-based. After it starts it shows no window and goes straight to the system tray, until you double-click the tray icon or choose it from the menu; it nevertheless listens for incoming calls at all times, and as soon as a connection is established it immediately starts the voice-processing functions.

3. On each run the program writes or refreshes its entry under the registry's RunServicesOnce key, so that it is started automatically.

4. Functions for adjusting the microphone and headphone volume are provided.

5. Music playback during a call is provided, but only .WAV files in the same format as the sampling format are supported; .WAV files in other formats can be converted with the Windows Sound Recorder.
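For item 1, a minimal sketch of enumerating network neighbors with WNetOpenEnum / WNetEnumResource / WNetCloseEnum might look as follows. The helper name EnumNeighbors and the direct use of a CListBox are assumptions; a complete implementation would recurse into container resources (RESOURCEUSAGE_CONTAINER) to reach the individual computers.

#include <afxwin.h>
#include <winnetwk.h>
#pragma comment(lib, "mpr.lib")

// Fill a list box with the remote names found on the network.
// EnumNeighbors is a hypothetical helper; error handling is minimal.
void EnumNeighbors(CListBox& list)
{
    HANDLE hEnum;
    if (WNetOpenEnum(RESOURCE_GLOBALNET, RESOURCETYPE_ANY, 0,
                     NULL, &hEnum) != NO_ERROR)
        return;

    const DWORD cbBuffer = 16384;
    LPNETRESOURCE lpnr = (LPNETRESOURCE)GlobalAlloc(GPTR, cbBuffer);

    for (;;)
    {
        DWORD cEntries = (DWORD)-1;          // as many entries as will fit
        DWORD cb = cbBuffer;
        if (WNetEnumResource(hEnum, &cEntries, lpnr, &cb) != NO_ERROR)
            break;                           // ERROR_NO_MORE_ITEMS ends the loop

        for (DWORD i = 0; i < cEntries; i++)
            if (lpnr[i].lpRemoteName)
                list.AddString(lpnr[i].lpRemoteName);   // e.g. "\\HOSTNAME"
        // a full implementation would call WNetOpenEnum again on entries whose
        // dwUsage has RESOURCEUSAGE_CONTAINER set, to descend to individual machines
    }

    GlobalFree(lpnr);
    WNetCloseEnum(hEnum);
}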

