Game Development Basics (5)
DirectSound

Section 1: Sound

Sound is a series of vibrations, called sound waves, which can generally be represented by a two-dimensional waveform. Digital audio is sound that has been recorded by some device and saved as a digitized file; playing the file back reproduces the sound. The quality of digital audio varies greatly with the sampling frequency and the number of bits per sample, so it is necessary to understand the standards used by the various audio file formats. For example, audio on a CD is 16-bit stereo sampled at 44.1 kHz.

WAV is the most common of all sound file formats. Created by Microsoft, it is the standard format on the Windows platform. It supports 8-bit and 16-bit sample depths, multiple sampling rates, and both stereo and mono audio. It also supports a variety of audio compression algorithms.

To achieve good sound effects in a game, such as 3D sound, there are two approaches. The first is to play prepared sound files directly. This is obviously simple, but it is not flexible. The second is real-time mixing: if the sound must change as the game scene changes, mixing has to be performed in real time, without being limited by the number of sound files.

Section 2: The structure of DirectSound

DirectSound provides playback, sound buffers, three-dimensional sound, audio capture, property sets, and so on. DirectSound playback is built on the IDirectSound COM interface. The IDirectSoundBuffer, IDirectSound3DBuffer, and IDirectSound3DListener interfaces implement operations on sound buffers and three-dimensional sound. DirectSound capture is built on the IDirectSoundCapture and IDirectSoundCaptureBuffer COM interfaces. Other COM interfaces, such as IKsPropertySet, let applications take advantage of sound-card extensions.
Finally, the IDirectSoundNotify interface is used to generate an event when playback or audio capture reaches a given position.

Section 3: Playback

Overview. A DirectSound buffer object represents a buffer containing sound data, stored in PCM format. The object can be used not only to start, stop, or pause playback, but also to set attributes of the sound data such as frequency and format. Buffers are divided into the primary buffer and secondary buffers. The primary buffer holds the audio signal the listener actually hears, which is generally the result of mixing the signals in the secondary buffers. The secondary buffers hold many separate sound signals; some are played directly, some are mixed, and some loop. The primary buffer is created automatically by DirectSound, while secondary buffers are created by the application. DirectSound mixes the sounds in the secondary buffers, stores the result in the primary buffer, and then sends it to the playback device. DirectSound does not parse sound files; the application must convert sound signals of different formats into PCM. A buffer may reside in main RAM, wavetable memory, a DMA channel, or virtual memory.

Multiple applications can create DirectSound objects for the same sound device. When the input focus changes, the audio output automatically switches among the applications' streams, so an application does not need to repeatedly play and stop its buffers as the input focus changes. Through the IDirectSoundNotify interface, DirectSound can notify the application dynamically when playback reaches a user-specified position or when playback finishes.

Section 4: Audio capture

Overview. A DirectSoundCapture object can query the capabilities of audio capture devices and create buffers for capturing audio from an input source.
In fact, Win32 has long provided functions for capturing audio, and as of version 5 DirectSoundCapture offers little that is genuinely new.
However, the DirectSoundCapture API lets you write playback and capture code against the same style of interface, and it also provides the model for future API improvements, which you will be able to benefit from. DirectSoundCapture can also capture audio in compressed formats.

A DirectSoundCaptureBuffer object represents a buffer used for capturing audio. It is circular: when the input pointer reaches the end of the buffer, it wraps back to the beginning. A DirectSoundCaptureBuffer object offers several methods to set the properties of the buffer, start or stop capture, and lock a portion of its memory (so that you can safely copy the data out or use it for other purposes). As with playback, the IDirectSoundNotify interface notifies the application when the input pointer reaches a given position.

Section 5: Initialization

For some simple operations, the default device can be used. In producing a game, however, we may need to know about specific sound devices, so you should first enumerate the available ones. To do this, you set up a callback function that DirectSound invokes once for each device it discovers. You can do anything you like inside the callback, but it must be declared with the same signature as DSEnumCallback. If you want enumeration to continue, the function should return TRUE; otherwise it returns FALSE.

The following routine comes from the Dsenum.c sample file on the CD. It enumerates the available devices and adds an entry for each to a combo box. First, the callback function:

    BOOL CALLBACK DSEnumProc(LPGUID lpGUID, LPCTSTR lpszDesc,
                             LPCTSTR lpszDrvName, LPVOID lpContext)
    {
        HWND hCombo = *(HWND *)lpContext;
        LPGUID lpTemp = NULL;

        if (lpGUID != NULL)  /* NULL identifies the default device */
        {
            if ((lpTemp = LocalAlloc(LPTR, sizeof(GUID))) == NULL)
                return TRUE;
            memcpy(lpTemp, lpGUID, sizeof(GUID));
        }
        ComboBox_AddString(hCombo, lpszDesc);
        ComboBox_SetItemData(hCombo,
                             ComboBox_FindString(hCombo, 0, lpszDesc),
                             lpTemp);
        return TRUE;
    }

When the dialog box containing the combo box is initialized, enumeration is started:

    if (DirectSoundEnumerate((LPDSENUMCALLBACK)DSEnumProc, &hCombo) != DS_OK)
    {
        EndDialog(hDlg, TRUE);
        return TRUE;
    }

Creating the object. The easiest way to create a DirectSound object is the DirectSoundCreate function. Its first parameter is the globally unique identifier (GUID) of the device. You can obtain the GUID by enumerating the sound devices, or pass NULL to create an object for the default device.

    LPDIRECTSOUND lpDirectSound;
    HRESULT hr;
    hr = DirectSoundCreate(NULL, &lpDirectSound, NULL);

After creating the DirectSound object, the cooperative level should be set. This determines how much each DirectSound application is allowed to operate the sound device, and prevents applications from operating the device at the wrong time or in the wrong way.
The method used is IDirectSound::SetCooperativeLevel. Here the hwnd parameter is the handle of the application window:

    HRESULT hr = lpDirectSound->lpVtbl->SetCooperativeLevel(lpDirectSound,
                                                            hwnd, DSSCL_NORMAL);

The cooperative level set here is Normal, so that applications using the sound card can be switched in turn. The cooperative levels are Normal, Priority, Exclusive, and Write-Primary, in increasing order of privilege.

As mentioned earlier, DirectSound can take full advantage of hardware acceleration, so the application first needs to examine the characteristics of the device. We can do this via IDirectSound::GetCaps, as shown below:

    DSCAPS dscaps;
    dscaps.dwSize = sizeof(DSCAPS);
    HRESULT hr = lpDirectSound->lpVtbl->GetCaps(lpDirectSound, &dscaps);

The DSCAPS structure receives information about the capabilities and resources of the sound device. Note that the dwSize member must be initialized before the call. In addition, you can also query and set the speaker configuration, and compact sound memory so that the largest possible free block is available.

Section 6: Playback

After initialization, DirectSound automatically creates a primary buffer for mixing and sending to the output device. Secondary buffers must be created by the application.
The following example shows how to create a basic secondary buffer with the IDirectSound::CreateSoundBuffer method:

    BOOL AppCreateBasicBuffer(LPDIRECTSOUND lpDirectSound,
                              LPDIRECTSOUNDBUFFER *lplpDsb)
    {
        PCMWAVEFORMAT pcmwf;
        DSBUFFERDESC dsbdesc;
        HRESULT hr;

        /* Set up the wave format structure. */
        memset(&pcmwf, 0, sizeof(PCMWAVEFORMAT));
        pcmwf.wf.wFormatTag = WAVE_FORMAT_PCM;
        pcmwf.wf.nChannels = 2;
        pcmwf.wf.nSamplesPerSec = 22050;
        pcmwf.wf.nBlockAlign = 4;
        pcmwf.wf.nAvgBytesPerSec = pcmwf.wf.nSamplesPerSec * pcmwf.wf.nBlockAlign;
        pcmwf.wBitsPerSample = 16;

        /* Set up the DSBUFFERDESC structure, which holds the
           buffer control options. */
        memset(&dsbdesc, 0, sizeof(DSBUFFERDESC));
        dsbdesc.dwSize = sizeof(DSBUFFERDESC);
        /* Default control requirements. */
        dsbdesc.dwFlags = DSBCAPS_CTRLDEFAULT;
        /* A 3-second buffer. */
        dsbdesc.dwBufferBytes = 3 * pcmwf.wf.nAvgBytesPerSec;
        dsbdesc.lpwfxFormat = (LPWAVEFORMATEX)&pcmwf;

        /* Create the buffer. */
        hr = lpDirectSound->lpVtbl->CreateSoundBuffer(lpDirectSound,
                                                      &dsbdesc, lplpDsb, NULL);
        if (DS_OK == hr)
        {
            /* Success; the interface pointer is returned in *lplpDsb. */
            return TRUE;
        }
        else
        {
            /* Failure. */
            *lplpDsb = NULL;
            return FALSE;
        }
    }

You must set the control options for the buffer in the dwFlags member of the DSBUFFERDESC structure; see the DirectX 5 help for details.

Secondary buffers do not give you control over the mixing and other global effects, so you may need to operate on the primary buffer directly. However, while you hold write access to the primary buffer, other features lose their effect, and hardware-accelerated mixing is bypassed. Therefore, few applications operate on the primary buffer directly. If you do need to, set the DSBCAPS_PRIMARYBUFFER flag in the DSBUFFERDESC structure when calling the IDirectSound::CreateSoundBuffer method, and the cooperative level must be Write-Primary.
The following example shows how to obtain write access to the primary buffer:

    BOOL AppCreateWritePrimaryBuffer(LPDIRECTSOUND lpDirectSound,
                                     LPDIRECTSOUNDBUFFER *lplpDsb,
                                     LPDWORD lpdwBufferSize, HWND hwnd)
    {
        PCMWAVEFORMAT pcmwf;
        DSBUFFERDESC dsbdesc;
        DSBCAPS dsbcaps;
        HRESULT hr;

        /* Set up the wave format structure. */
        memset(&pcmwf, 0, sizeof(PCMWAVEFORMAT));
        pcmwf.wf.wFormatTag = WAVE_FORMAT_PCM;
        pcmwf.wf.nChannels = 2;
        pcmwf.wf.nSamplesPerSec = 22050;
        pcmwf.wf.nBlockAlign = 4;
        pcmwf.wf.nAvgBytesPerSec = pcmwf.wf.nSamplesPerSec * pcmwf.wf.nBlockAlign;
        pcmwf.wBitsPerSample = 16;

        /* Set up the DSBUFFERDESC structure. */
        memset(&dsbdesc, 0, sizeof(DSBUFFERDESC));
        dsbdesc.dwSize = sizeof(DSBUFFERDESC);
        dsbdesc.dwFlags = DSBCAPS_PRIMARYBUFFER;
        /* The buffer size is determined by the sound hardware. */
        dsbdesc.dwBufferBytes = 0;
        dsbdesc.lpwfxFormat = NULL; /* must be NULL for the primary buffer */

        /* Obtain the Write-Primary cooperative level. */
        hr = lpDirectSound->lpVtbl->SetCooperativeLevel(lpDirectSound, hwnd,
                                                        DSSCL_WRITEPRIMARY);
        if (DS_OK == hr)
        {
            /* Succeeded; try to create the buffer. */
            hr = lpDirectSound->lpVtbl->CreateSoundBuffer(lpDirectSound,
                                                          &dsbdesc, lplpDsb, NULL);
            if (DS_OK == hr)
            {
                /* Succeeded; set the primary buffer to the desired format. */
                hr = (*lplpDsb)->lpVtbl->SetFormat(*lplpDsb,
                                                   (LPWAVEFORMATEX)&pcmwf);
                if (DS_OK == hr)
                {
                    /* To learn the size of the buffer, call GetCaps. */
                    dsbcaps.dwSize = sizeof(DSBCAPS);
                    (*lplpDsb)->lpVtbl->GetCaps(*lplpDsb, &dsbcaps);
                    *lpdwBufferSize = dsbcaps.dwBufferBytes;
                    return TRUE;
                }
            }
        }

        /* Setting the cooperative level, creating the buffer,
           or setting the format failed. */
        *lplpDsb = NULL;
        *lpdwBufferSize = 0;
        return FALSE;
    }