Optimizing Sound Performance with DirectX

zhaozj, 2021-02-11

Summary: Microsoft's DirectX family, in particular DirectSound and Direct3DSound, provides a powerful set of audio tools for developing games and interactive media. DirectX takes full advantage of sound-acceleration hardware to speed up playback and reduce the time spent on the CPU as much as possible; even so, audio can still have a significant impact on overall system performance. The techniques described here will help you use DirectSound and Direct3DSound while keeping the cost of audio playback on the rest of the system to a minimum.

The DirectX waveform playback components in Microsoft's development tools are designed to support games and interactive media applications on Windows 95 and Windows NT. DirectSound and Direct3DSound let you play many sound files at once and move sound sources through a shared three-dimensional space. Wherever possible, DirectX uses sound-acceleration hardware to improve performance and reduce CPU usage. But that does not mean you can fill a program with sound code, move sources around at will, and ignore the consequences. If you pay no attention to how the computer's sound resources are used, you will soon find your CPU cycles being eaten by all the 44.1 kHz, 16-bit stereo streams you have piled on.

Before getting to the techniques, let us go over some definitions. If you are familiar with DirectSound, you already know the following terms:

Secondary buffer: an application buffer that holds waveform data for playback. Each waveform file being played has its own secondary buffer, and each such buffer has its own format.

Primary buffer: the output buffer of DirectSound. In general, the application does not write waveform data directly into the primary buffer; instead, DirectSound mixes the waveform data from the secondary buffers and writes the result into the primary buffer. Note that there is only one primary buffer, and its format determines the output format.

Static buffer: holds a complete sound in memory. Because you can write an entire sound into the buffer in one simple operation, static buffers are very convenient, and they can be mixed in hardware by the sound card.

Streaming buffer: holds only part of a sound at a time, so long sounds can be played without a large amount of memory. When using a streaming buffer, the application must periodically write new data into the buffer. Streaming buffers, however, cannot be mixed in hardware.

We will also refer repeatedly to the DirectSound mixer. This DirectSound component is responsible for mixing the sounds from the secondary buffers and for applying operations such as volume, pan (left/right channel balance), frequency conversion, and 3D processing. Although the mixer is not a component you manipulate directly through the API (unlike the controls just described), it is exactly where DirectSound spends most of its CPU time, and several of the performance issues below come back to it. The following schematic shows how the primary buffer, the secondary buffers, and the mixer relate to one another, so the reader can see the relationship clearly. (Figure: the secondary buffers feed the DirectSound mixer, which writes its output into the primary buffer.) If a member of the DirectSound development team saw this schematic, they would probably dismiss it, because the mixer does far more than the diagram shows.
In the figure I have not drawn the 3D processing components attached to the mixer, nor several other kinds of processing. Now that we have the background, we can move on to the things that will actually be useful. To make the pieces concrete, a minimal initialization sketch follows.
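The sketch below (C++, error handling mostly omitted) creates the DirectSound object, obtains the single primary buffer, and creates one secondary buffer that the mixer will mix into it. The window handle hwnd, the 22.05 kHz format, and the one-second buffer size are placeholder assumptions, not values from the article.

    #include <windows.h>
    #include <dsound.h>   // link with dsound.lib

    // Sketch only: create the DirectSound object, the primary buffer, and one
    // static secondary buffer. 'hwnd' is assumed to be the application's window.
    bool InitSound(HWND hwnd, LPDIRECTSOUND* ppDS,
                   LPDIRECTSOUNDBUFFER* ppPrimary, LPDIRECTSOUNDBUFFER* ppSecondary)
    {
        if (FAILED(DirectSoundCreate(NULL, ppDS, NULL)))
            return false;

        // DSSCL_PRIORITY lets us change the primary buffer format later on.
        if (FAILED((*ppDS)->SetCooperativeLevel(hwnd, DSSCL_PRIORITY)))
            return false;

        // The one and only primary buffer: the mixer's output.
        DSBUFFERDESC primDesc = {0};
        primDesc.dwSize  = sizeof(primDesc);
        primDesc.dwFlags = DSBCAPS_PRIMARYBUFFER;
        if (FAILED((*ppDS)->CreateSoundBuffer(&primDesc, ppPrimary, NULL)))
            return false;

        // One secondary buffer holding a short, complete sound (a static buffer).
        WAVEFORMATEX wfx = {0};
        wfx.wFormatTag      = WAVE_FORMAT_PCM;
        wfx.nChannels       = 2;
        wfx.nSamplesPerSec  = 22050;
        wfx.wBitsPerSample  = 16;
        wfx.nBlockAlign     = wfx.nChannels * wfx.wBitsPerSample / 8;
        wfx.nAvgBytesPerSec = wfx.nSamplesPerSec * wfx.nBlockAlign;

        DSBUFFERDESC secDesc = {0};
        secDesc.dwSize        = sizeof(secDesc);
        secDesc.dwFlags       = DSBCAPS_STATIC | DSBCAPS_CTRLPAN | DSBCAPS_CTRLVOLUME;
        secDesc.dwBufferBytes = wfx.nAvgBytesPerSec;   // one second of audio
        secDesc.lpwfxFormat   = &wfx;
        return SUCCEEDED((*ppDS)->CreateSoundBuffer(&secDesc, ppSecondary, NULL));
    }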

Here is a list of techniques that will help you get the most out of DirectSound:

1. Use secondary buffers wisely.
2. Use the same format as the primary buffer.
3. Set the primary buffer to the lowest acceptable data rate.
4. Keep the primary buffer playing across short, frequent silent intervals.
5. Use hardware mixing.
6. Minimize control changes.
7. Use deferred 3D processing commands.

The sections below explain each of these techniques in turn.

1. Use secondary buffers wisely

One of the best features of DirectSound is its ability to play and control many audio channels independently at the same time. Once a sound designer truly masters this, it is genuinely powerful. The only cost is CPU cycles: every secondary buffer you use consumes instruction cycles, processing operations such as frequency conversion consume more, and 3D sounds consume more than ordinary ones. So sit down with your sound designer and work out which sounds really need to play, and when, to convey what you want to the user (if you are both the programmer and the sound designer, sit down calmly and think it through yourself). Whenever you can reduce the number of secondary buffers, use premixing. For example, if you have a low chirping in one channel and frogs croaking in another to evoke a summer-night atmosphere, mix them into a single channel. If you keep this in mind while designing the application, the process is reasonably straightforward; but remember that designing a really polished, practical sound scheme takes a long time. The Beatles' Sgt. Pepper's Lonely Hearts Club Band is a creative masterpiece, and it was recorded on a four-track tape machine; by contrast, modern recording equipment offers at least forty-eight tracks, effectively unlimited virtual tracks, and MIDI sequencers.

2. Use the same format as the primary buffer

The DirectSound mixer converts the data in each secondary buffer into the primary buffer's format as it mixes, and this conversion costs CPU cycles. You can eliminate that overhead by making sure your secondary buffers (that is, your wave files) use the same format as the primary buffer. In practice, because of the way DirectSound performs the conversion, what you really need to match are the sample rate and the number of channels; a difference in sample size (8 or 16 bits) matters much less, since its main consequence is simply a change in the data rate at which the primary buffer is accessed.

So far, most sound cards are ISA bus cards. They move sound data from system memory to their local buffers by DMA, and the processor is forced to wait for the DMA transfer before its own memory reads and writes can complete, which inevitably slows the CPU down. For ISA bus sound cards this transfer unavoidably affects system performance; for the newer 32-bit PCI cards it has essentially no impact. For DirectSound, the cost of the DMA transfer is directly related to the output data rate, that is, to the access rate of the primary buffer.
I have heard a striking figure along these lines: running at a 44.1 kHz, 16-bit output format on a 90 MHz Pentium, DMA transfers can consume up to 30% of the CPU cycles! DMA transfer is the single biggest factor affecting DirectSound performance. Fortunately, when your program is not running smoothly, this problem is easy to address.
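As a rough illustration of technique 2, here is a small sketch with a hypothetical helper, FormatMatchesPrimary, that reads the primary buffer's current format and compares the fields that matter most when deciding how to author or convert your wave files.

    // Sketch: check whether a sound's format matches the primary buffer, so the
    // mixer does not have to convert sample rate or channel count while mixing.
    // 'primary' is assumed to be the primary buffer created earlier.
    bool FormatMatchesPrimary(LPDIRECTSOUNDBUFFER primary, const WAVEFORMATEX& sound)
    {
        WAVEFORMATEX primFmt = {0};
        DWORD written = 0;
        if (FAILED(primary->GetFormat(&primFmt, sizeof(primFmt), &written)))
            return false;

        // Sample rate and channel count matter most; a difference in bits per
        // sample mainly changes the primary buffer's data rate.
        return primFmt.nSamplesPerSec == sound.nSamplesPerSec &&
               primFmt.nChannels      == sound.nChannels;
    }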

3. Set the primary buffer to the lowest acceptable data rate

Experience shows that the best way to reduce the primary buffer's access rate is to change its data format, trading some sound quality for performance. To change the primary buffer's format, simply call IDirectSoundBuffer::SetFormat, but do not forget that your cooperative level must be set to DSSCL_PRIORITY or DSSCL_EXCLUSIVE, otherwise the primary buffer's format cannot be changed.

4. Keep the primary buffer playing across short, frequent silent intervals

DMA affects system performance in another way as well. DirectSound stops the mixer when no sound is playing. If your program has short, frequent silent intervals, the mixer is stopped and restarted around every gap, which can cost more than simply letting it run. In this case you can force the mixer to stay active by calling Play on the primary buffer; the mixer then keeps working even when no sound is playing. To restore the default behavior of stopping the mixer, call Stop on the primary buffer.

5. Use hardware mixing

As long as the sound card's DirectSound driver is installed, most sound cards support some level of hardware mixing. A few small tricks will let you use hardware mixing as much as possible. Use static buffers whenever you can, since DirectSound tries to mix static buffers in hardware. Create the buffers for your most heavily used sounds first, so that they get the hardware mixing resources. Use IDirectSound::GetCaps to find out which formats the sound-acceleration hardware supports, and use those formats as much as possible (some sound cards can only hardware-mix sounds in specific formats; for example, the Sound Blaster AWE32 can only hardware-mix mono 16-bit sounds). When you call CreateSoundBuffer to create a static buffer, you can force the buffer to be mixed in hardware by setting the DSBCAPS_LOCHARDWARE flag in the dwFlags member of the DSBUFFERDESC structure; however, if the hardware resources needed are not available, CreateSoundBuffer will fail. IDirectSound::GetCaps gives us a detailed description of the card's acceleration capabilities, which is a great help: we can call GetCaps at run time and adjust the audio system to make the best use of the hardware resources. Look up the DSCAPS structure and the DSCAPS.dwFlags flags in the DirectX documentation; they give you accurate, useful information about the system. A sketch of this appears below.
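Here is a minimal sketch of technique 5, assuming the DirectSound object and wave format come from the earlier sketches. It asks GetCaps how many hardware static-mixing slots are free, tries DSBCAPS_LOCHARDWARE, and falls back to software if CreateSoundBuffer fails.

    // Sketch: try to place a static buffer in hardware, fall back to software.
    // 'ds' is the DirectSound object and 'wfx' the sound's format.
    LPDIRECTSOUNDBUFFER CreateStaticBuffer(LPDIRECTSOUND ds, WAVEFORMATEX* wfx,
                                           DWORD bytes)
    {
        // Ask the driver what the hardware can still mix before deciding.
        DSCAPS caps = {0};
        caps.dwSize = sizeof(caps);
        ds->GetCaps(&caps);
        bool tryHardware = caps.dwFreeHwMixingStaticBuffers > 0;

        DSBUFFERDESC desc = {0};
        desc.dwSize        = sizeof(desc);
        desc.dwBufferBytes = bytes;
        desc.lpwfxFormat   = wfx;
        desc.dwFlags       = DSBCAPS_STATIC |
                             (tryHardware ? DSBCAPS_LOCHARDWARE : 0);

        LPDIRECTSOUNDBUFFER buf = NULL;
        if (FAILED(ds->CreateSoundBuffer(&desc, &buf, NULL)) && tryHardware) {
            // Forcing hardware failed (resources unavailable): use software mixing.
            desc.dwFlags = DSBCAPS_STATIC | DSBCAPS_LOCSOFTWARE;
            if (FAILED(ds->CreateSoundBuffer(&desc, &buf, NULL)))
                return NULL;
        }
        return buf;
    }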
6. Minimize control changes

Changing the pan, volume, or frequency of a buffer affects the mixer while the application is running. To prevent gaps in the sound output, the DirectSound mixer mixes 20 to 100 milliseconds ahead, and sometimes more. When you change a control, the mixer has to throw away the data it has already mixed ahead and remix it to reflect the change. The best approach is therefore to minimize the number of control changes per unit of time; this is especially important when streaming. Likewise, keep scattered, routine calls to SetVolume, SetPan, and SetFrequency to a minimum. For example, if you are panning a sound from the left speaker to the right speaker in step with your frame loop, call SetPan once per frame, not twice per frame.

Note: 3D control changes (orientation, position, velocity, Doppler factor, and so on) also force the DirectSound mixer to remix the data it has already mixed ahead.
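The deferred 3D processing named in technique 7 is the usual DirectSound way to limit that remixing when several 3D parameters change in the same frame. A minimal sketch follows, assuming a secondary buffer created with DSBCAPS_CTRL3D and a listener obtained from the primary buffer via QueryInterface; the function names are illustrative.

    // Sketch of technique 7: set 3D parameters with DS3D_DEFERRED so no remix
    // happens yet, then commit everything once at the end of the frame.
    void MoveSourceDeferred(LPDIRECTSOUND3DBUFFER buffer3d,
                            float x, float y, float z)
    {
        buffer3d->SetPosition(x, y, z, DS3D_DEFERRED);   // no remix yet
    }

    void EndOfFrame(LPDIRECTSOUND3DLISTENER listener)
    {
        // One call per frame: all deferred 3D changes take effect together,
        // so the mixer remixes its look-ahead data only once.
        listener->CommitDeferredSettings();
    }

In practice you would defer the updates for every moving source and call CommitDeferredSettings a single time per frame, just as SetPan should be called once per frame in the example above.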

Please credit the original source when reposting: https://www.9cbs.com/read-4223.html
