Generating a Drum Machine with DirectSound



Release Date: 11/12/2004 | Update Date: 11/12/2004

Ianier Munoz (Chronotron)

Abstract: Guest columnist Ianier Munoz uses the managed Microsoft DirectX libraries and C# to synthesize an audio stream on the fly, producing a drum machine.

Download the source code of this article.

On this page

• Introduction
• DirectSound Overview
• Streaming Audio Player
• Implementing IAudioPlayer with DirectSound
• The Drum Machine Engine
• Putting the Code Together
• Summary
• Coding Challenge
• Resources

Drumming

(This introduction is provided by Duncan Mackenzie.)

Ianier has a great job: he writes code that lets DJs do professional digital signal processing (DSP) work from consumer software such as Microsoft® Windows Media® Player. It is wonderful work, and lucky for us he has been digging into managed code and managed Microsoft® DirectX®. For this article, Ianier has built a demo application (see Figure 1) that will have beats thumping out of your computer's little speakers within minutes. It is a managed-code drum machine that lets you configure and play multi-channel sampled music. The code should work without any special configuration, but be sure to download and install the DirectX SDK (and then restart) before opening and running the WinRythm sample project.

Figure 1. The main form of the drum machine (oye, oye... boom, boom, boom...)


Introduction

Before the DirectX 9 SDK was released, the Microsoft® .NET Framework was disappointingly silent: it offered no sound support at all. The only way around this limitation was to access the Microsoft® Windows® API through COM interop or P/Invoke.

Managed Microsoft® DirectSound® (a component of DirectX 9) lets you play sound in .NET without resorting to COM interop or P/Invoke. In this article, I will show how to implement a simple drum machine by synthesizing audio samples on the fly (see Figure 1) and using DirectSound to play the resulting audio stream.

This article assumes that you are familiar with C# and the .NET Framework. Some basic knowledge of audio processing will also help you follow the concepts described here.

The code included in this article was compiled with Microsoft® Visual Studio® .NET 2003. It requires the DirectX 9 SDK (you can download it here).


DirectSound Overview

DirectSound is the component of DirectX that gives applications hardware-independent access to audio resources. In DirectSound, the unit of audio playback is the sound buffer. Sound buffers belong to an audio device, which represents a sound card in the host system. When an application wants to play sound with DirectSound, it creates an audio device object, creates a buffer on that device, fills the buffer with sound data, and then plays the buffer. For a detailed description of the relationships between the different DirectSound objects, see the DirectX SDK documentation.

Depending on their intended use, sound buffers fall into two categories: static buffers and streaming buffers. A static buffer is initialized once with some predefined audio data and then played as many times as needed. Such buffers are typically used in shooting games and other games that need short sound effects. A streaming buffer, on the other hand, is used to play content that is too large to fit in memory, or whose length or content cannot be determined in advance, such as the speaker's voice in a telephony application. A streaming buffer is implemented as a small buffer that is refreshed with new data as it plays. Although managed DirectSound provides excellent documentation and samples for static buffers, it currently lacks examples for streaming buffers. It is worth mentioning that managed DirectX does include a class for playing audio streams, namely the Audio class in the AudioVideoPlayback namespace. That class can play most types of audio files, including WAV and MP3. However, the Audio class does not let you select the output device programmatically, nor does it give you access to the audio samples in case you want to modify them.
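To make the static case concrete, here is a minimal sketch of static-buffer playback with managed DirectSound. It is not code from the sample project; the file name and the owner control are placeholders.

using Microsoft.DirectX.DirectSound;

// Play a short effect from a WAV file once, using a static buffer.
Device device = new Device();
device.SetCooperativeLevel(owner, CooperativeLevel.Normal); // owner: any Windows Forms control
SecondaryBuffer shot = new SecondaryBuffer("shot.wav", device); // buffer filled once from the file
shot.Play(0, BufferPlayFlags.Default); // fire and forget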


Streaming Audio Player

I define a streaming audio player as a component that pulls audio data from a source and plays that data through a device. A typical streaming audio player plays the incoming audio stream through a sound card, but it could just as well send the audio stream over a network or save it to a file.

The IAudioPlayer interface below captures everything our application needs to know about the player. This interface also lets you decouple the sound-synthesis engine from the actual player implementation, which is useful if you ever want to port the sample to another .NET platform that uses a different playback technology.

/// <summary>
/// Delegate used to fill a buffer with audio data.
/// </summary>
public delegate void PullAudioCallback(IntPtr data, int count);

/// <summary>
/// Audio player interface.
/// </summary>
public interface IAudioPlayer : IDisposable
{
    int SamplingRate { get; }
    int BitsPerSample { get; }
    int Channels { get; }

    int GetBufferedSize();
    void Play(PullAudioCallback onAudioData);
    void Stop();
}

The SamplingRate, BitsPerSample, and Channels properties describe the audio format that the player understands. The Play method starts playing the audio stream supplied by the PullAudioCallback delegate, and the Stop method, unsurprisingly, stops playback.

Note that PullAudioCallback is expected to copy count bytes of audio data into the data buffer, which is an IntPtr. You might think I should have used a byte array instead of an IntPtr, because the IntPtr forces the code implementing the callback to call interop functions. However, managed DirectSound already requires such permissions, so using an IntPtr has no serious consequences here, and it avoids an extra data copy when dealing with different sample formats or other playback technologies.
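As an illustration of what an implementation of the delegate can look like (this is not code from the sample), the hypothetical callback below fills the requested region with silence by copying a zeroed managed array into the unmanaged buffer; a real synthesizer would write its mixed samples instead.

using System;
using System.Runtime.InteropServices;

// Hypothetical PullAudioCallback implementation.
void OnPullAudio(IntPtr data, int count)
{
    byte[] silence = new byte[count];      // all zeroes == silence for 16-bit PCM
    Marshal.Copy(silence, 0, data, count); // copy the managed bytes into the unmanaged buffer
}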

GetBufferedSize returns the number of bytes that have been queued in the player since the last call to the PullAudioCallback delegate. We will use this method to compute the current playback position within the input stream.
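For example, a caller could estimate which sample of the stream is currently being heard along the following lines. This is only a sketch: totalBytesPulled is assumed to be a counter maintained by the callback implementation, not part of the interface.

// Convert "bytes still buffered" into the sample index currently playing.
static long GetPlayingSample(IAudioPlayer player, long totalBytesPulled)
{
    int bytesPerSample = player.BitsPerSample / 8 * player.Channels;
    return (totalBytesPulled - player.GetBufferedSize()) / bytesPerSample;
}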


Implementing IAudioPlayer with DirectSound

As I mentioned earlier, a streaming buffer in DirectSound is just a small buffer that is continuously refreshed with new data as it plays. The StreamingPlayer class implements the IAudioPlayer interface using such a streaming buffer. Let's look at the StreamingPlayer constructor:

public StreamingPlayer(Control owner, Device device, WaveFormat format)
{
    m_Device = device;
    if (m_Device == null)
    {
        m_Device = new Device();
        m_Device.SetCooperativeLevel(owner, CooperativeLevel.Normal);
        m_OwnsDevice = true;
    }

    BufferDescription desc = new BufferDescription(format);
    desc.BufferBytes = format.AverageBytesPerSecond;
    desc.ControlVolume = true;
    desc.GlobalFocus = true;

    m_Buffer = new SecondaryBuffer(desc, m_Device);
    m_BufferBytes = m_Buffer.Caps.BufferBytes;

    m_Timer = new System.Timers.Timer(BytesToMs(m_BufferBytes) / 6);
    m_Timer.Enabled = false;
    m_Timer.Elapsed += new System.Timers.ElapsedEventHandler(Timer_Elapsed);
}

The StreamingPlayer constructor first makes sure we have a valid DirectSound audio device to work with, creating a new Device if none was supplied. To create a Device object we must specify a Microsoft® Windows Forms control that DirectSound uses to track application focus; that is what the owner parameter is for. Then a DirectSound SecondaryBuffer instance is created and initialized, and a timer is allocated. I will discuss the role of the timer shortly.
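The constructor and the Feed routine shown later rely on two small conversion helpers, BytesToMs and MsToBytes, that are not listed in this article. A plausible sketch, assuming the constructor keeps the WaveFormat in an m_Format field (both the field and the bodies are assumptions, not code from the sample):

private int BytesToMs(int bytes)
{
    return bytes * 1000 / m_Format.AverageBytesPerSecond;
}

private int MsToBytes(int ms)
{
    int bytes = ms * m_Format.AverageBytesPerSecond / 1000;
    return bytes - (bytes % m_Format.BlockAlign); // stay aligned to whole sample frames
}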

IAudioPlayer.Play and IAudioPlayer.Stop are almost trivial. The Play method makes sure there is some audio data to play; then it enables the timer and starts playing the buffer. Symmetrically, the Stop method disables the timer and stops the buffer.

public void Play(Chronotron.AudioPlayer.PullAudioCallback pullAudio)
{
    Stop();
    m_PullStream = new PullStream(pullAudio);
    m_Buffer.SetCurrentPosition(0);
    m_NextWrite = 0;
    Feed(m_BufferBytes);
    m_Timer.Enabled = true;
    m_Buffer.Play(0, BufferPlayFlags.Looping);
}

public void Stop()
{
    if (m_Timer != null)
        m_Timer.Enabled = false;
    if (m_Buffer != null)
        m_Buffer.Stop();
}

The idea is to keep feeding the buffer with sound data obtained from the delegate. To achieve this, the timer periodically checks how much audio data has been played and feeds the buffer with more data as needed.

private void Timer_Elapsed(object sender, System.Timers.ElapsedEventArgs e)
{
    Feed(GetPlayedSize());
}

The GetPlayedSize function uses the buffer's PlayPosition property to compute how many bytes the play cursor has advanced. Note that because the buffer plays in a loop, GetPlayedSize must detect when the play cursor has wrapped around and adjust the result accordingly.

private int GetPlayedSize()
{
    int pos = m_Buffer.PlayPosition;
    // The buffer is circular: if the play cursor has wrapped past the last
    // write position, add the buffer size before taking the difference.
    return pos < m_NextWrite ? pos + m_BufferBytes - m_NextWrite : pos - m_NextWrite;
}

The routine that feeds the buffer is called Feed, and it is shown in the code below. It calls SecondaryBuffer.Write, which pulls audio data from a stream and writes it into the buffer. In our case, the stream is just a wrapper around the PullAudioCallback delegate we received in the Play method.

private void Feed(int bytes)
{
    // Limit latency to a few milliseconds' worth of data
    int toCopy = Math.Min(bytes, MsToBytes(MaxLatencyMs)); // the cap constant's name is illustrative
    if (toCopy > 0)
    {
        // Restore the buffer if it was lost
        if (m_Buffer.Status.BufferLost)
            m_Buffer.Restore();

        // Copy data to the buffer
        m_Buffer.Write(m_NextWrite, m_PullStream, toCopy, LockFlag.None);

        m_NextWrite += toCopy;
        if (m_NextWrite >= m_BufferBytes)
            m_NextWrite -= m_BufferBytes;
    }
}

Note that we cap the amount of data fed to the buffer in order to reduce playback latency. Latency can be defined as the delay between a change in the incoming audio stream and the moment that change is actually heard. Without this cap, the average latency would be roughly equal to the total length of the buffer, which is unacceptable for a real-time synthesizer.
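To put numbers on this: with 16-bit mono audio at 44,100 Hz, AverageBytesPerSecond is 88,200, so the buffer created by the constructor (one second's worth of data) holds 88,200 bytes, and without the cap newly synthesized audio could sit almost a full second behind the play cursor before being heard. Capping each Feed call at, say, 50 ms of audio (about 4,410 bytes) keeps the freshly written data only a few tens of milliseconds away from the play cursor. (The 50 ms figure is an illustration, not necessarily the value used in the sample.)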


The Drum Machine Engine

A drum machine is an example of a real-time synthesizer: a set of sampled waveforms, one for each available drum sound (in musical terms, the "patches"), is mixed into the output stream according to some rhythmic pattern to simulate a drummer. It's as simple as it sounds, so let's look at the code!

The Core

The main elements of the drum machine are implemented in the Patch, Track, and Mixer classes (see Figure 2). All of these classes are implemented in rhythm.cs.

Figure 2. Class relationship diagram of rhythm.cs

The Patch class stores the waveform of a particular instrument. A Patch is initialized from a Stream object containing audio data in WAV format. I won't go into the details of the WAV file format here, but you can look at the WaveStream helper class to get the general idea.

For simplicity, Patch converts the audio data to mono by adding the left and right channels (if stereo data is supplied) and stores the result in an array of 32-bit integers. The actual data range is -32768 to 32767, so we can mix several audio streams together without worrying about overflow. The PatchReader class reads audio data from a Patch and mixes it into a destination buffer. The reader has to be separate from the actual Patch data because the same Patch may be playing several times at once at different positions. In particular, this happens when the same sound occurs several times within a very short interval.
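With the samples widened to 32 bits, mixing reduces to plain integer addition. A minimal sketch of what PatchReader.Mix might look like (the member names are assumptions based on how DoMix uses the method below):

// Mix up to 'count' samples of this patch into mixBuffer, starting at 'offset'.
// Returns false once the whole patch has been consumed, so the mixer can drop the reader.
public bool Mix(int[] mixBuffer, int offset, int count)
{
    int toMix = Math.Min(count, m_Data.Length - m_Position);
    for (int i = 0; i < toMix; i++)
        mixBuffer[offset + i] += m_Data[m_Position + i]; // 32-bit headroom prevents overflow
    m_Position += toMix;
    return m_Position < m_Data.Length;
}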

The Track class represents the sequence of events to be played by a single instrument. A track is initialized with a Patch, a number of time slots (that is, possible beat positions), and optionally an initial pattern. The pattern is simply a Boolean array whose length equals the number of time slots in the track. Setting an element of the array to true means the selected patch should be played at that beat position. The Track.GetBeat method returns a PatchReader instance for a given beat position, or null if nothing should be played at that beat.
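Based on that description, GetBeat amounts to little more than a lookup into the pattern array. A hedged sketch (the member names and the way a reader is created are illustrative, not taken from the sample):

public PatchReader GetBeat(int beat)
{
    // A new reader per hit lets the same patch overlap with an earlier, still-playing hit.
    return m_Pattern[beat] ? new PatchReader(m_Patch) : null;
}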

The Mixer class generates the actual audio stream from a given set of tracks, so it implements a method matching the PullAudioCallback signature. The mixer also keeps a list of the PatchReader instances currently playing at the current tick.

The hardest work is done inside the DoMix method, which you can see in the code below. It synthesizes the requested number of samples into the output stream, advancing the current tick position as it goes. To generate a group of samples, the mixer simply adds together the sounds playing at the current beat.

private void DoMix(int samples)
{
    // Grow the mix buffer as necessary
    if (m_MixBuffer == null || m_MixBuffer.Length < samples)
        m_MixBuffer = new int[samples];

    // Clear the mix buffer
    Array.Clear(m_MixBuffer, 0, m_MixBuffer.Length);

    int pos = 0;
    while (pos < samples)
    {
        // Load the patches for the current beat
        if (m_TickLeft == 0)
        {
            DoTick();
            lock (m_BpmLock)
                m_TickLeft = m_TickPeriod;
        }

        int toMix = Math.Min(samples - pos, m_TickLeft);

        // Mix the currently playing streams
        for (int i = m_Readers.Count - 1; i >= 0; i--)
        {
            PatchReader r = (PatchReader)m_Readers[i];
            if (!r.Mix(m_MixBuffer, pos, toMix))
                m_Readers.RemoveAt(i);
        }

        m_TickLeft -= toMix;
        pos += toMix;
    }
}

To compute the number of audio samples corresponding to one time slot at a given tempo, the mixer uses the following formula: (SamplingRate * 60 / BPM) / Resolution, where SamplingRate is the player's sampling frequency in Hz, Resolution is the number of time slots per beat, and BPM is the tempo in beats per minute. The BPM property applies this formula to initialize the m_TickPeriod member variable.
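Here is a sketch of how the BPM property might apply that formula. The member names (m_Bpm, m_Resolution, m_Player) are assumptions; the lock mirrors the one taken in DoMix so the tick period cannot change in the middle of a mixing pass.

public int BPM
{
    get { return m_Bpm; }
    set
    {
        lock (m_BpmLock)
        {
            m_Bpm = value;
            // samples per time slot = (samples per beat) / (slots per beat)
            m_TickPeriod = (m_Player.SamplingRate * 60 / m_Bpm) / m_Resolution;
        }
    }
}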

Putting the Code Together

Now that we have all the elements needed to implement the drum machine, the only thing left is to wire them together. Here is the order of operations:

• Create a streaming audio player.
• Create a mixer.
• Create a set of drum sounds (patches) from WAV files or resources.
• Add a track to the mixer for each sound to be played.
• Define the pattern to play for each track.
• Start the player, using the mixer as the data source.

As you can see in the code below, the RythmMachineApp class does exactly that.

public RythmMachineApp(Control control, IAudioPlayer player)
{
    int measuresPerBeat = 2;
    Type resType = control.GetType();

    // TrackLength: number of time slots per track (constant defined elsewhere in the sample)
    Mixer = new Chronotron.Rythm.Mixer(player, measuresPerBeat);
    Mixer.Add(new Track("Bass drum",
        new Patch(resType, "media.bass.wav"), TrackLength));
    Mixer.Add(new Track("Snare drum",
        new Patch(resType, "media.snare.wav"), TrackLength));
    Mixer.Add(new Track("Closed hat",
        new Patch(resType, "media.closed.wav"), TrackLength));
    Mixer.Add(new Track("Open hat",
        new Patch(resType, "media.open.wav"), TrackLength));
    Mixer.Add(new Track("Rim shot",   // track name assumed from the resource name
        new Patch(resType, "media.rim.wav"), TrackLength));

    // Init with some preset pattern
    Mixer["Bass drum"].Init(new byte[]
        { 1, 0, 0, 1, 1, 0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0 });
    Mixer["Snare drum"].Init(new byte[]
        { 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0 });
    Mixer["Closed hat"].Init(new byte[]
        { 1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0 });
    Mixer["Open hat"].Init(new byte[]
        { 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 1 });

    BuildUI(control);

    m_Timer = new Timer();
    m_Timer.Interval = 250;
    m_Timer.Tick += new EventHandler(m_Timer_Tick);
    m_Timer.Enabled = true;
}

And that's all there is to it. The rest of the code implements a simple user interface for the drum machine, so the user can build rhythm patterns on the screen.


Summary

This article has described how to create a streaming buffer with the managed DirectSound API and how to generate an audio stream on the fly. I hope you have some fun playing with the sample code. You might also consider a few improvements, such as support for loading and saving patterns, a user interface control for the tempo, stereo playback, and so on. I will leave that work to you, because it would be unfair for me to have all the fun...

Finally, I would like to thank Duncan for letting me publish this article in his Coding4Fun column. I hope you enjoy using this code as much as I enjoyed writing it. In a future article, I will explore how to port the drum machine to the Compact Framework so it can run on a Pocket PC.


Coding Challenge

At the end of these Coding4Fun columns, Duncan usually poses a few coding challenges that you can take on if you are interested. Having read this article, I invite you to follow my example and create some code of your own using DirectX. Post your work to GotDotNet and send Duncan an e-mail (duncanma@microsoft.com) describing what you have done and what you find interesting about it. You can always send your comments to Duncan, but please send only links to code samples rather than the samples themselves.

Do you have a hobbyist topic you would like to see covered? Is there someone you would like to see as a guest columnist? Let Duncan know at duncanma@microsoft.com.


Resources

The core of this article was built with the DirectX 9 SDK (available here), but you should also look at the DirectX section of the MSDN Library (at http://msdn.microsoft.com/nhp/default.asp?contentId=28000410). If you prefer a multimedia introduction to the topic, an episode of The .NET Show (http://msdn.microsoft.com/theshow/episode037/default.asp) also focuses on managed DirectX.

Coding4Fun

Ianier Munoz lives in Metz, France, and works as a senior consultant and analyst for an international consulting firm in Luxembourg. He has created several popular multimedia applications, such as Chronotron, Adapt-X, and American DJ Pro-Mix. You can reach him through http://www.chronotron.com.



