Reggae tutorial: Playing a sound from file
Grzegorz Kraszewski
Playing a sound file from disk is one of the most common media-related tasks. Reggae can perform it with a few lines of code. Using Reggae for audio playback has several advantages:
- Wide range of supported audio formats. A codec is selected and loaded by Reggae automatically.
- Playback is asynchronous. Reggae offloads decoding and playback to a dedicated process. The main application may perform other tasks during playback. It gets informed when the playback ends.
- Reggae streams audio from disk, so it does not load the whole file into memory. Double buffering is fully automatic.
- Audio is played through a selected AHI unit. Multiple sounds (up to 32, depending on the user's AHI settings) may be played simultaneously.
Playing audio directly from disk is best suited for long sounds without low latency requirements. A typical example is a music player, or background music in a game.
From Reggae's point of view, the task of playing audio from disk can be divided into two major parts. The first is to get raw audio samples out of the encoded file. The second is to feed the audio data to the output.
Opening a sound file
This part of the job is highly automated. Reggae recognizes the file format and builds a complete decoding pipeline for the file with a single function call. The result is returned to the application as a single, opaque object (it may contain many objects inside, but this is irrelevant to the application programmer).
Object* media;
media = MediaNewObject(
MMA_StreamType, (ULONG)"file.stream",
MMA_StreamName, (ULONG)"RAM:sound",
MMA_MediaType, MMT_SOUND,
TAG_END);
This single call creates a complete decoding infrastructure for the specified file. The data source is specified by two tags, MMA_StreamName and MMA_StreamType. The first one is the name of the source; for files it is simply the path to the file, which may be absolute (as in the example), relative to the current directory, or relative to the program executable location (using the PROGDIR: assign). MMA_StreamType specifies which Reggae stream class (or "transport") should be used; file.stream is for disk based files (and anything else recognized by DOS as a filesystem).
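For example, a sound shipped alongside the program could be opened with a PROGDIR:-relative name (the file name used here is purely hypothetical):
media = MediaNewObject(
    MMA_StreamType, (ULONG)"file.stream",
    MMA_StreamName, (ULONG)"PROGDIR:sounds/intro.wav",
    MMA_MediaType, MMT_SOUND,
TAG_END);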
The last tag acts as a filter. If Reggae recognizes the file but it is not a sound, the file will be rejected and the function will return NULL. Of course, if the file is not recognized at all, NULL is returned as well. Checking the result of MediaNewObject() against NULL is a very good idea.
In case of success, media contains a pointer to a Reggae object having at least one output data port, port 0.
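A minimal sketch of such a check (the error message and the way the failure is handled are, of course, up to the application):
if (!media)
{
    PutStr("Cannot open the sound file.\n");
    return RETURN_ERROR;
}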
Creating output
The second step is to add an audio output object to the Reggae processing pipeline. Then one can "run" the pipeline, which results in playing the file. The output object belongs to the audio.output class. Before an object can be created, the class must be loaded from disk, which is done by opening the class with OpenLibrary().
struct Library* AudioOutputBase;
AudioOutputBase = OpenLibrary("multimedia/audio.output", 51);
It is worth noting that audio.output has no specific functions in its shared library API (this is true for all Reggae classes except the main multimedia.class). Therefore the name of the variable holding the library base is completely irrelevant (it is never used implicitly) and may be anything, "hja76_d62eg" for example. The name used in the example is a bit more readable, however.
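As with any shared library, the result of OpenLibrary() should be checked, and the base should be closed when the program exits; a minimal sketch:
if (!AudioOutputBase)
{
    PutStr("Cannot open multimedia/audio.output v51.\n");
    return RETURN_FAIL;
}

/* ... later, at program exit ... */

CloseLibrary(AudioOutputBase);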
Once the class is open, an instance of it may be created:
Object* play;
play = NewObject(NULL, "audio.output", TAG_END);
The instance is created with a generic NewObject() call. There are no attribute tags; the output object will read all sound properties from the media object when they are connected together. Again, checking the return value here is a good idea. If both objects are ready, let's connect them:
MediaConnectTagList(media, 0, play, 0, NULL);
Output port 0 of the media object is connected to input port 0 of the play object. Together the two objects form a complete Reggae processing pipeline. Now we are ready to play the sound. The whole playback control is done by talking to the output object.
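Putting the pieces together, a minimal sketch of the complete setup with basic error checking might look as follows (cleanup of the objects and the library base at program exit is omitted, and the error handling shown is only an illustration):
struct Library *AudioOutputBase;
Object *media = NULL, *play = NULL;

if ((AudioOutputBase = OpenLibrary("multimedia/audio.output", 51)))
{
    media = MediaNewObject(
        MMA_StreamType, (ULONG)"file.stream",
        MMA_StreamName, (ULONG)"RAM:sound",
        MMA_MediaType, MMT_SOUND,
    TAG_END);

    play = NewObject(NULL, "audio.output", TAG_END);

    if (media && play)
    {
        MediaConnectTagList(media, 0, play, 0, NULL);
        /* the pipeline is ready, playback control follows below */
    }
}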
Making noise
Playback is controlled with three methods, MMM_Play(), MMM_Stop() and MMM_Pause(), performed on the audio.output instance.
- MMM_Play() starts playback if the object is stopped and is ignored when the object is already playing.
- MMM_Stop() stops playback and rewinds the audio stream to the start (if possible).
- MMM_Pause() (available since version 51.14 of audio.output) stops playback but does not rewind the audio stream. A following MMM_Play() will continue from the paused position.
All the methods are performed immediately, so just
DoMethod(play, MMM_Play);
starts the playback and
DoMethod(play, MMM_Stop);
stops it at any time. All methods are asynchronous to the caller and return immediately. Even if the MMM_Play() setup time is long (because of prebuffering, for example), the calling process is not blocked, because the setup is done by the audio.output process.
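For example, pausing and then resuming playback (assuming audio.output 51.14 or newer, as noted above) is just:
DoMethod(play, MMM_Pause);   /* playback stops, the stream position is kept */
/* ... some time later ... */
DoMethod(play, MMM_Play);    /* playback continues from the paused position */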
Waiting for end of sound
Because audio.output plays the sound asynchronously, there must be a way to inform the main process about the end of the sound. By "sound end" I mean either the actual end of the audio stream, or a call to MMM_Stop(). This way the application programmer need not write separate code for handling natural and forced playback stops.
The class offers two methods for signalling the sound end event: the audio process can either send a signal or reply a message. The application specifies the chosen method and its parameters by performing one of the two methods described below on the audio.output object. These methods are usually called before playback is started, but may also be called when the object is already playing. The latter solution is tricky, however, as the sound may be very short, so the method may be called after the sound has already ended. In that case the signalling request will never be triggered.
MMM_SignalAtEnd() should be used when we want to receive a signal to Wait() on. It has two parameters: a pointer to the process to be signalled and the signal number (not mask!) to be sent. We usually want to signal ourselves, but it is not a requirement, so process A can start playback while the signal is received by process B. A typical usage may look like this:
DoMethod(play, MMM_SignalAtEnd, FindTask(NULL), SIGBREAKB_CTRL_C);
DoMethod(play, MMM_Play);
Wait(SIGBREAKF_CTRL_C);
In this code we request to be signalled with the system CTRL-C signal. It can of course be an allocated private signal instead. Note that the MMM_SignalAtEnd() method expects a signal number, while the following Wait() needs a signal mask.
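If a private signal is preferred over CTRL-C, it can be allocated from exec.library; a minimal sketch (note again that MMM_SignalAtEnd() takes the number, while Wait() takes the mask built from it):
BYTE sigbit;

if ((sigbit = AllocSignal(-1)) != -1)
{
    DoMethod(play, MMM_SignalAtEnd, FindTask(NULL), sigbit);
    DoMethod(play, MMM_Play);
    Wait(1L << sigbit);      /* convert the signal number to a mask */
    FreeSignal(sigbit);
}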
A complete example source code using a signal
MMM_ReplyMsgAtEnd() signals the end of the sound by sending a system message, prepared by the application, to a message port also set up by the application. This method is especially useful when an application uses multiple sounds at once: the number of signals available to a process is very limited, while the number of created messages is limited only by available memory. The method is also useful if the application already creates a message port for other purposes; audio end messages can then be directed to this port and distinguished by their contents. Typical usage looks as follows:
struct MsgPort *port;   /* created elsewhere */
struct Message *msg;    /* allocated elsewhere */
msg->mn_Node.ln_Type = NT_MESSAGE;              /* mark the node as a message */
msg->mn_Length = sizeof(struct Message);
msg->mn_ReplyPort = port;                       /* port receiving the message at sound end */
DoMethod(play, MMM_Sound_ReplyMsgAtEnd, msg);   /* register the message */
DoMethod(play, MMM_Play);
WaitPort(port);                                 /* block until the sound ends */
GetMsg(port);                                   /* remove the message from the port */
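The port and the message used above have to be created somewhere. One possible way, using standard exec.library calls (the allocation scheme shown here is only an illustration), is:
struct MsgPort *port;
struct Message *msg;

if ((port = CreateMsgPort()))
{
    if ((msg = (struct Message*)AllocVec(sizeof(struct Message), MEMF_CLEAR)))
    {
        /* fill in mn_Node.ln_Type, mn_Length and mn_ReplyPort as shown above */
        /* ... start the playback, WaitPort(), GetMsg() ... */
        FreeVec(msg);
    }
    DeleteMsgPort(port);
}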
The main difference between the two methods is that message signalling is "one-shot": after the message has been sent to the application's port, it must be removed from the port and reinitialized before it can be reused. The signal method may be used repeatedly, which is convenient when a short sound is triggered multiple times.

A complete example source code using a message