One of the sound files won't play. The following two pieces of code are identical except for the file name.
This doesn't work:
var rewardSound = new Audio("audio/WrongAnswerSound.wav");
function rightAnswer(){
rewardSound.play();
}
However this works fine:
var rewardSound = new Audio("audio/CorrectAnswerSound.wav");
function rightAnswer(){
rewardSound.play();
}
The image is from the File Manager in cPanel. I can play both sounds from the File Manager itself, but I can't play WrongAnswerSound.wav from the JS code. What am I doing wrong?
You kind of have the right idea.
Set a variable for the correct sound by creating a new Audio object:
var correctSound = new Audio("audio/CorrectAnswerSound.wav");
Set a variable for the wrong sound by creating another new Audio object:
var wrongSound = new Audio("audio/WrongAnswerSound.wav");
Now both of these new objects already have a play method, which they inherit from Audio. So all you have to do to get these sounds to play is this:
correctSound.play();
wrongSound.play();
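For example, a minimal sketch of wiring the two sounds into the quiz (the rightAnswer/wrongAnswer handler names are just placeholders for your own logic):

var correctSound = new Audio("audio/CorrectAnswerSound.wav");
var wrongSound = new Audio("audio/WrongAnswerSound.wav");

function rightAnswer() {
    correctSound.currentTime = 0; // rewind so repeated calls restart the clip
    correctSound.play();
}

function wrongAnswer() {
    wrongSound.currentTime = 0;
    wrongSound.play();
}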
I'm building an application using PizzicatoJS + HowlerJS. Those libraries essentially allow me to play multiple audio files at the same time. Imagine 4 audio tracks, each containing an instrument like guitar, bass, drums, vocals, etc.
Everything plays fine when using PizzicatoJS's Group functionality or running a forEach loop on all my Howl sounds and firing .play(). However, I would like to download the final resulting sound I am hearing from my speakers. Any idea on how to approach that?
I looked into OfflineAudioContext, but I am unsure on how to use it to generate an audio file. It looks like it needs an Audio source like an <audio> tag. Is what I'm trying to do possible? Any help is appreciated.
I think the OfflineAudioContext can help with your use case.
Let's say you want to create a file with a length of 10 seconds. It should contain one sound playing from the start up to second 8. And there is also another sound which is supposed to start at second 5 and should last until the end. Both sounds are AudioBuffers (named soundBuffer and anotherSoundBuffer) already.
You could arrange and combine the sounds as follows.
const sampleRate = 44100;
const offlineAudioContext = new OfflineAudioContext({
    length: sampleRate * 10,
    sampleRate
});

// An AudioBufferSourceNode takes its context as the first constructor argument.
const soundSourceNode = new AudioBufferSourceNode(offlineAudioContext, {
    buffer: soundBuffer
});

soundSourceNode.start(0); // play from the start ...
soundSourceNode.stop(8);  // ... up to second 8
soundSourceNode.connect(offlineAudioContext.destination);

const anotherSoundSourceNode = new AudioBufferSourceNode(offlineAudioContext, {
    buffer: anotherSoundBuffer
});

anotherSoundSourceNode.start(5); // start at second 5 ...
anotherSoundSourceNode.stop(10); // ... and last until the end
anotherSoundSourceNode.connect(offlineAudioContext.destination);

offlineAudioContext
    .startRendering()
    .then((audioBuffer) => {
        // save the resulting buffer as a file
    });
Now you can use a library to turn the resulting AudioBuffer into an encoded audio file. One library that does this, for example, is audiobuffer-to-wav.
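A minimal sketch of that last step, assuming audiobuffer-to-wav is installed and bundled (its default export takes an AudioBuffer and returns a WAV-encoded ArrayBuffer):

import toWav from 'audiobuffer-to-wav';

offlineAudioContext
    .startRendering()
    .then((audioBuffer) => {
        const wav = toWav(audioBuffer); // ArrayBuffer containing the WAV data
        const blob = new Blob([wav], { type: 'audio/wav' });

        // Trigger a download of the rendered mix.
        const anchor = document.createElement('a');
        anchor.href = URL.createObjectURL(blob);
        anchor.download = 'mix.wav';
        anchor.click();
    });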
Trying to follow the example here, which is basically a c&p of this
Think I got most of the parts down, except all the node.connect()'s
From what I understand, this sequence of code is needed to provide the audio analyzer with an audio stream:
var source = audioCtx.createMediaStreamSource(stream);
source.connect(analyser);
analyser.connect(audioCtx.destination);
I can't seem to make sense of it as it looks rather ouroboros-y to me.
And unfortunately, I can't seem to find any documentation on .connect(), so I'm quite lost and would appreciate any clarification!
Oh and I'm loading an .mp3 via pure javascript new Audio('db.mp3').play(); and am trying to use that as the source without creating an <audio> element.
Can a mediaStream object be created from this to feed into .createMediaStreamSource(stream)?
connect simply defines the output for a filter.
In this case, your source loads the stream into its buffer and writes it to the input of the next filter, which is defined by the connect call. The same is repeated for your analyser filter.
Think of it as pipes.
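Spelled out with the chain from the question (a sketch; stream is assumed to already exist):

const audioCtx = new AudioContext();
const analyser = audioCtx.createAnalyser();
const source = audioCtx.createMediaStreamSource(stream);

// Each connect() lays one "pipe" from a node's output to another node's input.
source.connect(analyser);               // source -> analyser (the analyser can now read the data)
analyser.connect(audioCtx.destination); // analyser -> speakers (pass-through, so you still hear it)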
Here is a sample code snippet that I wrote a few years back using the Web Audio API.
this.scriptProcessor = this.audioContext.createScriptProcessor(
    this.scriptProcessorBufferSize,
    this.scriptProcessorInputChannels,
    this.scriptProcessorOutputChannels);
this.scriptProcessor.connect(this.audioContext.destination);
this.scriptProcessor.onaudioprocess = updateMediaControl.bind(this);

// Set up the Gain Node with a default value of 1 (max volume).
this.gainNode = this.audioContext.createGain();
this.gainNode.connect(this.audioContext.destination);
this.gainNode.gain.value = 1;

sewi.AudioResourceViewer.prototype.playAudio = function() {
    if (this.audioBuffer) {
        this.source = this.audioContext.createBufferSource();
        this.source.buffer = this.audioBuffer;
        this.source.connect(this.gainNode);
        this.source.connect(this.scriptProcessor);
        this.beginTime = Date.now();
        this.source.start(0, this.offset);
        this.isPlaying = true;
        this.controls.update({playing: this.isPlaying});
        updateGraphPlaybackPosition.call(this, this.offset);
    }
};
So as you can see, my source is connected to both a gainNode and a scriptProcessor. When the audio starts playing, the data flows from source->gainNode->destination and from source->scriptProcessor->destination, through the "pipes" that connect them, which are defined by connect(). As the audio data passes through the gainNode, the volume can be adjusted by changing the amplitude of the audio wave. It also passes through the script processor, so events can be attached and triggered while the audio is being processed.
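For example (a sketch reusing the names above; viewer stands in for the sewi.AudioResourceViewer instance): the gain can be changed while the audio plays, and the script processor's callback sees every block of samples as it flows by:

// Halve the volume mid-playback.
viewer.gainNode.gain.value = 0.5;

// Bound to scriptProcessor.onaudioprocess above; fires for each block of audio.
function updateMediaControl(audioProcessingEvent) {
    var samplesPerBlock = audioProcessingEvent.inputBuffer.length;
    // ...update the playback position / UI here.
}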
How would I properly load an mp3 file into my JavaScript program? By this I mean: can I just type the name of any mp3 file I have saved on my computer, or does it have to be mentioned somewhere else in the code?
var sound1 = new Audio('file1.mp3');
So if I declared the variable sound1 to play file1, do I have to tell the program what file1 is? If so, how would I do so?
You can use file:///, followed by the file path of your mp3 file:
var sound1 = new Audio('file:///C:/Users/user/file1.mp3');
Replace
var sound1 = new Audio('file1.mp3');
with
var sound1 = new Audio('http://example.com/sub/file.mp3');
where the address is the address of your mp3 file on your server.
You can write a shortened version of the address if the file is in a subfolder, like this:
var sound1 = new Audio('/sub/file.mp3');
So I've used the Web Audio API to create music from code. I've used an OfflineAudioContext to create the music, and its oncomplete handler is similar to this:
function(e) {
    var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
    var song = audioCtx.createBufferSource();
    song.buffer = e.renderedBuffer;
    song.connect(audioCtx.destination);
    song.start();
}
This plays the sound, and it works. But I would like to store it as an <audio> element instead, because that makes it easier to play, loop, pause and stop, which I need in order to reuse the song.
Is it possible? I've been googling for days, but I can't find out how!
The idea was to use var song = new Audio() and something to copy the e.renderedBuffer to it.
Ok, so I found this code floating around: http://codedbot.com/questions-/911767/web-audio-api-output . I've created a copy here too: http://pastebin.com/rE9a1PaX .
I've managed to use this code to create and store an audio element on the fly, using the functions provided in that link.
offaudioctx.oncomplete = function(e) {
    var buffer = e.renderedBuffer;
    var UintWave = createWaveFileData(buffer);  // helper from the linked code
    var base64 = btoa(uint8ToString(UintWave)); // helper from the linked code
    songsarr.push(document.createElement('audio'));
    songsarr[songsarr.length - 1].src = "data:audio/wav;base64," + base64;
    console.log("completed!");
};
It's not pretty, but it works. I'm leaving everything here in case someone finds an easier way.
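For what it's worth, a slightly leaner variant of the same idea (still assuming the createWaveFileData helper from the linked code) skips the base64 step and hands the WAV bytes to an object URL instead:

offaudioctx.oncomplete = function(e) {
    var wavBytes = createWaveFileData(e.renderedBuffer); // helper from the linked code
    var blob = new Blob([wavBytes], { type: 'audio/wav' });
    var audio = document.createElement('audio');
    audio.src = URL.createObjectURL(blob);
    songsarr.push(audio);
};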
I am aware of how to clone an object, but I'm wondering, how do I clone an audio object? Should I clone it differently than I would clone an object?
To "illustrate" what I mean:
var audio = new Audio("file.mp3");
var audio2 = $.extend({}, audio); // Clones `audio`
Is this the correct way to do this?
Reason why I'm asking this is that I want to be able to play the same sound multiple times simultaneously.
I had exactly the same predicament as originally raised. The following worked perfectly well for me:
var audio2 = audio.cloneNode();
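For instance, to fire overlapping copies of the same sound (a sketch):

var audio = new Audio("file.mp3");

// Each click plays a fresh copy, even if the previous one is still sounding.
document.addEventListener("click", function() {
    audio.cloneNode().play();
});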
This question is ancient in javascript years. I think your code is (was) downloading the audio again, and that was the cause of your delay. If you grab the audio file once and store it in a blob, you can then use that blob as the source for new Audio objects.
let fileBlob;
fetch("file.mp3")
    .then(function(response) { return response.blob(); })
    .then(function(blob) {
        fileBlob = URL.createObjectURL(blob);
        new Audio(fileBlob); // forces a request for the blob
    });
...
new Audio(fileBlob).play(); // fetches the audio file from the blob
You can also do a lot of stuff with the web audio api, but it has a slightly steeper learning curve.
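For comparison, a sketch of the Web Audio route: decode the file once into an AudioBuffer, then start a new source node per playback (buffer source nodes are one-shot, so each play needs a fresh node, but creating one is cheap):

const audioCtx = new AudioContext();
let decodedBuffer;

fetch("file.mp3")
    .then((response) => response.arrayBuffer())
    .then((data) => audioCtx.decodeAudioData(data))
    .then((buffer) => { decodedBuffer = buffer; });

function playSound() {
    const source = audioCtx.createBufferSource();
    source.buffer = decodedBuffer;
    source.connect(audioCtx.destination);
    source.start(); // overlapping calls simply overlap; no cloning needed
}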
If audio_1 is a Blob, either of these makes a copy:
#1
let audio_2 = audio_1.slice();
or
#2
let audio_2 = new Blob([audio_1]);