Clone Audio object - JavaScript

I am aware of how to clone an object, but I'm wondering, how do I clone an Audio object? Should I clone it differently than I would a plain object?
To "illustrate" what I mean:
var audio = new Audio("file.mp3");
var audio2 = $.extend({}, audio); // Clones `audio`
Is this the correct way to do this?
Reason why I'm asking this is that I want to be able to play the same sound multiple times simultaneously.

I had exactly the same predicament as originally raised. The following worked perfectly well for me :
var audio2 = audio.cloneNode();
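For the original goal of playing the same sound several times simultaneously, that one line is enough; a minimal sketch (file name taken from the question):
var audio = new Audio("file.mp3");
function playShot() {
  // Each clone gets the same src but its own playback position,
  // so repeated calls overlap instead of restarting the sound.
  audio.cloneNode().play();
}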

This question is ancient in JavaScript years. I think your code is (was) downloading the audio again, and that was the cause of your delay. If you grab the audio file once and store it in a blob, you can then use that blob as the source for new Audio elements.
let fileBlob;
fetch("file.mp3")
  .then(function(response) { return response.blob(); })
  .then(function(blob) {
    fileBlob = URL.createObjectURL(blob);
    new Audio(fileBlob); // forces a request for the blob
  });
...
new Audio(fileBlob).play(); // fetches the audio file from the blob
You can also do a lot of stuff with the web audio api, but it has a slightly steeper learning curve.

If the audio data is held in a Blob (audio_1 below), you can clone the Blob itself, either with
#1
let audio_2 = audio_1.slice()
or
#2
let audio_2 = new Blob([audio_1])
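Either copy can then be played through an object URL; a small sketch (assuming audio_1 is a Blob of audio data):
let audio_2 = new Blob([audio_1]);              // clone the blob
new Audio(URL.createObjectURL(audio_2)).play(); // play the clone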

Is it possible to merge multiple webm blobs/clips into one sequential video clientside?

I already looked at this question -
Concatenate parts of two or more webm video blobs
- and tried the sample code here - https://developer.mozilla.org/en-US/docs/Web/API/MediaSource - (without modifications) in hopes of transforming the blobs into ArrayBuffers and appending those to a SourceBuffer of the MediaSource API, but even the sample code wasn't working in my Chrome browser, for which it is said to be compatible.
The crux of my problem is that I can't combine multiple webm blob clips into one without incorrect playback after the first time it plays. To go straight to the problem, please scroll to the line after the first two chunks of code; for background, continue reading.
I am designing a web application that allows a presenter to record scenes of him/herself explaining charts and videos.
I am using the MediaRecorder WebAPI to record video on Chrome/Firefox. (Side question - is there any other way (besides Flash) that I can record video/audio via webcam and mic? MediaRecorder is not supported on non-Chrome/Firefox user agents.)
navigator.mediaDevices.getUserMedia(constraints)
  .then(gotMedia)
  .catch(e => { console.error('getUserMedia() failed: ' + e); });

function gotMedia(stream) {
  recording = true;
  theStream = stream;
  vid.src = URL.createObjectURL(theStream);
  try {
    recorder = new MediaRecorder(stream);
  } catch (e) {
    console.error('Exception while creating MediaRecorder: ' + e);
    return;
  }
  theRecorder = recorder;
  recorder.ondataavailable = (event) => {
    tempScene.push(event.data);
  };
  theRecorder.start(100);
}

function finishRecording() {
  recording = false;
  theRecorder.stop();
  theStream.getTracks().forEach(track => { track.stop(); });
  while (tempScene[0].size != 1) {
    tempScene.splice(0, 1);
  }
  console.log(tempScene);
  scenes.push(tempScene);
  tempScene = [];
}
The function finishRecording gets called and a scene (an array of blobs of mimetype 'video/webm') gets saved to the scenes array. After it gets saved, the user can record and save more scenes via this process. He can then view a certain scene using the following chunk of code.
function showScene(sceneNum) {
  var sceneBlob = new Blob(scenes[sceneNum], {type: 'video/webm; codecs=vorbis,vp8'});
  vid.src = URL.createObjectURL(sceneBlob);
  vid.play();
}
In the above code, the blob array for the scene gets turned into one big blob, for which a URL is created and assigned to the video's src attribute, so -
[blob, blob, blob] => sceneBlob (an object, not array)
Up until this point everything works fine and dandy. Here is where the issue starts
I try to merge all the scenes into one by combining the blob arrays of each scene into one long blob array. The point of this functionality is that the user can order the scenes however he/she deems fit and can choose not to include a scene. So they aren't necessarily in the same order as they were recorded, so -
scene 1: [blob-1, blob-1] scene 2: [blob-2, blob-2]
final: [blob-2, blob-2, blob-1, blob-1]
and then I make a blob of the final blob array, so -
final: [blob, blob, blob, blob] => finalBlob
The code is below for merging the scene blob arrays
function mergeScenes() {
  scenes[scenes.length] = [];
  for (var i = 0; i < scenes.length - 1; i++) {
    scenes[scenes.length - 1] = scenes[scenes.length - 1].concat(scenes[i]);
  }
  mergedScenes = scenes[scenes.length - 1];
  console.log(scenes[scenes.length - 1]);
}
This final scene can be viewed by using the showScene function in the second small chunk of code because it is appended as the last scene in the scenes array. When the video is played with the showScene function it plays all the scenes all the way through. However, if I press play on the video after it plays through the first time, it only plays the last scene.
Also, if I download and play the video through my browser, the first time around it plays correctly - the subsequent times, I see the same error.
What am I doing wrong? How can I merge the files into one video containing all the scenes? Thank you very much for your time in reading this and helping me, and please let me know if I need to clarify anything.
I am using a <video> element to display the scenes.
The file's headers (metadata) should only be appended to the first chunk of data you've got.
You can't make a new video file by just pasting one after the other; they've got a structure.
So how to work around this?
If I understood your problem correctly, what you need is to be able to merge all the recorded videos, just as if the recording had only been paused.
Well, this can be achieved thanks to the MediaRecorder.pause() method.
You can keep the stream open and simply pause the MediaRecorder. At each pause event, you'll be able to generate a new video containing all the frames from the beginning of the recording up until this event.
Here is an external demo, because stacksnippets don't work well with gUM...
And if you ever needed to also have shorter videos from between each resume and pause event, you could simply create new MediaRecorders for these smaller parts, while keeping the big one running.
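A rough sketch of that pause/resume idea, reusing the question's vid element and constraints object (only an illustration, not the demo code):
let recorder;
const chunks = []; // every chunk since recording began

navigator.mediaDevices.getUserMedia(constraints).then(stream => {
  recorder = new MediaRecorder(stream);
  recorder.ondataavailable = e => chunks.push(e.data);
  recorder.start(100); // one long recording for the whole session
});

// Between scenes, pause instead of stopping the recorder:
function endScene() {
  recorder.pause();
  // A Blob of all chunks so far is one valid webm file containing
  // everything recorded up to this pause.
}

function startNextScene() {
  recorder.resume();
}

function finishAll() {
  recorder.onstop = () => {
    vid.src = URL.createObjectURL(new Blob(chunks, { type: 'video/webm' }));
    vid.play();
  };
  recorder.stop();
}
Note that this keeps the scenes in recording order; it trades the reordering feature for a file that plays back correctly.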

JS Audio Not Working

One of the sound files won't play. The two following pieces of code are identical except for the file name.
This doesn't work:
var rewardSound = new Audio("audio/WrongAnswerSound.wav");
function rightAnswer(){
rewardSound.play();
}
However this works fine:
var rewardSound = new Audio("audio/CorrectAnswerSound.wav");
function rightAnswer(){
rewardSound.play();
}
The image is from the File Manager in cPanel. I can play both sounds from the File Manager itself, but I can't play WrongAnswerSound.wav from the JS code. What am I doing wrong?
You kind of have the right idea.
Set a variable for the correct sound by creating a new Audio object:
var correctSound = new Audio("audio/CorrectAnswerSound.wav");
Set a variable for the wrong sound by creating another new Audio object:
var wrongSound = new Audio("audio/WrongAnswerSound.wav");
Now both of these new objects already have a play method, which they inherit from the Audio object. So all you have to do to get these sounds to play is this:
correctSound.play();
wrongSound.play();

Concerning Web Audio nodes, what does .connect() do?

I'm trying to follow the example here, which is basically a copy-and-paste of this.
I think I've got most of the parts down, except for all the node.connect()'s.
From what I understand, this sequence of code is needed to provide the audio analyzer with an audio stream:
var source = audioCtx.createMediaStreamSource(stream);
source.connect(analyser);
analyser.connect(audioCtx.destination);
I can't seem to make sense of it as it looks rather ouroboros-y to me.
And unfortunately, I can't seem to find any documentation on .connect(), so I'm quite lost and would appreciate any clarification!
Oh and I'm loading an .mp3 via pure javascript new Audio('db.mp3').play(); and am trying to use that as the source without creating an <audio> element.
Can a mediaStream object be created from this to feed into .createMediaStreamSource(stream)?
connect simply defines the output for the filters.
In this case, your source loads the stream into the buffer and writes to the input of the next filter, which is defined by the connect function. This is repeated for your analyser filter.
Think of it as pipes.
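As a small illustration of the pipe idea, and of the asker's new Audio('db.mp3') case: createMediaElementSource is the standard way to wrap an existing media element (which is what new Audio() creates) as a source node, so no <audio> tag in the page is needed. A minimal sketch:
const audioCtx = new (window.AudioContext || window.webkitAudioContext)();
const audioEl = new Audio('db.mp3');                       // the question's audio
const source = audioCtx.createMediaElementSource(audioEl); // element -> source node
const analyser = audioCtx.createAnalyser();

source.connect(analyser);               // source output "pipes" into the analyser
analyser.connect(audioCtx.destination); // analyser output "pipes" to the speakers
audioEl.play();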
Here is a sample code snippet that I wrote a few years back using the Web Audio API.
this.scriptProcessor = this.audioContext.createScriptProcessor(
    this.scriptProcessorBufferSize,
    this.scriptProcessorInputChannels,
    this.scriptProcessorOutputChannels);
this.scriptProcessor.connect(this.audioContext.destination);
this.scriptProcessor.onaudioprocess = updateMediaControl.bind(this);

// Set up the Gain Node with a default value of 1 (max volume).
this.gainNode = this.audioContext.createGain();
this.gainNode.connect(this.audioContext.destination);
this.gainNode.gain.value = 1;

sewi.AudioResourceViewer.prototype.playAudio = function() {
  if (this.audioBuffer) {
    this.source = this.audioContext.createBufferSource();
    this.source.buffer = this.audioBuffer;
    this.source.connect(this.gainNode);
    this.source.connect(this.scriptProcessor);
    this.beginTime = Date.now();
    this.source.start(0, this.offset);
    this.isPlaying = true;
    this.controls.update({playing: this.isPlaying});
    updateGraphPlaybackPosition.call(this, this.offset);
  }
};
So as you can see, my source is connected to a gainNode, which is connected to a scriptProcessor. When the audio starts playing, the data is passed from source->gainNode->destination and from source->scriptProcessor->destination, flowing through the "pipes" that connect them, which are defined by connect(). When the audio data passes through the gainNode, the volume can be adjusted by changing the amplitude of the audio wave. After that, it is passed to the script processor so that events can be attached and triggered while the audio is being processed.

How to capture a photo from the webcam in the browser and save it in the server?

I have seen this done by many websites, but I wonder how they do it. Some even allow one to crop the image. Is there a standard library or package for this?
You don't need any library; it can be done in a few steps. (Note that the following is ActionScript, i.e. a Flash-based approach.) I assume you are familiar with the webcam and able to show its signal in a Video object. If you aren't, in short it reads as:
var video: Video = new Video();
addChild(video);
video.smoothing = true;
video.attachCamera(camera); //Camera reference
video.width = someWidth;
video.height = someHeight;
Because the Video object implements IBitmapDrawable, you can draw it into a Bitmap and do whatever you want with it.
var bitmapData:BitmapData = new BitmapData(video.width, video.height);
//Tada! You have a screenshot of the current frame from the video object
bitmapData.draw(video);
//For testing, add as Bitmap
addChild(new Bitmap(bitmapData));
As for sending it to the server, you need some server-side implementation.
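In today's browsers the same capture can also be done without Flash; a sketch with getUserMedia and a canvas, where the '/upload' endpoint is only an assumed server route:
// Show the webcam stream in a <video> element.
const video = document.createElement('video');
navigator.mediaDevices.getUserMedia({ video: true }).then(stream => {
  video.srcObject = stream;
  return video.play();
});

// Draw the current frame to a canvas and POST it to the server.
function takePhoto() {
  const canvas = document.createElement('canvas');
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext('2d').drawImage(video, 0, 0);
  canvas.toBlob(blob => {
    const form = new FormData();
    form.append('photo', blob, 'photo.png');
    fetch('/upload', { method: 'POST', body: form }); // hypothetical endpoint
  }, 'image/png');
}
Cropping then amounts to drawing only a sub-rectangle of the video onto the canvas via the longer drawImage signature.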
Here is a very useful blog post I came across (not mine):
http://matthewschrager.com/2013/05/25/how-to-take-webcam-pictures-from-browser-and-store-server-side/

Save audio in a buffer for later playback

With the Web Audio API, I want to save audio in a buffer for later use. I've found some examples of saving audio to disk, but I only want to store it in memory. I tried connecting the output of the last AudioNode in the chain to an AudioBuffer, but it seems AudioBuffer doesn't have a method for accepting inputs.
var contextClass = (window.AudioContext || window.webkitAudioContext);
var context = new contextClass();
// Output compressor
var compressor = context.createDynamicsCompressor();
compressor.connect(context.destination);
var music = context.createBufferSource();
// Load some content into music with XMLHttpRequest...
music.connect(compressor);
music.start(0);
// Set up recording buffer
var recordBuffer = context.createBuffer(2, 10000, 44100);
compressor.connect(recordBuffer);
// Failed to execute 'connect' on 'AudioNode': No function was found that matched the signature provided.
Is there something I can use instead of AudioBuffer to achieve this? Is there a way to do this without saving files to disk?
Well, it turns out Recorder.js does exactly what I wanted. I thought it was only for exporting to disk, but when I looked closer I realized it can save to buffers too. Hooray!
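For reference, a sketch of how that looks, assuming Matt Diamond's Recorder.js (method names per its README; check your copy of the library):
// Tap the last node in the chain (the compressor from the question).
var recorder = new Recorder(compressor);
recorder.record(); // start buffering the node's output in memory
// ... later ...
recorder.stop();
recorder.getBuffer(function(buffers) {
  // buffers is an array of Float32Arrays (one per channel),
  // held in memory rather than written to disk.
});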
