With the Web Audio API, I want to save audio in a buffer for later use. I've found some examples of saving audio to disk, but I only want to store it in memory. I tried connecting the output of the last AudioNode in the chain to an AudioBuffer, but it seems AudioBuffer doesn't have a method for accepting inputs.
var contextClass = (window.AudioContext || window.webkitAudioContext);
var context = new contextClass();

// Output compressor
var compressor = context.createDynamicsCompressor();
compressor.connect(context.destination);
var music = context.createBufferSource();
// Load some content into music with XMLHttpRequest...
music.connect(compressor);
music.start(0);
// Set up recording buffer
var recordBuffer = context.createBuffer(2, 10000, 44100);
compressor.connect(recordBuffer);
// Failed to execute 'connect' on 'AudioNode': No function was found that matched the signature provided.
Is there something I can use instead of AudioBuffer to achieve this? Is there a way to do this without saving files to disk?
Well, turns out Recorder.js does exactly what I wanted. I thought it was only for exporting to disk, but when I looked closer I realized it can save to buffers too. Hooray!
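For anyone else landing here, this is roughly how it looks with Recorder.js hooked onto the compressor node from the snippet above (a sketch assuming the common Matt Diamond fork of Recorder.js; method names may differ in other forks):
// Attach Recorder.js to the node whose output should be captured.
var recorder = new Recorder(compressor);

recorder.record();            // start capturing everything flowing through the node
// ... later, when enough audio has been captured:
recorder.stop();
recorder.getBuffer(function(buffers) {
    // buffers is an array of Float32Arrays (one per channel), kept purely in memory
    recordBuffer = buffers;   // no file on disk involved
});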
I am trying to stream audio via WebSocket.
I can get an AudioBuffer from the microphone (or another source) via the Web Audio API and stream the raw audio buffer, but I think this would not be very efficient.
So I looked around for a way to encode the AudioBuffer somehow. If the Opus codec is not practicable,
I am open to alternatives and thankful for any hints in the right direction.
I have tried to use MediaRecorder (from the MediaStream Recording API), but it seems that API only supports plain recording, not streaming.
Here is the part where I get the raw AudioBuffer:
const handleSuccess = function(stream) {
    const context = new AudioContext();
    const source = context.createMediaStreamSource(stream);
    const processor = context.createScriptProcessor(16384, 1, 1);

    source.connect(processor);
    processor.connect(context.destination);

    processor.onaudioprocess = function(e) {
        const bufferLen = e.inputBuffer.length;
        const inputBuffer = new Float32Array(bufferLen);
        e.inputBuffer.copyFromChannel(inputBuffer, 0);
        let data_to_send = inputBuffer;
        // And send the Float32Array ...
    };
};

navigator.mediaDevices.getUserMedia({ audio: true, video: false })
    .then(handleSuccess);
So the main question is: how can I encode the AudioBuffer (and decode it at the receiver)?
Is there an API or library for this? Can I get the encoded buffer from another API in the browser?
The Web Audio API has a MediaStreamAudioDestinationNode (created with createMediaStreamDestination()) that exposes a .stream MediaStream, which you can then pass through the WebRTC API.
But if you are only dealing with a microphone input, you can pass that MediaStream directly to WebRTC; there is no need for the Web Audio step.
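A rough sketch of that routing for the case where you do want to process the audio first (names like micStream are my assumptions, not from the question's code):
// Route processed Web Audio output into a MediaStream for WebRTC.
const context = new AudioContext();
const micSource = context.createMediaStreamSource(micStream); // micStream from getUserMedia
const streamDestination = context.createMediaStreamDestination();
micSource.connect(streamDestination);

// streamDestination.stream is a regular MediaStream; WebRTC will Opus-encode its audio tracks.
const pc = new RTCPeerConnection();
streamDestination.stream.getAudioTracks()
    .forEach(track => pc.addTrack(track, streamDestination.stream));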
P.S.: for those who only want to encode to Opus, MediaRecorder is currently the only native way. It will incur a delay, it will generate a WebM file rather than just the raw data, and it will process the data no faster than real time.
The only other option right now is to write your own encoder and run it in WebAssembly.
Hopefully in the near future we'll have access to the WebCodecs API, which should solve this use case among others.
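For completeness, a minimal sketch of that MediaRecorder route; here socket is assumed to be an already open WebSocket and stream the MediaStream from getUserMedia:
// Encode to Opus (inside WebM) and push chunks over the WebSocket as they arrive.
const recorder = new MediaRecorder(stream, { mimeType: 'audio/webm;codecs=opus' });
recorder.ondataavailable = (e) => {
    if (e.data.size > 0) socket.send(e.data); // each chunk is a WebM fragment, not bare Opus packets
};
recorder.start(250); // ask for a chunk roughly every 250 ms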
I am trying to record and save sound clips from the user's microphone using the getUserMedia() and AudioContext APIs.
I have been able to do this with the MediaRecorder API, but unfortunately, that's not supported by Safari/iOS, so I would like to do this with just the AudioContext API and the buffer that comes from that.
I got things partially working with this tutorial from Google Web Fundamentals, but I can't figure out how to do the following steps they suggest.
var handleSuccess = function(stream) {
    var context = new AudioContext();
    var source = context.createMediaStreamSource(stream);
    var processor = context.createScriptProcessor(1024, 1, 1);

    source.connect(processor);
    processor.connect(context.destination);

    processor.onaudioprocess = function(e) {
        // ******
        // TUTORIAL SUGGESTS: Do something with the data, i.e. convert this to WAV
        // ******
        // I ASK: How can I get this data in a buffer and then convert it to WAV etc.??
        // *****
        console.log(e.inputBuffer);
    };
};

navigator.mediaDevices.getUserMedia({ audio: true, video: false })
    .then(handleSuccess);
As the tutorial says:
The data that is held in the buffers is the raw data from the
microphone and you have a number of options with what you can do with
the data:
Upload it straight to the server
Store it locally
Convert to a dedicated file format, such as WAV, and then save it to your servers or locally
I could do all this, but I can't figure out how to get the audio buffer once I stop the context.
With MediaRecorder you can do something like this:
mediaRecorder.ondataavailable = function(e) {
    chunks.push(e.data);
}
And then when you're done recording, you have a buffer in chunks. There must be a way to do this, as suggested by the tutorial, but I can't find the data to push into the buffer in the first code example.
Once I get the audio buffer I could convert it to WAV and make it into a blob etc.
Can anyone help me with this? (I don't want to use the MediaRecorder API)
e.inputBuffer.getChannelData(0)
Where 0 is the first channel. This should return a Float32Array with the raw PCM data, which you can then convert to an ArrayBuffer with e.inputBuffer.getChannelData(0).buffer and send to a worker that would convert it to the needed format.
.getChannelData() Docs: https://developer.mozilla.org/en-US/docs/Web/API/AudioBuffer/getChannelData.
About typed arrays: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Typed_arrays, https://javascript.info/arraybuffer-binary-arrays.
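As a concrete illustration of that (not part of the original answer), the chunks can be collected inside onaudioprocess and flattened once recording stops; processor is the ScriptProcessorNode from the question's code:
var chunks = [];

processor.onaudioprocess = function(e) {
    // Copy the channel data: the underlying buffer is reused by the audio engine.
    chunks.push(new Float32Array(e.inputBuffer.getChannelData(0)));
};

function mergeChunks() {
    var total = chunks.reduce(function(sum, c) { return sum + c.length; }, 0);
    var result = new Float32Array(total);
    var offset = 0;
    chunks.forEach(function(c) { result.set(c, offset); offset += c.length; });
    return result; // mono PCM at context.sampleRate, ready to be WAV-encoded
}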
Trying to follow the example here, which is basically a c&p of this
Think I got most of the parts down, except all the node.connect()'s
From what I understand, this sequence of code is needed to provide the audio analyzer with an audio stream:
var source = audioCtx.createMediaStreamSource(stream);
source.connect(analyser);
analyser.connect(audioCtx.destination);
I can't seem to make sense of it as it looks rather ouroboros-y to me.
And unfortunately, I can't seem to find any documentation on .connect(), so I'm quite lost and would appreciate any clarification!
Oh, and I'm loading an .mp3 via pure JavaScript (new Audio('db.mp3').play();) and am trying to use that as the source without creating an <audio> element.
Can a MediaStream object be created from this to feed into .createMediaStreamSource(stream)?
connect simply defines the output of each filter (node).
In this case, your source loads the stream into the buffer and writes to the input of the next filter, which is defined by the connect function. This is repeated for your analyser filter.
Think of it as pipes.
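Applied to the snippet in the question, the pipes look like this; using createMediaElementSource for the Audio object is my assumption, since an element created with new Audio() works without being added to the DOM:
var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
var audioEl = new Audio('db.mp3');

var source = audioCtx.createMediaElementSource(audioEl); // pipe: element -> source node
var analyser = audioCtx.createAnalyser();

source.connect(analyser);               // pipe: source -> analyser
analyser.connect(audioCtx.destination); // pipe: analyser -> speakers
audioEl.play();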
Here is a sample code snippet that I wrote a few years back using the Web Audio API.
this.scriptProcessor = this.audioContext.createScriptProcessor(this.scriptProcessorBufferSize,
                                                               this.scriptProcessorInputChannels,
                                                               this.scriptProcessorOutputChannels);
this.scriptProcessor.connect(this.audioContext.destination);
this.scriptProcessor.onaudioprocess = updateMediaControl.bind(this);

// Set up the Gain Node with a default value of 1 (max volume).
this.gainNode = this.audioContext.createGain();
this.gainNode.connect(this.audioContext.destination);
this.gainNode.gain.value = 1;

sewi.AudioResourceViewer.prototype.playAudio = function() {
    if (this.audioBuffer) {
        this.source = this.audioContext.createBufferSource();
        this.source.buffer = this.audioBuffer;
        this.source.connect(this.gainNode);
        this.source.connect(this.scriptProcessor);
        this.beginTime = Date.now();
        this.source.start(0, this.offset);
        this.isPlaying = true;
        this.controls.update({playing: this.isPlaying});
        updateGraphPlaybackPosition.call(this, this.offset);
    }
};
So as you can see, my source is connected to a gainNode and to a scriptProcessor. When the audio starts playing, the data flows from source -> gainNode -> destination and from source -> scriptProcessor -> destination, through the "pipes" that connect them, which are defined by connect(). As the audio data passes through the gainNode, the volume can be adjusted by changing the amplitude of the audio wave. After that, it is passed to the script processor so that events can be attached and triggered while the audio is being processed.
In JavaScript, how can I connect an audio context to a video that fetches its data from a blob (the video uses the MediaStream capabilities)? No matter what I do, the audio context returns an empty buffer. Is there any way to connect the two?
createMediaElementSource is probably not the right kind of processing node for this use case.
Rather, you are better off using the createMediaStreamSource() node from the Web Audio API if you are trying to handle a live audio stream rather than a fixed media source.
The createMediaStreamSource() method of the AudioContext Interface is used to create a new MediaStreamAudioSourceNode object, given a media stream (say, from a navigator.getUserMedia instance), the audio from which can then be played and manipulated.
The link has a more detailed example. However, the main difference with this MediaStreamAudioSourceNode is that it can only be created from a MediaStream that you get from a media server or locally (through getUserMedia). In my experience, I couldn't find any way to do it using only the blob URL from the <video> tag.
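A minimal sketch of that approach, assuming the stream comes from getUserMedia rather than from a blob URL:
navigator.mediaDevices.getUserMedia({ audio: true })
    .then(function(stream) {
        var audioCtx = new AudioContext();
        var sourceNode = audioCtx.createMediaStreamSource(stream);
        var analyser = audioCtx.createAnalyser();
        sourceNode.connect(analyser); // analyse or process the live audio here
    });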
While this is an old question, I've searched for something similar and found a solution I want to share.
To connect the Blob, you may use a new Response instance. Here is an example for creating a waveform visualizer.
var audioContext = new (window.AudioContext || window.webkitAudioContext)();
var analyser = audioContext.createAnalyser();
var dataArray = new Uint8Array(analyser.frequencyBinCount);
var arrayBuffer = await new Response(yourBlob).arrayBuffer();
var audioBuffer = await audioContext.decodeAudioData(arrayBuffer);
var source = audioContext.createBufferSource();
source.buffer = audioBuffer;
source.connect(analyser);
source.start(0);
Note: yourBlob needs to be a Blob instance.
You may find this fiddle useful: it records video and audio for 5 seconds, turns the recording into a Blob, and then plays it back including an audio waveform visualization.
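Note that the snippet above only wires up the analyser; dataArray still has to be polled in a draw loop, roughly like this (a sketch, not taken from the linked fiddle):
function draw() {
    requestAnimationFrame(draw);
    analyser.getByteTimeDomainData(dataArray); // fills dataArray with the current waveform samples
    // ... render dataArray onto a canvas here ...
}
draw();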
I have a problem with my little project.
Every time the music player loads new songs into the playlist, or you press a song on the list to play it, it uses a lot of memory, and the usage stays high until you shut it down. I think it's the FileReader API that uses the memory every time, but I'm also loading ID3 information with the jDataView.js script, which I also think takes a lot of memory.
Do you have any suggestions for loading, storing, and playing songs with the FileReader without taking up so much memory? I've tried to see if it was possible to clear the FileReader after use, but I couldn't find anything. I've only tested in Chrome.
UPDATE:
I have tested my project and found out that it's when I'm trying to load the data string that the memory is used.
reader.onloadend = function(evt) {
    if (typeof(e) != "undefined") {
        e.pause();
    }
    e = new Audio();
    e.src = evt.target.result; // evt.target.result call takes the memory
    e.setAttribute("type", songs[index]["file"].type);
    e.play();
    e.addEventListener("ended", function() { LoadAudioFile(index + 1) }, false);
};
Is there another way to load the data into the audio element?
This is not because of FileReader but because you are making the src attribute of the audio element a string roughly 1.33 times the size of the MP3 file. So instead of the src attribute being a nice short URL pointing to an MP3 resource, it's the whole MP3 file in base64 encoding. It's a wonder your browser didn't crash.
You should not read the file with FileReader at all; instead, create a blob URL from the file and use that as the src.
var url = window.URL || window.webkitURL;
//Src will be like "blob:http%3A//stackoverflow.com/d13eb575-4863-4f86-8727-6400119f4afc"
//A very short string that is pointing to the original resource in hard drive
var src = url.createObjectURL( mp3filereference );
audioElement.src = src;
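Since the question is about memory, it may also help to release the object URL once the element is done with it (my addition, not part of the original answer):
// When playback ends (or the element is otherwise finished with the URL),
// revoking it releases the browser's reference to the underlying file.
audioElement.addEventListener("ended", function() {
    url.revokeObjectURL(src);
});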