How do I get buffers/raw data from AudioContext? - javascript

I am trying to record and save sound clips from the user's microphone using the getUserMedia() and AudioContext APIs.
I have been able to do this with the MediaRecorder API, but unfortunately that's not supported by Safari/iOS, so I would like to do this with just the AudioContext API and the buffer that comes from it.
I got things partially working with this tutorial from Google Web Fundamentals, but I can't figure out how to do the following steps it suggests.
var handleSuccess = function(stream) {
  var context = new AudioContext();
  var source = context.createMediaStreamSource(stream);
  var processor = context.createScriptProcessor(1024, 1, 1);

  source.connect(processor);
  processor.connect(context.destination);

  processor.onaudioprocess = function(e) {
    // ******
    // TUTORIAL SUGGESTS: Do something with the data, i.e. convert this to WAV
    // ******
    // I ASK: How can I get this data in a buffer and then convert it to WAV etc.??
    // *****
    console.log(e.inputBuffer);
  };
};

navigator.mediaDevices.getUserMedia({ audio: true, video: false })
  .then(handleSuccess);
As the tutorial says:
The data that is held in the buffers is the raw data from the microphone and you have a number of options with what you can do with the data:
Upload it straight to the server
Store it locally
Convert to a dedicated file format, such as WAV, and then save it to your servers or locally
I could do all this, but I can't figure out how to get the audio buffer once I stop the context.
With MediaRecorder you can do something like this:
mediaRecorder.ondataavailable = function(e) {
  chunks.push(e.data);
};
And then when you're done recording, you have a buffer in chunks. There must be a way to do the same here, as the tutorial suggests, but I can't find the data to push into a buffer in the first code example.
Once I get the audio buffer I could convert it to WAV and make it into a blob etc.
Can anyone help me with this? (I don't want to use the MediaRecorder API)

e.inputBuffer.getChannelData(0)
Here 0 is the first channel. This returns a Float32Array with the raw PCM samples; its underlying ArrayBuffer is available as e.inputBuffer.getChannelData(0).buffer, which you can send to a worker that converts it to the needed format.
.getChannelData() Docs: https://developer.mozilla.org/en-US/docs/Web/API/AudioBuffer/getChannelData.
About typed arrays: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Typed_arrays, https://javascript.info/arraybuffer-binary-arrays.
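To make that concrete, here is a minimal sketch of accumulating the raw samples in memory while recording (the chunk list and the mergeBuffers helper are illustrative names, not part of the tutorial):

var recordedChunks = [];
var recordingLength = 0;

processor.onaudioprocess = function(e) {
  // Copy the chunk: the engine may reuse the underlying AudioBuffer between callbacks
  recordedChunks.push(new Float32Array(e.inputBuffer.getChannelData(0)));
  recordingLength += e.inputBuffer.length;
};

// When recording stops, flatten the chunks into one Float32Array
function mergeBuffers(chunks, length) {
  var result = new Float32Array(length);
  var offset = 0;
  chunks.forEach(function(chunk) {
    result.set(chunk, offset);
    offset += chunk.length;
  });
  return result; // raw mono PCM, ready to be encoded as WAV
}

The copy in onaudioprocess matters: pushing e.inputBuffer.getChannelData(0) directly can leave you with chunks that all point at the same reused memory.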

Related

Encode AudioBuffer with Opus (or other codec) in Browser

I am trying to stream audio via WebSocket.
I can get an AudioBuffer from the microphone (or another source) via the Web Audio API and stream the raw audio buffer, but I think this would not be very efficient.
So I looked around for a way to encode the AudioBuffer somehow. If the Opus codec is not practicable, I am open to alternatives and thankful for any hints in the right direction.
I have tried to use the MediaRecorder (from the MediaStream Recording API), but it seems that API only supports plain recording, not streaming.
Here is the part where I get the raw AudioBuffer:
const handleSuccess = function(stream) {
  const context = new AudioContext();
  const source = context.createMediaStreamSource(stream);
  const processor = context.createScriptProcessor(16384, 1, 1);

  source.connect(processor);
  processor.connect(context.destination);

  processor.onaudioprocess = function(e) {
    const bufferLen = e.inputBuffer.length;
    const inputBuffer = new Float32Array(bufferLen);
    e.inputBuffer.copyFromChannel(inputBuffer, 0);
    let data_to_send = inputBuffer;
    // ...and send the Float32Array
  };
};

navigator.mediaDevices.getUserMedia({ audio: true, video: false })
  .then(handleSuccess);
So the main question is: how can I encode the AudioBuffer (and decode it at the receiver)?
Is there an API or a library? Can I get the encoded buffer from another API in the browser?
The Web Audio API has a MediaStreamAudioDestinationNode (created with createMediaStreamDestination()) that exposes a .stream MediaStream, which you can then pass through the WebRTC API.
But if you are only dealing with a microphone input, then pass that MediaStream directly to WebRTC; there is no need for the Web Audio step.
P.S.: For those who only want to encode to Opus, MediaRecorder is currently the only native way. It will incur a delay, will generate a WebM file rather than only the raw data, and will process the data no faster than real time.
The only other option right now is to write your own encoder and run it in WebAssembly.
Hopefully in the near future we'll have access to the WebCodecs API, which should solve this use case among others.
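A rough sketch of the MediaStreamAudioDestinationNode route described above, using MediaRecorder to produce encoded chunks (the timeslice value and the WebSocket endpoint are illustrative assumptions):

const context = new AudioContext();
const source = context.createMediaStreamSource(stream);     // mic stream from getUserMedia
const destination = context.createMediaStreamDestination(); // exposes destination.stream
source.connect(destination);

// MediaRecorder encodes the stream (typically Opus in a WebM container)
const recorder = new MediaRecorder(destination.stream);
const socket = new WebSocket('wss://example.com/audio');    // hypothetical endpoint

recorder.ondataavailable = (e) => {
  if (e.data.size > 0) socket.send(e.data);                 // ship each encoded chunk
};
recorder.start(250); // request a chunk roughly every 250 ms

As noted above, this still behaves like recording rather than low-latency streaming: chunks arrive no faster than real time, and each chunk is a fragment of a WebM file, so the receiver has to deal with the container, not bare Opus packets.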

HTML5 Audio recording too large

I managed to create a complete recorder using HTML5.
My problem is the size of the WAV file created; it's too large to be sent to my servers. I'm using the exportWAV function a lot of users seem to be using.
This function builds a WAV file from the recorded audio samples:
function encodeWAV(samples) {
  // 44-byte WAV header followed by 16-bit mono PCM data
  var buffer = new ArrayBuffer(44 + samples.length * 2);
  var view = new DataView(buffer);
  writeString(view, 0, 'RIFF');
  view.setUint32(4, 36 + samples.length * 2, true); // RIFF chunk size
  writeString(view, 8, 'WAVE');
  writeString(view, 12, 'fmt ');
  view.setUint32(16, 16, true);                     // fmt chunk size
  view.setUint16(20, 1, true);                      // audio format: PCM
  view.setUint16(22, 1, true);                      // channel count: mono
  view.setUint32(24, sampleRate, true);             // sample rate
  view.setUint32(28, sampleRate * 2, true);         // byte rate (sampleRate * channels * 2)
  view.setUint16(32, 2, true);                      // block align (channels * 2)
  view.setUint16(34, 16, true);                     // bits per sample
  writeString(view, 36, 'data');
  view.setUint32(40, samples.length * 2, true);     // data chunk size
  floatTo16BitPCM(view, 44, samples);
  return view;
}
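For reference, these are the writeString and floatTo16BitPCM helpers it relies on, in their usual Recorder.js-style form (reproduced here as a sketch):

function writeString(view, offset, string) {
  for (var i = 0; i < string.length; i++) {
    view.setUint8(offset + i, string.charCodeAt(i));
  }
}

function floatTo16BitPCM(view, offset, samples) {
  for (var i = 0; i < samples.length; i++, offset += 2) {
    var s = Math.max(-1, Math.min(1, samples[i])); // clamp to [-1, 1]
    view.setInt16(offset, s < 0 ? s * 0x8000 : s * 0x7FFF, true);
  }
}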
I was browsing through the alternatives, but none of them is really sufficient or simple enough:
Zipping the file - doesn't work well and has some security issues.
Converting to MP3 - makes the process much slower and more complicated, also has security issues, and causes the sound to lose a lot of quality.
My question is: does HTML5 getUserMedia export only to .WAV files?
If there were an encodeMP3 function like the encodeWAV I used, that would be perfect.
What is the recommended way to solve such a problem?
I'd love to get a simple working example if possible.
Thanks.
The recommended way is probably to use the API already there in your browser instead of rewriting it yourself with the poor tools we've got.
So, to record an audio stream (https fiddle for Chrome):
// get our audio stream
navigator.mediaDevices.getUserMedia({
  audio: true
}).then(setup);

function startRecording(stream) {
  let recorder = new MediaRecorder(stream);
  let chunks = []; // here we'll store all recorded chunks
  // every time a new chunk is available, store it
  recorder.ondataavailable = (e) => chunks.push(e.data);
  recorder.onstop = () => {
    let blob = new Blob(chunks);
    saveRecordedAudio(blob);
  };
  recorder.start();
  return recorder;
}

function saveRecordedAudio(blob) {
  // do whatever with this audio file, e.g.:
  // var form = new FormData();
  // form.append('file', blob, 'myaudio.ogg');
  // xhr.send(form)
  // for the demo here, we'll just append a new audio element with the recording
  var url = URL.createObjectURL(blob);
  var a = new Audio(url);
  a.controls = true;
  document.body.appendChild(a);
  a.onloadeddata = () => URL.revokeObjectURL(url); // better to always revoke blob URLs (media elements don't fire 'load')
}

function setup(stream) {
  let btn = document.querySelector('button');
  let recording = false;
  var recorder; // weird bug in FF when using let...
  btn.onclick = (e) => {
    if (recording = !recording) {
      recorder = startRecording(stream);
    } else {
      recorder.stop();
    }
    e.target.textContent = (recording ? 'stop' : 'start') + ' recording';
  };
}

<button>start recording</button>
This will record your stream as Opus (in an Ogg or WebM container, depending on the browser); if you want WAV, simply do the conversion server side.
Does the HTML5 getUserMedia export only to .WAV files?
getUserMedia doesn't export anything at all. getUserMedia only returns a MediaStream for some sort of audio/video capture.
This MediaStream is used in conjunction with the Web Audio API where you can access PCM samples. WAV files typically contain raw PCM samples. (WAV is a container format. PCM is the sample format, and is the most popular way of encoding audio digitally.)
Zipping the file - Doesn't work well and has some security issues.
It works just fine when you consider the constraints, and it has no inherent security issues. What you get is lossless compression of the audio data, which typically won't reduce the size by more than 15%-30% or so.
Converting to MP3 - makes the process much slower and more complicated, also has security issues, and causes the sound to lose a lot of quality.
You can encode as you record, so slowness isn't a problem. Complicated... maybe at first, but not really once you've used it. The real issue here is that you're concerned about quality loss.
Unfortunately, you don't get to pick perfect quality and tiny size. These are tradeoffs and there is no magic bullet. Any lossy compression you use (like MP3, AAC, Opus, Vorbis) will reduce your data size considerably by removing part of the audio that we don't normally perceive. The less bandwidth there is, the more artifacts occur from this process. You have to decide between data size and quality.
If I might make a suggestion... Use the MediaRecorder API. https://developer.mozilla.org/en-US/docs/Web/API/MediaStream_Recording_API It's a very easy API to use. You create a MediaRecorder, give it a stream, tell it to record, and then deal with the data it gives you in whatever way you wish. Most browsers supporting the MediaRecorder API also support Opus for an audio codec, which provides good performance at most any bitrate. You can choose the bitrate you want and know that you're getting about the best quality audio you can get for that amount of bandwidth.
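A short sketch of what choosing that bitrate looks like in practice (the 64 kbps figure and the MIME type are illustrative choices, not requirements):

const recorder = new MediaRecorder(stream, {
  mimeType: 'audio/webm;codecs=opus', // check MediaRecorder.isTypeSupported() first
  audioBitsPerSecond: 64000           // ~64 kbps: far smaller than WAV, still decent quality
});

At 64 kbps, a minute of Opus audio is roughly 480 KB, versus about 5 MB for a minute of 16-bit mono WAV at 44.1 kHz.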
You need to encode to MP3 so it takes up less space...
libmp3lame.js is the tool for you...
Full article here - how to record to mp3.
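For illustration, a minimal sketch of in-browser MP3 encoding, assuming the lamejs port of LAME and 16-bit PCM input (the channel count, sample rate, and bitrate are example values):

// `samples` is an Int16Array of 16-bit PCM (e.g. from floatTo16BitPCM above)
const encoder = new lamejs.Mp3Encoder(1, 44100, 128); // mono, 44.1 kHz, 128 kbps
const mp3Data = [];

let mp3buf = encoder.encodeBuffer(samples);
if (mp3buf.length > 0) mp3Data.push(mp3buf);

mp3buf = encoder.flush(); // write any remaining frames
if (mp3buf.length > 0) mp3Data.push(mp3buf);

const mp3Blob = new Blob(mp3Data, { type: 'audio/mp3' });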

Is it possible to save an audio stream with pure JavaScript in the browser?

I would like to make a webpage where visitors can save audio streams by clicking on links (not live streams, but links from a radio archive which uses streams). I want to do this without a server backend, with pure JavaScript in the browser.
I read somewhere about the JavaScript port of FFmpeg, which can encode and save video/audio in the browser using so-called blobs. However, the library to download is huge, as far as I remember 17 MB. In fact I would only need to stream-copy the audio streams, not run a real encoding process.
I usually use similar commands to save a programme:
ffmpeg -i http://stream.example.com/stream_20160518_0630.mp3 -c copy -t 3600 programme.mp3
I wonder: is it possible to compile a subset of FFmpeg to JavaScript that provides only the stream copying I actually need?
var audio = new Audio();
var ms = new MediaSource();
var chunks = [];

audio.src = URL.createObjectURL(ms);

ms.addEventListener('sourceopen', function(e) {
  var sourceBuffer = ms.addSourceBuffer('audio/mpeg');
  var stream;

  function pump(stream) {
    return stream.read().then(data => {
      if (data.done) return;                 // stream ended
      chunks.push(data.value);               // keep a copy for saving later
      sourceBuffer.appendBuffer(data.value); // feed the playback buffer
    });
  }

  // appendBuffer is async: wait for 'updateend' before pumping the next chunk
  sourceBuffer.addEventListener('updateend', () => {
    pump(stream);
  }, false);

  fetch("http://stream001.radio.hu:443/stream/20160606_090000_1.mp3")
    .then(res => pump(stream = res.body.getReader()));

  audio.play();
}, false);

// stop the stream whenever you want and save all the chunks you have
stopBtn.onclick = function() {
  var blob = new Blob(chunks);
  console.log(blob);
  // Create an object URL, append it to a link, and trigger a click to download,
  // or saveAs(blob, 'stream.mp3'); needs: https://github.com/eligrey/FileSaver.js
};
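To flesh out that last comment, a minimal download trigger without FileSaver.js could look like this (the file name is arbitrary):

function downloadBlob(blob) {
  var url = URL.createObjectURL(blob);
  var a = document.createElement('a');
  a.href = url;
  a.download = 'stream.mp3'; // suggested file name
  document.body.appendChild(a);
  a.click();
  a.remove();
  URL.revokeObjectURL(url); // release the blob URL once the download has been triggered
}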

AudioContext createMediaElementSource from video fetching data from blob

In JavaScript, how can I connect an audio context to a video element fetching its data from a blob (the video uses the MediaStream capabilities)? No matter what I do, the audio context returns an empty buffer. Is there any way to connect the two?
Probably, createMediaElementSource is not the right kind of processing node for this use case.
Rather, you are better off using the createMediaStreamSource node from the Web Audio API if you are handling a live audio stream, not a fixed media source.
The createMediaStreamSource() method of the AudioContext interface is used to create a new MediaStreamAudioSourceNode object, given a media stream (say, from a navigator.getUserMedia instance), the audio from which can then be played and manipulated.
The link has a more detailed example. However, the key constraint of MediaStreamAudioSourceNode is that it can be created only from a MediaStream that you get from a media server or locally (through getUserMedia). In my experience, I couldn't find any way to do it using only the blob URL from the <video> tag.
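As a quick illustration of the suggested node (a minimal sketch, assuming a microphone stream from getUserMedia):

const context = new AudioContext();

navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
  // Wrap the live MediaStream in a source node the audio graph can process
  const source = context.createMediaStreamSource(stream);
  source.connect(context.destination); // e.g. route it straight to the speakers
});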
While this is an old question, I've searched for something similar and found a solution I want to share.
To connect the Blob, you can use a new Response instance. Here is an example for creating a waveform visualizer.
// must run inside an async function, since it uses await
var audioContext = new (window.AudioContext || window.webkitAudioContext)();
var analyser = audioContext.createAnalyser();
var dataArray = new Uint8Array(analyser.frequencyBinCount);

// Response gives us an easy way to read the Blob as an ArrayBuffer
var arrayBuffer = await new Response(yourBlob).arrayBuffer();
var audioBuffer = await audioContext.decodeAudioData(arrayBuffer);

var source = audioContext.createBufferSource();
source.buffer = audioBuffer;
source.connect(analyser);
source.start(0);
Note: yourBlob needs to be a Blob instance.
You may find this fiddle useful; it records video and audio for 5 seconds, turns the recording into a Blob, and then plays it back, including an audio waveform visualization.

Save audio in a buffer for later playback

With the Web Audio API, I want to save audio in a buffer for later use. I've found some examples of saving audio to disk, but I only want to store it in memory. I tried connecting the output of the last AudioNode in the chain to an AudioBuffer, but it seems AudioBuffer doesn't have a method for accepting inputs.
var contextClass = (window.AudioContext || window.webkitAudioContext);
var context = new contextClass();

// Output compressor
var compressor = context.createDynamicsCompressor();
compressor.connect(context.destination);

var music = context.createBufferSource();
// Load some content into music with XMLHttpRequest...
music.connect(compressor);
music.start(0);

// Set up recording buffer
var recordBuffer = context.createBuffer(2, 10000, 44100);
compressor.connect(recordBuffer);
// Failed to execute 'connect' on 'AudioNode': No function was found that matched the signature provided.
Is there something I can use instead of AudioBuffer to achieve this? Is there a way to do this without saving files to disk?
Well, turns out Recorder.js does exactly what I wanted. I thought it was only for exporting to disk, but when I looked closer I realized it can save to buffers too. Hooray!
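For anyone landing here, a rough sketch of that Recorder.js usage (method names per Matt Diamond's Recorder.js; the tapped node is whatever sits last in your chain, here the compressor):

var recorder = new Recorder(compressor); // tap the node whose output you want to keep

recorder.record(); // start capturing
// ... play your content ...
recorder.stop();   // stop capturing

// Get the recorded samples as in-memory Float32Arrays (one per channel)
recorder.getBuffer(function(buffers) {
  var left = buffers[0];
  var right = buffers[1];
  // keep these around and copy them into an AudioBuffer later for playback
});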
