Getting an audio file's buffer for Web Audio visualization - javascript

I'm trying to build an audio visualization.
I'm using the Pizzicato web audio library to load the files,
and the code below for the visualization.
I want to get the loaded file's buffer and pass it to the start() method.
AudioVisualizer.prototype.start = function (buffer) {
    var that = this;
    this.audioContext.decodeAudioData(buffer, decodeAudioDataSuccess, decodeAudioDataFailed);

    function decodeAudioDataSuccess(decodedBuffer) {
        that.sourceBuffer.buffer = decodedBuffer;
        that.sourceBuffer.start(0);
    }

    function decodeAudioDataFailed(e) {
        // log decoding failures
        console.error('decodeAudioData failed', e);
    }
};
How can I get the file's buffer?
Or, how can I get the sound buffer of the current output?
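One way to get the bytes is to read the file yourself with a FileReader and pass the resulting ArrayBuffer straight into start(). A minimal sketch, assuming the file comes from a file input (the id fileInput and the visualizer variable are illustrative, not part of the original code):

// assuming 'visualizer' is an existing AudioVisualizer instance
// and the page has <input type="file" id="fileInput">
document.getElementById('fileInput').addEventListener('change', function (e) {
    var file = e.target.files[0];
    var reader = new FileReader();
    reader.onload = function () {
        // reader.result is an ArrayBuffer, which is what decodeAudioData expects
        visualizer.start(reader.result);
    };
    reader.readAsArrayBuffer(file);
});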

Related

Doesn't the Web Audio API have the ability to offload data into a .csv file for building a graph?

Task.
The goal is to offload the data that the Web Audio API presents as a spectrogram into a .csv array, dividing it into 10 parts (that is, into bands of 10% of the area along the vertical frequency axis), and then load the data from that array into a StreamGraph, see pic-1.
Source code:
https://codepen.io/lolliuym/pen/GREjwNR?editors=1010
Tries.
1 - Define online and offline audio contexts:
const AudioContext = window.AudioContext || window.webkitAudioContext;
const audioCtx = new AudioContext();
const offlineCtx = new OfflineAudioContext(2, 44100 * 40, 44100);
let source = offlineCtx.createBufferSource();
2 - Use XHR to load the audio track and decodeAudioData to decode it into a buffer, then place the buffer in the source. Unfortunately, the Web Audio API does not provide an option to create a new .csv file. Example here:
function getData() {
    const request = new XMLHttpRequest();
    request.open('GET', 'audio.ogg', true);
    request.responseType = 'arraybuffer';
    request.onload = function () {
        const audioData = request.response;
        audioCtx.decodeAudioData(audioData, function (buffer) {
            const myBuffer = buffer;
            source.buffer = myBuffer;
            source.connect(offlineCtx.destination);
            source.start();
            offlineCtx.startRendering();
        }, function (e) {
            console.error('Error with decoding audio data', e);
        });
    };
    request.send();
}
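Note that in modern browsers startRendering() also returns a promise that resolves with the rendered AudioBuffer, and that buffer's channel data is the raw sample array you would need to serialize. A minimal sketch:

offlineCtx.startRendering().then(function (renderedBuffer) {
    // getChannelData returns a Float32Array of raw samples for one channel
    const samples = renderedBuffer.getChannelData(0);
    console.log('Rendered ' + samples.length + ' samples');
});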
I found a way to create a .csv from an existing .csv file (which defaults to NULL), but I don't see how that approach could be wired into the Web Audio API.
Is there any other way?
Summary.
In all my attempts, I have failed to produce the contents of a .csv file through which data could be offloaded from the Web Audio API into a graph running in the browser.
Questions.
Is it possible to create a new .csv file by dividing the area into 10 parts?
What method can be used to export the data?
How can the StreamGraph chart be connected to the .csv?
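For what it's worth, none of this needs a new Web Audio feature: an AnalyserNode can supply per-frame frequency data, plain JavaScript can collapse it into 10 bands and build the CSV text, and a Blob can be offered as a .csv download. A minimal sketch of that approach (all names are illustrative; only audioCtx comes from the code above):

const analyser = audioCtx.createAnalyser();
analyser.fftSize = 2048;
// connect it into the existing graph, e.g. source.connect(analyser)
const bins = new Uint8Array(analyser.frequencyBinCount);
const rows = [];

function sampleFrame() {
    analyser.getByteFrequencyData(bins);
    // collapse the spectrum into 10 bands (10% of the frequency axis each)
    const bandSize = Math.floor(bins.length / 10);
    const row = [];
    for (let b = 0; b < 10; b++) {
        let sum = 0;
        for (let i = b * bandSize; i < (b + 1) * bandSize; i++) sum += bins[i];
        row.push((sum / bandSize).toFixed(1));
    }
    rows.push(row.join(','));
}

function downloadCsv() {
    const blob = new Blob([rows.join('\n')], { type: 'text/csv' });
    const a = document.createElement('a');
    a.href = URL.createObjectURL(blob);
    a.download = 'spectrum.csv';
    a.click();
}

Calling sampleFrame() from requestAnimationFrame (or a timer) while the audio plays produces one CSV row per frame, which a StreamGraph can then consume.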

How to play sound in javascript without external audio file/html file?

So I'm building something, but I'm not allowed to use any external files other than the script.js file itself. I want to play a .mp3 sound in a function, but I'm not sure how to do it without uploading the file into my folder.
You can use JavaScript to set the audio src to a data URI, e.g. data:audio/ogg;base64,T2dnUwACAAAAAAAAAAA+... etc.
There's an example here: https://iandevlin.com/html5/data-uri/audio.php
To do this with JavaScript:
var audioElement = new Audio();
audioElement.src = "data:audio/ogg;base64,T2dnUwACAAAAAAAAAAA+...";
audioElement.play();
You can encode your mp3 file using base64. Then you can keep the audio in a string:
var beep = "data:audio/mp3;base64,<paste your base64 here>";
var audio = document.getElementById('audio'); // assumes an <audio id="audio"> element in the page
audio.src = beep;
audio.play();
The base64 string can be generated using a shell:
cat sound.mp3 | base64
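If a shell isn't available, the same string can be produced with Node.js (a sketch; sound.mp3 is assumed to sit next to the script):

const fs = require('fs');
// prints the base64 payload to paste into the data URI
console.log(fs.readFileSync('sound.mp3').toString('base64'));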

How do I get buffers/raw data from AudioContext?

I am trying to record and save sound clips from the user's microphone using the getUserMedia() and AudioContext APIs.
I have been able to do this with the MediaRecorder API, but unfortunately, that's not supported by Safari/iOS, so I would like to do this with just the AudioContext API and the buffer that comes from that.
I got things partially working with this tutorial from Google Web Fundamentals, but I can't figure out how to do the steps they suggest.
var handleSuccess = function (stream) {
    var context = new AudioContext();
    var source = context.createMediaStreamSource(stream);
    var processor = context.createScriptProcessor(1024, 1, 1);

    source.connect(processor);
    processor.connect(context.destination);

    processor.onaudioprocess = function (e) {
        // ******
        // TUTORIAL SUGGESTS: Do something with the data, i.e. convert this to WAV
        // ******
        // I ASK: How can I get this data in a buffer and then convert it to WAV etc.??
        // *****
        console.log(e.inputBuffer);
    };
};

navigator.mediaDevices.getUserMedia({ audio: true, video: false })
    .then(handleSuccess);
As the tutorial says:
The data that is held in the buffers is the raw data from the microphone and you have a number of options with what you can do with the data:
Upload it straight to the server
Store it locally
Convert to a dedicated file format, such as WAV, and then save it to your servers or locally
I could do all this, but I can't figure out how to get the audio buffer once I stop the context.
With MediaRecorder you can do something like this:
mediaRecorder.ondataavailable = function (e) {
    chunks.push(e.data);
};
And then, when you're done recording, you have a buffer in chunks. There must be a way to do this, as the tutorial suggests, but I can't find the data to push into the buffer in the first code example.
Once I get the audio buffer I could convert it to WAV and make it into a blob etc.
Can anyone help me with this? (I don't want to use the MediaRecorder API)
e.inputBuffer.getChannelData(0)
Where 0 is the first channel. This should return a Float32Array with the raw PCM data, which you can then convert to an ArrayBuffer with e.inputBuffer.getChannelData(0).buffer and send to a worker that converts it to the needed format.
.getChannelData() Docs: https://developer.mozilla.org/en-US/docs/Web/API/AudioBuffer/getChannelData.
About typed arrays: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Typed_arrays, https://javascript.info/arraybuffer-binary-arrays.
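To mirror the MediaRecorder pattern from the question, copy each callback's channel data into an array and merge everything when recording stops. A minimal sketch reusing the processor from the snippet above (note the slice(): the engine reuses the underlying buffer between callbacks):

var chunks = [];

processor.onaudioprocess = function (e) {
    // slice() copies the samples; the browser reuses the underlying buffer
    chunks.push(e.inputBuffer.getChannelData(0).slice());
};

function mergeChunks() {
    var length = chunks.reduce(function (n, c) { return n + c.length; }, 0);
    var merged = new Float32Array(length);
    var offset = 0;
    chunks.forEach(function (c) {
        merged.set(c, offset);
        offset += c.length;
    });
    return merged; // raw PCM samples, ready for WAV encoding
}

From there, producing a WAV file is a matter of prefixing a 44-byte RIFF header and converting the floats to 16-bit integers.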

Streaming fragmented WebM over WebSocket to MediaSource

I am trying to do the following:
On the server I encode H.264 packets into a WebM (MKV) container structure, so that each cluster gets a single frame packet. Only the first data chunk is different, as it contains the so-called Initialization Segment. Here it is explained quite well.
Then I stream those clusters one by one in a binary stream via WebSocket to a browser, which is Chrome.
It probably sounds weird that I use the H.264 codec and not VP8 or VP9, which are the native codecs for the WebM video format. But it appears that the HTML video tag has no problem playing this sort of video container. If I just write the whole stream to a file and pass it to video.src, it plays fine. But I want to stream it in real time. That's why I am breaking the video into chunks and sending them over WebSocket.
On the client, I am using the MediaSource API. I have little experience with Web technologies, but I found that it's probably the only way to go in my case.
And it doesn't work. I am getting no errors, the stream runs OK, and the video object emits no warnings or errors (checked via the developer console).
The client side code looks like this:
<script>
$(document).ready(function () {
    var sourceBuffer;
    var player = document.getElementById("video1");
    var mediaSource = new MediaSource();
    player.src = URL.createObjectURL(mediaSource);
    mediaSource.addEventListener('sourceopen', sourceOpen);

    // array with incoming segments:
    var mediaSegments = [];

    var ws = new WebSocket("ws://localhost:8080/echo");
    ws.binaryType = "arraybuffer";

    player.addEventListener("error", function (err) {
        $("#id1").append("video error " + err.error + "\n");
    }, false);
    player.addEventListener("playing", function () {
        $("#id1").append("playing\n");
    }, false);
    player.addEventListener("progress", onProgress);

    function onProgress() { /* progress handler omitted in the original snippet */ }

    ws.onopen = function () {
        $("#id1").append("Socket opened\n");
    };

    function sourceOpen() {
        sourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.64001E"');
    }

    function onUpdateEnd() {
        if (!mediaSegments.length) {
            return;
        }
        sourceBuffer.appendBuffer(mediaSegments.shift());
    }

    var initSegment = true;
    ws.onmessage = function (evt) {
        if (evt.data instanceof ArrayBuffer) {
            var buffer = evt.data;
            // the first segment is always the 'initSegment';
            // it must be appended to the buffer first
            if (initSegment == true) {
                sourceBuffer.appendBuffer(buffer);
                sourceBuffer.addEventListener('updateend', onUpdateEnd);
                initSegment = false;
            } else {
                mediaSegments.push(buffer);
            }
        }
    };
});
</script>
I also tried different profile codes in the MIME type, even though I know my codec is "high profile". I tried the following profiles:
avc1.42E01E baseline
avc1.58A01E extended profile
avc1.4D401E main profile
avc1.64001E high profile
In some examples from 2-3 years ago, I have seen developers using type="video/x-matroska", but a lot has probably changed since then, because now even video.src doesn't handle that sort of MIME type.
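One quick way to narrow this down is to ask the browser directly which container/codec strings it accepts, using the standard MediaSource.isTypeSupported static method:

[
    'video/mp4; codecs="avc1.64001E"',
    'video/webm; codecs="avc1.64001E"',
    'video/webm; codecs="vp8"',
    'video/x-matroska; codecs="avc1.64001E"'
].forEach(function (type) {
    console.log(type + ' -> ' + MediaSource.isTypeSupported(type));
});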
Additionally, to make sure the chunks I am sending through the stream are not corrupted, I opened a local streaming session in VLC, and it played progressively with no issues.
The only thing I suspect is that MediaSource doesn't know how to handle this sort of hybrid container. But then I wonder why the video object plays such a video fine. Am I missing something in my client-side code? Or does the MediaSource API indeed not support this type of media?
PS: For those curious why I am using the MKV container and not, for example, MPEG-DASH: the answer is container simplicity, data writing speed, and size. EBML structures are very compact and easy to write in real time.

Is it possible to save an audio stream with pure JavaScript in the browser?

I would like to make a webpage where visitors can save audio streams by clicking on links (not live streams, but links from a radio archive that uses streams). I want to do this without a server backend, with pure JavaScript in the browser.
I read somewhere about the JavaScript port of FFmpeg, which can encode and save video/audio in the browser using so-called blobs. However, the library to download is huge, as far as I remember 17 MB. In fact, I would only need stream copying of the audio streams, not a real encoding process.
I usually use similar commands to save a programme:
ffmpeg -i http://stream.example.com/stream_20160518_0630.mp3 -c copy -t 3600 programme.mp3
I wonder, is it possible to compile a subset of FFmpeg into JavaScript that provides only the stream copying that is actually needed?
var audio = new Audio();
var ms = new MediaSource();
var chunks = [];

audio.src = URL.createObjectURL(ms);

ms.addEventListener('sourceopen', function (e) {
    var sourceBuffer = ms.addSourceBuffer('audio/mpeg');
    var stream;

    function pump(stream) {
        return stream.read().then(function (data) {
            if (data.done) return; // stream ended
            chunks.push(data.value);
            sourceBuffer.appendBuffer(data.value);
        });
    }

    // each appendBuffer triggers 'updateend', which pulls the next chunk
    sourceBuffer.addEventListener('updateend', function () {
        pump(stream);
    }, false);

    fetch("http://stream001.radio.hu:443/stream/20160606_090000_1.mp3")
        .then(function (res) { return pump(stream = res.body.getReader()); });

    audio.play();
}, false);

// stop the stream when you want and save all chunks that you have
stopBtn.onclick = function () {
    var blob = new Blob(chunks);
    console.log(blob);
    // Create an object URL, append it to a link, trigger a click to download,
    // or saveAs(blob, 'stream.mp3'); needs: https://github.com/eligrey/FileSaver.js
};
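The download step that the last comment describes could look like this (a sketch; the filename is just an example):

function downloadBlob(blob, filename) {
    var a = document.createElement('a');
    a.href = URL.createObjectURL(blob);
    a.download = filename;
    a.click();
    URL.revokeObjectURL(a.href);
}

// e.g. inside stopBtn.onclick:
// downloadBlob(new Blob(chunks), 'stream.mp3');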
