I have a web server that streams WAV audio and I would like to play it in a web browser using the JavaScript Web Audio API.
Here is my code:
function start() {
    var request = new XMLHttpRequest();
    request.open("GET", url, true);
    request.responseType = "arraybuffer"; // Read as binary data
    request.onload = function() {
        var data = request.response;
        playSound(data);
    };
    request.send();
}
The problem here is that onload won't be called until the data has completely loaded, which is not very convenient for streaming.
So I looked for another event and found onprogress, but the problem is that request.response returns null until the data has completely loaded.
What's the correct way to play an audio stream with JavaScript?
Thanks for your help.
Lukas
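For reference, fetch() exposes partial data as it arrives through the Streams API, which XMLHttpRequest's response does not. A minimal sketch of the download side only (handleChunk is a hypothetical stand-in for the remaining work of parsing the WAV header and feeding samples to the audio graph):

async function startStreaming(url) {
    const response = await fetch(url);
    const reader = response.body.getReader();
    while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        // `value` is a Uint8Array containing the bytes of this chunk
        handleChunk(value); // hypothetical: parse/queue PCM data here
    }
}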
Related
I'm trying to optimize the loading times of audio files in a project where we need to use AudioBufferSourceNode, which requires the audio buffer to be fully loaded.
Could I load, say, the first 10 minutes of audio first and play it while downloading the rest in the background, and later create another source node loaded with the second part of the file?
My current implementation loads all of the audio up front, which isn't great because it takes time; my files are 60-70 MB.
function getData() {
    source = audioCtx.createBufferSource();
    var request = new XMLHttpRequest();
    request.open('GET', 'viper.ogg', true);
    request.responseType = 'arraybuffer';
    request.onload = function() {
        var audioData = request.response;
        audioCtx.decodeAudioData(audioData, function(buffer) {
            source.buffer = buffer;
            source.connect(audioCtx.destination);
            source.loop = true;
        },
        function(e) { console.log("Error with decoding audio data: " + e.err); });
    };
    request.send();
}
I think you can achieve what you want by using the WebCodecs API (currently only available in Chrome), but it requires some plumbing.
To get the file as a stream you could use fetch() instead of XMLHttpRequest.
Then you would need to demux the encoded file to get the raw audio data and decode it with an AudioDecoder. With a bit of luck it will output AudioData objects. These objects can be used to get the raw sample data, which can then be used to create an AudioBuffer.
There are not many WebCodecs examples available yet. I think the example which shows how to decode an MP4 is the closest to your use case so far.
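To illustrate the last step, here is a rough sketch of turning a WebCodecs AudioData object into an AudioBuffer. It assumes the decoder outputs planar float32 samples and that an AudioContext named audioCtx exists; the demuxing that produces the EncodedAudioChunks is omitted:

// Sketch: convert one decoded AudioData into a Web Audio AudioBuffer.
// Assumes planar float32 output; real code should check audioData.format.
function audioDataToAudioBuffer(audioData, audioCtx) {
    const buffer = audioCtx.createBuffer(
        audioData.numberOfChannels,
        audioData.numberOfFrames,
        audioData.sampleRate
    );
    for (let ch = 0; ch < audioData.numberOfChannels; ch++) {
        const plane = new Float32Array(audioData.numberOfFrames);
        audioData.copyTo(plane, { planeIndex: ch });
        buffer.copyToChannel(plane, ch);
    }
    return buffer;
}

// The AudioDecoder feeds it from its output callback; the demuxer (not shown)
// supplies the EncodedAudioChunks.
const decoder = new AudioDecoder({
    output: (audioData) => {
        const buffer = audioDataToAudioBuffer(audioData, audioCtx);
        audioData.close(); // release the decoder's internal memory
        // ...hand `buffer` to an AudioBufferSourceNode for playback...
    },
    error: (e) => console.error(e)
});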
I am making a video editing tool where the user loads a local video into the application and edits it. For this I have to extract the audio from the local file.
Currently I load the video file through an XMLHttpRequest, which gives an ArrayBuffer as output. From this ArrayBuffer, using decodeAudioData on the AudioContext object, I get an AudioBuffer, which is used to paint the canvas.
let audioContext = new (window.AudioContext || window.webkitAudioContext)();
var req = new XMLHttpRequest();
req.open('GET', this.props.videoFileURL, true);
req.responseType = 'arraybuffer';
req.onload = e => {
    audioContext.decodeAudioData(
        req.response,
        buffer => {
            this.currentBuffer = buffer;
            this.props.setAudioBuffer(buffer);
            requestAnimationFrame(this.updateCanvas);
        },
        this.onDecodeError
    );
    console.log(req.response);
};
req.send();
This works for most MP4 files, but I get a decode error when I test with MPEG-1/2 encoded video files.
Edit 1:
I understand this is a demux issue, but I am not able to find a demuxer for MPEG-1.
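This won't supply a demuxer, but one way to tell "unsupported container" apart from "corrupt file" up front is HTMLMediaElement.canPlayType, which reports (advisorily) whether the browser ships support for a container. A small sketch, assuming arrayBuffer is the req.response from above:

// Coarse pre-check: canPlayType returns "", "maybe", or "probably"; an empty
// string for 'video/mpeg' strongly suggests no MPEG-1/2 demuxer is available.
const probe = document.createElement('video');
console.log(probe.canPlayType('video/mp4'));  // usually "maybe" or "probably"
console.log(probe.canPlayType('video/mpeg')); // often "" in modern browsers

// The promise form of decodeAudioData makes the failure explicit:
audioContext.decodeAudioData(arrayBuffer)
    .then(buffer => { /* paint the canvas as before */ })
    .catch(err => console.error('decode failed (likely a demux issue):', err));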
I'm using XMLHttpRequest to load an audio file, and the Web Audio API to get its frequency data for a visualizer. I am new to the concept, but what I've learned is that the second parameter "url" in XMLHttpRequest's open method must be in the same domain as the current document, i.e. a relative URL (e.g. audio-folder/music.mp3).
I want to open an audio file from a third-party website outside of my domain (https://c1.rbxcdn.com/36362fe5a1eab0c46a9b23cf4b54889e), but of course this returns an error.
I assume the way is to save the audio from the external URL onto my own server so the XMLHttpRequest can be made, and then remove it once the audio's frequency data has been extracted. But how exactly can I do this? I'm not sure where to start, or how efficient this is, so if you have advice I'd love to hear it.
Here is the code I'm working with.
function playSample() {
    var request = new XMLHttpRequest();
    request.open('GET', 'example.mp3', true);
    request.responseType = 'arraybuffer';
    // When loaded, decode the data
    request.onload = function() {
        setupAudioNodes();
        context.decodeAudioData(request.response, function(buffer) {
            // When the audio is decoded, play the sound
            sourceNode.buffer = buffer;
            sourceNode.start(0);
            rafID = window.requestAnimationFrame(updateVisualization);
        }, function(e) {
            console.log(e);
        });
    };
    request.send();
}
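For the cross-origin problem itself: rather than copying files into a database, the usual workaround (if you control a server) is a small same-origin proxy that fetches the third-party file on the browser's behalf. A minimal sketch assuming Node 18+ with Express; the /proxy-audio route and the whitelist idea are illustrative, not prescriptive:

// Minimal same-origin proxy sketch (assumes Node 18+ and Express).
// The browser requests /proxy-audio?url=..., which is same-origin,
// and the server fetches the third-party audio for it.
const express = require('express');
const app = express();

app.get('/proxy-audio', async (req, res) => {
    // In real code, validate req.query.url against a whitelist!
    const upstream = await fetch(req.query.url);
    res.set('Content-Type', upstream.headers.get('content-type') || 'audio/mpeg');
    res.send(Buffer.from(await upstream.arrayBuffer()));
});

app.listen(3000);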
For a side project I am using the following JS plugin to draw the spectrogram of an audio file in the browser:
https://www.npmjs.com/package/spectrogram
var Spectrogram = require('spectrogram');

var spectro = Spectrogram(document.getElementById('canvas'), {
    audio: {
        enable: false
    }
});

var audioContext = new (window.AudioContext || window.webkitAudioContext)();
var request = new XMLHttpRequest();
request.open('GET', 'audio.mp3', true);
request.responseType = 'arraybuffer';
request.onload = function() {
    audioContext.decodeAudioData(request.response, function(buffer) {
        spectro.addSource(buffer, audioContext);
        spectro.start();
    });
};
request.send();
(A demo is available here: https://lab.miguelmota.com/spectrogram/example/)
However, the current code and examples draw the spectrogram "line by line" as the sound plays through the buffer.
I wonder if there's a way to read the file first and then draw the entire spectrogram at once. I tried converting the ArrayBuffer to a Blob, but the rest of the functions expect the sound to be buffered...
So my question is: is there a way to draw the full spectrogram at once instead of buffering the audio and drawing it "on the go"? Or is there an easier way to achieve what I'm looking for?
Thank you for your time and help.
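One approach that seems to fit: render the decoded buffer through an OfflineAudioContext and sample an AnalyserNode at fixed offsets using suspend()/resume(), so every column of the spectrogram exists before anything plays. A sketch, assuming you already have the AudioBuffer from decodeAudioData; fftSize and the hop interval are arbitrary choices:

// Sketch: render the decoded buffer offline and sample an AnalyserNode at
// fixed offsets, collecting every spectrogram column without real playback.
async function computeSpectrogram(audioBuffer, fftSize = 1024) {
    const offline = new OfflineAudioContext(
        audioBuffer.numberOfChannels, audioBuffer.length, audioBuffer.sampleRate);
    const source = offline.createBufferSource();
    source.buffer = audioBuffer;
    const analyser = offline.createAnalyser();
    analyser.fftSize = fftSize;
    source.connect(analyser);
    analyser.connect(offline.destination);

    const columns = [];
    const hop = fftSize / audioBuffer.sampleRate; // seconds between columns
    for (let t = hop; t < audioBuffer.duration; t += hop) {
        offline.suspend(t).then(() => {
            const bins = new Uint8Array(analyser.frequencyBinCount);
            analyser.getByteFrequencyData(bins);
            columns.push(bins);
            offline.resume();
        });
    }
    source.start(0);
    await offline.startRendering();
    return columns; // each entry is one vertical slice of the spectrogram
}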
I am using the Web Audio API's createMediaElementSource, which works fine in Firefox (Gecko) and Chrome (Blink) but not Safari (WebKit). This is a big problem for me, since I prefer getting the audio from my HTML5 audio players rather than using XMLHttpRequests, the latter being too slow.
My first attempt was to take the source URL string from the audio tag and use it in an XMLHttpRequest. As expected it works, but the decoding is very slow and I can't pause the audio with stop(), since resuming triggers another round of decoding the entire file before anything can be heard.
A Stack Overflow user named Kevin Ennis gave me an important piece of advice, which is a really great idea:
You could break the audio up into a number of smaller files. Like,
maybe break it up into 4 separate 1MB audio files and load them in
order. Then you can start playback after the first one loads, and
while that's playing, you load the other ones.
My question is, how do I do this technically? I am not aware of any function that checks whether an audio file has finished loading.
I imagine it would look something like this:
var source = document.getElementsByTagName("audio")[0].src;
var fileExt = source.lastIndexOf('.');
var currentFile = 1;

var loadAudioFile = function () {
    var request = new XMLHttpRequest();
    request.open("GET", source, true);
    request.responseType = "arraybuffer";
    request.onload = function () {
        context.decodeAudioData(request.response, function (buffer) {
            convolver.buffer = buffer;
        });
    };
    request.send();
};

// Pseudocode: once the current part has finished decoding, load the next one
if (decodeCurrentData == complete) {
    currentFile += 1;
    source = source.slice(0, fileExt) + "_part" + currentFile.toString() + ".mp3";
    loadAudioFile();
}

loadAudioFile();
Will my idea work, or would it utterly fail? And what would you suggest I do about the long decoding time?
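A sketch of how the chaining could work in practice: decodeAudioData's success callback is effectively the "file finished" signal, so each part can schedule itself right after the previous one and kick off the next download. The file naming (track_part1.mp3 ...) and the part count are assumptions:

var context = new (window.AudioContext || window.webkitAudioContext)();
var baseName = 'track';   // assumed naming: track_part1.mp3 ... track_part4.mp3
var totalParts = 4;       // assumed number of pre-split files
var nextStartTime = 0;

function loadPart(n) {
    var request = new XMLHttpRequest();
    request.open('GET', baseName + '_part' + n + '.mp3', true);
    request.responseType = 'arraybuffer';
    request.onload = function () {
        context.decodeAudioData(request.response, function (buffer) {
            var source = context.createBufferSource();
            source.buffer = buffer;
            source.connect(context.destination);
            if (nextStartTime === 0) nextStartTime = context.currentTime;
            source.start(nextStartTime);         // schedule seamlessly
            nextStartTime += buffer.duration;    // next part starts right after
            if (n < totalParts) loadPart(n + 1); // decode callback == "finished"
        });
    };
    request.send();
}

loadPart(1);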