I am using the Web Audio API's createMediaElementSource, which works fine in Firefox (Gecko) and Chrome (Blink) but not Safari (WebKit). This is a big problem for me, since I prefer getting the audio from my HTML5 audio players rather than using XMLHttpRequests, the latter being too slow.
My first attempt was to take the source URL as a string from the audio tag and request it with an XMLHttpRequest. As expected it works, but the decoding is very slow, and I can't pause the audio with stop(), because resuming triggers another full decode of the entire file before anything can be heard.
A Stack Overflow user named Kevin Ennis gave me an important piece of advice, which is a really great idea:
You could break the audio up into a number of smaller files. Like,
maybe break it up into 4 separate 1MB audio files and load them in
order. Then you can start playback after the first one loads, and
while that's playing, you load the other ones.
My question is, how do I do this technically? I am not aware of any function that checks whether an audio file has finished decoding.
I imagine it would look something like this:
var source = document.getElementsByTagName("audio")[0].src;
var fileExt = source.lastIndexOf('.');
var currentFile = 1;
var totalFiles = 4; // however many parts the file was split into

var loadAudioFile = function () {
    var request = new XMLHttpRequest();
    // e.g. "song_part1.mp3", "song_part2.mp3", ...
    var url = source.slice(0, fileExt) + "_part" + currentFile + ".mp3";
    request.open("GET", url, true);
    request.responseType = "arraybuffer";
    request.onload = function () {
        context.decodeAudioData(request.response, function (buffer) {
            convolver.buffer = buffer;
            // decoding of this part is complete; fetch the next one
            if (currentFile < totalFiles) {
                currentFile += 1;
                loadAudioFile();
            }
        });
    };
    request.send();
};
loadAudioFile();
Will my idea work or would it utterly fail? What would you suggest I do about the long decoding time?
I'm trying to optimize the loading times of audio files in a project where we need to use an AudioBufferSourceNode, which requires the audio buffer to be fully loaded.
Is it possible to load, say, the first 10 minutes of the audio first and play it while downloading the rest in the background, and later create another source node loaded with the second part of the file?
My current implementation loads all of the audio first, which isn't great because it takes time. My files are 60-70 MB.
function getData() {
    source = audioCtx.createBufferSource();
    var request = new XMLHttpRequest();
    request.open('GET', 'viper.ogg', true);
    request.responseType = 'arraybuffer';
    request.onload = function () {
        var audioData = request.response;
        audioCtx.decodeAudioData(audioData, function (buffer) {
            source.buffer = buffer;
            source.connect(audioCtx.destination);
            source.loop = true;
        },
        function (e) { console.log("Error with decoding audio data: " + e.err); });
    };
    request.send();
}
I think you can achieve what you want by using the WebCodecs API (which is currently only available in Chrome), but it requires some plumbing.
To get the file as a stream you could use fetch() instead of XMLHttpRequest.
Then you would need to demux the encoded file to get the raw audio data to decode it with an AudioDecoder. With a bit of luck it will output AudioData objects. These objects can be used to get the raw sample data which can then be used to create an AudioBuffer.
There are not many WebCodecs examples available yet. I think the example which shows how to decode an MP4 is the closest to your use case so far.
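In case it helps, here is a rough sketch of the decode side, assuming the demuxing is already solved. The codec string, sample rate, and channel count below are placeholders that would really come from your demuxer:

const decodedChunks = [];
const decoder = new AudioDecoder({
    output: (audioData) => decodedChunks.push(audioData),
    error: (e) => console.error(e),
});
decoder.configure({ codec: 'mp4a.40.2', sampleRate: 44100, numberOfChannels: 2 });

// For every encoded packet your demuxer produces:
//   decoder.decode(new EncodedAudioChunk({ type: 'key', timestamp: t, data: bytes }));

async function toAudioBuffer(audioCtx, sampleRate = 44100, channels = 2) {
    await decoder.flush(); // resolves once every queued packet has been decoded
    const totalFrames = decodedChunks.reduce((n, d) => n + d.numberOfFrames, 0);
    const buffer = audioCtx.createBuffer(channels, totalFrames, sampleRate);
    let offset = 0;
    for (const data of decodedChunks) {
        const tmp = new Float32Array(data.numberOfFrames);
        for (let ch = 0; ch < channels; ch++) {
            // Copy one channel's samples out of the AudioData object.
            data.copyTo(tmp, { planeIndex: ch, format: 'f32-planar' });
            buffer.copyToChannel(tmp, ch, offset);
        }
        offset += data.numberOfFrames;
        data.close(); // release the decoder-owned memory
    }
    return buffer;
}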
I've worded my title and tags in a way that should be searchable for both video and audio, as this question isn't specific to one. My specific case only concerns audio, though, so my question body is written specifically for that.
First, the big picture:
I'm sending audio to multiple P2P clients who will connect and disconnect at random intervals. The audio I'm sending is a stream, but each client only needs the part of the stream from the point at which they connected. Here's how I solved that:
Every {timeout} (e.g. 1000ms), create a new audio blob
Blob will be a full audio file, with all metadata it needs to be playable
As soon as a blob is created, convert it to an ArrayBuffer (better browser support) and upload it to the client over WebRTC (or WebSockets if WebRTC isn't supported)
That works well. There is a delay, but if you keep the timeout low enough, it's fine.
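A simplified sketch of how such a standalone blob can be produced every {timeout}: restarting a MediaRecorder each interval keeps every blob a complete, playable file, whereas its timeslice mode would only put the file header in the first blob:

function recordChunk(stream, timeout, onChunk) {
    const recorder = new MediaRecorder(stream, { mimeType: 'audio/webm' });
    recorder.ondataavailable = ({ data }) => onChunk(data); // a complete file
    recorder.start();
    setTimeout(() => {
        recorder.stop(); // flushes the finished blob
        recordChunk(stream, timeout, onChunk); // immediately begin the next one
    }, timeout);
}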
Now, my question:
How can I play my "stream" without having any audible delay?
I say stream, but I didn't implement it using the Streams API; it's a queue of blobs that gets updated every time the client receives new data.
I've tried a lot of different things like:
Creating a BufferSource and merging two blobs (converted to AudioBuffers), then playing that
Passing an actual stream from Stream API to clients instead of blobs
Playing blobs sequentially, relying on ended event
Loading next blob while current blob is playing
Each has problems, difficulties, or still results in an audible delay.
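One of the Web Audio attempts looked roughly like this (a simplified sketch, not my exact code): decode each incoming blob and schedule it to start exactly when the previous one ends:

const ctx = new AudioContext();
let playhead = 0; // AudioContext time at which the next chunk should start

async function scheduleChunk(blob) {
    const audioBuffer = await ctx.decodeAudioData(await blob.arrayBuffer());
    const source = ctx.createBufferSource();
    source.buffer = audioBuffer;
    source.connect(ctx.destination);
    // Start now for the first chunk, otherwise exactly when the previous chunk ends.
    const startAt = Math.max(ctx.currentTime, playhead);
    source.start(startAt);
    playhead = startAt + audioBuffer.duration;
}

Even with sample-accurate scheduling like that, each blob being a complete encoded file means the codec can add priming/padding samples at its edges, which may account for part of the audible gap.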
Here's my most recent attempt at this:
let firstTime = true;
const chunks = [];

Events.on('audio-received', ({ detail: audioChunk }) => {
    chunks.push(audioChunk);

    if (firstTime && chunks.length > 2) {
        const currentAudio = document.createElement("audio");
        currentAudio.controls = true;
        currentAudio.preload = 'auto';
        document.body.appendChild(currentAudio);
        currentAudio.src = URL.createObjectURL(chunks.shift());
        currentAudio.play();

        const nextAudio = document.createElement("audio");
        nextAudio.controls = true;
        nextAudio.preload = 'auto';
        document.body.appendChild(nextAudio);
        nextAudio.src = URL.createObjectURL(chunks.shift());

        let currentAudioStartTime, nextAudioStartTime;

        currentAudio.addEventListener("ended", () => {
            nextAudio.play();
            nextAudioStartTime = new Date();
            if (chunks.length) {
                currentAudio.src = URL.createObjectURL(chunks.shift());
            }
        });

        nextAudio.addEventListener("ended", () => {
            currentAudio.play();
            currentAudioStartTime = new Date();
            console.log(currentAudioStartTime - nextAudioStartTime);
            if (chunks.length) {
                nextAudio.src = URL.createObjectURL(chunks.shift());
            }
        });

        firstTime = false;
    }
});
The audio-received event gets called every ~1000ms. This code works; it plays each "chunk" after the previous one finishes, but on Chrome there is a ~300ms gap that's very audible: it plays the first chunk, goes quiet, then plays the second, and so on. On Firefox the gap is only ~50ms.
Can you help me?
I can try to create a reproducible example if that would help.
In the project I'm working on, we have about 30 audio tracks to which we apply filters before playing the audio back. Originally this was done server-side, which returned a Base64 string for each track that I then loaded with new Audio().
This worked well if you had fast internet, but on slow connections it could take up to an hour for the tracks to come back from the server, so now we're applying the filters client-side.
Applying the filters is no problem, but I'm trying not to rewrite my entire playback algorithm (it's much more involved than just pause, play, stop), and I'm wondering if I can encode the audio from an AudioContext to Base64.
I've tried creating a new Audio and passing it the AudioContext, creating a new Audio and passing it the AudioBuffer, and something based on this example. But none of it works, and I can't find any examples of what I'm trying to do on the internet.
If someone could take a look at my code and help me out, I'd greatly appreciate it. Thanks in advance!
var audioCtx = new AudioContext();
var source = audioCtx.createBufferSource();
var request = new XMLHttpRequest();

request.open("GET", "/path/to/audio", true);
request.responseType = "arraybuffer";

request.onload = function () {
    audioCtx.decodeAudioData(request.response, function (buffer) {
        source.buffer = buffer;
        // Apply filters to the audio
        // Here I would like to convert the audio to Base64
        callback(source);
    }, function (error) {
        console.error("decodeAudioData error", error);
    });
};

request.send();
It's a bit hard to know exactly what you want from the snippet you give, but based on it, you might be able to use an OfflineAudioContext if you know how long your audio files are. The offline context returns an AudioBuffer, which you can then use to produce a Base64-encoded audio result.
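Here is a sketch of that idea, assuming your filters can run in an offline graph and that uncompressed WAV output is acceptable. The WAV and Base64 helpers are minimal hand-rolled versions, not library calls:

async function renderToBase64(decodedBuffer) {
    const offlineCtx = new OfflineAudioContext(
        decodedBuffer.numberOfChannels,
        decodedBuffer.length,
        decodedBuffer.sampleRate
    );
    const source = offlineCtx.createBufferSource();
    source.buffer = decodedBuffer;
    // ...insert your filter nodes between source and offlineCtx.destination...
    source.connect(offlineCtx.destination);
    source.start();
    const rendered = await offlineCtx.startRendering();
    return toBase64(encodeWav(rendered));
}

// Minimal 16-bit PCM WAV encoder (44-byte header + interleaved samples).
function encodeWav(buffer) {
    const numCh = buffer.numberOfChannels;
    const dataLen = buffer.length * numCh * 2;
    const view = new DataView(new ArrayBuffer(44 + dataLen));
    const writeStr = (off, s) => { for (let i = 0; i < s.length; i++) view.setUint8(off + i, s.charCodeAt(i)); };
    writeStr(0, "RIFF"); view.setUint32(4, 36 + dataLen, true); writeStr(8, "WAVE");
    writeStr(12, "fmt "); view.setUint32(16, 16, true); view.setUint16(20, 1, true);
    view.setUint16(22, numCh, true); view.setUint32(24, buffer.sampleRate, true);
    view.setUint32(28, buffer.sampleRate * numCh * 2, true);
    view.setUint16(32, numCh * 2, true); view.setUint16(34, 16, true);
    writeStr(36, "data"); view.setUint32(40, dataLen, true);
    let off = 44;
    for (let i = 0; i < buffer.length; i++) {
        for (let ch = 0; ch < numCh; ch++) {
            const s = Math.max(-1, Math.min(1, buffer.getChannelData(ch)[i]));
            view.setInt16(off, s < 0 ? s * 0x8000 : s * 0x7fff, true);
            off += 2;
        }
    }
    return view.buffer;
}

// btoa over the bytes, chunked to avoid call-stack limits on large files.
function toBase64(arrayBuffer) {
    const bytes = new Uint8Array(arrayBuffer);
    let binary = "";
    for (let i = 0; i < bytes.length; i += 0x8000) {
        binary += String.fromCharCode.apply(null, bytes.subarray(i, i + 0x8000));
    }
    return btoa(binary);
}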
I am trying to do the following:
On the server I encode H.264 packets into a WebM (MKV) container structure, so that each cluster gets a single frame packet. Only the first data chunk is different, as it contains something called the Initialization Segment. Here it is explained quite well.
Then I stream those clusters one by one in a binary stream via WebSocket to a browser (Chrome).
It probably sounds weird that I use the H.264 codec and not VP8 or VP9, which are the native codecs for the WebM video format. But it appears that the HTML video tag has no problem playing this sort of video container: if I just write the whole stream to a file and pass it to video.src, it plays fine. But I want to stream it in real time; that's why I am breaking the video into chunks and sending them over a WebSocket.
On the client, I am using the MediaSource API. I have little experience with Web technologies, but I found that it's probably the only way to go in my case.
And it doesn't work. I am getting no errors, the stream runs OK, and the video object emits no warnings or errors (checked via the developer console).
The client side code looks like this:
<script>
$(document).ready(function () {
    var sourceBuffer;
    var player = document.getElementById("video1");
    var mediaSource = new MediaSource();
    player.src = URL.createObjectURL(mediaSource);
    mediaSource.addEventListener('sourceopen', sourceOpen);

    // array with incoming segments:
    var mediaSegments = [];

    var ws = new WebSocket("ws://localhost:8080/echo");
    ws.binaryType = "arraybuffer";

    player.addEventListener("error", function (err) {
        $("#id1").append("video error " + err.error + "\n");
    }, false);
    player.addEventListener("playing", function () {
        $("#id1").append("playing\n");
    }, false);
    player.addEventListener("progress", onProgress);

    ws.onopen = function () {
        $("#id1").append("Socket opened\n");
    };

    function sourceOpen() {
        sourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.64001E"');
    }

    function onUpdateEnd() {
        if (!mediaSegments.length) {
            return;
        }
        sourceBuffer.appendBuffer(mediaSegments.shift());
    }

    var initSegment = true;
    ws.onmessage = function (evt) {
        if (evt.data instanceof ArrayBuffer) {
            var buffer = evt.data;
            // the first segment is always the init segment;
            // it must be appended to the buffer first
            if (initSegment == true) {
                sourceBuffer.appendBuffer(buffer);
                sourceBuffer.addEventListener('updateend', onUpdateEnd);
                initSegment = false;
            } else {
                mediaSegments.push(buffer);
            }
        }
    };
});
</script>
I also tried different profile codes in the MIME type, even though I know my codec is high profile. I tried the following profiles:
avc1.42E01E (baseline profile)
avc1.58A01E (extended profile)
avc1.4D401E (main profile)
avc1.64001E (high profile)
In some examples I found from 2-3 years ago, I have seen developers using type="video/x-matroska", but a lot has probably changed since then, because now even video.src doesn't handle this sort of MIME type.
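As a quick sanity check, MediaSource.isTypeSupported can be used to probe which container/codec strings the browser will actually accept; for example, in the console:

['video/mp4; codecs="avc1.64001E"',
 'video/webm; codecs="avc1.64001E"',
 'video/x-matroska; codecs="avc1.64001E"'
].forEach(function (t) {
    console.log(t, MediaSource.isTypeSupported(t));
});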
Additionally, to make sure the chunks I am sending through the stream are not corrupted, I opened a local streaming session in VLC and it played progressively with no issues.
The only thing I suspect is that MediaSource doesn't know how to handle this sort of hybrid container, though I then wonder why the video object plays such a video fine. Am I missing something in my client-side code? Or does the MediaSource API indeed not support this type of media?
PS: For those curious why I am using an MKV container and not, for example, MPEG-DASH: container simplicity, data-writing speed, and size. EBML structures are very compact and easy to write in real time.
In Google Chrome:
One .wav file is played, looping. Another .wav file is played from time to time as a sound effect.
When the sound effect plays, the volume of the looping sound automatically decreases, then gradually rises again over about 15 seconds.
(I guess it's automatic ducking: http://en.wikipedia.org/wiki/Ducking)
I don't want the volume of the loop to decrease when the sound effect plays. How can I prevent this behaviour?
Example: http://www.matthewgatland.com/games/takedown/play/web/audiofail.html
window.AudioContext = window.AudioContext || window.webkitAudioContext;
var context = new AudioContext();

var play = function (buffer, loop) {
    var source = context.createBufferSource();
    source.buffer = buffer;
    if (loop) source.loop = true;
    source.connect(context.destination);
    source.start(0);
};

var load = function (url, callback) {
    var request = new XMLHttpRequest();
    request.open('GET', url, true);
    request.responseType = 'arraybuffer';
    request.onload = function () {
        context.decodeAudioData(request.response, function (buffer) {
            callback(buffer);
        }, null);
    };
    request.send();
};

var musicSound;
var thudSound;

load("res/snd/music0.wav", function (buffer) {
    musicSound = buffer;
});
load("res/snd/thud0.wav", function (buffer) {
    thudSound = buffer;
});
Once the sounds have loaded, call:
play(musicSound, true); //start the music looping
//each time you call this, the music becomes quiet for a few seconds
play(thudSound, false);
You might have to do some sound design before you put this into your website. I don't know what you are using as an editor, but you may want to edit the sounds so that their overall levels are closer together. The combination of both sounds is too loud, so the louder of the two brings down the level of the softer one; if you bring them closer together in level, the difference won't be as dramatic when (or if) the automatic gain reduction kicks in.
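If re-editing the files isn't an option, you could also approximate the same level matching in code by routing each source through a GainNode. A sketch based on the play function above; the 0.4 is a made-up starting value you would tune by ear:

var play = function (buffer, loop, gainValue) {
    var source = context.createBufferSource();
    source.buffer = buffer;
    if (loop) source.loop = true;
    // Attenuate before the destination so both sounds sit at similar levels.
    var gain = context.createGain();
    gain.gain.value = gainValue;
    source.connect(gain);
    gain.connect(context.destination);
    source.start(0);
};

play(musicSound, true, 1.0);
play(thudSound, false, 0.4); // tune by ear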