does new Audio preload the sound file? - javascript

I get an audio element and play the sound like so:
let audio = document.querySelector(`audio[data-key="${e.key}"]`);
audio.play();
However, sometimes (I use Chrome) there is an initial delay when I play a sound, so I also added this code:
let audioElems = document.querySelectorAll('audio');
audioElems.forEach(function (audio) {
    new Audio(`./sounds/${audio.dataset.key}.mp3`);
});
It seemed to make a difference at first but now again sometimes I get the delay. Does the extra code make a difference, will using new Audio actually preload the sound file?

It doesn't necessarily preload it, but it creates an HTMLAudioElement with the preload attribute set to auto.
This means that UAs are told they should preload the resource if they deem it appropriate; e.g., they could ignore the hint on a mobile data connection.
console.log(new Audio); // logs an <audio preload="auto"> element
Now to your issue: Chrome is known to allow no more than 6 simultaneous connections per origin. So if your audioElems NodeList contains more than 6 elements, that would explain why some of them are delayed until the first ones have been fetched.
Also, there will always be at least a little delay, since everything in a MediaElement is asynchronous. Depending on what you are after, you may get better results using the Web Audio API.
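For example, here is a minimal Web Audio sketch, assuming the same data-key attributes and ./sounds/<key>.mp3 layout as in the question; the files are fetched and decoded up front, and each key press just starts a fresh buffer source, which typically has lower latency than audio.play():
const ctx = new (window.AudioContext || window.webkitAudioContext)();
const buffers = {};
// fetch and decode every sound once, up front
document.querySelectorAll('audio').forEach(async (el) => {
    const res = await fetch(`./sounds/${el.dataset.key}.mp3`);
    buffers[el.dataset.key] = await ctx.decodeAudioData(await res.arrayBuffer());
});
// playing a decoded buffer starts almost immediately
function playKey(key) {
    if (!buffers[key]) return; // not decoded yet
    const source = ctx.createBufferSource();
    source.buffer = buffers[key];
    source.connect(ctx.destination);
    source.start();
}
Note that the context may start suspended; you may need to call ctx.resume() from a user gesture before anything is audible.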

HTML5 audio/video tags have an optional preload attribute. Is this attribute currently enabled on your audio tag?
https://www.w3schools.com/tags/av_prop_preload.asp
Using the new Audio() constructor defaults to preload="auto", so that does make a difference.
https://developer.mozilla.org/en-US/docs/Web/API/HTMLAudioElement
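If your markup doesn't already set it, a minimal sketch of hinting the browser on the existing tags, instead of creating throwaway Audio objects, could look like this (preload is only a hint, so browsers are still free to ignore it):
document.querySelectorAll('audio').forEach(function (audio) {
    audio.preload = 'auto'; // ask the browser to fetch the whole file up front
    audio.load();           // restart resource selection so the new hint applies
});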

Related

Applying effects to MediaStream in audio tag streaming Shoutcast

I've got a project I've been working on that broke due to an update in Chrome and I've tried everything I can think of to fix it, but I think my underlying implementation is the problem.
In my project, I'm setting the src of an HTML5 audio tag to a Shoutcast link. I then capture the media stream, call createMediaStreamSource with it, and apply the filters I want to the audio (low-pass filter, etc.). In Firefox, I can then call play directly on the new stream source and it plays with the given effects. In Chrome, however, since the audio element is not playing, calling play on the stream source does nothing. But if I play the audio element, Chrome plays both the audio element and the stream source. In older versions of Chrome, I could just mute the audio element and the stream would continue playing, but in newer versions this doesn't work.
Is there some better way to be doing this? I just want to play the stream source with new effects. Maybe some way of modifying the stream source with the effects, then setting that back to the audio tag? A different way of playing the streaming audio?
For additional context, I'm using a library called Pizzicato to do this. Code looks roughly like this:
audioElement.oncanplaythrough = () => {
    audioElement.oncanplaythrough = null;
    let capturedStream;
    if (options.audioElement.mozCaptureStream) {
        capturedStream = context.createMediaStreamSource(options.audioElement.mozCaptureStream());
    } else {
        capturedStream = context.createMediaStreamSource(options.audioElement.captureStream());
    }
    applyEffects(capturedStream, soundEffects); // connects effect AudioNodes together
    audioStream.play();
};
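For reference, here is a hypothetical sketch of what an applyEffects-style helper boils down to in plain Web Audio terms (this is not Pizzicato's actual API), using a BiquadFilterNode as the low-pass filter:
// route a captured stream through a low-pass filter to the speakers,
// independently of the <audio> element's own output
function applyLowPass(context, sourceNode) {
    var lowPass = context.createBiquadFilter();
    lowPass.type = 'lowpass';
    lowPass.frequency.value = 1000; // cutoff in Hz, arbitrary for the example
    sourceNode.connect(lowPass);
    lowPass.connect(context.destination);
}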

js / html5 audio: Why is canplaythrough not fired on iOS safari?

I use the code below to preload an array of audio files (after the user interacts with a button that starts the process). Once all audio files have fired "canplaythrough", the code proceeds:
var loaded = 0;
function loadedAudio() {
    // this will be called every time an audio file is loaded
    // we keep track of the loaded files vs the requested files
    loaded++;
    console.log(loaded + " audio files loaded!");
    if (loaded == audioFiles.length) {
        // all have loaded
        main();
    }
}
function preloadsounds()
{
    $("#loader").show();
    console.log(level.config);
    audioFiles = level.config.soundfiles;
    // we start preloading all the audio files with html audio
    for (var i in audioFiles) {
        preloadAudio(audioFiles[i]);
    }
}
function preloadAudio(url)
{
    console.log("trying to preload " + url);
    var audio = new Audio();
    // once this file loads, it will call loadedAudio()
    // the file will be kept by the browser as cache
    audio.addEventListener('canplaythrough', loadedAudio, false);
    audio.addEventListener('error', function failed(e)
    {
        console.log("COULD NOT LOAD AUDIO");
        $("#NETWORKERROR").show();
    });
    audio.src = url;
}
This works great on Android (Chrome and Firefox), but not a single canplaythrough event is fired in iOS Safari (tested live on a 5s and an emulated X, both on 11.x). All files are served from the same origin. I also don't get any error messages in my log.
What am I missing?
(The basis for the code above comes from: https://stackoverflow.com/a/31351186/2602592 )
Try calling the load() method after setting the src.
function preloadAudio(url)
{
    console.log("trying to preload " + url);
    var audio = new Audio();
    // once this file loads, it will call loadedAudio()
    // the file will be kept by the browser as cache
    audio.addEventListener('canplaythrough', loadedAudio, false);
    audio.addEventListener('error', function failed(e)
    {
        console.log("COULD NOT LOAD AUDIO");
        $("#NETWORKERROR").show();
    });
    audio.src = url;
    audio.load(); // add this line
}
Have you checked the Network tab (or your server logs) and confirmed that Safari is actually downloading the audio files?
If Safari is not downloading the audio (or only loading metadata instead of the full file), you could try setting audio.preload = 'auto' before setting audio.src.
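For example, a sketch of the relevant lines of preloadAudio with the preload hint set explicitly before src, combined with the load() suggestion above:
var audio = new Audio();
audio.preload = 'auto'; // request the full file, not just metadata
audio.addEventListener('canplaythrough', loadedAudio, false);
audio.src = url;
audio.load();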
In addition to the above, on a project where I'm reusing the same audio element multiple times by reassigning src at runtime, there were several more steps required. I would not get any canplaythrough event whatsoever unless I did at least the following:
set preload="auto" on the element before setting src
set src
call load() after setting src
But after most of a day of print-statement debugging and setting watchdog timeouts (since iOS inspection through Mac Safari is highly prone to both hard locks and losing track of where it is...), I inadvertently stumbled across one more factor:
set audio.currentTime=0 before reassigning src
A currentTime == 0 check happened to be the gating condition within my watchdog timeout to see whether the audio had in fact cascaded through my canplaythrough handler and begun to play, but it turns out that resetting it ahead of time, so it would definitely be 0 afterwards if the load/play failed, made the load not fail. Go figure. For the record, I was previously also seeing 206 responses in the asset/network inspector on failed files, as reported by Stephen in earlier answer commentary, so I guess iOS always loads a bit of the file but gives up trying to load any more if the play head is already further along than the load progress.
Anyway, this miraculously let the audio load in some circumstances, but if the audio load was triggered by, e.g., a message arriving from another frame, I still saw no canplaythrough, and possibly no events whatsoever (I didn't check for lesser events, since recovering from a playback halt due to canplay-but-not-canplaythrough was going to be worse for me than not playing at all). So that watchdog timer from my debugging became structural:
kick off a setTimeout(() => { if (audio.readyState == 4) audio.play(); else presumeError(); }, 1000);
It turns out that most of the time the audio is in fact loading; Safari just doesn't let you know.
HOWEVER, it also turns out that in some circumstances where you don't get load events, various other tools are equally broken, like setTimeout(): it flat out doesn't run the callback. There's at least one Stack Overflow question on this sort of issue from the mid-2010s, circa iOS 6, that has been copy-pasted onto a million other sketchier support sites with the same dubious answer involving not using .bind(), or else rewriting .bind(), but I doubt most folks are using .bind() to begin with. What some other answers point to is https://gist.github.com/ronkorving/3755461, which may be overkill and which I didn't use in full, but I did steal the rough concept of:
if setTimeout() isn't working (or you just want finer granularity on your load watcher), write your own based on a (requestAnimationFrame || webkitRequestAnimationFrame)(keepTrackOfRequestedAudio); loop.
So now, if preload isn't handled, you get the notice after you manually call load(); if the manual load() isn't handled, you get the notice when you check in after a delay; and if the delay isn't handled, you at least get the notice (or can proactively give up) by burning cycles to constantly watch state. Of course, none of this guarantees your audio has permission to play to begin with, but that's an entirely different iOS problem (hint: .play() returns a promise, and you can .catch() that promise for permission errors on any platform I've tried so far).
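Here is a minimal sketch of the watchdog idea described above, assuming an audio element whose src has just been (re)assigned; the names onReady and onLoadFailed and the 3000 ms timeout are placeholders, not anything iOS-specific:
var raf = (window.requestAnimationFrame || window.webkitRequestAnimationFrame).bind(window);
function watchAudio(audio, onReady, onLoadFailed, timeoutMs) {
    var start = performance.now();
    function check() {
        if (audio.readyState === 4) {                       // HAVE_ENOUGH_DATA, i.e. canplaythrough territory
            onReady(audio);
        } else if (performance.now() - start > timeoutMs) {
            onLoadFailed(audio);                            // give up; Safari never told us anything
        } else {
            raf(check);                                     // poll again next frame
        }
    }
    raf(check);
}
// usage: play when ready, and catch the play() promise for permission errors
watchAudio(audio, function (a) {
    a.play().catch(function (err) { console.log("playback blocked:", err); });
}, function () { console.log("audio never became ready"); }, 3000);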
I see a lot of claims about how to make an Audio object work, especially on Safari!
To keep it short: you do not need much, and the following works in Safari 5 and later, and in all browsers.
Force the canplaythrough event by (re)loading the first file in your list, or your only file if you just use one. The canplaythrough event is the one that lets the user play the file, since it tells you the file is loaded and ready to be played. If you build a player, you should only let the user click Play once canplaythrough has fired.
ObjectAudio.src = file;
Use canplaythrough to update your player:
ObjectAudio.addEventListener('canplaythrough', Function, false);
Use progress to update the buffering percentage bar:
ObjectAudio.addEventListener('progress', Function, false);
Use the timeupdate event to update the playback position bar:
ObjectAudio.addEventListener('timeupdate', Function, false);
You do not need anything as complex as what I see you doing.
** One caveat about Safari 5 on Windows: for the Audio object to work, QuickTime must be installed on the user's machine, otherwise Safari 5 will not recognize the HTML5 Audio object.

Does setting an Audio object to undefined cause memory leaks?

I'm currently writing an application that makes use of the HTML5 Audio API. In Chrome, IE and Firefox I've noticed I can create a Javascript audio object, set it to play a sound file, then make it undefined and the sound will still play, as per this example:
var a = new Audio;
a.src = 'longAudioFile.mp3';
a.play();
a = undefined;
As I am working with many Audio objects in a similar way, will this cause memory leaks if I set one to undefined or will the browser clean it up when it's finished playing/set to paused?
According to spec:
Media elements must not stop playing just because all references to them have been removed; only once a media element is in a state where no further audio could ever be played by that element may the element be garbage collected.
It is possible for an element to which no explicit references exist to play audio, even if such an element is not still actively playing: for instance, it could have a current media controller that still has references and can still be unpaused, or it could be unpaused but stalled waiting for content to buffer.
Playing the Media Resource
There are also cleanup instructions:
<...> to release resources held by media elements when they are done playing, either by being very careful about removing all references to the element and allowing it to be garbage collected, or, even better, by removing the element's src attribute and any source element descendants, and invoking the element's load() method.
Best practices for authors using media elements
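A minimal sketch of that cleanup recipe, assuming you are done with the Audio object a from the question:
a.removeAttribute('src'); // drop the media resource reference
a.load();                 // abort and release the current resource
a = undefined;            // now nothing keeps the element alive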
As far as I know, the browser will clean it up when it's finished playing.

javascript Audio object vs. HTML5 Audio tag

In a recent project, when I loaded a sound with
var myAudio = new Audio("myAudio.mp3");
myAudio.play();
It played fine unless a dialogue was opened (i.e. alert, confirm). However, when I instead tried adding an audio tag in my HTML
<audio id="audio1">
    <source src="alarm.mp3" type="audio/mpeg" />
</audio>
and using
var myAudio1 = document.getElementById("audio1");
myAudio1.play()
it continued to play after a dialogue was opened. Does anyone know why this is? Also, more generally, what are the differences between the two ways of playing sounds?
According to this wiki entry at Mozilla, <audio> and new Audio() should be the same, but it doesn't look like that is the case in practice. Whenever I need to create an audio object in JavaScript I actually just create an <audio> element, like this:
var audio = document.createElement('audio');
That actually creates an audio element that you can use exactly like an <audio> element that was declared in the page's HTML.
To recreate your example with this technique you'd do this:
var audio = document.createElement('audio');
audio.src = 'alarm.mp3';
audio.play();
JavaScript halts during an Alert or Confirm box.
You cannot run code and display an alert(), confirm(), or prompt() at the same time; it literally waits for user input. This is a core feature of JavaScript.
I am assuming that is the very reason an audio file played entirely within JavaScript scope behaves this way. By comparison, Flash video clips or HTML5 audio/video elements will continue to play even when a JavaScript alert/confirm/prompt is open.
As for which method is better, well, that is up to you. It is pretty archaic to do anything with the JavaScript built-in alert/confirm/prompt anymore; there are much better-looking prompts you can build with jQuery UI and so on.
If you have a lot of dynamic content on the page, or you are looking into buffering audio in the background before it needs to be triggered and so on, then JavaScript is probably the saner way to go about things.
If you have literally just one player on the screen, then there is no excuse for not putting it into the HTML. Although unlikely to affect anyone these days, it is still bad practice to rely heavily on JavaScript when there is no reason to.
I came up with the function below from several answers across the web.
function playAudio(url){
    var audio = document.createElement('audio');
    audio.src = url;
    audio.style.display = "none"; //added to fix ios issue
    audio.autoplay = false; //avoid the "user has not interacted with your page" issue
    audio.onended = function(){
        audio.remove(); //remove after playing to clean the DOM
    };
    document.body.appendChild(audio);
    audio.play(); //start playback explicitly, since autoplay is deliberately off
}
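A possible usage, assuming a hypothetical #playBtn button, so that the call happens inside a user gesture and autoplay policies are satisfied:
document.querySelector("#playBtn").onclick = function () {
    playAudio("alarm.mp3"); // any audio URL reachable by the page
};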
If you create the element this way you can still have problems on iOS, because it can show up even if you set width: 0px (which is why display is set to "none" above).
var myAudio = new Audio("myAudio.mp3"); is faster because it does not interact with the DOM.
If you are using multiple audio files and/or don't need the user to interact with the player controls, you should definitely choose new Audio(), where the DOM is not involved.
First, let me address the difference between them.
The audio tag in HTML and the new Audio() object in JS differ, if at all, only in subtle and insignificant ways. They actually do the same thing.
If you just want to include audio inside your web page, then using the HTML tag is a good fit and recommended.
If you would like the audio to play in response to an interaction from the user, then the JavaScript Audio object is a good fit and recommended. For instance:
document.querySelector("button").onclick = () => {
    let audio = new Audio("audio-url.mp3"); // placeholder URL
    audio.play();
};
Besides, that's the primary purpose of JavaScript.
Now, the reason the audio still plays when the dialogue opens when you use the HTML audio tag is that the browser first loads your HTML file and executes its content until it encounters the script tag, at which point it loads the JavaScript file as well. All I'm trying to say is that the audio tag was already parsed by the browser even before the script loaded.
JavaScript pauses when an alert(), prompt() or confirm() is encountered. Thus the audio "playing fine after an alert was opened". (•‿•)

XMLHttpRequest to download large HTML5 video in chunks?

I am trying to load a large video into a <video> tag using XMLHttpRequest. I've successfully gotten this to work with small video files using the following code:
window.URL = window.URL || window.webkitURL;
var xhr = new XMLHttpRequest();
xhr.open('GET', 'quicktest.mp4', true);
xhr.responseType = 'blob';
xhr.onload = function(e) {
    var video = document.createElement('video');
    video.src = window.URL.createObjectURL(this.response);
    video.autoplay = true;
    document.body.appendChild(video);
};
xhr.send();
No problem if the video is small. However, my file is quite large, and loading it creates an out-of-memory error, causing Chrome to crash. Firefox won't even finish loading it, either. After hours of searching I've seen two separate instances of people suggesting to 'download the file in chunks' and put them back together before handing it to the <video> tag, and I've even found one very promising solution that deals with this exact scenario. However, I'm at a complete loss as to how to implement this workaround. If someone could point me in the right direction as to what would be required to implement this fix, or anything else, I would really appreciate it.
If you're curious why I'm even bothering to load video using XHR: I want to autoplay my large video, but Chrome always jumps the gun and starts playing it immediately, thinking it can play the entire thing with no problem (the "canplaythrough" event is unreliable), and I can't seem to find a way to get Chrome to buffer the entire video before playing. So I'm resorting to XHR, but having no luck.
It's unfortunate that the browser needs to do so much copying in this scenario - the code does seem like it should work, since you're creating a blob URL and not a super long dataURI string. But sure enough, this method is very slow. However, there is another solution that should give you what you want without messing about with XHR.
Create a video element and set src to your mp4 (or webm for Firefox) file. Add a listener for the 'progress' event, which should fire regularly as the browser downloads the file. In that listener, you can check the buffered property to see how much of the video has downloaded and make your own decision as to whether you're ready to start playing.
Now, here's where it gets a little bit ugly. Even with preload, the browser will still only buffer a little bit of the video, probably about the same amount it needs before firing canplaythrough. You'll only get one or two progress events, and then nothing. So, when you create the video element, set it to autoplay, and then hide it and mute it. That should force the browser to download the rest of the video and trigger more progress events. When it's fully buffered, or buffered enough to satisfy you, set currentTime back to zero, unmute, and unhide.
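A hedged sketch of that approach; the 0.99 threshold and the hide/show handling are assumptions for illustration, not exact values from the answer:
var video = document.createElement('video');
video.preload = 'auto';
video.src = 'quicktest.mp4';   // same file as in the question
video.autoplay = true;         // keeps the browser downloading
video.muted = true;            // the hidden warm-up playback stays silent
video.style.display = 'none';
document.body.appendChild(video);
video.addEventListener('progress', function onProgress() {
    if (!video.duration || video.buffered.length === 0) return;
    var bufferedEnd = video.buffered.end(video.buffered.length - 1);
    if (bufferedEnd / video.duration > 0.99) {   // "fully" buffered, more or less
        video.removeEventListener('progress', onProgress);
        video.currentTime = 0;                   // rewind the warm-up playback
        video.muted = false;
        video.style.display = '';
    }
});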
