I have an FFT display that plots to a canvas at high speed. I want to optimize the code so that 16 browser windows can show it at the same time at 60 fps, or close to it. Right now on my machine it runs at 5 fps with 16 windows open simultaneously.
I was wondering if there was a better way to optimize my code for drawing performance.
With this I am getting 60 fps for up to four simultaneous browser windows, but the fps drops significantly after that. Right now I am loading all the files into an array buffer, then manipulating the points and drawing them at the same time in drawFFT(). Any tips on improving fps performance with multiple browser windows running at the same time?
60 fps animation on multiple windows
99.99% of the time, requestAnimationFrame is the way to do visual animations. It's a great tool, synced with the screen refresh rate, with great timing precision, and battery friendly.
This last advantage is your problem: to save power, browsers only run the screen-synchronized callbacks at full rate for the currently focused window. All other windows are throttled to a lower frame rate.
Since you want multiple windows, only one of them can be focused, and thus only one will refresh at 60 fps; all the others will be slowed down to about 5 fps.
How to circumvent this?
The Web Audio API has its own low-level, high-precision clock system.
By "low-level", I mean that this clock system is not tied to the main js thread*. On some implementations (Chrome), the whole Web Audio API even runs on a parallel thread. And, more importantly for our case, this clock system is not tied only to the focused window. This means that we can run code in a background window at 60 fps.
Here is a simple implementation of a timing loop based on the Web Audio API clock.
(*Note that while the clock is not tied to the main js thread, the event handler is.)
/*
  An alternative timing loop, based on AudioContext's clock

  #arg callback : a callback function,
       with the audioContext's currentTime passed as unique argument
  #arg frequency : float in ms
  #returns : a stop function
*/
function audioTimerLoop(callback, frequency) {

  var freq = frequency / 1000; // AudioContext time parameters are in seconds
  var aCtx = new AudioContext();

  // Chrome needs our oscillator node to be attached to the destination,
  // so we create a silent Gain Node
  var silence = aCtx.createGain();
  silence.gain.value = 0;
  silence.connect(aCtx.destination);

  var stopped = false; // A flag to know when we'll stop the loop

  function onOSCend() {
    var osc = aCtx.createOscillator();
    osc.onended = onOSCend;            // so we can loop
    osc.connect(silence);
    osc.start(0);                      // start it now
    osc.stop(aCtx.currentTime + freq); // stop it next frame
    callback(aCtx.currentTime);        // one frame is done
    if (stopped) {                     // user broke the loop
      osc.onended = function() {
        aCtx.close();                  // clear the audioContext
      };
    }
  }

  onOSCend();

  // return a function to stop our loop
  return function() {
    stopped = true;
  };
}
And call it like this:
var stop_anim = audioTimerLoop(yourCallback, 1000 / 60); // runs 'yourCallback' about every 16.7ms (~60 fps)
And to stop it:
stop_anim();
All right, now we are able to run a smooth animation even in blurred (unfocused) windows.
But I want to run it in 16 windows!
Unfortunately, browsers are tied to hardware limitations when creating an AudioContext (e.g. on my computer, I cannot have more than 6 contexts running at the same time).
Here the solution is to run all the code in a single, master window.
From this master window, you will:
open all the other windows, so that you can access their content,
grab the contexts of their canvas elements,
and draw on these contexts.
This way, you've got a single update loop, in the main window, and all the other windows only have to render.
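Below is a minimal sketch of that master-window approach. It assumes the popups are same-origin pages (here a hypothetical 'child.html' containing one canvas each) and that drawFFT(ctx, time) stands in for your own drawing routine; adapt the names to your code.

// Master window: open the slave windows and collect their canvas contexts.
const contexts = [];
for (let i = 0; i < 16; i++) {
  const win = window.open('child.html', 'view' + i, 'width=300,height=200');
  win.addEventListener('load', () => {
    const canvas = win.document.querySelector('canvas');
    contexts.push(canvas.getContext('2d'));
  });
}

// Single update loop, driven by the audio-clock timer defined above.
const stop_anim = audioTimerLoop((time) => {
  for (const ctx of contexts) {
    ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
    drawFFT(ctx, time); // your existing drawing routine, called once per window
  }
}, 1000 / 60);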
Live Example
(Be sure to allow popups and to disable your ad-blocker though).
Related
I support several churches that don't have musicians, by providing a little website with a bunch of pure Javascript so they can select music for their services from a collection of about 1100 mp3 and m4a music files. Previously, they created playlists in iTunes or Media Player, but after a track completed, the player would immediately start the next track unless they quickly clicked 'Stop'. So my website allows them to select all their music ahead of time (up to 10 tracks), with a separate "Play" button for each. Hit "Play" and it plays that one track and stops. (Duh.)
I'm encountering delays in loading the files into my "audio" tags - and I need the file to load when they select it so I can display the track duration, which is frequently important to the selection of the music for the service. A delay doesn't occur very often, but often enough to be annoying. Also, the load will occasionally time out completely, even after several attempts. I've experimented with various techniques, like using setTimeout with different values to allow several seconds before checking if it's loaded, or loading 5 or 10 times with shorter timeout values until it's loaded. I created a test page that indicates that the timeouts vary greatly - from 2% to 5% of the time, up to 25% occasionally (during tests of 1,000 to 10,000 random loads).
My first technique was relying on events (I tried both 'canplay' and 'canplaythrough' events with minimal difference):
const testAudio = document.getElementById('test-audio');
let timeStart = Date.now();

function loadMusic(p_file) {
  testAudio.src = p_file;
  testAudio.addEventListener('canplaythrough', musicLoaded);
  timeStart = Date.now();
  testAudio.load();
}

function musicLoaded() {
  console.log('music loaded in ' + (Date.now() - timeStart) + 'ms');
  testAudio.removeEventListener('canplaythrough', musicLoaded);
  /* should I add/remove the listener each time I change the source file? */
}
My second approach (from a post here: https://stackoverflow.com/questions/10235919/the-canplay-canplaythrough-events-for-an-html5-video-are-not-called-on-firefox) is to check the 'readyState' of the audio element after a specified timeout, rather than relying on an event. That question specifically addressed Firefox, so I should mention that in my tests Firefox has horrible load times for both the "events" and the "readyState" techniques: Chrome and Edge vary in the range of 2% to 6% load failures due to timeouts, while Firefox has 27% to 39% load timeouts.
let myTimeout = '';

function loadMusic(p_file) {
  myTimeout = setTimeout(fileTimeout, 1000); /* I've tried various values here */
  testAudio.src = p_file;
  timeStart = Date.now();
  testAudio.load();
}

function fileTimeout() {
  if (testAudio.readyState > 3) {
    console.log('music loaded in ' + (Date.now() - timeStart) + 'ms');
  } else {
    /* here, I've tried calling loadMusic again 5 to 10 times, which sometimes works */
    /* or, just reporting that the load failed... */
    console.log('music FAILED to load!');
  }
}
I have a shared server hosting plan, and I suspect the delay might be due to traffic on my server. Unfortunately, my hosting service turns a deaf ear to anything that might be application or content related (not surprising). And this isn't worth upgrading to a dedicated server just to eliminate that variable. But I suspect that might be a major factor here.
I need a technique that will always work - even if it takes 30 seconds or more. As long as I can display an intermittent "Still loading..." type message, I (and my users) would be satisfied. The "track X won't load" messages happen often enough to be annoying. Early on, I had a few files with bad characters in the file names that needed to be fixed before they would load, so the users think that problem persists. But I know I've fixed all of them now.
Any and all suggestions are welcome - but I'd love to keep everything in plain Javascript.
Using an audio constructor:
let audioConst = null; /* declared outside so fileTimeout() can see it */

function loadMusic(p_file) {
  myTimeout = setTimeout(fileTimeout, 1000);
  audioConst = new Audio();
  audioConst.src = p_file;
  timeStart = Date.now();
}

function fileTimeout() {
  if (audioConst.readyState > 3) {
    console.log('music loaded in ' + (Date.now() - timeStart) + 'ms');
  } else {
    console.log('music FAILED to load!');
  }
  myTimeout = '';
}
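One way to get the "works eventually, with feedback" behaviour described above is to combine the timeout/readyState check with a bounded retry loop and a visible status message. This is only a sketch: the retry count, delay, and showStatus() helper are assumptions, not part of the original code.

function loadWithRetry(p_file, maxTries, timeoutMs) {
  const audio = new Audio();
  let tries = 0;
  let timer = null;

  function attempt() {
    tries++;
    showStatus('Still loading... (attempt ' + tries + ')'); // showStatus() is a hypothetical UI helper
    audio.src = p_file;
    audio.load();
    timer = setTimeout(checkLoaded, timeoutMs);
  }

  function checkLoaded() {
    if (audio.readyState >= 1) {
      // HAVE_METADATA or better: the duration can be displayed now.
      showStatus('Loaded, duration: ' + Math.round(audio.duration) + 's');
    } else if (tries < maxTries) {
      attempt(); // not there yet: try again
    } else {
      showStatus('Track failed to load after ' + tries + ' attempts');
    }
  }

  // If the metadata arrives before the timeout, report immediately.
  audio.addEventListener('loadedmetadata', function () {
    clearTimeout(timer);
    showStatus('Loaded, duration: ' + Math.round(audio.duration) + 's');
  });

  attempt();
  return audio;
}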
I'm working on a Javascript Music App that includes a Sequencer. For those who are not familiar, MIDI sequencers work pretty much like this: there is something called PPQ, pulses per quarter note. Each pulse is called a "Tick". It says how many "subdivisions" there are per quarter note, like a resolution. So sequencers "play" the events that are in the tracks one Tick at a time: play Tick 1, wait one Tick Duration, play Tick 2, wait one Tick Duration, and so on.
Now, let's say we have a BPM (Beats per Min) of 120 with PPQ=96 (standard). That means that each Quarter Note Duration is 500ms, and each Tick Duration is 5.20833ms.
What timer alternatives do we have in Javascript?
1) We have the old setTimeout. It has several problems: the minimum wait time is 4ms (https://developer.mozilla.org/en-US/docs/Web/API/WindowOrWorkerGlobalScope/setTimeout#Minimum_delay_and_timeout_nesting).
It is also subject to JITTER/time variations. It is not precise, and it is demanding, as callbacks are stacked in the event loop.
2) There is an alternative to setTimeout/setInterval which involves using requestAnimationFrame(). It is VERY precise and CPU efficient. However, the minimum time it can be set to is around 16.7ms (the duration of a frame on a typical 60FPS monitor).
Is there any other alternative, to precisely schedule an event every 2-5ms?
Note: the function called inside the loop, playEventsAtTick(), is NOT demanding at all, so it would never take more time to execute than the Tick Duration.
Thanks!
Danny Bullo
To maintain any sanity in doing this kind of thing, you're going to want to do the audio processing on a dedicated thread. Better yet, use the Web Audio API and let the people who have been thinking about these problems for a long time do the hard work of sample accuracy.
Also check out Web MIDI (chrome only).
Thanks nvioli. I'm aware of Web Audio API. However, I don't think that can help here.
I'm not triggering AUDIO directly: I have MIDI events (or let's say just "EVENTS") stored in the TRACKS. And those events happen at any TICK. So the Sequencer needs to loop every Tick Duration to scan what to play at that particular tick.
Regards,
Danny Bullo
In a separate thread, such as a web worker, you can create an endless loop. In this loop, all you need to do is calculate the time elapsed between beats. Once enough time has passed, you send a message to the main thread, to do some visuals, play a sound, or whatever you would like to do.
Here is a working example (worker.js):
class MyWorker {
  constructor() {
    // Keeps the loop running
    this.run = true
    // Beats per minute
    this.bpm = 120
    // Time last beat was called
    this.lastLoopTime = this.milliseconds
  }

  get milliseconds() {
    return new Date().getTime()
  }

  start() {
    while (this.run) {
      // Get the current time
      let now = this.milliseconds
      // Get the elapsed time between now and the last beat
      let updateLength = now - this.lastLoopTime
      // If not enough time has passed, restart from the beginning of the loop
      if (updateLength < (1000 * 60) / this.bpm) continue;
      // Enough time has passed: update the last time
      this.lastLoopTime = now
      // Do any processing that you would like here
      // Send a message back to the main thread
      postMessage({ msg: 'beat', time: now })
    }
  }
}

new MyWorker().start()
Next we can create the index page, which will run the worker and flash a square every time a message comes back from the worker.
<!DOCTYPE html>
<html lang="en">
<head>
  <script>
    // Start the worker
    var myWorker = new Worker('worker.js')

    // Listen for messages from the worker
    myWorker.onmessage = function (e) {
      var msg = e.data
      switch (msg.msg) {
        // If the message is a `beat` message, flash the square
        case 'beat':
          let div = document.querySelector('div')
          div.classList.add('red')
          setTimeout(() => div.classList.remove('red'), 100)
          break;
      }
    }
  </script>
  <style>
    div { width: 100px; height: 100px; border: solid 1px; }
    .red { background: red; }
  </style>
</head>
<body>
  <div></div>
</body>
</html>
Get Off My Lawn: The approach you suggested does not completely work. Let's say I add a method to the web worker to STOP the Sequencer:
stop() {
  this.run = false;
}
The problem is that the handler myWorker.onmessage = function (e) {...} never gets triggered. I suspect it is because the Web Worker thread is "TOO BUSY" with the endless loop. Any way to solve that?
Also, while playing, it works... but the CPU usage goes up considerably. The only possible solution would be a sleep() method, but a real sleep does not exist in Javascript...
Thanks
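(Sketch, not from the original thread.) One way around both problems is to drive the worker with setTimeout instead of a busy while loop: timers inside a dedicated worker are not throttled the way background-tab timers are, the worker's event loop stays free to handle 'stop' messages, and the CPU is mostly idle between ticks. Note that nested setTimeout calls are still clamped to roughly 4ms, so for real sub-5ms tick accuracy sequencers usually schedule a small look-ahead window of events per callback rather than one callback per tick. The message names below are assumptions.

// worker.js (sketch): timer-driven loop that stays responsive to messages
let running = false;
let bpm = 120;
let lastBeat = 0;

function tick() {
  if (!running) return;
  const now = Date.now();
  const beatLength = (1000 * 60) / bpm;
  if (now - lastBeat >= beatLength) {
    lastBeat = now;
    postMessage({ msg: 'beat', time: now });
  }
  // Re-arm a short timer; between ticks the event loop is free,
  // so the 'stop' message below is actually processed.
  setTimeout(tick, 1);
}

onmessage = function (e) {
  if (e.data.msg === 'start') {
    running = true;
    lastBeat = Date.now();
    tick();
  } else if (e.data.msg === 'stop') {
    running = false;
  }
};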
I wrote a fractal image generator which can run from fractions of a second to several minutes, depending on the number of iterations for each pixel. In the current version, the user has to wait for the image to be fully rendered before he can see the result. During this time the browser UI is blocked, and Firefox will display a warning message every 10 seconds, asking whether the script should be continued, debugged or stopped.
Question: Is it possible to display updates of the canvas contents while the script is running?
Yes
The UI is blocked until the current call (usually started by an event) has returned. When the function returns, any changes to the DOM are rendered, and the next event, if there is one, is placed on the call stack and handled; otherwise the javascript engine just waits for an event.
You can use setTimeout to schedule an event, process some pixels, set the timeout again, exit, and so on.
Example just in terms of a logic flow
var complete = false;
var pixels = 100000;
var pixelsPerCall = 1000;

function addPixels() {
  // process up to pixelsPerCall pixels per call
  var i = pixelsPerCall;
  while (i-- && pixels > 0) {
    pixels--;
    // do a pixel
  }
  if (pixels === 0) {
    complete = true;
  }
  if (!complete) {
    setTimeout(addPixels, 0);
  }
}
addPixels();
Though for this type of app you are best off using web workers. Depending on the number of cores the machine has, you can get a huge increase in throughput; e.g. an i7 CPU with 8 cores will complete the job roughly 8 times as quickly. Also, web workers do not block the DOM, so they can run for however long you want.
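A minimal sketch of the worker approach: the worker computes the image a band of rows at a time and posts each band back, and the main thread paints it with putImageData so the canvas fills in progressively. The file name 'fractalWorker.js', the band size, and the computePixels() function (your per-pixel iteration, returning a Uint8ClampedArray of RGBA values for the band) are assumptions.

// main.js (sketch): paint each band as soon as the worker finishes it
const canvas = document.querySelector('canvas');
const ctx = canvas.getContext('2d');
const worker = new Worker('fractalWorker.js');

worker.onmessage = function (e) {
  const band = new ImageData(new Uint8ClampedArray(e.data.pixels), canvas.width);
  ctx.putImageData(band, 0, e.data.row);
};

worker.postMessage({ width: canvas.width, height: canvas.height, bandHeight: 16 });

// fractalWorker.js (sketch)
onmessage = function (e) {
  const { width, height, bandHeight } = e.data;
  for (let row = 0; row < height; row += bandHeight) {
    const pixels = computePixels(row, width, bandHeight); // hypothetical per-pixel loop
    // Transfer the underlying buffer to avoid copying it back to the main thread.
    postMessage({ row: row, pixels: pixels.buffer }, [pixels.buffer]);
  }
};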
One possible approach would be to split your computation into chunks, run each chunk with setTimeout / setImmediate, update the canvas, and then run the next chunk.
This not only updates the canvas incrementally but also stops the browser from complaining about a long-running script.
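A small sketch of that chunked approach, assuming a renderRows(from, to) function that stands in for your per-pixel loop and draws directly to the canvas's 2D context:

// Sketch: render a few rows at a time, yielding to the browser between chunks.
const canvas = document.querySelector('canvas');
const rowsPerChunk = 8;               // tune to your per-pixel cost (assumption)
let nextRow = 0;

function renderChunk() {
  const end = Math.min(nextRow + rowsPerChunk, canvas.height);
  renderRows(nextRow, end);           // hypothetical: your iteration + drawing code
  nextRow = end;
  if (nextRow < canvas.height) {
    setTimeout(renderChunk, 0);       // let the browser repaint and handle input, then continue
  }
}
renderChunk();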
I am having problems when I want to know the current time of a file playing using the Web Audio API. My code plays the file nicely and the current time returned by the getCurrentTime() function is accurate enough when it comes to short files which load fast.
But when I try to load big files, sometimes the current time returned by the getCurrentTime() function is accurate and sometimes not. Sometimes, after waiting for example 20 seconds for the file to start playing, it says the current time is about 20 seconds as soon as playback begins (which is not true, because it is just playing the beginning of the file). It happens with any audio format (OGG, MP3, WAV...) but only sometimes.
I am using a slow system (Asus EEE PC 901 with an Intel Atom 1.60 Ghz and 2 GB RAM with Windows XP Home Edition and SP3) and Firefox 41.0.1.
I am not sure, but it seems that the source.start() method starts playing the sound way too late, so the line after calling that method, where I set the value for the startTime variable, is not the real starting time.
Here is the code (simplified):
var context, buffer, startTime, source;
var stopped = true;

function load(file, startAt)
{
  // Here it creates the AudioContext and loads the file through XHR (AJAX) and gets the buffer. All works fine.
  // When it gets the buffer through XHR (AJAX) and all is fine, it calls the play(startAt) function immediately.
  // Note: normally, startAt is 0.
}

function play(startAt)
{
  source = context.createBufferSource(); // Context created before.
  source.buffer = buffer; // Buffer got before from XHR (AJAX).

  // Creates a gain node to be able to set the volume later:
  var gainNode = context.createGain();
  source.connect(gainNode);
  gainNode.connect(context.destination);

  // Plays the sound:
  source.loop = false;
  source.start(startAt, 0, buffer.duration - 3); // I don't want the last 3 seconds.

  // Stores the start time (useful for pause/resume):
  startTime = context.currentTime - startAt; // Here I store the startTime, but maybe the file has still not begun to play (should it be just startTime = context.currentTime?).

  stopped = false;
}

function stop()
{
  source.stop(0);
  stopped = true;
}

function getCurrentTime()
{
  return (stopped) ? 0 : context.currentTime - startTime;
}
How can I detect when exactly the source.start() method starts playing the file? So I can set the startTime variable value just at that moment, and never before.
Thank you very much in advance. I would really appreciate any kind of help.
From MDN (https://developer.mozilla.org/en-US/docs/Web/API/AudioBufferSourceNode/start), about the first parameter of the start() function:
when (Optional)
The time, in seconds, at which the sound should begin to play, in the same time coordinate system used by the AudioContext. If when is less than AudioContext.currentTime, or if it's 0, the sound begins to play at once. The default value is 0.
There is no evident issue with your code (although there is no example call to play()): if you call play(0) or play(context.currentTime + someDelayInSeconds), start() should behave as expected. Unfortunately, the issue here is that AudioBufferSourceNode is not meant for big files. Again, from the MDN doc for AudioBuffer (https://developer.mozilla.org/en-US/docs/Web/API/AudioBuffer):
Objects of these types are designed to hold small audio snippets, typically less than 45 s.
I suspect that for big files something doesn't work very well with the "sound begins to play at once" assumption (I have also experienced it, although 20 seconds seems way too much...). Unfortunately, there is no way to get the exact start time of an AudioBufferSourceNode in Web Audio yet.
If you don't have any real reason to load this big file with an AudioBufferSourceNode, I suggest you use a MediaElementAudioSourceNode (https://developer.mozilla.org/en-US/docs/Web/API/MediaElementAudioSourceNode): as you can see from the example in the linked doc, it allows you to plug a plain HTML5 audio element into the AudioContext. You then keep all the usual control over the element itself, i.e. you also have access to the audioElement.currentTime property, which tells you the current playout position (in this case within the file itself, which is what you need). Additionally, you don't have to handle loading the file into memory, and playback can start as soon as some data is available.
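A minimal sketch of that approach (the file name is a placeholder); the key point is that the current time is read from the media element, not from the AudioContext clock:

var context = new AudioContext();
var audioElement = new Audio('bigfile.ogg'); // placeholder file name
var sourceNode = context.createMediaElementSource(audioElement);

var gainNode = context.createGain();
sourceNode.connect(gainNode);
gainNode.connect(context.destination);

audioElement.play();

function getCurrentTime() {
  // Position within the media file itself, so there is no need
  // to track when start() actually fired.
  return audioElement.currentTime;
}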
context.currentTime starts counting the second you create the context object. That means if it takes 20 seconds for your audio to load, context.currentTime == 20.
To account for this delay, you can set a simple timer from the time that you create the context to the time that audio loading completes.
var context; //Create your context here
var audioLoadStart = new Date();
//Do audio load
var audioLoadOffset = (new Date() - audioLoadStart) / 1000;
currentTime = context.currentTime - audioLoadOffset - startTime;
We have a long piece of video, up to 1 hour long.
We want to show users small 30 second chunks of this video.
It's imperative that the video does not stutter at any point.
The user can't then jump around the rest of the video, they only see the 30 second chunk.
An example would be say, a football match, the whole match is on video but clicking a button in another page would load up the full video and play just a goal.
Is this possible with HTML5 Video?
Would it have anything to do with TimeRanges?
Does the video have to be served over a pure streaming protocol?
Can we buffer the full 30 second chunk before playing it?
The goal is to cut down on the workflow required to cut out all the little clips (and the time spent transcoding them to all the different HTML5 video formats); instead, we can just throw up one transcoded piece of footage and send the user to a section of that footage.
Your thoughts and input are most welcome, thanks!
At this point in time HTML5 videos are a real PITA -- we have no real API to control the browser buffering, hence they tend to stutter on slower connections, as the browsers try to buffer intelligently, but usually do quite the opposite.
Additionally, if you only want your users to view a particular 30 second chunk of a video (I assume that would be your way of forcing users to register to view the full videos), HTML5 is not the right choice -- it would be incredibly simple to abuse your system.
What you really need in this case is a decent Flash Player and a Media Server in the backend -- this is when you have full control.
You could do some of this, but then you'd be subject to the browser's own buffering. (You also can't stop it from buffering beyond X seconds.)
Simply put, you could easily have a custom seek control to restrict the range and stop the video when it hits the end of the 30-second chunk.
Also, buffering is not something you can control, other than telling the browser not to do it. The rest is automatic, and support for forcing a full buffer has been removed from the specs.
Anyway, just letting you know this is terrible practice; it could be done, but you'll potentially run into many issues. You could always use a service like Zencoder to help handle transcoding too. Another alternative would be to have ffmpeg or other software on the server handle the clipping and transcoding.
You can set the time using javascript (the video's currentTime property).
In case you want a custom seekbar you can do something like this:
<input type="range" step="any" id="seekbar">
var seekbar = document.getElementById('seekbar');
function setupSeekbar() {
seekbar.max = video.duration;
}
video.ondurationchange = setupSeekbar;
function seekVideo() {
video.currentTime = seekbar.value;
}
function updateUI() {
seekbar.value = video.currentTime;
}
seekbar.onchange = seekVideo;
video.ontimeupdate = updateUI;
If the video does not start at time zero (some streams expose a non-zero video.startTime), use that as the lower bound:

function setupSeekbar() {
  seekbar.min = video.startTime;
  seekbar.max = video.startTime + video.duration;
}
If the video is streaming you will need to "calculate" the "end" time.
var lastBuffered = video.buffered.end(video.buffered.length - 1);

function updateUI() {
  var lastBuffered = video.buffered.end(video.buffered.length - 1);
  seekbar.min = video.startTime;
  seekbar.max = lastBuffered;
  seekbar.value = video.currentTime;
}