How to tell what's causing slow HTML5 Canvas performance? - javascript

How can I tell if the canvas's slow performance is caused by the drawing itself, or the underlying logic that calculates what should be drawn and where?
The second part of my question is: how do I calculate the canvas FPS? Here's how I did it; it seems logical to me, but I could be completely wrong. Is this the right way to do it?
var fps = 0;
setInterval(draw, 1000 / 30);
setInterval(checkFps, 1000);

function draw() {
    // ...
    fps++;
}

function checkFps() {
    $("#fps").html(fps);
    fps = 0;
}
Edit:
I replaced the above with the following, according to Nathan's comments:
var lastTimeStamp = new Date().getTime();

function draw() {
    // ...
    var now = new Date().getTime();
    $("#fps").html(Math.floor(1000 / (now - lastTimeStamp)));
    lastTimeStamp = now;
}
So how's this one? You could also display only the difference in milliseconds since the last update; performance differences can be seen that way too. By the way, I also ran a side-by-side comparison of the two, and they usually moved pretty much together (a difference of 2 at most); however, the latter one had bigger spikes when performance was extraordinarily low.
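One common way to damp such spikes is to average the frame time over the last N frames; a minimal sketch (frameTimes and SAMPLE_SIZE are illustrative names, not from the original post):

var frameTimes = [];   // recent frame durations in ms (illustrative)
var SAMPLE_SIZE = 30;  // how many frames to average over (tunable)
var lastTimeStamp = new Date().getTime();

function draw() {
    // ...
    var now = new Date().getTime();
    frameTimes.push(now - lastTimeStamp);
    lastTimeStamp = now;
    if (frameTimes.length > SAMPLE_SIZE) frameTimes.shift();

    var sum = 0;
    for (var i = 0; i < frameTimes.length; i++) sum += frameTimes[i];
    var avg = sum / frameTimes.length;  // average ms per frame
    $("#fps").html(Math.floor(1000 / avg));
}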

Your FPS code is definitely wrong
setInterval(checkFps, 1000);
Nothing guarantees this function will be called exactly every second (it could be more than 1000 ms, or less, but probably more), so
function checkFps() {
    $("#fps").html(fps);
    fps = 0;
}
is wrong: if fps is 32 at that moment, it is possible that those 32 frames actually took 1.5 s (an extreme case).
Better is to see how much real time has actually passed since the last update and calculate frames / realTimePassed (I'm sure JavaScript has a function to get the time, but I'm not sure whether it is accurate enough, i.e. milliseconds or better).
By the way, fps is not a good name: it contains the number of frames (since the last update), not the number of frames per second, so frames would be a better name.
In the same way
setInterval(draw, 1000/30);
is wrong, since you want to achieve an FPS of 30, but setInterval is not very accurate (it will probably wait longer than you ask), so you will end up with a lower FPS even if the CPU is able to handle the load.
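A minimal sketch of what this answer suggests, counting frames and dividing by the real time that has passed (variable names are mine, not the answerer's):

var frames = 0;
var lastCheck = new Date().getTime();

function draw() {
    // ...
    frames++;
}

function checkFps() {
    var now = new Date().getTime();
    var elapsedSeconds = (now - lastCheck) / 1000;  // real time passed, not the nominal 1 s
    $("#fps").html((frames / elapsedSeconds).toFixed(1));
    frames = 0;
    lastCheck = now;
}

setInterval(draw, 1000 / 30);
setInterval(checkFps, 1000);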

WebKit and Firebug both provide profiling tools to see where CPU cycles are being spent in your JavaScript code. I'd recommend starting there.
For the FPS calculation, I don't think your code is going to work, but I don't have any good recommendation :(
Reason being: most (all?) browsers use a dedicated thread for running JavaScript and a different thread for running UI updates. If the JavaScript thread is busy, the UI thread won't be triggered.
So, you can run some JavaScript looping code that'll "update" the UI 1000 times in succession (for instance, setting the color of some text), but unless you add a setTimeout to allow the UI thread to paint the change, you won't see any changes until the 1000 iterations are finished.
That said, I don't know if you can assertively increment your fps counter at the end of the draw() routine. Sure, your JavaScript function has finished, but did the browser actually draw?
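A minimal sketch of the effect described above: doing the 1000 updates in chunks, with a setTimeout between chunks so the UI thread gets a chance to paint (the element id and chunk size are illustrative):

var el = document.getElementById("status");  // some text element (illustrative)
var i = 0;

function updateChunk() {
    // do a small batch of updates, then yield so the browser can repaint
    for (var n = 0; n < 50 && i < 1000; n++, i++) {
        el.style.color = (i % 2 === 0) ? "red" : "blue";
    }
    if (i < 1000) setTimeout(updateChunk, 0);
}
updateChunk();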

Check that you don't use some innerHTML method to debug your project. This can slow your project down in a way you can't imagine, especially if you do concatenation like innerHTML += newDebugValues;
Or, like desau said, profile your CPU usage with Firebug or WebKit's built-in debugger.
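A minimal sketch of the cheaper alternative implied here, collecting debug values in an array and writing them to the DOM once per second (all names are illustrative):

var debugLines = [];

function log(value) {
    debugLines.push(value);  // cheap: no DOM work per message
}

function flushDebug() {
    // one DOM write instead of one per message
    document.getElementById("debug").innerHTML = debugLines.join("<br>");
    debugLines.length = 0;
}
setInterval(flushDebug, 1000);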


timing in Javascript acting strange

I am working on JS to compare the performance of brute force O(n^2) and Barnes-Hut O(n log(n)).
In my code right now I am doing the same thing with the same data five times,
like this:
let data = [/* arrays of 100 data points */];
let count = 5;
let iterationCount = 0;
while (count > 0) {
    count--;
    iterationCount++;
    console.time('brute force time' + iterationCount);
    bruteForce(data);  // placeholder for the brute force function
    console.timeEnd('brute force time' + iterationCount);
    console.time('BH time' + iterationCount);
    barnesHut(data);   // placeholder for the Barnes-Hut algorithm
    console.timeEnd('BH time' + iterationCount);
}
Though the code is the same each time, console.time is showing different and worrying results.
In the timing, the difference between BH and brute force is not consistent across iterations,
and it is not predictable from run to run.
One more thing to notice is that every time I run the code, in the first iteration the brute force and Barnes-Hut timings are almost similar, whereas after the first one it shows that Barnes-Hut is way better.
PS: everything in both functions is scoped to each function (i.e. local variables), and each iteration shares the same data, so in each iteration the code is identical!
Can anyone help me understand why I am getting results like this?
I'd guess these times are so tiny that random noise affects the results (your machine doing other processing, perhaps). I'd increase the data size, and therefore the processing time; maybe you'd see more consistent results then.
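A minimal sketch of that suggestion, growing the input and timing each size once per algorithm (generateData, bruteForce and barnesHut are placeholders for the asker's own functions):

for (let n of [100, 1000, 10000]) {
    let data = generateData(n);  // placeholder: build n bodies
    console.time('brute force n=' + n);
    bruteForce(data);
    console.timeEnd('brute force n=' + n);
    console.time('BH n=' + n);
    barnesHut(data);
    console.timeEnd('BH n=' + n);
}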

Animation alternatives Javascript

So I want to make a small JavaScript game.
And in order to do that I need to animate a few things.
I went researching and found out about setInterval and requestAnimationFrame.
I can use either of the two to make my game work; however, I understand that requestAnimationFrame is the better alternative.
The problem I see with this is that while the function has its benefits, you are unable to easily set a framerate or update rate for it.
I found another thread that explains a way of making this work, however it seemed somewhat complicated:
Controlling fps with requestAnimationFrame?
Is there an easier way of animating with a set framerate?
Is there an easier way of animating with a set framerate?
Simply put: no. Since rendering is one of the most computation-heavy processes for a browser, and can be triggered in various ways, it is not possible to foretell how long an update will take; it can range from drawing one circle on a canvas up to a complete replacement of all the visible content of the page.
To overcome this, browsers offer a way to call a function as often as possible, and the developer is responsible for making the animation sensitive to different time deltas/time steps.
One way to tackle that is to use the concept of velocity: velocity = distance / time. If you want an asset to have a constant velocity you can do the following, since it follows that distance = velocity * time:
var asset_velocity = 1;  // pixels per millisecond
var last = new Date().getTime();

(function loop() {
    var now = new Date().getTime(),
        delta = now - last,
        distance = asset_velocity * delta;

    // update the asset, e.g. asset.x += distance;
    last = now;
    window.requestAnimationFrame(loop);
})();

Do I get the number of operations per second this way?

Look at this code:
function wait(time) {
    let i = 0;
    let a = Date.now();
    let x = a + (time || 0);
    let b;
    while ((b = Date.now()) <= x) ++i;
    return i;
}
If I run it in a browser (particularly Google Chrome, but I don't think it matters) as wait(1000), the machine will obviously freeze for a second and then return the calculated value of i.
Let it be 10,000,000 (I'm getting values close to that). The value varies every run, so let's take an average.
Did I just get the current number of operations per second of my machine's processor?
Not at all.
What you get is the number of loop cycles completed by the JavaScript process in a certain time. Each loop cycle consists of:
Creating a new Date object
Comparing two Date objects
Incrementing a Number
Incrementing the Number variable i is probably the least expensive of these, so the function is not really reporting how long the increment alone takes.
Aside from that, note that the machine is doing a lot more than running a JavaScript process. You will see interference from all sorts of activity going on in the computer at the same time.
When running inside a JavaScript process, you're simply too far away from the processor (in terms of software layers) to make that measurement. Beneath JavaScript there's the browser and the operating system, each of which can (and will) make decisions that affect this result.
No. You can get the number of language operations per second, but the actual number of machine operations per second on the whole processor is more complicated.
Firstly, the processor is not wholly dedicated to the browser; it is likely switching back and forth between prioritized processes. On top of that, memory access is obscured, and the processor uses extra operations to manage memory (page flushing, etc.) that are not going to be transparent to you at a given time. On top of that, physical properties mean that the real clock rate of the processor is dynamic... You can see it's pretty complicated already ;)
To really calculate the number of machine operations per second, you need to measure the clock rate of the processor and multiply it by the number of instructions per cycle the processor can perform. Again this varies, but really the manufacturer's specs will likely be a good enough estimate :P.
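For a rough sense of scale (my numbers, purely illustrative): a 3 GHz core retiring 4 instructions per cycle peaks around 3,000,000,000 * 4 = 12 billion machine instructions per second, which is several orders of magnitude above the roughly 10 million loop iterations per second the wait() function above observes.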
If you wanted to use a program to measure this, you'd need to somehow dedicate 100% of the processor to your program and have it run a predictable set of instructions with no other hangups (like memory management). Then you'd need to include the number of instructions it takes to load the program into the code caches. This is not really feasible, however.
As others have pointed out, this will not help you determine the number of operations the processor does per second, for the reasons the prior answers give. I do however think that a similar experiment could be set up to estimate the number of operations executed by the JavaScript interpreter running in your browser. For example, given a function factorial(n) that runs in O(n), you could execute factorial(100) repeatedly over the course of a minute.
function factorial(n) {
    let result = 1;
    for (let i = 2; i <= n; i++) result *= i;
    return result;
}

function test() {
    let start = Date.now();
    let end = start + 60 * 1000;
    let numberOfExecutions = 0;
    while (Date.now() < end) {
        factorial(100);
        numberOfExecutions++;
    }
    // ~100 operations per execution, over 60 seconds
    return numberOfExecutions * 100 / 60;
}
The idea here is that factorial is by far the most time-consuming function in the code, and since factorial runs in O(n), we know factorial(100) is approximately 100 operations. Note that this will not be exact, and larger numbers will make for better approximations. Also remember that this estimates the number of operations executed by your interpreter, not your processor.
There is a lot of truth in all the previous comments, but I want to invert the reasoning a little bit, because I believe it is easier to understand that way.
I believe the fairest way to calculate it is with the most basic loop, not relying on any dates or functions, and instead calculating the values afterwards.
You will see that the smaller the run, the bigger the initial overhead is. That is, it takes a small amount of time to start and finish each function call, but at a certain point they all start reaching a number that can be considered close enough to how many operations per second JavaScript can run.
My example:
const oneMillion = 1_000_000;
const tenMillion = 10_000_000;
const oneHundredMillion = 100_000_000;
const oneBillion = 1_000_000_000;
const tenBillion = 10_000_000_000;
const oneHundredBillion = 100_000_000_000;
const oneTrillion = 1_000_000_000_000;

function runABunchOfTimes(times) {
    console.time('timer');
    for (let i = 0; i < times; ++i) {}
    console.timeEnd('timer');
}
I tried this on a 2020 MacBook that already had a lot of load on it, with many processes running; these were my results:
At the very end I take the time the console showed the run took, and divide the number of runs by it. The oneTrillion and oneBillion runs are virtually the same; however, going down to oneMillion and 1000 you can see that they are not as performant, due to the initial cost of setting up the for loop in the first place.
We usually try to stay away from O(n^2) and slower functions exactly because we do not want to hit that maximum. If you were to perform a find inside a map over an array of all the cities in the world (around 10_000 according to Google, I haven't counted), you would already reach 100_000_000 iterations, and they would certainly not be as simple as iterating over nothing like in my example. Your code would then take minutes to run, but I am sure you are aware of this, and that is why you posted the question in the first place.
Calculating how long it would take is tricky, not only because of the above but also because you cannot predict which device will run your function. Nowadays I can open it on my TV, my watch, a Raspberry Pi, and none of them would be nearly as fast as the computer I was using when writing these functions. But if I were to benchmark a device, I would use something like the function above, since it is the simplest loop operation I could think of.
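For illustration (my invocation, not part of the original answer), the function above might be driven like this:

runABunchOfTimes(oneMillion);
runABunchOfTimes(oneHundredMillion);
runABunchOfTimes(oneBillion);
// iterations per second ≈ times / (printed duration in seconds)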

Is it safe to assume 60 fps for browser rendering?

I want to make a JavaScript animation take 5 seconds to complete using requestAnimationFrame().
I don't want a strict and precise timing, so anything close to 5 seconds is OK and I want my code to be simple and readable, so solutions like this won't work for me.
My question is: is it safe to assume most browsers render the page at 60 fps? I.e., if I want my animation to take 5 seconds to complete, I'll divide it into 60 * 5 = 300 steps, and on each call of the draw() function via requestAnimationFrame(), draw the next step of the animation. (The animation is pretty simple: just moving a colored div around.)
By the way, I can't use jQuery.
Edit: Let me rephrase the question this way: do all browsers "normally" try to render the page at 60 fps? I want to know whether, for example, Chrome renders at 75 fps or Firefox at 70 fps.
(Normal condition: CPU isn't highly loaded, RAM is not full, there are no storage failures, room is properly ventilated and nobody tries to throw my laptop out the window.)
Relying on 60 frames per second is very unsafe, because the browser isn't always under the same conditions; even if it tries to render the page at the maximum fps possible, there's always a chance of the CPU or GPU being busy doing something else, causing the FPS to drop.
If you want to rely on FPS (although I wouldn't suggest it), you should first detect the current fps and adjust the speed of your animation frame by frame. Here's an example:
var lastCall, fps;

function detectFps() {
    var delta;
    if (lastCall) {
        delta = (Date.now() - lastCall) / 1000;
        lastCall = Date.now();
        fps = 1 / delta;
    } else {
        lastCall = Date.now();
        fps = 0;
    }
}

function myFunc() {
    detectFps();
    // Calculate the speed using the var fps
    // Animate...
    requestAnimationFrame(myFunc);
}

detectFps(); // Initialize fps
requestAnimationFrame(myFunc); // Start your animation
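As a side note (mine, not the answerer's): requestAnimationFrame already passes a high-resolution timestamp in milliseconds to its callback, so the same measurement can be made without Date.now(); a sketch:

var lastTs;

function myFunc(ts) {  // ts is supplied by requestAnimationFrame
    if (lastTs !== undefined) {
        var fps = 1000 / (ts - lastTs);  // instantaneous fps
        // Calculate the speed using fps...
    }
    lastTs = ts;
    requestAnimationFrame(myFunc);
}
requestAnimationFrame(myFunc);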
It depends on the GPU and monitor combination. I have a good GPU and a 120 Hz monitor, so it renders at 120 fps. If I move the window to a 60 Hz monitor mid-render, it maxes out at 60 fps.
Another factor, in some browser/OS combinations, is the integrated GPU being used instead of the discrete GPU.
As already stated by others, it isn't.
But if you need your animation to end in approximately 5 seconds, and it's not crucial to avoid missing frames, you can use the old setTimeout() way. You may miss the target by a few milliseconds, and some of the frames in your animation will be skipped (not rendered) because of the fps mismatch, but this can be a "good enough" solution; especially since your animation is as simple as you state, there's a chance users won't even see the glitch.
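A minimal sketch of that approach, moving a div 300 px over roughly 5 seconds in 300 nominal-60-fps steps (the element id and the distance are illustrative, and the div is assumed to be absolutely positioned):

var el = document.getElementById("box");  // the colored div (illustrative)
var step = 0, steps = 300;                // 5 s at a nominal 60 fps

function tick() {
    el.style.left = step + "px";          // 1 px per step, 300 px total
    if (++step < steps) setTimeout(tick, 1000 / 60);
}
tick();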
It's not safe to assume everyone can handle animation.
People will have different needs.
A lot of common animations, and common web design practices, give me awful migraines, so I set my browser to 1 frame per second to kill the animation without causing too much fast flashing.

Most efficient way to throttle continuous JavaScript execution on a web page

I'd like to continuously execute a piece of JavaScript code on a page, spending all the CPU time I can on it, but allowing the browser to remain functional and responsive at the same time.
If I just run my code continuously, it freezes the browser's UI and the browser starts to complain. Right now I pass a zero timeout to setTimeout, which does a small chunk of work and loops back to setTimeout. This works, but does not seem to utilize all the available CPU. Any better ways of doing this you might think of?
Update: To be more specific, the code in question renders frames on a canvas continuously. The unit of work here is one frame. We aim for the maximum possible frame rate.
Probably what you want is to centralize everything that happens on the page and use requestAnimationFrame to do all your drawing. So basically you would have a function/class that looks something like this (you'll have to forgive some style/syntax errors; I'm used to MooTools classes, just take this as an outline):
var Main = function () {
    this.queue = [];
    this.actions = {};
    this.loop = this.loop.bind(this);  // keep `this` when passed to requestAnimationFrame
    requestAnimationFrame(this.loop);
};

Main.prototype.loop = function () {
    while (this.queue.length) {
        var action = this.queue.pop();
        this.executeAction(action);
    }
    // do your rendering here
    requestAnimationFrame(this.loop);
};

Main.prototype.addToQueue = function (e) {
    this.queue.push(e);
};

Main.prototype.addAction = function (target, event, callback) {
    if (this.actions[target] === void 0) this.actions[target] = {};
    if (this.actions[target][event] === void 0) this.actions[target][event] = [];
    this.actions[target][event].push(callback);
};

Main.prototype.executeAction = function (e) {
    if (this.actions[e.target] !== void 0 && this.actions[e.target][e.type] !== void 0) {
        for (var i = 0; i < this.actions[e.target][e.type].length; i++) {
            this.actions[e.target][e.type][i](e);  // call each registered callback
        }
    }
};
So basically you'd use this class to handle everything that happens on the page. Every event handler would be onclick='Main.addToQueue(event)', or however you want to add your events to the page; you just point them at adding the event to the queue, and use Main.addAction to direct those events to whatever you want them to do. This way every user action gets executed as soon as your canvas has finished redrawing and before it gets redrawn again. As long as your canvas renders at a decent framerate, your app should remain responsive.
EDIT: forgot the "this" in requestAnimationFrame(this.loop)
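A hypothetical wiring of this outline (the instance name, element id, and the plain-object event are all illustrative; real DOM events would need their target mapped to a string key first):

var main = new Main();

// register what a click on "save" should do
main.addAction("save", "click", function (e) {
    // update game state here
});

// markup handlers just enqueue a descriptor of the event:
// <button onclick="main.addToQueue({target: 'save', type: 'click'})">Save</button>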
Web Workers are something to try:
https://developer.mozilla.org/en-US/docs/DOM/Using_web_workers
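A minimal sketch of handing the heavy per-frame computation to a worker (the file name and the computeFrame/drawFrame functions are placeholders, not a real API):

// main.js: draw on the UI thread, compute in the worker
var worker = new Worker("frameWorker.js");
worker.onmessage = function (e) {
    drawFrame(e.data);       // placeholder: your canvas drawing code
    worker.postMessage({});  // request the next frame
};
worker.postMessage({});

// frameWorker.js: runs off the UI thread, so it never blocks the page
onmessage = function () {
    postMessage(computeFrame());  // placeholder: your expensive frame logic
};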
You can tune your performance by changing the amount of work you do per invocation. In your question you say you do a "small chunk of work". Establish a parameter that controls the amount of work being done and try various values.
You might also try setting the timeout before you do the processing. That way the time spent processing should count towards whatever minimum delay the browser enforces.
One technique I use is to keep a counter of iterations in my processing loop, plus an interval of, say, one second that displays the counter and resets it to zero. This gives a rough performance figure with which to measure the effects of the changes you make.
In general this is likely to be very dependent on specific browsers, even versions of browsers. With tunable parameters and performance measurements you could implement a feedback loop to optimize in real time.
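A minimal sketch combining these suggestions: a tunable work-per-invocation parameter, the timeout set before processing, and a once-per-second counter (renderFrame is a placeholder for one frame of canvas work):

var framesPerChunk = 3;  // tunable parameter: work done per invocation
var frames = 0;

function work() {
    setTimeout(work, 0);  // schedule first, so processing time counts toward the minimum delay
    for (var i = 0; i < framesPerChunk; i++) {
        renderFrame();    // placeholder for the actual rendering
        frames++;
    }
}

setInterval(function () {
    console.log(frames + " frames in the last second");
    frames = 0;
}, 1000);

work();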
One can use window.postMessage() to overcome the minimum delay that setTimeout enforces. See this article for details. A demo is available here.
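The technique boils down to queuing the callback and waking up via a message event, which is not clamped to a minimum delay the way setTimeout is. A sketch (setZeroTimeout is a name commonly used for this pattern; treat it as an outline, not the linked article's exact code):

var zeroTimeoutQueue = [];
var ZERO_TIMEOUT_MSG = "zero-timeout";  // illustrative marker string

function setZeroTimeout(fn) {
    zeroTimeoutQueue.push(fn);
    window.postMessage(ZERO_TIMEOUT_MSG, "*");  // fires asynchronously, without the clamp
}

window.addEventListener("message", function (e) {
    if (e.source === window && e.data === ZERO_TIMEOUT_MSG) {
        e.stopPropagation();
        if (zeroTimeoutQueue.length) zeroTimeoutQueue.shift()();
    }
}, true);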
