HTML5 Video buffered attribute features - javascript

I am designing a custom HTML5 video player. Thus, it will have its own custom slider to mimic the video progress, so I need to understand the entire buffering shebang of an HTML5 video.
I came across this article: Video Buffering. It says that the buffered object consists of several time ranges in linear order of start time. But I couldn't find out the following:
Say the video starts. It continues up to 1:45 on its own (occasionally stalling perhaps, waiting for further data), after which I suddenly jump to 32:45. Now after some time, if I jump back to 1:27 (within the time range initially loaded and played through, before I made the jump), will it start playing immediately as it was already loaded before? Or is it that since I made a jump, that portion is lost and will have to be fetched again? Either way, is the behavior consistent for all such scenarios?
Say I make 5 or 6 such jumps, each time waiting for a few seconds for some data to load after the jump. Does that mean the buffered object will have all those time ranges stored? Or might some get lost? Is it a stack kind of thing, where the earlier ranges will get popped off as more ranges get loaded due to further jumps?
Will checking whether the buffered object has one time range starting at 0 (forget live streaming) and ending at the video duration length ensure that the entire video resource has been loaded fully? If not, is there some way to know that the entire video has been downloaded, and any portion is seekable, from which the video can play continuously up to the end without a moment's delay?
The W3C specs are not very clear on this, and I also can't find a suitably large (say more than an hour) remote video resource to test.

How video is buffered is browser implementation-dependent and therefore may vary from browser to browser.
Various browsers can use different factors to determine whether to keep or discard a part of the buffer. The age of segments, disk space, memory, and performance are typical factors.
The only way to know is to "see" what the browser has or is loading.
For example, in Chrome I played a few seconds, then skipped to about 30 seconds, and you can see that it starts to load another part from that position.
(The buffer also seems to be bound to key-frames, so that the frames in that buffer can be decoded. This means the buffer can start to load data a little before the actual position.)
I supplied a demo video about 1 minute long; however, this is not long enough for proper testing. Feel free to supply links to longer videos (or share them if you want me to update the demo).
The main function will iterate through the buffered object on the video element. It renders every buffered range, in red, to the canvas right below the video.
You can click (but not drag) on this viewer to move the video to different positions; a sketch of such a click handler follows the loop code below.
/// buffer viewer loop (updates about every 2nd frame)
function loop() {
    var b = vid.buffered,  /// get buffer object
        i = b.length,      /// counter for loop
        w = canvas.width,  /// cache canvas width and height
        h = canvas.height,
        vl = vid.duration, /// total video duration in seconds
        x1, x2;            /// buffer segment mark positions

    /// clear canvas with black
    ctx.fillStyle = '#000';
    ctx.fillRect(0, 0, w, h);

    /// red color for loaded buffer(s)
    ctx.fillStyle = '#d00';

    /// iterate through buffers
    while (i--) {
        x1 = b.start(i) / vl * w;
        x2 = b.end(i) / vl * w;
        ctx.fillRect(x1, 0, x2 - x1, h);
    }

    /// draw info
    ctx.fillStyle = '#fff';
    ctx.textBaseline = 'top';
    ctx.textAlign = 'left';
    ctx.fillText(vid.currentTime.toFixed(1), 4, 4);
    ctx.textAlign = 'right';
    ctx.fillText(vl.toFixed(1), w - 4, 4);

    /// draw cursor for position
    x1 = vid.currentTime / vl * w;
    ctx.beginPath();
    ctx.arc(x1, h * 0.5, 7, 0, 2 * Math.PI);
    ctx.fill();

    setTimeout(loop, 29);
}
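The click-to-seek behaviour mentioned above is handled outside the loop; a minimal sketch of such a handler, assuming the same vid and canvas variables, could look like this:
/// click (not drag) on the viewer to jump to a position
canvas.addEventListener('click', function(e) {
    var rect = canvas.getBoundingClientRect(),      /// canvas position on page
        pos = (e.clientX - rect.left) / rect.width; /// 0..1 along the bar
    vid.currentTime = pos * vid.duration;           /// seek to that time
});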

According to
https://developer.mozilla.org/en-US/docs/Web/HTML/Element/video
https://developer.mozilla.org/en-US/docs/Web/API/TimeRanges
https://developer.mozilla.org/en-US/docs/Web/API/TimeRanges.start
the buffered attribute holds information about all currently buffered time ranges. To my understanding, if a buffered portion is lost, it is removed from the object (in case that ever happens).
Especially the last link seems to be very useful for understanding the matter (since it offers a code sample), but keep in mind these are Mozilla documents and support might differ in other browsers.
To answer your questions
Say the video starts. It continues up to 1:45 on its own (occasionally stalling perhaps, waiting for further data), after which I suddenly jump to 32:45. Now after some time, if I jump back to 1:27 (within the time range initially loaded and played through, before I made the jump), will it start playing immediately as it was already loaded before?
It should play immediately when jumping back, unless the buffer of that portion was unloaded. I think it's very reasonable to assume that buffers or buffer ranges are unloaded at some point if the overall buffer size exceeds a certain limit.
Say I make 5 or 6 such jumps, each time waiting for a few seconds for some data to load after the jump. Does that mean the buffered object will have all those time ranges stored?
Yes, all buffered ranges should be readable through the attribute.
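For example, a small hypothetical helper (the name logBufferedRanges is illustrative) that prints them:
function logBufferedRanges(video) {
    var b = video.buffered;
    for (var i = 0; i < b.length; i++) {
        console.log('range ' + i + ': ' + b.start(i).toFixed(1) + 's - ' + b.end(i).toFixed(1) + 's');
    }
}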
Will checking whether the buffered object has one time range starting at 0 (forget live streaming) and ending at the video duration length ensure that the entire video resource has been loaded fully?
Yes, this is the code example in the last link. Apparently this is an applicable method of determining whether the entire video has been loaded.
if (buf.start(0) == 0 && buf.end(0) == v.duration)
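Wrapped up as a small helper, that check could look like the sketch below; note that expecting a single range covering 0 to duration is an assumption that holds for plain progressive downloads, and the strict equality on duration may be too strict for some streams:
function isFullyBuffered(video) {
    var buf = video.buffered;
    return buf.length === 1 &&
           buf.start(0) === 0 &&
           buf.end(0) === video.duration;
}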

Almost every browser caches the buffered data for that session. The cache expires after the user navigates away from the page. The user shouldn't have to reload data each time he plays the video from a point up to which it has already been loaded; he will only face this issue when the server is clearing out its cache data. The HTML5 video tag supports this and will keep the video buffered up to the point to which it has been loaded.
It does not mean that the session has been lost; it means that either the player object (if you are using a Flash player) is waiting for data from that particular point, or the HTML5 video tag is having issues, either because of an Internet connection failure or some other server error.
The metadata is loaded automatically unless you use <audio preload="none">, which tells the browser not to download anything from the server. You can use it as <audio preload="auto|metadata|none">: with none, nothing is downloaded unless the user clicks the play button; with metadata, the browser downloads the duration and other metadata from the server, but not the media file itself; with auto, downloading starts as soon as the page loads.
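The same values can also be set from script; a minimal sketch, assuming a media element with the hypothetical id player:
var player = document.getElementById('player'); // hypothetical element id
player.preload = 'metadata'; // fetch duration and other metadata only
// 'none' - download nothing until the user presses play
// 'auto' - let the browser start downloading as soon as the page loads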
I would also refer you to the jQuery documentation, as jQuery lets you change and update content using its Ajax API, which may be helpful here too. Hope you succeed! Cheers.

Although the accepted answer's description is excellent, I decided to update its code sample, for several reasons:
The progress render task should really be fired only on a progress event.
The progress render task is mixed up with some other tasks like drawing the timestamp and the playhead position.
The code refers to several DOM elements by their IDs without using document.getElementById().
The variable names were all obscure.
I thought a forward for() loop was more elegant than a backward while() loop.
Note that I have removed the playhead and timestamp to keep the code clean, as this answer focusses purely on visualisation of the video buffer.
LINK TO ONLINE VIDEO BUFFER VISUALISER
Rewrite of accepted answer's loop() function:
function drawProgress(canvas, buffered, duration) {
    // Note: 'antialias' isn't a standard 2D context option (it belongs to WebGL contexts),
    // so browsers will ignore it here; we're only drawing rectangles anyway.
    var context = canvas.getContext('2d', { antialias: false });
    context.fillStyle = 'blue';

    var width = canvas.width;
    var height = canvas.height;
    if (!width || !height) throw "Canvas's width or height weren't set!";

    context.clearRect(0, 0, width, height); // clear canvas

    for (var i = 0; i < buffered.length; i++) {
        var leadingEdge = buffered.start(i) / duration * width;
        var trailingEdge = buffered.end(i) / duration * width;
        context.fillRect(leadingEdge, 0, trailingEdge - leadingEdge, height);
    }
}
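To fire it only on progress events, as suggested above, the wiring could look roughly like this (the element ids are illustrative, not part of the original demo):
var video = document.getElementById('video');
var canvas = document.getElementById('progress');

video.addEventListener('progress', function() {
    drawProgress(canvas, video.buffered, video.duration);
});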

This is just a variation of this excellent answer https://stackoverflow.com/a/18624833/985454
I only made it work out of the box and added a few perks. Everything is automatic.
currently intended for full-screen video playback such as Netflix or HBO GO
automatically creates the canvas
auto-updates width to 100% viewport width
works as a bookmarklet
does not obstruct the view much (transparent, 2px tall)
function prepare() {
    console.log('prepare');

    _v = $('video')[0];
    _v.insertAdjacentHTML('afterend',
        `<canvas
            id="WowSuchName"
            height="1"
            style="
                position: absolute;
                bottom: 0;
                left: 0;
                opacity: 0.4;
            "></canvas>`);

    _c = WowSuchName; /// the canvas is reachable as a global via its element id
    _cx = _c.getContext('2d');

    window.addEventListener('resize', resizeCanvas, false);

    function resizeCanvas() {
        console.log('resize');
        _c.width = window.innerWidth;
    }
    resizeCanvas();
}

/// buffer viewer loop (updates about every 2nd frame)
function loop() {
    if (!window.WowSuchName) { prepare(); }

    var b = _v.buffered,  /// get buffer object
        i = b.length,     /// counter for loop
        w = _c.width,     /// cache canvas width and height
        h = _c.height,
        vl = _v.duration, /// total video duration in seconds
        x1, x2;           /// buffer segment mark positions

    /// clear canvas
    _cx.clearRect(0, 0, w, h);

    /// color for loaded buffer(s)
    _cx.fillStyle = '#888';

    /// iterate through buffers
    while (i--) {
        x1 = b.start(i) / vl * w;
        x2 = b.end(i) / vl * w;
        _cx.fillRect(x1, 0, x2 - x1, h);
    }

    /// draw cursor for position
    _cx.fillStyle = '#fff';
    x1 = _v.currentTime / vl * w;
    _cx.fillRect(x1 - 1, 0, 2, h);

    setTimeout(loop, 29);
}

loop();
And the code for the bookmarklet is here:
javascript:eval(atob("CmZ1bmN0aW9uIHByZXBhcmUoKSB7CiAgICBjb25zb2xlLmxvZygncHJlcGFyZScpOwoKICAgIF92ID0gJCgndmlkZW8nKVswXTsKICAgIF92Lmluc2VydEFkamFjZW50SFRNTCgnYWZ0ZXJlbmQnLAogICAgYDxjYW52YXMKICAgICAgICBpZD0iV293U3VjaE5hbWUiCiAgICAgICAgaGVpZ2h0PSIxIgogICAgICAgIHN0eWxlPSIKICAgICAgICAgICAgcG9zaXRpb246IGFic29sdXRlOwogICAgICAgICAgICBib3R0b206IDA7CiAgICAgICAgICAgIGxlZnQ6IDA7CiAgICAgICAgICAgIG9wYWNpdHk6IDAuNDsKICAgICAgICAiPjwvY2FudmFzPmApOwogICAgCiAgICBfYyA9IFdvd1N1Y2hOYW1lCiAgICBfY3ggPSBfYy5nZXRDb250ZXh0KCcyZCcpOwoKICAgIHdpbmRvdy5hZGRFdmVudExpc3RlbmVyKCdyZXNpemUnLCByZXNpemVDYW52YXMsIGZhbHNlKTsKCiAgICBmdW5jdGlvbiByZXNpemVDYW52YXMoKSB7CiAgICAgICAgY29uc29sZS5sb2coJ3Jlc2l6ZScpOwogICAgICAgIF9jLndpZHRoID0gd2luZG93LmlubmVyV2lkdGg7CiAgICB9CiAgICByZXNpemVDYW52YXMoKTsKfQoKLy8vIGJ1ZmZlciB2aWV3ZXIgbG9vcCAodXBkYXRlcyBhYm91dCBldmVyeSAybmQgZnJhbWUpCmZ1bmN0aW9uIGxvb3AoKSB7CiAgICBpZiAoIXdpbmRvdy5Xb3dTdWNoTmFtZSkgeyBwcmVwYXJlKCk7IH0KCiAgICB2YXIgYiA9IF92LmJ1ZmZlcmVkLCAgLy8vIGdldCBidWZmZXIgb2JqZWN0CiAgICAgICAgaSA9IGIubGVuZ3RoLCAgICAgLy8vIGNvdW50ZXIgZm9yIGxvb3AKICAgICAgICB3ID0gX2Mud2lkdGgsICAgICAvLy8gY2FjaGUgY2FudmFzIHdpZHRoIGFuZCBoZWlnaHQKICAgICAgICBoID0gX2MuaGVpZ2h0LAogICAgICAgIHZsID0gX3YuZHVyYXRpb24sIC8vLyB0b3RhbCB2aWRlbyBkdXJhdGlvbiBpbiBzZWNvbmRzCiAgICAgICAgeDEsIHgyOyAgICAgICAgICAgLy8vIGJ1ZmZlciBzZWdtZW50IG1hcmsgcG9zaXRpb25zCgogICAgLy8vIGNsZWFyIGNhbnZhcwovLyAgICAgX2N4LmZpbGxTdHlsZSA9ICcjMDAwJzsKLy8gICAgIF9jeC5maWxsUmVjdCgwLCAwLCB3LCBoKTsKICAgIF9jeC5jbGVhclJlY3QoMCwgMCwgdywgaCk7CgogICAgLy8vIGNvbG9yIGZvciBsb2FkZWQgYnVmZmVyKHMpCiAgICBfY3guZmlsbFN0eWxlID0gJyM4ODgnOwoKICAgIC8vLyBpdGVyYXRlIHRocm91Z2ggYnVmZmVycwogICAgd2hpbGUgKGktLSkgewogICAgICAgIHgxID0gYi5zdGFydChpKSAvIHZsICogdzsKICAgICAgICB4MiA9IGIuZW5kKGkpIC8gdmwgKiB3OwogICAgICAgIF9jeC5maWxsUmVjdCh4MSwgMCwgeDIgLSB4MSwgaCk7CiAgICB9CgogICAgLy8vIGRyYXcgY3Vyc29yIGZvciBwb3NpdGlvbgogICAgX2N4LmZpbGxTdHlsZSA9ICcjZmZmJzsKICAgIHgxID0gX3YuY3VycmVudFRpbWUgLyB2bCAqIHc7CiAgICBfY3guZmlsbFJlY3QoeDEtMSwgMCwgMiwgaCk7CgogICAgc2V0VGltZW91dChsb29wLCAyOSk7Cn0KCmxvb3AoKTsK"))

Related

Detect differences between two video frames displayed in HTML5 canvas

I'm displaying a live video stream in an HTML5 canvas, which works fine.
Now, what I need to do is to check for any "motion" in the camera feed being displayed in the HTML5 canvas.
During my research, I found out that this can be done by comparing the previous frame with the current frame being displayed in the canvas.
So I tried this code within a setInterval function:
var c = document.querySelector('.mycanv');
var ctx = c.getContext("2d");
var imageData = ctx.getImageData(0, 0, 200, 200);
var data = imageData.data.length;
console.log(data);
However, when I look in the console, the number that the variable data outputs is always the same, 16000, and it won't change even if there is movement in front of the camera.
I'm not entirely sure if I am on a right track.
Could someone please advice on this issue and point me in a right direction please?
Thanks in advance.
Here is a minified and simple example:
https://jsfiddle.net/2648xwgz/
Basically, the code draws a frame of the video in the canvas.
Now, what I need to do is to check if the previous frame/image in the canvas is the same as the current frame/image.
Second EDIT:
Okay, so I've taken all the advice in the comments on board and tried to come up with something that doesn't check every random pixel in the images, as that is heavy and not good practice...
So, I tried something like this:
https://jsfiddle.net/qpxjcv3a/6/
The above code will run fine in Firefox and with a local video file, but on JSFiddle you will get a cross-origin error.
Anyway, the key point in the code above is using ctx.globalCompositeOperation = 'difference';, I guess.
And then doing a "score" calculation taken from here to detect some changes.
However, when I run my code, I always get console.log('we have motion'); in the console! Even when the video is paused and there are no more new frames.
So I did console.log(imageScore); and it keeps increasing by 10000 even when the video is paused or ended! I'm not sure why that is, and whether this calculation is correct at all. But that is where I am at the moment.
Any pointers and help appreciated.
As Daniel said, you need to check the pixels; the length would be the same for all iterations.
You should look into image hashing algorithms. At every interval you can calculate a hash, and store that in a global variable to compare at the next interval. This would also give you the option to set a threshold, so minor changes would not trigger motion detection.
This page explains image hashing in more details: https://jenssegers.com/61/perceptual-image-hashes
You can start by implementing average hash. It is quite simple. You would reduce your canvas size to 8x8 pixels.
ctx.drawImage(video, 0, 0, video.width, video.height, 0, 0, 8, 8);
var imageData = ctx.getImageData(0, 0, 8, 8);
var data = imageData.data;
Then you iterate over the image data and calculate the brightness.
var brightnessdata = [];
for (var i = 0; i < data.length; i += 4) {
    brightnessdata.push((data[i] + data[i+1] + data[i+2]) / 3);
}
The rest is simply calculating the average brightness and comparing each pixel brightness to the average brightness to calculate the hash.
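For illustration, a rough sketch of those remaining steps, reusing the brightnessdata array from above; the names averageHash and hammingDistance are made up for this example:
function averageHash(brightnessdata) {
    var sum = 0;
    for (var i = 0; i < brightnessdata.length; i++) sum += brightnessdata[i];
    var avg = sum / brightnessdata.length;
    // one bit per pixel: 1 if brighter than the average, 0 otherwise
    return brightnessdata.map(function(b) { return b > avg ? 1 : 0; });
}

function hammingDistance(hashA, hashB) {
    var distance = 0;
    for (var i = 0; i < hashA.length; i++) {
        if (hashA[i] !== hashB[i]) distance++;
    }
    return distance;
}

// Compare the current frame's hash to the previous one and only treat it
// as motion when enough bits differ (the threshold is up to you), e.g.:
// if (hammingDistance(currentHash, previousHash) > threshold) { /* motion */ }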

HTML5 Canvas FPS drops when rendering another canvas as an image using drawImage() at a specific width & height

So I have run into a very strange problem while using the HTML5 Canvas API. I'm attempting to create a game and the problem occurs because I'm drawing different canvases (using drawImage()) onto the main canvas. The other canvases have had graphics drawn on them and then I am simply drawing those canvases onto the main one. The problem is that at very specific widths and heights (coming close to the width and height of the main canvas) the fps suddenly drops by about 20-30. And this happens when only drawing one of those big canvases onto the main one. I thought the performance drop might've been attributed to drawing such a big canvas with graphics on it, so I emptied the canvases and was basically drawing an "empty" canvas onto the main canvas. Even so, the performance dropped. What's even more strange is that when I subtract just ONE pixel from either the width or height of the big canvas the fps goes back to 60! The widths and heights at which this has occurred (there are probably more sets) are:
W: 1797
H: 891
W: 2026
H: 790
So for example, for the first set, if you were to draw a canvas with those measurements (EVEN AN EMPTY ONE) you would get 30-40 fps. Yet if you were to draw a canvas with one of those measurements reduced by one (i.e. W: 1796, H: 891), then it would go back to 60 fps.
What I find even more strange is that this happens only on Chrome. I have tried it on Internet Explorer and Safari and I can draw significantly bigger canvases (i.e. far bigger than even the main canvas) and still get 60 fps. I'm sorry for not being able to list the code, because it would require me to post a significantly large piece of code (the code is intertwined in multiple files). Could somebody elaborate on why this is happening? Thank you!
EDIT: This also happens if I draw only a clipped version of the big canvas using the other version of drawImage(). So it doesn't even have to do with rendering the actual number of pixels...which I find extremely strange.
EDIT 2: So I have run some tests, and it turns out it has nothing to do with the rendering but rather with memory usage, because I have an array that's holding 25 of these big canvases. I created a fiddle to benchmark:
https://jsfiddle.net/eu3zoc4f/3/
HTML:
<body>
    <canvas id="canvas" style="display: block;"></canvas>
</body>
JS:
var timer = {
    startedAt: null,
    stoppedAt: null,
    start: function() {
        this.stoppedAt = null;
        this.startedAt = new Date();
    },
    stop: function() {
        this.stoppedAt = new Date();
    },
    getTime: function() {
        if (!this.stoppedAt) this.stop();
        return this.stoppedAt.getTime() - this.startedAt.getTime();
    }
};

var body = document.getElementsByTagName("body");
body[0].style.width = screen.availWidth + "px";

document.getElementById("canvas").width = window.innerWidth;
document.getElementById("canvas").height = window.innerHeight;
var context = document.getElementById("canvas").getContext("2d");

var TESTING_1 = [];
for (var c = 0; c < 25; c++) {
    TESTING_1[c] = document.createElement('canvas');
    TESTING_1[c].width = 2000;
    TESTING_1[c].height = 1200;
    TESTING_1[c].getContext('2d').fillStyle = 'rgb(255, 0, 0)';
    TESTING_1[c].getContext('2d').fillRect(0, 0, 1900, 897);
}

function main() {
    timer.start();
    context.drawImage(TESTING_1[0], 0, 0);
    context.fillStyle = "blue";
    context.font = "20px Arial";
    context.fillText(timer.getTime(), 100, 100);
}

main();
If you change the "25" in the for loop to a lower number, you will get much faster results. So the question now is: why is this happening only in Chrome, and is there a way to fix it?

Image Flickering In Canvas Game

For a university project I have been tasked with creating a Flappy Bird clone. It's being done using the HTML5 canvas.
The issue doesn't happen very often, but it seems that every 6 or so seconds the grass will flicker. I'm not sure what's causing this; it could be a performance issue.
Here is a link so you may see the issue: http://canvas.pixcelstudios.uk
Here is the function I'm using to draw the grass:
var drawGrass = function(cWidth, ctx, minusX)
{
    var x = bg_grass.x;
    var y = bg_grass.y;
    var w = bg_grass.w;
    var h = bg_grass.h;
    var img = bg_grass.img;

    if (minusX[0] >= cWidth)
    {
        bg_grass.x = 0;
        minusX[0] = 0;
    }

    ctx.drawImage(img, x, y, w, h);

    if (minusX[0] > 0)
    {
        ctx.drawImage(img, w - minusX[0], y, w, h);
    }
};
Basically, I'm drawing two grass sprites, each taking up a canvas width. One starts with an X of 0 and the other starts at the end of the canvas. Both are decremented each frame, and when one is completely off the screen, it's reset to keep it looping.
I don't think it's anything to do with my update loop which is as follows:
this.update = function()
{
    clearScreen();
    updateBackground();
    updatePositions();
    checkCollisions();
    render();
    requestAnimFrame(gameSpace.update);
};
I've done a little bit of reading and I've read about having a second canvas to act as a buffer. Apparently this can stop flickering and improve performance? But all of the examples I've seen show the parts being drawn onto the canvas outside of a loop, and I can't really see how doing it within a game loop (moving parts and all) would increase performance rather than decrease it. Surely the same operations are being performed, except now you also have to draw the second canvas onto the first?
Please let me know if you need any more information (although you should be able to see the whole source from the web link).
Thanks!
Okay, I found the issue! It was just a simple mistake in my drawGrass function.
Due to the ordering, there'd be just a single frame where I'd set my shorthand X variable to bg_grass.x and THEN set bg_grass.x to something else, therefore drawing the wrong value.
I've now set my shorthand variables after the first if-statement.
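For illustration, the reordered function could look roughly like this (the same drawGrass from the question, with the reset done before the shorthand variables are read; not necessarily the poster's exact final code):
var drawGrass = function(cWidth, ctx, minusX)
{
    // reset first, so the shorthand variables below pick up the corrected value
    if (minusX[0] >= cWidth)
    {
        bg_grass.x = 0;
        minusX[0] = 0;
    }

    var x = bg_grass.x;
    var y = bg_grass.y;
    var w = bg_grass.w;
    var h = bg_grass.h;
    var img = bg_grass.img;

    ctx.drawImage(img, x, y, w, h);

    if (minusX[0] > 0)
    {
        ctx.drawImage(img, w - minusX[0], y, w, h);
    }
};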
However, if anyone could provide any insight into the second part of the question regarding a buffer canvas, I'd still much appreciate that.

Analyser not updating canvas draws as quickly as audio is coming in

Whenever I have a new buffer that comes into my client, I want to redraw that instance of audio onto my canvas. I took the sample code from http://webaudioapi.com/samples/visualizer/ and tried to alter it to fit my needs in a live environment. I seem to have something working because I do see the canvas updating when I call .draw(), but it's not nearly as fast as it should be. I'm probably seeing about 1 fps as it is. How do I speed up my fps and still call draw for each instance of a new buffer?
Entire code:
https://github.com/grkblood13/web-audio-stream/tree/master/visualizer
Here's the portion calling .draw() for every buffer:
function playBuffer(audio, sampleRate) {
    var source = context.createBufferSource();
    var audioBuffer = context.createBuffer(1, audio.length, sampleRate);

    source.buffer = audioBuffer;
    audioBuffer.getChannelData(0).set(audio);
    source.connect(analyser);

    var visualizer = new Visualizer(analyser);
    visualizer.analyser.connect(context.destination);
    visualizer.draw(); // Draw new canvas for every new set of data

    if (nextTime == 0) {
        nextTime = context.currentTime + 0.05; /// add 50ms latency to work well across systems - tune this if you like
    }

    source.start(nextTime);
    nextTime += source.buffer.duration; // Make the next buffer wait the length of the last buffer before being played
}
And here's the .draw() method:
Visualizer.prototype.draw = function() {
    function myDraw() {
        this.analyser.smoothingTimeConstant = SMOOTHING;
        this.analyser.fftSize = FFT_SIZE;

        // Get the frequency data from the currently playing music
        this.analyser.getByteFrequencyData(this.freqs);
        this.analyser.getByteTimeDomainData(this.times);

        var width = Math.floor(1 / this.freqs.length, 10);

        // Draw the time domain chart.
        this.drawContext.fillStyle = 'black';
        this.drawContext.fillRect(0, 0, WIDTH, HEIGHT);
        for (var i = 0; i < this.analyser.frequencyBinCount; i++) {
            var value = this.times[i];
            var percent = value / 256;
            var height = HEIGHT * percent;
            var offset = HEIGHT - height - 1;
            var barWidth = WIDTH / this.analyser.frequencyBinCount;
            this.drawContext.fillStyle = 'green';
            this.drawContext.fillRect(i * barWidth, offset, 1, 2);
        }
    }
    requestAnimFrame(myDraw.bind(this));
}
Do you have a working demo? You can easily debug this using the Timeline in Chrome to find out what process takes long. Also, please take unnecessary math out; most of your code doesn't need to be executed every frame. Also, how many times is the draw function called from playBuffer? When you call play, at the end of that function it requests a new animation frame. If you call play every time you get a buffer, you get many more cycles of math->drawing->request frame, which also makes it very slow. If you are already using requestAnimationFrame, you should only call the play function once.
To fix up the multi frame issue:
window.animframe = requestAnimFrame(myDraw.bind(this));
And on your playBuffer:
if(!animframe) visualizer.draw();
This makes sure it only executes the play function when there is no request.
Do you have a live example demo? I'd like to run it through some profiling. You're trying to get updates for the playing audio, not just once per chunk, right?
I see a number of inefficiencies:
- You're copying the data at least once more than necessary: you should have your scheduleBuffers() method create an AudioBuffer of the appropriate length, rather than an array that then needs to be converted.
- If I understand your code logic, it's going to create a new Visualizer for every incoming chunk, although they use the same Analyser. I'm not sure you really want a new Visualizer every time - in fact, I think you probably don't.
- You're using a pretty big fftSize, which might be desirable for a frequency analysis, but at 2048/44100 you're sampling more than you need. Minor point, though.
- I'm not sure why you're doing a getByteFrequencyData at all.
- I think the extra closure may be causing memory leakage. This is one of the reasons I'd like to run it through the dev tools.
- You should move the barWidth definition outside of the loop, along with the snippet length:
var snippetLength = this.analyser.frequencyBinCount;
var barWidth = WIDTH/snippetLength;
If you can post a live demo that shows the 1 fps behavior, or send a URL to me privately (cwilso at google or gmail), I'd be happy to take a look.

HTML 5 canvas clip very costly

I have serious performance problems using canvas clip() with Chrome.
I have made a test case to illustrate.
Even in a simple case like this, the red rectangle blinks as if it takes too much time to redraw, and a CPU profile shows the clip() method takes about 10% of the CPU.
In my real program, it gets to 16% and keeps increasing each frame until the canvas almost freezes the browser.
Is there something wrong in my use of clip ?
Thank you for any suggestions,
Regards.
Cause
Insert a beginPath(), as rect() adds to the current path, unlike fillRect()/strokeRect(). What happens here is that the rectangles accumulate, eventually slowing the clipping down over time.
This in combination with using setInterval, which is not able to synchronize with monitor/screen refreshes, worsens the problem.
To elaborate on the latter:
Using setInterval()/setTimeout() can cause tearing, which happens when a screen refresh occurs while the draw is in the "middle" of its operation, not fully completed.
setInterval can only take integer values, and you would need 16.67 ms to synchronize frame-wise (at 60 Hz). Even if setInterval could take floats, it would not synchronize with the monitor timing, as the timer mechanism isn't bound to the monitor at all.
To solve this, always use requestAnimationFrame to synchronize drawings with screen updates. It is directly linked to monitor refreshes, is a lower-level and more efficient implementation than the timers, and is made for this purpose, hence the name.
Solution embedding both fixes above
See modified bin here.
The code for future visitors:
function draw() {
    context.fillStyle = '#000';
    context.fillRect(0, 0, width, height);

    context.save();
    context.beginPath(); /// add a beginPath here
    context.rect(0, 0, 100, 100);
    context.clip();

    context.fillStyle = '#ff0000';
    context.fillRect(0, 0, 200, 200);
    context.restore();

    requestAnimationFrame(draw); /// use rAF here
}

canvas.width = width;
canvas.height = height;
canvas.style.width = width + 'px';
canvas.style.height = height + 'px';

requestAnimationFrame(draw); /// start loop
PS: If you need to stop the loop, wrap the rAF call inside the loop in a condition, i.e.:
if (isPlaying) requestAnimationFrame(draw);
There is, by the way, no need for closePath(), as this is done implicitly for you per the specs. You can add it if you want, but calling clip() (or fill(), etc.) will do this for you (the spec is addressing browser implementers):
Open subpaths must be implicitly closed when computing the clipping region
