Real time data plotting performance HTML5 canvas vs Dom appending - javascript

I have some realtime data: 3 integers that are changing over time. These integers are from my accelerometer readings: x, y, and z. I was thinking of a way to plot these data so it will be easier to trend the changes.
There are many chart libraries out there, such as flot. What I want to do is represent the integers as bar heights. There are two methods I can use to display the bar graph:
Use divs for the bars which will be appended to a parent div.
Use an HTML5 canvas to draw the bars that will represent the integers.
My question is: which of these two methods will work better from a performance perspective, assuming the data update frequency is 50 ms (i.e., the data changes every 50 milliseconds)?

To a certain extent it depends; it's going to come down to a few factors:
The number of items you've got that you need to update (are you talking 10s, 100s, 1000s or more?)
The frequency at which you want to update; you're limited by the speed at which the browser can execute the JavaScript.
The browser - some browsers can render content at significantly different speeds.
A good example for comparison is to look at some of the D3 performance demos. Because the core library is doing the same work underneath, this compares the render speed of SVG (DOM-based) rendering against Canvas rendering. Take a look at these two swarm examples first:
Canvas Swarm
SVG Swarm
If you start scaling up the frequency and the number of items, Canvas is going to outperform SVG, because it's a much lighter workload for the browser: it doesn't have to manipulate the DOM in the same way, check which CSS rules apply, etc. However, SVG is more powerful precisely because of those features.
There's a really good breakdown of performance in this post on the D3 Google group which compares browser render times. To pull out some numbers as an example (although these figures are a little old):
Chrome was 74x slower rendering SVG over Canvas.
Firefox was 150x slower rendering SVG over Canvas.
Opera was lightning fast at SVG but 71x slower rendering Canvas than SVG.

Here is a fiddle showing both a flot bar chart and a DOM / div approach. You can run them separately or together and compare the look and the CPU load in your task manager.
This is based on my answer to this question (which gave the Windows 7 Task Manager style to the chart).
The update loops for both methods:
function update() {
    GetData();
    $.plot($("#placeholder"), dataset, options);
    timerFlot = setTimeout(update, updateInterval);
}

function updateDom() {
    GetData();
    $('#domtest').html('');
    for (var i = 0; i < data.length && i < 100; i++) {
        $('#domtest').append('<div class="bar" style="height: ' + (3 * data[i][3]).toString() +
            'px; top: ' + (300 - 3 * data[i][4]).toString() + 'px;"></div>');
    }
    timerDom = setTimeout(updateDom, updateInterval);
}

I would recommend using D3 and a complementary charting library to plot your bar chart. You can then use SVG, which is more performant than appending DOM nodes. I would check out c3.
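As a rough illustration of the kind of data-join update D3 gives you, here is a minimal sketch (assuming D3 v4 or later is loaded globally as d3; the '#chart' selector, the bar sizing, and GetAccelerometerReading() are made up for the example):

// Minimal D3 data join: redraw three bars (x, y, z) on every reading.
var svg = d3.select('#chart');              // assumes an <svg id="chart"> of roughly 300x300
var barWidth = 80, maxHeight = 300;

function render(values) {                   // values = [x, y, z]
    var bars = svg.selectAll('rect').data(values);

    bars.enter().append('rect')             // create the bars on the first call
        .attr('width', barWidth)
        .attr('x', function (d, i) { return i * (barWidth + 10); })
      .merge(bars)                          // on later calls just update the heights
        .attr('y', function (d) { return maxHeight - 3 * d; })
        .attr('height', function (d) { return 3 * d; });
}

setInterval(function () {
    render(GetAccelerometerReading());      // hypothetical function returning [x, y, z]
}, 50);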

I would second Sean's answer about D3. Coupling that with either jQuery or D3's native JSON handling, you can pull data quite rapidly. For instance, I've written code to pull LDAP query data with records numbering in the thousands, with turnaround times of a second or so. But that's server side.
On the rendering side, see these examples comparing the difference between canvas and SVG rendering performance. It's not a benchmark per se, but you can see pretty clearly that performance isn't an issue.
bl.ocks.org/mbostock/1276463 - bl.ocks.org/mbostock/1062544 - bl.ocks.org/mbostock/9539958
But to your question about charts, D3 very elegantly handles data updates with data joins. Here is a nice article about updating a data series over time. bost.ocks.org/mike/path/
Hope that helps.

I would suggest going through the Canvas approach.
Changing DOM elements causes the browser to repaint in order to redraw the element on the screen. In your case there are 3 elements which will be changing frequently.
The internal operation of the animation works this way:
Say an element with height x is changed to x + 3.
As a first step the browser changes the height of the element by +1, repaints, and adjusts all the children within it. Then it increases the height by another +1, repaints, adjusts the children again, and so on. Every time the element changes, a lot has to happen inside the browser.
Coming to canvas: it's like a painting board. No matter how many cycles you increase or decrease the values, the canvas will not make any of these iterative layout and repaint calls; you simply redraw the bars each frame (see the sketch below).
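A minimal sketch of that per-frame redraw for the three accelerometer values from the original question (the canvas id and the GetAccelerometerReading() data source are made up for the example):

var canvas = document.getElementById('chart');
var ctx = canvas.getContext('2d');
var barWidth = 80, scale = 3;

function drawBars(values) {                              // values = [x, y, z]
    ctx.clearRect(0, 0, canvas.width, canvas.height);    // wipe the whole surface
    for (var i = 0; i < values.length; i++) {
        var h = values[i] * scale;
        ctx.fillRect(i * (barWidth + 10), canvas.height - h, barWidth, h);
    }
}

setInterval(function () {
    drawBars(GetAccelerometerReading());                 // hypothetical data source
}, 50);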
Hope this provides some info.

Related

Reduce the size of a large data set by sampling/interpolation to improve chart performance

I have a large set (>2000) of time series data that I'd like to display using d3 in the browser. D3 is working great for displaying a subset of the data (~100 points) to the user, but I also want a "context" view (like this) to show the entire data set and allow users to select a subregion to view in detail.
However, performance is abysmal when trying to display that many points in d3. I feel like a good solution would be to select a sample of the data and then use some kind of interpolation (spline, polynomial, etc., this is the part I know how to do) to draw a curve that is reasonably similar to the actual data.
However, it's not clear to me how I ought to go about selecting the subset. The data (shown below) has rather flat regions where fewer samples would be needed for a decent interpolation, and other regions where the absolute derivative is quite high, where more frequent sampling is needed.
To further complicate matters, the data has gaps (where the sensor generating it was failing or out of range), and I'd like to keep these gaps in the chart rather than interpolating through them. Detection of the gaps is fairly simple though, and simply clipping them out after drawing the entire data set with the interpolation seems like a reasonable solution.
I'm doing this in JavaScript, but a solution in any language or a mathematical answer to the problem would do.
You could use the d3fc-sample module, which provides a number of different algorithms for sampling data. Here's what the API looks like:
// Create the sampler
var sampler = fc_sample.largestTriangleThreeBucket();
// Configure the x / y value accessors
sampler.x(function (d) { return d.x; })
.y(function (d) { return d.y; });
// Configure the size of the buckets used to downsample the data.
sampler.bucketSize(10);
// Run the sampler
var sampledData = sampler(data);
You can see an example of it running on the website:
https://d3fc.io/examples/sample/
The largest-triangle three-buckets algorithm works quite well on data that is 'patchy'. It doesn't vary the bucket size, but does ensure that peaks / troughs are included, which results in a good representation of the sampled data.
I know this doesn't answer your question entirely, but this library might help you to simplify your line during rendering. Not sure if they handle data gaps though.
http://mourner.github.io/simplify-js/
My advice is to average (not subsample) over longer or shorter time intervals and plot those average values as horizontal bars. I think that's very comprehensible to the user -- if you try something fancier, you might give up the ability to explain exactly what's going on. I'm assuming you can let the user choose to zoom in or out so as to show more or less detail.
You might be able to get the database engine to compute averages over intervals for you, so that's a potential speed-up too.
As to the time intervals to pick, you could try either (1) fixed intervals such as 1 second, 15 seconds, 1 minute, 15 minutes, hours, days, or whatever; that might be easier for the user to understand, or (2) choose the interval to make a fixed number of units across the whole time range, e.g. if you decide to display 7 hours of data in 100 units, then each unit = 252 seconds.
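A minimal sketch of the fixed-interval averaging idea (assuming the series is an array of {time, value} objects with time in milliseconds; the names are made up for the example):

// Average samples into fixed-width time buckets; empty buckets (sensor gaps) stay empty.
function averageByInterval(data, intervalMs) {
    var buckets = {};
    data.forEach(function (d) {
        var key = Math.floor(d.time / intervalMs);
        (buckets[key] = buckets[key] || []).push(d.value);
    });
    return Object.keys(buckets).map(function (key) {
        var values = buckets[key];
        var mean = values.reduce(function (a, b) { return a + b; }, 0) / values.length;
        return { time: key * intervalMs, value: mean };
    });
}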

maximum number of svg elements for the browser in a map

I am creating a map with leaflet and d3. A lot of circles will be plotted on a map. In terms of browser compatibility, there is an expected limit on how many SVG elements the browser can render. In terms of user experience, however, I would prefer that the user can see as many elements on the map as possible (otherwise the user might need to zoom in and out constantly and would need to wait for the ajax to return data). There will be some optimisation that I need to consider (user waiting time vs. server query load vs. what the browser can handle).
See plot, there is a limit right now on the number of points that the server returns and thus only a portion of the map is filled.
The browser cannot handle a fully filled map here and the user would need to wait too long for the server response as well.
I suppose the problem that I am faced with needs to be solved by answering two questions:
Is there a standard in terms of what the average browser can handle in terms of number of simple svg shapes (circles) on a map?
What is the best technique to show as many shapes on the map as possible?
I'm considering the following points, but I am unsure if they will help:
use squares instead of circles
use the leaflet API instead of the D3
Speaking in general terms, neither of the points you're considering will help. In both cases, the amount of processing to be done / information to display by the browser will be approximately the same.
Regarding your first question, not that I'm aware of. There are huge variations between browsers and platforms (especially if you consider mobile devices as well) and an average would be almost meaningless. Furthermore, this is changing constantly. I've found that up to about 1000 simple shapes are usually not a problem.
To show as many shapes as possible on the map, I would pre-render them into bitmap tiles and then use either the leaflet API or something like d3.geo.tile (example here) to overlay it on the actual map. This way you can easily scale to millions of points.
Although you can only render ~2-5k SVG elements before you start to see noticeable slowdown (depending on size, use of gradient fills, etc.), you can store and manipulate much larger datasets client-side. You can often handle tens or hundreds of thousands of data points efficiently in SVG: the trick is to be very selective about what you actually render, and to use techniques like debouncing to redraw only when necessary.
(For very large datasets: yes, you'll need to either aggregate/subsample points or pre-render.)
With this in mind, one technique I've used for d3 maps in particular is to use d3.geom.quadtree() to dynamically cull points as the user pans/zooms. More specifically, I avoid drawing points that are either
outside the current map bounding box (since these aren't visible at all), or
too close to other points (since these add visual clutter and are hard to interact with anyways).
In JS-ish pseudocode, this would look roughly like:
function getIndicesToDraw(data, r, bbox) {
    var indicesToDraw = [];
    var Q = d3.geom.quadtree();
    // set bounds in pixel space
    for (var i = 0; i < data.length; i++) {
        var d = data[i];
        var p = getPointForDatum(d);
        if (isInsideBoundingBox(bbox, p) && !hasPointWithinRadius(Q, r, p)) {
            Q.add(p);
            indicesToDraw.push(i);
        }
    }
    return indicesToDraw;
}

function redraw(svg, data, r, bbox) {
    var indicesToDraw = getIndicesToDraw(data, r, bbox);
    var points = svg.selectAll('.data-point')
        .data(indicesToDraw, function(i) { return i; });
    // draw new points for points.enter()
    points.exit().remove();
    // update positions of points (or SVG transforms, etc.)
}
This is a question of cartography as much as it is of technology. Just because you can put thousands of points on a map doesn't mean you should. You should ask yourself if your user needs to see as many points as possible at once, and will understand the data better as a result of it. Will seeing this many points confuse and overwhelm the user, or will it help them accomplish their desired task? Is there any way you could filter the data so that all points do not need to be drawn at once?
The most common solution to this problem is to use clusters, usually with something like Leaflet MarkerCluster. In the past while using D3 and Leaflet I have created a sort of heat map by creating an arbitrary grid, assigning each point to a bin, and applying a color ramp to the grid. However, getting back to your original concern, it is rather computationally intensive.
Depending on which platforms you want to support, be aware that phones and tablets do not always handle SVGs very well, especially while drawing thousands of features.
Another place for potential performance gain is in the delivery of the point geometry. I'm not sure how you are currently loading them, but using a spatially indexed PostGIS table and selecting by bounding box is often quite quick. And because you are drawing points and not polygons, you could even get away with loading them into the browser via CSV, which is substantially smaller than GeoJSON or TopoJSON.
I would load all the data at once but only draw circles in the view port that are large enough. On zoom or pan, remove the circles that shouldn't be shown and check if previously hidden circles should be added.
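A minimal sketch of that idea with the Leaflet API (assuming a map instance and a points array of {lat, lng, radius} objects; the zoom-dependent radius threshold and drawCircles() are made up for the example):

// Redraw only the points that are inside the current view and big enough to matter.
function redrawVisible() {
    var bounds = map.getBounds();
    var minRadius = 10 / Math.pow(2, map.getZoom());       // made-up threshold per zoom level

    var visible = points.filter(function (p) {
        return bounds.contains(L.latLng(p.lat, p.lng)) && p.radius >= minRadius;
    });

    drawCircles(visible);   // hypothetical: clears the layer and draws the current selection
}

map.on('moveend zoomend', redrawVisible);                   // re-check after every pan or zoom
redrawVisible();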
You could also use canvas + three.js or WebGL, which lets you combine a map layer with 10k+ animated 3D models or native SVG elements, all with interactive real-time animation in one scene; I've tested this approach for fun and it works very well. Other paths worth exploring are GLSL shaders, OpenGL + WASM, and so on.

Optimize d3 force directed layout, via charge/gravity properties, based on number of nodes

I've been working on a network topology visualization using the force-directed algorithm built into D3. Everything is working well, but I'm having trouble with one important detail... I can't seem to get the graph to lay out in an ideal way for graphs with a varying number of nodes. By ideal, I mean the nodes are nicely spaced out from each other (no overlap) and nodes cluster wherever it makes sense. I've been trying to do this by adjusting the 'charge' and 'gravity' properties of the force layout, but no matter what I try, it seems to always work for one scenario (i.e. a large number of nodes) but not for another (i.e. a small number of nodes). For example, if I have the layout working for a large graph, then when I look at a small graph using the same formula for charge/gravity, I'll have a few nodes that are way out of sight from the rest of the nodes. Here's an example of a formula I was using, based on another SO question post:
var k = Math.sqrt(json.nodes.length / (dim.w * dim.h));
var charge = -10 / k;
var gravity = 100 * k;
This works for a graph with 14 nodes, but if I try the same with a graph of 5 nodes, some of those nodes are completely off the screen. Note that the width/height used in the calculation of 'k' is not changing between these two scenarios. Maybe I shouldn't base these properties on the width/height of the visible area of the graph; to be honest, this is not a requirement. I don't need the graph to render and fit within the viewport. I just need the graph to lay itself out sensibly, so it's fine if some of it ends up outside the visible area, especially in a large graph. I've also tried the following with some success, but I still find nodes being rendered too far away from the rest of the graph for small graphs:
var charge = -1 * Math.pow(json.nodes.length, 3);
var gravity = 1 / json.nodes.length;
Can anyone out there help me out with this? It would be greatly appreciated, as I feel kind of stuck on this at the moment.
I actually figured this out on my own...
So the values I was using for charge/gravity/etc. were not so much the problem. The issue was related to how many times the tick function was being called to adjust the graph. For my larger graphs, the nodes were always laid out fairly well. The main problem I was having was with smaller graphs: I was finding that nodes would frequently be placed outside of the viewport when the graph only had around 5-10 nodes in it.
In my code, I'm calling the tick function manually, like so:
force.start();
for (var i = tickLimit; i > 0; --i)
    force.tick();
force.stop();
Previously, tickLimit was being set as follows:
var tickLimit = Math.pow(json.nodes.length, 2);
After messing around with charge/gravity values etc., I eventually realized that this is not sufficient for small graphs. If I have a graph with 4 nodes, that means only 16 tick() calls will be made. This is not enough for the graph to adjust itself fully (i.e. stabilize). Therefore I just needed to add a check to ensure the graph ticks a minimum number of times (e.g. at least 300) and a maximum (e.g. no more than 10000).
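In code, that check might look something like this (a minimal sketch; the 300 and 10000 bounds are just the examples mentioned above):

// Clamp the manual tick count: at least 300 iterations so small graphs stabilize,
// at most 10000 so very large graphs don't lock up the browser.
var tickLimit = Math.min(10000, Math.max(300, Math.pow(json.nodes.length, 2)));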
This may not work for everyone but it solves the issue for me.
In this force-based algorithm case I would say that it's almost impossible to find settings that fit all cases. The layout depends heavily on graph density and on the graph's inner semantics.
What is the range of possible numbers of nodes? What about density? Is it a randomly generated graph with a predefined density coefficient, or does it have some semantics behind it, with the possibility of looking good based on those semantics?
You said that nodes drift far away from each other. What does a higher gravity give you?
Also, the suggestion about linkDistance may help you too. E.g. I'm also using d3's force layout for drawing network graphs (but they're mostly small handmade graphs with fewer than 50 nodes), and I just copied the values from one of Mike Bostock's force layout examples. Here they are:
// graph force layout defaults
var linkDistance = 50,
    charge = -200;

// chart properties
var height = 720,
    width = 720,
    radius = 10;
I don't expect that it'll help you, but maybe it'll stimulate others to discuss.
UPD. All I can suggest is to experiment. Choose a small set of test graphs, find optimal initial settings for each one, then interpolate between those numbers. Also, if you deal with very big graphs (very big for a "nice" visualization, I mean), you may find that grouping (collapsing) some parts of the graph helps; it reduces the number of nodes (and maybe the graph complexity).
Also keep in mind that you don't need to set constant force layout settings (charge, gravity, linkDistance, etc. can all take functions), and you can set the radius of nodes a little bigger than the visible radius so they won't overlap each other. Or set a non-constant function for charge, something like the sketch below. Or use Mike Bostock's advice to spread nodes out on each tick manually.
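A minimal sketch of such a non-constant charge, assuming D3 v3's d3.layout.force (the version these answers use); the scaling factor is made up for the example:

// Charge as a function of each node: well-connected nodes repel more strongly.
// d.weight is the per-node link count that d3.layout.force computes on start().
force.charge(function (d) {
    return -30 * (1 + (d.weight || 0));
});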

HTML5 Canvas vs. SVG vs. div

What is the best approach for creating elements on the fly and being able to move them around? For example, let's say I want to create a rectangle, circle and polygon and then select those objects and move them around.
I understand that HTML5 provides three elements that can make this possible: svg, canvas and div. For what I want to do, which one of those elements will provide the best performance?
To compare these approaches, I was thinking of creating three visually identical web pages that each have a header, footer, widget and text content in them. The widget in the first page would be created entirely with the canvas element, the second entirely with the svg element, and the third with the plain div element, HTML and CSS.
The short answer:
SVG would be easier for you, since selection and moving it around is already built in. SVG objects are DOM objects, so they have "click" handlers, etc.
DIVs are okay but clunky and have awful performance loading at large numbers.
Canvas has the best performance hands-down, but you have to implement all concepts of managed state (object selection, etc) yourself, or use a library.
The long answer:
HTML5 Canvas is simply a drawing surface for a bit-map. You set up to draw (Say with a color and line thickness), draw that thing, and then the Canvas has no knowledge of that thing: It doesn't know where it is or what it is that you've just drawn, it's just pixels. If you want to draw rectangles and have them move around or be selectable then you have to code all of that from scratch, including the code to remember that you drew them.
SVG, on the other hand, must maintain references to each object that it renders. Every SVG/VML element you create is a real element in the DOM. By default this allows you to keep much better track of the elements you create and makes dealing with things like mouse events easier, but it slows down significantly when there are a large number of objects.
Those SVG DOM references mean that some of the footwork of dealing with the things you draw is done for you. And SVG is faster when rendering really large objects, but slower when rendering many objects.
A game would probably be faster in Canvas. A huge map program would probably be faster in SVG. If you do want to use Canvas, I have some tutorials on getting movable objects up and running here.
Canvas would be better for faster things and heavy bitmap manipulation (like animation), but will take more code if you want lots of interactivity.
I've run a bunch of numbers on HTML DIV-made drawing versus Canvas-made drawing. I could make a huge post about the benefits of each, but I will give some of the relevant results of my tests to consider for your specific application:
I made Canvas and HTML DIV test pages, both of which had movable "nodes." Canvas nodes were objects I created and kept track of in JavaScript. HTML nodes were movable divs.
I added 100,000 nodes to each of my two tests. They performed quite differently:
The HTML test tab took forever to load (timed at slightly under 5 minutes; Chrome asked to kill the page the first time). Chrome's task manager says that tab is taking up 168MB. It takes up 12-13% CPU time when I am looking at it, 0% when I am not looking.
The Canvas tab loaded in one second and takes up 30MB. It also takes up 13% of CPU time all of the time, regardless of whether or not one is looking at it. (2013 edit: They've mostly fixed that)
Dragging on the HTML page is smoother, which is expected by the design, since the current setup is to redraw EVERYTHING every 30 milliseconds in the Canvas test. There are plenty of optimizations to be had for Canvas for this. (canvas invalidation being the easiest, also clipping regions, selective redrawing, etc.. just depends on how much you feel like implementing)
There is no doubt you could get Canvas to be faster at object manipulation as the divs in that simple test, and of course far faster in the load time. Drawing/loading is faster in Canvas and has far more room for optimizations, too (ie, excluding things that are off-screen is very easy).
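As a rough illustration of the "selective redrawing" idea mentioned above, here is a very simplified sketch using a clipping region (drawNode and the dirty-rectangle bookkeeping are hypothetical; a real implementation also has to track which region actually changed):

// Redraw only the "dirty" rectangle left behind by a moved node.
function redrawDirty(ctx, dirty, nodes) {
    ctx.save();
    ctx.beginPath();
    ctx.rect(dirty.x, dirty.y, dirty.w, dirty.h);
    ctx.clip();                                         // restrict drawing to the dirty area
    ctx.clearRect(dirty.x, dirty.y, dirty.w, dirty.h);
    nodes.forEach(function (n) { drawNode(ctx, n); });  // hypothetical per-node draw; clipped
    ctx.restore();
}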
Conclusion:
SVG is probably better for applications and apps with few items (less than 1000? Depends really)
Canvas is better for thousands of objects and careful manipulation, but a lot more code (or a library) is needed to get it off the ground.
HTML Divs are clunky and do not scale, making a circle is only possible with rounded corners, making complex shapes is possible but involves hundreds of tiny tiny pixel-wide divs. Madness ensues.
To add to this, I've been doing a diagram application, and initially started out with canvas. The diagram consists of many nodes, and they can get quite big. The user can drag elements in the diagram around.
What I found was that on my Mac, for very large images, SVG is superior. I have a MacBook Pro 2013 13" Retina, and it runs the fiddle below quite well. The image is 6000x6000 pixels, and has 1000 objects. A similar construction in canvas was impossible to animate for me when the user was dragging objects around in the diagram.
On modern displays you also have to account for different resolutions, and here SVG gives you all of this for free.
Fiddle: http://jsfiddle.net/knutsi/PUcr8/16/
Fullscreen: http://jsfiddle.net/knutsi/PUcr8/16/embedded/result/
var wiggle_factor = 0.0;
nodes = [];

// create svg:
var svg = document.createElementNS("http://www.w3.org/2000/svg", "svg");
svg.setAttribute('style', 'border: 1px solid black');
svg.setAttribute('width', '6000');
svg.setAttribute('height', '6000');
svg.setAttributeNS("http://www.w3.org/2000/xmlns/", "xmlns:xlink",
    "http://www.w3.org/1999/xlink");
document.body.appendChild(svg);

function makeNode(wiggle) {
    var node = document.createElementNS("http://www.w3.org/2000/svg", "g");
    var node_x = (Math.random() * 6000);
    var node_y = (Math.random() * 6000);
    node.setAttribute("transform", "translate(" + node_x + ", " + node_y + ")");

    // circle:
    var circ = document.createElementNS("http://www.w3.org/2000/svg", "circle");
    circ.setAttribute("id", "cir");
    circ.setAttribute("cx", 0 + "px");
    circ.setAttribute("cy", 0 + "px");
    circ.setAttribute("r", "100px");
    circ.setAttribute('fill', 'red');
    circ.setAttribute('pointer-events', 'inherit');

    // text:
    var text = document.createElementNS("http://www.w3.org/2000/svg", "text");
    text.textContent = "This is a test! ÅÆØ";

    node.appendChild(circ);
    node.appendChild(text);

    node.x = node_x;
    node.y = node_y;

    if (wiggle)
        nodes.push(node);

    return node;
}

// populate with 1000 nodes:
for (var i = 0; i < 1000; i++) {
    var node = makeNode(true);
    svg.appendChild(node);
}

// make one mapped to mouse:
var bnode = makeNode(false);
svg.appendChild(bnode);

document.body.onmousemove = function(event) {
    bnode.setAttribute("transform", "translate(" +
        (event.clientX + window.pageXOffset) + ", " +
        (event.clientY + window.pageYOffset) + ")");
};

setInterval(function() {
    wiggle_factor += 1 / 60;
    nodes.forEach(function(node) {
        node.setAttribute("transform", "translate("
            + (Math.sin(wiggle_factor) * 200 + node.x)
            + ", "
            + (Math.sin(wiggle_factor) * 200 + node.y)
            + ")");
    });
}, 1000 / 60);
Knowing the differences between SVG and Canvas would be helpful in selecting the right one.
Canvas
Resolution dependent
No support for event handlers
Poor text rendering capabilities
You can save the resulting image as .png or .jpg
Well suited for graphic-intensive games
SVG
Resolution independent
Support for event handlers
Best suited for applications with large rendering areas (Google Maps)
Slow rendering if complex (anything that uses the DOM a lot will be slow)
Not suited for game application
While there is still some truth to most of the answers above, I think they deserve an update:
Over the years the performance of SVG has improved a lot and now there is hardware-accelerated CSS transitions and animations for SVG that do not depend on JavaScript performance at all. Of course JavaScript performance has improved, too and with it the performance of Canvas, but not as much as SVG got improved. Also there is a "new kid" on the block that is available in almost all browsers today and that is WebGL. To use the same words that Simon used above: It beats both Canvas and SVG hands down. This doesn't mean it should be the go-to technology, though, since it's a beast to work with and it is only faster in very specific use-cases.
IMHO for most use-cases today, SVG gives the best performance/usability ratio. Visualizations need to be really complex (with respect to number of elements) and really simple at the same time (per element) so that Canvas and even more so WebGL really shine.
In this answer to a similar question I am providing more details, why I think that the combination of all three technologies sometimes is the best option you have.
I agree with Simon Sarris's conclusions:
I have compared some visualizations in Protovis (SVG) to Processing.js (Canvas) which display >2000 points, and Processing.js is much faster than Protovis.
Handling events with SVG is of course much easier because you can attach them to the objects. In Canvas you have to do it manually (check mouse position, etc.), but for simple interaction it shouldn't be hard.
There is also the dojo.gfx library of the Dojo toolkit. It provides an abstraction layer and you can specify the renderer (SVG, Canvas, Silverlight). That might also be a viable choice, although I don't know how much overhead the additional abstraction layer adds; it does make it easy to code interactions and animations and is renderer-agnostic.
Here are some interesting benchmarks:
http://svbreakaway.info/tp.php#jan21a
http://www.eleqtriq.com/2010/02/canvas-svg-flash/
http://smus.com/canvas-vs-svg-performance/
Just my 2 cents regarding the divs option.
Famous/Infamous and SamsaraJS (and possibly others) use absolutely positioned non-nested divs (with non-trivial HTML/CSS content), combined with matrix2d/matrix3d for positioning and 2D/3D transformations, and achieve a stable 60FPS on moderate mobile hardware, so I'd argue against divs being a slow option.
There are plenty of screen recordings on YouTube and elsewhere of high-performance 2D/3D stuff running in the browser, with everything being a DOM element which you can Inspect Element on, at 60FPS (mixed with WebGL for certain effects, but not for the main part of the rendering).
For your purposes, I recommend using SVG, since you get DOM events, like mouse handling and drag and drop, for free; you don't have to implement your own redraw, and you don't have to keep track of the state of your objects. Use Canvas when you have to do bitmap image manipulation, and use a regular div when you want to manipulate stuff created in HTML. As to performance, you'll find that modern browsers now accelerate all three, but that canvas has received the most attention so far. On the other hand, how well you write your JavaScript is critical to getting the most performance with canvas, so I'd still recommend using SVG.
While googling I found a good explanation and comparison of the usage of SVG and Canvas at http://teropa.info/blog/2016/12/12/graphics-in-angular-2.html
Hope it helps:
SVG, like HTML, uses retained rendering: When we want to draw a rectangle on the screen, we declaratively use a <rect> element in our DOM. The browser will then draw a rectangle, but it will also create an in-memory SVGRectElement object that represents the rectangle. This object is something that sticks around for us to manipulate – it is retained. We can assign different positions and sizes to it over time. We can also attach event listeners to make it interactive.
Canvas uses immediate rendering: When we draw a rectangle, the browser immediately renders a rectangle on the screen, but there is never going to be any "rectangle object" that represents it. There's just a bunch of pixels in the canvas buffer. We can't move the rectangle. We can only draw another rectangle. We can't respond to clicks or other events on the rectangle. We can only respond to events on the whole canvas.
So canvas is a more low-level, restrictive API than SVG. But there's a flipside to that, which is that with canvas you can do more with the same amount of resources. Because the browser does not have to create and maintain the in-memory object graph of all the things we have drawn, it needs less memory and computation resources to draw the same visual scene. If you have a very large and complex visualization to draw, Canvas may be your ticket.
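To make the contrast concrete, here is a minimal sketch of the same rectangle drawn both ways (the "svg-root" and "canvas-root" element ids are made up for the example):

// Retained (SVG): the rectangle exists as a DOM object we can keep manipulating.
var rect = document.createElementNS("http://www.w3.org/2000/svg", "rect");
rect.setAttribute("x", 10);
rect.setAttribute("y", 10);
rect.setAttribute("width", 100);
rect.setAttribute("height", 50);
rect.addEventListener("click", function () { console.log("rect clicked"); });
document.getElementById("svg-root").appendChild(rect);
rect.setAttribute("x", 60);                 // later: just move the same object

// Immediate (canvas): only pixels are left behind; "moving" means erasing and redrawing.
var ctx = document.getElementById("canvas-root").getContext("2d");
ctx.fillRect(10, 10, 100, 50);
ctx.clearRect(10, 10, 100, 50);
ctx.fillRect(60, 10, 100, 50);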
They all have good things and bad things, so let's compare them below.
Canvas will have overall best performance, but only if you use it correctly.
Divs:
Good Performance
You can manipulate it using the DOM
You have access to DOM Events
CSS support
It's hard to make complex shapes
Performance Test Here: https://kajam.hg0428.repl.co/pref/
Canvas:
Better Shape Support
Great Performance
Great Browser Support
No CSS
Performance Test Here: https://js-game-engine.hg0428.repl.co/canvasTest/preform.html
SVGs:
Better Shape Support
Harder to use
Good Browser Support
No CSS, but lots of different SVG things
Bad Performance
I didn't make a performance test for this one yet, but based on other tests, it's not good.
To make Canvas fast:
Canvas performance can vary a lot, so let's review some tips.
Avoid using ctx.rect and ctx.fill; use ctx.fillRect instead. This is the biggest one; it can ruin even the simplest games.
Instead of drawing shapes and then calling fill and stroke on them, use the fill[Shape] methods (like fillRect) instead.
If you don't remember that when using canvas, your games will be very slow. I learned this from experience.
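A minimal sketch of the difference the tips above describe (same output, different call pattern; ctx is assumed to be a 2D canvas context):

// Slower pattern: build a path, then fill it.
ctx.beginPath();
ctx.rect(10, 10, 50, 50);
ctx.fill();

// The pattern recommended above: one direct call, no path involved.
ctx.fillRect(10, 10, 50, 50);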

Merging pixels to minimize paint operations on Html canvas

Hi I'm painting an image onto a canvas (html5) and I want to reduce the number of fillRect calls I'm making in the hope of making the process more efficient.
Note: I'm calling canvas.fillRect to paint individual pixels and pixels can be 1x1 or other size rectangle depending on resolution I'm painting in (So I know that they are not called pixels but my image vocab is limited).
What I would like to do is find and merge individual pixels if they are the same colour. This will reduce the number of calls to fillRect and hopefully be faster than what I currently have.
So lets say I have a bit map like this:
[fff, fff, fff]
[f00, f00, f00]
[00f, 00f, 00f]
Instead of making 9 calls to fillRect I would like to make 3.
So my questions are:
1) What is this process called (so I can do some more intelligent research, as googling 'merging pixels', 'merging rectangles', etc, yields no useful results).
2) Anyone aware of any open source library that implements this (does not have to be javascript)
3) Does anyone think that adding this pre-processing step will actually make the code slower in JS?
Thanks All
Guido Tapia
I would agree with @Jakub that doing this kind of analysis in JavaScript is likely to take a lot longer than the time you save by filling fewer rectangles (usually a very fast operation for a graphics card). That is, unless you have to paint the same set of rectangles many thousands of times.
As for what it's called, the closest thing I can come up with is run-length encoding, which admittedly is one-dimensional rather than two.
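For what it's worth, here is a minimal sketch of row-wise run-length merging, which would turn the 9 fillRect calls in the example above into 3 (assuming bitmap is an array of rows of colour strings like the one shown, and pixelSize is the on-screen size of one cell):

// Merge horizontal runs of identical colours into single fillRect calls.
function paintMerged(ctx, bitmap, pixelSize) {
    for (var y = 0; y < bitmap.length; y++) {
        var row = bitmap[y];
        var runStart = 0;
        for (var x = 1; x <= row.length; x++) {
            if (x === row.length || row[x] !== row[runStart]) {
                ctx.fillStyle = '#' + row[runStart];
                ctx.fillRect(runStart * pixelSize, y * pixelSize,
                             (x - runStart) * pixelSize, pixelSize);
                runStart = x;
            }
        }
    }
}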

Categories

Resources