Maximum number of SVG elements for the browser in a map - JavaScript

I am creating a map with Leaflet and D3. A large number of circles will be plotted on the map. In terms of browser capability, there is presumably a limit to how many SVG elements the browser can render. In terms of user experience, however, I would prefer that the user can see as many elements on the map as possible (otherwise the user might need to zoom in and out constantly and wait for the AJAX calls to return data). There is some optimisation I need to consider (user waiting time vs. server query load vs. what the browser can handle).
See the plot: right now there is a limit on the number of points the server returns, so only a portion of the map is filled.
The browser cannot handle a fully filled map here, and the user would also have to wait too long for the server response.
I suppose the problem that I am faced with needs to be solved by answering two questions:
Is there a standard for how many simple SVG shapes (circles) the average browser can handle on a map?
What is the best technique to show as many shapes on the map as possible?
I'm considering the following options, but I am unsure whether they will help:
use squares instead of circles
use the Leaflet API instead of D3

Speaking in general terms, neither of the options you're considering will help: in both cases, the amount of processing to be done and information to be displayed by the browser will be approximately the same.
Regarding your first question, not that I'm aware of. There are huge variations between browsers and platforms (especially if you consider mobile devices as well) and an average would be almost meaningless. Furthermore, this is changing constantly. I've found that up to about 1000 simple shapes are usually not a problem.
To show as many shapes as possible on the map, I would pre-render them into bitmap tiles and then use either the Leaflet API or something like d3.geo.tile (example here) to overlay them on the actual map. This way you can easily scale to millions of points.
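For illustration, a minimal sketch of the tile-overlay idea using the Leaflet API (the /tiles/{z}/{x}/{y}.png endpoint for the pre-rendered point tiles is a hypothetical name):

// Base map plus an overlay of pre-rendered point tiles.
var map = L.map('map').setView([51.5, -0.09], 5);

// Background tiles (OpenStreetMap, purely as an example).
L.tileLayer('https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
  attribution: '&copy; OpenStreetMap contributors'
}).addTo(map);

// Pre-rendered point tiles generated server-side; the URL template is an assumption.
L.tileLayer('/tiles/{z}/{x}/{y}.png', { opacity: 0.8 }).addTo(map);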

Although you can only render roughly 2-5k SVG elements before you start to see noticeable slowdown (depending on size, use of gradient fills, etc.), you can store and manipulate much larger datasets client-side. You can often work with tens or hundreds of thousands of data points efficiently: the trick is to be very selective about what you actually render, and to use techniques like debouncing to redraw only when necessary.
(For very large datasets: yes, you'll need to either aggregate/subsample points or pre-render.)
With this in mind, one technique I've used for d3 maps in particular is to use d3.geom.quadtree() to dynamically cull points as the user pans/zooms. More specifically, I avoid drawing points that are either
outside the current map bounding box (since these aren't visible at all), or
too close to other points (since these add visual clutter and are hard to interact with anyways).
In JS-ish pseudocode, this would look roughly like:
// getPointForDatum, isInsideBoundingBox and hasPointWithinRadius are placeholders
// for your own projection and proximity helpers; bbox is assumed to be
// {left, top, right, bottom} in pixel space.
function getIndicesToDraw(data, r, bbox) {
  var indicesToDraw = [];
  // d3 v3 API: d3.geom.quadtree() returns a factory; set its bounds in pixel
  // space and call it with an empty array to get an empty tree.
  var Q = d3.geom.quadtree()
    .extent([[bbox.left, bbox.top], [bbox.right, bbox.bottom]])([]);

  for (var i = 0; i < data.length; i++) {
    var d = data[i];
    var p = getPointForDatum(d); // project the datum to pixel coordinates [x, y]
    // Keep the point only if it is visible and not crowded by an already-kept point.
    if (isInsideBoundingBox(bbox, p) && !hasPointWithinRadius(Q, r, p)) {
      Q.add(p);
      indicesToDraw.push(i);
    }
  }
  return indicesToDraw;
}

function redraw(svg, data, r, bbox) {
  var indicesToDraw = getIndicesToDraw(data, r, bbox);
  var points = svg.selectAll('.data-point')
    .data(indicesToDraw, function(i) { return i; }); // key by data index

  // Draw new points for the enter selection.
  points.enter().append('circle')
    .attr('class', 'data-point')
    .attr('r', 3); // fixed circle radius, for illustration

  // Remove points that should no longer be drawn.
  points.exit().remove();

  // Update positions of the remaining points (or use SVG transforms, etc.).
  points
    .attr('cx', function(i) { return getPointForDatum(data[i])[0]; })
    .attr('cy', function(i) { return getPointForDatum(data[i])[1]; });
}

This is a question of cartography as much as it is of technology. Just because you can put thousands of points on a map doesn't mean you should. Ask yourself whether your users need to see this many points at once, and whether they will understand the data better as a result. Will seeing this many points confuse and overwhelm them, or will it help them accomplish their task? Is there any way you could filter the data so that not all points need to be drawn at once?
The most common solution to this problem is to use clusters, usually with something like Leaflet MarkerCluster. In the past while using D3 and Leaflet I have created a sort of heat map by creating an arbitrary grid, assigning each point to a bin, and applying a color ramp to the grid. However, getting back to your original concern, it is rather computationally intensive.
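For reference, a minimal sketch of the clustering approach with the Leaflet.markercluster plugin (assuming points is an array of {lat, lng} objects and map is an existing Leaflet map):

// Requires leaflet.markercluster to be loaded alongside Leaflet.
var clusterGroup = L.markerClusterGroup();
points.forEach(function(p) {
  clusterGroup.addLayer(L.marker([p.lat, p.lng]));
});
map.addLayer(clusterGroup);   // nearby markers are collapsed into clusters automatically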
Depending on which platforms you want to support, be aware that phones and tablets do not always handle SVGs very well, especially while drawing thousands of features.
Another place for a potential performance gain is the delivery of the point geometry. I'm not sure how you are currently loading it, but querying a spatially indexed PostGIS table by bounding box is often quite quick. Because you are drawing points rather than polygons, you could even get away with loading them into the browser as CSV, which is substantially smaller than GeoJSON or TopoJSON.
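A minimal sketch of the CSV delivery idea with d3 v3's d3.csv (the /points.csv endpoint, the lat/lon column names, and drawPoints are assumptions):

// Request only the points inside the current viewport, as a small CSV payload.
d3.csv('/points.csv?bbox=' + map.getBounds().toBBoxString(), function(error, rows) {
  if (error) return console.error(error);
  var points = rows.map(function(row) {
    return { lat: +row.lat, lon: +row.lon };   // coerce CSV strings to numbers
  });
  drawPoints(points);   // hand the parsed points to whatever draws them
});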

I would load all the data at once but only draw the circles in the viewport that are large enough to see. On zoom or pan, remove the circles that should no longer be shown and check whether previously hidden circles should be added.
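A minimal sketch of that idea with the Leaflet API (allData, toLatLng, and drawCircles are placeholders for your own data and drawing code):

// Redraw only the points that fall inside the current viewport.
function redrawVisible() {
  var bounds = map.getBounds();
  var visible = allData.filter(function(d) {
    return bounds.contains(toLatLng(d));
  });
  drawCircles(visible);
}

map.on('moveend zoomend', redrawVisible);   // re-cull after every pan or zoom
redrawVisible();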

You could also use canvas + three.js or WebGL, which can combine a map, thousands of animated 3D models, some native SVG elements, and interactive real-time animation in one scene with very good results. I have tested this approach for fun. (Sorry for my bad English.) Other paths worth looking at, I think, are GLSL shaders, OpenGL + WASM, and so on.

Related

Get the best Map View based on all polygons in Bing Maps JavaScript

I have a Bing Maps map where I draw some polygons.
I would like to get the best view (zoom / center) to show all the polygons.
I've tried the following, which achieves what I want, but it takes a lot of time.
var poly = new Microsoft.Maps.Polygon(allPoints);
var boundaries = Microsoft.Maps.LocationRect.fromLocations(poly.getLocations());
Thanks for the help.
If you have a lot of points (tens of thousands) it will take time. Another way to do this is to loop through your array of location objects and get the min and max latitude and longitude values and then use those to create a location rect. However I doubt that would be much faster as that is basically what the fromLocations method does behind the scenes.
Also, rather than using getLocations, just use the allPoints value you have. Might save a bit of processing.
All that said, unless you are working with really large polygons (thousands of points), I can't see this being slow. Panning and zooming may be slow, but would likely be because the browser has to constantly update the position of the points for the polygon as the map moves. The more points your polygon has, the slower the browser will become.
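For what it's worth, a sketch of the min/max loop described above (assuming allPoints is an array of Microsoft.Maps.Location objects; whether it actually beats fromLocations is worth benchmarking):

// Compute the bounding box in a single pass over the locations.
var minLat = Infinity, maxLat = -Infinity, minLon = Infinity, maxLon = -Infinity;
for (var i = 0; i < allPoints.length; i++) {
  var loc = allPoints[i];
  if (loc.latitude < minLat) minLat = loc.latitude;
  if (loc.latitude > maxLat) maxLat = loc.latitude;
  if (loc.longitude < minLon) minLon = loc.longitude;
  if (loc.longitude > maxLon) maxLon = loc.longitude;
}

// Build a LocationRect from the two opposite corners and apply it.
var boundaries = Microsoft.Maps.LocationRect.fromLocations([
  new Microsoft.Maps.Location(minLat, minLon),
  new Microsoft.Maps.Location(maxLat, maxLon)
]);
map.setView({ bounds: boundaries });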

D3 map SVG performance

I've been struggling for the past few days to optimize the performance of a D3 map, especially on mobile. I am using SVG transforms for zooming and panning, but I have made the following observation: most of the overhead comes from the path strokes used to fake spacing between countries.
I have uploaded a pair of sample maps for comparison:
http://www.nicksotiriadis.gr/d3/d3-map-1.html
http://www.nicksotiriadis.gr/d3/d3-map-2.html
The only difference between the two maps is the stroke along the country paths, and the difference in performance is noticeable even on desktop devices - and more obvious on mobile. Removing the path strokes makes mobile performance a breeze.
I tried all kinds of svg stroke shape-rendering options without significant results.
Now to the question. Is there any way to shave a thin border off each country to fake the spacing between countries instead of using a stroke?
If anyone else has a different suggestion I'd love to hear it!
Update: Attaching explanation photo.
What I have drawn is this. The red arrow points to the country joints. Adding a stroke in the same color as the background to the country paths (here depicted in dark grey) creates the sense that the countries are separated - however, this adds a serious performance hit on mobile devices. What I am looking for is a way to re-shape the country paths so that their borderlines are where the blue arrow points, but without having a stroke.
Update 2: People seem not to be able to understand what I am looking for, so I am updating this in order to make the question even clearer.
Let's assume that the original country paths are shown on the left of this image. What I am looking for is a way to somehow 'contract' the paths inwards so that the newly created paths, shown in red, leave enough empty space between them to 'emulate' a stroke.
Doing this would remove the need for an extra layer of strokes, and thus gain performance by using only paths instead of paths + strokes.
Update 3: Hello again, I seem to have found a half-solution to my problem. I managed to convert the TopoJSON to a shapefile and edit the shapefile the way I want (using a program named OpenJUMP), but the conversion strips all the TopoJSON properties I need - id, country name - so I can't convert back to the original TopoJSON.
Does anyone have any suggestions?
D3 has a tool just for that: topojson.mesh() (see documentation). The idea is that since most countries share borders, there's no need to draw the shared borders twice. If you draw each border only once, you get as much as an 80% reduction in the number of strokes you have to draw. The mesh method does the JavaScript processing to turn a bunch of closed shapes (countries) into a multiline path containing just the borders between them. You can then draw that multiline path as a single <path> element positioned on top of the fills.
The mesh looks like this.
Here's another example.
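A minimal sketch of the mesh approach (assuming a world TopoJSON with a countries object, and an existing d3.geo.path generator named path):

// Fill each country individually, with no per-country stroke.
svg.selectAll('.country')
  .data(topojson.feature(world, world.objects.countries).features)
  .enter().append('path')
  .attr('class', 'country')
  .attr('d', path);

// Draw every shared border exactly once, as a single path on top of the fills.
svg.append('path')
  .datum(topojson.mesh(world, world.objects.countries, function(a, b) { return a !== b; }))
  .attr('fill', 'none')
  .attr('stroke', '#2b2b2b')
  .attr('d', path);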
Finally found the answer. This radically improves d3 map performance!
1) I took my TopoJSON file and extracted it to a shapefile using mapshaper.org. This gives 3 files: .shp, .shx, .dbf. From what I can tell, the .dbf file holds all the TopoJSON properties/attributes.
2) Opened the .shp shape file to OpenJUMP http://www.openjump.org/ - Which automatically imports the .dbf file as well.
3) I selected the countries layer and went to Tools > Analysis > Buffer.
4) Checked the 'Update geometry in source layer' box so that the geometry is edited without losing the rest of the attributes/properties, and added a negative Fixed Distance of -0.1. This shrank all the country geometries to the result I was looking for.
5) Saved the dataset as an ESRI Shapefile.
6) Re-imported BOTH the .shp and .dbf files produced by OpenJUMP back into mapshaper.org - careful, BOTH files.
7) Exported as TopoJSON. It contains the new shapes and all the original properties/attributes!
The following link has been updated with the newly produced map; we get a 'bordered' look without the need for strokes.
http://v7.nicksotiriadis.gr/d3/d3-map-1.html
Compare the performance to this link that has the original shapes + stroke. Please try on mobile to see the performance difference!
http://v7.nicksotiriadis.gr/d3/d3-map-2.html
Also, here is the updated world map TopoJSON file in case someone wants some extra performance! :D
http://v7.nicksotiriadis.gr/d3/js/world-topo-bordered.json
There might be a couple of reasons for this behaviour (on my computer, both examples work fine at the same speed):
Browser
Which browser do you use? In Chrome, your examples work perfectly.
TopoJson
See the previous answer.
Animation
You are launching the animation while the page is loading. You might want to add a delay on the transition (.delay(ms) in D3). There is also queue() (https://github.com/mbostock/queue), which loads the data before launching a function.
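A minimal sketch of deferring the animation until the data has loaded with queue() (world.json, drawMap, and startAnimation are placeholder names):

// Load the data first, then launch the animation from the callback.
queue()
  .defer(d3.json, 'world.json')
  .await(function(error, world) {
    if (error) return console.error(error);
    drawMap(world);
    startAnimation();   // only start animating once everything is ready
  });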
--
If none of this solves your problem, and you want it to work well on mobile, you can try mixing D3 and Leaflet (a map library well suited to mobile), which performs well because it loads tiles.
One example:
http://bl.ocks.org/zross/6a31f4ef9e778d94c204
Hope it helps

How can I increase map rendering performance in HTML Canvas?

We are developing a web-based game. The map has a fixed size, is procedurally generated, and consists of many polygons.
At the moment, all these polygons are stored in one array, and each one is checked to decide whether it should be drawn or not. This costs a lot of performance. What is the best rendering / buffering solution for big maps?
What I've tried:
Quadtrees. Problem: performance is still not great because there are so many polygons.
Drawing sections of the map to offscreen canvases. A test run: http://norizon.ch/repo/buffered-map-rendering/ Problem: the browser crashes when trying to buffer that much data, and such big images (maybe 2000x2000) still seem to perform badly on a canvas.
(posting comments as an answer for convenience)
One idea could be, when the user is translating the map, to re-use the part that will still be in view, and to redraw only the stripe(s) that are no longer correct.
I believe (can you confirm?) that the most costly operation is the drawing itself, not finding which polygons to draw.
If so, you should use your quadtree to find the polygons that are within those stripes. Note that, given JavaScript's overhead, a simple 2D bucket that contains the polygons within a given (x, y) tile might be faster to use (if the cost of the quadtree is too high).
I'm not sure about the precise way you should do this; I'm afraid you'll have to experiment / benchmark, and maybe pick a preferred browser.
Problems:
• Copying a canvas onto itself can be very slow depending on devices/browsers (it might in fact require two copies).
• Using an offscreen canvas can be very slow depending on devices/browsers (it might not use hardware acceleration when off-screen).
If you are drawing things on top of the map, you can either use a secondary canvas on top of the map canvas, or you'll be forced to use an off-screen canvas that you'll copy on each frame.
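A minimal sketch of the pan-reuse idea described above (dx/dy are the pan offsets in pixels, and drawStripe is a placeholder for whatever redraws a newly exposed region):

// Shift the existing pixels by the pan offset, then redraw only what was exposed.
function panBy(ctx, dx, dy) {
  var canvas = ctx.canvas;
  // Copy the canvas onto itself, offset by the pan amount
  // (as noted above, this can be slow on some devices/browsers).
  ctx.drawImage(canvas, dx, dy);

  // Redraw the horizontal stripe exposed by the vertical pan...
  if (dy > 0) drawStripe(ctx, 0, 0, canvas.width, dy);
  else if (dy < 0) drawStripe(ctx, 0, canvas.height + dy, canvas.width, -dy);

  // ...and the vertical stripe exposed by the horizontal pan.
  if (dx > 0) drawStripe(ctx, 0, 0, dx, canvas.height);
  else if (dx < 0) drawStripe(ctx, canvas.width + dx, 0, -dx, canvas.height);
}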
I have tried a lot of things and this solution turned out to be the best for us.
Because our map has a fixed size, it is calculated server-side.
One big image atlas with all the required tiles is loaded at the beginning of the game. For each image in the atlas, a separate canvas is created. The client loads the whole map data into one two-dimensional array whose values determine which tile has to be drawn. Maybe it would be even better if the map were drawn on a separate canvas so that only the stripes had to be repainted, but the performance is really good, so we won't change that.
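A minimal sketch of the atlas approach (tileMap as the two-dimensional array of tile indices; TILE_SIZE and the atlas layout are assumptions):

var TILE_SIZE = 32;       // tile edge length in pixels (assumed)
var ATLAS_COLUMNS = 16;   // tiles per row in the atlas image (assumed)

// Draw the map straight from the pre-loaded atlas image.
function drawMap(ctx, atlas, tileMap) {
  for (var row = 0; row < tileMap.length; row++) {
    for (var col = 0; col < tileMap[row].length; col++) {
      var tileIndex = tileMap[row][col];
      var sx = (tileIndex % ATLAS_COLUMNS) * TILE_SIZE;
      var sy = Math.floor(tileIndex / ATLAS_COLUMNS) * TILE_SIZE;
      ctx.drawImage(atlas, sx, sy, TILE_SIZE, TILE_SIZE,
                    col * TILE_SIZE, row * TILE_SIZE, TILE_SIZE, TILE_SIZE);
    }
  }
}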
Three conclusions:
Images are fast. getImageData is not!
JavaScript does not yet have great support for multithreading, so we don't calculate the map client-side at game time.
Quadtrees are fast. Arrays are faster.

Real time data plotting performance HTML5 canvas vs Dom appending

I have some realtime data: 3 integers that change over time. These integers come from my accelerometer readings: x, y, and z. I was thinking of a way to plot this data so it is easier to see trends in the changes.
There are many chart libraries out there, such as flot. What I want to do is represent the integers as bar heights. There are two methods I can use to display the bar graph:
Use divs for the bars which will be appended to a parent div.
Use an HTML5 canvas to draw the bars that will represent the integers.
My question is: which of these two methods will work better from the performance perspective, assuming the data update frequency is 50 msec (i.e., data will change every 50 milliseconds).
To a certain extent this depends on a few factors:
The number of items you've got that you need to update (are you talking 10s, 100s, 1000s or more)
The frequency at which you want to update is going to be a big factor; you're limited by the speed at which the browser can execute the JavaScript.
The browser - some browsers can render content at significantly different speeds.
A good example to compare is looking at some of the D3 performance comparisons. Because the core library is doing the same work underneath, this is comparing the render speed between SVG (DOM based) and Canvas rendering. Take a look at these two swarm examples first:
Canvas Swarm
SVG Swarm
If you start scaling up the frequency and the number of items, Canvas is going to outperform SVG, because it's a much lighter workload for the browser: it doesn't have to manipulate the DOM in the same way, check which CSS rules apply, etc. However, SVG is more powerful precisely because of these features.
There's a really good breakdown of performance in this post on the D3 google group which compares browser render times. To pull out some numbers as an example (although these figures are a little old):
Chrome was 74x slower rendering SVG than Canvas.
Firefox was 150x slower rendering SVG than Canvas.
Opera was lightning fast at SVG but 71x slower rendering Canvas than SVG.
Here is a fiddle showing both a flot bar chart and a DOM / div approach. You can run them separately or together and compare the look and the CPU load in your task manager.
This is based on my answer to this question (which gave the Windows 7 Task Manager style to the chart.)
The update loops for both methods:
function update() {
  GetData();
  $.plot($("#placeholder"), dataset, options);   // redraw the flot chart
  timerFlot = setTimeout(update, updateInterval);
}

function updateDom() {
  GetData();
  $('#domtest').html('');                        // clear and rebuild the bars
  for (var i = 0; i < data.length && i < 100; i++) {
    $('#domtest').append('<div class="bar" style="height: ' + (3 * data[i][3]).toString() +
      'px; top: ' + (300 - 3 * data[i][4]).toString() + 'px;"></div>');
  }
  timerDom = setTimeout(updateDom, updateInterval);
}
I would recommend using d3 and a complementary charting library to plot your bar chart. You can then use SVG, which is more performant than appending DOM nodes. I would check out c3.
I would second Sean's answer about D3. Coupling that with either jQuery or D3's native JSON handling, you can pull data quite rapidly. For instance, I've written code to pull LDAP query data with records numbering in the thousands with turnaround times of a second or so. But that's server side.
On the rendering side, see these examples comparing canvas and SVG rendering performance. They're not benchmarks per se, but you can see pretty clearly that performance is not an issue.
bl.ocks.org/mbostock/1276463 - bl.ocks.org/mbostock/1062544 - bl.ocks.org/mbostock/9539958
But to your question about charts, D3 very elegantly handles data updates with data joins. Here is a nice article about updating a data series over time. bost.ocks.org/mike/path/
Hope that helps.
I would suggest going through the Canvas approach.
Changing DOM elements causes a DOM repaint to redraw the element on the screen. In your case there are 3 elements that will be changing frequently.
Internally, the animation works like this:
An element's height changes from x to x+3.
First the browser changes the height of the element by +1, repaints, and adjusts all the children within it. Then it increases it by +1 again, repaints, and adjusts the children, and so on. Every time the element changes, a lot happens inside the browser.
Coming to canvas: it's like a painting board; no matter how many update cycles you run, the canvas will not make any iterative layout or repaint calls of its own - you simply redraw it.
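A minimal sketch of redrawing three accelerometer bars on a canvas every 50 ms (the canvas id, the scale factor, and the readings object are assumptions):

var canvas = document.getElementById('chart');   // assumes <canvas id="chart">
var ctx = canvas.getContext('2d');
var readings = { x: 0, y: 0, z: 0 };             // updated elsewhere by the sensor code

function drawBars() {
  ctx.clearRect(0, 0, canvas.width, canvas.height);   // wipe and redraw the whole chart
  var values = [readings.x, readings.y, readings.z];
  for (var i = 0; i < values.length; i++) {
    var barHeight = values[i] * 3;                    // arbitrary scale factor
    ctx.fillRect(20 + i * 40, canvas.height - barHeight, 30, barHeight);
  }
}

setInterval(drawBars, 50);   // matches the 50 ms update frequency in the question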
Hope this provides some info.

Classify lon/lat coordinate into geojson polygon using Javascript

I have a geojson object defining neighborhoods in Los Angeles using lon/lat polygons. In my web application, the client has to process a live stream of spatial events, basically a list of lon/lat coordinates. How can I classify these coordinates into neighborhoods using JavaScript on the client (in the browser)?
I am willing to assume neighborhoods are exclusive, so once a coordinate has been classified as neighborhood X, there is no need to test it against other neighborhoods.
There's a great set of answers here on how to solve the general problem of determining whether a point is contained by a polygon. The two options there that sound the most interesting in your case:
As #Bubbles mentioned, do a bounding box check first. This is very fast and, I believe, should work fine with either projected or unprojected coordinates. If you have SVG paths for the neighborhoods, you can use the native .getBBox() method to quickly get the bounding box.
The next thing I'd try for complex polygons, especially if you can use D3 v3, is rendering to an off-screen canvas and checking pixel color. D3 v3 offers a geo path helper that can produce canvas paths as well as SVG paths, and I suspect that if you can pre-render the neighborhoods, this could be very fast indeed.
Update: I thought this was an interesting problem, so I came up with a generalized raster-based plugin here: http://bl.ocks.org/4246925
This works with D3 and a canvas element to do raster-based geocoding. Once the features are drawn to the canvas, the actual geocoding is O(1), so it should be very fast - a quick in-browser test could geocode 1000 points in ~0.5 sec. If you were using this in practice, you'd need to deal with edge-cases better than I do here.
If you're not working in a browser, you may still be able to do this with node-canvas.
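A rough sketch of the raster-geocoding idea (not the plugin's actual code): draw each neighborhood in a unique color on a hidden canvas, then classify a point by reading the pixel under it. projection and neighborhoods (a GeoJSON FeatureCollection) are assumed to exist:

// Pre-render: one unique fill color per neighborhood feature.
var canvas = document.createElement('canvas');
canvas.width = 1024;
canvas.height = 1024;
var ctx = canvas.getContext('2d');
var geoPath = d3.geo.path().projection(projection).context(ctx);

neighborhoods.features.forEach(function(feature, i) {
  // Encode the feature index in the red/green channels.
  ctx.fillStyle = 'rgb(' + ((i >> 8) & 255) + ',' + (i & 255) + ',0)';
  ctx.beginPath();
  geoPath(feature);
  ctx.fill();
});

// Classify: project the lon/lat, read one pixel, decode the index - O(1) per point.
function classify(lonLat) {
  var p = projection(lonLat);
  var px = ctx.getImageData(Math.round(p[0]), Math.round(p[1]), 1, 1).data;
  if (px[3] === 0) return null;   // transparent pixel: outside every neighborhood
  return neighborhoods.features[(px[0] << 8) + px[1]];
}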
I've seen a few libraries out there that do this, but most of them are canvas libraries that may rely on approximations more than you'd want, and might be hard to adapt to a project which has no direct need to rely on them for intersections.
The only other half-decent option I can think of is implementing ray casting in JavaScript. This algorithm isn't technically perfect, since it's for Euclidean geometry and lat/long coordinates are not Euclidean (they denote points on a curved surface), but for areas as small as a neighbourhood in a city I doubt this will matter.
Here's a Google Maps extension that essentially implements this algorithm. You'd have to adapt it a bit, but the principles are quite similar. The big thing is that you'd have to preprocess your coordinates into paths of just two coordinates, but that should be doable.*
This is by no means cheap - for every point you have to classify, you must test every line segment in the neighborhood polygons. If you expect a user to reuse the same coordinates over and over between sessions, I'd be tempted to store the neighborhood as part of that coordinate's data. Otherwise, if you are testing against many, many neighborhoods, there are a few simple timesavers you can implement. For example, you can preprocess every neighborhood's extreme coordinates (get the northmost, eastmost, southmost, and westmost points) and use these to define a rectangle that inscribes the neighborhood. Then you can first screen candidate neighborhoods by checking whether the point lies inside the rectangle, and only then run the full ray-casting algorithm.
*If you decide to go this route and have any trouble adapting this code, I'd be happy to help
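A minimal sketch of the ray-casting test plus the bounding-box pre-check described above (each neighborhood is assumed to be { name, bbox, ring }, where ring is an array of [lon, lat] pairs; holes and multi-part polygons are ignored for brevity):

// Standard even-odd ray casting: count crossings of a horizontal ray from the point.
function pointInPolygon(point, ring) {
  var x = point[0], y = point[1], inside = false;
  for (var i = 0, j = ring.length - 1; i < ring.length; j = i++) {
    var xi = ring[i][0], yi = ring[i][1];
    var xj = ring[j][0], yj = ring[j][1];
    var crosses = ((yi > y) !== (yj > y)) &&
                  (x < (xj - xi) * (y - yi) / (yj - yi) + xi);
    if (crosses) inside = !inside;
  }
  return inside;
}

// Cheap pre-check against the precomputed extreme coordinates.
function pointInBBox(point, bbox) {   // bbox: { west, south, east, north }
  return point[0] >= bbox.west && point[0] <= bbox.east &&
         point[1] >= bbox.south && point[1] <= bbox.north;
}

function classify(point, neighborhoods) {
  for (var i = 0; i < neighborhoods.length; i++) {
    var n = neighborhoods[i];
    if (pointInBBox(point, n.bbox) && pointInPolygon(point, n.ring)) return n;
  }
  return null;   // not inside any neighborhood
}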
