How to build a noise pollution map with heatmap.js - javascript

I have to build something like a heatmap, but one that represents noise sources in colour, with the gradient fading to a low level across the radius of noise expansion. My problem is that the heatmap.js library keys its colouring off the concentration of points.
I want the colouring to depend on the noise level, not on the concentration of noise sources.
I also want to use it not only over a map, but also over floor plans and images.
I don't know how to do this with heatmap.js. If anyone knows how, or can suggest other libraries...
Thanks in advance!

To show a heatmap based on noise level, you first have to define the lower and upper bounds of your data. For example: 0 = 0 dB (lower bound); 10 = 200 dB (upper bound).
heatmapInstance.setDataMax(10);
// setting the minimum value triggers a complete rerendering of the heatmap
heatmapInstance.setDataMin(0);
Then assign a weight value to each of your data points based on its noise level.
You can assign values near 0 to the sources that are quiet and 7-8 to the noisy ones. This should produce a nice heatmap of noise intensity.
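As a minimal sketch of that weighting, assuming a noiseSources array of {x, y, db} objects in pixel space and a made-up dB-to-weight mapping (both are assumptions for illustration; addData() is heatmap.js's API for appending points):
// hypothetical mapping from decibels onto the 0-10 scale defined above
function dbToWeight(db) {
  return db / 20;
}
noiseSources.forEach(function(source) {
  heatmapInstance.addData({
    x: source.x, // pixel position of the noise source
    y: source.y,
    value: dbToWeight(source.db) // weight by loudness, not density
  });
});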
I also want to use it not only over a map, but also over floor plans and
images. I don't know how to do this with heatmap.js
AFAIK, heatmap.js uses pixel positions to plot the data, so it won't be a problem to show a heatmap on anything on your screen, be it an image, a map or any other canvas.
If you are talking about the Google Maps heatmap API, I think that one only applies to maps.
Check the heatmap.js documentation; it is pretty straightforward.
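For instance, here is a minimal sketch assuming the standalone heatmap.js build (which exposes the h337 global) and a hypothetical div wrapping a floor-plan image:
// the container can be any positioned DOM element, e.g. a div
// wrapping an <img> of a floor plan; the element id is an assumption
var heatmapInstance = h337.create({
  container: document.getElementById('floorplan-wrapper')
});
heatmapInstance.setData({
  min: 0,
  max: 10,
  data: [
    { x: 120, y: 80, value: 9 }, // a loud source
    { x: 300, y: 210, value: 2 } // a quiet source
  ]
});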

Related

d3 Best practices to visualize data?

I am working on a project where data points are visualized in a scatterplot using d3. Since it is a web application, the plot region is limited and a lot of points overlap. In total there are 20k points. I allow users to zoom in on regions with a brush (and its extent), but even when zoomed in there is still a huge overlap of points. An example of such a situation:
What are good approaches to still visualize the underlying points and enhance their perception? I was thinking about using transparency, but I do not know if that would do it.
It might be worth noting that all points represent genes, so clustering them may not be very logical in terms of representation.
I would suggest trying d3's fisheye plug-in. It lets you distort the scale with the mouse, magnifying the area under the cursor.
You can see an example of it used with a scatter/bubble chart lower on the page here: http://bost.ocks.org/mike/fisheye/
In addition, if you have overlap I would lower the opacity, so you can see which points have lots of overlap vs. points that don't.
Here's an example graph with very clustered points that I created using both fisheye and opacity: http://crclayton.com/projects/fisheye/
It also allows you to hover over individual points to see a tooltip containing more details about them.
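A rough sketch of combining the two ideas (d3 v3 plus the fisheye plugin; the genes array and the xScale/yScale scales are assumptions):
var fisheye = d3.fisheye.circular().radius(120).distortion(3);
var points = svg.selectAll('circle')
    .data(genes)
  .enter().append('circle')
    .attr('cx', function(d) { return xScale(d.x); })
    .attr('cy', function(d) { return yScale(d.y); })
    .attr('r', 3)
    .style('fill-opacity', 0.3); // overlap shows up as darker regions
svg.on('mousemove', function() {
  fisheye.focus(d3.mouse(this)); // distort around the cursor
  points
      .each(function(d) { d.fisheye = fisheye({ x: xScale(d.x), y: yScale(d.y) }); })
      .attr('cx', function(d) { return d.fisheye.x; })
      .attr('cy', function(d) { return d.fisheye.y; })
      .attr('r', function(d) { return d.fisheye.z * 3; });
});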
If the number of data points is of interest, you could cluster the points (on either the client or the server side). You typically see this pattern when maps have too many markers (for example, a cluster map).
Edit:
I am still not quite sure if I'm heading in the right direction. To visualize the quantity of points you could use a 3D visualization. Here is an idea taken from the Software Cities project:
You could basically render the position of the points on the plane and create vertical cylinders - the more points on the same spot, the higher the cylinder.

Google Heat Map intensity based on value

I want to create a heat map using the Google Maps API (JavaScript). The problem is that heatmaps use colors to represent the density of points. But in my case the density of points is irrelevant: I want the color at each location to be determined only by the value at that location. My values are latencies (ms).
Example: If a value is low it will appear on the map with green color. If a value is high it will appear with red color.
I've searched the API documentation, but the only thing I found was using "weighted locations". This is not a suitable solution, because the density of points is still taken into account.

maximum number of svg elements for the browser in a map

I am creating a map with leaflet and d3. A lot of circles will be plotted on the map. In terms of browser compatibility, there is an expected limit to how many svg elements the browser can render. In terms of user experience, however, I would prefer that the user see as many elements on the map as possible (otherwise the user might need to zoom in and out constantly, and would need to wait for the ajax to return data). There is some optimisation I need to consider (user waiting time vs. server query load vs. what the browser can handle).
See the plot: there is currently a limit on the number of points the server returns, so only a portion of the map is filled.
The browser cannot handle a fully filled map here and the user would need to wait too long for the server response as well.
I suppose the problem that I am faced with needs to be solved by answering two questions:
Is there a standard for how many simple svg shapes (circles) the average browser can handle on a map?
What is the best technique to show as many shapes on the map as possible?
I'm considering the following points, but I am unsure if they will help:
use squares instead of circles
use the leaflet API instead of the D3
Speaking in general terms, neither of the points you're considering will help. In both cases, the amount of processing to be done / information to display by the browser will be approximately the same.
Regarding your first question, not that I'm aware of. There are huge variations between browsers and platforms (especially if you consider mobile devices as well) and an average would be almost meaningless. Furthermore, this is changing constantly. I've found that up to about 1000 simple shapes are usually not a problem.
To show as many shapes as possible on the map, I would pre-render them into bitmap tiles and then use either the leaflet API or something like d3.geo.tile (example here) to overlay it on the actual map. This way you can easily scale to millions of points.
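As a minimal sketch of the overlay step with Leaflet (the tile URL is hypothetical and assumes the points have been pre-rendered into tiles server-side):
// overlay pre-rendered point tiles on an existing Leaflet map
var pointTiles = L.tileLayer('https://example.com/point-tiles/{z}/{x}/{y}.png', {
  maxZoom: 18,
  opacity: 0.8
});
pointTiles.addTo(map);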
Although you can only render ~2-5k SVG elements before you start to see noticeable slowdown (depending on size, use of gradient fills, etc.), you can store and manipulate much larger datasets client-side. You can often handle tens or hundreds of thousands of data points efficiently in SVG: the trick is to be very selective about what you actually render, and to use techniques like debouncing to redraw only when necessary.
(For very large datasets: yes, you'll need to either aggregate/subsample points or pre-render.)
With this in mind, one technique I've used for d3 maps in particular is to use d3.geom.quadtree() to dynamically cull points as the user pans/zooms. More specifically, I avoid drawing points that are either
outside the current map bounding box (since these aren't visible at all), or
too close to other points (since these add visual clutter and are hard to interact with anyway).
In JS-ish pseudocode, this would look roughly like:
function getIndicesToDraw(data, r, bbox) {
  var indicesToDraw = [];
  // d3.geom.quadtree() returns a factory (d3 v3); calling it on an
  // empty array yields a quadtree root that points can be add()ed to.
  // The bbox field names are illustrative; set bounds in pixel space.
  var Q = d3.geom.quadtree()
      .extent([[bbox.x0, bbox.y0], [bbox.x1, bbox.y1]])([]);
  for (var i = 0; i < data.length; i++) {
    var d = data[i];
    var p = getPointForDatum(d); // project datum to pixel space
    if (isInsideBoundingBox(bbox, p) && !hasPointWithinRadius(Q, r, p)) {
      Q.add(p);
      indicesToDraw.push(i);
    }
  }
  return indicesToDraw;
}
function redraw(svg, data, r, bbox) {
  var indicesToDraw = getIndicesToDraw(data, r, bbox);
  var points = svg.selectAll('.data-point')
      .data(indicesToDraw, function(i) { return i; });
  points.enter().append('circle') // draw new points
      .attr('class', 'data-point');
  points.exit().remove();
  // update positions of points (or SVG transforms, etc.)
}
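The pseudocode assumes a hasPointWithinRadius helper. One possible sketch of it, using the d3 v3 quadtree's visit() traversal to prune subtrees that cannot contain a point within r of p:
function hasPointWithinRadius(Q, r, p) {
  var found = false;
  Q.visit(function(node, x1, y1, x2, y2) {
    if (node.point) { // leaf node holding an already-accepted point
      var dx = node.point[0] - p[0],
          dy = node.point[1] - p[1];
      if (dx * dx + dy * dy < r * r) found = true;
    }
    // returning true stops descent into cells entirely outside the radius
    return found ||
        x1 > p[0] + r || x2 < p[0] - r ||
        y1 > p[1] + r || y2 < p[1] - r;
  });
  return found;
}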
This is a question of cartography as much as it is of technology. Just because you can put thousands of points on a map doesn't mean you should. You should ask yourself if your user needs to see as many points as possible at once, and will understand the data better as a result of it. Will seeing this many points confuse and overwhelm the user, or will it help them accomplish their desired task? Is there any way you could filter the data so that all points do not need to be drawn at once?
The most common solution to this problem is to use clusters, usually with something like Leaflet MarkerCluster. In the past, while using D3 and Leaflet, I created a sort of heat map by laying out an arbitrary grid, assigning each point to a bin, and applying a colour ramp to the grid. However, getting back to your original concern, it is rather computationally intensive.
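A rough sketch of that grid-binning idea (d3 v3; the cell size, the points array and the colour range are assumptions):
var cell = 20; // grid cell size in pixels
var bins = {};
points.forEach(function(p) {
  var key = Math.floor(p.x / cell) + ',' + Math.floor(p.y / cell);
  bins[key] = (bins[key] || 0) + 1; // count points per cell
});
var color = d3.scale.linear()
    .domain([0, d3.max(d3.values(bins))])
    .range(['#fee8c8', '#e34a33']);
// each non-empty cell can then be drawn as one rect filled with
// color(bins[key]), instead of thousands of individual circles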
Depending on which platforms you want to support, be aware that phones and tablets do not always handle SVGs very well, especially while drawing thousands of features.
Another place for potential performance gains is in the delivery of the point geometry. I'm not sure how you are currently loading it, but using a spatially indexed PostGIS table and selecting by bounding box is often quite quick, and because you are drawing points rather than polygons, you could even get away with loading them into the browser via CSV, which is substantially smaller than GeoJSON or TopoJSON.
I would load all the data at once but only draw the circles in the viewport that are large enough. On zoom or pan, remove the circles that should no longer be shown and check whether previously hidden circles should be added.
You could also use canvas with three.js or WebGL, which can combine a map, 10k animated 3D models and some native SVG elements with interactive real-time animation in a single scene; I have tested this approach for fun. Other paths worth exploring are GLSL shaders, OpenGL + WASM, and so on.

Classify lon/lat coordinate into geojson polygon using Javascript

I have a geojson object defining Neighborhoods in Los Angeles using lon/lat polygons. In my web application, the client has to process a live stream of spatial events, basically a list of lon/lat coordinates. How can I classify these coordinates into neighborhoods using Javascript on the client (in the browser)?
I am willing to assume neighborhoods are exclusive. So once a coordinate has been classified as neighborhood X, there is no need to test it against other neighborhoods.
There's a great set of answers here on how to solve the general problem of determining whether a point is contained by a polygon. The two options there that sound the most interesting in your case:
As #Bubbles mentioned, do a bounding box check first. This is very fast, and I believe it should work fine with either projected or unprojected coordinates. If you have SVG paths for the neighborhoods, you can use the native .getBBox() method to quickly get the bounding box.
The next thing I'd try for complex polygons, especially if you can use D3 v3, is rendering to an off-screen canvas and checking pixel colour. D3 v3 offers a geo path helper that can produce canvas paths as well as SVG paths, and I suspect that if you can pre-render the neighborhoods this could be very fast indeed.
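A tiny sketch of the bounding-box pre-check (the bbox field names and the neighborhoods array are assumptions):
// cheap rectangle test, run before any expensive polygon test
function boundsContain(bbox, lon, lat) {
  return lon >= bbox.west && lon <= bbox.east &&
         lat >= bbox.south && lat <= bbox.north;
}
// keep only neighborhoods whose bounding box contains the point
var candidates = neighborhoods.filter(function(n) {
  return boundsContain(n.bbox, point[0], point[1]);
});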
Update: I thought this was an interesting problem, so I came up with a generalized raster-based plugin here: http://bl.ocks.org/4246925
This works with D3 and a canvas element to do raster-based geocoding. Once the features are drawn to the canvas, the actual geocoding is O(1), so it should be very fast - a quick in-browser test could geocode 1000 points in ~0.5 sec. If you were using this in practice, you'd need to deal with edge-cases better than I do here.
If you're not working in a browser, you may still be able to do this with node-canvas.
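The core of the raster trick can be sketched like this, assuming each neighborhood was drawn to an off-screen canvas in a unique colour and a colorToNeighborhood lookup table was built while drawing (both assumptions):
var ctx = offscreenCanvas.getContext('2d');
function geocode(x, y) {
  // read the single pixel under the (already projected) point
  var rgba = ctx.getImageData(x, y, 1, 1).data;
  var key = (rgba[0] << 16) | (rgba[1] << 8) | rgba[2];
  return colorToNeighborhood[key]; // undefined if outside all polygons
}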
I've seen a few libraries out there that do this, but most of them are canvas libraries that may rely on approximations more than you'd want, and might be hard to adapt to a project which has no direct need to rely on them for intersections.
The only other half-decent option I can think of is implementing ray casting in javascript. This algorithm isn't technically perfect since it's for Euclidean geometry and lat/long coordinates are not (as they denote points on a curved surface), but for areas as small as a neighbourhood in a city I doubt this will matter.
Here's a google maps extension that essentially does this algorithm. You'd have to adapt it a bit, but the principles are quite similar. The big thing is you'd have to preprocess your coordinates into paths of just two coordinates, but that should be doable.*
This is by no means cheap: for every point you have to classify, you must test every line segment in the neighborhood polygons. If you expect a user to be reusing the same coordinates over and over between sessions, I'd be tempted to store their neighborhood as part of the coordinate's data. Otherwise, if you are testing against many, many neighborhoods, there are a few simple time savers you can implement. For example, you can preprocess every neighborhood's extreme coordinates (its northmost, eastmost, southmost and westmost points) and use these to define a rectangle that encloses the neighborhood. Then you can first screen points for candidate neighborhoods by checking whether they lie inside the rectangle, and only then run the full ray casting algorithm.
*If you decide to go this route and have any trouble adapting this code, I'd be happy to help
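For reference, the even-odd ray-casting test itself is short. A sketch, assuming each neighborhood polygon is a GeoJSON-style ring of [lon, lat] pairs:
// classic even-odd ray casting: count crossings of a horizontal ray
function pointInPolygon(point, ring) {
  var x = point[0], y = point[1], inside = false;
  for (var i = 0, j = ring.length - 1; i < ring.length; j = i++) {
    var xi = ring[i][0], yi = ring[i][1];
    var xj = ring[j][0], yj = ring[j][1];
    var intersects = ((yi > y) !== (yj > y)) &&
        (x < (xj - xi) * (y - yi) / (yj - yi) + xi);
    if (intersects) inside = !inside;
  }
  return inside;
}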

OpenLayers as a large (changing and growing) image viewer

Basically, what I'm trying to do is use a map viewer as an image viewer with the same sort of efficient tile-loading, zoom/pan awesomeness without having to build it myself.
Specifically, I need an image viewer that will allow the image to grow and change while not altering the coordinates of any older (unchanged) tiles. This means that the center point (0,0), where the image started growing from, must always remain (0,0). So I'm looking for a library that will let me use a very basic Cartesian coordinate system (no map projection!) and that will ask for tiles infinitely in all directions with no repetition (as opposed to how map libraries ignore the y-axis above and below the map, but repeat the x-axis).
There's another catch. I need zoom level 0 to be zoomed in all the way. Since the image is constantly growing, there's no way to tell what the max zoom level will be, and the coordinates need to be based on the base image layer tiles so that every tile in zoom level z contains 2^z base layer tiles.
I am wondering if this is possible with OpenLayers and how to do it. If it's not, any suggestions of other (open-source javascript) libraries that can do this would be very appreciated! I've tried playing around with Polymaps, but the documentation is lacking too much for me to be able to tell if it will work. So far no luck.
Please let me know if none of this made sense, and I'll try to include some images or better explanations. Thanks!
I ended up using Polymaps after all, since I like it more than OpenLayers, because it's faster and has much smoother scrolling and panning. I wasn't able to do exactly what I wanted, but what I did was close enough.
I ended up writing my own layer (based on the po.image() layer), which disabled the infinite horizontal looping of the map. I then wrote my own version of po.url() that modified the tile requests going to the server in two ways: zooming was reversed (I arbitrarily picked a 'max' zoom of 20, then subtracted the requested zoom level from 20), and the x and y coordinates were converted from the standard row/column coordinates Polymaps uses into Cartesian coordinates, based on the zoom level and a map centered at (0,0).
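A sketch of the kind of request rewrite described, assuming Polymaps' convention that a layer's url can be a function of the tile; the tile server URL and the coordinate arithmetic are illustrative, not the author's actual code:
var MAX_ZOOM = 20; // the arbitrary 'max' zoom mentioned above
imageLayer.url(function(tile) {
  var z = MAX_ZOOM - tile.zoom; // reverse the zoom direction
  // shift row/column so the Cartesian origin (0,0) stays fixed
  var half = Math.pow(2, tile.zoom - 1);
  var x = tile.column - half;
  var y = half - 1 - tile.row;
  return 'https://example.com/tiles/' + z + '/' + x + '/' + y + '.png';
});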
If anyone is interested in the code I can post it here. Let me know!
EDIT: I've posted the code on github at https://github.com/camupod/polymaps
The relevant files are src/Backwards* and examples/backwards (though the example doesn't actually work, you might be able to glean some information from it about how it should work).
