I've made a program that creates parametric shapes (spheres, toruses, and cylinders) and performs matrix transformations on them to animate them in specific ways (e.g. translate(), rotate(), scale()). I now want to animate the shapes using splines, which would let them move along smoother paths than the transformations alone allow. I'm not exactly sure how to approach this with HTML5 and JavaScript, but based on my work so far, I would think the approach involves first calculating all of the points that lie along the spline curve (for x, y, and z), storing each point in a data structure, and then iterating over the data structure, passing each point to the translate() method.
I've looked at some resources on B-splines and NURBS, but sites like these generally seem either very theoretical and hard to translate into code, or they use specific APIs in languages that I'm not using. What would be the best implementation (code-wise) of this sort of spline animation in HTML5 and JavaScript?
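A minimal sketch of the precompute-then-iterate approach described above, assuming a Catmull-Rom spline (simpler to evaluate than a general NURBS, and it passes through its control points); the shape object and its translate() method stand in for the questioner's own code:

```javascript
// Evaluate one Catmull-Rom segment at t in [0, 1), per component.
function catmullRom(p0, p1, p2, p3, t) {
  const t2 = t * t, t3 = t2 * t;
  const out = {};
  for (const k of ['x', 'y', 'z']) {
    out[k] = 0.5 * (2 * p1[k] +
      (-p0[k] + p2[k]) * t +
      (2 * p0[k] - 5 * p1[k] + 4 * p2[k] - p3[k]) * t2 +
      (-p0[k] + 3 * p1[k] - 3 * p2[k] + p3[k]) * t3);
  }
  return out;
}

// Precompute the whole path once, as the question suggests.
function samplePath(controlPoints, samplesPerSegment) {
  const path = [];
  for (let i = 0; i + 3 < controlPoints.length; i++) {
    for (let s = 0; s < samplesPerSegment; s++) {
      path.push(catmullRom(controlPoints[i], controlPoints[i + 1],
                           controlPoints[i + 2], controlPoints[i + 3],
                           s / samplesPerSegment));
    }
  }
  return path;
}

// Animation loop: step through the precomputed points.
let frame = 0;
function animate(path, shape) {
  const p = path[frame++ % path.length];
  shape.translate(p.x, p.y, p.z); // translate() is the questioner's own method
  requestAnimationFrame(() => animate(path, shape));
}
```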
The problem here is I don't really know the right question to ask, but essentially I want to generate a pattern of n-gons that all fit perfectly together, kinda like the picture.
Is there an algorithm or anything that can do this?
FYI I'm attempting this in JavaScript
The algorithm you want is a Voronoi Diagram. The essential description of the algorithm is as follows:
Generate a list of random points on a plane (or get the points as input from somewhere).
Create a geometric map of n-gons that represent all the space in the plane closest to each point.
The resulting graph will look something like this (stylized and colored):
The look and shape of the n-gons depend on the spacing of the points. You can play with different point distributions or generation methods to get a Voronoi Diagram with particular characteristics. You can also play with the n-gons themselves, for example you can treat the boundaries as fuzzy approximations, blending or leaving gaps between adjacent n-gons:
There are a ton of cool things you can do with a Voronoi Diagram, and pretty much every programming language has libraries that can compute one very quickly. For example, one of the interactive examples for Paper.js is a dynamically generated Voronoi Diagram where one of the points is the location of the cursor. Here's another example where someone uses Voronoi Diagrams as one of the steps for procedural terrain generation. Yet another example is a Voronoi Diagram using the locations of all the airports in the world, which you could use to find the closest airport to any location on the planet.
One such library in JavaScript is d3-voronoi, though like I said, there are quite a few libraries out there, not to mention a gazillion tutorial articles on how to implement it yourself should you decide to go that route.
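For illustration, a minimal sketch of computing the n-gons with d3-voronoi (v1); the point count and extent are arbitrary assumptions:

```javascript
import { voronoi } from "d3-voronoi";

const width = 960, height = 500;

// 1. Generate a list of random points on the plane.
const points = Array.from({ length: 100 },
  () => [Math.random() * width, Math.random() * height]);

// 2. Compute the diagram, clipped to the drawing area.
const layout = voronoi().extent([[0, 0], [width, height]]);
const polygons = layout.polygons(points); // one n-gon (vertex array) per point

// Each polygon is an array of [x, y] vertices you can draw as a path;
// poly.data is the input point that owns the cell.
for (const poly of polygons) {
  console.log(poly.length, "vertices around point", poly.data);
}
```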
I'm searching for one (or more) best practice(s) for the following problem. I'll try to describe it as abstractly as possible, so the solution can be applied to scenarios I have not yet thought of.
Data available: Voxels (Volumetric Pixels), forming a cube, with coordinates x,y,z and a color attached.
Goal: Use OpenGL to display this data, as you move through it from different sides.
Question: What's the best practice for rendering those voxels, depending on the viewpoint? What kind of object or data structure should store the data?
Consider the following:
The cube of data can be considered as z layers of x-y data. It should be possible to view in between layers; the displayed color should then be interpolated from the closest matching voxels.
For my application, I have data sets of (x,y,z) = (512,512,128) and more, containing medical data (scans of hearts, brains, ...).
What I've tried so far:
Evaluated different frameworks (PIXI.js, three.js) and worked through a few WebGL tutorials.
If something is not yet clear enough, please ask.
There are two major ways to represent / render 3D data sets: rasterization and ray tracing.
One fair rasterization approach is surface reconstruction, using algorithms such as Marching Cubes, Dual Contouring or Dual Marching Cubes.
Three.js has a Marching Cubes implementation in its examples section. You basically create polygons from your voxels for classical rasterization. It may be faster than it seems: depending on the level of detail you want to reach, the process can be fast enough to run more than 60 times per second for thousands of vertices.
However, unless you simply want to render cubes (which I doubt) rather than a surface, you will also need more information associated with each voxel than just its position and color.
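A hedged sketch of driving the three.js MarchingCubes helper from a density field; the exact constructor signature varies between three.js releases, and myDensityAt() is a hypothetical accessor into your own data set:

```javascript
import * as THREE from 'three';
import { MarchingCubes } from 'three/examples/jsm/objects/MarchingCubes.js';

const resolution = 64; // the grid is resolution^3 samples
const material = new THREE.MeshNormalMaterial();
// (resolution, material, enableUvs, enableColors, maxPolyCount) in recent releases
const effect = new MarchingCubes(resolution, material, false, false, 100000);

// Fill the scalar field from your voxel densities (normalized 0..1 here);
// the field is laid out x-fastest: index = x + y*res + z*res*res.
for (let z = 0; z < resolution; z++)
  for (let y = 0; y < resolution; y++)
    for (let x = 0; x < resolution; x++)
      effect.field[x + y * resolution + z * resolution * resolution] =
        myDensityAt(x, y, z); // hypothetical accessor into your data set

effect.isolation = 0.5; // density threshold defining the surface
effect.update();        // rebuilds the polygon mesh
scene.add(effect);      // scene is your existing THREE.Scene
```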
The other way is ray casting. Unless you find a really efficient ray-casting algorithm, you will take a serious performance hit with a naive implementation.
You can try to cast rays from your camera position through your data structure, stop marching when you reach a surface, and project the intersection point back to screen space with the desired color.
You may draw the resulting pixels into a texture buffer and map it onto a full-screen quad with a simple shader.
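A naive, unoptimized sketch of that marching loop, including the trilinear density lookup that also covers the "in-between layers" requirement from the question; all names (grid, nx, ny, densityAt) are assumptions, bounds checks are omitted, and a real implementation would live in a fragment shader with a proper traversal such as 3D DDA:

```javascript
// Trilinear interpolation over the eight voxels surrounding point p.
function densityAt(grid, p) {
  const x0 = Math.floor(p.x), y0 = Math.floor(p.y), z0 = Math.floor(p.z);
  const fx = p.x - x0, fy = p.y - y0, fz = p.z - z0;
  const v = (x, y, z) => grid.data[x + y * grid.nx + z * grid.nx * grid.ny];
  const lerp = (a, b, t) => a + (b - a) * t;
  return lerp(
    lerp(lerp(v(x0, y0, z0),     v(x0 + 1, y0, z0),     fx),
         lerp(v(x0, y0 + 1, z0), v(x0 + 1, y0 + 1, z0), fx), fy),
    lerp(lerp(v(x0, y0, z0 + 1),     v(x0 + 1, y0, z0 + 1),     fx),
         lerp(v(x0, y0 + 1, z0 + 1), v(x0 + 1, y0 + 1, z0 + 1), fx), fy),
    fz);
}

// March a ray through the volume until the density crosses the iso level.
function raymarch(origin, dir, grid, isoLevel, maxSteps, stepSize) {
  const p = { x: origin.x, y: origin.y, z: origin.z };
  for (let i = 0; i < maxSteps; i++) {
    p.x += dir.x * stepSize;
    p.y += dir.y * stepSize;
    p.z += dir.z * stepSize;
    if (densityAt(grid, p) >= isoLevel) return p; // surface hit: shade this point
  }
  return null; // ray left the volume without hitting the surface
}
```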
In both cases, you need more information than just colors and cubes. For example, you need at least density values at each corner of your voxels for Marching Cubes, or intersection normals along voxel edges (Hermite data) for Dual Contouring.
The same goes for ray casting: you need at least some density information to figure out where the surface lies.
One of the keys is also how you organize the data in your structure, especially for out-of-core access.
I'm learning WebGL. I've managed to draw stuff and hopefully understood the pipeline. Now, every tutorial I see explains matrices before even loading a mesh. While that may be good for most people, I think I need to concentrate on the process of loading external geometry, maybe through a JSON file. I've read that OpenGL by default displays things orthographically, so I ask: is it possible to display a 3D mesh without any kind of transformation?
Now, every tutorial I see explains matrices before even loading a mesh.
Yes. Because understanding transformations is essential, and you will need to work with them. They're not hard to understand, and the sooner you wrap your head around them, the better. In the case of OpenGL, the model-view transformation part is actually rather simple:
The transformation matrix is just a bunch of vectors (in columns) placed within a "parent" coordinate system. The first three columns define how the X, Y and Z axes of the "embedded" coordinate system are aligned within the "parent"; the W column moves it around. By varying the lengths of the basis vectors you can stretch, i.e. scale, things.
That's it, there's nothing more to it (in the modelview) than that. Learn the rules of matrix-matrix multiplication. Matrix-vector multiplication is just a special case of matrix-matrix multiplication.
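For example, in WebGL such a matrix is just sixteen floats in column-major order, which is what gl.uniformMatrix4fv expects; mvLoc and the gl context are assumed to come from your existing setup:

```javascript
const sx = 1, sy = 1, sz = 1;    // basis vector lengths, i.e. scale
const tx = 0, ty = 0, tz = -2;   // where the object sits in the parent space
const modelView = new Float32Array([
  sx, 0,  0,  0,  // column 0: embedded X axis
  0,  sy, 0,  0,  // column 1: embedded Y axis
  0,  0,  sz, 0,  // column 2: embedded Z axis
  tx, ty, tz, 1,  // column 3: the W column, i.e. the translation
]);
gl.uniformMatrix4fv(mvLoc, false, modelView); // mvLoc: your uniform location
```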
The projection matrix is a little bit trickier, but I suggest you don't bother too much with it; just use GLM, Eigen::3D or linmath.h to build the matrix. The best analogy for the projection matrix is that it is the "lens" of OpenGL, i.e. this is where you apply zoom (a.k.a. field of view), tilt and shift. But the position of the "camera" is defined through the modelview.
is it possible to display a 3d mesh without any kind of transformation?
No, because the mesh coordinates have to be transformed into screen coordinates. However, an identity transform is perfectly possible, which, yes, looks like a dead-on orthographic projection where the coordinate range [-1, 1] in each dimension is mapped to fill the viewport.
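A minimal sketch of that identity case: the vertex shader forwards positions untouched, so whatever geometry you put in [-1, 1] fills the viewport:

```javascript
// GLSL sources held as JavaScript strings, as in any WebGL setup.
const vertexSrc = `
  attribute vec4 aPosition;   // already in clip space, no matrix applied
  void main() {
    gl_Position = aPosition;
  }
`;
const fragmentSrc = `
  precision mediump float;
  void main() {
    gl_FragColor = vec4(1.0, 0.5, 0.2, 1.0); // flat color for illustration
  }
`;
```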
I have a GeoJSON object defining neighborhoods in Los Angeles using lon/lat polygons. In my web application, the client has to process a live stream of spatial events, basically a list of lon/lat coordinates. How can I classify these coordinates into neighborhoods using JavaScript on the client (in the browser)?
I am willing to assume neighborhoods are exclusive, so once a coordinate has been classified as neighborhood X, there is no need to test it against other neighborhoods.
There's a great set of answers here on how to solve the general problem of determining whether a point is contained by a polygon. The two options there that sound the most interesting in your case:
As @Bubbles mentioned, do a bounding box check first. This is very fast and, I believe, should work fine with either projected or unprojected coordinates. If you have SVG paths for the neighborhoods, you can use the native .getBBox() method to quickly get the bounding box.
The next thing I'd try for complex polygons, especially if you can use D3 v3, is rendering to an off-screen canvas and checking pixel color. D3 v3 offers a geo path helper that can produce canvas paths as well as SVG paths, and I suspect that if you can pre-render the neighborhoods, this could be very fast indeed.
Update: I thought this was an interesting problem, so I came up with a generalized raster-based plugin here: http://bl.ocks.org/4246925
This works with D3 and a canvas element to do raster-based geocoding. Once the features are drawn to the canvas, the actual geocoding is O(1), so it should be very fast - a quick in-browser test could geocode 1000 points in ~0.5 sec. If you were using this in practice, you'd need to deal with edge-cases better than I do here.
If you're not working in a browser, you may still be able to do this with node-canvas.
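For reference, a hedged sketch of the raster idea: draw each feature once in a unique color, then geocode any point with a single pixel read. Here features is assumed to be your array of GeoJSON features, and drawFeature() / project() are hypothetical helpers; antialiased polygon edges are exactly the kind of edge case mentioned above:

```javascript
const canvas = document.createElement('canvas');
canvas.width = 1024; canvas.height = 1024;
const ctx = canvas.getContext('2d', { willReadFrequently: true });

features.forEach((feature, i) => {
  // Encode the feature index in the fill color (supports up to 2^24 features).
  ctx.fillStyle = '#' + i.toString(16).padStart(6, '0');
  drawFeature(ctx, feature); // hypothetical: projects and fills the polygon
});

function geocode(lon, lat) {
  const [x, y] = project(lon, lat); // hypothetical lon/lat -> pixel projection
  const [r, g, b, a] = ctx.getImageData(x, y, 1, 1).data;
  if (a === 0) return null;         // outside every neighborhood
  return features[(r << 16) | (g << 8) | b]; // decode the index back
}
```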
I've seen a few libraries out there that do this, but most of them are canvas libraries that may rely on approximations more than you'd want, and might be hard to adapt to a project which has no direct need to rely on them for intersections.
The only other half-decent option I can think of is implementing ray casting in JavaScript. This algorithm isn't technically perfect, since it assumes Euclidean geometry and lat/long coordinates are not Euclidean (they denote points on a curved surface), but for areas as small as a city neighborhood I doubt this will matter.
Here's a Google Maps extension that essentially implements this algorithm. You'd have to adapt it a bit, but the principles are quite similar. The big thing is that you'd have to preprocess your coordinates into paths of just two coordinates, but that should be doable.*
This is by no means cheap: for every point you have to classify, you must test every line segment in the neighborhood polygons. If you expect a user to be reusing the same coordinates over and over between sessions, I'd be tempted to store their neighborhood as part of its data. Otherwise, if you are testing against many, many neighborhoods, there are a few simple time-savers you can implement. For example, you can preprocess each neighborhood's extreme coordinates (its northmost, eastmost, southmost, and westmost points) and use these to define a bounding rectangle. Then you can first screen points for candidate neighborhoods by checking whether they lie inside the rectangle, and only then run the full ray-casting algorithm.
*If you decide to go this route and have any trouble adapting this code, I'd be happy to help
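A minimal sketch of the bounding-box prefilter plus standard even-odd ray casting described above; the { name, bbox, polygon } shape of each neighborhood record is an assumption:

```javascript
// Quick rejection: is the point inside the precomputed extreme-coordinate box?
function inBoundingBox(pt, bbox) {
  return pt[0] >= bbox.minX && pt[0] <= bbox.maxX &&
         pt[1] >= bbox.minY && pt[1] <= bbox.maxY;
}

// Even-odd ray casting: count crossings of a horizontal ray from pt.
// polygon is an array of [lon, lat] vertex pairs.
function pointInPolygon(pt, polygon) {
  let inside = false;
  for (let i = 0, j = polygon.length - 1; i < polygon.length; j = i++) {
    const [xi, yi] = polygon[i], [xj, yj] = polygon[j];
    const crosses = (yi > pt[1]) !== (yj > pt[1]) &&
      pt[0] < ((xj - xi) * (pt[1] - yi)) / (yj - yi) + xi;
    if (crosses) inside = !inside;
  }
  return inside;
}

// Classify a point against a list of { name, bbox, polygon } neighborhoods.
function classify(pt, neighborhoods) {
  for (const n of neighborhoods) {
    if (inBoundingBox(pt, n.bbox) && pointInPolygon(pt, n.polygon)) {
      return n.name; // neighborhoods assumed exclusive, stop at first hit
    }
  }
  return null;
}
```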
I need the ability to efficiently draw a large number of interactive curves (possibly Bezier) in a web app. Imagine a graph-like structure with many draggable elements that are connected with smooth curves. Hence, the curves must adjust in shape and length as single elements are moved.
Which graphics method would best ensure efficiency and interactivity for a large number of curves?
SVG? Canvas? something else?
(And once we know which method is best, is there a good library that would make it easier to implement?)
You might take a look at JSXGraph. I haven't personally used it, but I know some who have, with nice results. It looks like it will use 'SVG, VML or canvas'.
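For a sense of the canvas option without a library, a minimal sketch that redraws cubic Bezier connectors every frame as nodes move; the nodes/edges structure and the 'graph' canvas id are assumptions:

```javascript
const ctx = document.getElementById('graph').getContext('2d');

// nodes: array of { x, y } draggable elements; edges: array of [i, j] pairs.
function drawEdges(nodes, edges) {
  ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
  for (const [a, b] of edges) {
    const p = nodes[a], q = nodes[b];
    const mx = (p.x + q.x) / 2; // control points for a smooth S-curve
    ctx.beginPath();
    ctx.moveTo(p.x, p.y);
    ctx.bezierCurveTo(mx, p.y, mx, q.y, q.x, q.y);
    ctx.stroke();
  }
  requestAnimationFrame(() => drawEdges(nodes, edges)); // curves track drags
}
```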