I'm working on a 2D map where I have to render buildings (polygons with fill and border colors), using the Three.js library. A single GLSL shader program handles the rendering of all buildings as well as the hover/select effects by changing the colors.
The problem is with the borders of the buildings. I use the barycentric coordinates approach to render them: https://stackoverflow.com/a/18068177/3093329
In the simple case of a square building I have to specify which diagonal (internal) edges should be eliminated, and that is easy because they are always the same. But for more complex building shapes I can't define these edges easily, because they are different every time.
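For reference, this is roughly the pattern I mean (a minimal sketch with placeholder names, not my actual code):

```
// Minimal sketch of the barycentric-border idea (attribute/uniform names are placeholders).
// Each triangle corner gets a barycentric coordinate; the fragment shader draws a border
// wherever one of the three components approaches 0, i.e. near a triangle edge.
geometry.setAttribute('barycentric', new THREE.BufferAttribute(barycentricArray, 3));

const material = new THREE.ShaderMaterial({
  uniforms: {
    fillColor:   { value: new THREE.Color(0xcccccc) },
    borderColor: { value: new THREE.Color(0x333333) },
    borderWidth: { value: 0.02 },
  },
  vertexShader: `
    attribute vec3 barycentric;
    varying vec3 vBary;
    void main() {
      vBary = barycentric;
      gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
    }
  `,
  fragmentShader: `
    uniform vec3 fillColor;
    uniform vec3 borderColor;
    uniform float borderWidth;
    varying vec3 vBary;
    void main() {
      float edge = min(min(vBary.x, vBary.y), vBary.z);   // 0 on a triangle edge
      float border = 1.0 - smoothstep(0.0, borderWidth, edge);
      gl_FragColor = vec4(mix(fillColor, borderColor, border), 1.0);
    }
  `,
});

// Normally the three corners get (1,0,0), (0,1,0), (0,0,1). To hide an internal
// (diagonal) edge, the component that would reach 0 along that edge is set to 1 on
// both of its vertices, e.g. hiding edge B-C: A=(1,0,0), B=(1,1,0), C=(1,0,1).
```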
So, is this approach the only way to render polygon borders? How can I determine which edges have to be eliminated in complex cases?
I decided to use SVG to generate the squares and images I need. SVG is flexible and should work well for me.
I can't figure out how to generate squares that fit inside the border of Croatia. The end result I'd like to have can be seen in the image below.
[image: desired result, squares fitted within the border of Croatia]
It's pretty easy to generate squares within a rectangular shape. Since the border of Croatia is not rectangular, the only thing I have in mind is to place them manually, but that is not flexible. What if I want to create bigger or smaller squares, just to test things out, and still fit them within the border?
It's usually a mistake to hand-implement low-level graphics primitives like this. Dealing with wavy or nested borders and edge conditions is bug-attracting code.
I'd suggest creating a small HTML canvas, drawing Croatia on it with path primitives and fill, then reading back its content with getImageData. Each fully black pixel corresponds to a square you want to draw. (Size the canvas to ensure this.)
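A minimal sketch of that idea (croatiaOutline is a placeholder for whatever border data you have, already scaled to the canvas):

```
// One canvas pixel == one candidate square. Draw the country filled in black,
// then read the pixels back and emit a square for every filled pixel.
const cols = 60, rows = 40;          // grid resolution: tune this for square size
const canvas = document.createElement('canvas');
canvas.width = cols;
canvas.height = rows;
const ctx = canvas.getContext('2d');

// croatiaOutline: array of [x, y] points scaled to the canvas size (placeholder).
ctx.fillStyle = '#000';
ctx.beginPath();
croatiaOutline.forEach(([x, y], i) => (i === 0 ? ctx.moveTo(x, y) : ctx.lineTo(x, y)));
ctx.closePath();
ctx.fill();

const { data } = ctx.getImageData(0, 0, cols, rows);
const squares = [];
for (let y = 0; y < rows; y++) {
  for (let x = 0; x < cols; x++) {
    const alpha = data[(y * cols + x) * 4 + 3];   // filled pixels have alpha > 0
    if (alpha > 0) squares.push({ x, y });        // one square per inside pixel
  }
}
// `squares` now holds the grid cells to render (e.g. as SVG <rect> elements).
```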
(Or, if you just want the aesthetic, use an SVG pattern fill. That'd be less work.)
I've made a program that creates parametric shapes (spheres, toruses, and cylinders) and performs matrix transformations on them to animate them in specific ways (e.g. translate(), rotate(), scale()). I now want to animate the shapes using splines, so that they can move along smoother paths than the transformations alone allow. I'm not exactly sure how to approach this with HTML5 and JavaScript, but based on my work up to this point, I would think the approach would be to first calculate all of the points that lie along the spline curve (for x, y, and z), store each point in a data structure, and then iterate over the data structure, passing each point to the translate() method.
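Something like the following is what I have in mind (just a rough sketch of sampling a uniform cubic B-spline; the control points are made up and translate() stands for my existing transformation code):

```
// Sketch: sample a uniform cubic B-spline through 3D control points,
// then feed each sampled point to translate() every animation frame.
const controlPoints = [
  [0, 0, 0], [2, 4, 1], [5, 3, -2], [8, 6, 0], [10, 2, 3],
];

// Evaluate one cubic B-spline segment defined by points p0..p3 at t in [0, 1].
function bsplineSegment(p0, p1, p2, p3, t) {
  const t2 = t * t, t3 = t2 * t;
  const b0 = (1 - 3 * t + 3 * t2 - t3) / 6;
  const b1 = (4 - 6 * t2 + 3 * t3) / 6;
  const b2 = (1 + 3 * t + 3 * t2 - 3 * t3) / 6;
  const b3 = t3 / 6;
  return [0, 1, 2].map(i => b0 * p0[i] + b1 * p1[i] + b2 * p2[i] + b3 * p3[i]);
}

// Sample every segment into a flat list of [x, y, z] points.
function samplePath(points, samplesPerSegment) {
  const path = [];
  for (let s = 0; s + 3 < points.length; s++) {
    for (let k = 0; k < samplesPerSegment; k++) {
      path.push(bsplineSegment(points[s], points[s + 1], points[s + 2], points[s + 3],
                               k / samplesPerSegment));
    }
  }
  return path;
}

const path = samplePath(controlPoints, 30);
let frame = 0;
function animate() {
  const [x, y, z] = path[frame % path.length];
  // translate(x, y, z);  // my existing transformation call would go here
  frame++;
  requestAnimationFrame(animate);
}
animate();
```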
I've looked at some resources on B-splines and NURBS, but sites like these generally seem either very theoretical and hard to translate into code, or they use specific APIs in languages I'm not using. What would be the best implementation (code-wise) of this sort of spline animation in HTML5 and JavaScript?
I'm searching for one or more best practices for the following problem. I'll try to describe it as abstractly as possible, so the solution can be applied to scenarios I have not yet thought of.
Data available: voxels (volumetric pixels) forming a cube, each with x, y, z coordinates and a color attached.
Goal: Use OpenGL to display this data, as you move through it from different sides.
Question: What's the best practice for rendering those voxels, depending on the viewpoint? What kind of object can store the data?
Consider the following:
- The cube of data can be considered as z layers of x/y data. It should be possible to view in between layers; the displayed color should then be interpolated from the closest matching voxels.
- For my application, I have data sets of (x, y, z) = (512, 512, 128) and more, containing medical data (scans of hearts, brains, ...).
What I've tried so far:
Evaluated different frameworks (PIXI.js, three.js) and worked through a few WebGL tutorials.
If something is not yet clear enough, please ask.
There are two major ways to represent / render 3D datasets: rasterization and ray tracing.
One fair rasterization approach is surface reconstruction, using algorithms such as Marching Cubes, Dual Contouring, or Dual Marching Cubes.
Three.js has a Marching Cubes implementation in its examples section. You basically create polygons from your voxels for classical rasterization. It may be faster than it seems: depending on the level of detail you want to reach, the process can be fast enough to run more than 60 times per second for thousands of vertices.
However, unless you simply want to render cubes (which I doubt) rather than a surface, you will also need more information associated with each voxel than just position and color.
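As a rough idea of what that could look like (a minimal sketch only; the MarchingCubes helper ships with the three.js examples and its exact API differs between releases, and densityAt() is a placeholder for however you access your data set):

```
import * as THREE from 'three';
// Shipped with the three.js examples; exact path/API may differ between releases.
import { MarchingCubes } from 'three/examples/jsm/objects/MarchingCubes.js';

const resolution = 64;                       // grid resolution of the scalar field
const material = new THREE.MeshNormalMaterial();
const surface = new MarchingCubes(resolution, material);

// Fill the scalar field from your voxel densities.
// densityAt(x, y, z) is a placeholder returning a value in [0, 1] from your data set.
for (let z = 0; z < resolution; z++) {
  for (let y = 0; y < resolution; y++) {
    for (let x = 0; x < resolution; x++) {
      surface.setCell(x, y, z, densityAt(x, y, z));
    }
  }
}
surface.isolation = 0.5;   // density threshold at which the surface is extracted
surface.update();          // (re)build the triangle mesh; call again when data changes

scene.add(surface);        // assumes an existing THREE.Scene named `scene`
```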
The other way is ray casting. Unless you find a really efficient ray-casting algorithm, you will take a serious performance hit with a naive implementation.
You can cast rays from your camera position through your data structure, stop marching when you reach a surface, and project the intersection point back to screen space with the desired color.
You can write the resulting pixels into a texture and map it onto a full-screen quad with a simple shader.
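A minimal sketch of the fragment-shader side of that idea, assuming WebGL2 with the volume uploaded as a 3D texture and a ray direction passed in from the vertex shader (uniform and varying names are placeholders):

```
// Ray-march a 3D texture on a full-screen quad (WebGL2 / GLSL ES 3.00 sketch).
// Uniform names and the fixed step count are placeholders, not a tuned implementation.
const raymarchFragmentShader = `#version 300 es
  precision highp float;
  precision highp sampler3D;

  uniform sampler3D uVolume;     // voxel data: rgb = color, a = density
  uniform vec3 uRayOrigin;       // camera position in volume space [0,1]^3
  uniform float uThreshold;      // density at which we consider it a surface
  in vec3 vRayDir;               // per-pixel ray direction from the vertex shader
  out vec4 outColor;

  void main() {
    vec3 dir = normalize(vRayDir);
    vec3 pos = uRayOrigin;
    const int MAX_STEPS = 256;
    float stepSize = 1.0 / float(MAX_STEPS);

    for (int i = 0; i < MAX_STEPS; i++) {
      pos += dir * stepSize;
      // Stop once we leave the unit cube that holds the volume.
      if (any(lessThan(pos, vec3(0.0))) || any(greaterThan(pos, vec3(1.0)))) break;
      vec4 voxel = texture(uVolume, pos);   // hardware trilinear interpolation
      if (voxel.a > uThreshold) {           // hit a "surface": shade and stop
        outColor = vec4(voxel.rgb, 1.0);
        return;
      }
    }
    outColor = vec4(0.0);                   // ray left the volume without a hit
  }
`;
```

The hardware trilinear filtering of the 3D texture also gives you the in-between-layer color interpolation mentioned in the question.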
In both cases, you need more information than just colors and cubes. For example, you need at least density values at each corner of your voxels for Marching Cubes, or intersection normals along voxel edges (Hermite data) for Dual Contouring.
The same goes for ray casting: you need at least some density information to figure out where the surface lies.
Another key point is how you organize the data in your structure, especially for out-of-core access.
Let's say I have three flat shapes. For simplicity we'll make them circles:
Is there any way in THREE.js to 'stack' these vertically and create a shape that fills in the space between them? If you imagine those circles stacked vertically, the eventual shape I'd want would be a sort of flat-topped cone.
The process is called extrusion and is shown here: http://stemkoski.github.io/Three.js/Extrusion.html
I've never tried it myself, so I can't help with actual use.
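For the flat-topped-cone case specifically, one alternative worth checking (an untested sketch, and not the extrusion example linked above) is THREE.LatheGeometry, which revolves a radius/height profile around the Y axis; the radii and heights below are made up:

```
// Loft stacked circles into a solid by revolving their radii around the Y axis.
// The radius/height values are made-up examples for three stacked circles.
const levels = [
  { radius: 5, y: 0 },   // bottom circle
  { radius: 3, y: 4 },   // middle circle
  { radius: 2, y: 8 },   // top circle -> flat-topped cone
];

// LatheGeometry revolves a 2D profile (x = radius, y = height) around the Y axis.
const profile = levels.map(l => new THREE.Vector2(l.radius, l.y));
const geometry = new THREE.LatheGeometry(profile, 64);   // 64 radial segments
const mesh = new THREE.Mesh(
  geometry,
  new THREE.MeshNormalMaterial({ side: THREE.DoubleSide })  // open surface, no caps
);
scene.add(mesh);   // assumes an existing THREE.Scene named `scene`
```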
Aloha,
is there any possibility to make the bundles in this visualization look just like the bundles in that visualization?
I have no idea how to achieve this in d3.
EDIT 1:
Obviously I have to write a custom interpolator. How can I extend the bundle interpolator to additionally interpolate between two colors without changing the d3 library?
Unfortunately, neither SVG nor Canvas supports stroking a gradient along a path. My dependency tree visualization is implemented as follows. For each path:
1. Start with a basis spline (see Hierarchical Edge Bundling).
2. Convert to a piecewise cubic Bézier curve (see BasisSpline.segments).
3. Convert to a piecewise linear curve (i.e., a polyline; see Path.flatten).
4. Split into equal-length linear segments (see Path.split).
Once you have these linear segments, you color each segment by computing the appropriate color along the gradient. So, the first segment is drawn green, the last segment is drawn red, and the intermediate segments are drawn with a color somewhere in-between. It might be possible to combine steps 2-4 by sampling the basis spline at equidistant points, but that will require more math.
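A minimal sketch of that coloring step on Canvas, assuming the flattened and split polyline is available as an array of [x, y] points:

```
// Color each linear segment of an already-flattened path along a green-to-red gradient.
// `points` is assumed to be the array of [x, y] vertices produced by the split step.
function drawGradientPath(ctx, points) {
  const color = d3.interpolateRgb('green', 'red');   // 0 -> green, 1 -> red
  for (let i = 0; i < points.length - 1; i++) {
    const t = i / (points.length - 2 || 1);          // position of this segment along the path
    ctx.strokeStyle = color(t);
    ctx.beginPath();
    ctx.moveTo(points[i][0], points[i][1]);
    ctx.lineTo(points[i + 1][0], points[i + 1][1]);
    ctx.stroke();
  }
}
```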
My dependency tree is implemented in Canvas, but you could achieve the same effect in SVG by creating separate path elements (or line elements) for each segment of constant color. You might get slightly better performance by combining segments of the same color.