THREE.js: Shadows cast by distant light sources

I'm trying to create a scene with objects on a solar system scale.
Some examples of what I want are:
- When a small object (on the order of 10 m in diameter) crosses behind a large, Earth-sized object that blocks the light source (THREE.DirectionalLight), the smaller object is shadowed by the larger one.
- When a moon crosses between the light source and a planet, a shadow is cast on the planet.
- All objects must cast and receive shadows (except stars, which only cast).
I know that I should "pancake" my shadow camera as much as possible, but with the variable scale I'm working at, this becomes very difficult to do.
What are some techniques or tricks that can be used when creating a shadowed scene on such a variable scale?
Is there some sort of logarithmic depth buffer for shadows (like there is for rendering)?
Or could I somehow leverage camera/trackball control events to dynamically adjust the frustum of the shadow camera? (As the camera moves farther away, use a coarser buffer and expand the shadow camera frustum.) Something like the sketch below is what I have in mind.
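For concreteness, a minimal sketch of that idea, written against the modern light.shadow API and assuming OrbitControls (the scaling heuristic is arbitrary):

    // Resize the shadow camera's orthographic frustum each frame based on
    // how far the view camera sits from the point it orbits around.
    function updateShadowFrustum(light, camera, controls) {
      const dist = camera.position.distanceTo(controls.target);
      const halfSize = Math.max(50, dist * 0.5); // arbitrary scaling heuristic
      const shadowCam = light.shadow.camera;
      shadowCam.left = -halfSize;
      shadowCam.right = halfSize;
      shadowCam.top = halfSize;
      shadowCam.bottom = -halfSize;
      shadowCam.updateProjectionMatrix(); // required after changing the frustum
    }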
Check out this jsfiddle for a related (but different) example of my problem: two small objects, close together, with a very distant light source.
http://jsfiddle.net/mtcq070x/6/
Notice how the shadows flicker on and off, and there's shadowing on the front of the sphere (which there shouldn't be).
EDIT: I changed the jsfiddle to use a proper bias, and the ball now receives and casts shadows. Notice how increasing the shadow darkness worsens the self-shadowing. Lowering the shadow darkness isn't an option, because then the shadow cast onto the plane disappears.
Also, here's what the project I'm working on looks like (a to-scale solar system)

What you are seeing in the jsfiddle is what's known as shadow acne. This can be fixed by using a small, positive, non-zero shadow bias. Setting light.shadowBias = 0.01; seems to solve the problem in your fiddle: http://jsfiddle.net/mtcq070x/4/. Also see https://msdn.microsoft.com/en-us/library/windows/desktop/ee416324(v=vs.85).aspx
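For reference, here is the relevant setup in one place, written against the modern light.shadow.* API (older releases, like the one in the fiddle, used flat properties such as light.shadowBias); the frustum numbers are illustrative:

    // Directional light with a tightly "pancaked" shadow camera.
    const light = new THREE.DirectionalLight(0xffffff, 1);
    light.position.set(0, 0, 1000);
    light.castShadow = true;

    // Keep the orthographic shadow frustum as tight as possible around the
    // shadowed objects to preserve depth-buffer precision.
    light.shadow.camera.near = 900;
    light.shadow.camera.far = 1100;
    light.shadow.camera.left = -50;
    light.shadow.camera.right = 50;
    light.shadow.camera.top = 50;
    light.shadow.camera.bottom = -50;
    light.shadow.mapSize.set(2048, 2048);

    // A small positive bias pushes the depth comparison off the surface,
    // which is what removes the acne in the fiddle.
    light.shadow.bias = 0.01;

    scene.add(light);
    renderer.shadowMap.enabled = true; // shadow maps are off by default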

Related

Three.js - Shadow.bias fixes shadow static, but moves the shadows. How to fix?

I have a scene with a light casting shadows. It does well, except for the static in the shadows. Like so:
When I add a simple light.shadow.bias = -0.005;:
It fixes the issue, but causes another. As you can see from the building's shadow, the shadows are completely out of place! Is there a way to fix this, or an alternative method of getting rid of the shadow static?
You should be able to mitigate the issue by using the newer normalBias property of the shadow. From the documentation:
Defines how much the position used to query the shadow map is offset along the object normal. The default is 0. Increasing this value can be used to reduce shadow acne especially in large scenes where light shines onto geometry at a shallow angle. The cost is that shadows may appear distorted.
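A minimal sketch of the fix (the values are illustrative and need tuning per scene):

    // Offsetting the shadow-map lookup along the surface normal removes the
    // acne without sliding the whole shadow the way a depth bias does.
    light.shadow.bias = 0;          // keep the depth bias at (or near) zero
    light.shadow.normalBias = 0.05; // increase until the acne disappears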

How to find the faces that appear on the screen?

In my application, a user is browsing a scene, and I'd like to be able to find the faces that appear on the screen, meaning the faces the user can actually see (so I'd like to exclude the faces that are outside the camera's frustum, as well as the faces hidden behind other faces).
An idea I had was to use the Raycaster class to shoot rays through each pixel of the screen, but I'm afraid the performance would be poor (it doesn't need to be realtime, but I'd like it not to be really slow).
I know that there is a z-buffer that records which faces are shown because they are not hidden, and I wanted to know if there is an easy way, with Three.js, to use the z-buffer to find those faces.
Thank you!
My final solution is the following:
- I use three.js server-side to render my model (people here and there explain how to do it).
- I use the color attribute of Face3 to set a specific color for each face. Each face has a number (its index in the .obj file), and this number is encoded as the Face3 color.
- I use only ambient light.
- I do the rendering.
- The render is then, in effect, a set of pixels: if a certain color appears in the rendering, it means that the face corresponding to that color appears on the screen. A sketch of this "color picking" idea follows.
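For anyone on a current three.js release, here is a minimal sketch of the same technique using BufferGeometry vertex colors (Face3 belongs to the legacy Geometry class, which has since been removed); encodeFaceColors and visibleFaces are my own names, and antialiasing must be disabled for the picking pass or blended edge pixels will produce bogus IDs:

    // Encode each triangle's index into its vertex colors (24-bit RGB).
    // ID 0 is reserved for the background, so face i gets ID i + 1.
    function encodeFaceColors(geometry) {
      const nonIndexed = geometry.toNonIndexed();
      const vertexCount = nonIndexed.attributes.position.count;
      const colors = new Float32Array(vertexCount * 3);
      for (let face = 0; face < vertexCount / 3; face++) {
        const id = face + 1;
        const r = ((id >> 16) & 0xff) / 255;
        const g = ((id >> 8) & 0xff) / 255;
        const b = (id & 0xff) / 255;
        for (let v = 0; v < 3; v++) {
          colors.set([r, g, b], (face * 3 + v) * 3);
        }
      }
      nonIndexed.setAttribute('color', new THREE.BufferAttribute(colors, 3));
      return nonIndexed;
    }

    // Render unlit so the IDs arrive unshaded.
    const pickingMaterial = new THREE.MeshBasicMaterial({ vertexColors: true });

    // Read the frame back and collect every face ID that survived the
    // depth test, i.e. every face actually visible on screen.
    function visibleFaces(renderer, scene, camera, width, height) {
      const target = new THREE.WebGLRenderTarget(width, height);
      renderer.setRenderTarget(target);
      renderer.render(scene, camera);
      const pixels = new Uint8Array(width * height * 4);
      renderer.readRenderTargetPixels(target, 0, 0, width, height, pixels);
      renderer.setRenderTarget(null);
      const faces = new Set();
      for (let i = 0; i < pixels.length; i += 4) {
        const id = (pixels[i] << 16) | (pixels[i + 1] << 8) | pixels[i + 2];
        if (id !== 0) faces.add(id - 1); // skip the background
      }
      return faces;
    }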

Best practice: Rendering volume (voxel) based data in WebGL

I'm searching for one (or more) best practice(s) for the following problem. I'll try to describe it as abstractly as possible, so the solution can be applied to scenarios I have not yet thought of.
Data available: Voxels (Volumetric Pixels), forming a cube, with coordinates x,y,z and a color attached.
Goal: Use OpenGL to display this data, as you move through it from different sides.
Question: What's the best practice for rendering those voxels, depending on the viewpoint? In what kind of object can the data be stored?
Consider the following:
The cube of data can be considered as z layers of x-y data. It should be possible to view in between layers; the displayed color should then be interpolated from the closest matching voxels.
For my application, I have data sets of (x,y,z) = (512,512,128) and larger, containing medical data (scans of hearts, brains, ...).
What I've tried so far:
Evaluated different frameworks (PIXI.js, three.js) and worked through a few WebGL tutorials.
If something is not yet clear enough, please ask.
There are two major ways to represent and render 3D data sets: rasterization and ray tracing.
One fair rasterization approach is surface reconstruction, using algorithms such as Marching Cubes, Dual Contouring, or Dual Marching Cubes.
Three.js has a Marching Cubes implementation in its examples section (see the sketch below). You basically create polygons from your voxels for classical rasterization. It may be faster than it seems: depending on the level of detail you want to reach, the process can be fast enough to run more than 60 times per second for thousands of vertices.
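As a starting point, a minimal sketch using the MarchingCubes object from the three.js examples (the import path and constructor signature are from recent releases and may differ in older ones; sampleDensity is a hypothetical stand-in for your data):

    import * as THREE from 'three';
    import { MarchingCubes } from 'three/examples/jsm/objects/MarchingCubes.js';

    // A resolution³ scalar field, polygonized on the fly.
    const resolution = 64;
    const material = new THREE.MeshStandardMaterial({ color: 0xcccccc });
    const effect = new MarchingCubes(resolution, material, false, false, 100000);

    // Fill the density field from the voxel data; values above `isolation`
    // count as "inside" the extracted surface.
    for (let z = 0; z < resolution; z++) {
      for (let y = 0; y < resolution; y++) {
        for (let x = 0; x < resolution; x++) {
          const i = x + y * resolution + z * resolution * resolution;
          effect.field[i] = sampleDensity(x, y, z); // hypothetical data source
        }
      }
    }

    effect.isolation = 0.5; // the isolevel at which the surface is extracted
    effect.update();        // polygonize the field
    scene.add(effect);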
That said, unless you simply want to represent cubes (which I doubt) rather than a surface, you will also need more information associated with each of your voxels than just position and color.
The other way is ray casting. Unless you find a really efficient ray-casting algorithm, a naive implementation will suffer a serious performance hit.
You can try to cast rays from your camera position through your data structure, stop marching when you reach a surface, and project the intersection point back to screen space with the desired color.
You may draw the resulting pixels into a texture buffer and map it onto a full-screen quad with a simple shader. A schematic CPU version of the marching loop follows.
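This sketch is for illustration only (real-time use pushes it into a fragment shader or adds an acceleration structure); origin and direction are THREE.Vector3, and grid, dims, and threshold are placeholders for your data:

    // March a ray through a dense density grid in half-voxel steps and
    // return the first point whose density crosses the threshold.
    function marchRay(origin, direction, grid, dims, threshold, maxSteps = 2048) {
      const p = origin.clone();
      const step = direction.clone().normalize().multiplyScalar(0.5);
      for (let i = 0; i < maxSteps; i++) {
        const x = Math.floor(p.x), y = Math.floor(p.y), z = Math.floor(p.z);
        if (x >= 0 && y >= 0 && z >= 0 && x < dims.x && y < dims.y && z < dims.z) {
          const density = grid[x + y * dims.x + z * dims.x * dims.y];
          if (density > threshold) return p; // hit a surface
        }
        p.add(step);
      }
      return null; // the ray left the volume without hitting anything
    }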
In both cases, you need more information than just colors and cubes. For example, you need at least density values at the corners of your voxels for Marching Cubes, or intersection normals along voxel edges (Hermite data) for Dual Contouring.
The same goes for ray casting: you need at least some density information to figure out where the surface lies.
One of the keys is also how you organize the data in your structure, especially for out-of-core access.

Rounded Plane In THREE JS

THREE.js can often seem angular and straight-edged. I haven't used it for very long and am thus struggling to understand how to curve the world, so to speak. I would imagine a renderer or something must be changed, but the idea is to take a 2D map and turn it into a simple three-lane running game. However, if you look at the picture below, from another, similar game: how can I achieve the fish-eye effect?
I would do that kind of effect on a per-vertex basis, depending on the distance from the camera.
Also, maybe a slightly tweaked perspective camera with a bigger vertical FOV would boost the effect of the "curviness".
It's just a simple distortion effect that is simulated in some way; the world probably isn't really curved. Hope this helps. A sketch of the per-vertex idea follows.
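For example, a minimal sketch of the per-vertex bend via onBeforeCompile (the curvature constant is arbitrary, and the shader-chunk name is from current three.js):

    // Bend every vertex downward with the square of its distance from the
    // camera by patching the material's built-in vertex shader.
    material.onBeforeCompile = (shader) => {
      shader.vertexShader = shader.vertexShader.replace(
        '#include <project_vertex>',
        `
        vec4 curveWorldPos = modelMatrix * vec4( transformed, 1.0 );
        float curveDist = distance( curveWorldPos.xyz, cameraPosition );
        transformed.y -= 0.002 * curveDist * curveDist; // arbitrary curvature
        #include <project_vertex>
        `
      );
    };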
I'm sure there are many possible different approaches... Here's one that creates nice barrel distortion effect.
You can do something like that by rendering normal wide angle camera to a texture, then project it to a lens-shaped plane (a sphere even), then the actual on-screen render is from a camera pointing to that.
I don't have the code available at the moment, but I should be able to dig it up in a few days if you're interested. Or you can just adapt one of the three.js examples: three.js includes some postprocessing examples where the scene is first rendered into a texture, and that texture is applied to a quad that is then photographed with an orthographic camera. You can modify such an example by changing the orthographic camera to a perspective one, then distorting or reshaping the quad into something more appropriately lens-shaped.
Taken to extremes, this approach can produce some pixelization / blocky artifacts.
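A minimal sketch of that two-pass setup (the lens geometry, camera placement, and sizes are all illustrative, not the original code):

    // Pass 1: render the real scene into a texture. Pass 2: map that texture
    // onto a curved "lens" surface and photograph it with a second camera.
    const target = new THREE.WebGLRenderTarget(window.innerWidth, window.innerHeight);

    // A shallow sphere section stands in for the lens shape.
    const lens = new THREE.Mesh(
      new THREE.SphereGeometry(10, 64, 64, 0, Math.PI * 2, 0, 0.6),
      new THREE.MeshBasicMaterial({ map: target.texture })
    );
    const lensScene = new THREE.Scene();
    lensScene.add(lens);

    const lensCamera = new THREE.PerspectiveCamera(
      60, window.innerWidth / window.innerHeight, 0.1, 100);
    lensCamera.position.set(0, 15, 0);
    lensCamera.lookAt(lens.position);

    function render() {
      requestAnimationFrame(render);
      renderer.setRenderTarget(target); // first pass: normal wide-angle view
      renderer.render(scene, camera);
      renderer.setRenderTarget(null);   // second pass: photograph the lens
      renderer.render(lensScene, lensCamera);
    }
    render();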

three.js outer glow for sphere object?

I'm building some sort of planetary system in three.js, and I've spent a couple of hours looking for a decent solution to get an outer glow on one planet, a sphere object with a texture.
I came across this example http://stemkoski.github.io/Three.js/Selective-Glow.html which kind of does the trick, but the thing is, this form of glow also affects the main 3D object, resulting in a color change (as seen there).
Another nice glow example can be found here http://bkcore.com/blog/3d/webgl-three-js-animated-selective-glow.html but again, it glows over the entire region, not only the "outer" part.
I've been reading a discussion thread about the "overrideMaterial" property on GitHub, but it seems experimental, unused, and undocumented... I'm not even sure it could solve my problem.
Please share your ideas, thanks!
I've worked a bit on separating out the part of the WebGL Globe code (linked to above) that produces the atmospheric effect. A preliminary working version is here:
http://stemkoski.github.io/Three.js/Atmosphere.html
To the best of my understanding, there are a few interesting things going on in the original code to create the atmospheric effect. First, the glowing texture is placed on another sphere -- let's call it the Atmo Sphere :) -- that surrounds the sphere with the image of earth on it. The Atmo Sphere's material is flipped so that only its back side renders, not its front side; thus it does not obscure the earth sphere even though it surrounds it. Second, the gradient lighting effect is achieved by using a fragment shader rather than a texture. However, the atmosphere will change its appearance as you zoom in and out; this was not evident in the WebGL Globe experiment because zooming was disabled there. A shader sketch of this setup appears below.
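For the record, a minimal sketch of that back-side glow sphere (my own shader in the same spirit, not the original code; planetRadius is assumed, and the hard-coded view direction in the vertex shader is exactly what causes the pan/zoom issues mentioned below):

    // Glow shell slightly larger than the planet; rendered back side only,
    // additively blended, with intensity derived from the facing ratio.
    const glowMaterial = new THREE.ShaderMaterial({
      uniforms: { glowColor: { value: new THREE.Color(0x4f9fff) } },
      vertexShader: `
        varying float vIntensity;
        void main() {
          vec3 vNormal = normalize( normalMatrix * normal );
          // Back faces point away from the camera, so the dot product is
          // most negative behind the planet (occluded by it) and near zero
          // at the silhouette, where the halo fades out.
          vIntensity = pow( 0.5 - dot( vNormal, vec3( 0.0, 0.0, 1.0 ) ), 4.0 );
          gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
        }
      `,
      fragmentShader: `
        uniform vec3 glowColor;
        varying float vIntensity;
        void main() {
          gl_FragColor = vec4( glowColor, 1.0 ) * vIntensity;
        }
      `,
      side: THREE.BackSide,          // render only the inside of the shell
      blending: THREE.AdditiveBlending,
      transparent: true,
    });

    const atmoSphere = new THREE.Mesh(
      new THREE.SphereGeometry(planetRadius * 1.15, 64, 64), // assumed radius
      glowMaterial
    );
    scene.add(atmoSphere);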
[updated April 30th]
Next, similar to the source code from
http://stemkoski.github.io/Three.js/Selective-Glow.html
the sphere with the gradient lighting texture and another, black-textured sphere are placed in a second scene, and the results of that scene are then composited with the original scene using additive blending. And just so you can experiment with the parameters used to create the glow effect, I have included a couple of sliders so that you can change the values and see the different glow effects that result.
I hope this helps you get started. Good luck!
[updated June 11]
I have a new example that achieves the same effect in a much simpler way: rather than using post-processing and additively blending two scenes, I just changed some of the parameters of the customized material. (It seems obvious in retrospect.) For an updated example, check out:
http://stemkoski.github.io/Three.js/Shader-Halo.html
Still haven't figured out the pan/zoom issues though.
[Updated July 24]
I figured out the pan/zoom issues. It requires using a shader; for details about the complexities, see the related question "Three.js - shader code for halo effect, normals need transformation", and for the final working example, see:
http://stemkoski.github.io/Three.js/Shader-Glow.html.
I'm pretty happy with the final result, so I will not be updating this answer any more :)
In the example you are referring to, I used a blue glow with additive blending -- if you used a white color instead maybe that would produce the effect you want.
