In my application, a user is browsing a scene, and I'd like to find the faces that appear on the screen, i.e. the ones the user can actually see (so I'd like to exclude both the faces outside the camera frustum and the faces hidden behind other faces).
An idea I had was to use the Raycaster class to cast a ray through each pixel of the screen, but I'm afraid the performance will be poor (it doesn't need to be realtime, but I'd like it not to be really slow).
I know there is a z-buffer that determines which faces are drawn because they are not hidden, and I wanted to know whether there is an easy way in Three.js to use the z-buffer to find those faces.
Thank you!
My final solution is the following:
I use three.js server-side to render my model (people here and there explain how to do it).
I use the color attribute of Face3 to set a specific color for each face. Each face has a number (its index in the .obj file), and this number is encoded as the Face3 color.
I use only ambient light.
I do the rendering.
My render is in fact a set of pixels: if a certain color appears in the rendering, it means that the face corresponding to that color is visible on the screen.
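For reference, here is a minimal sketch of this color-picking approach with the legacy THREE.Geometry / Face3 API. The variable names, the white sentinel clear color, and the MeshBasicMaterial (a flat stand-in for the ambient-light setup) are my assumptions, not from the original:

    // Encode each face's index into its Face3 color (24 bits of RGB).
    geometry.faces.forEach(function (face, i) {
        face.color.setHex(i);
    });

    // Flat, unlit material so the encoded colors reach the screen unmodified.
    var material = new THREE.MeshBasicMaterial({ vertexColors: THREE.FaceColors });
    scene.add(new THREE.Mesh(geometry, material));

    renderer.setClearColor(0xffffff); // sentinel: background must not decode as face 0
    renderer.render(scene, camera);   // renderer may need preserveDrawingBuffer: true

    // Read the pixels back and decode which face indices are visible.
    var gl = renderer.getContext();
    var pixels = new Uint8Array(width * height * 4);
    gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);

    var visible = {};
    for (var p = 0; p < pixels.length; p += 4) {
        var index = (pixels[p] << 16) | (pixels[p + 1] << 8) | pixels[p + 2];
        if (index !== 0xffffff) visible[index] = true; // skip the background
    }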
I'm trying to create a scene with objects on a solar system scale.
Some examples of what I want are:
- When a small object (on the order of 10 m in diameter) crosses behind a large (Earth-sized) object that blocks the light source (THREE.DirectionalLight), the smaller object is shadowed by the larger object.
- When a moon crosses between the light source and a planet, a shadow is cast on the planet.
- All objects must cast and receive shadows (except stars, which only cast).
I know that I should be aiming to "pancake" my shadow camera as much as possible, but with the variable scale that I need, this becomes very difficult to do.
What are some techniques or tricks that can be used when creating a shadowed scene on such a variable scale?
Is there some sort of logarithmic depth buffer for shadows (like there is for rendering)?
Or could I somehow leverage camera/trackball control events to dynamically adjust the frustum of the shadow camera? (As the camera moves further from the scene, use a coarser buffer and expand the shadow camera frustum, as sketched below.)
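Something like the following is what I have in mind. This is only a rough sketch of the idea, using the old-style shadow API from my fiddle, and it assumes light.shadowCamera has already been created by the first shadow-mapped render:

    controls.addEventListener('change', function () {
        var d = camera.position.length();   // crude proxy for how far out we are
        var shadowCam = light.shadowCamera; // created on the first render
        shadowCam.left = -d;
        shadowCam.right = d;
        shadowCam.top = d;
        shadowCam.bottom = -d;
        shadowCam.updateProjectionMatrix(); // apply the new, coarser frustum
    });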
Check out this JSFiddle for a relevant, but different, example of my problem: two small objects, close together, with a very distant light source.
http://jsfiddle.net/mtcq070x/6/
Notice how the shadows flicker on and off, and how there is shadowing on the front of the sphere (where there shouldn't be any).
EDIT: I changed the jsfiddle to use a proper bias, and the ball now receives and casts shadows. Notice how increasing the shadow darkness worsens the self-shadowing. Lowering the shadow darkness isn't an option, because then the shadow cast onto the plane disappears.
Also, here's what exactly what I'm working on looks like (a to-scale solar system).
What you are seeing in the jsfiddle is what's known as shadow acne. This can be fixed by using a small, non-zero positive shadow bias value. Setting light.shadowBias = 0.01; seems to solve the problem in your fiddle: http://jsfiddle.net/mtcq070x/4/. Also see https://msdn.microsoft.com/en-us/library/windows/desktop/ee416324(v=vs.85).aspx
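For completeness, a minimal sketch of the relevant light setup, using the pre-r73-style shadow properties referenced above; the frustum and resolution values are placeholders you would tune for your scene:

    var light = new THREE.DirectionalLight(0xffffff, 1);
    light.castShadow = true;
    light.shadowBias = 0.01;       // small positive bias removes the acne
    light.shadowMapWidth = 2048;   // more shadow-map resolution also helps
    light.shadowMapHeight = 2048;
    light.shadowCameraNear = 1;    // keep the frustum as tight ("pancaked")
    light.shadowCameraFar = 1000;  // as the scene allows
    light.shadowCameraLeft = -50;
    light.shadowCameraRight = 50;
    light.shadowCameraTop = 50;
    light.shadowCameraBottom = -50;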
My problem seems to be a very simple one, but I can't really find a good solution.
I'm working on an application that detects motion in a webcam stream. The plugin is written in JavaScript and WebGL. To this end it works fairly well.
I want to extend the application with color tracking and ultimately object recognition.
For now, the color detection simply passes a given color and the camera texture to a shader. The shader converts the texture and the color to CIELAB space and checks the Euclidean distance (on the a and b axes only, not the luminance component). If the distance is within the given threshold, the fragment keeps its color; otherwise it is set to black. The result is barely "OK".
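The matching step looks roughly like this. This is a simplified sketch: rgb2lab() stands for the conversion the shader already does, and uTargetLab / uMaxDist are assumed names for the color and threshold uniforms:

    var matchFrag = [
        'precision mediump float;',
        'uniform sampler2D uCamera;   // webcam texture',
        'uniform vec3 uTargetLab;     // target color, in CIELAB',
        'uniform float uMaxDist;      // match threshold on the a/b plane',
        'varying vec2 vUv;',
        'vec3 rgb2lab(vec3 c);        // conversion, defined elsewhere in the shader',
        'void main() {',
        '    vec3 rgb = texture2D(uCamera, vUv).rgb;',
        '    vec3 lab = rgb2lab(rgb);',
        '    // Euclidean distance on a/b only, ignoring L (lightness)',
        '    float d = distance(lab.yz, uTargetLab.yz);',
        '    gl_FragColor = vec4(d <= uMaxDist ? rgb : vec3(0.0), 1.0);',
        '}'
    ].join('\n');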
So my question is, is there a more robust and better way to find these colors?
I chose the CIELAB space since it is somewhat invariant to shadows etc.
EDIT:
It seems that the biggest problem is that I use a Gaussian filter to reduce noise in the video image, which leads to a darker image. Even though LAB is, as stated, "somewhat invariant to shadows etc.", this makes the detection less effective. So I'm guessing I need a different way to reduce noise in the image. So far I have only tried a median filter over the last 5 frames, but it is just not good enough. A better solution for noise reduction would be MUCH appreciated.
Best regards
You need to normalize the Gaussian filter kernel values to prevent darkening/brightening of the image; you may take a look at this Gaussian blur tutorial. You can also try Symmetric Nearest Neighbor or Kuwahara filters.
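A minimal sketch of the normalization in JavaScript (the function name and parameters are mine, for illustration):

    // Build a 1D Gaussian kernel and normalize it so the weights sum to 1;
    // unnormalized weights are what darkens (or brightens) the filtered image.
    function gaussianKernel(radius, sigma) {
        var kernel = [];
        var sum = 0;
        for (var i = -radius; i <= radius; i++) {
            var w = Math.exp(-(i * i) / (2 * sigma * sigma));
            kernel.push(w);
            sum += w;
        }
        return kernel.map(function (w) { return w / sum; }); // normalize
    }

    var weights = gaussianKernel(3, 1.5); // 7 taps, weights sum to 1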
Regarding the color tracking, it seems to me you can also use HSV color space.
I want to display text with WebGL, and I know there is no built-in way to do this. However, I know it can be done with textures. I am new to OpenGL and don't have much experience with shaders, so I'd appreciate it if someone could also explain how to set up the shaders for this. I would like to draw the entire string on the same object instead of a bunch of separate letters, and the strings are NOT preset; they will not always be the same. How can I get the text to appear? Also, how do I know how to space each letter?
I read post #7 on this page, and that sounds like what I want to do, but I don't understand exactly what it all means. (It's mostly the shader stuff I don't understand.)
By the way, I am using sylvester.js
There are many ways to render text but one of the simplest is called bitmap font rendering.
All you need to get started is a sprite sheet with all of the letters you might want to render. Then you simply render a quad with the texture coordinates set to the location of the character you want to draw. To render a full sentence, just draw a bunch of quads, each representing a single letter.
Your sprite sheet will look something like the following texture.
Once you have that, you'll need the texture coordinates, essentially (x, y) coordinates in the range 0 to 1, for each character in the sprite texture. Use these when generating the quad meshes that you draw to the screen.
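As a rough sketch of that step (assuming, purely for illustration, a 16x16 grid of ASCII glyphs and a fixed advance per character):

    // UV rectangle for one character cell in the sprite sheet.
    function glyphUV(charCode) {
        var cols = 16, rows = 16;           // assumed sprite-sheet layout
        var col = charCode % cols;
        var row = Math.floor(charCode / cols);
        var w = 1 / cols, h = 1 / rows;
        return { u0: col * w, v0: row * h, u1: (col + 1) * w, v1: (row + 1) * h };
    }

    // One quad per letter: advance x by a fixed glyph width for each character.
    function layoutText(text, glyphWidth) {
        return text.split('').map(function (ch, i) {
            return { x: i * glyphWidth, uv: glyphUV(ch.charCodeAt(0)) };
        });
    }

Depending on how your texture is loaded, you may need to flip the v coordinates.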
Now that you have text on the screen, you can get fancy and take into account the glyph kerning between the letters. This allows you to render more natural text.
Unfortunately, I can't find a tutorial to point you to, and it's not really something that I can whip together for you here. There are many pieces to the puzzle and it's no small task (matrix math, cameras, orthographic projection, texture coordinates, textures, sprites, generating meshes, etc.).
If you'd like, you can look through one of my projects where I have done this with WebGL. I even generate the initial sprite sheet using JavaScript and a 2D canvas.
Sprite Sheet generated here:
https://github.com/zfedoran/prefab.js/blob/master/app/graphics/spriteFont.js
Quad Mesh generated in this file:
https://github.com/zfedoran/prefab.js/blob/master/app/controllers/labelController.js
Wrapper around WebGL:
https://github.com/zfedoran/prefab.js/blob/master/app/graphics/device.js
Or You Could
Watch Notch (the guy who made Minecraft) do this in only about 30 minutes, in Java (fast-forward to 2:21 hours in):
http://www.twitch.tv/notch/b/487451713
http://www.twitch.tv/notch/b/487621698
Good luck, and have fun :)
Three.js has actual text glyph support. In addition, dimensionthree.net uses textures on shapes. If you need the source, let me know.
There is also my http://taccGL.org library, which can draw HTML text on a 2D canvas and then use it as a texture on 3D objects drawn on a 3D/WebGL canvas.
I am using Three.js for rendering a model in the browser. After rendering the model in the browser, is it possible to shrink the faces, not the mesh? Please give your suggestions.
See
three.js/examples/js/modifiers/ExplodeModifier.js.
Once each face has its own unique vertices, you can move each face vertex to a new location. You will likely also want to reset the face centroid, faceNormal, and vertexNormals.
If you are using WebGLRenderer, you will have to set
geometry.verticesNeedUpdate = true;
See the Wiki article How to Update Things with WebGLRenderer.
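Putting it together, a hedged sketch with the legacy THREE.Geometry API (shrinkFaces and its amount parameter are illustrative names, not three.js API):

    // After ExplodeModifier has given every face its own vertices,
    // pull each face's vertices toward that face's centroid.
    function shrinkFaces(geometry, amount) { // amount in (0, 1); 0 = no change
        geometry.faces.forEach(function (face) {
            var a = geometry.vertices[face.a];
            var b = geometry.vertices[face.b];
            var c = geometry.vertices[face.c];
            var centroid = new THREE.Vector3().add(a).add(b).add(c).divideScalar(3);
            [a, b, c].forEach(function (v) {
                v.lerp(centroid, amount);    // move toward the centroid
            });
        });
        geometry.verticesNeedUpdate = true;  // required by WebGLRenderer
    }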
Hey, I'm trying to implement shadow mapping in WebGL using this example:
tutorial
What I'm trying to do is:
- Initialize the depth texture and framebuffer.
- Draw the scene to that framebuffer with a simple shader, then draw a new scene with a box that uses the depth texture as its texture, so I can see the depth map with another shader.
It looks OK with the color texture, but I can't get it to work with the depth texture; it's all white.
I put the code on Dropbox: source code
Most of it is in these files:
- index.html
- webgl_all.js
- objects.js
There are also some light shaders I'm not using at the moment.
Really hope somebody can help me.
Greetings from Denmark
This could have several causes:
For common setups of the near and far planes, normalized depth values will be high enough to appear all white over most of the scene, even though they are not actually identical. (Remember that a depth texture has a precision of at least 16 bits, while your screen output has only 8 bits per color channel, so a depth texture may appear all white even when its values are not all identical; see the sketch after this list for one way to visualize them.)
On some setups (e.g. desktop OpenGL), a texture may appear all white when it is incomplete, that is, when texture filtering is set to use mipmaps but not all mipmap levels have been created. This may be the same with WebGL.
You may have hit a bug in the browser's WebGL implementation.
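Regarding the first point, here is a sketch of a fragment shader that linearizes the sampled depth so the map becomes visible instead of near-white. uNear and uFar are assumed uniforms holding the depth camera's near and far planes:

    var showDepthFrag = [
        'precision mediump float;',
        'uniform sampler2D uDepth;',
        'uniform float uNear;',
        'uniform float uFar;',
        'varying vec2 vUv;',
        'void main() {',
        '    float z = texture2D(uDepth, vUv).r;        // stored depth in [0, 1]',
        '    float ndc = z * 2.0 - 1.0;                 // back to NDC [-1, 1]',
        '    // invert the perspective projection to get linear eye-space depth',
        '    float linear = (2.0 * uNear * uFar) / (uFar + uNear - ndc * (uFar - uNear));',
        '    float gray = (linear - uNear) / (uFar - uNear); // remap to [0, 1]',
        '    gl_FragColor = vec4(vec3(gray), 1.0);',
        '}'
    ].join('\n');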