I am using Three.js for rendering a model in the browser. After rendering the model in the browser, is it possible to shrink the faces, not the mesh? Please give your suggestions.
See three.js/examples/js/modifiers/ExplodeModifier.js.
Once each face has its own unique vertices, you can move each face vertex to a new location. You will likely also want to reset the face centroid, faceNormal, and vertexNormals.
If you are using WebGLRenderer, you will have to set
geometry.verticesNeedUpdate = true;
See the Wiki article How to Update Things with WebGLRenderer.
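For illustration, here is a minimal sketch of the face-shrinking step, using the legacy Geometry/Face3 API that ExplodeModifier works on (the shrinkFactor name is just for illustration):
const shrinkFactor = 0.5; // 1 = unchanged, 0 = face collapses to a point
new THREE.ExplodeModifier().modify(geometry); // give each face its own vertices
geometry.faces.forEach(face => {
  const a = geometry.vertices[face.a];
  const b = geometry.vertices[face.b];
  const c = geometry.vertices[face.c];
  // centroid of the triangle
  const centroid = new THREE.Vector3().add(a).add(b).add(c).divideScalar(3);
  // pull each vertex toward the centroid to shrink the face in place
  [a, b, c].forEach(v => v.lerp(centroid, 1 - shrinkFactor));
});
geometry.computeFaceNormals();   // reset faceNormal
geometry.computeVertexNormals(); // reset vertexNormals
geometry.verticesNeedUpdate = true;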
Let's say I have a 3D object in three.js:
const geometry = new THREE.BoxGeometry(1, 1, 1);
const material = new THREE.MeshStandardMaterial({ color: 0xff0000 });
const mesh = new THREE.Mesh(geometry, material);
This mesh is also dynamic; that is, the user sets its size, which is dynamically applied as the scale of the mesh.
widthInput.subscribe(value => this.mesh.scale.x = +value); // just to get the idea
So, I know it's possible to set separate materials on the different sides of a geometry. I'm also aware that it should be possible to set them on separate segments of that geometry's sides (if I had more).
The problem is that the user can set the width in the range 200 to 260, but I need a different material on the very right of the mesh with a fixed size of 10. I'm not really sure how I would do that without creating another geometry. Is there any way to set the material on a fixed part of the mesh? Or is there any way to set the segments so that one of them always has a fixed size? Thank you in advance.
To visualize the problem: the white area needs to have a fixed width of 10 while the red area resizes.
Is there any way to set the material on the fixed part of the mesh?
As you've already mentioned, there is a way to set different materials on different parts of the geometry. The problem here is defining what "fixed" means:
or is there any way to set the segments the way one of them will always have the fixed size?
Yes. You'd have to modify the geometry yourself: reach into, say, geometry.attributes.position.array and modify the vertices that make up the segment. It's lower level and different from working with the scene graph.
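As a hedged sketch of that lower-level approach (assuming a BufferGeometry box; totalWidth is an illustrative variable, not from the question):
const pos = geometry.attributes.position;
for (let i = 0; i < pos.count; i++) {
  // pin every vertex on the +x side at the desired overall width,
  // leaving the rest of the box untouched
  if (pos.getX(i) > 0) pos.setX(i, totalWidth / 2);
}
pos.needsUpdate = true;           // re-upload the vertices to the GPU
geometry.computeBoundingSphere(); // keep raycasting and culling in sync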
There may not be a good reason for wanting to keep everything in the same geometry. It would make sense if, for example, you used vertex colors to paint the different segments, and perhaps animated the extrusion in GLSL rather than with scale.set(). But since you want to apply different materials and are not writing GLSL, you will end up with multiple draw calls anyway.
You can actually save a bit of memory by storing just the simple cube and referencing it twice, rather than storing extra vertices and faces. So what you're trying to do is more likely to consume more memory while having the same render overhead. In which case, doing everything with the scene graph, with two meshes and one geometry (you don't need to duplicate the box, you only need two nodes), should be as performant and much easier to work with.
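A minimal sketch of that scene-graph approach, sharing one BoxGeometry between two meshes (assuming an existing scene and the question's widthInput; fixedWidth and setWidth are illustrative names):
const geometry = new THREE.BoxGeometry(1, 1, 1);
const redMesh = new THREE.Mesh(geometry, new THREE.MeshStandardMaterial({ color: 0xff0000 }));
const whiteMesh = new THREE.Mesh(geometry, new THREE.MeshStandardMaterial({ color: 0xffffff }));
scene.add(redMesh, whiteMesh);
const fixedWidth = 10;
function setWidth(totalWidth) {
  // the red part takes whatever is left after the fixed white cap
  redMesh.scale.x = totalWidth - fixedWidth;
  whiteMesh.scale.x = fixedWidth;
  // line them up along x so together they span totalWidth, centered at 0
  redMesh.position.x = -fixedWidth / 2;
  whiteMesh.position.x = (totalWidth - fixedWidth) / 2;
}
widthInput.subscribe(value => setWidth(+value)); // mirrors the question's input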
While working with three.js, I have a 3D model which has several small parts joined together to form one 3D model. I am using a raycaster and intersects to find the clicked element. It's basically a door which has small screws, handles, hinges, bolts, rails, rotation points, pivots, etc.
My problem is that, while using the raycaster, the element is not identified unless I zoom in. When I zoom in far enough, I am able to identify the clicked element. Can anyone help me get rid of the need to zoom in?
You might be best to create an additional mesh attached/added to the small elements (like screws). The additional mesh could be defined by a large sphere geometry that surrounds the screw. This larger sphere would be invisible (i.e., its material's visible property is false) and would function to intercept raycasting of small elements like screws that are difficult to intersect on their own.
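A minimal sketch of that idea (addHitProxy is an illustrative helper name; the hit is mapped back to the real part through userData):
function addHitProxy(part, radius) {
  const proxy = new THREE.Mesh(
    new THREE.SphereGeometry(radius, 8, 8),
    new THREE.MeshBasicMaterial({ visible: false }) // invisible, but still raycastable
  );
  proxy.userData.target = part; // remember which real part this proxy stands for
  part.add(proxy);              // the proxy follows the part's transforms
  return proxy;
}
// when raycasting, resolve a proxy hit back to its real part:
const hits = raycaster.intersectObjects(scene.children, true);
const picked = hits.length ? (hits[0].object.userData.target || hits[0].object) : null;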
Has anybody used Three.js StereoEffect and Raycaster together for collision detection (in stereo view)? In standard full-screen view I can easily check whether a Vector2 in the middle of the screen collides with an object in my scene. When I switch on the stereo effect I effectively get two scenes, and the collision detection stops working, but I am not really sure how to proceed. Should I create two new Vector2 objects, one for each view? Help :) ...
It's a bit late, but ...
I encountered a similar problem and eventually found the reason. With StereoEffect, THREE.js displays the meshes for the two eyes, but it actually adds only one mesh to the scene, exactly in the middle of the line left-eye-mesh <-> right-eye-mesh, hidden from the viewer.
So when you use the raycaster, you need to use it on the real mesh in the middle, not the illusion displayed to each eye!
I detailed how to do it here: Three.js StereoEffect displays meshes across 2 eyes. Hope it solves your problem!
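As a hedged sketch of what that means in practice, assuming the usual setFromCamera pattern: raycast from the middle of the screen with the original (mono) camera against the scene's real meshes, not the per-eye images StereoEffect draws:
const raycaster = new THREE.Raycaster();
const center = new THREE.Vector2(0, 0); // middle of the screen in normalized device coordinates
function pickAtCenter() {
  raycaster.setFromCamera(center, camera); // the original camera, not a per-eye one
  const hits = raycaster.intersectObjects(scene.children, true);
  return hits.length ? hits[0].object : null;
}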
You can use my StereoEffect.js file in your project to resolve the problem. See the example of its usage.
In my application, a user is browsing a scene, and I'd like to be able to find the faces that appear on the screen, meaning the faces the user can see (so I'd like to exclude the faces that are outside the camera's frustum and the faces that are hidden behind other faces).
An idea I had was to use the Raycaster class to cast rays through each pixel of the screen, but I'm afraid the performance will be low (it doesn't need to be real-time, but I'd like it not to be really slow).
I know that there is a z-buffer that determines which faces are shown because they are not hidden, and I wanted to know if there is an easy way with Three.js to use the z-buffer to find those faces.
Thank you!
My final solution is the following:
I use three.js server-side to render my model (people here and there explain how to do it).
I use the color attribute of Face3 to set a specific color for each face. Each face has a number (the index of the face in the .obj file), and this number will represent the Face3 color.
I use only ambient light.
I do the rendering.
My render is in fact a set of pixels: if a certain color appears in the rendering, it means that the face corresponding to that color is visible on the screen.
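A minimal sketch of that color-ID pass, using the legacy Face3 geometry the answer describes. Instead of relying on ambient light, this sketch uses an unlit MeshBasicMaterial, which gives the same effect of unshaded face colors:
geometry.faces.forEach((face, i) => {
  face.color.setHex(i); // encode the face index directly as an RGB value
});
geometry.colorsNeedUpdate = true;
const idMaterial = new THREE.MeshBasicMaterial({ vertexColors: THREE.FaceColors });
// render with idMaterial applied to the meshes, then read the pixels straight back
renderer.render(scene, camera);
const gl = renderer.getContext();
const pixels = new Uint8Array(width * height * 4);
gl.readPixels(0, 0, width, height, gl.RGBA, gl.UNSIGNED_BYTE, pixels);
const visibleFaces = new Set();
for (let p = 0; p < pixels.length; p += 4) {
  // decode the face index back from the RGB channels
  visibleFaces.add((pixels[p] << 16) | (pixels[p + 1] << 8) | pixels[p + 2]);
}
Note that the clear color will also show up in the decoded set, so reserve one value (e.g. white) for the background, and disable antialiasing for this pass so edge pixels cannot blend into non-existent face IDs.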
Hey, I'm trying to implement shadow mapping in WebGL using this example:
tutorial
What I'm trying to do is:
initialize the depth texture and framebuffer;
draw the scene to that framebuffer with a simple shader, then draw a new scene with a box that has the depth texture as its texture, so I can see the depth map using another shader.
I think it looks OK with the color texture, but I can't get it to work with the depth texture; it's all white.
I put the code on Dropbox:
source code
Most of it is in the files
index.html
webgl_all.js
objects.js
There are some light shaders I'm not using at the moment.
I really hope somebody can help me.
Greetings from Denmark
This could have several causes:
For common setups of the near and far planes, normalized depth values will be high enough to appear all white for most of the scene, even though they are not actually identical. (Remember that a depth texture has a precision of at least 16 bits, while your screen output has only 8 bits per color channel, so a depth texture may appear all white even when its values are not all identical.) A sketch of the usual fix follows this list.
On some setups (e.g. desktop OpenGL), a texture may appear all white when it is incomplete, that is, when texture filtering is set to use mipmaps but not all mipmap levels have been created. This may be the same with WebGL; a fix is also sketched below.
You may have hit a browser WebGL implementation bug.
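Here are hedged sketches for the first two causes, assuming a plain WebGL setup like the tutorial's (gl, depthTexture, and the uniform names are illustrative, not from the linked code):
// Cause 2: depth textures have no mipmap chain by default, so a
// mipmap-based minification filter leaves the texture incomplete.
// Sample with a non-mipmap filter instead:
gl.bindTexture(gl.TEXTURE_2D, depthTexture);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
// Cause 1: linearize the sampled depth before display so the values
// spread across the visible range instead of clustering near 1.0.
const debugFragmentShader = `
  precision mediump float;
  uniform sampler2D uDepth;
  uniform float uNear, uFar; // the depth camera's near/far planes
  varying vec2 vUv;          // passed from the vertex shader
  void main() {
    float z = texture2D(uDepth, vUv).r; // window-space depth in [0, 1]
    float ndc = z * 2.0 - 1.0;          // back to NDC [-1, 1]
    float linear = (2.0 * uNear * uFar) / (uFar + uNear - ndc * (uFar - uNear));
    float gray = (linear - uNear) / (uFar - uNear); // normalize for display
    gl_FragColor = vec4(vec3(gray), 1.0);
  }
`;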