(Three.js) Transform controls for translation (size issue) - javascript

I am working on customizing the transform controls available in three.js for my project.
I have already changed the rotation part and am now working on the translation part.
If you look at the translation gizmo, there is an XYZ octahedron in the center. I have removed all the other planes and arrows and put all the functionality on that center mesh alone, which works fine.
Now I am stuck on one small problem: the size and position of that controller. I changed the octahedron to a BoxGeometry and am writing code to make the controller exactly the size of the selected object. My idea is to give the controller the same size as the box helper that acts as the outline of the object.
When I tried this logic in sample code, where I created a box, read the size of its box helper, and created another box of the same size, it worked fine. But when I write the same code in the three.js transform controls, the result is not the same.
Below is the gizmo geometry init:
XYZ: [[ new THREE.Mesh( new THREE.BoxGeometry( 0.1, 0.1, 0.1 ), pickerMaterial )]],
Then I get the size of the box3 when attaching the controls to an object:
this.addBoxHelper = function () {

    this.removeBoxHelper();

    if ( this.object.box3 ) {

        this.object.box3.getSize( selectionBoxSize );
        console.log( selectionBoxSize );

        this.objectBoxHelper = new THREE.Box3Helper( this.object.box3, 0xffff00 );
        this.objectBoxHelper.canSelect = function () {
            return false;
        };

        this.object.add( this.objectBoxHelper );

    }

};
Below is my update function in the transform controls:
this.update = function () {

    if ( scope.object === undefined ) return;

    scope.object.updateMatrixWorld();
    worldPosition.setFromMatrixPosition( scope.object.matrixWorld );
    worldRotation.setFromRotationMatrix( tempMatrix.extractRotation( scope.object.matrixWorld ) );

    scope.object.box3.getSize( selectionBoxSize );
    scope.object.getWorldPosition( selectionBoxPos );

    camera.updateMatrixWorld();
    camPosition.setFromMatrixPosition( camera.matrixWorld );
    camRotation.setFromRotationMatrix( tempMatrix.extractRotation( camera.matrixWorld ) );

    scaleT = selectionBoxSize;

    // The three lines below are for dynamic size change based on camera
    // position, for next-level functionality:
    // scaleT.x = worldPosition.distanceTo( camPosition ) / 6 * selectionBoxSize.x;
    // scaleT.y = worldPosition.distanceTo( camPosition ) / 6 * selectionBoxSize.y;
    // scaleT.z = worldPosition.distanceTo( camPosition ) / 6 * selectionBoxSize.z;

    this.position.copy( selectionBoxPos );
    this.scale.set( scaleT.x, scaleT.y, scaleT.z );

    this.updateMatrixWorld();
};
Below is the console log output:
TransformControls.js:526 Vector3 {x: 10.020332336425781, y: 2.621583938598633, z: 3.503500819206238}
TransformControls.js:601 Vector3 {x: 10.020332336425781, y: 2.621583938598633, z: 3.503500819206238}
As you can see, the scale is the same in both logs, but the result is different; see the images below. The red box at the bottom is the translation controller, but it is smaller than the selection box.
Another issue is that the pivots of my objects are at the bottom, and I want the controller to sit at the center of the selection box; that is not happening with the getCenter method of Box3 either.
Please help! Let me know if anything in my explanation of the issue is unclear.

If you are getting the bounding box of the object from its geometry, that will be wrong, because it doesn't take the object's transform into account. You have to use box3.setFromObject( yourObject ) instead. Does that help?
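A minimal sketch of that suggestion, reusing the selectionBoxSize and selectionBoxPos names from the question (untested against your controls code):

// Recompute the box in world space from the object itself, so the
// object's position, rotation and scale are taken into account.
var worldBox = new THREE.Box3().setFromObject( scope.object );
worldBox.getSize( selectionBoxSize );  // world-space size for this.scale
worldBox.getCenter( selectionBoxPos ); // world-space center, even with a bottom pivot

Since this box is in world coordinates, its getCenter() result should also address the bottom-pivot centering issue from the question.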

Related

Why does setting the position of my mesh not negate the previous translation?

I am creating a simple "Hello World" Three.js application, and I am curious to know why this works.
First, I create and show a centered "Hello World" using the code snippet below. This snippet is responsible for centering the text and moving it back 20 units.
/* Create the scene text */
let loader = new THREE.FontLoader();
loader.load( 'fonts/helvetiker_regular.typeface.json', function ( font ) {

    /* Create the geometry */
    let geometry_text = new THREE.TextGeometry( "Hello World", {
        font: font,
        size: 5,
        height: 1,
    } );

    /* Compute a bounding box in order to calculate the center position of the created text */
    geometry_text.computeBoundingBox();
    let x_mid = geometry_text.boundingBox.max.x - geometry_text.boundingBox.min.x;
    geometry_text.translate( -0.5 * x_mid, 0, 0 ); // center the text by offsetting half the width

    /* Currently using a basic material because there is no light; Phong would render black */
    let material_text = new THREE.MeshBasicMaterial( {
        color: new THREE.Color( 0x006699 )
    } );

    let textMesh = new THREE.Mesh( geometry_text, material_text );
    textMesh.position.set( 0, 0, -20 );
    //debugger;
    scene.add( textMesh );
    console.log( 'added mesh' );

} );
Notice that I perform the geometry translation first:

geometry_text.computeBoundingBox();
let x_mid = geometry_text.boundingBox.max.x - geometry_text.boundingBox.min.x;
geometry_text.translate( -0.5 * x_mid, 0, 0 );

and only afterwards position the mesh:

let textMesh = new THREE.Mesh( geometry_text, material_text );
textMesh.position.set( 0, 0, -20 );

My confusion comes from the fact that if I remove the translation, the "Hello World" text is not centered. However, after the translation completes, I set the mesh position to (0, 0, -20). Shouldn't this position.set call overwrite my previous translation and simply move the object to (0, 0, -20)? Why is my text still centered even though position.set is called after the translation?
This is because the call to THREE.TextGeometry.translate() ends up calling THREE.Geometry.applyMatrix() with the corresponding translation matrix, which bakes the transformation into the geometry by directly modifying the vertex coordinates. See Geometry.js#L149 for the source.
In other words, before the call

textMesh.position.set(0, 0, -20);

the mesh transformation matrix was still the identity matrix. A mesh transformation differs from a geometry transformation in that it only updates the matrix that is passed to the shader, instead of recomputing every vertex. As for which one to use: transforming the geometry is more expensive, but you can do it once and avoid it in the render loop (see the explanation here).
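To make the composition explicit, here is a small illustrative sketch using the names from the snippet above (not the library's internals):

// Baking: geometry.translate() modifies the vertex data itself, once.
geometry_text.translate( -0.5 * x_mid, 0, 0 ); // every vertex x-coordinate shifts

// Matrix transform: position.set() only changes the mesh's matrix.
textMesh.position.set( 0, 0, -20 );

// At render time the shader computes, roughly:
//   worldVertex = meshMatrix * bakedVertex
// so both offsets apply: the text stays centered AND sits at z = -20.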

Three js Obj loader Rotation

I've made a stick in the Three.js editor. After loading it with the following code I can position it, but not rotate it.
It's a stick made of 4 meshes, so I probably have to create a rotation point, but I can't figure out how to get that to work.
The ideal situation:
x -------
The x represents the point I want to rotate the dashes (the stick) around.
Can anyone help me?
Thanks in advance.
var loader = new THREE.ObjectLoader();
loader.load( "scene.json", function ( obj ) {

    stick = obj;
    scene.add( stick );
    stick.position.z = -9;
    stick.position.y = .4;
    stick.children[ 3 ].rotation.x( 45 );

} );
If I understand right, you have an object with 4 children that are meshes:
Object3D
 - Mesh
 - Mesh
 - Mesh
 - Mesh
First, I would translate all the children, and then rotate the parent object.
for ( var i = 0; i < stick.children.length; i++ ) {
    stick.children[ i ].position.set( 0, 0.4, -9 ); // or another offset
}
stick.rotateX( 45 * Math.PI / 180 ); // make sure to use radians
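A common alternative, sketched below with purely illustrative offsets, is to parent the stick to an empty Object3D placed at the desired rotation point and rotate that pivot instead:

var pivot = new THREE.Object3D();
pivot.position.set( 0, 0.4, -9 );    // put the pivot where the 'x' should be
scene.add( pivot );

pivot.add( stick );                  // the stick now rotates around the pivot's origin
stick.position.set( 2, 0, 0 );       // example offset of the stick from the pivot

pivot.rotateX( 45 * Math.PI / 180 ); // rotate around the pivot point, in radians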

Raycaster is not working with combined cameras - three js

I am building an application like the three.js editor. I have four cameras, each named and positioned differently, and one of the cameras is
cameras['home'] = new THREE.CombinedCamera(window.innerWidth / 2, window.innerHeight / 2, 70, 1, 1000, -500, 1000);
cameras['home'].lookAt(centerPoint);
When I use the raycaster with the selected camera,
raycaster.setFromCamera(mouse, selectedCamera);
var intersects = raycaster.intersectObjects([sceneObjects], true);
it throws this error:
'THREE.Raycaster: Unsupported camera type.'
I edited Three.js, changing

Raycaster.prototype = {
    ...
    setFromCamera: function ( coords, camera ) {
        if ( ( camera && camera.isPerspectiveCamera ) ) {

to

        if ( ( camera ) ) {

and with that the raycaster works fine. I just wanted to know why CombinedCamera does not work here.
I think the reason is right there in the error message: the Raycaster code simply doesn't support CombinedCamera at the moment. Keep in mind that a CombinedCamera is actually two cameras: one orthographic and one perspective.
You might try passing the underlying camera, i.e. raycaster.setFromCamera( mouse, selectedCamera.cameraP ) or raycaster.setFromCamera( mouse, selectedCamera.cameraO ), although I haven't tested this.
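An untested sketch of that idea, choosing whichever sub-camera the CombinedCamera is currently using (cameraP, cameraO, and inPerspectiveMode are the fields exposed by the CombinedCamera example class):

// Raycast against the sub-camera that matches the current projection mode.
var activeCamera = selectedCamera.inPerspectiveMode ? selectedCamera.cameraP : selectedCamera.cameraO;
raycaster.setFromCamera( mouse, activeCamera );
var intersects = raycaster.intersectObjects( [ sceneObjects ], true );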

THREE.js line drawn with BufferGeometry not rendering if the origin of the line isn't in the camera's view

I am writing a trace-line function for a visualization project that requires jumping between time-step values. My issue is that a line created with THREE.js's BufferGeometry and the setDrawRange method is only visible while the origin of the line is in the camera's view. Panning away makes the line disappear, and panning back toward the line's origin (usually 0,0,0) makes it appear again. Is there a reason for this, and a way around it? I have tried playing around with the render settings.
The code below is what I use for testing; it draws the trace of the object as time progresses.
var traceHandle = {

    /* setup() returns the trace line */
    setup : function ( MAX_POINTS ) {

        var lineGeo = new THREE.BufferGeometry();
        //var MAX_POINTS = 500 * 10;
        var positions = new Float32Array( MAX_POINTS * 3 ); // 3 coordinates per point
        lineGeo.addAttribute( 'position', new THREE.BufferAttribute( positions, 3 ) );

        var lineMaterial = new THREE.LineBasicMaterial( { color: 0x00ff00 } );
        var traceLine = new THREE.Line( lineGeo, lineMaterial );
        scene.add( traceLine );

        return traceLine;

    },

    /****
     * updateTrace() updates and draws the trace line.
     * 'index' needs to be saved globally for this.
     ****/
    updateTrace : function ( traceLine, obj, timeStep, index ) {

        traceLine.geometry.setDrawRange( 0, timeStep );
        traceLine.geometry.dynamic = true;

        var positions = traceLine.geometry.attributes.position.array;
        positions[ index++ ] = obj.position.x;
        positions[ index++ ] = obj.position.y;
        positions[ index++ ] = obj.position.z;

        // required after the first render
        traceLine.geometry.attributes.position.needsUpdate = true;

        return index;

    }

};
Thanks a lot!
Most likely the bounding sphere is not defined or has radius zero. Since you are adding points dynamically, you can disable frustum culling for the line:

traceLine.frustumCulled = false;

The other option is to make sure the bounding sphere stays current, but given your use case, that seems too computationally expensive.
three.js r.73
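For completeness, a sketch of that second option, in case you ever do want to keep frustum culling (note: this walks the whole preallocated position buffer on each call, which is why it can be too expensive here):

// After writing new positions each time step:
traceLine.geometry.computeBoundingSphere(); // keeps culling correct, at a cost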

three.js - canvas texture on tube geometry for ipad

I have a 3D model of a tube geometry. There are 18,000 coordinates on the production side. I am taking every 9th coordinate, so I am actually plotting 2,000 coordinates to build the tube geometry. I have to use CanvasRenderer only.
Now, when I use vertexColors: THREE.VertexColors with WebGLRenderer, the model displays a different color on each face. When I change it to CanvasRenderer, the model turns entirely white. Even when I change to vertexColors: THREE.FaceColors, the result is the same.
Below are a link to a jsfiddle and a link to my previous question, where mrdoob added support for material.vertexColors = THREE.FaceColors to CanvasRenderer:
support for vertex color in canvas rendering
tube in canvas rendering
Below is the image showing the values the colors are applied from.
As shown in the image, there are 12 values at 12 different degrees for every coordinate, so I have created a tube with a radial segment count of 12. I have stored these values in a JSON file, but as there are 18,000 points, the file becomes very heavy. Even though I am only plotting 2,000 points, it takes too much time. With 2,000 segments and 12 faces per segment, there are 24,000 faces on the tube.
Below is the programming logic that applies a color based on the value of a parameter.
// get resistivity values & apply color
var lblSeg = 0; var pntId; var d = 0; var faceLength = tube.faces.length;
var degrees = [ '30', '60', '90', '120', '150', '180', '210', '240', '270', '300', '330' ];
var faces = tube.faces; var degreeCntr = 0; var degreeProp;
//console.log(faces);
var res30 = 0, res60 = 0, res90 = 0, res120 = 0, res150 = 0, res180 = 0, res210 = 0, res240 = 0, res270 = 0, res300 = 0, res330 = 0;
var res; var resDegree; var pnt = 0;

// fetch JSON data of resistivity values at different degrees, as shown in the image
var result = getResValue();

for ( var k = 0; k < faceLength; k++ ) {

    resDegree = degrees[ degreeCntr ];
    degreeProp = "r" + resDegree;
    res = result.resistivity[ pnt ][ degreeProp ];
    objects.push( result.resistivity[ pnt ] );

    f = faces[ k ];
    color = new THREE.Color( 0xffffff );

    if ( res < 5 ) {
        color.setRGB( 197/255, 217/255, 241/255 );
    }
    else if ( res >= 5 && res < 50 ) {
        color.setRGB( 141/255, 180/255, 226/255 );
    }
    else if ( res >= 50 && res < 100 ) {
        color.setRGB( 83/255, 141/255, 213/255 );
    }
    else if ( res >= 100 && res < 200 ) {
        color.setRGB( 22/255, 54/255, 92/255 ); // was ( 22, 54, 92 ): setRGB expects 0-1 components
    }
    else if ( res >= 200 && res < 300 ) {
        color.setRGB( 15/255, 36/255, 62/255 );
    }
    else if ( res >= 300 && res < 400 ) {
        color.setRGB( 220/255, 230/255, 241/255 );
    }
    else if ( res >= 400 && res < 700 ) {
        color.setRGB( 184/255, 204/255, 228/255 );
    }
    else if ( res >= 700 && res < 1200 ) {
        color.setRGB( 149/255, 179/255, 215/255 );
    }
    else if ( res >= 1200 && res < 1500 ) {
        color.setRGB( 54/255, 96/255, 146/255 );
    }
    else if ( res >= 1700 && res < 1800 ) {
        color.setRGB( 36/255, 84/255, 98/255 );
    }
    else if ( res > 1900 ) {
        color.setRGB( 128/255, 128/255, 128/255 );
    }

    for ( var j = 0; j < 4; j++ ) {
        tube.vertices.push( f.centroid );
        vertexIndex = f[ faceIndices[ j ] ];
        p = tube.vertices[ vertexIndex ];
        f.vertexColors[ j ] = color;
    }

    degreeCntr++;
    if ( degreeCntr == 10 ) {
        degreeCntr = 0;
    }
    if ( k % 12 == 0 && k != 0 ) {
        pnt++;
    }
}
This logic takes too much time to render, the model becomes too heavy, and we can't perform other operations. On Android the frame rate drops to 2-3 FPS. I have to render this model on an iPad, so I must use the canvas renderer.
So, how do I make this model lighter so it loads and runs smoothly on an iPad? Is there another way to apply colors to every face? And if a canvas map used as a texture can make the model lighter, how do I build that map with all the colors based on the values?
Update:
After changing the library version to r53 and using vertexColors: THREE.FaceColors with face.color.setRGB( Math.random(), Math.random(), Math.random() ), the model displays a random color for each face under canvas rendering.
So now the remaining issues are applying colors as per the requirements (either via a canvas map or any other feasible solution) and making the model light enough to load smoothly on an iPad.
I believe this will give you slightly better performance. And if you can come up with an automated method of calculating the color for each angle offset, you can set the hex color directly:
for ( var i = 0; i < tube.faces.length; i++ ) {
    tube.faces[ i ].color.setHex( Math.random() * 0xffffff );
}
As I explained to you in my previous answer (three.js - text next to line), using canvas textures will only add load to your FPS if you attempt to render that many faces.
If you really want to render 24,000 faces with the canvas renderer and still hope it will perform well on an iPad, that is not going to happen. Here is the only solution I can think of for now:
1) Set your tube to only 1 segment.
2) Create 12 canvas elements (one for every radius segment) with a width equal to your tube length (see my link above).
3) Now imagine your 2,000 segments drawn inside each canvas: divide the canvas length by 2,000 and fill each slice of that division with its calculated color (just like the Stats() FPS widget draws its bars, except each bar gets its own color).
4) Then apply each colored-bars canvas texture to its radius segment, and you are good to go.
This way you only pay an initial load cost (calculating those 24,000 colored bars), and your whole tube is only 12 faces.
Now, I know your next question is going to be: how do I pick faces to show my lines with tag text? Very simple: take the current face (1 of 12), pick its position coordinates, and translate them back to your JSON, the same way you would with 24,000 faces.
Hope that helps!
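A minimal sketch of the colored-bars texture from steps 2) and 3), assuming a hypothetical colorForValue() helper that maps a resistivity value to a CSS color string (both names below are illustrative, not from the code above):

// Build a 1px-tall canvas where each horizontal pixel is one segment's color bar.
function makeBarTexture( rowValues, colorForValue ) {
    var canvas = document.createElement( 'canvas' );
    canvas.width = rowValues.length; // e.g. 2000 segments -> 2000 one-pixel bars
    canvas.height = 1;
    var ctx = canvas.getContext( '2d' );
    for ( var i = 0; i < rowValues.length; i++ ) {
        ctx.fillStyle = colorForValue( rowValues[ i ] ); // e.g. 'rgb(197,217,241)'
        ctx.fillRect( i, 0, 1, 1 );
    }
    var texture = new THREE.Texture( canvas );
    texture.needsUpdate = true;
    return texture;
}

// One texture per radius segment (12 in total), e.g.:
// var material = new THREE.MeshBasicMaterial( { map: makeBarTexture( row, colorForValue ) } );

Each of the 12 radius-segment faces then gets its own material with its own bar texture.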
