Inverse of camera.lookAt() - javascript

I've googled far and wide, but I haven't found a solution to what I think is actually a pretty common situation. Say I have a THREE.PerspectiveCamera initialized to look at a certain point in space:
var camera = new THREE.PerspectiveCamera(45, 2, 0.1, 100);
var target = new THREE.Vector3(1, 2, 3);
camera.lookAt(target);
Now, later on in the code I'd like to be able to find out the coordinates of target by simply querying camera.
I tried what was suggested in this question, adapting it to my own scenario:
var vector = new THREE.Vector3();
vector.applyQuaternion(camera.quaternion);
console.log(vector);
But it logs a vector of coordinates (0, 0, 0) instead of the correct coordinates (which, in my example, should be (1, 2, 3)).
Any insights? Cheers.
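For reference, applying a quaternion to the zero vector leaves it at zero, which is why the log above shows (0, 0, 0). More fundamentally, lookAt() stores only an orientation, so the camera alone can give you the viewing direction but not the target point itself: every point along the view ray produces the same rotation. A minimal sketch, assuming the camera has no rotated parent:
// The camera looks down its local negative-Z axis, so rotating (0, 0, -1) by
// the camera's quaternion yields the viewing direction in world space.
var dir = new THREE.Vector3(0, 0, -1);
dir.applyQuaternion(camera.quaternion);
// The exact target is not recoverable, but a point on the view ray at an
// assumed distance d is:
var d = 10; // hypothetical distance, not stored anywhere by lookAt()
var pointOnRay = camera.position.clone().add(dir.multiplyScalar(d));
console.log(pointOnRay);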
EDIT:
Ok, so I'm going to try to contextualize here, to justify why MrTrustworthy's solution is unfortunately not applicable in my scenario. I'm trying to tweak the THREE.OrbitControls library for my purposes, since I noticed that when using it, it overrides whichever position the camera was looking at originally. This has also been reported here. Basically, on line 36 of OrbitControls.js (I'm using the version which can be found here) this.target is initialized to a new THREE.Vector3(). I found out that if I manually set it to equal the same vector I use as the argument of camera.lookAt(), everything works just fine: I can start panning, orbiting and zooming the scene from the same POV I would see the scene from if I didn't apply the controls.
Of course, I cannot hard-code this information into OrbitControls.js, because it would require me to change it every time I want to change the initial "lookAt" of my camera; and if I were to follow MrTrustworthy's suggestion, I would have to change line 36 of OrbitControls.js to read like this: this.target = object.targetRef (or this.target = object.targetRef || new THREE.Vector3()), which is also too "opinionated" (it would always require object to have a targetRef property, whereas I'm trying to stick to using only three.js's existing object properties and methods). Hope this helps get a better understanding of my case. Cheers.

If your only use case is "I want to be able to access the camera target's position via the camera object", you could just put a reference on the camera object.
var camera = new THREE.PerspectiveCamera(45, 2, 0.1, 100);
var target = new THREE.Vector3(1, 2, 3);
camera.lookAt(target);
camera.targetRef = target;
//access it
var iNeedThisNow = camera.targetRef;

I figured it out and wrote my solution here. Since the issue affects both THREE.TrackballControls and THREE.OrbitControls, the solution involves applying a slight change to both those files. I wonder if it can be considered a valid change and make its way into rev. 70; I will open a PR on GitHub just for the sake of it :)
Thanks to all those who pitched in.

Well, you could put the object in a parent, have the parent lookAt() the target, and have the child object rotated 180 degrees. That's the quick and dirty solution.
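That might look roughly like this (a hedged sketch; rig, childObject and target are made-up names, and whether the 180-degree flip is needed depends on the object's facing convention):
var rig = new THREE.Object3D();
rig.add(childObject);
childObject.rotation.y = Math.PI; // child flipped 180 degrees
scene.add(rig);
rig.lookAt(target); // the parent does the aiming; query the rig's state later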

Related

Experiencing something odd when using THREE.Raycaster for collision detection (r68)

I've been using the THREE.Raycaster successfully to test collisions for many things in my game engine so far, it's great and it works well.
However, recently I've run into something quite peculiar which I cannot seem to figure out. From my point of view, my logic and code are sound but the expected result is not correct.
Perhaps I'm just missing something obvious so I thought I'd ask for some help.
I am casting rays out from the center of the top of a group of meshes, one by one, in a circular arc. The meshes are all children of a parent Object3D and the goal is to test collisions between the origin mesh and other meshes which are also children of the parent. To test my rays, I am using the THREE.ArrowHelper.
Here's an image of the result of my code - http://imgur.com/ipzYUsa
In this image, the ArrowHelper objects are positioned (origin:direction) exactly how I want them. But yeah, there's something wrong with this picture; the code that produces it is:
var degree = Math.PI / 16,
    tiles = this.tilesContainer.children,
    tilesNum = tiles.length,
    raycaster = new THREE.Raycaster(),
    rayDirections, rayDirectionsNum, rayOrigin, rayDirection, collisions,
    tile, i, j, k;

for (i = 0; i < tilesNum; i++) {
    tile = tiles[i];
    rayOrigin = new THREE.Vector3(
        tile.position.x,
        tile.geometry.boundingBox.max.y,
        tile.position.z
    );
    rayDirections = [];
    for (j = 0; j < Math.PI * 2; j += degree) {
        rayDirections.push(new THREE.Vector3(Math.sin(j), 0, Math.cos(j)).normalize());
    }
    rayDirectionsNum = rayDirections.length;
    for (k = 0; k < rayDirectionsNum; k++) {
        rayDirection = rayDirections[k];
        raycaster.set(rayOrigin, rayDirection);
        collisions = raycaster.intersectObjects(tiles);
        this.testRay(rayOrigin, rayDirection, collisions);
    }
}
The testRay method looks like this:
testRay: function (origin, direction, collisions) {
    var arrowHelper = new THREE.ArrowHelper(
        direction,
        origin,
        1,
        (collisions.length === 0) ? 0xFF0000 : 0x0000FF
    );
    this.scene.add(arrowHelper);
}
Now, obviously, something is off about this image. The rays that collide with other meshes should be blue, while those that do not collide should be red.
It's clear from this image that something is totally out of whack, and when I inspect the collisions, I get some really off results. For a lot of those rays which appear blue in the image, I'm getting a huge number of collisions, something like 30 collisions for a single ray sometimes, but nothing for the others even when they are right next to other tiles.
I just can't figure out what it might be. How can it be that so many rays that should be blue are red? And how can rays from tiles at the edge of the level register collisions (blue) with tiles that do not exist?
Really scratching my head (read: bashing my head repeatedly) over this one, any help would be super appreciated!
The solution was actually outside this code and not, I believe, related to the outdated r68 build.
When making the tile meshes, I needed to do three things to each of them:
tileMesh.matrixAutoUpdate = false;
tileMesh.updateMatrix();
tileMesh.updateMatrixWorld(); // this is new
I was doing the first two, just not the last one. Why this is necessary, I do not know; it seems a little odd to me, but this is what fixed my problem. I had an AxisHelper in the scene; if you look at the original image, you'll notice that all the blue ArrowHelper objects are actually pointing towards the AxisHelper. This is really weird, because the AxisHelper was added to the scene, not to tilesContainer. Adding the ArrowHelper objects to tilesContainer did not help.
In my setup, the raycaster code ran before the AxisHelper was added to the scene and before the initial render happened. The problem was also fixed if I moved the raycaster call to after the AxisHelper was added, but that was a hacky solution.
So the true fix was to add .updateMatrixWorld() to the tiles. The result now looks like this http://imgur.com/8LewqxL, which is correct (the ArrowHelper objects have been shortened in length so they don't overlap).
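One plausible explanation (my guess, not verified against r68): the Raycaster tests objects in world space via their .matrixWorld, and with matrixAutoUpdate disabled, .matrixWorld is only computed during a render or by an explicit call. Raycasting before the first render would therefore see every tile at the identity transform, i.e. at the origin, which is exactly where the AxisHelper sat. A small demonstration of the stale matrix:
var mesh = new THREE.Mesh(new THREE.BoxGeometry(1, 1, 1),
                          new THREE.MeshBasicMaterial());
mesh.position.set(5, 0, 0);
mesh.matrixAutoUpdate = false;
mesh.updateMatrix();                    // bakes the position into .matrix only
console.log(mesh.matrixWorld.elements); // still identity: a raycast sees (0, 0, 0)
mesh.updateMatrixWorld();               // now .matrixWorld picks up the translation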
Big thanks to Manthrax for his help on this one.
I think you are making a local vs. global space error somewhere. I can't see at a glance exactly where you go wrong, but all your position and direction calculations seem to be in the local system of the tilesContainer. Are you consistent in your local vs. global coordinate system handling?
For example, you add your arrowHelper to the scene instead of to the tilesContainer. It could be that the tilesContainer has a rotation set, and because of this the arrows point in a different direction than you expected.
What happens, for example, if you add the arrows to the tilesContainer instead?
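For example, a hedged sketch of building the ray in world space regardless of how tilesContainer is transformed (assuming all matrices are up to date):
// Map the local-space origin into world space before handing it to the raycaster:
var rayOrigin = new THREE.Vector3(tile.position.x,
                                  tile.geometry.boundingBox.max.y,
                                  tile.position.z);
this.tilesContainer.localToWorld(rayOrigin);
// Directions only need the rotation part of the container's world transform:
var containerQuat = new THREE.Quaternion();
this.tilesContainer.matrixWorld.decompose(new THREE.Vector3(), containerQuat, new THREE.Vector3());
rayDirection = rayDirection.clone().applyQuaternion(containerQuat);
raycaster.set(rayOrigin, rayDirection);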

PhaserJS: add physics to graphic objects

Most examples use sprites to add physics, but I want to add physics to objects created using the Graphics class. For example:
var square = game.add.graphics( 0, 0 );
//some definitions
game.physics.arcade.enable( square );
This doesn't work at all with graphics, but it does right away with sprites. Why is that and how can I achieve this?
Thanks in advance.
I had to investigate quite a bit, since it seems this is not the standard approach (at least tutorial-wise), but you have to create a BitmapData object and use canvas to draw the figures. This is not nearly as fun as using game.add.graphics to create circles and polygons, etc., but it works well.
This is how you create a platform:
//creates the BitmapData object, you can use it to create figures:
var floor = game.add.bitmapData( 400, 20 ); //width, height
//this fills the whole object with a color:
floor.fill( 200, 100, 0, 1 ); //Red, Green, Blue, Alpha
//floor will have a canvas context object to draw figures.
//Here are some more examples:
//http://www.html5canvastutorials.com/tutorials/html5-canvas-circles/
//after you finish drawing, you need to convert the object to a sprite:
var floorSprite = game.add.sprite( 100, 500, floor );
//this adds physics to the object and adds a .body property:
game.physics.arcade.enable( floorSprite );
//stops the object in place and prevents other objects from displacing it:
floorSprite.body.allowGravity = false;
floorSprite.body.immovable = true;
And that's how you can create a platform without having to rely on image files. I have seen a few tutorials using files instead of generating the platform, and I think that's such a waste.
Also, I think the reason you need to convert your vector graphics to a bitmap is that vector physics is quite heavy on the hardware (or so it seems).
Hope this can help a few more people!
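To go beyond flat fills, you can also draw shapes on the BitmapData's canvas context before turning it into a sprite. A small sketch along the same lines (the circle and its numbers are just an example):
// Draw a filled circle on a BitmapData via its 2D canvas context:
var disc = game.add.bitmapData(64, 64);
disc.ctx.beginPath();
disc.ctx.arc(32, 32, 30, 0, Math.PI * 2); // centerX, centerY, radius
disc.ctx.fillStyle = '#ff9900';
disc.ctx.fill();
// Convert it to a sprite and enable physics, exactly as with the platform above:
var discSprite = game.add.sprite(200, 100, disc);
game.physics.arcade.enable(discSprite);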
Maybe it works with this:
anySprite.addChild(yourGraphicsObject);
and after that:
game.physics.arcade.enable( anySprite );
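Untested, but fleshed out, that approach might look like this (a hedged sketch: the physics body belongs to the sprite, with the graphics object just riding along as a child):
// An otherwise empty sprite carries the physics body.
var carrier = game.add.sprite(100, 100);
var square = game.add.graphics(0, 0);
square.beginFill(0x00ff00);
square.drawRect(0, 0, 32, 32);
square.endFill();
carrier.addChild(square);
game.physics.arcade.enable(carrier);
carrier.body.setSize(32, 32); // match the body to the drawn rectangle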

three.js Object3D local location, versus global location

So originally I wanted my little 'ship' to have turrets that track a target. Here is the jsFiddle for it: http://jsfiddle.net/czGZF/2/. When they track, they act oddly. I noticed that lookAt() seems to think the turret is slightly off from where it actually is (at the origin), because of this piece of code:
turret_a.position.y = .25;
turret_a.position.z = 2;
However, I had done that so it would be a relative position for when I called the following (below) to add it to the 'base ship':
ship = new THREE.Object3D();
ship.add( ship_base );
ship.add( turret_a ) ;
When I changed the position of turret_a after it had been added to the ship, and after the ship had been added to the scene, the turret tracked mostly the way I wanted.
I guess my question is: why does the lookAt() function use the turret's old location, rather than its current location on its parent object, to determine the rotation angles it needs?
If you look at the code for Object3D.lookAt(), you will see:
// This routine does not support objects with rotated and/or translated parent(s)
Your code works if the parent ship is located at the origin and is not rotated.
Updated fiddle: http://jsfiddle.net/czGZF/4/
three.js r.59
From the API, the lookAt() method for Camera objects is defined to use world position, as you've discovered. This seems to be a relatively common way to handle things.
I'm not that familiar with the three.js API in particular, but it appears that if you want to get the global position of ball, you can use the following:
var targetPos = ball.position.clone();
ball.parent.localToWorld(targetPos); // .position is relative to the parent, so apply the parent's transform
Hopefully that gets you closer to your goal. Unfortunately, the fiddle you have seems to be (very) non-deterministic, so I can't quickly get a 100% solution for you.
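Given the parent-transform limitation noted in the other answer, one hedged workaround (using the question's names, and assuming matrices are up to date) is to express the target in the turret's parent's local space before calling lookAt():
// ball sits directly in the scene here, so its position is already world space;
// worldToLocal maps it into the ship's (the turret's parent's) coordinate frame.
var localTarget = ship.worldToLocal(ball.position.clone());
turret_a.lookAt(localTarget);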

Three.js outlines

Is it possible to have a black outline on my 3D models with three.js?
I would like graphics that look like Borderlands 2 (toon shading + black outlines).
I'm sure I'm coming in late; let's hope this solves someone's question later.
Here's the deal: you don't need to render everything twice, and the overhead is actually not substantial. All you need to do is duplicate the mesh and set the duplicate mesh's material side to THREE.BackSide. No double passes. You will be rendering two meshes instead, with most of the outline's geometry culled by WebGL's back-face culling.
Here's an example:
var scene = new THREE.Scene();
//Create main object
var mesh_geo = new THREE.BoxGeometry(1, 1, 1);
var mesh_mat = new THREE.MeshBasicMaterial({color : 0xff0000});
var mesh = new THREE.Mesh(mesh_geo, mesh_mat);
scene.add(mesh);
//Create outline object
var outline_geo = new THREE.BoxGeometry(1, 1, 1);
//Notice the second parameter of the material
var outline_mat = new THREE.MeshBasicMaterial({color : 0x00ff00, side: THREE.BackSide});
var outline = new THREE.Mesh(outline_geo, outline_mat);
//Scale the object up to have an outline (as discussed in previous answer)
outline.scale.multiplyScalar(1.5);
scene.add(outline);
For more details on backface culling, check out: http://en.wikipedia.org/wiki/Back-face_culling
The above approach works well if you want to add an outline to objects without adding a toon shader, and thus without losing "realism".
Toon shading by itself supports edge detection; Borderlands developed its 'cel' shader to achieve this effect.
In cel shading, developers can either use the object-duplication method (done at the [low] pipeline level) or use image-processing filters for edge detection. This is where the performance tradeoff between the two techniques comes in.
More info on cel shading: http://en.wikipedia.org/wiki/Cel_shading
Cheers!
Yes, it is possible, but not in a simple out-of-the-box way. For toon shading there are even shaders included in /examples/js/ShaderToon.js.
For the outlines, I think the most commonly suggested method is to render in two passes. The first pass renders the models in black at a slightly larger scale; the second pass is at normal scale with the toon shaders. This way you'll see the larger black models as an outline. It's not perfect, but I don't think there's an easy way out. You might have more success searching for "three.js hidden line rendering": while it looks different, a somewhat similar method is used to achieve it.
It's an old question, but here is what I did.
I created an outlined cel shader for my CG course. Unfortunately it takes three rendering passes; I'm currently trying to figure out how to remove one of them.
Here's the idea:
1) Render a normal-depth image to a texture.
In the vertex shader you do what you normally do: transform the position to screen space, and the normal to screen space as well.
In the fragment shader you calculate the depth of the pixel and then output the normal as the colour, with the depth as the alpha value:
float ndcDepth = (2.0 * gl_FragCoord.z - gl_DepthRange.near - gl_DepthRange.far) / (gl_DepthRange.far - gl_DepthRange.near);
float clipDepth = ndcDepth / gl_FragCoord.w;
2) Render the scene onto a texture with cel shading. I changed the scene's override material for this.
3) Make a quad, render both textures onto it, and have an orthographic camera look at it. The cel-shaded texture is simply rendered onto the quad; on the normal-depth texture you run some edge detection, and wherever an edge is found, the pixel is drawn black (the outline).
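In old-style three.js, the render-to-texture plumbing for step 1 might look roughly like this (normalDepthMaterial stands in for the custom ShaderMaterial described above; all names here are mine):
// Render the whole scene with one material into an offscreen target:
var normalDepthTarget = new THREE.WebGLRenderTarget(window.innerWidth, window.innerHeight);
scene.overrideMaterial = normalDepthMaterial;      // the normal-depth ShaderMaterial
renderer.render(scene, camera, normalDepthTarget); // old-style render-to-target call
scene.overrideMaterial = null;                     // restore materials for the cel pass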

raphael question - using animateAlong

I am using Raphael to do some SVG animation and cannot seem to get the animateAlong function to work. I keep getting the error "attrs[0] is undefined", referencing line 3450 of the uncompressed Raphael source.
Basically, I create a circle with a given center and then want to animate an image around that path. Here is that simple code:
var circle = paper.circle(circleCenterX, circleCenterY, circleRadius);
I then clone an image (since I plan to have a number of these on this path) and place it at the edge of the circle:
var wheelClone = wheel.clone();
var wheelRadius = parseInt(wheel8ImageWidth / 2, 10);
wheelClone.translate((circleCenterX + circleRadius) - 3, circleCenterY - wheelRadius);
where I initialize circleCenterX earlier with circleCenterX = circle.attr("cx");
This all works fine with image placed correctly - but it errors on animateAlong.
I have studied as many examples as I can find and have dissected the documentation, but I can't get the hang of it.
So I simply try to call the function, but I have no earthly idea what the documentation is referring to. The documentation animates a dot around a path, but it refers to two attributes, rx and ry, which I cannot suss out, both in an init function and then in the callback.
Here is what I have, where the rx and ry values are just made up, as I have no idea what they refer to:
var wheelAttr = {
    rx: 5,
    ry: 3
};
wheelClone.attr(wheelAttr).animateAlong(circle, 2000, true, function() {
    wheel.attr({rx: 4, ry: 4});
});
My current jsFiddle is a bit of a mess at the moment; I can clean it up, but I suspect I'm missing something obvious here?
Thanks to all
S
I don't think a circle is actually a valid path (i.e., something you can pass to animateAlong()). I think you need to create a path that is circular. See the following:
svg-animation-along-path-with-raphael
Hopefully, it will help.
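In other words, something like this might work (hedged: animateAlong exists only in older Raphael releases, and the variable names are taken from the question):
// A circle of radius r centred on (cx, cy), expressed as an SVG path string
// (two half-circle arcs), so it can serve as an animation path:
function circlePath(cx, cy, r) {
    return "M" + (cx - r) + "," + cy +
           "a" + r + "," + r + " 0 1,0 " + (2 * r) + ",0" +
           "a" + r + "," + r + " 0 1,0 " + (-2 * r) + ",0";
}
var circularPath = paper.path(circlePath(circleCenterX, circleCenterY, circleRadius));
circularPath.attr({ stroke: "none" }); // invisible guide path
// animateAlong(path, duration, rotate, callback) in 1.x-era Raphael:
wheelClone.animateAlong(circularPath, 2000, true);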
