Using this example code:
app.project.item(index).layers.addLight(name, centerPoint)
I created the following test code where I add a light to my second scene (composition) in my project to create a shadow:
var s2light1 = scene2.layers.addLight("s2light1", [1143,121]);
This works perfectly. But I now also want to set the 3rd (Z) value for the centerPoint in ExtendScript (as is possible in After Effects).
However, according to the After Effects CS6 scripting guide, it seems you can only set the X and Y values: "The center of the new camera, a floating-point array [x, y]. This is used to set the initial x and y values of the new camera's Point of Interest property. The z value is set to 0."
Is there another approach or workaround to set the Z value for the center point in ExtendScript that I can try?
newLight = app.project.item(1).layers.addLight("foo", [22, 33]);
//now set the point of interest ('center point') value:
newLight.property("Point of Interest").setValue([22, 33, 11]);
and to make a light not auto-orient (one-node):
newLight.autoOrient = AutoOrientType.NO_AUTO_ORIENT;
In that case you would control the Position and Rotation properties instead; there is no point of interest.
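For the one-node case, a minimal ExtendScript sketch (the comp index and values here are placeholders, not from your project):
var comp = app.project.item(1); // assumes item 1 is a composition
var oneNodeLight = comp.layers.addLight("foo", [22, 33]);
oneNodeLight.autoOrient = AutoOrientType.NO_AUTO_ORIENT;
// With auto-orientation off there is no point of interest to worry about;
// drive the light's transform directly:
oneNodeLight.property("Position").setValue([22, 33, 11]); // x, y, z
oneNodeLight.property("Orientation").setValue([0, 0, 0]); // aim it yourself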
I'm creating an app where a person (right now I'm using a cone-shape) is standing on some surface (right now I'm using a cylinder laid lengthwise) and I'd like their feet to orient toward some point (right now it's the center of the cylinder).
(edit: I just realized that my Z axis in this photo is pointing in the wrong direction; it should be pointing towards the camera, but the question remains unchanged.)
Here is a version of the code similar to what I'm trying to accomplish. https://codepen.io/liamcorbett/pen/YMWayJ (Use arrow keys to move the cone)
//...
person = CreatePerson();
person.mesh.up = new THREE.Vector3(0, 0, 1);
//
// ...
//
function updateObj(obj, aboutObj = false) {
  let mesh = obj.mesh;
  if (aboutObj) {
    mesh.lookAt(
      aboutObj.mesh.position.x,
      aboutObj.mesh.position.y,
      mesh.position.z);
  }
}
//
// ...
//
function animate() {
  // ...
  updateObj(person);
  // ...
}
The code above gives me something similar to what I'm looking for, but the issue is that lookAt() seems to always point the local Positive Z-axis in some direction, and I'd much prefer that it point the local Negative Y-axis instead.
I'd prefer to not change the x,y,z axes of the model itself, as I feel that's going to be a pain to deal with when I'm applying other logic to the person object.
Is there a way to change which axis lookAt() uses? Or am I going to have to roll my own lookAt() function? Thanks ~
Is there a way to change which axis lookAt() uses?
No, the default local forward vector for 3D objects (excluding cameras) is (0, 0, 1). Unlike some other engines, three.js does not allow you to configure the forward vector, only the up vector. But this is not really helpful in your case.
You can try to transform the geometry in order to achieve a similar effect.
If you don't want to do this for some reason and you still want to use Object3D.lookAt(), you have to compute a different target vector (so not the cylinder's center).
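A minimal sketch of that geometry transform, assuming the person is a THREE.ConeGeometry (tip at local +Y, base, i.e. the "feet", at local -Y) and `cylinder` is a placeholder for your surface mesh:
// Bake a one-time rotation into the geometry so the base (-Y) ends up on +Z,
// the axis that lookAt() aims at the target.
const coneGeo = new THREE.ConeGeometry(1, 2, 16);
coneGeo.rotateX(-Math.PI / 2); // -Y -> +Z
const person = new THREE.Mesh(coneGeo, new THREE.MeshNormalMaterial());

// Now this points the feet at the cylinder's center:
person.lookAt(cylinder.position);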
Even if the forward vector of the lookAt() method can't be changed (as @Mugen87 said), you can still adjust the local rotation afterwards, as long as you know in advance the difference between the forward Z axis that lookAt() uses and the axis along which you consider your mesh to be "up" (e.g. a person standing up along the Y axis).
Basically, in your case, just add this line after the lookAt() call:
mesh.rotateOnAxis( new THREE.Vector3(1,0,0), Math.PI * -0.5 );
And the cone will look up :)
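In the question's updateObj() that would look something like this (a sketch; it keeps the same lookAt() target as your version):
function updateObj(obj, aboutObj = false) {
  let mesh = obj.mesh;
  if (aboutObj) {
    mesh.lookAt(
      aboutObj.mesh.position.x,
      aboutObj.mesh.position.y,
      mesh.position.z);
    // compensate for lookAt() aiming local +Z: tip the mesh back by 90°
    mesh.rotateOnAxis(new THREE.Vector3(1, 0, 0), Math.PI * -0.5);
  }
}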
I'm searching for how WebGL / Three.js in general set their heights and widths.
As in, what number system do they use to set x, y, z?
For the example below, the arrow is pointing straight up with Y set to 1, but in pixels it looks more like 150 - 200 pixels.
Is there a helper function I can write where I pass in 100 for the pixels and it returns the correct float value to use with THREE.js?
Excuse me if I'm not using the correct terms when it comes to number systems, but this is the only way I know how to describe it at this point.
The only thing I am missing below is creating the scene, but the rest is there; the image shows what it looks like.
Once again, is there a helper function that I can pass pixels to and get back the correct float value for use with THREE.js?
Here is my arrow:
//scene.remove(cube);
scene.remove(group);
// create a new one
var sphere = createMesh(new THREE.SphereGeometry(5, 10, 10));
var cube = createMesh(new THREE.BoxGeometry(6, 6, 6));
sphere.position.set(controls.spherePosX, controls.spherePosY, controls.spherePosZ);
cube.position.set(controls.cubePosX, controls.cubePosY, controls.cubePosZ);
// add it to the scene.
// also create a group, only used for rotating
var group = new THREE.Group();
group.add(sphere);
group.add(cube);
scene.add(group);
controls.positionBoundingBox();
var arrow = new THREE.ArrowHelper(new THREE.Vector3(0, 1, 0), new THREE.Vector3(0, 0, 0), 10, 0x0000ff); // dir, origin, length, color
scene.add(arrow);
I receive these JS objects with the pixel values and then write to the screen, but how do I convert the pixels down to usable units in 3D?
The lengths in 3D do not translate to lengths in 2D uniformly, especially when perspective projection is employed.
Let's consider your example: two arrows of the same 3D length and the same orientation would render to different 2D lengths depending on their distance from the camera. The arrow that is closer to the camera will be rendered longer than the arrow farther from the camera.
In order to maintain a certain pixel length for a certain arrow, you'd have to adjust the 3D length of the arrow every time some parameter of the camera changes (e.g. position, orientation, FOV), and also every time the position or orientation of the arrow changes. This is possible (see the comment by @WacławJasper) but rather complicated.
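As a sketch of what such an adjustment could look like for a THREE.PerspectiveCamera (all names are placeholders; `distance` is the distance from the camera to the object along the view direction and `viewportHeightPx` is the canvas height in pixels, neither of which comes from your snippet):
// Convert a desired on-screen length in pixels to a world-space length.
// Only valid for objects roughly `distance` units away from the camera.
function pixelsToWorldUnits(pixels, distance, camera, viewportHeightPx) {
  var fovInRadians = camera.fov * Math.PI / 180;                 // PerspectiveCamera.fov is in degrees
  var visibleHeight = 2 * distance * Math.tan(fovInRadians / 2); // world height of the frustum slice at that distance
  return pixels * (visibleHeight / viewportHeightPx);
}

// e.g. an arrow that should appear ~100px long on screen:
// var length = pixelsToWorldUnits(100, camera.position.distanceTo(arrowOrigin), camera, renderer.domElement.clientHeight);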
If you could explain the bigger picture of what you wish to achieve there might be a simpler solution to your problem.
In Three.js, I am capable of rotating an object about its origin. If I were to do this with a line, for instance, the line rotates, but the positions of its vertices are not updated with their new locations. Is there some way to apply the rotation matrix to the positions of the vertices to find the new position of each point? Say I rotate a line with points at (0,0,0) and (0,100,100) by 45° on the x, 20° on the y, and 100° on the z. How would I go about finding the actual position of the vertices with respect to the entire scene?
Thanks
Yes, 'entire scene' means world position.
THREE.Vector3() has an applyMatrix4() method;
you can do the same thing that the shader does, so in order to transform a vertex into world space you would do this:
yourPoint.applyMatrix4(yourObject.matrixWorld);
To transform that into camera (view) space, apply the inverse of the camera's world matrix next:
yourPoint.applyMatrix4(camera.matrixWorldInverse);
To get an actual screen position in -1 to 1:
yourPoint.applyMatrix4(camera.projectionMatrix);
you would access your point like this
var yourPoint = yourObject.geometry.vertices[0]; //first vertex
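Putting those steps together, a sketch using the placeholder names from above (matrixWorldInverse is kept up to date by the renderer, or by camera.updateMatrixWorld() in recent three.js versions; clone() is used so the original vertex is left untouched):
yourObject.updateMatrixWorld();

var yourPoint = yourObject.geometry.vertices[0].clone(); // copy of the first vertex
yourPoint.applyMatrix4(yourObject.matrixWorld);      // local  -> world space
yourPoint.applyMatrix4(camera.matrixWorldInverse);   // world  -> camera (view) space
yourPoint.applyMatrix4(camera.projectionMatrix);     // camera -> screen position in roughly -1 to 1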
Also, rather than doing this three times, you can combine the matrices. Didn't test this, but something along these lines (the multiplication order matters):
var neededPVMmatrix = new THREE.Matrix4().multiplyMatrices(camera.matrixWorldInverse, yourObject.matrixWorld);
neededPVMmatrix.multiplyMatrices(camera.projectionMatrix, neededPVMmatrix);
If you need a good tutorial on what this does under the hood, I recommend this:
Alteredq posted everything there is to know about three.js matrices here
edit
One thing to note though: if you want just the rotation, not the translation, you need to use the upper 3x3 portion of the model's world matrix, which is the rotation part. This might be slightly more complicated. I forgot exactly what three.js gives you, but I think the normalMatrix would do the trick, or perhaps you can convert your THREE.Vector3() to a THREE.Vector4() and set .w to 0; this will prevent any translation from being applied.
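A sketch of the rotation-only case via a quaternion, which avoids extracting the 3x3 by hand (this assumes the object's world matrix carries no non-uniform scale):
var rotationOnly = new THREE.Quaternion().setFromRotationMatrix(yourObject.matrixWorld);
var rotatedPoint = new THREE.Vector3(0, 100, 100).applyQuaternion(rotationOnly); // rotated, but not translated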
edit2
If you want to move the line point in your example, instead of applying it to the particle, apply it to:
var yourVertexWorldPosition = geo.vertices[1].clone(); //this is your second line point, whatever you set it to in your init function
yourVertexWorldPosition.applyMatrix4(line.matrixWorld); //this transforms the copied vector into world space, based on the matrix you provide (line.matrixWorld)
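An equivalent sketch that lets three.js apply line.matrixWorld for you (it assumes `line` is your THREE.Line object and `geo` its geometry, as in your init function):
line.updateMatrixWorld();
var secondPointWorld = line.localToWorld(geo.vertices[1].clone()); // second line point in world (scene) coordinates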
I'm trying to make a Tiny Wings-like game in JavaScript.
I first saw a technique using Box2D; I'm using the box2dweb-closure version (because of the memory leak fixes).
In short, I explode the curve into polygons so it looks like this:
I also tried Chipmunk-js, where I use the segment shape to simulate my ground, like this:
In both cases, I'm experiencing some "crashes" or "bumps" at the common points between polygons or segments when a circle is rolling.
I asked about it for Chipmunk, and the author said he implemented a radius property for segments to reduce this behavior. I tried it and it indeed did the trick, but it's not perfect: I still have some bumps (I had to set the radius to 30px to get a positive effect).
The "bumps" appear at the shared points between two polygons:
I'm using the edging technique, as illandril suggested to me (he only tested it with polygon-polygon contact), to avoid the circle crashing into an edge:
I also tried adding the bullet option, as Luc suggested, and nothing seems to change.
Here is the demo of the issue.
You can try changing these values to check:
bullet option
edge size
iterations count
the physics
(only tested on latest dev Chrome)
Be patient (or change the horizontal gravity) and you'll see what I mean.
Here is the repo for those interested.
The best solution is edge shapes with ghost vertices, but if that's not available in the version/port you're using, the next best thing is the technique shown in the diagram in your question, called 'edging', but extend the polygons further underground with a very shallow slope, like in this thread: http://www.box2d.org/forum/viewtopic.php?f=8&t=7917
I first thought the problem could come from the change of slope between two adjacent segments, but since you still get bumps on a flat surface made of polygons, I think the problem is rather the ball hitting the corner of a polygon.
I don't know if you can set up two sets of polygons overlapping each other? Just use the same interpolation calculations and generate a second set of polygons, just like in the diagram hereafter (a rough sketch of the generation follows below): you build the red set of polygons, then add the green set by setting the left vertices of a green polygon in the middle of a red polygon and its right vertices in the middle of the next red polygon.
(diagram: the green set of polygons overlapping the red set at their midpoints)
This should work on concave curves and... well you should be flying over the convex ones anyway.
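A rough sketch of generating that second (green) set from the same sampled curve points; everything here is hypothetical: `points` stands for the samples already used to build the first (red) set, and `createGroundPoly()` / `groundBottomY` stand for whatever your project already uses to turn four vertices into a static ground body:
function midpoint(a, b) {
  return { x: (a.x + b.x) / 2, y: (a.y + b.y) / 2 };
}

// Each green polygon spans from the middle of one red polygon to the middle of the next one.
for (var i = 0; i + 2 < points.length; i++) {
  var leftMid = midpoint(points[i], points[i + 1]);
  var rightMid = midpoint(points[i + 1], points[i + 2]);
  createGroundPoly([
    leftMid,
    rightMid,
    { x: rightMid.x, y: groundBottomY },
    { x: leftMid.x, y: groundBottomY }
  ]);
}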
If this doesn't work, try using a large number of polygons to build the slope. Use a tenth of the circle's radius for the polygon width, maybe even less. That should reduce your slope discontinuities.
-- Edit
In Box2D.js, line 5082 (in this repo at least), you have the PreSolve(contact, manifold) function that you can override to check whether the manifolds (the directions in which the snowball is impulsed when it collides with the polygons) are correct.
To do so, you would need to recover the manifold vector and compare it to the normal of the curve. It should look like this (maybe not exactly):
Box2D.Dynamics.b2ContactListener.prototype.PreSolve = function (contact, oldManifold) {
  // contact instanceof Box2D.Dynamics.Contacts.b2Contact == true
  var localManifold, worldManifold, xA, xB, man_vect, curve_vect, normal_vect, angle;

  localManifold = contact.GetManifold();
  if (localManifold.m_pointCount == 0)
    return; // or raise an exception

  worldManifold = new Box2D.Collision.b2WorldManifold();
  contact.GetWorldManifold(worldManifold);

  // deduce the impulse direction from the manifold points
  man_vect = worldManifold.m_normal.Copy();

  // we need two points close to & surrounding the collision to compute the normal vector
  // not sure this is the right order of magnitude
  xA = worldManifold.m_points[0].x - 0.1;
  xB = worldManifold.m_points[0].x + 0.1;
  man_vect.Normalize();

  // now we have the abscissas, let's get the ordinate of these points on the curve
  // the subtraction of these two points will give us a vector parallel to the curve
  var SmoothConfig = {
    params: {
      method: 'cubic',
      clip: 'mirror',
      cubicTension: 0,
      deepValidation: false
    },
    options: {
      averageLineLength: .5
    }
  };

  // get the points, smooth and smooth config stuff here
  smooth = Smooth(global_points, SmoothConfig);
  curve_vect = new Box2D.Common.Math.b2Vec2(xB, smooth(xB)[1]);
  curve_vect.Subtract(new Box2D.Common.Math.b2Vec2(xA, smooth(xA)[1]));

  // now turn it to have a normal vector, turned upwards
  normal_vect = new Box2D.Common.Math.b2Vec2(-curve_vect.y, curve_vect.x);
  if (normal_vect.y > 0)
    normal_vect.NegativeSelf();
  normal_vect.Normalize();
  worldManifold.m_normal = normal_vect.Copy();

  // and finally compute the angle between the two vectors
  angle = Box2D.Common.Math.b2Math.Dot(man_vect, normal_vect);
  $('#angle').text("" + Math.round(Math.acos(angle) * 36000 / Math.PI) / 100 + "°");
  // here try to raise an exception if the angle is too big (maybe after a few ms)
  // with different thresholds on the angle value to see if the bumps correspond
  // to a manifold that's not normal enough to your curve
};
I'd say the problem has been tackled in Box2D 2.2.0; see its manual, section 4.5, "Edge Shapes".
The thing is, it's a feature of the 2.2.0 version, along with the chain shape, and box2dweb is actually ported from 2.2.1a; I don't know about box2dweb-closure.
Anything I've tried by modifying Box2D.Collision.b2Collision.CollidePolygonAndCircle has resulted in erratic behaviour, at least part of the time (e.g. the ball bumping in random directions, but only when it rolls slowly).
I'm going to create a 3D Earth with a search input. Could someone give guidance on how to write code that finds a point (an exact place) from the search input, using WebGL?
I think your question is really vague, but I can imagine that what you want to do is to rotate your 3D Earth so the point you queried for appears in the center of the view (or, which is the same thing, on the view axis of the camera).
To do it you need to:
1. Assign every landmark a set of spherical coordinates. Given that you are locating points on the surface of the sphere, you can forget about the radius and only assign an elevation and an azimuth to each point.
2. Then you write the code for the user to input the point of interest. Say "Rome".
3. You look for this point in a JavaScript array and recover the elevation and azimuth values.
4. You apply the corresponding rotations to your Model-View matrix. Assuming you are using glMatrix, you should have something like this:
var M = mat4.create();
var Y_axis = [0,1,0];
var X_axis = [1,0,0];
mat4.rotate(M,azimuth,Y_axis);
mat4.rotate(M,elevation,X_axis);
The point of interest should be displayed now.
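A minimal sketch of the lookup-plus-rotation step, using the same old-style glMatrix rotate signature as above (the place names and angles below are made up for illustration):
var landmarks = {
  "Rome":   { elevation: 41.9 * Math.PI / 180, azimuth: 12.5 * Math.PI / 180 },
  "Sydney": { elevation: -33.9 * Math.PI / 180, azimuth: 151.2 * Math.PI / 180 }
};

function lookAtLandmark(name) {
  var landmark = landmarks[name];
  if (!landmark) return null; // not found
  var M = mat4.create();
  mat4.identity(M);                              // make sure we start from identity
  mat4.rotate(M, landmark.azimuth, [0, 1, 0]);   // spin about the polar (Y) axis
  mat4.rotate(M, landmark.elevation, [1, 0, 0]); // tilt towards/away from the camera
  return M;                                      // use as the Model-View matrix when drawing the Earth
}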