What I want to achieve is a camera rotation like http://www.keithclark.co.uk/labs/3dcss/demo/ . It's not perfect and sometimes the camera breaks, but that's the idea.
I would like the rotation to be similar to a human view, but I have only managed to obtain a rotation around a certain point. This is an example of what I obtained: http://jsfiddle.net/gaAXk/3/.
As I said before, I would like a human-like behaviour.
I also tried -webkit-transform-origin, but with no better result.
Any help/suggestion will be highly appreciated.
The problem here is the following:
To give a human-like behavior, when the point of view moves you should calculate the new positions on the x/y/z axes for the objects (not just the rotation angle in the case of a rotation, for instance).
CSS transforms work in the following way: we give a scene a perspective, for example 800px. Objects will then be visible with a Z position of up to 800px; if the Z position is, for example, 1000px, the object will be behind our point of view, so we won't be able to see it.
That said, after a rotation you should calculate the new positions of the items based on the new point of view.
To be clearer I've updated your example with much simpler code (it only supports rotation and there's just one image): http://jsfiddle.net/gaAXk/12/
The perspective in the example is 800px.
The image is initially placed at x=0px, y=0px, z=0px. So it will be visible in front of us at a "distance" of 800px.
When we rotate the point of view, the element should move along a circumference around the point of view, so the x, z positions and the rotation angle of the element need to be updated.
The element in the example moves along a circumference with 800px radius (the calculatePos() function does the trick).
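For reference, here is a minimal sketch of the kind of calculation calculatePos() performs (the exact code in the fiddle may differ; the names below are illustrative and the signs depend on which rotation direction you choose):

// Hypothetical sketch: place the element on a circle of radius 800px (the same
// value as the CSS perspective) around the point of view, and rotate it so it
// keeps facing the viewer.
function calculatePos(angleDeg) {
    var radius = 800;
    var rad = angleDeg * Math.PI / 180;
    return {
        x: -radius * Math.sin(rad),             // sideways offset
        z: radius - radius * Math.cos(rad),     // toward/away from the viewer
        rotateY: angleDeg                       // keep the element facing us
    };
}

function applyPos(el, angleDeg) {
    var p = calculatePos(angleDeg);
    el.style.webkitTransform =
        'translate3d(' + p.x + 'px, 0, ' + p.z + 'px) rotateY(' + p.rotateY + 'deg)';
}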
The same calculation should be updated if we change position (if the point of view gets closer to some objects, and further from others).
This isn't so trivial. If anyone has better solutions (I'm not a 3D expert), I will be glad to hear some.
The short story: I am trying to use THREE.TrackballControls to move the camera, but the (upside-down) x-z plane is where the x-y plane should be. Can anyone help?
The long story: I've been trying to add device orientation controls to a project. I have already used the THREE.TrackballControls to move the camera when mouse and touch are being used, and the direction the camera points feeds into other functionality. I am using v69 of three.js.
So, I have been looking into using THREE.DeviceOrientationControls to enable device orientation. Specifically, what I'm after is for rotation to be in the x-y plane when the device is upright in front of me and I turn around. Or in other words, when the device is face up on the table it is looking in the -ve z-direction, and when upside down it is looking in the +ve z-direction. Sounds fairly straightforward, right?
There are plenty of examples around to follow, but I seem to be stuck with axes incorrectly orientated, i.e. what should be my x-y plane is coming out as the x-z plane, but upside-down. I created a test page based on an example with a BoxGeometry cube I found, and then added red, yellow and blue spheres to the middle of the faces that corresponded to the +ve x-, y-, and z-directions respectively, and then pale versions of the same coloured spheres for the corresponding -ve directions. Testing this on an iPad confirmed that the scene axes and the real world axes were not lining up.
I have spent a bit of time trying to get to grips with how this Object works, and the main sticking point is in the function returned by setObjectQuaternion() which does the tricky bit:
...
return function (quaternion, alpha, beta, gamma, orient) {
    euler.set(beta, alpha, -gamma, 'YXZ'); // 'ZXY' for the device, but 'YXZ' for us
    quaternion.setFromEuler(euler); // orient the device
    quaternion.multiply(q1); // camera looks out the back of the device, not the top
    quaternion.multiply(q0.setFromAxisAngle(zee, -orient)); // adjust for screen orientation
}
...
where q1 is a quaternion for a -pi/2 rotation around the x-axis, and zee is a unit z-axis vector.
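For context, those helpers are set up in the three.js (r69) source roughly as follows (paraphrased from memory, so check your own copy of THREE.DeviceOrientationControls):

var zee = new THREE.Vector3(0, 0, 1);    // unit z-axis vector
var euler = new THREE.Euler();
var q0 = new THREE.Quaternion();         // scratch quaternion for the screen-orientation term
var q1 = new THREE.Quaternion(-Math.sqrt(0.5), 0, 0, Math.sqrt(0.5)); // -pi/2 around the x-axis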
I set up a jsfiddle here to help me debug this, but it wasn't rendering correctly on the iPad itself, so I had to add in some faking of orientation events, and plenty of logging, and continue on a normal desktop + console. This jsfiddle goes through each of the 6 basic orientations and sees whether the camera is looking in the direction I expect.
(Initially it would seem that a pi/2 rotation around the x-axis is what is required, but removing the quaternion.multiply(q1) doesn't fix it - I haven't even started looking at non-zero screen orientations yet.)
Ultimately, I'd like to make this more like the TrackballControls/OrbitControls with a target point that the camera always looks at (unless panned) and rotates around, once I've figured this "simple" stuff out.
Anybody have any ideas how I can orientate my camera properly?
I've been using Perlin noise to generate tile-based, isometric landscapes. So far I've been using the noise value as a height map for the tiles themselves: Math.floor(noise * 10), basically. This generates perfectly nice-looking but linear maps. However, I found the "mountains" rather boring-looking, so I applied an exponent: Math.floor(Math.pow((noise / 4), 2.3)). This pushes the higher values up, producing the image attached.
These height values are then stored in a 2D grid, giving me the x, y and z I need to draw the map to the screen.
The drawback is kind of obvious: there are gaps in my mountain that should be filled up. I'm just not sure where to start since that is information that I can no longer store in a 2d grid. I guess I can cheat using "longer" tiles but that is kind of lame. Any suggestions?
If you need more info I'm happy to explain. Maybe I'm barking up the wrong tree.
Before you draw a tile far in the back, have a look at the two neighboring tiles to its left and right that are closer to the viewer. Get the lower of their heights and check whether it is lower than your back tile's height minus one (because that would cause a gap). If it is, start drawing the column in the back at this "low height" and stack tiles on it until you reach the height you want. Then you can draw the next tile that is closer to the viewer using the same algorithm.
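A rough sketch of that fill pass (hypothetical names; exactly which cells count as the "closer" neighbours depends on your projection and draw order):

// heights[row][col] is the 2D height grid; drawTile(col, row, z) draws one tile.
// Rows are drawn back to front, so row + 1 is closer to the viewer here.
function drawColumn(col, row, heights) {
    var h = heights[row][col];
    var nearer = heights[row + 1] || [];
    var left  = nearer[col - 1] !== undefined ? nearer[col - 1] : h;
    var right = nearer[col + 1] !== undefined ? nearer[col + 1] : h;
    var low = Math.min(left, right);

    // if the nearer neighbours are more than one step lower, a gap would show,
    // so stack tiles from the low height up to h; otherwise one tile is enough
    var start = (low < h - 1) ? low : h;
    for (var z = start; z <= h; z++) {
        drawTile(col, row, z);
    }
}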
Edit: But I am just wondering if it would maybe look a bit awkward with so many stacked tiles. Maybe it's better to just stretch the soil layer down to the "low height".
I am working on this browser-based experiment where I am given N specific circles (let's say they have a unique picture in them) and need to position them together, leaving as little space between them as possible. It doesn't have to be arranged in a circle, but they should be "clustered" together.
The circle sizes are customizable and a user will be able to change them by dragging a JavaScript slider, changing some circles' sizes (for example, at 10% of the slider circle 4 will have a radius of 20px, circle 2 10px, circle 5 stays the same, etc.). As you may have already guessed, I will try to "transition" the resizing and repositioning smoothly when the slider is being moved.
The approach I have tried so far: instead of manually trying to position them, I've tried to use a physics engine.
The idea:
- place some kind of gravitational pull in the center of the screen
- use a physics engine to take care of the ball collisions
- during the "drag the time" slider event I would just set different ball sizes and let the engine take care of the rest
For this task I used "box2Dweb". I placed a gravitational pull at the center of the screen; however, it took a really long time until the balls settled in the center, and they floated around. Then I put a small static ball in the center so they would hit it and then stop. It looked like this:
The results were a bit better, but the circles still moved for some time before they went static. Even after playing around with variables like ball friction and different gravitational pulls, the whole thing just floated around and felt very "wobbly", while I wanted the balls to move only when I drag the time slider (when they change sizes). Plus, Box2D doesn't allow you to change the sizes of the objects, so I would have to hack together a workaround.
So, the Box2D approach made me realize that maybe leaving a physics engine to handle this isn't the best solution for the problem. Or maybe I have to include some other force I haven't thought of. I found this similar question to mine on StackOverflow. However, the very important difference is that it just generates some n unspecific circles "at once" and doesn't allow for additional manipulation of specific balls' sizes and positions.
I am really stuck now; does anyone have any ideas on how to approach this problem?
Update: it's been almost a year now and I totally forgot about this thread. What I did in the end was stick to the physics model and reset forces / stop the simulation in almost-idle conditions. The result can be seen here: http://stateofwealth.net/
The triangles you see are inside those circles; the remaining lines are connected via a Delaunay triangulation algorithm.
I recall seeing a d3.js demo that is very similar to what you're describing. It's written by Mike Bostock himself: http://bl.ocks.org/mbostock/1747543
It uses a quadtree for fast collision detection and a force-based graph, both of which are d3.js utilities.
In the tick function, you should be able to add a .attr("r", function(d) { return d.radius; }) which will update the radius each tick for when you change the nodes' data. Just for starters, you can set it to return a random value and the circles should jitter around like crazy.
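As a rough sketch of that tick handler (d3 v3 API, as in the linked demo; force, svg and the radius field are whatever your setup and data actually use):

// Sketch: reposition circles each tick and re-read the (possibly changed)
// radius from the bound data so slider-driven resizes animate as well.
force.on("tick", function () {
    svg.selectAll("circle")
        .attr("cx", function (d) { return d.x; })
        .attr("cy", function (d) { return d.y; })
        .attr("r",  function (d) { return d.radius; });
});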
(Not a comment because it wouldn't fit)
I'm impressed that you've brought in Box2D to help with the heavy lifting, but unfortunately it is probably not well suited to your requirements, as Box2D is at its best when you are simulating rigid objects and their collision dynamics.
I think if you really consider what it is that you need, it isn't so much a rigid-body dynamics problem at all. You actually need none of the complexity of Box2D: all of your geometry consists of spheres (which, I assure you, are vastly simpler to model than the arbitrary convex polygons that IMO Box2D's complexity arises from), and, as you mention, Box2D's inability to smoothly change the geometric parameters isn't helping, since it bogs down the browser with unnecessary geometry allocations and deallocations and fails to produce any sort of smooth animation.
What you are probably looking for is an algorithm or method to evolve the positions of a set of coordinates (each with a radius that is also potentially changing) so that they stay separated by their radii and also minimize their distance to the center position. If this has to be smooth, you can't just apply the minimal solution every time, because the optimal configuration might shift dramatically at particular points along your slider's movement and you would get "warping". Suffice it to say there is a lot of tweaking for you to do, but not really anything scarier than what one must contend with inside of Box2D.
How important is it that your circles do not overlap? I think you should just do a simple iterative "solver" that first tries to bring the circles toward their target (center of screen?), and then tries to separate them based on radii.
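A minimal sketch of such an iterative pass (plain JavaScript; circles is a hypothetical array of {x, y, r} objects, and you would run this a few times per frame or per slider change):

// One relaxation pass: pull every circle toward the target point, then push
// apart any pair that overlaps. Repeating this converges toward a fairly
// tight, non-overlapping cluster without a full physics engine.
function relax(circles, targetX, targetY, pull) {
    // 1. attraction toward the cluster centre (pull is a small factor, e.g. 0.05)
    circles.forEach(function (c) {
        c.x += (targetX - c.x) * pull;
        c.y += (targetY - c.y) * pull;
    });

    // 2. pairwise separation based on radii
    for (var i = 0; i < circles.length; i++) {
        for (var j = i + 1; j < circles.length; j++) {
            var a = circles[i], b = circles[j];
            var dx = b.x - a.x, dy = b.y - a.y;
            var dist = Math.sqrt(dx * dx + dy * dy) || 0.001;
            var overlap = a.r + b.r - dist;
            if (overlap > 0) {
                var ox = (dx / dist) * overlap * 0.5;
                var oy = (dy / dist) * overlap * 0.5;
                a.x -= ox; a.y -= oy;
                b.x += ox; b.y += oy;
            }
        }
    }
}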
I believe if you try to come up with a simplified mathematical model for the motion that you want, it will be better than trying to get Box2D to do it. Box2D is magical, but it's only good at what it's good at.
At least for me, it seems like the easiest solution is to first set up the circles in a cluster. So first place the largest circle in the center, and put the second circle next to the first one. For the third one, you can just put it next to the first circle and then move it along the edge until it hits the second circle.
All the other circles can follow the same method: place it next to an arbitrary circle, and move it along the edge until it is touching, but not intersecting, another circle. Note that this won't make it the most efficient clustering, but it works. After that, when you expand, say, circle 1, you'd move all the adjacent circles outward, and shift them around to re-cluster.
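A hedged sketch of that geometric step ("move it along the edge until it is touching another circle"): placing a new circle of radius rc so it touches two already-placed circles a and b amounts to intersecting two circles of radii a.r + rc and b.r + rc (the names below are illustrative):

// Centre of a new circle (radius rc) tangent to circles a and b, each {x, y, r}.
// Returns one of the two candidate positions, or null if it cannot touch both.
function tangentPosition(a, b, rc) {
    var d1 = a.r + rc, d2 = b.r + rc;          // required centre distances
    var dx = b.x - a.x, dy = b.y - a.y;
    var d = Math.sqrt(dx * dx + dy * dy);
    if (d === 0 || d > d1 + d2 || d < Math.abs(d1 - d2)) return null;

    // standard circle-circle intersection
    var along = (d1 * d1 - d2 * d2 + d * d) / (2 * d);
    var h = Math.sqrt(Math.max(0, d1 * d1 - along * along));
    var px = a.x + (along * dx) / d, py = a.y + (along * dy) / d;
    return { x: px - (h * dy) / d, y: py + (h * dx) / d }; // one of the two solutions
}

In practice you would try both candidate positions and keep the one that doesn't intersect any other placed circle.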
I'm trying to write a script (JavaScript) against the API of a virtual tabletop program so I can manipulate some tokens (Car Wars :)).
I'm sort of finding the answer, but it seems like I'm struggling and reinventing the wheel, so I thought I'd ask for help. One reason I'm getting confused is that the program returns results with +y pointing down and degrees going clockwise, which is different from what all the trig formulas want (counter-clockwise and +y up).
Here is what I have access to: the rectangle rotates around its centre, and I have the centre point (x, y), width, height, and rotation. I've got the code working for moving the rectangle in the direction of the rotation, side to side, up and down, etc. Now I need to be able to rotate it around any of the four corners; any point would be nice, but the four corners are all that's needed.
It won't let me include an image since I'm new so I hope the description is good enough. I had an image all done up. :(
In the API I can't actually draw the rectangle; I can only set its rotation and centre value. So my thought was: if I can find the x, y of one corner currently, then rotate the rectangle the desired degrees around the centre (I can do this easily by setting the rectangle's rotation), I can find the new x, y of that same corner. Then I will know the offset and can apply that to the centre (that's how the rectangle is moved as well).
So I need to be able to find the x, y of any corner of a rectangle at any given starting angle, then again at a new angle rotated about its centre. This offset could then easily be applied to the centre x, y, and the rectangle would appear to have rotated around one of its corners.
Thanks for any help you can give. I'm hoping I will eventually figure it out, just writing this description out actually has helped me think it through. But I'm currently stuck!
Konrad
The trick to rotating around an arbitrary point in 2D (e.g. one of the four corners of the rectangle) is to first translate the vertices of the shape so that the point around which you want to rotate is at the origin (i.e. 0,0).
To achieve this:
1. Translate your rectangle by (-x, -y).
2. Rotate your rectangle by the desired angle.
3. Translate your rectangle by (x, y) to place it back where it originally was.
where (x,y) is the x/y coordinates of the point around which to rotate.
You can use negative angles to adjust for clockwise rotations.
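A compact sketch of those three steps applied to a single point (plain JavaScript; angle in radians, and per the note above you can negate it for a clockwise, y-down convention). To rotate the rectangle about a corner, run this on the centre point with the corner as the pivot, then set the new centre and rotation:

// Rotate point (px, py) around pivot (x, y) by angle (radians):
// translate so the pivot is at the origin, rotate, translate back.
function rotateAround(px, py, x, y, angle) {
    var dx = px - x, dy = py - y;             // step 1: translate by (-x, -y)
    var cos = Math.cos(angle), sin = Math.sin(angle);
    return {
        x: x + dx * cos - dy * sin,           // steps 2 and 3: rotate, then translate back
        y: y + dx * sin + dy * cos
    };
}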
There is a lot of info about this on the net, for example:
http://www.siggraph.org/education/materials/HyperGraph/modeling/mod_tran/2drota.htm
I'm doing some graphing around a center X,Y of 0,0. When it's time to render, I reposition with translate, and then use scale to make the graph fill the canvas (ie scale everything by 50% for example).
I notice that it matters whether you call scale and then translate, or translate and then scale, and I can't quite get my head around it. This is a problem since everything doesn't always fit, but my mental model isn't complete, so I'm having a hard time fixing it.
Can someone explain why the order of the scale and translate calls matter?
So let's draw a grid on a 300x300 canvas:
http://jsfiddle.net/simonsarris/4uaZy/
This will do. Nothing special. A red line denotes where the origin is located by running through (0,0) and extending very very far, so when we translate it we'll see something. The origin here is the top left corner, where the red lines meet at (0,0).
All of the translations below happen before we draw the grid, so we'll be moving the grid. This lets you see exactly what's happening to the origin.
So let's translate the canvas by 100,100, moving it down and to the right:
http://jsfiddle.net/simonsarris/4uaZy/1/
So we've translated the origin, which is where the red X is centered. The origin is now at 100,100.
Now we'll translate and then scale. Notice how the origin is in the same place as the last image, everything is just twice as large.
http://jsfiddle.net/simonsarris/4uaZy/2/
Boom. The origin is still at 100,100. Everything is puffed up by 2, though. The origin changed, then everything got puffed up in place.
Now let's look at them in reverse. This time we scale first, so everything is fat from the start:
http://jsfiddle.net/simonsarris/4uaZy/3/
Everything is puffed by 2. The origin is at 0,0, its starting point.
Now we do a scale and then a translate.
http://jsfiddle.net/simonsarris/4uaZy/4/
We're translating by 100,100 still, but the origin has moved by 200,200 in real pixels. Compare this to two images previous.
This is because everything that happens after a scale must be scaled, including additional transforms. So transforming by (100,100) on a scaled canvas leads to it moving by 200, 200.
The takeaway thing to remember here is that changing the transformation affects how things are drawn (or transformed!) from then on. If you scale x2 and then translate, the translation will also be x2.
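A tiny sketch of the two orderings on a raw 2D context (the 'canvas' element id and drawGrid are hypothetical; the numbers mirror the fiddles above):

var ctx = document.getElementById('canvas').getContext('2d');

// translate, then scale: the origin ends up 100,100 device pixels from the
// corner, and everything drawn afterwards is twice as big.
ctx.save();
ctx.translate(100, 100);
ctx.scale(2, 2);
// drawGrid(ctx);
ctx.restore();

// scale, then translate: the translate itself is scaled, so the origin ends
// up 200,200 device pixels from the corner.
ctx.save();
ctx.scale(2, 2);
ctx.translate(100, 100);
// drawGrid(ctx);
ctx.restore();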
If you want to see, mathematically, what is happening at each step I encourage you to take a look at the code here:
https://github.com/simonsarris/Canvas-tutorials/blob/master/transform.js
This mimics the entire transformation process done by canvas and lets you see how previous transforms modify those that come afterwards.
Scaling and rotation are done with respect to the origin, so if your transform has a translation, for example, then the order becomes important.
Here's a good read:
Why Transformation Order Is Significant