How to use Three.js OrbitControl on multiple objects - javascript

So I have OrbitControls working, but I have 3 objects on the page. When it controls one, it controls them all. Also, pan/zoom does not work at all with the OrthographicCamera.
I have each instance of the OrbitControls assigned its own variable, so it is not global across them all.
controlsObjOne = new THREE.OrbitControls(cameraObjOne);
controlsObjOne.addEventListener('change', renderObjOne);
I use ObjTwo, ObjThree, etc. for the other models. Everything works this way (camera, light, render, etc.) except the orbit controls. Is it possible with this library, or is there another one that I have not seen that will work with multiple objects?

The reason for this is that, by default, OrbitControls attaches its event listeners for mouse/touch input to the document. You can hand over a domElement as a second parameter; then all event listeners will be bound to that element (for example the renderer's canvas domElement). But that severely limits the navigation possibilities, since mousemove will only be registered while the mouse is over the canvas area.
What you want is the mousedown listener on the renderer canvas and the mousemove listener on the document, so that the mouse can move freely once the button is pressed.
I created a modified version with a 3rd parameter to hand over your canvas element: https://gist.github.com/mrflix/8351020
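For reference, a minimal sketch of the plain second-parameter approach, with each controls instance bound to its own renderer canvas (the rendererObjOne/rendererObjTwo names are placeholders assumed to match the rest of your setup):
controlsObjOne = new THREE.OrbitControls(cameraObjOne, rendererObjOne.domElement);
controlsObjOne.addEventListener('change', renderObjOne);
controlsObjTwo = new THREE.OrbitControls(cameraObjTwo, rendererObjTwo.domElement);
controlsObjTwo.addEventListener('change', renderObjTwo);
controlsObjThree = new THREE.OrbitControls(cameraObjThree, rendererObjThree.domElement);
controlsObjThree.addEventListener('change', renderObjThree);
The modified version in the gist keeps this per-canvas separation but additionally takes your canvas element, so the mousemove listener can stay on the document.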

Related

Phantom image along border while dragging object in fabric.js

I have a group of objects in fabric.js; everything is non-evented and not selectable except a single object which I will call the selector. They are grouped together because they all need to move as one group. When this selector moves within the bounds of the group, everything works as expected. However, when I move the object outside of the bounds (even though I programmatically have it stop before that point), it draws phantom objects along the edge of the main group.
I have gone through the code and commented things out and tried placing .renderAll() and .setCoords() on objects I think may be the issue but so far no luck.
Here is a short clip showing what is happening - https://i.imgur.com/bnIJWY7.mp4

Three.js position model with touchscreen?

I'm looking over the documentation for three.js right now and have found the controls section. I see that it's possible to use orbit controls to control the camera's view of the scene, and I have confirmed that this works with a touchscreen. What I cannot find anywhere online is whether it is possible to rotate, scale, and transform a loaded model. I see that transform controls exist, but I can't find anything else that I would need for it.
You need to add event listeners to the canvas yourself and detect intersections. If an intersection with the desired object is found, apply the transformation to that object based on the event.
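A minimal sketch of that idea with three.js's raycaster, assuming you already have a camera, a renderer and a loaded model (those variable names are placeholders):
var raycaster = new THREE.Raycaster();
var pointer = new THREE.Vector2();
renderer.domElement.addEventListener('pointerdown', function (event) {
    // convert the mouse/touch position to normalized device coordinates (-1 to +1)
    var rect = renderer.domElement.getBoundingClientRect();
    pointer.x = ((event.clientX - rect.left) / rect.width) * 2 - 1;
    pointer.y = -((event.clientY - rect.top) / rect.height) * 2 + 1;
    raycaster.setFromCamera(pointer, camera);
    var hits = raycaster.intersectObject(model, true); // true = check child meshes too
    if (hits.length > 0) {
        // the pointer is on the model: start your own rotate/scale/translate logic here,
        // e.g. update model.rotation or model.scale on the following pointermove events
    }
});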

XML3D: Camera controls & XML3D tools

What is the suggested approach for handling user input and camera controls in XML3D?
Basic interactivity can be added using the DOM tree events, but I'm not sure if that would be enough to provide rotation gizmos (for example).
Does the library provide some API to handle user input and camera controls?
I've noticed that there is the xml3d toolkit that was developed a year ago.
It seems, however, that this is a loose collection of demos rather than a library for handling user input, and there is no decent usage documentation for it.
I need to provide basic functionality like rotation/translation/scaling of models and controlling the camera.
xml3d.js doesn't provide any cameras or gizmos by itself. They're usually application-specific (there are dozens of ways to implement a camera for instance) so it doesn't really make sense to include them as part of the core library. A very basic camera is provided alongside xml3d.js but it's quite limited.
The xml3d-toolkit did include transformation gizmos and various camera controllers but it's not in active development anymore since the creator has moved on to other things. Still, it might be a good place to start, or at least to use as a reference in building your own camera or gizmo.
For example, a simple way to allow the user to change the transformation of a model would be to:
Add an onclick listener to each model that toggles the editing mode
Show 3 buttons somewhere in your UI to let the user switch between editing rotation, translation or scale
Add onmouseup and onmousedown listeners to the <xml3d> element to record click+drag movements
As part of those listeners, convert the change in mouse position to a change in transformation depending on what editing mode the user is in
Apply those transformation changes to the model either by changing its CSS transform, or through the relevant attribute on a <transform> element that's referenced by a <group> around your model.
Exit the editing mode if the user clicks the canvas to deselect the object (rather than a click+drag operation)
To keep it from conflicting with camera interaction, you could use the right mouse button for editing, or simply disable the camera while the user is editing a transformation. A rough sketch of this approach follows below.
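As an illustration of the listener steps above, here is a rough sketch of a rotation-only editing mode, assuming the model's <group> references a <transform> element and the right mouse button is reserved for editing (the element ids are placeholders):
var xml3dEl = document.getElementById('myXml3d');             // the <xml3d> element
var transformEl = document.getElementById('modelTransform');  // <transform> referenced by the model's <group>
var editing = false, lastX = 0, angle = 0;
xml3dEl.addEventListener('mousedown', function (e) {
    if (e.button !== 2) return;  // right button only, so the camera keeps the left button
    editing = true;
    lastX = e.clientX;
});
xml3dEl.addEventListener('mousemove', function (e) {
    if (!editing) return;
    angle += (e.clientX - lastX) * 0.01;  // map horizontal drag to an angle in radians
    lastX = e.clientX;
    // rotation is an axis-angle value "x y z angle"; here we spin around the y axis
    transformEl.setAttribute('rotation', '0 1 0 ' + angle);
});
xml3dEl.addEventListener('mouseup', function () { editing = false; });
A real implementation would also suppress the context menu for right-button drags and cover translation and scale in the same way.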
A 3D gizmo is a bit trickier because it needs to be drawn on top of the model while still being clickable, and currently there is no way to do this. You could use the RenderInterface to draw the gizmo in a second pass after clearing the depth buffer, but this would not be done in the internal object-picking pass that's required to find out which object a user has clicked on.
As a workaround, the toolkit library used a second XML3D canvas with a transparent background positioned over the first that intercepted and relayed all mouse events. When an object was selected its transformation was mirrored into the second canvas where the gizmo was drawn. Changes in the gizmo's transformation were then mirrored back to the object in the main canvas.
Have a look at the classes in the toolkit's xml3doverlay and widgets folders.
Some advice for people implementing draggable objects with XML3D:
Use the ray picking method of the XML3D element to get both the object and the point of intersection between the ray and the model (the getElementByRay function).
Convert the mouse movements from screen coordinates to world coordinates.
You must scale the translation by the ratio of the picked point's distance from the camera to the camera's distance from the projection plane, so that the moving object tracks your cursor.
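A sketch of the scaling idea in the last point, assuming a perspective camera and that you already have the picked point (for example from getElementByRay), the camera position, the vertical field of view in radians, and the viewport height in pixels:
function screenDeltaToWorldDelta(dxPixels, dyPixels, pickedPoint, cameraPosition, fovY, viewportHeight) {
    // distance from the camera to the plane through the picked point
    var dx = pickedPoint.x - cameraPosition.x;
    var dy = pickedPoint.y - cameraPosition.y;
    var dz = pickedPoint.z - cameraPosition.z;
    var distance = Math.sqrt(dx * dx + dy * dy + dz * dz);
    // height of the view frustum at that distance: one pixel on screen corresponds
    // to frustumHeight / viewportHeight world units on the picked point's plane
    var frustumHeight = 2 * distance * Math.tan(fovY / 2);
    var unitsPerPixel = frustumHeight / viewportHeight;
    // the result is expressed along the camera's right/up axes and still has to be
    // rotated by the camera orientation to become a world-space translation
    return { x: dxPixels * unitsPerPixel, y: -dyPixels * unitsPerPixel };
}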

Why is there no draw() method in the KineticJS documentation?

I've spent hours googling the Kinetic.Layer.draw() method. All I've found is use cases; there is no documentation about how, when and why to use it. Maybe it's already deprecated?
These are primary links which I use while learning and playing with this wonderful framework:
http://kineticjs.com/docs/index.html
http://www.html5canvastutorials.com/kineticjs/html5-canvas-events-tutorials-introduction-with-kineticjs/
It would be really helpful if somebody could clear up this misunderstanding for me.
Actually draw() and drawHit() are in the docs, but they are poorly documented:
http://kineticjs.com/docs/Kinetic.Stage.html#draw
draw()
draw layer scene graphs
http://kineticjs.com/docs/Kinetic.Stage.html#drawHit
drawHit()
draw layer hit graphs
Surprisingly, I was unable to find the third and last draw method, drawScene(), in the Kinetic docs. Also to my surprise, these three functions are not shown as being inherited from Kinetic.Stage's parent class, Kinetic.Container.
Anyways, I think this SO question explains the differences of the methods perfectly: What is the difference between KineticJS draw methods?
And definitely, there's no avoiding these functions; you'll need to use one of them eventually unless your canvas/stage is static for your entire application. (*There may be an exception, see below.)
To answer your questions:
How:
Call .draw() on any Kinetic.Container, which includes the stage, layers and groups, or on any Kinetic.Node, which includes all Kinetic.Shape objects.
Examples:
stage.draw(); //Updates the scene renderer and hit graph for the stage
layer.drawHit(); //Updates the hit graph for layer
rect.drawScene(); //Updates the scene renderer for this Kinetic.Rect
Why:
I would think it's a performance consideration: not everything on the Kinetic.Stage is redrawn every single time there is a change. Using the draw methods this way, we can control programmatically when we want the stage to be updated and rendered. As you might imagine, it is quite expensive to redraw the stage all the time if we have, say, 10,000 nodes in the scene.
When:
drawScene()
Any time you need to update the scene renderer (for example, after using .setFill() to change the fill of a shape)
drawHit()
To update the hit graph if you're binding events to your shapes, so that the hit area for any events is updated to match the node's changes.
draw()
Whenever you need to do both of the above.
Finally, perhaps an example/lab will be the most beneficial learning tool here, so I've prepared a JSFIDDLE for you to test out the differences. Follow the instructions and read my comments inside to get a better understanding of what's going on.
*NOTE: I mentioned above that there was an exception to having to use the draw methods. That is because whenever you add a layer to the stage, everything in the layer is automatically drawn. There is a small example of this described at the bottom of the fiddle.
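To tie the when/why together, a small sketch assuming a KineticJS build where these constructors and setters exist:
var stage = new Kinetic.Stage({ container: 'container', width: 400, height: 300 });
var layer = new Kinetic.Layer();
var rect = new Kinetic.Rect({ x: 20, y: 20, width: 100, height: 80, fill: 'green' });
rect.on('click', function () { console.log('rect clicked'); });
layer.add(rect);
stage.add(layer);     // adding a layer to the stage draws it automatically (the exception above)
rect.setFill('red');  // purely visual change...
layer.drawScene();    // ...so redrawing the scene is enough
rect.setX(150);       // the clickable area has moved as well...
layer.draw();         // ...so redraw both the scene and the hit graph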
The draw() method is basically used for drawing all the (visible) elements associated with the container you call the method on.
It is therefore not just limited to Kinetic.Layer but can also be used on Kinetic.Group, Kinetic.Container and so on...
When & Why to use:
Whenever you make any change to the canvas, you call the appropriate container's draw() method. KineticJS does not refresh the canvas unless you explicitly tell it to by calling draw(). In general, call draw() on the smallest container affected by your changes, to make use of efficient caching and to redraw only the part of the canvas that was affected.
Take for instance:
You have 2 layers in your application. Layer1 is used for a static background and some other static items that need not be redrawn every time.
And Layer2 contains your moving elements, or active objects. Then you can simply make a call to Layer2.draw().
To add to the complexity, say you have a group of objects, for example all the menu items. When a user presses any menu button, it's better to call menuGroup.draw() rather than the draw function of its parent layer.
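A short sketch of that layer/group split (the names are illustrative):
var stage = new Kinetic.Stage({ container: 'container', width: 800, height: 600 });
var backgroundLayer = new Kinetic.Layer();  // static items, drawn once when added
var uiLayer = new Kinetic.Layer();          // moving elements and active objects
var menuGroup = new Kinetic.Group({ x: 10, y: 10 });
menuGroup.add(new Kinetic.Rect({ x: 0, y: 0, width: 80, height: 30, fill: 'grey' }));
uiLayer.add(menuGroup);
stage.add(backgroundLayer);
stage.add(uiLayer);
// later, when only a menu button changes:
menuGroup.draw();  // redraw just the group instead of uiLayer.draw() or stage.draw()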

How to connect two shapes in Raphael by dragging the mouse?

I'm trying to connect two shapes using a path by dragging the mouse from one shape to the other. Is this possible in Raphael? If someone has already done this, a little help will be much appreciated.
I'm looking to do something like the below. I want to be able to drag my mouse from the grey shape to the green shape and connect them using a path.
Thanks
I'd approach it like so:
Create a set to hold the shapes once they're joined.
Assign a drag() handler to the desired element, to push it to the set upon dragging (with certain constraints, obviously, such as whether the shapes are intersecting or other conditions).
Treat the set (now containing several shapes) as the new shape; Raphael's set API allows precisely this by providing an opaque interface to the shapes contained inside the set object.
I hope this helps; for any questions or clarifications on this, please comment. I'll try to work out another approach to a solution and see if I come up with anything.
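A minimal sketch of that approach, assuming Raphael 2.x (the shape positions, the bounding-box intersection test and the centre-to-centre path are illustrative choices):
var paper = Raphael('canvas', 600, 400);
var grey = paper.rect(50, 50, 80, 60).attr({ fill: '#ccc' });
var green = paper.rect(300, 200, 80, 60).attr({ fill: '#4a4' });
var joined = paper.set();  // will hold the shapes once they're connected
grey.drag(
    function (dx, dy) {    // onmove: drag the grey shape with the mouse
        this.attr({ x: this.ox + dx, y: this.oy + dy });
    },
    function () {          // onstart: remember where the drag began
        this.ox = this.attr('x');
        this.oy = this.attr('y');
    },
    function () {          // onend: if the shapes now intersect, connect and group them
        if (Raphael.isBBoxIntersect(grey.getBBox(), green.getBBox())) {
            var a = grey.getBBox(), b = green.getBBox();
            var connector = paper.path('M' + (a.x + a.width / 2) + ',' + (a.y + a.height / 2) +
                'L' + (b.x + b.width / 2) + ',' + (b.y + b.height / 2));
            joined.push(grey, green, connector);
        }
    }
);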
