I am loading a model of a mechanism (e.g. a robot arm) in Three.js. Sadly the models I am using don't have a skeleton, but I have the locations, axes and so on of the joints. In order to use e.g. inverse kinematic solvers like Three-IK, I want to create a skeleton from these parameters. Since I want to use many different models I would prefer to not create the skeletons by hand but in code.
I have been trying for over a week now to create a valid bone structure from these values that reflects the model, but nothing succeeded. For example, if I create a chain of bones using the positions of the joints I get a very long skeleton which in no way matches the positions I used.
let boneParent;
let bonepos = [];
let bones = [];
model.traverse(child => {
  switch (child.type) {
    case "joint":
      let p = new Vector3();
      child.getWorldPosition(p);
      bonepos.push(p);
      let bone = new Bone();
      boneParent && boneParent.add(bone);
      bone.worldToLocal(p.clone());
      bone.position.copy(p);
      bone.rotation.copy(child.rotation);
      bone.scale.copy(child.scale);
      boneParent = bone;
      bones.push(bone);
      break;
  }
});
showPoints(scene, bonepos, 0xff0000);
const skeletonHelper = new SkeletonHelper(bones[0]);
skeletonHelper.visible = true;
scene.add(skeletonHelper);
The code above results in the screenshot below. The red markers are the positions I get from the robot joints, the line snaking into the distance is the skeleton as visualized by the SkeletonHelper.
So my question is this: it seems like I don't understand well enough how bones are handled in Three.js. How can I create a skeleton that reflects my existing model from its joint locations and orientations?
Thanks in advance!
child.getWorldPosition(p);
I'm afraid it's incorrect to assign a world-space position to Bone.position, which represents the position in local space (relative to the parent).
boneParent = bone;
This line looks problematic, too. A bone can have multiple child elements; it seems to me that this use case is not considered in your code.
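A minimal sketch of the world-to-local conversion (hypothetical names: parent is the bone's already-attached parent, worldPos a joint position in world space):
parent.updateMatrixWorld(true); // world matrices must be current before worldToLocal
const localPos = parent.worldToLocal(worldPos.clone()); // world space -> parent's local space
const bone = new Bone();
bone.position.copy(localPos); // Bone.position is relative to the parent
parent.add(bone);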
After some fiddling around I found a solution:
let root = new Bone();
let parent = root;
let pos = new Vector3();

for (let joint of robot.arm.movable) {
  let link = robot.getLinkForJoint(joint);
  link.getWorldPosition(pos);

  let bone = new Bone();
  parent.add(bone);
  parent.lookAt(pos);
  parent.updateMatrixWorld(); // crucial for worldToLocal!
  bone.position.copy(bone.worldToLocal(pos));

  parent = bone;
}
The important part is to call updateMatrixWorld() after lookAt() so that bone.worldToLocal() works correctly. lookAt() also saves a lot of matrix hassle :)
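To check the result, something like this should work (a sketch; scene is assumed to exist, root is the root bone from the snippet above):
scene.add(root); // the bone chain must be part of the scene graph
root.updateMatrixWorld(true);
const skeletonHelper = new SkeletonHelper(root); // draws a line per bone
scene.add(skeletonHelper);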
Greetings to the community! I am currently creating a custom mxgraph editor and converting graphs using different layouts. I would like to apply an mxHierarchicalLayout to my graph, but reversed (bottom-to-top instead of top-to-bottom). Below is an example of what I do and what I would like to do.
My graph
My graph converted with mxHierarchicalLayout
Code snippet:
let layout = new mxHierarchicalLayout(graph);
layout.execute(graph.getDefaultParent());
How I want to convert the graph
I found that mxHierarchicalLayout has an orientation member variable which defaults to 'NORTH'. However, when I tried the following
let layout = new mxHierarchicalLayout(graph);
layout.orientation=mxConstants.DIRECTION_SOUTH;
layout.execute(graph.getDefaultParent());
The graph went outside my "canvas" and could not be seen, so I could not verify whether this parameter is the one that "reverses" the tree. Could anyone help? Thanks in advance.
PS
When I use layout.orientation = mxConstants.DIRECTION_SOUTH;
it seems that the layout is converted the way I want, but it cannot be seen because the coordinates of the SVG elements are negative (e.g. x=-10, y=-5). Is there any workaround for this?
SOLVED
I do not know why mxgraph has this buggy behaviour with orientation = south in mxHierarchicalLayout, but I managed to work around it. I realized that the newly generated layout places the mxCell children of the graph to the north (negative y coordinates). So, after layout.execute(graph.getDefaultParent());, I go through every child of the graph, find the most negative y coordinate, and then shift all cells down by the absolute value of that coordinate.
Code
function convertGraphToInverseHorizontalTree() {
  let layout = new mxHierarchicalLayout(graph);
  layout.orientation = mxConstants.DIRECTION_SOUTH;
  layout.execute(graph.getDefaultParent());

  graph.model.beginUpdate();
  // get the most negative y
  let mostNegativeY = getMostNegativeCoordinateY(graph);
  let children = graph.getChildCells();
  graph.moveCells(children, 0, Math.abs(mostNegativeY));
  graph.model.endUpdate();
}

function getMostNegativeCoordinateY(graph) {
  let children = graph.getChildCells();
  let mostNegative = 10000;
  for (let i = 0; i < children.length; i++) {
    if (children[i].geometry != undefined) {
      if (children[i].geometry.y < mostNegative) {
        mostNegative = children[i].geometry.y;
      }
    }
  }
  return mostNegative;
}
Now the graph is like this
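An alternative I did not try: instead of moving the cells, it may be possible to shift the whole view by the same offset via mxGraphView's setTranslate (a sketch, assuming the same graph and helper function as above):
let layout = new mxHierarchicalLayout(graph);
layout.orientation = mxConstants.DIRECTION_SOUTH;
layout.execute(graph.getDefaultParent());
// translate the view down by the overshoot instead of moving the cells
graph.view.setTranslate(0, Math.abs(getMostNegativeCoordinateY(graph)));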
This one takes a bit more to describe, sorry for the shorter title.
I currently do not have my code in front of me but might update this with the full details of how it works and where the problem is located.
Basically I noticed that unless I give the shapes/groups a more or less global name (window.whatever), they never get drawn. I have logged the objects to check whether they are well-formed and have seen nothing wrong with them.
The full scope is: a layer is created first and passed to a function, in which I then create shapes and groups in a logical way; instead of passing the groups back, I send the layer in as a parameter and add them from within the function. When I do it this way, I can never get anything to draw to the container (stage).
To add to it, I am looping to create the shapes, since I am making them variable and more percentage-based than exact pixels, so I hold the objects in an array that is generated through the loop.
I did this flat first and it worked; however, after moving it into the function it stopped working. Is there any reason for this that I may be missing?
Another note if it is relevant, I am using Adobe AIR.
If anyone has any ideas let me know, I will post the actual code when I can (few hours from this posting).
UPDATE:
Basically, the issue I am having is that when I separate the shapes/groups into their own function, it does not want to draw them at all. I have tried calling the draw function inline and also just adding them to the stage later; to my understanding, both will trigger the draw.
Here is the code:
var forms = {
  selector: function(params, bind) {
    //globals
    var parent, obj, lastWidth = 0, items = [];
    //setup defaults
    var params = params || {};
    params.x = params.x || 0;
    params.y = params.y || 0;
    params.active = params.active || 0;
    params.height = params.height || 200;
    params.children = params.children || [{
      fill: 'yellow'
    }];
    params.width = params.width || bind.getWidth();
    params.margins = params.margins || 5;
    //container for selector
    parent = new Kinetic.Group({
      x: params.x,
      y: params.y,
      height: params.height,
      width: params.width
    });
    air.Introspector.Console.log(params);
    var totalMargins = params.margins * (params.children.length + 1),
        activeWidth = (params.width - totalMargins) / 2,
        subItems = params.children.length - 1,
        subWidth = activeWidth / subItems,
        itemHeight = (params.height - params.margins) / 2;
    //loop all children
    for (var i = 0; i < params.children.length; i++) {
      //use an array to store objects
      items[i] = {};
      //create default object for rectangle
      obj = {};
      obj.y = params.margins;
      obj.fill = params.children[i].fill;
      if (params.active == i) {
        obj.width = activeWidth;
      } else {
        obj.width = subWidth;
      }
      obj.x = params.margins + lastWidth;
      obj.height = itemHeight;
      lastWidth = lastWidth + (params.margins + obj.width);
      //create group for text
      items[i].text = new Kinetic.Group({
        x: 0,
        y: itemHeight + params.margins
      });
      //create box for text
      items[i].box = new Kinetic.Rect({
        x: params.margins,
        y: params.margins,
        width: params.width - (params.margins * 2),
        height: (params.height - itemHeight) - (params.margins * 2),
        fill: 'yellow'
      });
      air.Introspector.Console.log(items[i].box);
      //add box to text groups
      items[i].text.add(items[i].box);
      //create item
      items[i].item = new Kinetic.Rect(obj);
      //add groups to parent
      parent
        .add(items[i].item)
        .add(items[i].text);
    }
    //add parent to the bound container
    bind.add(parent);
  }
}
From your question, I'm assuming bind is your Kinetic.Layer.
Make sure bind has been added to the stage: stage.add(bind);
After adding your groups/rects to bind, also do bind.draw();
I have since figured it out.
The issue is that the layer must be added to the stage before anything else is added to it. It was not in my code because I did not see it as an issue; it worked elsewhere.
Basically, if you add anything to a layer and then add the layer to the stage, it will fail. It must go like this (a short sketch follows the two orderings below):
CREATE STAGE
CREATE LAYER
ADD LAYER TO STAGE
ADD GROUPS TO LAYER
DRAW IF NEEDED
The way it was done originally is like so:
CREATE STAGE
CREATE LAYER
ADD GROUPS TO LAYER
ADD LAYER TO STAGE
DRAW IF NEEDED
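A minimal sketch of the working order (the container id and sizes are illustrative, not from the original code):
// CREATE STAGE ('container' is the id of the host div)
var stage = new Kinetic.Stage({ container: 'container', width: 600, height: 400 });
// CREATE LAYER
var layer = new Kinetic.Layer();
// ADD LAYER TO STAGE -- before anything is added to the layer
stage.add(layer);
// ADD GROUPS TO LAYER
var group = new Kinetic.Group({ x: 0, y: 0 });
group.add(new Kinetic.Rect({ x: 10, y: 10, width: 50, height: 50, fill: 'yellow' }));
layer.add(group);
// DRAW IF NEEDED
layer.draw();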
The question will be updated to make this apparent; this is something that is not flagged as a problem in the documentation (as far as I have seen), and I can see it confusing others.
So I have a Geometry (the scope of this code is THREE.Geometry.prototype) that I am editing dynamically. newData is an object of the form { faces: [array of face indexes], vertices: [array of vertex indexes] }. (These arrays keep the length of the original faces and vertices arrays and look like [null, null, null, "4", "5", null, null, ...].)
Using these arrays, I go through all the faces and vertices and push each onto one of two new arrays, effectively splitting all the data into two groups. I also update the vertex indices on the faces!
In the end I know I've updated the geometry and it is correct, but the changes I make aren't being displayed. I've tried .elementsNeedUpdate, which causes an error (no property 'a' of undefined in initWebGLObjects... I looked there and couldn't see a reference to a).
I've tried verticesNeedUpdate; it does nothing.
I've also tried updateCentroids in combination with the previous flag. It does nothing.
I've heard about not being able to resize the buffer. What is the buffer, and what is its length? The number of vertices I'm giving the model?
I've seen "You can emulate resizing by pre-allocating larger buffer and then keeping unneeded vertices collapsed / hidden." It sounds like that may be what I'm doing? How can I collapse or hide a vertex? I haven't seen any references to that.
Thanks for your time!
var oldVertices = this.vertices;
var oldFaces = this.faces;
var newVertices = [];
var newFaces = [];
var verticeChanges = [];
this.vertices = [];
this.faces = [];

for (var i in oldVertices) {
  var curAr = ((newData.vertices[i]) ? (newVertices) : (this.vertices));
  curAr.push(oldVertices[i]);
  verticeChanges[i] = curAr.length - 1;
}

for (var i in oldFaces) {
  var curAr = ((newData.faces[i]) ? (newFaces) : (this.faces));
  // remap the face's vertex indices to their new positions
  oldFaces[i].a = verticeChanges[oldFaces[i].a];
  oldFaces[i].b = verticeChanges[oldFaces[i].b];
  oldFaces[i].c = verticeChanges[oldFaces[i].c];
  curAr.push(oldFaces[i]); // assign the remapped face to its group
}
console.log('Vertices Cut from', oldVertices.length, "to:", newVertices.length, 'and', this.vertices.length);
console.log('Faces Cut from', oldFaces.length, "to:", newFaces.length, 'and', this.faces.length);
I recently ran into this problem myself. I found that if I'm adding vertices and faces to the geometry, I need to set this.groupsNeedUpdate = true in order to tell the renderer to update its internal buffers.
Maybe this is connected to this point from this tutorial:
"I just wanted to quickly point out a quick gotcha for Three.js, which
is that if you modify, for example, the vertices of a mesh, you will
notice in your render loop that nothing changes. Why? Well because
Three.js (as far as I can tell) caches the data for a mesh as
something of an optimisation. What you actually need to do is to flag
to Three.js that something has changed so it can recalculate whatever
it needs to. You do this with the following:
// set the geometry to dynamic so that it allow updates
sphere.geometry.dynamic = true;
// changes to the vertices
sphere.geometry.__dirtyVertices = true;
// changes to the normals
sphere.geometry.__dirtyNormals = true;"
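For reference, in later Three.js releases those double-underscore flags were renamed; with the legacy THREE.Geometry the equivalents are the *NeedUpdate properties (a sketch, not tied to the asker's exact revision):
geometry.dynamic = true;             // allow updates at all
geometry.verticesNeedUpdate = true;  // vertex positions changed
geometry.normalsNeedUpdate = true;   // normals changed
geometry.elementsNeedUpdate = true;  // faces/structure changed
geometry.groupsNeedUpdate = true;    // face groups changed, forces a buffer rebuild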
I'm just getting started with OpenLayers, and have hit a small snag - when I create a LineString and then try to modify it, I can move the existing vertices and drag the virtual vertices to create new ones. When I continue to add to the line though, only the changes to the existing vertices are saved - new vertices are discarded. Am I missing something? You can see an example of what I'm talking about here:
http://dev.darrenhall.info/temp/open-layers/modify-feature/
Click to add points, and use the dots to edit, then click to continue adding to see what I mean. Any help would be appreciated! Thanks!
Darren
After a quick look, your code looks more complex than it should be.
You manually push points into an array on click and generate a linestring from those points.
You don't listen to any change made through the virtual vertices. I don't get why, in your addWayPoint function, you don't take the geometry of the feature from the layer rather than from your array of points.
Maybe a good start would be to use the real feature geometry and avoid relying on your route.waypoints.
In the end I decided not to use modifyFeature, and instead went for using vectors as handles and manually handling the dragging and line modification. You can see my workaround here:
http://dev.darrenhall.info/temp/open-layers/draw-route
The guys at Ordnance Survey came up with a (rather simple) fix for my code, though, that repopulates the array from the vertices after modification:
function addWayPoint(e) {
  var position = osMap.getLonLatFromViewPortPx(e.xy);
  if (route.waypoints.length > 1) {
    layers.lines.layer.removeFeatures([layers.lines.feature]);
  }
  /* vvvvvvvvvvv start */
  /* Get the potentially modified feature */
  if (modifyFeature.feature) {
    route.waypoints = [];
    var vertices = modifyFeature.feature.geometry.getVertices();
    for (i = 0; i < vertices.length; i++) {
      //console.log(vertices[i]);
      route.waypoints.push(vertices[i]);
    }
  }
  /* ^^^^^^^^^^^ end */
  route.waypoints.push(new OpenLayers.Geometry.Point(position.lon, position.lat));
  var string = new OpenLayers.Geometry.LineString(route.waypoints);
  layers.lines.feature = new OpenLayers.Feature.Vector(string, null, styles.pink);
  layers.lines.feature.attributes['id'] = 1;
  layers.lines.layer.addFeatures([layers.lines.feature]);
  for (i = 0; i < layers.lines.layer.features.length; i++) {
    if (layers.lines.layer.features[i].attributes.id == 1) {
      modifyFeature.selectFeature(layers.lines.layer.features[i]);
    }
  }
}
I am trying to manually define the vertices of a polygon in Box2D for JavaScript. I ultimately want to resize each side of a box manually, but first I need to be able to draw it from vertices (I already have a resizing mechanism). I've looked at the examples in the manual, but they are for ActionScript and don't seem to work in JavaScript. I've tried defining the polygon in different ways (like a standalone polygon = new b2Polygon;), but it makes no difference.
No matter how I define a new polygon, the Box2D source throws an error in the call that creates the fixture. The error says "tVec is undefined"; tVec is a variable in the Box2D function b2PolygonShape.prototype.ComputeAABB = function (aabb, xf).
Here are the relevant parts of the code (fixDef and bodyDef are created earlier in the code):
var vertices = [];
vertices[0] = new b2Vec2();
vertices[0].Set(1,1);
vertices[1] = new b2Vec2();
vertices[1].Set(1, 6);
vertices[2] = new b2Vec2();
vertices[2].Set(6, 6);
vertices[3] = new b2Vec2();
vertices[3].Set(6, 1);
fixDef.shape = new b2PolygonShape;
fixDef.shape.Set(vertices, 4);
world.CreateBody(bodyDef).CreateFixture(fixDef);
Any help would be greatly appreciated as this has been giving me trouble for a while now.
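For anyone hitting the same "tVec is undefined" error: with the Box2DWeb port, the vertex list is usually handed over with SetAsArray rather than the ActionScript-style Set. A hedged sketch, untested against the asker's exact build (note that Box2D expects counter-clockwise winding, so the vertex order may need reversing depending on your coordinate convention):
var vertices = [
  new b2Vec2(1, 1),
  new b2Vec2(1, 6),
  new b2Vec2(6, 6),
  new b2Vec2(6, 1)
];
fixDef.shape = new b2PolygonShape();
// SetAsArray builds the polygon from an explicit vertex list in Box2DWeb
fixDef.shape.SetAsArray(vertices, vertices.length);
world.CreateBody(bodyDef).CreateFixture(fixDef);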