I've got a grid of cylinder meshes, created simply by
var tile = BABYLON.MeshBuilder.CreateCylinder("tile-" + i, { tessellation: 6, height: 0.1 }, scene);
and then I have the following event callback:
window.addEventListener("click", function (evt) {
    // try to pick an object
    var pickResult = scene.pick(evt.clientX, evt.clientY);
    if (pickResult.pickedMesh != null) {
        alert(pickResult.pickedMesh.name);
    }
});
A mouse-click on one of the tiles then raises a message box with the correct tile name.
Then I add some new meshes (a 3D model inside a .babylon file) by
var house;
BABYLON.SceneLoader.ImportMesh("", "../Content/", "house.babylon", scene,
    function (newMeshes) {
        house = newMeshes[0];
    });
To picture it: the house is a model built from four different meshes, placed over the grid of cylinder tiles.
It displays fine, but on mouse-click it very often behaves as if the mesh simply weren't there, so pickResult.pickedMesh is either null or pickResult.pickedMesh.name points to the tile underlying my imported mesh at the clicked point.
Only roughly 5% of the mesh area responds correctly to mouse-clicks (say, the middle of the roof, the middle of the walls).
I've tried setting a virtual (hidden) house.parent mesh, one that would not be created by the import, but that seems to be a dead end.
Is there some way to make scene.pick(evt.clientX, evt.clientY) respect the mesh hierarchy and consider all visible parts of the overlaid model?
Just for completeness: I'm working with the middle part of this 3D model (I removed the left and right houses).
EDIT: Demo on BabylonJS playground
You could try changing
var pickResult = scene.pick(evt.clientX, evt.clientY);
to
var pickResult = scene.pick(scene.pointerX, scene.pointerY);
since evt's coordinates are relative to the whole page, while scene.pointerX/pointerY are relative to the rendering canvas.
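Alternatively, Babylon can do the event wiring and the pick for you. A minimal sketch using the pointer observable (assuming a reasonably recent Babylon.js):
scene.onPointerObservable.add(function (pointerInfo) {
    // the scene has already performed the pick; pickInfo may still be empty
    if (pointerInfo.type === BABYLON.PointerEventTypes.POINTERDOWN
        && pointerInfo.pickInfo
        && pointerInfo.pickInfo.pickedMesh) {
        alert(pointerInfo.pickInfo.pickedMesh.name);
    }
});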
I'm trying to add an Object3D to my gltf model and place it above the model. I'm doing it the following way:
this.el.addEventListener('model-loaded', () => {
this.bar = new MyCustomObject3D();
const size = new THREE.Vector3();
let box = new THREE.Box3().setFromObject(this.el.object3D);
box.getSize(size)
let height = size.y + 1;
this.bar.position.set(0, height, 0);
this.el.setObject3D("bar", this.bar);
// same result:
// this.el.object3D.add(this.bar);
})
The height is 2, and if I placed an element with this position into the root (i.e. the scene) it would be placed correctly, right above the model. But when I add it to the Object3D it ends up somewhere below the model, at height ~0.5. Only by multiplying the height by 25 could I achieve the right position.
So how do I calculate the exact offset needed to place the new Object3D above the model, without multiplying by a random number?
UPDATE:
Adding a reproducible example. Note the width and height I had to pass to the GLTF model.
One way of placing objects above a model would be grabbing its bounding box and placing an object above it.
In general, it is simple, just like you did it:
let box = new THREE.Box3().setFromObject(this.el.object3D);
box.getSize(size)
let height = size.y + 1;
this.bar.position.set(0, height, 0);
But in this case - the bounding box is off. Way off. The minimum is way too low, and the maximum is somewhere in the middle. Why is that? (tldr: check it out here)
The culprit is: skinning. The model is transformed by its bones, which is a form of vertex displacement that happens on the GPU (in the vertex shader) and has nothing to do with the geometry (source).
Here is some visual aid - the model with its armature:
And without the armature applied:
Now we see why the box is off - it corresponds to the bottom picture!
So we need to re-create what the bones are doing to the geometry:
1. The hard route
Take a THREE.Box3.
Iterate through each geometry point of the model.
Apply the bone transform to each point (it is done here - but not available in a-frame 1.0.4).
Expand the THREE.Box3 by each transformed point (see the sketch just below).
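A minimal sketch of the hard route, assuming a three.js version where SkinnedMesh.boneTransform(index, target) is available (newer releases renamed it to applyBoneTransform):
function skinnedBoundingBox(mesh) {
    // mesh is a THREE.SkinnedMesh
    var box = new THREE.Box3();
    var position = mesh.geometry.attributes.position;
    var vertex = new THREE.Vector3();
    for (var i = 0; i < position.count; i++) {
        // apply the current bone transforms to vertex i (skinned, local space)...
        mesh.boneTransform(i, vertex);
        // ...bring it into world space, and grow the box around it
        vertex.applyMatrix4(mesh.matrixWorld);
        box.expandByPoint(vertex);
    }
    return box;
}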
2. The easy route
While looking into this, I've made a utility function THREE.Box3Utils.fromSkinnedMesh(mesh, box3); box3 will be the bounding box of the model at the time the function is called.
The function is a part of this repo.
It's used in this example.
I'm making a ThreeJS project in which I have planes (Object3D) flying inside a sphere (Mesh).
I'm trying to detect the collision between a plane and the border of the sphere so I can delete the plane and make it reappear at another place inside the sphere.
My question is: how do I detect when an object leaves another object?
The code I have now:
detectCollision(plane, sphere) {
var boxPlane = new THREE.Box3().setFromObject(plane);
boxPlane.applyMatrix4(plane.matrixWorld);
var boxSphere = new THREE.Box3().setFromObject(sphere);
boxSphere.applyMatrix4(sphere.matrixWorld);
return boxPlane.intersectsBox(boxSphere);
}
In my render function:
var collision = this.detectCollision(plane, this.radar);
if (collision == true) {
    console.log("the plane is inside the sphere");
} else {
    console.log("the plane is outside the sphere");
}
The problem is that while the planes are inside the sphere I get true and false basically all the time, until all the planes have left the sphere; at that point I get false and no more true.
Box3 is not what you want to use to calculate sphere and plane collisions, because the box won't respect the sphere's curvature, nor will it follow the plane's rotation.
Three.js has a class THREE.Sphere that is closer to what you need. Keep in mind that this class is not the same as a Mesh with a SphereGeometry; it is more of a math helper that doesn't render to the canvas. You can use its .containsPoint() method for what you need:
// center is a THREE.Vector3 and radius a number, describing the sphere in world space
var sphereCalc = new THREE.Sphere( center, radius );
var point = new THREE.Vector3(10, 4, -6);
detectCollision() {
var collided = sphereCalc.containsPoint(point);
if (collided) {
console.log("Point is in sphere");
} else {
console.log("No collision");
}
return collided;
}
You'll have to apply transforms and check all 4 points of each plane in a loop. Notice there's a Sphere.intersectsPlane() method that sounds like it would do this for you, but it's not the same: it uses an infinite plane to calculate the intersection, not one with a defined width and height, so don't use it.
Edit:
To clarify, each plane typically has 4 verts, so you'll have to check each vertex in a for() loop to see if the sphere contains each one of the 4 points.
Additionally, the plane will probably have been moved and rotated, so its original vertex positions will have a transform matrix applied to them. I think you were already taking this into account in your example, but it would be something like:
point.copy(vertex1);
point.applyMatrix4(plane.matrixWorld);
sphereCalc.containsPoint(point);

point.copy(vertex2);
point.applyMatrix4(plane.matrixWorld);
sphereCalc.containsPoint(point);

// ... and so on
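Put together, the check for one plane might look like this (a sketch; it assumes plane is a THREE.Mesh with a PlaneGeometry and sphereCalc is the THREE.Sphere from above):
var pos = plane.geometry.attributes.position;
var point = new THREE.Vector3();
var inside = true;
for (var i = 0; i < pos.count; i++) {
    point.fromBufferAttribute(pos, i);      // read vertex i
    point.applyMatrix4(plane.matrixWorld);  // move it into world space
    if (!sphereCalc.containsPoint(point)) { // any corner outside the sphere?
        inside = false;
        break;
    }
}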
I'm working on my project in Paper.js.
In part of it, I need to use a sprite with animation.
To explain it: I've got a spaceship that can be anywhere on the screen, and there is a distortion effect that happens sometimes.
I've prepared a spritesheet with 10 frames, and all I want is to use the Paper.js Raster class to load it and animate it on every frame.
The problem is in the positions, which I don't know how to calculate...
When I load a raster
let slide = new Raster({
source: 'assets/sprite.png',
position: [0, 0]
});
I see the center of a very long image, when I need to see the first frame.
My idea was to use a group containing a mask (a square)
let mask = new Rectangle({
position: [220, 100],
size: [186, 154],
});
so that I can change the position dynamically and animate the sprite at the same time.
Is it possible that way?
It would be cool if I could calculate the position of the raster against the mask, but for now that seems impossible to me.
Does anyone have an idea how to attain this in a simple way?
Cheers.
I've looked into this in my project. The key is using a group with a pivot point. This code is admittedly unfinished and in raw JavaScript, but it should give you a good idea:
var Sprite = paper.Group.extend({
_class: 'Sprite',
initialize: function Sprite(url, size) {
var maskSize = size || new paper.Size(256, 256);
var that = this;
this._raster = new paper.Raster(url);
this._raster.pivot = new paper.Point();
this._raster.on('load', function () {
that._spriteSheetWidth = Math.floor(this.size.width / maskSize.width);
that.setIndex(that._spriteIndex || 0);
});
this._clipRect = new paper.Path.Rectangle(new paper.Point(), maskSize);
Sprite.base.call(this, [this._clipRect, this._raster]);
this.clipped = true;
// Just use a blank point if you want the position to be in the corner
this.pivot = new paper.Point(maskSize.divide(2));
},
setIndex: function (index) {
if (typeof this._spriteSheetWidth !== "undefined") {
// TODO: FINISH SPRITE SHEET IMPLEMENTATION
}
this._spriteIndex = index;
}
});
I'm not actually using sprites in my project anymore, so I never finished the implementation. But the complicated concepts should be covered above, namely the way that paper.js implements pivot points and clipping masks. The position of an object is the center of its bounds by default; this is kind of unwieldy for a lot of reasons (an image's position will appear to change when it loads, or when the contents of a path change), so I like to set a pivot of 0,0 immediately after making any object. The next key point is that clipping masks only work on Groups. And finally, you can extend the Group class to make a standard Sprite class.
Normal sprite shifting of this._raster.position.x and this._raster.position.y should finish this implementation off.
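For what it's worth, the TODO could be filled in along these lines (a sketch only; it assumes the frames sit left-to-right, top-to-bottom in the sheet, and that the pivot setup above leaves this._raster.position at the image's top-left corner):
setIndex: function (index) {
    if (typeof this._spriteSheetWidth !== "undefined") {
        var cell = this._clipRect.bounds;
        var col = index % this._spriteSheetWidth;
        var row = Math.floor(index / this._spriteSheetWidth);
        // shift the sheet so the wanted frame sits under the clip rectangle
        this._raster.position = cell.topLeft.subtract(
            new paper.Point(col * cell.width, row * cell.height));
    }
    this._spriteIndex = index;
}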
Edit: Finished my implementation... https://jsfiddle.net/willstott101/vgxq9kak/
I'm a newbie in 3D computer graphics and I've seen an odd thing.
I used the XTK toolkit, which is great with DICOM. I added a cube to the scene and translated it far from the center (http://jsfiddle.net/64L47wtd/2/).
When the cube rotates, it looks like it is moving.
Is this a bug in XTK, or a principle problem with 3D rendering?
window.onload = function() {
    // create and initialize a 3D renderer
    var r = new X.renderer3D();
    r.init();
    // create a cube
    cube = new X.cube();
    // skin it..
    cube.texture.file = 'http://x.babymri.org/?xtk.png';
    cube.transform.translateX(250);
    cube.transform.translateY(200);
    cube.transform.translateX(270);
    r.add(cube); // add the cube to the renderer
    r.render(); // ..and render it
    // add some animation
    r.onRender = function() {
        // rotation by 1 degree in X and Y directions
        cube.transform.rotateX(1);
        cube.transform.rotateY(1);
    };
};
You're missing that the cube is a compound object consisting of several vertices, edges and/or faces. As a compound object it uses a local coordinate system with axes X, Y, Z; the actual cube is described internally by vertex coordinates relative to that local coordinate system.
By "translating" you adjust those relative vertex coordinates inside that local coordinate system. Rotation then still works around the axes of that local coordinate system, which is why the now off-center cube appears to orbit rather than spin in place.
Thus, this isn't an error in the X toolkit.
You might need to put the cube into another (probably fully transparent) container object: translate/move the container, but keep rotating the cube itself.
I tried to extend your fiddle accordingly but didn't succeed. Given the obvious intentions of the X Toolkit, this might be an intended limitation: it doesn't appear to support programmatic construction of complex scenes with multi-level object hierarchies through its API alone.
Is it possible to have a black outline on my 3D models with three.js?
I would like graphics that look like Borderlands 2 (toon shading + black outlines).
I'm sure I came in late, but let's hope this solves someone's question later.
Here's the deal: you don't need to render everything twice; the overhead actually is not substantial. All you need to do is duplicate the mesh and set the duplicate mesh's material side to THREE.BackSide. No double passes. You will be rendering two meshes instead, with most of the outline's geometry culled by WebGL's back-face culling.
Here's an example:
var scene = new THREE.Scene();
//Create main object
var mesh_geo = new THREE.BoxGeometry(1, 1, 1);
var mesh_mat = new THREE.MeshBasicMaterial({color : 0xff0000});
var mesh = new THREE.Mesh(mesh_geo, mesh_mat);
scene.add(mesh);
//Create outline object
var outline_geo = new THREE.BoxGeometry(1, 1, 1);
//Notice the second parameter of the material
var outline_mat = new THREE.MeshBasicMaterial({color : 0x00ff00, side: THREE.BackSide});
var outline = new THREE.Mesh(outline_geo, outline_mat);
//Scale the object up to have an outline (as discussed in previous answer)
outline.scale.multiplyScalar(1.5);
scene.add(outline);
For more details on backface culling, check out: http://en.wikipedia.org/wiki/Back-face_culling
The above approach works well if you want to add an outline to objects without adding a toon shader (and thus losing "realism").
Toon shading by itself supports edge detection; the 'cel' shader developed for Borderlands achieves this effect.
In cel shading, devs can either use the object-duplication method (done at the [low] pipeline level) or use image-processing filters for edge detection. This is the point where the performance tradeoff between the two techniques is weighed.
More info on cel: http://en.wikipedia.org/wiki/Cel_shading
Cheers!
Yes, it is possible, but not in a simple out-of-the-box way. For toon shading there are even shaders included in /examples/js/ShaderToon.js.
For the outlines, I think the most commonly suggested method is to render in two passes. The first pass renders the models in black, at a slightly larger scale; the second pass is at normal scale with the toon shaders. This way you'll see the larger black models as an outline. It's not perfect, but I don't think there's an easy way out. You might have more success searching for "three.js hidden line rendering"; while the look is different, a somewhat similar method is used to achieve it.
It's an old question, but here is what I did.
I created an outlined cel-shader for my CG course. Unfortunately it takes 3 rendering passes; I'm currently trying to figure out how to remove one pass.
Here's the idea:
1) Render a normal-depth image to a texture.
In the vertex shader you do what you normally do: transform the position to screen space and the normal to screen space.
In the fragment shader you calculate the depth of the pixel and then output the normal as the color, with the depth as the alpha value:
float ndcDepth = (2.0 * gl_FragCoord.z - gl_DepthRange.near - gl_DepthRange.far) / (gl_DepthRange.far - gl_DepthRange.near);
float clipDepth = ndcDepth / gl_FragCoord.w;
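// One way to write the result out (vNormal is an assumption here: whatever
// varying your vertex shader passes down for the screen-space normal):
gl_FragColor = vec4(normalize(vNormal) * 0.5 + 0.5, clipDepth);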
2) Render the scene onto a texture with cel shading. I changed the scene's override material (scene.overrideMaterial) for this.
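In three.js terms, that swap might look roughly like this (a sketch; celMaterial and celTarget are placeholder names, and setRenderTarget assumes a recent three.js):
scene.overrideMaterial = celMaterial;  // every mesh now renders with the cel shader
renderer.setRenderTarget(celTarget);   // draw into a texture instead of the canvas
renderer.render(scene, camera);
renderer.setRenderTarget(null);
scene.overrideMaterial = null;         // restore the normal materials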
3) Make a quad, render both textures onto it, and have an orthographic camera look at it. The cel-shaded texture is simply rendered on the quad, but on the normal-depth texture you run edge detection, and wherever an edge is detected you know the pixel needs to be black (an edge).