THREE.BufferAttribute: .setArray has been removed. Use BufferGeometry.setAttribute (unindexBufferGeometry) - javascript

I would really appreciate some help updating the webgl-wireframes library code to the latest version of three.js.
This function causes the following errors:
Uncaught TypeError: THREE.Geometry is not a constructor
THREE.BufferAttribute: .setArray has been removed. Use BufferGeometry.setAttribute to replace/resize attribute buffers
Library with the implementation: https://github.com/mattdesl/webgl-wireframes
Thanks to Mugen87, the code below now works for me in place of the helper functions from the original library.
function createGeometry (edgeRemoval, x_divisions, y_divisions) {
  if (mesh.geometry) mesh.geometry.dispose();
  geometry = new THREE.PlaneBufferGeometry(3, 3, x_divisions, y_divisions);
  geometry = geometry.toNonIndexed();
  const pos = geometry.attributes.position;
  const count = pos.count / 3; // one iteration per triangle (three vertices each)
  let bary = [];
  const removeEdge = edgeRemoval;
  for (let i = 0; i < count; i++) {
    const even = i % 2 === 0;
    const Q = removeEdge ? 1 : 0;
    if (even) {
      bary.push(
        0, 0, 1,
        0, 1, 0,
        1, 0, Q
      );
    } else {
      bary.push(
        0, 1, 0,
        0, 0, 1,
        1, 0, Q
      );
    }
  }
  bary = new Float32Array(bary);
  geometry.setAttribute(
    'barycentric',
    new THREE.BufferAttribute(bary, 3)
  );
  mesh.geometry = geometry;
  mesh.material = material;
}

webgl-wireframes requires non-indexed geometries so barycentric coordinates can be computed for the wireframe effect. Hence, the project provides the helper function unindexBufferGeometry().
With the latest version of three.js (r128), the library can use BufferGeometry.toNonIndexed(), which does not throw the above errors. So this line should be:
geometry = geometry.toNonIndexed();
Note that setArray() was removed because it made it possible to resize buffer attributes. This workflow is no longer supported, since buffer attributes are considered to have a fixed size (for performance reasons). So if you want to resize buffer data, create a new geometry with new buffer attributes.
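For example, a minimal sketch of that workflow (the helper name replaceGeometry and its arguments are illustrative, not part of three.js or the library):
// Hypothetical helper: instead of resizing an existing attribute,
// build a fresh geometry and swap it onto the mesh.
function replaceGeometry (mesh, newPositions) { // newPositions: Float32Array, length = vertexCount * 3
  const geometry = new THREE.BufferGeometry();
  geometry.setAttribute('position', new THREE.BufferAttribute(newPositions, 3));
  mesh.geometry.dispose(); // free the GPU buffers of the old geometry
  mesh.geometry = geometry;
}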

Related

All mesh instances reset to (0,0,0) before they lerp to new position when count changes

I am currently working on my graduation project, in which mesh instances are repositioned over time. Besides the positions of the instances, the count of the mesh instances can also change over time.
Based on the following code examples, I managed to build this functionality.
https://jsfiddle.net/ew1tyz63/2/
https://threejs.org/examples/?q=dynami#webgl_instancing_dynamic
However, a problem appears when I want to lerp the positions of the instances: the positions of all instances reset to (0, 0, 0) when the count of the mesh instances changes.
I've created a codesandbox that reproduces this. The code has been forked from https://codesandbox.io/s/x8ric by James Wesc and tweaked a bit to clarify the issue.
The problem appears when you change the count of the instances by dragging the slider: the positions of all instances reset to (0, 0, 0).
Is there a way to stop the reset and only update the new instances when the count changes?
This is a link to the code sandbox.
https://codesandbox.io/s/instanced-mesh-lerping-positions-forked-d03ckr?file=/src/App.tsx
I added a snippet to the code as well!
Thanks in advance!!
const tempObject = new Object3D();
const tempMatrix = new Matrix4();
const tempVector = new Vector3();
const tempVector2 = new Vector3();

type XYZ = [number, number, number];
const data = dataJSON as Array<{ p1: XYZ; p2: XYZ }>;

const pos = new Vector3(10, 1, 1);
const YourCanvas = withControls(Canvas);

const Boxes: React.FC = () => {
  const count = useControl("count", {
    type: "number",
    value: 1000,
    min: 100,
    max: 1000,
    distance: 0.1
  });
  const ref = useRef<InstancedMesh>(null!);

  React.useEffect(() => {
    if (ref.current) {
      ref.current.instanceMatrix.setUsage(THREE.DynamicDrawUsage);
    }
  }, []);

  useFrame(({ clock: { elapsedTime } }) => {
    const t = Math.floor(elapsedTime / 5) % 2;
    for (let i = 0; i < count; i++) {
      ref.current.getMatrixAt(i, tempMatrix);
      tempVector.setFromMatrixPosition(tempMatrix);
      const toPosition = t ? data[i].p1 : data[i].p2;

      // Resets positions of all instances when count changes
      // tempVector2.set(toPosition[0], toPosition[1], toPosition[2])
      // tempObject.position.lerpVectors(tempVector, tempVector2, 0.01)

      // Only updating positions of new instances when count changes
      tempObject.position.set(toPosition[0], toPosition[1], toPosition[2]);
      tempObject.updateMatrix();
      ref.current.setMatrixAt(i, tempObject.matrix);
    }
    ref.current.instanceMatrix.needsUpdate = true;
  });

  return (
    <instancedMesh
      ref={ref}
      args={[
        new THREE.BoxGeometry(1.0, 1.0, 1.0, 1.0),
        new THREE.MeshStandardMaterial({ color: new THREE.Color("#00ff00") }),
        count
      ]}
    ></instancedMesh>
  );
};

Attempting to make Ray Tracing inside of p5.js but function recursion is acting weird

So, I found a source online that goes over ray tracing in C++ (https://www.scratchapixel.com/code.php?id=3&origin=/lessons/3d-basic-rendering/introduction-to-ray-tracing).
I decided to go into p5.js and attempt to replicate what they have in their source code, but I ran into an error when I got to the function recursion. To add reflections, they used recursion and ran the same function again, but when I attempt the same thing I get all sorts of incorrect outputs. This is my code:
https://editor.p5js.org/20025249/sketches/0LcyoY8yS
function trace(rayorig, raydir, spheres, depth) {
  let tnear = INFINITY;
  let sphere;
  // find intersection of this ray with the spheres in the scene
  for (let i = 0; i < spheres.length; i++) {
    t0 = INFINITY;
    t1 = INFINITY;
    if (spheres[i].intersect(rayorig, raydir)) {
      if (t0 < 0) t0 = t1;
      if (t0 < tnear) {
        tnear = t0;
        sphere = spheres[i];
      }
    }
  }
  // if there's no intersection return black or background color
  if (!sphere) return createVector(2, 2, 2);
  let surfaceColor = createVector(0); // color of the ray/surface of the object intersected by the ray
  let phit = createVector(rayorig.x, rayorig.y, rayorig.z).add(createVector(raydir.x, raydir.y, raydir.z).mult(tnear)); // point of intersection
  let nhit = createVector(phit.x, phit.y, phit.z).sub(sphere.center); // normal at the intersection point
  nhit.normalize(); // normalize normal direction
  // If the normal and the view direction are not opposite to each other
  // reverse the normal direction. That also means we are inside the sphere so set
  // the inside bool to true. Finally reverse the sign of IdotN which we want
  // positive.
  let bias = 1e-4; // add some bias to the point from which we will be tracing
  let inside = false;
  if (createVector(raydir.x, raydir.y, raydir.z).dot(nhit) > 0) {
    nhit = -nhit;
    inside = true;
  }
  if ((sphere.transparency > 0 || sphere.reflection > 0) && depth < MAX_RAY_DEPTH) {
    let facingratio = createVector(-raydir.x, -raydir.y, -raydir.z).dot(nhit);
    // change the mix value to tweak the effect
    let fresneleffect = mix(pow(1 - facingratio, 3), 1, 0.1);
    // compute reflection direction (no need to normalize because all vectors
    // are already normalized)
    let refldir = createVector(raydir.x, raydir.y, raydir.z).sub(createVector(nhit.x, nhit.y, nhit.z).mult(2).mult(createVector(raydir.x, raydir.y, raydir.z).dot(nhit)));
    refldir.normalize();
    // Here is the error:
    let reflection = trace(
      createVector(phit.x, phit.y, phit.z).add(createVector(nhit.x, nhit.y, nhit.z).mult(bias)),
      refldir,
      spheres,
      depth + 1
    );
    let refraction = createVector(0);
    // // if the sphere is also transparent compute refraction ray (transmission)
    // if (sphere.transparency) {
    //   let ior = 1.1;
    //   let eta = (inside) ? ior : 1 / ior; // are we inside or outside the surface?
    //   let cosi = createVector(-nhit.x, -nhit.y, -nhit.z).dot(raydir);
    //   let k = 1 - eta * eta * (1 - cosi * cosi);
    //   let refrdir = createVector(raydir.x, raydir.y, raydir.z).mult(eta).add(createVector(nhit.x, nhit.y, nhit.z).mult(eta * cosi - sqrt(k)));
    //   refrdir.normalize();
    //   refraction = trace(
    //     createVector(phit.x, phit.y, phit.z).sub(createVector(nhit.x, nhit.y, nhit.z).mult(bias)),
    //     refrdir,
    //     spheres,
    //     depth + 1
    //   );
    // }
    // the result is a mix of reflection and refraction (if the sphere is transparent)
    surfaceColor = (
      createVector(reflection.x, reflection.y, reflection.z)
        .mult(fresneleffect)
        .add(
          createVector(refraction.x, refraction.y, refraction.z).mult(1 - fresneleffect).mult(sphere.transparency)
        )
    ).mult(sphere.surfaceColor);
  }
  return createVector(surfaceColor.x, surfaceColor.y, surfaceColor.z).add(sphere.emissionColor);
}
The error is that the reflections don't give me the same output as the C++ script and seem to be wonky. I cannot for the life of me figure out why the recursive function just doesn't work.
I have attempted to run it without the recursion and it worked perfectly fine, but the recursion is where it breaks.
The way I found the error was by adding print statements to the original C++ script and to my version; everything matches up until the recursive reflections. I get the correct first output, but then it all goes downhill.
Their outputs:
[-0.224259 3.89783 -19.1297]
[-0.202411 3.88842 -19.0835]
[-0.180822 3.88236 -19.0538]
My outputs:
[-0.224259 3.89783 -19.1297] // correct
[-0.000065 0.001253 -0.005654] // incorrect
[-0.000064 0.00136 -0.00618] // incorrect
Summary: I made a function that works but the recursion breaks it and I cannot figure out why

FabricJS custom filter in Angular

I have a binary/grayscale image, and I want to filter it so that all white pixels become transparent and all dark pixels change to a user-specified color.
I have a problem creating a custom filter in Angular. All the examples I found are for pure JavaScript, and the demo page at http://fabricjs.com/image-filters does not work.
This is what I tried, based on the information I have:
private canvas: fabric.Canvas;

constructor() { this.initializeNewFilter(); }

initializeNewFilter() {
  fabric.Image.filters['Redify'] = fabric.util.createClass(fabric.Image.filters.BaseFilter, {
    type: 'Redify',
    applyTo: function (canvasEl) {
      const context = canvasEl.getContext('2d');
      const imageData = context.getImageData(0, 0, canvasEl.width, canvasEl.height);
      const data = imageData.data;
      for (let i = 0, len = data.length; i < len; i += 4) {
        data[i + 1] = 0;
        data[i + 2] = 0;
      }
      context.putImageData(imageData, 0, 0);
    }
  });

  fabric.Image.filters['Redify'].fromObject = function (object) {
    return new fabric.Image.filters['Redify'](object);
  };
}
Even in this basic example I get an error:
ERROR TypeError: canvasEl.getContext is not a function
const context = canvasEl.getContext('2d') // getContext('2d') does not exist, but canvasEl is there
Also, I do not know how to pass the user-specified color into the custom filter. Is there a better explanation anywhere?
It depends on the filtering engine you are using; getContext is available when the 2D backend, fabric.Canvas2dFilterBackend(), is in use.
Try:
fabric.initFilterBackend = function () {
  return new fabric.Canvas2dFilterBackend();
};
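With the 2D backend in place, applying the custom filter might look like the following sketch (imgInstance is an assumed fabric.Image that has already been added to the canvas; it is not part of the original code):
// Assumed: imgInstance is a fabric.Image already added to this.canvas.
imgInstance.filters.push(new fabric.Image.filters['Redify']());
imgInstance.applyFilters(); // runs every filter in the array through the active backend
this.canvas.renderAll();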
reference:
FabricJS filtering overview

three.js - How to have non visual attributes connected to objects?

I have a rather broad question but no idea how to tackle it, so forgive me.
I am trying to have several (200 or more) objects and, let's just say, a container to the side of the field where I draw the objects. What I want is for each object to have some non-visual attributes, and when I click on an object, its attributes should appear in that container.
How could I go about it?
I know I can ask for the name of the selected object and then do a key-value lookup in some dictionary. The question is whether there is an easier way to go about it.
For the click event I used a library called threex.domevents; check its GitHub page for more information. The code for the event is self-explanatory.
First, domevents needs to be initialized in your scene like this:
var domEvents = new THREEx.DomEvents(camera, renderer.domElement);
Then I created a custom Mesh object:
// random id
function genRandomId() {
  var text = "";
  var possible = "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789";
  for (var i = 0; i < 5; i++)
    text += possible.charAt(Math.floor(Math.random() * possible.length));
  return text;
}

// random int for position
var min = -50;
var max = 50;
function genRandomInt(min, max) {
  return Math.floor(Math.random() * (max - min)) + min;
}

// custom mesh --------------------------------------------
function MyMesh(geometry, material, destinationContainer) {
  THREE.Mesh.call(this, geometry, material);

  this.userData = {
    foo1: genRandomId(),
    foo2: genRandomId(),
    foo3: genRandomId()
  };

  this.position.x = genRandomInt(min, max);
  this.position.y = genRandomInt(min, max);
  this.position.z = genRandomInt(min, max);

  var that = this;

  // click event listener
  domEvents.addEventListener(this, 'click', function (event) {
    console.log('clicked object on position:');
    console.log(that.position);
    destinationContainer.userData = that.userData;
    console.log('Now the container has:');
    console.log(destinationContainer.userData);
  }, false);
}

MyMesh.prototype = Object.create(THREE.Mesh.prototype);
MyMesh.prototype.constructor = MyMesh;
genRandomId and genRandomInt are random generators for the purpose of illustrating this example; I took the code for the random ids from "Generate random string/characters in JavaScript".
In your scene you can generate 200 (or more) MyMesh meshes and add them to the scene:
const color = 0x156289;
const emissive = 0x072534;

var planeGeometry = new THREE.PlaneGeometry(5, 5);
var planeMaterial = new THREE.MeshPhongMaterial({
  color: color,
  emissive: emissive,
  side: THREE.DoubleSide,
  shading: THREE.FlatShading
});
var planeMesh = new THREE.Mesh(planeGeometry, planeMaterial);
scene.add(planeMesh);

var objGeometry = new THREE.BoxGeometry(1, 1, 1);
var objMaterial = new THREE.MeshPhongMaterial({
  color: color,
  emissive: emissive,
  shading: THREE.FlatShading
});

var i = 0;
while (i < 200) {
  scene.add(new MyMesh(objGeometry, objMaterial, planeMesh));
  i++;
}
And finally render the scene:
var render = function () {
  requestAnimationFrame(render);
  planeMesh.rotation.x += 0.010;
  planeMesh.rotation.y += 0.010;
  renderer.render(scene, camera);
};
render();
This is a demo with the full source code: http://run.plnkr.co/plunks/W4x8XsXVroOaLUCSeXgO/
Open the browser console, click on a cube, and you'll see that planeMesh takes on the userData attributes of the clicked cube mesh.
Yes, that's fine. You can put your own custom keys directly on a Three.js object, and it shouldn't cause any problems as long as you don't accidentally overwrite an important built-in Three.js key. For that reason, I'd recommend putting all of your custom keys in a "namespace" on the object so they're nice, neat, and contained.
For example, if you had a Three.js object foo, you could put all your keys under foo.myCustomNamespace, so that your custom data like foo.myCustomNamespace.name, foo.myCustomNamespace.description, etc. are all together and won't interfere with THREE.js properties.
Edit: Three.js provides a built-in namespace for user data called, conveniently, userData. Access it on THREE.Object3D.userData.
https://github.com/mrdoob/three.js/blob/master/src/core/Object3D.js#L92
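As a minimal sketch of that built-in namespace (the keys and values below are illustrative, not prescribed by three.js):
// stash arbitrary non-visual data on the object
var box = new THREE.Mesh(
  new THREE.BoxGeometry(1, 1, 1),
  new THREE.MeshPhongMaterial({ color: 0x156289 })
);
box.userData = { name: 'crate-42', description: 'spare parts', weightKg: 3.5 };
scene.add(box);

// later, e.g. in a click handler, read it back
console.log(box.userData.name); // "crate-42"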

generate texture from array in threejs

I am trying to generate a texture from an array in threeJS and it is not working as expected.
It appears that the way I generate the texture is not correct.
If I use the following texture, it works as expected.
http://www.html5canvastutorials.com/demos/assets/crate.jpg
crateTex = THREE.ImageUtils.loadTexture('data/crate.jpg');
If I generate a dummy texture and try to display it, it is all black...
var dummyRGBA = new Uint8Array(4 * 4 * 4);
for (var i = 0; i < 4 * 4; i++) {
  // RGB from 0 to 255
  dummyRGBA[4 * i] = dummyRGBA[4 * i + 1] = dummyRGBA[4 * i + 2] = 255 * i / (4 * 4);
  // OPACITY
  dummyRGBA[4 * i + 3] = 255;
}

dummyDataTex = new THREE.DataTexture( dummyRGBA, 4, 4, THREE.RGBAFormat );
dummyDataTex.needsUpdate = true;

dummyTex = new THREE.Texture(dummyDataTex);
I think your mistake is that you are creating a texture from a texture.
When you do:
dummyDataTex = new THREE.DataTexture( dummyRGBA, 4, 4, THREE.RGBAFormat );
the object dummyDataTex that you create here is already of type THREE.Texture.
So your next step:
dummyTex = new THREE.Texture(dummyDataTex);
is not necessary. You should use dummyDataTex directly.
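For example, a minimal sketch of using the DataTexture directly (assuming a scene is already set up; the material and mesh names are illustrative):
// use the DataTexture as the map of a material, with no extra wrapping
var dataMaterial = new THREE.MeshBasicMaterial({ map: dummyDataTex });
var dataPlane = new THREE.Mesh(new THREE.PlaneGeometry(2, 2), dataMaterial);
scene.add(dataPlane);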
