2D face tracking on Spark AR - javascript

I've been trying to make the face texture in a 2D canvas/plane move only along the X and Y axes, following the movements of the face without rotating, with the 2D background camera texture reflected accurately on top. Right now, when I connect the canvas to the face tracker, I get distorted scale and the 2D plane rotates in 3D space. See below for the current canvas/camera texture/face tracker set-up. Scaling things manually results in poor tracking.
Here is my code:
const FaceTracking = require('FaceTracking');
const Scene = require('Scene');
export const Diagnostics = require('Diagnostics');

// Enable async/await in JS [part 1]
(async function () {
  // Locate the plane in the Scene
  const [plane] = await Promise.all([
    Scene.root.findFirst('blur_plane')
  ]);

  // Store a reference to a detected face
  const face = FaceTracking.face(0);

  // To access scene objects
  const planeTransform = plane.transform;
  const faceTransform = face.cameraTransform;
  // const blurCanvas = Scene.root.find('canvas0');

  // To access class properties
  planeTransform.rotationX = faceTransform[0];
  planeTransform.rotationY = faceTransform[0];
  planeTransform.rotationZ = faceTransform[0];
})();
This is the current look:
I want the 375x667 px canvas to look exactly like the camera layer beneath it, so that when the camera texture is left unmodified the layer is effectively invisible.

Turns out Facebook has an example that deals with 2D movement but not scale:
https://sparkar.facebook.com/ar-studio/learn/reference/classes/facetrackingmodule
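Based on that, here is a minimal sketch of a movement-only binding. It assumes the plane is a 3D plane parented so that camera-space units line up (e.g. under the Focal Distance), and that transform.x/y, cameraTransform.x/y and Reactive.val are the signal names as I understand the Spark AR API; treat it as a starting point, not a drop-in fix.
const FaceTracking = require('FaceTracking');
const Scene = require('Scene');
const Reactive = require('Reactive');

(async function () {
  const plane = await Scene.root.findFirst('blur_plane');
  const face = FaceTracking.face(0);

  // Follow the face along X/Y only; Z stays wherever the plane was placed
  plane.transform.x = face.cameraTransform.x;
  plane.transform.y = face.cameraTransform.y;

  // Pin rotation so the plane never tilts with the head
  plane.transform.rotationX = Reactive.val(0);
  plane.transform.rotationY = Reactive.val(0);
  plane.transform.rotationZ = Reactive.val(0);
})();
Scale is still untouched here, which matches the Facebook sample: it only covers movement.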

Related

Get coordinates from a 3D object in Three.js

I am building a Three.js app (React template, if that matters). I have a 3D model that should act as the planet Earth in the app, and a space station model. I want to rotate the station around the globe by giving it specific coordinates every other second. My questions are:
How can I place the space station above London, for example, if I have these coordinates:
long: 45.926013877299 and lat: 46.524648101056 (random)
How can I use my animate function now, because that's how I create the mesh:
const loader = new GLTFLoader();
loader.load("path/to/model", gltf => {
  scene.add(gltf.scene);
});
instead of:
const earthMesh = new THREE.Mesh(earthGeometry, earthMaterial);
scene.add(earthMesh);
My animate() function:
const animate = () => {
  requestAnimationFrame(animate);
  earthMesh.rotation.y -= 0.0015;
  renderer.render(scene, camera);
};
Since loader.load() returns nothing, I cannot store the loaded model in a variable and use it the way I use earthMesh (e.g. earthMesh.position.y -= 0.0015).
Sorry for the badly formatted code, but for some reason the code button {} is not formatting the text as expected.
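For what it's worth, a minimal sketch of one way to handle both parts: keep the variable in an outer scope and assign it inside the load callback, and convert lat/long with a helper. The latLongToVector3 helper, the earthRadius constant, and the exact axis mapping are my assumptions (they depend on how the Earth texture is oriented), and scene/renderer/camera/earthMesh are taken from the existing setup.
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

const earthRadius = 1; // match your globe's radius

// Hypothetical helper: convert latitude/longitude in degrees to a point on a sphere.
// Verify the sign/axis choices against how your Earth texture is mapped.
function latLongToVector3(latDeg, lonDeg, radius) {
  const lat = THREE.MathUtils.degToRad(latDeg);
  const lon = THREE.MathUtils.degToRad(lonDeg);
  return new THREE.Vector3(
    radius * Math.cos(lat) * Math.cos(lon),
    radius * Math.sin(lat),
    radius * Math.cos(lat) * Math.sin(lon)
  );
}

let station; // assigned once the GLTF finishes loading
const loader = new GLTFLoader();
loader.load("path/to/model", gltf => {
  station = gltf.scene;
  // place it slightly above the surface; 51.5 / -0.13 is roughly London
  station.position.copy(latLongToVector3(51.5, -0.13, earthRadius * 1.1));
  scene.add(station);
});

const animate = () => {
  requestAnimationFrame(animate);
  earthMesh.rotation.y -= 0.0015;
  if (station) station.rotation.y -= 0.0015; // guard: the model may not be loaded yet
  renderer.render(scene, camera);
};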

three.js: How can I target object's position to another (grouped) object, while allowing rotation to follow AR camera?

I'm using an augmented reality library that does some fancy image tracking stuff. After learning a whole lot about this project, I'm now beyond my current ability and could use some help. For our purposes, the library creates an (empty) anchor point at the center of an IRL image target in-camera. Then moves the virtual world around the IRL camera.
My goal is to drive plane.rotation to always face the camera, while keeping plane.position locked to the anchor point. Additionally, plane.rotation values will be referenced later in development.
const THREE = window.MINDAR.IMAGE.THREE;

document.addEventListener('DOMContentLoaded', () => {
  const start = async () => {
    // initialize MindAR
    const mindarThree = new window.MINDAR.IMAGE.MindARThree({
      container: document.body,
      imageTargetSrc: '../../assets/targets/testQR.mind',
    });
    const {renderer, scene, camera} = mindarThree;

    // create AR object
    const geometry = new THREE.PlaneGeometry(1, 1.25);
    const material = new THREE.MeshBasicMaterial({color: 0x00ffff, transparent: true, opacity: 0.5});
    const plane = new THREE.Mesh(geometry, material);

    // create anchor
    const anchor = mindarThree.addAnchor(0);
    anchor.group.add(plane);

    // start AR
    await mindarThree.start();
    renderer.setAnimationLoop(() => {
      renderer.render(scene, camera);
    });
  };
  start();
});
Everything I've tried so far is already massaged into the (functioning) draft code above. I have, however, done some research and found a couple of avenues that may or may not work. I'm just tossing them out to see what sticks or inspires another solution. Skill-wise, I'm still in the beginner category, so any help figuring this out is much appreciated.
identify plane object by its group index number;
drive (override lib?) object rotation (x, y, z) to face camera;
possible solutions from dev:
"You can get those values through the anchor object, e.g. anchor.group.position. Meaning that you can use the current three.js API and get those values but without using it for rendering i.e. don't append the renderer.domElement to document."
"You can hack into the source code of mindar (it's open source)."
"Another way might be easier for you to try is to just create another camera yourself. I believe you can have multiple cameras, and just render another layer on top using your new camera."
I think it may be as simple as calling lookAt in the animation loop function:
// start AR
await mindarThree.start();
renderer.setAnimationLoop(() => {
  plane.lookAt(new THREE.Vector3());
  renderer.render(scene, camera);
});
This assumes the camera is always located at (0,0,0) (i.e., new THREE.Vector3()). This seems to be true from my limited testing. I found it helpful to debug by copy-pasting the MindAR three.js example into this codepen and printing some relevant values to the console.
Also note that, internally, MindAR's three.js module seems to directly modify the world matrix of the anchor.group object without modifying the position/rotation/scale parameters.
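Given that, if the anchor's numeric pose is needed later (as mentioned above) or the camera cannot be assumed to stay at the origin, a slightly more general loop can read the values out each frame. This is a sketch using the same variables as the draft code; getWorldPosition decomposes the world matrix, so it still works even though MindAR writes to the matrix directly.
const anchorWorldPos = new THREE.Vector3();

renderer.setAnimationLoop(() => {
  // read the anchor's world-space position each frame
  anchor.group.getWorldPosition(anchorWorldPos);

  // face the camera explicitly instead of assuming it sits at the origin
  plane.lookAt(camera.position);

  // plane.rotation now holds the orientation values to reference later
  renderer.render(scene, camera);
});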

Blender Coordinates to three.js

I have loaded a model exported from Blender into three.js. I'm using the following code to position the axes helper:
var axesHelper = new THREE.AxesHelper(10);
axesHelper.position.x = 0.21785;
axesHelper.position.y = 2.85146;
axesHelper.position.z = 0.800149;
I obtained these coordinates from Blender. Unfortunately, they end up in the wrong position when viewed in three.js. What am I missing?
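One likely cause: Blender is Z-up while three.js is Y-up, so coordinates copied straight out of Blender usually need their axes remapped. A sketch of the usual remapping, assuming the model itself was exported with the default "+Y up" setting (which is why the model looks right while hand-copied points don't):
// Remap a Blender (Z-up) point to three.js (Y-up): (x, y, z) -> (x, z, -y)
function blenderToThree(x, y, z) {
  return new THREE.Vector3(x, z, -y);
}

var axesHelper = new THREE.AxesHelper(10);
axesHelper.position.copy(blenderToThree(0.21785, 2.85146, 0.800149));
scene.add(axesHelper);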

Fabric.js - how to use custom cursors without drawing mode

I have no idea how to set a cursor image for drawing on the canvas. I have noticed that I can set it only when
FABRICCANVAS.isDrawingMode = true;
However, the problem is that I have created dedicated drawing tools and I don't want to use those built into Fabric.js.
Sample of my code (which doesn't work properly):
const FABRICCANVAS = new fabric.Canvas('canvas-draft');
const DRAFT = document.querySelector(".upper-canvas");
button.addEventListener('click', () => {
  DRAFT.style.cursor = 'url(img/cursors/image.png) 0 34, auto';
});
But when I set isDrawingMode to true, it works. Unfortunately, I don't want to use built-in drawing tools because they leave paths (that can then be moved later, when FABRICCANVAS.selection = true).
Do you know any solution for this problem?
For the Canvas you can set different cursors:
e.g.
canvas.hoverCursor
canvas.defaultCursor
canvas.moveCursor
You can use absolute or relative paths to your cursor image:
canvas.moveCursor = 'url("...") 10 10, crosshair';
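Applied to the draft code above, that would look something like the following. The cursor properties are set on the fabric.Canvas instance itself, so there is no need to target .upper-canvas; treat this as a sketch rather than a drop-in fix.
const FABRICCANVAS = new fabric.Canvas('canvas-draft');

button.addEventListener('click', () => {
  const cursor = 'url(img/cursors/image.png) 0 34, auto';
  FABRICCANVAS.defaultCursor = cursor; // shown over empty canvas areas
  FABRICCANVAS.hoverCursor = cursor;   // shown when hovering objects
});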

How to render SVG with PixiJS?

I'm trying to make a game using SVG images for scalability and for procedurally making physical objects from them (see matter.js for how).
The problem I'm having is that if I load two different SVG textures and then render them, the second has the first layered underneath it.
This doesn't happen with raster images and doesn't happen with the canvas renderer, only with WebGL.
Is there a way to stop this, or am I doing the SVGs wrong?
var renderer = PIXI.autoDetectRenderer(
  window.innerWidth,
  window.innerHeight,
  {
    backgroundColor: 0xffffff,
    resolution: 2
  }
);

// add viewport and fix resolution doubling
document.body.appendChild(renderer.view);
renderer.view.style.width = "100%";
renderer.view.style.height = "100%";

var stage = new PIXI.Container();

// load gear svg
var texture = PIXI.Texture.fromImage('https://upload.wikimedia.org/wikipedia/commons/thumb/0/0b/Gear_icon_svg.svg/2000px-Gear_icon_svg.svg.png');
var gear = new PIXI.Sprite(texture);

// position and scale
gear.scale = {x: 0.1, y: 0.1};
gear.position = {x: window.innerWidth / 2, y: window.innerHeight / 2};
gear.anchor = {x: 0.5, y: 0.5};

// load heart svg
var texture2 = PIXI.Texture.fromImage('https://upload.wikimedia.org/wikipedia/commons/thumb/4/42/Love_Heart_SVG.svg/2000px-Love_Heart_SVG.svg.png');
var heart = new PIXI.Sprite(texture2);

// position and scale
heart.scale = {x: 0.1, y: 0.1};
heart.position = {x: window.innerWidth / 4, y: window.innerHeight / 2};
heart.anchor = {x: 0.5, y: 0.5};

// add to stage
stage.addChild(gear);
stage.addChild(heart);

// start animating
animate();
function animate() {
  gear.rotation += 0.05;
  // render the container
  renderer.render(stage);
  requestAnimationFrame(animate);
}
<script src="https://github.com/pixijs/pixi.js/releases/download/v4.8.2/pixi.min.js"></script>
Well, this example seems to work pretty well!
var beeSvg = "https://s3-us-west-2.amazonaws.com/s.cdpn.io/106114/bee.svg";
var beeTexture = PIXI.Texture.fromImage(beeSvg, undefined, undefined, 1.0);
var bee = new PIXI.Sprite(beeTexture);
See more at: https://codepen.io/osublake/pen/ORJjGj
So I think you're mixing concepts a bit.
SVG is one thing and WebGL is another.
SVGs are rendered by the browser, and you can scale them up or down without losing quality/resolution (or whatever you want to call it).
That characteristic isn't possible in WebGL, because WebGL rasterises images. It's a bit like taking a screenshot and putting it in a layer in Photoshop: you can manipulate that image, but you can't scale it up without starting to see the pixels.
So the short answer is: you can't use SVGs in WebGL hoping to make your graphics "scale".
In regards to your example above, the result is expected.
You are loading two PNG textures and overlaying them.
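If the goal is still crisper sprites, one option (a sketch, not tested here) is to point Texture.fromImage at the actual .svg files instead of the PNG thumbnails and raise the sourceScale argument shown in the answer above, so Pixi rasterises the SVG at a higher resolution before uploading it to WebGL. The URL below is a placeholder for the direct .svg file.
// Pixi v4: Texture.fromImage(url, crossorigin, scaleMode, sourceScale)
var gearTexture = PIXI.Texture.fromImage('path/to/gear.svg', undefined, undefined, 4.0);
var gear = new PIXI.Sprite(gearTexture);
gear.scale.set(0.1); // still scaled down, but from a 4x rasterisation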
