Fabric.js - how to use custom cursors without drawing mode - javascript

I have no idea how to set a custom cursor image for drawing on the canvas. I have noticed that I can only set it when
FABRICCANVAS.isDrawingMode = true;
However, the problem is that I have created dedicated drawing tools and I don't want to use the ones built into Fabric.js.
Sample of my code (which doesn't work properly):
const FABRICCANVAS = new fabric.Canvas('canvas-draft');
const DRAFT = document.querySelector(".upper-canvas");

button.addEventListener('click', () => {
    DRAFT.style.cursor = 'url(img/cursors/image.png) 0 34, auto';
});
But when I set isDrawingMode to true, it works. Unfortunately, I don't want to use the built-in drawing tools because they leave paths (which can then be moved later, when FABRICCANVAS.selection = true).
Do you know any solution for this problem?

On the Fabric.js canvas you can set different cursor properties, e.g.:
canvas.hoverCursor
canvas.defaultCursor
canvas.moveCursor
You can use absolute or relative paths to your cursor image:
canvas.moveCursor = 'url("...") 10 10, crosshair';
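For example, here is a minimal sketch of switching the cursor from a custom tool button without enabling isDrawingMode; the #brush-tool button is a placeholder and the cursor URL is the one from the question, while setCursor() is only there to apply the change immediately instead of waiting for the next mouse move:

const FABRICCANVAS = new fabric.Canvas('canvas-draft');
const brushCursor = 'url(img/cursors/image.png) 0 34, auto';

document.querySelector('#brush-tool').addEventListener('click', () => {
    // keep Fabric's object selection out of the way of the custom drawing tool
    FABRICCANVAS.selection = false;

    // cursor shown over empty canvas areas and over objects
    FABRICCANVAS.defaultCursor = brushCursor;
    FABRICCANVAS.hoverCursor = brushCursor;

    // apply right away rather than on the next mouse move
    FABRICCANVAS.setCursor(brushCursor);
});

Setting the cursor on the canvas object, instead of on the .upper-canvas element directly, should keep Fabric from overwriting it on the next hover event.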

Related

three.js: How can I target object's position to another (grouped) object, while allowing rotation to follow AR camera?

I'm using an augmented reality library that does some fancy image tracking stuff. After learning a whole lot about this project, I'm now beyond my current ability and could use some help. For our purposes, the library creates an (empty) anchor point at the center of an IRL image target in-camera, and then moves the virtual world around the IRL camera.
My goal is to drive plane.rotation so that it always faces the camera, while keeping plane.position locked to the anchor point. Additionally, the plane.rotation values will be referenced later in development.
const THREE = window.MINDAR.IMAGE.THREE;

document.addEventListener('DOMContentLoaded', () => {
    const start = async () => {
        // initialize MindAR
        const mindarThree = new window.MINDAR.IMAGE.MindARThree({
            container: document.body,
            imageTargetSrc: '../../assets/targets/testQR.mind',
        });
        const {renderer, scene, camera} = mindarThree;

        // create AR object
        const geometry = new THREE.PlaneGeometry(1, 1.25);
        const material = new THREE.MeshBasicMaterial({color: 0x00ffff, transparent: true, opacity: 0.5});
        const plane = new THREE.Mesh(geometry, material);

        // create anchor
        const anchor = mindarThree.addAnchor(0);
        anchor.group.add(plane);

        // start AR
        await mindarThree.start();
        renderer.setAnimationLoop(() => {
            renderer.render(scene, camera);
        });
    };
    start();
});
Everything I've tried so far went into the solutions already massaged into the (functioning) draft code above. I have, however, done some research and found a couple of avenues that might or might not work. I'm just tossing them out to see what might stick or inspire another solution. Skill-wise, I'm still in the beginner category, so any help figuring this out is much appreciated.
identify plane object by its group index number;
drive (override lib?) object rotation (x, y, z) to face camera;
possible solutions from dev:
"You can get those values through the anchor object, e.g. anchor.group.position. Meaning that you can use the current three.js API and get those values but without using it for rendering i.e. don't append the renderer.domElement to document."
"You can hack into the source code of mindar (it's open source)."
"Another way might be easier for you to try is to just create another camera yourself. I believe you can have multiple cameras, and just render another layer on top using your new camera."
I think it may be as simple as calling lookAt in the animation loop function:
// start AR
await mindarThree.start();
renderer.setAnimationLoop(() => {
    plane.lookAt(new THREE.Vector3());
    renderer.render(scene, camera);
});
This assumes the camera is always located at (0,0,0) (i.e., new THREE.Vector3()). This seems to be true from my limited testing. I found it helpful to debug by copy-pasting the MindAR three.js example into this codepen and printing some relevant values to the console.
Also note that, internally, MindAR's three.js module seems to directly modify the world matrix of the anchor.group object without modifying the position/rotation/scale parameters.
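If you need the anchor's pose as explicit values (for example, to reference the rotation later in development), one option consistent with the dev's first suggestion is to decompose the anchor group's world matrix inside the animation loop. This is only a sketch built on the draft code above, assuming anchor and plane are in scope:

const anchorPosition = new THREE.Vector3();
const anchorQuaternion = new THREE.Quaternion();
const anchorScale = new THREE.Vector3();

renderer.setAnimationLoop(() => {
    // MindAR writes anchor.group.matrixWorld directly, so read the pose from there
    anchor.group.matrixWorld.decompose(anchorPosition, anchorQuaternion, anchorScale);

    // keep the plane attached to the anchor but facing the camera at the origin
    plane.lookAt(new THREE.Vector3(0, 0, 0));

    renderer.render(scene, camera);
});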

HTML Canvas images with text differ across operating systems

I'm building a small JS class that holds a <canvas> element and uses it to draw some images dynamically. Particularly those images have both background (fill) and text.
After doing this, I wanted to write some unit tests around this to ensure that, given some input, the generated output (as a base64 string) is always the same. That means that if I want to generate an image with a pink background and white text, the generated output is always a pink image with white text, and the dimensions are the same. In other words, the base64 encoded strings are equal.
I can run these tests locally and they pass (MacOS) but they fail on my Jenkins integration job (runs on Linux). That's because the image's font is slightly different. After reading a lot on this, I suspect this is because the fonts are implemented differently across different OSs.
See the images:
Expected, obtained locally in MacOS:
Actual, obtained in Linux:
You can see that the actual image I'm obtaining is slightly taller than the expected one (the bottom padding below the text is smaller). The current font I'm using is "44px Arial".
See code:
Implementation
generateImage(backgroundColor, fontColor = "#000000") {
    if (backgroundColor) {
        this.drawBackgroundWithColor(backgroundColor);
    }
    this.drawTexOverBackground("Text to draw", X_POSITION, Y_POSITION, fontColor, "44px Arial");
    return this.getCurrentCanvasAsBase64String();
}

drawBackgroundWithColor(backgroundColor) {
    if (backgroundColor) {
        const context = this.getCanvasContext();
        context.save();
        context.fillStyle = backgroundColor;
        context.fillRect(0, 0, this.width, this.height);
        context.restore();
    }
}

drawTexOverBackground(text, x, y, hexColor, font = this.font) {
    const context = this.getCanvasContext();
    context.save();
    context.font = font;
    context.textBaseline = "top";
    context.fillStyle = hexColor;
    context.fillText(text, x, y);
    context.restore();
}
Unit test:
it('should create a basic pink background with black text', () => {
    const backgroundGenerator = new ImageGenerator();
    const createdBackground = backgroundGenerator.generateImage("#FFC0CB");
    assert.equal(createdBackground, expectedTransparentImageWithText);
});
Here, expectedTransparentImageWithText is a module-level variable holding the base64 string I expect, which I obtained on macOS.
I must clarify that for unit testing this I'm using the canvas npm module, which runs on Node and, as the docs say, "implements that API as closely as possible" (referring to the browser Canvas API). I suspect that's the reason why the fonts differ. I have also read that fonts DO differ between operating systems, and there's no 100% compatibility among them.
Is there any suggestion you can give me to unit test this better and take the fonts into account as well? Right now I'm only testing images without text, so I avoid running into the scenario explained above, but of course that's not the best thing to do.
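One way to make the text rendering deterministic across machines is to commit a font file to the repo and register it with the canvas npm module before any canvas is created, so the tests never fall back to whatever system "Arial" is available. This is only a sketch assuming the canvas npm module; the font path and family name are placeholders:

const { registerFont, createCanvas } = require('canvas');

// must be called before the canvas is created; the .ttf lives in the test fixtures
registerFont('./test/fixtures/TestSans-Regular.ttf', { family: 'TestSans' });

const canvas = createCanvas(200, 100);
const context = canvas.getContext('2d');
context.font = '44px TestSans';      // use the registered family instead of system Arial
context.textBaseline = 'top';
context.fillText('Text to draw', 10, 10);

const base64 = canvas.toDataURL();   // compare this against the stored fixture string

With the font pinned this way, the base64 comparison has a better chance of matching between macOS and the Linux Jenkins agent, although renderer versions can still introduce small differences.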

PIXI remove spriteSheet Texture cache

I have loaded sprite sheet frames using a JSON file.
const loader = new PIXI.loaders.Loader();
loader.add('bunny', 'data/bunny.png')
.add('spaceship', 'assets/spritesheet.json');
loader.load((loader, resources) => {
});
I want to remove only the TextureCache entries that were loaded from this spritesheet.json.
I have tried:
PIXI.Texture.removeFromCache("spaceship");
PIXI.Texture.removeTextureFromCache("spaceship");
But the names of all the sprite frames are still present in PIXI.TextureCache, and I am still able to use an image from a frame, like this:
var bgSprite2 = PIXI.Sprite.fromFrame("ship1");
bgSprite2.anchor.set(0.5, 0.5);
var pos = {x: 300, y: 200};
bgSprite2.position.set(pos.x, pos.y);
stage.addChild(bgSprite2);
I want to remove all of the sprite frame entries from the TextureCache and then load a new set of sprite frames.
I am doing this because I have spritesheet animations for two different spaceships, but the individual frame names of both spaceships are the same.
I would agree with Hachi that you could gain some performance from just replacing the texture rather than destroying and re-creating it over and over. Caching could be the answer.
You could then eventually call destroy when you're done with them, to make sure there is nothing lingering around.
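If you do need to evict everything that came from that one JSON, here is a sketch of one approach, assuming the loader resource is still reachable as resources.spaceship as in the snippet above; depending on your PIXI version you may need removeTextureFromCache instead of removeFromCache:

const sheetTextures = resources.spaceship.textures;           // frame name -> Texture
const frameNames = Object.keys(sheetTextures);
const baseTexture = sheetTextures[frameNames[0]].baseTexture; // shared sheet image

frameNames.forEach((frameName) => {
    PIXI.Texture.removeFromCache(frameName); // drop the TextureCache entry for this frame
    sheetTextures[frameName].destroy();      // destroy the individual frame texture
});

baseTexture.destroy(); // free the underlying sheet image once all frames are gone

After that, loading the second spaceship's spritesheet should no longer collide with the old frame names.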

How to render SVG with PixiJS?

I'm trying to make a game using SVG images for scalability and for procedurally making physical objects from them (see matter.js for how).
The problem I'm having is if I load 2 different SVG textures and then render them, the second has the first layered underneath it.
This doesn't happen with raster images and doesn't happen with the canvas options, only with WebGL.
Is there a way to stop this or am I doing the SVGs wrong?
var renderer = PIXI.autoDetectRenderer(
    window.innerWidth,
    window.innerHeight,
    {
        backgroundColor: 0xffffff,
        resolution: 2
    }
);

// add viewport and fix resolution doubling
document.body.appendChild(renderer.view);
renderer.view.style.width = "100%";
renderer.view.style.height = "100%";

var stage = new PIXI.Container();

// load gear svg
var texture = PIXI.Texture.fromImage('https://upload.wikimedia.org/wikipedia/commons/thumb/0/0b/Gear_icon_svg.svg/2000px-Gear_icon_svg.svg.png');
var gear = new PIXI.Sprite(texture);

// position and scale
gear.scale = {x: 0.1, y: 0.1};
gear.position = {x: window.innerWidth / 2, y: window.innerHeight / 2};
gear.anchor = {x: 0.5, y: 0.5};

// load heart svg
var texture2 = PIXI.Texture.fromImage('https://upload.wikimedia.org/wikipedia/commons/thumb/4/42/Love_Heart_SVG.svg/2000px-Love_Heart_SVG.svg.png');
var heart = new PIXI.Sprite(texture2);

// position and scale
heart.scale = {x: 0.1, y: 0.1};
heart.position = {x: window.innerWidth / 4, y: window.innerHeight / 2};
heart.anchor = {x: 0.5, y: 0.5};

// add to stage
stage.addChild(gear);
stage.addChild(heart);

// start animating
animate();

function animate() {
    gear.rotation += 0.05;
    // render the container
    renderer.render(stage);
    requestAnimationFrame(animate);
}
<script src="https://github.com/pixijs/pixi.js/releases/download/v4.8.2/pixi.min.js"></script>
Well, this example seems to work pretty well!
var beeSvg = "https://s3-us-west-2.amazonaws.com/s.cdpn.io/106114/bee.svg";
var beeTexture = PIXI.Texture.fromImage(beeSvg, undefined, undefined, 1.0);
var bee = new PIXI.Sprite(beeTexture);
See more at: https://codepen.io/osublake/pen/ORJjGj
So I think you're mixing concepts a bit.
SVG is one thing and WebGL is another.
SVGs are rendered by the browser, and you can scale them up or down without losing quality/resolution (or whatever you want to call it).
That isn't possible in WebGL, because WebGL rasterises images. It's a bit like taking a screenshot and putting it in a layer in Photoshop: you can manipulate that image, but you can't scale it up without starting to see the pixels.
So the short answer is: you can't use SVGs in WebGL and expect your graphics to "scale".
As for your example above, the result is expected: you are loading two PNG textures (rasterised thumbnails of the SVGs) and overlaying them.
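If you still want crisper output from the vector sources, one option with the PIXI v4 API used in the snippet above is to load the actual .svg file and pass a larger sourceScale, so PIXI rasterises the vector at a higher resolution before uploading it to the GPU. A sketch, assuming the URL below is the original SVG behind the Wikimedia thumbnail:

// rasterise the SVG at 4x its intrinsic size, then draw the sprite small
var gearSvg = 'https://upload.wikimedia.org/wikipedia/commons/0/0b/Gear_icon_svg.svg';
var gearTexture = PIXI.Texture.fromImage(gearSvg, undefined, undefined, 4.0);
var gear = new PIXI.Sprite(gearTexture);
gear.scale.set(0.1, 0.1); // still displayed small, but from a denser raster
stage.addChild(gear);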

Pattern for using Meteor with advanced SVG or Canvas Output

Is it sensible to use Meteor for a reactive data display that isn't primarily HTML based?
To be specific, I want to display a graph database as a set of boxes connected by lines. I'd like to allow live interaction with these boxes, and I'd also like them to be reactive, so if one user edits the data the display of any other users currently viewing the graph will update.
Meteor seems great for the reactivity, but most of the examples I've found focus on either HTML templates or very simple API interactions for doing things like adding a pin to a map.
I am currently thinking about using SVG or Canvas to display the graph database, but I am very unsure how best to integrate that with Meteor and/or some other library like maybe D3.
I found that Meteor works perfectly with canvas. I don't know if what I do is best practice, but I got the best results using Kinetic.js (available for Meteor via "mrt install kineticjs"). I use the template engine to call functions that set up the elements on my canvas. This is a small example of the code I use to place the players on my map.
The template:
<template name="canvas_map">
    <div id="grid_map" class="grid"></div>
    {{#with clear_canvas}}
        {{#each human}}
            {{handle_member_pos}}
        {{/each}}
    {{/with}}
</template>
the "clear_canvas" helper sets up a new Kinetic.Stage and the "handle_member_pos" helper gets a human object and places it on said canvas.
here are the helpers (coffeescript):
Template.canvas_map.clear_canvas = =>
  if Session.get('init')
    kinetic_elements.stage = new Kinetic.Stage
      container: 'grid_map'
      width: 385
      height: 375
    kinetic_elements.layer = new Kinetic.Layer()
  else
    false

Template.canvas_map.handle_member_pos = ->
  [x, y] = pos_to_str @profile.pos
  left = Math.floor(11 * x)
  top = Math.floor(11 * y)
  name = @profile.name
  unless kinetic_elements.avatars[name]?
    imageObj = new Image()
    imageObj.onload = =>
      element = new Kinetic.Image
        x: left
        y: top
        image: imageObj
        width: 50
        height: 50
      element.on 'click', (evt) =>
        Session.set 'selected', @profile._id
        window.propogation = false
        false
      kinetic_elements.layer.add element
      kinetic_elements.avatars[name] = [element, text]
      kinetic_elements.stage.add kinetic_elements.layer
    imageObj.src = 'human.png'
  else
    [element] = kinetic_elements.avatars[name]
    layer = kinetic_elements.layer
    element.setX left
    element.setY top
    layer.draw()
  return
As I said, I'm not sure if that is the best practice, but it works great for me. I hope this helps in some way.
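For the reactivity side of the question (boxes and lines that update when another user edits the data), another pattern worth sketching is to let an autorun computation watch the collection and redraw the canvas/SVG whenever the data changes. This is only a sketch: Boxes and drawGraph are placeholders for your graph collection and your D3/canvas rendering function, and depending on your Meteor version the API may be Deps.autorun rather than Tracker.autorun:

Tracker.autorun(function () {
  // reactive data source: this function re-runs whenever the documents change
  var boxes = Boxes.find().fetch();

  // placeholder: re-render the boxes and their connecting lines (e.g. with D3 or canvas)
  drawGraph(boxes);
});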
