How to calculate the time taken by PixiJs for initial render?
This is my function, which renders a thousand rectangles. I want to measure the time PixiJS takes to complete the initial render.
Is there any renderComplete event in PixiJS?
function createRectsUsingPixi (container, width, height, points) {
const app = new PIXI.Application({ antialias: true, width, height });
container.appendChild(app.view);
const rect = new PIXI.Graphics();
rect.beginFill(0x626262);
for (let i = 0; i < points.length; i++) {
const { x, y, width, height } = points[i];
rect.drawRect(x, y, width, height);
}
rect.endFill();
app.stage.addChild(rect);
}
Please check this example: https://codepen.io/domis86/pen/poJrKdq - and focus on the following lines:
This creates a Ticker and turns off "automatic rendering":
// See: https://pixijs.download/dev/docs/PIXI.Ticker_.html
let ticker = PIXI.Ticker.shared;
ticker.autoStart = false;
ticker.stop();
Then, inside the "game loop", we call the render method manually:
renderer.render(app.stage);
This render method seems to be synchronous according to experts (including one of the authors of pixi.js) here: https://github.com/pixijs/pixi.js/issues/5299 (comment about manual render: https://github.com/pixijs/pixi.js/issues/5299#issuecomment-449080238 ) and here: https://www.html5gamedevs.com/topic/27849-detect-when-view-has-been-rendered/?do=findComment&comment=160061
So, you can calculate the execution time of the render method.
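Since the call is synchronous, a pair of performance.now() readings around it is enough. A minimal sketch (timeSync is a hypothetical helper; app is the Application from the question's code):

```javascript
// Hypothetical helper: measure how long a synchronous function takes.
function timeSync(label, fn) {
  const t0 = performance.now();
  fn();
  const elapsed = performance.now() - t0;
  console.log(`${label}: ${elapsed.toFixed(2)} ms`);
  return elapsed;
}

// Usage (assumes `app` from the question, with the Ticker stopped):
// timeSync('initial render', () => app.renderer.render(app.stage));
```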
See also description how to setup "custom game loop" without "Ticker": https://github.com/pixijs/pixi.js/wiki/v5-Custom-Application-GameLoop#custom-gameloop
Is there any renderComplete event in PixiJS?
You can try https://pixijs.download/dev/docs/PIXI.Renderer.html#postrender
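As a sketch (assuming a PixiJS version whose renderer emits the prerender/postrender events; logFirstRender is a hypothetical helper and app is the Application from the question):

```javascript
// Hypothetical sketch: log when the first render has finished.
function logFirstRender(app, t0) {
  app.renderer.once('postrender', () => {
    console.log(`first render finished ${performance.now() - t0} ms after start`);
  });
}
```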
Also, an additional performance tip: in your example I see that you create one PIXI.Graphics object and draw many rectangles inside it. Try creating one PIXI.Graphics per rectangle (draw one rectangle per PIXI.Graphics object) and compare performance.
Also, you usually want to create the PIXI.Application only once (at the beginning of the program). So the line:
const app = new PIXI.Application({ antialias: true, width, height });
should probably be extracted from the function createRectsUsingPixi and called separately, once, elsewhere.
I'm trying to add an Object3D to my gltf model and place it above the model. I'm doing it the following way:
this.el.addEventListener('model-loaded', () => {
this.bar = new MyCustomObject3D();
const size = new THREE.Vector3();
let box = new THREE.Box3().setFromObject(this.el.object3D);
box.getSize(size)
let height = size.y + 1;
this.bar.position.set(0, height, 0);
this.el.setObject3D("bar", this.bar);
// same result:
// this.el.object3D.add(this.bar);
})
The height is 2, and if I placed an element with this position into the root (i.e. the scene), it would be placed correctly, right above the model. But when I add it to the Object3D, it ends up somewhere below the model, at height ~0.5. Only by multiplying the height by 25 could I achieve the right position.
So how do I calculate the exact offset needed to place the new Object3D above the model, without multiplying it by a random number?
UPDATE:
Adding a reproducible example. Note the width and height I had to pass to the GLTF model.
One way of placing objects above a model, would be grabbing its bounding box, and placing an object above it.
In general, it is simple - just like you did it:
let box = new THREE.Box3().setFromObject(this.el.object3D);
box.getSize(size)
let height = size.y + 1;
this.bar.position.set(0, height, 0);
But in this case, the bounding box is off. Way off. The minimum is way too low, and the maximum is somewhere in the middle. Why is that? (tl;dr: check it out here)
The culprit is: skinning. The model is transformed by its bones, which is a form of vertex displacement that happens on the GPU (in the vertex shader) and has nothing to do with the geometry (source).
Here is some visual aid - the model with its armature:
And without the armature applied:
Now we see why the box is off - it corresponds to the bottom picture!
So we need to re-create what the bones are doing to the geometry:
1. The hard route
Take a THREE.Box3.
Iterate over each geometry point (vertex) of the model.
Apply the bone transforms to each point (three.js does this here - but it's not available in a-frame 1.0.4).
Expand the THREE.Box3 by each transformed point.
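The steps above could be sketched like this (a hypothetical helper; it assumes a recent three.js where SkinnedMesh.boneTransform(index, target) exists, which, as noted, is not available in a-frame 1.0.4):

```javascript
// Sketch: expand a Box3 by every skinned vertex of a mesh.
function computeSkinnedBox(mesh, box) {
  const v = new THREE.Vector3();
  const position = mesh.geometry.attributes.position;
  box.makeEmpty();
  for (let i = 0; i < position.count; i++) {
    mesh.boneTransform(i, v);         // vertex position after bone displacement
    v.applyMatrix4(mesh.matrixWorld); // bring it into world space
    box.expandByPoint(v);
  }
  return box;
}
```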
2. The easy route
While looking into this, I've made a utility function, THREE.Box3Utils.fromSkinnedMesh(mesh, box3); box3 will be the bounding box of the model at the time the function is called.
The function is a part of this repo.
It's used in this example.
I've been experimenting with a basic game loop with HTML's Canvas element. Numerous tutorials online don't go into enough detail with the concepts of rendering and canvas.ctx (context).
What I'm trying to do is something very simple: Render an image on a canvas element and, on keydown, update its position and render it at the new location, making it move across the screen. Basically, what every video game does with its sprites.
I've been told through these tutorials that ctx.drawImage(image, x, y, ...) will work for this. However, what ends up happening in my version is essentially what happens when you win a game of solitaire on windows. It repeats the sprite's image as if it's creating a brand new sprite each time the game loops. The sprite itself doesn't move, a new sprite seems to be generated to the left/right/etc of the original one. I understand that I'm calling ctx.drawImage(...) every time I'm iterating through the game loop. However, this didn't happen when I used ctx.clearRect(...). It worked precisely how I expected it to. I'm not exactly sure why creating a rectangle with ctx works while creating an image doesn't.
My question is: Is there a way to simply update the position of the sprite without creating a brand new version of it every single loop?
Here's my relevant code:
let lastRender = 0; // For the general loop
let image = new Image();
image.src = "/img/image.png";
let state = {
pressedKeys: {
// left, right, up, down: false
},
position: {
x: canvas.width / 2,
y: canvas.height / 2
},
speed: 20
}
let pepsi = new Sprite({
img: image,
width: 100,
height: 100
});
function Sprite (options) {
this.img = options.img;
this.width = options.width;
this.height = options.height;
this.render = function(){
ctx.drawImage(
this.img,
state.position.x,
state.position.y
)
}
}
function updatePosition(progress) {
//pressedKeys is just an object that relates WASD to the key codes
// and their respective directions, it's ignorable
if (state.pressedKeys.left) {
state.position.x -= state.speed;
}
if (state.pressedKeys.right) {
state.position.x += state.speed;
}
if (state.pressedKeys.up) {
state.position.y -= state.speed;
}
if (state.pressedKeys.down) {
state.position.y += state.speed;
}
}
function draw() {
pepsi.render();
}
function loop(timestamp) {
let progress = timestamp - lastRender;
updatePosition(progress); // <-- Updates position, doesn't touch draw()
draw(); // <-- Runs pepsi.render(); each loop
lastRender = timestamp;
window.requestAnimationFrame(loop);
}
window.requestAnimationFrame(loop); // for the general loop
If you have any qualms with the way this project is set up (for example, using the state.position for each Sprite), then I'd be glad to hear them in addition to the solution to my problem. Not in isolation. I got most of this code from contextless, non-specific online tutorials, but I understand most of it, save for the rendering.
Also, if you've seen this kind of question before and are on the fence about saying "Possible duplicate of {Borderline Tangentially-Related Post from Four Years Ago}", then here's some advice: Just answer the question again. It literally does nothing negative to you.
The solitaire smearing effect that you are getting comes from the fact that each frame is drawn on top of the last one. The canvas doesn't get cleared automatically between frames.
You mentioned that you have used clearRect; clearRect clears all the pixels in the specified rectangle.
So if you put ctx.clearRect(0, 0, canvas.width, canvas.height) in the draw function before pepsi.render(), that should clear the canvas before drawing the next frame.
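Applied to your code, the draw function might look like this (canvas, ctx and pepsi are the names from your snippet):

```javascript
function draw() {
  // Wipe everything drawn last frame so the sprite doesn't smear.
  ctx.clearRect(0, 0, canvas.width, canvas.height);
  pepsi.render();
}
```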
I'm using Physijs script for physics like gravitation.
I want to move objects in my scene with Raycaster from THREE.js script.
My problem is that the Raycaster can only move objects (a simple box) declared like:
var box = new Physijs.Mesh(cubeGeomtery.clone(), createMaterial);
But here the physics does not work. It only works if I declare it like:
var create = new Physijs.BoxMesh(cubeGeomtery.clone(), createMaterial);
But here the Raycaster / moving does not work.
The difference between the two is that the first is just a Mesh, while the second is a BoxMesh.
Does anyone know why this doesn't work? I need a BoxMesh in order to use gravity and other physics.
Code to add cube
function addCube()
{
controls.enable = false;
var cubeGeomtery = new THREE.CubeGeometry(85, 85, 85);
var createTexture = THREE.ImageUtils.loadTexture("images/rocks.jpg");
var createMaterial = new THREE.MeshBasicMaterial({ map: createTexture });
var box = new Physijs.BoxMesh(cubeGeomtery.clone(), createMaterial);
box.castShadow = true;
box.receiveShadow = true;
box.position.set(0, 300, 0);
objects.push(box);
scene.add(box);
}
Explanation
In Physijs, all primitive shapes (such as Physijs.BoxMesh) inherit from Physijs.Mesh, which in turn inherits from THREE.Mesh. In the Physijs.Mesh constructor there is a small internal object, the ._physijs field, and in that object there is a shape type declaration, set to null by default. That field must be re-assigned by one of its children; if it isn't, then when the shape is passed to the scene, the Physijs worker script won't know what kind of shape to generate and will simply abort. Since Physijs.Scene inherits from THREE.Scene, the scene keeps a reference to the mesh internally like it should, which means that all methods from THREE.js will work (raycasting, for instance). However, it is never registered as a physical object, because it has no type!
Now, when you try to move the Physijs.BoxMesh directly via its position and rotation fields, your changes are immediately overridden by the physics updates, which start with the .simulate method on your scene object. When called, it delegates to the worker to compute new positions and rotations that correspond to the physics configuration of your scene. Once it's finished, the new values are transferred back to the main thread and applied automatically, so that you don't have to do anything. This can be a problem in some cases (like this one!). Fortunately, the developer included two special fields in Physijs.Mesh, the .__dirtyPosition and .__dirtyRotation flags. Here's how you use them:
// Place box already in scene somewhere else
box.position.set(10, 10, 10);
// Set .__dirtyPosition to true to override physics update
box.__dirtyPosition = true;
// Rotate box ourselves
box.rotation.set(0, Math.PI, 0);
box.__dirtyRotation = true;
The flags get reset to false after updating the scene again via the .simulate method.
Conclusion
It is basically useless to create a Physijs.Mesh yourself; use one of the provided primitives instead. It is just a wrapper around THREE.Mesh for Physijs and has no physical properties until properly extended by one of its children.
Also, when directly modifying the position or rotation of a Physijs mesh, always set the object's .__dirtyPosition or .__dirtyRotation flag, respectively. Take a look at the code snippet above and here.
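For instance, a raycaster-driven move could be wrapped like this (moveMeshTo is a hypothetical helper; point would be the intersection point you get from the raycaster):

```javascript
// Sketch: move a Physijs mesh to a new position without the physics
// worker immediately overwriting it on the next simulate() call.
function moveMeshTo(mesh, point) {
  mesh.position.copy(point);
  mesh.__dirtyPosition = true; // tell Physijs to accept our position
}
```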
My program creates a dynamic number of point cloud objects with custom attributes, including the alpha value of each particle. This works fine; however, when the objects are nested within each other (say, spheres), the smaller (inner) ones are obscured by the bigger ones, even though their particles' alpha is set properly. When I reverse the order of adding the point-cloud objects to the scene, starting with the bigger ones and going down to the smaller ones, I can see the smaller ones through the bigger ones.
My question is whether there is a way to tell the renderer to update or recalculate the alpha values or re-render the smaller inner objects so that they show up?
I ran into the same problem. I fixed it by calculating and setting the renderDepth for each mesh. For this you need the camera position and the center of your mesh.
You probably already created meshes for each object. If you save all these meshes into an array, it's easier to calculate and set the renderdepth on these objects.
Here's an example of how I did it.
updateRenderDepthOnRooms(cameraPosition: THREE.Vector3): void {
var rooms: Room[] = this.getAllRooms();
rooms.forEach((room) => {
var roomCenter = getCenter(room.mesh.geometry);
var renderDepth = 0 - roomCenter.distanceToSquared(cameraPosition);
room.mesh.renderDepth = renderDepth;
});
}
function getCenter(geometry: THREE.Geometry): THREE.Vector3 {
geometry.computeBoundingBox();
var bb = geometry.boundingBox;
var offset = new THREE.Vector3();
offset.addVectors(bb.min, bb.max);
offset.multiplyScalar(0.5);
return offset;
}
So, to get the center of your object, you can take the geometry from your mesh and use the getCenter(..) function from my example. Then you calculate the renderDepth with the three.js function distanceToSquared(..) and set this renderDepth on your mesh.
That's it. Hope this will help you.
I'm working with EaselJS to recreate something I've seen in real life and I'm having a slight issue with triangle strokes.
In the above image you can see my triangle. I understand corner A and why it isn't filled like the others but I want it filled. How can I do this exactly?
Because it won't include my code snippet, my JavaScript is:
var stage = new createjs.Stage('c'),
poly = new createjs.Shape(),
s = 400,
h = s * (Math.sqrt(3)/2),
x = stage.canvas.width/2+s,
y = stage.canvas.height/2+s/2;
poly.graphics.beginStroke('#0da4d3').setStrokeStyle(75)
.moveTo(x,y).lineTo(x+s/2,y+h).lineTo(x-s/2,y+h).lineTo(x,y);
stage.addChild(poly);
stage.update();
createjs.Ticker.addEventListener('tick', handleTick);
function handleTick(e) {
stage.update();
}
window.onresize = function() {
stage.canvas.width = $(window).width();
stage.canvas.height = $(window).height();
}
stage.canvas.width = $(window).width();
stage.canvas.height = $(window).height();
and a link to CodePen: http://codepen.io/Spedwards/pen/hqvsc
Also, as a small sub-question: why does my stage only update inside a Ticker?
As kihu answered, you only need to add closePath to the graphics. Take a look at the documentation: http://www.createjs.com/Docs/EaselJS/classes/Graphics.html#method_closePath
For your sub-question: the stage draws things on the screen when stage.update() is called. In your example, this call is inside a function executed on every tick event, i.e. ~24 times per second. You only need to call stage.update() when there are new things to draw (e.g. when you add another object to the stage, or when you move, rotate or otherwise transform objects already on the stage). So, in your case, you only need to call the update method after adding the shape to the stage and after the window resize event.
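As a sketch, the resize handler could trigger the only extra redraw you need (stage is from your code; I've swapped the jQuery calls for window.innerWidth/innerHeight, an assumption to keep it self-contained):

```javascript
function handleResize() {
  stage.canvas.width = window.innerWidth;
  stage.canvas.height = window.innerHeight;
  stage.update(); // resizing clears the canvas, so redraw once here
}
// window.onresize = handleResize; // instead of the Ticker-driven update
```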
You can fix the corner issue by using closePath():
poly.graphics.beginStroke('#0da4d3').setStrokeStyle(75);
poly.graphics.moveTo(x,y).lineTo(x+s/2,y+h).lineTo(x-s/2,y+h).closePath();
http://codepen.io/anon/pen/fyxvI
As for the ticker - this is how CreateJS was designed. I think it's related to game development: when animating, you are 100% sure that all the operations inside the 'tick' handler have been executed before the next 'tick' is handled.