Pixi.js zoom / not re-rendering sprites?

I am learning pixi.js, intending to use it to render a large directed graph with many nodes. I forked a codepen that does something similar, but when I use a simple circle texture for the sprites and zoom in, the edges get blurry:
function makeParicleTexture(props) {
  const gfx = new PIXI.Graphics();
  gfx.beginFill(props.fill);
  gfx.lineStyle(props.strokeWidth, props.stroke);
  // draws a radius-3 circle centered at (size / 2, size / 2)
  gfx.drawCircle(props.size / 2, props.size / 2, 3);
  gfx.endFill();
  const texture = app.renderer.generateTexture(gfx, PIXI.SCALE_MODES.LINEAR, 2);
  return texture;
}
Here is the codepen: https://codepen.io/mfbridges/pen/xxRqGRz
How can I ask pixi.js to re-rasterize the circle at this new zoom level so that edges of the circle are crisp when you zoom in?
I have seen many examples where this happens seemingly automatically (e.g. here: https://bl.ocks.org/pkerpedjiev/cf791db09ebcabaec0669362f4df1776) so I'm confused why it's not working in the codepen above.

It works there because that example draws a plain "Graphics" object (no texture):
var graphics = new PIXI.Graphics();
graphics.beginFill(0xe74c3c); // Red
d3.range(numCircles).map(function() {
  graphics.drawCircle(randomX(), randomY(), 1);
});
stage.addChild(graphics);
"Graphics" are always "scaling" correctly because they are calculated on each render (i think). But textures are generated once and then reused.
What can help in your code:
Make a bigger texture, then scale down the Sprite that is created from it:
gfx.drawCircle(props.size / 2, props.size / 2, 3)
// make the radius bigger:
gfx.drawCircle(props.size / 2, props.size / 2, 30)
// then in the "makeSprites" function, add this line somewhere after the Sprite is created:
sprite.scale.set(0.1, 0.1);
^ see: https://www.html5gamedevs.com/topic/16601-resize-texture/
increase "resolution" of texture ( "The resolution / device pixel ratio of the texture being generated." - https://pixijs.download/dev/docs/PIXI.AbstractRenderer.html#generateTexture ) :
const texture = app.renderer.generateTexture(gfx, PIXI.SCALE_MODES.LINEAR, 2);
// change to:
const texture = app.renderer.generateTexture(gfx, PIXI.SCALE_MODES.LINEAR, 20);
You need to experiment and decide which way to use :)
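A third option, closer to what the question literally asks: regenerate the texture whenever the zoom level changes, so the circle is re-rasterized at the current scale. A minimal sketch, assuming a pixi-viewport instance named "viewport" (which emits a "zoomed" event) and an array "sprites" sharing one circle texture; those names are not in the codepen:

function makeCircleTexture(resolution) {
  const gfx = new PIXI.Graphics();
  gfx.beginFill(0xff0000);
  gfx.drawCircle(0, 0, 3);
  gfx.endFill();
  // bake the device-pixel density for the current zoom into the texture
  return app.renderer.generateTexture(gfx, PIXI.SCALE_MODES.LINEAR, resolution);
}

viewport.on('zoomed', () => {
  const texture = makeCircleTexture(2 * viewport.scale.x);
  sprites.forEach((sprite) => { sprite.texture = texture; });
});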

Related

Convert 2D shape into 3D in d3.js and adjust height according to the value in ANGULAR

I am using d3.js v6 to create a 3D version of the 2D chart representation below. The circle has multiple squares in it, and each square is assigned a color based on its value: the bigger the value, the darker the square.
Now I want to convert this into a 3D shape where the height of each square increases as its value gets higher, so the result would be somewhat similar to the image below. The base would stay circular, but the height of each square would rise with its value.
I am trying to achieve this in Angular; I'd appreciate any help. Here is the Stackblitz Link
I made the chart as you requested.
Source code on GitHub.
Here's the working demo: https://stackoverflow-angular-3d-chart.surge.sh/
This involved several intricate steps. I can't go much deeper in this answer because every part I mention could be an hours-long tutorial of its own; these are the parts I found interesting while working on it.
Used Stacks
EDIT: The Stackblitz code is now outdated. I've used the most recent version of each package.
Three.js r143
D3.js v7.6.1
Angular v14
Getting Circle Grid
experiment note on ObservableHQ: https://observablehq.com/#rabelais/circle-inside-grids
First I experimented on SVG with D3.js to get a proper circle grid.
It seemed daunting but turned out to be very simple: I slightly modified the Midpoint circle algorithm to fill a box grid in a circular shape. Filling grids in 3D space is a little different from 2D: in 2D space everything begins at the top-left corner, while in 3D space everything starts from the center.
const midPointX = gridWidth / 2;
const midPointY = gridHeight / 2;

// later, with those midpoints stored on a shared config object:
const { midPointX, midPointY, radius } = config;
const getCollision = ({ x, y }) => {
  // positive when the point (x, y) lies outside the circle
  return (midPointX - x) ** 2 + (midPointY - y) ** 2 - radius ** 2 > 0;
};
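For illustration, a minimal sketch (names assumed, not from the answer's repo) that uses getCollision to keep only the grid cells whose centers fall inside the circle:

const cells = [];
for (let x = 0; x < config.gridWidth; x++) {
  for (let y = 0; y < config.gridHeight; y++) {
    // test the cell's center against the circle
    if (!getCollision({ x: x + 0.5, y: y + 0.5 })) {
      cells.push({ x, y });
    }
  }
}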
Calculating Gaps
D3's scaleBand supports automatic calculation of gaps and cell sizes in a responsive environment:
const scaleBandX = d3
  .scaleBand()
  .domain(d3.range(0, config.gridWidth))
  .range([config.margin, config.svgWidth - config.margin * 2])
  .paddingInner(0.2);
const scaleBandY = d3
  .scaleBand()
  .domain(d3.range(0, config.gridHeight))
  .range([config.margin, config.svgHeight - config.margin * 2])
  .paddingInner(0.2);

scaleBandX.bandwidth(); // width of a box in 2d space
scaleBandY.bandwidth(); // height of a box in 2d space
scaleBandX(boxIndex);   // x position of a box in 2d space, gap included
scaleBandY(boxIndex);   // y position of a box in 2d space, gap included
Since D3's scales are just numeric mappings, it was pretty easy to apply the very same method in 3D.
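For example, a sketch (hypothetical names: col, row, value, heightScale) of reusing the band scales to size and place one bar in 3D, matching the fields the makeBar method below expects:

const d = {
  w: scaleBandX.bandwidth(),    // bar footprint width
  l: scaleBandY.bandwidth(),    // bar footprint depth
  h: value * heightScale,       // bar height from the data value
  x: scaleBandX(col),           // grid position, reused directly in 3D
  y: scaleBandY(row),
  z: (value * heightScale) / 2, // lift the box so its base rests on the plane
};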
Expressing on 3D space
I used Three.js to express everything in 3D. The app runs on Angular as requested, but it does not matter which frontend framework is used.
Expressing a 2D bar chart in 3D is mostly trivial. However, the axes differ from 2D: the positions have to be swapped.
// code to make a single bar mesh
makeBar(d: typeof gridData[0]) {
  // length and height are swapped, because the camera looks down from a 90-degree angle by default
  const geo = new T.BoxGeometry(d.w, d.l, d.h, 32, 32);
  const mat = new T.MeshPhysicalMaterial({ color: 'red' });
  const mesh = new T.Mesh(geo, mat);
  mesh.position.x = d.x;
  // z and y are also swapped, for the same reason
  mesh.position.z = d.y;
  mesh.position.y = d.z;
  return mesh;
}
Then each bar is added to a 3D Group, so they can all be centered together.
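A minimal sketch of that grouping step (assuming the makeBar method above and an array gridData of bar descriptors):

const group = new T.Group();
gridData.forEach((d) => group.add(makeBar(d)));
// shift the whole group so the chart's midpoint sits at the world origin
group.position.x = -config.svgWidth / 2;
group.position.z = -config.svgHeight / 2;
scene.add(group);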
EDIT: The color scheme was missing; it is now added.

understand positioning system in three.js

I'm new to JavaScript and three.js, and I'm trying to figure out how 3D positions map onto a webpage.
For example, I want to set a point light at x,y,z values of (50, 20, 10). How can I know where those x,y,z values will land on the webpage? I have seen code like the below.
var light2 = new THREE.PointLight(0xffffff, 10)
light2.position.set(500, 100, 0)
scene.add(light2)
I have googled but didn't find enough information to sort things out properly. Can somebody help me with a good explanation or some article/tutorial link?
Just a little background first...
3D space is infinite.
The canvas is a viewport into that space.
The size of the canvas on the page has no direct relation to anything in 3D space. You can double the size of the canvas, and all it does is make the rendered image bigger.
To your question...
That's not to say you can't figure out where on the canvas a 3D thing might appear. You can project any point in 3D space into Normalized Device Coordinates with Vector3.project.
var light2 = new THREE.PointLight(0xffffff, 10)
light2.position.set(500, 100, 0)
scene.add(light2)
// Where is it in NDC?
var ndc = new THREE.Vector3().copy( light2.position ).project( camera );
Normalized Device Coordinates range from -1 to 1, and represent the normalized width and height of the viewport with (0,0) at the center of the rendered image. So, you will need to convert them from NDC into pixel values on your canvas. (Also, you can ignore the z component, because the screen has no depth.)
var canvasX = ( ndc.x + 1 ) * canvasWidth / 2;
// canvas y runs top-down, so flip the sign of ndc.y
var canvasY = ( 1 - ndc.y ) * canvasHeight / 2;
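Hypothetical usage: with those pixel values you can, for example, pin an absolutely-positioned HTML element over the light each frame (assuming the canvas's top-left corner sits at the page's top-left):

var label = document.getElementById('light-label'); // assumed element, position: absolute
label.style.left = canvasX + 'px';
label.style.top = canvasY + 'px';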

Make the SpriteMaterial 'particles' rounded, or circular in Three.js with React

This was the code initially (which works fine in most environments):
var PI2 = Math.PI * 2;
var material = new THREE.SpriteCanvasMaterial( {
  color: 0x939393, // changes color of particles
  program: function ( context ) {
    context.beginPath();
    context.arc( 0, 0, 0.1, 0, PI2, true );
    context.fill();
  }
} );
However, when I attempt to run that (rendering via THREE.CanvasRenderer), I get an error saying that SpriteCanvasMaterial is not a valid constructor. I did some research, and apparently within my React environment (Gatsby) I should use SpriteMaterial instead.
Making that change (as well as switching the renderer to WebGLRenderer) worked; however, the material is square-shaped instead of round. The "program" function no longer seems to work.
I tried doing texture mapping as such:
var spriteMap = new THREE.TextureLoader().load('mycircle.png');
var material = new THREE.SpriteMaterial({
  map: spriteMap,
  color: 0xFAFAFA, // changes color of particles
});
And that changes the shape to a circle as I wanted, but it hurts the quality badly and the circles are blurry (despite the excellent quality of the PNG).
I've been researching this for hours and can't seem to find a way to render good-quality circles instead of the squares I have.
Thanks.
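One approach that usually keeps sprite circles crisp (a sketch, not from the original thread): rasterize the circle yourself on an offscreen canvas at a generous pixel size and hand it to the SpriteMaterial as a CanvasTexture, so the sprite only ever scales it down:

// draw the circle oversized so downscaling stays sharp
var size = 128;
var canvas = document.createElement('canvas');
canvas.width = canvas.height = size;
var ctx = canvas.getContext('2d');
ctx.fillStyle = '#fafafa';
ctx.beginPath();
ctx.arc(size / 2, size / 2, size / 2, 0, Math.PI * 2);
ctx.fill();
var material = new THREE.SpriteMaterial({ map: new THREE.CanvasTexture(canvas) });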

How to simulate mouse movement to procedurally generate beautiful galaxies in JS?

I'm working on a project that procedurally generates images of galaxies like this one:
This sample was "hand drawn" (by waving the cursor around). See this pen:
https://codepen.io/OpherV/pen/JQBKVq?editors=0110
I would like to procedurally generate these types of images, but rather than generating them in one go, I'd like the generation to be performed as a "drawing" process, that is, moving the drawing cursor in a pattern that achieves these visual structures.
The mouse-simulation code that I currently have is lifted directly from Louis Hoebregts' "Simulating Mouse Movement" article on CSS Tricks.
The principal function relies on Simplex noise:
const s = 0.001 * (speed / 100);
const noiseX = (noise.simplex3(1, 0, a * s) + 1) / 2;
const noiseY = (noise.simplex3(11, 0, a * s) + 1) / 2;
random += randomness / 1000;
const randX = noise.simplex3(1, 0, random) * window.innerWidth * 0.1;
const randY = noise.simplex3(3, 0, random) * window.innerHeight * 0.1;
const x = noiseX * innerWidth + randX;
const y = noiseY * innerHeight + randY;
updateMouse(x, y);
However, this type of noise won't create the visuals I'm aiming for. Breaking down the visual structure I have in mind, there is a center-weighted blob and elliptical "arms". To achieve the former, I think more "drawing time" should be spent near the center (which creates the bright blobs inside), with occasional "offshoots" performing more elliptical motion to make the arms.
I thought about somehow gradienting the Simplex noise so that it veers more toward the center, but I'm unsure how to go about doing that in 2D space. I'm also not certain how to combine that with something that draws the "arms" of the galaxy.
Can you suggest an algorithm to achieve this?
Thanks 🙏
If you only want to generate images, you could look into generating a galaxy with some number of spiral arms using cos and sin; play around with the circle radius:
Math.cos(radians) * radius, Math.sin(radians) * radius
Get this to work first.
You probably want to draw something somewhat elliptical instead of a full circle.
Randomly place points more often near the center of the galaxy and close to the arms.
Step 1: Randomly generate galaxy points
Step 2: Blend colors (HTML5 canvas paint blending color tool)
Step 3: if you want realtime performance use WebGL ...
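A minimal sketch of step 1, with hypothetical tuning constants (maxRadius, twist, armSpread, centerX, centerY):

const ARMS = 3;
const points = [];
for (let i = 0; i < 5000; i++) {
  // bias the radius toward 0 so the center gets more "drawing time"
  const radius = Math.pow(Math.random(), 2) * maxRadius;
  const arm = Math.floor(Math.random() * ARMS);
  // the angle grows with the radius, twisting each arm into a spiral
  const base = (arm / ARMS) * Math.PI * 2 + radius * twist;
  // scatter shrinks with distance so the arms stay tight at the rim
  const jitter = (Math.random() - 0.5) * armSpread * (1 - radius / maxRadius);
  const angle = base + jitter;
  points.push({
    x: centerX + Math.cos(angle) * radius,
    y: centerY + Math.sin(angle) * radius * 0.8, // slightly elliptical
  });
}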
Bonus: if you want to go full overkill you could even try to use realistic formulas:
https://arxiv.org/pdf/0908.0892.pdf

Create texture from Array THREE.js

I'm working on a terrain generator, but I can't seem to figure out how to do the colors. I want to generate a single image that covers the entire PlaneGeometry (with no wrapping) based on my height map. How can I do that? I can think of one way: draw it in a two-dimensional view with colors on a canvas, then convert the canvas to a texture. But I'm not sure it would fully cover the PlaneGeometry, and it seems very inefficient. Is that the best/only way?
UPDATE: Using DataTexture, I got some errors. I have absolutely no idea where I went wrong. Here's the error I got:
WebGL: drawElements: texture bound to texture unit 0 is not renderable. It maybe non-power-of-2 and have incompatible texture filtering or is not 'texture complete'. Or the texture is Float or Half Float type with linear filtering while OES_float_linear or OES_half_float_linear extension is not enabled.
Both the DataTexture and the PlaneGeometry have a size of 512^2. What can I do to fix this?
Here's some of the code I use:
EDIT: I fixed it. Here's the working code I used.
function genDataTexture() {
  // Set the size, rounded to a power of two (512 * 512 * 4 already is one).
  var dataMap = new Uint8Array(1 << (Math.floor(Math.log(map.length * map[0].length * 4) / Math.log(2))));
  /* ... loop over the pixels ... */
    // Set the r,g,b,a for each pixel; the color is determined above.
    dataMap[count++] = color.r;
    dataMap[count++] = color.g;
    dataMap[count++] = color.b;
    dataMap[count++] = 255;
  }
  var texture = new THREE.DataTexture(dataMap, map.length, map[0].length, THREE.RGBAFormat);
  texture.needsUpdate = true;
  return texture;
}
/* ... */
//Create the material
var material = new THREE.MeshBasicMaterial({map: genDataTexture()});
//Here, I mesh it and add it to the scene. I don't change anything after this.
The optimal way, if the data is already in your JavaScript code, is to use a DataTexture -- see https://threejs.org/docs/#api/textures/DataTexture for the general docs, or look at THREE.ImageUtils.generateDataTexture() for a fairly handy way to make them: http://threejs.org/docs/#Reference/Extras/ImageUtils
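For reference, a minimal standalone DataTexture sketch (same era-appropriate API as the question's code):

// a 2x2 opaque white RGBA texture
var width = 2, height = 2;
var data = new Uint8Array(width * height * 4);
data.fill(255);
var texture = new THREE.DataTexture(data, width, height, THREE.RGBAFormat);
texture.needsUpdate = true;
var material = new THREE.MeshBasicMaterial({ map: texture });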
