Efficient particle system in JavaScript? (WebGL)

I'm trying to write a program that does some basic gravity physics simulations on particles. I initially wrote the program using the standard Javascript graphics (with a 2d context), and I could get around 25 fps w/10000 particles that way. I rewrote the tool in WebGL because I was under the assumption that I could get better results that way. I am also using the glMatrix library for vector math. However, with this implementation I'm getting only about 15fps with 10000 particles.
I'm currently an EECS undergrad and I have had a reasonable amount of experience programming, but never with graphics, and I have little clue as to how to optimize Javascript code.
There is a lot I don't understand about how WebGL and Javascript work. What key components affect performance when using these technologies? Is there a more efficient data structure to use to manage my particles (I'm just using a simple array)? What explanation could there be for the performance drop using WebGL? Delays between the GPU and Javascript maybe?
Any suggestions, explanations, or help in general would be greatly appreciated.
I'll try to include only the critical areas of my code for reference.
Here is my setup code:
gl = null;
try {
    // Try to grab the standard context. If it fails, fallback to experimental.
    gl = canvas.getContext("webgl") || canvas.getContext("experimental-webgl");
    gl.viewportWidth = canvas.width;
    gl.viewportHeight = canvas.height;
}
catch(e) {}

if (gl) {
    gl.clearColor(0.0, 0.0, 0.0, 1.0);
    gl.clearDepth(1.0);          // Clear everything
    gl.enable(gl.DEPTH_TEST);    // Enable depth testing
    gl.depthFunc(gl.LEQUAL);     // Near things obscure far things
    // Initialize the shaders; this is where all the lighting for the
    // vertices and so forth is established.
    initShaders();
    // Here's where we call the routine that builds all the objects
    // we'll be drawing.
    initBuffers();
} else {
    alert("WebGL unable to initialize");
}

/* Initialize actors */
for (var i = 0; i < NUM_SQS; i++) {
    sqs.push(new Square(canvas.width * Math.random(), canvas.height * Math.random(), 1, 1));
}

/* Begin animation loop by referencing the drawFrame() method */
gl.bindBuffer(gl.ARRAY_BUFFER, squareVerticesBuffer);
gl.vertexAttribPointer(vertexPositionAttribute, 2, gl.FLOAT, false, 0, 0);
requestAnimationFrame(drawFrame, canvas);
The draw loop:
function drawFrame(){
    // Clear the canvas before we start drawing on it.
    gl.clear(gl.COLOR_BUFFER_BIT);
    //mvTranslate([-0.0,0.0,-6.0]);
    for (var i = 0; i < NUM_SQS; i++) {
        sqs[i].accelerate();
        /* Upload this particle's position as the translation uniform */
        gl.uniform2fv(translationLocation, sqs[i].posVec);
        /* Draw the currently bound buffer as a quad */
        gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
    }
    window.requestAnimationFrame(drawFrame, canvas);
}
Here is the class that Square inherits from:
function PhysicsObject(startX, startY, size, mass){
    /* Instance variables */
    this.posVec = vec2.fromValues(startX, startY);
    this.velVec = vec2.fromValues(0.0, 0.0);
    this.accelVec = vec2.fromValues(0.0, 0.0);
    this.mass = mass;
    this.size = size;

    this.accelerate = function(){
        var r2 = vec2.sqrDist(GRAV_VEC, this.posVec) + EARTH_RADIUS;
        var dirVec = vec2.create();
        vec2.set(this.accelVec,
            G_CONST_X / r2,
            G_CONST_Y / r2
        );
        /* Make dirVec a unit vector in the direction of gravitational acceleration */
        vec2.sub(dirVec, GRAV_VEC, this.posVec);
        vec2.normalize(dirVec, dirVec);
        /* Point the acceleration vector in the direction of dirVec */
        vec2.multiply(this.accelVec, this.accelVec, dirVec); //vec2.fromValues(canvas.width*.5-this.posVec[0],canvas.height *.5-this.posVec[1])));
        vec2.add(this.velVec, this.velVec, this.accelVec);
        vec2.add(this.posVec, this.posVec, this.velVec);
    };
}
These are the shaders I'm using:
<script id="shader-fs" type="x-shader/x-fragment">
void main(void) {
gl_FragColor = vec4(0.7, 0.8, 1.0, 1.0);
}
</script>
<!-- Vertex shader program -->
<script id="shader-vs" type="x-shader/x-vertex">
attribute vec2 a_position;
uniform vec2 u_resolution;
uniform vec2 u_translation;
void main() {
// Add in the translation.
vec2 position = a_position + u_translation;
// convert the rectangle from pixels to 0.0 to 1.0
vec2 zeroToOne = position / u_resolution;
// convert from 0->1 to 0->2
vec2 zeroToTwo = zeroToOne * 2.0;
// convert from 0->2 to -1->+1 (clipspace)
vec2 clipSpace = zeroToTwo - 1.0;
gl_Position = vec4(clipSpace*vec2(1,-1), 0, 1);
}
</script>
I apologize for this being long-winded. Again, any suggestions or nudges in the right direction would be huge.

You should never draw primitives individually. Draw them all at once whenever possible. Create an ArrayBuffer that contains the position and other necessary attributes of all particles, and then draw the whole buffer with one call to gl.drawArrays.
I can't give exact instructions because I'm on mobile, but searching for VBOs, interleaved arrays, and particles in OpenGL will surely help you find examples and other helpful resources.
I'm rendering 5 million static points that way at 10 fps. Dynamic points will be slower because you have to continually send updated data to the graphics card, but it will still be way faster than 15 fps for 10,000 points.
Edit:
You might want to use gl.POINTS instead of TRIANGLE_STRIP. That way, you only have to specify the position and gl_PointSize (in the vertex shader) for each square. gl.POINTS primitives are rendered as squares!
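For illustration, here is a rough sketch of the single-draw-call approach using gl.POINTS, reusing names from the question (positionBuffer is new; the vertex shader would then take each particle's position directly as a_position and set gl_PointSize):
var positionBuffer = gl.createBuffer();
var positions = new Float32Array(NUM_SQS * 2); // x,y per particle

function drawFrame(){
    gl.clear(gl.COLOR_BUFFER_BIT);

    // Update every particle on the CPU, but write the results into one array...
    for (var i = 0; i < NUM_SQS; i++) {
        sqs[i].accelerate();
        positions[i * 2]     = sqs[i].posVec[0];
        positions[i * 2 + 1] = sqs[i].posVec[1];
    }

    // ...and upload + draw it with a single call instead of one call per particle.
    gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
    gl.bufferData(gl.ARRAY_BUFFER, positions, gl.DYNAMIC_DRAW);
    gl.vertexAttribPointer(vertexPositionAttribute, 2, gl.FLOAT, false, 0, 0);
    gl.drawArrays(gl.POINTS, 0, NUM_SQS);

    window.requestAnimationFrame(drawFrame);
}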
You can take a look at the source of these two point cloud renderers:
https://github.com/asalga/XB-PointStream
http://potree.org/wp/download/ (by me; the following files might help you: WeightedPointSizeMaterial.js, pointSize.vs, colouredPoint.fs)

It depends on what you are trying to do. When you say "gravity", do you mean some kind of physical simulation with collisions, or do you just mean velocity += acceleration; position += velocity?
If the latter, then you can do all the math in the shader. An example is here:
https://www.khronos.org/registry/webgl/sdk/demos/google/particles/index.html
These particles are done entirely in the shader. The only input after setup is time. Each "particle" consists of 4 vertices. Each vertex contains
local_position (for a unit quad)
texture_coord
lifetime
starting_position
starting_time
velocity
acceleration
start_size
end_size
orientation (quaternion)
color multiplier
Given time, you can compute the particle's local time (the time since it started):
local_time = time - starting_time;
Then you can compute a position with
base_position = start_position +
velocity * local_time +
acceleration * local_time * local_time;
That's acceleration * time^2. You then add the local_position to that base_position to get the position needed to render the quad.
You can also compute a 0 to 1 lerp over the lifetime of the particle
lerp = local_time / lifetime;
This gives you a value you can use to lerp all the other values
size = mix(start_size, end_size, lerp);
Give the particle a size of 0 if it's outside its lifetime:
if (lerp < 0.0 || lerp > 1.0) {
size = 0.0;
}
This will make the GPU not draw anything.
Using a ramp texture (a 1xN pixel texture) you can easily have the particle change colors over time.
color = texture2D(rampTexture, vec2(lerp, 0.5));
etc...
If you follow through the shaders you'll see other things handled similarly, including spinning the particle (something that would be harder with point sprites), animating across a texture for frames, and doing both 2D and 3D oriented particles. 2D particles are fine for smoke, exhaust, fire, and explosions. 3D particles are good for ripples, possibly tire tracks, and can be combined with 2D particles for ground puffs to hide some of the z-issues of 2D-only particles, etc.
There are also examples of one shots (explosions, puffs) as well as trails. Press 'P' for a puff. Hold 'T' to see a trail.
AFAIK these are pretty efficient particles in that JavaScript is doing almost nothing.
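For reference, here is a stripped-down GLSL vertex shader sketch of the math above. The attribute and uniform names are illustrative (not taken from the linked demo), and it uses point sprites for brevity where the demo expands textured quads:
attribute vec3 a_startPosition;
attribute vec3 a_velocity;
attribute vec3 a_acceleration;
attribute float a_startTime;
attribute float a_lifetime;
attribute float a_startSize;
attribute float a_endSize;

uniform float u_time;
uniform mat4 u_viewProjection;

void main() {
    float local_time = u_time - a_startTime;
    float lerp = local_time / a_lifetime;

    // position = start + v*t + a*t^2, exactly as described above
    vec3 base_position = a_startPosition
                       + a_velocity * local_time
                       + a_acceleration * local_time * local_time;

    float size = mix(a_startSize, a_endSize, lerp);
    // Outside the particle's lifetime, collapse it to zero size so nothing is rasterized.
    if (lerp < 0.0 || lerp > 1.0) {
        size = 0.0;
    }

    gl_Position = u_viewProjection * vec4(base_position, 1.0);
    gl_PointSize = size;
}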

Related

Project visible pixels in one view onto another

In WebGL or in pure matrix math I would like to match the pixels in one view to another view. That is, imagine I take pixel with x,y = 0,0. This pixel lies on the surface of a 3d object in my world. I then orbit around the object slightly. Where does that pixel that was at 0,0 now lie in my new view?
How would I calculate a correspondence between each pixel in the first view with each pixel in the second view?
The goal of all this is to run a genetic algorithm to generate camouflage patterns that disrupt a shape from multiple directions.
So I want to know what the effect of adding a texture over the object would be from multiple angles. I want the pixel correspondences because rendering all the time would be too slow.
To transform a point from world to screen coordinates, you multiply it by the view and projection matrices. So if you have a pixel on the screen, you can multiply its coordinates (in the range -1..1 for all three axes) by the inverse transforms to find the corresponding point in world space, then multiply that by the new view/projection matrices for the next frame.
The catch is that you need the correct depth (Z coordinate) if you want to find the movement of mesh points. For that, you can either trace a ray through that pixel and find its intersection with your mesh the hard way, or you can simply read the contents of the Z-buffer by rendering it to a texture first.
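For illustration, a rough sketch of that unproject/reproject step using the glMatrix library; the camera matrices here are placeholders for your real per-frame view and projection matrices:
// Example camera matrices for the two views (placeholders for your real ones).
var proj        = mat4.perspective(mat4.create(), Math.PI / 4, 1.0, 0.1, 100.0);
var viewOld     = mat4.lookAt(mat4.create(), [0, 0, 5], [0, 0, 0], [0, 1, 0]);
var viewNew     = mat4.lookAt(mat4.create(), [1, 0, 5], [0, 0, 0], [0, 1, 0]);
var viewProjOld = mat4.multiply(mat4.create(), proj, viewOld);
var viewProjNew = mat4.multiply(mat4.create(), proj, viewNew);

// Pixel in the old view, in normalized device coordinates (-1..1);
// the z value comes from the depth buffer (remapped from 0..1 to -1..1).
var ndcOld = vec4.fromValues(0.0, 0.0, 0.5, 1.0);

// Unproject into world space with the inverse view-projection matrix.
var invOld = mat4.invert(mat4.create(), viewProjOld);
var world  = vec4.transformMat4(vec4.create(), ndcOld, invOld);
vec4.scale(world, world, 1.0 / world[3]); // perspective divide

// Reproject with the next frame's view-projection matrix.
var ndcNew = vec4.transformMat4(vec4.create(), world, viewProjNew);
vec4.scale(ndcNew, ndcNew, 1.0 / ndcNew[3]);
// ndcNew[0], ndcNew[1] now give the pixel's position in the second view.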
A similar technique is used for motion blur, where the velocity of each pixel is calculated in the fragment shader. A detailed explanation can be found in GPU Gems 3, chapter 27.
I made a jsfiddle with this technique: http://jsfiddle.net/Rivvy/f9kpxeaw/126/
Here's the relevant fragment code:
// reconstruct normalized device coordinates
ivec2 coord = ivec2(gl_FragCoord.xy);
vec4 pos = vec4(v_Position, texelFetch(u_Depth, coord, 0).x * 2.0 - 1.0, 1.0);
// convert to previous frame
pos = u_ToPrevFrame * pos;
vec2 prevCoord = pos.xy / pos.w;
// calculate velocity
vec2 velocity = -(v_Position - prevCoord) / 8.0;

Projecting FBO value to screen-space to read from depth texture

EDIT: Updated the JSFiddle link as it wasn't rendering correctly in Chrome on Windows 7.
Context
I'm playing around with particles in THREE.JS and using a frame buffer / render target (double buffered) to write positions to a texture. This texture is affected by its own ShaderMaterial, and then read by the PointCloud's ShaderMaterial to position the particles. All well and good so far; everything works as expected.
What I'm trying to do now is use my scene's depth texture to see if any of the particles are intersecting my scene's geometry.
The first thing I did was to reference my depth texture in the PointCloud's fragment shader, using gl_FragCoord.xy / screenResolution.xy to generate my uv for the depth texture lookup.
There's a JSFiddle of this here. It's working well - when a particle is behind something in the scene, I tell the particle to be rendered red, not white.
My issue arises when I try to do the same depth comparison in the position texture shader. In the draw fragment shader, I can use the value of gl_FragCoord to get the particle's position in screen space and use that for the depth uv lookup, since in the draw vertex shader I use the modelViewMatrix and projectionMatrix to set the value of gl_Position.
I've tried doing this in the position fragment shader, but to no avail. By the way, what I'm aiming to do with this is particle collision with the scene on the GPU.
So... the question (finally!):
Given a texture where each pixel/texel is a world-space 3d vector representing a particle's position, how can I project this vector to screen-space in the fragment shader, with the end goal of using the .xy properties of this vector as a uv lookup in the depth texture?
What I've tried
In the position texture shader, using the same transformations as the draw shader to transform a particle's position to (what I think is) screen-space using the model-view and projection matrices:
// Position texture's fragment shader:
void main() {
    vec2 uv = gl_FragCoord.xy / textureResolution.xy;
    vec4 particlePosition = texture2D( tPosition, uv );

    vec2 screenspacePosition = modelViewMatrix * projectionMatrix * vec4( particlePosition, 1.0 );
    vec2 depthUV = vec2( screenspacePosition.xy / screenResolution.xy );
    float depth = texture2D( tDepth, depthUV ).x;

    if( depth < screenspacePosition.z ) {
        // Particle is behind something in the scene,
        // so do something...
    }

    gl_FragColor = vec4( particlePosition.xyz, 1.0 );
}
Variations on a theme of the above:
Offsetting the depth's uv by doing 0.5 - depthUV
Using the tPosition texture resolution instead of the screen resolution to scale the depthUV.
Another depth uv variation: doing depthUV = (depthUV - 1.0) * 2.0;. This helps a little, but the scale is completely off.
Help! And thanks in advance.
After a lot of experimentation and research, I narrowed the issue down to the values of modelViewMatrix and projectionMatrix that THREE.js automatically assigns when one creates an instance of THREE.ShaderMaterial.
What I wanted to do was working absolutely fine in my 'draw' shaders, where the modelViewMatrix for those shaders was set (by THREE.js) to:
new THREE.Matrix4().multiplyMatrices( camera.matrixWorldInverse, object.matrixWorld)
It appears that when one creates a ShaderMaterial to render values to a texture (and thus not attached to an object in the scene/world), the object.matrixWorld is essentially an identity matrix. What I needed to do was to make my position texture shaders have the same modelViewMatrix value as my draw shaders (which were attached to an object in the scene/world).
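A sketch of how that can be wired up (names like positionMaterial and pointCloud are placeholders for the fiddle's actual objects, and older three.js versions also want a type field on the uniform):
// The off-screen position pass gets its own model-view uniform...
var positionMaterial = new THREE.ShaderMaterial({
    uniforms: {
        u_modelViewMatrix: { value: new THREE.Matrix4() }
        // ...plus tPosition, tDepth, etc.
    },
    vertexShader: positionVertexShader,     // placeholder
    fragmentShader: positionFragmentShader  // placeholder
});

// ...which is refreshed every frame from the object that the draw pass actually renders.
function updatePositionPass(camera, pointCloud) {
    positionMaterial.uniforms.u_modelViewMatrix.value.multiplyMatrices(
        camera.matrixWorldInverse,
        pointCloud.matrixWorld
    );
}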
Once that was in place, the only other thing to do was make sure I was transforming a particle's position to screen-space correctly. I wrote some helper functions in GLSL to do this:
// Transform a worldspace coordinate to a clipspace coordinate.
// Note that `mvpMatrix` is: `projectionMatrix * modelViewMatrix`
vec4 worldToClip( vec3 v, mat4 mvpMatrix ) {
    return ( mvpMatrix * vec4( v, 1.0 ) );
}

// Transform a clipspace coordinate to a screenspace one.
vec3 clipToScreen( vec4 v ) {
    return ( vec3( v.xyz ) / ( v.w * 2.0 ) );
}

// Transform a screenspace coordinate to a 2d vector for
// use as a texture UV lookup.
vec2 screenToUV( vec2 v ) {
    return 0.5 - vec2( v.xy ) * -1.0;
}
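Chained together in the position fragment shader, they get used roughly like this (a sketch assuming the combined matrix is passed in as a uniform, and reusing the tDepth name from the question):
// Assuming u_mvpMatrix = projectionMatrix * modelViewMatrix is passed in as a uniform:
vec4 clipPos     = worldToClip( particlePosition.xyz, u_mvpMatrix );
vec3 screenPos   = clipToScreen( clipPos );
vec2 depthUV     = screenToUV( screenPos.xy );
float sceneDepth = texture2D( tDepth, depthUV ).x;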
I've made a JSFiddle to show this in action, here. I've commented it (probably too much) so hopefully it explains what is going on well enough for people that aren't familiar with this kind of stuff to understand.
Quick note about the fiddle: it doesn't look all that impressive, as all I'm doing is emulating what depthTest: true would do were that property set on the PointCloud. The difference is that in this example I'm setting the y position of particles that have collided with scene geometry to 70.0, so that's the white band near the top of the rendering screen. Eventually, I'll do this calculation in a velocity texture shader so I can do proper collision response.
Hope this helps someone :)
EDIT: Here's a version of this implemented with a (possibly buggy) collision response.

Create animation of transparency, revealing the Mesh in threejs

I need to create a linear animation (something like slideUp on a 2D jQuery object) that reveals a really complex mesh (a 3D building model) from the bottom to the top.
I was looking for an opacity channel / opacity map or something like that, and now I know that is not possible.
Using sprites of textures and changing the offset is not the best idea, because my UV map is too complicated.
Is there any way to create that effect in THREE.JS?
Render the entire scene into the first framebuffer (texture).
Render only the mesh into the second framebuffer (texture).
Render a fullscreen rectangle that uses the two previously mentioned textures, with some version of the code below:
uniform sampler2D texScene;
uniform sampler2D texMesh;
uniform vec2 uResolution;
uniform float time;
uniform float endAnim; // time at which the animation ends (assuming it starts at time = 0)

void main() {
    vec2 uv = gl_FragCoord.xy / uResolution;
    vec3 s = texture2D( texScene, uv ).xyz;
    vec4 m = texture2D( texMesh, uv );

    // slide up effect
    float percent = clamp( time, 0.0, endAnim ) / endAnim;

    vec3 color = s;
    if( uv.y > (1.0 - percent) ) {
        color = s * (1.0 - m.a) + m.xyz * m.a;
    }

    gl_FragColor = vec4( color, 1.0 );
}
The code should be fairly intuitive: based on the elapsed time, it computes how far along the animation is, and from that decides whether a given pixel should include the mesh's color or just output the background color.
Hope it helps.
Alternatively, you can draw the building to the screen using gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA) and give your building an alpha gradient (from top to bottom).
The way drawing to any output works is that WebGL evaluates the source information and the destination information (the stuff that has already been drawn to that output) and combines the two, but you can dictate how it does that.
The equation for drawing to an output can be loosely described as:
SOURCE_VALUE * [SOURCE_FACTOR] [BLEND EQUATION] DESTINATION_VALUE * [DESTINATION_FACTOR];
By default this is:
SOURCE_VALUE * 1 + DESTINATION_VALUE * 0;
This equation discards all existing information in the buffer, and draws over it with the new information.
What we want to do is to tell WebGL to keep the existing information where we're not drawing onto the buffer, and take the new information where we are going to draw, so the equation becomes:
SOURCE_VALUE * SRC_ALPHA + DESTINATION_VALUE * ONE_MINUS_SRC_ALPHA;
If your building is 20% opaque in one fragment, then that fragment will be 20% the colour of the building and 80% the colour of whatever's behind the building.
This method of drawing semitransparent objects honours the depth buffer.
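In raw WebGL terms that setup is just (a minimal sketch):
gl.enable(gl.BLEND);
gl.blendEquation(gl.FUNC_ADD);                       // the [BLEND EQUATION] part
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);  // source and destination factors
// ...then draw the building after the rest of the scene has been drawn.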
I figured out another solution.
I use one texture for the whole building (no repeated pattern).
I lay out the UVs progressively in the vertical direction (faces at the bottom of the building use the bottom of the texture, and so on), and I animate the texture by filling it with a transparent rectangle (canvas texture).
// x - current step
// steps - number of steps
var canvas = document.getElementById('canvas-texture'),
    ctx = canvas.getContext('2d');

ctx.beginPath();
ctx.drawImage(image, 0, 0);
ctx.globalCompositeOperation = 'destination-out';
ctx.fillRect(0, 0, width, height / steps * x);
ctx.closePath();
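One extra detail if anyone copies this: when the canvas is used as a three.js texture, it has to be flagged for re-upload after every redraw. A small sketch (buildingMaterial is a placeholder for whatever material the building uses):
var texture = new THREE.Texture(canvas);
buildingMaterial.map = texture;

// After each fillRect step above:
texture.needsUpdate = true;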
I needed it ASAP, so if I find some time at the weekend I'll try your ideas, and if you want I can create a fiddle with my solution.
Anyway, thanks for your help guys.

Complex shape character outline

Say I have this character and I want to allow the user to select it, so when it's selected I want to show an outline around it.
The character is an Object3D with some meshes.
I tried to clone it and set a back-side material, but it did NOT work; the problem was that each cube in the shape was rendered with its back side separately, so the outline was wrong.
Do I need to create another mesh for the outline, or is there an easier way?
What #spassvolgel wrote is correct. What I suspect needs to be done is something like this:
1. First the background needs to be rendered.
2. Then, on a separate transparent layer, the character model with a flat color, slightly bigger than the original.
3. On another transparent layer, the character with its normal material / texture.
4. Finally, the character layer goes on top of the outline layer, and the combined result is placed on the background.
You just create multiple scenes and combine them with sequential render passes:
renderer.autoClear = false;
. . .
renderer.render(scene, camera); // the entire scene
renderer.clearDepth();
renderer.render(scene2, camera); // just the selected item, larger, in a flat color
renderer.render(scene3, camera); // the selected item again
three.js r129
A generic solution that applies to geometries of any complexity might be to apply a fragment shader via the ShaderMaterial class in three.js. Not sure what your experience level is, but if you need it, an introduction to shaders can be found here.
A good example where shaders are used to highlight geometries can be found here. In their vertex shader, they calculate the normal for a vertex and a parameter used to express intensity of a glow effect:
uniform vec3 viewVector;
uniform float c;
uniform float p;
varying float intensity;

void main()
{
    vec3 vNormal = normalize( normalMatrix * normal );
    vec3 vNormel = normalize( normalMatrix * viewVector );
    intensity = pow( c - dot(vNormal, vNormel), p );

    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}
These parameters are passed to the fragment shader where they are used to modify the color values of pixels surrounding the geometry:
uniform vec3 glowColor;
varying float intensity;

void main()
{
    vec3 glow = glowColor * intensity;
    gl_FragColor = vec4( glow, 1.0 );
}
I found something on gamedev.stackexchange.com that could be useful. They talk about a stencil buffer. I have no idea how to apply this to THREE.js though...
https://gamedev.stackexchange.com/questions/59361/opengl-get-the-outline-of-multiple-overlapping-objects
You can get good results by rendering your outlined object(s) to a texture that is (ideally) the size of your destination framebuffer, then rendering a framebuffer-sized quad using that texture and having the fragment shader blur it or do other image transforms. I have an example here that uses raw WebGL, but you can make a custom ShaderMaterial without too much trouble.
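A compressed sketch of that setup in three.js; outlineScene, blurVertexShader and blurFragmentShader are placeholders, and the render-target call differs slightly between three.js versions:
// Render just the selected object(s), in a flat color, into a texture.
var outlineTarget = new THREE.WebGLRenderTarget(window.innerWidth, window.innerHeight);
renderer.setRenderTarget(outlineTarget);
renderer.render(outlineScene, camera);
renderer.setRenderTarget(null);

// Full-screen quad whose shader fattens/blurs the silhouette and composites it behind the scene.
var outlineMaterial = new THREE.ShaderMaterial({
    uniforms: { tOutline: { value: outlineTarget.texture } },
    vertexShader: blurVertexShader,     // pass-through quad vertex shader
    fragmentShader: blurFragmentShader  // samples tOutline at several offsets to expand/blur it
});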
I haven't found the answer yet, but I wanted to demonstrate what happens when I create multiple meshes and put another mesh behind each of them with
side: THREE.BackSide
http://jsfiddle.net/GwS9c/8/
As you can see, it's not the desired effect. I would like a clean outline behind ALL three meshes that doesn't overlap. My level of shader programming is practically non-existent, but on most online resources people say to use this approach of cloning the meshes.

WebGL - Textured terrain with heightmap

I'm trying to create a 3D terrain using WebGL. I have a jpg with the texture for the terrain, and another jpg with the height values (-1 to 1).
I've looked at various wrapper libraries (like SpiderGL and Three.js), but I can't find a suitable example, and when I do find one (like in Three.js) the code is not documented and I can't figure out how to do it.
Can anyone give me a good tutorial or example?
There is an example at Three.js http://mrdoob.github.com/three.js/examples/webgl_geometry_terrain.html which is almost what I want. The problem is that they create the colour of the mountains and the height values randomly. I want to read these values from 2 different image files.
Any help would be appreciated.
Thanks
Check out this post over on GitHub:
https://github.com/mrdoob/three.js/issues/1003
The example linked there by florianf helped me to be able to do this.
function getHeightData(img) {
    var canvas = document.createElement( 'canvas' );
    canvas.width = 128;
    canvas.height = 128;
    var context = canvas.getContext( '2d' );

    var size = 128 * 128, data = new Float32Array( size );

    context.drawImage(img, 0, 0);

    for ( var i = 0; i < size; i++ ) {
        data[i] = 0;
    }

    var imgd = context.getImageData(0, 0, 128, 128);
    var pix = imgd.data;

    var j = 0;
    for (var i = 0, n = pix.length; i < n; i += 4) {
        var all = pix[i] + pix[i+1] + pix[i+2];
        data[j++] = all / 30;
    }

    return data;
}
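A minimal sketch of feeding that height data into a plane, assuming the old Geometry-based three.js API that the linked example uses (terrainTexture is a placeholder; newer versions would write to a BufferGeometry position attribute instead):
var img  = document.getElementById('heightmap'); // the 128x128 heightmap <img>
var data = getHeightData(img);

// 127x127 segments gives 128x128 vertices, matching the height data.
var geometry = new THREE.PlaneGeometry(100, 100, 127, 127);
for (var i = 0; i < geometry.vertices.length; i++) {
    geometry.vertices[i].z = data[i];
}
geometry.computeVertexNormals();

var terrain = new THREE.Mesh(geometry, new THREE.MeshLambertMaterial({ map: terrainTexture }));
terrain.rotation.x = -Math.PI / 2; // lay the plane flat so the displacement points up
scene.add(terrain);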
Demo: http://oos.moxiecode.com/js_webgl/terrain/index.html
Two methods that I can think of:
Create your landscape vertices as a flat grid. Use Vertex Texture Lookups to query your heightmap and modulate the height (probably your Y component) of each point. This would probably be the easiest, but I don't think browser support for it is very good right now. (In fact, I can't find any examples)
Load the image, render it to a canvas, and use that to read back the height values. Build a static mesh based on that. This will probably be faster to render, since the shaders are doing less work. It requires more code to build the mesh, however.
For an example of reading image data, you can check out this SO question.
You may be interested in my blog post on the topic: http://www.pheelicks.com/2014/03/rendering-large-terrains/
I focus on how to efficiently create your terrain geometry such that you get an adequate level of detail in the near field as well as far away.
You can view a demo of the result here: http://felixpalmer.github.io/lod-terrain/ and all the code is up on github: https://github.com/felixpalmer/lod-terrain
To apply a texture to the terrain, you need to do a texture lookup in the fragment shader, mapping the location in space to a position in your texture. E.g.
vec2 st = vPosition.xy / 1024.0;
vec3 color = texture2D(uColorTexture, st).rgb;
Depending on your GLSL skills, you can write a GLSL vertex shader, assign the texture to one of your texture channels, and read the value in the vertex shader (I believe you need a modern card to read textures in a vertex shader but that may just be me showing my age :P )
In the vertex shader, translate the z value of the vertex based on the value read from the texture.
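A bare-bones vertex shader sketch of that idea (uniform and attribute names are illustrative; in WebGL 1 this requires hardware support for vertex texture fetch, i.e. MAX_VERTEX_TEXTURE_IMAGE_UNITS > 0):
attribute vec3 aPosition;
attribute vec2 aTexCoord;
uniform sampler2D uHeightMap;
uniform float uHeightScale;
uniform mat4 uModelViewProjection;

void main() {
    // Vertex texture fetch: read the height for this grid point...
    float height = texture2D(uHeightMap, aTexCoord).r * uHeightScale;
    // ...and displace the vertex along Z (or Y, depending on your convention).
    vec3 displaced = aPosition + vec3(0.0, 0.0, height);
    gl_Position = uModelViewProjection * vec4(displaced, 1.0);
}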
Babylon.js makes this extremely easy to implement. You can see an example at:
Heightmap Playground
They've even implemented the Cannon.js physics engine with it, so you can handle collisions: Heightmap with collisions
Note: as of this writing it only works with the cannon.js physics plugin, and friction doesn't work (must be set to 0). Also, make sure you set the location of a mesh/impostor BEFORE you set the physics state, or you'll get weird behavior.
