THREE.js - Shifting a texture on a Mesh constructed from a TubeGeometry

I have a problem that I suspect has a very simple answer (one or two lines), but nothing I've tried so far is working. Many thanks for any suggestions.
What I have is a TubeGeometry and a Texture that I've made via canvas drawing. I then make a mesh (multi-material object if that makes a difference), and one of the materials has the map property specified as my texture. I'll call this material Mat.
The problem is that I need a way to continuously rotate the texture around the tube (in torus terms, around the meridians, not the equators). Kind of like kneading circular bread. Previously I was specifying an offset (0-1) and had a function createTexture(offset) for redrawing the texture on the canvas to mimic wraparound, repeatedly running the code:
Mat.map = new THREE.Texture( createTexture(offset) );
Mat.map.needsUpdate = true;
Mat.needsUpdate = true;
Aesthetically, this works fine... except that the canvas drawing is far too expensive, and my performance suffers massively for it. So it's not a reasonable solution.
I also tried messing around with the Mat.map.offset property, but that isn't working at all. It seems to leave the original texture in place and overwrite only parts of it. I can't discern exactly what's going on there, and I wonder if it's a problem with using TubeGeometry, because a related Stack Exchange question about spheres was solved by this method.
A third thought was to go into TubeGeometry.faceVertexUvs and shift all the face coordinates around, modding by one, as in:
function transform(n){
    var faceVertexUvs = tubeGeometry.faceVertexUvs[0];
    for (var i = 0; i < faceVertexUvs.length; i++){
        for (var j = 0; j < 3; j++){
            faceVertexUvs[i][j].y = (faceVertexUvs[i][j].y + n) % 1;
        }
    }
    tubeGeometry.uvsNeedUpdate = true;
}
This comes so close to working, but always leaves one equatorial line where everything goes wrong. The texture looks terrible there, as if it were having a great crisis of indecision, and the ray-casting I'm doing goes nuts at that spot too. 99% of it works just fine, though... Maybe there's a way to salvage this last attempt?
I'm not attached to any one method, though. Maximum efficiency is extra appreciated!
Thanks again for any help!

You can do this with a fragment shader:
<script id="fs" type="x-shader/x-fragment">
uniform float iGlobalTime;
uniform sampler2D iChannel0;
varying vec2 vUv;
void main() {
vec2 uv = vUv;
uv.y = uv.y + iGlobalTime;
if (uv.y > 1.)
uv.y = uv.y - 1.;
gl_FragColor = texture2D( iChannel0, vec2(uv.x, uv.y));
}
</script>
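(The material below also references a vertex shader with id 'vs'; it isn't shown in the answer, but presumably it's a minimal pass-through that forwards the geometry's UVs, something like:)
<script id="vs" type="x-shader/x-vertex">
    varying vec2 vUv;
    void main() {
        vUv = uv; // hand the UVs to the fragment shader
        gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
    }
</script>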
Define your shader material:
_uniforms = {
    iGlobalTime: { type: 'f', value: 0.1 },
    iChannel0: { type: 't', value: THREE.ImageUtils.loadTexture( 'textures/cat.png' ) },
};
newMaterial = new THREE.ShaderMaterial( {
    uniforms: _uniforms,
    vertexShader: document.getElementById( 'vs' ).textContent,
    fragmentShader: document.getElementById( 'fs' ).textContent,
} );
Then set iGlobalTime in your render loop:
_uniforms.iGlobalTime.value += clock.getDelta();
while (_uniforms.iGlobalTime.value > 1.)
    _uniforms.iGlobalTime.value -= 1.;
Complete working example here: http://rwoodley.org/MyContent/SO/01/

The seam is created by the % 1. Remove the % 1 and enable wrapping on the texture (map.wrapS = THREE.RepeatWrapping and/or map.wrapT = THREE.RepeatWrapping). This allows you to keep incrementing the UVs, and the texture will wrap onto itself. You could then also just animate map.offset, which may offset the texture for you without touching the UVs at all.
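A minimal sketch of that approach, assuming Mat.map holds the tube's texture and that its v coordinate is the one running around the meridians (set the wrap mode before the texture is first used):
Mat.map.wrapS = THREE.RepeatWrapping;
Mat.map.wrapT = THREE.RepeatWrapping;
function animate() {
    Mat.map.offset.y += 0.01; // shift a little each frame; the texture wraps on itself
    renderer.render(scene, camera);
    requestAnimationFrame(animate);
}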

Related

Create animation of transparency, revealing the Mesh in threejs

I need to create a linear animation (something like slideUp on a 2D jQuery object) that reveals a really complex mesh (a 3D building model) from the bottom to the top.
I was looking for an opacity channel / opacity map or something like that, and now I know that isn't possible.
Using sprite textures and changing their offset is not the best idea either, because my UV map is too complicated.
Is there any way to create that effect in THREE.JS?
1. Render the entire scene into a first framebuffer (texture).
2. Render only the mesh into a second framebuffer (texture).
3. Render a fullscreen rectangle that uses the two previously mentioned textures, with some version of the code below:
uniform sampler2D texScene;
uniform sampler2D texMesh;
uniform vec2 uResolution;
uniform float time;
uniform float endAnim; // the time the animation ends (assuming it starts at time = 0)
void main() {
    vec2 uv = gl_FragCoord.xy / uResolution;
    vec3 s = texture2D( texScene, uv ).xyz;
    vec4 m = texture2D( texMesh, uv );
    // slide-up effect: fraction of the animation completed, 0.0 to 1.0
    float percent = clamp( time, 0.0, endAnim ) / endAnim;
    vec3 color = s;
    if( uv.y > (1.0 - percent) ) {
        // composite the mesh over the scene using the mesh's alpha
        color = s * (1.0 - m.a) + m.xyz * m.a;
    }
    gl_FragColor = vec4( color, 1.0 );
}
The code should be intuitive enough: depending on the elapsed time, it computes what fraction of the animation has completed and, based on that, decides whether to include the mesh's color or just output the background color.
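A minimal sketch of the three passes in three.js terms, assuming names like meshOnlyScene and quadMaterial, and using the render-to-target signature of the three.js releases of that era:
var rtScene = new THREE.WebGLRenderTarget( window.innerWidth, window.innerHeight );
var rtMesh = new THREE.WebGLRenderTarget( window.innerWidth, window.innerHeight );
renderer.render( scene, camera, rtScene );        // 1. entire scene into the first target
renderer.render( meshOnlyScene, camera, rtMesh ); // 2. only the building mesh into the second
quadMaterial.uniforms.texScene.value = rtScene;   // wire both targets into the quad's shader
quadMaterial.uniforms.texMesh.value = rtMesh;
renderer.render( quadScene, quadCamera );         // 3. fullscreen rectangle to the screen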
Hope it helps.
Alternatively, you can draw the building to the screen using gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA) and give your building an alpha gradient (from top to bottom).
The way drawing to any output works is that WebGL evaluates the source information and the destination information (the stuff that has already been drawn to that output), then combines the two; you can dictate how it does that.
The equation for drawing to an output can be loosely described as:
SOURCE_VALUE * [SOURCE_FACTOR] [BLEND EQUATION] DESTINATION_VALUE * [DESTINATION_FACTOR];
By default this is:
SOURCE_VALUE * 1 + DESTINATION_VALUE * 0;
This equation discards all existing information in the buffer, and draws over it with the new information.
What we want to do is to tell WebGL to keep the existing information where we're not drawing onto the buffer, and take the new information where we are going to draw, so the equation becomes:
SOURCE_VALUE * SRC_ALPHA + DESTINATION_VALUE * ONE_MINUS_SRC_ALPHA;
If your building is 20% transparent in one fragment, then the fragment will be 20% the colour of the building, and 80% of the colour of whatever's behind the building.
This method of drawing semitransparent objects honours the depth buffer.
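In three.js terms, that blend function can be requested through custom blending on the building's material; a minimal sketch (the material name is an assumption, and this is effectively what THREE.NormalBlending with transparent = true already does):
buildingMaterial.transparent = true;
buildingMaterial.blending = THREE.CustomBlending;
buildingMaterial.blendEquation = THREE.AddEquation;       // [BLEND EQUATION]
buildingMaterial.blendSrc = THREE.SrcAlphaFactor;         // [SOURCE_FACTOR]
buildingMaterial.blendDst = THREE.OneMinusSrcAlphaFactor; // [DESTINATION_FACTOR]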
I figured out another solution.
I use one texture for the whole building (no repeated pattern).
I lay the UVs out progressively along the vertical axis (faces at the bottom of the building on the bottom of the texture, etc.) and animate the texture by filling it with a transparent rectangle (a canvas texture).
// x - current step
// steps - number of steps
var canvas = document.getElementById('canvas-texture'),
    ctx = canvas.getContext('2d');
ctx.beginPath();
ctx.drawImage(image, 0, 0);
ctx.globalCompositeOperation = 'destination-out'; // subsequent fills erase pixels instead of painting
ctx.fillRect(0, 0, width, height / steps * x);
ctx.closePath();
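(For this to show up on the mesh, the canvas is presumably wrapped in a THREE.Texture that gets flagged after each step:)
texture.needsUpdate = true; // 'texture' being the THREE.Texture wrapping the canvas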
I needed it ASAP, so if I find some time at the weekend I'll try your ideas, and if you want I can create a fiddle with my solution.
Anyway, thanks for your help guys.

Complex shape character outline

Say I have this character and I want to allow the user to select it; when it's selected, I want to show an outline around it.
The character is an Object3D with some meshes.
I tried to clone it and set a back-side material, but it did NOT work: each cube in the shape was rendered with its back side separately, so the outline was wrong.
Do I need to create another mesh for the outline, or is there an easier way?
What #spassvolgel wrote is correct. What I suspect needs to be done is something like this:
1. First the background needs to be rendered.
2. Then, on a separate transparent layer, the character model in a flat color, slightly bigger than the original.
3. On another transparent layer, the character with its normal material / texture.
4. Finally, the character layer goes on top of the outline layer, and the two combined are placed over the background.
You just create multiple scenes and combine them with sequential render passes:
renderer.autoClear = false;
. . .
renderer.render(scene, camera); // the entire scene
renderer.clearDepth();
renderer.render(scene2, camera); // just the selected item, larger, in a flat color
renderer.render(scene3, camera); // the selected item again
three.js r.129
A generic solution that applies to geometries of any complexity is to apply a fragment shader via the ShaderMaterial class in three.js. Not sure what your experience level is, but if you need one, an introduction to shaders can be found here.
A good example where shaders are used to highlight geometries can be found here. In their vertex shader, they calculate the normal for a vertex and a parameter used to express intensity of a glow effect:
uniform vec3 viewVector;
uniform float c;
uniform float p;
varying float intensity;
void main()
{
    vec3 vNormal = normalize( normalMatrix * normal );
    vec3 vNormel = normalize( normalMatrix * viewVector );
    intensity = pow( c - dot(vNormal, vNormel), p );
    gl_Position = projectionMatrix * modelViewMatrix * vec4( position, 1.0 );
}
These parameters are passed to the fragment shader where they are used to modify the color values of pixels surrounding the geometry:
uniform vec3 glowColor;
varying float intensity;
void main()
{
    vec3 glow = glowColor * intensity;
    gl_FragColor = vec4( glow, 1.0 );
}
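A hedged sketch of how those shaders might be wired into a ShaderMaterial and applied to a slightly scaled-up copy of the selected object (the uniform values and element ids are assumptions):
var glowMaterial = new THREE.ShaderMaterial({
    uniforms: {
        c: { type: "f", value: 1.0 },  // glow falloff offset (assumed value)
        p: { type: "f", value: 1.4 },  // glow falloff exponent (assumed value)
        glowColor: { type: "c", value: new THREE.Color(0xffff00) },
        viewVector: { type: "v3", value: camera.position }
    },
    vertexShader: document.getElementById('glowVertexShader').textContent,
    fragmentShader: document.getElementById('glowFragmentShader').textContent,
    side: THREE.BackSide,              // draw the glow shell behind the object
    blending: THREE.AdditiveBlending,
    transparent: true
});
var glowMesh = new THREE.Mesh(characterGeometry, glowMaterial);
glowMesh.scale.multiplyScalar(1.1);    // slightly bigger than the original
scene.add(glowMesh);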
I found something on gamedev.stackexchange.com that could be useful. They talk about a stencil buffer. I have no idea how to apply this to THREE.js, though...
https://gamedev.stackexchange.com/questions/59361/opengl-get-the-outline-of-multiple-overlapping-objects
You can get good results by rendering your outlined object(s) to a texture that is (ideally) the size of your destination framebuffer, then render a framebuffer-sized quad using that texture and have the fragment shader blur or do other image transforms. I have an example here that uses raw WebGL, but you can make a custom ShaderMaterial without too much trouble.
I haven't found the answer yet, but I wanted to demonstrate what happens when I create multiple meshes and put another mesh behind each of them with
side: THREE.BackSide
http://jsfiddle.net/GwS9c/8/
As you can see, it's not the desired effect. I would like a clean outline behind ALL three meshes that doesn't overlap. My shader-programming experience is basically non-existent, but on most online resources people say to use this approach of cloning the meshes.

Shader Materials and GL Framebuffers in THREE.js

I'm trying to use an FBO in a material in THREE.js. I have a GPU-based fluid simulation which outputs its final visualisation to a framebuffer object, which I would like to use to texture a mesh. Here's my simple fragment shader:
varying vec2 vUv;
uniform sampler2D tDiffuse;
void main() {
    gl_FragColor = texture2D( tDiffuse, vUv );
}
I am then trying to use a simple THREE.ShaderMaterial:
var material = new THREE.ShaderMaterial( {
    uniforms: { tDiffuse: { type: "t", value: outputFBO } },
    //other stuff... which shaders to use etc
} );
But my mesh just appears black, albeit with no errors to the console. If I use the same shader and shader material, but supply the result of THREE.ImageUtils.loadTexture("someImageOrOther") as the uniform to the shader, it renders correctly, so I assume the problem is with my FBO. Is there some convenient way of converting from an FBO to a Texture2D in WebGL?
EDIT:
After some more experimentation it would appear that this isn't the problem. If I pass the FBO to a different shader I wrote that just outputs the texture to the screen then it displays fine. Could my material appear black because of something like lighting/normals?
EDIT 2:
The UVs and normals are coming straight from THREE, so I don't think it can be that. Part of the problem is that most shader errors aren't reported so I have difficulty in that regard. If I could just map the WebGLTexture somehow that would make everything easier, perhaps like this
var newMaterial = new THREE.MeshLambertMaterial({ map : outputFBO.texture });
but of course that doesn't work. I haven't been able to find any documentation that suggests THREE can read directly from WebGLTextures.
By poking a little into the sources of WebGLRenderer (look at https://github.com/mrdoob/three.js/blob/master/src/renderers/WebGLRenderer.js#L6643 and after), you could try creating a three.js texture with a dummy picture, then replacing that texture's __webglTexture data member with your own WebGL texture.
You may also need to set the texture object's __webglInit data member to true so that the init code is not executed (otherwise __webglTexture would be overwritten by a call to _gl.createTexture()).
If you don't mind using the Three.js data structures, here's how you do it:
Three.js use framebuffer as texture
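If the simulation can render into a three.js render target rather than a raw FBO, the target itself can be supplied as the uniform value; a minimal sketch, assuming a scene/camera pair for the simulation pass and the render-to-target signature of the three.js releases of that era:
var simTarget = new THREE.WebGLRenderTarget( 512, 512 );
renderer.render( simScene, simCamera, simTarget ); // the simulation output lands in the target
var material = new THREE.ShaderMaterial( {
    uniforms: { tDiffuse: { type: "t", value: simTarget } },
    vertexShader: document.getElementById( 'vs' ).textContent,
    fragmentShader: document.getElementById( 'fs' ).textContent
} );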

Double sided transparent shader looks buggy

I have made a little test that allows you to experiment with shaders in a 3D environment using three.js.
There's a sphere in the scene that shows the shader.
The demo shader I have created is a very simple shader that uses a 2D noise implementation. A big part of the sphere remains black, which I made transparent. I want the other side of the sphere to be visible too. So I have enabled transparency and set rendering side to double-sided.
material = new THREE.ShaderMaterial({
    'uniforms': uniforms,
    'fragmentShader': $('textarea#input-fragment').val(),
    'vertexShader': $('textarea#input-vertex').val()
});
material.side = THREE.DoubleSide;
material.transparent = true;
In this example, the bugginess is easier to notice.
When the sphere is viewed from the top, you only see the shader on the outer side. When viewed from the side there seems to be a bit of choppiness, and when viewed from the bottom it seems to be working.
These are the different angles (top - side - bottom):
Here's the important bit of my fragment shader:
void main() {
    float r = cnoise(vNormal.yz * 2.0 + t);
    float g = cnoise(vNormal.xz * -1.0 + t);
    float b = cnoise(vNormal.xy * -2.0 + t);
    // opacity ranges presumably from 0 to 3, which is OK
    gl_FragColor = vec4(r, g, b, r + g + b);
}
So why am I seeing the choppy edges, and why does the viewing angle matter?
There is nothing wrong with your shader. You can also see the effect if you set:
gl_FragColor = vec4( 1.0, 1.0, 1.0, 0.5 );
Self-transparency is tricky in three.js.
For performance reasons in WebGLRenderer, depth sorting works only between objects (based on their position), not within a single object.
The rendering order of the individual faces within an object cannot be controlled.
This is why from some viewing angles your scene looks better than from others.
One work-around is to explode the geometry into individual meshes of one face each.
Another work-around (your best bet, IMO) is to replace your transparent, double-sided sphere with two transparent spheres in the same location -- a front-sided one and a back-sided one.
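A minimal sketch of that two-sphere work-around (names assumed):
var backMaterial = material.clone();
backMaterial.side = THREE.BackSide;   // inner surface of the sphere
var frontMaterial = material.clone();
frontMaterial.side = THREE.FrontSide; // outer surface of the sphere
scene.add( new THREE.Mesh( sphereGeometry, backMaterial ) );
scene.add( new THREE.Mesh( sphereGeometry, frontMaterial ) );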
three.js r.56
Very similar to what I ran into. The WHY behind this is best explained in Three.js Transparency fundamentals.
Without more details on your code or goals, here is an alternate solution as of version r128. Just add one more line to your material:
material.depthTest = false;
In a nutshell, your shader is fine, as #WestLangley mentioned, but when rendering transparency the depth of pixels relative to one another is taken into account as well, ending up with certain pixels not rendering. This is where your "bugginess" came from. It's not really a bug, but the way your scene is rendered by default until told to do otherwise. There are a lot of *issues you can run into that compete with your expectations, so I recommend reading up on the link I posted.
*One such issue: if there are other objects in your scene, then of course, since you turned off depthTest, you can get incorrect object placement: an object that should be in the background can get rendered in the foreground.

Efficient particle system in javascript? (WebGL)

I'm trying to write a program that does some basic gravity physics simulations on particles. I initially wrote the program using the standard Javascript graphics (with a 2d context), and I could get around 25 fps w/10000 particles that way. I rewrote the tool in WebGL because I was under the assumption that I could get better results that way. I am also using the glMatrix library for vector math. However, with this implementation I'm getting only about 15fps with 10000 particles.
I'm currently an EECS undergrad and I have had a reasonable amount of experience programming, but never with graphics, and I have little clue as to how to optimize Javascript code.
There is a lot I don't understand about how WebGL and Javascript work. What key components affect performance when using these technologies? Is there a more efficient data structure to use to manage my particles (I'm just using a simple array)? What explanation could there be for the performance drop using WebGL? Delays between the GPU and Javascript maybe?
Any suggestions, explanations, or help in general would be greatly appreciated.
I'll try to include only the critical areas of my code for reference.
Here is my setup code:
gl = null;
try {
    // Try to grab the standard context. If it fails, fall back to experimental.
    gl = canvas.getContext("webgl") || canvas.getContext("experimental-webgl");
    gl.viewportWidth = canvas.width;
    gl.viewportHeight = canvas.height;
}
catch(e) {}
if(gl){
    gl.clearColor(0.0, 0.0, 0.0, 1.0);
    gl.clearDepth(1.0);       // Clear everything
    gl.enable(gl.DEPTH_TEST); // Enable depth testing
    gl.depthFunc(gl.LEQUAL);  // Near things obscure far things
    // Initialize the shaders; this is where all the lighting for the
    // vertices and so forth is established.
    initShaders();
    // Here's where we call the routine that builds all the objects
    // we'll be drawing.
    initBuffers();
}else{
    alert("WebGL unable to initialize");
}
/* Initialize actors */
for(var i = 0; i < NUM_SQS; i++){
    sqs.push(new Square(canvas.width*Math.random(), canvas.height*Math.random(), 1, 1));
}
/* Begin animation loop by referencing the drawFrame() method */
gl.bindBuffer(gl.ARRAY_BUFFER, squareVerticesBuffer);
gl.vertexAttribPointer(vertexPositionAttribute, 2, gl.FLOAT, false, 0, 0);
requestAnimationFrame(drawFrame, canvas);
The draw loop:
function drawFrame(){
    // Clear the canvas before we start drawing on it.
    gl.clear(gl.COLOR_BUFFER_BIT);
    //mvTranslate([-0.0,0.0,-6.0]);
    for(var i = 0; i < NUM_SQS; i++){
        sqs[i].accelerate();
        /* Translate current buffer (?) */
        gl.uniform2fv(translationLocation, sqs[i].posVec);
        /* Draw current buffer (?) */
        gl.drawArrays(gl.TRIANGLE_STRIP, 0, 4);
    }
    window.requestAnimationFrame(drawFrame, canvas);
}
Here is the class that Square inherits from:
function PhysicsObject(startX, startY, size, mass){
    /* Class instances */
    this.posVec = vec2.fromValues(startX, startY);
    this.velVec = vec2.fromValues(0.0, 0.0);
    this.accelVec = vec2.fromValues(0.0, 0.0);
    this.mass = mass;
    this.size = size;
    this.accelerate = function(){
        var r2 = vec2.sqrDist(GRAV_VEC, this.posVec) + EARTH_RADIUS;
        var dirVec = vec2.create();
        vec2.set(this.accelVec,
            G_CONST_X / r2,
            G_CONST_Y / r2
        );
        /* Make dirVec a unit vector in the direction of gravitational acceleration */
        vec2.sub(dirVec, GRAV_VEC, this.posVec);
        vec2.normalize(dirVec, dirVec);
        /* Point acceleration vector in direction of dirVec */
        vec2.multiply(this.accelVec, this.accelVec, dirVec);//vec2.fromValues(canvas.width*.5-this.posVec[0],canvas.height *.5-this.posVec[1])));
        vec2.add(this.velVec, this.velVec, this.accelVec);
        vec2.add(this.posVec, this.posVec, this.velVec);
    };
}
These are the shaders I'm using:
<script id="shader-fs" type="x-shader/x-fragment">
void main(void) {
gl_FragColor = vec4(0.7, 0.8, 1.0, 1.0);
}
</script>
<!-- Vertex shader program -->
<script id="shader-vs" type="x-shader/x-vertex">
attribute vec2 a_position;
uniform vec2 u_resolution;
uniform vec2 u_translation;
void main() {
// Add in the translation.
vec2 position = a_position + u_translation;
// convert the rectangle from pixels to 0.0 to 1.0
vec2 zeroToOne = position / u_resolution;
// convert from 0->1 to 0->2
vec2 zeroToTwo = zeroToOne * 2.0;
// convert from 0->2 to -1->+1 (clipspace)
vec2 clipSpace = zeroToTwo - 1.0;
gl_Position = vec4(clipSpace*vec2(1,-1), 0, 1);
}
</script>
I apologize for this being long-winded. Again, any suggestions or nudges in the right direction would be huge.
You should never draw primitives individually; draw them all at once whenever possible. Create an ArrayBuffer that contains the position and other necessary attributes of all particles, and then draw the whole buffer with one call to gl.drawArrays.
I can't give exact instructions because I'm on mobile, but searching for VBOs, interleaved arrays, and particles in OpenGL will surely help you find examples and other helpful resources.
I'm rendering 5 million static points that way at 10 fps. Dynamic points will be slower, as you'll have to continually send updated data to the graphics card, but it will be way faster than 15 fps for 10,000 points.
Edit:
You might want to use gl.POINTS instead of TRIANGLE_STRIP. That way, you only have to specify the position and gl_PointSize (in the vertex shader) for each square. gl.POINTS are rendered as squares!
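A minimal sketch of both ideas combined, against the code in the question (the per-particle u_translation uniform is no longer needed once positions live in the buffer, and the vertex shader must set gl_PointSize):
// Pack every particle's position into one Float32Array and draw them all
// with a single call, instead of one drawArrays call per particle.
var positions = new Float32Array(NUM_SQS * 2);
function drawFrame(){
    for (var i = 0; i < NUM_SQS; i++){
        sqs[i].accelerate();
        positions[2 * i]     = sqs[i].posVec[0];
        positions[2 * i + 1] = sqs[i].posVec[1];
    }
    gl.bindBuffer(gl.ARRAY_BUFFER, squareVerticesBuffer);
    gl.bufferData(gl.ARRAY_BUFFER, positions, gl.DYNAMIC_DRAW); // re-upload each frame
    gl.vertexAttribPointer(vertexPositionAttribute, 2, gl.FLOAT, false, 0, 0);
    gl.clear(gl.COLOR_BUFFER_BIT);
    gl.drawArrays(gl.POINTS, 0, NUM_SQS); // one draw call for all particles
    window.requestAnimationFrame(drawFrame, canvas);
}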
You can take a look at the source of these two point cloud renderers:
https://github.com/asalga/XB-PointStream
http://potree.org/wp/download/ ( By me, following files might help you: WeightedPointSizeMaterial.js, pointSize.vs, colouredPoint.fs )
It depends on what you are trying to do. When you say "gravity", do you mean some kind of physical simulation with collisions, or do you just mean velocity += acceleration; position += velocity?
If the latter, then you can do all the math in the shader. An example is here:
https://www.khronos.org/registry/webgl/sdk/demos/google/particles/index.html
These particles are done entirely in the shader. The only input after setup is time. Each "particle" consists of 4 vertices. Each vertex contains
local_position (for a unit quad)
texture_coord
lifetime
starting_position
starting_time
velocity
acceleration
start_size
end_size
orientation (quaternion)
color multiplier
Given time, you can compute the particle's local time (the time since it started):
local_time = time - starting_time;
Then you can compute a position with
base_position = start_position +
                velocity * local_time +
                acceleration * local_time * local_time;
That's acceleration * time^2. You then add the local_position to that base_position to get the position needed to render the quad.
You can also compute a 0 to 1 lerp over the lifetime of the particle
lerp = local_time / lifetime;
This gives you a value you can use to lerp all the other values
size = mix(start_size, end_size, lerp);
Give the particle a size of 0 if it's outside its lifetime:
if (lerp < 0.0 || lerp > 1.0) {
    size = 0.0;
}
This will make the GPU not draw anything.
Using a ramp texture (a 1xN pixel texture) you can easily have the particle change colors over time.
color = texture2D(rampTexture, vec2(lerp, 0.5));
etc...
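A minimal sketch of uploading such a ramp as an RGBA texture a few pixels wide and one pixel tall in raw WebGL (the colors are placeholders):
var ramp = new Uint8Array([
    255, 255, 255, 255,  // white at birth
    255, 128,   0, 255,  // orange at mid-life
      0,   0,   0,   0   // fade out at death
]);
gl.bindTexture(gl.TEXTURE_2D, rampTexture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 3, 1, 0, gl.RGBA, gl.UNSIGNED_BYTE, ramp);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);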
If you follow through the shaders, you'll see other things handled similarly, including spinning the particle (something that would be harder with point sprites), animating across a texture for frames, and doing both 2D and 3D oriented particles. 2D particles are fine for smoke, exhaust, fire, and explosions. 3D particles are good for ripples, possibly tire tracks, and can be combined with 2D particles for ground puffs to hide some of the z-issues of 2D-only particles. Etc...
There are also examples of one shots (explosions, puffs) as well as trails. Press 'P' for a puff. Hold 'T' to see a trail.
AFAIK these are pretty efficient particles in that JavaScript is doing almost nothing.
