WebGL - Textured terrain with heightmap - javascript

I'm trying to create a 3D terrain using WebGL. I have a jpg with the texture for the terrain, and another jpg with the height values (-1 to 1).
I've looked at various wrapper libraries (like SpiderGL and Three.js), but I can't find a suitable example, and when I do find one (like in Three.js) the code is undocumented and I can't figure out how it works.
Can anyone give me a good tutorial or example?
There is an example at Three.js (http://mrdoob.github.com/three.js/examples/webgl_geometry_terrain.html) which is almost what I want. The problem is that they generate the colour of the mountains and the height values randomly. I want to read these values from two different image files.
Any help would be appreciated.
Thanks

Check out this post over on GitHub:
https://github.com/mrdoob/three.js/issues/1003
The example linked there by florianf helped me do this.
function getHeightData(img) {
    // Draw the heightmap image onto a canvas so its pixels can be read back.
    var canvas = document.createElement('canvas');
    canvas.width = 128;
    canvas.height = 128;
    var context = canvas.getContext('2d');
    context.drawImage(img, 0, 0);

    var size = 128 * 128;
    var data = new Float32Array(size); // Float32Array is zero-initialized

    var imgd = context.getImageData(0, 0, 128, 128);
    var pix = imgd.data; // RGBA bytes, 4 per pixel

    var j = 0;
    for (var i = 0, n = pix.length; i < n; i += 4) {
        // Sum R+G+B (0..765) and scale it down to a usable height value;
        // the divisor is just a tuning constant.
        var all = pix[i] + pix[i + 1] + pix[i + 2];
        data[j++] = all / 30;
    }
    return data;
}
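For instance, you could feed the returned array straight into a plane's vertices. This is a minimal sketch, assuming the pre-BufferGeometry API used above and an already-loaded 128x128 image; the element id 'heightmap' and the plane dimensions are placeholders:
// Sketch: displace a plane with the height data from getHeightData().
var img = document.getElementById('heightmap'); // hypothetical <img>
var data = getHeightData(img);

// 127x127 segments give 128x128 vertices, matching the heightmap resolution.
var geometry = new THREE.PlaneGeometry(1000, 1000, 127, 127);
for (var i = 0; i < geometry.vertices.length; i++) {
    geometry.vertices[i].z = data[i]; // the plane lies in the XY plane by default
}
geometry.computeFaceNormals();
geometry.computeVertexNormals();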
Demo: http://oos.moxiecode.com/js_webgl/terrain/index.html

Two methods that I can think of:
Create your landscape vertices as a flat grid. Use vertex texture lookups to query your heightmap and modulate the height (probably your Y component) of each point. This would probably be the easiest approach, but I don't think browser support for it is very good right now (in fact, I can't find any examples). A rough shader sketch follows below.
Load the image, render it to a canvas, and use that to read back the height values. Build a static mesh based on that. This will probably be faster to render, since the shaders are doing less work. It requires more code to build the mesh, however.
For an example of reading image data, you can check out this SO question.
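As a sketch of the first method, the vertex shader could look like this (assuming three.js ShaderMaterial conventions, where position, uv and the matrices are injected automatically; uHeightmap and uHeightScale are uniforms you would have to supply, and the whole thing only works if the GPU supports vertex texture fetch):
uniform sampler2D uHeightmap;  // the heightmap image bound as a texture
uniform float uHeightScale;    // world-space height of a full-white texel

void main() {
    // Sample the heightmap and displace the flat grid vertex along Y.
    float h = texture2D(uHeightmap, uv).r;
    vec3 displaced = vec3(position.x, h * uHeightScale, position.z);
    gl_Position = projectionMatrix * modelViewMatrix * vec4(displaced, 1.0);
}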

You may be interested in my blog post on the topic: http://www.pheelicks.com/2014/03/rendering-large-terrains/
I focus on how to efficiently create your terrain geometry such that you get an adequate level of detail in the near field as well as far away.
You can view a demo of the result here: http://felixpalmer.github.io/lod-terrain/ and all the code is up on github: https://github.com/felixpalmer/lod-terrain
To apply a texture to the terrain, you need to do a texture lookup in the fragment shader, mapping the location in space to a position in your texture. E.g.
vec2 st = vPosition.xy / 1024.0;
vec3 color = texture2D(uColorTexture, st).rgb; // texture2D returns a vec4, so take .rgb

Depending on your GLSL skills, you can write a GLSL vertex shader, assign the texture to one of your texture channels, and read the value in the vertex shader (I believe you need a modern card to read textures in a vertex shader but that may just be me showing my age :P )
In the vertex shader, translate the z value of the vertex based on the value read from the texture.
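The JavaScript side of that setup might look roughly like this, assigning the heightmap to a texture channel via a uniform (a sketch with assumed names, using the ShaderMaterial API of that era; heightmapTexture is a THREE.Texture you have already loaded):
var material = new THREE.ShaderMaterial({
    uniforms: {
        uHeightmap:   { type: 't', value: heightmapTexture }, // texture channel
        uHeightScale: { type: 'f', value: 100.0 }
    },
    vertexShader: document.getElementById('vertexShader').textContent,
    fragmentShader: document.getElementById('fragmentShader').textContent
});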

Babylon.js makes this extremely easy to implement. You can see an example at:
Heightmap Playground
They've even implemented the Cannon.js physics engine with it, so you can handle collisions: Heightmap with collisions
Note: as of this writing it only works with the cannon.js physics plugin, and friction doesn't work (must be set to 0). Also, make sure you set the location of a mesh/impostor BEFORE you set the physics state, or you'll get weird behavior.

Three.js sets Texture RGB values to zero when ALPHA is zero on IOS

I am working on a WebGL project using javascript and the three.js framework. For that I am writing a custom shader with GLSL in which I have to load several lookup tables. Meaning I need to use some textures' individual RGBA values for some calculations rather than displaying them.
This works fine on all devices that I've tested. However, on iOS devices (like an iPad) the RGB values of a texture are automatically set to 0 when its alpha channel is 0. I do not think that this is due to GLSL's texture2D function but rather has something to do with how three.js loads textures on iOS. I am using the built-in TextureLoader for that:
var textureLoader = new THREE.TextureLoader();
var lutMap = textureLoader.load('path/to/lookup/table/img.png');
lutMap.minFilter = THREE.NearestFilter;
lutMap.magFilter = THREE.NearestFilter;
lutMap.generateMipmaps = false;
lutMap.type = THREE.UnsignedByteType;
lutMap.format = THREE.RGBAFormat;
For testing purposes I've created a test image with constant RGB values (255,0,0) and with a constantly decreasing alpha value from the top-right corner to the bottom-left one, with some pixels' alpha values being 0.
After the texture was loaded, I checked the zero-alpha pixels and their R values were indeed set to 0. I used the following code to read the image's data:
function getImageData( image ) {
    var canvas = document.createElement( 'canvas' );
    canvas.width = image.width;
    canvas.height = image.height;
    var context = canvas.getContext( '2d' );
    context.drawImage( image, 0, 0 );
    return context.getImageData( 0, 0, image.width, image.height );
}
The strange thing was that this was also true on my Windows PC, but the shader works just fine. So maybe this is only due to the canvas and has nothing to do with the actual problem. On the iOS device however, the texture2D(...) lookup in the GLSL code indeed returned (0,0,0,0) for exactly those pixels. (Please note that I come from Java/C++ and I am not very familiar with javascript yet! :) )
I've also tried setting the premultipliedAlpha flag to 0 in the WebGLRenderer instance, and also in the THREE.ShaderMaterial object itself. Sadly, it did not fix the problem.
Did anyone experience similar problems and knows how to fix this unwanted behaviour?
The low level PNG reading code on iOS will go through CoreGraphics and premultiply each RGB value by the A component for each pixel, so if A = 0 then each RGB value will come out as zero. What you can do is load a 24 BPP image, so that the alpha is always 0xFF (aka 255), but you cannot disable this premultiply step under iOS when dealing with a 32 BPP image.
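If you do need the alpha data, one possible workaround (my assumption, not something guaranteed by the above) is to ship the alpha channel in a second 24 BPP texture and recombine the two in the shader:
// Sketch: uLutRgb holds the RGB part (alpha forced to 255 in the PNG),
// uLutAlpha holds the original alpha in its R channel.
uniform sampler2D uLutRgb;
uniform sampler2D uLutAlpha;

vec4 lookup(vec2 st) {
    return vec4(texture2D(uLutRgb, st).rgb, texture2D(uLutAlpha, st).r);
}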

Create texture from Array THREE.js

I'm working on a terrain generator, but I can't seem to figure out how to do the colors. I want to be able to generate an image that will take up my whole PlaneGeometry. My question is: how can I create a single image that will cover the entire PlaneGeometry (with no wrapping) based off my height map? I can think of one way, but I'm not sure it would fully cover the PlaneGeometry, and it would be very inefficient: I'd draw it in a two-dimensional view with colors on a canvas, then convert the canvas to a texture. Is that the best/only way?
UPDATE: Using DataTexture, I got some errors. I have absolutely no idea where I went wrong. Here's the error I got:
WebGL: drawElements: texture bound to texture unit 0 is not renderable. It maybe non-power-of-2 and have incompatible texture filtering or is not 'texture complete'. Or the texture is Float or Half Float type with linear filtering while OES_float_linear or OES_half_float_linear extension is not enabled.
Both the DataTexture and the PlaneGeometry have a size of 512^2. What can I do to fix this?
Here's some of the code I use:
EDIT: I fixed it. Here's the working code I used.
function genDataTexture(){
    // Set the size: rounds the byte count to a power of two. Note this rounds
    // *down*, so it assumes the map dimensions are themselves powers of two.
    var dataMap = new Uint8Array(1 << (Math.floor(Math.log(map.length * map[0].length * 4) / Math.log(2))));
    /* ... loop over every pixel of the map, computing `color` ... */
        // Set the r,g,b,a for each pixel; color determined above
        dataMap[count++] = color.r;
        dataMap[count++] = color.g;
        dataMap[count++] = color.b;
        dataMap[count++] = 255;
    } // closes the per-pixel loop elided above
    var texture = new THREE.DataTexture(dataMap, map.length, map[0].length, THREE.RGBAFormat);
    texture.needsUpdate = true;
    return texture;
}
/* ... */
//Create the material
var material = new THREE.MeshBasicMaterial({map: genDataTexture()});
//Here, I mesh it and add it to the scene. I don't change anything after this.
The optimal way, if the data is already in your JavaScript code, is to use a DataTexture -- see https://threejs.org/docs/#api/textures/DataTexture for the general docs, or look at THREE.ImageUtils.generateDataTexture() (http://threejs.org/docs/#Reference/Extras/ImageUtils) for a fairly handy way to make them.
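A minimal DataTexture setup might look like this (a sketch; heights and heightToColor are hypothetical stand-ins for your own data and color ramp):
var size = 512;
var data = new Uint8Array(size * size * 4); // 4 bytes per RGBA pixel
for (var i = 0; i < size * size; i++) {
    var color = heightToColor(heights[i]); // hypothetical helper
    data[i * 4]     = color.r;
    data[i * 4 + 1] = color.g;
    data[i * 4 + 2] = color.b;
    data[i * 4 + 3] = 255; // fully opaque
}
var texture = new THREE.DataTexture(data, size, size, THREE.RGBAFormat);
texture.needsUpdate = true; // required before first render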

Face normals on dynamic geometry

I'm trying to create a vertex animation for a mesh.
Just imagine a vertex shader, but in software instead of hardware.
Basically what I do is apply a transformation matrix to each vertex. The mesh looks OK, but the normals don't look good at all.
I've tried using both computeVertexNormals() and computeFaceNormals(), but it just doesn't work.
The following code is the one I used for the animation (initialVertices are the initial vertices generated by the CubeGeometry):
for (var i = 0; i < mesh1.geometry.vertices.length; i++) {
    var vtx = initialVertices[i].clone();
    // Twist amount depends on elapsed time and on the vertex's height (y).
    var rot = clock.getElapsedTime() - vtx.y * 0.02;
    matrix.makeRotationY(rot);
    vtx.applyMatrix4(matrix);
    mesh1.geometry.vertices[i] = vtx;
}
mesh1.geometry.verticesNeedUpdate = true;
Here are two examples: one working correctly with CanvasRenderer:
http://kile.stravaganza.org/lab/js/dynamic/canvas.html
and the one that doesn't works in WebGL:
http://kile.stravaganza.org/lab/js/dynamic/webgl.html
Any idea what I'm missing?
You are missing several things.
(1) You need to set the ambient reflectance of the material. It is reasonable to set it equal to the diffuse reflectance, or color.
var material = new THREE.MeshLambertMaterial( {
    color: 0xff0000,
    ambient: 0xff0000
} );
(2) If you are moving vertices, you need to update centroids, face normals, and vertex normals -- in the proper order. See the source code.
mesh1.geometry.computeCentroids();
mesh1.geometry.computeFaceNormals();
mesh1.geometry.computeVertexNormals();
(3) When you are using WebGLRenderer, you need to set the required update flags:
mesh1.geometry.verticesNeedUpdate = true;
mesh1.geometry.normalsNeedUpdate = true;
Tip: it is a good idea to avoid new and clone() in tight loops.
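For example, the loop from the question could reuse a single temporary vector instead of cloning on every iteration (a sketch, keeping the question's variable names):
var tmp = new THREE.Vector3(); // allocated once, outside the loop
for (var i = 0; i < mesh1.geometry.vertices.length; i++) {
    tmp.copy(initialVertices[i]);
    matrix.makeRotationY(clock.getElapsedTime() - tmp.y * 0.02);
    tmp.applyMatrix4(matrix);
    mesh1.geometry.vertices[i].copy(tmp); // write into the existing vertex
}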
three.js r.63

Three.js outlines

Is it possible to have a black outline on my 3D models with three.js?
I would like graphics which look like Borderlands 2 (toon shading + black outlines).
I'm sure I'm coming in late; let's hope this solves someone's question later.
Here's the deal: you don't need to render everything twice, and the overhead is actually not substantial. All you need to do is duplicate the mesh and set the duplicate mesh's material side to THREE.BackSide. No double passes. You will be rendering two meshes instead, with most of the outline's geometry culled by WebGL's back-face culling.
Here's an example:
var scene = new THREE.Scene();
//Create main object
var mesh_geo = new THREE.BoxGeometry(1, 1, 1);
var mesh_mat = new THREE.MeshBasicMaterial({color : 0xff0000});
var mesh = new THREE.Mesh(mesh_geo, mesh_mat);
scene.add(mesh);
//Create outline object
var outline_geo = new THREE.BoxGeometry(1, 1, 1);
//Notice the second parameter of the material
var outline_mat = new THREE.MeshBasicMaterial({color : 0x00ff00, side: THREE.BackSide});
var outline = new THREE.Mesh(outline_geo, outline_mat);
//Scale the object up to have an outline (as discussed in previous answer)
outline.scale.multiplyScalar(1.5);
scene.add(outline);
For more details on backface culling, check out: http://en.wikipedia.org/wiki/Back-face_culling
The above approach works well if you want to add an outline to objects without adding a toon shader (and thus losing "realism").
Toon shading by itself supports edge detection; the 'cel' shader in Borderlands was developed to achieve this effect.
In cel shading, devs can either use the object-duplication method (done at a low pipeline level) or use image-processing filters for edge detection. This is where the performance tradeoff between the two techniques comes in.
More info on cel: http://en.wikipedia.org/wiki/Cel_shading
Cheers!
Yes, it is possible, but not in a simple out-of-the-box way. For toon shading there are even shaders included in /examples/js/ShaderToon.js.
For the outlines, I think the most commonly suggested method is to render in two passes. The first pass renders the models in black, at a slightly larger scale; the second pass is at normal scale and with the toon shaders. This way you'll see the larger black models as an outline. It's not perfect, but I don't think there's an easy way out. You might have more success searching for "three.js hidden line rendering", as a somewhat similar method (though with a different look) is used to achieve that.
It's an old question, but here is what I did.
I created an outlined cel-shader for my CG course. Unfortunately it takes 3 rendering passes; I'm currently trying to figure out how to remove one pass.
Here's the idea:
1) Render a normal-depth image to a texture.
In the vertex shader you do what you normally do: transform the position to screen space, and the normal as well.
In the fragment shader you calculate the depth of the pixel and then output the normal as the color, with the depth as the alpha value:
float ndcDepth = (2.0 * gl_FragCoord.z - gl_DepthRange.near - gl_DepthRange.far) / (gl_DepthRange.far - gl_DepthRange.near);
float clipDepth = ndcDepth / gl_FragCoord.w;
2) Render the scene onto a texture with cel-shading. I changed the scene override material for this.
3) Make a quad, render both textures onto it, and have an orthographic camera look at it. The cel-shaded texture is simply rendered onto the quad, but on the normal-depth texture you run some edge detection; wherever it reports an edge, you know the pixel needs to be black.
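The edge test in step 3 could look roughly like this fragment shader (a sketch with assumed uniform names, not the actual course code; the 0.1 threshold is arbitrary):
uniform sampler2D uCelShaded;    // output of step 2
uniform sampler2D uNormalDepth;  // output of step 1: normal in rgb, depth in a
uniform vec2 uTexel;             // 1.0 / texture resolution
varying vec2 vUv;

void main() {
    vec4 c = texture2D(uNormalDepth, vUv);
    vec4 r = texture2D(uNormalDepth, vUv + vec2(uTexel.x, 0.0));
    vec4 u = texture2D(uNormalDepth, vUv + vec2(0.0, uTexel.y));
    // A large jump in normal (rgb) or depth (a) between neighbours is an edge.
    float edge = step(0.1, length(c - r) + length(c - u));
    gl_FragColor = mix(texture2D(uCelShaded, vUv), vec4(vec3(0.0), 1.0), edge);
}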

Trying to clear a WebGL Texture to a solid color

I have a series of textures which stack together (using a mip-map-style resolution pyramid) to arrive at a final image.
Because of the way they stack, it's necessary to initialize them to an unsigned int value of 128 when they have not been populated by meaningful image data. That is, 128 is zero, because the render shader will subtract .5 from each (normalized) texture value, which allows subsequent pyramid layers to offset the final image by positive or negative values.
I cannot seem to figure out how to initialize a (single-channel GL_LUMINANCE) texture to a value!
I've tried attaching it to a framebuffer as a render target and rendering polys to it, but the FBO seems to be marked as incomplete. I've also tried targeting it and using gl.clear(gl.COLOR_BUFFER_BIT), but again it's considered incomplete.
The obvious thing would be to copy values in using gl.texSubImage2D() but that seems really slow... maybe that's the only way? I was hoping for something more elegant that doesn't require allocating so much memory (at least a frame's worth of a single value) and so slow (because all that data must be written to the buffer when it's all a single value).
The only way to set a texture to a (default) value in WebGL seems to be to resize it, or allocate it, which sets it to all zeroes.
Also, there doesn't seem to be a copy mode like gl.SIGNED_BYTE which would allow zero to be zero (i.e., signed values coming in)... but this also doesn't solve the problem of initializing the texture to a single value (in this case, zero).
Any ideas? How does one initialize a WebGL texture to a value aside from just plain old copying the value into it?
Being able to render to a particular type of texture is unfortunately not guaranteed by the OpenGL ES 2.0 spec, on which WebGL is based. The only way to tell if it works is to create the texture, attach it to a framebuffer, and then call checkFramebufferStatus and see if it returns FRAMEBUFFER_COMPLETE. Unfortunately it won't in a lot of cases.
Only 3 combinations of attachments are guaranteed to work:
COLOR_ATTACHMENT0 = RGBA/UNSIGNED_BYTE texture
COLOR_ATTACHMENT0 = RGBA/UNSIGNED_BYTE texture + DEPTH_ATTACHMENT = DEPTH_COMPONENT16 renderbuffer
COLOR_ATTACHMENT0 = RGBA/UNSIGNED_BYTE texture + DEPTH_STENCIL_ATTACHMENT = DEPTH_STENCIL renderbuffer
So your options are:
use an RGBA texture and render 128,128,128,??? to it in an fbo, or gl.clear it (see the sketch after this list)
use a LUMINANCE texture and call texImage2D
use a LUMINANCE texture and call copyTexImage2D using the backbuffer or an fbo of the correct size cleared to the color you want (though there is no guarantee this is fast AFAIK)
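As a sketch of the first option (assuming gl and texture already exist, and that the texture is RGBA/UNSIGNED_BYTE):
var fbo = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, fbo);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0,
                        gl.TEXTURE_2D, texture, 0);
// Only clear if this attachment combination is actually renderable.
if (gl.checkFramebufferStatus(gl.FRAMEBUFFER) === gl.FRAMEBUFFER_COMPLETE) {
    gl.clearColor(128 / 255, 128 / 255, 128 / 255, 1.0);
    gl.clear(gl.COLOR_BUFFER_BIT);
}
gl.bindFramebuffer(gl.FRAMEBUFFER, null);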
In my experience (mainly with Chrome in OSX) using Canvas to initialise a texture is fast. I guess it is because the browser allocates the Canvas and draws to it all on the GPU, and its WebGL implementation uses the same GL context as Canvas, so there is no huge CPU-to-GPU memory transfer.
// Quickly init texture to (128,128,128)
var canvas = this._canvas = document.createElement("canvas");
canvas.width = wTex;
canvas.height = hTex;
var ctx = this._ctx = canvas.getContext("2d");
ctx.fillStyle = "#808080";
ctx.fillRect(0, 0, wTex, hTex);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGB, gl.RGB, gl.UNSIGNED_BYTE, canvas);
