Addressing Texel in ThreeJS DataTexture - javascript

I'm looking to compute texel references for a THREE.DataTexture in JavaScript for use in a fragment shader. I've succeeded in computing screen-space coordinates of points and passing them to a shader in a uniform float array of x and y values, then referencing those points by index in my shader. I now want to render more points than a uniform float array can hold, so I'd like to use a DataTexture and write the coordinates into the RG values of RGBA texels.
Referencing this question, I am using the following method:
var tDataWidth = points.length;
//4 bytes (RGBA) per texel for a tDataWidth x tDataWidth texture
var tData = new Uint8Array( tDataWidth * tDataWidth * 4 );
var texelSize = 1.0 / tDataWidth;
var texelOffset = new THREE.Vector2(0.5 * texelSize, 0.5 * texelSize);
for(var i = 0; i < points.length; i++){
    //convert data to 0-1, then to 0-255
    //inverse is to divide by 255 then multiply by width, height respectively
    tData[i * 4] = Math.round(255 * (points[i].x / window.innerWidth));
    tData[i * 4 + 1] = Math.round(255 * ((window.innerHeight - points[i].y) / window.innerHeight));
    tData[i * 4 + 2] = 0;
    tData[i * 4 + 3] = 0;
    //calculate UV texel coordinates here (center of each texel)
    //Correct after edit
    var u = ((i % tDataWidth) / tDataWidth) + texelOffset.x;
    var v = (Math.floor(i / tDataWidth) / tDataWidth) + texelOffset.y;
    var vUV = new THREE.Vector2(u, v);
    //this function inserts the reference to the texel at the index into the shader
    //referenced in the frag shader:
    //cvec = texture2D(tData, index);
    shaderInsert += ShaderInsert(vUV);
}
var dTexture = new THREE.DataTexture( tData, tDataWidth, tDataWidth, THREE.RGBAFormat, THREE.UnsignedByteType );
//I think this is necessary
dTexture.magFilter = THREE.NearestFilter;
dTexture.needsUpdate = true;
//update uniforms of shader to get this DataTexture
renderer.getUniforms("circles")["tData"].value = dTexture;
//return string insert of circle
//I'm editing the shader through javascript then recompiling it
//There's more to it in the calling function, but this is the relevant part I think
...
ShaderInsert(index){
    var circle = "\n\tvIndex = vec2(" + String(index.x) + ", " + String(index.y) + ");\n";
    circle += "\tcvec = texture2D(tData, vIndex);\n";
    circle += "\tcpos = vec2( (cvec.r / 255.0) * resolution.x, (cvec.y / 255.0) * resolution.y);\n";
    circle += "\tc = circleDist(translate(p, cpos), 7.0);\n";
    circle += "\tm = merge(m, c);";
    return(circle);
}
Any help on where I'm going wrong? Right now output is all in the lower left corner, so (0, window.innerHeight) as far as I can tell. Thanks!

So the answer is actually straightforward: in the fragment shader, RGBA values are already 0.0 - 1.0, so there's no need to divide by 255 as I was doing.
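For reference, here's a sketch of the corrected insert builder with that division removed (it assumes the same tData / resolution uniforms and GLSL variables from my snippet above):
function ShaderInsert(index) {
    // vIndex is the precomputed texel-center UV for this point
    var circle = "\n\tvIndex = vec2(" + index.x + ", " + index.y + ");\n";
    circle += "\tcvec = texture2D(tData, vIndex);\n";
    // texture2D already returns values normalized to 0.0 - 1.0, so just scale by resolution
    circle += "\tcpos = vec2(cvec.r * resolution.x, cvec.g * resolution.y);\n";
    circle += "\tc = circleDist(translate(p, cpos), 7.0);\n";
    circle += "\tm = merge(m, c);";
    return circle;
}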
I'd also like to say that I discovered the Spector.js Chrome extension, which lets you inspect all the WebGL calls and buffers. Pretty cool!
If anyone wants to learn more about how the drawing functions work in the fragment shader, it's all in this awesome shader which I did not write:
https://www.shadertoy.com/view/4dfXDn
<3

Related

Converting an equirectangular depth map into 3d point cloud

I have a 2D equirectangular depth map that is a 1024 x 512 array of floats, each ranging between 0 and 1. Here's an example (truncated to grayscale):
I want to convert it to a set of 3D points, but I am having trouble finding the right formula to do so; it's sort of close. Pseudocode here (using a vec3() library):
for(var y = 0; y < array_height; ++y) {
    var lat = (y / array_height) * 180.0 - 90.0;
    var rho = Math.cos(lat * Math.PI / 180.0);
    for(var x = 0; x < array_width; ++x) {
        var lng = (x / array_width) * 360.0 - 180.0;
        var pos = new vec3();
        pos.x = (r * Math.cos(lng * Math.PI / 180.0));
        pos.y = (Math.sin(lat * Math.PI / 180.0));
        pos.z = (r * Math.sin(lng * Math.PI / 180.0));
        pos.norm();
        var d = parseFloat(depth[(y * array_width) + x] / 255);
        pos.multiply(d);
        // at this point I can plot pos as an X, Y, Z point
    }
}
What I end up with isn't quite right and I can't tell why not. I am certain the data is correct. Can anyone suggest what I am doing wrong?
Thank you.
Molly.
Well, it looks like the texture is a half-sphere in spherical coordinates:
x axis is longitude angle a <0,180> [deg]
y axis is latitude angle b <-45,+45> [deg]
intensity is radius r <0,1> [-]
So for each pixel simply:
linearly convert x,y to a,b
in degrees:
a = x*180 / (width -1)
b = -45 + ( y* 90 / (height-1) )
or in radians:
a = x*M_PI / (width -1)
b = -0.25*M_PI + ( 0.5*y*M_PI / (height-1) )
apply spherical to cartesian conversion
x=r*cos(a)*cos(b);
y=r*sin(a)*cos(b);
z=r* sin(b);
It looks like you have coded this conversion wrongly, as the latitude angle should appear in all of x, y, z, not just y !!! Also you should not normalize the resulting position; that would corrupt the shape !!!
store point into point cloud.
When I put it all together in VCL/C++ (sorry, I do not code in JavaScript):
List<double> pnt; // 3D point list x0,y0,z0,x1,y1,z1,...
void compute()
{
int x,y,xs,ys; // texture position and size
double a,b,r,da,db; // spherical position and angle steps
double xx,yy,zz; // 3D point
DWORD *p; // texture pixel access
// load and prepare BMP texture
Graphics::TBitmap *bmp=new Graphics::TBitmap;
bmp->LoadFromFile("map.bmp");
bmp->HandleType=bmDIB;
bmp->PixelFormat=pf32bit;
xs=bmp->Width;
ys=bmp->Height;
/*
// 360x180 deg
da=2.0*M_PI/double(xs-1);
db=1.0*M_PI/double(ys-1);
b=-0.5*M_PI;
*/
// 180x90 deg
da=1.0*M_PI/double(xs-1);
db=0.5*M_PI/double(ys-1);
b=-0.25*M_PI;
// process all its pixels
pnt.num=0;
for ( y=0; y<ys; y++,b+=db)
for (p=(DWORD*)bmp->ScanLine[y],a=0.0,x=0; x<xs; x++,a+=da)
{
// pixel access
r=DWORD(p[x]&255); // obtain intensity from texture <0..255>
r/=255.0; // normalize to <0..1>
// convert to 3D
xx=r*cos(a)*cos(b);
yy=r*sin(a)*cos(b);
zz=r* sin(b);
// store to pointcloud
pnt.add(xx);
pnt.add(yy);
pnt.add(zz);
}
// clean up
delete bmp;
}
Here is a preview for 180x90 deg:
and a preview for 360x180 deg:
Not sure which one is correct (as I do not have any context for your map) but the first option looks more correct to me ...
In case it's the second, just use different (doubled) numbers for the interpolation in bullet #1.
Also, if you want to remove the background just ignore the r==1 pixels:
simply test the intensity against the max value (before normalization); in my case by adding this line:
if (r==255) continue;
after this one
r=DWORD(p[x]&255);
In your case (you have <0..1> already) you should test r>=0.9999 or something like that instead.
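For a JavaScript version matching the question's setup, a minimal sketch of the same 180x90 deg conversion might look like this (my own translation; it assumes depthMap is the question's flat width*height array of floats already in <0..1>, and returns a flat x,y,z list):
// depthMap: array of width*height values in [0, 1]
function depthToPointCloud(depthMap, width, height) {
    var points = []; // flat x0, y0, z0, x1, y1, z1, ...
    var da = Math.PI / (width - 1);        // 180 deg across
    var db = 0.5 * Math.PI / (height - 1); // 90 deg down
    for (var y = 0; y < height; y++) {
        var b = -0.25 * Math.PI + y * db;  // latitude
        for (var x = 0; x < width; x++) {
            var a = x * da;                // longitude
            var r = depthMap[y * width + x];
            if (r >= 0.9999) continue;     // skip background pixels
            points.push(r * Math.cos(a) * Math.cos(b),
                        r * Math.sin(a) * Math.cos(b),
                        r * Math.sin(b));
        }
    }
    return points;
}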

Post Effects and Transparent background in three.js

I'm trying to use a transparent background with some post effects like the UnrealBloom, SMAA and tone-mapping passes provided in the examples, but they seem to break the transparency of my render.
renderer = new THREE.WebGLRenderer({ canvas, alpha: true });
renderer.setClearColor(0xFF0000, 0);
composer = new EffectComposer(renderer);
composer.addPass(new RenderPass(scene, camera));
// Bloom pass
canvasSize = new THREE.Vector2(canvas.width, canvas.height);
pass = new UnrealBloomPass(canvasSize, strength, radius, threshold);
composer.addPass(pass);
// SMAA pass
size = canvasSize.multiplyScalar(renderer.getPixelRatio());
pass = new SMAAPass(size.x, size.y);
pass.renderToScreen = true;
composer.addPass(pass);
// Tonemapping
renderer.toneMappingExposure = exposure;
renderer.toneMappingWhitePoint = whitePoint;
renderer.toneMapping = type;
composer.render();
If I deactivate the bloom pass I get a correct transparent background, but when it's activated I get a black background. I looked at the sources and it seems it should handle the alpha channel correctly, as the render target format is set to THREE.RGBAFormat.
Edit: After some research, I found where this comes from: getSeperableBlurMaterial in js\postprocessing\UnrealBloomPass.js.
The fragment's alpha channel is always set to 1.0, which results in a complete removal of the previous alpha values when doing the additive blending at the end.
The cool thing would be to find a proper way to apply the alpha inside the Gaussian blur. Any idea how?
I found a solution; it can be sorted like this:
https://github.com/mrdoob/three.js/issues/14104
void main()
{
    vec2 invSize = 1.0 / texSize;
    float fSigma = float(SIGMA);
    float weightSum = gaussianPdf(0.0, fSigma);
    float alphaSum = 0.0;
    vec3 diffuseSum = texture2D(colorTexture, vUv).rgb * weightSum;
    for( int i = 1; i < KERNEL_RADIUS; i ++ )
    {
        float x = float(i);
        float weight = gaussianPdf(x, fSigma);
        vec2 uvOffset = direction * invSize * x;
        vec4 sample1 = texture2D( colorTexture, vUv + uvOffset);
        float weightAlpha = sample1.a * weight;
        diffuseSum += sample1.rgb * weightAlpha;
        alphaSum += weightAlpha;
        weightSum += weight;
        vec4 sample2 = texture2D( colorTexture, vUv - uvOffset);
        weightAlpha = sample2.a * weight;
        diffuseSum += sample2.rgb * weightAlpha;
        alphaSum += weightAlpha;
        weightSum += weight;
    }
    alphaSum /= weightSum;
    diffuseSum /= alphaSum; // Should apply discard here if alphaSum is 0
    gl_FragColor = vec4(diffuseSum.rgb, alphaSum);
}
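To apply it without editing the three.js sources, one option (just a sketch, not an official API: it assumes the pass keeps its blur materials in a separableBlurMaterials array of ShaderMaterials, as the UnrealBloomPass source referenced above currently does) is to swap the fragment shader on each blur material right after constructing the pass:
// alphaBlurFragmentShader: the original declarations (uniforms, varying vUv, gaussianPdf)
// from getSeperableBlurMaterial, with main() replaced by the alpha-aware version above.
function patchBloomAlpha(bloomPass, alphaBlurFragmentShader) {
    bloomPass.separableBlurMaterials.forEach(function (material) {
        material.fragmentShader = alphaBlurFragmentShader;
        material.needsUpdate = true; // force a recompile with the alpha-aware blur
    });
}
// usage, right after: pass = new UnrealBloomPass(canvasSize, strength, radius, threshold);
// patchBloomAlpha(pass, alphaBlurFragmentShader);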

initiate a number of vertices/triangles for vertex shader to use

I've been playing around with vertexshaderart.com and I'd like to use what I learned on a separate website. While I have used shaders before, some effects achieved on the site depend on having access to vertices/lines/triangles. While passing vertices is easy enough (at least it was with THREE.js, though it is kind of an overkill for simple shaders, but in some cases in need shader materials too), creating triangles seems a bit more complex.
I can't figure it out from the source, how exactly are triangles created there, when you switch the mode here?
I'd like to replicate that behavior but I honestly have no idea how to approach it. I could just create a number of triangles through THREE but with so many individual objects performance takes a hit rapidly. Are the triangles created here separate entities or are they a part of one geometry?
vertexshaderart.com is more of a puzzle, toy, art box, creative coding experiment than an example of good WebGL. The same is true of shadertoy.com. An example like this is beautiful but it runs at 20fps in its tiny window and about 1fps fullscreen on my 2014 MacBook Pro, and yet my MBP can play beautiful games with huge worlds rendered fullscreen at 60fps. In other words, the techniques are more for art/fun/play/mental exercise and for the fun of trying to make things happen with extreme limits than to actually be good techniques.
The point I'm trying to make is both vertexshaderart and shadertoy are fun but impractical.
The way vertexshaderart works is it provides a counter, vertexId, that counts vertices: 0 to N, where N is the count setting at the top of the UI. For each count you output gl_Position and a v_color (color).
So, if you want to draw something you need to provide the math to generate vertex positions based on the count. For example, let's do it using Canvas 2D first.
Here's a fake JavaScript vertex shader written in JavaScript that given nothing but vertexId will draw a grid 1 unit high and N units long where N = the number of vertices (vertexCount) / 6.
function ourPseudoVertexShader(vertexId, time) {
// let's compute an infinite grid of points based off vertexId
var x = Math.floor(vertexId / 6) + (vertexId % 2);
var y = (Math.floor(vertexId / 2) + Math.floor(vertexId / 3)) % 2;
// color every other triangle red or green
var triangleId = Math.floor(vertexId / 3);
var color = triangleId % 2 ? "#F00" : "#0F0";
return {
x: x * 0.2,
y: y * 0.2,
color: color,
};
}
We call it from a loop supplying vertexId
for (var count = 0; count < vertexCount; count += 3) {
// get 3 points
var position0 = ourPseudoVertexShader(count + 0, time);
var position1 = ourPseudoVertexShader(count + 1, time);
var position2 = ourPseudoVertexShader(count + 2, time);
// draw triangle
ctx.beginPath();
ctx.moveTo(position0.x, position0.y);
ctx.lineTo(position1.x, position1.y);
ctx.lineTo(position2.x, position2.y);
ctx.fillStyle = position0.color;
ctx.fill();
}
If you run it here you'll see a grid 1 unit high and N units long. I've set the canvas origin so 0,0 is in the center just like WebGL and so the canvas is addressed +1 to -1 across and +1 to -1 down
var vertexCount = 100;
function ourPseudoVertexShader(vertexId, time) {
// let's compute an infinite grid of points based off vertexId
var x = Math.floor(vertexId / 6) + (vertexId % 2);
var y = (Math.floor(vertexId / 2) + Math.floor(vertexId / 3)) % 2;
// color every other triangle red or green
var triangleId = Math.floor(vertexId / 3);
var color = triangleId % 2 ? "#F00" : "#0F0";
return {
x: x * 0.2,
y: y * 0.2,
color: color,
};
}
var ctx = document.querySelector("canvas").getContext("2d");
requestAnimationFrame(render);
function render(time) {
time *= 0.001;
ctx.clearRect(0, 0, ctx.canvas.width, ctx.canvas.height);
ctx.save();
ctx.translate(ctx.canvas.width / 2, ctx.canvas.height / 2);
ctx.scale(ctx.canvas.width / 2, -ctx.canvas.height / 2);
// lets assume triangles
for (var count = 0; count < vertexCount; count += 3) {
// get 3 points
var position0 = ourPseudoVertexShader(count + 0, time);
var position1 = ourPseudoVertexShader(count + 1, time);
var position2 = ourPseudoVertexShader(count + 2, time);
// draw triangle
ctx.beginPath();
ctx.moveTo(position0.x, position0.y);
ctx.lineTo(position1.x, position1.y);
ctx.lineTo(position2.x, position2.y);
ctx.fillStyle = position0.color;
ctx.fill();
}
ctx.restore();
requestAnimationFrame(render);
}
canvas { border: 1px solid black; }
<canvas width="500" height="200"></canvas>
Doing the same thing in WebGL means making a buffer with the count:
var count = [];
for (var i = 0; i < vertexCount; ++i) {
count.push(i);
}
Then putting that count in a buffer and using that as an attribute for a shader.
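If you're not using a helper library like twgl, that step is roughly the usual buffer/attribute setup (a sketch; program is assumed to be your compiled and linked shader program):
// upload the counter as a 1-component float attribute
var countBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, countBuffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(count), gl.STATIC_DRAW);

var vertexIdLoc = gl.getAttribLocation(program, "vertexId");
gl.enableVertexAttribArray(vertexIdLoc);
gl.vertexAttribPointer(vertexIdLoc, 1, gl.FLOAT, false, 0, 0);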
Here's the equivalent shader to the fake shader above
attribute float vertexId;
uniform float time;
varying vec4 v_color;
void main() {
// let's compute an infinite grid of points based off vertexId
float x = floor(vertexId / 6.) + mod(vertexId, 2.);
float y = mod(floor(vertexId / 2.) + floor(vertexId / 3.), 2.);
// color every other triangle red or green
float triangleId = floor(vertexId / 3.);
v_color = mix(vec4(0, 1, 0, 1), vec4(1, 0, 0, 1), mod(triangleId, 2.));
gl_Position = vec4(x * 0.2, y * 0.2, 0, 1);
}
If we run that we'll get the same result
var vs = `
attribute float vertexId;
uniform float vertexCount;
uniform float time;
varying vec4 v_color;
void main() {
// let's compute an infinite grid of points based off vertexId
float x = floor(vertexId / 6.) + mod(vertexId, 2.);
float y = mod(floor(vertexId / 2.) + floor(vertexId / 3.), 2.);
// color every other triangle red or green
float triangleId = floor(vertexId / 3.);
v_color = mix(vec4(0, 1, 0, 1), vec4(1, 0, 0, 1), mod(triangleId, 2.));
gl_Position = vec4(x * 0.2, y * 0.2, 0, 1);
}
`;
var fs = `
precision mediump float;
varying vec4 v_color;
void main() {
gl_FragColor = v_color;
}
`;
var vertexCount = 100;
var gl = document.querySelector("canvas").getContext("webgl");
var count = [];
for (var i = 0; i < vertexCount; ++i) {
count.push(i);
}
var bufferInfo = twgl.createBufferInfoFromArrays(gl, {
vertexId: { numComponents: 1, data: count, },
});
var programInfo = twgl.createProgramInfo(gl, [vs, fs]);
var uniforms = {
time: 0,
vertexCount: vertexCount,
};
requestAnimationFrame(render);
function render(time) {
uniforms.time = time * 0.001;
gl.useProgram(programInfo.program);
twgl.setBuffersAndAttributes(gl, programInfo, bufferInfo);
twgl.setUniforms(programInfo, uniforms);
twgl.drawBufferInfo(gl, gl.TRIANGLES, bufferInfo);
requestAnimationFrame(render);
}
canvas { border: 1px solid black; }
<script src="https://twgljs.org/dist/twgl.min.js"></script>
<canvas width="500" height="200"></canvas>
Everything else on vertexshaderart is just creative math to make interesting patterns. You can use time to do animation. A texture with sound data is also provided.
There are some tutorials here
So, in answer to your question, when you switch modes (triangles/lines/points) on vertexshaderart.com all that does is change what's passed to gl.drawArrays (gl.POINTS, gl.LINES, gl.TRIANGLES). The points themselves are generated in the vertex shader like the example above.
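For example, with the same buffer and shader set up as above, the only difference between the modes is which constant you pass:
gl.drawArrays(gl.POINTS, 0, vertexCount);    // each vertex is a dot
gl.drawArrays(gl.LINES, 0, vertexCount);     // each pair of vertices is a segment
gl.drawArrays(gl.TRIANGLES, 0, vertexCount); // each triple of vertices is a triangle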
So that leaves the question, what specific effect are you trying to achieve. Then we can know what to suggest to achieve it. You might want to ask a new question for that (so that this answer still matches the question above)

Get color of the texture at UV coordinate

I am using three.js v73.
I have a UV coordinate from a raycaster intersection.
I also have the texture of this object. How can I get the color (RGB or RGBA) of the texture at that UV coordinate?
I have tried getting the pixel from the texture's image, but it was using a lot of memory.
If you want it to be fast, keep your texture's images around. At init time, for each image you're making a texture from, also make a copy of its data with something like:
// make the canvas same size as the image
some2dCanvasCtx.canvas.width = img.width;
some2dCanvasCtx.canvas.height = img.height;
// draw the image into the canvas
some2dCanvasCtx.drawImage(img, 0, 0);
// copy the contents of the canvas
var texData = some2dCanvasCtx.getImageData(0, 0, img.width, img.height);
Now if you have a UV coord you can just look it up
var tx = Math.min(emod(u, 1) * texData.width | 0, texData.width - 1);
var ty = Math.min(emod(v, 1) * texData.height | 0, texData.height - 1);
var offset = (ty * texData.width + tx) * 4;
var r = texData.data[offset + 0];
var g = texData.data[offset + 1];
var b = texData.data[offset + 2];
var a = texData.data[offset + 3];
// this is only needed if your UV coords are < 0 or > 1
// if you're using CLAMP_TO_EDGE then you'd instead want to
// clamp the UVs to 0 to 1.
function emod(n, m) {
return ((n % m) + m) % m;
}
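Tying that to the raycaster hit mentioned in the question, the lookup might be used like this (a sketch; it assumes texData was built as above from the intersected object's texture image, and ignores any flipY on the texture):
var hit = raycaster.intersectObjects(scene.children)[0];
var uv = hit.uv; // THREE.Vector2, usually in the 0..1 range
var tx = Math.min(emod(uv.x, 1) * texData.width | 0, texData.width - 1);
var ty = Math.min(emod(uv.y, 1) * texData.height | 0, texData.height - 1);
var offset = (ty * texData.width + tx) * 4;
var rgba = [texData.data[offset], texData.data[offset + 1],
            texData.data[offset + 2], texData.data[offset + 3]];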
Otherwise you can ask WebGL for the color of the texture. Use tx and ty from above. See this answer.

Drawing a circle with triangles WebGL

I'm new to WebGL and was trying to draw a circle with TRIANGLE_FAN.
I set up the variables
var pi = 3.14159;
var x = 2*pi/100;
var y = 2*pi/100;
var r = 0.05;
points = [ vec2(0.4, 0.8) ]; //establish origin
And then drew the circle using this for loop.
for(var i = 0.4; i < 100; i++){
    points.push(vec2(r*Math.cos(x*i), r*Math.sin(y*i)));
    points.push(vec2(r*Math.cos(x*(i+1)), r*Math.sin(y*(i+1))));
}
The issue is that I am actually pushing the second point again when i increases, which I don't want to do.
Also, the image below is what is drawn :/
I don't have enough reputation to comment on mlkn's answer, but I think there was one piece he was missing. Here's how I ended up using his example:
vec2 center = vec2(cX, cY);
points.push(center);
for (i = 0; i <= 200; i++){
points.push(center + vec2(
r*Math.cos(i*2*Math.PI/200),
r*Math.sin(i*2*Math.PI/200)
));
}
Otherwise, if the 200 supplied at the start of the loop is only a fraction of the 200 used in the calculation (r*Math.cos(i*2*Math.PI/200)), then only that fraction of the circle will be drawn. Also, without adding i into the calculation inside the loop, the points are all the same value, resulting in a line.
Using a triangle fan you don't need to duplicate vertices: WebGL will form triangles ABC, ACD and ADE from an [A,B,C,D,E] array in TRIANGLE_FAN mode.
Also, you don't take into account the center of your circle. And I can't see why i = 0.4.
Here is a corrected version of your code:
vec2 center = vec2(cX, cY);
points.push(center);
for (i = 0; i <= 100; i++){
points.push(center + vec2(
r*Math.cos(i * 2 * Math.PI / 200),
r*Math.sin(i * 2 * Math.PI / 200)
));
}
Also, if you want to draw a circle you can often just draw one triangle (or a single gl.POINTS point) and discard the pixels that fall outside the circle in the fragment shader.
Both Ramil's and Nick's answers helped me a lot; I would like to add a point here.
For anyone who might be confused about why almost every circle-generation example has this step
i*2*Math.PI/200 ---> (i*2*Math.PI/someNumber)
and why the loop goes from 0 to 200 ---> again 0 to someNumber, here is how it works. A complete circle spans from 0 to 2*Math.PI, and to draw it from points we want plenty of them, otherwise there will be gaps between the points along the edge. So we divide the full angle into intervals by some number, effectively giving more points to plot. Say we need to divide the interval from 0 to 2*PI into 800 points; we do this by:
const totalPoints = 800;
for (let i = 0; i <= totalPoints; i++) {
    const angle = 2 * Math.PI * i / totalPoints;
    const x = startX + radius * Math.cos(angle);
    const y = startY + radius * Math.sin(angle);
    vertices.push(x, y);
}
Since the loop goes from 0 to 800 inclusive, the last angle equals 2*Math.PI*800/800, i.e. the end of the interval [0, 2*PI], which closes the circle.
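Putting it together, a minimal plain-WebGL sketch of feeding those fan vertices to drawArrays (centerX, centerY, radius and the position attribute / program setup are assumed, as in the snippets above):
var totalPoints = 200;
var vertices = [centerX, centerY]; // fan center goes first
for (var i = 0; i <= totalPoints; i++) {
    var angle = 2 * Math.PI * i / totalPoints;
    vertices.push(centerX + radius * Math.cos(angle),
                  centerY + radius * Math.sin(angle));
}

gl.bindBuffer(gl.ARRAY_BUFFER, gl.createBuffer());
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(vertices), gl.STATIC_DRAW);
// ...bind your shader's position attribute to this buffer here...
gl.drawArrays(gl.TRIANGLE_FAN, 0, vertices.length / 2);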
