WebGL Error - Error: WebGL: bindBuffer: buffer already contains element data - javascript

I am trying to draw a simple circle using WebGL but am getting a few errors. I am very new to writing WebGL code and would love if anyone could explain this to me and what the problem is.
I can create a simple square using the same code but with 5 vertices and this works perfectly. But when I try to create an array using this method, it doesn't seem to like it. I am sorry if it is a trivial mistake but an explanation would be very helpful.
Thank you in advance.
Error: WebGL: bindBuffer: buffer already contains element data. webgl-debug.js:232:20
Error: WebGL: vertexAttribPointer: invalid element size webgl-debug.js:232:20
TypeError: value is undefined
These are shown in the console. Here is the code I am using.
function setupBuffers() {
    // Setup the circle vertices
    circleVertexBuffer = gl.createBuffer();
    gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, circleVertexBuffer);
    var r = 0.2;
    var centre = 0;
    var circleVertices = [];
    var z = 0;
    theta = 178;
    circleVertices.push(centre);
    circleVertices.push(r);
    circleVertices.push(z);
    for(var i = 0; i < theta; i++){
        var rads2deg = i * (Math.PI/180);
        var x = r * Math.cos(rads2deg);
        var y = r * Math.sin(rads2deg);
        circleVertices.push(x);
        circleVertices.push(y);
        circleVertices.push(z);
    }
    circleVertices.push(centre);
    circleVertices.push(r);
    circleVertices.push(z);
    console.log(circleVertices);
    gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(circleVertices), gl.STATIC_DRAW);
    circleVertices.itemSize = 3;
    circleVertices.numberOfItems = circleVertices.length/circleVertices.itemSize;
}
function draw() {
    // Set up a viewport that is the same as the canvas using viewport(int x, int y, sizei w, sizei h),
    // where x and y give the x and y window coordinates of the viewport's width and height.
    gl.viewport(0, 0, gl.viewportWidth, gl.viewportHeight);
    // Fill the canvas with a solid colour. Default is black. If another color is desirable, use gl.clearColor(r, g, b, a)
    gl.clear(gl.COLOR_BUFFER_BIT);
    gl.bindBuffer(gl.ARRAY_BUFFER, circleVertexBuffer);
    gl.vertexAttrib4f(shaderProgram.vertexColorAttribute, 1.0, 1.0, 1.0, 1.0);
    gl.vertexAttribPointer(shaderProgram.vertexPositionAttribute, circleVertexBuffer.itemSize, gl.FLOAT, false, 0, 0);
    gl.drawArrays(gl.LINE_STRIP, 0, circleVertexBuffer.numberOfItems);
}

The problem is this line
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, circleVertexBuffer);
There are 2 types of buffers in WebGL.
ELEMENT_ARRAY_BUFFER buffers
These buffers hold indices for gl.drawElements.
ARRAY_BUFFER buffers
These buffers hold attribute data (positions, normals, texcoords, etc.).
When you create a buffer with gl.createBuffer it doesn't have a buffer type yet. The first time you bind that buffer with gl.bindBuffer it becomes whatever type of buffer you bound it to. If you bind it to ARRAY_BUFFER it's now an ARRAY_BUFFER buffer. If you bind it to ELEMENT_ARRAY_BUFFER it's now an ELEMENT_ARRAY_BUFFER buffer. Once it becomes one of those types you cannot change its type or use it for the other type.
So, in your code you do this
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, circleVertexBuffer);
Which makes circleVertexBuffer an ELEMENT_ARRAY_BUFFER type buffer. But then in draw you have this
gl.bindBuffer(gl.ARRAY_BUFFER, circleVertexBuffer);
The buffer can't be both types. Change the first one in setupBuffers to
gl.bindBuffer(gl.ARRAY_BUFFER, circleVertexBuffer);
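For reference, a minimal sketch of the corrected setup, assuming the rest of setupBuffers stays exactly as in the question:
function setupBuffers() {
    circleVertexBuffer = gl.createBuffer();
    // Bind as ARRAY_BUFFER: this buffer holds vertex positions, not indices
    gl.bindBuffer(gl.ARRAY_BUFFER, circleVertexBuffer);
    // ... build circleVertices exactly as before ...
    gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(circleVertices), gl.STATIC_DRAW);
}
With that change the buffer's type matches the gl.bindBuffer(gl.ARRAY_BUFFER, ...) call in draw, so the "buffer already contains element data" error goes away.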
You might find this answer helpful
https://stackoverflow.com/a/27164577/128511

Related

WebGL is not rendering 3d objects properly

I have a generator of 3d objects in a canvas context.
The rendering is carried out with the painter's algorithm. However, I need a more accurate approach for my project.
Therefore, I have implemented a WebGL renderer. The idea is to transform the objects generated in the canvas context to the WebGL context (I have two overlaid canvases for this purpose) in order to render them accurately with regard to the HSA (hidden surface algorithm) problem.
I have a function that transforms the canvas coordinates to clip coordinates, and that sets up the different elements and peculiarities required by WebGL, basically, the function prepares the 3d objects to be rendered by WebGL.
As I also need the segments of the shapes to be rendered, my approach consists of creating a buffer in which all the coordinates of all the objects are stored, together with the coordinates of the segments of these objects, which will be represented and drawn as thin 3d squares (that is, each line will be formed by 2 thin coupled-up triangles, whose vertices and positions will be determined beforehand in canvas coordinates).
So far I am only testing and have coded the rendering of the shapes without lines. The problem is, it is not working properly. Triangles are drawn wrongly, with points and positions that either do not exist in the buffer or are mistaken.
Here is what should be drawn: (this cube is rendered in the canvas with the painter's algorithm)
Here is the colored silhouette that is drawn, however:
If only drawing the face 0 with LINE_LOOP
soon things get pretty messed up, face 0 + face 1 (face 1 is obviously being mistakenly drawn)
3 faces
Things get worse with more complicated objects (notice that this one is not perfectly rendered either)
I do not really know what is happening. My knowledge of WebGL and of 3d graphics in general is pretty limited, not to say nonexistent. I do not have a background in computer science or any IT-related domain either. I just need a function to render my JavaScript 3d objects properly for a personal project. Here is the code that I am using:
function webglPrepare(escena){ // Takes canvas 3d objects as inputs, and outputs the arrays required by WebGL: vertices, indices, and colors
    var zprep = 2000;
    var nbVertices; var FacesHSA = []; var cont = 0; var zmean = []; var temp = []; var vertices = []; var cont2 = 0; var indices = []; var sumer = 0; var controlador = 0;
    var colors = []; var cont3 = 0; var prueba = 0; // These variables are irrelevant
    object_for: for(var i = 0; i < escena.length; i++){ // this is irrelevant
        face_for: for(var a = 0; a < escena[i].arrayObjetos.length; a++){ // Just looping over all the objects
            check_loop: for(var ff = 0; ff < escena[i].arrayObjetos[a].faces.length; ff++){ // for each face of the object
                if(escena[i].arrayObjetos[a].faces[ff].vertices.length == 3){ // If the face has 3 vertices, then
                    indices[cont2] = sumer; cont2 = cont2 + 1; sumer = sumer + 1; // We set up the indices array, which will store the indices to form the triangles, needed by WebGL
                    indices[cont2] = sumer; cont2 = cont2 + 1; sumer = sumer + 1;
                    indices[cont2] = sumer; cont2 = cont2 + 1; sumer = sumer + 1;
                }
                else if(escena[i].arrayObjetos[a].faces[ff].vertices.length == 4){ // The same, but if the face has 4 vertices. I do not have faces longer than that
                    indices[cont2] = sumer; cont2 = cont2 + 1; sumer = sumer + 1;
                    indices[cont2] = sumer; cont2 = cont2 + 1; sumer = sumer + 1;
                    indices[cont2] = sumer; cont2 = cont2 + 1;
                    indices[cont2] = indices[cont2 - 3]; cont2 = cont2 + 1;
                    indices[cont2] = indices[cont2 - 2]; cont2 = cont2 + 1; sumer = sumer + 1;
                    indices[cont2] = sumer; sumer = sumer + 1; cont2 = cont2 + 1;
                }
                for (var j = 0; j < (nbVertices = escena[i].arrayObjetos[a].faces[ff].vertices.length); j++) { // For each vertex of the face.
                    vertices[cont] = escena[i].arrayObjetos[a].vertices[j].x / gl.canvas.width * 2 - 1; // The x coordinate is transformed to clip coordinates
                    cont = cont + 1;
                    vertices[cont] = escena[i].arrayObjetos[a].faces[ff].vertices[j].y / gl.canvas.height * -2 + 1; // Same with the y coordinate
                    cont = cont + 1;
                    vertices[cont] = escena[i].arrayObjetos[a].faces[ff].vertices[j].z; // Same with the z coordinate. zprep is an arbitrary zMax value
                    // used to carry out the transformation
                    if(vertices[cont] >= 0){ if(vertices[cont] > zprep){alert("error en Z, es mayor");} vertices[cont] = vertices[cont] / zprep; }
                    else if(vertices[cont] <= 0){ if(vertices[cont] > zprep){alert("error en Z, es menor");} vertices[cont] = -(vertices[cont] / zprep); }
                    // Supposedly -z is the nearest and +z the ...
                    cont = cont + 1; // The colours are also prepared
                    //cont=cont+3;
                    colors[cont3] = 0;
                    colors[cont3 + 1] = 0;
                    colors[cont3 + 2] = 0;
                    cont3 = cont3 + 3;
                }
            }
        }
    }
    webgl2(vertices, indices, colors); // Once everything is ready, we call the WebGL renderer
}
The webgl2 function (which does the rendering)
function webgl2(vertices, indices, colors){
    // Create and store data into vertex buffer
    var vertex_buffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, vertex_buffer);
    gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(vertices), gl.STATIC_DRAW);
    // Create and store data into color buffer
    var color_buffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, color_buffer);
    gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(colors), gl.STATIC_DRAW);
    // Create and store data into index buffer
    var index_buffer = gl.createBuffer();
    gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, index_buffer);
    gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, new Uint16Array(indices), gl.STATIC_DRAW);

    /*=================== SHADERS =================== */
    var vertCode = 'attribute vec3 position;'+
        'uniform mat4 Pmatrix;'+
        'uniform mat4 Vmatrix;'+
        'uniform mat4 Mmatrix;'+
        'attribute vec3 color;'+ // the color of the point
        'varying vec3 vColor;'+
        'void main(void) { '+ // pre-built function
        'gl_Position = Pmatrix*Vmatrix*Mmatrix*vec4(position, 1.);'+
        'vColor = color;'+
        '}';
    var fragCode = 'precision mediump float;'+
        'varying vec3 vColor;'+
        'void main(void) {'+
        'gl_FragColor = vec4(vColor, 1.);'+
        '}';
    var vertShader = gl.createShader(gl.VERTEX_SHADER);
    gl.shaderSource(vertShader, vertCode);
    gl.compileShader(vertShader);
    var fragShader = gl.createShader(gl.FRAGMENT_SHADER);
    gl.shaderSource(fragShader, fragCode);
    gl.compileShader(fragShader);
    var shaderprogram = gl.createProgram();
    gl.attachShader(shaderprogram, vertShader);
    gl.attachShader(shaderprogram, fragShader);
    gl.linkProgram(shaderprogram);

    /*======== Associating attributes to vertex shader =====*/
    var _Pmatrix = gl.getUniformLocation(shaderprogram, "Pmatrix");
    var _Vmatrix = gl.getUniformLocation(shaderprogram, "Vmatrix");
    var _Mmatrix = gl.getUniformLocation(shaderprogram, "Mmatrix");
    gl.bindBuffer(gl.ARRAY_BUFFER, vertex_buffer);
    var _position = gl.getAttribLocation(shaderprogram, "position");
    gl.vertexAttribPointer(_position, 3, gl.FLOAT, false, 0, 0);
    gl.enableVertexAttribArray(_position);
    gl.bindBuffer(gl.ARRAY_BUFFER, color_buffer);
    var _color = gl.getAttribLocation(shaderprogram, "color");
    gl.vertexAttribPointer(_color, 3, gl.FLOAT, false, 0, 0);
    gl.enableVertexAttribArray(_color);
    gl.useProgram(shaderprogram);

    var proj_matrix = get_projection(40, canvas.width/canvas.height, 1, 100); // The parameters inserted here are not used.
    // Right now, get_projection returns an identity matrix
    var mo_matrix = [ 1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1 ];
    var view_matrix = [ 1,0,0,0, 0,1,0,0, 0,0,1,0, 0,0,0,1 ];

    gl.enable(gl.DEPTH_TEST);
    // gl.depthFunc(gl.LEQUAL);
    gl.clearColor(0.5, 0.5, 0.5, 0.9);
    // gl.clearDepth(1.0);
    gl.viewport(0.0, 0.0, canvas.width, canvas.height);
    // gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
    gl.uniformMatrix4fv(_Pmatrix, false, proj_matrix);
    gl.uniformMatrix4fv(_Vmatrix, false, view_matrix);
    gl.uniformMatrix4fv(_Mmatrix, false, mo_matrix);
    gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, index_buffer);
    // gl.drawElements(gl.LINE_LOOP, indices.length, gl.UNSIGNED_SHORT, 0);
    gl.drawElements(gl.TRIANGLES, indices.length, gl.UNSIGNED_SHORT, 0);
    // gl.drawArrays(gl.TRIANGLES, 0, indices.length);
    // gl.drawArrays(gl.LINE_LOOP, 0, vertices.length/3);
    // gl.drawArrays(gl.LINE_LOOP, 0, 56);
}
The get_projection function:
function get_projection(angle, a, zMin, zMax) {
    var ang = Math.tan((angle*.5)*Math.PI/180); // angle*.5
    /* return [
        0.5/ang, 0, 0, 0,
        0, 0.5*a/ang, 0, 0,
        0, 0, -(zMax+zMin)/(zMax-zMin), -1,
        0, 0, (-2*zMax*zMin)/(zMax-zMin), 0
    ]; */
    return [
        1, 0, 0, 0,
        0, 1, 0, 0,
        0, 0, 1, 0,
        0, 0, 0, 1
    ];
}
It is probably due to several errors rather than just one; I just can't find any of them.

Unable to render to framebuffer (texture)

I'm trying to implement shadows using shadow maps, so I need to render a scene to a separate framebuffer (texture). I cannot get it to work properly, so after stripping down my codebase I'm left with a relatively simple set of instructions which should render a scene to a texture, and then simply render the texture.
The program consists of two shader programs:
Ground program
Teapot program
The first should render a rectangle with a certain texture. The second one should render a teapot (with colors based on its position). Each render step does the following (well, that's the idea anyway):
Switch to framebuffer
Render teapot
Switch to normal buffer
Render teapot
Render ground
Now, the ground fragment shader looks like:
gl_FragColor = texture2D(shadowMap, fTexCoord);
'shadowMap' is the texture I render to in step 2. I expect to see a floating teapot with a rectangle drawn under it. That indeed works. Now, I also expect the 'ground' to contain a teapot. After all, we rendered the scene we are looking at, minus the ground, to the framebuffer/texture.
Code
var UNSIGNED_SHORT_SIZE = 2;

// Variables filled by setup()
var glCanvas;
var gl, teapotProgram, groundProgram;
var vBuffer, iBuffer, fBuffer;
var vertices, indices, textures;
var teapot = null;
var model;
var view;
var light;
var projection;

var BASE_URL = "https://hmbastiaan.nl/martijn/webgl/W08P02_SO/";
var WIDTH = 150, HEIGHT = 150;

function makeTeapot(){
    var drawingInfo = teapot.getDrawingInfoObjects();
    var indices = drawingInfo.indices;
    for(var i = 0; i < indices.length; i++){
        indices[i] += 4; // Add offset for 'ground'
    }
    return {
        indices: drawingInfo.indices,
        vertices: drawingInfo.vertices
    }
}
function makeRectangle(x1, x2, y1, y2, z1, z2){
    var x1 = -2,
        x2 = 2,
        y1 = -1,
        y2 = -1,
        z1 = -1,
        z2 = -5;
    var vertices = [
        vec4(x1, y2, z1, 1),
        vec4(x2, y1, z1, 1),
        vec4(x2, y1, z2, 1),
        vec4(x1, y2, z2, 1)
    ];
    var textures = [
        vec2(-1.0, -1.0),
        vec2( 1.0, -1.0),
        vec2( 1.0,  1.0),
        vec2(-1.0,  1.0)
    ];
    var indices = [
        0, 1, 2,
        0, 2, 3
    ];
    return {
        indices: indices,
        vertices: vertices,
        textures: textures
    }
}
function resetBuffers(){
    vertices = [];
    indices = [];
    textures = [];
    // Add rectangle
    var rectangle = makeRectangle();
    Array.prototype.push.apply(vertices, rectangle.vertices);
    Array.prototype.push.apply(indices, rectangle.indices);
    Array.prototype.push.apply(textures, rectangle.textures);
    // Add teapot
    var teapot = makeTeapot();
    Array.prototype.push.apply(vertices, teapot.vertices);
    Array.prototype.push.apply(indices, teapot.indices);
    console.log(vertices);
    console.log(indices);
    console.log(textures);
    // Send to GPU
    gl.bindBuffer(gl.ARRAY_BUFFER, vBuffer);
    gl.bufferData(gl.ARRAY_BUFFER, flatten(vertices), gl.STATIC_DRAW);
    gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, iBuffer);
    gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, new Uint16Array(indices), gl.STATIC_DRAW);
}
function setup(){
    $.get(BASE_URL + "teapot.obj", function(teapot_obj_data){
        teapot = new OBJDoc(BASE_URL + "teapot.obj");
        if(!teapot.parse(teapot_obj_data, 1)){
            alert("Parsing teapot.obj failed.");
            return;
        }
        setup2();
    }).fail(function(){
        alert("Getting teapot.obj failed.");
    });
}
function setup2(){
    glCanvas = document.getElementById("gl-canvas");
    gl = WebGLUtils.setupWebGL(glCanvas, {stencil: true, alpha: false});
    gl.viewport(0, 0, WIDTH, HEIGHT);
    teapotProgram = initShaders(gl, BASE_URL + "vshader-teapot.glsl", BASE_URL + "fshader-teapot.glsl");
    groundProgram = initShaders(gl, BASE_URL + "vshader-ground.glsl", BASE_URL + "fshader-ground.glsl");
    light = vec3(0.0, 2.0, -2.0);
    view = lookAt(vec3(0, 0, 3), vec3(0, 0, 0), vec3(0, 1, 0));
    projection = perspective(45, 1.0, 1, 100.0);

    // Get teapot uniforms
    gl.useProgram(teapotProgram);
    teapotProgram.modelLoc = gl.getUniformLocation(teapotProgram, "Model");
    teapotProgram.viewLoc = gl.getUniformLocation(teapotProgram, "View");
    teapotProgram.projectionLoc = gl.getUniformLocation(teapotProgram, "Projection");

    // Upload uniforms
    gl.uniformMatrix4fv(teapotProgram.projectionLoc, false, flatten(projection));
    gl.uniformMatrix4fv(teapotProgram.viewLoc, false, flatten(view));
    gl.uniformMatrix4fv(teapotProgram.modelLoc, false, flatten(scalem(0.25, 0.25, 0.25)));

    // Get teapot attributes
    teapotProgram.vPosition = gl.getAttribLocation(teapotProgram, "vPosition");

    // Get ground uniforms
    gl.useProgram(groundProgram);
    groundProgram.modelLoc = gl.getUniformLocation(groundProgram, "Model");
    groundProgram.viewLoc = gl.getUniformLocation(groundProgram, "View");
    groundProgram.projectionLoc = gl.getUniformLocation(groundProgram, "Projection");
    groundProgram.shadowMap = gl.getUniformLocation(groundProgram, "shadowMap");

    // Get ground attributes
    groundProgram.vTexCoord = gl.getAttribLocation(groundProgram, "vTexCoord");
    groundProgram.vPosition = gl.getAttribLocation(groundProgram, "vPosition");

    // Allocate and fill vertices buffer
    vBuffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, vBuffer);
    gl.vertexAttribPointer(teapotProgram.vPosition, 4, gl.FLOAT, false, 0, 0);
    gl.enableVertexAttribArray(teapotProgram.vPosition);
    gl.vertexAttribPointer(groundProgram.vPosition, 4, gl.FLOAT, false, 0, 0);
    gl.enableVertexAttribArray(groundProgram.vPosition);

    // Allocate indices buffer
    iBuffer = gl.createBuffer();

    // Setup FBO
    fBuffer = gl.createFramebuffer();
    gl.bindFramebuffer(gl.FRAMEBUFFER, fBuffer);
    fBuffer.renderbuffer = gl.createRenderbuffer();
    gl.bindRenderbuffer(gl.RENDERBUFFER, fBuffer.renderbuffer);
    gl.renderbufferStorage(gl.RENDERBUFFER, gl.DEPTH_COMPONENT16, 512, 512);
    fBuffer.texture = gl.createTexture();
    gl.activeTexture(gl.TEXTURE0);
    gl.bindTexture(gl.TEXTURE_2D, fBuffer.texture);
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 512, 512, 0, gl.RGBA, gl.UNSIGNED_BYTE, null);
    gl.generateMipmap(gl.TEXTURE_2D);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST_MIPMAP_LINEAR);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
    gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, gl.TEXTURE_2D, fBuffer.texture, 0);
    gl.framebufferRenderbuffer(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.RENDERBUFFER, fBuffer.renderbuffer);

    // Sanity checking: framebuffer seems to throw no errors
    if (!gl.isFramebuffer(fBuffer)) {
        throw("Invalid framebuffer");
    }
    var status = gl.checkFramebufferStatus(gl.FRAMEBUFFER);
    switch (status) {
        case gl.FRAMEBUFFER_COMPLETE:
            break;
        case gl.FRAMEBUFFER_INCOMPLETE_ATTACHMENT:
            throw("Incomplete framebuffer: FRAMEBUFFER_INCOMPLETE_ATTACHMENT");
            break;
        case gl.FRAMEBUFFER_INCOMPLETE_MISSING_ATTACHMENT:
            throw("Incomplete framebuffer: FRAMEBUFFER_INCOMPLETE_MISSING_ATTACHMENT");
            break;
        case gl.FRAMEBUFFER_INCOMPLETE_DIMENSIONS:
            throw("Incomplete framebuffer: FRAMEBUFFER_INCOMPLETE_DIMENSIONS");
            break;
        case gl.FRAMEBUFFER_UNSUPPORTED:
            throw("Incomplete framebuffer: FRAMEBUFFER_UNSUPPORTED");
            break;
        default:
            throw("Incomplete framebuffer: " + status);
    }

    // Set ground textures
    gl.uniform1i(groundProgram.shadowMap, 0);

    // Upload uniforms
    gl.uniformMatrix4fv(groundProgram.projectionLoc, false, flatten(projection));
    gl.uniformMatrix4fv(groundProgram.viewLoc, false, flatten(view));
    gl.uniformMatrix4fv(groundProgram.modelLoc, false, flatten(mat4()));

    // Restore default buffers
    gl.bindTexture(gl.TEXTURE_2D, null);
    gl.bindRenderbuffer(gl.RENDERBUFFER, null);
    gl.bindFramebuffer(gl.FRAMEBUFFER, null);

    // Set background colour
    gl.clearColor(0.3921, 0.5843, 0.9294, 1.0);
    gl.enable(gl.DEPTH_TEST);
    gl.enable(gl.CULL_FACE);

    resetBuffers();
    window.requestAnimationFrame(render);
}
function render(){
    var teapot = makeTeapot();

    gl.useProgram(teapotProgram);
    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT | gl.STENCIL_BUFFER_BIT);

    // Switch to framebuffer
    gl.bindFramebuffer(gl.FRAMEBUFFER, fBuffer);

    // Draw teapot
    teapot = makeTeapot();
    gl.drawElements(gl.TRIANGLES, teapot.indices.length, gl.UNSIGNED_SHORT, 6 * UNSIGNED_SHORT_SIZE);

    // Set framebuffer to default buffer (in-browser output)
    gl.bindFramebuffer(gl.FRAMEBUFFER, null);

    // Draw ground
    gl.useProgram(groundProgram);
    gl.drawElements(gl.TRIANGLES, 6, gl.UNSIGNED_SHORT, 0);

    // Render teapot
    gl.useProgram(teapotProgram);
    gl.drawElements(gl.TRIANGLES, teapot.indices.length, gl.UNSIGNED_SHORT, 6 * UNSIGNED_SHORT_SIZE);
}

setup();
<div>
<br/>
<canvas width="150" height="150" id="gl-canvas">Sorry :|</canvas>
</div>
<script type='text/javascript' src="https://code.jquery.com/jquery-2.1.4.min.js"></script>
<script type='text/javascript' src="https://hmbastiaan.nl/martijn/webgl/angel/webgl-utils.js"></script>
<script type='text/javascript' src="https://hmbastiaan.nl/martijn/webgl/angel/initShaders2.js"></script>
<script type='text/javascript' src="https://hmbastiaan.nl/martijn/webgl/angel/MV.js"></script>
<script type='text/javascript' src="https://hmbastiaan.nl/martijn/webgl/angel/objParser.js"></script>
Functions of interest:
setup2(): sets up all the buffers and uniforms.
render(): renders the scene.
Disclaimer: this is for an assignment, although this code is simplified enough to not look like the original assignment at all :).
At a glance there are several issues.
Texture bindings are global. Since in setup2 you unbind the one texture, that means it's never used.
You need to bind whatever textures are needed before each draw call. In other words when you draw the ground you need to bind the teapot texture as in
gl.bindTexture(gl.TEXTURE_2D, fBuffer.texture);
Note: This is an oversimplification of what's really needed. You really need to
Choose a texture unit to bind the texture to
var unit = 5;
gl.activeTexture(gl.TEXTURE0 + unit);
Bind the texture to that unit.
gl.bindTexture(gl.TEXTURE_2D, fBuffer.texture);
Set the uniform sampler to that texture unit
gl.uniform1i(groundProgram.shadowMap, unit);
The reason you don't need those extra steps is that (a) you only have 1 texture, so you're using texture unit #0, the default, and (b) uniforms default to 0, so shadowMap is looking at texture unit #0.
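Putting those three steps together, a minimal sketch of what the ground draw could look like with an explicit texture unit (unit 5 just follows the example above; any free unit works):
// Before drawing the ground: make the render-target texture visible to the ground shader
gl.useProgram(groundProgram);
var unit = 5;                                     // any free texture unit
gl.activeTexture(gl.TEXTURE0 + unit);             // select that unit
gl.bindTexture(gl.TEXTURE_2D, fBuffer.texture);   // bind the teapot render target to it
gl.uniform1i(groundProgram.shadowMap, unit);      // point the sampler uniform at that unit
gl.drawElements(gl.TRIANGLES, 6, gl.UNSIGNED_SHORT, 0);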
Because you've made a mipmapped texture, just rendering to level 0 will not update the mips.
In other words, after you render the teapot you'll have a teapot in mip level 0, but mip levels 1, 2, 3, 4, 5, etc. will still have nothing in them. You need to call
gl.generateMipmap(gl.TEXTURE_2D)
for that texture after you've rendered the teapot to it. Either that or stop using mips.
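A minimal sketch of the two options, reusing the names from the question's setup2:
// Option 1: keep the mipmapped texture, but regenerate the mips after rendering into level 0
gl.bindFramebuffer(gl.FRAMEBUFFER, fBuffer);
// ... draw the teapot into the texture ...
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
gl.bindTexture(gl.TEXTURE_2D, fBuffer.texture);
gl.generateMipmap(gl.TEXTURE_2D);

// Option 2: don't use mips at all, so only level 0 is ever sampled
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);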
You need to set the viewport every time you call gl.bindFramebuffer.
gl.bindFramebuffer should almost always be followed by a call to gl.viewport to make the viewport match the size of the thing you're rendering to
gl.bindFramebuffer(gl.FRAMEBUFFER, fb);
// set to size of fb
gl.viewport(0, 0, widthOfFb, heightOfFb);
renderSomething();
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
// set to size of canvas's drawingBuffer
gl.viewport(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight);
Attribute settings are global
You set up the teapot attributes. Then you draw a teapot to the texture. You then draw the ground, but you're still using the teapot attributes.
Just like textures, you need to set up attributes before each draw call.
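A rough sketch of what that means for the ground draw, reusing the buffers and attribute locations from setup2:
// Point the attributes at the right buffers for the program about to be used
gl.useProgram(groundProgram);
gl.bindBuffer(gl.ARRAY_BUFFER, vBuffer);
gl.vertexAttribPointer(groundProgram.vPosition, 4, gl.FLOAT, false, 0, 0);
gl.enableVertexAttribArray(groundProgram.vPosition);
// (any other attribute the ground program uses, e.g. vTexCoord, needs the same treatment)
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, iBuffer);
gl.drawElements(gl.TRIANGLES, 6, gl.UNSIGNED_SHORT, 0);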
I'm also guessing you really should not be calling makeTeapot in your render function but instead it should be called in setup.
You might find this article useful
You should also consider not putting properties on WebGL objects as it's arguably an anti-pattern.
Also synchronous XHR requests are not cool. You're getting this message in the JavaScript console
Synchronous XMLHttpRequest on the main thread is deprecated because of its detrimental effects to the end user's experience. For more help, check http://xhr.spec.whatwg.org/.

webgl how to draw many cubes

I am trying to draw 5161 cubes using WebGL. The problem is that not all cubes are drawn. Upon some searching, I think it's because I am passing too many vertices in one VBO call. You can take a look at the jsfiddle here: http://jsfiddle.net/n5fjhe21/. You can move around with QWERASDF and the arrow keys, but it isn't well implemented right now.
My draw call used to look like this:
function render(){
    gl.uniformMatrix4fv(u_matrixLoc, false, new Float32Array(pMatrix));
    gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
    gl.drawElements(gl.TRIANGLES, data.triangles.length, gl.UNSIGNED_SHORT, 0);
}
So what I would do is call data.pushData() once and then render as needed; it was fast. glObjects is an array of Cubes.
data.pushData = function(){
    // pushData once then call drawElements on every render call doesn't work as I hit some kind of limit;
    // not all cubes are drawn; I think the draw calls must be split up;
    data.vertices = [];
    data.uv = [];
    data.triangles = [];
    var vertexOffset = 0;
    glObjects.forEach(function pushingObject(o){
        data.vertices.push.apply(data.vertices, o.vertices);
        data.uv.push.apply(data.uv, o.uv);
        o.triangles.forEach(function pushingTriangles(index){
            data.triangles.push(index + vertexOffset);
        });
        vertexOffset += o.vertices.length/3; // change to component length later
    });
    gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
    gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(data.vertices), gl.DYNAMIC_DRAW);
    gl.bindBuffer(gl.ARRAY_BUFFER, uvBuffer);
    gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(data.uv), gl.STATIC_DRAW);
    gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, triangleBuffer);
    gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, new Uint16Array(data.triangles), gl.DYNAMIC_DRAW);
};
But the problem (I think) is that I am passing in too many vertices at once. So I tried to merge pushData and render together:
data.render = function(){
    data.vertices = [];
    data.uv = [];
    data.triangles = [];
    var vertexOffset = 0;
    glObjects.forEach(function pushingObject(o){
        if (vertexOffset + o.vertices.length > 65536){
            vertexOffset = 0;
            glDraw();
            data.vertices.length = 0;
            data.uv.length = 0;
            data.triangles.length = 0;
        }
        data.vertices.push.apply(data.vertices, o.vertices);
        data.uv.push.apply(data.uv, o.uv);
        o.triangles.forEach(function pushingTriangles(index){
            data.triangles.push(index + vertexOffset);
        });
        vertexOffset += o.vertices.length/3; // change to component length later
    });
    glDraw();

    function glDraw(){
        gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
        gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(data.vertices), gl.STATIC_DRAW);
        gl.bindBuffer(gl.ARRAY_BUFFER, uvBuffer);
        gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(data.uv), gl.STATIC_DRAW);
        gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, triangleBuffer);
        gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, new Uint16Array(data.triangles), gl.STATIC_DRAW);
        gl.drawElements(gl.TRIANGLES, data.triangles.length, gl.UNSIGNED_SHORT, 0);
    }
};
But this isn't fast enough because, as I learnt, passing in new bufferData is slow. So my question is, what does one do in this situation? I was unable to locate any WebGL resource that deals with this. My feeling leans towards creating multiple VBO objects, but I want to make sure I am going in the right direction first. And as a follow-up question, suppose one needs to draw many cubes, all with unique positions (x,y,z) and orientations (rX,rY,rZ); how does one go about implementing it? Thanks in advance.
Ok I solved my problem and I'll leave this here for stragglers:
Basically, I had the right idea in that I need to use multiple draw calls, as each indexed draw (drawElements) can only refer to 2^16 elements in a VBO. The flaw in my first implementation is that I tried to reconstruct a new big typed array made of multiple cube vertices in every render call. Needless to say, that is very slow. So instead of that, I really should have only created the typed array/buffer once. To overcome the 2^16 element reference limitation, all I have to do is separate the one big typed array into manageable sizes, and this is exactly what this new version of pushData does:
data.pushData = function(){
    // ensure each vertex attribute has less than 2^16 vertices because that is how many can be referenced each time
    // with a gl.drawElements call
    function newChunk(){
        return {
            vertices: [],
            uv: [],
            triangles: []
        }
    }
    var chunk = newChunk();
    var vertexOffset = 0;
    glObjects.forEach(function pushingVerts(o){
        if (vertexOffset + o.vertices.length > 65536){
            vertexOffset = 0;
            data.chunks.push(chunk);
            chunk = newChunk();
        }
        chunk.vertices.push.apply(chunk.vertices, o.vertices);
        chunk.uv.push.apply(chunk.uv, o.uv);
        o.triangles.forEach(function pushingTriangles(index){
            chunk.triangles.push(index + vertexOffset);
        });
        vertexOffset += o.vertices.length/3; // change to component length later
    });
    data.chunks.push(chunk);
    data.chunks.forEach(function toTypeArray(c){
        c.vertices = new Float32Array(c.vertices);
        c.uv = new Float32Array(c.uv);
        c.triangles = new Uint16Array(c.triangles);
    });
    gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
    gl.bufferData(gl.ARRAY_BUFFER, sizeofFloat * 65536 * 3, gl.DYNAMIC_DRAW);
    gl.bindBuffer(gl.ARRAY_BUFFER, uvBuffer);
    gl.bufferData(gl.ARRAY_BUFFER, sizeofFloat * 65536 * 2, gl.DYNAMIC_DRAW);
    gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, triangleBuffer);
    gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, sizeofFloat * 65536, gl.DYNAMIC_DRAW);
    // for some reason only allocating sizeofUnsignedShort * 65536 is not enough.
    return data.chunks;
};
Then for rendering it's simply:
data.renderChunks = function(){
    data.chunks.forEach(function renderChunk(c){
        gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
        gl.bufferSubData(gl.ARRAY_BUFFER, 0, c.vertices);
        gl.bindBuffer(gl.ARRAY_BUFFER, uvBuffer);
        gl.bufferSubData(gl.ARRAY_BUFFER, 0, c.uv);
        gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, triangleBuffer);
        gl.bufferSubData(gl.ELEMENT_ARRAY_BUFFER, 0, c.triangles);
        gl.drawElements(gl.TRIANGLES, c.triangles.length, gl.UNSIGNED_SHORT, 0);
    });
};
Also I changed from using gl.bufferData to gl.bufferSubData to avoid the overhead of constructing a new buffer.
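In other words, the buffers are allocated once with a size, and each chunk then overwrites that storage in place; a minimal sketch of the pattern, using the names from the code above:
// Once, at setup time: allocate GPU storage by passing a size instead of data
gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
gl.bufferData(gl.ARRAY_BUFFER, sizeofFloat * 65536 * 3, gl.DYNAMIC_DRAW);

// Every frame, per chunk: overwrite the existing storage without reallocating
gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
gl.bufferSubData(gl.ARRAY_BUFFER, 0, c.vertices);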
And with this I can now draw 60,000 cubes (at least):
http://jsfiddle.net/n5fjhe21/1/

WebGL: Object moves according to current camera orientation

I created a simple scene with a cube moving parallel to the x-axis. Everything works as expected until I rotate the camera around the y-axis. Then the cube follows this rotation and moves parallel to the screen (x-axis in camera coordinates).
Again the initial setup:
Camera at [0, 2, 10] looking at [0, 0, 0]
Cube initially placed at [0, 0, 0], moving along the x-axis between [-10, 10]
Why does my camera movement affect the orientation of the cube?
Here is some of the relevant code. If you would like to see more, don't hesitate to ask. I am using glMatrix for vector and matrix operations.
Main drawing routine:
// Clear the canvas before we start drawing on it.
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
// Use the full window (minus border)
canvas.width = window.innerWidth - 16;
canvas.height = window.innerHeight - 16;
// Set viewport
gl.viewport(0, 0, canvas.width, canvas.height);
// Reset the perspective matrix
cam.aspectRatio = canvas.width / canvas.height;
mat4.perspective(perspectiveMatrix, cam.fovy, cam.aspectRatio, cam.nearPlane, cam.farPlane);
// Create the mvMatrix
mat4.lookAt(mvMatrix, cam.position, cam.poi, cam.up);
// Draw all objects
for (i = 0; i < ObjectStack.length; i++) {
    ObjectStack[i].draw();
}
Camera rotation:
// Rotation via yaw and pitch (FPS-style)
this.rotateYP = function (yaw, pitch) {
    // Rotation speed
    var rotSpeed = 0.5;
    yaw *= rotSpeed;
    pitch *= rotSpeed;
    // Update rotation
    var quatYaw = quat.create();
    quat.setAxisAngle(quatYaw, this.up, degToRad(yaw));
    var quatPitch = quat.create();
    quat.setAxisAngle(quatPitch, this.right, degToRad(pitch));
    var quatCombined = quat.create();
    quat.multiply(quatCombined, quatYaw, quatPitch);
    // Update camera vectors
    var tmp = vec3.create();
    vec3.subtract(tmp, this.poi, this.position);
    vec3.transformQuat(tmp, tmp, quatCombined);
    vec3.add(tmp, this.position, tmp);
    this.setPOI(tmp);
};
My setPOI() method (POI = point of interest):
this.setPOI = function (poi) {
    // Set new poi
    vec3.copy(this.poi, poi);
    // Set new view vector
    vec3.subtract(this.view, poi, this.position);
    vec3.normalize(this.view, this.view);
    // Set new right vector
    vec3.cross(this.right, this.view, [0.0, 1.0, 0.0]);
    vec3.normalize(this.right, this.right);
    // Set new up vector
    vec3.cross(this.up, this.right, this.view);
    vec3.normalize(this.up, this.up);
};
Object draw method for the cube:
this.draw = function () {
    // Save current mvMatrix
    mvPushMatrix();
    // Object movement
    mat4.translate(mvMatrix, mvMatrix, position);
    // Object rotation
    //mat4.mul(mvMatrix, mvMatrix, orientation);
    // Object scaling
    // ...
    // Set shader
    setShader();
    // Bind the necessary buffers
    gl.bindBuffer(gl.ARRAY_BUFFER, verticesBuffer);
    gl.vertexAttribPointer(positionAttribute, 3, gl.FLOAT, false, 0, 0);
    gl.bindBuffer(gl.ARRAY_BUFFER, normalsBuffer);
    gl.vertexAttribPointer(normalAttribute, 3, gl.FLOAT, false, 0, 0);
    gl.bindBuffer(gl.ARRAY_BUFFER, texCoordBuffer);
    gl.vertexAttribPointer(texCoordAttribute, 2, gl.FLOAT, false, 0, 0);
    gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, vertexIndexBuffer);
    // Set active texture
    gl.activeTexture(gl.TEXTURE0);
    gl.bindTexture(gl.TEXTURE_2D, cubeTexture);
    gl.uniform1i(gl.getUniformLocation(ShaderStack[shader], "uSampler"), 0);
    // Send the triangles to the graphics card for drawing
    gl.drawElements(gl.TRIANGLES, 36, gl.UNSIGNED_SHORT, 0);
    gl.bindTexture(gl.TEXTURE_2D, null);
    // Clean up the changed mvMatrix
    mvPopMatrix();
};
And finally the setShader() used above:
function setShader() {
    var shaderProgram = ShaderStack[shader];
    gl.useProgram(shaderProgram);
    var pUniform = gl.getUniformLocation(shaderProgram, "uPMatrix");
    gl.uniformMatrix4fv(pUniform, false, perspectiveMatrix);
    var mvUniform = gl.getUniformLocation(shaderProgram, "uMVMatrix");
    gl.uniformMatrix4fv(mvUniform, false, mvMatrix);
    var normalMatrix = mat4.create();
    mat4.invert(normalMatrix, mvMatrix);
    mat4.transpose(normalMatrix, normalMatrix);
    var nUniform = gl.getUniformLocation(shaderProgram, "uNormalMatrix");
    gl.uniformMatrix4fv(nUniform, false, normalMatrix);
    normalAttribute = gl.getAttribLocation(shaderProgram, "aVertexNormal");
    gl.enableVertexAttribArray(normalAttribute);
    positionAttribute = gl.getAttribLocation(shaderProgram, "aVertexPosition");
    gl.enableVertexAttribArray(positionAttribute);
    texCoordAttribute = gl.getAttribLocation(shaderProgram, "aTextureCoord");
    gl.enableVertexAttribArray(texCoordAttribute);
};
Sorry for posting all this code here. If you have any idea, please let me know!
I suspect you answered your question in your own question:
a simple scene with a cube moving parallel to the x-axis ... Then the cube follows this rotation and moves parallel to the screen (x-axis in camera coordinates).
Something like this happening leads me to believe that you applied the translation operation to your model-view matrix, not your model matrix, and from your code, I think I am right:
mat4.translate(mvMatrix, mvMatrix, position);
To fix this, you'll want to separate out your model and your view matrix, apply the translation to your model matrix, and then multiply the result by your view. Let me know how it goes!
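A minimal sketch of that separation, assuming glMatrix and the variables from the posted code (modelMatrix and viewMatrix are illustrative names, not from the original):
// View matrix: camera only
var viewMatrix = mat4.create();
mat4.lookAt(viewMatrix, cam.position, cam.poi, cam.up);

// Model matrix: the object's own movement, expressed in world space
var modelMatrix = mat4.create();
mat4.translate(modelMatrix, modelMatrix, position);

// Model-view = view * model, uploaded as uMVMatrix as before
var mvMatrix = mat4.create();
mat4.multiply(mvMatrix, viewMatrix, modelMatrix);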
If you're still confused by matrices, give this a read:
http://solarianprogrammer.com/2013/05/22/opengl-101-matrices-projection-view-model/

Float32Array not updating ArrayBuffer to reflect assigned values when interacting with WebGL (Chrome)

In order to manually pack and bind an RGBA float component texture to the GPU using WebGL in Chrome with the OES_texture_float extension, pixel component data must be stored in a Float32Array.
For example, for a simple 3 pixel texture, with 4 float components each, a plain JS array would first be declared:
var pixels = [1.01, 1.02, 1.03, 1.04, 2.01, 2.02, 2.03, 2.04, 3.01, 3.02, 3.03];
Then to convert the plain JS array to a strongly typed array of floats which can be fed to the GPU, we simply use the Float32Array constructor which can take a plain JS array of numbers as input:
pixels = new Float32Array(pixels);
Now that we have our texture represented as a strongly typed array of floats, we can feed it to the GPU using an already established WebGL context (which is working and beyond the scope of this question), using texImage2D:
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, 3, 1, 0, gl.RGBA, gl.FLOAT, pixels);
Making the appropriate render calls shows that these floats are being passed into, and back out of (by encoding an output float into the fragment color) the GPU without error (albeit a slight loss of precision due to conversions).
The Problem
Converting from a plain JS array to a Float32Array is actually a pretty expensive operation, and it's far quicker to manipulate already converted floats in the Float32Array - an operation that seems to be supported according to most JS specs floating around: https://developer.mozilla.org/en-US/docs/Web/API/Float32Array
Once established, you can reference elements in the array using the object's methods, or using standard array index syntax (that is, using bracket notation).
The problem occurs when:
a Float32Array is created with a plain JS array of preset values
we change one or many values in the Float32Array using [] notation, ie:
pixels[0] = 420.4;
pixels[1] = 420.4;
we pass the Float32Array to the GPU using texImage2D, and using the same method mentioned above shows that the initially set values of the Float32Array somehow made it to the GPU without the two values changed to 420.4
WTF?
My best guess is that because strongly typed arrays are (usually) internally represented as a buffer and a view, I'm updating the view, and the buffer is not reflecting the changes. Logging the Float32Array to the browser console shows that the two changed numbers in this case appear to indeed be changed. However, because the contents of an ArrayBuffer can't be read through the console in Chrome, it's a debugging dead end as far as my skillset goes.
Trying the same create, convert, update and check methodology (without the GPU involved) using a NodeJS REPL reveals that the buffer values are updated during the evaluation of pixels[0] = 420.4; and are not updated in a 'lazy' fashion when the buffer is read.
It may be possible that Chrome is lazy updating underlying buffers, but copying that data to the GPU does not trigger the getter but rather copies it raw from memory.
Temporary Solution
Until the underlying problem is found and remedied (if even applicable), it would seem that Float32Arrays are essentially immutable (cannot be changed) once created in the context of WebGL textures. There also appears to be a .set() method attached to typed arrays, but this:
pixels.set(new Float32Array([420.4]), index);
Seems like a lot of external boxing/conversion to get around a lazy buffer, especially one that claims to allow [] access.
Typed arrays, including Float32Arrays, are just packed arrays (think C/C++). Updating them is instant. If you want to see data on the GPU you have to upload the data again with texImage2D, but otherwise there's no magic, no crazy buffering, it's very straightforward. If you know C/C++ it's functionally equivalent to
void* arrayBuffer = malloc(sizeOfBuffer);
float* viewAsFloat = (float*)arrayBuffer;
Typed Arrays are not views into JS arrays. Using a native JS array to initialize a typed array is just a convenient way of initializing the typed array. Once created the TypedArray is a new array.
You can get multiple ArrayBuffer views into the same ArrayBuffer though.
Example
var b = new ArrayBuffer(16); // make a 16 byte ArrayBuffer
var bytes = new Uint8Array(b); // make a Uint8Array view into ArrayBuffer
var longs = new Uint32Array(b); // make a Uint32Array view into ArrayBuffer
var floats = new Float32Array(b); // make a Float32Array view into ArrayBuffer
// print the contents of the views
console.log(bytes);
console.log(longs);
console.log(floats);
// change a byte using one of the views
bytes[1] = 255;
// print the contents again
console.log(bytes);
console.log(longs);
console.log(floats);
Copy and paste all that code into your JavaScript console. You should see something like
[0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
[0, 0, 0, 0]
[0, 0, 0, 0]
[0, 255, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
[65280, 0, 0, 0]
[9.147676375112406e-41, 0, 0, 0]
Note: using multiple views of different types on the same array buffer is NOT cross platform compatible. In other words, you'll get different results on a big endian platform vs a little-endian platform. At the moment there are no popular big endian platforms with browsers that support TypedArrays so you can kind of ignore this issue though your page may break on some future platform. If you want to read/write data in a platform independent way you should use a DataView. Otherwise, the main point of using multiple views on the same buffer is to upload packed vertex data for example float positions packed with uint32 RGBA colors. In that case, it will work cross platform because you aren't reading/writing the same data with the views.
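If it helps, here's a small sketch of reading and writing with an explicit endianness via a DataView (the buffer here is just a fresh example, not one from the code above):
var buf = new ArrayBuffer(8);
var view = new DataView(buf);
view.setFloat32(0, 1.5, true);      // write a float at byte offset 0 as little-endian
var f = view.getFloat32(0, true);   // read it back, also little-endian
console.log(f);                     // 1.5, regardless of the platform's endianness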
As pointed out, JS native arrays and TypedArrays are not related except in that you can use a JS native array to initialize a TypedArray
var jsArray = [1, 2, 3, 4];
var floats = new Float32Array(jsArray); // this is a new array, not a view.
console.log(jsArray);
console.log(floats);
jsArray[1] = 567; // effects only the JS array
console.log(jsArray);
console.log(floats);
floats[2] = 89; // effects only the float array
console.log(jsArray);
console.log(floats);
Pasting into console I get
[1, 2, 3, 4]
[1, 2, 3, 4]
[1, 567, 3, 4]
[1, 2, 3, 4]
[1, 567, 3, 4]
[1, 2, 89, 4]
Note that you can get the underlying ArrayBuffer from any typed array.
var buffer = floats.buffer;
And create new views
var longs = new Uint8Array(buffer);
console.log(longs);
// prints [0, 0, 128, 63, 0, 0, 0, 64, 0, 0, 178, 66, 0, 0, 128, 64]
You can also create views that cover a portion of the buffer.
var offset = 8; // Offset is in bytes
var length = 2; // Length is in units of type
// a buffer that looks at the last 2 floats
var f2 = new Float32Array(buffer, offset, length);
console.log(f2);
// prints [89, 4]
As for textures and typed arrays here's a snippet using a Float32Array to update a Floating point texture.
main();

function main() {
    var canvas = document.getElementById("canvas");
    var gl = canvas.getContext("webgl");
    if (!gl) {
        alert("no WebGL");
        return;
    }
    var f = gl.getExtension("OES_texture_float");
    if (!f) {
        alert("no OES_texture_float");
        return;
    }

    var program = twgl.createProgramFromScripts(
        gl, ["2d-vertex-shader", "2d-fragment-shader"]);
    gl.useProgram(program);

    var positionLocation = gl.getAttribLocation(program, "a_position");
    var resolutionLocation = gl.getUniformLocation(program, "u_resolution");
    gl.uniform2f(resolutionLocation, canvas.width, canvas.height);

    var buffer = gl.createBuffer();
    gl.bindBuffer(gl.ARRAY_BUFFER, buffer);
    gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([
        -1, -1, 1, -1, -1, 2,
        -1, 1, 1, -1, 1, 1]), gl.STATIC_DRAW);
    gl.enableVertexAttribArray(positionLocation);
    gl.vertexAttribPointer(positionLocation, 2, gl.FLOAT, false, 0, 0);

    var tex = gl.createTexture();
    gl.bindTexture(gl.TEXTURE_2D, tex);
    var width = 64;
    var height = 64;
    var pixels = new Float32Array(width * height * 4);
    for (var y = 0; y < height; ++y) {
        for (var x = 0; x < width; ++x) {
            var offset = (y * width + x) * 4;
            pixels[offset + 0] = (x * 256 / width) * 1000;
            pixels[offset + 1] = (y * 256 / height) * 1000;
            pixels[offset + 2] = (x * y / (width * height)) * 1000;
            pixels[offset + 3] = 256000;
        }
    }
    gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.FLOAT,
                  pixels);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
    gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);

    function randInt(range) {
        return Math.floor(Math.random() * range);
    }

    function render() {
        // update a random pixel
        var x = randInt(width);
        var y = randInt(height);
        var offset = (y * width + x) * 4;
        pixels[offset + 0] = randInt(256000);
        pixels[offset + 1] = randInt(256000);
        pixels[offset + 2] = randInt(256000);
        gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, width, height, 0, gl.RGBA, gl.FLOAT,
                      pixels);
        gl.drawArrays(gl.TRIANGLES, 0, 6);
        requestAnimationFrame(render);
    }
    render();
}
<script src="https://twgljs.org/dist/2.x/twgl.min.js"></script>
<script id="2d-vertex-shader" type="x-shader/x-vertex">
attribute vec2 a_position;
void main() {
gl_Position = vec4(a_position, 0, 1);
}
</script>
<script id="2d-fragment-shader" type="x-shader/x-fragment">
precision mediump float;
uniform vec2 u_resolution;
uniform sampler2D u_tex;
void main() {
vec2 texCoord = gl_FragCoord.xy / u_resolution;
vec4 floatColor = texture2D(u_tex, texCoord);
gl_FragColor = floatColor / 256000.0;
}
</script>
<canvas id="canvas" width="400" height="300"></canvas>
All of that suggests the issue you are seeing is maybe related to something else? As for debugging, as pointed out above, ArrayBuffers are very straightforward; there's no lazy buffering or anything. So if you want to see into an ArrayBuffer, make a view for it so the debugger has some way to know what you want to be displayed.
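For example, to inspect an ArrayBuffer in the console, wrap it in a typed-array view of whatever element type you expect (someBuffer below is illustrative, e.g. pixels.buffer from the question):
console.log(new Float32Array(someBuffer));  // contents interpreted as 32-bit floats
console.log(new Uint8Array(someBuffer));    // or as raw bytes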
