I want to read and process the pixel data rendered by A-Frame.
I tried the code below:
var canvas = document.querySelector('canvas'),
    params = {
      preserveDrawingBuffer: true,
    },
    gl = canvas.getContext('experimental-webgl', params);
var pixels = new Uint8Array(canvas.width * canvas.height * 4);
gl.readPixels(
  0,
  0,
  canvas.width,
  canvas.height,
  WebGLRenderingContext.RGBA,
  WebGLRenderingContext.UNSIGNED_BYTE,
  pixels
);
But the pixels array was left filled with 0, 0, 0, 0.
How can I read the pixel data on the canvas?
I'd appreciate it if you could answer this question.
In the demo you posted, find and put a breakpoint on the line that says:
_gl = _context || _canvas.getContext( 'webgl', attributes )
Take a look at attributes; it's an options object, and among other settings it contains this option:
preserveDrawingBuffer: false
Although your original post shows you manually setting this option to true, your option has not carried through into the demo you posted. If you can make this option take effect, you should be able to read the pixels back.
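As an alternative, here is a minimal sketch (the component name read-pixels is made up) that sidesteps preserveDrawingBuffer entirely: A-Frame calls a component's tock handler right after the scene renders, so the drawing buffer is still intact at that point and readPixels returns real data.

AFRAME.registerComponent('read-pixels', {
  tock: function () {
    // sceneEl.renderer is the underlying THREE.WebGLRenderer
    var gl = this.el.sceneEl.renderer.getContext();
    var pixels = new Uint8Array(gl.drawingBufferWidth * gl.drawingBufferHeight * 4);
    // Read this frame's RGBA data before the browser clears the buffer
    gl.readPixels(0, 0, gl.drawingBufferWidth, gl.drawingBufferHeight,
                  gl.RGBA, gl.UNSIGNED_BYTE, pixels);
  }
});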
I have an existing line initialized:
// material
const material = new THREE.LineBasicMaterial({ color: 0xffffff });
// array of vertices
const vertices = [];
vertices.push(new THREE.Vector3(0, 0, 0));
vertices.push(new THREE.Vector3(0, 0, 5));
// geometry and line
const geometry = new THREE.BufferGeometry().setFromPoints(vertices);
const line = new THREE.Line(geometry, material);
What I want to do is extend this line after it has been created. I've read this page on how to update things, but I don't think it fits this situation, because instead of adding vertices to my shape I want to move them. Then again, it's very likely I misunderstood. I've tried deleting the line and then redrawing it longer, but I can't get it to work without my browser crashing.
The BufferGeometry exposes its vertices through its "position" BufferAttribute. To change the positions, you should do something like the following:
//
// Assuming we want to move your line segment (0, 0, 0)-(0, 0, 5) by
// one unit in the direction of positive x, to (1, 0, 0)-(1, 0, 5).
//
// Get a reference to the "position" buffer attribute
const pos = geometry.getAttribute("position");
// Set the new positions
pos.setXYZ(0, vertices[0].x + 1, vertices[0].y, vertices[0].z);
pos.setXYZ(1, vertices[1].x + 1, vertices[1].y, vertices[1].z);
// Update the vertex buffer in graphics memory
pos.needsUpdate = true;
// Update the bounds to support, e.g., frustum culling
geometry.computeBoundingBox();
geometry.computeBoundingSphere();
Other methods exist, such as modifying the attribute's backing array directly and copying in a new array, but the general process will be the same.
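For instance, here is a sketch of the direct-array approach for the same move, assuming the same geometry as above:

// The attribute's backing Float32Array is laid out [x0, y0, z0, x1, y1, z1]
const arr = geometry.getAttribute("position").array;
arr[0] += 1; // vertex 0, x
arr[3] += 1; // vertex 1, x
geometry.getAttribute("position").needsUpdate = true;
geometry.computeBoundingBox();
geometry.computeBoundingSphere();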
I would like to know whether it is possible to get the dimensions of a texture.
I used this line to load my texture:
const texture = THREE.ImageUtils.loadTexture(src)
Maybe it is necessary to load the image to get its dimensions (but is that possible with JavaScript alone, without HTML and a div?).
I would like the material I create afterwards to fit the texture dimensions.
First, THREE.ImageUtils.loadTexture is deprecated in the latest THREE.js (r90). Take a look at THREE.TextureLoader instead.
That said, you can get to the image and its properties from a loaded texture.
texture.image
Depending on the image format, you should be able to access the width/height properties, which will be your texture's dimensions.
Just a note: Loading a texture is asynchronous, so you'll need to define the onLoad callback.
var loader = new THREE.TextureLoader();
var texture = loader.load( "./img.png", function ( tex ) {
  // tex and texture are the same in this example, but that might not always be the case
  console.log( tex.image.width, tex.image.height );
  console.log( texture.image.width, texture.image.height );
} );
If you turn off sizeAttenuation and you have a function that scales the Sprite according to the desired width, that function could look like this:
function scaleWidth(width) {
  const tex = sprite.material.map;
  const scaleY = tex.image.height / tex.image.width;
  sprite.scale.setX(width).setY(width * scaleY);
}
So, at this point, you can set the scale according to the desired width while maintaining the aspect ratio of the image.
Then you need a function that receives the camera and, depending on the type of camera, updates the sprite's width:
function updateScale(cam) {
  let cw = 1;
  if (cam.isOrthographicCamera) {
    cw = cam.right - cam.left;
  } else if (cam.isPerspectiveCamera) {
    // 0.01745329 is PI / 180, converting the fov from degrees to radians
    cw = 2 * cam.aspect * Math.tan(cam.fov * 0.5 * 0.01745329);
  }
  scaleWidth(cw * desiredScaleFactor);
}
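For context, a sketch of the setup these functions assume (sprite, texture, and desiredScaleFactor stand in for your own objects):

// sizeAttenuation: false keeps the sprite the same size regardless of distance
const material = new THREE.SpriteMaterial({ map: texture, sizeAttenuation: false });
const sprite = new THREE.Sprite(material);
const desiredScaleFactor = 0.1; // sprite width as a fraction of the camera's view width

// call whenever the camera or viewport changes
updateScale(camera);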
I'm using the Konva framework to work with canvas. I need to create three shapes (slots) and make some manipulations. That works, but I also need to work with pixels: I get the image via getImageData, store it in an internal structure, and then use it for manipulation.
var c = this.layer.getCanvas();
var ctx = c.getContext();
this.slots[name].data = ctx.getImageData(0, 0, this.stage.getWidth(), this.stage.getHeight());
When the job is done, I want to combine those imageData structures into one, but I can't. Following this answer I tried to do it, but I always get:
konva.min.js:29 Uncaught TypeError: Failed to execute 'drawImage' on 'CanvasRenderingContext2D': The provided value is not of type '(HTMLImageElement or HTMLVideoElement or HTMLCanvasElement or ImageBitmap)'
This is my snippet of code:
var c = layer.getCanvas();
var ctx = c.getContext();
for (var slot_name in this.slots) {
  console.log('Slot', slot_name);
  var slot_data = this.slots[slot_name].data;
  var c2 = layer.getCanvas();
  var ctx2 = c2.getContext();
  ctx2.putImageData(slot_data, 0, 0);
  ctx.drawImage(c2, 0, 0);
}
var imageData3 = c.toDataURL({pixelRatio: 1});
zip.file('scene.png', imageData3.substr(imageData3.indexOf(',') + 1), {base64: true});
Where is my mistake?
UPDATE:
changed line:
ctx.drawImage(c2, 0, 0);
to:
ctx.drawImage(c2._canvas, 0, 0);
and the canvas saves, but I see only the last saved picture. Why?
In var c2 = layer.getCanvas();, the canvas is not a native canvas; it is a Konva wrapper. That is why you get an error when you try to draw it.
To get a reference to the native canvas you can use: var c2 = layer.getCanvas()._canvas;.
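As for seeing only the last picture: putImageData() replaces pixels outright (including transparent ones) instead of compositing. A sketch of one way around it, assuming each slot's ImageData should stack on top of the previous ones, is to paint each slot onto a separate scratch canvas and drawImage() that onto the target (variable names mirror the question's):

var scratch = document.createElement('canvas');
scratch.width = stage.getWidth();
scratch.height = stage.getHeight();
var scratchCtx = scratch.getContext('2d');

var target = layer.getCanvas()._canvas; // native canvas behind the Konva wrapper
var targetCtx = target.getContext('2d');

for (var slot_name in slots) {
  scratchCtx.putImageData(slots[slot_name].data, 0, 0); // replaces scratch pixels
  targetCtx.drawImage(scratch, 0, 0);                   // composites onto the target
}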
I'm working on a really small widget which displays a simple bar chart:
I'm using Chart.js for that specific task.
var canvas = this.$(".chart-canvas")[0];
if (canvas) {
  var ctx = canvas.getContext("2d");
  ctx.translate(0.5, 0.5);
  window.barChart = new Chart(ctx).Bar(barChartData, {
    responsive: true,
    maintainAspectRatio: false,
    showScale: false,
    scaleShowGridLines: false,
    scaleGridLineWidth: 0,
    barValueSpacing: 1,
    barDatasetSpacing: 0,
    showXAxisLabel: false,
    barShowStroke: false,
    showTooltips: false,
    animation: false
  });
}
As you can see, I've tried
ctx.translate(0.5, 0.5);
but that didn't really help.
Is there any way to get rid of the subpixel rendering?
I've read about Bresenham's line algorithm, but don't know how to implement it there.
Any ideas/suggestions appreciated.
Thank you in advance!
Assuming you have only one color, you can do this by extending the chart and overriding its draw to do a getImageData, "round" the pixel colors (if a pixel has any R, G, or B value, set it to the bar color), and then do a putImageData.
You could do this for multiple colors too, but it becomes a tad complicated when two colors are close to each other.
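A sketch of that rounding pass for a single bar color (the hex values are placeholders for your actual color, and this assumes a transparent canvas background, so only bar pixels have any color):

var img = ctx.getImageData(0, 0, ctx.canvas.width, ctx.canvas.height);
var d = img.data;
for (var i = 0; i < d.length; i += 4) {
  // snap any partially-colored (anti-aliased) pixel to the full bar color
  if (d[i] || d[i + 1] || d[i + 2]) {
    d[i] = 0x45;     // R
    d[i + 1] = 0x9f; // G
    d[i + 2] = 0xdc; // B
    d[i + 3] = 255;  // fully opaque
  }
}
ctx.putImageData(img, 0, 0);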
However, the difference in bar value spacing you are seeing is because of the way Chart.js calculates the x position for the bars - there's a rounding off that happens.
You can extend the chart and override the method that calculates the x position to get rid of the rounding:
Chart.types.Bar.extend({
  // Passing in a name registers this chart in the Chart namespace in the same way
  name: "BarAlt",
  initialize: function (data) {
    Chart.types.Bar.prototype.initialize.apply(this, arguments);
    // copy-paste from library code with only 1 line changed
    this.scale.calculateX = function (index) {
      var isRotated = (this.xLabelRotation > 0),
          innerWidth = this.width - (this.xScalePaddingLeft + this.xScalePaddingRight),
          valueWidth = innerWidth / Math.max((this.valuesCount - ((this.offsetGridLines) ? 0 : 1)), 1),
          valueOffset = (valueWidth * index) + this.xScalePaddingLeft;
      if (this.offsetGridLines) {
        valueOffset += (valueWidth / 2);
      }
      // the library code rounds this off - we don't
      return valueOffset;
    };
    // render again because the original initialize call does a render;
    // when animation is off this is the only render that happens
    this.render();
  }
});
You'd call it like so:
var ctx = document.getElementById('canvas').getContext('2d');
var myBarChart = new Chart(ctx).BarAlt(data, {
...
Fiddle - http://jsfiddle.net/gf2c4ue4/
You can see the difference better if you zoom in; the top one is the extended chart.
I sometimes find myself struggling with declaring buffers (with createBuffer/bindBuffer/bufferData) in one order and then rebinding them in other parts of the code, usually in the draw loop.
If I don't rebind the vertex buffer before drawing arrays, the console complains about an attempt to access out-of-range vertices. My suspicion is that the last bound object is what gets passed to the pointer and then to drawArrays, but when I change the order at the beginning of the code, nothing changes. What effectively works is rebinding the buffer in the draw loop, so I can't really understand the logic behind that. When do you need to rebind? Why do you need to rebind? What is attribute 0 referring to?
I don't know if this will help. As some people have said, GL/WebGL has a bunch of internal state. All the functions you call set up that state, and when it's all set up you call drawArrays or drawElements, and all of that state is used to draw things.
This has been explained elsewhere on SO but binding a buffer is just setting 1 of 2 global variables inside WebGL. After that you refer to the buffer by its bind point.
You can think of it like this
gl = function() {
  // internal WebGL state
  let lastError;
  let arrayBuffer = null;
  let vertexArray = {
    elementArrayBuffer: null,
    attributes: [
      { enabled: false, type: gl.FLOAT, size: 3, normalized: false,
        stride: 0, offset: 0, buffer: null },
      { enabled: false, type: gl.FLOAT, size: 3, normalized: false,
        stride: 0, offset: 0, buffer: null },
      { enabled: false, type: gl.FLOAT, size: 3, normalized: false,
        stride: 0, offset: 0, buffer: null },
      { enabled: false, type: gl.FLOAT, size: 3, normalized: false,
        stride: 0, offset: 0, buffer: null },
      { enabled: false, type: gl.FLOAT, size: 3, normalized: false,
        stride: 0, offset: 0, buffer: null },
      ...
    ],
  };
  // these values are used when a vertex attrib is disabled
  let attribValues = [
    [0, 0, 0, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 1],
    [0, 0, 0, 1],
    ...
  ];
  ...
  // Implementation of gl.bindBuffer.
  // Note this function does nothing but set 1 of 2 internal variables.
  this.bindBuffer = function(bindPoint, buffer) {
    switch (bindPoint) {
      case gl.ARRAY_BUFFER:
        arrayBuffer = buffer;
        break;
      case gl.ELEMENT_ARRAY_BUFFER:
        vertexArray.elementArrayBuffer = buffer;
        break;
      default:
        lastError = gl.INVALID_ENUM;
        break;
    }
  };
  ...
}();
After that, other WebGL functions reference those bind points. For example, gl.bufferData might do something like:
// implementation of gl.bufferData
// Notice you don't pass in a buffer. You pass in a bindPoint.
// The function gets the buffer from one of the internal variables you set
// by previously calling gl.bindBuffer
this.bufferData = function(bindPoint, data, usage) {
  // look up the buffer from the bindPoint
  var buffer;
  switch (bindPoint) {
    case gl.ARRAY_BUFFER:
      buffer = arrayBuffer;
      break;
    case gl.ELEMENT_ARRAY_BUFFER:
      buffer = vertexArray.elementArrayBuffer;
      break;
    default:
      lastError = gl.INVALID_ENUM;
      break;
  }
  // copy data into buffer
  buffer.copyData(data);  // just making this up
  buffer.setUsage(usage); // just making this up
};
Separate from those bind points there's a number of attributes. The attributes are also global state by default. They define how to pull data out of the buffers to supply to your vertex shader. Calling gl.getAttribLocation(someProgram, "nameOfAttribute") tells you which attribute the vertex shader will look at to get data out of a buffer.
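For example (someProgram and the attribute name a_position are placeholders):

// returns the attribute's index, e.g. 0; returns -1 if the shader has no such attribute
const positionLoc = gl.getAttribLocation(someProgram, 'a_position');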
So, there's 4 functions that you use to configure how an attribute will get data from a buffer. gl.enableVertexAttribArray, gl.disableVertexAttribArray, gl.vertexAttribPointer, and gl.vertexAttrib??.
They're effectively implemented something like this
this.enableVertexAttribArray = function(location) {
  const attribute = vertexArray.attributes[location];
  attribute.enabled = true;  // true means get data from attribute.buffer
};

this.disableVertexAttribArray = function(location) {
  const attribute = vertexArray.attributes[location];
  attribute.enabled = false; // false means get data from attribValues[location]
};

this.vertexAttribPointer = function(location, size, type, normalized, stride, offset) {
  const attribute = vertexArray.attributes[location];
  attribute.size = size;             // num values to pull from buffer per vertex shader iteration
  attribute.type = type;             // type of values to pull from buffer
  attribute.normalized = normalized; // whether or not to normalize
  attribute.stride = stride;         // number of bytes to advance for each iteration of the vertex shader. 0 = compute from type, size
  attribute.offset = offset;         // where to start in the buffer
  // IMPORTANT!!! Associates whatever buffer is currently *bound* to
  // "arrayBuffer" with this attribute
  attribute.buffer = arrayBuffer;
};

this.vertexAttrib4f = function(location, x, y, z, w) {
  const attribValue = attribValues[location];
  attribValue[0] = x;
  attribValue[1] = y;
  attribValue[2] = z;
  attribValue[3] = w;
};
Now, when you call gl.drawArrays or gl.drawElements the system knows how you want to pull data out of the buffers you made to supply your vertex shader. See here for how that works.
Since the attributes are global state, every time you call drawElements or drawArrays, however you have the attributes set up is how they'll be used. If you set up attributes #1 and #2 to buffers that each have 3 vertices but you ask to draw 6 vertices with gl.drawArrays, you'll get an error. Similarly, if you make an index buffer, bind it to the gl.ELEMENT_ARRAY_BUFFER bind point, and that buffer has an index > 2, you'll get that index-out-of-range error. If your buffers only have 3 vertices, then the only valid indices are 0, 1, and 2.
Normally, every time you draw something different you rebind all the attributes needed to draw that thing. Drawing a cube that has positions and normals? Bind the buffer with position data, setup the attribute being used for positions, bind the buffer with normal data, setup the attribute being used for normals, now draw. Next you draw a sphere with positions, vertex colors and texture coordinates. Bind the buffer that contains position data, setup the attribute being used for positions. Bind the buffer that contains vertex color data, setup the attribute being used for vertex colors. Bind the buffer that contains texture coordinates, setup the attribute being used for texture coordinates.
The only time you don't rebind buffers is if you're drawing the same thing more than once. For example drawing 10 cubes. You'd rebind the buffers, then set the uniforms for one cube, draw it, set the uniforms for the next cube, draw it, repeat.
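A sketch of that pattern (all buffer and location names here are hypothetical):

// drawing a cube: rebind and reconfigure everything it needs
gl.bindBuffer(gl.ARRAY_BUFFER, cubePositionBuffer);
gl.enableVertexAttribArray(positionLoc);
gl.vertexAttribPointer(positionLoc, 3, gl.FLOAT, false, 0, 0);
gl.bindBuffer(gl.ARRAY_BUFFER, cubeNormalBuffer);
gl.enableVertexAttribArray(normalLoc);
gl.vertexAttribPointer(normalLoc, 3, gl.FLOAT, false, 0, 0);

// drawing 10 cubes: buffers stay bound, only uniforms change between draws
for (const cube of cubes) {
  gl.uniformMatrix4fv(matrixLoc, false, cube.matrix);
  gl.drawArrays(gl.TRIANGLES, 0, 36);
}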
I should also add that there's an extension, OES_vertex_array_object, whose functionality is also a core feature of WebGL 2.0. A Vertex Array Object is the global state above called vertexArray, which includes the elementArrayBuffer and all the attributes.
Calling gl.createVertexArray makes a new one of those. Calling gl.bindVertexArray sets the global attributes to point to the ones in the bound vertexArray.
The implementation of gl.bindVertexArray would then be
this.bindVertexArray = function(vao) {
  vertexArray = vao ? vao : defaultVertexArray;
};
This has the advantage of letting you set up all attributes and buffers at init time and then at draw time just 1 WebGL call will set all buffers and attributes.
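A sketch of that flow with the WebGL 2 API (buffer and location names hypothetical):

// at init time: record the attribute/buffer setup into a VAO
const vao = gl.createVertexArray();
gl.bindVertexArray(vao);
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.enableVertexAttribArray(positionLoc);
gl.vertexAttribPointer(positionLoc, 3, gl.FLOAT, false, 0, 0);
gl.bindVertexArray(null);

// at draw time: one call restores all of that state
gl.bindVertexArray(vao);
gl.drawArrays(gl.TRIANGLES, 0, vertexCount);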
Here is a webgl state diagram that might help visualize this better.