three.js plane buffergeometry uvs - javascript

I'm trying to create a BufferGeometry plane, but I'm having trouble with the UV coordinates. I've tried to follow Correct UV mapping Three.js, yet I don't get a correct result.
The uv code is below. I also saved the entire buffergeometry code at http://jsfiddle.net/94xaL/.
I would very much appreciate a hint on what I'm doing wrong here!
Thanks!
var uvs = terrainGeom.attributes.uv.array;
var gridX = gridY = TERRAIN_RES - 1;
for ( var iy = 0; iy < gridY; iy++ ) {
    for ( var ix = 0; ix < gridX; ix++ ) {
        var i = ( iy * gridY + ix ) * 12;
        // 0,0
        uvs[ i ]     = ix / gridX;
        uvs[ i + 1 ] = iy / gridY;
        // 0,1
        uvs[ i + 2 ] = ix / gridX;
        uvs[ i + 3 ] = ( iy + 1 ) / gridY;
        // 1,0
        uvs[ i + 4 ] = ( ix + 1 ) / gridX;
        uvs[ i + 5 ] = iy / gridY;
        // 0,1
        uvs[ i + 6 ] = ix / gridX;
        uvs[ i + 7 ] = ( iy + 1 ) / gridY;
        // 1,1
        uvs[ i + 8 ] = ( ix + 1 ) / gridX;
        uvs[ i + 9 ] = ( iy + 1 ) / gridY;
        // 1,0
        uvs[ i + 10 ] = ( ix + 1 ) / gridX;
        uvs[ i + 11 ] = iy / gridY;
    }
}

The latest three.js version in the dev branch now builds planes with BufferGeometry: https://github.com/mrdoob/three.js/blob/dev/src/extras/geometries/PlaneGeometry.js
If you still want to build your own you can get some inspiration there.
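If you do roll your own, here is a minimal sketch of per-triangle UV generation for a non-indexed grid. The function name buildUvs and the standalone Float32Array are illustrative, not part of the question's setup; note that it linearises the quad index with gridX, whereas the question's code uses gridY, which only happens to coincide for square grids:

```javascript
// Build UVs for a grid of gridX * gridY quads, two triangles per quad,
// six vertices per quad, two floats (u, v) per vertex => 12 floats per quad.
function buildUvs(gridX, gridY) {
    const uvs = new Float32Array(gridX * gridY * 12);
    for (let iy = 0; iy < gridY; iy++) {
        for (let ix = 0; ix < gridX; ix++) {
            // Use gridX (not gridY) when linearising the quad index,
            // otherwise non-square grids write to the wrong offsets.
            const i = (iy * gridX + ix) * 12;
            const u0 = ix / gridX,       v0 = iy / gridY;
            const u1 = (ix + 1) / gridX, v1 = (iy + 1) / gridY;
            // Triangle 1: (0,0) (0,1) (1,0)
            uvs.set([u0, v0, u0, v1, u1, v0], i);
            // Triangle 2: (0,1) (1,1) (1,0)
            uvs.set([u0, v1, u1, v1, u1, v0], i + 6);
        }
    }
    return uvs;
}
```

The triangle ordering here mirrors the question's comments; whatever ordering you pick, it must match the winding used when generating the position attribute.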

Related

Making and connecting hexagons on a rectangle grid

I am creating a pathfinding application and I want to connect every hexagon (H) to its adjacent hexagons. The grid is a rectangle, but it is populated with hexagons. The issue is that the code to connect these hexagons is currently lengthy and extremely finicky. An example of what I am trying to achieve is:
The issue is that the connections between, say, one hexagon and its neighbours (ranging from 2-6 depending on their placement in the grid) are not working properly. An example of the code I am using right now to connect a hexagon with 6 neighbours is:
currentState.graph().addEdge(i, i + 1, 1);
currentState.graph().addEdge(i, i - HexBoard.rows + 1, 1);
currentState.graph().addEdge(i, i - HexBoard.rows, 1);
currentState.graph().addEdge(i, i + HexBoard.rows + 1, 1);
currentState.graph().addEdge(i, i + HexBoard.rows, 1);
The graph is essentially the grid; addEdge adds a connection from src -> dest with cost c, in order. Is there an algorithm or approach that would make my code less bulky? (Right now it is polluted with if-else clauses.)
The site which inspired me: https://clementmihailescu.github.io/Pathfinding-Visualizer/#
EDIT: The problem is not in drawing the hexagons (they are already SVGs); it is in assigning them edges and connections.
Interesting problem... To set a solid foundation, here's a hexagon grid class that is neither lengthy nor finicky, based on a simple data structure of a linear array. A couple of notes...
The HexagonGrid constructor accepts the hexagon grid dimensions in terms of the number of hexagons wide (hexWidth) by number of hexagons high (hexHeight).
The hexHeight alternates by an additional hexagon every other column for a more pleasing appearance. Thus an odd number for hexWidth bookends the hexagon grid with the same number of hexagons in the first and last columns.
The length attribute represents the total number of hexagons in the grid.
Each hexagon is referenced by a linear index from 0 to length - 1.
The hexagonIndex method, which takes (x,y) coordinates, returns the linear index based on an approximation of the closest hexagon. Thus, when near the edges of a hexagon, the index returned might be that of a close neighbor.
I'm not totally satisfied with the class structure, but it is sufficient to show the key algorithms involved in a linear-indexed hexagon grid.
To aid in visualizing the linear indexing scheme, the code snippet displays the linear index value in the hexagon. Such an indexing scheme offers the opportunity to have a parallel array of the same length which represents the characteristics of each specific hexagon by index.
Also exemplified is the ability to translate from mouse coordinates to the hexagon index, by clicking on any hexagon, which will redraw the hexagon with a thicker border.
const canvas = document.getElementById( 'canvas' );
const ctx = canvas.getContext( '2d' );

class HexagonGrid {
    constructor( hexWidth, hexHeight, edgeLength ) {
        this.hexWidth = hexWidth;
        this.hexHeight = hexHeight;
        this.edgeLength = edgeLength;
        this.cellWidthPair = this.hexHeight * 2 + 1;
        this.length = this.cellWidthPair * ( hexWidth / 2 |0 ) + hexHeight * ( hexWidth % 2 );
        this.dx = edgeLength * Math.sin( Math.PI / 6 );
        this.dy = edgeLength * Math.cos( Math.PI / 6 );
    }
    centerOfHexagon( i ) {
        let xPairNo = i % this.cellWidthPair;
        return {
            x: this.dx + this.edgeLength / 2 + ( i / this.cellWidthPair |0 ) * ( this.dx + this.edgeLength ) * 2 + ( this.hexHeight <= xPairNo ) * ( this.dx + this.edgeLength ),
            y: xPairNo < this.hexHeight ? ( xPairNo + 1 ) * this.dy * 2 : this.dy + ( xPairNo - this.hexHeight ) * this.dy * 2
        };
    }
    hexagonIndex( point ) {
        let col = ( point.x - this.dx / 2 ) / ( this.dx + this.edgeLength ) |0;
        let row = ( point.y - ( col % 2 === 0 ) * this.dy ) / ( this.dy * 2 ) |0;
        let hexIndex = ( col / 2 |0 ) * this.cellWidthPair + ( col % 2 ) * this.hexHeight + row;
        //console.log( `(${point.x},${point.y}): col=${col} row=${row} hexIndex=${hexIndex}` );
        return ( 0 <= hexIndex && hexIndex < this.length ? hexIndex : null );
    }
    edge( i ) {
        return (
            i < this.hexHeight
            || ( i + 1 ) % ( this.hexHeight + 0.5 ) === this.hexHeight
            || i % ( this.hexHeight + 0.5 ) === this.hexHeight
            || ( i + 1 ) % ( this.hexHeight + 0.5 ) === 0
            || i % ( this.hexHeight + 0.5 ) === 0
            || this.length - this.hexHeight < i
        );
    }
    drawHexagon( ctx, center, lineWidth ) {
        let halfEdge = this.edgeLength / 2;
        ctx.lineWidth = lineWidth || 1;
        ctx.beginPath();
        ctx.moveTo( center.x - halfEdge, center.y - this.dy );
        ctx.lineTo( center.x + halfEdge, center.y - this.dy );
        ctx.lineTo( center.x + halfEdge + this.dx, center.y );
        ctx.lineTo( center.x + halfEdge, center.y + this.dy );
        ctx.lineTo( center.x - halfEdge, center.y + this.dy );
        ctx.lineTo( center.x - halfEdge - this.dx, center.y );
        ctx.lineTo( center.x - halfEdge, center.y - this.dy );
        ctx.stroke();
    }
    drawGrid( ctx, topLeft ) {
        ctx.font = '10px Arial';
        for ( let i = 0; i < this.length; i++ ) {
            let center = this.centerOfHexagon( i );
            this.drawHexagon( ctx, { x: topLeft.x + center.x, y: topLeft.y + center.y } );
            ctx.fillStyle = this.edge( i ) ? 'red' : 'black';
            ctx.fillText( i, topLeft.x + center.x - 5, topLeft.y + center.y + 5 );
        }
    }
}

let myHexGrid = new HexagonGrid( 11, 5, 20 );
let gridLeftTop = { x: 20, y: 20 };
myHexGrid.drawGrid( ctx, gridLeftTop );
canvas.addEventListener( 'mousedown', function( event ) {
    let i = myHexGrid.hexagonIndex( { x: event.offsetX - gridLeftTop.x, y: event.offsetY - gridLeftTop.y } );
    if ( i !== null ) {
        let center = myHexGrid.centerOfHexagon( i );
        myHexGrid.drawHexagon( ctx, { x: gridLeftTop.x + center.x, y: gridLeftTop.y + center.y }, 3 );
    }
} );
<canvas id="canvas" width="1000" height="1000"></canvas>
A large benefit of the linear index is that it makes path searching easier, as each interior hexagon is surrounded by hexagons with relative indexes of -1, -6, -5, +1, +6, +5. For example, applying the relative indexes to hexagon 18 results in a list of surrounding hexagons of 17, 12, 13, 19, 24, 23.
As a bonus, the edge method indicates whether the hexagon is on the edge of the grid. (In the code snippet, the edge cells are identified by red text.) I highly recommend that edge cells not be part of the pathing (i.e., that they be unreachable), as this simplifies any path searching. Otherwise the pathing logic becomes very complex: when on an edge cell, the relative indexes indicating the surrounding hexagons no longer fully apply...
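The relative-index observation above can be captured in a tiny helper. This is a sketch for the 11x5 grid from the snippet (where each column pair holds 11 cells); the helper name neighborsOf is illustrative and it is only valid for interior hexagons:

```javascript
// Relative offsets of the six neighbours of an interior hexagon in a
// linear-indexed grid whose column pairs hold 11 cells (hexHeight = 5).
const NEIGHBOR_OFFSETS = [-1, -6, -5, +1, +6, +5];

function neighborsOf(i) {
    // Edge cells need the edge() check from the class above
    // (or should be excluded from pathing entirely).
    return NEIGHBOR_OFFSETS.map(d => i + d);
}
```

For a general grid the offsets would be derived from cellWidthPair rather than hard-coded.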

Porting 3D Rose written by Wolfram Language into JavaScript

I'd like to get help from Geometry / Wolfram Mathematica people.
I want to visualize this 3D Rose in JavaScript (p5.js) environment.
This figure was originally generated using the Wolfram language by Paul Nylander in 2004-2006, and below is the code:
Rose[x_, theta_] := Module[{
phi = (Pi/2)Exp[-theta/(8 Pi)],
X = 1 - (1/2)((5/4)(1 - Mod[3.6 theta, 2 Pi]/Pi)^2 - 1/4)^2},
y = 1.95653 x^2 (1.27689 x - 1)^2 Sin[phi];
r = X(x Sin[phi] + y Cos[phi]);
{r Sin[theta], r Cos[theta], X(x Cos[phi] - y Sin[phi]), EdgeForm[]
}];
ParametricPlot3D[
Rose[x, theta], {x, 0, 1}, {theta, -2 Pi, 15 Pi},
PlotPoints -> {25, 576}, LightSources -> {{{0, 0, 1}, RGBColor[1, 0, 0]}},
Compiled -> False
]
I tried to implement that code in JavaScript as below.
function rose(){
    for(let theta = 0; theta < 2700; theta += 3){
        beginShape(POINTS);
        for(let x = 2.3; x < 3.3; x += 0.02){
            let phi = (180/2) * Math.exp(- theta / (8*180));
            let X = 1 - (1/2) * pow(((5/4) * pow((1 - (3.6 * theta % 360)/180), 2) - 1/4), 2);
            let y = 1.95653 * pow(x, 2) * pow((1.27689*x - 1), 2) * sin(phi);
            let r = X * (x*sin(phi) + y*cos(phi));
            let pX = r * sin(theta);
            let pY = r * cos(theta);
            let pZ = (-X * (x * cos(phi) - y * sin(phi)))-200;
            vertex(pX, pY, pZ);
        }
        endShape();
    }
}
But I got this result below
Unlike original one, the petal at the top is too stretched.
I suspected that the
let y = 1.95653 * pow(x, 2) * pow((1.27689*x - 1), 2) * sin(phi);
should perhaps be like the below...
let y = pow(1.95653*x, 2*pow(1.27689*x - 1, 2*sin(theta)));
But that went even further away from the original.
Maybe I'm asking a dumb question, but I've been stuck for several days.
If you see a mistake, please let me know.
Thank you in advance🙏
Update:
I changed the x range to 0~1 as defined by the original one.
Also simplified the JS code like below to find the error.
function rose_debug(){
    for(let theta = 0; theta < 15*PI; theta += PI/60){
        beginShape(POINTS);
        for(let x = 0.0; x < 1.0; x += 0.005){
            let phi = (PI/2) * Math.exp(- theta / (8*PI));
            let y = pow(x, 4) * sin(phi);
            let r = (x * sin(phi) + y * cos(phi));
            let pX = r * sin(theta);
            let pY = r * cos(theta);
            let pZ = x * cos(phi) - y * sin(phi);
            vertex(pX, pY, pZ);
        }
        endShape();
    }
}
But the result still keeps the wrong proportion↓↓↓
Also, when I remove the term "sin(phi)" in the line "let y =..." like below
let y = pow(x, 4);
then I got a figure somewhat resemble the original like below🤣
At this moment I was starting to suspect a mistake in the original equation, but then I found another article by Jorge García Tíscar (in Spanish) that successfully implemented the exact same 3D rose in the Wolfram language.
So, now I really don't know how the original is formed by the equation😇
Update2: Solved
I followed a suggestion by Trentium (answer No. 2 below) to stick to 0 ~ 1 as the range of x, and then multiply r and X by an arbitrary number.
for(let x = 0; x < 1; x += 0.05){
r = r * 200;
X = X * 200;
Then I got this correct result looks exactly the same as the original🥳
Simplified final code:
function rose_debug3(){
    for(let x = 0; x <= 1; x += 0.05){
        beginShape(POINTS);
        for(let theta = -2*PI; theta <= 15*PI; theta += 17*PI/2000){
            let phi = (PI / 2) * Math.exp(- theta / (8 * PI));
            let X = 1 - (1/2) * ((5/4) * (1 - ((3.6 * theta) % (2*PI))/PI) ** 2 - 1/4) ** 2;
            let y = 1.95653 * (x ** 2) * ((1.27689*x - 1) ** 2) * sin(phi);
            let r = X * (x * sin(phi) + y * cos(phi));
            if(0 < r){
                const factor = 200;
                let pX = r * sin(theta)*factor;
                let pY = r * cos(theta)*factor;
                let pZ = X * (x * cos(phi) - y * sin(phi))*factor;
                vertex(pX, pY, pZ);
            }
        }
        endShape();
    }
}
The reason I got the vertically stretched figure at first was the range of x. I thought that changing the range of x would just affect the overall size of the figure. But actually, the range affects it like this:
(1): 0 ~ x ~ 1, (2): 0 ~ x ~ 1.2
(3): 0 ~ x ~ 1.5, (4): 0 ~ x ~ 2.0
(5): flipped the (4)
Until then I had been seeing results like (5) above, and didn't realize that the correct shape was hiding inside that figure.
Thank you Trentium so much for kindly helping me a lot!
Since this response is a significant departure from my earlier response, am adding a new answer...
In rendering the rose algorithm in ThreeJS (sorry, I'm not a P5 guy), it became apparent that when generating the points, only the points with a positive radius should be rendered. Otherwise, superfluous points are rendered far outside the rose petals.
(Note: When running the code snippet, use the mouse to zoom and rotate the rendering of the rose.)
<script type="module">
import * as THREE from 'https://cdn.jsdelivr.net/npm/three@0.115.0/build/three.module.js';
import { OrbitControls } from 'https://cdn.jsdelivr.net/npm/three@0.115.0/examples/jsm/controls/OrbitControls.js';

//
// Set up the ThreeJS environment.
//
var renderer = new THREE.WebGLRenderer();
renderer.setSize( window.innerWidth, window.innerHeight );
document.body.appendChild( renderer.domElement );

var camera = new THREE.PerspectiveCamera( 45, window.innerWidth / window.innerHeight, 1, 500 );
camera.position.set( 0, 0, 100 );
camera.lookAt( 0, 0, 0 );

var scene = new THREE.Scene();
let controls = new OrbitControls( camera, renderer.domElement );

//
// Create the points.
//
function rose( xLo, xHi, xCount, thetaLo, thetaHi, thetaCount ){
    let vertex = [];
    let colors = [];
    let radius = [];
    for( let x = xLo; x <= xHi; x += ( xHi - xLo ) / xCount ) {
        for( let theta = thetaLo; theta <= thetaHi; theta += ( thetaHi - thetaLo ) / thetaCount ) {
            let phi = ( Math.PI / 2 ) * Math.exp( -theta / ( 8 * Math.PI ) );
            let X = 1 - ( 1 / 2 ) * ( ( 5 / 4 ) * ( 1 - ( ( 3.6 * theta ) % ( 2 * Math.PI ) ) / Math.PI ) ** 2 - 1 / 4 ) ** 2;
            let y = 1.95653 * ( x ** 2 ) * ( ( 1.27689 * x - 1 ) ** 2 ) * Math.sin( phi );
            let r = X * ( x * Math.sin( phi ) + y * Math.cos( phi ) );
            //
            // Fix: Ensure radius is positive, and scale up accordingly...
            //
            if ( 0 < r ) {
                const factor = 20;
                r = r * factor;
                radius.push( r );
                X = X * factor;
                vertex.push( r * Math.sin( theta ), r * Math.cos( theta ), X * ( x * Math.cos( phi ) - y * Math.sin( phi ) ) );
            }
        }
    }
    //
    // For the fun of it, let's adjust the color of the points based on the radius
    // of the point such that the larger the radius, the deeper the red.
    //
    let rLo = Math.min( ...radius );
    let rHi = Math.max( ...radius );
    for ( let i = 0; i < radius.length; i++ ) {
        let clr = new THREE.Color( Math.floor( 0x22 + ( 0xff - 0x22 ) * ( ( radius[ i ] - rLo ) / ( rHi - rLo ) ) ) * 0x10000 + 0x002222 );
        colors.push( clr.r, clr.g, clr.b );
    }
    return [ vertex, colors, radius ];
}

//
// Create the geometry and mesh, and add to the THREE scene.
//
const geometry = new THREE.BufferGeometry();
let [ positions, colors, radius ] = rose( 0, 1, 20, -2 * Math.PI, 15 * Math.PI, 2000 );
geometry.setAttribute( 'position', new THREE.Float32BufferAttribute( positions, 3 ) );
geometry.setAttribute( 'color', new THREE.Float32BufferAttribute( colors, 3 ) );
const material = new THREE.PointsMaterial( { size: 4, vertexColors: true, depthTest: false, sizeAttenuation: false } );
const mesh = new THREE.Points( geometry, material );
scene.add( mesh );

//
// Render...
//
var animate = function () {
    requestAnimationFrame( animate );
    renderer.render( scene, camera );
};
animate();
</script>
Couple of notables:
When calling rose( xLo, xHi, xCount, thetaLo, thetaHi, thetaCount ), the upper range thetaHi can vary from Math.PI to 15 * Math.PI, which varies the number of petals.
Both xCount and thetaCount vary the density of the points. The Wolfram example uses 25 and 576, respectively, but this is to create a geometry mesh, whereas if creating a point field the density of points needs to be increased. Hence, in the code the values are 20 and 2000.
Enjoy!
Presumably the algorithm above is referencing cos() and sin() functions that handle angles in degrees rather than radians, but wherever angles are fed through non-trigonometric transformations (such as the exponential in phi), the degree-based values produce incorrect results.
For example, the following formula using radians...
phi = (Pi/2)Exp[-theta/(8 Pi)]
...has been incorrectly translated to...
phi = ( 180 / 2 ) * Math.exp( -theta / ( 8 * 180 ) )
To test, let's assume theta = 2. Using the original formula in radians...
phi = ( Math.PI / 2 ) * Math.exp( -2 / ( 8 * Math.PI ) )
= 1.451 rad
= 83.12 deg
...and now the incorrect version using degrees, which returns a different angle...
phi = ( 180 / 2 ) * Math.exp( -2 / ( 8 * 180 ) )
= 89.88 deg
= 1.569 rad
A similar issue will occur with the incorrectly translated expression...
pow( ( 1 - ( 3.6 * theta % 360 ) / 180 ), 2 )
Bottom line: Stick to radians.
P.S. Note that there might be other issues, but using radians rather than degrees needs to be corrected foremost...
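The arithmetic in the worked example above can be checked directly in plain JavaScript (values rounded as in the prose):

```javascript
// phi at theta = 2, computed in radians as the original Wolfram formula intends.
const theta = 2;
const phiRad = (Math.PI / 2) * Math.exp(-theta / (8 * Math.PI));
const phiDeg = phiRad * 180 / Math.PI;

// The incorrect degree-based version from the question, for comparison:
const phiWrong = (180 / 2) * Math.exp(-theta / (8 * 180));
```

phiRad comes out near 1.451 rad (about 83.1 degrees), while the degree-based version gives roughly 89.9, a noticeably different angle.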

Pixel Mapping for Rendering DICOM Monochrome2

Trying to render a DICOM MONOCHROME2 image onto an HTML5 canvas.
What is the correct pixel mapping from grayscale to canvas RGB?
Currently using this incorrect mapping:
const ctx = canvas.getContext( '2d' )
const imageData = ctx.createImageData( 512, 512 )
const pixelData = getPixelData( dataSet )
let rgbaIdx = 0
let rgbIdx = 0
let pixelCount = 512 * 512
for ( let idx = 0; idx < pixelCount; idx++ ) {
    imageData.data[ rgbaIdx ] = pixelData[ rgbIdx ]
    imageData.data[ rgbaIdx + 1 ] = pixelData[ rgbIdx + 1 ]
    imageData.data[ rgbaIdx + 2 ] = 0
    imageData.data[ rgbaIdx + 3 ] = 255
    rgbaIdx += 4
    rgbIdx += 2
}
ctx.putImageData( imageData, 0, 0 )
Reading through open source libraries, it is not very clear how this should be done. Could you please suggest a clear introduction to how to render this?
Fig 1. incorrect mapping
Fig 2. correct mapping, dicom displayed in IrfanView
There are two problems here. First, your monochrome data has a higher resolution (i.e. value range) than can be shown in RGB, so you cannot just map the pixel data into the RGB data directly.
The value range depends on the Bits Stored tag - for a typical value of 12 the data range would be 4096. The simplest implementation could just downscale the number, in this case by 16.
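That downscaling can be sketched as a one-liner (assuming the stored values use the full range implied by Bits Stored; the function name is illustrative):

```javascript
// Map a Bits Stored value to the factor that brings the data range
// down to the 8-bit range of an RGB canvas channel.
function downscaleFactor(bitsStored) {
    return 1 << (bitsStored - 8); // e.g. 12 bits stored -> 4096 / 256 = 16
}
```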
The second problem with your code: to represent a monochrome value in RGB, you have to add 3 color components with the same value:
let rgbaIdx = 0
let rgbIdx = 0
let pixelCount = 512 * 512
let scaleFactor = 16 // has to be calculated in real code
for ( let idx = 0; idx < pixelCount; idx++ ) {
    // assume Little Endian
    let pixelValue = pixelData[ rgbIdx ] + pixelData[ rgbIdx + 1 ] * 256
    let displayValue = Math.round( pixelValue / scaleFactor )
    imageData.data[ rgbaIdx ] = displayValue
    imageData.data[ rgbaIdx + 1 ] = displayValue
    imageData.data[ rgbaIdx + 2 ] = displayValue
    imageData.data[ rgbaIdx + 3 ] = 255
    rgbaIdx += 4
    rgbIdx += 2
}
To get a better representation, you have to take the VOI LUT into account instead of just downscaling. In case you have the Window Center / Window Width tags defined, you can calculate the minimum and maximum values and get the scale factor from that range:
let minValue = windowCenter - windowWidth / 2
let maxValue = windowCenter + windowWidth / 2
let scaleFactor = (maxValue - minValue) / 256
...
let pixelValue = pixelData[ rgbIdx ] + pixelData[ rgbIdx + 1 ] * 256
let displayValue = Math.min( Math.round( (pixelValue - minValue) / scaleFactor ), 255 )
...
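Put together, a self-contained sketch of that window/level mapping (clamping both ends; windowCenter/windowWidth would come from the corresponding DICOM tags, and the function name is illustrative):

```javascript
// Map a raw pixel value to 0..255 using Window Center / Window Width.
function applyWindow(pixelValue, windowCenter, windowWidth) {
    const minValue = windowCenter - windowWidth / 2;
    const maxValue = windowCenter + windowWidth / 2;
    const scaleFactor = (maxValue - minValue) / 256;
    const display = Math.round((pixelValue - minValue) / scaleFactor);
    return Math.min(Math.max(display, 0), 255); // clamp below and above
}
```

Values below the window floor clamp to 0 and values above the ceiling clamp to 255, which is what makes windowing different from a plain downscale.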
EDIT: As observed by @WilfRosenbaum: if you don't have a VOI LUT (as suggested by the empty values of WindowCenter and WindowWidth), you best calculate your own. To do this, you have to calculate the min/max values of your pixel data:
let minValue = 1 << 16
let maxValue = 0
for ( let idx = 0; idx < pixelCount; idx++ ) {
    let pixelValue = pixelData[ idx * 2 ] + pixelData[ idx * 2 + 1 ] * 256
    minValue = Math.min( minValue, pixelValue )
    maxValue = Math.max( maxValue, pixelValue )
}
let scaleFactor = (maxValue - minValue) / 256
and then use the same code as shown for the VOI LUT.
A few notes:
if you have a modality LUT, you have to apply it before the VOI LUT; CT images usually have one (RescaleSlope/RescaleIntercept), though this one only has an identity LUT, so you can ignore it
you can have more than one WindowCenter / WindowWidth value pair, or could have a VOI LUT sequence, which is also not considered here
the code is out of my head, so it may have bugs
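Regarding the modality LUT note above: for the common linear case it is just the rescale equation built from the RescaleSlope/RescaleIntercept tag values, applied before any VOI windowing. A minimal sketch (the function name is illustrative):

```javascript
// Convert stored pixel values to modality units (e.g. Hounsfield for CT)
// using the linear modality LUT: value * slope + intercept.
function applyModalityLut(storedValue, rescaleSlope, rescaleIntercept) {
    return storedValue * rescaleSlope + rescaleIntercept;
}
```

With the typical CT values of slope 1 and intercept -1024, a stored value of 1024 maps to 0 HU (water).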
It turned out 4 main things needed to be done (found by reading through the fo-dicom source code):
Prepare Monochrome2 LUT
export const LutMonochrome2 = () => {
    let lut = []
    for ( let idx = 0, byt = 255; idx < 256; idx++, byt-- ) {
        // r, g, b, a
        lut.push( [ byt, byt, byt, 0xff ] )
    }
    return lut
}
Interpret pixel data as a signed short
export const bytesToShortSigned = (bytes) => {
    let byteA = bytes[ 1 ]
    let byteB = bytes[ 0 ]
    let pixelVal
    const sign = byteA & (1 << 7)
    pixelVal = ((byteA & 0xFF) << 8) | (byteB & 0xFF)
    if (sign) {
        pixelVal = 0xFFFF0000 | pixelVal // fill in most significant bits with 1's
    }
    return pixelVal
}
Get Minimum and Maximum Pixel Value and then compute WindowWidth to eventually map each pixel to Monochrome2 color map
export const getMinMax = ( pixelData ) => {
    let pixelCount = pixelData.length
    let min = 0, max = 0
    for ( let idx = 0; idx < pixelCount; idx += 2 ) {
        let pixelVal = bytesToShortSigned( [
            pixelData[ idx ],
            pixelData[ idx + 1 ]
        ] )
        if (pixelVal < min)
            min = pixelVal
        if (pixelVal > max)
            max = pixelVal
    }
    return { min, max }
}
Finally draw
export const draw = ( { dataSet, canvas } ) => {
    const monochrome2 = LutMonochrome2()
    const ctx = canvas.getContext( '2d' )
    const imageData = ctx.createImageData( 512, 512 )
    const pixelData = getPixelData( dataSet )
    let pixelCount = pixelData.length
    let { min: minPixel, max: maxPixel } = getMinMax( pixelData )
    let windowWidth = Math.abs( maxPixel - minPixel );
    let windowCenter = ( maxPixel + minPixel ) / 2.0;
    console.debug( `minPixel: ${minPixel} , maxPixel: ${maxPixel}` )
    let rgbaIdx = 0
    for ( let idx = 0; idx < pixelCount; idx += 2 ) {
        let pixelVal = bytesToShortSigned( [
            pixelData[ idx ],
            pixelData[ idx + 1 ]
        ] )
        let binIdx = Math.floor( (pixelVal - minPixel) / windowWidth * 256 );
        let displayVal = monochrome2[ binIdx ]
        if ( displayVal == null )
            displayVal = [ 0, 0, 0, 255 ]
        imageData.data[ rgbaIdx ] = displayVal[0]
        imageData.data[ rgbaIdx + 1 ] = displayVal[1]
        imageData.data[ rgbaIdx + 2 ] = displayVal[2]
        imageData.data[ rgbaIdx + 3 ] = displayVal[3]
        rgbaIdx += 4
    }
    ctx.putImageData( imageData, 0, 0 )
}

Adjusting mobile accelerometer data to account for phone rotation

I am looking to record mobile accelerometer data (x/y/z) and adjust it to be consistent irrespective of the orientation/rotation of the phone. The use case here is to record and normalize these parameters while driving, in order to detect turns, twists, etc. A key element of this is to ensure that the reported data is independent of how the phone is oriented in the car. I am using gyronorm.js to get the device motion and orientation details.
I've looked at previous answers related to this topic on SO (such as this one) and have tried implementing their approach to get earth coordinates.
However, I am seeing readings completely change as I turn/twist my phone. Can anyone tell me what I'm doing wrong?
This is how I am calculating earth coordinates:
const deg2rad = Math.PI / 180;
let alpha = gdata.do.alpha;
let beta = gdata.do.beta;
let gamma = gdata.do.gamma;
let rotatematrix = this.getRotationMatrix(alpha * deg2rad, beta * deg2rad, gamma * deg2rad);
let relativeacc = new Array(3);
let earthacc = new Array(3);
let inv = new Array(9)
relativeacc[0] = gdata.dm.gx;
relativeacc[1] = gdata.dm.gy;
relativeacc[2] = gdata.dm.gz;
//console.log ("FIRST MATRIX")
mat3.invert(inv,rotatematrix);
//console.log ("SECOND MATRIX")
mat3.multiply(earthacc, inv, relativeacc);
let accEarthX = earthacc[0];
let accEarthY = earthacc[1];
let accEarthZ = earthacc[2];
let aMag = Math.sqrt(accEarthX*accEarthX + accEarthY*accEarthY + accEarthZ*accEarthZ)
console.log (`---RAW DATA --- ` + JSON.stringify(gdata));
console.log (`*** EARTH DATA X=${accEarthX}, Y=${accEarthY} Z=${accEarthZ}`)
This is the getRotationMatrix code
// credit:https://stackoverflow.com/a/36662093/1361529
getRotationMatrix(alpha, beta, gamma) {
    const getScreenOrientation = () => {
        switch (window.screen.orientation || window.screen.mozOrientation) {
            case 'landscape-primary':
                return 90;
            case 'landscape-secondary':
                return -90;
            case 'portrait-secondary':
                return 180;
            case 'portrait-primary':
                return 0;
        }
        if (window.orientation !== undefined)
            return window.orientation;
    };
    const screenOrientation = getScreenOrientation();
    console.log("SCREEN ORIENTATION = " + screenOrientation);
    let out = [];
    let _z = alpha;
    let _x = beta;
    let _y = gamma;
    if (screenOrientation === 90) {
        _x = -gamma;
        _y = beta;
    }
    else if (screenOrientation === -90) {
        _x = gamma;
        _y = -beta;
    }
    else if (screenOrientation === 180) {
        _x = -beta;
        _y = -gamma;
    }
    else if (screenOrientation === 0) {
        _x = beta;
        _y = gamma;
    }
    let cX = Math.cos( _x );
    let cY = Math.cos( _y );
    let cZ = Math.cos( _z );
    let sX = Math.sin( _x );
    let sY = Math.sin( _y );
    let sZ = Math.sin( _z );
    out[0] = cZ * cY + sZ * sX * sY;   // row 1, col 1
    out[1] = cX * sZ;                  // row 2, col 1
    out[2] = -cZ * sY + sZ * sX * cY;  // row 3, col 1
    out[3] = -cY * sZ + cZ * sX * sY;  // row 1, col 2
    out[4] = cZ * cX;                  // row 2, col 2
    out[5] = sZ * sY + cZ * cY * sX;   // row 3, col 2
    out[6] = cX * sY;                  // row 1, col 3
    out[7] = -sX;                      // row 2, col 3
    out[8] = cX * cY;                  // row 3, col 3
    return out;
}

Translating pixels in canvas on sine wave

I am trying to create an image distortion effect on my canvas, but nothing appears to be happening. Here is my code:
self.drawScreen = function (abilityAnimator, elapsed) {
    if (!self.initialized) {
        self.initialized = true;
        self.rawData = abilityAnimator.context.getImageData(self.targetX, self.targetY, self.width, self.height);
        self.initialImgData = self.rawData.data;
    }
    abilityAnimator.drawBackground();
    self.rawData = abilityAnimator.context.getImageData(self.targetX, self.targetY, self.width, self.height);
    var imgData = self.rawData.data, rootIndex, translationIndex, newX;
    for (var y = 0; y < self.height; y++) {
        for (var x = 0; x < self.width; x++) {
            rootIndex = (y * self.height + x) * 4;
            newX = Math.ceil(self.amplitude * Math.sin(self.frequency * (y + elapsed)));
            translationIndex = (y * self.width + newX) * 4;
            imgData[translationIndex + 0] = self.initialImgData[rootIndex + 0];
            imgData[translationIndex + 1] = self.initialImgData[rootIndex + 1];
            imgData[translationIndex + 2] = self.initialImgData[rootIndex + 2];
            imgData[translationIndex + 3] = self.initialImgData[rootIndex + 3];
        }
    }
    abilityAnimator.context.putImageData(self.rawData, self.targetX, self.targetY);
};
abilityAnimator is a wrapper for my canvas object:
abilityAnimator.context = //canvas.context
abilityAnimator.drawBackground = function(){
this.canvas.width = this.canvas.width;
}
elapsed is simply the number of milliseconds since the animation began (elapsed is always <= 2000)
My member variables have the following values:
self.width = 125;
self.height = 125;
self.frequency = 0.5;
self.amplitude = self.width / 4;
self.targetX = //arbitrary value within canvas
self.targetY = //arbitrary value within canvas
I can translate the image to the right very easily so long as there is no sine function; however, introducing these lines:
newX = Math.ceil(self.amplitude * Math.sin(self.frequency * (y + elapsed)));
translationIndex = (y * self.width + newX) * 4;
causes nothing to render at all. The translation indexes don't look unreasonable, and the nature of the sinusoidal function should guarantee that the offset is no greater than 125 / 4 pixels.
Your sin formula is wrong: the frequency will be so high that the result will be seen as noise.
The typical formula to build a sinusoid is :
res = sin ( 2 * PI * frequency * time ) ;
where frequency is in Hz and time in s.
So in js that would translate to :
res = Math.sin ( 2 * Math.PI * f * time_ms * 1e-3 ) ;
you can obviously compute just once the constant factor :
self.frequency = 0.5 * ( 2 * Math.PI * 1e-3 );
// then use
res = Math.sin ( self.frequency * time_ms ) ;
So you see you were 1000 times too fast.
Second issue:
Now that your time frequency is okay, let's fix your spatial frequency: when multiplying the time frequency by y, you're effectively adding apples and cats.
To build the formula, think of how many times (n) you want to cross 2*PI over the height of the canvas.
So :
spatialFrequency = ( n ) * 2 * Math.PI / canvasHeight ;
and your formula becomes :
res = Math.sin ( self.frequency * time_ms + spatialFrequency * y ) ;
You can play with various values with this jsbin i made so you can visualize the effect :
http://jsbin.com/ludizubo/1/edit?js,output
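Putting the temporal and spatial terms above together, here is a sketch of the per-row horizontal offset (the function name rowOffset and its parameters are illustrative, not part of the question's code):

```javascript
// Horizontal offset of row y at time time_ms, combining a temporal
// frequency in Hz with a spatial frequency of `waves` full periods
// over the canvas height. The result stays within +/- amplitude.
function rowOffset(y, time_ms, amplitude, freqHz, waves, canvasHeight) {
    const temporal = 2 * Math.PI * freqHz * time_ms * 1e-3; // Hz -> rad, ms -> s
    const spatial = waves * 2 * Math.PI / canvasHeight * y; // rad per pixel * y
    return amplitude * Math.sin(temporal + spatial);
}
```

Rounding this offset and adding it to x (rather than replacing x, as in the question's translationIndex) then gives a usable per-pixel displacement.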
