Three.js: How do I scale and offset my image textures? - javascript

How do I scale and offset my image textures?
My image dimensions are 1024px x 1024px.
var textureMap = THREE.ImageUtils.loadTexture( 'texture.png' );

Have a look at the texture documentation:
.repeat - How many times the texture is repeated across the surface, in each direction U and V.
.offset - How much a single repetition of the texture is offset from the beginning, in each direction U and V. Typical range is 0.0 to 1.0.
.wrapS - The default is THREE.ClampToEdgeWrapping, where the edge is clamped to the outer edge texels. The other two choices are THREE.RepeatWrapping and THREE.MirroredRepeatWrapping.
.wrapT - The default is THREE.ClampToEdgeWrapping, where the edge is clamped to the outer edge texels. The other two choices are THREE.RepeatWrapping and THREE.MirroredRepeatWrapping.
NOTE: tiling of images in textures only functions if image dimensions are powers of two (2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, ...) in terms of pixels. Individual dimensions need not be equal, but each must be a power of two. This is a limitation of WebGL, not Three.js.
Example of scale:
// scale x2 horizontal
texture.repeat.set(0.5, 1);
// scale x2 vertical
texture.repeat.set(1, 0.5);
// scale x2 proportional
texture.repeat.set(0.5, 0.5);

Offset with texture.offset.set(u, v);, where u and v are fractions of the texture size in the range 0.0 to 1.0 (e.g. 0.67).
There's no dedicated scale method; scaling is controlled by .repeat, which is a property, not a method: texture.repeat.set(countU, countV) sets how many copies fit across each axis, so smaller numbers make the texture appear bigger: compare fitting 2 copies vs. fitting 20 across the same axis.
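Under the hood, three.js effectively maps each UV coordinate through uv' = uv * repeat + offset, so the effect of both settings can be reasoned about with plain arithmetic. A minimal sketch (the transformUv function is illustrative, not a three.js API):

```javascript
// Mimics how three.js maps a UV coordinate through repeat and offset:
// finalUv = uv * repeat + offset, per axis.
function transformUv(uv, repeat, offset) {
  return {
    u: uv.u * repeat.u + offset.u,
    v: uv.v * repeat.v + offset.v
  };
}

// "Scale x2 horizontal": repeat.u = 0.5 squeezes the 0..1 UV range
// into half the texture, so the image appears twice as big.
console.log(transformUv({ u: 1, v: 1 }, { u: 0.5, v: 1 }, { u: 0, v: 0 })); // { u: 0.5, v: 1 }

// Shift the sampled region by a quarter of the texture width:
console.log(transformUv({ u: 0, v: 0 }, { u: 1, v: 1 }, { u: 0.25, v: 0 })); // { u: 0.25, v: 0 }
```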

Related

HTML canvas shows tiny slice of adjacent image in tileset

I've been programming a game using the HTML5 canvas and JavaScript, and when I try to rotate an image, it displays a tiny sliver of the adjacent image from the sprite sheet. I know that I could separate the images in the sprite sheet, but I'm trying to find another way to solve the problem, like changing a setting.
It isn't a big problem, but it's strange that a piece of an adjacent image would be grabbed when it was not specified. The sprites are 16 by 16 pixels.
The line of code that draws the hand sprite is the second drawImage call, and I'm using an index to pick the image from the sheet. Here, the computed source x is 208, which is where the green square is in the image.
c.save();
c.translate(canvas.width / 2, canvas.height / 2);
if (mouseAngle >= 90 || mouseAngle <= -90) {
    c.scale(-1, 1);
    c.rotate(Math.PI / 180 * (180 + -mouseAngle));
} else {
    c.rotate(Math.PI / 180 * mouseAngle);
}
c.drawImage(Images.items, itemID[this.heldItem] * 16, 0, 16, 16, scale, -12 * scale, 16 * scale, 16 * scale);
c.drawImage(Images.player, this.handFramePath[this.dmgIndex] * 16, 0, 16, 16, scale, -12 * scale, 16 * scale, 16 * scale);
c.restore();
Yes, textures do bleed from the cropping of drawImage.
Usually we can try to prevent that by ensuring that our context's transforms land on integer coordinates, so as to avoid any antialiasing, but for rotation... that's more complex.
So the best option in your case (with or without bleeding, actually) is to extract each sprite from the sprite sheet into its own ImageBitmap object.
This way the cropping is done without any transformation interfering, and it has the added benefit of letting the browser optimize the sprites that are used most often (rather than copying from the whole sprite sheet every time).
(async () => {
  const spritesheet = document.querySelector("img");
  await spritesheet.decode();
  const canvas = document.querySelector("canvas");
  const ctx = canvas.getContext("2d");
  // apply some transforms
  ctx.translate(50, 50);
  ctx.rotate(Math.PI / 32);
  ctx.translate(-50, -50);
  // draw only the gray rectangle
  // cropped from the full spritesheet on the left
  // (bleeds in all directions on Chrome)
  ctx.drawImage(spritesheet,
    8, 8, 8, 8,
    50, 50, 50, 50
  );
  // single sprite on the right
  const sprite = await createImageBitmap(spritesheet, 8, 8, 8, 8);
  ctx.drawImage(sprite,
    150, 50, 50, 50
  );
})().catch(console.error);
<p>The original sprite-sheet:<img src="https://i.stack.imgur.com/I1xPN.png"></p>
<canvas></canvas>
createImageBitmap is now supported in all up-to-date browsers, but for older ones (e.g. Safari only exposed it a few weeks ago), I made a polyfill you can find here.
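To drive the extraction, it helps to compute each sprite's source rectangle from its index once and then feed it to createImageBitmap. A small sketch, assuming a single-row sheet of square sprites (the spriteRect helper is illustrative, not a canvas API):

```javascript
// Illustrative helper: computes a sprite's source rectangle from its
// index in a single-row sheet of square sprites, so each sprite can
// be extracted once with createImageBitmap.
function spriteRect(index, spriteSize) {
  return {
    sx: index * spriteSize,
    sy: 0,
    sw: spriteSize,
    sh: spriteSize
  };
}

// With 16x16 sprites, index 13 starts at x = 208, the offset
// mentioned in the question.
console.log(spriteRect(13, 16)); // { sx: 208, sy: 0, sw: 16, sh: 16 }

// Usage sketch in a browser:
// const r = spriteRect(13, 16);
// const bmp = await createImageBitmap(sheet, r.sx, r.sy, r.sw, r.sh);
```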

Get face rotation Three.js

I am getting the intersections of mouse click with Three.js like this
me.vector.set(
(event.clientX / window.innerWidth) * 2 - 1,
-(event.clientY / window.innerHeight) * 2 + 1,
0.5);
me.vector.unproject(app.camera);
me.ray.set(app.camera.position, me.vector.sub(app.camera.position).normalize());
var intersects = me.ray.intersectObjects(app.colliders, false);
So I get the intersections perfectly, with the following properties:
distance, face, faceIndex, object, point, and then I execute a function.
The problem is the following:
I want to detect when I click a face of a cube that acts like a floor; in the example that would be the gray face.
Sorry about my English D:
WebGL defines vertices and faces with coordinates, colors, and normals. A face normal is a normalized vector perpendicular to the face plane (and generally pointing 'outside' the mesh). It defines the face's orientation and enables lighting calculations, for instance. In three.js you can access it via face.normal.
If your floor-like faces are strictly horizontal, their normals are all precisely {x:0, y:1, z:0}. And since normals are normalized, simply checking whether face.normal.y === 1 also guarantees that x and z equal 0.
If your faces are not strictly horizontal, you may need to set a limit angle with the y-axis. You can calculate this angle with var angle=Math.acos(Yaxis.dot(faceNormal)) where Yaxis=new THREE.Vector3(0,1,0).
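That limit-angle test can be sketched with plain vector math; since the dot product with the Y axis (0, 1, 0) reduces to the normal's y component, the check is short (the isFloorFace helper is illustrative, not a three.js API):

```javascript
// Angle between a face normal and the world Y axis via the dot product:
// cos(angle) = yAxis . normal, and with yAxis = (0, 1, 0) the dot
// product reduces to normal.y (both vectors are unit length).
function angleToYAxis(normal) {
  return Math.acos(normal.y);
}

// Hypothetical helper: a face counts as "floor" if its normal is
// within maxAngle radians of straight up.
function isFloorFace(normal, maxAngle) {
  return angleToYAxis(normal) <= maxAngle;
}

console.log(isFloorFace({ x: 0, y: 1, z: 0 }, 0.1)); // true
console.log(isFloorFace({ x: 1, y: 0, z: 0 }, 0.1)); // false (a wall)

// A floor tilted about 8 degrees still passes a 15-degree limit:
console.log(isFloorFace({ x: 0.141, y: 0.99, z: 0 }, Math.PI / 12)); // true
```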

WebGL Vertex Space Coordinates

I'm trying to draw a simple rectangle in WebGL (I use WebGL like a 2D API). The idea is to send attributes (the points) and transform them in the vertex shader to fit the screen. But when I render with the vertex shader gl_Position = vec4(a_point, 0.0, 1.0); I don't see anything. I looked at WebGL Fundamentals for 2D WebGL, and it does not seem to work on my computer. There are rectangles, but I think they are not at the right coordinates!
Can you explain how to draw a rectangle in a custom coordinate system:
-width/2 < x < width/2
-height/2 < y < height/2
and then transform it in the vertex shader so it has the same position in each browser (Chrome, Firefox, Internet Explorer 11)? It seems very simple, but I have not reached my goal. I tried to apply a transformation to the vertices in the vertex shader too. Maybe I can use the viewport?
In WebGL, all clip-space coordinates are floats in the range -1.0 to +1.0. They automatically map onto whatever canvas width and height you have; by default, you don't use absolute pixel numbers in WebGL.
If you set a vertex (point) at x=0.0, y=0.0, it will be in the center of your screen. If one of the coordinates goes below -1 or above +1, it will be outside of your rendered canvas, and some pixels of your triangle won't even be passed to the fragment shader (a fragment is a pixel of your framebuffer).
This guarantees that all of your objects keep the same relative position and size, no matter how many pixels your canvas has.
If you want to have an object of a specific pixel size, you can pre-calculate it like this:
var objectWidth = OBJECT_WIDTH_IN_PIXEL / CANVAS_WIDTH * 2.0;   // clip space spans 2 units per axis
var objectHeight = OBJECT_HEIGHT_IN_PIXEL / CANVAS_HEIGHT * 2.0;
In some cases, as you can see below, it's better to know the half width and height in this floating-point -1.0 to +1.0 space. To position the object's center at the center of your canvas, set up your vertex data like:
GLfloat vVertices[] = {
    -(objectWidth / 2.0), -(objectHeight / 2.0), 0.0, // Point 1, Triangle 1
    +(objectWidth / 2.0), -(objectHeight / 2.0), 0.0, // Point 2, Triangle 1
    -(objectWidth / 2.0), +(objectHeight / 2.0), 0.0, // Point 3, Triangle 1
    -(objectWidth / 2.0), +(objectHeight / 2.0), 0.0, // Point 4, Triangle 2
    +(objectWidth / 2.0), -(objectHeight / 2.0), 0.0, // Point 5, Triangle 2
    +(objectWidth / 2.0), +(objectHeight / 2.0), 0.0  // Point 6, Triangle 2
};
The above vertex data sets up two triangles to create a rectangle in the center of your screen. Many of these things can be found in tutorials like WebGL Fundamentals by Greggman.
Please have a look at this post:
http://games.greggman.com/game/webgl-fundamentals/
It shows how to do 2d drawing with WebGL.
I guess you can easily adapt it to suit your need for custom 2d space coordinates.
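The pixel-to-clip-space conversion described above can be wrapped in a small helper; note that the visible clip-space range spans 2 units per axis (-1 to +1), so the pixel fraction is doubled (the function name is illustrative):

```javascript
// Converts a length in pixels to WebGL clip-space units: the visible
// range per axis spans 2 units (-1 to +1), whatever the canvas size.
function pixelsToClip(sizeInPixels, canvasSizeInPixels) {
  return (sizeInPixels / canvasSizeInPixels) * 2;
}

// A 100px-wide object on a 400px-wide canvas covers a quarter of the
// width: 0.5 of the 2 clip units.
const objectWidth = pixelsToClip(100, 400);
console.log(objectWidth); // 0.5

// Half-extent used to center the quad on the origin:
console.log(objectWidth / 2); // 0.25
```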

Three.js sphere

In the following line of code
mesh = new THREE.Mesh(
    new THREE.SphereGeometry(500, 60, 40),
    new THREE.MeshBasicMaterial({ map: texture, overdraw: true })
);
What are the values 60 and 40 and what is their effect on the sphere?
mesh.scale.x = -1;
What does the above statement do?
I have gone through many articles but none explains the above and even the three.js documentation gives the syntax for use and not the description.
Take a look at the Three.js documentation:
http://threejs.org/docs/#Reference/Extras.Geometries/SphereGeometry
So 60 and 40 are the numbers of segments the sphere is divided into, horizontally and vertically respectively.
mesh.scale.x = -1; would turn the mesh "inside-out" by mirroring it along the x axis.
In general, the scale value for an axis multiplies each vertex's position component on that axis by the scale factor. So scaling on the x axis multiplies the x component of every vertex position.
Try to avoid negative scale factors; they can lead to very undesirable effects (such as inverted face winding). It is also recommended to scale a mesh uniformly on all three axes, something like:
var factor = 2.0;
mesh.scale.set(factor, factor, factor);

3D normal/look-at vector from Euler angles

I'm working on a JavaScript/Canvas 3D FPS-like engine and desperately need a normal vector (or look-at vector, if you will) for near- and far-plane clipping. I have the x- and y-axis rotation angles and can do it easily with only one of them at a time, but I just can't figure out how to get both of them...
The idea is to use this vector to calculate a point in front of the camera. The near and far clipping planes must also be definable by constants, so the vector has to be normalized; I hoped that with only the angles it would be possible to get this vector's length to 1 without normalizing, but that's not the problem.
I don't have any roll (rotation around z axis) so it's that much easier.
My math looks like this:
zNear = 200; // near plane an arbitrary 200 "points" away from the camera position
// normal calculated with only the y rotation angle (vertical axis)
normal = {
  x: Math.sin(rotation.y),
  y: 0,
  z: Math.cos(rotation.y)
};
Then clip a point in 3D space by testing the vector from the plane to it by means of a dot product.
nearPlane = {
  x: position.x + normal.x * zNear,
  y: position.y + normal.y * zNear,
  z: position.z + normal.z * zNear
};
// test a point at x, y, z against the near clipping plane
if (
  (nearPlane.x - x) * normal.x +
  (nearPlane.y - y) * normal.y +
  (nearPlane.z - z) * normal.z < 0
) {
  return;
}
// then project the 3D point to screen
When a point is behind the player, its projection coordinates are mirrored (x = -x, y = -y), so nothing makes sense any more; that's why I'm trying to clip those points out.
I want that green arrow there, but in 3D.
After some intensive brain processing I figured out that
My original look-at vector was (0, 0, 1)
The z-rotation angle (roll) was always 0
There was no reason for the rotation matrix found on Wikipedia not to work
By applying the full rotation matrix on the (0, 0, 1) vector and taking in account that rz = 0 the solution I got was:
normal = {
  x: Math.cos(camera.rotation.x) * Math.sin(camera.rotation.y),
  y: -Math.sin(camera.rotation.x),
  z: Math.cos(camera.rotation.y) * Math.cos(camera.rotation.x)
};
And now everything works perfectly. The error was applying only the x and y rotation matrices without taking into account that rz = 0 for all angles, which changes the combined matrix slightly.
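Wrapping the final formula in a function makes it easy to sanity-check against known angles (the function name is illustrative):

```javascript
// Look-at (normal) vector from x/y Euler angles with roll fixed at 0,
// i.e. the full rotation matrix applied to the base vector (0, 0, 1).
function lookAtNormal(rotX, rotY) {
  return {
    x: Math.cos(rotX) * Math.sin(rotY),
    y: -Math.sin(rotX),
    z: Math.cos(rotY) * Math.cos(rotX)
  };
}

// No rotation: looking straight down the +z axis.
console.log(lookAtNormal(0, 0).z); // 1

// Quarter turn around the vertical axis: looking down +x.
const n = lookAtNormal(0, Math.PI / 2);
console.log(n.x.toFixed(3)); // "1.000"

// The result is already unit length, so no extra normalizing is needed.
console.log(Math.hypot(n.x, n.y, n.z).toFixed(3)); // "1.000"
```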
