HTML canvas shows tiny slice of adjacent image in tileset - javascript

I've been programming a game using the HTML5 canvas and JavaScript, and when I try to rotate an image, it displays a tiny sliver of the adjacent image from the sprite sheet. I know that I could separate the images in the sprite sheet, but I'm trying to find another way to solve the problem, like changing a setting.
It isn't a big problem, but it's strange that a piece of an adjacent image would be grabbed when it was not specified. The sprites are 16 by 16 pixels.
(Two screenshots of the glitch and an image of the sprite sheet were attached to the original question.)
The line that draws the hand sprite is the second drawImage call; I use an index to compute the source x. Here the result is 208, which is where the green square sits in the sprite sheet.
c.save();
c.translate(canvas.width / 2, canvas.height / 2);
if (mouseAngle >= 90 || mouseAngle <= -90) {
    c.scale(-1, 1);
    c.rotate(Math.PI / 180 * (180 + -mouseAngle));
} else {
    c.rotate(Math.PI / 180 * mouseAngle);
}
c.drawImage(Images.items, itemID[this.heldItem] * 16, 0, 16, 16, scale, -12 * scale, 16 * scale, 16 * scale);
c.drawImage(Images.player, this.handFramePath[this.dmgIndex] * 16, 0, 16, 16, scale, -12 * scale, 16 * scale, 16 * scale);
c.restore();

Yes, textures do bleed when cropped by drawImage.
Usually we can prevent that by keeping the context's transforms on integer coordinates, so that no antialiasing kicks in, but for rotation... that's more complex.
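For the axis-aligned case, that integer-coordinate trick can be as simple as rounding the translation before drawing (a sketch, not from the answer; `snap` and `snappedTranslate` are hypothetical helpers, not canvas API):

```javascript
// Snap a value to a whole device pixel so an axis-aligned drawImage
// doesn't antialias across sprite borders in the source crop.
function snap(v) {
  return Math.round(v);
}

// Hypothetical wrapper around ctx.translate using snapped coordinates.
function snappedTranslate(ctx, x, y) {
  ctx.translate(snap(x), snap(y));
}
```

As soon as a rotation is involved, though, sampled pixels land between texels no matter how you round, which is why the ImageBitmap approach below is more robust.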
So the best option in your case (with or without bleeding, actually) is to extract each sprite from the sprite sheet into its own ImageBitmap object.
This way the cropping is done with no transform interfering, and it has the added benefit of letting the browser optimize the sprites that are used most often (rather than shuffling the whole sprite sheet around every time).
(async () => {
  const spritesheet = document.querySelector("img");
  await spritesheet.decode();
  const canvas = document.querySelector("canvas");
  const ctx = canvas.getContext("2d");
  // apply some transforms
  ctx.translate(50, 50);
  ctx.rotate(Math.PI / 32);
  ctx.translate(-50, -50);
  // draw only the gray rectangle
  // cropped from the full spritesheet on the left
  // (bleeds in all directions on Chrome)
  ctx.drawImage(spritesheet,
    8, 8, 8, 8,
    50, 50, 50, 50
  );
  // single sprite on the right
  const sprite = await createImageBitmap(spritesheet, 8, 8, 8, 8);
  ctx.drawImage(sprite,
    150, 50, 50, 50
  );
})().catch(console.error);
<p>The original sprite-sheet:<img src="https://i.stack.imgur.com/I1xPN.png"></p>
<canvas></canvas>
createImageBitmap is now supported in all up-to-date browsers, but for older ones (e.g. Safari only exposed it a few weeks before this answer), I made a polyfill you can find here.

Related

Pixi.js zoom / not re-rendering sprites?

I am learning pixi.js, intending to use it to render a large directed graph with many nodes. I forked a codepen that did something similar, but when I use a simple circle texture for the sprites and zoom in, the edges get blurry:
function makeParicleTexture(props) {
  const gfx = new PIXI.Graphics();
  gfx.beginFill(props.fill);
  gfx.lineStyle(props.strokeWidth, props.stroke);
  gfx.drawCircle(props.size / 2, props.size / 2, 3);
  gfx.endFill();
  const texture = app.renderer.generateTexture(gfx, PIXI.SCALE_MODES.LINEAR, 2);
  return texture;
}
Here is the codepen: https://codepen.io/mfbridges/pen/xxRqGRz
How can I ask pixi.js to re-rasterize the circle at this new zoom level so that edges of the circle are crisp when you zoom in?
I have seen many examples where this happens seemingly automatically (e.g. here: https://bl.ocks.org/pkerpedjiev/cf791db09ebcabaec0669362f4df1776) so I'm confused why it's not working in the codepen above.
It works there because they just draw "Graphics" object (no texture):
var graphics = new PIXI.Graphics();
graphics.beginFill(0xe74c3c); // Red
d3.range(numCircles).map(function() {
  graphics.drawCircle(randomX(), randomY(), 1);
});
stage.addChild(graphics);
"Graphics" objects always "scale" correctly because they are recalculated on each render (I think), but textures are generated once and then reused.
What can help in your code:
make a bigger texture, then scale down the Sprite created from it:
gfx.drawCircle(props.size / 2, props.size / 2, 3)
// make the radius bigger:
gfx.drawCircle(props.size / 2, props.size / 2, 30)
// then, in the "makeSprites" function, add this line somewhere after the Sprite is created:
sprite.scale.set(0.1, 0.1);
^ see: https://www.html5gamedevs.com/topic/16601-resize-texture/
increase "resolution" of texture ( "The resolution / device pixel ratio of the texture being generated." - https://pixijs.download/dev/docs/PIXI.AbstractRenderer.html#generateTexture ) :
const texture = app.renderer.generateTexture(gfx, PIXI.SCALE_MODES.LINEAR, 2);
// change to:
const texture = app.renderer.generateTexture(gfx, PIXI.SCALE_MODES.LINEAR, 20);
You'll need to experiment and decide which approach to use :)
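The trade-off behind both options can be made explicit: to stay crisp up to a maximum zoom factor, bake the texture that many times larger and scale the sprite back down by the inverse. A sketch with a hypothetical helper, using the same numbers as above:

```javascript
// Hypothetical helper: derive the draw radius and sprite scale from
// the maximum zoom you expect, so the rasterized circle always has
// at least one texel per screen pixel.
function textureParamsFor(baseRadius, maxZoom) {
  return {
    drawRadius: baseRadius * maxZoom, // radius to pass to gfx.drawCircle(...)
    spriteScale: 1 / maxZoom          // value for sprite.scale.set(...)
  };
}
```

With `textureParamsFor(3, 10)` you get the answer's numbers back: draw the circle with radius 30 and scale the sprite by 0.1.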

Partial Equirectangular Panorama Three.js

I've got full equirectangular images working well with Three.js:
scene = new THREE.Scene();
geometry = new THREE.SphereBufferGeometry( 500, 60, 40 );
geometry.scale(-1, 1, 1);
material = new THREE.MeshBasicMaterial({ map: texture });
mesh = new THREE.Mesh(geometry, material);
mesh.rotation.y = Math.PI;
scene.add( mesh );
But my images actually only contain 180x180 degrees (half the sphere) so I'm trying to get a square texture partially applied on the spherical mesh without stretching the image across the entire sphere. I figure it has something to do with the texture.offset.xyz parameters, but I haven't been successful. While I can continue to pad my images to conform to 2x1 Equirectangular standards, I'd rather cut this step out of my processing workflow.
Below you'll find both the full equirectangular image and the square one I'm trying to get working. Does anyone have any clues on how to accomplish this? Thanks!
SphereBufferGeometry has more optional parameters:
SphereBufferGeometry(radius, widthSegments, heightSegments, phiStart, phiLength, thetaStart, thetaLength)
radius — sphere radius. Default is 50.
widthSegments — number of horizontal segments. Minimum value is 3, and the default is 8.
heightSegments — number of vertical segments. Minimum value is 2, and the default is 6.
phiStart — specify horizontal starting angle. Default is 0.
phiLength — specify horizontal sweep angle size. Default is Math.PI * 2.
thetaStart — specify vertical starting angle. Default is 0.
thetaLength — specify vertical sweep angle size. Default is Math.PI.
You can use phiStart, phiLength, thetaStart and thetaLength to define a partial sphere, so for a half sphere you can try something like:
geometry = new THREE.SphereBufferGeometry( 500, 60, 40, 0, Math.PI, 0, Math.PI );
reference http://threejs.org/docs/#Reference/Extras.Geometries/SphereBufferGeometry
The error is not in the source code, it's in the texture images: they are both wrong.
A 180-degree fisheye like this:
reprojected into equirectangular will look like this:
Your textures look like a mix of 360x180 equirectangular and a 270° fisheye, which looks like this (with wrong labels/numbers, as I used the same 180° FOV fisheye to create it):

Three.js: How do I scale and offset my image textures?

How do I scale and offset my image textures?
My image's dimensions are 1024px x 1024px.
var textureMap = THREE.ImageUtils.loadTexture( 'texture.png' );
Have a look at the texture documentation:
.repeat - How many times the texture is repeated across the surface, in each direction U and V.
.offset - How much a single repetition of the texture is offset from the beginning, in each direction U and V. Typical range is 0.0 to 1.0.
.wrapS - The default is THREE.ClampToEdgeWrapping, where the edge is clamped to the outer edge texels. The other two choices are THREE.RepeatWrapping and THREE.MirroredRepeatWrapping.
.wrapT - The default is THREE.ClampToEdgeWrapping, where the edge is clamped to the outer edge texels. The other two choices are THREE.RepeatWrapping and THREE.MirroredRepeatWrapping.
NOTE: tiling of images in textures only functions if image dimensions are powers of two (2, 4, 8, 16, 32, 64, 128, 256, 512, 1024, 2048, ...) in terms of pixels. Individual dimensions need not be equal, but each must be a power of two. This is a limitation of WebGL, not Three.js.
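A quick sanity check for that constraint (a small sketch; recent three.js versions expose a similar check as THREE.MathUtils.isPowerOfTwo):

```javascript
// A positive integer is a power of two iff it has exactly one bit set,
// which is equivalent to n & (n - 1) === 0.
function isPowerOfTwo(n) {
  return Number.isInteger(n) && n > 0 && (n & (n - 1)) === 0;
}
```

Running it on each image dimension before enabling RepeatWrapping tells you whether tiling will actually work.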
Example of scale:
// scale x2 horizontal
texture.repeat.set(0.5, 1);
// scale x2 vertical
texture.repeat.set(1, 0.5);
// scale x2 proportional
texture.repeat.set(0.5, 0.5);
Offset with texture.offset.set(u, v);, where u and v are fractions of the texture size (e.g. 0.67).
There's no dedicated scale method; scaling is just the argument to .repeat: texture.repeat.set(countU, countV). Smaller numbers scale the image up: compare fitting 2 repetitions vs 20 across the same axis.
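What repeat and offset do to a UV coordinate can be written as a plain function (illustrative only; three.js applies the equivalent transform when sampling, ignoring rotation here):

```javascript
// Per axis, three.js maps a surface UV coordinate through
// uv' = uv * repeat + offset before looking up the texel.
function transformUV(u, repeatU, offsetU) {
  return u * repeatU + offsetU;
}
```

So with repeat 0.5 the far edge of the surface (u = 1) only reaches the middle of the image, which is why the image appears twice as big, and offset 0.25 shifts where sampling starts.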

Creating a HTML5 infinite canvas

I'm trying to learn how to build an interactive canvas from scratch, but I'm having trouble trying to draw things outside the canvas' viewport (things that exceed canvas.width and canvas.height). The goal is to have something like an infinite canvas where you can scroll and zoom and put things anywhere I want.
I figured out how to properly calculate the right insertion point when we zoomed out, the algorithm works like this:
see if the component should be added off the limits of the canvas;
if so, transform the offset (x, y) adding the distance between the point and the edge of the canvas.
I noticed that the event.pageX and event.pageY values are always given relative to the width and height of the canvas, so when I'm zoomed out these values are smaller than they should be (since I'm viewing more pixels). The transform algorithm works as follows in JS:
// pageX is 430, pageY is 480
// canvas has width=600 height=600
// scale is 0.6, meaning the canvas actually has size 360x360
var currentSize = canvas.width * scale; // 360
pageX = canvas.width + (pageX - currentSize);
pageY = canvas.width + (pageY - currentSize);
Drawing this on paper, the logic seems to work, but the problem is that I (apparently) can't draw outside the canvas limits, so I can't see the result. My questions are:
Is this logic correct?
Is there a way to achieve my goal? (pointing right literature will be very appreciated)
Is canvas the right tool to the job or I should use something else?
The complete example I'm using to learn can be found on this fiddle.
UPDATE
I had another idea to solve the problem: instead of drawing things outside the canvas, I simply translate my points to fit inside the canvas' limits proportionally and then apply scale to zoom in/out. Something like this:
// canvas is 500x500
var points = [
  {text: 'Foo', x: 10, y: 10},
  {text: 'Bar', x: 854, y: 552}, // does not fit inside
  {text: 'Baz', x: 352, y: 440}
];
// The canvas can't show all these points; ideally I'd need
// a canvas of at least 900x600, so I can use a rule of three
// to convert all points from this imaginary canvas to fit
// inside my 500x500:
//   in 900px, x = 10
//   in 500px, x = ?
// hence the formulas `newX = x * 500 / 900` and `newY = y * 500 / 600`
var converted_points = [
  {text: 'Foo', x: 5.55, y: 8.33},
  {text: 'Bar', x: 474.44, y: 460},
  {text: 'Baz', x: 195.55, y: 366.66}
];
After that I suppose I would just need to scale/transform the canvas to do zooming. Is that logic ok?
You can use a library called TiledCanvas.
It provides interfaces to zoom and move, and to draw in an infinite space using all the canvas APIs.
It does require that you tell it where you are drawing.
https://github.com/Squarific/TiledCanvas
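If you'd rather not depend on a library, the usual pattern is to keep every point in world coordinates and let one canvas transform do the pan/zoom, instead of converting the points yourself. A sketch (the `camera` shape and helper names are made up for illustration):

```javascript
// camera.x / camera.y: the world point shown at the canvas's top-left;
// camera.scale: zoom factor (1 = one world unit per screen pixel).
function worldToScreen(camera, wx, wy) {
  return {
    x: (wx - camera.x) * camera.scale,
    y: (wy - camera.y) * camera.scale
  };
}

// Inverse mapping, e.g. to turn a click position into world coordinates.
function screenToWorld(camera, sx, sy) {
  return {
    x: sx / camera.scale + camera.x,
    y: sy / camera.scale + camera.y
  };
}

// Each frame, apply the same mapping once to the context and then draw
// everything with plain world coordinates; points outside the viewport
// are simply clipped, so "drawing outside the canvas" is never needed:
//   ctx.setTransform(camera.scale, 0, 0, camera.scale,
//                    -camera.x * camera.scale, -camera.y * camera.scale);
```

Panning changes camera.x/camera.y, zooming changes camera.scale, and the stored points never have to be rescaled.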

How can I simulate z-index in canvas

I asked a question before: How can I control z-index of canvas objects? We reached a solution there that may not hold up in complicated situations.
I found that canvas doesn't have a z-index system, just simple ordered drawing. So the new question is: how can I simulate a z-index system so this works in complicated situations?
A good answer would solve a big problem.
It's not that canvas doesn't have a z-index; it's that canvas, unlike the HTML page, doesn't keep the objects you've drawn. It just writes to a pixel matrix.
There are basically two types of drawing models:
object ones (usually vector): objects are kept and managed by the engine; they can usually be removed or changed, and they have a z-index
bitmap ones: there are no objects, you just change a pixel matrix
The canvas model is a bitmap one. To draw objects over other ones, you must draw them after, which means you must manage what you draw.
The canvas model is very fast, but if you want a drawing system managing your objects, maybe you need SVG instead.
If you want to use a canvas, then the best is to keep what you draw as objects.
Here's an example I just made: I keep a list of squares, and every second I randomize their z-indexes and redraw them:
var c = document.getElementById('c').getContext('2d');

function Square(x, y, s, color) {
  this.x = x; this.y = y; this.s = s; this.color = color;
  this.zindex = 0;
}
Square.prototype.draw = function(c) {
  c.fillStyle = this.color;
  c.fillRect(this.x, this.y, this.s, this.s);
};

var squares = [
  new Square(10, 10, 50, 'blue'), new Square(40, 10, 40, 'red'), new Square(30, 50, 30, 'green'),
  new Square(60, 30, 40, '#111'), new Square(0, 30, 20, '#444'), new Square(70, 0, 40, '#999')
];

function draw() {
  c.fillStyle = "white";
  c.fillRect(0, 0, 1000, 500);
  for (var i = 0; i < squares.length; i++) squares[i].draw(c);
}

setInterval(function() {
  // give all squares a random z-index
  squares.forEach(function(v) { v.zindex = Math.random(); });
  // sort the list according to zindex
  squares.sort(function(a, b) { return a.zindex - b.zindex; });
  draw();
}, 1000);
Demonstration
The idea is that the square array is sorted according to zindex before each redraw. This could easily be extended to other types of objects.
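For instance (my sketch, not from the answer), any object with a zindex field and a draw(c) method fits the same sort-then-draw loop, so a second shape type can be mixed into the same list:

```javascript
// A second drawable type that can live in the same sorted list as the
// squares above (illustrative; follows the same constructor/prototype
// pattern as Square).
function Circle(x, y, r, color) {
  this.x = x; this.y = y; this.r = r; this.color = color;
  this.zindex = 0;
}
Circle.prototype.draw = function(c) {
  c.fillStyle = this.color;
  c.beginPath();
  c.arc(this.x, this.y, this.r, 0, Math.PI * 2);
  c.fill();
};
```

The draw loop doesn't care what the objects are; it only relies on zindex for ordering and draw(c) for rendering.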
As dystroy has said, z-index is, at its simplest, just an index to tell you in what order to draw things on the canvas, so that they overlap properly.
If you mean to do more than this, say to replicate the existing workings of a browser, then you would have more work to do. The order in which objects are drawn in a browser is a complicated calculation that is driven by:
The DOM tree
Elements' position attributes
Elements' z-index attributes
The canonical source to this is the Elaborate description of Stacking Contexts, part of the CSS specification.
