I use HTML5 canvas elements to resize images in my browser. It turns out that the quality is very low. I found this: Disable Interpolation when Scaling a <canvas>, but it does not help to increase the quality.
Below are my CSS and JS code, as well as the image scaled with Photoshop and scaled with the canvas API.
What do I have to do to get optimal quality when scaling an image in the browser?
Note: I want to scale down a large image to a small one, modify the colors in a canvas and send the result from the canvas to the server.
CSS:
canvas, img {
    image-rendering: optimizeQuality;
    image-rendering: -moz-crisp-edges;
    image-rendering: -webkit-optimize-contrast;
    image-rendering: optimize-contrast;
    -ms-interpolation-mode: nearest-neighbor;
}
JS:
var $img = $('<img>');
var $originalCanvas = $('<canvas>');

$img.load(function() {
    var originalContext = $originalCanvas[0].getContext('2d');
    originalContext.imageSmoothingEnabled = false;
    originalContext.webkitImageSmoothingEnabled = false;
    originalContext.mozImageSmoothingEnabled = false;
    originalContext.drawImage(this, 0, 0, 379, 500);
});
The image resized with Photoshop:
The image resized on canvas:
Edit:
I tried downscaling in more than one step, as proposed in:
Resizing an image in an HTML5 canvas and
Html5 canvas drawImage: how to apply antialiasing
This is the function I have used:
function resizeCanvasImage(img, canvas, maxWidth, maxHeight) {
    var imgWidth = img.width,
        imgHeight = img.height;

    var ratio = 1, ratio1 = 1, ratio2 = 1;
    ratio1 = maxWidth / imgWidth;
    ratio2 = maxHeight / imgHeight;

    // Use the smallest ratio so that the image best fits into the maxWidth x maxHeight box.
    if (ratio1 < ratio2) {
        ratio = ratio1;
    } else {
        ratio = ratio2;
    }

    var canvasContext = canvas.getContext("2d");
    var canvasCopy = document.createElement("canvas");
    var copyContext = canvasCopy.getContext("2d");
    var canvasCopy2 = document.createElement("canvas");
    var copyContext2 = canvasCopy2.getContext("2d");
    canvasCopy.width = imgWidth;
    canvasCopy.height = imgHeight;
    copyContext.drawImage(img, 0, 0);

    // init
    canvasCopy2.width = imgWidth;
    canvasCopy2.height = imgHeight;
    copyContext2.drawImage(canvasCopy, 0, 0, canvasCopy.width, canvasCopy.height, 0, 0, canvasCopy2.width, canvasCopy2.height);

    var rounds = 2;
    var roundRatio = ratio * rounds;
    for (var i = 1; i <= rounds; i++) {
        console.log("Step: " + i);

        // tmp
        canvasCopy.width = imgWidth * roundRatio / i;
        canvasCopy.height = imgHeight * roundRatio / i;
        copyContext.drawImage(canvasCopy2, 0, 0, canvasCopy2.width, canvasCopy2.height, 0, 0, canvasCopy.width, canvasCopy.height);

        // copy back
        canvasCopy2.width = imgWidth * roundRatio / i;
        canvasCopy2.height = imgHeight * roundRatio / i;
        copyContext2.drawImage(canvasCopy, 0, 0, canvasCopy.width, canvasCopy.height, 0, 0, canvasCopy2.width, canvasCopy2.height);
    } // end for

    // copy back to canvas
    canvas.width = imgWidth * roundRatio / rounds;
    canvas.height = imgHeight * roundRatio / rounds;
    canvasContext.drawImage(canvasCopy2, 0, 0, canvasCopy2.width, canvasCopy2.height, 0, 0, canvas.width, canvas.height);
}
Here is the result if I use a 2-step downsizing:
Here is the result if I use a 3-step downsizing:
Here is the result if I use a 4-step downsizing:
Here is the result if I use a 20-step downsizing:
Note: It turns out that going from 1 step to 2 steps brings a large improvement in image quality, but the more steps you add to the process, the fuzzier the image becomes.
Is there a way to solve the problem that the image gets fuzzier the more steps you add?
Edit 2013-10-04: I tried the algorithm of GameAlchemist. Here is the result compared to Photoshop.
Photoshop image:
GameAlchemist's Algorithm:
Since your problem is to downscale your image, there is no point in talking about interpolation (which is about creating pixels). The issue here is downsampling.
To downsample an image, we need to turn each square of p * p pixels in the original image into a single pixel in the destination image.
For performance reasons, browsers do a very simple downsampling: to build the smaller image, they just pick ONE pixel in the source and use its value for the destination, which 'forgets' some details and adds noise.
Yet there's an exception to that: since 2X image downsampling is very simple to compute (average 4 pixels to make one) and is used for retina/HiDPI pixels, this case is handled properly; the browser does make use of 4 pixels to make one.
BUT... if you apply 2X downsampling several times, you'll face the issue that the successive rounding errors add too much noise.
What's worse, you won't always resize by a power of two, and resizing to the nearest power plus one last resize is very noisy.
What you seek is a pixel-perfect downsampling, that is: a re-sampling of the image that takes all input pixels into account, whatever the scale.
To do that we must compute, for each input pixel, its contribution to one, two, or four destination pixels, depending on whether the scaled projection of the input pixel lies entirely inside a destination pixel, overlaps an X border, a Y border, or both. (For instance, at scale = 0.4, source pixel 2 projects onto the interval [0.8, 1.2) on the X axis, so it contributes to both target pixel 0 and target pixel 1.)
(A diagram would be nice here, but I don't have one.)
Here's an example of canvas scaling vs. my pixel-perfect scaling on a 1/3 scale of a wombat.
Notice that the picture might get scaled in your browser, and has been JPEG-compressed by S.O.
Yet we see that there's much less noise, especially in the grass behind the wombat and in the branches on its right. The noise in the fur makes it more contrasted, but it looks like it has white hairs, unlike the source picture.
The right image is less catchy but definitely nicer.
Here's the code to do the pixel-perfect downscaling:
Fiddle result: http://jsfiddle.net/gamealchemist/r6aVp/embedded/result/
Fiddle itself: http://jsfiddle.net/gamealchemist/r6aVp/
// scales the image by (float) scale < 1
// returns a canvas containing the scaled image.
function downScaleImage(img, scale) {
    var imgCV = document.createElement('canvas');
    imgCV.width = img.width;
    imgCV.height = img.height;
    var imgCtx = imgCV.getContext('2d');
    imgCtx.drawImage(img, 0, 0);
    return downScaleCanvas(imgCV, scale);
}

// scales the canvas by (float) scale < 1
// returns a new canvas containing the scaled image.
function downScaleCanvas(cv, scale) {
    if (!(scale < 1) || !(scale > 0)) throw ('scale must be a positive number < 1');
    var sqScale = scale * scale; // square scale = area of source pixel within target
    var sw = cv.width;  // source image width
    var sh = cv.height; // source image height
    var tw = Math.floor(sw * scale); // target image width
    var th = Math.floor(sh * scale); // target image height
    var sx = 0, sy = 0, sIndex = 0;  // source x, y, index within source array
    var tx = 0, ty = 0, yIndex = 0, tIndex = 0; // target x, y, index within target array
    var tX = 0, tY = 0; // rounded tx, ty
    var w = 0, nw = 0, wx = 0, nwx = 0, wy = 0, nwy = 0; // weight / next weight x / y
    // weight is weight of current source point within target.
    // next weight is weight of current source point within next target's point.
    var crossX = false; // does scaled px cross its current px right border ?
    var crossY = false; // does scaled px cross its current px bottom border ?
    var sBuffer = cv.getContext('2d').getImageData(0, 0, sw, sh).data; // source buffer 8 bit rgba
    var tBuffer = new Float32Array(3 * tw * th); // target buffer Float32 rgb
    var sR = 0, sG = 0, sB = 0; // source's current point r, g, b
    /* untested !
    var sA = 0;  // source alpha  */

    for (sy = 0; sy < sh; sy++) {
        ty = sy * scale; // y src position within target
        tY = 0 | ty;     // rounded : target pixel's y
        yIndex = 3 * tY * tw; // line index within target array
        crossY = (tY != (0 | ty + scale));
        if (crossY) { // if pixel is crossing bottom target pixel
            wy = (tY + 1 - ty);          // weight of point within target pixel
            nwy = (ty + scale - tY - 1); // ... within y+1 target pixel
        }
        for (sx = 0; sx < sw; sx++, sIndex += 4) {
            tx = sx * scale; // x src position within target
            tX = 0 | tx;     // rounded : target pixel's x
            tIndex = yIndex + tX * 3; // target pixel index within target array
            crossX = (tX != (0 | tx + scale));
            if (crossX) { // if pixel is crossing target pixel's right
                wx = (tX + 1 - tx);          // weight of point within target pixel
                nwx = (tx + scale - tX - 1); // ... within x+1 target pixel
            }
            sR = sBuffer[sIndex    ]; // retrieving r,g,b for curr src px.
            sG = sBuffer[sIndex + 1];
            sB = sBuffer[sIndex + 2];

            /* !! untested : handling alpha !!
            sA = sBuffer[sIndex + 3];
            if (!sA) continue;
            if (sA != 0xFF) {
                sR = (sR * sA) >> 8;  // or use /256 instead ??
                sG = (sG * sA) >> 8;
                sB = (sB * sA) >> 8;
            }
            */
            if (!crossX && !crossY) { // pixel does not cross
                // just add components weighted by squared scale.
                tBuffer[tIndex    ] += sR * sqScale;
                tBuffer[tIndex + 1] += sG * sqScale;
                tBuffer[tIndex + 2] += sB * sqScale;
            } else if (crossX && !crossY) { // cross on X only
                w = wx * scale;
                // add weighted component for current px
                tBuffer[tIndex    ] += sR * w;
                tBuffer[tIndex + 1] += sG * w;
                tBuffer[tIndex + 2] += sB * w;
                // add weighted component for next (tX+1) px
                nw = nwx * scale;
                tBuffer[tIndex + 3] += sR * nw;
                tBuffer[tIndex + 4] += sG * nw;
                tBuffer[tIndex + 5] += sB * nw;
            } else if (crossY && !crossX) { // cross on Y only
                w = wy * scale;
                // add weighted component for current px
                tBuffer[tIndex    ] += sR * w;
                tBuffer[tIndex + 1] += sG * w;
                tBuffer[tIndex + 2] += sB * w;
                // add weighted component for next (tY+1) px
                nw = nwy * scale;
                tBuffer[tIndex + 3 * tw    ] += sR * nw;
                tBuffer[tIndex + 3 * tw + 1] += sG * nw;
                tBuffer[tIndex + 3 * tw + 2] += sB * nw;
            } else { // crosses both x and y : four target points involved
                // add weighted component for current px
                w = wx * wy;
                tBuffer[tIndex    ] += sR * w;
                tBuffer[tIndex + 1] += sG * w;
                tBuffer[tIndex + 2] += sB * w;
                // for tX + 1; tY px
                nw = nwx * wy;
                tBuffer[tIndex + 3] += sR * nw;
                tBuffer[tIndex + 4] += sG * nw;
                tBuffer[tIndex + 5] += sB * nw;
                // for tX; tY + 1 px
                nw = wx * nwy;
                tBuffer[tIndex + 3 * tw    ] += sR * nw;
                tBuffer[tIndex + 3 * tw + 1] += sG * nw;
                tBuffer[tIndex + 3 * tw + 2] += sB * nw;
                // for tX + 1; tY + 1 px
                nw = nwx * nwy;
                tBuffer[tIndex + 3 * tw + 3] += sR * nw;
                tBuffer[tIndex + 3 * tw + 4] += sG * nw;
                tBuffer[tIndex + 3 * tw + 5] += sB * nw;
            }
        } // end for sx
    } // end for sy

    // create result canvas
    var resCV = document.createElement('canvas');
    resCV.width = tw;
    resCV.height = th;
    var resCtx = resCV.getContext('2d');
    var imgRes = resCtx.getImageData(0, 0, tw, th);
    var tByteBuffer = imgRes.data;
    // convert the Float32 array into a UInt8Clamped array
    var pxIndex = 0;
    for (sIndex = 0, tIndex = 0; pxIndex < tw * th; sIndex += 3, tIndex += 4, pxIndex++) {
        tByteBuffer[tIndex]     = Math.ceil(tBuffer[sIndex]);
        tByteBuffer[tIndex + 1] = Math.ceil(tBuffer[sIndex + 1]);
        tByteBuffer[tIndex + 2] = Math.ceil(tBuffer[sIndex + 2]);
        tByteBuffer[tIndex + 3] = 255;
    }
    // write the result to the canvas.
    resCtx.putImageData(imgRes, 0, 0);
    return resCV;
}
It is quite memory greedy, since a float buffer is required to store the intermediate values of the destination image (if we count the result canvas, this algorithm uses six times the source image's memory).
It is also quite expensive, since each source pixel is processed whatever the destination size, and we have to pay for getImageData / putImageData, which are quite slow as well.
But there's no way to be faster than processing each source value in this case, and the situation is not that bad: for my 740 * 556 image of a wombat, processing takes between 30 and 40 ms.
Fast canvas resample with good quality: http://jsfiddle.net/9g9Nv/442/
Update: version 2.0 (faster, web workers + transferable objects) - https://github.com/viliusle/Hermite-resize
/**
 * Hermite resize - fast image resize/resample using Hermite filter. 1 cpu version!
 *
 * @param {HtmlElement} canvas
 * @param {int} width
 * @param {int} height
 * @param {boolean} resize_canvas if true, canvas will be resized. Optional.
 */
function resample_single(canvas, width, height, resize_canvas) {
    var width_source = canvas.width;
    var height_source = canvas.height;
    width = Math.round(width);
    height = Math.round(height);

    var ratio_w = width_source / width;
    var ratio_h = height_source / height;
    var ratio_w_half = Math.ceil(ratio_w / 2);
    var ratio_h_half = Math.ceil(ratio_h / 2);

    var ctx = canvas.getContext("2d");
    var img = ctx.getImageData(0, 0, width_source, height_source);
    var img2 = ctx.createImageData(width, height);
    var data = img.data;
    var data2 = img2.data;

    for (var j = 0; j < height; j++) {
        for (var i = 0; i < width; i++) {
            var x2 = (i + j * width) * 4;
            var weight = 0;
            var weights = 0;
            var weights_alpha = 0;
            var gx_r = 0;
            var gx_g = 0;
            var gx_b = 0;
            var gx_a = 0;
            var center_y = (j + 0.5) * ratio_h;
            var yy_start = Math.floor(j * ratio_h);
            var yy_stop = Math.ceil((j + 1) * ratio_h);
            for (var yy = yy_start; yy < yy_stop; yy++) {
                var dy = Math.abs(center_y - (yy + 0.5)) / ratio_h_half;
                var center_x = (i + 0.5) * ratio_w;
                var w0 = dy * dy; // pre-calc part of w
                var xx_start = Math.floor(i * ratio_w);
                var xx_stop = Math.ceil((i + 1) * ratio_w);
                for (var xx = xx_start; xx < xx_stop; xx++) {
                    var dx = Math.abs(center_x - (xx + 0.5)) / ratio_w_half;
                    var w = Math.sqrt(w0 + dx * dx);
                    if (w >= 1) {
                        // pixel too far
                        continue;
                    }
                    // hermite filter
                    weight = 2 * w * w * w - 3 * w * w + 1;
                    var pos_x = 4 * (xx + yy * width_source);
                    // alpha
                    gx_a += weight * data[pos_x + 3];
                    weights_alpha += weight;
                    // colors
                    if (data[pos_x + 3] < 255)
                        weight = weight * data[pos_x + 3] / 250;
                    gx_r += weight * data[pos_x];
                    gx_g += weight * data[pos_x + 1];
                    gx_b += weight * data[pos_x + 2];
                    weights += weight;
                }
            }
            data2[x2] = gx_r / weights;
            data2[x2 + 1] = gx_g / weights;
            data2[x2 + 2] = gx_b / weights;
            data2[x2 + 3] = gx_a / weights_alpha;
        }
    }

    // clear and resize canvas
    if (resize_canvas === true) {
        canvas.width = width;
        canvas.height = height;
    } else {
        ctx.clearRect(0, 0, width_source, height_source);
    }

    // draw
    ctx.putImageData(img2, 0, 0);
}
Suggestion 1 - extend the process pipeline
You can use step-down as I describe in the links you refer to, but you appear to be using it in the wrong way.
Step-down is not needed to scale images to ratios above 1:2 (typically, but not limited to). It is where you need to do a drastic down-scaling that you need to split it up in two (and rarely more) steps, depending on the content of the image (in particular where high frequencies such as thin lines occur).
Every time you down-sample an image you will lose details and information. You cannot expect the resulting image to be as clear as the original.
If you then scale down the image in many steps you will lose a lot of information in total, and the result will be poor, as you already noticed.
Try with just one extra step, or at most two.
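For instance, a plain two-step reduction can be as simple as drawing to an intermediate canvas at half size and then to the target size. This is only a sketch; the 0.5 intermediate factor and the function name are my own assumptions, to be tuned to your source sizes:

// Two-step downscale: intermediate half-size canvas, then final target size.
function twoStepDownscale(img, targetWidth, targetHeight) {
    var half = document.createElement('canvas');
    half.width = Math.max(targetWidth, Math.floor(img.width * 0.5));
    half.height = Math.max(targetHeight, Math.floor(img.height * 0.5));
    half.getContext('2d').drawImage(img, 0, 0, half.width, half.height);

    var out = document.createElement('canvas');
    out.width = targetWidth;
    out.height = targetHeight;
    out.getContext('2d').drawImage(half, 0, 0, half.width, half.height,
                                   0, 0, targetWidth, targetHeight);
    return out;
}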
Convolutions
In the case of Photoshop, notice that it applies a convolution after the image has been re-sampled, such as sharpening. It's not just bi-cubic interpolation that takes place, so in order to fully emulate Photoshop we also need to add the steps Photoshop performs (with the default setup).
For this example I will use my original answer that you refer to in your post, but I have added a sharpen convolution to it to improve quality as a post-process (see the demo at the bottom).
Here is the code for adding a sharpen filter (it's based on a generic convolution filter; I put the weight matrix for sharpening inside it, as well as a mix factor to adjust the pronunciation of the effect):
Usage:
sharpen(context, width, height, mixFactor);
The mixFactor is a value in [0.0, 1.0] and allows you to downplay the sharpening effect. Rule of thumb: the smaller the size, the less of the effect is needed.
Function (based on this snippet):
function sharpen(ctx, w, h, mix) {
    var weights = [0, -1, 0, -1, 5, -1, 0, -1, 0],
        katet = Math.round(Math.sqrt(weights.length)),
        half = (katet * 0.5) | 0,
        dstData = ctx.createImageData(w, h),
        dstBuff = dstData.data,
        srcBuff = ctx.getImageData(0, 0, w, h).data,
        x, y = h; // declare x here to avoid an implicit global

    while (y--) {
        x = w;
        while (x--) {
            var sy = y,
                sx = x,
                dstOff = (y * w + x) * 4,
                r = 0, g = 0, b = 0, a = 0;
            for (var cy = 0; cy < katet; cy++) {
                for (var cx = 0; cx < katet; cx++) {
                    var scy = sy + cy - half;
                    var scx = sx + cx - half;
                    if (scy >= 0 && scy < h && scx >= 0 && scx < w) {
                        var srcOff = (scy * w + scx) * 4;
                        var wt = weights[cy * katet + cx];
                        r += srcBuff[srcOff] * wt;
                        g += srcBuff[srcOff + 1] * wt;
                        b += srcBuff[srcOff + 2] * wt;
                        a += srcBuff[srcOff + 3] * wt;
                    }
                }
            }
            dstBuff[dstOff]     = r * mix + srcBuff[dstOff] * (1 - mix);
            dstBuff[dstOff + 1] = g * mix + srcBuff[dstOff + 1] * (1 - mix);
            dstBuff[dstOff + 2] = b * mix + srcBuff[dstOff + 2] * (1 - mix);
            dstBuff[dstOff + 3] = srcBuff[dstOff + 3];
        }
    }
    ctx.putImageData(dstData, 0, 0);
}
The result of using this combination will be:
ONLINE DEMO HERE
Depending on how much of the sharpening you want to add to the blend, you can get results ranging from the default "blurry" to very sharp:
Suggestion 2 - low level algorithm implementation
If you want the best result quality-wise, you'll need to go low-level and consider implementing, for example, this relatively new algorithm.
See Interpolation-Dependent Image Downsampling (2011) from IEEE.
Here is a link to the paper in full (PDF).
There are no implementations of this algorithm in JavaScript at this time, AFAIK, so you're in for a handful if you want to throw yourself at this task.
The essence is (excerpts from the paper):
Abstract
An interpolation oriented adaptive down-sampling algorithm is proposed
for low bit-rate image coding in this paper. Given an image, the
proposed algorithm is able to obtain a low resolution image, from
which a high quality image with the same resolution as the input
image can be interpolated. Different from the traditional
down-sampling algorithms, which are independent from the
interpolation process, the proposed down-sampling algorithm hinges the
down-sampling to the interpolation process. Consequently, the
proposed down-sampling algorithm is able to maintain the original
information of the input image to the largest extent. The down-sampled
image is then fed into JPEG. A total variation (TV) based post
processing is then applied to the decompressed low resolution image.
Ultimately, the processed image is interpolated to maintain the
original resolution of the input image. Experimental results verify
that utilizing the downsampled image by the proposed algorithm, an
interpolated image with much higher quality can be achieved. Besides,
the proposed algorithm is able to achieve superior performance than
JPEG for low bit rate image coding.
(see provided link for all details, formulas etc.)
If you wish to use canvas only, the best result will come from multiple downsteps. But that's not good enough yet. For better quality you need a pure JS implementation. We just released pica - a high-speed downscaler with variable quality/speed. In short, it resizes a 1280*1024px image in ~0.1 s and a 5000*3000px image in 1 s, at the highest quality (Lanczos filter with 3 lobes). Pica has a demo, where you can play with your images and quality levels, and even try it on mobile devices.
Pica does not have an unsharp mask yet, but that will be added very soon. That's much easier than implementing a high-speed convolution filter for resizing.
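As a rough idea of how pica is called (a sketch based on its README; check the repo for the exact API of the version you install):

// Resize a source canvas into a destination canvas, then export as JPEG.
const pica = require('pica')();

pica.resize(sourceCanvas, destCanvas)
    .then(result => pica.toBlob(result, 'image/jpeg', 0.90))
    .then(blob => console.log('resized to', blob.size, 'bytes'));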
Why use the canvas to resize images? Modern browsers all use bicubic interpolation (the same process used by Photoshop, if you're doing it right), and they do it faster than the canvas process. Just specify the image size you want (use only one dimension, height or width, to resize proportionally).
This is supported by most browsers, including later versions of IE. Earlier versions may require browser-specific CSS.
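For example (the file name here is just illustrative):

<!-- Setting only width keeps the aspect ratio; the browser resamples. -->
<img src="photo.jpg" width="400" alt="resized by the browser">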
A simple function (using jQuery) to resize an image would be like this:
function resizeImage(img, percentage) {
    var coeff = percentage / 100,
        width = $(img).width(),
        height = $(img).height();

    return { "width": width * coeff, "height": height * coeff };
}
Then just use the returned value to resize the image in one or both dimensions.
Obviously there are different refinements you could make, but this gets the job done.
Paste the following code into the console of this page and watch what happens to the gravatars:
function resizeImage(img, percentage) {
    var coeff = percentage / 100,
        width = $(img).width(),
        height = $(img).height();

    return { "width": width * coeff, "height": height * coeff };
}

$('.user-gravatar32 img').each(function() {
    var newDimensions = resizeImage(this, 150);
    this.style.width = newDimensions.width + "px";
    this.style.height = newDimensions.height + "px";
});
This is not the right answer for people who really need to resize the image itself, but just want to shrink the file size.
I had a problem with "directly from the camera" pictures that my customers often uploaded as "uncompressed" JPEGs.
Not so well known is that the canvas supports (in most 2017 browsers) changing the quality of the JPEG:
data = canvas.toDataURL('image/jpeg', 0.85); // quality in [0..1], default 0.92
With this trick I could reduce 4k x 3k pics of >10 MB to 1 or 2 MB; sure, it depends on your needs.
Look here
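A minimal end-to-end sketch of that trick (the function name is mine, not from the post):

// Draw an already-loaded <img> at its natural size and re-encode it
// as a smaller JPEG data URL.
function recompressImage(img, quality) {
    var canvas = document.createElement('canvas');
    canvas.width = img.naturalWidth;
    canvas.height = img.naturalHeight;
    canvas.getContext('2d').drawImage(img, 0, 0);
    return canvas.toDataURL('image/jpeg', quality); // e.g. quality = 0.85
}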
I found a solution that doesn't need to access the pixel data directly and loop through it to perform the downsampling. Depending on the size of the image this can be very resource-intensive, and it is better to use the browser's internal algorithms.
The drawImage() function uses a linear-interpolation resampling method. That works well when you are not reducing by more than half the original size.
If you loop to only resize by at most one half at a time, the results are quite good, and much faster than accessing pixel data.
This function downsamples by half at a time until reaching the desired size:
function resize_image( src, dst, type, quality ) {
    var tmp = new Image(),
        canvas, context, cW, cH;

    type = type || 'image/jpeg';
    quality = quality || 0.92;

    cW = src.naturalWidth;
    cH = src.naturalHeight;

    tmp.src = src.src;
    tmp.onload = function() {
        canvas = document.createElement( 'canvas' );

        cW /= 2;
        cH /= 2;

        if ( cW < src.width ) cW = src.width;
        if ( cH < src.height ) cH = src.height;

        canvas.width = cW;
        canvas.height = cH;
        context = canvas.getContext( '2d' );
        context.drawImage( tmp, 0, 0, cW, cH );

        dst.src = canvas.toDataURL( type, quality );

        if ( cW <= src.width || cH <= src.height )
            return;

        tmp.src = dst.src;
    }
}
// The images sent as parameters can be in the DOM or be image objects
resize_image( $( '#original' )[0], $( '#smaller' )[0] );
This is an improved Hermite resize filter that utilises 1 worker so that the window doesn't freeze.
https://github.com/calvintwr/blitz-hermite-resize
const blitz = Blitz.create()

/* Promise */
blitz({
    source: DOM Image/DOM Canvas/jQuery/DataURL/File,
    width: 400,
    height: 600
}).then(output => {
    // handle output
}).catch(error => {
    // handle error
})

/* Await */
let resized = await blitz({...})

/* Old school callback */
const blitz = Blitz.create('callback')
blitz({...}, function(output) {
    // run your callback.
})
Here is a reusable Angular service for high quality image / canvas resizing: https://gist.github.com/fisch0920/37bac5e741eaec60e983
The service supports lanczos convolution and step-wise downscaling. The convolution approach is higher quality at the cost of being slower, whereas the step-wise downscaling approach produces reasonably antialiased results and is significantly faster.
Example usage:
angular.module('demo').controller('ExampleCtrl', function (imageService) {
    // EXAMPLE USAGE
    // NOTE: it's bad practice to access the DOM inside a controller,
    // but this is just to show the example usage.

    // resize by lanczos-sinc filter
    imageService.resize($('#myimg')[0], 256, 256)
        .then(function (resizedImage) {
            // do something with resized image
        })

    // resize by stepping down image size in increments of 2x
    imageService.resizeStep($('#myimg')[0], 256, 256)
        .then(function (resizedImage) {
            // do something with resized image
        })
})
Maybe you can try this, which is what I always use in my projects. This way you can not only get a high-quality image, but also any other element on your canvas.
/*
 * @param canvas => canvas object
 * @param rate => the pixel quality
 */
function setCanvasSize(canvas, rate) {
    const scaleRate = rate;
    canvas.width = window.innerWidth * scaleRate;
    canvas.height = window.innerHeight * scaleRate;
    canvas.style.width = window.innerWidth + 'px';
    canvas.style.height = window.innerHeight + 'px';
    canvas.getContext('2d').scale(scaleRate, scaleRate);
}
Instead of 0.85, if you pass 1.0 you will get the best JPEG quality the encoder offers. (Note that JPEG is still lossy even at 1.0; use image/png if you need a truly lossless result.)
data = canvas.toDataURL('image/jpeg', 1.0);
You can get a clearer and brighter image this way. Please check.
I really try to avoid running through image data, especially on larger images. Thus I came up with a rather simple way to decently reduce image size without any restrictions or limitations, using a few extra steps.
This routine goes down to the lowest possible half-step above the desired target size. Then it scales up to twice the target size, and then down by half again. It sounds funny at first, but the results are astoundingly good, and it gets there swiftly.
function resizeCanvas(canvas, newWidth, newHeight) {
    let ctx = canvas.getContext('2d');
    let buffer = document.createElement('canvas');
    buffer.width = ctx.canvas.width;
    buffer.height = ctx.canvas.height;
    let ctxBuf = buffer.getContext('2d');

    let scaleX = newWidth / ctx.canvas.width;
    let scaleY = newHeight / ctx.canvas.height;

    let scaler = Math.min(scaleX, scaleY);
    //see if target scale is less than half...
    if (scaler < 0.5) {
        //while loop in case target scale is less than quarter...
        while (scaler < 0.5) {
            ctxBuf.canvas.width = ctxBuf.canvas.width * 0.5;
            ctxBuf.canvas.height = ctxBuf.canvas.height * 0.5;
            ctxBuf.scale(0.5, 0.5);
            ctxBuf.drawImage(canvas, 0, 0);
            ctxBuf.setTransform(1, 0, 0, 1, 0, 0);

            ctx.canvas.width = ctxBuf.canvas.width;
            ctx.canvas.height = ctxBuf.canvas.height;
            ctx.drawImage(buffer, 0, 0);

            scaleX = newWidth / ctxBuf.canvas.width;
            scaleY = newHeight / ctxBuf.canvas.height;
            scaler = Math.min(scaleX, scaleY);
        }
        //only if the scaler is now larger than half, double target scale trick...
        if (scaler > 0.5) {
            scaleX *= 2.0;
            scaleY *= 2.0;
            ctxBuf.canvas.width = ctxBuf.canvas.width * scaleX;
            ctxBuf.canvas.height = ctxBuf.canvas.height * scaleY;
            ctxBuf.scale(scaleX, scaleY);
            ctxBuf.drawImage(canvas, 0, 0);
            ctxBuf.setTransform(1, 0, 0, 1, 0, 0);
            scaleX = 0.5;
            scaleY = 0.5;
        }
    } else
        ctxBuf.drawImage(canvas, 0, 0);

    //wrapping things up...
    ctx.canvas.width = newWidth;
    ctx.canvas.height = newHeight;
    ctx.scale(scaleX, scaleY);
    ctx.drawImage(buffer, 0, 0);
    ctx.setTransform(1, 0, 0, 1, 0, 0);
}
context.scale(xScale, yScale)
<canvas id="c"></canvas>
<hr/>
<img id="i" />

<script>
var i = document.getElementById('i');

i.onload = function(){
    var width = this.naturalWidth,
        height = this.naturalHeight,
        canvas = document.getElementById('c'),
        ctx = canvas.getContext('2d');

    canvas.width = Math.floor(width / 2);
    canvas.height = Math.floor(height / 2);

    ctx.scale(0.5, 0.5);
    ctx.drawImage(this, 0, 0);
    ctx.rect(0, 0, 500, 500);
    ctx.stroke();

    // restore original 1x1 scale
    ctx.scale(2, 2);
    ctx.rect(0, 0, 500, 500);
    ctx.stroke();
};

i.src = 'https://static.md/b70a511140758c63f07b618da5137b5d.png';
</script>
DEMO: Resizing images with JS and HTML canvas (demo fiddle).
You'll find 3 different methods of doing this resize, which will help you understand how the code works and why.
https://jsfiddle.net/1b68eLdr/93089/
The full code of the demo, and the TypeScript method you may want to use in your own code, can be found in the GitHub project.
https://github.com/eyalc4/ts-image-resizer
This is the final code:
export class ImageTools {
    base64ResizedImage: string = null;

    constructor() {
    }

    ResizeImage(base64image: string, width: number = 1080, height: number = 1080) {
        let img = new Image();
        img.src = base64image;

        img.onload = () => {
            // Check if the image requires resizing at all
            if (img.height <= height && img.width <= width) {
                this.base64ResizedImage = base64image;
                // TODO: Call method to do something with the resized image
            } else {
                // Make sure the width and height preserve the original aspect ratio and adjust if needed
                if (img.height > img.width) {
                    width = Math.floor(height * (img.width / img.height));
                } else {
                    height = Math.floor(width * (img.height / img.width));
                }

                let resizingCanvas: HTMLCanvasElement = document.createElement('canvas');
                let resizingCanvasContext = resizingCanvas.getContext("2d");

                // Start with the original image size
                resizingCanvas.width = img.width;
                resizingCanvas.height = img.height;

                // Draw the original image on the (temp) resizing canvas
                resizingCanvasContext.drawImage(img, 0, 0, resizingCanvas.width, resizingCanvas.height);

                let curImageDimensions = {
                    width: Math.floor(img.width),
                    height: Math.floor(img.height)
                };

                let halfImageDimensions = {
                    width: null,
                    height: null
                };

                // Quickly reduce the size by 50% each time in a few iterations until the size is less than
                // 2x the target size - the motivation is to reduce the aliasing that would have been
                // created by directly reducing a very big image to a small one
                while (curImageDimensions.width * 0.5 > width) {
                    // Reduce the resizing canvas by half and refresh the image
                    halfImageDimensions.width = Math.floor(curImageDimensions.width * 0.5);
                    halfImageDimensions.height = Math.floor(curImageDimensions.height * 0.5);

                    resizingCanvasContext.drawImage(resizingCanvas, 0, 0, curImageDimensions.width, curImageDimensions.height,
                        0, 0, halfImageDimensions.width, halfImageDimensions.height);

                    curImageDimensions.width = halfImageDimensions.width;
                    curImageDimensions.height = halfImageDimensions.height;
                }

                // Now do the final resize from the resizingCanvas to meet the dimension requirements,
                // directly into the output canvas, which will output the final image
                let outputCanvas: HTMLCanvasElement = document.createElement('canvas');
                let outputCanvasContext = outputCanvas.getContext("2d");

                outputCanvas.width = width;
                outputCanvas.height = height;

                outputCanvasContext.drawImage(resizingCanvas, 0, 0, curImageDimensions.width, curImageDimensions.height,
                    0, 0, width, height);

                // Output the canvas pixels as an image. Params: format, quality
                this.base64ResizedImage = outputCanvas.toDataURL('image/jpeg', 0.85);

                // TODO: Call method to do something with the resized image
            }
        };
    }
}
I have about 120,000 particles (each particle 1px in size) that I need to draw to my canvas in the best and, most importantly, fastest way possible.
How would you do that?
Right now I'm basically getting my pixels into an array, and then I loop over these particles, do some x and y calculations and draw them out using fillRect. But the framerate is like 8-9 fps right now.
Any ideas? An example would be appreciated.
Thank you
LATEST UPDATE (my code)
function init() {
    window.addEventListener("mousemove", onMouseMove);

    let mouseX, mouseY, ratio = 2;

    const canvas = document.getElementById("textCanvas");
    const context = canvas.getContext("2d");
    canvas.width = window.innerWidth * ratio;
    canvas.height = window.innerHeight * ratio;
    canvas.style.width = window.innerWidth + "px";
    canvas.style.height = window.innerHeight + "px";

    context.imageSmoothingEnabled = false;
    context.fillStyle = `rgba(255,255,255,1)`;
    context.setTransform(ratio, 0, 0, ratio, 0, 0);

    const width = canvas.width;
    const height = canvas.height;

    context.font = "normal normal normal 232px EB Garamond";
    context.fillText("howdy", 0, 160);

    var pixels = context.getImageData(0, 0, width, height).data;
    var data32 = new Uint32Array(pixels.buffer);

    const particles = new Array();
    for (var i = 0; i < data32.length; i++) {
        if (data32[i] & 0xffff0000) {
            particles.push({
                x: (i % width),
                y: ((i / width) | 0),
                ox: (i % width),
                oy: ((i / width) | 0),
                xVelocity: 0,
                yVelocity: 0,
                a: pixels[i * 4 + 3] / 255
            });
        }
    }

    /*const particles = Array.from({length: 120000}, () => [
        Math.round(Math.random() * (width - 1)),
        Math.round(Math.random() * (height - 1))
    ]);*/

    function onMouseMove(e) {
        mouseX = parseInt((e.clientX - canvas.offsetLeft) * ratio);
        mouseY = parseInt((e.clientY - canvas.offsetTop) * ratio);
    }

    function frame(timestamp) {
        context.clearRect(0, 0, width, height);

        const imageData = context.getImageData(0, 0, width, height);
        const data = imageData.data;

        for (let i = 0; i < particles.length; i++) {
            const particle = particles[i];
            const index = 4 * Math.round((particle.x + particle.y * width));
            data[index + 0] = 0;
            data[index + 1] = 0;
            data[index + 2] = 0;
            data[index + 3] = 255;
        }
        context.putImageData(imageData, 0, 0);

        for (let i = 0; i < particles.length; i++) {
            const p = particles[i];

            var homeDX = p.ox - p.x;
            var homeDY = p.oy - p.y;

            var cursorForce = 0;
            var cursorAngle = 0;

            if (mouseX && mouseX > 0) {
                var cursorDX = p.ox - mouseX;
                var cursorDY = p.oy - mouseY;
                var cursorDistanceSquared = (cursorDX * cursorDX + cursorDY * cursorDY);
                cursorForce = Math.min(10 / cursorDistanceSquared, 10);
                cursorAngle = -Math.atan2(cursorDY, cursorDX);
            } else {
                cursorForce = 0;
                cursorAngle = 0;
            }

            p.xVelocity += 0.2 * homeDX + cursorForce * Math.cos(cursorAngle);
            p.yVelocity += 0.2 * homeDY + cursorForce * Math.sin(cursorAngle);

            p.xVelocity *= 0.55;
            p.yVelocity *= 0.55;

            p.x += p.xVelocity;
            p.y += p.yVelocity;
        }

        requestAnimationFrame(frame);
    }
    requestAnimationFrame(frame);
}
Moving 7.2 million particles a second
Without WebGL and shaders, if you want 120K particles per frame at 60fps you need a throughput of 120,000 × 60 = 7.2 million points per second. You need a fast machine.
Web workers and multi-core CPUs
Quick solutions: on multi-core machines, web workers give a linear performance increase for each hardware core. E.g. on an 8-core i7 you can run 7 workers sharing data via SharedArrayBuffers (shame that it's all turned off at the moment due to the CPU security risk; see MDN SharedArrayBuffer) and get slightly less than a 7x performance improvement. Note that the benefit comes only from actual hardware cores: JS threads tend to run flat out, and running two workers on one core results in an overall decreased throughput.
Even with shared buffers turned off it is still a viable solution if you are in control of what hardware you run on.
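A minimal sketch of that worker setup (my own illustration, not from the original answer; it assumes SharedArrayBuffer is enabled in the browser, and the file name particles.worker.js is made up):

// main.js: allocate one position buffer shared between page and worker.
const COUNT = 120000;
const sab = new SharedArrayBuffer(COUNT * 2 * Float32Array.BYTES_PER_ELEMENT);
const positions = new Float32Array(sab); // x0, y0, x1, y1, ...
const worker = new Worker('particles.worker.js');
worker.postMessage({ buffer: sab, count: COUNT });
// the render loop reads `positions` directly each frame.

// particles.worker.js: update the shared positions in place.
onmessage = (e) => {
    const pos = new Float32Array(e.data.buffer);
    const count = e.data.count;
    setInterval(() => {
        for (let i = 0; i < count; i++) {
            pos[i * 2] += 0.1;     // placeholder physics: move x
            pos[i * 2 + 1] += 0.1; // move y
        }
    }, 16);
};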
Make a movie.
LOL, but no, it is an option, and there is no upper limit to particle count. Though it's not as interactive as I think you may want. If you are selling something via the FX, you are after a wow, not a how.
Optimize
Easy to say, hard to do. You need to go over the code with a fine-tooth comb. Remember that removing a single line, when running at full speed, is 7.2 million lines removed per second.
I have gone over the code one more time. I cannot test it, so it may or may not work, but it's there to give you ideas. You could even consider using integer-only math: JS can do fixed-point math, and the 32-bit integer size is way more than you need even for a 4K display.
Second optimization pass.
// call this just once outside the animation loop.
const imageData = this.context.getImageData(0, 0, this.width * this.ratio, this.height * this.ratio);
// create a 32bit buffer
const data32 = new Uint32Array(imageData.data.buffer);
const pixel = 0xFF000000; // pixel to fill
const width = imageData.width;

// inside render loop
data32.fill(0); // clear the pixel buffer

// this line may be a problem I have no idea what it does. I would
// hope its only passing a reference and not creating a copy
var particles = this.particleTexts[0].getParticles();

var cDX, cDY, mx, my, p, cDist, cDistSqr, cForce, i;
mx = this.mouseX | 0; // may not need the bitwise floor
my = this.mouseY | 0; // if mouse coords are already integers
if (mx <= 0) { // do the mouse test outside the loop (no mouse: skip cursor force).
               // Needs loop duplication, but at 60fps that's 7.2 million fewer if statements
    for (let i = 0; i < particles.length; i++) {
        var p = particles[i];
        p.xVelocity += 0.2 * (p.ox - p.x);
        p.yVelocity += 0.2 * (p.oy - p.y);
        p.xVelocity *= 0.55;
        p.yVelocity *= 0.55;
        data32[((p.x += p.xVelocity) | 0) + ((p.y += p.yVelocity) | 0) * width] = pixel;
    }
} else {
    for (let i = 0; i < particles.length; i++) {
        var p = particles[i];
        cDX = p.x - mx;
        cDY = p.y - my;
        cDist = Math.sqrt(cDistSqr = cDX * cDX + cDY * cDY + 1);
        cForce = 1000 / (cDistSqr * cDist);
        p.xVelocity += cForce * cDX + 0.2 * (p.ox - p.x);
        p.yVelocity += cForce * cDY + 0.2 * (p.oy - p.y);
        p.xVelocity *= 0.55;
        p.yVelocity *= 0.55;
        data32[((p.x += p.xVelocity) | 0) + ((p.y += p.yVelocity) | 0) * width] = pixel;
    }
}

// put the pixels onto the display.
this.context.putImageData(imageData, 0, 0);
The above is about as much as I can cut it down. (I can't test it, so it may or may not suit your needs.) It may give you a few more frames a second.
Interleaving
Another solution may suit you, and that is to trick the eye. This increases frame rate but not points processed, and it requires that the points be randomly distributed or artifacts will be very noticeable.
Each frame you only process half the particles. Each time you process a particle you calculate the pixel index, set that pixel, and then add the pixel velocity to the pixel index and particle position.
The effect is that each frame only half the particles are moved under force and the other half coast for a frame.
This may double the frame rate. If your particles are very organised and you get clumping/flickering-type artifacts, you can randomize the distribution of particles by applying a random shuffle to the particle array on creation. Again, this needs a good random distribution.
The next snippet is just an example. Each particle needs to hold its pixelIndex into the pixel data32 array. Note that the very first frame needs to be a full frame to set up all the indexes etc.
const interleave = 2; // example only set up for 2 frames,
                      // but can be extended to 3 or 4

// create frameCount outside the loop
frameCount += 1;

// do half of all particles (note: modulo by interleave, not frameCount)
for (let i = frameCount % interleave; i < particles.length; i += interleave) {
    var p = particles[i];
    cDX = p.x - mx;
    cDY = p.y - my;
    cDist = Math.sqrt(cDistSqr = cDX * cDX + cDY * cDY + 1);
    cForce = 1000 / (cDistSqr * cDist);
    p.xVelocity += cForce * cDX + 0.2 * (p.ox - p.x);
    p.yVelocity += cForce * cDY + 0.2 * (p.oy - p.y);
    p.xVelocity *= 0.55;
    p.yVelocity *= 0.55;
    // add the pixel index to the particle's properties
    p.pixelIndex = ((p.x += p.xVelocity) | 0) + ((p.y += p.yVelocity) | 0) * width;
    // write this frame's pixel
    data32[p.pixelIndex] = pixel;
    // speculate the pixel index position in the next frame. This needs to be as simple as possible.
    p.pixelIndex += (p.xVelocity | 0) + (p.yVelocity | 0) * width;
    p.x += p.xVelocity; // as the next frame this particle is coasting,
    p.y += p.yVelocity; // set its position now
}

// do every other particle. Just gets the pixel index and sets it.
// This needs to remain as simple as possible.
for (let i = (frameCount + 1) % interleave; i < particles.length; i += interleave) {
    data32[particles[i].pixelIndex] = pixel;
}
Fewer particles
Seems obvious, but it is often overlooked as a viable solution. Fewer particles does not mean fewer visual elements/pixels.
If you reduce the particle count by a factor of 8, you can create a large buffer of offset indexes at setup. These buffers hold animated pixel movements that closely match the behaviour of real particles.
This can be very effective and gives the illusion that each pixel is in fact independent. But the work is in the pre-processing and in setting up the offset animations.
E.g.:
// for each particle, after updating its position,
// get the index of the pixel
p.pixelIndex = (p.x | 0) + (p.y | 0) * width; // note: floor x and y separately before combining
// add the pixel
data32[p.pixelIndex] = pixel;

// now you get 8 more pixels for the price of one particle
var ind = p.offsetArrayIndex;
// offsetArray is an array of pixel offsets, both negative and positive
data32[p.pixelIndex + offsetArray[ind++]] = pixel;
data32[p.pixelIndex + offsetArray[ind++]] = pixel;
data32[p.pixelIndex + offsetArray[ind++]] = pixel;
data32[p.pixelIndex + offsetArray[ind++]] = pixel;
data32[p.pixelIndex + offsetArray[ind++]] = pixel;
data32[p.pixelIndex + offsetArray[ind++]] = pixel;
data32[p.pixelIndex + offsetArray[ind++]] = pixel;
data32[p.pixelIndex + offsetArray[ind++]] = pixel;

// offsetArray is arranged as sets of 8; each set of 8 is a frame in a
// looping pre-calculated offset animation.
// offsetArray length is 65536, or any bit-maskable size.
p.offsetArrayIndex = ind & 0xFFFF; // ind now points at the first pixel of the
                                   // next set of eight pixels
This and an assortment of other similar tricks can give you the 7.2 million pixels per second you want.
Last note.
Remember that every device these days has a dedicated GPU. Your best bet is to use it; this type of thing is what they are good at.
Computing those particles within a shader on a WebGL context will provide the most performant solution. See e.g. https://www.shadertoy.com/view/MdtGDX for an example.
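For a flavour of what that route looks like, here is a minimal WebGL 1 sketch (mine, not from the answer): the particles become a vertex buffer and the GPU draws them as gl.POINTS.

const gl = document.getElementById('canvas').getContext('webgl');

// One 2D clip-space position per particle, drawn as 1px points.
const vsSource =
    'attribute vec2 aPos;' +
    'void main() { gl_Position = vec4(aPos, 0.0, 1.0); gl_PointSize = 1.0; }';
const fsSource =
    'precision mediump float;' +
    'void main() { gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0); }';

function compile(type, src) {
    const s = gl.createShader(type);
    gl.shaderSource(s, src);
    gl.compileShader(s);
    return s;
}
const prog = gl.createProgram();
gl.attachShader(prog, compile(gl.VERTEX_SHADER, vsSource));
gl.attachShader(prog, compile(gl.FRAGMENT_SHADER, fsSource));
gl.linkProgram(prog);
gl.useProgram(prog);

// 120,000 random positions in clip space [-1, 1].
const positions = new Float32Array(240000);
for (let i = 0; i < positions.length; i++) positions[i] = Math.random() * 2 - 1;

const buf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buf);
gl.bufferData(gl.ARRAY_BUFFER, positions, gl.DYNAMIC_DRAW);

const loc = gl.getAttribLocation(prog, 'aPos');
gl.enableVertexAttribArray(loc);
gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);

gl.clearColor(1, 1, 1, 1);
gl.clear(gl.COLOR_BUFFER_BIT);
gl.drawArrays(gl.POINTS, 0, 120000);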
If you prefer to continue using a 2d context, you could speed up rendering particles by doing so off-screen:
Get the image data array by calling context.getImageData()
Draw pixels by manipulating the data array
Put the data array back with context.putImageData()
A simplified example:
const output = document.getElementById("output");
const canvas = document.getElementById("canvas");
const context = canvas.getContext("2d");
const width = canvas.width;
const height = canvas.height;

const particles = Array.from({length: 120000}, () => [
    Math.round(Math.random() * (width - 1)),
    Math.round(Math.random() * (height - 1))
]);

let previous = 0;

function frame(timestamp) {
    // Print frames per second:
    const delta = timestamp - previous;
    previous = timestamp;
    output.textContent = `${(1000 / delta).toFixed(1)} fps`;

    // Draw particles:
    context.clearRect(0, 0, width, height);
    const imageData = context.getImageData(0, 0, width, height);
    const data = imageData.data;
    for (let i = 0; i < particles.length; i++) {
        const particle = particles[i];
        const index = 4 * (particle[0] + particle[1] * width);
        data[index + 0] = 0;
        data[index + 1] = 0;
        data[index + 2] = 0;
        data[index + 3] = 255;
    }
    context.putImageData(imageData, 0, 0);

    // Move particles randomly:
    for (let i = 0; i < particles.length; i++) {
        const particle = particles[i];
        particle[0] = Math.max(0, Math.min(width - 1, Math.round(particle[0] + Math.random() * 2 - 1)));
        particle[1] = Math.max(0, Math.min(height - 1, Math.round(particle[1] + Math.random() * 2 - 1)));
    }

    requestAnimationFrame(frame);
}
requestAnimationFrame(frame);
<canvas id="canvas" width="500" height="500"></canvas>
<output id="output"></output>
Instead of drawing individual pixels, you might also want to consider drawing and moving a few textures with a lot of particles on each of them. This might come close to a full particle effect at better performance.
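A rough sketch of that texture idea (the names are mine): pre-render one offscreen canvas full of dots once, then move a few copies of it per frame instead of 120,000 individual pixels.

// Build an offscreen canvas holding `count` 1px dots.
function makeParticleTexture(size, count) {
    const tex = document.createElement('canvas');
    tex.width = tex.height = size;
    const tctx = tex.getContext('2d');
    tctx.fillStyle = '#000';
    for (let i = 0; i < count; i++) {
        tctx.fillRect(Math.random() * size, Math.random() * size, 1, 1);
    }
    return tex;
}

const texture = makeParticleTexture(256, 10000);
// per frame: context.drawImage(texture, x, y) for each moving cluster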
Hello wizards of Stack Overflow,
I've been tasked with plotting some data onto a scatter chart in Javascript, but with a twist! These plotted objects need to follow a strict colour code. I've got the plotting part right, but the colour generation has me stumped. The graph has a maximum x and y value of 100 and a minimum of zero (I'm dealing with percentages).
The bottom left corner of the graph should be pure green and the diagonal top right should be pure red with a hazy yellow-orange in the middle. E.g. point (0, 0) should be (red:0 green:255 blue:0), point (100, 100) should be (red:255 green:0 blue:0) and point (50, 50) should be (red:132 green:132 blue:20).
So basically there's a diagonal gradient of green to red running from point (0, 0) to point (100, 100).
| red
| /
| /
| green
Has anyone dealt with a similar situation and perhaps has some sort of algorithm to figure this out?
Regards,
JP
I don't think I can envision what you want to plot exactly, but I think you can solve a lot of things when you split your r, g and b values into different functions.
So instead of func_rgb(x, y) {...} you should make three different functions - one for each color channel - that you can manipulate individually and add the results up afterwards.
function func_r(x, y) {
    return x / 100 * 255;
}

function func_g(x, y) {
    return (1 - x / 100) * 255;
}

function func_b(x, y) {
    // note: ^ is XOR in JavaScript, so use Math.pow for the square
    return (1 - Math.pow(0.5 - x / 100, 2)) * 20;
}

I know these functions only use the X value; however, I think you can figure out the rest of the math on your own.
From the information I have so far, the easiest way to deal with this would be:
Green = 255 - ((255/100)*((x+y)/2))
Red = ((255/100)*((x+y)/2))
This way, if you were at (0, 0) you'd have:
Green = 255 - ((255/100)*((0+0)/2)) = 255
Red = ((255/100)*((0+0)/2)) = 0
Or if you were at (13, 13):
Green = 255 - ((255/100)*((13+13)/2)) = 222
Red = ((255/100)*((13+13)/2)) = 33
It's important to mention that blue seems not to be relevant, and I don't know what should happen if x and y are very different, so I just calculated the mean.
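As a sketch, those two formulas translate directly into a small helper (the function name is mine):

// Map a point (x, y), both in [0, 100], onto the green-to-red gradient
// by averaging the two percentages.
function colourAt(x, y) {
    var mean = (x + y) / 2;
    var red = Math.round(255 / 100 * mean);
    var green = 255 - red;
    return 'rgb(' + red + ',' + green + ',0)';
}

console.log(colourAt(0, 0));   // "rgb(0,255,0)"  pure green
console.log(colourAt(13, 13)); // "rgb(33,222,0)"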
If the lower left corner is completely green (rgb(0, 255, 0)) and the upper right corner is red (rgb(255, 0, 0)), that means the equation for red is 255 / 100 * y and the equation for green is 255 - 255 / 100 * x. This way the upper left corner will be (255, 255, 0) and the lower right will be (0, 0, 0).
<html>
<body>
<canvas id="canvas"></canvas>
<script type="application/javascript">
// Colours that you want each corner to have
var topLeft = {r: 0, g: 0, b: 0};
var topRight = {r: 255, g: 0, b: 0};
var bottomLeft = {r: 0, g: 255, b: 0};
var bottomRight = {r: 0, g: 0, b: 0};
var output = {r: 0, g: 0, b: 0};

// Perform bilinear interpolation on both axes.
// This just means doing linear interpolation for y & x, then combining the results.
// Provide the XY you need the colour for and the size of your graph.
// Note: y is measured from the bottom (matching the flipped fillRect below), and
// each corner is weighted by the area of the opposite sub-rectangle.
function getSpectrumColour(x, y, width, height) {
    var div = 1.0 / (width * height);
    output.r = Math.round(div * (bottomLeft.r * (width - x) * (height - y) + bottomRight.r * x * (height - y)
        + topLeft.r * (width - x) * y + topRight.r * x * y));
    output.g = Math.round(div * (bottomLeft.g * (width - x) * (height - y) + bottomRight.g * x * (height - y)
        + topLeft.g * (width - x) * y + topRight.g * x * y));
    output.b = Math.round(div * (bottomLeft.b * (width - x) * (height - y) + bottomRight.b * x * (height - y)
        + topLeft.b * (width - x) * y + topRight.b * x * y));
    return output;
}

var canvas = null;
var ctx = null;
var graphWidth = 100;
var graphHeight = 100;

window.onload = function() {
    canvas = document.getElementById("canvas");
    canvas.width = graphWidth;
    canvas.height = graphHeight;
    ctx = canvas.getContext("2d");
    var colour = null;
    for (var x = 0; x < graphWidth; ++x) {
        for (var y = 0; y < graphHeight; ++y) {
            colour = getSpectrumColour(x, y, graphWidth, graphHeight);
            ctx.fillStyle = "rgba(" + colour.r + "," + colour.g + "," + colour.b + ",1.0)";
            ctx.fillRect(x, graphHeight - y, 1, 1);
        }
    }
}
</script>
</body>
</html>
I'm taking the following approach to animate a star field across the screen, but I'm stuck for the next part.
JS
var c = document.getElementById('stars'),
    ctx = c.getContext("2d"),
    t = 0; // time

c.width = 300;
c.height = 300;

var w = c.width,
    h = c.height,
    z = c.height,
    v = Math.PI; // angle of vision

(function animate() {
    Math.seedrandom('bg');
    ctx.globalAlpha = 1;

    for (var i = 0; i <= 100; i++) {
        var x = Math.floor(Math.random() * w), // pos x
            y = Math.floor(Math.random() * h), // pos y
            r = Math.random() * 2 + 1,     // radius
            a = Math.random() * 0.5 + 0.5, // alpha
            // linear
            d = (r * a), // depth
            p = t * d;   // pixels per t

        x = x - p; // movement
        x = x - w * Math.floor(x / w); // go around when x < 0

        (function draw(x, y) {
            var gradient = ctx.createRadialGradient(x, y, 0, x + r, y + r, r * 2);
            gradient.addColorStop(0, 'rgba(255, 255, 255, ' + a + ')');
            gradient.addColorStop(1, 'rgba(0, 0, 0, 0)');
            ctx.beginPath();
            ctx.arc(x, y, r, 0, 2 * Math.PI);
            ctx.fillStyle = gradient;
            ctx.fill();
            return draw;
        })(x, y);
    }

    ctx.restore();
    t += 1;
    requestAnimationFrame(function() {
        ctx.clearRect(0, 0, c.width, c.height);
        animate();
    });
})();
HTML
<canvas id="stars"></canvas>
CSS
canvas {
    background: black;
}
JSFiddle
What it does right now is animate each star with a delta X that takes the opacity and size of the star into account, so the smallest ones appear to move slower.
Use p = t; to have all the stars moving at the same speed.
QUESTION
I'm looking for a clearly defined model where the velocities give the illusion of the stars rotating around the spectator, defined in terms of the center of rotation cX, cY and the angle of vision v, which is the fraction of 2π that can be seen (if the center of the circle is not the center of the screen, the radius should be at least the largest portion). I'm struggling to find a way to apply this cosine to the speed of the star movements, even for a centered circle with a rotation of π.
These diagrams might further explain what I'm after:
Centered circle:
Non-centered:
Different angle of vision:
I'm really lost as to how to move forwards. I already stretched myself a bit to get here. Can you please help me with some first steps?
Thanks
UPDATE
I have made some progress with this code:
// linear
d = (r*a)*z, // depth
v = (2*Math.PI)/w,
p = Math.floor( d * Math.cos( t * v ) ); // pixels per t
x = x + p; // movement
x = x - w * Math.floor(x / w); // go around when x < 0
JSFiddle
Where p is the x coordinate of a particle in uniform circular motion and v is the angular velocity, but this generates a pendulum effect. I am not sure how to change these equations to create the illusion that the observer is turning instead.
UPDATE 2:
Almost there. One user at the ##Math freenode channel was kind enough to suggest the following calculation:
// linear
d = (r*a), // depth
p = t*d; // pixels per t
x = x - p; // movement
x = x - w * Math.floor(x / w); // go around when x < 0
x = (x / w) - 0.5;
y = (y / h) - 0.5;
y /= Math.cos(x);
x = (x + 0.5) * w;
y = (y + 0.5) * h;
JSFiddle
This achieves the effect visually, but does not follow a clearly defined model in terms of the variables (it just "hacks" the effect), so I cannot see a straightforward way to do different implementations (changing the center or the angle of vision). The real model might be very similar to this one.
UPDATE 3
Following on from Iftah's response, I was able to use Sylvester to apply a rotation matrix to the stars, which need to be saved in an array first. Also, each star's z coordinate is now determined, and the radius r and opacity a are derived from it instead. The code is substantially different and lengthier, so I am not posting it, but it might be a step in the right direction. I cannot get this to rotate continuously yet. Using matrix operations on each frame seems costly in terms of performance.
JSFiddle
Here's some pseudocode that does what you're talking about.
Make a bunch of stars not too far but not too close (via rejection sampling)
Set up a projection matrix (defines the camera frustum)
Each frame
    Compute our camera rotation angle
    Make a "view" matrix (repositions the stars to be relative to our view)
    Compose the view and projection matrix into the view-projection matrix
    For each star
        Apply the view-projection matrix to give screen star coordinates
        If the star is behind the camera skip it
        Do some math to give the star a nice seeming 'size'
        Scale the star coordinate to the canvas
        Draw the star with its canvas coordinate and size
I've made an implementation of the above. It uses the gl-matrix Javascript library to handle some of the matrix math. It's good stuff. (Fiddle for this is here, or see below.)
var c = document.getElementById('c');
var n = c.getContext('2d');

// View matrix, defines where you're looking
var viewMtx = mat4.create();
// Projection matrix, defines how the view maps onto the screen
var projMtx = mat4.create();

// Adapted from http://stackoverflow.com/questions/18404890/how-to-build-perspective-projection-matrix-no-api
function ComputeProjMtx(field_of_view, aspect_ratio, near_dist, far_dist, left_handed) {
    // We'll assume input parameters are sane.
    field_of_view = field_of_view * Math.PI / 180.0; // Convert degrees to radians
    var frustum_depth = far_dist - near_dist;
    var one_over_depth = 1 / frustum_depth;

    var e11 = 1.0 / Math.tan(0.5 * field_of_view);
    var e00 = (left_handed ? 1 : -1) * e11 / aspect_ratio;
    var e22 = far_dist * one_over_depth;
    var e32 = (-far_dist * near_dist) * one_over_depth;

    return [
        e00, 0, 0, 0,
        0, e11, 0, 0,
        0, 0, e22, e32,
        0, 0, 1, 0
    ];
}

// Make a view matrix with a simple rotation about the Y axis (up-down axis)
function ComputeViewMtx(angle) {
    angle = angle * Math.PI / 180.0; // Convert degrees to radians

    return [
        Math.cos(angle), 0, Math.sin(angle), 0,
        0, 1, 0, 0,
        -Math.sin(angle), 0, Math.cos(angle), 0,
        0, 0, 0, 1
    ];
}

projMtx = ComputeProjMtx(70, c.width / c.height, 1, 200, true);

var angle = 0;
var viewProjMtx = mat4.create();

var minDist = 100;
var maxDist = 1000;

function Star() {
    var d = 0;
    do {
        // Create random points in a cube.. but not too close.
        this.x = Math.random() * maxDist - (maxDist / 2);
        this.y = Math.random() * maxDist - (maxDist / 2);
        this.z = Math.random() * maxDist - (maxDist / 2);
        var d = this.x * this.x +
                this.y * this.y +
                this.z * this.z;
    } while (
        d > maxDist * maxDist / 4 || d < minDist * minDist
    );
    this.dist = Math.sqrt(d);
}

Star.prototype.AsVector = function() {
    return [this.x, this.y, this.z, 1];
}

var stars = [];
for (var i = 0; i < 5000; i++) stars.push(new Star());

var lastLoop = Date.now();

function loop() {
    var now = Date.now();
    var dt = (now - lastLoop) / 1000.0;
    lastLoop = now;

    angle += 30.0 * dt;
    viewMtx = ComputeViewMtx(angle);

    //console.log('---');
    //console.log(projMtx);
    //console.log(viewMtx);

    mat4.multiply(viewProjMtx, projMtx, viewMtx);

    //console.log(viewProjMtx);

    n.beginPath();
    n.rect(0, 0, c.width, c.height);
    n.closePath();
    n.fillStyle = '#000';
    n.fill();

    n.fillStyle = '#fff';
    var v = vec4.create();
    for (var i = 0; i < stars.length; i++) {
        var star = stars[i];
        vec4.transformMat4(v, star.AsVector(), viewProjMtx);
        v[0] /= v[3];
        v[1] /= v[3];
        v[2] /= v[3];
        //v[3] /= v[3];

        if (v[3] < 0) continue;

        var x = (v[0] * 0.5 + 0.5) * c.width;
        var y = (v[1] * 0.5 + 0.5) * c.height;

        // Compute a visual size...
        // This assumes all stars are the same size.
        // It also doesn't scale with canvas size well -- we'd have to take more into account.
        var s = 300 / star.dist;

        n.beginPath();
        n.arc(x, y, s, 0, Math.PI * 2);
        //n.rect(x, y, s, s);
        n.closePath();
        n.fill();
    }

    window.requestAnimationFrame(loop);
}

loop();
<script src="https://cdnjs.cloudflare.com/ajax/libs/gl-matrix/2.3.1/gl-matrix-min.js"></script>
<canvas id="c" width="500" height="500"></canvas>
Some links:
More on projection matrices
gl-matrix
Using view/projection matrices
Update
Here's another version that has keyboard controls. Kinda fun. You can see the difference between rotating and parallax from strafing. Works best full page. (Fiddle for this is here or see below.)
var c = document.getElementById('c');
var n = c.getContext('2d');
// View matrix, defines where you're looking
var viewMtx = mat4.create();
// Projection matrix, defines how the view maps onto the screen
var projMtx = mat4.create();
// Adapted from http://stackoverflow.com/questions/18404890/how-to-build-perspective-projection-matrix-no-api
function ComputeProjMtx(field_of_view, aspect_ratio, near_dist, far_dist, left_handed) {
// We'll assume input parameters are sane.
field_of_view = field_of_view * Math.PI / 180.0; // Convert degrees to radians
var frustum_depth = far_dist - near_dist;
var one_over_depth = 1 / frustum_depth;
var e11 = 1.0 / Math.tan(0.5 * field_of_view);
var e00 = (left_handed ? 1 : -1) * e11 / aspect_ratio;
var e22 = far_dist * one_over_depth;
var e32 = (-far_dist * near_dist) * one_over_depth;
return [
e00, 0, 0, 0,
0, e11, 0, 0,
0, 0, e22, e32,
0, 0, 1, 0
];
}
// Make a view matrix with a simple rotation about the Y axis (up-down axis)
function ComputeViewMtx(angle) {
angle = angle * Math.PI / 180.0; // Convert degrees to radians
return [
Math.cos(angle), 0, Math.sin(angle), 0,
0, 1, 0, 0,
-Math.sin(angle), 0, Math.cos(angle), 0,
0, 0, -250, 1
];
}
projMtx = ComputeProjMtx(70, c.width / c.height, 1, 200, true);
var angle = 0;
var viewProjMtx = mat4.create();
var minDist = 100;
var maxDist = 1000;
function Star() {
var d = 0;
do {
// Create random points in a cube.. but not too close.
this.x = Math.random() * maxDist - (maxDist / 2);
this.y = Math.random() * maxDist - (maxDist / 2);
this.z = Math.random() * maxDist - (maxDist / 2);
var d = this.x * this.x +
this.y * this.y +
this.z * this.z;
} while (
d > maxDist * maxDist / 4 || d < minDist * minDist
);
this.dist = 100;
}
Star.prototype.AsVector = function() {
return [this.x, this.y, this.z, 1];
}
var stars = [];
for (var i = 0; i < 5000; i++) stars.push(new Star());
var lastLoop = Date.now();
var dir = {
up: 0,
down: 1,
left: 2,
right: 3
};
var dirStates = [false, false, false, false];
var shiftKey = false;
var moveSpeed = 100.0;
var turnSpeed = 1.0;
function loop() {
var now = Date.now();
var dt = (now - lastLoop) / 1000.0;
lastLoop = now;
angle += 30.0 * dt;
//viewMtx = ComputeViewMtx(angle);
var tf = mat4.create();
if (dirStates[dir.up]) mat4.translate(tf, tf, [0, 0, moveSpeed * dt]);
if (dirStates[dir.down]) mat4.translate(tf, tf, [0, 0, -moveSpeed * dt]);
if (dirStates[dir.left])
if (shiftKey) mat4.rotate(tf, tf, -turnSpeed * dt, [0, 1, 0]);
else mat4.translate(tf, tf, [moveSpeed * dt, 0, 0]);
if (dirStates[dir.right])
if (shiftKey) mat4.rotate(tf, tf, turnSpeed * dt, [0, 1, 0]);
else mat4.translate(tf, tf, [-moveSpeed * dt, 0, 0]);
mat4.multiply(viewMtx, tf, viewMtx);
//console.log('---');
//console.log(projMtx);
//console.log(viewMtx);
mat4.multiply(viewProjMtx, projMtx, viewMtx);
//console.log(viewProjMtx);
n.beginPath();
n.rect(0, 0, c.width, c.height);
n.closePath();
n.fillStyle = '#000';
n.fill();
n.fillStyle = '#fff';
var v = vec4.create();
for (var i = 0; i < stars.length; i++) {
var star = stars[i];
vec4.transformMat4(v, star.AsVector(), viewProjMtx);
if (v[3] < 0) continue;
var d = Math.sqrt(v[0] * v[0] + v[1] * v[1] + v[2] * v[2]);
v[0] /= v[3];
v[1] /= v[3];
v[2] /= v[3];
//v[3] /= v[3];
var x = (v[0] * 0.5 + 0.5) * c.width;
var y = (v[1] * 0.5 + 0.5) * c.height;
// Compute a visual size...
// This assumes all stars are the same size.
// It also doesn't scale with canvas size well -- we'd have to take more into account.
var s = 300 / d;
n.beginPath();
n.arc(x, y, s, 0, Math.PI * 2);
//n.rect(x, y, s, s);
n.closePath();
n.fill();
}
window.requestAnimationFrame(loop);
}
loop();
function keyToDir(evt) {
var d = -1;
if (evt.keyCode === 38) d = dir.up;
else if (evt.keyCode === 37) d = dir.left;
else if (evt.keyCode === 39) d = dir.right;
else if (evt.keyCode === 40) d = dir.down;
return d;
}
window.onkeydown = function(evt) {
var d = keyToDir(evt);
if (d >= 0) dirStates[d] = true;
if (evt.keyCode === 16) shiftKey = true;
}
window.onkeyup = function(evt) {
var d = keyToDir(evt);
if (d >= 0) dirStates[d] = false;
if (evt.keyCode === 16) shiftKey = false;
}
<script src="https://cdnjs.cloudflare.com/ajax/libs/gl-matrix/2.3.1/gl-matrix-min.js"></script>
<div>Click in this pane. Use up/down/left/right, hold shift + left/right to rotate.</div>
<canvas id="c" width="500" height="500"></canvas>
Update 2
Alain Jacomet Forte asked:
What is your recommended method of creating general purpose 3d and if you would recommend working at the matrices level or not, specifically perhaps to this particular scenario.
Regarding matrices: If you're writing an engine from scratch on any platform, then you're unavoidably going to end up working with matrices since they help generalize the basic 3D mathematics. Even if you use OpenGL/WebGL or Direct3D you're still going to end up making a view and projection matrix and additional matrices for more sophisticated purposes. (Handling normal maps, aligning world objects, skinning, etc...)
Regarding a method of creating general-purpose 3D... Don't. A from-scratch software renderer will run slowly, and it won't be performant without a lot of work. Rely on a hardware-accelerated library to do the heavy lifting. Creating limited 3D engines for specific projects is fun and instructive (e.g. I want a cool animation on my webpage), but when it comes to putting the pixels on the screen for anything serious, you want hardware to handle that as much as you can for performance purposes.
Sadly, the web has no great standard for that yet, but it is coming in WebGL -- learn WebGL, use WebGL. It runs great and works well when it's supported. (You can, however, get away with an awful lot just using CSS 3D transforms and Javascript.)
If you're doing desktop programming, I highly recommend OpenGL via SDL (I'm not sold on SFML yet) -- it's cross-platform and well supported.
If you're programming mobile phones, OpenGL ES is pretty much your only choice (other than a dog-slow software renderer).
If you want to get stuff done rather than writing your own engine from scratch, the de facto standard for the web is Three.js (which I find effective but mediocre). If you want a full game engine, there are some free options these days, the main commercial ones being Unity and Unreal. Irrlicht has been around a long time -- I never had a chance to use it, but I hear it's good.
But if you want to make all the 3D stuff from scratch... I've always found the software renderer in Quake a pretty good case study. Some of that can be found here.
You are resetting the stars' 2D positions each frame, then moving the stars (based on elapsed time and each star's speed) - this is a poor way to achieve your goal. As you discovered, it gets very complex when you try to extend this solution to more scenarios.
A better way is to set each star's 3D location only once (at initialization), then move a "camera" each frame (based on time). When you want to render the 2D image, you calculate each star's location on screen; that location depends on the star's 3D position and the current camera position.
This will allow you to move the camera (in any direction), rotate the camera (to any angle), render the correct star positions AND keep your sanity.
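A minimal sketch of that structure (my names, using a plain perspective divide rather than full matrices; the focal length and counts are arbitrary):
// Stars get a fixed 3D position once, at initialization.
var stars = [];
for (var i = 0; i < 500; i++) {
    stars.push({
        x: Math.random() * 1000 - 500,
        y: Math.random() * 1000 - 500,
        z: Math.random() * 1000 - 500
    });
}
// Only the camera moves between frames.
var camera = { x: 0, y: 0, z: -600 };
function render(ctx, width, height) {
    ctx.fillStyle = '#000';
    ctx.fillRect(0, 0, width, height);
    ctx.fillStyle = '#fff';
    for (var i = 0; i < stars.length; i++) {
        var star = stars[i];
        var dz = star.z - camera.z;
        if (dz <= 0) continue; // star is behind the camera
        // Perspective divide: the screen position depends on the star's
        // 3D position relative to the current camera position.
        var f = 300 / dz; // 300 is an arbitrary focal length
        var x = (star.x - camera.x) * f + width / 2;
        var y = (star.y - camera.y) * f + height / 2;
        ctx.fillRect(x, y, 2, 2);
    }
}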
Since your problem is to downscale your image, there is no point in talking about interpolation (which is about creating pixels). The issue here is downsampling.
To downsample an image, we need to turn each square of p * p pixels in the original image into a single pixel in the destination image.
For performance reasons browsers do a very simple downsampling: to build the smaller image, they just pick ONE pixel in the source and use its value for the destination, which 'forgets' some details and adds noise.
Yet there's an exception to that: since 2X image downsampling is very simple to compute (average 4 pixels to make one) and is used for retina/HiDPI displays, this case is handled properly - the browser does make use of all 4 pixels to build one.
BUT... if you apply a 2X downsampling several times, you'll face the issue that the successive rounding errors add too much noise.
What's worse, you won't always resize by a power of two, and resizing to the nearest power plus one last resize is very noisy.
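To illustrate the 2X case (a sketch of mine, not part of the pixel-perfect algorithm below; assumes even dimensions and the ImageData constructor of reasonably modern browsers):
function halveImageData(src) {
    // src is an ImageData; returns a new ImageData at half size,
    // averaging each 2x2 block of source pixels (plain box filter).
    var tw = src.width >> 1, th = src.height >> 1;
    var dst = new ImageData(tw, th);
    var s = src.data, d = dst.data;
    var row = src.width * 4; // byte offset of one source row
    for (var y = 0; y < th; y++) {
        for (var x = 0; x < tw; x++) {
            var di = (y * tw + x) * 4;
            var si = (2 * y * src.width + 2 * x) * 4;
            for (var c = 0; c < 4; c++) { // r, g, b, a
                d[di + c] = (s[si + c] + s[si + 4 + c] +
                             s[si + row + c] + s[si + row + 4 + c]) / 4;
            }
        }
    }
    return dst;
}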
What you seek is a pixel-perfect downsampling, that is: a re-sampling of the image that takes all input pixels into account, whatever the scale.
To do that we must compute, for each input pixel, its contribution to one, two, or four destination pixels, depending on whether the scaled projection of the input pixel falls entirely inside a destination pixel, overlaps an X border, a Y border, or both.
(A diagram would be nice here, but I don't have one.)
Here's an example of canvas scaling vs. my pixel-perfect scaling on a 1/3 scale of a wombat.
Notice that the picture might get scaled in your browser, and is JPEG-compressed by S.O.
Yet we see that there's much less noise, especially in the grass behind the wombat and in the branches on its right. The noise in the fur makes it more contrasted, but it looks like the wombat has white hairs - unlike in the source picture.
The right image is less catchy but definitely nicer.
Here's the code to do the pixel-perfect downscaling:
fiddle result :
http://jsfiddle.net/gamealchemist/r6aVp/embedded/result/
fiddle itself : http://jsfiddle.net/gamealchemist/r6aVp/
// scales the image by (float) scale < 1
// returns a canvas containing the scaled image.
function downScaleImage(img, scale) {
var imgCV = document.createElement('canvas');
imgCV.width = img.width;
imgCV.height = img.height;
var imgCtx = imgCV.getContext('2d');
imgCtx.drawImage(img, 0, 0);
return downScaleCanvas(imgCV, scale);
}
// scales the canvas by (float) scale < 1
// returns a new canvas containing the scaled image.
function downScaleCanvas(cv, scale) {
if (!(scale < 1) || !(scale > 0)) throw new Error('scale must be a positive number < 1');
var sqScale = scale * scale; // square scale = area of source pixel within target
var sw = cv.width; // source image width
var sh = cv.height; // source image height
var tw = Math.floor(sw * scale); // target image width
var th = Math.floor(sh * scale); // target image height
var sx = 0, sy = 0, sIndex = 0; // source x,y, index within source array
var tx = 0, ty = 0, yIndex = 0, tIndex = 0; // target x,y, x,y index within target array
var tX = 0, tY = 0; // rounded tx, ty
var w = 0, nw = 0, wx = 0, nwx = 0, wy = 0, nwy = 0; // weight / next weight x / y
// weight is weight of current source point within target.
// next weight is weight of current source point within next target's point.
var crossX = false; // does scaled px cross its current px right border ?
var crossY = false; // does scaled px cross its current px bottom border ?
var sBuffer = cv.getContext('2d').
getImageData(0, 0, sw, sh).data; // source buffer 8 bit rgba
var tBuffer = new Float32Array(3 * tw * th); // target buffer Float32 rgb
var sR = 0, sG = 0, sB = 0; // source's current point r,g,b
/* untested !
var sA = 0; //source alpha */
for (sy = 0; sy < sh; sy++) {
ty = sy * scale; // y src position within target
tY = 0 | ty; // rounded : target pixel's y
yIndex = 3 * tY * tw; // line index within target array
crossY = (tY != (0 | ty + scale));
if (crossY) { // if pixel is crossing bottom target pixel
wy = (tY + 1 - ty); // weight of point within target pixel
nwy = (ty + scale - tY - 1); // ... within y+1 target pixel
}
for (sx = 0; sx < sw; sx++, sIndex += 4) {
tx = sx * scale; // x src position within target
tX = 0 | tx; // rounded : target pixel's x
tIndex = yIndex + tX * 3; // target pixel index within target array
crossX = (tX != (0 | tx + scale));
if (crossX) { // if pixel is crossing target pixel's right
wx = (tX + 1 - tx); // weight of point within target pixel
nwx = (tx + scale - tX - 1); // ... within x+1 target pixel
}
sR = sBuffer[sIndex ]; // retrieving r,g,b for curr src px.
sG = sBuffer[sIndex + 1];
sB = sBuffer[sIndex + 2];
/* !! untested : handling alpha !!
sA = sBuffer[sIndex + 3];
if (!sA) continue;
if (sA != 0xFF) {
sR = (sR * sA) >> 8; // or use /256 instead ??
sG = (sG * sA) >> 8;
sB = (sB * sA) >> 8;
}
*/
if (!crossX && !crossY) { // pixel does not cross
// just add components weighted by squared scale.
tBuffer[tIndex ] += sR * sqScale;
tBuffer[tIndex + 1] += sG * sqScale;
tBuffer[tIndex + 2] += sB * sqScale;
} else if (crossX && !crossY) { // cross on X only
w = wx * scale;
// add weighted component for current px
tBuffer[tIndex ] += sR * w;
tBuffer[tIndex + 1] += sG * w;
tBuffer[tIndex + 2] += sB * w;
// add weighted component for next (tX+1) px
nw = nwx * scale;
tBuffer[tIndex + 3] += sR * nw;
tBuffer[tIndex + 4] += sG * nw;
tBuffer[tIndex + 5] += sB * nw;
} else if (crossY && !crossX) { // cross on Y only
w = wy * scale;
// add weighted component for current px
tBuffer[tIndex ] += sR * w;
tBuffer[tIndex + 1] += sG * w;
tBuffer[tIndex + 2] += sB * w;
// add weighted component for next (tY+1) px
nw = nwy * scale;
tBuffer[tIndex + 3 * tw ] += sR * nw;
tBuffer[tIndex + 3 * tw + 1] += sG * nw;
tBuffer[tIndex + 3 * tw + 2] += sB * nw;
} else { // crosses both x and y : four target points involved
// add weighted component for current px
w = wx * wy;
tBuffer[tIndex ] += sR * w;
tBuffer[tIndex + 1] += sG * w;
tBuffer[tIndex + 2] += sB * w;
// for tX + 1; tY px
nw = nwx * wy;
tBuffer[tIndex + 3] += sR * nw;
tBuffer[tIndex + 4] += sG * nw;
tBuffer[tIndex + 5] += sB * nw;
// for tX ; tY + 1 px
nw = wx * nwy;
tBuffer[tIndex + 3 * tw ] += sR * nw;
tBuffer[tIndex + 3 * tw + 1] += sG * nw;
tBuffer[tIndex + 3 * tw + 2] += sB * nw;
// for tX + 1 ; tY +1 px
nw = nwx * nwy;
tBuffer[tIndex + 3 * tw + 3] += sR * nw;
tBuffer[tIndex + 3 * tw + 4] += sG * nw;
tBuffer[tIndex + 3 * tw + 5] += sB * nw;
}
} // end for sx
} // end for sy
// create result canvas
var resCV = document.createElement('canvas');
resCV.width = tw;
resCV.height = th;
var resCtx = resCV.getContext('2d');
var imgRes = resCtx.getImageData(0, 0, tw, th);
var tByteBuffer = imgRes.data;
// convert float32 array into a UInt8Clamped Array
var pxIndex = 0;
for (sIndex = 0, tIndex = 0; pxIndex < tw * th; sIndex += 3, tIndex += 4, pxIndex++) {
tByteBuffer[tIndex] = Math.ceil(tBuffer[sIndex]);
tByteBuffer[tIndex + 1] = Math.ceil(tBuffer[sIndex + 1]);
tByteBuffer[tIndex + 2] = Math.ceil(tBuffer[sIndex + 2]);
tByteBuffer[tIndex + 3] = 255;
}
// writing result to canvas.
resCtx.putImageData(imgRes, 0, 0);
return resCV;
}
It is quite memory greedy, since a float buffer is required to store the intermediate values of the destination image (if we count the result canvas, we use 6 times the source image's memory in this algorithm).
It is also quite expensive, since each source pixel is processed whatever the destination size, and we have to pay for getImageData / putImageData, which are also quite slow.
But there's no way to be faster than processing each source value in this case, and the situation is not that bad: for my 740 * 556 image of a wombat, processing takes between 30 and 40 ms.
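To check the timings on your own images, a trivial harness around downScaleCanvas (assuming img is an already-loaded, same-origin Image) might look like:
// Time downScaleCanvas on an already-loaded image element.
var cv = document.createElement('canvas');
cv.width = img.width;
cv.height = img.height;
cv.getContext('2d').drawImage(img, 0, 0);
console.time('downScaleCanvas');
var scaled = downScaleCanvas(cv, 1 / 3); // 1/3 scale, as in the wombat example
console.timeEnd('downScaleCanvas');
document.body.appendChild(scaled);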
Fast canvas resample with good quality: http://jsfiddle.net/9g9Nv/442/
Update: version 2.0 (faster, web workers + transferable objects) - https://github.com/viliusle/Hermite-resize
/**
 * Hermite resize - fast image resize/resample using Hermite filter. 1 cpu version!
 *
 * @param {HtmlElement} canvas
 * @param {int} width
 * @param {int} height
 * @param {boolean} resize_canvas if true, canvas will be resized. Optional.
 */
function resample_single(canvas, width, height, resize_canvas) {
var width_source = canvas.width;
var height_source = canvas.height;
width = Math.round(width);
height = Math.round(height);
var ratio_w = width_source / width;
var ratio_h = height_source / height;
var ratio_w_half = Math.ceil(ratio_w / 2);
var ratio_h_half = Math.ceil(ratio_h / 2);
var ctx = canvas.getContext("2d");
var img = ctx.getImageData(0, 0, width_source, height_source);
var img2 = ctx.createImageData(width, height);
var data = img.data;
var data2 = img2.data;
for (var j = 0; j < height; j++) {
for (var i = 0; i < width; i++) {
var x2 = (i + j * width) * 4;
var weight = 0;
var weights = 0;
var weights_alpha = 0;
var gx_r = 0;
var gx_g = 0;
var gx_b = 0;
var gx_a = 0;
var center_y = (j + 0.5) * ratio_h;
var yy_start = Math.floor(j * ratio_h);
var yy_stop = Math.ceil((j + 1) * ratio_h);
for (var yy = yy_start; yy < yy_stop; yy++) {
var dy = Math.abs(center_y - (yy + 0.5)) / ratio_h_half;
var center_x = (i + 0.5) * ratio_w;
var w0 = dy * dy; //pre-calc part of w
var xx_start = Math.floor(i * ratio_w);
var xx_stop = Math.ceil((i + 1) * ratio_w);
for (var xx = xx_start; xx < xx_stop; xx++) {
var dx = Math.abs(center_x - (xx + 0.5)) / ratio_w_half;
var w = Math.sqrt(w0 + dx * dx);
if (w >= 1) {
//pixel too far
continue;
}
//hermite filter
weight = 2 * w * w * w - 3 * w * w + 1;
var pos_x = 4 * (xx + yy * width_source);
//alpha
gx_a += weight * data[pos_x + 3];
weights_alpha += weight;
//colors
if (data[pos_x + 3] < 255)
weight = weight * data[pos_x + 3] / 255; // weight color by alpha
gx_r += weight * data[pos_x];
gx_g += weight * data[pos_x + 1];
gx_b += weight * data[pos_x + 2];
weights += weight;
}
}
data2[x2] = gx_r / weights;
data2[x2 + 1] = gx_g / weights;
data2[x2 + 2] = gx_b / weights;
data2[x2 + 3] = gx_a / weights_alpha;
}
}
//clear and resize canvas
if (resize_canvas === true) {
canvas.width = width;
canvas.height = height;
} else {
ctx.clearRect(0, 0, width_source, height_source);
}
//draw
ctx.putImageData(img2, 0, 0);
}
Suggestion 1 - extend the processing pipeline
You can use step-down scaling as I describe in the links you refer to, but you appear to use it in the wrong way.
Step-down scaling is not needed to scale images at ratios above 1:2 (typically, but not limited to). It is where you need to do a drastic down-scaling that you need to split it up into two (and rarely more) steps, depending on the content of the image (in particular where high frequencies such as thin lines occur).
Every time you down-sample an image you lose details and information. You cannot expect the resulting image to be as clear as the original.
If you then scale down the image in many steps you will lose a lot of information in total, and the result will be poor, as you already noticed.
Try with just one extra step, or at most two (see the sketch below).
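As a rough sketch of what such a two-step reduction looks like (my names, not code from the linked answers; assumes the target is smaller than the source):
// Two-step reduction: draw onto an intermediate canvas roughly
// halfway between source and target size, then draw that onto
// the final canvas.
function twoStepDownscale(img, targetW, targetH) {
    var mid = document.createElement('canvas');
    mid.width = Math.max(targetW, Math.round(img.width / 2));
    mid.height = Math.max(targetH, Math.round(img.height / 2));
    mid.getContext('2d').drawImage(img, 0, 0, mid.width, mid.height);

    var out = document.createElement('canvas');
    out.width = targetW;
    out.height = targetH;
    out.getContext('2d').drawImage(mid, 0, 0, mid.width, mid.height,
        0, 0, targetW, targetH);
    return out;
}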
Convolutions
In the case of Photoshop, notice that it applies a convolution after the image has been re-sampled, such as sharpening. It's not just bi-cubic interpolation that takes place, so in order to fully emulate Photoshop we also need to add the steps Photoshop performs (with the default setup).
For this example I will use my original answer that you refer to in your post, but I have added a sharpen convolution to it to improve quality as a post-process (see demo at bottom).
Here is the code for adding a sharpen filter (it's based on a generic convolution filter - I put the weight matrix for sharpen inside it, as well as a mix factor to adjust the strength of the effect):
Usage:
sharpen(context, width, height, mixFactor);
The mixFactor is a value in [0.0, 1.0] that allows you to tone down the sharpen effect - rule of thumb: the smaller the size, the less of the effect is needed.
Function (based on this snippet):
function sharpen(ctx, w, h, mix) {
var weights = [0, -1, 0, -1, 5, -1, 0, -1, 0],
katet = Math.round(Math.sqrt(weights.length)),
half = (katet * 0.5) |0,
dstData = ctx.createImageData(w, h),
dstBuff = dstData.data,
srcBuff = ctx.getImageData(0, 0, w, h).data,
y = h, x;
while(y--) {
x = w;
while(x--) {
var sy = y,
sx = x,
dstOff = (y * w + x) * 4,
r = 0, g = 0, b = 0, a = 0;
for (var cy = 0; cy < katet; cy++) {
for (var cx = 0; cx < katet; cx++) {
var scy = sy + cy - half;
var scx = sx + cx - half;
if (scy >= 0 && scy < h && scx >= 0 && scx < w) {
var srcOff = (scy * w + scx) * 4;
var wt = weights[cy * katet + cx];
r += srcBuff[srcOff] * wt;
g += srcBuff[srcOff + 1] * wt;
b += srcBuff[srcOff + 2] * wt;
a += srcBuff[srcOff + 3] * wt;
}
}
}
dstBuff[dstOff] = r * mix + srcBuff[dstOff] * (1 - mix);
dstBuff[dstOff + 1] = g * mix + srcBuff[dstOff + 1] * (1 - mix);
dstBuff[dstOff + 2] = b * mix + srcBuff[dstOff + 2] * (1 - mix);
dstBuff[dstOff + 3] = srcBuff[dstOff + 3];
}
}
ctx.putImageData(dstData, 0, 0);
}
The result of using this combination will be:
ONLINE DEMO HERE
Depending on how much of the sharpening you want to add to the blend, you can get results ranging from the default "blurry" to very sharp.
Suggestion 2 - low level algorithm implementation
If you want to get the best result quality-wise, you'll need to go low-level and consider implementing, for example, this brand new algorithm to do this.
See Interpolation-Dependent Image Downsampling (2011) from IEEE.
Here is a link to the paper in full (PDF).
There are no implementations of this algorithm in JavaScript at this time, AFAIK, so you're in for a handful if you want to throw yourself at this task.
The essence is (excerpts from the paper):
Abstract
An interpolation oriented adaptive down-sampling algorithm is proposed
for low bit-rate image coding in this paper. Given an image, the
proposed algorithm is able to obtain a low resolution image, from
which a high quality image with the same resolution as the input
image can be interpolated. Different from the traditional
down-sampling algorithms, which are independent from the
interpolation process, the proposed down-sampling algorithm hinges the
down-sampling to the interpolation process. Consequently, the
proposed down-sampling algorithm is able to maintain the original
information of the input image to the largest extent. The down-sampled
image is then fed into JPEG. A total variation (TV) based post
processing is then applied to the decompressed low resolution image.
Ultimately, the processed image is interpolated to maintain the
original resolution of the input image. Experimental results verify
that utilizing the downsampled image by the proposed algorithm, an
interpolated image with much higher quality can be achieved. Besides,
the proposed algorithm is able to achieve superior performance than
JPEG for low bit rate image coding.
(see provided link for all details, formulas etc.)
If you wish to use canvas only, the best result will be with multiple down-steps. But that's not good enough yet. For better quality you need a pure JS implementation. We just released pica - a high-speed downscaler with variable quality/speed. In short, it resizes a 1280*1024px image in ~0.1 s, and a 5000*3000px image in 1 s, with the highest quality (Lanczos filter with 3 lobes). Pica has a demo, where you can play with your images and quality levels, and even try it on mobile devices.
Pica does not have an unsharp mask yet, but that will be added very soon. That's much easier than implementing a high-speed convolution filter for resizing.
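For reference, current pica versions expose a promise-based resize call along these lines (a hedged sketch, not from this answer; the API has changed across versions, so check the project README):
// Assumes the pica browser build is loaded; window.pica is a factory.
var resizer = window.pica();
var from = document.getElementById('source'); // a canvas or image element
var to = document.createElement('canvas');
to.width = 379;
to.height = 500;
resizer.resize(from, to).then(function (result) {
    document.body.appendChild(result); // result is the target canvas
});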
Why use the canvas to resize images? Modern browsers all use bicubic interpolation — the same process used by Photoshop (if you're doing it right) — and they do it faster than the canvas process. Just specify the image size you want (use only one dimension, height or width, to resize proportionally).
This is supported by most browsers, including later versions of IE. Earlier versions may require browser-specific CSS.
A simple function (using jQuery) to resize an image would be like this:
function resizeImage(img, percentage) {
var coeff = percentage/100,
width = $(img).width(),
height = $(img).height();
return {"width": width*coeff, "height": height*coeff}
}
Then just use the returned value to resize the image in one or both dimensions.
Obviously there are different refinements you could make, but this gets the job done.
Paste the following code into the console of this page and watch what happens to the gravatars:
function resizeImage(img, percentage) {
var coeff = percentage/100,
width = $(img).width(),
height = $(img).height();
return {"width": width*coeff, "height": height*coeff}
}
$('.user-gravatar32 img').each(function(){
var newDimensions = resizeImage( this, 150);
this.style.width = newDimensions.width + "px";
this.style.height = newDimensions.height + "px";
});
This is not the right answer for people who really need to resize the image itself, but just to shrink the file size.
I had a problem with "directly from the camera" pictures that my customers often uploaded as "uncompressed" JPEGs.
Not so well known is that canvas supports (in most browsers as of 2017) changing the quality of the JPEG:
data = canvas.toDataURL('image/jpeg', 0.85); // quality in [0..1], default 0.92
With this trick I could reduce 4k x 3k pics of >10 MB to 1 or 2 MB; it surely depends on your needs.
look here
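Since the original goal is to send the result from the canvas to the server, canvas.toBlob (where supported) avoids the base64 overhead of toDataURL. A minimal sketch, with '/upload' as a placeholder endpoint:
// Encode the canvas as a JPEG Blob at ~85% quality and upload it.
canvas.toBlob(function (blob) {
    var form = new FormData();
    form.append('image', blob, 'resized.jpg');
    fetch('/upload', { method: 'POST', body: form });
}, 'image/jpeg', 0.85);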
I found a solution that doesn't need to access the pixel data directly and loop through it to perform the downsampling. Depending on the size of the image this can be very resource intensive, and it is better to use the browser's internal algorithms.
The drawImage() function uses a linear-interpolation, nearest-neighbour resampling method. That works well when you are not resizing down by more than half the original size.
If you loop to resize by at most one half at a time, the results are quite good, and much faster than accessing pixel data.
This function downsamples by half at a time until reaching the desired size:
function resize_image( src, dst, type, quality ) {
var tmp = new Image(),
canvas, context, cW, cH;
type = type || 'image/jpeg';
quality = quality || 0.92;
cW = src.naturalWidth;
cH = src.naturalHeight;
tmp.src = src.src;
tmp.onload = function() {
canvas = document.createElement( 'canvas' );
cW /= 2;
cH /= 2;
if ( cW < src.width ) cW = src.width;
if ( cH < src.height ) cH = src.height;
canvas.width = cW;
canvas.height = cH;
context = canvas.getContext( '2d' );
context.drawImage( tmp, 0, 0, cW, cH );
dst.src = canvas.toDataURL( type, quality );
if ( cW <= src.width || cH <= src.height )
return;
tmp.src = dst.src;
}
}
// The images sent as parameters can be in the DOM or be image objects
resize_image( $( '#original' )[0], $( '#smaller' )[0] );
This is an improved Hermite resize filter that utilises one worker so that the window doesn't freeze.
https://github.com/calvintwr/blitz-hermite-resize
const blitz = Blitz.create()
/* Promise */
blitz({
source: DOM Image/DOM Canvas/jQuery/DataURL/File,
width: 400,
height: 600
}).then(output => {
// handle output
}).catch(error => {
// handle error
})
/* Await */
let resized = await blitz({...})
/* Old school callback */
const blitz = Blitz.create('callback')
blitz({...}, function(output) {
// run your callback.
})
Here is a reusable Angular service for high quality image / canvas resizing: https://gist.github.com/fisch0920/37bac5e741eaec60e983
The service supports lanczos convolution and step-wise downscaling. The convolution approach is higher quality at the cost of being slower, whereas the step-wise downscaling approach produces reasonably antialiased results and is significantly faster.
Example usage:
angular.module('demo').controller('ExampleCtrl', function (imageService) {
// EXAMPLE USAGE
// NOTE: it's bad practice to access the DOM inside a controller,
// but this is just to show the example usage.
// resize by lanczos-sinc filter
imageService.resize($('#myimg')[0], 256, 256)
.then(function (resizedImage) {
// do something with resized image
})
// resize by stepping down image size in increments of 2x
imageService.resizeStep($('#myimg')[0], 256, 256)
.then(function (resizedImage) {
// do something with resized image
})
})
You can try this approach, which I always use in my projects. This way you can get not only a high-quality image but any other element on your canvas as well.
/*
 * @param canvas => canvas object
 * @param rate => the pixel quality
 */
function setCanvasSize(canvas, rate) {
const scaleRate = rate;
canvas.width = window.innerWidth * scaleRate;
canvas.height = window.innerHeight * scaleRate;
canvas.style.width = window.innerWidth + 'px';
canvas.style.height = window.innerHeight + 'px';
canvas.getContext('2d').scale(scaleRate, scaleRate);
}
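For example, to render at the display's native resolution (my example, not from the original snippet):
// A rate of window.devicePixelRatio keeps one canvas pixel per device pixel.
setCanvasSize(document.getElementById('c'), window.devicePixelRatio || 1);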
If you pass 1.0 instead of .85, you will get maximum quality:
data = canvas.toDataURL('image/jpeg', 1.0);
You can get a clear and bright image this way. Please check.
I really try to avoid running through image data, especially on larger images. Thus I came up with a rather simple way to decently reduce image size without any restrictions or limitations, using a few extra steps.
This routine steps down to the lowest possible half-step above the desired target size. Then it scales up to twice the target size, and halves once again. Sounds funny at first, but the results are astoundingly good, and you get there swiftly.
function resizeCanvas(canvas, newWidth, newHeight) {
let ctx = canvas.getContext('2d');
let buffer = document.createElement('canvas');
buffer.width = ctx.canvas.width;
buffer.height = ctx.canvas.height;
let ctxBuf = buffer.getContext('2d');
let scaleX = newWidth / ctx.canvas.width;
let scaleY = newHeight / ctx.canvas.height;
let scaler = Math.min(scaleX, scaleY);
//see if target scale is less than half...
if (scaler < 0.5) {
//while loop in case target scale is less than quarter...
while (scaler < 0.5) {
ctxBuf.canvas.width = ctxBuf.canvas.width * 0.5;
ctxBuf.canvas.height = ctxBuf.canvas.height * 0.5;
ctxBuf.scale(0.5, 0.5);
ctxBuf.drawImage(canvas, 0, 0);
ctxBuf.setTransform(1, 0, 0, 1, 0, 0);
ctx.canvas.width = ctxBuf.canvas.width;
ctx.canvas.height = ctxBuf.canvas.height;
ctx.drawImage(buffer, 0, 0);
scaleX = newWidth / ctxBuf.canvas.width;
scaleY = newHeight / ctxBuf.canvas.height;
scaler = Math.min(scaleX, scaleY);
}
//only if the scaler is now larger than half, double target scale trick...
if (scaler > 0.5) {
scaleX *= 2.0;
scaleY *= 2.0;
ctxBuf.canvas.width = ctxBuf.canvas.width * scaleX;
ctxBuf.canvas.height = ctxBuf.canvas.height * scaleY;
ctxBuf.scale(scaleX, scaleY);
ctxBuf.drawImage(canvas, 0, 0);
ctxBuf.setTransform(1, 0, 0, 1, 0, 0);
scaleX = 0.5;
scaleY = 0.5;
}
} else
ctxBuf.drawImage(canvas, 0, 0);
//wrapping things up...
ctx.canvas.width = newWidth;
ctx.canvas.height = newHeight;
ctx.scale(scaleX, scaleY);
ctx.drawImage(buffer, 0, 0);
ctx.setTransform(1, 0, 0, 1, 0, 0);
}
Another option is context.scale(xScale, yScale):
<canvas id="c"></canvas>
<hr/>
<img id="i" />
<script>
var i = document.getElementById('i');
i.onload = function(){
var width = this.naturalWidth,
height = this.naturalHeight,
canvas = document.getElementById('c'),
ctx = canvas.getContext('2d');
canvas.width = Math.floor(width / 2);
canvas.height = Math.floor(height / 2);
ctx.scale(0.5, 0.5);
ctx.drawImage(this, 0, 0);
ctx.rect(0,0,500,500);
ctx.stroke();
// restore original 1x1 scale
ctx.scale(2, 2);
ctx.rect(0,0,500,500);
ctx.stroke();
};
i.src = 'https://static.md/b70a511140758c63f07b618da5137b5d.png';
</script>
DEMO: Resizing images with JS and HTML canvas (demo fiddle).
You will find 3 different methods of doing the resize, which will help you understand how the code works and why.
https://jsfiddle.net/1b68eLdr/93089/
The full code of both the demo and the TypeScript method that you may want to use in your own code can be found in the GitHub project:
https://github.com/eyalc4/ts-image-resizer
This is the final code:
export class ImageTools {
base64ResizedImage: string = null;
constructor() {
}
ResizeImage(base64image: string, width: number = 1080, height: number = 1080) {
let img = new Image();
img.src = base64image;
img.onload = () => {
// Check if the image requires resizing at all
if(img.height <= height && img.width <= width) {
this.base64ResizedImage = base64image;
// TODO: Call method to do something with the resize image
}
else {
// Make sure the width and height preserve the original aspect ratio and adjust if needed
if(img.height > img.width) {
width = Math.floor(height * (img.width / img.height));
}
else {
height = Math.floor(width * (img.height / img.width));
}
let resizingCanvas: HTMLCanvasElement = document.createElement('canvas');
let resizingCanvasContext = resizingCanvas.getContext("2d");
// Start with original image size
resizingCanvas.width = img.width;
resizingCanvas.height = img.height;
// Draw the original image on the (temp) resizing canvas
resizingCanvasContext.drawImage(img, 0, 0, resizingCanvas.width, resizingCanvas.height);
let curImageDimensions = {
width: Math.floor(img.width),
height: Math.floor(img.height)
};
let halfImageDimensions = {
width: null,
height: null
};
// Quickly reduce the size by 50% each time, in a few iterations, until the size is less than
// 2x the target size - the motivation is to reduce the aliasing that would be
// created by a direct reduction of a very big image to a small one
while (curImageDimensions.width * 0.5 > width) {
// Reduce the resizing canvas by half and refresh the image
halfImageDimensions.width = Math.floor(curImageDimensions.width * 0.5);
halfImageDimensions.height = Math.floor(curImageDimensions.height * 0.5);
resizingCanvasContext.drawImage(resizingCanvas, 0, 0, curImageDimensions.width, curImageDimensions.height,
0, 0, halfImageDimensions.width, halfImageDimensions.height);
curImageDimensions.width = halfImageDimensions.width;
curImageDimensions.height = halfImageDimensions.height;
}
// Now do the final resize from the resizingCanvas to meet the dimension requirements,
// directly to the output canvas that will hold the final image
let outputCanvas: HTMLCanvasElement = document.createElement('canvas');
let outputCanvasContext = outputCanvas.getContext("2d");
outputCanvas.width = width;
outputCanvas.height = height;
outputCanvasContext.drawImage(resizingCanvas, 0, 0, curImageDimensions.width, curImageDimensions.height,
0, 0, width, height);
// output the canvas pixels as an image. params: format, quality
this.base64ResizedImage = outputCanvas.toDataURL('image/jpeg', 0.85);
// TODO: Call method to do something with the resize image
}
};
}
}