Why does a white line appear between two shapes in an HTML canvas? - javascript

Why does a white line appear between two shapes in a JS canvas?
I'm making a game in JS/TS (on a MacBook Pro) with an HTML5 canvas, and an unexpected white line appears between two shapes in Safari. When I run exactly the same code in Chrome, everything is fine.
Why does this happen, and how can I fix it?
The code I'm using to render:
CONTEXT.drawImage(
    CACHES.get(this.materialURL),
    (this.rect.x - camera.location.x) * GRID_W,
    (this.rect.y - camera.location.y) * GRID_H,
    GRID_W,
    GRID_H,
);

Render artifacts
More info?
There are many reasons this can happen. Most are the result of rounding errors; sometimes the error is in the JavaScript, other times it occurs in the rendering.
There are subtle differences between JS engines (resulting from hardware, OS, driver, and/or engine implementations) that can produce rendering artifacts that differ across devices.
There are major differences in rendering implementations even on the same browser, same OS, and same hardware, depending on setup (flags).
Where your artifacts come from, I can only guess without a lot more information. Even how you captured the example images can change the solution.
Things to try
Try nearest-pixel lookup by turning the 2D context's image smoothing off:
ctx.imageSmoothingEnabled = false;
To turn it back on, use:
ctx.imageSmoothingEnabled = true;
Hint at software rendering (CPU) by setting the willReadFrequently flag when getting the context:
const ctx = canvas.getContext("2d", { willReadFrequently: true });
Note this can slow things down a lot.
Turn off canvas alpha (to stop the background appearing at seams) using the context option alpha:
const ctx = canvas.getContext("2d", {alpha: false});
Ensure that the source image resolution matches the render size. In other words, does
const img = CACHES.get(this.materialURL);
const isSameRes = img.width === GRID_W && img.height === GRID_H;
give isSameRes === true?
Note: use naturalWidth and naturalHeight if img is an instance of Image.
Extend the source image by 1px on each edge, copying the edge pixels outward. This prevents transparent edge pixels from bleeding into the rendered result.
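A minimal sketch of that padding step (the padEdges helper is my own illustration, assuming same-origin images; you would store its result in CACHES in place of the raw image):
// Pad an image by 1px on every side, repeating the edge pixels,
// and return the padded canvas (usable as a drawImage source).
function padEdges(img) {
    const w = img.width, h = img.height;
    const canvas = document.createElement("canvas");
    canvas.width = w + 2;
    canvas.height = h + 2;
    const ctx = canvas.getContext("2d");
    ctx.drawImage(img, 1, 1);                           // center copy
    ctx.drawImage(img, 0, 0, w, 1, 1, 0, w, 1);         // top edge
    ctx.drawImage(img, 0, h - 1, w, 1, 1, h + 1, w, 1); // bottom edge
    ctx.drawImage(img, 0, 0, 1, h, 0, 1, 1, h);         // left edge
    ctx.drawImage(img, w - 1, 0, 1, h, w + 1, 1, 1, h); // right edge
    ctx.drawImage(img, 0, 0, 1, 1, 0, 0, 1, 1);                 // corners
    ctx.drawImage(img, w - 1, 0, 1, 1, w + 1, 0, 1, 1);
    ctx.drawImage(img, 0, h - 1, 1, 1, 0, h + 1, 1, 1);
    ctx.drawImage(img, w - 1, h - 1, 1, 1, w + 1, h + 1, 1, 1);
    return canvas;
}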
Then render the inner original image as shown below:
const img = CACHES.get(this.materialURL);
ctx.drawImage(
    img,
    1, 1, img.width - 2, img.height - 2,
    (this.rect.x - camera.location.x) * GRID_W,
    (this.rect.y - camera.location.y) * GRID_H,
    GRID_W,
    GRID_H,
);
Note this will add a tiny bit of overhead.
Ensure integer coordinates by flooring coordinates and forcing constants to be integers.
// When defining GRID_W and GRID_H (assuming positive integer values),
// force the internal type to int32 by using a bitwise operation on the values.
// Note: this may not do anything.
const GRID_W = 32 | 0;
const GRID_H = 32 | 0;

// Render using floored coordinates.
ctx.drawImage(
    CACHES.get(this.materialURL),
    Math.floor((this.rect.x - camera.location.x) * GRID_W),
    Math.floor((this.rect.y - camera.location.y) * GRID_H),
    GRID_W,
    GRID_H,
);
More
There are many more options, but without the needed information I would be wasting your time.

Related

Three.js sets Texture RGB values to zero when ALPHA is zero on IOS

I am working on a WebGL project using JavaScript and the three.js framework. For it I am writing a custom shader with GLSL, in which I have to load several lookup tables, meaning I need to use some textures' individual RGBA values for calculations rather than displaying them.
This works fine on all devices that I've tested. However, on iOS devices (like an iPad) the RGB values of a texture are automatically set to 0 when its alpha channel is 0. I do not think this is due to GLSL's texture2D function; rather, it has something to do with how three.js loads textures on iOS. I am using the built-in TextureLoader for that:
var textureLoader = new THREE.TextureLoader();
var lutMap = textureLoader.load('path/to/lookup/table/img.png');
lutMap.minFilter = THREE.NearestFilter;
lutMap.magFilter = THREE.NearestFilter;
lutMap.generateMipmaps = false;
lutMap.type = THREE.UnsignedByteType;
lutMap.format = THREE.RGBAFormat;
For testing purposes I've created a test image with constant RGB values (255,0,0) and an alpha value that decreases steadily from the top-right corner to the bottom-left one, with some pixels' alpha values being 0.
After the texture was loaded, I checked the zero-alpha pixels and their R values were indeed set to 0. I used the following code to read the image's data:
function getImageData(image) {
    var canvas = document.createElement('canvas');
    canvas.width = image.width;
    canvas.height = image.height;
    var context = canvas.getContext('2d');
    context.drawImage(image, 0, 0);
    return context.getImageData(0, 0, image.width, image.height);
}
The strange thing is that this was also true on my Windows PC, yet the shader works just fine there. So maybe it is only due to the canvas and has nothing to do with the actual problem. On the iOS device, however, the texture2D(...) lookup in the GLSL code indeed returned (0,0,0,0) for exactly those pixels. (Please note that I come from Java/C++ and am not very familiar with JavaScript yet! :) )
I've also tried setting the premultipliedAlpha flag to false in the WebGLRenderer instance, and also in the THREE.ShaderMaterial object itself. Sadly, it did not fix the problem.
Did anyone experience similar problems and knows how to fix this unwanted behaviour?
The low level PNG reading code on iOS will go through CoreGraphics and premultiply each RGB value by the A component for each pixel, so if A = 0 then each RGB value will come out as zero. What you can do is load a 24 BPP image, so that the alpha is always 0xFF (aka 255), but you cannot disable this premultiply step under iOS when dealing with a 32 BPP image.
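If you cannot avoid the premultiply when decoding a PNG, one workaround (my suggestion, not from the answer above; it assumes you can obtain the raw LUT bytes some other way, e.g. from a fetched ArrayBuffer or generated in code) is to skip image decoding entirely and build the texture from raw data:
// Build the lookup table as a DataTexture from raw bytes, so no PNG
// decode (and thus no premultiply) ever happens. lutBytes and size are
// hypothetical; fill them with your actual LUT data.
var size = 256;
var lutBytes = new Uint8Array(size * size * 4);
// ... fill lutBytes with RGBA lookup values ...
var lutMap = new THREE.DataTexture(lutBytes, size, size, THREE.RGBAFormat, THREE.UnsignedByteType);
lutMap.minFilter = THREE.NearestFilter;
lutMap.magFilter = THREE.NearestFilter;
lutMap.generateMipmaps = false;
lutMap.needsUpdate = true;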

Performance of CSS Flex vs. manual JavaScript computation

I have a browser-based system which consists of, among other modular components, an <iframe> container nested with other <iframe>s, currently up to three levels deep. A given webpage may be embedded within multiple nested frames simultaneously. The end users' screen resolutions and the nested frames' sizes can vary.
It is therefore important for element sizes, paddings, margins etc. to be defined in relative terms. To this end, I have identified two approaches: Either I use CSS Flex wherever possible and compute with JavaScript manually for the rest, or do the reverse and compute wherever possible. Here's an example of the computation-focused approach for one of my more complex pages to be embedded in the frames:
// Tile size-dependent CSS
const RATIO = 0.618;
// Amount of space to use in view
var viewHeight = window.innerHeight;
var viewWidth = window.innerWidth;
var viewVertSpace = viewHeight * 0.8;
var viewHoriSpace = viewWidth * 0.8;
// Position and sizing for each overall column
var colWidth = Math.round(viewHoriSpace * 0.5);
var colSpace = Math.round(viewVertSpace) - 2; // Deduct 2px bottom border
// Sizing of column 1 elements
var summaryHeight = colSpace * 0.5;
var mainRowHeight = summaryHeight * RATIO;
var mainRowSize = Math.round(mainRowHeight - 10); // Deduct 5px vertical padding per side
var subTextSize = Math.round((summaryHeight - mainRowHeight) * (1 - RATIO));
var diffIconSize = Math.round((mainRowSize - subTextSize) * RATIO);
// Sizing of column 2 elements
var horiSpace = colWidth * RATIO; // Leave some space on both sides
var chartWidth = horiSpace - (horiSpace * RATIO);
var innerBarWidth = chartWidth * (1 - RATIO);
var targetArrowWidth = subTextSize * 0.5;
There is a performance constraint on the system's loading time, one which was missed during the first deployment to the test server. I have been continuously optimising the code (part of which involved implementing lazy initialisation and ordered loading to prevent too many simultaneous HTTP calls), and this is one area I'm looking at. I have read that extensive use of CSS Flex in more complex applications can have a significant performance impact, but I wonder if relying on manual computation via JavaScript to set absolute pixel sizes is actually better.
While specific implementations may vary, here are some general things to consider:
You will not be able to control when the CSS causes your elements to resize. With JavaScript you can make some decisions, such as setting timeouts or establishing minimum values to trigger a change; however, any such solution will block any other JavaScript you want to run in the same time frame, and likewise any other running JavaScript will block this code. Using CSS Flexbox will require you to check which browser-specific implementation details apply to your use cases (the same is of course true of your JavaScript).
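A sketch of the timeout idea mentioned above (recomputeLayout is a hypothetical function wrapping the sizing code from the question): debounce resize-driven recomputation so the layout math runs once per burst of resize events rather than on every event.
var resizeTimer = null;
window.addEventListener("resize", function () {
    clearTimeout(resizeTimer);
    resizeTimer = setTimeout(function () {
        recomputeLayout(); // re-run the manual sizing computations
    }, 100); // wait for the resize burst to settle
});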
In my experience, CSS Flexbox has been faster than any JavaScript solution that attempts to address the same concerns, though I cannot guarantee that this is a universal truth.
You should also consider code maintenance when implementing a solution. If your JavaScript is full of magic numbers and strange conditionals, it might be easier to maintain a CSS solution (assuming you do not fill it with magic numbers and strange conditionals as well, which I find easier to avoid with a Flexbox).
I'm sorry I can't give you a "use this every time" answer, but hopefully this will help you make good decisions given the constraints that exist.

Canvas shining star background performance issue

I've got an issue with an experiment I'm working on.
My plan is to have a beautiful, shining-stars background on a whole page.
Using that wonderful tutorial (http://timothypoon.com/blog/2011/01/19/html5-canvas-particle-animation/) I managed to get the perfect background.
I use a static canvas to display static stars and an animated canvas for the shining ones.
The fact is it's very memory hungry! On Chrome and Opera it runs quite smoothly, but on Firefox, IE, or a tablet it was a total mess: 1s to render each frame, etc. It is worse on pages whose HEIGHT is huge.
So I went through some optimisations:
- Using a buffer canvas; the problem was createRadialGradient, which was called 1500 times each frame
- Using a big buffer canvas, and one canvas for each star, with a single call to createRadialGradient at init
- Removing that buffer canvas and drawing every star's canvas to the main one
That last optimisation was the best I could achieve, so I wrote a fiddle showing how the code looks right now.
// Buffering the star image
this.scanvas = document.createElement('canvas');
this.scanvas.width = 2 * this.r;
this.scanvas.height = 2 * this.r;
this.scon = this.scanvas.getContext('2d');
g = this.scon.createRadialGradient(this.r, this.r, 0, this.r, this.r, this.r);
g.addColorStop(0.0, 'rgba(255,255,255,0.9)');
g.addColorStop(this.stop, 'rgba(' + this.color.r + ',' + this.color.g + ',' + this.color.b + ',' + this.stop + ')');
g.addColorStop(1.0, 'rgba(' + this.color.r + ',' + this.color.g + ',' + this.color.b + ',0)');
this.scon.fillStyle = g;
this.scon.fillRect(0, 0, 2 * this.r, 2 * this.r);
That's the point where I need you:
- A way to adjust the number of shining stars according to the user's performance
- Optimisation tips
Thanks in advance to everyone willing to help me, and I apologize if I made grammar mistakes; my English isn't perfect.
EDIT
Thanks for your feedback.
Let me explain the whole process.
Every star has its own gradient and size; that's why I stored each in its own canvas. The shining effect is done only by scaling that canvas onto the main one with drawImage.
I think the best approach would be to prerender 50 or 100 different stars in a buffer canvas, then pick and draw a random one, don't you think?
EDIT2
Updated fiddle according to Warlock's great advice: one prerendered star, scaled to match the current size. The stars are less pretty, but the whole thing runs a lot smoother.
EDIT3
Updated fiddle to use a sprite sheet. Gorgeous!!!!
// Generate the star strip
var len = (ttlm / rint) | 0;
scanvas = document.createElement('canvas');
scanvas.width = len * 2 * r;
scanvas.height = 2 * r;
scon = scanvas.getContext('2d');
for (var i = 0; i < len; i++) {
    var newo = (i / len);
    var cr = r * newo;
    g = scon.createRadialGradient(2 * r * i + r, r, 0, 2 * r * i + r, r, (cr <= 2 ? 2 : cr));
    g.addColorStop(0.0, 'rgba(200,220,255,' + newo + ')');
    g.addColorStop(0.2, 'rgba(200,220,255,' + (newo * .7) + ')');
    g.addColorStop(0.4, 'rgba(150,170,205,' + (newo * .2) + ')');
    g.addColorStop(0.7, 'rgba(150,170,205,0)');
    scon.fillStyle = g;
    scon.fillRect(2 * r * i, 0, 2 * r, 2 * r);
}
EDIT 4 (Final)
Dynamic star creation:
function draw() {
    frameTime.push(Date.now());
    con.clearRect(0, 0, WIDTH, HEIGHT);
    for (var i = 0, len = pxs.length; i < len; i++) {
        pxs[i].fade();
        pxs[i].draw();
    }
    requestAnimationFrame(draw);
    if (allowMore === true && frameTime.length == monitoredFrame) {
        if (getAvgTime() < threshold && pxs.length < totalStars) {
            addStars();
        } else {
            allowMore = false;
            static = true;
            fillpxs(totalStars - pxs.length, pxss);
            drawstatic();
            static = false;
        }
    }
}
Here is the updated and final fiddle, with the spritesheet, dynamic star creation, and several optimisations. If you see anything else I should update, don't hesitate.
POST EDIT: Re-enabled shooting stars, prototyped the objects, got rid of jQuery
http://jsfiddle.net/macintox/K8YTu/32/
Thanks to everyone who helped me. It was really kind and instructive, and I hope it will help somebody someday.
Aesdotjs.
PS: I'm so happy. After testing, the script runs smoothly on every browser, even IE9. Yatta!!
Adapting to browser performance
To measure capability of the user's setup you can implement a dynamic star creator which stops at a certain threshold.
For example, in your code you define a minimum number of stars to draw. Then in your main loop you measure the time, and if the time spent drawing the stars is less than your max threshold, you add 10 more stars (I'm just throwing out a number here).
Not many are aware that requestAnimationFrame passes an argument (a DOMHighResTimeStamp in milliseconds) to the function it calls; subtracting the previous frame's timestamp from the current one gives the time spent per frame. This will help you keep track of load, and as we know that 60 fps is about 16.7 ms per frame, we can set the threshold a little under this to be optimal and still allow some overhead for other browser work.
The code could look something like this:
var minCount = 100,   /// minimum number of stars
    batchCount = 10,  /// stars to add each frame
    threshold = 14,   /// milliseconds allowed per frame
    allowMore = true, /// keep adding
    lastTime = 0;     /// timestamp of the previous frame

/// generate initial stars
generateStars(minCount);

/// time is the DOMHighResTimeStamp passed by requestAnimationFrame;
/// the difference from the previous timestamp is the time used per frame
function loop(time) {
    var timeUsed = lastTime ? time - lastTime : 0; /// skip the very first frame
    lastTime = time;
    if (allowMore === true && timeUsed < threshold) {
        addMoreStars(batchCount);
    } else {
        allowMore = false;
    }
    /// render stars
    requestAnimationFrame(loop);
}
Just note that this is a bit simplified. You will need to run a few rounds first and measure the average for this to work well, as you can and will get peaks when you add stars (and due to other browser operations).
So: add stars, measure a few rounds, and if the average is below the threshold, add more stars and repeat.
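A sketch of that averaging (reusing allowMore, threshold, addMoreStars, and batchCount from the snippet above; resetting the samples after adding stars is my own addition):
var samples = [],
    sampleSize = 30, /// frames to average over
    lastTime = 0;

function loop(time) {
    if (lastTime) samples.push(time - lastTime); /// skip the very first frame
    lastTime = time;
    if (samples.length > sampleSize) samples.shift();
    var avg = samples.reduce(function (a, b) { return a + b; }, 0) / samples.length;
    if (allowMore && samples.length >= sampleSize && avg < threshold) {
        addMoreStars(batchCount);
        samples.length = 0; /// re-measure after the spike caused by adding stars
    }
    /// render stars
    requestAnimationFrame(loop);
}
requestAnimationFrame(loop);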
Optimizations
Sprite-sheets
As for optimizations, sprite-sheets are the way to go, and they don't have to be just the stars (I'll try to explain below).
The gradient and arc are the costly parts of this application. Even when pre-rendering a single star, there is a cost in resizing so many stars due to interpolation etc.
When there are a lot of costly operations, it is better to compromise on memory usage and pre-render everything you can.
For example: render the various sizes by first rendering a big star using gradient and arc, then use that star to draw the other sizes as a strip of stars with the same cell size.
Now draw only half the number of stars using the sprite-sheet, drawing clipped parts of it (not re-sized). Then rotate the canvas 90 degrees and draw the canvas itself on top of itself in a different position (the canvas becomes a big "sprite-sheet" in itself).
Rotating by 90 degrees is not as performance-hungry as other angles (0, 90, 180, and 270 are optimized). This gives you the illusion of having the actual number of stars, and since the copy is rotated, the repetitive pattern is not so easy to detect.
A single drawImage operation on a canvas is faster than many small draw operations for all the stars.
(And of course, you can do this many times instead of just once, up to the point right before you start to see patterns. There is no universal answer to how many, what size, etc., so finding the right balance is always an experiment. A sketch of the rotation trick follows below.)
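A minimal sketch of that rotation trick (my illustration; assumes a square canvas and that additive blending is acceptable):
// Draw the canvas onto itself, rotated a quarter turn around its center,
// doubling the apparent number of stars with one drawImage call.
ctx.save();
ctx.translate(canvas.width / 2, canvas.height / 2);
ctx.rotate(Math.PI / 2); // 90 degrees: one of the optimized angles
ctx.globalCompositeOperation = 'lighter'; // additive, so overlapping stars brighten
ctx.drawImage(canvas, -canvas.width / 2, -canvas.height / 2);
ctx.restore();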
Integer numbers
Another optimization is using only integer positions and sizes. When you use float numbers, sub-pixeling is activated, which is costly, as the browser needs to calculate anti-aliasing for the offset pixels.
Using integer values helps, as sub-pixeling isn't needed (but this doesn't mean the image won't be interpolated if it isn't drawn at 1:1 dimensions).
Memory bounds
You can also help the underlying low-level bitmap handling a tiny bit by using sizes and positions divisible by 4. This has to do with memory copying and low-level clipping. You can always make several sprite-sheets to vary positions within a cell that is divisible by 4.
This trick is more valuable on slower computers (i.e. typical consumer-spec computers).
Turn off anti-aliasing
Turn off anti-aliasing for images. This will help performance but will give a slightly rougher result for the stars. To turn off image anti-aliasing, do this (the vendor-prefixed property names vary by browser and era; these are the commonly supported ones):
ctx.imageSmoothingEnabled = false;       // standard
ctx.webkitImageSmoothingEnabled = false; // older WebKit/Blink
ctx.mozImageSmoothingEnabled = false;    // older Gecko
By doing this you will see a noticeable improvement in performance, as long as you use drawImage to render the stars.
Cache everything
Cache everything you can, the star images as well as variables.
When you write stars.length, the browser needs to first find stars and then traverse its tree to find length, on every iteration (this may be optimized in some browsers).
If you first cache this in a variable, var len = stars.length, the browser only needs to traverse the tree and branch once, and inside the loop it only needs to look up the local scope to find len, which is faster.
Resolution reduction
You can also reduce the resolution by half, i.e. do everything at half the target size, and in the final step draw your render enlarged to full size. This saves you 75% of the render area but gives a somewhat low-res look as a result.
In the professional video world we often use low resolution when things are animated (primarily moving), as the eye/brain can't pick up as much detail on moving objects, so it isn't very noticeable. Whether this helps here must be tested; perhaps not, since the stars aren't actually moving, but it's worth a try for the other benefit: increased performance.
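A sketch of that half-resolution approach (my illustration; assumes canvas and ctx are the visible canvas and its context):
// Render everything into a half-size buffer, then upscale it in one call.
var half = document.createElement('canvas');
half.width = canvas.width / 2;
half.height = canvas.height / 2;
var hctx = half.getContext('2d');

// ... draw all stars into hctx at half their usual positions and sizes ...

// Blow the buffer up to full size; interpolation hides some of the loss.
ctx.drawImage(half, 0, 0, canvas.width, canvas.height);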
How about just creating a spritesheet of a star in its various stages of radial glow?
You could even use canvas to create the spritesheet in the first place.
Then use the clipping form of drawImage, context.drawImage(spritesheet, spriteX, spriteY, starWidth, starHeight, x, y, starWidth, starHeight), to display a single star from the sheet.
Spritesheet images can be drawn to the screen very quickly with very little overhead.
You might further optimize by breaking the spritesheet into individual star images.
Good luck on your project :)
1. Minimize operations related to the DOM.
In LINE 93 you are creating a canvas:
this.scanvas = document.createElement('canvas');
You need only one canvas instead of this. Move the canvas creation to the initialization step.
2. Use integer coordinates for the canvas.
3. Use the Object Pool design pattern to improve performance (see the sketch after this list).
4. In for loops, cache the length variable:
for (var i = 0; i < pxs.length; i++) {
    ...
}
Better:
for (var i = 0, len = pxs.length; i < len; i++) {
    ...
}
Note: don't mix jQuery with native JS.
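A minimal sketch of point 3, the object pool (my illustration; the star shape is hypothetical): dead stars are recycled instead of re-allocated, which reduces garbage-collection pauses during animation.
var starPool = [];

function obtainStar() {
    // Reuse a pooled star if one is available, otherwise allocate a new one.
    return starPool.length > 0 ? starPool.pop() : { x: 0, y: 0, r: 0, alive: false };
}

function releaseStar(star) {
    star.alive = false;
    starPool.push(star); // keep it for later reuse instead of dropping it
}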

Optimal pixel drawing speed?

I'm using the Canvas object with javascript. Just doing some tests to see how fast I can set pixels in a draw loop.
On Mac, it works great in FF, Safari, and Chrome. On Windows, I get a flickering effect in FF and Chrome. It looks like the canvas implementation on Windows is somehow different from the one on Mac across browsers? (Not sure if that's true.)
This is the basic code I'm using to do the drawing (taken from the article below; I've optimized it to tighten the draw loop, and it runs pretty smoothly now):
var canvas = document.getElementById('myCanvasElt');
var ctx = canvas.getContext('2d');
var canvasData = ctx.getImageData(0, 0, canvas.width, canvas.height);
for (var x = 0; x < canvasData.width; x++) {
    for (var y = 0; y < canvasData.height; y++) {
        // Index of the pixel in the array
        var idx = (x + y * canvas.width) * 4;
        canvasData.data[idx + 0] = 0;
        canvasData.data[idx + 1] = 255;
        canvasData.data[idx + 2] = 0;
        canvasData.data[idx + 3] = 255;
    }
}
ctx.putImageData(canvasData, 0, 0);
Again, browsers on Windows flicker a bit. It looks like the canvas implementation is trying to clear the canvas to white before the next drawing operation takes place (this does not happen on Mac). I'm wondering if there is a setting I can change on the Canvas object to modify that behaviour (double-buffering, clear before draw, etc.)?
This is the article I am using as reference:
http://hacks.mozilla.org/2009/06/pushing-pixels-with-canvas/
Thanks
I think it's fairly clear that browsers implementing the Canvas object use DIBs (device-independent bitmaps). The fact that you have access to the pixel buffer without having to lock a handle first is proof of this. And Direct2D has nothing to do with JS in a browser, that's for sure. GDI is different, since it uses DDBs (device-dependent bitmaps, i.e. allocated from video memory rather than conventional RAM). All of this, however, has nothing to do with optimal JS rendering speed. I think writing the RGBA values as you do is probably the best way.
The crucial factor in the code above is the call to putImageData(). This is where browsers can differ in their implementations. Are you in fact writing directly to the DIB, with putImageData simply being a wrapper around InvalidateRect? Or are you writing to a duplicate in memory, which in turn is copied into the canvas device context? If you use Linux or Mac, this is still a valid question. Although device contexts etc. are typically "Windows" terms, most OSes deal with handles or structures in pretty much the same way. But once again, we are at the mercy of the browser vendor.
I think the following can be said:
If you are drawing many pixels in one go, then writing directly to the pixel buffer as you do is probably best. It is faster to "bitblt" (copy) the pixel buffer in one go after X number of operations. The reason is that native graphics functions like FillRect also call "invalidate rectangle", which tells the system that a portion of the screen needs a re-draw (refresh). So if you issue 100 line commands, then 100 updates will be issued, slowing down the process. Unless (and this is the catch) you use the beginPath/endPath methods as they should be used; then it's a whole different ballgame.
It's here that the begin/end path "system" comes into play, along with the stroke/outline commands. They allow you to execute X number of drawing operations within a single update. But a lot of people get this wrong and issue a redraw for each call to line/fillRect etc.
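A sketch of that batching (my illustration): build one path for all the lines, then issue a single stroke(), so the browser repaints once rather than per line.
ctx.beginPath();
for (var i = 0; i < 100; i++) {
    ctx.moveTo(0, i * 5);
    ctx.lineTo(300, i * 5);
}
ctx.stroke(); // one draw call covers all 100 lines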
Also, have you tried creating an invisible canvas object, drawing to that, and then copying to a visible canvas? This could be faster (proper double-buffering).
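A sketch of that double-buffering (my illustration; 'myCanvasElt' is the visible canvas from the question):
var visible = document.getElementById('myCanvasElt');
var vctx = visible.getContext('2d');
var buffer = document.createElement('canvas'); // invisible, never in the DOM
buffer.width = visible.width;
buffer.height = visible.height;
var bctx = buffer.getContext('2d');

function frame() {
    // Draw the whole scene into the off-screen context first...
    bctx.fillStyle = 'green';
    bctx.fillRect(0, 0, buffer.width, buffer.height);
    // ...then blit the finished frame to the visible canvas in one call.
    vctx.drawImage(buffer, 0, 0);
    requestAnimationFrame(frame);
}
frame();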
The problem is with the way the browsers use the native graphics APIs on the different OSes. And even on the same OS, using different APIs (for example GDI vs. Direct2D in Windows) would also produce different results.

Detecting the system DPI/PPI from JS/CSS?

I'm working on a kind of unique app which needs to generate images at specific resolutions according to the device they are displayed on. So the output is different on a regular Windows browser (96ppi), iPhone (163ppi), Android G1 (180ppi), and other devices. I'm wondering if there's a way to detect this automatically.
My initial research seems to say no. The only suggestion I've seen is to make an element whose width is specified as "1in" in CSS, then check its offsetWidth (see also How to access screen display's DPI settings via javascript?). Makes sense, but the iPhone is lying to me with that technique, saying it's 96 ppi.
Another approach might be to get the dimensions of the display in inches and then divide by the width in pixels, but I'm not sure how to do that either.
<div id='testdiv' style='height: 1in; left: -100%; position: absolute; top: -100%; width: 1in;'></div>
<script type='text/javascript'>
var devicePixelRatio = window.devicePixelRatio || 1;
dpi_x = document.getElementById('testdiv').offsetWidth * devicePixelRatio;
dpi_y = document.getElementById('testdiv').offsetHeight * devicePixelRatio;
console.log(dpi_x, dpi_y);
</script>
Grabbed from http://www.infobyip.com/detectmonitordpi.php. Works on mobile devices! (Tested on Android 4.2.2.)
I came up with a way that doesn't require the DOM at all.
The DOM can be messy, requiring you to append stuff to the body without knowing what's going on with width: x !important in your stylesheet. You would also have to wait for the DOM to be ready to use...
/**
 * Binary search for a max value without knowing the exact value, only that it can be under or over.
 * It does not test every number, but instead tries 1, 2, 4, 8, 16, 32, 64, 128, 96, 95 to figure out
 * that you thought of 96 in the range 0-infinity.
 *
 * @example findFirstPositive(x => matchMedia(`(max-resolution: ${x}dpi)`).matches)
 * @author Jimmy Wärting
 * @see {@link https://stackoverflow.com/a/35941703/1008999}
 * @param {function} fn The function to run the test on (should return truthy or falsy values)
 * @param {number} start=1 Where to start looking from
 * @param {function} _ (private)
 * @returns {number} Integer
 */
function findFirstPositive (f,b=1,d=(e,g,c)=>g<e?-1:0<f(c=e+g>>>1)?c==e||0>=f(c-1)?c:d(e,c-1):d(c+1,g)) {
for (;0>=f(b);b<<=1);return d(b>>>1,b)|0
}
var dpi = findFirstPositive(x => matchMedia(`(max-resolution: ${x}dpi)`).matches)
console.log(dpi)
There is the resolution CSS media query — it allows you to limit CSS styles to specific resolutions:
http://www.w3.org/TR/css3-mediaqueries/#resolution
However, it’s only supported by Firefox 3.5 and above, Opera 9 and above, and IE 9. Other browsers won’t apply your resolution-specific styles at all (although I haven’t checked non-desktop browsers).
Here is what works for me (but I didn't test it on mobile phones):
<body><div id="ppitest" style="width:1in;visibility:hidden;padding:0px"></div></body>
Then I put in the .js: screenPPI = document.getElementById('ppitest').offsetWidth;
This got me 96, which corresponds to my system's ppi.
DPI is by definition tied to the physical size of the display, so you won't be able to get the real DPI without knowing exactly what hardware is behind it.
Modern OSes agreed on a common value in order to have compatible displays: 96 dpi. That's a shame, but that's a fact.
You will have to rely on sniffing in order to guess the real screen size needed to compute the resolution (DPI = PixelSize / ScreenSize).
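A sketch of that computation (my illustration; the physical diagonal would have to come from sniffing a device database, as suggested in another answer below):
// DPI from pixel dimensions and a known physical diagonal in inches.
function dpiFromPhysicalSize(widthPx, heightPx, diagonalInches) {
    var diagonalPx = Math.sqrt(widthPx * widthPx + heightPx * heightPx);
    return diagonalPx / diagonalInches;
}
// e.g. a hypothetical 13.3" 1920x1080 panel:
console.log(dpiFromPhysicalSize(1920, 1080, 13.3)); // ~166 dpi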
I also needed to display the same image at the same size on screens with different dpi, but only for Windows IE. I used:
<img src="image.jpg" style="
    height:expression(scale(438, 192));
    width:expression(scale(270, 192))" />

function scale(x, dpi) {
    // dpi is for the original dimensions of the image
    return x * screen.deviceXDPI / dpi;
}
In this case the original image width/height are 270 and 438, and the image was developed on a 192 dpi screen. screen.deviceXDPI is not defined in Chrome, and the scale function would need to be updated to support browsers other than IE.
The reply from @Endless is pretty good, but not readable at all. This is a similar approach with fixed min/max bounds (which should be good ones):
var dpi = (function () {
    for (var i = 56; i < 2000; i++) {
        if (matchMedia("(max-resolution: " + i + "dpi)").matches === true) {
            return i;
        }
    }
    return i;
})();
matchMedia is now well supported and should give good results; see http://caniuse.com/#feat=matchmedia
Be careful: the browser won't give you the exact screen dpi, only an approximation.
function getPPI() {
    // create an empty element
    var div = document.createElement("div");
    // give it an absolute size of one inch
    div.style.width = "1in";
    // append it to the body
    var body = document.getElementsByTagName("body")[0];
    body.appendChild(div);
    // read the computed width
    var ppi = document.defaultView.getComputedStyle(div, null).getPropertyValue('width');
    // remove it again
    body.removeChild(div);
    // and return the value
    return parseFloat(ppi);
}
(From VodaFone)
Reading through all these responses was quite frustrating, when the only correct answer is: no, it is not possible to detect the DPI from JavaScript/CSS. Often, the operating system itself does not even know the DPI of the connected screens (and reports it as 96 dpi, which I suspect might be the reason why many people seem to believe that their method of detecting DPI in JavaScript is accurate).
Also, when multiple screens are connected to a device forming a unified display, the viewport and even a single DOM element can span multiple screens with different DPIs, which would make these calculations quite challenging.
Most of the methods described in the other answers will almost always result in an output of 96 dpi, even though most screens nowadays have a higher DPI. For example, the screen of my ThinkPad T14 has 157 dpi, according to this calculator, but all the methods described here and my operating system tell me that it has 96 dpi.
Your idea of assigning a CSS width of 1in to a DOM element does not work. It seems that a CSS inch is defined as 96 CSS pixels. By my understanding, a CSS pixel is defined as a pixel multiplied by the devicePixelRatio, which traditionally is 1, but can be higher or lower depending on the zoom level configured in the graphical interface of the operating system and in the browser.
It seems that the approach of using resolution media queries produces at least some results on a few devices, but they are often still off by a factor of more than 2. Still, on most devices this approach also results in a value of 96 dpi.
I think your best approach is to combine the suggestion of the "sniffer" image with a matrix of known DPIs for devices (via user agent and other methods). It won't be exact and will be a pain to maintain, but without knowing more about the app you're trying to make that's the best suggestion I can offer.
Can't you do anything else? For instance, if you are generating an image to be recognized by a camera (i.e. you run your program, swipe your cellphone across a camera, magic happens), can't you use something size-independent?
If this is an application to be deployed in controlled environments, can you provide a calibration utility? (you could make something simple like print business cards with a small ruler in it, use it during the calibration process).
I just found this link: http://dpi.lv/. Basically it is a webtool to discover the client device resolution, dpi, and screen size.
I visited it on my computer and my mobile phone, and it reports the correct resolution and DPI for me. There is a GitHub repo for it, so you can see how it works.
Generate a list of known DPI:
https://stackoverflow.com/a/6793227
Detect the exact device. Using something like:
navigator.userAgent.toLowerCase();
For example, when detecting mobile:
window.isMobile=/iphone|ipod|ipad|android|blackberry|opera mini|opera mobi|skyfire|maemo|windows phone|palm|iemobile|symbian|symbianos|fennec/i.test(navigator.userAgent.toLowerCase());
And profit!
Readable code from @Endless's reply:
const dpi = (function () {
    let i = 1;
    while (!hasMatch(i)) i *= 2;

    function getValue(start, end) {
        if (start > end) return -1;
        let average = (start + end) / 2;
        if (hasMatch(average)) {
            if (start == average || !hasMatch(average - 1)) {
                return average;
            } else {
                return getValue(start, average - 1);
            }
        } else {
            return getValue(average + 1, end);
        }
    }

    function hasMatch(x) {
        return matchMedia(`(max-resolution: ${x}dpi)`).matches;
    }

    return getValue(i / 2, i) | 0;
})();
Maybe I'm steering a little off-topic here...
I was working on an HTML canvas project intended to provide a drawing canvas for people to draw lines on. I wanted to set the canvas's size to 198x280 mm, which fits A4 printing.
So I started to search for a way to convert 'mm' to 'px' and to display the canvas suitably on both PC and mobile.
I tried the solution from @Endless; the code was:
const canvas = document.getElementById("canvas");

function findFirstPositive(b, a, i, c) {
    c = (d, e) => e >= d ? (a = d + (e - d) / 2, 0 < b(a) && (a == d || 0 >= b(a - 1)) ? a : 0 >= b(a) ? c(a + 1, e) : c(d, a - 1)) : -1
    for (i = 1; 0 >= b(i);) i *= 2
    return c(i / 2, i) | 0
}

const dpi = findFirstPositive(x => matchMedia(`(max-resolution: ${x}dpi)`).matches)

let w = 198 * dpi / 25.4;
let h = 280 * dpi / 25.4;
canvas.width = w;
canvas.height = h;
It worked well in a PC browser, showing dpi = 96 and a size of 748x1058 px.
However, on mobile devices it was much larger than I expected: 1902x2689 px.
After searching for keywords like devicePixelRatio, I suddenly realized that I don't actually need to show the real A4 size on a mobile screen (where it would actually be hard to use); I just need the canvas's size to be fit for printing, so I simply set the size to:
let [w,h] = [748,1058];
canvas.width = w;
canvas.height = h;
...and it printed well.
