Here is my game Plunker.
(Edit: another Plunker with one static monster instead of multiple dynamic ones.)
Pressing Enter or clicking the button will restart the game.
Can anyone tell me why the collision detection algorithm taken from here is not working? It seems to detect hits inaccurately (too widely). The demo on their site works great, but I'm not sure what I'm doing wrong.
Most relevant piece of code (inside update function):
// Are they touching?
if (heroImage.width) {
    var heroImageData = ctx.getImageData(heroImage.x, heroImage.y, heroImage.width, heroImage.height);
    var monsterImageData;
    for (var i = 0; i < monsters.length; i++) {
        var monster = monsters[i];
        monster.x += monster.directionVector.x;
        monster.y += monster.directionVector.y;
        monsterImageData = ctx.getImageData(monster.monsterImage.x, monster.monsterImage.y, monster.monsterImage.width, monster.monsterImage.height);
        if (isPixelCollision(heroImageData, hero.x, hero.y, monsterImageData, monster.x, monster.y)) {
            stop();
        }
    }
}
As @GameAlchemist pointed out, you're taking the ImageData for the monster and hero from the game canvas, which has already been painted with the background image. Those pixels will therefore always have an alpha value of 255 (opaque),
which is exactly what the collision function checks:
if (
    ( pixels [((pixelX - x ) + (pixelY - y ) * w ) * 4 + 3 /*RGBA, alpha # 4*/] !== 0/*alpha zero expected*/ ) &&
    ( pixels2[((pixelX - x2) + (pixelY - y2) * w2) * 4 + 3 /*RGBA, alpha # 4*/] !== 0/*alpha zero expected*/ )
) {
    return true;
}
Instead, both ImageData objects should be generated by drawing the images onto a canvas with nothing else painted on it. Even after doing that, the collision algorithm doesn't seem to work very well.
I have created two variables, monsterImageData and heroImageData, to hold the image data; these variables are loaded only once.
There's a new canvas in the HTML file with id="testCanvas". It is used to get the image data for the monster and hero images, as sketched below.
Here is the Plunker link for the modified code.
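For reference, here is a minimal sketch of that one-time capture step, assuming the scratch canvas with id="testCanvas" mentioned above and that heroImage / monsterImage are the already-loaded sprite Images (adjust the names if yours are wrapper objects):
// grab the sprite pixels once, from a scratch canvas with no background painted on it
var testCanvas = document.getElementById('testCanvas');
var testCtx = testCanvas.getContext('2d');

function captureImageData(img) {
    // clear the previous sprite before drawing the next one
    testCtx.clearRect(0, 0, testCanvas.width, testCanvas.height);
    testCtx.drawImage(img, 0, 0);
    // transparent pixels keep alpha 0 here, so the pixel test can actually work
    return testCtx.getImageData(0, 0, img.width, img.height);
}

// done once, after the images have loaded
var heroImageData = captureImageData(heroImage);
var monsterImageData = captureImageData(monsterImage); // likewise for each distinct monster sprite
Inside update you then pass these cached ImageData objects to isPixelCollision instead of calling getImageData on the game context every frame.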
Your hero image is 71x68px and has a lot of transparent space around the outside. I'm guessing that if you crop it to fit just the visible artwork, it will reduce the gap at which collisions are detected.
You are taking the imageData from the game's drawing context, and since you have a background there are no transparent pixels at all, so your pixel collision detection always returns true -> in effect you are just doing a bounding-box check.
The idea of the algorithm is to compare two static ImageData objects that only need to be computed once (getImageData is a costly operation).
A few pieces of advice:
• Load your images before launching the game.
• Crop your image; it has a lot of empty space, as @Quantumplate noticed.
• Compute the ImageData of your sprites on the context only once, before the game starts. Don't forget to clearRect() the canvas before the drawImage + getImageData. This is what fixes your bug.
• Get rid of the
if (xDiff < 4 && yDiff < 4) {
and the corresponding else. This 'optimisation' is pointless: the whole point of using pixel detection is to be precise. Cropping your image matters more if you want to save time (but do you even need to...?).
• Note: the pixel detection algorithm is quite poorly written! 1) To round a number it uses 5 different methods (round, <<0, ~~, 0 |, ? :). 2) It loops on X first when the CPU cache prefers looping on Y first, and so on... But if it works...
Here's an alternate (more efficient) pixel perfect collision test...
Preparation: For each image you want to test for collisions
As mentioned, trim any excess transparent pixels off the edges of your image,
Resize a canvas to the image size (you can reuse one canvas for multiple images),
Draw the image on the canvas,
Get all the pixel info for the canvas with context.getImageData,
Make an array containing only alpha information: false if transparent, otherwise true (see the sketch just below).
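Here's a minimal sketch of that preparation, assuming the image is already loaded and you pass in a reusable scratch canvas; the helper name buildAlphaArray and the row-major alphaArray[y][x] layout are just illustrative choices:
// build a 2D boolean array (alphaArray[y][x]) from an image: true = non-transparent pixel
function buildAlphaArray(img, scratchCanvas) {
    var ctx = scratchCanvas.getContext('2d');
    scratchCanvas.width = img.width;    // resize the scratch canvas to the image
    scratchCanvas.height = img.height;  // (resizing also clears it)
    ctx.drawImage(img, 0, 0);
    var data = ctx.getImageData(0, 0, img.width, img.height).data;
    var alphaArray = [];
    for (var y = 0; y < img.height; y++) {
        var row = [];
        for (var x = 0; x < img.width; x++) {
            // every 4th byte is the alpha channel
            row.push(data[(y * img.width + x) * 4 + 3] !== 0);
        }
        alphaArray.push(row);
    }
    return alphaArray;
}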
To do a pixel-perfect collision test
Do a quick test to see if the image rects are colliding. If not, you're done.
// r1 & r2 are rect objects {x:,y:,w:,h:}
function rectsColliding(r1,r2){
    return(!(
        r1.x > r2.x+r2.w ||
        r1.x+r1.w < r2.x ||
        r1.y > r2.y+r2.h ||
        r1.y+r1.h < r2.y
    ));
}
Calculate the intersecting rect of the 2 images
// r1 & r2 are rect objects {x:,y:,w:,h:}
function intersectingRect(r1,r2){
    var x=Math.max(r1.x,r2.x);
    var y=Math.max(r1.y,r2.y);
    var xx=Math.min(r1.x+r1.w,r2.x+r2.w);
    var yy=Math.min(r1.y+r1.h,r2.y+r2.h);
    return({x:x,y:y,w:xx-x,h:yy-y});
}
Compare the intersecting pixels in both alpha arrays. If both arrays have a non-transparent pixel at the same location then there is a collision. Be sure to normalize against the origin (x=0,y=0) by offsetting your comparisons.
// warning: untested code -- might need tweaking
var i=intersectingRect(r1,r2);
for(var y=i.y; y<=i.y+i.h; y++){
    for(var x=i.x; x<=i.x+i.w; x++){
        // translate the shared canvas coordinate into each image's local coordinates
        var x1=x-r1.x, y1=y-r1.y;
        var x2=x-r2.x, y2=y-r2.y;
        if(
            // the local coordinates must be valid for both alpha arrays
            y1>=0 && y1<alphaArray1.length && x1>=0 && x1<alphaArray1[y1].length &&
            y2>=0 && y2<alphaArray2.length && x2>=0 && x2<alphaArray2[y2].length &&
            // collision is true if both arrays have a non-transparent pixel at this spot
            alphaArray1[y1][x1] && alphaArray2[y2][x2]
        ){
            return(true);
        }
    }
}
return(false);
I'm stuck on a problem:
Consider a square on a surface that is moving (in a video). Keep in mind it is not always a flat, front-facing surface: it can be skewed, rotated, etc.
Right now, I'm detecting it with Aruco JS and getting the (x, y) coordinates of its 4 corners. I'm pretty sure that, starting from these coordinates, I can render the transformation using transform: matrix3d();.
The thing is: I have about zero knowledge of maths, and especially of trigonometry. And I would really like to learn / understand this.
To sum up: with the coordinates of the corners of a square in real time, how can I apply the transformation to another element using the CSS transform property?
Here are some things I've done so far (this code is executed inside a window.requestAnimationFrame callback):
console.log('rotation : ', rotation[0], rotation[1], rotation[2]);
console.log('translation : ', translation);
var dimensions = {
        width: lineDistance(corners[0], corners[1]),
        height: lineDistance(corners[0], corners[3])
    },
    center = {
        x: corners[0].x + (corners[1].x - corners[0].x) / 2,
        y: corners[0].y + (corners[3].y - corners[0].y) / 2
    },
    rotateAngle = angle(corners[0].x, corners[0].y, corners[1].x, corners[1].y),
    rotateXangle = parseInt(rotation[1][2]);
img.style.top = corners[0].y + 'px';
img.style.left = corners[0].x + 'px';
img.style.width = dimensions.width + 'px';
img.style.height = dimensions.height + 'px';
img.style.transform = 'rotate(' + rotateAngle + 'deg) rotateX(' + Math.asin(-rotateXangle) + 'deg) rotateY(' + -Math.atan2(rotation[0][2], rotation[2][2]) + 'deg) rotateZ(' + Math.atan2(rotation[1][0], rotation[1][1]) + 'deg)';
The core of this question has been asked on Math Stack Exchange, titled Finding the Transform matrix from 4 projected points (with Javascript). My answer there should serve your needs as well. Plus, the better math typesetting there will make things easier to read.
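In case it helps to see the idea in code, here is a hedged sketch of the standard unit-square-to-quadrilateral projective mapping rather than the exact code from that answer. It assumes the four corners arrive in top-left, top-right, bottom-right, bottom-left order and that the element's untransformed pixel size is known; remember to set transform-origin: 0 0.
// compute a 3x3 homography mapping the unit square to the quad
// (x0,y0) (x1,y1) (x2,y2) (x3,y3), given in TL, TR, BR, BL order
function squareToQuad(x0, y0, x1, y1, x2, y2, x3, y3) {
    var dx1 = x1 - x2, dy1 = y1 - y2;
    var dx2 = x3 - x2, dy2 = y3 - y2;
    var sx  = x0 - x1 + x2 - x3;
    var sy  = y0 - y1 + y2 - y3;
    var den = dx1 * dy2 - dx2 * dy1;
    var g = (sx * dy2 - dx2 * sy) / den;
    var h = (dx1 * sy - sx * dy1) / den;
    return [
        x1 - x0 + g * x1, x3 - x0 + h * x3, x0,   // row 1: a b c
        y1 - y0 + g * y1, y3 - y0 + h * y3, y0,   // row 2: d e f
        g,                h,                1     // row 3: g h 1
    ];
}

// turn that 3x3 matrix into a CSS matrix3d() string for an element of the given
// untransformed size (the homography expects unit-square input, so divide first)
function toMatrix3d(m, elemWidth, elemHeight) {
    var a = m[0] / elemWidth, b = m[1] / elemHeight, c = m[2];
    var d = m[3] / elemWidth, e = m[4] / elemHeight, f = m[5];
    var g = m[6] / elemWidth, h = m[7] / elemHeight, i = m[8];
    // matrix3d is column-major; the z row/column stays an identity pass-through
    return 'matrix3d(' + [
        a, d, 0, g,
        b, e, 0, h,
        0, 0, 1, 0,
        c, f, 0, i
    ].join(',') + ')';
}

// usage sketch, assuming `corners` holds the 4 Aruco corners in TL, TR, BR, BL order:
// img.style.transformOrigin = '0 0';
// img.style.transform = toMatrix3d(squareToQuad(
//     corners[0].x, corners[0].y, corners[1].x, corners[1].y,
//     corners[2].x, corners[2].y, corners[3].x, corners[3].y), img.width, img.height);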
Basics
I am working on a small tool that is supposed to help with some geometric calculations for print-related products.
Overview
I have two inputs (w and h) where the user is supposed to enter a width and a height for a box. This box is supposed to be a small CSS-based representation of the user's measurements.
The problem is that I cannot just take the measurements and apply them as pixels, or even pixels * 10, or anything, as width/height for the display box, because my space is limited.
The box can have a maximum measurement of 69 x 69.
What I want to achieve is to apply the longer entered measurement to its corresponding axis and then calculate the other axis in proportion to it.
My approach
I am not a maths person at all. But I did my best and I put together a function that will accomplish the above:
updateRectBox: function(w, h){
    // define maximum width and height for box
    var max_x = 69;
    var max_y = 69;
    var factor, x, y;
    factor = w / h;
    if(w == h){
        // if we have a 1:1 ratio, we want the box to fill `69px` on both axis
        x = max_x;
        y = max_y;
    } else {
        if(w > h){
            // if width is larger than height, we calculate the box height using the factor
            x = max_x;
            y = (factor > 1 ? max_y/factor : max_y*factor);
        } else {
            // if height is larger than width, we calculate the box width using the factor
            x = (factor > 1 ? max_x/factor : max_x*factor);
            y = max_y;
        }
    }
    // using this to set the box element's properties
    jQuery('#rect').css({
        'width': (x)+'px',
        'height': (y)+'px'
    });
}
This function works well, but:
Question
I know this can be done more beautifully, with less code. But due to my lack of math skills, I just cannot think of anything more compact than what I wrote.
I've created a working fiddle to make it easier for you to test your optimizations.
Your function accomplishes exactly what it needs to. There are arguably more elegant ways to write it, however.
The basic idea is that you have a box with dimensions (w × h) and you want a box which is a scaled version of this one to fit in a (69 × 69) box.
To fit in a (69 × 69) box, your (w × h) box must be less than 69 wide, and less than 69 tall. Suppose you scale by the quantity s. Then your new box has dimension (s * w × s * h). Using the above constraint, we know that:
s * w <= 69 and that s * h <= 69. Rewrite these, solving for s, and you get:
s <= 69 / w and s <= 69 / h. Both must hold true, so you can rewrite this as:
s <= min( 69 / w, 69 / h). In addition, you want s to be as large as possible (so the box completely fills the region) so s = min( 69 / w, 69 / h).
Your code accomplishes the same thing, but through if-statements. You can rewrite it much more tersely:
updateRectBox: function(width, height) {
    // define maximum width and height for box
    var max_width = 69;
    var max_height = 69;
    var scale = Math.min( max_width / width, max_height / height );
    var x = scale * width;
    var y = scale * height;
    // using this to set the box element's properties
    jQuery('#rect').css({
        'width': x+'px',
        'height': y+'px'
    });
}
Changing the variable names helps make it slightly more readable (w and h presumably do mean width and height, but making this explicit is helpful).
All this said, it's unlikely that there will be noticeable performance differences between this and your original. The code is extremely fast, since it does very little. That said, I made a jsperf which shows that using Math.min is about 1.7 times faster on my browser.
I've searched far and wide throughout the web thinking that somebody may have had a similar need, but have come up short. I need to create a calculator that will adjust the size of a stage for draggable objects based on a Width and Height field (in feet).
I'm needing to maintain a max width and height that would, ideally, be set in a variable for easy modification. This max width and height would be set in pixels. I would set dimensions of the draggable items on the stage in "data-" attributes, I imagine. I'm not looking to match things up in terms of screen resolutions.
What's the best way to approach this? I'm pretty mediocre at math and have come up short in being able to create the functions necessary for scaling a stage of objects and their container like this.
I'm a skilled jQuery user, so if it makes sense to make use of jQuery in this, that'd be great. Thanks in advance.
There are at least a couple of ways to scale things proportionately. Since you will know the projected (room) dimensions and you should know at least one of the scaled dimensions (assuming you know the width of the stage), you can scale proportionately by objectLengthInFeet / roomWidthInFeet * stageWidthInPixels.
Assuming a stage width of 500 pixels for an example, once you know the room dimensions and the width of the stage:
var stageWidth = 500,
    roomWidth = parseFloat($('#width').val()) || 0,   // default to 0 if input is empty or not parseable to a number
    roomHeight = parseFloat($('#height').val()) || 0, // default to 0 if input is empty or not parseable to a number
    setRoomDimensions = function (e) {
        roomWidth = parseFloat($('#width').val());
        roomHeight = parseFloat($('#height').val());
    },
    feetToPixels = function feetToPixels(feet) {
        var scaled = feet / roomWidth * stageWidth;
        return scaled;
    };
Here's a demo: http://jsfiddle.net/uQDnY/
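As a rough illustration of how this could tie into the draggable items (the data-width-ft / data-height-ft attributes and the .draggable / #stage selectors are made up for this sketch, not part of the demo):
// size each draggable item from its real-world dimensions stored in data- attributes
$('.draggable').each(function () {
    var $item = $(this),
        widthFt = parseFloat($item.data('width-ft')) || 0,
        heightFt = parseFloat($item.data('height-ft')) || 0;
    $item.css({
        width: feetToPixels(widthFt) + 'px',
        height: feetToPixels(heightFt) + 'px'
    });
});

// the stage height follows the same scale, so the room keeps its proportions
$('#stage').css({
    width: stageWidth + 'px',
    height: feetToPixels(roomHeight) + 'px'
});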
I'm working on a kind of unique app which needs to generate images at specific resolutions according to the device they are displayed on. So the output is different on a regular Windows browser (96ppi), iPhone (163ppi), Android G1 (180ppi), and other devices. I'm wondering if there's a way to detect this automatically.
My initial research seems to say no. The only suggestion I've seen is to make an element whose width is specified as "1in" in CSS, then check its offsetWidth (see also How to access screen display’s DPI settings via javascript?). Makes sense, but iPhone is lying to me with that technique, saying it's 96ppi.
Another approach might be to get the dimensions of the display in inches and then divide the pixel dimensions by those, but I'm not sure how to do that either.
<div id='testdiv' style='height: 1in; left: -100%; position: absolute; top: -100%; width: 1in;'></div>
<script type='text/javascript'>
var devicePixelRatio = window.devicePixelRatio || 1;
dpi_x = document.getElementById('testdiv').offsetWidth * devicePixelRatio;
dpi_y = document.getElementById('testdiv').offsetHeight * devicePixelRatio;
console.log(dpi_x, dpi_y);
</script>
Grabbed from here: http://www.infobyip.com/detectmonitordpi.php. It works on mobile devices! (Tested on Android 4.2.2.)
I came up with a way that doesn't require the DOM... at all
The DOM can be messy, requiring you to append stuff to the body without knowing what stuff is going on with width: x !important in your stylesheet. You would also have to wait for the DOM to be ready to use...
/**
 * Binary search for a max value without knowing the exact value, only that it can be under or over.
 * It does not test every number, but instead looks at 1, 2, 4, 8, 16, 32, 64, 128, 96, 95 to figure out
 * that you were thinking of 96 out of 0-infinity.
 *
 * @example findFirstPositive(x => matchMedia(`(max-resolution: ${x}dpi)`).matches)
 * @author Jimmy Wärting
 * @see {@link https://stackoverflow.com/a/35941703/1008999}
 * @param {function} fn The function to run the test on (should return truthy or falsy values)
 * @param {number} start=1 Where to start looking from
 * @param {function} _ (private)
 * @returns {number} Integer
 */
function findFirstPositive (f,b=1,d=(e,g,c)=>g<e?-1:0<f(c=e+g>>>1)?c==e||0>=f(c-1)?c:d(e,c-1):d(c+1,g)) {
for (;0>=f(b);b<<=1);return d(b>>>1,b)|0
}
var dpi = findFirstPositive(x => matchMedia(`(max-resolution: ${x}dpi)`).matches)
console.log(dpi)
There is the resolution CSS media query — it allows you to limit CSS styles to specific resolutions:
http://www.w3.org/TR/css3-mediaqueries/#resolution
However, it’s only supported by Firefox 3.5 and above, Opera 9 and above, and IE 9. Other browsers won’t apply your resolution-specific styles at all (although I haven’t checked non-desktop browsers).
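For instance, a style limited to high-resolution screens might look like the following (the 192dpi cutoff and the .logo selector are just illustrative):
/* applies only on screens that report at least 192dpi (roughly a 2x "retina" display) */
@media screen and (min-resolution: 192dpi) {
    .logo {
        background-image: url(logo@2x.png);
        background-size: 100px 40px;
    }
}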
Here is what works for me (but I didn't test it on mobile phones):
<body><div id="ppitest" style="width:1in;visibility:hidden;padding:0px"></div></body>
Then I put this in the .js: screenPPI = document.getElementById('ppitest').offsetWidth;
That got me 96, which corresponds to my system's ppi.
DPI is by definition tied to the physical size of the display. So you won't be able to get the real DPI without knowing exactly what hardware is behind it.
Modern OSes agreed on a common value in order to have compatible displays: 96 dpi. That's a shame but that's a fact.
You will have to rely on sniffing in order to be able to guess the real screen size needed to compute the resolution (DPI = PixelSize / ScreenSize).
I also needed to display the same image at the same size at different screen dpi but only for Windows IE. I used:
<img src="image.jpg" style="
height:expression(scale(438, 192));
width:expression(scale(270, 192))" />
function scale(x, dpi) {
// dpi is for orignal dimensions of the image
return x * screen.deviceXDPI/dpi;
}
In this case the original image width/height are 270 and 438, and the image was developed on a 192 dpi screen. screen.deviceXDPI is not defined in Chrome, so the scale function would need to be updated to support browsers other than IE.
The reply from @Endless is pretty good, but not readable at all. This is a similar approach with a fixed min/max (which should be reasonable bounds):
var dpi = (function () {
    for (var i = 56; i < 2000; i++) {
        if (matchMedia("(max-resolution: " + i + "dpi)").matches === true) {
            return i;
        }
    }
    return i;
})();
matchMedia is now well supported and should give good result, see http://caniuse.com/#feat=matchmedia
Be careful: the browser won't give you the exact screen DPI, only an approximation.
function getPPI(){
    // create an empty element
    var div = document.createElement("div");
    // give it an absolute size of one inch
    div.style.width = "1in";
    // append it to the body
    var body = document.getElementsByTagName("body")[0];
    body.appendChild(div);
    // read the computed width
    var ppi = document.defaultView.getComputedStyle(div, null).getPropertyValue('width');
    // remove it again
    body.removeChild(div);
    // and return the value
    return parseFloat(ppi);
}
(From VodaFone)
Reading through all these responses was quite frustrating, when the only correct answer is: No, it is not possible to detect the DPI from JavaScript/CSS. Often, the operating system itself does not even know the DPI of the connected screens (and reports it as 96 dpi, which I suspect might be the reason why many people seem to believe that their method of detecting DPI in JavaScript is accurate). Also, when multiple screens are connected to a device forming a unified display, the viewport and even a single DOM element can span multiple screens with different DPIs, which would make these calculations quite challenging.
Most of the methods described in the other answers will almost always result in an output of 96 dpi, even though most screens nowadays have a higher DPI. For example, the screen of my ThinkPad T14 has 157 dpi, according to this calculator, but all the methods described here and my operating system tell me that it has 96 dpi.
Your idea of assigning a CSS width of 1in to a DOM element does not work. It seems that a CSS inch is defined as 96 CSS pixels. By my understanding, a CSS pixel is defined as a pixel multiplied by the devicePixelRatio, which traditionally is 1, but can be higher or lower depending on the zoom level configured in the graphical interface of the operating system and in the browser.
It seems that the approach of using resolution media queries produces at least some results on a few devices, but they are often still off by a factor of more than 2. Still, on most devices this approach also results in a value of 96 dpi.
I think your best approach is to combine the suggestion of the "sniffer" image with a matrix of known DPIs for devices (via user agent and other methods). It won't be exact and will be a pain to maintain, but without knowing more about the app you're trying to make that's the best suggestion I can offer.
Can't you do anything else? For instance, if you are generating an image to be recognized by a camera (i.e. you run your program, swipe your cellphone across a camera, magic happens), can't you use something size-independent?
If this is an application to be deployed in controlled environments, can you provide a calibration utility? (you could make something simple like print business cards with a small ruler in it, use it during the calibration process).
I just found this link: http://dpi.lv/. Basically it is a webtool to discover the client device resolution, dpi, and screen size.
I visited on my computer and mobile phone and it provides the correct resolution and DPI for me. There is a github repo for it, so you can see how it works.
Generate a list of known DPI:
https://stackoverflow.com/a/6793227
Detect the exact device. Using something like:
navigator.userAgent.toLowerCase();
For example, when detecting mobile:
window.isMobile=/iphone|ipod|ipad|android|blackberry|opera mini|opera mobi|skyfire|maemo|windows phone|palm|iemobile|symbian|symbianos|fennec/i.test(navigator.userAgent.toLowerCase());
And profit!
Readable code from the @Endless reply:
const dpi = (function () {
    let i = 1;
    while ( !hasMatch(i) ) i *= 2;

    function getValue(start, end) {
        if (start > end) return -1;
        let average = (start + end) / 2;
        if ( hasMatch(average) ) {
            if ( start == average || !hasMatch(average - 1) ) {
                return average;
            } else {
                return getValue(start, average - 1);
            }
        } else {
            return getValue(average + 1, end);
        }
    }

    function hasMatch(x) {
        return matchMedia(`(max-resolution: ${x}dpi)`).matches;
    }

    return getValue(i / 2, i) | 0;
})();
Maybe I'm steering a little bit off this topic...
I was working on an HTML canvas project, which was intended to provide a drawing canvas for people to draw lines on. I wanted to set the canvas's size to 198 x 280 mm, which fits A4 printing.
So I started to search for a solution to convert 'mm' to 'px' and to display the canvas suitably on both PC and mobile.
I tried the solution from @Endless; the code is:
const canvas = document.getElementById("canvas");
function findFirstPositive(b, a, i, c) {
    c = (d, e) => e >= d ? (a = d + (e - d) / 2, 0 < b(a) && (a == d || 0 >= b(a - 1)) ? a : 0 >= b(a) ? c(a + 1, e) : c(d, a - 1)) : -1
    for (i = 1; 0 >= b(i);) i *= 2
    return c(i / 2, i) | 0
}
const dpi = findFirstPositive(x => matchMedia(`(max-resolution: ${x}dpi)`).matches)
let w = 198 * dpi / 25.4;
let h = 280 * dpi / 25.4;
canvas.width = w;
canvas.height = h;
It worked well in a PC browser, showing dpi = 96 and a size of 748 x 1058 px.
However, on mobile devices it was much larger than I expected: 1902 x 2689 px, which didn't work at all on mobile.
After searching for keywords like devicePixelRatio, I suddenly realized that I don't actually need to show the real A4 size on a mobile screen (where it would be hard to use anyway); I just need the canvas size to be fit for printing, so I simply set the size to:
let [w,h] = [748,1058];
canvas.width = w;
canvas.height = h;
...and it prints well.