How to get two variables to always equal a constant ratio? - javascript

I have two variables: width and height. Assume that the variables are two random positive numbers.
Width and height must always maintain a ratio of 1.4:1.
To put it another way,
width / height === 1.4
must always evaluate to true.
In JavaScript, how do I change width and height so that they always maintain this constant ratio?

const obj = {
  width: 100 * Math.random(),
  get height() { return this.width / 1.4; },
  set height(v) { this.width = v * 1.4; },
};
Use getters and setters to change them both at the same time.
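As a quick check of the behavior (a self-contained rerun of the object above): assigning to height goes through the setter, which rewrites width, so the ratio holds no matter which property you touch.

```javascript
const obj = {
  width: 100 * Math.random(),
  get height() { return this.width / 1.4; },
  set height(v) { this.width = v * 1.4; },
};

obj.height = 50;          // the setter updates width as a side effect
console.log(obj.width);   // 70
console.log(obj.width / obj.height); // 1.4
```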

If you're getting two random numbers and need to change them to equal the ratio, you'll need to pick one and define the other based on it. For instance:
width = height * 1.4;
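If neither of the two random numbers should simply win, one possible compromise (an assumption on my part, not part of the question) is to keep their product constant and derive a fresh pair at the 1.4:1 ratio from it:

```javascript
// Given arbitrary positive w and h, return a pair at ratio `ratio`:1
// that preserves the original area w * h. The area-preserving choice
// is a hypothetical policy, not something the question mandates.
function toRatio(w, h, ratio = 1.4) {
  const height = Math.sqrt((w * h) / ratio);
  return { width: ratio * height, height };
}

const { width, height } = toRatio(30, 70);
console.log(width / height); // ≈ 1.4
```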

Related

Getting the most rectangular width and height values that can fit a large amount of pixels in an image

What's the best way to calculate the size needed to fit an array of pixels into an image as close as possible to a rectangle/square shape without losing or adding unnecessary pixels?
Using an image of 100 pixels as an example, the best size to fit all pixels would be 10x10, because 10*10 is 100; that is, it adds the least amount of extra pixels (in this case 0). 100x1 also fits all pixels, but it is far less rectangular than 10x10.
And to fit 101 pixels the best size is 8x13: although 8*13 is 104, it fits every pixel, adds only a few extra (3), and has the most rectangular shape.
So far I have been able to solve this problem with the following rules:
Dividing width by height must result in a value greater than 0.5 and less than 1.5. (This ensures that only the most rectangular pairs are kept.)
Multiplying width by height must result in a value greater than or equal to the number of pixels.
Applying these rules leaves a set of candidates, the best being the one whose product most closely approximates the number of pixels.
Here's what my code looks like currently:
function loopPixels(pixels, callback) {
  for (let x = 2; x < pixels; x++) {
    for (let y = 2; y < pixels; y++) {
      callback(x, y);
    }
  }
}

function getRectangle(pixels) {
  let result = {extraPixels: pixels};
  loopPixels(pixels, (left, right) => {
    let half = (left / right);
    let total = (left * right);
    if (Math.round(half) == 1 && total >= pixels) {
      if (total - pixels < result.extraPixels) {
        result = {size: [left, right], extraPixels: total - pixels};
      }
    }
  });
  return result;
}
getRectangle(101) // should yield {size: [8, 13], extraPixels: 3} (width, height and additional pixels)
What it does is keep a variable holding the smallest result of (width*height) - pixels, which is the difference between a candidate pair's product and the number of pixels.
Although it works fine for small pixel counts, with huge values (which would likely return sizes around 1000x1000) it's extremely slow.
Is there a specific reason for such slowness? And would it be possible to get the same result without using nested for loops?
The following code can be made more efficient, but it is very descriptive. It takes a pixel count (n) and a value k denoting the number of best matches to return.
Let's try 68M pixels for some reasonable aspect ratios.
function getReasonableDimensions(n, k) {
  var max = ~~Math.sqrt(n);
  return Array.from({length: max}, (_, i, a) => [n % (max - i), max - i])
              .sort((a, b) => a[0] - b[0])
              .slice(0, k)
              .map(t => [Math.floor(n / t[1]), t[1]]);
}

var res = getReasonableDimensions(68000000, 10);
console.log(JSON.stringify(res));
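For comparison, here is a sketch (names are my own) that keeps only the single best pair: it scans candidate heights from floor(sqrt(n)) downward, computes the matching width as ceil(n / h), and applies the question's roundness rule, so it runs in O(√n) instead of the O(n²) of the nested loops.

```javascript
// Find the most rectangular width x height covering at least n pixels.
// One pass over candidate heights from floor(sqrt(n)) down to 1.
function getRectangleFast(n) {
  let best = { size: null, extraPixels: Infinity };
  for (let h = Math.floor(Math.sqrt(n)); h >= 1; h--) {
    const w = Math.ceil(n / h);                // smallest width covering n at this height
    if (Math.round(h / w) !== 1) continue;     // same roundness rule as the question
    const extra = w * h - n;
    if (extra < best.extraPixels) {
      best = { size: [w, h], extraPixels: extra };
    }
  }
  return best;
}

console.log(getRectangleFast(101)); // { size: [13, 8], extraPixels: 3 }
```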

CSS / CGRect Positioning

CSS
I have a database with a table plan; each table has a geometry property holding x and y values.
When presented in a web browser, these values are rendered like so:
getStyles() {
  const { x, y } = this.props.geometry || {};
  return {
    left: `${x}%`,
    top: `${y}%`,
  };
}
So obviously the x and y are percentage values out of 100.
iOS
I've created a UIScrollView and a subclass for the view of each table (TableView).
When the view is added to the scrollView, a method inside TableView gets called to update the table position which looks like this:
- (void)updateTablePosition:(Table *)table {
    if (self.superview) {
        float x_position = (self.superview.frame.size.width / 100) * table.position.x;
        float y_position = (self.superview.frame.size.height / 100) * table.position.y;
        [self setFrame:CGRectMake(x_position, y_position, self.frame.size.width, self.frame.size.height)];
    }
}
The positions are perfect! However, I have a pan gesture on each TableView that allows me to move it and change its position. The only problem is that I can't figure out how to translate this value back to what it would be in CSS (a percentage).
Edit: Removed code to change position because it was completely wrong.
- (void)tablePanAction:(UIPanGestureRecognizer *)sender {
    UIScrollView *scrollView = (UIScrollView *)self.superview;
    if (sender.state == UIGestureRecognizerStateBegan) {
        [scrollView setScrollEnabled:NO];
    }
    CGPoint updatedLocation = [sender locationInView:scrollView];
    self.center = updatedLocation;
    if (sender.state == UIGestureRecognizerStateEnded) {
        // from here, we should convert the updated location
        // back to a relative percentage of the scrollView
    }
}
And it turns out the answer was ridiculously simple; it just took a little bit of thinking. Anyway, I'm answering this question just in case anyone isn't completely clued up on the maths.
The new Table.geometry positions can be calculated like so:
float new_css_x = (self.frame.origin.x / scrollView.frame.size.width) * 100;
float new_css_y = (self.frame.origin.y / scrollView.frame.size.height) * 100;
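In JavaScript terms, the two conversions are simply inverses of each other. A minimal sketch (hypothetical names; containerSize stands in for the scroll view's frame width or height):

```javascript
// Percent -> pixels (what updateTablePosition does)
function percentToPixel(percent, containerSize) {
  return (containerSize / 100) * percent;
}

// Pixels -> percent (what the pan handler needs on gesture end)
function pixelToPercent(pixel, containerSize) {
  return (pixel / containerSize) * 100;
}

const px = percentToPixel(25, 640);   // 160
console.log(pixelToPercent(px, 640)); // 25
```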

Why does this expression equal NaN, but equal a valid answer when defined somewhere else?

So I'm writing a game on JS Canvas and I'm making my own GUI from scratch. To do so, I made a button object with fields x, y, width, height and intersects(click_event). For some reason, when I directly put this expression for x, it returns NaN even though the expression works everywhere else.
It's just a simple game on Canvas. I know I could probably use some dirty trick to work around it, but I want to keep my code clean. I just don't understand why this wouldn't work.
var button = {
  height: 80,
  width: 200,
  x: canvas.width/2 - this.width/2, // this is the problem
  y: 200,
  // other stuff
};
console.log(button.x); //this prints "NaN"
console.log(canvas.width/2 - button.width/2); //prints correct num
The canvas width is 1000, so 1000 / 2 - 200 / 2 should equal 400, which it does when called inside console.log.
But when I put it inside button.x it evaluates to NaN.
You can't access/reference a property of an object while initializing it with an object literal; at that point, this does not refer to the object being created.
So this will never work:
var myObject = {
  height: 2,
  doubleHeight: 2 * this.height // this is not myObject here
}
One solution would be to add the property after you have initialized the object. Your code would then look like this:
var button = {
  height: 80,
  width: 200,
  y: 200,
  // other stuff
};
button.x = canvas.width/2 - button.width/2;
Another solution would be to wrap the object in a function:
function createButton(height, width, canvasWidth) {
  return {
    height: height,
    width: width,
    y: 200,
    x: canvasWidth/2 - width/2
  };
}
It can also be achieved with a constructor function:
var button = new function() {
  this.height = 80;
  this.width = 200;
  this.x = canvas.width/2 - this.width/2;
  this.y = 200;
};
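As yet another option (a sketch, assuming canvas is already defined; a plain object stands in for it here), a getter keeps x in sync with width instead of computing it once:

```javascript
const canvas = { width: 1000 }; // stand-in for the real canvas element

var button = {
  height: 80,
  width: 200,
  y: 200,
  // recomputed on every access, so it always reflects the current width
  get x() { return canvas.width / 2 - this.width / 2; }
};

console.log(button.x); // 400
```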

How to plot x,y coordinates of an item on an image in HTML

I need to place/plot a pin (a small image pointing at parts of a person) on an image (for example, an image of a person).
I am getting x, y, height and width values from the server for each pin, and I create one div element per pin and assign it those x, y, height and width values.
In JavaScript, I calculate a view scale value as shown below and multiply x, y, width and height by it before assigning them to the pin's div element.
const screenwidth = screen.width;
const screenheight = screen.height;
viewscale = Math.min(screenwidth / mainImagewidth, screenheight / mainImageheight);
I am not able to place the pin at the exact position on the main image. Please help me if you have an idea about this logic.
Update:
Please find the explanation in the image below.
The red rectangle is the screen. The green one is the main image, let's say a human image. The black rectangle is a pin describing a part of the human image; I get the x, y coordinates of this black pin from the server.
Assuming I've understood this correctly, here's a possible demo solution:
First, define a config for your pin:
const pinConfig = {
  width: 45,
  height: 45,
  offsetLeft: 40,
  offsetTop: 75
};
Define a simple key/value map for getting the correct size type (width or height) when given an offset type (left or top):
const offsetTypeToSizeDimensionMap = {
  left: 'width',
  top: 'height'
};
Use a simple fn that calculates the offset position relative to the size: size / 2 compensates for the pin's own dimensions, so positioning is based on the center of the element.
const calcRelativeOffsetPos = (offsetPos, size) => offsetPos - (size / 2);
Here's a style attribute string generating fn, accepts an object (our pinConfig above, basically):
const generateStylesString = (stylesConfig) => {
  return Object.keys(stylesConfig).map((styleProp) => {
    if (styleProp.includes('offset')) {
      const stylePropName = styleProp.split('offset')[1].toLowerCase();
      const relativeSizeTypeByOffsetType = offsetTypeToSizeDimensionMap[stylePropName];
      const calculatedRelativeOffsetPos = calcRelativeOffsetPos(stylesConfig[styleProp], stylesConfig[relativeSizeTypeByOffsetType]);
      return stylePropName + ': ' + calculatedRelativeOffsetPos + 'px; ';
    }
    return styleProp + ': ' + stylesConfig[styleProp] + 'px; ';
  }).join('');
};
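Putting the pieces together (a self-contained rerun of the snippets above, with the helper names shortened), the config produces the following string:

```javascript
const pinConfig = { width: 45, height: 45, offsetLeft: 40, offsetTop: 75 };
const offsetTypeToSizeDimensionMap = { left: 'width', top: 'height' };
const calcRelativeOffsetPos = (offsetPos, size) => offsetPos - (size / 2);

const generateStylesString = (stylesConfig) => {
  return Object.keys(stylesConfig).map((styleProp) => {
    if (styleProp.includes('offset')) {
      const stylePropName = styleProp.split('offset')[1].toLowerCase();
      const sizeType = offsetTypeToSizeDimensionMap[stylePropName];
      // center the pin: offset minus half of the matching dimension
      const pos = calcRelativeOffsetPos(stylesConfig[styleProp], stylesConfig[sizeType]);
      return stylePropName + ': ' + pos + 'px; ';
    }
    return styleProp + ': ' + stylesConfig[styleProp] + 'px; ';
  }).join('');
};

console.log(generateStylesString(pinConfig));
// width: 45px; height: 45px; left: 17.5px; top: 52.5px;
```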
Finally, set the style attribute on the .child-image node:
document.querySelector('.child-image').setAttribute('style', generateStylesString(pinConfig));
Here's an example on Codepen: https://codepen.io/Inlesco/pen/xLwjLy?editors=1010
If you need the React way, it's easy: just pass the generated inline styles string to the JSX elements when mapping them out, and that's it.
Feel free to provide feedback, so we can improve this :)

Measure distance between two HTML elements' centers

If I have HTML elements as follows:
<div id="x"></div>
<div id="y" style="margin-left:100px;"></div>
...how do I find the distance between them in pixels using JavaScript?
Get their positions, and use the Pythagorean Theorem to determine the distance between them...
function getPositionAtCenter(element) {
  const {top, left, width, height} = element.getBoundingClientRect();
  return {
    x: left + width / 2,
    y: top + height / 2
  };
}

function getDistanceBetweenElements(a, b) {
  const aPosition = getPositionAtCenter(a);
  const bPosition = getPositionAtCenter(b);
  return Math.hypot(aPosition.x - bPosition.x, aPosition.y - bPosition.y);
}

const distance = getDistanceBetweenElements(
  document.getElementById("x"),
  document.getElementById("y")
);
If your browser doesn't support Math.hypot(), you can use this instead:
Math.sqrt(
  Math.pow(aPosition.x - bPosition.x, 2) +
  Math.pow(aPosition.y - bPosition.y, 2)
);
The Pythagorean Theorem relates to the relationship between the sides of a right-angled triangle.
The elements are plotted on a Cartesian coordinate system (with origin in top left), so you can imagine a right-angled triangle between the elements' coordinates (the unknown side is the hypotenuse).
You can rearrange the equation to solve for c by taking the square root of the other side.
Then you simply plug in the values (x and y being the differences between the elements' centers) and you get the length of the hypotenuse, which is the distance between the elements.
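For instance, if the centers differ by 3px horizontally and 4px vertically, both formulas give the classic 3-4-5 triangle:

```javascript
const dx = 3; // horizontal difference between centers
const dy = 4; // vertical difference between centers

console.log(Math.hypot(dx, dy));           // 5
console.log(Math.sqrt(dx * dx + dy * dy)); // 5
```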
Since the divs are currently empty, the basic idea is to measure the distance between their top-left corners:
distX = y.offsetLeft - x.offsetLeft;
distY = y.offsetTop - x.offsetTop;
distance = Math.sqrt(distX * distX + distY * distY);
alert(Math.floor(distance));
But you have to subtract the first div's height and width if you put something inside. This method also has some cross-browser issues with element border widths.
Anyway, take a look at the Fiddle.
Note that even with content (if you don't change it with CSS) the divs will be 100% width, so if you just want to measure the vertical distance, use:
distance = y.offsetTop - x.offsetTop;
