Math.log, a Natural logarithmic y scale for currency time-series? - javascript

My attempts to make my y values scale logarithmically are turning my data upside-down.
I have vanilla JS code, and every implementation I read about is tied up in huge libraries of production code; I am not sure how or what to extract. I need some guidance, as I could not put my finger on the problem, whether it be the sum or misuse of the Math functions.
I am testing this by drawing the y 'data' to canvas (no libraries used); my x axis is a constant 2px difference.
Math.log uses e (2.718) as the default base, which is what I have read. So I should be seeing my price data on a natural log scale, but it won't work.
function logScale(data){
    var log = data.slice(0);
    var i = log.length;
    while(i--){
        log[i] = Math.log(data[i]); // should be natural but I don't see a change
        // log[i] = Math.pow(Math.log(data[i]), 10); // upside-down
        // log[i] = Math.log(data[i]) / Math.LN10;   // no visible change when drawn to canvas
    }
    console.dir(log);
    return log;
}
Another attempt, from a couple of weeks ago, where I am using the data's min, max and difference, then removing all the infinities:
function logarithm(data){
    var Lmax, Lmin, Ldif, Logarithm, infinity;
    Lmax = Math.max.apply(this, data);
    Lmin = Math.min.apply(this, data);
    Ldif = (Lmax - Lmin);
    Logarithm = [];
    infinity = [];
    for(var i = data.length - 1; i >= 0; i--){
        Logarithm[i] = Math.log((data[i] - Lmin) / Ldif);
        if(Logarithm[i] === -Infinity){ infinity.push(i); }
    }
    // the indices were collected in descending order, so splicing this way is safe
    for(var i = 0; i <= infinity.length - 1; i++){ Logarithm.splice(infinity[i], 1); }
    return Logarithm;
}
The data looks different, but still not quite like a log scale; it is vertically warped, to best describe it.
Please note that jsfiddle-ing this won't work, as the data is bitcoin prices (real time); since there is no working code for a log scale, there is no good way to show a comparison. Bitcoin or any other exchange data gets served as-is, so these functions would (if they worked) transform any data array to log scale.
How does D3 do it? What is wrong with my code?
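For comparison, here is a sketch of what I understand the missing step might be (my own untested code): the log values still have to be normalized to pixel coordinates, which I assume is what D3's log scale does internally, and the canvas y axis grows downward, so it has to be flipped:
function logYScale(data, canvasHeight) {
    var logs = data.map(Math.log); // natural log of each price (prices must be > 0)
    var min = Math.min.apply(null, logs);
    var max = Math.max.apply(null, logs);
    return logs.map(function (v) {
        // normalize to [0,1], then flip because canvas y grows downward
        return canvasHeight * (1 - (v - min) / (max - min));
    });
}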

Related

JavaScript: How do I use a string as a piece of code?

I am making a simple-ish graph maker to visualise equations. I need to be able to have the user input a string in a textbox and then interpret that as a piece of code I can execute to produce the graph. The way I am displaying the graph is by going through x in small increments, using an equation to calculate the y position, and then drawing a line between the points. At the moment I am just manually making a function in the code, for example:
function(val) { return (val * val) + 5; }
but I need to be able to create a similar function from a string, so the user could just input something like "(x*x)+(2*x)". Is there any way to do this?
Canonically, this is done with eval(), although it comes with many caveats and should probably be avoided.
There are several questions on SO that discuss eval alternatives, but in your case, I would suggest a very bare-bones parser -- especially if all you're handling are mathematical equations.
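For example, a bare-bones sketch using the Function constructor (this carries the same injection caveats as eval, and assumes "x" is the only variable in the string):
function makeEquation(expr) {
    // builds function(x){ return (<expr>); } from the user's string;
    // it evaluates arbitrary code, so only feed it trusted input
    return new Function("x", "return (" + expr + ");");
}
var f = makeEquation("(x*x)+(2*x)");
f(3); // 15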

Multiplayer Game - Client Interpolation Calculation?

I am creating a multiplayer game using Socket.IO in JavaScript. The game works perfectly at the moment, aside from the client interpolation. Right now, when I get a packet from the server, I simply set the client's position to the position sent by the server. Here is what I have tried to do:
getServerInfo(packet) {
    var otherPlayer = players[packet.id];      // GET PLAYER
    otherPlayer.setTarget(packet.x, packet.y); // SET TARGET TO MOVE TO
    ...
}
So I set the players Target position. And then in the Players Update method I simply did this:
var update = function(delta) {
    if (x != target.x || y != target.y) {
        var direction = Math.atan2((target.y - y), (target.x - x));
        x += (delta * speed) * Math.cos(direction);
        y += (delta * speed) * Math.sin(direction);
        var dist = Math.sqrt((x - target.x) * (x - target.x) +
                             (y - target.y) * (y - target.y));
        if (dist < threshold) {
            x = target.x;
            y = target.y;
        }
    }
}
This basically moves the player in the direction of the target at a fixed speed. The issue is that the player arrives at the target either before or after the next information arrives from the server.
Edit: I have just read Gabriel Gambetta's article on this subject, and he mentions this:
Say you receive position data at t = 1000. You already had received data at t = 900, so you know where the player was at t = 900 and t = 1000. So, from t = 1000 and t = 1100, you show what the other player did from t = 900 to t = 1000. This way you’re always showing the user actual movement data, except you’re showing it 100 ms “late”.
This again assumes that it is exactly 100ms late. If your ping varies a lot, this will not work.
Would you be able to provide some pseudocode so I can get an idea of how to do this?
I have found this question online here. But none of the answers provide an example of how to do it, only suggestions.
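For concreteness, here is my reading of the quoted approach as a sketch (my own names; a fixed render delay with timestamped snapshots, which is exactly what breaks down when ping varies):
var snapshots = []; // pushed on every packet: snapshots.push({t: Date.now(), x: packet.x, y: packet.y})
var RENDER_DELAY = 100; // ms; must exceed the typical update interval

function renderPosition(now) {
    var t = now - RENDER_DELAY;
    // find the two snapshots that bracket the (delayed) render time
    for (var i = snapshots.length - 1; i > 0; i--) {
        var a = snapshots[i - 1], b = snapshots[i];
        if (a.t <= t && t <= b.t) {
            var k = (t - a.t) / (b.t - a.t); // 0..1 between the two snapshots
            return { x: a.x + (b.x - a.x) * k, y: a.y + (b.y - a.y) * k };
        }
    }
    // no bracketing pair yet: fall back to the newest known position
    return snapshots.length ? snapshots[snapshots.length - 1] : null;
}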
I'm completely fresh to multiplayer game client/server architecture and algorithms; however, in reading this question, the first thing that came to mind was implementing second-order (or higher) Kalman filters on the relevant variables for each player.
Specifically, the Kalman prediction steps, which are much better than simple dead reckoning. Also, Kalman prediction and update steps work somewhat as weighted or optimal interpolators. And furthermore, the dynamics of players could be encoded directly rather than playing around with the abstracted parameterizations used in other methods.
Meanwhile, a quick search led me to this:
An improvement of dead reckoning algorithm using kalman filter for minimizing network traffic of 3d on-line games
The abstract:
Online 3D games require efficient and fast user interaction support
over network, and the networking support is usually implemented using
network game engine. The network game engine should minimize the
network delay and mitigate the network traffic congestion. To minimize
the network traffic between game users, a client-based prediction
(dead reckoning algorithm) is used. Each game entity uses the
algorithm to estimates its own movement (also other entities'
movement), and when the estimation error is over threshold, the entity
sends the UPDATE (including position, velocity, etc) packet to other
entities. As the estimation accuracy is increased, each entity can
minimize the transmission of the UPDATE packet. To improve the
prediction accuracy of dead reckoning algorithm, we propose the Kalman
filter based dead reckoning approach. To show real demonstration, we
use a popular network game (BZFlag), and improve the game optimized
dead reckoning algorithm using Kalman filter. We improve the
prediction accuracy and reduce the network traffic by 12 percents.
It might seem wordy, and like a whole new problem to learn what it's all about... and discrete state-space for that matter.
Briefly, I'd say a Kalman filter is a filter that takes into account uncertainty, which is what you've got here. It normally works on measurement uncertainty at a known sample rate, but it could be re-tooled to work with uncertainty in measurement period/phase.
The idea being that in lieu of a proper measurement, you'd simply update with the Kalman predictions. The tactic is similar to target tracking applications.
I was recommended them on Stack Exchange myself; it took about a week to figure out how they were relevant, but I've since implemented them successfully in vision-processing work.
(...it's making me want to experiment with your problem now !)
As I wanted more direct control over the filter, I copied someone else's roll-your-own implementation of a Kalman filter in MATLAB into OpenCV (in C++):
void Marker::kalmanPredict(){
    // Prediction for the state vector
    Xx = A * Xx;
    Xy = A * Xy;
    // ...and the covariance
    Px = A * Px * A.t() + Q;
    Py = A * Py * A.t() + Q;
}

void Marker::kalmanUpdate(Point2d& measuredPosition){
    // Kalman gain K:
    Mat tempINVx = Mat(2, 2, CV_64F);
    Mat tempINVy = Mat(2, 2, CV_64F);
    tempINVx = C * Px * C.t() + R;
    tempINVy = C * Py * C.t() + R;
    Kx = Px * C.t() * tempINVx.inv(DECOMP_CHOLESKY);
    Ky = Py * C.t() * tempINVy.inv(DECOMP_CHOLESKY);
    // Estimate of velocity; units are pixels.s^-1
    Point2d measuredVelocity = Point2d(measuredPosition.x - Xx.at<double>(0),
                                       measuredPosition.y - Xy.at<double>(0));
    Mat zx = (Mat_<double>(2,1) << measuredPosition.x, measuredVelocity.x);
    Mat zy = (Mat_<double>(2,1) << measuredPosition.y, measuredVelocity.y);
    // Kalman correction based on position measurement and velocity estimate:
    Xx = Xx + Kx * (zx - C * Xx);
    Xy = Xy + Ky * (zy - C * Xy);
    // ...and the covariance again
    Px = Px - Kx * C * Px;
    Py = Py - Ky * C * Py;
}
I don't expect you to be able to use this directly, though, but if anyone comes across it and understands what 'A', 'P', 'Q' and 'C' are in state-space (hint hint: state-space understanding is a prerequisite here), they'll likely see how to connect the dots.
(Both MATLAB and OpenCV have their own Kalman filter implementations included, by the way...)
This question is being left open with a request for more detail, so I’ll try to fill in the gaps of Patrick Klug’s answer. He suggested, reasonably, that you transmit both the current position and the current velocity at each time point.
Since two position and two velocity measurements give a system of four equations, it enables us to solve for a system of four unknowns, namely a cubic spline (which has four coefficients: a, b, c and d). In order for this spline to be smooth, the first and second derivatives (velocity and acceleration) should be equal at the endpoints. There are two standard, equivalent ways of calculating this: Hermite splines (https://en.wikipedia.org/wiki/Cubic_Hermite_spline) and Bézier splines (http://mathfaculty.fullerton.edu/mathews/n2003/BezierCurveMod.html).

For a two-dimensional problem such as this, I suggested separating variables and finding splines for both x and y based on the tangent data in the updates, which is called a clamped piecewise cubic Hermite spline. This has several advantages over the splines in the link above, such as cardinal splines, which do not take advantage of that information. The locations and velocities at the control points will match, you can interpolate up to the last update rather than the one before, and you can apply this method just as easily to polar coordinates if the game world is inherently polar like Space wars. (Another approach sometimes used for periodic data is to perform an FFT and do trigonometric interpolation in the frequency domain, but that doesn't sound applicable here.)
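As a concrete sketch (my own variable names, one coordinate at a time): given positions p0, p1 and velocities v0, v1 from two consecutive updates dt milliseconds apart, and a normalized time t in [0,1], the clamped Hermite segment is:
function hermite(p0, v0, p1, v1, dt, t) {
    // standard cubic Hermite basis functions
    var t2 = t * t, t3 = t2 * t;
    var h00 = 2 * t3 - 3 * t2 + 1;
    var h10 = t3 - 2 * t2 + t;
    var h01 = -2 * t3 + 3 * t2;
    var h11 = t3 - t2;
    // velocities are scaled by dt so the tangents are in the same units as t
    return h00 * p0 + h10 * (v0 * dt) + h01 * p1 + h11 * (v1 * dt);
}
// interpolate each coordinate separately:
// x = hermite(x0, vx0, x1, vx1, dt, t); y = hermite(y0, vy0, y1, vy1, dt, t);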
What originally appeared here was a derivation of the Hermite spline using linear algebra in a somewhat unusual way that (unless I made a mistake entering it) would have worked. However, the comments convinced me it would be more helpful to give the standard names for what I was talking about. If you are interested in the mathematical details of how and why this works, this is a better explanation: https://math.stackexchange.com/questions/62360/natural-cubic-splines-vs-piecewise-hermite-splines
A better algorithm than the one I gave is to represent the sample points and first derivatives as a tridiagonal matrix that, multiplied by a column vector of coefficients, produces the boundary conditions, and solve for the coefficients. An alternative is to add control points to a Bézier curve where the tangent lines at the sampled points intersect and on the tangent lines at the endpoints. Both methods produce the same, unique, smooth cubic spline.
One situation you might be able to avoid if you were choosing the points rather than receiving updates is if you get a bad sample of points. You can’t, for example, intersect parallel tangent lines, or tell what happened if it’s back in the same place with a nonzero first derivative. You’d never choose those points for a piecewise spline, but you might get them if an object made a swerve between updates.
If my computer weren’t broken right now, here is where I would put fancy graphics like the ones I posted to TeX.SX. Unfortunately, I have to bow out of those for now.
Is this better than straight linear interpolation? Definitely: linear interpolation will give you straight-line paths, quadratic splines won't be smooth, and higher-order polynomials will likely be overfitted. Cubic splines are the standard way to solve that problem.
Are they better for extrapolation, where you try to predict where a game object will go? Possibly not: this way, you’re assuming that a player who’s accelerating will keep accelerating, rather than that they will immediately stop accelerating, and that could put you much further off. However, the time between updates should be short, so you shouldn’t get too far off.
Finally, you might make things a lot easier on yourself by programming in a bit more conservation of momentum. If there’s a limit to how quickly objects can turn, accelerate or decelerate, their paths will not be able to diverge as much from where you predict based on their last positions and velocities.
Depending on your game you might want to prefer smooth player movement over super-precise location. If so, then I'd suggest aiming for 'eventual consistency'. I think your idea of keeping 'real' and 'simulated' data points is a good one. Just make sure that from time to time you force the simulated to converge with the real, otherwise the gap will get too big.
Regarding your concern about different movement speed I'd suggest you include the current velocity and direction of the player in addition to the current position in your packet. This will enable you to more smoothly predict where the player would be based on your own framerate/update timing.
Essentially you would calculate the current simulated velocity and direction taking into account the last simulated location and velocity as well as last known location and velocity (put more emphasis on the second) and then simulate new position based on that.
If the gap between simulated and known gets too big, just put more emphasis on the known location and the otherPlayer will catch up quicker.
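As a rough sketch of that convergence (the names and the blend constant are illustrative; tune them for your game):
// each frame: advance the simulated position with the last known velocity,
// then blend toward the last known (server) position so the gap shrinks
function updateSimulated(sim, known, delta) {
    sim.x += known.vx * delta;
    sim.y += known.vy * delta;
    var blend = 0.1; // raise this to put more emphasis on the known location
    sim.x += (known.x - sim.x) * blend;
    sim.y += (known.y - sim.y) * blend;
}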

JS Canvas get pixel value very frequently

I am creating a video game based on Node.js/WebGL/Canvas/PIXI.js.
In this game, blocks have a generic size: they can be circles, polygons, or anything. So, my physics engine needs to know where exactly things are, which pixels are walls and which are not. Since I think PIXI doesn't allow this, I create an invisible canvas where I put all the walls' images of the map. Then, I use the function getImageData to create a function "isWall" at (x, y):
function isWall(x, y) {
    // getImageData lives on the 2D context, not on the canvas element
    return context.getImageData(x, y, 1, 1).data[3] != 0;
}
However, this is very slow (it takes up to 70% of the CPU time of the game, according to Chrome profiling). Also, since I introduced this function, I sometimes got the error "Oops, WebGL crashed" without any additional advice.
Is there a better method to access the value of the pixel? I thought about storing everything in a static bit array (walls have a fixed size), with 1 corresponding to a wall and 0 to a non-wall. Is it reasonable to have a 10-million-cell array in memory?
Some thoughts:
For a first check: use collision regions for all of your objects. The regions can even be defined for each side depending on shape (i.e. complex shapes). Only check for collisions inside intersecting regions.
Use half resolution for hit-test bitmaps (or even 25% if your scenario allows). Our brains are not capable of detecting pixel-accurate collisions when things are moving, so this can be taken advantage of.
For complex shapes, pre-store the whole bitmap (based on its region(s)), but transform it to a single typed array such as a Uint8Array with high and low values (re-use this instead of reading pixels one by one via the context). Subtract the object's position and use the result as a delta into your shape region, then hit-test the "bitmap". If the shape rotates, transform the incoming check points accordingly (there is probably a sweet spot where updating the bitmap becomes faster than transforming a bunch of points, etc.; you need to test for your scenario).
For close-to-square shapes, compromise and use a simple rectangle check.
For circles and ellipses, compare squared distances against the squared radius so you can skip the square root (see the sketch after this list).
In some cases you can perhaps use collision predictions, which you calculate before the game starts, when all objects' positions, directions and velocities are known (calculate the complete motion path, find intersections for those paths, and calculate the time/distance to those intersections). If your objects change direction etc. due to other events during their path, this will of course not work so well (try and see whether re-calculating is beneficial or not).
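Minimal sketches of two of the checks above (variable names are mine):
// circle hit test without Math.sqrt: compare squared distances
function hitCircle(px, py, cx, cy, r) {
    var dx = px - cx, dy = py - cy;
    return dx * dx + dy * dy <= r * r;
}
// pre-stored hit bitmap for a complex shape, one byte per pixel
// (shape.mask is assumed to be a Uint8Array of size shape.w * shape.h)
function hitBitmap(px, py, shape) {
    var x = px - shape.x, y = py - shape.y; // delta into the shape's region
    if (x < 0 || y < 0 || x >= shape.w || y >= shape.h) return false;
    return shape.mask[y * shape.w + x] !== 0;
}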
I'm not sure why you would need 10m stored in memory; it's doable, though, but you will need to use something like a quad-tree and split the array up so it becomes efficient to look up a pixel's state. IMO you will only need to store "bits" for the complex shapes, and you can limit it further by defining multiple regions per shape. For simpler shapes, just use vectors (rectangles, radius/distance). Do performance tests often to find the right balance.
In any case, these sorts of things have to be hand-optimized for the very scenario, so this is just a general take on it. Other factors will affect the approach, such as high velocities, rotation, reflection etc., and it will quickly become very broad. Hope this gives some input though.
I use bit arrays to store 0 || 1 info and it works very well.
The information is stored compactly and gets/sets are very fast.
Here is the bit library I use:
https://github.com/drslump/Bits-js/blob/master/lib/Bits.js
I've not tried with 10m bits so you'll have to try it on your own dataset.
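The core idea is just bit packing into a typed array; a minimal sketch (my own, not the linked library's API):
function BitGrid(width, height) {
    this.width = width;
    this.bits = new Uint32Array(Math.ceil(width * height / 32)); // 32 flags per element
}
BitGrid.prototype.set = function (x, y) {
    var i = y * this.width + x;
    this.bits[i >>> 5] |= 1 << (i & 31);
};
BitGrid.prototype.get = function (x, y) {
    var i = y * this.width + x;
    return (this.bits[i >>> 5] & (1 << (i & 31))) !== 0;
};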
The solution you propose is very "flat", meaning each pixel must have a corresponding bit. This results in a large amount of memory being required, even if the information is stored as bits.
An alternative is to test data ranges instead of testing each pixel:
If the number of wall pixels is small versus the total number of pixels you might try storing each wall as a series of "runs". For example, a wall run might be stored in an object like this (warning: untested code!):
// an object containing all horizontal wall runs
var xRuns = {};
// an object containing all vertical wall runs
var yRuns = {};
// define a wall that runs on y=50 from x=100 to x=185
// and then runs on x=185 from y=50 to y=225
var y = 50;
var x = 185;
if(!xRuns[y]){ xRuns[y] = []; }
xRuns[y].push({start:100, end:185});
if(!yRuns[x]){ yRuns[x] = []; }
yRuns[x].push({start:50, end:225});
Then you can quickly test an [x,y] against the wall runs like this (warning: untested code!):
function isWall(x, y){
    if(xRuns[y]){
        var a = xRuns[y];
        var i = a.length;
        while(i--){
            var run = a[i];
            if(x >= run.start && x <= run.end){ return true; }
        }
    }
    if(yRuns[x]){
        var a = yRuns[x];
        var i = a.length;
        while(i--){
            var run = a[i];
            if(y >= run.start && y <= run.end){ return true; }
        }
    }
    return false;
}
This should require very few tests, because the x & y exactly specify which arrays of xRuns and yRuns need to be tested.
It may (or may not) be faster than testing the "flat" model because there is overhead getting to the specified element of the flat model. You'd have to perf test using both methods.
The wall-run method would likely require much less memory.
Hope this helps... Keep in mind the wall-run alternative is just off the top of my head and probably requires tweaking ;-)

d3.v3 scatterplot with all circles the same radius

Every example I have found shows all of the scatter plot points to be of random radii. Is it possible to have them all the same size? If I try to statically set the radius all of the circles will be very small (I'm assuming the default radius). However, if I use Math.random() as in most examples there are circles large and small. I want all the circles to be large. Is there a way to do that? Here's the code snippet forming the graph data using Math.random() (this works fine for some reason):
function scatterData(xData, yData) {
    var data = [];
    for (var i = 0; i < seismoNames.length; i++) {
        data.push({
            key: seismoNames[i],
            values: []
        });
        var xVals = ("" + xData[i]).split(",");
        var yVals = ("" + yData[i]).split(",");
        for (var j = 0; j < xVals.length; j++) {
            data[i].values.push({
                x: xVals[j],
                y: yVals[j],
                size: Math.random()
            });
        }
    }
    return data;
}
Math.random() spits out values between 0 and 1 such as 0.164259538891095 and 0.9842195005008699. I have tried putting these as static values in the 'size' attribute, but no matter what the circles are always really small. Is there something I'm missing?
Update: The NVD3 API has changed, and now uses pointSize, pointSizeDomain, etc. instead of just size. The rest of the logic for exploring the current API without complete documentation still applies.
For NVD3 charts, the idea is that all adjustments you make can be done by calling methods on the chart function itself (or its public components) before calling that function to draw the chart in a specific container element.
For example, in the example you linked to, the chart function was initialized like this:
var chart = nv.models.scatterChart()
    .showDistX(true)
    .showDistY(true)
    .color(d3.scale.category10().range());
chart.xAxis.tickFormat(d3.format('.02f'));
chart.yAxis.tickFormat(d3.format('.02f'));
The .showDistX() and .showDistY() calls turn on the tick-mark distribution in the axes; .color() sets the series of colours you want to use for the different categories. The next two lines access the default axis objects within the chart and set the number format to be a two-digit decimal. You can play around with these options by clicking on the scatterplot option on the "Live Code" page.
Unfortunately, the makers of the NVD3 charts don't have complete documentation available yet describing all the other options you can set for each chart. However, you can use the JavaScript itself to find out what methods are available.
Inspecting an NVD3.js chart object to determine options
Open up a web page that loads the d3 and nvd3 library. The live code page on their website works fine. Then open up your developer's console command line (this will depend on your browser, search your help pages if you don't know how yet). Now, create a new nvd3 scatter chart function in memory:
var testChart = nv.models.scatterChart();
On my (Chrome) console, this will print out the entire contents of the function you just created. It is interesting, but very long and difficult to interpret at a glance, and most of the code is encapsulated so you can't change it easily. You want to know which properties you can change, so run this code in the next line of your console:
for (keyname in testChart){console.log(keyname + " (" + typeof(testChart[keyname]) + ")");}
The console should now print out neatly the names of all the methods and objects that you can access from that chart function. Some of these will have their own methods and objects you can access; discover what they are by running the same routine, but replacing the testChart with testChart.propertyName, like this:
for (keyname in testChart.xAxis){console.log(keyname + " (" + typeof(testChart.xAxis[keyname]) + ")");}
Back to your problem. The little routine I suggested above doesn't sort the property names in any order, but skimming through the list you should see three options that relate to size (which was the data variable that the examples were using to set the radius):
size (function)
sizeDomain (function)
sizeRange (function)
Domain and range are terms used by D3 scales, so that gives me a hint about what they do. Since you don't want to scale the dots, let's start by looking at just the size property. If you type the following in the console:
testChart.size
It should print back the code for that function. It's not terribly informative for what we're interested in, but it does show me that NVD3 follows D3's getter/setter format: if you call .property(value) you set the property to that value, but if you call .property() without any parameters, it will return back the current value of that property.
So to find out what the size property is by default, call the size() method with no parameters:
testChart.size()
It should print out function (d) { return d.size || 1}, which tells us that the default value is a function that looks for a size property in the data, and if it doesn't exist returns the constant 1. More generally, it tells us that the value set by the size method determines how the chart gets the size value from the data. The default should give a constant size if your data has no d.size property, but for good measure you should call chart.size(1); in your initialization code to tell the chart function not to bother trying to determine size from the data and just use a constant value.
Going back to the live code scatterplot, you can test that out. Edit the code to add in the size call, like this:
var chart = nv.models.scatterChart()
    .showDistX(true)
    .showDistY(true)
    .color(d3.scale.category10().range())
    .size(1);
chart.xAxis.tickFormat(d3.format('.02f'));
chart.yAxis.tickFormat(d3.format('.02f'));
Adding that extra call successfully sets all the dots to the same size -- but that size is definitely not 1 pixel, so clearly there is some scaling going on.
First guess for getting bigger dots would be to change chart.size(1) to chart.size(100). Nothing changes, however. The default scale is clearly calculating its domain based on the data and then outputting to a standard range of sizes. This is why you couldn't get big circles by setting the size value of every data element to 0.99, even if that would create a big circle when some of the data was 0.01 and some was 0.99. Clearly, if you want to change the output size, you're going to have to set the .sizeRange() property on the chart, too.
Calling testChart.sizeRange() in the console to find out the default isn't very informative: the default value is null (nonexistent). So I just made a guess that, same as the D3 linear scale .range() function, the expected input is a two-element array consisting of the max and min values. Since we want a constant, the max and min will be the same. So in the live code I change:
.size(1);
to
.size(1).sizeRange([50,50]);
Now something's happening! But the dots are still pretty small: definitely not 50 pixels in radius, it looks closer to 50 square pixels in area. Having size computed based on the area makes sense when sizing from the data, but that means that to set a constant size you'll need to figure out the approximate area you want: values up to 200 look alright on the example, but the value you choose will depend on the size of your graph and how close your data points are to each other.
--ABR
P.S. I added the NVD3.js tag to your question; be sure to use it as your main tag in the future when asking questions about the NVD3 chart functions.
The radius is measured in pixels. If you set it to a value less than one, yes, you will have a very small circle. Most of the examples that use random numbers also use a scaling factor.
If you want all the circles to have a constant radius you don't need to set the value in the data, just set it when you add the radius attribute.
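For example, in plain D3 v3 (a minimal sketch; xScale and yScale are assumed to be your scales):
svg.selectAll("circle")
    .data(data)
  .enter().append("circle")
    .attr("cx", function(d) { return xScale(d.x); })
    .attr("cy", function(d) { return yScale(d.y); })
    .attr("r", 5); // constant radius in pixels for every circle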
Not sure which tutorials you were looking at, but start here: https://github.com/mbostock/d3/wiki/Tutorials
The example "Three little circles" does a good step-by-step of the different things you can do with circles:
http://mbostock.github.io/d3/tutorial/circle.html

Click detection in a 2D isometric grid?

I've been doing web development for years now and I'm slowly getting myself involved with game development and for my current project I've got this isometric map, where I need to use an algorithm to detect which field is being clicked on. This is all in the browser with Javascript by the way.
The map
It looks like this and I've added some numbers to show you the structure of the fields (tiles) and their IDs. All the fields have a center point (array of x,y) which the four corners are based on when drawn.
As you can see it's not a diamond shape, but a zig-zag map and there's no angle (top-down view) which is why I can't find an answer myself considering that all articles and calculations are usually based on a diamond shape with an angle.
The numbers
It's a dynamic map and all sizes and numbers can be changed to generate a new map.
I know it isn't a lot of data, but the map is generated based on the map and field sizes.
- Map Size: x:800 y:400
- Field Size: 80x80 (between corners)
- Center position of all the fields (x,y)
The goal
To come up with an algorithm which tells the client (game) which field the mouse is located in at any given event (click, movement etc).
Disclaimer
I do want to mention that I've already come up with a working solution myself, however I'm 100% certain it could be written in a better way (my solution involves a lot of nested if-statements and loops), and that's why I'm asking here.
Here's an example of my solution where I basically find a square with corners in the nearest 4 known positions and then I get my result based on the smallest square between the 2 nearest fields. Does that make any sense?
Ask if I missed something.
Here's what I came up with:
function posInGrid(x, y, length) {
    var xFromColCenter = x % length - length / 2;
    var yFromRowCenter = y % length - length / 2;
    var col = (x - xFromColCenter) / length;
    var row = (y - yFromRowCenter) / length;
    if (yFromRowCenter < xFromColCenter) {
        if (yFromRowCenter < (-xFromColCenter)) --row;
        else ++col;
    } else if (yFromRowCenter > xFromColCenter) {
        if (yFromRowCenter < (-xFromColCenter)) --col;
        else ++row;
    }
    return "Col:" + col + ", Row:" + row + ", xFC:" + xFromColCenter + ", yFC:" + yFromRowCenter;
}
X and Y are the coords in the image, and length is the spacing of the grid.
Right now it returns a string, just for testing; the result should be row and col, and those are the coordinates I chose: your tile 1 has coords (1,0), tile 2 is (3,0), tile 10 is (0,1), tile 11 is (2,1). You could convert my coordinates to your numbered tiles in a line or two.
And a JSFiddle for testing http://jsfiddle.net/NHV3y/
Cheers.
EDIT: changed the return statement, had some variables I used for debugging left in.
A pixel-perfect way of hit detection I've used in the past (in OpenGL, but the concept stands here too) is an off-screen rendering of the scene where the different objects are identified with different colors.
This approach requires double the memory and double the rendering, but the hit detection of arbitrarily complex scenes is done with a simple color lookup.
Since you want to detect a cell in a grid there are probably more efficient solutions, but I wanted to mention this one for its simplicity and flexibility.
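A minimal 2D-canvas sketch of the idea (drawShape is an assumed helper that fills the object's silhouette):
var pickCanvas = document.createElement("canvas");
var pickCtx = pickCanvas.getContext("2d");

function drawForPicking(objects) {
    pickCtx.clearRect(0, 0, pickCanvas.width, pickCanvas.height);
    objects.forEach(function (obj, i) {
        var id = i + 1; // reserve 0 for "no object" (transparent pixels)
        pickCtx.fillStyle = "rgb(" + ((id >> 16) & 255) + "," +
                            ((id >> 8) & 255) + "," + (id & 255) + ")";
        obj.drawShape(pickCtx);
    });
}
function pick(x, y) {
    var p = pickCtx.getImageData(x, y, 1, 1).data;
    return ((p[0] << 16) | (p[1] << 8) | p[2]) - 1; // index of the object, or -1
}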
This has been solved before, let me consult my notes...
Here's a couple of good resources:
From Laserbrain Studios, The basics of isometric programming
Useful article in the thread posted here, in Java
Let me know if this helps, and good luck with your game!
This code calculates the position in the grid given the uneven spacing. Should be pretty fast; almost all operations are done mathematically, using just one loop. I'll ponder the other part of the problem later.
def cspot(x, y, length):
    l = length
    lp = length + 1
    vlist = [(l * (k % 2)) + (lp * ((k + 1) % 2)) for k in range(1, y + 1)]
    vlist.append(1)
    return x + sum(vlist)
