I'm looking at the developer reference for IE10 (http://msdn.microsoft.com/en-us/library/ie/hh673549(v=vs.85).aspx) and I'm trying to figure out how touch works in IE10.
I have been unable to find a property that tells you how many touches are on the screen. I'm basically looking for a JavaScript property of some sort that I could check.
The only way to do that seems to be to trap the MSPointerDown and MSPointerUp events and keep count of how many unique pointers you have, based on the pointerId.
You can check the following section of the document you linked: http://msdn.microsoft.com/en-us/library/ie/hh673557(v=vs.85).aspx#maxtouchpoints
// To test for touch-capable hardware
if (navigator.msMaxTouchPoints) { ... }
// To test for multi-touch-capable hardware
if (navigator.msMaxTouchPoints > 1) { ... }
// To get the maximum number of touch points the hardware supports
var touchPoints = navigator.msMaxTouchPoints;
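The pointerId-counting approach described above could be sketched as follows. The tracker itself is plain JavaScript; only the commented-out event wiring at the bottom assumes the IE10 MSPointer events.

```javascript
// Minimal tracker that counts unique active pointers by pointerId.
function createPointerTracker() {
    var activePointers = {};
    var count = 0;
    return {
        down: function (pointerId) {
            if (!activePointers[pointerId]) {
                activePointers[pointerId] = true;
                count++;
            }
        },
        up: function (pointerId) {
            if (activePointers[pointerId]) {
                delete activePointers[pointerId];
                count--;
            }
        },
        touchCount: function () {
            return count;
        }
    };
}

// Wiring it to the IE10 events (browser only):
// var tracker = createPointerTracker();
// element.addEventListener("MSPointerDown", function (e) { tracker.down(e.pointerId); }, false);
// element.addEventListener("MSPointerUp", function (e) { tracker.up(e.pointerId); }, false);
// element.addEventListener("MSPointerCancel", function (e) { tracker.up(e.pointerId); }, false);
```

Listening for MSPointerCancel as well keeps the count from leaking when the system takes over a touch (e.g. for panning).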
I've been trying to make a multiplayer game using JavaScript (most of which runs on the server, using Node.js); one of the core mechanics I want is for players to be able to design their own fighting style (right down to how they swing their sword, etc.). The problem is that I can't find any simple way of constraining players' movements.
I first tried to write a method that checks then clamps the player's style so it doesn't look like they're breaking every limb simultaneously, but that didn't really work. I then found the wonderful CANNON.ConeTwistConstraint, but after looking through the documentation I've found that CANNON.js's constraints don't seem to have any sort of built-in function for just testing whether two bodies are exceeding the constraint's limits. I've thought about having my game just create objects in a separate simulation and check whether a force is being applied to either object, but I'm not sure about how to go about this, or if there's a better way.
Is there a simple/easier solution to my problem? If not, what would be the least CPU-intensive way of implementing the above?
You can manually check if the ConeTwistConstraint is hitting its limit. If you have a look at the method CANNON.ConeEquation.prototype.computeB, you can see that it computes the constraint violation, "g", using cos() and a dot product. You can simply do the same with the following code.
var eq = coneTwistConstraint.coneEquation;
var g = Math.cos(eq.angle) - eq.axisA.dot(eq.axisB);
if (g > 0) {
    // Constraint limit exceeded
} else {
    // Constraint is within limits
}
Using the same strategy, you can check if the twist limit is exceeded. Since the twist equation is a CANNON.RotationalEquation, the code becomes:
var eq2 = coneTwistConstraint.twistEquation;
var g2 = Math.cos(eq2.maxAngle) - eq2.axisA.dot(eq2.axisB);
Has anyone implemented a JavaScript audio DAW with multiple tempo and meter change capabilities, like most desktop DAWs (Pro Tools, Sonar, and the like)? As far as I can tell, claw, openDAW, and web audio editor don't do this. Drawing a grid meter, converting between samples and MBT time, and rendering waveforms is easy when the tempo and meter do not change during the project, but when they do it gets quite a bit more complicated. I'm looking for any information on how to accomplish something like this. I'm aware that the source for Audacity is available, but I'd love to not have to dig through an enormous pile of code in a language I'm not an expert in to figure this out.
Web-based DAW solutions exist; they are usually seen as SaaS (Software as a Service) applications.
They are lightweight and contain the basic, fundamental DAW features.
For designing rich client applications (RCAs), you should take a look at GWT and Vaadin.
I recommend GWT because it is mature, has reusable components, and is AJAX-driven.
Also, the MusicRadar site lists nine different browser-based audio workstations. You can also refer to Popcorn Maker, which is entirely JavaScript. You can get some inspiration from there to get started.
You're missing the last step, which will make it easier.
All measures are relative to fractions of minutes, based on the time-signature and tempo.
The math gets a little more complex now that you can't just plot 4/4 or 6/8 across the board and be done with it. What you're looking at is running an actual timeline (whether drawn onscreen or not) and then figuring out where each measure starts and ends, based on one of the following: the running sum of a track's current length (in minutes/seconds); the left-most take's x-coordinate (starting point) plus duration;
or the running total of each measure's length in seconds, up to the current beat you care about.
var measure = { beats : 4, denomination : 4, tempo : 80 };
Given those three data-points, you should be able to say:
var measure_length = SECONDS_PER_MINUTE / measure.tempo * measure.beats;
Of course, that's currently in seconds. To get it in ms, you'd just use MS_PER_MINUTE, or whichever other ratio of minutes you'd want to measure by.
current_position + measure_length === start_of_next_measure;
You've now separated out each dimension required to allow you to calculate each measure on the fly.
Positioning each measure on the track, to match up with where it belongs on the timeline is as simple as keeping a running tally of where X is (the left edge of the measure) in ms (really in screen-space and project-coordinates, but ms can work fine for now).
var current_position = 0,
    current_tempo = 120,
    current_beats = 4,
    current_denomination = 4,
    measures = [ ];

measures.forEach(function (measure) {
    if (measure.tempo !== current_tempo) {
        /* draw tempo-change, set current_tempo */
    }
    if (measure.beats !== current_beats ||
        measure.denomination !== current_denomination) {
        /* set changes, draw time-signature */
    }
    draw_measure(measure, current_position);
    // advance by this measure's length: MS_PER_MINUTE / tempo * beats
    current_position += MS_PER_MINUTE / measure.tempo * measure.beats;
});
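Putting the pieces together, here is a sketch of a pure function that computes each measure's start position in milliseconds by accumulating the measure-length formula above (MS_PER_MINUTE is 60000; like the formula above, it treats tempo as beats per minute and ignores the denomination):

```javascript
var MS_PER_MINUTE = 60000;

// Given an array of { tempo, beats } measures, return the start position
// (in ms) of each measure, accumulating MS_PER_MINUTE / tempo * beats.
function measureStartPositions(measures) {
    var position = 0;
    return measures.map(function (measure) {
        var start = position;
        position += MS_PER_MINUTE / measure.tempo * measure.beats;
        return start;
    });
}
```

For example, two measures of 4/4 at 120 BPM are 2000 ms each, so a third measure at a new tempo starts at 4000 ms; a tempo or meter change simply alters how much the next accumulation step adds.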
Drawing samples just requires figuring out where you're starting from, and then sticking to some resolution (MS/MS*4/Seconds).
The added benefit of separating out the calculation of the time is that you can change the resolution of your rendering on the fly, by changing which time-scale you're comparing against (ms/sec/min/etc), so long as you re-render the whole thing, after scaling.
The rabbit hole goes deeper (for instance, actual audio tracks don't really care about measures/beats, though quantization-processes do), so to write a non-destructive, non-linear DAW, you can just set start-time and duration properties on views into your audio-buffer (or views into view-buffers of your audio buffer).
Those views would be the non-destructive windows that you can resize and drag around your track.
Then there's just the logic of figuring out snaps -- what your screen-space is, versus project-space, and when you click on a track's clip, which measure, et cetera, you're in, to do audio-snapping on resize/move.
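The non-destructive "view" idea above can be as small as an object holding a start offset and a duration into a shared buffer; resizing or dragging a clip just mutates those numbers and never touches the audio data. A sketch, with illustrative property names:

```javascript
// A clip is a window into an audio buffer: nothing is copied or destroyed.
function createClip(buffer, startTime, duration) {
    return {
        buffer: buffer,        // shared, untouched audio data
        startTime: startTime,  // ms offset on the project timeline
        duration: duration,    // ms visible through this window
        // Dragging the clip along the timeline.
        move: function (deltaMs) {
            this.startTime += deltaMs;
        },
        // Trimming from the left: the window shrinks and slides right,
        // but the underlying data stays intact for later re-expansion.
        trimStart: function (deltaMs) {
            this.startTime += deltaMs;
            this.duration -= deltaMs;
        }
    };
}
```

Snapping then becomes a matter of quantizing deltaMs to the nearest measure or beat boundary before applying it.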
Of course, to do a 1:1 recreation of ProTools in JS in the browser would not fly (gigs of RAM for one browser tab won't do, media capture API is still insufficient for multi-tracking, disk-writes are much, much more difficult in browser than in C++, in your OS of choice, et cetera), but this should at least give you enough to run with.
Let me know if I'm missing something.
I'm developing in JavaScript for IE10/IE11, and am trying to get custom multi-touch events to work. I'm using the MSGesture API, and it provides a nice abstraction layer, but I would like access to the underlying positions of the touched points. Is there a way to get this?
I guess it could be calculated from the initial offset, using the scale/translation information, but was thinking there would probably be a cleaner way?
Example code:
container.addEventListener("MSGestureChange", function (e) {
// We have access to e.g.
// e.scale / e.translationX / e.translationY
// Now, this is what we need, but it's only for one finger
// (or is it an average somehow?)
// e.offsetX
// e.offsetY
// Is it possible to get offsetX/Y for each finger (pointer)?
}, false);
I've got a demo that is required to run in Firefox on a Windows 7 touch tablet. According to this, Mozilla has implemented a standardized touch API. However, this does not work on a windows 7 tablet. None of these events are triggered in FF 14.
Instead, we have to use the MozTouchMove event. But all it does is dispatch sequential events, i.e. finger one, then finger two, then finger three, etc.
It's difficult to even distinguish two fingers from one. I'd have to measure the distance between updates, then assign my own "IDs" to each "region". After that, to detect a two-finger drag, we'd have to make sure the pairing stays the same throughout; if one finger goes up, the gesture may "undrag" as the "second" position is overwritten by the "first". I'm trying to come up with an approach. Any ideas?
// multitouch event handler for two-finger scrolling in x or y
// assumes: var previousTouch = new point(0, 0), fingerIndex = 0;
function onTouchMove(event) {
    var eventTouch = new point(event.clientX, event.clientY);

    if (previousTouch === undefined ||
        (previousTouch.x == 0 && previousTouch.y == 0)) {
        previousTouch = new point(eventTouch.x, eventTouch.y);
        return;
    }

    // filter really close touches;
    // in this case, assume single touch and defer to the system mouse
    if (eventTouch.distance(previousTouch) < 6) {
        return;
    }

    fingerIndex++;

    // only track every other touch to keep fingers consistent
    if (fingerIndex % 2 == 0) {
        document.getElementById("finger1").style.left = previousTouch.x + "px";
        document.getElementById("finger1").style.top = previousTouch.y + "px";
        document.getElementById("finger2").style.left = eventTouch.x + "px";
        document.getElementById("finger2").style.top = eventTouch.y + "px";
    }

    previousTouch = eventTouch;
}
The status document you link to is dated July 2010 - Mozilla was close, two years ago. So you are now using the deprecated single-touch API and attempting to implement multi-touch - this is bound to get ugly. The documentation actually points you to the new multi-touch API that is available since Firefox 12. This lets you distinguish the touches properly and the documentation actually explains in much detail how you would do it.
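With the standard Touch Events API (what Firefox 12+ implements), each finger carries a stable identifier, so distinguishing fingers is direct rather than inferred from distances. The position-extraction below is written as a pure function over any array-like list of touch-shaped objects; only the commented-out listener wiring needs a browser.

```javascript
// Map a touches list (anything array-like whose entries have
// identifier/clientX/clientY, like event.touches) to { id: { x, y } }
// so each finger can be followed across events by its identifier.
function positionsById(touches) {
    var result = {};
    for (var i = 0; i < touches.length; i++) {
        var t = touches[i];
        result[t.identifier] = { x: t.clientX, y: t.clientY };
    }
    return result;
}

// In the browser:
// element.addEventListener("touchmove", function (e) {
//     var fingers = positionsById(e.touches);
//     // fingers[id] stays keyed to the same physical finger
//     // for the lifetime of that touch
// }, false);
```

A two-finger drag then just means the same two identifiers persist across events; a finger lifting removes its identifier instead of silently shifting positions around.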
Is there any way to detect which windows XP theme is in use?
I suspect that there is no specific API call you can make, but you may be able to figure it out by checking something on some DOM element, i.e. feature detection.
Another question: does the classic theme even exist on windows vista or windows 7?
edit - this is my solution:
function isXpTheme() {
    var rgb, s;
    var map = { "rgb(212,208,200)" : false,
                "rgb(236,233,216)" : true };
    var $elem = $("<button>");
    $elem.css("backgroundColor", "ButtonFace");
    $("body").append($elem);
    var elem = $elem.get(0);
    if (document.defaultView && document.defaultView.getComputedStyle) {
        s = document.defaultView.getComputedStyle(elem, "");
        rgb = s && s.getPropertyValue("background-color");
    } else if (elem.currentStyle) {
        rgb = (function (el) { // get an rgb-based color on IE
            var oRG = document.body.createTextRange();
            oRG.moveToElementText(el);
            var iClr = oRG.queryCommandValue("BackColor");
            return "rgb(" + (iClr & 0xFF) + "," + ((iClr & 0xFF00) >> 8) + "," +
                   ((iClr & 0xFF0000) >> 16) + ")";
        })(elem);
    } else if (elem.style["backgroundColor"]) {
        rgb = elem.style["backgroundColor"];
    } else {
        rgb = null;
    }
    $elem.remove();
    if (rgb) {
        rgb = rgb.replace(/[ ]+/g, "");
        return map[rgb];
    }
}
The next step is to figure out what this function returns on non-XP machines and/or figure out how to detect Windows boxes. I have tested this in Windows XP only, so Vista and Windows 7 might give different color values; it should be easy to add those, though.
Here is a demo page of this in action:
http://programmingdrunk.com/current-projects/isXpTheme/
Interesting question. The only thing that comes to mind is checking the size of a default button. It is styled differently in both themes, and I suppose it has a different size. This could be half-way reliable if you give the button a fixed text size.
I'll start the XP virtual machine and check whether the sizes actually differ.
Update: They do differ.
Google "I'm feeling lucky" button
in classic skin: 99 x 23.75 (sic!) pixels
in XP skin: 97 x 21.75 pixels
A second, less reliable approach that comes to mind is giving an element a CSS system colour and then parsing the resulting computed colour. In classic mode, the ButtonFace property will have a specific shade of grey and I think a different one in the default skin. Again, would have to be tested.
Update: they differ, too.
ButtonFace CSS system colour
in Windows classic skin: #D4D0C8
in XP skin: #ECE9D8
Obviously, both approaches will break if the user did any customization to colours and/or font sizes. The font size approach is the more reliable IMO, as there are fewer people playing around with that.
You would, of course, have to have comparison tables for all Windows generations, as presumably, the values for classic and default skin will differ.
Just to give a starting point: look for IsThemeActive().