requestAnimationFrame behaviour: how does it work? - javascript

I have been playing around with requestAnimationFrame in Chrome, and wondered how it actually behaves.
When I load my canvas and draw, I get a steady 60FPS. If I scroll around using an offset, like a click-and-drag around a map, the FPS drops (as expected). Once I stop dragging around the map, the FPS creeps back up to a steady 60FPS, again as expected.
Here, however, is where I wonder whether this is deliberate behaviour in requestAnimationFrame. If I drag the map around until the FPS drops below 30 for an extended period of time, then stop dragging, the FPS climbs back up, but this time it stops at 30FPS and will not go higher. It appears as if the browser has decided that 30FPS is the best option.
Is this deliberately done by the browser? I have been trying to find out if this is the case, because it will return to 60FPS if I don't drop below 30FPS for too long.

Yes, it's something that the browsers are capable of doing.
"How it's supposed to work" isn't really something that anybody can answer, here.
The reason is simply that what happens under the hood is 100% browser-specific.
But it's very safe to say that yes, the browser is capable of deciding when you should be locked into a 30Hz refresh, rather than a 60Hz refresh.
An illustration of why this is the case:
requestAnimationFrame() is also tied into the Page Visibility API, if the vendors want it to be (very true for Chrome).
Basically, if the page isn't visible, they can slow the requestAnimationFrame() updates down to a few times per second or pause them altogether.
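You can watch this throttling happen first-hand with a few lines of instrumentation. This is just a sketch for observation (not anything the browsers guarantee): it logs the gap between rAF callbacks along with the page's visibility state.
var last = performance.now();

function probe(now) {
  var delta = now - last;
  last = now;
  // On a 60Hz display you'd expect ~16.7ms; a backgrounded tab may
  // report much larger gaps, or stop firing altogether.
  console.log(document.visibilityState, delta.toFixed(1) + 'ms');
  requestAnimationFrame(probe);
}
requestAnimationFrame(probe);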
Given that knowledge, it's entirely plausible to believe that one of two things is happening:
they're intentionally capping you at 30fps, because they feel your experience will be more stable there, based on averaged performance data
they're intentionally throttling you, but there's some bug in the system (or some less-than-lenient math) which is preventing you from going back up to 60 after the coast has cleared. And if they are using averaged performance data, that might be part of the issue.
Either way, it is at the very least mostly intentional, with the only unanswered question being why it sticks to 30fps.
Did you leave it alone for 20 or 30 minutes after the fact, to see if it went back up at any point afterwards?

You can run a Timeline analysis from Chrome DevTools to look for maverick JS that is slowing down your animation times.
https://developers.google.com/chrome-developer-tools/docs/timeline
rAF will find the best place to paint your changes, not the closest one. So if the JS in the rAF callback takes two frames' worth of time (around 16ms per frame on your 60Hz hardware), then your FPS will drop to 30.
From Paul Irish via Boris
Actually, “It’s currently capped at 1000/(16 + N) fps, where N is the number of ms it takes your callback to execute. If your callback takes 1000ms to execute, then it’s capped at under 1fps. If your callback takes 1ms to execute, you get about 60fps.” (thx, Boris)
http://www.paulirish.com/2011/requestanimationframe-for-smart-animating/
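You can check that cap empirically by burning a known amount of time inside the callback and counting frames. A throwaway sketch (not production code): with busyMs = 16 on 60Hz hardware, the logged rate should fall to roughly 1000 / (16 + 16) ≈ 30fps.
var busyMs = 16;
var frames = 0;
var windowStart = performance.now();

function tick(now) {
  // Simulate a heavy callback by busy-waiting for roughly busyMs milliseconds.
  var end = now + busyMs;
  while (performance.now() < end) { /* spin */ }
  frames++;
  // Log the achieved framerate once per second.
  if (now - windowStart >= 1000) {
    console.log('~' + frames + 'fps');
    frames = 0;
    windowStart = now;
  }
  requestAnimationFrame(tick);
}
requestAnimationFrame(tick);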


Html5 Canvas "Composite Layers" causing long frames

I have a javascript client that runs on a web page, drawing with requestAnimationFrame to the canvas and communicating via websockets to my NodeJS backend server (using the 'ws' module on the server side).
Profiling with Chrome DevTools, it seems that the combined time for scripting, rendering, and drawing each frame is at most only a few milliseconds. Yet there's still jank: long frames of 20-40ms.
The timeline shows that in almost all of these cases there is a "response" that exceeds the length of the frame and/or a "Composite Layers" that occurs towards the end too.
This is essentially how I'm using requestAnimationFrame:
function drawGame() {
  // Drawing to gameCanvas from cacheCanvas.
  // cacheCanvas is updated whenever an update is received from the server.
  ctx.drawImage(cacheCanvas,
    // source rectangle
    0, 0,
    gameCanvas.width * 2, gameCanvas.height * 2,
    // destination rectangle
    100, 100,
    gameCanvas.width * 2, gameCanvas.height * 2
  );
  requestAnimationFrame(drawGame);
}
requestAnimationFrame(drawGame);
The server sends updates using setInterval() at 60Hz. When a message is received from the server, the client immediately draws it. I suspect that this timing may be incorrect in conjunction with requestAnimationFrame, and is leading to the composite layers at the end of the frame.
Even so, I'm confused as to why there is so much idle time in-between scripting and "composite layers" for each frame.
So...
Is there a way to control when "composite layers" is called?
Should I be saving the data from each update message and only drawing it at the beginning of the next animation frame?
What is the "response" referring to?
Thanks!
The version of Chrome, rendering options, and video drivers may all affect this. Post that information with your question. Also try searching on the Chromium bug list.
You can also try the latest dev build of Firefox which is supposed to have better performance by using multiple processes.
To determine whether server responses etc. have anything to do with performance, remove them and use fake data from the client only as a test.
I think you hit on some of the problems, there.
Solutions:
Let's talk about potential solutions as a TL;DR first, and then I'll explain how I got there.
1. Cache your messages to a buffer (e.g. push them into an array) when the socket sends data; draw the buffered messages in the next animation frame; then clear the buffer (or at least the messages that have been drawn) to await the next set (sketched below).
2. Don't do heavy processing (drawing is one of the heaviest things possible) on the main thread during I/O event handling.
a. If this is still not good enough, move the WebSocket (and data parsing, etc.) into a WebWorker, and get the data handling off of the main thread.
b. If 2a is still not good enough, also make your canvas an OffscreenCanvas which animates in the worker and draws to a "bitmap" context (not "2d") on the main thread... or just keep a "2d" canvas (or whatever you are using) on the front end and use .transferControlToOffscreen() to move the draw calls into the WebWorker.
c. Regardless of the solution in 2b, continue to draw based on the animation frame, not whenever a WebSocket hands you data, if animation is at all important (if you are just updating a bar chart with new data every few seconds, none of this, including Chrome's complaints, matters).
3. You have a weird thing going on where you are only drawing portions of your canvas images, and you don't explain why... but if ctx belongs to gameCanvas and you are drawing to 100, 100, gameCanvas.width * 2, gameCanvas.height * 2, then something is off: you are drawing at twice the size of the canvas, showing only the top-left quadrant of the drawing, with a top and left padding of 100px... and that seems like a lot of waste (though the browser won't actually make you pay for all of the missed draw calls, checking the bounds is something you should be doing yourself). Of course, if ctx isn't owned by gameCanvas and ctx.canvas.width is actually 100px + 2 * gameCanvas.width, then feel free to disregard all of #3.
This isn't guaranteed to solve all of your problems, but I do think these go a long way to smoothing out performance, by decoupling WebSocket and data parsing from your actual drawing performance... and preventing duplicate drawing actions (where one is potentially delayed by the other).
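Here's a minimal sketch of point 1. The names ws, parseUpdate, and drawUpdate are placeholders for whatever your app actually uses:
var pending = [];

ws.onmessage = function (event) {
  // Do the bare minimum in the handler: parse and stash, don't draw.
  pending.push(parseUpdate(event.data));
};

function drawGame() {
  // Drain everything that arrived since the last frame, in order.
  for (var i = 0; i < pending.length; i++) {
    drawUpdate(pending[i]);
  }
  pending.length = 0;
  requestAnimationFrame(drawGame);
}
requestAnimationFrame(drawGame);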
Justification:
Ultimately, I think these problems come down to the following:
frame-pacing
browser animation-frame scheduling
timing of network handling
time spent on main thread, during event callbacks
First, it sounds like your frame-pacing is off, and that will show up in Chrome's complaints. If you're comfortable with frame-pacing, skip the following paragraph.
If you aren't familiar with the concept of frame-pacing, imagine that you are running at a solid 30fps (~33.3ms/frame), but some frames take, say, 30ms, and some take 36ms. While the average framerate might still be correctly described as 30fps, in human experience some of your frames are now 20% longer than others (30ms followed by 36ms), and your eye notices the judder. Presuming your animation requests were aiming for 30fps (probably 60+), Chrome is going to highlight every frame that runs longer than the 33.3ms budget (or ~16.6ms for 60fps).
The next thing to understand is that requestAnimationFrame tries as hard as it can to lock itself to your monitor's refresh rate (or clean fractions thereof), which brings us back to frame-pacing. Here's the problem: in your case this canvas is on the main thread (and I presume your websocket... and the initial paint for the other canvas...), and all of these things are threatening to push the timing of your animation callback off. Consider setTimeout(f, 100). It seems like f will run in exactly 100ms. But that's not true; it's guaranteed to run at some point, at least 100ms from now. If, 99.8ms from now, a 10.2ms process starts running, then f won't run for 110ms, even though it was scheduled for 100ms.
In reality, we are talking about 60fps, or 120fps, or 144fps, or 165fps. This monitor is 144Hz, so I would expect 144fps or 72fps or 36fps updates, but even assuming a lax 30fps, the problem is that the timing is really fragile. A 4ms update, if it happens at the wrong time (i.e. right before an animation callback is scheduled to run), is going to mess up your pacing and show up on that Chrome timeline as a warning (that 4ms is a 10%+ delay at 30fps, 20%+ at 60fps, etc.). This is also why your idle times are going to be huge. The thread is sitting and waiting and doing nothing... and just before it's ready to run the next animation frame at the perfect time to fit in with your screen refresh, a WebSocket message comes in, and then you do a billion things in that event (drawing in a 2D canvas is one huge for loop, even if it's hidden by the API), which delays the calling of the animation frame.
The last two I will sum up like this:
In JS, there is a saying... "Do not block the main thread". It's not really a saying; it's a state of being. A way of life. Do. Not. Block. The. Thread. Drawing pixels on a canvas (which is later going to have its pixels drawn on another canvas), and doing that inside of an event callback, is the epitome of blocking the main thread. It would be like running a 3,000-line function on window.onscroll or window.onmousemove. It doesn't matter how fast your PC is; your page performance is going to tank. In your handler, especially if it is an oft-fired handler, do the bare minimum to prep the data, store the data for later, and either return, if you are set up to poll for this data (like a game loop), or schedule something like setTimeout(f, 0), Promise.resolve().then(f), or requestIdleCallback (if it's a low-importance thing) to look at it later.
To sum it up, performance is critical, but performance isn't just the time it takes to run, it's also the precision of the time when it runs. Keep things off the main thread, so that this time can stay as accurate as possible.
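As a postscript, here's roughly what point 2a above (moving the socket into a worker) might look like. The file name socket-worker.js and the URL are placeholders, and the messages are assumed to be plain JSON:
// --- socket-worker.js ---
// The socket and the data parsing live off the main thread.
var ws = new WebSocket('wss://example.com/game'); // placeholder URL
ws.onmessage = function (event) {
  postMessage(JSON.parse(event.data)); // post only the parsed result
};

// --- main.js ---
// The main thread just collects updates and draws them once per frame.
var worker = new Worker('socket-worker.js');
var pending = [];
worker.onmessage = function (event) {
  pending.push(event.data);
};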

Should requestAnimationFrame calls always be throttled down to 60 FPS?

requestAnimationFrame is called whenever the screen is ready for a repaint.
On modern screens the refresh rate may be a lot higher than 60 frames per second. If there is a lot of work going on inside those calls, it may impact the overall performance of the application.
So my question is: should requestAnimationFrame always be throttled down to 60FPS? Can the human eye actually tell the difference between, for example, a 16ms and an 8ms repaint delay?
[UPDATE]
I ended up throttling it down to 60FPS for higher performance on screens with high refresh rates,
and would suggest this approach to everyone who has a lot of work going on inside the rAF calls.
You should, of course, do your own testing.
Per MDN it will not necessarily always be 60 FPS.
Relevant quote:
This will request that your animation function be called before the browser performs the next repaint. The number of callbacks is usually 60 times per second, but will generally match the display refresh rate in most web browsers as per W3C recommendation. The callback rate may be reduced to a lower rate when running in background tabs.
As for whether the human eye can distinguish 60 vs 120 FPS: that's an open and opinionated question. Some claim to see it, others claim it's impossible. Allowing the end user to choose (or simply using their hardware to its fullest) is probably best.
As markE's comment pointed out, the requestAnimationFrame callback receives a DOMHighResTimeStamp, which is a high-precision timer accurate to the thousandth of a millisecond. By using this timestamp and calculating the delta between frames, you can tune your refresh rate to whatever value you desire.
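For example, capping a high-refresh display to ~60FPS with that timestamp might look like this (a sketch; draw() stands in for your own rendering code):
var targetMs = 1000 / 60; // desired frame budget
var lastPaint = 0;

function loop(timestamp) {
  requestAnimationFrame(loop);
  // Skip frames that arrive too soon; the 1ms margin absorbs jitter.
  if (timestamp - lastPaint < targetMs - 1) return;
  lastPaint = timestamp;
  draw(); // placeholder for your rendering code
}
requestAnimationFrame(loop);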
References:
requestAnimationFrame
W3C Timing control for script-based animations
DOMHighResTimeStamp
I guess that people with 120Hz or higher refresh rate displays are aware that it requires more resources to generate twice as many frames.
That, and/or they have more powerful computers than most users. I personally have a very powerful PC but two 60Hz displays, and the only guy I know who has a display faster than 60Hz is a pro gamer, so obviously he has no performance issue when browsing the web.
Also, people using very high refresh rate displays are used to that level of fluidity, so they might notice the difference (even if I doubt it).
My two cents: respect their preference for having an overkill display. It's what they want.
By default, I think it is good to limit the framerate to 60Hz, since:
• A high framerate means more heat, so the (CPU) fan noise will be annoying.
• For most games, nobody will notice.
• It's easy to do.
• For those with ecological concerns, high fps uses more power (==> more CO2).
About the visual interest of 120Hz:
For 2D games, where only a tiny part of the screen actually changes between frames, it's of little to no interest.
For 3D games, especially those aiming to be realistic, using 120Hz allows you to get a more 'cinema'-like experience.
Why?
==> Most 3D renderers only render the scene at a single point in time, so what you see is a succession of 'perfect' still images.
A real camera, on the other hand, will, like the human eye, be kept open for a few milliseconds, so the movement happening during this time leaves a trail on the image, providing a more true-to-life experience.
The 60Hz boundary is only just enough to fool the eye about the motion; what a 120Hz+ screen brings is that the display is so fast that eye persistence can no longer follow it, and you get that camera/eye trail effect again.
The code looks like:
var minFrame = 13;
var maxFrame = 19;
var typicalFrame = 16;

var gameTime = 0;
var lastDrawTime = -1;

function animate(drawTime) {
  requestAnimationFrame(animate);
  var dt = drawTime - lastDrawTime;
  if (dt < minFrame) return; // too soon: skip this callback
  lastDrawTime = drawTime;   // only advance when we actually draw
  if (dt > maxFrame) dt = typicalFrame; // in case of a tab-out
  gameTime += dt;
  // ...
}

function launchAnimation() {
  requestAnimationFrame(function (t) {
    lastDrawTime = t;
    requestAnimationFrame(animate);
  });
}
Remark 1: when you limit the fps, you must take into account the fact that the frame rate is not stable at all in a browser.
Even with an application doing nothing, a 60Hz screen can see frame durations ranging from 14ms to 19ms (!!), so you must allow some margin when capping the frame rate at a given value.
Remark 2: in the example above, 'typicalFrame' should be replaced by the native screen frame duration (which you have to measure yourself).

Is calling requestAnimationFrame with Raphael a performance hit?

I'm working on a fairly resource-hungry web application which relies heavily on Raphael.js for roughly 50% of the animations used; for the rest I have rolled my own animation method, which uses webkitRequestAnimationFrame in conjunction with the Web Audio API's context.currentTime to sync animations with the audio component.
I am experiencing pretty terrible performance at the moment and looking through Raphael's source I see that it also uses requestAnimationFrame. Most of the lag I am experiencing seems to occur when both my animations and Raphael's are running concurrently. Is this because requestAnimationFrame is essentially being called twice per draw cycle?
Essentially what I'm asking is: do I have to re-roll my own implementation of animate for Raphael objects and fold it into my existing requestAnimationFrame loop?
Hmmm, as far as I know the whole point of rAF is to sync everything so that it's ready for the next render update. I would be doing exactly the same as you, as this is the whole point of it.
As per the spec:
The expectation is that the user agent will run tasks from the animation task source at a regular interval matching the display's refresh rate. Running tasks at a lower rate can result in animations not appearing smooth. Running tasks at a higher rate can cause extra computation to occur without a user-visible benefit.
So I would say NO it shouldn't be a performance hit.
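That said, if you do decide to unify everything under your existing loop, driving Raphael elements by hand with .attr() from a single rAF callback is straightforward. A sketch (the geometry and the 2-second cycle are arbitrary, for illustration only):
var paper = Raphael(0, 0, 300, 300); // a 300x300 drawing surface
var el = paper.circle(150, 150, 20); // the element we animate by hand

function frame(now) {
  var t = (now / 1000) % 2; // an arbitrary 2-second cycle
  el.attr({ cx: 150 + 50 * Math.sin(Math.PI * t) });
  requestAnimationFrame(frame);
}
requestAnimationFrame(frame);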
I'm having a similar issue with sluggish SVG animation. My understanding of rAF is that it batches updates together wherever they come from, so I don't think that was your problem.
I've found that most of my cycles are taken up by repainting. There's not much you can do JS-wise to speed up repainting, but I think you can make it easier on the browser by cutting down on transparency effects, filters, and large areas of the screen changing. Also, repainting is a function of the number of pixels you're updating: I'm making a full-screen site, and when I double the viewport size, it doubles my paint time.

How to determine the best "framerate" (setInterval delay) to use in a JavaScript animation loop?

When writing a JavaScript animation, you of course make a loop using setInterval (or repeated setTimeout). But what is the best delay to use in the setInterval/setTimeout call(s)?
In the jQuery API page for the .animate() function, the user "fbogner" says:
Just if anyone is interested: Animations are "rendered" using a setInterval with a time out of 13ms. This is quite fast! Chrome's fastest possible interval is about 10ms. All other browsers "sample" at about 20-30ms.
Any idea how jQuery determined to use this specific number?
Started bounty. I'm hoping someone with knowledge of the source code behind Chromium or Firefox can provide some hard facts that might back up the decision of a specific framerate. Or perhaps a list of animations (or frameworks) and their delays. I believe this makes for an interesting opportunity to do a bit of research.
Interesting - I just took the time to look at Google's Pac-Man source to see what they did. They set up an array of possible FPSes (90, 45, 30), start at the first one, and then each frame they check the "slowness" of the frame (amount the frame exceeded its allotted time). If the slowness exceeds 50ms 20 times, the framerate is notched down to the next in the list (90 -> 45, 45 -> 30). It appears that the framerate is never raised back up, presumably because the game is so short-lived that it wouldn't be worth the trouble to code that.
Oh, and the setInterval delay is of course set to 1000 / framerate. They do, in fact, use setInterval and not repeated setTimeouts.
I think this dynamic framerate feature is pretty neat!
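A sketch of that scheme, reconstructed from the description above (not Google's actual source; updateAndDraw is a placeholder):
var fpsOptions = [90, 45, 30]; // try the fastest first
var fpsIndex = 0;
var slowCount = 0;
var timer = null;

function startLoop() {
  clearInterval(timer);
  timer = setInterval(tick, 1000 / fpsOptions[fpsIndex]);
}

function tick() {
  var start = Date.now();
  updateAndDraw(); // placeholder for game logic + rendering
  // "Slowness" = how far the frame ran past its allotted time.
  var slowness = (Date.now() - start) - 1000 / fpsOptions[fpsIndex];
  if (slowness > 50) slowCount++;
  if (slowCount >= 20 && fpsIndex < fpsOptions.length - 1) {
    fpsIndex++; // notch down: 90 -> 45 -> 30, never back up
    slowCount = 0;
    startLoop();
  }
}
startLoop();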
I would venture to say that a substantial fraction of web users have monitors that refresh at 60Hz, which translates to one frame every 16.66ms. So to make the monitor the bottleneck, you need to produce frames faster than every 16.66ms.
There are two reasons you would pick a value like 13ms. First, the browser needs a little bit of time to repaint the screen (in my experience, never less than 1ms). That puts you at, say, updating every 15ms, which happens to be a very interesting number: the standard timer resolution on Windows is 15ms (see John Resig's blog post). I suspect that a well-written 15ms animation looks very nearly the same on a wide variety of browsers/operating systems.
FWIW, fbogner is plain wrong about non-Chrome browsers firing setInterval every 20-30ms. I wrote a test to measure the speed of setInterval firing, and got these numbers:
Chrome - 4ms
Firefox 3.5 - 15ms
IE6 - 15ms
IE8 - 15ms
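A measurement along those lines is easy to reproduce. A sketch (not the author's original test): ask for a 1ms interval and average the real gap over 100 firings.
var count = 0;
var start = Date.now();
var id = setInterval(function () {
  if (++count === 100) {
    clearInterval(id);
    console.log('average interval: ' + (Date.now() - start) / 100 + 'ms');
  }
}, 1); // request 1ms; the browser clamps this to its minimum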
The basic pseudo-code is this:
FPS_WANTED = 25
  (just a number; it can be changed while executing, or it can be constant)
TIME_OF_DRAWING = 1000 / FPS_WANTED
  (this is in milliseconds, which I believe is accurate enough;
   it should be updated whenever FPS_WANTED changes)

UntilTheUserLeavesTheDrawingApplication()
{
    time1 = getTime();
    doAnimation();
    time2 = getTime();
    animationTime = time2 - time1;

    if (animationTime > TIME_OF_DRAWING)
    {
        [FPS_WANTED cannot be reached]
        You can:
        1. Decrease FPS_WANTED, to see if a lower framerate can be achieved
        2. Do nothing, because you want to get all you can from the CPU
    }
    else
    {
        [FPS_WANTED can be reached - you can decide to]
        1. wait(TIME_OF_DRAWING - animationTime), to keep a constant framerate of FPS_WANTED
        2. Increase the framerate, if you want
        3. Do nothing, because you want to get all you can from the CPU
    }
}
Of course there can be variations on this, but it is the basic algorithm, valid for any kind of animation. A runnable JavaScript version of the "wait the remaining time" branch is sketched below.
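In browser JavaScript, that branch maps naturally onto setTimeout (a sketch; doAnimation is a placeholder for your drawing code):
var fpsWanted = 25;

function loop() {
  var start = Date.now();
  doAnimation(); // placeholder for your drawing code
  var elapsed = Date.now() - start;
  var budget = 1000 / fpsWanted;
  // Wait out the rest of the frame budget; fire immediately if we ran over.
  setTimeout(loop, Math.max(0, budget - elapsed));
}
loop();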
When writing animation loops, it's best to find a balance between the speed of the loop and how much work needs to be done.
For example, if you want to slide a div across the page within a second, as a nice and timely effect, you would skip coordinates and use a reasonably fast loop time, so that the effect is noticeable but not jumpy.
So it's a trial-and-error thing (taking work, time, and browser capability into account), so that it doesn't only look nice in one browser.
The numbers quoted by fbogner have been tested.
Browsers throttle JS activity to a certain degree so that they remain usable at all times.
If your JavaScript could run every 5ms, the browser runtime would have much less CPU time to refresh the rendering or react to user input (clicks), because JavaScript execution blocks the browser.
I think the Chrome devs allow you to run your JavaScript at much shorter intervals than the other browsers because their V8 JavaScript engine compiles the JavaScript; it therefore runs faster, and the browser is not blocked for as long as it would be with interpreted JS code.
But the engine being faster is not the only reason shorter intervals are allowed; the devs have certainly tested what the shortest possible interval is that still permits short intervals without blocking the browser for too long.
I don't know the reasoning behind jQuery's interval time, but 13ms translates to about 77fps, which is very fast. The "standard" framerate used in movies and such is 25fps, which is fast enough that the human eye won't notice any jittering. 25fps translates to 40ms, so to answer your question: anything below 40ms is enough for an animation.

SoundManager2 has irregular latency

I'm playing some notes at regular intervals. Each one is delayed by a random number of milliseconds, creating a jarring irregular effect. How do I fix it?
Note: I'm OK with some latency, just as long as it's consistent.
Answers of the type "implement your own small SoundManager2 replacement, optimized for timing-sensitive playback" are OK, if you know how to do that :) but I'm trying to avoid rewriting my whole app in Flash for now.
For an example of an app with zero audible latency, see the Flash-based ToneMatrix.
Testcase
(see it live here or get it in a zip):
<html>
<head>
  <title></title>
  <script type="text/javascript"
    src="http://www.schillmania.com/projects/soundmanager2/script/soundmanager2.js">
  </script>
  <script type="text/javascript">
    soundManager.url = '.'
    soundManager.flashVersion = 9
    soundManager.useHighPerformance = true
    soundManager.useFastPolling = true
    soundManager.autoLoad = true

    function recur(func, delay) {
      window.setTimeout(function() { recur(func, delay); func() }, delay)
    }

    soundManager.onload = function() {
      var sound = soundManager.createSound("test", "test.mp3")
      recur(function() { sound.play() }, 300)
    }
  </script>
</head>
<body>
</body>
</html>
I know this isn't the answer you want to hear, but there is no way to stop this, regardless of whether you wrote your own flash library to play sound or not.
For everyone who said "it works fine for me!" try resizing or moving your browser window as the poster's demo plays out. You'll hear more than just a subtle amount of delay. This is most noticeable in Firefox and IE, but even Chrome will experience it.
What's worse, if you click and hold the mouse down on the close box for the browser window, the sound completely stops until you release your mouse (you can release it outside of the close box and not actually close the window, FYI).
What is going on here?
It turns out that when you start resizing or moving around the browser window, the browser tries to multi-task the act of changing the window properties with the act of keeping up with the javascript going on in the window. It short-changes the window when it needs to.
When you hold down the mouse over the close box in the browser window, time stops completely. This is what is happening in smaller increments when you are re-sizing or moving the window: time is standing still in the javascript world in small, sporadic chunks (or large chunks, depending on how slow your machine is).
Now, you might say "sure, resizing the browser or holding down the close button makes the browser pause, but normally this wouldn't happen". Unfortunately you would be wrong.
It happens all the time, actually. I've run tests, and it turns out that even if you leave the browser window completely still, not touching the mouse or the keyboard, background processes on the computer can still cause "hiccups", which means that for brief periods (perhaps as small as a few milliseconds) time is "standing still" in the browser, at completely random intervals outside of your control.
What do I mean by "standing still"? Let's say you have a setInterval() call (this applies to setTimeout also) running every 33 milliseconds (about 30 frames per second). Now, you would expect that after every 33 "real world" milliseconds your function would get called. And most of the time, this is true.
But when "hiccups" start happening, your setInterval call might happen in 43 milliseconds. What happened during the 10 ms? Nothing. Time stood still. Nothing on the browser was being updated. If you had sound playing, it will continue playing, but no NEW sound calls would start playing, because no javascript is being executed at all. If you had 5 setInterval() functions running, they would have all been paused for 10ms at some point.
The only way to tell that "time stood still" is to poll real-world time in your setInterval callbacks (a sketch of this follows below). You'll be able to see that the browser keeps up most of the time, but that when you start resizing the window or doing something stressful, the intervals will be longer than usual, yet all of your code will remain synced up (I'm making games using this technique, so you will see that all your game updates happen in sync, but just get slightly stuttered).
Usually, I should point out, these stutters are completely unnoticeable, and unless you write a function to log real-world time across setInterval calls (as I have done in my own testing) you wouldn't even know about it. But it becomes a problem if you try to create some kind of repetitive sound (like the beeping in the background of Asteroids) using repeated play() calls.
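The logging described above takes only a few lines. A sketch:
var expected = 33; // ms, roughly 30 frames per second
var last = Date.now();

setInterval(function () {
  var now = Date.now();
  var gap = now - last;
  last = now;
  // Allow a small margin; anything beyond it counts as a "hiccup".
  if (gap > expected + 5) {
    console.log('hiccup: ' + gap + 'ms instead of ~' + expected + 'ms');
  }
}, expected);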
My suggestion? If you have a sound that you know will loop, give it a long duration, maybe 10 seconds, and you'll be less likely to notice the hiccups (now, the graphics on the screen could still hiccup, but you're screwed there).
If you are writing a game and having the main character fire off a machine gun, don't do 10 rapid-succession calls to playSound('singleShot'), do one call to playSound('machineGunFire10Rounds'), or something along those lines.
You'll have to do some trickery to get around it, but in most cases you'll be alright.
It seems that Flash applets run in a process that is somehow less affected by this whole "time freezing" thing going on in the regular browser/JavaScript environment, but I can still get it to happen, even on your link to the ToneMatrix example, by resizing or moving the browser window.
But Flash still seems much better than javascript. When I leave the browser alone I'd be willing to bet that Flash is not freezing for any amount of time and that intervals are always running on time.
tl;dr:
you're screwed in what you're hoping to achieve
try to deal with it using some workarounds
re-write your project in pure flash (no javascript)
wait for browsers to get better (Firefox 4 is getting a new javascript engine called JaegerMonkey which will be interesting to watch)
how do I know all this? I've done a lot of testing & logging with javascript, setInterval, and soundManager/html5 audio calls
In my comment to your question I mentioned that I don't hear the irregularity when I play your sample. That means I'm either "rhythm deaf", or that there may be something in your setup that interferes with good realtime performance. You don't mention any details of your environment, but you may have other processes running on your computer that are sucking up CPU cycles, or an older version of Flash that may not do a good job of handling sound latencies. I myself am using a recent version of Flash (10.something), whereas your parameters call for Flash 9. But maybe I should assume that if you're smart enough to be using SoundManager2 and StackOverflow that you would have eliminated these problems.
So here are some troubleshooting possibilities and comments that come to mind:
1) the SoundManager site has a number of demos, including JS-DOM "painting" + Sound, V2. Are you hearing irregular latencies and delays there? If not, maybe you can compare what they're doing there against what you're doing. If you are, then maybe look at your machine environment. When I run that demo, it is very responsive. (EDIT: Looking at it more closely, however, you can watch how the size of the brush stamps varies during a stroke. Since it varies with the time interval between mouse events (assuming you are keeping a constant mouse speed), you can visually see any irregularities in the pattern of mouse events. I can see occasional variation in stamp sizes, which does indicate that mouse events are not coming in at regular times. Which brings us to Javascript events.)
2) Javascript setTimeout() and setInterval() are not very reliable when it comes to timing. Mostly they will fire in some ballpark of the interval you have requested, but there can be large variations, usually delays, that make them unreliable. I've found that the same is true when using ActionScript inside Flash. You might want to print out the times at which your sound.play() call is being made, to see whether the irregularities are due to irregularities in setTimeout()/setInterval(). If that's the case, you could try shortening the interval and then polling the system time to get much closer to the 300ms interval that you want (one variant of this idea is sketched after this list). You can poll system time using new Date().getTime(), and this seems to have ms accuracy. Polling is of course a hideous hack that sucks up cycles when they could be used for something else, and I don't recommend it in general, but you may try it to see whether it helps. EDIT: Here's a writeup by John Resig on the handling of input and timer events in js.
3) When flash plays sounds, there is usually a latency involved, just so that the player can "build up a head of steam" and make sure there's enough stuff in the buffer to be played before the next buffer request is filled. There's a trade off between this latency and the reliability of uninterrupted playback. This might be a limitation you can't do anything about, short of "implement[ing] your own small SoundManager2 replacement", which I know you don't want to do.
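The variant mentioned in point 2 might look like this: instead of chaining raw setTimeout(fn, delay) calls, schedule each tick against an absolute target time, so errors don't accumulate. A sketch (sound is the SoundManager2 sound from the testcase):
var interval = 300; // ms between notes
var nextAt = new Date().getTime() + interval;

function tick() {
  sound.play(); // the SoundManager2 sound from the testcase
  nextAt += interval;
  // Sleep only as long as actually remains until the next target time.
  setTimeout(tick, Math.max(0, nextAt - new Date().getTime()));
}
setTimeout(tick, interval);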
Hope this helps. I recently wrote an AS3 sound experiment which exposed me to some of the basics, and will be watching this space to see what other suggestions people come up with.
You are using a JavaScript interval, which cannot be guaranteed to fire at an exact time. I am sure that the internal Flash timing is far more reliable.
But this might help: fire recur after you have triggered the sound playback.
window.setTimeout(function() { func(); recur(func, delay); }, delay);
As explained in another answer, there's no way you can avoid this. But...
I've done some experiments to mitigate these issues, and in the end I resorted to using:
lowLag: responsive html5 audio, which uses SoundManager2 for some cases where it's the fastest option available.
GSAP JS – Professional-Grade JavaScript Animation, in order to do the animation of properties and syncing of the audio (you probably don't care about this :P)
Take a peek at the source on the prototype of the demo, and (if possible) give lowLag a shot. It worked nicely for me.
Good luck!
