Are there any real risks to continuous polling? - javascript

It's sort of been 'against my religion' to poll a condition using a timeout or similar technique. I'd much rather handle an event than continuously search for evidence that an event occurred.
But, due to spotty support for the onhashchange event, it was suggested I use a plugin that polls the window.location property every 50ms.
Are there any real risks to doing this (eg processing expense)? Or am I just superstitious?

When I asked a similar question recently, people suggested a poll frequency of once per second: What is a good setTimeout interval for polling in IE?

We use a setInterval of 120ms to check the hash.
Although it is a one-page web app that relies heavily on JavaScript, the polling doesn't hurt performance at all.
I didn't try IE6 and IE7, but everything is fine on IE8 and today's other common browsers.
The 120ms value came from some qualitative testing: I changed it in steps of 20 while using the back/forward buttons.
I couldn't feel any difference for values lower than 120, while I could feel a growing lag for values over 120.
But this probably depends on your app; the polling interval should be chosen relative to the overall response time of the app.
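For illustration, here is a minimal sketch of that kind of polling. The 120ms interval matches what we use, but onHashChanged is just a hypothetical placeholder for your app's own routing handler:

var lastHash = window.location.hash;

setInterval(function () {
    if (window.location.hash !== lastHash) {
        lastHash = window.location.hash;
        onHashChanged(lastHash);   // hypothetical handler, e.g. load the matching view
    }
}, 120);

function onHashChanged(hash) {
    console.log('hash is now', hash);
}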

Since it is common practice to poll the window.location property, I believe there is no real risk involved. When I needed to implement it, I found out that such polling is used in Google's Gmail and Translate.

The biggest problem here is that your window.location checking code will be executing 20 times every second, which seems like overkill.
You may want to test it on IE to see what kind of slowdown it has there, as IE is going to be the biggest problem with anything JS related.

Related

Detect Graphics card performance - JS

This is a longshot - is there anyway to detect poor vs strong graphics card performance via a JS plugin?
We have built a parallax site for a client, it stutters on lower performance machines - we could tweak the performance to make it work better across the board - but this of course reduces the experience for users with higher performance machines.
We could detect browser version also - but the same browser could run on low and high performance machines - so doesn't help our situation
Any ideas?
requestAnimationFrame (rAF) can help with this.
You could figure out your framerate using rAF. Details about that here: calculate FPS in Canvas using requestAnimationFrame. In short, you figure out the time difference between frames, then divide 1 by it (e.g. 1/0.0159s ~= 62fps).
Note: with any method you choose, the performance cutoff will be somewhat arbitrary. Perhaps anything over 24 frames per second could be considered "high performance."
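As a rough sketch of that calculation (assuming requestAnimationFrame is available; logging every frame is just for illustration):

var lastTime = null;

function tick(now) {
    if (lastTime !== null) {
        var fps = 1000 / (now - lastTime);   // e.g. 1000 / 16.6ms ~= 60fps
        console.log('current fps:', Math.round(fps));
    }
    lastTime = now;
    requestAnimationFrame(tick);
}
requestAnimationFrame(tick);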
Why not let the user decide? Youtube (and many other video sharing sites) implements a selector for quality of playback, now a gear icon with a list of resolutions you can choose from. Would such a HD | SD or Hi-Fi | Lo-Fi selector work (or even make sense) in the context of your application?
This is where "old school" loading screens came in useful: you could render something complex either in the foreground (or hidden away) where it didn't matter if it looked odd or jerky, and by the time you had loaded your resources you could decide which effects to enable or disable.
Basically you would use what jcage mentioned for this: testing the frame rate (i.e. using a setInterval in conjunction with a timer). This isn't always 100% reliable, however, because if the machine decides at that instant to do a triple-helix-backward-somersault (or something more likely) you'll get a dodgy reading. It is possible, depending on the animations involved, to upgrade and downgrade the effects in real time, but this is always trickier to code, and your own measurement of the situation can itself sometimes hurt performance.
Firefox has a built-in list of graphics cards that are not supported: https://wiki.mozilla.org/Blocklisting/Blocked_Graphics_Drivers (and the related help article).
But you can only test for them indirectly, by trying to use WebGL features...
http://get.webgl.org/troubleshooting/ leads you to the corresponding browser vendor when there are problems. If you check its JS code, you will see that it tests for support along these lines:
if (!window.WebGLRenderingContext) {
    alert('Your browser does not support WebGL');
}
i.e. whether the browser (and, indirectly, your graphics card/driver) exposes WebGL at all.
You might consider checking whether the browser supports window.requestAnimationFrame, which would indicate you are running in a newer browser; alternatively, consider checking jQuery.fx.interval.
You could then implement a custom function to gauge the available processing power. You might try using a custom easing function, which can then be run through a function like .slideDown(), to get an idea of the computational power available.
See this answer to another question for more ideas on how to check performance in javascript.
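A minimal sketch of that kind of check might look like the following; the 'lite-effects' class and the decision rule are assumptions you would adapt to your own app:

var supportsRAF = !!(window.requestAnimationFrame ||
                     window.webkitRequestAnimationFrame ||
                     window.mozRequestAnimationFrame);

// jQuery.fx.interval is the delay (in ms) between jQuery animation frames;
// it only exists if jQuery is loaded, so guard the access.
var fxInterval = (window.jQuery && jQuery.fx && jQuery.fx.interval) || null;

if (!supportsRAF) {
    // Hypothetical hook: older browser, fall back to simpler effects.
    document.documentElement.className += ' lite-effects';
}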
If the browser is ultra-modern and supports requestAnimationFrame, you can calculate the animation frame rate in realtime and drop your animation settings for slower machines. That's IE 10, Firefox 4, Chrome 10, and Safari 6.
You would essentially create a 'tick' function that runs on a requestAnimationFrame() callback and tracks how many milliseconds pass between each tick. After so many ticks have been registered, you can average it out to determine your overall frame rate. But there are two caveats to this:
When the tab is in the background requestAnimationFrame callbacks will be suspended -- so a single, sudden delay between frames of several seconds to many minutes does not mean it's a slow animation. And:
Simply having a JavaScript gauge running on each animation frame will cause the overall animation to slow down a bit; there's no way to measure something like that without negatively impacting it. Remember, Heisenberg's a bastard.
For older browsers, I don't know of any way to reliably gauge the frame rate. You may be able to simulate it using setTimeout() but it won't be nearly as accurate -- and might have an even more negative impact on performance.
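Putting those pieces together, a hedged sketch of such a gauge might look like this; the sample count, the one-second cutoff for background-tab gaps, and the 30fps threshold are all assumptions to tune for your own animation:

function gaugeFrameRate(sampleCount, done) {
    var deltas = [], last = null;

    function tick(now) {
        if (last !== null) {
            var delta = now - last;
            if (delta < 1000) {            // ignore huge gaps (tab was in the background)
                deltas.push(delta);
            }
        }
        last = now;
        if (deltas.length < sampleCount) {
            requestAnimationFrame(tick);
        } else {
            var sum = 0;
            for (var i = 0; i < deltas.length; i++) { sum += deltas[i]; }
            done(1000 / (sum / deltas.length));   // average frames per second
        }
    }
    requestAnimationFrame(tick);
}

gaugeFrameRate(60, function (fps) {
    if (fps < 30) {
        disableExpensiveEffects();   // hypothetical downgrade hook in your app
    }
});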
This is likely a risk/benefit-based decision. I think you will have to make an important but tough decision here.
1)
If you decide to have two versions, you will have to spend some time to:
figure out fast, non intrusive test
spend time implementing that
roll it into production
and you will most probably end up with an incorrect implementation. For example, at the moment my computer is running 70 Chrome tabs, VLC playing 1080p anime, and IntelliJ IDEA.
The probability that my 2012 MBP will be detected as a "slow" computer is high, at least right now.
False positives are really hard to figure out.
2)
If you go for one version, you will have to choose between HD and Lo-Fi, as #Patrick mentioned, which is again a mistake in my opinion.
What I suggest is that you go to Google Analytics and figure out your browser distribution (yes, I know it can be misleading, but so can any other test). Based on that, if the majority of users are on Chrome or a modern IE/FF, go with the HD version, BUT spend some time figuring out an optimisation strategy.
There are always things that could be done better and faster. Get one older laptop and optimise until you get a decent FPS rate. And that's it: you as the developer need to make that decision; that is your duty.
3)
If from the browser distribution you figure out that you absolutely must go with the Lo-Fi version, well, think about whether the "downgrade" is worth it, and implement it only as a last resort.

Best practice: very long running polling processes in Javascript?

I have a touch screen kiosk application that I'm developing that will be deployed on the latest version of Chrome.
The application will need to make AJAX calls to a web service, every 10 minutes or so, to pull through any updated content.
As it's a kiosk application, the page is unlikely to be reloaded very often and theoretically, unless the kiosk is turned off, the application could run for days at a time.
I guess my concern is memory usage and whether or not a very long-running setTimeout loop would chew through a large amount of memory, given sufficient time.
I'm currently considering the use of Web Workers and I'm also going to look into Web Sockets but I was wondering if anyone had any experience with this type of thing?
Cheers,
Terry
The browser has a garbage collector, so no problems there, as long as you don't introduce memory leaks through bad code. Here's an article, and another article, about memory-leak patterns; that should get you started on how to program efficiently and hunt down leaky code.
Also, you have to consider the DOM. A person on SO once said that "things that are not on screen should be removed and not just hidden" - this not only removes the element visually, it actually removes it from the DOM, removes its handlers, and frees the memory it used.
As for the setTimeout, lengthen the interval between calls. Too fast and you will chew up memory quickly (and make the page rather... laggy). I just tested code for a timer-based "hashchange" detection, and even on Chrome it makes the page noticeably slower.
Also research known Chrome bugs and keep the browser updated.
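For what it's worth, a minimal sketch of such a long-running poll could look like this; the URL, the 10-minute interval and applyUpdates are placeholders, not a prescribed API:

var TEN_MINUTES = 10 * 60 * 1000;

function pollForUpdates() {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/api/content-updates', true);   // hypothetical endpoint
    xhr.onload = function () {
        if (xhr.status === 200) {
            applyUpdates(JSON.parse(xhr.responseText));   // hypothetical handler
        }
        // Re-schedule only after the previous request finishes, so slow
        // responses never pile up the way a fixed setInterval could.
        setTimeout(pollForUpdates, TEN_MINUTES);
    };
    xhr.onerror = function () {
        setTimeout(pollForUpdates, TEN_MINUTES);
    };
    xhr.send();
}

pollForUpdates();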

Is setInterval CPU intensive?

I read somewhere that setInterval is CPU intensive. I created a script that uses setInterval and monitored the CPU usage but didn't notice a change. I want to know if there is something I missed.
What the code does is check for changes to the hash in the URL (the content after #) every 100 milliseconds and, if it has changed, load a page using AJAX. If it has not changed, nothing happens. Would there be any CPU issues with that?
I don't think setInterval is inherently going to cause you significant performance problems. I suspect the reputation may come from an earlier era, when CPUs were less powerful.
There are ways that you can improve the performance, however, and it's probably wise to do them:
Pass a function to setInterval, rather than a string.
Have as few intervals set as possible.
Make the interval durations as long as possible.
Have the code running each time as short and simple as possible.
Don't optimise prematurely -- don't make life difficult for yourself when there isn't a problem.
One thing, however, that you can do in your particular case is to use the onhashchange event, rather than timeouts, in browsers that support it.
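A rough sketch of that approach, falling back to your 100ms poll only when the event isn't available (handleHash is a placeholder for your own AJAX-loading code):

function handleHash() {
    console.log('hash is now', window.location.hash);   // load content via AJAX here
}

if ('onhashchange' in window) {
    if (window.addEventListener) {
        window.addEventListener('hashchange', handleHash, false);
    } else {
        window.attachEvent('onhashchange', handleHash);   // older IE
    }
} else {
    var lastHash = window.location.hash;
    setInterval(function () {
        if (window.location.hash !== lastHash) {
            lastHash = window.location.hash;
            handleHash();
        }
    }, 100);
}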
I would rather say it's quite the opposite. Using setTimeout and setInterval correctly can drastically reduce the browser's CPU usage. For instance, using setTimeout instead of a for or while loop will not only reduce the intensity of CPU usage, but will also guarantee that the browser has a chance to update the UI queue more often. So long-running processes will not freeze and lock up the user experience.
But in general, using a lot of setInterval calls on your site may slow things down. Twenty simultaneously running intervals, each doing more or less heavy work, will affect the show. And then again, you can mess up any part of your code; that isn't a problem specific to setInterval.
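As a sketch of that idea (the batch size of 100 is an arbitrary assumption), you can replace one long loop with small batches so the browser can repaint and handle input in between:

function processInChunks(items, handleItem, batchSize) {
    var index = 0;
    (function nextBatch() {
        var end = Math.min(index + batchSize, items.length);
        for (; index < end; index++) {
            handleItem(items[index]);
        }
        if (index < items.length) {
            setTimeout(nextBatch, 0);   // yield to the UI queue, then continue
        }
    })();
}

// e.g. processInChunks(bigArray, renderRow, 100);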
...and by the way, you don't need to check the hash like that. There is an event for that:
onhashchange
will fire when the hash changes.
window.addEventListener('hashchange', function(e) {
    console.log('hash changed, yay!');
}, false);
No, setInterval is not CPU intensive in and of itself. If you have a lot of intervals running on very short cycles (or a very complex operation running on a moderately long interval), then that can easily become CPU intensive, depending upon exactly what your intervals are doing and how frequently they are doing it.
I wouldn't expect to see any issues with checking the URL every 100 milliseconds on an interval, though personally I would increase the interval to 250 milliseconds, just because I don't expect that the difference between the two would be noticeable to a typical user and because I generally try to use the longest timeout intervals that I think I can get away with, particularly for things that are expected to result in a no-op most of the time.
There's a bit of marketing going on there with the term "CPU intensive". What it really means is "more CPU intensive than some alternatives". It's not "CPU intensive" in the sense of "uses a whole lot of CPU power, like a game or a compression algorithm would".
Explanation:
Once the browser has yielded control it relies on an interrupt from
the underlying operating system and hardware to receive control and
issue the JavaScript callback. Having longer durations between these
interrupts allows hardware to enter low power states which
significantly decreases power consumption.
By default the Microsoft Windows operating system and Intel based
processors use 15.6ms resolutions for these interrupts (64 interrupts
per second). This allows Intel based processors to enter their lowest
power state. For this reason web developers have traditionally only
been able to achieve 64 callbacks per second when using setTimeout(0)
when using HTML4 browsers including earlier editions of Internet
Explorer and Mozilla Firefox.
Over the last two years browsers have attempted to increase the number
of callbacks per second that JavaScript developers can receive through
the setTimeout and setInterval API’s by changing the power conscious
Windows system settings and preventing hardware from entering low
power states. The HTML5 specification has gone to the extreme of
recommending 250 callbacks per second. This high frequency can result
in a 40% increase in power consumption, impacting battery life,
operating expenses, and the environment. In addition, this approach
does not address the core performance problem of improving CPU
efficiency and scheduling.
From http://ie.microsoft.com/testdrive/Performance/setImmediateSorting/Default.html
In your case there will not be any issue. But if you're doing some huge animations in canvas or working with WebGL, then there may be CPU issues, and for that you can use requestAnimationFrame.
Refer to this link about requestAnimationFrame.
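For example, a minimal requestAnimationFrame loop looks like this; drawFrame is a placeholder for your own canvas drawing, and older browsers may need a vendor-prefixed or setTimeout fallback:

function drawFrame(time) {
    // ...draw the current state of the animation on the canvas here...
    requestAnimationFrame(drawFrame);   // typically ~60fps, throttled in background tabs
}
requestAnimationFrame(drawFrame);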
If the function's execution time exceeds the interval time, that's bad: you can't know when the CPU hiccups or is simply slow, and calls stack up on top of still-running functions until the page freezes. Use setTimeout instead (re-scheduling itself from a callback) or, even better, process.nextTick with a callback inside a setTimeout.

Javascript execution blocking rendering/scrolling in IE 8

I'm having a bit of trouble with IE 8 (and probably all previous versions). Firefox and Chrome/Webkit are seemingly fine.
Something is causing page rendering, scrolling, and basically all page interaction to become blocked. As best I can tell, Javascript execution is causing this to happen.
Specifically, I think there are two major responsible parties in my specific situation - Google Maps API (v3) and Facebook Connect.
In both cases, I am using the asynchronous load methods provided by both Google and Facebook.
I've tried a couple of things so far, to no avail:
Delaying execution with jQuery's $(document).ready(). This just prevents the locking until slightly later in the page load. Actually, since I use gzip compression, I'm not really sure it does anything - I'm not clear on how that works.
Delaying execution with window.onload. Same situation - the whole page loads, then it locks up while it grabs and executes the Facebook Connect code.
Using setTimeout(function(){}, 0). I'm not 100% clear on how this is supposed to actually work - as I understand it, it essentially is supposed to force the execution of the function's code to wait until the stack is clear. Unfortunately, this doesn't seem to do much of anything for me.
I think the problem is especially exaggerated for me because I am on a slow connection.
I can't think of any specific oddities with my site that would be a factor, but I won't rule that out.
So, my question:
Are there any best practices or existing solutions to this issue?
Is there something that I am obviously doing wrong?
The offending site is at: http://myscubadives.com/, if anyone would be willing to take a look at the specific implementation.
Thank you in advance for your time and help.
Sam
Yes, the browser (at least IE) suspends itself while Javascript is being executed. This makes things a bit faster, because it doesn't have to redraw and recalculate layout every time you make a change. However if your Javascript takes a long time to execute, this will seem like freezing. Synchronous XMLHttpRequests also count.
Unfortunately there is no pretty workaround. The typical advice is to use the window.setTimeout() function with the timeout set to 0 (or something very small) to split the workload into several parts. In between, the browser can manage to redraw itself and respond to some user interaction, so it doesn't seem frozen. The code gets ugly though.
In case of lengthy XMLHttpRequests you have no choice but to use the asynchronous version.
Added: Ahh, I see that you already know this. Should read more carefully. :P Did you also know that IE8 has Developer Tools built in (press F12 to activate) and that the Javascript tab has a profiler? I checked it out, and 2 seconds were spent exclusively in jQuery's get() method, which gives me a strong suspicion that something is still using synchronous XMLHttpRequests.
Function: get
Count: 10
Inclusive time: 2 039,14
Exclusive time: 2 020,59
Url: http://ajax.googleapis.com/ajax/libs/jquery/1.4.3/jquery.min.js
Line: 127

What's the best way to determine at runtime if a browser is too slow to gracefully handle complex JavaScript/CSS?

I'm toying with the idea of progressively enabling/disabling JavaScript (and CSS) effects on a page - depending on how fast/slow the browser seems to be.
I'm specifically thinking about low-powered mobile devices and old desktop computers -- not just IE6 :-)
Are there any examples of this sort of thing being done?
What would be the best ways to measure this - accounting for things, like temporary slowdowns on busy CPUs?
Notes:
I'm not interested in browser/OS detection.
At the moment, I'm not interested in bandwidth measurements - only browser/cpu performance.
Things that might be interesting to measure:
Base JavaScript
DOM manipulation
DOM/CSS rendering
I'd like to do this in a way that affects the page's render-speed as little as possible.
BTW: In order to not confuse/irritate users with inconsistent behavior - this would, of course, require on-screen notifications to allow users to opt in/out of this whole performance-tuning process.
[Update: there's a related question that I missed: Disable JavaScript function based on user's computer's performance. Thanks Andrioid!]
Not to be a killjoy here, but this is not a feat that is currently possible in any meaningful way in my opinion.
There are several reasons for this, the main ones being:
Whatever measurement you do, if it is to have any meaning, will have to test the maximum potential of the browser/CPU, which you cannot do while maintaining any kind of reasonable user experience.
Even if you could, it would be a meaningless snapshot, since you have no idea what kind of load the CPU is under from applications other than the browser while your test is running, or whether that situation will continue while the user is visiting your website.
Even if you could do that, every browser has its own strengths and weaknesses, which means you'd have to test every DOM-manipulation function to know how fast the browser would complete it. There is no "general" or "average" that makes sense here, in my experience; and even if there were, the speed with which DOM-manipulation commands execute depends on the context of what is currently in the DOM, which changes as you manipulate it.
The best you can do is to either
Let your users decide what they want, and enable them to easily change that decision if they regret it
or better yet
Choose to give them something that you can be reasonably sure that the greater part of your target audience will be able to enjoy.
Slightly off topic, but following this train of thought: if your users are not tech leaders in their social circles (like most users here are, but most people in the world are not), don't give them too much choice, i.e. any choice that is not absolutely necessary - they don't want it and they won't understand the technical consequences of their decision until it is too late.
A different approach, that does not need explicit benchmark, would be to progressively enable features.
You could apply features in prioritized order, and after each one, drop the rest if a certain amount of time has passed.
Ensuring that the most expensive features come last, you would present the user with a somewhat appropriate selection of features based on how speedy the browser is.
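A hedged sketch of that idea might look like the following; the feature functions and the 50ms budget are purely illustrative assumptions:

var features = [enableBasicTransitions, enableParallax, enableParticleEffects];   // hypothetical, in priority order
var budgetMs = 50;
var started = Date.now();

(function enableNext(i) {
    if (i >= features.length) { return; }
    features[i]();                                   // turn the next feature on
    if (Date.now() - started < budgetMs) {
        setTimeout(function () { enableNext(i + 1); }, 0);
    }
    // otherwise quietly drop the remaining, more expensive features
})(0);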
You could try timing some basic operations - have a look at Steve Souders' Episodes and Yahoo's boomerang for good ways of timing things browser-side. However, it's going to be rather complicated to work out how the metrics relate to an acceptable level of performance / a rewarding user experience.
If you're going to provide a UI to let users opt in / opt out, why not just let the user choose the level of eye candy in the app vs the rendering speed?
Take a look at some of Google's (copyrighted!) benchmarks for V8:
http://v8.googlecode.com/svn/data/benchmarks/v4/regexp.js
http://v8.googlecode.com/svn/data/benchmarks/v4/splay.js
I chose a couple of the simpler ones to give an idea of similar benchmarks you could create yourself to test feature sets. As long as you keep the run time of your tests (from the time logged at the start to the time logged at completion) under 100 ms on the slowest systems (and these Google tests take vastly longer than that), you should get the information you need without being detrimental to user experience. While the Google benchmarks care about fine-grained differences between faster systems, you don't. All you need to know is which systems take longer than XX ms to complete.
Things you could test are regular expression operations (similar to the above), string concatenation, page scrolling, anything that causes a browser repaint or reflow, etc.
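As a sketch of such a test (the workload and the 100ms cutoff are assumptions; all you really need is a yes/no answer, not a precise score):

function quickBenchmark() {
    var start = Date.now(), s = '';
    for (var i = 0; i < 20000; i++) {
        s += i.toString(36);                 // some cheap string concatenation...
        /foo(bar)?/.test(s.slice(-10));      // ...plus a little regexp work
    }
    return Date.now() - start;               // elapsed milliseconds
}

var isSlowSystem = quickBenchmark() > 100;   // arbitrary threshold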
You could run all the benchmarks you want using Web Workers and then, according to the results, store your conclusion about the machine's performance in a cookie.
This only works in browsers with HTML5 Web Worker support, of course.
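A rough sketch of that combination, using an inline Blob worker so the benchmark runs off the main thread (the benchmark body, the cookie name and the one-week expiry are assumptions):

var source = 'var start = Date.now(), s = "";' +
             'for (var i = 0; i < 200000; i++) { s += i % 10; }' +
             'postMessage(Date.now() - start);';
var worker = new Worker(URL.createObjectURL(new Blob([source])));

worker.onmessage = function (e) {
    // Persist the score so later page views can skip re-running the test.
    document.cookie = 'perfScore=' + e.data + '; path=/; max-age=604800';
    worker.terminate();
};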
Some Ideas:
Putting a time-limit on the tests seems like an obvious choice.
Storing test results in a cookie also seems obvious.
Poor performance on a test could pause further scripts and trigger display of a non-blocking prompt UI (like the save-password prompts common in modern web browsers) that asks the user if they want to opt into further scripting effects - and store the answer in a cookie.
While the user hasn't answered the prompt, periodically repeat the tests and auto-accept the scripting prompt if consecutive tests finish faster than the first one.
On a side note, slow network speeds could also probably be tested by timing the download of external resources (like the page's own CSS or JavaScript files) and comparing that result with the JavaScript benchmark results. This may be useful on sites relying on lots of XHR effects and/or heavy use of <img/>s.
It seems that DOM rendering/manipulation benchmarks are difficult to perform before the page has started to render - and are thus likely to cause quite noticeable delays for all users.
I came to a similar question and solved it this way; in fact it helped me make some decisions.
After rendering the page I do:
let now, finishTime, i = 0;
now = Date.now();          // returns the number of milliseconds since Jan 01 1970
finishTime = now + 200;    // we add 200ms (1/5 of a second)
while (now < finishTime) {
    i++;
    now = Date.now();
}
console.log("I looped " + i + " times!!!");
After doing that I tested it on several browsers with different OSes, and the i value gave me the following results:
Windows 10 - 8GB RAM:
approx. 1,500,000 on Chrome
approx. 301,327 on Internet Explorer
141,280 on Firefox in a virtual machine running Lubuntu (2 GB of RAM allocated)
MacOS 8GB RAM:
approx. 3,000,000 on Safari
approx. 1,500,000 on Firefox
70,000 on Firefox 41 in a virtual machine running Windows XP (2 GB of RAM allocated)
Windows 10 - 4GB RAM (this is an old computer I have):
approx. 500,000 on Google Chrome
I load a lot of divs in the form of a list; they are loaded dynamically according to the user's input, and this helped me limit the number of elements I create according to the performance each environment has shown. BUT
the JS is not everything! Even though the Lubuntu OS running on a virtual machine gave poor results, it loaded 20,000 div elements in less than 2 seconds and you could scroll through the list with no problem, while it took more than 12 seconds on IE and the performance sucked!
So that could be a good approach, but when it comes to rendering, that's another story; still, it can definitely help you make some decisions.
Good luck, everyone!
