Variation in JavaScript's Date accuracy - javascript

TLDR: Is there data on variation of JS's Date accuracy?
I'm looking into doing some research online, gathering reaction data for experiments.
As a contrived example, let's say a user clicks a button and a new image is displayed on the screen. For the purposes of the question, imagine that this takes somewhere between 50 and 100 ms.
I need to measure the delay between an interaction (e.g. a button click) and the displaying of the new DOM state, ideally to millisecond accuracy.
I've looked into it (including through SO with questions like this) and so far it doesn't seem like JS's built-in Date will cut it, since a delay on the execution thread can push the recorded time out of sync. This seems a bit odd to me: dates are reported to millisecond precision, and yet the actual accuracy appears to be much coarser.
I'm also aware that there are other latencies associated, such as screen refresh rate. This question is purely about the execution inaccuracies.
My question is this: is there any data on the error rates/variations etc. of the Date object across browsers/operating systems? Although it would be good to get an idea of the overall variation across systems, what I'm really after is the repeat-trial variation (doing the same thing on the same system over and over).
I'm looking for a solution that can be delivered entirely using a client-side browser, so no extensions or other executables that a user would need to download.
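For concreteness, here is a minimal sketch of the kind of measurement I mean, using performance.now() and a double requestAnimationFrame to approximate when the new DOM state has actually been painted (button and showNewImage() are hypothetical placeholders):

// Sketch only: times from the click handler to (approximately) the frame
// after the DOM update has been painted. showNewImage() is hypothetical.
button.addEventListener('click', function () {
    var start = performance.now();
    showNewImage();                          // the DOM change being measured
    requestAnimationFrame(function () {
        // one more frame so the new state has actually been painted
        requestAnimationFrame(function () {
            var elapsed = performance.now() - start;
            console.log('Click-to-paint: ' + elapsed.toFixed(1) + ' ms');
        });
    });
});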

Related

How to get Render Performance for Javascript based Charting Libraries?

To preface, I am pretty new to programming Javascript, but I have been working with various libraries for a while now. I've been tasked with getting performance metrics for various charting libraries to find the fastest and most flexible among the libraries available (e.g. AmCharts, HighCharts, SyncFusion, etc.). I've tried JSPerf, but it seems I am getting performance metrics for the code execution and not for the actual rendered chart, which is the metric we want (i.e. what the user experience will be). I've also tried using performance.now() within the Javascript code in the header, and wrapped around the tags where the charts are displayed, but neither method is working.
What is the best way to get these performance metrics based on rendering?
Short Answer :
Either :
Start your timing right before the chart code executes and set up a MutationObserver to watch the DOM, ending the timer when the mutations stop (a rough sketch follows below).
Find out if the charting library has a done() event. (But be cautious as this can be inaccurate depending on implementation/library. "done()" could mean visually done, but background work is still being performed. This could cause interactivity to be jumpy until the chart is completely ready).
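A rough sketch of the MutationObserver option; the container id, the renderChart() call and the 200 ms quiet period are hypothetical placeholders:

var container = document.getElementById('chart');     // assumed chart container
var start = performance.now();
var lastMutation = start;

var observer = new MutationObserver(function () {
    lastMutation = performance.now();                  // note every DOM change
});
observer.observe(container, { childList: true, subtree: true, attributes: true });

renderChart(container);                                // hypothetical library call

// Treat 200 ms without mutations as "rendering finished".
var poll = setInterval(function () {
    if (performance.now() - lastMutation > 200) {
        clearInterval(poll);
        observer.disconnect();
        console.log('Rendered in ~' + Math.round(lastMutation - start) + ' ms');
    }
}, 50);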
Long Answer :
I'm assuming your test data is quite large, since most libraries can handle a couple thousand points with negligible degradation. Measuring performance for client-side charting libraries is actually a two-sided issue: rendering time and usability.
Rendering time can be measured as the duration from when a library starts interpreting the dataset to when the visual representation of the chart appears. Depending on each library's interpretation algorithm, your mileage will vary with data size. Say library X uses an aggressive sampling algorithm and only has to draw a small percentage of the dataset: performance will be extremely fast, but it may or may not be an accurate representation of your data set, and interactivity at a finer grain of detail could be limited.
Which leads me to the usability and interactivity aspect of performance. We're using a computer and not a chart on a piece of paper; it should be as interactive as possible.
As the amount of interactivity goes up, though, your browser can be susceptible to slowdown depending on the library's implementation. What if each of your million data points were an interactive DOM node? A million data points would surely crash the browser.
Most of the charting libraries out there handle the tradeoff between performance, accuracy, and usability differently. As for which is best, it all depends on the implementation.
Plug/Source: I am a developer at ZingChart, and we deal with customers with large datasets all the time. We also built this comparison, which is pretty relevant to your tests: http://www.zingchart.com/demos/zingchart-vs/
My method is really basic. I store the current time in a variable, then, when I reach the end of my code block, log the difference with console.log().
var start = +new Date();
//do lots of cool stuff
console.log('Rendered in ' + (new Date() - start) + ' ms');
Very generic and does what it says on the tin. If you want to measure each section of code you would have to create new time slots. Yes, the calculation itself takes time, but it is minuscule compared to the code being measured. Example in action at the jsFiddle.

Is there a harder to break way to obfuscate a javascript function than hieroglyphy?

I'm developing an online chess variant game. I want to create a javascript function that has the purpose of communicating to the server on behalf of the player the exact time spent on a move.
This message will be encrypted, of course, but in order to trust this function, I want to obfuscate it to the point that I can rely on the obfuscation algorithm.
I only know a few obfuscation algorithms, hieroglyphy being the most interesting, but it isn't unbreakable. Speed of execution and size are not critical: I can subtract the time spent by the sending function within that same function, and the size can be up to 2 MB.
I'm pretty sure that there is no unbreakable algorithm because as long as it is required to run in a browser, anyone with enough patience can take it piece by piece and see what it does.
Do I have an alternative that would require more effort and time from a user with bad intentions?
Edit: I've done some tests in every browser on Windows XP, and it appears that in FF, IE, Opera and Chrome the setTimeout callback fires after the delay passed as its second parameter, regardless of any changes to the system time during that delay. Absent information suggesting otherwise, the logical conclusion is that time can be measured client-side independently of system-time changes by using setTimeout rather than the Date object, to a precision limited by the setTimeout delay.
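For illustration, a minimal sketch of that setTimeout-based timer; the 100 ms tick is arbitrary and both caps the precision and undercounts slightly because of scheduling overhead:

var elapsedMs = 0;
var tickMs = 100;               // timer resolution; also the measurement precision
var running = false;

function startMoveTimer() {
    elapsedMs = 0;
    running = true;
    (function tick() {
        if (!running) return;
        elapsedMs += tickMs;    // count ticks instead of reading Date()
        setTimeout(tick, tickMs);
    })();
}

function stopMoveTimer() {
    running = false;
    return elapsedMs;           // approximate thinking time in ms
}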
Hamish mentioned in an answer below that modifying the browser's date/time APIs is trivial. In that case the JavaScript code is also vulnerable to a modification that lengthens the real setTimeout delay, so some check should be put in place so that the server starts suspecting cheating from anyone with an unreasonable lag time. This will always be a problem if lag time isn't included in thinking time.
There's a reason I can't use server side timing. The lag times would sometimes exceed a reasonable amount and that will leave users dissatisfied. And sometimes the lag can make all the difference.
Which brings me back to the original question. I'm looking for the best obfuscation method, where best is measured in the effort an attacker has to make to deobfuscate. Ideally, I would want to change the obfuscation algorithm faster than an attacker can deobfuscate, and then never to use that algorithm again or use it rarely, at a time the attacker won't expect.
I could set my computer's clock to three hours ago and your script would happily send -10800 seconds. NEVER rely on JavaScript to handle information in a trusted manner. Use your server-side code to time the difference between when the player's turn started and when they made their move, and absolutely keep a representation of the game on the server and make sure the move is valid.
Obfuscating your code doesn't help, for two reasons:
Users can still inspect the messages being sent from the browser to the server. You would also have to sign the message somehow, to prevent it being intercepted and modified. Generally, it will be even easier to unpack the message than the function used to generate it.
You're trying to measure the time taken on the move, which means your obfuscated function still has to trust the system clock and the browser date/time APIs. Both are trivial to modify.
A sensible solution would be to measure the time messages are sent and received on your server, and measure the latency of the connection to correct for transmission speeds (if you need to be very accurate).
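If it helps, a rough server-side sketch (Node.js) of that idea; game.turnStartedAt, game.latencyEstimate and game.clocks are hypothetical fields on whatever game record you keep, and the latency estimate would come from periodic ping messages:

// Sketch only: server-side move timing with a crude latency correction.
function recordMove(game, playerId, receivedAt) {
    receivedAt = receivedAt || Date.now();
    var rtt = game.latencyEstimate[playerId] || 0;       // measured round-trip time, ms
    // Credit the player for the transmission time in both directions.
    var thinkingTime = Math.max(0, (receivedAt - game.turnStartedAt) - rtt);
    game.clocks[playerId] -= thinkingTime;               // deduct from the player's clock
    return thinkingTime;
}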

Website Performance Testing: How best to approximate computer performance?

I have some browser-intensive CSS and animation in my webpage and I'd like to determine whether the user has a fast PC or not, so I can scale things accordingly to provide the best experience.
I am using http://detectmobilebrowser.com's script to detect all mobile devices, and I am going to include the clause /android|ipad|ipod|playbook|silk/i.test(a) to include all tablet devices as well.
However this doesn't and cannot really address the actual hardware. It doesn't go very far at all to paint a picture of what I'm looking for.
An iPhone 4S, for example, will be quite a lot more capable than many of the devices matched by the mobile user agent detector, and this provides no way for it to set itself apart.
Somebody might run Google Chrome on a Pentium II machine (somehow) and want to view my page. (This person probably does not have an iPhone 4S)
Obviously to actually get an idea for this I'll have to do some actual performance testing, and as with performance testing with any kind of application, it makes sense to only test the performance of the type of tasks that the application actually performs.
Even with this in mind, I feel it would be difficult to obtain reasonably accurate numbers before the performance-testing routine has taken too long and the user has become impatient. Just going ahead without up-front testing would probably be fine, unless I want the very first impression to be perfect. Well, that happens to be the case, so I can't get away with measuring performance "after the first run" and adjusting the parameters later.
So what I'm left with is basically trying to perform a similar task on initial page load, in a way that depends on browser rendering and processing speed, while not presenting anything to the user (so that, to the user, the page still appears to be loading), and then, preferably within a second or two, obtaining numbers accurate enough to set parameters so that the actual page animates and presents itself in a pleasing manner that doesn't resemble a slideshow.
Maybe I could place a full-page white <div> over my test case so that I can prevent the user from seeing what's going on and hope that the browser will not be smart by avoiding doing all the work.
Has anybody ever done this?
I know people are going to say, "you probably don't need to do this", or "there's gotta be a better way" or "reduce the amount of effects".
The reason for doing any of the things I'm doing on the page is so that it looks good. That's the entire point of it. If I didn't care about that as much, this question wouldn't exist. The goal is to give the javascript the ability to determine enough parameters to provide an awesome experience on a powerful computer, and also a passable experience on a less capable computer. When more power is available, it should be harnessed. So hopefully that explains why such suggestions are not valid answers to the question.
I think this is a great question because it puts the user's experience first and foremost.
Several ideas come to mind:
Microsoft has published many tests demonstrating the performance of IE 9 and 10. Many of these tests focus on graphic performance, such as this one, which appears to use this JavaScript file to measure performance. There may be some code/concepts you can use.
A media-intensive page probably takes a few seconds to load anyway, so you have a little breathing room if you begin your tests while the rest of the content loads. For example, initiate AJAX/image requests, run your tests, and then handle the responses.
To test graphic performance, what about using a loading graphic as the performance test? I'm not usually a fan of "loading" screens, but if the site may take a few seconds to load, and the end result is better UX, then it isn't a bad idea.
The white screen idea may work if you draw a bunch of white shapes on it (not sure if any engines are smart enough to optimize this away because it is the same color).
Ultimately, I would err on the side of better performance and lower fidelity, and a less accurate (but fast) test versus making the user wait for too long.
Rather than measuring the user's CPU performance once and determining how many fancy visual effects to use from that, I would measure the amount of time taken by the CPU-intensive bits every time they execute (using new Date()), compare that to expected minimum and maximum values (which you will have to determine), and dynamically adjust the "effect level" up and down as appropriate.
Say if the user starts up a program in the background which eats a lot of CPU time. If you use this idea, your page will automatically tone down the visual effects to save CPU cycles. When the background program finishes, the fancy effects will come back. I don't know if your users will like this effect (but I am sure they will like the fact that their browser stays responsive when the CPU is overloaded).
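A rough sketch of that idea; runEffects(), the effect-level scale and the 15/50 ms thresholds are all hypothetical placeholders to be tuned for the actual page:

var effectLevel = 3;                // 0 = plain, 3 = all the fancy effects

function animateFrame() {
    var start = new Date();
    runEffects(effectLevel);        // the CPU-intensive visual work
    var elapsed = new Date() - start;

    if (elapsed > 50 && effectLevel > 0) {
        effectLevel--;              // too slow: tone the effects down
    } else if (elapsed < 15 && effectLevel < 3) {
        effectLevel++;              // plenty of headroom: turn them back up
    }
    setTimeout(animateFrame, 40);   // roughly 25 fps
}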
This is a poor solution, but it worked at the time: I used to generate two random matrices of about 100x100, multiply them (the schoolboy way) 100 times, and time it. It took less than 1 second on regular machines and a bit more than 2 seconds on the slowest machine I could find (an EeePC 1000H). After that I could say "well, this CPU can do X floating point operations per second", which is very inaccurate and probably wrong, but the results were very stable with very low standard deviations, so I guess you can call it a rough measure of javascript mathematical performance, which tells you something about that computer's CPU.
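A minimal sketch of that benchmark; the matrix size and repeat count are arbitrary and should be tuned so the whole test stays well under a second:

function randomMatrix(n) {
    var m = [];
    for (var i = 0; i < n; i++) {
        m[i] = [];
        for (var j = 0; j < n; j++) m[i][j] = Math.random();
    }
    return m;
}

function multiply(a, b, n) {                 // the "schoolboy" O(n^3) algorithm
    var c = [];
    for (var i = 0; i < n; i++) {
        c[i] = [];
        for (var j = 0; j < n; j++) {
            var sum = 0;
            for (var k = 0; k < n; k++) sum += a[i][k] * b[k][j];
            c[i][j] = sum;
        }
    }
    return c;
}

function cpuScore(n, repeats) {              // e.g. cpuScore(100, 10)
    var a = randomMatrix(n), b = randomMatrix(n);
    var start = +new Date();
    for (var r = 0; r < repeats; r++) multiply(a, b, n);
    return new Date() - start;               // lower is faster
}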
You can also check whether WebGL is available; that rules out Windows versions older than Vista, and machines that can't run Vista or later are generally slower PCs. You can check it with this:
function hasWebGL() {
    if (typeof window.WebGLRenderingContext !== 'undefined') {
        var canvas = document.createElement('canvas');
        var gl = canvas.getContext('webgl') ||
                 canvas.getContext('experimental-webgl') ||
                 canvas.getContext('webkit-3d') ||
                 canvas.getContext('moz-webgl');
        return !!gl;
    }
    return false;   // WebGLRenderingContext not exposed at all
}

How can I estimate browser's Javascript capabilities?

I serve a web page which makes the client do quite a lot of Javascript work as soon as it hits. The amount of work is proportional to the amount of content, which varies a lot.
In cases where there is a huge amount of content, the work can take so long that clients will issue their users with one of those "unresponsive script - do you want to cancel it?" messages. In cases with hardly any content, the work is over in the blink of an eye.
I have included a feature where, in cases where the content is larger than some value X, I include a "this may take a while" message to the user which is displayed before the hard work starts.
The trouble is choosing a good value for X since, for this particular page, Chrome is so very much faster than Firefox which is faster than IE. I'd like to warn all users when appropriate, but avoid putting the message up when it's only going to be there for 100ms since this is distracting. In other words, I'd like the value for X to also depend on the browser's Javascript capabilities.
So does anyone have a good way of figuring out a browser's capabilities? I'm currently considering just explicitly going off what the browser is, but that seems hacky, and there are other factors involved I guess.
If the data is relatively homogeneous, one method might be to have a helper function that checks how long a particular subset of the data has taken to go through, and make a conservative estimate of how long the entire set will take.
From there, decide whether to display the message or not.
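A minimal sketch of that approach; processItem() stands in for whatever per-item work the page actually does, and the 25% padding is arbitrary:

function estimateTotalTime(items, sampleSize) {
    var n = Math.min(sampleSize, items.length);
    var start = +new Date();
    for (var i = 0; i < n; i++) processItem(items[i]);   // process a small sample
    var perItem = (new Date() - start) / n;
    // Conservative estimate of the time left for the remaining items.
    return perItem * (items.length - n) * 1.25;
}

// if (estimateTotalTime(data, 50) > 2000) showSlowWarning();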
This may not be where you want to go, but do you have a good idea why the javascript can take so long? Is it downloading a bunch of content over the wire or is the actual formatting/churning on the browser the slow part?
You might even be able to do something incrementally so that while the whole shebang takes a long time but users see content 'build' and thus don't have to be warned.
Why not just let the user decide what X is? (e.g. like those "display 10 | 20 | 50 | 100" per page choosers) Then you don't have to do any measurement/guesswork at all; you can let them make the optimal latency / information content tradeoff.
This is somewhat misleading; usually when one discusses a browser's JS capabilities, it's referring to the actual abilities of the browser, such as does it support native XMLHTTP? Does it support ActiveX? etc.
Regardless, there is no way to reliably deduce the processing power or speed of a browser. One might think you could run some simple stress tests, compute the result, and compare it to a list of past performances to see where the current user's browser ranks, and possibly use that information to arrive at an estimated time. The problem is that these measurements can be influenced by other activity in the browser (or simply on the OS); for instance, you run your profiling script, the user's AV scanner starts up because it's 5pm, and what normally takes 2s takes 20s.
One thing to ask yourself is: does this processing have to take place right NOW? As n8wrl and Beska alluded to, you might need to code your own method whereby you break the work up into chunks and then operate on them one at a time using something like setTimeout(). This gives the engine time to "breathe" and thus hopefully avoids the "unresponsive script" warnings. Each of these chunks could also be used to update a progress bar (or similar) that gives the user some indication that work is being done.
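A minimal sketch of that chunked approach; processItem() and updateProgress() are hypothetical placeholders for the real per-item work and the progress indicator:

function processInChunks(items, chunkSize, onDone) {
    var index = 0;
    (function nextChunk() {
        var end = Math.min(index + chunkSize, items.length);
        for (; index < end; index++) {
            processItem(items[index]);              // the real per-item work
        }
        updateProgress(index / items.length);       // e.g. move a progress bar
        if (index < items.length) {
            setTimeout(nextChunk, 0);               // yield so the UI can breathe
        } else if (onDone) {
            onDone();
        }
    })();
}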
Or you could take the approach GMail uses: they flash a very small, red "Loading..." text area in the corner of the window. Sometimes it's there for a few seconds, sometimes it's not there long enough to read it. Other times it blinks on and off several times. But you know when it's doing something.
Lastly, also on the point of incrementally "building" the page, you could inspect the source of Chrome's new tab page. Note: you can't view this using "view source"; instead, choose the "javascript console" option (while on the new tab page) and then look at the HTML source there. There should be a comment that explains their general strategy, like so:
<!-- This page is optimized for perceived performance. Our enemies are the time
taken for the backend to generate our data, and the time taken to parse
and render the starting HTML/CSS content of the page. This page is
designed to let Chrome do both of those things in parallel.
1. Defines temporary content callback functions
2. Fires off requests for content (these can come back 20-150ms later)
3. Defines basic functions (handlers)
4. Renders a fast-parse hard-coded version of itself (this can take 20-50ms)
5. Defines the full content-rendering functions
If the requests for content come back before the content-rendering functions
are defined, the data is held until those functions are defined. -->
Not sure if that helps, but I think it does give insight into how some of the big players handle challenges such as this.

What is considered a fast or slow load w/ respect to a web page

I just built a web page that is employing several different javascript elements. I am just curious as to what is considered a fast vs. a slow load time. Mine is coming out to be about 490ms w/ four different javascript pieces. Is that good, bad or average? Wondering if I need to optimize my js elements or not.
There's a paper from 1991 called "The information visualizer, an information workspace", by Card et al. (Xerox Palo Alto Research Center) that says:
0.1 second is about the limit for having the user feel that the system is reacting instantaneously, meaning that no special feedback is necessary except to display the result.
1.0 second is about the limit for the user's flow of thought to stay uninterrupted, even though the user will notice the delay. Normally, no special feedback is necessary during delays of more than 0.1 but less than 1.0 second, but the user does lose the feeling of operating directly on the data.
10 seconds is about the limit for keeping the user's attention focused on the dialogue. For longer delays, users will want to perform other tasks while waiting for the computer to finish, so they should be given feedback indicating when the computer expects to be done. Feedback during the delay is especially important if the response time is likely to be highly variable, since users will then not know what to expect.
A detailed overview can be found here.
Bottom line: if we want a fluid user experience, our apps need to operate at 10+ 'fps' :)
Just run your site through this tool and you'll know (almost) all there is to know about loading times. Or use Google's Speed Tracer if you use Chrome.
Try to think from the users point of view:
Instead of measuring the response time of some particular XHR request, consider the function the user wants to perform, such as "post a comment on this website", and measure the total time to achieve that goal. If the user's work, as a whole, is too much or takes too long, they will go elsewhere.
e.g. I could have 10 XHR functions that each return in 15 ms, which might seem blazing fast. But if the user has to click in 10 different places to post a comment, they're going to get sick of my interface pretty quickly too.
(extreme example)
This is going to vary from system to system. But if it's taking less than a few seconds to load everything (completely - dom ready), that's pretty good if you ask me. I recall keeping my entire project size down to about 50kb, back before we had all of this fancy ajax :) Cherish your broadbands and asynchronous calls!
If I remember correctly from a podcast about StackOverflow (I can't find which one it was anymore, on DNR or on Hanselminutes), above 1 second the user starts to have time to be distracted by something else and lose focus. At 10 seconds, I would probably have closed your page already!
Of course it depends on what kind of action it is, but if it's a repetitive task, 1 second is the maximum I would say. That's what I aim for in general. It feels instant when you go under that limit.
The general rule of thumb is that the maximum attention span of a web surfer is 10 seconds*. Anything longer and they'll use another site.
*note: I read that somewhere but can't remember where.
