Recording website load times? - javascript

I am trying to record the average time it takes to load my website (say, over 10 runs) from various locations around the world. I was thinking of using a list of proxies to achieve this, but am not sure it is the best way of doing it.
Is there a Firefox add-on that lets me time this perhaps using Firebug itself?
Is there an alternate way of running this test?
Any tips from the testing community would be awesome.

"Net" tab in Firebug
Google PageSpeed
Yahoo YSlow
You can also add a snippet of JavaScript to your pages, and your clients can report their page render speed:
http://blog.yottaa.com/2010/10/how-to-measure-page-load-time-with-google-analytics/
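For example, a minimal sketch using the browser's Navigation Timing API; the /log-timing endpoint is a placeholder for whatever collector you use (the article above reports to Google Analytics instead):
window.addEventListener('load', function () {
  // Defer one tick so loadEventEnd is populated.
  setTimeout(function () {
    var t = performance.timing;
    var loadTimeMs = t.loadEventEnd - t.navigationStart;
    // Fire-and-forget beacon to a hypothetical collection endpoint.
    new Image().src = '/log-timing?ms=' + loadTimeMs;
  }, 0);
});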

Try Google Page Speed for Firefox.

I did timings using the unix/linux command 'wget' (or 'curl'). What I would do was:
time wget -O - 'http://mycoolwebsite.com' > index.html
time curl 'http://mycoolwebsite.com' > index.html
(Note that wget saves to a file by default; -O - writes the page to stdout so it can be redirected.) I did this for a mostly text-based web server that performed an action in PHP before returning a value. There are options to fetch the images too, I think (wget --page-requisites).
Hope this helps.

I couldn't recommend Fiddler more. While it doesn't check performance "around the world", it does make guesses based on typical latency. Plus, you get the actual load time (don't confuse this with render time) as done in an actual browser. Lots of web-based tools will download all the files for your page, but due to scripting and other things, they will miss tons. Fiddler catches everything.

There is also InternetSupervision, though I'm not sure how accurate it really is.

pingdom.com has a freemium solution for this. One site for free, more sites will cost you.

You will want to use an external service for this. The service may have a Firefox plugin you can use, but ultimately you want to have your queries run from multiple controlled testing servers that are built to test your load in isolation from other variables.

Related

Can I create a listener in JS instead of repeatedly requesting a URL?

I'm working on a tool that will require 'listening' for a response from the server.
Currently I've got the page using jQuery to request a URL and respond based on its output.
I do that every couple of seconds.
However, as there will likely be hundreds of people using the tool all at the same time, that could be a pretty big server load.
Is there a way I can create a 'listener' that will notify the loaded pages when a change happens instead of constantly querying the server?
I haven't really been able to find much on Google (probably not searching for the correct thing) so hopefully someone here will know exactly what I'm talking about.
Thanks in advance for your quick responses!
You are looking for technologies named Comet or server push. There are several different implementations of this pattern, typically involving long-running but idle HTTP connections. Check out Atmosphere (in Java) or various other libraries.
Also make sure to have a look at WebSockets (a new HTML5 technology).
See also
COMET javascript library
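For a concrete feel for long polling (the Comet technique most of these libraries build on), here is a minimal client-side sketch; the /poll URL and handleUpdate function are placeholders of mine:
function listen() {
  $.ajax({
    url: '/poll',               // hypothetical endpoint; the server holds the
    timeout: 30000,             // request open until it has new data
    success: function (data) {
      handleUpdate(data);       // your page-update logic
      listen();                 // immediately reopen the connection
    },
    error: function () {
      setTimeout(listen, 5000); // back off briefly after errors/timeouts
    }
  });
}
listen();
Because the server answers only when something changes, hundreds of idle clients cost far less than hundreds of clients polling every couple of seconds.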

Scraping dynamically generated HTML inside an Android app

I am currently writing an Android app that, among other things, uses text information from websites which I do not own. In addition, some of the pages require authentication.
For some pages I have been able to log in and retrieve the HTML using BasicNameValuePairs and an HttpClient with its associated objects.
Unfortunately, these methods retrieve the page source without running any JavaScript that a browser (even Android's WebView) would normally run. I need the text that some of these scripts retrieve.
I've done my research, but everything I've found is guesswork and extremely confusing. I'm okay with ignoring pages that require login for now. Also, I am willing to post any code that may be useful for constructing a solution; it is an independent project.
Any concrete solutions for scraping the HTML result of JavaScript calls? An example would be absolutely top-notch.
Final Success:
Rhino. Used this jar file.
Other Things I Tried:
HttpClient provided by Android
Cannot run JavaScript
HtmlUnit
4 hours, no success. Also huge: it added 12 MB to my APK.
SL4A
Finally compiled. Used THIS guide to set up. Abandoned as overkill compared to a simple Rhino jar.
Things That Might Work:
Selenium
Further results will be posted. Other people's results will be added if posted.
Note: many of the options listed above reference each other. I think Rhino is included in both SL4A and HtmlUnit. Also, I think HtmlUnit contains Selenium.
The aforementioned solutions are very slow and restrict you to one URL (well, not really, but I dare you to scrape 10 URLs with Rhino while your user is impatiently waiting for results).
An alternative is to use a cloud scraping solution. You get the benefit of not wasting phone bandwidth on downloading content you won't use.
Try this solution: Bobik Java SDK
It gives you the ability to scrape up to hundreds of sites in a matter of seconds

Do you have any idea how Google Docs' JavaScript does the interval data autorefresh?

Alright, here it goes:
I'm currently implementing software which auto-refreshes/auto-pulls/auto-reloads data via AJAX, to keep the screen live.
This is actually working, but I know I've used the simplest approach, which is:
setInterval (JavaScript)
Call the refresh method over and over, every n seconds.
Read the JSON data, rebuild the HTML and update it.
This can also be done by just calling setTimeout (JavaScript) at the end of the AJAX request.
In the refresh method I internally check that it's not being called simultaneously, etc.
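To make that variant concrete, here is a minimal sketch; the /data URL and rebuildHtml are placeholders of mine, not from the question:
function refresh() {
  $.getJSON('/data', function (json) { // hypothetical endpoint returning JSON
    rebuildHtml(json);                 // your existing render routine
    setTimeout(refresh, 5000);         // schedule the next poll only when done
  });
}
refresh();
Unlike setInterval, the next poll is scheduled only after the current one completes, so slow responses cannot overlap or pile up.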
However, this is the simplest approach. It works, but on slow computers, in Firefox and IE, I can see this activity sometimes freezing the browser. I wondered how "intensive" the JavaScript operation is overall, but after running a profiler the JavaScript (using jQuery, by the way) seems to be fine. Also, if I disable the autorefresh, the browser doesn't freeze for short stretches on slow computers.
I decided to investigate how several of the major AJAX applications out there work.
Facebook, for instance: they make a request all the time, every N seconds, interpret the JSON and update the screen. But with Google Docs I can't seem to find any requests. Is this because they tell the JavaScript debugger engine not to log their requests, or are they using another approach to the refresh dilemma?
I read in another answer here at Stack Overflow that Google Docs keeps an open connection.
Can this be the answer? http://ajaxpatterns.org/HTTP_Streaming
What do you guys know about this?
Just as a side note, the application I'm developing is meant to be accessed by thousands of users at a time. I know the JavaScript refresh routine tells only a small part of the story, but the server-side application and the database are currently supporting such a load, according to the stress tests I did using several thousand virtualized stations. I just want to know what you think about the client browser problem specifically.
Regards, and if you are still reading this, thank you for your time.
I suspect they're using WebSockets. Browser support is flaky, so your mileage may vary with this approach.
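To make that concrete, here is a minimal reconnecting WebSocket sketch; the wss:// URL and the renderUpdate function are placeholders of mine, not part of any real API:
function connect() {
  var socket = new WebSocket('wss://example.com/updates'); // hypothetical endpoint
  socket.onmessage = function (event) {
    var data = JSON.parse(event.data); // server pushes JSON when data changes
    renderUpdate(data);                // your existing page-update routine
  };
  socket.onclose = function () {
    setTimeout(connect, 5000);         // reconnect after a short delay
  };
}
connect();
One persistent connection replaces the every-n-seconds polling from the question, so nothing runs while there is no new data.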
You may also want to look at APE (ajax push engine), which is a decent implementation of long polling with a client/server architecture.
You can read up on Long Polling. But then you'll have to handle dropped connections etc.

How does Disqus work?

Does anyone know how Disqus works?
It manages comments on a blog, but the comments are all held on a third-party site. Seems like a neat use of cross-site communication.
The general pattern used is JSONP
It's actually implemented in a fairly sophisticated way (at least on the jQuery site): they defer loading the disqus.js and thread.js files until the user scrolls to the comment section.
The thread.js file contains JSON content for the comments, which is rendered into the page after it's loaded.
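As a rough illustration of the JSONP pattern itself (the URL and callback name here are made up, not Disqus's actual endpoint): the page injects a script tag, and the third-party server replies with JavaScript that calls a named function with the data.
function handleComments(data) {
  // Render the comment data however you like.
  document.getElementById('comments').textContent =
      data.comments.length + ' comments loaded';
}
var script = document.createElement('script');
script.src = 'https://example.com/comments.js?callback=handleComments';
document.body.appendChild(script);
Because script tags are exempt from the same-origin policy, this works cross-site without any server-side proxying.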
You have three options when adding Disqus commenting to a site:
Use one of the many integrated solutions (WordPress, Blogger, Tumblr, etc. are supported)
Use the universal JavaScript code (a sketch of this pattern appears below)
Write your own code to communicate with the Disqus API
The main advantage of the integrated solutions is that they're easy to set up. In the case of WordPress, for example, it's as easy as activating a plug-in.
Having the ability to communicate with the API directly is very useful, and offers two advantages over the other options. First, it gives you as the developer complete control over the markup. Secondly, you're able to process comments server-side, which may be preferable.
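For the universal JavaScript code (option 2), the snippet follows roughly this script-injection shape; 'example' is a placeholder shortname, so grab the exact current snippet from Disqus rather than copying this:
var disqus_shortname = 'example'; // placeholder: your site's Disqus shortname
(function () {
  var dsq = document.createElement('script');
  dsq.type = 'text/javascript';
  dsq.async = true;
  dsq.src = '//' + disqus_shortname + '.disqus.com/embed.js';
  (document.getElementsByTagName('head')[0] ||
   document.getElementsByTagName('body')[0]).appendChild(dsq);
})();
The async script injection keeps the comment embed from blocking the rest of the page load.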
It looks like Disqus uses the easyXDM library, which uses the best available mechanism in the current browser to communicate with the other site.
Quoting Anton Kovalyov (a former engineer at Disqus), whose answer to the same question on a different site was really helpful to me:
Disqus is a third-party JavaScript application that runs in your browser and injects itself on publishers' websites. These publishers need to install a small snippet of JavaScript code that makes the first request to our servers and loads initial JavaScript loader. This loader then creates all necessary iframe elements, gets the data from our servers, renders templates and injects the result into some element on the page.
As you can probably guess there are quite a few different technologies supporting what seems like a simple operation. On the back-end you have to run and scale a gigantic web application that serves millions of requests (mostly read). We use Python, Django, PostgreSQL and Redis (for our realtime service).
On the front-end you have to minimize your payload, make sure your app is super fast and that it doesn't break in extremely hostile environments (you will be surprised how screwed up publisher websites can be). Cross-domain communication—ability to send messages from hosting website to your servers—can be tricky as well.
Unfortunately, it is impossible to explain how everything works in a comment on Quora, or even in an article. So if you're interested in the back-end side of Disqus just learn how to write, run and operate highly-scalable websites and you'll be golden. And if you're interested in the front-end side, Ben Vinegar and myself (both front-end engineers at Disqus) wrote a book on the topic called Third-party JavaScript (http://thirdpartyjs.com/).
I'm planning to read the book he mentioned, I guess it will be quite helpful.
Here's also a link to the official answer to this question on the Disqus site.
Short answer? AJAX. You get your own URL, e.g. "site.com/?comments=ID", included via JavaScript... but with real-time updates like that you would need a polling server.
I think they keep the content on their site, and your site only sends and receives the data to/from Disqus. Now I wonder what happens if you decide that you want to bring your commenting in-house without losing all existing comments. How easily would you get to your data, I wonder? They claim that the data belongs to you, but they have control over it, and there is not much explanation on their site about this.
I'm always leaving comments on the Disqus platform. Sometimes a comment seems to be removed once you refresh, and sometimes it's not. I think the ones that are removed are held for moderation without it saying so.

How to take a screen shot of a web page?

I want to add a button to one of our web sites that will allow the user to file a bug with our bug tracking system.
One of the feature requests is that a screen cap of the page in question be sent along.
Without installing something on the end user's machine, how can I do this? Does JavaScript have some sort of screen-capture API?
You may grab the innerHTML of the page and then process it on the server:
document.getElementsByTagName('html')[0].innerHTML;
// this would also be interactive (i.e. if you've
// modified the DOM, that would be included)
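A minimal sketch of shipping that markup to your server for processing; the /bug-report endpoint is a placeholder, not part of any real API:
var snapshot = document.getElementsByTagName('html')[0].innerHTML;
var xhr = new XMLHttpRequest();
xhr.open('POST', '/bug-report'); // hypothetical endpoint on your own server
xhr.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
xhr.send('url=' + encodeURIComponent(location.href) +
         '&html=' + encodeURIComponent(snapshot));
You could then render or archive the captured HTML server-side to approximate a screenshot.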
No, JavaScript does not have anything like this.
I'm afraid this will be quite hard. I cannot think of anything that would do this without installing something on the user's computer.
I'd like to be proven wrong, but at least this is an answer for you.
Get as much info as you can about the user environment using jQuery (jQuery.support), the user agent, cookies, form input values, and the URL (for GET parameters and to know which page had the error).
Send the source of the page as mentioned by Moff.
Try serializing the DOM as it is now, so you can compare what is different from the original page.
It is also useful to send the source of the page if you need to keep it for historical purposes, since when you update the page it will become different.
I'd suggest some sort of integration with FireShot, which is a free Firefox/IE add-on.
I agree with the other answers: no dice.
However, there is a Firefox plugin, Pearl Crescent Page Saver, which might be worth looking into for related tasks.
Take a look at pagecrop (implemented with jQuery + jCrop plug-in)
I must be missing something, but can't you just...
Press PrtScr on the keyboard and paste it into an email.
See this question. Basically, no, not with JavaScript. Perhaps with ActiveX, but that's back to installing things on the client's PC.
Consider writing a server-side script that repeats the user's request exactly (assuming it's not after a POST) and stores the resulting HTML file.
You might look into a web-based solution such as Super Screenshot! or WebShotsPro.com. Depending on your needs (such as screenshots of specific areas of pages, or of pages inaccessible from the outside world) it may not work for you, but it is an idea.
Chrome plugin
https://chrome.google.com/extensions/detail/ckibcdccnfeookdmbahgiakhnjcddpki
You can also take a look at how Evernote does its screen capturing; maybe you can tie into that or create your own Chrome extension. https://chrome.google.com/webstore/detail/evernote-web-clipper/pioclpoplcdbaefihamjohnefbikjilc?hl=en
