Website performance testing automated tools/frameworks - JavaScript

I have a website that uses AJAX heavily to communicate with the server. Now I want to do performance and stress testing using automatic scripts. Do you have any recommendations?
The functionality might be: given a URL, hook into the page-ready callback; in the callback I can emulate a "click" on some button using the button's id property.
Thanks.

There are a bunch of tools you can use to automate the UX of your site to make sure that things work fine. I'll break them down arbitrarily.
The ones that come to mind are Sahi and Selenium. These allow you to automate clicking, submitting forms, etc., much like GUI testing tools do, and test your application.
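For instance, with Selenium's JavaScript bindings (the selenium-webdriver package on npm), a minimal sketch of what the question asks for, loading a page and clicking a button by its id, could look like this (the URL and the button id are placeholders, not real values):

const { Builder, By, until } = require('selenium-webdriver');

(async () => {
  // Start a real browser; any supported browser name works here.
  const driver = await new Builder().forBrowser('firefox').build();
  try {
    await driver.get('http://yoursite.example/page-under-test');
    // Wait until the AJAX-rendered button exists, then emulate the click.
    const button = await driver.wait(until.elementLocated(By.id('submit-btn')), 10000);
    await button.click();
  } finally {
    await driver.quit();
  }
})();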
Mechanize (the original Perl version, plus the Ruby and Python ports) is used to write scripts that interact with your website to simulate a user. These aren't "GUI" based, so they don't rely on a browser; this limits what you can do with JavaScript. Another similar tool (although I don't have personal experience with it) is Watir.
If you want to hammer your website (i.e. performance testing), the only thing I've come across is ApacheBench (ab). It can generate reports on how much raw traffic your site can take before it comes crashing down. Assuming your callbacks are not stateful, you can use this to hammer them.
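For reference, a typical ApacheBench invocation looks something like this (the URL is a placeholder for one of your AJAX endpoints):

ab -n 1000 -c 10 http://yoursite.example/ajax/endpoint

This fires 1000 requests with 10 concurrent connections and reports throughput (requests per second) along with the percentage of requests served within given times.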

use Selenium...

Related

analyse a .aspx site with paging with __doPostBack function

I want to analyse some data on a webpage, but here's the problem: the site has multiple pages which are loaded via a __doPostBack function.
How can I "simulate" going one page further, analyse that page, and so on?
At the moment I analyse the data with jsoup in Java, but I'm open to using some other language if necessary.
A postback-based system (.NET, Prado/PHP, etc.) works by keeping a complete snapshot of the browser contents on the server side. This is called a page state. Any attempt to manipulate the page with a client that is not JavaScript-capable is almost sure to fail.
What you need is a JavaScript-capable browser. The easiest solution I found is to use the framework Firefox is written in, XUL, to create such a desktop application. What you do is basically create a desktop application with a single browser element in it, which you can then script from the application itself without the restrictions of the security container. Alternatively, you could also use the Greasemonkey plugin to do your bidding. The latter is a bit easier to get started with, but it's fairly limited since it runs on a per-page basis.
With both solutions you then have access to the page's DOM to gather data and you can also fire events (like clicking on a button). Unfortunately you have to learn JavaScript for this to work.
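To make that concrete, here is a rough sketch of the JavaScript you might run inside the page once you have a scriptable browser. The control name 'GridView1' and the 'Page$2' argument are assumptions; read the real values from the pager links' href attributes. Because a Greasemonkey-style script runs again on every page load, scraping and then triggering the postback walks through the pages one by one.

// Scrape the rows currently shown (the selector is an assumption).
document.querySelectorAll('#GridView1 tr').forEach(function (row) {
  console.log(row.textContent.trim()); // collect your data here
});
// __doPostBack is the function ASP.NET itself emits into the page;
// calling it with the pager's arguments posts back and loads the next page.
__doPostBack('GridView1', 'Page$2');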
I used an automation library, Selenium, which you can use from a lot of languages (C#, Java, Perl, ...).
For more information on how to get started, this link is very helpful.
As well as Selenium, you can use http://watin.org/

Is it possible to use node.js as a faster, more elegant, database-enabled alternative to Greasemonkey?

I was using Greasemonkey earlier in the week to automate some calls to a page to scrape some data from a website. This was awkward for two reasons:
It's GUI based instead of command-line based.
I had to store all persisted information in JSON, and not directly in a database.
Would it be possible to use node.js as a Greasemonkey alternative, since node.js can store records directly in a database and won't be required to visually load pages the way Greasemonkey does?
Also, I would think that node.js would be easier to work with, since you don't have to redeploy its scripts to Firefox the way you have to with Greasemonkey, allowing you to easily use version control on separate scripting projects.
On the other hand using node.js to do GreaseMonkey's job might just be using a hammer to pound in a screw, so I thought I would check here to find out if I am mistaken.
On the other hand using node.js to do GreaseMonkey's job might just be using a hammer to pound in a screw
I would say that the opposite is true; I believe you're using Greasemonkey to do the job of a server-side processing library. Greasemonkey runs in the browser and is designed to modify your web experience by running scripts on the pages you visit.
Indeed, I believe Node.js would be very well suited to this task. With libraries like jsdom and node-jquery, you can easily do JavaScript parsing over the DOM. You may also wish to take a look at node.io, a "distributed data scraping and processing framework." Finally, you may look into non-Node (but still JavaScript) based tools, such as PhantomJS and CasperJS, which can do scraping, DOM manipulation, screenshots, and more.
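As a taste, here is a minimal scraping sketch with jsdom (this uses the JSDOM.fromURL API of recent jsdom releases; the URL and the selector are placeholders):

const { JSDOM } = require('jsdom');

JSDOM.fromURL('http://example.com/list').then(function (dom) {
  // Standard DOM APIs work on the parsed document.
  dom.window.document.querySelectorAll('.item').forEach(function (el) {
    console.log(el.textContent.trim()); // or insert into your database
  });
});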
The question is a bit of a non sequitur.
Greasemonkey is for clients to tweak their individual browsing experience, client-side.
Node.js is for developers to deliver applications to the masses (hopefully), server-side.
For scraping data, in an automatable way, use Node.js or some server-side library (Python works well).
For "Mashups" of webpages that you browse, use Greasemonkey.

One page only JavaScript applications

Have you experimented with single-page web applications, i.e. where the browser only 'GETs' one page from the server, the rest being handled by client-side JavaScript code (one good example of such an 'application page' is Gmail)?
What are some pros and cons of going with this approach for simpler applications (such as blogs and CMSs)?
How do you go about designing such an application?
Edit: As mentioned in the responses, one difficulty is handling the back button, the refresh button, and bookmarking/copying the url. The latter can be solved using location.hash; any clue about the remaining two issues?
I call these single page apps "long lived" apps.
For "simpler applications" as you put it it's terrible. Things that work OOTB for browsers all of a sudden need special care and attention:
the back button
the refresh button
bookmarking/copying url
Note I'm not saying you can't do these things with single-page apps, I'm saying you need to make the effort to build them into the app code. If you simply had different resources at different urls, these work with no additional developer effort.
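For illustration, a minimal hash-based routing sketch that restores all three behaviors could look like this (renderView stands in for whatever your app uses to show a screen):

function route() {
  // Default to a home view when there is no hash yet.
  var view = location.hash.replace(/^#\/?/, '') || 'home';
  renderView(view); // hypothetical app function
}
window.addEventListener('hashchange', route); // back/forward buttons
window.addEventListener('load', route);       // refresh and bookmarked urls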
Now, for complex apps like Gmail or Google Maps, the benefits are:
user-perceived responsiveness of the application can increase
the usability of the application may go up (e.g. scrollbars don't jump to the top of a new page when the user clicks on what they thought was a small action)
no white screen flicker during the HTTP request->response
One concern with long-lived apps is memory leaks. Traditional sites that request a new page for each user action have the added benefit that the browser discards the DOM and any unused objects, so that memory can be reclaimed. Newer browsers have different mechanisms for this, but let's take IE as an example. IE will require special care to clean up memory periodically during the lifetime of the long-lived app. This is made somewhat easier by libraries these days, but it is by no means a triviality.
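As a small illustration of the kind of cleanup meant here: old versions of IE leaked when a DOM node and a closure referenced each other, so a long-lived app had to break the cycle explicitly when tearing a widget down (the element id and function names below are made up):

function attachWidget() {
  var el = document.getElementById('widget');
  el.onclick = function () {
    el.className = 'clicked'; // the closure captures el: a circular reference
  };
  return function detachWidget() {
    el.onclick = null; // break the DOM <-> closure cycle
    el = null;         // allow the element's memory to be reclaimed
  };
}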
As with a lot of things, a hybrid approach is great. It allows you to leverage JavaScript for lazy-loading specific content while separating parts of the app by page/url.
One pro is that you get the full presentation power of JavaScript, as opposed to non-JavaScript web sites where the browser may flicker between pages and similar minor nuisances. You may notice lower bandwidth use as well, as a result of only dealing with the immediately important parts that need to be refreshed, instead of getting a full web page back from the server.
The major con behind this is the accessibility concern. Users without JavaScript (or those who choose to disable it) can't use your web site unless you do some serious server-side coding to determine what to respond with depending on whether the request was made using AJAX or not. Depending on what (server-side) web framework you use, this can be either easy or extremely tedious.
It is not considered a good idea in general to have a web site which relies completely on the user having JavaScript.
One major con, and a major complaint of websites that have taken AJAX perhaps a bit too far, is that you lose the ability to bookmark pages that are "deep" into the content of the site. When a user bookmarks the page they will always get the "front" page of the site, regardless of what content they were looking at when they made the bookmark.
Maybe you should check out SproutCore (Apple used it for MobileMe) or Cappuccino; these are JavaScript frameworks for exactly that: designing desktop-like interfaces that only fetch responses from the server via JSON or XML.
Using either for a blog won't be a good idea, but a well designed desktop-like blog admin area may be a joy to use.
The main reason to avoid it is that taken alone it's extremely search-unfriendly. That's fine for webapps like GMail that don't need to be publically searchable, but for your blogs and CMS-driven sites it would be a disaster.
You could of course create the simple HTML version and then progressively enhance it, but making it work nicely in both versions at once could be a lot of work.
I was creating exactly this kind of page as webapps for the iPhone. My method was to really put everything in one huge index.html file and to hide or show certain content. This showing and hiding, i.e. the navigation of the page, I control in a dedicated JavaScript file containing the functions that handle the display of the different parts of the page.
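A stripped-down sketch of such a navigation function could look like the following (the section ids and class name are assumptions):

function showSection(id) {
  // Hide every content section, then reveal the requested one.
  document.querySelectorAll('.section').forEach(function (s) {
    s.style.display = 'none';
  });
  document.getElementById(id).style.display = 'block';
}
// e.g. wired to a navigation link: showSection('inbox');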
Pro: Everything is loaded in the beginning and you don't need to request anything from the server anymore, e.g. "switching" content and performing actions is very fast.
Con: First, everything has to load... that can take its time, if you have a lot of content that has to be shown immediately.
Another issue is that if the connection goes down, the user will not really notice until he actually needs the server side. You can notice that in Gmail as well. (It can sometimes be a positive thing, though.)
Hope it helps! greets
Usually, you will take a framework like GWT, Echo2 or similar.
The advantage of this approach is that the application feels much more like a desktop app. When the server is fast enough, users won't notice the many little data packets that go back and forth. Also, loading a page from scratch is an expensive operation. If you just modify parts of it, the browser can keep a lot of the existing model in memory and just change the parts that changed.
Another advantage of these frameworks is that you can develop your application in pure Java. This means you can debug it in your IDE just like any other Java app, you can write unit tests and run them automatically, etc.
I'll add that on slower machines, a con is that a large amount of JavaScript will bring the browser to a screeching halt. Since all the rendering is done client-side, if the user doesn't have a higher-end computer, it will ruin the experience. My work computer is a P4 3.0 GHz with 2 GB of RAM, and JavaScript-heavy sites cause it to chug along slower than molasses, which really kills the user experience for me.

When NOT to use AJAX in web application development? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 6 years ago.
I'm building a web application with the Zend Framework. I have wanted to include some AJAX type forms and modal boxes, but I also want my application to be as accessible as possible. I want my application to be enhanced by AJAX, but also fully functional without AJAX.
So as a general guideline...when should I not use AJAX? I mean, should I bother making my application usable without AJAX? Or does everyone have AJAX enabled browsers these days?
If you mean "accessible" in the ADA sense, AJAX is usually a no-no: your site should provide all its content and core functionality using only standard (X)HTML and CSS. Any JavaScript used should merely extend the core functionality, and your site should be coded to work elegantly in the absence of a JavaScript-enabled browser.
Examples: if you want a user to click on a thumbnail and get a full-size version of the image as a result, you can make the thumbnail a link. Then the onclick event fires a jQuery method that cancels the navigation behavior of the link and pops up a jQuery floating div to show the image on the current page. If the user's browser doesn't support JavaScript, the onclick event never fires, and the user is presented with the image on a new page. The core functionality is the same with or without scripting.
EDIT: Skeleton example, sans jQuery-specific code.
<html>
<body>
<!-- showImage stands for whatever script method pops up the floating div;
     returning its result cancels the link's normal navigation -->
<a href="full-size.jpg" onclick="return showImage(this.href);">Some URL</a>
</body>
</html>
To cancel the navigation operation, simply make sure that the method invoked by the onclick event returns false at the end.
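A rough sketch of what such a method could look like with jQuery (the function and class names are illustrative, not from the original answer):

function showImage(src) {
  // Pop up a floating div containing the full-size image.
  $('<div class="popup"></div>')
    .append($('<img>').attr('src', src))
    .appendTo('body');
  return false; // returned to onclick, cancelling the link's navigation
}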
A neat example of the jQuery image popup I described can be found here.
Use ajax if it adds value for the user.
If the AJAX version adds a lot more value than the non-AJAX version, then it might justify the expense of developing a solution that caters to both kinds of client. Generally I wouldn't recommend doing the extra work (remember: more code results in more maintenance).
I think one point is missing here: use AJAX only for content that search engines do not need to know about.
98% of users will have AJAX enabled browsers.
A significant percentage of those people won't have it turned on when they first visit your site though (or at all, ever perhaps).
I've seen websites that look like a blank page without JavaScript on. Don't be one of them. JavaScript to fix layout issues is a horrible idea, in my opinion. Make sure the page loads and looks OK without JavaScript. If people can at least see what they are missing out on, they are likely to switch it on; but if your website just looks broken, then...
I often have NoScript block Flash and JavaScript until I decide that your site is worthy.
So be sure to tell me what I'm missing if I have JavaScript turned off.
It depends on the complexity of your web application.
If you can, having it functional with JavaScript disabled is great, because it makes your application usable not only by users with JS-disabled browsers but also by robots. The day you decide to write an application to automatically fill in your forms, for example, you won't have to write an API from the ground up.
In any case, do not use AJAX for EVERYTHING! I have just inherited a project that basically consists of a single page populated by a ton of AJAX calls, and I can tell you that just thinking about it gives me physical pain. I guess the original developer didn't like the concept of using the back/forward buttons in the browser as a means of navigation.
Unless you are targeting mobile devices or other non-standard web users, you can be fairly sure that the vast majority has Javascript enabled, because most major sites (including SO) rely heavily on it.
I want my application to be as accessible as possible.
You can do things like rendering your modals and forms as a page that can operate standalone.
The AJAX version pulls the template into a modal/container; for the standalone version, the server checks whether it's an AJAX request and, if not, renders the page including the header/footer (this can be served from the same URL if planned well).
The AJAX version intercepts the submit and does the submission via AJAX, then shows an inline thank-you; the non-AJAX version opens a thank-you page. Once again, you can likely use the same pages for each of these functions if they are thought out correctly.
Reusing templates and URL's helps avoid additional maintenance for the AJAX/non-AJAX versions.
I want my application to be enhanced by AJAX, but also fully functional without AJAX.
Thinking through the structure of your URLs and templates can go a long way towards this: if you make most of your AJAX requests pull in completely rendered templates (as opposed to just data), then you can usually use the same URL to serve both versions. You just serve only the guts of the modal/form to the AJAX request, and the entire page to a regular request.
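A hedged sketch of that pattern on the client side (the selectors are assumptions; jQuery's $.ajax sends an X-Requested-With: XMLHttpRequest header by default, which many server frameworks use to detect AJAX requests):

$('form.enhanced').on('submit', function (e) {
  e.preventDefault(); // intercept: the non-JS fallback is a normal POST
  $.ajax({
    url: $(this).attr('action'),
    method: 'POST',
    data: $(this).serialize()
  }).done(function (html) {
    $('#modal').html(html).show(); // the server returned just the guts
  });
});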
When should I not use AJAX?
You should not use AJAX if doing so will cause a poor experience for a significant portion of your user base (there are of course techniques that can be used to mitigate this)
You should not use AJAX if the development time associated with implementing it will be too significant to justify the improvements in user experience
You should not use AJAX for content which has significant SEO value without implementing an appropriate fallback that allows it to be indexed (Crawlers are improving constantly but it's still a good idea)
I mean, should I bother making my application usable without AJAX? Or does everyone have AJAX enabled browsers these days?
I'd say a lot of the time it's unnecessary, as the vast majority of users will have AJAX-enabled browsers. But there are scenarios where it's critical, such as SEO optimization, or when a large portion of your user base is likely to use browsers that are less likely to support JavaScript, or likely to have JavaScript/AJAX disabled.
A few examples of these scenarios:
A website for a company or government that uses an outdated browser as standard
A website where a large portion of the users have disabilities that AJAX may aggravate; for example, vision- or motor-skill-impaired users may be negatively affected by content updating via AJAX, especially if it happens rapidly.
A site regularly accessed via a less common device or browser, where AJAX would negatively impact a large portion of users.
So what should I do?
Think about who is going to be using the site, how they're going to access it, and what they're going to access it with. Also try to think about not just the present but also the future.
Design the site in a manner that will cater to the majority of these users.
Think about who will gain and who will lose based on the decision to use AJAX. If in doubt, look at your analytics data to help weigh up the decision; if you lack the data, it may be worth updating your tracking and obtaining a sample first.
Ask whether the decision to use AJAX contradicts any core requirements of the project.
Use AJAX to enhance content where possible, as opposed to making it mandatory, i.e. the content should work with or without JS/AJAX.
Consider the additional development time involved with the use of AJAX (if any)
My experience is, we should use ajax after it works without it. For a couple of reasons.
First, if something breaks in the ajax, and you don't have it working without it, the site simply doesn't work. For example, a product list with pagination. It should work with the links alone, then use ajax when possible.
Second, for site indexing and accessibility. If it works without ajax, it's better.
And it's easier to break something (even if only for a few moments): a bad piece of code, an uncaught exception, an external library that fails to load, a blocking browser extension, ...
After everything works without AJAX, it's quite easy to add it. Just have the AJAX handler catch the action and add ajax=1; when returning the result, return only what you need if ajax=1, otherwise return everything.
In the product list example, I would return only the products and the pagination HTML, and add them to the correct div. If AJAX stops working, the whole page is loaded and the customer sees the second page as it loads.
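A small sketch of the product-list example under those conventions (ajax=1 comes from the answer; the selectors are made up):

$(document).on('click', '.pagination a', function (e) {
  e.preventDefault();
  var sep = this.href.indexOf('?') >= 0 ? '&' : '?';
  // The server sees ajax=1 and returns only the products + pagination html.
  $('#product-list').load(this.href + sep + 'ajax=1');
});
// If this script never runs, the links still work as full page loads.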
AJAX adds a lot of value to the UX. If done right, the user gets a great feel when using the site, and better data usage, because it doesn't load the whole page every time.
But the question being "when not to use ajax", I would say, you should always count on it to improve UX but not rely on it for the site to work (as other users also mentioned). And nowadays we need both, great code and great user experience.
My practice is to use two main pages, let's say index.py and ajax.py. The first one is responsible for generating the full website and is the default target of forms. The other one generates only the output specific to the corresponding AJAX query. The logic behind both of them is the same; only the method of generating the output differs a bit.
In jQuery I simply change the action parameter when sending a request. It works both with and without AJAX, although it's been a long time since I've seen someone with JS and AJAX disabled.
I like the thought of coding your application without JavaScript / Ajax and then adding it in later to enhance the UI without depriving users of functionality just because they don't have JavaScript enabled. I read about this in Pro ASP.NET MVC but I think I've seen it elsewhere in reading about unobtrusive JavaScript.
You should not bloat your service with Web 2.0 effects like accordions, modal forms, image zoomers, etc.
Use modern tech smartly (AJAX is one example) and your users will be happy. Do not fear AJAX: it is a very good thing for making the user experience smooth. But don't do things because you like them; do them because your users need them ;)
When you want to make a website that looks like a website, not a fugly imitation of a desktop app?
You should not use AJAX or JavaScript in cases where:
your system needs to be accessible
your system needs to be search friendly
However, by using a modern JS framework with some solid "unobtrusive" practices, you can progressively enhance pages so that they remain accessible and search-friendly while offering a slick UI to users.
This totally depends on the type of application or feature you're developing. If it is crucial that the application is accessible despite the absence of JavaScript, then it helps to have fallback methods (i.e. alternative forms) that let your users use said functionality/feature. For that, you will need to invest some of your time developing methods for collecting information not just with client-side scripts but also on the server side.
For miscellaneous features that only serves to enhance user experience, it's mostly not worth it to develop fallback methods.
There's no reason to totally not use AJAX. AJAX helps minimize your traffic after all.
You can, if you wish, always use AJAX and update the history state using pushState, or, for broader compatibility, use the hash for non-HTML5-compliant browsers.
With this you can have your server load a page, then have JavaScript read location.hash and resume the state of the application based on the state of the hash.
For example: I go to /index.html and click on something, say a client, to open the client view; you change the hash to #/view/client/{client_id}/. Then, if I reload or go back using the browser, the hash changes, and you can use the onhashchange event to capture it and match the site's state to the new hash. The same works if I bookmark a certain state.
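A minimal sketch of the pushState variant, following the URL pattern above (openClientView is an assumed application function):

function showClient(clientId) {
  openClientView(clientId); // hypothetical: renders the client view
  history.pushState({ clientId: clientId }, '', '/view/client/' + clientId + '/');
}
window.addEventListener('popstate', function (e) {
  // Fired on back/forward: restore the view recorded for that entry.
  if (e.state && e.state.clientId) {
    openClientView(e.state.clientId);
  }
});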
A couple of other scenarios where one may be better off NOT using AJAX:
Letting someone log into the web application. Use a traditional form submit instead.
Searching for and returning more than a few hundred rows from the database. Either break the process down or let the server-side language handle it.

How do I use Mechanize to process JavaScript?

I'm connecting to a web site, logging in.
The website redirects me to new pages and Mechanize deals with all the cookie and redirection jobs, but I can't get the last page. I used Firebug, did the same job again, and saw that there are two more pages I have to get past with Mechanize.
I took a quick look at the pages and saw that there is some JavaScript and HTML code, but I couldn't understand it because it doesn't look like normal page code. What are those pages for? How can they redirect to other pages? What should I do to get past them?
If you need to handle pages with JavaScript, try WATIR or Selenium; those drive a real web browser and can thus handle any JavaScript. WATIR Classic requires either IE or Firefox with a certain extension installed, and you will see the pages flash on the screen as it works.
Your other option would be understanding what the Javascript on the offending page does and bypassing it manually, but that seems onerous.
At present, Mechanize doesn't handle JavaScript. There's talk of eventually merging Johnson's capabilities into Mechanize, but until that happens, you have two options:
Figure out the JavaScript well enough to understand how to traverse those pages.
Automate an actual browser that does understand JavaScript using Watir.
What are those pages for? How can they redirect to other pages? What should I do to get past them?
Sometimes work is done on those pages. Sometimes the JavaScript is there to prevent automated access like what you're trying to do :). A lot of websites have unnecessary checks to make sure you have a "good" browser, so make sure that your user_agent is set to something common, like IE. Sometimes setting the user_agent to look like an old browser will let you get past without JavaScript.
Website automation is fun because you have to outsmart the website and its software developers, using multiple strategies. Like the others said, Watir is the best tool for getting past JavaScript at the moment.
