When do you want more/less http requests? [closed] - javascript

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 7 years ago.
It seems like, to have your page load fast, you would want a series of small HTTP requests.
If it was one big one, the user might have to wait much longer to see that the page was there at all.
However, I've heard that minimizing your HTTP requests is more efficient. For example, this is why sprites are created for multiple images.
Is there a general guideline for when you want more and when you want fewer?

Multiple requests create overhead from both the connection and the headers.
It's like downloading the contents of an FTP site: one site has a single 1 GB blob, another has 1,000,000 files totalling a few MB. On a good connection, the 1 GB file could be downloaded in a few minutes, but the other is sure to take all day because the transfer negotiation ironically takes more time than the transfer itself.
HTTP is a bit more efficient than FTP, but the principle is the same.
What is important is the initial page load, which needs to be small enough to show some content to the user; additional assets outside of the user's view can then be loaded afterwards. A page with a thousand tiny images will always benefit from a sprite, because the negotiations would strain not only the connection but potentially the client computer as well.
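To make the sprite technique concrete, here is a minimal sketch; the file name and icon positions are made up. One combined image is fetched in a single request, and CSS background-position picks out each icon:

    /* icons.png is a hypothetical 64x16 strip holding four 16x16 icons */
    .icon {
        width: 16px;
        height: 16px;
        background-image: url("icons.png");
        background-repeat: no-repeat;
    }
    .icon-home  { background-position:   0    0; }
    .icon-mail  { background-position: -16px  0; }
    .icon-user  { background-position: -32px  0; }
    .icon-trash { background-position: -48px  0; }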

EDIT 2 (25-08-2017)
Another update here; some time has passed and HTTP/2 is (becoming) a real thing. I suggest reading this page for more information about it.
Taken from the second link (at the time of this edit):
It is expected that HTTP/2.0 will:
* Substantially and measurably improve end-user perceived latency in most cases, over HTTP/1.1 using TCP.
* Address the "head of line blocking" problem in HTTP.
* Not require multiple connections to a server to enable parallelism, thus improving its use of TCP, especially regarding congestion control.
* Retain the semantics of HTTP/1.1, leveraging existing documentation (see above), including (but not limited to) HTTP methods, status codes, URIs, and where appropriate, header fields.
* Clearly define how HTTP/2.0 interacts with HTTP/1.x, especially in intermediaries (both 2->1 and 1->2).
* Clearly identify any new extensibility points and policy for their appropriate use.
The third point in that list (the emphasis was mine in the original answer) explains how HTTP2 handles requests differently from HTTP1. Whereas HTTP1 will create ~8 (differs per browser) simultaneous (or "parallel") connections to fetch as many resources as possible, HTTP2 re-uses the same connection. This cuts the overall time and network latency spent creating new connections, which in turn speeds up asset delivery. Additionally, your webserver will have an easier time keeping roughly 8 times fewer connections open. Imagine the gains there :)
HTTP2 is also already quite widely supported in major browsers; caniuse has a table for it :)
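If you want to see the single multiplexed connection in action, Node.js ships an http2 module; a minimal sketch follows (it assumes a local key.pem/cert.pem pair, since browsers only speak HTTP/2 over TLS):

    const http2 = require('http2');
    const fs = require('fs');

    // Browsers require TLS for HTTP/2, so a (self-signed) key/cert pair is assumed here.
    const server = http2.createSecureServer({
        key: fs.readFileSync('key.pem'),
        cert: fs.readFileSync('cert.pem'),
    });

    // Every request arrives as a stream on the SAME connection -
    // no need for ~8 parallel sockets per origin.
    server.on('stream', (stream, headers) => {
        stream.respond({ ':status': 200, 'content-type': 'text/html' });
        stream.end('<h1>Served over one multiplexed HTTP/2 connection</h1>');
    });

    server.listen(8443);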
EDIT (30-11-2015)
I've recently found this article on the topic of 'page speed'. The post is very thorough and an interesting read at worst, so I'd definitely give it a shot.
Original
There are many answers to this question already, but here's my two cents.
If you want to build a website you'll need a few basic things in your tool belt, like HTML, CSS, JS - maybe even PHP / Rails / Django (or one of the 10000+ other web frameworks) and MySQL.
The front-end part is basically everything that gets sent to the client on every request. The server-side language calculates what needs to be sent, which is how you build your website.
Now when it comes to managing assets (images, CSS, JS) you're diving into HTTP land, since you'll want to make as few requests as possible. The reason for this is that every request carries overhead: a DNS lookup, connection setup and headers.
This per-request overhead does not dictate your entire website, of course. It's all about the balance between the number of requests and readability / maintainability for the programmers building the website.
Some frameworks, like Rails, allow you to combine all your JS and CSS files into one big combined JS file and one big CSS file before you deploy your application to your server. This ensures that (unless configured otherwise) ALL the JS and ALL the CSS used in the website get sent in one request per file.
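In Rails this is driven by a Sprockets manifest; a rough sketch of what app/assets/javascripts/application.js might contain (the required file names are placeholders):

    //= require jquery
    //= require popup
    //= require articles
    //= require_tree .

A single javascript_include_tag "application" in the layout can then serve the whole bundle as one request.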
Imagine having a popup script and something that fetches articles through AJAX. These will be two different scripts, and when deploying without combining them, each page load that includes both the popup and the article script will send two requests, one for each file.
The reason this isn't as bad as it sounds is that browsers cache whatever they can, whenever they can, because in the end browsers and the people who build websites want the same thing: the best experience for our users!
This means that during the first request your website ever answers for a client, the browser will cache as much as possible to make consecutive page loads faster in the future.
This is kind of like the browser's way of helping websites become faster.
Now when the brilliant browserologists think of something, it's more or less our job to make sure it works for the browser. Usually these sorts of things with caching etc. are trivial and not hard to implement (thank god for that).
Having a lot of HTTP requests in a page load isn't an end-of-the-world thing, since it only slows down your first request, but overall having fewer requests makes this per-request overhead show up less often and will give your users more of an instant page load.
There are also other techniques besides file-merging that you can use to your advantage: when including a JavaScript file you can mark it as async or defer.
For async, the script is downloaded in the background while the HTML parser keeps going, regardless of its order of inclusion within the HTML; the parser is only paused at the moment the download finishes, to execute the script right away.
For defer it's a bit different. It's kind of like async, but the files are executed in the correct order and only after the HTML parser is done.
Something you wouldn't want to be "async" would be jQuery, for instance: it's the key library for a lot of websites and you'll want to use it in other scripts, so using async and not knowing when it's downloaded and executed is not a good plan.
Something you would want to be "async" is a Google Analytics script, for instance: it's effectively optional for the end-user and thus should be labelled as not important - no matter how much you care about the stats, your website isn't built for you but by you :)
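In markup, the two attributes look like this (the file names are placeholders):

    <!-- Loaded and executed in order, blocking the parser: needed because other scripts depend on it -->
    <script src="jquery.js"></script>

    <!-- defer: downloaded in parallel, executed in document order after the HTML parser finishes -->
    <script src="popup.js" defer></script>
    <script src="articles.js" defer></script>

    <!-- async: downloaded in parallel, executed as soon as it arrives - fine for an optional tracker -->
    <script src="analytics.js" async></script>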
To get back to requests and blend all this talk about async and defer together: you can have multiple JS files on your page and not have the HTML parser pause to execute them - instead you can mark these scripts as defer and you'll be fine, since the user's HTML and CSS will load while the deferred scripts wait nicely for the HTML parser to finish.
This is not an example of reducing HTTP requests, but it is an example of an alternative solution should you have this "one file" that doesn't really belong anywhere except in a separate request.
You will also never be able to build a perfect website, nor will http://github.com or http://stackoverflow.com, but it doesn't matter; they are fast enough for our eyes not to notice any crazy flashing content, and those things are what truly matter to end-users.
If you are curious about how many requests are normal - don't be. It differs for every website and for the purpose of the website, though I agree some things do go over the top sometimes. It is what it is, and all we have to do is support browsers like they are supporting us - even looking at IE / Edge, since they are also improving (slowly but steadily, anyway).
I hope my story made sense to you. I did re-read the post before submitting, scouting for irregular typing and other kinds of illogical things, but couldn't find anything.
Good luck!

The HTTP protocol is verbose, so the ratio of header size to payload size makes it more efficient to have a larger payload. On top of that, this is still a distributed communication which makes it inherently slow. You also, usually, have to set up and tear down the TCP connection for each request.
Also, I have found that small requests tend to repeat data between themselves in an attempt to achieve RESTful purity (like including user data in every response).
The only time small requests are useful is when the data may not be needed at all, so you only load it when needed. However, even then it may be more performant to simply retrieve it all in one go.

You always want fewer requests.
The reason we separate JavaScript/CSS code into other files is that we want the browser to cache them, so other pages on our website will load faster.
If we have a single-page website with no common libraries (like jQuery), it's best to include all the code in your HTML.

Related

Overhead of separated javascript files in application loading time

My company is building a single page application using javascript extensively. As time goes on, the number of javascript files to include in the associated html page is getting bigger and bigger.
We have a program that minifies the javascript files during the integration process, but it does not merge them, so the number of files is not reduced.
Concretely this means that when the page is getting loaded, the browser requests the javascript files one by one, initiating an HTTP request each time.
Does anyone have metrics or some sort of benchmark that would indicate to what extent the overhead in requesting the javascript files one by one is really a problem that would require merging the files into a single one?
Thanks
It really depends on the number of users and connections allowed by the server and the maximum number of connections of the client.
Generally, a browser can do multiple HTTP requests at the same time, so in theory there shouldn't be much difference in having one javascript file or a few.
You don't only have to consider the javascript files, but the images too of course, so a high number of files can indeed slow things down (if you hit the maximum number of simultaneous connections from server or client). So in that regard it would be wise to merge those files.
@Patrick already explained the benefits of merging. There is, however, also a benefit to having many small files. Browsers by default give you a maximum number of parallel requests per domain. It should be 2 by the HTTP standard, but browsers don't follow that anymore. This means that requests beyond that limit wait.
You can use subdomains and redirect requests from them to your server. Then you can code the client in such a way that it uses a unique subdomain for each file. Thus you'll be able to download all files at the same time (requests won't queue), effectively increasing performance (note that you will probably need more static file servers to handle the traffic).
I haven't seen this being used in real life, but I think it's an idea worth mentioning and testing. Related:
Max parallel http connections in a browser?
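A rough sketch of that sharding idea (the shard hostnames are hypothetical and would each have to resolve to your static file servers):

    // Spread assets across a few static subdomains so the per-domain connection
    // limit applies to each shard separately instead of to one host.
    var SHARDS = ['static1.example.com', 'static2.example.com', 'static3.example.com'];

    function shardedUrl(path) {
        // Deterministic hash so the same file always maps to the same shard,
        // otherwise you would defeat the browser cache.
        var hash = 0;
        for (var i = 0; i < path.length; i++) {
            hash = (hash * 31 + path.charCodeAt(i)) % SHARDS.length;
        }
        return 'http://' + SHARDS[hash] + path;
    }

    var img = new Image();
    img.src = shardedUrl('/images/logo.png');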
I think that you should have a look at your app architecture more than at what is out there.
But this site should give you a good idea: http://www.browserscope.org/?category=network
Browsers and servers may have their own rules, which can differ. If you search for HTTP request limits, you will find a lot of posts. For example, the max HTTP request limit is per domain.
But speaking a bit about software development, I like the component-based approach.
You should group your files per component. Depending on your application requirements, you can load the mandatory components first and lazy-load the less-needed ones, or load them on the fly. I don't think you should download the entire app if it's huge and has a lot of different functionality that may or may not all be used by your users.
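A sketch of that kind of on-demand component loading, using the dynamic import() available in current browsers (the module name and its export are made up; at the time of this thread you would have used an AMD loader or script injection instead):

    // Core components load with the page; the rarely-used report builder
    // is only fetched when the user actually opens it.
    document.getElementById('open-report').addEventListener('click', function () {
        import('./reportBuilder.js')
            .then(function (module) {
                module.buildReport();   // assumed export of the lazy-loaded component
            })
            .catch(function (err) {
                console.error('Failed to load the report component', err);
            });
    });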

Why write modular javascript? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
The benefits of well-factored and modular code, in my understanding are re-usability and organization. Code written in a big chunk all in one file is difficult to read, and re-using small portions of the code requires careful copy-pasting, rather than include statements.
In particular, with regards to Javascript, I came across an example recently that got me thinking about this. A comment was made on SO to the effect that, if you are not including your javascripts conditionally on a page-by-page basis, this "represents a failure to modularize JS code properly". However, from a code re-use and organization point of view, there is no reason to consider what happens at page load time. The code will be just as readable if it is written in a bunch of separate files and then mashed together and minified before being served. The rails asset pipeline, for example, does just this.
When I first encountered the asset pipeline, my mind reeled and I started wondering "how do I make javascripts load only when needed?" I read a few SO questions and an article on the matter, and began to think that maybe I shouldn't worry about what happens to my code after it "compiles".
Is the purpose of writing modular code purely a human-level activity? Should we stop worrying about modularity after the code starts running? In the case of Javascript, should we be concerned that our scripts are being mashed together before being included?
I think the one thing that you are not really considering here with regard to performance is actual browser download behavior. I believe you have to walk a fine line between only delivering the javascript needed on a page-by-page basis and leveraging browser caching and download behavior.
For example, say you have 20 different javascript snippets that are going to be used on every page. In this case it is a no-brainer to compile/minify them into a single file, as the fewer files your browser needs to download, the better. This single file would also be cacheable, assuming it is a static file or appears to be static (via the headers sent) if it is dynamically compiled.
Now say that of those 20 snippets, 15 are used on every page and the others are used intermittently. Of course you put all 15 of the always-used snippets into a single file. But what about the others? In my opinion you need to consider the size and frequency of use of the files. If they are small and used relatively frequently, I might consider putting them into the main file, with the thought that the extra size in the main file is outweighed by avoiding an additional request to download the content later. If the code is large, I would tend to only load it where necessary. Of course once it is used, it should remain in cache.
This approach might best be suited for a web application where users are expected to typically have multiple page loads per session. Of course, if you are designing an advertising landing page or something where the user may only see that single page, you might lean toward keeping the initial javascript download as small as possible and only loading new javascript as necessary based on user interaction.
Every aspect of this question boils down to "it depends".
Are you writing an enterprise-level application, which results in 80,000 lines of code, when you stuff it all together?
If so, then yes, compilation time is going to be huge, and if you stuff that in the <head> of your document, people are going to feel the wait time.
Even if it's already cached, compile time alone will be palpable.
Are you writing dozens of widgets which might never be seen by an end-user?
Especially on mobile?
If so, then you might want to save them the download/compile time, and instead load your core functionality, and then load extra functionality on-demand, as more studies are showing that the non-technical end-user expects their mobile-internet experience to be similar to their desktop experience, not only in terms of content, but in general wait-times.
Fewer and fewer people are willing to accept 5s-8s for a mobile experience (or a desktop experience on mobile) to get to the point of interactivity, just based on the "well, it's mobile, so it'll take longer" train of thought.
So again, if you've got an 80,000 line application, or a 300kB JS file, or are doing a whole lot of XML parsing, et cetera, prior to load, without offering a separate mobile experience, your stats on mobile are bound to hurt -- especially if you're a media site or a commercial/retail site.
So the correct answer to your question is to say that there is no correct answer to your question, excepting that there are good ideas and bad ideas, based on the target-devices, the intent of the site/application, the demographic, the code-base, the anticipation that users will frequent the site (and thus benefit from cached assets), the frequency of updates to the codebase (having one updated module, with 20 cached modules, versus a fully-invalid 21-module chunk, due to one updated line, with a client-base of 250,000 customers, is a consideration for several reasons)...
...and more...
Figure out what you're doing.
Figure out what you need to do to make that happen.
Figure out how to do it, while providing your customers a good experience.
Know how to combine files.
Know how to load on demand.
Know how to build a light bootstrap, which can intelligently load modules (and/or learn require/AMD); see the sketch after this list.
Use these as tools to offer your users the best experience possible, given what you're trying to accomplish.
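As a rough illustration of that bootstrap idea, here is what AMD-style on-demand loading looks like with a loader such as RequireJS (the module names and their APIs are placeholders):

    // bootstrap.js - the only script referenced from the page.
    // Core functionality loads immediately; the heavy editor module is
    // fetched only when the user actually asks for it.
    require(['core/app'], function (app) {
        app.init();

        document.getElementById('edit').addEventListener('click', function () {
            require(['widgets/editor'], function (editor) {
                editor.open();   // assumed API of the hypothetical editor module
            });
        });
    });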

Why do web applications send HTML over the wire?

This question pertains to web applications. I have very little web app development experience, so might be missing some very obvious points/issues. Please point them out.
As I understand, in most web applications, a web server sends HTML over the wire to a client (browser). This happens every time a HTTP request is made. I feel this is very wasteful of bandwidth.
1) Since browsers can run JavaScript, why don't we just send a JavaScript program which can generate the webpage's HTML content (which the browser then renders)?
2) Further, a browser might cache the JavaScript program, and next time the server need only send the data. The protocol might involve the browser sending the "program version" it has.
Consider an example of a relatively simple website Hacker News [http://news.ycombinator.com]. Let us separate the data (30 posts + their metadata) from its presentation. Assuming 1) above, the server can just send the data (say in JSON) + a JavaScript program to generate HTML. This gist shows the idea. The data for the 30 posts is in JSON [http://www.json.org/js.html] format. For this particular example the data transferred is cut in 1/2 (size of data+JavaScript / size of HTML). Further if browsers can do 2) above, it reduces the data transferred on each visit to 1/4 (size of data / size of HTML). [Note: this analysis is without considering compression; gzip,deflate is very successful in reducing the size of HTML. But isn't prevention better than cure?]
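To make 1) concrete, a stripped-down sketch of the idea (the field names are invented; the linked gist shows the real version):

    // Data-only payload the server would send instead of HTML.
    var posts = [
        { title: 'Show HN: My project', url: 'http://example.com', points: 120, comments: 45 },
        { title: 'Ask HN: How do you test?', url: 'item?id=1', points: 80, comments: 30 }
    ];

    // The "program" that turns data into markup, shipped once and then cached by the browser.
    function renderPosts(posts) {
        return posts.map(function (p) {
            return '<li><a href="' + p.url + '">' + p.title + '</a> (' +
                   p.points + ' points, ' + p.comments + ' comments)</li>';
        }).join('');
    }

    document.getElementById('posts').innerHTML = renderPosts(posts);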
I see at least the following advantages of this:
* For most web pages, it will reduce the size of data transferred over the wire.
* Forces web apps to separate data from its presentation.
Disadvantages might include - more complex browsers, time to run the JavaScript program to generate HTML (this might get offset by the reduction in data size).
Now my question is - why are web applications not developed this way, or, why do web applications send HTML over the wire? Surely the web server (sending out HTML) doesn't care about HTML at all, so why should it, first, generate it, and then send it over the wire?
There are a few reasons, some of them historical; this is by no means a complete list, just some of my experiences:
* HTML predates JS, and a lot of scripts and libraries predate JS
* Older browsers (think IE<=6) had rubbish, inconsistent JS engines, while their rendering engines were much more consistent in how they treat HTML. So many more libraries and scripts predate consistent JS
* It is a nightmare to debug applications written as you suggest if they are not constructed right (we have one at my work; it takes 30 minutes to find where a piece of HTML is actually generated)
* It is a lot more work to do it right - why not use templates or static docs or something much simpler?
* It's not really a problem - HTML compresses really well
* What you suggest is done - it's called AJAX (OK, so AJAX is more general than this, but you all know what I mean)
* It simply doesn't work for most plain-text user agents, including those used by most search engines. If this page is serving most of your content, it's generally a good idea to make it easy for Google to parse
Well, the obvious reason why this is the case is that JavaScript wasn't around when we started sending HTML around, and HTML was an improvement over sending around plain-text documents.
The reason we don't do this now: we eschew complex solutions to problems that aren't really problems.
Average internet connections download nearly 1 MB per second, and web browsers are quite adept at parsing HTML and starting to render it before it has even all arrived. They're also great at parallelizing the downloading of resources on the page. If we want to save a few bytes at the cost of some compute cycles, we gzip content before sending it. Problem solved.
And for the record, we do this with AJAX in complex webpages (check out GitHub's source browsing for a great example of how awesome this can be).
What you suggest can, and is, done. Remember, web pages used to be static documents. Full blown web-based applications are a relatively recent idea.
I might also suggest that it isn't necessarily more efficient, especially when your pages are sent gzipped.
What you suggest is basically what a JavaScript full stack framework like ExtJS does. You can create rich, data intensive applications without writing any HTML -- well, only enough to reference the necessary .js libraries. The complex DOM needed for layouts, grids, forms etc is all created by the framework.
The simple answer is that HTML is older. Why is C99 not fully implemented with a lot of compilers? They figure 1989 is new enough for them. Also, JavaScript exercises a lot more control over people's browsers than they seem to want. Conditional statements and encoded data pose a security concern, and some people want to keep that can of worms closed to begin with. True, HTML is a very inefficient markup, but the size is insignificant compared to the images you download from the internet. That favicon takes up as much data as the page itself, and it's only 16 pixels across.
A good reason that the server-side code of a web application might do lots of HTML template work on the server side is that in many server environments it's not made easy to bundle up server-side data structures (object graphs) for easy delivery to the client. There may be information kept in server-side data structures that really shouldn't be delivered out to the client. Thus in order to send out a "pure" data-only response, the server would have to trim off sensitive data before delivering out the JSON. That's not an unsolvable problem, but I don't know of many server frameworks that facilitate a solution.
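In plain JavaScript terms the trimming step is easy to sketch (the field names are invented); the hard part is remembering to do it, consistently, for every response:

    // Internal server-side object: contains things the client must never receive.
    var user = {
        id: 42,
        name: 'Alice',
        email: 'alice@example.com',
        passwordHash: '$2b$10$...',   // must not leave the server
        isAdmin: false
    };

    // Explicit whitelist before serializing to JSON for the browser.
    function toClientJson(user) {
        return JSON.stringify({ id: user.id, name: user.name, email: user.email });
    }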
The server has direct, unfettered access to the database and to everything else that makes an application work: user preferences, history, account details, system settings, etc. To build an application that's client-centric for rendering purposes would mean concocting ways of keeping all that information intact and up-to-date on the client. For a lot of applications, that might not be terribly easy.
Finally, it's only relatively recently that it would make sense to trust a browser to provide a stable enough platform for building a long-lived "application environment" as a continually-updating web page. By building a web app such that pages are sometimes completely reloaded, there are lots of little "reboots". That's a cheap and dumb way of keeping a lid on at least some kinds of memory leaks.
Most implementations of sites with heavy Javascript use won't start executing until the DOM has fully loaded; you then get pages with 'loading screens': the page wrapper has downloaded, but none of the content has.
Also, do remember that not all users have Javascript enabled, and not all browsers support high-level Javascript (think mobiles).
I would send HTML in a response if I wanted my application to work without Javascript. I would write HTML rendering code in my server-side language (most of the time not Javascript), which could then be used for two purposes: serving whole HTML pages, and serving bits of HTML in response to XHRs.
If the Javascript code is restricted to things like reporting UI events and replacing innerHTML with server-generated code, I don't have to duplicate any of my application logic across languages/frameworks. This duplication problem is one of the reasons why server-side Javascript is getting people excited.

Uncompressing content in browser on client side

I am interested to know about the possibilities of reducing HTTP requests on servers by sending different kinds of content in a single compressed file, which is later uncompressed in the client's browser, with the stuff (images, CSS, JS) placed where it should be.
I read somewhere that Firefox is working on a plan to offer such a feature in future releases, but it has not been done yet, plus it would not be standard.
Will you suggest any solution for this? Can Flash be used to uncompress compressed files on the client side for later use?
Thanks
We did more or less what you describe in our web app and are extremely happy with the response time.
The original files are all separate (HTML, CSS, JS, images) and we develop on them.
Then when moving to production we have a shell script that:
* uses the YUI Compressor to compress CSS and JS
* reads all images and converts them to data:image/png;base64,...
* removes all blank spaces and comments from the HTML
* puts all these resources inline in the HTML
The page is ~300 kb and usually cached. The server gzips it, so the real size travelling over the network is lower. We don't use any additional compression.
And then there is a second call to get the data (JSON for us) and start rendering it client side.
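The image-inlining step from that shell script could look roughly like this, sketched in Node (the paths are placeholders):

    var fs = require('fs');

    // Turn a PNG into a data: URI that can be placed directly in the HTML/CSS,
    // removing one HTTP request per image.
    function toDataUri(path) {
        var base64 = fs.readFileSync(path).toString('base64');
        return 'data:image/png;base64,' + base64;
    }

    var html = fs.readFileSync('index.html', 'utf8')
        .replace('images/logo.png', toDataUri('images/logo.png'));
    fs.writeFileSync('index.build.html', html);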
I had to read your question a few times before I got what you were asking. It sounds like you want to basically combine all the elements of your site into a single downloadable file.
I'm fairly confident in saying I don't believe this is possible or desirable.
Firstly, you state that you've heard that Firefox may be supporting this. I haven't heard about that, but even if they do, how will you be able to use the feature while still supporting other browsers?
But even if you can do it, you've tagged this as 'performance-tuning', on the grounds that you'll be saving a few http requests. But in your effort to save http requests to speed things up, you need to be cautious that you don't actually end up slowing things down.
Combining all the files may cut you down to one http request, but your site may then load slower as the whole thing would need to load before any of it would be ready for display (as opposed to a normal page load where your page load may take time but at least some of it may be ready for display quite quickly).
What you can do right now, and which will be useful for reducing http requests, is combine your stylesheets into a single CSS, your scripts into a single JS file, and groups of related images into single image files (google CSS Sprites for more info on this technique).
Even then, you need to be careful about which files you combine - the point of the exercise is to reduce HTTP requests, so you need to take advantage of caching, or you'll end up making things worse rather than better. Browsers can only cache files that are the same over multiple pages, so you should only combine the files that won't change between page loads. So, for example, only combine the Javascript files which are in use across all the pages on your site.
My final comment would be to re-iterate what I've already said: Be cautious about over-optimising to the point that you actually end up slowing things down.

severside processing vs client side processing + ajax?

Looking for some general advice and/or thoughts...
I'm creating what I think to be more of a web application than a web page, because I intend it to be like a Gmail app where you would leave the page open all day long while getting updates "pushed" to the page (for the interested, I'm using the Comet programming technique). I've never created a web page before that was so rich in AJAX and JavaScript (I am now a huge fan of jQuery). Because of this, time and time again when I'm implementing a new feature that requires a dynamic change in the UI that the server needs to know about, I am faced with the same question:
1) Should I do all the processing on the client in JavaScript and post back as little as possible via AJAX,
or
2) Should I post a request to the server via AJAX, have the server do all the processing and then send back the new HTML, and then on the AJAX response simply assign the new HTML?
I have been inclined to always follow #1. This web app, I imagine, may get pretty chatty with all the AJAX requests. My thought is to minimize as much as possible the size of the requests and responses, and rely on the continuously improving JavaScript engines to do as much of the processing and UI updates as possible. I've discovered with jQuery that I can do so much on the client side that I wouldn't have been able to do very easily before. My JavaScript code is actually much bigger and more complex than my server-side code. There are also simple calculations I need to perform, and I've pushed those to the client side too.
I guess the main question I have is: should we ALWAYS strive for client-side processing over server-side processing whenever possible? I've always felt the less the server has to handle, the better for scalability/performance. Let the power of the client's processor do all the hard work (if possible).
Thoughts?
There are several considerations when deciding if new HTML fragments created by an ajax request should be constructed on the server or client side. Some things to consider:
Performance. The work your server has to do is what you should be concerned with. By doing more of the processing on the client side, you reduce the amount of work the server does, and speed things up. If the server can send a small bit of JSON instead of giant HTML fragment, for example, it'd be much more efficient to let the client do it. In situations where it's a small amount of data being sent either way, the difference is probably negligible.
Readability. The disadvantage to generating markup in your JavaScript is that it's much harder to read and maintain the code. Embedding HTML in quoted strings is nasty to look at in a text editor with syntax coloring set to JavaScript and makes for more difficult editing.
Separation of data, presentation, and behavior. Along the lines of readability, having HTML fragments in your JavaScript doesn't make much sense for code organization. HTML templates should handle the markup and JavaScript should be left alone to handle the behavior of your application. The contents of an HTML fragment being inserted into a page is not relevant to your JavaScript code, just the fact that it's being inserted, where, and when.
I tend to lean more toward returning HTML fragments from the server when dealing with ajax responses, for the readability and code organization reasons I mention above. Of course, it all depends on how your application works, how processing intensive the ajax responses are, and how much traffic the app is getting. If the server is having to do significant work in generating these responses and is causing a bottleneck, then it may be more important to push the work to the client and forego other considerations.
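For concreteness, the two styles being weighed here look roughly like this with jQuery (the URLs and element ids are placeholders):

    // Server-rendered fragment: the response body is ready-made HTML, the client just inserts it.
    $('#articles').load('/articles/latest');

    // Data-only response: the server sends JSON and the client builds the markup.
    $.getJSON('/articles/latest.json', function (articles) {
        var html = articles.map(function (a) {
            return '<article><h2>' + a.title + '</h2><p>' + a.summary + '</p></article>';
        }).join('');
        $('#articles').html(html);
    });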
I'm currently working on a pretty computationally-heavy application right now and I'm rendering almost all of it on the client-side. I don't know exactly what your application is going to be doing (more details would be great), but I'd say your application could probably do the same. Just make sure all of your security- and database-related code lies on the server-side, because not doing so will open security holes in your application. Here are some general guidelines that I follow:
Don't ever rely on the user having a super-fast browser or computer. Some people are using Internet Explorer 7 on old machines, and if it's too slow for them, you're going to lose a lot of potential customers. Test on as many different browsers and machines as possible.
Any time you have some code that could potentially slow down or freeze the browser momentarily, show a feedback mechanism (in most cases a simple "Loading" message will do) to tell the user that something is indeed going on, and the browser didn't just randomly freeze.
Try to load as much as you can during initialization and cache everything. In my application, I'm doing something similar to Gmail: show a loading bar, load up everything that the application will ever need, and then give the user a smooth experience from there on out. Yes, they're going to have to potentially wait a couple seconds for it to load, but after that there should be no problems.
Minimize DOM manipulation. Raw number-crunching JavaScript performance might be "fast enough", but access to the DOM is still slow. Avoid creating and destroying elements; instead simply hide them if you don't need them at the moment.
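That last point in practice, with jQuery (the selector is a placeholder):

    // Expensive: destroying the panel and rebuilding it every time it is toggled.
    $('#panel').remove();
    // ... later ...
    $('<div id="panel">Settings...</div>').appendTo('body');

    // Cheaper: build the panel once, then only toggle its visibility.
    $('#panel').hide();
    $('#panel').show();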
I recently ran into the same problem and decided to go with browser-side processing. Everything worked great in FF and IE8 (and IE8 in IE7 mode), but then... our client, using Internet Explorer 7, ran into problems: the application would freeze up and a script-timeout box would appear. I had put too much work into the solution to throw it away, so I ended up spending an hour or so optimizing the script and adding setTimeout wherever possible.
My suggestions?
If possible, keep non-critical calculations client side.
To keep data transfers low, use JSON and let the client side sort out the HTML.
Test your script using the lowest common denominator.
If needed use the profiling feature in FireBug. Corollary: use the uncompressed (development) version of jQuery.
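The setTimeout optimisation mentioned above usually amounts to chunking long-running work so the browser gets control back between batches and never shows the "script is taking too long" dialog; a sketch:

    // Process a large array in small batches, yielding to the browser between
    // batches so the UI stays responsive.
    function processInChunks(items, handleItem, chunkSize) {
        var i = 0;
        function nextChunk() {
            var end = Math.min(i + chunkSize, items.length);
            for (; i < end; i++) {
                handleItem(items[i]);
            }
            if (i < items.length) {
                setTimeout(nextChunk, 0);   // give control back to the browser
            }
        }
        nextChunk();
    }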
I agree with you. Push as much as possible to users, but not too much. If your app slows down or, even worse, crashes their browser, you lose.
My advice is to actually test how your application behaves when left open all day. Check that there are no memory leaks. Check that there isn't an AJAX request created every half second after working with the application for a while (timers in JS can be a pain sometimes).
Apart from that, never perform user input validation with JavaScript alone. Always duplicate it on the server.
Edit
Use jQuery live binding. It will save you a lot of time when rebinding generated content and will make your architecture clearer. Sadly, when I was developing with jQuery it wasn't available yet; we used other tools with the same effect.
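With today's jQuery the delegated form of that idea is a one-liner (the selectors are placeholders); older code used .live() to the same effect:

    // One handler on the document survives any amount of AJAX-generated content,
    // so newly inserted .delete-button elements work without rebinding.
    $(document).on('click', '.delete-button', function () {
        $(this).closest('.item').remove();
    });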
In the past I also had a problem where generating one part of a page via AJAX depended on the generation of another part. Generating the first part first and the second part second will make your page slower, as you would expect. Plan this up front. Develop pages so that they already have all their content when opened.
Also (this applies to simple pages too), keep the number of files referenced per server low. Join JavaScript and CSS libraries into one file on the server side. Keep images on a separate host, or better, separate hosts (just creating a third-level domain will do). Though this is only worth it in production; it makes the development process more difficult.
Of course it depends on the data, but the majority of the time, if you can push it to the client side, do. Make the client do more of the processing and use less bandwidth. (Again, this depends on the data; you can run into cases where you have to send more data across to do it client side.)
Some stuff like security checks should always be done on the server. If you have a computation that takes a lot of data and produces less data, also put it on the server.
Incidentally, did you know you could run Javascript on the server side, rendering templates and hitting databases? Check out the CommonJS ecosystem.
There could also be cross-browser support issues. If you're using a cross-browser, client-side library (eg JQuery) and it can handle all the processing you need then you can let the library take care of it. Generating cross-browser HTML server-side can be harder (tends to be more manual), depending on the complexity of the markup.
This is possible, but only with a heavy initial page load and heavy use of caching. Take Gmail as an example:
On the initial page load, it downloads most of the JS files it needs to run, and most of them are cached.
Don't overuse images and graphics.
Load all the data needed for the initial view, along with the subsequent, predictable user data. In Gmail and the latest Yahoo Mail the inbox is not populated with just a single mail conversation body; the first few full email messages are loaded in advance at page-load time. The secret of high responsiveness comes at a cost (Gmail asks to load the light version if the bandwidth is low; I bet most of us have experienced that).
Follow the KISS principle: keep your design simple.
And never try to render the whole page using JavaScript in any case; you cannot assume all your end users have high-spec systems or high-bandwidth connections.
It's smart to split the workload between your server and client.
If you think that in the future you might want to create an API for your application (communicating with iPhone or Android apps, letting other sites integrate with yours), you would have to duplicate a bunch of code for all those devices if you go with a bare-bones server implementation of your application.
