Should pages that are included in other pages have their own script? - javascript

I am using RequireJS and I am creating a separate script file for each page. However, I also have some components that are included in some of the pages (server side). Should these included components also get their own script file, or should the necessary JavaScript live in the containing page's script? Some of the functionality in the included components is common to many pages.

I think you'd be better off thinking about your javascript as reusable modules rather than page-specific functionality. So, say your page has a search box which initiates an AJAX request, a few date pickers, and a whole bunch of tabs. Each of these should be a module (or if the functionality they provide is complex enough, a few modules). By breaking down your app into small pieces that have very focused aims, you make it easier to test each bit in isolation (automated unit tests) and reuse the functionality elsewhere.
Now as to how to load your javascript modules, there's a point where it makes sense to strategically load stuff based on user needs (ex: core.js is loaded by default but search.js isn't loaded until the user accesses the "search" tab) but you can get pretty far just packaging everything into a single file (require's r.js tool does this for you) and using the same script file (main.js) on every page.
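As a rough illustration, an r.js build profile for that single-file packaging might look like this (the paths here are assumptions, not from the original answer):
// build.js - run with: node r.js -o build.js
({
  baseUrl: 'scripts',       // directory containing the modules
  name: 'main',             // entry module to trace dependencies from
  out: 'dist/main-built.js' // single combined, minified output file
})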
When using a single script file, keep in mind that your js will need to work when the target of its functionality is not present. jQuery makes this super simple and you almost don't have to think about it - ex:
$('#js-foo').on(...) // <-- this doesn't blow up if '#js-foo' isn't on the page.
I've also seen people set a data-attr on the body tag for the page - e.g. data-page="foo" and key js off of that:
var page = $('body').data('page');
if (page === 'foo') {
  component1.setup();
  component2.setup();
}
In your case, I would try building everything into a single file using RequireJS / AMD-style modules. Each component would get its own module file (mycomponent.js), your main.js would require() all your modules and init things appropriately, and finally you'd configure your r.js build to package everything into a single file when deploying to / running in production.
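A minimal sketch of what that main.js could look like, assuming the data-page approach above (the module names mycomponent and anothercomponent are illustrative):
require(['jquery', 'mycomponent', 'anothercomponent'], function ($, mycomponent, anothercomponent) {
  // Key per-page setup off the body's data-page attribute, as described above.
  var page = $('body').data('page');
  if (page === 'foo') {
    mycomponent.setup();
    anothercomponent.setup();
  }
});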
If you are interested in exploring this topic more, check out these posts:
Single Entry Point FTW
Single Entry Point Redux

Related

Webpack - multi-page application with a single bundled js file

I'm trying to update a legacy frontend web application to use webpack for the dependencies. Right now it's structured like so:
- login.html
- dashboard.html
- src/login.js
- src/dashboard.js
Each page has its own JavaScript file, plus it loads in a bunch of external dependencies via script tags on the page. My problem is that most of the pages use some variation of the same bunch of very large libraries and jQuery plugins. If I bundle each page's js into a separate js file, I'm going to end up with a huge bundle for every page that has to be downloaded every time the user changes page. I'd prefer to just have one bundle that every page loads and uses the necessary part of. Is webpack fit for purpose here, and if so, how should I go about it?
I "solved" this by keeping my library imports as they are, ie jquery, bootstrap etc downloaded from CDNs. I made one bundle for each of my bigger local libraries and included that on each page so that it gets cached by the browser, then I used webpack with multiple entry point for our bespoke js code.

AngularJs SPA Javascript file

Do I have to include all my JavaScript files while loading the main index page?
In a single-page application, even when we are not logged in, we include all of our .js files in the main index file. This includes js files that are only needed when users are logged in.
What is the better approach to managing an Angular app in this context?
Simple answer: yes.
Your application is a single-page one, so you can combine all JS files into one and load it in a single request. This saves round trips later on.
Alternatively, create two pages, login.html and others.html, then load two different sets of JS files accordingly.
Nowadays bandwidth is normally not the bottleneck, so loading a larger JS file does not (usually) cause trouble.
You can split your code into multiple modules and then load only the js needed for each module.
I suggest using Gulp with packages to inject HTML where appropriate. You then have single lines of code as placeholders for your JavaScript, and you run the Gulp task to inject the JavaScript into the areas where it is needed.
You could also run Gulp tasks to minify your js into just a few minified files. You will need to be sure your js is minification-safe (Gulp can do this too).
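As a rough sketch, a concat-and-minify task could look like this, assuming gulp 4 with the gulp-concat and gulp-uglify plugins (the paths are illustrative):
// gulpfile.js
const gulp = require('gulp');
const concat = require('gulp-concat');
const uglify = require('gulp-uglify');

function scripts() {
  return gulp.src('src/js/**/*.js')
    .pipe(concat('app.min.js')) // merge all files into one
    .pipe(uglify())             // minify (the code must be minification-safe)
    .pipe(gulp.dest('dist/js'));
}

exports.scripts = scripts;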
If you use AMD - most often via RequireJS - then you won't need to include everything from the very beginning.
A while ago we did a similar project, although without AngularJS, and by using RequireJS we made the different pages load different files. This way people's browsers never download certain files if the user never visits certain pages.
(Of course, we had many pages inside the app; with just 2 or 3 pages this wouldn't make any difference.)

Is it good practice to use requirejs to extend single pages instead of pure javascript applications?

So my question is mainly about the use case of RequireJS.
I read a lot about purely JavaScript-driven web pages. Currently I extend single rendered views (e.g. provided by a PHP framework) with AngularJS, which adds a lot of value.
Sadly, the dependency management gets harder and harder with every <script> tag on other 'single pages'. Even more so when there is a main.js file which provides common libraries (e.g. jQuery and AngularJS itself).
I thought this doesn't fit RequireJS's philosophy of having one main file which requires all dependencies.
A good example would be an administration panel which uses some modules (defined by AngularJS's dependencies).
Example:
scripts/
  adminpanel/
    panel.app.js
    panel.filters.js
    panel.directives.js
  anotherModule/
  andAnotherModule/
  require.js
tl;dr
When you use AngularJS to extend single pages, instead of building a completely JavaScript-driven web application, is it good practice to use RequireJS to load the AMD modules which will be used on the single page? And what is the best way to do so?
SPA usually means that the page doesn't refresh and all extra content is loaded on the fly. In essence the entire app is a single page. It doesn't mean that all of the content is loaded on the initial load (though if it is small enough, this could be the case). Using RequireJS / AMD architecture is really good for this kind of thing.
As the user navigates throughout the site, different partials / templates are retrieved as well as any supporting JavaScript.
The best way to do this is with define, declaring all of the requirements your script needs in order to work. All of the required scripts will be loaded before the function runs, ensuring that you have everything you need. Furthermore, the items that you define as requirements can have their own define calls to specify the scripts they need... and so on.
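For illustration, a minimal AMD module might look like this, assuming jQuery and a hypothetical datepicker module are configured in the loader:
// panel.datepicker.js - an illustrative module, not from the original answer
define(['jquery', 'datepicker'], function ($, datepicker) {
  // Both dependencies are guaranteed to be loaded before this factory runs.
  function init() {
    datepicker.attach($('.js-date'));
  }
  return { init: init };
});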

Javascript and website loading time optimization

I know that the best practice for including JavaScript is to have all code in a separate .js file and allow browsers to cache that file.
But when we begin to use many jQuery plugins, which have their own .js files, and our functions depend on them, wouldn't it be better to dynamically load only the js functions and the required .js files for the current page?
Wouldn't a page be faster if, when I only need one function, I loaded it dynamically by embedding it in the HTML with a script tag, instead of loading the whole js file along with the js plugins?
In other words, aren't there cases in which there are better practices than keeping our whole JavaScript code in a separate .js file?
It would seem at first glance that this would be a good idea, but in fact it would actually make matters worse. For example, if one page needs plugins 1, 2 and 3, then a file would be built server-side with those plugins in it. Now the browser goes to another page that needs plugins 2 and 4. This would cause another file to be built; the new file would be different from the first one, but it would also contain the code for plugin 2, so the same code ends up getting downloaded twice, bypassing the version that the browser already has.
You are best off leaving the caching to the browser, rather than trying to second-guess it. However, there are options to improve things.
Top of the list is using a CDN. If the plugins you are using are fairly popular ones, then the chances are that they are being hosted on a CDN. If you link to the CDN-hosted plugins, then for any visitor hitting your site for the first time who has also happened to visit another site using the same plugins from the same CDN, the plugins will already be cached.
There are, of course, other things you can do to speed your JavaScript up. Best practice includes placing all your script include tags as close to the bottom of the document as possible, so as to not hold up page rendering. You should also look into lazy initialization. This involves, for any feature that needs significant setup to work, attaching a minimalist event handler that, when triggered, removes itself and sets up the real event handler.
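For illustration, a minimal sketch of that lazy-initialization pattern with jQuery (the widget selector and initializer name are hypothetical):
$('#heavy-widget').one('click', function (event) {
  // .one() removes this placeholder handler after the first call.
  initHeavyWidget(this);       // hypothetical expensive setup that binds the real handlers
  $(this).trigger(event.type); // re-fire the click so the real handler serves it
});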
One problem with having separate js files is that it will cause more HTTP requests.
Yahoo have a good best practices guide on speeding up your site: http://developer.yahoo.com/performance/rules.html
I believe Google's Closure library has something for combining JavaScript files and dependencies, but I haven't looked too much into it yet. So don't quote me on it: http://code.google.com/closure/library/docs/calcdeps.html
Also there is a tool called jingo http://code.google.com/p/jingo/ but again, I haven't used it yet.
I keep separate files for each plug-in and page during development, but in production I merge and minify all my JavaScript files into a single JS file loaded uniformly throughout the site. My main layout file in my web framework (Sinatra) uses the deployment mode to automatically either generate script tags for all JS files (in order, based on a manifest file) or perform the minification and emit a single querystring-timestamped script tag.
Every page is given a body tag with a unique id, e.g. <body id="contact">.
For those scripts that need to be specific to a particular page, I either modify the selectors to be prefixed by the body:
$('body#contact form#contact').submit(...);
or (more typically) I have the onload handlers for that page bail early:
jQuery(function($){
  if (!$('body#contact').length) return;
  // Do things specific to the contact page here.
});
Yes, including code (or even a plug-in) that may only be needed by one page of the site is inefficient if the user never visits that page. On the other hand, after the initial load the entire site's JS is ready to roll from the cache.
The network latency is the main problem. You can get a very responsive page if you reduce the HTTP calls to one.
That means all the JS and CSS are bundled into the HTML page. And if you can forget IE6/7, you can inline images as data:image/png;base64 URIs.
When we release a new version of our web app, a shell script minifies and bundles everything into a single html page.
Then there is a second call for the data, and we render all the HTML client-side using a JS template library: PURE.
Ensure the page is cached and gzipped. There is probably a size limit to consider. We try to stay under 400kb unzipped, and load secondary resources later when needed.
You can also try a service like http://www.blaze.io. It automatically performs most front-end optimization tactics and also couples in a CDN.
They're currently in private beta, but it's worth submitting your website to.
I would recommend you join common bits of functionality into individual JavaScript module files and load them only on the pages where they are used, using RequireJS / head.js or a similar dependency management tool.
An example: if you are using lightbox popups, contact forms, tracking, and image sliders in different parts of the website, separate these into 4 modules and load them only where needed. That way you optimize caching and make sure your site carries no unnecessary flab.
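For example, a minimal sketch of loading one of those modules only where its markup is present, assuming RequireJS and a hypothetical lightbox module:
// Only fetch the lightbox module on pages that actually contain a lightbox trigger.
if (document.querySelector('.js-lightbox')) {
  require(['lightbox'], function (lightbox) {
    lightbox.init('.js-lightbox');
  });
}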
As a general rule it's always best to have fewer files rather than more. It's also important to work on the timing of each JS file, as some are needed BEFORE the page completes loading and some AFTER (i.e., when the user clicks something).
See a lot more tips in the article 25 Techniques for Javascript Performance Optimization, including a section on managing JavaScript file dependencies.
Cheers, hope this is useful.

Is it better to minify javascript into a single bundle for the whole site, or a specific-to-each-page bundle?

When minifying JavaScript files together in web development, is it better from the user-loading-time point of view to:
- make one single big bundle of JavaScript containing all the script, and include this on each page - so each page will probably not need all of it, but once the user has it cached, they don't need to get any further scripts (until it expires from their cache, of course) - optimising for number-of-requests, or
- make one bundle of JavaScript per page, so that each page loads just the script that it needs and nothing else - so each page when first loaded will definitely require a JS request (but will subsequently have that cached) - optimising for size-of-requests?
I'm interested in some data upon which to base the decision for which strategy to go with. I can arrive at conclusions based on anecdote as easily as everyone else :-)
It really depends on the sizes and functions of the script. It's common to have a single master.js for all your pages, which contains all the functionality required by every page of your site, whilst having other js files for functionality that might only be needed on certain pages.
Take Stack Overflow, for instance. They have a master.js file included on every page of the site, but when you visit a question page or the "ask a question" page you'll notice wmd.js. This script includes all the functionality for the editor which is needed on fewer pages.
I would much rather have 1 minified js file for the entire site. Over a period of time this always performs better than having multiple js files.
Check out this link for more details
It depends entirely on what's in your scripts. If you've got loads of small functions which are used by a wide selection of pages then yes, a single file will be best. If you've got a large script that's only used by one page, you wouldn't typically want to slow down the initial front-page load time by including it in the shared script.
So what you will typically end up with is a compromise, with base functions shared across all pages in one script, and the more complex and specific functions in per-page or per-page-group scripts. It'll very rarely be beneficial to go the whole option-2 hog and have a completely separate script for each page.
Having shared functions in one file and separate page-specific complex scripts is also typically more maintainable.
If you are implementing websites in ASP.NET MVC, you may find the following approach the most sensible; it is the one I use all the time.
Make a list of all of the JavaScript files used in the project - ones used everywhere and those used by many or most pages. This defines your common bundle, which you should include from your master page. MVC4's bundling and minification feature does the magic there; you just need to list the files.
The rest of the JavaScript is usually for local use, i.e. within just one view, effectively implementing the view. For that reason it should reside within the view, completely uncompressed.
For example, I make extensive use of AngularJS within views, so each such view contains its own Angular controller and other local elements as needed. Depending on the view's complexity, it can even have its own set of directives, services and factories, although typically those go into a partial view one or two levels up.
The bottom line is, do not try to burden yourself with bundling JavaScript that's meant for local use. Bundle only the most generic stuff, and do it in just one place - your product's master page. Leave local JavaScript uncompressed in local files where it is used. This has no real effect on performance, while making your code much easier to understand and maintain.
