Can I copy external CDN JavaScript locally?

Scripts from other providers I would normally include in my application in the following way:
<script src="https://apis.google.com/js/api.js"></script>
However, I was wondering whether there is any drawback to just opening the URL https://apis.google.com/js/api.js and copying/pasting the script into my application.
The advantage of this would be, for example when using React, to keep the script inside the particular component that uses it.
However, I'm not sure whether there are any drawbacks - for example, these scripts might sometimes be updated by the third party (say Google), and mine would stop working because I'd have an old version copied locally.
Any idea whether there would be any problems with copying/pasting external third-party scripts locally into my code (say a React component)?

The point of a CDN is to avoid downloading common scripts more than once: if you visit website A, which loads https://apis.google.com/js/api.js, and then visit website B, which loads the same URL, your browser only downloads the file the first time, so website B loads faster.
Copying the script into your own file will work, but you'll lose this advantage.

Yes, you can put it in a JS file and then reference that file from your index.html:
<script type='text/javascript' src="../../path/to/the/file/api.js"></script>
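If the goal is to tie the script to a particular React component (as mentioned in the question), a common alternative to pasting the code inline is to inject the original CDN URL when the component mounts. A minimal sketch, assuming React with hooks; the component name is illustrative:
import { useEffect } from 'react';

function GoogleApiLoader() {
  useEffect(() => {
    // Inject the CDN script once, when the component mounts...
    const script = document.createElement('script');
    script.src = 'https://apis.google.com/js/api.js';
    script.async = true;
    document.body.appendChild(script);
    // ...and remove it again if the component unmounts.
    return () => { document.body.removeChild(script); };
  }, []);
  return null;
}

export default GoogleApiLoader;
This keeps the script's source of truth at the provider, so you still pick up their updates automatically.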

Related

Add custom client-side script that executes when on page

Suppose there is an external site that lacks functionality (by external I mean hosted and maintained by someone else). Is there any way to add a custom script to it that executes when you are on the page?
You can, but it will only happen on your machine.
On each machine where you want the code to run, you will have to set up the plugin and your code.
Greasemonkey is the most popular, as zzzzzBov commented.
You can check this tutorial to see some screenshots: http://www.techradar.com/news/internet/the-beginner-s-guide-to-greasemonkey-scripting-598247
http://www.thewindowsclub.com/greasemonkey-scripts-firefox
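For reference, a Greasemonkey userscript is just a JS file with a metadata block at the top telling the extension which pages to run it on. A minimal sketch; the @match URL and the behaviour are placeholders:
// ==UserScript==
// @name     Add missing functionality
// @match    https://example.com/*
// @grant    none
// ==/UserScript==

// Runs in your own browser whenever a matching page loads:
document.querySelectorAll('a').forEach(function (link) {
  link.title = link.href; // e.g. show each link's target on hover
});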

How to find the source file of the javascript

I'm using PrimeFaces 5.1. When I load my web page, go into the Firefox debugger, and look at the JavaScript, I see that it has loaded a script (PrimeFaces.js.jsf, to be precise) which belongs to the PrimeFaces 5.0 version. I have taken care to delete all references to 5.0, but I still get the same result.
So, my question is: when a web page is loaded and we see the JavaScript files it loaded, how do we know where a particular script file is being retrieved from?
P.S.: I'm not the one who wrote the code that includes this JS; it is part of the framework, so I have no control over where it is accessed from. All I can do, once I know the path of the file, is modify it to suit my needs.
I have deleted the history/cache/temporary files and also reloaded the page with Ctrl + F5. It didn't help.
Found out why the project was still referring to PF 5.0: primefaces-5.0.jar still co-existed with primefaces-5.1.jar in the WEB-INF/lib directory of the project residing in the glassfish directory 'glassfish3\glassfish\domains\domain1\eclipseApps'.

Most efficient way for browser to load resources

I currently use Angular with ng-routes, ng-table and ng-animate. I also use other libraries such as jQuery and Modernizr. Finally I have my own custom javascript, which can be multiple files.
So my projects load a long list of resources like this:
<script type="text/javascript" src="/lib/jquery.min.js"></script>
<script type="text/javascript" src="/lib/modernizr.min.js"></script>
<script type="text/javascript" src="/lib/angular.min.js"></script>
<script type="text/javascript" src="/lib/angular-route.min.js"></script>
<script type="text/javascript" src="/lib/angular-animate.min.js"></script>
<script type="text/javascript" src="/js/custom.js"></script>
I'm aware that I can speed up my projects by reducing HTTP calls and compressing all files.
I use CodeKit which gives options to import JS files into each other and compile them into a single compressed file. If I do that for the list above I would then simply load a single file like this:
<script type="text/javascript" src="/js/main.min.js"></script>
A single HTTP call rather than 6+, and all code is minified. The problem here is that the single file is 500kb+ and would be much larger on a complete project, possibly over 1mb.
Will this mean the file won't get cached by browsers because it's too large?
Will Angular or other libraries have any issues running when compiled in this format? (console throws errors if .map file is not in same directory)
Surely a 500kb+ file will still take a long time to load, so will this really improve load times?
The way I see it, the browser has to wait for that entire large file to load before it can do anything, whereas if the files are separate, it can begin executing each script as soon as it loads.
Will this mean the file won't get cached by browsers because it's too large?
That depends on the browser. Desktop browsers shouldn't have any problems with a file of this size; mobile browsers, however, will, as they impose much smaller limits on what they cache.
Will Angular or other libraries have any issues running when compiled in this format? (console throws errors if .map file is not in same directory)
No. The .map errors won't affect your end users unless they open the console, and they can also be avoided by having your build process produce a .map file for the combined version.
Surely a 500kb+ file will still take a long time to load, so will this really improve load times?
Yes, because you're no longer passing cookies and HTTP headers back and forth with each of those requests.
However, there are things that you can do to improve load time. For example, I split my files into two categories:
Files that are needed immediately, and files that can be loaded later.
Unfortunately, you can't really make this distinction with Angular without consequences, because all routes need to be defined up front, along with all controllers and services required by those routes. The only thing you could really defer is inline templating, because templates can be requested later via AJAX.
I was able to make a split like this for one of my applications: I split the administration code from the client code and only included the administration code after a successful admin login. This is inconvenient for admins, because an admin can't go directly to an admin page or reload an admin page without problems, but it did reduce load time for non-admin users.
This leaves you with only one real option: reduce the overall size of the content you are including. This means making controllers and services as reusable as possible and not including code that isn't necessary. Looking at your example, you're including jQuery. jQuery's purpose is to make it easier to write AJAX requests and to select and manipulate DOM nodes, both of which Angular already covers. You can therefore reduce the size of your codebase by removing jQuery and using Angular-based plugins instead.
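For what it's worth, the admin-only split described above can be done with a small loader. A sketch, where the bundle path and callback are illustrative:
// Pull in a second bundle only after a successful admin login:
function loadAdminBundle(onReady) {
  var s = document.createElement('script');
  s.src = '/js/admin.min.js'; // hypothetical admin-only bundle
  s.onload = onReady;         // run once the extra code has loaded
  document.body.appendChild(s);
}

loadAdminBundle(function () {
  // admin controllers/services are now available
});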

Save HTML As Standalone Page: Exporting Tool?

I need to regularly send HTML pages to a client as standalone .html files with no external dependencies. The original pages are built with Node.js and Express, and they contain several libraries such as Highcharts.
I have done the preparation manually until now. This includes:
Transform all images into blobs
Copy all external .js and .css inside the page
Minify where possible (standard libraries such as jQuery or Bootstrap...)
The result is a single .html file that can be opened without an internet connection and looks just like the original.
Is there any tool to do this automatically? If not, maybe I'll code it myself in Python. Do you have any recommendations around that?
Thanks
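(If you do end up scripting it yourself, the script-inlining step might look like this in Node, which the pages are already built with. A sketch only: it assumes the cheerio package, Node 18+ for the global fetch, absolute script URLs, and omits error handling.)
const fs = require('fs');
const cheerio = require('cheerio'); // npm install cheerio

async function inlineScripts(inputPath, outputPath) {
  const $ = cheerio.load(fs.readFileSync(inputPath, 'utf8'));
  for (const el of $('script[src]').toArray()) {
    const src = $(el).attr('src');
    const js = await (await fetch(src)).text(); // download the external script
    $(el).removeAttr('src').text(js);           // swap the reference for inline code
  }
  fs.writeFileSync(outputPath, $.html());
}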
Monolith is a CLI tool for saving complete web pages as a single HTML file
See https://github.com/Y2Z/monolith
Apologies to the OP, as this answer is probably far too late for him, but I'm posting it to help anyone with a similar problem:
HTTrack is an open-source project that does almost exactly what you described, though it doesn't work perfectly with some of the more peculiar JS.
It saves the page with most of the JS, the major images, and everything the page needs to appear complete. It can be configured to include or exclude all or part of the JS, images, and CSS.
It does not import all of the JS and other content into the HTML file, but it neatly organizes all of the content into one folder and corrects all of the paths to make the folder portable.
It also seems to have trouble grabbing some external sources that are protected, but if it is your local site and it simply uses common scripts like jQuery, you should be fine. When I tested it, it correctly downloaded all of my local CSS, any valid external CSS library I incorporated, jQuery and the derivative scripts I was using, and the embedded images.
Just to save everyone a question: by default the program saves the downloaded websites to C:\My Web Sites.

Javascript and website loading time optimization

I know that best practice for including JavaScript is to have all the code in a separate .js file and allow browsers to cache that file.
But when we begin to use many jQuery plugins, each with its own .js, and our functions depend on them, wouldn't it be better to dynamically load, for the current page, only the functions and the .js plugins it requires?
Wouldn't it be faster, on a page that only needs one function, to embed that function in the HTML with a script tag instead of loading the whole JS file along with the plugins?
In other words, aren't there any cases in which there are better practices than keeping our whole JavaScript code in a separate .js file?
At first glance this would seem like a good idea, but in fact it would make matters worse. For example, if one page needs plugins 1, 2 and 3, a file would be built server-side with those plugins in it. Then the browser goes to another page that needs plugins 2 and 4. This causes another file to be built; the new file is different from the first one, but it also contains the code for plugin 2, so the same code ends up being downloaded twice, bypassing the copy the browser already has.
You are best off leaving the caching to the browser rather than trying to second-guess it. However, there are options to improve things.
Top of the list is using a CDN. If the plugins you are using are fairly popular, chances are they are hosted on a CDN. If you link to the CDN-hosted plugins, then any first-time visitor who has already visited another site using the same plugins from the same CDN will find the plugins already cached.
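For example, jQuery is commonly linked from Google's Hosted Libraries CDN like this (the version number is just an example; use whatever your plugins actually need):
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.7.1/jquery.min.js"></script>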
There are, of course, other things you can do to speed your JavaScript up. Best practice includes placing your script include tags as close to the bottom of the document as possible, so as not to hold up page rendering. You should also look into lazy initialization: for anything that needs significant setup to work, attach a minimal event handler that, when triggered, removes itself and sets up the real event handler.
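A sketch of that lazy-initialization pattern with jQuery, where #heavy-widget and initHeavyWidget are illustrative names:
$('#heavy-widget').one('click', function () {
  // .one() detaches this stub after the first click;
  // the expensive setup below then installs the real handlers.
  initHeavyWidget(this);
});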
One problem with having separate JS files is that it will cause more HTTP requests.
Yahoo have a good best practices guide on speeding up your site: http://developer.yahoo.com/performance/rules.html
I believe Google's Closure Library has something for combining JavaScript files and dependencies, but I haven't looked too much into it yet, so don't quote me on it: http://code.google.com/closure/library/docs/calcdeps.html
There is also a tool called jingo, http://code.google.com/p/jingo/, but again, I haven't used it yet.
I keep separate files for each plugin and page during development, but in production I merge and minify all my JavaScript files into a single JS file loaded uniformly throughout the site. My main layout file in my web framework (Sinatra) uses the deployment mode to automatically either generate script tags for all JS files (in order, based on a manifest file) or perform the minification and emit a single, querystring-timestamped script tag.
Every page is given a body tag with a unique id, e.g. <body id="contact">.
For those scripts that need to be specific to a particular page, I either modify the selectors to be prefixed by the body:
$('body#contact form#contact').submit(...);
or (more typically) I have the onload handlers for that page bail early:
jQuery(function($){
  if (!$('body#contact').length) return;
  // Do things specific to the contact page here.
});
Yes, including code (or even a plug-in) that may only be needed by one page of the site is inefficient if the user never visits that page. On the other hand, after the initial load the entire site's JS is ready to roll from the cache.
Network latency is the main problem. You can get a very responsive page if you reduce the HTTP calls to one.
That means all the JS and CSS are bundled into the HTML page. And if you can forget IE6/7, you can inline the images as data:image/png;base64 URIs.
When we release a new version of our web app, a shell script minifies and bundles everything into a single HTML page.
Then there is a second call for the data, and we render all the HTML client-side using a JS template library: PURE.
Ensure the page is cached and gzipped. There is probably a size limit to consider; we try to stay under 400kb unzipped and load secondary resources later, when needed.
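An inlined image then looks like this (the base64 payload below is truncated for illustration; in practice it is the whole encoded file):
<img src="data:image/png;base64,iVBORw0KGgoAAAANSUhEUg..." alt="logo">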
You can also try a service like http://www.blaze.io. It automatically performs most front-end optimization tactics and also couples in a CDN.
They're currently in private beta, but it's worth submitting your website to.
I would recommend joining common bits of functionality into individual JavaScript module files and loading them only on the pages that use them, with RequireJS, head.js, or a similar dependency-management tool.
An example: if you use lightbox popups, contact forms, tracking, and image sliders in different parts of the website, separate these into four modules and load each only where needed (see the sketch below). That way you optimize caching and make sure your site has no unnecessary flab.
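A sketch of such on-demand loading with RequireJS, where the module name is illustrative:
// 'lightbox.js' is fetched only when this runs, so pages
// without popups never download it:
require(['lightbox'], function (lightbox) {
  lightbox.init();
});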
As a general rule it's always best to have fewer files rather than more; it's also important to work on the timing of each JS file, as some are needed BEFORE the page completes loading and some AFTER (i.e., when the user clicks something).
See a lot more tips in the article 25 Techniques for Javascript Performance Optimization, including a section on managing JavaScript file dependencies.
Cheers, hope this is useful.
