Whenever a page is loaded with ColdFusion, it loads many default ColdFusion JavaScript files. When I run the Google PageSpeed Tools, it always complains about render-blocking JavaScript. Apparently, ColdFusion includes many JavaScript files when a page is loaded, such as:
...scripts/ajax/yui/yahoo-dom-event/yahoo-dom-event.js
...scripts/ajax/yui/autocomplete/autocomplete-min.js
...scripts/ajax/messages/cfmessage.js
...scripts/ajax/package/cfajax.js
...scripts/ajax/package/cfautosuggest.js
...scripts/cfform.js
...scripts/masks.js
These are all considered render-blocking scripts. I can't find any information on how to make them non-render-blocking, because obviously I can't add the async="async" attribute to script tags that ColdFusion generates and that I never see. How can I make the ColdFusion scripts non-render-blocking, or am I stuck with them?
Can someone please shed some light on this?
If you really want to do something about this instead of rewriting your UI code, you can grab the output from the buffer before it is sent to the client and modify it. Your modifications could be as simple as removing a hardcoded list of script tags and replacing them with a custom script file that you host in your webroot. The custom script file would simply be the contents of all the other script files combined.
To do this:
In onRequestEnd in Application.cfc you can call var outputBuffer = getPageContext().popBody().getBuffer(), which returns the body that is about to be sent to the client.
Run replacements on outputBuffer, looking for those script tags and removing them via regular expressions. You'll want to keep track of whether you've actually removed any, to use as a flag in the next step.
Finally, append your new master script tag to the output if your flag indicating that replacements were made is true. After you do that, the buffer will be flushed to the client automatically.
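A rough sketch of what that could look like in Application.cfc, using CFScript. The regex pattern and the combined-script path are illustrative assumptions, not ColdFusion's actual output format; inspect your own rendered HTML and adjust them:

public function onRequestEnd() {
    // Pull the generated page back out of the response buffer
    var outputBuffer = getPageContext().popBody().getBuffer().toString();

    // Strip the stock ColdFusion script tags (this pattern is an assumption)
    var cleaned = reReplaceNoCase(outputBuffer,
        '<script[^>]+src="[^"]*scripts/[^"]+\.js"[^>]*>\s*</script>', '', 'all');

    // Only append the combined replacement file if something was removed
    if (len(cleaned) neq len(outputBuffer)) {
        cleaned &= '<script type="text/javascript" src="/js/cf-combined.js"></script>';
    }

    writeOutput(cleaned);
}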
I believe that Adobe no longer updates the script files, so you basically don't have to worry about versioning your master script file to clear the browser cache.
Edit/Note: Definitely avoid using the CF UI stuff, but we don't know what your situation is like right now. If you have hundreds or thousands of hours of rewriting to do, then rewriting it all is likely not practical at this time.
Related
What are the differences in performance and memory footprint, if any, between these two approaches:
1. Using Src
<script type="text/javascript" src="1MBOfjavascript.js"></script>
2. Directly injecting in the Head
$('head').append("<script type='text/javascript'>1MBOfJavascriptCode</script>");
I'm interested because we are developing a Cordova App where we use the second method to Inject to the DOM a previously downloaded Javascript bundle read from the HTML Local Storage.
Given that the script will probably grow in size, I'd like to know whether the second method could run into memory issues or other DOM problems.
I believe the overhead in such cases should be insignificant, as the main processing/memory consumption depends on what the script actually does. That is, the memory used by the file itself would be the total size of the script, which is what, 1 MB at most? During execution, though, the same script can easily use up 100 MB.
Anyway, coming to the point:
Plain inclusion via a script tag is better if the script has to be included in all cases, as it skips an extra script execution and does not cause the browser to re-render after the append.
The append option should be used when you only need the script under specific conditions on the client side, so that it isn't loaded unnecessarily.
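For example, a sketch of the conditional case (the flag and file name are made up):

if (needsHeavyFeature) {                      // hypothetical client-side condition
    var s = document.createElement('script');
    s.src = 'heavy-feature.js';               // hypothetical bundle
    document.head.appendChild(s);             // same effect as the jQuery append
}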
The browser goes through all elements before rendering the page, so I would imagine it is exactly the same. If anything, if you add scripts after the page is downloaded, chances are you will get errors of the type "call to undefined function" when you call a function that was only added after the load.
My advice would be to add them at load time using src, but keep things tidy.
JavaScript's impact depends a lot on how and where you load your file; there are many ways to load a file or code.
The first method you mentioned is the conventional way to load a JavaScript file: load it via src, usually before the closing </head> tag. Nowadays, though, the trend is to load files just before the closing </body> tag; this speeds up application load and gets the DOM ready faster.
The second method is obviously not a good first choice, as some code may only work once your DOM is ready, and here you are loading your JavaScript with a jQuery append, which itself depends on the jQuery file having loaded and so delays your code's execution. Some of your code may also not produce the desired output, depending on how you call your functions and how much they depend on DOM readiness.
I would prefer to load a JavaScript file with the first method, and at the bottom of the page/app if possible.
Asynchronous loading can also be tried.
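A quick sketch of those placements (file names are illustrative):

<head>
  <!-- async: fetched in parallel, runs as soon as it arrives (order not guaranteed) -->
  <script async src="independent-widget.js"></script>
  <!-- defer: fetched in parallel, runs in order once parsing finishes -->
  <script defer src="app.js"></script>
</head>
<body>
  ...
  <!-- the classic alternative: a plain script tag just before </body> -->
  <script src="legacy-app.js"></script>
</body>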
I think this comes down to browser behavior: after retrieving the response from the web server, the browser first parses all the elements to build the DOM, and only after the DOM is ready does your script execute. Based on that, the main difference is simply which one runs first: the script referenced with src will be executed earlier than one injected afterwards.
About the second method: I think the JavaScript engine in every browser can handle it easily and without any problem. The engine doesn't actually care about the size; size only affects how long the script takes to load and then execute. Where you can run into trouble is when injecting elements into the DOM in a loop, where there might be a memory problem.
When you say big JavaScript files, then yes, we should think about memory. But as for the two methods you described, it's really just a question of which one executes first.
That's my opinion.
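To illustrate the loop problem mentioned above, a small sketch (the names are made up):

// Appending inside a loop forces repeated DOM work; build the string once instead
var html = '';
for (var i = 0; i < items.length; i++) {   // 'items' is a hypothetical array
    html += '<li>' + items[i] + '</li>';
}
$('#list').append(html);                   // one insertion instead of items.length insertions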
My script adds some annotations to each page on a site, and it needs a few MBs of static JSON data to know what kind of annotations to put where.
Right now I'm including it with just var data = { ... } as part of the script but that's really awkward to maintain and edit.
Are there any better ways to do it?
I can only think of two choices:
Keep it embedded in your script, but to keep it maintainable (a few megabytes means your editor might not like it much), put it in another file and add a compilation step to your workflow that concatenates them. Since you are adding a compilation step, you can also uglify your script, so it might be slightly faster to download the first time.
Get it dynamically using JSONP. Put it on your web server, Amazon S3 or, even better, a CDN. Make sure it is cacheable and gzipped so it won't slow the client down by being downloaded on every page! This solution works better if you want to update your data regularly, but not your script (I think Tampermonkey doesn't support auto-updates).
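A minimal JSONP sketch, assuming a hypothetical CDN file that wraps the data in a callback; in a userscript sandbox you may need GM_xmlhttpRequest instead:

// The script defines the callback, then injects the JSONP script tag
function handleAnnotations(data) {
    // ... place the annotations ...
}
var s = document.createElement('script');
s.src = 'https://cdn.example.com/annotations.js';   // hypothetical; the file contains: handleAnnotations({...});
document.head.appendChild(s);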
My bet would definitely be to use the special storage functions provided by Tampermonkey: GM_getValue, GM_setValue, GM_deleteValue. You can store your objects there as long as needed.
Just download the data from your server once on the first run. If it's just for your own use, you can even paste all the data into a variable from the console, or use a temporary textarea, and have the script save that value with GM_setValue.
This way you can even optimize the speed of your script by having unrelated objects stored in different GM variables.
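A sketch of that first-run pattern, assuming @grant for GM_getValue, GM_setValue and GM_xmlhttpRequest; the URL and the annotate() helper are made up:

var data = GM_getValue('annotationData', null);
if (data === null) {
    // First run: fetch once, then keep the object in Tampermonkey storage
    GM_xmlhttpRequest({
        method: 'GET',
        url: 'https://example.com/annotations.json',   // hypothetical endpoint
        onload: function (response) {
            data = JSON.parse(response.responseText);
            GM_setValue('annotationData', data);
            annotate(data);                            // hypothetical: do the real work
        }
    });
} else {
    annotate(data);
}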
I've been trying to get this sorted all day, but really can't figure it out. I've got a page with <div id="ajax"></div> that is populated from a tab-based menu, pulling in a snippet of HTML code stored in an external file that contains some JavaScript (mainly form validation).
I've seen in places that I need to eval() the code, but on the other side of things people say it's the last thing you should do.
Can someone point me in the right direction, and provide an example if possible as I am very new to jQuery / JavaScript.
Many thanks :)
pulling in a snippet of HTML code stored in an external file that contains some javascript (mainly form validation).
Avoid doing this. Writing <script> to innerHTML doesn't cause the script to get executed... though moving the element afterwards can cause it to get executed, at different times in different browsers.
So it's inconsistent in practice, and it doesn't really make any sense to include script anyway:
when you load the same snippet twice, you'd be running the same script twice, which might redefine some of the functions or variables bound to the page, which can leave you in extremely strange and hard-to-debug situations
non-async/defer scripts are expecting to run at parse time, and may include techniques which can't work when inserted into an existing document (in the case of document.write this typically destroys the whole page, a common symptom when trying to load third-party ad/tracking scripts).
Yes, jQuery tries to make some of this more consistent between browsers by extracting script elements and executing them. But no, it doesn't manage to fix all cases (and in principle can't). So don't ask it to. Keep your scripts static, and run any binding script code that needs to happen in the load callback.
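For instance, a sketch of that approach, with the validation kept in the page's static script (the selectors and the snippet URLs are illustrative):

// Static page script: fetch script-free HTML, bind behavior in the load callback
function bindValidation($container) {
    $container.find('form').on('submit', function (e) {
        // hypothetical check: block submit if any required field is empty
        var empty = $(this).find('input.required').filter(function () {
            return !this.value;
        });
        if (empty.length) { e.preventDefault(); }
    });
}

$('#tabs a').on('click', function (e) {
    e.preventDefault();
    $('#ajax').load($(this).attr('href'), function () {
        bindValidation($('#ajax'));
    });
});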
If you fetch html via Ajax, and that html has a <script> tag in it, and you write that html into your document via something like $('#foo').append(html), then the JS should run immediately without any hassle whatsoever.
jQuery automatically processes script tags received in an ajax request when adding the content to your page. If you are having a particular problem, post some code.
See this link
dataType: "html" - Returns HTML as
plain text; included script tags are
evaluated when inserted in the DOM.
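For example (the snippet path is illustrative):

$.ajax({
    url: 'snippets/form-tab.html',   // hypothetical snippet containing a <script> tag
    dataType: 'html',
    success: function (html) {
        // jQuery evaluates the snippet's script tags as it inserts the content
        $('#ajax').html(html);
    }
});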
I have a web application where the master page/template contains some static HTML that never changes but is sent with every request. (A lot of this HTML is hidden elements that are shown after the user does something.)
I'm wondering if there is some way of caching it?
I was considering putting the HTML inside a JavaScript variable and doing a document.write or a jQuery $(tag).html(cachedHTML); to get that content. The benefit here is that the JavaScript file will be cached by the browser and that HTML won't be sent along with every request (speeding up page load and decreasing bandwidth).
Is there a more elegant solution? And if I do go this route, is there an easy way to convert all the HTML to be inside a javascript string without going through the HTML and formatting it? (remove spaces, escape double quotes, etc...) Thoughts?
Thanks!
Update: Here is the YSlow info... does this page seem too large? (There are 3597 DOM elements)
Some notes: in terms of the JS files, there are three main ones (jquery, jquery-ui, and my global minified JS); the rest are generated by asp.net or things like getsatisfaction.
I may be wrong, but to me, it sounds well-intentioned but unnecessary. If your server is configured correctly, the HTML output will be gzipped. If we're not talking about megabytes of HTML, every image on your page will take more bandwidth than the document's markup.
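For example, on Apache with mod_deflate this can be as simple as the directive below (a sketch; check your own server setup):

AddOutputFilterByType DEFLATE text/html text/css application/javascript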
In my experience, the greater worry with really huge chunks of HTML data is how the browser handles them. A 2-3 MB HTML document can take up a hundred times as much memory once it is finally rendered. If that's the case in your scenario, you may have a design problem at hand that even caching wouldn't solve.
What is the general developer opinion on including JavaScript code inline in the page instead of in an included script file?
So we all agree that jQuery needs to be included with a script file, like below:
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.3/jquery.min.js"
type="text/javascript"></script>
My question is about functions needed on some pages but not on all pages of a site: do we include functions like the one below in the page itself, or in a global include file like the one above, called mysite.js?
$(document).ready(function(){
$(".clickme").click(function(event){
alert("Thanks for visiting!");
});
});
OK. So the question is: if the code above is going to apply to every class="clickme" element on specific pages, and you have the ability to include it either from a separate include file called mysite.js or in the content of the page, which way would you go?
Arguments are:
If you include it on the page, it will only be loaded on the specific pages where the JS functionality is needed.
Or you include it as a file, which the browser caches, but then jQuery has to spend x ms to discover that the function has nothing to trigger on a page without a "clickme" class in it.
EDIT 1:
OK. One point that I want to make sure people address: what is the effect of having the document-ready function reference things that do not exist in the page? Will that cause any kind of delay in the browser, and is the impact significant?
First of all - $("#clickme") will find the id="clickme" not class="clickme". You'd want $(".clickme") if you were looking for classes.
I (try to) never put any actual JavaScript code inside my XHTML documents, unless I'm quickly testing something on a page. I always link to an external JS file to load the functionality I want. Browsers without JS (like web crawlers) will not load these files, and it makes your code look much cleaner in "view source".
If I need a bit of functionality only on one page - it sometimes gets its own include file. It all depends on how much functionality / slow selectors it uses. Just because you put your JS in an external JS file doesn't mean you need to include it on every page.
The main reason I use this practice - if I need to change some JavaScript code, it will all be in the same place, and change site wide.
As far as the question about performance goes: some selectors take a lot of time, but most of them (especially those that deal with IDs) are very quick. Searching for a selector that doesn't exist is a waste of time, but when you put that up against the wasted time of a second script HTTP request (which blocks the DOM from being ready, by the way), searching for an empty selector will generally win as the lesser of the two evils. jQuery 1.3 Performance Notes and SlickSpeed will hopefully help you decide how many ms you really are losing to searching for a class.
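To make the empty-selector case explicit, here is a sketch; the early return is optional, since binding handlers on an empty set is already a no-op:

$(document).ready(function () {
    var $targets = $('.clickme');
    if (!$targets.length) { return; }   // the class lookup itself is the only cost here
    $targets.click(function () {
        alert('Thanks for visiting!');
    });
});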
I tend to use an external file so if a change is needed it is done in one place for all pages, rather than x changes on x pages.
Also if you leave the project and someone else has to take over, it can be a massive pain to dig around the project trying to find some inline js.
My personal preference is
completely global functions, plugins and utilities - in a separate JavaScript file and referenced in each page (much like the jQuery file)
specific page functionality - in a separate JavaScript file and only referenced in the page it is needed for
Remember that you can also minify and gzip the files too.
I'm a firm believer in Unobtrusive JavaScript and therefore try to avoid having any JavaScript code in with the markup, even if the JavaScript is in its own script block.
I agree: never have code in your HTML page. In ASP.NET I have programmatically added a check for each page to see if it has a same-named JavaScript file.
Eg. MyPage.aspx will look for a MyPage.aspx.js
For my MVC master page I have this code to add a javascript link:
// Add each page's javascript file
if (Page.ViewContext.View is WebFormView)
{
    WebFormView view = Page.ViewContext.View as WebFormView;
    string shortUrl = view.ViewPath + ".js";
    if (File.Exists(Server.MapPath(shortUrl)))
    {
        _clientScriptIncludes["PageJavascript"] = Page.ResolveUrl(shortUrl);
    }
}
This works well because:
It is automagically included in my files
The .js file lives alongside the page itself
Sorry if this doesn't apply to your language/coding style.