Global JS file: Execute JSON call only once - javascript

I believe that I've made a terrible mistake. I have a global JS file that I include via script tag on every single JSP page.
<script type="text/javascript" src="/myGlobalJS.js" ></script>
In that JS file I make a call to get JSON, but I only want it executed once. Right now it executes every time a new JSP page loads.
var myObj = (function () {
    var myData = {};
    return {
        setData: function (data) {
            myData = data.response;
        }
    };
})();

(function () {
    $.getJSON(jsonUrl)
        .done(function (data) {
            myObj.setData(data.response.data);
        });
})();
How can I correct this elegantly?
Update: I can see the error in my thinking now. I was treating these JSP pages as a single application, where the JSON would be fetched only once when the app loads. But there is no persisted UI state across these JSP pages, so I think I'll have to load the file and make the call on every page that needs it, unfortunately. Is this true?

I've found the answer here:
Global vars across pages
HTML5 supports sessionStorage, which will store variables for as long as the session is in use. The only shortcoming is that it stores strings, so objects have to be serialized (e.g. with JSON.stringify/JSON.parse). But it's definitely a nifty solution.
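A minimal sketch of that approach, assuming the jsonUrl variable and the myObj module from the question, and that setData accepts the parsed payload directly; the "myGlobalData" key is an invented name:
(function () {
    // Hypothetical: fetch the JSON once per browser session and reuse it
    // on every subsequent JSP page load.
    var cached = sessionStorage.getItem("myGlobalData");
    if (cached) {
        // Already fetched during this session; reuse the stored copy.
        myObj.setData(JSON.parse(cached));
    } else {
        $.getJSON(jsonUrl).done(function (data) {
            myObj.setData(data);
            // sessionStorage only holds strings, so serialize the object.
            sessionStorage.setItem("myGlobalData", JSON.stringify(data));
        });
    }
})();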

Best practice for script caching with Javascript

Using:
C# MVC5 and jQuery
I have a filter screen that potentially uses multiple different filters. Based on what the user selects I make a call to the server and I load a partial view into a bootstrap modal as follows:
$.ajax({
    url: filterUrl,
    contentType: 'application/html',
    success: function (filterContent) {
        $("#divReportFilterModalBody").html(filterContent);
        LoadFilterScript(SCOPESTRINGS[currentReport.Scope]);
    },....
The next step is to load the necessary JavaScript for that filter page, because you can't have scripts on a partial view. For this I also request the script from the server as follows:
$.getScript(scopeString + "FilterJavaScript",
function () {
The MVC controller:
[OutputCache(NoStore = true, Duration = 0, VaryByParam = "*")]
public ActionResult ScopeFilterJavaScript()
{
    return File(
        System.IO.File.ReadAllBytes(Server.MapPath("~/Scripts/.../filterPartial.js")),
        "text/javascript");
}
Because the user can only use one filter at a time, and may or may not use several of them during a session, my questions are:
The scripts aren't big, so is it better practice to load them all upfront rather than fetch them as required? The reason I load them as required is that they might not get called, and I didn't want to load a bunch of scripts that will never be used.
Is not caching them a good idea? The user can use the same filter multiple times, and in my current setup the script gets loaded each time. Or should I rather cache the script and figure out a way not to load it again?
I'm also not 100% clear on script caching. What happens to the script in this case after it was loaded? If I make another call to the server I can see that it gets loaded again; were the previous scripts removed? When I look at the Script tab in Firebug they are all still listed there. Will this cause conflicts on the page?
What would best practice be in this scenario?
Thanks
Edit: I've been researching the topic a bit further and found an article on the subject (old, but still very relevant in my opinion).
It's always a good idea to only load stuff if you actually need it. When the files aren't that huge, maybe you can combine them and include them up front.
OR should I rather cache the script and figure out a way not to load it again?
yup.
When you load a script (without any cache-busting query string) the browser caches it. But that has nothing to do with what happens when you load the script again: either the server delivers it again or the browser uses the cached copy. Either way, the script executes again. Even if you remove it from the DOM, once-loaded scripts are just there.
Maybe you can wrap your scripts like so:
if (!window.foobarLoaded) {
    // your script content
    window.foobarLoaded = true;
}
Then you can load the script as many times as you like - it only "executes" once.
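As a concrete, hedged sketch of combining both points with jQuery: by default $.getScript disables caching (it appends a cache-busting timestamp), so this loader uses $.ajax with cache: true and also remembers which URLs it has already executed on the current page. The loadFilterScriptOnce name and the usage line are invented for illustration.
// Hypothetical loader: fetch each filter script at most once per page,
// and let the browser cache the response across pages (cache: true).
var loadedFilterScripts = {};

function loadFilterScriptOnce(url) {
    if (loadedFilterScripts[url]) {
        // Already executed on this page; resolve immediately.
        return $.Deferred().resolve().promise();
    }
    return $.ajax({
        url: url,
        dataType: "script",
        cache: true // don't append the cache-busting timestamp
    }).done(function () {
        loadedFilterScripts[url] = true;
    });
}

// Usage inside the modal success handler (sketch):
// loadFilterScriptOnce(scopeString + "FilterJavaScript").done(function () { ... });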

Caching Javascript inlined in HTML

Instead of having an external .js file, we can inline Javascript directly in HTML, i.e.
Externalized version
<html>
<body>
<script type="text/javascript" src="/app.js"></script>
</body>
</html>
Inlined version
<html>
<body>
<script type="text/javascript">
// app.js inlined
</script>
</body>
</html>
However, it's not recommended:
https://developer.yahoo.com/performance/rules.html#external
Put javascript and css inline in a single minified html file to improve performance?
The main reason is caching and pre-compiling - in the externalized version, the browser can download, pre-compile and store the file once for multiple pages, while it cannot do the same for the inlined version.
However, is it possible to do something along these lines:
Inlined keyed version
<html>
<body>
<script type="text/javascript" hash="abc">
// app.js inlined
</script>
</body>
</html>
That is, do this:
In the first invocation, send the whole script and somehow tell the browser that the script hash is abc
Later, when the browser loads that or other pages containing the same script, it will send this key as a cookie. The server will only render the contents of the script if the key has been received.
That is, if the browser already knows about the script, the server will render just this:
Inlined keyed version, subsequent fetches (of the same or other pages)
<html>
<body>
<script type="text/javascript" hash="abc">
</script>
</body>
</html>
where notably the script contents are empty.
This would allow for shorter script fetching with a natural fallback.
Is the above possible? If not, is some other alternative to the above possible?
I don't know of a way to do what you asked, so I'll provide an alternative that might still suit your needs.
If you're really after a low latency first page load, you could inline the script, and then after the page loads, load the script via url so that it's in the browser cache for future requests. Set a cookie once you've loaded the script by direct url, so that your server can determine whether to inline the script or provide the external script url.
first page load
<script>
// inlined my-script.js goes here.
</script>
<script>
$(function () {
    // load it again, so it's in the browser cache.
    // notice I'm not executing the script, just loading it
    // (dataType "text" keeps jQuery from evaluating the response).
    $.ajax("my-script.js", { dataType: "text" }).then(function () {
        // set a cookie marking this script as cached
    });
});
</script>
second page load
<script src="my-script.js"></script>
Obviously, this has the drawback that it loads the script twice. It also adds complexity for you to take care of when you update your script with new code: you need to make sure you handle the cookie pointing at an old version.
I wouldn't bother with all this unless you really feel the need to optimize the first page. It might be worth it in your case.
The Concept
Here's an interesting approach (after being bugged by notifications :P)
You could have the server render your script this way. Notice the weird type attribute. That's to prevent the script from executing. We'll get to that in a second.
<script type="text/cacheable" data-hash="9182n30912830192c83012983xm019283x">
//inline script
</script>
Then create a library that looks for these scripts with the weird type, grabs their innerHTML, and executes them in the global context as if they were normal scripts (via eval or new Function). This makes them execute like normal scripts. Here's a demo:
<script type="text/cacheable" data-hash="9182n30912830192c83012983xm019283x">
alert(a);
</script>
<script type="text/cacheable" data-hash="9182n30912830192c83012983xm019283x">
alert(b);
</script>
<script>
// Let's say we have a global
var a = "foo";
var b = "bar"
// Getting the source
var scripts = Array.prototype.slice.call(
document.querySelectorAll('script[type="text/cacheable"]')
);
scripts.forEach(function(script){
// Grabbing
var source = script.innerHTML;
// Create a function (mind security on this one)
var fn = new Function(source);
// Execute in the global scope
fn.call(window);
});
</script>
However...
Since you have the script source (the innerHTML), you can cache it somewhere locally (like in localStorage) and use the hash as its identifier. Then you can store the same hash in a cookie, so future page requests can tell the server "Hey, I have the cached script with [hash]. Don't print the script on the page anymore." Then you'll get this in future requests:
<script type="text/cacheable" data-hash="9182n30912830192c83012983xm019283x"></script>
That covers the first half. The second phase is the empty script: when your library sees one, it should look up the script with that hash in your local storage, get its source, and execute it just like you did in the first place.
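A rough, hedged sketch of such a library under the assumptions above (the data-hash attribute, a localStorage cache keyed by hash, execution via new Function); the cookie handling is deliberately simplified to a single hash, and the AJAX fallback for an out-of-sync cache is left as a comment:
// Hypothetical client-side loader for <script type="text/cacheable" data-hash="...">.
var cacheableScripts = Array.prototype.slice.call(
    document.querySelectorAll('script[type="text/cacheable"]')
);

cacheableScripts.forEach(function (script) {
    var hash = script.getAttribute("data-hash");
    var source = script.innerHTML.trim();

    if (source) {
        // Full script was inlined: cache it by hash for future pages.
        localStorage.setItem("script-" + hash, source);
        // A cookie tells the server it can omit this script next time.
        document.cookie = "cachedScripts=" + hash + "; path=/";
    } else {
        // Empty script: the server skipped the body, so read it from the cache.
        // (If the cache was cleared, an AJAX request for the source goes here.)
        source = localStorage.getItem("script-" + hash) || "";
    }

    if (source) {
        // Execute in the global scope, as in the demo above.
        new Function(source).call(window);
    }
});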
The Catch
Now there's always a trade-off in everything, and I'll highlight what I can think of here:
Pros
You only need one request for everything. Initial pageload contains scripts, subsequent pages become lighter because of the missing code, which is already cached by then.
Instant cache busting. Assuming the hash and code are 1:1, then changing the content should change the hash.
Cons
This assumes that pages are dynamic and are never cached. That's because if you happen to create a new script with a new hash, but the client has cached the page, then it will still be using the old hashes and thus the old scripts.
Initial page load will be heavy due to inlined scripts. But this can be overcome by compressing the source using a minifier on the server. Overhead of minification can also be overcome by caching minified results on the server.
Security. You'll be using eval or new Function. This poses a big threat when unauthorized code manages to sneak in. In addition, the threat is persistent because of the caching.
Out of sync pages. What happens if you get an empty script, whose hash is not in the cache? Perhaps the user deleted local storage? You'll have to issue a request to the server for it. Since you want the source, you'll have to have AJAX.
Scripts are not "normal". Your script is best put at the end of the page so that all inline scripts will have been parsed by then. This means your scripts execute late, not at the moment the browser parses them.
Storage limits. localStorage has a size limit of 5-10MB, depending on which browser we're talking about. Cookies are limited to 4KB generally.
Request size. Note that cookies are shipped up to the server on every request and down to the browser on every response. That additional payload might be more of a hassle than it's worth.
Added server-side logic. Because you need to know what needs to be added, you need to program your server to do it. This makes the client-side implementation dependent on the server. Switching servers (say from PHP to Python) wouldn't be as easy, as you need to port over the implementation.
If your <script> is not introduced as type=text/javascript, it will simply not be executed.
So you could have many tags like these:
<script type="text/hashedjavascript" hash="abc">...</script>
<script type="text/hashedjavascript" hash="efg">...</script>
Then when the DOM is loaded, pick one and evaluate it.
I made an example here: http://codepen.io/anon/pen/RNGQEM
But it smells, real bad. It's definitely better to fetch two different files.
Actually, what you should do is have a single file, my-scripts.js, that contains the code for each of your scripts, each wrapped in a function:
// file: my-scripts.js
function script_abc() {
    // what script abc is supposed to do
}

function script_efg() {
    // what script efg is supposed to do
}
Then execute whatever your cookie tells you to. This is how AMD builders concatenate multiple files into one.
Also look at an AMD library such as RequireJS.
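To make that last step concrete, a hedged sketch; the scripts cookie and its comma-separated format, the getCookie helper, and the suffixes are all invented:
// Hypothetical: read a "scripts" cookie such as "scripts=abc,efg"
// and call the matching wrapper functions from my-scripts.js.
function getCookie(name) {
    var match = document.cookie.match(new RegExp('(?:^|; )' + name + '=([^;]*)'));
    return match ? decodeURIComponent(match[1]) : "";
}

getCookie("scripts").split(",").forEach(function (suffix) {
    var fn = window["script_" + suffix]; // e.g. script_abc, script_efg
    if (typeof fn === "function") {
        fn();
    }
});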
Edit: I misunderstood your question, removed the irrelevant part.

Call a function in one Javascript file from another Javascript file?

I need to call a function in an external ".js" file from another ".js" file, without referencing the external file in the <head> tag.
I know that it is possible to dynamically add an external ".js" file to the <head>, which allows access to that file. I can do that like so...
var AppFile = "test/testApp_1.js";
var NewScript=document.createElement('script');
var headID = document.getElementsByTagName("head")[0];
NewScript.src = AppFile;
headID.appendChild(NewScript);
However...
this is no use to me, as the external files need to be stand-alone files that run start-up procedures inside...
$(document).ready(function()
{...}
so adding the full file dynamically has an unwanted effect. Also, I cannot pre-reference the external file in the <head> tag, as it needs to be dynamic.
So, this external file "test/testApp_1.js" contains a function that returns a string variable...
function setAppLogo() {
    var LogoFile = "test/TestApp_1_Logo.png";
    return LogoFile;
}
I need access to either this function, or I could store the string as a global var in the external file... either way is fine, I just need access to the value in LogoFile without loading the whole external file.
This one has had me stumped for a few hours now so any ideas would be greatly appreciated.
You might benefit from having some sort of app.js file that contains global variables/values that you will want to use from lots of places. You should include this .js file on every page (and maybe minify it/concatenate it with other js if you want to be clever and improve performance). Generally these globals should be attached to some object you create such as var APPNAME = { }; with variables/functions on it that will be used from many places.
Once you have this, then the external '.js' file that you want to load, and the one you are currently in, can both access the global APPNAME variable and all its attributes/functions and use them as desired. This may be a better approach for making your javascript more modular and separatable. Hope this helps.
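For example, a minimal hedged sketch of that pattern; APPNAME, its config object, and reusing the setAppLogo idea here are just placeholders:
// app.js - hypothetical shared globals, included on every page
var APPNAME = APPNAME || {};

APPNAME.config = {
    // value that would otherwise live only inside test/testApp_1.js
    appLogo: "test/TestApp_1_Logo.png"
};

APPNAME.setAppLogo = function () {
    return APPNAME.config.appLogo;
};

// any other file loaded later sees the same global namespace:
// console.log(APPNAME.setAppLogo()); // "test/TestApp_1_Logo.png"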
You want to load the file via Ajax once jQuery has loaded, and then run the related code in the success callback.
See jQuery's getScript function: http://api.jquery.com/jQuery.getScript/
$(document).ready(function () {
    $.getScript("http://domain.com/ajax/test.js", function (data, textStatus, jqxhr) {
        console.log(data);          // data returned
        console.log(textStatus);    // success
        console.log(jqxhr.status);  // 200
        console.log('Load was performed.');
        // run your second script executable code here
    });
});
It is possible to load the whole script through XHR (e.g. $.get in jQuery) and then parse it, perhaps using a regular expression, to extract the needed part:
$.get('pathtoscript.js', function (scriptBody) {
    var regex = /function\s+setAppLogo\(\)\s*\{[^}]+}/g;
    alert(scriptBody.match(regex)[0]); // supposed to output a function called
                                       // 'setAppLogo' from the script, if the
                                       // function does not have {} blocks inside
});
Nevertheless, it should be noted that such an approach is very likely to cause maintenance problems. Regular expressions are not the best tool for parsing JavaScript code; the example above, for instance, will not parse functions with nested {} blocks, which may well exist in the code in question.
It might be recommended to find a server-side solution to the problem, e.g. adding necessary script path or its part before the page is sent to browser.
I'm not sure this is a good idea, but you can create an iframe and eval the file inside its window object to avoid most of the undesired side effects (assuming it does not try to access its parent). Then you can access whatever function/variable you want via the iframe's window object.
Example:
function loadSomeJsInAFrame(url, cb) {
    jQuery.get(url, function (res) {
        var iframe = jQuery('<iframe></iframe>').hide().appendTo(document.body);
        iframe[0].contentWindow.eval(res);
        if (cb) cb(iframe[0].contentWindow);
    }, 'text');
}

loadSomeJsInAFrame('test/testApp_1.js', function (frameWindow) {
    console.log(frameWindow.setAppLogo());
    jQuery(frameWindow.frameElement).remove();
});
This will not guarantee that the script in the file cannot mess with your document, but that is unlikely if it comes from a trusted source.
Also, don't forget to remove your iframe after you get what you need from it.
OK, thanks everybody for all the input, but I think that what I was trying to do is currently not possible, i.e. accessing a function from another file without loading that file.
I have however found a solution to my problem. I now query my server for a list of apps that are available, then use this list to dynamically build the apps in a UI. When an app is selected I can then call that file and the functions within. It's a bit more complex, but it's dynamic, it performs well, and it works. Thanks again for the brainstorming! ;)
It may be possible with the help of Web Workers. You would be able to run the script you want to inject in a somewhat isolated environment, so it won't mess up your current page.
As you said, it is possible for setAppLogo to be global within "test/testApp_1.js", so I will rely on this statement.
In your original script you should create a worker, which references a worker script file, and listen to the messages that come from the worker:
var worker = new Worker('worker.js');

worker.onmessage = function (e) {
    // ....
};
Then, in the worker (worker.js), you can use the special function importScripts (docs), which loads external scripts into the worker; the worker can also see the global variables of these scripts. There is also a postMessage function available in the worker to send custom messages back to the original script, which in turn is listening for these messages (worker.onmessage). Code for worker.js:
importScripts('test/testApp_1.js');
// "setAppLogo" is now available to worker as it is global in 'test/testApp_1.js'
// use Worker API to send message back to original script
postMessage(setAppLogo());
When it is invoked you'll get the result of setAppLogo in your listener:
worker.onmessage = function (e) {
    console.log(e.data); // "test/TestApp_1_Logo.png"
};
This example is very basic though, you should read more about Web Workers API and possible pitfalls.

Need to understand how tracking codes may be working in web content

I was recently approached by a web partner and they asked me to add their 'tracking code' to my site as shown below. The data and address would be different, but the structure is the same as below. Currently they load our site in an IFrame and what I can't understand is...
How could the script portion provide any value to them? Can a parent page read the JavaScript state of something in a hosted IFrame? Google uses a similar pattern, but they set the src to a script that is executed when the page loads and could read the state.
Can anyone explain how this might be working or is this just useless page spam?
<img src="https://www.APartnerCompany.foo/thing.img?arg=value" />
<script type="text/javascript">
//<![CDATA[
var Foo = {};
Foo.Tracking = {};
Foo.Tracking.Sale = {};
Foo.Tracking.Sale.amount = '100.00';
//]]>
</script>
An inline script can also read values from the page it is contained in. To post them back to their own server, they seem to use the src attribute of that affiliate image. However, the pieces you provided are harmless: the code does nothing other than construct an object, and the only request is for a non-executable image from https://www.APartnerCompany.foo/thing.img?arg=value (which gets logged on their end).
When the request for that src is sent, the server just logs the variables that accompany it; the JS code only fills the arguments with relevant data.
All it does is send information to a server through a GET request, which is then logged. It can be as simple as reading the Apache logs for hits, or it can be a CGI script that processes the data. That is basically how all those ad/logging services work.
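For illustration only, a hedged sketch of how a fuller version of such a snippet typically ships the collected values back; the extra query parameters are invented and are not the partner's actual code:
// Hypothetical: collect values on the page, then request a 1x1 image
// whose query string carries them to the partner's server log.
var Foo = {};
Foo.Tracking = {};
Foo.Tracking.Sale = {};
Foo.Tracking.Sale.amount = '100.00';

var pixel = new Image();
pixel.src = 'https://www.APartnerCompany.foo/thing.img' +
    '?arg=value' +
    '&amount=' + encodeURIComponent(Foo.Tracking.Sale.amount) +
    '&page=' + encodeURIComponent(location.href);
// No response handling is needed; the GET request itself is the "report".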

Best practices managing JavaScript on a single-page app

With a single page app, where I change the hash and load and change only the content of the page, I'm trying to decide on how to manage the JavaScript that each "page" might need.
I've already got a History module monitoring the location hash which could look like domain.com/#/company/about, and a Page class that will use XHR to get the content and insert it into the content area.
function onHashChange(hash) {
    var skipCache = false;

    if (hash in noCacheList) {
        skipCache = true;
    }

    new Page(hash, skipCache).insert();
}

// Page.js
var _pageCache = {};

function Page(url, skipCache) {
    if (!skipCache && (url in _pageCache)) {
        return _pageCache[url];
    }

    this.url = url;
    this.load();
}
The cache should let pages that have already been loaded skip the XHR. I'm also storing the content in a documentFragment, and pulling the current content out of the document when I insert the new Page, so the browser will only have to build the DOM for the fragment once.
Skipping the cache could be desired if the page has time sensitive data.
Here's what I need help deciding on: It's very likely that any of the pages that get loaded will have some of their own JavaScript to control the page. Like if the page will use Tabs, needs a slide show, has some sort of animation, has an ajax form, or what-have-you.
What exactly is the best way to go about loading that JavaScript into the page? Include the script tags in the documentFragment I get back from the XHR? What if I need to skip the cache and re-download the fragment? I feel that the exact same JavaScript being executed a second time might cause conflicts, like redeclaring the same variables.
Would the better way be to attach the scripts to the head when grabbing the new Page? That would require the original page to know all the assets that every other page might need.
And besides knowing the best way to include everything, won't I need to worry about memory management, and possible leaks of loading so many different JavaScript bits into a single page instance?
If I understand the case correctly, you are trying to take a site that currently has pages already made for normal navigation, and you want to pull them down via ajax to save yourself the page reload?
Then, when this happens, you want to avoid re-running the script tags for those pages if they're already loaded on the page?
If that is the case, you could try to grab all the script tags from the response before inserting the new HTML into the DOM:
//first set up a cache of urls you already have loaded.
var loadedScripts = [];

//after user has triggered the ajax call, and you've received the text-response
function clearLoadedScripts(response) {
    var womb = document.createElement('div');
    womb.innerHTML = response;

    var scripts = womb.getElementsByTagName('script');
    var script, i = scripts.length;

    while (i--) {
        script = scripts[i];
        if (loadedScripts.indexOf(script.src) !== -1) {
            script.parentNode.removeChild(script);
        } else {
            loadedScripts.push(script.src);
        }
    }

    //then do whatever you want with the contents.. something like:
    document.body.innerHTML = womb.getElementsByTagName('body')[0].innerHTML;
}
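For completeness, a hedged usage example, assuming the page fragments are fetched with jQuery; the URL is just a placeholder:
// Hypothetical caller: fetch the next page, strip already-loaded scripts,
// then let clearLoadedScripts insert the remaining contents.
$.get('/company/about', function (response) {
    clearLoadedScripts(response);
});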
Oh boy are you in luck. I just did all of this research for my own project.
1: The hash event / manager you should be using is Ben Alman's BBQ:
http://benalman.com/projects/jquery-bbq-plugin/
2: To make search engines love you, you need to follow this very clear set of rules:
http://code.google.com/web/ajaxcrawling/docs/specification.html
I found this late in the game and had to scrap a lot of my code. It sounds like you're going to have to scrap some too, but you'll get a lot more out of it as a consequence.
Good luck!
I have never built such a site, so I don't know if this is best practice, but I would put some sort of control information (like a comment or an HTTP header) in the response, and let the loader script handle redundancy/dependency checking and adding the script tags to the head.
Do you have control over those pages being loaded? If not, I would recommend inserting the loaded page in an IFrame.
Taking the page scripts out of their context and inserting them in the head or adding them to another HTML element may cause problems unless you know exactly how the page is build.
If you have full control of the pages being loaded, I would recommend that you convert all your HTML to JS. It may sound strange, but actually an HTML-to-JS converter is not that far away. You could start off with Pure JavaScript HTML Parser and then let the parser output JS code that builds the DOM, using jQuery for example.
I was actually about to go down that road a while ago on a webapp I started working on, but I handed it over to a contractor who converted all my pure JS pages into HTML+jQuery. Whatever makes his daily work productive, I don't care, but I was really into that pure JS webapp approach and will definitely try it.
To me it sounds like you are creating a single-page app from the start (i.e. not re-factoring an existing site).
Several options I can think of:
Let the server control which script tags are included. Pass a list of already-loaded script tags with the XHR request, and have the server sort out which additional scripts need to be loaded.
Load all scripts beforehand (perhaps add them to the DOM after the page has loaded to save time) and then forget about it. For scripts that need to initialize UI, just have each requested page include a script tag that calls a global init function with the page name.
Have each requested page call a JS function that deals with loading/caching scripts. This function would be accessible from the global scope and would look like this: require_scripts('page_1_init', 'form_code', 'login_code'). The function would keep a list of loaded scripts and only append DOM script tags for scripts that haven't been loaded yet (see the sketch after this list).
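A hedged sketch of that third option; require_scripts, the script names, and the /scripts/ URL convention are invented for illustration:
// Hypothetical global script loader: appends a <script> tag per name,
// skipping anything that has already been requested on this page.
var _loadedScripts = {};

function require_scripts(/* 'page_1_init', 'form_code', ... */) {
    Array.prototype.forEach.call(arguments, function (name) {
        if (_loadedScripts[name]) {
            return; // already requested, skip it
        }
        _loadedScripts[name] = true;

        var tag = document.createElement('script');
        tag.src = '/scripts/' + name + '.js'; // assumed URL convention
        document.getElementsByTagName('head')[0].appendChild(tag);
    });
}

// Usage from a freshly inserted page fragment:
// require_scripts('page_1_init', 'form_code', 'login_code');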
You could use a script loader like YUI Loader, LAB.js, or others like jaf.
Jaf provides you with a mechanism to load views (HTML snippets) and their respective JS and CSS files to create single-page apps. Check out the sample todo list app. Although it's not complete, there are still a lot of useful libraries you can use.
Personally, I would transmit JSON instead of raw HTML:
{
    "title": "About",
    "requires": ["navigation", "maps"],
    "content": "<div id=…"
}
This lets you send metadata, like an array of required scripts, along with the content. You'd then use a script loader, like one of the ones mentioned above, or your own, to check which ones are already loaded and pull down the ones that aren't (inserting them into the <head>) before rendering the page.
Instead of including scripts inline for page-specific logic, I'd use pre-determined classes, ids, and attributes on elements that need special handling. You can fire an "onrender" event or let each piece of logic register an on-render callback that your page loader will call after a page is rendered or loaded for the first time.
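A hedged sketch of that flow, assuming the JSON shape above, jQuery on the page, and an invented URL convention for the required scripts:
// Hypothetical page renderer for a payload like the JSON shown above.
var loadedScripts = {};

function loadScript(name) {
    var dfd = $.Deferred();
    if (loadedScripts[name]) {
        return dfd.resolve().promise(); // already on the page
    }
    var tag = document.createElement('script');
    tag.src = '/js/' + name + '.js'; // assumed URL convention
    tag.onload = function () {
        loadedScripts[name] = true;
        dfd.resolve();
    };
    document.getElementsByTagName('head')[0].appendChild(tag);
    return dfd.promise();
}

function renderPage(page) {
    // Pull down anything in "requires" that isn't loaded yet,
    // then insert the content and notify page-specific logic.
    var pending = $.map(page.requires || [], loadScript);
    $.when.apply($, pending).done(function () {
        $('#content').html(page.content);
        $(document).trigger('pagerender', page); // the "onrender" hook
    });
}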
