I have a page script, page.js, being loaded with require.js. The call to page.js is placed at the bottom of the page, after the call to require.js, and is as follows:
<script>
require(["page"]);
</script>
Functions inside page.js simply do not execute every time the page is accessed.
To be clear, an alert('hello'); in the middle of page.js fires most, but not all, of the time. I'm fairly sure this is not a known IE issue, and that a simple alert will always execute provided there are no other JS errors.
About 95% of the time the page and its corresponding functions execute; the other 5% of the time, IE does not re-execute the contents of page.js.
I don't think this is an inherent IE issue, but rather that require.js is stumbling over IE's aggressive caching.
Edits:
Just to clarify, the page.js file is visible in the F12 DOM view when the error occurs. The file is properly cached; the issue is that the cached code is not re-run!
For instance, the alert in this file is not executed!
I'm not sure about the internals of require.js, but I suppose it fetches the resources via XHR and evals them. It seems the XHR completes and the script loads into the DOM, but the eval isn't running correctly. (This is of course speculation, as I don't know the require.js internals well enough.)
The only way I know to prevent your JS files from being cached is to append a random string to the end of the URL.
Example:
<script src="http://www.mydomaine.com/myjsfile.js?t=123456"></script>
Generate the "t" parameter content randomly, using an MD5 hash or whatever you like; this makes browsers believe it's a different file each time.
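If the files are loaded through require.js itself, its urlArgs config option does the same thing; a minimal sketch (the timestamp scheme is just one choice):

require.config({
    // append a cache-busting query string to every module URL
    urlArgs: "bust=" + (new Date()).getTime()
});

Note that this defeats caching entirely, so it is best reserved for development.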
The problem may not be due to caching. Caching is controlled on the server side: if you do not want a file cached, have the server set the Cache-Control headers accordingly. Caching does not affect whether a JavaScript file is executed; it only affects where the browser gets the data from when resolving a given resource. Normally, you want .js files to be cached for performance reasons.
When using dynamic JavaScript source loaders (libraries like Dojo support this), it is best to wrap the file you load in the following:
(function(){
    // Main code here...
})();
This defines an anonymous function, and then executes it right away. This gives the following advantages:
Creates a closure so you can declare variables that are only visible in the scope of your file.
Ensures that any directly executable statements are executed.
Note, I'm not familiar with require.js, so it's possible it plays a role in your problem. Also, you did not provide the file you are loading via require; it may have a bug that is causing the inconsistency you are encountering.
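For require.js specifically, the AMD equivalent of the wrapper above is define(); a minimal sketch, using the module name from the question:

// page.js as an AMD module: the factory body runs once,
// when the module is first required
define(function () {
    alert('hello');
});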
Conclusion:
IE (where it was mostly occurring) was swallowing the error. When we were able to reproduce it in Chrome, we found an error indicating that one of our global functions wasn't loaded yet, because the globals file wasn't added to the require list. Unfortunately, we're not using require.js's compile + optimization step, which may or may not have barfed without an explicit listing of globals.js as a dependency.
I guess the take-home is: make sure any functions you call are themselves defined in a dependency explicitly listed in the require block!
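In code, that looks something like this (the module name globals is an assumption based on the globals.js file mentioned above):

// listing globals as a dependency guarantees its functions are
// loaded and executed before anything in the callback runs
require(["globals", "page"], function () {
    // safe to call functions defined in globals.js here
});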
Related
I am working with SystemJS, and I have a pseudo-bootstrapper file that I use to check that certain conditions are met before loading the main scripts on page load. Here is a snippet of that code.
var obj = document.createElement('script');
obj.src = 'jspm_packages/system.js';
document.body.appendChild(obj);
This code does NOT execute the script, yet it does load it with a 200 status code, as evidenced by the network tab in the IE dev tools. There should be a global object "System" created, but it does not exist. Looking through the DOM, the script element is properly created and appended to the body.
Does anyone know if this is strictly an issue with IE and SystemJS? I have no idea what's going on. I'm pulling my hair out, as per usual with the demon that is IE. I should note that every other browser works as expected, providing the "System" global variable.
EDIT: Further testing has confirmed that this is not an issue with appendChild, as other scripts loaded with the same method execute just fine.
Reading this article suggests that your script may not run in IE11. The line of particular interest is:
"Script elements with external resources should no longer execute during appendChild."
This appears to be what's happening.
EDIT: An alternative approach could be taken.
If you want to change page loading at the system.js level, it would be better to do the condition checks on the server side before sending the response. If that is not possible, I'd suggest doing a redirect after the condition checks instead of using appendChild.
The answer is that IE versions before Edge do not support promises. I needed a polyfill for IE 11.
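A sketch of that fix, assuming an es6-promise style polyfill file is available (the path is hypothetical):

// load a Promise polyfill first on browsers without native promises,
// then load system.js once the polyfill is in place
function loadSystemJs() {
    var obj = document.createElement('script');
    obj.src = 'jspm_packages/system.js';
    document.body.appendChild(obj);
}

if (typeof window.Promise === 'undefined') {
    var polyfill = document.createElement('script');
    polyfill.src = 'polyfills/es6-promise.min.js'; // hypothetical path
    polyfill.onload = loadSystemJs;
    document.body.appendChild(polyfill);
} else {
    loadSystemJs();
}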
What are the differences in performance and memory footprint, if any, between these approaches:
1. Using Src
<script type='text/javascript" src="1MBOfjavascript.js"></script>
2. Directly injecting in the Head
$('head').append("<script type='text/javascript'>1MBOfJavascriptCode</script>");
I'm interested because we are developing a Cordova app in which we use the second method to inject into the DOM a previously downloaded JavaScript bundle read from Local Storage.
Given that the script will probably grow in size, I'd like to know whether the second method could incur memory issues or other DOM problems.
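For reference, a minimal sketch of that second method, assuming the bundle is stored under a hypothetical key jsBundle:

// read the previously downloaded bundle and inject it as an inline
// script; inline scripts appended this way execute synchronously
var code = window.localStorage.getItem('jsBundle'); // hypothetical key
var tag = document.createElement('script');
tag.type = 'text/javascript';
tag.text = code;
document.head.appendChild(tag);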
I believe the overhead in such cases should be insignificant, as the main processing/memory consumption depends on what the script actually does. That is, the memory used by the file itself is just the total size of the script, at most 1MB here, whereas during execution the same script can easily use up 100MB.
Anyway, coming to the point:
Plain inclusion via src would be better if the script has to be included in all cases, as it skips an extra script execution and also does not cause the browser to re-render after the append.
The append option should be used when you only need the script under specific client-side conditions and do not want to load it unnecessarily.
The browser goes through all elements before rendering the page, so I would imagine the two are exactly the same. If anything, if you inject scripts after the page has loaded, chances are you will get "call to undefined function" errors when calling a function that was added after the load.
My advice would be to add them at load time using src, but keep things tidy.
JavaScript's impact depends a lot on how and where you load your file; there are many ways to load a file or code.
The first method you mentioned is the conventional way to load a JavaScript file: via src, traditionally before the closing </head> tag, although nowadays the trend is to load files just before the </body> tag instead, which speeds up application load and prepares the DOM faster.
The second method is obviously not a good first choice, since some code should only run once the DOM is ready, and here you are loading your JavaScript with a jQuery append, which itself depends on the jQuery file having loaded. That delays your code execution, and some of your code may not produce the desired output (depending on how you call your functions and how much they depend on DOM-ready).
I would prefer to load a JavaScript file with the first method, at the bottom of the page/app if possible.
Asynchronous loading can also be tried, via the async or defer attributes; see the sketch below.
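For completeness, a sketch of those attributes, reusing the file name from the question:

<!-- async: download in parallel, execute as soon as it arrives -->
<script src="1MBOfjavascript.js" async></script>
<!-- defer: download in parallel, execute after the document is parsed -->
<script src="1MBOfjavascript.js" defer></script>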
I think it comes down to browser behavior: after retrieving a response from the web server, the browser first parses all the elements to build the DOM, and only then is your script executed. Based on this, the main difference is simply which runs first: a script referenced via src is executed before one injected afterwards.
About the second method:
I think the JavaScript engine in each browser can handle it easily and without any problem. The engine does not really care about the size; what matters is how long the script takes to load and then execute. Where injecting elements into the DOM can become an issue is when you do it in a loop; there you might run into a memory problem.
When you say big JavaScript files, yes, we should think about memory, but regarding the two methods you mentioned, it is just a question of which is executed first.
That's my opinion.
I would like to use require.js to manage my code within a Firefox XUL extension, but I can't get it to find my modules.
I know that xul doesn't play nice with the data-main attribute, so I have my main.js script as a second script:
<script src="chrome://myPackage/content/require.js" type="application/x-javascript"></script>
<script src="chrome://myPackage/content/main.js" type="application/x-javascript"></script>
This successfully calls the script, and the require function is available within main.js, but when I run
require(['lib1'], function(lib1){
    alert(lib1.val1);
});
the alert never pops up (lib1.js is in the same directory as main.js).
I have tried this both with and without setting the baseUrl, as in:
require.config({
    baseUrl: "chrome://myPackage/content/"
});
and it does not work either way.
Does anyone know how I can get require.js to look in the right place for my modules?
Addendum:
I added an error-handling function, and the error returned is
http://requirejs.org/docs/errors.html#timeout
I have loaded the test module into a normal web page successfully, which seems to confirm that the issue is path configuration (it also takes the full 15-second timeout before failing).
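A sketch of such an error handler, using require.js's errback (the third argument to require):

require(['lib1'], function (lib1) {
    alert(lib1.val1);
}, function (err) {
    // err.requireType is 'timeout' here; err.requireModules lists
    // the module ids that failed to load
    alert(err.requireType + ': ' + err.requireModules);
});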
Firebug seems to have a working requirejs version. But more importantly, they have a far better mini-require.js that will not pollute the shared global scope when used in overlays (if used correctly :p)
https://github.com/firebug/firebug/blob/master/extension/modules/require.js
https://github.com/firebug/firebug/blob/master/extension/modules/mini-require.js
I suggest you have a look at these implementations and also at the code using them.
Proactive warning:
Please note that if your add-on defines lots of new properties on the overlay scope (window), whether by defining global functions or variables or by implicitly declaring variables within functions, this may interfere with other code running in the same scope (the browser code itself and other add-ons). Besides, if you want to submit your add-on to addons.mozilla.org, a reviewer might not grant it public status if it "pollutes" the global scope/namespace in the main overlay.
We have an IE extension implemented as a Browser Helper Object (BHO). We have a utility function written in C++ that we add to the window object of the page so that other scripts in the page can use it to load local script files dynamically. In order to resolve relative paths to these local script files, however, we need to determine the path of the JavaScript file that calls our function:
myfunc() written in C++ and exposed to the page's JavaScript
file:///path/to/some/javascript.js
(additional stack frames)
From the top frame I want to get the information that the script calling myfunc() is located in file:///path/to/some/javascript.js.
I first expected that we could simply use the IActiveScriptDebug interface to get a stacktrace from our utility function. However, it appears to be impossible to get the IActiveScript interface from an IWebBrowser2 interface or associated document (see Full callstack for multiple frames JS on IE8).
The only thing I can think of is to register our own script debugger implementation and have myfunc() break into the debugger. However, I'm skeptical that this will work without prompting the user about whether they want to break into the debugger.
Before doing more thorough tests of this approach, I wanted to check whether anyone has definitive information about whether this is likely to work and/or can suggest an alternative approach that will enable a function written in C++ to get a stack trace from the scripting engine that invoked it.
Each script you load could have an id, and each method in a script that calls myfunc() could pass that id to myfunc(). This means you would first have to modify myfunc(), and then alter your scripts and calls accordingly.
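A sketch of the idea; the id value and the extra parameter on myfunc() are assumptions, since the real signature isn't shown in the question:

// inside file:///path/to/some/javascript.js: declare the script's own
// URL once, and pass it on every call so myfunc() can resolve
// relative paths against it
var SCRIPT_ID = 'file:///path/to/some/javascript.js';
window.myfunc('helper.js', SCRIPT_ID); // hypothetical extra id argument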
This answer describes how I solved the actual issue I described in the original question. The question description isn't great since I was making assumptions about how to solve the problem that actually turned out to be unfounded. What I was really trying to do is determine the path of the currently running script. I've changed the title of the question to more accurately reflect this.
This is actually fairly easy to achieve since scripts are executed in an HTML document as they are loaded. So if I am currently executing some JavaScript that is loaded by a script tag, that script tag will always be the last script tag in the document (since the rest of the document hasn't loaded yet). To solve this problem, it is therefore enough just to get the URL of the src attribute of the last script tag and resolve any relative paths based on that.
Of course this doesn't work for scripts embedded directly in the HTML page, but that is bad practice anyway (IMO), so it doesn't seem like a very important limitation.
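A minimal sketch of the technique:

// while an external script is executing during page load, its <script>
// tag is the last one in the document
var scripts = document.getElementsByTagName('script');
var current = scripts[scripts.length - 1];
var currentUrl = current.src; // resolve relative paths against this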
I know, for instance, that when Chrome downloads a JavaScript file, it is interpreted and JITed.
My question is: when IE6, 7, or 8 first downloads a JavaScript file, is the entire thing parsed and interpreted?
My understanding was that only top level function signatures and anything executed in the global scope was parsed on load. And then function bodies and the rest were parsed on execution.
If they are fully parsed on load, what do you think the time savings would be on deferring the function bodies to be downloaded and parsed later?
They are fully parsed on load. (IE has to parse the script to know where each function body ends, of course.) In the open-source implementations, every function is compiled to bytecode or even to machine code at the same time, and I imagine IE works the same way.
If you have a page that's actually loading too slowly, and you can defer loading 100K of script that you're probably not going to use, it might help your load times. Or not; see the update below.
(Trivia: JS benchmarks like Sunspider generally do not measure the time it takes to parse and compile the code.)
UPDATE – Since I posted this answer, things have changed! Implementations still parse each script on load at least enough to detect any SyntaxErrors, as required by the standard. But they sometimes defer compiling functions until they are first called.
Because defining a function is itself an operation, yes, your entire JavaScript file is parsed, and all of the top-level operations are interpreted. The code inside your functions is not actually executed until it's called, but it is parsed.
For example:
var i = 0;
var print = function( a ) {
    document.getElementById( 'foo' ).innerHTML = a;
}
Everything gets parsed in the above example, and lines 1 and 2 get executed. However, line 3 doesn't get executed until print() is called.
There are little "perceptual games" you can play with your users, like putting the script tags at the bottom of the HTML instead of at the top, so that the browser renders the top of the page before it receives the instructions to download and parse the JavaScript. You could probably also push your function definitions into an onload handler (see the sketch below), so that they don't get executed until the whole page is loaded and in memory. However, this could cause a "flash of unstyled content" if your JavaScript is applying visual styles to things (such as jQuery UI widgets).
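A sketch of that onload approach, reusing the element id from the example above:

// defer DOM work until the whole page is loaded and in memory
window.onload = function () {
    document.getElementById('foo').innerHTML = 'ready';
};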
Yes, on all browsers the downloading of the resource blocks everything else on the page (CSS downloading, other JS downloading, rendering) if done with a <script> tag.
If you're loading all the JavaScript at the beginning of, or throughout, your page, you will see hiccups: a request takes about 50ms, and the parsing of a library file or similar can take more than 100ms. 100ms is the usual threshold above which the user perceives "lag".
The time savings may be negligible, but the slight loss of user experience if there are pauses when your page is loading may be significant depending on your situation.
See LABjs' site for a lot of articles and great explanations on the benefits of deferring loading and parsing.
What do you mean by "downloads"? When it's included with a <script> tag, or when it's downloaded through XMLHttpRequest?
If you mean inclusion via a script tag, then IE interprets the whole JS file at once; otherwise you would not be able to call functions in that file or see a syntax error message.
If you mean downloading via XMLHttpRequest, then you have to evaluate the content of the file yourself, as in the sketch below.
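A minimal sketch of the XMLHttpRequest case (the file name is hypothetical):

// fetch a script with XHR and evaluate it manually; nothing in the
// file runs until eval() is called
var xhr = new XMLHttpRequest();
xhr.open('GET', 'myjsfile.js', true);
xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
        eval(xhr.responseText);
    }
};
xhr.send();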