I have a complicated situation that involves scripts being loaded from different sources, but I think my problem will be easier to understand if I ask a simple question about AMD modules, which I am not too familiar with.
On a site that is using AMD modules, there is a define function available.
What happens if I try to define and load a module like this:
define([], () => { return { thisIsMyTest: true } })
How do I then get access to the module I just defined?
I tried looking within require._defined, but I couldn't see it (I could also have missed it, as I am not sure what name to look for).
The script that wants to access that object is not an AMD-compatible script, but I have control over that part, so I am happy to write code for it.
Now, to add some more clarity to the problem: I have a script that loads on a site (which I have no control over), and my script loads another external script remotely (which I also don't have any control over). This external script normally exposes itself as a variable on the window object; however, it supports AMD modules, and since this site uses AMD, the external script registers itself via define and I can't figure out how to get access to it.
Update:
Here is my snippet of code that loads my external library:
const libScript = document.createElement('script');
libScript.type = 'text/javascript';
libScript.src = '//some/3rdparty/lib.js'; // source
libScript.onload = () => {
    // Use the library that normally exists on window.Lib
    // However libScript in this site loads using AMD
    // and window.Lib doesn't exist for sites that use AMD
    // Other non AMD sites work fine.
};
document.head.appendChild(libScript);
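For illustration, one way this might be handled (a sketch only, assuming the host page's AMD loader is RequireJS with a global require, and that lib.js registers an anonymous define) is to ask the page's loader for the URL instead of injecting a script tag:

// Sketch: assumes the host page exposes a global AMD require (e.g. RequireJS)
// and that lib.js calls define() anonymously when an AMD loader is present.
const libUrl = '//some/3rdparty/lib.js';

if (typeof define === 'function' && define.amd && typeof require === 'function') {
    // URLs ending in .js are loaded as-is by RequireJS rather than resolved
    // as module IDs; the module value is handed to the callback instead of
    // being attached to window.Lib.
    require([libUrl], (Lib) => {
        // use Lib here
    });
} else {
    // Non-AMD sites: fall back to the plain script tag and window.Lib.
    const libScript = document.createElement('script');
    libScript.src = libUrl;
    libScript.onload = () => {
        // window.Lib is available here
    };
    document.head.appendChild(libScript);
}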
Related
I was trying to create a script that would include all the scripts required to make my Angular app work, but it doesn't seem to work:
function include(destination) {
    var e = window.document.createElement('script');
    e.setAttribute('src', destination);
    window.document.body.appendChild(e);
}
include('../Scripts/myApp/main.js');
include('../Scripts/myApp/appConfig.js');
include('../Scripts/myApp/MyAppController.js');
include('../Scripts/myApp/Service1.js');
include('../Scripts/myApp/Service2.js');
Unfortunately, if I include the app scripts like that (it works, by the way; they are loaded into the page perfectly), I get an error because Angular detects my "ng-app='myAppName'" directive but finds no module declared for it (I suppose because the script hasn't been loaded yet when Angular checks). If I then place at least the module instantiation in a script referenced directly by the HTML page (the main.js), it complains about the controller, services, etc. not existing, and if I attach those to the app inside their own files (app.controller(), app.factory(), ...) it doesn't complain, but nothing happens either.
It looks like loading the scripts this way means Angular never executes them, perhaps because they are loaded too late. The only way I've gotten it to work has been to add a script tag for each file in the HTML.
But I have the feeling that I'm missing something, that there is a way of doing this and I just don't see it.
Does anyone know how to load all the required scripts by just referencing one of them? Is there something in the Angular app config that can be set to do this, maybe?
It just looks too dirty to make so many calls and also bad in terms of performance.
I've seen that there is something called Yeoman, but it looks too big for what I want. Then I saw RequireJS, but I'm unsure that's what I really want, and I'm not sure I want to install another library just for this. Finally, I've seen that in the app config you can set where to find the views and even a way to load the controller (I think; I'm unsure of anything now, my head is so dizzy), so I was thinking that maybe Angular already has something built in for this, which seems so natural: joining the scripts into one?
Many thanks.
This is because your scripts are being loaded asynchronously while Angular compiles the document synchronously as the page loads (I'm assuming angular.js is in the script includes, as I don't see it above). You will have to manually bootstrap the app in this scenario, as you would also have to do when using RequireJS. Here is an example. Since you are not using a library like RequireJS, I am also adding a hacky counter to detect when all your scripts have loaded and bootstrap Angular at that point:
function include(destination) {
    var e = window.document.createElement('script');
    e.setAttribute('src', destination);

    // IE: detect when the script has fully loaded. Increase the number of
    // loaded scripts and call bootstrap if all have loaded.
    e.onreadystatechange = function () {
        if (e.readyState === "loaded" || e.readyState === "complete") {
            e.onreadystatechange = null;
            loadedScripts++;
        }
        if (loadedScripts === numScripts) {
            bootstrapAngular("myAppName");
        }
    };

    // Other browsers. Same logic as above: increase scripts loaded and call
    // bootstrap when complete.
    e.onload = function () {
        loadedScripts++;
        if (loadedScripts === numScripts) {
            bootstrapAngular("myAppName");
        }
    };

    window.document.body.appendChild(e);
}
//5 scripts being loaded
var numScripts = 5;
//0 currently loaded
var loadedScripts = 0;
include('../Scripts/myApp/main.js');
include('../Scripts/myApp/appConfig.js');
include('../Scripts/myApp/MyAppController.js');
include('../Scripts/myApp/Service1.js');
include('../Scripts/myApp/Service2.js');
function bootstrapAngular(appName) {
    // Assuming you are putting ng-app at the document level...
    // Change document to your specific element otherwise.
    angular.bootstrap(document, [appName]);
}
The solution above is a little hacky as we are keeping track of all the scripts that have loaded. This is where a library like RequireJS would play very nicely because it would call your callback function when the scripts you want have been loaded. An example would be:
require(['angular', 'jquery', 'appConfig', 'MyAppController', 'Service1', 'Service2'], function (ng, $) {
    // bootstrap when all is ready
    $(function () {
        ng.bootstrap(document, ["myAppName"]);
    });
});
If you're using AngularJS (or any similarly advanced framework) then it's definitely time for you to learn about module loading. Injecting <script> tags quickly becomes untenable as you start to build more modules and start dealing with more complex dependency chains.
RequireJS is a great package and can do everything under the sun. Here's a tutorial on using RequireJS with Angular that looks pretty good. Browserify is a similar library that lets you manage dependencies with NPM. It's a little bit simpler than RequireJS, which might be a plus for you. Here's a tutorial that looks decent.
Both RequireJS and Browserify come with command-line build tools, which are great because they can combine all of your source code into a single file when you're ready to deploy your app to a server.
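For example, here is a minimal sketch of an r.js build profile (the paths and module names are hypothetical, not taken from your project), which concatenates an AMD entry module and everything it depends on into a single file:

// build.js - a minimal r.js build profile (sketch; names are made up).
// Run with: node r.js -o build.js
({
    baseUrl: "Scripts/myApp",   // where the AMD modules live
    name: "main",               // entry module (main.js)
    out: "Scripts/dist/main-built.js"
})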
I should warn you, however, that this stuff isn't easy. There's a fairly steep learning curve. In all likelihood, though, this is stuff you're going to need to learn sooner or later anyway.
I am working on a JavaScript project where we use requirejs to manage our code. The product we create is to be used on 3rd party web sites, and therefore we cannot assume that an AMD compatible library is present. To solve this, we include almond in our target file and expose our main module on window. The file created by our build looks like this:
(function() {
    //Almond lib

    //Our code
    define('window-exposer', ['main-module'], function(mainModule) {
        window.mainModule = mainModule;
    });
    require('window-exposer');
}());
When building a site that wants to use mainModule, an error is sometimes thrown because the site-specific code tries to access window.mainModule before it has been set. There are also cases where the module has indeed been initialized and the code works.
Is there any way to guarantee that the window-exposer runs before the other JavaScript code does?
I solved it by using the solution provided here https://github.com/jrburke/almond#exporting-a-public-api
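For reference, the gist of that approach is to have r.js wrap the almond-based bundle so the global is assigned synchronously as part of the bundle itself, roughly like this (a sketch adapted from the almond README; file and module names are placeholders):

// In the r.js build profile (sketch):
//   wrap: { startFile: 'wrap.start.js', endFile: 'wrap.end.js' }

// wrap.start.js
(function (root, factory) {
    // Assign the global synchronously, so window.mainModule exists as soon
    // as this bundled file has finished executing.
    root.mainModule = factory();
}(this, function () {
    // almond and all of the project's modules are inlined here by r.js ...

// wrap.end.js
    // almond's require() is synchronous, so the module value is ready here.
    return require('main-module');
}));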
I've developed a JavaScript plugin to be included on our customers' websites. The plugin I've created depends on some external libraries, which are bundled and delivered to the client as one big package: jQuery 1.8.2 and KnockoutJS v3.0.0.
The plugin plays fine on most sites, but if the host site uses RequireJS, my package fails to load because KnockoutJS automatically detects that RequireJS exists and attempts to use it. Here is the error that gets thrown:
Mismatched anonymous define() module
Obviously, I've found an "explanation" of the error message on the RequireJS site. Unfortunately, I don't understand how to avoid it. In my local copy of the KnockoutJS library, I've found the offending line:
(function(factory) {
    // Support three module loading scenarios
    if (typeof require === 'function' && typeof exports === 'object' && typeof module === 'object') {
        // [1] CommonJS/Node.js
        var target = module['exports'] || exports; // module.exports is for Node.js
        factory(target);
    } else if (typeof define === 'function' && define['amd']) {
        // [2] AMD anonymous module
        define(['exports'], factory);
    } else {
        // [3] No module loader (plain <script> tag) - put directly in global namespace
        factory(window['ko'] = {});
    }
}(function(koExports){
    // ... rest of the Knockout source ...
}));
If I manually edit this file so that condition [2] never executes and only condition [3] ever executes, then everything works fine. Of course, I don't want to do this because it requires me to edit an external library, which I'd prefer to keep in pristine condition so I can upgrade it later.
I have a feeling there may be a way to make this work; I just don't understand how RequireJS works. Obviously, KnockoutJS is TRYING to play nice with RequireJS, but in my case it's failing. For me, in this case, even though RequireJS exists, I don't need KnockoutJS to use it.
How can I get these two libraries to work side by side?
EDIT
I have no control over when my library is loaded vs. all other libraries the host site already loads. In fact, most of the time my plugin will be included, it will be by someone with NO web dev experience using a terrible WYSIWYG platform like WordPress, Webs.com or Weebly so sometimes my script tag might make it to the top of the head element, other times it might be included in the body element somewhere.
Also, to be clear, my library does NOT use RequireJS. It just so happens that one of our customers that is trying to use my library DOES use RequireJS and when my library gets included, KnockoutJS (bundled with my library, but NOT already on the host site) throws an exception because it thinks it needs to register itself with RequireJS (or at least that's my speculation as to the exception).
While, in principle, I'm not opposed to loading the libraries my code depends on on demand, the truth is that it will create a slow, poor experience for my users, as it will take additional request/response cycles to load them.
Well, the easiest thing to do would probably be to load knockout before requirejs. ko will no longer detect that require is present, and will go with option [3]. If you can't do this, the other option is to add your plugin and the ko file into a require hierarchy.
So let's say that you plugin looked like this:
(function(ko){
    //stuff
    ko.applyBindings({});
})(ko)
You would need to change it to this:
require([
    "knockout-3.0.0.js" // this should be the url you use for knockout
], function(ko){
    //stuff
    ko.applyBindings({});
})
and NOT load the knockout.js file as a separate tag. Require will handle the loading. The server must still be able to deliver the "knockout-3.0.0.js" url of course. This is how require works. It loads whatever urls you pass as elements in the array parameter of require, and passes what they return as parameters to the function.
If you need to minify/bundle both the plugin file and the ko file into a single file, you can use the RequireJS minifier/optimizer (http://requirejs.org/docs/optimization.html). It will navigate the dependency tree and output a single js file with all modules inside. One quirk here: you need to drop the .js extension for the optimizer to work; read more about it in the documentation. I just mention it to save some headaches.
Also, more documentation on how to use ko with require can be found here: http://knockoutjs.com/documentation/amd-loading.html
EDIT, after op edit:
OK, so in this case you should create a separate scope, in which you can do what you want. You'll need to copy the ko code inside your file, but this way you'll at least end up with a single file.
So, first create a scope:
(function(){
})()
Then copy ko code inside:
(function(){
    //ko code here, should be a single, minified line
})()
Then you need to trick ko into using option 3, so do this:
(function(){
    var define = null; //so define will no longer be a function, don't forget the var
    var require = null;
    //ko code here, should be a single, minified line
})()
Optionally, you might also want to reassign window in the step above, if you don't want ko to be available to the entire page.
And now add your plugin code:
(function(){
    var define = null; //so define will no longer be a function, don't forget the var
    var require = null;
    //ko code here, should be a single, minified line
    //plugin code here;
})()
I want to determine at runtime whether a YUI module has been defined (i.e. whether someone has called YUI.add() for that module).
Based on reading the YUI code, it seems like YUI.Env.mods[moduleName] will do the trick, but I can't find any documentation for this property, so I'm not sure whether it's meant to be used or works in all cases. Is there a preferred way to do this?
EDIT: here's what I'm trying to accomplish:
We are switching from a system where most assets are manually loaded via link/script tags in the HEAD to one where we depend more on the YUI loader. To support legacy code, I want to make sure that modules preloaded in the HEAD will not get loaded again by YUI (some things like jQuery have issues when loaded twice).
The preloaded modules are a mix of YUI-style and non-YUI-style modules.
So far, I'm emitting code like the following:
<head>
    <!-- bunch of script/link tags -->
    <script>
        var modules = [/* list of preloaded modules */],
            i;
        for (i = 0; i < modules.length; ++i) {
            if (!ISMODULEALREADYDEFINED(modules[i])) {
                YUI.add(modules[i], function (Y) { }, '');
            }
        }
    </script>
</head>
The reason I need the ISMODULEALREADYDEFINED check is that if some of the preloaded modules are YUI-style modules, then YUI might still be loading their dependencies asynchronously when we run the above script. If that happens, the no-op module definition I'm adding prevents the original module definition callback from ever running.
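For what it's worth, a minimal sketch of what that check could look like, assuming the undocumented YUI.Env.mods registry behaves the way the YUI source suggests (keyed by module name and populated by YUI.add()):

// Sketch only: YUI.Env.mods is an internal, undocumented registry, so treat
// this as a best-effort check rather than a supported API.
function isModuleAlreadyDefined(name) {
    return !!(window.YUI && YUI.Env && YUI.Env.mods && YUI.Env.mods[name]);
}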
What's the difference between using Require.JS and simply creating a <script> element in the DOM?
My understanding of Require.JS is that it offers the ability to load dependencies, but can this not simply be done by creating a <script> element that loads the necessary external JS file?
For example, let's assume I have the function doStuff(), which requires the function needMe(). doStuff() is in the external file do_stuff.js, while needMe() is in the external file need_me.js.
Doing this the Require.JS way:
define(['need_me'],function(){
    function doStuff(){
        //do some stuff
        needMe();
        //do some more stuff
    }
});
Doing this by simply creating a script element:
function doStuff(){
    var scriptElement = document.createElement('script');
    scriptElement.src = 'need_me.js';
    scriptElement.type = 'text/javascript';
    document.getElementsByTagName('head')[0].appendChild(scriptElement);
    //do some stuff
    needMe();
    //do some more stuff
}
Both of these work. However, the second version doesn't require me to load all of the Require.js library. I don't really see any functional difference...
What advantages does Require.JS offer in comparison to simply creating a <script> element in the DOM?
In your example, you're creating the script tag asynchronously, which means your needMe() function would be invoked before the need_me.js file finishes loading. This results in uncaught exceptions where your function is not defined.
Instead, to make what you're suggesting actually work, you'd need to do something like this:
function doStuff(){
    var scriptElement = document.createElement('script');
    scriptElement.src = 'need_me.js';
    scriptElement.type = 'text/javascript';
    scriptElement.addEventListener("load",
        function() {
            console.log("script loaded - now it's safe to use it!");
            // do some stuff
            needMe();
            //do some more stuff
        }, false);
    document.getElementsByTagName('head')[0].appendChild(scriptElement);
}
It may or may not be best to use a module loader such as RequireJS rather than a pure-JavaScript strategy like the one demonstrated above. While your Web application may load faster with lazy loading, invoking functionality and features on the site would be slower, since each action would involve waiting for resources to load before it could be performed.
If a Web application is built as a single-page app, then consider that people won't actually be reloading the page very often. In these cases, preloading everything would help make the experience seem faster when actually using the app. In that situation, you're right: one can merely load all resources by including the script tags in the head or body of the page.
However, if building a website or a Web application that follows the more traditional model where one transitions from page to page, causing resources to be reloaded, a lazy-loading approach may help speed up these transitions.
Here is a nice article on ajaxian.com on why to use it:
RequireJS: Asynchronous JavaScript loading
some sort of #include/import/require
ability to load nested dependencies
ease of use for developer but then backed by an optimization tool that helps deployment
Some other very pointed reasons why using RequireJS makes sense:
Managing your own dependencies rapidly falls apart for sizable projects.
You can have as many small files as you want, and don't have to worry about keeping track of dependencies or load order.
RequireJS makes it possible to write an entire, modular app without touching the window object.
Taken from rmurphey's comments in this Gist.
Layers of abstraction can be a nightmare to learn and adjust to, but when it serves a purpose and does it well, it just makes sense.
Here's a more concrete example.
I'm working on a project with 60 files. We have 2 different modes of running it:
Load a concatenated version, 1 large file (production)
Load all 60 files (development)
We're using a loader, so we just have one script tag in the webpage:
<script src="loader.js"></script>
That defaults to mode #1 (loading the one large concatenated file). To run it in mode #2 (separate files), we set some flag. It could be anything: a key in the query string, for example. In this example we just do this:
<script>useDebugVersion = true;</script>
<script src="loader.js"></script>
loader.js looks something like this:
if (useDebugVersion) {
    injectScript("app.js");
    injectScript("somelib.js");
    injectScript("someotherlib.js");
    injectScript("anotherlib.js");
    // ... repeat for 60 files ...
} else {
    injectScript("large-concatenated.js");
}
The build script is just an .sh file that looks like this:
cat app.js somelib.js someotherlib.js anotherlib.js > large-concatenated.js
etc...
If a new file is added, we'll likely be using mode #2 (since we're doing development), so we have to add an injectScript("somenewfile.js") line to loader.js.
Then later, for production, we also have to add somenewfile.js to our build script, a step we often forget, and then we get error messages.
By switching to AMD we don't have to edit 2 files. The problem of keeping loader.js and the build script in sync goes away. Using r.js or webpack, the build tool can just read the code to produce large-concatenated.js.
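As an illustration only (the entry and output names here are hypothetical, not from our project), a minimal webpack config that replaces the hand-maintained build script could look like this:

// webpack.config.js - sketch with made-up file names; webpack walks the
// dependency graph from the entry module and emits a single bundled file.
module.exports = {
    mode: 'production',
    entry: './app.js',
    output: {
        filename: 'large-concatenated.js'
    }
};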
It can also deal with dependencies. For example, we had 2 files, lib1.js and lib2.js, loaded like this:
injectScript("lib1.js");
injectScript("lib2.js");
lib2 needs lib1; it has code inside that does something like:
lib1Api.installPlugin(...);
But as the injected scripts are loaded asynchronously, there's no guarantee they'll load in the correct order. These 2 scripts are not AMD scripts, but using require.js we can tell it about their dependencies:
require.config({
    paths: {
        lib1: './path/to/lib1',
        lib2: './path/to/lib2',
    },
    shim: {
        lib1: {
            "exports": 'lib1Api',
        },
        lib2: {
            "deps": ["lib1"],
        },
    }
});
In our module that uses lib1 we do this:
define(['lib1'], function(lib1Api) {
    lib1Api.doSomething(...);
});
Now require.js will inject the scripts for us, and it won't inject lib2 until lib1 has been loaded, since we told it lib2 depends on lib1. It also won't start our module that uses lib1 until both lib2 and lib1 have loaded.
This makes development nice (no build step, no worrying about loading order) and it makes production nice (no need to update a build script for each script added).
As an added bonus, we can use webpack's Babel plugin to run Babel over the code for older browsers, and again we don't have to maintain that build script either.
Note that if Chrome (our browser of choice) started supporting import for real, we'd probably switch to that for development, but that wouldn't really change anything. We could still use webpack to make a concatenated file, and we could still use it to run Babel over the code for all browsers.
All of this is gained by not using script tags and using AMD instead.