How to circumvent RequireJS to load module with global? - javascript

I'm trying to load a JS file from a bookmarklet. The JS file has this JS that wraps the module:
(function (root, factory) {
  if (typeof module === 'object' && module.exports) {
    // Node/CommonJS
    module.exports = factory();
  } else if (typeof define === 'function' && define.amd) {
    // AMD. Register as an anonymous module.
    define(factory);
  } else {
    // Browser globals
    root.moduleGlobal = factory();
  }
}(this, function factory() {
  // module script is in here
  return moduleGlobal;
}));
Because of this, if the webpage uses RequireJS, the script will not export a global when it loads. To get around this I temporarily set define to null, load the script, then reset define to its original value:
function loadScript(url, cb) {
  var s = document.createElement('script');
  s.src = url;
  s.defer = true;
  var avoidRequireJS = typeof define === 'function' && define.amd;
  if (avoidRequireJS) {
    var defineTmp = define;
    define = null;
  }
  s.onload = function() {
    if (avoidRequireJS) define = defineTmp;
    cb();
  };
  document.body.appendChild(s);
}
This works, but it seems to me like it could be problematic to change a global variable when other parts of the application could depend on it. Is there a better way to go about this?

You may fetch the script using XMLHttpRequest, jQuery.ajax or the new Fetch API.
This will allow you to manipulate the script and reassign define before executing it. Two options:
Have the module export a global by wrapping the script with:
(function(define){ ... })(null);
Handle the module exports yourself by wrapping the script with:
(function(define, module){ ... })((function() {
  function define(factory) {
    var exports = factory();
    // do whatever you need with the module's exports here
  }
  define.amd = true;
  return define;
})());
You can then load it using a new <script> element or eval 😲.
Note that when using XHR, you may have to address CORS issues.
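For example, here is a minimal sketch of the first option (fetch the source, shadow define, execute via a new script element). It assumes the script is served with permissive CORS headers and that its UMD fallback sets window.moduleGlobal, as in the question:
fetch('https://example.com/module.js')
  .then(function (res) { return res.text(); })
  .then(function (src) {
    // Shadow `define` with null so the UMD wrapper takes the
    // browser-globals branch instead of the AMD branch.
    var s = document.createElement('script');
    s.textContent = '(function(define){' + src + '\n})(null);';
    document.body.appendChild(s);
    console.log(window.moduleGlobal); // now set, even with RequireJS present
  });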

If you can use the AJAX method above, that will be best. But as stated, you will need to deal with CORS issues, which is not always trivial - even impossible if you do not control the origin server.
Here is a technique which uses an iframe to load the script in an isolated context, allowing the script to export its global object. We then grab the global object and copy it to the parent. This technique does not suffer from CORS restrictions.
(fiddle: https://jsfiddle.net/qu0pxesd/)
function loadScript (url, exportName) {
  var iframe = document.createElement('iframe');
  Object.assign(iframe.style, {
    position: 'fixed',
    top: '-9999em',
    width: '0px'
  });
  document.body.appendChild(iframe);
  // Create the script inside the iframe's document so that it runs in
  // the iframe's isolated global context.
  var idoc = iframe.contentWindow.document;
  var script = idoc.createElement('script');
  script.onload = function () {
    // Copy the export onto the parent window, then discard the iframe.
    window[exportName] = iframe.contentWindow[exportName];
    document.body.removeChild(iframe);
  };
  script.src = url;
  idoc.body.appendChild(script);
}
loadScript('https://ajax.googleapis.com/ajax/libs/jquery/3.2.1/jquery.min.js', 'jQuery');
I ran a quick test to see if deleting the iframe would cause a memory leak, and it appears to be memory safe: I took a heap snapshot after loading a script 100 times, resulting in 100 different iframes and 100 different instances of jQuery being loaded.
The parent window's jQuery variable is continuously overwritten, meaning only the last one prevails and all previous references are cleaned up. This is not entirely scientific and you will need to do your own testing, but this should be safe enough to get you started.
Update: The above code requires that you know the name of the exported object, which is not always known. Some modules may export multiple variables too. For example, jQuery exports both $ and jQuery. The following fiddle illustrates a technique for solving this issue by copying any global objects which did not exist before the script was loaded:
https://jsfiddle.net/qu0pxesd/3/
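The idea in that fiddle is roughly the following (a sketch, not the fiddle's exact code): snapshot the iframe's global property names before the script runs, then copy over anything new.
function loadScriptGlobals(url, cb) {
  var iframe = document.createElement('iframe');
  iframe.style.display = 'none';
  document.body.appendChild(iframe);
  var win = iframe.contentWindow;
  // Record the pristine global names before the script runs.
  var before = {};
  Object.getOwnPropertyNames(win).forEach(function (name) { before[name] = true; });
  var script = win.document.createElement('script');
  script.onload = function () {
    // Copy every global the script added over to the parent window.
    Object.getOwnPropertyNames(win).forEach(function (name) {
      if (!before[name]) window[name] = win[name];
    });
    document.body.removeChild(iframe);
    if (cb) cb();
  };
  script.src = url;
  win.document.body.appendChild(script);
}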

Which approach would work best really depends on the specific needs of the project. Context would determine which one I'd use.
Undefining define Temporarily
I'm mentioning it because you tried it.
DON'T DO THIS!
The approach of undefining define before you load your script and restoring it afterwards is not safe. In the general case, it is possible for other code on the page to perform a require call that resolves after you've undefined define and before you've restored it. After you do document.body.appendChild(s); you're handing control back to the JavaScript engine, which is free to immediately execute scripts that were required earlier. If those scripts are AMD modules, they'll either bomb or install themselves incorrectly.
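To make the race concrete, here is a hypothetical timeline (the module name is made up):
// t0: page code ran require(["charting"]) earlier; charting.js is
//     still in flight over the network.
define = null;                 // t1: your bookmarklet hides define
document.body.appendChild(s);  // t2: control returns to the engine
// t3: charting.js arrives first and calls define(...), which throws
//     because define is not a function at this instant.
// t4: your script's onload restores define - too late.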
Wrapping the Script
As Dheeraj V.S. suggests, you can wrap the script to make define locally undefined:
(function(define) { /* original module code */ }())
can work for trivial cases like the one you show in your question. However, if the script you are trying to load has dependencies of its own on other libraries, dealing with those dependencies gets problematic. Some examples:
The page loads jQuery 2.x but the script you are trying to load depends on a feature added in jQuery 3.x. Or the page loads Lodash 2 but the script needs Lodash 4, or vice-versa. (There are huge differences between Lodash 2 and 4.)
The script needs a library that is not otherwise loaded by something else. So now you are responsible for producing the machinery that will load the library.
Using RequireJS Contexts
RequireJS is capable of isolating multiple configurations from one another by defining a new context. Your bookmarklet could define a new context that configures enough paths for the script you are trying to load to load itself and its dependencies:
var myRequire = require.config({
  // Within the scope of the page, the context name must be unique to
  // your bookmarklet.
  context: "Web Designer's Awesome Bookmarklet",
  paths: {
    myScript: "https://...",
    jquery: "https://code.jquery.com/jquery-3.2.1.min.js",
  },
  map: {...},
  // Whatever else you may want.
});
myRequire(["myScript"]);
When you use contexts like this, you want to save the return value of require.config because it is a require call that uses your context.
Creating a Bundle with Webpack
(Or you could use Browserify or some other bundler. I'm more familiar with Webpack.)
You could use Webpack to consume all the AMD modules necessary for the script you are trying to load to produce a bundle that exports its "module" as a global. At a minimum, you'll need something like this in your configuration:
// Tell Webpack what module constitutes the entry into the bundle.
entry: "./MyScript.js",
output: {
  // This is the name under which it will be available.
  library: "MyLibrary",
  // Tell Webpack to make it globally available.
  libraryTarget: "global",
  // The final bundle will be ./some_directory/MyLibrary.js
  path: "./some_directory/",
  filename: "MyLibrary.js",
}
Once this is done, the bookmarklet only needs to insert a new script element that points to the produced bundle and no longer has to worry about wrapping anything or dealing with dependencies.
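For instance, the bookmarklet could be as small as this (the bundle URL and MyLibrary.init are placeholders for whatever your bundle exposes):
// (written out across lines for readability; a real bookmarklet is one line)
javascript:(function () {
  var s = document.createElement('script');
  s.src = 'https://example.com/some_directory/MyLibrary.js';
  s.onload = function () {
    // The bundle exposed window.MyLibrary regardless of RequireJS.
    window.MyLibrary.init();
  };
  document.body.appendChild(s);
})();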

If it were me, I would have the URL provide the hint as to how to load the module. Instead of having just a "scripts/" directory, I would make "scripts/amd/", "scripts/require/", etc., then query the URL for "amd", "require", etc. within your loadScript method... using, e.g.,
if (url.includes('amd')) {
  // do something
} else if (url.includes('require')) {
  // do something different
}
That should let you avoid the global var entirely. It might also provide a better structure for your app in general.
You could also return an object with a script property and loadType property that specifies amd, require, etc... but imho the first option would be the quickest and save you some additional typing.
Cheers

Related

How to use this in javascript module pattern?

I'm studying the module pattern and have a question.
I want to be able to use 'settings' anywhere.
script1.js
var MODULE = (function() {
  var t = {};
  this.settings = { // this == Window
    users: ['john', 'emma']
  }
  return t;
})()
script2.js
MODULE.dom = function() {
  var t = this; // this == MODULE
  document.querySelector('p').textContent = t.settings.users[0]; // error
  function _say() {
    return t.settings.users[1] // error
  }
  return {
    say: _say
  }
}
MODULE.DOM = MODULE.dom.call(MODULE)
When I use this.settings = {...}, 'this' refers to Window, so the code doesn't work.
When I use t.settings = {...}, 'this' refers to MODULE, so the code works. But when I type MODULE in the dev console, settings is exposed on the MODULE variable. Is that OK?
I'd greatly appreciate any help, thank you!
When I use t.settings = {...}, 'this' refers to MODULE, so the code works
That's the right way to do it.
but when I type MODULE in the dev console, settings is exposed on the MODULE variable. Is that OK?
It's mostly OK.
If you're worried about the client being able to type in the variable name and see the code that gets run - there's no way to avoid that. They can just look at the devtools to see what the network requests are, and what is downloaded - or look at the Sources panel.
If you're worried about running into naming collisions with larger scripts: sometimes libraries deliberately assign themselves to the window to allow other parts of the code to access them. Perhaps you'd like your MODULE to be like this. If not, then you should use JavaScript modules instead, which allow scripts to be imported inside other scripts without polluting the global namespace at all or risking naming collisions. For example, you could do
// script1.js
export const MODULE = {
  settings: {
    users: ['john', 'emma']
  }
};

// script2.js
import { MODULE } from './script1.js';
// proceed to use MODULE
And you can do import { MODULE } from './script1.js'; from any script file, not just script2.js.
Personally, I consider the IIFE module pattern in JavaScript to be mostly obsolete nowadays. For any reasonable-sized script, better to write code in separate files and then import and export as needed. (A 1000 line file is somewhat hard to debug and refactor. Ten 100 line files are easier to debug and refactor.)
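And if you do want settings hidden from the console, keep it in the IIFE's closure and expose only accessor functions. A sketch (the accessor name is just an example):
var MODULE = (function() {
  var settings = { users: ['john', 'emma'] }; // private, not a property
  return {
    getUser: function(i) { return settings.users[i]; }
  };
})();
MODULE.getUser(0); // 'john'
// MODULE.settings is undefined; the array is only reachable via getUser.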

How to migrate legacy JS app to modules

I have a large (~15k LoC) JS app (namely a NetSuite app) written in an old-style, all-globals way. The app consists of 26 files and the dependencies between them are totally unclear.
The goal is to gracefully refactor the app into smaller modules. By gracefully I mean not breaking/locking the app for a long time, but doing the refactoring in smaller chunks, with the app remaining usable after each chunk is completed.
An idea I have here is to concatenate all the JS files we have now into a single-file bundle. After that, some code could be extracted into modules, and the legacy code could start importing them. The modules & imports would be transpiled with webpack/whatever, while the legacy code remains all-globals style. Finally, all this is packed into a single JS file and deployed.
My questions are
is there a better approach maybe? This sounds like a typical problem
are there any tools available to support my approach?
I gave webpack a try and I haven't managed to get what I want out of it. The export-loader and resolve-loader are not options because of the number of methods/vars that need to be imported/exported.
Examples
Now code looks like
function someGlobalFunction() {
  ...
}
var myVar = 'something';
// and other 15k lines in 26 files like this
What I would ideally like to achieve is
function define(...) { /* function to define a module */ }
function require(moduleName) { /* function to import a module */ }
// block with my refactored out module definitions
define('module1', function () {
  // extracted modularised code goes here
});
define('module2', function () {
  // extracted modularised code goes here
});
// further down goes legacy code, which can import new modules
var myModule = require('myNewModule');
function myGlobalLegacyFunction() {
  // use myModule
}
I'm following an approach similar to that outlined here: https://zirho.github.io/2016/08/13/webpack-to-legacy/
In brief:
Assuming that you can configure webpack to turn something like
export function myFunction(){...}
into a file bundle.js that a browser understands. In webpack's entry point, you can import everything from your module, and assign it to the window object:
// using a namespace import to get all exported things from the file
import * as Utils from './utils'
// injecting every function exported from utils.js into the global scope (window)
Object.assign(window, Utils);
Then, in your html, make sure to include the webpack output before the existing code:
<script type="text/javascript" src="bundle.js"></script>
<script type="text/javascript" src="legacy.js"></script>
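For completeness, a minimal webpack configuration for this setup might look like the following (the entry file name and output path are assumptions):
// webpack.config.js
module.exports = {
  // The entry imports everything and does Object.assign(window, ...).
  entry: './src/globals-entry.js',
  output: {
    filename: 'bundle.js',
    path: __dirname + '/dist'
  }
};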
Your IDE should be able to help identify clients of a method as you bring them into a module. As you move a function from legacy.js to myNiceModule.js, check to see if it still has clients that are aware of it globally - if it doesn't, then it doesn't need to be globally available.
No good answer here so far, and it would be great if the person asking the question would come back. I will pose a challenging answer saying that it cannot be done.
All module techniques end up breaking the sequential nature of execution of scripts in the document header.
All dynamically added scripts are loaded in parallel and do not wait for one another. Almost all such horrible legacy JavaScript code depends on sequential execution, where a later script relies on an earlier one, so as soon as you load those scripts dynamically, things can break.
If you use some module approach (ES2015 modules, require.js, or your own), you need to run the code that depends on the loading having occurred inside a callback or Promise/then block. This destroys the implicit global context, so all those spaghetti coils of global functions and vars we find in legacy JavaScript files will no longer be defined in the global scope.
I have determined that only two tricks could allow a smooth transition:
Either some way to pause continuation of a script block until the import Promise is resolved.
const promise = require("dep1.js", "dep2.js", "dep3.js");
await promise;
// legacy stuff follows
or some way to revert the scope of a block inside a function explicitly into the global scope.
with(window) {
  function foo() { return 123; }
  var bar = 543;
}
But neither wish was granted by the javascript fairy.
In fact, I read that even the await keyword essentially just packs the rest of the statements into a function to call when the promise is resolved:
async function() {
  ... aaa makes promise ...
  await promise;
  ... bbb ...
}
is just, I suppose, no different from
async function() {
  ... aaa makes promise ...
  promise.then(r => {
    ... bbb ...
  });
}
So this means the only way to fix this is to keep legacy JavaScript statically in head/script elements and slowly move things into modules, while continuing to load them statically.
I am tinkering with my own module style:
(function(scope = {}) {
  var v1 = ...;
  function fn1() { ... }
  var v2 = ...;
  function fn2() { ... }
  return ['v1', 'fn1', 'v2', 'fn2']
    .reduce((r, n) => {
      r[n] = eval(n);
      return r;
    }, scope);
})(window)
By calling this "module" function with the window object, the exported items are put on it just as legacy code would do.
I gleaned a lot of this by using knockout.js and working with the readable source file, which has everything together but inside such module function calls; ultimately all features end up on the "ko" object.
I hate using frameworks and "compilation". While I could quickly write something that generates the sequence of HTML script tags in the correct order from a topologically sorted dependency tree, I won't do it, because I do not want to have any "compilation" step, not even my own.
UPDATE: https://stackoverflow.com/a/33670019/7666635 gives the idea that we can just Object.assign(window, module), which is somewhat similar to my trick of passing the window object into the "module" function.
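That variant, sketched out (the names are illustrative):
var module = (function() {
  var v1 = 'something';
  function fn1() { return v1; }
  // Return only what should become global.
  return { v1: v1, fn1: fn1 };
})();
Object.assign(window, module); // v1 and fn1 are now globals, as legacy code expects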

Replacing requirejs with systemjs -- variables not visible in local scope

I'm trying to convert our requirejs calls to use SystemJS, but I'm not exactly sure what I'm doing wrong.
Our original calls look like this:
return function(callback) {
  requirejs(["/app/shared.js"], function(result){
    callbackFunction = callback;
    callback(dashboard);
    main();
  })
}
And what I'm trying instead is:
return function(callback) {
  console.log(callback.toString())
  SystemJS.import('app/shared.js').then(function(result){
    callbackFunction = callback;
    callback(dashboard);
    main();
  });
}
I've had to remove some leading / to get things to load properly, which is fine, but I've now run into an issue where variables that were defined at the top of shared.js aren't visible in my local main.js file. In my browser console I get:
Potentially unhandled rejection [1] ReferenceError: dashboard is not defined
shared.js defines dashboard:
var dashboard = { rows: [], }
// Other definitions...
define(["/app/custom-config.js", /* etc */]);
I guess I have two questions:
is this the correct way to replace requirejs calls?
if so, why aren't my variables from shared.js accessible?
For a fuller picture, main() just sets up the dashboard object, and then calls callbackFunction(dashboard) on it.
Your problem can be reduced to the following case where you have two AMD modules, with one that leaks into the global space, and the 2nd one that tries to use what the first one leaked. Like the two following modules.
src/a.js requires the module that leaks and depends on what that module leaks:
define(["./b"], function () {
console.log("a loaded");
callback();
});
src/b.js leaks into the global space:
// This leaks `callback` into the global space.
var callback = function () {
console.log("callback called");
}
define(["./b"], function () {
console.log("b loaded");
});
With RequireJS, the code above will work. Oh, it is badly designed because b.js should not leak into the global space, but it will work. You'll see callback called on the console.
With SystemJS, the code above won't work. Why? RequireJS loads modules by adding a script element to the header and letting that script element execute the module's code, so callback ends up in the global space in exactly the same way it would if you had written your own script element with an src attribute pointing to the script. (You'd get a "Mismatched anonymous define" error, but that's a separate issue that need not detain us here.)
SystemJS, by default, uses eval rather than creating script elements, and this changes how the code is evaluated. Usually it does not matter, but sometimes it does. In the case at hand, callback does not end up in the global space, and module a fails.
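You can see the difference with a quick experiment: a top-level var in a classic script element becomes a window property, but a var inside a function wrapper (roughly how an eval-based loader executes code) stays local:
new Function('var leaked = 1;')(); // function-wrapper scope
typeof window.leaked; // "undefined"

var s = document.createElement('script');
s.textContent = 'var leaked = 1;'; // classic script element
document.head.appendChild(s);
typeof window.leaked; // "number"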
Ultimately, your AMD modules should be written so that they don't use the global space to pass information from one another.
However, there is another solution which may be useful as a stepping-stone towards a final solution. You can use scriptLoad: true to tell SystemJS to use script elements like RequireJS does. (See the documentation on meta for details and caveats.) Here is a configuration that does that:
System.config({
  baseURL: "src",
  meta: {
    "*": {
      scriptLoad: true, // This is what fixes the issue.
    }
  },
  packages: {
    // Yes, this empty package does something. It makes `.js` the
    // default extension for modules.
    "": {}
  },
});
// We have to put `define` in the global space
// so that our modules can find it.
window.define = System.amdDefine;
If I run the example code I've given here without scriptLoad: true, then module a cannot call the callback. With scriptLoad: true, it can call the callback and I get on the console:
b loaded
a loaded
callback called

Using functions from external JavaScript libraries in Moodle plugins

I am making a Moodle plugin and wanted to use bowser to detect the user's web browser. I referenced the file by putting
$PAGE->requires->js( new moodle_url($CFG->wwwroot.MOODLE_TINYMCE_RECORDRTC_ROOT.'tinymce/js/bowser.js') );
in the plugin's plugintype_pluginname.php file (placeholders of course), and I call the bowser function from the plugin's module.js file.
When I load the plugin (it appears as a button in TinyMCE), the console throws ReferenceError: bowser is not defined, so I'm assuming this means Moodle doesn't make bowser's functions globally available.
I read many in many places that I need to wrap my code in an AMD, or something to that effect, but after lots of reading it still goes over my head. Is there any way to make bowser's functions available to the main plugin module?
Note: This works for me in Moodle 3.3.2, ymmv.
Put bowser.js into my_plugin_folder/amd/src/.
When using the original bowser.js I got Uncaught TypeError: bowser._detect is not a function. I don't exactly understand why I get this error but here's one way to fix it: Replace the top code block in bowser.js with this one from umdjs/umd.
Your file should now look like this:
(function (root, factory) {
  if (typeof define === 'function' && define.amd) {
    // AMD. Register as an anonymous module.
    define([], factory);
  } else if (typeof module === 'object' && module.exports) {
    // Node. Does not work with strict CommonJS, but
    // only CommonJS-like environments that support module.exports,
    // like Node.
    module.exports = factory();
  } else {
    // Browser globals (root is window)
    root.returnExports = factory();
  }
}(typeof self !== 'undefined' ? self : this, function () {
  // module definition here
  return bowser
}));
Moodle bundles all JavaScript modules together so that clients don't need to perform a separate HTTP request to get each one. This bundle is called first.js. It contains all modules that aren't lazy-loaded. If you load a Moodle page now it should contain the contents of bowser.js with some values replaced by Moodle.
If you don't want bowser to be loaded on every page, you can just rename it to bowser-lazy.js. Then it should only be loaded when you use it.
You can test if it worked by calling:
require(['plugintype_pluginname/bowser'], function(bowser) {
  var ua = bowser._detect(navigator.userAgent);
  console.log(ua);
});
Seems like you need to change the require call to use bowser-lazy instead of bowser when you want to use lazy-loading.

Why in Firefox Addon the imported objects can not access the importers' imported modules?

For clarification, please consider the following simplified example:
one.js
Components.utils.import('resource://gre/modules/Services.jsm');
let obj = {
  init: function() {
    Components.utils.import('chrome://myaddon/modules/two.jsm', this);
  }
  // code here has access to Services.jsm
}
two.js
this.EXPORTED_SYMBOLS = ['abc'];
this.abc = {
  // abc is imported into obj()
  // however, as part of obj(), abc{} does not have access to Services.jsm
}
I know that this is how it works but the question is why?
The result is that, for example, Services.jsm has to be imported into every module.
Although Firefox caches the modules and there isn't much of a performance difference, I would like to know if the repeated importation can be avoided?
Like #felix-kling already mentioned, that's due to module-level isolation, and it makes a lot of sense if you think about it. If it were otherwise, not only Services but also abc would be visible to other modules.
There is another important reason, though:
Since JS code modules are initiated once and cached after that, what would happen if you imported two.jsm twice, once from a module that had already imported Services.jsm and once from another module that had not? Whether two.jsm "sees" Services would then depend on which of the other modules was imported first! Which would be extremely nasty.
In that context, your comment about "abc is imported into obj()" is wrong. Your code actually imports abc into the top-level scope. Cu.import will always import into the top-level scope, unless you explicitly specify another scope to import to.
"abc" in this; // false
"abc" in obj; // false
obj.init();
"abc" in this; // true
"abc" in obj; // false!
If you wanted to import two.jsm into obj, you'd need to call Cu.import with a second argument.
let obj = {
  init: function() {
    Components.utils.import('chrome://myaddon/modules/two.jsm', this);
  }
};
"abc" in this; // false
"abc" in obj; // false
obj.init();
"abc" in this; // false
"abc" in obj; // true
But that does not affect the visibility of Services, of course.
It would be helpful, I guess, if Cu.import just auto-imported some modules you'd import anyway, such as Services.jsm and XPCOMUtils.jsm. But this does not and likely will not ever happen, due to legacy reasons and backward-compatibility constraints. (E.g. I had code break that did const {Promise} = Cu.import(..., {}) because ES6 added a default Promise global; that's the kind of backward-compatibility issue/constraint I mean.)
Alternatives?
Well, the obvious one is not to use Cu.import for your own stuff, but use something else. A bunch of add-ons, incl. all SDK add-ons of course, have their own CommonJS-style require() implementation.
You can actually re-use the SDK loader without using the SDK, or with only using selected parts of the SDK if you like. See the loader documentation. I know that Erik creates a loader in the otherwise non-SDK Scriptish add-on.
You can write your own custom loader based on the subscript loader and maybe Sandbox. E.g. I did so in my extSDK boilerplate (all global symbols in loader.jsm == loader.jsm::exports will be visible to each required module).
But doing so may require quite a bit of extra work, extra knowledge, and effort to port existing JS code modules to require()-based modules.
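As a rough illustration of the subscript-loader route, a bare-bones require() could look like this (a sketch only; no error handling, path resolution, or sandboxing):
Components.utils.import('resource://gre/modules/Services.jsm');
var moduleCache = {};
function require(url) {
  if (!(url in moduleCache)) {
    // Each module runs against its own scope object and fills in `exports`.
    var scope = { exports: {}, require: require };
    Services.scriptloader.loadSubScript(url, scope);
    moduleCache[url] = scope.exports;
  }
  return moduleCache[url];
}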
