override setTimeout in require.js - javascript

We are using require.js in our project and we need to override the setTimeout call at line 705. Below is the code containing the setTimeout we need to skip entirely (i.e., run straight past it). The problem is that if I change it directly in the open source code, the change will be lost whenever I switch versions. How can I override this setTimeout from outside, only for the require.js file, and keep the override for as long as I use this lib? Is it possible to do this in an elegant way in JS, globally?
https://github.com/jrburke/requirejs/blob/master/require.js
This is line 705
//If still waiting on loads, and the waiting load is something
//other than a plugin resource, or there are still outstanding
//scripts, then just try back later.
if ((!expired || usingPathFallback) && stillLoading) {
    //Something is still waiting to load. Wait for it, but only
    //if a timeout is not already in effect.
    if ((isBrowser || isWebWorker) && !checkLoadedTimeoutId) {
        checkLoadedTimeoutId = setTimeout(function () {
            checkLoadedTimeoutId = 0;
            checkLoaded();
        }, 50);
    }
}
FYI, the reason we are doing this is:
Chrome: timeouts/interval suspended in background tabs?

You've stated your goal is to work around the throttling that Chrome performs on setTimeout for tabs that are in the background. I do not think it is a good idea to do so but if you must, then you should definitely patch RequireJS instead of messing with setTimeout globally. You said:
if I change it in the open source code explicit when I change version the code will be lost
This is true only if you do not use a sensible method to perform the change. It is possible to do it sensibly. For instance, you can use Gulp to take the require.js file installed in node_modules (after you install RequireJS with npm) and produce a patched file in build. Then you use this patched file in your application. Here is the gulpfile.js:
var gulp = require("gulp");
// Bluebird is a good implementation of promises.
var Promise = require("bluebird");
// fs-extra produces a `fs` module with additional functions like
// `ensureDirAsync`, which is used below.
var fs = require("fs-extra");
// Make it so that for each function in fs that is asynchronous
// and takes a callback (e.g. `fs.readFile`), a new function that
// returns a promise is created (e.g. `fs.readFileAsync`).
Promise.promisifyAll(fs);

var to_replace =
    "if ((isBrowser || isWebWorker) && !checkLoadedTimeoutId) {\n\
                    checkLoadedTimeoutId = setTimeout(function () {\n\
                        checkLoadedTimeoutId = 0;\n\
                        checkLoaded();\n\
                    }, 50);";

var replace_with =
    "if (isBrowser || isWebWorker) {\n\
                    checkLoaded();";

gulp.task("default", function () {
    // Use `fs.ensureDirAsync` to make sure the build directory
    // exists.
    return fs.ensureDirAsync("build").then(function () {
        return fs.readFileAsync("node_modules/requirejs/require.js")
            .then(function (data) {
                data = data.toString();

                // We use the split/join idiom to a) check that we get
                // the string to be replaced exactly once and b)
                // replace it. First split...
                var chunks = data.split(to_replace);

                // Here we check that the result of splitting into
                // chunks is what we expect.
                if (chunks.length < 2) {
                    throw new Error("did not find the pattern");
                }
                else if (chunks.length > 2) {
                    throw new Error("found the pattern more than once");
                }

                // We found exactly one instance of the text to
                // replace, go ahead. So join...
                return fs.writeFileAsync("build/require.js",
                                         chunks.join(replace_with));
            });
    });
});
You need to have run npm install gulp fs-extra bluebird requirejs before running it. At any rate, you can use Gulp, you can use Grunt, or you can use any other system you want to perform a build. The points are:
You have a reproducible and automated method to patch RequireJS. If you install a new version of RequireJS with npm, when you rebuild your software the patch is applied automatically, so long as the code of RequireJS does not change in a way that prevents applying the patch. See the next point for what happens if a change prevents applying the patch.
This method is more robust than overriding setTimeout at runtime. Suppose James Burke decides in a newer version of RequireJS to rename checkLoaded to checkDone and renames the associated variables (so that checkLoadedTimeoutId becomes checkDoneTimeoutId). The gulpfile above will raise an exception when you run it again because it won't find the text to be replaced. You'll have to update the text to be replaced and the replacement so that the patch works with the new version of RequireJS. The benefit here is that you get an early warning that things have changed and that you need to review the patch. You won't have a surprise late in the game, perhaps after you've already delivered a new version of your software to clients.
The methods that override setTimeout at run time will just silently fail to do their job. They'll be looking for a function that contains checkLoadedTimeoutId, which won't exist anymore in the new version. So they will just let RequireJS behave the way it does by default. The failure will be a subtle one. (I've run RequireJS with the proposed custom versions of setTimeout with a project that loads upwards of 50 modules when not optimized. I saw no discernible difference between RequireJS using the stock setTimeout and RequireJS using a custom setTimeout.)
This method does not slow down every use of setTimeout. setTimeout is used by other code than RequireJS. No matter how you cut it, adding code in a custom replacement to setTimeout that starts looking for strings in each function passed to it will make all uses of setTimeout slower.
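For instance (this wiring is my own suggestion, not part of the original answer; the script names are the standard npm lifecycle ones), you can hook the Gulp task into npm so the patch is re-applied automatically on every install:
// package.json (sketch)
{
  "scripts": {
    "postinstall": "gulp"
  }
}
With this in place, npm install regenerates build/require.js, and the task fails loudly if the pattern no longer matches the new RequireJS source.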

You can override setTimeout and check whether the function passed as the callback contains a variable used in that function from require.js (checkLoadedTimeoutId). If yes, call the function immediately; otherwise, call the original setTimeout function.
(function(setTimeoutCopy) {
    setTimeout = function(fn, timeout) {
        if (fn.toString().indexOf("checkLoadedTimeoutId") >= 0) {
            return fn();
        } else {
            return setTimeoutCopy.apply(null, arguments);
        }
    };
}(setTimeout));
Note that there are multiple issues with this code. If you pass any other function to setTimeout whose source happens to contain checkLoadedTimeoutId, it will be executed immediately too. Also, if the require.js code is minified and its variables are renamed, it won't work.
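If you do go this route, you can at least narrow the window of risk. The following is a sketch of my own (not part of the answer above): install the patch before require.js loads, then restore the original setTimeout from your entry module once loading is finished, so the string matching only runs during startup.
// Before require.js is loaded:
(function (originalSetTimeout) {
    // Keep a reference around so the entry module can restore it later.
    window.originalSetTimeout = originalSetTimeout;
    window.setTimeout = function (fn, timeout) {
        // Only intercept callbacks that look like RequireJS's checkLoaded poll.
        if (typeof fn === "function" &&
            fn.toString().indexOf("checkLoadedTimeoutId") >= 0) {
            return fn();
        }
        return originalSetTimeout.apply(window, arguments);
    };
}(window.setTimeout));

// At the end of your entry module, once all modules have loaded:
// window.setTimeout = window.originalSetTimeout;
This still shares the string-matching fragility described above; it only limits how long the patched setTimeout stays active (modules required lazily later would load with the stock, throttled behavior again).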
To sum up, there is no good way to do this. Maybe try to find a different way to achieve what you want. Also be aware that, as Madara Uchiha said:
Changing global functions and objects is almost never a good idea.

If you really need this... try adding the following before loading RequireJS:
function isTimeoutIgnoredFunction(fn, time) {
    // You should tighten this condition for safety, so that
    // unrelated setTimeout calls are not intercepted!
    return time == 50 && fn.toString().indexOf('checkLoadedTimeoutId') > -1;
}

window.setTimeoutOriginal = window.setTimeout;
window.setTimeout = function(fn, time) {
    if (isTimeoutIgnoredFunction(fn, time)) {
        return fn(); // or return true if you don't need to call this
    } else {
        return window.setTimeoutOriginal.apply(this, arguments);
    }
};
This should work in recent Chrome, Firefox, and IE, as long as no minified require.js is used... You will need to rewrite isTimeoutIgnoredFunction for the browsers you support and for a minified require.js file.
To see what string you can use:
console.log((function () {
    checkLoadedTimeoutId = 0;
    checkLoaded();
}).toString());
But in some browsers this can return just something like "Function".
If you do not have many setTimeout calls, this can be a suitable solution...

Another way to achieve this is to fetch the library with an Ajax request and patch it before loading it, but you will need to have already loaded a way to make the request (vanilla JS or jQuery...).
The following code is an example of loading require.js, patching it with a regexp, and then injecting it into the DOM.
$(function(){
    // Here you can put the URL of your own hosted require.js or a URL from a CDN.
    var requirejsUrl = 'https://cdnjs.cloudflare.com/ajax/libs/require.js/2.1.22/require.js';

    function patch(lib){
        var toReplace = /if\s*\(\(isBrowser\s*\|\|\s*isWebWorker\)\s*\&\&\s*!checkLoadedTimeoutId\)\s*\{([^{}]|\{[^{}]*\})*\}/;
        var by = 'if (isBrowser || isWebWorker) { checkLoaded(); }';
        return lib.replace(toReplace, by);
    }

    $.get(requirejsUrl, function(lib){
        var libpatched = patch(lib);
        var script = document.createElement('script');
        script.innerText = libpatched;
        $('body').append(script);
        console.log(window.require); // the patched require.js is loaded
    });
});
<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.1/jquery.min.js"></script>
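If you prefer not to depend on jQuery, the same idea can be sketched with the native fetch API (my adaptation of the snippet above; it assumes a browser that supports fetch):
fetch('https://cdnjs.cloudflare.com/ajax/libs/require.js/2.1.22/require.js')
    .then(function (response) { return response.text(); })
    .then(function (lib) {
        // Same regexp patch as in the jQuery version above.
        var toReplace = /if\s*\(\(isBrowser\s*\|\|\s*isWebWorker\)\s*&&\s*!checkLoadedTimeoutId\)\s*\{([^{}]|\{[^{}]*\})*\}/;
        var patched = lib.replace(toReplace, 'if (isBrowser || isWebWorker) { checkLoaded(); }');
        var script = document.createElement('script');
        script.textContent = patched;
        document.body.appendChild(script);
    });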

Related

How to migrate legacy JS app to modules

I have a large (~15k LoC) JS app (namely a NetSuite app) written in the old-style, all-globals way. The app consists of 26 files and the dependencies between them are totally unclear.
The goal is to gracefully refactor the app into smaller modules. By gracefully I mean not breaking/locking the app for a long time, but doing the refactoring in smaller chunks, with the app remaining usable after each chunk is completed.
An idea I have here is to concat all the JS files we have now into a single-file bundle. After that, some code could be extracted into modules and the legacy code could start importing them. The modules & imports would be transpiled with webpack/whatever, while the legacy code remains in all-globals style. Finally all this is packed into a single JS file and deployed.
My questions are
is there a better approach maybe? This sounds like a typical problem
are there any tools available to support my approach?
I gave webpack a try and I haven't managed to get what I want out of it. The export-loader and resolve-loader are not options because of the number of methods/vars that need to be imported/exported.
Examples
Now code looks like
function someGlobalFunction() {
    ...
}

var myVar = 'something';
// and other 15k lines in 26 files like this
What I would ideally like to achieve is
function define(...) { /* function to define a module */ }
function require(moduleName) { /* function to import a module */ }

// block with my refactored-out module definitions
define('module1', function () {
    // extracted modularised code goes here
});
define('module2', function () {
    // extracted modularised code goes here
});

// further down goes legacy code, which can import new modules
var myModule = require('myNewModule');
function myGlobalLegacyFunction() {
    // use myModule
}
I'm following an approach similar to that outlined here: https://zirho.github.io/2016/08/13/webpack-to-legacy/
In brief:
Assume that you can configure webpack to turn something like
export function myFunction(){...}
into a file bundle.js that a browser understands. In webpack's entry point, you can import everything from your module and assign it to the window object:
// Using a namespace import to get all exported things from the file.
import * as Utils from './utils'
// Inject every function exported from utils.js into the global scope (window).
Object.assign(window, Utils)
Then, in your html, make sure to include the webpack output before the existing code:
<script type="text/javascript" src="bundle.js"></script>
<script type="text/javascript" src="legacy.js"></script>
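The assumed webpack configuration can be as small as the following sketch (the entry and output paths are illustrative, not from the original post):
// webpack.config.js (sketch)
const path = require('path');

module.exports = {
    // The entry point imports everything from the new modules and
    // assigns it to window, as shown above.
    entry: './src/index.js',
    output: {
        filename: 'bundle.js',
        path: path.resolve(__dirname, 'dist')
    }
};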
Your IDE should be able to help identify clients of a method as you bring them into a module. As you move a function from legacy.js to myNiceModule.js, check to see if it still has clients that are aware of it globally - if it doesn't, then it doesn't need to be globally available.
No good answer here so far, and it would be great if the person asking the question would come back. I will pose a challenging answer saying that it cannot be done.
All module techniques end up breaking the sequential nature of execution of scripts in the document header.
All dynamically added scripts are loaded in parallel and they do not wait for one another. Since you can be certain that almost all such horrible legacy javascript code depends on sequential execution, where the second script can depend on the previous one, loading those scripts dynamically can break things.
If you use some module approach (either ES2018 modules, or require.js, or your own), you need to execute the code that depends on the loading having occurred in a callback or Promise/then function block. This destroys the implicit global context, so all the spaghetti coils of global functions and vars we find in legacy javascript code files will not be defined in the global scope any more.
I have determined that only two tricks could allow a smooth transition:
Either some way to pause continuation of a script block until the import Promise is resolved.
const promise = require("dep1.js", "dep2.js", "dep3.js");
await promise;
// legacy stuff follows
or some way to revert the scope of a block inside a function explicitly into the global scope.
with(window) {
    function foo() { return 123; }
    var bar = 543;
}
But neither wish was granted by the javascript fairy.
In fact, I read that even the await keyword essentially just packs the rest of the statements into a function to call when the promise is resolved:
async function() {
    ... aaa makes promise ...
    await promise;
    ... bbb ...
}

is just, I suppose, no different from

async function() {
    ... aaa makes promise ...
    promise.then(r => {
        ... bbb ...
    });
}
So this means the only way to fix this is to keep the legacy javascript statically in head/script elements and slowly move things into modules, while continuing to load them statically.
I am tinkering with my own module style:
(function(scope = {}) {
    var v1 = ...;
    function fn1() { ... }
    var v2 = ...;
    function fn2() { ... }

    return ['v1', 'fn1', 'v2', 'fn2']
        .reduce((r, n) => {
            r[n] = eval(n);
            return r;
        }, scope);
})(window)
By calling this "module" function with the window object, the exported items are put on it, just as legacy code would do.
I gleaned a lot of this from knockout.js, whose readable source file has everything together in exactly such module function calls; ultimately all features end up on the "ko" object.
I hate using frameworks and "compilation". As for generating the sequence of HTML script tags to load files in the correct order from a topologically sorted dependency tree: while I could write such a thing quickly, I won't, because I do not want any "compilation" step, not even my own.
UPDATE: https://stackoverflow.com/a/33670019/7666635 gives the idea that we can just Object.assign(window, module) which is somewhat similar to my trick passing the window object into the "module" function.

Is it possible to create a require.js module that decides for itself when it is done loading?

In a "normal" require.js function module, the module is considered "loaded" as soon as the module function returns:
define(function() {
    // As soon as this function returns, the module is "loaded"
});
But I have a module that needs to do some asynchronous script loading (specifically, including some Google Javascript APIs) and I don't want my module to be considered "loaded" until I say it is.
When creating a loader plugin for require.js, you are supplied with an "onload" function that you can call when the plugin is done loading. This would be perfect for my case, but I don't want my Google API wrapper to be a plugin, I want it to appear to be a "normal" module. Plugins are treated differently by the optimizer and I don't want that headache. Also plugins must be required using special syntax, and I'd like to avoid having to remember that every time I use it.
I have combed through the API several times without finding a way to accomplish what I'm trying to do. Is there an undocumented (or poorly documented) method of defining a module, where the module itself gets to decide when it should be considered "loaded"?
As an example, an implementation like this would be awesome, if it existed:
define(["onload"], function(onload) {
setTimeout(onload, 5000);
});
The first time this module was required, it should take 5 seconds to "load".
We bootstrap a lot of stuff using the convention below which is based on early releases (0.3.x) of the MEAN stack and uses the awesome async library.
Using your example, it might look something like this:
// bootstrap.js
var fs = require('fs');
var async = require('async');

// Array of tasks to pass to async to execute.
var tasks = [];

// Require path.
var thePath = __dirname + '/directoryContainingRequires';

// Build array of tasks to be executed in parallel.
fs.readdirSync(thePath).forEach(function (file) {
    // Only consider .js files (~ turns indexOf's "not found" -1 into falsy 0).
    if (~file.indexOf('.js')) {
        var filePath = thePath + '/' + file;
        tasks.push(function(callback) {
            require(filePath)(callback);
        });
    }
});

// Execute the tasks in parallel.
async.parallel(
    tasks,
    function(err) {
        if (err) console.error(err);
        console.log('All modules loaded!');
        process.exit(0);
    }
);
The file being required looks similar to this:
// yourModule.js
module.exports = function(moduleCallback) {
    setTimeout(function() {
        console.log('5 seconds elapsed');
        moduleCallback(null);
    }, 5000);
};

Where to put "Q.longStackSupport = true"?

From the documentation of Q (the Javascript promise library):
Q.longStackSupport = true;
This feature does come with somewhat-serious performance and memory overhead, however. If you're working with lots of promises, or trying to scale a server to many users, you should probably keep it off. But in development, go for it!
I find myself always writing code like this:
var Q = require('q');
Q.longStackSupport = true;
However, if I decided to turn off longStackSupport, I would have to touch a lot of files in my code.
So, I wonder if there is a more elegant solution:
Is there a recommended pattern when including Q?
Is it sufficient to call Q.longStackSupport only once?
Yes, it is sufficient to only call it once in one place.
In init.js, or whatever your root file is, I would put
var Q = require("q");

if (process.env.NODE_ENV === "development") {
    Q.longStackSupport = true;
}
Then this will automatically enable it if you have the NODE_ENV environment variable set to development.
$ export NODE_ENV=development
$ node init.js
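If you'd rather not rely on an environment variable, another pattern (a sketch of my own, not from the answer above; the file name is illustrative) is to funnel every require of Q through one local wrapper module, so the flag lives in exactly one place:
// q-wrapper.js (hypothetical): require this file everywhere instead of "q".
var Q = require("q");

// Flip this single line to turn long stack traces on or off project-wide.
Q.longStackSupport = true;

module.exports = Q;
Then elsewhere in your code: var Q = require("./q-wrapper");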

require.js for non-browser platform or the right way to use Function constructor

I am trying to use requirejs in an Apple TV project. We have a lot of requirejs modules written for the web; it would be cool if we could re-use them.
The Apple TV platform has certain limitations and it's sorta impossible to use requirejs "as is". There's no DOM in the common sense.
One possible way I found to overcome the problem is: first load require.js itself, then override its .load() method so that whenever require('foo') gets called it loads foo.js via a simple XHR call:
requirejs.load = (context, moduleName, moduleUrl) ->
  reqModule = new XMLHttpRequest()
  reqModule.open('GET', appRoot+moduleUrl, true)
  reqModule.send(null)
  reqModule.onreadystatechange = ->
    if reqModule.readyState is 4 and reqModule.status is 200
      fn = (new Function(reqModule.responseText))() # parse module
      context[moduleName] = fn
      context.completeLoad(moduleName)
So this works for normally defined modules like this:
define [], ->
  someField: 'empty field'
It even works for self-executing functions like this (with a shim configured):
(myFoo = ->
  someField: "empty field"
)()
For example, Underscore.js wraps itself in a self-executing wrapper.
However, that doesn't work with modules defined like this:
myFoo = ->
  someField: "empty field"
Question: how can I make it work for all 3 cases? When used in a browser, requirejs successfully loads all of them.
One solution I found is to wrap the function in a define block for non-wrapped modules like the last example, so instead of doing fn = (new Function(reqModule.responseText))() I would do:
fn = define [], (new Function("return "+reqModule.responseText))()
But then that would break loading for both the first and second cases. Is there a way to find out whether a function is wrapped in a self-executing block or not? How can I distinguish the first two cases from the last one?
Using the code in the question as a starting point, I was able to get the following code to work. I don't have Apple TV so I cannot test it on Apple TV. I've tested it in a browser. It is able to load all 3 types of modules you've shown in your question, provided that the 2nd and 3rd modules have appropriate shims. So the logic is sound. The missing piece is what needs to stand in for window in eval.call(window, ...). In Node.js, it would be global. I don't know the equivalent in Apple TV.
requirejs.load = function(context, moduleName, moduleUrl) {
    var reqModule = new XMLHttpRequest();
    reqModule.open('GET', moduleUrl, true);
    reqModule.send(null);
    return reqModule.onreadystatechange = function() {
        if (reqModule.readyState === 4 && reqModule.status === 200) {
            eval.call(window, reqModule.responseText);
            return context.completeLoad(moduleName);
        }
    };
};
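For reference, the shims mentioned above could look like the following sketch (the module and global names are placeholders, not from the question):
requirejs.config({
    shim: {
        // For the 2nd and 3rd module shapes, which set a global `myFoo`
        // instead of calling define().
        'myFoo': { exports: 'myFoo' }
    }
});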
If I were you, I would use Browserify
Write your browser code with node.js-style requires.

Strange behavior with RequireJS using CommonJS syntax

I'm seeing strange behavior with RequireJS using the CommonJS syntax. I'll try to explain the context I'm working in as well as possible.
I have a JS file, called Controller.js, that registers for input events (a click) and uses a series of if statements to perform the correct action. A typical if statement block can be the following.
if (something) {
    // RequireJS syntax here
} else if (other) { // ...
To implement the RequireJS syntax I tried two different patterns. The first one is the following. This is the standard way to load modules.
if (something) {
    require(['CompositeView'], function(CompositeView) {
        // using CompositeView here...
    });
} else if (other) { // ...
The second, instead, uses the CommonJS syntax, like this:
if (something) {
    var CompositeView = require('CompositeView');
    // using CompositeView here...
} else if (other) { // ...
Both patterns work as expected, but I've noticed a strange behavior through Firebug (the same happens with the Chrome tool). In particular, with the second one, the CompositeView file is downloaded up front even if the branch handling the something condition is never taken. On the contrary, with the first solution the file is downloaded only when requested.
Am I missing something? Is it due to variable hoisting?
This is a limitation of the support for CommonJS-style require. The documentation explains that something like this:
define(function (require) {
    var dependency1 = require('dependency1'),
        dependency2 = require('dependency2');

    return function () {};
});

is translated by RequireJS to:

define(['require', 'dependency1', 'dependency2'], function (require) {
    var dependency1 = require('dependency1'),
        dependency2 = require('dependency2');

    return function () {};
});
Note how the arguments to the 2 require calls become part of the array passed to define.
What you say you observed is consistent with RequireJS reaching inside the if and pulling the required module up into the define, so that it is always loaded even if the branch is not taken. The only way to prevent RequireJS from always loading your module is what you've already discovered: you have to use require with a callback.
