RequireJS: finding the script responsible for an error

I'm looking for an elegant way to find out the full path to a script that caused a timeout error (i.e. failed to load a dependency).
requirejs.onError = function (err) {
// this works:
var script_that_failed_loading = err.originalError.target.src
// now I want:
var the_script_responsible_for_this = <???>
};

Use loader level errorbacks
require(["foo","bar"],function(foo,bar){
// perform some action
},function(error){
// handle error here
});
Note that the failed module names are given in error.requireModules. Such errorbacks can be used both for top-level require() calls and for individual modules. If you have multiple fallback paths for a resource, use paths fallbacks.
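For the global handler from the question, here is a hedged sketch built on the documented err.requireType and err.requireModules properties; the logging is purely illustrative:
requirejs.onError = function (err) {
    if (err.requireType === 'timeout') {
        // Module IDs whose loading timed out (documented RequireJS property)
        console.error('Modules that failed to load:', err.requireModules);
        // The script element that triggered the error, as in the question
        if (err.originalError && err.originalError.target) {
            console.error('Script that failed loading:', err.originalError.target.src);
        }
    } else {
        // Re-throw anything we don't handle so it isn't silently swallowed
        throw err;
    }
};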
In my personal experience, I humbly disagree with ddotsenko. We're using RequireJS in our production environment; if set up properly, RequireJS is very reliable.

RequireJS chose a somewhat unreliable, disconnected mechanism for catching errors: it uses a timer to check whether it eventually got what it expected. Some other AMD loaders use more direct mechanisms to detect error conditions during loading.
My preferred AMD loader is CurlJS, which is hard-wired to catch loading and parsing errors.
Because of that architectural choice it detects error conditions more reliably, and you can attach error handlers directly to each require call. If error catching is important to you, I strongly suggest looking at CurlJS.
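As a hedged sketch of what that looks like with curl.js (based on its promise-style API; the module names are placeholders):
curl(['app/foo', 'app/bar']).then(
    function (foo, bar) { /* all dependencies loaded successfully */ },
    function (err) { /* a dependency failed to load or parse */ }
);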

Related

Chrome automatically formats Error stacks, how does this work?

The content of (new Error('foo')).stack looks something like this:
Error: foo
at Object.notifier (http://localhost:6969/js/main.js:12705:37)
at trackHookChanges (http://localhost:6969/js/main.js:1813:27)
at UseState (http://localhost:6969/js/main.js:1982:13)
at K._.data (http://localhost:6969/js/main.js:70174:6005)
at K (http://localhost:6969/js/main.js:70174:6380)
at Z (http://localhost:6969/js/main.js:70174:9187)
However, when I console.log it, it looks like:
Error: foo
at Object.notifier (wdyr.ts:10)
at trackHookChanges (whyDidYouRender.js:1306)
at UseState (whyDidYouRender.js:1475)
at K._.data (index.esm.js:1)
at K (index.esm.js:1)
at Z (index.esm.js:1)
Is Chrome devtools using sourcemaps to automatically change the string being logged? Is there an easy way to access the source file names in my code? I want to ignore errors originating from a certain NPM module.
Unfortunately (but luckily for devs) yes, Chrome uses sourcemaps to format errors in the console, and there is (still) no way to access the same function it uses or the output it produces. Even if it were possible, it would work only on a specific browser/platform.
TLDR
Emulate the browser's sourcemap resolution with StackTraceJS, or filter your errors by their prototype or any of their properties (like Error.message, for example).
Discussion
JS error stacktraces are a mess, and therefore unreliable:
they are very dependent on the running environment; if you run code on Chrome you could end up with a different stack than you would on Firefox, IE or Node (even though environments have lately been converging toward a "stacktrace agreement").
the maximum length of the error stacktrace is (almost always) 10 frames, so if your function (hook) sits deeper in the call chain than that, you will never catch it.
internal or delayed callbacks can erase/change/augment the stacktrace of a function in certain environments (it can be very hard, sometimes impossible, to catch the full stacktrace of a callback called inside a setTimeout, for example).
Partial solution (StackTraceJS)
If you can afford to make an HTTP request for a sourcemap, you can exploit the same mechanism that Chrome (or any other browser) uses to parse the error stacktrace, map the frames back to the original files, and filter out the ones you don't like. The downside is that your code must be reworked into a promise-like chain (because of the HTTP request).
Luckily there is already a library that makes this process much, much easier: StackTraceJS; you can give it a try.
This would be its usage (from library docs):
var error = new Error('BOOM!');
StackTrace.fromError(error).then(callback).catch(errback)
/*
==> Promise([
{functionName: 'fn', fileName: 'file.js', lineNumber: 32, columnNumber: 1},
{functionName: 'fn2', fileName: 'file.js', lineNumber: 543, columnNumber: 32},
{functionName: 'fn3', fileName: 'file.js', lineNumber: 8, columnNumber: 1}
], Error)
*/
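Building on that, here is a hedged sketch of the filtering the question asks about; the node_modules path and the reportError helper are illustrative, not part of the library:
StackTrace.fromError(error).then(function (frames) {
    // true if any mapped frame points into the module we want to ignore
    var fromIgnoredLib = frames.some(function (frame) {
        return frame.fileName.indexOf('node_modules/some-lib') !== -1;
    });
    if (!fromIgnoredLib) {
        reportError(error); // only report errors not originating from the ignored module
    }
});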
Side Note
As you stated in the question comments, you are using React, and the usual pipeline uses webpack or another JS bundler to output a single JS file from all the dependencies. During development you may have no trouble finding the file from the error stack, but in production you could omit sourcemap information from the bundle, or have internal/uglified filenames which are not linked to the original files. This means the behaviour of your code could change between dev and prod configurations depending on your build pipeline.
Theoretical solution
The (prototype-OOP) approach is to use the prototype to discriminate between Error types in order to filter out unwanted behaviours.
So first of all you should use a custom class to define the errors thrown by your application/library (see the Custom Error Types section on MDN). Having done so, you should throw (or extend) only your CustomError(s) in your code.
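A minimal custom error class in the spirit of that MDN section might look like this (the name MyError is just an example):
class MyError extends Error {
    constructor(message) {
        super(message);
        this.name = 'MyError'; // so logs and stacks show the custom type
    }
}
// usage: throw new MyError('something app-specific went wrong');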
Secondly, you should filter errors by their type/properties and not by their source file, so (if you can) check which Error classes the 3rd party function can throw.
This way it's easy to isolate those 3rd party errors: you can do a simple inheritance check within the try/catch block:
try { /* dangerous code */ }
catch (ex) {
if (ex instanceof MyError) { /* handle your errors */ }
else if (ex instanceof The3rdPartyCustomError) { /* handle specific 3rd party CustomError */ }
else if (ex.__proto__ instanceof Error) { /* handle generic 3rd party CustomErrors */ }
else { /* handle native errors (or bad practice 3rd party errors) */ }
}
But all of this theory can be difficult to implement, especially because 3rd party libraries very rarely implement their own CustomError classes, so you will end up handling only native errors and your own defined classes.
Give it a try and check what kinds of errors your 3rd party libs can throw.
Maybe the simpler solution is to filter the errors by Error.message or any other property that works better in your domain.

NodeJS - Dynamically import built in modules

I'd like to get a built in module (for example Math or path or fs), whether from the global object or require, I thought about doing something like this:
function getModuleByName(name) {
return global[name] || require(name);
}
Is there a way to check that it is indeed a module and not something else? Would this make a security problem?
Is there a way to check that it is indeed a module and not something else?
There are other methods, but here's an example:
function getModuleByName(name)
{
let module = null;
try {
module = require(name);
} catch (e) {
// Recommend Logging e Somewhere
}
return module;
}
This will fail gracefully, returning null where the module does not exist, or return the module otherwise.
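If you also want to restrict the lookup to genuine built-ins, a hedged sketch using Node's documented require('module').builtinModules list (available since Node 9.3; the function name is illustrative) could look like this:
const { builtinModules } = require('module');

function getBuiltinByName(name) {
    // Only names like 'path' or 'fs' appear here; globals like Math are not modules
    if (!builtinModules.includes(name)) {
        return null;
    }
    return require(name);
}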
Would this make a security problem?
Quite possibly; it depends on how it's used. I'd argue it is more of a general design issue, however, and as a blanket rule I'd say avoid doing it (without any context, though, you may have a very good reason).
You, like anyone, obviously have a finite set of modules you could be loading. These modules have all been selected by you for your application for specific reasons, or are bundled into your Node version natively and are expected parts of your environment.
What you are doing by introducing this functionality is allowing unexpected elements into your environment. If you are using getModuleByName to access a third party library, you should already know that library is available, and as such there's no reason why you can't just require it directly.
--
If you do think your use case warrants this, please let me know what it is as I may never have encountered it before. I have used dynamic imports like the following:
https://javascript.info/modules-dynamic-imports
But that hasn't been for global packages/libraries; it was for dynamic references to modules built internally to the application (i.e. routing to different views, invocation of internal scripts).
These I have protected by ensuring file paths can't be altered: whitelisting the target directories, making sure each script follows a strict interface per use case, and failing gracefully where a module doesn't exist (an error output of "this script does not exist" for the script usage, and a 404 view for the routing example).

Debugging Javascript/ReactJS errors

I'm building a small application with ReactJS and sometimes find it difficult to debug it.
Every time I make some JavaScript error, like a missing let/var in front of a new variable or a missing require for a component that I later use, my application just stops working (the code does not execute beyond the line where the error is), but I don't get any errors in the browser's console. It seems as if some ReactJS code were intercepting errors, maybe handling them in some custom way. Is there anything like that in ReactJS? How can I see the errors in the console?
I'm using gulp/gulp-connect/browserify set to run the application.
Let me know if you need any additional data or code samples, I'll update the question.
If you know that an error is thrown but swallowed by some other code, you can enable "pause on exceptions" in your browser's debugger so that execution pauses when an exception is thrown, even if it is caught somewhere.
Note however that libraries sometimes deliberately trigger exceptions while testing whether the browser supports certain features. You'd have to step over those.
Using React Hot Loader, which is a Webpack plugin, should solve most of the problems you have in the development process. It's easy to integrate into an existing project and there are quite a few examples of how to put all the pieces together.
As a result:
your code changes will be pushed to the browser
in case of an error you will have a meaningful stack trace in the browser console.
I'm guessing that the invalid JS syntax is causing your gulp process to fail, which would result in your application not being bundled/deployed to the browser at all.
It seems like there should be errors in your system console (where gulp is running), as opposed to your browser console. Possibly your gulp process crashes when this happens too. If there are no errors in your console, you may have to listen for them. This post has an example of how to log errors from browserify:
var gulp = require('gulp');
var browserify = require('browserify');
var source = require('vinyl-source-stream');

gulp.task('browserify', function(){
    var b = browserify();
    b.add('./main.js');
    return b.bundle()
        .on('error', function(err){
            console.log(err.message); // log the bundling error instead of swallowing it
            this.end();
        })
        .pipe(source('main.out.js'))
        .pipe(gulp.dest('./dist'));
});
Probably the best solution is to run your code through jshint before browserify so that you aren't trying to browserify syntactically invalid code.
Caveat: not a current gulp user
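As a hedged illustration of that lint-before-bundle suggestion (assuming gulp 3 and the gulp-jshint plugin; the file paths are placeholders):
var gulp = require('gulp');
var jshint = require('gulp-jshint');

gulp.task('lint', function () {
    return gulp.src('./main.js')
        .pipe(jshint())
        .pipe(jshint.reporter('default')) // print lint problems to the gulp console
        .pipe(jshint.reporter('fail'));   // make the task fail on lint errors
});

// gulp 3 style: run lint before bundling
// gulp.task('browserify', ['lint'], function () { /* ... */ });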
I suffered from a similar problem: missing let/var, missing require, and other trivial mistakes inside the React context produced no visible errors.
In my case, the cause was a mistake inside a Promise chain. A Promise seems to suppress any exception thrown in its callbacks unless you handle the rejection.
I could resolve the problem by handling the exception like below.
var Promise = require('es6-promise').Promise;
var promise = new Promise(...)
promise
    .then(function(data){...})
    .catch(function(e) {
        console.error(e.stack)
    });
react-slingshot is a starter kit that shows errors at compile time and shows stack traces in the browser. It also has testing set up.

Declaring class differently for different Dojo versions without duplicating code?

I have an iWidget designed for IBM Connections, and my javascript code depends on Dojo (which is included by default in Connections).
It currently works in Connections 4.0 and 4.5, but is broken in Connections 5.0 (released last week), as Dojo has been updated to v1.9 and complains about my use of dojo.require.
These messages appear in the browser console when my widget tries to load on Connections 5.0:
Avoid calling dojo.require() to load classes at runtime, use net.jazz.ajax.xdloader.load_async() instead. Function '(anonymous)' required class 'dojox.atom.io.model'.
Avoid calling dojo.require() to load classes at runtime, use net.jazz.ajax.xdloader.load_async() instead. Function '(anonymous)' required class 'dojox.atom.io.Connection'.
I want to make conditional code that uses different ways of defining my widget class and requiring other Dojo modules depending on the Dojo version.
The widget javascript currently looks like this:
dojo.provide('insightCommunityWidgetClass');
dojo.require('dojox.atom.io.model');
dojo.require('dojox.atom.io.Connection');
dojo.declare('insightCommunityWidgetClass',null,{
// Class fields and methods. Currently 680 lines uncompressed.
});
I haven't yet created a version that works with Dojo 1.9 / Connections 5.0, but I think it would look something like this (and I'll have to make my javascript file name match the desired class name):
define(['dojo/_base/declare','dojox/atom/io/model','dojox/atom/io/Connection'], function(declare){
return declare(null, {
// Class fields and methods.
});
});
How can I have both of these in one file and choose between them without duplicating the class body?
Update:
I've attempted some conditional code, checking (define && define.amd) as suggested by Dimitri, tested this on Connections 4.0 and 4.5, and am getting very weird behaviour.
Temporarily ignoring any attempt to not duplicate my class, here's some conditional code which I've used exactly as shown, with a severely reduced widget class:
if (define && define.amd) {
console.log('Declaring insightWidgetClass with AMD (new method).');
define(['dojo/_base/declare','dojox/atom/io/model','dojox/atom/io/Connection'],
function(declare){
return declare(null,{
SVC_INV: 1,
onLoad: function() {
console.log('insightWidgetClass onLoad.');
}
});
}
);
} else {
console.log('Declaring insightWidgetClass with dojo.declare (old method).');
dojo.provide('insightWidgetClass');
dojo.require('dojox.atom.io.model');
dojo.require('dojox.atom.io.Connection');
dojo.declare('insightWidgetClass',null,{
SVC_INV: 1,
onLoad: function() {
console.log('insightWidgetClass onLoad.');
}
});
}
This seems not to run at all. None of my console.log messages appear in the browser console.
If I comment out the conditionals and make it so the only active code is the block after else, it runs. I get the "declaring ... (old method)" and the "insightWidgetClass onLoad" console messages.
I thought maybe enclosing the Dojo provide, require and declare calls in any kind of block might cause a problem, so I tested just putting the working code in an if (true) { block, and it still works.
The last thing I've tried at this point is adding this one line before everything else, to see what define is:
console.log('dojo define',define);
... which breaks it. No console messages at all from my code.
Then I remove the define argument from that new line, so it's only sending a string to the console, and the code works again.
It seems like any mention of a define identifier silently stops the rest of the code from running.
There are no errors or warnings in the console indicating a problem. All I can say to that is: WTF?!
Now back to checking dojo.version instead.
Normally both should still work; dojo.provide() and dojo.require() are deprecated, but not entirely removed. Just make sure that you're loading Dojo in synchronous mode.
Besides that, the AMD way of coding was introduced in Dojo 1.7, which means it should be supported on IBM Connections 4.5 as well (though I don't know about IBM Connections 4).
But if you really want to keep both code bases, you can simply refer to the same object instead of duplicating it, for example:
var myModule = {
// Class fields and methods.
};
if (dojo.version.major == 1 && dojo.version.minor == 9) {
define(['dojo/_base/declare','dojox/atom/io/model','dojox/atom/io/Connection'], function(declare){
return declare(null, myModule);
});
} else {
dojo.provide('insightCommunityWidgetClass');
dojo.require('dojox.atom.io.model');
dojo.require('dojox.atom.io.Connection');
dojo.declare('insightCommunityWidgetClass',null, myModule);
}
Or you could use the following check:
if (typeof define === 'function' && define.amd) {
// AMD style code
} else {
// Non-AMD style code
}
This is the approach most cross-loader libraries use. Libraries that work on AMD loaders (Dojo, RequireJS) as well as on Node.js, or simply via global namespacing, use a similar piece of code to determine how to load their module.
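Putting the two ideas together, a hedged sketch that uses the typeof check (which also avoids the ReferenceError the question's update ran into) while keeping a single class body might look like this:
var classBody = {
    // Class fields and methods, written once.
};

if (typeof define === 'function' && define.amd) {
    // Dojo 1.7+ / Connections 5.0: AMD loader
    define(['dojo/_base/declare', 'dojox/atom/io/model', 'dojox/atom/io/Connection'],
        function (declare) {
            return declare(null, classBody);
        });
} else {
    // Older Dojo / Connections 4.x: legacy loader
    dojo.provide('insightCommunityWidgetClass');
    dojo.require('dojox.atom.io.model');
    dojo.require('dojox.atom.io.Connection');
    dojo.declare('insightCommunityWidgetClass', null, classBody);
}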
The problem is not in your code; it should work as it is. We recently faced the same problem and identified the cause.
Connections 5 uses an AMD version of the Jazz framework, which provides its own Dojo loader. This framework is used to aggregate the needed Dojo modules into a single JS file, which limits the number of requests to the server. Unfortunately, this loader no longer handles synchronous module loading. It fails with the warning you reported when dojo.require() requests a module that has not yet been loaded by the aggregator. If the module was already loaded, because it was part of the Jazz aggregated file, then it works. This explains why you can dojo.require() some modules, but not all of them.
-> A workaround is to deploy a server-side OSGi bundle so that the modules you need become part of the aggregated JS file. There is a documented extension point for this. This can unblock you while also improving the performance of your page.
Now, we opened a PMR to IBM support. The development team is working on a resolution. We hope that they will be able to deliver a fix soon.
We reported the following issues:
dojo.require()
dojo.requireLocalization()
dojo.registerModulePath()/require({paths:})
If you think of something else, please let me know.

Is it a good idea to use conditional dependencies in AMD modules?

I'm thinking of using conditions for specifying module dependencies in the AMD module system, for example to load libraryA in the browser and libraryB on the server.
This could look like this:
define([window?"libraryA":"libraryB"],function(library){
//do some stuff
});
This would allow me to build an abstraction layer for 2 modules. But is this really a good idea? Are there any drawbacks in doing this?
That approach could cause problems for the build tool.
Update:
After further research, I find that config settings in your main JS file are not read by default by the optimizer. So, a cleaner solution would be to use a different map config for client and server.
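For example, a hedged sketch of that map-based approach (module IDs are placeholders); each environment gets its own config, and dependent modules only reference the abstract ID:
// client config
requirejs.config({
    map: { '*': { 'library': 'libraryA' } }
});
// server config (e.g. for r.js or RequireJS on Node)
requirejs.config({
    map: { '*': { 'library': 'libraryB' } }
});
// dependent modules just ask for the abstract ID
define(['library'], function (library) {
    // do some stuff
});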
Original:
A safer approach may be to define a module that adapts itself to the environment, thus keeping all the conditional code within your module definitions, and leaving all dependency lists in the most reliable format.
// dependent module
define(["libraryAB"], function (library) {
//do some stuff
});
// libraryAB.js dependency module
define([], function () {
return typeof window !== "undefined" ?
defineLibraryA() :
defineLibraryB();
});
You could alternatively keep the libraryA and libraryB code separate by defining libraryAB this way.
// libraryAB.js dependency module
define(["libraryA", "libraryB"], function (libraryA, libraryB) {
return typeof window !== "undefined" ? libraryA : libraryB;
});
//define libraryA.js and libraryB.js as usual
If you want to avoid executing libraryA on the server or libraryB on the client, you could have these modules return functions and memoize the result if necessary.
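A hedged sketch of that idea, assuming libraryA and libraryB each export a factory function so neither does any real work until first use:
// libraryAB.js
define(['libraryA', 'libraryB'], function (libraryA, libraryB) {
    var instance; // memoized result, created lazily on first call
    return function getLibrary() {
        if (!instance) {
            instance = typeof window !== 'undefined' ? libraryA() : libraryB();
        }
        return instance;
    };
});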
The moral is that it's safest to keep all your non-standard code inside module definitions, keeping dependency lists nice and predictable.
