RequireJS loads resources I don't want it to - javascript

I'm using RequireJS in my app, but I don't quite understand all aspects of how it works.
I have a main.js file where the dependencies are described. I have a Backbone.Router component up and running, which triggers different application classes (a class responsible for creating the main view). You can see some of the code here.
What I see with RequireJS: even if some view has not yet been 'required' (i.e. there is no explicit call to require('./subviews/view')), it is still loaded, and it in turn loads all of its templates (I use the RequireJS text plugin). If I add a new application whose subviews are not ready yet, those nonexistent subviews are still loaded even though I never used the application, and I get 404 errors.
I'm not sure I explained everything clearly, but I hope you get the point.

Looks like you are using the CommonJS sugared form of define(). Note that this is just a convenience for wrapping Node/CommonJS code; AMD modules do not operate like Node modules.
An AMD loader will scan for require('') calls in the factory function of the module and make sure all of those modules are loaded up front. Otherwise, synchronous access to such a module, by doing require('./apps/DashboardApp');, would fail in the browser, since file IO there is by default async network IO.
If you want to delay loading of some scripts, then you need to use the callback-form of require:
require(['./apps/DashboardApp'], function (DashboardApp) {
    // DashboardApp is only fetched, and this callback run, on demand
});
but this call is an async call, so you would have to adjust your module's public API accordingly.
So basically, if you want to do on-demand loading of a dependency, the callback form of require is needed, given the async nature of file IO in the browser.
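For illustration, the module's public API could be reshaped along these lines so the dependency is only fetched when it is actually needed (a rough sketch; the method and path names are assumptions):

define(function (require) {
    return {
        // callers pass a callback instead of getting a synchronous return
        // value, because the dependency is loaded asynchronously on demand
        showDashboard: function (onReady) {
            require(['./apps/DashboardApp'], function (DashboardApp) {
                onReady(new DashboardApp());
            });
        }
    };
});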

Because RequireJS loads all required dependencies. Taking a quick look at your code, I see that you load the routing module, and routing has:
var ViewManager = require('ViewManager');
That means it will load ViewManager, the dependencies specified by ViewManager, and any further dependencies those modules need. Essentially, when you include require(...), it's the same as specifying a dependency. RequireJS will transform it into
define(['ViewManager'], ...)
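In other words, a sugared module like this (ViewManager is the dependency from the snippet above):

// CommonJS-sugared form: the loader scans the factory for require('...') calls
define(function (require) {
    var ViewManager = require('ViewManager');
    // ...
});

// ...is handled roughly as if it had an explicit dependency array (the real
// transform also injects 'exports' and 'module'), so ViewManager and
// everything it depends on is fetched up front:
define(['require', 'ViewManager'], function (require) {
    var ViewManager = require('ViewManager');
    // ...
});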

Related

dynamically load modules in Meteor node.js

I'm trying to load multiple modules on the fly via chokidar (watchdog) using the Meteor 1.6 beta; however, after doing extensive research on the matter, I just can't seem to get it to work.
From what I gather, require by design will not accept anything other than static strings, i.e.
require("test/string/here")
Since if I try:
var path = "test/string/here"
require(path)
I just get Error: Cannot find module, even though the strings are identical.
Now the thing is, I'm uncertain how to go about this. Am I really forced to use either import or static strings when using Meteor, or is there some workaround for this?
watchdog(cmddir, (dir) => {
    match = "." + regex_cmd.exec(dir);
    match = dir;
    loader.emit("loadcommand", match)
});
loader.on('loadcommand', (file) => {
    require(file);
});
There is something intrinsically weird in what you describe.
chokidar is used to watch actual files and folders.
But Meteor compiles and bundles your code, resulting in an app folder after build that is totally different from your project structure.
Although Meteor now supports dynamic imports, the mechanism is internal to Meteor and does not rely on your actual project files, but on Meteor built ones.
If you want to dynamically require files like in Node, including with dynamically generated module paths, you should avoid import and require statements, which are automatically replaced by Meteor's built-in import mechanism. Instead you would have to write your own loading function, taking into account that your app's built folder is different from your project folder.
That may work for example if your server is watching files and/or folders in a static location, different from where your app will be running.
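As a very rough sketch of such a loading function on the server, assuming the watched folder is an absolute path outside the built app (every name here is hypothetical):

import fs from 'fs';

function loadCommandFile(absolutePath) {
    const source = fs.readFileSync(absolutePath, 'utf8');
    // evaluate the watched file in this process; a real implementation would
    // want sandboxing (e.g. Node's 'vm' module) and error handling
    return eval(source); // eslint-disable-line no-eval
}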
In the end, I feel this is a sort of XY problem: you have not described your actual objective, and the issue above comes from an attempted solution that does not fit how Meteor works, so it may not be the most appropriate one for your implicit objective.
Sashko does a great job of explaining Meteor's dynamic imports here. There are also docs.
A dynamic import is a function that returns a promise instead of just importing statically at build time. Example:
import('./component').then((MyComponent) => {
    render(MyComponent);
});
The promise resolves once the module has been loaded. If you try to load the module repeatedly, it only gets loaded once and is immediately available on subsequent requests.
As far as I can tell, you can use a variable for the string to import.
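A minimal sketch of that, assuming './component' is part of the build (the names come from the snippet above):

const modulePath = './component';

import(modulePath).then((MyComponent) => {
    // resolves with the loaded module; repeated calls reuse the cached module
    render(MyComponent);
});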

How does backbone with require.JS work?

I am trying to understand how RequireJS and Backbone work. On this site, when I open the DevTools in Chrome and go to the Sources tab, I see a long list of folders/files (see image below) which seems to be the uncompressed source code. However, I don't see it being loaded via the Network tab.
I wonder how this ties in, whether it is normal that the source code gets exposed like this, and whether it is normal that all views are loaded even though I only requested one page (i.e. the search page, see image below). I understand that modern JavaScript applications like Angular like to preload the application before it's presented. But wouldn't that cause a lot of unnecessary traffic for users, especially those on mobile?
First, your question mixes two things.
BackboneJS and RequireJS are completely unrelated
What you observe (the module dependency structure in the Sources panel vs. those files not actually being downloaded) is due to the debugger's support for "source maps"
Since I guess your confusion is caused by the latter, I'll start with ...
Source maps
Modern browsers support source maps. Their intention is to reveal the original source code when it has been concatenated and/or minified into one file.
The source map file describes, for example, that the symbol ZZyb at line 1, char 20563 maps to the file /somewhere/in/the/tree at line 34, char 1, and was originally named someView.
Minified files reference the source map using the
//# sourceMappingURL=getaround-min.js.map
signature.
More on source maps: http://www.html5rocks.com/en/tutorials/developertools/sourcemaps/
When downloading the minified file (https://www.getaround.com/js/150502002818/getaround-min.js) on that website you've linked, you will observe that signature at the end of the file:
//# sourceMappingURL=getaround-min.js.map
You can then download that file. This is what your debugger does.
RequireJS vs. BackboneJS
You can use RequireJS to modularize your own code, or use it in conjunction with other frameworks that do not already ship with such technology.
AngularJS, for example, has its own dependency management, which lets you define named dependencies and then start the application. This allows you to simply concatenate (and minify) all sources into one file without worrying about definition order.
Even though Backbone and Require are unrelated they work very well together.
RequireJS
RequireJS implements the so-called AMD spec.
A module defines dependencies and a callback to implement that module.
depA:
// Require depB and depC; after they've been loaded,
// call the callback function and pass those
// dependencies in. Finally, return that module.
define(["depB", "depC"], function (depB, depC) {
    // by convention depB will resolve to depB.js relative
    // to depA's path

    // "something" is whatever your module is made of:
    // an object, string, number or function
    return something;
});
RequireJS will download the dependencies, store (cache) them internally, and pass them to the callback. This process repeats down the dependency tree. When a dependency has already been downloaded (by another upstream module), it can be passed directly without being downloaded again.
depB:
define(["depC", "depD"], function(depC, depD){
// depC has already been loaded and cached - doesn't
// need to be downloaded again
});
RequireJS optimization
Downloading all dependencies file by file can (and, on internet sites, should) be avoided by packaging them using r.js. The modules are combined into one file at build time.
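A minimal r.js build profile might look like this (the file and module names are assumptions):

// build.js - run with: node r.js -o build.js
({
    baseUrl: ".",
    name: "main",           // entry module
    out: "main-built.js"    // single concatenated and minified output file
})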

Using require.js for client side dependencies in Adobe CQ5

I was wondering if anyone has experience using require.js with the Adobe CQ5 platform. I'm writing a Chaplin.js (Backbone-based) single page app that will be integrated into the rest of the CQ5-based site we're working on. Chaplin requires the use of a module system like AMD/CommonJS, and I want to make sure my compiled JavaScript file will be usable within CQ5's clientlibs. Is it as simple as adding require.js as a dependency in clientlibs prior to loading my JavaScript modules? Insight from someone who has experience doing this would be greatly appreciated.
I've implemented this as a way to organize all the modules in a better fashion, such as:
//public/js/modules/myModule.js
define('myModule', [ /* dependencies */ ], function (/* stuff here */) {
    /* module implementation */
});
and in the components, something like:
<% //components/component.jsp %>
<div>
<!-- stuff here -->
</div>
componentJS:
//components/component/clientlibs/js/component.js
require(['modules/myModule']);
Finally, I've configured require (config.js) and stored the JS modules in a new, separate design folder. The compiled JS is placed right before the closing </body> tag to guarantee that it always comes at the bottom, after the HTML:
<!-- page/body.jsp -->
...
<cq:includeClientLib js="specialclientlibs.footer"/>
</body>
This solves the issue of having all the content "ready" before the JS is executed. I had some problems to work out with the async behaviour of the clientlibs compilation tool: in production the problem was different, but in development the order in which CQ compiles the JS caused some gaps in the ordering of the JS. The problem was really a bit more complex than this explanation suggests, because the number of JS files was very large and so was the team, but in simple terms this was the best way I've found so far.
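For reference, the config.js mentioned above can be as simple as something like this (the paths are hypothetical and depend on where the design folder lives):

// config.js
require.config({
    baseUrl: "/etc/designs/myapp/js",   // hypothetical design folder
    paths: {
        "modules": "modules"            // where the define()'d modules live
    }
});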
The Idea
I think you can compile your Chaplin.js with one of the AMD shims to make it self-contained: the shim wraps every dependency it needs inside a function scope and exposes an entry point as a global variable (which is bad practice, but CQ does it all the time...), and you can then use the compiled JS inside a normal clientlib:
AMD Shims
https://github.com/jrburke/requirejs/wiki/AMD-API-Shims
Example
Here is an example of how we make one of our libs self-contained:
https://github.com/normanzb/chuanr/blob/master/gruntfile.js
Basically, at the source code level the lib "require"s the other modules just as usual. However, after compilation the generated chuanr.js file contains everything it needs, even wrapping in a piece of lightweight AMD-compatible implementation.
Check out the compiled file here: https://github.com/normanzb/chuanr/blob/master/Chuanr.js
and the source code: https://github.com/normanzb/chuanr/tree/master/src
Alternative
Alternatively, rather than compiling every lib you use to be independent/self-contained, what we do in our project is simply use the amdshim below instead of the real require.js, so at the CQ component level you can call the require() function as usual:
require(['foo/bar'], function(){});
The AMD shim will not send the HTTP request for the module immediately; instead it waits until someone else loads the module.
Then, at the bottom of the page, we collect all the dependencies and send the requirements to a server-side handler (a script manager) for dynamic merging (which internally calls into r.js):
var paths = [];
// collect every module path the shim is still waiting for
for (var path in amdShim.waiting) {
    paths.push(path);
}
document.write('/scriptmananger?' + paths.join(','));
The amdShim we are using: https://github.com/normanzb/amdshim/tree/exp/defer

Exposing almond module to window object fails due to asynchronicity

I am working on a JavaScript project where we use RequireJS to manage our code. The product we create is to be used on third-party websites, and therefore we cannot assume that an AMD-compatible loader is present. To solve this, we include almond in our target file and expose our main module on window. The file created by our build looks like this:
(function () {
    // Almond lib

    // Our code
    define('window-exposer', ['main-module'], function (mainModule) {
        window.mainModule = mainModule;
    });

    require('window-exposer');
}());
When building a site that wants to use mainModule, an error is thrown because the site-specific code tries to access window.mainModule before it has been set. There are also cases where the module has indeed been initialized and the code works.
Is there any way to guarantee that the window-exposer is run before other JavaScript code?
I solved it by using the solution provided here: https://github.com/jrburke/almond#exporting-a-public-api
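For reference, a rough sketch of that pattern applied to the build output above (the module names come from the snippet; the wrapping details are assumptions based on the almond README):

(function () {
    // ... almond lib inlined here ...
    // ... all define()'d modules inlined here ...

    // With almond every module is already in the bundle, so the synchronous
    // (single string) form of require works, and the global is assigned
    // before the wrapper function returns.
    window.mainModule = require('main-module');
}());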

Node.js module loading

I am currently building a web app with Node and I am curious as to how Node loads its required files or modules.
I am using express for view and server config, however I am finding that all the Node examples (I know express offers an MVC example) don't really conform to the general MVC pattern. I am also aware that Node is not necessarily suited for MVC, but bear with me, because I like MVC.
If you consider the following route declaration, it would be effective to use this as a controller, since here you can control the request and response logic:
module.exports = function (app) {
    app.get('/', function (req, res) {
        res.render('index', { layout: false });
    });
};
To try to follow an MVC architecture, I have divided the routes up by their relevant paths, in effect creating controllers. However, every such route file must then contain its own set of required modules. For example:
var mongo = require('mongoskin');
I would then declare the required route files in the app.js or server.js file that holds the server config settings.
I am wondering whether splitting up the routes like this would slow down the application, since I am unaware of how Node loads its modules. If it loads them as needed, then surely this implementation must slow it down?
Required modules are only loaded once and then cached, so feel free to break your app into as many modules as needed to organize it cleanly. If you have 20 files that call require('mongoskin'), that module is still only loaded once.
Quoting from the node.js documentation:
Modules are cached after the first time they are loaded. This means (among other things) that every call to require('foo') will get exactly the same object returned, if it would resolve to the same file.
Multiple calls to require('foo') may not cause the module code to be executed multiple times. This is an important feature. With it, "partially done" objects can be returned, thus allowing transitive dependencies to be loaded even when they would cause cycles.
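A quick way to see the caching in action (using a built-in module purely for illustration):

// Requiring the same module twice returns the exact same cached object.
const a = require('os');
const b = require('os');
console.log(a === b); // true - loaded once, served from the cache afterwards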
