I'm building a complex web app and, to keep things simple, I split it into various modules (objects) in separate files. Some of those modules are required on one page but not on others.
For that reason, I'd like to avoid loading every module on every page, which would only add useless requests.
So far, I work like this:
I include all the needed libraries.
Then I instantiate these libraries inside jQuery(function() {});, passing the specific #ids or .classes of the current page as arguments.
Everything works fine, but since my app is growing beyond what is easy to manage, I'd like to organize my JS with RequireJS.
And that's where things start to be a little confusing for me.
I know I can load my modules on demand using require(['module1', 'module2', 'etc'], function (module1, module2, etc) {}), but how can I say:
"on this page, you load these modules, and instantiate them with those #ids and .classes"
and
"on this other page, you load only that module, with this #other-id"
?
Module1 will, for example, load data from the API and list it in a specific table passed as a parameter:
// in page 1
var mod1 = new Module1($('#content>table'));
mod1.init(); // will load the data from the API with the URL located at <table data-api-url="http://....">

// in page 2
var mod1 = new Module1($('#content .items>table')); // The table we want the data populated into is not in the same position!
mod1.init();
That means that, depending on the page, I'll have to instantiate my modules differently. That's what I don't know how to do with RequireJS :/
What you need is a JavaScript file for each page. That file will be responsible for executing your page-specific code.
I'll assume that you will use r.js to optimize and pack your code.
We can treat Module1, Module2, etc. as libraries, because they will be used on multiple pages. To avoid having the browser make one request per library module, you can include these modules in your optimized main file:
Configure the "modules:" attribute of your build profile like this:
...
modules: [
    {
        name: "main", // this is your main file
        include: [
            "module1",
            "module2",
            "etc..."
        ]
    }
]
...
By doing this you tell RequireJS something like: optimize my "main.js" file and include all of its dependencies in it, and also include "module1", "module2", etc.
You have to do this because your main file does not reference these modules in its require()/define() call, but you still want them available in case a page-specific module needs them. You don't have to do this for every library module you have, just for those that will be used by most of your pages.
Then you create a JavaScript file for your page that uses those modules:
define(function(require, exports, module) {
    var $ = require("jquery"),
        Module1 = require("module1");

    var mod1 = new Module1($('#content>table'));
    mod1.init();
    // other page-specific code.
});
And then on the html file:
<script data-main="main" data-start="home" src="require.js"></script>
So when the page loads, it will make one request for require.js, another for main.js, and another for home.js, and that's all.
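For reference, here is a minimal sketch of how main.js could pick up that data-start attribute and load the page module. The data-start attribute and this lookup are a custom convention implied by the snippet above, not a built-in RequireJS feature, so treat the details as assumptions:

// main.js - a minimal sketch (data-start is a custom convention, not part of RequireJS)
require.config({
    baseUrl: "js"
    // paths, shim config, etc.
});

// Find the script tag that loaded RequireJS and read the page module name from it.
var mainScript = document.querySelector("script[data-main]");
var page = mainScript && mainScript.getAttribute("data-start");

if (page) {
    require([page]); // e.g. data-start="home" loads home.js
}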
I'll include the link to the other question so this answer has some context: How to use RequireJS build profile + r.js in a multi-page project
The creator of RequireJS actually made an example project that uses page-specific main files: https://github.com/requirejs/example-multipage
Related
I'm moving from requirejs to webpack.
In my RequireJS setup, I have a site-wide file, site.js, that does some global, non-page-specific setup: AJAX prefilters, some global UI component setup, etc.
Some pages have self-contained JS apps specific to that page; for those, I'd set window.appPaths = ['some/path/to/app.js'] in the template, and site.js would load the app once the initial site-wide work was done.
However, moving to webpack, it reads my require(window.appPaths), converts it to a regex, and attempts to bundle every single JS file in my src directory.
So basically, I'm wondering whether there's a way to set this up with multiple entry points: one being something like site.js and another being something like somePageApp.js, where site.js is a dependency of the latter, but the latter doesn't exist on every page.
Sigh, hopefully that's clear; I'll elaborate where necessary.
Edit: this is how it was set up with RequireJS:
HTML for an app page:
<script src="site.js"></script>
<script>
    window.appPaths = ['someApp.js'];
</script>
site.js:
require([
    // deps for site-wide stuff
], function () {
    // doing global, general stuff
    someTask(function () {
        if (window.appPaths) require(window.appPaths);
    });
});
someApp.js:
require([
    // app-specific deps
], function () {
    // application code; this doesn't run until the site.js tasks are done,
    // because it isn't require()d until then.
});
My "ideal" setup isn't necessarily the exact same, but I would want someApp.js to not load and/or run until site.js loaded. In my requirejs setup, site.js was also essentially the commons chunk but I don't think that's necessary here... just having trouble finding a parallel style of organization in webpack...
I've set up a single HTML page that uses three Dojo widgets, and I'm trying to create a custom build of it using Dojo 1.7.5. The build succeeds, leaving me with a dojo.js that includes the files I need, using this build profile:
var dependencies = {
    action: "release",
    selectorEngine: "acme",
    stripConsole: "none",
    cssOptimize: "comments.keepLines",
    layers: [
        {
            name: "dojo.js",
            dependencies: [
                "dijit.form.ValidationTextBox",
                "dijit.form.DropDownButton",
                "dijit.form.Button",
                "dijit.form.Form",
                "dijit._base",
                "dijit._Container",
                "dijit._HasDropDown",
                "dijit.form.ComboButton",
                "dijit.form.ToggleButton",
                "dijit.form._ToggleButtonMixin",
                "dojo.parser",
                "dojo.date.stamp",
                "dojo._firebug.firebug"
            ]
        }, {
            name: "../test/test.js",
            dependencies: [
                "test.test"
            ]
        }
    ],
    prefixes: [
        [ "dijit", "../dijit" ],
        [ "dojox", "../dojox" ],
        [ "ourpeople", "../ourpeople" ]
    ]
};
The questions I can't seem to find answers to:
I'm using cssOptimize, and I was expecting a single CSS file into which all the used CSS files were imported. However, I can't find such a file. Is this the way Dojo compresses its CSS, or are my expectations wrong? If the file does exist, where can I find it in my release folder?
My test.js contains a function test1(); if I call it from my built JS, it says test1 is not defined. I call that function directly, without Dojo. I'm assuming that building custom JS only works if it is a Dojo class using declare?
Final question: I needed to include several Dojo files in the build manually, such as dojo._firebug.firebug, since after my initial build it was still using XHR calls to get those files. After including the files manually, I still see XHR calls from Dojo for specific resources: dojo/nls/dojo_ROOT and dijit/form/nls/validate.js. Those files are created during the build process and therefore can't be listed in the dependencies in the build profile. Does anyone have any thoughts on this? I'm looking to distribute Dojo as a single file.
I'm fairly new to the Dojo build system, so perhaps I'm expecting things it isn't designed to do, or maybe I'm going about this the wrong way. If so, any tips or suggestions are more than welcome.
Cheers!
Test.js:
function test1() {
    console.log("test1");
}
Index.php:
<script type="text/javascript" src="js/release/dojo/dojo/dojo.js"></script>
<script type="text/javascript" src="js/release/dojo/test/test.js"></script>
<script type="text/javascript">
dojo.require("dijit.form.ValidationTextBox");
dojo.require("dijit.form.Button");
dojo.require("dijit.form.Form");
dojo.ready(function() {
test1();
});
</script>
I'm using cssOptimize, and I was expecting a single CSS file into which all the used CSS files were imported. However, I can't find such a file. Is this the way Dojo compresses its CSS, or are my expectations wrong? If the file does exist, where can I find it in my release folder?
When you use cssOptimize, the Dojo build optimizes and flattens CSS files in place. So for example, if you're using Dijit's Claro theme, when you load dijit/themes/claro/claro.css from source, it contains a series of #import statements which in turn load more files. When you load claro.css from a build with cssOptimize, it is one file containing all of the styles previously referenced via those separate files.
My test.js contains a function test1(); if I call it from my built JS, it says test1 is not defined. I call that function directly, without Dojo. I'm assuming that building custom JS only works if it is a Dojo class using declare?
Dojo doesn't expect every JS file to be a "class" using declare, but it does expect each file to be a module which doesn't implicitly define globals (since globals should be avoided in modules anyway). When the build process encounters a module that it thinks or knows isn't AMD, it assumes it's a legacy Dojo module and wraps it in boilerplate to convert it to AMD. This boilerplate ends up encapsulating your globals in a function scope, so they are no longer globals.
Given that you're using Dojo 1.7, you should ideally be using the AMD format to define and consume modules. dojotoolkit.org has a tutorial introducing AMD modules, and if you're migrating from Dojo 1.6 or earlier, there's also a tutorial to help you transition.
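As a rough illustration, this is what test.js could look like once converted to an AMD module, with the page updated to consume it instead of relying on a global; the module id test/test mirrors the test.test dependency in the profile above, and the exact id depends on how the package is registered:

// test/test.js - a minimal AMD sketch of the original test.js
define([], function () {
    return {
        test1: function () {
            console.log("test1");
        }
    };
});

// In the page, instead of calling a global test1():
require(["test/test", "dojo/domReady!"], function (test) {
    test.test1();
});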
Final question: I needed to include several Dojo files in the build manually, such as dojo._firebug.firebug, since after my initial build it was still using XHR calls to get those files. After including the files manually, I still see XHR calls from Dojo for specific resources: dojo/nls/dojo_ROOT and dijit/form/nls/validate.js. Those files are created during the build process and therefore can't be listed in the dependencies in the build profile. Does anyone have any thoughts on this? I'm looking to distribute Dojo as a single file.
I'm not sure why you're seeing dojo/_firebug/firebug being automatically loaded, but based on what you've said/shown above I would immediately suggest the following:
Convert your modules/code to AMD format
Add async: true to your dojoConfig which will cause the loader to operate in asynchronous mode, which means:
It loads modules through script injection instead of synchronous XHR
It won't unconditionally load all of dojo/_base
Add customBase: true to your dojo/dojo layer, which will prevent the build from defaulting to including all of dojo/_base (see the sketch after this list for where both settings go)
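As a point of reference, here is a minimal sketch of where those two flags live, assuming the newer Dojo 1.7+ profile format rather than the legacy-style profile shown above:

// In the page, before dojo.js is loaded:
var dojoConfig = {
    async: true // load modules via script injection instead of synchronous XHR
};

// In the build profile (1.7+ format), on the dojo/dojo layer:
var profile = {
    layers: {
        "dojo/dojo": {
            customBase: true, // don't pull in all of dojo/_base by default
            include: [
                "dojo/parser",
                "dijit/form/ValidationTextBox"
                // ...the rest of your dependencies, as AMD module ids
            ]
        }
    }
};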
As for the nls modules, to an extent it's normal to still see NLS files requested, though if your build is configured properly there would ordinarily just be one NLS file per layer and that's it (the fact that you're seeing a separate request for validate leads me to think you haven't covered all of your dependencies). The reason NLS remains separate is because there is one NLS bundle per locale, and it doesn't make sense to build all locales into one layer - that would force people to pay for resources in 20 languages they don't care about.
I'm using RequireJS to help me manage complex relationships/dependencies between some of my homegrown Javascript modules. It works very well for that -- loads them in the correct order based on their dependencies.
I'm also using RequireJS to load known libraries such as jQuery and KnockoutJS.
This being said, my issue is this -- let's say I have a simple login form page. It uses jQuery to enable some interaction (example: validating input, etc.). As such, I use RequireJS to include jQuery in my page's Javascript code. But, since RequireJS require() calls are asynchronous, there's a potential 'delay' between the moment the page is shown to the user, and the moment the jQuery library is loaded and kicks in.
So here's my problem: in the hypothetical scenario where the jQuery library takes a while to load, I want to prevent the user from being able to manipulate/submit the form until jQuery has kicked in. So, at the moment, the login form is initially hidden (displays a 'Loading...' message), and at the end of my require() async callback, once jQuery is loaded and applied, I make the form visible.
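For context, here's a minimal sketch of the pattern described above; the selectors and the validation logic are assumptions, not the actual page's code:

// The page initially renders with the form hidden and a 'Loading...' message shown.
require(['jquery'], function ($) {
    // jQuery is now loaded: wire up the behavior, then reveal the form.
    $('#login-form').on('submit', function (e) {
        // hypothetical validation
        if ($('#username').val() === '') {
            e.preventDefault();
        }
    });
    $('#loading-message').hide();
    $('#login-form').show();
});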
I find that this leads to a somewhat poor user experience -- you load the page, it's missing stuff at first (showing 'Loading...'), and then the form appears. In most cases it loads pretty quickly, so the page looks like it 'blinks' as it goes from the 'Loading...' phase to showing the full form almost instantly.
I've been thinking of moving the big libraries (jQuery, KnockoutJS) outside of RequireJS for that reason.
Is this normal or expected? Am I approaching this wrong?
TL;DR version: since RequireJS's require() mechanism is asynchronous... if your page needs some modules to work properly, do you hide the page's contents until the modules are loaded, and only then make them visible? Would this be considered poor UX?
Ah, you probably aren't optimizing your RequireJS assets.
I use Grunt to compile all my RequireJS assets into one big JS file.
If you leave RequireJS unoptimized, then yes, it can all be a bit slow. For development it's great to have everything linked dynamically, but for production it's normally best to compile everything into one big file for download speed (every extra file adds roughly 100 ms and 1.3 KB onto your page load).
I should mention that for dynamic JavaScript, perhaps generated by PHP or something similar, you can exclude it from the concatenation process using empty: and then add a query string so you can load a .php file ending; but you're normally better off writing static code that loads a dynamic JSON feed.
Here's a link about the RequireJS optimizer; as an aside, you can also do this yourself by concatenating all the files together in a bat/sh script:
http://requirejs.org/docs/optimization.html
You will probably need to repeat your main.js lookup rules in your Grunt file if you have any special library locations or shims in place.
Here's an example Grunt file I currently use:
module.exports = function(grunt) {
    // Project configuration.
    ...
    requirejs: {
        production: {
            options: {
                // REMEMBER TO DUPLICATE CHANGES IN MAIN.JS, example dynamic javascript created by ajax, and static javascript in library folder
                paths: {
                    "moment": "../shared/js/moment/2.5.0/moment.min",
                    "dynamic.ottconfig": "empty:"
                },
                shim: {
                    "lib.filesaver": {deps: ["shim.blob"]},
                    ...
                },
                name: "main", // link to almond.js or requirejs.js
                appUrl: "./web/tmp/js",
                baseUrl: "./web/tmp/js",
                out: "web/bin/js/main.min.js",
                optimize: "uglify2",
                preserveLicenseComments: false,
                generateSourceMaps: true,
                insertRequire: [ "main" ]
            }
        }
    },
    ...);

    grunt.loadNpmTasks('grunt-contrib-requirejs');

    // Default Production Build task(s).
    grunt.registerTask('default', [
        ...
        'requirejs',
        ...
    ]);
};
In my web app I use about 30 JS files, each containing one function. All of these functions are currently self-invoking and have references to each other.
The problem with this is that the order of the scripts in index.jsp matters. If a method is called on a function that has not been invoked yet, we get an undefined error.
For a while we could overcome this by controlling the order of the <script> tags, but I would like to handle this with a loader script.
I have set up a small fiddle to show my concept. My biggest concern is that I have to declare my objects globally in order to have them accessible in the jQuery(document).ready() function.
Is this an OK pattern? Any hints highly appreciated!
You could use RequireJS or a similar loader, which will handle script dependencies for you.
You would need to modify each JS file to make it a module, in a similar fashion to this example:
// File: module3.js
define(["module1", "module2"], function(m1, m2) {
// Here, module1 and module2 are guaranteed to be loaded.
});
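For completeness, here's a rough sketch of how one of the existing self-invoking files could become a module that returns its object instead of creating a global; the name doSomething is purely illustrative:

// File: module1.js - a hypothetical converted file
define([], function() {
    var module1 = {
        doSomething: function() {
            console.log("module1 is doing something");
        }
    };
    return module1; // other modules receive this object through their define() arguments
});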
Then, you would make one "main" script (I usually call it main.js) and require several modules:
require(["module3"], function (m3) {
// Here module3 is loaded, as well as module1 and module2
// - because module3 depends on them.
});
And put this in your HTML:
<script data-main="scripts/main" src="scripts/require.js"></script>
Alternatively, build your server-side architecture to serve the proper JS files (and other static files) per page: create one minified JS file per page and initialize the objects' scope in the HTML files.
I think this question will give mine a little more context:
Using pre-compiled templates with Handlebars.js (jQuery Mobile environment)
Basically, I'm trying to learn the precompiling stuff so I can save load time and keep my HTML documents neat. I haven't started yet, but based on the above link, every template needs to have its own file. Isn't that going to be a lot of files to load? I don't want to make multiple HTTP requests if I don't have to.
So I'd appreciate it if someone could shed some light, and perhaps offer an alternative where I can get the templates out of my HTML without having to load up 100 different template files.
Tools like Grunt.js allow you to have your templates and consume them too. For example, this file compiles the templates and then concatenates them into a single file:
module.exports = function(grunt) {
    grunt.loadNpmTasks("grunt-contrib-handlebars");

    // Project configuration.
    grunt.initConfig({
        // Project metadata, used by the <banner> directive.
        meta: {},

        handlebars: {
            dist: {
                options: {
                    namespace: "JST",
                    wrapped: "true"
                },
                files: {
                    "templates.js": [
                        "./fileA.tmpl",
                        "./fileB.tmpl"
                    ]
                }
            }
        }
    });

    // Default task.
    grunt.registerTask("default", "handlebars");
};
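Once templates.js (along with the Handlebars runtime) is loaded on the page, the compiled templates are called straight from the JST namespace. A rough usage sketch, noting that the exact key depends on grunt-contrib-handlebars' processName option (the source path is the default) and that the element id here is hypothetical:

// Assumes templates.js and the Handlebars runtime have already been loaded.
var html = JST["./fileA.tmpl"]({ title: "Hello" });
document.getElementById("output").innerHTML = html;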
What I've yet to work out, since I'm just getting started with pre-compiled templates, is the workflow. I want compiled templates when we're running a deployed version of the app, but when doing development and debugging I'd much rather have my original individual files in uncompiled form so I can just edit them and reload the page.
Follow Up:
I wanted to come back to this after working out how to both use pre-compiled templates when they're available and fall back to individual templates that can be edited on the fly, for when people are doing development and debugging work and want quick edit-reload-test cycles without running Grunt builds.
The answer I came up with was to check for the existence of the JST[] data structure, and then to test whether a particular pre-compiled template is present in it. If it is, nothing further needs to be done. If it's not there, the template is loaded (we use RequireJS to do that), compiled, and put into the same JST[] structure under the same name it would have had if the pre-compiled templates had been loaded.
That way, when it comes time to actually use a template, the code only looks for it in one place, and it's always the same.
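A rough sketch of that fallback, assuming the RequireJS text plugin is available; the template path, the AMD id for Handlebars, and the helper name are placeholders rather than our actual code:

// getTemplate() is a hypothetical helper illustrating the approach described above.
function getTemplate(name, callback) {
    window.JST = window.JST || {};
    if (window.JST[name]) {
        // The pre-compiled template is already present; use it as-is.
        callback(window.JST[name]);
        return;
    }
    // Otherwise load the raw template via the RequireJS text plugin, compile it,
    // and store it in JST[] under the same name a pre-compiled build would use.
    require(["text!templates/" + name, "handlebars"], function (source, Handlebars) {
        window.JST[name] = Handlebars.compile(source);
        callback(window.JST[name]);
    });
}

// Usage: the calling code always looks in one place.
getTemplate("fileA.tmpl", function (template) {
    document.body.innerHTML = template({ title: "Hello" });
});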
In the near future I think we'll likely have RequireJS plugins to perform the test and load/compile code while keeping it simple for developers.