Using require.js for client-side dependencies in Adobe CQ5 - javascript

I was wondering if anyone had experience using require.js with the Adobe CQ5 platform. I'm writing a Chaplin.js (Backbone-based) single-page app that will be integrated into the rest of the CQ5-based site we're working on. Chaplin requires the use of a module system like AMD/CommonJS, and I want to make sure my compiled JavaScript file will be usable within CQ5's clientlibs. Is it as simple as adding require.js as a dependency in clientlibs prior to loading my JavaScript modules? Insight from someone who has experience doing this would be greatly appreciated.

I've implemented this as a solution to organize all the modules in a better way, such as:
//public/js/modules/myModule.js
define('myModule', [ /* dependencies */ ], function () {
    /* stuff here */
});
and in the components, something like this:
<% //components/component.jsp %>
<div>
<!-- stuff here -->
</div>
componentJS:
//components/component/clientlibs/js/component.js
require(['modules/myModule']);
Finally, I configured require (config.js) and stored the JS modules in a separate design folder. The compiled JS is included right before the closing </body> tag, to guarantee it is always at the bottom, after the HTML:
<!-- page/body.jsp -->
...
<cq:includeClientLib js="specialclientlibs.footer"/>
</body>
This solves the issue of having all the content "ready" before the JS is executed. I had some problems with the async behaviour of the clientlibs compilation tool: in production the problem was different, but in development the order in which CQ compiled the JS caused some ordering issues. The real problem was a bit more complex than this explanation, because the number of JS files was large and so was the team, but in simple terms this was the best approach I've found so far.
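For reference, a minimal sketch of what such a config.js might contain (the baseUrl below is hypothetical, not taken from the answer above):
// config.js (sketch; the baseUrl path is an assumption)
require.config({
    baseUrl: '/etc/designs/myproject/js'   // folder containing modules/myModule.js
});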

The Idea
I think you can compile your Chaplin.js with one of the AMD shims to make it self-contained: it wraps every dependency it needs inside a function scope and exposes an entry point as a global variable (which is bad practice, but CQ does it all the time...), and then you can use the compiled.js inside a normal clientlib:
AMD Shims
https://github.com/jrburke/requirejs/wiki/AMD-API-Shims
Example
Here is an example of how we make one of our libs self-contained:
https://github.com/normanzb/chuanr/blob/master/gruntfile.js
Basically, at the source-code level the lib "require"s the other modules as usual. However, after compilation the generated chuanr.js file contains everything it needs, even wrapping a lightweight AMD-compatible implementation.
Check out the compiled file here: https://github.com/normanzb/chuanr/blob/master/Chuanr.js
and the source code: https://github.com/normanzb/chuanr/tree/master/src
Alternative
Alternatively, rather than compiling every lib you use to be independent/self-contained, what we do in our project is simply use the AMD shim below instead of the real require.js, so at the CQ component level you can call the require() function as usual:
require(['foo/bar'], function(){});
The AMD shim will not send an HTTP request for the module immediately; instead it waits until something else loads the module.
Then, at the bottom of the page, we collect all the dependencies and send the requirements to a server-side handler (scriptmanager) for dynamic merging (it internally calls into r.js):
var paths = [];
for (var path in amdShim.waiting) {
    paths.push(path);
}
// request one merged file containing all of the pending modules
document.write('<script src="/scriptmanager?' + paths.join(',') + '"><\/script>');
The amdShim we are using: https://github.com/normanzb/amdshim/tree/exp/defer
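For illustration only (this is a simplified sketch, not the linked amdshim's actual code), a deferring shim along these lines would simply record what was asked for instead of fetching it:
// illustrative sketch of a deferring require(); not the real amdshim
var amdShim = { waiting: {} };

function require(deps, callback) {
    // remember each requested module instead of fetching it right away
    for (var i = 0; i < deps.length; i++) {
        amdShim.waiting[deps[i]] = amdShim.waiting[deps[i]] || [];
        amdShim.waiting[deps[i]].push(callback);
    }
}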

Related

Understanding the Communication between Modules in jQuery Source Code Structure [duplicate]

Uncompressed jQuery file: http://code.jquery.com/jquery-2.0.3.js
jQuery Source code: https://github.com/jquery/jquery/blob/master/src/core.js
What are they doing to make it seem like the final output is not using Require.js under the hood? Require.js examples tell you to insert the entire library into your code to make it work standalone as a single file.
Almond.js, a smaller version of Require.js, also tells you to insert itself into your code to have a standalone javascript file.
When minified I don't mind the extra bloat, it's only a few extra kilobytes (for almond.js), but unminified it is barely readable. I have to scroll all the way down, past the almond.js code, to see my application logic.
Question
How can I make my code similar to jQuery's, where the final output does not look like a Frankenweenie?
Short answer:
You have to create your own custom build procedure.
Long answer
jQuery's build procedure works only because jQuery defines its modules according to a pattern that allows a convert function to transform the source into a distributed file that does not use define. If anyone wants to replicate what jQuery does, there's no shortcut: 1) the modules have to be designed according to a pattern which will allow stripping out the define calls, and 2) you have to have a custom conversion function. That's what jQuery does. The entire logic that combines the jQuery modules into one file is in build/tasks/build.js.
This file defines a custom configuration that it passes to r.js. The important options are:
out, which is set to "dist/jquery.js". This is the single file produced by the optimization.
wrap.startFile, which is set to "src/intro.js". This file will be prepended to dist/jquery.js.
wrap.endFile, which is set to "src/outro.js". This file will be appended to dist/jquery.js.
onBuildWrite, which is set to convert. This is a custom function.
The convert function is called every time r.js wants to output a module into the final output file. The output of that function is what r.js writes to the final file. It does the following:
If a module is from the var/ directory, the module will be transformed as follows. Let's take the case of src/var/toString.js:
define([
    "./class2type"
], function( class2type ) {
    return class2type.toString;
});
It will become:
var toString = class2type.toString;
Otherwise, the define(...) call is replaced with the contents of the callback passed to define, the final return statement is stripped, and any assignments to exports are stripped.
I've omitted details that do not specifically pertain to your question.
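As a rough illustration of the pattern (this is not jQuery's actual convert function; the module name, paths, and the deliberately naive regular expressions below are assumptions), an r.js build script using onBuildWrite might look like this:
// build.js (sketch) -- run with: node build.js
var requirejs = require("requirejs");

requirejs.optimize({
    baseUrl: "src",
    name: "jquery",
    out: "dist/jquery.js",
    optimize: "none",
    wrap: {
        startFile: "src/intro.js",
        endFile: "src/outro.js"
    },
    onBuildWrite: function (name, path, contents) {
        // strip the define() wrapper and keep only the module body;
        // jQuery's real convert function is considerably more careful
        return contents
            .replace(/define\(\[[^\]]*\],\s*function\([^)]*\)\s*\{/, "")
            .replace(/return\s+[^;]+;\s*\}\s*\);\s*$/, "");
    }
}, function (buildResponse) {
    console.log(buildResponse);
});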
You can use a tool called AMDClean by gfranko https://www.npmjs.org/package/amdclean
It's much simpler than what jQuery is doing and you can set it up quickly.
All you need to do is to create a very abstract module (the one that you want to expose to global scope) and include all your sub modules in it.
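For example, the kind of abstract top-level module you would point AMDClean at might look like this (the module names are hypothetical):
// mylib.js (hypothetical) -- the one abstract module exposed to the global scope
define('mylib', ['mylib/core', 'mylib/util'], function (core, util) {
    return {
        core: core,
        util: util
    };
});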
Another alternative that I've recently been using is browserify. You can export/import your modules the NodeJS way and use them in any browser. You need to compile them before using them. It also has gulp and grunt plugins for setting up a workflow. For better explanations, read the documentation on browserify.org.
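A minimal sketch of that browserify workflow, with hypothetical file names:
// math.js -- exported the Node.js way
module.exports.square = function (x) {
    return x * x;
};

// main.js -- imported the Node.js way, then bundled for the browser with:
//   browserify main.js -o bundle.js
var math = require('./math');
console.log(math.square(4)); // 16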

Dojo build css and custom javascript

I've set up a single html page that uses three dojo widgets and I'm trying to create a custom build from it using dojo 1.7.5. The build succeeds leaving me with a dojo.js that includes the files I need using this build file:
var dependencies = {
    action: "release",
    selectorEngine: "acme",
    stripConsole: "none",
    cssOptimize: "comments.keepLines",
    layers: [
        {
            name: "dojo.js",
            dependencies: [
                "dijit.form.ValidationTextBox",
                "dijit.form.DropDownButton",
                "dijit.form.Button",
                "dijit.form.Form",
                "dijit._base",
                "dijit._Container",
                "dijit._HasDropDown",
                "dijit.form.ComboButton",
                "dijit.form.ToggleButton",
                "dijit.form._ToggleButtonMixin",
                "dojo.parser",
                "dojo.date.stamp",
                "dojo._firebug.firebug"
            ]
        }, {
            name: "../test/test.js",
            dependencies: [
                "test.test"
            ]
        }
    ],
    prefixes: [
        [ "dijit", "../dijit" ],
        [ "dojox", "../dojox" ],
        [ "ourpeople", "../ourpeople" ]
    ]
};
The questions I can't seem to find an answer to:
I'm using cssOptimize and was expecting a single CSS file into which all the used CSS files are imported. However, I can't find such a file. Is this the way dojo compresses its CSS, or are my expectations wrong? If so, where can I find it in my release folder?
My test.js contains a function test1(); if I call it after loading my built JS, it states that test1 is not defined. I call that function directly, without dojo. I'm assuming that building custom JS only works if it is a dojo class using declare?
Final question: I needed to include several dojo files in the build manually, such as dojo._firebug.firebug, since after my initial build it was still using XHR calls to get those files. After including the files manually I still see XHR calls from dojo to specific resources: dojo/nls/dojo_ROOT and dijit/form/nls/validate.js. Those files are created during the build process and therefore can't be included in the dependencies in the build profile. Does anyone have any thoughts on this, since I'm looking to distribute dojo in a single file?
I'm fairly new to the dojo build system, so perhaps I'm expecting things that it isn't designed to do, or maybe I'm going about this the wrong way; if so, any tips or suggestions are more than welcome.
Cheers!
Test.js:
function test1() {
    console.log("test1");
}
Index.php:
<script type="text/javascript" src="js/release/dojo/dojo/dojo.js"></script>
<script type="text/javascript" src="js/release/dojo/test/test.js"></script>
<script type="text/javascript">
    dojo.require("dijit.form.ValidationTextBox");
    dojo.require("dijit.form.Button");
    dojo.require("dijit.form.Form");
    dojo.ready(function() {
        test1();
    });
</script>
I'm using cssOptimize and was expecting a single CSS file into which all the used CSS files are imported. However, I can't find such a file. Is this the way dojo compresses its CSS, or are my expectations wrong? If so, where can I find it in my release folder?
When you use cssOptimize, the Dojo build optimizes and flattens CSS files in place. So for example, if you're using Dijit's Claro theme, when you load dijit/themes/claro/claro.css from source, it contains a series of #import statements which in turn load more files. When you load claro.css from a build with cssOptimize, it is one file containing all of the styles previously referenced via those separate files.
My test.js contains a function test1(); if I call it after loading my built JS, it states that test1 is not defined. I call that function directly, without dojo. I'm assuming that building custom JS only works if it is a dojo class using declare?
Dojo doesn't expect every JS file to be a "class" using declare but it does expect each file to be a module which doesn't implicitly define globals (since globals should be avoided in modules anyway). When the build process encounters a module that it thinks or knows isn't AMD, it assumes it's a legacy Dojo module and wraps it in a boilerplate to convert it to AMD. This boilerplate ends up encapsulating your globals into a function scope, so they are no longer globals.
Given that you're using Dojo 1.7, you should ideally be using the AMD format to define and consume modules. dojotoolkit.org has a tutorial introducing AMD modules, and if you're migrating from Dojo 1.6 or earlier, there's also a tutorial to help you transition.
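For example, a hedged sketch of what the test module could look like in AMD form, and how the page would then call it (the module ids are illustrative and assume the loader is configured so the test package resolves):
// test/test.js rewritten as an AMD module (sketch)
define([], function () {
    return {
        test1: function () {
            console.log("test1");
        }
    };
});

// in the page, instead of relying on a global test1()
require(["test/test", "dojo/domReady!"], function (test) {
    test.test1();
});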
Final question: I needed to include several dojo files in the build manually, such as dojo._firebug.firebug, since after my initial build it was still using XHR calls to get those files. After including the files manually I still see XHR calls from dojo to specific resources: dojo/nls/dojo_ROOT and dijit/form/nls/validate.js. Those files are created during the build process and therefore can't be included in the dependencies in the build profile. Does anyone have any thoughts on this, since I'm looking to distribute dojo in a single file?
I'm not sure why you're seeing dojo/_firebug/firebug being automatically loaded, but based on what you've said/shown above I would immediately suggest the following:
Convert your modules/code to AMD format
Add async: true to your dojoConfig which will cause the loader to operate in asynchronous mode, which means:
It loads modules through script injection instead of synchronous XHR
It won't unconditionally load all of dojo/_base
Add customBase: true to your dojo/dojo layer, which will prevent the build from defaulting to include all of dojo/_base (see the sketch after this list)
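A hedged sketch of both settings; these belong in two different places (the page and the build profile), and the surrounding values are elided:
// 1. in the page, before dojo.js is loaded: switch the loader to async mode
var dojoConfig = {
    async: true   // modules load via script injection instead of sync XHR
};

// 2. in the build profile: mark the dojo.js layer as a custom base
var dependencies = {
    layers: [
        {
            name: "dojo.js",
            customBase: true,   // don't default to including all of dojo/_base
            dependencies: [ /* your existing list */ ]
        }
    ]
    // ...prefixes etc. as before
};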
As for the nls modules, to an extent it's normal to still see NLS files requested, though if your build is configured properly there would ordinarily just be one NLS file per layer and that's it (the fact that you're seeing a separate request for validate leads me to think you haven't covered all of your dependencies). The reason NLS remains separate is because there is one NLS bundle per locale, and it doesn't make sense to build all locales into one layer - that would force people to pay for resources in 20 languages they don't care about.

Including an external javascript library in pebble js file?

Is there any way I can include an external JS library in my pebble code?
Conventionally on a webpage I would do this in my head tags:
<script type='text/javascript' src='https://cdn.firebase.com/js/client/1.0.11/firebase.js'></script>
But in Pebble I am unable to do that, since I am only using JS. So how can I include an external library in my JavaScript file?
At present, you cannot include external JS files.
If you're using CloudPebble, then the only way to do this is to copy and paste the content of the JS library files into your JS file.
If you're doing native development, you can modify the wscript file to combine multiple JS files into one at build time.
I think there's some confusion over Pebble.js vs PebbleKit JS (v3.8.1). Pebble.js is a fledgling SDK where eventually the programmer will be able to write pure JavaScript. It's still cooking so there's some functionality missing like the ability to draw a line or obtain the screen dimensions. The repo is a mix of C and JS sources where you can add C code to augment missing functionality but otherwise all your code lives in src/js/app.js or src/js/app/. Anyway, Pebble.js does support require.
I don't use CloudPebble but I got the impression that it either supports Pebble.js (and hence require) or is planning to. I think all this SDK boilerplate code would be hidden.
PebbleKit JS does not support require out of the box AFAIK. I've made a demo that ports require support from Pebble.js to PKJS. The summary of changes is:
Move your project's src/js/pebble-js-app.js to src/js/app/index.js.
Remove any ready event listener from src/js/app/index.js. index.js will be loaded when the ready event is emitted.
Add src/js/loader.js from Pebble.js.
Add a src/js/main.js that calls require('src/js/app') on the ready event.
Update your wscript with the following deltas.
When adding new modules, place them under src/js/app/ and require('./name') will work.
I've tried to cover this all in the demo's readme.
BTW, here's the official breakdown of all the different SDKs but it's a little confusing.
I am not sure if there have been changes since the above answer, but it looks like there is in fact a way to include additional resources while keeping things tidy. On the pebbleJS page, there is the following section with some information on the subject.
GLOBAL NAMESPACE - require(path)
Loads another JavaScript file allowing you to write a multi-file project. Package loading loosely follows the CommonJS format. path is the path to the dependency.
You can then use the following code to "require" a JS library in your pebble project. This should be usable with both CloudPebble and native development.
// src/js/dependency.js
var dep = require('dependency');
You can then use this as shown below:
dep.myFunction(); // run a function from the dependency
dep.myVar = 2; // access or change variables
Update:
After some digging into this on my own, I have successfully gotten CloudPebble to work with this functionality. It is a bit convoluted, but it follows the CommonJS-style package format described above. Below I am posting a simple example of getting things set up. Additionally, I would suggest looking at this code from PebbleJS for a better reference of a complex setup.
myApp.js
var resource = require('myExtraFile'); // require additional library
console.log(resource.value); // logs 42
resource.functionA(); // logs "This is working now"
myExtraFile.js
var myExtraFile = {           // create an object to export
    "value": 42,              // variable
    functionA: function () {  // function
        console.log("This is working now!");
    }
};
this.exports = myExtraFile;   // export this object for later use

Best practice for using JavaScript in Django

I want to enhance my Django project with some JavaScript/jQuery. To get it right from the beginning, I'd like to know which way of organizing the .js files is the optimal one.
Since loading one big file involves less overhead than loading many small ones, and because it looks cleaner in the code, I considered making one global .js file and including it in base.html (from which every template inherits). However, the result would be that the JavaScript tries to attach all its event bindings even when the elements they should be bound to aren't in the current document. With all the jQuery selectors that would then have to do their work, that can't be very efficient. From earlier web-development experience I know that one can do something like if (location.href == '/some/url/') { /* JavaScript code */ ... }. That doesn't seem practicable here, because with changing URLs I'd have to update both the URLconf and the .js file (while using reverse() and {% url %} elsewhere precisely to avoid that). I guess there is no way to make use of named URLs here?
Does anyone have an idea how to organize the JavaScript without having a file for every single template on the one hand, and without killing performance unnecessarily on the other?
I don't know that this question is specific to Django - similar issues come up managing Javascript in all sorts of systems.
That said, I usually try to tier my Javascript files, so that truly global scripts and libraries are included in one file, scripts specific to a section of the site are included in a set of section-specific files, and scripts specific to a single page are included in yet another set of page-specific files (or in inline code, depending on the context).
Django has good support for this approach, because you can tier your templates as well. Include the global script in your base.html template, then create a mysection-base.html template that inherits from base.html and just adds the Javascript (and CSS) files specific to that section. Then subpages within that section can inherit from mysection-base.html instead of base.html, and they'll all have access to the section-specific scripts.
I find django-compressor invaluable as it automatically compresses and minifies your JavaScript and CSS pre-deployment. It even automatically handles SASS, LESS and CoffeeScript if they float your boat.
Apps from http://djangopackages.com/grids/g/asset-managers/ may help.
You use modular javascript.
Choose your packager of choice (mine is browserify) that packages all your modules into one package that you minify and gzip. You send this file to the client and it is cached.
This means you have all your code cached, minimize HTTP requests and stay lean and efficient.
And since you have modular code you just load your code as you would normally.
Personally I would use some form of feature detection to load modules. You can choose to feature detect on almost any feature (some css selector, routes, url segments).
Feature detection would look like this:
var Features = {
    "class": "name",
    "class2": "name2",
    "dynamic-scroll": "dynamic-scroll",
    "tabstrip": "tabstrip",
    ...
}

for (var key in Features) {
    require(Features[key]);
}
Whereas routing with Davis would look like:
Davis(function() {
    this.get("blog", function(req) {
        require("blog")(req);
    });
    this.get("blog/:post", function(req) {
        require("blog-post")(req);
    });
    this.get("shop", function(req) {
        require("shop")(req);
    });
    ...
});
Alternatively you can try an event-driven architecture. This means each module binds to events:
// some-module
mediator.on("blog-loaded", function() {
    // load in some libraries
    // construct some widgets
    mediator.emit("blog-ui-build", widgets);
});
And you would need some bootstrapping in place to kick off the event loop. Feel free to look at an EDA demo

JavaScript dependency management

I am currently maintaining a large number of JS files and the dependency issue is growing over my head. Right now I have each function in a separate file and I manually maintain a database to work out the dependencies between functions.
This I would like to automate. For instance if I have the function f
Array.prototype.f = function() {};
which is referenced in another function g
MyObject.g = function() {
    var a = new Array();
    a.f();
};
I want to be able to detect that g is referencing f.
How do I go about this? Where do I start? Do I need to actually write a compiler or can I tweak Spidermonkey for instance? Did anyone else already do this?
Any pointers to get me started are very much appreciated.
Thanks
Dok
Whilst you could theoretically write a static analysis tool that detected use of globals defined in other files, such as use of MyObject, you couldn't realistically track usage of prototype extension methods.
JavaScript is a dynamically-typed language so there's no practical way for any tool to know that a, if passed out of the g function, is an Array, and so if f() is called on it there's a dependency. It only gets determined what variables hold what types at run-time, so to find out you'd need an interpreter and you've made yourself a Turing-complete problem.
Not to mention the other dynamic aspects of JavaScript that completely defy static analysis, such as fetching properties by square bracket notation, the dreaded eval, or strings in timeouts or event handler attributes.
I think it's a bit of a non-starter, really. You're probably better off tracking dependencies manually, but simplifying it by grouping related functions into modules, which will be your basic unit of dependency tracking. OK, you'll pull in a few more functions than you technically need, but hopefully not too many.
It's also a good idea to namespace each module, so it's very clear where each call is going, making it easy to keep the dependencies in control manually (eg. by a // uses: ThisModule, ThatModule comment at the top).
Since extensions of the built-in prototypes are trickier to keep track of, keep them down to a bare minimum. Extending eg. Array to include the ECMAScript Fifth Edition methods (like indexOf) on browsers that don't already have them is a good thing to do as a basic fixup that all scripts will use. Adding completely new arbitrary functionality to existing prototypes is questionable.
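For example, a minimal (deliberately not spec-complete) sketch of that kind of fixup:
// basic fixup: only add indexOf where the browser doesn't already supply it
if (!Array.prototype.indexOf) {
    Array.prototype.indexOf = function (item) {
        for (var i = 0; i < this.length; i++) {
            if (this[i] === item) {
                return i;
            }
        }
        return -1;
    };
}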
Have you tried using a dependency manager like RequireJS or LabJS? I noticed no one's mentioned them in this thread.
From http://requirejs.org/docs/start.html:
Inside of main.js, you can use require() to load any other scripts you need to run:
require(["helper/util"], function(util) {
//This function is called when scripts/helper/util.js is loaded.
//If util.js calls define(), then this function is not fired until
//util's dependencies have loaded, and the util argument will hold
//the module value for "helper/util".
});
You can nest those dependencies as well, so helper/util can require some other files within itself.
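For example (the helper/format module and its upper() function are hypothetical), helper/util could itself be defined in terms of another module:
// scripts/helper/util.js -- a module that depends on another file
define(["helper/format"], function (format) {
    return {
        shout: function (msg) {
            return format.upper(msg) + "!";
        }
    };
});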
As #bobince already suggested, doing static analysis on a JavaScript program is a close to impossible problem to crack. Google Closure compiler does it to some extent but then it also relies on external help from JSDoc comments.
I had a similar problem of finding the order in which JS files should be concatenated in a previous project, and since there were loads of JS files, manually updating the inclusion order seemed too tedious. Instead, I stuck with certain conventions of what constitutes a dependency for my purposes, and based on that and using simple regexps :) I was able to generate the correct inclusion order.
The solution built a dependency graph and used a topological sort to list the files in the order in which they should be included to satisfy all dependencies. Since each file was basically a pseudo-class using MooTools syntax, there were only 3 ways dependencies could be created in my situation.
When a class Extended some other class.
When a class Implemented some other class.
When a class instantiated an object of some other class using the new keyword.
It was a simple, and definitely a broken solution for general purpose usage but it served me well. If you're interested in the solution, you can see the code here - it's in Ruby.
If your dependencies are more complex, then perhaps you could manually list the dependencies in each JS file itself using comments and some homegrown syntax such as:
// requires: Array
// requires: view/TabPanel
// requires: view/TabBar
Then read each JS file, parse out the requires comments, and construct a dependency graph which will give you the inclusion order you need.
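A rough sketch of that approach in Node.js (file names are hypothetical; real code would also need to handle cycles and requires that aren't files, such as Array):
// sortFiles.js (sketch) -- order files by their "// requires:" comments
var fs = require('fs');

function readRequires(file) {
    var deps = [];
    fs.readFileSync(file, 'utf8').split('\n').forEach(function (line) {
        var m = line.match(/^\s*\/\/\s*requires:\s*(\S+)/);
        if (m) {
            deps.push(m[1] + '.js');
        }
    });
    return deps;
}

// depth-first walk: a file is emitted only after everything it requires
function sortFiles(files) {
    var visited = {};
    var order = [];
    function visit(file) {
        if (visited[file] || !fs.existsSync(file)) {
            return;
        }
        visited[file] = true;
        readRequires(file).forEach(visit);
        order.push(file);
    }
    files.forEach(visit);
    return order;
}

console.log(sortFiles(['view/TabPanel.js', 'view/TabBar.js']));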
It would be nice to have a tool that can automatically detect those dependencies for you and choose how they are loaded. The best solutions today are a bit cruder though. I created a dependency manager for my particular needs that I want to add to the list (Pyramid Dependency Manager). It has some key features which solve some unique use cases.
Handles other files (including inserting html for views...yes, you can separate your views during development)
Combines the files for you in javascript when you are ready for release (no need to install external tools)
Has a generic include for all html pages. You only have to update one file when a dependency gets added, removed, renamed, etc
Some sample code to show how it works during development.
File: dependencyLoader.js
//Set up file dependencies
Pyramid.newDependency({
    name: 'standard',
    files: [
        'standardResources/jquery.1.6.1.min.js'
    ]
});

Pyramid.newDependency({
    name: 'lookAndFeel',
    files: [
        'styles.css',
        'customStyles.css',
        'applyStyles.js'
    ]
});

Pyramid.newDependency({
    name: 'main',
    files: [
        'createNamespace.js',
        'views/buttonView.view', //contains just html code for a jquery.tmpl template
        'models/person.js',
        'init.js'
    ],
    dependencies: ['standard', 'lookAndFeel']
});
Html Files
<head>
    <script src="standardResources/pyramid-1.0.1.js"></script>
    <script src="dependencyLoader.js"></script>
    <script type="text/javascript">
        Pyramid.load('main');
    </script>
</head>
It does require you to maintain a single file to manage dependencies. I am thinking about creating a program that can automatically generate the loader file for you based on includes in the header but since it handles many different types of dependencies, maintaining them in one file might actually be better.
JSAnalyse uses static code analysis to detect dependencies between javascript files:
http://jsanalyse.codeplex.com/
It also allows you to define the allowed dependencies and to enforce them during the build, for instance. Of course, it cannot detect all dependencies because JavaScript is a dynamic, interpreted language that is not type-safe, as already mentioned. But it at least makes you aware of your JavaScript dependency graph and helps you keep it under control.
I have written a tool to do something like this: http://github.com/damonsmith/js-class-loader
It's most useful if you have a java webapp and you structure your JS code in the java style. If you do that, it can detect all of your code dependencies and bundle them up, with support for both runtime and parse-time dependencies.
