I just need to use the prelude-ls library in LiveScript, but not from the REPL. In my little test, I have 4 files:
main.htm
application.ls
application.js
require.js
I have the latest version of require.js (2.1.15) and in my main.htm I load the scripts:
<!DOCTYPE html>
<html>
<head>
<script src="./require.js" type="text/javascript"></script>
<script src="./prelude.js" type="text/javascript"></script>
</head>
<body>
</body>
</html>
Then, I go to my application.ls to test:
require! 'prelude-ls'
[1 2 3] |> prelude-ls.map (* 2)
My compile command is:
lsc -cwd $myFilePath
And it compiles just fine. Then I open the page to check the result and receive the following error:
Uncaught Error: Module name "prelude-ls" has not been loaded yet for context: _. Use require([])
Well, I saw that this is a very common error and that the fixes happen in the js file, not in the ls, but none of the links I followed solved my problem. I've tried it on 2 computers and had exactly the same result.
My final js file, application.js is:
// Generated by LiveScript 1.2.0
(function(){
var preludeLs;
preludeLs = require('prelude-ls');
preludeLs.map((function(it){
return it * 2;
}))(
[1, 2, 3]);
}).call(this);
Please help me if possible. I really have read all the LiveScript documentation and it doesn't describe a first use with prelude-ls.
I think you're confusing AMD (here, RequireJS) and CommonJS.
AMD uses the form
require(['dep1', 'dep2'], function (dep1, dep2) {
dep1.callSomething();
});
(this allows for async loading)
CommonJS (which is what prelude-ls uses), on the other hand, is basically what node does (and what we generate with require!):
var preludeLs = require('prelude-ls')
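For illustration only (not part of the original answer), the generated code from the question would look roughly like this in AMD form, assuming prelude.js has been made available to RequireJS under the module name 'prelude-ls' (for example via a paths or shim configuration, which is an assumption here):
// AMD style (RequireJS): dependencies are listed up front and the callback
// only runs once they have been loaded asynchronously.
require(['prelude-ls'], function (preludeLs) {
    // same curried call that LiveScript generated in the question
    var doubled = preludeLs.map(function (it) { return it * 2; })([1, 2, 3]);
    console.log(doubled); // [2, 4, 6]
});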
At first I didn't understand; I found the answer confusing, though correct. Let me sum it up.
Actually it's very simple and it works as the docs say.
We must install prelude-ls with npm or bower. We need to include the file prelude-browser.js (at prelude-ls/browser/prelude-browser.js).
Then we just require prelude at the beginning of our code:
prelude = require 'prelude-ls'
and we can call prelude.sum, etc.
A second option is to import exactly the functions we need, like so:
{map, filter, lines} = require 'prelude-ls'
so we can call them directly.
That works with gulp and angularjs FYI.
This answer is clear.
Related
I have the following Node.js project (which is a Minimal Working Example of my problem):
module1.js:
module.exports = function() {
return "this is module1!";
};
module2.js:
var module1 = require('./module1');
module.exports = function() {
return module1()+" and this is module2!";
};
server.js:
var module2 = require('./module2');
console.log(module2()); // prints: "this is module1! and this is module2!"
Now I want to create a client.html file that will also use module2.js. Here is what I tried (and failed):
naive version:
<script src='module2.js'></script>
<script>alert(module2());</script> // should alert: "this is module1! and this is module2!"
This obviously doesn't work - it produces two errors:
ReferenceError: require is not defined.
ReferenceError: module2 is not defined.
Using Node-Browserify: After running:
browserify module2.js > module2.browserified.js
I changed client.html to:
<script src='require.js'></script>
<script>
var module2 = require('module2');
alert(module2());
</script>
This doesn't work - it produces one error:
ReferenceError: module2 is not defined.
Using Smoothie.js by @Torben:
<script src='require.js'></script>
<script>
var module2 = require('module2');
alert(module2());
</script>
This doesn't work - it produces three errors:
syntax error on module2.js line 1.
SmoothieError: unable to load module2 (0 )
TypeError: module2 is not a function
I looked at require.js but it looks too complicated to combine with Node.js - I didn't find a simple example that just takes an existing Node.js module and loads it into a web page (like in the example).
I looked at head.js and lab.js but found no mention of Node.js's require.
So, what should I do in order to use my existing Node.js module, module2.js, from an HTML page?
The problem is that you're using CJS modules but still trying to play the old way with inline scripts. That won't work; it's one or the other.
To take full advantage of the CJS style, organize your client-side code exactly the same way as you would server-side code, so:
Create client.js:
var module2 = require('./module2');
console.log(module2()); // prints: "this is module1! and this is module2!"
Create bundle with Browserify (or other CJS bundler of your choice):
browserify client.js > client.bundle.js
Include generated bundle in HTML:
<script src="client.bundle.js"></script>
After the page has loaded you should see "this is module1! and this is module2!" in the browser console.
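If you specifically want to call module2() from an inline script as in the original attempt, one option (a sketch, not part of the answer above) is to attach it to window inside client.js before bundling:
// client.js - sketch: expose the bundled module as a browser global so that an
// inline <script>alert(module2());</script> keeps working after Browserify.
var module2 = require('./module2');

window.module2 = module2; // deliberate global; only do this for the entry point
console.log(module2());   // prints: "this is module1! and this is module2!"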
You can also try simq with which I can help you.
Your problems with Smoothie Require were caused by a bug (https://github.com/letorbi/smoothie/issues/3). My latest commit fixed this bug, so your example should work without any changes now.
I'm trying to add unit testing for JavaScript into my web site. I use VS2013 and my project is an ASP.NET web site.
Based on recommendations (http://www.rhyous.com/2013/02/20/creating-a-qunit-test-project-in-visual-studio-2010/) I've done so far:
Created new ASP.NET app
Imported QUnit (using NuGet)
Into "Scripts" added links to js-file in my original web site (files PlayerSkill.js - containts PlayerSkill class and trainings.js - contains Trainer and some other classes)
Created new folder "TestScripts"
Added TrainingTests.js file
Wrote simple test:
test( "Trainer should have non-empty group", function () {
var group = "group";
var trainer = new Trainer(123, "Name123", group, 123);
equal(trainer.getTrainerGroup(), group);
});
Notice: my trainings.js file among others contains
function Trainer(id, name, group, level) {
...
var _group = group;
this.getTrainerGroup = function () { return _group ; }
};
When I execute my test I see error: Trainer is not defined.
It looks like the reference to my class is not recognized. I feel like linking the file is not enough, but what did I miss?
Please help add reference to the original file with class and run unit test.
Thank you.
P.S. Question 2: Can I add references to 2 files (my unit test will require one more class which is in another file)? How?
You should add all the relevant scripts of your application to your unit testing page so they all execute before your tests run:
<!DOCTYPE html>
<html>
<head>
<meta charset="utf-8">
<title>QUnit Test Results</title>
<link rel="stylesheet" href="/Content/qunit.css">
</head>
<body>
<div id="qunit"></div>
<div id="qunit-fixture"></div>
<script src="/Scripts/qunit.js"></script>
<script src="/Scripts/PlayerSkill.js"></script>
<script src="/Scripts/trainings.js"></script>
<script src="/TestScripts/TrainingTests.js"></script>
</body>
</html>
You should not use linked files because they will not exist physically in the script folder.
If you really want to use them, you should let Visual Studio IntelliSense resolve the physical path of the file, like this:
Type the script tag <script src=""></script>
Place the cursor inside the quotes of the src attribute and press CTRL + SPACE
Search for your file and leave the resolved path untouched
If your project location changes you must update the linked files and also the script references.
{Edit1}
Solution 2:
You could also use an MVC controller and a Razor view to create your unit testing page; the linked files will then work as expected. The only downside is an extra controller in your project, but that is not bad at all if, for example, you want to test loading content with ajax, which browsers block by default when pages are run from a local file.
Solution 3:
You can also set up a new MVC project just for your javascript unit testing, just as you usually set up a new project for any server-side code; this helps keep your tests from interfering with your production code.
{Edit 2}
Solution 4:
As part of the javascript ecosystem you could use grunt or gulp to automate copying your scripts from anywhere into your project before running the tests. You could write a gulpfile.js like this:
var gulp = require('gulp');
var sourcefiles = [/* your project file paths */];

gulp.task('default', function () {
    return gulp.src(sourcefiles).pipe(gulp.dest('Scripts'));
});
Then run it by opening a console and running the command gulp or gulp default.
Looks like trainings.js is not loaded when TrainingTests.js runs. See this question for more details on why this happens! Once that is fixed it does work. And yes, similar to trainings.js, you can have any number of files in any folder as long as you reference them properly. I have created a sample fiddle accessible at http://plnkr.co/edit/PnqVebOzmPpGu7x2qWLs?p=preview
<body>
<div id="qunit"></div>
<div id="qunit-fixture"></div>
<script src="http://code.jquery.com/qunit/qunit-1.18.0.js"></script>
<script src="trainings.js"></script>
<script src="TrainingTests.js"></script>
</body>
In my case I wanted to run my tests from within my ASP.NET web application, and also on a CI server. In addition to the other information here I needed the following, otherwise I experienced the same error as the OP on my CI server:
Add one or more require() calls to test scripts.
Set the NODE_PATH environment variable to the root of my application.
Example of require()
Within my test scripts I include a requires block; the conditional allows me to use the script from a web browser without needing to adopt a third-party equivalent such as requirejs (which is convenient).
if (typeof(require) !== 'undefined') {
require('lib/3rdparty/dist/3p.js');
require('js/my.js');
require('js/app.js');
}
Example of setting NODE_PATH
Below, 'wwwroot' is the path where /lib/ and the other application files are located. My test files are located within /tests/.
Using bash
#!/bin/bash
cd 'wwwroot'
export NODE_PATH=`pwd`
qunit tests
Using powershell
#!/usr/bin/pwsh
cd 'wwwroot'
$env:NODE_PATH=(pwd)
qunit tests
This allowed me to run tests both within my ASP.NET web application, and also from a CI server using a script.
HTH.
If you're wondering how to make your tests see your code when running from the command line (not from the browser!), here is a slightly expanded version of Shaun Wilson's answer (which doesn't work out of the box, but contains a good idea of where to start).
Having following structure:
project
│ index.js <--- Your script with logic
└───test
tests.html <--- QUnit tests included in standard HTML page for "running" locally
tests.js <--- QUnit test code
And let's imagine that in your index.js you have the following:
function doSomething(arg) {
// do smth
return arg;
}
And the test code in tests.js (note that this can be the whole content of the file - you don't need anything else to make it work):
QUnit.test( "test something", function( assert ) {
assert.ok(doSomething(true));
});
Running from command line
To make your code accessible from the tests you need to add two things to the scripts.
The first is to explicitly "import" your script from the tests. Since JS doesn't have such functionality out of the box, we'll need to use Node's require. And to keep our tests working from HTML (when you run them from the browser, require is undefined), add a simple check:
// Add this in the beginning of tests.js
// Use "require" only if run from command line
if (typeof(require) !== 'undefined') {
// It's important to define it with the very same name in order to have both browser and CLI runs working with the same test code
doSomething = require('../index.js').doSomething;
}
But if index.js does not expose anything, nothing will be accessible. So you need to explicitly expose the functions you want to test (read more about exports). Add this to index.js:
//This goes to the very bottom of index.js
if (typeof module !== 'undefined' && module.exports) {
exports.doSomething = doSomething;
}
When it's done, just type
qunit
And the output should look like:
TAP version 13
ok 1 test something
1..1
# pass 1
# skip 0
# todo 0
# fail 0
Well, thanks to the two answers I did localize the problem: it was indeed the inability of VS to copy the needed file into the test project.
This can probably be resolved in multiple ways; I found one, with the idea copied from: http://www.javascriptkit.com/javatutors/loadjavascriptcss.shtml
The solution is simple: add the <script> tag dynamically.
In order to achieve this, I've added the following code to the test page:
<script>
var fileref = document.createElement('script');
fileref.setAttribute("type", "text/javascript");
var path = 'path'; // here goes the absolute address of the JS file on my web site
fileref.setAttribute("src", path);
// dynamically load the .js file by appending it to <head>
document.getElementsByTagName("head")[0].appendChild(fileref);
</script>
And moved my tests into $(document).ready() (this also requires a reference to jQuery beforehand):
$(document).ready(function () {
QUnit.test("Test #1 description", function () { ... });
});
A similar approach also works for pure test files.
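If the dynamically added script can finish loading after QUnit has already started, the tests may still run before the class exists. A hedged refinement (a sketch, not part of the solution above; the path and test are placeholders) is to hold QUnit back until the script's onload fires:
// Sketch: delay QUnit until the dynamically added script has loaded,
// so Trainer is guaranteed to be defined when the tests run.
QUnit.config.autostart = false;            // documented QUnit setting

var fileref = document.createElement('script');
fileref.setAttribute("type", "text/javascript");
fileref.setAttribute("src", "/Scripts/trainings.js"); // assumed path to the file under test
fileref.onload = function () {
    QUnit.test("Trainer should have non-empty group", function (assert) {
        var trainer = new Trainer(123, "Name123", "group", 123);
        assert.equal(trainer.getTrainerGroup(), "group");
    });
    QUnit.start();                         // run the tests now that everything is loaded
};
document.getElementsByTagName("head")[0].appendChild(fileref);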
I deploy my project by building the source files with gulp right on the server. To prevent caching issues, the best practice seems to be adding a unique number to the request URL; see: Preventing browser caching on web application upgrades.
On npm, I couldn't find a tool for automatically adding a version number to the request. I'm asking whether someone has invented such a tool before.
A possible implementation could be the following:
I have a file index.html in the src/ folder, with the following script tag:
<script src="js/app.js<!-- %nocache% -->"></script>
During the build it is copied to the dist/ folder, and the comment is replaced by an auto-incremented number:
<script src="js/app.js?t=1234"></script>
You can use gulp-version-number for this. It can add version numbers to linked scripts, stylesheets, and other files in your HTML documents, by appending an argument to the URLs. For example:
<link rel="stylesheet" href="main.css">
becomes:
<link rel="stylesheet" href="main.css?v=474dee2efac59e2dcac7bf6c37365ed0">
You don't even have to specify a placeholder, like you showed in your example implementation. And it's configurable.
Example usage:
const gulp = require('gulp');
const gulpLoadPlugins = require('gulp-load-plugins');
const $ = gulpLoadPlugins();
const versionConfig = {
'value': '%MDS%',
'append': {
'key': 'v',
'to': ['css', 'js'],
},
};
gulp.task('html', () => {
return gulp.src('src/**/*.html')
.pipe($.htmlmin({collapseWhitespace: true}))
.pipe($.versionNumber(versionConfig))
.pipe(gulp.dest('docroot'));
});
NOTE:
I can no longer recommend this plugin. It is no longer maintained and there are some issues with it. I created a pull request some time ago, but there is no response from the author.
You can use the gulp-rev module. This will append a version number to the files; the version is a hash of the file content, so it will only change if the file changes.
You then output a manifest file containing the mapping from each file to its revisioned name, e.g. Scripts.js to Scripts-8wrefhn.js.
Then use a helper function when returning the page content to map the correct values.
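For illustration only (not from the answer above), a minimal gulp-rev pipeline could look like this sketch; the paths are assumptions, see the docs linked below for the full options:
var gulp = require('gulp');
var rev = require('gulp-rev');

// Copy scripts with a content-hash suffix (e.g. Scripts.js -> Scripts-8wrefhn.js)
// and write rev-manifest.json with the original-to-revisioned name mapping.
gulp.task('rev', function () {
    return gulp.src('Scripts/**/*.js')
        .pipe(rev())
        .pipe(gulp.dest('dist/Scripts'))
        .pipe(rev.manifest())
        .pipe(gulp.dest('dist'));
});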
I have used the above process. However, there's another module, gulp-rev-all, which is a forked extension of gulp-rev that does a little more, e.g. automatically updating file references in pages.
Documentation here:
gulp-rev: https://github.com/sindresorhus/gulp-rev
gulp-rev-all: https://www.npmjs.com/package/gulp-rev-all
I worked on writing a regex which, used along with gulp-replace, works just fine.
Please find the code below. The following is quick code for images, css, and js in the view files of a CodeIgniter project, but it should work fine for all kinds of files as long as the source folder is specified correctly.
You may customize the code as per your use.
You can run the tasks all together using gulp or gulp default, or run an individual task at a time.
'use strict';
var gulp = require('gulp');
var replace = require('gulp-replace');
function makeid() {
return (Math.random() + 1).toString(36).substring(7);
}
gulp.task('versioningCss', () => {
return gulp.src('application/modules/**/views/*.php')
.pipe(replace(/(.*)\.css\?(_v=.+&)*(.*)/g, '$1.css?_v='+makeid()+'&$3'))
.pipe(replace(/(.*)\.css\"(.*)/g, '$1.css?_v='+makeid()+'"$2'))
.pipe(replace(/(.*)\.css\'(.*)/g, '$1.css?_v='+makeid()+'\'$2'))
.pipe(gulp.dest('application/modules'));
});
gulp.task('versioningJs', () => {
return gulp.src('application/modules/**/views/*.php')
.pipe(replace(/(.*)\.js\?(_v=.+&)*(.*)/g, '$1.js?_v='+makeid()+'&$3'))
.pipe(replace(/(.*)\.js\"(.*)/g, '$1.js?_v='+makeid()+'"$2'))
.pipe(replace(/(.*)\.js\'(.*)/g, '$1.js?_v='+makeid()+'\'$2'))
.pipe(gulp.dest('application/modules'));
});
gulp.task('versioningImage', () => {
return gulp.src('application/modules/**/views/*.php')
.pipe(replace(/(.*)\.(png|jpg|jpeg|gif)\?(_v=.+&)*(.*)/g, '$1.$2?_v='+makeid()+'&$4'))
.pipe(replace(/(.*)\.(png|jpg|jpeg|gif)\"(.*)/g, '$1.$2?_v='+makeid()+'"$3'))
.pipe(replace(/(.*)\.(png|jpg|jpeg|gif)\'(.*)/g, '$1.$2?_v='+makeid()+'\'$3'))
.pipe(gulp.dest('application/modules'));
});
gulp.task('default', [ 'versioningCss', 'versioningJs', 'versioningImage']);
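A simpler variant of the same gulp-replace idea, closer to the placeholder approach from the question (a sketch; the paths and the timestamp scheme are assumptions):
var gulp = require('gulp');
var replace = require('gulp-replace');

// Replace the <!-- %nocache% --> placeholder with a build-time query string,
// turning js/app.js<!-- %nocache% --> into js/app.js?t=<timestamp>.
gulp.task('nocache', function () {
    return gulp.src('src/**/*.html')
        .pipe(replace('<!-- %nocache% -->', '?t=' + Date.now()))
        .pipe(gulp.dest('dist'));
});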
It looks like you may have quite a few options.
https://www.npmjs.com/package/gulp-cachebust
https://www.npmjs.com/package/gulp-buster
Hope this helps.
You can use
<script type="text/javascript" src="js/app.js?seq=<%=DateTime.Now.Ticks%>"></script>
or
<script type="text/javascript" src="js/app.js?seq=<%=DateTime.Now.ToString("yyyyMMddHHmm") %>"></script>
I'm working on a library that targets both browsers and NodeJS applications. Modules use the AMD convention, which is theoretically flexible enough to map to pretty much any situation today. The source files are then converted with tools to be distributed for the different platforms - again, browsers and NodeJS.
By the way, there's a wonderful tool called uRequire to help with that but I'm still not sure what my best option is, so I'm asking here for relevant experience.
Here is the file hierarchy I have:
- bower_components/
- eventemitter2/ ...
- lodash/ ...
- source/
- library/
- lodash.js -> ../../bower_components/lodash/dist/lodash.js
- EventEmitter.js -> ../../bower_components/eventemitter2/lib/eventemitter2.js
- Observable.js:
define(["lodash", "EventEmitter"], function(Utility, EventEmitter) {
function Observable(options) { ... };
return Observable;
});
In the end, the big difference between the browser and NodeJS sides is:
Browser-side: the EventEmitter implementation is simply the eventemitter2 browser module, which is mapped to "library/EventEmitter";
NodeJS-side: EventEmitter comes from require("events").EventEmitter, with events being a native package, not a local file or module.
So, my question is: how can I have that Observable object work with NodeJS without massive tinkering? What I'm not sure about is how I can make the EventEmitter implementation available to my module, since it is not a local module (as such I cannot write any paths mapping), and moreover it is not the module itself we'll use directly but its "EventEmitter" property...
Any help/thinking would be appreciated. I believe that many have run in similar situations and I'd be curious to know what they have to say!
uRequire makes it trivial to use runtimeInfo and selectively load alternative dependencies at runtime (you can always choose to have alternative builds and replace deps with alternatives/mocks at build time, if you don't want to write selective code like this).
Runtime info works the same in all templates, including UMD and combined, so whether executing on:
nodejs,
a browser with an AMD loader like requirejs, or
a browser with a plain <script> tag,
you can choose what each module dependency means in each case dynamically, using the __isAMD, __isNode & __isWeb runtime variables.
What you need is :
- bower_components/
- eventemitter2/ ...
- lodash/ ...
- requirejs/ ...
- source/
- library/
- EventEmitter.js
- Observable.js:
where Observable.js is for example
define(["lodash", "EventEmitter"], function(_, EventEmitter) {
function Observable(options) { this.myOptions = options };
Observable.EventEmitter = EventEmitter;
Observable._ = _;
return Observable;
});
and EventEmitter.js is :
define(function(){
var EventEmitter2;
if (__isNode) {
return require("events").EventEmitter;
} else {
if (__isAMD) {
return EventEmitter2 = require("eventemitter2");
} else if (__isWeb) {
return window.EventEmitter2;
}
}
});
**Notes**:
You don't need to worry about "events" trying to load on the AMD side, because it's a known node dep (otherwise you would need to list it).
EventEmitter2 = require(...) is needed to establish the inference from the exported dependency identifier EventEmitter2 to window. The last case, Web/Script, uses window.EventEmitter2 thanks to this! Alternatively you can list it in depsVars.
Then with the following grunt-urequire config (in coffeescript):
module.exports = gruntFunction = (grunt) ->
grunt.initConfig gruntConfig =
urequire:
library:
path: "source/library"
dstPath: "build/UMD"
runtimeInfo: ['EventEmitter'] # dont need it in other files
template: 'UMDplain'
combined:
derive: 'library'
main: 'Observable'
dependencies: exports: root: {'Observable': 'Obs'}
dstPath: "build/almond/Observable.js"
template: 'combined'
grunt.loadNpmTasks "grunt-urequire"
you have two builds:
A) library: with separate UMD files, which you can run e.g. from source\test\load_node.js:
var Observable = require("../../build/UMD/Observable");
console.log(Observable.EventEmitter);
or from the browser (source/test/Loader_unoptimized_AMD.html):
<!DOCTYPE html>
<html>
<head><title>test crossdev: RequireJs, UMD</title></head>
<body>Check console!</body>
<script src="../../bower_components/requirejs/require.js"></script>
<script>
require.config ({
baseUrl: '../../build/almond',
paths: {
lodash: "../../bower_components/lodash/dist/lodash.min",
eventemitter2: "../../bower_components/eventemitter2/lib/eventemitter2"
}
});
require(["Observable" ], function(Observable){
console.log(Observable);
console.log(Observable.EventEmitter);
});
</script>
</html>
and
B) combined: with all files inlined & having its own mini-loader (almond), which works on nodejs, Web/AMD and Web/Script. Running from source/test/Loader_almondJs_plainScript.html:
<!DOCTYPE html>
<html>
<head><title>test crossdev: plain script, combined/almond</title></head>
<body>Check console!</body>
<script src="../../bower_components/lodash/dist/lodash.min.js"></script>
<script src="../../bower_components/eventemitter2/lib/eventemitter2.js"></script>
<script src="../../build/almond/Observable.js"></script>
<script>
console.log(window.Obs);
console.log(window.Obs.EventEmitter);
</script>
</html>
or using RequireJs as AMD loader (source/test/Loader_almondJs_AMD.html):
<!DOCTYPE html>
<html>
<head><title>test crossdev: RequireJs, combined/almond</title></head>
<body>Check console!</body>
<script src="../../bower_components/requirejs/require.js"></script>
<script>
require.config ({
baseUrl: '../../build/almond',
paths: {
lodash: "../../bower_components/lodash/dist/lodash.min",
eventemitter2: "../../bower_components/eventemitter2/lib/eventemitter2"
}
});
require(["Observable" ], function(Observable){
console.log(Observable);
console.log(Observable.EventEmitter);
});
</script>
</html>
You can see the test project in https://github.com/anodynos/nodejs-browser-cross-development
We're developing a portal with lots of portlets (independent applications within the page/portal). Each portlet has to be independent: it has to be able to run on a stand-alone page or from within the portal.
We've been asked not to add tons of javascript files to the portal base page (the one that calls everything). It also comes with dojo (but no one uses it).
Is there any way to load javascript files (including jQuery itself, so it can't be the solution) if they are not loaded yet? The answer can use dojo.
Right now we thought of:
if (!window.jQuery) {
document.write('<script src="/Scripts/jquery-1.5.1.min.js" type="text/javascript"><' + '/script>');
}
if (!window.jQuery.ui) {
document.write('<script src="/Scripts/jquery-ui-1.8.11.min.js" type="text/javascript"></scr' + 'ipt>');
}
[...] other includes
The problem with this is that jquery isn't loaded when the jQuery.ui test is done, so an error is thrown and the 2nd file is not loaded.
Edit
Re-writing the issue: the problem is that we could have 4 portlets, each requiring jQuery + jQuery-ui + different other plugins/files. So they all need to include code to load all those files independently. But we don't want to load jQuery and jQuery-ui 4 times either.
The solution to this seems to be to use separate script blocks. Apparently document.write will not actually load the scripts until the script block closes.
That is, try this:
<script>
if (!window.jQuery) {
document.write('<script src="/Scripts/jquery-1.5.1.min.js" type="text/javascript"><' + '/script>');
}
</script>
<script>
if (!window.jQuery.ui) {
document.write('<script src="/Scripts/jquery-ui-1.8.11.min.js" type="text/javascript"></scr' + 'ipt>');
}
</script>
Works for me. Tested in IE and Firefox.
Misread the question slightly (can and can't look very similar).
If you're willing to use another library to handle it, there are some good answers here.
loading js files and other dependent js files asynchronously
I've always injected js files via js DOM manipulation
if (typeof jQuery == 'undefined') {
var DOMHead = document.getElementsByTagName("head")[0];
var DOMScript = document.createElement("script");
DOMScript.type = "text/javascript";
DOMScript.src = "http://code.jquery.com/jquery.min.js";
DOMHead.appendChild(DOMScript);
}
but it's a bit picky and may not work in all situations
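One way around the ordering problem from the question (a sketch using the same kind of DOM injection; the paths are just the ones from the question, and very old IE would need onreadystatechange instead of onload):
// Chain the loads: request jQuery UI only after jQuery itself has finished
// loading, so the dependency is always present when the dependent file runs.
function loadScript(src, onLoaded) {
    var s = document.createElement('script');
    s.type = 'text/javascript';
    s.src = src;
    if (onLoaded) s.onload = onLoaded;   // fires once the file has been executed
    document.getElementsByTagName('head')[0].appendChild(s);
}

if (!window.jQuery) {
    loadScript('/Scripts/jquery-1.5.1.min.js', function () {
        loadScript('/Scripts/jquery-ui-1.8.11.min.js');
    });
} else if (!window.jQuery.ui) {
    loadScript('/Scripts/jquery-ui-1.8.11.min.js');
}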
Just write your own modules (in Dojo format, which since version 1.6 has switched to the standard AMD async-load format) and dojo.require (or require) them whenever a portlet is loaded.
The good thing about this is that a module will always only load once (even when a portlet type is loaded multiple times), and only at the first instance it is needed -- dojo.require (or require) always first checks if a module is already loaded and will do nothing if it is. In addition, Dojo makes sure that all dependencies are also automatically loaded and executed before the module. You can have a very complex dependency tree and let Dojo do everything for you without you lifting a finger.
This is very standard Dojo infrastructure. The entire Dojo toolkit is built on top of it, and you can use it to build your own modules as well. In fact, Dojo encourages you to break your app down into manageable chunks (in my opinion, the smaller the better) and dynamically load them when necessary. Also, leverage class hierarchies and mixin support. There is a lot of Dojo infrastructure provided to enable you to do just that.
You should also organize your classes/modules by namespace for maximal manageability. In my opinion, this type of huge enterprise-level web app is where Dojo truly shines compared to other libraries like jQuery. You don't usually need such infrastructure for a few quick-and-dirty web pages with some animations, but you really appreciate it when you're building complicated and huge apps.
For example, pre-1.6 style:
portletA.js:
dojo.provide("myNameSpace.portletA.class1");
dojo.declare("myNameSpace.portletA.class1", myNameSpace.portletBase.baseClass, function() { ...
});
main.js:
dojo.require("myNameSpace.portletA.class1");
var myClass1 = new myNameSpace.portletA.class1(/* Arguments */);
Post-1.6 style:
portletA.js:
define("myNameSpace/portletA/class1", "myNameSpace/portletBase/baseClass", function(baseClass) { ...
return dojo.declare(baseClass, function() {
});
});
main.js:
var class1 = require("myNameSpace/portletA/class1");
var myClass1 = new class1(/* Arguments */);
Pyramid is a dependency library that can handle this situation well. Basically, you define your dependencies (in this case, javascript libraries) in a dependencyLoader.js file and then use Pyramid to load the appropriate dependencies. Note that it only loads the dependencies once (so you don't have to worry about duplicates). You can maintain your dependencies in a single file and then load them dynamically as required. Here is some example code.
File: dependencyLoader.js
//Set up file dependencies
Pyramid.newDependency({
name: 'standard',
files: [
'standardResources/jquery.1.6.1.min.js'
//other standard libraries
]
});
Pyramid.newDependency({
name:'core',
files: [
'styles.css',
'customStyles.css',
'applyStyles.js',
'core.js'
],
dependencies: ['standard']
});
Pyramid.newDependency({
name:'portal1',
files: [
'portal1.js',
'portal1.css'
],
dependencies: ['core']
});
Pyramid.newDependency({
name:'portal2',
files: [
'portal2.js',
'portal2.css'
],
dependencies: ['core']
});
Html Files
<head>
<script src="standardResources/pyramid-1.0.1.js"></script>
<script src="dependencyLoader.js"></script>
</head>
...
<script type="text/javascript">
Pyramid.load('portal1');
</script>
...
<script type="text/javascript">
Pyramid.load('portal2');
</script>
So shared files only get loaded once. And you can choose how you load your dependencies. You can also just define a further dependency group such as
Pyramid.newDependency({
name:'loadAll',
dependencies: ['portal1','portal2']
});
And in your html, just load the dependencies all at once.
<head>
<script src="standardResources/pyramid-1.0.1.js"></script>
<script src="dependencyLoader.js"></script>
<script type="text/javascript">
Pyramid.load('loadAll');
</script>
</head>
Some other features that might also help are that it can handle other file types (like css) and can combine your separate development files into a single file when ready for a release. Check out the details here - Pyramid Docs
note: I am biased since I worked on Pyramid.