I would like to pass a RequireJS config and a source file (like my main.js) and receive a list of all of its dependencies.
This is probably part of what the r.js optimizer does; however, I couldn't find any documentation about its inner structure.
Does RequireJS provide an API to generate such a dependency list?
There is no API that I know of. However, if you run r.js (the optimizer) with a build configuration like this:
({
    baseUrl: ...,
    dir: ...,
    mainConfigFile: ".../config.js",
    findNestedDependencies: true,
    name: "main",
    optimize: "none"
})
then you'll get a build.txt file in the directory you specified for the optimized build (the value of dir). The general format of this file is:
<output-file1>
----------------
<module A>
<module B>
<output-file2>
----------------
<module C>
<module D>
This tells you that the output file output-file1 contains modules A and B, output-file2 contains modules C and D.
With the configuration I suggested above, you should have only one output file listed in the build.txt file for your main module, and the list of modules listed under it are all the modules it depends on.
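For reference, if the build configuration above is saved as a file (say build.js, a name chosen here only for illustration) alongside r.js, the optimizer is typically invoked like this, and build.txt is then written into the directory given by dir:

node r.js -o build.js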
In the configuration above I've set optimize: "none" because it saves time if the only thing you care about is dependencies. In a real build you'd want to let r.js use UglifyJS to minify your code. Also, findNestedDependencies: true is there to tell r.js to find dynamic calls to require in the middle of your code. And this brings up a major warning. If you have this:
if (blah)
    require(["foo"], function (foo) {...});
then findNestedDependencies: true will be able to detect that there is a dependency on the module foo. However, there is no way for r.js to handle this:
var module_name = obj[key];
require([module_name], function (module) {...});
To know what module this code is loading, r.js would have to execute the code (and even then there's a limit to what code execution can discover).
Related
I'm using gulp.js and an optimization tool for RequireJS (gulp-requirejs), which combines all scripts into one file. I have one define call with no name, but the tool generates a name for it. The problem is that I don't know how to call that module from another file.
For example in page1.js:
...
define("map",["jquery"],function($){
...
});
define("lib", ["jquery"],function($){
...
});
and in page2.js I would like to use the lib module from page1.js, but I am not sure how to do it. If the optimization tool did not set a name, my code would work; but this way I have no idea how to make it work. Any ideas?
It is not possible to use multiple modules in a single file, unless these modules are named. Otherwise RequireJS won't know how to relate a request for a module with the actual code that defines it.
The typical scenario when optimizing all modules into a single bundle is that there is going to be one specific module in the bundle which serves as the entry point of your application, and then you can just put that module name in paths:
require.config({
    paths: {
        lib: 'path/to/page1',
    }
});
If you do not have a single entry point but may in fact also have code outside the bundle that will initiate loading modules that are in the bundle, then you need to list those modules in bundles:
require.config({
    paths: {
        lib: 'path/to/page1',
    },
    bundles: {
        lib: ['map', ...],
    }
});
The bundles setting I have shown above says essentially "when you look for the module named map, fetch the module lib, and you will have the definition of map."
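For example (a minimal sketch reusing the names from the question), code that lives outside the bundle can then simply ask for map, and RequireJS will fetch the bundle first:

require(['map'], function (map) {
    // 'map' was defined inside path/to/page1.js; RequireJS loaded that file
    // because the bundles config lists 'map' as part of the 'lib' bundle
});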
I am using the RequireJS optimizer (r.js) through grunt and here is my requirejs config:
requirejs.config
    baseUrl: '/scripts'
    locale: window.localStorage.getItem('locale') || null
    ...
The thing is that the grunt r.js plugin (https://github.com/gruntjs/grunt-contrib-requirejs) throws an error every time I try to use a variable inside my requirejs config:
The main config file cannot be used because it cannot be evaluated correctly while running in the optimizer. Try only using a config that is also valid JSON, or do not use mainConfigFile and instead copy the config values needed into a build file or command line arguments given to the optimizer.
Have you managed to use a variable for the locale and r.js at the same time?
Your locale setting acquires a real value only at runtime. For parts of RequireJS' config that can only be given values at runtime, what I do is:
Call require.config (or requirejs.config) once with the information that is static; this config does not contain any variables. I point r.js at this static configuration.
At runtime, I have at least one additional call to require.config that sets those values that are to be computed. RequireJS combines multiple calls to require.config into one configuration.
r.js will only use the first configuration it recognizes in a file. So you may be able to just split your single requirejs.config call into a static and dynamic part and leave them in the same file.
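A minimal sketch of that split, assuming the static part lives in a config.js that mainConfigFile points at and the dynamic part runs early in main.js (both file names are just examples):

// config.js - static part; only values known at build time, so r.js can read it
require.config({
    baseUrl: '/scripts'
});

// main.js (runtime) - dynamic part; RequireJS merges successive
// require.config calls into one configuration
require.config({
    locale: window.localStorage.getItem('locale') || null
});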
When I write tests for my in-browser TS code, I hit the following problem. My "test" code files are located in a separate folder from the "application" code files (an arrangement that I am not willing to give up). Therefore, in order to import my "app" modules, I have to do this:
// tests/TS/SubComponent/Module.Test.ts
import m = module("../../Web/Scripts/SubComponent/Module");
This compiles just fine. But when loaded in browser, it will obviously not work, because from the standpoint of RequireJS running in the browser, the module is located at "app/SubComponent/Module" (after being remapped through web server and RequireJS config).
With TS 0.8.3 I was able to pull off this clever trick, but in 0.9.0 it no longer works, because now the compiler doesn't let me treat a module as an interface.
So the question is: how do you test your client-side code?
Clearly, I can't be the only person to be doing it, can I? :-)
I can't tell if you are using Visual Studio - this next bit is Visual Studio specific...
This is how I do it:
In my test project, I created a folder named "ReferencedScripts" and referenced the scripts from the project being tested (add existing item > add as link). Set the file to copy to the output folder.
Source: Include JavaScript and TypeScript tests in Visual Studio.
Using add-as-link makes the scripts available in your test project.
Not using Visual Studio? I recommend creating a task / job / batch file to copy the files into the test folder. You could even use tsc to do this task for you.
I am in the middle of a project where I have to migrate parts of a large JavaScript project to TypeScript, and this is how I managed to keep the tests running:
Use the grunt-typescript task to watch and compile all my .ts files from the source to a tmp folder (with their source maps). If you only have to deal with TypeScript files, you can use tsc in watch mode as well. The latter would be faster, but the former allowed me to simultaneously edit JavaScript and TypeScript files with livereload.
List the .ts files in karma.conf, but don't watch them or include them in the browser:
// list of files / patterns to load in the browser
files = [
    JASMINE,
    JASMINE_ADAPTER,
    // ...
    // We want the *.ts files to appear in the window.__karma__.files list
    { pattern: 'app/**/*.ts', included: false, watched: false, served: true },
    { pattern: 'app/**/*.js', included: false },
    // We do watch the folder where the typescript files are compiled
    { pattern: 'tmp/**/*.js', included: false },
    // ...
    // Finally, the test-main file.
    'tests/test-main.js'
];
Finally, in the test-main.js file, I mangle the names of the TypeScript files and register them as RequireJS paths pointing to the corresponding compiled .js files:
var dynPaths = {
    'jquery': 'lib/jquery.min',
    'text': 'lib/text'
};

var baseUrl = 'base/app/',
    compilePathUrl = '../tmp/';

Object.keys(window.__karma__.files).forEach(function (file) {
    if ((/\.ts$/).test(file)) {
        // For a typescript file, include the compiled file's path
        var fileName = file.match(/(.*)\.ts$/)[1].substr(1),
            moduleName = fileName.substr(baseUrl.length);

        dynPaths[moduleName] = compilePathUrl + fileName.substr(baseUrl.length);
    }
});

require({
    // Karma serves files from '/base'
    baseUrl: '/' + baseUrl,
    paths: dynPaths,
    shim: { /* ... */ },
    deps: [ /* tests */ ],

    // start test run, once requirejs is done
    callback: function () {
        window.__karma__.start();
    }
});
Then as I edit the TypeScript files, they are compiled and put in the tmp folder as JavaScript files. These trigger Karma's autowatch and it reruns the tests. In the tests, the require calls resolve correctly since we have explicitly overwritten the paths to the TypeScript files.
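For completeness, a spec can then request app modules by their original names; a hypothetical example (file and module names here are made up):

// tests/module.spec.js - 'SubComponent/Module' resolves through dynPaths
// to the compiled file under tmp/, not to the .ts source
define(['SubComponent/Module'], function (Module) {
    describe('SubComponent/Module', function () {
        it('loads through the remapped path', function () {
            expect(Module).toBeDefined();
        });
    });
});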
I realise that this is a bit hacky, but I had to jump through similar hoops while trying to include all my tests with REQUIRE_ADAPTER. So I assumed that there is no cleaner way of doing it.
Hopefully, if typescript becomes more prevalent, we will see better support for testing.
So here's ultimately what I've done: it turns out that Karma can handle/watch/serve files that are not within the base directory, exposing them to the browser under paths of the form "/absolute/C:/dir/folder/blah/file.js". This happens whenever a files pattern starts with "../".
This feature can be used to make RequireJS see the whole directory structure exactly as it exists on the file system, thus allowing the tests to import app modules in the form of "../../Web/App/Module.ts".
files = [
    // App files:
    { pattern: '../../Web/App/**/*', watched: true, served: true, included: false },

    // Test files:
    { pattern: '../js/test/**/*.js', watched: true, served: true, included: false }
];
Reference (version 0.8): http://karma-runner.github.io/0.8/config/files.html
Since the TypeScript code is compiled to JavaScript, you can use any JavaScript test framework.
I am using Jasmine: https://github.com/pivotal/jasmine/wiki
You can write your tests in Typescript with the .d.ts file here: https://github.com/borisyankov/DefinitelyTyped/blob/master/jasmine/jasmine.d.ts
But my client code is rather small and compiled to one output file, so I don't have the module issues that you describe.
It might be that I misunderstood your question - I can't comment yet...
The browser runtime does not need any TypeScript information. So your test scripts should import the compiled .js files the same way as any other JavaScript files they need. It might be that you have to copy them to a subfolder of your test project before you run your script.
I assume the bigger problem is that you have no interface information. Why do you want to import that information instead of referencing it? Especially since the import will also occur in the browser.
The reference only takes place in the IDE, so it does not matter in which folders the interface files are located.
/// <reference path="../../Web/Scripts/SubComponent/Module/References.ts" />
Okay, I've just gotten dropped into a project where I have several different modules written in AMD format. I need to get these loosely related JavaScript files into one JavaScript file, which will then be referenced as yet another AMD module across different projects (probably from a CDN).
The problem I'm facing is when I run r.js against these files and get them into one file, when I pull that combined file into another project it just gets spit out as undefined.
To give an idea of what I'm talking about
words.spelling.js
define(['jquery', 'some.other.class'], function ($, foo) {
    ...
});
words.grammar.js
define(['jquery', 'some.other.class'], function ($, foo) {
    ...
});
words.translation.js
define(['jquery', 'some.other.class'], function ($, foo) {
    ...
});
Run through r.js into words.min.js
Then say I pull it into app.js as
require(['jquery', 'app/main/main', 'words.min'], function ($, main, words) {
    $(document).ready(function () {
        console.log(words);
    });
});
words just shows up as undefined.
Just concatenating them all doesn't do anything as that just gives me a bunch of define statements one after another.
I tried creating a fake class that has
define(['word.grammar', 'word.translation', 'word.spelling'], function (g, t, s) {
    return {
        grammar: g,
        translation: t,
        spelling: s
    };
});
and running that through r.js, but no dice there either. Am I missing something here, or am I going to have to re-write these in non-AMD format so I can concatenate them together and return one giant object? Keep in mind, words.min.js is going to have to be hosted on a CDN and cached, as it'll be shared throughout a number of projects, so I need it as a separate file.
One solution would be to use a paths configuration to map these module names to their actual files.
So in development you use something like this:
require.config({
    paths: {
        'words.spelling': 'libs/words.spelling',
        'words.grammar': 'libs/words.grammar',
        'words.translation': 'libs/words.translation'
    }
});
You'll want to pass the same paths config from development into the r.js optimizer, so that the module names it puts inside the combined file have just the name, not some extra path info. For example, you want the module names inside your combined bundle to be 'words.spelling', not 'some/other/path/words.spelling'.
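A sketch of such a build file (the wrapper module name words.main, the output file name, and the optimize setting are assumptions; jquery and some.other.class are assumed to be supplied by the consuming application, so they are stubbed out with empty:):

({
    baseUrl: '.',
    paths: {
        'words.spelling': 'libs/words.spelling',
        'words.grammar': 'libs/words.grammar',
        'words.translation': 'libs/words.translation',
        // provided by the consuming app, so keep them out of the bundle
        'jquery': 'empty:',
        'some.other.class': 'empty:'
    },
    // the wrapper module (like the one shown in the question) that
    // depends on the three words.* modules
    name: 'words.main',
    out: 'words.min.js',
    optimize: 'uglify'
})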
And then to use the combined version in another application, you do something like this to map all those module names to the same file:
require.config({
    paths: {
        'words.spelling': 'libs/words.min',
        'words.grammar': 'libs/words.min',
        'words.translation': 'libs/words.min'
    }
});
Part of the confusion is that this is not the primary use of the r.js optimizer. It seems to be designed for use by the final web site developers, not by the module developers. But as you see above, it's possible to coerce it into that mode.
I have a feeling that the title just might not be explanatory :)
Setup
Suppose that I have the following structure:
where app.js files are main bootstrapping/entry modules for the applications that look like this:
app01
require.config({});

require([
    'app/component1.js'
],
function (component) {
    // do something with component1
});
app02
require.config({});

require([
    'app/component2.js'
],
function (component) {
    // do something with component2
});
which both work with appropriate index.html files.
I have a RequireJS build configuration file (assume correct placement related to the paths) for app01:
({
    appDir: 'apps/app01',
    baseUrl: '.',
    dir: 'built_apps/app01',
    optimize: 'closure',
    paths: {
    },
    modules: [
        {
            name: 'app'
        }
    ]
})
which works just fine. Similar file (replacing app01 with app02) works just fine for app02.
Problem/target
Now I want to be able to run the RequireJS build tool (using Google Closure with Rhino under Ant, not that it really matters in this case) for both the app01 and app02 applications using the same build configuration file and, preferably, without actually listing all the apps by name (since the number and names may vary over time).
Basically I expect (or rather hope) to have something like this:
({
    appDir: 'apps',
    baseUrl: '.',
    dir: 'built_apps',
    optimize: 'closure',
    paths: {
    },
    modules: [
        {
            name: 'app*/app' // notice the wildcard
        }
    ]
})
which would run over the apps directory, find all apps matching app*/app, and optimize each one of them.
I know I can use Ant to create such a build configuration file on the fly per app, run the build against it and then clean up, but I'd rather have a RequireJS solution.
Is there a way to do something like this with RequireJS?
There's no built-in wildcarding configuration for RequireJS. One way or another, you'll need code to do this. I'll observe that what you're asking for here amounts to translating a wildcard into some kind of implicit iteration on the module objects, akin to what mustache.js provides for its templates. IMO, that's a fairly brittle and limited approach in this context.
Instead, I recommend generating your modules array programmatically in JavaScript right in the build config. Recall, the build config is JavaScript not just a JSON data structure. This gives you sophisticated scripting capabilities there.
I've done this kind of scripting in the build config for the requirejs-rails gem. Here's an example gist that shows what r.js would see at build time.
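For illustration, a build config along these lines (the hard-coded list of app names is an assumption; in practice that list would itself be produced by whatever script drives the build) expands into one modules entry per app:

({
    appDir: 'apps',
    baseUrl: '.',
    dir: 'built_apps',
    optimize: 'closure',
    // the build config is evaluated as JavaScript, so the modules array
    // can be computed instead of written out by hand
    modules: ['app01', 'app02'].map(function (appName) {
        return { name: appName + '/app' };
    })
})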