If I do
repl = require 'repl'
repl.start {useGlobal: true}
It starts a Node repl. How do I start a CoffeeScript repl instead?
Thanks
Nesh is a project to try and make this a bit easier and extensible:
http://danielgtaylor.github.com/nesh/
It provides a way to embed a REPL with support for multiple languages, including CoffeeScript, as well as an asynchronous plugin architecture, support for executing code in the context of the REPL on startup, etc. For example:
nesh = require 'nesh'

nesh.loadLanguage 'coffee'
nesh.start (err, repl) ->
  nesh.log.error err if err
It also supports a bunch of options with the default plugins and exposes some built-in convenience functions as well:
CoffeeScript = require 'coffee-script'  # needed for CoffeeScript.compile below

opts =
  welcome: 'Welcome to my interpreter!'
  prompt: '> '
  evalData: CoffeeScript.compile 'hello = (name="world") -> "Hello, #{name}!"', {bare: true}

nesh.start opts, (err, repl) ->
  nesh.log.error err if err
I think the coffee-script module does not export the REPL functionality to be used programmatically, like the Node repl module does. But CoffeeScript has a repl.coffee file that can be used, even though it's not exported in the main coffee-script module. Taking a hint from command.coffee (which is the file that's executed when you run the coffee command) we can see that the REPL works just by requiring the repl file. So, running this script should start a CoffeeScript REPL:
require 'coffee-script/lib/coffee-script/repl'
This approach, however, is quite hacky. The most important flaw is that it depends heavily on how the coffee-script module works internally and how it's organized. Nothing prevents the repl.coffee file from being moved out of coffee-script/lib/coffee-script, or its behavior from changing.
A better approach might be to spawn the coffee command without arguments from Node, just like one would do from the command line:
{spawn} = require 'child_process'
spawn 'coffee', [], stdio: 'inherit'
The stdio: 'inherit' option makes the spawned command read from the stdin and write to the stdout of the current process.
My colleague put something like this in our code:
const information = require('../relative/path/' + tag + '.json');
The funny thing is that it works, and I don't really see how.
I have created this minimal project:
$ head *.json main.js
==> 1.json <==
["message #1"]
==> 2.json <==
["message two"]
==> 3.json <==
["message III"]
==> package.json <==
{
  "dependencies": {
    "webpack": "^5.38.1",
    "webpack-cli": "^4.7.2"
  }
}
==> package-lock.json <==
...
==> main.js <==
const arg = process.argv[2] ? process.argv[2] : 1;
console.log(require(`./${arg}.json`)[0]);
when I run the original program, I get this:
$ node main.js 1
message #1
$ node main.js 2
message two
$ node main.js 3
message III
so now I compile with webpack
$ node_modules/.bin/webpack ./main.js
and it creates a dist directory with a single file in it, and that new bundled program works too:
$ node dist/main.js 1
message #1
$ node dist/main.js 2
message two
$ node dist/main.js 3
message III
and when I look inside the bundle, all the info is bundled.
When I remove the require from the program, and just print the arg, the bundled program is a single line.
So how does it do it? Does it somehow calculate every possible file? Or just include everything from the current directory down?
Funny thing is in my simple example, package.json ended up in there too, but in the real one that gave me the idea, it didn't.
Does anybody know how this works?
I mean the simple practical answer for me is, never put variables in require... but I am still curious.
PS: the real one is a web project; I just used node and args for the example.
Webpack always bundles all require'd files into the final output file (dist/main.js in your example). You cannot require anything that was not bundled.
If you require something that is not a constant, as you pointed out, it might lead to some trouble. That is why eslint has a no-dynamic-require rule for exactly that. But if you know what you are doing, everything is just fine.
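(Strictly speaking, that rule ships with the eslint-plugin-import package rather than eslint core; a minimal .eslintrc.json sketch to enable it, assuming the plugin is installed, might look like this:)

{
  "plugins": ["import"],
  "rules": {
    "import/no-dynamic-require": "error"
  }
}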
Webpack uses some heuristics to support non-build-time-constant values (i.e. expressions) for require. The exact behavior is documented in webpack's documentation on dependency management.
As explained in that link, your require('../relative/path/' + tag + '.json') will lead webpack to determine:
Directory: ../relative/path
Regular expression: /^.*\.json$/
And will bundle all files matching that criterion.
When your require call is executed, it will provide the file that matches it exactly, or throw an error if that file was not bundled.
Important: This means, of course, that you cannot add files after bundling. You must have files in the right place, before bundling, so they can be found, added and ultimately resolved by webpack.
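To make that concrete, webpack exposes the same machinery explicitly through its require.context API; here is a rough sketch of what the dynamic require above amounts to (the directory and the loadTag helper are illustrative, not part of your code):

// webpack builds a context like this implicitly for the dynamic require;
// false = don't recurse into subdirectories, /\.json$/ = the inferred filter
const context = require.context('../relative/path', false, /\.json$/);

function loadTag(tag) {
  // resolves against the files that existed at build time;
  // throws if './<tag>.json' was not bundled into the context
  return context('./' + tag + '.json');
}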
Also note that oftentimes you don't need to write your own webpack experiments. Webpack has plenty of official samples; e.g., your case is illustrated exactly by this official webpack sample.
I have some JavaScript that is going to run in the browser, but I have broken the logic-based functions that have nothing to do with the DOM out into their own .js files.
If it's possible, I would prefer to test these files via the command line: why open a browser just to test code logic? I have dug through multiple testing libraries for Node.js, and I suppose it's not a big deal, but they seem to require that I build a whole Node project, which requires that I provide a main, which doesn't really exist in my project since it's just functions that get fired from a web page.
Is there a solution for testing JavaScript functions that can be as simple as just writing a .js file with tests in it and calling a utility to run those tests? Something simpler than having to set up a runner or build a project and manage dependencies? Something that feels like writing and running JUnit tests in Eclipse, and a little less like having to set up a Maven project just to run mvn test?
As a follow-up question, is this even the right way to go about it? Is it normal to be running tests for JavaScript that is meant to run in the browser in Node.js?
Use test runners like mocha or jasmine. They are very easy to set up, and you can start writing test code quickly. In mocha, for example, you can write simple test cases like:
var assert = require('assert');
var helper = require('../src/scripts/modules/helper.js');
var model = require('../src/scripts/modules/model.js');

model.first.setMenuItem({
  'title': 'Veggie Burger',
  'count': 257,
  'id': 1
});

describe('increment', function(){
  console.log("Count : " + model.first.getMenuItem().count);
  it('should increment the menu item', function(){
    helper.increment();
    assert.equal(model.first.getMenuItem().count, 258);
  });
});
and run them like
$ ./node_modules/mocha/bin/mocha test/*.js
where test/*.js are the specification files (unit test files like the one above)
the output will be something like:
Count : 257
increment
✓ should increment the menu item
1 passing (5ms)
You can even use a headless browser like PhantomJS to test code containing DOM manipulation.
I'm going to accept Ari Singh's answer for recommending Mocha, and special kudos to Ayush Gupta for leading me down a road that eventually let me write my .js files in a format that can be run in both the browser and Node.js.
Just to expand on Ari's answer with a few things that made life a little easier.
I installed mocha globally using npm install -g mocha. Additionally, I created a test directory that I put all my tests in. By doing this, all I had to do to run my unit tests was call mocha test. No package.json, no lengthy paths into node_modules to run mocha.
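With that setup the whole workflow looks something like this (file names taken from the example below):

$ npm install -g mocha
$ ls
foo.js  test
$ ls test
test_foo.js
$ mocha test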
Node.js requires you to export the functions from one file that you want to use in another file, which JavaScript in browsers does not. In order to support both Node.js and the browser, I did the following:
In my root directory, I have foo.js with the following contents:
function bar() {
  console.log("Hi")
}

module.exports = bar
Then in the test directory I have test_foo.js with the following contents (Note this example doesn't have a test, see Ari's answer for an example of writing tests in Mocha):
var bar = require('../foo.js')
bar()
Using this approach, I can test the bar function in node using mocha test and still use it in my HTML page by importing it as a script.
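One caveat worth noting: a bare module.exports assignment throws a ReferenceError when the file is loaded directly via a <script> tag, because browsers don't define module. A common guard (plain JavaScript, nothing project-specific) is:

function bar() {
  console.log("Hi")
}

// only attach the export when a CommonJS environment (Node.js) is present;
// browsers loading this file via <script> skip the assignment
if (typeof module !== 'undefined' && module.exports) {
  module.exports = bar
}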
I have more than one JavaScript file included as external scripts in my HTML documents, which I'd like to combine so things are less crowded. Is there any way to combine my js files? For example:
my files:
a.js
b.js
c.js
d.js
and I want:
all.js
Take a look at requirejs.org and especially look at r.js (http://requirejs.org/docs/optimization.html)
a.js
var i=0;
function fun1()
{
...
}
b.js
var k=0;
function fun2()
{
...
}
all.js: just copy and paste, like CSS
var i=0;
function fun1()
{
...
}
var k=0;
function fun2()
{
...
}
Take care of semicolons and closing braces, particularly when you write the whole script inside an event listener, especially for 'DOMContentLoaded':
document.addEventListener('DOMContentLoaded', function()
{
  //whole big script
}
);
instead, use:
var some_function = function(){ /* bla bla bla */ };
document.addEventListener('DOMContentLoaded', some_function);
A simple bash cat operation will do what you want, but at some stage you're probably going to want more, right?
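For the four files in the question, that one-liner is simply:

$ cat a.js b.js c.js d.js > all.js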
Grunt and Grunt-contrib-concat is a good starter, but you'll quickly realise grunt is not particularly good. To summarise usage, you create a gruntfile, install a few dependencies (i.e. install grunt and its command line interface, this is easy) and run grunt from your project root. It then parses the gruntfile to find out what you want it to do, and it does it. Pretty simple, and simple is good.
Next up is Gulp, which is a nice build system using streams, so, slightly more complex (well, easier and more powerful but, streams can be kind-of confusing at first). Gulp works in the same way only it parses a gulpfile for instructions. For a concat operation (with the gulp-concat plugin) the actual gulp task is trivial:
var concat = require('gulp-concat');

gulp.src('*.js')
  .pipe(concat('all.js'))
  .pipe(gulp.dest('dist/'));
Between the .src and the .dest you can pipe the files through multiple transforms, such as minifying, transpiling, notifying—the list of plugins and modules is dizzying (as it is for grunt).
However, if you're a fan of node and npm (you probably should be), then you can use npm scripts to create a build system. npm is the node package manager and needs a package.json to give it some clues as to how to work. Part of that JSON specification is a scripts block:
"scripts": {
"build" : "cat *.js > all.js"
}
You can then run npm run build from the command line, whereby npm will parse the package.json and execute the script using bash (sh, actually).
Note that these are build systems, and there are many others.
There are also other packagers (which you would probably use as part of your build system, although for some projects they are your entire build system), but they are more complex than your needs. For your own research: browserify, webpack and jspm are all excellent (bear in mind AMD modules lost out, so require.js is probably not worth your time), although this area is becoming congested. Each of these is a very powerful modularisation tool, but they will require some changes to how you structure your code. If you are serious about modularisation then they are worth your time learning.
On a slightly different tangent, there is some discussion about whether one large file is actually more beneficial than a number of smaller scripts. In many cases simply serving a few small files is actually quicker, and may be easier, although there can be other benefits of smashing code together. Currently it is probably still best to concat at least into fewer HTTP requests, but this requirement for performance is going away.
This might be helpful: https://github.com/mrclay/minify
OR
create a file all.js and paste in the code from a.js, b.js, c.js and d.js
I am trying to browserify JavaScript ES6 code with es6ify.
My code uses the THREE.js library (a WebGL abstraction library), and everything works pretty well until I try to add the traceur compiler runtime at the top of the bundle.
Here is my gulp task (the problem shouldn't be related to gulp):
var gulp = require('gulp');
var browserify = require('browserify');
var es6ify = require('es6ify');
var source = require('vinyl-source-stream'); // turns the bundle stream into a vinyl file

gulp.task('build', function(){
  browserify({debug: true})
    .add(es6ify.runtime)
    .transform(es6ify)
    .require(require.resolve('./app/index.js'), {entry: true})
    .bundle()
    .pipe(source('bundle.js'))
    .pipe(gulp.dest('./build/'));
});
somewhere in my application, I am trying to do something like:
import THREE from 'three';
var toto = new THREE.WebGLRenderer([...]);
and this fails because THREE is actually an empty object, thus WebGLRenderer is undefined.
THREE.js is in the dependencies in my package.json, and it usually imports fine. But when I add .add(es6ify.runtime) to my build process, require('three') becomes an empty object...
is there something I missed?
thanks!
well, sorry for the inconvenience, I just found a solution.
If I simply exclude the node modules from traceur compilation, it works:
so instead of:
.transform(es6ify)
I have now:
.transform(es6ify.configure(/^(?!.*node_modules)+.+\.js$/))
(this is an es6ify API documentation sample)
which is, by the way, faster to compile :-)
My use case is the following:
I decided to try CoffeeScript for a Node.js project, and I want some of my source files to begin with #!/usr/bin/env node.
Coffeescript treats lines that begin with # as comments.
I know that you can embed JS code in .coffee files, but that doesn't help here, because:
file.coffee
`#!/usr/bin/env node`
foo = 'bar'
Compiles to:
file.js
(function() {
  #!/usr/bin/env node;
  var foo;
  foo = 'bar';
}).call(this);
The compiler doesn't support this. See: https://github.com/jashkenas/coffee-script/issues/2215
But why not run it with coffee instead?
#!/usr/bin/env coffee
console.log 'Hello World'
Then just run ./my_code.coffee. The coffee executable is simply a wrapper around node, and can be used instead in nearly all circumstances.
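Just make the file executable first:

$ chmod +x my_code.coffee
$ ./my_code.coffee
Hello World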
Or create some sort of build system that tacks it on after the compile step. But you shouldn't really need to.
What you want is not possible with CoffeeScript, though you could, as Alex Wayne suggested, prepend the shebang manually to the file if you want to.
What I did for a project of mine is make a very small JS script with a shebang that loads the JS code compiled from CoffeeScript. See this file: https://github.com/meryn/jumpstart/blob/master/bin/jumpstart . This works well.

Another advantage of doing this is easier testing. You don't have to start a new child process to run the code. Instead, you can just call the run function, or whatever you have called it. This of course leaves the problem of passing proper parameters. I solved this by making the run function (see https://github.com/meryn/jumpstart/blob/master/src/run.coffee for source) delegate practically everything to a runWith function, which can be passed all the input variables (environment, CLI args, etc.) the script needs.
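A minimal plain-JS sketch of that shape (the run/runWith names follow the linked project; the bodies here are purely hypothetical):

// runWith receives every external input explicitly, so tests can call it
// directly with fabricated values instead of spawning a child process
function runWith(env, argv, stdout) {
  stdout.write('Hello, ' + (argv[2] || env.USER || 'world') + '\n');
}

// run is the thin entry point that the shebang wrapper script calls
function run() {
  runWith(process.env, process.argv, process.stdout);
}

module.exports = {run: run, runWith: runWith};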
Another advantage of doing this is easier testing. You don't have to start a new child process to run the code. Instead, you can use call the run function, or however you have called it. This of course leaves the problem of passing proper parameters. I did this by making the run function (see https://github.com/meryn/jumpstart/blob/master/src/run.coffee for source) delegate practically everything to runWith function which can be passed all the input variables (environment, cli args, etc) the script needs.