I am trying to understand how this works in the case of JavaScript. Webpack basically minifies the JavaScript when it says it is compiling, I suppose. So in this case, how can some untrusted JavaScript code lead to execution of malicious code through webpack on the server? A kind person in the JavaScript IRC channel told me this could be achieved using the inline loader syntax, but I still don't understand how this is possible.
For reference the warning displayed in the getting started page is this:
Do not compile untrusted code with webpack. It could lead to execution
of malicious code on your computer, remote servers, or in the Web
browsers of the end users of your application.
Ref: https://webpack.js.org/guides/getting-started/
More information:
There are two ways it can execute code: when being compiled (via loaders), and when being run. SSR would be the latter; compiling code on your server would be the former. https://webpack.js.org/concepts/loaders/#inline allows you to specify a loader without it being in the webpack config, and loaders can run whatever they want.
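To make that concrete, here is a minimal sketch of the inline-loader pathway (file names like evil-loader.js and data.txt are made up for illustration). A loader is just a Node module that webpack executes on the machine doing the compiling, and the inline syntax lets the untrusted code nominate one of its own files as that loader:

// evil-loader.js - an ordinary webpack loader; webpack runs this at compile time,
// so anything Node can do (filesystem, network, child processes) is possible here
module.exports = function (source) {
  require('fs').writeFileSync('/tmp/pwned', 'ran during webpack compilation');
  // return valid JavaScript so the build still succeeds and nothing looks suspicious
  return 'module.exports = ' + JSON.stringify(source) + ';';
};

// index.js - the untrusted entry point; no webpack.config.js entry is needed
// because the loader is named inline, before the "!"
import data from './evil-loader.js!./data.txt';
console.log(data);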
A script that is being webpacked (and any script it includes) can run code when it is packed, and this is by design.
That warning was added after I sent an email to the npm security folks (request 86380) about webpack running code during the packing phase with this POC.
The trick is to abuse the "Magic Comments" feature documented here.
Just checked on current webpack 5.23.0 and this still works.
Apparently inline loaders are another abuse pathway. See: https://github.com/webpack/webpack/issues/10231
Related
I am learning ES6 modules at the moment. It took me so long to finally understand and become proficient in closures, IIFEs and scope within one script that I was almost upset to find that ES6 modules bring a different, more manageable way to organise modular code across several scripts, and that a bundler like Webpack then bundles it all back into one (or only a few) scripts.
I get the usual cross-origin error when I use script type="module" and try to run modules from my local file system, which is different from when I run a normal script by simply specifying a src!
Wherever I look the solution is to use a local host to get round this which I have done! But at what point does Webpack work its bundling? Is it when I run it in the command line or when it’s loaded into the browser?
If installing Webpack via npm in my project and setting up the configuration, does this mean I wouldn’t have to use a local host because my distribution code during the runtime is now in one script file, so it doesn’t have to import scripts not on the same URL?
I know it’s only on my local file system, but I cannot request scripts in the same folder when using ES6 modules due to the cross origin policy as I could if just specifying a script src without using modules.
Each time I run Webpack from the command line, it bundles together the latest code taken from the entry point (specified during my configuration).
This means I can use the ES6 module syntax on my entry file without having to use a live server as I never intend to load this JavaScript entry file into the browser. It’s simply to be the controller of all other script exports by importing what I need from them.
Then this script will be the target of Webpack's bundling (the entry point), meaning the only script that is loaded into the browser is the bundled script, which has no other imports. This avoids the need to run a live server when testing ES6 modules!
If not using Webpack, I would have to run a server because my main script would be importing other scripts and unless they have the same URL (they don’t even have one yet in my file system) then I will get a cross origin error. As soon as I run my local server 127.0.0.1 then it would work. I prefer testing with Webpack.
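For a concrete picture, a minimal config for this workflow might look like the sketch below (the entry and output names are just examples). Running webpack from the command line reads ./src/index.js, follows its imports, and writes dist/bundle.js, which is the only script the HTML ever references:

// webpack.config.js - a minimal example configuration (paths and names are illustrative)
const path = require('path');

module.exports = {
  mode: 'development',
  entry: './src/index.js',   // the "controller" file that imports from the other modules
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'bundle.js',   // the single script you load with <script src="dist/bundle.js">
  },
};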
When we talk about vanilla JavaScript, it's a frontend programming language; it needs a web server like IIS, Apache or nginx to deliver the content to a client when requested. After that, JavaScript runs in the client's browser. But every video or article I found said we need to install Node.js to make this work. What I know about Node.js is that it's a runtime environment for making JavaScript work outside the browser, like for a backend API or a regular desktop application.
Here is my question:
Why do we need to use Node.js if our target is to deploy a frontend webapp that's gonna run on the client browser?
You don't have to install and use Node to make frontend applications, but it can help a lot, especially in large projects. The main reason it's used is so that script-writers can easily install, use, and update external packages via NPM. For a few examples:
Webpack, to consolidate multiple script files into a single one for production (and to minify, if desired)
Babel, to automatically transpile scripts written in modern syntax down to ES6 or ES5
A linter like ESLint to avoid accidental bugs and enforce a consistent code style
A Sass preprocessor that can turn (concise) Sass into the standard (more verbose) CSS consumable by browsers
And so on. Organizing an environment for these sorts of things would be very difficult without NPM (which depends on Node).
None of it is necessary, but many find that it can make the development process much easier.
In the process of creating files for the client to consume, if you want to do anything more elaborate than write plain raw .js, .html, .css files, you'll need something extra - which is most often done via NPM.
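As a rough illustration (package versions are placeholders, not recommendations), the tools mentioned above typically end up as devDependencies in a package.json and are run through npm scripts:

{
  "name": "my-frontend-app",
  "scripts": {
    "build": "webpack --mode production",
    "lint": "eslint src"
  },
  "devDependencies": {
    "@babel/core": "^7.0.0",
    "eslint": "^8.0.0",
    "sass": "^1.0.0",
    "webpack": "^5.0.0",
    "webpack-cli": "^4.0.0"
  }
}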
It's only for extra support during development and for ease of installing libraries; almost like an extra IDE / helpful editor.
For example, you might want to see changes you make to your HTML and frontend JavaScript code without having to refresh the preview browser. Node will provide a package that does that...
It also makes installing and using libraries easier. For example, if you want to add a library like Bootstrap to your frontend, rather than searching around and downloading the files yourself, in a Node project you can simply run npm install bootstrap, which will automatically download the latest version from the right source.
That's all.
I read through many articles on Browserify like http://javascriptplayground.com/blog/2013/11/backbone-browserify/ and there is always a step such as below:
$ browserify app/app.js | uglifyjs > app/bundle.js
This seems to be done before you run the script in the browser to see how it works. Is there a way to NOT have to do a build each time I change the code? Something similar to the define() function in RequireJS...
It's 2015 now and there's a library for this, called drq. It internally uses synchronous XHR requests, so it's only suited for development purposes. You just have to include it:
<script src="drq.js"></script>
And then, you can do your require calls in any script of the page:
<script>
var myModule = require('my-module'),
myClass = require('./classes/my-class.js');
// etc.
</script>
It will look for node modules up to your web root, so be sure to npm install them in a directory no higher than that. Also, please take a look at the GitHub page, where you can find some tips to increase performance.
Again, please remember that bundles are the optimal solution for production.
I originally said you can't do this for the reasons below, but I want to add that where there is a will there is a way. I'm sure given enough time and effort, you (or someone) could (and probably will) come up with a way to accomplish this task - but as of right now (12/12/13), I don't know if there's any out of the box tools that will facilitate it.
browserify "modules" are written using the same concept as node.js modules. You write your code, and export any public methods/properties/etc via a module.exports object. Javascript in the browser doesn't support this sort of thing natively. There are some boilerplate templates (some info here) to help facilitate this in the browser, and they can be compatible with browserify, but...
When you browserify your code, the browserify script analyzes your syntax and finds the modules that it has to make available via the require method. This require method gets defined right in your bundle.js that you export, along with all the code for all the dependencies that your module needs. This allows the require method that browserify defines to work synchronously, returning a reference to the module that you requested immediately without waiting for any kind of web response (like, loading a js script).
Require.js works fundamentally differently than browserify. Require.js defines your packages using the define syntax you referenced, and exposes a require method which you use to tell Require.js which modules your code depends on. Require.js then, in turn, looks up the dependencies you require and if it hasn't loaded them for another module yet, generates a new script tag and forces your browser to load that module, waiting to execute your code until that is complete. This is an asynchronous process, which means, the javascript engine continues to process instructions while it waits for the new script to download, parse, and execute. Require.js wraps all of this up in some callbacks, so it can wait until all your dependencies are satisfied, before allowing your defined code to execute (which is why you pass functions to require and define, so require.js can execute them when it's ready).
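To make the contrast concrete, here is a small illustrative RequireJS-style (AMD) module; the module names and the jquery dependency are just examples. Both define and require take callbacks precisely because the dependencies are loaded asynchronously:

// my-module.js - an AMD module; the factory callback only runs once 'jquery' has loaded
define(['jquery'], function ($) {
  return {
    greet: function (name) {
      $('body').append('<p>Hello, ' + name + '</p>');
    }
  };
});

// main.js - consuming code also waits for its dependencies before executing
require(['my-module'], function (myModule) {
  myModule.greet('world');
});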
The biggest reason not to want to bundle every time you make a change in development is just speed of iteration. Some things you can do (with browserify) to improve performance (that is, speed of bundling) are:
Don't uglify your code during development. You can just bundle it using browserify (make sure you use -d, for sourcemaps) without uglifying/minifying it, that should speed up the bundle performance a bit (for larger projects, anyway).
Split up your modules a little bit. Modules that don't have direct dependencies on one another don't have to be built at the same time. You can include different modules in your application using multiple script tags, or you can concatenate browserify bundle files together. You could absolutely set up some grunt tasks to watch your code for changes and only compile the modules which contained the change. This cuts out a lot of wasted CPU cycles, since browserify won't have to parse and transform multiple modules, just the ones that changed. From there you can re-concatenate into one big bundle, or just stick with the multiple bundle includes on the page.
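As one possible sketch of that watch-and-rebuild idea (assuming the browserify and watchify packages are installed; paths are examples), you can drive browserify from a small Node script so only changed modules get re-parsed on each save:

// watch.js - rebuild the bundle on file changes; only changed modules are re-parsed
var browserify = require('browserify');
var watchify = require('watchify');
var fs = require('fs');

var b = browserify({
  entries: ['./app/app.js'],
  debug: true,        // equivalent to -d, emits sourcemaps
  cache: {},          // caches required by watchify
  packageCache: {},
  plugin: [watchify]
});

function bundle() {
  b.bundle()
    .on('error', console.error)
    .pipe(fs.createWriteStream('./app/bundle.js'));
}

b.on('update', bundle); // fired whenever a watched source file changes
bundle();               // initial build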
If I have a node.js application that is filled with many require statements, how can I compile this into a single .js file? I'd have to manually resolve the require statements and ensure that the classes are loaded in the correct order. Is there some tool that does this?
Let me clarify.
The code that is being run on node.js is not node specific. The only thing I'm doing that doesn't have a direct browser equivalent is using require, which is why I'm asking. It is not using any of the node libraries.
You can use webpack with target: 'node'; it will inline all required modules and export everything as a single, standalone, one-file Node.js module.
https://webpack.js.org/configuration/target/#root
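A minimal config along those lines might look like this sketch (the entry and output names are examples, not part of the webpack docs):

// webpack.config.js - bundle a Node app into one standalone file (illustrative names)
const path = require('path');

module.exports = {
  target: 'node',                // don't bundle Node built-ins like fs/path, emit Node-flavoured output
  mode: 'production',
  entry: './src/server.js',
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'server.bundle.js',
    libraryTarget: 'commonjs2',  // expose the entry's module.exports from the bundle
  },
};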
2021 edit: There are now other solutions you could investigate. Namely:
https://esbuild.github.io
https://github.com/huozhi/bunchee
Try below:
npm i -g @vercel/ncc
ncc build app.ts -o dist
see details here https://stackoverflow.com/a/65317389/1979406
If you want to send common code to the browser I would personally recommend something like brequire or requireJS which can "compile" your nodeJS source into asynchronously loading code whilst maintaining the order.
For an actual compiler into a single file you might get away with one for requireJS but I would not trust it with large projects with high complexity and edge-cases.
It shouldn't be too hard to write a file, like the package.json that npm uses, stating the order in which the files should occur in your packaging. This way it's your responsibility to make sure everything is compacted in the correct order; you can then write a simplistic Node application that reads your package.json file and uses file IO to create your compiled script.
Automatically generating the order in which files should be packaged requires building up a dependency tree and doing lots of file parsing. It should be possible but it will probably crash on circular dependencies. I don't know of any libraries out there to do this for you.
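For what it's worth, a simplistic version of that packaging script could look like the sketch below; the bundleOrder field is a made-up manifest entry, not something npm understands:

// build.js - concatenate files in a manually specified order (illustrative, not production-grade)
var fs = require('fs');

var manifest = JSON.parse(fs.readFileSync('package.json', 'utf8'));
// assumes you added a custom field such as "bundleOrder": ["src/a.js", "src/b.js", "src/main.js"]
var parts = manifest.bundleOrder.map(function (file) {
  return fs.readFileSync(file, 'utf8');
});

fs.writeFileSync('compiled.js', parts.join('\n;\n'));
console.log('Wrote compiled.js from ' + parts.length + ' files');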
Do NOT use requireJS if you value your sanity. I've seen it used in a largish project and it was an absolute disaster ... maybe the worst technical choice made at that company. RequireJS is designed to run in-browser and to asynchronously and recursively load JS dependencies. That is a TERRIBLE idea. Browsers suck at loading lots and lots of little files over the network; every single doc on web performance will tell you this. So you'll very very quickly end up needing a solution to smash your JS files together ... at which point, what's the point of having an in-browser dependency resolution mechanism? And even though your production site will be smashed into a single JS file, with requireJS, your code must constantly assume that any dependency might or might not be loaded yet; in a complex project, this leads to thousands of async load barriers wrapping every interaction point between modules. At my last company, we had some places where the closure stack was 12+ levels deep. All that "if loaded yet" logic makes your code more complex and harder to work with. It also bloats the code increasing the number of bytes sent to the client. Plus, the client has to load the requireJS library itself, which burns another 14.4k. The size alone should tell you something about the level of feature creep in the requireJS project. For comparison, the entire underscore.js toolkit is only 4k.
What you want is a compile-time step for smashing JS together, not a heavyweight framework that will run in the browser....
You should check out https://github.com/substack/node-browserify
Browserify does exactly what you are asking for .... combines multiple NPM modules into a single JS file for distribution to the browser. The consolidated code is functionally identical to the original code, and the overhead is low (approx 4k + 140 bytes per additional file, including the "require('file')" line). If you are picky, you can cut out most of that 4k, which provides wrappers to emulate common node.js globals in the browser (eg "process.nextTick()").
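For a concrete picture (file names are illustrative), the input is ordinary CommonJS code; running browserify main.js > bundle.js inlines both files into one browser-ready script in which require() resolves synchronously:

// greet.js - a plain CommonJS module, exactly as you would write it for node
module.exports = function greet(name) {
  return 'Hello, ' + name;
};

// main.js - the entry point handed to browserify
var greet = require('./greet.js');
document.body.textContent = greet('browser');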
I want to write an HttpHandler that compiles CoffeeScript code on-the-fly and sends the resulting JavaScript code. I have tried MS JScript and IronJS without success. I don't want to use Rhino because the Java dependency would make it too difficult to distribute.
How can CoffeeScript be compiled from .NET?
CoffeeScript-dotnet
Command line tool for compiling CoffeeScript. Includes a file system watcher to automatically recompile CoffeeScripts when they change. Roughly equivalent to the coffee-script node package for linux / mac.
CoffeeSharp
Includes a command line tool similar to CoffeeScript-dotnet as well as an HTTP handler that compiles CoffeeScripts when requested from an ASP.NET site.
SassAndCoffeeScript
Library for ASP.NET MVC that compiles Sass and CoffeeScript files on request. Also supports minification and combination.
Manually Compile With IronJS
IronJS is a .NET javascript interpreter that can successfully load the CoffeeScript compiler and compile CoffeeScript.
Manually Compile With Node.js
Get the node binaries and add the bin directory to your path. Write a node.js script to load the CoffeeScript compiler and your CoffeeScript files and save the compiled javascript.
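A sketch of such a script, assuming the coffee-script package from npm is installed and using made-up file names:

// compile.js - compile a CoffeeScript file to JavaScript with the coffee-script npm package
var fs = require('fs');
var coffee = require('coffee-script');

var source = fs.readFileSync('app.coffee', 'utf8');
var js = coffee.compile(source);   // throws on syntax errors

fs.writeFileSync('app.js', js);
console.log('Compiled app.coffee -> app.js');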
CoffeeScript is now fully supported by Chirpy:
http://chirpy.codeplex.com/
You specifically said that you wanted to write a runtime compiler, so this may not be exactly what you are looking for, but if the main point is to have a way to generate the javascript result, the Mindscape Web Workbench is interesting. It is a free extension for Visual Studio.NET 2010 and available in the Extension Manager. It gives Intellisense, syntax highlighting and compiles to JS as you write. I am just getting started using it but looks promising. Scott Hanselman talks about it here. It also supports LESS and Sass.
I've managed to compile CoffeeScript from .NET using IKVM, jcoffeescript and Rhino. It was straightforward, except that the JCoffeeScriptCompiler constructor overload without parameters didn't work. It ran OK with a java.util.Collections.EMPTY_LIST as parameter.
This is how I did it:
Download IKVM, jcoffeescript and Rhino.
Run ikvmc against js.jar, creating js.dll.
Run ikvmc against the jcoffeescript jar.
Add a reference to the jcoffeescript dll in Visual Studio. More references may be needed, but you will be warned about those.
Run new org.jcoffeescript.JCoffeeScriptCompiler(java.util.Collections.EMPTY_LIST).compile() in your code.
The next step would be to create a build task and/or an HTTP handler.
Check out the new coffeescript-dotnet project, which uses the Jurassic JavaScript implementation.
Since the CoffeeScript compiler now runs on Internet Explorer, after a couple of recent tweaks, it should be good to go within other MS-flavors of JavaScript as well. Try including extras/coffee-script.js from the latest version, and you should be good to go with CoffeeScript.compile(code).
I tried running the bundled extras/coffee-script.js through Windows Script Host (wscript) and it didn't report any issues. I then added this line:
WScript.Echo(CoffeeScript.compile('a: 1'));
at the end of the file, ran it through wscript again, and it printed the resulting JavaScript correctly.
Are you using COM objects? Can you share some more of the code responsible for initialising the MScript object reference?
CoffeeScript in Visual Studio 2010
It's Chirpy's fork (Chirpy is a tool for mashing, minifying, and validating JavaScript, stylesheet, and dotLess files).
"OK, I think I got it working on my fork, based mostly on other peoples' work. Check it out:
http://chirpy.codeplex.com/SourceControl/network/Forks/Domenic/CoffeeScriptFixes"
from http://chirpy.codeplex.com/workitem/48
I don't have a direct answer, (I hope you find one), but maybe take a look at the following to see how it might be done.
Jint - JavaScript interpreter for .NET
Using IKVM to compile Rhino would get rid of the Java runtime requirement.
jcoffeescript. I haven't looked at jcoffeescript, but I think it depends on JRuby and Rhino. You could possibly IKVM.NET this as well.
IronJS now supports CoffeeScript and is generally faster than the other .NET JS engines:
I have a blog post about wiring the two together here:
http://otac0n.com/blog/2011/06/29/CoffeeDemo-A-Simple-Demo-Of-IronJS-Using-CoffeeScript.aspx
My main editor is VS 2010 and I love the Workbench extension. It's nice that it auto-compiles to JS every time you hit save on your .coffee file, and it also introduces you to Sass, which I had read about but never got around to.
They offer a paid version too that will automatically shrink/minify your JS and CSS files as well, since your .coffee and .scss files are your source files anyway.
I'd encourage all VS users to go ahead and install this, especially if you run VS 2010.
The only knock, and someone please correct me or enlighten me, is that .coffee syntax is not highlighted the way, say, HTML, JS or C# code is. It might be because I am using a color scheme from http://studiostyl.es/ and, for the record, http://studiostyl.es/schemes/coffee- just shares the name coffee; there is no special syntax highlighting support for CoffeeScript that I am aware of. But that's no reason not to start using the Workbench addin today!
Okay, the Workbench website claims syntax highlighting, so again maybe it's the studiostyl.es scheme I chose.
I know this is old but I came here to answer a very similar question: How do I get my CoffeeScript to compile using Visual Studio 2012 Express? Note that the free Express version does not allow any extensions so I could not continue to use the Mindscape Workbench extension that had served me well for quite some time.
It turns out to be very easy. Just use NuGet to install the Jurassic-Coffee package and off you go.
One advantage of using this package over mindscape workbench is that you can reference your coffee directly from the script tags in the html. It minifies and caches the compiled JS so you only do work if the requested coffee file has changed.
<head>
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.9.1/jquery.min.js"></script>
<script src="home.coffee"></script>
</head>
The Mindscape Workbench allows you to bundle together different CoffeeScript files, which is very handy for modularising your CoffeeScript. You can also do this with Jurassic Coffee by using the #= require statement to include other coffee module files, for example:
#= require Classes\GridWrapper.coffee
class UsersGrid
  constructor: ->
    @grid = new GridWrapper()
I think having the #= require statement in the coffee file is actually cleaner and clearer than the Mindscape Workbench approach, which kind of hides all this behind its interface so you easily forget what dependencies you have.
Note
There is one potential gotcha. The NuGet installer will put an httpHandlers entry into your web.config that may not be compatible with IIS Express integrated managed pipeline mode.
You might therefore see the following error:
An ASP.NET setting has been detected that does not apply in Integrated
managed pipeline mode.
To fix this just remove the handler shown below.
<system.web>
<!-- other stuff -->
<httpHandlers>
<add type="JurassicCoffee.Web.JurassicCoffeeHttpHandler,JurassicCoffee" validate="false" path="*.coffee" verb="*" />
</httpHandlers>
</system.web>
You could simply write a port of it to C#. I have ported Jison to C# (which is the underlying project that makes CoffeeScript run). I would think it may be a bit different, but both Jison parsers work the same.
I have not yet submitted a pull request back to Jison's main repository, but will be doing so soon.
https://github.com/robertleeplummerjr
Instead of shelling out to CScript you could shell out to Node.js (here are self-contained Windows binaries)
I've tried to compile the extras/coffee-script.js file, unmodified, to jsc, the JScript.NET compiler for .NET, and I got many errors. Here are the noteworthy ones:
'require' is a new reserved word and should not be used as an identifier
'ensure' is a new reserved word and should not be used as an identifier
Objects of type 'Global Object' do not have such a member
Other errors were caused by the errors above.
You might also want to check out jurassic-coffee; it is another CoffeeScript compiler that runs the original compiler inside Jurassic.
It features sprocket-style "#= require file.coffee" or "#= require file.js" directives, which can be used to keep .coffee files modular and combined right before compilation, as well as to embed .js files.
It sports an HttpHandler with file watchers for .js and .coffee files that keeps track of which .coffee files need to be re-compiled, and passes through the compiled *.js files for the rest.
jurassic-coffee is also available as a Nuget package
https://github.com/creamdog/JurassicCoffee
I've done an HttpHandler that uses Windows Script Host behind the scenes: https://github.com/duncansmart/LessCoffee and works great (it also compiles *.less files).
It's on NuGet: http://nuget.org/List/Packages/LessCoffee
It's based on this simple wrapper: https://github.com/duncansmart/coffeescript-windows
I wrote an interactive shell using V8.
https://github.com/mattn/coffee-script-v8
This works as a single executable file (it doesn't use external files).
It can't use require(), but it's enough to learn CoffeeScript.