Closed. This question is opinion-based. It is not currently accepting answers.
Closed 6 years ago.
Angular 2 looks better and simpler than Angular 1; however, I have a problem using npm: it is not allowed at my work. But the bigger question I have is why we need npm at all.
I have used Angular with CDN versions, which were always claimed to be better than a local version (better caching), so what is the advantage, if any, of using the npm package manager over CDN references? Why grow the local size of a project?
Today web development uses npm everywhere.
I want to understand why, all of a sudden, web development started to move toward local resources instead of common, online resources.
I am looking for convincing explanations and good articles/blogs on why to choose one over the other.
One of the benefits of Angular is that the framework is structured in a way that allows you to tailor the application bundle to your specific application needs.
This is not possible with a one-size-fits-all IIFE download from a CDN.
If you look at the Angular npm packages you will see that they consist of a number of smaller modules that make up the framework.
Using a technique called "Tree shaking" your bundler can run static analysis on your code dependencies and create a bundle that only includes referenced modules. This can drastically reduce the bundle size.
Here is some more info about tree shaking:
http://www.syntaxsuccess.com/viewarticle/tree-shaking-in-javascript
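To illustrate what the bundler's static analysis keys on, here is a minimal sketch (module and function names are invented for the example):

```javascript
// A tree-shaking bundler (e.g. Rollup, or webpack in production mode)
// relies on ES modules being statically analyzable. Given a module
// with two exports, of which only one is ever imported:
//
//   // math-utils.js
//   export function add(a, b) { return a + b; }
//   export function multiply(a, b) { return a * b; }
//
//   // app.js
//   import { add } from './math-utils.js';
//   console.log(add(2, 3));
//
// Because import/export cannot be created dynamically, the bundler
// can prove `multiply` is unreferenced and drop it from the bundle.
// The same two functions, runnable standalone for illustration:
function add(a, b) { return a + b; }
function multiply(a, b) { return a * b; }

console.log(add(2, 3)); // prints 5
```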
Mainly because a modern web app will use some kind of dependency or module loader, like RequireJS or (in the case of Angular 2) SystemJS, or CommonJS modules. CDN sources make that more complicated, since each source asset requires a new HTTP connection, and from a different domain (cross-origin issues).
Closed. This question needs details or clarity. It is not currently accepting answers.
Closed 5 years ago.
I know a lot of people use the word "compile" quite loosely and interchangeably, but I was hoping someone could explain to me, like I'm 5, whether it's technically incorrect to call a JS module bundler (e.g. webpack) a compiler or a build tool. I often hear things like "you have to compile your JS in order to update your bundle".
Thanks in advance.
It's definitely a build tool, and one which can be automated. One of its main use cases is to bundle various JavaScript sources into one or several JavaScript bundle(s). A related term you will hear is 'transpiling', which means the output is the same language as the input, i.e. JavaScript in, JavaScript out (for example, ES2015 down to ES5); strictly speaking that is a separate step, which webpack delegates to loaders such as Babel. Compiling is generally the act of turning source code into machine language, or an intermediate language (IL). Webpack can of course also bundle other things, which is why on their own webpage they refer to it as a bundler.
In a colloquial sense, people often mean 'compile' to be the same as 'build', in the sense that you run your build tool.
Webpack is a build tool.
Maybe you have heard something like "some node package could not be compiled for your OS", e.g. the node-sass module. That module is compiled from native source code, which is where "compile" is meant literally.
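Since webpack came up as the canonical example of a JS build tool, here is a minimal configuration sketch, assuming webpack 4+ and an entry file at ./src/index.js (paths are hypothetical):

```javascript
// webpack.config.js
const path = require('path');

module.exports = {
  mode: 'production',        // enables minification and tree shaking
  entry: './src/index.js',   // webpack builds the module graph from here
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: 'bundle.js',   // one bundled output file
  },
};
```

Running `webpack` with this file produces dist/bundle.js; no "compilation" to machine code is involved.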
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
Several projects include their source code in the npm distribution package. For instance, React includes its unminified/unbuilt JavaScript files in a lib folder, but it also has a dist folder with the built files.
Is this a good practice?
On the downside, I think it increases the time the package takes to download and its disk consumption.
(that's why I usually add source code folders to the .npmignore file)
But I ask myself: why do so many libraries do so? What are the advantages?
I'm not sure if this question really falls under something that's asked here on SO, mostly because it's opinion based and could be more of a discussion. But here are my 2 cents anyways:
I believe most of these libraries add their source code (partially because they're open source) to help with debugging. They are typically (but not always) bundled with a .map file as well. Conveniently, there's a post that explains what a map file is.
Think about it like this: anyone using your distribution will really only need to install it once; they won't be downloading it every time they use it, just when they first install it or cleanly deploy their project.
Another thing to think about is: how large is your distribution? Will it really be so big that it will slow down installation time?
As for space, a few MB will be negligible on pretty much any modern machine.
I personally think that it's good practice to include the source code as well, I like to know how libraries do what they do, and I like being able to have the option to look into why my code may cause the library to throw errors. But I'm sure there are good reasons not to as well.
tl;dr
They do it to help developers debug
Unless your project really takes a long time to install, don't worry about it
Unless your project is super big, don't worry about it
As a dev I like it when projects include it, but "good practice" is quite opinionated and it depends on the situation
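As the question's .npmignore remark suggests, publishers can also take the opposite, whitelist approach with the `files` field in package.json; a sketch with hypothetical names:

```json
{
  "name": "my-lib",
  "version": "1.0.0",
  "main": "dist/my-lib.js",
  "files": [
    "dist/",
    "lib/"
  ]
}
```

Only the listed folders (plus package.json, README, and LICENSE, which npm always includes) end up in the published tarball; dropping `lib/` from the list would exclude the unbuilt sources.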
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 1 year ago.
Am I missing out on something by not using browserify?
I am a big fan of Yeoman, especially because of how they do things. By that I mean, their opinionated approach using, among other things, usemin and wiredep to handle client-side dependencies, transforms, and bundling.
However, I keep bumping into this one library, Browserify. Also as of late there's been a lot of hype regarding another, Webpack.
Having just read the latest npm blog post about the future of npm and module packaging with the browser in focus, all of this led me to ask myself: am I missing something by not using browserify?
Is it fair to compare something like browserify, webpack, or inject to something like usemin with wiredep? If so, are there any clear benefits to using any of them?
It's pretty fair to compare these. They all do multiple things, with a lot of overlap between tools.
The main difference is whether you are using some type of standard module loader, like ES6 modules, RequireJS, etc.
usemin + wiredep work the old-school way: you point them at all the files you want to minify, they smash them all together, and wire that up to the script tag.
The others read your imports/requires and track down the code you are actually using, then smash that together. There are a ton of ways to optimize what code is actually imported compared to usemin (dead-code elimination, lazy loading).
In short, if you are using a module loader like require, then yes, you are missing out.
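A sketch of the difference: with a module loader, the bundler starts at your entry file and follows require() calls, rather than being handed a file list (file and function names here are hypothetical):

```javascript
// greet.js — an ordinary CommonJS module:
function greet(name) {
  return 'hello, ' + name;
}
module.exports = greet;

// entry.js would contain:
//   const greet = require('./greet');
//   console.log(greet('world'));
//
// browserify/webpack statically follow that require() call and pull
// greet.js into the bundle; any file nothing requires is left out,
// whereas usemin simply concatenates every file you point it at.

console.log(greet('world')); // prints "hello, world"
```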
Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 9 years ago.
D3 has a ton of source code, but each release ships as one long JavaScript file. How do they get all of the source into that JavaScript file? Is there a standard way to do this?
d3 is open source, so you can see exactly how it is done.
In this case, they use a Makefile with the smash node package to concatenate the files. It appears to be a custom solution (given that the author of that module is the primary developer of d3).
Others use different techniques. I prefer writing small scripts and simply concatenating them together
There are a number of tools that you can use to compress and obfuscate JavaScript code. One of the best tools is Google's Closure Compiler. With such tools your code generally has to abide by certain conventions in order to be compiled correctly and without introducing new errors. Closure provides a linter tool to check your syntax and recommend changes. The Closure Compiler is a command-line tool, so you could concatenate your files and pipe them to the compiler for compression, as described here: Compress all file .js with Google Closure Compiler Application in one File
Other tools are available as well, such as Require.JS, which provides a JS optimizer that can compress your code as well as provide a number of other features, like asynchronous loading.
What is becoming the standard way of doing this is to use Grunt and the Grunt concat plugin.
Grunt: http://gruntjs.com/
Grunt concat plugin: https://github.com/gruntjs/grunt-contrib-concat
Note: D3 uses a Makefile, which might be historical, but Grunt is a simpler option in my opinion.
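A minimal Gruntfile sketch using grunt-contrib-concat (assuming your sources live under src/; adjust paths to your project):

```javascript
// Gruntfile.js
module.exports = function (grunt) {
  grunt.initConfig({
    concat: {
      dist: {
        src: ['src/**/*.js'],    // files to concatenate, in glob order
        dest: 'dist/bundle.js',  // single output file, like d3.js
      },
    },
  });

  grunt.loadNpmTasks('grunt-contrib-concat');
  grunt.registerTask('default', ['concat']);
};
```

Running `grunt` then produces dist/bundle.js in one step.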
Closed. This question is opinion-based. It is not currently accepting answers.
Closed 9 years ago.
I have just watched the meteor.js screencast and I'm quite blown away by how easy building a web application with it seems, in terms of live updates and database synchronisation. However, I am not sure of how well it would scale once it's live.
What problems (potential or real) could I have if I decide to build and deploy a web application written on meteor.js?
Well, I would advise you to have a play about with Meteor and make the judgement yourself. It really depends on what you wish to develop.
Certain constraints I have found are:
Meteor comes bundled only with MongoDB. Support for other databases is planned for later releases.
No model/object form binding (on the roadmap).
The package system is not npm (although Meteor is built on Node) and is closed to the community; all Meteor packages are developed by Meteor themselves.
Regarding performance, I found this article helpful.
Here is another link to Meteor's roadmap.
From my experience, I would say the advantages I have found outweigh any disadvantages at the moment.
Having built client projects in Meteor, there are two things I immediately found hindering about the system:
1) No native support for MSSQL/MySQL, or in fact any DB other than MongoDB (which jamin mentioned). That said, it sort of makes sense why this is the case, as a NoSQL solution with an easy-to-use JS API makes sense over a clunky RDBMS. However, there is a plugin called Meteor SQL which supports MySQL at the moment: https://github.com/drorm/meteor-sql
2) No native support for Windows - Meteor is only released on Linux & OS X, meaning us Windows users are out of the loop. There is an unofficial Windows build at http://win.meteor.com but it's stuck at 0.5.9.
I probably wouldn't recommend building full sites out of Meteor yet due to its various instabilities - https://github.com/meteor/meteor/issues - however in a controlled environment it's perfect.
Also bear in mind Meteor has secured $11m in funding - http://venturebeat.com/2012/07/25/meteor-funding/ - meaning it will continue to improve and grow.
A huge problem for application development is that things like validation and translation are missing.
You have to do everything on your own and include and use many external sources.
npm support is not optimal: OK for backend usage, but a hack in the frontend.