Browserify vs Usemin [closed] - javascript

Am I missing out by not using Browserify?
I am a big fan of Yeoman, especially because of how they do things. By that I mean, their opinionated approach using, among other things, usemin and wiredep to handle client-side dependencies, transforms, and bundling.
However, I keep bumping into this one library, Browserify. Also, as of late there has been a lot of hype around another one, Webpack.
Having just read the latest npm blog post about the future of npm and module packaging with a focus on the browser, all of this led me to ask myself: am I missing something here by not using Browserify?
Is it fair to compare something like Browserify, Webpack, or Inject with something like usemin plus wiredep? If so, are there any clear benefits to using one of them?

It's pretty fair to compare these. They all do multiple things, with a lot of overlap between the tools.
The main difference is whether you are using some type of standard module loader, like ES6 modules, RequireJS, etc.
usemin + wiredep work the old-school way: you point them at all the files you want to minify, they smash them all together, and they wire the result up to a script tag.
The others read your imports/requires and track down the code you are actually using, then smash that together. This opens up a ton of ways to optimize what code actually gets included compared to usemin (dead-code elimination, lazy loading).
In short, if you are using a module loader like require, then yes, you are missing out.
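To make the contrast concrete, here is a rough sketch; the file names, the lodash dependency, and the local module are made up for illustration, not taken from the question.

    // usemin/wiredep: you list every script yourself inside an HTML build block, e.g.
    //   <!-- build:js scripts/app.min.js --> ...individual script tags... <!-- endbuild -->
    // and the build step concatenates/minifies exactly what you listed.

    // Browserify/Webpack: you only declare an entry point and the bundler follows
    // the require/import graph from there.

    // main.js (entry point)
    var _ = require('lodash');            // resolved from node_modules
    var greet = require('./lib/greet');   // hypothetical local module

    console.log(greet(_.capitalize('world')));

    // Build it on the command line:
    //   browserify main.js -o bundle.js
    // Only modules reachable from main.js end up in bundle.js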

Related

Should I run my tests against production build transforms (i.e. Babel)? [closed]

A coworker of mine recently set up testing in a new project (a JS library) where a transform step hooks into the Babel config used by Webpack in the production build config.
For reference, this is the setting used with Jest: https://jestjs.io/docs/en/configuration.html#transform-object-string-string
The production build targets ES5, while our CI is on Node 10 and up. This means that, for all of our tests, the source code is run through Babel transforms it doesn't actually need. Mind you, our source code is regular ES2016 JavaScript, nothing too fancy. The only transform really required might be the ES6 import syntax.
My gut reaction was that this was quite wasteful and unnecessarily couples the tests to the production build config. But my coworker's justification was that he wants to make sure that the tests run against the same artifacts that users will be using.
That makes a lot of sense to me, but I am not sure what the right answer is. What are the pros and cons of each approach? What are the dangers of running your tests against the production build transforms?
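To make the setup concrete, here is roughly what the two options look like; the file names and exact options below are simplified assumptions, not our actual config.

    // jest.config.js -- Jest hands matching files to babel-jest for transformation
    module.exports = {
      transform: {
        '^.+\\.js$': 'babel-jest',
      },
    };

    // babel.config.js -- the production build targets ES5, while the "test" env
    // only transpiles what the current Node version is missing (essentially just
    // the ES module syntax in our case).
    module.exports = {
      presets: [['@babel/preset-env', { targets: '> 0.5%, not dead' }]],
      env: {
        test: {
          presets: [['@babel/preset-env', { targets: { node: 'current' } }]],
        },
      },
    };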

Is it incorrect to say a JS module bundler (eg. Webpack) is compiling? [closed]

I know a lot of people use the word "compile" quite loosely and interchangeably, but I was hoping someone could explain to me, like I'm 5, whether it's technically incorrect to call a JS module bundler (e.g. Webpack) a compiler or a build tool. I often hear things like "you have to compile your JS in order to update your bundle".
Thanks in advance.
It's definitely a build tool, and one which can be automated. One of its main use cases is to bundle various JavaScript sources into one or several JavaScript bundles. When the sources are also transformed along the way, that step is generally referred to as 'transpiling', which basically means the output is the same language as the input, i.e. JavaScript in, JavaScript out. Compiling is generally the act of turning source code into machine language, or IL. Webpack can of course also bundle other things, which is why on their own webpage they refer to it as a bundler.
In a colloquial sense, people often mean 'compile' to be the same as 'build', in the sense that you run your build tool.
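As a rough illustration of the bundling side of that, here is a minimal config sketch; the entry and output paths are assumptions, not anything Webpack requires.

    // webpack.config.js -- one entry, one output bundle
    const path = require('path');

    module.exports = {
      entry: './src/index.js',
      output: {
        filename: 'bundle.js',
        path: path.resolve(__dirname, 'dist'),
      },
      // Nothing here produces machine code; loaders (e.g. babel-loader) may
      // transpile, but the output is still JavaScript.
    };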
Webpack is a build tool.
Maybe you have heard something like "some node package could not be compiled for your OS", as with the node-sass module. That module is compiled from source code, which is the stricter sense of "compile".

Should I include my project source files in npm distribution? [closed]

Several projects include their source code in the npm distribution package bundle. For instance, React includes its unminified/unbuilt JavaScript files in a lib folder, but it also has a dist folder with the built files.
Is this a good practice?
On the downside, I think it increases the time the package takes to download and the disk space it consumes.
(That's why I usually add source code folders to the .npmignore file.)
But I ask myself: why do so many libraries do this? What are the advantages?
I'm not sure if this question really falls under something that's asked here on SO, mostly because it's opinion-based and could be more of a discussion. But here are my 2 cents anyway:
I believe most of these libraries add their source code (partially because they're open source) to help with debugging. It is typically (but not always) bundled with a .map file as well. Conveniently, there's a post that explains what a map file is.
Think about it like this: anyone who is using your distribution will really only need to install it once; they will probably not be downloading it every time they want to use it, just when they first install it or when they want to cleanly deploy their project.
Another thing to think about is: how large is your distribution? Will it really be so big that it slows down installation?
As for space, a few MB will be negligible on pretty much any modern machine.
I personally think that it's good practice to include the source code as well, I like to know how libraries do what they do, and I like being able to have the option to look into why my code may cause the library to throw errors. But I'm sure there are good reasons not to as well.
tl;dr
They do it to help developers debug
Unless your project really takes a long time to install, don't worry about it
Unless your project is super big, don't worry about it
As a dev I like it when projects include it, but "good practice" is quite opinionated and it depends on the situation
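On the .npmignore mechanics mentioned above, here is a rough sketch of the two ways to control what ends up in the published tarball; the field values and paths are made up for illustration, not taken from React or any real package.

    // package.json, shown here as an equivalent JS object for readability:
    module.exports = {
      name: 'my-lib',
      version: '1.0.0',
      main: 'lib/index.js',        // unbuilt source entry, like React's lib/
      files: ['lib', 'dist'],      // whitelist: only these folders get published
    };

    // The .npmignore alternative is a blacklist instead:
    //   test/
    //   docs/
    //   src/    <- add this if you do NOT want to ship the source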

Angular 2: why to use NPM manager instead of CDN references? [closed]

Angular 2 looks better and simpler than Angular; however, I have a problem using npm - it is not allowed at my work. But the bigger question I have is: why do we need npm at all?
I have used Angular with CDN versions, which was always claimed to be better than a local copy (better caching). So what is the advantage, if any, of using the npm package manager over CDN references? Why grow the local size of a project?
Today I see that web development uses npm everywhere.
I want to understand WHY web development all of a sudden started moving toward local resources instead of common, online resources.
I am looking for convincing explanations and good articles/blogs on why to choose one over the other.
One of the benefits of Angular is that the framework is structured in a way that allows you to tailor the application bundle to your specific application needs.
This is not possible with a one-size-fits-all IIFE download from a CDN.
If you look at the Angular npm packages, you will see that they consist of a number of smaller modules that make up the framework.
Using a technique called "tree shaking", your bundler can run static analysis on your code's dependencies and create a bundle that only includes referenced modules. This can drastically reduce the bundle size.
Here is some more info about tree shaking:
http://www.syntaxsuccess.com/viewarticle/tree-shaking-in-javascript
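A rough sketch of the idea, with made-up file names; the bundler only keeps what your code actually reaches.

    // math.js -- two exports, only one of which is ever imported
    export function used()   { return 1; }
    export function unused() { return 2; }  // never referenced -> dropped by tree shaking

    // main.js -- the bundler starts here and follows the imports
    import { used } from './math.js';
    console.log(used());

    // A CDN <script> reference, by contrast, downloads the whole prebuilt IIFE
    // bundle whether or not your app touches every part of it.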
Mainly because a modern web app will use some kind of dependency or module loader, like RequireJS or (in the case of Angular 2) SystemJS, or CommonJS. CDN sources make that more complicated, since each source asset requires a new HTTP connection, from a different domain at that (cross-origin issues).

How do projects like d3 get all of their source into one large javascript file? [closed]

D3 has a ton of source code, but when they release they only release one long JavaScript file. How do they get all of the source into that JavaScript file? Is there a standard way to do this?
d3 is open source, so you can see exactly how it is done.
In this case, they use a Makefile that invokes the smash node package to concatenate the files. It appears to be a custom solution (the author of that module is also the primary developer of d3).
Others use different techniques. I prefer writing small scripts and simply concatenating the files together.
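If you want to roll that by hand, a tiny concatenation script might look like the sketch below; the source file names and output path are assumptions for illustration.

    // concat.js -- naive bundling: read each source file in order and join them.
    // This only works when the pieces don't need anything smarter than load order.
    const fs = require('fs');

    const sources = ['src/core.js', 'src/scale.js', 'src/axis.js']; // assumed names
    const banner = '// my-library v0.0.1\n';

    fs.mkdirSync('dist', { recursive: true });
    const bundle = banner + sources.map(f => fs.readFileSync(f, 'utf8')).join('\n');
    fs.writeFileSync('dist/my-library.js', bundle);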
There are a number of tools you can use to compress and obfuscate JavaScript code. One of the best is Google's Closure Compiler. With such tools, your code generally has to abide by certain conventions in order to be compiled correctly and without introducing new errors. Closure provides a linter tool to check your syntax and recommend changes. The Closure Compiler is a command-line tool, so you could concatenate your files and pipe them to the compiler for compression, as described here: Compress all file .js with Google Closure Compiler Application in one File
Other tools are available as well, such as RequireJS, which provides a JS optimizer that can compress your code and offers a number of other features, like asynchronous loading.
What is becoming the standard way of doing this is to use Grunt and the Grunt concat plugin.
Grunt: http://gruntjs.com/
Grunt concat plugin: https://github.com/gruntjs/grunt-contrib-concat
Note: D3 uses a Makefile, which might be historical; Grunt is a simpler option in my opinion.
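A minimal Gruntfile sketch for that plugin; the source glob and output path are assumptions.

    // Gruntfile.js -- concatenate all sources into one distributable file
    module.exports = function (grunt) {
      grunt.initConfig({
        concat: {
          dist: {
            src: ['src/**/*.js'],
            dest: 'dist/library.js',
          },
        },
      });

      grunt.loadNpmTasks('grunt-contrib-concat');
      grunt.registerTask('default', ['concat']);
    };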
