I'm building a library, and plan to use parts of other libraries (entire files or arbitrary lines) in my code. I would also like fixes made to those other libraries to be reflected in my library.
I could just add an entire library (script tag, AMD, etc.) and use it, but I don't want to pull in the whole bulk of another library for my very small one. One of these libraries is Modernizr, and I'll be using a dozen checks at most.
I could just copy-paste the implementation from one library into mine. However, when the library I need is updated, this would mean copy-pasting all over again.
I read about Git submodules, where a subfolder can contain a sub-project. This sounds promising: a build script could extract parts of the other library and put them into my code. However, the other library could have a different code structure from mine, which would require manual editing and defeat the purpose of some of those steps.
I haven't gone very deep into automation, but I have basic experience with makefiles. How would one go about doing this kind of integration?
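For illustration, the extraction step being described could be a small Node script that pulls hand-picked files out of a submodule checkout and concatenates them; all the paths below are placeholders, not the actual layout of Modernizr or any real library.

    // build.js - a sketch only, assuming the other library is checked out as a
    // git submodule under vendor/; all paths here are placeholders.
    var fs = require('fs');

    // Hand-picked files from the submodule checkout.
    var parts = [
        'vendor/otherlib/src/core.js',
        'vendor/otherlib/src/feature-check.js'
    ];

    var banner = '/* extracted from vendor/otherlib - see its LICENSE file */\n';
    var bundle = parts.map(function (p) { return fs.readFileSync(p, 'utf8'); }).join('\n');

    // Assumes dist/ already exists.
    fs.writeFileSync('dist/vendor-parts.js', banner + bundle);

Updating the other library then comes down to updating the submodule and re-running the script, which only stays painless if the files you pick are genuinely self-contained.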
It's not generally a good idea to use parts of libraries that don't pre-package those parts for you. The correct way to do it depends on each library and how it is meant to be built/compiled. There may also be license considerations, which vary with the library's license; some require that you keep the license notice for the part of the library you use.
[Edit]
Would it be possible to include the other libraries in their entirety and then run some sort of minification over your library to keep the size down?
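If you go that route, the minification step can be as small as a single Grunt task; this is only a sketch, and the file paths are placeholders.

    // Gruntfile.js - a minimal sketch using grunt-contrib-uglify; paths are placeholders.
    module.exports = function (grunt) {
        grunt.initConfig({
            uglify: {
                dist: {
                    files: {
                        // bundle the third-party code together with your own, minified
                        'dist/mylib.min.js': ['vendor/otherlib.js', 'src/mylib.js']
                    }
                }
            }
        });

        grunt.loadNpmTasks('grunt-contrib-uglify');
        grunt.registerTask('default', ['uglify']);
    };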
Related
I have an Angular project that uses a bunch of JavaScript libraries, starting with jQuery and going through modal forms, tooltips and many more, mostly from third-party providers. The thing is, even though my Angular website uses these libraries, it does not make full use of them; yet when the dist files are built, styles.xx.css and main.js come out as quite big files containing all of these libraries and styles.
So I was thinking there must be a way to include in the final distribution only the code that is actually used by the website, rather than the complete libraries with both their used and unused code. There are many features in those libraries that the website never touches, but the files are big, which makes it difficult to just go in and remove code by hand.
If there were some sort of code-coverage test I could run on the complete website, just to mark all the code that is actually used and discard the unused code from the dist compilation, that would be awesome. It would no doubt be a very efficient way to put production builds on a diet for any website.
Does anyone know if something like this exists?
You can certainly think of:
1. Implementing lazy loading -> helps reduce the size of the main files; only smaller chunks are produced (see the route sketch after this list)
2. Going with a modular architecture
3. Importing each package as a provider only for the particular modules that need it
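As a rough sketch of the lazy-loading point: a feature area is split into its own chunk through the router configuration, so its code (and the libraries only it uses) is fetched the first time the route is visited. The 'reports' path and ReportsModule are placeholder names, not part of your project.

    // app-routing (fragment) - lazy loading a feature area; names are placeholders.
    const routes = [
        {
            path: 'reports',
            loadChildren: function () {
                return import('./reports/reports.module').then(function (m) { return m.ReportsModule; });
            }
        }
    ];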
I've been using a grunt file to concatenate all my JS into a single file which is then sent to the client. What advantage do I have in using require calls then? The dependencies are inherent from the concatenation order and I don't have to muddy all my JS with extra code and another third-party library.
Further, backbone models (for example) clearly state their inheritance in their definitions. Not to mention that they simply wouldn't work if their dependencies weren't included anyway.
Also, wouldn't maintenance be easier if all comments related to dependencies were in one place (the grunt file) to prevent human error and having to open every JS file to understand its dependencies?
EDIT
My (ordered) file list looks something like:
....
files: [
    "js/somelib.js",
    "js/somelib2.js",
    "js/somelib3.js",
    "js/models.js",
    "js/views.js",
    "js/controllers.js",
    "js/main.js"
], ...
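For reference, a sketch of the grunt-contrib-concat setup such a list typically sits in; the destination path is a placeholder.

    // Gruntfile.js - sketch of a concat task; dest is a placeholder.
    module.exports = function (grunt) {
        grunt.initConfig({
            concat: {
                dist: {
                    src: [
                        "js/somelib.js",
                        "js/somelib2.js",
                        "js/somelib3.js",
                        "js/models.js",
                        "js/views.js",
                        "js/controllers.js",
                        "js/main.js"
                    ],
                    dest: "dist/app.js"   // concatenated in the order listed above
                }
            }
        });

        grunt.loadNpmTasks('grunt-contrib-concat');
        grunt.registerTask('default', ['concat']);
    };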
So perhaps RequireJS isn't worth it for small projects anyway.
Using require.js allows you to break each part of your application down into reusable modules (AMD) and to manage those dependencies easily. It is not easy to manage dependencies in a JavaScript application with 100 classes, for example.
Also, if you don't want all the overhead of require, check this out (developed by the same guy who created require.js): https://github.com/jrburke/almond
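As a small sketch of what that looks like, an AMD module declares its own dependencies instead of relying on concatenation order; the module IDs here are examples only.

    // js/views.js as an AMD module - the dependency list replaces
    // "somelib must be concatenated before views.js".
    define(['jquery', 'backbone'], function ($, Backbone) {
        var ItemView = Backbone.View.extend({
            tagName: 'li',
            render: function () {
                this.$el.text(this.model.get('name'));
                return this;
            }
        });
        return ItemView;
    });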
The answer depends on the size of your app and the end use case.
A single site.min.js payload for the front end (client) generally suits small file sizes and simple architectures (one file generated from maybe ten).
Back-end (server) apps are usually much bigger and more complicated, and therefore may warrant another tool to help manage large code libraries and dependencies (50 files, for example).
In general, RequireJS is worthwhile, but only if you have many files and dependencies. An alternative for use in the client would be almond. Again, the need (many files and dependencies) must warrant using a tool like this.
The answer from orourkedd is also worth reading.
So my question is basically a clone of this one, except that the proposed answer uses .NET technology and I'm working on Linux.
Here is a summary:
I'm working with HTML5-based slides for presentations. These slides are built like any other website, with subfolders containing resources. I'm looking for a way to convert these slides into a standalone file so they can be shared easily.
This basically means replacing all images with base64 data URIs and the JS/CSS imports with inline plain text.
I'm also using require.js, so replacing the JavaScript imports could be a bit trickier, but that is a problem for later.
I'm not using MHTML because it's not really supported by browsers.
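A minimal sketch of the image half of that, using nothing but Node's standard library; the file names and the regex are simplifications, not a robust HTML parser.

    // inline-images.js - replace local <img src="..."> references with data: URIs.
    // Usage (placeholder paths): node inline-images.js slides/index.html > standalone.html
    var fs = require('fs');
    var path = require('path');

    var htmlFile = process.argv[2];
    var baseDir = path.dirname(htmlFile);
    var html = fs.readFileSync(htmlFile, 'utf8');

    var inlined = html.replace(/src="([^"]+\.(png|jpg|gif|svg))"/g, function (match, src, ext) {
        var file = path.join(baseDir, src);
        if (!fs.existsSync(file)) return match;   // leave remote/missing references alone
        var mime = ext === 'svg' ? 'image/svg+xml'
                 : ext === 'jpg' ? 'image/jpeg'
                 : 'image/' + ext;
        return 'src="data:' + mime + ';base64,' + fs.readFileSync(file).toString('base64') + '"';
    });

    process.stdout.write(inlined);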
Try Gulp.js or Grunt.js, which operate on files and each have plenty of plugins. I personally prefer Gulp because of its stream-based model; it's fast and flexible, but you may find Grunt simpler or (very likely) find an appropriate plugin faster. Both are Node.js utilities that accept configuration files written in JavaScript, so you don't have to use Java or any unconventional technology for this task.
You may start by reading an introductory article about Gulp, then search for available Gulp plugins by one of the keywords: inline, asset, minify, etc.
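The gulpfile for this tends to be only a few lines. The plugin name below is an assumption, just one example of an asset-inlining plugin; whichever one you find with the keywords above slots in the same way.

    // gulpfile.js - a minimal sketch; gulp-inline-source is an assumption here.
    var gulp = require('gulp');
    var inlinesource = require('gulp-inline-source');

    gulp.task('standalone', function () {
        return gulp.src('slides/*.html')
            .pipe(inlinesource())        // pull linked scripts, styles and images into the HTML
            .pipe(gulp.dest('dist'));
    });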
Good luck with workflow optimization!
Use the single-file-cli which does what you need.
We have a large web project where we need components that can talk to each other and that can be put into a central repository of components shared across different projects.
We are using RequireJS and Backbone for modular development. I went through the different boilerplates available for Backbone and RequireJS, but none matched my requirements, so I created the following directory structure. It can be explained as follows.
resources
|---custom-components
|   |---mycomponent
|   |   |---js
|   |   |   |---views
|   |   |   |---models
|   |   |   |---collections
|   |   |---css
|   |   |---templates
|   |   |---mycomponent.js
|   |---mycomponent2
|   |   |---js
|   |   |   |---views
|   |   |   |---models
|   |   |   |---collections
|   |   |---css
|   |   |---templates
|   |   |---mycomponent2.js
|---libraries
|   |---backbone
|   |---underscore
|   |---jquery
|   |---jquery-ui
|---jqueryplugins
|   |---jcarouselite
|---thirdpartyplugins
|---page-js
|   |---mypage.js
|   |---mypage2.js
The resources directory will contain all the resources. Under it are the directories shown above.
libraries, jqueryplugins and thirdpartyplugins are, obviously, exactly what their names say.
The page-js directory will contain the actual main JS files, which are referenced from our HTML files via the RequireJS data-main attribute.
custom-components is where all the widgets we create will reside. As you can see, each component has a JS file with the same name as the component, which is the entry point of that widget. Each component directory also has directories for JS, CSS and templates. Templates and CSS will be loaded by the text plugin and the CSS plugin, respectively. The js directory will contain all the Backbone code that makes the widget work.
Custom components will be requested by the main JS residing in page-js.
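A sketch of what such an entry point might look like; the module IDs, the template path and the create() contract are placeholders, not a prescribed API.

    // mycomponent.js - sketch of a widget entry point; all names are placeholders.
    define([
        'jquery',
        'underscore',
        'backbone',
        'text!custom-components/mycomponent/templates/main.html',
        'css!custom-components/mycomponent/css/main'
    ], function ($, _, Backbone, mainTemplate) {
        var MyComponentView = Backbone.View.extend({
            template: _.template(mainTemplate),
            render: function () {
                this.$el.html(this.template({}));
                return this;
            }
        });

        // The page-level main JS in page-js requests this module and creates the widget.
        return {
            create: function (options) {
                return new MyComponentView(options).render();
            }
        };
    });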
Coming to what I need.
1. I would like experts to review this directory structure from the perspective of large web projects, where you need to share your widgets with other teams. Suggestions are welcome.
2. Each of my custom components will define a module that has dependencies both inside and outside its package structure. I want to know whether there is any way to use r.js to optimize only my custom widget's dependencies within its package structure, and to let the plugins and libraries be optimized separately.
3. I am developing a single-page Ajax application, so I will be requesting modules on demand and need to clean up modules and widgets when I no longer need them. Is there any cleanup technique I should be aware of?
About the directory structure
As a directory structure pattern, I highly recommend using the directory structure of CakePHP. It's really robust! I'm running multiple apps (one of them as big as Groupon) and it works like a charm.
You may need to tweak it a little because Cake is a PHP framework and yours is a JavaScript one.
Cake's MVC directory structure is laid out in their documentation.
Note that you can host thousands of apps on a single Cake installation, so if you're interested, what are you waiting for? Go to their site and read their docs.
About the cleanup techniques
Well, here is one of the downsides of JavaScript that I don't like: there is no real way to destroy an OO module as in Java or C++. We don't have anything like C++'s ~ destructors here.
For many years, programmers have used module = null to free the memory held by code that is no longer needed.
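In a Backbone/RequireJS setup, the usual manual version of this looks roughly like the sketch below, assuming widget is a Backbone view instance created earlier; the module ID is a placeholder.

    // Tear down the widget's DOM and event bindings, then drop every reference
    // so the garbage collector can reclaim it.
    widget.remove();        // Backbone.View#remove detaches the element and stops listening
    widget = null;          // no remaining references -> eligible for garbage collection

    // RequireJS 2.x can also forget a module definition, so a later require()
    // would fetch and evaluate it again.
    requirejs.undef('custom-components/mycomponent/mycomponent');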
Take a look at these also:
Can dynamically loaded JavaScript be unloaded?
Loading/unloading Javascript dynamically
How to unload a javascript from an html?
Hope it helps and good luck on designing your app ;D
I'm probably late in answering this, but let me share my views anyway, in case someone else finds them useful.
Your directory structure looks all right. It is always better design to keep your business components self-contained in their own directories. I would not recommend the Cake MVC structure, which breaks the Open/Closed Principle. Also have a look at the directory structure recommended by http://boilerplatejs.org, which is a reference architecture for large-scale JavaScript development.
I don't fully understand the question, but when r.js is run, it optimizes all the JS files it finds in the directory (excludes are possible) and then creates a single script by walking the dependency tree. In production you only need that single script (plus locale files if the i18n plugin is used).
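For question 2 specifically, a per-widget r.js build can keep the shared libraries out of the widget bundle by mapping them to 'empty:'. This is a sketch only; the paths and module names are assumptions based on the structure above.

    // build-mycomponent.js - run with: r.js -o build-mycomponent.js
    ({
        baseUrl: 'resources',
        paths: {
            // 'empty:' tells the optimizer to skip these at build time, so the
            // widget bundle contains only its own code; libraries are loaded
            // and optimized separately.
            jquery: 'empty:',
            underscore: 'empty:',
            backbone: 'empty:'
        },
        name: 'custom-components/mycomponent/mycomponent',
        out: 'build/mycomponent.min.js'
    })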
Read my blog post below. It might give you some hints: http://blog.hasith.net/2012/11/how-much-multi-page-single-page.html
When developing JavaScript code, what are the best practices for maintaining the code in repositories?
For example, suppose I develop a set of useful functions and put them in a script called "sugar.js". In the code repository I put them at c:/codebase/sugar.js.
Now I want to use the script in a web site under development, so I place it at c:\mywebsite\sugar.js (ready for uploading to a server).
Do I keep a copy of sugar.js? What if I fix sugar.js in one location? It won't be synchronized with the other.
What if I build a second web site that also uses sugar.js? Do I take another copy located at, say, c:\mywebsite2\sugar.js?
If you are using something like Visual Studio, you can use NuGet for versioning many of the popular JavaScript frameworks on a per-project basis.
If you are writing in something else, you could try package managers such as npm or JSPkg (http://jspkg.com/JSPkg).
If it is your own library, I would recommend setting up source control and having versioned releases as branches or tags; that way you can keep track of everything. Git and GitHub support this type of thing, and you can set things up so that each version is available as a zipped download.
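One way that looks in practice, if you keep sugar.js in its own repository and tag releases, is to point each site's package.json at a tag; the repository URL and the v1.2.0 tag below are placeholders.

    {
        "dependencies": {
            "sugar": "git+https://github.com/yourname/sugar.git#v1.2.0"
        }
    }

Each site then upgrades deliberately by bumping the tag it depends on, rather than being silently changed when the library moves.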
I would also try to keep each project's JavaScript files separate; that way, any change won't immediately break every site, just the one you recently updated. This advice could go out the window if you are running hundreds of sites and really just need a CDN.