What should I use to manage growing number of JavaScript files in my application?
We are building a Django application with several apps. Each app has different functionality and has to be rendered in three different modes (PC, tablet, mobile). There is a lot happening in JavaScript: managing data received from the server, handling user events, injecting HTML snippets, and loading sub-components. Some of the functionality is shared between apps and view modes, but often it makes sense to write specific functions (for example, hover and click events may have to be handled differently on a PC layout vs. a tablet layout), so we are grouping this in files based on app/layout/function.
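For illustration, the split tends to look something like this (a simplified sketch; the module and function names are made up):

    // ui.common.js -- helpers shared by every app and layout
    var UI = UI || {};
    UI.common = {
        show: function (el) { el.style.display = "block"; },
        hide: function (el) { el.style.display = "none"; }
    };

    // ui.app1.pc.handlers.js -- the PC layout reacts to hover
    UI.app1 = UI.app1 || {};
    UI.app1.pc = {
        init: function (panel) {
            panel.addEventListener("mouseover", function () { UI.common.show(panel); });
            panel.addEventListener("mouseout", function () { UI.common.hide(panel); });
        }
    };

    // ui.app1.tablet.js -- the tablet layout reacts to taps instead
    UI.app1.tablet = {
        init: function (panel) {
            panel.addEventListener("click", function () { UI.common.show(panel); });
        }
    };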
Up to a point we were using a flat file structure with naming to differentiate types of files:
ui.common.js
ui.app1.pc.handlers.js
ui.app1.pc.domManipulators.js
ui.app1.tablet.js
ui.app2.pc.js
...
Right now, however, as the number of apps (and corner cases) grows, this approach is fast becoming unusable (we're approaching 20+ files and expecting maybe 40+ by the time we're done), so we are putting everything in directories like so:
js/
    common/
        core1.js
        ajax2.js
    app1/
        tablet.js
        pc.js
    app2/
        mobile.js
    ...
I have been looking at JavaScriptMVC to help with this. While it does offer useful tools, it doesn't seem to have anything that would specifically make managing our giant JavaScript library better. We are expanding our dev team soon, and code maintainability is very important.
Is there something that may make our life easier? Are there any habits/rules of thumb you use in your work that could alleviate this?
Backbone.js is used to organize JavaScript-heavy applications in an MVC-style pattern. It's going to take some learning, but it's definitely something you'll want to look into and learn a bit about even if you don't end up using it.
It's used on quite a few pretty impressive projects, and there are good tutorial sites to learn more from.
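To give a flavour of what that organization looks like, here is a minimal sketch (the model and view names are invented; it assumes a reasonably recent Backbone plus its Underscore/jQuery dependencies are loaded):

    // A model holds the data and the logic around it
    var Item = Backbone.Model.extend({
        defaults: { title: "", read: false }
    });

    // A view owns a piece of the DOM and re-renders when its model changes
    var ItemView = Backbone.View.extend({
        initialize: function () {
            this.model.on("change", this.render, this);
        },
        render: function () {
            this.$el.text(this.model.get("title"));
            return this;
        }
    });

    var view = new ItemView({ model: new Item({ title: "Hello" }) });
    document.body.appendChild(view.render().el);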
Typically, grouping libraries by commonality (like your second example) is preferred. More important, though, is making sure you have namespaced them or otherwise made them unique, so that you are unlikely to get naming collisions with other scripts.
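For example, a single global namespace object keeps each app/layout file on its own branch (a sketch; MYAPP and the helper are illustrative, not from any particular library):

    // One global object for the whole code base
    var MYAPP = MYAPP || {};

    // Ensure MYAPP.app1.pc.handlers (etc.) exists and return it
    MYAPP.namespace = function (path) {
        var parts = path.split("."), current = MYAPP;
        for (var i = 0; i < parts.length; i++) {
            current[parts[i]] = current[parts[i]] || {};
            current = current[parts[i]];
        }
        return current;
    };

    // Each file only ever touches its own branch, so collisions are unlikely
    MYAPP.namespace("app1.pc").handlers = {
        onClick: function (e) { /* ... */ }
    };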
Background
I need to create a potentially very large HTML/JS mobile web app that will be delivered as a mobile web site and natively using Phonegap. I'm currently working to determine the best way to organize the app itself.
The basic plan is to have many modules that will each focus on a different subject of interest. Some of these modules will be very basic (e.g., announcements/news) and some will be very complex (e.g., sports: team players, schedules, video, etc.). There will be a side-drawer navigation that will apply to most pages so users can quickly navigate to a different module. There needs to be the ability to deep-link within modules. These modules will be created by a variety of developers and vendors.
Single Page App
Most of the mobile solutions I see involve single pages, which seems like a bad idea to me in this case, since there is the potential for so much memory use. It also seems like it would be difficult to reconcile hash navigation between modules with hash navigation between sections within modules. Module development would have to be done with the app framework in mind, which limits how things can be done by vendors and developers. On the other hand, things aren't getting loaded as often and everything can easily communicate with everything else.
Multiple Page App
Using multiple pages, it seems like each module could easily be created in whatever technology a vendor was comfortable with (and could do quickly and cheaply). It would cut down on memory use, but also remove the ability for modules to communicate (a feature I don't know is necessary for us at this point). I could see making a JavaScript library every module would use for common handling of various events (like logging errors, navigation, etc.). Navigation between modules would be a new page call, resetting the DOM. Each module could use a single-page design internally if it wished.
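As a rough sketch of that shared library (everything here is hypothetical), it could be as small as one global object that every module's pages include:

    // shell.js -- included by every module, whatever technology it is built with
    var Shell = {
        // one place to log errors so every module reports problems the same way
        logError: function (moduleName, err) {
            if (window.console && console.error) {
                console.error("[" + moduleName + "] " + (err.message || err));
            }
        },
        // navigation between modules is a plain page load, which resets the DOM
        goToModule: function (moduleName) {
            window.location.href = "/modules/" + moduleName + "/";
        }
    };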
Help Me Please
So, is there any common or new knowledge about how things like this should be designed? I'm eager to begin work, but don't want to be rewriting things that may already exist. Do I have any glaring flaws in my reasoning? I'd love to hear from anyone that has insight.
Honestly, if you are considering building any app that you believe will be high volume and of high complexity, you really ought to consider doing native development, or at least use something like Appcelerator where the application will be "compiled" down to native code for better performance. If you are intending to just let any number of developers build their own JavaScript components that may or may not do a good job of managing limited system resources, you are going to run into application performance issues quickly.
On the other hand, if you are just going for proof of concept and don't mind potentially having to greatly refactor your application architecture when and if you get a sufficient level of complexity, then you may want to simply go with the web app approach.
Really, you also need to consider your backend service architecture as much as, or more than, the frontend architecture. That is where you are going to run into problems down the line when trying to integrate offerings from other developers.
I had a similar architectural problem to contend with a couple of years ago. It wasn't mobile, but it wasn't entirely web-based either. The target applications were a mix of web sites and desktop apps, with the potential to go mobile in the future.
The interesting part was that there had been two prior attempts at creating a framework that could be used in a variety of situations. The problem, and the reason both attempts failed, was that the developers saw it as a UX problem space. They approached it using several different technologies, but became mired months down the road because they had made assumptions up front and flown the project by the seat of their pants.
My approach was to eschew all the UI discussion completely and focus on a backend architecture that could be approached from any standpoint. To this end, I created a web service that had data going in both directions, and was ultimately serving a mathematical model. The service is being accessed from a variety of sources using different technologies: Flash, Unity, a Google Earth plugin, and finally, from an unrelated web architecture serving up good ol' HTML.
My advice to you is: don't concentrate on the front-end mapping so much as on getting your back end right. Once you have a data structure in place, you can build outward, and several issues, such as memory management and monolithic app or not (i.e., one page versus many), will almost resolve themselves. Work on creating a great API with lots of good interfaces and you won't fall into a "many chefs" hole. Give a bunch of dispersed developers enough rope, on the other hand, and you'll never find where all the knots are!
The decision whether to ultimately go native API over HTML5-based technology such as Sencha Touch, jQuery Mobile, or Phonegap is an evangelical black hole that will be played out over the coming months and years. Native apps may be more fluid and speedy in some cases, but the investment in resources is something that should be considered. On the other hand, JavaScript developers are lurking around every corner and are not in short supply.
Your first step is to nail down those requirements.
If you're doing this for yourself or for your own company, then nail down how these modules (co-)operate.
If you are doing this for your employer, then somebody there ought to have a clue what they want to see, otherwise, how are you going to build it?
A solution which supports multiple pages, opening and closing modules with no communication is going to require different things than a framework which is responsible for maintaining multiple widgets at the same time, which may or may not communicate through system-calls or services.
There's no way around that -- building services/sandboxing/etc for modules is going to take more work than treating each like a page-change (or making them be literal page-changes).
When you figure out what you want your program to do, start building out an idea of the API you'd like other people to have.
Are you going to provide them with an API for building UI components, or are you going to leave that to their own whims?
Personally, I'd avoid a situation where each module change just replaces iFrames, and then the end-user can do whatever they want in there.
Likewise, I'd avoid situations where you're allowing module-creators to run whatever they'd like in a non-sandboxed environment... It ends poorly for your end-users (or you, in UK court).
But that's not a concern, yet.
First-concern is what does your platform do.
Then figure out what your back-end communication is going to look like, what interfaces you're going to provide to module creators, and how you're going to get data from your end to theirs (an HTTP-based API, REST, or whatever else... but work it out WELL, if you don't already have it).
THEN, when you know what your platform is expected to do, AND you have a backend which can serve all kinds of tasks well, figure out what services you're going to provide to content-creators to make their widgets, to upload/download data from your service, to sandbox their code, and the like.
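To make that concrete, here is a sketch of the kind of narrow interface you might hand to module creators (every name and URL here is invented for illustration):

    // platform.js -- the only object module creators are supposed to touch
    var Platform = (function () {
        var subscribers = {};
        return {
            // data access goes through your API, never straight to the backend
            fetch: function (resource, callback) {
                var xhr = new XMLHttpRequest();
                xhr.open("GET", "/api/" + resource, true);
                xhr.onreadystatechange = function () {
                    if (xhr.readyState === 4 && xhr.status === 200) {
                        callback(JSON.parse(xhr.responseText));
                    }
                };
                xhr.send();
            },
            // simple publish/subscribe so modules never call each other directly
            publish: function (topic, data) {
                var list = subscribers[topic] || [];
                for (var i = 0; i < list.length; i++) { list[i](data); }
            },
            subscribe: function (topic, fn) {
                (subscribers[topic] = subscribers[topic] || []).push(fn);
            }
        };
    }());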
I am building my own JS library.
The idea is that the library should consist of small, independent modules, and some slightly larger utilities that serve mainly to iron out browser differences.
I am having trouble getting anything done, because I cannot decide between staying DRY and being loosely coupled.
An example? Given:
A small library that takes care of generating dom elements from a template
Another one that takes care of duck-typing issues (is_function, is_array...)
And a last one that creates modal boxes. That last one needs:
some type checking
will be creating the modals using only one function from the templating library
My options for the modal box library:
Be 100% DRY and depend on the two other libraries. But that means a user who wants to download only the modal box lib will have to take the two others as well.
Allow users to pass an object of options on initialization that lets them specify the functions needed, defaulting to the ones from the libraries (see the sketch after this list). This is better, but in practice it still means, in 90% of cases, using the provided libraries, as creating functions with the same signature might be cumbersome. Furthermore, it adds complexity to the modal box code.
Be 100% loose and reproduce the functions needed in my modal box library; possibly more efficient because it's more targeted and there is no need to check for edge cases; but any bug will have to be fixed in two places, and my download size increases.
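To make option 2 concrete, roughly this (all names are invented; the defaults point at the bundled libraries):

    // modalbox.js -- option 2: dependencies can be swapped, but sensible defaults exist
    function ModalBox(options) {
        options = options || {};
        this.isFunction = options.isFunction || myTypes.is_function;   // duck-typing lib
        this.render     = options.render     || myTemplates.render;    // templating lib
    }

    ModalBox.prototype.open = function (data) {
        // only one function from the templating library is actually used
        var el = this.render("modal", data);
        document.body.appendChild(el);
        return el;
    };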
So I am wasting time oscillating between the two extremes, refactoring a million times and never being satisfied.
I was going for a more generic question, but then I realized it is really pertaining to JS, because of the size & performance concern as well as the widespread usage.
Is there any known pattern to follow in such cases? What's the accepted way to go about this? Any thoughts are welcome.
[edit:]
This is the only article I found that spells out my concerns. Like the article says,
DRY is important, but so are [...] low coupling and high cohesion. [...] You have to take all [principles] into account and weigh their relative value in each situation
I guess I am not able to weigh their value in this situation.
Personally, I've always taken the view that Loose Coupling refers to creating seams in your code. In classical languages, such as Java, this is achieved by creating Interfaces which hide the concrete implementation. This is a powerful concept as it allows developers to 'unpick the seams' in your application and replace the concrete implementations with mocks and test doubles (or indeed, their own implementation). As JavaScript is a dynamic language developers rely on duck-typing instead of Interfaces: as nothing is frozen, every object becomes a seam in your code and can be replaced.
In direct answer to your question, I think it pays dividends to always aim to decompose and modularize your code into smaller libraries. Not only do you avoid repeating code (not a good idea for a host of reasons), but you encourage re-use by other developers who only want to use parts of your library.
Instead of re-inventing the wheel, why not leverage some of the more popular JavaScript libraries out there; for example, Underscore.js is a lightweight library which provides a rich toolkit for duck-type checks, and Mustache.js may well take care of your templating needs.
Many existing projects already use this approach; for example, Backbone.js depends on Underscore.js and jQuery Mobile depends on jQuery. Tools such as RequireJS make it easy to list and resolve your application's JavaScript dependencies and can even be used to combine all the separate .js files into a single, minified resource.
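For instance (module names are illustrative, not real packages), the modal box could declare its two dependencies as an AMD module, and the RequireJS optimizer would pull them into one file at build time:

    // modalbox.js -- RequireJS resolves the two dependencies before running the factory
    define(["ducktype", "templating"], function (ducktype, templating) {
        function ModalBox(data) {
            this.el = templating.render("modal", data);
        }
        ModalBox.prototype.onClose = function (fn) {
            if (ducktype.is_function(fn)) { this.closeHandler = fn; }
        };
        return ModalBox;
    });

    // a consumer loads it (and, transitively, its dependencies) asynchronously
    require(["modalbox"], function (ModalBox) {
        var box = new ModalBox({ title: "Hello" });
    });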
I like the concept of DRY, but you're right, it has a couple of problems:
1. Your end-user-developers will need to know that they need to download the dependencies of components.
2. Your end-user-developers will need to know that they need to configure the dependencies (i.e., the options to pass in).
To help solve 1, your project website could customise the download on the fly, so the core code is downloaded along with optional components, like the Modernizr download page.
To help solve 2, rather than allowing users to pass in options, use sensible defaults: detect what parts of your packages have been loaded in the browser and automatically tie them up.
This loose coupling could also give you the great advantage of being able to rely on 3rd-party frameworks if the user already has them installed. For example, Selectivizr allows you to use jQuery or Dojo, etc., depending on what the browser has already loaded.
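A rough sketch of that detect-and-default idea (MyTemplates and ModalBox are invented names standing in for your own components):

    // pick sensible defaults by looking at what the page has already loaded
    function detectRenderer() {
        if (window.MyTemplates) {                      // our own optional component
            return window.MyTemplates.render;
        }
        if (window.jQuery) {                           // borrow a 3rd-party lib that's already there
            return function (html) { return window.jQuery(html)[0]; };
        }
        return function (html) {                       // bare-bones fallback
            var div = document.createElement("div");
            div.innerHTML = html;
            return div.firstChild;
        };
    }

    var modal = new ModalBox({ render: detectRenderer() });   // nothing for the user to configure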
Perhaps you could use RequireJS to help solve dependency management. I get the impression it's not really meant to be used by libraries directly, but rather by the end-user-developer... but it could be a nice fit.
I realise my answer doesn't answer your question directly, but perhaps it could help balance out some of the negative points of DRY.
As an ActionScript programmer shifting to JS/jQuery, I often have to author multipage apps targeted mainly at iOS, and I'd like to know the best way to structure such apps.
Most of the time my apps are presentations, where each page has a different behavior (e.g., some popups on page 1, a group of sliders on page 2, some drag-and-drop action on page 3... you get the picture), and more often than not I have to keep track of several variables across different pages.
Right now I handle it like this: I have a group of common functions in a script named my_app.js, while each page has its dedicated pageX.js script to account for its specific duties. I store persistent values through the storage.js library and somehow manage to stick it all together and make it work.
However I recognize that there may be a vast area for improvement to this approach, so I'd like to know how more seasoned developers deal with this situation.
What you've done seems OK for a smallish app, but as another answerer said, I'd look at an MVC architecture. I can heartily recommend Backbone.js; it's pretty lightweight and simple to use.
You could easily make a controller for each type of view that you need (e.g., sliderController, dragDropController, etc.) and then, if you needed to, subclass ('extend') these controllers to be platform-specific (e.g., iPhoneSliderController, iPadSliderController, desktopSliderController, etc.).
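Sketching that out (view names are invented, and Backbone views are playing the role of "controllers" here):

    // the behaviour every platform shares
    var SliderController = Backbone.View.extend({
        events: { "change input": "onSlide" },
        onSlide: function (e) {
            this.model.set("value", e.target.value);   // assumes a model was passed in
        }
    });

    // platform-specific subclasses override only what differs
    var IPadSliderController = SliderController.extend({
        events: { "touchmove input": "onSlide" }
    });

    var DesktopSliderController = SliderController.extend({
        events: { "mousemove input": "onSlide" }
    });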
If I had more info about this app - like the data behind it, what the user is achieving by dragging/sliding - then I might be able to give a more specific layout for the models, views, and controllers you might want. But hopefully this is a good starting point, and if you take a look at the backbone.js documentation, it should give you a good idea if it's appropriate for your app.
The structure you have sounds sensible enough (common JS file complemented by page-specific JS files). It also sounds like you're onto the right lines with storage.
What I would do in your situation is focus on how your code is structured in terms of architecture. Chapter 6 of Stoyan Stefanov's JavaScript Patterns (O'Reilly) would probably be quite enlightening.
I would also explore JS MVC implementations, given that your situation would lend itself well to this methodology (lots of views).
I realise this is only scattered thoughts, but hopefully it might give you some ideas.
Here is how I organize stuff:

in /
    modFOO.php
    modBAR.php
in /js/
    main.js
    resourceloader.php     // a resource loader, so I can load multiple JS files in a single request
in /js/pages/
    modFOO.js
    modBAR.js              // JavaScript for page modBAR
in /css/
    main.css
    resourceloadercss.php  // a resource loader, so I can load multiple CSS files in a single request
in /css/pages/
    modFOO.css
    modBAR.css

With this setup I know exactly where to find stuff and where to put stuff. Based on a filename such as modepic.css, I know exactly where the file goes and what it is (the CSS file for modepic).
I'm not a big fan of the way code is organized in the jQTouch examples I can find. So far, all I've seen are monolithic index.html files, which contain all the separate views for the iPhone app as separate divs.
Are there any examples out there of better organized jqtouch code?
I'm not looking for generic advice - I'd like to see specific examples of differently organized code.
What you're seeing is usually thought of as a feature of jQTouch, not as a negative "monolithic" style. Mobile networks tend to have a large time overhead per HTTP request, so the general idea is to use one request to download multiple small "pages" (as divs) all at once.
Of course, this paradigm may not fit your use case...
Added, re: alternatives: There are lots of mobile frameworks; see a list or Google. With jQTouch, you can return a response that includes only a single page if you wish. The reason you're not seeing such examples is that the whole idea of the framework is to make it easy for the developer to return multiple "pages" as a single web server response.
For your server's responses which are a set of mobile pages, the multiple-pages-at-a-time trick is the usual approach. For responses which include an infinite-scroll page, or which have a lot of dynamic content, you can do Ajax updating of the mobile page, especially if you limit yourself to iPhone and Android browsers.
Overall, the per-request overhead is the big issue for good mobile web app performance. Anytime you can (or probably can) avoid a browser/server round-trip, you should aggressively do so.
There are different JavaScript frameworks like jQuery, Dojo, MooTools, Google Web Toolkit (GWT), YUI, etc.
Which of these is suitable for high-performance websites?
(Full disclaimer: I am a Dojo developer and this is my unofficial perspective).
All major libraries can be used in high load scenarios. There are several things to consider:
Initial load
The initial load affects your response time: from requesting a web page to being responsive and in working mode. Trivial things to do are:
concatenate several JavaScript files together (works for CSS files too)
minimize and/or compress your JavaScript
The idea is to send less — good for the server, and good for the client.
The less trivial thing to do:
structure your program in such a way so it is operational without all modules loaded
Example of the latter: divide your modules into essential (e.g., the core logic), and non-essential (e.g., helpers: tooltips, hints, verifiers, help facilities, various "gradual enhancers", and so on). The idea is that frequently there are things which are not important for frequent users, but nice for casual users ⇒ they can be delayed.
We can load essential modules first and load the rest asynchronously. Example: if the user wants to edit an object, we need to show it first; after that we have several hundred milliseconds to load the rest: lookup tables, hints, and so on.
Obviously it helps when asynchronous loading of modules is supported by the framework you use. Dojo has this facility built-in.
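For example, with an AMD-style loader (Dojo 1.7+, RequireJS, and others) the deferred load is only a few lines; the module names below are placeholders:

    // the essential module ships in the initial, built layer
    require(["app/editor"], function (editor) {
        editor.show();   // the user can start working right away

        // non-essential helpers are fetched asynchronously a moment later
        require(["app/tooltips", "app/lookupTables"], function (tooltips, lookups) {
            tooltips.attach(editor);
            lookups.preload();
        });
    });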
Distribute files
Everybody knows that due to browser restrictions on number of parallel downloads from the same site it is beneficial to load resources (images, CSS, JavaScript) from different domains:
we can download more in parallel, if the user's line has enough bandwidth — these days that is almost always true
we can set up web servers optimized for serving static files: huge disk cache, small workers, keep-alive, async serving, and so on
we can remove all unnecessary features we don't need when serving static files: sessions, cookies, and so on
One frequently overlooked optimization in JavaScript applications is to use CDN:
your web site can benefit from the geographical distribution of CDN (files can be served from the closest/fastest server)
the user may already have the required files in her cache, if they were used by another application
intermediate/corporate caches increase the probability that required files are already cached
last but not least: these are files that you don't have to serve — think about it
Again, Dojo has supported CDNs for a long time and is distributed publicly via the AOL CDN and Google CDN. The latter carries practically all popular JavaScript toolkits too. Obviously you can create your own CDN and your very own CDN- and app-specific Dojo build, if you feel you need it — it is trivial and well documented.
Communication bandwidth
How that can be different for different toolkits? XHR is XHR.
You need to reduce the load on your servers as much as possible. Analyze all traffic and consider how much static/immutable stuff is sent over the pipe. For example, typically a lot of HTML is redundant across several pages: a header, a footer, a menu, and so on. Do you really need all of these to be sent over every time?
One obvious solution is to move from static HTML + "gradual enhancements" with JavaScript to real "one page" JavaScript applications. Again, this is a frequently overlooked, yet most rewarding, optimization.
While the idea sounds easy, in reality it is not as simple as it seems. As soon as we go from one-liners to apps we have a plethora of questions, and the biggest of them is the packaging: what your components are, what components are provided by the toolkit, and how to package and deliver them.
Dojo provides modules, good OOP for general classes, widgets (a combination of an optional HTML template and related behaviors), and a lot of facilities to work with them (a small sketch follows the list below). You can:
load modules on demand rather than in the head
load modules asynchronously
find all dependencies between modules automatically and create a "build" — one file in simple cases, or more, if your app is big and requires several layers
while doing the "build" it can inline all HTML snippets for your widgets, optimize CSS, and minify/compress JavaScript
Dojo can automatically find and instantiate widgets in HTML, saving a lot of boilerplate code
and much much more
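As a small sketch of the widget part (Dojo 1.6-style; the widget name and template are made up):

    dojo.require("dijit._Widget");
    dojo.require("dijit._Templated");

    // the HTML snippet lives in templateString and gets inlined by the build
    dojo.declare("my.Hint", [dijit._Widget, dijit._Templated], {
        templateString: "<span class='hint'>${label}</span>",
        label: "",
        postCreate: function () {
            this.connect(this.domNode, "onclick", function () {
                this.domNode.style.display = "none";
            });
        }
    });

    // programmatic instantiation; dojo.parser can also find widgets declared in the markup
    var hint = new my.Hint({ label: "Press Enter to save" }, "hintNode");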
All these features help greatly when building applications on the client side. That's why I like Dojo.
Obviously there are more ways to optimize high-load web sites, but in my experience these are the ones most specific to JavaScript frameworks.
Quite simply: all of them.
All frameworks have been built in order to provide the fastest performance possible and provide the developers with useful functions and tools. Your choice should be based on your requirements.
JavaScript runs on the client-side, so none will affect your server performance. The only difference server-side is the amount of bandwidth used to transfer the .js files to the client.
I'm personally fond of MooTools because it answers my requirements and also sticks to my coding ideals. A lot of people adopted jQuery (I personally don't like it, doesn't mean it's not great). I haven't used the other ones.
But none is better than the other, it's all a question of requirements and personal preference.
I do not really think it makes a bit of difference. The big ones seem to use a mixture of jQuery and Prototype along with others.
Quite frankly, it makes no difference what you use for heavily visited websites, as we are talking about client technologies. After the file is loaded, there is not really any overhead. So, if you just want to do one simple thing and multiple frameworks support it, use whichever has the smaller file size (of course, if it performs really badly, use another!).
That being said, Google hosts a lot of the frameworks, so even this is really a non-issue. I use jQuery hosted by Google and am very happy.
http://code.google.com/apis/ajaxlibs/
Backend and what the server should be using is a whole different question where you will get a thousand different answers!
I'd recommend you look into Dojo.
Dojo 1.6 is also the first (and only) popular JavaScript Library that can be successfully used with the Closure Compiler's Advanced mode, with the massive size, performance and obfuscation benefits attached to it -- other than Google's own Closure Library, that is.
http://dojo-toolkit.33424.n3.nabble.com/file/n2636749/Using_the_Dojo_Toolkit_with_the_Closure_Compiler.pdf?by-user=t
In other words, a program using Dojo can be 100% obfuscated -- even the library itself.
Compiled code has exactly the same behavior as plain-text code, except that it is much smaller (on average 25% smaller than with minifiers), runs much faster (especially on mobile devices), and is almost impossible to reverse-engineer, even after passing through a beautifier, because the entire code base (including the library) is obfuscated.
Code that is only "minified" (e.g., with YUI Compressor or UglifyJS) can easily be reverse-engineered after passing through a beautifier.
Well, as an example, Stack Overflow relies on jQuery (and uses the Google APIs link). It's one of the speediest and most popular libraries, and not only that, but I'd say it's the easiest to use. What type of behavior are you going to have on the site? It really all depends on your needs.
The answer, as always, is: it depends. What kind of performance are you talking about? Download speed? Use a minimiser and there's probably not a lot of difference. Or client-side performance, and what are you doing with it?
But, I would suggest that if you're after raw performance, you should not use a framework at all and instead write low-level JavaScript, which will be far more difficult to maintain.
Some good information can be found on the YUI site.
As other answers already explained, the framework's not going to be the bottleneck in your site's performance -- rather, many other factors are. If you use popular frameworks and load them from popular URLs for them (e.g. AOL's or Google's) they're likely to be cached in your users' browsers, so you don't have to worry much about that, either.
If you care at all about performance, however, absolutely DO check out Steve Souders' work, including both of his books, "High Performance Web Sites" and "Even Faster Web Sites".
I'm biased, as Steve is a friend and a colleague (and we share publishers as well), but I praised and admired his work even before we met in person and became colleagues. I'm mostly a back-end person, as he used to be, so I can't help admiring somebody who, coming from the same background, had the integrity and courage to switch almost entirely to a front-end focus once he realized that THAT was by far the bottleneck for user-perceived performance (i.e., somebody who had the gumption to put user experience first, something we all pay homage to, of course, but don't always practice when that "overriding priority" gets in the way of our own professional specialties, interests and skills...).