Why write modular JavaScript? [closed]

The benefits of well-factored and modular code, as I understand them, are reusability and organization. Code written in one big chunk in a single file is difficult to read, and re-using small portions of it requires careful copy-pasting rather than include statements.
In particular, with regard to JavaScript, I came across an example recently that got me thinking about this. A comment was made on SO to the effect that, if you are not including your JavaScript files conditionally on a page-by-page basis, this "represents a failure to modularize JS code properly". However, from a code re-use and organization point of view, there is no reason to consider what happens at page load time. The code will be just as readable if it is written in a bunch of separate files and then mashed together and minified before being served. The Rails asset pipeline, for example, does just this.
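(As a concrete illustration of that last point, a Rails/Sprockets manifest might look roughly like the sketch below; the module file names are made up for illustration.)

    // app/assets/javascripts/application.js -- a minimal Sprockets manifest sketch;
    // the required module names are hypothetical.
    //= require jquery
    //= require widgets/popup
    //= require widgets/articles
    //= require_tree ./pages

The source stays split into small, readable files, but the pipeline serves them as one concatenated, minified application.js.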
When I first encountered the asset pipeline, my mind reeled and I started wondering "how do I make JavaScript files load only when needed?" I read a few SO questions and an article on the matter, and began to think that maybe I shouldn't worry about what happens to my code after it "compiles".
Is the purpose of writing modular code purely a human-level concern? Should we stop worrying about modularity once the code starts running? In the case of JavaScript, should we be concerned that our scripts are being mashed together before being included?

I think the one thing you are not really addressing here with regard to performance is actual browser download behavior. I believe you have to walk a fine line between loading only the JavaScript needed on a page-by-page basis and leveraging browser caching and download behavior.
For example, say you have 20 different JavaScript snippets that are going to be used on every page. In this case it is a no-brainer to compile/minify them into a single file, as the fewer files your browser needs to download, the better. This single file can also be cached, assuming it is a static file, or appears static (via the headers sent) if it is dynamically compiled.
Now say that of those 20 snippets, 15 are used on every page and the others are used intermittently. Of course you put all 15 of the always-used snippets into a single file. But what about the others? In my opinion you need to consider the size and frequency of use of the files. If they are small and used relatively frequently, I might consider putting them into the main file, on the theory that the extra size in the main file is outweighed by avoiding an additional request to download the content later. If the code is large, I would tend to load it only where necessary. Of course, once it is used, it should remain in cache.
This approach is probably best suited to a web application where users are expected to have multiple page loads per session. Of course, if you are designing an advertising landing page or something where the user may only ever see that single page, you might lean toward keeping the initial JavaScript download as small as possible and only loading new JavaScript as necessary based on user interaction.

Every aspect of this question boils down to "it depends".
Are you writing an enterprise-level application, which results in 80,000 lines of code, when you stuff it all together?
If so, then yes, compilation time is going to be huge, and if you stuff that in the <head> of your document, people are going to feel the wait time.
Even if it's already cached, compile time alone will be palpable.
Are you writing dozens of widgets which might never be seen by an end-user?
Especially on mobile?
If so, then you might want to save them the download/compile time, and instead load your core functionality first and extra functionality on demand. More studies are showing that the non-technical end-user expects their mobile-internet experience to be similar to their desktop experience, not only in terms of content, but in general wait times.
Fewer and fewer people are willing to accept 5s-8s for a mobile experience (or a desktop experience on mobile) to get to the point of interactivity, just based on the "well, it's mobile, so it'll take longer" train of thought.
So again, if you've got an 80,000 line application, or a 300kB JS file, or are doing a whole lot of XML parsing, et cetera, prior to load, without offering a separate mobile experience, your stats on mobile are bound to hurt -- especially if you're a media site or a commercial/retail site.
So the correct answer to your question is to say that there is no correct answer to your question, excepting that there are good ideas and bad ideas, based on the target-devices, the intent of the site/application, the demographic, the code-base, the anticipation that users will frequent the site (and thus benefit from cached assets), the frequency of updates to the codebase (having one updated module, with 20 cached modules, versus a fully-invalid 21-module chunk, due to one updated line, with a client-base of 250,000 customers, is a consideration for several reasons)...
...and more...
Figure out what you're doing.
Figure out what you need to do to make that happen.
Figure out how to do it, while providing your customers a good experience.
Know how to combine files.
Know how to load on demand.
Know how to build a light bootstrap, which can intelligently load modules (and/or learn require/AMD).
Use these as tools to offer your users the best experience possible, given what you're trying to accomplish.
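As a rough sketch of what "load on demand" can look like without any particular framework (the URL, element id, and widget name here are made up for illustration):

    // A minimal on-demand loader sketch -- not any specific library's API.
    function loadScript(url, callback) {
      var script = document.createElement('script');
      script.src = url;
      script.onload = callback;
      document.head.appendChild(script);
    }

    // Ship the core up front; pull heavy widgets in only when they're needed.
    document.getElementById('open-chart').addEventListener('click', function () {
      loadScript('/js/chart-widget.js', function () {
        ChartWidget.init(); // assumes the lazily loaded file exposes this global
      });
    });

A real bootstrap would also track what has already been loaded, but the principle is the same.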

Related

Does using more external files, opposed to cramming everything into one file, reduce run-time-efficiency?

I am relatively new to web design and the world of jQuery, JavaScript and PHP. I guess this question would also apply to CSS style sheets. Is it better to have everything stuffed into one "external document"? Or does this not affect run-time speed?
Also, to go along with this: is it wrong, or less efficient, to use PHP in places where jQuery/JavaScript could be used? Which of the two languages is generally faster?
The way to look at it is to initially load only the minimum resources required on page load, not everything. Make sure you group all of these resources together into a single file, and minify them.
Once your page is loaded, you can load other resources on demand. For example, a modal that does not need to be immediately visible can be loaded later, when the user performs the action that requires it. This is called lazy loading. But when you do load any module on demand, make sure you load all of its resources together, minified as well.
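For example, assuming jQuery is already on the page, a sketch of that kind of lazy load might look like this (the file name and init function are hypothetical):

    // Load the modal's code only when the user actually asks for it.
    $('#open-settings').on('click', function () {
      $.getScript('/js/settings-modal.min.js', function () {
        SettingsModal.show(); // assumes the lazily loaded file defines this
      });
    });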
It's important to structure your code correctly and define the way you batch files together for concatenation and minification. It will help you save on performance by optimizing the number of calls made to the server.
About PHP and JavaScript: I would say in general JavaScript is faster than PHP, but it depends on your application, as one runs on the server and the other on the client. So if you are doing very heavy, memory-intensive operations, the browser might limit your capabilities. If that is not a problem, go ahead with JavaScript.
There are a lot of different factors that come into play here. Ultimately, it is better to call the fewest resources possible to make the site run faster. Many sites that check page speed will dock points if you call a ton of resources. However, you don't want to go insane condensing and cram everything into a single file either... The best way to approach it is to use as few files as possible while maintaining a logical organization.
For example, maybe you're using a few different JS libraries... well merging those all into one would eventually get confusing and hard to update so it makes sense to keep them all separate. However, you can keep all your custom JS where you call those libraries in one separate file. This can even be applied to images. Let's say you're uploading 5 different social media icons and 5 different hover states for them. Well, instead of making the site call 10 different files, use a sprite and just call one.
You can also do things like use Google's hosted libraries: https://developers.google.com/speed/libraries/ Many sites use these, and therefore many users already have these resources cached, which means they don't need to freshly load the libraries when visiting your site. It's very helpful for things like jQuery.
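For instance, pulling jQuery from the hosted libraries is just a matter of pointing the script tag at Google's CDN (check their page for the exact version/URL you want; the one below is only an example):

    <!-- jQuery from Google's CDN, likely already cached by many visitors -->
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.6.0/jquery.min.js"></script>
    <!-- your own combined, minified code -->
    <script src="/js/site.min.js"></script>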
Another thing to keep in mind is minifying those files. Any library you use should have a minified version, and you should use that as opposed to the full version. While you should keep unminified copies of your work around, whatever ends up on the live site should be minified to help with page speed. Here are a few resources for that: https://cssminifier.com/ https://javascript-minifier.com/ If you're using WP, there are tons of plugins out there with similar functions, like WP Fastest Cache.
Your PHP/JS/jQuery question I can't really weigh in on too heavily. As mentioned, the base difference between PHP and JS is whether the requests are client-side or server-side. Personally, I use whatever is prevalent in the project and whatever works best for your changes. For example, if you're working with variables and transferring data, PHP can be a really great

When do you want more or fewer HTTP requests? [closed]

It seems like, to have your page load fast, you would want a series of small HTTP requests.
If it was one big one, the user might have to wait much longer to see that the page was there at all.
However, I've heard that minimizing your HTTP requests is more efficient. For example, this is why sprites are created for multiple images.
Is there a general guideline for when you want more and when you want less?
Multiple requests create overhead from both the connection and the headers.
It's like downloading the contents of an FTP site: one site has a single 1 GB blob, another has 1,000,000 files totalling a few MB. On a good connection, the 1 GB file could be downloaded in a few minutes, but the other is sure to take all day, because the transfer negotiation ironically takes more time than the transfer itself.
HTTP is a bit more efficient than FTP, but the principle is the same.
What is important is the initial page load, which needs to be small enough to show some content to the user, then load additional assets outside of the user's view. A page with a thousand tiny images will benefit from a sprite always because the negotiations would not only cause strain to the connection, but also potentially the client computer.
EDIT 2 (25-08-2017)
Another update here; Some time has passed and HTTP2 is (becoming) a real thing. I suggest reading this page for more information about it.
Taken from the second link (at the time of this edit):
It is expected that HTTP/2.0 will:
- Substantially and measurably improve end-user perceived latency in most cases, over HTTP/1.1 using TCP.
- Address the "head of line blocking" problem in HTTP.
- Not require multiple connections to a server to enable parallelism, thus improving its use of TCP, especially regarding congestion control.
- Retain the semantics of HTTP/1.1, leveraging existing documentation (see above), including (but not limited to) HTTP methods, status codes, URIs, and where appropriate, header fields.
- Clearly define how HTTP/2.0 interacts with HTTP/1.x, especially in intermediaries (both 2->1 and 1->2).
- Clearly identify any new extensibility points and policy for their appropriate use.
The point about not requiring multiple connections (emphasis mine in the original) explains how HTTP2 will handle requests differently from HTTP1. Whereas HTTP1 will create ~8 (differs per browser) simultaneous (or "parallel") connections to fetch as many resources as possible, HTTP2 will re-use the same connection. This reduces the overall time and network latency required to create new connections, which in turn speeds up asset delivery. Additionally, your webserver will also have an easier time keeping ~8 times fewer connections open. Imagine the gains there :)
HTTP2 is also already quite widely supported in major browsers, caniuse has a table for it :)
EDIT (30-11-2015)
I've recently found this article on the topic of page speed. The post is very thorough and an interesting read at worst, so I'd definitely give it a shot.
Original
There are too many answers to this question, but here's my 2 cents.
If you want to build a website, you'll need a few basic things in your tool belt, like HTML, CSS, JS - maybe even PHP / Rails / Django (or one of the 10000+ other web frameworks) and MySQL.
The front-end part is basically everything that gets sent to the client on every request. The server-side language calculates what needs to be sent, which is how you build your website.
Now when it comes to managing assets (images, CSS, JS), you're diving into HTTP land, since you'll want to make as few requests as possible. The reason for this is that there is a penalty per request (DNS lookup, connection setup, headers).
This per-request penalty, however, does not dictate your entire website of course. It's all about the balance between the number of requests and read-/maintainability for the programmers building the website.
Some frameworks like Rails allow you to combine all your JS and CSS files into one big meta-like JS file and one big CSS file before you deploy your application on your server. This ensures that (unless configured otherwise) ALL the JS and ALL the CSS used on the website get sent in one request per file.
Imagine having a popup script and something that fetches articles through AJAX. These would be two different scripts, and when deploying without combining them, each page load that includes both the popup and the article script will send two requests, one for each file.
You might expect this to slow every page down, but the reason it doesn't is that browsers cache whatever they can whenever they can, because in the end browsers and the people who build websites want the same thing: the best experience for our users!
This means that during the very first request your website answers for a client, the browser will cache as much as possible to make consecutive page loads faster in the future.
This is kind of like the browser way of helping websites become faster.
Now when the brilliant browserologists think of something it's more or less our job to make sure it works for the browser. Usually these sorts of things with caching etc are trivial and not hard to implement (thank god for that).
Having a lot of HTTP requests in a page load isn't an end-of-the-world thing, since it only slows down the first request; but overall, having fewer requests makes this per-request penalty bite less often and will give your users more of an instant page load.
There are also other techniques besides file merging that you can use to your advantage: when including a JavaScript file, you can mark it as async or defer.
With async, the script is downloaded in the background and executed as soon as it arrives, regardless of its order of inclusion within the HTML. The HTML parser is paused only at the moment the script actually executes.
For defer it's a bit different. It's kind of like async but files will be executed in the correct order and only after the HTML parser is done.
Something you wouldn't want to be "async" would be jQuery, for instance: it's the key library for a lot of websites and you'll want to use it in other scripts, so using async and not being sure when it's downloaded and executed is not a good plan.
Something you would want to be "async" is a google analytics script for instance, it's effectively optional for the end-user and thus should be labelled as not important - no matter how much you care about the stats your website isn't built for you but by you :)
To get back to requests and tie all this talk about async and defer together: you can have multiple JS files on your page and still not have the HTML parser pause to execute some of them - instead you can mark such a script as defer and you'll be fine, since the HTML and CSS will load while the script waits politely for the HTML parser to finish.
This is not an example of reducing HTTP requests but it is an example of an alternative solution should you have this "one file" that doesn't really belong anywhere except in a separate request.
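To make the difference concrete, a sketch of the three inclusion styles might look like this (file names are made up):

    <!-- blocking: downloaded and executed in order; safe for a core library like jQuery -->
    <script src="/js/jquery.min.js"></script>

    <!-- defer: downloads now, executes in order after the HTML parser is done -->
    <script src="/js/app.js" defer></script>

    <!-- async: executes as soon as it arrives, order not guaranteed; fine for analytics -->
    <script src="/js/analytics.js" async></script>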
You will also never be able to build a perfect website, nor will http://github.com or http://stackoverflow.com but it doesn't matter, they are fast enough for our eyes to not see any crazy flashing content and those things are truly important for end-users.
If you are curious about how many requests is normal - don't be. It's different for every website and the purpose of the website, though I agree some things do go over the top sometimes. It is what it is, and all we have to do is support browsers like they are supporting us - even looking at IE / Edge there, since they are also improving (slowly but steadily anyway).
I hope my story made sense to you. I did re-read it before posting, but couldn't find anything while scouting for irregular typing or other kinds of illogical things.
Good luck!
The HTTP protocol is verbose, so the ratio of header size to payload size makes it more efficient to have a larger payload. On top of that, this is still a distributed communication which makes it inherently slow. You also, usually, have to set up and tear down the TCP connection for each request.
Also, I have found, that the small requests repeat data between themselves in an attempt to achieve RESTful purity (like including user data in every response).
The only time small requests are useful is when the data may not be needed at all, so you only load it when needed. However, even then it may be more performant to simply retrieve it all in one go.
You always want fewer requests.
The reason we separate any javascript/css code in other files is we want the browser to cache them so other pages on our website will load faster.
If we have a single page website with no common libraries (like jQuery) it's best if you include all the code in your html.

How do I design / architect a large, modular mobile web app? [closed]

Background
I need to create a potentially very large HTML/JS mobile web app that will be delivered as a mobile web site and natively using Phonegap. I'm currently working to determine the best way to organize the app itself.
The basic plan is to have many modules that will each focus on a different subject of interest. Some of these modules will be very basic (i.e., announcements / news) and some will be very complex (i.e., sports: team players, schedules, video, etc.). There will be a side-drawer navigation that will apply to most pages so users can quickly navigate to a different module. There needs to be the ability to deep-link within modules. These modules will be created by a variety of developers and vendors.
Single Page App
Most of the mobile solutions I see involve single pages, which seems like a bad idea to me in this case, since there is the potential for so much memory use. It also seems like it would be difficult to reconcile hash navigation between modules with hash navigation between sections within modules. Module development would have to be done with the app framework in mind, which limits how things can be done by vendors and developers. On the other hand, things aren't getting loaded as often and everything can easily communicate with everything else.
Multiple Page App
Using multiple pages, it seems like each module could easily be created in whatever technology a vendor was comfortable with (and could do quickly and cheaply). It would cut down on memory use, but also remove the ability for modules to communicate (a feature that I don't know is necessary for us at this point). I could see making a javascript library every module would use for common handling of various events (like logging errors, navigation, etc). Each app navigation between modules would be a new page call, resetting the DOM. Each module could use a single page design if it wished.
Help Me Please
So, is there any common or new knowledge about how things like this should be designed? I'm eager to begin work, but don't want to be rewriting things that may already exist. Do I have any glaring flaws in my reasoning? I'd love to hear from anyone that has insight.
Honestly, if you are considering building any app that you believe will be high-volume and highly complex, you really ought to consider doing native development, or at least use something like Appcelerator where the application will be "compiled" down to native code for better performance. If you intend to just let any number of developers build their own JavaScript components that may or may not do a good job of managing limited system resources, you are going to run into application performance issues quickly.
On the other hand, if you are just going for proof of concept and don't mind potentially having to greatly refactor your application architecture when and if you get a sufficient level of complexity, then you may want to simply go with the web app approach.
Really, you also need to be considering your backend service architecture as much as or more than the frontend architecture. That is really where you are going to run into problems down the line in trying to integrate offerings from other developers.
I had a similar architectural problem to contend with a couple of years ago. It wasn't mobile, but it wasn't entirely web-based either. The target applications were a mix of web sites and desktop apps, with the potential to go mobile in the future.
The interesting part was that there had been two prior attempts at creating a framework that could be used in a variety of situations. The problem, and the reason both attempts failed, was that the developers saw it as a UX problem space. They approached it using several different technologies, but became mired months down the road because they had made assumptions up front and flown the project by the seat of their pants.
My approach was to eschew all the UI discussion completely and focus on a backend architecture that could be approached from any standpoint. To this end, I created a web service that had data going in both directions, and was ultimately serving a mathematical model. The service is being accessed from a variety of sources using different technologies: Flash, Unity, a Google Earth plugin, and finally, from an unrelated web architecture serving up good ol' HTML.
My advice to you, is don't concentrate on the front-end mapping so much as get your back-end right. Once you have a data structure in place, you can build outward, and several issues such as memory management, monolithic app or not, i.e. one page versus many, will almost resolve themselves. Work on creating a great API with lots of good interfaces and you won't fall into a "many chefs" hole. Give a bunch of dispersed developers enough rope, on the other hand, and you'll never find where all the knots are!
The decision whether to ultimately go native API over HTML5-based technology such as Sencha Touch, jQuery Mobile, or Phonegap is an evangelical black hole that will be played out over the coming months and years. Native apps may be more fluid and speedy in some cases, but the investment in resources is something that should be considered. On the other hand, JavaScript developers are lurking around every corner and are not in short supply.
Your first step is to nail down those requirements.
If you're doing this for yourself or for your own company, then nail down how these modules (co-)operate.
If you are doing this for your employer, then somebody there ought to have a clue what they want to see, otherwise, how are you going to build it?
A solution which supports multiple pages, opening and closing modules with no communication is going to require different things than a framework which is responsible for maintaining multiple widgets at the same time, which may or may not communicate through system-calls or services.
There's no way around that -- building services/sandboxing/etc for modules is going to take more work than treating each like a page-change (or making them be literal page-changes).
When you figure out what you want your program to do, start building out an idea of the API you'd like other people to have.
Are you going to provide them with an API for building UI components, or are you going to leave that to their own whims?
Personally, I'd avoid a situation where each module change just replaces iFrames, and then the end-user can do whatever they want in there.
Likewise, I'd avoid situations where you're allowing module-creators to run whatever they'd like in a non-sandboxed environment... It ends poorly for your end-users (or you, in UK court).
But that's not a concern, yet.
First-concern is what does your platform do.
Then figure out what your back-end communication is going to look like, and what interfaces you're going to provide to module creators, and how you're going to get data from your end to theirs (http-based API, REST or whatever else... ...but work it out WELL, if you don't already have it).
THEN, when you know what your platform is expected to do, AND you have a backend which can serve all kinds of tasks well, figure out what services you're going to provide to content-creators, to make their widgets, and to upload/download data from your service, and sandbox, and the like.

Aggressive Caching vs Combining of Javascript files for an Intranet Site

I'm working on improving the page performance of my company's intranet page. We're looking to (dynamically) combine our javascript files as well as cache them for 30+ days. The page launches on login for everyone.
One of my coworkers asked if it's worth the time to combine the JS files if we're already caching them for a month. He's hesitant to do both because the combining tool is server side and doesn't run on our desktop, requiring a somewhat hacky workaround.
I've been doing some research and most of the recommendations for performance I've seen are for external sites. Since we're in a closed system it would seem like we wouldn't get much benefit from combining the files once everyone's cache is primed. What would combining the files buy us that aggressive caching wouldn't?
We're on IE8 if that makes any difference.
The most notable impact with having multiple JavaScript files is the time required to render the page. Each script tag is processed separately and adds time to the overall render process.
A pretty good answer can be found here: multiple versus single script tags.
If we are talking about a large number of scripts, then you may see an improvement in render time; if it is just two or three files, then it likely won't bring about a noticeable difference once the files have been cached.
I would suggest testing the page render time in both cases and see how much improvement you see in your case and decide based on that information.
As a useful example, here are some stats from Xpedite (a runtime minification tool I created a while back); note the difference in time from load to ready for combined vs. uncombined scripts.
Combine all your JavaScript files into a single big file (thus minimizing the number of requests made to the server), and set its name to something like "application_234234.js"; the number represents the last time you have changed the file and will help the browser know it's a new file (thus no cache when you change it).
Add an Expires or a Cache-Control header (set it really far into the future). Since the file name will change each time you modify it, you don't have to worry.
Last and not least, compress and gzip the JavaScript file.
These are some important pieces of advice, but you can learn more about best practices at: http://developer.yahoo.com/performance/rules.html
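As one sketch of the caching/gzip part, assuming a Node/Express server (which may or may not match your stack), it could look like this:

    // Serve the versioned, combined file with far-future caching and gzip.
    var express = require('express');
    var compression = require('compression'); // gzip middleware

    var app = express();
    app.use(compression());
    app.use(express.static('public', {
      // Safe to cache for a year because the file name (e.g. application_234234.js)
      // changes whenever the contents change.
      maxAge: '365d'
    }));

    app.listen(3000);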

Javascript Coding - Good Practices/Standards [closed]

I've been working on a project (JavaScript with some AJAX, jQuery, etc.) and my project has gotten rather large; I'm finding myself using more and more client-side JavaScript arrays, and am now considering a 2D JavaScript array as well.
I know from incremental testing that my current implementation is very much manageable in terms of browser resources and performance, but was wondering if there was a general consensus as to how "heavy" a website could be in terms of Javascript memory usage and processing before it could be declared "bloated."
Also, if I'm already treading into dangerous territory by using this much storage and processing (I feel like I'm coding a rather substantial Java/C app), what is the best way to lighten the load?
Thanks!
One man's "bloated" is another man's "robust". In other words, there is no "right" answer to this question, as it depends heavily on your audience and your site. For some sites, every millisecond of load time matters, while for others a 20 second load time is perfectly acceptable. It really just depends.
The best advice that I can suggest is to use one of the many site performance analysis tools (eg. YSlow) to get a sense of just how slow your app actually is. These tools can also give you a better idea of what is making your site slow; for instance, you might think it's the amount of JS code you have, but really the number of JS files (every HTTP request has a cost) might be the bigger factor.
Once you have some objective, measurable sense of how slow your site is, you can then take the time to consider your audience and determine how slow is "too slow" for them. And once you've done that, you can then consider all of the different suggestions that YSlow (or whatever tool you choose) offers, and pick which ones (if any) make the most sense for your site.
One big problem you have is that internet connection speeds vary, and so does people's hardware.
OVERVIEW
At my company we run on a three-second rule: if the page is not working within 3 seconds of starting to download, we have a problem. That could be a server connection speed issue, or images that are too big. JavaScript is quite quick most of the time, but if you are worried about memory and speed with JavaScript, keeping your script tidy helps.
TIDY
By tidy I mean: if you have one giant JS file with all your functions in it, or everything in one $(document).ready() when it's only used on 1 or 2 of your pages, try splitting it up a little. But don't split it up too much, as connecting to a server to download more files takes a lot longer than running them; I have scripts of about half a MB that run fine.
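One lightweight way to keep per-page code from running everywhere is to guard it on the markup it needs (the selector and behavior below are hypothetical); splitting it into per-page files works too:

    $(document).ready(function () {
      // Only wire up the news widget on pages that actually contain it.
      if ($('#news-list').length) {
        $('#news-list .item').on('click', function () {
          $(this).toggleClass('expanded');
        });
      }
    });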
TESTING
A good way to test, which I use, is to have a Windows install in VirtualBox and reduce the memory to the common amounts users would have: 256 MB (really old), 512 MB (old), 1024 MB (a lot of users), 2 GB+ (high-end users). Also remember your target: if you're building a JS-heavy site, do you want to support IE6? If not, then your main target is Windows Vista or newer, and as such a minimum of 512 MB of RAM can be assumed. Just remember to test in the different browsers, FF (3.x, 4/5), IE (7/8/9) and Chrome, because different browsers use different amounts of memory.
Main causes of bounce rate:
Page takes too long to load... Images are the worst cause of this.
Page crashes the browser... JavaScript that gets stuck in a loop when a certain thing happens; people won't come back if their browser crashes.
There will always be people who complain about JavaScript usage; cater for the masses, not the unique.
UPDATE
Another good way to keep scripts small is to use the tool below, which is also what Google uses for its minified scripts. It will drop file size (and so download time) and will lower memory use a little, as variable names will take up less space in memory:
http://code.google.com/closure/compiler/
One thing you can do if you've got a lot of data is to take advantage of localStorage in browsers that support it.
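A minimal sketch of that, with a feature check (the key name is arbitrary):

    function saveRows(rows) {
      if (window.localStorage) {
        localStorage.setItem('appRows', JSON.stringify(rows));
      }
    }

    function loadRows() {
      if (window.localStorage && localStorage.getItem('appRows')) {
        return JSON.parse(localStorage.getItem('appRows'));
      }
      return []; // fall back to an empty data set when nothing is stored
    }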
Other than that, I can't think of a clear definition of bloat. Does your website load quickly on modest connections? How does it do in slower browsers? If it does well in those scenarios, then you're good to go.
