I've been working on a project (JavaScript with some AJAX, jQuery, etc.), and it has gotten rather large; I'm finding myself using more and more client-side JavaScript arrays, and am now considering a 2D JavaScript array as well.
I know from incremental testing that my current implementation is very much manageable in terms of browser resources and performance, but was wondering if there was a general consensus as to how "heavy" a website could be in terms of Javascript memory usage and processing before it could be declared "bloated."
Also, if I'm already treading into dangerous territory by using this much storage and processing (I feel like I'm coding a rather substantial Java/C app), what is the best way to lighten the load?
Thanks!
One man's "bloated" is another man's "robust". In other words, there is no "right" answer to this question, as it depends heavily on your audience and your site. For some sites, every millisecond of load time matters, while for others a 20 second load time is perfectly acceptable. It really just depends.
The best advice that I can suggest is to use one of the many site performance analysis tools (eg. YSlow) to get a sense of just how slow your app actually is. These tools can also give you a better idea of what is making your site slow; for instance, you might think it's the amount of JS code you have, but really the number of JS files (every HTTP request has a cost) might be the bigger factor.
Once you have some objective, measurable sense of how slow your site is, you can then take the time to consider your audience and determine how slow is "too slow" for them. And once you've done that, you can then consider all of the different suggestions that YSlow (or whatever tool you choose) offers, and pick which ones (if any) make the most sense for your site.
You have a big problem in that internet connection speeds vary and people's hardware varies.
OVERVIEW
At my company we run on a three-second rule: if the page is not working within 3 seconds of starting to download, we have a problem. That could be a server connection speed issue or images that are too big. JavaScript is quite quick most of the time, but if you're worried about memory and speed, keeping your script tidy helps.
TIDY
By tidy I mean: if you have one giant JS file with all your functions in it, or everything in one $(document).ready() when it's only used on one or two of your pages, try splitting it up a little. But don't split it up too much, as connecting to the server to download more files takes far longer than running them; I have scripts about half a MB in size that run fine.
TESTING
A good way to test, which I use, is to run Windows in VirtualBox and reduce the memory to the amounts common among users: 256 MB (really old), 512 MB (old), 1 GB (a lot of users), 2 GB+ (high-end users). And remember your target: if you're building a JS-heavy site, do you want to support IE6? If not, then your main target is Windows Vista or newer, which requires a minimum of 512 MB of RAM. Also remember to test in the different browsers, FF (3.x, 4/5), IE (7/8/9), and Chrome, because different browsers use different amounts of memory.
Main causes of bounce rate:
Page takes too long to load... images are the worst cause of this.
Page crashes the browser... JavaScript looping when a certain thing happens; people won't come back if their browser crashes.
There will always be people who complain about JavaScript usage; cater for the masses, not the unique.
UPDATE
Another good way to keep scripts small is to use the Closure Compiler (linked below), which is what Google uses to minify its own scripts. It will drop file size (and so download time) and will lower memory use a little, since variable names will take up less space in memory.
http://code.google.com/closure/compiler/
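For a rough sense of what the compiler does, here is a made-up before/after (the exact output depends on the compilation level and version):

```js
// Before: readable source with descriptive names.
function addTaxToPrice(price, taxRate) {
  var total = price + price * taxRate;
  return total;
}

// After: roughly what a minifier like the Closure Compiler might emit.
// Whitespace is stripped and local names are shortened, so the file is
// smaller to download and the name strings held in memory are shorter.
function a(b,c){return b+b*c}
```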
One thing you can do if you've got a lot of data is to take advantage of localStorage in browsers that support it.
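As a minimal sketch of that idea (the key name and data shape are placeholders), feature-detect localStorage and store the data as JSON:

```js
// Cache a large dataset in localStorage where the browser supports it.
function cacheData(key, data) {
  if (typeof window.localStorage === "undefined") {
    return false; // no localStorage; keep the data in memory instead
  }
  try {
    window.localStorage.setItem(key, JSON.stringify(data));
    return true;
  } catch (e) {
    return false; // quota exceeded or storage disabled
  }
}

function readCachedData(key) {
  if (typeof window.localStorage === "undefined") return null;
  var raw = window.localStorage.getItem(key);
  return raw === null ? null : JSON.parse(raw);
}
```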
Other than that, I can't think of a clear definition of bloat. Does your website load quickly on modest connections? How does it do in slower browsers? If it does well in those scenarios, then you're good to go.
It seems like, to have your page load fast, you would want a series of small HTTP requests.
If it was one big one, the user might have to wait much longer to see that the page was there at all.
However, I've heard that minimizing your HTTP requests is more efficient. For example, this is why sprites are created for multiple images.
Is there a general guideline for when you want more and when you want less?
Multiple requests create overhead from both the connection and the headers.
It's like downloading the contents of an FTP site: one site has a single 1 GB blob, another has 1,000,000 files totalling a few MB. On a good connection, the 1 GB file could be downloaded in a few minutes, but the other is sure to take all day because the transfer negotiation ironically takes more time than the transfer itself.
HTTP is a bit more efficient than FTP, but the principle is the same.
What is important is the initial page load, which needs to be small enough to show some content to the user; additional assets can then be loaded outside of the user's view. A page with a thousand tiny images will always benefit from a sprite, because the negotiations would strain not only the connection but potentially the client computer as well.
EDIT 2 (25-08-2017)
Another update here; Some time has passed and HTTP2 is (becoming) a real thing. I suggest reading this page for more information about it.
Taken from the second link (at the time of this edit):
It is expected that HTTP/2.0 will:
- Substantially and measurably improve end-user perceived latency in most cases, over HTTP/1.1 using TCP.
- Address the "head of line blocking" problem in HTTP.
- Not require multiple connections to a server to enable parallelism, thus improving its use of TCP, especially regarding congestion control.
- Retain the semantics of HTTP/1.1, leveraging existing documentation (see above), including (but not limited to) HTTP methods, status codes, URIs, and where appropriate, header fields.
- Clearly define how HTTP/2.0 interacts with HTTP/1.x, especially in intermediaries (both 2->1 and 1->2).
- Clearly identify any new extensibility points and policy for their appropriate use.
The third point (not requiring multiple connections to enable parallelism) explains how HTTP2 handles requests differently from HTTP1. Whereas HTTP1 will create roughly 8 simultaneous ("parallel") connections, depending on the browser, to fetch as many resources as possible, HTTP2 will re-use the same connection. This removes the time and network latency needed to create new connections, which in turn speeds up asset delivery. Additionally, your webserver will have an easier time keeping roughly 8 times fewer connections open. Imagine the gains there :)
HTTP2 is also already quite widely supported in major browsers; caniuse has a table for it :)
EDIT (30-11-2015)
I've recently found this article on the topic of page speed. The post is very thorough, and at worst it's an interesting read, so I'd definitely give it a shot.
Original
There are too many answers to this question, but here are my 2 cents.
If you want to build a website you'll need a few basic things in your tool belt, like HTML, CSS, JS, maybe even PHP / Rails / Django (or one of the 10000+ other web frameworks), and MySQL.
The front-end part is basically everything that gets sent to the client on every request. The server-side language calculates what needs to be sent, which is how you build your website.
Now when it comes to managing assets (images, CSS, JS) you're diving into HTTP land, since you'll want to make as few requests as possible. The reason for this is that each request carries a penalty (DNS lookup, connection setup, headers); call it the DNS penalty.
This DNS penalty, however, does not dictate your entire website, of course. It's all about the balance between the number of requests and readability / maintainability for the programmers building the website.
Some frameworks, like Rails, allow you to combine all your JS and CSS files into one big meta-like JS file and CSS file before you deploy your application on your server. This ensures that (unless done otherwise) ALL the JS and ALL the CSS used in the website get sent in one request per file.
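As an illustration, a Rails (Sprockets) manifest is just a list of directives naming the files to concatenate; the file names below are placeholders:

```js
// app/assets/javascripts/application.js
// A hypothetical Sprockets manifest: each directive pulls a separate source
// file into the single compiled bundle that gets served to the client.
//= require jquery
//= require popup
//= require articles
//= require_tree .
```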
Imagine having a popup script and something that fetches articles through AJAX. These will be two different scripts, and when deploying without combining them, each page load that includes both the popup and the article script will send two requests, one for each file.
In practice it isn't quite that bad, because browsers cache whatever they can whenever they can; in the end, browsers and the people who build websites want the same thing: the best experience for our users!
This means that during the first request your website ever answers for a client, the browser will cache as much as possible to make consecutive page loads faster in the future.
This is kind of like the browser way of helping websites become faster.
Now, when the brilliant browserologists think of something, it's more or less our job to make sure it works in the browser. Usually these sorts of things, caching etc., are trivial and not hard to implement (thank god for that).
Having a lot of HTTP requests in a page load isn't an end-of-the-world thing, since it'll only slow the first visit, but overall having fewer requests makes this "DNS penalty" appear less often and will give your users more of an instant page load.
There are also other techniques besides file merging that you can use to your advantage: when including a JavaScript file you can mark it as async or defer.
With async, the script is downloaded in the background and executed as soon as it has loaded, regardless of the order of inclusion within the HTML; the HTML parser is paused at that moment to execute the script.
For defer it's a bit different. It's kind of like async but files will be executed in the correct order and only after the HTML parser is done.
Something you wouldn't want to be "async" is jQuery, for instance: it's the key library for a lot of websites and you'll want to use it in other scripts, so marking it async and not being sure when it's downloaded and executed is not a good plan.
Something you would want to be "async" is a Google Analytics script, for instance: it's effectively optional for the end user and thus can be labelled as not important; no matter how much you care about the stats, your website isn't built for you but by you :)
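A minimal sketch of what that looks like in the markup (file names are placeholders):

```html
<!-- jQuery: loaded normally (blocking) so every later script can rely on it. -->
<script src="/js/jquery.min.js"></script>

<!-- Application code: defer downloads in parallel but executes in tag order,
     only after the HTML parser has finished. -->
<script defer src="/js/popup.js"></script>
<script defer src="/js/articles.js"></script>

<!-- Analytics: async, because nothing depends on it and execution order
     doesn't matter. -->
<script async src="/js/analytics.js"></script>
```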
To get back to requests and tie all this talk about async and defer together: you can have multiple JS files on your page and still not have the HTML parser pause to execute them. Instead you can mark those scripts defer and you'll be fine, since the user's HTML and CSS will load while the JS waits nicely for the HTML parser to finish.
This is not an example of reducing HTTP requests but it is an example of an alternative solution should you have this "one file" that doesn't really belong anywhere except in a separate request.
You will also never be able to build a perfect website, nor will http://github.com or http://stackoverflow.com, but it doesn't matter: they are fast enough that our eyes don't see any crazy flashing content, and those things are what truly matter to end users.
If you are curious about how many requests are normal: don't be. It's different for every website and for the purpose of the website, though I agree some things do go over the top sometimes. It is what it is, and all we have to do is support browsers like they are supporting us, even looking at IE / Edge, since they are also improving (slowly but steadily anyway).
I hope my story made sense to you; I did re-read it before posting but couldn't find anything while scouting for irregular typing or other kinds of illogical things.
Good luck!
The HTTP protocol is verbose, so the ratio of header size to payload size makes it more efficient to have a larger payload. On top of that, this is still a distributed communication which makes it inherently slow. You also, usually, have to set up and tear down the TCP connection for each request.
Also, I have found that small requests tend to repeat data between themselves in an attempt to achieve RESTful purity (like including user data in every response).
The only time small requests are useful is when the data may not be needed at all, so you only load it when needed. However, even then it may be more performant to simply retrieve it all in one go.
You always want less requests.
The reason we separate JavaScript/CSS code into other files is that we want the browser to cache them, so other pages on our website will load faster.
If we have a single-page website with no common libraries (like jQuery), it's best to include all the code in your HTML.
I'm working on a BIG project, written in RoR with a jQuery frontend. I'm adding AngularJS, which has intelligent dependency injection, but what I want to know is: how much JavaScript can I put on a page before the page becomes noticeably slow? What are the specific limits of each browser?
Assuming that my code is well factored and all operations run in constant time, how many functions, objects, and other things can I allocate in JavaScript before the browser hits its limit? (There must be one, because any computer has a finite amount of RAM and disk space, although disk space would be an ambitious limit to hit with JavaScript.)
I've looked online, but I've only seen questions about people asking how many assets they can load, i.e. how many megabytes can I load, etc. I want to know if there is an actual computational limit set by browsers and how they differ.
-- EDIT --
For the highly critical, I guess a better question is
How does a modern web browser determine the limit for the resources it allocates to a page? How much memory is a webpage allowed to use? How much disk space can a page use?
Obviously I use AJAX, I know a decent amount about render optimization. It's not a question of how can I make my page faster, but rather what is my resource limitation?
Although technically it sounds like a monumental task to reach the limits of a client machine, it's actually very easy to reach these limits with an accidental loop. Everyone has done it at least once!
It's easy enough to test: write a JS loop that uses huge amounts of memory and you'll find the memory usage of your PC will peg out, and indeed it will eat into your virtual memory too, before the browser falls over.
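A deliberately wasteful sketch along those lines; run it only in a throwaway tab, since the whole point is that it will eventually make the tab unresponsive:

```js
// WARNING: this intentionally exhausts memory, for testing purposes only.
var hoard = [];
function consumeMemory() {
  // Each call allocates roughly 8 MB of numbers and keeps a reference to it,
  // so the garbage collector can never reclaim anything.
  hoard.push(new Array(1024 * 1024).fill(Math.random()));
  console.log("allocated chunks:", hoard.length);
  setTimeout(consumeMemory, 0); // yield briefly so logging and the UI can update
}
consumeMemory();
```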
I'd say, from experience, even if you don't get anywhere near the technological limits you're talking about, the limits of patience of your visitors/users will run out before the resources.
Maybe it's worth looking at AJAX solutions in order to load relevant parts of the page at a time if loading times turn out to be an issue.
Generally, you want to minify and package your JavaScript to reduce initial page requests as much as possible. Your web application should mainly consist of one JavaScript file when you're all done, but that's not always possible, as certain plugins might not be compatible with your dependency management framework.
I would argue that a single-page application that starts to exceed 3 MB or 60 requests on an initial page load (with cache turned off) is starting to get too big and unruly. You'll want to start looking for ways of distilling copy-and-pasted code down into extendable, reusable objects, and possibly dividing the one big application into a collection of smaller apps that all use the same library of models, collections, and views. If using RequireJS (what I use), you'll end up with different builds that will need to be compiled before launching any code if any of the dependencies contained within that build have changed.
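As an example of what such a build looks like with RequireJS, a build profile for its optimizer (r.js) might look roughly like this; the paths and module names are hypothetical:

```js
// build.js: a hypothetical r.js build profile, run with "node r.js -o build.js".
// The optimizer traces the dependencies of the "main" module and concatenates
// (and minifies) them into a single file for production.
({
  baseUrl: "js/lib",
  paths: {
    app: "../app"           // application modules live outside the lib folder
  },
  name: "../main",          // entry module whose dependency tree gets bundled
  out: "dist/main-built.js" // the single compiled output file
})
```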
Now, as for the 'speed' of your application, look at tutorials on render optimization for your chosen framework. Tricks like appending each model's view one by one as models are added to the collection result in a faster-rendering page than trying to attach a huge blob of HTML all at once. Be careful of memory leaks: ensure you're closing references to your views when switching between the pages of your single-page application. Create an 'onClose' method in your views that ensures all subviews and attached data references are destroyed when the view itself is closed, and garbage collection will do the rest. Use a global variable for storing your collections and models, something like window.app.data = {}. Use a global view controller for navigating between the main sections of your application, which will help you close out view chains effectively. Use lazy loading wherever possible. Use 'base' models, collections, and views and extend them; doing this will give you more options later on for controlling the global behavior of these things.
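A rough sketch of that 'onClose' idea, assuming a Backbone-style view; the close/onClose names are a convention, not a built-in API:

```js
// Hypothetical base view with a close() helper; extend application views from it.
var BaseView = Backbone.View.extend({
  close: function () {
    if (this.onClose) {
      this.onClose();   // view-specific cleanup: subviews, data references
    }
    this.remove();      // detaches the element and stops DOM/model listeners
    this.off();         // drops events bound directly on the view object
  }
});

// Example subclass that tracks its subviews so close() can reach them.
var ArticleListView = BaseView.extend({
  initialize: function () {
    this.childViews = [];
  },
  onClose: function () {
    this.childViews.forEach(function (child) { child.close(); });
    this.childViews = null;
  }
});
```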
This is all stuff you sort of learn from experience over time, but if enough care is taken, it's possible to create a well-running single page application on your first try. You're more likely to discover problems with the application design as you go though, so be prepared to refactor your code as these problems come up.
It depends much more on the computer than the browser - a computer with a slow CPU and limited amount of RAM will slow down much sooner than a beefy desktop.
A good proxy for this might be to test the site on a few different smartphones.
Also, slower devices sometimes run outdated and/or less feature-rich browsers, so you could do some basic user-agent sniffing or feature detection on the client and fall back to plain server-rendered HTML.
Note: I'm really interested in the "large" array sizes that work for other people. This is not the first time I've wondered about this, and I welcome feedback.
The scheme that I detail below requires development effort toward that goal, so it's better to know going in whether I should eat the bandwidth and drop the idea, or whether this is a practical method for a marginal gain on bandwidth. In other cases, for others who may be curious, or for future projects of mine, the gain may be larger.
Usually when I encounter a situation where a large JavaScript dataset is an option, there are other options as well, and I tend to favor them. I'm not actually sure what qualifies as large or too large.
I'm wondering what safely works in production. I know the theoretical limits, but I can't find anything on the practical limits. As I detail below, my array is 14,000 elements and 356 KB in size.
I realize, of course, that javascript is client side and depends on the specs of the client machine, my code, and to some extent, the version of the browser (in cases where related internal performance improved between versions).
Like any respectable modern website, it will have a mobile version as well and that's honestly where the memory concerns come in.
I don't actually think that this size is too large or near it for reasonable specs expectations of client machines, but I could be wrong and this is something I've long wondered about and haven't found any good information on. (Further, of course, the answer to this question would be quite different than it was N years ago).
Edit: I am aware of the 32-bit limitations on Array size, but I'm asking about what can work in healthy production and perform well.
Background:
I'm in the planning stages of a bible search engine.
I have a copy of the King James Version of the Bible filled with citations. Some of the citations are quite lengthy, and as I'll be serving this over the net, I decided to move the citations into an indexed array. That way I can rewrite the citations in the text as shorter codes and consume less bandwidth.
This leaves me with an array of 14,000 elements, 356 KB in size (216 KB of data), and the overall size of all the verses has shrunk by 20% (2 MB, an average of 65 bytes per verse). I would call this a gain.
A preview of the file looks like:
['s=h07225','s=h0430','s=h0853','m=sm:th8804']
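To make the scheme concrete (only the array format above comes from my data; the verse markup and function are hypothetical), verses could reference citations by index and a small lookup could expand them on the client:

```js
// The real array has ~14,000 entries shaped like the preview above.
var citations = ['s=h07225', 's=h0430', 's=h0853', 'm=sm:th8804'];

// Hypothetical convention: a verse stores citation references as "{index}".
function expandCitations(verseText) {
  return verseText.replace(/\{(\d+)\}/g, function (match, index) {
    var code = citations[Number(index)];
    return code !== undefined ? code : match; // leave unknown indexes untouched
  });
}

// e.g. expandCitations("In the beginning{0} God{1} created{2} ...") restores
// the full citation codes before rendering a search result.
```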
However, the potential downside is the memory consumed by using this array consistently with search results.
My recommendation is to use localStorage; this way you can AJAX the data in so it doesn't impact the initial page load. Once cached on the client, you are free to use it as you please. Also, you might need to add a way to keep checking for your array: check whether it exists, and AJAX it in again if it doesn't, since local storage can be removed by the browser at any point in time.
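A sketch of that pattern, assuming jQuery for the AJAX call; the URL, key name, and callback are placeholders:

```js
// Use the cached copy if the browser still has it; otherwise fetch and re-cache.
function loadCitations(onReady) {
  var cached = window.localStorage && localStorage.getItem("citations");
  if (cached) {
    onReady(JSON.parse(cached));
    return;
  }
  $.getJSON("/data/citations.json", function (citations) {
    try {
      localStorage.setItem("citations", JSON.stringify(citations));
    } catch (e) {
      // Storage may be full or disabled; just use the in-memory copy.
    }
    onReady(citations);
  });
}
```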
Another alternative is to put in a JavaScript web resource that gets cached in the browser. You use <script> tags to bring it in. This file will declare the global array in your application. There is an async attribute available to have it load asynchronously. The downside here is figuring out when your array becomes available.
The benefits of well-factored and modular code, in my understanding, are reusability and organization. Code written in a big chunk all in one file is difficult to read, and re-using small portions of the code requires careful copy-pasting rather than include statements.
In particular, with regards to Javascript, I came across an example recently that got me thinking about this. A comment was made on SO to the effect that, if you are not including your javascripts conditionally on a page-by-page basis, this "represents a failure to modularize JS code properly". However, from a code re-use and organization point of view, there is no reason to consider what happens at page load time. The code will be just as readable if it is written in a bunch of separate files and then mashed together and minified before being served. The rails asset pipeline, for example, does just this.
When I first encountered the asset pipeline, my mind reeled and I started wondering "how do I make javascripts load only when needed?" I read a few SO questions and an article on the matter, and began to think that maybe I shouldn't worry about what happens to my code after it "compiles".
Is the purpose of writing modular code purely a human-level concern? Should we stop worrying about modularity after the code starts running? In the case of JavaScript, should we be concerned that our scripts are being mashed together before being included?
I think the one thing you are not really addressing here with regard to performance is actual browser download behavior. I believe you have to walk a fine line between only delivering the JavaScript needed on a page-by-page basis and leveraging browser caching and download behavior.
For example, say you have 20 different javascript snippets that are going to be used on every page. In this case it is a no-brainer to compile/minify them into a single file, as the fewer files your browser needs to download, the better. This single file would also be able to be cached, that is assuming it is a static file or appearing to be static (via headers sent) if it is dynamically compiled.
Now say of those 20 snippets, 15 are used on every page and the others are used intermittently. Of course you put all 15 of the always used snippets into a single file. But what about the others? In my opinion you need to consider the size and frequency of use of the files. If they are small and used relatively frequently, I might consider putting them into the main file, with the thought that the extra size in the main file is outweighed by the need to have additional request to download the content later. If the code is large, I would tend to only use it where necessary. Of course once it is used, it should remain in cache.
This approach might be best suited for a web application where users are expected to have multiple page loads per session. Of course, if you are designing an advertising landing page or something where the user may only see that single page, you might lean toward keeping the initial JavaScript download as small as possible and only loading new JavaScript as necessary based on user interaction.
Every aspect of this question boils down to "it depends".
Are you writing an enterprise-level application, which results in 80,000 lines of code, when you stuff it all together?
If so, then yes, compilation time is going to be huge, and if you stuff that in the <head> of your document, people are going to feel the wait time.
Even if it's already cached, compile time alone will be palpable.
Are you writing dozens of widgets which might never be seen by an end-user?
Especially on mobile?
If so, then you might want to save them the download/compile time, and instead load your core functionality, and then load extra functionality on-demand, as more studies are showing that the non-technical end-user expects their mobile-internet experience to be similar to their desktop experience, not only in terms of content, but in general wait-times.
Fewer and fewer people are willing to accept 5s-8s for a mobile experience (or a desktop experience on mobile) to get to the point of interactivity, just based on the "well, it's mobile, so it'll take longer" train of thought.
So again, if you've got an 80,000 line application, or a 300kB JS file, or are doing a whole lot of XML parsing, et cetera, prior to load, without offering a separate mobile experience, your stats on mobile are bound to hurt -- especially if you're a media site or a commercial/retail site.
So the correct answer to your question is to say that there is no correct answer to your question, excepting that there are good ideas and bad ideas, based on the target-devices, the intent of the site/application, the demographic, the code-base, the anticipation that users will frequent the site (and thus benefit from cached assets), the frequency of updates to the codebase (having one updated module, with 20 cached modules, versus a fully-invalid 21-module chunk, due to one updated line, with a client-base of 250,000 customers, is a consideration for several reasons)...
...and more...
Figure out what you're doing.
Figure out what you need to do to make that happen.
Figure out how to do it, while providing your customers a good experience.
Know how to combine files.
Know how to load on demand.
Know how to build a light bootstrap, which can intelligently load modules (and/or learn require/AMD); a minimal sketch follows after this list.
Use these as tools to offer your users the best experience possible, given what you're trying to accomplish.
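A minimal sketch of that kind of bootstrap using RequireJS / AMD; the module and element names are placeholders:

```js
// main.js: load the core up front, pull heavier optional modules in on demand.
require(["core/app"], function (app) {
  app.start();

  // Hypothetical example: the reporting widget is large and rarely used,
  // so fetch it only the first time the user actually asks for it.
  document.getElementById("show-report").addEventListener("click", function () {
    require(["widgets/report"], function (report) {
      report.render(document.getElementById("report-container"));
    });
  });
});
```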
I have written pure JavaScript front ends before and started noticing a performance decrease when working with large stores of data. I have tried using XML and JSON, but in both cases it was a lot for the browser to handle.
That poses my question, which is how much is too much?
You can't know, not exactly and not always. You can make a good guess.
It depends on the browser, OS, RAM, CPU, what else is running at that moment, how fast their connection is, what else they're transferring, etc.
Figure out several situations you expect for your average user, and test those. Add for various best, worst, and interesting (e.g. mobile, tablet) cases.
You can, of course, apply experience and extrapolate from your specific cases, and the answer will change for the future.
But don't fall into the trap of "it works for me!"
I commonly see this with screen resolutions: as those have increased, it's much more popular to have multiple windows visible at the same time. In 1995 it was rare for me to not have something maximized; now fifteen years later, it's exactly the opposite.
Yet sometimes people will design some software or a website, use lower contrast[1], maximize it, and connect to a server on localhost—and that's the only evaluation they do.
[1] Because they know what the text says and don't need to read it themselves, so lower contrast looks aesthetically better.
In my opinion, if you need to stop and think about this issue, then the data is too much. In general you should design your applications so that users with low-end netbooks and/or slow internet connections are still able to run them. Also keep in mind that more often than not your application isn't the only page your users are visiting at the same time.
My recommendation is to use Firefox with Firebug to do some measurements. See how long a request takes to complete in a modest configuration. If it takes noticeable time for the browser to render the data, then you'd be better off doing a redesign.
A good guiding principle should be that instead of worrying about whether the browser can handle the volume of data you're sending it, worry about whether your user can handle it. It all depends on the presentation of course (i.e., a lot of data bound for a visualization tool that'll render a complex graph in a canvas is different than a lot of raw numbers bound for a gigantic table), but in my experience a user's brain reaches data overload before the browser/network/client computer.
It really depends on the form that your external data is going to take in your Javascript. If you want to load all your data at once and keep it in memory as a large object with lots of properties (associative array), then you will find that most current desktops can only handle about 100k entries (with small key-value pairs) before performance really degrades.
If it is possible, you should see if there are ways to only load the data that is needed by the user for a given request / interaction. You can use AJAX to request needed data and prefetch data that you think the user may need.