minimize response from controller action - javascript

I am working with ASP.NET MVC. I've noticed that when I call controller actions that return a view via JavaScript, the HTML markup returned is not minified - it includes whitespace etc. The response is therefore larger than it needs to be.
Is there a way to minimize the response from calling a controller action from javascript?

You might want to look into creating a custom filter to be applied to responses that you want to minify. A technique for this is given in this answer or in this blog post, though you will need to be sure that your implementation of the minification (removing whitespace) does not inadvertently mess up your content (for example, if you have inline JavaScript, removing all newline characters can cause everything after a // comment to be swallowed by that comment, per this comment).
To this end, it may be worthwhile to use the C# port of Google's htmlcompressor library as a guide for minifying your html.
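For illustration only (this is not the implementation from the linked answer or blog post), a naive whitespace collapse in JavaScript might look like the sketch below, and it shows exactly how inline scripts with // comments get broken:
function naiveMinify(html) {
    // collapse every run of whitespace into a single space - too aggressive
    return html.replace(/\s+/g, ' ');
}
var page = '<div>\n  Hello\n</div>\n<script>\n// init\nalert(1);\n</script>';
naiveMinify(page);
// => '<div> Hello </div> <script> // init alert(1); </script>'
// alert(1) is now inside the // comment and never runs; <pre> content would be mangled too.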
Of course, you can also just turn on gzip compression on the web server (as Justin points out in the comment below), and get the benefits of compressed output without the headache of implementing (and maintaining) what I detail above.
Note: this may not be worth the effort. A few extra spaces and newline characters in the file being sent down the wire will probably not amount to much. Even if you save a few KB (which may not even be the case), the performance gain will most likely not be noticeable. You will, however, notice that when you view the source of your HTML to debug client-side issues, it is much harder to read (spaces and newlines are pretty important for readability).

Related

Spaces in equal signs

I'm just wondering whether there is a difference in performance when removing the spaces before and after equals signs, as in these two code snippets.
first
int i = 0;
second
int i=0;
I'm using the first one, but my friend who is learning html/javascript told me that my coding is inefficient. Is that true in html/javascript? And is it a huge bump in performance? Will it also be the same in c++/c# and other programming languages? And about indentation, he said 3 spaces are better than a tab. But I'm already used to coding like this. So I just want to know if he is correct.
Your friend is a bit misguided.
The extra spaces in the code will make a small difference in the size of the JS file which could make a small difference in the download speed, though I'd be surprised if it was noticeable or meaningful.
The extra spaces are unlikely to make a meaningful difference in the time to parse the file.
Once the file is parsed, the extra spaces will not make any difference in execution speed since they are not part of the parsed code.
If you really want to optimize download or parse speed, write your code in the most readable fashion possible for best maintainability and then run a minimizer over it for the deployed code; this is standard practice on many web sites. This gives you the best of both worlds - maintainable, readable code and minimum deployed size.
A minimizer will remove all unnecessary spacing, shorten the names of variables, remove comments, collapse lines, etc... all designed to make the deployed code as small as possible without changing the run-time meaning of the code at all.
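For example (hypothetical code, purely to show the effect):
// Readable, maintainable source:
function addGrade(student, grade) {
    // ignore grades outside the valid range
    if (grade < 0 || grade > 100) {
        return student.total;
    }
    student.total = student.total + grade;
    return student.total;
}
// Roughly what a minimizer would deploy (same behaviour, far fewer bytes):
function addGrade(n,t){return t<0||t>100?n.total:n.total=n.total+t}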
C++ is a compiled language. As such, only the compiler that the developer uses sees any extra spaces (same with comments). Those spaces are gone once the code has been compiled into native code which is what the end-user gets and runs. So, issues about spaces between elements in a line are simply not applicable at all for C++.
Javascript is an interpreted language. That means the source code is downloaded to the browser and the browser then parses the code at runtime into some opcode form that the interpreter can run. The spaces in Javascript will be part of the downloaded code (if you don't use a minimizer to remove them), but once the code is parsed, those extra spaces are not part of the run-time performance of the code. Thus, the spaces could have a small influence on the download time and perhaps an even smaller influence on the parse time (though I'm guessing unlikely to be measurable or meaningful). As I said above, the way to optimize this for Javascript is to use spaces to enhance readability in the source code and then run a minimizer over the code to generate a deployed version of the code to minimize the deployed size of the file. This preserves maximum readability and minimizes download size.
There is little (javascript) to no (c#, c++, Java) difference in performance. In the compiled languages in particular, the source code compiles to the exact same machine code.
Using spaces instead of tabs can be a good idea, but not because of performance. Rather, if you aren't careful, use of tabs can result in "tab rot", where there are tabs in some places and spaces in others, and the indentation of the source code depends on your tab settings, making it hard to read.

Chars sanitization and XSS

I was doing Google's XSS game (https://xss-game.appspot.com/level4) and I managed to solve the 4th level. I didn't completely understand how, though.
I don't understand why, if I inject the encoded version of a char (let's say %3B), it is translated into the char itself (that is ';') in the final HTML page. I mean, who does this, the browser? Why?
Furthermore, I don't understand where in the code the injected chars are checked. I made some tests and I've seen that if I try to inject strings like '()';"' whatever comes after the ; is cut out! Where does this happen in the code?
Finally, if I inject a tag like <asd> it is encoded within the <div> (that is, &lt;asd&gt;) but it is not in the onload attribute of the <img> tag. Where in the code is this performed?
(This answer makes a number of assumptions because I don't have access to Google's client side or server side code (the link goes to an error page because I haven't played the game to reach the level)).
The URL parser (probably part of the server-side code) is responsible for converting percent-encoded data in URLs into characters.
; is a key/value separator in form encoding syntax. The URL parser will cut off data at that point.
Responsibility for converting text into HTML is usually given to the template engine, but might be done in some general server side code before data gets to the template (assuming there is a template, the general server side code might just smash strings together).
To get past level 4, just enter
')*alert('xss
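To see why that payload works, here is a hedged sketch (the actual page source isn't shown here, so the interpolation is assumed): suppose the timer value ends up inside an onload handler roughly like <img src="loading.gif" onload="startTimer('TIMER_VALUE');">.
// Assumed interpolation: startTimer('TIMER_VALUE');
// With TIMER_VALUE = ')*alert('xss  the handler becomes:
//     startTimer('')*alert('xss');
// The first ' closes the string, * keeps it one valid expression, and the
// trailing quote and parenthesis are supplied by the page's own template.
// A ; cannot go in the payload because the server cuts the value off at the semicolon.
function startTimer(seconds) { return Number(seconds) || 0; }  // stand-in for the page's function
startTimer('')*alert('xss');  // both operands of * are evaluated, so alert() fires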

HTML as a result for an AJAX call (PROs and CONs)

What is your opinion (PROs and CONs) about returning HTML code as the result of an AJAX call? That is, if the app creates a new item in a list and it needs some extra parameters or some pattern customization, instead of modifying it through JS, we can send it already templated through an AJAX call.
The point is that HTML snippets are sent from the server to the client computer and integrated in the document DOM. Any problem with this approach?
No problem with it at all, perfectly normal and reasonable thing to do.
There is sometimes a use-case for sending data rather than markup and expanding it with client-side templating, but that's mostly for situations where you're sending a lot of data and so want to keep the size on the wire down. (E.g., a large table where the HTML representation of it is 100k but the raw data in, say, JSON format would only be 10k.) Or when the templating varies depending on client-side conditions. But by and large, perfectly fine to send HTML you then incorporate into the DOM via innerHTML (or any of several libraries' wrappers for it that help you with the odd niggle).
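A rough sketch of the two approaches (URLs, element ids and field names are made up for this example, and jQuery is assumed for the requests):
// 1) Server returns ready-made HTML; just insert it into the DOM.
$.get('/students/row?id=42', function (html) {
    document.getElementById('studentList').insertAdjacentHTML('beforeend', html);
});
// 2) Server returns raw JSON; the client builds the markup itself.
$.getJSON('/students/42', function (student) {
    var li = document.createElement('li');
    li.textContent = student.name + ' (' + student.grade + ')';
    document.getElementById('studentList').appendChild(li);
});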
This is a common approach.
If you're adding items to a list or replacing the contents of a pod with something completely different this is fine.
This also makes it easier to apply AJAX to existing sites (for example overlays or something) because you can make requests to existing pages and then strip out the bits you don't want.
However, for updates where only a value is changing, you should perhaps use JSON instead.
Personally, I almost always choose to receive a JSON response with no markup or formatting applied, but that's just because I like having a really flexible, granular response so I can do whatever I want with the returned data, without having to possibly strip it out of HTML. This is NOT necessarily the easiest or most elegant solution in a lot of cases! :)

GWT reduce compiled javascript size

I have found that the size of the compiled JavaScript grows faster than I had expected. Adding a few lines of Java code to my project can increase the script size by several KB.
At the moment my compiled project weighs 1 MB. I'm not using any external libraries except for those for MVP (Activities & Places), testing (JUnit) and logging.
I would like to know if there are any coding practices/recommendations to keep the compiled script as small as possible. I'm not referring to code splitting, but to coding techniques or patterns that can make the compiled JavaScript effectively smaller.
Many thanks
GWT uses a "pay as you go" design philosophy, and since you're not allowed to use reflection the compiler can statically prove (on a method-by-method basis) that a section of code is "reachable", and eliminate those that are not. For example, if you never use the remove() method on ArrayList, then that code does not get included in the resulting JavaScript.
If you are seeing several kilobyte jumps with the addition of just a few lines, it probably means that you've introduced the use of a new type (and possibly one that depends on other new types) that you had not yet been using. It might also mean that you've made a change to send this new type "over the wire" back to the server, in which case a GWT generator had to include JavaScript for marshaling that type, and any new types that are reachable via its "has-a" and "is-a" references.
So if it were me, I would begin there: when you catch a 2-line change making a multi-kilobyte increase, start by looking at the types and asking whether it is a type that you have used before, and whether you're sending a new type over the wire, and whether that type also depends on other types under the hood.
One final thought: in Ray Ryan's 2009 presentation at Google I/O he mentioned a superstition that he had picked up from the GWT compiler team, where they recommended against using generic types (I'm not speaking of Java Generics here, but rather supertypes) as RPC arguments & return values. In particular, instead of having your RPC call take or return a Map, have it take or return a HashMap instead. The belief is that the GWT generator can then narrow the amount of serialization code that it has to create at compile time (because it could, for example, refrain from generating serialization code for a TreeMap).
I hope this helps.
GWT creates a different output version for each supported browser, so when you say the project size is 1 MB, are you referring to the combined size of these? (Each browser only downloads the one it actually needs.)
I have tried to experiment with the generated output when using various inheritance/class/generics constructs. Unfortunately the extra complexity introduced far outweighs the small size improvements gained (e.g. when dropping generics).
I have been on some large GWT projects (50,000+ lines) and have found that code obfuscation coupled with turning on compression on the web server is the simplest, most effective way to minimize the downloads. If this does not shrink the code enough, then look into GWT's compilation report, which you can use to pinpoint potentially problematic classes and places to insert code splitting.

Creating and parsing huge strings with javascript?

I have a simple piece of data that I'm storing on a server, as a plain string. It is kind of ridiculous, but it looks like this:
name|date|grade|description|name|date|grade|description|repeat for a long time
this string can be up to 1.4 MB in size. The idea is that it's a bunch of student records, just strung together with a simple pipe delimiter. It's a very poor serialization method.
Once this massive string is pushed to the client, it is split along the pipes into student records again, using javascript.
I've been timing how long it takes to create, and split, these strings on the client side. The times are actually quite good: the slowest run I've seen on a few different machines is 0.2 seconds for 10,000 'student records', which gives a final string size of ~1.4 MB.
I realize this is quite bizarre, just wondering if there are any inherent problems with creating and splitting such large strings using javascript? I don't know how different browsers implement their javascript engines. I've tried this on the 'major' browsers, but don't know how this would perform on earlier versions of each.
Yeah looking for any comments on this, this is more for fun than anything else!
Thanks
String splitting of 1.4 MB of data is not a problem for decent machines; instead you should worry about the internet connection speed of your users. I've tried to do spell checking with an 800 KB dictionary (about half of your data), and the main issue was loading time.
It also looks like your student records could be put in a database, and you might not need to load everything at load time. So how about paginating the records shown to the user, or using AJAX to search for certain user names?
If it's a really large string it may pay to repeatedly slice it with 'string'.slice(from, to) so you only process a smaller subset at a time, appending the individual items to the end of the output with list.push() or something similar.
String split methods are probably the most efficient way of doing this, though, even in IE. Processing individual characters using string.charAt(x) is extremely slow and will often trigger the browser's "script is taking too long" warning as it stalls the page. Using string split methods will certainly be much faster than splitting with regular expressions.
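For example (the field order is taken from the question; the chunking logic is just a sketch):
// Split the whole string once, then group every four fields into a record.
function parseRecords(raw) {
    var fields = raw.split('|');
    var records = [];
    for (var i = 0; i + 3 < fields.length; i += 4) {
        records.push({
            name: fields[i],
            date: fields[i + 1],
            grade: fields[i + 2],
            description: fields[i + 3]
        });
    }
    return records;
}
var records = parseRecords('Ann|2010-01-02|A|Good|Bob|2010-01-03|B|OK|');
// => two record objects; a single split() plus one loop, no charAt() or regex needed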
It may also be possible to encode the data as a JSON array; some newer browsers such as IE8/WebKit/FF3.5 have fast JSON parsing built in via JSON.parse(data). But using eval(JSON) may overflow the browser if there's enough data, so it is probably a bad idea. It may pay to compare performance, though.
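A hedged sketch of that route, assuming the server could emit the records as a JSON array instead of the pipe-delimited string:
// Prefer the native parser where it exists; for older browsers include a JSON
// library (e.g. json2.js) rather than falling back to eval().
function parseStudents(jsonText) {
    if (window.JSON && typeof JSON.parse === 'function') {
        return JSON.parse(jsonText);
    }
    throw new Error('No native JSON parser - include json2.js for older browsers');
}
var students = parseStudents('[{"name":"Ann","date":"2010-01-02","grade":"A","description":"Good"}]');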
A much better approach in a lot of cases is to use AJAX and only load some of the data at once from the server, which would also save download time.
Besides S. Mark's excellent comments about local vs. transfer speed and the tip to re-encode using AJAX, I suggest a (long-term) move away from JavaScript in the browser (assuming that's where it runs) to either a non-browser implementation of JS (or possibly another language).
Browser-based JS seems a weak link in a data-transfer chain and nothing I would want to run unmonitored, since browsers are upgraded from time to time and breaking your JS transfer might be an unanticipated side effect!
