Place javascript files in 12 Hive or in Document Library? - javascript

Besides the obvious benefits of placing custom JavaScript files (or any other resource files) in a document library, such as:
versioning, history, tracking
easy to change/edit
Are there any other benefits?
Performance? Page load time?
Are there any cons?
PS: This is not meant as a question about the number of files/resources as a general HTTP performance concern, but rather about this specific SharePoint question of file location:
http://site/_layouts/myjavascript.js
vs.
http://site/DocumentLibrary/myjavascript.js

If you store the JavaScript in a document library, then it is stored in the content database.
It means that:
It has version control
It is slower than the filesystem (unless you are using the BLOB cache)
It will be included in any backups you do of your SharePoint install (stsadm, for example)
It will be accessible (changeable) by anyone with access to the document library (easier to maintain, less secure)
Client-side caching will behave differently (you'll need to configure it; it's a bit more involved for MOSS content than for filesystem content)
We decided to store it in the 12 hive as it feels better with regard to code vs. data separation. If you consider this file to be data, then store it in MOSS; if you consider it to be "code", then store it in the filesystem.

Have you considered using Google to host JavaScript files (such as jQuery)?
This lets you benefit from their bandwidth when the files are downloaded:
faster page load times
higher availability
a good chance the JavaScript file is already cached on the user's machine
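For example, the reference might look like this, with a fallback to a locally hosted copy; the local path and the version number are just placeholders:

<script src="https://ajax.googleapis.com/ajax/libs/jquery/1.4.2/jquery.min.js"></script>
<script>
  // If the CDN is unreachable, fall back to a local copy (path is hypothetical).
  window.jQuery || document.write('<script src="/scripts/jquery.min.js"><\/script>');
</script>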

Document Library
Pros - Automatic delivery to all web front ends, versioning, history, ease of editing
Cons - Slower (it's in the database); security issues brought about by accidentally securing the item's site; if you are referencing the JS via an absolute URL, your users may get repeated login prompts
Placing the JS file in the 12 hive
Pros - Faster, no issues with the aforementioned security prompts
Cons - Not automagically delivered to all of your web front ends, possible AAM issues, technically you are not supposed to modify files in the 12 hive

Related

How to download and query html pages where JS processing is necessary?

I often compile informal datasets by running some kind of XPath/XQuery on publicly available web pages. Usually the structure of the HTML is regular enough that useful information can be extracted easily.
But today I've come across tunefind.com. This website makes extensive use of the ReactJS framework, so most of the structure of the page is built client-side by JavaScript. The pages, when initially downloaded, are very basic and missing a lot of information. They are populated by a script that uses a hopelessly messy blob of JSON data at the bottom of the page.
The only way I can think of to deal with this would be to use some kind of GUI-based web engine and just not display the GUI part. But that is a preposterous amount of work for these casual little CLI tools that I use to gather information.
Is there any way to perform the javascript preprocessing without dealing with unnecessary graphics?
Even if you could process the page without the graphics, the React JavaScript is geared towards running in a browser context: at the very least it expects a functioning DOM to exist, and the application itself may require clicks/transitions to happen before you can see some data.
Your best bet, then, is to load the page in a browser. To keep this simple, there are plenty of good browser automation frameworks designed for exactly this.
I've used a fair few libraries over the years, including PhantomJS, and recently I've gotten the most mileage out of Nightmare.js.
It runs an Electron browser for you and gives you a useful promisified JavaScript API to control it, with common browser actions such as clicking, following links, etc.
You can configure it to hide the browser, which is useful for making a CLI tool; however, it's a bit of a pseudo-headless mode and will still require a windowing/graphical context (e.g. an X server).
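A minimal sketch of the idea with Nightmare.js; the URL and the selector to wait for are placeholders you'd swap for whatever page and rendered element you actually care about:

const Nightmare = require('nightmare');
const nightmare = Nightmare({ show: false }); // hide the Electron window (a graphical context is still required)

nightmare
  .goto('https://www.tunefind.com/show/some-show')     // placeholder URL
  .wait('.rendered-content')                           // placeholder selector: wait until React has rendered
  .evaluate(() => document.documentElement.outerHTML)  // grab the DOM after client-side rendering
  .end()
  .then(html => process.stdout.write(html))            // pipe the hydrated HTML to your XPath/XQuery tooling
  .catch(err => console.error(err));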
Hope this helps.
PS - If you're at all used to docker it's not hard to make this just a running container!

Lightweight JS Library vs Google-hosted CDN

When page-load speed is the priority, is it better to use a minimal, lightweight JavaScript library (hosted on a CDN), or is it better to use something like jQuery, hosted on Google's CDN, which the browser more than likely already has cached?
Edit: What my question really boils down to is whether the cross-site caching effect of using jQuery hosted on Google's CDN outweighs the benefits of using an ultra-light library, also on a CDN.
jQuery is not heavy compared to other JavaScript libraries at present, considering the number of features and browsers it supports.
You should weigh this factor when selecting the plugins to be used on the page, because they are written by various authors; some write them intelligently with this in mind, and some just write them for the sake of it.
Yes, if you use a CDN like Google's for jQuery, it is very likely the library is already cached by the browser, and Google has a number of servers based on location, so you don't have to worry about it.
Decreased Latency
A CDN distributes your static content across servers in various, diverse physical locations. When a user’s browser resolves the URL for these files, their download will automatically target the closest available server in the network.
In the case of Google’s AJAX Libraries CDN, what this means is that any users not physically near your server will be able to download jQuery faster than if you force them to download it from your arbitrarily located server.
There are a handful of CDN services comparable to Google’s, but it’s hard to beat the price of free! This benefit alone could decide the issue, but there’s even more.
Increased parallelism
To avoid needlessly overloading servers, browsers limit the number of connections that can be made simultaneously. Depending on which browser, this limit may be as low as two connections per hostname.
Using the Google AJAX Libraries CDN eliminates one request to your site, allowing more of your local content to be downloaded in parallel. It doesn't make a gigantic difference for users with a browser that allows six concurrent connections, but for those still running a browser that only allows two, the difference is noticeable.
Better caching
Potentially the greatest benefit of using the Google AJAX Libraries CDN is that your users may not need to download jQuery at all.
No matter how well optimized your site is, if you’re hosting jQuery locally then your users must download it at least once. Each of your users probably already has dozens of identical copies of jQuery in their browser’s cache, but those copies of jQuery are ignored when they visit your site.
However, when a browser sees references to CDN-hosted copies of jQuery, it understands that all of those references refer to the exact same file. With all of these CDN references pointing to exactly the same URL, the browser can trust that those files truly are identical and won't waste time re-requesting the file if it's already cached. Thus, the browser is able to use a single copy that's cached on-disk, regardless of which site the CDN references appear on.
This creates a potent "cross-site caching" effect which all sites using the CDN benefit from. Since Google's CDN serves the file with headers that attempt to cache the file for up to one year, this effect truly has amazing potential. With many thousands of the most trafficked sites on the Internet already using the Google CDN to serve jQuery, it's quite possible that many of your users will never make a single HTTP request for jQuery when they visit sites using the CDN.
Even if someone visits hundreds of sites using the same Google-hosted version of jQuery, they will only need to download it once!
It's better to use the library that best suits the needs of your application and your development team. A super-lightweight library might save you a few hundred milliseconds of load time, but may end up costing you in development hours if your team has significantly more experience with jQuery/MooTools/Dojo etc.
If new feature implementation and bug fixing is hindered by using a second-rate tool solely to improve load times, your users are ultimately going to suffer.

Can local JavaScript edit/save files on same local machine? How using jQuery?

I'm building a little locally run CSS-driven site map for auditing a huge intranet site. I've already coded the ability to bring up a context menu which provides options to make updates to the DOM of index.html. I would like to save these changes to index.html.
I know JavaScript doesn't allow manipulation to the client file system, but I've also read in places that it is allowed if the JavaScript is retrieved from the local machine.
Can anyone confirm this and point me in the right direction on how this can be done WITHOUT setting up a local server?
It is possible to write to the local filesystem using the FileWriter object, as described here: http://www.html5rocks.com/en/tutorials/file/filesystem/
It is also possible to edit local files using the File System Access API.
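A minimal sketch of a save using the File System Access API (Chromium-based browsers only, and it must be triggered by a user gesture; the suggested file name and types are just examples):

async function saveIndexHtml(html) {
  // Prompt the user for where to save; the browser mediates all filesystem access.
  const handle = await window.showSaveFilePicker({
    suggestedName: 'index.html',
    types: [{ description: 'HTML', accept: { 'text/html': ['.html'] } }],
  });
  const writable = await handle.createWritable();
  await writable.write(html);
  await writable.close();
}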
I don't think this is possible even in IE's trusted zone, at least not with pure JavaScript (you may be able to with an ActiveX control, or maybe Flash or Silverlight with the right trust levels).
It's a slightly different subject, but there is a writeup on HTML 5 local storage which may help for background reading: http://diveintohtml5.ep.io/storage.html
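For background, a quick sketch of what localStorage can do: it persists strings per origin (for example, a draft of the edited DOM), but it cannot write index.html back to disk. The key name here is arbitrary:

// Keep a draft of the edited page across reloads.
localStorage.setItem('siteMapDraft', document.documentElement.outerHTML);
var draft = localStorage.getItem('siteMapDraft');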

Why do web applications send HTML over the wire?

This question pertains to web applications. I have very little web app development experience, so might be missing some very obvious points/issues. Please point them out.
As I understand it, in most web applications a web server sends HTML over the wire to a client (browser). This happens every time an HTTP request is made. I feel this is very wasteful of bandwidth.
1) Since browsers can run JavaScript, why don't we just send a JavaScript program which can generate the webpage's HTML content (which the browser then renders)?
2) Further, a browser might cache the JavaScript program, and next time the server need only send the data. The protocol might involve the browser sending the "program version" it has.
Consider an example of a relatively simple website Hacker News [http://news.ycombinator.com]. Let us separate the data (30 posts + their metadata) from its presentation. Assuming 1) above, the server can just send the data (say in JSON) + a JavaScript program to generate HTML. This gist shows the idea. The data for the 30 posts is in JSON [http://www.json.org/js.html] format. For this particular example the data transferred is cut in 1/2 (size of data+JavaScript / size of HTML). Further if browsers can do 2) above, it reduces the data transferred on each visit to 1/4 (size of data / size of HTML). [Note: this analysis is without considering compression; gzip,deflate is very successful in reducing the size of HTML. But isn't prevention better than cure?]
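A rough sketch of the idea (the data shape, element id, and rendering function are made up for illustration; a real version would fetch the JSON from the server):

// Data the server sends instead of HTML (hypothetical shape).
var posts = [
  { title: 'Example post', url: 'http://example.com/', points: 42, comments: 7 }
];

// The cached client-side "program" that turns data into markup.
function renderPosts(posts) {
  return '<ol>' + posts.map(function (p) {
    return '<li><a href="' + p.url + '">' + p.title + '</a> (' +
           p.points + ' points, ' + p.comments + ' comments)</li>';
  }).join('') + '</ol>';
}

document.getElementById('posts').innerHTML = renderPosts(posts);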
I see at least the following advantages of this:
* For most web pages, it will reduce the size of data transferred over the wire.
* Forces web apps to separate data from its presentation.
Disadvantages might include - more complex browsers, time to run the JavaScript program to generate HTML (this might get offset by the reduction in data size).
Now my question is - why are web applications not developed this way, or, why do web applications send HTML over the wire? Surely the web server (sending out HTML) doesn't care about HTML at all, so why should it, first, generate it, and then send it over the wire?
There are a few reasons, some of them historical. This is by no means a complete list, just some of my experiences:
HTML predates JS, and a lot of scripts and libraries predate JS
Older browsers (think IE<=6) had rubbish, inconsistent JS engines; their rendering engines were much more consistent in how they treated HTML. So many more libraries and scripts predate consistent JS.
It is a nightmare to debug applications written as you suggest if they are not constructed right (we have one at my work; it takes 30 minutes to find where a piece of HTML is actually generated)
It is a lot more work to do it right - why not use templates or static docs or something much simpler
It's not really a problem - HTML compresses really well
What you suggest is done - it's called AJAX (OK, so AJAX is more general than this, but you know what I mean)
It simply doesn't work for most plain-text user agents, including those used by most search engines. If this page is serving most of your content, it's generally a good idea to make it easy for Google to parse
Well, the obvious reason this is the case is that JavaScript wasn't around when we started sending HTML around, and HTML was an improvement over sending around plain-text documents.
The reason we don't do this now: we eschew complex solutions to problems that aren't really problems.
Average internet connections download nearly 1M bytes per second, and web browsers are quite adept at parsing and starting to render HTML before it has even all been received. They're also great at parallelizing the download of resources on the page. If we want to save a few bytes at the cost of some compute cycles, we gzip content before sending it. Problem solved.
And for the record, we do this with AJAX in complex web pages (check out GitHub's source browsing for a great example of how awesome this can be).
What you suggest can, and is, done. Remember, web pages used to be static documents. Full blown web-based applications are a relatively recent idea.
I might also suggest that it isn't necessarily more efficient, especially when your pages are sent gzipped.
What you suggest is basically what a JavaScript full stack framework like ExtJS does. You can create rich, data intensive applications without writing any HTML -- well, only enough to reference the necessary .js libraries. The complex DOM needed for layouts, grids, forms etc is all created by the framework.
The simple answer is that HTML is older. Why is C99 not fully implemented with a lot of compilers? They figure 1989 is new enough for them. Also, JavaScript exercises a lot more control over people's browsers than they seem to want. Conditional statements and encoded data pose a security concern, and some people want to keep that can of worms closed to begin with. True, HTML is a very inefficient markup, but the size is insignificant compared to the images you download from the internet. That favicon takes up as much data as the page itself, and it's only 16 pixels across.
A good reason that the server-side code of a web application might do lots of HTML template work on the server side is that in many server environments it's not made easy to bundle up server-side data structures (object graphs) for easy delivery to the client. There may be information kept in server-side data structures that really shouldn't be delivered out to the client. Thus in order to send out a "pure" data-only response, the server would have to trim off sensitive data before delivering out the JSON. That's not an unsolvable problem, but I don't know of many server frameworks that facilitate a solution.
The server has direct, unfettered access to the database and to everything else that makes an application work: user preferences, history, account details, system settings, etc. To build an application that's client-centric for rendering purposes would mean concocting ways of keeping all that information intact and up-to-date on the client. For a lot of applications, that might not be terribly easy.
Finally, it's only relatively recently that it would make sense to trust a browser to provide a stable enough platform for building a long-lived "application environment" as a continually-updating web page. By building a web app such that pages are sometimes completely reloaded, there are lots of little "reboots". That's a cheap and dumb way of keeping a lid on at least some kinds of memory leaks.
Most implementations of sites with heavy JavaScript use won't start executing until the DOM has fully loaded; then you get 'loading screens' on every page when the page wrapper has downloaded but none of the content has.
Also, remember that not all users have JavaScript enabled, and not all browsers support high-level JavaScript (think mobiles).
I would send HTML in a response if I wanted my application to work without Javascript. I would write HTML rendering code in my server-side language (most of the time not Javascript), which could then be used for two purposes: serving whole HTML pages, and serving bits of HTML in response to XHRs.
If the Javascript code is restricted to things like reporting UI events and replacing innerHTML with server-generated code, I don't have to duplicate any of my application logic across languages/frameworks. This duplication problem is one of the reasons why server-side Javascript is getting people excited.
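A minimal sketch of that pattern; the URL, query parameter, and element id are placeholders, and the server is assumed to return a ready-made HTML fragment produced by the same templates that render full pages:

var xhr = new XMLHttpRequest();
xhr.open('GET', '/posts?fragment=1', true);
xhr.onload = function () {
  if (xhr.status === 200) {
    document.getElementById('posts').innerHTML = xhr.responseText; // swap in the server-generated HTML
  }
};
xhr.send();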

Accessing contents of a file in a web-application without uploading

As far as I can tell, it is impossible to access the content of files on the user's computer in a web application without first uploading them to the server and then re-downloading them to the user, unless some sort of plug-in is used (Flash, etc.). Ideally, the user would load the file directly into local storage, and then scripts would have a chance to process/display/validate/filter it without the user having to wait on an upload.
Are there any features in upcoming web standards such as html5 that will allow this? If not, why has there been no effort to make this possible, and how can I work around it without getting stuck with plugins?
EDIT: DO NOT assume that I want to let JavaScript access arbitrary files on the hard drive without any user intervention. We already have the ability to prompt the user for a file and upload it, I only want the ability to prompt the user for a file to be loaded into the browser's memory. I was only hoping HTML5 would have support for something you can already do with both Flash and Java applets.
Doesn't the File API (http://www.w3.org/TR/FileAPI/) do that?
It's implemented in Firefox 3.6 (see https://developer.mozilla.org/en/DOM/FileReader and https://developer.mozilla.org/en/Using_files_from_web_applications )
According to http://code.google.com/events/io/2010/sessions/html5-status-chrome.html it is supported in Chrome.
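A minimal sketch with the File API / FileReader; the element id is a placeholder, and the file is read entirely in the browser without being uploaded:

<input type="file" id="picker">
<script>
  document.getElementById('picker').addEventListener('change', function (e) {
    var reader = new FileReader();
    reader.onload = function () {
      console.log(reader.result); // the file's contents, available to client-side scripts
    };
    reader.readAsText(e.target.files[0]);
  });
</script>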
What you can do in HTML 5 (or 6, 7, ...) depends on what a diverse group of vendors with competing agendas think the new HTML version should or should not do... it is designed by committee.
Giving a web page that you create permission to access resources (e.g. files) on your computer creates a very large security hole (would you like my web page to read your emails and home banking files?)
It's very unlikely that a committee will agree to standardize on a feature that creates a security risk, given that only one browser on one device/platform needs to poorly implement that standard to open Pandora's Box to hackers.
Individual vendors (the people that make plugins) don't have to get a bunch of other companies to agree on a feature. They just implement it, and users get to decide if they trust it enough to install it. Microsoft's first attempt at this was a major security disaster.
Like Raul and Eric pointed out, there is a significant trust issue involved, and requiring people to give code they don't know access to their hard drives will not make your site popular.
You are probably stuck with choosing between plugins or browser specific features/addons for a long time.
That said, you can do cool things by just making the best of this situation. One approach I've used several times is to have an invisible plugin (Applet in my case) present on a web page, but control it entirely via JavaScript, giving the web app a very "natural" look and feel.
Another approach is progressive enhancement of some sort - providing an enhanced experience for users who have the required plugin installed and opt to use it. I've experimented with this on sites such as http://www.pdfcombine.com - users who don't have the Java plugin installed get to merge PDF files by uploading them to a server and downloading the merged file, whereas users with the Java plugin are given the option to do it all locally with the Applet.
