How to prevent an HTTP request just for a favicon? - javascript

Everybody knows how to set up a favicon.ico link in their HTML:
<link rel="shortcut icon" href="http://hi.org/icon.ico" type="image/x-icon">
But it's silly that for a tiny, few-hundred-byte icon we need yet another potentially speed-penalizing HTTP request.
So I wondered, how could I make that favicon part of a reusable sprite (e.g., background-position: 0 -200px;) that doubles as, say, a logo on the rest of the website, in order to speed up the site and save that precious HTTP request? How can we get the favicon into an existing sprite image along with our logo and other artwork?

I think that for the most part this does not result in an extra HTTP request on every page view, as the favicon is usually dumped in the browser's cache after the first access.
This is actually more efficient than any of the proposed "solutions".

A minor improvement to yc's answer is injecting the Base64-encoded favicon from a JavaScript file that would normally be loaded and cached anyway, and also suppressing the standard browser behavior of requesting favicon.ico by feeding it a data URI in the relevant link tag.
This technique avoids the extra HTTP request and is confirmed to work in recent versions of Chrome, Firefox and Opera on Windows 7. It doesn't appear to work in Internet Explorer 9, at least.
File index.html
<!doctype html>
<html lang="en">
<head>
<meta charset="utf-8">
<!-- Suppress browser request for favicon.ico -->
<link rel="shortcut icon"type="image/x-icon" href="data:image/x-icon;,">
<script src="script.js"></script>
...
File script.js
var favIcon = "\
iVBORw0KGgoAAAANSUhEUgAAABAAAAAQCAYAAAAf8/9hAAABrUlEQVR42mNkwAOepOgxMTD9mwhk\
[...truncated for brevity...]
IALgNIBUQBUDAFi2whGNUZ3eAAAAAElFTkSuQmCC";
// Inject a <link> pointing at the Base64 favicon carried inside this cached script.
var docHead = document.getElementsByTagName('head')[0];
var newLink = document.createElement('link');
newLink.rel = 'shortcut icon';
newLink.href = 'data:image/png;base64,' + favIcon;
docHead.appendChild(newLink);
/* Other JavaScript code would normally be in here too. */
Demo: turi.co/up/favicon.html

You could try a data URI. No HTTP request!
<link id="favicon" rel="shortcut icon" type="image/png" href="data:image/png;base64,....==">
Unless your pages themselves are statically cached, the favicon can't be cached either, and depending on the size of the icon, your source code could get somewhat bloated as a result.
Data URI favicons seem to work in most modern browsers; I have them working in recent versions of Chrome, Firefox and Safari on a Mac. They don't seem to work in Internet Explorer, and possibly some versions of Opera.
If you're worried about old Internet Explorer versions (and you probably shouldn't be these days), you could include an Internet Explorer conditional comment that would load the actual favicon.ico in the traditional way, since it seems that older Internet Explorer doesn't support data URI favicons.
<!--[if IE]><link rel="shortcut icon" href="http://example.com/favicon.ico" type="image/x-icon" /><![endif]-->
Include the favicon.ico file in your root directory to cover browsers that will request it either way; if they're going to check no matter what you do, you might as well not waste the HTTP request on a 404 response.
You could also just use the favicon of another popular site which is likely to have their favicon cached, like http://google.com/favicon.ico, so that it is served from cache.
As commenters have pointed out, just because you can do this doesn't mean you should, since some browsers will request favicon.ico regardless of the tricks we devise. The amount of overhead you'd save by doing this would be minuscule compared to the savings you'd get from doing things like gzipping, using far-future expires headers for static content, minifying JavaScript files, putting background images into sprites or data URIs, serving static files off of a CDN, etc.
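For a sense of scale, enabling gzip compression in a Node/Express app (a sketch using the widely used compression middleware; the server setup here is illustrative, not from the answer above) is essentially a one-liner, and typically saves far more bytes than a favicon request ever could:
const express = require('express');
const compression = require('compression');

const app = express();
// Compress every compressible response (HTML, CSS, JS, JSON, ...).
app.use(compression());
app.get('/', (req, res) => res.send('<html>...</html>'));
app.listen(8080);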

Killer Solution in 2020
This solution necessarily comes nine years after the question was originally asked, because, until fairly recently, most browsers have not been able to handle favicons in .svg format.
That's not the case anymore.
See: https://caniuse.com/#feat=link-icon-svg
1) Choose SVG as the Favicon format
Right now, in June 2020, these browsers can handle SVG Favicons:
Chrome
Firefox
Edge
Opera
Chrome for Android
KaiOS Browser
Note that these browsers still can't:
Safari
iOS Safari
Firefox for Android
Nevertheless, with the above in mind, we can now use SVG Favicons with a reasonable degree of confidence.
2) Present the SVG as a Data URL
The main objective here is to avoid HTTP Requests.
As other solutions on this page have mentioned, a pretty smart way to do this is to use a Data URL rather than an HTTP URL.
SVGs (especially small SVGs) lend themselves perfectly to Data URLs: a Data URL is simply plaintext (with any potentially ambiguous characters percent-encoded), and an SVG, being XML, can be written out as one long line of plaintext (with a smattering of percent codes) incredibly straightforwardly.
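As an illustration of that encoding step (a sketch of my own, not part of the original answer), you can also build such a Data URL at runtime with encodeURIComponent, which percent-encodes more characters than strictly necessary but is always safe:
// Build an SVG favicon data URL on the fly and attach it to the page.
const svg = "<svg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 16 16'><text x='0' y='14'>🦄</text></svg>";
const link = document.createElement('link');
link.rel = 'icon';
link.type = 'image/svg+xml';
link.href = 'data:image/svg+xml,' + encodeURIComponent(svg);
document.head.appendChild(link);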
3) The entire SVG is a single Emoji
N.B. This step is optional. Your SVG can be a single emoji, but it can just as easily be a more complex SVG.
In December 2019, Leandro Linares was one of the first to realise that since Chrome had joined Firefox in supporting SVG Favicons, it was worth experimenting to see if a favicon could be created out of an emoji:
https://lean8086.com/articles/using-an-emoji-as-favicon-with-svg/
Linares' hunch was right.
Several months later (March 2020), Code Pirate Lea Verou realised the same thing:
https://twitter.com/leaverou/status/1241619866475474946
And favicons were never the same again.
4) Implementing the solution yourself:
Here's a simple SVG:
<svg
xmlns="http://www.w3.org/2000/svg"
viewBox="0 0 16 16">
<text x="0" y="14">🦄</text>
</svg>
And here's the same SVG as a Data URL:
data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2016%2016'%3E%3Ctext%20x='0'%20y='14'%3E🦄%3C/text%3E%3C/svg%3E
And, finally, here's that Data URL as a Favicon:
<link rel="icon" href="data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2016%2016'%3E%3Ctext%20x='0'%20y='14'%3E🦄%3C/text%3E%3C/svg%3E" type="image/svg+xml" />
5) More tricks (...these are not your parents' favicons!)
Since the Favicon is an SVG, any number of filter effects (both SVG and CSS) can be applied to it.
For instance, alongside the White Unicorn Favicon above, we can easily make a Black Unicorn Favicon by applying the filter:
style="filter: invert(100%);"
Black Unicorn Favicon:
<link rel="icon" href="data:image/svg+xml,%3Csvg%20xmlns='http://www.w3.org/2000/svg'%20viewBox='0%200%2016%2016'%3E%3Ctext%20x='0'%20y='14'%20style='filter:%20invert(100%);'%3E🦄%3C/text%3E%3C/svg%3E" type="image/svg+xml" />

You could use a Base64-encoded favicon, like:
<link href="data:image/x-icon;base64,iVBORw0KGgoAAAANSUhEUgAAABAAAAAQEAYAAABPYyMiAAAABmJLR0T///////8JWPfcAAAACXBIWXMAAABIAAAASABGyWs+AAACbUlEQVRIx7WUsU/qUBTGv96WSlWeEBZijJggxrREdwYixMnByYEyOvgfsBAMG0xuDsZ/QGc3NDFhgTioiYsmkhBYGLSBkLYR0va8gSjvQXiIT7/l5ibfOd/v3pN7gSmVSMTj8ThRfzdYk8lkMpl83/+AVFVVVXU0eHiVJEmSpB8DIcpkMplsdhCYz+fzhQJROBwOh8PDQN+oQCAQCASIRFEURZHI45GkP0/e7Xa73e70AMJnjel0Op1OA6oaDB4eAkAw6PcDvZ5t6zrw/Hx2trAw/cHYZ426ruu6DtzcGEYuBzQa19etFvD4WKtls4AoRqMPDwBjjLGPrt84ilgsFovF6EOapmmaRiP6O/jbAIguL4vFYpHGqlKpVCoVomq1Wq1Wibxer9fn+w+Q9+cUiUQikQhNrfdgWZZlWf4yyGhj27Zt254MUK/X6/X6F0aiKIqiKIOCYRmGYRjGZADLsizLIgqFQqHV1SkAnp5OTn79ItK0qyuPZ7SxaZqmaU4GKJfPzxmbfAPc/f3pqaIQLS8vLtZqgOP0bYyJoiAARC5Xrwf4/Vtbb2+Th1YqlUqlErC01GgkEkCz2WxyHLC+LsuiCAiCJLlcgM+3vd3pcBzXaJTLR0dEs7Ptdv+D4TiOG/A6DsBxQKvV621sAGtru7vl8ngAjuvXv7xcXIgiwNjMjCj2h+k4fQfPA4LA8xwHCO323V2hABiG223bwPy8xwMAbvfcHGMAY32j47y+3t4OAsZpZ2dzEwAsy7IcBxAExhwHMIxOx3GAlZVUyjT/1WFIudzenstFlEpFo9M8o+Pj/X2eJzo4SCR4fnzdb2N4Pyv9cduVAAAAAElFTkSuQmCC" rel="icon" type="image/x-icon" />

I found an interesting solution on this page. It is in German, but you will be able to understand the code.
You put the Base64 data of the icon into an external style sheet, so it will be cached. In the head of your page you define the favicon link with an id, and the favicon is set as a background-image for that id in the style sheet.
link#icon {
    background-image: url("data:image/x-icon;base64,<base64_image_data>");
}
and the html
<html>
<head>
<link id="icon" rel="shortcut icon" type="image/x-icon" />
<link rel="stylesheet" type="text/css" href="/styles.css" />
...
</head>
<body>
...
</body>
</html>

Good point and nice idea, but impossible. A favicon needs to be a single, separate resource. There is no way to combine it with another image file.

Does it really matter?
Many browsers load the favicon at low priority so that it doesn't block the page load in any way. So yes, it's an extra request, but it's not on any critical path.
A JavaScript solution is horrible because, until the JavaScript code has been retrieved and executed, all the DOM elements below it are blocked from rendering, and it doesn't even reduce the number of requests!

The proper solution is to use HTTP pipelining.
HTTP pipelining is a technique in which multiple HTTP requests are written out to a single socket without waiting for the corresponding responses. Pipelining is only supported in HTTP/1.1, not in 1.0.
It's required that servers support it, but not necessarily participate.
HTTP pipelining requires both the client and the server to support it. HTTP/1.1 conforming servers are required to support pipelining. This does not mean that servers are required to pipeline responses, but that they are required not to fail if a client chooses to pipeline requests.
Many browser clients don't pipeline, even though they should.
HTTP pipelining is disabled in most browsers.
Opera has pipelining enabled by default. It uses heuristics to control the level of pipelining employed depending on the connected server.
Internet Explorer 8 does not pipeline requests, due to concerns regarding buggy proxies and head-of-line blocking.
Mozilla browsers (such as Mozilla Firefox, SeaMonkey and Camino), support pipelining however it is disabled by default. It uses some heuristics, especially to turn pipelining off for IIS servers.
Konqueror 2.0 supports pipelining, but it's disabled by default.
Google Chrome does not support pipelining.
I would recommend you try enabling pipelining in Firefox and try it there, or just use Opera (shudder).

This is not really an answer to the question, but simply a complement to the answers given by Marcel and yahelc. I offer an elegant solution to the 404 favicon issue.
Some applications and browsers check for a favicon.ico file regardless; if the icon is not found in the site root, you can simply answer the request with a 204 No Content response.
Apache Examples:
Apache option one (and my favorite), a simple one-liner in your .htaccess or .conf:
Redirect 204 /favicon.ico
Apache option two:
<Files "favicon.ico">
ErrorDocument 204 ""
</Files>
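If you're not on Apache, the same idea ported to a Node/Express server might look like this (a sketch under that assumption, not from the original answer):
const express = require('express');
const app = express();
// Answer favicon requests with an empty 204 No Content before any other routes run.
app.get('/favicon.ico', (req, res) => res.status(204).end());
app.listen(8080);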
For further reading there is a nice blog post by Stoyan Stefanov.

It's a great idea, but if Google hasn't done it on their homepage, I'm betting it can't (currently) be done.

I'm sorry, but you can't combine the favicon with another resource.
This means you have basically two options:
If you're comfortable with your site not having a favicon - you can just have the href point to a non-icon resource that is already being loaded (e.g., a style sheet, script file, or even some resource that benefits from being pre-fetched).
(My brief testing indicates that this works across most, if not all, major browsers.)
Accept the extra HTTP request and just make sure your favicon file has aggressive HTTP cache-control headers set.
(If you have other websites under your control, you might even have them sneakily preload the favicon for this website - along with other static resources.)
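To make option 2 concrete, here's roughly what "aggressive" cache-control might look like when serving the favicon from Express (the header values and file path are illustrative assumptions, not from the original answer):
const express = require('express');
const path = require('path');
const app = express();
app.get('/favicon.ico', (req, res) => {
  // Cache for a year and mark immutable so browsers stop revalidating too.
  res.set('Cache-Control', 'public, max-age=31536000, immutable');
  res.sendFile(path.join(__dirname, 'favicon.ico'));
});
app.listen(8080);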
P.S. Creative solutions that will not work:
The weird CSS data URI trick (linked to by commenter Felix Geenen) does not work.
Using JavaScript to perform a delayed injection of the favicon <link> element (as suggested by user yc) will likely just make things worse - by resulting in two HTTP requests.

You can use an 8-bit PNG image instead of the ICO format for an even smaller data footprint. The only thing you have to change is the MIME type: use "data:image/png" instead of "data:image/x-icon":
<link
href="data:image/png;base64,your-base64-encoded-string-goes-here"
rel="icon" type="image/png"
/>
"type" attribute can be "image/png" or "image/x-icon". Both work for me.
You can convert ICO to 8-bit PNG using GIMP or convert:
convert favicon.ico -depth 8 -strip favicon.png
And encode the PNG binary as a Base64 string using the base64 command:
base64 favicon.png

Here's the easiest way:
<!DOCTYPE html><html><head>
<link rel="shortcut icon" href="data:image/png;base64,
iVBORw0KGgoAAAANSUhEUgAAACAAAAAgCAYAAABzenr0AAAAAXNSR0IArs4c6QAAAA
lwSFlzAAALEwAACxMBAJqcGAAAAVlpVFh0WE1MOmNvbS5hZG9iZS54bXAAAAAAADx4
OnhtcG1ldGEgeG1sbnM6eD0iYWRvYmU6bnM6bWV0YS8iIHg6eG1wdGs9IlhNUCBDb3
JlIDUuNC4wIj4KICAgPHJkZjpSREYgeG1sbnM6cmRmPSJodHRwOi8vd3d3LnczLm9y
Zy8xOTk5LzAyLzIyLXJkZi1zeW50YXgtbnMjIj4KICAgICAgPHJkZjpEZXNjcmlwdG
lvbiByZGY6YWJvdXQ9IiIKICAgICAgICAgICAgeG1sbnM6dGlmZj0iaHR0cDovL25z
LmFkb2JlLmNvbS90aWZmLzEuMC8iPgogICAgICAgICA8dGlmZjpPcmllbnRhdGlvbj
4xPC90aWZmOk9yaWVudGF0aW9uPgogICAgICA8L3JkZjpEZXNjcmlwdGlvbj4KICAg
PC9yZGY6UkRGPgo8L3g6eG1wbWV0YT4KTMInWQAABLJJREFUWAnFV+trW2UY/yVN0j
S32q6bnWunQ1e11k3YXMUy8AbDoSBDv4mfFPTj/gD9ug/7B/woqCCISnGItcyppbgN
Rhn2st5G2yxt1svSJe1yPYnP703fcs7JSZo4YS9Ncs57eX6/5/Y+T12fX7hYdAHy92
iG+1GCU2X3/6V3k9sNl8uFUqnUkMiHJkDfuQV4M7WFbC4Hr8fTEAlPQ3RtmwleFI0z
mQxO95/EQP8pLEaj+PaHQbS3RmAUi7YTla8PRYDGLhgGPnz/HI719Srp0VgMhszRHf
WM/+wC+jy1tY2zb72uwOl7WmPi1gwCfr887609CdZtAfpZa0XNc/k8nug8gBPHjylF
uTY7N4/JmVmEQkFlBbWwx9eeBNyiKbVLZ7IKlEAtomEmm0Vvz1G0tLQoiKXoHXzz/Y
9qX75QgM/jhb/ZB5KtlRk1CdDM2+k0SKLn6SM4KBoXRHhsOY6J2XmsbWxgdv42Jqdn
cH3sJo4c7sKhg50oFku4s7yMuYUleJqahEizzDm7pCoBgt8XH7/Q8wzOvPmamLtz1w
UMvPjdu1hYjIrPp9He9hjOf/oJ9rW37e5hBiwuRXFpaBjR5RWEgkFHEq4vLlysuDkI
nhTwV068hPfeOau0oJ21KXUsKNvbvux7suKq734axNT0nLhOLCHuNI8KC1B4Wg7RnO
++fUaB03x0gxlYA1EYZcoxta73cJ3PD8SFuWxOnbdCl2k4EuCN9sbpATT7fMpsBLcP
DcR5gpuHBk8mU/jyq6+xcW8TYckMpziwSKagvKTX4x0deOpwt5JpBjKD1PM8eu064q
vraI2EHcEpw0pAqnK+YODA/n3wS6pxNEpAa58TK05IdoQl+AyjoGQ5fVkI0JbFoqEi
tlFgu/C01IeM3B2UY4s7y1YrAbXUeEm1SNx58Un8sDJKiNbsdiwEaD6m4JakoFPAOA
HZ57TleFs+2d2lMsotl1G1YSHATR5PEzYSCXXt8p2kGh36zED/y+o8qyNridOwEOBB
j5htI7GJ1bU1p/11zZX9XkJ31yF89MG5nTpScAxoCwFKZ86z8NxeWKwLrNom7YrjL/
bh/GcfY39Hu2RYJYkKAvR9KBjAjZv/CJHMThQ37gYS066IhMNgWmpSZtIVBHio2euV
AhLH+OSU2qsFmQ828jx69Rpi8VX4vJX9YgUBCmbBCIsVhv8cQTKVKt/jDsFIYvpjJ0
RLUuPYygqujF5FG3tEo7IkOxKgUObwvcR9DF2+omRTmE5NDco5/dFzSoGd4sWaMvjL
kCJZJQmsV7FZC9bziBSQv2+M4Y+RUbWki5IGZXPC1oy/eo4buY9W/PnX3zC/uFTuEa
VJcRoV1dC8iSRawyFcGv4dqe1tvHrqpFS1EFbX1zE+dQvT0hVlpNSyzvc+exR9zz8n
zUmbVL8ELv81grHxSbTK/lrtuWNDYibBZ14iSSEQlP6PGbIpZTadzkg/6Fdr1PaBvI
cCLYgIYa7TKsFAYNdtdpn6vaYF9CYCREQTxkBS/gPySZb42SvIvDhYNQRsQBlkal3i
R/cSWka137oI8LAOQF7VDDiDwHrw3Si/l9eFl5CtZzhmQa2DZlynfXut28/8C/JOMz
7+5SRKAAAAAElFTkSuQmCC">
</head></html>

Related

Does the Web App Manifest include order matter?

I am trying to add PWA functionality to a simple site, and I am running into odd issues with Chrome version 68.0.3440.106.
I am running on http://localhost:4502 for my testing, in the context of a CMS (so I am working within a few constraints). Also, note that I removed all my PWA/Service Worker code so there is no caching at play.
In the <head> .. I include the manifest via:
<link rel="manifest" href="/content/foo/bar.pwa.manifest.webmanifest" type="application/manifest+json">
(The "odd" name of the manifest file is a constraint of the CMS i'm working in; i've also used the "json" extension to the same effect as described below).
Which contains...
{"name":"Sample",
"short_name":"sample",
"theme_color":"#001F3F",
"background_color":"#FF4136",
"display":"standalone",
"scope":".",
"start_url":"/content/foo/bar.html",
"icons":[{"src":"/content/assets/sample.jpg","size":"192x192","type":"image/jpeg"},{"src":"/content/assets/sample.jpg","size":"512x512","type":"image/jpeg"},{"src":"/content/assets/sample.jpg","size":"144x144","type":"image/jpeg"}]}
I've observed the following:
1) When I have JS/CSS includes ABOVE my <link rel="manifest" ..>, things go haywire:
* If I copy the page's URL, paste it into a NEW Chrome tab (a refresh of an existing tab does not cut it), and open up Chrome Dev Tools FOR that new tab, I can see the HTTP request/response for the manifest, and Dev Tools > Application > Manifest appears to display everything correctly. (So far so good.)
* If I then REFRESH this tab, I don't see the request for the manifest and the Application > Manifest section is blank. The HTML source has not changed at all.
* No matter how many times I refresh, it doesn't seem to help (I've also cleared the cache, etc.).
2) When I move the <link rel="manifest" ..> to be ABOVE any CSS/JS in the <head>, it seems to load consistently and correctly. Refreshing the page, the HTTP request for the manifest displays, and Application > Manifest displays the correct info.
I've been scouring docs (Google, MDN) to see if the manifest MUST be included before ANY other includes, but I can't find anything that says that - and it's odd that on the first load of a tab, it seems to work even when included after CSS/JS.
I'm hoping there is a way to get this to load regardless of where in the <head> the <link rel="manifest" ..> happens to be (as the CMS makes it hard to guarantee the order).
No, the relative position of the <link rel="manifest"> within your page's <head> does not make a difference from the perspective of how it's parsed and interpreted.
I'm hard pressed to explain the behavior you're seeing, other than perhaps a syntax error in the HTML that includes your JavaScript and CSS, causing the subsequent <link rel="manifest"> to be parsed incorrectly. But in general, the position shouldn't matter.

Cache is not cleared in Google Chrome

When I deploy a new version, I add a number as a query string to the JavaScript and CSS file references, like the following:
'app/source/scripts/project.js?burst=32472938'
I am using the above to burst the cache in the browser.
In Firefox, I get the latest script that I have modified.
But in Chrome, I am not getting the latest script that I have modified; I get the old one instead.
Yet in the developer console, I can see the latest burst number in the modified reference.
According to the Google documentation, the best way to invalidate and reload the file is to add a version number to the file name itself, not as a query parameter:
'app/source/scripts/project.32472938.js'
Here is a link to the documentation:
https://developers.google.com/web/fundamentals/performance/optimizing-content-efficiency/http-caching#invalidating_and_updating_cached_responses
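As a rough illustration of the filename-versioning idea (a sketch; the paths and the 8-character hash length are my own choices, not from the documentation), a build step might compute a content hash and rename the file:
const crypto = require('crypto');
const fs = require('fs');

const src = 'app/source/scripts/project.js';
// Derive a short content hash, so the name only changes when the content does.
const hash = crypto.createHash('md5').update(fs.readFileSync(src)).digest('hex').slice(0, 8);
const versioned = src.replace(/\.js$/, '.' + hash + '.js');
fs.copyFileSync(src, versioned);
console.log('Reference this in your HTML:', versioned); // e.g. project.3f2a9c1b.js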
Another way is to use an ETag (validation token):
https://developers.google.com/web/fundamentals/performance/optimizing-content-efficiency/http-caching#validating_cached_responses_with_etags
Here is how you would set up an ETag with Nginx:
http://nginx.org/en/docs/http/ngx_http_core_module.html#etag
And lastly, a tutorial about browser caching with Nginx and ETag:
https://www.digitalocean.com/community/tutorials/how-to-implement-browser-caching-with-nginx-s-header-module-on-centos-7#step-2-%14-checking-the-default-behavior
I'm uncertain of whether this still applies these days, but there were some cases in the past where proxies could cause a query-string value to be ignored for caching purposes. There's an article from 2008 that discussed the idea that query-string values weren't ideal for the purpose of breaking caching, and that it was better to revise the filename itself -- so, referencing project_32472938.js instead of using the query-string.
(I've also seen, in places, some discussion of unusual cases where certain clients were not seeing these updates, but it seemed to be inconsistent -- not tied to Chrome, necessarily, but more likely tied to a specific installation of Chrome on a specific machine. I'd certainly recommend checking the site on another computer to see if the issue is repeated there, as you could at least narrow down to whether it's Chrome in general, or your specific install of Chrome that is having problems.)
All that said, it's been quite a while since 2008, and that may not be applicable these days. However, if it continues to be a problem -- and you can't find a solution to the underlying problem -- it at least offers a method use to circumvent it.
I don't think that Chrome is actually causing the problem, because that would break almost all web applications (e.g. https://www.google.com/search?q=needle).
It could be that your deployment was a bit delayed, e.g.:
1. The install of the new scripts starts
2. You check with Chrome (it receives the old version under the new ID)
3. The install finishes
4. You try with Firefox (it receives the new version)
5. Chrome still shows the old version, because it cached the old script under the new ID
Or you have a CDN like Azure between your web server and your browser. With standard settings, Azure CDN ignores the query string when computing the caching hash.
Try these meta tags:
<meta http-equiv="cache-control" content="max-age=0" />
<meta http-equiv="cache-control" content="no-cache" />
<meta http-equiv="expires" content="0" />
<meta http-equiv="expires" content="Tue, 01 Jan 1980 1:00:00 GMT" />
<meta http-equiv="pragma" content="no-cache" />
I'm not sure they will help, but it's worth a try; Google Chrome tends to ignore them.
Alternatively, add a '?random.number' or '?date.code' to every link each time a URL is requested on your website.
E.g. if 'myhomepage.html?272772' is stored in the cache, then by generating a new random number, e.g. 'myhomepage.html?2474789', Google Chrome will be forced to look for a new copy.

Images failing to load in IE with DOM: 7009 error (unable to decode) in console

When loading many images on a single page in IE (reproduced in IE11), some of them begin to fail to load, and have something similar to the following warning in the console:
DOM7009: Unable to decode image at URL: '[some unique url]'.
When I look at the network traffic, there do appear to be valid responses received for each of these images from the server. It's not always the same images each time. If I use the dev tools to force an image to reload (for example, by updating the URL to include some extraneous parameter "&test=1"), it loads/renders normally without error. I've reproduced this behavior with different types of images (JPEGs/PNGs; example PNG included below). It seems to happen more frequently as the number of images goes up, and may also correlate with the size of each image.
Any thoughts as to what might be causing this? Potential work-arounds? Any help is appreciated.
It looks like the actual problem is addressed in another Stack Overflow question. All of the answers here work around the issue in various ways, but the root cause is likely that the file is not in the format it claims to be. Because nosniff is enabled, the browser is not allowed to sniff its way around the mismatch and attempts to decode the wrong image type.
In other words: your file extension does not match the actual encoding.
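A quick way to test whether a file's bytes really match its extension (a diagnostic sketch of my own, not from the original answer) is to inspect the magic numbers:
const fs = require('fs');
const buf = fs.readFileSync('image.jpg'); // hypothetical file under suspicion
// PNG files start with an 8-byte signature; JPEGs start with FF D8.
const isPng = buf.slice(0, 8).equals(Buffer.from([0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a]));
const isJpeg = buf[0] === 0xff && buf[1] === 0xd8;
console.log({ isPng, isJpeg }); // if neither matches the extension, nosniff will refuse to decode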
I had this problem on a site hosted in IIS, due to the X-Content-Type-Options header being set in a parent application's web.config like this:
<system.webServer>
<httpProtocol>
<customHeaders>
<add name="X-Content-Type-Options" value="nosniff" />
</customHeaders>
</httpProtocol>
</system.webServer>
Removing it in the application's web.config fixed it:
<remove name="X-Content-Type-Options" />
I had a similar problem where a file was reported in the HTTP headers as a JPEG but was in fact a PNG. Changing the filetype to match the file or removing the "X-Content-Type-Options" header fixed the problem.
The problem I was facing was similar. I have a Java web application which serves pages and thumbnails of a document through Servlet requests, responding to the browser with PNG images. As user1069816 said, the responses were arriving with a header that was causing the "Unable to decode image" problem:
X-Content-Type-Options: nosniff
In my case, this header was introduced by Spring Security. Although this is a security mechanism for Internet Explorer to avoid XSS attacks, the fastest way to disable this header on responses was to put the following lines in the Spring Security application context file, in the headers section:
<http use-expressions="true" create-session="never" auto-config="true">
<headers>
<!-- this section disable put the header 'X-Content-Type-Options' -->
<content-type-options disabled="true"/>
</headers>
This problem was only happening in Internet Explorer 11, not in Chrome or Firefox.
I've encountered this error as well — IE 11.0.9600.18059. According to my testing, it was almost certainly due to the amount of memory consumed by the tab (e.g. adding extra DOM elements increases the memory usage) — not to be confused with the amount of data loaded over the network.
Using the memory profiler, I found that errors occurred once memory usage hit a ceiling around 1.5GB. This caused the following weirdness:
Some images would be loaded, but wouldn't render. They'd show up as an empty space in the page (with the correct dimensions), as though the image had been set to visibility: hidden.
Some images would be loaded, but would fail to decode. They'd show up as a small black box with an X. These images would also show a DOM7009 error in the console.
Flash SWFs would show up as black boxes.
Other random weirdness.
Different images/SWFs would be affected each time I reloaded the page.
The solution for me was to simply adjust the way the page is designed, so it's not causing IE to consume as much memory.
If it's of any use, I have been seeing this in my WinJS application, and I believe it's the renderer's way of reporting that it's out of memory (albeit cryptically!).
The reason I say that is that if I load a compressed PNG image that is only, say, 500 KB on disk but large in terms of pixel dimensions, I get this problem.
E.g.
If I try this with a 20000 x 6000 image I get this error sporadically, which I'm guessing is because it decodes to 20000 x 6000 x 4 bytes (480,000,000 bytes, or ~480 MB).
If I try this with a 14000 x 6000 image, that would be ~336 MB, which seems OK; I have yet to get the error.
If I try this with a 35000 x 20000 image (~2.8 GB), it always happens.
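The arithmetic above as a one-liner (4 bytes per pixel for RGBA, decimal megabytes as used in the figures above):
// Estimate the decoded (uncompressed) size of an image in MB.
const decodedMB = (w, h) => (w * h * 4) / 1e6;
console.log(decodedMB(20000, 6000));  // 480  (sporadic errors)
console.log(decodedMB(14000, 6000));  // 336  (OK)
console.log(decodedMB(35000, 20000)); // 2800 (always fails)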
I got the same issue in IE11: when loading images it gave me the error:
DOM7009: Unable to decode image at URL
In all other browsers, it works like a charm!
After a bit of research I finally came to the following conclusion. In the Web.config file:
<system.webServer>
<httpProtocol>
<customHeaders>
<add name="X-Frame-Options" value="Deny" />
<remove name="X-Powered-By" />
<add name="X-XSS-Protection" value="1" />
<!--To resolve the user image not displaying in the chat and in the header for IE 11-->
<!--<add name="X-Content-Type-Options" value="nosniff"/>-->
</customHeaders>
</httpProtocol>
</system.webServer>
See the commented-out line: I removed the "X-Content-Type-Options" header and it works!
I experienced this same error on a page that was essentially an image gallery, where each image was being loaded at full resolution as its thumbnail. The page weight was approx. 220 MB. Some of the thumbnails weren't loading, and the "unable to decode image at url" error was being given as the reason.
However, IE could load up each image individually by viewing the image's URL directly, which I think means there wasn't a problem with the image type/encoding. So while IE11 could load up the individual image, it couldn't load up all the images as thumbnails (and the images that weren't loaded changed each time the page was refreshed).
My workaround was to display a low-res thumbnail on the page (page weight dropped to 220 KB), and have the thumbnail link to the full, hi-res image.
It would be worth checking if you can reduce the dimensions of the images served, and also the file size.
I had this problem just now when the image was ~2.5 MB (.jpg). I shrank it down to 540 KB and the problem no longer occurs. It looks like it's definitely an IE memory issue (or can be on some occasions).
This is the only fix that worked for me as I didn't have anything relating to X-Content-Type-Options in my web config.
The only way that I found to solve this was to disable this rule per browser in the Apache server configuration:
BrowserMatch MSIE explorer
Header set X-Content-Type-Options nosniff env=!explorer
It works for me, but I don't like this solution; I would prefer to rewrite the correct MIME type in the Apache server.
My problem is that the URL contains the "captcha" string, but I can't set the header for it:
SetEnvIf Request_URI ^(.*)captcha$ headerpng
Header set "Content-type image/png" env=headerpng
This doesn't work, which is a little frustrating. The URL is very long and I think SetEnvIf doesn't read it all the way to the end.
I get the same issue frequently with IE11 and I can't pinpoint what is causing it. However, it starts to happen right after JavaScript has crashed. It's not an imgur problem, it's an IE11 problem.
The only way that I have been able to get out of the issue is to crash explorer and reload it or reboot.

Javascript src starts with //? [duplicate]

I have the following element:
<script type="text/javascript" src="https://cdn.example.com/js_file.js"></script>
In this case the site is HTTPS, but the site may also be just HTTP. (The JS file is on another domain.) I'm wondering if it's valid to do the following for convenience sake:
<script type="text/javascript" src="//cdn.example.com/js_file.js"></script>
I'm wondering if it's valid to remove the http: or https:?
It seems to work everywhere I have tested, but are there any cases where it doesn't work?
A relative URL without a scheme (http: or https:) is valid, per RFC 3986: "Uniform Resource Identifier (URI): Generic Syntax", Section 4.2. If a client chokes on it, then it's the client's fault because they're not complying with the URI syntax specified in the RFC.
Your example is valid and should work. I've used that relative URL method myself on heavily trafficked sites and have had zero complaints. Also, we test our sites in Firefox, Safari, IE6, IE7 and Opera. These browsers all understand that URL format.
It is guaranteed to work in any mainstream browser (I'm not taking browsers with less than 0.05% market share into consideration). Heck, it works in Internet Explorer 3.0.
RFC 3986 defines a URI as composed of the following parts:
  foo://example.com:8042/over/there?name=ferret#nose
  \_/   \______________/\_________/ \_________/ \__/
   |           |            |            |        |
scheme     authority       path        query   fragment
When defining relative URIs (Section 5.2), you can omit any of those sections, always starting from the left. In pseudo-code, it looks like this:
result = ""
if defined(scheme) then
append scheme to result;
append ":" to result;
endif;
if defined(authority) then
append "//" to result;
append authority to result;
endif;
append path to result;
if defined(query) then
append "?" to result;
append query to result;
endif;
if defined(fragment) then
append "#" to result;
append fragment to result;
endif;
return result;
The URI you are describing is a scheme-less relative URI.
are there any cases where it doesn't work?
If the parent page was loaded from file://, then it probably does not work (it will try to get file://cdn.example.com/js_file.js, which of course you could provide locally as well).
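You can watch this resolution happen with the WHATWG URL API (a quick illustration of the rule; not part of the original answer):
// A scheme-relative reference inherits the scheme of the base URL.
console.log(new URL('//cdn.example.com/js_file.js', 'https://example.com/page').href);
// -> https://cdn.example.com/js_file.js
console.log(new URL('//cdn.example.com/js_file.js', 'file:///home/me/page.html').href);
// -> file://cdn.example.com/js_file.js  (the file:// pitfall described above)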
Many people call this a Protocol Relative URL.
It causes a double-download of CSS files in IE 7 & 8.
Here I duplicate the answer from Hidden features of HTML:
Using a protocol-independent absolute path:
<img src="//domain.com/img/logo.png"/>
If the browser is viewing a page in SSL through HTTPS, then it'll request that asset with the https protocol; otherwise it'll request it with HTTP. This prevents that awful "This Page Contains Both Secure and Non-Secure Items" error message in IE, keeping all your asset requests within the same protocol.
Caveat: When used on a <link> or @import for a stylesheet, IE7 and IE8 download the file twice. All other uses, however, are just fine.
It is perfectly valid to leave off the protocol. The URL spec has been very clear about this for years, and I've yet to find a browser that doesn't understand it. I don't know why this technique isn't better known; it's the perfect solution to the thorny problem of crossing HTTP/HTTPS boundaries. More here: Http-https transitions and relative URLs
are there any cases where it doesn't work?
Just to throw this in the mix, if you are developing on a local server, it might not work. You need to specify a scheme, otherwise the browser may assume that src="//cdn.example.com/js_file.js" is src="file://cdn.example.com/js_file.js", which will break since you're not hosting this resource locally.
Microsoft Internet Explorer seem to be particularly sensitive to this, see this question: Not able to load jQuery in Internet Explorer on localhost (WAMP)
You would probably always try to find a solution that works on all your environments with the least amount of modifications needed.
The solution used by HTML5Boilerplate is to have a fallback when the resource is not loaded correctly, but that only works if you incorporate a check:
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"></script>
<!-- If jQuery is not defined, something went wrong and we'll load the local file -->
<script>window.jQuery || document.write('<script src="js/vendor/jquery-1.10.2.min.js"><\/script>')</script>
UPDATE: HTML5Boilerplate now uses <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"> after deciding to deprecate protocol-relative URLs.
1. Summary
Answer for 2019: you can still use protocol-relative URLs, but this technique is an anti-pattern.
Also:
You may have problems in development.
Some third-party tools may not support them.
Migrating from protocol-relative URLs to https:// would be a good idea.
2. Relevance
This answer is relevant as of January 2019. In the future, the information in this answer may become obsolete.
3. Anti-pattern
3.1. Argumentation
Paul Irish — front-end engineer and developer advocate for Google Chrome — wrote in December 2014:
Now that SSL is encouraged for everyone and doesn’t have performance concerns, this technique is now an anti-pattern. If the asset you need is available on SSL, then always use the https:// asset.
Allowing the snippet to request over HTTP opens the door for attacks like the recent GitHub Man-on-the-side attack. It’s always safe to request HTTPS assets even if your site is on HTTP, however the reverse is not true.
3.2. Other links
Loads page resources using protocol relative URIs
Stop using protocol-relative URLs
3.3. Examples
In 2017 Stack Overflow switched from protocol-relative URLs to https
In 2018 Chrome will flag all unencrypted websites as “not secure”
4. Developing process
For example, I'll try using clean-console.
Example file KiraCleanConsole__cdn_links_demo.html:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<title>clean-console without protocol demonstration</title>
<!-- Really dead link -->
<script src="https://unpkg.com/bowser#latest/bowser.min.js"></script>
<!-- Package exists; link without “https:” -->
<script src="//cdn.jsdelivr.net/npm/jquery#3.3.1/dist/jquery.min.js"></script>
<!-- Package exists: link with “https:” -->
<script src="https://cdn.jsdelivr.net/npm/gemini-scrollbar/index.js"></script>
</head>
<body>
Kira Goddess!
</body>
</html>
output:
D:\SashaDebugging>clean-console -i KiraCleanConsole__cdn_links_demo.html
checking KiraCleanConsole__cdn_links_demo.html
phantomjs: opening page KiraCleanConsole__cdn_links_demo.html
phantomjs: Unable to load resource (#3URL:file://cdn.jsdelivr.net/npm/jquery#3.3.1/dist/jquery.min.js)
phantomjs: phantomjs://code/runner.js:30 in onResourceError
Error code: 203. Description: Error opening //cdn.jsdelivr.net/npm/jquery#3.3.1/dist/jquery.min.js: The network path was not found.
phantomjs://code/runner.js:31 in onResourceError
phantomjs: Unable to load resource (#5URL:https://unpkg.com/bowser#2.1.0/bowser.min.js)
phantomjs: phantomjs://code/runner.js:30 in onResourceError
Error code: 203. Description: Error downloading https://unpkg.com/bowser#2.1.0/bowser.min.js - server replied: Not Found
phantomjs://code/runner.js:31 in onResourceError
phantomjs: Checking errors after sleeping for 1000ms
2 error(s) on KiraCleanConsole__cdn_links_demo.html
phantomjs process exited with code 2
The link //cdn.jsdelivr.net/npm/jquery#3.3.1/dist/jquery.min.js is valid, but I'm getting an error.
Pay attention to file://cdn.jsdelivr.net/npm/jquery#3.3.1/dist/jquery.min.js and read Thilo's and bg17aw's answers about file://.
I didn't know about this behavior, and couldn't understand why I was having problems like this with pageres.
5. Third-party tools
I use the Clickable URLs Sublime Text package. With it, I can simply open links from my text editor in the browser.
Both links in the example are valid, but I can successfully open the first link in the browser using Clickable URLs; the second one, I can't. This may not be very convenient.
6. Conclusion
Yes:
If you have problems like those in the "Developing process" item, you can adjust your development workflow.
If you have problems like those in the "Third-party tools" item, you can contribute to the tools.
But you don't need these additional problems. Read the information linked in the "Anti-pattern" item: protocol-relative URLs are obsolete.
Following gnud's reference, RFC 3986 section 5.2 says:
If the scheme component is defined, indicating that the reference starts with a scheme name, then the reference is interpreted as an absolute URI and we are done. Otherwise, the reference URI's scheme is inherited from the base URI's scheme component.
So // is correct :-)
Yes, this is documented in RFC 3986, section 5.2:
(edit: Oops, my RFC reference was outdated).
It is indeed correct, as other answers have stated. You should note, though, that some web crawlers will trigger 404s for these by requesting them on your server as if they were local URLs. (They disregard the double slash and treat it as a single slash.)
You may want to set up a rule on your webserver to catch these and redirect them.
For example, with Nginx, you'd add something like:
location ~* /(?<redirect_domain>((([a-z]|[0-9]|\-)+)\.)+([a-z])+)/(?<redirect_path>.*) {
return 301 $scheme:/$redirect_domain/$redirect_path;
}
Do note though, that if you use periods in your URIs, you'll need to increase the specificity or it will end up redirecting those pages to nonexistent domains.
Also, this is a rather massive regex to be running for each query -- in my opinion, it's worth punishing non-compliant browsers with 404s over a (slight) performance hit on the majority of compliant browsers.
We are seeing 404 errors in our logs when using //somedomain.com as references to JS files.
The references that cause the 404s come out looking like this:
ref:
<script src="//somedomain.com/somescript.js" />
404 request:
http://mydomain.com//somedomain.com/somescript.js
With these showing up regularly in our web server logs, it is safe to say that not all browsers and bots honor RFC 3986 section 4.2. The safest bet is to include the protocol whenever possible.
The pattern I see on html5-boilerplate is:
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.10.2/jquery.min.js"></script>
<script>window.jQuery || document.write('<script src="js/vendor/jquery-1.10.2.min.js"><\/script>')</script>
It runs smoothly on different schemes like http, https, file.
As your example is linking to an external domain, if you are using HTTPS then you should verify that the external domain is setup for SSL as well. Otherwise, your users may see SSL errors and/or 404 errors (e.g. older versions of Plesk store HTTP and HTTPS in separate folders). For CDNs, it shouldn't be an issue but for any other website it could be.
On a side note, I tested this while updating an old website, and it also works in the url= part of a META REFRESH.

Where do you include the jQuery library from? Google JSAPI? CDN?

There are a few ways to include jQuery and jQuery UI and I'm wondering what people are using?
Google JSAPI
jQuery's site
your own site/server
another CDN
I have recently been using Google JSAPI, but have found that it takes a long time to set up an SSL connection, or even just to resolve google.com. I have been using the following for Google:
<script src="https://www.google.com/jsapi"></script>
<script>
google.load('jquery', '1.3.1');
</script>
I like the idea of using Google so it's cached when visiting other sites and to save bandwidth from our server, but if it keeps being the slow portion of the site, I may change the include.
What do you use? Have you had any issues?
Edit: Just visited jQuery's site and they use the following method:
<script type="text/javascript" src="http://ajax.googleapis.com/ajax/libs/jquery/1.3/jquery.min.js"></script>
Edit2: Here's how I've been including jQuery without any problems for the last year:
<script src="//ajax.googleapis.com/ajax/libs/jquery/1.4.3/jquery.min.js"></script>
The difference is the removal of http:. By removing this, you don't need to worry about switching between http and https.
Without a doubt I choose to have jQuery served by the Google API servers. I didn't go with the jsapi method since I don't leverage any other Google APIs; however, if that ever changed then I would consider it...
First: The Google api servers are distributed across the world instead of my single server location: Closer servers usually means faster response times for the visitor.
Second: Many people choose to have JQuery hosted on Google, so when a visitor comes to my site they may already have the JQuery script in their local cache. Pre-cached content usually means faster load times for the visitor.
Third: My web hosting company charges me for the bandwidth used. No sense consuming 18k per user session if the visitor can get the same file elsewhere.
I understand that I place a portion of trust on Google to serve the correct script file, and to be online and available. Up to this point I haven't been disappointed with using Google and will continue this configuration until it makes sense not to.
One thing worth pointing out... If you have a mixture of secure and insecure pages on your site you might want to dynamically change the Google source to avoid the usual warning you see when loading insecure content in a secure page:
Here's what I came up with:
<script type="text/javascript">
document.write([
"\<script src='",
("https:" == document.location.protocol) ? "https://" : "http://",
"ajax.googleapis.com/ajax/libs/jquery/1.2.6/jquery.min.js' type='text/javascript'>\<\/script>"
].join(''));
</script>
UPDATE 9/8/2010 -
Some suggestions have been made to reduce the complexity of the code by removing the HTTP and HTTPS and simply use the following syntax:
<script type="text/javascript">
document.write("\<script src='//ajax.googleapis.com/ajax/libs/jquery/1.2.6/jquery.min.js' type='text/javascript'>\<\/script>");
</script>
In addition you could also change the url to reflect the jQuery major number if you wanted to make sure that the latest Major version of the jQuery libraries were loaded:
<script type="text/javascript">
document.write("\<script src='//ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js' type='text/javascript'>\<\/script>");
</script>
Finally, if you don't want to use Google and would prefer jQuery you could use the following source path (keep in mind that jQuery doesn't support SSL connections):
<script type="text/javascript">
document.write("\<script src='http://code.jquery.com/jquery-latest.min.js' type='text/javascript'>\<\/script>");
</script>
One reason you might want to host on an external server is to work around browsers' limits on concurrent connections to a particular server.
However, given that the jQuery file you are using will likely not change very often, the browser cache will kick in and make that point moot for the most part.
Second reason to host it on external server is to lower the traffic to your own server.
However, given the size of jQuery, chances are it will be a small part of your traffic. You should probably try to optimize your actual content.
jQuery 1.3.1 min is only 18k in size. I don't think that's too much of a hit to ask on the initial page load. It'll be cached after that. As a result, I host it myself.
If you want to use Google, the direct link may be more responsive. Each library has the path listed for the direct file. This is the jQuery path
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.1/jquery.min.js"></script>
Just reread your question: is there a reason you are using https? This is the script tag Google lists in their example:
<script src="http://www.google.com/jsapi"></script>
I wouldn't want any public site that I developed to depend on any external site, and thus, I'd host jQuery myself.
Are you willing to have an outage on your site when the other (Google, jquery.com, etc.) goes down? Less dependencies is the key.
Pros: hosting on Google has benefits:
Probably faster (their servers are more optimised)
They handle the caching correctly - 1 year (we struggle to be allowed to make the changes to get the headers right on our servers)
Users who have already had a link to the Google-hosted version on another domain already have the file in their cache
Cons:
Some browsers may see it as XSS cross-domain and disallow the file.
Particularly users running the NoScript plugin for Firefox
I wonder if you can include it from Google, and then check for the presence of some global variable, or some such, and if absent, load it from your server?
There are a few issues here. Firstly, the async load method you specified:
<script type="text/javascript" src="https://www.google.com/jsapi"></script>
<script type="text/javascript">
google.load('jquery', '1.3.1');
google.setOnLoadCallback(function() {
// do stuff
});
</script>
has a couple of issues. Script tags pause the page load while they are retrieved (if necessary). Now, if they're slow to load this is bad, but jQuery is small. The real problem with the above method is that, because the jquery.js load happens asynchronously, many pages will finish loading and render before jQuery has loaded, so any jQuery styling you do will appear as a visible change to the user.
The other way is:
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.3.1/jquery.min.js"></script>
Try some simple examples like, have a simple table and change the background of the cells to yellow with the setOnLoadCallback() method vs $(document).ready() with a static jquery.min.js load. The second method will have no noticeable flicker. The first will. Personally I think that's not a good user experience.
As an example run this:
<html>
<head>
<title>Layout</title>
<style type="text/css">
.odd { background-color: yellow; }
</style>
</head>
<body>
<table>
<tr><th>One</th><th>Two</th></tr>
<tr><td>Three</td><td>Four</td></tr>
<tr><td>Five</td><td>Six</td></tr>
<tr><td>Seven</td><td>Nine</td></tr>
<tr><td>Nine</td><td>Ten</td></tr>
</table>
<script src="http://www.google.com/jsapi"></script>
<script>
google.load("jquery", "1.3.1");
google.setOnLoadCallback(function() {
$(function() {
$("tr:odd").addClass("odd");
});
});
</script>
</body>
</html>
You (should) see the table appear and then the rows go yellow.
The second problem with the google.load() method is that it only hosts a limited range of files. This is a problem for jquery since it is extremely plug-in dependent. If you try and include a jquery plugin with a <script src="..."> tag and google.load() the plug-in will probably fail with messages of "jQuery is not defined" because it hasn't loaded yet. I don't really see a way around this.
The third problem (with either method) is that they are one external load. Assuming you have some plugins and your own Javascript code you're up to a minimum of two external requests to load your Javascript. You're probably better off getting jquery, all relevant plug-ins and your own code and putting it into one minified file.
From Should You Use Google's Ajax Libraries API for Hosting?:
As to load times, you're actually loading two scripts - the jsapi script and the mootools script (the compressed version from above). So that is two connections, rather than one. In my experience, I found that the load time was actually 2-3 times slower than loading from my own personal shared server, even though it was coming from Google, and my version of the compressed file was a couple of K larger than Google's. This, even after the file had loaded and (presumably) cached. So for me, since the bandwidth doesn't matter much, it isn't going to matter.
Lastly you have the potential problem of a paranoid browser flagging the request as some sort of XSS attempt. It's not typically a problem with default settings but on corporate networks where the user may not have control over which browser they use let alone the security settings you may have a problem.
So in the end I can't really see me using the Google AJAX API for jQuery at least (the more "complete" APIs are a different story in some ways) much except to post examples here.
In addition to the people who advise hosting it on your own server, I'd propose keeping it on a separate domain (e.g. static.website.com) to allow browsers to load it in a separate thread from other content. This tip also works for all static stuff, say images and CSS.
I have to vote -1 for the libraries hosted on Google. They are collecting data, Google Analytics style, with their wrappers around these libraries. At a minimum, I don't want a client browser doing more than I'm asking it to do, much less anything else on the page. At worst, this is Google's "new version" of not being evil: using unobtrusive JavaScript to gather more usage data.
Note: if they've changed this practice, super. But the last time I considered using their hosted libraries, I monitored the outbound HTTP traffic on my site, and the periodic calls out to Google servers were not something I expected to see.
I might be old-school about this, but I still frown on hotlinking. Maybe Google is the exception, but in general, it's really just good manners to host the files on your own server.
I will add this as a reason to locally host these files.
Recently a node in Southern California on TWC has not been able to resolve the ajax.googleapis.com domain (for users with IPv4 only), so we are not getting the external files. This had been intermittent up until yesterday (now it is persistent). Because it was intermittent, I was having tons of problems troubleshooting SaaS user issues. I spent countless hours trying to track down why some users were having no issues with the software while others were tanking. In my usual debugging process, I'm not in the habit of asking a user if they have IPv6 turned off.
I stumbled on the issue because I myself was using this particular "route" to the file and also am using only IPv4. I discovered the issue with developer tools telling me jQuery wasn't loading, then started doing traceroutes etc. to find the real issue.
After this, I will most likely never go back to externally hosted files because: google doesn't have to go down for this to become a problem, and... any one of these nodes can be compromised with DNS hijacking and deliver malicious js instead of the actual file. Always thought I was safe in that a google domain would never go down, now I know any node in between a user and the host can be a fail point.
I just include the latest version from the jQuery site: http://code.jquery.com/jquery-latest.pack.js. It suits my needs and I never have to worry about updating.
EDIT: For a major web app, certainly control it; download it and serve it yourself. But for my personal site, I could not care less. Things don't magically disappear; they are usually deprecated first. I keep up with it enough to know what to change for future releases.
Here are some useful resources; I hope they can help you choose your CDN.
Microsoft has recently added a new domain for delivering libraries through their CDN.
Old format: http://ajax.microsoft.com/ajax/jQuery/jquery-1.5.1.js
New format: http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.5.1.js
This way, requests no longer send all the microsoft.com cookies.
http://www.asp.net/ajaxlibrary/cdn.ashx#Using_jQuery_from_the_CDN_11
Here are some statistics about the most popular technologies used on the web:
http://trends.builtwith.com/
If I am responsible for the 'live' site, I'd better be aware of everything that is going on and into my site. For that reason I host the jquery-min version myself, either on the same server or on a static/external server, but in either case in a location where only I (or my program/proxy) can update the library, after having verified/tested every change.
In head:
(function() {
    var jsapi = document.createElement('script');
    jsapi.type = 'text/javascript';
    jsapi.async = true;
    jsapi.src = ('https:' == document.location.protocol ? 'https://' : 'http://') + 'www.google.com/jsapi?key=YOUR KEY';
    // Append to <head>, falling back to <body> if no <head> is present.
    (document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(jsapi);
})();
End of Body:
<script type="text/javascript">
google.load("jquery", "version");
</script>
I host it with my other JS files on my own server, and, at that point, combine and minify them (with django-compressor, here, but that's not the point) to be served as just one JS file, with everything the site needs in it. You'll need to serve your own JS files anyway, so I see no reason not to add the extra jQuery bytes there too; a few more KBs are much cheaper to transfer than extra requests. You are not dependent on anyone, and as soon as your minified JS is cached, you're super fast as well.
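As a toy illustration of that combine step (tool-agnostic; the file names are hypothetical):
const fs = require('fs');
// Concatenate scripts in dependency order; the joining semicolon guards
// against files that are missing a trailing semicolon.
const bundle = ['jquery.min.js', 'plugins.js', 'app.js']
  .map(f => fs.readFileSync(f, 'utf8'))
  .join(';\n');
fs.writeFileSync('site.bundle.js', bundle);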
On first load, a CDN-based solution might win, because you must load the additional jQuery kilobytes from your own server (but without an additional request). I doubt the difference is noticeable, though. And on a first load with a cleared cache, your own hosted solution will probably always be much faster, because fetching the CDN jQuery costs an extra request and a DNS lookup.
I wonder how this point is almost never mentioned, and how CDNs seem to take over the world :)
