We use the JS lib retina.js, which swaps low-resolution images for "retina" images (twice the size). The problem is that retina.js throws a 404 for every "retina" image that can't be found.
We own a site where users can upload their own pictures which are most likely not in a retina resolution.
Is there no way to prevent the js from throwing 404s?
If you don't know the lib, here is the code throwing the 404:
http = new XMLHttpRequest;
http.open('HEAD', this.at_2x_path);
http.onreadystatechange = function() {
    if (http.readyState != 4) {
        return callback(false);
    }

    if (http.status >= 200 && http.status <= 399) {
        if (config.check_mime_type) {
            var type = http.getResponseHeader('Content-Type');
            if (type == null || !type.match(/^image/i)) {
                return callback(false);
            }
        }
        RetinaImagePath.confirmed_paths.push(that.at_2x_path);
        return callback(true);
    } else {
        return callback(false);
    }
}
http.send();
There are a few options that I see to mitigate this.
Enhance and persist retina.js' HTTP call results caching
For any given '2x' image that is set to swap out a '1x' version, retina.js first verifies the availability of the image via an XMLHttpRequest. Paths with successful responses are cached in an array, and the image is downloaded.
The following changes may improve efficiency:
Failed XMLHttpRequest verification attempts can be cached: presently, a '2x' path verification attempt is skipped only if it has previously succeeded, so failed attempts can recur. In practice this doesn't matter much because the verification process happens when the page is initially loaded. But if the results are persisted, keeping track of failures will prevent recurring 404 errors.
Persist '2x' path verification results in localStorage: during initialization, retina.js can check localStorage for a results cache. If one is found, the verification process for '2x' images that have already been encountered can be bypassed and the '2x' image can either be downloaded or skipped. Newly encountered '2x' image paths can be verified and the results added to the cache. Theoretically, while localStorage is available, a 404 will occur only once for an image on a per-browser basis. This would apply across pages for any page on the domain.
Here is a quick workup. Expiration functionality would probably need to be added.
https://gist.github.com/4343101/revisions
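For illustration, here is a minimal sketch of that caching idea (the storage key and function names are mine, not retina.js internals, and checkPath stands in for the library's existing HEAD-request verification):

// Remember both successful and failed '2x' checks across page loads,
// so a 404 happens at most once per browser.
var CACHE_KEY = 'retina_path_cache';

function loadCache() {
    try {
        return JSON.parse(localStorage.getItem(CACHE_KEY)) || {};
    } catch (e) {
        return {};
    }
}

function saveCache(cache) {
    localStorage.setItem(CACHE_KEY, JSON.stringify(cache));
}

// checkPath(path, cb) performs the original XMLHttpRequest HEAD check;
// verifyPath consults the cache first and records the result.
function verifyPath(path, checkPath, callback) {
    var cache = loadCache();
    if (path in cache) return callback(cache[path]); // hit: no HTTP request at all
    checkPath(path, function(exists) {
        cache[path] = exists;
        saveCache(cache);
        callback(exists);
    });
}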
Employ an HTTP redirect header
I must note that my grasp of "server-side" matters is spotty at best. Please take this FWIW.
Another option is for the server to respond with a redirect code for image requests that contain the @2x characters and do not exist. See this related answer.
In particular:
If you redirect images and they're cacheable, you'd ideally set an HTTP Expires header (and the appropriate Cache-Control header) for a date in the distant future, so at least on subsequent visits to the page users won't have to go through the redirect again.
Employing the redirect response would get rid of the 404s and cause the browser to skip subsequent attempts to access '2x' image paths that do not exist.
retina.js can be made more selective
retinajs can be modified to exclude some images from consideration.
A pull request related to this: https://github.com/imulus/retinajs/commit/e7930be
Per the pull request, instead of finding <img> elements by tag name, a CSS selector can be used, and this can be one of retina.js's configurable options. A CSS selector can be created that will filter out user-uploaded images (and other images for which a '2x' variant is not expected to exist).
Another possibility is to add a filter function to the configurable options. The function would be called on each matched <img> element; returning true would cause a '2x' variant to be downloaded, and anything else would cause the <img> to be skipped.
The basic, default configuration would change from the current version to something like:
var config = {
    check_mime_type: true,
    retinaImgTagSelector: 'img',
    retinaImgFilterFunc: undefined
};
The Retina.init() function would change from the current version to something like:
Retina.init = function(context) {
    if (context == null) context = root;

    var existing_onload = context.onload || new Function;

    context.onload = function() {
        // uses the new query selector
        var images = document.querySelectorAll(config.retinaImgTagSelector),
            retinaImages = [], i, image, filter;

        // if there is a filter, check each image
        if (typeof config.retinaImgFilterFunc === 'function') {
            filter = config.retinaImgFilterFunc;
            for (i = 0; i < images.length; i++) {
                image = images[i];
                if (filter(image)) {
                    retinaImages.push(new RetinaImage(image));
                }
            }
        } else {
            for (i = 0; i < images.length; i++) {
                image = images[i];
                retinaImages.push(new RetinaImage(image));
            }
        }
        existing_onload();
    }
};
To put it into practice, before window.onload fires, call:
window.Retina.configure({
    // use a class 'no-retina' to prevent retinajs
    // from checking for a retina version
    retinaImgTagSelector: 'img:not(.no-retina)',

    // or, assuming there is a data-owner attribute
    // which indicates the user that uploaded the image:
    // retinaImgTagSelector: 'img:not([data-owner])',

    // or set a filter function that will exclude images that have
    // the current user's id in their path (assuming there is a
    // variable userId in the global scope)
    retinaImgFilterFunc: function(img) {
        return img.src.indexOf(window.userId) < 0;
    }
});
Update: Cleaned up and reorganized. Added the localStorage enhancement.
Short answer: It's not possible using client-side JavaScript only
After browsing the code and doing a little research, it appears to me that retina.js isn't really throwing the 404 errors.
What retina.js is actually doing is requesting a file and simply checking whether or not it exists based on the response code, which means it is asking the browser to check if the file exists. The browser is what gives you the 404, and there is no cross-browser way to prevent that (I say "cross browser" because I only checked WebKit).
However, what you could do, if this really is an issue, is handle it on the server side to prevent the 404s altogether.
Essentially this would be, for example, a request to /retina.php?image=YOUR_URLENCODED_IMAGE_PATH, which could return this when a retina image exists...
{"isRetina": true, "path": "YOUR_RETINA_IMAGE_PATH"}
and this if it doesn't...
{"isRetina": false, "path": "YOUR_REGULAR_IMAGE_PATH"}
You could then have some JavaScript call this script and parse the response as necessary. I'm not claiming that is the only or the best solution, just one that would work.
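For illustration, the client side of that approach could look roughly like this (the /retina.php URL and the JSON shape are the hypothetical ones from above):

// Ask the server whether a retina variant exists; swap the src only
// when it does, so the browser never requests a missing file.
function swapIfRetina(img) {
    var http = new XMLHttpRequest();
    http.open('GET', '/retina.php?image=' + encodeURIComponent(img.src));
    http.onreadystatechange = function() {
        if (http.readyState !== 4 || http.status !== 200) return;
        var result = JSON.parse(http.responseText);
        if (result.isRetina) img.src = result.path;
    };
    http.send();
}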
Retina JS supports the attribute data-no-retina on the image tag.
This way it won't try to find the retina image.
Helpful for other people looking for a simple solution.
<img src="/path/to/image" data-no-retina />
I prefer a little more control over which images are replaced.
For all images that I've created a @2x version for, I changed the original image name to include @1x. (* See note below.) I changed retina.js slightly, so that it only looks at [name]@1x.[ext] images.
I replaced the following line in retina-1.1.0.js:
retinaImages.push(new RetinaImage(image));
With the following lines:
if (image.src.match(/@1x\.\w{3}$/)) {
    image.src = image.src.replace(/@1x(\.\w{3})$/, "$1");
    retinaImages.push(new RetinaImage(image));
}
This makes it so that retina.js only replaces @1x named images with @2x named images.
(* Note: In exploring this, it seems that Safari and Chrome automatically replace @1x images with @2x images, even without retina.js installed. I'm too lazy to track this down, but I'd imagine it's a feature of the latest WebKit browsers. As it is, retina.js and the above changes to it are necessary for cross-browser support.)
One solution is to use PHP.
Replace the code from the first post with:
http = new XMLHttpRequest;
// encode the path so it survives as a query-string value
http.open('HEAD', "/image.php?p=" + encodeURIComponent(this.at_2x_path));
http.onreadystatechange = function() {
    if (http.readyState != 4) {
        return callback(false);
    }

    if (http.status >= 200 && http.status <= 399) {
        if (config.check_mime_type) {
            var type = http.getResponseHeader('Content-Type');
            if (type == null || !type.match(/^image/i)) {
                return callback(false);
            }
        }
        RetinaImagePath.confirmed_paths.push(that.at_2x_path);
        return callback(true);
    } else {
        return callback(false);
    }
}
http.send();
and in your site root add a file named "image.php":
<?php
// NOTE: accepting any path here would let a visitor read arbitrary
// files, so restrict requests to the public image directory
// (adjust 'images' to wherever the uploads live).
$base = realpath('images');
$path = realpath($_GET['p']);
if ($base !== false && $path !== false && strpos($path, $base) === 0 && file_exists($path)) {
    $ext = strtolower(pathinfo($path, PATHINFO_EXTENSION));
    if ($ext == "jpg") $ext = "jpeg";
    header("Content-Type: image/" . $ext);
    readfile($path);
}
?>
retina.js is a nice tool for fixed images on static web pages, but if you are retrieving user-uploaded images, the right tool is server side. I imagine PHP here, but the same logic may be applied to any server-side language.
Note that a good security habit for uploaded images is to not let users reach them by direct URL: if a user succeeds in uploading a malicious script to your server, they should not be able to launch it via a URL (www.yoursite.com/uploaded/mymaliciousscript.php). So it is usually a good habit to retrieve uploaded images via some script, <img src="get_image.php?id=123456" />, if you can (and even better, keep the upload folder out of the document root).
Now the get_image.php script can serve the appropriate image, 123456.jpg or 123456@2x.jpg, depending on some conditions.
The approach of http://retina-images.complexcompulsions.com/#setupserver seems perfect for your situation.
First you set a cookie in your header by loading a file via JS or CSS:
Inside HEAD:
<script>(function(w){var dpr=((w.devicePixelRatio===undefined)?1:w.devicePixelRatio);if(!!w.navigator.standalone){var r=new XMLHttpRequest();r.open('GET','/retinaimages.php?devicePixelRatio='+dpr,false);r.send()}else{document.cookie='devicePixelRatio='+dpr+'; path=/'}})(window)</script>
At beginning of BODY:
<noscript><style id="devicePixelRatio" media="only screen and (-moz-min-device-pixel-ratio: 2), only screen and (-o-min-device-pixel-ratio: 2/1), only screen and (-webkit-min-device-pixel-ratio: 2), only screen and (min-device-pixel-ratio: 2)">#devicePixelRatio{background-image:url("/retinaimages.php?devicePixelRatio=2")}</style></noscript>
Now every time your script to retrieve uploaded images is called, it will have a cookie set asking for retina images (or not).
Of course you may use the provided retinaimages.php script to output the images, but you may also modify it to accommodate your needs, depending on how you produce and retrieve images from a database or hide the upload directory from users.
So not only can it load the appropriate image, but if GD2 is installed and you keep the original uploaded image on the server, it can even resize and crop it accordingly and save the two cached image sizes on the server. Inside the retinaimages.php source you can see (and copy) how it works:
<?php
$source_file = ...
$retina_file = ....

$cookie_value = false; // default when no cookie is set
if (isset($_COOKIE['devicePixelRatio'])) {
    $cookie_value = intval($_COOKIE['devicePixelRatio']);
}

if ($cookie_value !== false && $cookie_value > 1) {
    // Check if the retina image exists
    if (file_exists($retina_file)) {
        $source_file = $retina_file;
    }
}

....

header('Content-Length: ' . filesize($source_file), true);
readfile($source_file); // or read from db, or create the right size, etc.
?>
Pros: the image is loaded only once (retina users on 3G at least won't load both the 1x and 2x images), it works even without JS if cookies are enabled, it can be switched on and off easily, and there is no need to use Apple naming conventions. You load image 12345 and you get the correct DPI for your device.
With URL rewriting you may even make it totally transparent by redirecting /get_image/1234.jpg to /get_image.php?id=1234.jpg.
My suggestion is that you recognize the 404 errors to be true errors, and fix them the way that you are supposed to, which is to provide Retina graphics. You made your scripts Retina-compatible, but you did not complete the circle by making your graphics workflow Retina-compatible. Therefore, the Retina graphics are actually missing. Whatever comes in at the start of your graphics workflow, the output of the workflow has to be 2 image files, a low-res and Retina 2x.
If a user uploads a photo that is 3000x2400, you should consider that to be the Retina version of the photo, mark it 2x, and then use a server-side script to generate a smaller 1500x1200 non-Retina version, without the 2x. The 2 files together then constitute one 1500x1200 Retina-compatible image that can be displayed in a Web context at 1500x1200 whether the display is Retina or not. You don’t have to care because you have a Retina-compatible image and Retina-compatible website. The RetinaJS script is the only one that has to care whether a client is using Retina or not. So if you are collecting photos from users, your task is not complete unless you generate 2 files, both low-res and high-res.
The typical smartphone captures a photo that is more than 10x the size of the smartphone’s display. So you should always have enough pixels. But if you are getting really small images, like 500px, then you can set a breakpoint in your server-side image-reducing script so that below that, the uploaded photo is used for the low-res version and the script makes a 2x copy that is going to be no better than the non-Retina image but it is going to be Retina-compatible.
With this solution, your whole problem of “is the 2x image there or not?” goes away, because it is always there. The Retina-compatible website will just happily use your Retina-compatible database of photos without any complaints.
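As a sketch of what the resizing step could look like, assuming a Node back end with the sharp library (the 1000px breakpoint and file naming are illustrative; any server-side image library works the same way):

// On upload: treat the original as the @2x version and derive the 1x
// version from it. Below the breakpoint, treat the upload as 1x and
// upscale a @2x copy instead, so the pair always exists.
const sharp = require('sharp');

async function makeRetinaPair(uploadPath, basePath) {
    const meta = await sharp(uploadPath).metadata();
    if (meta.width >= 1000) {
        await sharp(uploadPath).toFile(basePath + '@2x.jpg');
        await sharp(uploadPath)
            .resize(Math.round(meta.width / 2))
            .toFile(basePath + '.jpg');
    } else {
        await sharp(uploadPath).toFile(basePath + '.jpg');
        await sharp(uploadPath)
            .resize(meta.width * 2) // no extra detail, but Retina-compatible
            .toFile(basePath + '@2x.jpg');
    }
}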
I want to make it such that an image on a website gets its "onclick" event disabled and a gray filter applied, if a certain file on the same domain is not found. I want to use purely JS and have tried this so far:
function fileNonExist(url, callback) {
    var http = new XMLHttpRequest();
    http.onreadystatechange = function() {
        if (http.readyState === XMLHttpRequest.DONE && callback) {
            if (http.status != 200) {
                callback();
            }
        }
    }
    http.open('HEAD', url);
    http.send();
}

fileNonExist("theFileIAmLookingFor.html", () => {
    console.log("image changed");
    image.onclick = "";
    image.style.filter = "grayscale(100%)";
});
I have the image initialized and displayed, so image.onclick = "" and image.style.filter = "grayscale(100%)" both work if they are used normally. However, even though the function blocks are executed as intended (the console logs "image changed" if the file isn't found, and nothing otherwise), none of the style changes are ever visible if they are executed from within those blocks. Why might that be, and how could I fix it?
I found the solution myself while talking to Emiel Zuurbier: I noticed that the code works if I open the HTML file directly in my browser. The bug occurs if I access the file over a webserver, which I've done the whole time. If I shut down the server while the site is still open in the browser, then the changes also get applied. If I look at the requests with the dev tools in the browser, I see that only the successful requests finish and the unsuccessful ones are left pending forever. That's why the changes get applied when the server is shut down and all pending requests get closed with errors. The server uses the Node.js "fs" module and its readFile method.
I will now try to turn the styles around so all images start off gray and without onclick handlers, and only become unlocked once the file is found. This way the images with pending requests remain gray.
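For anyone else, a rough sketch of that inverted approach (assuming image is already in scope, as in the question):

// Start locked: gray and non-clickable until the file is confirmed.
image.onclick = null;
image.style.filter = "grayscale(100%)";

function fileExists(url, callback) {
    var http = new XMLHttpRequest();
    http.onreadystatechange = function() {
        if (http.readyState === XMLHttpRequest.DONE && http.status === 200) {
            callback();
        }
    };
    http.open('HEAD', url);
    http.send();
}

fileExists("theFileIAmLookingFor.html", function() {
    image.style.filter = "";
    image.onclick = function() { /* original handler */ };
});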
I have a Chrome extension that uses chrome.storage to keep track of stylesheets to apply to the page's content. One of these stylesheets is a required default stylesheet that I initially load from Chrome's extension files if the file does not exist in the user's chrome.storage. This works great.
However, I sometimes update this default stylesheet with different rules to improve the styling. When the extension runs, it checks if the default stylesheet is there and finds the old version of the stylesheet, so it doesn't load anything from the extension's storage. Thus the user is still using the old version of the stylesheet.
On my local computer, I can manually empty out my chrome.storage and load the new one, but I can't do this through the extension when it's published because I don't want to empty it every time my extension runs nor do I know only the times the stylesheet has been updated in Chrome's extension files to do so.
I could get around this by checking each character of both files, comparing if they're the same, and loading the extension's stylesheet if so, but this seems like overkill and prone to errors.
Is there an easier way to update chrome.storage's stylesheet only when the extension's stylesheet is updated without changing the file name?
If you want to look at my implementation, the whole project is open source on GitHub.
With a nudge from Florian in a chat, I came up with the following solution using a second chrome.storage space.
I was already checking to see if a stylesheet exists inside of the user's Chrome storage and loading the stylesheet from the extension's files if it didn't exist. To cause it to auto update upon changes, I now check a second chrome.storage space that holds a version number when checking whether or not to load the stylesheet from Chrome's storage. The basic approach is as follows:
// Helper function that checks whether an object is empty or not
function isEmpty(obj) {
    return Object.keys(obj).length === 0;
}

var stylesheetObj = {}, // Keeps track of all stylesheets
    stylesheetVersion = 1; // THIS NUMBER MUST BE CHANGED FOR THE STYLESHEETS TO KNOW TO UPDATE

chrome.storage.sync.get('just-read-stylesheets', function (result) {
    // Here 'result' is an object with all stylesheets if it exists

    // This keeps track of whether or not the user has the latest stylesheet version
    var needsUpdate = false;

    // Here I get the user's current stylesheet version
    chrome.storage.sync.get('stylesheet-version', function (versionResult) {
        // If the user has a version of the stylesheets and it is less than the current one, update it
        if (isEmpty(versionResult)
        || versionResult['stylesheet-version'] < stylesheetVersion) {
            chrome.storage.sync.set({ 'stylesheet-version': stylesheetVersion });
            needsUpdate = true;
        }

        if (isEmpty(result) // Not found, so we add our default
        || isEmpty(result["just-read-stylesheets"])
        || needsUpdate) { // Update the default stylesheet if it's on a previous version
            // Open the default CSS file and save it to our object
            var xhr = new XMLHttpRequest();
            xhr.open('GET', chrome.extension.getURL('default-styles.css'), true);
            // Code to handle the successful GET here
            xhr.send();
            return;
        }

        // Code to do if no load is necessary here
    });
});
This makes it so that the only thing that has to be changed to update the stylesheet for users is stylesheetVersion, making sure that it is larger than the previous versions. For example, if I updated the stylesheet and wanted the user's version to auto update, I would change stylesheetVersion from 1 to 1.1.
If you need a more full implementation, you can find the JS file here on GitHub
Try to use chrome.storage.sync and add a listener to its onChanged event. Whenever anything changes in storage, that event fires. Here's sample code to listen for saved changes:
chrome.storage.onChanged.addListener(function(changes, namespace) {
    for (var key in changes) {
        var storageChange = changes[key];
        console.log('Storage key "%s" in namespace "%s" changed. ' +
                    'Old value was "%s", new value is "%s".',
                    key,
                    namespace,
                    storageChange.oldValue,
                    storageChange.newValue);
    }
});
I was wondering if it was possible to intercept and control/redirect DNS requests made by Firefox?
The intention is to set an independent DNS server in Firefox (not the system's DNS server).
No, not really. The DNS resolver is made available via the nsIDNSService interface. That interface is not fully scriptable, so you cannot just replace the built-in implementation with your own Javascript implementation.
But could you perhaps just override the DNS server?
The built-in implementation goes from nsDNSService to nsHostResolver to PR_GetAddrByName (NSPR) and ends up in getaddrinfo/gethostbyname. And that uses whatever the system (or the library implementing it) has configured.
Any other alternatives?
Not really. You could install a proxy and let it resolve domain names (requires some kind of proxy server, of course). But that is very much a hack and nothing I'd recommend (and what if the user already has a real, non-resolving proxy configured? You would need to handle that as well).
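For completeness, the proxy route usually means a PAC file, which is itself JavaScript: with an HTTP proxy configured, the browser hands the raw hostname to the proxy, which resolves it with whatever DNS it likes. A minimal PAC file (the address is an example):

// proxy.pac: route all traffic through a local proxy that does its own
// DNS resolution.
function FindProxyForURL(url, host) {
    return "PROXY 127.0.0.1:8118";
}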
You can detect the "problem loading page" and then probably use the redirectTo method on it.
Basically they all load an about:neterror URL with a bunch of info after it, e.g.:
about:neterror?e=dnsNotFound&u=http%3A//www.cu.reporterror%28%27afew/&c=UTF-8&d=Firefox%20can%27t%20find%20the%20server%20at%20www.cu.reporterror%28%27afew.
about:neterror?e=malformedURI&u=about%3Abalk&c=&d=The%20URL%20is%20not%20valid%20and%20cannot%
But this info is held in the document URI, so you have to read it from there. Here's example code that will detect problem-loading pages:
var listenToPageLoad_IfProblemLoadingPage = function(event) {
    var win = event.originalTarget.defaultView;
    // Bad practice: this returns the documentURI of the currently focused
    // tab. It should get the linkedBrowser for the tab by going through the
    // event, something like
    // event.originalTarget.linkedBrowser.webNavigation.document.documentURI
    // (untested, but it's got to be something like that).
    var docuri = window.gBrowser.webNavigation.document.documentURI;
    // Append '' to make it a string so we can use string functions
    // like indexOf etc.
    var location = win.location + '';
    if (win.frameElement) {
        // A frame within a tab was loaded. win should be the top window of
        // the frameset. If you don't want to do anything when frames/iframes
        // are loaded in this web page, uncomment the following line:
        // return;
        // Find the root document:
        // win = win.top;
        if (docuri.indexOf('about:neterror') == 0) {
            Components.utils.reportError('IN FRAME - PROBLEM LOADING PAGE LOADED docuri = "' + docuri + '"');
        }
    } else {
        if (docuri.indexOf('about:neterror') == 0) {
            Components.utils.reportError('IN TAB - PROBLEM LOADING PAGE LOADED docuri = "' + docuri + '"');
        }
    }
}

window.gBrowser.addEventListener('DOMContentLoaded', listenToPageLoad_IfProblemLoadingPage, true);
The situation looks like this:
I need to have a button on my site that links to a subpage with a video.
There are two subpages: one with a high quality video and a second with a low quality video.
When somebody clicks the button for the first time, it redirects them to the high quality video subpage. From this subpage they can switch to the second subpage (with the low quality video).
The problem:
I want to remember in cookies which video page (low or high quality) the client visited last, so that when the client returns to my website, the button will lead them to the video page they last used.
I use ASP.NET MVC 2, but I think the solution to this problem is probably some JavaScript.
Any help here much appreciated!
Cookies are passed to the server with each HTTP request.
Assuming your button is generated dynamically on the server, you can inspect the incoming cookies to see if the user has the parameter in question set to low quality and update the button URL accordingly.
ASP docs
From experience with ASP.NET WebForms, it's pretty straightforward to access cookies, and I am pretty sure things are set up similarly with MVC.
String GetBandwidthSetting()
{
    HttpCookie bandwidth = Context.Request.Cookies["bandwidth"];
    return (bandwidth != null) ? bandwidth.Value : null;
}

void SetBandwidthSetting(String value)
{
    HttpCookie bandwidth = new HttpCookie("bandwidth", value);
    bandwidth.Expires = DateTime.Now.AddYears(1);
    Context.Response.Cookies.Add(bandwidth);
}
You can check this script:
http://javascript.internet.com/cookies/cookie-redirect.html
It is similar to what you need.
In the .js you have to change the last if statement to one looking similar to this:
if (favorite != null) {
    switch (favorite) {
        case 'videohq':
            url = 'url_of_HQ_Video'; // change these!
            break;
        case 'videolq':
            url = 'url_of_LQ_Video';
            break;
    }
}
And then add this to button/link:
onclick="window.location.href = url"
on the page from which you are redirecting to those videos.
Remember also to add code that sets the cookie. You can add an action similar to this:
onClick="SetCookie('video', 'videohq', exp)"
I'm fully aware that this question has been asked and answered everywhere, both on SO and off. However, every time there seems to be a different answer, e.g. this, this and that.
I don't care whether it's using jQuery or not - what's important is that it works, and is cross-browser.
So, what is the best way to preload images?
Unfortunately, that depends on your purpose.
If you plan to use the images for purposes of style, your best bet is to use sprites.
http://www.alistapart.com/articles/sprites2
However, if you plan to use the images in <img> tags, then you'll want to pre-load them with
function preload(sources)
{
    var images = [];
    for (var i = 0, length = sources.length; i < length; ++i) {
        images[i] = new Image();
        images[i].src = sources[i];
    }
}
(modified source taken from What is the best way to preload multiple images in JavaScript?)
Using new Image() does not involve the expense of using DOM methods, but a new request for the image specified will be added to the queue. As the image is, at this point, not actually added to the page, there is no re-rendering involved. I would recommend, however, adding this to the end of your page (as all of your scripts should be, when possible) to prevent it from holding up more critical elements.
Edit: Edited to reflect comment quite correctly pointing out that separate Image objects are required to work properly. Thanks, and my bad for not checking it more closely.
Edit2: edited to make the reusability more obvious
Edit 3 (3 years later):
Due to changes in how browsers handle non-visible images (display:none or, as in this answer, never appended to the document) a new approach to pre-loading is preferred.
You can use an Ajax request to force early retrieval of images. Using jQuery, for example:
jQuery.get(source);
Or in the context of our previous example, you could do:
function preload(sources)
{
    jQuery.each(sources, function(i, source) { jQuery.get(source); });
}
Note that this doesn't apply to the case of sprites which are fine as-is. This is just for things like photo galleries or sliders/carousels with images where the images aren't loading because they are not visible initially.
Also note that this method does not work for IE (ajax is normally not used to retrieve image data).
Spriting
As others have mentioned, spriting works quite well for a variety of reasons; however, it's not as good as it's made out to be.
On the upside, you end up making only one HTTP request for your images. YMMV though.
On the downside you are loading everything in one HTTP request. Since most current browsers are limited to 2 concurrent connections, the image request can block other requests. Hence YMMV, and something like your menu background might not render for a bit.
Multiple images share the same color palette so there is some saving but this is not always the case and even so it's negligible.
Compression is improved because there is more shared data between images.
Dealing with irregular shapes is tricky though. Combining all new images into the new one is another annoyance.
Low jack approach using <img> tags
If you are looking for the most definitive solution, then you should go with the low-jack approach, which I still prefer. Create <img> links to the images at the end of your document, set the width and height to 1x1 pixel, and additionally put them in a hidden div. If they are at the end of the page, they will be loaded after the other content.
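A rough sketch of that low-jack approach, generated with a few lines of script instead of hand-written markup (the styling details are illustrative):

// Append a hidden container of 1x1 <img> tags at the end of <body>,
// so the images load after the visible content.
function preloadWithImgTags(sources) {
    var div = document.createElement('div');
    div.style.cssText = 'position:absolute;left:-9999px;width:1px;height:1px;overflow:hidden';
    for (var i = 0; i < sources.length; i++) {
        var img = document.createElement('img');
        img.src = sources[i];
        img.width = img.height = 1;
        div.appendChild(img);
    }
    document.body.appendChild(div);
}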
As of January 2013 none of the methods described here worked for me, so here's what did instead, tested and working with Chrome 25 and Firefox 18. It uses jQuery and the imagesLoaded plugin to work around the load event quirks:
function preload(sources, callback) {
    if (sources.length) {
        var preloaderDiv = $('<div style="display: none;"></div>').prependTo(document.body);

        $.each(sources, function(i, source) {
            $("<img/>").attr("src", source).appendTo(preloaderDiv);

            if (i == (sources.length - 1)) {
                $(preloaderDiv).imagesLoaded(function() {
                    $(this).remove();
                    if (callback) callback();
                });
            }
        });
    } else {
        if (callback) callback();
    }
}
Usage:
preload(['/img/a.png', '/img/b.png', '/img/c.png'], function() {
    console.log("done");
});
Note that you'll get mixed results if the cache is disabled, which it is by default on Chrome when the developer tools are open, so keep that in mind.
In my opinion, using multipart XMLHttpRequest, introduced by some libraries, will be a preferred solution in the following years. However, IE < 8 still doesn't support data: URIs (even IE8 has limited support, allowing up to 32kb). Here is an implementation of parallel image preloading: http://code.google.com/p/core-framework/wiki/ImagePreloading . It's bundled in a framework but still worth taking a look.
This was from a long time ago, so I don't know how many people are still interested in preloading an image.
My solution was even simpler.
I just used CSS:
#hidden_preload {
    height: 1px;
    left: -20000px;
    position: absolute;
    top: -20000px;
    width: 1px;
}
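The markup that goes with this would be a container using that id; for example, it could be populated like this (the file names are placeholders):

// Drop the images to preload into the off-screen container.
var holder = document.getElementById('hidden_preload');
['/img/a.png', '/img/b.png'].forEach(function(src) {
    var img = new Image();
    img.src = src;
    holder.appendChild(img);
});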
Here goes my simple solution with a fade in on the image after it is loaded.
function preloadImage(_imgUrl, _container) {
    var image = new Image();
    image.src = _imgUrl;
    image.onload = function() {
        $(_container).fadeTo(500, 1);
    };
}
For my use case I had a carousel with full-screen images that I wanted to preload. However, since the images display in order and could each take a few seconds to load, it's important that I load them in order, sequentially.
For this I used the async library's waterfall() method (https://github.com/caolan/async#waterfall).
// Preload all images in the carousel in order.
var image_preload_array = [];
$('div.carousel-image').each(function() {
    var url = $(this).data('image-url');
    image_preload_array.push(function(callback) {
        var $img = $('<img/>');
        $img.load(function() {
            callback(null);
        })[0].src = url;
    });
});

async.waterfall(image_preload_array);
This works by creating an array of functions; each function is passed a callback parameter which it needs to execute in order to call the next function in the array. The first parameter of callback() is an error message, which will exit the sequence if a non-null value is provided, so we pass null each time.
See this:
http://www.mattfarina.com/2007/02/01/preloading_images_with_jquery
Related question on SO:
jquery hidden preload