I'm fully aware that this question has been asked and answered everywhere, both on SO and off. However, every time there seems to be a different answer, e.g. this, this and that.
I don't care whether it's using jQuery or not - what's important is that it works, and is cross-browser.
So, what is the best way to preload images?
Unfortunately, that depends on your purpose.
If you plan to use the images for purposes of style, your best bet is to use sprites.
http://www.alistapart.com/articles/sprites2
However, if you plan to use the images in <img> tags, then you'll want to pre-load them with
function preload(sources)
{
    var images = [];
    for (var i = 0, length = sources.length; i < length; ++i) {
        images[i] = new Image();
        images[i].src = sources[i];
    }
    return images; // keep the references around so the images aren't discarded
}
(modified source taken from What is the best way to preload multiple images in JavaScript?)
Using new Image() does not involve the expense of DOM methods, but a new request for the specified image is still added to the queue. Since the image is never actually added to the page, there is no re-rendering involved. I would recommend, however, adding this call at the end of your page (as all of your scripts should be, when possible) to prevent it from holding up more critical elements.
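For example (a minimal usage sketch; the paths are placeholders):

preload([
    'images/header.png',
    'images/gallery-1.jpg',
    'images/gallery-2.jpg'
]);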
Edit: Edited to reflect comment quite correctly pointing out that separate Image objects are required to work properly. Thanks, and my bad for not checking it more closely.
Edit2: edited to make the reusability more obvious
Edit 3 (3 years later):
Due to changes in how browsers handle non-visible images (display:none or, as in this answer, never appended to the document) a new approach to pre-loading is preferred.
You can use an Ajax request to force early retrieval of images. Using jQuery, for example:
jQuery.get(source);
Or in the context of our previous example, you could do:
function preload(sources)
{
    jQuery.each(sources, function(i, source) { jQuery.get(source); });
}
Note that this doesn't apply to the case of sprites which are fine as-is. This is just for things like photo galleries or sliders/carousels with images where the images aren't loading because they are not visible initially.
Also note that this method does not work for IE (ajax is normally not used to retrieve image data).
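If you would rather not depend on jQuery for this, a bare XMLHttpRequest achieves the same effect; a minimal sketch (the same IE caveat applies):

function preload(sources) {
    for (var i = 0; i < sources.length; i++) {
        var xhr = new XMLHttpRequest();
        xhr.open('GET', sources[i], true);
        xhr.send(); // the response lands in the HTTP cache; we ignore it here
    }
}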
Spriting
As others have mentioned, spriting works quite well for a variety of reasons; however, it's not as good as it's made out to be.
On the upside, you end up making only one HTTP request for your images.
On the downside, you are loading everything in one HTTP request. Since most current browsers are limited to two concurrent connections, that one image request can block other requests. Hence YMMV: something like your menu background might not render for a bit.
When multiple images share the same color palette there is some saving, but that is not always the case, and even then the saving is negligible.
Compression is improved because there is more shared data between images.
Dealing with irregular shapes is tricky, though. Combining new images into the existing sprite is another annoyance.
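For reference, using a sprite comes down to shifting one combined background image around; a schematic example (the file name and offsets are invented):

.icon {
    background-image: url('sprites.png'); /* one combined image */
    width: 16px;
    height: 16px;
}
.icon-home   { background-position: 0 0; }      /* first 16x16 tile */
.icon-search { background-position: -16px 0; }  /* second tile, shifted left */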
Low-jack approach using <img> tags
If you are looking for the most definitive solution, then you should go with the low-jack approach, which I still prefer. Create <img> elements for the images at the end of your document, set their width and height to 1x1 pixel, and additionally put them in a hidden div. If they are at the end of the page, they will be loaded after other content.
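In markup, that looks something like this (the paths are placeholders):

<!-- at the very end of the document, before </body> -->
<div style="display: none;">
    <img src="images/big-photo-1.jpg" width="1" height="1" alt="" />
    <img src="images/big-photo-2.jpg" width="1" height="1" alt="" />
</div>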
As of January 2013 none of the methods described here worked for me, so here's what did instead, tested and working with Chrome 25 and Firefox 18. Uses jQuery and this plugin to work around the load event quirks:
function preload(sources, callback) {
    if (sources.length) {
        var preloaderDiv = $('<div style="display: none;"></div>').prependTo(document.body);
        $.each(sources, function(i, source) {
            $("<img/>").attr("src", source).appendTo(preloaderDiv);
            if (i == (sources.length - 1)) {
                preloaderDiv.imagesLoaded(function() {
                    $(this).remove();
                    if (callback) callback();
                });
            }
        });
    } else {
        if (callback) callback();
    }
}
Usage:
preload(['/img/a.png', '/img/b.png', '/img/c.png'], function() {
    console.log("done");
});
Note that you'll get mixed results if the cache is disabled, which it is by default on Chrome when the developer tools are open, so keep that in mind.
In my opinion, using multipart XMLHttpRequest, as introduced by some libraries, will be the preferred solution in the coming years. However, IE < 8 still doesn't support data: URIs (even IE8's support is limited, allowing up to 32 KB). Here is an implementation of parallel image preloading: http://code.google.com/p/core-framework/wiki/ImagePreloading . It's bundled into a framework but still worth a look.
This was from a long time ago, so I don't know how many people are still interested in preloading an image.
My solution was even more simple.
I just used CSS.
#hidden_preload {
    height: 1px;
    left: -20000px;
    position: absolute;
    top: -20000px;
    width: 1px;
}
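The matching markup is just a container holding the images to warm up (the file names are placeholders):

<div id="hidden_preload">
    <img src="images/slide-1.jpg" alt="" />
    <img src="images/slide-2.jpg" alt="" />
</div>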
Here goes my simple solution with a fade in on the image after it is loaded.
function preloadImage(_imgUrl, _container) {
    var image = new Image();
    // attach the handler before setting src, or a cached image may
    // fire 'load' before the handler is registered
    image.onload = function() {
        $(_container).fadeTo(500, 1);
    };
    image.src = _imgUrl;
}
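Called like so, assuming the container starts out transparent (the selector and path here are placeholders):

// e.g. CSS elsewhere: #photo-box { opacity: 0; }
preloadImage('images/photo.jpg', '#photo-box');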
For my use case I had a carousel with full screen images that I wanted to preload. However since the images display in order, and could take a few seconds each to load, it's important that I load them in order, sequentially.
For this I used the async library's waterfall() method (https://github.com/caolan/async#waterfall)
// Preload all images in the carousel in order.
image_preload_array = [];
$('div.carousel-image').each(function() {
    var url = $(this).data('image-url');
    image_preload_array.push(function(callback) {
        var $img = $('<img/>');
        $img.load(function() {
            callback(null);
        })[0].src = url;
    });
});
async.waterfall(image_preload_array);
This works by creating an array of functions; each function is passed a callback() parameter, which it must execute to trigger the next function in the array. The first parameter of callback() is an error message, and a non-null value exits the sequence, so we pass null each time.
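Note that waterfall() also accepts a final callback, which is useful for knowing when the whole sequence finishes or whether any step passed an error (a small, optional extension of the snippet above):

async.waterfall(image_preload_array, function(err) {
    if (err) return console.error('preload aborted:', err);
    console.log('all carousel images loaded in order');
});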
See this:
http://www.mattfarina.com/2007/02/01/preloading_images_with_jquery
Related question on SO:
jquery hidden preload
As I mentioned in the title, I have a problem properly embedding my animation in my Django project.
At some point, I decided to update the logo used by my web page built in Django.
Having little experience with JavaScript, I was looking for approaches that fit my needs.
I built an animation engine, taking some inspiration from this StackOverflow answer. (Thanks to user gilly3.)
My approach is different, though, because I realised that if I had a lot of pictures, sticking/concatenating them together into one file might be difficult. I decided on another approach and used a bunch of files instead of one. To that end, I built a generator function which another function can call to display all the pictures in order. This animation engine looks like this:
function* indexGenerator() {
    let index = 1;
    while (index < 28) {
        yield index++;
    }
    if (index === 28)  // note: '===', not '='; an assignment here would always be truthy
        yield* indexGenerator();
}
var number = indexGenerator();

setInterval(function animationslider() {
    var file = "<img src=\"../../../static/my_project/logos/media/slides_banner_f/slide_" + number.next().value + ".jpg\" />";
    document.getElementById("animation-slider").innerHTML = file;
    $("#animation-slider").fadeIn(1000);
}, 100);
$("#animation-slider").fadeIn(1000); doesn't do the trick with values from 10-1000.
I recorded what this looks like:
https://youtu.be/RVBtLbBArh0
I suspect that the solution to the first relocation problem can be solved using CSS, but I don't know how to do it, yet.
The blinking/flickering of the animation is probably caused by loading all these images one by one. In the example video above, there are 28 of them. I wonder whether loading this content asynchronously (e.g. using Ajax) would solve the problem. Is it possible to load everything first and only display the animation after it has all loaded (some sort of preloading)? I would be grateful for any suggestions and hints.
I'm aware that in the topic mentioned by me before there is a solution. But I'm curious about other approaches.
At the moment, from what I have checked, the best method would be to convert the pictures to one of these formats:
APNG
WebP (which seems to offer better quality)
GIF
With that kind of file, there is no problem with loading or flickering.
The flickering is the time it takes to create the DOM element, insert it into your page, make the web request to download the image, and finally render the image. That's too much work to do at the moment you want to animate the next frame! It would look even worse running from a client with a poor connection to the server.
To solve this, preload all of the images. This is the benefit of a sprite - it loads all at once by its very nature. But, if you have individual images, you can still preload them. Just hide them all at first, and show them one by one.
One way you can show them one at a time, is to set up a style that hides all but the first child image. Then, to animate, move the first image to the end of the list.
#animation-slider img ~ img {
    display: none;
}
const animationContainer = document.getElementById("animation-slider");
for (let i = 0; i < 28; i++) {
    const img = document.createElement("img");
    img.src = `../../../static/my_project/logos/media/slides_banner_f/slide_${i}.jpg`;
    animationContainer.appendChild(img);
}

addEventListener("load", () => {
    setInterval(() => animationContainer.appendChild(animationContainer.children[0]), 100);
});
Here's a working example, using my own image:
const animationContainer = document.getElementById("animation-slider");
for (let i = 1; i < 16; i++) {
    const img = document.createElement("img");
    img.src = `https://jumpingfishes.com/dancingpurpleunicorns/charging${i.toString().padStart(2, "0")}.png`;
    animationContainer.appendChild(img);
}

addEventListener("load", () => {
    setInterval(() => animationContainer.appendChild(animationContainer.children[0]), 100);
});

#animation-slider img ~ img {
    display: none;
}
<div id="animation-slider"></div>
I'm making a game and I want to load 798 sound files, but there is a problem in Chrome only; Firefox is fine. Sample code: https://jsfiddle.net/76zb42ag/, see the console (press F12).
Sometimes the script loads only 100, 500, or 700 files; sometimes it's fine. If I reduce the number of files to, for example, 300, it's always OK. How can I solve this problem? I need a callback, or any ideas? The game will be offline (node-webkit).
JavaScript code:
var total = 0;
// sample file to download: http://www.sample-videos.com/audio/mp3/crowd-cheering.mp3
// sounds.length = 798 files
var sounds = [
    (...limit character, see https://jsfiddle.net/76zb42ag/...)
];

for (var i = 0; i < sounds.length; i++) {
    load(sounds[i]);
}

function load(file) {
    var snd = new Audio();
    snd.addEventListener('canplaythrough', loadedAudio, false);
    snd.src = file;
}

function loadedAudio() {
    total++;
    console.log(total);
    if (total == sounds.length) {
        console.log("COMPLETE");
    }
}
This isn't really a code problem. It's a general architecture problem.
Depending not only on the number but also on the size of the samples, it's unlikely you can get them all loaded at once. Even if you can, it'll run very poorly because of the high memory use, and will likely crash the tab after a certain amount of time.
Since it's offline, I would say you could even get away with not pre-loading them at all, since the read speed is going to be nearly instantaneous.
If you find that isn't suitable, or you may need like 5 at once and it might be too slow, I'd say you'll need to architect your game in a way that you can determine which sounds you'll need for a certain game state, and just load those (and remove references to ones you don't need so they can be garbage collected).
This is exactly what all games do when they show you a loading screen, and for the same reasons.
If you want to avoid "loading screens", you can get clever by working out a way to know what is coming up and load it just ahead of time.
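A minimal sketch of that idea (the state names, file paths, and helper names here are all hypothetical):

var loadedSounds = {};

// Load only the sounds a given game state needs, then report back.
function loadSoundsFor(state, files, done) {
    var remaining = files.length;
    loadedSounds[state] = [];
    files.forEach(function(file) {
        var snd = new Audio();
        // note: 'canplaythrough' can re-fire after a stall; a production
        // version should guard against double-counting
        snd.addEventListener('canplaythrough', function() {
            loadedSounds[state].push(snd);
            if (--remaining === 0) done();
        }, false);
        snd.src = file;
    });
}

// e.g. while the menu is on screen, fetch what level 1 will need:
loadSoundsFor('level1', ['sfx/jump.mp3', 'sfx/coin.mp3'], function() {
    console.log('level 1 audio ready');
});

// When leaving a state, drop the references so the Audio objects can be GC'd:
function unloadSoundsFor(state) {
    delete loadedSounds[state];
}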
I am building a fairly complex web app. The main page loads, and then all menu items (300+), form submissions, etc. are loaded via XMLHttpRequest. I have a basic "panel" template that allows the panel to look and act (drag, resize, etc.) like a child window of the app. I load all XMLHttpRequest-requested pages into the content section of the "panel" template.
The problem I am running into is that if I try to center the new "panel", it does not seem to find the new panel's size. My code is set up so that when a menu item is clicked, it runs a function that calls the XMLHttpRequest function, passing it a callback. The callback clones the panel template (so I can change several element attributes), appends the response to the cloned template inside a document fragment, and appends all of that to the displayed HTML, after which I find the new panel's size and try to center it; but it always fails.
As each function has a lot more going on than just what I spelled out above, what follows is, hopefully, an accurate stripped-down version of the relevant parts of the code.
The XMLHttpRequest function is nothing unusual, and once it has a successful response, the callback runs the "OpenPanel" function (see below).
Callback function:
function OpenPanel(e, response)
{
    var rescontent = response.querySelector('.content');
    var newid = rescontent.getAttribute('data-id');
    var titlebar = rescontent.getAttribute('data-title');
    var frag = document.createDocumentFragment();
    var clonepanel = document.getElementById('paneltemplate').cloneNode(true);
    clonepanel.id = newid;
    frag.appendChild(clonepanel);
    frag.querySelector('.titlebar').innerHTML = titlebar;
    var replacelem = frag.querySelector('.content');
    replacelem.parentNode.replaceChild(rescontent, replacelem);
    document.getElementById('mainbody').appendChild(frag);
    var newpanel = document.getElementById(newid);
    newpanel.addEventListener('mousedown', PanelSelect, true);
    newpanel.style.cssText = PanelPosition(newpanel);
}
PanelPosition function:
function PanelPosition(panel)
{
    var lh = panel.clientHeight;
    var lw = panel.clientWidth;
    var wh = panel.parentNode.clientHeight;
    var ww = panel.parentNode.clientWidth;
    var paneltoppos = (wh - lh) / 2;
    var panelleftpos = (ww - lw) / 2;
    return 'top: ' + paneltoppos + 'px; left: ' + panelleftpos + 'px;';
}
I tried using setTimeout with a delay of 1ms, but that causes the panel to flash on the screen, in the wrong position, before it's moved. Which, from my perspective, makes the app feel cheap, or like only a second-best effort was given. And even if it didn't flash, setTimeout seems more like a hack than a solution.
I have tried this code with a few different "pages" (XHR requests), and I almost get the sense that the XMLHttpRequest hasn't finished loading when the callback function is run (which I doubt is possible). For example, I put
console.log('top: '+wh+' - '+lh+'(wh - lh) left: '+ww+' - '+lw+'(ww - lw)');
in the "PanelPosition" function, and without the setTimeout the panel height (lh) and width (lw) are between 100 and 200 pixels. But with setTimeout the panels usually are over 500 pixels in height and width. And of course that severely effects where centered is.
I have tried several searches over the last few days but nothing has turned up. So if there is a good post or article describing the problem and the solution, feel free point me to it.
Should note that as I am running the web app exclusively in node-webkit/nw.js (a Chromium/WebKit browser), there is no need for a cross-browser solution.
Well I guess I am going to answer my own question.
While looking for something completely unrelated, I found this SO post. The accepted answer gives a clue that explains the issue.
Unfortunately, it seems that you have to hand the controls back to the browser (using setTimeout() as you did) before the final dimensions can be observed; luckily, the timeout can be very short.
Basically, the browser does not lay out the appended element until the end of the function call. So setTimeout is one solution to my problem. But there is a second solution: if I just have to wait till the end of the function, then let's make that one heck of a small (focused) function. So I moved all the code needed to create the appended "panel" into a totally separate function. And in so doing, I solved another pending issue.
Edit:
Or not. Apparently a separate function doesn't work now, but it did before I posted. Who knows maybe I didn't save the change before reloading the page.
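If the separate-function trick turns out to be unreliable, another way to hand control back to the browser without the visible flash is to keep the panel invisible until it has been measured, then position and reveal it in a requestAnimationFrame callback. A sketch reusing the names from the snippets above (untested in the node-webkit environment in question):

document.getElementById('mainbody').appendChild(frag);
var newpanel = document.getElementById(newid);
newpanel.style.visibility = 'hidden'; // occupies space but never paints

requestAnimationFrame(function() {
    // layout has happened by now, so clientWidth/clientHeight are real;
    // cssText overwrites the inline style, so re-set visibility with it
    newpanel.style.cssText = PanelPosition(newpanel) + ' visibility: visible;';
});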
We use the js lib retina.js which swaps low quality images with "retina" images (size times 2). The problem is, that retina.js throws a 404 for every "retina" image which can't be found.
We own a site where users can upload their own pictures which are most likely not in a retina resolution.
Is there no way to prevent the js from throwing 404s?
If you don't know the lib. Here is the code throwing the 404:
http = new XMLHttpRequest;
http.open('HEAD', this.at_2x_path);
http.onreadystatechange = function() {
    if (http.readyState != 4) {
        return callback(false);
    }
    if (http.status >= 200 && http.status <= 399) {
        if (config.check_mime_type) {
            var type = http.getResponseHeader('Content-Type');
            if (type == null || !type.match(/^image/i)) {
                return callback(false);
            }
        }
        RetinaImagePath.confirmed_paths.push(that.at_2x_path);
        return callback(true);
    } else {
        return callback(false);
    }
}
http.send();
There are a few options that I see, to mitigate this.
Enhance and persist retina.js' HTTP call results caching
For any given '2x' image that is set to swap out a '1x' version, retina.js first verifies the availability of the image via an XMLHttpRequest. Paths with successful responses are cached in an array and the image is downloaded.
The following changes may improve efficiency:
Failed XMLHttpRequest verification attempts can be cached: presently, a '2x' path verification attempt is skipped only if it has previously succeeded, so failed attempts can recur. In practice this doesn't matter much because the verification process happens when the page is initially loaded, but if the results are persisted, keeping track of failures will prevent recurring 404 errors.
Persist '2x' path verification results in localStorage: during initialization, retina.js can check localStorage for a results cache. If one is found, the verification process for '2x' images that have already been encountered can be bypassed and the '2x' image can either be downloaded or skipped. Newly encountered '2x' image paths can be verified and the results added to the cache. Theoretically, while localStorage is available, a 404 will occur only once for an image on a per-browser basis. This would apply across all pages on the domain.
Here is a quick workup. Expiration functionality would probably need to be added.
https://gist.github.com/4343101/revisions
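The shape of that cache could be as simple as the following (a hedged sketch of the idea, not the gist's exact code):

var CACHE_KEY = 'retinajs-2x-results';
var cache = JSON.parse(localStorage.getItem(CACHE_KEY) || '{}');

function has2xResult(path) { return cache.hasOwnProperty(path); }

function remember2xResult(path, exists) {
    cache[path] = exists; // failures (false) are cached too
    localStorage.setItem(CACHE_KEY, JSON.stringify(cache));
}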
Employ an HTTP redirect header
I must note that my grasp of "server-side" matters is spotty, at best. Please take this FWIW
Another option is for the server to respond with a redirect code for image requests that have the @2x characters and do not exist. See this related answer.
In particular:
If you redirect images and they're cacheable, you'd ideally set an HTTP Expires header (and the appropriate Cache-Control header) for a date in the distant future, so at least on subsequent visits to the page users won't have to go through the redirect again.
Employing the redirect response would get rid of the 404s and cause the browser to skip subsequent attempts to access '2x' image paths that do not exist.
retina.js can be made more selective
retinajs can be modified to exclude some images from consideration.
A pull request related to this: https://github.com/imulus/retinajs/commit/e7930be
Per the pull request, instead of finding <img> elements by tag name, a CSS selector can be used and this can be one of the retina.js' configurable options. A CSS selector can be created that will filter out user uploaded images (and other images for which a '2x' variant is expected not to exist).
Another possibility is to add a filter function to the configurable options. The function can be called on each matched <img> element; a return true would cause a '2x' variant to be downloaded and anything else would cause the <img> to be skipped.
The basic, default configuration would change from the current version to something like:
var config = {
check_mime_type: true,
retinaImgTagSelector: 'img',
retinaImgFilterFunc: undefined
};
The Retina.init() function would change from the current version to something like:
Retina.init = function(context) {
if (context == null) context = root;
var existing_onload = context.onload || new Function;
context.onload = function() {
// uses new query selector
var images = document.querySelectorAll(config.retinaImgTagSelector),
retinaImages = [], i, image, filter;
// if there is a filter, check each image
if (typeof config.retinaImgFilterFunc === 'function') {
filter = config.retinaImgFilterFunc;
for (i = 0; i < images.length; i++) {
image = images[i];
if (filter(image)) {
retinaImages.push(new RetinaImage(image));
}
}
} else {
for (i = 0; i < images.length; i++) {
image = images[i];
retinaImages.push(new RetinaImage(image));
}
}
existing_onload();
}
};
To put it into practice, before window.onload fires, call:
window.Retina.configure({
// use a class 'no-retina' to prevent retinajs
// from checking for a retina version
retinaImgTagSelector : 'img:not(.no-retina)',
// or, assuming there is a data-owner attribute
// which indicates the user that uploaded the image:
// retinaImgTagSelector : 'img:not([data-owner])',
// or set a filter function that will exclude images that have
// the current user's id in their path, (assuming there is a
// variable userId in the global scope)
retinaImgFilterFunc: function(img) {
return img.src.indexOf(window.userId) < 0;
}
});
Update: Cleaned up and reorganized. Added the localStorage enhancement.
Short answer: it's not possible using client-side JavaScript only.
After browsing the code and doing a little research, it appears to me that retina.js isn't really throwing the 404 errors.
What retina.js is actually doing is requesting a file and simply performing a check on whether or not it exists, based on the error code. In effect it is asking the browser to check if the file exists; the browser is what gives you the 404, and there is no cross-browser way to prevent that (I say "cross-browser" because I only checked WebKit).
However, what you could do if this really is an issue is do something on the server side to prevent 404s altogether.
Essentially this would be, for example, /retina.php?image=YOUR_URLENCODED_IMAGE_PATH, a request to which could return this when a retina image exists...
{"isRetina": true, "path": "YOUR_RETINA_IMAGE_PATH"}
and this if it doesn't...
{"isRetina": false, "path": "YOUR_REGULAR_IMAGE_PATH"}
You could then have some JavaScript call this script and parse the response as necessary. I'm not claiming that is the only or the best solution, just one that would work.
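The client side of that could look something like this (a sketch; retina.php and the JSON shape are the hypothetical contract described above):

function resolveImagePath(path, callback) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/retina.php?image=' + encodeURIComponent(path), true);
    xhr.onreadystatechange = function() {
        if (xhr.readyState !== 4) return;
        var result = JSON.parse(xhr.responseText);
        callback(result.path); // the retina path if isRetina, else the regular one
    };
    xhr.send();
}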
Retina JS supports the attribute data-no-retina on the image tag.
This way it won't try to find the retina image.
Helpful for other people looking for a simple solution.
<img src="/path/to/image" data-no-retina />
I prefer a little more control over which images are replaced.
For all images that I've created a @2x version for, I changed the original image name to include @1x. (* See note below.) I changed retina.js slightly, so that it only looks at [name]@1x.[ext] images.
I replaced the following line in retina-1.1.0.js:
retinaImages.push(new RetinaImage(image));
With the following lines:
if (image.src.match(/@1x\.\w{3}$/)) {
    image.src = image.src.replace(/@1x(\.\w{3})$/, "$1");
    retinaImages.push(new RetinaImage(image));
}
This makes it so that retina.js only replaces @1x-named images with @2x-named images.
(* Note: In exploring this, it seems that Safari and Chrome automatically replace @1x images with @2x images, even without retina.js installed. I'm too lazy to track this down, but I'd imagine it's a feature of the latest WebKit browsers. As it is, retina.js and the above changes to it are necessary for cross-browser support.)
One of solutions is to use PHP:
replace code from 1st post with:
http = new XMLHttpRequest;
http.open('HEAD', "/image.php?p=" + this.at_2x_path);
http.onreadystatechange = function() {
    if (http.readyState != 4) {
        return callback(false);
    }
    if (http.status >= 200 && http.status <= 399) {
        if (config.check_mime_type) {
            var type = http.getResponseHeader('Content-Type');
            if (type == null || !type.match(/^image/i)) {
                return callback(false);
            }
        }
        RetinaImagePath.confirmed_paths.push(that.at_2x_path);
        return callback(true);
    } else {
        return callback(false);
    }
}
http.send();
and in yours site root add file named "image.php":
<?php
// NOTE: $_GET['p'] should be validated/sanitized in production --
// as written, this will serve any file the script can read.
if (file_exists($_GET['p'])) {
    $ext = explode('.', $_GET['p']);
    $ext = end($ext);
    if ($ext == "jpg") $ext = "jpeg";
    header("Content-Type: image/" . $ext);
    echo file_get_contents($_GET['p']);
}
?>
retina.js is a nice tool for fixed images on static web pages, but if you are serving user-uploaded images, the right tool is server-side. I describe PHP here, but the same logic may be applied to any server-side language.
A good security habit for uploaded images is to not let users reach them by direct URL: if a user succeeds in uploading a malicious script to your server, they should not be able to launch it via a URL (www.yoursite.com/uploaded/mymaliciousscript.php). So it is usually a good habit to retrieve uploaded images via some script, <img src="get_image.php?id=123456" />, if you can (and, even better, to keep the upload folder outside the document root).
Now the get_image.php script can serve the appropriate image, 123456.jpg or 123456@2x.jpg, depending on some conditions.
The approach of http://retina-images.complexcompulsions.com/#setupserver seems perfect for your situation.
First you set a cookie in your header by loading a file via JS or CSS:
Inside HEAD:
<script>(function(w){var dpr=((w.devicePixelRatio===undefined)?1:w.devicePixelRatio);if(!!w.navigator.standalone){var r=new XMLHttpRequest();r.open('GET','/retinaimages.php?devicePixelRatio='+dpr,false);r.send()}else{document.cookie='devicePixelRatio='+dpr+'; path=/'}})(window)</script>
At beginning of BODY:
<noscript><style id="devicePixelRatio" media="only screen and (-moz-min-device-pixel-ratio: 2), only screen and (-o-min-device-pixel-ratio: 2/1), only screen and (-webkit-min-device-pixel-ratio: 2), only screen and (min-device-pixel-ratio: 2)">#devicePixelRatio{background-image:url("/retinaimages.php?devicePixelRatio=2")}</style></noscript>
Now every time your script to retrieve uploaded images is called, it will have a cookie set asking for retina images (or not).
Of course you may use the provided retinaimages.php script to output the images, but you may also modify it to accommodate your needs, depending on how you produce and retrieve images from a database or hide the upload directory from users.
So not only may it load the appropriate image, but if GD2 is installed and you keep the original uploaded image on the server, it may even resize and crop it accordingly and save the two cached image sizes on the server. Inside the retinaimages.php source you can see (and copy) how it works:
<?php
$source_file = ...
$retina_file = ....
$cookie_value = false;
if (isset($_COOKIE['devicePixelRatio'])) {
    $cookie_value = intval($_COOKIE['devicePixelRatio']);
}
if ($cookie_value !== false && $cookie_value > 1) {
    // Check if retina image exists
    if (file_exists($retina_file)) {
        $source_file = $retina_file;
    }
}
....
header('Content-Length: ' . filesize($source_file), true);
readfile($source_file); // or read from db, or create right size.. etc..
?>
Pros: the image is loaded only once (retina users on 3G, at least, won't load both the 1x and 2x images), it works even without JS if cookies are enabled, it can be switched on and off easily, and there is no need to use Apple naming conventions. You load image 12345 and you get the correct DPI for your device.
With url rewriting you may even render it totally transparent by redirecting /get_image/1234.jpg to /get_image.php?id=1234.jpg
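With Apache's mod_rewrite, for instance, a hypothetical rule could look like this:

# .htaccess -- assuming Apache with mod_rewrite enabled
RewriteEngine On
RewriteRule ^get_image/(.+)$ /get_image.php?id=$1 [L,QSA]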
My suggestion is that you recognize the 404 errors to be true errors, and fix them the way that you are supposed to, which is to provide Retina graphics. You made your scripts Retina-compatible, but you did not complete the circle by making your graphics workflow Retina-compatible. Therefore, the Retina graphics are actually missing. Whatever comes in at the start of your graphics workflow, the output of the workflow has to be 2 image files, a low-res and Retina 2x.
If a user uploads a photo that is 3000x2400, you should consider that to be the Retina version of the photo, mark it 2x, and then use a server-side script to generate a smaller 1500x1200 non-Retina version, without the 2x. The 2 files together then constitute one 1500x1200 Retina-compatible image that can be displayed in a Web context at 1500x1200 whether the display is Retina or not. You don’t have to care because you have a Retina-compatible image and Retina-compatible website. The RetinaJS script is the only one that has to care whether a client is using Retina or not. So if you are collecting photos from users, your task is not complete unless you generate 2 files, both low-res and high-res.
The typical smartphone captures a photo that is more than 10x the size of the smartphone’s display. So you should always have enough pixels. But if you are getting really small images, like 500px, then you can set a breakpoint in your server-side image-reducing script so that below that, the uploaded photo is used for the low-res version and the script makes a 2x copy that is going to be no better than the non-Retina image but it is going to be Retina-compatible.
With this solution, your whole problem of “is the 2x image there or not?” goes away, because it is always there. The Retina-compatible website will just happily use your Retina-compatible database of photos without any complaints.
I am trying to intercept window.location changes to do some native work in an Android app. To be more specific, I override this call in WebViewClient:
public boolean shouldOverrideUrlLoading(WebView view, String url)
to look for anything starting with "native://". The JavaScript code is like this:
function callNative() {
    window.location = "native://doSomeNativeWork()";
}

function callNativeManyTimes(count) {
    for (var i = 0; i < count; i++) {
        callNative();
    }
}
<a href="javascript:callNativeManyTimes(10)">DoSomeNativeWork</a><br/>
The problem I am seeing is that if I set "window.location = something" many times very quickly (like in the code above), I will get only one call in WebViewClient on the native side. If I make the calls 50ms apart, I get every one of them. I am thinking the browser is doing some optimization around this.
I think I can solve this problem like this: instead of using window.location, embed a native object into the JavaScript context and call methods on that object. I am just wondering why this is happening. Can someone more familiar with JS share some insight?
Thanks
I had exactly the same problem. You can do it by creating iframes. I create iframes with the native URLs and then delete each iframe after 2 seconds, just so we don't have too many lying around in the DOM. Worked like a charm for me. You can create as many iframes as you want.
I also included a cache buster in the iframe URL, even though I'm not sure it's needed. Better safe than sorry.
function callNative(url) {
    var _frame = document.createElement('iframe');
    _frame.width = 0; _frame.height = 0; _frame.frameBorder = 0;
    document.body.appendChild(_frame);
    // cache buster
    if (url.indexOf('?') >= 0) {
        url = url + "&cb=";
    } else {
        url = url + "?cb=";
    }
    _frame.src = url + Math.round(Math.random() * 1e16);
    // Remove the iframe
    setTimeout(function() { document.body.removeChild(_frame); }, 2000);
}

function callNativeManyTimes(count) {
    for (var i = 0; i < count; i++) {
        callNative('native://doSomeNativeWork()');
    }
}
<a href="javascript:callNativeManyTimes(10)">DoSomeNativeWork</a><br/>
Note that I used the approach above on iOS. I'm not sure it's exactly the same on Android, but judging from the question and the other answer, I guess it is.
I would guess that loading a URL is asynchronous (it would probably be a bad idea to block the execution of a script until a URL has been resolved). As such, setting window.location will presumably only queue up the loading of the new URL, which will be done in a different thread.
Waiting 50ms is a hack that may or may not work. You need to find a different approach. You need something that guarantees that each one of those URLs will be resolved. If the order doesn't matter, you could just use images, like somebody suggested. Otherwise, you could use a native JavaScript interface (which is probably the better approach).
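For reference, the native-interface approach on Android looks roughly like this (a sketch; the class and bridge names are made up, and @JavascriptInterface is required from API 17 on):

import android.webkit.JavascriptInterface;

// Exposed to JavaScript as window.NativeBridge (the name is illustrative).
public class NativeBridge {
    @JavascriptInterface
    public void doSomeNativeWork() {
        // Every JS call arrives here; rapid calls are not coalesced
        // the way rapid window.location changes can be.
    }
}

// In the Activity that owns the WebView:
webView.getSettings().setJavaScriptEnabled(true);
webView.addJavascriptInterface(new NativeBridge(), "NativeBridge");

On the JavaScript side, callNative() then becomes simply window.NativeBridge.doSomeNativeWork();.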