I created a Magic: The Gathering site for my friends and me to use. On this site, we upload our decks of cards, and on the page where you can view all the cards in a deck, each card name is a link to that card on http://gatherer.wizards.com/. For ease of use, though, I made it so that when you hover over any of the card names, the card image gets Ajax'd in from Gatherer, letting you see the card without having to click the link.
The question is: should I load all ~40 card images at once when the page loads, should I keep loading images as they are hovered over, or is there some other way I should be doing it?
As it stands, I load each card as it is hovered over. My concern is that, as people mouse up and down the list, that is a LOT of requests to Gatherer. It would probably save requests to load them all up at the start, but I'm not sure if Gatherer would be upset with me for a sudden flurry of requests every time someone loads one of the decks on my site.
A solution I thought of was to load cards as they are hovered over, but save the image in a hidden container and just reload it when they mouse over it AGAIN. Thus if they load the page and don't look at anything, no needless requests were sent, but if they stay on the page for 30 minutes looking at every card over and over again, we don't inundate Gatherer with requests.
I just don't know if the method I'm using is wasteful - from a bandwidth standpoint for me or for Gatherer, or from any other standpoint I'm not familiar with. Are there any golden rules of external Ajax that I should know, for instance?
The method I'm currently using, which I assume is probably the worst implementation possible (it was a proof of concept), is this:
$(document).ready(function(){
    var container = $('#cardImageHolder');

    $('.bumpin a').mouseenter(function(){
        doAjax($(this).attr('href'));
        return false;
    });

    function doAjax(url){
        // if it is an external URI, call YQL
        if(url.match('^http')){
            $.getJSON("http://query.yahooapis.com/v1/public/yql?"+
                "q=select%20*%20from%20html%20where%20url%3D%22"+
                encodeURIComponent(url)+
                "%22&format=xml&callback=?",
                // this callback gets the data from the successful JSON-P call
                function(data){
                    // if there is data, filter it and render it out
                    if(data.results[0]){
                        var filtered = filterData(data.results[0]);
                        var image = $(filtered).find('.leftCol img').first();
                        var fixedImageSrc = image.attr('src').replace("../../", "http://gatherer.wizards.com/");
                        image.attr('src', fixedImageSrc);
                        container.html(image);
                    // otherwise tell the world that something went wrong
                    } else {
                        var errormsg = "<p>Error: can't load the page.</p>";
                        container.html(errormsg);
                    }
                }
            );
        // if it is not an external URI, use Ajax load()
        } else {
            $('#target').load(url);
        }
    }

    // filter out some nasties
    function filterData(data){
        data = data.replace(/<\/?body[^>]*>/g, '');
        data = data.replace(/[\r\n]+/g, '');
        data = data.replace(/<!--[\S\s]*?-->/g, '');
        data = data.replace(/<noscript[^>]*>[\S\s]*?<\/noscript>/g, '');
        data = data.replace(/<script[^>]*>[\S\s]*?<\/script>/g, '');
        data = data.replace(/<script.*\/>/g, '');
        return data;
    }
});
No, there are no Golden Rules of Ajax. Loading 40 images up front would minimize load time upon hover, but would greatly increase how much bandwidth is used when the page is first loaded.
You will always have these types of balance questions. It's up to you to decide what is best, and tweak it based on empirical data.
"A solution I thought of was to load cards as they are hovered over,
but save the image in a hidden container and just reload it when they
mouse over it AGAIN. Thus if they load the page and don't look at
anything, no needless requests were sent, but if they stay on the page
for 30 minutes looking at every card over and over again, we don't
inundate Gatherer with requests."
This sounds reasonable.
If I were you, though, I would load every picture when the user first loads the page. Let the browser cache the images and you don't have to worry about it. Plus, this is likely the easiest method. Don't overcomplicate things when you don't have to :)
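If you do go with the hover-and-cache route instead, a lookup table keyed by URL is enough. A minimal sketch, assuming the container/doAjax() setup from the question, and assuming doAjax() is changed to store what it renders into cache[url]:

var cache = {};
$('.bumpin a').mouseenter(function () {
    var url = $(this).attr('href');
    if (cache[url]) {
        // already fetched once: reuse it, no new request to Gatherer
        container.html(cache[url]);
    } else {
        doAjax(url); // first hover: fetch, render, and store in cache[url]
    }
    return false;
});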
Related
I have an internet radio station and I need a script that will display a picture of the current song in a particular div with an id. The image is automatically uploaded via FTP to the server each time the song changes.
HTML:
<div id="auto"></div>
JS:
$(document).ready(function() {
    $('#auto').html('<img src="artwork.png">');
    refresh();
});

function refresh() {
    setTimeout(function() {
        $('#auto').html('<img src="artwork.png">');
        refresh();
    }, 1000);
}
I tried this, but the image only loads once; when the artwork changes, I have to manually refresh the whole page.
I'll point out multiple things here.
Your code is fine if you prefer recursive setTimeout calls over a single setInterval to repeat the action.
File Caching
Your problem is probably the browser's cache: since you use the same image name and directory every time, the browser compares the file name and directory and decides to load the image from its cache instead of requesting it from the server. There are a couple of tricks to force a reload from the server in this particular case:
1. Use different file names/directories for the songs loaded dynamically.
2. Use a randomized GET query (e.g. image.png?v=<current timestamp>).
Your method for switching
You are replacing the file over FTP; I wouldn't recommend that. Consider uploading all your albums and thumbnails to the server and switching between them dynamically instead. It's more efficient, less error-prone, and it makes method #1 from the previous section easier to implement.
Loading with constant refresh
I would like to highlight that if you are using Node.js or Nginx - which are event-based servers - you can achieve the same functionality with much less traffic. You don't need a refresh method at all: these servers can push data to the browser on specific events, telling it to load a specific resource at that moment, so no constant polling is required.
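For instance, with Server-Sent Events the browser subscribes once and updates only when the server announces a change. A minimal sketch - the /artwork-events endpoint is hypothetical, and the server is assumed to send the new image URL as the event data:

var source = new EventSource('/artwork-events');
source.onmessage = function (event) {
    // event.data is assumed to be the URL of the new artwork
    $('#auto').html('<img src="' + event.data + '">');
};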
Consider your options; I tried to be as comprehensive as I could.
At the top level, the browser caches each image based on its absolute URL. You can add an extra query string to the URL to trick the browser into treating it as a new image; in this case, the new URL of artwork.png would be artwork.png?timestamp=123.
Check this out for the refresh():
function refresh() {
    setTimeout(function() {
        var timestamp = new Date().getTime();
        // reassign the url to be like artwork.png?timestamp=456784512 based on the timestamp
        $('#auto').html('<img src="artwork.png?timestamp=' + timestamp + '">');
        refresh();
    }, 1000);
}
Alternatively, you may assign an id attribute to the image and change only its src URL.
HTML:
<img id="myArtworkId" src="artwork.png"/>
JS (in the refresh method):
$('#myArtworkId').attr('src', 'artwork.png?timestamp=' + new Date().getTime());
You can use window.setInterval() to call a method every x seconds and clearInterval() to stop calling that method. View this answer for more information on this.
// Array containing image srcs for the demo
var $srcs = ['https://www.petmd.com/sites/default/files/Acute-Dog-Diarrhea-47066074.jpg',
    'https://www.catster.com/wp-content/uploads/2018/05/Sad-cat-black-and-white-looking-out-the-window.jpg',
    'https://img.buzzfeed.com/buzzfeed-static/static/2017-05/17/13/asset/buzzfeed-prod-fastlane-03/sub-buzz-25320-1495040572-8.jpg?downsize=700:*&output-format=auto&output-quality=auto'
];
var $i = 0;

$(document).ready(function() {
    $('#auto').html('<img src="https://images.pexels.com/photos/617278/pexels-photo-617278.jpeg?auto=compress&cs=tinysrgb&dpr=1&w=500">');
    // call the refresh method every 2 seconds
    window.setInterval(function() {
        refresh();
    }, 2000);
    // To stop the calling of the refresh method, uncomment the line below
    //clearInterval()
});

function refresh() {
    $('#auto').html('<img src="' + $srcs[$i++] + '">');
    // Wrap the index around to avoid going out of bounds
    if ($srcs.length == $i) {
        $i = 0;
    }
}
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<div id="auto"></div>
I am using the following Google Apps Script code to display content in a custom sidebar of my spreadsheet while the script runs:
function test() {
    var sidebarContent = '1<br>';
    updateSidebar(sidebarContent);
    sidebarContent += '2<br>';
    updateSidebar(sidebarContent);
    sidebarContent += '3';
    updateSidebar(sidebarContent);
}

function updateSidebar(content) {
    var html = HtmlService.createHtmlOutput(content)
        .setSandboxMode(HtmlService.SandboxMode.IFRAME)
        .setTitle('Sidebar')
        .setWidth(250);
    SpreadsheetApp.getUi().showSidebar(html);
}
It works, but each time the updateSidebar() function runs, the sidebar blinks when loading in the new content.
How can I program this to update the content of the sidebar more efficiently, thus removing the blink?
I'm assuming that SpreadsheetApp.getUi().showSidebar(html); should really only be run once, at the beginning, and the subsequent updates to the content should be handled by Javascript in a .js file.
But I don't know how to access the sidebarContent variable from JavaScript code running client-side in the user's browser.
Also, I know this must be possible, because I just saw this post on the Google Apps Developer Blog today about an app that uses a custom sidebar, and the .gif towards the end of the article shows a nicely-animated sidebar that's being updated in real-time.
I believe the solution for this situation is to actually handle the flow of the server-side script from the client-side. That is the only way I can think of right now to pass data to the client side from the server without re-generating the HTML.
What I mean by this is that you would want to make the calls to the server-side functions from the client, and have each one return a response to a success handler on the client. This means that each action that needs to be logged will need to be made into its own function.
I'll show you a quick example of what I mean.
Let's say your server-side GAS code looked like this:
function actionOne(){
    ...insert code here...
    return true;
}

function actionTwo(){
    ...insert code here...
    return true;
}
And so on, for as many actions as need to be executed.
Now, for your .html file, at the bottom you would have javascript looking something like this:
<script>
    callActionOne();

    function callActionOne(){
        google.script.run.withSuccessHandler(callActionTwo).actionOne();
    }

    function callActionTwo(){
        ...update html as necessary to indicate that the first action has completed...
        google.script.run.withSuccessHandler(actionsComplete).actionTwo();
    }

    function actionsComplete(){
        ...update html to indicate the script is complete...
    }
</script>
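For completeness: whatever a server-side function returns arrives as the first argument of the success handler, which is how new sidebar content can reach the client without regenerating the HTML. A sketch for the same script block; getSidebarContent() and the content element are hypothetical:

function refreshSidebar() {
    google.script.run
        .withSuccessHandler(function (content) {
            // the server function's return value lands here on the client
            document.getElementById('content').innerHTML = content;
        })
        .getSidebarContent(); // hypothetical server-side function returning a string
}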
It is a bit more complex than is ideal, and you might need to use the CacheService to store some data in between actions, but it should help you with your problem.
Let me know if you have any questions or if this doesn't fit your needs.
I have a single page website and would like to achieve the following:
1. The back button working as if it were a normal website.
2. Instead of, say,
www.mysite.com/index.php?p=#this-is-a-great-product
I'd like to have this URL:
www.mysite.com/this-is-a-great-product
while still having the back button work properly.
Regarding 1.) I use the following code I've found, which works great:
<!-- Getting the back button to work properly -->
<script type="text/javascript">
    var times = 0;

    function doclick() {
        times++;
        location.hash = times;
    }

    window.onhashchange = function() {
        if (location.hash.length > 0) {
            times = parseInt(location.hash.replace('#', ''), 10);
        } else {
            times = 0;
        }
    }
</script>
…but of course it just changes any anchors to /#1, then /#2 and so forth to get the back button to work. And as I'm not a programmer I don't know how to change it… :(
Regarding 2.) I can add this in .htaccess:

RewriteEngine On
RewriteRule ^([^/.]+)/?$ /index.php?page=$1
and this changes /index.php?p=products to /products.
So how do I change the code under 1.) so it doesn't change all anchors to #1, #2, etc., but instead references/uses the URLs I achieved under 2.), like
www.mysite.com/this-is-a-great-product
And (probably a very dumb question, but a very important one): given that I use only the new URL links on my site, is there any danger that this might still result in duplicate content in any way?
For that reason (or any other), should I make my single-page index.php self-referential with a rel=canonical link pointing to index.php?
Thanks so much in advance!
As mentioned, you will want to use the HTML5 History API. Please note, this API is relatively new and therefore browser support is a concern. At the time of writing, approximately 71% of global Internet users have support for it (see http://caniuse.com/#feat=history for browser support information). Therefore, you will want to ensure you have a fall-back solution for this. You will likely want to use the older #! solution that was popular before the HTML 5 History API was adopted.
If you use the history API to replace, for example, example.com/#!settings with example.com/settings and a user bookmarks that nicer URL, then when they go to visit it, their browser will make a request to the server for /settings (which doesn't actually exist in the web server's context). Therefore, you will need to make sure your web server has some redirection rules (i.e. RewriteEngine) such that it can take the pretty URLs and redirect them to the #! version (and then if the user's browser supports the history API it can replace that with the nice URL).
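A minimal sketch of that approach - the selector, the data-slug attribute, and the loadContent() helper are all assumptions, not part of your existing code:

$('a.internal').on('click', function (e) {
    e.preventDefault();
    var slug = $(this).data('slug'); // e.g. "this-is-a-great-product"
    history.pushState({ slug: slug }, '', '/' + slug);
    loadContent(slug); // hypothetical function that Ajax-loads the content
});

window.onpopstate = function (e) {
    // fires on back/forward; restore the page for the stored state
    if (e.state && e.state.slug) {
        loadContent(e.state.slug);
    }
};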
If you aren't very comfortable programming yourself, I'd recommend using a JavaScript library that does a lot of the work for you. I did some quick searching and discovered the following, though there might be better ones out there: https://github.com/browserstate/history.js
Basically, I have created a small prototype on JSFiddle which tracks all the URLs accessed via Ajax calls.
It also contains navigation to move back and forth through those links.
How It Actually Works:
I have created a global array called history, which keeps track of all URLs accessed via Ajax, in sequence.
There is also a global index defined to keep track of the URL being accessed when navigating back and forth through the links in the history array.
There is a History section at the bottom of the JSFiddle which shows the sequence in which the links are accessed, by capturing the link names and posting them in the order in which they were accessed.
JS Code:
$(function () {
    var history = [];
    var index = 0;

    $('.links').on('click', function () {
        $('#history').append($(this).text());
        var address = $(this).attr('data-ref');
        index += 1;
        history[index] = address;
        $('.links').attr('disabled', 'disabled');
        loadExternalPage(address);
        console.log('list:' + history);
    });

    $('#back').on('click', function () {
        console.log(index);
        index -= 1;
        console.log(index);
        console.log(history[index]);
        loadExternalPage(history[index]);
    });

    $('#forward').on('click', function () {
        console.log(index);
        index += 1;
        console.log(index);
        console.log(history[index]);
        loadExternalPage(history[index]);
    });

    var loadExternalPage = function (address) {
        console.log(history[index]);
        $('#result-section').load(address, function () {
            console.log('data-loaded');
            $('.links').removeAttr('disabled');
        });
    };
});
Live Demo @ JSFiddle: http://jsfiddle.net/dreamweiver/dpwmcu0b/8/
Note: This solution is far from perfect, so don't consider it a final solution, but rather use it as a base to build upon.
On using the BACK and FORWARD buttons at the browser's top left:
In principle, there is no great problem with this as long as you work with the browser's existing storage object (a stack) for previously visited web pages. This object is the history object, and you can see what is in it at any time by right-clicking, selecting "Inspect", selecting the "Console" tab, and entering window.history.
Check out the Browser Object Model (BOM) section of Pro Java For Web Developers (Frisbee) for the background to the history object. (Just a few pages, an easy read, don't worry.) Just remember that in this process you are storing the new page that you move to, not the old page that you are leaving!
For a simple SPA example, see this example: codepen.io/tamjk/pen/NWxWOxL
In regard to the URL, the method that the history object uses to load a new page state into the history stack, i.e. pushState(...), has an optional third parameter for associating a dummy URL for each web page that is stored.
Personally, when I first sorted out the BACK & FORWARD functions, I did not use dummy URLs, as the browser was being confused by them and I had enough to do sorting out the history sequence using just the first two parameters, i.e.
- the state object: a JSON object holding enough data to recreate the stored page
- a title for the page
I expect that you could also use a dummy URL, but I will leave that to the student as an exercise, as they say. You can add the URL of the new page if you want to.
In the example above, for the state object I just used the IDs of the page's nav link and its content element.
For the title, I programmatically changed the HTML's page title element with each change of page. I did this after noticing that the browser listed the previous pages according to the title element in the HTML code.
Unfortunately, this title does not show up on CodePen when you right-click on the browser BACK and FORWARD buttons due to CodePen's system not allowing it. But it will show on your own sites.
It's important that whatever method you use to store current web page states when using the navbar links to navigate, you DO NOT ADD page states to the browser history when you arrive at them using BACK or FORWARD buttons. Otherwise your history stack will have repetitions of entries going back and deletion of entries going forward.
In the CodePen, this was achieved by having the addToHistory(..) function separate from, and outside the scope of, the switchPage(...) function. This allows you to use the switchPage function both for normal navbar navigation and for browser BACK/FORWARD navigation. The third parameter of switchPage(...) is a boolean indicating whether the page is to be stored in history or not.
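The guard looks roughly like this (the names are assumptions based on the description above, not the actual CodePen code):

function switchPage(pageId, addToHist) {
    // ...show the nav link and content element for pageId...
    if (addToHist) {
        history.pushState({ page: pageId }, document.title);
    }
}

window.onpopstate = function (e) {
    // arriving via BACK/FORWARD: rebuild the page, but do NOT push it again
    if (e.state) {
        switchPage(e.state.page, false);
    }
};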
Anyway, this is just something to get you started.
In the LightSwitch HTML client, we have created a screen to display the work in progress for a particular business process.
This is to be displayed on a big screen, much like when you go to Argos to collect your order. Here's a screenshot...
We are using some JavaScript to refresh the page every 30 seconds.
setTimeout(function () {
    window.location.reload(1);
}, 30000);
However, there are two issues with this.
1. The 'maximum number of results' text input by the user is lost on refresh.
2. It doesn't look nice to refresh the whole page.
Is it therefore possible to just trigger each query to reload instead of the entire page?
(The data is provided to LightSwitch by a WCF RIA Service)
In the JavaScript, use screen.MyList.load(). It will reload the list asynchronously.
Note that IntelliSense does not always suggest list names on the screen object but will recognize them if you type the name.
Combined with the setTimeout() method and the created screen event, it should work.
I had the same issue and finally found the solution. I added this in my created event:
myapp.BrowseMembers.created = function (screen) {
    setInterval(function () { screen.Members.load(true); }, 1000);
};
It works, but the screen flickers when the data reloads.
Note that setTimeout only triggers once, whereas setInterval triggers every 1 second here.
I had the same problem with LightSwitch VS2012 Update 3.
In my case, invalidation alone is enough, so I can always work with a fresh entity.
This code runs once on the entry screen; it invalidates the loaded entities every 30 seconds and forces a refetch only when needed.
myapp.aaHome.created = function (screen) {
    setInterval(function () {
        screen.details.dataWorkspace.ApplicationData.Currencies._loadedEntities = {};
    }, 30000);
};
I'm fully aware that this question has been asked and answered everywhere, both on SO and off. However, every time there seems to be a different answer, e.g. this, this and that.
I don't care whether it uses jQuery or not - what's important is that it works, and is cross-browser.
So, what is the best way to preload images?
Unfortunately, that depends on your purpose.
If you plan to use the images for purposes of style, your best bet is to use sprites.
http://www.alistapart.com/articles/sprites2
However, if you plan to use the images in <img> tags, then you'll want to pre-load them with
function preload(sources)
{
    var images = [];
    for (var i = 0, length = sources.length; i < length; ++i) {
        images[i] = new Image();
        images[i].src = sources[i];
    }
}
(modified source taken from What is the best way to preload multiple images in JavaScript?)
Using new Image() does not involve the expense of DOM methods, but a new request for the specified image will still be added to the queue. As the image is, at this point, not actually added to the page, no re-rendering is involved. I would recommend, however, adding this at the end of your page (as all of your scripts should be, when possible) to prevent it from holding up more critical elements.
Edit: Edited to reflect comment quite correctly pointing out that separate Image objects are required to work properly. Thanks, and my bad for not checking it more closely.
Edit2: edited to make the reusability more obvious
Edit 3 (3 years later):
Due to changes in how browsers handle non-visible images (display:none or, as in this answer, never appended to the document) a new approach to pre-loading is preferred.
You can use an Ajax request to force early retrieval of images. Using jQuery, for example:
jQuery.get(source);
Or in the context of our previous example, you could do:
function preload(sources)
{
    jQuery.each(sources, function(i, source) { jQuery.get(source); });
}
Note that this doesn't apply to the case of sprites which are fine as-is. This is just for things like photo galleries or sliders/carousels with images where the images aren't loading because they are not visible initially.
Also note that this method does not work for IE (ajax is normally not used to retrieve image data).
Spriting
As others have mentioned, spriting works quite well for a variety of reasons; however, it's not as good as it's made out to be.
On the upside, you end up making only one HTTP request for your images. YMMV though.
On the downside, you are loading everything in one HTTP request. Since most current browsers are limited to 2 concurrent connections, the image request can block other requests. Hence YMMV, and something like your menu background might not render for a bit.
Multiple images share the same color palette, so there is some saving, but this is not always the case, and even so it's negligible.
Compression is improved because there is more shared data between images.
Dealing with irregular shapes is tricky, though. Combining all the images into one new file is another annoyance.
Low jack approach using <img> tags
If you are looking for the most definitive solution, then you should go with the low-jack approach, which I still prefer. Create <img> links to the images at the end of your document, set the width and height to 1x1 pixel, and additionally put them in a hidden div. If they are at the end of the page, they will load after other content.
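Something like this at the very end of the body (the file names are placeholders):

<div style="display: none;">
    <img src="/images/product-1.jpg" width="1" height="1" alt="">
    <img src="/images/product-2.jpg" width="1" height="1" alt="">
</div>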
As of January 2013 none of the methods described here worked for me, so here's what did instead, tested and working with Chrome 25 and Firefox 18. Uses jQuery and this plugin to work around the load event quirks:
function preload(sources, callback) {
    if (sources.length) {
        var preloaderDiv = $('<div style="display: none;"></div>').prependTo(document.body);
        $.each(sources, function (i, source) {
            $("<img/>").attr("src", source).appendTo(preloaderDiv);
            if (i == (sources.length - 1)) {
                $(preloaderDiv).imagesLoaded(function () {
                    $(this).remove();
                    if (callback) callback();
                });
            }
        });
    } else {
        if (callback) callback();
    }
}
Usage:
preload(['/img/a.png', '/img/b.png', '/img/c.png'], function() {
    console.log("done");
});
Note that you'll get mixed results if the cache is disabled, which it is by default on Chrome when the developer tools are open, so keep that in mind.
In my opinion, the multipart XMLHttpRequest introduced by some libraries will be the preferred solution in the coming years. However, IE < 8 still doesn't support data: URIs (even IE8 has limited support, allowing up to 32 KB). Here is an implementation of parallel image preloading - http://code.google.com/p/core-framework/wiki/ImagePreloading - it's bundled in a framework but still worth taking a look at.
This was from a long time ago, so I don't know how many people are still interested in preloading images.
My solution was even simpler.
I just used CSS:
#hidden_preload {
height: 1px;
left: -20000px;
position: absolute;
top: -20000px;
width: 1px;
}
Here goes my simple solution with a fade in on the image after it is loaded.
function preloadImage(_imgUrl, _container) {
    var image = new Image();
    // attach the handler before setting src so a cached image still fires it
    image.onload = function () {
        $(_container).fadeTo(500, 1);
    };
    image.src = _imgUrl;
}
For my use case I had a carousel with full screen images that I wanted to preload. However since the images display in order, and could take a few seconds each to load, it's important that I load them in order, sequentially.
For this I used the async library's waterfall() method (https://github.com/caolan/async#waterfall)
// Preload all images in the carousel, in order.
var image_preload_array = [];
$('div.carousel-image').each(function () {
    var url = $(this).data('image-url');
    image_preload_array.push(function (callback) {
        var $img = $('<img/>');
        // fire the callback once this image has loaded, then start the load
        $img.load(function () {
            callback(null);
        })[0].src = url;
    });
});
async.waterfall(image_preload_array);
This works by creating an array of functions; each function is passed a callback parameter which it must execute to call the next function in the array. The first parameter of callback() is an error message, which will exit the sequence if a non-null value is provided, so we pass null each time.
See this:
http://www.mattfarina.com/2007/02/01/preloading_images_with_jquery
Related question on SO:
jquery hidden preload