So I run a site that uses a lot of JavaScript and Ajax. I understand how to make sure users get a fresh copy when the browser first loads the site. But what happens if I need them to refresh their browser after they have already loaded the site?
I want to change the Ajax that is served to the client to speed things up, but this is going to cause errors for the users who have not yet refreshed their browser.
The only solution I can come up with is that when a new version of the JavaScript file is required, the site uses a popup that asks the users to force refresh their browsers. (This won't really fix the current version, but would prevent future issues.)
I hate to use a popup for something that I could do automatically. Is there a better way to force updates for the client?
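For reference, the version-check popup I have in mind would be roughly this (just a sketch; it assumes jQuery and a hypothetical /version.json endpoint that reports the currently deployed version):

// Rough sketch of the version-check idea. Assumes jQuery and a
// hypothetical /version.json endpoint returning {"version": "..."}.
var loadedVersion = null;

setInterval(function() {
    $.getJSON('/version.json', function(data) {
        if (loadedVersion === null) {
            loadedVersion = data.version; // remember the version we started with
        } else if (data.version !== loadedVersion) {
            // a new version has been deployed since this page was loaded
            if (confirm('A new version of this site is available. Reload now?')) {
                window.location.reload();
            }
        }
    });
}, 60000); // poll once per minute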
window.location.href = "http://example.com"
replaces the current page with the one pointed to by http://example.com.
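If the goal is simply to re-fetch the current page, window.location.reload() does the same thing without hard-coding a URL (a sketch; note the boolean argument was a non-standard, Firefox-only extension):

// Reload the current page. The true argument (bypass the cache) was a
// non-standard Firefox extension; other browsers simply ignore it.
window.location.reload(true);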
You sound like you are having trouble with your JavaScript getting an updated version of the data it loads through Ajax methods, is that correct? For instance, if two Ajax calls try to load 'data.txt', then the second call merely uses the cached version.
You also may be having trouble with loading new versions of your script itself.
The way around both of these problems is to add a randomly-generated query string to your script source and your Ajax source.
For example, make one script that loads your main script, like this:
/* loader1.js */
document.write('<script src="mainjavascript.js?.rand=', Math.random(), '"></script>');
And in your HTML, just do
<script src="loader1.js"></script>
The same method works for JavaScript Ajax requests as well. Assuming that "client" is a new XMLHttpRequest() object and has been properly set up with an onreadystatechange handler and so on, then you simply append the same query string, like this:
client.open('GET', 'data.txt?.rand=' + Math.random(), true);
client.send();
You may be using a library to do your Ajax requests, in which case it's even easier: just specify the data URL as 'data.txt?.rand=' + Math.random() instead of merely 'data.txt'.
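If that library happens to be jQuery, it can append the cache-busting parameter for you; a minimal sketch:

// jQuery appends a "_=<timestamp>" parameter automatically when
// cache is set to false, defeating the browser cache per request.
$.ajax({
    url: 'data.txt',
    cache: false, // the request actually goes out as data.txt?_=1234567890123
    success: function(data) {
        console.log('fresh data received:', data);
    }
});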
I noticed something odd regarding Ajax and image loading. Suppose you have an image on the page, and Ajax requests the same image. One would guess the Ajax request would hit the browser cache, or that the browser should at least make only one request, with the resulting image going both to the page and to the script that wants to read/process the image.
Surprisingly, I found that even when the javascript waits for the entire page to load, the image request still makes a new request! Is this a known bug in Firefox and Chrome, or something bad jQuery ajax is doing?
Here you can see the problem, open Fiddler or Wireshark and set it to record before you click "run":
<script src="http://code.jquery.com/jquery-1.11.1.min.js"></script>
<div id="something" style="background-image:url(http://jsfiddle.net/img/logo-white.png);">Hello</div>
<script>
jQuery(function($) {
$(window).load(function() {
$.get('http://jsfiddle.net/img/logo-white.png');
})
});
</script>
Note that in Firefox it makes two requests, both resulting in 200-OK, and sending the entire image back to the browser twice. In Chromium, it at least correctly gets a 304 on second request instead of downloading the entire contents twice.
Oddly enough, IE11 downloads the entire image twice, while it seems IE9 aggressively caches it and downloads it once.
Ideally I would hope the ajax wouldn't make a second request at all, since it is requesting exactly the same url. Is there a reason css and ajax in this case usually have different caches, as though the browser is using different cache storage for css vs ajax requests?
I use the newest Google Chrome and it makes one request. But in your JSFiddle example you are loading the image twice: first with CSS via the style attribute, and second in your code inside the script tag. Improved: JSFIDDLE
<div id="something" style="background-image:url('http://jsfiddle.net/img/logo-white.png');">Hello</div>
<script>
jQuery(window).load(function() {
jQuery.get('http://jsfiddle.net/img/logo-white.png');
});
// or
jQuery(function($) {
jQuery.get('http://jsfiddle.net/img/logo-white.png');
});
</script>
jQuery(function($) {...}) is called when the DOM is ready, while jQuery(window).load(...) fires once the DOM is ready and every image and other resource has loaded. Nesting one inside the other makes no sense; see also: window.onload vs $(document).ready()
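A quick sketch of the difference between the two:

// Fires as soon as the DOM has been parsed; images may still be loading.
jQuery(function($) {
    console.log('DOM ready');
});

// Fires only after the DOM plus all images and other resources have loaded.
jQuery(window).load(function() {
    console.log('everything loaded');
});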
Sure, the image shows up twice in the Network tab of the web inspector: first through your CSS and second through your JavaScript. The second request is probably served from cache.
UPDATE: every request, cached or not, is shown in this tab. See the following example: http://jsfiddle.net/95mnf9rm/4/
There are 5 requests with cached AJAX calls and 5 without caching, and all 10 requests are shown in the 'Network' tab.
When you use your image twice in CSS, it's only requested once. But if you explicitly make an AJAX call, then the browser makes an AJAX call, as you asked it to. It may then be served from cache or not, but it is explicitly requested, isn't it?
This "problem" could a be a CORS pre-flight test.
I had noticed this in my applications awhile back, that the call to retrieve information from a single page application made the call twice. This only happens when you're accessing URLs on a different domain. In my case we have APIs we've built and use on a different server (a different domain) than that of the applications we build. I noticed that when I use a GET or POST in my application to these RESTFUL APIs the call appears to be made twice.
What is happening is something called pre-flight (https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS), an initial request is made to the server to see if the ensuing call is allowed.
Excerpt from MDN:
Unlike simple requests, "preflighted" requests first send an HTTP request by the OPTIONS method to the resource on the other domain, in order to determine whether the actual request is safe to send. Cross-site requests are preflighted like this since they may have implications to user data. In particular, a request is preflighted if:
It uses methods other than GET, HEAD or POST. Also, if POST is used to send request data with a Content-Type other than application/x-www-form-urlencoded, multipart/form-data, or text/plain, e.g. if the POST request sends an XML payload to the server using application/xml or text/xml, then the request is preflighted.
It sets custom headers in the request (e.g. the request uses a header such as X-PINGOTHER)
Your fiddle tries to load a resource from another domain via Ajax, which is exactly the cross-domain case described above.
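To illustrate the distinction from the excerpt, here is a sketch (the header name is just the MDN example; the URL is hypothetical):

// A plain cross-domain GET is a "simple" request: no preflight is sent.
$.get('http://api.other-domain.example/data.json');

// Adding a custom header makes the request non-simple, so the browser
// first sends an OPTIONS preflight before the actual GET.
$.ajax({
    url: 'http://api.other-domain.example/data.json',
    headers: { 'X-PINGOTHER': 'pingpong' } // custom header triggers the preflight
});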
I think I created a better example. Here is the code:
<img src="smiley.png" alt="smiley" />
<div id="respText"></div>
<script src="//ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<script>
$(window).load(function(){
    $.get("smiley.png", function(){
        $("#respText").text("ajax request succeeded");
    });
});
You can test the page here.
According to Firebug and the Chrome network panel, the image is returned with status code 200 and the image for the Ajax request comes from the cache, in both Firefox and Chrome.
So I cannot find any unexpected behavior.
Cache control on Ajax requests has always been a blurred and buggy subject (example).
The problem gets even worse with cross-domain references.
The fiddle link you provided is from jsfiddle.net, which is an alias for fiddle.jshell.net. All the code runs inside the fiddle.jshell.net domain, but your code references an image from the alias, and browsers will consider that cross-domain access.
To fix it, you could change both urls to http://fiddle.jshell.net/img/logo-white.png or just /img/logo-white.png.
The helpful folks at Mozilla gave some details as to why this happens. Apparently Firefox assumes an "anonymous" request could be different from a normal one, and for this reason it makes a second request, not considering the cached value with different headers to be the same request.
https://bugzilla.mozilla.org/show_bug.cgi?id=1075297
This may be a shot in the dark, but here's what I think is happening.
According to http://api.jquery.com/jQuery.get/:
dataType
Type: String
The type of data expected from the server.
Default: Intelligent Guess (xml, json, script, or html).
Gives you 4 possible return types. There is no datatype of image/gif being returned. Thus, the browser doesn't check its cache for the src document, as it is being delivered as a different MIME type.
The server decides what can be cached and for how long. However, it is again up to the browser whether or not to honor that; most web browsers, like Chrome, Firefox, Safari, Opera and IE, do.
The point I want to make here is that your web server might be configured to disallow caching of the content; thus, when you request the image through CSS and JS, the browser follows your server's orders, doesn't cache it, and so requests the image twice...
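For illustration, a server configured roughly like this would force the double download (an Express-style sketch with made-up paths, not the poster's actual configuration):

// Express-style sketch with made-up paths: a no-store header tells the
// browser never to cache the image, so both the CSS reference and the
// Ajax call go over the network.
var express = require('express');
var app = express();

app.get('/img/logo-white.png', function(req, res) {
    res.set('Cache-Control', 'no-store'); // forbid caching entirely
    res.sendFile(__dirname + '/img/logo-white.png');
});

app.listen(3000);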
I want a JS-accessible image
Have you tried manipulating CSS with jQuery? It is pretty fun - you get full CRUD (create, read, update, delete) access to CSS properties. For example, resize an image on the server side:
$('#container').css('background', 'url(somepage.php?src=image_source.jpg'
    + '&w=' + $("#container").width()
    + '&h=' + $("#container").height() + '&zc=1)');
Surprisingly, I found that even when the javascript waits for the entire page to load, the image request still makes a new request! Is this a known bug in Firefox and Chrome, or something bad jQuery ajax is doing?
It is blatantly obvious that this is not a browser bug.
The computer is deterministic and does exactly what you tell it to (not what you want it to do). If you want images cached, that is handled on the server side. Depending on who handles caching, it can be:
Server (like IIS or Apache) cache - typically caches things that are reused often (e.g. twice in 5 seconds)
Server-side application cache - typically the application reuses a custom server cache, or you create sprite images, or ...
Browser cache - the server adds cache headers to images and the browser maintains the cache
If it is not clear, then let me make it clear: you don't cache images with JavaScript.
Ideally I would hope the ajax wouldn't make a second request at all, since it is requesting exactly the same url.
What you try to do is to preload images.
Once an image has been loaded in any way into the browser, it will be
in the browser cache and will load much faster the next time it is
used whether that use is in the current page or in any other page as
long as the image is used before it expires from the browser cache.
So, to precache images, all you have to do is load them into the
browser. If you want to precache a bunch of images, it's probably best
to do it with javascript as it generally won't hold up the page load
when done from javascript. You can do that like this:
function preloadImages(array) {
    if (!preloadImages.list) {
        preloadImages.list = [];
    }
    for (var i = 0; i < array.length; i++) {
        var img = new Image();
        img.onload = function() {
            var index = preloadImages.list.indexOf(this);
            if (index !== -1) {
                // remove this one from the array once it's loaded,
                // for memory consumption reasons
                preloadImages.list.splice(index, 1);
            }
        };
        preloadImages.list.push(img);
        img.src = array[i];
    }
}

preloadImages(["url1.jpg", "url2.jpg", "url3.jpg"]);
Then, once they've been preloaded like this via javascript, the browser will have them in its cache and you can just refer to the normal URLs in other places (in your web pages) and the browser will fetch that URL from its cache rather than over the network.
Source : How do you cache an image in Javascript
Is there a reason css and ajax in this case usually have different caches, as though the browser is using different cache storage for css vs ajax requests?
Even in the absence of information, do not jump to conclusions!
One big reason to use image preloading is if you want to use an image
for the background-image of an element on a mouseOver or :hover event.
If you only apply that background-image in the CSS for the :hover
state, that image will not load until the first :hover event and thus
there will be a short annoying delay between the mouse going over that
area and the image actually showing up.
Technique #1: Load the image on the element's regular state, only shift it away with background position. Then move the background position to display it on hover.
#grass { background: url(images/grass.png) no-repeat -9999px -9999px; }
#grass:hover { background-position: bottom left; }
Technique #2: If the element in question already has a background-image applied and you need to change that image, the above won't work. Typically you would go for a sprite here (a combined background image) and just shift the background position. But if that isn't possible, try this: apply the background image to another page element that is already in use but doesn't have a background image.
#random-unsuspecting-element {
background: url(images/grass.png) no-repeat -9999px -9999px; }
#grass:hover { background: url(images/grass.png) no-repeat; }
The idea of creating new page elements to use for this preloading technique may pop into your head, like #preload-001, #preload-002, but that's rather against the spirit of web standards. Hence the use of page elements that already exist on your page.
The browser will make the two requests on the page, because an image referenced from the CSS is also fetched with a GET request (not Ajax) before the entire page is rendered.
The window load handler is similar to the onload attribute and runs before the rest of the page finishes, so the image from the Ajax call will be requested before the image on the div, which is processed during the page load.
If you would like to load an image after the entire page is loaded, you should use document.ready() instead.
I'm trying to debug a JavaScript written in the MooTools framework. Right now I am developing a web application on top of Rails, and my webserver is rails s, which boots WEBrick.
When I modify a particular tree.js file that's called within a MooTools init script,
require: {
    css: [MUI.path.plugins + 'tree/css/style.css'],
    js: [MUI.path.plugins + 'tree/scripts/tree.js'],
    onload: function(){
        if (buildTree) buildTree('tree1');
    }
},
the changes are not loaded, as the headers being sent to the client say Last-Modified: 10 July, 2010..., which is obviously not true since I just modified the file.
How do I get rid of this annoying caching? If I go directly to the script in my browser (Chrome), it doesn't show the changes until I hit refresh, but this doesn't fix my problem: when I go back to my application and hit refresh, it still loads the pre-modified script.
This has happened to me in FF as well; I think it is a cache header sent by the server or the browser itself.
Anyway, a simple way to avoid this problem while in development is adding a random param to the file name of the script.
Instead of calling 'tree/scripts/tree.js', use 'tree/scripts/tree.js?' + random; that should invalidate all caches.
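Applied to the require block from the question, that would look something like this (a sketch; only the script you are actively editing needs the parameter):

// Sketch: cache-bust only the script under active development.
require: {
    css: [MUI.path.plugins + 'tree/css/style.css'],
    js: [MUI.path.plugins + 'tree/scripts/tree.js?' + Math.random()],
    onload: function(){
        if (buildTree) buildTree('tree1');
    }
},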
As frisco says, adding a random number in development does the trick, but you will likely find that the problem still affects you in production. You want to push new JavaScript changes to your users but can't until their browsers stop caching the file. In order to do this, just get the file's mtime and add that as the query string. It will only change when the file is modified, so the JavaScript will be loaded from cache if it has not changed, or from the server if it has.
PHP has the function filemtime but as I'm not familiar with Ruby, I'm afraid I can't help you further in that direction (sorry!). However, this answer seems to accomplish what you want.
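If your server-side stack were JavaScript, the same mtime trick is only a few lines (a Node-style sketch with made-up paths, not the poster's Rails setup):

// Node-style sketch of the mtime idea: stamp the script URL with the
// file's last-modified time, so the URL only changes when the file does.
var fs = require('fs');

function scriptTag(publicPath) {
    var mtime = fs.statSync('public' + publicPath).mtime.getTime();
    return '<script src="' + publicPath + '?' + mtime + '"></script>';
}

console.log(scriptTag('/tree/scripts/tree.js'));
// e.g. <script src="/tree/scripts/tree.js?1278763200000"></script>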
Try the Ctrl+F5 trick to avoid hitting the browser cache.
More info here:
What requests do browsers' "F5" and "Ctrl + F5" refreshes generate?
I work on an internal corporate system that has a web front-end using Tomcat.
How can I monitor the rendering time of specific pages in a browser (IE6)?
I would like to be able to record the results in a log file (separate log file or the Tomcat access log).
EDIT: Ideally, I need to monitor the rendering on the clients accessing the pages.
The Navigation Timing API is available in modern browsers (IE9+) except Safari:
function onLoad() {
    var now = new Date().getTime();
    var page_load_time = now - performance.timing.navigationStart;
    console.log("User-perceived page loading time: " + page_load_time);
}
In case the browser has JavaScript enabled, one of the things you could do is write an inline script and send it first thing in your HTML. The script would do two things:
Record the current system time in a JS variable (if you're lucky, the time could roughly correspond to the page rendering start time).
Attach a JS function to the page onLoad event. This function will then query the current system time once again, subtract the start time from step 1, and send it to the server along with the page location (or some unique ID you could insert into the inline script dynamically on your server).
<script language="JavaScript">
var renderStart = new Date().getTime();
window.onload = function() {
    var elapsed = new Date().getTime() - renderStart;
    // send the info to the server
    alert('Rendered in ' + elapsed + 'ms');
}
</script>
... usual HTML starts here ...
You'd need to make sure that the page doesn’t override onload later in the code, but adds to the event handlers list instead.
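One way to stay out of the way of other handlers is to register the timing function as one load listener among many (a sketch; attachEvent covers the old IE versions this question targets, and renderStart comes from the inline snippet above):

// Register as one load listener among many instead of assigning
// window.onload directly (which would overwrite other handlers).
function reportRenderTime() {
    var elapsed = new Date().getTime() - renderStart; // renderStart from the inline snippet
    // send `elapsed` to the server here instead of alerting
}

if (window.addEventListener) {
    window.addEventListener('load', reportRenderTime, false);
} else if (window.attachEvent) {
    window.attachEvent('onload', reportRenderTime); // IE6-8
}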
As far as non-invasive techniques are concerned, Hammerhead measures complete load time (including JavaScript execution), albeit in Firefox only.
I've seen usable results when a JavaScript snippet could be added globally to measure the start and end of each page load operation.
Have a look at Selenium - they offer a remote control that can automatically start different browsers (e.g. IE6), load pages, test for specific content on the page. At the end reports are generated that also show the rendering times.
Since others are posting answers that use other browsers, I guess I will too. Chrome has a very detailed profiling system that breaks down the rendering time of the page and shows the time it took for each step along the way.
As for IE, you might want to consider writing a plugin. There seem to be few tools like this on the market. Maybe you could sell it.
On Firefox you can use Firebug to monitor load time. With the YSlow plugin you can even get recommendations how to improve the performance.
I'm building a website that focuses on loading only the data that has to be loaded.
I've built an example here and would like to know if this is a good way to build a webpage.
There are some problems when building a site like that, e.g.
bookmarking
going back and forth in history
SEO (since the content is basically not really connected)
so here is the example:
index.html
<html>
<head>
    <title>Somebodys Website</title>
    <!-- JavaScript -->
    <script type="text/javascript" src="jquery-1.3.2.min.js"></script>
    <script type="text/javascript" src="pagecode.js"></script>
</head>
<body>
    <div id="navigation">
        <ul>
            <li><a id="link_Welcome" class="nav" href="#">Welcome</a></li>
            <li><a id="link_Page1" class="nav" href="#">Page1</a></li>
        </ul>
    </div>
    <div id="content">
    </div>
</body>
</html>
pagecode.js
var http = null;

$(document).ready(function()
{
    // create XMLHttpRequest (with ActiveX fallbacks for old IE)
    try {
        http = new XMLHttpRequest();
    }
    catch(e){
        try{
            http = new ActiveXObject("Msxml2.XMLHTTP");
        }
        catch(e){
            http = new ActiveXObject("Microsoft.XMLHTTP");
        }
    }
    // set navigation click events
    $('.nav').click(function(e)
    {
        loadPage(e);
    });
});
function loadPage(e){
    // e.g. "link_Welcome" becomes "Welcome.html"
    var page = e.currentTarget.id.slice(5) + ".html";
    http.open("POST", page);
    http.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
    http.setRequestHeader("Connection", "close");
    http.onreadystatechange = function(){ changeContent(e); };
    http.send(null);
}

function changeContent(e){
    if(http.readyState == 4){
        // load page
        var response = http.responseText;
        $('#content')[0].innerHTML = response;
    }
}
Welcome.html
<b>Welcome</b>
<br />
To this website....
So as you can see, I'm loading the content based on the IDs of the links in the navigation section. So to make the "Page1" navigation item linkable, i would have to create a "Page1.html" file with some content in it.
Is this way of loading data for your web page very wrong? and if so, what is a better way to do it?
thanks for your time
EDIT:
This was just a very short example, and I'd like to say that for users with JavaScript disabled it is still possible to provide the whole page (additionally) in static form.
e.g.
<li><a class="nav" id="link_Welcome" href="Welcome.html">Welcome</a></li>
and this Welcome.html would contain all the overhead of the basic index.html file.
By doing so, the ajax using version of the page would be some kind of extra feature, wouldn't it?
No, it isn't a good way to do it.
Ajax is a tool best used with a light touch.
Reinventing frames using it simply recreates all the problems of frames, except that it replaces the issue of orphan pages with complete invisibility to search engines (and other user agents that don't support JS or have it disabled).
By doing so, the ajax using version of the page would be some kind of extra feature, wouldn't it?
No. Users won't notice, and you break bookmarking, linksharing, etc.
It's wrong to use AJAX (or any JavaScript, for that matter) only for the sake of using it (unless you're learning how to use AJAX, which is a different matter).
There are situations where the use of JavaScript is good (mostly when you're building a custom user interface inside your browser window) and where AJAX really shines. But loading static web pages with JavaScript is very wrong: first, you tie yourself to browsers that can run your JS; second, you increase the load on your server and on the client side.
More technical details:
The function loadPage should be rewritten using jQuery: $.post(). This is a random shot, not tested:
function loadPage(e){
    // e.g. "link_Welcome" becomes "Welcome.html"
    var page = e.currentTarget.id.slice(5) + ".html";
    $.post(page, null, function(response){
        $('#content')[0].innerHTML = response;
    });
}
Be warned, I did not test it, and I might have gotten this function a little wrong. But... dude, you are using jQuery already - now abuse it! :)
When considering implementing an AJAX pattern on a website you should first ask yourself the question: why? There are several good reasons to implement AJAX but also several bad reasons depending on what you're trying to achieve.
For example, if your website is like Facebook, where you want to offer end-users with a rich user interface where you can immediately see responses from friends in chat, notifications when users post something to your wall or tag you in a photo, without having to refresh the entire page, AJAX is GREAT!
However, if you are creating a website where the main content area changes for each of the top-level menu items using AJAX, this is a bad idea for several reasons. First, and what I consider to be very important, SEO (Search Engine Optimization) is NOT optimized: search engine crawlers do not follow AJAX requests unless they are loaded via the onclick event of an anchor tag. Ultimately, in this example, you are not getting the value out of the rich experience, and you are losing a lot of potential viewers.
Second, users will have trouble bookmarking pages unless you implement a smart way to parse URLs and map them to AJAX calls (see the sketch after this list).
Third, users will have problems properly navigating using the back and forward buttons if you have not implemented a custom client-side mechanism to manage history.
Lastly, each browser interprets JavaScript differently, and the more JavaScript you write, the more potential there is for losing cross-browser compatibility, unless you implement a framework such as jQuery, Dojo, EXT, or MooTools that handles most of that for you.
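For the bookmarking and history points above, the usual workaround in this era is hash-based routing; a minimal sketch (assumes jQuery, hashchange support in the browser, and per-page HTML fragments like Welcome.html, as in the question):

// Minimal hash-routing sketch: makes Ajax-loaded pages bookmarkable and
// keeps Back/Forward working. Assumes fragments like Welcome.html.
function loadFromHash() {
    var page = window.location.hash.slice(1) || 'Welcome'; // "#Page1" -> "Page1"
    $('#content').load(page + '.html');
}

$(window).bind('hashchange', loadFromHash); // fires on Back/Forward too
$(document).ready(loadFromHash);            // honor a bookmarked URL on first load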
gabtub, you are not wrong: you can get AJAX-intensive web sites to be SEO-compatible, with bookmarking and Back/Forward buttons (history navigation in general), working with JavaScript disabled (avoiding site duplication), and accessible...
There is one problem, you must get back to server-centric.
You can get some "howtos" here.
And take a look to ItsNat.
How about unobtrusiveness (or what should I call it?)?
If the user has no JavaScript for some reason, he'll only see a list with Welcome and Page1.
Yes it's wrong. What about users without JavaScript? Why not do this sort of work server-side? Why pay the cost of multiple HTTP requests instead of including the files server-side so they can be downloaded in a single fetch? Why pay the cost of non-JavaScript enabled clients not being able to view your stuff (Google's spider being an important user who'll be alienated by this approach)? Why? Why?
I am debugging a large, complex web page that has a lot of JavaScript, jQuery, Ajax and so on. Somewhere in that code I am getting a rogue request (I think it is an empty img) that calls the root of the server. I know it is not in the HTML or the CSS, and I am pretty convinced that somewhere in the JavaScript code the request is being made, but I can't track it down. I am used to using Firebug, VS and other debugging tools, but am looking for some way to find out where this is executed - so that I can find the offending line amongst about 150 .js files.
Apart from putting in a gazillion console outputs of 'you are now here', does anyone have suggestions for a debugging tool that could highlight where in JavaScript requests to external resources are made? Any other ideas?
Step by step debugging will take ages - I have to be careful what I step into (jQuery source - yuk!) and I may miss the crucial moment
What about using the step-by-step script debugger in Firebug?
I also think that could be a very interesting enhancement to Firebug, being able to add a breakpoint on AJAX calls.
You spoke of jQuery source...
Assuming the request goes through jQuery, put a debug statement in the jQuery source's get() function that kicks in if the URL is '/'. Maybe then you can tell from the call stack.
You can see all HTTP request done through JavaScript using the Firebug console.
If you want to track all HTTP requests made through jQuery manually, you can use jQuery's global Ajax events:
$(document).bind('ajaxSend', function(event, request, ajaxOptions)
{
    // Will be called before every jQuery AJAX call
    // ('ajaxSend' is the global event; 'beforeSend' is a per-request option)
});
For more information, see jQuery documentation on AJAX events.
If it's an HTTP request sent to a web server, I would recommend using the TamperData plugin for Firefox. Just install the plugin, start Tamper Data, and every request sent will prompt you to tamper/continue/abort first.
Visit this page at Mozilla website
Just a guess here, but are you using ThickBox? It tries to load an image right at the start of the code.
First thing I would do is check whether this rogue request is an Ajax request or an image load request, via the Net panel in Firebug.
If it's Ajax, then you can overload the $.ajax function with your own, do a stack trace, and log the URL requested before handing off to the original $.ajax.
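A sketch of that overloading approach (load it right after jQuery itself so every caller goes through the wrapper):

// Sketch: wrap jQuery.ajax so any request for "/" logs a stack trace
// identifying the caller. console.trace works in Firebug and Chrome.
var originalAjax = jQuery.ajax;
jQuery.ajax = function(url, options) {
    // normalize the two signatures: ajax(settings) and ajax(url, settings)
    var settings = (typeof url === 'string') ? (options || {}) : (url || {});
    var target = (typeof url === 'string') ? url : settings.url;
    if (target === '/') {
        console.trace('rogue request to "/" made from:');
    }
    return originalAjax.apply(this, arguments);
};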
If it's an image, it's not ideal, but if you can respond to the image request with a server-side sleep (i.e. a PHP file that just sleeps for 20 seconds), you might be able to hang the app and get a starting guess as to where the problem might be.