I'm conditionally loading JavaScript code using Modernizr's loader (the integrated yepnope.js). Here's my code:
Modernizr.load({
    test: Modernizr.geolocation,
    yep: ['js/get-native-geo-data.js', 'https://www.google.com/jsapi'],
    nope: ['js/get-geo-data-by-ip.js', 'https://www.google.com/jsapi'],
    complete: function () {
        google.load("maps", "3",
            {other_params: "sensor=false", callback: init});
    }
});
It works, but the Network tab in both Firebug and the Chrome Developer Tools shows that it loads get-native-geo-data.js twice. I've added a console.log() to get-native-geo-data.js, and it prints the message only once. So what makes both Firebug and the Dev Tools report two network calls, both returning 200 with a size of 3K?
This is how Firebug reports the response headers for each GET (they are the same):
HTTP/1.1 200 OK
Date: Thu, 20 Dec 2012 19:39:52 GMT
Server: HttpComponents/4.1.3
Content-Length: 3054
Content-Type: application/x-javascript
Connection: keep-alive
After running it through the Charles monitoring tool I see the same result: the file is requested twice. So where is the bug: in Modernizr, in yepnope, or in my head?
Not sure if you've seen it in the yepnope documentation:
I'm seeing two requests in my dev tools, why is it loading everything twice?

Depending on your browser and your server this could mean a couple different things. Due to the nature of how yepnope works, there are two requests made for every file. The first request is to load the resource into the cache and the second request is to execute it (but since it's in the cache, it should execute immediately). Seeing two requests is pretty normal as long as the second request is cached. If you notice that the second request isn't cached (and your script load times are doubling), then make sure you are sending the correct cache headers to allow the caching of your scripts. This is vital to yepnope. It will not work without proper caching enabled. We actually test to make sure things aren't loaded twice in our test suite, so if you think we may have a bug in your browser regarding double loading, we encourage you to run the test suite to see if the double-loading test passes.
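Note that the response headers you posted contain no Cache-Control or Expires header at all, so the second (execute) request cannot be served from cache. A response along these lines (the max-age value is just an example, not a prescription) would allow it:

HTTP/1.1 200 OK
Date: Thu, 20 Dec 2012 19:39:52 GMT
Content-Type: application/x-javascript
Content-Length: 3054
Cache-Control: public, max-age=3600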
Related
When using the chrome.webNavigation API, the following code (used in the background page of an extension):
chrome.webNavigation.onCommitted.addListener(function(data) {
    console.log('onCommitted', data.tabId, data.url);
});
chrome.webNavigation.onBeforeNavigate.addListener(function(data) {
    console.log('onBeforeNavigate', data.tabId, data.url);
});
produces this output when navigating to, say, 'http://drive.google.com':
newTest.js:18 onBeforeNavigate 606 http://drive.google.com/
newTest.js:18 onCommitted 606 https://drive.google.com/
Somewhere, even before the request was sent to the server, Chrome changed the URL from http to https.
This behaviour is also exhibited in other cases. For instance, for 'http://getpocket.com', Chrome also adds a new path:
newTest.js:18 onBeforeNavigate 626 http://getpocket.com/
newTest.js:18 onCommitted 626 https://getpocket.com/beta/
The server-side redirects all come after onCommitted, but this is one case where Chrome modifies URLs even before it sends a request to the server.
Is this behaviour documented somewhere, so I can predictably handle it?
For Google Drive, it's HTTP Strict Transport Security kicking in.
After it's set up, the browser will automatically redirect everything to HTTPS.
You can look under the hood at net-internals, e.g. chrome://net-internals/#hsts
static_sts_domain: drive.google.com
static_upgrade_mode: STRICT
In the case of Pocket, this seems to be a 301 Moved Permanently redirect.
By design, browsers cache this response permanently (at least Chrome does) and rewrite links automatically without hitting the server until said cache is cleared.
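If you need to handle these rewrites predictably in an extension, one possible approach (a sketch of mine, not from the Chrome docs) is to correlate onBeforeNavigate and onCommitted by tabId and hostname rather than by full URL, since HSTS and cached 301s can change the scheme and path:

// Sketch: match navigations by tabId + hostname so that an http->https
// upgrade (HSTS) or a cached 301 rewrite still correlates.
// Requires a Chrome version that supports the URL constructor.
var pending = {}; // tabId -> hostname seen in onBeforeNavigate

chrome.webNavigation.onBeforeNavigate.addListener(function (data) {
    pending[data.tabId] = new URL(data.url).hostname;
});

chrome.webNavigation.onCommitted.addListener(function (data) {
    var committedHost = new URL(data.url).hostname;
    if (pending[data.tabId] === committedHost) {
        // Same site: the scheme or path may have been rewritten client-side.
        console.log('navigation committed for', committedHost);
    }
    delete pending[data.tabId];
});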
I'm simply trying to use XHR to precache some resources, but the cache is not behaving as expected.
Here are the bounds of the problem:
I know the resource URLs in advance (of course).
I don't know their content-types (mix of CSS, images, and other).
They will always be same-origin.
I control the cache headers.
They can be cached forever for all I care.
I've always been under the impression that XHR uses the browser cache more or less like any other resource, but I've never rigorously tested that. Here's what I'm doing (a code sketch follows this list):
Request all resources up-front with XHR.
Explicitly set request header Cache-Control: max-age=3600 (Chrome was setting max-age=0 for some reason).
Set the following response headers on the server:
Cache-Control: public, max-age=3600
Date: now
Expires: now + 1 hour
[Content-Type, Content-Length]
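As a concrete sketch of steps 1 and 2 above (the URL list is a placeholder):

// Minimal sketch of the precaching pass; all URLs are same-origin.
var urls = ['/css/site.css', '/img/sprite.png', '/js/app.js'];

urls.forEach(function (url) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', url, true);
    // Chrome was sending Cache-Control: max-age=0, so override it explicitly.
    xhr.setRequestHeader('Cache-Control', 'max-age=3600');
    xhr.onload = function () {
        console.log('precached', url, xhr.status);
    };
    xhr.send();
});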
Here's what I'm seeing:
XHR always fetches the resource (confirmed on server and with dev tools).
Subsequent requests (via image/css/etc elements) always fetch (even after the XHRs have completed) on a cold cache.
But they always use the cache when it's warm.
I've poked at it in various ways, but this behavior never seems to change.
After much wailing and gnashing of teeth, I believe I've proven that this approach simply won't work on all browsers. Here's my experience thus far:
Firefox: Works a charm.
Chrome: Seldom works. It's as though XHR uses a different cache than the elements (even though I'm pretty sure that's not the case; I haven't had time to delve into the Chrome code to figure out exactly what's going on).
Safari: Apparently random. Sometimes resource requests kicked off from elements retrieve from the cache, sometimes not. I'm sure there's a method, but it appears to be madness from the outside.
In the end, I had to switch to the somewhat more craptastic-but-reliable approach of creating a hidden iframe, injecting script/img elements into it, then waiting on the iframe window's onload event. This works, but gives no fine-grained feedback in terms of which elements are loaded (getting reliable onload events from the individual elements is more "cross-browser tricky" than just waiting on the whole frame).
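For anyone attempting the same fallback, it looks roughly like this (a sketch; the function and parameter names are mine):

// Hidden-iframe fallback: inject img elements and wait for the frame's onload.
function precacheViaIframe(urls, done) {
    var iframe = document.createElement('iframe');
    iframe.style.display = 'none';
    document.body.appendChild(iframe);

    // Coarse-grained completion signal: fires once the whole frame has loaded.
    iframe.onload = done;

    var doc = iframe.contentDocument || iframe.contentWindow.document;
    doc.open();
    urls.forEach(function (url) {
        doc.write('<img src="' + url + '">');
    });
    doc.close();
}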
I'd love to understand more precisely what's going on in Chrome/Safari, but sadly don't have the time to dig in further.
We use a heatmap at work (it is secure so I can't post it here and I certainly cannot give the URL). Now, the heatmap is a single image, but it is clickable on different segments: there are different icons on the map, and each of them pops up a different window (so it is not just a single click event for the whole image)...
Now this map is generated by a program called Whatsup Gold, and it is definitely not Flash...
The map constantly monitors line connection activity: when an internet line goes down, the green icon turns red. The map is refreshed every few minutes...
What I want to know: is there a way, either using a browser plugin, or JavaScript, or any other method, to notify me that the status of the map has changed (any change on the map) without having to open the browser window every time?
Here is the relevant part of the markup (this is the whole map):
<a href="mappickhost.asp?map=WhatsUpTL.wup">
    <img border="0" src="WUG1.jpg" ismap="">
</a>
Update:
Request and response headers (as retrieved from Firebug's Network tab):
Request:
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
Accept-Encoding: gzip, deflate
Accept-Language: en-US,en;q=0.5
Authorization: Basic dGxpbms6dGxpbms=
Cache-Control: max-age=0
Connection: keep-alive
Host: ***************
Response:
Cache-Control: no-cache
Content-Type: image/jpeg
Date: Fri, 17 May 2013 07:06:30 GMT
Expires: Mon, 17 Jul 1980 20:00:00 GMT
Pragma: no-cache
Server: WhatsUp_Gold/8.0
I have added a screenshot of the Firebug console in Firefox (press F12 to open it). I executed an XMLHttpRequest (the line at the bottom) and you can see (the black line) that it connected to Stack Overflow. Firebug shows XHR connections in the console by default, but you can select what you want to see by clicking the little arrow button to the right of the word "Console" in the tab that activates the console.
Even having the console open on Stack Overflow shows that the site connects every so often to update question status and comments.
If nothing shows up there, then it might show in the Net tab. The page has to make a connection to somewhere to see if the network is still up. If it connects to a service, then XHR is likely; if it just downloads a text file from time to time, or sets the src of an image, then it'll show up only in the Net tab every time it checks.
We use a heatmap at work (it is secure so I can't post it here and I certainly cannot give the url). Now the heatmap is a single image, but it is clickable on different segments
Surely it's not just the image that's causing the interaction; plain images can't do that on their own. If it's not Flash, then it could be JavaScript powering it. There must be some library behind it, and it could have an API. I suggest you start reading the documentation (if any).
The map constantly monitors line connection activity
This is done in a lot of ways, and the most common are polling AJAX or WebSockets. To check this out, take a look at the Network tab of the browser's debugger. If there's a supported API, check it out first and save yourself the hassle of debugging the implementation.
If no API exists, inspect the requests and find a way to replicate it. Once you can replicate it or use the API, you can create your plugin or create a custom app for it.
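If you end up replicating the requests yourself, a hypothetical polling sketch could look like this (the image URL comes from the markup above; the interval and the notification are placeholders):

// Poll the map image every few minutes and report any byte-level change.
function pollMap(url, intervalMs, onChange) {
    var last = null;
    setInterval(function () {
        var xhr = new XMLHttpRequest();
        // Cache-buster; the server already sends no-cache, but be explicit.
        xhr.open('GET', url + '?t=' + Date.now(), true);
        xhr.responseType = 'arraybuffer';
        xhr.onload = function () {
            var bytes = new Uint8Array(xhr.response).join(',');
            if (last !== null && bytes !== last) {
                onChange();
            }
            last = bytes;
        };
        xhr.send();
    }, intervalMs);
}

// Example: check every five minutes.
pollMap('WUG1.jpg', 5 * 60 * 1000, function () {
    alert('The heatmap changed!');
});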
I make an Ajax request in which I set the response cacheability and last modified headers:
if (!String.IsNullOrEmpty(HttpContext.Current.Request.Headers["If-Modified-Since"]))
{
    HttpContext.Current.Response.StatusCode = 304;
    HttpContext.Current.Response.StatusDescription = "Not Modified";
    return null;
}

HttpContext.Current.Response.Cache.SetCacheability(HttpCacheability.Public);
HttpContext.Current.Response.Cache.SetLastModified(DateTime.UtcNow);
This works as expected. The first time I make the Ajax request, I get 200 OK. The second time I get 304 Not Modified.
When I hard refresh in Chrome (Ctrl+F5), I get 200 OK - fantastic!
When I hard refresh in Internet Explorer/Firefox, I get 304 Not Modified. However, every other resource (JS/CSS/HTML/PNG) returns 200 OK.
The reason is that the "If-Modified-Since" header is sent for XMLHttpRequests regardless of a hard refresh in those browsers. I believe Steve Souders documents it here.
I have tried setting an ETag and conditioning on "If-None-Match", to no avail (this was mentioned in the comments on Steve Souders' page).
Has anyone got any gems of wisdom here?
Thanks,
Ben
Update
I could check the "If-Modified-Since" against a stored last modified date. However, hopefully this question will help other SO users who find the header to be set incorrectly.
Update 2
Whilst the request is sent with the "If-Modified-Since" header each time, Internet Explorer won't even make the request if an expiry isn't set or is set to a future date. Useless!
Update 3
This might as well be a live blog now. Internet Explorer doesn't bother making the second request when the host is localhost. Using a real IP or the loopback address will work.
Prior to IE10, IE does not apply the Refresh Flags (see http://blogs.msdn.com/b/ieinternals/archive/2010/07/08/technical-information-about-conditional-http-requests-and-the-refresh-button.aspx) to requests that are not made as part of loading the document.
If you want, you can adjust the target URL to contain a nonce to prevent the cached copy from satisfying a future request. Alternatively, you can send max-age=0 to force IE to conditionally revalidate the resource before each reuse.
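On the client side, the nonce variant might look like this (a sketch; the URL is a placeholder, and jQuery's built-in cache: false option achieves the same thing by appending a _=timestamp parameter):

// A unique query-string value means the cached copy can never match.
$.ajax({
    url: '/api/data?nonce=' + new Date().getTime(),
    success: function (data) { /* ... */ }
});

// Equivalent, letting jQuery append the timestamp:
$.ajax({ url: '/api/data', cache: false, success: function (data) { /* ... */ } });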
As for why the browser reuses a cached resource that didn't specify a lifetime, please see http://blogs.msdn.com/b/ie/archive/2010/07/14/caching-improvements-in-internet-explorer-9.aspx
The solution I came upon for consistent control was managing the cache headers for all request types.
So I forced standard requests to use the same policy as XMLHttpRequests, which meant telling IE to use the following cache policy: Cache-Control: private, max-age=0.
For some reason, IE was not honoring the headers for various request types. For example, my cache policy for standard requests defaulted to the browser's, while for XMLHttpRequests it was set to the aforementioned policy. Making a request to something like /url as a standard GET request rendered the result properly. Unfortunately, making the same request to /url as an XMLHttpRequest would not even hit the server, because the GET request was cached and the XMLHttpRequest was hitting the same URL.
So either force your cache policy on all fronts, or make sure you're using different access points (URIs) for your request types. My solution was the former.
I'm using jQuery's .ajax() to call a server (actually local Django runserver) and get a response.
On the server console, I can see that the JSON request comes in, the proper JSON response is made, and everything looks OK.
But in my browser (tested on Firefox 3.6 and Safari 4.0.4, using jQuery 1.4.2), the response body seems to be empty (the response code is 200, and the headers otherwise look OK).
Testing the response from the command line, I get the answer I expect.
$ curl http://127.0.0.1:8000/api/answers/answer/1 --data "text=test&user_name=testy&user_location=testyville&site=http%3A%2F%2Flocalhost%3A8888%2Fcs%2Fjavascript_answer_form.html&email_address="
{"answer_permalink": "http://127.0.0.1:8000/questions/1", "answer_id": 16, "question_text": "What were the skies like when you were young?", "answer_text": "test", "question_id": "1"}
I am making the request from an HTML file on my local machine that is not being served by a web server; it's just addressed using file://. The Django server is also local, at 127.0.0.1:8000, the default location.
Thanks for any suggestions!
-Jim
Unless you specifically allow your browser alternate settings for local files, everything remains bound by the cross-domain security policy. Files not served from a domain cannot request files from a domain (like localhost).
I'm not sure how cross-domain policy works with ports; you may be able to put this file in your port-80-accessible localhost folder (if you have one) and get the job done. Otherwise, you're stuck, unless you can change browser settings to make exceptions (and even then I'm not sure this is doable in any standard browsers).
Add an error: function(data){ alert(data); } handler to see if your $.ajax call is failing.
Change 'complete' to 'success' in your .ajax() call. 'complete' signals when the ajax operation is done but does not receive the response data; 'success' is called on a successful request and receives the response. 'error' is the counterpart to 'success', used for error handling.
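In other words, something like this (a sketch; the URL and fields are taken from the curl example above):

$.ajax({
    url: 'http://127.0.0.1:8000/api/answers/answer/1',
    type: 'POST',
    data: {
        text: 'test',
        user_name: 'testy',
        user_location: 'testyville'
    },
    success: function (data) {
        console.log('response body:', data); // receives the response data
    },
    error: function (xhr, textStatus) {
        alert('request failed: ' + textStatus); // fires when the call fails
    }
});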
I think browsers (at least some; Safari, in my case) treat files served off the file system as trusted sources in terms of the same-origin policy. So that turned out to be a red herring here.