defer script execution for some time - javascript

Is it possible, in JavaScript, to defer the execution of a self-executing function that resides at the top-level closure? (For context, jQuery uses this: it has a self-executing function at global scope.)
My objective here is to make the whole external script execute at a later time without deferring its download, so that it downloads as soon as possible and executes after an event (I decide when that event happens).
I found this also:
https://developer.mozilla.org/en-US/docs/DOM/element.onbeforescriptexecute
Sounds like a "part 1" of what I want. If I cancel, the script is not executed. but then... How do I effectively execute it later?
Note1: I have absolutely no control over that external script, so I cannot change it.
Note2: Neither I nor the person who requested this cares if there's slowness in IE due to stuff like this (yeah, true! (s)he's anti-IE like me).
Edit (additional information):
@SLaks: The server sends the default headers that Apache sends (so it seems), plus a Cache-Control header ordering it to cache for 12 hours.
From the POV of the rules about domains, it is cross-domain.
I'm not allowed to change the configuration of the other server. I can try asking for it, though.

Unfortunately it's not possible to do with the <script> tag, because you can't get the text of an external script included that way in order to wrap it in code that delays it. If you could, that would open up a whole new world of possibilities for Cross-Site Request Forgery (CSRF).
However, you can fetch the script with an AJAX request as long as it's hosted at the same domain name. Then you can delay its execution.
$.ajax({
    url: '/path/to/myscript.js',
    dataType: 'text',  // keep jQuery from guessing 'script' and executing it on arrival
    success: function(data) {
        // Replace this with whatever you're waiting for to delay it
        $(document).ready(function() {
            eval(data);
        });
    }
});
EDIT
I've never used $.getScript() myself since I use RequireJS for including scripts, so I don't know if it will help with what you need, but have a look at this: http://api.jquery.com/jQuery.getScript/
FYI
If you want to know what I'm talking about with the CSRF attacks, check this out.
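If the goal is to run the downloaded script on your own custom event rather than on DOM ready, a small variation of the AJAX approach above should work. This is only a sketch, assuming a same-origin script; 'myCustomEvent' and the path are placeholders. Requesting the file with dataType: 'text' matters, so that jQuery doesn't guess 'script' and run it as soon as it arrives.
var pendingScript = $.ajax({ url: '/path/to/myscript.js', dataType: 'text' });

$(document).on('myCustomEvent', function () {
    // the download has usually finished long before the event fires;
    // done() simply waits for it if it has not
    pendingScript.done(function (source) {
        $.globalEval(source);  // execute the already-downloaded script now
    });
});

// later, whenever you decide the time has come:
$(document).trigger('myCustomEvent');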

Related

How can I cancel consecutive requests to my server? [duplicate]

What would cause a page to be canceled? I have a screenshot of the Chrome Developer Tools.
This happens often but not every time. It seems like once some other resources are cached, a page refresh will load the LeftPane.aspx. And what's really odd is this only happens in Google Chrome, not Internet Explorer 8. Any ideas why Chrome would cancel a request?
We fought a similar problem where Chrome was canceling requests to load things within frames or iframes, but only intermittently and it seemed dependent on the computer and/or the speed of the internet connection.
This information is a few months out of date, but I built Chromium from scratch, dug through the source to find all the places where requests could get cancelled, and slapped breakpoints on all of them to debug. From memory, these are the only places where Chrome will cancel a request:
The DOM element that caused the request to be made got deleted (i.e. an IMG is being loaded, but before the load happened, you deleted the IMG node)
You did something that made loading the data unnecessary (e.g. you started loading an iframe, then changed the src or overwrote the contents).
There are lots of requests going to the same server, and a network problem on earlier requests showed that subsequent requests weren't going to work (a DNS lookup error, an earlier identical request resulted in e.g. an HTTP 400 error code, etc.).
In our case we finally traced it down to one frame trying to append HTML to another frame, that sometimes happened before the destination frame even loaded. Once you touch the contents of an iframe, it can no longer load the resource into it (how would it know where to put it?) so it cancels the request.
A status of (canceled) may also happen for ajax requests fired from JavaScript events:
<script>
$("#call_ajax").on("click", function(event){
    $.ajax({
        ...
    });
});
</script>
<button id="call_ajax">call</button>
The event successfully sends the request, but it is then canceled (though it is still processed by the server). The reason is that a button inside a form submits the form on click, no matter whether you make an ajax request in the same click event.
To prevent the request from being cancelled, JavaScript's event.preventDefault(); has to be called:
<script>
$("#call_ajax").on("click", function(event){
    event.preventDefault();
    $.ajax({
        ...
    });
});
</script>
NB: Make sure you don't have any wrapping form elements.
I had a similar issue where my button with onclick={} was wrapped in a form element. When clicking the button the form is also submitted, and that messed it all up...
Another thing to look out for could be the AdBlock extension, or extensions in general.
But "a lot" of people have AdBlock....
To rule out extensions, open a new tab in incognito, making sure that "Allow in incognito" is off for the extension(s) you want to test.
In my case, I found that it was jQuery's global timeout setting: a jQuery plugin set the global timeout to 500ms, so whenever a request exceeded 500ms, Chrome cancelled it.
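For anyone hitting this, here is a minimal sketch of that failure mode and a per-request override (assuming jQuery; '/slow-endpoint' is a placeholder):
// somewhere, a plugin sets a global timeout...
$.ajaxSetup({ timeout: 500 });

// ...so any request slower than 500 ms is aborted and shows up as (canceled)
$.get('/slow-endpoint');

// overriding the timeout for a specific request avoids it (0 = no timeout)
$.ajax({ url: '/slow-endpoint', timeout: 0 });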
You might want to check the "X-Frame-Options" header. If it's set to SAMEORIGIN or DENY then the iframe insertion will be canceled by Chrome (and other browsers) per the spec.
Also, note that some browsers support the ALLOW-FROM setting but Chrome does not.
To resolve this, you will need to remove the "X-Frame-Options" header. This could leave you open to clickjacking attacks, so you will need to decide what the risks are and how to mitigate them.
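A quick way to see what the server is sending is to read the header from the console. This is only a sketch: it works for same-origin URLs (or CORS responses that expose the header), and '/page-to-embed' is a placeholder.
fetch('/page-to-embed', { method: 'HEAD' }).then(function (response) {
    // null means the header is absent (or not exposed to this origin)
    console.log('X-Frame-Options:', response.headers.get('X-Frame-Options'));
});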
Here's what happened to me: the server was returning a malformed "Location" header for a 302 redirect.
Chrome failed to tell me this, of course. I opened the page in firefox, and immediately discovered the problem.
Nice to have multiple tools :)
Another place we've encountered the (canceled) status is in a particular TLS certificate misconfiguration. If a site such as https://www.example.com is misconfigured such that the certificate does not include the www. but is valid for https://example.com, Chrome will cancel this request and automatically redirect to the latter site. This is not the case for Firefox.
Currently valid example: https://www.pthree.org/
A cancelled request happened to me when redirecting between secure and non-secure pages on separate domains within an iframe. The redirected request showed in dev tools as a "cancelled" request.
I have a page with an iframe containing a form hosted by my payment gateway. When the form in the iframe was submitted, the payment gateway would redirect back to a URL on my server. The redirect recently stopped working and ended up as a "cancelled" request instead.
It seems that Chrome (I was using Windows 7 Chrome 30.0.1599.101) no longer allowed a redirect within the iframe to go to a non-secure page on a separate domain. To fix it, I just made sure any redirected requests in the iframe were always sent to secure URLs.
When I created a simpler test page with only an iframe, there was a warning in the console (which I had previously missed, or maybe it didn't show up):
[Blocked] The page at https://mydomain.com/Payment/EnterDetails ran insecure content from http://mydomain.com/Payment/Success
The redirect turned into a cancelled request in Chrome on PC, Mac and Android. I don't know if it is specific to my website setup (SagePay Low Profile) or if something has changed in Chrome.
Chrome Version 33.0.1750.154 m consistently cancels image loads if I am using the Mobile Emulation pointed at my localhost; specifically with User Agent spoofing on (vs. just Screen settings).
When I turn User Agent spoofing off, image requests aren't canceled and I see the images.
I still don't understand why; in the former case, where the request is cancelled, the Request Headers (CAUTION: Provisional headers are shown) have only:
Accept
Cache-Control
Pragma
Referer
User-Agent
In the latter case, all of those plus others like:
Cookie
Connection
Host
Accept-Encoding
Accept-Language
Shrug
I got this error in Chrome when I redirected via JavaScript:
<script>
window.location.href = "devhost:88/somepage";
</script>
As you see I forgot the 'http://'. After I added it, it worked.
Here is another case of a request being canceled by Chrome, which I just encountered and which is not covered by any of the answers above.
In a nutshell
A self-signed certificate not being trusted on my Android phone.
Details
We are in the development/debug phase. The URL points to a self-signed host. The code is like:
location.href = 'https://some.host.com/some/path'
Chrome just canceled the request silently, leaving no clue for a web-development newbie like myself to fix the issue. Once I downloaded and installed the certificate on the Android phone, the issue was gone.
If you use axios, this might help you:
// change timeout delay:
instance.defaults.timeout = 2500;
https://github.com/axios/axios#config-order-of-precedence
In my case, I had an anchor with a click event like
<a href="" onclick="somemethod($index, hour, $event)">
Inside the click event I made a network call, and Chrome was cancelling the request. An anchor with an empty href reloads the page, and at the same time the click event fires a network call that gets cancelled. Whenever I replaced the href with void, like
<a href="javascript:void(0)" onclick="somemethod($index, hour, $event)">
The problem went away!
If you use Observable-based HTTP requests like those built into Angular (2+), the HTTP request can be canceled when the observable gets unsubscribed (a common thing when you're using the RxJS 6 switchMap operator to combine streams). In most cases it's enough to use the mergeMap operator instead, if you want the request to complete.
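A rough sketch of the difference, using RxJS 6 and its ajax helper rather than Angular's HttpClient (the button element and the '/api/data' URL are assumptions):
import { fromEvent } from 'rxjs';
import { switchMap, mergeMap } from 'rxjs/operators';
import { ajax } from 'rxjs/ajax';

const button = document.querySelector('#search');

// switchMap unsubscribes from the previous inner observable on each click,
// so the in-flight request is aborted and shows up as (canceled)
fromEvent(button, 'click').pipe(
    switchMap(() => ajax.getJSON('/api/data'))
).subscribe(console.log);

// mergeMap keeps every inner observable alive, so each request completes
fromEvent(button, 'click').pipe(
    mergeMap(() => ajax.getJSON('/api/data'))
).subscribe(console.log);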
I had faced the same issue, somewhere deep in our code we had this pseudocode:
create an iframe
onload of iframe submit a form
After 2 seconds, remove the iframe
Thus, when the server took more than 2 seconds to respond, the iframe the server was writing the response into had already been removed; the response still had to be written, but there was no iframe to write it to, so Chrome cancelled the request. To avoid this, I made sure that the iframe is removed only after the response is complete (or you can change the target to "_blank").
Thus, one of the reasons is:
when the resource (an iframe in my case) that something is being written into is removed or deleted before the write finishes, the request will be cancelled
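A sketch of that fix (the element ids are made up): remove the iframe from its own load event, which fires once the response has been fully written into it, instead of after a fixed delay.
var form = document.getElementById('upload-form');
var iframe = document.getElementById('response-frame');

iframe.name = 'response-frame';          // form.target matches the iframe's name
form.target = 'response-frame';          // the response gets written into the iframe
iframe.addEventListener('load', function onDone() {
    iframe.removeEventListener('load', onDone);
    iframe.remove();                     // safe now: the response is complete
});
form.submit();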
I had embedded all font types (woff, woff2, ttf) when embedding a web font in a style sheet. Recently I noticed that Chrome cancels the requests for ttf and woff when woff2 is present. I use Chrome version 66.0.3359.181 right now, but I am not sure when Chrome started canceling the extra font types.
We had this problem with a <button> tag in the form that was supposed to send an ajax request from JS. But this request was canceled, because the browser submits the form automatically on any click on a button inside the form.
So if you really want to use a button instead of a regular div or span on the page, and you want to send the form through JS, you should set up a listener with a preventDefault call.
e.g.
$('button').on('click', function(e){
    e.preventDefault();
    //do ajax
    $.ajax({
        ...
    });
})
I had the exact same thing with two CSS files that were stored in another folder outside my main css folder. I'm using Expression Engine and found that the issue was in the rules in my htaccess file. I just added the folder to one of my conditions and it fixed it. Here's an example:
RewriteCond %{REQUEST_URI} !(images|css|js|new_folder|favicon.ico)
So it might be worth checking your htaccess file for any potential conflicts.
The same thing happened to me when calling a .js file with $.ajax and making an ajax request inside it; what I did was load it normally instead.
In my case, the code that shows the e-mail client window caused Chrome to stop loading images:
document.location.href = mailToLink;
moving it to $(window).load(function () {...}) instead of $(function () {...}) helped.
In case this helps anybody: I came across the cancelled status when I left out the return false; in the form submit handler. This caused the ajax send to be immediately followed by the submit action, which overwrote the current page. The code is shown below, with the important return false at the end.
$('form').submit(function() {
    $.validator.unobtrusive.parse($('form'));
    var data = $('form').serialize();
    // serialize() returns a string, so append the token rather than
    // setting a property on it
    data += '&__RequestVerificationToken=' +
        encodeURIComponent($('input[name=__RequestVerificationToken]').val());
    if ($('form').valid()) {
        $.ajax({
            url: this.action,
            type: 'POST',
            data: data,
            success: submitSuccess,
            error: submitFailed
        });
    }
    return false; //needed to stop default form submit action
});
Hope that helps someone.
For anyone coming from LoopbackJS and attempting to use the custom stream method like the one provided in their chart example: I was getting this error using a PersistedModel; switching to a basic Model fixed my issue of the eventsource status cancelling out.
Again, this is specifically for the Loopback API. And since this is a top answer and near the top on Google, I figured I'd throw this into the mix of answers.
For me, the 'canceled' status was because the file did not exist. Strange that Chrome does not show a 404.
It was as simple as an incorrect path for me. I would suggest the first step in debugging would be to see if you can load the file independently of ajax etc.
The requests might have been blocked by a tracking protection plugin.
It happened to me when loading 300 images as background images. I'm guessing that once the first one timed out, it cancelled all the rest, or it hit the maximum number of concurrent requests. I need to implement loading them 5 at a time.
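Something like the sketch below would do that (imageUrls is assumed to be your array of 300 URLs); each finished load starts the next one, so at most five requests are in flight at once:
function loadInBatches(urls, limit) {
    var index = 0;
    function next() {
        if (index >= urls.length) return;
        var img = new Image();
        img.onload = img.onerror = next;  // start another load when this one settles
        img.src = urls[index++];
    }
    // prime the pump with `limit` parallel loads
    for (var i = 0; i < limit && i < urls.length; i++) next();
}

loadInBatches(imageUrls, 5);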
One of the reasons could be that XMLHttpRequest.abort() was called somewhere in the code; in this case, the request will have the cancelled status in the Chrome Developer Tools Network tab.
In my case, it started appearing after the Chrome 76 update.
Due to an issue in my JS code, window.location was being updated multiple times, which resulted in cancelling the previous request.
Although the issue was present before, Chrome only started cancelling the request after the update to version 76.
I had the same issue when updating a record. Inside save() I was prepping the raw data taken from the form to match the database format (doing a lot of mapping of enum values, etc.), and this intermittently cancelled the PUT request. I resolved it by taking the data prepping out of save() and creating a dedicated dataPrep() method for it. I made dataPrep async and awaited all the memory-intensive data conversion, then returned the prepped data to save(), where it is used in the HTTP PUT client. I made sure to await dataPrep() before calling the PUT method:
const dataToUpdate = await dataPrep();
http.put(apiUrl, dataToUpdate);
This solved the intermittent cancelling of requests.
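A sketch of that shape, with fetch standing in for whatever HTTP client was used (all names and the URL are illustrative):
async function dataPrep(rawData) {
    // heavy enum mapping / formatting kept out of save()
    return { ...rawData, status: rawData.status === 1 ? 'ACTIVE' : 'INACTIVE' };
}

async function save(rawData) {
    const dataToUpdate = await dataPrep(rawData);   // finish prepping first...
    return fetch('/api/records/' + rawData.id, {    // ...then issue the PUT once
        method: 'PUT',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(dataToUpdate)
    });
}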

Continue to load HTML when <script src='...' is taking too long to load?

I was asked in a job interview:
if a script takes more than X seconds to load, the whole page should be loaded (except this synchronous script). Note that we should not change the script to run asynchronously (for example, via appendChild). No server.
Well, I had a couple of approaches:
- removing the script's DOM node
- window.abort
- messing up the document with document.write("'</s'+'cript>'")
- moving it to an iframe
- adding CSP headers
Nothing worked.
Here is the code (for example) to remove the script DOM tag. Notice that there is TEXT after the script, so it is expected to see the text after 1 second.
<body>
<script>
    setTimeout(function (){
        document.querySelector('#s').parentNode.removeChild(document.querySelector('#s'))
    }, 1000); //change here
</script>
<script id='s' src="https://www.mocky.io/v2/5c3493592e00007200378f58?mocky-delay=40000ms"></script>
<span>!!TEXT!!</span>
</body>
Question
I can't seem to find the trick for making the page continue loading after a certain timeout. How can I do that?
Fiddle
BTW I've seen interesting approaches here
Since it has been pointed out to me that you said this interview happened a "long time ago", the solution below is probably not what they expected at that time.
I will admit I have no idea what they were expecting then, but with today's APIs, it is doable:
You can set up a ServiceWorker which will handle all the requests of your page, as a proxy would (but hosted in the browser), and which would be able to abort a request if it takes too long.
But to do so, we need the AbortController API which is still considered an experimental technology, so once again, that may not be what they expected as an answer during this interview...
Anyway, here is what our ServiceWorker could look like to accomplish the requested task:
self.addEventListener('fetch', async function(event) {
    const controller = new AbortController();
    const signal = controller.signal;
    const fetchPromise = fetch(event.request.url, {signal})
        .catch(err => new Response('console.log("timedout")')); // in case you want some default content
    // 5 seconds timeout:
    const timeoutId = setTimeout(() => controller.abort(), 5000);
    event.respondWith(fetchPromise);
});
And here it is as a plnkr. (There is a kind of cheat in that it uses two pages, to avoid having to wait the full 50s once: one for registering the ServiceWorker, and the other one where the slow network occurs. But since the requirements say that the slow network happens "on occasions", I think it's still valid to assume we were able to register it at least once.)
But if that's really what they wanted, then you'd even be better just to cache this file.
And as said in comments, if you ever face this issue IRL, then by all means try to fix the root cause instead of this hack.
My guess is that this was a problem they were trying to solve but had not, so they were asking candidates for a solution, and they would be impressed with anyone who had one. My hope would be that it was a trick question and they knew it and wanted to see if you did.
Given their definitions, the task is impossible using what was widely supported in 2017. It is sort of possible in 2019 using ServiceWorkers.
As you know, in the browser, the window runs in a single thread that runs an event loop. Everything that is asynchronous is some kind of deferred task object that is put on a queue for later execution1. Communications between the window's threads and workers' threads are asynchronous, Promises are asynchronous, and generally XHR's are done asynchronously.
If you want to call a synchronous script, you need to make a call that blocks the event loop. However, JavaScript does not have interrupts or a pre-emptive scheduler, so while the event loop is blocked, there is nothing else that can run to cause it to abort. (That is to say, even if you could spin up a worker thread that the OS runs in parallel with the main thread, there is nothing the worker thread can do to cause the main thread to abort the read of the script.) The only hope you could have of a timeout on the fetch of the script is if there were a way to set an operating-system-level timeout on the TCP request, and there is not.
The only way to get a script other than via the HTML script tag, which has no way to specify a timeout, is XHR (short for XMLHttpRequest), or fetch where it is supported. Fetch only has an asynchronous interface, so it is no help. While it is possible to make a synchronous XHR request, according to MDN (the Mozilla Developer Network), many browsers have deprecated synchronous XHR support on the main thread entirely. Even worse, XHR.timeout "shouldn't be used for synchronous XMLHttpRequests requests used in a document environment or it will throw an InvalidAccessError exception."
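For illustration, the only way to block on a script fetch yourself is a synchronous XHR, and it runs into exactly the constraints described above (a sketch; '/slow-script.js' is a placeholder):
var xhr = new XMLHttpRequest();
xhr.open('GET', '/slow-script.js', false);  // false = synchronous, blocks the event loop
// xhr.timeout = 5000;  // would throw InvalidAccessError on a sync request in a document
xhr.send();                                 // nothing can interrupt this while it blocks
eval(xhr.responseText);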
So if you block the main thread to wait for the synchronous loading of the JavaScript, you have no way to abort the loading when you have decided it is taking too long. If you do not block the main thread, then the script will not execute until after the page has loaded.
Q.E.D
Partial Solution with Service Workers
@Kaiido argues this can be solved with ServiceWorkers. While I agree that ServiceWorkers were designed to solve problems like this, I disagree that they answer this question for a few reasons. Before I get into them, let me say that I think Kaiido's solution is fine in the much more general case of having a Single Page App that is completely hosted on HTTPS implement some kind of timeout for resources to prevent the whole app from locking up, and my criticisms of that solution are more about it being a reasonable answer to the interview question than any failing of the solution overall.
OP said the question came from "a long time ago" and service worker support in production releases of Edge and Safari is less than a year old. ServiceWorkers are still not considered "standard" and AbortController is still today not fully supported.
In order for this to work, Service Workers have to be installed and configured with a timeout prior to the page in question being loaded. Kaiido's solution loads a page to load the service workers and then redirects to the page with the slow JavaScript source. Although you can use clients.claim() to start the service workers on the page where they were loaded, they still will not start until after the page is loaded. (On the other hand, they only have to be loaded once and they can persist across browser shutdowns, so in practice it is not entirely unreasonable to assume that the service workers have been installed.)
Kaiido's implementation imposes the same timeout for all resources fetched. In order to apply only to the script URL in question, the service worker would need to be preloaded with the script URL before the page itself was fetched. Without some kind of whitelist or blacklist of URLs, the timeout would apply to loading the target page itself as well as loading the script source and every other asset on the page, synchronous or not. While of course it is reasonable in this Q&A environment to submit an example that is not production ready, the fact that to limit this to the one URL in question means that the URL needs to be separately hard coded into the service worker makes me uncomfortable with this as a solution.
ServiceWorkers only work on HTTPS content. I was wrong. While ServiceWorkers themselves have to be loaded via HTTPS, they are not limited to proxying HTTPS content.
That said, my thanks to Kaiido for providing a nice example of what a ServiceWorker can do.
1There is an excellent article from Jake Archibald that explains how asynchronous tasks are queued and executed by the window's event loop.
If the page doesn't reach the block that sets window.isDone = true, the page will be reloaded with a querystring telling it not to write the slow script tag to the page in the first place.
I'm using setTimeout(..., 0) to write the script tag to the page outside of the current script block; this does not make the loading asynchronous.
Note that the HTML is properly blocked by the slow script, then after 3 seconds the page reloads with the HTML appearing immediately.
This may not work inside of jsBin, but if you test it locally in a standalone HTML page, it will work.
<!DOCTYPE html>
<html>
<head>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width">
    <title>JS Bin</title>
</head>
<body>
<script>
    setTimeout(function (){
        // window.stop();
        window.location.href = window.location.href + "?prevent";
    }, 3000); //change here
</script>
<script>
    function getSearchParams(){
        return window.location.search.slice(1).split('&').reduce((obj, t) => {
            const pair = t.split('=')
            obj[pair[0]] = pair[1]
            return obj
        }, {})
    }
    console.log(getSearchParams())
    if(!getSearchParams().hasOwnProperty('prevent')) setTimeout(e => document.write('<script id="s" src="https://www.mocky.io/v2/5c3493592e00007200378f58?mocky-delay=4000ms"><\/script>'), 0)
</script>
<span>!!TEXT!!</span>
</body>
</html>
Here is a solution that uses window.stop(), query params, and PHP.
<html>
<body>
<?php if (!isset($_GET["loadType"]) || $_GET["loadType"] != "nomocky"): ?>
<script>
    var waitSeconds = 3; //the number of seconds you are willing to wait on the hanging script
    var timerInSeconds = 0;
    var timer = setInterval(() => {
        timerInSeconds++;
        console.log(timerInSeconds);
        if (timerInSeconds >= waitSeconds) {
            console.log("wait time exceeded. stop loading the script");
            window.stop();
            window.location.href = window.location.href + "?loadType=nomocky";
        }
    }, 1000);

    setTimeout(function(){
        document.getElementById("myscript").addEventListener("load", function(e){
            if (document.getElementById("myscript")) {
                console.log("loaded after " + timerInSeconds + " seconds");
                clearInterval(timer);
            }
        });
    }, 0);
</script>
<?php endif; ?>
<?php if (!isset($_GET["loadType"]) || $_GET["loadType"] != "nomocky"): ?>
<script id='myscript' src="https://www.mocky.io/v2/5c3493592e00007200378f58?mocky-delay=20000ms"></script>
<?php endif; ?>
<span>!!TEXT!!</span>
</body>
</html>
You can use the async attribute in the script tag; it will load your JS file asynchronously.
<script src="file.js" async></script>

Why do browsers inefficiently make 2 requests here?

I noticed something odd regarding ajax and image loading. Suppose you have an image on the page, and ajax requests the same image - one would guess that the ajax request would hit the browser cache, or at least that only one request would be made, with the resulting image going both to the page and to the script that wants to read/process it.
Surprisingly, I found that even when the javascript waits for the entire page to load, the image request still makes a new request! Is this a known bug in Firefox and Chrome, or something bad jQuery ajax is doing?
Here you can see the problem, open Fiddler or Wireshark and set it to record before you click "run":
<script src="http://code.jquery.com/jquery-1.11.1.min.js"></script>
<div id="something" style="background-image:url(http://jsfiddle.net/img/logo-white.png);">Hello</div>
<script>
jQuery(function($) {
    $(window).load(function() {
        $.get('http://jsfiddle.net/img/logo-white.png');
    })
});
</script>
Note that in Firefox it makes two requests, both resulting in 200 OK, sending the entire image back to the browser twice. In Chromium, it at least correctly gets a 304 on the second request instead of downloading the entire contents twice.
Oddly enough, IE11 downloads the entire image twice, while it seems IE9 aggressively caches it and downloads it once.
Ideally I would hope the ajax wouldn't make a second request at all, since it is requesting exactly the same url. Is there a reason css and ajax in this case usually have different caches, as though the browser is using different cache storage for css vs ajax requests?
I use the newest Google Chrome and it makes one request. But in your JSFiddle example you are requesting the image twice: first with CSS via the style attribute and second in your code via the script tag. Improved: JSFIDDLE
<div id="something" style="background-image:url('http://jsfiddle.net/img/logo-white.png');">Hello</div>
<script>
jQuery(window).load(function() {
jQuery.get('http://jsfiddle.net/img/logo-white.png');
});
// or
jQuery(function($) {
jQuery.get('http://jsfiddle.net/img/logo-white.png');
});
</script>
jQuery(function($) {...}) is called when the DOM is ready, and jQuery(window).load(...) when the DOM is ready and every image and other resource has loaded. Nesting the two makes no sense; see also: window.onload vs $(document).ready()
Sure, the image is loaded two times in the Network tab of the web inspector: first through your CSS and second through your JavaScript. The second request is probably cached.
UPDATE: But every request, whether cached or not, is shown in this tab. See the following example: http://jsfiddle.net/95mnf9rm/4/
There are 5 requests with cached AJAX calls and 5 without caching, and 10 requests are shown in the 'Network' tab.
When you use your image twice in CSS, it's only requested once. But if you explicitly make an AJAX call, then the browser makes an AJAX call, as you asked it to. And then maybe it's cached or not, but it was explicitly requested, wasn't it?
This "problem" could a be a CORS pre-flight test.
I had noticed this in my applications awhile back, that the call to retrieve information from a single page application made the call twice. This only happens when you're accessing URLs on a different domain. In my case we have APIs we've built and use on a different server (a different domain) than that of the applications we build. I noticed that when I use a GET or POST in my application to these RESTFUL APIs the call appears to be made twice.
What is happening is something called pre-flight (https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS), an initial request is made to the server to see if the ensuing call is allowed.
Excerpt from MDN:
Unlike simple requests, "preflighted" requests first send an HTTP request by the OPTIONS method to the resource on the other domain, in order to determine whether the actual request is safe to send. Cross-site requests are preflighted like this since they may have implications to user data. In particular, a request is preflighted if:
It uses methods other than GET, HEAD or POST. Also, if POST is used to send request data with a Content-Type other than application/x-www-form-urlencoded, multipart/form-data, or text/plain, e.g. if the POST request sends an XML payload to the server using application/xml or text/xml, then the request is preflighted.
It sets custom headers in the request (e.g. the request uses a header such as X-PINGOTHER)
Your fiddle tries to load a resource from another domain via ajax.
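As a rough illustration of the difference (the URL is hypothetical, not the fiddle's code):
// a "simple" cross-origin GET goes straight out, no preflight
fetch('https://api.example.com/data');

// adding a custom header (or using PUT/DELETE, or a JSON Content-Type) makes
// the browser send an OPTIONS preflight first, so the Network tab shows two
// requests for what looks like one call
fetch('https://api.example.com/data', {
    headers: { 'X-Requested-With': 'XMLHttpRequest' }
});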
I think I created a better example. Here is the code:
<img src="smiley.png" alt="smiley" />
<div id="respText"></div>
<script src="//ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<script>
$(window).load(function(){
$.get("smiley.png", function(){
$("#respText").text("ajax request succeeded");
});
});
</script>
You can test the page here.
According to Firebug and the Chrome network panel, the image is returned with status code 200 and the image for the ajax request comes from the cache (screenshots for both Firefox and Chrome).
So I cannot find any unexpected behavior.
Cache control on Ajax requests has always been a blurred and buggy subject (example).
The problem gets even worse with cross-domain references.
The fiddle link you provided is from jsfiddle.net, which is an alias for fiddle.jshell.net. All code runs inside the fiddle.jshell.net domain, but your code references an image from the alias, and browsers will consider it cross-domain access.
To fix it, you could change both urls to http://fiddle.jshell.net/img/logo-white.png or just /img/logo-white.png.
The helpful folks at Mozilla gave some details as to why this happens. Apparently Firefox assumes an "anonymous" request could be different from a normal one, and for this reason it makes a second request and doesn't consider the cached value with different headers to be the same request.
https://bugzilla.mozilla.org/show_bug.cgi?id=1075297
This may be a shot in the dark, but here's what I think is happening.
According to,
http://api.jquery.com/jQuery.get/
dataType
Type: String
The type of data expected from the server.
Default: Intelligent Guess (xml, json, script, or html).
Gives you 4 possible return types. There is no datatype of image/gif being returned. Thus, the browser doesn't check its cache for the src document, as it is being delivered as a different MIME type.
The server decides what can be cached and for how long. However, it again depends on the browser, whether or not to follow it. Most web browsers like Chrome, Firefox, Safari, Opera and IE follow it, though.
The point that I want to make here is that your web server might be configured to not allow your browser to cache the content; thus, when you request the image through CSS and JS, the browser follows your server's orders and doesn't cache it, and so it requests the image twice...
I want a JS-accessible image
Have you tried manipulating CSS using jQuery? It is pretty fun - you have full CRUD (create, read, update, delete) on CSS properties. For example, to do image resizing on the server side:
$('#container').css('background', 'url(somepage.php?src=image_source.jpg'
    + '&w=' + $("#container").width()
    + '&h=' + $("#container").height() + '&zc=1)');
Surprisingly, I found that even when the javascript waits for the entire page to load, the image request still makes a new request! Is this a known bug in Firefox and Chrome, or something bad jQuery ajax is doing?
It is blatantly obvious that this is not a browser bug.
The computer is deterministic and does exactly what you tell it to (not what you want it to do). If you want to cache images, it is done on the server side. Based on who handles caching, it can be handled as:
Server (like IIS or Apache) cache - typically caches things that are reused often (e.g. twice in 5 seconds)
Server side application cache - typically it reuses server custom cache or you create sprite images or ...
Browser cache - Server side adds cache headers to images and browsers maintain cache
If it is not clear then I would like to make it clear : You don't cache images with javascript.
Ideally I would hope the ajax wouldn't make a second request at all, since it is requesting exactly the same url.
What you try to do is to preload images.
Once an image has been loaded in any way into the browser, it will be
in the browser cache and will load much faster the next time it is
used whether that use is in the current page or in any other page as
long as the image is used before it expires from the browser cache.
So, to precache images, all you have to do is load them into the
browser. If you want to precache a bunch of images, it's probably best
to do it with javascript as it generally won't hold up the page load
when done from javascript. You can do that like this:
function preloadImages(array) {
    if (!preloadImages.list) {
        preloadImages.list = [];
    }
    for (var i = 0; i < array.length; i++) {
        var img = new Image();
        img.onload = function() {
            var index = preloadImages.list.indexOf(this);
            if (index !== -1) {
                // remove this one from the array once it's loaded
                // for memory consumption reasons
                preloadImages.list.splice(index, 1);
            }
        }
        preloadImages.list.push(img);
        img.src = array[i];
    }
}
preloadImages(["url1.jpg", "url2.jpg", "url3.jpg"]);
Then, once they've been preloaded like this via javascript, the browser will have them in its cache and you can just refer to the normal URLs in other places (in your web pages) and the browser will fetch that URL from its cache rather than over the network.
Source : How do you cache an image in Javascript
Is there a reason css and ajax in this case usually have different caches, as though the browser is using different cache storage for css vs ajax requests?
Even in the absence of information, do not jump to conclusions!
One big reason to use image preloading is if you want to use an image
for the background-image of an element on a mouseOver or :hover event.
If you only apply that background-image in the CSS for the :hover
state, that image will not load until the first :hover event and thus
there will be a short annoying delay between the mouse going over that
area and the image actually showing up.
Technique #1 Load the image on the element's regular state, only shift it away with background position. Then move the background
position to display it on hover.
#grass { background: url(images/grass.png) no-repeat -9999px -9999px; }
#grass:hover { background-position: bottom left; }
Technique #2 If the element in question already has a background-image applied and you need to change that image, the above
won't work. Typically you would go for a sprite here (a combined
background image) and just shift the background position. But if that
isn't possible, try this. Apply the background image to another page
element that is already in use, but doesn't have a background image.
#random-unsuspecting-element {
background: url(images/grass.png) no-repeat -9999px -9999px; }
#grass:hover { background: url(images/grass.png) no-repeat; }
The idea of creating new page elements to use for this preloading technique may pop into your head, like #preload-001, #preload-002, but that's rather against the spirit of web standards. Hence the use of page elements that already exist on your page.
The browser will make the 2 requests on the page, because an image referenced from the CSS also uses a GET request (not ajax) before rendering the entire page.
The window load handler is similar to the defer attribute and loads before the rest of the page; then the image from the ajax call will be requested before the image on the div, which is processed during the page load.
If you would like to load an image after the entire page is loaded, you should use document.ready() instead.

jQuery.getJSON inside a greasemonkey user script

I am attempting to write a user script that makes a cross domain AJAX request.
I have included jQuery inside my script using @require and everything seems to be working fine up until the point where I try to run jQuery.getJSON.
The API I am accessing supports jsonp, however I keep getting an error stating jsonp123456789 is not defined.
From what I have been able to gather, this is due to jQuery writing the jsonp response directly into the head of the page, which then becomes sandboxed. Once that has occurred, jQuery can no longer access the callback, resulting in it being undefined. (I'm not 100% sure this is the case, but it seems likely to me.)
Is there any way to work around this? It has been suggested I declare the callback function inside unsafeWindow but I'm unsure how to do this and haven't managed to get it to work.
Wouldn't it be nice if jQuery used GM_xmlhttpRequest internally so that you could have all the convenience of the jQuery methods and the cross-site functionality of Greasemonkey? As mahemoff points out, Greasemonkey could let you make the request without relying on JSONP and running into the callback problem you're facing, but you'll have to deal with the JSON contents yourself.
We've written a library that will do just that: the Greasemonkey/jQuery XHR bridge. If you @require that script in your userscript, then all $.get and $.getJSON and $.post, etc. jQuery calls will work cross-site without relying on techniques like JSONP.
So if you use this bridge and simply remove the ?callback=? from your URL, your jQuery code should work without modification. This blog post provides a step-by-step walkthrough. If anyone has any questions, comments, bug reports or suggestions about the bridge plugin, please let me know.
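In other words, the change is roughly this (a sketch; the URL and handler are made up):
function handleData(data) {
    console.log(data);
}

// before: JSONP-style call, which breaks when the callback ends up sandboxed
$.getJSON('http://example.com/api?callback=?', handleData);

// after (with the GM/jQuery XHR bridge @require'd): a plain cross-site call
$.getJSON('http://example.com/api', handleData);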
The workaround is to use GM_HttpRequest. You can get away with it, instead of JSONP for cross-domain requests, because unlike the usual XHR, GM_HttpRequest does allow cross-domain calls. You want something like:
GM_xmlhttpRequest({
    method: "GET",
    url: "http://example.com/path/to/json",
    onload: function(xhr) {
        var data = eval("(" + xhr.responseText + ")");
        // use data ...
    }
});
Note that this evals the JSON in the simplest way. If you want a more secure solution for untrusted JSON, you'll need to include a small JSON-parsing library.
Unfortunately, you also have to wrap a seemingly useless zero-duration setTimeout around the whole thing. I find it easiest to stick the GM_xmlhttpRequest in its own method, then run setTimeout(makeCall, 0);.
You can see a real example here.
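Put together, the zero-duration wrapper from the previous paragraph looks roughly like this (same request as the snippet above):
function makeCall() {
    GM_xmlhttpRequest({
        method: "GET",
        url: "http://example.com/path/to/json",
        onload: function(xhr) {
            var data = eval("(" + xhr.responseText + ")");
            // use data ...
        }
    });
}

// the seemingly useless zero-duration setTimeout mentioned above
setTimeout(makeCall, 0);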
As many of you will know, Google Chrome doesn't support any of the handy GM_ functions at the moment.
As such, it is impossible to do cross-site AJAX requests due to various sandbox restrictions (even using great tools like James Padolsey's Cross Domain Request Script).
I needed a way for users to know when my Greasemonkey script had been updated in Chrome (since Chrome doesn't do that either...). I came up with a solution which is documented here (and in use in my Lighthouse++ script) and is worth a read for those of you wanting to version-check your scripts:
http://blog.bandit.co.nz/post/1048347342/version-check-chrome-greasemonkey-script

Where in JavaScript is the request coming from?

I am debugging a large, complex web page that has a lot of JavaScript, jQuery, Ajax and so on. Somewhere in that code I am getting a rogue request (I think it is an empty img) that calls the root of the server. I know it is not in the HTML or the CSS and am pretty convinced that somewhere in the JavaScript code the request is being made, but I can't track it down. I am used to using Firebug, VS and other debugging tools but am looking for some way to find out where this is executed - so that I can find the offending line amongst about 150 .js files.
Apart from putting in a gazillion console outputs of 'you are now here', does anyone have suggestions for a debugging tool that could highlight where in JavaScript requests to external resources are made? Any other ideas?
Step by step debugging will take ages - I have to be careful what I step into (jQuery source - yuk!) and I may miss the crucial moment
What about using the step-by-step script debugger in Firebug?
I also think that could be a very interesting enhancement to Firebug, being able to add a breakpoint on AJAX calls.
You spoke of jQuery source...
Assuming the request goes through jQuery, put a debug statement in the jQuery source get() function, that kicks in if the URL is '/'. Maybe then you can tell from the call stack.
You can see all HTTP request done through JavaScript using the Firebug console.
If you want to track all HTTP requests manually, you can use this code:
$(document).bind('ajaxSend', function(event, request, ajaxOptions)
{
    // Will be called before every jQuery AJAX call
});
For more information, see jQuery documentation on AJAX events.
If it's an HTTP request sent to a web server, I would recommend using the TamperData plugin for Firefox. Just install the plugin, start Tamper Data, and every request sent will prompt you to tamper/continue/abort first.
Visit this page at Mozilla website
Just a guess here, but are you using ThickBox? It tries to load an image right at the start of the code.
First thing I would do is check whether this rogue request is an Ajax request or an image load request via the Net panel in Firebug.
If it's Ajax, then you can overload the $.ajax function with your own, do a stack trace, and log the URL requested before handing off to the original $.ajax.
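A sketch of that overload (assuming jQuery is global and the rogue request really does go to '/'):
(function () {
    var originalAjax = jQuery.ajax;
    jQuery.ajax = function (url, options) {
        // jQuery.ajax can be called as (url, options) or (options)
        var requestedUrl = typeof url === 'string' ? url : (url && url.url);
        if (requestedUrl === '/') {
            console.trace('Rogue request to "/" made from:');
        }
        return originalAjax.apply(this, arguments);
    };
})();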
If it's an image, it's not ideal, but if you can respond to the image request with a server-side sleep (i.e. a PHP file that just sleeps for 20 seconds) you might be able to hang the app and get a starting guess as to where the problem might be.
