JS style-changes don't get applied when inside request - javascript

I want to make it such that an image on a website gets its "onclick" event disabled and a gray filter applied if a certain file on the same domain is not found. I want to use pure JS and have tried this so far:
function fileNonExist(url, callback) {
    var http = new XMLHttpRequest();
    http.onreadystatechange = function() {
        if (http.readyState === XMLHttpRequest.DONE && callback) {
            if (http.status != 200) {
                callback();
            }
        }
    };
    http.open('HEAD', url);
    http.send();
}
fileNonExist("theFileIAmLookingFor.html", () => {
    console.log("image changed");
    image.onclick = "";
    image.style.filter = "grayscale(100%)";
});
I have the image initialized and displayed, so image.onclick = "" and image.style.filter = "grayscale(100%)" both work if they are used normally. However, even though the function blocks are executed as intended (the console logs "image changed" if the file isn't found, and nothing otherwise), none of the style changes are ever visible if they are executed from within those blocks. Why might that be and how could I fix it?

I found the solution myself while talking to Emiel Zuurbier: I noticed that the code works if I open the HTML file directly in my browser. The bug occurs if I access the file over a web server, which I had been doing the whole time. If I shut down the server while the site is still open in the browser, the changes also get applied. Looking at the requests with the browser's dev tools, I see that only the successful requests finish, while the unsuccessful ones are left pending forever. That's why the changes get applied when the server is shut down and all pending requests are closed with errors. The server uses the Node.js "fs" module and its readFile method.
I will now try to turn the styles around so that all images start off gray and without "onclick" handlers, and only become unlocked once the file is found. This way the images with pending requests remain gray.
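For reference, here is a minimal sketch of the server-side fix, assuming the pages are served by a plain Node.js http server (the port and file layout are placeholders, not the original setup): the handler has to end the response in the fs.readFile error case as well, otherwise a HEAD request for a missing file stays pending forever.
var http = require('http');
var fs = require('fs');
var path = require('path');

http.createServer(function (req, res) {
    var filePath = path.join(__dirname, req.url === '/' ? 'index.html' : req.url);
    fs.readFile(filePath, function (err, data) {
        if (err) {
            // Without this branch the request never completes and the
            // client-side callback never sees a non-200 status.
            res.writeHead(404);
            res.end();
            return;
        }
        res.writeHead(200);
        res.end(data);
    });
}).listen(8080);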

Related

XMLHttpRequest returning with status 200, but 'onreadystatechange' event not fired

We have been receiving an intermittent bug with the XMLHttpRequest object when using IE11. Our codebase is using legacy architecture, so this browser is required.
After clicking a button, the browser launches an out-of-band process by creating a new ActiveX control which integrates with a camera to capture an image. This control appears to be working fine... it allows the operator to capture the image, and the Base64 content of the image is returned out of the control back to the browser interface, so I think we can rule out a problem with this object.
Once the image is returned to the browser, the browser performs an asynchronous 'ping' to the web server to check if the IIS session is still alive or it has expired (because the out-of-band image capture process forbids control of the browser while it is open).
The ping to the server returns successfully (and running Fiddler I can see that the response has status 200), with the expected response data:
<sessionstate>ok</sessionstate>
There is a defined 'onreadystatechange' function which should be fired on this response, and the majority of the time it seems to fire correctly. However, on rare occasions it does not, and once the problem appears it continues to happen every time.
Here is a snippet of the code... we expect the 'callback()' function to be called on a successful response to Timeout.asp:
XMLPoster.prototype.checkSessionAliveAsync = function(callback) {
    var checkSessionAlive = new XMLHttpRequest();
    checkSessionAlive.open("POST", "Timeout.asp?Action=ping", true);
    checkSessionAlive.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
    checkSessionAlive.onreadystatechange = function() {
        if (checkSessionAlive.readyState == 4) {
            if (checkSessionAlive.responseText.indexOf("expired") != -1 || checkSessionAlive.status !== 200) {
                eTop.window.main.location = "timeout.asp";
                return;
            }
            callback(checkSessionAlive.responseText);
        }
    }
    checkSessionAlive.send();
}
Has anyone seen anything like this before? I appreciate that using legacy software is not ideal, but we are currently limited to using it.
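Not an answer, but one way to make the silent case visible while diagnosing is to attach the error and timeout handlers that XMLHttpRequest supports in IE10+/IE11; a rough sketch (the timeout value and logging are placeholders):
XMLPoster.prototype.checkSessionAliveAsync = function(callback) {
    var checkSessionAlive = new XMLHttpRequest();
    checkSessionAlive.open("POST", "Timeout.asp?Action=ping", true);
    checkSessionAlive.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
    checkSessionAlive.timeout = 15000; // placeholder value
    checkSessionAlive.ontimeout = function() {
        window.console && console.log("ping timed out");
    };
    checkSessionAlive.onerror = function() {
        window.console && console.log("ping network error");
    };
    checkSessionAlive.onreadystatechange = function() {
        // same handler as above
    };
    checkSessionAlive.send();
};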

How to make JS wait until protocol execution finished

I have a custom URL protocol handler cgit:[...]
It launches a background process which configures some stuff on the local machine. The protocol works fine; I'm launching it from JavaScript (currently using document.location = 'cgit:[...]'), but I actually want JavaScript to wait until the associated program exits.
So basically these are the steps I want JavaScript to do:
JavaScript does something
JavaScript launches cgit:[...]
JavaScript waits until cgit:[...] exits
JavaScript does something else
Code:
function launchCgit(params)
{
    showProgressBar();
    document.location = "cgit:" + params;
    document.addEventListener( /* CGit-Program exited event */, hideProgressBar );
}
or:
function launchCgit(params)
{
    showProgressBar();
    // setLocationAndWait("cgit:"+params);
    hideProgressBar();
}
Any ideas if this is possible?
Since this isn't really an expected use of window.location I would doubt that there's an easy way. My recommendation would be to use an AJAX request and have the C++ program send a response when it's done. That way, whatever code needs to run after the C++ program can be run when the request completes.
As I didn't find a suitable way to solve my problem using AJAX requests or anything similar, I finally solved it with a kind-of-ugly workaround involving XMLHttpRequest.
For launching the protocol I'm still using document.location = 'cgit:[...]'
I'm using a server-side system of "lock files" - generic dummy files with a generated name for each request.
Once the user requests to open the custom protocol, such a file is generated on the server specifically for that one protocol-opening request.
I created a folder called "$locks" on the server where these files are placed. Once the protocol-associated program exits, the corresponding file is deleted.
The website continuously checks whether the file for a request still exists using XMLHttpRequest and fires a callback if it doesn't (example timeout between tests: 1 second).
The structure of the new files is the following:
lockThisRequest.php: It creates a file in the $locks directory based on the req URL parameter.
unlockThisRequest.php: It deletes a file in the $locks directory, again based on the req URL parameter.
The JavaScript part of it goes:
function launchCgit(params, callback)
{
    var lock = /* Generate valid filename from params variable */;
    // "Lock" that request (means: telling the server that a request with this ID is now in use)
    var locker = new XMLHttpRequest();
    locker.open('GET', 'lockThisRequest.php?req=' + lock, true);
    locker.send(null);
    function retry()
    {
        // Test if the lock file still exists on the server
        var req = new XMLHttpRequest();
        req.open('GET', '$locks/' + lock, true);
        req.onreadystatechange = function()
        {
            if (req.readyState == 4)
            {
                if (req.status == 200)
                {
                    // lock file exists -> cgit has not exited yet
                    window.setTimeout(retry, 1000);
                }
                else if (req.status == 404)
                {
                    // lock file not found -> request has been processed
                    callback();
                }
            }
        }
        req.send(null);
    }
    document.location = 'cgit:' + params; // execute custom protocol
    retry(); // initialize lock-file check loop
}
Usage is:
launchCgit("doThisAndThat", function()
{
    alert("ThisAndThat finished.");
});
The lockThisRequest.php file:
<?php
file_put_contents("\$locks/".$_GET["req"], ""); // Create lock file
?>
and unlockThisRequest.php:
<?php
unlink("../\$locks/".$_GET["req"]); // Delete lock file
?>
The local program / script executed by the protocol can simply call something like:
#!/bin/bash
curl "http://servername/unlockThisRequest.php?req=$1"
after it finished.
As I said, this works, but it's anything but nice (congratulations if you managed to follow all of those instructions).
I'd have preferred a simpler way, and (important) this may also cause security issues with the lockThisRequest.php and unlockThisRequest.php files!
I'm fine with this solution because I'm only using it on a password-protected private page. But if you plan to use it on a public or unprotected page, you may want to add some security to the PHP files.
Anyway, the solution works for me now, but if anyone finds a better way to do it - for example by using AJAX requests - they would be very welcome to add it to the respective Stack Overflow documentation or the like and post a link to it on this thread. I'd still be interested in alternative solutions :)

Reload iframe only when iframe contents are new?

Short version:
How could an HTML file that has an iframe pointing to another HTML file on the same server automatically reload that iframe whenever the iframe's content is new?
Context:
I'm using an HTML/Javascript file to watch for new instructions from a Python program. The Python program rewrites a simple HTML file when there are new instructions to see.
So my easy solution is a bit of Javascript that forces a reload of the iframe src every second.
However, this causes a flash of the content every time the iframe loads, and most of the time the information isn't new.
Instead, I'd prefer for the HTML file to only force a reload of the iframe src when it's new. Is this possible?
You can use an AJAX request to check if the iframe has changed.
var value;
(function check() {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/url/toIframe'); // change this to the correct URL
    xhr.addEventListener('load', function() {
        if (value === undefined) {
            value = this.responseText;
        } else if (value != this.responseText) {
            value = this.responseText;
            // refresh iframe!
        }
        setTimeout(check, 1000); // check again in another second
    });
    xhr.send();
})();
This makes requests to the server to see if the content has changed. There is one second between the end of each request and the start of the next check. (And currently, there is no error handling if the server goes down.)
FYI, if the server sends a Last-Modified header, you could actually make a HEAD request and just check that header. If you don't know what I am talking about, don't worry about it.
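For illustration, a minimal sketch of that HEAD variant, assuming the server actually sends a Last-Modified header for the iframe URL (the URL is a placeholder, as above):
var lastModified;
(function check() {
    var xhr = new XMLHttpRequest();
    xhr.open('HEAD', '/url/toIframe'); // placeholder URL, same as above
    xhr.addEventListener('load', function() {
        var header = this.getResponseHeader('Last-Modified');
        if (lastModified === undefined) {
            lastModified = header;
        } else if (header !== lastModified) {
            lastModified = header;
            // refresh iframe here, e.g. iframeEl.contentWindow.location.reload();
        }
        setTimeout(check, 1000); // check again in another second
    });
    xhr.send();
})();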

Suppressing 404s in retina.js library

We use the JS lib retina.js, which swaps low-quality images with "retina" images (twice the size). The problem is that retina.js throws a 404 for every "retina" image which can't be found.
We own a site where users can upload their own pictures, which are most likely not in a retina resolution.
Is there no way to prevent the JS from throwing 404s?
If you don't know the lib, here is the code throwing the 404:
http = new XMLHttpRequest;
http.open('HEAD', this.at_2x_path);
http.onreadystatechange = function() {
    if (http.readyState != 4) {
        return callback(false);
    }
    if (http.status >= 200 && http.status <= 399) {
        if (config.check_mime_type) {
            var type = http.getResponseHeader('Content-Type');
            if (type == null || !type.match(/^image/i)) {
                return callback(false);
            }
        }
        RetinaImagePath.confirmed_paths.push(that.at_2x_path);
        return callback(true);
    } else {
        return callback(false);
    }
}
http.send();
There are a few options that I see to mitigate this.
Enhance and persist retina.js' HTTP call results caching
For any given '2x' image that is set to swap out a '1x' version, retina.js first verifies the availability of the image via an XMLHttpRequest request. Paths with successful responses are cached in an array and the image is downloaded.
The following changes may improve efficiency:
Failed XMLHttpRequest verification attempts can be cached: Presently, a '2x' path verification attempt is skipped only if it has previously succeeded. Therefore, failed attempts can recur. In practice, this doesn't matter much because the verification process happens when the page is initially loaded. But, if the results are persisted, keeping track of failures will prevent recurring 404 errors.
Persist '2x' path verification results in localStorage: During initialization, retina.js can check localStorage for a results cache. If one is found, the verification process for '2x' images that have already been encountered can be bypassed and the '2x' image can either be downloaded or skipped. Newly encountered '2x' image paths can be verified and the results added to the cache. Theoretically, while localStorage is available, a 404 will occur only once for an image on a per-browser basis. This would apply across pages for any page on the domain.
Here is a quick workup. Expiration functionality would probably need to be added.
https://gist.github.com/4343101/revisions
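As a rough illustration of the localStorage idea (this is not the linked gist; the storage key and the check parameter standing in for retina.js' existing verification routine are made up):
var CACHE_KEY = 'retinajs-2x-results';
var cache = JSON.parse(localStorage.getItem(CACHE_KEY) || '{}');

function checkPathCached(path, check, callback) {
    if (path in cache) {                 // hit: skip the HEAD request entirely
        return callback(cache[path]);
    }
    check(path, function(exists) {       // miss: verify once, then persist
        cache[path] = exists;
        localStorage.setItem(CACHE_KEY, JSON.stringify(cache));
        callback(exists);
    });
}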
Employ an HTTP redirect header
I must note that my grasp of "server-side" matters is spotty, at best. Please take this FWIW
Another option is for the server to respond with a redirect code for image requests that have the @2x characters and do not exist. See this related answer.
In particular:
If you redirect images and they're cacheable, you'd ideally set an HTTP Expires header (and the appropriate Cache-Control header) for a date in the distant future, so at least on subsequent visits to the page users won't have to go through the redirect again.
Employing the redirect response would get rid of the 404s and cause the browser to skip subsequent attempts to access '2x' image paths that do not exist.
retina.js can be made more selective
retinajs can be modified to exclude some images from consideration.
A pull request related to this: https://github.com/imulus/retinajs/commit/e7930be
Per the pull request, instead of finding <img> elements by tag name, a CSS selector can be used, and this can be one of retina.js' configurable options. A CSS selector can be created that will filter out user-uploaded images (and other images for which a '2x' variant is expected not to exist).
Another possibility is to add a filter function to the configurable options. The function can be called on each matched <img> element; a return true would cause a '2x' variant to be downloaded and anything else would cause the <img> to be skipped.
The basic, default configuration would change from the current version to something like:
var config = {
    check_mime_type: true,
    retinaImgTagSelector: 'img',
    retinaImgFilterFunc: undefined
};
The Retina.init() function would change from the current version to something like:
Retina.init = function(context) {
    if (context == null) context = root;
    var existing_onload = context.onload || new Function;
    context.onload = function() {
        // uses new query selector
        var images = document.querySelectorAll(config.retinaImgTagSelector),
            retinaImages = [], i, image, filter;
        // if there is a filter, check each image
        if (typeof config.retinaImgFilterFunc === 'function') {
            filter = config.retinaImgFilterFunc;
            for (i = 0; i < images.length; i++) {
                image = images[i];
                if (filter(image)) {
                    retinaImages.push(new RetinaImage(image));
                }
            }
        } else {
            for (i = 0; i < images.length; i++) {
                image = images[i];
                retinaImages.push(new RetinaImage(image));
            }
        }
        existing_onload();
    }
};
To put it into practice, before window.onload fires, call:
window.Retina.configure({
    // use a class 'no-retina' to prevent retinajs
    // from checking for a retina version
    retinaImgTagSelector: 'img:not(.no-retina)',
    // or, assuming there is a data-owner attribute
    // which indicates the user that uploaded the image:
    // retinaImgTagSelector : 'img:not([data-owner])',
    // or set a filter function that will exclude images that have
    // the current user's id in their path (assuming there is a
    // variable userId in the global scope)
    retinaImgFilterFunc: function(img) {
        return img.src.indexOf(window.userId) < 0;
    }
});
Update: Cleaned up and reorganized. Added the localStorage enhancement.
Short answer: It's not possible using client-side JavaScript only.
After browsing the code and a little research, it appears to me that retina.js isn't really throwing the 404 errors.
What retina.js is actually doing is requesting a file and simply performing a check on whether or not it exists based on the error code, which actually means it is asking the browser to check if the file exists. The browser is what gives you the 404, and there is no cross-browser way to prevent that (I say "cross-browser" because I only checked WebKit).
However, what you could do if this really is an issue is do something on the server side to prevent 404s altogether.
Essentially this would be, for example, a request to /retina.php?image=YOUR_URLENCODED_IMAGE_PATH, which could return this when a retina image exists...
{"isRetina": true, "path": "YOUR_RETINA_IMAGE_PATH"}
and this if it doesn't...
{"isRetina": false, "path": "YOUR_REGULAR_IMAGE_PATH"}
You could then have some JavaScript call this script and parse the response as necessary. I'm not claiming that is the only or the best solution, just one that would work.
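For example, a minimal sketch of the client side for that hypothetical /retina.php endpoint (the endpoint and JSON shape are the ones assumed above, not part of retina.js):
function resolveImagePath(path, callback) {
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/retina.php?image=' + encodeURIComponent(path));
    xhr.onreadystatechange = function() {
        if (xhr.readyState === 4 && xhr.status === 200) {
            var result = JSON.parse(xhr.responseText);
            callback(result.path); // either the retina path or the regular one
        }
    };
    xhr.send();
}

// Usage (assuming img is an HTMLImageElement): swap its source for
// whatever the server says is available.
resolveImagePath(img.src, function(path) { img.src = path; });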
Retina JS supports the attribute data-no-retina on the image tag.
This way it won't try to find the retina image.
Helpful for other people looking for a simple solution.
<img src="/path/to/image" data-no-retina />
I prefer a little more control over which images are replaced.
For all images that I've created an @2x for, I changed the original image name to include @1x. (* See note below.) I changed retina.js slightly, so that it only looks at [name]@1x.[ext] images.
I replaced the following line in retina-1.1.0.js:
retinaImages.push(new RetinaImage(image));
With the following lines:
if (image.src.match(/@1x\.\w{3}$/)) {
    image.src = image.src.replace(/@1x(\.\w{3})$/, "$1");
    retinaImages.push(new RetinaImage(image));
}
This makes it so that retina.js only replaces @1x-named images with @2x-named images.
(* Note: In exploring this, it seems that Safari and Chrome automatically replace @1x images with @2x images, even without retina.js installed. I'm too lazy to track this down, but I'd imagine it's a feature of the latest WebKit browsers. As it is, retina.js and the above changes to it are necessary for cross-browser support.)
One solution is to use PHP:
Replace the code from the first post with:
http = new XMLHttpRequest;
http.open('HEAD', "/image.php?p=" + this.at_2x_path);
http.onreadystatechange = function() {
    if (http.readyState != 4) {
        return callback(false);
    }
    if (http.status >= 200 && http.status <= 399) {
        if (config.check_mime_type) {
            var type = http.getResponseHeader('Content-Type');
            if (type == null || !type.match(/^image/i)) {
                return callback(false);
            }
        }
        RetinaImagePath.confirmed_paths.push(that.at_2x_path);
        return callback(true);
    } else {
        return callback(false);
    }
}
http.send();
and in your site root add a file named "image.php":
<?php
if (file_exists($_GET['p'])) {
    $ext = explode('.', $_GET['p']);
    $ext = end($ext);
    if ($ext == "jpg") $ext = "jpeg";
    header("Content-Type: image/".$ext);
    echo file_get_contents($_GET['p']);
}
?>
retina.js is a nice tool for fixed images on static web pages, but if you are retrieving user uploaded images, the right tool is server side. I imagine PHP here, but the same logic may be applied to any server side language.
Provided that a good security habit for uploaded images is to not let users reach them by direct URL: if the user succeeds in uploading a malicious script to your server, he should not be able to launch it via a URL (www.yoursite.com/uploaded/mymaliciousscript.php). So it is usually a good habit to retrieve uploaded images via some script <img src="get_image.php?id=123456" /> if you can... (and even better, keep the upload folder out of the document root)
Now the get_image.php script can get the appropriate image 123456.jpg or 123456@2x.jpg depending on some conditions.
The approach of http://retina-images.complexcompulsions.com/#setupserver seems perfect for your situation.
First you set a cookie in your header by loading a file via JS or CSS:
Inside HEAD:
<script>(function(w){var dpr=((w.devicePixelRatio===undefined)?1:w.devicePixelRatio);if(!!w.navigator.standalone){var r=new XMLHttpRequest();r.open('GET','/retinaimages.php?devicePixelRatio='+dpr,false);r.send()}else{document.cookie='devicePixelRatio='+dpr+'; path=/'}})(window)</script>
At beginning of BODY:
<noscript><style id="devicePixelRatio" media="only screen and (-moz-min-device-pixel-ratio: 2), only screen and (-o-min-device-pixel-ratio: 2/1), only screen and (-webkit-min-device-pixel-ratio: 2), only screen and (min-device-pixel-ratio: 2)">#devicePixelRatio{background-image:url("/retinaimages.php?devicePixelRatio=2")}</style></noscript>
Now every time your script to retrieve uploaded images is called, it will have a cookie set asking for retina images (or not).
Of course you may use the provided retinaimages.php script to output the images, but you may also modify it to accommodate your needs, depending on how you produce and retrieve images from a database, or to hide the upload directory from users.
So not only may it load the appropriate image, but if GD2 is installed and you keep the original uploaded image on the server, it may even resize and crop it accordingly and save the two cached image sizes on the server. Inside the retinaimages.php sources you can see (and copy) how it works:
<?php
$source_file = ...
$retina_file = ....
if (isset($_COOKIE['devicePixelRatio'])) {
    $cookie_value = intval($_COOKIE['devicePixelRatio']);
}
if ($cookie_value !== false && $cookie_value > 1) {
    // Check if retina image exists
    if (file_exists($retina_file)) {
        $source_file = $retina_file;
    }
}
....
header('Content-Length: '.filesize($source_file), true);
readfile($source_file); // or read from db, or create right size.. etc..
?>
Pros: the image is loaded only once (retina users on 3G at least won't load 1x+2x images), it works even without JS if cookies are enabled, it can be switched on and off easily, and there is no need to use Apple naming conventions. You load image 12345 and you get the correct DPI for your device.
With URL rewriting you may even render it totally transparent by redirecting /get_image/1234.jpg to /get_image.php?id=1234.jpg
My suggestion is that you recognize the 404 errors to be true errors, and fix them the way that you are supposed to, which is to provide Retina graphics. You made your scripts Retina-compatible, but you did not complete the circle by making your graphics workflow Retina-compatible. Therefore, the Retina graphics are actually missing. Whatever comes in at the start of your graphics workflow, the output of the workflow has to be 2 image files, a low-res and Retina 2x.
If a user uploads a photo that is 3000x2400, you should consider that to be the Retina version of the photo, mark it 2x, and then use a server-side script to generate a smaller 1500x1200 non-Retina version, without the 2x. The 2 files together then constitute one 1500x1200 Retina-compatible image that can be displayed in a Web context at 1500x1200 whether the display is Retina or not. You don’t have to care because you have a Retina-compatible image and Retina-compatible website. The RetinaJS script is the only one that has to care whether a client is using Retina or not. So if you are collecting photos from users, your task is not complete unless you generate 2 files, both low-res and high-res.
The typical smartphone captures a photo that is more than 10x the size of the smartphone’s display. So you should always have enough pixels. But if you are getting really small images, like 500px, then you can set a breakpoint in your server-side image-reducing script so that below that, the uploaded photo is used for the low-res version and the script makes a 2x copy that is going to be no better than the non-Retina image but it is going to be Retina-compatible.
With this solution, your whole problem of “is the 2x image there or not?” goes away, because it is always there. The Retina-compatible website will just happily use your Retina-compatible database of photos without any complaints.
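As an illustration of that workflow, a sketch assuming a Node.js back end with the sharp package (the answer above does not prescribe a specific tool; file names and sizes are placeholders):
const sharp = require('sharp');

// Treat the uploaded photo as the @2x asset and derive a half-size 1x
// version next to it, e.g. 3000x2400 -> 1500x1200.
async function makeRetinaPair(uploadPath, baseName) {
    const { width, height } = await sharp(uploadPath).metadata();
    await sharp(uploadPath).toFile(baseName + '@2x.jpg'); // keep the upload as the retina version
    await sharp(uploadPath)
        .resize(Math.round(width / 2), Math.round(height / 2))
        .toFile(baseName + '.jpg'); // non-retina 1x version
}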

Detect failure to load contents of an iframe

I can detect when the content of an iframe has loaded using the load event. Unfortunately, for my purposes, there are two problems with this:
If there is an error loading the page (404/500, etc), the load event is never fired.
If some images or other dependencies failed to load, the load event is fired as usual.
Is there some way I can reliably determine if either of the above errors occurred?
I'm writing a semi-web semi-desktop application based on Mozilla/XULRunner, so solutions that only work in Mozilla are welcome.
If you have control over the iframe page (and the pages are on the same domain name), a strategy could be as follows (a sketch appears after this list):
In the parent document, initialize a variable var iFrameLoaded = false;
When the iframe document is loaded, set this variable in the parent to true by calling a parent function (setIFrameLoaded(); for example) from the iframe document.
Check the iFrameLoaded flag using a timer (set the timer to your preferred timeout limit); if the flag is still false, you can tell that the iframe was not loaded normally.
I hope this helps.
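A minimal sketch of that handshake, with placeholder names and a 10-second timeout:
// Parent document:
var iFrameLoaded = false;

function setIFrameLoaded() { // called by the iframe document
    iFrameLoaded = true;
}

setTimeout(function() { // preferred timeout limit
    if (!iFrameLoaded) {
        console.log('iframe was not loaded normally');
    }
}, 10000);

// Iframe document (same origin), e.g. at the end of its body:
// <script>parent.setIFrameLoaded();</script>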
This is a very late answer, but I will leave it to someone who needs it.
Task: load iframe cross-origin content, emit onLoaded on success and onError on load error.
This is the most cross-browser, origin-independent solution I could develop. But first of all I will briefly describe the other approaches I tried and why they are bad.
1. iframe. It was a little shock for me that iframe only has an onload event, and it is called both on load and on error, with no way to know whether it was an error or not.
2. performance.getEntriesByType('resource'). This method returns loaded resources. Sounds like what we need. But what a shame: Firefox always adds a Resource to the resources array no matter whether it loaded or failed, so there is no way to tell from the Resource instance whether it was a success. As usual. By the way, this method does not work in iOS < 11.
3. script. I tried to load the HTML using a <script> tag. It emits onload and onerror correctly; sadly, only in Chrome.
And when I was ready to give up, my older colleague told me about the HTML4 tag <object>. It is like the <iframe> tag except it has a fallback when content is not loaded. That sounds like what we need! Sadly, it is not as easy as it sounds.
CODE SECTION
var obj = document.createElement('object');
// we need to specify a fallback (I will mention why later)
obj.innerHTML = '<div style="height:5px"></div>'; // fallback
obj.style.display = 'block'; // so height=5px will work
obj.style.visibility = 'hidden'; // to hide before loaded
obj.data = src;
After this we can set some attributes on the <object>, as we would have done with the iframe. The only difference is that we should use <param> elements, not attributes, but their names and values are identical.
for (var prop in params) {
    if (params.hasOwnProperty(prop)) {
        var param = document.createElement('param');
        param.name = prop;
        param.value = params[prop];
        obj.appendChild(param);
    }
}
Now, the hard part. Like many similar elements, <object> has no spec for its callbacks, so each browser behaves differently.
Chrome: emits the load event both on error and on load.
Firefox: emits load and error correctly.
Safari: emits nothing....
Seems no different from iframe, getEntriesByType, script....
But we have the native browser fallback! So, because we set the fallback (innerHTML) directly, we can tell whether the <object> is loaded or not:
function isReallyLoaded(obj) {
    return obj.offsetHeight !== 5; // fallback height
}

/**
 * Chrome calls always, Firefox on load
 */
obj.onload = function() {
    isReallyLoaded(obj) ? onLoaded() : onError();
};

/**
 * Firefox on error
 */
obj.onerror = function() {
    onError();
};
But what to do with Safari? Good old setTimeout.
var interval = function() {
    if (isLoaded) { // some flag
        return;
    }
    if (hasResult(obj)) {
        if (isReallyLoaded(obj)) {
            onLoaded();
        } else {
            onError();
        }
    }
    setTimeout(interval, 100);
};

function hasResult(obj) {
    return obj.offsetHeight > 0;
}
Yeah... not so fast. The thing is, when <object> fails, it shows behaviour not mentioned in the specs:
Trying to load (size = 0)
Really fails (size = any)
Fallback (size = as in innerHTML)
So the code needs a little enhancement:
var interval = function() {
    if (isLoaded) { // some flag
        return;
    }
    if (hasResult(obj)) {
        if (isReallyLoaded(obj)) {
            interval.count++;
            // needs less than 400ms to fall back
            interval.count > 4 && onLoadedResult(obj, onLoaded);
        } else {
            onErrorResult(obj, onError);
        }
    }
    setTimeout(interval, 100);
};
interval.count = 0;
setTimeout(interval, 100);
Well, and to start loading
document.body.appendChild(obj);
That is all. I tried to explain the code in every detail, so it may not look so foolish.
P.S. WebDev sucks
I had this problem recently and had to resort to setting up a JavaScript polling action on the parent page (the one that contains the IFRAME tag). This JavaScript function checks the IFRAME's contents for explicit elements that should only exist in a GOOD response. This assumes, of course, that you don't have to deal with violating the "same origin policy."
Instead of checking for all possible errors which might be generated from the many different network resources... I simply checked for the constant positive element(s) that I know should be in a good response.
After a pre-determined time and/or number of failed attempts to detect the expected element(s), the JavaScript modifies the IFRAME's SRC attribute (to request from my servlet) a user-friendly error page, as opposed to displaying the typical HTTP error message. The JavaScript could also just as easily modify the SRC attribute to make an entirely different request.
function checkForContents() {
    var contents = document.getElementById('myiframe').contentWindow.document;
    if (contents) {
        alert('found contents of myiframe: ' + contents);
        if (contents.documentElement) {
            if (contents.documentElement.innerHTML) {
                alert("Found contents: " + contents.documentElement.innerHTML);
                if (contents.documentElement.innerHTML.indexOf("FIND_ME") > -1) {
                    openMediumWindow("woot.html", "mypopup");
                }
            }
        }
    }
}
I think that the pageshow event is fired for error pages. Or if you're doing this from chrome, then you can check your progress listener's request to see if it's an HTTP channel, in which case you can retrieve the status code.
As for page dependencies, I think you can only do this from chrome by adding a capturing onerror event listener, and even then it will only find errors in elements, not CSS backgrounds or other images.
Doesn't answer your question exactly, but my search for an answer brought me here, so I'm posting just in case anyone else had a similar query to me.
It doesn't quite use a load event, but it can detect whether a website is accessible and callable (if it is, then the iFrame, in theory, should load).
At first, I thought to do an AJAX call like everyone else, except that it didn't work for me initially, as I had used jQuery. It works perfectly if you do an XMLHttpRequest:
var url = "http://url_to_test.com/";
var xhttp = new XMLHttpRequest();
xhttp.onreadystatechange = function() {
    if (this.readyState == 4 && this.status != 200) {
        console.log("iframe failed to load");
    }
};
xhttp.open("GET", url, true);
xhttp.send();
Edit:
So this method works OK, except that it has a lot of false negatives (it picks up a lot of stuff that would display in an iframe) due to cross-origin malarkey. The way that I got around this was to do a CURL/web request on a server, and then check the response headers for a) whether the website exists, and b) whether the headers have set x-frame-options.
This isn't a problem if you run your own webserver, as you can make your own API call for it.
My implementation in node.js:
app.get('/iframetest', function(req, res) { // Call using /iframetest?url=url - needs to be stripped of http:// or https://
    var url = req.query.url;
    var request = require('https').request({host: url}, function(response) { // This does an https request - require('http') if you want to do a http request
        var headers = response.headers;
        if (typeof headers["x-frame-options"] != 'undefined') {
            res.send(false); // Headers don't allow iframe
        } else {
            res.send(true); // Headers don't disallow iframe
        }
    });
    request.on('error', function(e) {
        res.send(false); // website unavailable
    });
    request.end();
});
Have an id for the topmost (body) element in the page that is being loaded in your iframe.
In the load handler of your iframe, check whether getElementById() returns a non-null value.
If it does, the iframe has loaded successfully; otherwise it has failed (a sketch follows below).
In that case, set frame.src = "about:blank". Make sure to remove the load handler before doing that.
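A minimal sketch of that approach (the iframe id and the well-known element id are placeholders, and it assumes the iframe page is same-origin):
var frame = document.getElementById('myiframe');

function onFrameLoad() {
    var doc = frame.contentWindow.document;
    if (doc && doc.getElementById('page-body-id')) { // well-known element id
        // iframe loaded successfully
    } else {
        frame.removeEventListener('load', onFrameLoad); // remove the handler first
        frame.src = 'about:blank';
    }
}

frame.addEventListener('load', onFrameLoad);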
If the iframe is loaded on the same origin as the parent page, then you can do this:
iframeEl.addEventListener('load', function() {
    // NOTE: contentDocument is null if a connection error occurs or if
    // X-Frame-Options is not SAMEORIGIN (which could happen with
    // 4xx or 5xx error pages if the corresponding error handlers
    // do not specify SAMEORIGIN). If error handlers do not specify
    // SAMEORIGIN, then networkErrorOccurred will incorrectly be set
    // to true.
    const networkErrorOccurred = !iframeEl.contentDocument;
    const serverErrorOccurred = (
        !networkErrorOccurred &&
        !iframeEl.contentDocument.querySelector('#well-known-element')
    );
    if (networkErrorOccurred || serverErrorOccurred) {
        let errorMessage;
        if (networkErrorOccurred) {
            errorMessage = 'Error: Network error';
        } else if (serverErrorOccurred) {
            errorMessage = 'Error: Server error';
        } else {
            // Assert that the above code is correct.
            throw new Error('networkErrorOccurred and serverErrorOccurred are both false');
        }
        alert(errorMessage);
    }
});
