Programmatically check and update only if an image has changed - javascript

I have an application which updates an image from time to time. The update interval is not predictable. The image itself is updated atomically on the web server via rename(). That is all the application does on the server side; there shall be no change on the Apache side, so the web server can continue to serve only static files.
There is some AJAX script which displays the content and updates this image when it changes. This is done using polling. The naive JavaScript version used a counter and pulled the image every second or so by adding a query timestamp. However, 99% of the time this pulls an unchanged image.
The current, not so naive version uses XMLHttpRequest (AJAX) with the If-Modified-Since header, and if a change is detected the update is invoked.
The question now is: is there a better way to achieve this effect? Perhaps look at the last paragraph of this text before you dive into this ;)
Here are the core code snippets of the current version. Please note that the code is edited for brevity, so var initializations are left out and some lines which are not of interest have been removed.
First the usual, slightly extended AJAX binding:
// partly stolen at http://snippets.dzone.com/posts/show/2025
function $(e){if(typeof e=='string')e=document.getElementById(e);return e};
ajax={};
ajax.collect=function(a,f){var n=[];for(var i=0;i<a.length;i++){var v=f(a[i]);if(v!=null)n.push(v)}return n};
ajax.x=function(){try{return new XMLHttpRequest()}catch(e){try{return new ActiveXObject('Msxml2.XMLHTTP')}catch(e){return new ActiveXObject('Microsoft.XMLHTTP')}}};
ajax.send=function(u,f,m,a,h){var x=ajax.x();x.open(m,u,true);x.onreadystatechange=function(){if(x.readyState==4)f(x.responseText,x,x.status==0?200:x.status,x.getResponseHeader("Last-Modified"))};if(m=='POST')x.setRequestHeader('Content-type','application/x-www-form-urlencoded');if(h)h(x);x.send(a)};
ajax.get=function(url,func){ajax.send(url,func,'GET')};
ajax.update=function(u,f,lm){ajax.send(u,f,'GET',null,lm?function(x){x.setRequestHeader("If-Modified-Since",lm)}:lm)};
ajax.head=function(u,f,lm){ajax.send(u,f,'HEAD',null,lm?function(x){x.setRequestHeader("If-Modified-Since",lm)}:lm)};
The basic HTML part: it includes 2 images which are flipped after loading, and a third one (not referenced in the code snippets) to display archived versions etc.; while that third image is shown, updates are not flipped into view either:
</head><body onload="init()">
<div id="shower"><img id="show0"/><img id="show1"/><img id="show2"/></div>
The initial part includes the timer. There is a bit more to it, to compensate for network delays on slow links, reduce the polling rate, etc.:
function init()
{
    window.setInterval(timer,500);
    for (var a=2; --a>=0; )
    {
        var o=$("show"+a);
        o.onload = loadi;
    }
    disp(0);
}
function disp(n)
{
    shown=n;
    window.setTimeout(disp2,10);
}
function disp2()
{
    hide("show0");
    hide("show1");
    hide("show2");
    show("show"+shown);
}
function hide(s)
{
    $(s).style.display="none";
}
function show(s)
{
    $(s).style.display="inline";
}
function timer(e)
{
    if (waiti && !--waiti)
        dorefresh();
    nextrefresh();
}
function nextrefresh()
{
    if (sleeps<0)
        sleeps = sleeper;
    if (!--sleeps)
        pendi = true;
    if (pendi && !waiti)
        dorefresh();
}
From time to time dorefresh() is called to pull the HEAD, tracking If-Modified-Since:
function dorefresh()
{
    waiti = 100; // allow 50s for this request to complete
    ajax.head("test.jpg",checkrefresh,lm);
}
function checkrefresh(e,x,s,l)
{
    if(!l)
    {
        // not modified
        lmc++;
        waiti = 0;
    }
    else
    {
        lmc=0;
        lm=l;
        $("show"+loadn).src = "test.jpg?"+stamp();
        waiti=100;
    }
    pendi=false;
    sleeper++;
    if (sleeper>maxsleep)
        sleeper = maxsleep;
    sleeps=0;
    nextrefresh();
}
function stamp()
{
    return new Date().getTime();
}
When the image is loaded it is flipped into view. shown usually is 0 or 1:
function loadi()
{
    waiti=0;
    $("show"+loadn).style.opacity=1;
    if (shown<2)
        disp(loadn);
    loadn=1-loadn;
}
Please note that I have only tested this code with WebKit-based browsers so far.
Sorry, I cannot provide a working example, as my update source is non-public.
Also please excuse that the code is of somewhat quick-n-dirty quality.
Strictly speaking, HEAD alone is enough; we could just look at the Last-Modified header, of course.
But this recipe also works for GET requests in non-image situations.
AJAX GET in combination with images makes less sense, as this pulls the image as binary data.
I could convert that into some inline image, of course, but for bigger images (like mine) this will exceed the maximum URL length.
One thing which possibly can be done is to use the browser cache: pull the image using ajax.update and then re-display the image from the cache.
However, this depends on the cache strategy of the browser. On mobile devices the image might be too big to be cached, in which case it is transferred twice. That is wrong, as mobile devices usually have slow and, more importantly, expensive data links.
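For illustration, here is a minimal sketch of that idea, reusing the ajax helpers above; whether the second request really comes from the cache is browser-dependent, as noted:
function refreshViaCache()
{
    // Conditional GET warms the cache; the callback receives the Last-Modified header as l.
    ajax.update("test.jpg", function(e,x,s,l)
    {
        if (l) // header present: the image changed
        {
            lm = l;                           // remember it for the next conditional GET
            $("show"+loadn).src = "test.jpg"; // no query stamp; ideally served from the cache
        }
    }, lm);
}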
We could use this method if the web server wrote a text file, like JSON or a JS snippet, which then is used to display the image.
However, the nice thing about the code here is that you do not need to provide additional information.
So: no race conditions, no new weird states as in disk-full cases; it just works.
So one basic idea is to not alter the code on the web server which generates the picture, and to do everything on the browser side.
This way all you need is a softlink from the web tree to the image, and to make sure the image is atomically updated.
The downside of AJAX is the same-origin policy, so AJAX can only check the HEAD of resources on the host which provided the running JavaScript code.
Greasemonkey or Scriptlets can circumvent that, but these cannot be deployed to a broad audience.
So foreign resources (images) sadly cannot be efficiently queried for whether they were updated or not.
On my side, luckily, both the script and the image originate from the same host.
Having said all this, here are the problems with this code:
The code above adds to the delay: first the HEAD is checked, and only if this shows that something has changed is the update done.
It would be nice to do both in one request, such that the update of the image does not require an additional roundtrip.
GET can achieve that with If-Modified-Since, and it works on my side; however, I found no way to properly display the result as an inlined image. It might work for some browsers, but not for all.
The code also is way too complex for my taste. You have to deal with possible network timeouts, avoid overwhelming limited bandwidth, try to be friendly to the web server, stay compatible with as many browsers as possible, and so on.
Also, I would like to get rid of the hack of using a query parameter just to pull an updated image, as this slowly fills/flushes the cache.
Perhaps there is a way, unknown to me, to "re-trigger" an image refresh in the browser?
That way the browser could check with If-Modified-Since directly and update the image.
With JavaScript this could then trigger a .load event or similar.
On my side I do not even need that; all I want is to keep the image somewhat current.
I did not experiment with CANVAS yet. Perhaps somebody has an idea using that.
So my question just is: is there any better way (another algorithm) than shown above, apart from improving code quality?

From what I understand, you have two sources of information on the server: the image itself and the time of the last update. Your solution polls on both channels, and you want push instead, right?
Start here: http://en.wikipedia.org/wiki/Comet_(programming); there should be a simple way to let the server update the client with a new image URL. If server and client support WebSockets, that's a shortcut.
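A minimal sketch of the push variant, assuming the server pushes the fresh image URL as a plain-text WebSocket message (the endpoint and message format here are made up):
var ws = new WebSocket("ws://example.com/image-updates");
ws.onmessage = function(e)
{
    // e.data is assumed to carry the new image URL
    document.getElementById("show0").src = e.data;
};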
However, the simplest solution assumes the image URL does not change, and runs
image.src = "";
image.src = url;
via setInterval(), letting the browser deal with cache and headers.
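Put together as a minimal sketch (the interval, element id and URL are placeholders):
var image = document.getElementById("show0");
window.setInterval(function()
{
    image.src = "";          // detach first, so re-assigning forces a fresh load
    image.src = "test.jpg";  // the browser revalidates according to its cache headers
}, 5000);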

Related

Gmail like preloader

I have an SPA with heavy assets:
One JavaScript file: 3Mb
One Stylesheet file: almost 1Mb
Two fonts: 700Kb
With a normal connection the files download quickly, in less than 3 seconds. But you can imagine how frustrating the experience will be for a user on a slow connection; they will probably end up closing the window.
One solution is to use a classic preloader like Pace, but this is still not good enough.
My solution:
I would call a bit of code at different points of the big script file:
console.log('progress at 0 %') // at the top
// code to update the progress bar
console.log('progress at 23 %') // Somewhere else
// code to update the progress bar
and then at the bottom I just listen for $(document).ready() to remove the progress bar.
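For illustration, those markers could call a tiny helper instead of console.log (the element id and helper name here are hypothetical):
function updateProgress(percent)
{
    // assumes a <div id="progress-bar"> styled as a horizontal bar
    document.getElementById("progress-bar").style.width = percent + "%";
}
updateProgress(0);   // at the top of the big script file
// ... application code ...
updateProgress(23);  // somewhere else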
My question:
Is there a better solution, or a way to get how much the user has downloaded and how much is left to download across all the scripts, stylesheets, ...?
If you were to include a smaller, inline bit of javascript that bootstrapped the rest of your application, you could use the XHR progress event.
Imagine this javascript inlined:
var appScript = document.createElement('script');
var xhr = new XMLHttpRequest();
xhr.addEventListener('progress', function (e) {
    var percent = e.loaded / e.total;
    console.log('loaded', percent);
    // update loader
});
xhr.addEventListener('load', function () {
    appScript.innerHTML = this.responseText;
    document.body.appendChild(appScript);
    // ^ at this point the app javascript will run
});
xhr.open('GET', '/js/app.js');
xhr.send();
This should allow you to monitor the progress of your app being loaded.
Answer: There isn't a better (existing) solution that monitors the precise percentage of the progress of the download of all of the resources in a document.
JoshWillik's answer referenced the XHR progress event, which does offer a possibility, since it is possible to monitor the real-time progress of resource downloads in ajax requests (https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/Using_XMLHttpRequest#Monitoring_progress). Naturally this would require an initial, unmonitored download of the resources needed to send those requests, monitor, calculate and render their collective progress, and to load the resources upon completion (or whatever you wanted to do).
You also have to account for a multitude of obstacles: server configurations, the differences in media types, loading those types on the page in the correct order relative to the others from the cache, the resources concerned by the ProgressEvent returning a length that can be calculated (lengthComputable), calculating a precise percentage in real time, lazy loading, and a wide array of other factors involved in modern website and web application development techniques.
Many would argue that this type of functionality goes against UX principles, and that one should focus on delivering the document and resources as quickly as possible (which I'm not against), but I would love to see something like this exist as well.
Alas we still have no jetpacks, flying cars nor real-time progress preloaders.
PS: I've heard WebSocket mentioned in reference to a solution in another answer to a question such as this; however, I have not had time to look into it. Another idea is utilizing service workers.

Why do browsers inefficiently make 2 requests here?

I noticed something odd regarding ajax and image loading. Suppose you have an image on the page, and ajax requests the same image - one would guess that the ajax request would hit the browser cache, or that the browser should at least make only one request, with the resulting image going both to the page and to the script that wants to read/process the image.
Surprisingly, I found that even when the javascript waits for the entire page to load, the image request still makes a new request! Is this a known bug in Firefox and Chrome, or something bad jQuery ajax is doing?
Here you can see the problem, open Fiddler or Wireshark and set it to record before you click "run":
<script src="http://code.jquery.com/jquery-1.11.1.min.js"></script>
<div id="something" style="background-image:url(http://jsfiddle.net/img/logo-white.png);">Hello</div>
<script>
jQuery(function($) {
$(window).load(function() {
$.get('http://jsfiddle.net/img/logo-white.png');
})
});
</script>
Note that in Firefox it makes two requests, both resulting in 200-OK, and sending the entire image back to the browser twice. In Chromium, it at least correctly gets a 304 on second request instead of downloading the entire contents twice.
Oddly enough, IE11 downloads the entire image twice, while it seems IE9 aggressively caches it and downloads it once.
Ideally I would hope the ajax wouldn't make a second request at all, since it is requesting exactly the same url. Is there a reason css and ajax in this case usually have different caches, as though the browser is using different cache storage for css vs ajax requests?
I use the newest Google Chrome and it makes one request. But in your JSFiddle example you are loading the image twice: first via CSS through the style attribute, and second in your code through the script tag. Improved: JSFIDDLE
<div id="something" style="background-image:url('http://jsfiddle.net/img/logo-white.png');">Hello</div>
<script>
jQuery(window).load(function() {
jQuery.get('http://jsfiddle.net/img/logo-white.png');
});
// or
jQuery(function($) {
jQuery.get('http://jsfiddle.net/img/logo-white.png');
});
</script>
jQuery(function($) {...}) is called when the DOM is ready, and jQuery(window).load(...) when the DOM is ready and every image and other resource has been loaded. Nesting the two together makes no sense; see also here: window.onload vs $(document).ready()
Sure, the image is loaded two times in the Network tab of the web inspector: first through your CSS and second through your JavaScript. The second request is probably cached.
UPDATE: Every request, cached or not, is shown in this tab. See the following example: http://jsfiddle.net/95mnf9rm/4/
There are 5 requests with cached AJAX calls and 5 without caching, and 10 requests are shown in the Network tab.
When you use your image twice in CSS, it is only requested once. But if you explicitly make an AJAX call, then the browser makes an AJAX call. As you want. And then maybe it's cached or not, but it is explicitly requested, isn't it?
This "problem" could a be a CORS pre-flight test.
I had noticed this in my applications awhile back, that the call to retrieve information from a single page application made the call twice. This only happens when you're accessing URLs on a different domain. In my case we have APIs we've built and use on a different server (a different domain) than that of the applications we build. I noticed that when I use a GET or POST in my application to these RESTFUL APIs the call appears to be made twice.
What is happening is something called pre-flight (https://developer.mozilla.org/en-US/docs/Web/HTTP/Access_control_CORS), an initial request is made to the server to see if the ensuing call is allowed.
Excerpt from MDN:
Unlike simple requests, "preflighted" requests first send an HTTP request by the OPTIONS method to the resource on the other domain, in order to determine whether the actual request is safe to send. Cross-site requests are preflighted like this since they may have implications to user data. In particular, a request is preflighted if:
It uses methods other than GET, HEAD or POST. Also, if POST is used to send request data with a Content-Type other than application/x-www-form-urlencoded, multipart/form-data, or text/plain, e.g. if the POST request sends an XML payload to the server using application/xml or text/xml, then the request is preflighted.
It sets custom headers in the request (e.g. the request uses a header such as X-PINGOTHER)
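For example, under the rules quoted above, a cross-origin request carrying a custom header is preceded by an OPTIONS request (the domain and header value here are illustrative):
var xhr = new XMLHttpRequest();
xhr.open("GET", "http://api.example.com/data");   // different domain
xhr.setRequestHeader("X-PINGOTHER", "pingpong");  // custom header => preflight
xhr.send();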
Your fiddle tries to load a resource from another domain via ajax.
I think I created a better example. Here is the code:
<img src="smiley.png" alt="smiley" />
<div id="respText"></div>
<script src="//ajax.googleapis.com/ajax/libs/jquery/2.1.1/jquery.min.js"></script>
<script>
$(window).load(function(){
$.get("smiley.png", function(){
$("#respText").text("ajax request succeeded");
});
});
</script>
You can test the page here.
According to Firebug and the Chrome network panel, the image is returned with status code 200, and the image for the ajax request comes from the cache.
So I cannot find any unexpected behavior.
Cache control on Ajax requests has always been a blurred and buggy subject (example).
The problem gets even worse with cross-domain references.
The fiddle link you provided is from jsfiddle.net, which is an alias for fiddle.jshell.net. All code runs inside the fiddle.jshell.net domain, but your code references an image from the alias, and browsers will consider that cross-domain access.
To fix it, you could change both urls to http://fiddle.jshell.net/img/logo-white.png or just /img/logo-white.png.
The helpful folks at Mozilla gave some details as to why this happens. Apparently Firefox assumes an "anonymous" request could be different than normal, and for this reason it makes a second request and doesn't consider the cached value with different headers to be the same request.
https://bugzilla.mozilla.org/show_bug.cgi?id=1075297
This may be a shot in the dark, but here's what I think is happening.
According to http://api.jquery.com/jQuery.get/:
dataType
Type: String
The type of data expected from the server.
Default: Intelligent Guess (xml, json, script, or html).
That gives you 4 possible return types. There is no dataType of image/gif being returned. Thus, the browser doesn't check its cache for the src document, as it is being delivered as a different MIME type.
The server decides what can be cached and for how long. However, it again depends on the browser whether or not to follow it. Most web browsers like Chrome, Firefox, Safari, Opera and IE do follow it, though.
The point that I want to make here is that your web server might be configured to not allow your browser to cache the content; thus, when you request the image through CSS and JS, the browser follows your server's orders, doesn't cache it, and requests the image twice...
I want a JS-accessible image
Have you tried manipulating CSS using jQuery? It is pretty fun: you have full CRUD (create, read, update, delete) on CSS properties. For example, to do an image resize on the server side:
$('#container').css('background', 'url(somepage.php?src=image_source.jpg'
+ '&w=' + $("#container").width()
+ '&h=' + $("#container").height() + '&zc=1');
Surprisingly, I found that even when the javascript waits for the entire page to load, the image request still makes a new request! Is this a known bug in Firefox and Chrome, or something bad jQuery ajax is doing?
It is blatantly obvious that this is not a browser bug.
The computer is deterministic and does exactly what you tell it to (not what you want it to do). If you want to cache images, that is done on the server side. Based on who handles caching, it can be handled as:
Server (like IIS or Apache) cache - typically caches things that are reused often (e.g., twice in 5 seconds)
Server side application cache - typically it reuses a server custom cache, or you create sprite images, or ...
Browser cache - the server side adds cache headers to images and browsers maintain the cache
If it is not clear, then I would like to make it clear: you don't cache images with JavaScript.
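As a sketch of that last option: the server adds cache headers to the image response and the browser does the rest. Node.js is used here purely for illustration:
// Minimal Node.js sketch: serve an image with cache headers.
var http = require('http');
var fs = require('fs');
http.createServer(function (req, res) {
    res.setHeader('Content-Type', 'image/jpeg');
    res.setHeader('Cache-Control', 'public, max-age=86400'); // let browsers cache for a day
    fs.createReadStream('test.jpg').pipe(res);               // serve the static image
}).listen(8080);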
Ideally I would hope the ajax wouldn't make a second request at all, since it is requesting exactly the same url.
What you try to do is to preload images.
Once an image has been loaded in any way into the browser, it will be in the browser cache and will load much faster the next time it is used, whether that use is in the current page or in any other page, as long as the image is used before it expires from the browser cache.
So, to precache images, all you have to do is load them into the browser. If you want to precache a bunch of images, it's probably best to do it with javascript, as it generally won't hold up the page load when done from javascript. You can do that like this:
function preloadImages(array) {
    if (!preloadImages.list) {
        preloadImages.list = [];
    }
    for (var i = 0; i < array.length; i++) {
        var img = new Image();
        img.onload = function() {
            var index = preloadImages.list.indexOf(this);
            if (index !== -1) {
                // remove this one from the array once it's loaded
                // for memory consumption reasons
                preloadImages.list.splice(index, 1);
            }
        }
        preloadImages.list.push(img);
        img.src = array[i];
    }
}
preloadImages(["url1.jpg", "url2.jpg", "url3.jpg"]);
Then, once they've been preloaded like this via javascript, the browser will have them in its cache and you can just refer to the normal URLs in other places (in your web pages) and the browser will fetch that URL from its cache rather than over the network.
Source : How do you cache an image in Javascript
Is there a reason css and ajax in this case usually have different caches, as though the browser is using different cache storage for css vs ajax requests?
Even in the absence of information, do not jump to conclusions!
One big reason to use image preloading is if you want to use an image for the background-image of an element on a mouseOver or :hover event. If you only apply that background-image in the CSS for the :hover state, that image will not load until the first :hover event, and thus there will be a short annoying delay between the mouse going over that area and the image actually showing up.
Technique #1: Load the image on the element's regular state, only shift it away with background position. Then move the background position to display it on hover.
#grass { background: url(images/grass.png) no-repeat -9999px -9999px; }
#grass:hover { background-position: bottom left; }
Technique #2: If the element in question already has a background-image applied and you need to change that image, the above won't work. Typically you would go for a sprite here (a combined background image) and just shift the background position. But if that isn't possible, try this. Apply the background image to another page element that is already in use, but doesn't have a background image.
#random-unsuspecting-element { background: url(images/grass.png) no-repeat -9999px -9999px; }
#grass:hover { background: url(images/grass.png) no-repeat; }
The idea of creating new page elements to use for this preloading technique may pop into your head, like #preload-001, #preload-002, but that's rather against the spirit of web standards. Hence the use of page elements that already exist on your page.
The browser will make the 2 requests on the page, because an image referenced from the CSS is fetched with a GET request (not ajax) too, before the entire page is rendered.
The window load handler is similar to the onload attribute and runs before the rest of the page is processed; the image from the Ajax call will therefore be requested before the image on the div, which is processed during the page load.
If you would like to load an image after the entire page has loaded, you should use document.ready() instead.

Is there any way to save image/pdf content into the local file system for all browsers? [closed]

I got image content in an AJAX response as an ArrayBuffer and appended that buffer to a BlobBuilder. Now I want to write these contents to a file. Is there any way to do this?
I used window.requestFileSystem; it works fine in Chrome, but it does not work in Mozilla.
Here is my piece of code:
function retrieveImage(studyUID, seriesUID, instanceUID, sopClassUID, nodeRef) {
    window.requestFileSystem = window.requestFileSystem || window.webkitRequestFileSystem;
    var xhr = new XMLHttpRequest();
    var url = "/alfresco/createthumbnail?ticket=" + ticket + "&node=" + nodeRef;
    xhr.open('GET', url, true);
    xhr.responseType = 'arraybuffer';
    xhr.onload = function(e) {
        if (this.status == 200) {
            window.requestFileSystem(window.TEMPORARY, 1024*1024, function(fs) {
                var fn = '';
                if (sopClassUID == '1.2.840.10008.5.1.4.1.1.104.1') {
                    fn = instanceUID + '.pdf';
                } else {
                    fn = instanceUID + '.jpg';
                }
                fs.root.getFile(fn, {create: true}, function(fileEntry) {
                    fileEntry.createWriter(function(writer) {
                        writer.onwriteend = function(e) {
                            console.log(fileEntry.fullPath + " created");
                        }
                        writer.onerror = function(e) {
                            console.log(e.toString());
                        }
                        var bb;
                        if (window.BlobBuilder) {
                            bb = new BlobBuilder();
                        } else if (window.WebKitBlobBuilder) {
                            bb = new WebKitBlobBuilder();
                        }
                        bb.append(xhr.response);
                        if (sopClassUID == '1.2.840.10008.5.1.4.1.1.104.1') {
                            writer.write(bb.getBlob('application/pdf'));
                        } else {
                            writer.write(bb.getBlob('image/jpeg'));
                        }
                    }, fileErrorHandler);
                }, fileErrorHandler);
            }, fileErrorHandler);
        }
    };
    xhr.send();
}
The script of a web page is not allowed to write arbitrary files (such as PDFs) to the client's storage. And you should be thankful, because that means web pages have a hard time trying to put malware on your machine.
Instead you should redirect the user (or open a new window/tab) to a URL where the browser can find the content desired for download, and let it handle it. Use the response headers to tell the client to download the content or display it, as explained here.
If you need to create the downloaded content dynamically, then manage it on the server by making it an active page (.php, .jsp, .aspx, etc...). What matters is to have the correct MIME type in the header of the response.
Note: yes, I'm telling you not to use ajax, just window.open. Edit: I guess you may want to present the images in an img; in that case it is the same, just put the URL in the src attribute and use no ajax. Only some javascript to update the attribute if appropriate.
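As a sketch, the headers in question look like this; Node.js is used here purely for illustration (the principle is identical in .php/.jsp/.aspx), and createPdf() is a hypothetical generator of the dynamic content:
var http = require('http');
http.createServer(function (req, res) {
    res.setHeader('Content-Type', 'application/pdf');    // the correct MIME type
    res.setHeader('Content-Disposition', 'attachment; filename="report.pdf"'); // force download
    res.end(createPdf()); // hypothetical: build the content dynamically
}).listen(8080);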
Given your comment, I understand that you want:
To cache the image on the client, to avoid having to get it back from the server every time.
To allow the user to customize his experience by allowing the use of images from local storage.
Now, again for security reasons, arbitrary access to the client's files is not allowed. In this case it works both ways: first, it prevents the web page from spying on you; second, it prevents you from injecting malicious content into the page.
So, for the first part: as far as I know, the default is to cache images [this is handled by your browser, and yes, you should clean it from time to time because it tends to grow]. If that is not working for you, I guess you could try using a cache manifest.
About the second: the usual way would be to use local storage [which, again, is handled by your browser, and is not arbitrary access to the client's files] to store/retrieve the URL of the image and use it to present the image.
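A minimal sketch of that second part (the key name, URL and element id are made up):
// Store the chosen image URL once...
localStorage.setItem("userImageUrl", "http://example.com/images/avatar.jpg");
// ...and on later visits read it back to present the image.
var url = localStorage.getItem("userImageUrl");
if (url)
    document.getElementById("avatar").src = url;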
The image can still be saved at the server, and yes, it can be cached. To get it to the server - of course - you can always upload it with <input type="file" ... /> and you may need to set enctype on your form. - You already knew that, right? - On the server, store the image in a database (or dedicated folder). Now the page that is responsible for retrieving the image should:
check the request method
check the user's permissions (identify them by the session / cookie)
check the parameters of the request (if any)
set the header
output the file fetched from the database (or dedicated folder)
Now, let's say you want to allow this to work as an xcopy-deployable application (that just happens to run in a browser). In this case you can always tell the user to store the images he wants in a particular location and access them with a relative path.
Or - just because - you are hosting in a place where there is no chance of server-side scripting. So you have to go along only with what javascript gives you. Well, you cannot use a relative path here, since it is not local... and if you try to use a local absolute path, the browser will just diss you (I mean, it just ignores it).
So, you can't get the image from a file on the client, and you can't store it on the server...
Well, as you know there is a working draft for that, and I notice it is what you are trying. The problem is that it is a working draft. The initial implementation gets staggered by the security issues, to quote Jonas Sicking:
The main problem with exposing this functionality to the web is security. You wouldn’t want just any website to read or modify your images. We could put up a prompt like we do with the GeoLocation API, given that this API potentially can delete all your pictures from the last 10 years, we probably want something more. This is something we are actively working on. But it’s definitely the case here that security is the hard part here, not implementing the low-level file operations.
So, I guess the answer is "not yet"? In fact, considering Microsoft's approach of only providing the parts of the standard that reach recommendation status, and also its approach of launching a new version of IE with each new version of Windows... you will have to wait a while to have support in all the browsers. First wait until the FileAPI reaches recommendation status. Then wait until Microsoft updates IE to support it. And if, by any chance (as it seems will happen), that will only be IE10 (or a future IE11), and those don't work on a Windows before Windows 8, you will be waiting for a lot of people to upgrade.
If this is your situation, I would suggest to get an API for some image hosting web site, and use that instead [That will probably not be free (or not be private), so you could just change your web hosting already].
You can't have a common way to store the response in files that is compatible with all browsers.
There is a way: you can use FileReader in javascript, but that again wouldn't work on IE either.
I had a similar problem a few weeks ago. What I did was make an ajax request to a server, passing the content; the server stored the content for me in a file, then it returned a reference to the stored file.
I stored my files in a temp database table, and the server action returned the id for the file, by which we can access the file from the database whenever we want.
You can also store your files on the server as thumbnails, but I preferred the database.
If you need any more specifics, let me know.

Take Screenshot of Browser via JavaScript (or something else)

For support reasons I want to be able for a user to take a screenshot of the current browser window as easy as possible and send it over to the server.
Any (crazy) ideas?
That would appear to be a pretty big security hole in JavaScript if you could do this. Imagine a malicious user installing that code on your site with a XSS attack and then screenshotting all of your daily work. Imagine that happening with your online banking...
However, it is possible to do this sort of thing outside of JavaScript. I developed a Swing application that used screen capture code like this which did a great job of sending an email to the helpdesk with an attached screenshot whenever the user encountered a RuntimeException.
I suppose you could experiment with a signed Java applet (shock! horror! noooooo!) that hung around in the corner. If executed with the appropriate security privileges given at installation it might be coerced into executing that kind of screenshot code.
For convenience, here is the code from the site I linked to:
import java.awt.Dimension;
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;
import javax.imageio.ImageIO;
import java.io.File;
...
public void captureScreen(String fileName) throws Exception {
    Dimension screenSize = Toolkit.getDefaultToolkit().getScreenSize();
    Rectangle screenRectangle = new Rectangle(screenSize);
    Robot robot = new Robot();
    BufferedImage image = robot.createScreenCapture(screenRectangle);
    ImageIO.write(image, "png", new File(fileName));
}
...
Please see the answer shared here for a relatively successful implementation of this:
https://stackoverflow.com/a/6678156/291640
Utilizing:
https://github.com/niklasvh/html2canvas
You could try to render the whole page in a canvas and save this image back to the server. Have fun :)
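A rough sketch of that approach; newer html2canvas builds return a promise (older ones used an onrendered callback), and the upload endpoint here is made up:
html2canvas(document.body).then(function (canvas) {
    var dataUrl = canvas.toDataURL("image/png");   // base64-encoded screenshot
    var xhr = new XMLHttpRequest();
    xhr.open("POST", "/support/screenshot");       // hypothetical server endpoint
    xhr.setRequestHeader("Content-Type", "application/x-www-form-urlencoded");
    xhr.send("image=" + encodeURIComponent(dataUrl));
});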
A webpage can't do this (or at least, I would be very surprised if it could, in any browser) but a Firefox extension can. See https://developer.mozilla.org/en/Drawing_Graphics_with_Canvas#Rendering_Web_Content_Into_A_Canvas -- when that page says "Chrome privileges" that means an extension can do it, but a web page can't.
Seems to me that support needs (at least) the answers for two questions:
What does the screen look like? and
Why does it look that way?
A screenshot -- a visual -- is very necessary and answers the first question, but it can't answer the second.
As a first attempt, I'd try to send the entire page up to support. The support tech could display that page in his browser (answers the first question); and could also see the current state of the customer's html (helps to answer the second question).
I'd try to send as much of the page as is available to the client JS by way of AJAX or as the payload of a form. I'd also send info not on the page: anything that affects the state of the page, like cookies or session IDs or whatever.
The cust might have a submit-like button to start the process.
I think that would work. Let's see: it needs some CGI somewhere on the server that catches the incoming user page and makes it available to support, maybe by writing a disk file. Then the support person can load (or have loaded automatically) that same page. All the other info (cookies and so on) can be put into the page that support sees.
PLUS: the client JS that handles the submit-button onclick( ) could also include any useful JS variable values!
Hey, this can work! I'm getting psyched :-)
HTH
-- pete
I've seen people do this with one of two approaches:
set up a separate server for screenshotting and run a bunch of Firefox instances on there; check out these two gems if you're doing it in ruby: selenium-webdriver and headless
use a hosted solution like http://url2png.com (way easier)
You can also do this with the Fireshot plugin. I use the following code (that I extracted from the API code so I don't need to include the API JS) to make a direct call to the Fireshot object:
var element = document.createElement("FireShotDataElement");
element.setAttribute("Entire", true);
element.setAttribute("Action", 1);
element.setAttribute("Key", "");
element.setAttribute("BASE64Content", "");
element.setAttribute("Data", "C:/Users/jagilber/Downloads/whatev.jpg");
if (typeof(CapturedFrameId) != "undefined")
    element.setAttribute("CapturedFrameId", CapturedFrameId);
document.documentElement.appendChild(element);
var evt = document.createEvent("Events");
evt.initEvent("capturePageEvt", true, false);
element.dispatchEvent(evt);
Note: I don't know if this functionality is only available for the paid version or not.
Perhaps http://html2canvas.hertzen.com/ could be used. Then you can capture the display and then process it.
You might try PhantomJS, a headless browsing toolkit.
http://phantomjs.org/
The following JavaScript example demonstrates basic screenshot functionality:
var page = require('webpage').create();
page.settings.userAgent = 'UltimateBrowser/100';
page.viewportSize = { width: 1200, height: 1200 };
page.clipRect = { top: 0, left: 0, width: 1200, height: 1200 };
page.open('https://google.com/', function () {
    page.render('output.png');
    phantom.exit();
});
I understand this post is 5 years old, but for the sake of future visits I'll add my own solution here which I think solves the original post's question without any third-party libraries apart from jQuery.
var pageClone = $('html').clone();
// Make sure that CSS and images load correctly when opening this clone
pageClone.find('head').append("<base href='" + location.href + "' />");
// OPTIONAL: Remove potentially interfering scripts so the page is totally static
pageClone.find('script').remove();
var htmlString = pageClone.html();
You could remove other parts of the DOM you think are unnecessary, such as the support form if it is in a modal window. Or you could choose not to remove scripts if you prefer to maintain some interaction with dynamic controls.
Send that string to the server, either in a hidden field or by AJAX, and then on the server side just attach the whole lot as an HTML file to the support email.
The benefits of this are that you'll get not just a screenshot but the entire scrollable page in its current form, plus you can even inspect and debug the DOM.
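For example, via AJAX (the endpoint name is made up):
$.post('/support/page-snapshot', { html: htmlString, url: location.href });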
Print Screen? Old school and a couple of keypresses, but it works!
This may not work for you, but on IE you can use the snapsie plugin. It doesn't seem to be in development anymore, but the last release is available from the linked site.
I think you need ActiveX controls; without them I can't imagine it. You can force the user to install them first; after the installation on the client side, the ActiveX controls should work and you can capture.
We are temporarily collecting Ajax states, data in form fields and session information. Then we re-render it at the support desk. Since we test and integrate for all browsers, there are hardly any support cases for display reasons.
Have a look at the red button at the bottom on holidaycheck
Alternatively there is html2canvas of Google. But it is only applicable to newer browsers and I've never tried it.
In JavaScript? No. I do work for a security company (sort of NetNanny type stuff) and the only effective way we've found to do screen captures of the user is with a hidden application.

How can I monitor the rendering time in a browser?

I work on an internal corporate system that has a web front-end using Tomcat.
How can I monitor the rendering time of specific pages in a browser (IE6)?
I would like to be able to record the results in a log file (separate log file or the Tomcat access log).
EDIT: Ideally, I need to monitor the rendering on the clients accessing the pages.
The Navigation Timing API is available in modern browsers (IE9+) except Safari:
function onLoad() {
    var now = new Date().getTime();
    var page_load_time = now - performance.timing.navigationStart;
    console.log("User-perceived page loading time: " + page_load_time);
}
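To get the value into a server-side log, one option is to request a tiny URL with the timing in the query string from inside onLoad(); the hit, including the query string, then shows up in the Tomcat access log (the /log path is made up):
// Fire-and-forget beacon; works even in old browsers such as IE6.
new Image().src = "/log?page=" + encodeURIComponent(location.pathname)
                + "&ms=" + page_load_time;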
In case a browser has JavaScript enabled one of the things you could do is to write an inline script and send it first thing in your HTML. The script would do two things:
Record current system time in a JS variable (if you're lucky the time could roughly correspond to the page rendering start time).
Attach JS function to the page onLoad event. This function will then query the current system time once again, subtract the start time from step 1 and send it to the server along with the page location (or some unique ID you could insert into the inline script dynamically on your server).
<script language="JavaScript">
var renderStart = new Date().getTime();
window.onload=function() {
var elapsed = new Date().getTime()-renderStart;
// send the info to the server
alert('Rendered in ' + elapsed + 'ms');
}
</script>
... usual HTML starts here ...
You'd need to make sure that the page doesn’t override onload later in the code, but adds to the event handlers list instead.
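For instance, registering the handler additively instead of assigning window.onload (attachEvent covers old IE such as IE6):
function onPageLoad() {
    var elapsed = new Date().getTime() - renderStart;
    // send the info to the server
}
if (window.addEventListener)
    window.addEventListener("load", onPageLoad, false);
else if (window.attachEvent) // IE8 and earlier
    window.attachEvent("onload", onPageLoad);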
As far as non-invasive techniques are concerned, Hammerhead measures complete load time (including JavaScript execution), albeit in Firefox only.
I've seen usable results when a JavaScript snippet could be added globally to measure the start and end of each page load operation.
Have a look at Selenium - they offer a remote control that can automatically start different browsers (e.g. IE6), load pages, test for specific content on the page. At the end reports are generated that also show the rendering times.
Since others are posting answers that use other browsers, I guess I will too. Chrome has a very detailed profiling system that breaks down the rendering time of the page and shows the time it took for each step along the way.
As for IE, you might want to consider writing a plugin. There seem to be few tools like this on the market. Maybe you could sell it.
On Firefox you can use Firebug to monitor load time. With the YSlow plugin you can even get recommendations how to improve the performance.
