I have an SPA with heavy assets:
One JavaScript file: 3 MB
One stylesheet file: almost 1 MB
Two fonts: 700 KB
With a normal connection the files download quickly, in less than 3 seconds. But you can imagine how frustrating the experience will be for a user on a slow connection; they will probably end up closing the window.
One solution is to use a classic preloader like Pace, but this still isn't good enough.
My solution:
I would call a bit of code at different points in the big script file:
console.log('progress at 0 %') // at the top
// code to update the progress bar
console.log('progress at 23 %') // Somewhere else
// code to update the progress bar
and then at the bottom I just listen for $(document).ready() to remove the progress bar.
My question:
Is there a better solution, or a way to find out how much the user has downloaded and how much is left to download across all the scripts, stylesheets, and so on?
If you were to include a smaller, inline bit of JavaScript that bootstrapped the rest of your application, you could use the XHR progress event.
Imagine this JavaScript inlined:
var appScript = document.createElement('script');
var xhr = new XMLHttpRequest();
xhr.addEventListener('progress', function (e) {
  if (e.lengthComputable) {
    var percent = e.loaded / e.total;
    console.log('loaded', percent);
    // update loader
  }
});
xhr.addEventListener('load', function () {
  // textContent is more reliable than innerHTML for script elements
  appScript.textContent = this.responseText;
  document.body.appendChild(appScript);
  // ^ at this point the app javascript will run
});
xhr.open('GET', '/js/app.js');
xhr.send();
This should allow you to monitor the progress of your app being loaded.
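In newer browsers the same idea can be sketched with fetch and a ReadableStream reader instead of XHR. This is only a sketch: it assumes the server sends a Content-Length header, it reuses the /js/app.js path from above, and the percentage is only approximate when the response is compressed.

fetch('/js/app.js').then(function (response) {
  // NaN if the header is missing; guard accordingly in real code
  var total = parseInt(response.headers.get('Content-Length'), 10);
  var loaded = 0;
  var decoder = new TextDecoder();
  var source = '';
  var reader = response.body.getReader();

  function pump() {
    return reader.read().then(function (chunk) {
      if (chunk.done) {
        source += decoder.decode(); // flush the decoder
        var script = document.createElement('script');
        script.textContent = source;
        document.body.appendChild(script); // the app code runs here
        return;
      }
      loaded += chunk.value.length;
      source += decoder.decode(chunk.value, { stream: true });
      console.log('loaded', loaded / total); // update the loader here instead
      return pump();
    });
  }
  return pump();
});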
Answer: There is no better (existing) solution that monitors the precise download progress of all the resources in a document.
JoshWillik's answer referenced the XHR progress event, which does offer a possibility, since the real-time download progress of resources fetched via Ajax requests can be monitored (https://developer.mozilla.org/en-US/docs/Web/API/XMLHttpRequest/Using_XMLHttpRequest#Monitoring_progress). Naturally, this would require an initial, unmonitored download of the resources needed to send those requests, monitor and calculate their collective progress, render it, and load the resources upon completion (or whatever else you wanted to do).
You also have to account for a multitude of obstacles: server configurations, differences in media types, loading those types onto the page in the correct order relative to the others from the cache, whether the resources reported by the ProgressEvent have a computable length (lengthComputable), calculating a precise percentage in real time, lazy loading, and a wide range of other factors involved in modern website and web application development.
Many would argue that this type of functionality goes against UX principles, and that the focus should be on delivering the document and its resources as quickly as possible (which I'm not against), but I would love to see something like this exist as well.
Alas we still have no jetpacks, flying cars nor real-time progress preloaders.
P.S. I've heard WebSocket mentioned as part of a solution in another answer to a question like this, but I haven't had time to look into it. Another idea is utilizing service workers, sketched below.
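For what it's worth, here is a rough, untested sketch of that service-worker idea: intercept each fetch, stream the response body through, and post progress messages back to the open pages. All names here are illustrative, and this still cannot cover every resource type or cache interaction.

// sw.js
self.addEventListener('fetch', function (event) {
  event.respondWith(fetch(event.request).then(function (response) {
    var total = parseInt(response.headers.get('Content-Length'), 10) || 0;
    var loaded = 0;
    var reader = response.body.getReader();
    var stream = new ReadableStream({
      pull: function (controller) {
        return reader.read().then(function (chunk) {
          if (chunk.done) { controller.close(); return; }
          loaded += chunk.value.length;
          // tell every open page how far along this resource is
          self.clients.matchAll().then(function (clients) {
            clients.forEach(function (client) {
              client.postMessage({ url: event.request.url, loaded: loaded, total: total });
            });
          });
          controller.enqueue(chunk.value);
        });
      }
    });
    return new Response(stream, { headers: response.headers, status: response.status });
  }));
});

A page registered with this worker would listen via navigator.serviceWorker.addEventListener('message', ...) and aggregate the per-resource numbers into one progress bar.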
Related
Our site has an asynchronously loaded application.js:
<script async="async" src="//...application-123456.js"></script>
Additionally, we have a lot of third party scripts that (1) are asynchronously loaded, and (2) create in turn an async <script> tag where a bigger script is called.
Just to give an example, one of these third party scripts is Google's gpt.js (you can have a quick look to understand how it works).
Our problem is that, while all the third-party scripts load asynchronously as expected, the application.js one gets stuck in "queuing" status for more than 4 seconds.
I tried to change the script and make it load the way the third-party ones do: create a <script> element, set its "src" attribute, and insert it:
<script async>
(function() {
  var elem = document.createElement('script');
  elem.src = 'http://...application-123456.js';
  elem.async = true;
  elem.type = 'text/javascript';
  var scpt = document.getElementsByTagName('script')[0];
  scpt.parentNode.insertBefore(elem, scpt);
})();
</script>
but nothing changed.
Then I studied the network cascade on a page of our site that contains almost no images, and saw that the queuing time was almost zero. I tried the same experiment on pages with different numbers of images, and saw that the queuing time increases proportionally on pages with more images.
I read this in Chrome's network cascade documentation:
QUEUING TIME: The request was postponed by the rendering engine because it's considered lower priority than critical resources (such as scripts/styles). This often happens with images.
Is it possible that for some reason the browser is marking our application.js as "lower priority"? I looked around the web and it seems that nobody else has run into problems with queuing time. Does anybody have an idea?
Thank you very much.
Browsers use a pre-loader to improve network utilisation. This article explains the concept.
In the Chrome Documentation you linked to above, it says the following about queuing:
If a request is queued, it indicates that:
The request was postponed by the rendering engine because it's considered lower priority than critical resources (such as scripts/styles). This often happens with images.
The request was put on hold to wait for an unavailable TCP socket that's about to free up.
The request was put on hold because the browser only allows six TCP connections per origin on HTTP 1.
Time spent making disk cache entries (typically very quick.)
The pre-loader would have retrieved the lightweight resources quickly, such as the styles and scripts, and then queued up the images because, as the criteria above suggest, only six TCP connections are permitted per origin. This would explain the delay in the total response time.
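If the pre-loader is indeed demoting the script, one thing worth experimenting with (not from the original answer; a sketch for browsers that support preload hints, reusing the elided URL from the question) is declaring the script explicitly so it is fetched with high priority:

<link rel="preload" href="//...application-123456.js" as="script">
<script async src="//...application-123456.js"></script>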
I've been experimenting with responsive voice for my html5 application. Apart from responsive voice, my app works stand-alone, with no need for the Internet, because the schools where I work have unreliable Internet connections.
I previously had a problem waiting for RV's javascript to load on a slow connection, which I solved using the preloader yepnope:
yepnope({
  load: 'https://code.responsivevoice.org/responsivevoice.js',
  callback: function (url, result, key) {
    if (typeof responsiveVoice != "undefined") {
      // code to activate RV functionality here
    }
  }
});
While testing this out, I realised the potential for something much better: yepnope automatically times out after 10 seconds if the script doesn't load and triggers the callback function anyway. That timeout can be changed, but what I'd like is, effectively, no timeout at all!
For example, if the students start using my app at 07:30 a.m. when the school's satellite dish barely works, then at 3 p.m. the clouds clear to make RV viable, it would be nice if the script finally loaded, triggered the callback and RV sprang into life.
So I have 3 questions:
In principle, is there any reason why I should not change the yepnope timeout from 10 seconds, to 12 hours? e.g.
yepnope.errorTimeout = 43200000; // 12 hours in milliseconds
I notice that yepnope has been deprecated. Can anyone recommend a similarly easy-to-use preloader with a no timeout option?
Would it be lighter on system resources to use setInterval? e.g.
var net_check = window.setInterval(yepnope, 300000); // try loading the script every 5 minutes
then if RV loads, cancel the setInterval:
yepnope({
  load: 'https://code.responsivevoice.org/responsivevoice.js',
  callback: function (url, result, key) {
    if (typeof responsiveVoice != "undefined") {
      clearInterval(net_check);
      // code to activate RV functionality here
    }
  }
});
Thanks as always for any advice.
Edit: @Steyn van Esveld raised a really good point: "Is there any reason you can't download the responsivevoice.js and load it locally?"
In fact, the RV script doesn't provide the text-to-speech voices itself; it's more of a facilitator. If your browser has native text-to-speech support, RV uses it; if not, they generate audio files (presumably on their servers) and send them to your browser. Also, even Chrome's native text-to-speech support evaporates if you are offline. For example, if you run:
voices = window.speechSynthesis.getVoices();
voices.length
from the console while offline, it returns 0.
This means I need a fallback if the Internet goes down after my app loads. The most reliable way I've found to do this is:
var rvStarted = false;
responsiveVoice.speak(vocEx, { onstart: function () { rvStarted = true; } });
setTimeout(function () {
  if (!rvStarted) {
    responsiveVoice.cancel();
    audVoc.play(); // plays a backup off-line recording
  }
}, 1000);
There is an onerror callback in the RV API, but it's vaguely documented, and I certainly can't control the timeout with it the way I can with this script.
After more research and experimenting, I've got a solution that works. If anyone can come up with something more efficient, I'd be delighted. First the code (explanation follows):
function loadRV() {
  console.log("Trying to load RV");
  inject.js('https://code.responsivevoice.org/responsivevoice.js', function () {
    if (typeof responsiveVoice != "undefined") {
      clearInterval(net_check);
      console.log("RV loaded");
      // code here to turn on RV functionality
    } else {
      console.log("RV load failed");
    }
  });
}

var net_check = setInterval(function () {
  loadRV();
}, 60000); // retry every minute
loadRV();
Now the explanation:
I realised that yepnope detected that there was no network connection during the first try, and thereafter made no attempt to load the JavaScript even though the network had meanwhile come up. The problem was not about the timeout (thus the change in the question title).
So I searched for a replacement for yepnope that would try to load the JavaScript on every call, regardless of what had happened previously. I eventually came across inject.js. It is based on yepnope with some functionality removed and some added. I guess the ability to check whether there is a network connection was removed, which suited my needs.
I then wrapped inject.js in a function and called it every minute with setInterval. I also call it once immediately so that RV functionality will be present if there is a viable Internet connection at initialisation.
Although this seems unwieldy, it's working very well so far. I've tested it with the three situations that I commonly face:
No network connection at all during startup; the computer connects to the network some time later.
A local network connection but no Internet connection during startup; the local network connects to the Internet some time later.
A very slow Internet connection during startup; the connection becomes viable some time later.
The setInterval has no appreciable effect on program execution, and the added functionality is quite transparent. If I connect the computer to the network and start a listening activity, about 1 minute later the RV voices will be added to my off-line recordings.
I doubt that many others face similar problems (I certainly would not wish them on anyone), but if so, I do hope this helps.
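For comparison, the same retry loop can be written without any loader library, using the script element's onerror event. This is just a sketch; the URL and the one-minute interval mirror the code above.

function tryLoadRV() {
  var s = document.createElement('script');
  s.src = 'https://code.responsivevoice.org/responsivevoice.js';
  s.onload = function () {
    if (typeof responsiveVoice != "undefined") {
      console.log("RV loaded");
      // code here to turn on RV functionality
    }
  };
  s.onerror = function () {
    s.parentNode.removeChild(s); // clean up the failed tag
    setTimeout(tryLoadRV, 60000); // try again in a minute
  };
  document.getElementsByTagName('head')[0].appendChild(s);
}
tryLoadRV();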
I have an application which updates an image from time to time. The update interval is not predictable. The image itself is updated atomically on the web server via rename(). That is all this application does and there shall be no change on the Apache side such that the webserver can continue to only serve static files.
There is some AJAX script which displays the content and updates this image when it changes. This is done using polling. The naive JavaScript version used a counter and pulled the image every second or so by adding a query timestamp. However, 99% of the time this pulled the image unchanged.
The current, not-so-naive version uses XMLHttpRequest (AJAX) to check the If-Modified-Since header, and if a change is detected, the update is invoked.
The question now is: is there a better way to achieve this effect? (Perhaps look at the last paragraph of this text before you dive in. ;)
Here are the core code snippets of the current version. Please note that the code has been edited for brevity, so var initialization is left out, and some lines which are not of interest have been removed.
First the usual, slightly extended AJAX binding:
// partly stolen at http://snippets.dzone.com/posts/show/2025
function $(e){if(typeof e=='string')e=document.getElementById(e);return e};
ajax={};
ajax.collect=function(a,f){var n=[];for(var i=0;i<a.length;i++){var v=f(a[i]);if(v!=null)n.push(v)}return n};
ajax.x=function(){try{return new XMLHttpRequest()}catch(e){try{return new ActiveXObject('Msxml2.XMLHTTP')}catch(e){return new ActiveXObject('Microsoft.XMLHTTP')}}};
ajax.send=function(u,f,m,a,h){var x=ajax.x();x.open(m,u,true);x.onreadystatechange=function(){if(x.readyState==4)f(x.responseText,x,x.status==0?200:x.status,x.getResponseHeader("Last-Modified"))};if(m=='POST')x.setRequestHeader('Content-type','application/x-www-form-urlencoded');if(h)h(x);x.send(a)};
ajax.get=function(url,func){ajax.send(url,func,'GET')};
ajax.update=function(u,f,lm){ajax.send(u,f,'GET',null,lm?function(x){x.setRequestHeader("If-Modified-Since",lm)}:lm)};
ajax.head=function(u,f,lm){ajax.send(u,f,'HEAD',null,lm?function(x){x.setRequestHeader("If-Modified-Since",lm)}:lm)};
The basic HTML part, it includes 2 images which are flipped after loading, and a third one (not referenced in the code snippets) to display archived versions etc., which prevents flipping the updates as well:
</head><body onload="init()">
<div id="shower"><img id="show0"/><img id="show1"/><img id="show2"/></div>
The init part includes the timer. There is a bit more to it, to compensate for network delays on slow links, reduce the polling rate, and so on:
function init()
{
  window.setInterval(timer, 500);
  for (var a=2; --a>=0; )
  {
    var o = $("show"+a);
    o.onload = loadi;
  }
  disp(0);
}
function disp(n)
{
  shown = n;
  window.setTimeout(disp2, 10);
}

function disp2()
{
  hide("show0");
  hide("show1");
  hide("show2");
  show("show"+shown);
}

function hide(s)
{
  $(s).style.display = "none";
}

function show(s)
{
  $(s).style.display = "inline";
}

function timer(e)
{
  if (waiti && !--waiti)
    dorefresh();
  nextrefresh();
}

function nextrefresh()
{
  if (sleeps < 0)
    sleeps = sleeper;
  if (!--sleeps)
    pendi = true;
  if (pendi && !waiti)
    dorefresh();
}
From time to time dorefresh() is called to pull the HEAD, tracking If-Modified-Since:
function dorefresh()
{
  waiti = 100; // allow 50 s for this request (100 ticks of the 500 ms timer)
  ajax.head("test.jpg", checkrefresh, lm);
}

function checkrefresh(e, x, s, l)
{
  if (!l)
  {
    // not modified
    lmc++;
    waiti = 0;
  }
  else
  {
    lmc = 0;
    lm = l;
    $("show"+loadn).src = "test.jpg?"+stamp();
    waiti = 100;
  }
  pendi = false;
  sleeper++;
  if (sleeper > maxsleep)
    sleeper = maxsleep;
  sleeps = 0;
  nextrefresh();
}

function stamp()
{
  return new Date().getTime();
}
When the image is loaded it is flipped into view. shown usually is 0 or 1:
function loadi()
{
  waiti = 0;
  $("show"+loadn).style.opacity = 1;
  if (shown < 2)
    disp(loadn);
  loadn = 1 - loadn;
}
Please note that I only tested this code with Webkit based browsers yet.
Sorry, I cannot provide a working example, as my update source is non-public.
Also please excuse that the code is somewhat quick-n-dirty quality.
Strictly speaking HEAD alone is enough, we could look at the Last-Modified header of course.
But this recipe here also works for GET requests in a non-image situation.
AJAX GET in combination with images makes less sense, as it pulls the image as binary data.
I could convert that into an inline (data URI) image, of course, but for bigger images (like mine) this would exceed the maximum URL length.
One thing which could possibly be done is to use the browser cache.
That is, pull the image using ajax.update and then re-display the image from the cache.
However, this depends on the browser's cache strategy. On mobile devices the image might be too big to be cached, in which case it is transferred twice. This is bad, as mobile devices usually have slow and, more importantly, expensive data links.
We could use this method if the webserver wrote a text file, like JSON or a JS snippet, which is then used to display the image.
However, the nice thing about the code here is that you do not need to provide additional information.
So no race conditions, no new weird states (like in disk-full cases); it just works.
So one basic idea is to not alter the code on the webserver which generates the picture, just do it on the browser side.
This way all you need is a softlink from the web tree to the image, and to make sure the image is updated atomically.
The downside of AJAX is the same-origin policy, so AJAX can only check the HEAD of resources on the host which provided the running JavaScript code.
Greasemonkey or scriptlets can circumvent that, but those cannot be deployed to a broad audience.
So, sadly, foreign resources (images) cannot be efficiently queried to see whether they were updated or not.
On my side, luckily, both the script and the image originate from the same host.
Having said all this, here are the problems with this code:
The code above adds to the delay. First the HEAD is checked, and only if this shows that something has changed is the update done.
It would be nice to do both in one request, such that updating the image does not require an additional round trip.
GET can achieve that with If-Modified-Since, and it works on my side; however, I found no way to properly display the result as an inline image. It might work for some browsers, but not for all.
The code is also way too complex for my taste. You have to deal with possible network timeouts, avoid overwhelming limited bandwidth, try to be friendly to the webserver, stay compatible with as many browsers as possible, and so on.
Also, I would like to get rid of the hack of using a query parameter just to pull an updated image, as this slowly fills/flushes the cache.
Perhaps there is a way, unknown to me, to "re-trigger" an image refresh in the browser?
That way the browser could check with If-Modified-Since directly and update the image.
With JavaScript this could then trigger a .load event or similar.
On my side I do not even need that; all I want is to keep the image somewhat current.
I have not experimented with CANVAS yet. Perhaps somebody has an idea using that.
So my question is: is there any better way (another algorithm) than the one shown above, apart from improving code quality?
From what I understand, you have two sources of information on the server: the image itself and the time of its last update. Your solution polls both channels, and you want push instead, right?
Start here: http://en.wikipedia.org/wiki/Comet_(programming). There should be a simple way to let the server notify the client of a new image URL. If both server and client support WebSockets, that's a shortcut.
However, the simplest solution assumes the image URL doesn't change, and runs
image.src = "";
image.src = url;
using setInterval(), letting the browser deal with the cache and headers.
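A minimal sketch of that approach (the element ID and the five-second interval are arbitrary):

var image = document.getElementById('show0');
setInterval(function () {
  image.src = '';         // drop the current bitmap
  image.src = 'test.jpg'; // re-request; the browser revalidates with If-Modified-Since
}, 5000);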
For support reasons I want to be able for a user to take a screenshot of the current browser window as easy as possible and send it over to the server.
Any (crazy) ideas?
That would appear to be a pretty big security hole in JavaScript if you could do this. Imagine a malicious user installing that code on your site via an XSS attack and then screenshotting all of your daily work. Imagine that happening with your online banking...
However, it is possible to do this sort of thing outside of JavaScript. I developed a Swing application that used screen capture code like this which did a great job of sending an email to the helpdesk with an attached screenshot whenever the user encountered a RuntimeException.
I suppose you could experiment with a signed Java applet (shock! horror! noooooo!) that hung around in the corner. If executed with the appropriate security privileges given at installation it might be coerced into executing that kind of screenshot code.
For convenience, here is the code from the site I linked to:
import java.awt.Dimension;
import java.awt.Rectangle;
import java.awt.Robot;
import java.awt.Toolkit;
import java.awt.image.BufferedImage;
import javax.imageio.ImageIO;
import java.io.File;
...
public void captureScreen(String fileName) throws Exception {
    // grab the full screen into a BufferedImage and write it out as PNG
    Dimension screenSize = Toolkit.getDefaultToolkit().getScreenSize();
    Rectangle screenRectangle = new Rectangle(screenSize);
    Robot robot = new Robot();
    BufferedImage image = robot.createScreenCapture(screenRectangle);
    ImageIO.write(image, "png", new File(fileName));
}
...
Please see the answer shared here for a relatively successful implementation of this:
https://stackoverflow.com/a/6678156/291640
Utilizing:
https://github.com/niklasvh/html2canvas
You could try to render the whole page onto a canvas and save that image back to the server. Have fun :)
A webpage can't do this (or at least, I would be very surprised if it could, in any browser) but a Firefox extension can. See https://developer.mozilla.org/en/Drawing_Graphics_with_Canvas#Rendering_Web_Content_Into_A_Canvas -- when that page says "Chrome privileges" that means an extension can do it, but a web page can't.
Seems to me that support needs (at least) the answers for two questions:
What does the screen look like? and
Why does it look that way?
A screenshot -- a visual -- is very necessary and answers the first question, but it can't answer the second.
As a first attempt, I'd try to send the entire page up to support. The support tech could display that page in his browser (answers the first question); and could also see the current state of the customer's html (helps to answer the second question).
I'd try to send as much of the page as is available to the client JS by way of AJAX or as the payload of a form. I'd also send info not on the page: anything that affects the state of the page, like cookies or session IDs or whatever.
The cust might have a submit-like button to start the process.
I think that would work. Let's see: it needs some CGI somewhere on the server that catches the incoming user page and makes it available to support, maybe by writing a disk file. Then the support person can load (or have loaded automatically) that same page. All the other info (cookies and so on) can be put into the page that support sees.
PLUS: the client JS that handles the submit-button onclick( ) could also include any useful JS variable values!
Hey, this can work! I'm getting psyched :-)
HTH
-- pete
I've seen people do this with two approaches:
set up a separate server for screenshotting and run a bunch of Firefox instances on it; check out these two gems if you're doing it in Ruby: selenium-webdriver and headless
use a hosted solution like http://url2png.com (way easier)
You can also do this with the Fireshot plugin. I use the following code (that I extracted from the API code so I don't need to include the API JS) to make a direct call to the Fireshot object:
var element = document.createElement("FireShotDataElement");
element.setAttribute("Entire", true);
element.setAttribute("Action", 1);
element.setAttribute("Key", "");
element.setAttribute("BASE64Content", "");
element.setAttribute("Data", "C:/Users/jagilber/Downloads/whatev.jpg");
if (typeof(CapturedFrameId) != "undefined")
  element.setAttribute("CapturedFrameId", CapturedFrameId);
document.documentElement.appendChild(element);
var evt = document.createEvent("Events");
evt.initEvent("capturePageEvt", true, false);
element.dispatchEvent(evt);
Note: I don't know if this functionality is only available for the paid version or not.
Perhaps http://html2canvas.hertzen.com/ could be used. Then you can capture the display and then process it.
You might try PhantomJS, a headless browsing toolkit.
http://phantomjs.org/
The following JavaScript example demonstrates basic screenshot functionality:
var page = require('webpage').create();
page.settings.userAgent = 'UltimateBrowser/100';
page.viewportSize = { width: 1200, height: 1200 };
page.clipRect = { top: 0, left: 0, width: 1200, height: 1200 };
page.open('https://google.com/', function () {
  page.render('output.png');
  phantom.exit();
});
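You would then run the script from the command line, e.g. phantomjs screenshot.js, where screenshot.js is whatever you named the file above.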
I understand this post is 5 years old, but for the sake of future visitors I'll add my own solution here, which I think solves the original question without any third-party libraries apart from jQuery.

var pageClone = $('html').clone();
// Make sure that CSS and images load correctly when opening this clone
pageClone.find('head').append("<base href='" + location.href + "' />");
// OPTIONAL: Remove potentially interfering scripts so the page is totally static
pageClone.find('script').remove();
var htmlString = pageClone.html();
You could remove other parts of the DOM you think are unnecessary, such as the support form if it is in a modal window. Or you could choose not to remove scripts if you prefer to maintain some interaction with dynamic controls.
Send that string to the server, either in a hidden field or by AJAX, and then on the server side just attach the whole lot as an HTML file to the support email.
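For example, posting it by AJAX might look like this (the endpoint name is hypothetical):

$.post('/support/page-snapshot', {
  html: htmlString,
  url: location.href
});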
The benefits of this are that you'll get not just a screenshot but the entire scrollable page in its current form, plus you can even inspect and debug the DOM.
Print Screen? Old school and a couple of keypresses, but it works!
This may not work for you, but on IE you can use the snapsie plugin. It doesn't seem to be in development anymore, but the last release is available from the linked site.
I think you need an ActiveX control; without one I can't imagine how this would work. You could force the user to install it first; after installation, the ActiveX control should work on the client side and you can capture the screen.
We are temporarily collecting Ajax states, data in form fields and session information. Then we re-render it at the support desk. Since we test and integrate for all browsers, there are hardly any support cases for display reasons.
Have a look at the red button at the bottom of holidaycheck.
Alternatively there is html2canvas. But it is only applicable to newer browsers, and I've never tried it.
In JavaScript? No. I do work for a security company (sort of NetNanny type stuff) and the only effective way we've found to do screen captures of the user is with a hidden application.
I work on an internal corporate system that has a web front-end using Tomcat.
How can I monitor the rendering time of specific pages in a browser (IE6)?
I would like to be able to record the results in a log file (separate log file or the Tomcat access log).
EDIT: Ideally, I need to monitor the rendering on the clients accessing the pages.
The Navigation Timing API is available in modern browsers (IE9+) except Safari:
function onLoad() {
  var now = new Date().getTime();
  var page_load_time = now - performance.timing.navigationStart;
  console.log("User-perceived page loading time: " + page_load_time);
}
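To get that number into a server-side log without extra infrastructure, one common trick is to request a tiny resource with the timing in the query string, so the value shows up in the access log (the /log path and the .gif name here are made up):

window.addEventListener('load', function () {
  var ms = new Date().getTime() - performance.timing.navigationStart;
  // the request itself is the log entry; the server just serves a 1x1 gif
  new Image().src = '/log/timing.gif?ms=' + ms +
      '&page=' + encodeURIComponent(location.pathname);
});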
If the browser has JavaScript enabled, one of the things you could do is write an inline script and send it first thing in your HTML. The script would do two things:
Record the current system time in a JS variable (if you're lucky, this roughly corresponds to the start of page rendering).
Attach a JS function to the page onload event. This function queries the current system time once again, subtracts the start time from step 1, and sends the difference to the server along with the page location (or some unique ID you could insert into the inline script dynamically on your server).
<script language="JavaScript">
var renderStart = new Date().getTime();
window.onload=function() {
var elapsed = new Date().getTime()-renderStart;
// send the info to the server
alert('Rendered in ' + elapsed + 'ms');
}
</script>
... usual HTML starts here ...
You'd need to make sure that the page doesn’t override onload later in the code, but adds to the event handlers list instead.
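A defensive way to do that is to append a handler instead of assigning window.onload directly. A small sketch that also covers old IE (attachEvent) alongside standards-compliant browsers:

function addLoadHandler(fn) {
  if (window.addEventListener) {
    window.addEventListener('load', fn, false); // standards browsers
  } else if (window.attachEvent) {
    window.attachEvent('onload', fn); // IE6-8
  }
}

addLoadHandler(function () {
  var elapsed = new Date().getTime() - renderStart;
  // send the info to the server instead of alerting
  alert('Rendered in ' + elapsed + 'ms');
});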
As far as non-invasive techniques are concerned, Hammerhead measures complete load time (including JavaScript execution), albeit in Firefox only.
I've seen usable results when a JavaScript snippet could be added globally to measure the start and end of each page load operation.
Have a look at Selenium: they offer a remote control that can automatically start different browsers (e.g. IE6), load pages, and test for specific content on the page. At the end, reports are generated that also show the rendering times.
Since others are posting answers that use other browsers, I guess I will too. Chrome has a very detailed profiling system that breaks down the rendering time of the page and shows the time it took for each step along the way.
As for IE, you might want to consider writing a plugin. There seem to be few tools like this on the market. Maybe you could sell it.
On Firefox you can use Firebug to monitor load time. With the YSlow plugin you can even get recommendations how to improve the performance.