I need a way, or tools, to measure the actual perceived rendering time: how long the browser takes to render the entire page to the user. Any suggestions?
The reason I ask is that Firebug and YSlow only report the DOMContentLoaded and onLoad times.
For instance, my application reports 547ms (onLoad: 621ms) for the contents. But the actual content is rendered in around 3 seconds. I know this because I counted 1, 2, 3 slowly from the moment I hit Enter in the browser's URL field to the moment the content appeared in front of my eyes. So I know that neither 547ms nor 621ms represents the actual time it takes for the page to load.
Not sure if this helps, but my application:

1. renders JSON data on the server side and saves it as a JavaScript variable, along with the rest of the page's HTML, before the server returns the entire page to the browser;
2. loads jQuery 1.5 and jQuery Templates;
3. grabs the JSON data (with jQuery code) from the variable defined in step 1;
4. uses jQuery Templates to render the page.
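For reference, a minimal sketch of that flow; the variable, template, and container names (pageData, itemTmpl, content) are hypothetical, as is the data.

```html
<!-- Step 1: JSON embedded by the server alongside the page's HTML -->
<script>
  var pageData = [{ "title": "First item" }, { "title": "Second item" }];
</script>

<!-- The jQuery template rendered in step 4 -->
<script id="itemTmpl" type="text/x-jquery-tmpl">
  <li>${title}</li>
</script>

<ul id="content"></ul>

<!-- Step 2: jQuery 1.5 and the jQuery Templates plugin -->
<script src="jquery-1.5.js"></script>
<script src="jquery.tmpl.js"></script>
<script>
  $(function () {
    // Steps 3 and 4: grab the embedded data and render it client-side
    $("#itemTmpl").tmpl(pageData).appendTo("#content");
  });
</script>
```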
Technically, no Ajax is involved here, and the images on the page are all cached; I don't see Firebug download any of them.
[Edit]
What I'm trying to figure out is this: between the Firebug-reported onLoad time (621ms in my case) and the moment the page is completely loaded to my eyes (at least 3 seconds later), what happened during the 2.4s in between? What took place there? Is the browser doing something? Is something blocking? The network? What is it?
Google Chrome has excellent auditing built in. Your results will be skewed because it's one of the fastest browsers right now, but it will give you exact measurements of how long it takes for Chrome to render. =)
Related
I'm having a situation in which I want to allow the user to download a PDF. The generation of this PDF can take a couple of seconds, and I don't want a webserver thread to wait for the generation of the PDF, since that means the thread isn't available to handle other incoming requests.
So, what I would like to do is introduce the following URL patterns:
/request_download which invokes a background job to generate the PDF;
/document.pdf which will serve the PDF once it is generated
The use case is as follows.
A user clicks 'Download PDF'. This invokes a piece of JavaScript that shows a spinner, makes a request to /request_download and receives an HTTP 202 Accepted, indicating that the request was accepted and a background job was created. The JS should then poll the /request_download URL periodically until it gets an HTTP 201 Created, indicating that the PDF has been created. A Location header is included, which the JS uses to forward the client to /document.pdf. This has to happen in a new window (or tab; at the least it shouldn't replace the current page). The level of expertise of our users is very low, so if I forwarded the current page to that URL, they might not know how to get back to the website.
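A minimal sketch of that flow, assuming jQuery; showSpinner and hideSpinner are hypothetical UI helpers:

```javascript
function requestDownload() {
  showSpinner();                        // hypothetical UI helper
  pollForPdf();
}

function pollForPdf() {
  $.ajax({
    url: '/request_download',
    complete: function (xhr) {
      if (xhr.status === 201) {
        hideSpinner();                  // hypothetical UI helper
        // The Location header points at the finished /document.pdf.
        // This is the call that popup blockers reject (see below).
        window.open(xhr.getResponseHeader('Location'));
      } else {
        // 202 Accepted: the PDF is still being generated, poll again.
        setTimeout(pollForPdf, 1000);
      }
    }
  });
}
```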
The problem
Now, window.open works fine if it is invoked by the user via a click event. However, I want to invoke window.open in a callback function through setInterval as soon as I see that 201 Created response. Browsers won't like that and will interpret it as a popup, so it gets caught by popup blockers in IE8-10 as well as Chrome. Which makes sense.
I also can't open a new window and invoke the JavaScript over there. In that case, users would see a blank page with a spinner. If that page then forwards to /document.pdf, IE will show its yellow warning bar saying it prevented files from being downloaded. Chrome and Firefox handle this without any problems, but a large percentage of our users are on IE.
What I've tried, and didn't work
window.open(location)
Setting the src of an iframe to the retrieved location
Manually adding an <a> tag to the document body and trying to click it using javascript
Adding an <a> tag to the document body and invoking a MouseEvent on it
Partially works: opening a new window on user click and storing a reference to it, then performing the asynchronous requests and, upon completion, setting the location of the earlier-opened window. But IE will block this too and say it prevented files from being downloaded. So still not a fully working solution.
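A sketch of that partially working approach in plain JavaScript; pollForPdf stands in for the polling logic above, here taking a completion callback:

```javascript
document.getElementById('download').onclick = function () {
  // Opening the window synchronously inside the click handler
  // keeps popup blockers happy.
  var win = window.open('about:blank');
  pollForPdf(function (location) {
    // Redirect the already-open window once the PDF is ready.
    // IE still flags this as a blocked download, as described above.
    win.location = location;
  });
};
```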
I'm straight out of ideas on how I can make this work and decided to ask the Internet for help. I'm hoping you can help me with this, or recognise the problem. Thanks in advance!
I have a demo site built in WordPress.
In the head of the template I injected a script tag that loads a special script.
This script does a document.write to load another script from a server.
The script from the server in turn can do other document.write's to load up to as many as 10.
It is extremely important for these to load like this, as these scripts make changes to the page before it is shown, which eliminates any visible change of content for the visitor.
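A sketch of the chain, with placeholder URLs; the closing tag is escaped so it doesn't terminate the surrounding script element:

```html
<!-- Injected into the <head> of the template -->
<script>
  // document.write keeps loading synchronous: parsing is blocked
  // until the written script has been fetched and executed, so the
  // visitor never sees the unmodified content.
  document.write('<script src="https://example.com/loader.js"><\/script>');
</script>
```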
Problem:
I am using a visitor experience recording service that loads my page inside an iframe and overlays its mouse actions and heat maps.
When viewing the heat maps built in Flash, there is an odd behavior that, I presume, causes my script to load asynchronously: the page gets cleared, because the second document.write runs after DOM ready and thus clears all content (a document.write after parsing has finished implicitly opens a new document, replacing the existing one).
I am presuming that, for some reason, the page continues to load while my script runs its series of document.write calls in this particular case.
The only difference from the mouse-action recordings, which work fine, is the presence of the Flash heat map.
I have Googled this left and right and have found some similar reports, some also mentioning the presence of Flash, but no clue that would point me towards a solution.
Has anyone seen this behavior before and found a solution?
Please do not ask for links: I can only give you the link to the site, which shows no issues outside the recording service's iframe, and I cannot give out my credentials for that service.
Note: asynchronous loading of my script would cause the default content to show for a second or two before being replaced. This would be visible to the visitor and is thus unacceptable.
At first this might seem an odd question, but here's my problem. I'm developing a website that calculates the div positions on window.load, because it has some dynamic scroll-event highlighting (DOM ready is the wrong choice for this: images and content aren't loaded yet, so the calculated div positions are incorrect once the page has fully rendered). The local assets run perfectly and are optimised for performance, but my problem is that the client wants social media embeds, for instance a Twitter follow and a Facebook like button. Twitter seems to render pretty quickly, but Facebook takes so long that you can literally wait 20-30 seconds before the window.load event fires, which means my navigation then lags and doesn't work properly. I don't know if it's even possible, but is there a way to determine when all local JavaScript files are loaded (these are included before the closing body tag)?
Probably. All JavaScript in a page is executed in the order in which the browser encounters it. So when you add a <script> element as the last element inside the <body> element (i.e. at the bottom of the page), this code will run after all other script code has been executed. Also, at that time, the DOM will be finished (no further HTML to process), except for things that callbacks might still do (timers, onload handlers).
So what you can try is to put a <script> element between your code and the code from Facebook. But that means your DOM won't be ready.
A better solution is probably to start loading the Facebook code in the background inside of onload. That means all the rest of the page is there and Facebook can take its time.
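A sketch of that approach; the SDK URL is the one Facebook documented at the time (an assumption, so use whatever embed snippet Facebook hands you):

```javascript
window.onload = function () {
  // Inject the Facebook SDK only after window.load has fired,
  // so the slow third-party fetch no longer delays that event.
  var js = document.createElement('script');
  js.async = true;
  js.src = '//connect.facebook.net/en_US/all.js#xfbml=1';
  document.body.appendChild(js);
};
```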
I have a content script running on every page. It updates the HTML of the page, which seems to take more time on longer pages (the script walks the DOM a lot of times). On longer pages the script can take up to 10-20 seconds to finish, and it seems that when it takes too long, Chrome stops the script, because on these pages I see only part of the page changed.
The weird part is that when I add several alerts somewhere in the code, dividing the runtime into several parts, the script runs perfectly fine and changes the entire page. However, when the alerts are removed, the script is again stopped prematurely, and only part of the page gets changed.
My only conclusion is that Chrome stops scripts that run too long, so my question is: is that true? And if so, what can be done about it (besides using annoying pop-up alerts)?
I have another theory: the script stops because somehow there is a conflict when simultaneous commands try to change the DOM. Does this make more sense?
More details about the architecture: The background page gets a message from the content script. This message includes a callback function (the function that actually changes the HTML). This function is then called from the background page several times, relative to the length of the page. If I insert an alert inside the callback function, every call is run without problems. If however I remove it, only the first call from the background page is made, and further calls don't do anything (although the background page code keeps running).
Yes, and usually it asks the user whether they want to continue waiting or to kill the page.
Chrome has some limits in place: if the JavaScript takes a long time (and hence blocks the main thread), it will notify you that the page is taking too much time.
You have two options:
Can you offload some processing to an HTML5 worker and post a message back to the page when processing is complete? (See the sketch after this list.)
Can you offload some processing to the background page and send a message request back to the content script when processing is complete?
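A sketch of the first option; heavyComputation and applyToDom are hypothetical, and note that a worker cannot touch the DOM, so only the computation moves off the main thread:

```javascript
// worker.js (hypothetical file)
self.onmessage = function (e) {
  var result = heavyComputation(e.data); // hypothetical pure function
  self.postMessage(result);
};

// Main page / content script
var worker = new Worker('worker.js');
worker.onmessage = function (e) {
  applyToDom(e.data); // hypothetical: DOM updates stay on the main thread
};
worker.postMessage(document.body.innerHTML);
```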
I found out what the problem was. I was using the callback provided with the message sent from the content script to the background page. This callback can be run only once, because only one response is allowed for each message sent to the background page.
Some sort of bug in Chrome made multiple responses work when I added an alert inside the callback function.
The problem was solved by breaking down the content of the message into several messages.
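A sketch of the fix, using the chrome.runtime messaging API (an assumption; older extensions used chrome.extension.sendRequest): each chunk gets its own message, because the response callback may only fire once per message.

```javascript
// Content script: one message (and thus one response) per chunk.
function processChunk(chunkId) {
  chrome.runtime.sendMessage({ chunk: chunkId }, function (response) {
    applyChanges(response); // hypothetical DOM-updating function
  });
}

// Background page:
chrome.runtime.onMessage.addListener(function (msg, sender, sendResponse) {
  // sendResponse can only be called once per incoming message.
  sendResponse(computeChangesFor(msg.chunk)); // hypothetical
});
```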
I am looking for a way to, given a URL, get the source of a webpage back after the JavaScript has been run on it. For example:
I have a webpage with a <div>.
On loading the page, some JavaScript populates the div.
Viewing the source of the page through a browser will not give the information which is within the div.
As far as I know, in order for the browser to render the page the div must have been filled with (X|D)HTML which would mean that the source of the page after being rendered is still just nested markup, so theoretically there should be a "final" version of the page source.
I have considered using a rendering engine like WebKit or Gecko and somehow adapting these to do this, however this is a fairly large task and I don't really want to duplicate something which has already been done. Does anyone know of a way of performing this task?
Regards.
Update: I am aiming to use Selenium (as mentioned in the comments to the accepted answer) to do this automatically for several pages. My project is a web spider which by design needs to target a number of pages in which the content I am aiming to reach is not available until after the JavaScript has populated everything.
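For what it's worth, a sketch using Selenium's Node bindings (selenium-webdriver, an assumption; any Selenium binding would do) to dump the DOM after scripts have run:

```javascript
const { Builder } = require('selenium-webdriver');

(async function () {
  const driver = await new Builder().forBrowser('firefox').build();
  try {
    await driver.get('http://example.com/'); // placeholder URL
    // Serialize the live DOM, i.e. the markup after JavaScript ran.
    const source = await driver.executeScript(
      'return document.documentElement.outerHTML;'
    );
    console.log(source);
  } finally {
    await driver.quit();
  }
})();
```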
Firefox add-ons such as the Web Developer toolbar or Firebug have options like 'View generated source'.
As far as timing goes, just about the only option you have is a snippet of JavaScript code. You could set a start time as early as possible in the page load, and check again when the page is complete (either at DOM ready or when the page has completely downloaded). It's going to be highly variable, however; and if you are trying to time it in order to improve the speed (which is good to know, and to do), just getting Firebug + YSlow would be far more useful.
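A minimal sketch of such a snippet: record a start time as early as possible, then measure again at DOM ready and at full load (assuming a browser that supports DOMContentLoaded).

```javascript
// As the first script in the <head>:
var start = new Date().getTime();

document.addEventListener('DOMContentLoaded', function () {
  console.log('DOM ready after ' + (new Date().getTime() - start) + 'ms');
});

window.onload = function () {
  console.log('Fully loaded after ' + (new Date().getTime() - start) + 'ms');
};
```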
Within Firefox you can get the final rendered div by waiting for the browser to finish rendering, then pressing Ctrl-A to select all content on the page and finally selecting "Show selection source" from the right-click menu.
This shows you the manipulated/populated DOM code of the page.