I have a question as to how Google's async analytics tracker works. The following code is used to initialize a command array:
<script type="text/javascript">
var _gaq = _gaq || [];
_gaq.push(
['_setAccount', 'UA-xxxxxxxx-x'],
['_trackPageview']
);
</script>
Now, this is a standard array that gets replaced once the GA code is loaded, and it is used as a sort of queue that stores your clicks.
My confusion lies in wondering how these clicks could possibly be persisted if a user clicks a link that causes a reload (prior to the GA JavaScript being loaded). If the GA code hasn't captured that push on the _gaq object and the user then clicks a link and goes to a new page, isn't this array just re-initialized each time?
Isn't it true that a JavaScript variable will not persist across requests that cause a refresh? If this is the case, haven't we then lost that original click that caused the page reload?
Any explanation is greatly appreciated.
Yes, you're right that if the user clicks away from the site before ga.js has loaded and has executed the __utm.gif request to Google's servers, then the _gaq array will not be tracked and that information is gone forever. But this version of the code still provides many benefits over the older synchronous code.
First, the loading of ga.js using this method is not blocking.
Cleverly, the loading of ga.js is injected indirectly via JavaScript, rather than through a hard-coded <script> tag. As per Google Code Blog,
The second half of the snippet provides the logic that loads the tracking code in parallel with other scripts on the page. It executes an anonymous function that dynamically creates a <script> element and sets the source with the proper protocol. As a result, most browsers will load the tracking code in parallel with other scripts on the page, thus reducing the web page load time.
This means that the loading of ga.js occurs in a non-blocking way in most modern browsers (and, as a bonus, the async="true" attribute, currently supported in FF 4+, IE10p2+, Chrome 12+ and Safari 5.1+, formalizes this asynchronous loading). This mildly reduces load time and mildly reduces the likelihood that clicks will occur before ga.js has loaded.
The benefit of queuing up the _gaq array in advance is to prevent race conditions; previously, if you tried to make GA calls before ga.js loaded (say, Event Tracking a video play), it would throw an error and the Event call would be lost, never to be recovered. This way, as long as ga.js eventually loads, the _gaq array is ready to hand it all of the queued calls at load time.
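To make the queueing mechanism concrete, here is a minimal sketch (my own simplification, not Google's actual code) of how a library can drain a plain-array command queue and then swap in an object that executes later pushes immediately:
// Before the library loads, _gaq is a plain array, so push() just stores commands:
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-xxxxxxxx-x']); // queued, not executed yet

// Simplified version of what a library like ga.js does once it arrives:
function processCommand(command) {
  // a real implementation would look up the method named in command[0]
  console.log('executing', command[0], command.slice(1));
}

(function drainQueue() {
  // run everything that was queued before the library loaded
  for (var i = 0; i < _gaq.length; i++) {
    processCommand(_gaq[i]);
  }
  // swap the array for an object whose push() executes immediately,
  // so later _gaq.push(...) calls are no longer just queued
  _gaq = {
    push: function () {
      for (var j = 0; j < arguments.length; j++) {
        processCommand(arguments[j]);
      }
    }
  };
})();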
Yep. JavaScript contexts are thrown away on page reload, so if the user leaves the page before ga.js loads, those hits are lost. The advantage of the async version of GA is that it can be placed higher in the page, which makes it much more likely that ga.js loads before the user leaves.
Related
We are using Angular for our website. As not all pages have been ported to Angular, we implemented a hybrid approach:
Every request goes to Angular first. Once it has loaded, it checks whether the route exists.
If not, the HTML page is fetched from the backend.
The html element in the DOM (i.e. the complete page) is replaced with the response's body:
ngOnInit() {
  // Fetch the legacy (backend-rendered) page for the current URL
  // and replace the whole document with it.
  this.railsService.fetchRailsPage(this.router.url).subscribe(
    (response) => this.replaceDOM(response),
    (errorResponse) => this.replaceDOM(errorResponse.error)
  );
}

private replaceDOM(newContent: string) {
  // Swap out the entire DOM with the fetched HTML.
  document.open();
  document.write(newContent);
  document.close();
}
Since all links in the old pages are plain old hrefs (not Angular's routerLinks), once the user navigates away, the page is reloaded and Angular kicks in again.
So far it works, but I noticed that sometimes the DOM is not replaced with the response body.
Debugging brought us to the conclusion that Google Tag Manager could be the issue. It overwrites document.write() and a lot of other default JavaScript functions.
Why is that? And how can this be prevented, so that we get the default version of e.g. document.write()?
Seconding Alan here.
Please make sure you're running two tests:
Block GTM with the request blocking function of the dev tools and try reproducing the issue.
Try creating an empty GTM container, loading it on page and reproduce the issue.
If the first test shows that the issue persists with GTM blocked, then it's not GTM.
If the second test shows that the issue is solved, then it's not about GTM itself but about the logic used in its configuration.
If anything, I would first make sure no custom code in GTM additionally overrides document.write (which I've never seen before, but it's definitely possible). Then I would broadly audit all custom scripts deployed by GTM. After that, I would try pausing all the Element Visibility triggers, if any are deployed, and see if that helps.
GTM would likely aim to override document.write in order to watch DOM changes, but it does so gently, adding a bit of tracking without changing the essence of the function. It's highly unlikely that GTM's core logic would conflict with Angular.
// UPD: just had a chat with a colleague on Measure. It looks like the only scenario in which GTM overrides document.write is when there are Custom HTML tags with the option to "support document.write" enabled. The Element Visibility trigger uses mutation and intersection observers rather than listening to document.write calls.
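If you do need guaranteed access to the browser's original document.write, one defensive option (my own suggestion, not something GTM documents) is to capture references to the native functions in a small script placed before the GTM snippet, and use those references in your own code; the __nativeDocWrite names below are made up:
<script>
  // Runs before the GTM snippet, so these still point at the native implementations.
  window.__nativeDocWrite = document.write.bind(document);
  window.__nativeDocOpen = document.open.bind(document);
  window.__nativeDocClose = document.close.bind(document);
</script>
The Angular service could then call window.__nativeDocOpen(), window.__nativeDocWrite(newContent) and window.__nativeDocClose() instead of the possibly wrapped document.* versions.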
Our site has an asynchronously loaded application.js:
<script async="async" src="//...application-123456.js"></script>
Additionally, we have a lot of third party scripts that (1) are asynchronously loaded, and (2) create in turn an async <script> tag where a bigger script is called.
Just to give an example, one of these third party scripts is Google's gpt.js (you can have a quick look to understand how it works).
Our problem is that, while all the third party scripts load asynchronously as expected, the application.js one gets stuck in the "queuing" status for more than 4 seconds.
I tried to change the script and make it load like the third party ones: create a <script> element, set the "src" attribute and load it:
<script async>
(function() {
  var elem = document.createElement('script');
  elem.src = 'http://...application-123456.js';
  elem.async = true;
  elem.type = 'text/javascript';
  var scpt = document.getElementsByTagName('script')[0];
  scpt.parentNode.insertBefore(elem, scpt);
})();
</script>
but nothing changed.
Then I studied the network cascade on a page of our site that contains almost no images, and I saw that the queuing time was almost zero. I tried the same experiment on pages with different amounts of images and saw that the queuing time increases proportionally on pages with more images.
I read this in Chrome's network cascade documentation:
QUEUING TIME: The request was postponed by the rendering engine because it's considered lower priority than critical resources (such as scripts/styles). This often happens with images.
Is it possible that for some reason the browser is marking our application.js as "lower priority"? I looked on the web and it seems that nobody has experienced problems with the queuing time. Does anybody have an idea?
Thank you very much.
Browsers use a pre-loader to improve network utilisation. This article explains the concept.
In the Chrome Documentation you linked to above, it says the following about queuing:
If a request is queued, it indicates that:
The request was postponed by the rendering engine because it's considered lower priority than critical resources (such as scripts/styles). This often happens with images.
The request was put on hold to wait for an unavailable TCP socket that's about to free up.
The request was put on hold because the browser only allows six TCP connections per origin on HTTP 1.
Time was spent making disk cache entries (typically very quick).
The pre-loader would have retrieved the lightweight resources quickly, such as the styles and scripts, and then queued up the images because, as the criteria above suggest, only six TCP connections are permitted per origin. This would explain the delay in the total response time.
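If the goal is to have application.js scheduled ahead of the images, one thing worth testing (my suggestion, not something from the quoted documentation) is an explicit preload hint in the <head>, which tells the browser's pre-loader to fetch the script early and at high priority:
<link rel="preload" href="//...application-123456.js" as="script">
<script async src="//...application-123456.js"></script>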
I must be doing something wrong here. I'm trying to use Google Analytics to track hits on a form hosted by InfusionSoft (our CRM/etc provider) on their domain. We want to track hits under a separate domain in GA.
here is the form in question: our order form
I have tried several forms for the GA code -- first the async snippet, then the 'traditional' snippet, now back to the async. Here is the async code I'm trying to use (inside the <body> tags):
Near the top of the page
<script type="text/javascript">
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-XXXXX-1']);
_gaq.push(['_setDomainName', 'oursite.com']);
_gaq.push(['_setAllowLinker', true]);
_gaq.push(['_trackPageview', '/saleform/67']);
</script>
Further down the page
<script src='https://ssl.google-analytics.com/ga.js' type='text/javascript'></script>
Relevant information
Using Firebug or Chrome's dev tools, I don't see any errors coming from GA when using this code.
Tracking is already working on oursite.com, but does not receive data from this page.
In Firebug's console, _gaq and _gat seem to be working fine (no errors, they appear as objects with a lot inside)
__utm.gif is NOT being requested. I know this is a Bad Thing but am unsure what to do about it.
Using a Firefox extension to view cookies, I do NOT see any cookies being set for the domain specified in the above code (or the site hosting the form). edit: after including _setAllowLinker on our site, cookies DO seem to be working (showing up on the page)
Additionally, I have tried manually firing the _gaq.push(['_trackPageview', '/saleform/67']); method from the JS console (with no luck--the page does not show up in GA).
Please let me know if there's any pertinent information missing from this post and I'll be happy to update it. Thanks in advance for any insight you can offer.
After much fiddling with the code on the various pages, I've found a solution that works to my satisfaction. I'll detail the three things that seemed to make the difference in this case.
Hope this helps someone out there, and don't forget to vote up if it helped you!
Set up the normal GA code on our own website, including _gaq.push(['_setAllowLinker', true]);
On the order forms (hosted by InfusionSoft in our case) set up the GA code as instructed by their website, but include _gaq.push(['_setDomainName', 'none']); (see the sketch after this list for points 1 and 2)
Important: add onClick handlers in JavaScript which call _gaq.push(['_link', 'http://your.link.tld/etc']);
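To make points 1 and 2 concrete, the relevant _gaq calls look roughly like this (the account ID and the /saleform/67 path are placeholders taken from the question above):
// Point 1: on our own website (the normal GA code plus the allowLinker flag)
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-XXXXX-1']);
_gaq.push(['_setAllowLinker', true]);
_gaq.push(['_trackPageview']);

// Point 2: on the InfusionSoft-hosted order form
var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-XXXXX-1']);
_gaq.push(['_setDomainName', 'none']);
_gaq.push(['_trackPageview', '/saleform/67']);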
For point 3, I used a snippet of jQuery to identify links to order forms on the webpage and bind the GA function call with click() -- code for that is below.
<script>
jQuery(document).ready(function() {
  // select every link that points at one of the order forms
  jQuery("a[href*='/sale-form'], " +
         "a[href*='/another-order-form-link'], " +
         "a[href*='your_site.infusionsoft.com/saleform/']").click(function() {
    // _link appends the GA cookies to the URL and performs the navigation itself
    _gaq.push(['_link', this.href]);
    return false;
  });
});
</script>
Explanation of code:
wait for the page to be ready
search for any links with an href containing your search terms (more on that here)
bind the _gaq.push(['_link', this.href]); function call to the onClick event handler for any links that were found in the previous step
Additional note: if you include this jQuery code, you'll have to have the jQuery library loaded. Also, obviously remove the <script></script> tags if you're including this in a .js file.
I have a number of tracking scripts and web services installed on my website, and I noticed that when one of the services goes down, the page still tries to call the external JavaScript file hosted on a different server. In Firefox, Chrome and other new browsers, there don't seem to be any issues when one of the services goes down. However, in IE7 and IE8, my pages don't load all the way and time out before everything is displayed. Is there any way to add a timeout on these JavaScript calls to prevent them from breaking my pages when they go down?
You can load them dynamically after page load with JS. If the JS files are on a different server, the browser will still show a "browser busy" indicator when you do that, but the original page will load.
If you can fetch the JS from your own site, you can load it with XMLHttpRequest after page load (or with your favorite JS library's helpers, e.g. jQuery's $.ajax(...)) and then eval it. This way the fetching itself won't show the browser-busy indicator.
To fetch the JS from your own site, you can download it from your tracking provider (which won't be officially supported but usually works) - just remember to refetch new versions every once in a while - or you can create a "forwarding" service on your own site that fetches it from the tracking provider and caches it locally for a while. This way your JS won't be in danger of staleness.
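A minimal sketch of the XMLHttpRequest-plus-eval approach described above; the /js/tracker.js path is just a stand-in for wherever you host the same-origin copy of the vendor script:
window.onload = function() {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', '/js/tracker.js', true); // same-origin copy of the tracking script
  xhr.onreadystatechange = function() {
    if (xhr.readyState === 4 && xhr.status === 200) {
      eval(xhr.responseText); // run the tracking code once it has arrived
    }
  };
  xhr.send();
};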
Steve Souders has more information about deferred loading of scripts and browser-busy indicators.
Try adding defer="defer"
The defer attribute gives a hint to the browser that the script does not create any content, so the browser can optionally defer interpreting the script. This can improve performance by delaying execution of scripts until after the body content is parsed and rendered.
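For example (the URL is a placeholder for one of the third-party tracking scripts):
<script defer="defer" type="text/javascript" src="http://tracking.example.com/tracker.js"></script>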
Edit
This will prevent those scripts from running until the page loads:
function loadjs(filename) {
  var fileref = document.createElement('script');
  fileref.setAttribute("type", "text/javascript");
  fileref.setAttribute("src", filename);
  // the element must actually be appended to the document or it will never load
  document.getElementsByTagName("head")[0].appendChild(fileref);
}
window.onload = function() {
  loadjs("http://path.to.js");
  loadjs("http://path.to2.js");
  // ...
}
If you need to load external scripts and you want to enforce a timeout limit, to avoid having a busy indicator running for too long, you can use setTimeout() with window.stop() or, for IE, its equivalent:
http://forums.devshed.com/html-programming-1/does-window-stop-work-in-ie-1311.html
var abort_load = function() {
  if (navigator.appName == "Microsoft Internet Explorer") {
    // IE doesn't implement window.stop(); execCommand('Stop') is its equivalent
    window.document.execCommand('Stop');
  } else {
    window.stop();
  }
};

/**
 * Ensure the browser gives up trying to load the JS after 3 seconds.
 */
setTimeout(abort_load, 3000);
Note that window.stop() is the equivalent of the user clicking the stop button on their browser. So typically you'd only want to call setTimeout() after page load, to ensure you don't interrupt the browser while it's still downloading images, css and so on.
This should be combined with the suggestions made by orip, namely to load the scripts dynamically, in order to avoid the worst case of a server that never responds, resulting in a "browser busy" indicator that's active until the browser's timeout (which is often over a minute). With window.stop() in a timer, you effectively specify how long the browser can try to load the script.
Also note that browsers don't interpret setTimeout()'s interval very precisely, so round up when deciding how much time to allow for loading a script.
Also, one counter-indication to using window.stop() is if your page does things like scrolling to a certain position via JS. You might be willing to live with that, but in any case you can make the stop() conditional on NOT having already loaded the content you expected. For example, if your external JS defines a variable foo, you could do:
var abort_load = function() {
  // only stop if the external script hasn't defined the expected variable yet
  if (typeof(foo) == "undefined") {
    if (navigator.appName == "Microsoft Internet Explorer") {
      window.document.execCommand('Stop');
    } else {
      window.stop();
    }
  }
};
This way, in the happy path case (scripts do load within timeout interval), you don't actually invoke window.stop().
I work on an internal corporate system that has a web front-end using Tomcat.
How can I monitor the rendering time of specific pages in a browser (IE6)?
I would like to be able to record the results in a log file (separate log file or the Tomcat access log).
EDIT: Ideally, I need to monitor the rendering on the clients accessing the pages.
The Navigation Timing API is available in modern browsers (IE9+) except Safari:
function onLoad() {
  // call this from the page's load event, e.g. <body onload="onLoad()">
  var now = new Date().getTime();
  var page_load_time = now - performance.timing.navigationStart;
  console.log("User-perceived page loading time: " + page_load_time);
}
If the browser has JavaScript enabled, one of the things you could do is write an inline script and send it as the first thing in your HTML. The script would do two things:
Record the current system time in a JS variable (if you're lucky, this roughly corresponds to the page rendering start time).
Attach a JS function to the page's onload event. This function will then query the current system time once again, subtract the start time from step 1, and send the result to the server along with the page location (or some unique ID you could insert into the inline script dynamically on your server).
<script language="JavaScript">
var renderStart = new Date().getTime();
window.onload=function() {
var elapsed = new Date().getTime()-renderStart;
// send the info to the server
alert('Rendered in ' + elapsed + 'ms');
}
</script>
... usual HTML starts here ...
You'd need to make sure that the page doesn’t override onload later in the code, but adds to the event handlers list instead.
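To get the number into a server-side log rather than an alert, a common trick is to request a tiny image from your own server with the timing in the query string, replacing the onload handler in the snippet above; the /log-render-time endpoint below is made up, so point it at whatever your Tomcat app exposes:
window.onload = function() {
  var elapsed = new Date().getTime() - renderStart;
  // fire-and-forget image request that the server can record in its access log
  new Image().src = '/log-render-time?ms=' + elapsed +
      '&page=' + encodeURIComponent(window.location.pathname);
};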
As far as non-invasive techniques are concerned, Hammerhead measures complete load time (including JavaScript execution), albeit in Firefox only.
I've seen usable results when a JavaScript snippet could be added globally to measure the start and end of each page load operation.
Have a look at Selenium - they offer a remote control that can automatically start different browsers (e.g. IE6), load pages, test for specific content on the page. At the end reports are generated that also show the rendering times.
Since others are posting answers that use other browsers, I guess I will too. Chrome has a very detailed profiling system that breaks down the rendering time of the page and shows the time it took for each step along the way.
As for IE, you might want to consider writing a plugin. There seem to be few tools like this on the market. Maybe you could sell it.
On Firefox you can use Firebug to monitor load time. With the YSlow plugin you can even get recommendations on how to improve performance.