I have a page I'm working on that is just a redirect page: it collects some browser data before sending visitors off to an external site. It's a blank white HTML page with just the Google Analytics code, and then it redirects to Kickstarter. My problem is that I don't know when I can redirect. I want to know when the data has been sent to their servers so I can redirect safely. I can see how to get callback data for custom tracking, but I want all the browser data.
I see this code here: https://developers.google.com/analytics/devguides/collection/analyticsjs/advanced#hitCallback
But that seems to cover only custom page-view sends. Is there any way to hook a general onload event for Google Analytics?
UPDATE: Some people are finding my question hard to understand. I'll try to make this simpler, although I figured this was almost as simple as I could make it.
I'm doing nothing custom with Google Analytics. I just want to know when any page that has the Google Analytics tracking code (the stuff you copy and paste) has loaded it and sent the data up to Google. So, in theory, something like ga.onload(function () { ... }). Hope that helps?
Oscar,
What's difficult to picture in your question is the navigation flow and where to pinpoint the issue:
A page redirects to another page to get some browser data before sending visitors off to an external site.
Google recommends placing the tracking code on the redirecting page as well as on the landing page. This way, the redirecting page will capture the actual referrer information for your reports.
Fundamentally, whenever the user hits a page with the tracking script, a request is sent to Google notifying it of the visit. However, some browsers may actually redirect before the JavaScript call from the code can be made.
Possible solutions:
Implement your own counter inside the redirect script, separated from GA
Implement an event before the redirect happens
Implement an event on the redirect page with the useBeacon parameter
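A minimal sketch of the second option, assuming the standard analytics.js snippet is already on the page. The helper name, the destination URL, and the 1-second fallback are my own placeholders, not a GA API; the hit-sending and navigation steps are passed in as functions so the redirect fires exactly once, whether GA confirms the hit or the fallback timer wins:

```javascript
// Redirect once the hit is confirmed via hitCallback, or after a
// timeout fallback so a blocked analytics.js never strands the visitor.
function redirectWhenSent(sendHit, navigate, timeoutMs) {
  var done = false;
  function go() {
    if (done) return; // make sure we navigate exactly once
    done = true;
    navigate();
  }
  sendHit(go);                        // pass go() as the hit callback
  setTimeout(go, timeoutMs || 1000);  // fallback if GA is blocked or slow
  return go;
}

// Browser usage (destination URL is a placeholder):
// redirectWhenSent(
//   function (cb) { ga('send', 'pageview', { hitCallback: cb }); },
//   function () { window.location = 'https://www.kickstarter.com/'; },
//   1000
// );
```

The guard matters because both paths (hitCallback and the timer) will eventually call `go()`; without it the redirect could fire twice.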
References:
https://support.google.com/analytics/answer/1009614?hl=en
track a redirect page with google analytics
My website has a script tag linking to a script that keeps track of all the user activity on my web page. My page's click events call that script's functions.
I want web crawlers and bots either not to load that script, or not to send me wrong data from the click events.
I also want to know whether web crawlers and bots can load the script src, or trigger the click events on my page and send me wrong data from it. Is this possible?
Thank you in advance.
I've searched everywhere and found nothing; hoping the Stack Overflow community can help!
I have a Swift app with a section that contains a feed of articles. Each article is presented as a web view baked into the app.
The web views are each loaded as the links in the feed are shown, i.e. before the actual view of the article is presented.
I have noticed that Google Analytics considers these preloads to be page views. How can I avoid this? I am able to run JavaScript at the time of the preload as well as when the user navigates to the article. Is it possible to use custom JavaScript to stop GA when the page first loads, then re-enable it when the page is actually displayed?
This is not guaranteed to work, but one thing you could explore, provided you own both the GA implementation and the actual articles, is to filter out traffic in GA based on a query-string or campaign parameter, or to use some JavaScript in the article pages to prevent the GA script from executing.
In your app, upon fetching the list of articles, you would append a param=value to all the URLs loaded in the webviews. You would then use that parameter either to build an exclusion filter in the property/view settings in the admin, or to instruct the JS in the articles not to execute GA.
On the click on an article, however, you would remove the param from the URL, which would cause GA to behave "normally".
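A sketch of the JavaScript guard, with `preload=1` as a made-up parameter name the app would append to the preloaded webview URLs (the actual name is up to you):

```javascript
// Returns true when the (hypothetical) preload marker is present
// in a query string such as "?preload=1&article=42".
function isPreload(search) {
  return /[?&]preload=1(&|$)/.test(search);
}

// In the article page, guard the GA snippet (browser usage):
// if (!isPreload(window.location.search)) {
//   // load analytics.js and call ga('send', 'pageview') as usual
// }
```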
I was wondering if there was a way to prevent a user from saving/downloading a web page? Specifically, I mean not letting them have access to the data displayed through my web application on their own machine?
I've heard that this is not possible, since the browser must have access to the source code/data. At the same time, I've noticed that if I log in to my Gmail account, open an email, and save the page, the saved copy doesn't work when I open it on my computer. Furthermore, if I click "view source", even the source does not display the entire email message, despite the email being open in my browser.
How is it possible for Gmail to prevent me from seeing that email data?
That's called rendering pages with dynamic data without refreshing the page (AJAX). The entire page's source code is not downloaded in one go; components within the page request data asynchronously to display content. Try googling it and you will find more information.
In View Source you can only see the HTML, CSS, and JavaScript. No one can copy any server-side code (e.g. PHP) from View Source.
You can't stop anyone from seeing your HTML and CSS in the browser, since there is a View Source option.
The most you can do is disable right-click on your page. That can be done through JavaScript.
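For what it's worth, the right-click deterrent is a one-liner. The helper below (my own wrapper, written so the event target can be faked outside a browser) only suppresses the context menu; it does nothing against View Source, DevTools, or a browser with JavaScript disabled:

```javascript
// Attach a handler that cancels the context-menu event.
// In a real page you would call: blockContextMenu(document);
function blockContextMenu(target) {
  function handler(e) {
    e.preventDefault(); // stop the browser's right-click menu
  }
  target.addEventListener('contextmenu', handler);
  return handler; // returned so it can be removed later if needed
}
```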
How can I make my pages work like Grooveshark's pages?
http://grooveshark.com/#!/popular
Is there a tutorial or something that explains how to show pages this way with jQuery or JavaScript?
The hash and exclamation mark in a URL are called a hashbang, and are usually used in web applications where JavaScript is responsible for actually loading the page. Content after the hash is never sent to the server. So, for example, if you have the URL example.com/#!recipes/bread, the page at example.com would be fetched from the server, and this could contain a piece of JavaScript. This script can then read from location.hash and load the page at /recipes/bread.
Google also recognizes this URL scheme as an AJAX URL, and will try to fetch the content from the server as it would be rendered by your JavaScript. If you're planning to make a site using this technique, take a look at Google's AJAX crawling documentation for webmasters. Also keep in mind that you should not rely on JavaScript being enabled, as Gawker learned the hard way.
The hashbang is going out of use on a lot of sites, even if JavaScript does the routing. This is possible because all major browsers support the History API. To do this, they make every path on the site return the same JavaScript, which then looks at the actual URL to decide what content to load. When the user clicks a link, JavaScript intercepts the click event, uses the History API to push a new page onto the browser history, and then loads the new content.
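The two styles can be sketched together. `routeFromHash` parses a hashbang fragment like the ones above (`#!/popular`, `#!recipes/bread`); the commented browser code showing `pushState` and a `loadContent` helper is an assumed outline of the page, not a real API:

```javascript
// Extract the app-level path from a hashbang fragment:
// "#!/popular" -> "/popular"; anything without "#!" -> null.
function routeFromHash(hash) {
  var m = /^#!(.+)$/.exec(hash);
  return m ? m[1] : null;
}

// Browser usage with the History API (loadContent is a hypothetical
// helper that fetches the path and injects the returned HTML):
// window.addEventListener('hashchange', function () {
//   var path = routeFromHash(location.hash);
//   if (path) loadContent(path);
// });
// document.addEventListener('click', function (e) {
//   var link = e.target.closest('a[data-route]');
//   if (!link) return;
//   e.preventDefault();
//   history.pushState(null, '', link.getAttribute('href'));
//   loadContent(link.getAttribute('href'));
// });
```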
I'm using Google Maps for a website. All the data available via the map on the home page is also listed on other pages throughout the site, but with javascript turned on these pages redirect to the home page: e.g. http://host.com/county/essex redirects to http://host.com/#county/essex, which makes the map load all the data for Essex.
I'd like Google to index all the pages on my site, as otherwise none of the data included in the map will be searchable. But I know that, for very good reasons, Google doesn't normally index pages that get redirected. Is there a way to get Google to index my pages? Or, failing that, is there a way to submit all the data to Google in some other way?
*edit
All my pages are linked to from my main navigation (it's just that JavaScript overrides the default link action). The upshot is that there should be no need for a sitemap, as all the pages are discoverable by Googlebot through normal link following.
Regarding submitting data to Google, there is a thing called Google Sitemaps that is supposed to notify Google of all URIs/locations that exist on a given site and should be indexed. The truth is, however, that sites that aren't crawlable by default rarely benefit much from such a sitemap.
Have a look at Sitemaps; they let you include the URLs you need indexed.
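For reference, a minimal sitemap is just an XML file listing the URLs you want crawled, submitted via Search Console or referenced from robots.txt (the host and path below are placeholders taken from the question):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://host.com/county/essex</loc>
  </url>
  <!-- one <url> entry per page you want indexed -->
</urlset>
```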
You write:
with javascript turned on these pages redirect to the home page
Googlebot doesn't execute JavaScript. If your redirects are made in JavaScript, Googlebot will simply index the page and ignore the redirect.