Changed web content but Google Analytics still tracks the page - javascript

Here is a website, let's say a.com, with pages like a.com/xxx or /yyy.
I am now the webmaster, and I don't know the site's initial setup.
It used Google Analytics to track page info, so I assume the site had the JavaScript tracking code attached.
The site has since been rebuilt and now has no tracking code.
However, tracking data is still coming in today.
Some of the hierarchy of the website remains.
Q1. Why can Google still track the website? My guess is it's because of the unchanged hierarchy.
Q2. Can I see an individual user's pathway / history? Right now I can only see daily / hourly summaries with totals. What I want is: user A: /xxx -> /yyy, or a.com -> /yyy.
Q3. I still want to use the Google Analytics service. How can I make sure it works correctly? Current status: receiving data, which I barely trust. The answer to Q2 leads to Q3: if I can see users viewing a new page, e.g. a.com/zzz, then I know the new pages are being tracked.
I'm a newbie to the web.
Comments appreciated.

Most likely your tracking code is in your header or footer, so every page, old or new, includes it. Just look for the code in the page source.
It doesn't matter if you change content or structure, create a new page, or whatever else: Google Analytics tracks every page where the code is present, and doesn't track pages where there is no code.
If there is no tracking code (and you are sure there isn't), then GA can't track the page. However, there is referral spam that can hit your GA account and show traffic on pages where the code is missing (just check whether the suspicious traffic is referral). Fighting that spam requires filters, but that is a whole new question.
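If you're not sure what to look for, the classic asynchronous ga.js snippet looks roughly like this (the UA-XXXXX-Y property ID is a placeholder; your site may carry an older or newer variant of the code):

var _gaq = _gaq || [];
_gaq.push(['_setAccount', 'UA-XXXXX-Y']); // placeholder property ID
_gaq.push(['_trackPageview']);

(function() {
  // Loads the ga.js library asynchronously so it doesn't block rendering.
  var ga = document.createElement('script');
  ga.type = 'text/javascript';
  ga.async = true;
  ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www')
           + '.google-analytics.com/ga.js';
  var s = document.getElementsByTagName('script')[0];
  s.parentNode.insertBefore(ga, s);
})();

Searching the page source (and any shared header/footer templates) for "google-analytics.com" or "_gaq" will tell you which pages still carry the code.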

Related

Hiding URLs from the location bar

This might be a silly question which I'll delete if I realise the answer, so if you are reading this then I haven't figured it out yet.
I have some software which is online (addressable) and available, but it's a bit of a secret, so instead of just hitting my software when you come to my domain, you are shown a blog that I wrote, and hidden within that blog is a link ;)
All well and good.
Now the problem is that users of my software always post screenshots, which gives my half-secret URL away. EEEEK, yep! So I want the URL to be just the plain old normal domain, so as not to make things too easy for them hacky types :p
I have full control over everything here: client side / server / everything. Initially you hit some JSP and then the GWT app (inside Tomcat) - you have to provide login details in the GWT app. So I have plenty of places to do this URL hiding / faking, but any ideas to help would be great.
...and yes, I'm posting this (perhaps it isn't too dumb)!
Many thanks in advance.
You can use the JavaScript history.pushState() here:
history.pushState({}, "Some title here", "/");
For example, on http://yourwebsite.com/secretlink.html, after the JS runs, the URL bar will show http://yourwebsite.com/ without the page having refreshed [1]. Note that if the user then refreshes the page, they will be taken to http://yourwebsite.com/ and not back to the secret link.
You can also do something like history.pushState({}, "Some title here", "/hidden.jsp"), so that if the user refreshes the page you can show them an error page that tells them to find the secret link and open it again.
[1] If you try to pushState() a URL on a domain other than your own, the browser refuses and throws a security error, so this cannot be abused to phish other sites.
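A minimal sketch of the idea, assuming the secret page lives at /secretlink.html and "/" serves your public blog (both paths are hypothetical):

// Run this on the secret page to mask its URL in the location bar.
// replaceState() behaves like pushState() but doesn't add a history entry,
// so the Back button won't step back to the secret URL either.
if (window.history && history.replaceState) {
  history.replaceState({}, document.title, "/");
}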
Include the inner page as an iframe; the location bar then only ever shows the outer page's URL.

Facebook like error - disabled/not visible

We've got a number of content-managed sites that use the same functionality. We added a site recently, and the Facebook Like button is failing with an error on click (following Facebook login):
This page is either disabled or not visible to the current user.
This only happens when the Facebook user isn't an administrator of the page, or of an application we've created for the page.
The site where this is failing is here: http://beachhousemilfordonsea.co.uk/
An example of a site that works (same code): http://monmouthash.co.uk/
The Facebook like code:
<fb:like href="http://beachhousemilfordonsea.co.uk/" width="380"></fb:like>
Actions already taken
I've checked with the FB Linter and there are a couple of Open Graph warnings that do need to be fixed (add a description, increase the image size) - but these are the same for all sites, so they shouldn't be affecting this one (it's on the dev plan to get these rectified in the next release).
I've taken a look at the Facebook app we've got running on the problem page and checked it against other working applications, and the settings are the same as far as I can see, except these options are missing from the new application:
Encrypted access token (I assume this is now the default and no longer changeable)
Include recent activity stories
It doesn't feel like the application should have much of an impact on this, though, as we use the application for the other functionality within the page (which is all working fine!).
I've searched for possible issues, and checked the more common ones:
There are no age/geographic restrictions
I've submitted two requests to Facebook in case the content is blocked, but there has been no response or change
Any recommendations as to what else to try?
Thanks in advance,
Kev
P.S. I asked this question a week ago but it wasn't well formed - hopefully this is a better attempt, but if you need anything else please do let me know.

SEO and AJAX (Twitter-style)

Okay, so I'm trying to figure something out. I am in the planning stages of a site, and I want to implement "fetch data on scroll" via jQuery, much like Facebook and Twitter, so that I don't pull all the data from the DB at once.
But I have some concerns regarding SEO: how will Google be able to see all the data? Because the page fetches more data automatically as the user scrolls, I can't include any links in the style of "go to page 2"; I want Google to just index that one page.
Any ideas for a simple and clever solution?
Put links to page 2 in place.
Use JavaScript to remove them if you detect that your autoloading code is going to work.
Progressive enhancement is simply good practice.
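A minimal sketch of that approach, assuming jQuery, a server-rendered .pagination block whose "next" link points at the real page-2 URL, and a #posts container (all the selectors here are hypothetical):

$(function () {
  var nextUrl = $('.pagination a.next').attr('href');
  if (!nextUrl) return;              // no next page, nothing to autoload
  $('.pagination').hide();           // JS works, so hide the fallback links
  var loading = false;

  $(window).scroll(function () {
    if (loading || !nextUrl) return;
    // Fetch the next page when the visitor nears the bottom.
    if ($(window).scrollTop() + $(window).height() > $(document).height() - 300) {
      loading = true;
      $.get(nextUrl, function (html) {
        var $page = $(html);
        $('#posts').append($page.find('#posts').children()); // splice in the new items
        nextUrl = $page.find('.pagination a.next').attr('href'); // undefined on the last page
        loading = false;
      });
    }
  });
});

Because the fallback links are real URLs returning real pages, Googlebot can crawl the whole archive even though visitors with JavaScript never see them.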
You could use PHP (or another server-side script) to detect the user agents of the webcrawlers you specifically want to target, such as Googlebot.
In the case of a webcrawler, you would have to use non-JavaScript techniques to pull the database content and lay out the page. I would recommend not paginating the search-engine-targeted content - assuming that you are not paginating the "human" version. The URLs discovered by the webcrawler should be the same as those your (human) visitors will visit. In my opinion, the page should only deviate from the "human" version by having more content pulled from the DB in one go. (There's a sketch of this below, after the crawler list.)
A list of webcrawlers and their user agents (including Google's) is here:
http://www.useragentstring.com/pages/Crawlerlist/
And yes, as stated by others, don't rely on JavaScript for content you want seen by search engines. In fact, JavaScript is quite frequently used precisely where a developer doesn't want something to appear in search engines.
All of this comes with the rider that it assumes you are not paginating at all. If you are, then you should use a server-side script to paginate your pages so that they are picked up by search engines. Also, remember to put sensible limits on the amount of your DB that you pull for the search engine; you don't want it to time out before it serves the page.
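Here is the user-agent idea sketched in JavaScript (Node/Express) rather than the PHP the answer suggests, to match the rest of this page; the crawler pattern is deliberately partial and the two render helpers are hypothetical:

var express = require('express');
var app = express();

// Partial list; extend with whatever crawlers you want to target.
var CRAWLER_RE = /googlebot|bingbot|slurp/i;

app.get('/feed', function (req, res) {
  if (CRAWLER_RE.test(req.get('User-Agent') || '')) {
    // Crawler: everything (within sensible limits) in one response, no JS needed.
    res.send(renderFullPage());      // hypothetical helper
  } else {
    // Human: first batch of items plus the endless-scroll script.
    res.send(renderLazyPage());      // hypothetical helper
  }
});

app.listen(3000);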
Create a Google Webmaster Tools account, generate a sitemap for your site (manually, automatically, or with a cronjob - whatever suits) and tell Webmaster Tools about it. Update the sitemap as your site gets new content. Google will crawl this and index your site.
The sitemap will ensure that all your content is discoverable, not just the stuff that happens to be on the homepage when the googlebot visits.
Given that your question is primarily about SEO, I'd urge you to read this post from Jeff Atwood about the importance of sitemaps for Stack Overflow and the effect it had on traffic from Google.
You should also add paginated links, hidden by your stylesheet, as a fallback for when your endless scroll is disabled or someone isn't using JavaScript. If you're building the site right, these will just be the partials that your endless scroll loads anyway, so it's a no-brainer to make sure they're on the page.

How do I keep track of how many times an external link is clicked?

I have a site affiliated with a university and we want to link to another site that has a certain teaching program.
How can we track the number of times this link has been clicked from within our website?
I would use jQuery and/or Ajax to touch a page in the background that counts the hit whenever a link is clicked, and then let the link do what it does.
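A minimal sketch of that, assuming jQuery, an a.external class on the outbound links, and a hypothetical /count-click endpoint on your own server that logs the hit:

$(document).on('click', 'a.external', function (e) {
  e.preventDefault();
  var href = this.href;
  $.get('/count-click', { url: href });   // hypothetical logging endpoint
  // Small delay so the tracking request isn't cancelled by the navigation.
  setTimeout(function () { window.location = href; }, 150);
});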
You can use web analytics tools such as Google Analytics or commercial software such as WebTrends.
Instead of directly providing the external link, link to a page on your own site that redirects to the target; the click will then be logged like every other request.
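A sketch of such a redirect page in JavaScript (Node/Express) - the /out route is hypothetical, and in practice you should validate the target so you don't become an open redirect:

var express = require('express');
var app = express();

app.get('/out', function (req, res) {
  var target = req.query.url;
  // The hit also lands in your normal access log; log it explicitly too.
  console.log(new Date().toISOString(), 'outbound click:', target);
  res.redirect(302, target);
});

app.listen(3000);

You would then write links as /out?url=http://example.com/teaching-program (example.com standing in for the real site) instead of linking to the external site directly.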
The sky really is the limit when it comes to link tracking; it mostly depends on your expertise.
You can use a service like bit.ly to track the clicks on the link. Bit.ly is mostly used as a shortener service, but if you sign up for an account you can keep track of the links you generate and how often they are clicked.
If you want to install something on your own server to track the links, you can use something like:
http://www.phpjunkyard.com/php-click-counter.php - a simple redirect script where you give it a link and it gives you back a link that it can track. It keeps track of all the clicks, does not require a MySQL database, and doesn't demand much programming knowledge to install.
The most reliable method is using a redirector to measure the traffic going through; no JavaScript is needed (e.g. the phpjunkyard.com script above), as long as you can rely on your server to redirect without problems.
If a server-side option is not available, using a web-analytics tool simply to count link clicks doesn't make sense, so the JavaScript option could be used - but you still need something to record the clicks in.
If you would like to use a web-analytics tool to, say, better understand your visitors, it's a different story. All the WA tools use a 1x1-pixel GIF to record calls and read the incoming data (not all require it, but all use it if possible). GA is free, but you would have to code the link click yourself (simple, though). Piwik hosting is available really cheaply and would do the trick. WebTrends and other such tools are way too hardcore for this kind of requirement.
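For reference, "coding the link click" with classic GA looks something like this, assuming the asynchronous _gaq snippet is already on the page (the a.external selector is hypothetical):

$('a.external').on('click', function () {
  // Records the outbound click as an event in Google Analytics.
  _gaq.push(['_trackEvent', 'Outbound', 'click', this.href]);
});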

Monitor clicks/goals in Google Analytics with iframes?

We've got a site that shows some content in iframes loaded from other domains. What I'd like to do is set up some Goals to track whether this content is clicked. Is it possible to track these clicks?
I know that this content is outside our domain, but is it still in the DOM?
It is possible. Since Goal Tracking is profile-based, the key is to have the tracking of all domains feed into one profile. See the "How do I install the tracking code if my site spans multiple domains?" entry in the Google Analytics Help for further instructions. After that, your iframe contents will appear as usual pageviews in the reports. (For instance, if you used <iframe src="http://otherdomain.com/stuff"></iframe>, you will find pageviews for "/stuff".)
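The linked help entry boils down to configuring the tracker the same way on every domain, including the pages loaded in the iframes. A sketch using the same legacy ga.js API as this answer (the UA number is a placeholder):

var pageTracker = _gat._getTracker("UA-XXXX-X");  // same account on every domain
pageTracker._setDomainName("none");               // turn off automatic domain hashing
pageTracker._setAllowLinker(true);                // let sessions link across the domains
pageTracker._trackPageview();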
Otherwise, I don't really know what you mean by "stuff is clicked". If it's an object in the iframe you want to track, you may generate a virtual pageview when the visitor clicks on it:
pageTracker._trackPageview("/Stuff_clicked");
If you are not able to install the GATC (Google Analytics Tracking Code) on the other domains you are loading inside your iframes, then unfortunately you won't be able to track any clicks or virtual pageviews for those domains. Clicks or events occurring on a domain can only be tracked back to your account if your GATC, which includes your Google Analytics profile ID (e.g. UA-XXXX-X), is installed on that page.
