SERP is just showing the main page but none of the subpages - javascript

Have a look at this search, where you can see that only my main page is indexed.
But why do Google and other search engines not pick up arda-maps.org/about/ and the other subpages? Is my deep linking done in the wrong way? Do the search engines just need more time? If they do need more time, why is the forum, which went live much later, already indexed?
When a link is clicked, I load the "subpages" by hiding and showing layers. Maybe it's because of that?

I didn't see an index,follow robots meta tag in your HTML code. It's better to have one:
<meta name="robots" content="index, follow">
You can also do two more things. Go to Google Webmaster Tools > Crawl > Fetch as Google and submit some of your pages. Also click the Sitemaps button in the left menu and submit your sitemap.
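For reference, a bare-bones sitemap.xml might look something like this sketch (the URLs are only examples based on the pages mentioned in the question; list whatever pages the site actually has):
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>http://arda-maps.org/</loc></url>
  <url><loc>http://arda-maps.org/about/</loc></url>
</urlset>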
You can also share pages from your site on Twitter or Google+; everything posted there gets indexed very quickly.
Wish you luck,
Kasmetski

You don't seem to have a robots.txt. It is always a good idea to implement one, and it might explain your issue: if Google cannot fetch your robots.txt properly, it may stop crawling the site. Check for warnings or error messages in Google Webmaster Tools.
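A minimal robots.txt that allows everything could be as simple as the sketch below (the Sitemap line assumes the sitemap lives at the site root, which is an assumption on my part):
User-agent: *
Disallow:
Sitemap: http://arda-maps.org/sitemap.xml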
I have also seen that site:arda-maps.org returns URLs with www. You should implement a redirect from the www URLs to the non-www URLs.
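If the site happens to run on Apache with mod_rewrite, the redirect could be sketched in .htaccess roughly like this (other servers have their own equivalents):
RewriteEngine On
# Send any www.arda-maps.org request to the non-www host with a 301
RewriteCond %{HTTP_HOST} ^www\.arda-maps\.org$ [NC]
RewriteRule ^(.*)$ http://arda-maps.org/$1 [L,R=301]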
Keep in mind that the site: command does not return all indexed pages.
Your About page does not have a NOINDEX tag, which is good. I have also noticed that you have a sitemap.xml and that your About page is listed in it. If the issue persists, it probably means Google does not consider the page worth indexing.

How can I break a third party iframe on my site, with code?

I am not a programmer.
Someone has scraped my site's home page source code and placed their iframe over it, so that when the page is fetched it displays their content.
The iframe is not immediately apparent, but it's there, just well hidden. These sites are all hosted on hacked servers running WordPress. They still display our site's links and architecture, which are being delivered by our server. There are currently over 160 such sites built using the same method.
I believe they have disabled JS, so that may not be an option.
I know that we can break out of an iframe if it's our site in the frame.
Is there any way, either on the server side or on the page, to break their iframe and force our page to the top?
If we can break it, the code they copied becomes worthless to them and, with a bit of luck, they may stop using it.
Update:
I just wanted to add a few points for anyone who has ideas.
1. They already have the code; the only things still being served by us are the images and CSS files, because those are the only links they have left in the page.
2. They are showing their site by floating it with a z-index on top of everything, which is why, when you view the source, you see our site and not the site floated in the iframe.
3. The iframe is visible if you inspect the element with Firefox; scroll to the top of the page and you can see the iframe they are using.
Based upon the additions (currently posted in an answer): since they have your code, there's not much you can do about breaking out of the iframe.
Depending upon your server environment, you could try determining which pages are requesting your images and CSS, and then serve modified versions to visitors coming through the scraped copies. The keyword for your searches is 'hotlinking'.
Possible modifications include not serving the assets (images/CSS) at all, or returning a CSS file that simply applies display:none; to the HTML elements you want to hide.
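On Apache, for instance, the 'stop serving the assets' variant could be sketched in .htaccess roughly as follows (yourdomain.com is a placeholder for the real domain):
RewriteEngine On
# Allow empty referers (direct requests, some proxies) and your own pages
RewriteCond %{HTTP_REFERER} !^$
RewriteCond %{HTTP_REFERER} !^https?://(www\.)?yourdomain\.com/ [NC]
# Refuse images and CSS to everyone else
RewriteRule \.(jpe?g|png|gif|css)$ - [F,NC]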
It might be a fool's errand, but contacting the hosts of the hacked servers could also be worth a try. I can't honestly say it will get you very far, though, and it may be a waste of time for the majority of them.

Google crawling AngularJS site strangely

I'm having some trouble identifying why Google is crawling links that don't exist on my AngularJS site. I'm using prerender.io, which takes a snapshot of the page and returns it to the search bot. For example, Google will correctly list in the search results:
www.mywebsite.co.uk/string/string2
but then it will randomly create another link to crawl such as:
www.mywebsite.co.uk/string/string2/string/string3
which doesn't even exist!
In the end I get these long URLs, which are shown in the search results for my site but don't actually display anything useful.
I think the problem is caused by the <base href="/"> tag: Google sees a header link on, for example, the page /string1/string2, takes the base path as it currently stands, and appends the header link to it, i.e. /string1/string2/string1/string2.
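To illustrate that theory, a rough, made-up sketch of the markup in question:
<base href="/">
<!-- a header/nav link written without a leading slash -->
<a href="string1/string2">Example nav link</a>
<!-- If the <base href="/"> is honoured, the link resolves to /string1/string2.
     If the prerendered snapshot drops or ignores the <base> tag, the link is
     resolved against the current URL instead and can come out as
     /string1/string2/string1/string2. -->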
Does anyone know if this is the problem, and how I can combat it apart from putting in absolute links and taking Angular out of the equation?
I've tried removing the URLs in Google Webmaster Tools, but this is time consuming and unreliable because there are so many links. I've also added the offending links to robots.txt as disallowed entries, but they still show up in Google with a message about robots.txt.
Any ideas here?
Thanks!

Showing a frame of another site on a page

I am trying to create a small frame on my site that will show the home page of another site, similar to what Google does with your most visited pages. I know how to create this with frames, but I am really against frames in general for many reasons not worth mentioning. Is there a jQuery plugin somewhere that can do that for me?
For a more visual explanation, go here and navigate to 'portfolio'. The current developer is using simple images for what he is doing; I would like those icons to be frames of other sites instead.
You want an actual image of a webpage? You'd need something like html2canvas, but that will be HTML5 only. There are some methods for doing this in PHP as well, but it's tricky, and I've only heard of it in theory, never actually tried it myself.
How about this link?
Website screenshots using PHP
To embed an external page within your page, you should check out the <iframe> tag.
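For example, a small preview frame could be as simple as this sketch (the URL and dimensions are placeholders; note that some sites send headers that refuse to be framed at all):
<iframe src="http://example.com/" width="200" height="150"
        title="Site preview" style="border: 0;"></iframe>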
As pointed out in the other answer, the best solution for embedding an external site into yours is usually an <iframe>.
You could, in theory, avoid using <iframe>s by pulling the HTML in from the external sites via Ajax requests and injecting it into your page with JavaScript. This is a much more heavyweight solution, however, and I wouldn't recommend it for your particular problem; I mention it just to point out the option.
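A rough sketch of that Ajax idea, just to show the shape of it (the #preview element and the URL are made up, and it only works for same-origin pages or sites that explicitly allow cross-origin requests):
<div id="preview"></div>
<script>
  // Fetch the other page's HTML and inject it into the placeholder div.
  $.get('http://example.com/', function (html) {
    $('#preview').html(html);
  });
</script>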
What I would recommend, however, is simply linking to the sites, potentially with target="_blank" so that the links don't navigate the browser away from your portfolio.
<iframe>s have their place for certain solutions, but for browsing the different sites you've worked on? No - I'd say the user would benefit from the full browser window experience for that.

Multiple og:image tags not being displayed by share dialog or update status box

I am currently working on a new feature that allows users to select the thumbnail they would like to use when sharing a page on Facebook. The user should be able to use the Facebook widgets, like the send dialog or share buttons, as well as simply copy and paste the URL into their update status box on Facebook.
I have read much of the documentation, which seems to indicate that I simply need to add multiple og:image tags in the page being shared. I have done this and run the page through the linter so the cache gets updated.
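For reference, the markup described in the documentation is simply several og:image tags in the head, along the lines of this sketch (the URLs are placeholders, not the real ones from my page):
<meta property="og:title" content="Example page title" />
<meta property="og:image" content="http://example.com/thumbs/option-1.jpg" />
<meta property="og:image" content="http://example.com/thumbs/option-2.jpg" />
<meta property="og:image" content="http://example.com/thumbs/option-3.jpg" />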
When passing the page to sharer.php directly, effectively removing any of my client-side code and letting the dialog present what it scrapes, I am seeing 3 images from the page available.
I am not sure what I am doing wrong here.
Here is the linter result, the graph object, the sharer.php link and the page. Anyone have ideas of what I could be doing incorrectly?
I have confirmed that at least the og:title tag is being respected by the share dialog. I have also tested the size of the images, and included file extensions as suggested below.
I know this works because BuzzFeed has the exact functionality I am going for. I have reduced my example down to only the core pieces I think should work. You can find the full source here.
Could it be the XML namespace in the top HTML tag?
In the BuzzFeed article, it's:
xmlns:og="http://opengraphprotocol.org/schema/"
In your page it's:
xmlns:og="http://ogp.me/ns#"
In the BuzzFeed article, the content attributes of the og:image tags point to named .jpg files, whereas your links do not have a filename/extension at the end.
It may be necessary to include a filename in the links, especially if Facebook bases image detection on the file extension.
E.g.:
BuzzFeed:
<meta property="og:image" content="http://s3-ak.buzzfeed.com/static/campaign_images/webdr02/2013/3/18/11/10-lifechanging-ways-to-make-your-day-more-effici-1-2774-1363621197-4_big.jpg" />
Yours:
<meta property="og:image" content="http://statics.stage3.cheezdev.com/mediumSquare/3845/4AC356E3/1"/>
After some tests, I guess it's a caching issue.
It looks like the sharer caches the graph using og:url as the key, so different query strings passed to the sharer won't bypass the cache if they don't change the og:url value.
Obviously, the debug tool doesn't use such a cache.
If I'm right (this is just a hunch), you can either wait for the cache entry to expire or try a different og:url. To make testing easier, keep the new og:url equal to the new page location.
So funny story, I'm a developer at BuzzFeed and came across this while trying to figure out why our share dialogs suddenly stopped showing the thumbnail picker.
It looks like Facebook disabled the functionality. It briefly made a reappearance on 1/14/2014 but they introduced a bug that prevented sharing from any pages with multiple og:image tags defined. (See: https://developers.facebook.com/bugs/1393578360896606/)
They fixed the bug, but as of 1/22/2014 it still looks like the thumbnail picker is disabled.
The Sharer.php script on the Facebook site doesn't support all the OG tags as far as I know. The images are scraped from the page content itself, so if you want your three images to appear on the Sharer.php script, include them in your content.
Sharer.php has been officially deprecated by Facebook, so I wouldn't be surprised if certain functionality doesn't work with it. While it still works, it was always the simplest option, and I'm guessing they never built scraping of images from the og tags into it.
I was able to find this article, which shows one way that you can specify exactly what images are available to the sharer.php share page. You can specify one (or multiple) images to share with a URL structure like the following:
http://www.facebook.com/sharer.php?s=100
&p[url]=http://bit.ly/myelection
&p[images][0]=http://election.gv.my/assets/vote.png
&p[title]=My customized title
&p[summary]=My customized summary

Page injected via jQuery/Ajax does not show properly in any Chromium browser

The CSS related to the injected page is obviously not being loaded by Chromium. However, it works fine in IE8, Opera 10.x and Firefox 3.6.x.
Hence the question: is it a mistake in my HTML coding, a Chromium bug, or a jQuery bug? Those are the only causes I can think of.
This is the page in question, with all non-essential JS removed: http://logistik-experte.gmxhome.de/test.html. Navigate to 'resume' and see the difference. It's basically driving me nuts; I'm missing the point somewhere, so any sound advice/help would be highly appreciated.
cheers
I agree with Buggabill: it works for me in Chrome 5. (At least when served from a server; there may be issues with loading files from a local filesystem.)
However, there are problems with your approach. By having page content loaded only by script, you have made your page inaccessible to non-JavaScript users, which includes all search engines. You also can't use the back button, and the pages can't be bookmarked or opened in a new tab, and so on.
Basically you've reinvented all the problems of <frameset>, which are the reasons no one uses frames any more. You shouldn't really deploy this kind of solution until you are familiar with the ways accessibility and usability can be preserved. At the very least, you need to point the navigation links to the real pages containing their content. Then consider adding hash-based navigation, so that each dynamically loaded page has a unique URL which can be navigated to, and which re-loads the selected page when that URL is first entered.
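As a very rough sketch of that kind of hash-based navigation (the element IDs and file names are only illustrative, and it assumes a jQuery version that supports .on()):
<script>
  // Load the matching page fragment whenever the hash changes, and once on
  // initial load so a bookmarked or typed-in URL shows the right page.
  function loadFromHash() {
    var page = location.hash.replace('#', '') || 'home';
    // Pull in only the content wrapper of the real, standalone page.
    $('#content').load(page + '.html #content');
  }
  $(window).on('hashchange', loadFromHash);
  $(loadFromHash);  // run once when the DOM is ready
</script>
The navigation links would still point at the real pages (e.g. href="resume.html"), with a click handler that sets location.hash instead of following the link, so non-JavaScript users and search engines get the plain pages.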
Also if you are loading content into the page you should take care to load only the content you want, for example using load('portfolio.html #somewrapperdiv'). Otherwise you are inserting the complete HTML, including <!DOCTYPE> and <head> and all that, which clearly makes no sense.
To be honest, as it currently is, I don't see the point of the dynamic loading. You have spent a bunch of time implementing an unusual navigation scheme with many disadvantages over simple separate navigable pages, but no obvious advantage.
