I want to have a popup div with iframe content.
Can search engines read this when I'm using jQuery to create it?
Alternatively, is there a way to detect a search engine on the server side and remove the option of this popup?
The best way would be to degrade gracefully, e.g. by using a standard
<a id='mylink' href='xyz.html'>
link that points to the resource that is opened in the popup.
You would then attach the jQuery code to the link, causing it to open in the popup.
That way, even users who do not have JavaScript turned on can access the content.
Most Lightbox clones like Thickbox work that way.
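For example (a minimal sketch; the popup div and the use of jQuery's .load() are illustrative assumptions, not the only way to do it):

<a id='mylink' href='xyz.html'>Read the article</a>

$(function () {
  // Progressive enhancement: intercept the click and show the target
  // page in an in-page popup instead of navigating away.
  $('#mylink').click(function (event) {
    event.preventDefault();
    $('<div class="popup"></div>')
      .load(this.href)   // fetch xyz.html via Ajax (same-origin)
      .appendTo('body');
  });
});

Without JavaScript (or for a crawler), the link is still a plain link to xyz.html.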
Generally speaking, search engines do not execute JavaScript, so there's no way for them to index anything contained in a popup div.
You can, however, inspect the User-Agent header to see if the page is being requested by a web spider, but this is not considered best practice.
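Just to make the mechanism concrete (not a recommendation), a minimal Node.js sketch of that kind of sniffing; the bot list and the page-rendering helpers are made up, and real crawler detection is more involved:

var http = require('http');

function renderPlainPage() { return '<html><body>plain content</body></html>'; }
function renderPopupPage() { return '<html><body>popup-enhanced content</body></html>'; }

http.createServer(function (req, res) {
  var ua = req.headers['user-agent'] || '';
  // Most major crawlers identify themselves in the User-Agent string.
  var isBot = /googlebot|bingbot|slurp|baiduspider/i.test(ua);
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end(isBot ? renderPlainPage() : renderPopupPage());
}).listen(8080);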
Search engines do not play well with JavaScript, but you can see how Googlebot would fetch your page using Google's Webmaster Tools.
Rendering a different page to bots is not considered a best practice either. The best you can do is graceful degradation.
Just to say it at once: DON'T use an iframe here. It is bad technology for this, and search engines won't index the iframe'd page as part of yours.
First question:
No, they cannot, if the data is loaded at the moment the popup "pops up" (since search engines, as said already, generally don't execute JavaScript). If the data is already loaded and the popup div is merely hidden at page load, the search engine will index the content.
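For example, in this made-up sketch the popup content is indexable because it is present in the initial HTML and merely hidden:

<a id='showPopup' href='#'>Open popup</a>
<div id='popup' style='display: none;'>
  <p>This text is in the page source at load time, so crawlers can see it.</p>
</div>

// Crawlers never run this; it only reveals what is already there.
$('#showPopup').click(function () {
  $('#popup').show();
  return false;
});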
Second question:
Don't do that. It's called cloaking, and it will be punished by search engines if detected. They don't like content customized just for them, and then you're back at square one.
Is there a way to press a button on an external site with JavaScript and/or jQuery? For example, I open a new window like this:
windowObjectReference = window.open("http://some_site.html","name");
Then I want to press a button on this site. Something like this:
windowObjectReference.button.click();
Or:
name.button.click();
It would be a huge security violation if a browser let you do that from a script placed on your own website.
So, no, this cannot be done, and should not be possible.
But...
If both sites belong to you (you have access to their code), you can pass a parameter (e.g. as a hash within the URL); the target website can then read it and fire the event you mentioned (name.button.click()).
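A minimal sketch of that idea (the hash value and button id are made up):

// On your site: open the target page with a flag in the URL hash.
var windowObjectReference =
    window.open('http://your-other-site.example/page.html#click-button', 'name');

// On the target site, which you also control: read the flag and act on it.
$(function () {
  if (window.location.hash === '#click-button') {
    $('#button').click();
  }
});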
You can't do this with JavaScript from a webpage.
You can do it from a browser extension, though.
NO!
For security reasons. This kind of attack is called clickjacking, and it was used on Twitter.
is there a way to press a button on external site with javascript
and/or jquery?
I know I'm late to the party, but YES, unfortunately it is possible, and in a quite simple way:
make a div on your page (e.g. #externalDiv);
set its CSS visibility to hidden and display to none;
use (the simple way) jQuery's .load() method, or write your own JS method using XMLHttpRequest;
load the external page into your page;
click the button you wish:
$('#externalDiv').load('http://www.externalPage.com', function () {
  // Once the external markup is in the DOM, trigger the button's handler.
  $('#externalPageButtonId').click();
});
You can't defend against something if you don't know how it works :)
You are not allowed to do so because of the same-origin policy (SOP). Any trick that forces the user to perform a click on your behalf will be considered a clickjacking attack and could lead to bad consequences.
I want to show an English wikipedia article on the left side of the page and then show the Spanish version of that wikipedia article on the right side of the page.
Is there a way to do that with HTML, JavaScript, Ajax, etc.?
I know I could use iframes, but it would be nice to have them scroll together (you scroll one and the other scrolls, or there is just one scroll bar for both) and follow links together (if a link is clicked on one page, the appropriate translated page loads on the other side, if it exists).
Iframes are good for displaying pages from another domain and letting users browse them within your page. However, there really isn't a way to detect click events within the content of an iframe if it is from another domain. This article explains why.
You basically have to find out what page the iframe loaded, right? Even if you add an onload event to the iframe, checking what page was loaded is not allowed either, I'm afraid.
An interesting concept. If you are sure you want to load two webpages, then iframes are your option. However, for the functionality you desire, you have to use a custom scroll handler on one of the pages and scroll the second page by the same amount. Here is a sample for a single-page custom scroll; just use the same concept for the other page too.
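The scroll-syncing part could look roughly like this, assuming both panes are iframes served from your own domain so their documents are scriptable (the ids are made up):

var $leftDoc = $('#leftFrame').contents(),
    $rightDoc = $('#rightFrame').contents();

// When the left document scrolls, scroll the right one by the same amount.
$leftDoc.on('scroll', function () {
  $rightDoc.scrollTop($leftDoc.scrollTop());
});

Note that this will not work on cross-domain iframes (e.g. pages loaded directly from wikipedia.org), for the same-origin reasons mentioned above.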
Update: You can perhaps have a look at this. The content stored in Wikipedia's databases can be accessed through MediaWiki, as far as I know. Use this to get the data specific to your link.
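For example, the MediaWiki API can give you the Spanish title that corresponds to an English article via its langlinks property (a minimal sketch; error handling omitted, and the article title is just an example):

// Ask the English Wikipedia for the Spanish interlanguage link of a page.
$.getJSON('https://en.wikipedia.org/w/api.php?callback=?', {
  action: 'query',
  titles: 'Albert Einstein',
  prop: 'langlinks',
  lllang: 'es',
  format: 'json'
}, function (data) {
  $.each(data.query.pages, function (id, page) {
    if (page.langlinks) {
      console.log('Spanish title: ' + page.langlinks[0]['*']);
    }
  });
});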
I have used popups in many places on my website (it's in PHP with a MySQL DB and lots of JavaScript). They are mostly being blocked by browsers, which prevents users from moving ahead. What should I do in my code so that my popups become popup-blocker independent?
Use jQuery UI dialogs (http://jqueryui.com/demos/dialog/). They are pure JavaScript, so popup blockers will not block them.
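For example (assumes jQuery UI is already loaded on the page; the div id is made up):

<div id='myDialog' title='My popup'>
  <p>This opens in-page, so popup blockers never see it.</p>
</div>

// Render the div as a modal jQuery UI dialog instead of a new window.
$('#myDialog').dialog({ modal: true });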
Make sure your popup's URL is on the same domain as the main site. A link with target="_blank" will never get blocked (unless blocking was turned on manually), since the user performs the action himself. Or switch to alerts and prompts instead of popups.
Not a direct answer, but maybe a floating div is a good replacement: build a JS dialog box.
Is there any reason NOT to have a webpage retrieve its main content on the fly?
For example, I have a page that has a header and a footer, and in the middle of this page is an empty div. When you click on one of the buttons in the header, an HTTP GET is done behind the scenes and the innerHTML of the empty div is replaced with the result.
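In other words, something like this (a sketch; the ids and URL are made up):

// Clicking a header button fetches a fragment and swaps it into the div.
$('#newsButton').click(function () {
  $.get('/fragments/news.html', function (html) {
    document.getElementById('content').innerHTML = html;
  });
});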
I can't think of any reason why this might be a bad idea, but I can't seem to find any pages out there that do it. Please advise!
It's not unheard of, but there are issues.
The obvious one is that some users have javascript turned off for security reasons, and they will not be able to use your site at all.
It can also negatively impact handicapped users that are using assistive technology such as a screen reader.
It can make it harder for the browser to effectively cache your static content, slowing down the browsing experience.
It can make it harder for search engines to index your content.
It can cause the back and forward buttons to stop working unless you take special steps to make them work.
It's also fairly annoying to debug problems, although certainly not impossible if you use a tool such as Firebug.
I wouldn't use it for static content (a plain web page) but it's certainly a reasonable approach for content that is dynamically updated anyway.
Without extra work on your part, it kills the back and forward history buttons, and it makes it difficult to link to the pages each button loads. You'd have to implement some sort of URL-changing mechanism, for example by encoding the last clicked page in the URL's hash (e.g. when you click a button, you redirect to #page-2 or whatever).
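A minimal sketch of that hash mechanism (the ids and URLs are made up; newer code would use the History API instead):

// Record the clicked page in the hash so it lands in browser history...
$('#page2Button').click(function () {
  window.location.hash = 'page-2';
});

// ...and load content whenever the hash changes, so back/forward work.
$(window).on('hashchange', function () {
  var page = window.location.hash.replace('#', '') || 'page-1';
  $('#content').load('/fragments/' + page + '.html');
});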
It also makes your site inaccessible to users with JavaScript disabled. One of the principles of good web design is "graceful degradation": enhancing your site with advanced features like JavaScript, Flash, or CSS, but still working if they are disabled.
Two considerations: Search engine optimization (SEO) and bookmarks.
Is there a direct URL to access your header links? If so, you're (almost) fine. For example, a link along these lines is both SEO friendly and populates your page as you desire (the handler name here is made up):
<a href='page2.html' onclick='loadContent(this.href); return false;'>Header Link</a>
The catch occurs when people attempt to bookmark the page they've loaded via JavaScript... it won't happen. You can throw most of those potential tweets, email referrals, and front page Digg/Reddit articles out the window. The average user won't know how to link to your content.
Where did you read that it is a bad idea? Whether or not content is populated on the fly depends purely on requirements. If you need your content on the fly, it shouldn't be a bad idea.
If your content is loaded via JavaScript and JavaScript is disabled in the user's browser, then it is definitely a bad idea.
I can't think of a reason against this either (other than possibly SEO). One thing that would probably be a good idea is to load the data only once, i.e.
Show Div1 - do the ajax/whatever only if its innerHTML is blank
Show Div2 - do the ajax/whatever only if its innerHTML is blank
<div id='div1'></div>
<div id='div2'></div>
This should keep the server load down, since each div's content is only loaded once.
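A sketch of that load-once idea (the ids and URLs are made up):

function showDiv(id, url) {
  var $div = $('#' + id);
  // Only hit the server if the div hasn't been filled yet.
  if ($.trim($div.html()) === '') {
    $div.load(url);
  }
  $div.show();
}

// e.g. showDiv('div1', '/content/one.html');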
Cheers
This is pretty standard behavior in Ajax-enabled sites.
Keep in mind however that extra effort will be needed to:
ensure the back button works
link to (and bookmark) specific content
support browsers with javascript disabled.
I am looking for a way to, given a URL, get back the source of a webpage after the JavaScript has run on it. For example:
I have a webpage with a <div>.
On loading the page, some JavaScript populates the div.
Viewing the source of the page through a browser will not give the information which is within the div.
As far as I know, in order for the browser to render the page, the div must have been filled with (X|D)HTML, which would mean that the source of the page after rendering is still just nested markup. So, theoretically, there should be a "final" version of the page source.
I have considered using a rendering engine like WebKit or Gecko and somehow adapting it to do this, but that is a fairly large task, and I don't really want to duplicate something which has already been done. Does anyone know of a way of performing this task?
Regards.
Update: I am aiming to use Selenium (as mentioned in the comments to the accepted answer) to do this automatically for several pages. My project is a web spider which by design needs to target a number of pages in which the content I am aiming to reach is not available until after the JavaScript has populated everything.
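For what it's worth, a minimal sketch of that approach with the Node selenium-webdriver bindings (the URL and selector are made up; a browser driver must be installed locally):

var swd = require('selenium-webdriver');

(async function dumpRenderedSource() {
  var driver = await new swd.Builder().forBrowser('firefox').build();
  try {
    await driver.get('http://example.com/page-with-js-content');
    // Wait until the JavaScript-populated div actually exists.
    await driver.wait(swd.until.elementLocated(swd.By.css('#content')), 10000);
    // getPageSource() serializes the current DOM, i.e. the "final"
    // source after the scripts have run.
    var source = await driver.getPageSource();
    console.log(source);
  } finally {
    await driver.quit();
  }
})();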
Firefox add-ons such as the Web Developer toolbar or Firebug have options like "View generated source".
As far as timing goes, just about the only option you have is a snippet of JavaScript code: set a start time as early as possible in the page load, and check again when the page has completed (either at DOM-ready or when it is completely downloaded). It's going to be highly variable, however, and if you are trying to time it in order to improve speed (which is good to know, and to do), just getting Firebug + YSlow would be far more useful.
Within Firefox you can get the final rendered div by waiting for the browser to finish rendering, then pressing Ctrl-A to select all content on the page, and finally choosing "Show selection source" from the right-click menu.
This shows you the manipulated/populated DOM code of the page.