I'm making a tool where a user has to quickly sort through a heap of websites
to determine whether they are fit for a particular purpose.
I load the websites inside an iframe. All fine here, but
some sites have JavaScript code that makes them pop out of frames.
Is there a way to prevent that?
I've tried onbeforeunload; it worked for a while, but it seems even that isn't working anymore.
No, not really. If a website has a script like top.location = "mypagehere"; then the other page will load outside the frame, and if it has an alert(), the alert will show up.
The only way around this is to use a server-side language to read the contents of those remote sites and then put only the contents, without any scripts, inside your own placeholders.
If you have a server-side language at your disposal, edit your question and leave a comment here so that we can guide you further.
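If that is the route you take, a minimal Node.js sketch of the idea might look like this (the port and target URL are placeholders, and the regex-based script stripping is deliberately naive; a real tool should use an HTML parser):

// fetch-and-strip.js - proxy a remote page and strip its script tags (Node 18+)
const http = require('http');

http.createServer(async (req, res) => {
  const target = 'http://example.com/';                 // placeholder URL to review
  const html = await (await fetch(target)).text();

  // naive removal of script blocks so the page cannot bust out of your iframe
  const stripped = html.replace(/<script[\s\S]*?<\/script>/gi, '');

  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end(stripped);
}).listen(8080);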
How do IHeartRadio and 8Tracks keep the music playing without pausing, even when you go to a different URL?
My initial thought was that they would use something like Ajax to load content, but the fact that the browser favicon reloads makes me think it may be something else.
There are questions like this one that ask how it is possible to do this at all, but my question is: how do established sites like IHeartRadio.com and 8Tracks.com do it?
If this question is not within Stack Overflow's scope, please let me know and I will remove it.
They are using Ajax, obviously.
8Tracks
The screenshot shows a browser loading 8Tracks's About Us page. See the initiator column (red mark), which shows javascript/xhr, meaning Ajax.
To trigger the browser's default loading indicator, see:
How to have AJAX trigger the browser's loading indicator
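As a rough sketch of the pattern (not the sites' actual code), the page keeps a persistent audio element and swaps only the main content, updating the URL with history.pushState so it looks like a full navigation; the selectors and attributes below are made up:

// continuous-play.js - illustrative only
document.addEventListener('click', async (e) => {
  const link = e.target.closest('a[data-ajax]');
  if (!link) return;
  e.preventDefault();

  // fetch the new page's markup and swap only the content area,
  // so the audio element elsewhere on the page keeps playing
  const html = await (await fetch(link.href)).text();
  const doc = new DOMParser().parseFromString(html, 'text/html');
  document.querySelector('#content').innerHTML =
      doc.querySelector('#content').innerHTML;

  // change the visible URL without a real navigation
  history.pushState(null, '', link.href);
});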
I am trying to load a webpage and then insert my own JavaScript into it.
I have the current code here:
// navigate to the target page; this script is discarded once the new page loads
window.location.assign('http://79.170.44.75/hostdoctordemo.co.uk/downloads/vpn/index.php');
// these lines run against the current document, not the page being loaded
document.getElementById('address_box').value = prompt("Site Address: ");
document.getElementById('go').click();
and what I am trying to do is:
Load the webpage
Set the address box to a value
Simulate a mouse click on the search button
So it loads the webpage, then searches a value it sets itself.
The problem with my current JavaScript is that as soon as the webpage has loaded, the JavaScript stops working (as I expected). I have tried using an iframe tag to load the webpage 'within the webpage', but that did not work when obtaining the id, and people said an iframe would also not work because of the resolution difference.
**The question:** How do I load a webpage and run my own JavaScript code on it? Thank you!
Matthew
You're probably looking for something like Greasemonkey.
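For example, a minimal userscript along these lines could run your code in the context of the target page after it loads (the @match URL and the element IDs are taken from the snippet above and are assumptions about that page):

// ==UserScript==
// @name     Auto search
// @match    http://79.170.44.75/hostdoctordemo.co.uk/downloads/vpn/index.php
// @grant    none
// ==/UserScript==

// runs inside the loaded page, after your own inline script would already have stopped
document.getElementById('address_box').value = prompt("Site Address: ");
document.getElementById('go').click();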
I really can't see an easy way to do what you want.
When the browser receives a web page from a server, the JavaScript is interpreted, and only after that is the page presented on the screen.
So you would have to have a web page with a button or other mechanism that makes a request to a web server; the server would fetch the remote page, save its contents locally, add your JavaScript code and only then "give it to the browser".
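A very rough sketch of that idea, assuming a Node.js server and a hard-coded target URL (both are placeholders):

// inject-proxy.js - fetch a remote page, append your own script, serve the result (Node 18+)
const http = require('http');

http.createServer(async (req, res) => {
  const target = 'http://example.com/search';          // placeholder URL
  let html = await (await fetch(target)).text();

  // append your own code just before the closing body tag so it runs in the served copy
  const myScript = '<script>document.getElementById("address_box").value = "example";</script>';
  html = html.replace('</body>', myScript + '</body>');

  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end(html);
}).listen(3000);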
I have a web page jammed full of a few hundred rows of tabular numerical data. It's fearsome. The user wants to see as many data rows as he can in one single view, without scrolling vertically. I'm thinking of serving that page with just a title bar: no back or refresh button, no address bar, no Google toolbar, no status pane, nothing but a title bar. The user reaches the page by way of a normal HTML link.
Is there a way to do that in the CGI that writes the page? The CGI is already writing content and cache-control headers.
If not, then (next best thing) is there a way to do it with JavaScript, without opening a new browser window, perhaps in the onload event handler?
Thanks!
Is there a way to do that in the CGI that writes the page?
No.
If not, then (next best thing) is there a way to do it with JavaScript, without opening a new browser window, perhaps in the onload event handler?
Well, you could, on onload, open a new ("chromeless") browser window with your page, but that's about it.
You can't control the user's current browser window from within an HTML page.
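If you do go the new-window route, a sketch of it might look like this (the URL is a placeholder, and note that modern browsers ignore or restrict many of these window features, so treat them as a best-effort hint):

// open the data page in a window with as little chrome as the browser allows
window.onload = function () {
  window.open(
    '/report.cgi',                                     // placeholder URL for the CGI page
    'datawindow',
    'toolbar=no,location=no,status=no,menubar=no,resizable=yes,scrollbars=no'
  );
};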
Pretty sure you can't do this cross-browser, and I'm positive you can't do it server-side. However, most browsers have a full-screen view which the user can get to.
In Internet Explorer and Firefox the shortcut is F11. I know that's not the solution you're looking for, but I'm pretty sure that's all there is.
I didn't see n0nick's answer when I typed mine up. I agree with his answer; I'm leaving mine here for the F11 part.
Is there any reason NOT to have a webpage retrieve its main content on the fly?
For example, I have a page that has a header and a footer, and in the middle of this page is an empty div. When you click on one of the buttons in the header, an HTTP GET is done behind the scenes and the .innerHTML of the empty div is replaced with the result.
I can't think of any reason why this might be a bad idea, but I can't seem to find any pages out there that do it. Please advise!
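For reference, a bare-bones sketch of the pattern being described (element IDs and the URL are made up):

// when a header button is clicked, GET a fragment and drop it into the empty div
async function loadSection(url) {
  const response = await fetch(url);                   // HTTP GET behind the scenes
  document.getElementById('content').innerHTML = await response.text();
}

document.getElementById('news-button').onclick = () => loadSection('/sections/news.html');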
It's not unheard of, but there are issues.
The obvious one is that some users have javascript turned off for security reasons, and they will not be able to use your site at all.
It can also negatively impact handicapped users that are using assistive technology such as a screen reader.
It can make it harder for the browser to effectively cache your static content, slowing down the browsing experience.
It can make it harder for search engines to index your content.
It can cause the back and forward buttons to stop working unless you take special steps to make them work.
It's also fairly annoying to debug problems, although certainly not impossible if you use a tool such as Firebug.
I wouldn't use it for static content (a plain web page) but it's certainly a reasonable approach for content that is dynamically updated anyway.
Without extra work on your part, it kills the back and forward history buttons, and it makes it difficult to link to the page each button loads. You'd have to implement some sort of URL-changing mechanism, for example by encoding the last clicked page in the URL's hash (e.g. when you click a button you redirect to #page-2 or whatever).
It also makes your site inaccessible to users with JavaScript disabled. One of the principles of good web design is "graceful degradation": enhancing your site with advanced features like JavaScript, Flash, or CSS, while still working if they are disabled.
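A sketch of that hash-based approach (element IDs, URLs, and names below are made up):

// load a fragment into the page; the hash is the single source of truth,
// so the back/forward buttons and bookmarks keep working
async function loadSection(url) {
  document.getElementById('content').innerHTML = await (await fetch(url)).text();
}

function showPage(name) {
  location.hash = 'page-' + name;                      // e.g. clicking a button sets #page-2
}

window.addEventListener('hashchange', function () {
  var name = location.hash.replace('#page-', '');
  if (name) loadSection('/sections/' + name + '.html');
});

// also handle arriving with a hash already in the URL (a bookmark or shared link)
window.dispatchEvent(new Event('hashchange'));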
Two considerations: Search engine optimization (SEO) and bookmarks.
Is there a direct URL to access your header links? If so, you're (almost) fine. A header link whose href points at a real URL, but which loads that content in place via JavaScript when clicked, is both SEO friendly and populates your page as you desire.
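Such a link might look like this (the URL and the loadSection() handler are placeholders):

<!-- the href is a real, crawlable URL; the onclick loads the same content via Ajax instead -->
<a href="/sections/news.html" onclick="loadSection('/sections/news.html'); return false;">Header Link</a>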
The catch occurs when people attempt to bookmark the page they've loaded via JavaScript... it won't happen. You can throw most of those potential tweets, email referrals, and front page Digg/Reddit articles out the window. The average user won't know how to link to your content.
Where did you read that it is a bad idea? It purely depends on the requirements whether or not content is populated on the fly. In most cases, however, the content is loaded along with the page, not on the fly, but if you need your content on the fly, it shouldn't be a bad idea.
If your content is loaded via JavaScript and JavaScript is disabled in the user's browser, then it is definitely a bad idea.
I can't think of a reason against this either (other than possibly SEO). One thing that would probably be a good idea is to load the data only once, i.e.
Show Div1 - do the Ajax call only if the div's innerHTML is blank
Show Div2 - do the Ajax call only if the div's innerHTML is blank
<div id="div1"></div>
<div id="div2"></div>
This should keep the server load down, since each div's content is only loaded once.
Cheers
This is pretty standard behavior on Ajax-enabled sites.
Keep in mind, however, that extra effort will be needed to:
ensure the back button works
link to (and bookmark) specific content
support browsers with JavaScript disabled.
I am looking for a way to, given a URL, get the source of a webpage back after the JavaScript has been run on it. For example:
I have a webpage with a div.
On loading the page, some JavaScript populates the div.
Viewing the source of the page through a browser will not give the information which is within the div.
As far as I know, in order for the browser to render the page the div must have been filled with (X|D)HTML which would mean that the source of the page after being rendered is still just nested markup, so theoretically there should be a "final" version of the page source.
I have considered using a rendering engine like WebKit or Gecko and somehow adapting these to do this; however, this is a fairly large task and I don't really want to duplicate something which has already been done. Does anyone know of a way of performing this task?
Regards.
Update: I am aiming to use Selenium (as mentioned in the comments to the accepted answer) to do this automatically for several pages. My project is a web spider which by design needs to target a number of pages in which the content I am aiming to reach is not available until after the JavaScript has populated everything.
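Since the update mentions Selenium, here is a rough sketch of that approach using the selenium-webdriver package for Node.js (the browser choice, URL, and element id are placeholders):

// get-rendered-source.js - load a page, let its JavaScript run, then grab the resulting markup
const { Builder, By, until } = require('selenium-webdriver');

(async () => {
  const driver = await new Builder().forBrowser('firefox').build();
  try {
    await driver.get('http://example.com/page-with-dynamic-div');
    // wait until the div that the page's JavaScript populates has appeared
    await driver.wait(until.elementLocated(By.id('content')), 10000);
    const renderedHtml = await driver.getPageSource();  // markup after the scripts have run
    console.log(renderedHtml);
  } finally {
    await driver.quit();
  }
})();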
Firefox add-ons such as the Web Developer toolbar or Firebug have options like "View generated source".
As far as timing goes, just about the only option you have is a snippet of JavaScript code. You could set a start time as early as possible in the page load, and check again when the page is complete (either on DOM-ready or when the page has completely downloaded). It's going to be highly variable, however, and if you are trying to time it in order to improve speed (which is good to know, and to do), just getting Firebug + YSlow would be far more useful.
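A sketch of that timing snippet might be:

// as early as possible in the head of the page
var startTime = new Date().getTime();

// when the DOM is ready
document.addEventListener('DOMContentLoaded', function () {
  console.log('DOM ready after ' + (new Date().getTime() - startTime) + ' ms');
});

// when the page (images, scripts, styles) has completely downloaded
window.addEventListener('load', function () {
  console.log('Fully loaded after ' + (new Date().getTime() - startTime) + ' ms');
});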
Within Firefox you can get the final rendered div by waiting for the browser to finish rendering, then pressing Ctrl-A to select all the content on the page, and finally selecting "Show selection source" from the right-click menu.
This shows you the manipulated/populated DOM code of the page.