Whenever I load a webpage, I need to get the URL of the current tab and store it in a variable. Ideally I would get the URL of the site being requested before anything loads, so I can run some logic depending on the URL.
What methods can I use to achieve that?
(This is for a Firefox extension.)
Depending on the specifics of your task, three of the how-tos on the main MDN page on web extensions are relevant (a rough sketch of the webRequest and tabs approaches follows the list):
Intercepting HTTP requests (webRequest)
Modify a web page (content scripts), and
the tabs API (specifically onUpdated).
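For instance, a minimal sketch of those two approaches, assuming the "webRequest", "tabs" and "<all_urls>" permissions in manifest.json (the variable name is just a placeholder):

// background script
let currentUrl = null;   // placeholder variable holding the latest URL

// webRequest: fires before the browser sends the request, i.e. before anything loads
browser.webRequest.onBeforeRequest.addListener(
  (details) => {
    currentUrl = details.url;          // the URL being requested
    // ... run your URL-dependent logic here ...
  },
  { urls: ["<all_urls>"], types: ["main_frame"] }   // top-level documents only
);

// tabs.onUpdated: an alternative that fires when a tab's URL changes
browser.tabs.onUpdated.addListener((tabId, changeInfo, tab) => {
  if (changeInfo.url) {
    currentUrl = changeInfo.url;
  }
});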
Related
I am creating a Chrome extension that requires an HTTP GET request API call whenever a new page loads. My extension then inserts an iframe in the web page, to which I would like to provide the data from the API call. I have devised two different ways of doing this. I have been able to get both to work, but I am wondering which is more advisable.
Method 1: After the content script injects the iframe, it makes a call to a background script. In the background script, we fetch the data and use postMessage to send it to the iframe. The data is then received by a script inside the iframe, which loads it.
Method 2: After the content script injects the iframe, a script running inside the iframe fetches the data. This same script then loads the data.
Or, if there are any other methods, I would be grateful for any recommendations. My reasoning so far is that the first method has the advantage of conducting the API calls from the background script, while the second method has the advantage of not requiring much communication between the various scripts.
Is either of these methods superior? Thank you for the advice.
Any extension page or frame that has a chrome-extension:// URL has the same rights. This includes iframes that you insert in web pages with src pointing to an html file from your extension exposed via web_accessible_resources in manifest.json.
It means there are no inherent restrictions or preferred methods.
It only depends on the life cycle of data.
When it would make sense to make the request in the background page:
to cache it in a variable/object if you have a persistent background page for whatever reason;
to avoid interruption of the request due to the tab being closed or navigated away by the user or the main document's script;
to transform the data using some library that you load in the background script and don't want to load in the UI page/frame, e.g. because it's slow to load;
any other reason.
As for sharing between pages, it should be fast even with messaging, unless your data exceeds the 64MB message size limit, in which case you would have to use Blob URLs or access the variable directly via getBackgroundPage, which returns the window object of the background script. There's also the BroadcastChannel API, which works between all chrome-extension:// pages and frames of an extension; in Chrome it should be much faster than messaging because it uses the structured clone algorithm instead of the JSON stringify/parse that messaging uses internally.
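As a rough sketch of the first approach from the question (the endpoint URL and message name are made up), the background script could fetch and cache the data and hand it to the iframe on request:

// background.js: fetch once, cache, and answer requests from extension pages/frames
let cachedData = null;

async function getData() {
  if (!cachedData) {
    const response = await fetch("https://api.example.com/data");  // made-up endpoint
    cachedData = await response.json();
  }
  return cachedData;
}

chrome.runtime.onMessage.addListener((message, sender, sendResponse) => {
  if (message.type === "get-data") {
    getData().then(sendResponse);
    return true;   // keep the channel open for the async response
  }
});

// script inside the chrome-extension:// iframe: ask the background for the data
chrome.runtime.sendMessage({ type: "get-data" }, (data) => {
  // ... render the data into the iframe ...
});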
As a rule of thumb for any performance-related concerns: use devtools performance profiler.
I'm trying to create an extension for Google Chrome that overrides the "new tab" page, and the page contains JavaScript content. In the first tab it runs correctly, but when new tabs are opened, the scripts don't work. What should I do to fix it?
You can make an extension for Google Chrome. Chrome extensions require a manifest.json file to be included, and you can set the overridable page to bookmarks, history, or newtab, like so:
"chrome_url_overrides" : {
"newtab": "index.html"
},
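For context, a minimal complete manifest around that override might look like this (the name and version are placeholders):

{
  "manifest_version": 2,
  "name": "My New Tab Page",
  "version": "1.0",
  "chrome_url_overrides": {
    "newtab": "index.html"
  }
}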
For more info about making Chrome extensions, read the docs.
I'm not sure what you mean by 'content in JavaScript'.
You can check out the code for my chrome extension, New Tab Redirect: https://github.com/jimschubert/newtab-redirect
The way I have it set up is that there is a redirect.html page which is used as the override page. On load, it checks for a user-specified url from the background/event page (background.js), then redirects to that url with a simple JavaScript redirect.
The master branch is v1.0 (background page) and the 2.0 branch uses an event page.
Edit:
To specifically answer why you might be having problems with your code, without seeing any actual code: I assume you're initializing your script in a background page in much the same way as in my extension and querying that data from the redirect page, where you're then getting/setting data in local storage.
If any of that data needs to change, you can't use local storage from your redirecting page. Instead, you need to send the data back to the background page and store the options there. Think of your background page as a service and your redirecting page as a very thin, very stateless client.
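A minimal sketch of that service/client split (the message type and storage key are made up, and this is not the actual New Tab Redirect code):

// redirect page (thin, stateless client): ask the background page for the URL, then go
chrome.runtime.sendMessage({ type: "get-redirect-url" }, (url) => {
  if (url) {
    window.location.replace(url);   // simple JavaScript redirect
  }
});

// background.js (the "service"): owns the stored option and answers requests
chrome.runtime.onMessage.addListener((message, sender, sendResponse) => {
  if (message.type === "get-redirect-url") {
    sendResponse(localStorage.getItem("redirectUrl"));   // storage lives with the background page
  }
});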
How can I make my pages show like Grooveshark's pages?
http://grooveshark.com/#!/popular
Is there a tutorial or something that explains how to show pages this way with jQuery or JavaScript?
The hash and exclamation mark in a URL are called a hashbang, and are usually used in web applications where JavaScript is responsible for actually loading the page. Content after the hash is never sent to the server. So, for example, if you have the URL example.com/#!recipes/bread, the page at example.com would be fetched from the server, and it could contain a piece of JavaScript. This script can then read location.hash and load the page at /recipes/bread.
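A bare-bones version of that hash routing could look like this (it assumes jQuery and a #content container, both of which are just placeholders for your own loading code):

// runs on example.com: load whatever the #! fragment points at
function loadFromHash() {
  var path = location.hash.replace(/^#!/, '');   // e.g. "recipes/bread"
  if (path) {
    $('#content').load('/' + path);   // fetch the real page and insert it
  }
}

window.addEventListener('hashchange', loadFromHash);
loadFromHash();   // handle the hash present on the initial page load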
Google also recognizes this URL scheme as an AJAX URL, and will try to fetch the content from the server as it would be rendered by your JavaScript. If you're planning to make a site using this technique, take a look at Google's AJAX crawling documentation for webmasters. Also keep in mind that you should not rely on JavaScript being enabled, as Gawker learned the hard way.
The hashbang is going out of use on a lot of sites, even if JavaScript does the routing. This is possible because all major browsers support the History API. To do this, they make every path on the site return the same JavaScript, which then looks at the actual URL to decide what content to load. When the user clicks a link, JavaScript intercepts the click event, uses the History API to push a new page onto the browser history, and then loads the new content.
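A sketch of that History API routing for modern browsers (loadContent is a placeholder for your own rendering code):

// intercept same-site link clicks and route them with the History API
document.addEventListener('click', function (event) {
  var link = event.target.closest('a');
  if (!link || link.origin !== location.origin) return;   // only handle same-site links
  event.preventDefault();
  history.pushState(null, '', link.pathname);   // update the address bar without a reload
  loadContent(link.pathname);                   // placeholder: fetch and render the content
});

// handle the back/forward buttons
window.addEventListener('popstate', function () {
  loadContent(location.pathname);
});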
I have seen this excellent Firefox extension, Screengrab!. It takes a "picture" of the web page and copies it to the clipboard or saves it to a PNG file. I need to do the same, but with a new web page, from a URL I have in JavaScript. I can open the web page in a new window, but then I need to invoke the extension programmatically, without pressing its control, and have it save the page once the page is fully loaded.
Is it possible?
I am pretty certain that it is not possible to access any Firefox add-on through web page content. This could create privacy and/or security issues within the Firefox browser (as the user has never given you permission to access such content on their machine). For this reason, I believe Firefox add-ons run in an entirely different JavaScript context, thereby making this entirely impossible.
However, as Dmitriy's answer states, there are server-side workarounds that can be performed.
It does not look like Screengrab has any JavaScript API.
There is a PHP solution for Saving Web Page as Image.
If you need to do it from JavaScript (from client side) - you can:
Step 1: Create a PHP server app that does the trick (see the link), and that accepts JSONP call.
Step 2: Create a client side page (JavaScript) that will send a JSONP request to that PHP script. See my answer here; it will help you create such a request. A minimal sketch of such a request follows.
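A minimal JSONP sketch (the PHP endpoint, its parameters, and the callback name are all made up):

// JSONP: inject a <script> tag; the server wraps its JSON response in a call to our callback
function handleScreenshot(data) {
  console.log(data);   // e.g. the URL of the generated image, depending on the PHP script
}

var script = document.createElement('script');
script.src = 'http://example.com/screenshot.php' +
    '?url=' + encodeURIComponent('http://example.com/page-to-capture') +
    '&callback=handleScreenshot';
document.body.appendChild(script);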
I am in the process of developing an online music magazine. We have an HTML5/Flash music player, and this forms a major part of the website. But the site also has a lot of articles and other content. So basically, I want seamless music playback across page loads, but I also want to avoid a complete JavaScript application, because I want all the content to be spider-friendly and indexable by Google.
I use the HTML5 History API with a hashbang (#!) fallback for loading various content within the main page on clicks, and the loaded URLs also point to pages with the content.
For example:
A munimkazia.com/page1.html link on my index page munimkazia.com will load the content from page1.html and insert it. The URL will change to munimkazia.com/#!/page1.html in Firefox and IE, and munimkazia.com/page1.html in Chrome.
Since the href link is munimkazia.com/page1.html, the spider will follow the link and fetch the content.
I have the page set up properly at page1.html, ready for viewing. But now, I have problems.
If I decide to use AJAX loads on this page, the URLs appearing in the browser location bar will not be consistent with the hashbang fallback (http://munimkazia.com/page1.html/#!/page2.html).
If I decide to redirect all clicks to the main container page at http://munimkazia.com and load page2.html, everything will work fine after that, but that page load will interrupt any music playing before it.
Also, I don't want to rewrite all http://munimkazia.com/page1.html to http://munimkazia.com/#!/page1.html, because I want all the content to be present and not fetched and written by javascript for search engines spiders to read.
I am aware that Google has a spec for reading the content from #! URLs, but I want the page to load with the article content for the user even if JS is disabled.
Any ideas/advice/workarounds?
Edit: Those URLs are just examples to explain my point. There is no JavaScript code to fetch pages at munimkazia.com.
Hash-bang #! URLs can be indexed by Google; that's kind of the whole point of them, otherwise people would just use the hash # on its own.
I think the idea is that Google sees the #! URL and converts it into a query string parameter, e.g. example.com/#!/products/123/ipod-nano-32gb becomes example.com/?_escaped_fragment_=/products/123/ipod-nano-32gb, but users still use the hash-bang URL. You program the server to respond to the ?_escaped_fragment_ parameter, while JavaScript users get redirected to the proper #! URL.
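Roughly, the mapping between the two forms looks like this (ignoring the exact percent-escaping rules in Google's spec):

// the pretty URL that users see and share
var pretty = 'http://example.com/#!/products/123/ipod-nano-32gb';

// what Google's crawler requests instead: the fragment moves into a query parameter
var crawlable = pretty.replace(/#!(.*)$/, '?_escaped_fragment_=$1');
// -> 'http://example.com/?_escaped_fragment_=/products/123/ipod-nano-32gb'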
Check out Google's specification here: http://code.google.com/web/ajaxcrawling/docs/getting-started.html
I don't think it's a good idea to use both types of URL, as you'd have two URLs being posted on blogs, Twitter, etc. by users for the same page, and it would also be a nightmare to write the code to handle it reliably. You'd probably have to settle for hash-bangs for now, until the HTML5 History API is more broadly supported.