I am new to extension development. My requirement is to create a simple extension that modifies some response headers and adds some new headers to the HTTP response. I was looking through some add-ons like ReDisposition and InlineDisposition. The former does the same job without using XPCOM, and the latter uses XPCOM. The former also has XUL (GUI) components.
Now, is it possible to modify the response if the extension has no overlay (GUI components)?
You don't need a GUI.
There are several ways to get your code running:
Create a bootstrapped (restartless) add-on, just like ReDisposition (the GUI parts there are not required), and register from your bootstrap.js.
Create an SDK add-on and register from your main.js.
Create an XPCOM component, register it for profile-after-change (run at startup, basically), and register your observer once the component gets loaded.
Or have a stub overlay that just loads a JS code module and registers in the module. (A little hackish for my taste, but anyway.)
Whatever method you use, in the end you'll need to register for and observe http-on-examine-response and friends.
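Whichever registration route you pick, the observer itself looks roughly the same. A minimal sketch (legacy, pre-WebExtensions chrome code; the header values and the "X-Served-By" name are made up for illustration):

```javascript
// Ci is Firefox's usual shorthand for Components.interfaces; guarded here
// so the observer object itself stays plain, testable JS.
var Ci = (typeof Components !== "undefined") ? Components.interfaces : {};

var headerObserver = {
  observe: function (subject, topic, data) {
    if (topic !== "http-on-examine-response") return;
    var channel = subject.QueryInterface(Ci.nsIHttpChannel);
    // Third argument false = replace any existing value instead of merging.
    channel.setResponseHeader("Content-Disposition", "inline", false); // overwrite
    channel.setResponseHeader("X-Served-By", "my-addon", false);       // add new
  }
};

// Registration (Firefox chrome code only, e.g. in bootstrap.js):
// Components.utils.import("resource://gre/modules/Services.jsm");
// Services.obs.addObserver(headerObserver, "http-on-examine-response", false);
// ...and removeObserver() again on shutdown in a restartless add-on.
```
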
See MDN for more documentation (and the firefox-addon wiki).
Other than that, your question is too broad to tell you something less general.
Related
Can IBM Tealeaf be used on a single-page application? Can it also be incorporated with require.js so that it is automatically included once and then picks up all JavaScript events that are fired?
I haven't used Tealeaf before, but our backend team does, and we're wondering if it is possible to combine it with a single page application that is mainly JS driven using require.js.
If so, should it be included on every single template page, just the main page, in our require.js file, or somewhere else?
The Tealeaf UIC creates a single global variable called TLT which you can easily shim. The API is designed to work with single page applications.
If the TLT object exists in the page where your application is loaded (however you got it loaded in there, e.g. with require.js or some other method), you can then use any functionality from the library in your code. You'd probably want to do this where significant events on your front end occur. For example, if you had a React-Redux app with a componentWillUnmount firing at some significant UI action you wanted to track with Tealeaf, you could insert a TLT.flushAll() there to explicitly send a full snapshot to the collector at that instant. If HTML is being dynamically created and destroyed, fire TLT.rebind() after a create or destroy action to tell Tealeaf to re-observe the DOM and account for elements lost or created. I think the TLT API can be found here:
https://www.ibm.com/support/knowledgecenter/TLUIC/UICj2Guide/UIC/UICj2PublicAPIRef/UICaptureJ2PublicAPIReference.html
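For illustration, a hedged sketch of calling those two TLT entry points from app code. The wrapper function names are my own invention; the guards keep the app working if the UIC script hasn't loaded yet:

```javascript
// Call from e.g. componentWillUnmount: push a full snapshot to the
// collector right now, if the Tealeaf UIC global is present.
function notifyTealeafOnUnmount() {
  if (typeof TLT !== "undefined" && TLT.flushAll) {
    TLT.flushAll();
  }
}

// Call after dynamically creating or destroying DOM: ask Tealeaf to
// re-observe the document so new/removed elements are accounted for.
function notifyTealeafAfterDomChange() {
  if (typeof TLT !== "undefined" && TLT.rebind) {
    TLT.rebind();
  }
}
```
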
I just did a proof of concept/demo for a web app idea I had, but that idea needs to be embedded on other pages to work properly.
I'm now done with the development of the demo, but now I have to tweak it so it works from a <script> tag on any website.
The question here is:
How do I achieve this without breaking the host website's stylesheets and JavaScript?
It's a node.js/socket.io/angularjs/bootstrap based app for your information.
I basically have a small HTML file, a few css and js files and that's all. Any idea or suggestions?
If all you have is a script tag, and you want to inject UI/HTML/etc. into the host page, that means that an iframe approach may not be what you want (although you could possibly do a hybrid approach). So, there are a number of things that you'd need to do.
For one, I'd suggest you look into the general concept of a bookmarklet. While it's not exactly what you want, it's very similar. The problems of creating a bookmarklet will be very similar:
You'll need to isolate your JavaScript dependencies. For example, you can't load a version of a library that breaks the host page. jQuery, for example, can be loaded without it taking over the $ symbol globally. But not all libraries support that.
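jQuery's noConflict(true) supports exactly this kind of isolation. A sketch of a loader that grabs a private copy (the script URL is hypothetical; in practice you'd serve your own pinned copy):

```javascript
// Load a private copy of jQuery without clobbering the host page's
// window.$ or window.jQuery.
function loadIsolatedJQuery(callback) {
  var script = document.createElement("script");
  script.src = "https://example.com/widget/jquery.min.js"; // hypothetical URL
  script.onload = function () {
    // noConflict(true) restores window.jQuery as well as window.$,
    // and hands back the freshly loaded copy for private use.
    callback(window.jQuery.noConflict(true));
  };
  document.head.appendChild(script);
}
```
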
Any styles you use would also need to be carefully managed so as to not cause issues on the host page. You can load styles dynamically, but loading something like Bootstrap is likely going to cause problems on most pages that aren't using the exact same version you need.
You'll want your core JavaScript file to load quickly and do as much async work as possible so as not to affect the overall page load time (unless your functionality is necessary up front). You'll want to review content like this from Steve Souders.
You could load your UI via a web service or you could construct it locally.
If you don't want to use JSONP style requests, you'll need to investigate enabling CORS.
You could use an iframe and postMessage to show some UI without needing to do complex wrapping/remapping of the various application dependencies that you have. postMessage lets you send messages telling the listening iframe what to do at any given point, while the code running in the host page moves/manipulates the iframe into position. A number of popular embedded APIs have used this technique over the years; I think Dropbox was using it, for example.
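A minimal sketch of the host-to-iframe direction of that handshake. The widget origin and the {action, payload} message shape are invented for illustration:

```javascript
var WIDGET_ORIGIN = "https://widget.example.com"; // hypothetical

function sendToWidget(iframe, action, payload) {
  // Passing the exact target origin prevents the message from leaking
  // if something else has been loaded into the frame.
  iframe.contentWindow.postMessage({ action: action, payload: payload }, WIDGET_ORIGIN);
}

// Inside the iframe, the widget would listen roughly like this:
// window.addEventListener("message", function (event) {
//   if (event.origin !== expectedHostOrigin) return; // always check origin
//   if (event.data.action === "show") { /* position/render the UI */ }
// });
```
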
I'm creating a single page web application.
I created a basic design for the app structure. This answer about this video was very helpful.
The application contains one HTML page. The JS code will change its content.
The Usher will supply a module according to the URL (domain.com/#list#item1 will return an item module).
The module will use the sandbox to retrieve data from the server (that will use the Application Core for that).
The module will set the page style by passing the sandbox a key-value list, and will set the page HTML the same way.
What do you think about it? Is it decoupled enough?
Short answer: Kind of. It depends on how complex your modules are.
Long answer:
I'm working on one application that respects the Core -> Sandbox Instances -> Modules pattern like you described.
The only unanswered question about my application is this:
"What happens when Module A and Module B have a small UI component that is the same, or almost the same?".
In your case, this might be an accordion on 3 modules out of 5. This accordion might be application-specific, so simply adding a jQuery plugin to the core and exposing it to the modules via the Sandbox just won't cut it.
I ended up with two possible solutions:
1) Use the common functionality as a special type of module that can be requested via the sandbox by other modules. This is the case when just one instance of the UI will be visible at a given time (which might be your case).
2) Use a simple prototype instantiation for my reused object and add it as a dependency for all modules that use it.
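A tiny sketch of option 2, assuming the shared widget is something like that accordion (the name and API are illustrative): each module news up its own instance, so no UI state leaks between modules.

```javascript
// Shared prototype, added as a dependency of every module that needs it.
function Accordion(items) {
  this.items = items;   // section labels
  this.openIndex = -1;  // -1 means everything collapsed
}

// Toggle a section: opening one closes the previously open one.
Accordion.prototype.toggle = function (index) {
  this.openIndex = (this.openIndex === index) ? -1 : index;
  return this.openIndex;
};

// Each module instantiates its own copy.
var listAccordion = new Accordion(["Filters", "Sort"]);
var itemAccordion = new Accordion(["Details"]);
```
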
I am trying to save a couple of web pages using a web crawler. Usually I prefer doing it with Perl's WWW::Mechanize module. However, as far as I can tell, the site I am trying to crawl has a lot of JavaScript on it which seems to be hard to avoid. Therefore I looked into the following Perl modules:
WWW::Mechanize::Firefox
MozRepl
MozRepl::RemoteObject
The Firefox MozRepl extension itself works perfectly. I can use the terminal for navigating the website just the way it is shown in the developer's tutorial, at least in theory. However, I know nothing about JavaScript and therefore am having a hard time using the modules properly.
So here is the page I'd like to start from: Morgan Stanley
For a couple of the firms listed beneath 'Companies - as of 10/14/2011' I'd like to save their respective pages. E.g. clicking on the first listed company (i.e. '1-800-Flowers.com, Inc') calls a JavaScript function with two arguments -> dtxt('FLWS.O','2011-10-14'), which produces the desired new page. That page I'd now like to save locally.
With perl's MozRepl module I thought about something like this:
use strict;
use warnings;
use MozRepl;
my $repl = MozRepl->new;
$repl->setup;
$repl->execute('window.open("http://www.morganstanley.com/eqr/disclosures/webapp/coverage")');
$repl->repl_enter({ source => "content" });
$repl->execute('dtxt("FLWS.O", "2011-10-14")');
Now I'd like to save the produced HTML page.
So again, the code I'd like to produce should visit the HTML pages of a couple of firms and simply save each page. (Here are e.g. three firms: MMM.N, FLWS.O, SSRX.O)
Is it correct that I cannot get around the page's JavaScript functions and therefore cannot use plain WWW::Mechanize?
Following question 1, are the mentioned Perl modules a plausible approach to take?
And finally, if the first two questions can be answered with yes, it would be really nice if you could help me out with the actual coding. E.g. in the above code, the essential part that is missing is a 'save' command. (Maybe using Firefox's saveDocument function?)
The web works via HTTP requests and responses. If you can discover the proper request to send, then you will get the proper response. If the target site uses JS to form the request, then you can either execute the JS, or analyse what it does so that you can do the same in the language that you are using.
An even easier approach is to use a tool that will capture the resulting request for you, whether the request is created by JS or not; then you can craft your scraping code to create the request that you want.
The "Web Scraping Proxy" from AT&T is such a tool. You set it up, then navigate the website as normal to get to the page you want to scrape, and the WSP will log all requests and responses for you. It logs them in the form of Perl code, which you can then modify to suit your needs.
Some time ago, I was reading an article about a library some guy built and how it can:
lazy-load JS
resolve dependencies between JS files (typically encountered when trying to "include" one JS file from another)
include files only once, though specified multiple times, regardless of how they are called (either directly specifying it as a file or specifying it as one of the dependencies)
I forgot to bookmark it, what a mistake. Can someone point me to something which can do the above? I know the DOJO and YUI libraries have something like this, but I am looking for something I can use with jQuery.
I am probably looking for one more feature as well. My site has ASP.NET user controls (reusable server-side code snippets) which contain some JS. Some of it fires right away, while the page is loading, which gives a bad user experience. The Yahoo performance guidelines specify that JS should be at the bottom of the page, but this is not possible in my case, as it would require me to separate the JS and the corresponding server-side control into different files, and maintenance would be difficult. I could definitely put a jQuery document.ready() in my user-control JS to make sure it fires only after the DOM has loaded, but I am looking for a simpler solution.
Is there any way to say "begin executing any JS only after the DOM has loaded" in a global way, rather than writing document.ready within every user control?
Microsoft Research proposed a tool called Doloto. It takes care of rewriting and function splitting, and makes on-demand JS loading possible.
From the site..
Doloto is a system that analyzes application workloads and automatically performs code splitting of existing large Web 2.0 applications. After being processed by Doloto, an application will initially transfer only the portion of code necessary for application initialization. The rest of the application's code is replaced by short stubs -- their actual function code is transferred lazily in the background or, at the latest, on demand on first execution.
OK I guess I found the link
[>10 years ago; now they are all broken]
http://ajaxian.com/archives/usingjs-manage-javascript-dependencies
http://www.jondavis.net/techblog/post/2008/04/Javascript-Introducing-Using-%28js%29.aspx
I also found one more, for folks who are interested in lazy loading/dynamic js dependency resolution
http://jsload.net/
About the lazy-loading scripts thingy: most libraries just add a <script> element to the HTML pointing to the JS file to be "included" (asynchronously), while others, like DOJO, fetch their dependencies using an XMLHttpRequest and then eval the contents, making it work synchronously.
I've used the YUI Loader, which is pretty simple to use, and you don't need the whole library to get it working. There are other libraries that give you just this specific functionality, but I think YUI's is the safe choice.
About your last question, I don't think there's something like that built in. You would have to do it yourself, and it would be similar to using document.ready.
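One way to roll it yourself is a global ready queue that each user control pushes into, so that document.ready appears exactly once on the master page. A sketch (the function names are my own):

```javascript
var pendingInit = [];
var domReady = false;

// Each user control calls this instead of $(document).ready(...).
function onAppReady(fn) {
  if (domReady) { fn(); } else { pendingInit.push(fn); }
}

// Called exactly once, e.g. from a single jQuery $(document).ready()
// at the bottom of the master page; drains the queue in order.
function runPendingInit() {
  domReady = true;
  while (pendingInit.length) pendingInit.shift()();
}
```
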
I did a similar thing in my framework:
I created an include_js(file) function that includes a JS file only if it hasn't been included yet, reading and executing it with a synchronous Ajax call.
Simply put that code at the top of the page that needs the dependencies and you're done!
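A hedged sketch of what such an include_js() might look like, assuming a browser XMLHttpRequest. Synchronous XHR is deprecated and blocks the page; it's shown here only to mirror the idea:

```javascript
var includedFiles = {};

function include_js(url) {
  if (includedFiles[url]) return;  // already included: do nothing
  includedFiles[url] = true;
  var xhr = new XMLHttpRequest();
  xhr.open("GET", url, false);     // false = synchronous request
  xhr.send(null);
  if (xhr.status === 200) {
    (0, eval)(xhr.responseText);   // indirect eval -> runs in global scope
  }
}
```
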