Which Java Web Framework allows Cross-Domain Javascripting (http proxy)?

So just a quick intro: I am starting to explore Vaadin, and it's absolutely perfect. Previously, I was juggling PHP, Perl, Ruby, and jQuery to build rich client web applications. It didn't work out too well, as I burnt out trying to fix cross-browser issues (aka get-it-to-work-on-IE-damn-it), and handling the server side, the client side, and building robust communication between the two tiers involved a lot of code not related to application logic. By the time I burnt out, only a tiny bit of the application logic was implemented.
Vaadin seems like the answer to my problem, as it requires only Java and is built on top of GWT.
However, I am curious how I can incorporate Cross-Domain JavaScripting? Back in the LAMP environment, I had a CGI proxy script that loaded an external URL and injected JS into the proxy-loaded page. I used the CGI proxy script because it rendered the external URL's JavaScript well. Is there a class or package for Java, or a specific Java web framework similar to Vaadin, that makes this possible?
Thank you.

If you want to avoid any kind of proxies, and thereby keep a full context on each 'side', then you should choose easyXDM. To see it in action try http://easyxdm.net/current/example/methods.html
This fully supports all browsers, and has a neat RPC interface that lets you call methods and pass data between the domains.
If you plan to support IE6/7 then you should also try the upcoming version.
Even though the current version is fast (when used with a dependency), the upcoming one is even faster - actually nearly as fast as postMessage in newer browsers!

You can easily implement the functionality yourself.
A proxy for cross-domain JavaScript is really straightforward: you just create a request equivalent to the AJAX request you want and direct it to the other domain.
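In a plain Java web application this can be a small servlet: the browser always talks to your own domain, and the servlet fetches the remote URL server-side and streams it back. The following is a minimal sketch, assuming a hypothetical /proxy mapping and a url query parameter; a real version should whitelist the allowed targets so it cannot be abused as an open proxy, and would also forward POST bodies and headers as needed.

import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Minimal same-origin proxy: the page requests /proxy?url=http://other-domain/...,
// the servlet performs the cross-domain request on the server and copies the
// response straight back to the browser.
@WebServlet("/proxy")
public class CrossDomainProxyServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String target = req.getParameter("url");
        if (target == null || target.isEmpty()) {
            resp.sendError(HttpServletResponse.SC_BAD_REQUEST, "missing url parameter");
            return;
        }

        HttpURLConnection conn = (HttpURLConnection) new URL(target).openConnection();
        conn.setRequestMethod("GET");

        resp.setStatus(conn.getResponseCode());
        if (conn.getContentType() != null) {
            resp.setContentType(conn.getContentType());
        }

        // Copy the remote body through to the client (error handling omitted for brevity).
        try (InputStream in = conn.getInputStream();
             OutputStream out = resp.getOutputStream()) {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = in.read(buffer)) != -1) {
                out.write(buffer, 0, read);
            }
        }
    }
}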

ExtJS has what they call a "ScriptTagProxy" which may or may not be of use to you...
Here are a few more links about this:
http://xant.us/ext-ux/examples/css-proxy/
http://www.extjs.com/forum/showthread.php?17691-Cross-domain-Ext.Ajax-Ext.data.Connection
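ScriptTagProxy relies on the script-tag/JSONP trick: instead of XMLHttpRequest, the library injects a script tag pointing at the remote URL, and the remote server wraps its JSON in a callback function call so the browser simply executes it. If the remote end is yours, the server side of that contract is tiny; here is a rough sketch in Java (the /data mapping, callback parameter name, and payload are assumptions, not anything ExtJS mandates):

import java.io.IOException;
import java.io.PrintWriter;

import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// JSONP endpoint: a client on another domain adds
//   <script src="http://this-domain/data?callback=handleData"></script>
// and receives handleData({...}); which the browser runs as ordinary JavaScript.
@WebServlet("/data")
public class JsonpServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        String callback = req.getParameter("callback");
        // Only allow plain identifiers as callback names to avoid script injection.
        if (callback == null || !callback.matches("[A-Za-z_][A-Za-z0-9_.]*")) {
            resp.sendError(HttpServletResponse.SC_BAD_REQUEST, "missing or invalid callback");
            return;
        }

        String json = "{\"message\": \"hello from the other domain\"}"; // placeholder payload

        resp.setContentType("application/javascript");
        PrintWriter out = resp.getWriter();
        out.print(callback + "(" + json + ");");
    }
}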

Related

analyse a .aspx site with paging with __doPostBack function

I want to analyse some data from a webpage, but here's the problem: the site has further pages, which are loaded with a __doPostBack function.
How can I "simulate" going one page further and analyse that page, and so on?
At the moment I analyse the data with JSoup in Java - but I'm open to using another language if necessary.
A postback-based system (.NET, Prado/PHP, etc.) works in a manner that keeps a complete snapshot of the browser contents on the server side. This is called a pagestate. Any attempt to manipulate it with a client that is not JavaScript-capable is almost sure to fail.
What you need is a JavaScript-capable browser. The easiest solution I found is to use the framework Firefox is written in - XUL - to create such a desktop application. What you do is basically create a desktop application with a single browser element in it, which you can then script from the application itself without restrictions of the security container. Alternatively, you could also use the Greasemonkey plugin to do your bidding. The latter is a bit easier to get started with, but it's fairly limited since it's running on a per-page basis.
With both solutions you then have access to the page's DOM to gather data and you can also fire events (like clicking on a button). Unfortunately you have to learn JavaScript for this to work.
I used an automation library, Selenium, which you can use from a lot of languages (C#, Java, Perl, ...).
For more information on how to get started, this link is very helpful: this.
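For the paging case above, a minimal sketch with Selenium's Java bindings could look like the following; the URL, the row selector, and the text of the "Next" link are hypothetical placeholders, and in practice you may need explicit waits after each postback:

import java.util.List;

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.firefox.FirefoxDriver;

// Drives a real browser so the __doPostBack JavaScript and pagestate handling
// happen exactly as they would for a human user.
public class PostBackScraper {

    public static void main(String[] args) {
        WebDriver driver = new FirefoxDriver();
        try {
            driver.get("http://example.com/list.aspx"); // hypothetical target page

            while (true) {
                // Scrape whatever is interesting on the current page.
                List<WebElement> rows = driver.findElements(By.cssSelector("table.results tr"));
                for (WebElement row : rows) {
                    System.out.println(row.getText());
                }

                // The "Next" link fires __doPostBack; clicking it lets the browser
                // submit the pagestate and load the next page for us.
                List<WebElement> next = driver.findElements(By.linkText("Next"));
                if (next.isEmpty()) {
                    break; // no more pages
                }
                next.get(0).click();
            }
        } finally {
            driver.quit();
        }
    }
}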
As well as Selenium, you can use http://watin.org/

Is it possible to use node.js as a faster, more elegant, database enabled alternative to greasemonkey?

I was using Greasemonkey earlier in the week to automate some calls to a page to scrape some data from a website; this was awkward for two reasons:
It's GUI-based instead of command-line-based.
I had to store all persisted information in JSON, and not directly in a database.
Would it be possible to use node.js as a Greasemonkey alternative, since node.js can store records directly in a database and won't be required to visually load pages the way Greasemonkey does?
Also, I would think that node.js would be easier to work with, since you don't have to re-deploy its scripts to Firefox the way you have to with GreaseMonkey, allowing you to easily use version control on separate scripting projects.
On the other hand using node.js to do GreaseMonkey's job might just be using a hammer to pound in a screw, so I thought I would check here to find out if I am mistaken.
On the other hand using node.js to do GreaseMonkey's job might just be using a hammer to pound in a screw
I would say that the opposite is true; I believe you're using Greasemonkey to do the job of a server-side processing library. Greasemonkey runs in the browser and is designed to modify your web experience by running scripts on the pages you visit.
Indeed, I believe Node.js would be very well suited to this task. With libraries like jsdom and node-jquery, you can easily do JavaScript parsing over the DOM. You may also wish to take a look at node.io, a "distributed data scraping and processing framework." Finally, you may look into non-Node (but still JavaScript) based tools, such as PhantomJS and CasperJS, which can do scraping, DOM manipulation, screenshots, and more.
The question is a bit of a non sequitur.
Greasemonkey is for clients to tweak their individual browsing experience, client-side.
Node.js is for developers to deliver applications to the masses (hopefully), server-side.
For scraping data, in an automatable way, use Node.js or some server-side library (Python works well).
For "Mashups" of webpages that you browse, use Greasemonkey.

GWT server with get() and post() built on client end

This is more of a curiosity really, to see if some one has done anything similar, or if at all it is possible.
I'm working on a project that will receive notifications from external sources. Now I could go about doing this by having the notifications come to my server and having a comet setup between my client and server.
BUT
I was wondering if I could write server logic into my client and listen for notifications from external sources. Immediately one issue I see is that external sources would need a callback URL etc., which I don't know if you could provide from the client side (unless one could use the IP address in that way).
As you can see, this is more about ideas and discussion of whether such a thing is possible; it is somewhat inspired by P2P models, whereby you wouldn't be mediating things through your central server.
Thanks in advance!
GWT compiles (nearly) Java source into JavaScript, so compiled GWT apps can't do anything that traditional JavaScript running in the browser cannot do. The major advantage of bringing Java into the picture isn't automatic access to any and all JVM classes, but the ability to maintain Java sources, which tend to be easier to refactor, test, and keep consistent with the server, and to compile that statically defined code into JavaScript, performing all kinds of optimizations at compile time that aren't possible for normal JavaScript.
So no, while you can have some code shared by the client (in a browser) and the server (running in a JVM), you can't run Tomcat/Jetty/etc in the browser just by using GWT to compile the java code into JS.
As you point out, even if this was possible, it would be difficult to get different clients to talk back and forth, without also requiring that the browsers can see and connect at will to one another. BitTorrent and Skype have different ways for facilitating this, and currently browsers do not allow anything like this - they are designed to make connections to other servers, not to allow connections to be made to them.
Push notifications from the web server to the browser are probably the best way forward, either through wrapping comet or the like, or through an existing GWT library like Atmosphere (see https://github.com/Atmosphere/atmosphere/tree/master/samples/gwt-demo for a demo).
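If you end up rolling the comet part yourself rather than using a library like Atmosphere, the client half is essentially a long-poll loop. Below is a minimal sketch using GWT's RequestBuilder, with a hypothetical /notifications endpoint that the server holds open until something happens:

import com.google.gwt.http.client.Request;
import com.google.gwt.http.client.RequestBuilder;
import com.google.gwt.http.client.RequestCallback;
import com.google.gwt.http.client.RequestException;
import com.google.gwt.http.client.Response;
import com.google.gwt.user.client.Timer;

// Client-side long polling: keep one request open to the server, handle the
// payload when it answers, then immediately open the next request.
public class NotificationPoller {

    private static final String ENDPOINT = "/notifications"; // hypothetical servlet URL

    public void start() {
        poll();
    }

    private void poll() {
        RequestBuilder builder = new RequestBuilder(RequestBuilder.GET, ENDPOINT);
        try {
            builder.sendRequest(null, new RequestCallback() {
                @Override
                public void onResponseReceived(Request request, Response response) {
                    if (response.getStatusCode() == Response.SC_OK) {
                        onNotification(response.getText());
                    }
                    poll(); // re-open the connection right away
                }

                @Override
                public void onError(Request request, Throwable exception) {
                    // Back off briefly so a temporarily unreachable server isn't hammered.
                    new Timer() {
                        @Override
                        public void run() {
                            poll();
                        }
                    }.schedule(5000);
                }
            });
        } catch (RequestException e) {
            // Could not even issue the request; log and give up or retry later.
        }
    }

    private void onNotification(String payload) {
        // Application-specific handling of the pushed data goes here.
    }
}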

How does disqus work?

Does anyone know how disqus works?
It manages comments on a blog, but the comments are all held on third-party site. Seems like a neat use of cross-site communication.
The general pattern used is JSONP
It's actually implemented in a fairly sophisticated way (at least on the jQuery site): they defer the loading of the disqus.js and thread.js files until the user scrolls to the comment section.
The thread.js file contains JSON content for the comments, which is rendered into the page after it's loaded.
You have three options when adding Disqus commenting to a site:
Use one of the many integrated solutions (WordPress, Blogger, Tumblr, etc. are supported)
Use the universal JavaScript code
Write your own code to communicate with the Disqus API
The main advantage of the integrated solutions is that they're easy to set up. In the case of WordPress, for example, it's as easy as activating a plug-in.
Having the ability to communicate with the API directly is very useful, and offers two advantages over the other options. First, it gives you as the developer complete control over the markup. Secondly, you're able to process comments server-side, which may be preferable.
It looks like it uses the easyXDM library, which picks the best available way for the current browser to communicate with the other site.
Quoting Anton Kovalyov's (former engineer at Disqus) answer to the same question on a different site that was really helpful to me:
Disqus is a third-party JavaScript application that runs in your browser and injects itself on publishers' websites. These publishers need to install a small snippet of JavaScript code that makes the first request to our servers and loads initial JavaScript loader. This loader then creates all necessary iframe elements, gets the data from our servers, renders templates and injects the result into some element on the page.
As you can probably guess there are quite a few different technologies supporting what seems like a simple operation. On the back-end you have to run and scale a gigantic web application that serves millions of requests (mostly read). We use Python, Django, PostgreSQL and Redis (for our realtime service).
On the front-end you have to minimize your payload, make sure your app is super fast and that it doesn't break in extremely hostile environments (you will be surprised how screwed up publisher websites can be). Cross-domain communication—ability to send messages from hosting website to your servers—can be tricky as well.
Unfortunately, it is impossible to explain how everything works in a comment on Quora, or even in an article. So if you're interested in the back-end side of Disqus just learn how to write, run and operate highly-scalable websites and you'll be golden. And if you're interested in the front-end side, Ben Vinegar and myself (both front-end engineers at Disqus) wrote a book on the topic called Third-party JavaScript (http://thirdpartyjs.com/).
I'm planning to read the book he mentioned, I guess it will be quite helpful.
Here's also a link to the official answer to this question on the Disqus site.
Short answer? AJAX. You get your own URL, e.g. "site.com/?comments=ID", included via JavaScript... but for real-time updates like that you would need a polling server.
I think they keep the content on their site and your site only sends and receives the data to/from Disqus. Now I wonder what happens if you decide that you want to bring your commenting in house without losing all existing comments! How easily could you get to your data, I wonder? They claim that the data belongs to you, but they have control over it, and there is not much explanation on their site about this.
I often leave comments on the Disqus platform. Sometimes a comment seems to be removed once you refresh the page, and sometimes it isn't. I think the ones that are removed are held for moderation without it saying so.

Can you use Silverlight with AJAX without any UI element?

I know you can just use CSS to hide the DIV or Silverlight Plugin, but is there a way to instantiate a Silverlight Component/App using JavaScript that doesn't show any UI element at all?
There is a lot of great functionality in Silverlight, like multithreading and compiled code, that could be utilized by traditional Ajax apps without using the XAML/UI layer of Silverlight at all.
I would like to just use the standard HTML/CSS for my UI layer only, and use some compiled .NET/Silverlight code in the background.
Yes you can, and some of the reasons you give make perfect sense. I did a talk on the HTML bridge at CodeCampNZ some weeks back, and have a good collection of resources up on my blog.
I also recommend checking out Wilco Bauwers' blog for lots of detail on the HTML bridge.
Some other scenarios for non visual Silverlight:
Writing new code in a managed language (C#, Ruby, JScript.NET, whatever) instead of native (interpreted) JavaScript.
Using OpenFileDialog to read files on the client, without round-tripping to the server.
Storing transient data securely on the client in isolated storage.
Improving responsiveness and performance by executing work in the background through a BackgroundWorker or by using ordinary threads.
Accessing cross-domain data via the networking APIs.
Retrieving real-time data from the server via sockets.
Binding data by re-using WPF's data-binding engine.
Yes. I think this is particularly intriguing when mixed with other dynamic languages -- but then, I'm probably biased. :)
Edit: But you'd need to use the managed Javascript that's part of the Silverlight Dynamic Languages SDK and not the normal Javascript that's part of the browser.
Curt, using managed JavaScript would still require you to have some Silverlight/XAML display layer visible on the page, correct? Is there a way to prevent any Silverlight UI element from being displayed at all?
