Protocol buffers for JavaScript?

Is there a way to do protocol buffers in JavaScript?
Why for .js?
If you think about sciencey requirements for a moment, situations pop up where you might want to send a large block of data to the client. With CRUD-style applications it doesn't really matter much what you use. With sciencey stuff it does matter (at least I think it does).
Tradeoffs:
Protocol Buffers balance compactness and serialize/deserialize speed well.
Text-based protocols (XML/JSON) have a larger message size... but with JavaScript I wonder which is more effective overall.
reference:
code.google.com/p/protobuf-plugin-closure
Google Protocol Buffers or something similar for .net/javascript
https://github.com/sirikata/protojs
Google Protocol Buffers - JavaScript
http://www.vitaliykulikov.com/2011/02/gwt-friendly-protocol-buffers.html
http://benhakala.blogspot.com/2010/05/converting-google-protocol-buffers-to.html (alludes to google maps possibly using protobufs)
Additional references provided by the community (see the answers below for more context):
https://github.com/dcodeIO/ProtoBuf.js
http://blog.ltgt.net/exploring-using-protobuf-in-the-browser/
http://blog.ltgt.net/using-protobuf-client-side-with-gwt
http://code.google.com/p/protobuf-gwt/

Google makes heavy use of Protocol Buffers in JS (GMail, etc.) through their Closure Library, generating JS code with a modified (unfortunately non-open-sourced) protoc (it would probably have to be ported to a protoc extension before being open-sourced).
Apache Wave (whose client webapp is built with GWT) also uses Protocol Buffers for its communications with the server, generating Java code by reflecting on the Java classes produced by protoc (this is the PST, aka protobuf-stringtemplate, subproject).
Previously, Wave was using protostuff (and I don't know why they switched to their own solution, I suspect PST is derived from what the original Google Wave was using, and protostuff was only an intermediate step during the move to open-source).
As a side note, I started exploring using Protocol Buffers on the browser side a while ago: http://blog.ltgt.net/exploring-using-protobuf-in-the-browser/ & http://blog.ltgt.net/using-protobuf-client-side-with-gwt with some almost-working code at http://code.google.com/p/protobuf-gwt/ that you might want to resurrect.
Finally, there's work underway to make GWT RequestFactory proxies compatible with the server-side Java classes generated by protoc (and you could use a protoc extension or a similar approach to Wave's PST to generate your RequestFactory proxies). It should already be possible, provided you use builders all the way on the server-side (which is not quite how the Protocol Buffers Java API was designed).

Historically, JavaScript made working with binary data a pain, which probably in part explains the relative lack of tooling - but with typed arrays it could well be a lot easier now. I kinda agree that if you have to transfer the same volume of data (via some format), using less bandwidth is a plus - but before embarking on anything you'd need to check that bandwidth/processing is an actual bottleneck (and if bandwidth: have you tried gzip/deflate first?).
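For illustration, here's a small sketch of how typed arrays and DataView make binary parsing tolerable in the browser; the URL and the field layout are assumptions made up for this example:

```javascript
// Fetch a binary payload and read two fixed fields out of it.
// The endpoint and the byte layout are hypothetical.
var xhr = new XMLHttpRequest();
xhr.open("GET", "/data.bin", true);
xhr.responseType = "arraybuffer";
xhr.onload = function () {
  var view = new DataView(xhr.response);
  var id = view.getUint32(0);      // bytes 0-3, big-endian by default
  var value = view.getFloat32(4);  // bytes 4-7
  console.log(id, value);
};
xhr.send();
```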
I'm a fan of protobuf - and I'd happily see stronger browser-side tooling for it, but JSON is so ubiquitous that you'd need a compelling reason to challenge the status quo. Also: think "JSONP".

I have been looking for protobuf for javascript. There is a project here:
https://github.com/dcodeIO/ProtoBuf.js
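For anyone landing here, a minimal sketch of what ProtoBuf.js usage looks like. This uses the protobufjs 6.x API, which differs from earlier versions of the library, and the message definition is a made-up example:

```javascript
// npm install protobufjs
const protobuf = require("protobufjs");

// A hypothetical message type, defined inline for the sketch.
const source = `
  syntax = "proto3";
  message DataPoint {
    double x = 1;
    double y = 2;
  }`;

const DataPoint = protobuf.parse(source).root.lookupType("DataPoint");

// Encode to a compact binary Uint8Array, then decode it back.
const buffer = DataPoint.encode(DataPoint.create({ x: 1.5, y: 2.5 })).finish();
console.log(buffer.length);            // a handful of bytes
console.log(DataPoint.decode(buffer)); // DataPoint { x: 1.5, y: 2.5 }
```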

Related

Javascript and Scientific Processing? [closed]

Matlab, R, and Python are powerful but either costly or slow for some data mining work I'd like to do. I'm considering using JavaScript for its speed, its good visualization libraries, and the ability to use the browser as an interface.
The first question I faced is the obvious one for scientific programming: how do I do I/O to data files? The second: client-side or server-side? The last: can I make something that is truly portable, i.e. put it all on a USB stick and run it from that?
I've spent a couple of weeks looking for answers. Server2go seems to address the client/server needs, which I think means I can get data to and from the programs on the client side. Server2go also allows running from a USB stick. The data files I work with are usually XML, and there seem to be several XML-to-JSON converters for JavaScript.
However, after all the looking around, I'm not sure if my approach makes sense. So before I commit further, any advice/thoughts/guidance on Javascript as a portable tool for scientific data processing?
I have to agree with the comments that JavaScript is not a good fit for scientific processing. However, you know your requirements best; maybe you have already found useful libraries that do what you need. Just be aware that you'll have to implement all the logic yourself: there is no built-in handling of complex numbers, or matrices, or integrals, or... Usually programmer time is far more valuable than machine time. Personally, I'd look into compiled languages, but only after creating a first version in whatever language I like most and finding it too slow.
Assuming that JavaScript is the way to go:
Data I/O
I can think of three options:
Sending and receiving data with ajax to a server
Seems to be the solution you've found with Server2go. It requires you to write a server back end, but that can be kept quite simple. All it really needs to do is read and write files in response to your client-side application.
Using a non-browser implementation of v8 which includes file I/O
For instance Node.js. You could then avoid the need for a server and simply use a command-line interface, and all code will be JavaScript (see the sketch at the end of this section). Other than that, it is roughly equivalent to the first option.
Creating a file object using the file API which you ask the user to save or load
This is the worst option in my opinion, as user interaction is required. It would avoid the need for a server; your application could be a simple HTML file that loads all data files with ajax requests. You'd have to start Chrome with a special switch to allow ajax requests with the file:// protocol, as described here.
All three options work around the same limitation: you can't do file I/O in browser JavaScript. Browsers cannot allow arbitrary web code to do arbitrary file I/O; the security implications would be horrendous. Each option is a different way around that.
The first communicates with a server that does the file I/O for the client.
The second uses "special" implementations of JavaScript that run outside the browser's security constraints, so those implications don't apply. But it means you'll have to look up how file I/O is done in the particular implementation you use; it's not part of JavaScript itself.
The third leaves the file I/O under the user's control.
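A minimal sketch of option 2 above, doing the file I/O with Node.js; the file names and the JSON structure are assumptions for illustration:

```javascript
// Read a data file, crunch it, write the results - all from the
// command line, no server involved. Names here are hypothetical.
var fs = require("fs");

var data = JSON.parse(fs.readFileSync("measurements.json", "utf8"));

// ...the actual analysis would go here...
var results = data.map(function (d) { return d.value * 2; });

fs.writeFileSync("results.json", JSON.stringify(results));
```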
Interface
Even if you don't use JavaScript to do the actual processing, which so far is the consensus, there is nothing stopping you from using a browser as the interface or JavaScript libraries for visualisation. That is something JavaScript is good at.
If you want to interactively control your data mining tool, you will need a server that can control the tool. Server2go should work, or the built-in server in Node.js if you use that, or... If you don't need interactive control of the data tool (that is, you first generate the processed data, then look at it), a server can be avoided by using the file:// protocol and JSONP. But really, avoiding a server shouldn't be a goal.
I won't go into detail about interface issues, as there is nothing specific to say and very nearly everything that has been written about javascript is about interface.
One thing, do use a declarative data binding library like Angular.js or Knockout.js.
JavaScript speed is heavily overrated. This is a Web 2.0 myth.
Let me explain this claim a bit (and don't just downvote me for saying something you do not want to hear!)
Sure, JavaScript V8 is a quite highly optimized VM. It does beat many other scripting languages in naive benchmarks.
However, it is a very limited-scope language. It is meant for the "ADHD world" of the web. It is best-effort, but it may just fail, and you have few guarantees about things completing, or completing on time.
Consider for example MongoDB. At first it seems to be good and fast and to offer a lot, until you see, for example, that its MapReduce is single-threaded only and thus really slow. All that glitters is not gold!
Now look at libraries relevant to data mining, such as BLAS: basic linear algebra, math operations and such. CPU manufacturers like Intel and AMD all offer versions optimized for their CPUs. This is an optimization that requires detailed understanding of the individual CPUs, way beyond the capabilities of our current compilers. The libraries contain optimized code paths for various CPUs, all essentially doing the same thing.
And for these operations, using an optimized library such as BLAS can easily yield a 5-20x speedup; at the same time, matrix operations, which are often O(n^2) or O(n^3), will dominate your overall runtime.
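To make the point concrete, this is the kind of triple loop that a tuned BLAS replaces with blocking, SIMD and multithreading; plain JavaScript like this has no way to reach those machine-level facilities:

```javascript
// Naive O(n^3) multiply of two n-by-n matrices stored row-major in
// flat typed arrays. Correct, but far from what optimized BLAS achieves.
function matmul(a, b, n) {
  var c = new Float64Array(n * n);
  for (var i = 0; i < n; i++) {
    for (var k = 0; k < n; k++) {
      var aik = a[i * n + k];
      for (var j = 0; j < n; j++) {
        c[i * n + j] += aik * b[k * n + j];
      }
    }
  }
  return c;
}
```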
So a good language for data mining will let you go all the way to machine code!
Python's SciPy and R are good choices here. They have the optimized libraries inside, easily accessible, while at the same time letting you do the wrapper work in a simpler language.
Have a look at this programming language benchmark:
http://benchmarksgame.alioth.debian.org/u32/which-programs-are-fastest.html
Pure JavaScript has a high variance, indicating that it can do some things fast (mostly regular expressions!) and others much slower. It can clearly beat PHP, but it will just as clearly be beaten by C and Java.
Multithreading is also important for modern data mining. Few systems today have just a single core, and you do want to make use of all of them. So you need libraries and a programming language with a powerful set of multithreading operations. This is actually why Fortran and C are losing popularity here. Other languages such as Java are much better in this respect.
Although this discussion is a bit old and I am not a JavaScript guru by any stretch of the imagination, I find the arguments above doubtful as to JavaScript lacking the processing speed or the capabilities for advanced math operations. WebGL is a JavaScript API for rendering advanced 2D and 3D graphics which relies heavily on advanced math operations. I believe the capabilities are there from a technical point of view; what is lacking is good libraries for statistical analysis, natural language processing and the other predictive analytics included in data mining.
WebGL is based on OpenGL, which in turn uses libraries like BLAS (library info here).
Advances like Node.js and V8 make it technically possible. What is lacking are libraries like those found in R and Scilab to do the same operations.

How to use JavaScript to stream parts of a file?

Given a really large (3+ GB) binary file, is there any way that I could stream, from client to server, only portions of the file using JavaScript, given that I know what byte range of the file I want to receive?
I have a Ruby on Rails application that needs to grab specific portions of a file from the client. As one user has stated I could do this using Java.
Edit: After some reading, it appears that slicing a file via the HTML5 File API may be the best bet (sketch below). http://www.html5rocks.com/en/tutorials/file/dndfiles/
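A hedged sketch of that HTML5 approach: slice a byte range out of a user-selected file and upload only that slice. The byte range and the upload URL are assumptions for illustration:

```javascript
// When the user picks a file, cut out bytes 1000-1999 and POST just
// that chunk. Blob.slice's end index is exclusive.
var input = document.querySelector("input[type=file]");
input.addEventListener("change", function () {
  var chunk = input.files[0].slice(1000, 2000); // bytes 1000-1999
  var xhr = new XMLHttpRequest();
  xhr.open("POST", "/upload_chunk", true);      // hypothetical endpoint
  xhr.send(chunk);
});
```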
The basic answer is yes, assuming your web server supports it (which many do).
You can use the Range HTTP header to request only a part of the file (e.g. Range: bytes=1000-2000). Whether this works for you depends heavily on what you’re trying to accomplish — more information would help.
See this answer for a discussion on using it.
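For the download direction, a minimal sketch of such a ranged request; the URL and byte range are made up for illustration, and the server must support range requests:

```javascript
// Ask the server for bytes 1000-1999 only. A server that honors the
// Range header replies with 206 Partial Content and just those bytes.
var xhr = new XMLHttpRequest();
xhr.open("GET", "/bigfile.bin", true);
xhr.setRequestHeader("Range", "bytes=1000-1999");
xhr.responseType = "arraybuffer";
xhr.onload = function () {
  console.log(xhr.status, xhr.response.byteLength); // 206, 1000
};
xhr.send();
```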
No, not really (at least not right now). The file handling capabilities exposed to JavaScript are not powerful enough to do anything useful client-side when processing files to send back to the server (including things like taking only part of a file). There are proposed W3C specs for better client-side file handling in JavaScript, but none of the major browsers implement them to a sufficient level to really handle this case quite yet.
I'm currently working on a project with similar needs, and the only options we found when we looked into this was to either use Flash, or to use Java. Since we are much more comfortable with Java than flash, we went that route.
We are currently using Groovy and the Griffon framework, as well as Grails for the server-side pieces. Griffon has been great because it frees us from the hassles of desktop vs. webstart vs. applet, and since it's built on Groovy, it can leverage the Swing DSLs, so writing Swing is much less painful.

Why do web applications send HTML over the wire?

This question pertains to web applications. I have very little web app development experience, so might be missing some very obvious points/issues. Please point them out.
As I understand it, in most web applications a web server sends HTML over the wire to a client (browser). This happens every time an HTTP request is made. I feel this is very wasteful of bandwidth.
1) Since browsers can run JavaScript, why don't we just send a JavaScript program which can generate the webpage's HTML content (which the browser then renders)?
2) Further, a browser might cache the JavaScript program, and next time the server need only send the data. The protocol might involve the browser sending the "program version" it has.
Consider the example of a relatively simple website, Hacker News [http://news.ycombinator.com]. Let us separate the data (30 posts + their metadata) from its presentation. Assuming 1) above, the server can just send the data (say in JSON) plus a JavaScript program to generate the HTML. This gist shows the idea. The data for the 30 posts is in JSON [http://www.json.org/js.html] format. For this particular example the data transferred is cut in half (size of data + JavaScript / size of HTML). Further, if browsers can do 2) above, the data transferred on each visit drops to a quarter (size of data / size of HTML). [Note: this analysis ignores compression; gzip/deflate is very successful in reducing the size of HTML. But isn't prevention better than cure?]
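A minimal sketch of the idea; the endpoint and the field names are made up for illustration:

```javascript
// The server sends only JSON; a cached client-side script builds the
// HTML. Endpoint "/api/posts" and fields url/title/points are hypothetical.
var xhr = new XMLHttpRequest();
xhr.open("GET", "/api/posts", true);
xhr.onload = function () {
  var posts = JSON.parse(xhr.responseText);
  var html = "";
  for (var i = 0; i < posts.length; i++) {
    html += "<li><a href=\"" + posts[i].url + "\">" + posts[i].title +
            "</a> (" + posts[i].points + " points)</li>";
  }
  document.getElementById("posts").innerHTML = html;
};
xhr.send();
```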
I see at least the following advantages of this:
* For most web pages, it will reduce the size of data transferred over the wire.
* Forces web apps to separate data from its presentation.
Disadvantages might include: more complex browsers, and the time to run the JavaScript program to generate the HTML (though this might be offset by the reduction in data size).
Now my question is - why are web applications not developed this way, or, why do web applications send HTML over the wire? Surely the web server (sending out HTML) doesn't care about HTML at all, so why should it, first, generate it, and then send it over the wire?
There are a few reasons, some of them historical. This is by no means a complete list, just some of my experiences:
HTML predates JS, and a lot of scripts and libraries predate JS
Older browsers (think IE <= 6) had rubbish, inconsistent JS engines; their rendering engines were much more consistent in how they treated HTML. So many more libraries and scripts predate consistent JS.
It is a nightmare to debug applications written as you suggest if they are not constructed right (we have one at my work; it takes 30 minutes to find where a piece of HTML is actually generated).
It is a lot more work to do it right - why not use templates or static docs or something much simpler
It's not really a problem - HTML compresses really well.
What you suggest is done - it's called AJAX (OK, so AJAX is more general than this, but you all know what I mean).
It simply doesn't work for most plain-text user agents, including those used by most search engines. If this page is serving most of your content, it's generally a good idea to make it easy for Google to parse.
Well, the obvious reason why this is the case is that JavaScript wasn't around when we started sending HTML around, and HTML was an improvement over sending around plain-text documents.
The reason we don't do this now: we eschew complex solutions to problems that aren't really problems.
Average internet connections download nearly 1 MB per second, and web browsers are quite adept at parsing and starting to render HTML before it has even finished downloading. They're also great at parallelizing the downloading of resources on the page. If we want to save a few bytes at the cost of some compute cycles, we gzip content before sending it. Problem solved.
And for the record, we do this with AJAX in complex webpages (check out GitHub's source browsing for a great example of how awesome this can be).
What you suggest can, and is, done. Remember, web pages used to be static documents. Full blown web-based applications are a relatively recent idea.
I might also suggest that it isn't necessarily more efficient, especially when your pages are sent gzipped.
What you suggest is basically what a JavaScript full stack framework like ExtJS does. You can create rich, data intensive applications without writing any HTML -- well, only enough to reference the necessary .js libraries. The complex DOM needed for layouts, grids, forms etc is all created by the framework.
The simple answer is that HTML is older. Why is C99 not fully implemented with a lot of compilers? They figure 1989 is new enough for them. Also, JavaScript exercises a lot more control over people's browsers than they seem to want. Conditional statements and encoded data pose a security concern, and some people want to keep that can of worms closed to begin with. True, HTML is a very inefficient markup, but the size is insignificant compared to the images you download from the internet. That favicon takes up as much data as the page itself, and it's only 16 pixels across.
A good reason that the server-side code of a web application might do lots of HTML template work on the server side is that in many server environments it's not made easy to bundle up server-side data structures (object graphs) for easy delivery to the client. There may be information kept in server-side data structures that really shouldn't be delivered out to the client. Thus in order to send out a "pure" data-only response, the server would have to trim off sensitive data before delivering out the JSON. That's not an unsolvable problem, but I don't know of many server frameworks that facilitate a solution.
The server has direct, unfettered access to the database and to everything else that makes an application work: user preferences, history, account details, system settings, etc. To build an application that's client-centric for rendering purposes would mean concocting ways of keeping all that information intact and up-to-date on the client. For a lot of applications, that might not be terribly easy.
Finally, it's only relatively recently that it would make sense to trust a browser to provide a stable enough platform for building a long-lived "application environment" as a continually-updating web page. By building a web app such that pages are sometimes completely reloaded, there are lots of little "reboots". That's a cheap and dumb way of keeping a lid on at least some kinds of memory leaks.
Most implementations of sites with heavy JavaScript use won't start executing until the DOM has fully loaded; then every page shows a 'loading screen' once the page wrapper has downloaded but none of the content has.
Also, do remember that not all users have Javascript enabled, and not all browsers support high-level Javascript (think mobiles).
I would send HTML in a response if I wanted my application to work without Javascript. I would write HTML rendering code in my server-side language (most of the time not Javascript), which could then be used for two purposes: serving whole HTML pages, and serving bits of HTML in response to XHRs.
If the Javascript code is restricted to things like reporting UI events and replacing innerHTML with server-generated code, I don't have to duplicate any of my application logic across languages/frameworks. This duplication problem is one of the reasons why server-side Javascript is getting people excited.

JavaScript on the server-side like PHP

I'm now thinking of writing my server-side code in JavaScript and doing everything in it, but I want to know about its security and flexibility compared to PHP.
I also want to know if it can be successfully used to develop things like forum boards and full websites, as PHP can.
Javascript is just now starting to get some presence on the server, with things like ServerJS and nodeJS, but right now, you would probably be best off using PHP for your server side code, and javascript for client-side beautification.
The question is very, very broad. Interpreting it as "can I use Javascript on the server":
Fundamentally, sure, Javascript is a very powerful language and so you can do development in it server-side just like you can client-side (and if you do client-side scripting as well, you get some definite reuse benefits using Javascript on the server).
For Apache systems, there's the v8cgi project (a FastCGI Javascript plug-in with connectors, using Google's freaky-fast V8 engine).
On Microsoft-based systems, IIS supports Javascript (JScript) on the server out of the box (I use that all the time), which has access to all of the ActiveX stuff (e.g., for talking to databases, dealing with the file system, etc.).
If your server framework is JVM-based, there's Rhino, which is Javascript for the Java platform and has access to all (or nearly all) of the libraries available for Java — e.g., a huge ecosystem of libraries and plug-ins.
Aside from v8cgi, there are a couple of other projects built on Google's V8 engine.
There's a place that does a full stack for you called chromeserver (I don't know what their backend is; I'm not going to infer from the name).
Paul mentioned ServerJS and NodeJS.
There's the whole CommonJS project.
Etc. etc. etc. There's quite a list on Wikipedia.
Arguing against, there's a very rich ecosystem built around PHP. Unless you're using something like Rhino for the Java platform or JScript on IIS (because of the ecosystems they leverage), you may find that you don't have nearly that ecosystem available to you when developing in Javascript for the server. I mean, if you're looking for pre-built forum or wiki software (for example), let's just say you can't swing a dead cat without finding one based on PHP, and the same cannot be said of Javascript on the server.
The way they are usually used, PHP and JavaScript run in entirely different worlds, and are not really comparable. (There is a server-side version of JavaScript but it's fair to say it's not especially widespread yet, and doesn't run on standard web hosting.)
The security issues you are going to encounter in JavaScript (on the browser) side are very different from what you have to look out for in PHP.
I want to know too, if it can be successfully used to develop things like forum boards, full web-sites and things like this, as PHP does.
No, not with client-side Javascript. For dynamic applications, you will always need some server-side language backing it, be it PHP or some other language like ASP, Python, Ruby, Perl....
To replace PHP with Javascript, you need server-side Javascript and there is a lot happening on that front. Mozilla’s Rhino runs Javascript atop the JVM and it seems Google is also working on its own server side Javascript framework. The most popular in-production implementations are:
Helma: several active projects use it; runs on Jetty & Rhino and lets developers leverage the power of the JVM; has its own object-oriented MVC framework
Project Phobos: runs on GlassFish & Rhino and lets developers leverage the power of the JVM; includes plug-ins for NetBeans and integrates with the jMaki web UI framework
JSSP: A very simple server side framework, a lot like classic ASP, JSP and PHP
Aptana’s Jaxer showed a lot of promise, especially by bringing the DOM to the server side, but the project seems dead now. From what I understand, node.js is not a server-side Javascript framework in the same sense as Helma and Phobos. Instead it can be used for writing event-driven servers in Javascript (for example: writing your own web server).
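For illustration, a minimal event-driven web server in Node.js, the kind of thing referred to above:

```javascript
// A complete HTTP server in a few lines; the port is arbitrary.
var http = require("http");

http.createServer(function (req, res) {
  res.writeHead(200, { "Content-Type": "text/plain" });
  res.end("Hello from server-side JavaScript\n");
}).listen(8080);
```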
Yes, my site is written in Node.js, using websvr. It's Java-style, with filters and handlers, hosted on a Debian OS.
This is slightly off-topic, but it may actually get to the core of your question:
if you want to use only one language for web applications, you may wanna have a look at Haxe.
It is a cross-platform language, that (among other targets) compiles to JavaScript and PHP source as well as NekoVM bytecode. For server-side JavaScript, there are NodeJS bindings.
This way you are not bound to a specific platform. The Neko and PHP APIs are largely compatible, so you can deploy on both platforms, having the option to choose Neko's speed and persistence or PHP's ease of deployment. Please note, however, that the PHP output has a little overhead, although common optimizers such as eAccelerator will make this barely noticeable.
Haxe is significantly less forgiving than both JavaScript and PHP. This makes it harder to learn, but it is a much safer, more robust and, in the end, more productive tool.
In a word: no. Javascript is a client-side language. In order to do the things that you are describing, you need a server-side language such as PHP.
EDIT: OK, technically it is possible to implement Javascript in other areas besides the browser, but this is not very common.
5 YEAR EDIT: Well, 5 years later, this answer obviously is not accurate, with the popularity of things like node.js. Let that be a testament to how quickly things can change!
PHP and JavaScript are two different languages that do two different things. One cannot replace the other. You are most likely going to use a combination of the two. JavaScript for client-side stuff. PHP for server-side stuff.

Building Standalone Applications in JavaScript

With the increased power of JavaScript frameworks like YUI, JQuery, and Prototype, and debugging tools like Firebug, doing an application entirely in browser-side JavaScript looks like a great way to make simple applications like puzzle games and specialized calculators.
Is there any downside to this other than exposing your source code? How should you handle data storage for this kind of program?
Edit: yes, Gears and cookies can be used for local storage, but you can't easily get access to files and other objects the user already has around. You also can't save data to a file for a user without having them invoke some browser feature like printing to PDF or saving page as a file.
I've written several applications in JS, including a spreadsheet.
Upside:
great language
short code-run-review cycle
DOM manipulation is great for UI design
clients on every computer (and phone)
Downside:
differences between browsers (especially IE)
code base scalability (with no intrinsic support for namespaces and classes)
no good debuggers (especially, again, for IE)
performance (even though great progress has been made with Firefox and Safari)
You need to write some server code as well.
Bottom line: Go for it. I did.
Another option for developing simple desktop-like applications or games in JavaScript is Adobe AIR. You can build your app in either HTML + JavaScript or Flash/Flex, or a combination of both. It has the advantage of being cross-platform (actually cross-platform: Linux, OS X, and Windows, not just Windows and OS X).
Heck, it may be the only time in your career as a developer that you can write a web page and ONLY target ONE browser.
SproutCore is a wholly JavaScript-hosted application framework, borrowing concepts particularly from Cocoa (such as KVO) and Ruby on Rails (such as using a CLI generator for your models, views and controllers). It includes Prototype, but builds plenty of stuff such as sophisticated controls on top of that. Its Photos demo is arguably impressive (especially in Safari 3.1).
Greg already pointed you to Gears; in addition, HTML 5 will come with a standardized means of local storage. Safari 3.1 ships with an implementation where you have a per-site SQLite database with user-settable size maximums, as well as a built-in database browser with SQL querying. Unfortunately, it will be a long time until we can expect broad browser support. Until then, Gears is indeed an alternative (but not for Safari… yet!). For simpler storage, there is of course always cookies.
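For simple cases, the localStorage part of that standardization effort is nearly a one-liner; the key name and the data shape are made up for illustration:

```javascript
// localStorage persists string key/value pairs per origin, so
// structured data goes through JSON. Key and fields are hypothetical.
localStorage.setItem("puzzle-state", JSON.stringify({ level: 3, moves: 42 }));
var saved = JSON.parse(localStorage.getItem("puzzle-state"));
```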
The downside to this would be that you are at the mercy of them having js enabled. I'm not sure that this is a big deal now. Virtually every browser supports js and has it enabled by default.
Of course the other downside would be performance. You are again at the mercy of the client handling all the intensive work. This also may not be that big of a deal, and would be dependent on the type of app you are building.
I've never used Gears, but it looks like it is worth a shot. The backup plan would be to run some server side script through ajax that dumps your data somewhere.
Not completely client side, but oh well.
Nihilogic (not my site) does a lot of stuff with Javascript. They even have several games that they've made in Javascript.
I've also seen a neat roguelike game made in Javascript. Unfortunately, I can't remember what it was called...
If you want to write a standalone JavaScript application, look at XULRunner. It's what Firefox is built on, but it is also built so that you can distribute it as an application runtime. You write the interface in XUL and use JavaScript for your code.
Gears might provide the client-side persistent data storage you need. There isn't a terribly good way of not exposing your source code, though. You could obfuscate it but that only helps somewhat.
I've done simple apps like this for stuff like a Sudoku solver.
You might run into performance issues given that you're completely at the mercy of the client's Javascript interpreter. Gears would be a nice way of data storage, but I don't think it has penetrated the market that much. You could just use cookies if you're not fussy about that kind of thing.
I'm with ScottKoon here; Adobe AIR is great. I've really only made one really nice (IMHO) widget so far, but I did so using jQuery and Prototype.js, which worked in wonderful ways because I didn't have to learn a whole new event model. Adobe AIR is really sweet: the memory footprint isn't too bad, upgrading to a new version is built into AIR so it's almost automatic, and best of all it's cross-platform... they even have an alpha version for Linux, and it works pretty well already on my Eee.
Standalone games in GWT:
http://gpokr.com/
http://kdice.com/
In regard to saving files from a javascript application:
I am really excited about the possibilities of client-side applications. Flash 10 introduced the ability to create files for saving right in the browser. I thought it was super cool, so I built a JavaScript + Flash component to wrap the saving feature. Right now it only works for creating text-based files (vCard, iCal, XML, HTML, CSS, etc.).
Downloadify Home Page
Source Code & Documentation on Github
See It In Use at Starter for jQuery
I am looking to add support for non-text files soon, but this is a start.
My RSS feeds have served me well: I found that JavaScript roguelike!
It's called The Tombs of Asciiroth.
Given that you're going to be writing some server code anyway, it makes sense to keep storage on the server for a lot of domains (address books, poker scores, GUI configuration, etc.). For anything the size of what you'd get in WebKit or Gears storage, you can probably also keep it on your server.
The advantage of keeping it on your server is two-fold:
You can integrate it fairly simply as a Model layer in a typical MVC framework, and,
Users get a consistent view without being tied to their browser/PC, or in a less-than-ideal environment (Internet Cafés).
The server code for handling this can also be fairly trivial, particularly if it's written with this task in mind, so it's not a huge cognitive burden.
Go with qooxdoo. They recently released 1.0, although most users say it was ripe for 1.0 at least two versions ago.
I compared qooxdoo with YUI and Ext, and I think qooxdoo is the way to go for programmers: YUI isn't as polished as qooxdoo from a programmer's point of view, and Ext has a not-so-friendly licensing model.
A few of the strong points (for me) of qooxdoo are:
extremely clean code
the nicest OO programming model I've seen among Javascript frameworks
an extremely rich UI widget library
It also features a test runner for unit tests, an API doc generator and reader, a logging facility, and several useful features for debugging, grouped under something called Inspector.
The only downside is that there are no ready-made themes (something like skins) for qooxdoo. But creating your own theme is quite easy.
