How to make an iframe accessible by multiple users? - javascript

I want to make it possible for multiple users to view the same website (a specific URL they all agree on) with every user's events shared, so that everyone sees the same state. Several people could then use a website together while the website thinks there is only one person, similar to several people sharing one computer.
I have two ideas for how to do this:
The client-sided approach: everyone loads the same page in an iframe, and all user events are detected and sent to the other users so everyone ends up in the same state.
Problems:
Each user might use a different browser, the website can render differently for everyone, and the clients can desynchronise.
Emulating clicks might be difficult.
The server-sided approach: load the website only once, on the server, send all user events to the server, and stream the website's pixels back to the users.
Problems:
Streaming the website's state (its look, the pixels) back to all the users could be quite expensive, though it might be enough to update only the pixels that actually changed instead of the whole page.
Because approach 1 doesn't seem very feasible, I would like to try approach 2, but I'm not sure where to start. Do I make the server open the URL in its own browser and have the system emulate clicks on that browser?
What is the best approach to solve this? Are there other, better approaches?
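For a sense of what approach 2 involves, here is a minimal sketch using Puppeteer (an assumption on my part; the question names no tool): the server drives one headless browser as the single source of truth, replays user clicks into it, and captures frames for streaming.

    // Sketch of approach 2 with Puppeteer (assumed dependency).
    const puppeteer = require('puppeteer');

    async function startSharedSession(url, broadcastFrame) {
      const browser = await puppeteer.launch();
      const page = await browser.newPage();
      await page.goto(url);

      // Periodically capture the page's pixels and hand them to whatever
      // transport broadcasts them to the viewers (not shown here).
      setInterval(async () => {
        broadcastFrame(await page.screenshot({ type: 'jpeg', quality: 60 }));
      }, 100);

      // Replay one user's click at viewport coordinates for everyone.
      return { click: (x, y) => page.mouse.click(x, y) };
    }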

ProseMirror is an open source web-based editor that was designed with collaborative editing in mind:
Live demo of collaborative editor
Source code for demo
I suggest using ProseMirror as a base and then modifying it for your needs. ProseMirror defines a text-based document data structure and renders it as a web-based editor, but you could render the data structure however you desire:
Render the document data structure as a web page.
Clicking the webpage alters the underlying document data structure.
ProseMirror takes care of synchronizing the document data structure on all the clients.
Other clients render the updated document.
If you are more interested in creating your own thing completely from scratch, I would study how ProseMirror did this:
A single authority (the server) maintains the "official" version of the document. Clients send requests to update the document, which the authority handles and broadcasts to all other clients.
Note that peer-to-peer (called the 'client-sided approach' in the question) can also have an authority, but determining and maintaining that authority adds complexity, so server-client is simpler to implement.
Minimize network bandwidth. Sending every pixel or click event is expensive. (Clicks may also result in different, diverging documents on different clients.) Ideally, transactions are sent: deltas describing how the document changed.
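To make the authority model concrete, here is a minimal sketch using Node's ws package (my choice of transport; ProseMirror's own collab module handles this for you): the server accepts deltas in order, bumps a version number, and rebroadcasts to everyone.

    // Minimal central authority: clients send deltas against a version;
    // the server accepts them in order and rebroadcasts them.
    const WebSocket = require('ws');
    const wss = new WebSocket.Server({ port: 8080 });

    let version = 0;    // authoritative document version
    const history = []; // ordered list of accepted deltas

    wss.on('connection', (ws) => {
      // Bring a newly connected client up to date.
      ws.send(JSON.stringify({ type: 'init', version, history }));

      ws.on('message', (raw) => {
        const delta = JSON.parse(raw);
        if (delta.baseVersion !== version) {
          // The client was behind: it must rebase onto the newer history.
          ws.send(JSON.stringify({ type: 'rejected', version, history }));
          return;
        }
        version += 1;
        history.push(delta);
        const msg = JSON.stringify({ type: 'delta', version, delta });
        wss.clients.forEach((client) => {
          if (client.readyState === WebSocket.OPEN) client.send(msg);
        });
      });
    });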
There was a similar question, where I went into more detail: Creating collaborative whiteboard drawing application

For the desynchronization problem, you can make the website look the same for everyone by using styles, e.g. fixed dimensions for all browsers and screen sizes. That helps render the same version at the same dimensions for all users.
For click synchronization you can use sockets from Node. With them you can manage a server-side room for a group of users and share the same events among its members, similar to group-chat functionality. https://socket.io/ can help you implement this. Sockets are also used in client-side multiplayer games to give all players the same connected experience.
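For illustration, a minimal Socket.io sketch (the room id and event names are my own, and the coordinate replay is naive, per the desynchronisation caveat above):

    // server.js - relay each user's clicks to the rest of their group.
    const io = require('socket.io')(3000);

    io.on('connection', (socket) => {
      socket.on('join', (room) => socket.join(room));
      socket.on('click', ({ room, x, y }) => {
        socket.to(room).emit('click', { x, y }); // everyone except the sender
      });
    });

    // client.js - send local clicks, replay remote ones.
    const socket = io('http://localhost:3000');
    socket.emit('join', 'session-42');
    document.addEventListener('click', (e) => {
      socket.emit('click', { room: 'session-42', x: e.clientX, y: e.clientY });
    });
    socket.on('click', ({ x, y }) => {
      const el = document.elementFromPoint(x, y);
      if (el) el.click(); // naive replay at the same coordinates
    });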
This would be a client-server solution so that you can manage everything from the server and provide a good experience to your users.
Hope this helps. If you need additional information on this please drop a comment.

It looks like you need to create a reactive application: your web page will serve the same content to many users at the same time, they will all be able to interact with it in near real time (milliseconds), and every user will see every interaction.
I have implemented this scenario using Meteor, which is reactive by default. Of course you can use other frameworks too, or try the hard way and handle the communication between clients yourself with some clever JavaScript.
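To give a feel for "reactive by default", here is a minimal Meteor sketch (the collection name and file layout are illustrative): any client's insert is pushed to every subscribed client automatically.

    // shared/collections.js - visible to both client and server.
    import { Mongo } from 'meteor/mongo';
    export const Clicks = new Mongo.Collection('clicks');

    // server/main.js - publish the data set.
    import { Meteor } from 'meteor/meteor';
    import { Clicks } from '../shared/collections';
    Meteor.publish('clicks', function () {
      return Clicks.find();
    });

    // client/main.js - subscribe and react to live changes.
    import { Meteor } from 'meteor/meteor';
    import { Clicks } from '../shared/collections';
    Meteor.subscribe('clicks');
    Clicks.find().observe({
      added(click) {
        console.log('someone clicked at', click.x, click.y);
      },
    });
    // Any client may insert (given appropriate allow rules), and Meteor
    // syncs the document to all the other clients:
    // Clicks.insert({ x: 10, y: 20 });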

Related

Differential content draft saving in web app

I've seen that Medium.com, as well as Google Drive docs/sheets/etc., save drafts of your content so your editing doesn't get lost. Google's version also provides content versioning, but that's just an additional side effect: apparently they don't update the same content but save every revision, building up content-history records.
I'm interested in the way these two internally work.
If you observe the Ajax calls exchanged between the client and server, you can see that both send some sort of partial content or user-action saves. This kind of draft saving is very quick, because even lengthy content can be saved with tiny requests. The problem, of course, is that these partial saves are stateful, so they rely heavily on the existing content state.
I'm also working on a web app that would benefit from such tiny and quick saves, but I don't want to reinvent the wheel and jump over all the hurdles. I'm not sure whether I should be:
recording keystrokes and sending them immediately, to be replayed on the server to reproduce the matching content state there;
recording keystrokes but sending them at predefined intervals; or
not recording keystrokes at all, but rather calculating content diffs in some way. Content diffs are what VCSs like Git do, and I suppose the algorithm is very well defined by now and correctly captures removed as well as added content. If this process is not too complex for client-side processing, I may go down this route and send small diffs (a sketch follows below).
These are just three ways off the top of my head (although the first two are similar in nature), but there may be others that I haven't thought about.
I'm sure some of you have done this before and have experience with the pros and cons of the different techniques, and perhaps know of an existing open-source library that can simplify this without doing it all from scratch.
How should this be done?
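For what the diff option might look like, here is a rough sketch using Google's diff-match-patch library (my assumption; any diff library would do): only the patch against the last saved state crosses the wire.

    // Compute and send only the delta since the last successful save.
    const DiffMatchPatch = require('diff-match-patch');
    const dmp = new DiffMatchPatch();

    let lastSaved = ''; // text as of the last acknowledged draft save

    function saveDraft(currentText, send) {
      const patches = dmp.patch_make(lastSaved, currentText);
      if (patches.length === 0) return; // nothing changed
      send(dmp.patch_toText(patches));  // tiny, stateful delta
      lastSaved = currentText;          // assume the save succeeded
    }

    // Server side: apply the delta to its own copy of the draft.
    // const [patched] = dmp.patch_apply(dmp.patch_fromText(delta), serverText);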

Update large HTML content in realtime using Node.js and Socket.io

I am building a web application that updates the HTML content of every client's web page based on changes made by one client. For this purpose I am using Node.js's Socket.io so far.
I am using HTML5's contenteditable attribute to allow a client to manually edit the content of the div elements, which I then need to update for the other clients as well.
The problem I'm facing is that I don't understand what data to send over websockets to inform the server and, in turn, the other clients of the changes made.
Sending the whole innerHTML on every added or removed character means sending a huge amount of data over websockets, which results in bad performance and slow speed.
Sending only the appended data is not an option, since we don't know at what position in the div element the data was appended, removed, or edited.
Note that the keyboard is not the only way a client can change the HTML content of their copy of the web page. The relevant div element's data also changes through several client activities driven by JavaScript.
Now I need to know how I can send exactly the changed information over websockets on even the slightest change, to get a realtime experience.
P.S. I intend not to use any existing modules like Share.js but suggestions are welcome.
This is not really a "question" but rather a discussion, a sort of brainstorming topic. It is also pretty opinion based, as there is no single way of doing things on such a broad topic.
But I will take a chance and describe, purely IMHO, the way I would approach this (sorry for being so personal):
Text Version Control - a way to keep track of versions of text, similar to git but using character positions rather than lines. Each editable text should be version controlled; there should be one "master" that merges changes in and notifies clients of new versions. On the client side, version changes (commits) should be applied the same way the server merges them, respecting commit order.
Structure Updates - this is slightly different, as structure updates involve only the HTML structure and do not require versioning (though you may still want versioning, or at least some history of actions, and there are race conditions involved). Editing the structure would notify the server of the change, and the server should make sure all clients are "on the same page" regarding structure.
Referencing - this can be the hard bit. Referencing HTML elements can be pretty simple (selectors), but there is one complexity: when there is, for example, a <ul> with many <li>, referencing each by index is fair enough, but what if you insert a new element in between? That messes up the references. So references should be unique and fully depend on synchronised IDs that the server assigns. You need to keep a list of all references with pointers to the HTML elements, so they are easy to access directly.
Server <> Client politics. If you want a fully synchronised experience, you have to follow an authoritative model of communication, where only the server really decides things and clients merely render and provide input (requests). For example, when a client wants to add an element to the HTML, it sends a request; the server decides on it and inserts it; once it is in, the server publishes the event, the element appears on all clients, and they can then edit its content. You can enhance this further by "going ahead" without waiting for the server, but that should be an extra layer rather than part of the main logic (see the sketch after this list).
When one client is working on a node (element/text), it should be the only one able to work on it (to prevent conflicts). The commit will be considered "in progress" and should be a single-version commit. In the beginning you can update only once the user is "done" editing the text or element, and later add real-time visuals for the editing process.
You have to research the amount of traffic this will generate. With version control and event-based element editing it will be very efficient, but the cost is that the server side has to simulate the whole DOM structure and do the merging and other work.
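To make the referencing and authority points concrete, here is a rough sketch using Node's ws package (my choice; the message shapes are illustrative): the client only requests an insert, and the server mints the synchronised ID and announces it to everyone.

    const WebSocket = require('ws');
    const wss = new WebSocket.Server({ port: 8080 });
    let nextId = 1; // only the server mints element ids

    wss.on('connection', (ws) => {
      ws.on('message', (raw) => {
        const req = JSON.parse(raw);
        if (req.type !== 'insert-element') return;
        const announcement = JSON.stringify({
          type: 'element-inserted',
          id: 'node-' + nextId++, // stable reference for every client
          parent: req.parent,     // a previously assigned id
          tag: req.tag,
        });
        // Everyone, including the requester, renders from this event.
        wss.clients.forEach((client) => {
          if (client.readyState === WebSocket.OPEN) client.send(announcement);
        });
      });
    });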
Again, this is primarily an opinion-based answer, and this is not a small topic; it involves many fields. I am describing the "pure" and "good" approach to aim for, not the "fastest"; there may be much simpler solutions with some tradeoffs.

Advantages / Disadvantages to websites generated with Javascript

Two good examples would be Google and Facebook.
I have lately pondered the motivation for this approach. My best guess is that it almost completely separates the logic of your back-end language from the markup. Building up an array to send over in JSON format seems like a tidy way to maintain code, but what other elements am I missing here?
What are the advantages / disadvantages to this approach, and why are such large scale companies doing it?
The main disadvantage is that you have some pain with content indexing of your site.
For Google you can partly solve the problem by using its AJAX crawling scheme, which allows Google to index dynamically generated (no page reload) content of your page.
To do this, your virtual links must use addresses like http://yoursite.com/#!/register/. Google then requests a crawlable version of that address (the #! fragment is passed as the _escaped_fragment_ query parameter) to index its content.
When a virtual link is clicked there is no page reload. You can arrange this with an onclick handler:
<a href='http://yoursite.com/#!/register/' onclick='showRegister()'>Register</a>
The advantage is that the page content changes without the page reloading. In my own practice I do not use JavaScript generation for this, because I build my interfaces with fixed positions: when the page reloads, the user doesn't notice anything, because the interface elements appear in the expected places.
So, my opinion is that dynamic page generation is a big pain. I think Google did it not to separate markup from backend (that's not a real problem; you can use a well-structured backend-frontend split for that) but to take advantage of a convenient and pleasant presentation for users.
Advantages
View state is kept on the client (removing load from the server)
Partial refreshes of pages
Server does not need to know about HTML which leads to a Service Oriented Architecture
Disadvantages
Bookmarking (state in the URL) is harder to implement
Making it searchable is still a work in progress
Need a separate scheme to support non-JS users
I don't 100% understand your question, but I'll try my best here...
Google and Facebook both extensively use JavaScript across all of their websites and products. Every major website on the web uses it.
JavaScript is the technology used to modify the behavior of websites.
HTML => defines structure and elements
CSS => styling the elements
Scripting languages => dynamically generating elements and filling them with data
JavaScript => modifies all of the above by interacting with the DOM, responding to events, and styling elements on the fly
This is the 'approach', as you call it, of every website on the web today. There are no alternatives to JavaScript/HTML/CSS. You can change the database or the server-side scripting language, but JavaScript/HTML/CSS is a constant.
Consider the example of simple form validation: the client sends a request to the server, the server executes the server-side code containing the validation logic, and in its response the server sends the result back to the client.
If the client has the capability to execute that logic itself, it can validate the form locally and won't need to send a request to the server and wait for the response.
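For example, a trivial client-side check that avoids the round trip (the selectors and the regex are illustrative):

    // Validate locally; only well-formed input ever reaches the server.
    document.querySelector('#signup').addEventListener('submit', (e) => {
      const email = document.querySelector('#email').value;
      if (!/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email)) {
        e.preventDefault(); // block the submission, no server involved
        alert('Please enter a valid email address.');
      }
    });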
I suggest you take a look at Google's Page Speed best practices (http://code.google.com/intl/it-IT/speed/page-speed/) to see what factors make a good page. Generating a page with JavaScript seems cool because of the separation of UI and logic, but it is quite inefficient in practice.

One page only javascript applications

Have you experimented with single-page web applications, i.e. where the browser only 'GETs' one page from the server, with the rest handled by client-side JavaScript code (one good example of such an 'application page' is Gmail)?
What are some pro's and con's of going with this approach for simpler applications (such as blogs and CMSs)?
How do you go about designing such an application?
Edit: As mentioned in the responses, one difficulty is handling the back button, the refresh button, and bookmarking/copying the URL. The latter can be solved using location.hash; any clue about the remaining two issues?
I call these single page apps "long lived" apps.
For "simpler applications" as you put it it's terrible. Things that work OOTB for browsers all of a sudden need special care and attention:
the back button
the refresh button
bookmarking/copying url
Note I'm not saying you can't do these things with single-page apps; I'm saying you need to make the effort to build them into the app code. If you simply had different resources at different URLs, these would work with no additional developer effort.
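As the question's edit hints, location.hash can actually cover all three to a degree. A minimal sketch (the route names are illustrative): because each view has its own hash, back, refresh, and bookmarks all resolve to the right state.

    // Render whichever view the current hash names.
    function render(route) {
      document.querySelector('#app').textContent = 'Current view: ' + route;
    }

    // Back/forward buttons fire hashchange, so history works again.
    window.addEventListener('hashchange', () => {
      render(location.hash.slice(1) || 'home');
    });

    // Refresh and bookmarked URLs land here with the hash intact.
    render(location.hash.slice(1) || 'home');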
Now, for complex apps like Gmail and Google Maps, the benefits are:
user-perceived responsiveness of the application can increase
the usability of the application may go up (e.g. scrollbars don't jump to the top of a new page when the user clicks on what they thought was a small action)
no white screen flicker during the HTTP request->response
One concern with long-lived apps is memory leaks. Traditional sites that request a new page for each user action have the added benefit that the browser discards the DOM and any unused objects, so memory can be reclaimed. Newer browsers have different mechanisms for this, but let's take IE as an example: IE requires special care to clean up memory periodically during the lifetime of a long-lived app. Libraries make this somewhat easier these days, but it is by no means trivial.
As with a lot of things, a hybrid approach is great. It allows you to leverage JavaScript for lazy-loading specific content while separating parts of the app by page/url.
One pro is that you get the full presentation power of JavaScript, as opposed to non-JavaScript web sites where the browser may flicker between pages and cause similar minor nuisances. You may also see lower bandwidth use, since only the immediately relevant parts are refreshed instead of a full web page coming back from the server.
The major con behind this is the accessibility concern. Users without JavaScript (or those who choose to disable it) can't use your web site unless you do some serious server-side coding to determine what to respond with depending on whether the request was made using AJAX or not. Depending on what (server-side) web framework you use, this can be either easy or extremely tedious.
It is not considered a good idea in general to have a web site which relies completely on the user having JavaScript.
One major con, and a major complaint of websites that have taken AJAX perhaps a bit too far, is that you lose the ability to bookmark pages that are "deep" into the content of the site. When a user bookmarks the page they will always get the "front" page of the site, regardless of what content they were looking at when they made the bookmark.
Maybe you should check out SproutCore (Apple used it for MobileMe) or Cappuccino. These are JavaScript frameworks for exactly that: designing desktop-like interfaces that only fetch responses from the server via JSON or XML.
Using either for a blog wouldn't be a good idea, but a well-designed, desktop-like blog admin area might be a joy to use.
The main reason to avoid it is that, taken alone, it's extremely search-unfriendly. That's fine for webapps like Gmail that don't need to be publicly searchable, but for your blogs and CMS-driven sites it would be a disaster.
You could of course create the simple HTML version and then progressively enhance it, but making it work nicely in both versions at once could be a lot of work.
I was creating exactly this kind of page as webapps for the iPhone. My method was to really put everything into one huge index.html file and to show or hide certain content. This showing and hiding, i.e. the page navigation, is controlled in a dedicated JavaScript file holding the functions that manage the display of the page's parts (sketched below).
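A minimal sketch of that show/hide navigation (the class and id names are illustrative):

    // All "pages" live in index.html; exactly one is visible at a time.
    function showPage(id) {
      document.querySelectorAll('.page').forEach((el) => {
        el.style.display = el.id === id ? 'block' : 'none';
      });
    }
    showPage('home'); // initial view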
Pro: everything is loaded at the beginning, so you don't need to request anything more from the server; "switching" content and performing actions is very fast.
Con: first, everything has to load, and that can take time if a lot of content has to be shown immediately.
Another issue is that when the connection goes down, the user won't really notice until they actually need the server side. You can see this in Gmail as well (sometimes it can even be a positive thing).
Hope it helps!
Usually, you would use a framework like GWT, Echo2, or similar.
The advantage of this approach is that the application feels much more like a desktop app. When the server is fast enough, users won't notice the many little data packets going back and forth. Also, loading a page from scratch is an expensive operation; if you just modify parts of it, the browser can keep much of the existing model in memory and change only the parts that changed.
Another advantage of these frameworks is that you can develop your application in pure Java. This means you can debug it in your IDE just like any other Java app, you can write unit tests and run them automatically, etc.
I'll add that on slower machines, a con is that a large amount of JavaScript will bring the browser to a screeching halt. Since all the rendering is done client-side, the experience is ruined if the user doesn't have a higher-end computer. My work computer is a P4 3.0 GHz with 2 GB of RAM, and JavaScript-heavy sites make it chug along slower than molasses, which really kills the user experience for me.

Reflective Web Application (WebIDE)

Preamble
So, this question has already been answered, but as it was the first question I asked for this project, I'm going to continue referencing it in the other questions I ask.
For anyone who came from another question, here is the basic idea: create a web app that makes it much easier to create other web applications or websites. To do this, you would basically build a modular site from "widgets" and then combine them into the final display pages. Each widget would likely have its own set of functions, combined in a Class if you use Prototype, or via .prototype.fn otherwise.
Currently
I am working on getting the basics down: editing CSS, creating user JavaScript functions and dynamically discovering their names/inputs, and other critical technical aspects of the project. Soon I will create a rough timeline of the features I wish to build, and soon after that I intend to start a blog of sorts to keep everyone informed of the project's status.
Original Question
Hello all, I am currently trying to formalize an idea I have for a personal project (which may turn into a professional one later on). The concept is a reflective web application: in other words, a web application that can build other web applications and is actively used to build and improve itself. Think of it as a sort of webapp IDE for creating webapps.
So before I start explaining it further, my question to all of you is this: What do you think would be some of the hardest challenges along the way and where would be the best place to start?
Now let me briefly explain some aspects of this concept. I want this application to be as close to WYSIWYG as possible, in that you have a display area which shows all or part of the website as it would appear. You should be free to browse it to get to the areas you want to work on, and to use a JavaScript debugger/console to ask "what would happen if...?" questions.
I intend for the webapps to be built up via components. In other words, the result would be a very modular webapp so that you can tweak things on a small or large scale with a fair amount of ease (generally it should be better than hand coding everything in <insert editor of choice>).
Once the website/webapp is done, this webapp should be able to produce all the code necessary to install and run the created website/webapp (so CSS, JavaScript, PHP, and PHP installer for the database).
Here are the few major challenges I've come up with so far:
Changing CSS on the fly
Implementing reflection in JavaScript (see the sketch after this list)
Accurate and brief DOM tree viewer
Allowing users to choose JavaScript libraries (e.g. Prototype, jQuery, Dojo, ExtJS)
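For the reflection challenge, here is a rough sketch of discovering a user function's name and inputs (parsing Function.prototype.toString is fragile and shown only as an illustration):

    // Introspect a function's name and parameter names at runtime.
    function describe(fn) {
      const src = fn.toString();
      const params = src
        .slice(src.indexOf('(') + 1, src.indexOf(')'))
        .split(',')
        .map((s) => s.trim())
        .filter(Boolean);
      return { name: fn.name, params };
    }

    function greet(name, greeting) { return greeting + ', ' + name; }
    console.log(describe(greet)); // { name: 'greet', params: ['name', 'greeting'] }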
Any other comments and suggestions are also welcome.
Edit 1: I really like the idea of AppJet, and I will check it out in detail when I get the time this weekend. My only concern is that this tool is supposed to create code that can go onto other people's web servers, so while AppJet might be a great way for me to develop this app more rapidly, I still think I will have to generate PHP code for my users to put on their servers.
Also, when I feel this is ready for beta testers, I will certainly release it for free to everyone on this site. But I was thinking that out of beta I should follow a scheme similar to GitHub's: free for open-source apps, paid for private/proprietary apps.
Conceptually, you would be building widgets, a widget factory, and a factory-making factory.
So, to get an idea, you would have to find all the different types of interactions that are possible in making a widget, between widgets, within a factory, and between multiple widget-making factories.
Something to keep on top of: how far would be too far to abstract?
I think you would need to abstract a few layers completely for the application space itself, then build some management tool for it all, covering the presentation, workflow, and data tiers.
Presentation: you are either receiving feedback or providing input, usually by clicking or entering something. A simple example is building dynamic web forms backed by a database: what would you have to store in the database about where each piece comes from and goes to? This would probably make up the presentation layer, and it is probably the best exercise to start with to get a feel for what you may need.
Workflow: it would be wise to build a simple workflow engine. I built one modeled on Windows Workflow that I had up and running in two days. It could set the initial event that should be run, and so on. From a designer's perspective, I imagine a Visio-type program for linking these events. The events in the workflow would then drive the presentation tier.
Data: you would have to store data about the application as well as the data in the application. Forms, events, and data structures could be stored as XML documents, depending on whether you need to work with any of the data in the forms or not. The application's data could also be stored in empty XML templates that you fill in, or in actual tables; at that point you'd have to create a table-creation routine that maintains a table for an app to the spec. Google has something like this with their online database.
Hope that helps. Share what you end up coming up with.
Why use PHP?
AppJet does something really similar using 100% JavaScript on both the client and the server side (via Rhino).
This makes it easier for programmers to use your service, and easier for you to deploy. In fact, even their data-storage technique uses JavaScript (simple native objects), which is a really powerful idea.
