Storing real-time canvas session data in Node.js - javascript

I'm writing a real-time paint application using Node.js, HTML5 canvas and WebSockets.
Currently the server just acts as a relay: whatever each user draws is broadcast to the rest of the users.
The problem here is that when new users show up, they start with an empty canvas.
I have two ideas on how to solve this:
1) The event-driven approach - persist every draw event in memory on the server. When a new user joins the session, all events are reconstructed and sent to them.
2) The server maintains its own copy of the canvas. Rather than just relaying the draw events, the server also renders them. When a new user shows up, this state is passed on to them.
Does anyone have thoughts on the pros and cons of both approaches, or better yet, a better way to solve this?

This is my opinionated answer.
The best approach is to have the server maintain a copy of the data in a database. This means that whenever a client starts, it always has data to load, which covers dropped packets when a new client joins and also lets you keep legacy data. When I developed a similar concept I used game objects as the example and got good results with many clients, without noticeable lag on a local network, even with a flawed design. Hope this helps.
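As a loose sketch of how the replay-on-join idea (option 1) could look, assuming socket.io as the transport and an in-memory array standing in for the database recommended above:

// Server-side sketch: keep every draw event and replay the history to new clients.
// socket.io and the in-memory array are assumptions; in production the array would be the DB suggested above.
const { Server } = require('socket.io');
const io = new Server(3000);

const history = [];

io.on('connection', (socket) => {
  // A new user gets everything drawn so far, in order.
  socket.emit('history', history);

  socket.on('draw', (event) => {
    history.push(event);                  // persist (here: memory; ideally: the database)
    socket.broadcast.emit('draw', event); // relay to everyone else, as before
  });
});

Option 2 trades this ever-growing event log for a single server-side canvas snapshot (for example rendered with something like the node-canvas package), which keeps the payload sent to new users constant.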

Related

How to make an iframe accessible by multiple users?

I want to make it possible for multiple users to view the same website (a specific URL they all agree on) with all user events shared, so that everyone has the same state. In effect, several people use one website but the website thinks there is only one person, similar to several people sharing one computer.
I have two ideas for how to do this:
The client-sided approach: everyone loads the same page in an iframe, all user events are detected and sent to the other users, and everyone ends up with the same state.
Problems:
Each user might use a different browser, the website can render differently for everyone, and desynchronisation can also happen.
Emulating clicks might be difficult.
The server-sided approach: load the website only once on the server, send all user events to the server, and stream the website's pixels back to the users.
Problems:
Streaming the website's state (its look, the pixels) back to all the users could be quite expensive, but maybe it could update only the pixels that actually changed, instead of all the pixels of the website.
Because approach 1 doesn't seem very feasible, I would like to try approach 2, but I'm not sure where to start. Do I make the server open the URL in its own browser and have the system emulate clicks on that browser?
What is the best approach to solve this, and are there more and better approaches?
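For the server-sided approach, one hedged starting point (my own illustration, not something from the answers below) is a headless browser such as Puppeteer: the server owns the single page, clients send it their clicks, and the server ships screenshots back.

// Server-side sketch using Puppeteer (an assumption; any headless browser driver would do).
const puppeteer = require('puppeteer');

async function startSharedPage(url) {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto(url);

  return {
    // Forward a client's click into the shared page.
    click: (x, y) => page.mouse.click(x, y),
    // Grab the current pixels to stream back to every viewer.
    frame: () => page.screenshot({ type: 'jpeg', quality: 60 }),
  };
}

Streaming full screenshots is exactly the expensive part the question anticipates; sending only changed regions, or an actual video stream, would be the next refinement.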
ProseMirror is an open source web-based editor that was designed with collaborative editing in mind:
Live demo of collaborative editor
Source code for demo
I suggest using ProseMirror as a base and modifying it for your needs. ProseMirror defines a text-based document data structure and renders that data structure as a web-based editor, but you could render it however you desire:
Render the document data structure as a web page.
Clicking the webpage alters the underlying document data structure.
ProseMirror takes care of synchronizing the document data structure on all the clients.
Other clients render the updated document.
If you are more interested in creating your own thing completely from scratch, I would study how ProseMirror did this:
A single authority (the server) maintains the "official" version of the document. Clients send requests to update the document, which the authority handles and broadcasts to all other clients.
Note that peer-to-peer (called the 'client-sided approach' in the question) can also have an authority, but there is added complexity in determining and maintaining that authority, so client-server is simpler to implement.
Minimize network bandwidth. Sending every pixel or click event is expensive. (Clicks may also result in different, diverging documents on different clients.) Ideally you send transactions: deltas describing how the document changed.
There was a similar question, where I went into more detail: Creating collaborative whiteboard drawing application
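As a minimal sketch of that single-authority pattern (my own illustration, not ProseMirror's actual API; the version counter and event names are invented):

// Authority sketch: clients send deltas tagged with the version they were based on.
// The version counter, 'steps' log and socket.io transport are all assumptions for illustration.
const { Server } = require('socket.io');
const io = new Server(3001);

let version = 0;      // how many transactions the authority has accepted
const steps = [];     // the accepted transactions, in order

io.on('connection', (socket) => {
  socket.emit('init', { version, steps });

  socket.on('submit', ({ baseVersion, delta }) => {
    if (baseVersion !== version) {
      // Client was behind: send the transactions it missed so it can rebase and retry.
      socket.emit('rebase', steps.slice(baseVersion));
      return;
    }
    steps.push(delta);
    version += 1;
    io.emit('apply', { version, delta }); // broadcast to everyone, including the sender
  });
});

Clients that receive a 'rebase' apply the missed steps, transform their pending delta, and resubmit, which is roughly the loop ProseMirror's collab module runs.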
For the desynchronization problem, you can make the website look the same for everyone by using styles, e.g. fixed dimensions for all browsers and screen sizes. That helps render the same version, at the same dimensions, to all the users.
For click synchronization you can use Node sockets. With them you can keep a server-side registry for a group of users and share the same events between them, similar to group-chat functionality; https://socket.io/ can help you implement this. Sockets are also used in client-side multiplayer games to give all the users the same connected experience.
This would be a client-server solution, so you can manage everything from the server and provide a good experience to your users.
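A rough illustration of that grouped, chat-like relay using socket.io rooms (the room and event names here are invented):

// Room-based relay sketch; 'join' and 'click' events and the room names are assumptions.
const { Server } = require('socket.io');
const io = new Server(3002);

io.on('connection', (socket) => {
  socket.on('join', (room) => socket.join(room));

  socket.on('click', ({ room, x, y }) => {
    // Everyone else in the same room receives the click and replays it locally.
    socket.to(room).emit('click', { x, y });
  });
});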
Hope this helps. If you need additional information on this please drop a comment.
It looks like you need to create a reactive application: your web page serves the same content to many users at the same time, they can interact with the app almost in real time (milliseconds), and all users see all interactions.
I have implemented the above scenario using Meteor, which is reactive by default. Of course you can use other frameworks too, or try the hard way and manage the communication between clients yourself with some clever JavaScript.
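A minimal sketch of the classic Meteor publish/subscribe pattern (the collection, publication and applyEvent names are my own; treat it as an outline rather than a drop-in implementation):

// Single-file Meteor sketch; collection and event names are invented.
import { Meteor } from 'meteor/meteor';
import { Mongo } from 'meteor/mongo';
import { Tracker } from 'meteor/tracker';

const Interactions = new Mongo.Collection('interactions');

if (Meteor.isServer) {
  // Publish every interaction so all connected clients stay in sync automatically.
  Meteor.publish('interactions', () => Interactions.find());
}

if (Meteor.isClient) {
  Meteor.subscribe('interactions');
  // Re-runs reactively whenever any user inserts a new interaction.
  Tracker.autorun(() => {
    Interactions.find().forEach((evt) => applyEvent(evt)); // applyEvent: your own renderer (assumed)
  });
}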

Website remote render d3.js server side

Looking for a solution to an arguably strange problem. OK, so we are using d3.js to plot charts and graphs. However, our data sets range from very small to intensely massive. Right now most of what we are doing is internal prototyping. However, we do show clients these charts, draw them in real time for them quite often, and rapidly change their inputs.
Doing this in D3 looks great but, as expected, can be slow. I'm more interested in what the possibilities are for this process: go to our website, log in, and see an instance of our dashboard being rendered remotely on the server. Our server cluster is a beast, so I'm not worried about it doing the heavy lifting; it can run these processes about 100x faster than our best PC. So it seems we could set up our website to create instances of our dashboard on the fly, but each with access only to that user account's data.
This is getting a bit convoluted, so let me explain. We have a database full of millions of data points and about 10 user accounts, each with access to different pieces of this data: one has access to all of it, another to some of it. That part is not what we are looking for a solution to. We are more interested in the ability of our server to create multiple instances of our site, viewed through a window essentially, that the user controls remotely, like a Remote Desktop of sorts. We could even start with the user login form being part of the remote render, where our system is fully hosted and runs on the server itself and the web page is essentially a KVM onto the server. However, it needs to handle multiple users at the same time.
We are using CentOS 6.4, lots of Python for the back end, PHP and HTML, and a mixture of Postgres and SQLite, but I doubt any of this is important. Just want to cover my bases.
It seems unlikely to me that you'd be able to meaningfully display millions of data points on a single screen without grouping and summarizing them in some way. Do the processing and summarize the data on the server, and ship the resulting smaller datasets to the client, which then plots your graphs and charts from them. You'll likely end up with more than one set of data now, but it should result in much better client performance, e.g.
{millions of points} -> transform on server -> data for bar chart to client
{millions of points} -> transform on server -> data for XY-scatter chart
etc.
What you've proposed is not really a programming issue, and isn't going to scale very well.
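As a hedged sketch of the server-side transform described above (the Express route, bin count and point shape are my own inventions for illustration):

// Express endpoint sketch: summarize raw points server-side into a small dataset the client can chart.
const express = require('express');
const app = express();

// Stand-in for the real Postgres/SQLite query, already filtered to the user's account.
async function loadPoints(accountId) { /* ... */ return []; }

app.get('/api/bar-chart/:accountId', async (req, res) => {
  const points = await loadPoints(req.params.accountId);
  if (points.length === 0) return res.json([]);

  // Bin millions of {x, y} points into a fixed number of buckets and average each bucket.
  const BINS = 50;
  let min = Infinity, max = -Infinity;
  for (const p of points) {
    if (p.x < min) min = p.x;
    if (p.x > max) max = p.x;
  }
  const width = (max - min) / BINS || 1;

  const bins = Array.from({ length: BINS }, () => ({ sum: 0, n: 0 }));
  for (const p of points) {
    const i = Math.min(BINS - 1, Math.floor((p.x - min) / width));
    bins[i].sum += p.y;
    bins[i].n += 1;
  }

  res.json(bins.map((b, i) => ({ x: min + i * width, y: b.n ? b.sum / b.n : 0 })));
});

app.listen(3003);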

Where in my stack can streams solve problems, replace parts and/or make it better?

If I take a look at the stream library landscape, I see a lot of nice stuff (like mapping/reducing streams), but I'm not sure how to use them effectively.
Say I already have an Express app that serves static files and has some JSON REST handlers connecting to a MongoDB database server. I have a client-heavy app that can display information in widgets and charts (think Highcharts), with the user filtering, drilling down into information, etc. I would like to move to real-time updating of the interface, and this is the perfect little excuse to introduce Node.js into the project, I think. However, the data isn't really real-time, so pushing new data to a lot of clients isn't what I'm trying to achieve (yet); I just want a fast experience.
I want to use Browserify, which gives me access to the Node.js streams API in the browser (and more), and given the enormity of the data sets, processing is done server-side (by a backend API over JSONP).
I understand that most of the connections are at some point already expressed as streams, but I'm not sure where else I could use streams effectively to solve a problem:
Right now, when sliders/inputs are changed, spinning loaders appear in the affected components until the new JSON has arrived, been parsed, and is ready to be shot into the chart/widget. If I put a Node.js server in between, can streaming things, instead of request/responding chunks of JSONP-ified number data, speed up the interactivity of the application?
Say I have some time-series data. Can a stream be reused, so that when I want to see only a subset of the data (by time), I can have the stream re-send its data, filtering out the points I don't care about?
Would streaming data to a (High)chart be a better user experience than using a for loop and an array?
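On the second question, a sketch of what a reusable time filter could look like with Node's object-mode Transform streams (the {t, value} point shape and the stream names are assumptions):

// A Transform stream that passes through only the points inside a time window.
const { Transform } = require('stream');

function timeFilter(from, to) {
  return new Transform({
    objectMode: true,
    transform(point, _encoding, callback) {
      // Assumed point shape: { t: <timestamp in ms>, value: <number> }
      if (point.t >= from && point.t <= to) this.push(point);
      callback();
    },
  });
}

// Usage: pipe the full series through a fresh filter whenever the user changes the window.
// seriesSource() is assumed to be whatever readable stream produces your time-series points.
// seriesSource().pipe(timeFilter(Date.now() - 3600e3, Date.now())).pipe(chartSink);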

Node JS and Mouse

I am trying to assess the feasibility of the following setup, using Node.js as a mouse recorder. I know that there are simple JS mouse recorders with timers and arrays, but they are not efficient enough when it comes to timing (due to millisecond deviations in JS timers).
Let's assume I want to be able to do the following:
1) Instead of pushing the current mouse position on every change, I want to buffer it locally and push the data at a set interval (e.g. 5 seconds). Is that even possible?
2) If so, the stream of this mouse movement is saved as a binary file. The binary file could then be streamed to another client.
Generally, I have difficulty understanding streams. To my understanding, streams are just chunks of data that are sent to the client. Is this correct?
1) Yes, it's possible; I would recommend using EventEmitter <-> event listener logic.
2) Sure, you can do it. But tell us more clearly what you are trying to do. Meanwhile, you can take a look at socket.io for streaming data, or npm install ws. Again, it very much depends on what you're trying to develop.
There are also much more complex and powerful solutions based on the RTMP protocol, but I have no idea why you would need that here just to send a couple of bytes from one side to the other. You may also take a look at the broadcaster idea if you have to send those data chunks to multiple subscribers.
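A browser-side sketch of the buffering described in point 1 (the ws:// URL and the 5-second interval are assumptions):

// Buffer mouse positions locally and flush them to the server every 5 seconds.
const ws = new WebSocket('ws://localhost:8080'); // assumed ws/socket.io-style server
let buffer = [];

document.addEventListener('mousemove', (e) => {
  buffer.push({ x: e.clientX, y: e.clientY, t: performance.now() });
});

setInterval(() => {
  if (buffer.length === 0) return;
  ws.send(JSON.stringify(buffer)); // or pack into a typed array for a binary frame
  buffer = [];
}, 5000);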

Webapp security for client-side game logic operations

I am working on an in-browser game, taking advantage of the canvas available in HTML5. However, I realized that I have a big vulnerability in the system: the game score and other statistics about gameplay are calculated client-side in JavaScript, then submitted to the server through XMLHttpRequest for storage and comparison with other players. This obviously exposes the statistics to manipulation and potential cheating.
I am worried about moving these to the server side due to latency issues. I expect the timing to be close.
Are there other smart ways to deal with this problem? I imagine more and more games will face this as HTML5 grows.
Not really. Your server in this scenario is nothing more than a database that trusts the client. You can obfuscate, but people will easily be able to figure out what your API is doing. This is an intractable problem for all standalone games, and is why, for example, you see Blizzard making Diablo 3 a client-server game. The fact that it's a JavaScript game just makes it even more transparent and easy for people to debug and exploit.
Why don't you just send data to the server every time the client scores a point, and then keep a local score of points as well?
Unfortunately there is not much you can do about this. Minifying/obfuscating your code is always a good option. I'm not entirely sure, but I think putting your code inside
(function() { /* code */ })();
should protect your variables from being edited by users (unless you have them attached to an object like window). But users can still exploit your Ajax call and send whatever score they want to the server. Never trust anything that is done client-side; validate everything server-side.
EDIT: Another thing I thought of: maybe generate a code server-side and send it to the browser, then include that code with every Ajax call to verify that it is really your client and not some malicious user.
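A rough sketch of that server-generated code idea, assuming Express and an HMAC over a session id (both my own choices, not something prescribed by the answer):

// Issue a signed token when the game starts, then require it on every score submission.
const express = require('express');
const crypto = require('crypto');

const app = express();
app.use(express.json());
const SECRET = 'replace-with-a-real-secret'; // assumption: kept only on the server

const sign = (sessionId) =>
  crypto.createHmac('sha256', SECRET).update(sessionId).digest('hex');

app.post('/start', (req, res) => {
  const sessionId = crypto.randomUUID();
  res.json({ sessionId, token: sign(sessionId) });
});

app.post('/score', (req, res) => {
  const { sessionId, token, score } = req.body;
  if (token !== sign(sessionId)) return res.status(403).end(); // missing or tampered code
  // ...store the score. The token proves the session was issued by this server,
  // though it still does not prove the score itself is honest.
  res.end();
});

app.listen(3004);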
100% security is not achievable when you have to trust data coming from the client. However, you can make cheating harder by obfuscating the JS code and also the data you send from the client.
I have an idea that is similar to gview's comment.
On the server, keep track of the player's progress through the game via batch updates that the client sends regularly at some interval. The player will not notice it amid the latency, and you will have a tool to detect obvious cheaters. You know the starting point of the player's game, so you can detect cheating right from the beginning.
I would also suggest using some checkpoints, where you compare the real state of the game on the client against the state on the server. (The state on the client would not change if the cheater tampers only with the XHR updates sent to the server.)
There are lots of ways to make cheating harder, and it is quite individual for each game; there is no standard that solves it all.
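A small sketch of the batch-update check described above (the threshold and the per-session bookkeeping are invented; tune them to the real game):

// Track each session's score server-side and reject jumps the game cannot legitimately produce.
const MAX_POINTS_PER_BATCH = 200;   // assumption: highest score gain possible per update interval
const sessions = new Map();         // sessionId -> last accepted score

function acceptBatch(sessionId, reportedScore) {
  const last = sessions.get(sessionId) ?? 0;
  const delta = reportedScore - last;

  if (delta < 0 || delta > MAX_POINTS_PER_BATCH) {
    return { ok: false, reason: 'implausible score change' }; // flag as a likely cheater
  }
  sessions.set(sessionId, reportedScore);
  return { ok: true };
}

// Usage inside whatever handler receives the periodic batch update:
// const verdict = acceptBatch(req.body.sessionId, req.body.score);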
