Node JS and Mouse - javascript

I am trying to assess the feasibility of the following setup, using Node.js as a mouse recorder. I know that there are simple JS mouse recorders built with timers and arrays, but they are not accurate enough when it comes to timing (due to millisecond-level deviations in JS timers).
Let's assume I want to be able to do the following:
1) Instead of pushing the current mouse position on every change, I want to buffer it locally and push the data at a set interval (e.g. 5 sec). Is that even possible?
2) If so, the stream of this mouse movement is saved as a binary file. The binary file could then be streamed to another client.
Generally I have difficulty understanding streams. To my understanding, streams are just chunks of data that are sent to the client. Is this correct?

1) Yes, it's possible. I would recommend using Event Emitter <-> Event Listener logic; a minimal sketch follows below.
2) Sure, you can do it, but tell us more clearly what you are trying to do. Meanwhile you can take a look at socket.io for streaming the data, or npm install ws. Again, it very much depends on what you're trying to develop.
There are also much more complex and powerful solutions based on the RTMP protocol, but I have no idea why you would need that here just to send a couple of bytes from one side to another. You may also take a look at the broadcaster idea if you have to send those data chunks to multiple subscribers.
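For 1), here is a minimal browser-side sketch of the buffer-and-flush idea. The 5-second interval matches the example in the question, and sendBatch() is a placeholder for whatever transport you pick (e.g. a WebSocket send):

// Buffer mouse positions locally and flush them in one batch per interval.
const buffer = [];

document.addEventListener('mousemove', (e) => {
  // Store a high-resolution timestamp with each sample instead of relying on timer ticks.
  buffer.push({ x: e.clientX, y: e.clientY, t: performance.now() });
});

setInterval(() => {
  if (buffer.length === 0) return;
  const batch = buffer.splice(0, buffer.length); // drain the buffer
  sendBatch(batch); // hypothetical transport, e.g. socket.send(JSON.stringify(batch))
}, 5000);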


Duplicate websocket subscription in Azure webapp

I have a Node.js app running in Azure as a Web App. On startup it connects to an external service using a websocket subscription. Specifically, I'm using the reconnecting-websockets NPM package to wrap it and handle disconnects.
The problem I am having is that because there are 2 instances of the app running on Azure (horizontal scaling for failover), I end up with two subscriptions at any one time.
Is there an obvious way to solve this problem?
For extra context, this is a problem for 2 reasons:
1) I pay for each message received and am over quota.
2) When messages are received I process them and do database updates, and these are also being duplicated.
You basically want to have an App Service with potentially multiple instances, but you don't want your application to run in parallel. At the very least you don't want to have two subscriptions. Ideally you don't want to touch your application code.
An easy way to implement this would be to wrap your application into a continuous WebJob and set its scale property to singleton.
Here is one tutorial on how to set up a Node.js WebJob: https://morshemesh.medium.com/continuous-deployment-of-web-jobs-with-node-js-2308f95e63b1
You can then use a settings.job file to control that your WebJob only runs on a single instance at any one time. Or you can use the Azure Portal to set the value when you manually deploy the WebJob.
{
"is_singleton": true
}
https://github.com/projectkudu/kudu/wiki/WebJobs-API#set-a-continuous-job-as-singleton
PS: Don't forget to enable Always On. It is also mentioned in the docs. But you probably already need that for your current deployment.
If you don't want your subscription to be duplicated, then it stands to reason that you only want one process subscribing to the external websocket connection.
Since you mentioned that received messages trigger database updates, it makes sense for this to be an isolated backend process, given that you made it clear you have multiple instances running for the frontend server (whether or not the backend is separate).
Of course, if you want more redundancy, you could use a load balancer that distributes messages to any number of instances behind it, perhaps with a persistent queueing system if you feel that it's needed.
If you want these messages to be propagated to the client (not clear from the question), this will be a bit more annoying. If it's a simple one-way channel, then you could consider using SSE, which is a rather simple protocol (a rough sketch follows below). If it's bidirectional, then I would probably consider running a STOMP server with an intermediary broker (like RabbitMQ) and connecting directly from the client (i.e. the browser, not the server generating the frontend) to the service.
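For the one-way case, here is a minimal, illustrative sketch of an SSE endpoint in Express; the /events route and the demo heartbeat are assumptions, and in practice you would write to the response whenever a message arrives from the external websocket:

const express = require('express');
const app = express();

app.get('/events', (req, res) => {
  res.set({
    'Content-Type': 'text/event-stream',
    'Cache-Control': 'no-cache',
    Connection: 'keep-alive',
  });
  res.flushHeaders();

  // Push one message to this client in SSE framing.
  const send = (data) => res.write(`data: ${JSON.stringify(data)}\n\n`);
  const timer = setInterval(() => send({ ts: Date.now() }), 1000); // demo heartbeat

  req.on('close', () => clearInterval(timer)); // stop pushing when the client disconnects
});

app.listen(3000);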
Not sure if you're well versed in Java, but I made an app that you could use for reference, in case you're interested, when we had to prepare some internal demos: https://github.com/kimgysen/zwoop-backend/tree/develop/stomp-api/src/main/java/be/zwoop
For all intents and purposes, I'm not sure if all this is worth the hassle for you; it sounds like you're on a tight budget and looking for simple solutions without too much complexity. Have you considered giving up on load balancing the website (is the load really that high?)? I don't have enough background knowledge on your project to judge. Proper caching optimization and initially scaling vertically may be sufficient at the start.
Personally I would start simple and gradually increase complexity when needed.
I'm just throwing ideas at you; hopefully it helps to have a few considerations.
Btw, I don't understand why other answers on this question were all deleted (?).

Storing real time canvas session data on Node JS

I'm writing a real-time paint application using Node.js + HTML5 canvas & websockets.
Currently the server just acts as a relay, and whatever each user draws is broadcast to the rest of the users.
The problem here is that when new users show up, they start with an empty canvas.
I have two ideas on how to solve this:
1) The event-driven approach - I persist in memory each and every draw event, and when a new user joins the session, all events are reconstructed and sent to him/her.
2) The server maintains a copy of the canvas. So rather than just relaying the draw events, the server also renders all the draw events. When a new user shows up, this state is then passed on to them.
Does anyone have any thoughts on the pros and cons of the two approaches, or better yet, a better way to solve it?
This is my opinionated answer.
The best approach is to have the server maintain a copy of the data via a db. This means that your clients will always have data to start from, even in the case of dropped packets when a new client joins, and you can also keep legacy data. When I developed a similar concept I used game objects as an example and got good results with many clients, without noticeable lag on a local network even with a faulty design concept. Hope this helps.
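As an illustration only, here is a rough in-memory sketch of the event-replay option (1) using the ws package; per the answer above you would persist drawEvents to a db instead, so the history survives restarts:

const WebSocket = require('ws');
const wss = new WebSocket.Server({ port: 8080 });

const drawEvents = []; // every draw event received so far, in order

wss.on('connection', (socket) => {
  // Replay the history so a new user does not start with an empty canvas.
  for (const evt of drawEvents) socket.send(evt);

  socket.on('message', (msg) => {
    drawEvents.push(msg);
    // Relay to everyone else, as the server already does today.
    for (const client of wss.clients) {
      if (client !== socket && client.readyState === WebSocket.OPEN) {
        client.send(msg);
      }
    }
  });
});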

Custom progressive audio streaming in browser

Say I'd like to create my very own progressive streaming mechanism in JavaScript, because I find the browser's built-in streaming mechanism not fault-tolerant enough, or I'd like to implement my own custom method over WebSocket. I would like to create a buffer which holds the already-downloaded segments of a continuous media file (say an ArrayBuffer or something like that). Is it possible to play this file even if it hasn't been downloaded from start to end yet?
My only idea was the Web Audio API, which has a noteOn() function for precisely timing the start of each segment. However, I don't know how gapless this would be. It also introduces the problem that I have to know exactly where audio files can be cut safely on the server side, so the next part can be decoded without any loss or gaps. E.g. MP3's bit reservoir stores audio data in neighbouring audio frames even in CBR mode, which makes things difficult.
What about creating a ScriptProcessorNode that feeds from your incoming buffers? The biggest issue is making sure that the segments are convertible to raw audio samples, but otherwise you could write a simple function in the onaudioprocess event handler that pulls in the next available buffer chunk and copies it into the node's output buffers; a rough sketch follows below. Since this would be a pull-on-demand mechanism, you wouldn't need to worry about timing segment playback.
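A rough sketch of that idea, assuming the incoming segments have already been decoded into Float32Array chunks of raw PCM samples (the decoding itself is the hard part and is not shown):

const ctx = new AudioContext();
const queue = []; // Float32Array chunks, pushed in as they arrive (e.g. over a WebSocket)

const node = ctx.createScriptProcessor(4096, 1, 1); // bufferSize, input channels, output channels
node.onaudioprocess = (e) => {
  const out = e.outputBuffer.getChannelData(0); // starts out as silence
  let offset = 0;
  while (offset < out.length && queue.length > 0) {
    const chunk = queue[0];
    const n = Math.min(chunk.length, out.length - offset);
    out.set(chunk.subarray(0, n), offset);
    offset += n;
    if (n === chunk.length) queue.shift();
    else queue[0] = chunk.subarray(n); // keep the remainder for the next callback
  }
};
node.connect(ctx.destination);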

Where in my stack can streams solve problems, replace parts and/or make it better?

If I take a look at the stream library landscape I see a lot of nice stuff (like mapping/reducing streams), but I'm not sure how to use it effectively.
Say I already have an Express app that serves static files and has some JSON REST handlers connecting to a MongoDB database server. I have a client-heavy app that can display information in widgets and charts (think Highcharts), with the user filtering, drilling down into information, etc. I would like to move to real-time updating of the interface, and this is the perfect little excuse to introduce Node.js into the project, I think. However, the data isn't really real-time, so pushing new data to a lot of clients isn't what I'm trying to achieve (yet); I just want a fast experience.
I want to use Browserify, which gives me access to the Node.js streams API in the browser (and more), and given the enormity of the data sets, processing is done server-side (by a backend API over JSONP).
I understand that most of the connections at some point are already expressed as streams, but I'm not sure where I could use streams elsewhere effectively to solve a problem:
Right now, when sliders/inputs are changed, spinning loaders appear in the affected components until the new JSON has arrived, been parsed, and is ready to be shot into the chart/widget. With a Node.js server in between, would streaming the data instead of request/responding chunks of JSONPified number data speed up the interactivity of the application?
Say that I have some time-series data. Can a stream be reused so that when I say I want to see only a subset of the data (by time), I can have the stream resend its data, filtering out the points I don't care about? (A sketch follows below.)
Would streaming data to a (high)chart be a better user experience than using a for loop and an array?
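On the time-series point, one hedged sketch of how a reusable filter could look with an object-mode Transform stream; the { time } field and the bounds are assumptions about the data shape:

const { Transform } = require('stream');

// Returns a stream that only passes through points inside [from, to].
function timeRangeFilter(from, to) {
  return new Transform({
    objectMode: true,
    transform(point, _enc, callback) {
      if (point.time >= from && point.time <= to) this.push(point);
      callback();
    },
  });
}

// Usage sketch: sourceStream.pipe(timeRangeFilter(start, end)).pipe(chartWriter);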

HTTP data streaming

I've got a backend to be implemented in Python that should stream data to a web browser, where JavaScript creates the representation (e.g. continuously updating a variable or drawing to a <canvas>).
That data will update at a rate of up to 100 Hz (and might, as a worst-case scenario, even reach 1000 Hz...) with perhaps 10-20 bytes each.
So my first thought of using the COMET pattern would produce far too much overhead, I guess.
My next guess were WebSockets. They would be a perfect fit - but being disabled in Firefox makes them unusable for me.
So what is your recommendation to use in this case?
(Requirements: runs in a few modern browsers with pure JavaScript, no Flash or Java allowed; backend in Python; jQuery is already in use; implementation should be easy, preferably using lightweight libs.)
The solution I went with is to use the COMET pattern and transport all the data that queued up in the backend since the last request. So I'm not polling during times of slow data generation (-> COMET), and I'll only have as many connections as the frontend (i.e. the browser) can handle, since it's the one creating them.
The overhead is also reduced because each request carries several data points. (You could even say that the overhead scales dynamically with the data rate: as the data rate gets higher, the relative overhead sinks...)
As an update to this question: nowadays you should be able to use Server-sent events. I didn't use XHR because it keeps the entire response in memory, and I didn't use websockets since I didn't need duplex comms. I had pretty much the same question and answered it here:
How to process streaming HTTP GET data?
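For reference, the browser side of an SSE setup needs nothing beyond the built-in EventSource; the /stream URL, the payload shape and drawToCanvas() below are assumptions about the Python backend and the rendering code:

const source = new EventSource('/stream');

source.onmessage = (e) => {
  const point = JSON.parse(e.data); // e.g. {"t": 1234, "v": 0.42}
  drawToCanvas(point);              // hypothetical rendering function
};

source.onerror = () => {
  // EventSource reconnects automatically; just log for visibility.
  console.warn('SSE connection interrupted, retrying...');
};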
