Node.js process.stdin - javascript

I'm just now getting into node.js and reading input from the command line.
I'm a bit confused on the following code example.
process.stdin.setEncoding('utf8');

process.stdin.on('readable', () => {
  var chunk = process.stdin.read();
  if (chunk !== null) {
    process.stdout.write(`data: ${chunk}`);
  }
});

process.stdin.on('end', () => {
  process.stdout.write('end');
});
First, in the old streams mode, process.stdin.resume() was needed to make it begin to listen. Doesn't using resume() make more sense for performance? Doesn't this continually listen, using up processing power it doesn't need to?
Also, I read the docs, but I'm not understanding what 'end' does here.
The docs say:
This event fires when there will be no more data to read.
But 'readable' is always listening, so do we never get to 'end'?

Continually listening for input doesn't necessarily use more resources than resuming the stream manually; it's just a different way of handling the pipes.
The 'readable' handler stops firing once the 'end' event is triggered, as 'end' closes the stream and there won't be anything readable anymore.
The end event is the translation of the end-of-input signal sent to standard input (for instance, Ctrl+D on a Unix system).
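As a quick way to see both events, here is a minimal sketch (assuming it is saved as stdin-demo.js, a name chosen just for illustration): pipe a finite input into it and 'end' fires as soon as the pipe closes.
// stdin-demo.js -- run as: echo "hello" | node stdin-demo.js
// 'readable' fires while data is buffered; 'end' fires once stdin
// is closed (the pipe ends, or you press Ctrl+D interactively).
process.stdin.setEncoding('utf8');

process.stdin.on('readable', () => {
  let chunk;
  // read() returns null once the internal buffer is drained
  while ((chunk = process.stdin.read()) !== null) {
    process.stdout.write(`data: ${chunk}`);
  }
});

process.stdin.on('end', () => {
  process.stdout.write('end\n');
});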

Related

Node-global-key-listener package Linux

I tried to implement the node-global-key-listener package for my project and was successful with the Windows build as well as on Mac. When I tried it on Ubuntu, for some reason the listener is not listening. I am currently on 20.04.
Upon tracing the code, this listener on the x11ServerKey does not seem to get called.
this.proc.stdout.on("data", data => {
  const events = this._getEventData(data);
  for (let { event, eventId } of events) {
    const stopPropagation = !!this.listener(event);
    this.proc.stdin.write(`${stopPropagation ? "1" : "0"},${eventId}\n`);
  }
});
The child process does get initialized; it's just that the listener is not listening. The C++ part is a little outside of my expertise.
Any idea how to go about this? Let me know if you need more info.
Thank you so much.
I have traced the code up to the part where the X11 server key helper is executed as a Node child process. I think it is getting initialized, but it is not emitting the 'data' events that drive the handler above.
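Not a fix, but a minimal debugging sketch that may narrow it down (here `proc` stands in for the spawned child inside the package, so the variable name is an assumption): attaching error, exit, and stderr listeners usually reveals whether the helper fails to spawn or dies silently on Linux.
// Hypothetical debugging hooks; `proc` stands in for this.proc,
// the child process spawned by node-global-key-listener.
proc.on('error', err => console.error('spawn failed:', err));
proc.on('exit', (code, signal) => console.error('child exited:', code, signal));
proc.stderr.on('data', data => console.error('child stderr:', data.toString()));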

Where and how does libuv interact with code in Node.js

I've been wondering: where and how does libuv interact with code in Node.js? Lately I have been investigating streams, and I have also read the source on GitHub.
Well, let's take the source of the script called destroy.js. This script is responsible for the destruction of streams: stream.destroy(). After that operation:
in the function destroy, the stream states are set to:
writable._writableState.destroyed = true
readable._readableState.destroyed = true
in the function _destroy, the stream states are set to:
writable._writableState.closed = true
readable._readableState.closed = true
in the function emitCloseNT:
writable._writableState.closeEmitted is set to true
readable._readableState.closeEmitted is set to true
the close event is emitted
That's all. So where and how does libuv interact with stream.destroy()? Even the Node documentation for writable.destroy says:
This is a destructive and immediate way to destroy a stream
But what is really going on? I see only state being set on the streams, nothing more. So where does libuv actually destroy the stream?
I'm not a subject matter expert, but after debugging the following code, I got a rough idea of what happens behind the scenes:
const stream = require('stream');

var cnt = 0;
new stream.Readable({
  read(size) {
    if (++cnt > 10) this.destroy();
    this.push(String(cnt));
  }
}).pipe(process.stdout);
Upon this.destroy(), the readableState.destroyed is set to true here, and because of this the following this.push("11") returns false here. If readableState.destroyed had been false, it would instead have called addChunk, which would have ensured that reading goes on by emitting a readable event and calling maybeReadMore (see here).
If the readable stream was created by fs.createReadStream, then the _destroy method additionally calls a close method, which closes the file descriptor.
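To see that last case in action, a minimal sketch (the file path is a placeholder): destroying a stream created by fs.createReadStream runs its _destroy, which closes the underlying file descriptor and then emits 'close'.
const fs = require('fs');

// '/tmp/example.txt' is a placeholder path for illustration
const rs = fs.createReadStream('/tmp/example.txt');

rs.once('readable', () => {
  // For fs streams, _destroy() also closes the underlying file
  // descriptor -- the actual close is performed down in libuv
  rs.destroy();
});

rs.on('close', () => console.log('stream destroyed, fd closed'));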

Asynchronously stopping a loop from outside node.js

I am using Node.js 14 and currently have a loop made of a recursive function and a setTimeout, something like this:
this.timer = null;

async recursiveLoop() {
  //Do Stuff
  this.timer = setTimeout(this.recursiveLoop.bind(this), rerun_time);
}
But sometimes this loop gets stuck and I want it to automatically notice it, clean up and restart. So I tried doing something like this:
this.timer = null;

async recursiveLoop() {
  this.long_timer = setTimeout(() => { throw new Error('Taking too long!'); }, tooLong);
  //Do Stuff
  this.timer = setTimeout(this.recursiveLoop.bind(this), rerun_time);
}

main() {
  //Do other asynchronous stuff
  recursiveLoop()
    .then()
    .catch((e) => {
      console.log(e.message);
      cleanUp();
      recursiveLoop();
    });
}
I can't quite debug where it gets stuck, because it seems quite random and the program runs on a virtual machine. I still couldn't reproduce it locally.
This makeshift solution, instead of working, keeps crashing the whole Node.js application, and now I am the one who is stuck. I have the constraint of working with Node.js 14, without using microservices, and I have never used child_process before. I am a complete beginner. Please help me!
If you have a black box of code (which is all you've given us) with no way to detect errors on it and you just want to know when it is no longer generating results, you can put it in a child_process and ask the code in the child process to send you a message every time it runs an iteration. Then, in your main process, you can set a timer that resets itself every time it gets one of these "health" messages from the child. If the timer fires without getting a health message, then the child must be "stuck" because you haven't heard from it within your timeout time. You can then kill the child process at that point and restart it.
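A minimal sketch of that watchdog pattern, assuming the loop lives in a separate file (worker.js is a hypothetical name) and calls process.send('health') on every iteration:
// main.js -- restarts worker.js whenever it stops sending health messages.
const { fork } = require('child_process');

const HEALTH_TIMEOUT = 30_000; // how long to wait before declaring the child stuck

function startWorker() {
  const child = fork('./worker.js'); // hypothetical file containing the loop
  let watchdog;

  const resetWatchdog = () => {
    clearTimeout(watchdog);
    watchdog = setTimeout(() => {
      console.log('no health message received, restarting worker');
      child.kill();
      startWorker();
    }, HEALTH_TIMEOUT);
  };

  resetWatchdog();
  child.on('message', msg => {
    // the worker is expected to call process.send('health') each iteration
    if (msg === 'health') resetWatchdog();
  });
}

startWorker();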
But, that is a giant hack. You should FIX the code that gets stuck, or at least understand what's going on. Probably you're leaking memory, file handles, or database handles, running code that uses locks and messes up, or there are unhandled errors happening. All are indications of code that should be fixed.

rxjs pausableBuffered multiple subscriptions

I'm trying to write a websocket rxjs-based wrapper.
And I'm struggling with my rxjs understanding.
I have a pause stream which is supposed to pause the pausable buffered streams when an error occurs and resume them once I get an "ok" from the websocket.
Somehow only the first subscription on my pausable buffered streams is fired. From then on, only the queue stacks up higher.
I have prepared a jsbin to reproduce the issue.
https://jsbin.com/mafakar/edit?js,console
There the "msg recived" stream only fires for the first subscription, and then the queue and observer begin stacking up.
I somehow have the feeling this is about hot and cold observables, but I cannot grasp the issue. I would appreciate any help.
Thank you in advance!
It is not a hot/cold issue. What you do in your onMessage is subscribe, then dispose. The dispose terminates the sequence. The onmessageStream should be subscribed to only once, for example, in the constructor:
this.onmessageStream.subscribe(message => console.log('--- msg --- ', message.data));
The subscribe block, including the dispose, should be removed.
Also, note that you used a ReplaySubject without a count, which means the queue holds all previous values. Unless this is the desired behavior, consider changing it to .replaySubject(1).
Here is a working jsbin.
As @Meir pointed out, dispose in a subscribe block is a no-no since its behavior is non-deterministic. In general I would avoid the use of Subjects and rely on factory methods instead. You can see a refactored version here: https://jsbin.com/popixoqafe/1/edit?js,console
A quick breakdown of the changes:
class WebSocketWrapper {
  // Inject the pauser from the external state
  constructor(pauser) {
    // Input you need to keep as a subject
    this.input$ = new Rx.Subject();
    // Create the socket
    this._socket = this._connect();
    // Create a stream for the open event
    this.open$ = Rx.Observable.fromEvent(this._socket, 'open');
    // This concats the external pauser with the
    // open event. The result is a pauser that won't unpause until
    // the socket is open.
    this.pauser$ = Rx.Observable.concat(
      this.open$.take(1).map(true),
      pauser || Rx.Observable.empty()
    )
    .startWith(false);
    // Subscribe and buffer the input
    this.input$
      .pausableBuffered(this.pauser$)
      .subscribe(msg => this._socket.send(msg));
    // Create a stream around the message event
    this.message$ = Rx.Observable.fromEvent(this._socket, 'message')
      // Buffer the messages
      .pausableBuffered(this.pauser$)
      // Create a shared version of the stream and always replay the last
      // value to new subscribers
      .shareReplay(1);
  }

  send(request) {
    // Push to input
    this.input$.onNext(request);
  }

  _connect() {
    return new WebSocket('wss://echo.websocket.org');
  }
}
As an aside, you should also avoid relying on internal variables like source, which are not meant for external consumption. Although RxJS 4 is relatively stable, since those are not part of the public API they could be changed out from under you.
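For illustration, a hypothetical usage of the wrapper above (the pauser subject and the logging are mine, not part of the original answer):
// Hypothetical usage of WebSocketWrapper; the pauser subject is illustrative.
const pauser = new Rx.Subject();
const ws = new WebSocketWrapper(pauser);

// Multiple subscriptions now work, because message$ is shared via shareReplay(1)
ws.message$.subscribe(msg => console.log('subscriber A:', msg.data));
ws.message$.subscribe(msg => console.log('subscriber B:', msg.data));

// Messages sent before the socket opens are buffered, then flushed on open
ws.send('hello');

// Pause and resume the buffered streams from the outside
pauser.onNext(false); // pause
pauser.onNext(true);  // resume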

How do reactive streams in JS work?

I'm a novice in reactive streams and am now trying to understand them. The idea looks pretty clear and simple, but in practice I can't understand what's really going on there.
For now I'm playing with most.js, trying to implement a simple dispatcher. The scan method seems to be exactly what I need for this.
My code:
var dispatch;

// expose method for pushing events to stream:
var events = require("most").create(add => dispatch = add);

// initialize stream, so callback in `create` above is actually called
events.drain();

events.observe(v => console.log("new event", v));
dispatch(1);

var scaner = events.scan(
  (state, patch) => {
    console.log("scaner", patch);
    // update state here
    return state;
  },
  { foo: 0 }
);

scaner.observe(v => console.log("scaner state", v));
dispatch(2);
As I understand it, the first observer should be called twice (once per event), and the scaner callback and second observer once each (because they were added after the first event was triggered).
On practice, however, console shows this:
new event 1
new event 2
scaner state { foo: 0 }
The scaner is never called, no matter how many events I push into the stream.
But if I remove the first dispatch call (the one before creating scaner), everything works just as I expected.
Why is this? I'm reading docs, reading articles, but so far I haven't found anything even similar to this problem. Where am I wrong in my assumptions?
Most probably, you have studied examples like this from the API:
most.from(['a', 'b', 'c', 'd'])
  .scan(function(string, letter) {
    return string + letter;
  }, '')
  .forEach(console.log.bind(console));
They are suggesting a step-by-step execution like this:
Get an array ['a', 'b', 'c', 'd'] and feed its values into the stream.
The values fed are transformed by scan().
... and consumed by forEach().
But this is not entirely true. This is why your code doesn't work.
Here in the most.js source code, you see at line 1340 ff.:
exports.from = from;

function from(a) {
  if(Array.isArray(a) || isArrayLike(a)) {
    return fromArray(a);
  }
...
So from() is forwarding to some fromArray(). Then, fromArray() (below in the code) is creating a new Stream:
...
function fromArray (a) {
  return new Stream(new ArraySource(a));
}
...
If you follow through, you will get from Stream to sink.event(0, array[i]);, with 0 as the timestamp. There is no setTimeout in the code, but if you search the code further for .event = function, you will find a lot of additional code that uncovers more. In particular, around line 4692 there is the Scheduler with delay() and timestamps.
To sum it up: the array in the example above is fed into the stream asynchronously, after some time, even if the time seems to be 0 millis.
Which means you have to assume that, somehow, the stream is first built and then used, even if the program code doesn't look that way. But hey, isn't the goal always to hide complexity :-) ?
Now you can check this with your own code. Here is a fiddle based on your snippet:
https://jsfiddle.net/aak18y0m/1/
Look at your dispatch() calls in the fiddle. I have wrapped them with setTimeout():
setTimeout( function() { dispatch( 1 /* or 2 */); }, 0);
By doing so, I force them also to be asynchronous calls, like the array values in the example actually are.
In order to run the fiddle, you need to open the browser debugger (to see the console) and then press the run button above. The console output shows that your scanner is now called three times:
doc ready
(index):61 Most loaded: [object Object]
(index):82 scanner state Object {foo: 0}
(index):75 scanner! 1
(index):82 scanner state Object {foo: 0}
(index):75 scanner! 2
(index):82 scanner state Object {foo: 0}
First for drain(), then for each event.
You can also reach a valid result (though it's not the same behind the scenes) if you call dispatch() synchronously, with the calls added at the end, after JavaScript was able to build the whole stream. Just uncomment the lines after // Alternative solution, run again, and watch the result.
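For reference, that alternative looks roughly like this (a sketch reusing the names from the question, mirroring the fiddle's commented-out lines): build the whole pipeline first, then dispatch.
// Build the entire pipeline first...
var dispatch;
var events = require("most").create(add => dispatch = add);
events.drain();
events.observe(v => console.log("new event", v));

var scaner = events.scan((state, patch) => {
  console.log("scaner", patch);
  return state;
}, { foo: 0 });
scaner.observe(v => console.log("scaner state", v));

// ...and only then push events, so every consumer is already attached
dispatch(1);
dispatch(2);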
Well, my question appears to be not as general as it sounds; it's just a lib-specific one.
First: the approach from the question is not valid for most.js. Its authors argue that you should 'take a declarative, rather than imperative, approach'.
Second: I tried the Kefir.js lib, and with it the code from the question works perfectly. It just works. Even more, the same approach that is not supported in most.js is explicitly recommended for Kefir.js.
So the problem lies in a particular lib implementation, not in my head.
