The print statement in Python is not thread-safe. Is it safe to use console.log in Node.js concurrently?
If so, then is it also interleave-safe? That is, if multiple (even hundreds) of callbacks write to the console, can I be sure that the output won't be clobbered or interleaved?
Looking at the source code, it seems that Node.js queues concurrent attempts to write to a stream (here). On the other hand, console.log's substitution flags come from printf(3). If console.log wraps around printf, then that can interleave output on POSIX machines (as shown here).
In your answer, please point me to where the asynchronous ._write(chunk, encoding, cb) is implemented inside Node.js.
EDIT: If it is fine to write to a stream concurrently, then why does this npm package exist?
Everything in Node.js is basically "atomic" in this sense: Node.js runs your JavaScript on a single thread, so no synchronous stretch of code is ever interrupted partway through.
The Node.js event loop is single-threaded, but the asynchronous calls are multi-threaded: under the hood Node.js uses libuv, a library that maintains a pool of worker threads.
link:
https://medium.com/the-node-js-collection/what-you-should-know-to-really-understand-the-node-js-event-loop-and-its-metrics-c4907b19da4c
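To see what that means in practice, here is a small experiment (my own sketch, not from the linked article): fire hundreds of callbacks that each write one complete line with a single console.log call, then inspect the output for clobbered lines. Results can vary with the Node.js version, the OS, and whether stdout is a TTY, a pipe, or a file.

const marker = 'x'.repeat(200);

for (let i = 0; i < 500; i++) {
  setImmediate(() => {
    // Each callback makes exactly one console.log call with one full line.
    console.log(`${i} ${marker}`);
  });
}

// Redirect to a file (node test.js > out.txt) and check that every line still
// reads "<number> xxx...x"; a broken line would mean the output interleaved.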
Based on what I see on my Node.js console it is NOT "interleave-safe".
I can see my console output is sometimes "clobbered or interleaved". Not always: when I run my program, it is maybe every fifth run that I see interleaved output from multiple log statements.
This may of course depend on your Node.js version and the OS you are running it on. For the record my Node.js version is v12.13.0 and OS is Windows 10.0.19042.
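For what it's worth, here is one way (an assumption on my part, not necessarily what happened above) that output gets interleaved even though the JavaScript itself is single-threaded: a "line" built from several separate writes, with other callbacks running in between.

function logInPieces(id) {
  process.stdout.write(`[${id}] part one `);
  // Another callback can run before the second half is written.
  setImmediate(() => process.stdout.write(`[${id}] part two\n`));
}

logInPieces('A');
logInPieces('B');

// Possible output:
// [A] part one [B] part one [A] part two
// [B] part two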
I'm using CefSharp.OffScreen (C#) with Cef Package version 89.0.170.0.
I'm running 8 offscreen browsers concurrently to capture and render some web pages, but 7 of them throw a "Task canceled" exception when I try to run an async JS execution task at the beginning of the capture, as if only one of them were capable of running JS while the others wait for their turn and fail miserably.
Here is the code I'm using to execute some JS:
JavascriptResponse result = await browser.MainFrame.EvaluateScriptAsync("some JS code", timeout: TimeSpan.FromMilliseconds(1000));
I tried putting some delay between each browser's capture job and it seems to be faring a little better, but it's still dependent on the time it takes each browser to execute the scripts and continues to fail for some browsers. I don't like this method as it's not very reliable anyway.
Overall, despite the fact I have 8 different browser instances, it looks like there's only one JS execution engine running and stalling the other browsers.
Am I doing something wrong? Is there a way to make the browsers wait longer before canceling the task? What makes them even cancel the task in the first place?
Best regards.
In my Node.js application, I want to trace the execution of my asynchronous code without using a debugger (debugging could interrupt my asynchronous calls), by printing out each filename and line number in the order the code executes.
As of now, I'm using DEBUG=* npm start to see debug logs in Express, but it's not giving me the information I'm looking for.
Is there any better way or module to achieve this?
I've never done this before.
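Something like the helper below is roughly what I have in mind (just a sketch I put together; the stack-frame index and the regex are assumptions and may need adjusting):

function traceLog(message) {
  // Frame [0] is "Error", [1] is this function, [2] is whoever called traceLog.
  const frame = new Error().stack.split('\n')[2] || '';
  const match = frame.match(/\(?([^\s()]+):(\d+):\d+\)?$/);
  const where = match ? `${match[1]}:${match[2]}` : 'unknown';
  console.log(`[${where}] ${message}`);
}

traceLog('handler entered'); // e.g. [/app/routes/users.js:42] handler entered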
I'm using https://github.com/codius/codius-host. Codius development has been abandoned, but I want to salvage part of it for my own project. I really need to be able to run codius commands from the browser, so I need to develop a library (or whatever you would call it) along these lines:
var codius = require('codius')
codius.upload({ host: 'http://contract.host' })
codius-host comes packed with command-line integration,
$ CODIUS_HOST=https://codius.host codius upload
How do I make a .js script do what the command-line command does?
also posted on https://stackoverflow.com/questions/31126511/if-i-have-a-npm-tool-that-uses-comman-line-commands-how-can-i-create-a-javascri
I'm having a hard time asking this question since I don't know where to start. Help.
Assuming you have access to the codius-host source code, find the piece of code that handles the command-line arguments. Most likely an entry module/function parses the command and its arguments and then delegates the real work to another module/function. What you need to do is call that delegated function directly, passing it the same parameters the command-line handler would have produced.
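Purely as an illustration of that idea (the module path, option names, and callback signature below are hypothetical, not real codius-host internals), it would look something like this:

// Hypothetical path and signature -- replace them with whatever the real
// command handler in codius-host turns out to be.
var upload = require('codius-host/lib/commands/upload');

upload({ host: process.env.CODIUS_HOST || 'https://codius.host' }, function (err, result) {
  if (err) return console.error(err);
  console.log('uploaded:', result);
});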
In addition, there are some Node.js libraries that can imitate a command-line call from within the program itself. One I know of is shelljs:
https://www.npmjs.com/package/shelljs
You might want to check it out as well. With it, you might be able to reproduce the command-line behaviour without touching the source code at all.
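For example, a minimal sketch using shelljs (assuming the codius CLI is installed and on the PATH, and reusing the CODIUS_HOST value from the question):

var shell = require('shelljs');

// Runs the same thing you would type in a shell; the env-variable prefix
// works in POSIX shells (set CODIUS_HOST differently on Windows).
var result = shell.exec('CODIUS_HOST=https://codius.host codius upload', { silent: true });

if (result.code !== 0) {
  console.error('codius upload failed:', result.stderr);
} else {
  console.log(result.stdout);
}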
I have a little snippet of node.js code in front of me that looks like this:
console.time("queryTime");
doAsyncIOBoundThing(function(err, results) {
console.timeEnd("queryTime");
// Process the results...
});
And of course when I run this on my (otherwise idle) development system, I get a nice console message like this:
queryTime: 564ms
However, if I put this into production, won't there likely be several async calls in progress simultaneously, and each of them will overwrite the previous timer? Or does node have some sort of magical execution context that gives each "thread of execution" a separate console timer namespace?
Just use unique labels and it will be safe. That's why you use a label, to uniquely identify the start time.
As long as you don't accidentally use the same label twice, everything will work exactly as intended. Also note that Node usually has only one thread of execution.
Wouldn't this simple code work?
var labelWithTime = "label " + Date.now();
console.time(labelWithTime);
// Do something
console.timeEnd(labelWithTime);
Also consider newer Node.js features, since the platform has evolved. Please look into process.hrtime() and Node.js's other performance API hooks:
https://nodejs.org/api/perf_hooks.html#perf_hooks_performance_timing_api
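For instance, a minimal sketch (mine, not taken from the official docs) using process.hrtime.bigint(): each call keeps its own start value in a closure, so there is no shared label to clobber. doAsyncIOBoundThing is just a stand-in for the query from the question.

function doAsyncIOBoundThing(callback) {
  // Stand-in for the real I/O-bound call from the question.
  setTimeout(() => callback(null, { rows: [] }), 100);
}

const start = process.hrtime.bigint();
doAsyncIOBoundThing((err, results) => {
  const elapsedMs = Number(process.hrtime.bigint() - start) / 1e6;
  console.log(`queryTime: ${elapsedMs.toFixed(1)}ms`);
  // Process the results...
});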
Recently I developed a short Python script using the Flask framework to start a process over HTTP (see here). Now the question is which technology is adequate for "piping" the standard streams (I am especially interested in stdout and stderr) of the started processes to the web.
What would you say is the most suitable way to accomplish this? At first I thought WebSockets would fit perfectly, but then it seemed to me that most implementations hide the list of connected clients, and I think I need that information in order to send the output to all of them.
Edit: I think my question was a bit unclear. I want to see the output of the executed command in a web interface, so I have to "convey" data from the stdout/stderr of the executed process to the web interface(s) via HTTP somehow. The started command may run for some (longer) time. The example which invokes the dd command is not representative, as it does not output anything in the given setting. And sure, I would have to use the output of the subprocess.Popen objects (e.g. via the communicate() method).
Not very clear what your goal is. Your code has this. Perhaps that's a clue?
current_app.process = subprocess.Popen(
    ['dd', 'if=/dev/zero', 'of=/dev/null'])
Trawl through the docs for subprocess. You'll find you can specify stdout and stderr.
http://docs.python.org/library/subprocess.html#module-subprocess
One strategy might be to capture stdout/err to file then rewrite that file to your http response.
One approach that will work (it is non-blocking and can serve multiple clients) is using Twisted:
Connect a ProcessProtocol
http://twistedmatrix.com/documents/current/core/howto/process.html
to a Twisted WebSocket server
https://github.com/tavendo/AutobahnPython
Disclosure: I am the author of Autobahn.