I have a little snippet of node.js code in front of me that looks like this:
console.time("queryTime");
doAsyncIOBoundThing(function(err, results) {
    console.timeEnd("queryTime");
    // Process the results...
});
And of course when I run this on my (otherwise idle) development system, I get a nice console message like this:
queryTime: 564ms
However, if I put this into production, won't there likely be several async calls in progress simultaneously, and each of them will overwrite the previous timer? Or does node have some sort of magical execution context that gives each "thread of execution" a separate console timer namespace?
Just use unique labels and it will be safe. That's why you use a label: to uniquely identify the start time.
As long as you don't accidentally use a label twice, everything will work exactly as intended. Also note that Node usually has only one thread of execution.
Wouldn't this simple code work?
var labelWithTime = "label " + Date.now();
console.time(labelWithTime);
// Do something
console.timeEnd(labelWithTime);
Consider newer Node.js features too, as the platform has evolved. Please look into process.hrtime() and Node.js's other Performance API hooks:
https://nodejs.org/api/perf_hooks.html#perf_hooks_performance_timing_api
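For example, here's a minimal sketch of the same measurement without any shared label, using performance.now() from perf_hooks (doAsyncIOBoundThing stands in for whatever async call you're timing):
const { performance } = require('perf_hooks');

// Each call site keeps its own start time in a local variable,
// so concurrent calls can't clobber each other.
const start = performance.now();
doAsyncIOBoundThing(function (err, results) {
    console.log(`queryTime: ${(performance.now() - start).toFixed(1)}ms`);
    // Process the results...
});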
I have a Puppeteer CI/CD project where I use the NodeEnvironment to navigate to our test harness website, pull down the current list of custom XML tests from the dropdown element our QA team uses, and then dynamically construct an array of 1200+ URLs from it. I was then using this.global as a reference array and clustering to run these in parallel.
I'm having problems wrapping my head around the proper way to get this list populated and then run the tests in parallel. The parameterized test option seems close to what I want, but I need a way to populate the dynamic array of URLs before I reach the for loop; as it stands, the array is not yet populated when the for loop runs and tries to execute tests, even when using a promise.
I know I could probably hack something to get things working, but I would much rather know the proper, expected way of doing this that allows me to take advantage of the parallelization Playwright provides.
I am currently looking into worker fixtures and sharding to see if either provides a way of achieving this, but the problem is that if each worker goes to the website to populate the array, they will all end up with the same 1200 test cases, which doesn't help either. I'm open to any ideas here, but an important thing to state is that I want each URL to have its own test, as each test has a series of GET requests that I need to capture and compare against the query params.
A drastically minimized example of what I currently have is below:
dynamic.spec.js
//The object that contains the 1200+ urls that needs to be built at the start
let globalUrlObj;

test.beforeAll(async () => {
    globalUrlObj = await setupUrlList(); //<-- This method goes to website and builds list
});

// Because globalUrlObj was not waited on above in beforeAll, it is undefined at this point and the execution comes back stating that there are no tests to run
if (globalUrlObj && globalUrlObj[config.baseKey] !== undefined) {
    for (const testUrlObj of globalUrlObj[config.baseKey]) {
        test(`Testing with url ${testUrlObj.url}`, async () => {
            // Perform my analysis
            // ...
        });
    }
}
First things first: you do not need beforeAll here, as far as I can see. Think about how it works in parallel execution if you have beforeAll: every worker that exists will run it (you do not need that, you need it to run only once). So just put that code outside of it.
Choose a regular for loop for declaring the tests and add an iterator i to make each test title unique. A minimal sketch of the idea follows; if this does not help you, please write what kind of error you face.
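Something like this sketch, assuming the URL list has already been written to a urls.json file by a global setup script (the file name and the config import are placeholders), since Playwright collects tests synchronously when the spec file is loaded rather than inside hooks:
// dynamic.spec.js
const { test } = require('@playwright/test');
const fs = require('fs');
const config = require('./config'); // placeholder for wherever config.baseKey lives

// Read the pre-built list synchronously, outside of any hook.
const globalUrlObj = JSON.parse(fs.readFileSync('./urls.json', 'utf8'));
const urls = globalUrlObj[config.baseKey] || [];

for (let i = 0; i < urls.length; i++) {
    test(`Testing with url ${urls[i].url} (#${i})`, async ({ page }) => {
        // Perform the analysis for urls[i]...
    });
}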
I am writing a simple extension to open a browser by clicking the extension button. I would like to know if there is a function which can execute a shell command passed as an argument. Also, it'd be really helpful if anyone could suggest a good, simple reference for extension development.
From https://github.com/GNOME/gnome-shell/blob/master/js/misc/util.js:
// Runs #command_line in the background, handling any errors that
// occur when trying to parse or start the program.
function spawnCommandLine(command_line) {
    try {
        let [success, argv] = GLib.shell_parse_argv(command_line);
        trySpawn(argv);
    } catch (err) {
        _handleSpawnError(command_line, err);
    }
}
There are a few variations on that method in there. Save yourself mountains of headaches and just bookmark the GitHub repository.
Some quick links:
popupMenu.js: working with popup menus
panel.js: a good read for implementing "tray" icons
modalDialog.js: some UI elements were made to be reused, runDialog.js uses this for example
mpris.js: there are also good examples of using frameworks like DBus in gjs
I can't stress enough how much you'll get out of reading the gnome-shell source. Unfortunately, it's compiled into a resource file now so we don't have local copies to stumble upon.
UPDATE (2021)
If you're reading this, please instead see the documentation available on gjs.guide. Specifically the documentation on Spawning Subprocesses, which covers why this is a bad idea in extensions and how to do it slightly less badly.
If you're not interested in the result - i.e. when you want to open a browser window - you can just use GLib.spawn_command_line_async like so:
const GLib = imports.gi.GLib;
...
(this._menuEntries[i]).connect('activate', () => {
    GLib.spawn_command_line_async('firefox http://example.com?p=' + my_params[i]);
});
If you need a synchronous result, read https://gjs.guide/guides/gio/subprocesses.html
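For completeness, here's a small sketch of the synchronous case using Gio.Subprocess, roughly along the lines of what that page describes (the ['ls', '-la'] argv is just an example command):
const Gio = imports.gi.Gio;

let proc = Gio.Subprocess.new(
    ['ls', '-la'],   // argv as a list, not a shell string
    Gio.SubprocessFlags.STDOUT_PIPE | Gio.SubprocessFlags.STDERR_PIPE
);

// communicate_utf8() blocks until the process exits and hands back its output
let [, stdout, stderr] = proc.communicate_utf8(null, null);

if (proc.get_successful())
    log(stdout);
else
    log(stderr);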
I have two programs that should be running the same. They are not. I'd like to see where their execution diverges. Is there an option in Chrome or Firefox or Safari to log/echo every line of JavaScript as it is executed? Or some other way to do this short of manually adding console.log every few lines? Note: the divergence is 10k or 20k, maybe 100k, lines deep, and ideally I'd want it to print variables similar to the Chrome dev tools.
Then I can just dump the logs and find the divergence
Stepping through the code in the debugger is not a solution as it would take hours if not days to step that far.
One idea is to use a Babel or UglifyJS plugin to emit, for each line, code that prints what it is about to do or just did.
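Something along these lines is what I'm imagining (a rough, untested sketch of a Babel plugin): before every statement that sits in a block or program body, insert a console.log of its original line number, so the two traces can be dumped and diffed.
// babel-plugin-log-lines.js (rough sketch)
module.exports = function ({ types: t }) {
    return {
        visitor: {
            Statement(path) {
                // Skip nodes without source locations (e.g. ones we inserted ourselves)
                // and statements that aren't inside a statement list.
                if (!path.node.loc || !Array.isArray(path.container)) return;
                const line = path.node.loc.start.line;
                path.insertBefore(
                    t.expressionStatement(
                        t.callExpression(
                            t.memberExpression(t.identifier('console'), t.identifier('log')),
                            [t.stringLiteral('exec line ' + line)]
                        )
                    )
                );
            },
        },
    };
};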
Another idea: if there is a way to dump all of memory from JS, I could compare all objects and all references. They should be the same, so when I see two dumps that differ I'll have found my bug. Note: JSON.stringify is not an option as I need to see all variables/objects/classes etc.
Note: I'm not looking for basic answers like "use console.log" or "step in the debugger". I appreciate the help and maybe I've overlooked something simple but I do have quite a bit of JavaScript experience.
Maybe an example would help. Imagine you got the source to an app as large as google docs. You run some processor over it that's not supposed to break anything or change anything. Except it does. You look at the changes and can't see anything wrong. All you know is when you run it it runs but a few things are subtly broken. So you put a breakpoint there and see the data is bad. But when did it go bad? You don't know the code (you just got it). It could have been 100s of thousands of lines before. You have no idea where to put breakpoints or console.logs. It could take weeks. But, given you know the code should run exactly the same if you could print all lines of execution you'd find the bug in minutes instead of days.
You can add debugger; at the beginning of the function() or wherever you want, and open the console.
When the debugger statement is reached, it stops the execution. After that you can execute code step by step and add some watches.
It works fine in all recent browsers.
Example :
function test()
{
    var rand = Math.random();
    debugger;
    return rand;
}
test();
It is Node.js, but it may be helpful for you: set the NODE_V8_COVERAGE environment variable to a directory, and coverage data will be output to that directory when the program exits.
https://blog.npmjs.org/post/178487845610/rethinking-javascript-test-coverage
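A minimal sketch of that approach (app.js is a placeholder for the script you want to trace): run the program as a child process with NODE_V8_COVERAGE set, then diff the JSON files Node writes into the coverage directory between the two runs.
const { execFileSync } = require('child_process');

// Run the target script with V8 coverage enabled; Node writes JSON coverage
// files into ./coverage when the child process exits.
execFileSync(process.execPath, ['app.js'], {
    env: { ...process.env, NODE_V8_COVERAGE: './coverage' },
    stdio: 'inherit',
});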
I have a C++ project that I compile to JavaScript using Emscripten. This works; however, for resource-limit and interactivity reasons I would like to run it inside a web worker.
However, my project uses the stdin. I found a way to provide my own implementation of stdin by overwriting Module['stdin'] with a function that returns a single character at a time of the total stdin, and closes with 0 as EOF.
This works when the script runs inside the page, as the Module object present in the html file is shared with the script.
When you run as a webworker though, this module object is not shared. Instead, message passing makes sure the regular functionality of Module still works. This does not include 'stdin'.
I worked around this by modifying the output javascript:
A: Adding an implementation of a Module object that includes this stdin specification. This function is modified to read a variable of the webworker as if it were the stdin and feed this on a per-character basis.
B: Changing the onmessage of the webworker to call an additional function handling my own events.
C: This additional function listens to the events and reacts when the event is the content of stdin, by setting the variable that the stdin function I specified reads.
D: Adding and removing run dependencies on this additional event to prevent the C++ code from running without the stdin specified.
In code:
Module['stdin_pointer'] = 0;
Module['stdin_content'] = "";
Module['stdin'] = (function () {
    if (Module['stdin_pointer'] < Module['stdin_content'].length) {
        let code = Module['stdin_content'].charCodeAt(Module['stdin_pointer']);
        Module['stdin_pointer'] = Module['stdin_pointer'] + 1;
        return code;
    } else {
        return null;
    }
});

external = function (message) {
    switch (message.data.target) {
        case 'stdin': {
            Module['idpCode'] = message.data.content;
            removeRunDependency('stdin');
            break;
        }
        default: throw 'wha? ' + message.data.target;
    }
};

[...]

addRunDependency("stdin");

[...]

//Change this in the original onmessage function:
// default: throw 'wha? ' + message.data.target;
//to
default: { external(message); }
Clearly, the A & C parts are quite easy because they can be added at the start (or near the start) of the JS file, but B & D (adding your own dependencies and getting your own message handler into the loop) require you to edit the code inline.
As my project is very large, finding the necessary lines to edit can be very cumbersome, even more so in optimized and minified Emscripten code.
Automatic scripts to do this, as well as the workaround itself, are likely to break on new emscripten releases.
Is there a nicer, more proper way to reach the same behavior?
Thank you!
//EDIT:
The --separate-asm flag is quite helpful, in the respect that the file I must edit is now only a few lines long (in minified form). It greatly reduces the burden, but it is still not a proper way, so I'm reluctant to mark this as resolved.
The only way I know of achieving what you want is to not use the Emscripten-supplied worker API, and roll your own. All the details are probably beyond the scope of a single question, but at a high level you'll need to...
Compile the worker module with your processing code, but not using the BUILD_AS_WORKER flag
At both the UI and worker ends, you'll need to write some JavaScript code to communicate in/out of the C++ worlds, using one of the techniques at http://kripken.github.io/emscripten-site/docs/porting/connecting_cpp_and_javascript/Interacting-with-code.html, that then directly calls the JavaScript worker API https://developer.mozilla.org/en-US/docs/Web/API/Web_Workers_API/Using_web_workers
At the Worker side of this, you will be able to control the Module object, setting stdin as you see fit
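As a very rough sketch of the worker side (file names and message shapes are only illustrative, not something Emscripten generates for you; instead of run dependencies, it simply delays loading the compiled code until stdin has arrived):
// worker.js — hand-rolled worker, not using BUILD_AS_WORKER
let stdinContent = '';
let stdinPointer = 0;

// Define Module before loading the Emscripten output so it picks up our stdin.
self.Module = {
    stdin: function () {
        if (stdinPointer < stdinContent.length)
            return stdinContent.charCodeAt(stdinPointer++);
        return null; // EOF
    },
    print: function (text) {
        // Forward the program's stdout back to the page.
        self.postMessage({ target: 'stdout', content: text });
    },
};

self.onmessage = function (message) {
    if (message.data.target === 'stdin') {
        stdinContent = message.data.content;
        stdinPointer = 0;
        // Only load (and therefore run) the compiled code once stdin is available.
        importScripts('my_project.js'); // placeholder for the Emscripten-generated file
    }
};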
As a side-note, I have found that the Emscripten-supplied C++ wrappers for JavaScript functionality, such as workers, graphics, audio, HTTP requests etc., are good to get going at first, but have limitations and don't expose everything that is technically possible. I have often had to roll my own to get the functionality needed. Although not for the same reasons, I have also had to write my own API for workers.
I'm currently developing a tutorial site for teaching the fundamentals of Web development (HTML, CSS, and JavaScript, for starters). I'd like a setup where I could give in-depth coverage of all sorts of topics and then provide a basic sandbox environment where the user could write code which solves the question asked at the end of each tutorial section.
For example, if I'd covered multiplication in a previous tutorial, and the user had just finished a lesson on functions being capable of returning values, I might request that they submit a function which returns the product of two parameters.
Is this not the perfect instance in which using dynamic function creation would be considered a good idea? Let's look at an example.
<script>
function check()
{
    eval('var f = ' + document.getElementById('user_code').value);
    if (f(5, 10) == 50)
    {
        // user properly wrote a function which
        // returned the product of its parameters
    }
}
</script>
Is this at all a bad idea? If so, please explain.
This sounds like it could work. However, the biggest challenge in your environment might be error handling. Students will surely make all sorts of errors:
Compile time errors, that will be detected in eval()
Run time errors, that will be detected when you call the function
Undetectable run time errors, such as an infinite loop or a stack overflow
A more elaborate approach might parse the entered Javascript into a parse tree representation, then compare it to an expected parse tree. If it does not match, then point out what might be wrong and have the student try again. If it does match, then you can eval() and call the function, knowing that it will do what you expect.
Implementing a lexer and parser for Javascript in Javascript would be challenging but certainly not impossible.
This should work as long as you're operating it in a closed environment. eval opens you up to code-injection attacks, so I wouldn't put this on a publicly accessible web site, but if it's completely contained within your classroom you should be OK.
The code would work, but what if there is an error, syntactic or otherwise? Perhaps using a try block to catch any error and display it to the user would help things a little...
Not sure if this helps.
Sounds like you want to remake Firebug or even the new Developer Tools in IE8. Due to that, I'm going to have to say there is never a useful case. Not to mention the possibilities of script injection if this site goes public.
In your case, I feel that there is nothing wrong with this. Alternatively you can run the code by using new Function() to build stuff first and then run it. In theory, this would separate the stages of "compiling" and executing. However eval will check the code first and throw errors anyway:
var usercode = document.getElementById('user_code').value;
try {
    var f = new Function( 'a', 'b', 'return (' + usercode + ')(a,b);' );
    if ( f( 5, 10 ) == 50 ) {
        // user properly wrote a function which
        // returned the product of its parameters
    }
    else {
        // user wrote code that ran but produced incorrect results
    }
}
catch ( ex ) {
    // user wrote something really bad
}
The problem with doing things in this manner is that the exceptions thrown may be nonsensical. "foo;;=bar" will report a "missing ) in parenthetical" error while eval will throw a proper syntax error. You could bypass this by (regexp) grabbing the parameters and body from the user code first and then building it. But then, how would this be any better than an eval?
I think that your real problem will be helping users avoid the pitfalls of implicit globals. How are you going to help users avoid writing code that only works the second time it runs because a global was set the first time? Will you not need to implement a clean sandbox every run? I would take a look at how jsbin.com, firebug and similar tools handle these things.
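One way to get a fresh global scope on each run is a throwaway iframe; here's a rough sketch (the check against 50 mirrors your example, and runInCleanSandbox is just an illustrative name):
function runInCleanSandbox(userCode) {
    // Evaluate the submission inside a disposable iframe so that stray globals
    // don't leak from one attempt to the next.
    var iframe = document.createElement('iframe');
    iframe.style.display = 'none';
    document.body.appendChild(iframe);
    try {
        var f = iframe.contentWindow.eval('(' + userCode + ')');
        return f(5, 10) == 50;
    } finally {
        document.body.removeChild(iframe);
    }
}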
My feeling is that you should go with eval for now and change it for more elaborate stuff later if the need arises.