What is the difference between hasPendingMacrotasks and hasPendingMicrotasks in NgZone? The documentation seems to be lacking. All I know is that they return a boolean, but what exactly are they checking for? What is considered a microtask, and what is considered a macrotask?
class NgZone {
  static isInAngularZone(): boolean
  static assertInAngularZone(): void
  static assertNotInAngularZone(): void
  constructor({enableLongStackTrace = false}: any)
  run(fn: () => any): any
  runGuarded(fn: () => any): any
  runOutsideAngular(fn: () => any): any
  onUnstable: EventEmitter<any>
  onMicrotaskEmpty: EventEmitter<any>
  onStable: EventEmitter<any>
  onError: EventEmitter<any>
  isStable: boolean
  hasPendingMicrotasks: boolean
  hasPendingMacrotasks: boolean
}
My best guess is that micro refers to tasks from within a specific class, whereas macro probably refers to tasks affecting the whole application. Can anyone confirm this assumption, or shed some light on the specifics?
NgZone Docs:
https://angular.io/docs/ts/latest/api/core/index/NgZone-class.html#!#hasPendingMicrotasks-anchor
There are three kinds of tasks:
1) MicroTask:
A microtask is work that will execute as soon as possible on an empty stack frame. A microtask is guaranteed to run before the host environment performs rendering or I/O operations. The microtask queue must be empty before another MacroTask or EventTask runs.
e.g. Promise.then() executes in a microtask
2) MacroTask:
MacroTasks are interleaved with the rendering and I/O operations of the host environment. They are guaranteed to run at least once, or be canceled (some, such as setInterval, can run repeatedly). MacroTasks have an implied execution order.
e.g. setTimeout, setInterval, setImmediate
3) EventTask:
EventTasks are similar to MacroTasks, but unlike MacroTasks they may never run. When an EventTask runs, it pre-empts whatever is next in the MacroTask queue. EventTasks do not create a queue.
e.g. a user click, mousemove, or an XHR state change
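A minimal Node sketch of the ordering guarantee above (assuming only the standard Promise and setTimeout APIs): microtasks scheduled during the current turn all drain before the next macrotask runs, and nothing runs while the current stack frame is still executing.

```javascript
// record the order in which scheduled callbacks actually run
const order = [];

setTimeout(() => order.push("macrotask: setTimeout"), 0);      // macrotask
Promise.resolve()
  .then(() => order.push("microtask: then 1"))                 // microtask
  .then(() => order.push("microtask: then 2"));                // queued microtask

// nothing has run yet: both queues drain only after the current
// synchronous turn (the "empty stack frame") completes
console.log(order.length); // 0 here; both .then microtasks then run before the timeout
```

When the script's synchronous turn ends, both `.then` callbacks run before the `setTimeout` callback, matching the "microtask queue must be empty before another MacroTask runs" rule.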
Why is it useful to know if any of the tasks are currently being performed?
Knowing when a task has executed and the microtask queue is empty allows frameworks to know when it is time to render the UI.
Tracking when all scheduled tasks are executed allows a test framework to know when an asynchronous test has completed.
ng_zone.ts
private checkStable() {
  if (this._nesting == 0 && !this._hasPendingMicrotasks && !this._isStable) {
    try {
      this._nesting++;
      this._onMicrotaskEmpty.emit(null);
    } finally {
      this._nesting--;
      if (!this._hasPendingMicrotasks) {
        try {
          this.runOutsideAngular(() => this._onStable.emit(null));
        } finally {
          this._isStable = true;
        }
      }
    }
  }
}
See also
what is the use of Zone.js in Angular 2
If you work with JavaScript, you probably know about the event loop, along with macro- and microtasks. The WHATWG spec defines a structure for any task that consists of:
steps
source
document
script evaluation environment settings object set
While the first three items are clear, the last one is not. It is mostly used in "prepare to run script" for gathering "environment settings objects", and it is also used as part of the event loop for reporting long tasks using the gathered "environment settings objects".
My question is: what is a case in which a task has more than one "environment settings object" in its "script evaluation environment settings object set"? An example would be appreciated.
My supposition: there is one place in the WHATWG spec that says the following:
These algorithms are not invoked by one script directly calling another, but they can be invoked reentrantly in an indirect manner, e.g. if a script dispatches an event which has event listeners registered.
Hence the example would be the following:
/// example based on the chromium flag --process-per-site
setTimeout(() => {
  console.log("task has been started");
  const auxiliary = window.open("about:blank"); /// or an address from the same origin
  auxiliary.document.body.onclick = () => {
    auxiliary.document.writeln(`<script>const observer = new PerformanceObserver((list) => {
      for (const entry of list.getEntries()) {
        console.log(entry);
      }
    });
    observer.observe({entryTypes: ['longtask']});</script>`);
    auxiliary.document.writeln("Here we go");
    auxiliary.document.body.onclick = null;
  };
  auxiliary.document.body.click();
  const start = performance.now();
  while (performance.now() < start + 3000); // busy-wait 3s to create a long task
  console.log("heavy computation in the entry script is over");
});
What do we see? This code produces one task, and it also creates an auxiliary browsing context in which an observer is set up. Finally, this observer fires in the auxiliary context during the long task.
So I conclude that this code demonstrates how a task can have more than one "environment settings object" in the set belonging to the task structure.
I have two files, first is state.js which you might see below:
import EventEmitter from "events";
import tasklist from 'tasklist';
export const stateManager = new EventEmitter();
//IIFE working every 10 seconds via cron task, from the file below
(async () => {
  const processes = await tasklist();
  const exist = processes.find(({imageName}) => imageName === 'process.exe');
  if (!!exist) {
    console.log(!!exist);
    stateManager.emit('state', 1);
  } else {
    console.log(!!exist);
    stateManager.emit('state', 0);
  }
})();
The second is the eventListener.js file, which runs endlessly.
//import stateManager from first file
import { stateManager } from "./src/controller.js";
/**
* Here is part of the code
* which starts the first file
* as a Worker cron-task every 10 seconds
* It doesn't block the thread.
*/
const bree = new Bree({
  logger: logger,
  jobs: []
});

bree.add({
  name: "state",
  interval: ms("10s")
});

bree.start();

// First listener
console.log('check');
stateManager.on('state', function firstListener(...args) {
  console.log('State updated', args);
});
After I launch the eventListener.js, the output which I see in the console is the following:
check
false
State updated [ 0 ]
[LOGGER] 21-03-01 18:03:67 info : Worker for job "state" online
false
[LOGGER] 21-03-01 18:03:08 info : Worker for job "state" exited with code 0
//every 10 second then =>
[LOGGER] 21-03-01 18:13:69 info : Worker for job "state" online
false
[LOGGER] 21-03-01 18:13:06 info : Worker for job "state" exited with code 0
or, if I export const stateManager = new EventEmitter() from another (third) file rather than from state.js directly:
check
[LOGGER] 21-03-01 18:03:23 info : Worker for job "state" online
false
[LOGGER] 21-03-01 18:03:63 info : Worker for job "state" exited with code 0
//every 10 second then =>
[LOGGER] 21-03-01 18:13:23 info : Worker for job "state" online
false
[LOGGER] 21-03-01 18:13:63 info : Worker for job "state" exited with code 0
So the task runs fine, and the event listener never ends, but somehow it doesn't react to the state event, or reacts only once, at the beginning. Why could this happen, given that I see no other errors in the console?
If it helps, I use bree.js as the job queue manager, which actually runs the cron task itself.
Solution
I think I have found a solution to the problem, but I couldn't understand the underlying reason.
In my case it is better to use not the events library but worker_threads, so the code now looks like this:
import { parentPort } from "worker_threads";
import tasklist from 'tasklist';

//IIFE working every 10 seconds via cron task, from the file below
(async () => {
  const processes = await tasklist();
  const exist = processes.find(({imageName}) => imageName === 'process.exe');
  if (parentPort) parentPort.postMessage({ state: !!exist });
})();
and the receiving side in eventListener.js looks like:
bree.on('worker created', (name) => {
  bree.workers[name].on('message', (message) => {
    console.log(message); // Prints { state: !!exist }
  });
});
So the messages from the first file have been delivered successfully.
But I'll be very glad if someone could explain to me why the events library can't deliver messages between different workers, even though the eventListener was exported correctly.
Bree basically starts a new process. What you got there is the equivalent of doing:
node eventListener.js
And then:
node state.js
Every 10 seconds. An EventEmitter is being created every 10 seconds, but there's no listener attached to it, because the listener lives in your eventListener.js file, which isn't being run; only state.js runs every 10 seconds.
Bree might be overkill for this, you could just use setInterval or a setTimeout loop. These are async anyway and you aren't doing anything heavy on the JS side so you shouldn't be blocking as much. Though if you do want to use workers, your solution seems OK to me. Might be easier to just work with workers directly and forgo Bree.
Another option would be to set up Bree with eventListener on a separate file. You just run that other file (which will run all your program every 10 seconds).
It seems I understand the reason for this behavior, although I'm not sure it's a Bree issue itself.
Just don't run the IIFE as a built-in Bree cron task. Instead, run the cron loop inside the file itself. It doesn't matter exactly how you do it: via await sleep(time) and recursion, or via the node-scheduler module itself.
I have a set of Gulp (v4) tasks that do things like compile Webpack and Sass, compress images, etc. These tasks are automated with a "watch" task while I'm working on a project.
When my watch task is running, if I save a file, the "default" set of tasks gets run. If I save again before the "default" task finishes, another "default" run begins, resulting in multiple "default" tasks running concurrently.
I've fixed this by checking that the "default" task isn't running before triggering a new one, but this has caused some slow down issues when I save a file, then rapidly make another minor tweak, and save again. Doing this means that only the first change gets compiled, and I have to wait for the entire process to finish, then save again for the new change to get compiled.
My idea to circumvent this is to kill all the old "default" tasks whenever a new one gets triggered. This way, multiples of the same task won't run concurrently, but I can rely on the most recent code being compiled.
I did a bit of research, but I couldn't locate anything that seemed to match my situation.
How can I kill all the "old" gulp tasks, without killing the "watch" task?
EDIT 1: Current working theory is to store the "default" task set as a variable and somehow use that to kill the process, but that doesn't seem to work how I expected it to. I've placed my watch task below for reference.
// watch task, runs through all primary tasks, triggers when a file is saved
GULP.task("watch", () => {
  // set up a browser_sync server, if --sync is passed
  if (PLUGINS.argv.sync) {
    CONFIG_MODULE.config(GULP, PLUGINS, "browsersync").then(() => {
      SYNC_MODULE.sync(GULP, PLUGINS, CUSTOM_NOTIFIER);
    });
  }

  // watch for any changes
  const WATCHER = GULP.watch("src/**/*");

  // run default task on any change
  WATCHER.on("all", () => {
    if (!currently_running) {
      currently_running = true;
      GULP.task("default")();
    }
  });

  // end the task
  return;
});
https://github.com/JacobDB/new-site/blob/4bcd5e82165905fdc05d38441605087a86c7b834/gulpfile.js#L202-L224
EDIT 2: Thinking about this more, maybe this is more of a Node.js question than a Gulp question: how can I stop a function from processing from outside that function? Basically I want to store the executing function as a variable somehow, and kill it when I need to restart it.
There are two ways to set up a Gulp watch. They look very similar, but have the important difference that one supports queueing (and some other features) and the other does not.
The way you're using, which boils down to
const watcher = watch(<path glob>);
watcher.on(<event>, function(path, stats) {
  <event handler>
});
uses the chokidar instance that underlies Gulp's watch().
When using the chokidar instance, you do not have access to the Gulp watch() queue.
The other way to run a watch boils down to
function watch() {
  gulp.watch(<path>, function(callback) {
    <handler>
    callback();
  });
}
or more idiomatically
function myTask() {…}
const watch = () => gulp.watch(<path>, myTask);
Set up like this, watch events should queue the way you're expecting, without your having to do anything extra.
In your case, that's replacing your const WATCHER = GULP.watch("src/**/*"); with
GULP.watch("src/**/*", default);
and deleting your entire WATCHER.on(…);
Bonus 1
That said, be careful with recursion there. I'm extrapolating from your use of a task named "default"… You don't want to find yourself in
const watch = () => gulp.watch("src/**/*", default);
const default = gulp.series(clean, build, serve, watch);
Bonus 2
Using the chokidar instance can be useful for logging:
function handler() {…}
const watcher = gulp.watch(glob, handler);
watcher.on('all', (event, path) => {
  console.log(path + ': ' + event + ' detected'); // e.g. "src/test.txt: change detected" is logged immediately
});
Bonus 3
Typically Browsersync would be set up outside of the watch function, and the watch would end in reloading the server. Something like
…
import browserSync from 'browser-sync';
const server = browserSync.create();

function serve(done) {
  server.init(…);
  done();
}

function reload(done) {
  server.reload();
  done();
}

function changeHandler() {…}

const watch = () => gulp.watch(path, gulp.series(changeHandler, reload));
const run = gulp.series(serve, watch);
Try installing gulp-restart:
npm install gulp-restart
As @henry stated, if you switch to the non-chokidar version you get queuing for free (because it is the default). See no queue with chokidar.
But that doesn't speed up your task completion time. There was an issue requesting that the ability to stop a running task be added to gulp - how to stop a running task - it was summarily dealt with.
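Since Gulp itself offers no API for stopping a running task, here is a hedged Node-level sketch of the "kill it from outside" idea using the standard AbortController (startWork is a hypothetical stand-in for a long-running task, not a Gulp API):

```javascript
// cancel in-flight work from outside the function doing it
function startWork(signal) {
  let processed = 0;
  const tick = () => {
    if (signal.aborted) return;   // stop as soon as we are cancelled externally
    processed++;                  // process one chunk of work
    setImmediate(tick);           // yield to the event loop, then continue
  };
  tick();
  return () => processed;         // expose progress for inspection
}

const controller = new AbortController();
const progress = startWork(controller.signal);
controller.abort();               // "kill" the running work from outside
```

The key design point is that the work must be chunked and yield to the event loop; a single monolithic synchronous task cannot be interrupted from outside in Node.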
If one of your concerns is to speed up execution time, you can try the lastRun() function option. gulp lastRun documentation
Retrieves the last time a task was successfully completed during the
current running process. Most useful on subsequent task runs while a
watcher is running.
When combined with src(), enables incremental builds to speed up
execution times by skipping files that haven't changed since the last
successful task completion.
const { src, dest, lastRun, watch } = require('gulp');
const imagemin = require('gulp-imagemin');

function images() {
  return src('src/images/**/*.jpg', { since: lastRun(images) })
    .pipe(imagemin())
    .pipe(dest('build/img/'));
}

exports.default = function() {
  watch('src/images/**/*.jpg', images);
};
Example from the same documentation. In this case, if an image was successfully compressed during the current running task, it will not be re-compressed. Depending on your other tasks, this may cut down on your wait time for the queued tasks to finish.
I have a TypeScript application running on Node.js where I use the EventEmitter class to emit an event when the value of a variable changes.
I want to wait for the event to happen before proceeding further, and hence I need to induce a wait in my TypeScript code.
This is what I am doing:
if (StreamManagement.instance.activeStreams.get(streamObj.streamContext).streamState === 'Paused') {
  await StreamManagement.instance.waitForStreamActive(streamObj);
}
This is my waitForStreamActive method,
public async waitForStreamActive(stream: Stream) {
  const eventEmitter = new EventEmitter();
  return new Promise((resolve) => {
    eventEmitter.on('resume', resolve);
  });
}
And I trigger the emit event like this,
public async updateStream(streamContext: string, state: boolean): Promise<string> {
  const eventEmitter = new EventEmitter();
  if (state === Resume) {
    const streamState = StreamManagement.instance.activeStreams.get(streamContext).streamState = 'Active';
    eventEmitter.emit('resume');
    return streamState;
  }
}
All these three code snippets are in different classes and in different files.
This snippet of code doesn't work as I expected.
I want the code to wait until the promise is resolved, that is, until the state is changed to resume.
Can someone please point out where I am going wrong?
Can someone please point out where I am going wrong?
You have two different EventEmitters. Events triggered on one EventEmitter do not fire on others.
More Code Review
Firing and listening on the same EventEmitter will work. That said, a Promise is not the correct abstraction for things that fire multiple times: a Promise can only be resolved once, whereas events can fire many times. Suggest using the EventEmitter as-is, or alternatively use some other stream abstraction, e.g. Observable 🌹
EventEmitter (the observer pattern) and Promise (a chain-of-responsibility pattern) have different responsibilities, and I see that you want to use both. In your case it does not work because EventEmitter is not designed for chaining observer processors. Use plain promises and builders only. There is a very good library, RxJS, which provides a lot of functionality; it can do what you ask: build an event-driven architecture with sync/async chained cases.
Background
In XULRunner versions below 12.0 this works, but when I try to port it to version 12.0 or higher, the application crashes.
The main reason is that in SDK v12 and newer, the developers removed proxy objects for XPCOM components and recommend replacing them
by wrapping objects with nsRunnable/nsIRunnable and routing the invocation to the main thread via NS_DispatchToMainThread (click here).
What am I developing?
I created a DB connector which is asynchronous and communicates with the main thread via callbacks.
Using: XULRunner v6, porting to XULRunner v17 or above
//nsIDBCallback.idl
[scriptable, function, uuid(XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXXXX)]
interface nsIDBCallback : nsISupports {
  void onInfo(in long phase, in long status, in string info);
};
//nsDBService.h, it is an XPCOM component
class nsDBService : public nsIDBService, nsIRunnable
{
public:
  NS_DECL_ISUPPORTS
  NS_DECL_NSIRUNNABLE
  NS_DECL_NSIDBSERVICE

private:
  std::vector<nsIThread*> threads;
  std::vector<nsIDBCallback*> callbacks;
  std::vector<const char*> sqls;

  nsIThread* makeNewThread();
  void runOperationIfNotBussy();

public:
  NS_IMETHODIMP Query(const char *sql, nsIDBCallback *callback);
};
//nsDBService.cpp
// adding the query and other data to buffers;
// it's thread-safe, mutexes are used
NS_IMETHODIMP nsDBService::Query(const char *sql, nsIDBCallback *callback)
{
  callbacks.push_back(callback);
  sqls.push_back(sql);
  threads.push_back(makeNewThread());

  // run the added operation if the db driver is free;
  // if the driver is busy, the invocation stays in the buffer and has to wait
  runOperationIfNotBussy();
  return NS_OK;
}
void nsDBService::runOperationIfNotBussy()
{
  // some conditions, tests, etc.
  // run the first operation in the list
  // RUNNING A THREAD, still ok
  if (...) threads.front()->Dispatch(this, nsIEventTarget::DISPATCH_NORMAL);
}
// if this method is used by another thread + db query,
// then other operations can't run and have to wait;
// operations are stored and processed FIFO
NS_IMETHODIMP nsDBService::Run(void)
{
  // some other operations
  // real db operations in the background

  int32_t phase = 3;        // end phase
  int32_t code  = 0;        // ok
  const char *msg = "OK";

  nsIDBCallback *callback = callbacks.pop();

  // wrapping the callback function with a runnable interface
  nsIRunnable *runCallback = new nsResultCallback(callback, phase, code, msg);

  // routing the event to the main thread
  NS_DispatchToMainThread(runCallback, NS_DISPATCH_NORMAL);
  runOperationIfNotBussy();
  return NS_OK;
}
//nsResultCallback.h
class nsResultCallback : public nsRunnable
{
public:
  NS_DECL_ISUPPORTS
  NS_DECL_NSIRUNNABLE

private:
  nsIDBCallback* callback;
  int32_t resPhase;
  int32_t resStatus;
  const char* resMessage;

public:
  nsResultCallback(nsIDBCallback* callback,
                   int32_t phase,
                   int32_t status,
                   const std::string &message)
    : callback(callback),
      resPhase(phase),
      resStatus(status),
      resMessage(c_str_clone(message.c_str())) {};
  ~nsResultCallback();
};
//nsResultCallback.cpp
NS_IMETHODIMP nsResultCallback::Run(void)
{
  nsresult rv = NS_ERROR_FAILURE;
  try
  {
    // APP HANGS AND CRASHES HERE!
    if (this->callback) this->callback->OnInfo(resPhase, resStatus, resMessage);
  }
  catch (...)
  {
    rv = NS_ERROR_UNEXPECTED;
    ERRF("nsBackpack::Run call method OnInfo from callback failed");
  }
  return rv;
}
INVOCATION
// *.js
nsDBService.query("SELECT * FROM t", function(phase, code, mes) {
  // some UI actions or other db queries
});
Problem:
The application freezes and crashes when the code execution looks like this:
nsDBService::Query //main thread ok
nsDBService::runOperationIfNotBussy //main thread
nsDBService::threads.front()->Dispatch //run bg thread
nsDBService:Run //bg thread
NS_DispatchToMainThread //main thread
nsResultCallback::Run //main thread
nsIDBCallback::OnInfo //main thread, crash
If the code execution looks like this, everything is ok:
nsDBService::Query //main thread ok
NS_DispatchToMainThread //main thread
nsResultCallback::Run //main thread
nsIDBCallback::OnInfo //main thread ok
Question:
When nsIDBCallback is invoked from NS_DispatchToMainThread, and NS_DispatchToMainThread is invoked from a thread other than the main app thread, the execution fails. What am I missing or not understanding? Or what is another approach for background tasks?
Cannot reproduce, as you didn't provide a self-contained, complete example, so some remarks instead:
The first thing I noticed is the cross-thread access of std::vector. You wrote something about mutexes in the comments, so this might be OK.
What is certainly wrong is storing raw pointers to nsIDBCallback. XPCOM objects are ref-counted, so as soon as your Query method returns, the underlying object might be deleted if there are no other references to it, leaving behind a dangling pointer in your vector. I think this is what is happening here!
You need to keep the object alive until the thread is done with it, preferably by putting it into a nsCOMPtr<nsIDBCallback> somewhere, e.g. in an nsCOMArray<nsIDBCallback>.
PS: It turns out this is a somewhat old question which I missed... so sorry for the delay in answering it :p