I'm trying to write a function that measures the execution time of another function:
export class Profiler {
  public measureSyncFunc(fn: () => any): Promise<number> {
    return new Promise<number>((resolve, reject) => {
      let elapsed = 0;
      let intervalId = window.setInterval(() => {
        elapsed += 1; // this is never called
      }, 1);
      this.execFunc(fn)
        .then((result: any) => {
          window.clearInterval(intervalId);
          resolve(elapsed);
        });
    });
  }

  private execFunc(fn: () => any): Promise<any> {
    return new Promise<any>((resolve, reject) => {
      resolve(fn());
    });
  }
}
Then I use it like that:
let array = generateRandomArray(100000);
instance.measureSyncFunc(() => bubbleSort(array))
  .then((elapsed: number) => {
    console.log(`end session: ${elapsed} seconds`);
  });
The bubbleSort function is synchronous and it takes several seconds to complete.
The result in the console is "end session: 0 seconds" because the interval callback is never called.
Do you know how I can make it get called?
Thank you very much guys !
If the functions you want to measure will always be synchronous, there's really no need to involve promises.
Since the function you want to test takes parameters, it's best to wrap it in an arrow function; that way you can call it with another context and don't have to manage its parameters yourself.
Something simple like this will do just fine.
function measure(fn: () => void): number {
  let start = performance.now();
  fn();
  return performance.now() - start;
}

function longRunningFunction(n: number) {
  for (let i = 0; i < n; i++) {
    console.log(i);
  }
}

let duration = measure(() => {
  longRunningFunction(100);
});

console.log(`took ${duration} ms`);
If you want to measure the time it takes an async function (if it returns a promise) to resolve you can easily change the code to something like this:
function measurePromise(fn: () => Promise<any>): Promise<number> {
  let onPromiseDone = () => performance.now() - start;

  let start = performance.now();
  return fn().then(onPromiseDone, onPromiseDone);
}

function longPromise(delay: number) {
  return new Promise<string>((resolve) => {
    setTimeout(() => {
      resolve('Done');
    }, delay);
  });
}

measurePromise(() => longPromise(300))
  .then((duration) => {
    console.log(`promise took ${duration} ms`);
  });
Note: This solution uses the ES6 Promise; if you are using something else you might have to adapt it, but the logic should be the same.
Don't use setInterval to count milliseconds (It's inaccurate, lags, drifts and has a minimum interval of about 4ms). Just get two timestamps before and after the execution.
function measureAsyncFunc(fn: () => Promise<any>): Promise<number> {
  const start = Date.now();
  return fn().catch(() => {}).then(() => {
    const end = Date.now();
    const elapsed = end - start;
    return elapsed;
  });
}
For higher accuracy, replace Date.now with performance.now.
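For illustration, here is what that variant might look like (just a sketch; measureAsyncFuncHiRes is simply a renamed copy of the function above):

function measureAsyncFuncHiRes(fn: () => Promise<any>): Promise<number> {
  // performance.now() has sub-millisecond resolution and is monotonic,
  // so it is not affected by system clock adjustments
  const start = performance.now();
  return fn().catch(() => {}).then(() => performance.now() - start);
}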
Have a look at timeFnPromise and the related test cases.
The target function is wrapped and executed when the wrapped function is called.
It appends a fulfillment / rejection handler to the underlying Promise that returns the target function's return value as "ret" and the elapsed time as "elapsedTime".
It supports arguments by passing them through to the target function.
Samples Usage:
const wrappedFn = timeFnPromise(aFunctionThatReturnsAPromise)
wrappedFn()
  .then((values) => {
    const {ret, elapsedTime} = values
    console.log(`ret:[${ret}] elapsedTime:[${elapsedTime}]`)
  })
Also available via NPM module jschest.
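For readers who just want the idea without pulling in the module, here is a minimal sketch of what such a wrapper could look like, based on the behaviour described above (this is not the actual jschest implementation):

function timeFnPromise<T extends (...args: any[]) => Promise<any>>(targetFn: T) {
  return (...args: Parameters<T>) => {
    const start = Date.now();
    // Report the settlement value as "ret" and the measured duration as "elapsedTime"
    const done = (ret: any) => ({ ret, elapsedTime: Date.now() - start });
    // Pass the arguments through to the target function
    return targetFn(...args).then(done, (err: any) => Promise.reject(done(err)));
  };
}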
Here's a simple wrapper function I wrote. It returns a Promise (via the async keyword), and so you can just call it with your promise. I added the time value as a property to the response. If you cannot have that value in the response, then you would need to remove it afterwards.
const stopwatchWrapper = async (promise) => {
  const startTime = Date.now()
  const resp = await promise
  resp.executionTime = Date.now() - startTime
  return resp
}

const axiosPromise = stopwatchWrapper(axios(reqSelected))
const response = await axiosPromise
console.log(response.executionTime)
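If mutating the response is not acceptable, a variant of the wrapper (just a sketch; stopwatchTuple is a made-up name) can return the timing next to the value instead of attaching it:

const stopwatchTuple = async <T>(promise: Promise<T>): Promise<[T, number]> => {
  const startTime = Date.now()
  const value = await promise
  // Return the original value untouched, with the elapsed time alongside it
  return [value, Date.now() - startTime]
}

const [resp, executionTime] = await stopwatchTuple(axios(reqSelected))
console.log(executionTime)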
It would be good to clarify that the approach proposed by toskv only works for the resolution of a single promise. If we wrap a Promise.all(), the time it reports is not per-promise.
Here is an example with the code that toskv developed, but using Promise.all()
Measure with Promise.all()
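As a sketch of such an example, reusing measurePromise and longPromise from toskv's answer above, wrapping a Promise.all only yields the time until the slowest promise settles, not the time of each individual promise:

measurePromise(() => Promise.all([longPromise(300), longPromise(800)]))
  .then((duration) => {
    // Roughly 800 ms: the individual 300 ms and 800 ms timings are not visible
    console.log(`Promise.all took ${duration} ms`);
  });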
If someone needs to measure the time each of the promises executed with a Promise.all() takes individually, one approach is to use interceptors and do the time measurements there.
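A sketch of that interceptor idea with axios (the metadata property and the URLs are arbitrary placeholders): each request records its own start time in a request interceptor, and the response interceptor computes the per-request duration:

import axios from 'axios';

// Record a start timestamp on every outgoing request
axios.interceptors.request.use((config: any) => {
  config.metadata = { startTime: Date.now() };
  return config;
});

// Compute the per-request duration when its response arrives
axios.interceptors.response.use((response: any) => {
  response.duration = Date.now() - response.config.metadata.startTime;
  return response;
});

// Each response now carries its own timing, even when awaited via Promise.all
Promise.all([axios.get('/a'), axios.get('/b')]).then((responses) => {
  responses.forEach((r) => console.log(r.config.url, r.duration, 'ms'));
});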
Related
I'm trying to implement a debounce function that works with a promise in javascript. That way, each caller can consume the result of the "debounced" function using a Promise. Here is the best I have been able to come up with so far:
const EventEmitter = require('events');

function debounce(inner, ms = 0) {
  let timer = null;
  let promise = null;
  const events = new EventEmitter(); // do I really need this?
  return function (...args) {
    if (timer == null) {
      promise = new Promise(resolve => {
        events.once('done', resolve);
      });
    } else {
      clearTimeout(timer);
    }
    timer = setTimeout(() => {
      events.emit('done', inner(...args));
      timer = null;
    }, ms);
    return promise;
  };
}
Ideally, I would like to implement this utility function without introducing a dependency on EventEmitter (or implementing my own basic version of EventEmitter), but I can't think of a way to do it. Any thoughts?
I found a better way to implement this with promises:
function debounce(inner, ms = 0) {
  let timer = null;
  let resolves = [];
  return function (...args) {
    // Run the function after a certain amount of time
    clearTimeout(timer);
    timer = setTimeout(() => {
      // Get the result of the inner function, then apply it to the resolve function of
      // each promise that has been created since the last time the inner function was run
      let result = inner(...args);
      resolves.forEach(r => r(result));
      resolves = [];
    }, ms);

    return new Promise(r => resolves.push(r));
  };
}
I still welcome suggestions, but the new implementation answers my original question about how to implement this function without a dependency on EventEmitter (or something like it).
In Chris's solution, all pending calls get resolved, which is good, but sometimes we need to resolve only the last call.
In my implementation, only the last call in the interval will be resolved.
function debounce(f, interval) {
  let timer = null;

  return (...args) => {
    clearTimeout(timer);
    return new Promise((resolve) => {
      timer = setTimeout(
        () => resolve(f(...args)),
        interval,
      );
    });
  };
}
And the following TypeScript (>= 4.5) implementation supports aborting:
It supports aborting a superseded promise via reject(); if we never settle it, its finally handler can never run.
It supports a custom abortValue to reject with.
If we catch the error, we may need to check whether it is the Aborted value.
/**
 *
 * @param fn callback
 * @param wait milliseconds
 * @param abortValue if an abortValue is provided, a superseded call's promise will be rejected with it
 * @returns Promise
 */
export function debouncePromise<T extends (...args: any[]) => any>(
  fn: T,
  wait: number,
  abortValue: any = undefined,
) {
  let cancel = () => { };
  // type Awaited<T> = T extends PromiseLike<infer U> ? U : T
  type ReturnT = Awaited<ReturnType<T>>;
  const wrapFunc = (...args: Parameters<T>): Promise<ReturnT> => {
    cancel();
    return new Promise((resolve, reject) => {
      const timer = setTimeout(() => resolve(fn(...args)), wait);
      cancel = () => {
        clearTimeout(timer);
        if (abortValue !== undefined) {
          reject(abortValue);
        }
      };
    });
  };
  return wrapFunc;
}
/**
// deno run src/utils/perf.ts
function add(a: number) {
  return Promise.resolve(a + 1);
}
const wrapFn = debouncePromise(add, 500, 'Aborted');
wrapFn(2).then(console.log).catch(console.log).finally(() => console.log('final-clean')); // Aborted + final-clean
wrapFn(3).then(console.log).catch(console.log).finally(() => console.log('final-clean')); // 4 + final-clean
*/
Note:
I ran some memory benchmarks; a huge number of pending promises won't cause a memory leak. It seems the V8 garbage collector will clean up promises that are no longer referenced.
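A rough way to repeat such a check in Node (just a sketch; the promise count and the wait time are arbitrary) is to create many pending promises without keeping references to them and compare heap usage afterwards:

function createPendingPromises(count: number): void {
  for (let i = 0; i < count; i++) {
    // The executor never calls resolve or reject, so the promise stays pending,
    // but nothing references it once this loop iteration ends
    new Promise(() => {});
  }
}

console.log('heap before:', process.memoryUsage().heapUsed);
createPendingPromises(1_000_000);
setTimeout(() => {
  // Once the references are gone, the heap should shrink back after GC runs
  console.log('heap after: ', process.memoryUsage().heapUsed);
}, 5000);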
I landed here because I wanted to get the return value of the promise, but debounce in underscore.js was returning undefined instead. I ended up using the lodash version with leading=true. It works for my case because I don't care whether the execution is leading or trailing.
https://lodash.com/docs/4.17.4#debounce
_.debounce(somethingThatReturnsAPromise, 300, {
  leading: true,
  trailing: false
})
resolve one promise, cancel the others
Many implementations I've seen over-complicate the problem or have other hygiene issues. In this post we will write our own debounce. This implementation will -
have at most one promise pending at any given time (per debounced task)
stop memory leaks by properly cancelling pending promises
resolve only the latest promise
demonstrate proper behaviour with live code demos
We write debounce with its two parameters, the task to debounce, and the amount of milliseconds to delay, ms. We introduce a single local binding for its local state, t -
function debounce (task, ms) {
  let t = { promise: null, cancel: _ => void 0 }
  return async (...args) => {
    try {
      t.cancel()
      t = deferred(ms)
      await t.promise
      await task(...args)
    }
    catch (_) { /* prevent memory leak */ }
  }
}
We depend on a reusable deferred function, which creates a new promise that resolves in ms milliseconds. It introduces two local bindings, the promise itself, and the ability to cancel it -
function deferred (ms) {
  let cancel, promise = new Promise((resolve, reject) => {
    cancel = reject
    setTimeout(resolve, ms)
  })
  return { promise, cancel }
}
click counter example
In this first example, we have a button that counts the user's clicks. The event listener is attached using debounce, so the counter is only incremented after a specified duration -
// debounce, deferred
function debounce (task, ms) { let t = { promise: null, cancel: _ => void 0 }; return async (...args) => { try { t.cancel(); t = deferred(ms); await t.promise; await task(...args); } catch (_) { console.log("cleaning up cancelled promise") } } }
function deferred (ms) { let cancel, promise = new Promise((resolve, reject) => { cancel = reject; setTimeout(resolve, ms) }); return { promise, cancel } }
// dom references
const myform = document.forms.myform
const mycounter = myform.mycounter
// event handler
function clickCounter (event) {
  mycounter.value = Number(mycounter.value) + 1
}
// debounced listener
myform.myclicker.addEventListener("click", debounce(clickCounter, 1000))
<form id="myform">
<input name="myclicker" type="button" value="click" />
<output name="mycounter">0</output>
</form>
live query example, "autocomplete"
In this second example, we have a form with a text input. Our search query is attached using debounce -
// debounce, deferred
function debounce (task, ms) { let t = { promise: null, cancel: _ => void 0 }; return async (...args) => { try { t.cancel(); t = deferred(ms); await t.promise; await task(...args); } catch (_) { console.log("cleaning up cancelled promise") } } }
function deferred (ms) { let cancel, promise = new Promise((resolve, reject) => { cancel = reject; setTimeout(resolve, ms) }); return { promise, cancel } }
// dom references
const myform = document.forms.myform
const myresult = myform.myresult
// event handler
function search (event) {
  myresult.value = `Searching for: ${event.target.value}`
}
// debounced listener
myform.myquery.addEventListener("keypress", debounce(search, 1000))
<form id="myform">
<input name="myquery" placeholder="Enter a query..." />
<output name="myresult"></output>
</form>
Here's my version in TypeScript (mostly based on Chris's one), if someone needs it 😉
function promiseDebounce (exec: (...args: any[]) => Promise<any>, interval: number): () => ReturnType<typeof exec> {
  let handle: number | undefined;
  let resolves: Array<(value?: unknown) => void> = [];

  return async (...args: unknown[]) => {
    clearTimeout(handle);
    handle = setTimeout(
      () => {
        const result = exec(...args);
        resolves.forEach(resolve => resolve(result));
        resolves = [];
      },
      interval
    );

    return new Promise(resolve => resolves.push(resolve));
  };
}
It's not entirely clear what you are trying to accomplish, as that vastly depends on your needs. Below is something somewhat generic, though. Without a solid grasp of what is going on in the code below, you might not want to use it.
// Debounce state constructor
function debounce(f) {
  this._f = f;
  return this.run.bind(this)
}

// Debounce execution function
debounce.prototype.run = function() {
  console.log('before check');
  if (this._promise)
    return this._promise;
  console.log('after check');
  return this._promise = this._f(...arguments).then(function(r) {
    console.log('clearing');
    delete this._promise; // remove deletion to prevent new execution (or remove after timeout?)
    return r;
  }.bind(this)).catch(function(r) {
    console.log('clearing after rejection');
    delete this._promise; // remove deletion here as needed, as noted above
    return Promise.reject(r); // rethrow rejection
  }.bind(this))
}
// Some function which returns a promise needing debouncing
function test(str) {
  return new Promise(function(resolve, reject) {
    setTimeout(function() {
      console.log('test' + str);
      resolve();
    }, 1000);
  });
}
a = new debounce(test); // Create debounced version of function

console.log("p1: ", p1 = a(1));
console.log("p2: ", p2 = a(2));
console.log("p1 = p2", p1 === p2);

setTimeout(function() {
  console.log("p3: ", p3 = a(3));
  console.log("p1 = p3 ", p1 === p3, " - p2 = p3 ", p2 === p3);
}, 2100)
View the console when running the code above. I put in a few messages to show a bit about what is going on. First, some function which returns a promise is passed as an argument to new debounce(). This creates a debounced version of the function.
When you run the debounced function as the code above does (a(1), a(2), and a(3)), you will notice that during processing it returns the same promise instead of starting a new one. Once the promise is complete, it removes the old promise. In the code above I wait for the timeout manually with setTimeout before running a(3).
You can clear the promise in other ways as well, like adding a reset or clear function on debounce.prototype to clear the promise at a different time. You could also set it to timeout. The tests in the console log should show p1 and p2 get the same promise (reference comparison "===" is true) and that p3 is different.
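As a sketch of that idea (not part of the original answer), the constructor could expose a clear() on the bound function it returns, since callers never see the instance itself:

// Debounce state constructor with a manual reset
function debounceWithClear(f) {
  this._f = f;
  const run = this.run.bind(this);
  // Discard the cached promise so the next call starts a fresh execution
  run.clear = () => { delete this._promise; };
  return run;
}
debounceWithClear.prototype.run = debounce.prototype.run;

// b = new debounceWithClear(test);
// b(1); b.clear(); b(2); // the second call starts a new execution immediately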
Here is what I came up with to solve this issue. All calls to the debounced function that are batched into the same invocation return the same Promise, which resolves to the result of the future invocation.
function makeFuture() {
  let resolve;
  let reject;
  let promise = new Promise((d, e) => {
    resolve = d;
    reject = e;
  });
  return [promise, resolve, reject];
}

function debounceAsync(asyncFunction, delayMs) {
  let timeout;
  let [promise, resolve, reject] = makeFuture();
  return function(...args) {
    clearTimeout(timeout);
    timeout = setTimeout(async () => {
      const [prevResolve, prevReject] = [resolve, reject];
      [promise, resolve, reject] = makeFuture();
      try {
        prevResolve(await asyncFunction.apply(this, args));
      } catch (error) {
        prevReject(error);
      }
    }, delayMs);
    return promise;
  };
}
const start = Date.now();
const dog = {
  sound: 'woof',
  bark() {
    const delay = Date.now() - start;
    console.log(`dog says ${this.sound} after ${delay} ms`);
    return delay;
  },
};

dog.bark = debounceAsync(dog.bark, 50);

Promise.all([dog.bark(), dog.bark()]).then(([delay1, delay2]) => {
  console.log(`Delay1: ${delay1}, Delay2: ${delay2}`);
});
Both Chris and Николай Гордеев have good solutions. The first will resolve all of the pending calls. The problem is that they will all be resolved, but usually you wouldn't want all of them to run.
The second solution solved that but created a new problem - now you will have multiple awaits that never settle. If it's a function that is called a lot (like search typing) you might have a memory issue. I fixed it by creating the following debounceWithRejection, which resolves the last call and rejects the rest (the awaiting callers get an exception that they can just catch).
const debounceWithRejection = (
  inner,
  ms = 0,
  reject = false,
  rejectionBuilder
) => {
  let timer = null;
  let resolves = [];
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => {
      const resolvesLocal = resolves;
      resolves = [];
      if (reject) {
        const resolve = resolvesLocal.pop();
        resolve.res(inner(...args));
        resolvesLocal.forEach((r, i) => {
          !!rejectionBuilder ? r.rej(rejectionBuilder(r.args)) : r.rej(r.args);
        });
      } else {
        resolvesLocal.forEach((r) => r.res(inner(...args)));
      }
      resolves = [];
    }, ms);

    return new Promise((res, rej) =>
      resolves.push({ res, rej, args: [...args] })
    );
  };
};
The rejection logic is optional, and so is the rejectionBuilder. It's an option to reject with a specific builder so you will know what to catch.
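A usage sketch (the search function, the delay, and the error message are made up for illustration):

const debouncedSearch = debounceWithRejection(
  (term) => fetch(`/search?q=${term}`), // hypothetical inner function
  300,
  true,                                 // reject superseded calls
  (args) => new Error(`superseded: ${args[0]}`)
);

debouncedSearch('foo').catch((e) => console.log(e.message));    // "superseded: foo"
debouncedSearch('foobar').then(() => console.log('only the last call runs'));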
This may not be what you want, but it can provide some clues:
/**
 * Call a function asynchronously, as soon as possible. Makes
 * use of HTML Promise to schedule the callback if available,
 * otherwise falling back to `setTimeout` (mainly for IE<11).
 * @type {(callback: function) => void}
 */
export const defer = typeof Promise == 'function' ?
  Promise.resolve().then.bind(Promise.resolve()) : setTimeout;
This is a basic question, but I couldn't find the answer to it anywhere.
We have two approaches:
// consider someFunction1() and someFunction2() as functions that returns Promises
Approach #1:
return [await someFunction1(), await someFunction2()]
Approach #2:
return await Promise.all([someFunction1(), someFunction2()])
My Team Leader said that both approaches end up with the same behaviour (both functions executing in parallel). But, from my knowledge, the first approach would await someFunction1() to resolve and only then execute someFunction2.
So that's the question: is it really the same, or are there performance improvements in the second approach? Proofs are very welcome!
No, you should not accept that:
return [await someFunction1(), await someFunction2()];
is the same as:
return await Promise.all([someFunction1(), someFunction2()]);
I should also note that the await in the above return await is not needed. Check out this blog post to learn more.
They are different!
The first approach (sequential)
Let's determine the difference by inspecting how each of the two alternatives works.
[await someFunction1(), await someFunction2()];
Here, in an async context, we create an array literal. Note that someFunction1 is called (a function which probably returns a new promise each time it gets called).
So, when you call someFunction1, a new promise is returned, which then "locks" the async context because of the preceding await.
In a nutshell, the await someFunction1() "blocks" the array initialization until the returned promise gets settled (by getting resolved or rejected).
The same process is repeated for someFunction2.
Note that, in this first approach, the two promises are awaited in sequence. There is, therefore, no similarity with the approach that uses Promise.all. Let's see why.
The second approach (non-sequential)
Promise.all([someFunction1(), someFunction2()])
When you apply Promise.all, it expects an iterable of promises. It waits for all the promises you give it to resolve before returning a new array of resolved values, but it does not wait for one promise to resolve before starting to wait on another. In essence, it awaits all the promises at the same time, so it is "non-sequential". As JavaScript is single-threaded, you cannot call this "parallel", but the behavior is very similar.
So, when you pass this array:
[someFunction1(), someFunction2()]
You are actually passing an array of promises (which are returned from the functions). Something like:
[Promise<...>, Promise<...>]
Note that the promises are being created outside Promise.all.
So you are, in fact, passing an array of promises to Promise.all. When both of them get resolved, Promise.all resolves with the array of resolved values. I won't explain in full detail how Promise.all works; for that, I suggest checking out the documentation.
You can replicate this "non-sequential" approach by creating the promises before using the await. Like so:
const promise1 = someFunction1();
const promise2 = someFunction2();
return [await promise1, await promise2];
While promise1 is being awaited, promise2 is already running (as it was created before the first await), so the behavior is similar to Promise.all's.
My Team Leader said that both approaches ended up in the same solution (both functions executting in parallel).
That is incorrect.
But, from my knowledge, the first approach would await someFunction1() to resolve and then would execute someFunction2.
That is correct.
Here is a demonstration
Approach 1:
const delay = (ms, value) =>
  new Promise(resolve => setTimeout(resolve, ms, value));

async function Approach1() {
  return [await someFunction1(), await someFunction2()];
}

async function someFunction1() {
  const result = await delay(800, "hello");
  console.log(result);
  return result;
}

async function someFunction2() {
  const result = await delay(400, "world");
  console.log(result);
  return result;
}

async function main() {
  const start = new Date();
  const result = await Approach1();
  const totalTime = new Date() - start;
  console.log(`result: ${result}
total time: ${totalTime}`);
}

main();
Result is:
hello
world
result: hello,world
total time: 1205
Which means that someFunction1 runs to completion first and then someFunction2 is executed. It is sequential.
Approach 2:
const delay = (ms, value) =>
  new Promise(resolve => setTimeout(resolve, ms, value));

async function Approach2() {
  return await Promise.all([someFunction1(), someFunction2()]);
}

async function someFunction1() {
  const result = await delay(800, "hello");
  console.log(result);
  return result;
}

async function someFunction2() {
  const result = await delay(400, "world");
  console.log(result);
  return result;
}

async function main() {
  const start = new Date();
  const result = await Approach2();
  const totalTime = new Date() - start;
  console.log(`result: ${result}
total time: ${totalTime}`);
}

main();
Result is:
world
hello
result: hello,world
total time: 803
Which means that someFunction2 finishes before someFunction1. The two are parallel.
Easy to see the difference
function createTimer(ms, id) {
  console.log(`id: ${id} started ${new Date()}`);
  return new Promise((res, rej) => {
    setTimeout(() => {
      console.log(`id: ${id} finished ${new Date()}`);
      res(id);
    }, ms);
  });
}

(async function() {
  var result1 = [await createTimer(5000, '1'), await createTimer(5000, '2')];
  var result2 = await Promise.all([createTimer(5000, '3'), createTimer(5000, '4')]);
  console.log(result1);
  console.log(result2);
})();
The first one starts 1 and when 1 finishes it starts 2.
The second one starts 3 and 4 at almost the very same moment.
Start with a function which simulates doing some work, outputs at stages of that work, and takes some time, e.g.
function someFunction1(){
  return new Promise(resolve => {
    let i = 0;
    const intervalId = setInterval(() => {
      i++;
      console.log("someFunction1", i);
      if (i == 5) {
        clearInterval(intervalId);
        resolve(1);
      }
    }, 1000);
  });
}
Then you duplicate that with a second, similar method. You plug your two methods in and you see that the one using Promise.all does it in parallel, but the one using two await calls does it in series.
Parallel
function someFunction1(){
  return new Promise(resolve => {
    let i = 0;
    const intervalId = setInterval(() => {
      i++;
      console.log("someFunction1", i);
      if (i == 5) {
        clearInterval(intervalId);
        resolve(1);
      }
    }, 1000);
  });
}

function someFunction2(){
  return new Promise(resolve => {
    let i = 0;
    const intervalId = setInterval(() => {
      i++;
      console.log("someFunction2", i);
      if (i == 5) {
        clearInterval(intervalId);
        resolve(2);
      }
    }, 1000);
  });
}

(async function(){
  const result = await Promise.all([someFunction1(), someFunction2()]);
  console.log("result", result);
})();
Series
function someFunction1(){
  return new Promise(resolve => {
    let i = 0;
    const intervalId = setInterval(() => {
      i++;
      console.log("someFunction1", i);
      if (i == 5) {
        clearInterval(intervalId);
        resolve(1);
      }
    }, 1000);
  });
}

function someFunction2(){
  return new Promise(resolve => {
    let i = 0;
    const intervalId = setInterval(() => {
      i++;
      console.log("someFunction2", i);
      if (i == 5) {
        clearInterval(intervalId);
        resolve(2);
      }
    }, 1000);
  });
}

(async function(){
  const result = [await someFunction1(), await someFunction2()];
  console.log("result", result);
})();
Both give the exact same result but getting there is very different.
MDN documentation for Promise.all() states that
This method can be useful for aggregating the results of multiple promises. It is typically used when there are multiple related asynchronous tasks that the overall code relies on to work successfully — all of whom we want to fulfill before the code execution continues.
While it isn't explicit, you can await Promise.all to track multiple promises. Only when all promises are resolved will the code execution continue.
The other approach you mention of capturing separate asynchronous tasks in an array is not the same due to how await operates.
An await splits execution flow, allowing the caller of the async function to resume execution. After the await defers the continuation of the async function, execution of subsequent statements ensues. If this await is the last expression executed by its function, execution continues by returning to the function's caller a pending Promise for completion of the await's function and resuming execution of that caller.
So, each await will pause execution before resuming. No need for a demonstration.
I'm attempting to define a function that returns a promise. The promise should resolve when a given array is set (push()).
To do this I'm attempting to use a Proxy object (influenced by this):
let a = []

;(async function(){
  const observe = array => new Promise(resolve =>
    new Proxy(array, {
      set(array, key, val) {
        array[key] = val;
        resolve();
      }
    }));

  while(true){
    await observe(a);
    console.log(new Date().toLocaleTimeString(), "Blimey Guv'nor:", `${a.pop()}`);
  }
})(a);

;(async function(){
  await new Promise(resolve => timerID = setTimeout(resolve, 2000))
  a.push('ʕ·͡ᴥ·ʔ');
  a.push('¯\(°_o)/¯ ')
})(a)
I can't see why this doesn't work. Does anyone have any idea?
More generally, what is a good way to have a promise resolve on push to an array?
The problems with your attempt:
You invoke .push on the original array, not the proxied one. Where you create the proxy, it is returned to no one: any reference to it is lost (and it will be garbage collected).
The code following the line with await executes asynchronously, i.e. after all of your push calls have already executed. That means console.log will run when the array already has two elements. Promises are thus not the right tool for what you want, as the resolution of a promise can only be acted upon once all other synchronous code has run to completion. To get notifications synchronously during execution, you need a synchronous solution; promises are based on asynchronous execution.
Just to complete the answer, I provide here a simple synchronous callback solution:
function observed(array, cb) {
  return new Proxy(array, {
    set(array, key, val) {
      array[key] = val;
      if (!isNaN(key)) cb(); // now it is synchronous
      return true;
    }
  });
}

let a = observed([], () =>
  console.log(new Date().toLocaleTimeString(), "Blimey Guv'nor:", `${a.pop()}`)
);

a.push('ʕ·͡ᴥ·ʔ');
a.push('¯\(°_o)/¯ ');
As noted before: promises are not the right tool when you need synchronous code execution.
When each push is executed asynchronously
You can use promises, if you are sure that each push happens in a separate task, where the promise job queue is processed in between every pair of push calls.
For instance, if you make each push call as part of an input event handler, or as the callback for a setTimeout timer, then it is possible:
function observed(array) {
  let resolve = () => null; // dummy
  let proxy = new Proxy(array, {
    set(array, key, val) {
      array[key] = val;
      if (!isNaN(key)) resolve();
      return true;
    }
  });
  proxy.observe = () => new Promise(r => resolve = r);
  return proxy;
}

let a = observed([]);

(async () => {
  while (true) {
    await a.observe();
    console.log(new Date().toLocaleTimeString(), "Blimey Guv'nor:", `${a.pop()}`);
  }
})();

setTimeout(() => a.push('ʕ·͡ᴥ·ʔ'), 100);
setTimeout(() => a.push('¯\(°_o)/¯ '), 100);
In single-threaded, synchronous, non-recursive code, we can be sure that for any given function, there is never more than one invocation of it in progress at a time.
However, in the async/await world, the above no longer applies: while we're awaiting something during execution of async function f, it might be called again.
It occurred to me that, using event emitters and a queue, we could write a wrapper around an async function to guarantee that it never had more than one invocation at a time. Something like this:
const events = require('events')

function locked(async_fn) {
  const queue = [] // either actively running or waiting to run
  const omega = new events()
  omega.on('foo', () => {
    if (queue.length > 0) {
      queue[0].emit('bar')
    }
  })

  return function(...args) {
    return new Promise((resolve) => {
      const alpha = new events()
      queue.push(alpha)
      alpha.on('bar', async () => {
        resolve(await async_fn(...args))
        queue.shift()
        omega.emit('foo')
      })
      if (queue.length === 1) omega.emit('foo')
    })
  }
}
The idea is that if f is an async function then locked(f) is a function that does the same thing except that if f is called during execution of f, the new invocation doesn't begin until the first invocation returns.
I suspect my solution has lots of room for improvement, so I wonder: is there a better way of doing this? In fact, is there one already built into Node, or available via npm?
EDIT to show how this is used:
async function f() {
  console.log('f starts')
  await new Promise(resolve => setTimeout(resolve, 1000))
  console.log('f ends')
}

const g = locked(f)

for (let i = 0; i < 3; i++) {
  g()
}
Running this takes 3 seconds and we get the following output:
f starts
f ends
f starts
f ends
f starts
f ends
Whereas if we replace g() with f() in the for loop, execution takes 1 second and we get the following:
f starts
f starts
f starts
f ends
f ends
f ends
(I realise this is a fairly minor question, and if it's not appropriate for stackoverflow I apologise, but I didn't know of a better place for it.)
In case you stumble on this question, here is the code that does exactly what OP wanted:
const disallowConcurrency = (fn) => {
  let inprogressPromise = Promise.resolve()
  return async (...args) => {
    await inprogressPromise
    inprogressPromise = inprogressPromise.then(() => fn(...args))
    return inprogressPromise
  }
}
Use it like this:
const someAsyncFunction = async (arg) => {
  await new Promise(res => setTimeout(res, 1000))
  console.log(arg)
}

const syncAsyncFunction = disallowConcurrency(someAsyncFunction)

syncAsyncFunction('I am called 1 second later')
syncAsyncFunction('I am called 2 seconds later')
You also might want to change the function name to something clearer, because promises actually have nothing to do with concurrency.
Here is the decorator from my previous answers:
function asyncBottleneck(fn, concurrency = 1) {
  const queue = [];
  let pending = 0;
  return async (...args) => {
    if (pending === concurrency) {
      await new Promise((resolve) => queue.push(resolve));
    }

    pending++;

    return fn(...args).then((value) => {
      pending--;
      queue.length && queue.shift()();
      return value;
    });
  };
}
Usage:
const task = asyncBottleneck(async () => {
  console.log("task started");
  await new Promise((resolve) => setTimeout(resolve, 1000));
  console.log("end");
});

task();
task();
task();
task();
Can I suggest my module here: https://www.npmjs.com/package/job-pipe
Basically if you have an async method:
const foo = async () => {...}
You create a pipe for it:
const pipe = createPipe({ maxQueueSize: Infinity })
Then you wrap your method like this:
const limitedFoo = pipe(foo)
And then you can do this kind of magic:
limitedFoo()
limitedFoo()
limitedFoo()
limitedFoo()
await limitedFoo()
Even though I am awaiting only the last one, these functions will be executed one by one due to the pipe restriction.
job-pipe allows combining multiple different methods into one pipe, and it can be configured to permit a given number of parallel jobs. You can also monitor how many jobs are running and how many are queued at any time, and abort them all if needed.
I know this is an old post but I hope it will help someone.
It'd be a hacky way to do it, but you could also just cache the fact that the function was called.
let didRun = false;

async function runMeOnce() {
  if (didRun) return;
  didRun = true;
  // ... do stuff
}

await runMeOnce();
await runMeOnce(); // will just return
I'm sure there are much better solutions - but this would work with very little effort.
This question is somewhat academic in that I don't have a real need to do this.
I'm wondering if I can force the resolution of a promise into a returned value from a function such that the function callers are not aware that the functions contain promised async operations.
In .NET I can do things like this by using functions on Task[] or returning Task.Result, which causes the caller to wait for the completion of the task; callers won't know or care that the work has been done using tasks.
If you're using ES6 you can use a generator to make code like this. It essentially comes close to 'blocking' on the promise, so you have the appearance of a long-running method that just returns the value you want, but async/promises live under the covers.
let asyncTask = () =>
  new Promise(resolve => {
    let delay = Math.floor(Math.random() * 100);
    setTimeout(function () {
      resolve(delay);
    }, delay);
  });

let makeMeLookSync = fn => {
  let iterator = fn();
  let loop = result => {
    !result.done && result.value.then(res =>
      loop(iterator.next(res)));
  };
  loop(iterator.next());
};

makeMeLookSync(function* () {
  let result = yield asyncTask();
  console.log(result);
});
More explanation and the source available here: http://www.tivix.com/blog/making-promises-in-a-synchronous-manner/