Recently I started to wonder: do I handle async operations inside a Node.js stream the right way? I just want to make sure I do.
import { Transform } from 'stream';

class AsyncTransform extends Transform {
  constructor() {
    super({objectMode: true});
  }

  public async _transform(chunk, enc, done) {
    const result = await someAsyncStuff();
    result && this.push(result);
    done();
  }
}
This small example works great, but all the async stuff takes some time to execute, and in most cases I want to work with chunks in parallel. I could call done at the top of _transform, but that is not a solution, for several reasons. One of them is the fact that when the last chunk invokes done, the next push will throw Error: stream.push() after EOF. So we can't push once the last chunk has already invoked its done.
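A sketch of that naive approach, just to illustrate the failure (this is not a solution):

async _transform(chunk, enc, done) {
  done(); // acknowledge right away so the next chunk flows in parallel
  const result = await someAsyncStuff();
  // If the last chunk has already called its done(), the stream may have
  // ended by now, and this throws: Error: stream.push() after EOF
  result && this.push(result);
}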
To handle this case I use the transform's _flush method in conjunction with a chunk queue counter (increment when a chunk comes in, decrement on push). God damn, there are already so many words, so here is just a sample of my code.
import { Transform } from 'stream';

const minute = 60000;
const second = 1000;
const CRITICAL_QUEUE_POINT = 50;

export class AsyncTransform extends Transform {
  private queue: number = 0;

  constructor() {
    super({objectMode: true});
  }

  public async _transform(chunk, enc, done) {
    this.checkQueue()
      .then(() => this.init(chunk, done));
  }

  public _flush(done) {
    this._done(done, true);
  }

  private async init(chunk, done) {
    this.increaseQueueCounter();
    this._done(done);
    const user = await new UserRepository().search(chunk);
    this.decreaseQueueCounter();
    this._push(user);
  }

  /**
   * Queue
   * */
  // Resolves once the queue has room for another chunk.
  private checkQueue(): Promise<any> {
    return new Promise((resolve) => {
      const _checkQueue = () => {
        if (this.queue >= CRITICAL_QUEUE_POINT) {
          return setTimeout(_checkQueue, second * 10);
        }
        resolve();
      };
      _checkQueue();
    });
  }

  private increaseQueueCounter(): void {
    this.queue++;
  }

  private decreaseQueueCounter(): void {
    this.queue--;
  }

  /**
   * Transform API
   * */
  private _push(user) {
    this.push(user);
  }

  // On flush, keep polling until the queue has drained before signalling end.
  private _done(done, isFlush: boolean = false) {
    if (!isFlush) {
      return done();
    }
    if (this.queue === 0) {
      return done();
    }
    setTimeout(() => this._done(done, isFlush), second * 10);
  }
}
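For reference, this is how I drive it (a minimal sketch; Readable.from and the ids are just stand-ins for my real source):

const { Readable } = require('stream');

const source = Readable.from(['id1', 'id2', 'id3']);
source
  .pipe(new AsyncTransform())
  .on('data', (user) => console.log(user))
  .on('end', () => console.log('all chunks processed'));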
I thought about this two years ago and was looking for a solution - a framework of some sort. I found a couple of frameworks - highland and event-stream - but all were so complex to use that I decided to write a new one: scramjet.
Now your code can be as simple as:
const { DataStream } = require('scramjet');

yourDataStream.pipe(new DataStream())
  .map(async (chunk) => {
    await checkQueue();
    return new UserRepository().search(chunk);
  });
Or, if I understand checkQueue() correctly - it just keeps the number of simultaneous connections below the critical level - then it's even simpler:
yourDataStream.pipe(new DataStream({ maxParallel: CRITICAL_QUEUE_POINT }))
  .map(async (chunk) => new UserRepository().search(chunk));
It will keep the number of connections at a stable level (every time there's a response it'll start processing a new chunk).
Related
I'm using the Google Gmail API to get sent emails.
I'm using 2 APIs for this -
list (https://developers.google.com/gmail/api/reference/rest/v1/users.messages/list)
get (https://developers.google.com/gmail/api/reference/rest/v1/users.messages/get)
The list API gives a list of message IDs, which I use to get specific data from the get API.
Here's the code for this -
await Promise.all(
  messages?.map(async (message) => {
    const messageData = await contacts.getSentGmailData(
      accessToken,
      message.id
    );
    return messageData;
  })
);
getSentGmailData is the get API here.
The problem is that, while mapping and making requests to this API continuously, I get a 429 (rateLimitExceeded) error.
What I tried is adding a buffer between each request like this -
function delay(ms) {
  return new Promise((resolve) => {
    setTimeout(resolve, ms);
  });
}

const messageData = await contacts.getSentGmailData(accessToken, message.id);
await delay(200);
But this doesn't seem to work.
How can I work around this?
You can use something like the solution below to add more buffer time when you get a 429 (too many requests) from the Google API.
Basically, this code helps you stop calling the API and back off once you exceed the rate limit.
Note: this doesn't mean you can bypass the Google API rate limiter.
async function getSentGmailDataWithBackoff(accessToken, messageId) {
  const MAX_RETRIES = 5;
  let retries = 0;
  let backoffMs = 200; // renamed so it no longer shadows the delay() helper below
  while (true) {
    try {
      const messageData = await contacts.getSentGmailData(accessToken, messageId);
      return messageData;
    } catch (error) {
      if (error.response && error.response.status === 429 && retries < MAX_RETRIES) {
        retries++;
        console.log(`Rate limit exceeded. Retrying in ${backoffMs}ms.`);
        await delay(backoffMs);
        backoffMs *= 2; // exponential backoff
      } else {
        throw error;
      }
    }
  }
}
async function getSentGmailDataWithBackoffBatch(accessToken, messageIds) {
  return Promise.all(
    messageIds.map(async (messageId) => {
      const messageData = await getSentGmailDataWithBackoff(accessToken, messageId);
      return messageData;
    })
  );
}
function delay(ms) {
  return new Promise((resolve) => {
    setTimeout(resolve, ms);
  });
}
The reason the delay is not working is that map does not wait for the Promise to be resolved. The same reasoning applies to forEach, filter, reduce, etc. You can get some idea here: https://gist.github.com/joeytwiddle/37d2085425c049629b80956d3c618971
If you had used a for-of loop or another for loop for this purpose, it would have worked.
for (let message of messages) {
  const messageData = await contacts.getSentGmailData(accessToken, message.id);
  await delay(200);
}
You could also write your own rate-limiting function (also commonly called a throttling function) or use one provided by libraries like Lodash: https://lodash.com/docs#throttle
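For instance, a tiny home-grown concurrency limiter could look like this (a sketch; the limit of 5 is arbitrary, and getSentGmailData is the same call as above):

// At most `limit` tasks run at once; the rest wait in a FIFO queue.
function createLimiter(limit) {
  let active = 0;
  const waiting = [];
  const next = () => {
    if (active >= limit || waiting.length === 0) return;
    active++;
    const { task, resolve, reject } = waiting.shift();
    Promise.resolve()
      .then(task)
      .then(resolve, reject)
      .finally(() => {
        active--;
        next();
      });
  };
  return (task) => new Promise((resolve, reject) => {
    waiting.push({ task, resolve, reject });
    next();
  });
}

// Usage (inside an async function): at most 5 Gmail requests in flight at once.
const limit = createLimiter(5);
const results = await Promise.all(
  messages.map((message) => limit(() => contacts.getSentGmailData(accessToken, message.id)))
);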
I have created my own Xtext-based DSL and a VS Code-based editor with the Language Server Protocol. I parse the model from the current TextDocument with antlr4ts. Below is the code snippet for the listener:
class TreeShapeListener implements DebugInternalModelListener {
  public async enterRuleElem1(ctx: RuleElem1Context): Promise<void> {
    ...
    // by the time the response is received, the 'walk' function has already returned
    var resp = await this.client.sendRequest(vscode_languageserver_protocol_1.DefinitionRequest.type,
        this.client.code2ProtocolConverter.asTextDocumentPositionParams(this.document, position))
      .then(this.client.protocol2CodeConverter.asDefinitionResult, (error) => {
        return this.client.handleFailedRequest(vscode_languageserver_protocol_1.DefinitionRequest.type, error, null);
      });
    ...
    this.model.addElem1(elem1);
  }

  public async enterRuleElem2(ctx: RuleElem2Context): Promise<void> {
    ...
    this.model.addElem2(elem2);
  }
and here I create the parser and the tree walker.
// Create the lexer and parser
let inputStream = antlr4ts.CharStreams.fromString(document.getText());
let lexer = new DebugInternalModelLexer(inputStream);
let tokenStream = new antlr4ts.CommonTokenStream(lexer);
let parser = new DebugInternalModelParser(tokenStream);
parser.buildParseTree = true;
let tree = parser.ruleModel();
let model = new Model();
ParseTreeWalker.DEFAULT.walk(new TreeShapeListener(model, client, document) as ParseTreeListener, tree);
console.log(model);
The problem is that while processing one of the rules (enterRuleElem1), I have an async call (client.sendRequest) whose response only arrives after ParseTreeWalker.DEFAULT.walk has returned. How can I make walk wait till all the rules are completed?
Edit 1: Not sure if this is how the walk function works, but I tried to recreate the above scenario with the minimal code below:
function setTimeoutPromise(delay) {
  return new Promise((resolve, reject) => {
    if (delay < 0) return reject("Delay must be greater than 0")
    setTimeout(() => {
      resolve(`You waited ${delay} milliseconds`)
    }, delay)
  })
}

async function enterRuleBlah() {
  let resp = await setTimeoutPromise(2500);
  console.log(resp);
}

function enterRuleBlub() {
  console.log('entered blub');
}

function walk() {
  enterRuleBlah();
  enterRuleBlub();
}
walk();
console.log('finished parsing');
and the output is
entered blub
finished parsing
You waited 2500 milliseconds
Edit 2: I tried the suggestion from the answer and now it works! My solution looks like:
public async doStuff() {
  ...
  return new Promise((resolve) => {
    resolve(0);
  })
}
let listener = new TreeShapeListener(model, client, document);
ParseTreeWalker.DEFAULT.walk(listener as ParseTreeListener, tree);
await listener.doStuff();
The tree walk is entirely synchronous, regardless of whether you make your listener/visitor rules async or not. Better to separate the requests from the walk: let the walk only collect the information needed to know what to send, then process that collection afterwards and actually send the requests, which you can then await.
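A minimal sketch of that separation (the CollectingListener name and the request payloads are placeholders):

// During the synchronous walk, only record what to request.
class CollectingListener {
  constructor() {
    this.pending = []; // parameters gathered while walking
  }
  enterRuleElem1(ctx) {
    this.pending.push({ /* position, document, ... */ });
  }
}

// After the walk, send the requests and wait for all of them (inside an async function).
const listener = new CollectingListener();
ParseTreeWalker.DEFAULT.walk(listener, tree);
const results = await Promise.all(
  listener.pending.map(params => client.sendRequest(/* ... */ params))
);
// Build the model from `results` here.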
I'm working on a library and I'd like to prevent users from calling a specific function in order to prevent infinite loops.
Usually I'd go about doing it like this:
let preventFooCalls = false;

function fireUserCallbacks() {
  preventFooCalls = true;
  // Fire callbacks of the user here...
  preventFooCalls = false;
}

function foo() {
  if (preventFooCalls) throw Error();
  // Run the content of foo() ...
  // It will probably call fireUserCallbacks() at some point
}
However, if fireUserCallbacks is async, this method is not possible. It might be called multiple times, and with async user callbacks, preventFooCalls is not guaranteed to have the correct value. For instance:
let preventFooCalls = false;

async function fireUserCallbacks() {
  preventFooCalls = true;
  // Fire callbacks of the user here, one of which being:
  await new Promise(r => setTimeout(r, 1000));
  preventFooCalls = false;
}

// Then when doing:
fireUserCallbacks();
foo(); // This will throw even though it's being called from outside fireUserCallbacks()
How can I detect if code is running from within a specific promise?
The only thing I can think of is new Error().stack, but oof that sounds like a terrible way to do it.
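Another idea: in Node, AsyncLocalStorage can track a value across awaits, so the flag would survive async callbacks. A sketch (Node-only, so I'm not sure it fits my case):

const { AsyncLocalStorage } = require('async_hooks');
const fooBlocked = new AsyncLocalStorage();

async function fireUserCallbacks() {
  // Everything started inside run(), including awaited callbacks, sees the stored value.
  await fooBlocked.run(true, async () => {
    // Fire callbacks of the user here...
    await new Promise(r => setTimeout(r, 1000));
  });
}

function foo() {
  // Only throws when called from within the run() context above.
  if (fooBlocked.getStore()) throw Error();
}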
Some context
The reason why I want this is because I'm working on a part of a library that takes care of loading assets. Some of these assets might contain other assets with the possibility of infinite recursion. In order to handle recursion I have another function that I want users to call instead. Therefore I want to warn users when they call foo() from within one of the fireUserCallbacks() callbacks. While this will only be an issue when assets actually contain infinite loops, I'd rather block the usage of foo() completely to prevent unexpected hangs due to infinite loops.
Edit: here's a somewhat more sophisticated example of my actual code. I would share my actual code, but that is really way too long for this format; tbh this example is already getting a bit too complex.
class AssetManager {
  constructor() {
    this.registeredAssetTypes = new Map();
    this.availableAssets = new Map();
  }

  registerAssetType(typeId, assetTypeConstructor) {
    this.registeredAssetTypes.set(typeId, assetTypeConstructor);
  }

  fillAvailableAssets(assetDatas) {
    for (const assetData of assetDatas) {
      const constructor = this.registeredAssetTypes.get(assetData.type);
      const asset = new constructor(assetData.id, assetData.data);
      this.availableAssets.set(assetData.id, asset);
    }
  }

  async loadAsset(assetId, recursionTracker = null) {
    // I have some extra code here that makes sure this function will only have
    // a single running instance, but this example is getting way too long already
    const asset = this.availableAssets.get(assetId);
    let isRootRecursionTracker = false;
    if (!recursionTracker) {
      isRootRecursionTracker = true;
      recursionTracker = new RecursionTracker(assetId);
    }
    const assetData = await asset.generateAsset(recursionTracker);
    if (isRootRecursionTracker) {
      // If this call was made from outside any `generateAsset` implementation,
      // we will wait for all assets to be loaded and put in the right place.
      await recursionTracker.waitForAll();
      // Finally we will give the recursionTracker the created root asset,
      // in case any of the sub assets reference the root asset.
      // Note that circular references in any of the sub assets (i.e. not
      // containing the root asset anywhere in the chain) are already taken care of.
      if (recursionTracker.rootLoadingAsset) {
        recursionTracker.rootLoadingAsset.setLoadedAssetData(assetData);
      }
    }
    return assetData;
  }
}

const assetManager = new AssetManager();

class RecursionTracker {
  constructor(rootAssetId) {
    this.rootAssetId = rootAssetId;
    this.rootLoadingAsset = null;
    this.loadingAssets = new Map();
  }

  loadAsset(assetId, cb) {
    let loadingAsset = this.loadingAssets.get(assetId);
    if (!loadingAsset) {
      loadingAsset = new LoadingAsset(assetId);
      this.loadingAssets.set(assetId, loadingAsset);
      if (assetId != this.rootAssetId) {
        loadingAsset.startLoading(this);
      } else {
        this.rootLoadingAsset = loadingAsset;
      }
    }
    loadingAsset.onLoad(cb);
  }

  async waitForAll() {
    const promises = [];
    for (const loadingAsset of this.loadingAssets.values()) {
      promises.push(loadingAsset.waitForLoad());
    }
    await Promise.all(promises);
  }
}

class LoadingAsset {
  constructor(assetId) {
    this.assetId = assetId;
    this.onLoadCbs = new Set();
    this.loadedAssetData = null;
  }

  async startLoading(recursionTracker) {
    const loadedAssetData = await assetManager.loadAsset(this.assetId, recursionTracker);
    this.setLoadedAssetData(loadedAssetData);
  }

  onLoad(cb) {
    if (this.loadedAssetData) {
      cb(this.loadedAssetData)
    } else {
      this.onLoadCbs.add(cb);
    }
  }

  setLoadedAssetData(assetData) {
    this.loadedAssetData = assetData;
    this.onLoadCbs.forEach(cb => cb(assetData));
  }

  async waitForLoad() {
    await new Promise(r => this.onLoad(r));
  }
}

class AssetTypeInterface {
  constructor(id, rawAssetData) {
    this.id = id;
    this.rawAssetData = rawAssetData;
  }

  async generateAsset(recursionTracker) {}
}

class AssetTypeFoo extends AssetTypeInterface {
  async generateAsset(recursionTracker) {
    // This is here just to simulate network traffic, an indexeddb lookup, or any other async operation:
    await new Promise(r => setTimeout(r, 200));
    const subAssets = [];
    for (const subAssetId of this.rawAssetData.subAssets) {
      // This won't work, as it will start waiting for itself to finish:
      // const subAsset = await assetManager.loadAsset(subAssetId);
      // subAssets.push(subAsset);
      // So instead we will create a dummy asset:
      const dummyAsset = {};
      const insertionIndex = subAssets.length;
      subAssets[insertionIndex] = dummyAsset;
      // and load the asset with a callback rather than waiting for a promise
      recursionTracker.loadAsset(subAssetId, (loadedAsset) => {
        // since this will be called outside the `generateAsset` function, this won't hang
        subAssets[insertionIndex] = loadedAsset;
      });
    }
    return {
      foo: this.id,
      subAssets,
    };
  }
}

assetManager.registerAssetType("foo", AssetTypeFoo);

class AssetTypeBar extends AssetTypeInterface {
  async generateAsset(recursionTracker) {
    // This is here just to simulate network traffic, an indexeddb lookup, or any other async operation:
    await new Promise(r => setTimeout(r, 200));
    // We'll just return a simple object for this one.
    // No recursion here...
    return {
      bar: this.id,
    };
  }
}

assetManager.registerAssetType("bar", AssetTypeBar);

// This is all the raw asset data as stored on the user's disk.
// These are not instances of the assets yet, so no circular references yet.
// The assets only reference other assets by their "id"
assetManager.fillAvailableAssets([
  {
    id: "mainAsset",
    type: "foo",
    data: {
      subAssets: ["subAsset1", "subAsset2"]
    }
  },
  {
    id: "subAsset1",
    type: "bar",
    data: {},
  },
  {
    id: "subAsset2",
    type: "foo",
    data: {
      subAssets: ["mainAsset"]
    }
  }
]);

// This sets the loading of the "mainAsset" in motion. It recursively loads
// all referenced assets and finally puts the loaded assets in the right place,
// completing the circle.
(async () => {
  const asset = await assetManager.loadAsset("mainAsset");
  console.log(asset);
})();
Maintain a queue and a set. The queue contains pending requests. The set contains pending requests, requests in progress, and successfully completed requests. (Each item would include the request itself; the request's status: pending, processing, complete; and possibly a retry counter.)
When a request is made, check if it is in the set. If it is, it was already requested and will be processed, is being processed, or was processed successfully and is already available. If not, add it to both the set and the queue, then trigger queue processing. If queue processing is already running, the trigger is ignored. If not, queue processing starts.
Queue processing pulls requests off the queue, one by one, and processes them. If a request fails, it can either be put back onto the queue for repeat attempts (a counter can be included in the item to limit retries) or it can be removed from the set so it can be requested again later. Queue processing ends when the queue is empty.
This avoids recursion and unnecessary repeat requests.
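A sketch of that shape (processRequest and the retry limit of 3 are placeholders):

const seen = new Map(); // key -> { status, retries }; pending, processing or complete
const queue = [];       // keys waiting to be processed
let processing = false;

function request(key) {
  if (seen.has(key)) return; // already requested, in progress, or done
  seen.set(key, { status: 'pending', retries: 0 });
  queue.push(key);
  processQueue(); // the trigger; ignored if processing is already running
}

async function processQueue() {
  if (processing) return;
  processing = true;
  while (queue.length > 0) {
    const key = queue.shift();
    const item = seen.get(key);
    item.status = 'processing';
    try {
      await processRequest(key); // the actual work, e.g. loading one asset
      item.status = 'complete';
    } catch (e) {
      if (++item.retries < 3) {
        item.status = 'pending';
        queue.push(key); // put back for a repeat attempt
      } else {
        seen.delete(key); // give up; it can be requested again later
      }
    }
  }
  processing = false;
}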
I would like to calculate how long an async function (async/await) takes in JavaScript.
One could do:
const asyncFunc = async function () {};

const before = Date.now();
asyncFunc().then(() => {
  const after = Date.now();
  console.log(after - before);
});
However, this does not work, because promise callbacks are run in a new microtask. I.e. between the end of asyncFunc() and the beginning of then(() => {}), any already queued microtasks will be fired first, and their execution time will be taken into account.
E.g.:
const asyncFunc = async function () {};
const slowSyncFunc = function () {
  for (let i = 1; i < 10 ** 9; i++) {}
};
process.nextTick(slowSyncFunc);
const before = Date.now();
asyncFunc().then(() => {
  const after = Date.now();
  console.log(after - before);
});
This prints 1739 on my machine, i.e. almost 2 seconds, because it waits for slowSyncFunc() to complete, which is wrong.
Note that I do not want to modify the body of asyncFunc, as I need to instrument many async functions without the burden of modifying each of them. Otherwise I could just add a Date.now() statement at the beginning and the end of asyncFunc.
Note also that the problem is not about how the performance counter is being retrieved. Using Date.now(), console.time(), process.hrtime() (Node.js only) or performance (browser only) will not change the base of this problem. The problem is about the fact that promise callbacks are run in a new microtask. If you add statements like setTimeout or process.nextTick to the original example, you are modifying the problem.
Any already queued microtask will be fired first, and their execution time will be taken into account.
Yes, and there's no way around that. If you don't want to have other tasks contribute to your measurement, don't queue any. That's the only solution.
This is not a problem of promises (or async functions) or of the microtask queue specifically; it's a problem shared by all asynchronous things that run callbacks on a task queue.
The problem we have
process.nextTick(() => {/* hang 100ms */})

const asyncFunc = async () => {/* hang 10ms */}

const t0 = /* timestamp */
asyncFunc().then(() => {
  const t1 = /* timestamp */
  const timeUsed = t1 - t0 /* 110ms because of nextTick */
  /* WANTED: timeUsed = 10ms */
})
A solution (idea)
const AH = require('async_hooks')
const hook = /* AH.createHook for
  1. Find async scopes that asyncFunc involves ... SCOPES
     (by handling the 'init' hook)
  2. Record time spent in these SCOPES ... RECORDS
     (by handling the 'before' & 'after' hooks) */
hook.enable()
asyncFunc().then(() => {
  hook.disable()
  const timeUsed = /* process RECORDS */
})
But this won't capture the very first sync operation. I.e., suppose asyncFunc is as below: $1$ won't be added to SCOPES (as it is a sync op, async_hooks won't init a new async scope), and so no time record is ever added to RECORDS.
hook.enable()
/* A */
(async function asyncFunc () { /* B */
  /* hang 10ms; usually for init constants etc ... $1$ */
  /* from async_hooks' POV, scope A === scope B */
  await /* async scope */
})().then(..)
To record those sync ops, a simple solution is to force them to run in a new async scope by wrapping them in a setTimeout. This extra wrapping takes time to run too, but we can ignore it because the value is very small.
hook.enable()
/* force async_hooks to 'init' a new async scope */
setTimeout(() => {
  const t0 = /* timestamp */
  asyncFunc()
    .then(() => { hook.disable() })
    .then(() => {
      const timeUsed = /* process RECORDS */
    })
  const t1 = /* timestamp */
  t1 - t0 /* ~0; note that the 2 `then` callbacks will not run for now */
}, 1)
Note that this solution measures 'time spent on sync ops which the async function involves'; async ops, e.g. timeout idle time, will not be counted, e.g.:
async () => {
  /* hang 10ms; count */
  await new Promise(resolve => {
    setTimeout(() => {
      /* hang 10ms; count */
      resolve()
    }, 800 /* NOT count */)
  })
  /* hang 10ms; count */
}
// measurement takes 800ms to run
// timeUsed for asyncFunc is 30ms
Lastly, I think it may be possible to measure an async function in a way that includes both sync & async ops (e.g. the 800ms can be determined), because async_hooks provides scheduling details: e.g. for setTimeout(f, ms), async_hooks will init an async scope of type "Timeout", and the scheduling detail, ms, can be found in resource._idleTimeout in the init(,,,resource) hook.
Demo (tested on Node.js v8.4.0)
// measure.js
const { writeSync } = require('fs')
const { createHook } = require('async_hooks')

class Stack {
  constructor() {
    this._array = []
  }
  push(x) { return this._array.push(x) }
  peek() { return this._array[this._array.length - 1] }
  pop() { return this._array.pop() }
  get is_not_empty() { return this._array.length > 0 }
}

class Timer {
  constructor() {
    this._records = new Map/* of {start:number, end:number} */
  }
  starts(scope) {
    this._records.set(scope, {
      start: this.timestamp(),
      end: -1,
    })
  }
  ends(scope) {
    this._records.get(scope).end = this.timestamp()
  }
  timestamp() {
    return Date.now()
  }
  timediff(t0, t1) {
    return Math.abs(t0 - t1)
  }
  report(scopes, detail) {
    let tSyncOnly = 0
    let tSyncAsync = 0
    for (const [scope, { start, end }] of this._records)
      if (scopes.has(scope))
        if (~end) {
          tSyncOnly += end - start
          tSyncAsync += end - start
          const { type, offset } = detail.get(scope)
          if (type === "Timeout")
            tSyncAsync += offset
          writeSync(1, `async scope ${scope} \t... ${end - start}ms \n`)
        }
    return { tSyncOnly, tSyncAsync }
  }
}

async function measure(asyncFn) {
  const stack = new Stack
  const scopes = new Set
  const timer = new Timer
  const detail = new Map
  const hook = createHook({
    init(scope, type, parent, resource) {
      if (type === 'TIMERWRAP') return
      scopes.add(scope)
      detail.set(scope, {
        type: type,
        offset: type === 'Timeout' ? resource._idleTimeout : 0
      })
    },
    before(scope) {
      if (stack.is_not_empty) timer.ends(stack.peek())
      stack.push(scope)
      timer.starts(scope)
    },
    after() {
      timer.ends(stack.pop())
    }
  })
  // Force the creation of a new async scope by wrapping asyncFn in setTimeout,
  // so that the sync part of asyncFn() is an async op from async_hooks' POV.
  // The extra async scope also takes time to run, which should not be counted.
  return await new Promise(r => {
    hook.enable()
    setTimeout(() => {
      asyncFn()
        .then(() => hook.disable())
        .then(() => r(timer.report(scopes, detail)))
        .catch(console.error)
    }, 1)
  })
}
Test
// arrange
const hang = (ms) => {
  const t0 = Date.now()
  while (Date.now() - t0 < ms) { }
}

const asyncFunc = async () => {
  hang(16) // 16
  try {
    await new Promise(r => {
      hang(16) // 16
      setTimeout(() => {
        hang(16) // 16
        r()
      }, 100) // 100
    })
    hang(16) // 16
  } catch (e) { }
  hang(16) // 16
}

// act
process.nextTick(() => hang(100)) // 100
measure(asyncFunc).then(report => {
  // inspect
  const { tSyncOnly, tSyncAsync } = report
  console.log(`
∑ Sync Ops = ${tSyncOnly}ms \t (expected=${16 * 5})
∑ Sync&Async Ops = ${tSyncAsync}ms \t (expected=${16 * 5 + 100})
`)
}).catch(e => {
  console.error(e)
})
Result
async scope 3 ... 38ms
async scope 14 ... 16ms
async scope 24 ... 0ms
async scope 17 ... 32ms
∑ Sync Ops = 86ms (expected=80)
∑ Sync&Async Ops = 187ms (expected=180)
Consider using the performance.now() API:
const time_0 = performance.now();
someFunction(); // hypothetical: the function you want to measure
const time_1 = performance.now();
console.log("Call to someFunction took " + (time_1 - time_0) + " milliseconds.");
As performance.now() is the bare-bones version of console.time, it provides more accurate timings.
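For an async function, the same pattern works if you take the second reading after awaiting it (still subject to the microtask caveat raised in the question):

const before = performance.now();
await asyncFunc(); // inside an async context
const after = performance.now();
console.log(`asyncFunc took ${after - before} milliseconds`);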
You can use console.time('nameit') and console.timeEnd('nameit'); check the example below.
console.time('init')

const asyncFunc = async function () {};
const slowSyncFunc = function () {
  for (let i = 1; i < 10 ** 9; i++) {}
};

// let's slow down a bit.
slowSyncFunc()

console.time('async')
asyncFunc().then((data) => {
  console.timeEnd('async')
});

console.timeEnd('init')
I periodically have to download and parse a bunch of JSON data, about 1,000 to 1,000,000 lines.
Each request has a chunk limit of 5000. So I would like to fire off a bunch of requests at a time, stream each output through its own Transformer for filtering out the key/values, and then write to a combined stream that writes its output to the database.
But every attempt either doesn't work or gives errors because too many event listeners are set, which seems correct if I understand it right that the 'last pipe' is always the reference passed to the next one in the chain.
Here is some code (I've changed it a lot of times, so it may make little sense).
The question is: is it bad practice to join multiple streams into one? Google also doesn't show a whole lot about it.
Thanks!
brokerApi/getCandles.js
// The 'combined output' stream
let passStream = new Stream.PassThrough();

countChunks.forEach(chunk => {
  let arr = [];
  let leftOver = '';
  let startFound = false;
  let lastPiece = false;
  let firstByte = false;
  let now = Date.now();

  let transformStream = this._client
    // Returns PassThrough stream
    .getCandles(instrument, chunk.from, chunk.until, timeFrame, chunk.count)
    .on('error', err => console.error(err) || passStream.emit('error', err))
    .on('end', () => {
      if (++finished === countChunks.length)
        passStream.end();
    })
    .pipe(passStream);

  transformStream._transform = function(data, type, done) {
    /** Transform to typedArray **/
    this.push(/** Transformed value **/)
  }
});
Extra - Other file that 'consumes' the stream (writes to DB)
DataLayer.js
brokerApi.getCandles(instrument, timeFrame, from, until, count)
  .on('data', async (buf: NodeBuffer) => {
    this._dataLayer.write(instrument, timeFrame, buf);
    if (from && until) {
      await this._mapper.update(instrument, timeFrame, from, until, buf.length / (10 * Float64Array.BYTES_PER_ELEMENT));
    } else {
      if (buf.length) {
        if (!from)
          from = buf.readDoubleLE(0);
        if (!until) {
          until = buf.readDoubleLE(buf.length - (10 * Float64Array.BYTES_PER_ELEMENT));
          console.log('UNTIL TUNIL', until);
        }
        if (from && until)
          await this._mapper.update(instrument, timeFrame, from, until, buf.length / (10 * Float64Array.BYTES_PER_ELEMENT));
      }
    }
  })
  .on('end', () => {
    winston.info(`Cache: Fetching ${instrument} took ${Date.now() - now} ms`);
    resolve()
  })
  .on('error', reject)
Check out the stream helpers from highlandjs, e.g. (untested, pseudo code):
function getCandle(candle) {...}
_(chunks).map(getCandle).parallel(5000).pipe(...)
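Fleshed out slightly (still untested; it assumes getCandles returns a Node readable stream per chunk, and databaseWriteStream is a placeholder for your DB sink):

const _ = require('highland');

// Wrap each chunk's stream in a highland stream; parallel(n) consumes up to
// n of them at once and emits their output on one combined stream.
_(countChunks)
  .map(chunk => _(client.getCandles(instrument, chunk.from, chunk.until, timeFrame, chunk.count)))
  .parallel(5)
  .pipe(databaseWriteStream);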