I wrote a node.js script to fetch some prices from exchanges. It looks like this:
async function main() {
    async function func() {
        var start_time = performance.now();
        for (let route of routes) {
            var result_amount = await calc(route, amount_wei);
            if (result_amount[5] > amount_start * 1) {
                console.log("Good Trade");
            }
        }
    }
    while (true) {
        await func();
    }
}
and one route (route of routes) looks like this:
[
"quick / sushi - 1x1",
token_Address_usdc,
token_Address_dai,
token_Address_usdc,
"-",
"-",
"USDC - DAI - USDC",
]
So first I fetch the output I would get if I swapped USDC to DAI on Quickswap, then from DAI to USDC on Sushiswap. I save the output in an array (result_amount) and return it to the main program, where the result is compared to the start amount.
I do have like 10 trading routes and the program needs about 20 seconds, so 2 seconds per route.
The routes are absolutely independent from each other, so it should be possible to fetch all routes at the same time right?
I have read something about multithreading with workers, but I have to say, I didn't get it. Can someone help me with this problem?
Thank you
It's important to understand that Node.js is generally single-threaded, not multithreaded. Even though asynchronous operations sometimes give the impression of parallelism, that is not a requirement: just because an operation is asynchronous does not mean it has to run in a separate thread.
The routes are absolutely independent from each other, so it should be possible to fetch all routes at the same time right?
It depends. The easiest case is complete independence, where the result of one request isn't used in conjunction with the results of the other fetches.
You didn't provide much detail, so it's not really possible to answer your question precisely.
What I can recommend, though, is: always try to avoid multithreading if possible.
ControlAltDel's answer is a good starting point.
Stop using await and use Promise.all
This will allow you to wait for all of your data to come in in parallel, rather than serially
function func() {
    var promises = [];
    for (let route of routes) {
        promises.push(calc(route, amount_wei));
    }
    Promise.all(promises).then(function(completedItems) {
        completedItems.forEach(function(val) {
            var result_amount = val;
            if (result_amount[5] > amount_start * 1) {
                console.log("Good Trade");
            }
        });
    });
}
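For reference, the same idea can also be written with async/await on top of Promise.all; this is a minimal sketch that reuses the routes, calc, amount_wei, and amount_start names from the question:
async function func() {
    // Start the calculation for every route at once, then wait for all results.
    const results = await Promise.all(
        routes.map((route) => calc(route, amount_wei))
    );
    for (const result_amount of results) {
        if (result_amount[5] > amount_start * 1) {
            console.log("Good Trade");
        }
    }
}
Note that if any single calc call rejects, Promise.all rejects as a whole; Promise.allSettled is an option if the other routes should still be evaluated.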
This answer to a similar question does a great job at explaining how fastify-plugin works and what it does. After reading the explanation, I still have a question remaining; how is this different from a normal function call instead of using the .register() method?
To clarify with an example, how are the two approaches below different from each other:
const app = fastify();

// Register a fastify-plugin that decorates app
const myPlugin = fp((app: FastifyInstance) => {
    app.decorate('example', 10);
});
app.register(myPlugin);

// Just decorate the app directly
const decorateApp = (app: FastifyInstance) => {
    app.decorate('example', 10);
};
decorateApp(app);
By writing a decorateApp function you are creating your own "API" to load your application.
That said, the first problem you will soon face is sync vs async:
decorateApp is a sync function
decorateAppAsync would be an async function
For example, suppose you need to preload something from the database before you can start your application.
const decorateApp = (app) => {
    app.register(require('@fastify/mongodb'))
};
const businessLogic = async (app) => {
    const data = await app.mongo.db.collection('data').find({}).toArray()
}
decorateApp(app)
businessLogic(app) // whoops: it is async
In this example you need to change a lot of code, as sketched below:
the decorateApp function must be async
the mongodb registration must be awaited
the main code that loads the application must be async
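For illustration, those changes could look roughly like this (a sketch based on the snippet above, not a definitive implementation):
const decorateApp = async (app) => {
    // registration must now be awaited
    await app.register(require('@fastify/mongodb'))
};

const businessLogic = async (app) => {
    const data = await app.mongo.db.collection('data').find({}).toArray()
};

// the loading code itself must become async too
const main = async () => {
    await decorateApp(app)
    await businessLogic(app)
};

main()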
Instead, by using the fastify's approach, you need to update only the plugin that loads the database:
const applicationConfigPlugin = fp(
+  async function (fastify) {
-  function (fastify, opts, next) {
-    fastify.register(require('@fastify/mongodb'))
-    next()
+    await fastify.register(require('@fastify/mongodb'))
  }
)
PS: note that the question's fastify-plugin example code is missing the next callback even though it is a sync function.
The next bad pattern is hidden coupling between functions.
Every application needs a config. Usually, the fastify instance is decorated with it.
So, you will have something like:
decorateAppWithConfig(app);
decorateAppWithSomethingElse(app);
Now, decorateAppWithSomethingElse implicitly relies on being loaded after decorateAppWithConfig.
Instead, by using the fastify-plugin, you can write:
const applicationConfigPlugin = fp(
    async function (fastify) {
        fastify.decorate('config', 42);
    },
    {
        name: 'my-app-config',
    }
)

const applicationBusinessLogic = fp(
    async function (fastify) {
        // ...
    },
    {
        name: 'my-app-business-logic',
        dependencies: ['my-app-config']
    }
)

// note the WRONG order of the plugins
app.register(applicationBusinessLogic);
app.register(applicationConfigPlugin);
Now, you will get a nice error, instead of a Cannot read properties of undefined when the config decorator is missing:
AssertionError [ERR_ASSERTION]: The dependency 'my-app-config' of plugin 'my-app-business-logic' is not registered
So, basically, writing a series of functions that use/decorate the fastify instance is doable, but it adds a new convention to your code: you have to manage the loading of the plugins yourself.
This job is already implemented by fastify, and fastify-plugin adds many validation checks on top of it.
So, considering the question's example: there is no difference, but using that approach in a bigger application will lead to more complex code:
sync/async loading functions
poor error messages
hidden dependencies instead of explicit ones
Let's assume I have an endpoint /print. Whenever a request is made to this endpoint, it executes a function printSomething(). While printSomething() is processing, if the user or another user hits this endpoint it will execute printSomething() again. If this occurs multiple times the method will be executed multiple times in parallel.
app.get('/print', (req, res) => {
printSomething(someArguments);
res.send('processing receipts');
});
The issue with the default behavior is that inside printSomething() I create some local files which are needed during the execution of printSomething(), and when another call is made to printSomething() it will overwrite those files, hence none of the calls return the desired result.
What I want to do is make sure a printSomething() execution is completed before another execution of printSomething() is started, or to stop the current execution of printSomething() once a new request is made to the endpoint.
You have different options based on what you need, how many requests you expect to receive, and what you do with the files you are creating.
Solution 1
If you expect little traffic on this endpoint,
you can add a unique key to someArguments (based on the request, or randomly generated) and create the files you need in a separate directory, for example:
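A minimal sketch of that idea; passing a workDir argument to printSomething is an assumption, so adapt it to however the function decides where to write its files:
const crypto = require('crypto');
const fs = require('fs');
const os = require('os');
const path = require('path');

app.get('/print', (req, res) => {
    // Give every request its own working directory so concurrent
    // printSomething() calls don't overwrite each other's files.
    const requestId = crypto.randomUUID();
    const workDir = fs.mkdtempSync(path.join(os.tmpdir(), `print-${requestId}-`));
    printSomething(someArguments, workDir); // assumes printSomething accepts a target directory
    res.send('processing receipts');
});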
Solution 2
If you think that will cause performance issues, you have to create some sort of queue and worker to handle the tasks.
This way you can control how many tasks are executed simultaneously, for example:
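One rough sketch of this, assuming printSomething returns a promise, is to chain the calls on a single promise so only one runs at a time:
// Queue with a concurrency of one: each new job starts only after the previous one finished.
let printQueue = Promise.resolve();

app.get('/print', (req, res) => {
    printQueue = printQueue
        .then(() => printSomething(someArguments)) // assumes printSomething returns a promise
        .catch((err) => console.error('print failed', err));
    res.send('processing receipts');
});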
If you are building a REST system with distributed calls this sounds problematic. Usually you don't want one request to block another.
If the order of operations is crucial (FIFO) then it looks like a classic queue problem.
There are many different ways to implement the queue: you could use an array, or implement a singleton class extending EventEmitter (a sketch of such a class follows the route handler below).
const crypto = require('crypto');
const myQueue = new Q();

const route = (req, res) => {
    const uniqueID = crypto.randomUUID();
    myQueue.once(uniqueID, (data) => {
        res.send(data);
    });
    myQueue.process(uniqueID, req.someData);
};
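The Q class above is not spelled out; one possible sketch is a singleton EventEmitter that processes jobs strictly one at a time (FIFO) and emits the job's unique ID when it is done. Here doWork() is just a placeholder for the actual printing logic:
const EventEmitter = require('events');

class Q extends EventEmitter {
    constructor() {
        super();
        this.chain = Promise.resolve(); // pending work, processed FIFO
    }
    process(id, data) {
        this.chain = this.chain
            .then(() => doWork(data)) // placeholder for the printSomething-like work
            .then((result) => this.emit(id, result))
            .catch((err) => this.emit(id, { error: err.message }));
    }
}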
You could use app.locals, as the docs suggest: Once set, the value of app.locals properties persist throughout the life of the application.
You need to use req.app.locals, something like
req.app.locals.isPrinting = true;

// Then check it
if (req.app.locals.isPrinting)
req.app.locals is the way express helps you access app.locals
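Putting it together, a rough sketch of the route handler; it assumes printSomething returns a promise, and the 429 status is just one way to signal "busy":
app.get('/print', async (req, res) => {
    if (req.app.locals.isPrinting) {
        return res.status(429).send('a print job is already running');
    }
    req.app.locals.isPrinting = true;
    try {
        await printSomething(someArguments); // assumes printSomething returns a promise
        res.send('processing receipts');
    } finally {
        req.app.locals.isPrinting = false;
    }
});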
For load testing, in the VU stage I generate a lot of objects with unique ids that I put in the database. I want to delete them during the teardown stage in order not to pollute the database.
When keeping the state like this
let ids = [];

export function setup() {
    ids.push('put in setup id');
}

export default function () {
    ids.push('put in vu id');
}

export function teardown() {
    ids.push('put in teardown id');
    console.log('Resources: ' + ids);
}
it doesn't work, as the array only ever contains the data I put in during the teardown stage.
Passing data between stages also doesn't work due to the well-known Cannot extend Go slice issue, and even setting that aside, you cannot pass data from the VU stage to teardown, as teardown always gets the data returned from the setup stage.
The only remaining solutions are either playing around with console logs or just using a plain preset of ids in the tests. Is there another way?
The setup(), teardown(), and the VUs' default functions are executed in completely different JavaScript runtimes. For distributed execution, they may be executed on completely different machines. So you can't just have a global ids variable that you're able to access from everywhere.
That limitation is the reason why you're supposed to return any data you care about from setup() - k6 will copy it and pass it as a parameter to the default function (so you can use whatever resources you set up) and teardown() (so you can clean them up).
Your example would have to look something like this:
export function setup() {
    let ids = [];
    ids.push('put in setup id');
    return ids;
}

export default function (ids) {
    // you cannot push to ids here
    console.log('Resources: ' + ids);
}

export function teardown(ids) {
    console.log('Resources: ' + ids);
}
You can find more information at https://k6.io/docs/using-k6/test-life-cycle
To expand on #na--'s answer, I propose an external workaround using Redis and Webdis to manage the IDs.
It's actually quite simple, if you don't mind running an additional process, and shouldn't impact performance greatly:
Start a Webdis/Redis container:
docker run --rm -it -p 127.0.0.1:7379:7379 nicolas/webdis
script.js:
import http from 'k6/http';
const url = "http://127.0.0.1:7379/"
export function setup() {
    const ids = [1, 2, 3];
    for (let id of ids) {
        http.post(url, `LPUSH/ids/${id}`);
    }
}

export default function () {
    const id = Math.floor(Math.random() * 10);
    http.post(url, `LPUSH/ids/${id}`);
}

export function teardown() {
    let res = http.get(`${url}LRANGE/ids/0/-1`);
    let ids = JSON.parse(res.body)['LRANGE'];
    for (let id of ids) {
        console.log(id);
    }
    // cleanup
    http.post(url, 'DEL/ids');
}
Run 5 iterations with:
k6 run -i 5 script.js
Example output:
INFO[0000] 7
INFO[0000] 2
INFO[0000] 2
INFO[0000] 6
INFO[0000] 5
INFO[0000] 3
INFO[0000] 2
INFO[0000] 1
A drawback of this solution is that it will skew the overall test results because of the additional HTTP requests that are not relevant to the test itself. There might be a way to exclude these with tags (see the sketch below); otherwise it would be a good feature request. :)
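For example, the bookkeeping requests could at least be tagged so they can be told apart when analyzing results or defining thresholds (the tag name here is purely illustrative):
import http from 'k6/http';

const url = 'http://127.0.0.1:7379/';
// Custom tag attached to the Redis/Webdis bookkeeping calls only.
const bookkeeping = { tags: { purpose: 'id-bookkeeping' } };

export default function () {
    const id = Math.floor(Math.random() * 10);
    http.post(url, `LPUSH/ids/${id}`, bookkeeping);
}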
Using a Node.js Redis client to avoid HTTP requests could be an alternative, but those libraries usually aren't "browserifiable" so they likely wouldn't work in k6.
I am writing Webdriver automation for a web app. I have a test that looks like this:
it('has five items', async function(done) {
    try {
        await driver.wait(until.elementLocated(By.className('item-class')), 5000);
        const items = await driver.findElements(By.className('item-class'));
        expect(items.length).toBe(5);
        done();
    }
    catch(err) {
        console.log(err)
    }
})
This test will pass about 2/3 of the time, but will sometimes fail with:
Expected 0 to be 5.
I would think that there should be no way to get this response, since the first line is supposed to make it wait until some of these items exist. I could understand a result of "Expected 1 to equal 5.", in the case that one item was added to the page, and the rest of the test completed before they were all there, but reaching the expect() call with 0 items on the page does not make sense to me.
The questions, then, are:
1) What am I missing / not understanding, such that this result is in fact possible?
2) Is there a different construct / method I should be using to make it wait until the expected items are on the page?
I checked the source code and elementLocated uses findElements, see here. And findElements can return an empty array of elements after the timeout, hence 0 is expected (learnt something new today).
You can write something custom or use some ready-made method from here that doesn't use findElements
driver.wait(async function() {
    const items = await driver.findElements(By.className('item-class'))
    return items.length > 0;
}, 5000);
Well, I think a good way to solve this issue would be:
try {
    const items = await driver.wait(until.elementsLocated(By.className('item-class')));
    return items.length > 0;
}
catch(err) {
    console.log(err)
}
This way it will always wait for ALL elementS (it's elementSLocated) to be located and will return an array of items (remember that without await it will return a promise, not the array itself).
It has no timeout so it will wait until they are all ready (or you can put a limit so if something weird is happening you can see it).
I'm working on a Node.js module/utility which will allow me to scaffold some directories/files. Long story short, right now I have main function which looks something like this:
util.scaffold("FileName")
This "scaffold" method returns an EventEmitter instance, so, when using this method I can do something like this:
util.scaffold("Name")
.on("done", paths => console.log(paths)
In other words, when all the files are created, the event "done" will be emitted with all the paths of the scaffolded files.
Everything good so far.
Right now, I'm trying to do some tests and benchmarks with this method, and I'm trying to find a way to perform some operations (assertions, logs, etc) after this "scaffold" method has been called multiple times with a different "name" argument. For example:
const names = ["Name1", "Name2", "Name3"]
const emitters = names.map(name => {
return util.scaffold(name)
})
If I was returning a Promise instead of an EventEmitter, I know that I could do something like this:
Promise.all(promises).then(()=> {
//perform assertions, logs, etc
})
However, I'm not sure how can I do the equivalent using EventEmitters. In other words, I need to wait until all these emitters have emitted this same event (i.e. "done") and then perform another operation.
Any ideas/suggestions how to accomplish this?
Thanks in advance.
With Promise.all you have a single piece of information telling you when "everything" is done.
Of course, that is when all Promises inside are fulfilled/rejected.
If you have an EventEmitter, the information when "everything" is done cannot be stored inside your EventEmitter logic, because it doesn't know where or how often the event is emitted.
So the first solution would be to manage an external "everything-done" state, and when it changes to true you perform the other operation.
So, like Promise.all, you have to wrap around it.
The second approach I could imagine is a factory where you build your EventEmitters, and which keeps track of the instances. Then this factory could provide the information whether all instances have fired. But this approach could fail on many levels: one instance -> many calls; one instance -> no call; ...
Just my 5 cents, and I would be happy to see another solution.
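A rough sketch of that factory idea (all names here are illustrative, not part of the actual util API):
// Creates the emitters, counts how many have emitted "done", and invokes a
// callback once every scaffold has finished.
function createScaffoldTracker(util, names, onAllDone) {
    let remaining = names.length;
    const allPaths = [];
    return names.map((name) => {
        const emitter = util.scaffold(name);
        emitter.on("done", (paths) => {
            allPaths.push(paths);
            remaining -= 1;
            if (remaining === 0) onAllDone(allPaths);
        });
        return emitter;
    });
}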
The simplest approach, as mentioned by others, is to return promises instead of EventEmitter instances. However, pursuant to your question, you can write your callback for the done event as follows:
const names = ['Name1', 'Name2', 'Name3']
let count = 0
names.forEach((name) => {
    util.scaffold(name).on('done', (paths) => {
        count += 1
        if (count < names.length) {
            // There is unfinished scaffolding
        } else {
            // All scaffolding complete
        }
    })
})
I ended up doing what #theGleep suggested and wrapping each of those emitters inside a Promise, like this:
const names = ["Name1", "Name2", "Name3"]
const promises = names.map(name => {
return new Promise((resolve) => {
util.scaffold(name).on("done", paths => {
resolve(paths)})
})
})
// and then
Promise.all(promises).then(result => {
// more operations
})
It seems to be doing what I need so far, so I'll just use this for now. Thanks everyone for your feedback :)
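As a side note, Node's built-in events.once helper (available since Node 11.13) already returns a promise for a single event emission, so the manual wrapping can be shortened; a minimal sketch:
const { once } = require('events');

const names = ["Name1", "Name2", "Name3"];
Promise.all(names.map(name => once(util.scaffold(name), "done")))
    .then(results => {
        // each entry is the array of arguments passed to the "done" event,
        // so results[0][0] is the paths value for "Name1"
    });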