remove ids generated during the tests - javascript

For load testing, in the VU stage I generate a lot of objects with unique ids and put them in the database. I want to delete them during the teardown stage so as not to pollute the database.
When keeping the state like this
let ids = [];
export function setup() {
ids.push('put in setup id');
}
export default function () {
ids.push('put in vu id');
}
export function teardown() {
ids.push('put in teardown id');
console.log('Resources: ' + ids);
}
it doesn't work: the array only ever contains the data added during the teardown stage.
Passing data between stages also doesn't work, due to the well-known Cannot extend Go slice issue, and even aside from that, you cannot pass data from the VU stage to teardown, as teardown always receives the data returned from the setup stage.
The only remaining options seem to be either playing around with console.log or just using a fixed, preset list of ids in the tests. Is there another way?

The setup(), teardown(), and the VUs' default functions are executed in completely different JavaScript runtimes. For distributed execution, they may be executed on completely different machines. So you can't just have a global ids variable that you're able to access from everywhere.
That limitation is the reason why you're supposed to return any data you care about from setup() - k6 will copy it and pass it as a parameter to the default function (so you can use whatever resources you set up) and teardown() (so you can clean them up).
Your example has to look somewhat like this:
export function setup() {
let ids = [];
ids.push('put in setup id');
return ids;
}
export default function (ids) {
// pushing to ids here only mutates this VU's local copy; it never reaches teardown()
console.log('Resources: ' + ids);
}
export function teardown(ids) {
console.log('Resources: ' + ids);
}
You can find more information at https://k6.io/docs/using-k6/test-life-cycle

To expand on @na--'s answer, I propose an external workaround that uses Redis and Webdis to manage the IDs.
It's actually quite simple if you don't mind running an additional process, and it shouldn't impact performance greatly:
Start a Webdis/Redis container:
docker run --rm -it -p 127.0.0.1:7379:7379 nicolas/webdis
script.js:
import http from 'k6/http';
const url = "http://127.0.0.1:7379/"
export function setup() {
const ids = [1, 2, 3];
for (let id of ids) {
http.post(url, `LPUSH/ids/${id}`);
}
}
export default function () {
const id = Math.floor(Math.random() * 10);
http.post(url, `LPUSH/ids/${id}`);
}
export function teardown() {
let res = http.get(`${url}LRANGE/ids/0/-1`);
let ids = JSON.parse(res.body)['LRANGE'];
for (let id of ids) {
console.log(id);
}
// cleanup
http.post(url, 'DEL/ids');
}
Run 5 iterations with:
k6 run -i 5 script.js
Example output:
INFO[0000] 7
INFO[0000] 2
INFO[0000] 2
INFO[0000] 6
INFO[0000] 5
INFO[0000] 3
INFO[0000] 2
INFO[0000] 1
A drawback of this solution is that it will skew the overall test results, because of the additional HTTP requests that are not relevant to the test itself. There might be a way to exclude these with tags; otherwise it would be a good feature request. :)
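For instance, k6 supports per-request tags via the params argument; here is a minimal sketch of tagging the bookkeeping requests so they can be told apart from the real test traffic in thresholds or when post-processing results (the tag name is my own, not part of k6 or the original script):
import http from 'k6/http';

const url = 'http://127.0.0.1:7379/';
// custom tag marking traffic that exists only for ID bookkeeping
const bookkeeping = { tags: { purpose: 'id-bookkeeping' } };

export default function () {
  const id = Math.floor(Math.random() * 10);
  // the third argument is k6's params object; tags end up on the metrics
  http.post(url, `LPUSH/ids/${id}`, bookkeeping);
}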
Using a Node.js Redis client to avoid HTTP requests could be an alternative, but those libraries usually aren't "browserifiable" so they likely wouldn't work in k6.

Related

What's the advantage of fastify-plugin over a normal function call?

This answer to a similar question does a great job of explaining how fastify-plugin works and what it does. After reading the explanation, I still have one remaining question: how is this different from a normal function call instead of using the .register() method?
To clarify with an example, how are the two approaches below different from each other:
const app = fastify();
// Register a fastify-plugin that decorates app
const myPlugin = fp((app: FastifyInstance) => {
app.decorate('example', 10);
});
app.register(myPlugin);
// Just decorate the app directly
const decorateApp = (app: FastifyInstance) => {
app.decorate('example', 10);
};
decorateApp(app);
By writing a decorateApp function you are creating your own "API" for loading your application.
That said, the first burden you will soon face is sync vs. async:
decorateApp is a sync function
decorateAppAsync would be an async function
For example, say you need to preload something from the database before you can start your application:
const decorateApp = (app) => {
app.register(require('@fastify/mongodb'))
};
const businessLogic = async (app) => {
const data = await app.mongo.db.collection('data').find({}).toArray()
}
decorateApp(app)
businessLogic(app) // whoops: it is async
In this example you need to change a lot of code:
the decorateApp function must be async
the mongodb registration must be awaited
the main code that loads the application must be async
Instead, by using fastify's approach, you only need to update the plugin that loads the database:
const applicationConfigPlugin = fp(
+  async function (fastify) {
-  function (fastify, opts, next) {
-    fastify.register(require('@fastify/mongodb'))
-    next()
+    await fastify.register(require('@fastify/mongodb'))
  }
)
PS: note that the async version of the plugin omits the next callback: an async function must not accept one, while the sync callback-style signature requires it.
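For reference, a minimal sketch of the two plugin signatures Fastify accepts (the decorated value is just an example):
const fp = require('fastify-plugin')

// callback style: receives `next` and must call it when done
const cbPlugin = fp(function (fastify, opts, next) {
  fastify.decorate('answer', 42)
  next()
})

// async style: no `next`; fastify awaits the returned promise
const asyncPlugin = fp(async function (fastify, opts) {
  fastify.decorate('answer', 42)
})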
The next bad pattern is hidden coupling between functions.
Every application needs a config. Usually, the fastify instance is decorated with it.
So, you will have something like:
decorateAppWithConfig(app);
decorateAppWithSomethingElse(app);
Now, decorateAppWithSomethingElse will need to know that it is loaded after decorateAppWithConfig.
Instead, by using the fastify-plugin, you can write:
const applicationConfigPlugin = fp(
async function (fastify) {
fastify.decorate('config', 42);
},
{
name: 'my-app-config',
}
)
const applicationBusinessLogic = fp(
async function (fastify) {
// ...
},
{
name: 'my-app-business-logic',
dependencies: ['my-app-config']
}
)
// note the WRONG order of the plugins
app.register(applicationBusinessLogic);
app.register(applicationConfigPlugin);
Now, you will get a nice error, instead of a Cannot read properties of undefined when the config decorator is missing:
AssertionError [ERR_ASSERTION]: The dependency 'my-app-config' of plugin 'my-app-business-logic' is not registered
So, basically, writing a series of functions that use/decorate the fastify instance is doable, but it adds a new convention to your code: you will have to manage the loading of the plugins yourself. That job is already implemented by fastify, and fastify-plugin adds many validation checks on top of it.
So, considering the question's example: there is no difference there, but applying that approach to a bigger application will lead to more complex code:
sync/async loading functions
poor error messages
hidden dependencies instead of explicit ones

How to call an API dynamically in functional programming paradigm

My application receives HTTP requests from human clients.
My application needs to call exactly one of 12 APIs, depending on one specific field in the input it receives.
My first thought was of course
// requestPrice.js
const service = req.body.service
const APIs = {
ser1: callAPI1,
ser2: callAPI2,
ser3: callAPI3,
// ...
ser12: callAPI12,
}
return APIs[service](req.body)
This works fine, but I guess it needs some refactoring to make it SOLID-compliant.
In OOP I would normally reach for a design pattern such as Strategy, or maybe Chain of Responsibility.
However, I'm using functional programming, so things are a bit different.
I thought of doing the following:
// ser1.js
export default function callAPI(data) {
  // code 1
}
// ser2.js
export default function callAPI(data) {
  // code 2
}
// ser3.js
export default function callAPI(data) {
  // code 3
}
//...
// ser12.js
export default function callAPI(data) {
  // code 12
}
// requestPrice.js
const service = req.body.service
const api = require(`./${service}`)
return api(req.body)
This looks much better than the first version, as it follows the Single Responsibility Principle much more closely. It also follows the Open/Closed Principle, I guess, as requestPrice.js won't have to change if a 13th API is added.
On the other hand, I should be able to easily unit test even requestPrice.js, since req can be injected.
Is this compliant with the SOLID principles, or is there a better and cleaner way?
I would suggest a factory method (implemented as a curried function in FP) so that the decision of which service to call is separated from what each service does. req.body should then be passed to the impl function the factory returns.
function createService(body) {
  if (checkInput(body) === something) return service1;
  else if (checkInput(body) === something2) return service2;
  // ...
}

function service1(body) { /* ... */ }
function service2(body) { /* ... */ }
// ...

let service = createService(req.body);
service(req.body);
I haven't put them in different files, but you may do so: createService can live in its own module, and each impl (service1, service2, etc.) can be in its own file. The caller of service doesn't need to know which impl it calls, which maintains Dependency Inversion: the higher-level module doesn't know about the lower-level modules. :)
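A variant of the same idea, keeping the factory but replacing the if/else chain with an explicit registry, so adding a 13th service is a one-line change (the module layout below is assumed from the question, not prescribed):
// services/index.js – hypothetical module collecting the impls
const ser1 = require('./ser1');
const ser2 = require('./ser2');
// ... up to ser12

const registry = { ser1, ser2 /* , ... */ };

// the factory: picks the impl, fails loudly on unknown services
function createService(body) {
  const impl = registry[body.service];
  if (!impl) throw new Error(`Unknown service: ${body.service}`);
  return impl;
}

module.exports = { createService };

// caller:
// const service = createService(req.body);
// service(req.body);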

NodeJS On Run-Time Adding/Removing/Reloading requires WITHOUT server restart (No nodemon either)

I have a project for work where I am basically creating a form of CMS to which we will add applications as time moves forward.
The issue we're having is getting those applications loaded in (and more specifically modified) on run-time within the server.
The reason we're requiring this form of "hot loading" is because we don't want the server to restart whenever a change has been made, and more specifically, we'd like to add the new applications through an admin panel.
Nodemon is a useful tool for development, but for our production environment we want to be able to replace an existing application (or module/plugin if you will) without having to restart the server (whether manually or through nodemon, the server needs to be running at all times).
You could compare this to how CMSes like Drupal, Joomla, or WordPress do things, but for our needs we decided that Node was the better way to go, for many reasons.
Code wise, I am looking for something like this, but that will work:
let applications = []
// add a new application through the web interface calling the appropriate class method; within the method the following code runs:
applications.push(require('path/to/application'));
// when an application gets modified:
applications.splice(index, 1);
applications.push(require('path/to/application'));
But I require existing instances of said application to be adjusted as well.
Example:
// file location: ./Applications/application/index.js
class application {
greet() {
console.log("Hello");
}
}
module.exports = application;
the app loader would load in said application:
class appLoader {
constructor() {
this.List = new Object();
}
Add(appname) {
this.List[appname] = require(`./Applications/${appname}/index`);
}
Remove(appname) {
delete require.cache[require.resolve(`./Applications/${appname}/index`)]
delete this.List[appname];
}
Reload(appname) {
this.Remove(appname);
this.Add(appname);
}
}
The running code:
const AppLoader = require('./appLoader');
const applications = new AppLoader();
applications.Add('application'); // adds the application created above
var app = new applications.List['application']();
app.greet();
// Change is made to the application file, .greet() now outputs "Hello World" instead of "Hello"
//do something to know it has to reload, either by fs.watch, or manual trigger
applications.Reload('application');
app.greet();
The expected behavior is:
Hello
Hello World
In reality, I'm getting:
Hello
Hello
If anyone can help me figure out a way to dynamically load in applications like this, but also remove/reload them during run-time, it would be greatly appreciated!
Edit: if there is a way to run my application code without the use of require that would allow a dynamic load/reload/remove, that is also a welcome solution
OK, thanks to @jfriend00 I realized I needed to fix something else with my code, so his comments can still be useful for other people. As for my issue of unloading required modules or reloading them without a server restart, I figured out a relatively elegant way of making it happen.
Let me start by showing you all my test class and app.js and I'll explain what I did and how it works.
Class.js:
"use strict";
class Class {
constructor() {
// after the first run, this.file will be commented out and this.Output = "Hey" uncommented, to change the source file on disk.
var date = new Date()
this.Output = date.getHours() + ":" + date.getMinutes() + ":" + date.getSeconds() + "." + date.getMilliseconds();
this.file = global.require.fs.readFileSync('./file.mov');
//this.Output = "Hey";
}
}
module.exports = Class;
app.js:
'use strict';
global.require = {
fs: require('fs')
};
const arr = [];
const mod = './Class.js'
let Class = [null];
Class[0] = require(mod);
let c = [];
c.push(new Class[0]());
console.log(c[0].Output);
console.log(process.memoryUsage());
setTimeout(() => {
delete require.cache[require.resolve(mod)];
delete Class[0];
Class[0] = require(mod);
console.log(Class)
delete c[0];
c[0] = new Class[0]();
console.log(c[0].Output);
console.log(process.memoryUsage());
}, 10000);
Now let me explain here for a bit, and mind you, this is testing code so the naming is just horrid.
This is how I went to work:
Step 1
I needed a way to separate required modules (like fs, or websocket, express, etc.) so they wouldn't be affected by the delete require.cache[...] part of the code; my solution was to make those globally required:
global.require = {
  fs: require('fs')
}
Step 2
Figure out a way to make sure the garbage collector removes the unloaded code. I achieved this by putting my requires and class declarations inside a variable, so that I could use JavaScript's delete operator on them. (I used let in my test code because I was testing another method beforehand; I haven't tested whether const would work here.)
I also made a variable that contains the path string for the file (in this case './Class.js', but in the explanation below I'll just write it in as-is):
let Class = [null] //this declares an array that has an index '0'
Class[0] = require('./Class');
let c = [new Class[0]()] // this declares an array that has the class instantiated inside of index '0'
As for the garbage collection, I'm simply able to do the following:
delete Class[0];
delete c[0];
After this I am able to redo the declaration of the required class, and subsequently the instance, and keep my code working without requiring a restart.
Keep in mind that this takes a lot of work to implement in an actual project, but you could split it up by adding an unload() method to a class to unload underlying custom classes. My initial testing shows that this works like a charm!
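To make the pattern reusable, the delete-and-re-require steps can be folded into a small helper; a sketch of that idea (the names are mine, not part of the original test code):
// drop the cached module and the old reference, then re-require
function reload(holder, index, modPath) {
  delete require.cache[require.resolve(modPath)];
  delete holder[index]; // release the old reference for the GC
  holder[index] = require(modPath);
  return holder[index];
}

// usage, mirroring the test code above:
// Class[0] = reload(Class, 0, mod);
// c[0] = new Class[0]();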
Edit: I feel obliged to note that without jfriend00's comments I'd never have figured out this solution.
Output
When the project starts, it outputs the current time and process.memoryUsage():
13:49:13.540
{ rss: 50343936,
heapTotal: 7061504,
heapUsed: 4270696,
external: 29814377 }
During the 10-second wait, I change Class.js so that it no longer reads file.mov and says "Hey" instead of the time; after the 10s timeout this is the output:
Hey
{ rss: 48439296,
heapTotal: 7585792,
heapUsed: 4435408,
external: 8680 }

Postman: how to set up a library of (semi-)complicated reusable scripts for a collection

Update
I've completely rewritten this question based on subsequent investigation. Hopefully this will generate some answers.
I'm new to Postman, and trying to figure out how to most efficiently build a collection of tests for a REST application. There are a bunch of utility functions that I'd like to have accessible in each of my test scripts, but cutting and pasting them into each test script seems like a horrible solution.
In looking at the various "scopes" that Postman gives you for squirreling away data (e.g. globals, environment, collection), it seems that all of these are merely string/number stores. In other words, they store things properly only if you can/do stringify them; they don't actually store proper objects or functions. This makes sense, since each script seems to run as a separate execution, so the idea of sharing pointers between different scripts doesn't apply.
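For plain data (as opposed to functions), this is easy to work around by stringifying; a quick sketch (the key name is just an example):
// store an object as a JSON string in one script...
pm.environment.set("cfg", JSON.stringify({ retries: 3 }));
// ...and parse it back in another
const cfg = JSON.parse(pm.environment.get("cfg"));
console.log(cfg.retries); // 3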
It seems like the accepted way to share utility functions is to toString() the function in the defining script (e.g. the Collection Pre-Req script), and then eval() that stringified version in the test script. For instance:
Collection Pre-Req Script
const utilFunc = () => { console.log("I am a utility function"); };
pm.environment.set("utilFunc",utilFunc.toString() );
Test Script
const utilFunc = eval(pm.environment.get("utilFunc"));
utilFunc();
The test script will successfully print to console "I am a utility function".
I've seen people do more complicated things where, if they have more than one utility function, they put them into an object like utils.func1 and utils.func2 and have the overall function return the utils object, so the test script still only needs a single line at the top to import the whole thing.
The problem I'm running into is scoping: since the literal text of the function is executed in the Test Script, everything the utility function needs must either be contained in that code or already exist at eval() time in the Test Script. For instance, if I do:
Collection Pre-Req Script
const baseUtilFunc = (foo) => { console.log(foo); };
const utilFunc1 = (param) => { baseUtilFunc("One: " + param); };
const utilFunc2 = (param) => { baseUtilFunc("Two: " + param); };
pm.environment.set("utilFunc1",utilFunc1.toString() );
pm.environment.set("utilFunc2",utilFunc2.toString() );
Test Script
const utilFunc1 = eval(pm.environment.get("utilFunc1"));
const utilFunc2 = eval(pm.environment.get("utilFunc2"));
utilFunc1("Test");
This fails because, in the Test Script, baseUtilFunc does not exist. Obviously, in this example, it'd be easy to fix. But in a more complicated world where the utility functions I expect to use in my Test Scripts are themselves built on top of underlying helper functions, it gets more difficult.
So what is the right way to handle this issue? Do people just cram all the relevant logic in to one big function that they then call toString() on? Do they embed an extraction-from-environment-and-then-eval in each util function within its definition, so that it works in the Test Script context? Do they export each individual method?
There are different ways to do it. What I did recently for one of my projects was to create a Git repository and use its raw URL to fetch the code. I have a sample at the repo below:
https://github.com/tarunlalwani/postman-utils
To load the file, you will need to add the code below at the collection level:
if (typeof pmutil == "undefined") {
var url = "https://raw.githubusercontent.com/tarunlalwani/postman-utils/master/pmutils.js";
if (pm.globals.has("pmutiljs"))
eval(pm.globals.get("pmutiljs"))
else {
console.log("pmutil not found. loading from " + url);
pm.sendRequest(url, function (err, res) {
eval(res.text());
pm.globals.set('pmutiljs', res.text())
});
}
}
Later, in the Tests or Pre-request scripts, run the line below to load it:
eval(pm.globals.get("pmutiljs"))
Then you can use the functions in your tests.
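To address the scoping problem from the question, the same approach combines well with bundling all helpers into a single factory that is stringified as one unit, so internal helpers stay in scope; a minimal sketch (all names here are examples):
// Collection Pre-request Script
const makeUtils = () => {
  const baseUtilFunc = (foo) => { console.log(foo); };
  return {
    func1: (param) => baseUtilFunc("One: " + param),
    func2: (param) => baseUtilFunc("Two: " + param),
  };
};
// store an expression that rebuilds the whole bundle when eval'd
pm.environment.set("utils", `(${makeUtils.toString()})()`);

// Test Script
const utils = eval(pm.environment.get("utils"));
utils.func1("Test"); // logs "One: Test" – baseUtilFunc is still in scope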

Calling object functions with variables

I'm building a simple Node.js websocket server, and I want a client to be able to send a request that the server just takes care of (nothing that could cause harm). Ideally the client passes the server an object with two properties: one naming the object and the other naming the function on that object to call. Something like this:
var callObject = {
'obj': 'testObject',
'func':'testFunc'
}
var testObject = {
func: function(){
alert('it worked');
}
}
// I would expect to be able to call it with something like:
console.log( window[callObject.obj] );
console.log( window[callObject.obj][callObject.func] );
I tried calling it with global (since Node.js doesn't have window; it uses global instead), but it won't work: it always tells me that it can't find callObject.func of undefined. If I call console.log on callObject.obj it shows the object's variable name as a string, as expected. If I run console.log on the object itself I get the object back.
I'm guessing this is something rather simple, but my Google-fu has failed me.
My recommendation is to resist that pattern and not have client code pick an arbitrary function to call. If you are not careful, you have built yourself a nice large security hole, especially if you are considering using eval.
Instead, have a more explicit mapping between data sent by the client and server code (similar to what routes in Express give you).
You might have something like this
const commands = {
  doSomething(param) { /* ... */ }
};
// Then you should be able to say:
let clientCommand = 'doSomething'; // from client
let param = 'payload'; // from client as well
commands[clientCommand](param);
This should be pretty close to what you want to achieve.
Just make sure doSomething validates any parameters passed in.
For two levels of indirection:
const commandMap = {
  room: { join() { /* ... */ } },
  chat: { add() { /* ... */ } }
};
// note this is ES6 syntax
let clientCmd = 'room';
let clientFn = 'join';
commandMap[clientCmd][clientFn]();
I think you might just have to find the right place to put the command map. Show your web socket handler code.
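In the meantime, here is a minimal sketch of that two-level dispatch inside a message handler, assuming the ws package and a JSON message shaped like the question's callObject (obj and func, plus an assumed params field):
const WebSocket = require('ws');

const commandMap = {
  room: { join(params) { /* ... */ } },
  chat: { add(params) { /* ... */ } }
};

const wss = new WebSocket.Server({ port: 8080 });
wss.on('connection', (socket) => {
  socket.on('message', (raw) => {
    let msg;
    try { msg = JSON.parse(raw); } catch (e) { return; } // ignore bad JSON
    const group = commandMap[msg.obj];
    const fn = group && group[msg.func];
    // only whitelisted commands are ever called
    if (typeof fn === 'function') fn(msg.params);
  });
});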
