Writing unit tests for a module that creates JSON REST services - javascript

I recently finished up https://github.com/mercmobily/JsonRestStores. I feel a little uneasy as I haven't yet written any unit tests.
The module is tricky at best to test: it allows you to create JSON REST stores, AND interact with the store using the API directly.
So, the unit test should:
Start a web server that implements a number of stores. Ideally, I suppose I should have one store for each tested feature.
Test the results while manipulating that store, both using HTTP calls and direct API calls
The problem is that each store can override a lot of functions. To make things more complicated, the store has a range of database drivers it can use (well, potentially -- at the moment I only have the MongoDB driver). So, wanting to test the module with MongoDB, I would have to first create a collection, and then test things using each DB layer...
I mean, it would be a pretty epic task. Can anybody shed some light on how to make something like this simpler? It seems to have all of the ingredients of the unit testing from hell (API calls, direct calls, a database, different configurable DB drivers, a highly configurable class which encourages method overriding...)
Help?

You can write the unit tests first instead of starting with system tests.
When you add unit tests, you will need to learn about mocking.
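As a rough illustration of the mocking idea (not tied to JsonRestStores' real API -- InMemoryDriver, createStore and fetchById below are hypothetical names, and the test uses Mocha-style describe/it), you can swap the MongoDB layer for an in-memory stub so each store feature can be exercised without a running database:

const assert = require('assert');

// In-memory stand-in for the DB layer: same surface the store expects.
function InMemoryDriver() {
  const rows = {};
  return {
    insert: async (doc) => { rows[doc.id] = doc; return doc; },
    findById: async (id) => rows[id] || null,
  };
}

describe('people store (DB layer mocked)', function () {
  it('returns what was inserted', async function () {
    const store = createStore({ driver: InMemoryDriver() }); // hypothetical factory
    await store.insert({ id: '1', name: 'Tony' });
    const person = await store.fetchById('1');               // hypothetical method
    assert.strictEqual(person.name, 'Tony');
  });
});

The HTTP side can be exercised the same way, by pointing a client such as supertest at a server wired to the stubbed driver.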

Related

What are "use-cases" in the Clean Architecture?

I am trying to implement the Clean Architecture structure in an app that I am developing and I am having a hard time figuring out exactly what is what.
For example, if I am right, the entities of my application are Employee, Department, and EmployeeSkill. The entities also include all of the "validation" logic, to ensure that these entities are valid.
And the use-cases are the various actions that I can do with these entities?
For example, use-cases about the Employee:
add-employee.js
remove-employee-by-id.js
update-employee-department.js
update-employee-phone-number.js
...and-more-employee-updates.js
Are these all actually use-cases?
Now, the add and remove I don't think have much to discuss, but what about the updates? Should they be granular like this?
Also, with such an architecture, doesn't that mean that, if I want to update both the employee's department and phone number at the same time, I will have to make two separate calls to the database for something that can be done with one, because the database adapter is injected into the use-case, and every use-case starts with "finding" the entity in the database?
Defer thinking about the entities for now. Often, you get stuck trying to abstract the code after your mental model of the world, and that is not as helpful as we are led to believe.
Instead, couple code that changes together for one reason into use-cases. A good start can be one for each CRUD operation in the GUI. What gets handled by a new method, a new parameter, or a new class, etc., is not part of the CA pattern; that is part of the normal tradeoffs you face when you write code.
I can't see any entities in your example. In my code base, ContactCard (I work on a yellow-pages-in-2021 kind of app) and UserContext (security) are the only entities; these two things are used all over the place.
Other things are just data holders and not really an entity. I have many duplicates of the data holders so that things that are not coupled, stay uncoupled.
Your repository should probably implement the bridge pattern. That means that the business logic defines a bridge that some repository implements. The use-case is not aware of database tables, so it does not have any granular requirements (think of it as ordering food at McDonald's: you won't say "from the grill I want xxx, and from the fryer I want yyy").
The use-case is very demanding in the bridge definition. So much so that many repositories end up having API layers that import and manage the implementation of the bridge, and then adapt it to the internal logic.
This is the difference between API layers in business apps and most B2C APIs. An enterprise API for a use-case is just what the use-case needs.
If you have already constrained yourself by a made-up model of the world, and decided to split repos along that instead of splitting them per use-case, then you end up with poor alignment. Having the same SQL query, or parts of it, in more than one repository is not an issue. Over time, the queries often end up looking different even if they start out very similar.
I would call your example use-case UpdatePhoneNumberEverywhere. And then the UpdatePhoneNumberEverywhereRepository implementation can do whatever the heck it wants to; that is a detail. The use-case does not care.
Another one I might do is UpdatePhoneNumber, where the use-case accepts a strategy: Strategy.CASCADE or Strategy.LEAF, etc.
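A rough sketch of that shape in plain JavaScript (the repository methods and the Strategy values are hypothetical, not a prescribed Clean Architecture implementation):

const Strategy = { CASCADE: 'CASCADE', LEAF: 'LEAF' };

// The use-case only knows the bridge it defines, never tables or SQL.
function makeUpdatePhoneNumber({ phoneNumberRepository }) {
  return async function updatePhoneNumber({ employeeId, phoneNumber, strategy }) {
    if (!/^[+0-9 -]+$/.test(phoneNumber)) {
      throw new Error('invalid phone number');
    }
    if (strategy === Strategy.CASCADE) {
      return phoneNumberRepository.updateEverywhere(employeeId, phoneNumber);
    }
    return phoneNumberRepository.updateOne(employeeId, phoneNumber);
  };
}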
In terms of your table design, even though it's a detail, Contacts probably deserves to be broken out.
Every use-case does not start with finding something from the database. Commands and queries (or whatever you call them) are passed in, and the use-case does something useful.
The most practical way to write a use-case is to implement exactly what you need for the business requirement and write all the tests against the use-case's public API. Just pass in data as a start; a dictionary is often fine.
Entities are often found later, when you cannot stand to have so many versions of something, you just need that something to be stable and the same all over, and your end users expect as much too. Then, just refactor.
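For example, testing the sketch above through its public API only, with a plain object standing in for the repository (Jest-style assertions assumed):

test('cascading update goes through the repository once', async () => {
  const calls = [];
  const fakeRepo = {
    updateEverywhere: async (id, phone) => { calls.push([id, phone]); },
    updateOne: async () => { throw new Error('should not be called'); },
  };

  const updatePhoneNumber = makeUpdatePhoneNumber({ phoneNumberRepository: fakeRepo });
  await updatePhoneNumber({ employeeId: '42', phoneNumber: '+46 70 123', strategy: 'CASCADE' });

  expect(calls).toEqual([['42', '+46 70 123']]);
});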

How to share an object between multiple test suites in Jest?

I need to share an object (db connection pool) between multiple test suites in Jest.
I have read about globalSetup and it seems to be the right place to initialize this object, but how can I pass that object from globalSetup context to each test suite context?
I'm dealing with a very similar issue, although in my case I was trying to share a single DB container (I'm using dockerode) that I spin up once before all tests and remove afterwards, to avoid the overhead of spinning up a container for each suite.
Unfortunately for us, after going over a lot of documentation and GitHub issues, my conclusion is that this is by design: Jest's philosophy is to run tests sandboxed, and under that restriction I totally get why they chose not to support this.
Specifically, for my use case I ended up spinning up a container in a globalSetup script, tagging it with some unique identifier for the current test run (say, a timestamp) and removing it at the end with a globalTeardown (that works since the global steps can share state between them).
This looks roughly like:
const options: ContainerCreateOptions = {
  Image: POSTGRES_IMAGE,
  Tty: false,
  HostConfig: {
    AutoRemove: true,
    PortBindings: { '5432/tcp': [{ HostPort: `${randomPort}/tcp` }] }
  },
  Env: [
    `POSTGRES_DB=${this.env['DB_NAME']}`,
    `POSTGRES_USER=${this.env['DB_USERNAME']}`,
    `POSTGRES_PASSWORD=${this.env['DB_PASSWORD']}`
  ]
};
options.Labels = { testContainer: `${CURRENT_TEST_TIMESTAMP}` };

let container = await this.docker.createContainer(options);
await container.start();
where this.env in my case is a .env file I preload based on the test.
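The matching globalTeardown can then look up containers by that label and stop them (with AutoRemove set, stopping also deletes them). A rough sketch, assuming dockerode and that the run identifier is available to the teardown -- here read from an environment variable, which is an assumption:

// globalTeardown.js -- stop every container tagged for this test run
const Docker = require('dockerode');

module.exports = async function globalTeardown() {
  const docker = new Docker();
  const containers = await docker.listContainers({
    all: true,
    filters: JSON.stringify({ label: [`testContainer=${process.env.CURRENT_TEST_TIMESTAMP}`] }),
  });
  await Promise.all(containers.map((c) => docker.getContainer(c.Id).stop()));
};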
To get the container port/IP (or whatever else I'm interested in) for my tests, I use a custom environment that my DB-requiring tests use to expose the relevant information on a global variable (reminder: you can only put primitives and objects on global, not instances).
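A minimal sketch of such a custom environment (the file name, the /tmp JSON hand-off and the __DB_* global names are all hypothetical; on newer Jest versions the base class is exposed as the TestEnvironment export of jest-environment-node):

// db-environment.js
const NodeEnvironment = require('jest-environment-node');
const fs = require('fs');

class DbEnvironment extends NodeEnvironment {
  async setup() {
    await super.setup();
    // globalSetup is assumed to have written the container info to this file
    const info = JSON.parse(fs.readFileSync('/tmp/test-db.json', 'utf8'));
    this.global.__DB_HOST__ = info.host; // primitives only, not instances
    this.global.__DB_PORT__ = info.port;
  }
}

module.exports = DbEnvironment;

Tests then opt into it with the @jest-environment docblock or the testEnvironment config option.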
In your case, the requirement to pass a connection pool between suites is probably just not suited for jest since it will never let you pass around an instance between different suites (at least that's my understanding based on all of the links I shared). You can, however, try either:
Put all tests that need the same connection pool into a single suite/file (not great but would answer your needs)
Do something similar to what I suggested (set the DB up once in globalSetup, construct the connection pool in a custom environment, and then at least each suite gets its own pool); that way you have a lot of code reuse for setup, which is nice. Something similar is discussed here.
Miscellaneous: I stumbled across testdeck, which might answer your needs as well (set up an abstract test class with your connection pool and inherit it in all required tests). I didn't try it, so I don't know how it will behave with Jest, but when I was writing Java that's how we used to achieve similar functionality with TestNG.
In each test file you can define a function to load the necessary data or to mock the data. Use the beforeEach() function for this. There is a documentation example here!
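For instance, combined with the custom-environment approach above (assuming the pg library and the hypothetical __DB_PORT__ global):

const { Pool } = require('pg');

let pool;

beforeEach(() => {
  // fresh pool per test, pointed at the shared test database
  pool = new Pool({ host: 'localhost', port: global.__DB_PORT__ });
});

afterEach(async () => {
  await pool.end();
});

test('can reach the test database', async () => {
  const { rows } = await pool.query('SELECT 1 AS ok');
  expect(rows[0].ok).toBe(1);
});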

Creating Cycle.js reusable modules

Let's imagine, in an OO world, that I want to build a Torrent object which listens to the network and lets me interact with it. It would inherit from an EventEmitter and would look something like this:
var torrent = new Torrent(opts)
torrent.on('ready', cb) // add torrent to the UI
torrent.on('metadata', cb) // update data in the UI
and I can also make it do things:
torrent.stop()
torrent.resume()
Then of course, if I want to delete the torrent from memory, I can call torrent.destroy().
The cool thing about this OO approach is that I can easily package this functionality in its own npm module, test the hell out of it, and give users a nice clean reusable API.
My question is, how do I achieve this with Cycle.js apps?
If I create a driver it's unclear how I would go about creating many torrents and having their own independent listeners. Also consider I'd like to package functionality in a way that others get to easily reuse it in other Cycle.js apps.
It seems to me that you are trying to solve a problem thinking about it as you would write "imperative code".
I think creating Torrent instances with their own listeners is not something you should be using in cycle components.
I would go about it differently - creating Torrent module and figuring out what would be its sources and sinks. If this module should be reusable and published, you can create it as a function that would receive streams as arguments. Maybe something similar to TodoMVC Task component (which is then used in its parent component).
Since this module can be created as a pure function, testing it should be at least just as easy.
This implementation of course depends on your requirements, but communication with the module would then be done only with streams, and since it would be declarative there would be no need for methods like stop() and destroy() which you would call from elsewhere.
How do I test it?
In Cycle.js you'd write a component with intent, model and view functions.
You'd test that intent(), for given input streams, produces the streams of actions that you want. For model, you'd test that, given HTTP and action streams, you get the state you want; and for view, you'd test that, given a state, you get the VDOM you want.
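A small sketch of the intent() half of that, using xstream and a hand-made fake DOM source (the .stop selector and the action shape are hypothetical):

const xs = require('xstream').default;
const assert = require('assert');

// Hypothetical intent: clicks on the .stop button become STOP actions.
function intent(domSource) {
  return {
    stop$: domSource.select('.stop').events('click').mapTo({ type: 'STOP' }),
  };
}

// Fake DOM source that only implements the bits intent() touches.
const fakeDOM = {
  select: () => ({ events: () => xs.of({ /* fake click event */ }) }),
};

intent(fakeDOM).stop$.addListener({
  next: (action) => assert.deepStrictEqual(action, { type: 'STOP' }),
});

model() and view() can be tested the same way: feed in plain streams or a plain state, assert on what comes out.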
One tricky bit with Cycle.js is that, since it passes functions around, normal JavaScript objects that use the 'this' keyword are not worth the trouble due to 'this' context problems. If you are working with Cycle.js and you think you might write a JS class for use with Isolate, Onionify, or Collections, most likely you are going in the wrong direction. See the MDN docs about 'this'.
how I would go about creating many torrents
The Cycle.js people have several ways to deal with groups of things like this.
This ticket describes some things that might work for that:
Wrap subapp in Web Component
Stanga and similar libraries.
Cycle Collections
Cycle Onionify

Isn't it dangerous to have query information in javascript using breezejs?

I'm just starting to play with Breeze.js because of the obvious gains in coding time, i.e. managing to access model data from the server directly within JavaScript (I am a newbie here, so obviously bear with me!).
In the past I have used the stock ajax calls to get/post data to the server, and I have used a few different client tools in the past to provide some help in querying local data, such as jLinq.
My question is this: isn't it dangerous to have essentially full model query access in JavaScript? I must be missing something, because it looks like a really well-thought-through tool. In the past I have at least controlled what can be sent to the client via the backend query process, and again, using something like jLinq etc. I could filter the data. I can also understand the trade-off, perhaps, between gaining direct querying and not duplicating the local model, so could anyone provide some insight into this?
Thanks!
EDIT
Obviously I am not the only one; however, I am guessing there is a reasonable response - maybe limiting the data being requested using DTO methods or something? The other question posted is here.
It can be dangerous to expose the full business model. It can be dangerous to allow unrestrained querying of even that part of the model that you want to expose to the client. This is true whether you offer an easy-to-query API or one that is difficult to query.
That's why our teams are careful about how we construct our services.
You should only expose types that your client app needs. If you want to limit access to authorized instances of a type, you can write carefully prescribed non-queryable service methods. Breeze can call them just fine. You don't have to use the Breeze query facilities for every request. You'll still benefit from caching, related-entity navigation, change tracking, validation, save bundling, cache querying, and offline support.
Repeat: your service methods don't have to return IQueryable. Even when they do return IQueryable, you can easily write the service method to constrain the query results to just those entities the user is authorized to see.
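From the client, calling such a constrained method looks just like any other query, assuming the standard Breeze client API (the endpoint name below is hypothetical; the server decides what the caller may see):

var manager = new breeze.EntityManager('api/myservice');

// 'EmployeesForCurrentUser' is a non-queryable, server-constrained method.
var query = breeze.EntityQuery.from('EmployeesForCurrentUser');

manager.executeQuery(query).then(
  function (data) { console.log(data.results); },
  function (error) { console.error(error); }
);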
Fortunately, you can blend the two approaches in the same service or in collaborating services.
Breeze gives you choices. It's up to you to exercise those choices wisely. Go out there and design your services to fit your requirements.
Breeze isn't meant to be your business logic in that sense. Keeping in mind the rule of thumb that if you do something in Javascript, anyone can do it, you ought to be restricting the visibility of your own service data as needed.
In other words, it's useful for you if you meant to make the data publicly visible anyway. But only expose the entities that you're happy exposing and allowing anyone to query; another way to look at it is that your API becomes a public API for your website (but not one you advertise and tell everyone to use).
I am personally not a fan of doing things this way, as it creates a dependency on the schema of the backend implementation. If I want to make changes to my database tables, I now have to take my JavaScript into consideration. I also lose out in terms of integration and unit testing.
However, it can have its uses if you want to quickly build a website feature on non-sensitive data without having to build the service methods and various layers of implementation of it.
What about when you expose the metadata? Isn't that considered dangerous? IMHO it is not safe to expose metadata from the DbContext. I know you can construct metadata on the client, but the point is to do things as quickly as possible (if safe).

mongoose vs mongodb (nodejs modules/extensions), which better? and why?

I've just arrived at Node.js and see that there are many libs to use with MongoDB; the most popular seem to be these two: mongoose and mongodb. Can I get the pros and cons of these extensions? Are there better alternatives to these two?
Edit: Found a new library that seems also interesting node-mongolian and is "Mongolian DeadBeef is an awesome Mongo DB node.js driver that attempts to closely approximate the mongodb shell." (readme.md)
https://github.com/marcello3d/node-mongolian
This is just to add more resources for new people that view this; basically, Mongolian is like an ODM...
Mongoose is higher level and uses the MongoDB driver (it's a dependency, check the package.json), so you'll be using that either way given those options. The question you should be asking yourself is, "Do I want to use the raw driver, or do I need an object-document modeling tool?" If you're looking for an object modeling (ODM, a counterpart to ORMs from the SQL world) tool to skip some lower level work, you want Mongoose.
If you want a driver, because you intend to break a lot of rules that an ODM might enforce, go with MongoDB. If you want a fast driver, and can live with some missing features, give Mongolian DeadBeef a try: https://github.com/marcello3d/node-mongolian
Mongoose is, by far, the most popular. I use it, and have not used others. So I can't speak about the others, but I can tell you my gripes with Mongoose.
Difficult / poor documentation
Models are used, and they define the structure of your documents. Yet this seems odd for Mongo, where one of its advantages is that you can throw in a column (err, attribute?) or simply not add one.
Models are case sensitive - other devs I work with and I have had issues where the case of the collection name that the model is defined with can cause it to not save anything, without error. We have found that using all-lowercase names works best. E.g. instead of doing something like mongooseInstance.model('MyCollection', { "_id": Number, "xyz": String }), it's better to do (even though the collection name is really MyCollection): mongooseInstance.model('mycollection', { "_id": Number, "xyz": String })
But honestly, it's really useful. The biggest issue is the documentation. It's there, but it's dry and hard to find what you need. It could use better explanations and more examples. But once you get past these things it works really really well.
I'm building a new app and designing its structure now; here are some thoughts about why to use, or not to use, Mongoose:
Mongoose will be slower (for big apps)
Mongoose is harder with more complicated queries
There will be situations when you want more speed and you will choose to go without Mongoose; then you will have half your queries with Mongoose and half without. That's a crazy situation; I've had it once..
Mongoose will make you code faster with simple apps with simple db structure
Mongoose will make you read mongodb docs AND mongoose docs
With mongoose your stack will get one more thing to depend on and it's one more possibility to crash and burn to ashes.
The mongodb driver is the raw driver; you communicate directly with MongoDB.
Mongoose is an abstraction layer. You get easier I/O to the db, as long as your db structure is simple enough.
Abstraction brings its own requirements, and you have to follow those. Your app will be slower, eat more RAM and be more complicated, but if you know how to use it, you can write simple objects faster and save them to the database.
Without Mongoose you will have a faster application with a direct connection to MongoDB. No one says that you can't write your own models to save stuff to the db. You can. And I think it's easier. You write the code you will use, and you know what you need. Your abstraction layer will be way smaller than Mongoose's.
I'm coming from the PHP world; there we had raw SQL with the deprecated mysql_ functions, then we got PDO, an object-oriented abstraction layer to communicate with SQL. Or you can choose some heavy ORM like Doctrine to have something similar to Mongoose on MongoDB: objects with setters/getters/save methods and so on. That's fine, but by adding more abstraction you are adding more files, more logic, more documentation, more dependencies. I like to keep stuff simple and have fewer dependencies in my stack. BTW, that was why I moved from PHP to server-client JavaScript in the first place..
Mongoose, I think, is great for writing some simple apps that have a simple db structure similar to SQL. When you start having subdocuments and want to make all those crazy queries, I found it really hard with Mongoose. You have to look at the MongoDB docs, then look at the Mongoose docs, to find out how to make the query you want. Sometimes you will find that feature X of MongoDB is not in Mongoose, so you go down to the raw MongoDB driver and write raw MongoDB queries in one place or another. Without Mongoose, you look at the MongoDB docs and do your query.
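To make the contrast concrete, a rough side-by-side sketch (the connection strings and the Cat model are just illustrative; APIs as in recent versions of each library):

// Mongoose: schema/model layer on top of the driver.
const mongoose = require('mongoose');
const Cat = mongoose.model('Cat', new mongoose.Schema({ name: String }));

async function withMongoose() {
  await mongoose.connect('mongodb://localhost/test');
  await new Cat({ name: 'Zildjian' }).save();
}

// Raw driver: no schema, you talk to the collection directly.
const { MongoClient } = require('mongodb');

async function withDriver() {
  const client = await MongoClient.connect('mongodb://localhost:27017');
  await client.db('test').collection('cats').insertOne({ name: 'Zildjian' });
  await client.close();
}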
I have only used mongodb. In my personal opinion, I would recommend starting off with something low level and then moving up. Otherwise you may find yourself using the additional advanced features provided by higher-level drivers like Mongoose to no actual benefit.
The problem I have had with mongodb, which is endemic to Node.js, is the poor documentation. There is documentation, and a lot of it, but it isn't always the most helpful. From what I have seen so far, there are no good and thorough examples of production usage of the driver. The documentation is filled with the same templated example of opening a connection, issuing a command and closing the connection. You can tell it's copied and pasted from a template because every example includes requires for everything that might be needed rather than only what is needed for each example.
To give an example taken entirely at random:
raw {Boolean, default:false}, perform operations using raw bson buffers.
What exactly does "perform operations using raw bson buffers" do? I can't find it explained anywhere, and a Google search for that phrase doesn't help. Perhaps I could Google further, but I shouldn't have to; the information should be there. Are there any performance, stability, integrity, compatibility, portability or functional advantages to enabling/disabling this option? I really have no idea without diving deeply into the code, and if you're in my boat that's a serious problem. I have a daemon where perfect persistence isn't required, but the program needs to be very stable at runtime. I could assume this means that it expects me to deserialize and serialize to JSON, or that it is something low level, internal and transparent to the user, but I could be wrong. Although I tend to make good assumptions, I can't rely on assumption and guesswork when building vital systems. So here I can either test my assertion with code or dig much deeper into Google or their code. As a one-off this isn't so bad, but I find myself in this situation many times when reading their documentation. The difference can mean days spent on a task versus hours. I need confirmation, and the documentation barely gives me an explanation, let alone confirmation.
The documentation is rushed. It doesn't explain events, gives vague details about when errors are thrown or the nature of those errors, and there are often several ways to accomplish connectivity, which can be unclear. You can get by, and it's not completely useless, but it is very rough around the edges. You'll find some things are left to guesswork and experimentation.
