How to handle foreign key constraint with Jest/Supertest/Knex/Postgres

I'm trying to start writing tests for my application. I'm using Jest & Supertest to run all of my tests. When I try and run my test suite, I'm getting an error regarding a foreign key constraint.
The error:
error: truncate "users" restart identity - cannot truncate a table referenced in a foreign key constraint
This is my server.spec.js file:
const request = require('supertest');
const server = require('./server.js');
const db = require('../data/db-config.js');

describe('server.js', () => {
  describe('POST /register', () => {
    beforeEach(async () => {
      await db("graphs").truncate();
      await db("users").truncate();
    });

    it('should return 201 created', async () => {
      const user = {
        name: "test",
        username: "test",
        email: "test77@test.com",
        password: "password"
      };

      const res = await request(server).post('/api/auth/register').send(user);
      expect(res.status).toBe(201);
    });
  });
});
And here is my knex migration file:
exports.up = function(knex) {
  return knex.schema
    .createTable('users', tbl => {
      tbl.increments();
      tbl.string('username', 255).notNullable();
      tbl.string('password', 255).notNullable();
      tbl.string('name', 255).notNullable();
      tbl.string('email', 255).unique().notNullable();
    })
    .createTable('graphs', tbl => {
      tbl.increments();
      tbl.string('graph_name', 255).notNullable();
      tbl.specificType('dataset', 'integer[]').notNullable();
      tbl
        .integer('user_id')
        .unsigned()
        .notNullable()
        .references('id')
        .inTable('users')
        .onDelete('CASCADE')
        .onUpdate('CASCADE');
    });
};

exports.down = function(knex) {
  return knex.schema
    .dropTableIfExists('graphs')
    .dropTableIfExists('users');
};
I came across this answer in my research: How to test tables linked with foreign keys?
I'm new to both Postgres as well as testing. It makes sense that I would need to drop the tables in the reverse order like I have in my migration. But when I try to truncate them in the beforeEach section of my test, it doesn't seem to matter what order the tables are listed in.
I'm not sure where exactly to go from here. Any help would be greatly appreciated.

I think the trick here will be to resort to a bit of knex.raw:
await db.raw('TRUNCATE graphs, users RESTART IDENTITY CASCADE');
CASCADE because you don't want foreign key constraints getting in the way, and RESTART IDENTITY because the default Postgres behaviour is not to reset sequences. See TRUNCATE.
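In your beforeEach, that could look like this (a sketch against the db import you already have):

beforeEach(async () => {
  // One statement truncates both tables and resets their id sequences;
  // CASCADE follows the foreign key from graphs back to users.
  await db.raw('TRUNCATE graphs, users RESTART IDENTITY CASCADE');
});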
While we're on a related subject, allow me to introduce something that might make your life a lot easier: template databases! Templates are databases that Postgres can use to very rapidly recreate a database from a known state (by copying it). This can be faster than even truncating tables, and it allows us to skip all the annoying foreign key stuff when testing.
For example, you could use raw to do the following:
DROP DATABASE testdb;
CREATE DATABASE testdb TEMPLATE testdb_template;
This is a surprisingly inexpensive operation, and is great for testing because you can begin with a known state (not necessarily an empty one) each time you do a test run. I guess the caveats are that your knexfile.js will need to specify a connection with a user sufficiently credentialled to create and delete databases (so maybe an 'admin' connection that knows about localhost only) and that the template must be created and maintained. See Template Databases for more.
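As a sketch, here is how that could be wired into a Jest globalSetup file (the admin connection settings and the testdb_template name are assumptions; adapt them to your setup):

// globalSetup.js
const knex = require('knex');

module.exports = async () => {
  // Connect to the maintenance database with a user allowed to create/drop databases.
  const admin = knex({
    client: 'pg',
    connection: { host: 'localhost', user: 'postgres', password: 'postgres', database: 'postgres' },
  });

  // Kick out any lingering connections, or DROP DATABASE will refuse to run.
  await admin.raw(
    "SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = 'testdb'"
  );
  await admin.raw('DROP DATABASE IF EXISTS testdb');
  await admin.raw('CREATE DATABASE testdb TEMPLATE testdb_template');

  await admin.destroy();
};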

Related

How to get users from Firestore database with a LIKE query

This function works fine with an exact string match, but I need to search for a substring: if I type "h" and the stored string is "hello", I need that document returned.
async getUsers(searchUser) {
  return firestore().collection('Users')
    .where('firstName', '==', searchUser)
    .limit(20)
    .get()
    .then(snapshot =>
      snapshot.docs.map(doc => ({ ...doc.data(), id: doc.id })));
}
I can give you an answer!
Firestore has no native substring or full-text search, so in that case you need a dedicated third-party search service; these services provide indexing and search capabilities beyond what Firestore queries offer.
With "Algolia", for example, your code could look like this:
const algoliasearch = require('algoliasearch');

const client = algoliasearch('YourApplicationID', 'YourSearchOnlyAPIKey');
const index = client.initIndex('firstName');

index.search(searchUser, {
  attributesToRetrieve: ['firstname', 'lastname' /* , ...etc, the fields you need */],
  hitsPerPage: 20, /* the page size */
}).then(({ hits }) => {
  console.log(hits); // the results you want
});
Just try it. I hope it helps!
When you register a new user in your app, you can store their username/first name in Firestore as an array that includes the possible ways you would search for that user. You can build that array by splitting the name string into prefixes.
Then you can query the users collection by searching that array with an array-contains filter, like this:
await usersCollection
  .where('searchOptions', 'array-contains', searchText)
  .get()
  .then(value =>
    value.docs.map(doc => User.fromSnapShot(doc)));
If you need more capabilities than that, you might need to use a 3rd party service, but this solution should be sufficient for your case.
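For illustration, here is a sketch of how that prefix array could be built at registration time (the searchOptions field name and the prefix scheme are assumptions, not part of the original answer):

// Build every prefix of the lowercased name: "hello" -> ["h", "he", "hel", "hell", "hello"]
function buildSearchOptions(name) {
  const lower = name.toLowerCase();
  const prefixes = [];
  for (let i = 1; i <= lower.length; i++) {
    prefixes.push(lower.slice(0, i));
  }
  return prefixes;
}

// At registration time, store the prefixes alongside the rest of the user document.
async function registerUser(usersCollection, userId, firstName) {
  await usersCollection.doc(userId).set({
    firstName,
    searchOptions: buildSearchOptions(firstName),
  });
}

A search for "h" then matches any user whose searchOptions array contains "h", which is exactly what the array-contains query above checks.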

How to isolate a Gunjs database?

I've been trying out GunJs for a couple of days now and I'm really enjoying it. As a starter project I've followed the Fireship chat dapp video aimed at building your own chat.
Here's the issue: now that I've finished the tutorial, I would like to create my own chat. However, for some reason, if I get a 'chat' node within my own app, it seems to pick up on the same 'chat' node as the tutorial one that is online.
onMount(() => {
  // Get messages in the large chat
  db.get('chat')
    .map()
    .once(async (data, id) => {
      if (data) {
        // key for E2E - to do: change for web3
        const key = '#foo';
        var message = {
          // transform the data
          who: await db.user(data).get('alias'),
          what: (await SEA.decrypt(data.what, key)) + '',
          when: GUN.state.is(data, 'what'),
        };
        if (message.what) {
          messages = [...messages.slice(-100), message];
        }
      }
    });
});
This is also the case if I change the encryption key (then the messages just become undefined). Multiple questions arise from this:
Are graph node names unique within the whole of GunDb?
How do you handle conflicts where two gun-based apps call on the same node name?
Is this problem generally solved through filtering using 'header' props?
How do I make it pick up on only my data?
Even though I've read most of the docs, there seems to be something I'm missing in my comprehension of how the graph is generally separated between apps. Any insight into how this works would be much appreciated.
Are graph node names unique within the whole of GunDb?
Yes.
How do you handle conflicts where two gun-based apps call on the same node name?
You don't. The expected result is that they will overwrite each other.
Is this problem generally solved through filtering using 'header' props?
I don't think that's the right way to do it.
How do I make it pick up on only my data?
Use your own relay server.
Conclusion:
GunDB doesn't really care about who fetches or puts the data. If you want to protect your data, use your own relay server (not a public one) and put the data in your user space. The user space is read-only to the public, but read/write for its owner.
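For illustration, a minimal sketch of both ideas (the relay URL and the app key are placeholders, not anything Gun prescribes):

import GUN from 'gun';
import 'gun/sea';

// Point the client at your own relay instead of a public one.
const db = GUN(['https://my-own-relay.example.com/gun']);

// Option 1: namespace your data under an app-specific root key,
// so a generic name like 'chat' no longer collides with other apps.
db.get('my-unique-app/chat').map().once((data, id) => console.log(id, data));

// Option 2: write into the authenticated user space - readable by
// everyone, but writable only by the owner of the keypair.
const user = db.user();
user.auth('alias', 'passphrase', () => {
  user.get('chat').set({ what: 'hello' });
});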

Trying to work with a POST request using Postman, Cosmos DB and Node.js

I am trying to learn the way APIs work. Here I am trying to get the POST method to work. I am using this code to create the document in the database:
app.post('/add', async (req, res) => {
  try {
    const data = require('./test.json');
    const newItemId = Math.floor(Math.random() * 1000 + 10).toString();
    data.id = newItemId;
    data.Partnership_Id = newItemId;

    // for testing purposes only
    let documentDefinition = {
      "id": newItemId,
      "name": "Angus MacGyver",
      "state": "Building stuff"
    };

    // Open a reference to the database
    const dbResponse = await cosmosClient.databases.createIfNotExists({
      id: databaseId
    });
    let database = dbResponse.database;
    const { container } = await database.containers.createIfNotExists({ id: containerId });

    // Add a new item to the container
    console.log("** Create item **");
    const createResponse = await container.items.create(data);
    res.redirect('/');
  } catch (error) {
    console.log(error);
    res.status(500).send("Error with database query: " + error.body);
  }
});
Here I am using test.json for the data input. I am making a fake id using newItemId for data.id and data.Partnership_Id.
With this approach, I can create a document in the database and can check it in Postman too, but there is nothing in the Body tab in Postman.
I am confused about this part: I feel like the data for the new document should be passed through the Postman body rather than me using newItemId for it.
This might be a silly question to ask, but I am trying to get my head around how APIs work and how to pass data to them.
IDs are almost always auto-generated on the backend (or at least should be) when creating a database resource, so what you have seems to be correct. I would recommend using a library like nanoid to generate the ids though, just to remove the potential for errors.
It is RESTful convention to return the created data, so in this case you would return the created document as JSON, and then redirect etc. on the front end (to ensure complete separation of backend and frontend - so you can, say, host them separately). Your approach also works fine, though.
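For example, a sketch of the handler reading the document from the request body and returning the created item (this assumes app.use(express.json()) is registered and reuses your cosmosClient, databaseId and containerId):

const { nanoid } = require('nanoid');

app.post('/add', async (req, res) => {
  try {
    // Take the document from the POST body (send it as raw JSON in Postman)
    // and generate the ids on the backend.
    const data = { ...req.body, id: nanoid() };
    data.Partnership_Id = data.id;

    const { database } = await cosmosClient.databases.createIfNotExists({ id: databaseId });
    const { container } = await database.containers.createIfNotExists({ id: containerId });

    const { resource } = await container.items.create(data);
    // RESTful convention: 201 Created plus the created document itself.
    res.status(201).json(resource);
  } catch (error) {
    console.log(error);
    res.status(500).send('Error with database query: ' + error.message);
  }
});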
My advice is to think of your backend and frontend as completely separate; I would have a project for each, personally. This way it is clearer how everything links together.

Node.js pg-promise Azure function to write to Postgres (timescaleDB)

Azure is driving me mad again. What I am trying to achieve is that data coming in through an Event Hub gets written to the database. What I have working so far is that the data arrives at the Event Hub and that the Azure Function is able to post data to the database. I would prefer to do this with Node.js, as the integration seems quite nice in Azure. The script I use to send some bogus data to the database is as follows:
module.exports = async function (context, eventHubMessages) {
  const initOptions = {
    query(e) { context.log(e.query); },
    capSQL: true // capitalize all generated SQL
  };
  const pgp = require('pg-promise')(initOptions);
  const db = pgp({
    host: '####',
    user: '####',
    password: '####',
    database: 'iotdemo',
    port: 5432,
    ssl: true
  });

  // our set of columns, to be created only once (statically), and then reused,
  // to let it cache up its formatting templates for high performance:
  const cs = new pgp.helpers.ColumnSet(['customer', 'tag', 'value', 'period'], { table: 'testtable' });

  // generating a multi-row insert query:
  const query = pgp.helpers.insert(JSON.parse(eventHubMessages), cs);
  //=> INSERT INTO "testtable"("customer","tag","value","period") VALUES(...),(...)

  // executing the query (awaited, so errors propagate to the function runtime):
  await db.none(query);
};
And yes, this is a snippet from somewhere else. The 'eventHubMessages' should contain the payload. A couple of issues that I have had thus far are:
I can send a payload defined within the script, or when giving it a testing payload, but I can't send the payload of the actual message.
The function returns a 202 regardless of whether the insert fails or not, so debugging is 'blind' at the moment. Any tips on how to get proper logging would be much appreciated.
I used 'capture events' in the Event Hub instance to capture the actual messages. These were stored in a blob storage. I noticed that the format is Avro. Do I need to peel away at that object to get to the actual array?
The input should look something like this:
[{"customer": duderino, "tag": nice_rug, "value": 10, "period": 163249839}]
I think I have 2 issues:
I don't know how to get meaningful logging out of the Azure Function using Node.js.
Something is off about how my payload is coming in.
A deeper question is: how do I know whether the Azure Function is getting the data that it should? I know that the Event Grid gets the data, but there is no throughput. Namespaces are consistent, and the Azure Function should be triggered by that namespace and get the input as a string.
I am seriously lost and out of my depth. Apart from the solution I would also appreciate feedback on my request. I am not a pro on StackOverflow and don't want to waste your time.
Regards
Ok, so after some digging I found a few things that resolved the issue. First of all, I was receiving the payload as a string, meaning that I needed to parse it first before I could use it as an object. In terms of code it's simple, and part of the base functions of Node.js:
var parsed_payload = JSON.parse(payload_that_is_a_string);
Lastly, to get meaningful logging, I found that the pg-promise module has great support for that, and that it can be configured when loading the module itself. I was particularly interested in errors, so I enabled that option like so:
const initOptions = {
  query(e) { console.log(e.query); },
  capSQL: true, // capitalize all generated SQL
  error: function (error, e) {
    if (e.cn) {
      // A connection-related error:
      console.log("DC:", e.cn);
      console.log("EVENT:", error.message);
    }
  }
};
That then can be used as a settings object for loading PG-Promise:
const pgp = require('pg-promise')(initOptions);
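Putting the two fixes together, the function body then looks roughly like this (a sketch that assumes pgp and db are created as in the snippet above, with the column set from the question):

module.exports = async function (context, eventHubMessages) {
  // The Event Hub payload arrives as a string: parse it into an array first.
  const rows = JSON.parse(eventHubMessages);

  const cs = new pgp.helpers.ColumnSet(
    ['customer', 'tag', 'value', 'period'],
    { table: 'testtable' }
  );
  const query = pgp.helpers.insert(rows, cs);

  try {
    // Awaiting the insert makes failures surface in the function's own logs
    // instead of being swallowed by a dangling promise.
    await db.none(query);
  } catch (err) {
    context.log.error('Insert failed:', err.message);
    throw err; // let the Functions runtime mark the invocation as failed
  }
};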
Thanks for considering my request for help. I hope this proves useful for anyone out there!
Regards, Pieter

How, using React JS (web) and Firestore, can you find out when a chat room (in the Firestore database) receives new messages?

I am trying to build an app using FireStore and React JS (Web)
My Firestore database basically has:
A collection of chat rooms, ChatRooms
Every chat room has many messages in a messages subcollection, for example:
this.db.collection("ChatRooms").doc(phone-number-here).collection("messages")
Also, every chat room has some client info like first name, last name etc., and one field that's very important:
lastVisited, which is a timestamp (or "firestamp", whatever)
I figured I would add a React hook that updates the lastVisited field every second, to record on Firestore, as accurately as possible, the last time I left a chat room.
Based on that, I want to retrieve, for every customer (chat room), all the messages that came in after the last visit (the lastVisited field), and show a notification. :)
I have tried an .onSnapshot listener on the messages subcollection and a combination of Firestore transactions, but I haven't had any luck. My app is buggy: it keeps showing two, then one, then nothing, back to two, etc., and I am suffering a lot.
Here's my code!
Please I appreciate ANY help!!!
unread_messages = currentUser => {
  const chatRoomsQuery = this.db.collection("ChatRooms");
  // const messagesQuery = this.db.collection("ChatRooms");
  return chatRoomsQuery.get().then(snapshot => {
    return snapshot.forEach(chatRoom => {
      const mess = chatRoomsQuery
        .doc(chatRoom.id)
        .collection("messages")
        .where("from", "==", chatRoom.id)
        .orderBy("firestamp", "desc")
        .limit(5);
      // the limit of the messages could change to 10 in production
      return mess.onSnapshot(snapshot => {
        console.log("snapshot SIZE: ", snapshot.size);
        return snapshot.forEach(message => {
          // console.log(message.data());
          const chatRef = this.db
            .collection("ChatRooms")
            .doc(message.data().from);
          // run transaction
          return this.db
            .runTransaction(transaction => {
              return transaction.get(chatRef).then(doc => {
                // console.log("currentUser: ", currentUser);
                // console.log("doc: ", doc.data());
                if (!doc.exists) return;
                if (
                  currentUser !== null &&
                  message.data().from === currentUser.phone
                ) {
                  // then update it
                  transaction.update(chatRef, {
                    unread_messages: []
                  });
                } else if (
                  new Date(message.data().timestamp).getTime() >
                  new Date(doc.data().lastVisited).getTime()
                ) {
                  console.log("THIS IS/ARE THE ONES:", message.data());
                  // newMessages.push(message.data().customer_response);
                  // then update it
                  transaction.update(chatRef, {
                    unread_messages: Array.from(
                      new Set([
                        ...doc.data().unread_messages,
                        message.data().customer_response
                      ])
                    )
                  });
                }
              });
            })
            .then(function() {
              console.log("Transaction successfully committed!");
            })
            .catch(function(error) {
              console.log("Transaction failed: ", error);
            });
        });
      });
    });
  });
};
Searching about it, it seems that the best option for achieving that comparison would be to convert your timestamps to milliseconds using the toMillis() method - more information on the method can be found in the official documentation here. This way, you should be able to compare the timestamps of the last message and of the last access more easily.
I believe this would be your best option, as it's mentioned in this Community post here that this is essentially the only way to compare timestamps on Firestore - there is a method called isEqual(), but it doesn't make sense for your use case.
I would recommend you give this a try for comparing the timestamps in your application. Besides that, there is another question from the Community - accessible here: How to compare firebase timestamps? - where the user has a similar use case and purpose to yours, which I believe might give you some ideas and thoughts as well.
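For instance, assuming both fields are Firestore Timestamp objects, the comparison inside the transaction could look like this (field names taken from your question):

// Convert both Timestamps to epoch milliseconds before comparing.
const lastVisitedMs = doc.data().lastVisited.toMillis();
const messageMs = message.data().firestamp.toMillis();

if (messageMs > lastVisitedMs) {
  // The message arrived after the user last left the room: treat it as unread.
}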
Let me know if the information helped you!
