Clear the Session Store

I am currently developing a Node.js application. I am using connect-mongo with Express to handle my session store. I am also using Mongoose for other database operations.
However, when I restart my server to test new functionality, the old session data is still there. This leads to data inconsistencies and some tricky bugs.
So when the server first starts up, I would like to be able to clear out all data from the sessions collection in my Mongo database.
I know that, using Mongoose, I can do something like this to clear out a collection:
User.remove({}, function (error) {
  console.log('Emptied user collection');
});
However, I don't know how I can get a reference to a collection without declaring its schema.
Does anyone know how I can make this work? Or is there some entirely different approach I should be taking to handle all of this?

OK, so I think I found some solutions to my problem.
It is actually quite simple to get a reference to a collection without declaring its schema. Not sure how I hadn't found this in my research before I asked this question... So here is the complete solution to what I had been asking about:
mongoose.connection.db.collection('sessions', function (error, collection) {
  if (error) {
    console.error('Problem retrieving sessions collection:', error);
  } else {
    collection.remove({}, function (error) {
      if (error) {
        console.error('Problem emptying sessions collection:', error);
      } else {
        console.log('Emptied sessions collection');
      }
    });
  }
});
However, in my case, I actually wanted to delete all data from the database, and was trying to do this one collection at a time (because I knew how to clear out a collection). Instead, I just needed to use db.dropDatabase().
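For reference, a minimal sketch of that dropDatabase() approach on startup (the once('open') wiring is my assumption about how the connection is opened; note that this removes everything in the database, not just sessions):
var mongoose = require('mongoose');

// Assumes mongoose.connect(...) is called elsewhere during startup.
mongoose.connection.once('open', function () {
  // WARNING: this drops every collection, not just sessions.
  mongoose.connection.db.dropDatabase(function (error) {
    if (error) {
      console.error('Problem dropping database:', error);
    } else {
      console.log('Dropped database; starting with a clean slate');
    }
  });
});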

Related

Using global variable as database cache?

The site runs on nodejs + socketio + mysql.
Is it normal to create a global object just before starting my app to store everything I have in the database? For example, users' password hashes for a very quick authentication process: compare the given token + user id.
var GS = {
  users: {
    user1:    { token: "Djaskdjaklsdjklasjd" },
    user555:  { token: "zxczxczxczxc" },
    user1239: { token: "ertertertertertret" }
  }
};
On connect, node checks the user with the given user_id.
if (GS.hasOwnProperty("user" + user_id)) {
  // compare the given token with GS["user" + user_id].token
} else {
  // unknown id: fetch it from the database, then store it in GS
  GS["user" + user_id] = { token: database_result };
}
And it is the same thing with everything else: using an object property instead of querying the database. So if someone goes to the url /gameinfo/id/1, I just look in the variable GS["game"+url_param] = GS["game"+1] = GS.game1.
And of course, we are not talking about millions of rows in the database, 50-70k max. I don't really want to use something like Redis or Tarantool.
You can have a global object to store this info, but there are some things to consider:
If your app is run by more than one machine (instance), this object won't be shared between them.
This leads to some functional downsides, like:
you would need sticky sessions to make sure requests from one particular client are always directed to one particular instance
you cannot check the status of a user whose data is stored in another instance ...
Basically, anything that requires you to access user session data will be hard, if not impossible, to do
If your server goes down, all session data will be lost
A big, deeply nested object is dangerously easy to mess up
If you are confident that you can handle these downsides, or that you will not encounter them in your application, then go ahead. Otherwise, you should consider using a real cache library or framework.
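For the single-instance case, here is a minimal sketch of the get-or-load pattern the question describes (loadUserFromDb is a hypothetical helper standing in for the MySQL query):
var GS = { users: {} };

function getUser(user_id, callback) {
  var key = 'user' + user_id;
  if (GS.users.hasOwnProperty(key)) {
    return callback(null, GS.users[key]); // cache hit, no DB round trip
  }
  // cache miss: hit the database once, then remember the result
  loadUserFromDb(user_id, function (err, row) {
    if (err) return callback(err);
    GS.users[key] = { token: row.token };
    callback(null, GS.users[key]);
  });
}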

Efficient DB design with PouchDB/CouchDB

So I have been reading a lot about how to actually store and fetch data in an efficient way. Basically my application is about time management/capturing for projects. I would be happy to hear any opinions on which strategy I should use, or even suggestions for other strategies. The main concern is the limited local storage available in the different browsers.
This is the main data I have to store:
db_projects: This is a database where the projects itself are stored.
db_timestamps: Here go the timestamps per project whenever a project is running.
I came up with the following strategies:
1: Storing the status of the project in the timestamps
When a project is started, a timestamp is added to db_timestamps like so:
db_timestamps.put({
  _id: String(Date.now()),
  title: projectID,
  status: status // could be: 1=active / 2=inactive / 3=paused
})...
This follows the strategy of only adding data to the DB and never modifying any entries. The problem I see here is that if I want to get, for example, all active projects, I would need to query the whole db_timestamps, which can contain thousands of entries. Since I cannot use the ID to search for all active projects, this could result in quite a heavy DB query.
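One mitigation (my own assumption, not part of the original strategy) would be a persistent map/reduce view, which turns the status scan into an index lookup. Note that this indexes individual timestamps; a project's current status would still be determined by its latest timestamp:
// Design document with a view that indexes timestamps by status.
var designDoc = {
  _id: '_design/by_status',
  views: {
    by_status: {
      map: "function (doc) { emit(doc.status, doc.title); }"
    }
  }
};

db_timestamps.put(designDoc).then(function () {
  // key: 1 selects the timestamps recorded with status 1 (= active)
  return db_timestamps.query('by_status', { key: 1 });
}).then(function (result) {
  console.log('Timestamps with active status:', result.rows);
}).catch(function (err) {
  console.log(err);
});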
2: Storing the status of the project in db_projects
Each time a project changes its status, the project itself is updated. So the "get all active projects" query would be much more resource-friendly, since there are far fewer projects than timestamps. But this would also mean that each time a status change happens, the project entry is revisioned and therefore produces "a lot" of overhead. I'm also not sure whether the compaction feature would do a good job, since not all revision data is deleted (the documents are, but the leaf revisions are not). This means that for a state change we keep at least the _rev information, which is still a string of 34 chars, for changing only the status (1 char). Or can I delete the leaf revisions after conflict resolution?
3: Storing the status in a separate DB like db_status
This leads to the same problem as in #2, since status changes lead to revisions on this DB. Or, if the states were added in "only add data" mode (like in #1), it would just fill up quickly with entries.
The general problem is that you have a limited amount of space that you can put into IndexedDB. On the other hand, the principle of CouchDB is that storage space is cheap (which is indeed true when you store on the server side only). Here is an interesting discussion about that.
So this is the solution that I use for now. I am using a mix between solution 1 and solution 2 from above with the following additions:
Storing only the timestamps in a synced database (db_timestamps) with the "only add data" principle.
Storing the projects and their states in a separate local (not synced) database (db_projects). For this I still use PouchDB, since it has a much simpler API than IndexedDB.
Storing the new/changed project status in each timestamp as well (so db_projects could be rebuilt out of db_timestamps if needed).
Deleting db_projects every so often and repopulating it, so the revision data (the overhead for this DB in my case) is eliminated and the size stays acceptable.
I use the following code to rebuild my DB:
//--------------------------------------------------------------------
function rebuild_db_project() {
  db_project.allDocs({
    include_docs: true,
    //attachments: true
  }).then(function (result) {
    // do stuff
    console.log('I have read the DB and delete it now...');
    deleteDB('db_project', '_pouch_DB_Projekte');
    return result;
  }).then(function (result) {
    console.log('Creating the new DB...' + result);
    db_project = new PouchDB('DB_Projekte');
    var dbContentArray = [];
    for (var row in result.rows) {
      // delete the revision of the doc, else it would raise an error on the bulkDocs() operation
      delete result.rows[row].doc._rev;
      dbContentArray.push(result.rows[row].doc);
    }
    return db_project.bulkDocs(dbContentArray);
  }).then(function (response) {
    console.log('I have successfully populated the DB with: ' + JSON.stringify(response));
  }).catch(function (err) {
    console.log(err);
  });
}
//--------------------------------------------------------------------
function deleteDB(PouchDB_Name, IndexedDB_Name) {
  console.log('DELETE');
  new PouchDB(PouchDB_Name).destroy().then(function () {
    // database destroyed
    console.log("pouchDB destroyed.");
  }).catch(function (err) {
    // error occurred
    console.log(err);
  });
  var DBDeleteRequest = window.indexedDB.deleteDatabase(IndexedDB_Name);
  DBDeleteRequest.onerror = function (event) {
    console.log("Error deleting database.");
  };
  DBDeleteRequest.onsuccess = function (event) {
    console.log("IndexedDB deleted successfully");
    console.log(DBDeleteRequest.result); // should be null
  };
}
So I not only use the pouchDB.destroy() command but also the indexedDB.deleteDatabase() command to get the storage freed almost completely (there are still some 4 kB that are not freed, but this is insignificant to me).
The timings are not really proper, but it works for me. I'd be happy if someone has an idea for making the timing work properly (the problem for me is that indexedDB does not support promises).
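One possible fix for the timing (my own sketch, not from the original post): the IndexedDB request can be wrapped in a Promise, so both deletions chain deterministically even though IndexedDB itself has no promise API:
function deleteDB(PouchDB_Name, IndexedDB_Name) {
  return new PouchDB(PouchDB_Name).destroy().then(function () {
    console.log("pouchDB destroyed.");
    return new Promise(function (resolve, reject) {
      var request = window.indexedDB.deleteDatabase(IndexedDB_Name);
      request.onsuccess = function () {
        console.log("IndexedDB deleted successfully");
        resolve();
      };
      request.onerror = function (event) {
        reject(event.target.error);
      };
      request.onblocked = function () {
        console.log("Deletion blocked; an open connection is still using the DB.");
      };
    });
  });
}
rebuild_db_project() could then return deleteDB(...) from its first .then(), so the rebuild only continues once the deletion has actually finished.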

Adding a filter inside a beforeRemote remote hook

I have a problem I can't find an answer to in Loopback's docs.
Say I have a model Company and a model Employee. There is a 1-to-n relation between a Company and its Employees. When /api/Employees is called, the server returns all the employees.
I only want to return the list of employees who are in the same company as the user requesting the list.
For this, I created a remote hook
Employee.beforeRemote('find', function (context, modelInstance, next) {
  var reject = function () {
    process.nextTick(function () {
      next(null, false);
    });
  };
  // do not allow anonymous users (the accessToken may be absent entirely)
  var userId = context.req.accessToken && context.req.accessToken.userId;
  if (!userId) {
    return reject();
  }
  // get the details of the user who sent the request
  // to learn which company they belong to
  Employee.findById(userId, function (err, user) {
    if (!context.req.query.filter) context.req.query.filter = {};
    context.req.query.filter.where = { brandId: user.companyId };
    console.log(context.req.query);
    next();
  });
});
I thought this should work every time, but apparently it only works when find already has some query filters, like include, even though the console.log prints a correct context.req.query object.
What am I missing? Any help would be greatly appreciated!
context.args.filter seems to work for this purpose.
As a side note, instead of replacing where, you might want to merge it with whatever the client provided. For an implementation idea you can refer to: https://github.com/strongloop/loopback-datasource-juggler/blob/master/lib/utils.js#L56-L122
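A minimal sketch of the hook rewritten against context.args.filter (merging via an and clause is my own simplification, not the mergeQuery helper from the linked utils.js):
Employee.beforeRemote('find', function (context, modelInstance, next) {
  var userId = context.req.accessToken && context.req.accessToken.userId;
  if (!userId) return next(new Error('Not authorized'));

  Employee.findById(userId, function (err, user) {
    if (err) return next(err);
    var filter = context.args.filter || (context.args.filter = {});
    var companyWhere = { brandId: user.companyId };
    // merge with any client-supplied where instead of overwriting it
    filter.where = filter.where ? { and: [filter.where, companyWhere] } : companyWhere;
    next();
  });
});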

How to synchronise multiple RESTful requests when using NodeJS and saving to MongoDB?

I have been trying to implement a RESTful API with NodeJS, and I use Mongoose (MongoDB) as the database backend.
The following example code registers multiple users with the same username when requests are sent at the same time, which is not what I want, even though I tried to add a check!
I know this happens because of the asynchronous nature of NodeJS, but I could not find a way to do this properly. It looks like the "findOne" method returns immediately, causing registerUser to return, and then another request gets processed.
By the way, I don't want to check for existing users with a separate API function; I need to check at the registration stage. Is there any way to do this?
Controller.prototype.registerUser = function (req, res) {
  Users.findOne({ 'user_name': req.body.user_name }, function (err, user) {
    if (!user) {
      new User({ user_name: req.body.user_name }).save(function (err) {
        if (!err) {
          res.send("User saved");
        } else {
          res.send("DB Error: Could not save user!");
        }
      });
    } else {
      res.send("User exists");
    }
  });
};
You should consider setting user_name to be unique in the schema, as sketched below. That would ensure the user_name stays unique even if simultaneous requests try to set an identical user name.
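A minimal sketch of that approach; the unique index lets the database arbitrate the race, and the save callback only needs to recognize MongoDB's duplicate-key error (code 11000):
var mongoose = require('mongoose');

var userSchema = new mongoose.Schema({
  user_name: { type: String, required: true, unique: true } // backed by a unique index
});
var User = mongoose.model('User', userSchema);

Controller.prototype.registerUser = function (req, res) {
  new User({ user_name: req.body.user_name }).save(function (err) {
    if (!err) return res.send("User saved");
    // 11000 is MongoDB's duplicate-key error code
    if (err.code === 11000) return res.send("User exists");
    res.send("DB Error: Could not save user!");
  });
};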
Yes, the reason this is happening is, as you suspected, that multiple requests can execute the code simultaneously, and therefore User.findOne can find no existing user multiple times. Incidentally, this can happen with other stacks as well, even ones that use one thread per request.
To solve this, you need a way to ensure that just one user is being worked on at a time. You can accomplish this by adding all registerUser requests to a queue, pulling them off one by one, and calling res.send only after each has been processed from the queue.
Alternatively, maybe you can keep a local array of user names, and each time a new request comes in, check the array to see whether the name is already there. If it isn't, add it to the array and work on it. If it is in the array, send the response "User exists". Then, once the user has been successfully created, you can remove it from that array. (I haven't thought this one through 100%, but I think it should work as well; see the sketch below.)
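A minimal sketch of that second idea (single process only; the pending set lives in memory, so it does not help across multiple instances):
var pendingNames = {}; // user names currently being registered

Controller.prototype.registerUser = function (req, res) {
  var name = req.body.user_name;
  if (pendingNames[name]) return res.send("User exists"); // registration already in flight
  pendingNames[name] = true;
  Users.findOne({ 'user_name': name }, function (err, user) {
    if (err || user) {
      delete pendingNames[name];
      return res.send(err ? "DB Error: Could not save user!" : "User exists");
    }
    new User({ user_name: name }).save(function (err) {
      delete pendingNames[name]; // release the name whatever the outcome
      res.send(err ? "DB Error: Could not save user!" : "User saved");
    });
  });
};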

mongodb getting user info for every document

I'm trying to display a forum/category. I need to get the latest posts. The problem is that I also need data on the user for each post as well as the latest reply.
db.post.find({
  "inForum": forumID,
}, {
  'sort': [['date', -1]]
}, function (err, cursor) {
  cursor.count(function (err, count) {
    cursor.skip(skip).limit(20).toArray(function (err, posts) {
      // send the response once complete() has been called for every post
      var complete = _.after(posts.length, function () {
        res.send(posts);
      });
      // for every post get its author info and the latest post info
      posts.forEach(function (post) {
        var users = _.pluck(posts, 'user');
        user.load(users, function (profiles) {
          _.each(posts, function (post, k) {
            if (profiles[post.user]) post.fieldAvatar = profiles[post.user].fieldAvatar;
          });
          if (post.latestReply) {
            post.load(post.latestReply.id, function (latestReply) {
              if (latestReply) post.latestReply = latestReply;
              complete();
            });
          } else {
            complete();
          }
        });
      });
    });
  });
});
This is what I'm doing and it seems really slow / really inelegant to me. Am I doing this correctly and is there any advice for speeding this up?
Thanks.
The best thing to do here is to embed some of the author's information (username & email, or avatar) into the posts themselves, so that you don't make multiple queries to the database; one should suffice. Sure, you end up with some duplicate data, but the read performance is optimal.
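For illustration, a hypothetical shape of such a denormalized post document (the userInfo and latestReply field names are mine; only fieldAvatar comes from the question):
// Author info is copied into the post at write time,
// so rendering the list needs no second query.
var post = {
  inForum: forumID,
  date: new Date(),
  user: userId,
  userInfo: { username: 'alice', fieldAvatar: 'alice.png' },
  latestReply: { id: replyId, userInfo: { username: 'bob', fieldAvatar: 'bob.png' } }
};
The cost is keeping the copies in sync when a user changes their avatar, which is usually an acceptable trade for read-heavy forum pages.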
If you don't want to / can't do that, you can also modify your second query to find all authors in [array_of_ids_of_the_posts]. That would reduce your [number_of_posts] queries to only one.
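A minimal sketch of that batched lookup, reusing the underscore helpers from the question (the user collection name and _id-based lookup are assumptions about the schema):
var userIds = _.uniq(_.pluck(posts, 'user'));
db.user.find({ _id: { $in: userIds } }).toArray(function (err, profiles) {
  var byId = _.indexBy(profiles, '_id'); // one lookup table instead of N queries
  posts.forEach(function (post) {
    if (byId[post.user]) post.fieldAvatar = byId[post.user].fieldAvatar;
  });
  res.send(posts);
});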
You could use some caching. For example, you could save the users in a dictionary during the loop, so you only have to fetch each one from MongoDB on its first occurrence.
Maybe you could create some kind of thread model where you save basic information about the posts it contains, so you only have to go through the threads.
You could save the result of the function and invalidate it when a new post is added, so you won't go through all posts on every call.
You should not use a document store like a SQL database. Maybe it is better to generate the forum page directly when a post is created/edited and save the whole thing in a document, so you only have to make one read call to Mongo to show it.
